\begin{document}
\title{Two-qubit entangling gates between distant atomic qubits in a lattice}
\author{A.~Cesa and J.~Martin}
\affiliation{Institut de Physique Nucl\'eaire, Atomique et de Spectroscopie, CESAM, Universit\'e de Li\`ege, B\^at.\ B15, B - 4000 Li\`ege, Belgium}
\date{June 6, 2017}
\begin{abstract}
Arrays of qubits encoded in the ground-state manifold of neutral atoms trapped in optical (or magnetic) lattices appear to be a promising platform for the realization of a scalable quantum computer. Two-qubit conditional gates between nearest-neighbor qubits in the array can be implemented by exploiting the Rydberg blockade mechanism, as was shown by D.~Jaksch \emph{et al.}\ [Phys.~Rev.~Lett.~\textbf{85}, 2208 (2000)]. However, the energy shift due to dipole-dipole interactions causing the blockade falls off rapidly with the interatomic distance, and protocols based on direct Rydberg blockade typically fail to operate between atoms separated by more than one lattice site. In this work, we propose an extension of the protocol of Jaksch \emph{et al.}\ for controlled-Z and controlled-NOT gates to the general case where the qubits are not nearest neighbors in the array. Our proposal relies on Rydberg excitation hopping along a chain of ancilla noncoding atoms connecting the qubits on which the gate is to be applied. The dependence of the gate fidelity on the number of ancilla atoms, the blockade strength, and the decay rates of the Rydberg states is investigated. A comparison between our implementation of a distant controlled-NOT gate and one based on a sequence of nearest-neighbor two-qubit gates is also provided.
\end{abstract}
\maketitle
\section{Introduction}
It is now recognized that quantum computing holds the promise of a new technological revolution. For instance, it will enable the efficient solution of complex optimization problems and the efficient simulation of many-body quantum systems, with applications to new phases of matter and even biological systems. A wealth of applications in the fields of artificial intelligence and secure communications is also foreseen. The task of building a quantum computer is, however, a considerable one. Different paradigms have been proposed to build a universal quantum computer~\cite{Lad10}, such as cluster-state~\cite{Rau01,Rau03} or gate-based quantum computers~\cite{Nie00}. The latter are composed of a qubit register on which logic gates are applied. Any unitary operator acting on the register can be approximated with arbitrary accuracy by a sequence of operations from a set of universal quantum gates composed of single-qubit operations and a two-qubit entangling gate~\cite{Nie00}. Although there is always a non-zero probability of error per gate, quantum error correction and fault-tolerant quantum computation open the door to accurate and arbitrarily long quantum computations, provided the error produced by single- and two-qubit gates does not exceed a certain threshold~\cite{Nie00}. High-fidelity quantum gates are thus a major ingredient for scalable quantum computing. Several platforms implementing a universal gate-based quantum computer have been proposed (see e.g.~\cite{Lad10,Neg11} for reviews), which include neutral atoms~\cite{Saf16_1}, photons~\cite{Kok07}, trapped ions~\cite{Sch13,Deb16} and superconducting circuits~\cite{Pla07,Cla08,Dev13}. Cold neutral atoms in optical or magnetic lattices represent a very promising platform due to the long coherence time of the qubits encoded in Zeeman or hyperfine ground states, the possibility to address atoms individually~\cite{Sch04,Lun09,Wei11} and the ability to produce large arrays of qubits~\cite{Neg11,Saf05,Saf10,Saf16_1}.
Moreover, deterministic loading of one atom per lattice site in large arrays can be achieved by relying on the superfluid-Mott insulator transition in a cloud of ultracold atoms~\cite{Pei03,Wei11}.
Recently, high-fidelity single-qubit gates using microwave fields have been reported in a two-dimensional (2D) array of cesium atoms~\cite{Xia15}. Different schemes implementing two-qubit entangling gates on neutral atoms have been proposed~\cite{Bre99}, one of which relies on the dipole blockade~\cite{Jak00}. By taking advantage of the strong dipole-dipole interactions between atoms in Rydberg states~\cite{Wal08,Gae09,Wil10,Beg13,Bet15}, it is possible to prevent any modification of the target atom's state conditionally on the state of the control atom.
This concept has been demonstrated experimentally with the implementations of two-qubit controlled-NOT (CNOT)~\cite{Zha10,Ise10,Zha12,Mal15} and controlled-phase gates~\cite{Mal15,Mul14}. Note that interactions between atoms in Rydberg states also allow, in principle, the implementation of quantum gates involving more than two qubits~\cite{Bri07,Ise11}. Most of the protocols for the implementation of two-qubit gates proposed so far operate between atoms on adjacent lattice sites. However, in a large array of qubits, it is highly desirable to be able to perform entangling gates between atoms arbitrarily far apart in the lattice. A few proposals addressing this problem have been made~\cite{Wei12,Sod09,Kuz11,Kuz16,Raf12,Raf14}. One idea put forward is to use a spin chain as a quantum bus to perform quantum gates between distant qubits~\cite{Wei12}. It is based on the adiabatic following of the ground state of the spin chain across the paramagnet-to-crystal phase transition. Another proposal is to use moving carrier atoms of a different species while mediating the quantum gate with molecular states~\cite{Sod09,Kuz11,Kuz16}. Alternatively, it has been suggested to transport the state of the control qubit near the target qubit via optical-lattice modulations~\cite{Raf12,Raf14}.
In this work, we propose to use a chain of ancilla noncoding atoms to implement two-qubit entangling gates between atoms arbitrarily far apart in the lattice. The ancilla atoms are used as mediators to connect control and target atoms. Rydberg excitation hopping along the chain of ancilla atoms enables us to modify the state of the target atom conditionally on the state of the control atom via Rydberg blockade. As such, our protocol can be seen as a generalization of the one of Jaksch \emph{et al.}~\cite{Jak00} to the case where the qubits are spatially separated. More specifically, we present protocols that implement either a CNOT gate or a modified controlled-Z (CZ) gate, represented in the computational basis by the unitary matrices~\cite{footnote4}
\begin{equation}
U_{\mathrm{CNOT}}=\begin{pmatrix}1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix},
\;\;\tilde{U}_{\mathrm{CZ}}=\begin{pmatrix}1 & 0 & 0 & 0\\
0 & -1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & -1
\end{pmatrix}.
\label{2qubit_gates}
\end{equation}
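The equivalence between $\tilde{U}_{\mathrm{CZ}}$ and $U_{\mathrm{CNOT}}$ up to single-qubit operations~\cite{footnote4} can be checked numerically. The following NumPy sketch is ours (not part of the original text) and verifies one particular decomposition, $U_{\mathrm{CNOT}}=(\mathbb{1}\otimes H)\,\tilde{U}_{\mathrm{CZ}}\,(Z\otimes Z)\,(\mathbb{1}\otimes H)$, assuming standard conventions for the Hadamard and Pauli gates:

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
Z = np.diag([1.0, -1.0])                       # Pauli Z

U_CNOT = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
U_CZ_mod = np.diag([1.0, -1.0, -1.0, -1.0])    # modified CZ gate

# diag(1,-1,-1,-1) * (Z x Z) = diag(1,1,1,-1) is the standard CZ gate,
# and conjugating CZ by a Hadamard on the target yields CNOT.
decomp = np.kron(I, H) @ U_CZ_mod @ np.kron(Z, Z) @ np.kron(I, H)
```

Since all matrices involved are real, the check reduces to a single `np.allclose(decomp, U_CNOT)` comparison.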
This paper is organized as follows: In Sec.~\ref{System}, the system and the master equation describing its time evolution are presented. The section also contains a review of the process fidelity used to assess the performance of our protocols in the presence of errors. Section~\ref{Protocol} is devoted to the description of the protocol implementing two-qubit entangling gates between atoms arbitrarily far apart in the lattice. In Sec.~\ref{Results}, we present and discuss our results on the effects of dissipation and imperfect blockade on the gate fidelity and compare, in terms of performance, our protocol with implementations using only nearest-neighbor two-qubit gates. Section~\ref{Perspective} discusses some experimental considerations and gives perspectives of our work. A conclusion (Sec.~\ref{Conclusion}) ends this paper.
\section{System and its theoretical description}\label{System}
\subsection{System and Hamiltonian}
The physical system that we consider is a one-dimensional (1D) chain of coding atoms (qubit atoms, labeled q) next to a shifted parallel chain of noncoding atoms [ancilla atoms, labeled A; see Fig.~\ref{fig2}(a)]. This system could be implemented e.g.\ by loading an optical lattice with different atomic species~\cite{Bet15}. The protocol that we present in Sec.~\ref{Protocol} also works for 2D or three-dimensional lattices. Here, we consider a 1D lattice merely for computational convenience. The control (C) and target (T) qubits are connected via a chain of $n_{\mathrm{A}}$ ancilla atoms, as illustrated in Fig.~\ref{fig2}(a).
\begin{figure}
\caption{(a) Qubits are encoded in a 1D chain of atoms (blue). A parallel chain of ancilla noncoding atoms (green) is used to connect the control and target qubits through the nearest-neighbor Rydberg blockade (red arrows). (b) Internal structure of the qubit and ancilla atoms. For clarity, the subscript denoting the atom to which the different states belong is omitted.}
\label{fig2}
\end{figure}
Each atom, either qubit or ancilla, is assumed to be individually addressable by laser pulses. Qubit atoms are modeled by three-level systems in a $\Lambda$ configuration [see Fig.~\ref{fig2}(b)], where the two lower states $|0_\mathrm{q}\rangle$ and $|1_\mathrm{q}\rangle$ encode the qubit and the upper Rydberg state $|r_\mathrm{q}\rangle$ allows for dipole-dipole interaction with either ancilla or qubit atoms. The transitions $|0_\mathrm{q}\rangle \leftrightarrow |r_\mathrm{q}\rangle$ and $|1_\mathrm{q}\rangle\leftrightarrow |r_\mathrm{q}\rangle$ can be resonantly driven by laser pulses with constant Rabi frequencies $\Omega_0$ and $\Omega_1$, respectively. The Hamiltonian for a single qubit atom in the basis $\{|0_\mathrm{q}\rangle,|1_\mathrm{q}\rangle,|r_\mathrm{q}\rangle\}$ thus reads
\begin{equation}
\hat{H}_{\mathrm{q}}(t)=\frac{\hbar}{2}\begin{pmatrix}0 & 0 & \Omega_0(t) \\
0 & 0 & \Omega_1(t) \\
\Omega_0^*(t) & \Omega_1^*(t) & 0
\end{pmatrix}.
\end{equation}
As only square pulses will be considered, $\Omega_k(t)=\Omega_k$ ($k=0,1$) whenever a pulse drives the transition $|k_\mathrm{q}\rangle \leftrightarrow |r_\mathrm{q}\rangle$, and $\Omega_k(t)=0$ otherwise.
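As an illustration, consider a square $\pi$ pulse on the $|0_\mathrm{q}\rangle \leftrightarrow |r_\mathrm{q}\rangle$ transition generated by $\hat{H}_{\mathrm{q}}$ with $\Omega_1=0$. The following minimal numerical sketch (ours, not taken from the paper; units with $\hbar=1$) shows that the population of $|0_\mathrm{q}\rangle$ is fully transferred to $|r_\mathrm{q}\rangle$, up to a phase factor $-i$:

```python
import numpy as np
from scipy.linalg import expm

Omega0 = 2 * np.pi * 1.0   # Rabi frequency (arbitrary units); Omega1 = 0 here
# H_q in the basis {|0>, |1>, |r>}, hbar = 1
H_q = 0.5 * np.array([[0, 0, Omega0],
                      [0, 0, 0],
                      [Omega0, 0, 0]], dtype=complex)

t_pi = np.pi / Omega0              # pi-pulse duration: Omega0 * t_pi = pi
U = expm(-1j * H_q * t_pi)         # time-evolution operator for the square pulse

psi0 = np.array([1, 0, 0], dtype=complex)   # atom starts in |0>
psi = U @ psi0                              # ends in -i |r>
```

The resonant $\pi$ pulse maps $|0_\mathrm{q}\rangle \to -i|r_\mathrm{q}\rangle$, i.e., full population transfer with a $\pi/2$ phase, consistent with the phase bookkeeping used later in the protocol.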
The ancilla atoms are modeled as two-level systems with a lower-energy state $|g\rangle$ and an excited Rydberg state $|e\rangle$ resonantly driven by laser pulses [see Fig.~\ref{fig2}(b)]. In the basis $\{|g\rangle,|e\rangle\}$, the Hamiltonian for a single ancilla atom reads
\begin{equation}
\hat{H}_{\mathrm{A}}(t)=\frac{\hbar}{2}\begin{pmatrix} 0 & \Omega_A(t) \\
\Omega_A^*(t) & 0
\end{pmatrix},
\end{equation}
with Rabi frequency $\Omega_A(t)=\Omega_A$ whenever a pulse drives the transition $|g\rangle \leftrightarrow |e\rangle$ and $\Omega_A(t)=0$ otherwise. As only the control, target and ancilla atoms take part in the gate protocol, other qubit atoms will not be considered in our description of the two-qubit gate.
Any two neighboring atoms, either qubit or ancilla, strongly interact via dipole-dipole interactions as soon as one of them is in a Rydberg state. In the strong interaction regime, this leads to an effective energy shift of the doubly excited state, either $|e_ie_{i+1}\rangle$, $|r_\mathrm{C} r_\mathrm{T}\rangle$, or $|e_ir_\mathrm{q}\rangle$ ($\mathrm{q}=\mathrm{C},\mathrm{T}$) depending on the pair of interacting atoms. When this energy shift is much larger than the atom-laser interaction energy, only one atom can be excited at a time to the Rydberg state and the system is said to exhibit dipole or Rydberg blockade~\cite{Saf10,Wal08,Gae09,Beg13}. Note that this phenomenon can also occur between atoms of different species~\cite{Bet15}. In order to model Rydberg blockade in our system, we add to the system Hamiltonian terms accounting for the energy shifts of the doubly excited states: $U_{rr} |r_{\mathrm{C}} r_{\mathrm{T}}\rangle\langle r_{\mathrm{C}} r_{\mathrm{T}}|$ for the interaction between control and target qubit atoms, $U_{ee} |e_i e_j\rangle\langle e_i e_j|$ with $i \neq j$ for the interaction between ancilla atoms $i$ and $j$, and $U_{re} |r_{\mathrm{q}} e_j\rangle\langle r_\mathrm{q} e_j|$ ($\mathrm{q}=\mathrm{C},\mathrm{T}$) for the interaction between qubit and ancilla atoms. The total Hamiltonian of our system is thus
\begin{equation}
\hat{H}(t)=\hat{H}_0(t)+\hat{V}_{\mathrm{dd}},
\end{equation}
with
\begin{equation}
\hat{H}_0(t)=\hat{H}_{\mathrm{q}}^{\mathrm{C}}(t)+\hat{H}_{\mathrm{q}}^{\mathrm{T}}(t)+\sum_{j=1}^{n_{\mathrm{A}}}\hat{H}_{\mathrm{A}}^{j}(t)
\end{equation}
and
\begin{equation}\label{Vdd}
\begin{aligned}
\hat{V}_{\mathrm{dd}}={}&U_{rr} |r_{\mathrm{C}} r_{\mathrm{T}}\rangle\langle r_{\mathrm{C}} r_{\mathrm{T}}|
+\sum_{\mathrm{q}=\mathrm{C},\mathrm{T}} U_{re} |r_{\mathrm{q}} e_j\rangle\langle r_{\mathrm{q}} e_j|\\
&+\sum_{i\ne j}^{n_{\mathrm{A}}}U_{ee} |e_i e_j\rangle\langle e_i e_j|.
\end{aligned}
\end{equation}
The next-nearest-neighbor energy shifts are also taken into account assuming a resonant dipole-dipole interaction between atoms in Rydberg states~\cite{footnote2}.
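The blockade mechanism encoded in $\hat{V}_{\mathrm{dd}}$ can be illustrated on a two-atom toy model. The sketch below is ours (the function name and parameter values are illustrative): a resonant $\pi$ pulse drives one ancilla atom while its neighbor is held in the Rydberg state, and the energy shift $U$ of the doubly excited state suppresses the residual double-excitation probability below $(\Omega/U)^2$:

```python
import numpy as np
from scipy.linalg import expm

def double_excitation_prob(Omega, U_shift):
    """Population of |ee> after a resonant pi pulse on atom 2 while
    atom 1 is held in its Rydberg state. Basis ordering: |gg>, |ge>, |eg>, |ee>."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    H = 0.5 * Omega * np.kron(np.eye(2), sx)   # laser drives atom 2 only
    H[3, 3] += U_shift                         # V_dd: energy shift of |ee>
    psi0 = np.zeros(4, dtype=complex)
    psi0[2] = 1.0                              # initial state |eg>
    psi = expm(-1j * H * np.pi / Omega) @ psi0
    return np.abs(psi[3]) ** 2                 # leakage into |ee>
```

Without the shift the pulse transfers the population completely; in the strong-blockade regime the transfer is bounded by $\Omega^2/(\Omega^2+U^2)<(\Omega/U)^2$, which anticipates the gate-error scaling discussed in Sec.~\ref{Results}.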
\subsection{Master equation}
Spontaneous deexcitation of the Rydberg states to lower-energy states is one of the major sources of error in the implementation of quantum gates relying on the dipole blockade mechanism. In our system, we consider that the qubit atoms can decay from the Rydberg state $|r\rangle$ to states $|0\rangle$ and $|1\rangle$ with decay rates $\gamma_0$ and $\gamma_1$, respectively, whereas the ancilla atoms can decay from the Rydberg state $|e\rangle$ to the state $|g\rangle$ with decay rate $\gamma_A$.
In order to take this source of dissipation into account, we solve a master equation for
the density operator $\hat{\rho}$ describing the global state of the control, target and $n_{\mathrm{A}}$ ancilla atoms. Its standard form reads
\begin{equation}
\begin{aligned}
\frac{d\hat{\rho}(t)}{dt}= & \frac{1}{i\hbar}\left[\hat{H}(t),\hat{\rho}(t)\right]\\
& +\sum_{\mathrm{q}=\mathrm{C},\mathrm{T}}\sum_{k=0,1}{\left(\hat{L}_k^{\mathrm{q}}\hat{\rho}(t) (\hat{L}_k^{\mathrm{q}})^\dagger -\frac{1}{2} \left\{(\hat{L}_k^{\mathrm{q}})^\dagger \hat{L}_k^{\mathrm{q}}, \hat{\rho}(t)\right\}\right)}\\
& +\sum_{j=1}^{n_{\mathrm{A}}}\left(\hat{L}_{\mathrm{A}}^j\hat{\rho}(t) (\hat{L}_{\mathrm{A}}^j)^\dagger -\frac{1}{2} \left\{(\hat{L}_{\mathrm{A}}^j)^\dagger \hat{L}_{\mathrm{A}}^j,\hat{\rho}(t)\right\}\right),
\end{aligned}
\label{Linblad}
\end{equation}
with the jump operators $\hat{L}_0=\sqrt{\gamma_0}\,|0\rangle\langle r|$, $\hat{L}_1=\sqrt{\gamma_1}\,|1\rangle\langle r|$, and $\hat{L}_{\mathrm{A}}=\sqrt{\gamma_{\mathrm{A}}}\,|g\rangle\langle e|$. For convenience, we introduce the total decay rate for the qubit atoms, $\gamma_\mathrm{q}=\gamma_0+\gamma_1$.
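For small systems, Eq.~\eqref{Linblad} can be integrated directly by propagating the vectorized density matrix with an ordinary ODE solver. The SciPy-based sketch below is ours (not the authors' code) and treats a single undriven ancilla atom with jump operator $\hat{L}_{\mathrm{A}}$, for which the excited-state population decays as $e^{-\gamma_{\mathrm{A}}t}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lindblad_rhs(t, rho_vec, H, jumps):
    """Right-hand side of the Lindblad master equation (hbar = 1)."""
    d = H.shape[0]
    rho = rho_vec.reshape(d, d)
    drho = -1j * (H @ rho - rho @ H)
    for L in jumps:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho.ravel()

# Ancilla atom in the basis {|g>, |e>}; no laser pulse (Omega_A = 0)
gamma_A = 0.2
H_A = np.zeros((2, 2), dtype=complex)
L_A = np.sqrt(gamma_A) * np.array([[0, 1], [0, 0]], dtype=complex)  # |g><e|

rho0 = np.array([[0, 0], [0, 1]], dtype=complex)   # atom prepared in |e>
sol = solve_ivp(lindblad_rhs, (0, 5.0), rho0.ravel(),
                args=(H_A, [L_A]), rtol=1e-9, atol=1e-12)
rho_final = sol.y[:, -1].reshape(2, 2)             # rho_ee(t) = exp(-gamma_A t)
```

The same right-hand side accepts the full Hamiltonian $\hat{H}(t)$ and all jump operators $\hat{L}_k^{\mathrm{q}}$, $\hat{L}_{\mathrm{A}}^j$; the cost grows as $d^2$ in the Hilbert-space dimension, which motivates the Monte Carlo wave-function approach used later for larger chains.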
\subsection{Process fidelity}
In order to assess the performance of our protocols implementing CNOT and CZ gates against sources of error, we compute the process fidelity $F_{\mathrm{pro}}$~\cite{Gil05}.
The process fidelity measures the difference between an ideal and a real quantum process. For an ideal unitary quantum process $\hat{U}$, the process fidelity between $\hat{U}$ and the real process specified by its completely positive map $\mathcal{E}(\cdot)$ takes the simple form~\cite{Nie02,Gil05}
\begin{equation}
F_{\mathrm{pro}}(\mathcal{E},\hat{U})=\frac{1}{d^3}\sum_{j=1}^{d^2}{\mathrm{Tr}\left(\hat{U}\hat{A}_j^\dagger \hat{U}^\dagger \mathcal{E}(\hat{A}_j)\right)},
\label{Fprocess}
\end{equation}
where $\{\hat{A}_j:j=1,\ldots,d^2\}$ is a basis for operators acting on a $d$-dimensional Hilbert space that verifies the orthonormalization condition $\mathrm{Tr}(\hat{A}_i^\dagger \hat{A}_j)= d\,\delta_{ij}$. The process fidelity thus corresponds to the overlap between an operator $\hat{A}_j$ evolved with the ideal process and the same operator evolved with the real process, averaged over all basis operators $\hat{A}_j$. It is related to the average fidelity $F_{\mathrm{av}}$ which quantifies the uniform average over the whole Hilbert space of the overlap between $\hat{U}|\psi\rangle$ and $\mathcal{E}(|\psi\rangle\langle\psi|)$ through~\cite{Nie02,Gil05,Ped07}
\begin{equation}
F_{\mathrm{av}}(\mathcal{E},\hat{U})=\frac{d\,F_{\mathrm{pro}}(\mathcal{E},\hat{U}) +1 }{d+1}.
\label{Faverage}
\end{equation}
The computation of the process fidelity involves the propagation of $d^2$ operators under the process $\mathcal{E}$, which rapidly becomes intractable as $d$ increases. However, lower and upper bounds of the process fidelity can be computed much faster using only two complementary bases of pure states~\cite{Hof05}. Consider a basis of $d$ pure states $\{|\psi_n\rangle:n=1,\ldots,d\}$ and the complementary basis $\{|\phi_k\rangle:k=1,\ldots,d\}$ defined as
\begin{equation}
|\phi_k\rangle=\frac{1}{\sqrt{d}}\sum_{n=1}^d{\exp\left(-i \frac{2 \pi}{d} k n \right)|\psi_n\rangle}.
\end{equation}
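For $d=4$, the complementary basis is simply the discrete Fourier transform of the computational basis. A short numerical check (ours) confirms that the two bases are orthonormal and mutually unbiased, $|\langle\phi_k|\psi_n\rangle|^2=1/d$ for all $k,n$:

```python
import numpy as np

d = 4                                  # two-qubit subspace dimension
psi = np.eye(d, dtype=complex)         # computational basis states |psi_n>

# Complementary basis |phi_k> via the discrete Fourier transform
phi = np.array([[np.exp(-2j * np.pi * k * n / d) for n in range(d)]
                for k in range(d)]) / np.sqrt(d)

# Mutual unbiasedness: |<phi_k|psi_n>|^2 = 1/d for every pair (k, n)
overlaps = np.abs(phi.conj() @ psi) ** 2
```

Mutual unbiasedness is what makes the pair of classical fidelities below informative about the full process fidelity.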
Introducing the classical fidelity of the process in a basis $\{|i\rangle:i=1,\ldots,d\}$ of the $d$-dimensional Hilbert space as
\begin{equation}\label{fidbounds}
F_i(\mathcal{E},\hat{U})=\frac{1}{d}\sum_{i=1}^d{\langle i| \hat{U}^\dagger \mathcal{E}\left(|i\rangle\langle i|\right)\hat{U} |i\rangle},
\end{equation}
the following inequalities hold~\cite{Hof05}
\begin{equation}
F_{\psi_{n}}+F_{\phi_{k}}-1\leq F_{\mathrm{pro}}(\mathcal{E},\hat{U}) \leq \min(F_{\psi_{n}},F_{\phi_{k}}),
\label{HofmannBound}
\end{equation}
where $F_{\psi_{n}}\equiv F_{\psi_{n}}(\mathcal{E},\hat{U})$ and $F_{\phi_{k}}\equiv F_{\phi_{k}}(\mathcal{E},\hat{U})$ are the classical fidelities computed respectively in the bases $\{|\psi_n\rangle\}$ and $\{|\phi_k\rangle\}$.
The expression $F_{\psi_{n}}+F_{\phi_{k}}-1$ is referred to as the Hofmann bound on process fidelity. For these bounds to be computed, it suffices to propagate $2d$ pure states instead of $d^2$ operators.
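The bounds~\eqref{HofmannBound} can be illustrated on a toy example of ours (not from the paper): for a single-qubit depolarizing channel with error probability $p$ and the identity as the ideal gate, the exact process fidelity is $1-3p/4$, which indeed lies between the Hofmann lower bound and $\min(F_{\psi_n},F_{\phi_k})$:

```python
import numpy as np

d = 2
p = 0.1
def E(rho):                        # depolarizing channel, error probability p
    return (1 - p) * rho + p * np.trace(rho) * np.eye(d) / d

U = np.eye(d, dtype=complex)       # ideal process: identity gate

# Process fidelity with the Pauli basis, which satisfies Tr(A_i^dag A_j) = d delta_ij
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
F_pro = sum(np.trace(U @ A.conj().T @ U.conj().T @ E(A)).real
            for A in paulis) / d**3

def classical_fid(basis):
    """Classical fidelity of E in a given basis of pure states."""
    return sum((b.conj() @ U.conj().T @ E(np.outer(b, b.conj())) @ U @ b).real
               for b in basis) / d

z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]   # complementary basis
F_z, F_x = classical_fid(z_basis), classical_fid(x_basis)
```

Here $F_z=F_x=1-p/2$ and $F_{\mathrm{pro}}=1-3p/4$, so the bounds $F_z+F_x-1 \leq F_{\mathrm{pro}} \leq \min(F_z,F_x)$ hold with room to spare.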
As in many other implementations of quantum computing devices, the system of interest contains not only the qubits but also ancilla subsystems or noncoding sublevels needed to implement quantum gates. As these ancilla systems or levels are generally in well-defined states before and after the gate operation, computing the fidelity on the whole Hilbert space may lead to an overly pessimistic error estimate. To avoid this problem, the process fidelity can be computed using only a basis of the relevant qubit subspace~\cite{Ped07}. In the case of our distant-qubit gate protocol, the relevant subspace is spanned by the four states encoding the control and target qubits, with all noncoding ancilla atoms in their ground state. Therefore, the process fidelity will be computed only for this subspace of dimension $d=4$.
\section{Protocol}
\label{Protocol}
Our protocol is a generalization of the one proposed in Ref.~\cite{Jak00} for the implementation of a two-qubit quantum gate to the case where the qubits are spatially separated, as encountered in arrays of qubits encoded in the internal states of neutral atoms trapped in an optical lattice.
The basic idea of our proposal is to use a chain of ancilla atoms to transfer the Rydberg excitation that the control atom may carry, depending on its initial state, near the target atom. This can again be performed by using the Rydberg blockade. More specifically, we consider the case in which control and target qubits are separated by $n_{\mathrm{A}}$ ancilla noncoding atoms (see Fig.~\ref{fig2}). In the following, we assume that the ancilla atoms are initially prepared in their ground state $|g\rangle$ and that we operate in the strong-blockade regime ($U\gg \Omega$).
The generic pulse sequence implementing a given transformation on the target qubit conditionally on the state of the control qubit is illustrated in Fig.~\ref{fig3}.
\begin{figure*}
\caption{
Pulse sequence implementing the CZ and CNOT gates. Note that the pulse sequence on the ancilla atoms is the same regardless of the parity of $n_{\mathrm{A}}$.}
\label{fig3}
\end{figure*}
During the protocol, the transition of the control atom that is to be driven depends both on the length $n_{\mathrm{A}}$ of the chain of ancilla atoms and on the particular two-qubit gate that is to be implemented (in this work either CNOT or modified CZ). The first part of the pulse sequence goes as follows: The first $\pi$ pulse is applied to the control atom and drives only the transition from one of the ground states (either $|0_{\mathrm{C}}\rangle$ or $|1_{\mathrm{C}}\rangle$) to the Rydberg state $|r_{\mathrm{C}} \rangle$. It is followed by a $\pi$ pulse acting on the first ancilla atom $\mathrm{A}_1$. Due to the Rydberg blockade, if the control atom is in $|r_{\mathrm{C}} \rangle$, then $\mathrm{A}_1$ stays in the ground state $|g_1\rangle$, while if the control atom is in the ground-state manifold, then $\mathrm{A}_1$ gets excited to the Rydberg state $|e_1\rangle$. Then, a second $\pi$ pulse is applied on the control atom that brings it back to its initial state.
After these three pulses, the first ancilla atom is in its ground state $|g_1\rangle$ only if the control atom was excited to its Rydberg state $|r_{\mathrm{C}} \rangle$.
Next, two $\pi$ pulses are successively applied to the second and the first ancilla atoms $\mathrm{A}_2$ and $\mathrm{A}_1$. The first one excites $\mathrm{A}_2$ to its Rydberg state $|e_2\rangle$ only if $\mathrm{A}_1$ is in $|g_1\rangle$. The second pulse brings $\mathrm{A}_1$ back to its initial state. Note that if $\mathrm{A}_1$ is initially in $|g_1\rangle$, then the Rydberg blockade due to atom $\mathrm{A}_2$ prevents unwanted excitation of $\mathrm{A}_1$. At this stage of the protocol, the ancilla atom $\mathrm{A}_2$ is in the Rydberg state $|e_2\rangle$ only if the control atom was driven to $|r_{\mathrm{C}} \rangle$ by the very first pulse of the sequence. The same pattern that consists of successive $\pi$-pulses on $\mathrm{A}_{i+1}$ and $\mathrm{A}_{i}$ is applied sequentially to each pair of ancilla atoms, i.e.~for $i=1,\dots,n_\mathrm{A}-1$.
The effect of the pulse sequence above is to produce a hopping of the Rydberg excitation from one atom to the next-nearest-neighbor atom all along the chain separating control and target qubits [see red path in Fig~\ref{fig2}(a)]. More precisely, if the first $\pi$ pulse of the protocol excites the control atom to its Rydberg state $|r_{\mathrm{C}}\rangle$, then the ancilla atoms $\mathrm{A}_{i}$ with even $i$ go through their Rydberg state during the protocol, while those with odd $i$ always stay in the ground state. Conversely, if the control atom is unaffected by the first pulse, i.e.~if it remains in the ground-state manifold, then the ancilla atoms $\mathrm{A}_{i}$ with odd $i$ go through their Rydberg state during the protocol, while those with even $i$ always stay in the ground state.
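In the limit of perfect blockade, this excitation hopping reduces to simple toggle logic: a $\pi$ pulse flips an atom's Rydberg excitation unless a nearest neighbor is already excited. The sketch below is ours and deliberately ignores the target atom and next-nearest-neighbor interactions; it reproduces the parity rule stated above (atom $0$ is the control, atoms $1,\dots,n_{\mathrm{A}}$ the ancillas):

```python
def pulse(state, j):
    """Ideal pi pulse on atom j: toggles its Rydberg excitation unless a
    nearest neighbour in the chain is already excited (perfect blockade)."""
    neighbours = [k for k in (j - 1, j + 1) if 0 <= k < len(state)]
    if not any(state[k] for k in neighbours):
        state[j] = not state[j]

def hop(n_A, control_excited):
    """Forward sweep of the pulse sequence; returns the excitation pattern
    [control, A_1, ..., A_nA] after the sweep."""
    state = [False] * (n_A + 1)
    if control_excited:            # first pulse drives C to |r_C> (or not)
        state[0] = True
    pulse(state, 1)                # pi pulse on A_1, blocked if C is excited
    pulse(state, 0)                # second pulse on C restores its state
    for i in range(1, n_A):        # successive pulse pairs on A_{i+1}, then A_i
        pulse(state, i + 1)
        pulse(state, i)
    return state
```

For example, with $n_{\mathrm{A}}=4$ the forward sweep leaves $\mathrm{A}_4$ excited only in the branch where the control atom was excited by the first pulse, while for $n_{\mathrm{A}}=3$ it is the unexcited-control branch that leaves $\mathrm{A}_3$ excited, in agreement with the parity rule.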
After this first part of the pulse sequence, a suitable transformation $U_{\mathrm{Cond}}$ is applied to the target atom that implements the desired conditional gate. The implementation, which also relies on the dipole blockade mechanism, is shown at the bottom of Fig.~\ref{fig3} for the cases of the CZ and CNOT gates. Note that the transition of the control atom to be driven should be chosen in accordance with the parity of the length $n_\mathrm{A}$ of the chain of ancilla atoms. Finally, in order to bring the ancilla atoms back to their initial state, the same pulses as in the first part of the sequence are applied, but this time in reverse order.
During the execution of our protocol, at most one atom at a time is in a Rydberg state, and the Rydberg excitation thus stays localized on a single atom. The number of $\pi$ pulses required in our protocol to implement a long-distance quantum gate is
\begin{equation}
n_{\mathrm{pulse}}=4n_{\mathrm{A}}+2+n_{\mathrm{T}},
\label{n_pulse}
\end{equation}
where $n_{\mathrm{T}}$ is the number of $\pi$ pulses applied to the target atom.
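In code, Eq.~\eqref{n_pulse} is a one-liner (our helper function; $n_{\mathrm{T}}=3$ corresponds to the direct CNOT implementation with three target pulses described in the text):

```python
def n_pulse(n_A, n_T):
    """Pi-pulse count of the protocol: four pulses per ancilla atom,
    two on the control atom, plus n_T on the target atom."""
    return 4 * n_A + 2 + n_T
```

The pulse count thus grows only linearly with the qubit separation, e.g. `n_pulse(4, 3)` gives 21 pulses for four ancilla atoms and a direct CNOT.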
Each $\pi$ pulse leads either to no phase shift, when the transition is prevented by the dipole blockade mechanism, or to a $\pi/2$ phase shift, when the transition is driven resonantly. All atoms except $\mathrm{A}_{n_{\mathrm{A}}}$ and the target atom are subjected to four $\pi$ pulses, which altogether produce no phase shift. Thus, the accumulated phase of the global state comes only from the pulses on $\mathrm{A}_{n_{\mathrm{A}}}$ and on the target atom. If $\mathrm{A}_{n_{\mathrm{A}}}$ is excited to its Rydberg state during the sequence, then it produces a phase shift of $\pi$.
This generic pulse sequence can be tailored to implement either a modified CZ gate or a CNOT gate [as in Eq.~\eqref{2qubit_gates}]. Let us first consider the case of the modified CZ gate with an even number of ancilla atoms $n_{\mathrm{A}}=2n$. In that case, the control atom is subjected to four pulses driving the transition $|1_{\mathrm{C}}\rangle\leftrightarrow |r_{\mathrm{C}}\rangle$, and the target atom is subjected to a $2\pi$ pulse driving the transition $|1_{\mathrm{T}}\rangle\leftrightarrow |r_{\mathrm{T}}\rangle$. When the control atom is in $|1_{\mathrm{C}}\rangle$, the last ancilla atom $\mathrm{A}_{2n}$ is excited to the Rydberg state $|e_{2n}\rangle$, which produces a $\pi$ phase shift while the dipole blockade prevents the excitation of the target atom. When the control atom is in $|0_{\mathrm{C}}\rangle$, the last ancilla atom stays in the ground state and the pulse on the target atom produces a $\pi$ phase shift only if the target atom is in $|1_{\mathrm{T}}\rangle$. Therefore, the only state acquiring no phase shift is $|0_{\mathrm{C}}0_{\mathrm{T}}\rangle$, and the pulse sequence implements the modified CZ gate~\eqref{2qubit_gates} between two qubits that can be arbitrarily far apart in the lattice.
For an odd number $n_{\mathrm{A}}=2n+1$ of ancilla atoms, the protocol needs to be slightly amended. In that case, when the control atom is in $|1_{\mathrm{C}}\rangle$, it should not be driven to the Rydberg state by the pulses applied on it, so that the last ancilla atom $\mathrm{A}_{2n+1}$ gets excited to the Rydberg state. Therefore, driving the transition $|0_{\mathrm{C}}\rangle\leftrightarrow |r_{\mathrm{C}}\rangle$ leads to the desired modified CZ gate~\eqref{2qubit_gates}.
A potential alternative for implementing the modified CZ gate with an odd number of ancilla atoms consists of applying a $\hat{\sigma}_x$ transformation on the control qubit right before and after the protocol shown in Fig.~\ref{fig3}, where the pulses on the control atom drive the transition $|1_{\mathrm{C}}\rangle\leftrightarrow|r_{\mathrm{C}}\rangle$, as was the case for an even number of ancilla atoms. This operation amounts to swapping the role of the states $|0_{\mathrm{C}}\rangle$ and $|1_{\mathrm{C}}\rangle$, which eventually leads to the desired situation where $\mathrm{A}_{2n+1}$ is in the Rydberg state $|e_{2n+1}\rangle$ if the control atom is initially in $|1_{\mathrm{C}}\rangle$ and $\mathrm{A}_{2n+1}$ is in $|g_{2n+1}\rangle$ if the control atom is initially in $|0_{\mathrm{C}}\rangle$. An advantage of this alternative protocol is that, independent of the number of ancilla atoms, only the $|1_{\mathrm{C}}\rangle\leftrightarrow|r_{\mathrm{C}}\rangle$ transition of the control atom has to be driven. This simplifies the experimental implementation, at the cost of performing two additional single-qubit gates on the control qubit.
As explained previously, the modified CZ gate can be turned into a CNOT gate using only single-qubit operations~\cite{footnote4}. Nevertheless, it might be useful to directly perform a CNOT gate, which consists of swapping the internal state of the target qubit when the control atom is in $|1_{\mathrm{C}}\rangle$. This can be achieved by applying three $\pi$ pulses on the target atom driving successively the transitions $|0_{\mathrm{T}}\rangle\leftrightarrow |r_{\mathrm{T}}\rangle$, $|1_{\mathrm{T}}\rangle\leftrightarrow |r_{\mathrm{T}}\rangle$, and $|0_{\mathrm{T}}\rangle\leftrightarrow |r_{\mathrm{T}}\rangle$. In the absence of Rydberg excitation near the target atom that would induce Rydberg blockade, this sequence of three pulses swaps the coding state of the target atom. Note that regardless of the state of the target atom, only two pulses out of three effectively affect the target atom. Therefore, this operation on the target atom leads to a $\pi$ phase shift. In order to ensure that the swap operation is performed only if the control atom is in $|1_{\mathrm{C}}\rangle$, the transition $|0_{\mathrm{C}}\rangle\leftrightarrow|r_{\mathrm{C}}\rangle$ ($|1_{\mathrm{C}}\rangle\leftrightarrow|r_{\mathrm{C}}\rangle$) must be driven on the control atom if the number of ancilla atoms is even (odd). The resulting protocol implements a CNOT gate up to a global phase factor of $-1$.
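The three-pulse swap on the target atom can be checked numerically. In this sketch of ours (basis $\{|0_\mathrm{T}\rangle,|1_\mathrm{T}\rangle,|r_\mathrm{T}\rangle\}$, $\hbar=1$), the unblocked sequence exchanges the coding states with an overall factor $-1$, since only two of the three pulses act on the atom, each contributing a factor $-i$:

```python
import numpy as np
from scipy.linalg import expm

def pi_pulse(k, Omega=1.0):
    """Resonant pi pulse on the |k_T> <-> |r_T> transition (k = 0 or 1),
    in the basis {|0_T>, |1_T>, |r_T>}."""
    H = np.zeros((3, 3), dtype=complex)
    H[k, 2] = H[2, k] = 0.5 * Omega
    return expm(-1j * H * np.pi / Omega)

# Pulses applied in the order 0<->r, 1<->r, 0<->r (rightmost acts first)
U_swap = pi_pulse(0) @ pi_pulse(1) @ pi_pulse(0)
```

One finds $U_{\mathrm{swap}}|0_\mathrm{T}\rangle=-|1_\mathrm{T}\rangle$ and $U_{\mathrm{swap}}|1_\mathrm{T}\rangle=-|0_\mathrm{T}\rangle$, i.e., the coding states are swapped up to the global phase factor $-1$ mentioned in the text.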
\section{Results and discussion}
\label{Results}
In this section, we discuss the results of our simulations based on the numerical solution of the master equation~\eqref{Linblad} for the pulse sequences presented above. For small numbers of ancilla atoms ($n_{\mathrm{A}}\leqslant 5$) the master equation is directly solved for $\hat{\rho}(t)$, while for larger numbers of ancilla atoms ($5<n_{\mathrm{A}}\leqslant 9$) it is solved using a Monte Carlo wave-function approach~\cite{Dal92,Dum92,Mol93,Ple98,Joh12}. In the former case, the exact process fidelity \eqref{Fprocess} is computed, while in the latter case only the lower and upper bounds given in Eq.~\eqref{HofmannBound} are evaluated. For an odd number of ancilla atoms, we performed the simulations for the alternative protocol relying on a swap of the internal states of the control qubit. In our simulations, we consider that all transitions are driven with identical Rabi frequencies, i.e.,~$\Omega_0=\Omega_1=\Omega_A=\Omega>0$, which sets a natural frequency unit. We consider square pulses without any delay time between two consecutive pulses. We also take identical energy shifts of the doubly excited states between nearest-neighbor atoms, i.e.,~$U_{rr}=U_{re}=U_{ee}=U$ with $U\gg \Omega$. In all recent experiments demonstrating two-qubit gates based on Rydberg blockade~\cite{Zha10,Ise10,Zha12,Mal15,Mul14}, the Rabi frequency is of the order of $1$~MHz. Moreover, Rydberg states with principal quantum number $n\approx 80$ have a lifetime of the order of 1~ms at cryogenic temperature~\cite{Bet09}. Accordingly, we choose the decay rates $\gamma_\mathrm{q}=\gamma_0+\gamma_1$ and $\gamma_\mathrm{A}$ to vary with $\gamma_i/\Omega$ ranging from $0$ to $0.01$ ($i=\mathrm{A},\mathrm{q}$).
\subsection{Gate fidelity with respect to the dipole blockade shift}
In this section, we discuss the effects of imperfect blockade on the performance of our protocols. The process fidelity of both gates was numerically computed for values of the dipole blockade shift $U/\Omega$ ranging from $1$ to $200$ and for up to five ancilla atoms in the absence of dissipation ($\gamma_i=0$, $i=0,1,\mathrm{A}$). In this situation, the gate error originates solely from imperfect blockade. In the strong-blockade regime ($U\gg\Omega$), the probability of double excitation to the Rydberg states scales at leading order as $P_{2}\propto\Omega^2/U^2$~\cite{Saf05,Wal08}. In this regime, we expect the gate error also to be proportional to $P_{2}$, which is indeed confirmed by our numerical simulations (data not presented). We observe that the ratio of the gate error $1-F_{\mathrm{pro}}$ to the probability of double excitation $P_{2}$ is constant for $U/\Omega\gtrsim 25$ for both gates, regardless of the number of ancilla atoms.
Thus, in the regime of strong blockade without any dissipation, the process fidelity can be accurately approximated by
\begin{equation}
F_{\mathrm{pro}}^{\gamma_i=0}\left(\frac{U}{\Omega}\right)=1-\alpha \left(\frac{U}{\Omega}\right)^{-2},
\label{F_U}
\end{equation}
with $0.1\lesssim\alpha\lesssim 2$ a constant whose value depends only on $n_{\mathrm{A}}$ and $U_{\mathrm{Cond}}$. More precisely, for a given gate, $\alpha$ depends only on the parity of the number $n_{\mathrm{A}}$ of ancilla atoms. For the CZ gate, $\alpha\approx 0.5$ for $n_{\mathrm{A}}=0$ and odd $n_{\mathrm{A}}$, while $\alpha\approx 1.7$ for even $n_{\mathrm{A}}$. In the case of the CNOT gate, $\alpha\approx 0.4$ for $n_{\mathrm{A}}=0$, $\alpha\approx 0.1$ for odd $n_{\mathrm{A}}$, and $\alpha\approx 2$ for even $n_{\mathrm{A}}$.
We attribute the differences in $\alpha$ to the dependence on the parity of $n_{\mathrm{A}}$ of the way errors arising from double Rydberg excitation propagate along the chain of ancilla atoms.
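The quadratic scaling of the double-excitation error can be illustrated outside the full gate simulation with a minimal two-atom model (a sketch under simplifying assumptions: two atoms driven $g\leftrightarrow r$ with equal Rabi frequencies, a single blockade shift $U$ on $|rr\rangle$, no dissipation). In the symmetric basis $\{|gg\rangle,\,|s\rangle=(|gr\rangle+|rg\rangle)/\sqrt{2},\,|rr\rangle\}$, the pulse dynamics reduce to a three-level Schr\"odinger equation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def double_excitation(U, omega=1.0):
    """Mean |rr> population during a blockaded pi pulse starting from |gg>."""
    g = np.sqrt(2) * omega / 2  # collective coupling in the symmetric basis
    H = np.array([[0, g, 0],
                  [g, 0, g],
                  [0, g, U]], dtype=complex)
    T = np.pi / (np.sqrt(2) * omega)  # blockaded pi pulse |gg> -> |s>
    sol = solve_ivp(lambda t, psi: -1j * (H @ psi), (0, T),
                    np.array([1, 0, 0], dtype=complex),
                    t_eval=np.linspace(0, T, 2001), rtol=1e-10, atol=1e-12)
    return np.mean(np.abs(sol.y[2])**2)

Us = np.array([25.0, 50.0, 100.0, 200.0])
p2 = np.array([double_excitation(U) for U in Us])
slope = np.polyfit(np.log(Us), np.log(p2), 1)[0]
print(slope)  # close to -2, i.e. P2 ~ (Omega/U)^2
```

Averaging the $|rr\rangle$ population over the pulse washes out the fast oscillations at frequency $\sim U$; the fitted log-log slope close to $-2$ reproduces the $P_2\propto\Omega^2/U^2$ scaling invoked above.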
\subsection{Gate fidelity with respect to the dissipation rate}
We now turn to a discussion of the effects of dissipation on the process fidelity. For this purpose, the doubly excited state energy shift is set to $U_{rr}/\Omega=U_{re}/\Omega=U_{ee}/\Omega=200$. At this value of $U/\Omega$ and in the absence of dissipation, the gate error $1-F_{\mathrm{pro}}$ is smaller than $10^{-4}$ for all numbers of ancilla atoms considered below.
\paragraph{Modified CZ gate} The results of our simulations for the process fidelity of the modified CZ gate are displayed in Fig.~\ref{fig11}. Figures~\ref{fig11}(a) and \ref{fig11}(b) show the process fidelity (dots) in the case of no dissipation on the ancilla and qubit atoms, respectively. Figure~\ref{fig11}(c) shows the process fidelity for identical decay rates for qubit and ancilla atoms. The upper and lower bounds for $F_{\mathrm{pro}}$ [see Eq.~\eqref{HofmannBound}] delimit the shaded areas. Our results show that the actual process fidelity is consistently very close to the upper bound.
\begin{figure}
\caption{Process fidelity $F_{\mathrm{pro}}$ of the modified CZ gate as a function of the decay rate and the number of ancilla atoms.}
\label{fig11}
\end{figure}
For $\gamma_{\mathrm{A}}=0$ and $\gamma_0=\gamma_1=\gamma/2>0$ [dissipation only on the qubit atoms, Fig.~\ref{fig11}(a)], the process fidelity no longer depends on the number of ancilla atoms. This is an immediate consequence of our protocol, in which the accumulated time spent by the qubit atoms in their Rydberg state does not depend on $n_{\mathrm{A}}$. There is, however, one exception when there are no ancilla atoms ($n_{\mathrm{A}}=0$) because in this case only two $\pi$ pulses are applied to the control atom instead of four, which reduces errors caused by the decay of the control atom from the Rydberg state to the ground-state manifold.
For $\gamma_{\mathrm{A}}=\gamma$ and $\gamma_0=\gamma_1=0$ [dissipation only on the ancilla atoms, Fig.~\ref{fig11}(b)], the process fidelity decreases with $n_{\mathrm{A}}$.
For identical decay rates on the qubit and ancilla atoms [Fig.~\ref{fig11}(c)], the process fidelity displays the combined features of the two previous cases. At $n_{\mathrm{A}}=0$, it starts from the value obtained for $\gamma_{\mathrm{A}}=0$ and then decreases with $n_{\mathrm{A}}$, as in the case where dissipation acts only on the ancilla atoms [Fig.~\ref{fig11}(b)]. In all cases, the process fidelity decreases with the total decay rate $\gamma$ of the qubit atoms.
In the protocol depicted in Fig.~\ref{fig3}, dissipation occurs either in the interval between two $\pi$ pulses, when an atom, either a qubit or an ancilla, is in its Rydberg state, or during one of the pulses. In the former case, the probability for an atom to stay in its Rydberg state decays as $\exp(-\gamma t)$, with $\gamma$ the decay rate and $t$ the time elapsed since the atom entered its Rydberg state. In the latter case, the probability of unwanted deexcitation from the Rydberg state during a pulse has to be evaluated from the exact solution of the master equation for a decaying two-level atom.
The probability to be in the target state after a $\pi$ pulse can be written in the form $\exp(-\gamma t_{\mathrm{eff}}^\pi)$, where $t_{\mathrm{eff}}^\pi$ is interpreted as the effective time spent by the atom in the decaying Rydberg state during the pulse. If the decay rates are low enough ($\gamma/\Omega\ll 1$), $t_{\mathrm{eff}}^\pi$ is constant up to corrections of order $\gamma/\Omega$, for both exciting and deexciting $\pi$ pulses. By equating $\exp(-\gamma t_{\mathrm{eff}}^\pi)$ with the probability for the atom to be in the target state after the $\pi$ pulse as evaluated from the exact solution of the master equation, we obtain $\Omega t_{\mathrm{eff}}^\pi/\pi=3/8$. Because in our protocol only one atom can be in an excited state at a time, the effects of dissipation on the different atoms simply add up, and the process fidelity is determined by the cumulated time spent by the atoms in the decaying Rydberg states during the execution of the protocols.
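The value $\Omega t_{\mathrm{eff}}^\pi/\pi=3/8$ can be recovered numerically by integrating the Lindblad master equation for a single driven, decaying two-level atom over one $\pi$ pulse. The sketch below does this in units $\Omega=1$ (the small value chosen for $\gamma$ is arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def pi_pulse_survival(gamma, omega=1.0):
    """Population in |r> after a pi pulse on a decaying two-level atom {|g>, |r>}."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |g><r|
    H = 0.5 * omega * sx                             # resonant drive

    def rhs(t, y):
        rho = y.reshape(2, 2)
        lind = sm @ rho @ sm.conj().T - 0.5 * (sm.conj().T @ sm @ rho
                                               + rho @ sm.conj().T @ sm)
        return (-1j * (H @ rho - rho @ H) + gamma * lind).ravel()

    rho0 = np.diag([1.0, 0.0]).astype(complex)       # start in |g>
    sol = solve_ivp(rhs, (0, np.pi / omega), rho0.ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)[1, 1].real     # rho_rr at the end of the pulse

gamma = 1e-3
t_eff = -np.log(pi_pulse_survival(gamma)) / gamma
print(t_eff / np.pi)  # ~ 3/8 = 0.375, up to O(gamma) corrections
```

For $\gamma/\Omega=10^{-3}$ this returns $\Omega t_{\mathrm{eff}}^\pi/\pi\approx 0.375$, consistent with the perturbative result quoted above.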
If the dissipation rates are low enough to ensure that there is at most one decay (quantum jump) during the whole protocol, the process fidelity can be approximated by
\begin{equation}
F_{\mathrm{pro}}\left(\frac{U}{\Omega},\gamma_\mathrm{q},\gamma_{\mathrm{A}}\right)\approx F^{\gamma=0}_{\mathrm{pro}}\left(\frac{U}{\Omega}\right)e^{- \gamma_\mathrm{q}t_\mathrm{q}}e^{- \gamma_{\mathrm{A}} t_\mathrm{A}(n_{\mathrm{A}})}.
\label{Fpro_CZ}
\end{equation}
In Eq.~(\ref{Fpro_CZ}), $F^{\gamma=0}_{\mathrm{pro}}\left(U/\Omega\right)$, given by Eq.~\eqref{F_U}, takes into account the effects of imperfect blockade, $\gamma_\mathrm{q}$ is the total decay rate of the qubit atoms, $t_\mathrm{q}$ is the effective cumulated time spent by the qubit atoms in the Rydberg states averaged over all possible qubit initial states and $t_\mathrm{A}$ is the effective total time spent by the ancilla atoms in the Rydberg states.
Both times $t_\mathrm{q}$ and $t_\mathrm{A}$ can be directly evaluated for the pulse sequence depicted in Fig.~\ref{fig3}. A total of six $\pi$ pulses are applied to the qubit atoms (control and target). Depending on the control atom's initial state, either the four pulses on the control atom or the $2\pi$ pulse on the target atom leads to an excitation to the Rydberg state, but not both, as a consequence of the dipole blockade. Depending on its initial state, the control atom thus either spends the duration of two $\pi$ pulses in the Rydberg state or stays in the ground state.
Thus, by averaging over all possible initial states, we obtain
\begin{equation}
\Omega t_\mathrm{q}=\frac{2\pi+6\,\Omega t_{\mathrm{eff}}^\pi}{2}.
\label{t_Q_CZ}
\end{equation}
As for the ancilla atoms, each is subjected to four $\pi$ pulses, except for the last one, which is subjected to only two $\pi$ pulses. Only half of these pulses bring the ancilla atoms to their Rydberg state, in which they spend in total the duration of four $\pi$ pulses.
This eventually leads to
\begin{equation}
\Omega t_\mathrm{A}(n_{\mathrm{A}})=\frac{4\pi n_{\mathrm{A}}+(4 n_{\mathrm{A}}-2)\Omega t_{\mathrm{eff}}^\pi}{2}.
\label{t_A_CZ}
\end{equation}
Our data for the process fidelity display excellent agreement with Eq.~\eqref{Fpro_CZ}. Fitting our data with Eq.~\eqref{Fpro_CZ} and $t_{\mathrm{eff}}^\pi$ as the only free parameter yields $\Omega t_{\mathrm{eff}}^\pi/\pi\approx 0.40$, in good agreement with our previous estimate of $3/8$. The fits are shown by solid lines in Fig.~\ref{fig11}. The upper and lower bounds on the fidelity~\eqref{HofmannBound} follow a similar behavior with respect to the dissipation rate and the number of ancilla atoms.
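For quick estimates, Eqs.~\eqref{F_U} and \eqref{Fpro_CZ}--\eqref{t_A_CZ} combine into a closed-form expression that can be evaluated directly. The sketch below does so for the modified CZ gate with $n_{\mathrm{A}}\geqslant 1$; the default prefactor $\alpha=0.5$ and the value $\Omega t_{\mathrm{eff}}^\pi/\pi=3/8$ are the numbers quoted in the text:

```python
import numpy as np

OMEGA_T_EFF = 0.375 * np.pi   # Omega * t_eff^pi for one pi pulse (value 3/8 above)

def fidelity_cz(u_over_omega, gamma_q, gamma_a, n_a, alpha=0.5):
    """Closed-form estimate of the modified CZ process fidelity (units Omega = 1).

    alpha is the blockade-error prefactor of Eq. (F_U): about 0.5 for n_a = 0
    or odd n_a and about 1.7 for even n_a, as quoted in the text; n_a >= 1 here.
    """
    f0 = 1.0 - alpha / u_over_omega**2                          # Eq. (F_U)
    t_q = (2 * np.pi + 6 * OMEGA_T_EFF) / 2                     # Eq. (t_Q_CZ)
    t_a = (4 * np.pi * n_a + (4 * n_a - 2) * OMEGA_T_EFF) / 2   # Eq. (t_A_CZ)
    return f0 * np.exp(-gamma_q * t_q - gamma_a * t_a)          # Eq. (Fpro_CZ)

# Fidelity degrades with both the decay rates and the chain length:
print(fidelity_cz(200, 1e-3, 1e-3, n_a=3))
print(fidelity_cz(200, 1e-2, 1e-2, n_a=3))
```

At $\gamma_\mathrm{q}=\gamma_\mathrm{A}=0$ the estimate reduces to the blockade-limited fidelity of Eq.~\eqref{F_U}, as it should.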
\paragraph{CNOT gate} The results of our simulations for the process fidelity of the CNOT gate are displayed in Fig.~\ref{fig12}. As in Fig.~\ref{fig11}, Figs.~\ref{fig12}(a) and \ref{fig12}(b) show the process fidelity (dots) in the case of no dissipation on the ancilla and qubit atoms, respectively. Figure~\ref{fig12}(c) shows the process fidelity for identical decay rates for qubit and ancilla atoms.
\begin{figure}
\caption{Same as in Fig.~\ref{fig11}, but for the CNOT gate.}
\label{fig12}
\end{figure}
Overall, the process fidelity of the CNOT gate behaves, as a function of the decay rate and the number of ancilla atoms, in a way similar to the modified CZ gate. In the absence of dissipation acting on the ancilla atoms [Fig.~\ref{fig12}(a)], the process fidelity does not depend on $n_{\mathrm{A}}$. When the dissipation acts only on the ancilla atoms [Fig.~\ref{fig12}(b)], the fidelity decreases with $n_{\mathrm{A}}$. For identical decay rates for the qubit and ancilla atoms [Fig.~\ref{fig12}(c)], the process fidelity displays the combined features of the two previous cases.
As for the modified CZ gate, dissipation acts only while the atoms evolve freely in their Rydberg states or during the pulses that drive the qubit or ancilla atoms to their Rydberg states; thus, the process fidelity of the CNOT gate is likewise determined by the cumulated time spent by the atoms in the Rydberg states.
A counting argument similar to that for the modified CZ gate can be made, which leads to an approximation of the form of Eq.~\eqref{Fpro_CZ} for the process fidelity with
\begin{equation}
\Omega t_\mathrm{q}=\frac{2\pi+7\, \Omega t_{\mathrm{eff}}^\pi}{2}
\label{t_Q_CNOT}
\end{equation}
and
\begin{equation}
\Omega t_\mathrm{A}(n_{\mathrm{A}})=\frac{4\pi n_{\mathrm{A}} + \pi +(4 n_{\mathrm{A}}-2)\Omega t_{\mathrm{eff}}^\pi}{2}.
\label{t_A_CNOT}
\end{equation}
In Eq.~\eqref{t_A_CNOT}, the last two terms in the numerator account for the facts that the last ancilla atom may spend the duration of five $\pi$ pulses in its Rydberg state instead of four and that only two $\pi$ pulses are applied on it, respectively.
Again, our data for the process fidelity display excellent agreement with Eq.~\eqref{Fpro_CZ}. Fitting our data with Eq.~\eqref{Fpro_CZ} and $t_{\mathrm{eff}}^\pi$ as the only free parameter yields $\Omega t_{\mathrm{eff}}^\pi/\pi\approx 0.39$, close to the value obtained for the CZ gate. The results of this fit are shown in Fig.~\ref{fig12}.
\subsection{Comparison with a sequence of nearest-neighbor CNOT gates}\label{Comparison}
It is interesting to compare the fidelity of our protocol for the distant-qubit CNOT gate with an implementation relying on a sequence of nearest-neighbor two-qubit gates.
In the geometrical configuration illustrated in Fig.~\ref{fig2}, where the two chains of atoms are displaced, performing our distant-qubit protocol with $n_{\mathrm{A}}$ ancilla atoms corresponds to the control and target qubits being separated by $n_{\mathrm{A}}-1$ other qubits. An obvious advantage of our protocol is that only the two qubits involved in the gate are manipulated and thus prone to errors (we recall that the ancilla atoms are non-coding). In contrast, for a sequence of nearest-neighbor CNOT gates, all the qubits between the control and target qubits are subjected to quantum gates and are thus potentially prone to errors.
The number of pulses needed to perform a distant-qubit CNOT gate with $n_{\mathrm{A}}$ ancilla atoms, given in Eq.~\eqref{n_pulse}, is $4n_{\mathrm{A}}+2+n_{\mathrm{T}}$. The same operation can be performed with $4(n_{\mathrm{A}}-1)$ nearest-neighbor CNOT gates~\cite{Sae11,Rah15}, which amounts to applying $20(n_{\mathrm{A}}-1)$ pulses to the register of qubits, as illustrated in Fig.~\ref{fig13} in the case $n_{\mathrm{A}}-1=3$~\cite{footnote3}.
\begin{figure}
\caption{Possible implementation of a CNOT quantum gate between non-adjacent qubits using only nearest-neighbor CNOT gates~\cite{Rah15}.}
\label{fig13}
\end{figure}
Even for next-nearest-neighbor qubits ($n_{\mathrm{A}}=2$), our protocol requires a smaller number of pulses ($13$ pulses instead of $20$), resulting in a higher process fidelity. This is exemplified in Fig.~\ref{fig14}, where we compare the process fidelity~\eqref{Fprocess} of our protocol with the one based on a sequence of nearest-neighbor CNOT gates~\cite{footnote1}. The process fidelity is plotted as a function of the decay rate when the control and target qubits are separated by two or three qubits.
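The pulse-count comparison is elementary arithmetic. In the sketch below, the $n_{\mathrm{T}}=3$ pulses applied to the target atom are inferred from the total of $13$ pulses quoted for $n_{\mathrm{A}}=2$, and the $5$ pulses per nearest-neighbor CNOT follow from the $20(n_{\mathrm{A}}-1)$ count:

```python
def pulses_distant(n_a, n_t=3):
    """Pulse count for the distant-qubit CNOT with n_a ancilla atoms, Eq. (n_pulse)."""
    return 4 * n_a + 2 + n_t

def pulses_nearest_neighbor(n_a):
    """Pulse count for 4*(n_a - 1) nearest-neighbor CNOT gates (5 pulses each)."""
    return 20 * (n_a - 1)

for n_a in (2, 3, 4):
    print(n_a, pulses_distant(n_a), pulses_nearest_neighbor(n_a))
# n_a = 2: 13 vs 20 pulses, and the gap widens with the distance
```

The linear-in-$n_{\mathrm{A}}$ count of our protocol grows five times more slowly than the nearest-neighbor decomposition, which is the origin of the fidelity gain discussed below.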
\begin{figure}
\caption{Comparison of the process fidelity obtained using only nearest-neighbor CNOT gates~\cite{Rah15} with that of our protocol.}
\label{fig14}
\end{figure}
Our protocol always leads to a higher fidelity. A simple estimate of the gain in fidelity can be made in the case of low dissipation rates following a reasoning similar to that in the previous sections.
For strong blockade and identical decay rates on qubit and ancilla atoms ($\gamma_\mathrm{A}=\gamma_\mathrm{q}=\gamma$ and $\gamma_0=\gamma_1$), the ratio of process fidelities is approximately given by
\begin{equation}
\frac{F_{\mathrm{pro}}}{F_{\mathrm{pro}}^\mathrm{nn}}\approx\exp \left( \frac{8n_{\mathrm{A}}(\pi+2\Omega t_{\mathrm{eff}}^\pi) -5(3\pi+5 \Omega t_{\mathrm{eff}}^\pi)}{2}\frac{\gamma}{\Omega}\right),
\label{gain_nn}
\end{equation}
where $F_{\mathrm{pro}}$ and $F_{\mathrm{pro}}^\mathrm{nn}$ are the process fidelities for our protocol, and for the protocol relying on a sequence of nearest-neighbor CNOT gates, respectively. Figure~\ref{fig15} shows the results of numerical simulations for the ratio $F_{\mathrm{pro}}/F_{\mathrm{pro}}^\mathrm{nn}$ (dots) as a function of the decay rate in the case of two and three ancilla atoms. The solid lines represent Eq.~\eqref{gain_nn} with $\Omega t_{\mathrm{eff}}^\pi/\pi\approx 0.379$, which was obtained from a fit.
\begin{figure}
\caption{Gain in the fidelity, Eq.~\eqref{gain_nn}, as a function of the decay rate.}
\label{fig15}
\end{figure}
The ratio is always greater than $1$ and increases with the decay rate $\gamma$ and the distance $n_\mathrm{A}$ between control and target qubits.
\section{Perspective and experimental considerations}
\label{Perspective}
The estimates for the process fidelity of the protocols presented in this work may not account for all possible dissipation and decoherence channels or experimental imperfections. In this regard, it would be interesting to include in our model a non-coding state for both qubit and ancilla atoms in order to account for qubit atom losses due to dissipation. We could also consider an intermediate level between the ground-state manifold and the Rydberg state of the qubit atoms, as the laser excitation to the Rydberg state is usually a two-stage process. A more rigorous description of the dipole-dipole interaction between atoms leading to the Rydberg blockade could also be considered. Such simulations are, however, much more demanding in terms of computational resources.
For simplicity, only square pulses have been considered in this work. From an experimental perspective, it is certainly relevant to investigate the implementation of our protocol using Gaussian or optimized pulses~\cite{The16}, which could further increase the process fidelity. To implement our protocol experimentally, one could use the same species for both qubit and ancilla atoms. In such a configuration, the position of an atom in the different traps or in the lattice determines its role (coding or non-coding) in the protocol. This solution could be implemented with rubidium atoms~\cite{Ise10,Mul14} in dipole traps or using two-dimensional arrays of cesium atoms~\cite{Mal15}. Another possibility is to rely on two different atomic species to implement the qubits and the chain of non-coding ancilla atoms. In this case, suitable Rydberg states need to be identified that allow for a strong dipole blockade between qubit and ancilla atoms as well as between ancilla atoms. A good candidate for the implementation of our protocol is the configuration described in Ref.~\cite{Bet15}, in which two optical lattices, one for rubidium and one for cesium atoms, are used to perform nondemolition state measurements.
\section{Conclusion}
\label{Conclusion}
In this paper, we have considered an array of qubits encoded in the ground-state manifold of trapped neutral atoms, supplemented by an array of non-coding ancilla atoms. We have proposed a protocol for the implementation of two-qubit entangling gates (CZ, CNOT) between any pair of qubits in the array that relies on a Rydberg excitation hopping along a chain of non-coding ancilla atoms in the strong-blockade regime. The hopping of the Rydberg excitation from one atom to the next is produced by an appropriate pulse sequence that ensures that at most one atom of the entire system is in a Rydberg state at any time. We have solved a master equation for up to nine ancilla atoms in order to evaluate the process fidelity characterizing the performance of our protocols in the presence of dissipation. We have found that the process fidelity is determined by the cumulated time spent by the atoms in the decaying Rydberg states during the execution of the protocols. The design of our protocols ensures that this time scales linearly with the number of ancilla atoms. Moreover, we have shown that our protocols for entangling gates between distant qubits lead to better process fidelities than those based on a sequence of nearest-neighbor two-qubit gates, even when the qubits are separated by only a few other atoms. Our protocols could be implemented experimentally for a few ancilla atoms using state-of-the-art trapping and selective laser-addressing techniques.
{\em Acknowledgments:} Computational resources were provided by the Consortium des Equipements de Calcul Intensif (CECI), funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under Grant No. 2.5020.11.
\end{document} |
\begin{document}
\title[Divergence-conforming HDG for the Biot problem]{
A high-order HDG method for Biot's consolidation model}
\author{Guosheng Fu}
\address{Division of Applied Mathematics, Brown University, 182 George St,
Providence RI 02912, USA.}
\email{Guosheng\_Fu@brown.edu}
\keywords{HDG, divergence-conforming, fully discrete, poroelasticity}
\subjclass{65N30, 65N12, 76S05, 76D07}
\begin{abstract}
We propose a novel high-order HDG method for Biot's consolidation model in poroelasticity.
We present optimal error analysis for both the semi-discrete and fully discrete schemes, the latter obtained by combining the spatial discretization with backward differentiation formulas in time.
Numerical tests are provided to demonstrate the performance of the method.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:intro}
Biot's seminal work \cite{Biot41,Biot56,Biot72} laid the foundation of the theory of poroelasticity, which models the
interaction between fluid flow and deformation in a fluid-saturated porous medium.
The model is used in industries such as petroleum and environmental engineering \cite{Zheng03,Strehlow15},
and in medical applications such as the modeling of intestinal oedema \cite{Young14}.
In this paper, we consider the numerical solution of the
following quasi-static Biot consolidation model
\begin{subequations}
\label{eqns}
\begin{alignat}{3}
\label{eq1}
c_s \dot{p}+\alpha\, {\mathrm{div}}(\dot{\bld u}) - {\mathrm{div}}(\kappa{\nabla} p) &= f \qquad && \text{in $\Omega$,}\\
\label{eq2}
- \mathrm{div}\left(2\mu{\nabla}s(\bld u) + \lambda\,\mathrm{div}(\bld u)\bld I\right)
+\alpha{\nabla} p&=
\bld g \qquad && \text{in $\Omega$,}
\end{alignat}
with homogeneous Dirichlet boundary conditions and proper initial data:
\begin{alignat}{3}
\label{eq3}
\bld u &= \bld 0, && \quad\quad\quad p \;= 0\quad\quad\quad&&\text{on $\partial \Omega$,}\\
\label{eq4}
\bld u(0,\bld x) &= \bld u_0(\bld x) , &&\quad\quad\quad p(0,\bld x) \;= p_0(\bld x)\quad\quad\quad
&& \text{in $ \Omega$,}
\end{alignat}
\end{subequations}
where $\Omega\subset \mathbb{R}^d$, $d=2,3$, is a bounded polygonal/polyhedral domain,
$p$ is the pressure and $\bld u$ is the deformation,
$c_s\ge 0$ is the constrained specific storage coefficient which is close to {\it zero}
in many applications,
$\alpha$ is the Biot-Willis constant which is close to {\it one},
$\kappa$ is the permeability tensor,
$\lambda$ and $\mu$ are the Lam\'e constants,
and ${\nabla}s\bld u = {({\nabla}\bld u +
{\nabla}^T\bld u)}/{2}$ is the symmetric gradient operator.
Here we consider homogeneous boundary conditions for simplicity. More general boundary
conditions, cf. \cite{Phillips07a}, can be handled with minor modifications.
There is an extensive literature on the spatial discretization of Biot's consolidation model with finite element methods.
The early work of Murad et al. \cite{MuradLoula92, MuradLoula94,Murad96} studied the stability of schemes using
stable pairs of Stokes finite elements for the displacement and pressure.
The monograph of Lewis and Schrefler \cite{Lewis98}, cf. also the references therein,
discussed finite element discretizations using continuous Galerkin methods for both the displacement and the pressure.
Phillips and Wheeler proposed and analyzed algorithms that combine
mixed methods for the pressure with continuous/discontinuous Galerkin methods for the displacement
\cite{Phillips07a, Phillips07b,Phillips08}.
See also the discontinuous Galerkin methods \cite{Liu09, Chen13,Wheeler14a, Riviere17},
Galerkin least square method \cite{Korsawe05},
the pressure-stabilized methods \cite{Wan02,White08,Berger15,Berger17},
the mixed methods \cite{Ferronato10, Yi14, Lee16,Oyarzua16, Lee17,Lee17b}, and the nonconforming methods \cite{Yi13, Boffi16, Hu17}.
In this paper, we consider the discretization of \eqref{eqns} using a displacement-pressure formulation with a high-order, superconvergent HDG method
for the pressure Poisson operator \cite{Lehrenfeld:10, Oikawa:15}, and a high-order, divergence-conforming
HDG method for the elasticity operator \cite{Lehrenfeld:10,LehrenfeldSchoberl16, FuLehrenfeld18, FuLehrenfeld18a}.
The resulting differential-algebraic equation (DAE) system is solved using backward differentiation formulas (BDF) \cite{HairerWanner10}.
We present optimal a priori error estimates for the resulting semi-discrete and fully discrete schemes. The method is proven to be
free from Poisson locking as $\lambda\rightarrow \infty$, and is also shown numerically to be free from pressure oscillations in the case of
low permeability with small time step size \cite{Phillips09}.
To reach a convergence rate of $k+1$ in an energy norm, the fully discrete scheme has a set of globally coupled degrees of freedom (after static
condensation) consisting of
polynomials of degree $k+1$ for the normal displacement,
polynomials of degree $k$ for the tangential displacement, and
polynomials of degree $k-1$ for the pressure per facet (edge in 2D, face in 3D).
We also discuss an improvement of this base scheme obtained by slightly relaxing the $H(\mathrm{div})$-conformity of the displacement space so that
only unknowns of polynomial degree $k$ are involved in enforcing normal continuity, cf. \cite{Lederer17, FuLehrenfeld18a}.
This modification results in globally coupled degrees of freedom
consisting of (vector-valued) polynomials of degree $k$ for the displacement and of degree $k-1$ for the pressure per facet.
It does not deteriorate the convergence rate, and allows for optimality of the method also in the sense of
superconvergent HDG methods.
The rest of the paper is organized as follows. In Section \ref{sec2:disc}, the semi-discrete scheme is introduced and analyzed.
In Section \ref{sec3:time}, the fully discrete scheme is introduced and analyzed. Numerical results supporting the theory are
presented in Section \ref{sec:numerics}, and conclusions are drawn in Section \ref{sec:conclusion}.
\section{Semi-discrete Scheme}
\label{sec2:disc}
\subsection{Preliminaries}
Let $\mathcal{T}_h=\{T\}$ be a conforming simplicial triangulation of $\Omega$.
Let $\mathcal{F}_h=\{F\}$ be the collection of facets (edges in 2D, faces in 3D) in $\mathcal{T}_h$.
For any element $T \in\mathcal{T}_h$, we denote by $h_T$ its diameter and we denote by $h$ the
maximum diameter over all mesh elements.
We distinguish functions supported only on facets, indicated by a {\it hat} notation,
e.g., $\widehat {\phi}$, $\widehat{\bld \xi}$, from functions also supported on the volume elements.
Pairs of functions supported on volume elements (without the {\it hat} notation)
and on facets only
are used for the HDG discretization and are indicated by underlining, e.g.,
$\underline{{\phi}} = (\phi ,\widehat{\phi})$,
$\underline{{\bld \xi}} = (\bld \xi ,\widehat{\bld \xi})$.
To simplify notation, we denote the compound spaces
\begin{align*}
\underline{{p}}space : = &\;H_0^2(\Omega)\times H_0^1(\mathcal{F}_h), \text{ and }\;
\underline{\boldsymbol{u}}space : = [H_0^2(\Omega)]^d\times [H_0^1(\mathcal{F}_h)]^d.
\end{align*}
We denote the tangential component of a vector
$\bld v$ on a facet $F$ by $(\bld v)^t = \bld v-(\bld v\cdot \bld n)\bld n$, where $\bld n$ is the normal direction on $F$.
Furthermore, for any function $\phi\in H^2_0(\Omega)$, we denote $\underline{{\phi}} :=(\phi, \phi|_{\mathcal{F}_h})\in \underline{{p}}space$,
and for any function $\bld \xi\in [H^2_0(\Omega)]^d$, we denote
$\underline{{\bld \xi}} :=(\bld \xi, (\bld \xi)^t|_{\mathcal{F}_h}) \in \underline{\boldsymbol{u}}space$.
For a domain $D\subset \mathbb{R}^d$,
we denote $(\cdot,\cdot)_D$ as the standard $L^2$-inner product on $D$. Whenever there is no confusion,
we simply denote $(\cdot,\cdot)$ as the inner product on the whole domain $\Omega$.
Finally, to simplify the presentation of our analysis, we assume the permeability tensor $\kappa$ is a constant
scalar throughout the domain $\Omega$.
However, we note that the method is applicable to the more general case of a fully tensorial (possibly piecewise defined) permeability.
\subsection{Finite elements}
We
consider
an HDG method which approximates
the pressure and displacement on the mesh
$\mathcal{T}_h$,
and the pressure and {\it tangential } component of the displacement on
the mesh skeleton $\mathcal{F}_h$:
\begin{subequations}
\label{space}
\begin{align}
\label{space-1}
{W}_{\!h} : =&\; \prod_{T\in\mathcal{T}_h}\mathbb{P}^{k}(T),\\
\widehat{W}_{\!h} := &\;\{\widehat{w} \in \prod_{F\in\mathcal{F}_h}\mathbb{P}^{k-1}(F), \;\;
\widehat{w} = 0 \,\;\;\forall F\subset \partial\Omega\},
\\
\bld{V}_{\!h} : =&\; \{\bld v\in \prod_{T\in\mathcal{T}_h}[\mathbb{P}^{k+1}(T)]^d, \;\;
\jmp{\bld v\cdot\bld n}_F = 0 \,\forall F\in\mathcal{F}_h\}\subset H_0(\mathrm{div},\Omega),\\
\widehat{\bld V}_{\!h} := &\;\{\widehat{\bld v}\in \prod_{F\in\mathcal{F}_h} [\mathbb{P}^{k}(F)]^d, \;\;
\widehat{\bld v}\cdot\bld n = 0 \,\forall F\in\mathcal{F}_h, \;\;
\widehat{\bld v} = 0 \,\;\;\forall F\subset \partial\Omega\},
\end{align}
where $\jmp{\cdot}$ is the usual jump operator,
$\mathbb{P}^m$ the space of polynomials up to degree $m$.
Note that functions in $\widehat{W}_{\!h}$ and $\widehat{\bld V}_{\!h}$ are defined only on the mesh skeleton, and the
normal component of functions in $\widehat{\bld V}_{\!h}$ is {\it zero}.
Here the polynomial degree $k\ge 1$ is a positive integer.
To further simplify notation, we denote the composite spaces
\begin{alignat*}{2}
\underline{{p}}hspace : =&\; {W}_{\!h}\times \widehat{W}_{\!h}, &&\text{ and }
\underline{\boldsymbol{u}}hspace : = \bld{V}_{\!h}\times \widehat{\bld V}_{\!h}.
\end{alignat*}
\end{subequations}
\subsection{The semi-discrete numerical scheme}
First, we introduce the following $L^2$ projections on the facets:
\begin{align*}
\Pi_{\widehat{W}}: L^2(F)\rightarrow \mathbb{P}^{k-1}(F),
\quad \int_F (\Pi_{\widehat{W}} f)\, w \, \mathrm{ds} = \int_{F}f\,w\, \mathrm{ds} \quad \forall w\in \mathbb{P}^{k-1}(F),\\
\Pi_{\widehat{V}}: [L^2(F)]^d\rightarrow [\mathbb{P}^{k}(F)]^d,
\quad \int_F (\Pi_{\widehat{V}} \bld f)\cdot\bld v \, \mathrm{ds} = \int_{F}\bld f\cdot\bld v\, \mathrm{ds} \quad \forall \bld v\in [\mathbb{P}^{k}(F)]^d.
\end{align*}
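On each facet, $\Pi_{\widehat{W}}$ and $\Pi_{\widehat{V}}$ are ordinary $L^2$ projections, i.e., small mass-matrix solves. The following minimal one-dimensional sketch illustrates this on a reference interval with a Legendre basis and Gauss quadrature (the projected function and the degree are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import legendre

k = 3                                  # project onto P^{k-1} on [-1, 1]
x, w = legendre.leggauss(20)           # Gauss-Legendre quadrature nodes/weights
f = np.sin(2.0 * x)                    # arbitrary function to project

# Legendre basis P_0, ..., P_{k-1}; mass matrix and load vector by quadrature
phi = np.stack([legendre.legval(x, np.eye(k)[i]) for i in range(k)])
Mmat = (phi * w) @ phi.T
b = (phi * w) @ f
c = np.linalg.solve(Mmat, b)
Pf = c @ phi                           # nodal values of the L2 projection

# Defining property: the residual f - Pf is L2-orthogonal to the test space
moments = (phi * w) @ (f - Pf)
print(np.max(np.abs(moments)))  # ~ machine precision
```

The vanishing moments are exactly the defining identities of the projections above, tested against each basis function of the facet space.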
Then, for all $\underline{{\phi}}=(\phi,\widehat{\phi}), \underline{\psi}=(\psi,\widehat{\psi})\in \underline{{p}}hspace+\underline{{p}}space$, and
$\underline{{\bld \xi}}=(\bld \xi,\widehat{\bld \xi}), \underline{\bld \eta}=(\bld \eta, \widehat{\bld \eta})\in \underline{\boldsymbol{u}}hspace+\underline{\boldsymbol{u}}space$,
we introduce the bilinear forms for the diffusion and elasticity operators, respectively,
\begin{subequations}
\label{bilinearforms}
\begin{align}
\label{diffusion-1}
a_{h}(\underline{{\phi}}, \underline{\psi}) :=&\;
\sum_{T\in\mathcal{T}_h}\int_T\kappa\,{\nabla} \phi\cdot{\nabla} \psi\,\mathrm{dx}
-\int_{\partial T}\kappa\,{\nabla} \phi\cdot\bld n\, \jmp{\underline{\psi}}\,\mathrm{ds}\\
&\;
-\int_{\partial T}\kappa\,{\nabla} \psi\cdot \bld n\jmp{\underline{{\phi}}}\,\mathrm{ds}+
\int_{\partial T}\kappa\frac{\tau}{h}\Pi_{\widehat{W}}\jmp{\underline{{\phi}}}\, \Pi_{\widehat{W}}\jmp{\underline{\psi}}\,\mathrm{ds},
\nonumber\\
\label{elas-1}
b_{h}(\underline{{\bld \xi}}, \underline{\bld \eta}) :=&\;
\sum_{T\in\mathcal{T}_h}\int_T2\mu\,{\nabla}s(\bld \xi):{\nabla}s(\bld \eta)
+ \lambda\,\mathrm{div}(\bld \xi)\mathrm{div}(\bld \eta)
\,\mathrm{dx}\\
&\; -\int_{\partial T}2\mu\,{\nabla}s(\bld \xi)\bld n\cdot \jmp{\underline{\bld \eta}^t}\,\mathrm{ds}
-\int_{\partial T}2\mu\,{\nabla}s(\bld \eta)\bld n\cdot \jmp{\underline{{\bld \xi}}^t}\,\mathrm{ds}\nonumber\\
&\; + \int_{\partial T}\mu\frac{\tau}{h}\Pi_{\widehat{V}}\jmp{\underline{{\bld \xi}}^t}\cdot \Pi_{\widehat{V}}\jmp{\underline{\bld \eta}^t}\,\mathrm{ds},
\nonumber
\end{align}
\end{subequations}
where
$\jmp{\underline{{\phi}}}= \phi-\widehat{\phi}$ and
$\jmp{\underline{{\bld \xi}}^t}= (\bld \xi)^t-\widehat{\bld \xi}$ denote
the jumps between interior and facet unknowns, and
$\tau = \tau_0 k^2$ with $\tau_0$ a sufficiently large positive constant.
We note that as long as $\phi$ and
$\bld \xi$ are finite element functions in
${W}_{\!h}$ and $\bld{V}_{\!h}$, respectively, we have
\begin{align}
\label{idd1}
\int_{\partial T} \kappa {\nabla} \phi \cdot\bld n \, \jmp{\underline{\psi}} ds
= &\;\int_{\partial T} \kappa {\nabla}\phi \cdot\bld n \, \Pi_{\widehat{W}} \jmp{\underline{\psi}} ds,\\
\label{idd2}
\int_{\partial T} 2 \mu {\nabla}s(\bld \xi) \bld n \cdot \jmp{\underline{\bld \eta}^t} ds
=&\; \int_{\partial T} 2 \mu {\nabla}s(\bld \xi) \bld n \cdot \Pi_{\widehat{V}} \jmp{\underline{\bld \eta}^t} ds
\end{align}
as
$\kappa {\nabla} \phi\cdot \bld n$
is a polynomial of degree $k-1$, and
$2 \mu {\nabla}s(\bld \xi) \bld n$
is a polynomial of degree $k$ on each facet.
The semi-discrete numerical scheme then reads:
Find $\underline{{p}}h = (p_h,\widehat{p}_h)\in \underline{{p}}hspace$ and
$\underline{\boldsymbol{u}}h=(\bld u_h,\widehat{\bld u}_h)\in \underline{\boldsymbol{u}}hspace$ such that
\begin{subequations}
\label{scheme}
\begin{align}
\label{scheme-1}
(c_s \dot{p}_{h}+\alpha\,{\mathrm{div}}(\dot{\bld u}_h), w_h)
+
a_h(\underline{{p}}h, \underline{{w}}_h) =&\; (f, w_h), \quad \forall \underline{{w}}_h=(w_h,\widehat{w}_h) \in \underline{{p}}hspace,\\
\label{scheme-2}
b_h(\underline{\boldsymbol{u}}h, \underline{\boldsymbol{v}}h)
-(p_h,\alpha\,{\mathrm{div}}(\bld v_h))=&\; (\bld g, \bld v_h), \quad \forall \underline{\boldsymbol{v}}h=(\bld v_h,\widehat{\bld v}_h) \in \underline{\boldsymbol{u}}hspace.
\end{align}
\end{subequations}
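The semi-discrete problem \eqref{scheme} is a differential-algebraic system: \eqref{scheme-1} evolves in time while \eqref{scheme-2} is an algebraic constraint, to be advanced with BDF schemes as mentioned in the introduction. To fix ideas, the sketch below assembles one backward Euler (BDF1) step for this generic block structure; the matrices are random symmetric positive definite placeholders standing in for an actual HDG assembly (pressure mass $M$, pressure stiffness $A$, elasticity $B$, divergence coupling $D$):

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, n_u = 8, 12                 # toy numbers of pressure / displacement unknowns

def spd(n):
    """Random symmetric positive definite placeholder matrix."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

M, A, B = spd(n_p), spd(n_p), spd(n_u)   # pressure mass, pressure stiffness, elasticity
D = rng.standard_normal((n_p, n_u))      # divergence coupling, D_ij = (div phi_j, w_i)
c_s, alpha, dt = 0.1, 1.0, 1e-2
f, g = rng.standard_normal(n_p), rng.standard_normal(n_u)
p_old, u_old = np.zeros(n_p), np.zeros(n_u)

# One BDF1 step for:  c_s M p' + alpha D u' + A p = f,   B u - alpha D^T p = g
K = np.block([[c_s * M + dt * A, alpha * D],
              [-alpha * D.T,     B        ]])
rhs = np.concatenate([dt * f + c_s * M @ p_old + alpha * D @ u_old, g])
sol = np.linalg.solve(K, rhs)
p_new, u_new = sol[:n_p], sol[n_p:]
```

Higher-order BDF schemes only change the combination of previous states entering the right-hand side; the block matrix is assembled once per time-step size.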
\subsection{Semi-discrete error estimates}
We write
\[
A\preceq B
\]
to indicate that there exists a constant $C$, independent of the mesh size $h$, the parameters
$c_s,\alpha, \mu,\lambda,\kappa$
and the numerical solution, such that
$A\le CB$.
We denote the following (semi)norms:
\begin{subequations}
\label{norms}
\begin{align}
\label{norm-p}
\|\underline{{w}}\|_{1,h} := &\;
\left(
\sum_{T\in\mathcal{T}_h} \|{\nabla} w\|^2_T
+\frac{1}{h}\|\jmp{\underline{{w}}}\|^2_{\partial T}
\right)^{1/2},\\
\label{norm-u}
\|\underline{\boldsymbol{v}}\|_{\mu,h} := &\;
\left(
\sum_{T\in\mathcal{T}_h} 2\mu\|{\nabla}s \bld v\|^2_T
+\frac{2\mu}{h}\|\Pi_{\widehat{V}}\jmp{\underline{\boldsymbol{v}}^t}\|^2_{\partial T}
\right)^{1/2},\\
\label{norm-energy2}
\|\underline{\boldsymbol{v}}\|_{\mu,*,h} := &\;
\Big(
\|\underline{\boldsymbol{v}}\|_{\mu,h}^2+
\sum_{T\in\mathcal{T}_h} 2\mu h
\|{\nabla}s(\bld v)\bld n\|^2_{\partial T}
\Big)^{1/2},\\
\label{norm-total}
\vertiii{\{\underline{{w}}, \underline{\boldsymbol{v}}\}}_{h} := &\;
\left(c_s\|w\|^2+\|\underline{\boldsymbol{v}}\|_{\mu,h}^2
+\lambda \|{\mathrm{div}}\,\bld v\|^2 \right)^{1/2}.
\end{align}
\end{subequations}
We also denote the $H^{s}$-norm on $\Omega$ as $\|\cdot\|_{s}$, and when
$s=0$, we simply denote $\|\cdot\|$ as the $L^2$-norm on $\Omega$.
Coercivity of the bilinear forms \eqref{bilinearforms} follows directly from
\cite{Oikawa:15,FuLehrenfeld18a}.
\begin{lemma}
\label{lemma:coercivity}
Let the stabilization parameter $\tau_0$ be sufficiently large.
Then, for any function $\underline{{w}}_h\in \underline{{p}}hspace$, there holds
\begin{subequations}
\label{coercivity}
\begin{align}
\label{coercivity-1}
\kappa\|\underline{{w}}_h\|_{1,h}^2 \preceq a_h(\underline{{w}}_h,\underline{{w}}_h),
\end{align}
and for any function $\underline{\boldsymbol{v}}h\in\underline{\boldsymbol{u}}hspace$, there holds
\begin{align}
\label{coercivity-2}
\|\underline{\boldsymbol{v}}h\|_{\mu,h}^2
+\lambda\|{\mathrm{div}}\, \bld v_h\|^2
\preceq b_h(\underline{\boldsymbol{v}}h,\underline{\boldsymbol{v}}h).
\end{align}
\end{subequations}
\end{lemma}
Consistency of the semi-discrete scheme \eqref{scheme} follows directly from integration by parts.
\begin{lemma}
\label{lemma:consistency}
Let $(p, \bld u)\in H_0^2(\Omega)\times \bld H_0^2(\Omega)$ be the solution to the equations \eqref{eqns}.
We have
\begin{align*}
(c_s \dot{p}+\alpha\,{\mathrm{div}}(\dot{\bld u}), w)
+
a_h(\underline{{p}}, \underline{{w}}) =&\; (f, w), \quad \forall \underline{{w}}=(w,\widehat{w}) \in \underline{{p}}hspace+\underline{{p}}space,\\
b_h(\underline{\boldsymbol{u}}, \underline{\boldsymbol{v}})
-(p,\alpha\,{\mathrm{div}}(\bld v))=&\; (\bld g, \bld v), \quad \forall \underline{\boldsymbol{v}}=(\bld v,\widehat{\bld v}) \in \underline{\boldsymbol{u}}hspace
+\underline{\boldsymbol{u}}space.
\end{align*}
\end{lemma}
We use the technique of {\it elliptic projectors} \cite{Wheeler75} to derive optimal convergent error estimates.
Let $\underline{{p}}P = (\Pi p, \widehat{\Pi}p)\in \underline{{p}}hspace$ and
$\underline{\boldsymbol{u}}P = (\Pi \bld u, \widehat{\Pi}\bld u)\in \underline{\boldsymbol{u}}hspace$ be the
projectors defined as follows:
\begin{subequations}
\label{ellip-proj}
\begin{alignat}{3}
\label{ellip-proj-1}
a_h(\underline{{p}}P-\underline{{p}}, \underline{{w}}_h) =&\; 0, &&\quad \forall \underline{{w}}_h=(w_h,\widehat{w}_h) \in \underline{{p}}hspace,\\
\label{ellip-proj-2}
b_h(\underline{\boldsymbol{u}}P-\underline{\boldsymbol{u}}, \underline{\boldsymbol{v}}h)
-(\Pi p-p,\alpha\,{\mathrm{div}}(\bld v_h))=&\;
0, &&\quad \forall \underline{\boldsymbol{v}}h=(\bld v_h,\widehat{\bld v}_h) \in \underline{\boldsymbol{u}}hspace.
\end{alignat}
\end{subequations}
Note that the above coupling is weak since
the pressure projector is purely determined by the first set of equations \eqref{ellip-proj-1}.
The approximation properties of these elliptic projectors follow directly from the
corresponding analysis for elliptic problems \cite{Oikawa:15, FuLehrenfeld18a}.
We shall assume the following full $H^2$-regularity estimate
\begin{align}
\label{dual}
\|\phi\|_{2}
\preceq \|\theta\|
\end{align}
for the dual problem
$ - {\nabla}\cdot(\kappa\,{\nabla} \phi ) =\; \theta$ with homogeneous Dirichlet boundary conditions
for any source term $\theta\in L^2(\Omega)$.
The estimate \eqref{dual} holds on convex domains.
\begin{lemma}
\label{lemma:approx}
Let the stabilization parameter $\tau_0$ be sufficiently large.
Let $\underline{{p}}P\in \underline{{p}}hspace$ and $\underline{\boldsymbol{u}}P\in \underline{\boldsymbol{u}}hspace$ be given by \eqref{ellip-proj}.
Assume the elliptic regularity result \eqref{dual} holds. Then,
the following estimates hold:
\begin{subequations}
\label{approx:est}
\begin{align}
\label{approx:est-1}
\|p-\Pi p\|\preceq&\; h^{k+1}\|p\|_{k+1},\\
\label{approx:est-2}
\|\underline{\boldsymbol{u}}-\underline{\boldsymbol{u}}P\|_{\mu,h}\preceq &\;h^{k+1}\left(\mu^{1/2}\|\bld u\|_{k+2}+
\frac{\alpha}{\lambda^{1/2}}\|p\|_{k+1}\right),\\
\label{approx:est-3}
\|{\mathrm{div}}(\bld u -\Pi\bld u)\|\preceq &\;h^{k+1}\left(\frac{\mu^{1/2}}{\lambda^{1/2}}\|\bld u\|_{k+2}+
\|{\mathrm{div}}\,\bld u\|_{k+1}+
\frac{\alpha}{\lambda}\|p\|_{k+1}\right).
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
The pressure estimate follows from \cite{Oikawa:15}.
The displacement estimates follow from \cite{FuLehrenfeld18a}.
In particular, we introduce $\underline{\boldsymbol{v}}h:=(\Pi_V \bld u, \Pi_{\widehat{V}} \bld u^t)$ where $\Pi_V$ is the classical BDM interpolator,
\cite[Proposition 2.3.2]{BoffiBrezziFortin13}, and estimate the error by first applying the triangle inequality to split
\[
\|\underline{\boldsymbol{u}}-\underline{\boldsymbol{u}}P\|_{\mu,h} \le
\|\underline{\boldsymbol{v}}h-\underline{\boldsymbol{u}}\|_{\mu,h}+ \|\underline{\boldsymbol{u}}P-\underline{\boldsymbol{v}}h\|_{\mu,h}.
\]
Using the coercivity result in Lemma \ref{lemma:coercivity}, we get
\begin{align*}
& \|\underline{\boldsymbol{u}}P-\underline{\boldsymbol{v}}h\|_{\mu,h}^2
+\lambda \|\mathrm{div}(\Pi \bld u-\bld v_h)\|^2 \preceq\; b_h(\underline{\boldsymbol{u}}P-\underline{\boldsymbol{v}}h, \underline{\boldsymbol{u}}P - \underline{\boldsymbol{v}}h)\\
&\hspace{2cm} =\; b_h(\underline{\boldsymbol{u}}-\underline{\boldsymbol{v}}h, \underline{\boldsymbol{u}}P - \underline{\boldsymbol{v}}h) + (\Pi p-p,\alpha\,{\mathrm{div}}(\Pi \bld u-\bld v_h))\\
&\hspace{2cm} \preceq\;
\|\underline{\boldsymbol{u}}-\underline{\boldsymbol{v}}h\|_{\mu,*,h}\|\underline{\boldsymbol{u}}P-\underline{\boldsymbol{v}}h\|_{\mu,h}
+\alpha \|\Pi p-p\| \|{\mathrm{div}}(\Pi \bld u-\bld v_h)\|.
\end{align*}
Hence, applying the triangle inequality,
\begin{align*}
\|\underline{\boldsymbol{u}}-\underline{\boldsymbol{u}}P\|_{\mu,h}\preceq &\;
\|\underline{\boldsymbol{u}}-\underline{\boldsymbol{v}}h\|_{\mu,*,h}+\frac{\alpha}{\lambda^{1/2}} \|p-\Pi p\|,\\
\|{\mathrm{div}}(\bld u-\Pi\bld u)\|\preceq&\;
\frac{1}{\lambda^{1/2}}\|\underline{\boldsymbol{u}}-\underline{\boldsymbol{v}}h\|_{\mu,*,h}
+ \|{\mathrm{div}}(\bld u-\Pi_V\bld u)\|
+\frac{\alpha}{\lambda} \|p-\Pi p\|.
\end{align*}
The estimates \eqref{approx:est-2} and \eqref{approx:est-3} now follow from the standard approximation
properties of the BDM interpolator $\Pi_V$.
\end{proof}
To further simplify notation, we denote
\begin{align}
\label{notation}
\underline{\bld \varepsilon}_{u} = \underline{\boldsymbol{u}}h-\underline{\boldsymbol{u}}P, \quad
\underline{\varepsilon}_{p} = \underline{{p}}h-\underline{{p}}P, \quad{\bld\delta}_{u} = \bld u-\Pi\bld u, \quad{ \delta}_{p} = p-\Pi p.
\end{align}
Combining the numerical scheme \eqref{scheme} with the consistency result in Lemma \ref{lemma:consistency},
adding and subtracting the above elliptic projectors, we arrive at the following error equations:
\begin{subequations}
\label{error-eq}
\begin{align}
\label{error-eq-1}
(c_s \dot{{ \varepsilon}_{p}}+\alpha\,{\mathrm{div}}(\dot{{\bld \varepsilon}_{u}}), w_h)
+
a_h(\underline{\varepsilon}_{p}, \underline{{w}}_h) =&\;
(c_s \dot{{ \delta}_{p}}+\alpha\,{\mathrm{div}}(\dot{{\bld\delta}_{u}}), w_h),\\
\label{error-eq-2}
b_h(\underline{\bld \varepsilon}_{u}, \underline{\boldsymbol{v}}h)
-({ \varepsilon}_{p},\alpha\,{\mathrm{div}}(\bld v_h))=&\; 0,
\end{align}
\end{subequations}
for all $\underline{{w}}_h=(w_h,\widehat{w}_h) \in \underline{{p}}hspace$ and
$\underline{\boldsymbol{v}}h=(\bld v_h,\widehat{\bld v}_h) \in \underline{\boldsymbol{u}}hspace$.
By the inf-sup stability \cite{BoffiBrezziFortin13} of the finite elements pair
${W}_{\!h}\times \bld{V}_{\!h}\subset L^2(\Omega)\times H_0(\mathrm{div},\Omega)$, we have the following
pressure estimate.
\begin{lemma}
\label{lemma:inf-sup}
Let $\overline{{ \varepsilon}_{p}}$ be the average of ${ \varepsilon}_{p}$ on $\Omega$. Then, we have
\[
\alpha \|{ \varepsilon}_{p}-\overline{{ \varepsilon}_{p}}\|\preceq \mu^{1/2}\|\underline{\bld \varepsilon}_{u}\|_{\mu,h}+\lambda \|{\mathrm{div}}\,{\bld \varepsilon}_{u}\|.
\]
\end{lemma}
\begin{proof}
By inf-sup stability \cite{BoffiBrezziFortin13},
there exists a function $\underline{\boldsymbol{w}}_h=(\bld w_h,\widehat{\bld w}_h)\in \underline{\boldsymbol{u}}hspace$ such that
\[
{\mathrm{div}} \,\bld w_h = { \varepsilon}_{p}-\overline{{ \varepsilon}_{p}}, \quad \text{ and }
\|\underline{\boldsymbol{w}}_h\|_{\mu,h}\le \mu^{1/2}\|{ \varepsilon}_{p}-\overline{{ \varepsilon}_{p}}\|.
\]
The estimate in Lemma \ref{lemma:inf-sup}
follows directly by taking $\underline{\boldsymbol{v}}h = \underline{\boldsymbol{w}}_h$ in \eqref{error-eq-2},
using the fact that $({\mathrm{div}} \,\bld w_h,\overline{{ \varepsilon}_{p}}) = 0$,
and
applying the Cauchy-Schwarz inequality.
\end{proof}
Now, we are ready to present our main results on the semi-discrete error estimates.
\begin{theorem}
\label{thm:energy-semi}
Let the stabilization parameter $\tau_0$ be sufficiently large.
Let $(\underline{{p}}h,\underline{\boldsymbol{u}}h)\in \underline{{p}}hspace\times\underline{\boldsymbol{u}}hspace$ be the solution to \eqref{scheme}
with initial data
$ \underline{{p}}h(0)=\underline{{p}}P(0)$ and $ \underline{\boldsymbol{u}}h(0) = \underline{\boldsymbol{u}}P(0)$.
Then, the following estimate holds for all $T>0$:
\begin{alignat}{2}
\label{est-1}
\vertiii{\{\underline{\varepsilon}_{p}(T),\underline{\bld \varepsilon}_{u}(T)\}}_{h}^2
+\int_0^Ta_h(\underline{\varepsilon}_{p},\underline{\varepsilon}_{p})\,\mathrm{dt}
\preceq h^{2k+2}\,\Xi_1,
\end{alignat}
where
\[
\Xi_1 = T\,\int_0^T (c_s+\frac{\alpha^2(\lambda+\mu)}{\lambda^2})
\|\dot{p}\|_{k+1}^2
+ \frac{\mu(\lambda+\mu)}{\lambda}\|\dot{\bld u}\|_{k+2}^2
+ (\lambda+\mu)\|{\mathrm{div}}\,\dot{\bld u}\|_{k+1}^2
\,\mathrm{dt}.
\]
\end{theorem}
\begin{remark}[Robust displacement estimate]
The above estimate for the displacement is robust with respect to the incompressible limit
$c_s\rightarrow 0$ and
$\lambda\rightarrow +\infty$, as long as
the term $\lambda\|{\mathrm{div}}\,\dot{\bld u}\|_{k+1}^2$ is bounded.
It is also robust in the degenerate case as the permeability $\kappa\rightarrow 0$.
\end{remark}
\begin{proof}
We use a standard energy argument.
Taking $\underline{{w}}_h = \underline{\varepsilon}_{p}$ and $\underline{\boldsymbol{v}}h = \dot{\underline{\bld \varepsilon}_{u}}$ in the error equations \eqref{error-eq} and adding,
we get
\begin{align*}
(c_s \dot{{ \varepsilon}_{p}}, { \varepsilon}_{p})
+
b_h(\underline{\bld \varepsilon}_{u}, \dot{\underline{\bld \varepsilon}_{u}})+
a_h(\underline{\varepsilon}_{p}, \underline{\varepsilon}_{p}) =&\;
(c_s \dot{{ \delta}_{p}}+\alpha\,{\mathrm{div}}(\dot{{\bld\delta}_{u}}), { \varepsilon}_{p})\\
=&\;
(c_s \dot{{ \delta}_{p}}, { \varepsilon}_{p})
+ (\alpha\,{\mathrm{div}}(\dot{{\bld\delta}_{u}}), { \varepsilon}_{p}-\overline{{ \varepsilon}_{p}}).
\end{align*}
Applying the Cauchy-Schwarz inequality to the right-hand side and using
the estimate in Lemma \ref{lemma:inf-sup},
we have
\begin{align*}
(c_s \dot{{ \delta}_{p}}, { \varepsilon}_{p})
+ (\alpha\,{\mathrm{div}}(\dot{{\bld\delta}_{u}}), { \varepsilon}_{p}-\overline{{ \varepsilon}_{p}})
\preceq&\;
c_s\|\dot{{ \delta}_{p}}\|\,\|{ \varepsilon}_{p}\|
+
\|{\mathrm{div}}\,\dot{{\bld\delta}_{u}}\|\left(\mu^{1/2}\|{\bld \varepsilon}_{u}\|_{\mu,h}+\lambda\|{\mathrm{div}}\,{\bld \varepsilon}_{u}\|\right)\\
\le \;
\Big(\underbrace{c_s\|\dot{{ \delta}_{p}}\|^2+(\mu+\lambda)\|{\mathrm{div}}\,\dot{{\bld\delta}_{u}}\|^2}_{:=\Theta}&\Big)^{1/2}
\left( c_s\|{ \varepsilon}_{p}\|^2
+
\|{\bld \varepsilon}_{u}\|_{\mu,h}^2+\lambda\|{\mathrm{div}}\,{\bld \varepsilon}_{u}\|^2
\right)^{1/2}.
\end{align*}
Combining this estimate with
the above identity, and invoking the coercivity result \eqref{coercivity-2}, we get
\[
\frac{1}{2} \partial_t\Big( c_s({ \varepsilon}_{p}, { \varepsilon}_{p})
+
b_h(\underline{\bld \varepsilon}_{u}, {\underline{\bld \varepsilon}_{u}})\Big)+
a_h(\underline{\varepsilon}_{p}, \underline{\varepsilon}_{p}) \preceq \Theta^{1/2} \Big( c_s({ \varepsilon}_{p}, { \varepsilon}_{p})
+
b_h(\underline{\bld \varepsilon}_{u}, {\underline{\bld \varepsilon}_{u}})\Big)^{1/2}.
\]
Recalling that $\underline{\varepsilon}_{p}(0) = 0$ and $\underline{\bld \varepsilon}_{u}(0) = 0$, an application of
Gronwall's inequality implies that
\[
c_s({ \varepsilon}_{p}(T), { \varepsilon}_{p}(T))
+
b_h(\underline{\bld \varepsilon}_{u}(T), {\underline{\bld \varepsilon}_{u}}(T))+\int_0^Ta_h(\underline{\varepsilon}_{p},\underline{\varepsilon}_{p})\,\mathrm{dt}
\preceq
T\,\int_0^T\Theta \,\mathrm{dt},
\]
for all $T>0$.
Combining the above estimate with \eqref{coercivity-2} and \eqref{approx:est}, we get the
desired inequality in
Theorem \ref{thm:energy-semi}.
\end{proof}
\begin{corollary}
\label{coro:ener}
Let the assumptions of Theorem \ref{thm:energy-semi} hold.
Then, the following estimate holds for all $T>0$:
\begin{alignat}{2}
\label{est-2}
\vertiii{\{\dot{\underline{\varepsilon}}_{p}(T),\dot{\underline{\bld \varepsilon}}_{u}(T)\}}_{h}^2
\preceq h^{2k+2}\,\Xi_2,
\end{alignat}
where
\[
\Xi_2 = T\,\int_0^T (c_s+\frac{\alpha^2(\lambda+\mu)}{\lambda^2})
\|\ddot{p}\|_{k+1}^2
+ \frac{\mu(\lambda+\mu)}{\lambda}\|\ddot{\bld u}\|_{k+2}^2
+ (\lambda+\mu)\|{\mathrm{div}}\,\ddot{\bld u}\|_{k+1}^2
\,\mathrm{dt}.
\]
\end{corollary}
\begin{proof}
Take one time derivative of the error equations \eqref{error-eq}. Then proceed as in the
proof of Theorem \ref{thm:energy-semi}.
\end{proof}
Now, we give a pressure estimate that is robust with respect to $c_s$,
under the assumption that the permeability $\kappa$ is bounded away from
{\it zero}.
\begin{theorem}
\label{thm:pressure}
Let the assumptions of Theorem \ref{thm:energy-semi} hold.
Then, for all $T>0$, the following estimate holds
\[
\kappa\|{ \varepsilon}_{p}(T)\|\preceq \kappa\|\underline{\varepsilon}_{p}(T)\|_{1,h}\preceq
h^{k+1}((c_s^{1/2}+\frac{\alpha}{{\lambda}^{1/2}})\Xi_2^{1/2}+\Xi_3),
\]
where $\Xi_2$ is given in Corollary \ref{coro:ener}, and
\[
\Xi_3 = (c_s+ \frac{\alpha^2}{\lambda})\|\dot{p}(T)\|_{k+1}
+
\alpha\left(\frac{\mu^{1/2}}{\lambda^{1/2}}\|\dot{\bld u}(T)\|_{k+2}+
\|{\mathrm{div}}\,\dot{\bld u}(T)\|_{k+1}\right).
\]
\end{theorem}
\begin{proof}
Taking $\underline{{w}}_h = \underline{\varepsilon}_{p}$ in \eqref{error-eq-1},
reordering terms,
and applying the Cauchy-Schwarz inequality,
we have
\begin{align*}
a_h(\underline{\varepsilon}_{p},\underline{\varepsilon}_{p}) = &\;
\left(c_s (\dot{{ \delta}_{p}}-\dot{{ \varepsilon}_{p}})+\alpha\,{\mathrm{div}}(\dot{{\bld\delta}_{u}}-\dot{{\bld \varepsilon}_{u}}), { \varepsilon}_{p}\right)\\
\preceq &\; \left(c_s(\|\dot{{ \delta}_{p}}\|+\|\dot{{ \varepsilon}_{p}}\|)
+\alpha(\|{\mathrm{div}}\,\dot{{\bld\delta}_{u}}\|+\|{\mathrm{div}}\,\dot{{\bld \varepsilon}_{u}}\|)
\right)\|{ \varepsilon}_{p}\|.
\end{align*}
Invoking the discrete Poincar\'e inequality
\cite{DiPietroErn10},
$\|w_h\|\preceq \|\underline{{w}}_h\|_{1,h}$ for
all $\underline{{w}}_h\in \underline{{p}}hspace$, and using the coercivity result \eqref{coercivity-1}, we get
\[
\kappa\|\underline{\varepsilon}_{p}\|_{1,h} \preceq \; c_s(\|\dot{{ \delta}_{p}}\|+\|\dot{{ \varepsilon}_{p}}\|)
+\alpha(\|{\mathrm{div}}\,\dot{{\bld\delta}_{u}}\|+\|{\mathrm{div}}\,\dot{{\bld \varepsilon}_{u}}\|).
\]
Combining the above estimate with Lemma \ref{lemma:approx} and Corollary \ref{coro:ener}, we get the
desired inequality in Theorem \ref{thm:pressure}.
\end{proof}
We conclude this section with a remark on (slightly) relaxing the $H(\mathrm{div})$-conformity of the displacement space to reduce global coupling.
\begin{remark}[Relaxed $H(\mathrm{div})$-conformity]
\label{rk:relax}
We note that to reach the convergence rate ${k+1}$ in the ``energy norm'' $\vertiii{\cdot}_h$, we need unknowns of polynomial degree
$k+1$ on the facets. We follow the idea of \cite{Lederer17} to relax the highest-order normal conformity of the displacement space:
\[
\bld{V}_{\!h}^- : =\; \{\bld v\in \prod_{T\in\mathcal{T}_h}[\mathbb{P}^{k+1}(T)]^d, \;\;
\Pi_F^k\jmp{\bld v\cdot\bld n}_F = 0 \,\forall F\in\mathcal{F}_h\}\subset H_0(\mathrm{div},\Omega),
\]
where $\Pi_F^k: L^2(F)\rightarrow \mathbb{P}^k(F)$ is the $L^2$-projection. The resulting semi-discrete scheme still uses the formulation \eqref{scheme}, but with
the space $\underline{\boldsymbol{u}}hspace^-:= \bld{V}_{\!h}^- \times \widehat{\bld V}_h$ for the displacement and $\underline{{p}}hspace$ for the pressure.
The globally coupled degrees of freedom (after static condensation) for this modification consist of
polynomials of degree $k$ for the displacement and polynomials of degree $k-1$ for the pressure per facet, while
those for the original scheme
consist of
polynomials of degree $k+1$ for the normal component of the displacement,
polynomials of degree $k$ for the tangential component of the displacement,
and polynomials of degree $k-1$ for the pressure per facet.
We present numerical results in Section \ref{sec:numerics} to validate the optimality of this modification,
and refer the interested reader to \cite{Lederer17,FuLehrenfeld18a} for the analysis.
\end{remark}
\section{Fully-discrete Scheme}
\label{sec3:time}
For the temporal discretization
of the semi-discrete DAE \eqref{scheme}, we consider the $m$-step
BDF \cite[Chapter V]{HairerWanner10} method with step size $\Delta t >0$:
for $n\ge m$, find $(\underline{{p}}h^n, \underline{\boldsymbol{u}}h^n)\in \underline{{p}}hspace\times \underline{\boldsymbol{u}}hspace$ such that
\begin{subequations}
\label{bdf-time}
\begin{align}
\label{bdf-1}
\sum_{j=0}^m\frac{\delta_j}{\Delta t}(c_s {p}_{h}^{n-j}+\alpha\,{\mathrm{div}}({\bld u}_h^{n-j}), w_h)
+
a_h(\underline{{p}}h^n, \underline{{w}}_h) =&\; (f(t^n), w_h),\\
\label{bdf-2}
b_h(\underline{\boldsymbol{u}}h^n, \underline{\boldsymbol{v}}h)
-(p_h^n,\alpha\,{\mathrm{div}}(\bld v_h))=&\; (\bld g(t^n), \bld v_h),
\end{align}
\end{subequations}
for all $(\underline{{w}}_h,\underline{\boldsymbol{v}}h)\in \underline{{p}}hspace\times \underline{\boldsymbol{u}}hspace$
with given starting values $\{\underline{{p}}h^i, \underline{\boldsymbol{u}}h^i\}_{i=0}^{m-1}$, where $t^n =n \Delta t$.
The method coefficients $\delta_j$ are determined
from the relation
\begin{align}
\label{zeta-f}
\delta(\zeta) = \sum_{j=0}^m \delta_j \zeta^j = \sum_{\ell=1}^m \frac{1}{\ell}(1-\zeta)^\ell.
\end{align}
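As a sanity check, the coefficients $\delta_j$ can be generated directly from \eqref{zeta-f} by expanding the polynomial $\delta(\zeta)$. The following Python sketch (an illustration with hypothetical names, not part of any solver) uses exact rational arithmetic and the binomial expansion of $(1-\zeta)^\ell$:

```python
from fractions import Fraction
from math import comb

def bdf_coefficients(m):
    """Coefficients [delta_0, ..., delta_m] of
    delta(zeta) = sum_{l=1}^m (1/l) * (1 - zeta)^l,
    computed via the binomial expansion of (1 - zeta)^l."""
    delta = [Fraction(0)] * (m + 1)
    for l in range(1, m + 1):
        for j in range(l + 1):
            delta[j] += Fraction((-1) ** j * comb(l, j), l)
    return delta

# m = 1 recovers backward Euler; m = 2 gives delta(zeta) = 3/2 - 2*zeta + zeta^2/2
print(bdf_coefficients(2))  # [Fraction(3, 2), Fraction(-2, 1), Fraction(1, 2)]
```

For $m=2$ this reproduces exactly the backward difference operator $\mathsf{d_t}$ defined in \eqref{bdf2} below.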
The BDF method is known to have order $m$ for $m\le 6$,
and is A-stable for $m=1$ and $m=2$, but not for $m\ge 3$.
Next, we provide error estimates for the
fully discrete scheme \eqref{bdf-time} with $m=2$ using an energy argument.
We remark that the analysis for the cases with $3\le m\le 5$ is similar but more technical as
one needs to use the multiplier technique \cite{NevanlinnaOdeh81, Akrivis15}.
To simplify notation, we denote the backward difference operator
\begin{align}
\label{bdf2}
\mathsf{d_t}\phi^n := \frac{3\phi^{n}-4\phi^{n-1}+\phi^{n-2}}{2\Delta t}.
\end{align}
Let $(\cdot,\cdot)$ be an inner product with associated norm $|\cdot|$. Then,
a straightforward calculation yields
\begin{align}
\label{energy-x}
(\mathsf{d_t}\phi^n, \phi^n) =&\; \frac{1}{4\Delta t}\Big(
|\phi^n|^2+|2\phi^n-\phi^{n-1}|^2
-|\phi^{n-1}|^2-|2\phi^{n-1}-\phi^{n-2}|^2\\
&\;\quad\quad\;\; +|\phi^n-2\phi^{n-1}+\phi^{n-2}|^2\Big).\nonumber
\end{align}
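The identity \eqref{energy-x} is purely algebraic, so it can be checked for scalar sequences with $|\cdot|$ the absolute value; a minimal sketch (function names hypothetical):

```python
def bdf2_dt(phi, n, dt):
    """Backward difference operator d_t applied to the sequence phi at index n."""
    return (3*phi[n] - 4*phi[n-1] + phi[n-2]) / (2*dt)

def bdf2_energy_rhs(phi, n, dt):
    """Right-hand side of the BDF2 energy identity, specialized to scalars."""
    return (phi[n]**2 + (2*phi[n] - phi[n-1])**2
            - phi[n-1]**2 - (2*phi[n-1] - phi[n-2])**2
            + (phi[n] - 2*phi[n-1] + phi[n-2])**2) / (4*dt)

# (d_t phi^n, phi^n) equals the telescoping right-hand side, up to round-off
phi, dt = [0.3, -1.2, 2.5], 0.1
assert abs(bdf2_dt(phi, 2, dt)*phi[2] - bdf2_energy_rhs(phi, 2, dt)) < 1e-9
```

Summing this identity over $n$ telescopes, which is exactly how it is used in the proof of Theorem \ref{thm:energy-full}.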
We continue to use the notation \eqref{notation}.
Denoting the norm
\begin{align}
\label{norm-sup}
\|\phi\|_{L^\infty(H^s)}:= \sup_t \|\phi(t)\|_{H^s(\Omega)},
\end{align}
we have the following result on the consistency of the scheme \eqref{bdf-time}.
\begin{lemma}
\label{lemma:cs}
Let $(\underline{{p}}h^n, \underline{\boldsymbol{u}}h^n)\in \underline{{p}}hspace\times \underline{\boldsymbol{u}}hspace$, $n\ge 2$,
be the solution to equations \eqref{bdf-time} with $m=2$
and starting values $(\underline{{p}}h^0, \underline{\boldsymbol{u}}h^0)$ and $(\underline{{p}}h^1, \underline{\boldsymbol{u}}h^1)$.
Let $p^n:=p(t^n)$ and $\bld u^n :=\bld u(t^n)$ be the exact solution to equations \eqref{eqns}
at time $t^n$. Then, there holds, for $n\ge 2$,
\begin{subequations}
\label{f-error-eq}
\begin{align}
\label{f-error-eq-1}
(c_s \mathsf{d_t}{{ \varepsilon}_{p}^n}+\alpha\,{\mathrm{div}}(\mathsf{d_t}{{\bld \varepsilon}_{u}^n}), w_h)
+
a_h(\underline{\varepsilon}_{p}^n, \underline{{w}}_h) =&\;
\mathcal{E}_h^n(w_h)\\
\label{f-error-eq-2}
b_h(\underline{\bld \varepsilon}_{u}^n, \underline{\boldsymbol{v}}h)
-({ \varepsilon}_{p}^n,\alpha\,{\mathrm{div}}(\bld v_h))=&\; 0,
\end{align}
\end{subequations}
for all $\underline{{w}}_h=(w_h,\widehat{w}_h) \in \underline{{p}}hspace$ and
$\underline{\boldsymbol{v}}h=(\bld v_h,\widehat{\bld v}_h) \in \underline{\boldsymbol{u}}hspace$,
where
\[
\mathcal{E}_h^n(w_h) : =
c_s\left(\mathsf{d_t}{{ \delta}_{p}^n}- \mathsf{d_t}{p^n}+\dot{p}^n,w_h\right)
+
\alpha\left({\mathrm{div}}(\mathsf{d_t}{{\bld\delta}_{u}^n}-\mathsf{d_t}{\bld u}^n+
\dot{\bld u}^n), w_h-\overline{w}_h\right),
\]
and $\overline{w}_h$ is the average of $w_h$ on $\Omega$.
Moreover, there holds
\begin{align}
\label{cst-est}
| \mathcal{E}_h^n(w_h)|\preceq &\;
c_s\,\mathcal{O}_{2,I} \|w_h\|
+\alpha\, \mathcal{O}_{2,I\!I} \|w_h-\overline{w}_h\|,
\end{align}
where, for integer $s\ge 1$,
\begin{align*}
\mathcal{O}_{s,I} :=&\;h^{k+1}
\|\frac{\partial p}{\partial t}\|_{L^\infty(H^{k+1})}
+\Delta t^s \|\frac{\partial^{s+1} p}{\partial t^{s+1}}\|_{L^\infty(L^2)}\\
\mathcal{O}_{s,I\!I} :=&\;h^{k+1}\left(
\frac{\mu^{1/2}}{\lambda^{1/2}}\|\frac{\partial \bld u}{\partial t}\|_{L^\infty(H^{k+2})}+
\|{\mathrm{div}}\,\frac{\partial \bld u}{\partial t}\|_{L^\infty(H^{k+1})}+
\frac{\alpha}{\lambda}\|\frac{\partial p}{\partial t}\|_{L^\infty(H^{k+1})}
\right)\\
&\;+\Delta t^s \|{\mathrm{div}}\, \frac{\partial^{s+1} \bld u}{\partial t^{s+1}}\|_{L^\infty(L^2)}.
\end{align*}
\end{lemma}
\begin{proof}
The error equations \eqref{f-error-eq} follows from the scheme \eqref{bdf-time} and
the consistency result in Lemma \ref{lemma:consistency}.
The estimate \eqref{cst-est} follows from the Cauchy-Schwarz inequality,
the approximation properties in Lemma \ref{lemma:approx} of the elliptic projector,
and Taylor expansion in time.
\end{proof}
Our main result on the fully-discrete error estimates is given below.
\begin{theorem}
\label{thm:energy-full}
Let $(\underline{{p}}h^n, \underline{\boldsymbol{u}}h^n)\in \underline{{p}}hspace\times \underline{\boldsymbol{u}}hspace$, $n\ge 2$,
be the solution to equations \eqref{bdf-time} with $m=2$
and starting values $(\underline{{p}}h^0, \underline{\boldsymbol{u}}h^0)$ and $(\underline{{p}}h^1, \underline{\boldsymbol{u}}h^1)$.
Let $p^n:=p(t^n)$ and $\bld u^n :=\bld u(t^n)$ be the exact solution
to equations \eqref{eqns} at time $t^n$.
Then, there holds, for $N\ge 2$,
\begin{align}\label{full-est}
\vertiii{\{\underline{\varepsilon}_{p}^N,\underline{\bld \varepsilon}_{u}^N\}}_{h}^2
+\Delta t\sum_{n=2}^Na_h(\underline{\varepsilon}_{p}^n,\underline{\varepsilon}_{p}^n)
\preceq
\exp(N\Delta t)&\;\Big(
\sum_{i=0}^1\vertiii{\{\underline{\varepsilon}_{p}^i,\underline{\bld \varepsilon}_{u}^i\}}_{h}^2\\
&\hspace{-0.2cm}+N\Delta t(c_s\mathcal{O}_{2,I}^2+(\lambda+\mu)\mathcal{O}_{2,I\!I}^2)\Big).\nonumber
\end{align}
\end{theorem}
\begin{proof}
Taking $\underline{{w}}_h = \underline{\varepsilon}_{p}^n$ in equation \eqref{f-error-eq-1} and
$\underline{\boldsymbol{v}}h = \mathsf{d_t}\underline{\bld \varepsilon}_{u}^n$ in equation \eqref{f-error-eq-2}, and adding and summing
the resulting expression for $n = 2,\cdots, N$, we get
\begin{align}
\label{haha}
\sum_{n=2}^N (c_s \mathsf{d_t}{{ \varepsilon}_{p}^n}, { \varepsilon}_{p}^n)
+b_h(\underline{\bld \varepsilon}_{u}^n,\mathsf{d_t}{\underline{\bld \varepsilon}_{u}^n}) + a_h(\underline{\varepsilon}_{p}^n,\underline{\varepsilon}_{p}^n)
=&\sum_{n=2}^N \mathcal{E}_h^n(\underline{\varepsilon}_{p}^n)\nonumber\\
&\hspace{-2.5cm}\preceq
\sum_{n=2}^N (c_s\mathcal{O}_{2,I}^2+(\lambda+\mu)\mathcal{O}_{2,I\!I}^2)^{1/2}
\vertiii{\{\underline{\varepsilon}_{p}^n, \underline{\bld \varepsilon}_{u}^n\}}_{h},
\end{align}
where we used a combination of Lemma \ref{lemma:cs} and Lemma \ref{lemma:inf-sup} to derive the
last inequality.
The identity \eqref{energy-x} implies that
\begin{align*}
\sum_{n=2}^N (c_s \mathsf{d_t}{{ \varepsilon}_{p}^n}, { \varepsilon}_{p}^n)
+b_h(\underline{\bld \varepsilon}_{u}^n,\mathsf{d_t}{\underline{\bld \varepsilon}_{u}^n})
\ge &\;
\frac{1}{4\Delta t}\Big( c_s\|{{ \varepsilon}_{p}^N}\|^2
+b_h(\underline{\bld \varepsilon}_{u}^N,{\underline{\bld \varepsilon}_{u}^N})
-
c_s\|{{ \varepsilon}_{p}^1}\|^2\\
&\hspace{-2cm}-c_s\|2{{ \varepsilon}_{p}^1}-{ \varepsilon}_{p}^0\|^2)
-b_h(\underline{\bld \varepsilon}_{u}^1,{\underline{\bld \varepsilon}_{u}^1})
-b_h(2\underline{\bld \varepsilon}_{u}^1-\underline{\bld \varepsilon}_{u}^0,{2\underline{\bld \varepsilon}_{u}^1-\underline{\bld \varepsilon}_{u}^0})
\Big).
\end{align*}
Hence,
\[
\frac{1}{\Delta t}(\vertiii{\{\underline{\varepsilon}_{p}^N, \underline{\bld \varepsilon}_{u}^N\}}_{h}^2
-\sum_{i=0}^1\vertiii{\{\underline{\varepsilon}_{p}^i, \underline{\bld \varepsilon}_{u}^i\}}_{h}^2)
\preceq \sum_{n=2}^N (c_s \mathsf{d_t}{{ \varepsilon}_{p}^n}, { \varepsilon}_{p}^n)
+b_h(\underline{\bld \varepsilon}_{u}^n,\mathsf{d_t}{\underline{\bld \varepsilon}_{u}^n}).
\]
Combining this estimate with \eqref{haha}, we get
\begin{align*}
\vertiii{\{\underline{\varepsilon}_{p}^N, \underline{\bld \varepsilon}_{u}^N\}}_{h}^2
+\Delta t\sum_{n=2}^Na_h(\underline{\varepsilon}_{p}^n,\underline{\varepsilon}_{p}^n)
\preceq &
\Delta t \sum_{n=2}^N (c_s\mathcal{O}_{2,I}^2+(\lambda+\mu)\mathcal{O}_{2,I\!I}^2)^{1/2}
\vertiii{\{\underline{\varepsilon}_{p}^n, \underline{\bld \varepsilon}_{u}^n\}}_{h}\\
&+\sum_{i=0}^1\vertiii{\{\underline{\varepsilon}_{p}^i, \underline{\bld \varepsilon}_{u}^i\}}_{h}^2.
\end{align*}
Finally, the estimate \eqref{full-est} follows from a discrete Gronwall's inequality, c.f.
\cite[Lemma 5.1]{HeywoodRannacher90}.
\end{proof}
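The discrete Gronwall inequality invoked in the last step can be illustrated in its simplest scalar form: if $a_n\ge 0$ and $a_N \le d + \Delta t\sum_{n=0}^{N-1} a_n$, then $a_N \le d\,e^{N\Delta t}$. A minimal numerical sketch of the worst case (hypothetical names; the precise statement we use is \cite[Lemma 5.1]{HeywoodRannacher90}):

```python
from math import exp

def worst_case_sequence(d, dt, N):
    """Sequence saturating a_N = d + dt * sum_{n<N} a_n, i.e. a_n = d*(1+dt)^n."""
    a = []
    for _ in range(N + 1):
        a.append(d + dt * sum(a))
    return a

d, dt, N = 0.7, 0.05, 40
a = worst_case_sequence(d, dt, N)
# the discrete Gronwall bound a_n <= d * exp(n*dt) holds at every index,
# since (1 + dt)^n <= exp(n*dt)
assert all(a[n] <= d * exp(n * dt) + 1e-12 for n in range(N + 1))
```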
\begin{remark}[Higher order BDF method]
\label{rk:2}
For $m$-step BDF methods with $m = 1$ or $3\le m\le 5$,
we can
still use a similar energy argument
to
derive the following estimate
\begin{align*}
\vertiii{\{\underline{\varepsilon}_{p}^N,\underline{\bld \varepsilon}_{u}^N\}}_{h}^2
+\Delta t\sum_{n=2}^Na_h(\underline{\varepsilon}_{p}^n,\underline{\varepsilon}_{p}^n)
\preceq
\exp(N\Delta t)&\;\Big(
\sum_{i=0}^{m-1}\vertiii{\{\underline{\varepsilon}_{p}^i,\underline{\bld \varepsilon}_{u}^i\}}_{h}^2\\
&\hspace{-0.2cm}+N\Delta t(c_s{\mathcal{O}}_{m,I}^2+(\lambda+\mu){\mathcal{O}}_{m,I\!I}^2)\Big).
\end{align*}
In the cases for $3\le m\le 5$, we need to apply
the multiplier technique \cite{NevanlinnaOdeh81}, and take in the energy argument
the test function in the error equation
\eqref{f-error-eq-1} to be $\underline{{w}}_h:= \underline{\varepsilon}_{p}^n-\eta\, \underline{\varepsilon}_{p}^{n-1}$ with the multiplier
$\eta = 0.0836$ for $m=3$,
$\eta = 0.2878$ for $m=4$, and
$\eta = 0.8160$ for $m=5$. More details of the multiplier technique can be found in the recent
publications \cite{LubichMansour13, Akrivis15}.
\end{remark}
\begin{remark}[Starting values for BDF2 and BDF3]
The $m$-step BDF method needs $m$ starting values.
For BDF2, we can simply take
$(\underline{{p}}h^1, \underline{\boldsymbol{u}}h^1)$ to be the backward Euler solution to equations \eqref{bdf-time} with $m=1$,
and $(\underline{{p}}h^0, \underline{\boldsymbol{u}}h^0) = (\underline{{p}}P(0), \underline{\boldsymbol{u}}P(0))$.
This implies
\[
\vertiii{\{{ \varepsilon}_{p}^0,{\bld \varepsilon}_{u}^0\}}_{h}=0, \quad\quad
\vertiii{\{{ \varepsilon}_{p}^1,{\bld \varepsilon}_{u}^1\}}_{h}\preceq
\Delta t (c_s{\mathcal{O}}_{1,I}^2+(\lambda+\mu){\mathcal{O}}_{1,I\!I}^2).
\]
Combining these estimates with \eqref{full-est}, we readily conclude that
$\vertiii{\{{ \varepsilon}_{p}^N,{\bld \varepsilon}_{u}^N\}}_{h}$ converges at order $k+1$ in space
and second order in time.
For BDF3, we take
$(\underline{{p}}h^0, \underline{\boldsymbol{u}}h^0) = (\underline{{p}}P(0), \underline{\boldsymbol{u}}P(0))$
and compute $(\underline{{p}}h^i, \underline{\boldsymbol{u}}h^i)$, $i=1,2$, with Crank-Nicolson time stepping.
Similarly, the local errors
$\vertiii{\{{ \varepsilon}_{p}^i,{\bld \varepsilon}_{u}^i\}}_{h}$, $i=1,2$, are
third-order in time.
Then, the estimate in Remark \ref{rk:2} yields that
$\vertiii{\{{ \varepsilon}_{p}^N,{\bld \varepsilon}_{u}^N\}}_{h}$ converges at order $k+1$ in space
and third order in time.
\end{remark}
\begin{remark}[Diagonally implicit Runge-Kutta time stepping]
Alternatively, we can apply
the (one-step, multi-stage) diagonally implicit Runge-Kutta (DIRK) methods to solve the DAE \eqref{scheme}.
We refer the interested reader to the references \cite{NguyenPeraireCockburn11b,JaustSchutz14} for a setup.
However, in numerical experiments not documented here,
we do observe order reduction \cite{CarpenterGottliebAbarbanelDon95,RosalesSeiboldShirokoffZhou17}
for high-order ($\ge3$) DIRK schemes, due to inappropriate boundary treatment in the intermediate stages:
we only observe second-order accuracy for third- and fourth-order DIRK schemes.
\end{remark}
\section{Numerical results}
\label{sec:numerics}
In this section, we present several numerical experiments to illustrate the performance of the proposed method.
The numerical experiments are performed using the NGSolve software \cite{Schoberl16}.
\subsection{Accuracy for a smooth solution with a large $\lambda$}
In order to confirm the optimal convergence rates in Section \ref{sec2:disc} and Section \ref{sec3:time},
we consider a manufactured smooth exact solution, similar to the one considered in \cite[Section 7.1]{Y117}.
Specifically, we take the domain to be $\Omega=(0,1)^2$, with the exact
displacement $\bld u=(u, v)$ and exact pressure $p$ given by
\begin{align*}
u(\bld x, t) =& -e^{-t}\cos(\pi x)\sin(\pi y)+\frac{1}{\mu+\lambda}e^{-t}
\sin(\pi x)\sin(\pi y)\\
v(\bld x, t) =& e^{-t}\sin(\pi x)\cos(\pi y)+\frac{1}{\mu+\lambda}e^{-t}
\sin(\pi x)\sin(\pi y)\\
p(\bld x, t) =& e^{-t}\sin(\pi x)\sin(\pi y).
\end{align*}
Note that the solution is designed to satisfy \[{\mathrm{div}}\,\bld u = \pi e^{-t}
\sin(\pi (x+y))/(\mu+\lambda)\rightarrow 0 \quad \text{as }\quad \lambda\rightarrow +\infty.\]
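The stated closed form of ${\mathrm{div}}\,\bld u$ can be verified pointwise against central finite differences; a quick illustrative sketch (not part of the computation) using the material parameters chosen below:

```python
from math import sin, cos, pi, exp

mu, lam, t = 1.0, 1.0e5, 0.5  # parameters of the manufactured solution

def u1(x, y):  # first displacement component
    return -exp(-t)*cos(pi*x)*sin(pi*y) + exp(-t)*sin(pi*x)*sin(pi*y)/(mu + lam)

def u2(x, y):  # second displacement component
    return exp(-t)*sin(pi*x)*cos(pi*y) + exp(-t)*sin(pi*x)*sin(pi*y)/(mu + lam)

def div_exact(x, y):  # claimed closed form of div u
    return pi*exp(-t)*sin(pi*(x + y))/(mu + lam)

h = 1e-6  # central-difference step
def div_fd(x, y):
    return ((u1(x + h, y) - u1(x - h, y)) + (u2(x, y + h) - u2(x, y - h))) / (2*h)

for (x, y) in [(0.3, 0.7), (0.12, 0.55)]:
    assert abs(div_fd(x, y) - div_exact(x, y)) < 1e-8
```

Note that the $O(1)$ parts of $\partial_x u$ and $\partial_y v$ cancel exactly, leaving only the $O(1/(\mu+\lambda))$ contribution.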
We impose Dirichlet boundary conditions for both $\bld u$ and $p$, and choose the following material parameters:
\[
c_s=0, \quad \alpha = 1,\quad \kappa = 1,\quad \lambda =10^5,\quad \mu = 1.
\]
The final computational time is $T=0.5$.
Our computation is based on uniform triangular meshes; see Figure \ref{fig:1} for the coarsest mesh with mesh size $h=1/4$.
We consider the fully discrete scheme \eqref{bdf-time},
with the (spatial) polynomial degree $k$ in the finite element spaces \eqref{space} varying from $1$ to $3$,
and the (temporal) BDF3 method ($m=3$).
We also present numerical results using the relaxed $H(\mathrm{div})$-conformity approach, c.f. Remark \ref{rk:relax}.
The stabilization parameter $\tau_0$ in the bilinear forms \eqref{bilinearforms}
is taken to be $\tau_0 = 10 k^2$ for all the tests.
We take the time step size to be $\Delta t = h^{\max\{(k+1)/3,1\}}$, where $h$ is the
spatial mesh size. The errors in the norm $\vertiii{\cdot}_h$ and in the $L^2$-norms of the displacement and the pressure at the final time
$T=0.5$ are
recorded in Table \ref{table:1} on a sequence of uniformly refined meshes for the original scheme \eqref{scheme}, and
in Table \ref{table:2} for the relaxed $H(\mathrm{div})$-conforming scheme, c.f. Remark \ref{rk:relax}.
In both tables, we observe the optimal
convergence rate for the norm $\vertiii{\cdot}_h$,
in full agreement with our main result in Theorem \ref{thm:energy-full} and Remark \ref{rk:2}.
We also observe optimal convergence rates in the $L^2$-norm of the displacement ($k+2$) and in the $L^2$-norm of the
pressure ($k+1$).
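The ``order'' columns in the tables are the estimated orders of convergence between consecutive refinements, $\log(e_{h}/e_{h/2})/\log 2$; a short sketch reproducing them, here for the $k=1$ energy-norm errors of the original scheme (the helper name is ours):

```python
from math import log

def observed_orders(errors, ratio=2.0):
    """Estimated order of convergence between consecutive uniform refinements."""
    return [log(e0 / e1) / log(ratio) for e0, e1 in zip(errors, errors[1:])]

# k = 1 energy-norm errors on h = 1/4, 1/8, ..., 1/64 (first block of Table 1)
energy_k1 = [7.214e-2, 1.854e-2, 4.692e-3, 1.178e-3, 2.951e-4]
print([round(o, 2) for o in observed_orders(energy_k1)])  # [1.96, 1.98, 1.99, 2.0]
```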
\begin{figure}
\caption{The coarsest mesh with $h = 1/4$}
\label{fig:1}
\end{figure}
\begin{table}[ht!]
\caption{Convergence study at the final time $T=0.5$: The original scheme.}
\centering
{
\begin{tabular}{ c c cc cc c c}
\toprule
& mesh & \multicolumn{2}{c}{$\vertiii{\{\underline{{p}}, \underline{\boldsymbol{u}}\}-
\{\underline{{p}}h, \underline{\boldsymbol{u}}h\}}_{h}$}
& \multicolumn{2}{c}{$\|\bld u -\bld u_{h}\|_{\Omega}$}
&\multicolumn{2}{c}{{$\|p-p_h\|_{\Omega}$}}\\
\midrule
$k$ & $h$ & error & order & error & order & error & order
\tabularnewline
\midrule
\multirow{5}{2mm}{1}
&1/4& 7.214e-02 & - & 3.589e-03 & - & 2.190e-02 & - \\
& 1/8&1.854e-02 & 1.96 & 4.476e-04 & 3.00 & 5.194e-03 & 2.08 \\
&1/16& 4.692e-03 & 1.98 & 5.558e-05 & 3.01 & 1.304e-03 & 1.99 \\
&1/32& 1.178e-03 & 1.99 & 6.911e-06 & 3.01 & 3.263e-04 & 2.00 \\
&1/64& 2.951e-04 & 2.00 & 8.609e-07 & 3.00 & 8.159e-05 & 2.00 \\
\midrule
\multirow{5}{2mm}{2}
&1/4& 8.342e-03 & - & 3.306e-04 & - & 1.832e-03 & - \\
&1/8& 1.034e-03 & 3.01 & 2.024e-05 & 4.03 & 2.421e-04 & 2.92 \\
&1/16& 1.283e-04 & 3.01 & 1.239e-06 & 4.03 & 3.037e-05 & 3.00 \\
&1/32& 1.598e-05 & 3.01 & 7.658e-08 & 4.02 & 3.799e-06 & 3.00 \\
&1/64& 1.994e-06 & 3.00 & 4.759e-09 & 4.01 & 4.750e-07 & 3.00 \\
\midrule
\multirow{5}{2mm}{3}
& 1/4&
7.739e-04 & - & 2.583e-05 & - & 1.851e-04 & - \\
& 1/8&
4.785e-05 & 4.02 & 7.659e-07 & 5.08 & 1.130e-05 & 4.03 \\
&1/16&
3.019e-06 & 3.99 & 2.357e-08 & 5.02 & 7.095e-07 & 3.99 \\
&1/32&
1.879e-07 & 4.01 & 7.244e-10 & 5.02 & 4.409e-08 & 4.01 \\
&1/64&
1.178e-08 & 4.00 & 4.849e-11 & 3.90 & 2.761e-09 & 4.00 \\
\bottomrule
\end{tabular}}
\label{table:1}
\end{table}
\begin{table}[ht!]
\caption{Convergence study at the final time $T=0.5$: The relaxed $H(\mathrm{div})$-conforming scheme. }
\centering
{
\begin{tabular}{ c c cc cc c c}
\toprule
& mesh & \multicolumn{2}{c}{$\vertiii{\{\underline{{p}}, \underline{\boldsymbol{u}}\}-
\{\underline{{p}}_h, \underline{\boldsymbol{u}}_h\}}_{h}$}
& \multicolumn{2}{c}{$\|\bld u -\bld u_{h}\|_{\Omega}$}
&\multicolumn{2}{c}{{$\|p-p_h\|_{\Omega}$}}\\
\midrule
$k$ & $h$ & error & order & error & order & error & order
\tabularnewline
\midrule
\multirow{5}{2mm}{1}
&1/4&
6.201e-02 & - & 3.441e-03 & - & 2.190e-02 & - \\
& 1/8&
1.641e-02 & 1.92 & 4.691e-04 & 2.87 & 5.194e-03 & 2.08 \\
&1/16&
4.262e-03 & 1.94 & 6.326e-05 & 2.89 & 1.304e-03 & 1.99 \\
&1/32&
1.085e-03 & 1.97 & 8.237e-06 & 2.94 & 3.263e-04 & 2.00 \\
&1/64&
2.732e-04 & 1.99 & 1.048e-06 & 2.97 & 8.159e-05 & 2.00 \\
\midrule
\multirow{5}{2mm}{2}
&1/4&
8.170e-03 & - & 3.253e-04 & - & 1.832e-03 & - \\
&1/8&
1.036e-03 & 2.98 & 2.040e-05 & 4.00 & 2.421e-04 & 2.92 \\
&1/16&
1.295e-04 & 3.00 & 1.271e-06 & 4.01 & 3.037e-05 & 3.00 \\
&1/32&
1.618e-05 & 3.00 & 7.921e-08 & 4.00 & 3.799e-06 & 3.00 \\
&1/64&
2.021e-06 & 3.00 & 4.943e-09 & 4.00 & 4.750e-07 & 3.00 \\
\midrule
\multirow{5}{2mm}{3}
& 1/4&
7.633e-04 & - & 2.562e-05 & - & 1.851e-04 & - \\
& 1/8&
4.741e-05 & 4.01 & 7.749e-07 & 5.05 & 1.130e-05 & 4.03 \\
&1/16&
3.024e-06 & 3.97 & 2.442e-08 & 4.99 & 7.095e-07 & 3.99 \\
&1/32&
1.894e-07 & 4.00 & 7.615e-10 & 5.00 & 4.409e-08 & 4.01 \\
&1/64&
1.191e-08 & 3.99 & 3.930e-11 & 4.28 & 2.761e-09 & 4.00 \\
\bottomrule
\end{tabular}}
\label{table:2}
\end{table}
\subsection{Barry and Mercer's problem}
We consider Barry and Mercer's problem \cite{BarryMercer99}, for which an exact
solution is available in terms of infinite series
(we refer the reader to the cited paper and also to \cite[Section 4.2.1]{Phillips05} for the expression).
It models the behavior of a rectangular uniform porous material with a pulsating point source, drained on all sides, and on
which zero tangential displacements are imposed on the whole boundary. The point source corresponds to a sine wave in time on the
rectangular domain $(0,a)\times (0,b)$ and is given by \[f(t) = 2\beta \delta_{\bld x_0}\sin(\beta t),\] where
$\beta = \frac{(\lambda+2\mu)\kappa}{ab}$ and $\delta_{\bld x_0}$ is the Dirac delta at the point $\bld x_0$.
The computational domain together with the boundary conditions are depicted in Figure \ref{fig:2}.
\begin{figure}
\caption{Computational domain and boundary conditions for Barry and Mercer's problem}
\label{fig:2}
\end{figure}
As in \cite{Phillips07a,Rodrigo16},
we consider the rectangular domain $(0,1)\times (0,1)$, and the following values of the material parameters:
\[
c_0=0,\;\;\alpha = 1,\;\; E = 10^5,\;\; \nu = 0.1,\;\; \kappa = 10^{-2},
\]
where $E$ and $\nu$ denote Young's modulus and Poisson's ratio, respectively, and
\[
\mu = \frac{E}{2(1+\nu)},\quad
\lambda = \frac{ E\nu}{(1-2\nu)(1+\nu)}.
\]
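For reference, these relations give the following Lamé parameters for the chosen $E$ and $\nu$ (a quick computational sketch):

```python
# Lame parameters from Young's modulus E and Poisson ratio nu
E, nu = 1.0e5, 0.1
mu = E / (2 * (1 + nu))                      # shear modulus
lam = E * nu / ((1 - 2 * nu) * (1 + nu))     # first Lame parameter
```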
The source is positioned at the point $(1/4,1/4)$.
We consider the fully discrete scheme \eqref{bdf-time},
with the (spatial) polynomial degree $k =1$ in the finite element spaces \eqref{space}
and the (temporal) BDF2 method ($m=2$). We use a relatively large time step of $\Delta t =\frac{\pi}{20 \beta}$.
The solution for the pressure on the deformed domain,
computed on a uniform triangular mesh with mesh size $h = 1/64$, is plotted in Figure \ref{fig:3}
for two values of the ``normalized time'' $\hat t = \beta t$, namely $\hat t = \pi/2$ and
$\hat t = 3\pi/2$. We observe that, depending on the sign of the source term (positive for $\hat t = \pi/2$, negative for $\hat t=3\pi/2$),
the resulting displacements cause an expansion or a contraction of the medium.
We also plot the pressure and the $x$-component of the
displacement on three
consecutive meshes, with mesh sizes $h = 1/32, 1/64, 1/128$,
along the diagonal line $(0,0)$--$(1,1)$ of the domain, together with the exact solution, in Figure \ref{fig:4}.
We observe from Figure \ref{fig:4} that the numerical solutions match the exact solution very closely.
\begin{figure}
\caption{Numerical solution for the pressure on the deformed domain at different times}
\label{fig:3}
\end{figure}
\begin{figure}
\caption{Numerical solutions for the pressure and the $x$-component of the displacement along the diagonal $(0,0)$--$(1,1)$ of the domain at different times}
\label{fig:4}
\end{figure}
Finally, to check the robustness of the method against pressure oscillations for small permeability combined with small time steps,
we show in Figure \ref{fig:5} the pressure profile after one step of backward Euler with $\kappa = 10^{-6}$ and $\Delta t = 10^{-4}$ on
the uniform triangular mesh with $h = 1/64$. We do not observe significant oscillations.
\begin{figure}
\caption{Numerical solution for pressure after one time step. Left: numerical pressure on $\Omega$; Right:
numerical pressure on the diagonal line (0,0)--(1,1).}
\label{fig:5}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper we have analyzed the convergence properties of a novel high-order
HDG discretization of Biot's consolidation model in poroelasticity combined
with BDF time stepping.
The method produces optimal convergence rates and is free from Poisson locking as $\lambda \rightarrow \infty$.
\end{document}
\begin{document}
\title{Deep Ising Born Machine}
\author{Zhu Cao}
\email{caozhu@ecust.edu.cn}
\address{Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China}
\begin{abstract}
A quantum neural network (QNN) is a method to find patterns in quantum data and has a wide range of applications including quantum chemistry, quantum computation, quantum metrology, and quantum simulation. Efficiency and universality are two desirable properties of a QNN but are unfortunately contradictory. In this work, we examine a \emph{deep Ising Born machine} (DIBoM), and show it has a good balance between efficiency and universality. More precisely, the DIBoM has a flexible number of parameters to be efficient, and achieves provable universality with sufficient parameters. The architecture of the DIBoM is based on generalized controlled-Z gates, conditional gates, and some other ingredients. To compare the universality of the DIBoM with other QNNs, we propose a fidelity-based expressivity measure, which may be of independent interest. Extensive empirical evaluations corroborate that the DIBoM is both efficient and expressive.
\end{abstract}
\keywords{quantum machine learning, quantum neural network, efficiency, universality, expressivity}
\pacs{03.67.Ac, 03.67.Lx, 07.05.Mh}
\maketitle
\section{Introduction}
Machine learning (ML) has emerged as one of the most revolutionary techniques in recent years \cite{Goodfellow-et-al-2016}. Despite its significance, ML necessitates a tremendous amount of computational power. However, with the waning effectiveness of Moore's law on the speed of classical processors \cite{waldrop2016chips}, and the ever-increasing computational demands of state-of-the-art ML models, the future development of ML may face significant hindrances due to the shortage of adequate computational resources. Quantum computing \cite{nielsen2010quantum}, a novel computing paradigm, holds the potential to sustainably advance ML. At present, quantum machine learning (QML) \cite{biamonte2017quantum}, which refers to the use of quantum computers for machine learning, is still in its nascent stage. Depending on whether the data and the learning algorithm are classical or quantum, QML can be categorized into four types: classical learning of classical data, quantum learning of classical data, classical learning of quantum data, and quantum learning of quantum data.
Among the four categories of QML, quantum learning of quantum data is arguably the most promising type to achieve a demonstrable exponential speedup over classical machine learning methods. Furthermore, quantum learning of quantum data has a diverse array of applications, including quantum chemistry \cite{arute2020hartree}, quantum data compression \cite{romero2017quantum}, quantum error correction \cite{johnson2017qvector}, quantum metrology \cite{koczor2020variational}, quantum compiling \cite{sharma2020noise}, quantum state diagonalization \cite{larose2019variational}, quantum simulation \cite{li2017efficient}, quantum fidelity estimation \cite{cerezo2020variational}, and consistent histories \cite{arrasmith2019variational}. It is worth noting that these quantum applications generate a substantial amount of quantum data, which in turn fuels the development of quantum learning of quantum data, analogous to how the vast amount of classical information drives the advancement of classical machine learning.
Methods of quantum learning of quantum data can be classified into two categories: those that belong to quantum neural networks (QNNs) and those that do not. Examples of methods that do not fall into the QNN category are the Harrow-Hassidim-Lloyd algorithm \cite{harrow2009quantum}, quantum principal component analysis \cite{lloyd2014quantum}, and quantum support vector machines \cite{rebentrost2014quantum}. Various proposals of QNNs have been put forward in the literature \cite{schuld2015simulating,lewenstein1994quantum,wan2017quantum,da2016quantum,gonsalves2017quantum,kouda2005qubit,beer2020training}, among which the current state-of-the-art is arguably given by Ref.~\cite{beer2020training}. In addition, specialized QNNs tailored to specific data inputs have also been developed, including quantum convolutional neural networks \cite{cong2019quantum}, quantum recurrent neural networks \cite{bausch2020recurrent}, quantum generative adversarial networks \cite{lloyd2018quantum}, quantum autoencoders \cite{bondarenko2020quantum}, quantum reservoir networks \cite{ghosh2021quantum}, and quantum residual networks \cite{killoran2019continuous}. However, these specialized QNNs cannot perform universal quantum computation, which is essential for the general quantum learning task of learning the hidden mapping between a set of quantum input and label pairs. Consequently, we will focus our attention on general QNNs and the general quantum learning task hereafter.
Efficiency and universality are two desirable properties of a general QNN. While efficiency is measured in terms of the number of parameters in the model, which should be as small as possible, universality refers to the ability of a QNN to approximate an arbitrary unitary on $n$ qubits. These two properties are often in conflict with each other. For instance, consider a basic QNN that applies a parametrized unitary $U$ to the quantum input and produces an output that approximates the label. Here, the parametrized unitary $U$ for $n$ qubits is represented as
\begin{equation}
U = \exp \left[ i\left( \sum\limits_{j_1=0}^{3}\cdots\sum\limits_{j_n=0}^{3} \alpha_{j_1,j_2,\dots,j_n} \left( \sigma_{j_1}\otimes \cdots \otimes \sigma_{j_n} \right) \right) \right],
\label{eq:generalQNN}
\end{equation}
where $\sigma_0$ is the identity matrix, $\sigma_1$, $\sigma_2$, $\sigma_3$ are Pauli matrices, and $\alpha_{j_1,j_2,\dots,j_n}$ are real parameters that are learned during training. Although this basic QNN is universal, as it can approximate any unitary by adjusting its parameters, it is not efficient since its number of parameters is $4^n$. Ideally, the number of parameters should be polynomial in $n$ for the model to be efficient.
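Eq.~\eqref{eq:generalQNN} can be sketched numerically for small $n$; the following illustration (function and variable names are ours) makes the $4^n$ parameter count explicit and builds the resulting unitary by exponentiating the Hermitian generator:

```python
import numpy as np

# Pauli basis sigma_0, ..., sigma_3
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def basic_qnn_unitary(n, alpha):
    """U = exp(i * sum alpha_{j1..jn} sigma_{j1} x ... x sigma_{jn}); alpha has 4**n entries."""
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for idx in range(4 ** n):
        P = np.eye(1, dtype=complex)
        k = idx
        for _ in range(n):        # decode idx in base 4 into a Pauli string
            P = np.kron(P, paulis[k % 4])
            k //= 4
        H += alpha[idx] * P
    # H is Hermitian (real coefficients times Hermitian Pauli strings),
    # so exponentiate via its eigendecomposition
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

rng = np.random.default_rng(0)
n = 2
U = basic_qnn_unitary(n, rng.normal(size=4 ** n))  # 4**2 = 16 parameters for n = 2
```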
This work investigates a \emph{deep Ising Born machine} (DIBoM), which has a flexible number of parameters to mitigate the efficiency issue while retaining universality with sufficient parameters. The DIBoM consists of a generalized controlled-Z (CZ) gate, a conditional gate, a global or local cost function, and some other ingredients. By replacing the normal CZ gate with a generalized CZ gate in a hardware-efficient QNN \cite{mcclean2018barren,benedetti2019generative}, we demonstrate that the expressivity can be increased through numerical evaluations. Along the way, we develop an expressivity measure, called fidelity-based expressivity, to characterize the expressivity of different QNN architectures. This measure may be of independent interest. Moreover, we theoretically prove that hardware-efficient QNN with generalized CZ gates can achieve universal quantum computation with sufficient parameters. The conditional gate is used to solve the problem of different input and output dimensions, and this approach can save space by a constant factor compared to dissipative QNNs \cite{beer2020training}. In addition, the ablation study shows that this ingredient improves the expressivity of the DIBoM. We examine two variants of the DIBoM, one with a global cost function and the other with a local cost function, and show that the global cost function version has a wider range of applicability, while the local cost function is more trainable and can mitigate the barren plateau issue.
We perform extensive experiments to compare the DIBoM with other QNN architectures, evaluate its different components, analyze the sensitivity of its performance to various parameters, and examine its robustness to noise. Our work invites further research on the design of QNNs with multiple desirable properties, and we hope it will stimulate further development of the architecture design of QNNs in general.
The roadmap for the rest of the paper is as follows. First, in Sec.~\ref{sec:relatedwork}, we review related works. Next, in Sec.~\ref{sec:model}, we present the DIBoM model and its training method. In Sec.~\ref{sec:theory}, we analyze theoretically the properties of the DIBoM. We then turn to the empirical evaluation of the model, with Sec.~\ref{sec:simulationsetup} describing the simulation setup and Sec.~\ref{sec:simulationresult} presenting the results.
Finally, we conclude the paper in Sec.~\ref{sec:discussion} and give a few outlooks.
\section{Related works}
\label{sec:relatedwork}
In this section, we provide a review of the related works in the field, including hardware-efficient QNNs, dissipative QNNs, Ising Born machines, and Hamiltonian learning.
\subsection{Hardware-efficient QNNs}
We start by reviewing hardware-efficient QNNs \cite{mcclean2018barren,benedetti2019generative}. Hardware-efficient QNNs were proposed to reduce the exponential training cost of the basic QNN, and require only a polynomial number of resources. They are composed of alternating layers of single-qubit rotations and entangling gates such as CZ gates.
The connectivity of the entangling gates can be either linear \cite{mcclean2018barren} or pairwise \cite{benedetti2019generative}, as shown in Figs.~\ref{fig:illlus}(a) and (b). The number of layers of a hardware-efficient QNN can vary, and so can the parameters of its single-qubit rotations. From now on, we will refer to the architecture in Ref.~\cite{mcclean2018barren} as \emph{the} hardware-efficient QNN.
\begin{figure}
\caption{(a) A series of blocks where each block consists of single-qubit rotations with nearest-neighbor CZ gates; (b) A series of blocks where each block consists of single-qubit rotations with all-to-all CZ gates; (c) Unitary with a single CZ gate connecting the first two qubits. }
\label{fig:illlus}
\end{figure}
There are two drawbacks to the hardware-efficient QNN. First, to our knowledge, there is to date no proof that the hardware-efficient QNN presented in \cite{mcclean2018barren} is capable of universal quantum computation even with an exponential number of layers; see Sec.~\ref{sec:power} for more details. In particular, it is not known whether it can be used to simulate a circuit with a single CZ gate, which is illustrated in Fig.~\ref{fig:illlus}(c). Second, it cannot vary the relative number of qubits between the input and the output. The DIBoM resolves these shortcomings of the hardware-efficient QNN \cite{mcclean2018barren} while retaining its merits.
Note that the definition of the hardware-efficient QNN varies in the literature, and in the broadest sense can include any QNN that can be implemented efficiently on some quantum hardware. This in particular includes the DIBoM as a special case, which is different from its usage in our work.
\subsection{Dissipative QNN}
We next review dissipative QNNs, a different type of QNNs that deal with quantum inputs and outputs of unmatched dimensions \cite{beer2020training,sharma2020trainability}. Initially, all hidden and output qubits are in the state $\ket{0}$. Dissipative QNNs apply a unitary on all input, hidden, and output qubits and subsequently trace out the input and hidden qubits,
\begin{equation}
\rho_{out} \equiv \mathrm{tr}_{in,hid}\left( U( \rho_{in} \otimes \ket{0\cdots 0}_{hid,out} \bra{0\cdots 0} ) U^\dagger \right).
\end{equation}
This process is illustrated in Fig.~\ref{fig:dissipativeQNN}. Here, it can easily be seen that the dimensions of the quantum input $\rho_{in}$ and the quantum output $\rho_{out}$ need not be the same. Since this network architecture discards many qubits, it was given the name \emph{dissipative QNN} \cite{sharma2020trainability}.
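The partial-trace expression above can be computed directly for a tiny instance; the sketch below (helper name and the choice $U=\mathrm{SWAP}$ are ours, with one input qubit, no hidden qubits, and one output qubit) illustrates the mechanics:

```python
import numpy as np

def partial_trace_first(rho, d_keep, d_trace):
    """Trace out the FIRST subsystem of a (d_trace*d_keep)-dimensional density matrix."""
    rho = rho.reshape(d_trace, d_keep, d_trace, d_keep)
    return np.einsum('ijik->jk', rho)  # sum over the repeated first-subsystem index

# one input qubit, one output qubit; U = SWAP simply moves the input to the output wire
rho_in = np.array([[0.75, 0.25], [0.25, 0.25]])      # some input state
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
ket0 = np.array([[1, 0], [0, 0]], dtype=complex)     # |0><0| on the output wire
rho_joint = swap @ np.kron(rho_in, ket0) @ swap.conj().T
rho_out = partial_trace_first(rho_joint, d_keep=2, d_trace=2)
```

With $U=\mathrm{SWAP}$ the output equals the input state, as expected.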
\begin{figure}
\caption{An illustration of a dissipative QNN. Here, $\rho_{in}$ denotes the quantum input.}
\label{fig:dissipativeQNN}
\end{figure}
However, a dissipative QNN also has several disadvantages. First, it has a larger space complexity due to its dissipative nature: when the dimensions of the quantum input and output are equal, a dissipative QNN uses twice as many qubits as a basic QNN. Second, the resources that a dissipative QNN requires are still exponential, for the same reason as before: an exponential number of parameters is needed to parametrize a general unitary transformation. The DIBoM retains the merit of a dissipative QNN of handling unequal quantum input and output dimensions, while in addition having small space complexity and being resource efficient.
\subsection{Ising Born machine}
We then review the Ising Born machine \cite{coyle2020born}, which bears a close resemblance to the DIBoM. We begin by defining a quantum Born machine \cite{cheng2018information,liu2018differentiable,benedetti2019generative}, as illustrated in Fig.~\ref{fig:isingborn}(a). A quantum Born machine is a class of models that consists of a parametrized quantum circuit followed by a quantum measurement. As the measurement outcome is determined by Born's rule, these models are named ``quantum Born machines'' \cite{cheng2018information,liu2018differentiable,benedetti2019generative}. Figure~\ref{fig:isingborn}(b) provides a specific choice of the parametrized quantum circuit proposed by Ref.~\cite{coyle2020born}, containing one fixed layer (Hadamard gates on all qubits) and two tunable layers (one with tunable two-qubit gates on all pairs of qubits and the other with tunable one-qubit gates on all qubits). Since tunable two-qubit gates mimic the Ising model, this model was named the ``Ising Born machine'' \cite{coyle2020born}. Although the original Ising Born machine was designed for generative modeling of classical data, it can be adapted to the general quantum learning task by removing the final quantum measurement layer in Fig.~\ref{fig:isingborn}(b). The main difference between the DIBoM and Ising Born machine lies in the number of tunable layers; the DIBoM can contain more than two tunable layers, thereby achieving higher expressive power than the Ising Born machine. In particular, the Ising Born machine is not capable of universal quantum computation, while the DIBoM is.
\begin{figure}
\caption{(a) The schematics of a quantum Born machine. Here, $\mathcal{U}$ denotes the parametrized quantum circuit. (b) The parametrized circuit of the Ising Born machine.}
\label{fig:isingborn}
\end{figure}
\subsection{Hamiltonian learning}
Finally, the general quantum learning task is closely related to Hamiltonian learning \cite{cirstoiu2020variational,barison2021efficient}, a topic that has attracted a tremendous amount of interest recently. The correspondence is as follows: the quantum input $\ket{ \phi_{in} }$ corresponds to the initial quantum state of a system, and the quantum label $\ket{ \phi_{out} }$ corresponds to the quantum state of the system after evolving for a time $\delta t$. The hidden mapping $V$ can be associated with the time-evolution operator $e^{-i H \delta t}$ where $H$ is the Hamiltonian of the system. By approximating $V$ using QNNs, the Hamiltonian $H$ is learnt.
\section{Model}
\label{sec:model}
After reviewing related works, we now describe the DIBoM architecture. We first mathematically formulate the problem of quantum learning of quantum data in Sec.~\ref{sec:problemsetup}. Then in Sec.~\ref{sec:transformermodel}, we describe the DIBoM model which is targeted to this learning problem. Finally, we describe the training procedure of the DIBoM in Sec.~\ref{sec:quantumtraining}.
\subsection{Learning problem setup}
\label{sec:problemsetup}
We begin by introducing the quantum learning problem addressed by the DIBoM. Let $\mathcal{D}$ be an underlying distribution, and suppose we have $N$ pairs of training samples and labels $(\ket{\psi_i}, \ket{\phi_i}) \in \mathcal{D}$, where $1 \le i \le N$. For each pair, we are given $K$ copies of the input $\otimes_{i=1}^N(\ket{\psi_i}^{\otimes K}, \ket{\phi_i}^{\otimes K})$, as well as $M$ copies of test samples denoted by $\rho^{\otimes M}$. The goal is to generate model outputs for each test sample that closely approximate the corresponding test label. We use the infidelity to measure the similarity between two quantum states, and assume that the training and test data are independently sampled from $\mathcal{D}$. To illustrate, consider an example where $\ket{\psi_i}$ is a randomly generated $n$-qubit pure state and its corresponding label is $\ket{\phi_i}=V \ket{\psi_i}$. Here $V$ is a hidden $n$-qubit unitary that is independent of $i$ and unknown to the model. This example will also be used in the evaluation section. It should be noted that $\ket{\psi_i}$ and $\ket{\phi_i}$ may not be of the same dimension for general $\mathcal{D}$.
\subsection{Model architecture}
\label{sec:transformermodel}
After defining the learning problem, we now turn to the model of the DIBoM. The DIBoM takes a quantum state as input and outputs a quantum state, possibly of a different dimension. It is based on a basic quantum structure which is illustrated in Fig.~\ref{fig:perceptron}. This basic quantum structure has three steps. First, the quantum input $\rho_{\mathrm{in}}$ together with a $k$-qubit ancilla $\ket{0}^{\otimes k}$ undergo a unitary transformation $U$ that produces an intermediate state
\begin{equation}
\rho_{\mathrm{inter1}} = U ( \rho_{\mathrm{in}} \otimes \ket{0}^{\otimes k} ) U^\dagger.
\end{equation}
Second, part of the joint quantum state is measured, resulting in outcome $j$.
Let $\rho_{\mathrm{inter2}}^j$ denote the post-measurement state conditioned on that the outcome is $j$.
Third, another unitary $V_j$, that can depend on the outcome $j$ in the second step, is applied to $\rho_{\mathrm{inter2}}^j$ to produce the output quantum state
\begin{equation}
\rho_{\mathrm{out}} = V_j \rho_{\mathrm{inter2}}^j V_j^\dagger.
\end{equation}
\begin{figure}
\caption{The basic quantum structure. Here, $\rho_{in}$ denotes the quantum input and $\rho_{out}$ the quantum output.}
\label{fig:perceptron}
\end{figure}
With the basic quantum structure at hand, we are ready to define the DIBoM. Instead of applying the unitaries $U$ and $V_j$ which would consume exponential time to train, the DIBoM uses a stack of layers as a substitute for $U$ and $V_j$. The layers have two types. The first type consists of single-qubit rotations that are parametrized by $\alpha_j$ as follows:
\begin{equation}
U_{\mathrm{SG}} = \exp \left[ i( \sum\limits_{j=1}^{3} \alpha_{j} \sigma_{j} ) \right],
\end{equation}
where $\sigma_1$, $\sigma_2$, $\sigma_3$ are Pauli matrices.
The second type of layer applies a generalized CZ gate to all pairs of qubits:
\begin{equation}
U_{\mathrm{CZ}} = \exp \left[ -i\pi( \sum\limits_{1\le j <k \le n} \beta_{jk} \ket{11}_{jk} \bra{11}_{jk} ) \right],
\end{equation}
where $\beta_{jk}$ is an arbitrary real parameter, so that the gate interpolates smoothly between a CZ gate and an identity gate:
if $\beta_{jk}=1$, a CZ gate is applied on qubits $j$ and $k$, and if $\beta_{jk}=0$, an identity gate is applied.
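Since $U_{\mathrm{CZ}}$ is diagonal in the computational basis, it can be built directly from the bit patterns of the basis states. A minimal numerical sketch (the function name and bit ordering are our illustrative choices):

```python
import numpy as np

def generalized_cz(n, beta):
    """Diagonal unitary exp(-i*pi*sum_{j<k} beta[j][k] |11><11|_{jk}) on n qubits."""
    phases = np.zeros(2 ** n)
    for state in range(2 ** n):
        # bits[q] is the value of qubit q, with qubit 0 the most significant bit
        bits = [(state >> (n - 1 - q)) & 1 for q in range(n)]
        for j in range(n):
            for k in range(j + 1, n):
                if bits[j] == 1 and bits[k] == 1:
                    phases[state] += beta[j][k]
    return np.diag(np.exp(-1j * np.pi * phases))

# beta = 1 on the single pair of two qubits reproduces the standard CZ gate
U = generalized_cz(2, [[0, 1], [0, 0]])
```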
The DIBoM is constructed as $\prod_{j=L/2}^{1} ( U_{\mathrm{CZ}}^j U_{\mathrm{SG}} ^j )$ for $L$ even and $U_{\mathrm{SG}} ^{(L+1)/2} \prod_{j=(L-1)/2}^{1} ( U_{\mathrm{CZ}}^j U_{\mathrm{SG}} ^j )$ for $L$ odd, where $L$ is the total number of layers and $\prod_{j=1}^L U^j$ is short for $U^1\cdots U^L$. An illustration of the DIBoM is shown in Fig.~\ref{fig:main_model}.
\begin{figure}
\caption{The schematics of a deep Ising Born machine. Here, SQ denotes a tunable single-qubit gate; CZ denotes generalized CZ gates between all pairs of qubits. The remaining symbols have the same meanings as in the basic quantum structure.}
\label{fig:main_model}
\end{figure}
\subsection{Training procedure}
\label{sec:quantumtraining}
After presenting the DIBoM model, the next step is to discuss the training process. In this regard, two variants of the loss functions are considered. The first loss function, called the global loss function, has the form
\begin{equation}
\label{eq:lossfunction}
\mathcal{L}_G =1 - \mathbb{E}_x \bra{\phi^x_\mathrm{out}} \rho^x_\mathrm{out}\ket{\phi^x_\mathrm{out}},
\end{equation}
where $\ket{\phi^x_\mathrm{out}}$ is the correct label, $ \rho^x_\mathrm{out}$ is the output of the DIBoM, and $ \mathbb{E}_x$ stands for the expectation over the random variable $x$.
The intuition behind this loss function can be seen through some special cases.
If the correct label is identical to the model output, the loss is 0; otherwise, the loss is positive.
Hence, by minimizing the loss function, the model output converges to the correct label.
If the correct label is a mixed state $\sigma^x_\mathrm{out}$, the loss function can be easily generalized to
$ \mathcal{L}=1- \mathbb{E}_x F(\rho^x_\mathrm{out}, \sigma^x_\mathrm{out})$, where
$F(\rho,\sigma):= \left [ \mathrm{tr} \sqrt{\rho^{1/2}\sigma\rho^{1/2}} \right]^2$.
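The mixed-state fidelity above can be evaluated numerically with an eigendecomposition-based matrix square root; a minimal sketch (function names are ours):

```python
import numpy as np

def sqrtm_psd(A):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm_psd(rho)
    return np.real(np.trace(sqrtm_psd(s @ sigma @ s))) ** 2

plus_state = 0.5 * np.ones((2, 2))   # |+><+|
mixed = 0.5 * np.eye(2)              # maximally mixed state
f = fidelity(plus_state, mixed)      # for a pure rho, F = <psi|sigma|psi> = 1/2 here
```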
The second loss function, called the local loss function, is given by
\begin{equation}
\mathcal{L}_L = 1-\frac{1}{nN} \sum \limits_{x=1}^N \sum \limits_{y=1}^n \mathrm{tr}\left( \left( \ket{\phi_x^{0}}_y\bra{\phi_x^{0}}_y \otimes I_{\bar{y}} \right) \rho_x^{0}(s) \right) ,
\label{eq:locallossfunction}
\end{equation}
where $\ket{\phi_x^{0}}_y$ is the $y$-th qubit of the input state $\ket{\phi_x^0}$, $I_{\bar{y}}$ denotes completely mixed states for all qubits except the $y$-th qubit, $N$ is the number of samples, and $n$ is the number of qubits. Here, $\rho_x^{0}(s)$ represents the effective input which, when applied the unitary given by the current model, generates the correct quantum label. When applying the local loss function, it is assumed that the input quantum state $\ket{\phi_x^0}$ is a product state and has the form $\ket{\phi_x^0} = \ket{\phi_x^0}_1 \otimes \dots \otimes \ket{\phi_x^0}_n $, where $n$ is the number of qubits. Note that no such assumption is made when applying the global loss function.
With the loss function defined (either $\mathcal{L}_G$ or $\mathcal{L}_L$, for simplicity denoted as $\mathcal{L}$), we describe the procedure to train the network with quantum computers in two steps:
(i) calculate the loss function with quantum computers; (ii) update the parameters by performing gradient descent on the loss function.
For the first step, we first compute the quantity
$\bra{\phi^x_\mathrm{out}} \rho^x_\mathrm{out}\ket{\phi^x_\mathrm{out}}$. To this end, we exploit the quantum circuit plotted in Fig.~\ref{fig:innerproduct} \cite{beer2020training}.
Through straightforward calculation, one can verify that this circuit takes $\ket{\phi^x_\mathrm{out}}$ and $ \rho^x_\mathrm{out}$ as inputs and outputs
$(1+\bra{\phi^x_\mathrm{out}} \rho^x_\mathrm{out}\ket{\phi^x_\mathrm{out}})/2$.
By a linear transformation and some classical computation, the loss function $\mathcal{L}$ is obtained.
For the second step, we calculate the derivative of the loss function for all parameters and update the parameters accordingly by performing gradient descent.
For a parameter $y^\mu$ (either $\alpha_j$ or $\beta_{jk}$) in any layer, we calculate its derivative by running the loss function calculation twice as follows,
\begin{equation}
\frac{ \delta \mathcal{L} }{ \delta y^\mu } = \frac{\mathcal{L}(y^\mu )-\mathcal{L}(y^\mu - \epsilon )}{\epsilon},
\end{equation}
where $\epsilon$ is a small value. Note that this method assumes the availability of a high-precision quantum computer, as a noisy quantum computer may yield a derivative that is far from the true value. An alternative way would be calculating an analytic derivative directly with quantum computers, but this requires further investigation and is left as future work.
Then we update each parameter $y^\mu$ in the $k$-th iteration to minimize the loss function by the rule
\begin{equation}
y^\mu_{k+1} =y^\mu_{k} - \eta \frac{ \delta \mathcal{L} }{ \delta y^\mu_k },
\label{eq:naivegd}
\end{equation}
where $\eta$ is the learning rate.
When $\eta$ is small enough, the loss function decreases with each parameter update.
We note that the DIBoM is efficiently trainable as it has only a polynomial number of parameters.
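The backward-difference gradient and update rule above can be mimicked classically on a toy loss; in the sketch below (names are ours), a quadratic stands in for the quantum-evaluated infidelity:

```python
import numpy as np

def finite_diff_grad(loss, y, eps=1e-6):
    """Backward-difference estimate (L(y) - L(y - eps*e_mu)) / eps for each parameter."""
    g = np.empty_like(y)
    base = loss(y)
    for mu in range(len(y)):
        shifted = y.copy()
        shifted[mu] -= eps
        g[mu] = (base - loss(shifted)) / eps
    return g

# toy quadratic loss standing in for the quantum-evaluated loss function
loss = lambda y: float(np.sum((y - 1.0) ** 2))
y, eta = np.zeros(3), 0.1
for _ in range(200):                      # gradient descent, Eq. (naive gd)
    y = y - eta * finite_diff_grad(loss, y)
```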
\begin{figure}
\caption{Circuit to calculate the loss function. Here, $H$ is the Hadamard gate, $CSWAP$ is the controlled-SWAP gate, and $\ket{\phi_x}$ denotes the label state.}
\label{fig:innerproduct}
\end{figure}
The training strategy presented here does not aim to optimize the efficiency or computational cost of the training algorithm. Instead, we chose this strategy to evaluate the model's performance in terms of its converged loss: if the model can converge to a low test loss under some training strategy, it is highly probable that the strategy presented here will also result in a low test loss, which makes it well suited for this evaluation. Notably, there are training strategies such as the parameter-shift rule \cite{wierichs2022general} that can significantly reduce the number of quantum circuit evaluations, and gradient descent methods that offer faster convergence. For example, one may utilize Nesterov acceleration \cite{nesterov27method}, which is also a first-order optimization method (utilizing only first-order derivatives), to speed up the convergence. The $k$-th iteration of the parameter $y^\mu$ in Nesterov acceleration has the form
\begin{equation}
\begin{aligned}
x^\mu_{k+1} &=y^\mu_{k} + \frac{k-1}{k+2} (y^\mu_{k}-y^\mu_{k-1}) , \\
y^\mu_{k+1} &=x^\mu_{k+1} - \eta \frac{ \delta \mathcal{L} }{ \delta x^\mu_{k+1} }.
\end{aligned}
\end{equation}
It can be shown that when $\eta = 1/\mathsf{L}$ where $\mathsf{L}$ is the Lipschitz constant of $\mathcal{L} $, the convergence rate by the above iteration rule is $O(1/k^2)$, quadratically better than $O(1/k)$ of Eq.~\eqref{eq:naivegd}.
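The Nesterov iteration above can be sketched classically; in the following illustration (names are ours), the gradient of a simple quadratic with per-coordinate curvatures stands in for the loss gradient, and $\eta = 1/\mathsf{L}$ with $\mathsf{L}=1$:

```python
import numpy as np

def nesterov(grad, y0, eta, steps):
    """Nesterov-accelerated gradient descent with momentum weight (k-1)/(k+2)."""
    y_prev = y_cur = np.asarray(y0, dtype=float)
    for k in range(1, steps + 1):
        x = y_cur + (k - 1) / (k + 2) * (y_cur - y_prev)   # look-ahead point
        y_prev, y_cur = y_cur, x - eta * grad(x)           # gradient step at x
    return y_cur

# toy quadratic loss L(y) = 0.5 * sum(a * (y - 1)^2); its gradient is 1-Lipschitz
a = np.array([1.0, 0.5, 0.2, 0.1])
grad = lambda y: a * (y - 1.0)
y = nesterov(grad, np.zeros(4), eta=1.0, steps=300)
```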
Second-order or higher-order optimization methods can offer further improvements in convergence by utilizing second-order derivatives $\partial^2 \mathcal{L} / \partial y^\mu \partial y^\nu$. However, the computational cost of each iteration step in second-order optimization methods is $O(N^2)$, in contrast to $O(N)$ of first-order optimization methods, where $N$ is the number of parameters. In classical neural networks, $N$ is usually on the order of $10^8$, making second-order optimization methods computationally too expensive. As a result, first-order optimization methods are generally preferred. Similarly, in QNNs, second-order optimization methods were considered inferior to first-order methods in cases where the learning problem required a large $N$ to solve. However, recent advances have shown that second-order methods can be substantially sped up \cite{gacon2021simultaneous}, making them a competitive alternative.
The training procedures presented above update all parameters simultaneously, and we refer to them as \emph{simultaneous training}. Another training method, known as \emph{layer-by-layer training} \cite{skolik2021layerwise}, offers an alternative approach. In each training step, the parameters of one layer are updated using gradient descent, while the parameters of all other layers are fixed. The layer to be trained can be selected in a round-robin manner, from layer $1$ to layer $L$ and then repeated, where $L$ is the number of layers. Alternatively, choosing the layer randomly is also a plausible approach. For the remainder of this paper, we will use the round-robin approach for layer-by-layer training, unless otherwise specified.
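The round-robin scheme can be sketched as follows; the helper names and the separable toy loss are illustrative assumptions, and \texttt{grad\_fn} stands in for whatever per-layer gradient estimator (e.g.\ the parameter-shift rule) is available in practice.

```python
import numpy as np

def layer_by_layer_train(params, grad_fn, eta=0.1, sweeps=3):
    """Round-robin layer-by-layer training sketch.

    params  : list of per-layer parameter arrays
    grad_fn : grad_fn(params, l) -> gradient of the loss w.r.t. layer l only,
              with all other layers held fixed
    """
    L = len(params)
    for _ in range(sweeps):
        for l in range(L):  # round-robin: layer 1 .. layer L, then repeat
            params[l] = params[l] - eta * grad_fn(params, l)
    return params

# Toy separable loss: sum over layers of ||p_l - target_l||^2.
targets = [np.array([1.0]), np.array([-2.0]), np.array([0.5])]
grad = lambda ps, l: 2.0 * (ps[l] - targets[l])
trained = layer_by_layer_train([np.zeros(1) for _ in range(3)], grad,
                               eta=0.25, sweeps=40)
```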
\section{Theoretical Analysis}
\label{sec:theory}
In this section, we conduct a theoretical analysis of the DIBoM architecture from three perspectives to gain insight into its properties. Specifically, we examine its flexibility with unequal input and output dimensions in Sec.~\ref{sec:inout}, its balance between expressive power and efficiency in Sec.~\ref{sec:power}, and compare it with other models in Sec.~\ref{sec:theocompare}.
\subsection{Input-output dimension}
\label{sec:inout}
We start by showing that the DIBoM can support unequal input and output dimensions.
This is due to its underlying structure, which was illustrated in Figure~\ref{fig:perceptron}. The DIBoM can accommodate a larger or smaller number of input qubits $m$ than output qubits $n$ by adjusting the number of ancilla qubits and the qubits to be measured. There are two cases to consider. First, if $m<n$, an ancilla $\ket{0}^{\otimes (n-m)}$ can be used, and no measurement is required after the unitary $U$. Second, if $m>n$, no ancilla is needed, and a measurement can be performed on $m-n$ qubits after the unitary $U$.
As a result, quantum teleportation can be instantiated by the DIBoM as follows. We begin with a single-qubit quantum input $\rho_{in}$ and an ancilla in the state $\ket{00}$. A unitary operator $U$ is next applied to the system, which leaves the quantum input unchanged and entangles the ancilla into an EPR pair. A measurement is subsequently performed on both the quantum input and one of the qubits in the EPR pair. Based on the measurement outcome, an appropriate unitary operation is applied to the other qubit of the EPR pair. The result is the original quantum state being teleported to the quantum output $\rho_{out}$.
\subsection{Expressive power}
\label{sec:power}
Next we show another theoretical property of the DIBoM, namely its ability to perform universal quantum computation.
This is achieved by a reduction from a well-known result that $2^{O(n)}$ layers of single-qubit gates and CZ gates suffice for universal quantum computation, where $n$ is the number of qubits \cite{nielsen2010quantum}. Note that a general circuit with $2^{O(n)}$ layers of single-qubit gates and CZ gates is inequivalent to the hardware-efficient QNN \cite{mcclean2018barren}. For example, Fig.~\ref{fig:illlus}(c), which belongs to the class of circuits with single-qubit gates and CZ gates, cannot be converted into the form of the hardware-efficient QNN \cite{mcclean2018barren}, while it can be turned into the form of a DIBoM as we will see shortly.
Given a circuit $\mathcal{C}$ with $2^{O(n)}$ layers of single-qubit gates and CZ gates that approximates the desired unitary $U$ within an error of $\epsilon$, we convert it to the structure of a DIBoM in two steps.
\begin{enumerate}
\item In the first step, we split each layer of $\mathcal{C}$ into two layers, with the first layer containing only single-qubit gates and the second layer containing only CZ gates. The resulting circuit is called $\mathcal{C}_2$.
\item In the second step, we fill the missing single-qubit gates in the odd layers of $\mathcal{C}_2$ with identity single-qubit gates, and fill the missing generalized CZ gates in the even layers of $\mathcal{C}_2$ with identity two-qubit gates.
\end{enumerate}
An illustration of this reduction is shown in Fig.~\ref{fig:universalproof}.
Hence a DIBoM with $2\times 2^{O(n)}=2^{O(n)}$ layers is capable of universal quantum computation.
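The two-step conversion above can be sketched on a symbolic layer representation; the dictionary encoding of layers and the gate labels (with \texttt{'I'} and \texttt{'I2'} denoting the identity paddings) are our own illustrative conventions.

```python
def to_dibom_layers(circuit, n):
    """Convert a circuit (a list of layers mixing single-qubit gates and CZ
    gates) into alternating DIBoM layers, padding with identities.

    circuit : list of dicts {'single': {qubit: gate_name},
                             'cz': set of (j, k) pairs}
    Returns a list of layers; 'single' layers hold a gate for every qubit,
    'cz' layers hold a (generalized) CZ entry for every qubit pair.
    """
    dibom = []
    all_pairs = {(j, k) for j in range(n) for k in range(j + 1, n)}
    for layer in circuit:
        # Step 1: split each layer into a single-qubit layer and a CZ layer.
        # Step 2: pad with identities so every qubit / pair is covered.
        singles = {q: layer['single'].get(q, 'I') for q in range(n)}
        czs = {p: ('CZ' if p in layer['cz'] else 'I2') for p in all_pairs}
        dibom.append(('single', singles))
        dibom.append(('cz', czs))
    return dibom

circ = [{'single': {0: 'H'}, 'cz': {(0, 1)}},
        {'single': {1: 'T'}, 'cz': set()}]
layers = to_dibom_layers(circ, n=2)
```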
\begin{figure}
\caption{A reduction from $\mathcal{C}$ to the layered structure of a DIBoM.}
\label{fig:universalproof}
\end{figure}
The representation power of the DIBoM forms a hierarchy that varies with the number of layers. At one end of the spectrum, when the DIBoM has a polynomial depth, it possesses a limited number of parameters, making it highly efficient. Conversely, at the other end of the spectrum, when the DIBoM has an exponential depth, it has the ability to approximate universal quantum computation with high precision. Hence the DIBoM balances the efficiency and the expressive power quite well.
To quantitatively compare the expressivity of the DIBoM and other QNN architectures, we propose an expressivity measure
\begin{equation}
E( \mathsf{A}) = \min_{U, \ket{\phi} } \max_\theta \left| \bra{\phi} \mathsf{A}(\theta)^\dagger U \ket{\phi} \right|,
\end{equation}
where $\mathsf{A}(\theta)$ and $\theta$ are the parametrized circuit and its parameters, $U$ is an arbitrary unitary, and $\ket{\phi}$ is an arbitrary pure quantum state.
To understand this measure, let us consider two special cases. First, when $\mathsf{A}(\theta)$ can recover any unitary, we can choose $\theta$ such that $\mathsf{A}(\theta) = U$ and hence $E( \mathsf{A}) =1$. Second, when $\mathsf{A}(\theta)$ is a fixed unitary, i.e., $\theta$ is empty, for an arbitrary $\ket{\phi}$, we can select a unitary $U$ such that $\mathsf{A}(\theta)\ket{\phi}$ and $U\ket{\phi}$ are orthogonal quantum states, hence $E( \mathsf{A}) =0$. Due to its close relationship with fidelity, we call this expressivity measure \emph{fidelity-based expressivity} (FBE). Compared with other expressivity measures of QNNs, such as covering-number-based expressivity (CNBE) \cite{du2022efficient}, FBE has the advantages of having no extra parameters (CNBE has a parameter $\epsilon$), and having the range $[0,1]$ (CNBE is not upper bounded by a constant). In addition, FBE can be computed through continuous optimization, while some measures such as CNBE require discrete optimization, which is usually harder.
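Numerically, the inner maximization in the FBE definition can be estimated on a parameter grid. The sketch below uses a deliberately weak single-qubit $R_z$ ansatz (our own illustrative choice) to show how an adversarial pair $(U,\ket{\phi})$ drives the measure to zero, mirroring the second special case above.

```python
import numpy as np

def inner_max_overlap(A_of_theta, U, phi, thetas):
    """Grid estimate of max_theta |<phi| A(theta)^dagger U |phi>|,
    the inner maximization of the min-max FBE definition."""
    best = 0.0
    for t in thetas:
        amp = phi.conj() @ (A_of_theta(t).conj().T @ (U @ phi))
        best = max(best, abs(amp))
    return best

# A(theta): rotation about z only -- a deliberately inexpressive ansatz.
Rz = lambda t: np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])
X = np.array([[0, 1], [1, 0]], dtype=complex)
phi = np.array([1, 0], dtype=complex)  # |0>

# For the adversarial pair (U = X, |phi> = |0>) the overlap vanishes for
# every theta, so the FBE of the Rz family is 0.
bound = inner_max_overlap(Rz, X, phi, np.linspace(0, 2 * np.pi, 100))
```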
We now compare the expressive power of the DIBoM and hardware-efficient QNN through FBE. Specifically, we consider a three-qubit learning task and plot the FBE of the DIBoM and hardware-efficient QNN with $L$ layers as a function of $L$. (More details of the plot can be found in Appendix~\ref{appsec:fbe}.) Recall that the hardware-efficient QNN consists of alternating layers of single-qubit rotations and fixed CZ gates, while the DIBoM consists of alternating layers of single-qubit rotations and generalized CZ gates. Our results, as shown in Fig.~\ref{fig:expressivity}, indicate that the DIBoM outperforms the hardware-efficient QNN by a substantial margin when the layer number is the same. For instance, with 21 layers, the DIBoM achieves an FBE exceeding 0.77 (where higher values indicate superior performance), whereas the hardware-efficient QNN achieves only around 0.57. (Note however that DIBoM has more parameters than the hardware-efficient QNN with the same number of layers and hence this does not imply that DIBoM is strictly superior to the hardware-efficient QNN.)
Finally, the Ising Born machine, which corresponds to a DIBoM with $L=3$, exhibits significantly lower expressivity than the DIBoM with 21 layers, as shown in the figure.
\begin{figure}
\caption{ Fidelity-based expressivity (FBE) of the DIBoM and hardware-efficient QNN as a function of the layer number $L$. }
\label{fig:expressivity}
\end{figure}
The use of FBE also allows for a quantitative evaluation of the balance between efficiency and expressivity in DIBoMs. In Fig.~\ref{fig:balance}, we present such an evaluation for a 3-qubit DIBoM. Efficiency is quantified as the logarithm of the number of parameters, while expressivity is measured using FBE. The endpoints of the curve are obtained through theoretical analysis, where a 0-parameter DIBoM corresponds to an FBE value of 0 and a 13449-parameter DIBoM corresponds to an FBE value of 1. (The proof for the latter fact can be found in Appendix~\ref{appsec:3qubit}.) The remaining data points are computed numerically. The quality of the balance is characterized by the area of the purple region, with a smaller area indicating a better balance. It is worth noting that for some QNN architectures such as the hardware-efficient QNN, this area is not even guaranteed to be finite. In the figure, we show that through DIBoM, this area can be made finite. An interesting question for future work is how to achieve the smallest possible area of the purple region by optimizing over different QNN architectures.
\begin{figure}
\caption{ Quantitative characterization of the balance between efficiency and expressivity for a 3-qubit DIBoM. Here, the $y$ axis stands for efficiency which is measured as the logarithm of the number of parameters, and the $x$ axis stands for expressivity which is measured using fidelity-based expressivity (FBE).}
\label{fig:balance}
\end{figure}
\subsection{Comparisons with other models}
\label{sec:theocompare}
With the theoretical properties of the DIBoM at hand, we are ready to compare the DIBoM with other QNNs theoretically. First, we compare it to a basic QNN, as defined by Eq.~\eqref{eq:generalQNN}. The DIBoM has the advantage that its number of parameters is quadratic in the number of qubits, while a basic QNN has an exponential number of parameters.
Second, we compare it to a dissipative QNN \cite{beer2020training}. In the case that the input and output have the same quantum dimension, the DIBoM uses only half the number of qubits required by a dissipative QNN. In addition, the number of parameters that a DIBoM uses is exponentially smaller than that of a dissipative QNN.
\section{Empirical evaluation setup}
\label{sec:simulationsetup}
To further investigate the properties of the DIBoM architecture, we conduct an extensive empirical evaluation of the DIBoM in this and the following sections. In this section, we present the setup of the evaluation, while the results of the evaluation are presented in the next section.
The setup consists of two parts. In the first part, we describe the synthetic dataset that is used in the evaluation of the DIBoM. In the second part, we describe the classical simulation of the training of the DIBoM.
\subsection{Dataset}
\label{sec:data}
We start with the construction of the synthetic datasets used in the empirical evaluation.
The samples in each dataset are of the form $\ket{\phi_x^{in}}$ where $x=1,\dots,N$ and
$N$ is the number of samples.
Each sample is associated with a corresponding label $\ket{\phi_x^{out}}=V\ket{\phi_x^{in}}$ where $x=1,\dots,N$. The unitary $V$ is referred to as the \emph{intrinsic unitary} of the data and is hidden from the training models.
If not otherwise specified, the samples are 2-qubit states and chosen randomly.
In our main experiment, we generate a total of 20 samples, which are randomly divided into equal-sized training and test sets (50:50).
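A minimal sketch of this dataset construction is given below, assuming a QR-based random unitary as the hidden $V$ (the QR decomposition of a complex Gaussian matrix yields an exactly unitary $Q$ that is approximately Haar-distributed; an exact Haar sample would additionally fix the phases of the $R$ factor).

```python
import numpy as np

def haar_state(dim, rng):
    """Random pure state: complex Gaussian vector, normalized."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def make_dataset(V, num_samples, rng):
    """Samples |phi_x_in> with labels |phi_x_out> = V |phi_x_in>."""
    dim = V.shape[0]
    xs = [haar_state(dim, rng) for _ in range(num_samples)]
    ys = [V @ x for x in xs]
    return xs, ys

rng = np.random.default_rng(0)
# A hypothetical 2-qubit intrinsic unitary, hidden from the training model.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V, _ = np.linalg.qr(A)
train_x, train_y = make_dataset(V, num_samples=10, rng=rng)
```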
\subsection{Classical simulation of training}
\label{sec:simultaneous}
With the dataset in place, we next describe how to evaluate the performance of a DIBoM on the dataset.
Due to the lack of a quantum computer, we classically simulate the
training procedure of the DIBoM and examine the result.
In the following, we describe the classical simulation of the network training for a DIBoM.
Let $L+1$ be the number of layers in the network, where layer 0 is the input layer and layer $L$ the output layer. The transition from layer $l-1$ to layer $l$ is given by
\begin{eqnarray}
\rho_x^l(s) &=& U^l (s) \rho_x^{l-1}(s) {U^l}^\dagger(s) ,
\end{eqnarray}
where $s$ is any parameter of the model (such as $\alpha_j$ or $\beta_{jk}$) and $U^l (s)$ is the unitary in layer $l$.
The loss function is computed as $\mathcal{L}(s)=1-C(s)$, where
\begin{equation}
C(s) = \frac{1}{N} \sum \limits_{x=1}^N \bra{\phi_x^{L}} \rho_x^{L}(s) \ket{\phi_x^{L}},
\end{equation}
with $N$ being the number of data points, $\rho_x^{L}(s)$ denoting the output state of the network, and $\ket{\phi_x^L}$ denoting the label.
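In code, this cost and loss read as follows; here \texttt{rhos} are the simulated network output density matrices and \texttt{labels} the target state vectors (the variable names are ours).

```python
import numpy as np

def cost(rhos, labels):
    """C = (1/N) * sum_x <phi_x| rho_x |phi_x>  (average fidelity)."""
    N = len(rhos)
    return sum(np.real(lbl.conj() @ rho @ lbl)
               for rho, lbl in zip(rhos, labels)) / N

def loss(rhos, labels):
    return 1.0 - cost(rhos, labels)

# Perfect agreement: each output is the projector onto its label.
lbl = np.array([1, 0], dtype=complex)
perfect = loss([np.outer(lbl, lbl.conj())], [lbl])  # -> 0 up to rounding
```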
In each iteration, the unitaries in the network are updated by
$U^l (s+\epsilon) = e^{i\epsilon K^l(s)} U^l(s)$.
Therefore, the network training is equivalent to obtaining $K^l(s)$ in each iteration.
To this end, we first calculate the derivative of $C$ with respect to the parameter $s$, which is
\begin{equation}
\frac{dC}{ds} = \lim\limits_{\epsilon \to 0} \frac{ C(s+\epsilon)-C(s)}{ \epsilon},
\end{equation}
where $\epsilon$ plays the role of a small step size.
To evaluate this derivative, we first obtain the expression of $C(s+\epsilon)$. For the parameter $s+\epsilon$, the input quantum state stays unchanged as
$\rho_x^{0} (s+\epsilon)= \rho_x^{0} = \ket{\phi_x^{0}}\bra{\phi_x^{0}}$.
The quantum output, by the composition of layers, is however changed and can be expressed as
\begin{eqnarray*}
\rho^{L}_x (s+\epsilon) &= &\prod_{l=L}^1 e^{i\epsilon K^{l}(s)} U^{l}(s) \rho_x^{0} \prod_{l=1}^{L} {U^{l}}^\dagger (s) e^{-i\epsilon K^{l}(s)} . \nonumber
\end{eqnarray*}
We then substitute the updated quantum output corresponding to the parameter $s+\epsilon$ into the derivative of $C$, obtaining
\begin{equation}
\frac{dC}{ds} = \frac{i}{N} \sum\limits_x \mathrm{tr} ( \sum_{l=L}^1 M^{l}(s) K^{l}(s) ),
\label{eq:derivative}
\end{equation}
where $N$ is the number of samples, $\mathrm{tr}$ denotes the trace operation, and $M^{l}(s)$ is defined as
\begin{eqnarray*}
M^{l}(s)& =& [ \prod_{j=l}^1 U^{j}(s) \rho_x^0 \prod_{j=1}^l {U^{j}}^\dagger (s) , \\
&& \quad \prod_{j=l+1}^L{U^{j}}^\dagger (s) \ket{\phi_x^L}\bra{\phi_x^L} \prod_{j=L}^{l+1} U^{j} (s) ].
\end{eqnarray*}
Here, $[\cdot,\cdot]$ denotes the commutator operation. The derivation of Eq.~\eqref{eq:derivative} is given in Appendix \ref{appsec:derivation}.
To maximize the increase of $C$, $K^l(s)$ should be chosen such that $dC/ds$ is maximized. For this purpose, we consider $K^l(s)$ which corresponds to a general $n$-qubit unitary $U^l(s)$.
To avoid overfitting, we impose regularization on the parameters.
Specifically, we regularize the parameters $K_{\alpha_1, \cdots, \alpha_n}^l(s)$ which are defined as
\begin{eqnarray}
K^l(s) &=& \sum \limits_{\alpha_1, \cdots, \alpha_{n}} K_{\alpha_1, \cdots, \alpha_{n}}^l(s) (\otimes_{k=1}^n \sigma^{\alpha_k }).
\end{eqnarray}
Hence, the combined objective to be maximized (which rewards a large derivative of $C$ while penalizing large changes of the network parameters) is
\begin{eqnarray}
C_2 &=& \frac{dC}{ds} - \lambda \sum\limits_{\alpha_1, \cdots, \alpha_n} \left( K_{\alpha_1, \cdots, \alpha_n}^l(s) \right)^2 \nonumber \\
&=& \frac{i}{N} \sum\limits_x \mathrm{tr}( \sum_{l=L}^1 M^{l}(s) K^{l}(s) ) - \lambda \sum\limits_{\alpha_1, \cdots, \alpha_n} \left( K_{\alpha_1, \cdots, \alpha_n}^l(s) \right)^2 \nonumber \\
&=& \frac{i}{N} \sum\limits_x \mathrm{tr}_{\alpha_1, \dots, \alpha_n} (\mathrm{tr}_{rest}( \sum_{l=L}^1 M^{l}(s) K^{l}(s) )) \nonumber \\
&& - \lambda \sum\limits_{\alpha_1, \cdots, \alpha_n} \left( K_{\alpha_1, \cdots, \alpha_n}^l(s) \right)^2.
\end{eqnarray}
To maximize $C_2$, we calculate its derivative with respect to $ K_{\alpha_1, \cdots, \alpha_n}^l(s)$ as
$i \sum_x \mathrm{tr}_{\alpha_1, \cdots, \alpha_n} (\mathrm{tr}_{rest}( M^{l}(s) ) (\otimes_{k=1}^n \sigma^{\alpha_k }) )/N - 2\lambda K_{\alpha_1, \cdots, \alpha_n}^l(s)$.
By setting it to 0 and solving for $K_{\alpha_1,\cdots,\alpha_n}^l(s)$, we obtain
\begin{equation*}
K_{\alpha_1, \cdots, \alpha_n}^l(s)= \frac{i}{2N\lambda} \sum\limits_x \mathrm{tr}_{\alpha_1, \cdots, \alpha_n} (\mathrm{tr}_{rest}( M^{l}(s) )(\otimes_{k=1}^n \sigma^{\alpha_k }) ).
\end{equation*}
There is a caveat: $C_2$ may remain negative for every choice of $K^l(s)$, in which case $s$ should not be updated. To ensure that the solution of $dC_2/dK_{\alpha_1,\dots,\alpha_n}^l=0$ results in an increase in $C$, we explicitly check the value of $C$ before updating $s$.
We substitute the obtained value of $K_{\alpha_1, \dots, \alpha_n}^l(s)$ back into $K^l(s)$ and obtain
\begin{eqnarray}
K^l(s)
&=& \frac{i}{2N\lambda} \sum \limits_{\alpha_1, \cdots, \alpha_n} \sum\limits_x \mathrm{tr}_{\alpha_1, \cdots, \alpha_n} (\mathrm{tr}_{rest}( M^{l}(s) ) \nonumber \\
&& (\otimes_{k=1}^n \sigma^{\alpha_k }) ) (\otimes_{k=1}^n \sigma^{\alpha_k }) \nonumber \\
&=& i\, 2^{n-1} \sum\limits_x \mathrm{tr}_{rest}( M^{l}(s) ) / (N\lambda),
\end{eqnarray}
where the factor $2^{n-1}$ follows from $\mathrm{tr}(\sigma^\alpha \sigma^\beta)=2\delta_{\alpha\beta}$ on each of the $n$ qubits, and reduces to $1$ in the single-qubit case below.
Finally the unitary $U^l$ is updated by
\begin{equation*}
U^l (s+\epsilon) = \exp(-\epsilon\, 2^{n-1} \sum\limits_x \mathrm{tr}_{rest}( M^{l}(s) ) / (N\lambda)) U^l(s).
\end{equation*}
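A single update step for a full-width unitary can be sketched as below. The prefactor $2^{n-1}$ follows from the Pauli-basis normalization and agrees with the single-qubit special case; the function names and the toy test setup are our own illustrative choices.

```python
import numpy as np

def expm_i(H):
    """Unitary exp(i*H) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w)) @ V.conj().T

def update_unitary(U_layers, l, inputs, labels, eps, lam):
    """One ascent step on C for the l-th layer (a full n-qubit unitary),
    using K^l = i * 2^(n-1) * sum_x M^l(s) / (N * lam), where M^l(s) is
    the commutator of the forward-propagated input and the back-propagated
    label projector.
    """
    N = len(inputs)
    dim = U_layers[0].shape[0]
    n = int(round(np.log2(dim)))
    M = np.zeros((dim, dim), dtype=complex)
    for rho0, lbl in zip(inputs, labels):
        fwd = rho0                        # propagate input through layers 1..l
        for U in U_layers[:l]:
            fwd = U @ fwd @ U.conj().T
        back = np.outer(lbl, lbl.conj())  # pull label back through layers L..l+1
        for U in reversed(U_layers[l:]):
            back = U.conj().T @ back @ U
        M += fwd @ back - back @ fwd      # commutator [fwd, back]
    K = 1j * 2 ** (n - 1) * M / (N * lam)   # K is Hermitian since M is anti-Hermitian
    U_layers[l - 1] = expm_i(eps * K) @ U_layers[l - 1]
    return U_layers
```

As a sanity check, one step on a single-qubit network initialized at the identity, with input $\ket{0}\bra{0}$ and label $\ket{+}$, strictly increases the cost $C$.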
Now we consider the specific unitaries used by the DIBoM which can be categorized into three cases:
\begin{enumerate}
\item The first case is $U_{SG}^j$, which is a single-qubit unitary on qubit $j$. The corresponding $K$ for this unitary is
$K^l(s) = \sum_{\alpha=0}^3 K^l_\alpha(s) \sigma^\alpha$.
To obtain the optimal $K^l_\alpha(s)$, we set $d C_2/d K^l_\alpha(s)=0$ and obtain
\begin{equation}
K^l_\alpha(s)= \frac{i}{2N\lambda} \sum\limits_x \mathrm{tr}_{j} (\mathrm{tr}_{rest}( M^{l}(s) ) \sigma^{\alpha } ),
\end{equation}
where ``rest'' denotes qubits other than qubit $j$.
Substituting the expression back into $K^l(s) $, we have
\begin{eqnarray}
K^l(s)
&=& \frac{i}{2N\lambda} \sum \limits_{\alpha} \sum\limits_x \mathrm{tr}_{j} (\mathrm{tr}_{rest}( M^{l}(s) ) \sigma^{\alpha } ) \sigma^{\alpha } \nonumber \\
&=& i \sum\limits_x \mathrm{tr}_{rest}(M^{l}(s) )/(N\lambda).
\end{eqnarray}
Therefore the unitary is updated as $U^l (s+\epsilon) = \exp( i \epsilon K^l(s) ) U^l(s)$.
\item The second case is a product of single-qubit gates $U_{SG}^\otimes$, the corresponding $K$ of which has the form
\begin{equation}
K^l(s) = \sum_{j=1}^n \sum_{\alpha=0}^3 K^l_{j,\alpha}(s) \sigma^\alpha_j.
\end{equation}
By letting $d C_2/d K^l_{j,\alpha}(s)=0$, we obtain
\begin{equation}
K^l_{j,\alpha}(s)= \frac{i}{2N\lambda} \sum\limits_x \mathrm{tr}_{j} (\mathrm{tr}_{[n]\backslash\{j\}}( M^{l}(s) ) \sigma^{\alpha }_j ),
\end{equation}
where ${[n]\backslash\{j\}}$ refers to all qubits except qubit $j$.
Substituting this expression back into $K^l(s) $, we have
\begin{eqnarray}
K^l(s)
&=& \frac{i}{2N\lambda} \sum\limits_{j=1}^n\sum \limits_{\alpha} \sum\limits_x \mathrm{tr}_{j} (\mathrm{tr}_{[n]\backslash\{j\}}( M^{l}(s) ) \sigma^{\alpha }_j ) \sigma^{\alpha }_j \nonumber \\
&=& i \sum\limits_{j=1}^n\sum\limits_x \mathrm{tr}_{[n]\backslash\{j\}}( M^{l}(s) )/ (N\lambda).
\end{eqnarray}
Therefore the unitary is updated as
$U^l (s+\epsilon) = \exp( i \epsilon K^l(s) ) U^l(s)$.
\item The third case is the collection of generalized CZ gates on all pairs of qubits $U_{CZ}$. Its corresponding $K$ is
$K^l(s) = \sum_{1\le j<k\le n} K^l_{jk}(s) \ket{11}_{jk}\bra{11}$.
By setting $d C_2/d K^l_{jk}(s) =0$, we obtain
\begin{equation}
K^l_{jk}(s)= \frac{i}{2N\lambda} \sum\limits_x \mathrm{tr}_{j,k}(\mathrm{tr}_{[n]\backslash\{j,k\}}( M^{l}(s) ) \ket{11}_{jk}\bra{11} ),
\end{equation}
where ${[n]\backslash\{j,k\}} $ refers to the set of qubits excluding qubits $j$ and $k$.
Substituting this expression back into $K^l(s)$, we have
\begin{eqnarray*}
K^l(s) &=& \frac{i}{2N\lambda} \sum \limits_{j,k} \sum\limits_x \mathrm{tr}_{j,k}(\mathrm{tr}_{[n]\backslash\{j,k\}} ( M^{l}(s) ) \ket{11}_{jk} \nonumber \\
&&\bra{11} ) \ket{11}_{jk}\bra{11} \nonumber \\
&=& i [\sum\limits_x \sum \limits_{j,k} \mathrm{tr}_{j,k}(\mathrm{tr}_{[n]\backslash\{j,k\}} ( M^{l}(s) ) \ket{11}_{jk}\bra{11} ) \nonumber \\
&& \ket{11}_{jk}\bra{11}] / (2 N\lambda).
\end{eqnarray*}
Therefore the unitary is updated as $U^l (s+\epsilon) = \exp( i \epsilon K^l(s) ) U^l(s)$.
\end{enumerate}
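For the third case, the operator $K^l(s)=\sum_{j<k}K^l_{jk}(s)\ket{11}_{jk}\bra{11}$ is diagonal in the computational basis, which makes it cheap to build and exponentiate. A sketch follows, with our own bit-ordering convention (qubit 0 as the most significant bit).

```python
import numpy as np

def proj_11(n, j, k):
    """Diagonal of the n-qubit projector |11><11| on qubits j, k
    (qubit 0 = most significant bit of the basis-state index)."""
    d = np.zeros(2 ** n)
    for b in range(2 ** n):
        if (b >> (n - 1 - j)) & 1 and (b >> (n - 1 - k)) & 1:
            d[b] = 1.0
    return d

def K_cz(n, coeffs):
    """K = sum_{j<k} K_jk |11><11|_{jk} as a diagonal matrix;
    coeffs maps pairs (j, k) with j < k to the coefficients K_jk."""
    diag = np.zeros(2 ** n)
    for (j, k), c in coeffs.items():
        diag += c * proj_11(n, j, k)
    return np.diag(diag)

K = K_cz(3, {(0, 1): 0.3, (1, 2): -0.1})
```

Since $K$ is diagonal, $e^{i\epsilon K}$ is simply a layer of conditional phases, i.e.\ a generalized CZ layer.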
Three final remarks are in order. First, the hyperparameter $\lambda$ is set to $0.5$ in the simulation unless otherwise stated. Second, the classical simulation of the gradient descent for each parameter $s$ in layer-by-layer training is identical to the simultaneous training method. Third, the classical simulation for training controlled unitaries $V_j$ is similar, and the specifics are deferred to Appendix~\ref{app:simulate_control}.
\section{Empirical evaluation results}
\label{sec:simulationresult}
Having presented the simulation setup, this section proceeds to present the empirical evaluation results. We first empirically compare the performance of DIBoM with other QNNs in Sec.~\ref{sec:trainperform}. Next, we conduct an ablation study on the DIBoM in Sec.~\ref{sec:ablation} to investigate the individual components of the model. Then in Sec.~\ref{sec:sensitivity}, we assess the sensitivity of the performance of the DIBoM to its various parameters. Additionally, in Sec.~\ref{sec:robustness}, we analyze the robustness of the DIBoM to noise. Finally, in Sec.~\ref{sec:barren}, we mitigate the barren plateau issue in the training of the DIBoM. Some auxiliary details pertaining to the construction of the datasets used in this section are presented in Appendix~\ref{appsec:detail}.
\subsection{Performance comparison}
\label{sec:trainperform}
To begin, we analyze the training performance of the DIBoM and compare it to that of other models, considering both the converged loss and the model complexity. Additionally, we explore two different training methods and evaluate the gap between training and test performance. Moreover, we plot the optimization landscape of the DIBoM and dissipative QNN to gain a deeper understanding of their respective training processes.
We first test the simultaneous training and layer-by-layer training methods and display four training results with different datasets in Fig.~\ref{fig:plotHybrid}. The results indicate that the model's loss converges to 0. Comparing the two training methods, we observe that layer-by-layer training consistently performs worse than simultaneous training. Thus, we will solely utilize simultaneous training in future simulations.
\begin{figure}
\caption{Four learning curves for a DIBoM with two training methods.
}
\label{fig:plotHybrid}
\end{figure}
We next examine the DIBoM's prediction accuracy on the test set and assess its gap with the training accuracy, as depicted in Fig.~\ref{fig:compare}. Notably, the training loss and test loss are nearly identical, with the test loss occasionally being smaller than the training loss. This suggests that statistical fluctuations rather than generalization errors may cause the deviation between the training and test losses. Given the close proximity of the two losses, we will exclusively evaluate the test loss in subsequent simulations.
\begin{figure}
\caption{Four training and test learning curves for a DIBoM.
}
\label{fig:compare}
\end{figure}
We then compare the DIBoM with three other QNNs: a hardware-efficient QNN \cite{mcclean2018barren}, a dissipative QNN \cite{beer2020training} and an Ising Born machine \cite{coyle2020born}. In the simulation, we set the number of qubits to be 2, the DIBoM to contain five alternating layers of tunable single-qubit gates and tunable two-qubit gates, the hardware-efficient QNN to have the same structure as the DIBoM but with all tunable two-qubit gates replaced by CZ gates, and the Ising Born machine to contain a layer of tunable two-qubit gates followed by a layer of tunable single-qubit gates.
The intrinsic unitary $V$ of this simulation is restricted to have the same structure as the DIBoM but with unknown parameters. We compare the models based on two criteria. First, we compare them in terms of their performance, as shown in Fig.~\ref{fig:baseline}. We observe that both the dissipative QNN and the DIBoM reach zero loss while the hardware-efficient QNN and the Ising Born machine do not. This could be attributed to the limited expressive power of the hardware-efficient QNN and the Ising Born machine. We also observe that the dissipative QNN converges faster than the DIBoM. Second, we compare them in terms of the number of
model parameters in Fig.~\ref{fig:paramCompare}. We observe that the
dissipative QNN requires significantly more parameters than the other three models. As the number of qubits increases, the ratio of the number of parameters of the DIBoM to that of the dissipative QNN tends to 0. Hence, there is a tradeoff between performance and the number of parameters.
\begin{figure}
\caption{Comparison of a DIBoM with a dissipative QNN, a hardware-efficient QNN and an Ising Born machine in terms of the performance for a structured unitary $V$.
}
\label{fig:baseline}
\end{figure}
\begin{figure}
\caption{Comparison of a DIBoM with a dissipative QNN, a hardware-efficient QNN and an Ising Born machine in terms of the number of model parameters.
}
\label{fig:paramCompare}
\end{figure}
To investigate why the DIBoM and dissipative QNN can achieve zero loss, we plot their optimization landscapes with two parameters varying and all other parameters fixed.
The result of the DIBoM is shown in Fig.~\ref{fig:landscape}, where one parameter to be changed is from the single-qubit unitary gate (parameter 1) and the other parameter to be changed is from the generalized CZ gate (parameter 2). Despite the highly non-convex landscape, all local minima are global minima, explaining why the DIBoM can always converge to zero loss.
The result of the dissipative QNN is shown
in Fig.~\ref{fig:dissipative_landscape}. It can be seen that there is no flat region in the landscape and
this explains the fast convergence of a dissipative network. However, there exists a local minimum in the middle of the figure which
does not coincide with the global minimum. Hence, whether dissipative networks can always be trained to reach a global minimum requires further investigation.
\begin{figure}
\caption{Optimization landscape of a DIBoM with two parameters varying and other parameters fixed.
}
\label{fig:landscape}
\end{figure}
\begin{figure}
\caption{Optimization landscape of a dissipative QNN with two parameters varying and other parameters fixed.
}
\label{fig:dissipative_landscape}
\end{figure}
To facilitate a performance comparison between the DIBoM and other models, we have assumed that the hidden unitary $V$ has the same structure as the DIBoM, but with unknown parameters. To further evaluate the DIBoM's performance, we conduct a test where $V$ is a random unitary, with $n=3$ qubits and $10$ layers for both the DIBoM and hardware-efficient QNN. The results are shown in Fig.~\ref{fig:baseline_random}. The DIBoM outperforms the hardware-efficient QNN and the Ising Born machine but has lower accuracy than the dissipative QNN. In return, the number of parameters in a DIBoM scales only quadratically with $n$, whereas the dissipative QNN has exponential scaling.
\begin{figure}
\caption{Comparison of a DIBoM with a dissipative QNN, a hardware-efficient QNN and an Ising Born machine in terms of the performance for a random unitary $V$.
}
\label{fig:baseline_random}
\end{figure}
\subsection{Ablation study}
\label{sec:ablation}
After analyzing the overall performance of the DIBoM architecture, we next alter individual components of the DIBoM to better understand the contribution of each component to its performance.
First, we investigate the case that the DIBoM consists of a layer of one single-qubit gate ($U_{SG}^2$), which acts exclusively on the second qubit. The training result is plotted in Fig.~\ref{fig:plotSG}, where it is evident that the fidelity of the model output reaches 1 after sufficient training. Moreover, it converges rapidly, reaching zero loss by the 14th iteration. To delve deeper into the optimization process and understand why the optimization of the single-qubit unitary does not get trapped in a local minimum, we plot the loss function as a function of two parameters while keeping all other parameters fixed. Both parameters are from the single-qubit unitary, and the resulting plot is depicted in Fig.~\ref{fig:landscape2}. Despite the highly non-convex optimization landscape, it is noticeable that all local minima have approximately the same value, explaining the achievement of zero loss during training of a single-qubit unitary.
\begin{figure}
\caption{Learning curve of a variant of the DIBoM ($U_{SG}^2$).}
\label{fig:plotSG}
\end{figure}
\begin{figure}
\caption{Optimization landscape of a variant of the DIBoM ($U_{SG}^2$) with two parameters varying and other parameters fixed.}
\label{fig:landscape2}
\end{figure}
Next we examine the case that the DIBoM consists of one layer of generalized CZ gates ($U_{CZ}$). As displayed in Fig.~\ref{fig:plotCZ}, the fidelity of the model output approaches unity after sufficient training, similar to the case of single-qubit gates. Note however that the initial loss for the generalized CZ gate case is lower than that of the single-qubit gate case. This observation suggests that a generalized CZ gate is more rigid than a single-qubit gate, implying a narrower range of variation in the quantum output induced by the former. Note also that convergence for the generalized CZ gate case occurs around the 50th iteration, which is slower than the convergence rate for the single-qubit gate case, indicating that the former is harder to train than the latter.
\begin{figure}
\caption{Learning curve for a variant of the DIBoM ($U_{CZ}$).}
\label{fig:plotCZ}
\end{figure}
Then we investigate the DIBoM ($U_{CZ} U_{SG}^2 $) where one of the layers is fixed. For the case that the second layer is fixed, the training result is shown in Fig.~\ref{fig:fixedsecond}. As illustrated, the training converges to zero loss and the convergence is quite fast, reaching the optimal loss at the 40th iteration. Conversely, for the case that the first layer of the DIBoM is fixed while the second layer is to be trained, the convergence is slower, as evidenced by the training results shown in Fig.~\ref{fig:fixedfirst}. This slower convergence could be attributed to the flatter optimization landscape of the generalized CZ gate. Nonetheless, the training also eventually converges to zero loss.
\begin{figure}
\caption{Learning curve of a DIBoM ($U_{CZ} U_{SG}^2$) with the second layer fixed.}
\label{fig:fixedsecond}
\end{figure}
\begin{figure}
\caption{Learning curve of a DIBoM ($U_{CZ} U_{SG}^2$) with the first layer fixed.}
\label{fig:fixedfirst}
\end{figure}
Next we examine a network comprising a layer of single-qubit gates on all qubits followed by a layer of generalized CZ gates ($U_{CZ} U_{SG}^\otimes $). Here, $U_{SG}^\otimes$ is an $n$-qubit gate that can be decomposed as a tensor product of parameterized single-qubit gates, potentially featuring varying parameters.
We refer to the former layer as a \emph{product gate} layer.
Figure~\ref{fig:product_gate} compares the performance of the product gate case to the single gate setting.
It can be seen that the product gate case converges to a zero loss while the single gate case does not, implying that the product gate case has more expressive power.
\begin{figure}
\caption{The performance comparison between the product-gate variant of the DIBoM ($U_{CZ} U_{SG}^\otimes$) and the single-gate variant ($U_{CZ} U_{SG}^2$).}
\label{fig:product_gate}
\end{figure}
An ablation study of generalized CZ gates is then conducted by comparing the following two models. The first model, denoted as ``generalized CZ'', utilizes DIBoM with 3 layers ($U_{SG}^\otimes U_{CZ} U_{SG}^\otimes$). The second model, denoted as ``normal CZ'', is almost identical to the first model, except that all the generalized CZ gates in the second layer are replaced with normal CZ gates. The study's results are presented in Fig.~\ref{fig:ab_GCZ}, which indicates that the first model achieves a much lower loss than the second model. This finding suggests the strong effectiveness of generalized CZ gates.
\begin{figure}
\caption{Performance comparison between a DIBoM and an ablated DIBoM where generalized CZ gates are replaced by normal CZ gates. }
\label{fig:ab_GCZ}
\end{figure}
To evaluate the effect of the controlled unitary $V_j$, we conduct an ablation study. For this purpose, we modify the training dataset to have different input and output dimensions. Specifically, we construct the training dataset as $\{( \ket{\psi_i}\otimes \ket{0} \otimes \ket{0} , \ket{\psi_i})\}_{1\le i \le N}$, where $\ket{\psi_i}$ represents a pure qubit state generated randomly according to the Haar measure. The objective of the model is to transform the quantum sample such that the third qubit approximates the quantum label as closely as possible.
We denote the complete DIBoM model as \emph{with control}. The model first applies a three-qubit unitary $U$, measures the first two qubits in the computational basis, and then uses the measurement result $j$ to perform the controlled unitary $V_j$ on the third qubit. On the other hand, the ablated version, denoted as \emph{without control}, includes the same first two components but lacks the final component. More precisely, the unitary $U$ is composed of three layers $U_{SG}U_{CZ}U_{SG}$, and there are four single-qubit unitaries $V_j$ ($0\le j \le 3$) to be optimized.
The results are illustrated in Fig.~\ref{fig:abcontrol}, which indicates that the complete DIBoM model achieves significantly lower loss than the ablated version. This suggests that the complete DIBoM model can effectively learn the quantum teleportation protocol from scratch, while the ablated version lacks this capability.
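A minimal state-vector sketch of the \emph{with control} pipeline (our own illustrative code; the three-qubit unitary $U$ and the four single-qubit unitaries $V_j$ are placeholders for the trained gates):

```python
import numpy as np

def with_control(U, Vs, psi):
    """Apply the 3-qubit unitary U to |psi>|0>|0>, measure qubits 1-2 in
    the computational basis, then apply V_j to qubit 3 on outcome j.
    Returns the outcome-averaged density matrix of the third qubit."""
    state = U @ np.kron(psi, np.array([1, 0, 0, 0], dtype=complex))
    rho = np.zeros((2, 2), dtype=complex)
    for j in range(4):
        # Unnormalized branch of qubit 3 given outcome j on qubits 1-2;
        # its squared norm is the outcome probability.
        branch = state.reshape(4, 2)[j]
        rho += Vs[j] @ np.outer(branch, branch.conj()) @ Vs[j].conj().T
    return rho
```

The \emph{without control} ablation amounts to replacing every $V_j$ above by the identity, which removes the model's ability to correct the third qubit conditionally on the measurement outcome.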
\begin{figure}
\caption{The performance comparison between a full DIBoM with controlled gates and an ablated DIBoM without controlled gates. }
\label{fig:abcontrol}
\end{figure}
\subsection{Parameter sensitivity}
\label{sec:sensitivity}
So far, we have investigated the performance of the DIBoM and the contributions of its individual components to this performance. For a more comprehensive evaluation of the DIBoM, we conduct experiments to test the effect of different parameters, including the data size, the number of layers, the number of qubits per layer, and the regularization constant.
We begin by examining the relation between the number of training samples $N$ and the performance characterized by the training loss.
The result is shown in Fig.~\ref{fig:sampleTraining}, which clearly demonstrates that the larger the sample size, the
faster the convergence. This is because more samples
help to estimate the intrinsic unitary $V$ better during training.
We next test the relation between the number of samples and the gap between
the training performance and the test performance, as shown in Fig.~\ref{fig:sampleGap}. It can be seen that
as the number of samples increases, the gap gradually becomes smaller.
This is expected as a larger number of samples results in a smaller variance and thus, improved generalization performance.
\begin{figure}
\caption{The training performance of a DIBoM with a varying number $N$ of training samples.
}
\label{fig:sampleTraining}
\end{figure}
\begin{figure}
\caption{The gap between the training and test performance of a DIBoM with $N$ training samples.
}
\label{fig:sampleGap}
\end{figure}
Next, we test the effects of the layer number $L$
on the performance of the product gate variant of the DIBoM ($\cdots U_{SG}^\otimes U_{CZ} U_{SG}^\otimes $), as shown in Fig.~\ref{fig:variousLayer}. It can be seen that
the losses in all cases converge to zero, indicating that the network can scale to many layers without adversely affecting trainability.
\begin{figure}
\caption{The performance of a DIBoM with a different number $L$ of layers.
}
\label{fig:variousLayer}
\end{figure}
Then we test the effect of the number of qubits $n$
in the input and output, again on the product gate variant of the DIBoM ($ U_{CZ} U_{SG}^\otimes$).
As shown in Fig.~\ref{fig:variousqubit},
for all cases from $2$ qubits to $4$ qubits, the DIBoM can train well, with zero loss convergence.
However, the case of $n=5$ does not perform as well, with the loss failing to reach zero. We therefore run four additional simulations for $n=5$, displayed in Fig.~\ref{fig:n=5}.
It is evident that although the loss function eventually converges to zero loss, this requires a larger number of iterations. Moreover, the training curve displays both slow-varying and fast-varying regions, a phenomenon that is already observable in the case of $n=4$, but is more pronounced when $n=5$.
\begin{figure}
\caption{The performance of a DIBoM with a different number of qubits $n$ per layer.
}
\label{fig:variousqubit}
\end{figure}
\begin{figure}
\caption{Four learning curves of a DIBoM with 5 qubits per layer.
}
\label{fig:n=5}
\end{figure}
In addition, we test the effect of the regularity constraint parameter $\lambda$
on the performance of the DIBoM.
The results, shown in Fig.~\ref{fig:variouslambda}, indicate
that the optimal value of $\lambda$ is 0.5; deviating from this value leads to worse convergence.
\begin{figure}
\caption{The performance of a DIBoM for different values of the regularity-constraint parameter $\lambda$. }
\label{fig:variouslambda}
\end{figure}
Finally, we assess the ability of a DIBoM ($\cdots U_{SG}^\otimes U_{CZ} U_{SG}^\otimes $) to approximate an arbitrary unitary using 2 to 5 layers, alternating between product gate and generalized CZ gate layers. The results, presented in Fig.~\ref{fig:full_unitary}, demonstrate that the converged loss decreases as the number of layers increases. Notably, with only 5 layers, the DIBoM already achieves a low loss.
\begin{figure}
\caption{The performance of a DIBoM with $L$ layers on a generic quantum learning dataset.
}
\label{fig:full_unitary}
\end{figure}
\subsection{Robustness to noise}
\label{sec:robustness}
We further assess the
robustness of a DIBoM to noise in the data since real-world data inevitably contains noise. To this end, we manually corrupt
some of the training data and evaluate its effect on the test loss. The corruption ratio of the training data varies from $10\%$ to $100\%$, with an uncorrupted dataset, denoted by ``original'', acting as the control. Given a corruption ratio, say $30\%$, we randomly select $30\%$ of real quantum data and replace it with fake data
$( \ket{ \phi^{in}_x},\ket{\phi^{out}_x})$, where $\ket{ \phi^{in}_x}$ and $\ket{\phi^{out}_x}$ are Haar random $n$-qubit pure states.
The results are visualized in Fig.~\ref{fig:corrupt}.
On the negative side, as the corruption ratio increases, the performance gradually degrades, as evidenced by the comparison of solid and dashed lines. On the positive side, even with up to $60\%$ of the data corrupted, the DIBoM remains effective, indicating a high level of robustness against noise.
\begin{figure}
\caption{The performance of a DIBoM with various proportions of the training data corrupted.
}
\label{fig:corrupt}
\end{figure}
We also investigate the noise robustness of the model with respect to varying numbers of layers.
To perform this investigation, we set a fixed corruption ratio of $20\%$ and vary the number of layers, while monitoring the loss after 300 iterations. The results are presented in Fig.~\ref{fig:robustVSlayers}. Notably, our analysis reveals that the noise robustness of the model remains consistent across different numbers of layers. This finding highlights the scalability of the DIBoM and suggests that increasing the number of layers does not negatively impact the noise robustness of the model.
\begin{figure}
\caption{Noise robustness of the DIBoM with different numbers of layers. }
\label{fig:robustVSlayers}
\end{figure}
\subsection{Barren plateau}
\label{sec:barren}
In Sec.~\ref{sec:sensitivity}, we have observed that the model with the global loss function Eq.~\eqref{eq:lossfunction} already suffers from the barren plateau (slow-varying region) issue for $n=5$ qubits.
Previous work by Cerezo et al. \cite{cerezo2021cost} demonstrated that local cost functions can mitigate this issue.
Hence we incorporate the local cost function described in Eq.~\eqref{eq:locallossfunction}.
We defer the details of the classical simulation of the model under the local loss function to Appendix~\ref{appsec:localcost}.
To accommodate the local cost function, we design the training data as a product state $\ket{\phi_{in}}=\ket{\phi_{in}}_1 \otimes \dots \otimes \ket{\phi_{in}}_n$ for $n$ qubits, which we refer to as \emph{product-form training data}.
Notably, a zero local cost function for this data implies a zero global cost function.
We examine the performance of the local cost function by training the model with 2 to 5 qubits, as shown in Fig.~\ref{fig:localcost}. We observe that all learning curves converge to zero loss quickly, suggesting that the barren plateau issue is mitigated. This stands in stark contrast to the global cost function, which exhibits the barren plateau phenomenon when $n=5$.
\begin{figure}
\caption{The performance of a DIBoM with a local cost function. Here, $n$ is the number of qubits.
}
\label{fig:localcost}
\end{figure}
We have also plotted the comparison between local and global cost functions in Fig.~\ref{fig:globalVSlocal}.
The local cost function always reaches zero loss faster than the global cost function. More
importantly, the curve of the local cost function lacks a flat region,
suggesting that the barren plateau phenomenon is mitigated for the DIBoM with a local cost function.
Notably, we observe that the barren plateau issue is also mitigated for a global cost function with product-form training data. This observation suggests that product-form training data may be easier to train than entangled training data.
\begin{figure}
\caption{Comparisons between global and local cost functions with different numbers of qubits $n$. Here, $n$ ranges from 3 to 8. The red line depicts the local cost function while the blue line depicts the global cost function.
}
\label{fig:globalVSlocal}
\end{figure}
Finally, we discuss the computation time required for our simulations.
For the case $n=8$, which is the largest simulation we performed, the computation takes about 1 hour on an 8-core 3.2GHz computer.
The number of parameters for this case is $4n + n(n-1)/2=60$.
It is worth noting that the
computation time is exponential in $n$, which is a consequence of the classical computation requiring multiple multiplications on density matrices of size $2^n \times 2^n$ in each iteration.
Each of these multiplications takes $O(2^{3n})$ time, i.e., $256^3 \approx 1.7 \times 10^7$ scalar operations per multiplication when $n=8$. Furthermore, the number of iterations is polynomially
related to $n$, with approximately 2000 iterations required for $n=8$. Hence, the computation burden on a classical computer is substantial. However, it is important to note that intermediate-scale quantum computers have the potential to substantially decrease computation time to a polynomial of $n$, as the classical manipulation of $2^n \times 2^n$ density matrices is no longer necessary. As such, we expect that current noisy intermediate-scale quantum (NISQ) devices will boost the trainable size to a few hundred qubits.
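The counts quoted above can be checked directly; the per-multiplication cost is the naive dense matrix-product estimate:

```python
def dibom_param_count(n):
    # 4n single-qubit parameters plus n(n-1)/2 pairwise couplings,
    # as quoted in the text for this simulation.
    return 4 * n + n * (n - 1) // 2

assert dibom_param_count(8) == 60
# Naive dense multiplication of 2^n x 2^n matrices: ~2^{3n} scalar operations.
assert 2 ** (3 * 8) == 256 ** 3
```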
\section{Discussion}
\label{sec:discussion}
In summary, we examined a deep Ising Born machine (DIBoM) and showed it has a good balance between efficiency and universality. Specifically, we described its model architecture and its training procedure. Additionally, through theoretical analysis, we demonstrated that the DIBoM has the capability of universal quantum computation. Apart from the theoretical analysis, we empirically evaluated the performance of the DIBoM and compared it with other QNNs. Our evaluations revealed that the DIBoM has a moderate number of parameters while being quite expressive. Along the way, we introduced a new expressivity measure called fidelity-based expressivity, which may be of independent interest.
There are two potential limitations of the DIBoM: trainability and generalizability. Trainability refers to the ability to find the global minimum in a polynomial number of iterations with respect to the number of qubits. In our simulations, we have shown that the DIBoM is trainable for a moderate number of qubits, but it is unclear whether it remains trainable for a large number of qubits, which is beyond the capability of our simulation. Moreover, there is no theoretical guarantee that the DIBoM is trainable, and recent negative results \cite{anschuetz2022quantum} suggest that most shallow and local QNN architectures are not trainable.
Generalizability refers to the ability to achieve low test error given a small training error. In our simulations, we have empirically observed that the DIBoM has low test error, but we have no theoretical guarantee for this fact. When the number of parameters of a QNN is much larger than the number of training data, over-parameterization can cause overfitting, making it difficult to achieve theoretical guarantees of generalizability. This is a challenging problem even for classical neural models.
Therefore, it is crucial to develop QNN architectures that achieve efficiency, universality, provable trainability, and provable generalizability simultaneously. The DIBoM only addresses the first two goals, leaving much room for improvement.
There are a few other promising avenues for future research. First, applying the DIBoM to downstream quantum learning tasks is likely to be both fruitful and interesting. Second, an experimental demonstration of the DIBoM on quantum hardware would be interesting. Third, due to the exponential cost of the classical simulation, a NISQ device may show a speed advantage in training the DIBoM, which makes it an ideal target for showing quantum supremacy on practical problems.
The tradeoff between efficiency and universality is also worth further investigation. To achieve universality as defined in our work, an ansatz needs to have exponentially many parameters because the ansatz cannot express all unitaries if the dimension of its Hilbert space is smaller than that of $\mathsf{SU}(n)$. This implies full universality and efficiency cannot be achieved simultaneously for any quantum learning model. There are several directions to further explore the tradeoff between universality and efficiency and the associated design of quantum learning models.
First, by relaxing the definition of universality, there may exist more interesting tradeoff between universality and efficiency. However, this makes the research landscape more complex since there are a lot of ways to weaken the notion of universality. Previous research has considered weakening the universality to the class of functions that maps 0 to the ground state of a Hamiltonian which is the sum of poly($n$) Pauli bases \cite{biamonte2021universal}, that is real and continuous \cite{goto2021universal}, that can be described by a quantum circuit with a polynomial number of gates where each gate acts on a constant number of qubits \cite{cai2022sample}, and that is boolean \cite{herman2022expressivity}. Besides these choices, there are many other choices available, potentially infinitely many. For example, one can consider the class of function that maps 0 to the thermal state of a Hamiltonian which is the sum of poly($n$) Pauli bases, that is complex and meromorphic, that can be described by a quantum circuit with a logarithmic number of gates where each gate acts on a logarithmic number of qubits, to just name a few. How to achieve these different types of universality while maintaining efficiency in a strict sense is an interesting research question.
Another direction is to replace universality by restricting the ansatz to contain the solution one is looking for. In this case, the ansatz becomes problem specific, which is not a universal ansatz that can deal with all learning problems. Following this line, after taking a learning problem, one should design a specific ansatz that suits this problem which requires additional manual work and expertise. How to reduce the manual efforts in designing a specific ansatz for a given learning problem (such as combinatorial optimization problems \cite{zhou2020quantum}, learning the ground state of a Hamiltonian \cite{motta2020determining}, simulating quantum dynamics \cite{sparrow2018simulating}, or drug discovery for a specific disease \cite{cao2018potential}) is an interesting question on its own.
\section{Evaluation details of the Fidelity-based Measure}
\label{appsec:fbe}
Here, we detail how we numerically evaluate an upper bound of the fidelity-based measure (FBE) for any QNN architecture in Fig.~\ref{fig:expressivity}.
For a QNN architecture $\mathsf{A}(\theta)$, we first select $k=100$ random unitaries $U_i$ ($1 \le i \le k$) and $m=10$ random pure states $\phi_j$ ($1 \le j \le m$).
Then for any $U_i$, we optimize $\theta$ to minimize
\begin{equation}
s_i = \frac{1}{m} \sum\limits_{j=1}^{m} | \bra{\phi_j} \mathsf{A}(\theta)^\dagger U_i \ket{\phi_j}|.
\end{equation}
The bound $\min_i s_i $ is then taken as the upper bound of the FBE.
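A sketch of the sampling ingredients used here (Haar-random unitaries via QR with a phase correction, Haar-random states, and the overlap $s_i$); the optimization over $\theta$ is omitted, and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Haar-random d x d unitary: QR of a complex Gaussian matrix,
    with the R diagonal phases absorbed to obtain the correct measure."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def haar_state(d):
    """Haar-random pure state: normalized complex Gaussian vector."""
    z = rng.normal(size=d) + 1j * rng.normal(size=d)
    return z / np.linalg.norm(z)

def mean_overlap(A, U, states):
    """s_i = (1/m) * sum_j |<phi_j| A^dagger U |phi_j>|."""
    return np.mean([abs(phi.conj() @ (A.conj().T @ U @ phi))
                    for phi in states])
```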
\section{Concrete number of parameters for a universal 3-qubit DIBoM}
\label{appsec:3qubit}
In this section, we provide the precise number of parameters needed for a 3-qubit DIBoM to be universal.
To begin with, according to Section 4.5.1 of Ref.~\cite{nielsen2010quantum}, a 3-qubit unitary can be decomposed as a product of $2^{3-1}(2^3-1)=28$ two-level unitaries. Our next goal is to decompose a two-level 3-qubit unitary further.
As per Section 4.5.2 of Ref.~\cite{nielsen2010quantum}, a two-level 3-qubit unitary can be decomposed as a product of at most 5 controlled-controlled single-qubit unitaries. Our next goal is to decompose a controlled-controlled single-qubit unitary further.
Figure 4.18 of Ref.~\cite{nielsen2010quantum} reveals that a controlled-controlled single-qubit unitary can be decomposed as a product of 2 CZ gates and 3 controlled single-qubit unitary gates. Our next goal is to decompose a controlled single-qubit unitary further.
Figure 4.6 of Ref.~\cite{nielsen2010quantum} states that a controlled single-qubit unitary can be decomposed as a product of two CZ gates and single-qubit gates. Consequently, a controlled-controlled single-qubit unitary can be decomposed as a product of $2+3\times 2=8$ CZ gates and single-qubit gates, a two-level 3-qubit unitary as a product of $8 \times 5=40$ CZ gates and single-qubit gates, and a 3-qubit unitary as a product of $40 \times 28=1120$ CZ gates and single-qubit gates. This implies that a $(1120 \times 2+1)$-layer 3-qubit DIBoM is sufficient to realize any 3-qubit unitary.
A $(1120 \times 2+1)$-layer 3-qubit DIBoM contains 1121 single-qubit gate layers and 1120 generalized CZ gate layers. This translates to a total of $1121 \times 9 + 1120 \times 3 = 13449$ parameters.
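The arithmetic of this decomposition chain can be checked step by step:

```python
# Two-level unitaries per 3-qubit unitary (Nielsen-Chuang Sec. 4.5.1).
two_level = 2 ** (3 - 1) * (2 ** 3 - 1)
assert two_level == 28
# A controlled-controlled 1-qubit unitary: 2 CZs + 3 controlled-U,
# each controlled-U costing 2 CZs (plus single-qubit gates).
cz_per_ccu = 2 + 3 * 2
assert cz_per_ccu == 8
# 5 controlled-controlled unitaries per two-level unitary.
cz_total = cz_per_ccu * 5 * two_level
assert cz_total == 1120
# 1121 single-qubit layers (9 params each) + 1120 generalized CZ
# layers (3 params each).
params = (cz_total + 1) * 9 + cz_total * 3
assert params == 13449
```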
\section{Derivation of Eq.~\eqref{eq:derivative}}
\label{appsec:derivation}
Expanding the term $dC/ds$ yields
\begin{widetext}
\begin{eqnarray}
\frac{dC}{ds} & = & \lim\limits_{\epsilon \to 0} \frac{ C(s+\epsilon)-C(s)}{ \epsilon} = \lim\limits_{\epsilon \to 0} \frac{ \frac{1}{N} (\sum \limits_{x=1}^N \bra{\phi_x^L} (\rho_x^L(s+\epsilon)- \rho_x^L(s) ) \ket{\phi_x^L}) }{ \epsilon} \nonumber \\
& = & \frac{1}{N} \sum\limits_x \mathrm{tr}( \ket{\phi_x^L}\bra{\phi_x^L}
[ i K^L(s), \prod_{l=L}^1 U^l (s)
\rho_x^0
\prod_{l=1}^L {U^{l}}^\dagger (s) ]
+\dots + \ket{\phi_x^L}\bra{\phi_x^L} \prod_{l=L}^2 U^l(s)
[ i K^{1}(s), U^{1} (s) \rho_x^0
{U^{1}}^\dagger (s) ] \prod_{l=1}^L {U^{l}}^\dagger (s) ) \nonumber \\
& = & \frac{1}{N} \sum\limits_x \mathrm{tr}(
[ \prod_{l=L}^1 U^l (s) \rho_x^0 \prod_{l=1}^L {U^{l}}^\dagger (s), \ket{\phi_x^L}\bra{\phi_x^L}]
i K^L(s) + \dots + [ U^{1} (s) \rho_x^0 {U^{1}}^\dagger (s) ,
\prod_{l=2}^L {U^{l}}^\dagger (s) (\ket{\phi_x^L}\bra{\phi_x^L})
\prod_{l=L}^2 U^l (s) ]
i K^{1}(s) ) \nonumber \\
& = &\frac{i}{N} \sum\limits_x \mathrm{tr} ( \sum_{l=L}^1 M^{l}(s) K^{l}(s) ),
\label{eq:derivative31}
\end{eqnarray}
\end{widetext}
where $M^{l}(s)$ is defined as
\begin{eqnarray*}
M^{l}(s)& =& [ \prod_{j=l}^1 U^{j}(s) \rho_x^0 \prod_{j=1}^l {U^{j}}^\dagger (s) , \\
&& \prod_{j=l+1}^L{U^{j}}^\dagger (s) \ket{\phi_x^L}\bra{\phi_x^L} \prod_{j=L}^{l+1} U^{j} (s) ].
\end{eqnarray*}
Here $[\cdot,\cdot]$ denotes the commutator operator, and
the fourth equality in Eq.~\eqref{eq:derivative31} has exploited the relation $\mathrm{tr}(A[B, C]) = \mathrm{tr}([C,A]B)$.
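The identity $\mathrm{tr}(A[B, C]) = \mathrm{tr}([C,A]B)$ follows from cyclicity of the trace (both sides expand to $\mathrm{tr}(ABC) - \mathrm{tr}(ACB)$); a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
           for _ in range(3))

def comm(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

lhs = np.trace(A @ comm(B, C))   # tr(ABC) - tr(ACB)
rhs = np.trace(comm(C, A) @ B)   # tr(CAB) - tr(ACB), and tr(CAB) = tr(ABC)
assert np.isclose(lhs, rhs)
```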
\section{Simulation of the controlled unitary $V_j$}
\label{app:simulate_control}
In this section, we present the classical simulation that involves the controlled unitaries $V_j$.
Before any measurements, the initial quantum state $\rho^0$ is evolved to the following quantum state:
\begin{equation}
\rho^1 = U^k \cdots U^1 \rho^0 {U^1}^\dagger \cdots {U^k}^\dagger.
\end{equation}
After measuring with outcome $i$, the post-measurement state is given by
\begin{equation}
\rho^2_i = \mathrm{tr}_A (\rho^1 (\ket{i}\bra{i}_A \otimes I_B) ).
\end{equation}
The post-measurement states then undergo another series of unitaries and become
\begin{equation}
\rho^3 = \sum_i V_i^l \cdots V_i^1 \rho^2_i {V_i^1}^\dagger \cdots {V_i^l}^\dagger \triangleq \sum_i \mathcal{E}^i (\rho^1),
\end{equation}
where $\mathcal{E}^i$ is a quantum operation that acts on the state $\rho^1$.
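Classically, the measurement step reduces to an index selection: a sketch (our own code) of how $\rho^2_i = \mathrm{tr}_A(\rho^1 (\ket{i}\bra{i}_A \otimes I_B))$ can be computed with a single reshape:

```python
import numpy as np

def post_measurement_branch(rho1, i, dA, dB):
    """Unnormalized branch rho^2_i = tr_A( rho^1 (|i><i|_A x I_B) ).
    Its trace is the probability of outcome i on subsystem A."""
    return rho1.reshape(dA, dB, dA, dB)[i, :, i, :]
```

Summing the traces of `post_measurement_branch` over all outcomes $i$ recovers $\mathrm{tr}(\rho^1) = 1$.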
After getting the output of the quantum circuit $\rho^3$, the cost function can be written as
\begin{equation}
C(s) = \frac{1}{N} \sum\limits_{x=1}^N \bra{\psi_x} \rho_x^3 \ket{\psi_x} = \frac{1}{N} \sum\limits_{x=1}^N \mathrm{tr}( \ket{\psi_x} \bra{\psi_x} \rho_x^3).
\end{equation}
Since
\begin{equation}
\begin{aligned}
\rho^3(s+\epsilon) = &\sum_i e^{i\epsilon K_{2,i}^l} V_i^l \cdots e^{i\epsilon K_{2,i}^1} V_i^1 \mathrm{tr}_A[ e^{i\epsilon K_1^k } U^k \cdots e^{i\epsilon K_1^1 } \\
& U^1 \rho^0_x {U^1}^\dagger e^{-i\epsilon K_1^1} \cdots {U^k}^\dagger e^{-i\epsilon K_1^k} ( \ket{i}\bra{i}_A \otimes I_B ) ] \\
& {V_i^1}^\dagger e^{-i\epsilon K_{2,i}^1} \cdots {V_i^l}^\dagger e^{-i\epsilon K_{2,i}^l},
\end{aligned}
\end{equation}
we can evaluate the derivative of the cost function as
\begin{equation}
\frac{ dC}{ ds } = \lim\limits_{\epsilon \to 0} \frac{C(s+\epsilon)-C(s)}{\epsilon} = \frac{1}{N} \sum\limits_{x=1}^N \mathrm{tr}( \ket{\psi_x} \bra{\psi_x} X),
\end{equation}
where
\begin{equation}
\begin{aligned}
X= & \sum_{i=1}^4 \{ [i K_{2,i}^l, V_i^l \cdots V_i^1 \rho^2_i {V_i^1}^\dagger \cdots {V_i^l}^\dagger] +\cdots + \\
& V_i^l \cdots V_i^2 [ i K_{2,i}^1, V_i^1 \rho^2_i {V_i^1}^\dagger ] {V_i^2}^\dagger \cdots {V_i^l}^\dagger + \\
& + \mathcal{E}^i ( [iK^k_1(s), U^k \cdots U^1 \rho^0 {U^1}^\dagger \cdots {U^k}^\dagger] ) + \cdots + \\
& + \mathcal{E}^i ( U^k \cdots U^2 [iK^1_1(s), U^1 \rho^0 {U^1}^\dagger] {U^2}^\dagger \cdots {U^k}^\dagger ) \} .\\
\end{aligned}
\end{equation}
To update the parameter in the network, we minimize the function
\begin{equation}
\frac{dC}{ds} - \lambda \sum_{\alpha_1,\cdots, \alpha_n} K_{\alpha_1,\cdots, \alpha_n}(s)^2,
\end{equation}
where $K_{\alpha_1,\cdots, \alpha_n}(s)$ is related to $K_1(s)$ by
\begin{equation}
K_1(s) = \sum K_{\alpha_1,\cdots, \alpha_n}(s) \otimes_{k=1}^n \sigma^{\alpha_k}.
\end{equation}
We will focus on two specific cases of $K_1(s)$:
$K_1(s) \to \sigma_j^\alpha$ and $K_1(s) \to \ket{11}_{jk}\bra{11}$.
The former case involves one qubit, while the latter case only involves two qubits.
If $K_1^j(s)$ only acts on qubit $j$, we let
\begin{equation}
K_1(s) = K_1^j(s) \otimes I_{\bar{j}}.
\end{equation}
If $K_1^{j,k}(s)$ acts on qubits $j$ and $k$, we let
\begin{equation}
K_1(s) = K_1^{j,k}(s) \otimes I_{\bar{j,k}}.
\end{equation}
This ends the classical simulation of the controlled unitaries.
\section{Simulation details}
\label{appsec:detail}
This section presents additional simulation setups for the figures in Sec.~\ref{sec:simulationresult}.
To guarantee beforehand that the globally optimal loss of the DIBoM can reach 0 for a suitable setting of its parameters, we impose the following restrictions on the intrinsic unitary $V$. For Figs. \ref{fig:plotHybrid}, \ref{fig:compare}, and \ref{fig:baseline}, the intrinsic unitary $V$ is restricted to a single-qubit unitary multiplied by a generalized CZ gate, with the single-qubit unitary acting on the second qubit. For Fig.~\ref{fig:plotSG}, the intrinsic unitary $V$ is restricted to a single-qubit unitary acting on the second qubit. For Fig.~\ref{fig:plotCZ}, the intrinsic unitary $V$ is restricted to a layer of generalized CZ gates. For Fig.~\ref{fig:fixedsecond}, the intrinsic unitary $V$ is obtained as a single-qubit gate multiplied by a generalized CZ gate, where the generalized CZ gate of $V$ is identical to the fixed second layer of the DIBoM.
For Fig.~\ref{fig:fixedfirst}, the intrinsic unitary $V$ is also obtained by a single-qubit gate multiplied by a generalized CZ gate, with the single-qubit gate matching the first layer of the DIBoM. For Figs.~\ref{fig:product_gate}, \ref{fig:variousqubit}, \ref{fig:localcost}, and \ref{fig:globalVSlocal}, the intrinsic unitary $V$ is restricted to a product gate layer followed by a generalized CZ gate layer, with the product gate acting on all qubits of the quantum input. For Fig.~\ref{fig:ab_GCZ}, the intrinsic unitary $V$ is restricted to have the same circuit structure as the DIBoM but with different parameters. For Fig.~\ref{fig:variousLayer}, the intrinsic unitary $V$ is restricted to an alternating product of a product gate layer and a generalized CZ gate layer with a total of $L$ layers, where $L$ is the given layer number.
Some auxiliary setups are as follows. For Fig.~\ref{fig:compare}, both the training and test samples are drawn from the same distribution, meaning that the intrinsic unitary $V$ that transforms the input into the output is identical for both sets. The number of samples in the test set is fixed at 10.
For Fig.~\ref{fig:full_unitary}, the intrinsic unitary $V$ is randomly selected from all 2-qubit unitaries, with no additional constraints.
\section{Simulation of local cost function}
\label{appsec:localcost}
When simulating the local cost function, there are two changes compared to simulating the global cost function. Firstly, the input and output are reversed. Secondly, the input $\ket{\phi}\bra{\phi}$ is substituted by $\ket{\phi}_i \bra{\phi}_i \otimes I_{\bar{i}}$.
Let us start with the first change. The ground truth unitary is $V$, hence the ground truth output is $\ket{\phi_x^L} = V\ket{\phi_x^0}$, where $\ket{\phi_x^0}$
is the quantum input. In the reverse setup, we will compare the ``model'' input $ \rho_x^0 = U^\dagger \ket{\phi_x^L}\bra{\phi_x^L} U$ with the actual input $\ket{\phi_x^0}\bra{\phi_x^0}$ in the cost function:
\begin{equation}
C_{reverse}(s) = \frac{1}{N} \sum \limits_{x=1}^N \bra{\phi_x^{0}} \rho_x^{0}(s) \ket{\phi_x^{0}}.
\end{equation}
A crucial observation is
\begin{eqnarray}
\frac{dC_{reverse}}{ds} =\frac{i}{N} \sum\limits_x \mathrm{tr} ( \sum_{l=L}^1 M^{l}(s) K^{l}(s) ),
\label{eq:derivative3}
\end{eqnarray}
where $M^{l}(s)$ and $K^l(s)$ are exactly the same as the ones in Appendix~\ref{appsec:derivation}. The optimization of the reversed
cost function $C_{reverse}$ is hence equivalent to that of the original cost function $C$, thus completing the first change.
Moving on to the second change, we note that the input $\ket{\phi_x^0}$ is a product state, which allows us to express it as
$ \ket{\phi_x^0} = \ket{\phi_x^0}_1 \otimes \dots \otimes \ket{\phi_x^0}_n $.
As a result, the local cost function takes the form
\begin{equation}
C_{local}(s) = \frac{1}{nN} \sum \limits_{x=1}^N \sum \limits_{y=1}^n \mathrm{tr}( ( \ket{\phi_x^{0}}_y\bra{\phi_x^{0}}_y \otimes I_{\bar{y}} ) \rho_x^{0}(s) ),
\end{equation}
where $I_{\bar{y}}$ is the identity operator acting on all subsystems except the $y$-th one.
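A direct evaluation of the per-sample contribution to $C_{local}$ (our own sketch; `rho` stands for $\rho_x^{0}(s)$ and `inputs` for the single-qubit factors of the product-form input):

```python
import numpy as np

def local_cost(rho, inputs):
    """Per-sample local cost: average over y of
    tr( (|phi^0>_y <phi^0|_y x I_ybar) rho ) for a product-form input."""
    n = len(inputs)
    total = 0.0
    for y, phi in enumerate(inputs):
        P = np.eye(1, dtype=complex)
        for k in range(n):
            factor = np.outer(phi, phi.conj()) if k == y else np.eye(2)
            P = np.kron(P, factor)
        total += np.trace(P @ rho).real
    return total / n
```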
Accordingly, the derivative of $C_{local}$ with respect to the parameter $s$ can be expressed as
\begin{equation}
\frac{dC_{local}}{ds} =\frac{i}{Nn} \sum\limits_{x} \mathrm{tr} ( \sum_{l=1}^L M^{l}_{local}(s) K^{l}(s) ),
\end{equation}
where
\begin{eqnarray*}
M^{l}_{local}(s)& =& [ \prod_{j=l}^1 U^{j}(s)( \sum\limits_{y} \ket{\phi_x^{0}}_y\bra{\phi_x^{0}}_y \otimes I_{\bar{y}} ) \prod_{j=1}^l {U^{j}}^\dagger (s) , \\
&& \prod_{j=l+1}^L{U^{j}}^\dagger (s) \rho_x^0 \prod_{j=L}^{l+1} U^{j} (s) ].
\end{eqnarray*}
The remaining procedure is identical to that of the global cost function.
\end{document} |
\begin{document}
\title {Self-Similar Surfaces: Involutions and Perfection}
\author{\stepcounter{footnote}Justin Malestein\thanks{Partially supported by Simons
Collaboration Grant 713006}\,\, and\, Jing Tao\thanks{Partially
supported by NSF DMS-1651963.}}
\date{}
\maketitle
\thispagestyle{empty}
\begin{abstract}
We investigate the problem of when big mapping class groups are
generated by involutions. Restricting our attention to the class of
\emph{self-similar} surfaces, which are surfaces with self-similar ends
space, as defined by Mann and Rafi, and with $0$ or infinite genus, we
show that, when the set of maximal ends is infinite, then the mapping
class groups of these surfaces are generated by involutions, normally
generated by a single involution, and uniformly perfect. In fact, we
derive this statement as a corollary of the corresponding statement for
the homeomorphism groups of these surfaces. On the other hand, among
self-similar surfaces with one maximal end, we produce infinitely many
examples in which their big mapping class groups are neither perfect
nor generated by torsion elements. These groups also do not have the
automatic continuity property.
\end{abstract}
\section{Introduction}
Consider a connected and oriented surface $\Sigma$. We distinguish two
types of surfaces, those of finite type, i.e.\ a closed surface minus
finitely many points, or of infinite type otherwise. Let $G(\Sigma)$ be
either the group $\Homeo^+(\Sigma)$ of orientation preserving
self-homeomorphisms of $\Sigma$ or the mapping class group $\mcg(\Sigma)$
of $\Sigma$. We are interested in the algebraic structure of $G(\Sigma)$,
especially when $\Sigma$ has infinite type.
As a topological group, equipped with the compact open topology,
$\Homeo^+(\Sigma)$ is a non-locally-compact Polish group. $\mcg(\Sigma)$,
being a quotient of $\Homeo^+(\Sigma)$, inherits a topology. When
$\Sigma$ has finite type, then this topology is discrete and
$\mcg(\Sigma)$ is finitely presented. But when $\Sigma$ has infinite
type, then $\mcg(\Sigma)$ is also a non-locally-compact Polish group,
similar to the homeomorphism group. In particular, $\mcg(\Sigma)$ is not
countably generated, justifying the nomenclature of \emph{big mapping
class group} in the literature.
An obvious group-theoretic problem is to identify canonical generating
sets for $G(\Sigma)$. For any group, a natural choice is its set of
involutions, or more broadly, its set of torsion elements. This leads us
to ask if $G(\Sigma)$ is generated by involutions (or torsion elements).
(The set of Dehn twists, being countable, can never generate a big
mapping class group; and often, they do not even topologically generate
\cite{APV20}.) For finite type surfaces, this question is well studied
for their mapping class groups; see \cite{MP87, Luo00, BF04, Kas03,
Kor20, LM21, Yil20} and the references within for the story on generating
by involutions. The goal of this paper is to explore this question for
surfaces of infinite type.
Answering this question for all surfaces of infinite type should be
challenging, as $G(\Sigma)$ is as complicated as the homeomorphism group
of the \emph{ends space} of $\Sigma$. In trying to tame the world of
surfaces of infinite type, Mann and Rafi \cite{MR21} introduced a
preorder on an ends space, and showed that the induced partial order
always has maximal elements. They also introduced the notion of
self-similar ends spaces. We call a surface self-similar if it has a
self-similar ends space and $0$ or infinite genus. Among these, we
identify a subclass, called \emph{uniformly} self-similar, consisting of
the self-similar surfaces with infinitely many maximal ends. This
uncountable subclass, to which the sphere minus a Cantor set belongs,
exhibits additional symmetry. It was observed by Calegari \cite{Cal09} that the
mapping class group of the sphere minus a Cantor set is uniformly
perfect. We extend this result to all uniformly self-similar surfaces,
along with answering the generation problem by involutions for these
surfaces. Our main theorem is the following.
\begin{introthm} \label{introthm1}
Let $\Sigma$ be a uniformly self-similar surface and $G(\Sigma)$ be
either $\Homeo^+(\Sigma)$ or $\mcg(\Sigma)$. Then $G(\Sigma)$ is
generated by involutions, normally generated by a single involution,
and uniformly perfect. Moreover, each element of $G(\Sigma)$ is a
product of at most 3 commutators and 12 involutions.
\end{introthm}
Note that the case of $\mcg(\Sigma)$ follows immediately from that of
$\Homeo^+(\Sigma)$. For the latter case, we use a method akin to
\emph{fragmentation}, a well known tool in the study of homeomorphism
groups. In more detail, we first observe that a uniformly self-similar
ends space $E$ behaves very much like a Cantor set. Namely, any clopen
subset $U \subset E$ containing a proper subset of the maximal ends is
homeomorphic to its complement $U^c$. This gives rise to the notion of a
\emph{half-space} in a uniformly self-similar surface $\Sigma$, which is
a subsurface $H \subset \Sigma$ with a single connected, compact boundary
component, such that $\overline{H^c}$ is also a half-space and
homeomorphic to $H$. (The exact definition is different and appears as
Definition \ref{def:halfspaces}.) We then find an
$H$--\emph{translation}, that is, a homeomorphism $\phi$ such that
$\{\phi^n(H)\}_{n \in \ensuremath{\mathbb{Z}}\xspace}$ are all disjoint. This is a key step in the
proof and requires putting the surface $\Sigma$ into a particular form
that reflects its symmetry. By our construction, the $H$--translation
$\phi$ is a product of two conjugate involutions. Then, using a standard
trick, we write every $f \in \Homeo(H,\partial H)$ as a commutator of the
form $f=[\hat{f},\phi]$ for some $\hat{f}$. The final step is to show
$\Homeo^+(\Sigma)$ is the normal closure of $\Homeo(H,\partial H)$, and
so it is normally generated by $\phi$ and hence by a single involution.
The other statements are achieved by keeping track of the number of
commutators or involutions needed at each step.
Many of our steps above carry over to the case of equipping $\Sigma$ with
a marked point $\ast$. The key difference is now one can find a curve
$\alpha \subset \Sigma$ which is not contained in any half-space $H$ of
$\Sigma$. Thus, it is no longer immediate that the Dehn twist about
$\alpha$ can be generated by elements supported on $H$ and their
conjugates. To deal with this issue, we invoke the lantern relation.
Using self-similarity, we can find an appropriate lantern, i.e.\ a
4-holed sphere bounding $\alpha$ with the other boundary components lying
in half-spaces. Once we get all Dehn twists, then combining our previous
method together with the fact that mapping class groups of compact
surfaces are generated by Dehn twists, we obtain the following theorem.
\begin{introthm} \label{introthm2}
Let $\Sigma$ be a uniformly self-similar surface with a marked point
$\ast \in \Sigma$. Then, $\mcg(\Sigma, \ast)$ is perfect, generated by
involutions, and normally generated by a single involution.
\end{introthm}
Because we used the lantern relation, our proof does not apply to the
homeomorphism group. For a different argument that the mapping class
group of the marked sphere minus a Cantor set is perfect, see
\cite{Vla21}. Theorem \ref{introthm2} is sharp in the sense that we
cannot expect a statement about uniform perfection or a bound on the
number of involutions, due to the fact that the marked sphere minus a
Cantor set provides a counterexample, by \cite{Bav16}.
Not all big mapping class groups can be generated by torsion elements
or be perfect, even among the class of self-similar
surfaces. One counterexample is the infinite genus surface with one end.
This is a self-similar surface, but, by Domat and Dickmann \cite{DD21},
the abelianization of its mapping class group contains
$\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$ as a summand.
Using their results and a covering trick, we can show the mapping class
group of the surface $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$ has similarly large
abelianization. Note that this surface is also self-similar but not
uniformly. On the other hand, using a method similar to our proof of
Theorem \ref{introthm1}, we can also show $\mcg(\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace)$ is
\emph{topologically} generated by involutions. Since any homomorphism
from a Polish group to $\ensuremath{\mathbb{Z}}\xspace$ is always continuous, this makes its first
cohomology group vanish, in contrast with homology.
\begin{introthm} \label{introthm3}
The group $\mcg(\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace)$ surjects onto
$\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$. In particular, it is neither perfect nor
generated by torsion elements. On the other hand, $\mcg(\ensuremath{\mathbb{R}}\xspace^2 \setminus
\ensuremath{\mathbb{N}}\xspace)$ is topologically generated by involutions, so $\HH^1(\mcg(\ensuremath{\mathbb{R}}\xspace^2
\setminus \ensuremath{\mathbb{N}}\xspace),\ensuremath{\mathbb{Z}}\xspace)=0$.
\end{introthm}
The statement of topological generation by involutions also extends to
the mapping class group of the one-ended infinite genus surface
$\Sigma_L$. Additionally, we can get infinitely many examples of surfaces
whose mapping class groups have similarly large abelianization, by
considering appropriate maps to $\Sigma_L$ or $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$. Many
of these examples are self-similar but not uniformly.
Another application of our results, beyond the ones mentioned, concerns
the \emph{automatic continuity property}. Recall a Polish group $G$ has the
automatic continuity property if every homomorphism from $G$ to a
separable topological group is necessarily continuous. The family of
surfaces we construct also gives rise to a large family of homeomorphism
groups or big mapping class groups that do not have this property. This
gives some progress towards answering \cite[Question 2.4]{Man20}. We
highlight the following examples and refer to Theorem \ref{thm:bigabel}
and Corollary \ref{cor:bigabel} for the full technical statement.
\begin{introthm} \label{introthm4}
Let $\Sigma = S^2 \setminus E$, where $S^2$ is the 2-sphere and $E$ is
a countable closed subset of the Cantor set homeomorphic to the ordinal
$\omega^\alpha +1$, where $\alpha$ is a countable successor ordinal.
Let $G(\Sigma)$ be either $\Homeo^+(\Sigma)$ or $\mcg(\Sigma)$. Then
$G(\Sigma)$ is not perfect, is not generated by torsion elements, and
does not have the automatic continuity property.
\end{introthm}
One may wonder what happens in the case of positive genus, rather than
$0$ or infinite genus. Our methods do not extend to these surfaces.
However, for a surface $\Sigma$ obtained by removing a Cantor set from a
surface of finite type, Calegari and Chen \cite{CC21} showed various
results for $\mcg(\Sigma)$ including that it is generated by torsion.
Additionally, Mann \cite{Man20} showed $G(\Sigma)$ has the automatic
continuity property. It would be interesting to know if their techniques
extend to uniformly self-similar ends spaces. We refer to their papers
for more details.
One may also wonder whether our results extend to other surfaces of
infinite type. In \cite{Fie21}, using very similar methods that were
developed independently and concurrently, Field, Patel, and Rasmussen
proved analogues of some of the above results for other classes of
surfaces. Specifically, for their class of surfaces, which are required
to have locally coarsely bounded (CB) mapping class group and infinitely many maximal ends
among other minor conditions, they show that the commutator lengths of
elements in the commutator subgroup are uniformly bounded above and
$\HH_1(\mcg(\Sigma), \ensuremath{\mathbb{Z}}\xspace)$ is finitely generated. See \cite{Fie21} for
precise statements.
As many cases still remain open, we invite the reader to explore other
classes of surfaces of infinite type which may verify the properties in
Theorem \ref{introthm1} or admit an obstruction. It would also be
interesting to find other natural generating sets for big mapping class
groups or homeomorphism groups. Similar questions can also be asked for
the homeomorphism groups of ends spaces.
Here is a brief outline of the paper. In Section \ref{sec:preliminaries},
we introduce ends spaces and the classification of surfaces of infinite
type. Following \cite{MR21}, we define self-similar ends spaces and
surfaces and a partial order on ends spaces. We also observe some nice
properties about self-similar ends spaces that lead to the definition of
half-spaces in uniformly self-similar surfaces. The proof of Theorem
\ref{introthm1} is contained in Section \ref{sec:genunmarked}, and the
proof of Theorem \ref{introthm2} in Section \ref{sec:genmarked}. The two
parts of Theorem \ref{introthm3} appear in Section \ref{sec:bigabel} as
Proposition \ref{prop:flute} and Theorem \ref{thm:topgen}. Theorem
\ref{introthm4} follows from Corollary \ref{cor:bigabel} as a special
case.
\paragraph{Acknowledgements} We would like to thank the anonymous referee for their helpful comments.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Partial order on ends spaces}
\label{sec:po}
An \emph{ends space} is a pair $(E,F)$, where $E$ is a totally
disconnected, compact, metrizable space and $F \subset E$ is a (possibly
empty) closed subspace. For simplicity, we will often suppress the
notation $F$, but by convention, all homeomorphisms of $E$ will be
relative to $F$. For instance, to say $C \subset E$ is homeomorphic to $D
\subset E$ means there is a homeomorphism from $(C,C\cap F)$ to $(D,D
\cap F)$. We denote by $\Homeo(E,F)$ the group of homeomorphisms of $E$
preserving $F$.
The assumptions on $E$ imply it is homeomorphic to a closed subspace of
the standard Cantor set (see \cite[Proposition 5]{Ric63}). We will often
view $E$ as this subspace (and $F$ as a further closed subspace).
\begin{definition}
An ends space $(E,F)$ is called \emph{self-similar} if for any
decomposition $E = E_1 \sqcup E_2 \sqcup \cdots \sqcup E_n$ into pairwise
disjoint clopen sets, there exists some clopen set $D$ contained in
some $E_i$ such that $(D,D\cap F)$ is homeomorphic to $(E,F)$.
\end{definition}
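To illustrate the definition with standard examples: the Cantor set (with
$F = \emptyset$) is self-similar, since every nonempty clopen subset of
the Cantor set is again a Cantor set; the ordinal $\omega+1$, i.e.\ a
convergent sequence together with its limit, is self-similar, since every
clopen neighborhood of the limit point is homeomorphic to $\omega+1$; and
a finite discrete space with at least two points is not self-similar,
since after decomposing it into singletons no clopen subset of a single
piece is homeomorphic to the whole space.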
Following \cite{MR21}, given an ends space $(E,F)$, define a preorder
$\preceq$ on $E$ where for $x,y \in E$, we say $x \preceq y$ if every
neighborhood $U$ of $y$ contains some homeomorphic copy of a neighborhood
$V$ of $x$. \textit{Here and throughout the paper, a neighborhood in an
end space will always be a clopen neighborhood.} We say $x$ and $y$ are
equivalent, and write $x \sim y$, if $x \preceq y$ and $y \preceq x$.
This defines an equivalence relation on $E$. For $x \in E$, denote by
$E(x)$ the equivalence class of $x$, and denote by $[E]$ the set of
equivalence classes. From this we get a partial order $\prec$ on $[E]$,
defined by $E(x) \prec E(y)$ if $x \preceq y$ and $x \nsim y$. Note that
by definition, if $x \preceq y$, then $y$ is either locally homeomorphic
to $x$ or an accumulation point of homeomorphic images of $x$ under
$\Homeo(E,F)$. One easily verifies that, since $F \subset E$ is closed,
either $E(x) \cap F = \emptyset$ or $E(x) \cap F = E(x)$. Note additionally
that if there is a homeomorphism $(E, F) \to (E, F)$ which maps $x \mapsto y$,
then $x \sim y$. Consequently, self-homeomorphisms of $(E, F)$ preserve each equivalence class.
We say a point $x \in E$ is \emph{maximal} if $E(x)$ is maximal with
respect to $\prec$. Denote by $M(E)$ the set of
maximal elements in $E$.
\begin{proposition}[\cite{MR21}] \label{prop:orderends}
Let $(E,F)$ be an ends space. The following statements hold.
\begin{itemize}
\item The set $M(E)$ of maximal elements under the partial order
$\prec$ is non-empty.
\item For every $x \in M(E)$, its equivalence class $E(x)$ is either
finite or homeomorphic to a Cantor set.
\item If $(E,F)$ is self-similar, then $M(E)$ is a single equivalence
class $E(x)$, and $E(x)$ is either a singleton or homeomorphic to a
Cantor set.
\end{itemize}
\end{proposition}
Observe that when $M(E)$ is a single equivalence class $E(x)$ and $F \ne
\emptyset$, then $E(x) \cap F = E(x)$.
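As a simple worked example, take $E = \omega^2+1$ (a countable compact
ordinal) with $F = \emptyset$. The isolated points form one equivalence
class; the points of Cantor--Bendixson rank $1$, each of which has a
neighborhood homeomorphic to $\omega+1$, form a second; and the top point
$\omega^2$ forms a third. The partial order is the chain
$E(\text{isolated}) \prec E(\text{rank } 1) \prec E(\omega^2)$, so
$M(E) = \{\omega^2\}$ is a singleton, as predicted by Proposition
\ref{prop:orderends} since this ends space is self-similar.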
\subsection{Classification of infinite-type surfaces}
By a \emph{surface} we always mean a connected, orientable $2$--manifold.
A surface has \emph{finite type} if its fundamental group is finitely
generated; otherwise, it has infinite type. In this paper, we are
primarily interested in surfaces of infinite type. We refer to
\cite{Ric63} for details.
The collection of compact sets on a surface $\Sigma$ forms a directed set
by inclusion. The \emph{space of ends} of $\Sigma$ is \[ E(\Sigma) =
\lim_{\longleftarrow} \pi_0(\Sigma \setminus K), \] where the inverse
limit is taken over the collection of compact subsets $K \subset \Sigma$.
Equip each $\pi_0(\Sigma \setminus K)$ with the discrete topology. Then
the limit topology on $E(\Sigma)$ is totally disconnected, compact, and
metrizable. An element of $E(\Sigma)$ is called an \emph{end} of
$\Sigma$.
An end $e \in E(\Sigma)$ is \emph{accumulated by genus} if every
subsurface $S \subset \Sigma$ with $e \in E(S)$ has infinite
genus; otherwise, $e$ is called \emph{planar}. Let $E^g(\Sigma)$ be the
subset of $E(\Sigma)$ consisting of ends accumulated by genus. This is
always a closed subset of $E(\Sigma)$, with $E^g(\Sigma)=\emptyset$ if and
only if $\Sigma$ has finite genus. Hence the pair
$(E(\Sigma),E^g(\Sigma))$ forms an ends space. Conversely, by
\cite[Theorem 2]{Ric63}, every ends space $(E,F)$ can be realized as the
space of ends of some surface $\Sigma$, with
$(E,F)=(E(\Sigma),E^g(\Sigma))$.
Infinite type surfaces are completely classified by the following data:
the genus (possibly infinite), and the homeomorphism type of the ends
space $(E(\Sigma), E^g(\Sigma))$. More precisely:
\begin{theorem}[{\cite{Ker23}, \cite[Theorem 1]{Ric63}}] \label{thm:classification}
Suppose $\Sigma$ and $\Sigma'$ are boundaryless surfaces. Then, $\Sigma$ and $\Sigma'$ are
homeomorphic if and only if they have the same (possibly infinite) genus
and there is a homeomorphism between
$(E(\Sigma),E^g(\Sigma))$ and $(E(\Sigma'),E^g(\Sigma'))$.
\end{theorem}
We remark that although Richards' classification of infinite type
surfaces is only stated for boundaryless surfaces, it easily extends to
surfaces with finitely many compact boundary components. That is, two
surfaces with the same genus, same number of (finitely many) compact
boundary components, and homeomorphic end space pairs $(E, E^g)$ are in
fact homeomorphic.
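For instance, the classification shows the plane is the unique genus-$0$
surface with a single (planar) end, the Loch Ness monster surface is the
unique infinite-genus surface with a single end (necessarily accumulated
by genus), and the sphere minus a Cantor set is the unique genus-$0$
surface whose ends space is a Cantor set of planar ends.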
Fix an orientation on a surface $\Sigma$ and set $(E,F) =
(E(\Sigma),E^g(\Sigma))$. Let $\Homeo^+(\Sigma)$ be the group of
orientation preserving homeomorphisms of $\Sigma$. This is a topological
group equipped with the compact open topology, and moreover it is a
Polish group. The connected component of the identity is a closed normal
subgroup $\Homeo_0(\Sigma)$ comprised of homeomorphisms isotopic to the
identity. The quotient group \[\mcg(\Sigma) = \Homeo^+(\Sigma)/\Homeo_0(\Sigma)\]
is called the \emph{mapping class group} of $\Sigma$. When $\Sigma$ has
finite type, then $\mcg(\Sigma)$ is discrete and finitely presented. When
$\Sigma$ has infinite type, then $\mcg(\Sigma)$ is a Polish group that
is not locally compact.
Every homeomorphism of $\Sigma$ induces a homeomorphism of its ends space
$(E,F)$, and two homotopic homeomorphisms of $\Sigma$ induce the same map
on $(E,F)$. This gives a continuous homomorphism $\Phi : \Homeo^+(\Sigma)
\to \Homeo(E,F)$ that factors through $\mcg(\Sigma)$. By \cite{Ric63},
the map $\Phi$ is also surjective.
As noted in \cite[Section 4]{MR21}, we also know the preorder $\preceq$
on $E$ is equivalent to: $x \preceq y$ if and only if for every
neighborhood $U$ of $y$ there is a neighborhood $V$ of $x$ and $f \in
\Homeo^+(\Sigma)$ such that $\Phi(f)(V) \ensuremath{S}\xspaceubset U$.
\begin{definition}
A surface $\Sigma$ is called \emph{self-similar} if its space of ends
$(E(\Sigma),E^g(\Sigma))$ is self-similar and $\Sigma$ has genus 0 or
infinite genus.
\end{definition}
Note that when $\Sigma$ is self-similar and has infinite genus, then each
maximal end of $E(\Sigma)$ must be accumulated by genus.
\begin{remark}
We point out our definition of self-similar surfaces is equivalent to
another notion. First, following \cite{MR21}, a subset $A$ of a surface
$\Sigma$ is called \emph{non-displaceable} if $f(A) \cap A \ne
\emptyset$ for every $f \in \Homeo(\Sigma)$. Then, $\Sigma$ is self-similar
if and only if $\Sigma$ has self-similar ends space and no
non-displaceable compact subsurfaces. One direction is clear: if
$\Sigma$ has finite positive genus, then $\Sigma$ has a compact
non-displaceable subsurface. The other direction is observed by
\cite[Lemma 5.9 and 5.13]{APV21}.
\end{remark}
\subsection{Stable neighborhoods of ends and self-similarity}
\label{sec:stable}
We now collect some facts about self-similar ends spaces. The key
takeaway of this section is that self-similar ends spaces with infinitely
many maximal ends behave very much like a Cantor set.
\begin{definition}
Given $x \in E$, a neighborhood $U$ of $x$ is called \emph{stable} if
any smaller neighborhood $V \subset U$ contains a homeomorphic copy of
$U$. (Recall that this means that $(V, V \cap F)$ contains a
homeomorphic copy of $(U, U\cap F)$).
\end{definition}
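For example, if $E$ is a Cantor set (with $F = \emptyset$), then every
clopen neighborhood of every point is stable: any nonempty clopen subset
of a Cantor set is again a Cantor set, so any smaller neighborhood is
itself a homeomorphic copy of the original neighborhood.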
\begin{lemma}[{\cite[Lemma 5.4]{APV21}}] \label{lemma:Eisstablenbhd}
If $(E,F)$ is self-similar, then for every maximal element $x \in E$, the
set $E$ is a stable neighborhood of $x$.
\end{lemma}
The following statement is reminiscent of \cite[Lemma 4.17]{MR21}, but
stronger than what the latter implies; our proof, however, is modeled
after theirs.
\begin{lemma} \label{lemma:allclopenshomeo}
Suppose $(E,F)$ is self-similar. Then for all maximal points $x, y \in
M(E)$ and all clopen neighborhoods $U, V$ resp.\ of $x, y$, there exists
a homeomorphism $\varphi: (U, U \cap F) \to (V,V \cap F)$ such that
$\varphi(x) = y$.
\end{lemma}
\begin{proof}
The proof follows a back-and-forth argument. As usual, we will suppress
$F$, so all maps below are maps of pairs relative to $F$.
Let $U_0 = U$ and $V_0 = V$. We define the homeomorphism from $U$ to
$V$ inductively on clopen subsets exhausting $U \setminus \{x \}, V
\setminus \{y\}$. For convenience, we choose some metric on $E$. We
choose $U_1 \subseteq U_0$ to be a proper neighborhood of $x$ of
diameter less than $1$. Since $E$ is a stable neighborhood of $y$ by
Lemma \ref{lemma:Eisstablenbhd}, and $U_0 \setminus U_1$ is clopen,
there is a continuous map $$ f_0: U_0\setminus U_1 \to V_0$$ which is a
homeomorphism onto a clopen image. We can choose $f_0$ such that
$\image(f_0) \subseteq V_0 \setminus \{y\}$ for the following reasons.
If $M(E) = \{y\}$, this is automatic. If $M(E)$ is a Cantor set,
then $V_0 \setminus \{y\}$ contains some $z_0 \in M(E)$, and Lemma
\ref{lemma:Eisstablenbhd} ensures we can map $U_0 \setminus U_1$
homeomorphically into a sufficiently small neighborhood of $z_0$ which
avoids $y$. Since $\image(f_0)$ is clopen, we can choose a proper
clopen neighborhood $V_1 \subseteq V_0 \setminus \image(f_0)$ of $y$ which
has diameter less than $1$. By the same token, we can find a map $$
g_0: V_0 \setminus (V_1 \cup \image(f_0)) \to U_1 \setminus \{x\} $$
which is a homeomorphism onto a proper clopen image. We similarly
define a proper clopen neighborhood $U_2 \subseteq U_1 \setminus
\image(g_0)$ of $x$ which has diameter less than $\frac{1}{2}$.
Inductively, suppose $U_0, \dots, U_{n+1}, V_0, \dots, V_n$ have been
constructed along with maps that are homeomorphisms onto their image
$$f_i : U_i \setminus (U_{i+1} \cup \image(g_{i-1})) \to V_i \setminus
V_{i+1}$$ $$g_i : V_i \setminus (V_{i+1} \cup \image(f_i)) \to U_{i+1}
\setminus U_{i+2}$$ for $0 \leq i \leq n-1$. Using Lemma
\ref{lemma:Eisstablenbhd} as above, we then define a map which is a
homeomorphism onto its image $$f_n : U_n \setminus (U_{n+1} \cup
\image(g_{n-1})) \to V_n \setminus \{y\}$$ and choose a proper clopen
neighborhood $V_{n+1} \subseteq V_n \setminus \image(f_n)$ of $y$, of
diameter less than $\frac{1}{n+1}$. Similarly, we define a map which is
a homeomorphism onto its image $$g_n : V_n \setminus (V_{n+1} \cup
\image(f_n)) \to U_{n+1} \setminus \{x\}$$ and choose a proper clopen
neighborhood $U_{n+2} \subseteq U_{n+1} \setminus \image(g_n)$ of $x$,
of diameter less than $\frac{1}{n+2}$. We thereby inductively construct
such a sequence of maps $f_0, f_1, \dots$ and $g_0, g_1, \dots$.
Now restrict the target spaces of the $f_i$ and $g_i$ to their images.
Then, by construction, the domains and images of the $f_i$ and
$g_i^{-1}$ are disjoint and their respective unions are $U \setminus
\{x\}$ and $V \setminus \{y\}$. Thus, by taking the union of the $f_i$
and $g_i^{-1}$, we obtain a continuous bijection $\psi: U \setminus
\{x\} \to V \setminus \{y\}$, since their domains are open subsets.
Similarly, we can define the continuous inverse of $\psi$ using the
$f_i^{-1}$ and $g_i$.
Moreover, we can extend $\psi$ to a homeomorphism $\varphi: U \to V$ by
mapping $x$ to $y$. \qedhere
\end{proof}
\section{Generation of the homeomorphism group}
\label{sec:genunmarked}
Our proof of Theorem \ref{introthm1} in the case of an unmarked surface
proceeds via the following steps. First, we define the notion of a
\emph{half-space} of $\Sigma$ and show that the normal closure of a
single involution contains an $H$-translation for some half-space $H$.
Formally, if $H$ is a half-space, we say a homeomorphism $\varphi$ is an
\textit{$H$-translation} if $\{\varphi^n(H)\}_{n \in \ensuremath{\mathbb{Z}}\xspace}$ are all
pairwise disjoint. We then show that the normal closure of such a
$\varphi$ contains all homeomorphisms supported on $H$ and that
half-space supported homeomorphisms generate $\Homeo^+(\Sigma)$.
\begin{definition}
A self-similar ends space $(E,F)$ is \emph{uniformly} self-similar if
$M(E)$ is a single equivalence class homeomorphic to a Cantor set. A
surface $\Sigma$ is called \textit{uniformly self-similar} if
$(E(\Sigma),E^g(\Sigma))$ is uniformly self-similar and $\Sigma$ has
genus $0$ or infinite genus.
\end{definition}
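For example, the sphere minus a Cantor set is uniformly self-similar: its
ends space is a Cantor set in which all ends are equivalent, so $M(E)$ is
the whole Cantor set. By contrast, $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$ has ends space
homeomorphic to $\omega+1$, the punctures accumulating to the end at
infinity; this ends space is self-similar, but its single maximal end
means the surface is not uniformly self-similar.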
\begin{definition} \label{def:halfspaces}
For a uniformly self-similar surface $\Sigma$, we will define a
\textit{half-space} to be a subsurface $H \subset \Sigma$ such that
\begin{itemize}
\item[(i)] $H$ is a closed subset of $\Sigma$.
\item[(ii)] $H$ has a single connected, compact boundary component.
\item[(iii)] $E(H)$ and $E(\overline{H^c})$ both contain a maximal
end of $E(\Sigma)$.
\end{itemize}
\end{definition}
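For instance, in the sphere minus a Cantor set $C$, every end is maximal,
so any simple closed curve separating $C$ into two nonempty clopen pieces
cuts the surface into two half-spaces: each side is a closed subsurface
with a single compact boundary circle whose ends space contains maximal
ends of the whole surface.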
We state the following useful lemma.
\begin{lemma}[{\cite[Lemma 2.1]{FGM21}}] \label{lemma:clopensfromsurfs}
Let $\Sigma$ be a surface. Every clopen set $U$ of $E(\Sigma)$ is
induced by a connected subsurface of $\Sigma$ with a single boundary
circle. Consequently, if $\Sigma$ is uniformly self-similar, and both
$U, U^c$ contain maximal ends, then this subsurface is a half-space.
\end{lemma}
The following corollary follows easily from the above lemma and the fact
that $E(\Sigma)$ is homeomorphic to a subspace of the Cantor set.
\begin{corollary} \label{cor:nestedgoodhs}
Let $\Sigma$ be a uniformly self-similar surface, and let $x \in M(E)$.
There exists a sequence of nested half-spaces $S_1 \supseteq S_2
\supseteq \dots$ such that $\{x\} = \displaystyle \bigcap_i E(S_i)$ and
$\partial S_i$ is compact and connected for all $i$.
\end{corollary}
\begin{lemma} \label{lem:involhavetrans}
Let $\Sigma$ be a uniformly self-similar surface. Then, there exists a
half-space $H \subseteq \Sigma$, an involution $\tau$, and $\varphi \in
\Homeo^+(\Sigma)$ such that $\{\varphi^n(H)\}_{n \in \ensuremath{\mathbb{Z}}\xspace}$ are all
pairwise disjoint and $\varphi \in \langle \langle \tau \rangle
\rangle$. Moreover, $\varphi$ is a product of two conjugates of $\tau$,
and we can choose $\tau$ and $\varphi$ to fix some point in the
complement of $\displaystyle \bigcup_{n \in \ensuremath{\mathbb{Z}}\xspace} \varphi^n(H)$.
\end{lemma}
\begin{proof}
We first construct a somewhat explicit surface which is homeomorphic to
$\Sigma$. Let $E = E(\Sigma)$ and $M=M(E)$. Let $y, z \in M$ be
distinct points. Since $E$ is homeomorphic to a subspace of the Cantor
set, we can find disjoint clopen subsets $\{U_i \mid i \in \ensuremath{\mathbb{Z}}\xspace\}$ such
that
\begin{itemize}
\item $U_i \cap M \neq \emptyset$
\item $E = \{y, z\} \cup \bigcup_i U_i$
\item $y$ is an accumulation point of $\{U_i \mid i \leq 0\}$ but not
of $\{U_i \mid i \geq 0\}$, and $z$
is an accumulation point of $\{U_i \mid i \geq 0\}$ but not of $\{U_i \mid i \leq 0\}$.
\end{itemize}
By Lemma \ref{lemma:clopensfromsurfs}, there is a half-space $\Sigma_0
\subseteq \Sigma$ where $E(\Sigma_0) = U_0$. We let $S_i$ be a copy of
$\Sigma_0$ for each $i \in \ensuremath{\mathbb{Z}}\xspace$, and we let $\underline{S}$ be the
(oriented) infinite cylinder with countably many disjoint open discs
removed in a periodic fashion. (We make sure to choose discs with
disjoint closures.) See Figure \ref{fig:cylinder}. Let $\{C_i \mid i
\in \ensuremath{\mathbb{Z}}\xspace\}$ be the boundary components of $\underline{S}$. Let $S$ be the
surface obtained by gluing $C_i$ to $\partial S_i$ via some
homeomorphism $\psi_i: C_i \to \partial S_i$ which respects orientation
of the surfaces.
\begin{figure}
\caption{The surface $\underline{S}$, an infinite cylinder with a
periodic family of discs removed.}
\label{fig:cylinder}
\end{figure}
We first claim that $S \cong \Sigma$. By Theorem
\ref{thm:classification}, we need only prove that $S, \Sigma$ have the
same genus and that there is a homeomorphism of end spaces mapping
$E^g(S)$ to $E^g(\Sigma)$. We will implicitly use some results from
\cite{Ric63} without referencing them. Recall that $\Sigma$ has genus
$0$ or $\infty$, and in the latter case, a maximal end must be
accumulated by genus. Thus $S_i$ and $S$ have infinite genus if and
only if $\Sigma$ does. By Lemma \ref{lemma:allclopenshomeo}, we have
that $E(S_i) \cong U_0 \cong U_i$ (respecting genus accumulation). By
construction of $S$, all of $E(S_i)$ are clopen subsets of $E(S)$.
Moreover, for $i \in \ensuremath{\mathbb{Z}}\xspace$, let $D_i$ be disjoint curves in the cylinder
$\underline{S}$ such that they are all translates of each other,
separate the two ends of the cylinder, and the two sides of $D_i$
contain $\{C_j \mid j < i\}$ and $\{C_j \mid j \geq i\}$. See Figure \ref{fig:cylinder}.
Let $P_i^+,
P_i^-$ be the subsurfaces of $S$ on either side of $D_i$. Then, $P_i^+$
for $i \geq 0$ (resp. $P_i^-$ for $i \leq 0$) defines an end $z'$
(resp. $y'$) of $S$. Then, for $n \in \ensuremath{\mathbb{N}}\xspace$, $$E(S) = E(P_{-n}^-) \cup
E(P_{n}^+) \cup \bigcup_{i=-n}^{n-1} E(S_i),$$ and since $y'$ (resp.\
$z'$) is the only end lying in every $E(P_{-n}^-)$ (resp.\ every
$E(P_n^+)$), we have $E(S) = \{y', z'\} \cup \bigcup_i E(S_i)$.
Moreover, it is clear that the $E(S_i)$ accumulate to $y'$ but not $z'$
as $i \to -\infty$, and to $z'$ but not $y'$
as $i \to \infty$. Therefore, we may define a homeomorphism $E(S) \to
E(\Sigma)$ mapping $E(S_i) \to U_i$ and $\{y', z'\} \to \{y, z\}$.
Recall that the homeomorphism $E(S_i) \cong U_i$ maps ends accumulated
by genus to ends accumulated by genus. If $S$ has infinite genus, then
every $S_i$ has infinite genus and so $y', z'$ are accumulated by genus (as
must $y, z$ as they are maximal in $E(\Sigma)$). Consequently, $E(S)
\cong E(\Sigma)$ maps $E^g(S)$ to $E^g(\Sigma)$ and only $E^g(S)$ to
$E^g(\Sigma)$. Consequently, $S \cong \Sigma$.
We now construct an explicit involution $\tau$ of $S$ which normally
generates the desired $\varphi$. First, we define an involution
$\underline{\tau}$ on $\underline{S}$. We simply take the ``rotation''
about an axis piercing $D_0$ and interchanging the ends $y', z'$. This
induces a homeomorphism between pairs of curves $C_i$ and $C_j$, where
$j \ge 0$ and $i=-(j+1)$. For such a pair $i < j$ related by
$\underline{\tau}(C_i) = C_j$, define a homeomorphism $\tau_{i, j}: S_i
\to S_j$ such that
\[\tau_{i, j}|_{\partial S_i} = \psi_j \circ \underline{\tau} \circ
\psi_i^{-1}.\] For the same pair, define $\tau_{j, i}: S_j \to S_i$ as
the inverse of $\tau_{i, j}$. Note that
\[\tau_{j, i}|_{\partial S_j} = \psi_i \circ \underline{\tau}^{-1}
\circ \psi_j^{-1} = \psi_i \circ \underline{\tau} \circ \psi_j^{-1}\]
since $\underline{\tau}$ has order $2$. Thus, the $\tau_{i, j}$ agree
on the overlap with $\underline{\tau}$, and so we extend
$\underline{\tau}$ to a homeomorphism $\tau$ on all of $S$ via the
$\tau_{i, j}$. It is clear that $\tau$ has order $2$.
We can similarly define an involution $\sigma$ which is a ``rotation''
with axis piercing $D_1$. Then, $\sigma(\tau(S_i)) = S_{i+2}$. That is,
$\varphi = \sigma \circ \tau$ is the desired $H$-translation where $H =
S_0$. This establishes the lemma.
To show that we can choose $\tau$ and $\varphi$ to also fix a point
outside the $S_i$, we do the following. We homotope $D_0$ and $D_1$
within $\underline{S}$ towards each other until they meet tangentially
at one point. One can choose an involution $\tau$ that permutes the
$C_i$ in the same manner as above, maps $D_0$ to itself and fixes the
common point of $D_0 \cap D_1$. Similarly, $\sigma$ may be chosen to
map $D_1$ to itself and fix the common point of $D_0 \cap D_1$. Since
the new $\tau$ and $\sigma$ each permute the $C_i$ in the same manner
as before, the rest of the argument goes through, and $\varphi$ will
fix the same point. \qedhere
\end{proof}
\begin{remark} \label{rem:nondisplace}
Note that in the above proof, the translation $\varphi$ (in the
version without a fixed point) shows that a surface $\Sigma$ with
uniformly self-similar ends space and $0$ or infinite genus has no
non-displaceable compact subsurfaces. This also follows from
\cite[Lemmas 5.9 and 5.13]{APV21}, which prove it in the case where
$E(\Sigma)$ is merely self-similar, but our construction gives a
different perspective.
\end{remark}
We now show that the normal closure of an $H$-translation contains
$\Homeo(H, \partial H) < \Homeo^+(\Sigma)$ for some half-space $H$. The
proof technique is sometimes referred to as a ``swindle''.
\begin{lemma} \label{lem:swindle}
Let $\Sigma$ be uniformly self-similar, and let $\varphi$ have the
properties as described in Lemma \ref{lem:involhavetrans}. Then,
$\langle \langle \varphi \rangle \rangle$ contains $\Homeo(H,
\partial H)$. Moreover, every element of $\, \Homeo(H,\partial H)$
is a product of $\varphi$ and a conjugate of $\varphi^{-1}$.
\end{lemma}
\begin{proof}
Let $f \in \Homeo(H, \partial H) \subseteq \Homeo^+(\Sigma)$. We let
$\hat{f} = \prod_{i=0}^\infty \varphi^{-i} f \varphi^i$. This is
well-defined since $\varphi^{-i} f \varphi^i$ is supported on
$\varphi^{-i}(H)$ and these are pairwise disjoint for all $i \geq 0$
by assumption. Then, we have the following computation, which, again,
is valid because of disjoint supports. $$ \left[\hat f,
\varphi^{-1}\right] = \hat f \varphi^{-1} \hat f^{-1} \varphi =
\left(\prod_{i=0}^\infty \varphi^{-i} f \varphi^i\right) \varphi^{-1}
\left(\prod_{i=0}^\infty \varphi^{-i} f^{-1} \varphi^i\right) \varphi
= \left(\prod_{i=0}^\infty \varphi^{-i} f \varphi^i\right)
\left(\prod_{i=1}^\infty \varphi^{-i} f^{-1} \varphi^{i}\right) = f.
\qedhere$$
\end{proof}
\subsection{Half-space homeomorphisms generate}
\label{section:generation}
Our proof that homeomorphisms of half-spaces generate $\Homeo^+(\Sigma)$
relies on a few key facts about half-spaces in uniformly self-similar
surfaces. We record these as lemmas which we will prove below.
\begin{lemma} \label{lemma:halfspaceintersections}
Let $H_1, H_2 \subset \Sigma$ be two half-spaces. Then, one of $H_1
\cap H_2^c, H_1^c \cap H_2^c$ contains a half-space.
\end{lemma}
\begin{lemma} \label{lemma:homeohalfspaces}
If $H_1, H_2$ are two distinct half-spaces contained in a third
distinct half-space $H_3$ and both are disjoint from a fourth
half-space $H_4 \subset H_3$, then there exists $\varphi \in
\Homeo^+(\Sigma)$ supported on $H_3$ such that $\varphi(H_1) = H_2$.
\end{lemma}
\begin{lemma} \label{lemma:selfsimhalfspaces}
If $H \subseteq \Sigma$ is a half-space, then so is $\overline{H^c}$.
All half-spaces are homeomorphic via an ambient homeomorphism of
$\Sigma$. Every half-space contains two disjoint half-spaces.
\end{lemma}
Using these three lemmas we can prove one of our main results, namely,
that half-space homeomorphisms generate $\Homeo^+(\Sigma)$.
\begin{theorem} \label{thm:halfspacehomeogen}
Let $\Sigma$ be a uniformly self-similar surface, and let $H \subseteq
\Sigma$ be a half-space. Then, $\Homeo^+(\Sigma)$ is the normal
closure of the subgroup $\Homeo(H, \partial H)$. Furthermore, every
element of $\Homeo^+(\Sigma)$ is a product of at most $3$
homeomorphisms, each of which is conjugate to an element of $\,
\Homeo(H, \partial H)$.
\end{theorem}
\begin{proof}
Let $f \in \Homeo^+(\Sigma)$. First, note that by Lemma
\ref{lemma:selfsimhalfspaces}, any half-space supported homeomorphism
is conjugate into $\Homeo(H, \partial H)$. Thus, it suffices to show
$f$ is a product of at most $3$ half-space supported homeomorphisms.
Let $H_1 = H$ and $H_2 = f(H)$. We now apply Lemma
\ref{lemma:halfspaceintersections}, and first consider the case where
$H_1^c \cap H_2^c$ contains a half-space. By Lemma
\ref{lemma:selfsimhalfspaces}, $H_1^c \cap H_2^c$ contains two
disjoint half-spaces $H_3, H_4$. Applying Lemma
\ref{lemma:homeohalfspaces} to $H_1, H_2, \overline{H_3^c},$ and
$H_4$, we see that there is a homeomorphism $\varphi_1$, supported on
$\overline{H_3^c}$, such that $\varphi_1(H_2) = \varphi_1(f(H_1)) =
H_1$. By further composing by some $\varphi_2$ supported on $H_1$, we
can ensure $\varphi_2 \circ \varphi_1 \circ f$ restricts to the
identity on $H_1$. Finally, composing by an appropriate third
homeomorphism $\varphi_3$ supported on $\overline{H_1^c}$, we obtain
$\varphi_3 \circ \varphi_2 \circ \varphi_1 \circ f = \text{Id}$. Note
that $\varphi_2 \circ \varphi_1$ is supported on $\overline{H_3^c}$,
so in this case we only require two half-space supported
homeomorphisms.
Now, suppose we are in the case where $H_1 \cap H_2^c$ contains a
half-space. By Lemma \ref{lemma:selfsimhalfspaces}, we can assume $H_1
\cap H_2^c$ contains three disjoint half-spaces $H_3, H_4, H_5$. By
Lemma \ref{lemma:homeohalfspaces} applied to $H_2, H_4,
\overline{H_3^c},$ and $H_5$, there exists a homeomorphism $\psi$
supported on $\overline{H_3^c}$ such that $\psi(H_2) = H_5$. Since
$H_5 \subset H_1$, the subsurface $H_1^c \cap \psi(f(H_1))^c = H_1^c$
contains a half-space, and we are reduced to the previous case. In
this case, we see that $f$ is a product of three half-space supported
homeomorphisms. \qedhere
\end{proof}
We now prove the required lemmas.
\begin{proof}[Proof of Lemma \ref{lemma:halfspaceintersections}]
By definition, $E(\overline{H_2^c})$ contains some maximal end $x$. By
Corollary \ref{cor:nestedgoodhs}, there exist nested half-spaces $S_1
\supset S_2 \supset \dots$ such that $\bigcap_i E(S_i) = \{x\}$. Since
these half-spaces leave every compact set, eventually some $S_i$ does
not intersect $\partial H_1 \cup \partial H_2$. Since $x \in
E(\overline{H_2^c})$ and the $S_i$ are connected, either $S_i
\subseteq H_1 \cap H_2^c$ or $S_i \subseteq H_1^c \cap H_2^c$.
\qedhere
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:selfsimhalfspaces}]
The first statement follows immediately from the definition of
half-space. Since a half-space has a maximal end of $\Sigma$, it has
the same genus as $\Sigma$. Thus, by the classification of surfaces
and Lemma \ref{lemma:allclopenshomeo}, any two half-spaces are
homeomorphic. Since the (closures of) the complements are half-spaces
too, we can map the complement to the complement and extend the
homeomorphism to all of $\Sigma$.
By assumption, $E(H)$ contains some maximal end $x$ of $E(\Sigma)$.
Since $E(H)$ is a clopen set in $E(\Sigma)$ and the set of maximal
ends is a Cantor set, $E(H)$ contains another distinct maximal end
$y$. Applying Corollary \ref{cor:nestedgoodhs} to each of $x$ and $y$
and using the compactness of boundaries of half-spaces, we easily
deduce the existence of the required half-spaces. \qedhere
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:homeohalfspaces}]
The presence of the half-space $H_4$ guarantees that $E(\overline{H_3
\setminus H_1})$ and $E(\overline{H_3 \setminus H_2})$ both contain a
maximal end. Thus, by Lemma \ref{lemma:allclopenshomeo},
$$(E(\overline{H_3 \setminus H_1}), E^g(\overline{H_3 \setminus H_1}))
\cong (E(\overline{H_3 \setminus H_2}), E^g(\overline{H_3 \setminus
H_2})).$$ Clearly, the two subsurfaces have the same genus and finite
number of boundary components, and so $\overline{H_3 \setminus H_1}
\cong \overline{H_3 \setminus H_2}$. Similarly, by Lemma
\ref{lemma:selfsimhalfspaces}, $H_1 \cong H_2$. By arranging the
homeomorphisms to be identical on the overlapping boundary component,
we produce a homeomorphism $H_3 \to H_3$ mapping $H_1 \to H_2$ and
$\overline{H_3 \setminus H_1} \to \overline{H_3 \setminus H_2}$.
\qedhere
\end{proof}
\begin{theorem} \label{thm:unmarkedhomeo}
If $\Sigma$ is uniformly self-similar, then $\Homeo^+(\Sigma)$
\begin{itemize}
\item is normally generated by a single involution,
\item is normally generated by an $H$-translation,
\item is uniformly perfect.
\end{itemize}
Moreover, each element of $\, \Homeo^+(\Sigma)$ is a product of at
most $3$ commutators, $6$ $H$-translations, and $12$ involutions.
\end{theorem}
\begin{proof}
Combine Lemma \ref{lem:involhavetrans}, Lemma \ref{lem:swindle}, and
Theorem \ref{thm:halfspacehomeogen}. \qedhere
\end{proof}
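To make the counting in the ``moreover'' clause explicit, here is a sketch of the bookkeeping (our notation; the auxiliary maps $h_k$, $\hat f_k$ are those appearing in the proofs above):

```latex
% By Theorem \ref{thm:halfspacehomeogen}, f = g_1 g_2 g_3 with each
% g_k conjugate into \Homeo(H, \partial H); by Lemma \ref{lem:swindle},
% each g_k is a single commutator g_k = h_k [\hat f_k, \varphi^{-1}] h_k^{-1},
% i.e.\ a product of two conjugates of \varphi^{\pm 1}; by Lemma
% \ref{lem:involhavetrans}, \varphi = \sigma \circ \tau is a product of
% two involutions. Hence:
3 \text{ commutators}
\;\longrightarrow\;
3 \times 2 = 6 \text{ conjugates of } \varphi^{\pm 1}
\;\longrightarrow\;
6 \times 2 = 12 \text{ involutions}.
```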
By considering quotients of $\Homeo^+(\Sigma)$ onto the mapping class
group $\mcg(\Sigma)$ and the homeomorphism group of its ends space
$(E(\Sigma),E^g(\Sigma))$, we also derive the following corollaries.
Note that for a half-space $H \subset \Sigma$, $E(H)$ is a clopen set
containing a non-empty proper subset of $M(E(\Sigma))$.
\begin{corollary}
If $\Sigma$ is uniformly self-similar, then the statements of
Theorem \ref{thm:unmarkedhomeo} also hold for $\mcg(\Sigma)$.
\end{corollary}
\begin{corollary}
If $(E,F)$ is uniformly self-similar, then the statements of Theorem
\ref{thm:unmarkedhomeo} also hold for $\Homeo(E,F)$, where a
half-space $H \subset E$ is a clopen set containing a non-empty
proper subset of $M(E)$.
\end{corollary}
\begin{proof}
By \cite[Theorem 2]{Ric63}, there is a surface $\Sigma$ such that
$(E(\Sigma), E^g(\Sigma)) \cong (E, F)$ and the genus is $0$ if $F =
\emptyset$ and infinite if $F \neq \emptyset$. Thus $\Sigma$ is
uniformly self-similar when $(E,F)$ is. The corollary then follows
from Theorem \ref{thm:unmarkedhomeo} and surjectivity of
$\Homeo^+(\Sigma) \to \Homeo(E(\Sigma), E^g(\Sigma))$. \qedhere
\end{proof}
\section{Surfaces with a marked point}
\label{sec:genmarked}
The proof in the case of a marked surface is very similar to the unmarked
case, and we will use some of the same lemmas. Let $\Sigma$ be a
uniformly self-similar surface with a fixed basepoint $\ast \in \Sigma$.
We define half-space exactly as before, but distinguish between
\textit{marked half-spaces} which contain $\ast$ and \textit{unmarked
half-spaces} which don't. The main new lemma we require is the following.
\begin{lemma} \label{lemma:haveDTs}
Let $\Sigma$ be a uniformly self-similar surface with a marked point
$\ast \in \Sigma$. Let $H \subseteq \Sigma$ be an unmarked half-space.
Then, every Dehn twist in $\mcg(\Sigma, \ast)$ is contained in the
normal closure of $\mcg(H, \partial H)$.
\end{lemma}
\begin{remark}
For convenience and simplicity, we will conflate half-spaces and simple
closed curves with their ambient isotopy classes rel $\ast$ throughout
this section.
\end{remark}
\begin{proof}
Let $T_\gamma \in \mcg(\Sigma, \ast)$ be the Dehn twist about a simple
closed curve $\gamma$ (which avoids $\ast$). First, we consider the
case where $\gamma$ is nonseparating. Then, $\Sigma$ has infinite
genus. Since $\gamma$ is compact, Corollary \ref{cor:nestedgoodhs}
implies that $\gamma$ is contained in some half-space (or the closure
of its complement which is also a half-space) which we denote by $H$.
This case is concluded if $H$ is unmarked. Suppose instead $H$ is
marked. Then, since $\gamma$ is nonseparating, we can find a path from
$\partial H$ to $\ast$ which avoids $\gamma$. Deleting some small
regular neighborhood of this path from $H$, we obtain an unmarked
half-space containing $\gamma$.
Now, suppose $\gamma$ is a separating curve, and let $S_1, S_2
\subseteq \Sigma$ be the two surfaces on either side of $\gamma$. If
both $E(S_1), E(S_2)$ contain a maximal end of $\Sigma$, then they are
both half-spaces whose mapping class groups contain $T_\gamma$, and one
must be unmarked. Suppose then, w.l.o.g., that $E(S_1)$ contains no
maximal ends. If $S_1$ is also unmarked, then we can connect it by some
strip (avoiding $\ast$) to an unmarked half-space in $S_2$ to create a
new unmarked half-space $H'$ which contains $S_1$. Then $\mcg(H',
\partial H')$ contains $T_\gamma$.
The difficult case is when $S_1$ contains no maximal ends but is
marked. Using Corollary \ref{cor:nestedgoodhs} repeatedly, we can find
three disjoint half-spaces $H_1, H_2, H_3$ contained in $S_2$. Since
half-spaces have connected boundary, the complement of $H_1 \cup H_2
\cup H_3 \cup S_1$ is connected, and we may choose disjoint paths
$\alpha_1, \alpha_2$ in this complement connecting $\gamma = \partial
S_1$ to $\partial H_1, \partial H_2$ respectively. Let $L$ be a regular
neighborhood of $\gamma \cup \partial H_1 \cup \partial H_2 \cup
\alpha_1 \cup \alpha_2$ in this complement. Then, $L$ is a sphere with
$4$ boundary components, i.e.\ a lantern, where three boundary curves
are $\gamma, \partial H_1,$ and $\partial H_2$ and the fourth is some
simple closed curve $\beta$ bounding a half-space $H_4$ containing
$H_3$.
We now use the lantern relation to show that $T_\gamma$ is a product
of Dehn twists, each supported on an unmarked half-space. The lantern
relation implies that $T_\gamma$ is equal to a word in the Dehn twists
about $\partial H_1, \partial H_2, \beta$ and three other simple closed
curves $\delta_1, \delta_2, \delta_3$ each of which separates $L$ into
two three-holed spheres. Thus for all $i = 1, 2, 3$, each side of
$\delta_i$ must contain at least one of $H_1, H_2, H_3$, i.e.\ each
$\delta_i$ separates $\Sigma$ into one marked and one unmarked
half-space, and thus $\delta_i$ lies in an unmarked half-space.
Consequently, the twists about $\partial H_1, \partial H_2, \beta,$ and
the $\delta_i$ are all supported on an unmarked half-space. The lemma
follows. \qedhere
\end{proof}
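For reference, with the labeling above (and up to the orientation conventions, which vary by source), the lantern relation on $L$ reads:

```latex
% Lantern relation on the 4-holed sphere L: twists about the four
% boundary curves on one side, twists about the three interior curves
% \delta_1, \delta_2, \delta_3 on the other (boundary twists are
% central in \mcg(L)):
T_{\gamma}\, T_{\partial H_1}\, T_{\partial H_2}\, T_{\beta}
\;=\; T_{\delta_1}\, T_{\delta_2}\, T_{\delta_3},
\qquad\text{hence}\qquad
T_{\gamma} \;=\; T_{\delta_1} T_{\delta_2} T_{\delta_3}\,
T_{\beta}^{-1}\, T_{\partial H_2}^{-1}\, T_{\partial H_1}^{-1}.
```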
In the marked case, we replace Lemma \ref{lemma:homeohalfspaces} with
the following.
\begin{lemma} \label{lemma:homeohalfspacesv2}
If $H_1, H_2,$ and $H_3$ are disjoint unmarked half-spaces, then there
is a homeomorphism $\varphi \in \Homeo(\Sigma)$ supported on some
unmarked half-space $H_4$ such that $\varphi(H_1) = H_2$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:selfsimhalfspaces}, $H_3$ contains two unmarked
disjoint half-spaces $H_3', H_3''$. Since half-spaces have single
boundary components, the complement of $H_1 \cup H_2 \cup H_3' \cup
H_3''$ is connected, and we can attach $H_1$ to $H_2$ and $H_3'$ by two
strips disjoint from $H_3''$ and the marked point to create a
subsurface $H_4$ with a single boundary circle that contains $H_1,
H_2,$ and $H_3'$. Since both $E(H_4) \supset E(H_1)$ and $E(H_4^c)
\supset E(H_3'')$ contain a maximal end, $H_4$ is a half-space. We can
now apply Lemma \ref{lemma:homeohalfspaces} to $H_1, H_2, H_4,$ and
$H_3'$. \qedhere
\end{proof}
We can now prove the analogous theorem that half-space supported
homeomorphisms generate $\mcg(\Sigma, \ast)$.
\begin{theorem} \label{thm:unmarkedhalfspacegen}
Let $\Sigma$ be a uniformly self-similar surface with a marked point
$\ast \in \Sigma$, and let $H \subseteq \Sigma$ be an unmarked
half-space. Then, $\mcg(\Sigma, \ast)$ is generated by the normal
closure of $\mcg(H, \partial H)$.
\end{theorem}
\begin{proof}
Let $f \in \mcg(\Sigma, \ast)$. All unmarked half-spaces are the same
up to $\Homeo(\Sigma, \ast)$ by an argument nearly identical to that in
the proof of Lemma \ref{lemma:selfsimhalfspaces}. Therefore, it
suffices to show $f$ is a product of mapping classes supported on
unmarked half-spaces.
Let $H_1$ be an unmarked half-space, and let $C$ be a simple closed
curve such that $C$ and $\partial H_1$ bound an annulus containing
$\ast$ (in the interior). Let $H_2 = f(H_1)$. Then, by Lemma
\ref{lemma:halfspaceintersections}, either $H_1 \cap H_2^c$ or $H_1^c
\cap H_2^c$ contains a half-space, which we can choose to be unmarked,
by passing to a deeper half-space if necessary.
We first show that there is some mapping class $g_1$ in the normal
closure of $\mcg(H, \partial H)$ such that $g_1(f(H_1)) = H_1$ and $g_1
\circ f|_{H_1} = \id|_{H_1}$. Let us first consider the case where
$H_1^c \cap H_2^c$ contains an unmarked half-space. Then, by Lemma
\ref{lemma:selfsimhalfspaces}, $H_1^c \cap H_2^c$ contains two disjoint
unmarked half-spaces $H_3, H_4$. By Lemma
\ref{lemma:homeohalfspacesv2}, there are two mapping classes supported
on some unmarked half-spaces, one which maps $H_2$ to $H_3$ and another
which maps $H_3$ to $H_1$. By composing these maps with some
appropriate third mapping class supported on $H_1$, we obtain the
desired $g_1$. If instead $H_1 \cap H_2^c$ contains an unmarked
half-space, then $H_1 \cap H_2^c$ contains two disjoint unmarked
half-spaces $H_3, H_4$. By Lemma \ref{lemma:homeohalfspacesv2}, there
is some mapping class $h$ supported on an unmarked half-space such that
$h(f(H_1)) = h(H_2) = H_3 \subseteq H_1$. Thus $H_1^c \cap H_3^c =
H_1^c$ contains an unmarked half-space, the one bounded by $C$, and we
are reduced to the first case.
Let $C' = g_1(f(C))$. Then $C'$ and $\partial H_1$ bound an annulus
containing $\ast$. (Note that $C'$ need not be $C$ up to ambient
isotopy fixing $\ast$.) Let $S \subseteq \Sigma$ be a compact
subsurface with the following properties.
\begin{itemize}
\item $S$ contains the annulus bounded by $\partial H_1$ and $C$
and the annulus bounded by $\partial H_1$ and $C'$;
\item $S$ does not intersect the interior of $H_1$;
\item no boundary component of $S$ bounds a disc in $\Sigma$.
\end{itemize}
Within $S$, each of $C, C'$ is a separating curve which bounds an
annulus with $\partial H_1 \subset \partial S$ containing the marked
point. Consequently, $C$ and $C'$ cut off subsurfaces of $S$ of the
same genus and partition the boundary components of $S$ identically.
Thus, there is some mapping class in $\mcg(S, \partial S \cup
\{\ast\})$ mapping $C'$ to $C$. Since $\mcg(S, \partial S \cup
\{\ast\})$ is generated by Dehn twists, by Lemma \ref{lemma:haveDTs},
there is some $g_2$ in the normal closure of $\mcg(H, \partial H)$ such
that $g_2(g_1(f(C))) = C$ and $g_2 \circ g_1 \circ f|_{H_1} =
\id|_{H_1}$.
Let $H_0$ be the unmarked half-space bounded by $C$. Clearly,
$g_2(g_1(f(H_0))) = H_0$. Since the mapping class group of the annulus
with a marked point between $H_0$ and $H_1$ is generated by Dehn
twists, by Lemma \ref{lemma:haveDTs}, we can compose by some third
element $g_3$ in the normal closure of $\mcg(H, \partial H)$ such that
$g_3 \circ g_2 \circ g_1 \circ f = \id$. \qedhere
\end{proof}
We can now easily prove the analogous theorem for the mapping class group
of a marked uniformly self-similar surface. Note that we have no
statements about uniform perfection, or about a bound on the word length
of an element as a product of involutions, or about the homeomorphism
group. The first two are impossible by a result of J. Bavard \cite{Bav16}
in the case of $S^2$ minus a Cantor set. The proof fails to show every
element is a word of uniformly bounded length in involutions and
half-space supported homeomorphisms only because of the step where we map
$C'$ to $C$. The theorem is only proven for the mapping class group and
not the homeomorphism group because we use the lantern relation in the
proof of Lemma \ref{lemma:haveDTs}.
\begin{theorem}
Let $\Sigma$ be a uniformly self-similar surface with a marked point
$\ast \in \Sigma$. Then, $\mcg(\Sigma, \ast)$ is generated by
involutions. Moreover, $\mcg(\Sigma, \ast)$ is normally generated by a
single involution and is a perfect group.
\end{theorem}
\begin{proof}
Let $\tau$ and $\varphi$ be as in Lemma \ref{lem:involhavetrans}.
Lemma \ref{lem:swindle} applies equally to $\mcg(\Sigma, \ast)$ (with
an identical proof), and so $\langle \langle \tau \rangle \rangle$
contains $\mcg(H, \partial H)$ for some unmarked half-space $H$, and
all elements of $\mcg(H, \partial H)$ are a single commutator in
$\mcg(\Sigma, \ast)$. Theorem \ref{thm:unmarkedhalfspacegen} finishes the
proof. \qedhere
\end{proof}
\section{Self-similar but not uniformly}
\label{sec:bigabel}
It is natural to wonder whether the mapping class groups of surfaces
with a self-similar ends space and with genus $0$ or $\infty$ are
generated by involutions, are perfect, etc. It is already known that
the mapping class group of the one-ended,
infinite genus surface has abelianization containing an uncountable
direct sum of $\ensuremath{\mathbb{Q}}\xspace$'s \cite{DD21}. This surface fits into this category,
but perhaps is not a particularly compelling example since the results of
\cite{DD21} are for pure mapping class groups of infinite-type surfaces,
and for the one-ended infinite genus surface, the mapping class group
happens to coincide with the pure mapping class group. However, using a
covering trick and some of the results of \cite{DD21}, we can prove that
the abelianization of $\mcg(\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace)$ is similarly large.
\begin{proposition} \label{prop:flute}
$\mcg(\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace)$ surjects onto $\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$.
\end{proposition}
For the proof of the proposition, we need the following fact about
abelian groups, which follows from \cite[Theorems 21.3 and 23.1]{Fuc70}.
\begin{lemma} \label{lem:divisible}
Let $A$ be an abelian group. Suppose $A$ contains $\bigoplus_I \ensuremath{\mathbb{Q}}\xspace$ for
some non-empty set $I$. Then $A$ surjects onto $\bigoplus_I \ensuremath{\mathbb{Q}}\xspace$.
\end{lemma}
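A sketch of the argument behind the lemma, assuming the standard facts from \cite{Fuc70} that divisible abelian groups are injective as $\ensuremath{\mathbb{Z}}\xspace$-modules and hence direct summands wherever they embed:

```latex
% B := \bigoplus_I \mathbb{Q} is divisible, hence injective, hence a
% direct summand of any abelian group containing it:
B \;\le\; A
\;\;\Longrightarrow\;\;
A \;\cong\; B \oplus C \ \text{ for some } C \le A,
\qquad\text{and the projection }
A \twoheadrightarrow B = \bigoplus_I \ensuremath{\mathbb{Q}}\xspace
```

is the desired surjection.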
\begin{proof}[Proof of Proposition \ref{prop:flute}]
Let $\Sigma_L$ be the infinite genus surface with one end. This
surface admits a $2$-fold branched covering map onto $\ensuremath{\mathbb{R}}\xspace^2$, where
$\ensuremath{\mathbb{R}}\xspace^2 = \Sigma_L/D$ and $D = \ensuremath{\mathbb{Z}}\xspace/2\ensuremath{\mathbb{Z}}\xspace$ acts by an involution. See
Figure \ref{fig:branchcover}. More
formally, one can construct this cover by gluing infinitely many
copies of a $2$-fold branched cover of an annulus by a $2$-holed torus
and one copy of a $2$-fold branched cover of a disc by a $1$-holed
torus. Let $\Sigma_F$ be $\ensuremath{\mathbb{R}}\xspace^2$ with the branch points removed, i.e.\
$\Sigma_F \cong \ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$, and let $\Sigma_{PL}$ be $\Sigma_L$ with the
branch points removed. Then, $\Sigma_{PL} \to \Sigma_F$ is a regular
degree $2$ cover with deck group $D$.
\begin{figure}
\caption{The surface $\Sigma_L$ admits an involution which gives
a degree $2$ branched cover of $\ensuremath{\mathbb{R}}\xspace^2$.}
\label{fig:branchcover}
\end{figure}
Choose marked points $\tilde{\ast} \in \Sigma_{PL}$ and $\ast \in
\Sigma_F$. We first show that there is a lifting homomorphism
$\mcg(\Sigma_F, \ast) \to \mcg(\Sigma_{PL}, \tilde{\ast})$, defined by
lifting representative homeomorphisms. Since these are mapping class
groups fixing marked points, it is a straightforward consequence of
covering space theory that such a homomorphism exists and is unique
provided the action of $\mcg(\Sigma_F, \ast)$ preserves the subgroup $K
= \ker(\pi_1(\Sigma_F, \ast) \to D)$. Since the punctures of $\Sigma_F$
came from branch points of degree $2$, any simple loop in
$\pi_1(\Sigma_F, \ast)$ which encloses a single puncture does not lift
to a closed curve in $\Sigma_{PL}$ and so must map to the nontrivial
element of $D$. We can choose a basis $\{\beta_n\}_{n \in \ensuremath{\mathbb{N}}\xspace}$ of the
free group $\pi_1(\Sigma_F, \ast)$ consisting entirely of simple loops
each enclosing a single puncture. Consequently, $K$ consists precisely
of the words of even length in this generating set. For any mapping
class $f \in \mcg(\Sigma_F, \ast)$, the set $\{f(\beta_n)\}_{n \in \ensuremath{\mathbb{N}}\xspace}$
is another generating set of simple loops enclosing single
punctures, and for the same reasons $K$ consists of the words of even
length in these generators. It is then clear that $f(K) = K$.
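The parity argument can be illustrated with a small toy model (hypothetical: finitely many generators, whereas in the text the generating set $\{\beta_n\}$ is countably infinite): each generator maps to the nontrivial element of $D \cong \ensuremath{\mathbb{Z}}\xspace/2\ensuremath{\mathbb{Z}}\xspace$, the kernel $K$ consists of the even-length words, and any substitution sending generators to odd-length words preserves $K$.

```python
# Toy model of the parity argument: words are lists of generator
# indices (inverses would count the same for parity purposes).
def to_D(word):
    """Image of a word in D = Z/2: each generator maps to 1."""
    return len(word) % 2

def apply_endo(word, images):
    """Substitute each generator by its image word."""
    out = []
    for g in word:
        out.extend(images[g])
    return out

# A substitution sending each generator to an odd-length word,
# mimicking f(beta_n): a simple loop around a single puncture.
images = {0: [1], 1: [0, 2, 4], 2: [3], 3: [2, 0, 1], 4: [4]}

w = [0, 1, 2, 3]                         # even length, so w lies in K
assert to_D(w) == 0                      # w is in the kernel K
assert to_D(apply_endo(w, images)) == 0  # its image stays in K
```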
We now have a lifting homomorphism $\mcg(\Sigma_F, \ast) \to
\mcg(\Sigma_{PL}, \tilde{\ast})$. Since the points deleted from
$\Sigma_L$ are isolated, $\mcg(\Sigma_{PL}, \tilde{\ast})$ preserves
that set of ends, and we have a well-defined forgetful map
$\mcg(\Sigma_{PL}, \tilde{\ast}) \to \mcg(\Sigma_L, \tilde{\ast})$. In
\cite{DD21}, explicit mapping classes are constructed which project to
nontrivial elements in the abelianization of $\mcg(\Sigma_L,
\tilde{\ast})$. (See \cite[Theorem 6.1]{DD21}.) Specifically, if
$\{\gamma_n\}_{n \in \ensuremath{\mathbb{N}}\xspace}$ is a sequence of distinct, pairwise disjoint,
separating, simple closed curves where each $\gamma_n$ separates the
marked point from the single end of $\Sigma_L$, then the subgroup
topologically generated by the twists $\{T_{\gamma_n}\}_{n \in \ensuremath{\mathbb{N}}\xspace}$
projects to a group containing a $\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$. One can
easily find such $\gamma_n$ which double cover simple closed curves
$\alpha_n$ in $\Sigma_F$, and so $T_{\gamma_n}$ is the lift of
$T_{\alpha_n}^2$. (E.g.\ one can choose $\alpha_1$ to be a simple closed
curve bounding a disc with the marked point and three punctures
and then choose $\alpha_i, \alpha_{i+1}$ to always bound an annulus
with two punctures. Then the $\gamma_i$ are the preimages of the
$\alpha_i$ under the covering map.) Thus, $\mcg(\Sigma_F, \ast)$ maps
onto the same abelian group (generated by the $T_{\gamma_n}$); i.e.\
$\HH_1(\mcg(\Sigma_F, \ast); \ensuremath{\mathbb{Z}}\xspace)$ has a quotient $A$ containing
$\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$. By Lemma \ref{lem:divisible}, $A$ maps
onto $\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$, so we also get a surjection
$\varphi: \HH_1(\mcg(\Sigma_F, \ast); \ensuremath{\mathbb{Z}}\xspace) \to
\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$.
To pass to $\mcg(\Sigma_F)$, we borrow a technique from \cite{DD21}.
Consider the Birman short exact sequence (see \cite{DD21})
$$ 1 \longrightarrow
\pi_1(\Sigma_F, \ast) \longrightarrow \mcg(\Sigma_F, \ast)
\longrightarrow \mcg(\Sigma_F) \longrightarrow 1.$$
Abelianization is right exact, so we get the commutative diagram
$$
\begin{tikzcd}
\bigoplus_{\aleph_0} \ensuremath{\mathbb{Z}}\xspace \arrow[r] \arrow[d,"\id"]
&
\HH_1(\mcg(\Sigma_F, \ast); \ensuremath{\mathbb{Z}}\xspace) \arrow[r] \arrow[d, "\varphi"]
&
\HH_1( \mcg(\Sigma_F); \ensuremath{\mathbb{Z}}\xspace) \arrow[r] \arrow[d,"\bar\varphi"] & 0\\
\bigoplus_{\aleph_0} \ensuremath{\mathbb{Z}}\xspace \arrow[r]
&
\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace \arrow[r]
&
P \arrow[r]
& 0
\end{tikzcd}
$$
The image of $\bigoplus_{\aleph_0} \ensuremath{\mathbb{Z}}\xspace$ in $\bigoplus_{2^{\aleph_0}}
\ensuremath{\mathbb{Q}}\xspace$ still misses a copy of $\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$, so the
quotient $P$ still contains a copy of $\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$.
The map $\bar\varphi$ is surjective, so we can conclude $\HH_1(
\mcg(\Sigma_F); \ensuremath{\mathbb{Z}}\xspace)$ surjects onto $\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$, again
by Lemma \ref{lem:divisible}. \qedhere
\end{proof}
We now produce many more classes of examples by building surfaces that
naturally map onto the one-ended infinite genus surface $\Sigma_L$ or
$\Sigma_F = \ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$.
\begin{theorem} \label{thm:bigabel}
Suppose $\Sigma$ is a surface of one of the following two types.
\begin{itemize}
\item[(1)] $E(\Sigma)$ has exactly one end accumulated by genus.
\item[(2)] $\Sigma$ has genus $0$ and $E(\Sigma)$ has
one maximal end $y$, such that in the partial order on $[E]$, the
class of $y$ has an immediate predecessor $E(x)$ with countably
infinite cardinality.
\end{itemize}
Then $\mcg(\Sigma)$ maps onto $\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$.
\end{theorem}
\begin{proof}
Choose a marked point $\ast$ on $\Sigma$. Note that by the same trick
of using the Birman short exact sequence $$ 1 \longrightarrow
\pi_1(\Sigma, \ast) \longrightarrow \mcg(\Sigma, \ast) \longrightarrow
\mcg(\Sigma) \longrightarrow 1,$$ it is enough to show the
abelianization of $\mcg(\Sigma,\ast)$ maps onto
$\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$.
The statement for the one-ended infinite genus surface $\Sigma_L$ is by
\cite{DD21}. The statement for $\Sigma_F = \ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$ is
Proposition \ref{prop:flute}. In all other cases, we will consider an
appropriate map to one of these two surfaces.
On $\Sigma_L$, we will say a sequence of simple closed curves
$\{\gamma_n\}_{n \in \ensuremath{\mathbb{N}}\xspace}$ is \emph{good} if the curves are distinct,
pairwise disjoint, separating, and each curve separates the maximal end
of $\Sigma_L$ from the marked point. On $\Sigma_F$, a sequence of
curves $\{\alpha_n\}_{n \in \ensuremath{\mathbb{N}}\xspace}$ is \emph{good} if under the covering
map $(\Sigma_L,\ast) \to (\Sigma_F,\ast)$, each $\alpha_n$ is double
covered by a curve $\gamma_n$ and the sequence $\{\gamma_n\}$ is good.
By \cite{DD21} and the proof of Proposition \ref{prop:flute}, the
subgroup topologically generated by Dehn twists about a good sequence
of curves maps onto $\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$ under the map to the
abelianization of the mapping class group.
First, assume $\Sigma$ is of the first type. The proof in the other
case will be similar. The assumption on $\Sigma$ means we have a map
$(\Sigma,\ast) \to (\Sigma_L,\ast)$ by forgetting all but the only end
accumulated by genus. This induces a well-defined map
$\mcg(\Sigma,\ast) \to \mcg(\Sigma_L,\ast)$, since this end is
invariant under $\mcg(\Sigma, \ast)$. By the previous paragraph, it is
enough to exhibit a sequence of pairwise disjoint curves
$\{\alpha_n\}_{n \in \ensuremath{\mathbb{N}}\xspace}$ on $\Sigma$ whose image under the forgetful
map forms a good sequence on $\Sigma_L$. To do this we will represent
$\Sigma$ in an explicit way as described below.
\begin{figure}
\caption{Building $\Sigma$ of the first type.}
\label{fig:lochends}
\end{figure}
Identify $S^2 = \ensuremath{\mathbb{R}}\xspace^2\cup \{\infty\}$ with base point $\infty$. We
will construct $\Sigma$ from $S^2$ by removing points from $\ensuremath{\mathbb{R}}\xspace^2
\subset S^2$ and attaching handles appropriately. Let $K \subset [0,1]$
be the standard Cantor set. Recall $E(\Sigma)$ is homeomorphic to a
closed subset of $K$. Since the homeomorphism group of $K$ acts
transitively, we can realize $E(\Sigma)$ as a closed subset $E \subset
K$ with the only end accumulated by genus at $0$. By \cite{Ric63}, the
ends space of $S^2-E$ is homeomorphic to $E$. It remains to attach
handles to $\ensuremath{\mathbb{R}}\xspace^2 - E$ so that the handles will only accumulate onto the
origin. To this end, choose a sequence $\{y_n\}_{n \in \ensuremath{\mathbb{N}}\xspace} \subset
[0,1] - K$ such that $y_n \to 0$ monotonically. In particular,
$\{y_n\} \cap E = \emptyset$. Let $d_n = y_n-y_{n+1}$. For each $n$,
let $T_n$ be a torus with one boundary component. Let $z_n$ be the
midpoint of $[y_{n+1},y_n]$. In $\ensuremath{\mathbb{R}}\xspace^2$, let $p_n = (-z_n, 0)$, and
$B_n$ be the open ball of diameter $d_n/2$ centered at $p_n$. Now
remove each $B_n$ from $\ensuremath{\mathbb{R}}\xspace^2$ and attach $T_n$ by gluing $\partial T_n$
to $\partial B_n$. Let $\Sigma'$ be the resulting surface with marked
point $\infty$. By construction, the tori accumulate only onto the
origin. It then follows from the classification of surfaces that
$\Sigma'$ is homeomorphic to $\Sigma$, and we can make this
homeomorphism take
$\infty$ to $\ast$. By filling in all of $E$ except the origin, we get
a marked surface $(\Sigma_L',\infty)$ homeomorphic to
$(\Sigma_L,\ast)$, and a representation of the forgetful map
$(\Sigma,\ast) \to (\Sigma_L,\ast)$. Using this picture, it is now easy
to find the curves $\alpha_n$ which we take to be the circle of radius
$y_n$ centered at the origin. By construction, the circles
$\{\alpha_n\}$ avoid $E$, are pairwise disjoint, and each separates the
origin from $\infty$. Furthermore, since there is a handle between two
consecutive circles $\alpha_n$ and $\alpha_{n+1}$, namely $T_{n+1}$,
these circles remain topologically distinct after filling in all of
$E-\{0\}$. This finishes the proof in this case.
Now suppose $\Sigma$ is of the second type. We first claim that, by
forgetting all but the maximal end of $E(\Sigma)$ and the class $E(x)$
of its immediate predecessor, we get a map $(\Sigma,\ast) \to
(\Sigma_F,\ast)$. Taking the same approach as above, realize
$E(\Sigma)$ as a closed subset $E \subset K$ with the maximal end at
the origin. By \cite{Ric63}, the surface $(S^2\setminus E,\infty)$ is
homeomorphic to $(\Sigma,\ast)$. We claim the origin is the only
accumulation point of $E(x)$. Since $E(x)$ has no successor except the
origin, any other accumulation point of $E(x)$ must be equivalent to
$x$. But then every point in $E(x)$ is an accumulation point of $E(x)$.
This makes $E(x) \cup \{0\}$ a closed and perfect subset of $K$, so it
is homeomorphic to $K$, contradicting our assumption that $E(x)$ has
countable cardinality. Thus, for any compact interval $I \subset
(0,1]$, $I\cap E(x)$ has finite cardinality, so we can enumerate $E(x)$
as a decreasing sequence $\{x_n\}_{n \in\ensuremath{\mathbb{N}}\xspace} \subset E$ converging to
$0$. This shows $\left( S^2 \setminus (E(x)\cup \{0\} ), \infty \right)$
\cong (\Sigma_F,\ast)$. Since mapping classes induce homeomorphisms
of the ends space which, as noted in Section \ref{sec:preliminaries},
preserve the equivalence class of each end,
we obtain a well-defined map $\mcg(\Sigma,
\ast) \to \mcg(\Sigma_F, \ast)$. To finish, take any point $y_n \in
[x_{n+1},x_n]$ such that $\{y_n\} \cap E = \emptyset$. Then the
circles $\{\alpha_n\}_{n \in \ensuremath{\mathbb{N}}\xspace}$ of radius $y_n$ centered at the
origin are pairwise disjoint and we can extract from them a subsequence
that projects to a good sequence of curves on $\Sigma_F$. This finishes
the proof. \qedhere
\end{proof}
\begin{remark}
Note that for $\Sigma$ of the second type in Theorem \ref{thm:bigabel},
we do not need the maximal end $y$ to have a unique immediate
predecessor. This is because the mapping class group always preserves
equivalence classes of ends, so even if $y$ has other immediate
predecessors, the map forgetting all ends except $y$ and $E(x)$ still
induces a well-defined homomorphism on the level of mapping class
groups.
\end{remark}
\begin{remark}
In our setting above, it seems plausible that the forgetful map from
$\mcg(\Sigma)$ to either $\mcg(\Sigma_L)$ or $\mcg(\Sigma_F)$ is
surjective, but we will not pursue that statement here.
\end{remark}
We record some consequences of Theorem \ref{thm:bigabel}.
\begin{corollary} \label{cor:bigabel}
Suppose $\Sigma$ is a surface that satisfies one of the descriptions in
Theorem \ref{thm:bigabel}. Let $\ast \in \Sigma$ be a marked point. Let
$G$ be either $\Homeo^+(\Sigma)$, $\mcg(\Sigma)$,
$\Homeo^+(\Sigma,\ast)$, or $\mcg(\Sigma,\ast)$. Then $G$ is not
perfect, is not generated by torsion elements, and does not have the
automatic continuity property.
\end{corollary}
\begin{proof}
Since a Polish group is separable, it can have at most
$\mathfrak{c}=2^{\aleph_0}$ continuous epimorphisms to $\ensuremath{\mathbb{Q}}\xspace$. But
$\bigoplus_{2^{\aleph_0}} \ensuremath{\mathbb{Q}}\xspace$ has $2^{\mathfrak{c}}$ epimorphisms to
$\ensuremath{\mathbb{Q}}\xspace$. So, by Theorem \ref{thm:bigabel}, $\mcg(\Sigma)$ is not perfect,
is not generated by torsion elements, and does not have the automatic
continuity property. These three properties are inherited by quotients,
so $\Homeo^+(\Sigma)$ also cannot have any of these properties. The
same argument applies to a marked $\Sigma$. \qedhere
\end{proof}
\begin{remark}
If $E$ is a countable ends space homeomorphic to $\omega^\alpha + 1$,
for some countable successor ordinal $\alpha$, then $\Sigma =S^2
\setminus E$ is a surface of type $2$ of Theorem \ref{thm:bigabel}, by
\cite[Proposition 4.3]{MR21}. This gives Theorem \ref{introthm4} of the
introduction.
\end{remark}
\subsection{Topological generation by involutions}
\begin{theorem} \label{thm:topgen}
Let $\Sigma$ be either $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$ or the infinite genus
surface with one end. Then $\mcg(\Sigma)$ is topologically generated by
involutions and is the topological closure of the normal closure of a
single involution. Consequentially, $\HH^1(\mcg(\Sigma),\ensuremath{\mathbb{Z}}\xspace)=0$.
\end{theorem}
\begin{proof}
We first focus on $\Sigma = \ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$. The beginning of the
proof is very similar to that of Theorem \ref{thm:unmarkedhomeo}. Note
that $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{N}}\xspace$ is homeomorphic to $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{Z}}\xspace^2$. This
is because both surfaces have genus $0$, and their ends spaces are
homeomorphic. The advantage of viewing the surface as $\ensuremath{\mathbb{R}}\xspace^2 \setminus
\ensuremath{\mathbb{Z}}\xspace^2$ is as follows.
Let $\tau$ be the rotation in the plane by angle $\pi$ centered at the
origin, i.e. $\tau(x+iy) = e^{i\pi}(x+iy)$. We also have the
translation $\phi(x+iy) = (x+1)+iy$. Both maps preserve $\ensuremath{\mathbb{Z}}\xspace^2$, so they
induce homeomorphisms of $\Sigma$, where $\tau$ has order $2$. One
checks that $[\phi,\tau] = \phi \tau \phi^{-1} \tau = \phi^2.$
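To make this concrete, one can verify the identity pointwise (a routine check, spelled out here for the reader): for $z = x+iy$,
\[
(\tau\phi^{-1}\tau)(z) = \tau\bigl(\phi^{-1}(-z)\bigr) = \tau(-z-1) = z+1 = \phi(z),
\]
so that $[\phi,\tau](z) = \phi\bigl((\tau\phi^{-1}\tau)(z)\bigr) = \phi(\phi(z)) = \phi^{2}(z)$.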
We define a \emph{half-space} of $\Sigma$ to be a closed subset $H
\subset \Sigma$, such that $\partial H$ is a properly embedded simple
arc joining infinity to itself, and both $H$ and $H^c$ contain
infinitely many punctures (isolated ends) of $\Sigma$. We will consider
an explicit half-space in $\Sigma$. Let $h(x) = \sec(\pi x)-.5$ with
domain $(-.5,.5)$. The graph of $h(x)$ is a convex curve that misses
all of $\ensuremath{\mathbb{Z}}\xspace^2$ and is contained in the vertical strip $\{(x,y): -.5 \le
x \le .5\}$. See Figure~\ref{fig:flute}. The set $H= \{(x,y) \in \Sigma:
y \ge h(x)\}$ is a half-space, and $\phi^2$ is an $H$-free translation,
in the sense that $\{ \phi^{2n} H\}_{n \in \ensuremath{\mathbb{Z}}\xspace}$ are pairwise disjoint.
Therefore, with the same swindle as before, we obtain
\[\Homeo(H,\partial H) \le \langle \langle \phi^2 \rangle \rangle \le
\langle \langle \tau \rangle \rangle.\]
While the swindle still works, the rest of the proof for Theorem
\ref{thm:unmarkedhomeo} does (and should) not work. The only statement
that seems to fail is Lemma \ref{lemma:halfspaceintersections} (and so
Theorem \ref{thm:halfspacehomeogen} also fails in this case).
\begin{figure}
\caption{The half-space $H$ in $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{Z}}\xspace^2$.}
\label{fig:flute}
\end{figure}
We now move to the mapping class group. As before, to simplify the
discussion, we will conflate half-spaces and simple closed curves with
their ambient isotopy classes. We will keep denoting by $\phi$ and
$\tau$ their mapping classes.
Consider the short exact sequence $$ 1 \to \pmcg(\Sigma) \to
\mcg(\Sigma) \to \Homeo(E(\Sigma)) \to 1,$$ where $\pmcg(\Sigma)$ is
called the \emph{pure mapping class group}, i.e.\ the subgroup fixing
each end of $\Sigma$. Since $\Sigma$ has no genus, by \cite{PV18},
$\pmcg(\Sigma) = \overline{\pmcg_c(\Sigma)}$, where $\pmcg_c(\Sigma)$
is the subgroup of compactly supported mapping classes. Since Dehn
twists generate the pure mapping class group of any compact surface,
$\pmcg(\Sigma)$ is topologically generated by Dehn twists. The goal now
is to show every Dehn twist in $\mcg(\Sigma)$ is contained in the
normal closure of $\mcg(H, \partial H)$, and that the normal closure of
$\mcg(H,\partial H)$ surjects onto $\Homeo(E(\Sigma))$.
We first deal with the Dehn twists. Let $\alpha \subset \Sigma$ be any
simple closed curve. Then $\alpha$ bounds a topological disk containing
finitely many points of $\ensuremath{\mathbb{Z}}\xspace^2$. Choose a simple closed curve $\beta
\subset H$ that bounds a disk containing an equal number of points of $\ensuremath{\mathbb{Z}}\xspace^2$. We can find
a homeomorphism $f \in \Homeo^+(\Sigma)$, such that $f(\alpha)=\beta$.
This is simply the change-of-coordinate principle made possible by the
classification of surfaces. We now have
$$T_\alpha = T_{f^{-1}(\beta)} = f^{-1} T_\beta f \in \langle \langle
\mcg(H,\partial H) \rangle \rangle.$$
To show $\langle \langle \mcg(H,\partial H) \rangle \rangle$ surjects
onto $\Homeo(E(\Sigma))$, we produce sufficiently many permutations of
non-maximal ends. First note that $E(\Sigma)$ has exactly one maximal
end, represented by $\infty$, which must be invariant under any
homeomorphism. Every other end is isolated, so $\Homeo(E(\Sigma))$ is
nothing other than the permutation group $\Sym(\ensuremath{\mathbb{Z}}\xspace^2)$ on $\ensuremath{\mathbb{Z}}\xspace^2$. Within
$H$, pair off infinitely many punctures/ends $\{(x_{i, 1}, x_{i,
2})\}_{i \in I}$ such that $x_{i, 2}$ is directly above $x_{i, 1}$
and all the pairs are pairwise disjoint. It is clear that $\mcg(H,
\partial H)$ contains a mapping class $f$ which transposes all pairs
simultaneously. Note that $\bigcup_{i \in I}\{(x_{i, 1}, x_{i, 2})\}$ is both infinite and co-infinite in
$E(\Sigma)$. Since by \cite{Ric63}, $\mcg(\Sigma)$ surjects onto
$\Homeo(E(\Sigma)) = \Sym(\ensuremath{\mathbb{Z}}\xspace^2)$, the image of $\langle \langle
\mcg(H,\partial H) \rangle \rangle$ in $\Sym(\ensuremath{\mathbb{Z}}\xspace^2)$ contains all order
$2$ permutations supported on infinite, co-infinite subsets. It is
straightforward to show this set generates $\Sym(\ensuremath{\mathbb{Z}}\xspace^2)$. In summary, we
have shown $\langle \langle \mcg(H,\partial H) \rangle \rangle$
topologically generates $\pmcg(\Sigma)$ and surjects onto
$\Homeo(E(\Sigma))$. This yields
\[ \mcg(\Sigma) = \overline{\langle \langle \mcg(H,\partial H) \rangle
\rangle} = \overline{\langle \langle \phi^2 \rangle \rangle} =
\overline{\langle \langle \tau \rangle \rangle}.\]
To go from $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{Z}}\xspace^2$ to the one-ended infinite genus
surface $\Sigma_L$ we observe that instead of removing the integer
lattice points from $\ensuremath{\mathbb{R}}\xspace^2$, we can remove a small disk from each
lattice point and glue on a handle to get a surface $\Sigma'$
homeomorphic to $\Sigma_L$. Furthermore, we can make sure $\tau$ and
$\phi$ preserve $\Sigma'$. A half-space in $\Sigma'$ is simply a closed
component of a dividing arc that cuts off two components of infinite
genus. The explicit half-space $H$ we defined for $\ensuremath{\mathbb{R}}\xspace^2 \setminus \ensuremath{\mathbb{Z}}\xspace^2$
can also be made into a half-space here. Then running the same argument
as above and observing that $\pmcg(\Sigma') = \mcg(\Sigma')$ completes
the proof.
The last statement about the cohomology of these groups follows from
the fact that any homomorphism from a Polish group to $\ensuremath{\mathbb{Z}}\xspace$ is
automatically continuous \cite{Dud61}.\qedhere
\end{proof}
\begin{center}
\begin{tabular}{|p{2.1in}@{\qquad\qquad\qquad}|p{2.1in}}
Justin Malestein
\newline
Department of Mathematics
\newline
University of Oklahoma
\newline
\texttt{justin.malestein@ou.edu}
&
Jing Tao
\newline
Department of Mathematics
\newline
University of Oklahoma
\newline
\texttt{jing@ou.edu}
\end{tabular}
\end{center}
\end{document}
\begin{document}
\title{An extended Hilbert scale and its applications}
\author[V. Mikhailets]{Vladimir Mikhailets}
\address{Institute of Mathematics of the National Academy of Sciences of Ukraine, 3 Tereshchen\-kivs'ka, Kyiv, 01024, Ukraine}
\email{mikhailets@imath.kiev.ua}
\author[A. Murach]{Aleksandr Murach}
\address{Institute of Mathematics of the National Academy of Sciences of Ukraine, 3 Tereshchen\-kivs'ka, Kyiv, 01024, Ukraine}
\email{murach@imath.kiev.ua}
\author[T. Zinchenko]{Tetiana Zinchenko}
\address{5d Mittelstr., Oranienburg, 16515, Germany}
\email{zinchenkotat@ukr.net}
\subjclass[2010]{46B70, 46E35, 47A40}
\keywords{Hilbert scale, interpolation space, interpolation with function parameter, interpolational inequality, generalized Sobolev space, spectral expansion}
\thanks{This work is supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No 873071 (SOMPATY: Spectral Optimization: From Mathematics to Physics and Advanced Technology).}
\begin{abstract}
We propose a new viewpoint on Hilbert scales extending them by means of all Hilbert spaces that are interpolation ones between spaces on the scale. We prove that this extension admits an explicit description with the help of $\mathrm{OR}$-varying functions of the operator generating the scale. We also show that this extended Hilbert scale is obtained by the quadratic interpolation (with function parameter) between the above spaces and is closed with respect to the quadratic interpolation between Hilbert spaces. We give applications of the extended Hilbert scale to interpolational inequalities, generalized Sobolev spaces, and spectral expansions induced by abstract and elliptic operators.
\end{abstract}
\maketitle
\section{Introduction}\label{sec1}
Hilbert scales (above all, the Sobolev scale) play an important role in mathematical analysis and the theory of differential equations; see, e.g., the classical monographs \cite{Berezansky68, Hermander85iii, Lax06, LionsMagenes72}, surveys \cite{Agranovich94, Agranovich97, Eidelman94}, and recent book \cite{KoshmanenkoDudkin16}. Such scales are built with respect to an arbitrarily chosen Hilbert space $H$ and a positive definite self-adjoint unbounded operator $A$ acting in this space. As a result, we obtain the Hilbert scale $\{H^{s}_{A}:s\in\mathbb{R}\}$, where $H^{s}_{A}$ is the completion of the domain of $A^{s}$ in the norm $\|A^{s}u\|_{H}$ of a vector~$u$. This scale has the following fundamental property: if $0<\theta<1$, then the mapping $\{H^{s_0}_{A},H^{s_1}_{A}\}\mapsto H^{s}_{A}$, with $s_0<s_1$ and $s:=(1-\theta)s_0+\theta s_1$, is an exact interpolation functor of type $\theta$ \cite[Theorem~9.1]{KreinPetunin66}. Concerning a linear operator $T$ bounded on both spaces $H^{s_0}_{A}$ and $H^{s_1}_{A}$, this means that $T$ is also bounded on $H^{s}_{A}$ and that the norms of $T$ on these spaces satisfy the inequality
\begin{equation*}
\|T:H^{s}_{A}\to H^{s}_{A}\|\leq
\|T:H^{s_0}_{A}\to H^{s_0}_{A}\|^{1-\theta}\,
\|T:H^{s_1}_{A}\to H^{s_1}_{A}\|^{\theta}.
\end{equation*}
(An analogous property is fulfilled for bounded linear operators that act on pairs of different spaces belonging to two Hilbert scales.) Hence, every space $H^{s}_{A}$ subject to $s_0<s<s_1$ is an interpolation space between $H^{s_0}_{A}$ and $H^{s_1}_{A}$. However, the class of such interpolation Hilbert spaces is far broader than the section $\{H^{s}_{A}:s_0\leq s\leq s_1\}$ of the Hilbert scale.
It is therefore natural to consider the extension of this scale by means of all Hilbert spaces that are interpolation ones between some spaces $H^{s_0}_{A}$ and $H^{s_1}_{A}$, where the numbers $s_0<s_1$ range over $\mathbb{R}$. Such an extended Hilbert scale is an object of our investigation. We will show that this scale admits a simple explicit description with the help of $\mathrm{OR}$-varying functions of $A$, is obtained by the quadratic interpolation (with function parameter) between the spaces $H^{s_0}_{A}$ and $H^{s_1}_{A}$, and is closed with respect to the quadratic interpolation between Hilbert spaces. These and some other properties of the extended Hilbert scale are considered in Section~\ref{sec2} of this paper; they are proved in Section~\ref{sec3}. Note that the above interpolation and interpolational properties of Hilbert scales are studied in articles \cite{Ameur04, Ameur19, Donoghue67, Fan11, FoiasLions61, Krein60a, KreinPetunin66, Lions58, MikhailetsMurach08MFAT1, Ovchinnikov84, Pustylnik82} (see also monographs \cite[Chapter~1]{LionsMagenes72},
\cite[Section 1.1]{MikhailetsMurach14}, and \cite[Chapters 15 and 30]{Simon19}). Among them, of fundamental importance for our investigation is Ovchinnikov's result \cite[Theorem 11.4.1]{Ovchinnikov84} on an explicit description (with respect to equivalence of norms) of all Hilbert spaces that are interpolation ones between arbitrarily chosen compatible Hilbert spaces.
The next sections are devoted to various applications of the extended Hilbert scale. Section~\ref{sec3b} considers interpolational inequalities that connect the norms in spaces on the scale to each other, as well as the norms of linear operators acting between extended Hilbert scales. From the viewpoint of inequalities for norms of vectors, this scale can be interpreted as a variable Hilbert scale investigated in \cite{Hegland95, Hegland10, HeglandAnderssen11, MatheTautenhahn06}; the latter appears naturally in the theory of ill-posed problems (see, e.g., \cite{HeglandHofmann11, JinTautenhahn11, MathePereverzev03, TautenhahnHamarikHofmannShao13}). Section~\ref{sec4} gives applications of the extended Hilbert scale to function or distribution spaces, which are used specifically in the theory of pseudodifferential operators. We show that the extended Hilbert scale generated by some elliptic operators consists of generalized Sobolev spaces whose regularity order is a function $\mathrm{OR}$-varying at infinity. These spaces form the extended Sobolev scale considered in \cite{MikhailetsMurach13UMJ3, MikhailetsMurach15ResMath1} and \cite[Section~2.4.2]{MikhailetsMurach14}. It has important applications to elliptic operators \cite{Murach09UMJ3, MurachZinchenko13MFAT1, ZinchenkoMurach12UMJ11, ZinchenkoMurach14JMathSci} and elliptic boundary-value problems \cite{AnopDenkMurach20arxiv, AnopKasirenko16MFAT, AnopMurach14MFAT, AnopMurach14UMJ, KasirenkoMurach18UMJ11}. Among them are applications to the investigation of various types of convergence of spectral expansions induced by elliptic operators. This topic is examined in the last Section~\ref{sec6}. Its results are based on theorems on the convergence---in a space with two norms---of the spectral expansion induced by an abstract normal operator and on the degree of this convergence. These theorems are proved in Section~\ref{sec6a}.
\section{Basic results}\label{sec2}
Let $H$ be a separable infinite-dimensional complex Hilbert space, with $(\cdot,\cdot)$ and $\|\cdot\|$ respectively denoting the inner product and the corresponding norm in~$H$. Let $A$ be a positive definite self-adjoint unbounded linear operator in~$H$. The positive definiteness of $A$ means that there exists a number $r>0$ such that $(Au,u)\geq r(u,u)$ for every $u\in\mathrm{Dom}\,A$. As usual, $\mathrm{Dom}\,A$ denotes the domain of $A$. Without loss of generality, we suppose that $r=1$.
For every $s\in\mathbb{R}$, the self-adjoint operator $A^{s}$ in $H$ is well defined with the help of the spectral decomposition of~$A$. The domain $\mathrm{Dom}\,A^{s}$ of $A^{s}$ is dense in $H$; moreover, $\mathrm{Dom}\,A^{s}=H$ whenever $s\leq0$. Let $H^{s}_{A}$ denote the completion of $\mathrm{Dom}\,A^{s}$ with respect to the norm $\|u\|_{s}:=\|A^{s}u\|$ and the corresponding inner product $(u_{1},u_{2})_{s}:=(A^{s}u_{1},A^{s}u_{2})$, with $u,u_{1},u_{2}\in\mathrm{Dom}\,A^{s}$. The Hilbert space $H^{s}_{A}$ is separable. As usual, we retain designations $(\cdot,\cdot)_{s}$ and $\|\cdot\|_{s}$ for the inner product and the corresponding norm in this space. Note that the linear manifold $H^{s}_{A}$ coincides with $\mathrm{Dom}\,A^{s}$ whenever $s\geq0$ and that $H^{s}_{A}\supset H$ whenever $s<0$. The set $H^{\infty}_{A}:=\bigcap_{\lambda>0}H^{\lambda}_{A}$
is dense in every space $H^{s}_{A}$, with $s\in\mathbb{R}$.
The class $\{H^{s}_{A}:s\in\mathbb{R}\}$ is called the Hilbert scale generated by $A$ or, simply, $A$-scale (see., e.g., \cite[Section~9, Subsection~1]{KreinPetunin66}). If $s_{0},s_{1}\in\mathbb{R}$ and $s_{0}<s_{1}$, then the identity mapping on $\mathrm{Dom}\,A^{s_{1}}$ extends uniquely to a continuous embedding operator $H^{s_{1}}_{A}\hookrightarrow H^{s_{0}}_{A}$, the embedding being normal. Therefore, interpreting $H^{s_{1}}_{A}$ as a linear manifold in $H^{s_{0}}_{A}$, we obtain the normal pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$ of Hilbert spaces. This means that $H^{s_{1}}_{A}$ is dense in $H^{s_{0}}_{A}$ and that $\|u\|_{s_{0}}\leq\|u\|_{s_{1}}$ for every $u\in H^{s_{1}}_{A}$.
\begin{main-definition}
\emph{The extended Hilbert scale generated by $A$} or, simply, \emph{the extended $A$-scale} consists of all Hilbert spaces each of which is an interpolation space for a certain pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$ where $s_{0}<s_{1}$ (the real numbers $s_{0}$ and $s_{1}$ may depend on the interpolation Hilbert space).
\end{main-definition}
We will give an explicit description of this scale and prove its important interpolation properties.
Beforehand, let us recall the definition of an interpolation space in the case considered. Suppose $H_{0}$ and $H_{1}$ are Hilbert spaces such that $H_{1}$ is a linear manifold in $H_{0}$ and that the embedding $H_{1}\hookrightarrow H_{0}$ is continuous. A~Hilbert space $X$ is called an interpolation space for the pair $[H_{0},H_{1}]$ (or, in other words, an interpolation space between $H_{0}$ and $H_{1}$) if $X$ satisfies the following two conditions:
\begin{enumerate}
\item [(i)] $X$ is an intermediate space for this pair, i.e. $X$ is a linear manifold in $H_{0}$ and the continuous embeddings $H_{1}\hookrightarrow X\hookrightarrow H_{0}$ hold;
\item [(ii)] for every linear operator $T$ given on $H_{0}$, the following implication is true: if the restriction of $T$ to $H_{j}$ is a bounded operator on $H_{j}$ for each $j\in\{0,1\}$, then the restriction of $T$ to $X$ is a bounded operator on $X$.
\end{enumerate}
Property (ii) implies the following inequality for norms of operators:
\begin{equation*}
\|T:X\to X\|\leq c\,\max\bigl\{\,\|T:H_{0}\to H_{0}\|,\,
\|T:H_{1}\to H_{1} \|\,\bigr\},
\end{equation*}
where $c$ is a certain positive number which does not depend on $T$ (see, e.g., \cite[Theorem 2.4.2]{BerghLefstrem76}). If $c=1$, the interpolation space $X$ is called exact.
Both properties (i) and (ii) are invariant with respect to the change of the norm in $X$ for an equivalent norm. Therefore, it makes sense to describe the interpolation spaces for the pair $[H_{0},H_{1}]$ up to equivalence of norms.
As is known \cite[Theorem 9.1]{KreinPetunin66}, every space $H^{s}_{A}$ is an interpolation one for the pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$ whenever $s_{0}\leq s\leq s_{1}$. To give a description of all interpolation Hilbert spaces for this pair, we need more general functions of $A$ than power functions used in the definition of $H^{s}_{A}$.
Choosing a Borel measurable function $\varphi:[1,\infty)\to(0,\infty)$ arbitrarily and using the spectral decomposition of~$A$, we define the self-adjoint operator $\varphi(A)>0$ which acts in~$H$. Recall that $\mathrm{Spec}\,A\subseteq[1,\infty)$ according to our assumption. Here and below, $\mathrm{Spec}\,A$ denotes the spectrum of $A$, and $\varphi(A)>0$ means that $(\varphi(A)u,u)>\nobreak0$ for every $u\in\mathrm{Dom}\,\varphi(A)\setminus\{0\}$. Let $H^{\varphi}_{A}$ denote the completion of the domain $\mathrm{Dom}\,\varphi(A)$ of $\varphi(A)$ with respect to the norm $\|u\|_{\varphi}:=\|\varphi(A)u\|$ of $u\in\mathrm{Dom}\,\varphi(A)$.
The space $H^{\varphi}_{A}$ is Hilbert and separable. Indeed, this norm is induced by the inner product $(u_{1},u_{2})_{\varphi}:=(\varphi(A)u_{1},\varphi(A)u_{2})$ of $u_{1},u_{2}\in\mathrm{Dom}\,\varphi(A)$. Besides, endowing the linear space $\mathrm{Dom}\,\varphi(A)$ with the norm $\|\cdot\|_{\varphi}$ and considering the isometric operator
\begin{equation}\label{f2.1}
\varphi(A):\mathrm{Dom}\,\varphi(A)\to H,
\end{equation}
we infer the separability of $\mathrm{Dom}\,\varphi(A)$ (in the norm $\|\cdot\|_{\varphi}$) from the separability of~$H$. Therefore, the space $H_{A}^{\varphi}$ is separable as well. In the sequel we use the same designations $(\cdot,\cdot)_{\varphi}$ and $\|\cdot\|_{\varphi}$ for the inner product and the corresponding norm in the whole Hilbert space~$H^{\varphi}_{A}$.
Operator \eqref{f2.1} extends uniquely (by continuity) to an isometric isomorphism
\begin{equation}\label{f2.2}
B:H_{A}^{\varphi}\leftrightarrow H.
\end{equation}
The equality $B(H_{A}^{\varphi})=H$ follows from the fact that the range of $\varphi(A)$ coincides with $H$ whenever $0\not\in\mathrm{Spec}\,\varphi(A)$ and that the range is narrower than $H$ but is dense in~$H$ whenever $0\in\mathrm{Spec}\,\varphi(A)$. Hence, $(u_{1},u_{2})_{\varphi}=(Bu_{1},Bu_{2})$ for every $u_{1},u_{2}\in H_{A}^{\varphi}$. Besides, $H_{A}^{\varphi}=\mathrm{Dom}\,\varphi(A)$ if and only if $0\not\in\mathrm{Spec}\,\varphi(A)$.
Note that we use the same designation $H^{\varphi}_{A}$ both in the case where $\varphi$ is a function and in the case where $\varphi$ is a number. This will not lead to ambiguity because we will always specify whether $\varphi$ denotes a function or a number. Of course, this remark also concerns the designations of the norm and inner product in $H^{\varphi}_{A}$.
We need the Hilbert spaces $H^{\varphi}_{A}$ such that $\varphi$ ranges over a certain function class $\mathrm{OR}$. By definition, this class consists of all Borel measurable functions
$\varphi:\nobreak[1,\infty)\rightarrow(0,\infty)$ for which there exist numbers $a>1$ and $c\geq1$ such that $c^{-1}\leq\varphi(\lambda t)/\varphi(t)\leq c$ for all $t\geq1$ and $\lambda\in[1,a]$ (the numbers $a$ and $c$ may depend on $\varphi$). Such functions were introduced by V.~G.~Avakumovi\'c \cite{Avakumovic36} in 1936, are called OR-varying (or O-regularly varying) at infinity and have been well investigated \cite{BinghamGoldieTeugels89, BuldyginIndlekoferKlesovSteinebach18, Seneta76}.
The class $\mathrm{OR}$ admits the following simple description \cite[Theorem~A.1]{Seneta76}: $\varphi\in\mathrm{OR}$ if and only if
\begin{equation*}
\varphi(t)=
\exp\Biggl(\beta(t)+
\int\limits_{1}^{t}\frac{\gamma(\tau)}{\tau}\;d\tau\Biggr),
\quad t\geq1,
\end{equation*}
for some bounded Borel measurable functions $\beta,\gamma:[1,\infty)\to\mathbb{R}$.
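To illustrate (this example is ours, not drawn from the cited sources), the function $\varphi(t):=t^{s}(1+\log t)^{\mu}$, with arbitrary $s,\mu\in\mathbb{R}$, belongs to $\mathrm{OR}$: for every $a>1$ we have
\begin{equation*}
a^{-|s|}(1+\log a)^{-|\mu|}\leq\frac{\varphi(\lambda t)}{\varphi(t)}\leq
a^{|s|}(1+\log a)^{|\mu|}
\quad\mbox{for all}\quad t\geq1\quad\mbox{and}\quad\lambda\in[1,a],
\end{equation*}
since $1\leq(1+\log\lambda t)/(1+\log t)\leq1+\log a$ on this range.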
This class has the following important property \cite[Theorem~A.2(a)]{Seneta76}: for every $\varphi\in\mathrm{OR}$ there exist real numbers $s_{0}$ and $s_{1}$, with $s_{0}\leq s_{1}$, and positive numbers $c_{0}$ and $c_{1}$ such that
\begin{equation}\label{f2.3}
c_{0}\lambda^{s_{0}}\leq\frac{\varphi(\lambda t)}{\varphi(t)}\leq
c_{1}\lambda^{s_{1}}\quad\mbox{for all}\quad t\geq1\quad\mbox{and}\quad\lambda\geq1.
\end{equation}
Let $\varphi\in\mathrm{OR}$; considering the left-hand side of the inequality \eqref{f2.3} in the $t=1$ case, we conclude that $\varphi(\lambda)\geq\mathrm{const}\cdot e^{-\lambda}$ whenever $\lambda\geq1$. Hence, the identity mapping on $\mathrm{Dom}\,\varphi(A)$ extends uniquely to a continuous embedding operator $H^{\varphi}_{A}\hookrightarrow H^{1/\exp}_{A}$. This will be shown in the first two paragraphs of the proof of Theorem~\ref{th2.6}, in which we put $\varphi_{1}(t):=\varphi(t)$ and $\varphi_{2}(t):=e^{-t}$.
Here, of course, $H^{1/\exp}_{A}$ denotes the Hilbert space $H^{\chi}_{A}$ parametrized with the function $\chi(t):=e^{-t}$ of $t\geq1$. Therefore, we will interpret $H^{\varphi}_{A}$ as a linear manifold in $H^{1/\exp}_{A}$.
Thus, all the spaces $H^{\varphi}_{A}$ parametrized with $\varphi\in\mathrm{OR}$ and, hence, all the spaces from the extended $A$-scale lie in the same space $H^{1/\exp}_{A}$, which enables us to compare them.
\begin{theorem}\label{th2.1}
A Hilbert space $X$ belongs to the extended $A$-scale if and only if $X=\nobreak H^{\varphi}_{A}$ up to equivalence of norms for certain $\varphi\in\mathrm{OR}$.
\end{theorem}
\begin{remark}\label{rem2.2}
We cannot transfer from the extended $A$-scale to a wider class of spaces by means of interpolation Hilbert spaces between any spaces from this scale. Namely, suppose that certain Hilbert spaces $H_{0}$ and $H_{1}$ belong to the extended $A$-scale and satisfy the continuous embedding $H_{1}\hookrightarrow H_{0}$. Then every Hilbert space $X$ which is an interpolation one for the pair $[H_{0},H_{1}]$ belongs to this scale as well. Indeed, for each $j\in\{0,1\}$, the space $H_{j}$ is an interpolation one for a certain pair $[H^{s_{j,0}}_{A},H^{s_{j,1}}_{A}]$, where $s_{j,0}<s_{j,1}$. Besides, both $H^{s_{j,0}}_{A}$ and $H^{s_{j,1}}_{A}$ are interpolation spaces for the pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$ provided that $s_{0}:=\min\{s_{0,0},s_{1,0}\}$ and $s_{1}:=\max\{s_{0,1},s_{1,1}\}$. Hence, the above-mentioned space $X$ is an interpolation one for the latter pair, which follows directly from the given definition of an interpolation space. Thus, $X$ belongs to the extended $A$-scale.
\end{remark}
We will also give an explicit description (up to equivalence of norms) of all Hilbert spaces that are interpolation ones for the given pair
$[H^{s_{0}}_{A},H^{s_{1}}_{A}]$, where $s_{0}<s_{1}$.
Considering $\varphi\in\mathrm{OR}$, we put
\begin{gather}\label{f2.4}
\sigma_{0}(\varphi):=\sup\{s_{0}\in\mathbb{R}\mid\mbox{the left-hand inequality in \eqref{f2.3} holds}\},\\ \label{f2.5}
\sigma_{1}(\varphi):=\inf\{s_{1}\in\mathbb{R}\mid\mbox{the right-hand inequality in \eqref{f2.3} holds}\}.
\end{gather}
Evidently, $-\infty<\sigma_{0}(\varphi)\leq\sigma_{1}(\varphi)<\infty$. The numbers $\sigma_{0}(\varphi)$ and $\sigma_{1}(\varphi)$ are equal to the lower and the upper Matuszewska indices of $\varphi$, respectively (see \cite{Matuszewska64} and \cite[Theorem~2.2.2]{BinghamGoldieTeugels89}).
\begin{theorem}\label{th2.3}
Let $s_{0},s_{1}\in\mathbb{R}$ and $s_{0}<s_{1}$. A Hilbert space $X$ is an interpolation space for the pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$
if and only if $X=H^{\varphi}_{A}$ up to equivalence of norms for a certain function parameter $\varphi\in\mathrm{OR}$ that satisfies condition~\eqref{f2.3}.
\end{theorem}
\begin{remark}\label{rem2.4}
Of course, we mean in Theorem~\ref{th2.3} that the positive numbers $c_{0}$ and $c_{1}$ in condition \eqref{f2.3} depend neither on $t$ nor on $\lambda$. Evidently, this condition is equivalent to the following pair of conditions:
\begin{enumerate}
\item [$\mathrm{(i)}$] $s_{0}\leq\sigma_{0}(\varphi)$ and, moreover,
$s_{0}<\sigma_{0}(\varphi)$ if the supremum in $\eqref{f2.4}$ is not attained;
\item [$\mathrm{(ii)}$] $\sigma_{1}(\varphi)\leq s_{1}$ and, moreover, $\sigma_{1}(\varphi)<s_{1}$ if the infimum in $\eqref{f2.5}$ is not attained.
\end{enumerate}
\end{remark}
It is important for applications that the extended $A$-scale can be obtained by means of the quadratic interpolation (with function parameter) between spaces from $A$-scale. Before we formulate a relevant theorem, we will recall the definition of the quadratic interpolation between Hilbert spaces. This interpolation is a natural generalization of the classical interpolation method by J.-L.~Lions \cite{Lions58} and S.~G.~Krein \cite{Krein60a} (see also the book \cite[Chapter~1, Sections 2 and~5]{LionsMagenes72} and survey \cite[Section~9]{KreinPetunin66}) to the case where a general enough function is used, instead of the number $\theta\in(0,\,1)$, as an interpolation parameter. The generalization first appeared in C.~Foia\c{s} and J.-L.~Lions' paper \cite[Section~3.4]{FoiasLions61}. We mainly follow monograph \cite[Section~1.1]{MikhailetsMurach14} (see also \cite[Section~2.1]{MikhailetsMurach08MFAT1}).
Let $\mathcal{B}$ denote the set of all Borel measurable functions $\psi:(0,\infty)\rightarrow(0,\infty)$ such that $\psi$ is bounded on each compact interval $[a,b]$, with $0<a<b<\infty$, and that $1/\psi$ is bounded on every set $[r,\infty)$, with $r>0$. We arbitrarily choose a function $\psi\in\mathcal{B}$ and a regular pair $\mathcal{H}:=[H_{0},H_{1}]$ of separable complex Hilbert spaces. The regularity of this pair means that $H_{1}$ is a dense linear manifold in $H_{0}$ and that the embedding $H_{1}\hookrightarrow H_{0}$ is continuous. For $\mathcal{H}$ there exists a positive definite self-adjoint linear operator $J$ in $H_{0}$ such that $\mathrm{Dom}\,J=H_{1}$ and that $\|Ju\|_{H_{0}}=\|u\|_{H_{1}}$ for every $u\in H_{1}$. The operator $J$ is uniquely determined by the pair $\mathcal{H}$ and is called the generating operator for this pair.
Using the spectral decomposition of $J$, we define the self-adjoint operator $\psi(J)$ in $H_{0}$. Let $[H_{0},H_{1}]_{\psi}$ or, simply, $\mathcal{H}_{\psi}$ denote the domain of $\psi(J)$ endowed with the inner product $(u_{1},u_{2})_{\mathcal{H}_{\psi}}:=(\psi(J)u_{1},\psi(J)u_{2})_{H_{0}}$ and the corresponding norm $\|u\|_{\mathcal{H}_{\psi}}=\|\psi(J)u\|_{H_{0}}$, with $u,u_{1},u_{2}\in\mathcal{H}_{\psi}$. The space $\mathcal{H}_{\psi}$ is Hilbert and separable.
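The classical quadratic interpolation mentioned above corresponds to the choice of a power function as the parameter: for $\psi(\tau):=\tau^{\theta}$ with a fixed $\theta\in(0,1)$, we get $\mathcal{H}_{\psi}=\mathrm{Dom}\,J^{\theta}$ with the norm
\begin{equation*}
\|u\|_{\mathcal{H}_{\psi}}=\|J^{\theta}u\|_{H_{0}},
\end{equation*}
i.e. the interpolation space $[H_{0},H_{1}]_{\theta}$ of Lions and Krein.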
A function $\psi\in\mathcal{B}$ is called an interpolation parameter if the following condition is fulfilled for all regular pairs $\mathcal{H}=[H_{0},H_{1}]$ and
$\mathcal{G}=[G_{0},G_{1}]$ of separable complex Hilbert spaces and for an arbitrary linear mapping $T$ given on $H_{0}$: if the restriction of $T$ to $H_{j}$ is a bounded operator $T:H_{j}\rightarrow G_{j}$ for each $j\in\{0,1\}$, then the restriction of $T$ to
$\mathcal{H}_{\psi}$ is also a bounded operator $T:\mathcal{H}_{\psi}\to\mathcal{G}_{\psi}$. If $\psi$ is an interpolation parameter, we will say that the Hilbert space
$\mathcal{H}_{\psi}$ is obtained by the quadratic interpolation with the function parameter~$\psi$ of the pair $\mathcal{H}$ (or, in other words, between the spaces $H_{0}$ and $H_{1}$). In this case, the dense continuous embeddings $H_{1}\hookrightarrow\mathcal{H}_{\psi}\hookrightarrow H_{0}$ hold true.
A function $\psi\in\mathcal{B}$ is an interpolation parameter if and only if $\psi$ is pseudoconcave in a neighbourhood of infinity. The latter property means that there exist a number $r>0$ and a concave function $\psi_{1}:(r,\infty)\rightarrow(0,\infty)$ such that both functions $\psi/\psi_{1}$ and $\psi_{1}/\psi$ are bounded on $(r,\infty)$. This key fact follows from J.~Peetre's \cite{Peetre66, Peetre68} description of all interpolation functions for the weighted $L_{p}(\mathbb{R}^{n})$-type spaces (this description is also set forth in the monograph \cite[Theorem 5.4.4]{BerghLefstrem76}).
The above-mentioned interpolation property of the extended $A$-scale is formulated as follows:
\begin{theorem}\label{th2.5}
Let $\varphi\in\mathrm{OR}$, and let real numbers $s_{0}<s_{1}$ be taken from condition~\eqref{f2.3}. Put
\begin{equation}\label{f2.6}
\psi(\tau):=
\begin{cases}
\;\tau^{-s_{0}/(s_{1}-s_{0})}\,\varphi(\tau^{1/(s_{1}-s_{0})}) &\text{whenever}\quad\tau\geq1, \\
\;\varphi(1) & \text{whenever}\quad0<\tau<1.
\end{cases}
\end{equation}
Then the function $\psi$ belongs to $\mathcal{B}$ and is an interpolation parameter, and
\begin{equation}\label{f2.7}
\bigl[H^{s_{0}}_{A},H^{s_{1}}_{A}\bigr]_{\psi}=H^{\varphi}_{A}
\quad\mbox{with equality of norms}.
\end{equation}
\end{theorem}
For instance, considering the function $\varphi(t):=1+\log t$ of $t\geq1$ from the class $\mathrm{OR}$, we can take $s_{0}:=0$ and $s_{1}:=\varepsilon$ for every $\varepsilon>0$ and put $\psi(\tau):=1+\varepsilon^{-1}\log\tau$ whenever $\tau\geq1$ in the interpolation formula \eqref{f2.7}.
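Indeed, substituting $\varphi(t)=1+\log t$, $s_{0}=0$, and $s_{1}=\varepsilon$ into \eqref{f2.6}, we obtain, for every $\tau\geq1$,
\begin{equation*}
\psi(\tau)=\tau^{0}\,\varphi\bigl(\tau^{1/\varepsilon}\bigr)=
1+\log\bigl(\tau^{1/\varepsilon}\bigr)=1+\varepsilon^{-1}\log\tau.
\end{equation*}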
Note that, if $s_{0}<\sigma_{0}(\varphi)$ and $s_{1}>\sigma_{1}(\varphi)$, the numbers $s_{0}$ and $s_{1}$ will satisfy the condition of Theorem~\ref{th2.5} whatever $\varphi\in\mathrm{OR}$.
The extended $A$-scale is closed with respect to the quadratic interpolation (with function parameter). This follows directly from the next two results.
\begin{theorem}\label{th2.6}
Let $\varphi_{0},\varphi_{1}:[1,\infty)\to(0,\infty)$ be Borel measurable functions. Suppose that the function $\varphi_{0}/\varphi_{1}$ is bounded on $[1,\infty)$. Then the pair $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}\bigr]$ is regular. Let $\psi\in\mathcal{B}$, and put
\begin{equation}\label{f2.8}
\varphi(t):=\varphi_{0}(t)\,\psi
\biggl(\frac{\varphi_{1}(t)}{\varphi_{0}(t)}\biggr)
\quad\mbox{whenever}\quad t\geq1.
\end{equation}
Then
\begin{equation}\label{f2.9}
\bigl[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}\bigr]_{\psi}=H^{\varphi}_{A}
\quad\mbox{with equality of norms}.
\end{equation}
\end{theorem}
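For example, choosing the power functions $\varphi_{0}(t)\equiv t^{s_{0}}$ and $\varphi_{1}(t)\equiv t^{s_{1}}$, with $s_{0}<s_{1}$, and the interpolation parameter $\psi(\tau)\equiv\tau^{\theta}$, with $0<\theta<1$, we see that formula \eqref{f2.8} gives
\begin{equation*}
\varphi(t)=t^{s_{0}}\bigl(t^{s_{1}-s_{0}}\bigr)^{\theta}=t^{(1-\theta)s_{0}+\theta s_{1}}
\quad\mbox{whenever}\quad t\geq1,
\end{equation*}
so that \eqref{f2.9} becomes the classical interpolation formula $\bigl[H^{s_{0}}_{A},H^{s_{1}}_{A}\bigr]_{\theta}=H^{s}_{A}$ with $s=(1-\theta)s_{0}+\theta s_{1}$.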
\begin{proposition}\label{prop2.7}
Let $\varphi_{0},\varphi_{1}\in\mathrm{OR}$ and $\psi\in\mathcal{B}$. Suppose that the function $\varphi_{0}/\varphi_{1}$ is bounded in a neighbourhood of infinity and that $\psi$ is an interpolation parameter. Then the function \eqref{f2.8} belongs to the class $\mathrm{OR}$.
\end{proposition}
This proposition is contained in \cite[Theorem~5.2]{MikhailetsMurach15ResMath1}.
As to Theorem~\ref{th2.6}, it is necessary to note that its hypothesis allows us to consider $H^{\varphi_{1}}_{A}$ and $H^{\varphi}_{A}$ as linear manifolds in $H^{\varphi_0}_{A}$. Indeed, since the functions $\varphi_{0}/\varphi_{1}$ and $\varphi_{0}/\varphi$ are bounded on $[1,\infty)$, the identity mappings on $\mathrm{Dom}\,\varphi_{1}(A)$ and on $\mathrm{Dom}\,\varphi(A)$ extend uniquely to some continuous
embedding operators $H^{\varphi_{1}}_{A}\hookrightarrow H^{\varphi_{0}}_{A}$ and
$H^{\varphi}_{A}\hookrightarrow H^{\varphi_{0}}_{A}$, respectively (see the first two paragraphs of the proof of Theorem~\ref{th2.6}). Thus, we may
speak about the regularity of the pair $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]$ and compare the spaces $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]_{\psi}$ and $H^{\varphi}_{A}$ in \eqref{f2.9}.
\section{Proofs of basic results}\label{sec3}
We will prove Theorems \ref{th2.1}, \ref{th2.3}, \ref{th2.5}, and \ref{th2.6} in the reverse order; this order is prompted by a remarkable result of Ovchinnikov \cite[Theorem 11.4.1]{Ovchinnikov84}, which explicitly describes (up to equivalence of norms) all the Hilbert spaces that are interpolation ones for an arbitrary compatible pair of Hilbert spaces. For our purposes, Ovchinnikov's theorem can be formulated as follows:
\begin{proposition}\label{prop3.1}
Let $\mathcal{H}:=[H_{0},H_{1}]$ be a regular pair of separable complex Hilbert spaces. A Hilbert space $X$ is an interpolation space for $\mathcal{H}$ if and only if $X=\mathcal{H}_{\psi}$ up to equivalence of norms for a certain interpolation parameter $\psi\in\mathcal{B}$.
\end{proposition}
Note that all exact interpolation Hilbert spaces for $\mathcal{H}$ were characterized (isometrically) by Donoghue \cite{Donoghue67}.
Let us turn to the proofs of the theorems formulated in Section~\ref{sec2}.
\begin{proof}[Proof of Theorem $\ref{th2.6}$]
We first show that the pair $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]$ is regular. It follows from the hypothesis of the theorem that $\mathrm{Dom}\,\varphi_{1}(A)\subseteq\mathrm{Dom}\,\varphi_{0}(A)$ and that $\|u\|_{\varphi_{0}}\leq\varkappa^{-1}\|u\|_{\varphi_{1}}$ for every $u\in\mathrm{Dom}\,\varphi_{1}(A)$, with
\begin{equation}\label{f3.2}
\varkappa:=\inf_{t\geq1}
\frac{\varphi_{1}(t)}{\varphi_{0}(t)}>0.
\end{equation}
Hence, the identity mapping on $\mathrm{Dom}\,\varphi_{1}(A)$ extends uniquely to a continuous linear operator
\begin{equation}\label{f3.3}
I:H^{\varphi_{1}}_{A}\to H^{\varphi_{0}}_{A}.
\end{equation}
Let us prove that this operator is injective.
Suppose that $Iu=0$ for a certain $u\in H^{\varphi_{1}}_{A}$. We must prove that $u=0$. Choose a sequence $(u_{k})_{k=1}^{\infty}\subset\mathrm{Dom}\,\varphi_{1}(A)$ such that $u_{k}\to u$ in $H^{\varphi_{1}}_{A}$ as $k\to\infty$. Since the operator \eqref{f3.3} is bounded, we have the convergence $u_{k}=Iu_{k}\to Iu=0$ in $H^{\varphi_{0}}_{A}$. Hence, the sequence $(u_{k})_{k=1}^{\infty}$ is a Cauchy sequence in $\mathrm{Dom}\,\varphi_{1}(A)$, and $u_{k}\to0$ in $\mathrm{Dom}\,\varphi_{0}(A)$. Here and below in the proof, the linear space $\mathrm{Dom}\,\varphi_{j}(A)$ is endowed with the norm $\|\cdot\|_{\varphi_{j}}$ for each $j\in\{0,1\}$. Thus, there exists a vector $v\in H$ such that $\varphi_{1}(A)u_{k}\to v$ in $H$, and $\varphi_{0}(A)u_{k}\to0$ in $H$. Besides,
\begin{equation*}
\varphi_{0}(A)u_{k}=
\frac{\varphi_{0}}{\varphi_{1}}(A)\varphi_{1}(A)u_{k}\to
\frac{\varphi_{0}}{\varphi_{1}}(A)v\quad\mbox{in}\quad H
\end{equation*}
because the function $\varphi_{0}/\varphi_{1}$ is bounded on $[1,\infty)$. Therefore, $(\varphi_{0}/\varphi_{1})(A)v=0$. Hence, $v=0$ as a vector from $H$ because the function $\varphi_{0}/\varphi_{1}$ is positive on $[1,\infty)$. Thus, $\varphi_{1}(A)u_{k}\to0$ in~$H$. Therefore, $\|u_{k}\|_{\varphi_{1}}=\|\varphi_{1}(A)u_{k}\|\to0$, i.e. $u=\lim_{k\to\infty}u_{k}=0$ in $H^{\varphi_{1}}_{A}$. We have proved that the operator \eqref{f3.3} is injective.
Hence, it realizes a continuous embedding $H^{\varphi_{1}}_{A}\hookrightarrow H^{\varphi_{0}}_{A}$. The density of this embedding follows directly from the density of $\mathrm{Dom}\,\varphi_{1}(A)$ in the normed space $\mathrm{Dom}\,\varphi_{0}(A)$. Let us prove the latter density. Choose a vector $u\in\mathrm{Dom}\,\varphi_{0}(A)$ arbitrarily. The domain of the operator $\varphi_{1}(A)(1/\varphi_{0})(A)$ is dense in $H$ because the closure of this operator coincides with the operator $(\varphi_{1}/\varphi_{0})(A)$, whose domain is dense in~$H$. Hence, there exists a sequence
\begin{equation*}
(v_{k})_{k=1}^{\infty}\subset
\mathrm{Dom}\bigl(\varphi_{1}(A)(1/\varphi_{0})(A)\bigr)
\end{equation*}
such that $v_{k}\to\varphi_{0}(A)u$ in $H$ as $k\to\infty$. Putting
\begin{equation*}
u_{k}:=(1/\varphi_{0})(A)v_{k}\in\mathrm{Dom}\,\varphi_{1}(A)
\end{equation*}
for every integer $k\geq1$, we conclude that $\varphi_{0}(A)u_{k}=v_{k}\to\varphi_{0}(A)u$ in $H$. Therefore, $\mathrm{Dom}\,\varphi_{1}(A)\ni u_{k}\to u$ in $\mathrm{Dom}\,\varphi_{0}(A)$. Hence, the set $\mathrm{Dom}\,\varphi_{1}(A)$ is dense in $\mathrm{Dom}\,\varphi_{0}(A)$. Thus, the continuous embedding $H^{\varphi_{1}}_{A}\hookrightarrow H^{\varphi_{0}}_{A}$ is dense, i.e. the pair $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}\bigr]$ is regular.
Let us build the generating operator for this pair. Choosing $j\in\{0,1\}$ arbitrarily, we have the isometric linear operator
\begin{equation*}
\varphi_{j}(A):\mathrm{Dom}\,\varphi_{j}(A)\to H.
\end{equation*}
This operator extends uniquely (by continuity) to an isometric isomorphism
\begin{equation}\label{f3.4}
B_{j}:H_{A}^{\varphi_{j}}\leftrightarrow H,
\quad\mbox{with}\quad j\in\{0,1\}
\end{equation}
(see the explanation for \eqref{f2.2}). Define the linear operator $J$ in $H_{A}^{\varphi_{0}}$ by the formula $Ju:=B_{0}^{-1}B_{1}u$ for every $u\in\mathrm{Dom}\,J:=H_{A}^{\varphi_{1}}$. Let us prove that $J$ is the generating operator for the pair $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]$.
Note first that $J$ sets an isometric isomorphism
\begin{equation}\label{f3.5}
J=B_{0}^{-1}B_{1}:H_{A}^{\varphi_{1}}\leftrightarrow H_{A}^{\varphi_{0}}.
\end{equation}
Hence, the operator $J$ is closed in $H_{A}^{\varphi_{0}}$. Besides, $J$ is a positive definite operator in $H_{A}^{\varphi_{0}}$. Indeed, choosing $u\in\mathrm{Dom}\,\varphi_{1}(A)$ arbitrarily, we write the following:
\begin{align*}
(Ju,u)_{\varphi_{0}}&=(B_{0}^{-1}B_{1}u,u)_{\varphi_{0}}=(B_{1}u,B_{0}u)=
(\varphi_{1}(A)u,\varphi_{0}(A)u)\\
&=\Bigl(\frac{\varphi_{1}}{\varphi_{0}}(A)\varphi_{0}(A)u,
\varphi_{0}(A)u\Bigl)\geq\varkappa(\varphi_{0}(A)u,\varphi_{0}(A)u)=
\varkappa(u,u)_{\varphi_{0}},
\end{align*}
the inequality being due to~\eqref{f3.2}. Passing here to the limit and using \eqref{f3.5}, we conclude that
\begin{equation}\label{f3.5bis}
(Ju,u)_{\varphi_{0}}\geq\varkappa(u,u)_{\varphi_{0}}
\quad\mbox{for every}\quad u\in H_{A}^{\varphi_{1}}.
\end{equation}
Thus, $J$ is a positive definite closed operator in $H_{A}^{\varphi_{0}}$. Moreover, since $0\notin\mathrm{Spec}\,J$ by \eqref{f3.5}, this operator is self-adjoint on $H_{A}^{\varphi_{0}}$. Regarding \eqref{f3.5} again, we conclude that $J$ is the generating operator for the pair $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]$.
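For orientation, consider the power functions $\varphi_{j}(t)\equiv t^{s_{j}}$ with $0\leq s_{0}<s_{1}$. Then $B_{j}$ is the closure of the isometric operator $A^{s_{j}}:\mathrm{Dom}\,A^{s_{j}}\to H$, and, for every $u\in\mathrm{Dom}\,A^{s_{1}}$, we have
\begin{equation*}
Ju=B_{0}^{-1}B_{1}u=B_{0}^{-1}A^{s_{0}}A^{s_{1}-s_{0}}u=A^{s_{1}-s_{0}}u.
\end{equation*}
Loosely speaking, $J$ acts as the power $A^{s_{1}-s_{0}}$ considered in the space $H^{s_{0}}_{A}$.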
Let us reduce the self-adjoint operator $J$ in $H_{A}^{\varphi_{0}}$ to an operator of multiplication by a function. Since the operator $A$ is self-adjoint in $H$ and since $A\geq1$, there exist a space $R$ with a finite measure $\mu$, a measurable function $\alpha:R\to[1,\infty)$, and an isometric isomorphism
\begin{equation}\label{f3.6}
\mathcal{I}:H\leftrightarrow L_{2}(R,d\mu)
\end{equation}
such that
\begin{equation*}
\mathrm{Dom}\,A=\{u\in H:\alpha\cdot\mathcal{I}u\in L_{2}(R,d\mu)\}
\end{equation*}
and that $\mathcal{I}Au=\alpha\cdot\mathcal{I}u$ for every $u\in\mathrm{Dom}\,A$; see, e.g., \cite[Theorem VIII.4]{ReedSimon72}. In other words, $\mathcal{I}$ reduces $A$ to the operator of multiplication by~$\alpha$.
Using \eqref{f3.4} and \eqref{f3.6}, we introduce the isometric isomorphism
\begin{equation}\label{f3.9}
\mathcal{I}_{0}:=\mathcal{I}B_{0}:H^{\varphi_{0}}_{A}\leftrightarrow L_{2}(R,d\mu).
\end{equation}
Let us show that $\mathcal{I}_{0}$ reduces $J$ to an operator of multiplication by a function. Given $u\in\mathrm{Dom}\,\varphi_{1}(A)$, we write the following:
\begin{align*}
\mathcal{I}_{0}Ju&=(\mathcal{I}B_{0})(B_{0}^{-1}B_{1})u=
\mathcal{I}B_{1}u=\mathcal{I}\varphi_{1}(A)u\\
&=\mathcal{I}\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\Bigr)(A)\varphi_{0}(A)u
=\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)
\mathcal{I}\varphi_{0}(A)u=
\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)\mathcal{I}_{0}u.
\end{align*}
Thus,
\begin{equation}\label{f3.10}
\mathcal{I}_{0}Ju=
\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)\mathcal{I}_{0}u
\quad\mbox{for every}\quad u\in\mathrm{Dom}\,\varphi_{1}(A).
\end{equation}
Let us prove that this equality holds true for every $u\in H^{\varphi_{1}}_{A}$.
Choose $u\in H^{\varphi_{1}}_{A}$ arbitrarily, and consider a sequence $(u_{k})_{k=1}^{\infty}\subset\mathrm{Dom}\,\varphi_{1}(A)$ such that $u_{k}\to u$ in $H^{\varphi_{1}}_{A}$ as $k\to\infty$. Owing to \eqref{f3.10}, we have the equality
\begin{equation}\label{f3.11}
\mathcal{I}_{0}Ju_{k}=
\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)\mathcal{I}_{0}u_{k}
\quad\mbox{for every integer}\quad k\geq1.
\end{equation}
Here,
\begin{equation}\label{f3.12}
\mathcal{I}_{0}Ju_{k}\to\mathcal{I}_{0}Ju\quad\mbox{and}\quad
\mathcal{I}_{0}u_{k}\to\mathcal{I}_{0}u
\quad\mbox{in}\quad L_{2}(R,d\mu)\quad\mbox{as}\quad k\to\infty
\end{equation}
due to the isometric isomorphisms \eqref{f3.5} and \eqref{f3.9}. Since convergence in $L_{2}(R,d\mu)$ implies convergence in the measure $\mu$, it follows from \eqref{f3.12} by the Riesz theorem that
\begin{equation}\label{f3.13}
\mathcal{I}_{0}Ju_{k_l}\to\mathcal{I}_{0}Ju
\quad\mbox{and}\quad \mathcal{I}_{0}u_{k_l}\to\mathcal{I}_{0}u
\quad\mu\mbox{-a.e. on}\;R\quad\mbox{as}\quad l\to\infty
\end{equation}
for a certain subsequence $(u_{k_l})_{l=1}^{\infty}$ of $(u_{k})_{k=1}^{\infty}$. The latter convergence implies that
\begin{equation}\label{f3.14}
\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)
\mathcal{I}_{0}u_{k_l}\to
\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)\mathcal{I}_{0}u
\quad\mu\mbox{-a.e. on}\;R\quad\mbox{as}\quad l\to\infty.
\end{equation}
Now formulas \eqref{f3.11}, \eqref{f3.13}, and \eqref{f3.14} yield the required equality
\begin{equation}\label{f3.15}
\mathcal{I}_{0}Ju=
\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)\mathcal{I}_{0}u
\quad\mbox{for every}\quad u\in H^{\varphi_{1}}_{A}.
\end{equation}
It follows from \eqref{f3.9} and \eqref{f3.15} that
\begin{equation}\label{f3.16}
H^{\varphi_{1}}_{A}\subseteq\Bigl\{u\in H^{\varphi_{0}}_{A}:
\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)\mathcal{I}_{0}u\in
L_{2}(R,d\mu)\Bigr\}=:Q,
\end{equation}
where we recall that $H^{\varphi_{1}}_{A}=\mathrm{Dom}\,J$. Let us prove that, in fact, $H^{\varphi_{1}}_{A}=Q$. We endow the linear space $Q$ with the norm
\begin{equation*}
\|u\|_{Q}:=\Bigl\|\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)
\mathcal{I}_{0}u\Bigr\|_{L_{2}(R,d\mu)}.
\end{equation*}
Owing to \eqref{f3.15} and \eqref{f3.16}, we have the isometric embedding
\begin{equation}\label{f3.17}
H^{\varphi_{1}}_{A}\subseteq Q\quad\mbox{with}\quad
\|u\|_{Q}=\|u\|_{\varphi_{1}}\;\;\mbox{for every}\;\;
u\in H^{\varphi_{1}}_{A}.
\end{equation}
Consider the linear mapping
\begin{equation}\label{f3.18}
L:u\mapsto B_{1}^{-1}\mathcal{I}^{-1}\Bigl[\Bigl
(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)\cdot
\mathcal{I}_{0}u\Bigr]\quad\mbox{where}\quad u\in Q.
\end{equation}
According to the isometric isomorphisms \eqref{f3.4} and \eqref{f3.6}, we have
\begin{equation}\label{f3.19}
\mbox{the isometric operator}\quad L:Q\to H^{\varphi_{1}}_{A}.
\end{equation}
If $Lu=u$ for every $u\in H^{\varphi_{1}}_{A}$, the required equality $H^{\varphi_{1}}_{A}=Q$ will follow plainly from \eqref{f3.16} and the injectivity of \eqref{f3.19}. Let us prove that $Lu=u$ for every $u\in H^{\varphi_{1}}_{A}$.
Given $u\in\mathrm{Dom}\,\varphi_{1}(A)$, we write the following:
\begin{align*}
Lu&=B_{1}^{-1}\mathcal{I}^{-1}\Bigl[\Bigl
(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)
\mathcal{I}\varphi_{0}(A)u\Bigr]=
B_{1}^{-1}\mathcal{I}^{-1}\Bigl[\Bigl
(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)
(\varphi_{0}\circ\alpha)\mathcal{I}u\Bigr]\\
&=B_{1}^{-1}\mathcal{I}^{-1}
\bigl[(\varphi_{1}\circ\alpha)\mathcal{I}u\bigr]=
B_{1}^{-1}\varphi_{1}(A)u=B_{1}^{-1}B_{1}u=u.
\end{align*}
Thus,
\begin{equation}\label{f3.20}
Lu=u\quad\mbox{for every}\quad u\in\mathrm{Dom}\,\varphi_{1}(A).
\end{equation}
Choose now $u\in H^{\varphi_{1}}_{A}$ arbitrarily, and let a sequence $(u_{k})_{k=1}^{\infty}\subset\mathrm{Dom}\,\varphi_{1}(A)$ converge to $u$ in $H^{\varphi_{1}}_{A}$. Since $u_{k}\to u$ in $Q$ by \eqref{f3.17}, we write
\begin{equation*}
Lu=\lim_{k\to\infty}Lu_{k}=\lim_{k\to\infty}u_{k}=u
\quad\mbox{in}\quad H^{\varphi_{1}}_{A}
\end{equation*}
in view of \eqref{f3.19} and \eqref{f3.20}. Thus, $Lu=u$ for every $u\in H^{\varphi_{1}}_{A}$, and we have proved the required equality
\begin{equation}\label{f3.21}
\mathrm{Dom}\,J=\Bigl\{u\in H^{\varphi_{0}}_{A}:
\Bigl(\frac{\varphi_{1}}{\varphi_{0}}\circ\alpha\Bigr)\mathcal{I}_{0}u\in
L_{2}(R,d\mu)\Bigr\}.
\end{equation}
Formulas \eqref{f3.15} and \eqref{f3.21} mean that the operator $J$ is reduced by the isometric isomorphism \eqref{f3.9} to the operator of multiplication by the function $(\varphi_{1}/\varphi_{0})\circ\alpha$.
Using this fact, we will prove the required formula \eqref{f2.9}.
Since $\psi\in\mathcal{B}$, the function $1/\psi$ is bounded on $[\varkappa,\infty)$. Hence, the function
\begin{equation*}
\frac{\varphi_{0}(t)}{\varphi(t)}=
\frac{1}{\psi}\Bigl(\frac{\varphi_{1}(t)}{\varphi_{0}(t)}\Bigr)
\quad\mbox{of}\quad t\geq1
\end{equation*}
is bounded due to \eqref{f3.2}. Therefore, $\mathrm{Dom}\,\varphi(A)\subseteq\mathrm{Dom}\,\varphi_{0}(A)$.
Choosing $u\in\mathrm{Dom}\,\varphi(A)$ arbitrarily and using the above-mentioned reductions of $A$ and $J$ to operators of multiplication by functions, we write the following:
\begin{align*}
L_2(R,d\mu)\ni\mathcal{I}\varphi(A)u&=(\varphi\circ\alpha)\mathcal{I}u=
\Bigl(\psi\circ\frac{\varphi_1}{\varphi_0}\circ\alpha\Bigr)
(\varphi_{0}\circ\alpha)\mathcal{I}u\\
&=\Bigl(\psi\circ\frac{\varphi_1}{\varphi_0}\circ\alpha\Bigr)
\mathcal{I}\varphi_{0}(A)u=
\Bigl(\psi\circ\frac{\varphi_1}{\varphi_0}\circ\alpha\Bigr)
\mathcal{I}_{0}u=\mathcal{I}_{0}\psi(J)u.
\end{align*}
Hence,
\begin{equation*}
\|u\|_{\varphi}=\|\varphi(A)u\|=\|\mathcal{I}\varphi(A)u\|_{L_2(R,d\mu)}=
\|\mathcal{I}_{0}\psi(J)u\|_{L_2(R,d\mu)}=\|\psi(J)u\|_{\varphi_0}.
\end{equation*}
Therefore, $\mathrm{Dom}\,\varphi(A)\subseteq\mathrm{Dom}\,\psi(J)$,
and $\|u\|_{\varphi}=\|u\|_{X}$ for every $u\in\mathrm{Dom}\,\varphi(A)$, where $X:=[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]_{\psi}=
\mathrm{Dom}\,\psi(J)$. Passing here to the limit, we infer the isometric embedding
\begin{equation}\label{f3.22}
H^{\varphi}_{A}\subseteq X\quad\mbox{with}\quad
\|u\|_{\varphi}=\|u\|_{X}\;\;\mbox{for every}\;\;
u\in H^{\varphi}_{A}.
\end{equation}
Besides, as we have just shown,
\begin{equation}\label{f3.23}
\mathcal{I}\varphi(A)u=\mathcal{I}_{0}\psi(J)u
\quad\mbox{whenever}\quad u\in\mathrm{Dom}\,\varphi(A).
\end{equation}
Let us deduce the equality $H^{\varphi}_{A}=X$ from \eqref{f3.22} and \eqref{f3.23}. Using the isometric isomorphisms \eqref{f2.2}, \eqref{f3.6}, and \eqref{f3.9}, we get
\begin{equation}\label{f3.24}
\mbox{the isometric operator}\quad M:=B^{-1}\mathcal{I}^{-1}\mathcal{I}_{0}\psi(J):X\to H^{\varphi}_{A}.
\end{equation}
Owing to \eqref{f3.23}, we write $Mu=B^{-1}\mathcal{I}^{-1}\mathcal{I}\varphi(A)u=u$ for every $u\in\mathrm{Dom}\,\varphi(A)$. Therefore, choosing $u\in H^{\varphi}_{A}$ arbitrarily and considering a sequence $(u_{k})_{k=1}^{\infty}\subset\mathrm{Dom}\,\varphi(A)$ such that $u_{k}\to u$ in $H^{\varphi}_{A}$, we get
\begin{equation*}
Mu=\lim_{k\to\infty}Mu_{k}=\lim_{k\to\infty}u_{k}=u
\quad\mbox{in}\quad H^{\varphi}_{A}
\end{equation*}
due to \eqref{f3.22}. Now the required equality $H^{\varphi}_{A}=X$ follows from the property $Mu=u$ whenever $u\in H^{\varphi}_{A}$, the inclusion $H^{\varphi}_{A}\subseteq X$, and the injectivity of the operator \eqref{f3.24}. In view of \eqref{f3.22}, we have proved \eqref{f2.9} and, hence, Theorem~\ref{th2.6}.
\end{proof}
We will deduce Theorem~\ref{th2.5} from Theorem~\ref{th2.6} with the help of the following result \cite[Theorem~4.2]{MikhailetsMurach15ResMath1}:
\begin{proposition}\label{prop3.2}
Let $s_{0},s_{1}\in\mathbb{R}$, $s_{0}<s_{1}$, and $\psi\in\mathcal{B}$. Put $\varphi(t):=t^{s_{0}}\psi(t^{s_{1}-s_{0}})$ for every $t\geq1$. Then the function $\psi$ is an interpolation parameter if and only if the function $\varphi$ satisfies \eqref{f2.3} with some positive numbers $c_{0}$ and $c_{1}$ that are independent of $t$ and $\lambda$.
\end{proposition}
\begin{proof}[Proof of Theorem $\ref{th2.5}$]
Let us show first that $\psi\in\mathcal{B}$. Evidently, the function $\psi$ is Borel measurable. Putting $t:=1$ in \eqref{f2.3}, we write $c_{0}\varphi(1)\lambda^{s_{0}}\leq\varphi(\lambda)\leq c_{1}\varphi(1)\lambda^{s_{1}}$ for arbitrary $\lambda\geq1$. Hence, the function $\varphi$ is bounded on every compact subset of $[1,\infty)$, which yields the boundedness of $\psi$ on every interval $(0,b]$ with $b>1$. Besides,
\begin{equation*}
\psi(\tau):=\tau^{-s_{0}/(s_{1}-s_{0})}\,\varphi(\tau^{1/(s_{1}-s_{0})})
\geq c_{0}\varphi(1)\quad\mbox{whenever}\quad\tau\geq1.
\end{equation*}
Therefore, the function $1/\psi$ is bounded on $(0,\infty)$. Thus, $\psi\in\mathcal{B}$ by the definition of~$\mathcal{B}$.
It follows from the definition of $\psi$ that $\varphi(t)=t^{s_{0}}\psi(t^{s_{1}-s_{0}})$ for every $t\geq1$. Hence, $\psi$ is an interpolation parameter according to Proposition~\ref{prop3.2}, whereas the interpolation property \eqref{f2.7} is due to Theorem~\ref{th2.6}, in which we put $\varphi_{0}(t)\equiv t^{s_0}$ and $\varphi_{1}(t)\equiv t^{s_1}$.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{th2.3}$.]
\emph{Necessity.} Suppose that a Hilbert space $X$ is an interpolation space for the pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$. Then, owing to Proposition~\ref{prop3.1}, there exists an interpolation parameter $\psi\in\mathcal{B}$ such that $X=[H^{s_{0}}_{A},H^{s_{1}}_{A}]_{\psi}$ up to equivalence of norms. According to Theorem~\ref{th2.6}, we get $[H^{s_{0}}_{A},H^{s_{1}}_{A}]_{\psi}=H^{\varphi}_{A}$ with equality of norms; here, $\varphi(t):=t^{s_{0}}\psi(t^{s_{1}-s_{0}})$ for every $t\geq1$. Thus, $X=H^{\varphi}_{A}$ up to equivalence of norms. Note that $\varphi\in\mathrm{OR}$ due to Proposition~\ref{prop2.7} considered in the case of $\varphi_{0}(t)\equiv t^{s_0}$ and $\varphi_{1}(t)\equiv t^{s_1}$. Moreover, according to Proposition~\ref{prop3.2}, the function $\varphi$ satisfies condition~\eqref{f2.3}. The necessity is proved.
\emph{Sufficiency.} Suppose now that a Hilbert space $X$ coincides with $H^{\varphi}_{A}$ up to equivalence of norms for a certain function parameter $\varphi\in\mathrm{OR}$ that satisfies condition~\eqref{f2.3}. Then, owing to Theorem~\ref{th2.5}, we get $H^{\varphi}_{A}=[H^{s_{0}}_{A},H^{s_{1}}_{A}]_{\psi}$ with equality of norms, where the interpolation parameter $\psi\in\mathcal{B}$ is defined by formula~\eqref{f2.6}. Thus, $X=[H^{s_{0}}_{A},H^{s_{1}}_{A}]_{\psi}$ up to equivalence of norms. This implies that $X$ is an interpolation space for the pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$ in view of the definition of an interpolation parameter (or by Proposition~\ref{prop3.1}). The sufficiency is also proved.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{th2.1}$.]
\emph{Necessity.} Suppose that a Hilbert space $X$ belongs to the extended $A$-scale. Then $X$ is an interpolation space for a certain pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$ where $s_{0}<s_{1}$. Hence, we conclude by Theorem~\ref{th2.3} that $X=H^{\varphi}_{A}$ up to equivalence of norms for a certain function parameter $\varphi\in\mathrm{OR}$. The necessity is proved.
\emph{Sufficiency.} Suppose now that $X=H^{\varphi}_{A}$ up to equivalence of norms for certain $\varphi\in\mathrm{OR}$. The function $\varphi$ satisfies condition~\eqref{f2.3} for the numbers $s_{0}:=\sigma_{0}(\varphi)-1$ and $s_{1}:=\sigma_{1}(\varphi)+1$, for example. Therefore, $X$ is an interpolation space for the pair $[H^{s_{0}}_{A},H^{s_{1}}_{A}]$ due to Theorem~\ref{th2.3}; i.e., $X$ belongs to the extended $A$-scale. The sufficiency is also proved.
\end{proof}
\section{Interpolational inequalities}\label{sec3b}
We assume in this section that functions $\varphi_{0},\varphi_{1}:[1,\infty)\to(0,\infty)$ and $\psi:(0,\infty)\to(0,\infty)$ satisfy the hypothesis of Theorem~\ref{th2.6}; i.e., $\varphi_{0}$ and $\varphi_{1}$ are Borel measurable, and $\varphi_{0}/\varphi_{1}$ is bounded on $[1,\infty)$, and $\psi$ belongs to $\mathcal{B}$. Moreover, suppose that $\psi$ is pseudoconcave in a neighbourhood of infinity; then $\psi$ is an interpolation parameter (see, e.g., \cite[Theorem~1.9]{MikhailetsMurach14}). Owing to Theorem~\ref{th2.6}, we have the dense continuous embedding $H^{\varphi_{1}}_{A}\hookrightarrow H^{\varphi_{0}}_{A}$ and the interpolation formula $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]_{\psi}=H^{\varphi}_{A}$ with equality of norms. Here, the Borel measurable function $\varphi:[1,\infty)\to(0,\infty)$ is defined by \eqref{f2.8}. Hence, $H^{\varphi}_{A}$ is an interpolation space between $H^{\varphi_{0}}_{A}$ and~$H^{\varphi_{1}}_{A}$.
We will obtain some inequalities that estimate (from above) the norm in the interpolation space $H^{\varphi}_{A}$ via the norms in the marginal spaces $H^{\varphi_{0}}_{A}$ and $H^{\varphi_{1}}_{A}$ with the help of the interpolation parameter $\psi$. Such inequalities are naturally called interpolational. Specifically, if $\varphi_{0},\varphi_{1}\in\mathrm{OR}$, then $\varphi\in\mathrm{OR}$ as well, due to Proposition~\ref{prop2.7}. In this case, these interpolational inequalities deal with norms in spaces belonging to the extended Hilbert scale.
We define the number $\varkappa>0$ by formula \eqref{f3.2}. Owing to \cite[Lemma 1.1]{MikhailetsMurach14}, the function $\psi$ is pseudoconcave on $(\varepsilon,\infty)$ whenever $\varepsilon>0$. Hence, according to \cite[Lemma 1.2]{MikhailetsMurach14}, there exists a number $c_{\psi,\varkappa}>0$ such that
\begin{equation}\label{f3b.2}
\frac{\psi(t)}{\psi(\tau)}\leq c_{\psi,\varkappa}\max\biggl\{1,\frac{t}{\tau}\biggr\}\quad\mbox{for all}\;\; t,\tau\in[\varkappa,\infty).
\end{equation}
\begin{theorem}\label{th3b.1}
Let $\tau\geq\varkappa$ and $u\in H^{\varphi_{1}}_{A}$; then
\begin{equation}\label{f3b.3}
\|u\|_{\varphi}\leq c_{\psi,\varkappa}\,\psi(\tau)
\bigl(\|u\|_{\varphi_0}^{2}+\tau^{-2}\|u\|_{\varphi_1}^{2}\bigr)^{1/2}.
\end{equation}
\end{theorem}
Before we prove this theorem, let us comment on formula \eqref{f3b.3}. It follows from \eqref{f3b.2} that $\psi(t)\leq c_{\psi,\varkappa}\psi(\tau)$ whenever $\varkappa\leq t\leq\tau$ and that $\psi(t)/t\leq c_{\psi,\varkappa}\psi(\tau)/\tau$ whenever $\varkappa\leq\tau\leq t$. Hence, the function $\psi$ is equivalent, up to a bounded factor, to an increasing function on the set $[\varkappa,\infty)$, whereas the function $\psi(\tau)/\tau$ is equivalent, up to a bounded factor, to a decreasing function on the same set. The case where $\psi(\tau)\to\infty$ and $\psi(\tau)/\tau\to0$ as $\tau\to\infty$ is of main interest. In this case, it is useful to rewrite inequality \eqref{f3b.3} in the form
\begin{equation}\label{f3b.4}
\|u\|_{\varphi}\leq c_{\psi,\varkappa}
\biggl(\frac{\psi^{2}(\tau)}{\tau^{2}}\|u\|_{\varphi_1}^{2}+
\psi^{2}(\tau)\|u\|_{\varphi_0}^{2}\biggr)^{1/2}.
\end{equation}
Restricting ourselves to the Hilbert scale $\{H^{s}_{A}:s\in\mathbb{R}\}$, we conclude that this inequality becomes
\begin{equation}\label{f3b.5}
\|u\|_{s}\leq\bigl(\tau^{2(\theta-1)}\|u\|_{s_1}^{2}+
\tau^{2\theta}\|u\|_{s_0}^{2}\bigr)^{1/2}
\quad\mbox{whenever}\;\;u\in H^{s_1}_{A};
\end{equation}
here, the real numbers $s$, $s_0$, $s_1$, $\theta$, and $\tau$ satisfy the conditions
\begin{equation}\label{f3b.6}
s_0<s_1,\quad 0<\theta<1,\quad s=(1-\theta)s_0+\theta s_1,
\end{equation}
and $\tau\geq1$. Indeed, we only need to take
\begin{equation}\label{f3b.7}
\varphi_0(t)\equiv t^{s_0},\quad\varphi_1(t)\equiv t^{s_1},\quad \psi(\tau)\equiv\tau^{\theta},\quad\mbox{and}\quad\varphi(t)\equiv t^{s}\;\;\mbox{by \eqref{f2.8}}
\end{equation}
in \eqref{f3b.4} and observe that $c_{\psi,\varkappa}=1$ for the chosen $\psi$. Interpolational inequalities of type \eqref{f3b.5} for Sobolev scales are used in the theory of partial differential operators (see, e.g., \cite[Section~1, Subsection~6]{AgranovichVishik64}).
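For the power parameter $\psi(\tau)\equiv\tau^{\theta}$, the equality $c_{\psi,\varkappa}=1$ in \eqref{f3b.2} follows from the elementary estimate
\begin{equation*}
\frac{\psi(t)}{\psi(\tau)}=\Bigl(\frac{t}{\tau}\Bigr)^{\theta}\leq
\max\Bigl\{1,\frac{t}{\tau}\Bigr\}\quad\mbox{for all}\quad t,\tau>0,
\end{equation*}
which holds true because $0<\theta<1$.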
\begin{proof}[Proof of Theorem $\ref{th3b.1}$.] Let $J$ denote the generating operator for the pair $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]$. Recall that $J$ is a positive definite self-adjoint operator in the Hilbert space $H^{\varphi_{0}}_{A}$ and that $\|Ju\|_{\varphi_{0}}=\|u\|_{\varphi_{1}}$ for every $u\in H^{\varphi_{1}}_{A}=\mathrm{Dom}\,J$. According to \eqref{f3.5bis}, we have $\mathrm{Spec}\,J\subseteq[\varkappa,\infty)$.
Let $E_{t}$, $t\geq\varkappa$, be the resolution of the identity associated with the self-adjoint operator $J$. Choosing $\tau\geq\varkappa$ and $u\in H^{\varphi_{1}}_{A}$ arbitrarily, we get
\begin{align*}
\|u\|_{\varphi}^{2}&=\|\psi(J)u\|_{\varphi_{0}}^{2}=
\int\limits_{\varkappa}^{\infty}\psi^{2}(t)\,d(E_{t}u,u)_{\varphi_{0}}\\
&\leq c_{\psi,\varkappa}^{2}\,\psi^{2}(\tau)\int\limits_{\varkappa}^{\infty}
\max\biggl\{1,\frac{t^{2}}{\tau^{2}}\biggr\}d(E_{t}u,u)_{\varphi_{0}}\\
&\leq c_{\psi,\varkappa}^{2}\,\psi^{2}(\tau)\int\limits_{\varkappa}^{\infty}
\biggl(1+\frac{t^{2}}{\tau^{2}}\biggr)d(E_{t}u,u)_{\varphi_{0}}\\
&=c_{\psi,\varkappa}^{2}\,\psi^{2}(\tau)\bigl(\|u\|_{\varphi_0}^{2}+
\tau^{-2}\,\|Ju\|_{\varphi_0}^{2}\bigr)\\
&=c_{\psi,\varkappa}^{2}\,\psi^{2}(\tau)\bigl(\|u\|_{\varphi_0}^{2}+
\tau^{-2}\,\|u\|_{\varphi_1}^{2}\bigr).
\end{align*}
Here, we use the interpolation formula $[H^{\varphi_{0}}_{A},H^{\varphi_{1}}_{A}]_{\psi}=H^{\varphi}_{A}$ and inequality \eqref{f3b.2}. Thus, the required inequality \eqref{f3b.3} is proved.
\end{proof}
Let us consider an application of Theorem~\ref{th3b.1}. We arbitrarily choose $u\in H^{\varphi_{1}}_{A}$ such that $u\neq0$. Put $\tau:=\|u\|_{\varphi_1}/\|u\|_{\varphi_0}$ in \eqref{f3b.3} and note that $\tau\geq\varkappa$ in view of \eqref{f3.2}. We then obtain the interpolational inequality
\begin{equation}\label{f3b.9new}
\|u\|_{\varphi}\leq c_{\psi,\varkappa}\sqrt{2}\,\|u\|_{\varphi_0}\,
\psi\biggl(\frac{\|u\|_{\varphi_1}}{\|u\|_{\varphi_0}}\biggr).
\end{equation}
If the function
\begin{equation}\label{f3b.10}
\chi(\tau):=\psi^{2}(\sqrt{\tau})\quad\mbox{is concave on}\quad
[\varkappa^{2},\infty),
\end{equation}
then \eqref{f3b.9new} holds true without the factor $c_{\psi,\varkappa}\sqrt{2}$. Indeed, choosing $v\in H^{\varphi_{1}}_{A}$ with $\|v\|_{\varphi_0}=1$ arbitrarily, we get
\begin{align*}
\|v\|_{\varphi}^2&=
\int\limits_{\varkappa}^{\infty}\psi^{2}(t)\,d(E_{t}v,v)_{\varphi_{0}}=
\int\limits_{\varkappa}^{\infty}\chi(t^2)\,d(E_{t}v,v)_{\varphi_{0}}\\
&\leq\chi\Biggl(\,\int\limits_{\varkappa}^{\infty}
t^2\,d(E_{t}v,v)_{\varphi_{0}}\Biggr)=
\chi\bigl(\|Jv\|_{\varphi_0}^2\bigr)=
\chi\bigl(\|v\|_{\varphi_1}^2\bigr)=
\psi^2\bigl(\|v\|_{\varphi_1}\bigr)
\end{align*}
due to the Jensen inequality applied to the concave function $\chi$. Here, $E_{t}$ is the same as that in the proof of Theorem~\ref{th3b.1}, and
\begin{equation*}
\int\limits_{\varkappa}^{\infty}d(E_{t}v,v)_{\varphi_{0}}=
\|v\|_{\varphi_0}^2=1.
\end{equation*}
Putting $v:=u/\|u\|_{\varphi_{0}}$ in the inequality $\|v\|_{\varphi}\leq\psi(\|v\|_{\varphi_1})$ just obtained, we conclude that
\begin{equation}\label{f3b.11}
\|u\|_{\varphi}\leq\|u\|_{\varphi_0}\,
\psi\biggl(\frac{\|u\|_{\varphi_1}}{\|u\|_{\varphi_0}}\biggr)
\quad\mbox{under condition \eqref{f3b.10}}.
\end{equation}
This interpolational inequality is equivalent to the Variable Hilbert Scale Inequality \cite[Theorem~1, formula (9)]{HeglandAnderssen11} under the supplementary assumption that both functions $\varphi_0$ and $\varphi_1$ are continuous. (Note that the norm $\|\cdot\|_{\varphi}$ used in the cited article \cite{HeglandAnderssen11} corresponds to the norm $\|\cdot\|_{\sqrt{\varphi}}$ in our notation. Besides, that article does not assume that the function $\varphi_0/\varphi_1$ is bounded.)
In the case of the Hilbert scale $\{H^{s}_{A}:s\in\mathbb{R}\}$, inequality \eqref{f3b.11} becomes
\begin{equation}\label{f3b.12}
\|u\|_{s}\leq\|u\|_{s_0}^{1-\theta}\,\|u\|_{s_1}^{\theta}
\end{equation}
provided that the real numbers $s$, $s_0$, $s_1$, and $\theta$ satisfy \eqref{f3b.6} (we use the power functions \eqref{f3b.7}). The interpolational inequality \eqref{f3b.12} is well known \cite[Section~9, Subsection~1]{KreinPetunin66} and means that the Hilbert scale is a normal scale of spaces.
The interpolational inequalities just considered deal with norms of vectors. Now we focus our attention on interpolational inequalities that involve norms of linear operators acting continuously between appropriate Hilbert spaces $H^{\varphi}_{A}$ and~$G^{\eta}_{Q}$. Here, $G$ (just as $H$) is a separable infinite-dimensional complex Hilbert space, and $Q$ is a counterpart of $A$ for $G$. Namely, $Q$ is a self-adjoint unbounded linear operator in $G$ such that $\mathrm{Spec}\,Q\subseteq[1,\infty)$. We suppose that the functions $\eta_{0},\eta_{1},\eta:[1,\infty)\to(0,\infty)$ satisfy conditions analogous to those imposed on $\varphi_{0}$, $\varphi_{1}$, and $\varphi$ at the beginning of this section. Namely, these functions are Borel measurable, the function $\eta_{0}/\eta_{1}$ is bounded, and
\begin{equation}\label{f3b.13}
\eta(t)=\eta_{0}(t)\,\psi\biggl(\frac{\eta_{1}(t)}{\eta_{0}(t)}\biggr)
\quad\mbox{whenever}\quad t\geq1.
\end{equation}
We suppose that a linear mapping $T$ is given on $H^{\varphi_0}_{A}$ and satisfies the following condition: the restriction of $T$ to the space $H^{\varphi_j}_{A}$ is a bounded operator
\begin{equation}\label{f3b.14}
T:H^{\varphi_j}_{A}\to G^{\eta_j}_{Q}\quad\mbox{for each}\quad j\in\{0,1\}.
\end{equation}
Then the restriction of $T$ to $H^{\varphi}_{A}$ is a bounded operator
\begin{equation}\label{f3b.15}
T:H^{\varphi}_{A}=\bigl[H^{\varphi_0}_{A},H^{\varphi_1}_{A}\bigr]_{\psi}\to \bigl[G^{\eta_0}_{Q},G^{\eta_1}_{Q}\bigr]_{\psi}=G^{\eta}_{Q}
\end{equation}
according to Theorem~\ref{th2.6} and because $\psi$ is an interpolation parameter. Let $\|T\|_{j}$ and $\|T\|$ denote the norms of operators \eqref{f3b.14} and \eqref{f3b.15} respectively. Then
\begin{equation}\label{f3b.16}
\|T\|\leq c\,\max\{\|T\|_0,\|T\|_1\}
\end{equation}
for some number $c>0$ that does not depend on $T$ but may depend on $\psi$ and the spaces $H^{\varphi_j}_{A}$ and $G^{\eta_j}_{Q}$ (see, e.g., \cite[Theorem 2.4.2]{BerghLefstrem76}). This is an interpolational inequality for operator norms, which means that the method of quadratic interpolation is uniform.
We will consider a more precise interpolational inequality than \eqref{f3b.16}; it involves the function $\psi$ explicitly. Put
\begin{equation*}
\nu:=\min\biggl\{\inf_{t\geq1}\frac{\varphi_{1}(t)}{\varphi_{0}(t)},\,
\inf_{t\geq1}\frac{\eta_{1}(t)}{\eta_{0}(t)}\biggr\}>0,
\end{equation*}
and let $c_{\psi,\nu}$ denote a positive number such that inequality \eqref{f3b.2} holds true with $\nu$ taken instead of $\varkappa$. Without loss of generality we suppose that
\begin{equation}\label{f3b.17}
\frac{\psi(t)}{\psi(\tau)}\leq c_{\psi,\nu}\max\biggl\{1,\frac{t}{\tau}\biggr\}\quad\mbox{for all}\;\; t,\tau>0.
\end{equation}
(Hence, $\psi$ is pseudoconcave on $(0,\infty)$ according to
\cite[Lemma 5.4.3]{BerghLefstrem76}.) We can achieve this by
redefining $\psi$ properly on $(0,\nu)$, e.g., by the formula $\psi(t):=\psi(\nu)\nu^{-1}t$ whenever $0<t<\nu$. This does not change $\varphi$ and $\eta$ in view of \eqref{f2.8} and \eqref{f3b.13}. Let $\widetilde{\psi}$ denote the dilation function for $\psi$, i.e.
\begin{equation}\label{f3b.18}
\widetilde{\psi}(\lambda):=\sup_{t>0}\frac{\psi(\lambda t)}{\psi(t)}
\leq c_{\psi,\nu}\max\{1,\lambda\}
\quad\mbox{whenever}\;\;\lambda>0.
\end{equation}
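For instance (an illustrative computation), for the power function $\psi(t)\equiv t^{\theta}$ with $0\leq\theta\leq1$, inequality \eqref{f3b.17} holds true with $c_{\psi,\nu}=1$, and the supremum in \eqref{f3b.18} does not depend on $t$:
\begin{equation*}
\widetilde{\psi}(\lambda)=\sup_{t>0}\frac{(\lambda t)^{\theta}}{t^{\theta}}
=\lambda^{\theta}\quad\mbox{whenever}\;\;\lambda>0.
\end{equation*}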
\begin{theorem}\label{th3b.2}
The following interpolational inequality holds true:
\begin{equation}\label{f3b.19}
\|T\|\leq c_{\psi,\nu}^{2}\sqrt{8}\,\|T\|_{0}\,
\widetilde{\psi}\biggl(\frac{\|T\|_{1}}{\|T\|_{0}}\biggr).
\end{equation}
\end{theorem}
\begin{proof}
It follows directly from Theorem~\ref{th2.6} (namely, from the equalities in \eqref{f3b.15}) and the result by Fan \cite[formula (2.3)]{Fan11} that inequality \eqref{f3b.19} holds true with a certain number $c>0$ written instead of $c_{\psi,\nu}^{2}\sqrt{8}$. Note that Fan \cite{Fan11} denotes the interpolation space $[H_{0},H_{1}]_{\psi}$ by $\overline{\mathcal{H}}_{\chi}$ where $\chi(t)\equiv\psi^{2}(\sqrt{t})$ and that $\psi$ is pseudoconcave on $(0,\infty)$ if and only if so is $\chi$, with the dilation function $\widetilde{\chi}(t)\equiv\widetilde{\psi}^{2}(\sqrt{t})$.
(As in Section~\ref{sec2}, $[H_{0},H_{1}]$ is a regular pair of separable complex Hilbert spaces.) Besides, if $\chi$ is concave on $(0,\infty)$, then $c=\sqrt{2}$. This is a direct consequence of Theorem~\ref{th2.6} and the inequality \cite[formula (2.2)]{Fan11}.
Let us show that we may take $c=c_{\psi,\nu}^{2}\sqrt{8}$ in \eqref{f3b.19} in the general case where $\psi$ is pseudoconcave on $(0,\infty)$.
Considering the function $\chi(t):=\psi^2(\sqrt{t})$ for $t>0$ and using \eqref{f3b.17}, we have
\begin{equation*}
\frac{\chi(t)}{\chi(\tau)}\leq
c_{\psi,\nu}^2\max\biggl\{1,\frac{t}{\tau}\biggr\}
\quad\mbox{for all}\;\; t,\tau>0.
\end{equation*}
It follows from this that
\begin{equation*}
\frac{1}{2c_{\psi,\nu}^2}\,\chi_1(t)\leq
\chi(t)\leq\chi_1(t)\quad\mbox{whenever}\quad t>0,
\end{equation*}
with $\chi_1:(0,\infty)\to(0,\infty)$ being the least concave majorant of $\chi$ (see \cite[p.~91]{Peetre68}). Hence,
\begin{equation}\label{est-psi}
\frac{1}{\sqrt{2}\,c_{\psi,\nu}}\,\psi_{\ast}(t)\leq
\psi(t)\leq\psi_{\ast}(t)\quad\mbox{whenever}\quad t>0,
\end{equation}
where $\psi_{\ast}(t):=\sqrt{\chi_1(t^2)}$ for $t>0$. Since the function
$\psi_{\ast}^{2}(\sqrt{t})\equiv\chi_{1}(t)$ is concave on $(0,\infty)$, we conclude by \cite[formula (2.2)]{Fan11} that
\begin{equation}\label{est-T-ast}
\|T\|_{\ast}\leq \sqrt{2}\,\|T\|_{0}\,
\widetilde{\psi_{\ast}}\biggl(\frac{\|T\|_{1}}{\|T\|_{0}}\biggr).
\end{equation}
Here, $\|T\|_{\ast}$ denotes the norm of the bounded operator
\begin{equation*}
T:\bigl[H^{\varphi_0}_{A},H^{\varphi_1}_{A}\bigr]_{\psi_{\ast}}\to \bigl[G^{\eta_0}_{Q},G^{\eta_1}_{Q}\bigr]_{\psi_{\ast}},
\end{equation*}
and $\widetilde{\psi_{\ast}}$ is the dilation function for $\psi_{\ast}$.
It follows from \eqref{f3b.15} and \eqref{est-psi} that
\begin{equation}\label{f3b.20}
\|T\|\leq \sqrt{2}\,c_{\psi,\nu}\,\|T\|_{\ast}.
\end{equation}
Besides,
\begin{equation}\label{f3b.21}
\widetilde{\psi_{\ast}}(\lambda)=
\sup_{t>0}\frac{\psi_{\ast}(\lambda t)}{\psi_{\ast}(t)}\leq
\sqrt{2}\,c_{\psi,\nu}\widetilde{\psi}(\lambda)
\quad\mbox{whenever}\;\;\lambda>0.
\end{equation}
Now \eqref{est-T-ast}, \eqref{f3b.20}, and \eqref{f3b.21} yield the required inequality \eqref{f3b.19}.
\end{proof}
The inequality \eqref{f3b.19} is more precise than \eqref{f3b.16} in view of \eqref{f3b.18}.
\begin{remark}
If the function $\psi$ is concave on $(0,\infty)$, then $c_{\psi,\nu}=1$ in inequality \eqref{f3b.19}. Besides, we may write $\sqrt{2}$ instead of $c_{\psi,\nu}^{2}\sqrt{8}$ in this inequality provided that the function $\psi^{2}(\sqrt{t})$ is concave on $(0,\infty)$, as we noted in the proof of Theorem~\ref{th3b.2}.
\end{remark}
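In particular, for the power functions \eqref{f3b.7} with $\psi(t)\equiv t^{\theta}$ and $0<\theta<1$, the function $\psi^{2}(\sqrt{t})=t^{\theta}$ is concave on $(0,\infty)$ and $\widetilde{\psi}(\lambda)=\lambda^{\theta}$. Hence, by the above remark, inequality \eqref{f3b.19} holds true with $\sqrt{2}$ instead of $c_{\psi,\nu}^{2}\sqrt{8}$ and takes the multiplicative form
\begin{equation*}
\|T\|\leq\sqrt{2}\,\|T\|_{0}^{1-\theta}\,\|T\|_{1}^{\theta},
\end{equation*}
which is an operator counterpart of the interpolational inequality \eqref{f3b.12}.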
\section{Applications to function spaces}\label{sec4}
In this section, we will show how the concept of the extended Hilbert scale allows us to introduce and investigate wide classes of Hilbert function (or distribution) spaces related to Sobolev spaces on manifolds.
Let $\Gamma$ be a separable paracompact infinitely smooth Riemannian
manifold without boundary. Consider the separable complex Hilbert space $H:=L_{2}(\Gamma)$ of all functions $f:\Gamma\to\mathbb{C}$ which are square integrable over $\Gamma$ with respect to the Riemann measure. Let $\Delta_{\Gamma}$ be the Laplace\,--\,Beltrami operator on $\Gamma$; it is defined on the linear manifold $C^{\infty}_{0}(\Gamma)$ of all compactly supported functions $f\in C^{\infty}(\Gamma)$. Assume that the closure of this operator is self-adjoint in $H$, and denote this closure by $\Delta_{\Gamma}$ too. (Specifically, this self-adjointness follows from the completeness of $\Gamma$ under the Riemann metric \cite[p.~140]{Gaffney54}. For incomplete Riemannian manifolds, sufficient conditions for the self-adjointness are given, e.g., in \cite{BravermanMilatovicShubin02, MilatovicTruc16}.) Then the operator $A:=(1-\Delta_{\Gamma})^{1/2}$ is self-adjoint and positive definite with the lower bound $r=1$. Therefore, the separable Hilbert space $H_{A}^{\varphi}$ is defined for every Borel measurable function $\varphi:\nobreak[1,\infty)\to(0,\infty)$; we denote this space by $H_{A}^{\varphi}(\Gamma)$. If $\varphi(t)\equiv t^{s}$ for certain $s\in\mathbb{R}$, then $H_{A}^{\varphi}(\Gamma)$ becomes the Sobolev space $H^{s}(\Gamma)$ of order~$s$. According to Theorem~\ref{th2.1}, the extended $A$-scale consists (up to equivalence of norms) of all Hilbert spaces $H_{A}^{\varphi}(\Gamma)$ where $\varphi\in\mathrm{OR}$. In other words, the class $\{H_{A}^{\varphi}(\Gamma):\varphi\in\mathrm{OR}\}$ consists of all interpolation Hilbert spaces between inner product Sobolev spaces over $\Gamma$. Therefore, it is natural to call this class the extended Sobolev scale over $\Gamma$.
Now we focus our attention on two important cases where $\Gamma$ is the
Euclidean space $\mathbb{R}^{n}$ and where $\Gamma$ is a compact boundaryless manifold. Generalizing the above consideration, we use some elliptic operators as $A$. We will prove that the extended Hilbert scales generated by these operators consist of some distribution spaces, admit an explicit description with the help of the Fourier transform and local charts on $\Gamma$, and do not depend on the choice of the elliptic operators and these charts. Since we consider complex linear spaces formed by functions or distributions, all functions and distributions are supposed to be complex-valued unless otherwise stated.
\subsection{}\label{sec4.1}
In this subsection, we consider the extended Hilbert scale generated by a uniformly elliptic operator on $\mathbb{R}^{n}$, with $n\geq1$. Here, we put $H:=L_{2}(\mathbb{R}^{n})$ and note that the Riemann measure on $\mathbb{R}^{n}$ is the Lebesgue measure. Let $(\cdot,\cdot)_{\mathbb{R}^{n}}$ and $\|\cdot\|_{\mathbb{R}^{n}}$ stand respectively for the inner product and norm in $L_{2}(\mathbb{R}^{n})$.
Following \cite[Section 1.1]{Agranovich94}, we let $\Psi^{m}(\mathbb{R}^{n})$, where $m\in\mathbb{R}$, denote the class of all pseudodifferential operators (PsDOs) $\mathcal{A}$ on $\mathbb{R}^{n}$ whose symbols $a$ belong to $C^{\infty}(\mathbb{R}^{2n})$ and satisfy the following condition: for all multi-indices $\alpha,\beta\in\mathbb{Z}_{+}^{n}$ there exists a number $c_{\alpha,\beta}>0$ such that
\begin{equation*}
|\partial^{\alpha}_{x}\partial^{\beta}_{\xi}a(x,\xi)|\leq c_{\alpha,\beta}(1+|\xi|)^{m-|\beta|}\quad\mbox{for all}\quad x,\xi\in\mathbb{R}^{n},
\end{equation*}
with $x$ and $\xi$ being considered respectively as spatial and frequency variables. If $\mathcal{A}\in\Psi^{m}(\mathbb{R}^{n})$, we say that the (formal) order of $\mathcal{A}$ is $m$. A PsDO $\mathcal{A}\in\Psi^{m}(\mathbb{R}^{n})$ is called uniformly elliptic on $\mathbb{R}^{n}$ if there exist positive numbers $c_{1}$ and $c_{2}$ such that $|a(x,\xi)|\geq c_{1}|\xi|^{m}$ whenever $x,\xi\in\mathbb{R}^{n}$ and $|\xi|\geq c_{2}$ (see \cite[Section 3.1~b]{Agranovich94}).
We suppose henceforth in this subsection that
\begin{itemize}
\item[(a)] $\mathcal{A}$ is a PsDO of class $\Psi^{1}(\mathbb{R}^{n})$;
\item[(b)] $\mathcal{A}$ is uniformly elliptic on $\mathbb{R}^{n}$;
\item[(c)] the inequality $(\mathcal{A}w,w)_{\mathbb{R}^{n}}\geq\|w\|_{\mathbb{R}^{n}}^{2}$ holds true for every $w\in C^{\infty}_{0}(\mathbb{R}^{n})$.
\end{itemize}
Here, as usual, $C^{\infty}_{0}(\mathbb{R}^{n})$ denotes the set of all compactly supported functions $u\in C^{\infty}(\mathbb{R}^{n})$.
Consider the mapping
\begin{equation}\label{f4.1}
w\mapsto\mathcal{A}w,\quad\mbox{with}\quad w\in C^{\infty}_{0}(\mathbb{R}^{n}),
\end{equation}
as an unbounded linear operator in the Hilbert space $H=L_{2}(\mathbb{R}^{n})$. This operator is closable because the PsDO $\mathcal{A}\in\Psi^{1}(\mathbb{R}^{n})$ acts continuously from $L_{2}(\mathbb{R}^{n})$ to $H^{-1}(\mathbb{R}^{n})$ (see \cite[Theorem 1.1.2]{Agranovich94}). Here and below, $H^{s}(\mathbb{R}^{n})$ denotes the inner product Sobolev space of order $s\in\mathbb{R}$ over~$\mathbb{R}^{n}$.
Let $A$ denote the closure of the operator \eqref{f4.1} in $L_{2}(\mathbb{R}^{n})$. It follows from (a) and (b) that $\mathrm{Dom}\,A=H^{1}(\mathbb{R}^{n})$ (see \cite[Sections 2.3~c and 3.1~b]{Agranovich94}). Owing to (c), the operator $A$ is positive definite with the lower bound $r=1$. This implies due to (b) that $A$ is self-adjoint \cite[Sections 2.3~d and 3.1~b]{Agranovich94}, with $\mathrm{Spec}\,A\subseteq[1,\infty)$.
Thus, $A$ is the operator considered in Section~\ref{sec2}. Therefore, the separable Hilbert space $H_{A}^{\varphi}$ is defined for every Borel measurable function $\varphi:\nobreak[1,\infty)\to(0,\infty)$. We denote this space by $H_{A}^{\varphi}(\mathbb{R}^{n})$. An important example of $A$ is the operator $(1-\Delta)^{1/2}$, with $\Delta$ denoting the Laplace operator in $\mathbb{R}^{n}$. In this case, the PsDO $\mathcal{A}$ has the symbol $a(x,\xi)\equiv(1+|\xi|^{2})^{1/2}$.
If $\varphi(t)\equiv t^{s}$ for certain $s\in\mathbb{R}$, then it is possible to show that the space $H_{A}^{s}(\mathbb{R}^{n}):=H_{A}^{\varphi}(\mathbb{R}^{n})$ coincides with the Sobolev space $H^{s}(\mathbb{R}^{n})$ up to equivalence of norms. Thus, the $A$-scale $\{H_{A}^{s}(\mathbb{R}^{n}):s\in\mathbb{R}\}$ is the Sobolev Hilbert scale. Let us show that the extended $A$-scale consists of some generalized Sobolev spaces, namely the spaces $H^{\varphi}(\mathbb{R}^{n})$ with $\varphi\in\mathrm{OR}$.
Let $\varphi\in\mathrm{OR}$. By definition, the linear space $H^{\varphi}(\mathbb{R}^{n})$ consists of all distributions $w\in\mathcal{S}'(\mathbb{R}^{n})$ whose Fourier transform $\widehat{w}$ is locally Lebesgue integrable over $\mathbb{R}^{n}$ and satisfies the condition
$$
\int\limits_{\mathbb{R}^{n}}\varphi^{2}(\langle\xi\rangle)\,
|\widehat{w}(\xi)|^{2}\,d\xi<\infty.
$$
Here, as usual, $\mathcal{S}'(\mathbb{R}^{n})$ is the linear topological space of all tempered distributions on $\mathbb{R}^{n}$, and
$\langle\xi\rangle:=(1+|\xi|^{2})^{1/2}$ is the smoothed modulus of $\xi\in\mathbb{R}^{n}$. The space $H^{\varphi}(\mathbb{R}^{n})$ is
endowed with the inner product
$$
(w_{1},w_{2})_{\varphi,\mathbb{R}^{n}}:=
\int\limits_{\mathbb{R}^{n}}\varphi^{2}(\langle\xi\rangle)\,
\widehat{w_{1}}(\xi)\,\overline{\widehat{w_{2}}(\xi)}\,d\xi
$$
and the corresponding norm $\|w\|_{\varphi,\mathbb{R}^{n}}:=
(w,w)_{\varphi,\mathbb{R}^{n}}^{1/2}$. This space is complete and separable with respect to this norm and is embedded continuously in $\mathcal{S}'(\mathbb{R}^{n})$; the set $C^{\infty}_{0}(\mathbb{R}^{n})$ is dense in $H^{\varphi}(\mathbb{R}^{n})$ \cite[Theorem 2.2.1]{Hermander63}.
If $\varphi(t)\equiv t^{s}$ for some $s\in\mathbb{R}$, then $H^{\varphi}(\mathbb{R}^{n})$ becomes the Sobolev space $H^{s}(\mathbb{R}^{n})$. The Hilbert space $H^{\varphi}(\mathbb{R}^{n})$ is an isotropic case of the spaces introduced and investigated by Malgrange \cite{Malgrange57}, H\"ormander \cite[Sec. 2.2]{Hermander63} (see also \cite[Section 10.1]{Hermander83}), and Volevich and Paneah \cite[Section~2]{VolevichPaneah65}. These spaces are generalizations of Sobolev spaces to the case where a general enough function of frequency variables serves as an order of distribution spaces.
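As an illustrative example beyond the power functions, consider the function $\varphi(t):=t^{s}(1+\ln t)^{r}$, with $s,r\in\mathbb{R}$; it belongs to $\mathrm{OR}$, and the corresponding space $H^{\varphi}(\mathbb{R}^{n})$ consists of the tempered distributions $w$ such that
\begin{equation*}
\int\limits_{\mathbb{R}^{n}}\langle\xi\rangle^{2s}
\bigl(1+\ln\langle\xi\rangle\bigr)^{2r}\,
|\widehat{w}(\xi)|^{2}\,d\xi<\infty.
\end{equation*}
If $r>0$, this space gives a logarithmic refinement of the Sobolev scale; namely, $H^{s+\varepsilon}(\mathbb{R}^{n})\subset H^{\varphi}(\mathbb{R}^{n})\subset H^{s}(\mathbb{R}^{n})$ for every $\varepsilon>0$, because $t^{s}\leq\varphi(t)\leq c_{\varepsilon}\,t^{s+\varepsilon}$ whenever $t\geq1$.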
\begin{theorem}\label{th4.1}
Let $\varphi\in\mathrm{OR}$. Then the spaces $H^{\varphi}_{A}(\mathbb{R}^{n})$ and $H^{\varphi}(\mathbb{R}^{n})$ coincide as completions of $C^{\infty}_{0}(\mathbb{R}^{n})$ with respect to equivalent norms.
\end{theorem}
It is worthwhile to note that the norm of $w\in C^{\infty}_{0}(\mathbb{R}^{n})$ in $H^{\varphi}_{A}(\mathbb{R}^{n})$ is $\|\varphi(A)w\|_{\mathbb{R}^{n}}$ because
\begin{equation*}
\mathrm{Dom}\,\varphi(A)\supset\mathrm{Dom}\,A^{s_1}=
H^{s_1}_{A}(\mathbb{R}^{n})\supset C^{\infty}_{0}(\mathbb{R}^{n});
\end{equation*}
here $s_1$ is a positive number that satisfies \eqref{f2.3}.
According to Theorem~\ref{th4.1}, the space $H^{\varphi}_{A}(\mathbb{R}^{n})$, with $\varphi\in\mathrm{OR}$, does not depend on~$A$ up to equivalence of norms. Theorems \ref{th2.1} and \ref{th4.1} yield the following explicit description of the extended Hilbert scale generated by the considered operator $A$:
\begin{corollary}\label{cor4.2}
The extended $A$-scale consists (up to equivalence of norms) of all the spaces $H^{\varphi}(\mathbb{R}^{n})$ with $\varphi\in\mathrm{OR}$.
\end{corollary}
Thus, the class $\{H^{\varphi}(\mathbb{R}^{n}):\varphi\in\mathrm{OR}\}$ consists (up to equivalence of norms) of all Hilbert spaces each of which is an interpolation space between some Sobolev spaces $H^{s_0}(\mathbb{R}^{n})$ and $H^{s_1}(\mathbb{R}^{n})$ with $s_0<s_1$.
As we have noted, this class is called the extended Sobolev scale over~$\mathbb{R}^{n}$. It was considered in \cite{MikhailetsMurach13UMJ3, MikhailetsMurach15ResMath1} and \cite[Section~2.4.2]{MikhailetsMurach14}.
\begin{proof}[Proof of Theorem $\ref{th4.1}$.]
Choose an integer $k\gg1$ such that $-k<\sigma_{0}(\varphi)$ and $k>\sigma_{1}(\varphi)$, and define the interpolation parameter $\psi$ by formula \eqref{f2.6} in which $s_{0}:=-k$ and $s_{1}:=k$. According to Theorem~\ref{th2.5}, we have the equality
\begin{equation}\label{f4.2}
H^{\varphi}_{A}(\mathbb{R}^{n})=
\bigl[H^{-k}_{A}(\mathbb{R}^{n}),H^{k}_{A}(\mathbb{R}^{n})\bigr]_{\psi}.
\end{equation}
Note that each space $H^{\pm k}_{A}(\mathbb{R}^{n})$ coincides with $H^{\pm k}(\mathbb{R}^{n})$ up to equivalence of norms. Indeed, $A$ sets an isomorphism between $H^{1}(\mathbb{R}^{n})$ and $L_{2}(\mathbb{R}^{n})$ because $\mathrm{Dom}\,A=H^{1}(\mathbb{R}^{n})$ and $0\notin\mathrm{Spec}\,A$. Besides, since the PsDO $\mathcal{A}$ of the first order is uniformly elliptic on $\mathbb{R}^{n}$, the operator $A$ has the following lifting property: if $u\in H^{1}(\mathbb{R}^{n})$ and if $Au\in H^{s-1}(\mathbb{R}^{n})$ for some $s>1$, then $u\in H^{s}(\mathbb{R}^{n})$ (see \cite[Sections 1.8 and 3.1~b]{Agranovich94}). Hence, $A^{k}$ sets an isomorphism between $H^{k}(\mathbb{R}^{n})$ and $L_{2}(\mathbb{R}^{n})$. Thus, $H^{k}_{A}(\mathbb{R}^{n})=H^{k}(\mathbb{R}^{n})$ up to equivalence of norms. Passing here to dual spaces with respect to $L_{2}(\mathbb{R}^{n})$, we conclude that $H^{-k}_{A}(\mathbb{R}^{n})=H^{-k}(\mathbb{R}^{n})$ up to equivalence of norms (see \cite[Section~9, Subsection~1]{KreinPetunin66}).
Thus, it follows from \eqref{f4.2} that
\begin{equation}\label{f4.3}
H^{\varphi}_{A}(\mathbb{R}^{n})=
\bigl[H^{-k}(\mathbb{R}^{n}),H^{k}(\mathbb{R}^{n})\bigr]_{\psi}
\end{equation}
up to equivalence of norms. Indeed, since the identity mapping sets the isomorphisms
\begin{equation*}
I:H^{\mp k}_{A}(\mathbb{R}^{n})\leftrightarrow H^{\mp k}(\mathbb{R}^{n})
\end{equation*}
and since $\psi$ is an interpolation parameter, the identity mapping realizes the isomorphism
\begin{equation*}
I:\bigl[H^{-k}_{A}(\mathbb{R}^{n}),H^{k}_{A}(\mathbb{R}^{n})\bigr]_{\psi}
\leftrightarrow
\bigl[H^{-k}(\mathbb{R}^{n}),H^{k}(\mathbb{R}^{n})\bigr]_{\psi}.
\end{equation*}
This yields \eqref{f4.3} due to \eqref{f4.2}.
Thus, Theorem~\ref{th4.1} follows directly from \eqref{f4.3} and
\begin{equation}\label{f4.4}
H^{\varphi}(\mathbb{R}^{n})=
\bigl[H^{-k}(\mathbb{R}^{n}),H^{k}(\mathbb{R}^{n})\bigr]_{\psi}.
\end{equation}
The latter equality is proved in \cite[Theorem~2.19]{MikhailetsMurach14}. Besides, \eqref{f4.4} is a special case of \eqref{f4.3} because $H^{\varphi}(\mathbb{R}^{n})=H_{A}^{\varphi}(\mathbb{R}^{n})$ if $A=(1-\Delta)^{1/2}$. Indeed, since the Fourier transform reduces
$A=(1-\Delta)^{1/2}$ to the operator of multiplication by $\langle\xi\rangle$, we conclude that $\mathrm{Dom}\,\varphi(A)\subseteq H^{\varphi}(\mathbb{R}^{n})$ and $\|\varphi(A)u\|_{\mathbb{R}^{n}}=\|u\|_{\varphi,\mathbb{R}^{n}}$ for every $u\in\mathrm{Dom}\,\varphi(A)$. Hence, $H_{A}^{\varphi}(\mathbb{R}^{n})$ is a subspace of $H^{\varphi}(\mathbb{R}^{n})$. But $C^{\infty}_{0}(\mathbb{R}^{n})$ and then $H_{A}^{\varphi}(\mathbb{R}^{n})$ are dense in $H^{\varphi}(\mathbb{R}^{n})$. Thus, $H_{A}^{\varphi}(\mathbb{R}^{n})$ coincides with $H^{\varphi}(\mathbb{R}^{n})$ if $A=(1-\Delta)^{1/2}$.
\end{proof}
Ending this subsection, we note the following: in contrast to the spaces $H^{s}(\mathbb{R}^{n})$, the inner product Sobolev spaces
\begin{equation*}
H^{s}(\Omega):=\bigl\{w\!\upharpoonright\!\Omega:w\in H^{s}(\mathbb{R}^{n})\bigr\},\quad\mbox{with}\quad s\in\mathbb{R},
\end{equation*}
over a domain $\Omega\subset\mathbb{R}^{n}$ do not form a Hilbert scale even if $\Omega$ is a bounded domain with infinitely smooth boundary
and if we restrict ourselves to the spaces of order $s\geq0$ \cite[Corollary~2.3]{Neubauer88}. An explicit description of all Hilbert spaces that are interpolation ones for an arbitrary chosen pair of Sobolev spaces $H^{s_{0}}(\Omega)$ and $H^{s_{1}}(\Omega)$, where $-\infty<s_{0}<s_{1}<\infty$, is given in \cite[Theorem~2.4]{MikhailetsMurach15ResMath1} provided that $\Omega$ is a bounded domain with Lipschitz boundary. These interpolation spaces are (up to equivalence of norms) the generalized Sobolev spaces
\begin{gather*}
H^{\varphi}(\Omega):=\bigl\{v:=w\!\upharpoonright\!\Omega:w\in H^{\varphi}(\mathbb{R}^{n})\bigr\}
\end{gather*}
such that $\varphi$ belongs to $\mathrm{OR}$ and satisfies~\eqref{f2.3}, the Hilbert norm in $H^{\varphi}(\Omega)$ being naturally defined by the formula
\begin{equation*}
\|v\|_{\varphi,\Omega}:=\inf\bigl\{\|w\|_{\varphi,\mathbb{R}^{n}}:
w\in H^{\varphi}(\mathbb{R}^{n}),v=w\!\upharpoonright\!\Omega\bigr\}.
\end{equation*}
\subsection{}\label{sec4.2}
Here, we consider the extended Hilbert scale generated by an elliptic operator given on a closed manifold. Let $\Gamma$ be an arbitrary closed (i.e. compact and boundaryless) infinitely smooth
manifold of dimension $n\geq1$. We suppose that a certain positive $C^{\infty}$-density $dx$ is given on~$\Gamma$. We put $H:=L_{2}(\Gamma)$, where $L_{2}(\Gamma)$ is the complex Hilbert space of all square integrable functions over $\Gamma$ with respect to the measure induced by this density. Let $(\cdot,\cdot)_{\Gamma}$ and $\|\cdot\|_{\Gamma}$ stand respectively for the inner product and norm in $L_{2}(\Gamma)$.
Following \cite[Section 2.1]{Agranovich94}, we let $\Psi^{m}(\Gamma)$, where $m\in\mathbb{R}$, denote the class of all PsDOs on $\Gamma$ whose
representations in every local chart on $\Gamma$ belong to $\Psi^{m}(\mathbb{R}^{n})$. If $\mathcal{A}\in\Psi^{m}(\Gamma)$, we say that the (formal) order of $\mathcal{A}$ is $m$. A~PsDO $\mathcal{A}\in\Psi^{m}(\Gamma)$ is called elliptic on $\Gamma$ if for every point $x_{0}\in\Gamma$ there exist positive numbers $c_{1}$ and $c_{2}$ such that $|a_{x_{0}}(x,\xi)|\geq c_{1}|\xi|^{m}$ whenever $x\in U(x_{0})$ and $\xi\in\mathbb{R}^{n}$ and $|\xi|\geq\nobreak c_{2}$, with $a_{x_{0}}(x,\xi)$ being the local symbol of $\mathcal{A}$ corresponding to a certain coordinate neighbourhood $U(x_{0})$ of $x_{0}$ (see \cite[Section 3.1 b]{Agranovich94}).
We suppose in this subsection that
\begin{itemize}
\item[(a)] $\mathcal{A}$ is a PsDO of class $\Psi^{1}(\Gamma)$;
\item[(b)] $\mathcal{A}$ is elliptic on $\Gamma$;
\item[(c)] the inequality $(\mathcal{A}f,f)_{\Gamma}\geq\|f\|_{\Gamma}^{2}$ holds true for every $f\in C^{\infty}(\Gamma)$.
\end{itemize}
Let $A$ denote the closure, in $H=L_{2}(\Gamma)$, of the linear operator $f\mapsto\mathcal{A}f$, with $f\in\nobreak C^{\infty}(\Gamma)$.
Note that this operator is closable in $L_{2}(\Gamma)$ because the PsDO $\mathcal{A}\in\nobreak\Psi^{1}(\Gamma)$ acts continuously from $L_{2}(\Gamma)$ to $H^{-1}(\Gamma)$ \cite[Theorem 2.1.2]{Agranovich94}. Here and below, $H^{s}(\Gamma)$ stands for the inner product Sobolev space of order $s\in\mathbb{R}$ over $\Gamma$. It follows from (a)--(c) that the operator $A$ is positive definite and self-adjoint in $L_{2}(\Gamma)$, with $\mathrm{Dom}\,A=H^{1}(\Gamma)$ and $\mathrm{Spec}\,A\subseteq[1,\infty)$
(see \cite[Sections 2.3 c, d and 3.1~b]{Agranovich94}).
Thus, $A$ is the operator considered in Section~\ref{sec2}, and the separable Hilbert space $H_{A}^{\varphi}$ is defined for every Borel measurable function $\varphi:\nobreak[1,\infty)\to(0,\infty)$. We denote this space by $H_{A}^{\varphi}(\Gamma)$. An important example of $A$ is the operator $(1-\Delta_{\Gamma})^{1/2}$, where $\Gamma$ is endowed with a Riemann metric (then the density $dx$ is induced by this metric).
If $\varphi(t)\equiv t^{s}$ for some $s\in\mathbb{R}$, then the space $H_{A}^{s}(\Gamma):=H_{A}^{\varphi}(\Gamma)$ coincides with the Sobolev space $H^{s}(\Gamma)$ up to equivalence of norms \cite[Corollary 5.3.2]{Agranovich94}. Thus, the $A$-scale $\{H_{A}^{s}(\Gamma):s\in\nobreak\mathbb{R}\}$ is the Sobolev Hilbert scale over~$\Gamma$. We will prove that the extended $A$-scale consists of the generalized Sobolev spaces $H^{\varphi}(\Gamma)$ with $\varphi\in\mathrm{OR}$. Let us give their definition with the help of local charts on~$\Gamma$.
We arbitrarily choose a finite atlas from the $C^{\infty}$-structure on $\Gamma$; let this atlas be formed by $\varkappa$ local charts $\pi_j: \mathbb{R}^{n}\leftrightarrow \Gamma_{j}$, with $j=1,\ldots,\varkappa$.
Here, the open sets $\Gamma_{1},\ldots,\Gamma_{\varkappa}$ form a covering of $\Gamma$. We also arbitrarily choose functions $\chi_j\in C^{\infty}(\Gamma)$, with $j=1,\ldots,\varkappa$, that form a partition of unity on $\Gamma$ such that $\mathrm{supp}\,\chi_j\subset \Gamma_j$.
Let $\varphi\in\mathrm{OR}$. By definition, the linear space $H^{\varphi}(\Gamma)$ is the completion of the linear manifold $C^{\infty}(\Gamma)$ with respect to the inner product
\begin{equation}\label{f4.5}
(f_{1},f_{2})_{\varphi,\Gamma}:=
\sum_{j=1}^{\varkappa}\,((\chi_{j}f_{1})\circ\pi_{j},
(\chi_{j}f_{2})\circ\pi_{j})_{\varphi,\mathbb{R}^{n}}
\end{equation}
of functions $f_{1},f_{2}\in C^{\infty}(\Gamma)$. Thus, $H^{\varphi}(\Gamma)$ is a Hilbert space. Let $\|\cdot\|_{\varphi,\Gamma}$ denote the norm induced by the inner product \eqref{f4.5}. If $\varphi(t)\equiv t^{s}$ for certain $s\in\mathbb{R}$, then $H^{\varphi}(\Gamma)$ becomes the Sobolev space $H^{s}(\Gamma)$.
\begin{theorem}\label{th4.3}
Let $\varphi\in\mathrm{OR}$. Then the spaces $H^{\varphi}_{A}(\Gamma)$ and $H^{\varphi}(\Gamma)$ coincide as completions of $C^{\infty}(\Gamma)$ with respect to equivalent norms.
\end{theorem}
Note that the norm of $f\in C^{\infty}(\Gamma)$ in $H^{\varphi}_{A}(\Gamma)$ is $\|\varphi(A)f\|_{\Gamma}$ because
\begin{equation*}
\mathrm{Dom}\,\varphi(A)\supset\mathrm{Dom}\,A^{s_1}
\supset C^{\infty}(\Gamma);
\end{equation*}
here $s_1$ is a positive integer that satisfies \eqref{f2.3}.
Theorem \ref{th4.3} specifically entails
\begin{corollary}\label{cor4.4}
Let $\varphi\in\mathrm{OR}$. Then the space $H^{\varphi}_{A}(\Gamma)$ does not depend on~$A$ up to equivalence of norms. Besides, the space $H^{\varphi}(\Gamma)$ does not depend (up to equivalence of norms) on our choice of the atlas and partition of unity on $\Gamma$.
\end{corollary}
Owing to Theorems \ref{th2.1} and \ref{th4.3}, we obtain the following explicit description of the extended Hilbert scale generated by the considered operator $A$ on $\Gamma$:
\begin{corollary}\label{cor4.5}
The extended $A$-scale consists (up to equivalence of norms) of all the spaces $H^{\varphi}(\Gamma)$ with $\varphi\in\mathrm{OR}$.
\end{corollary}
Thus, the class $\{H^{\varphi}(\Gamma):\varphi\in\mathrm{OR}\}$ consists (up to equivalence of norms) of all Hilbert spaces each of which is an interpolation space between some inner product Sobolev spaces $H^{s_0}(\Gamma)$ and $H^{s_1}(\Gamma)$ with $s_0<s_1$.
This class is called the extended Sobolev scale over~$\Gamma$.
\begin{theorem}\label{th4.6}
Suppose that the manifold $\Gamma$ is endowed with a Riemann metric, and let $\varphi\in\mathrm{OR}$. Then the space $H^{\varphi}(\Gamma)$ admits the following three equivalent definitions in the sense that they introduce the same Hilbert space up to equivalence of norms:
\begin{itemize}
\item[(i)] \textbf{Operational definition.} The Hilbert space $H^{\varphi}(\Gamma)$ is the completion of $C^{\infty}(\Gamma)$ with respect to the norm $\|\varphi((1-\Delta_{\Gamma})^{1/2})f\|_{\Gamma}$ of $f\in C^{\infty}(\Gamma)$.
\item[(ii)] \textbf{Local definition.} The Hilbert space $H^{\varphi}(\Gamma)$ consists of all distributions $f\in\mathcal{D}'(\Gamma)$ such that $(\chi_{j}f)\circ\pi_{j}\in
H^{\varphi}(\mathbb{R}^{n})$ for every $j\in\{1,\ldots,\varkappa\}$ and is endowed with the inner product \eqref{f4.5} of distributions $f_{1},f_{2}\in H^{\varphi}(\Gamma)$.
\item[(iii)] \textbf{Interpolational definition.} Let integers $s_{0}$ and $s_{1}$ satisfy the conditions $s_{0}<\sigma_{0}(\varphi)$ and $s_{1}>\sigma_{1}(\varphi)$, and let $\psi$ be the interpolation parameter defined by \eqref{f2.6}. Then
\begin{equation*}
H^{\varphi}(\Gamma):=
\bigl[H^{s_{0}}(\Gamma),H^{s_{1}}(\Gamma)\bigr]_{\psi}.
\end{equation*}
\end{itemize}
\end{theorem}
Here, as usual, $\mathcal{D}'(\Gamma)$ is the linear topological space of all distributions on $\Gamma$, and $(\chi_{j}f)\circ\pi_{j}$ stands for the representation of the distribution $\chi_{j}f\in\mathcal{D}'(\Gamma)$ in the local chart $\pi_{j}$. We naturally interpret $\mathcal{D}'(\Gamma)$ as the dual of $C^{\infty}(\Gamma)$ with respect to the extension of the inner product in $L_{2}(\Gamma)$. This extension is denoted by $(\cdot,\cdot)_{\Gamma}$ as well. Of course, the $C^{\infty}$-density $dx$ is now induced by the Riemann metric.
\begin{remark}\label{rem4.7}
It follows directly from Theorem~\ref{th4.3} that in the operational definition we may change $(1-\Delta_{\Gamma})^{1/2}$ for the more general PsDO $A$ considered in this subsection.
\end{remark}
Let us prove Theorems \ref{th4.3} and \ref{th4.6}.
\begin{proof}[Proof of Theorem $\ref{th4.3}$.]
Let the integer $k\gg1$ and the interpolation parameter $\psi$ be the same as those in the proof of Theorem~$\ref{th4.1}$. Then
\begin{equation}\label{f4.7}
H^{\varphi}_{A}(\Gamma)=
\bigl[H^{-k}_{A}(\Gamma),H^{k}_{A}(\Gamma)\bigr]_{\psi}
\end{equation}
due to Theorem~\ref{th2.5}. Here, $H^{\pm k}_{A}(\Gamma)=H^{\pm k}(\Gamma)$ up to equivalence of norms, which is demonstrated in the same way as that in the proof of Theorem~$\ref{th4.1}$. Therefore, \eqref{f4.7} implies that
\begin{equation}\label{f4.8}
H^{\varphi}_{A}(\Gamma)=
\bigl[H^{-k}(\Gamma),H^{k}(\Gamma)\bigr]_{\psi}
\end{equation}
up to equivalence of norms. Hence, we have the dense continuous embedding $H^{k}(\Gamma)\hookrightarrow H^{\varphi}_{A}(\Gamma)$, which entails the density of $C^{\infty}(\Gamma)$ in $H^{\varphi}_{A}(\Gamma)$.
Owing to \eqref{f4.8}, it remains to show that
\begin{equation}\label{f4.9}
\bigl[H^{-k}(\Gamma),H^{k}(\Gamma)\bigr]_{\psi}=H^{\varphi}(\Gamma)
\end{equation}
up to equivalence of norms. We will deduce this formula from \eqref{f4.4}
with the help of certain operators of flattening and sewing of the manifold $\Gamma$.
Let us define the flattening operator by the formula
\begin{equation}\label{f4.10}
T:f\mapsto ((\chi_1 f)\circ\pi_1,\ldots,
(\chi_{\varkappa}f)\circ\pi_{\varkappa})
\;\;\mbox{for every}\;\;f\in C^{\infty}(\Gamma).
\end{equation}
The mapping \eqref{f4.10} extends by continuity to isometric linear operators
\begin{equation}\label{f4.11}
T:H^{\varphi}(\Gamma)\rightarrow (H^{\varphi}(\mathbb{R}^n))^{\varkappa}
\end{equation}
and
\begin{equation}\label{f4.12}
T:H^{\mp k}(\Gamma)\rightarrow (H^{\mp k}(\mathbb{R}^n))^{\varkappa}.
\end{equation}
Since $\psi$ is an interpolation parameter, it follows from the boundedness of the operators \eqref{f4.12} that a restriction of the first of the operators \eqref{f4.12} acts continuously
\begin{equation}\label{f4.13}
T:\bigl[H^{-k}(\Gamma),H^{k}(\Gamma)\bigr]_{\psi}\to
\bigl[(H^{-k}(\mathbb{R}^n))^{\varkappa},
(H^{k}(\mathbb{R}^n))^{\varkappa}\bigr]_{\psi}.
\end{equation}
Here, the target space equals $(H^{\varphi}(\mathbb{R}^n))^{\varkappa}$ due to \eqref{f4.4} and the definition of the interpolation with the parameter $\psi$. Thus, the operator \eqref{f4.13} acts continuously
\begin{equation}\label{f4.14}
T:\bigl[H^{-k}(\Gamma),H^{k}(\Gamma)\bigr]_{\psi}\to
\bigl(H^{\varphi}(\mathbb{R}^n)\bigr)^{\varkappa}.
\end{equation}
We define the sewing operator by the formula
\begin{equation}\label{f4.15}
\begin{gathered}
K:\mathbf{w}\mapsto\sum_{j=1}^{\varkappa}
\Theta_j\bigl((\eta_j w_j)\circ\pi_j^{-1}\bigr)\\
\mbox{for every}\quad\mathbf{w}:=(w_{1},\ldots, w_{\varkappa})\in \bigl(C^{\infty}_{0}(\mathbb{R}^n)\bigr)^{\varkappa}.
\end{gathered}
\end{equation}
Here, for each $j\in\{1,\ldots,\varkappa\}$, the function $\eta_j \in C^\infty_0(\mathbb{R}^n)$ is chosen such that $\eta_j=\nobreak1$ in a neighbourhood of $\pi^{-1}_j(\mathrm{supp}\,\chi_j)$.
Besides, for every function $\omega:\Gamma_{j}\to\mathbb{C}$, we put $(\Theta_j\omega)(x):=\omega(x)$ whenever $x\in\Gamma_{j}$ and put
$(\Theta_j\omega)(x):=0$ whenever $x\in\Gamma\setminus\Gamma_{j}$. Thus,
$K\mathbf{w}\in C^{\infty}(\Gamma)$ for every $\mathbf{w}\in (C^{\infty}_{0}(\mathbb{R}^n))^{\varkappa}$.
The mapping $K$ is left inverse to the flattening operator \eqref{f4.10}. Indeed, given $f\in C^{\infty}(\Gamma)$, we have the following equalities:
\begin{align*}
KTf=&\sum_{j=1}^\varkappa\Theta_j\Bigl(\bigl(\eta_j ((\chi_j f)\circ\pi_j)\bigr)\circ\pi_j^{-1}\Bigr)\\
=&\sum_{j=1}^\varkappa
\Theta_j\bigl((\eta_j\circ\pi_j^{-1})(\chi_j f)\bigr)
=\sum_{j=1}^\varkappa \Theta_j (\chi_j f) =
\sum_{j=1}^\varkappa \chi_j f = f.
\end{align*}
Thus,
\begin{equation}\label{f4.16}
KTf=f\quad\mbox{for every}\quad f\in C^{\infty}(\Gamma).
\end{equation}
There exists a number $c>0$ such that
\begin{equation}\label{f4.17}
\|K\mathbf{w}\|_{\varphi,\Gamma}^{2}\leq c \sum_{l=1}^{\varkappa}\|w_l\|_{\varphi,\mathbb{R}^n}^{2}
\quad\mbox{whenever}\quad\mathbf{w}\in \bigl(C^{\infty}_{0}(\mathbb{R}^n)\bigr)^{\varkappa}.
\end{equation}
Indeed,
\begin{align*}
\| K\mathbf{w} \|_{\varphi,\Gamma}^{2}&=
\sum_{j=1}^{\varkappa}\|(\chi_jK\mathbf{w})\circ\pi_j\|_
{\varphi,\mathbb{R}^n}^2 \\
&= \sum_{j=1}^{\varkappa}\,\Bigl\|\sum_{l=1}^{\varkappa} \bigl(\chi_j\Theta_l((\eta_lw_l)\circ\pi_l^{-1})\bigr)
\circ\pi_j\Bigr\|_{\varphi,\mathbb{R}^n}^2\\
&= \sum_{j=1}^{\varkappa}\,\Bigl\|\sum_{l=1}^{\varkappa} (\eta_{l,j} w_l)\circ\beta_{l,j} \Bigr\|_{\varphi,\mathbb{R}^n}^2\leq
c \sum_{l=1}^{\varkappa}\|w_l\|_{\varphi,\mathbb{R}^n}^2.
\end{align*}
Here, $\eta_{l,j}:=(\chi_j\circ\pi_l)\eta_l\in C^{\infty}_{0}(\mathbb{R}^n)$,
and $\beta_{l,j}:\mathbb{R}^n\rightarrow\mathbb{R}^n$ is a $C^\infty$-diffeomorphism such that $\beta_{l,j}=\pi_l^{-1}\circ\pi_j$ in a neighbourhood of $\mathrm{supp}\,\eta_{l,j}$ and that $\beta_{l,j}(t)= t$ whenever $|t|$ is sufficiently large. The last inequality is a consequence of the fact that the operator of the multiplication by a function from $C^{\infty}_{0}(\mathbb{R}^n)$ and the operator $v\mapsto v\circ\beta_{l,j}$ of change of variables are bounded on the space $H^\varphi(\mathbb{R}^n)$. These properties of $H^\varphi(\mathbb{R}^n)$ follow by \eqref{f4.4} from their known analogs for the Sobolev spaces $H^{\mp k}(\mathbb{R}^n)$.
According to \eqref{f4.17}, the mapping \eqref{f4.15} extends by continuity to a bounded linear operator
\begin{equation}\label{f4.18}
K:(H^{\varphi}(\mathbb{R}^n))^{\varkappa}\rightarrow H^{\varphi}(\Gamma)
\end{equation}
and, specifically, to bounded linear operators
\begin{equation}\label{f4.19}
K:(H^{\mp k}(\mathbb{R}^n))^{\varkappa}\rightarrow H^{\mp k}(\Gamma).
\end{equation}
Hence, a restriction of the first operator \eqref{f4.19} acts continuously
\begin{equation}\label{f4.19b}
K:\bigl(H^{\varphi}(\mathbb{R}^n)\bigr)^{\varkappa}=
\bigl[(H^{-k}(\mathbb{R}^n))^{\varkappa},
(H^{k}(\mathbb{R}^n))^{\varkappa}\bigr]_{\psi}\to
\bigl[H^{-k}(\Gamma),H^{k}(\Gamma)\bigr]_{\psi}
\end{equation}
in view of \eqref{f4.4}.
According to \eqref{f4.11} and \eqref{f4.19b}, we have the bounded operator
\begin{equation*}
KT:H^{\varphi}(\Gamma)\to[H^{-k}(\Gamma),H^{k}(\Gamma)]_{\psi}.
\end{equation*}
Besides, owing to \eqref{f4.14} and \eqref{f4.18}, we get the bounded operator
\begin{equation*}
KT:[H^{-k}(\Gamma),H^{k}(\Gamma)]_{\psi}\to H^{\varphi}(\Gamma).
\end{equation*}
These operators are identical mappings in view of \eqref{f4.16} and the density of $C^{\infty}(\Gamma)$ in their target spaces. Thus, the required equality \eqref{f4.9} holds true up to equivalence of norms.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{th4.6}$.]
Let us prove that the initial definition of $H^{\varphi}(\Gamma)$ as the completion of $C^{\infty}(\Gamma)$ with respect to the inner product \eqref{f4.5} is equivalent to each of definitions (i)\,--\,(iii). The initial definition is tantamount to (i) due to Theorem~\ref{th4.3} in the $A=(1-\Delta_{\Gamma})^{1/2}$ case. Hence, the initial definition is equivalent to (iii) in view of Theorem~\ref{th2.5}.
To prove the equivalence of this definition and (ii), it suffices to show that $C^{\infty}(\Gamma)$ is dense in the space defined by (ii). We arbitrarily choose a distribution $f\in\mathcal{D}'(\Gamma)$ such that $(\chi_{j}f)\circ\pi_{j}\in
H^{\varphi}(\mathbb{R}^{n})$ for every $j\in\{1,\ldots,\varkappa\}$. Given such $j$, we take a sequence $(w^{(r)}_{j})_{r=1}^{\infty}\subset C^{\infty}_{0}(\mathbb{R}^{n})$ such that $w^{(r)}_{j}\to(\chi_{j}f)\circ\pi_{j}$ in $H^{\varphi}(\mathbb{R}^{n})$ as $r\to\infty$. Let $T$ and $K$ be the flattening and sewing mappings used in the proof of Theorem~\ref{th4.3}. These mappings are well defined respectively on $\mathcal{D}'(\Gamma)$ and $(\mathcal{S}'(\mathbb{R}^{n}))^{\varkappa}$, with the formulas $KTf=f$ and \eqref{f4.17} being valid whenever $f\in\mathcal{D}'(\Gamma)$ and $\mathbf{w}\in(H^{\varphi}(\mathbb{R}^{n}))^{\varkappa}$. Therefore, putting $\mathbf{w}^{(r)}:=(w^{(r)}_{1},\ldots,w^{(r)}_{\varkappa})$,
we conclude that $K\mathbf{w}^{(r)}\in C^{\infty}(\Gamma)$ and that
\begin{align*}
\|K\mathbf{w}^{(r)}-f\|_{\varphi,\Gamma}^{2}&=
\|K(\mathbf{w}^{(r)}-Tf)\|_{\varphi,\Gamma}^{2}\\
&\leq c\sum_{l=1}^{\varkappa}
\|w^{(r)}_{l}-(\chi_{l}f)\circ\pi_{l}\|_{\varphi,\mathbb{R}^n}^{2}
\to0\quad\mbox{as}\quad r\to\infty.
\end{align*}
Thus, $C^{\infty}(\Gamma)$ is dense in the space defined by (ii), and the initial definition is then equivalent to (ii).
\end{proof}
At the end of this subsection, we give a description of the space $H^{\varphi}(\Gamma)$ in terms of sequences induced by the spectral decomposition of the self-adjoint operator $A$. Since this operator is positive definite and since $\mathrm{Dom}\,A=H^{1}(\Gamma)$, its inverse $A^{-1}$ is a compact self-adjoint operator on $L_{2}(\Gamma)$ (recall that $H^{1}(\Gamma)$ is compactly embedded in $L_{2}(\Gamma)$). Hence, the Hilbert space $L_{2}(\Gamma)$ has an orthonormal basis $\mathcal{E}:=(e_{j})_{j=1}^{\infty}$ formed by eigenvectors of $A$.
Let $\lambda_{j}\geq1$ be the corresponding eigenvalue of $A$, i.e. $Ae_j=\lambda_{j}e_j$. We may and will enumerate the eigenvectors $e_{j}$ so that $\lambda_{j}\leq\lambda_{j+1}$ whenever $j\geq1$, with $\lambda_{j}\to\infty$ as $j\to\infty$. Since $\mathcal{A}$ is elliptic on $\Gamma$, each $e_{j}\in C^{\infty}(\Gamma)$. We suppose that
the PsDO $\mathcal{A}$ is classical (i.e. polyhomogeneous); see, e.g., \cite[Definitions 1.5.1 and 2.1.3]{Agranovich94}. Then
\begin{equation}\label{f5.20}
\lambda_{j}\sim\widetilde{c}\,j^{1/n}\quad\mbox{as}\quad j\to\infty,
\end{equation}
where $\widetilde{c}$ is a positive number that does not depend on $j$ \cite[Section 6.1~b]{Agranovich94}. Every distribution $f\in\mathcal{D}'(\Gamma)$ expands into the series
\begin{equation}\label{f5.21}
f=\sum_{j=1}^{\infty}\varkappa_{j}(f)e_j\quad\mbox{in}\;\;\mathcal{D}'(\Gamma);
\end{equation}
here, $\varkappa_{j}(f):=(f,e_j)_{\Gamma}$ is the value of the distribution $f$ at the test function $e_{j}$ \cite[Section 6.1~a]{Agranovich94}.
\begin{theorem}\label{th5.8}
Let $\varphi\in\mathrm{OR}$. Then the space $H^{\varphi}(\Gamma)$ consists of all distributions $f\in\mathcal{D}'(\Gamma)$ such that
\begin{equation}\label{f5.22}
\|f\|_{\varphi,\Gamma,\mathcal{E}}^{2}:=
\sum_{j=1}^{\infty}\varphi^{2}(j^{1/n})|\varkappa_{j}(f)|^{2}<\infty,
\end{equation}
and the norm in $H^{\varphi}(\Gamma)$ is equivalent to the (Hilbert) norm $\|\cdot\|_{\varphi,\Gamma,\mathcal{E}}$. If $f\in H^{\varphi}(\Gamma)$, then the series \eqref{f5.21} converges in $H^{\varphi}(\Gamma)$.
\end{theorem}
\begin{proof}
It follows from \eqref{f5.20} and $\varphi\in\mathrm{OR}$ that there exists a number $c\geq1$ such that
\begin{equation}\label{f5.23}
c^{-1}\varphi(\lambda_{j})\leq\varphi(j^{1/n})\leq c\,\varphi(\lambda_{j})\quad\mbox{whenever}\quad 1\leq j\in\mathbb{Z}.
\end{equation}
Since $\mathrm{Spec}\,A=\{\lambda_{j}:j\geq1\}$, we have
\begin{equation*}
\|\varphi(A)f\|_{\Gamma}^{2}=
\sum_{j=1}^{\infty}\varphi^{2}(\lambda_{j})|\varkappa_{j}(f)|^{2}<\infty
\end{equation*}
for every $f\in\mathrm{Dom}\,\varphi(A)$. Hence, the norm $\|\cdot\|_{\varphi,\Gamma,\mathcal{E}}$ is equivalent to the norm in $H^{\varphi}(\Gamma)$ on $\mathrm{Dom}\,\varphi(A)\supset C^{\infty}(\Gamma)$ (see Theorem~\ref{th4.3}).
If $f\in H^{\varphi}(\Gamma)$, we consider a sequence $(f_k)_{k=1}^{\infty}\subset C^{\infty}(\Gamma)$ such that $f_{k}\rightarrow f$ in $H^{\varphi}(\Gamma)$ as $k\rightarrow\infty$. There exist positive numbers $c_1$ and $c_2$ such that
\begin{equation*}
\sum_{j=1}^{\infty}\varphi^{2}(j^{1/n})|\varkappa_{j}(f_k)|^{2}=
\|f_k\|_{\varphi,\Gamma,\mathcal{E}}^{2}\leq
c_1\|f_k\|_{\varphi,\Gamma}^{2}\leq c_2<\infty
\end{equation*}
for every integer $k\geq1$. Passing here to the limit as $k\to\infty$ and taking $\varkappa_{j}(f_k)\to\varkappa_{j}(f)$ into account, we conclude by Fatou's lemma that every distribution $f\in H^{\varphi}(\Gamma)$ satisfies \eqref{f5.22}.
Assume now that a distribution $f\in\mathcal{D}'(\Gamma)$ satisfies \eqref{f5.22}, and prove that $f\in H^{\varphi}(\Gamma)$. Owing to our assumption and \eqref{f5.23}, we have the convergent orthogonal series
\begin{equation}\label{f5.26}
\sum_{j=1}^{\infty}\varphi(\lambda_{j})\varkappa_{j}(f)e_{j}=:h
\quad\mbox{in}\quad L_{2}(\Gamma).
\end{equation}
Consider its partial sum
\begin{equation*}
h_k:=\sum_{j=1}^{k}\varphi(\lambda_{j})\varkappa_{j}(f)e_{j}
\end{equation*}
for each $k$, and note that
\begin{equation*}
\varphi^{-1}(A)h_k=\sum_{j=1}^{k}\varkappa_{j}(f)e_{j}\in C^{\infty}(\Gamma).
\end{equation*}
Since $h_k\to h$ in $L_{2}(\Gamma)$ as $k\to\infty$, the sequence $(\varphi^{-1}(A)h_k)_{k=1}^{\infty}$ is Cauchy in $H^{\varphi}_{A}(\Gamma)$. Denoting its limit by $g$, we get
\begin{equation}\label{f5.27}
g=\lim_{k\to\infty}\varphi^{-1}(A)h_k=
\sum_{j=1}^{\infty}\varkappa_{j}(f)e_{j}\quad\mbox{in}\quad H^{\varphi}(\Gamma).
\end{equation}
Hence, $f=g\in H^{\varphi}(\Gamma)$ in view of \eqref{f5.21}.
Thus, a distribution $f\in\mathcal{D}'(\Gamma)$ belongs to $H^{\varphi}(\Gamma)$ if and only if \eqref{f5.22} is satisfied.
Besides, given $f\in H^{\varphi}(\Gamma)$, we have
\begin{align*}
\|f\|_{\varphi,\Gamma}^{2}&=
\lim_{k\to\infty}\|\varphi^{-1}(A)h_k\|_{\varphi,\Gamma}^{2}\asymp
\lim_{k\to\infty}\|h_k\|_{\Gamma}^{2}=\|h\|_{\Gamma}^{2}\\
&=\sum_{j=1}^{\infty}\varphi^{2}(\lambda_{j})|\varkappa_{j}(f)|^{2}\asymp
\|f\|_{\varphi,\Gamma,\mathcal{E}}^{2}
\end{align*}
by \eqref{f5.23}, \eqref{f5.26}, and \eqref{f5.27} where $g=f$ (as usual, the symbol $\asymp$ means equivalence of norms). The last assertion of the theorem is due to~\eqref{f5.27}.
\end{proof}
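For instance, if $\varphi(t):=t^{s}$ for some $s\in\mathbb{R}$, Theorem~\ref{th5.8} yields the classical sequence description of the Sobolev space $H^{s}(\Gamma)$: a distribution $f\in\mathcal{D}'(\Gamma)$ belongs to $H^{s}(\Gamma)$ if and only if
\begin{equation*}
\sum_{j=1}^{\infty}j^{2s/n}|\varkappa_{j}(f)|^{2}<\infty,
\end{equation*}
and this sum defines an equivalent Hilbert norm on $H^{s}(\Gamma)$.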
\begin{remark}\label{rem4.8}
Let $0<m\in\mathbb{R}$. Analogs of Theorems \ref{th4.1} and \ref{th4.3} hold true for PsDOs of order~$m$. Namely, suppose that a PsDO $\mathcal{A}$ belongs to $\Psi^{m}(\mathbb{R}^{n})$ or $\Psi^{m}(\Gamma)$ and satisfies conditions (b) and (c). Let $\varphi\in\mathrm{OR}$, and put $\varphi_{m}(t):=\varphi(t^{m})$ whenever $t\geq1$ (evidently, $\varphi_{m}\in\mathrm{OR}$). Then the equality of spaces
\begin{equation}\label{f4.20}
H^{\varphi}_{A}(\mathbb{R}^{n}\;\mbox{or}\;\Gamma)=
H^{\varphi_{m}}(\mathbb{R}^{n}\;\mbox{or}\;\Gamma)
\end{equation}
holds in the sense that these spaces coincide as completions of $C^{\infty}_{0}(\mathbb{R}^{n})$ or $C^{\infty}(\Gamma)$ with respect to equivalent norms. This implies that Corollaries \ref{cor4.2}, \ref{cor4.4}, and \ref{cor4.5} remain true in this (more general) case. The proof of \eqref{f4.20} is very similar to the proofs of Theorems \ref{th4.1} and \ref{th4.3}. We only observe that $H^{k}_{A}(V)=\nobreak H^{km}(V)$ for every $k\in\mathbb{Z}$ whenever $V=\mathbb{R}^{n}$ or $V=\Gamma$ because $\mathrm{ord}\,\mathcal{A}=m$, which gives
\begin{equation}\label{f4.21}
H^{\varphi}_{A}(V)=
\bigl[H^{-k}_{A}(V),H^{k}_{A}(V)\bigr]_{\psi}=
\bigl[H^{-km}(V),H^{km}(V)\bigr]_{\psi}=
H^{\varphi_{m}}(V)
\end{equation}
with equivalence of norms; here the integer $k>0$ and the interpolation parameter $\psi$ are the same as those in the proof of Theorem~\ref{th4.1}. The first equality in \eqref{f4.21} is due to Theorem~\ref{th2.5}, whereas the third is a direct consequence of this theorem and Theorems \ref{th4.1} and \ref{th4.3}. Note that if the PsDO $\mathcal{A}\in\Psi^{m}(\Gamma)$ is classical, then $A^{1/m}$ is the closure (in $L_{2}(\Gamma)$) of some classical PsDO $\mathcal{A}_{1}\in\Psi^{1}(\Gamma)$, as was established by Seeley \cite{Seeley67}. Hence,
$$
H^{\varphi}_{A}(\Gamma)=H^{\varphi_{m}}_{A^{1/m}}(\Gamma)=
H^{\varphi_{m}}(\Gamma)
$$
immediately due to Theorem~\ref{th4.3}. We end this remark by noting that Theorem~\ref{th5.8} remains true if the order of the classical PsDO $\mathcal{A}$ is~$m$. This follows from the fact that every eigenvector of $A$ is also an eigenvector of $A^{1/m}$.
\end{remark}
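As a simple illustration of \eqref{f4.20}, consider the operator $1-\Delta_{\Gamma}$, an elliptic PsDO of order $m=2$ (which satisfies the assumptions of this remark). In this case $\varphi_{2}(t)=\varphi(t^{2})$, and \eqref{f4.20} reads
\begin{equation*}
H^{\varphi}_{1-\Delta_{\Gamma}}(\Gamma)=H^{\varphi_{2}}(\Gamma);
\end{equation*}
in particular, the choice $\varphi(t):=t^{s/2}$ gives $H^{\varphi}_{1-\Delta_{\Gamma}}(\Gamma)=H^{s}(\Gamma)$.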
\section{Spectral expansions in spaces with two norms}\label{sec6a}
We will obtain some abstract results on the convergence of spectral expansions in a Hilbert space endowed with a second norm. In the next section, we will apply these results (together with results of Section~\ref{sec4}) to the investigation of the convergence of spectral expansions induced by normal elliptic operators.
\subsection{}\label{sec6.1a}
As in Section~\ref{sec2}, $H$ is a separable infinite-dimensional complex Hilbert space. Let $L$ be a normal (in particular, self-adjoint) unbounded linear operator in $H$. Let $E$ be the resolution of the identity (i.e., the spectral measure) generated by $L$; we consider $E$ as an operator-valued function $E=E({\delta})$ of $\delta\in\mathcal{B}(\mathbb{C})$. Here, as usual, $\mathcal{B}(\mathbb{C})$ denotes the class of all Borel subsets of the complex plane~$\mathbb{C}$. Then
\begin{equation}\label{f6.1}
f=\int\limits_{\mathbb{C}}dEf
\end{equation}
for every $f\in H$. Besides, let $N$ be a normed space. (We use the standard notation $\|\cdot\|_{N}$ for the norm in $N$. As above, $\|\cdot\|$ and $(\cdot,\cdot)$ denote the norm and inner product in~$H$.) Suppose that $N$ and $H$ are embedded algebraically in a certain linear space. We find sufficient conditions for the convergence of the spectral expansion \eqref{f6.1} in the space~$N$. Put $\widetilde{B}_{\lambda}:=\{z\in\mathbb{C}:|z|\leq\lambda\}$ for every number $\lambda>0$.
\begin{definition}\label{def6.1}
Let $f\in H$. We say that the spectral expansion \eqref{f6.1} converges unconditionally in the space $N$ at the vector $f$ if $E(\delta)f\in N$ whenever $\delta\in\mathcal{B}(\mathbb{C})$ and if for an arbitrary number $\varepsilon>0$ there exists a bounded set $\gamma=\gamma(\varepsilon)\in\mathcal{B}(\mathbb{C})$ such that
\begin{equation}\label{f6.2}
\|f-E(\delta)f\|_{N}<\varepsilon\quad\mbox{whenever}\quad \gamma\subseteq\delta\in\mathcal{B}(\mathbb{C}).
\end{equation}
\end{definition}
Note that
\begin{equation*}
E(\delta)f=\int\limits_{\delta}dEf
\end{equation*}
for all $f\in H$ and $\delta\in\mathcal{B}(\mathbb{C})$. If the spectrum of $L$ is countable, say $\mathrm{Spec}\,L=\{z_{j}:1\leq j\in\mathbb{Z}\}$ where $j\neq k\Rightarrow z_{j}\neq z_{k}$, then \eqref{f6.1} becomes
\begin{equation}\label{f6.3}
f=\sum_{j=1}^{\infty}E(\{z_{j}\})f.
\end{equation}
If moreover $|z_{j}|\to\infty$ as $j\to\infty$, then Definition~\ref{def6.1} means that the series \eqref{f6.3} converges to $f$ in $N$ under an arbitrary permutation of its terms.
Let $I$ stand for the identity operator on $H$, and let $\|\cdot\|_{H\to N}$ and $\|\cdot\|_{H\to H}$ denote the norms of bounded linear operators on the pair of spaces $H$ and $N$ and on the space $H$, respectively.
\begin{theorem}\label{th6.2}
Let $R$ and $S$ be bounded linear operators on the whole of $H$ that are commutative with $L$ and such that
\begin{equation}\label{f6.4}
\mbox{$R$ is a bounded operator from $H$ to $N$.}
\end{equation}
Then the spectral expansion \eqref{f6.1} converges unconditionally in the space $N$ at every vector $f\in RS(H)$. Moreover, the degree of this convergence admits the estimate
\begin{equation}\label{f6.5}
\|f-E(\delta)f\|_{N}\leq\|R\|_{H\to N}\cdot\|g\|\cdot \|S(I-E(\delta))\|_{H\to H}\cdot r_{g}(\delta)
\end{equation}
for every $\delta\in\mathcal{B}(\mathbb{C})$ and with some decreasing function $r_{g}(\delta)\in[0,1]$ of $\delta\in\mathcal{B}(\mathbb{C})$ such that $r_{g}(\widetilde{B}_{\lambda})\to0$ as $\lambda\to\infty$. Here, $g\in H$ is an arbitrary vector satisfying $f=RSg$, and the function $r_{g}(\delta)$ does not depend on $S$ and~$R$.
\end{theorem}
Note that if $T$ is a bounded linear operator on $H$ and $M$ is an unbounded linear operator in $H$, then the phrase ``$T$ is commutative with $M$'' means that $TMf=MTf$ for every vector $f\in(\mathrm{Dom}\,M)\cap\mathrm{Dom}(MT)$ (see, e.g., \cite[Chapter~IV, \S~3, Section~4]{FunctionalAnalysis72}).
\begin{proof}[Proof of Theorem $\ref{th6.2}$]
Choose a vector $f\in RS(H)\subseteq N\cap H$ arbitrarily. If $f=0$, the conclusion of this theorem will be trivial; we thus suppose that $f\neq0$. Consider a nonzero vector $g\in H$ such that $f=RSg$. Choose a set $\delta\in\mathcal{B}(\mathbb{C})$ arbitrarily. Since the operators $R$ and $S$ are bounded on $H$ and commutative with $L$, they are also commutative with $E(\delta)$. Therefore,
\begin{equation*}
E(\delta)f=E(\delta)(RS)g=(RS)E(\delta)g\in N
\end{equation*}
due to \eqref{f6.4}. Hence,
\begin{equation}\label{f6.8}
\begin{aligned}
\|f-E(\delta)f\|_{N}&
=\|RS(I-E(\delta))g\|_{N}=\|RS(I-E(\delta))^{2}g\|_{N}\\
&\leq\|R\|_{H\to N}\cdot\|S(I-E(\delta))\|_{H\to H}\cdot \|(I-E(\delta))g\|.
\end{aligned}
\end{equation}
Put
\begin{equation}\label{f6.10}
r_{g}(\delta):=\|(I-E(\delta))g\|\cdot\|g\|^{-1};
\end{equation}
then \eqref{f6.8} yields the required estimate \eqref{f6.5}. It follows plainly from \eqref{f6.10} that $r_{g}(\delta)$, viewed as a function of $\delta\in\mathcal{B}(\mathbb{C})$, possesses the required properties.
\end{proof}
\begin{remark}\label{rem6.3}
Let $R$ be a bounded operator on $H$. If the norms in $N$ and $H$ are compatible, condition \eqref{f6.4} is equivalent to the inclusion $R(H)\subseteq N$. Indeed, assume that these norms are compatible and that $R(H)\subseteq N$, and show that $R$ satisfies~\eqref{f6.4}. According to the closed graph theorem, the operator $R:H\to\widetilde{N}$ is bounded if and only if it is closed; here, $\widetilde{N}$ is the completion of the normed space~$N$. Therefore, it is enough to prove that this operator is closable. Assume that a sequence $(f_{k})_{k=1}^{\infty}\subset H$ satisfies the following two conditions: $f_{k}\to0$ in $H$ and $Rf_{k}\to h$ in $\widetilde{N}$ for certain $h\in\widetilde{N}$, as $k\to\infty$. Then $Rf_{k}\to0$ in $H$ because $R$ is bounded on $H$. Hence, $h=0$ as the norms in $N$ and $H$ are compatible. Thus, the operator $R:H\to\widetilde{N}$ is closable.
\end{remark}
\begin{remark}\label{rem6.4}
Borel measurable bounded functions of $L$ are important examples of the bounded operators on $H$ commuting with~$L$. If $S=\eta(L)$ for a bounded Borel measurable function $\eta:\mathrm{Spec}\,L\to\mathbb{C}$, the third factor on the right of \eqref{f6.5} will admit the estimate
\begin{equation}
\begin{aligned}
\|S(I-E(\delta))\|_{H\to H}&\leq
\sup\bigl\{|\eta(z)|(1-\chi_{\delta}(z)):z\in\mathrm{Spec}\,L\bigr\}\\
&\leq\sup\bigl\{|\eta(z)|:z\in(\mathrm{Spec}\,L)\setminus\delta\bigr\}.
\end{aligned}
\end{equation}
(As usual, $\chi_{\delta}$ stands for the characteristic function of the set $\delta$.) Hence, if $\eta(z)\to0$ as $|z|\to\infty$, then
\begin{equation*}
\lim_{\lambda\to\infty}\|S(I-E(\widetilde{B}_{\lambda}))\|_{H\to H}=0
\end{equation*}
(as well as the fourth factor $r_{g}(\delta)$ if $\delta=\widetilde{B}_{\lambda}$).
\end{remark}
\subsection{}\label{sec6.1b}
Assume now that the normal operator $L$ has pure point spectrum, i.e. the Hilbert space $H$ has an orthonormal basis $(e_{j})_{j=1}^{\infty}$ formed by some eigenvectors $e_{j}$ of $L$. Unlike the previous subsection, we now allow $L$ to be either unbounded in $H$ or bounded on $H$. Thus,
\begin{equation}\label{f6.11}
f=\sum_{j=1}^{\infty}(f,e_j)e_j
\end{equation}
in $H$ for every $f\in H$. Let $\lambda_{j}$ denote the eigenvalue of $L$ such that $Le_j=\lambda_{j}e_j$. Note that the expansions \eqref{f6.1} and \eqref{f6.3} become \eqref{f6.11} provided that all the proper subspaces of $L$ are one-dimensional. Let $P_k$ denote the orthoprojector on the linear span of the eigenvectors $e_1,\ldots,e_k$.
\begin{theorem}\label{th6.5}
Let $\omega,\eta:\mathrm{Spec}\,L\to\mathbb{C}\setminus\{0\}$ be Borel measurable bounded functions, and consider the bounded linear operators $R:=\omega(L)$ and $S:=\eta(L)$ on $H$. Assume that $R$ satisfies \eqref{f6.4}. Then the series \eqref{f6.11} converges unconditionally (i.e. under each permutation of its terms) in the space $N$ at every vector $f\in RS(H)$. Moreover, the degree of this convergence admits the estimate
\begin{equation}\label{f6.12}
\biggl\|f-\sum_{j=1}^{k}(f,e_j)e_j\biggr\|_{N}\leq
\|R\|_{H\to N}\cdot\|g\|\cdot \|S(I-P_k)\|_{H\to H}\cdot r_{g,k}
\end{equation}
for every integer $k\geq1$ and with some decreasing sequence $(r_{g,k})_{k=1}^{\infty}\subset[0,1]$ that tends to zero and does not depend on $S$ and~$R$. Here, $g:=(RS)^{-1}f\in H$.
\end{theorem}
\begin{proof}
Since $RSe_j=(\omega\eta)(\lambda_j)e_j$ for every integer $j\geq1$ and since $(\omega\eta)(t)\neq0$ for every $t\in\mathrm{Spec}\,L$, we conclude that each $e_j\in N$ in view of hypothesis \eqref{f6.4}. Thus, the left-hand side of \eqref{f6.12} makes sense. Besides, the operator $RS=(\omega\eta)(L)$ is algebraically invertible; hence, the vector $g:=(RS)^{-1}f\in H$ is well defined for every $f\in RS(H)$. We suppose that $f\neq0$ because the conclusion of this theorem is trivial in the $f=0$ case. Choosing an integer $k\geq1$ arbitrarily, we get
\begin{align*}
(RS)P_{k}g&=RS\sum_{j=1}^{k}(g,e_j)e_j=\sum_{j=1}^{k}(g,e_j)RSe_j=
\sum_{j=1}^{k}(g,e_j)(\omega\eta)(\lambda_j)e_j\\
&=P_{k}(RS)\sum_{j=1}^{\infty}(g,e_j)e_j=P_{k}(RS)g.
\end{align*}
Hence,
\begin{equation}\label{f6.13}
\begin{aligned}
\biggl\|f-\sum_{j=1}^{k}(f,e_j)e_j\biggr\|_{N}&=\|f-P_{k}f\|_{N}=
\|RSg-P_{k}(RS)g\|_{N}\\
&=\|RS(I-P_{k})g\|_{N}=\|RS(I-P_{k})^{2}g\|_{N}\\
&\leq\|R\|_{H\to N}\cdot\|S(I-P_{k})\|_{H\to H}\cdot \|(I-P_{k})g\|.
\end{aligned}
\end{equation}
Putting
\begin{equation}\label{f6.14}
r_{g,k}:=\|(I-P_k)g\|\cdot\|g\|^{-1},
\end{equation}
we see that \eqref{f6.13} yields the required estimate \eqref{f6.12}. It follows plainly from \eqref{f6.14} that the sequence $(r_{g,k})_{k=1}^{\infty}$ possesses the required properties. Hence, the series \eqref{f6.11} converges in $N$. This convergence is unconditional because the hypotheses of the theorem are invariant with respect to permutations of terms of this series.
\end{proof}
\begin{remark}\label{rem6.6}
The third factor on the right of \eqref{f6.12} admits the estimate
\begin{equation}\label{f6.15}
\|S(I-P_k)\|_{H\to H}\leq\sup_{j\geq k+1}|\eta(\lambda_j)|
\end{equation}
for each integer $k\geq1$. Indeed, since
\begin{equation*}
S(I-P_k)f=\eta(L)\sum_{j=k+1}^{\infty}(f,e_{j})e_{j}=
\sum_{j=k+1}^{\infty}(f,e_{j})\eta(\lambda_j)e_{j}
\end{equation*}
for every $f\in H$ (the convergence holds in $H$), we have
\begin{equation*}
\|S(I-P_k)f\|^{2}=\sum_{j=k+1}^{\infty}|(f,e_{j})\eta(\lambda_j)|^{2}\leq
\bigl(\sup_{j\geq k+1}|\eta(\lambda_j)|\bigr)^{2}\cdot\|f\|^{2},
\end{equation*}
which gives \eqref{f6.15}. Specifically, if $\eta(t)\to0$ as $|t|\to\infty$ and if $|\lambda_j|\to\infty$ as $j\to\infty$, then
\begin{equation*}
\lim_{k\to\infty}\|S(I-P_k)\|_{H\to H}=0
\end{equation*}
(as well as the fourth factor $r_{g,k}$).
\end{remark}
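For instance, if the eigenvalues are positive and enumerated so that $\lambda_{j}\leq\lambda_{j+1}$ and if $\eta(t):=t^{-\tau}$ for some $\tau>0$, then \eqref{f6.15} becomes
\begin{equation*}
\|S(I-P_k)\|_{H\to H}\leq\lambda_{k+1}^{-\tau},
\end{equation*}
and \eqref{f6.12} bounds the error of the $k$-term expansion by $\|R\|_{H\to N}\cdot\|g\|\cdot\lambda_{k+1}^{-\tau}\,r_{g,k}$.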
It is worthwhile to note that the hypotheses of Theorem~\ref{th6.5} do not depend on the choice of a basis of~$H$. They hence imply the unconditional convergence of the series \eqref{f6.11} in $N$ at every vector $f\in RS(H)$ for \emph{any} orthonormal basis of $H$ formed by eigenvectors of~$L$. Remark also that Theorem~\ref{th6.5} reinforces the conclusion of Theorem~\ref{th6.2} under the hypotheses of Theorem~\ref{th6.5}. Indeed, owing to Theorem~\ref{th6.2}, the series \eqref{f6.11} converges in $N$ at every $f\in RS(H)$ if its terms corresponding to equal eigenvalues are grouped together and if $|\lambda_{j}|\to\infty$ as $j\to\infty$.
Theorem~\ref{th6.5} contains M.~G.~Krein's theorem \cite{Krein47} according to which the series \eqref{f6.11} converges in $N$ at every $f\in L(H)$ if $L$ is a self-adjoint compact operator in $H$ obeying \eqref{f6.4}. The latter theorem generalizes (to abstract operators) the Hilbert\,--\,Schmidt theorem about the uniform decomposability of sourcewise representable functions with respect to eigenfunctions of a symmetric integral operator. If $L$ is a positive definite self-adjoint operator with discrete spectrum and if $R=L^{-\sigma}$ and $S=L^{-\tau}$ for certain $\sigma,\tau\geq0$ and if $R$ satisfies \eqref{f6.4}, Krasnosel'ski\u{\i} and Pustyl'nik \cite[Theorem~22.1]{KrasnoselskiiZabreikoPustylnikSobolevskii76} proved that the left-hand side of \eqref{f6.12} is $o(\lambda_{k}^{-\tau})$ as $k\to\infty$. This result follows from \eqref{f6.12} in view of \eqref{f6.15}.
\section{Applications to spectral expansions induced by elliptic operators}\label{sec6}
This section is devoted to applications of results of Sections \ref{sec4} and \ref{sec6a} to the investigation of the convergence (in the uniform metric) of spectral expansions induced by uniformly elliptic operators on $\mathbb{R}^{n}$ and by elliptic operators on a closed manifold $\Gamma\in C^{\infty}$. We find explicit criteria of the convergence of these expansions in the normed space $C^{q}$, with $q\geq0$, on the function class $H^{\varphi}$, with $\varphi\in\mathrm{OR}$, and evaluate the degree of this convergence. Besides, we consider applications of the spaces $H^{\varphi}(\Gamma)$ to the investigation of the almost everywhere convergence of the spectral expansions.
\subsection{}\label{sec6.2}
Let $1\leq n\in\mathbb{Z}$ and $0<m\in\mathbb{R}$. We suppose in this subsection that $L$ is a PsDO of class $\Psi^{m}(\mathbb{R}^{n})$ and that $L$ is uniformly elliptic on $\mathbb{R}^{n}$. We may and will consider $L$ as a closed unbounded operator in the Hilbert space $H:=L_{2}(\mathbb{R}^{n})$ with $\mathrm{Dom}\,L=H^{m}(\mathbb{R}^{n})$ (see \cite[Sections 2.3~d and 3.1~b]{Agranovich94}). We also suppose that $L$ is a normal operator in $L_{2}(\mathbb{R}^{n})$. Then $L$ generates a resolution of the identity $E=E({\delta})$, and the spectral expansion \eqref{f6.1} holds for every function $f\in L_{2}(\mathbb{R}^{n})$. Note that the spectrum of $L$ may be uncountable, and $L$ may have no eigenfunctions at all. Hence, the expansion \eqref{f6.1}
may not be represented in the form of the series \eqref{f6.3} or \eqref{f6.11}. For example, if $L=-\Delta$, then the spectrum of $L$ coincides with $[0,\infty)$ and is continuous.
\begin{definition}\label{def6.7}
Let a normed function space $N$ lie in $\mathcal{S}'(\mathbb{R}^{n})$. We say that the expansion \eqref{f6.1} (where $H=L_{2}(\mathbb{R}^{n})$) converges unconditionally in $N$ on a function class $\Upsilon$ if $\Upsilon\subset L_{2}(\mathbb{R}^{n})$ and if this expansion satisfies Definition~\ref{def6.1} for every $f\in\Upsilon$.
\end{definition}
We consider the important case where $N=C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ for an integer $q\geq0$ and use generalized Sobolev spaces $H^{\varphi}(\mathbb{R}^{n})$ as $\Upsilon$. Here, $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ denotes the Banach space of
$q$ times continuously differentiable functions $\nobreak{f:\mathbb{R}^{n}\to\mathbb{C}}$ whose partial derivatives $\partial^{\alpha}f$ are bounded on $\mathbb{R}^{n}$ whenever $|\alpha|\leq q$. As usual, $\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{Z}_{+}^{n}$ and $|\alpha|=\alpha_{1}+\cdots+\alpha_{n}$. This space is endowed with the norm
\begin{equation*}
\|f\|_{C,q,\mathbb{R}^{n}}:=\sum_{|\alpha|\leq q}\,
\sup\bigl\{|\partial^{\alpha}f(x)|:x\in\mathbb{R}^{n}\bigr\}.
\end{equation*}
\begin{theorem}\label{th6.8}
Let $0\leq q\in\mathbb{Z}$ and $\varphi\in\mathrm{OR}$. The spectral expansion \eqref{f6.1} converges unconditionally in the normed space $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ on the function class $H^{\varphi}(\mathbb{R}^{n})$ if and only if
\begin{equation}\label{f6.16}
\int\limits_{1}^{\infty}\frac{t^{2q+n-1}}{\varphi^2(t)}\,dt<\infty.
\end{equation}
\end{theorem}
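A simple illustration: for the power function $\varphi(t):=t^{s}$, condition \eqref{f6.16} means that
\begin{equation*}
\int\limits_{1}^{\infty}t^{2q+n-1-2s}\,dt<\infty
\quad\Longleftrightarrow\quad s>q+\frac{n}{2},
\end{equation*}
so the expansion converges unconditionally in $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ on the Sobolev class $H^{s}(\mathbb{R}^{n})$ if and only if $s>q+n/2$, in accordance with the Sobolev embedding theorem. The criterion is finer on the borderline: e.g., the function $\varphi(t):=t^{q+n/2}\log^{r}(1+t)$, with $r>0$, satisfies \eqref{f6.16} if and only if $r>1/2$.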
\begin{remark}\label{rem6.9}
If we replace the lower limit $1$ in \eqref{f6.16} with an arbitrary number $k>1$, we will obtain an equivalent condition on the function $\varphi\in\mathrm{OR}$. This is due to the fact that every function $\varphi\in\mathrm{OR}$ is bounded together with $1/\varphi$ on each compact interval $[1,k]$ where $k>1$. This follows from property \eqref{f2.3}, in which we put $t=1$.
\end{remark}
The next result allows us to estimate the degree of the convergence stipulated by Theorem~\ref{th6.8}.
\begin{theorem}\label{th6.10}
Let $0\leq q\in\mathbb{Z}$ and $\phi_{1},\phi_{2}\in\mathrm{OR}$. Suppose that $\phi_{1}(t)\to\infty$ as $t\to\infty$ and that
\begin{equation}\label{f6.17}
\int\limits_{1}^{\infty}\frac{t^{2q+n-1}}{\phi_{2}^{2}(t)}\,dt<\infty.
\end{equation}
Consider the function $\varphi:=\phi_{1}\phi_{2}$, which evidently belongs to $\mathrm{OR}$ and satisfies \eqref{f6.16}. Then the degree of the convergence of the spectral expansion \eqref{f6.1} in the normed space $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ on the class $H^{\varphi}(\mathbb{R}^{n})$ admits the estimate
\begin{equation}\label{f6.18}
\|f-E(\widetilde{B}_{\lambda})f\|_{C,q,\mathbb{R}^{n}}\leq
c\cdot\|f\|_{\varphi,\mathbb{R}^{n}}\cdot
\sup\bigl\{(\phi_{1}(t))^{-1}:t\geq\langle\lambda\rangle^{1/m}\bigr\}
\cdot\theta_{f}(\lambda)
\end{equation}
for every function $f\in H^{\varphi}(\mathbb{R}^{n})$ and each number $\lambda>0$. Here, $c$ is a certain positive number that does not depend on $f$ and $\lambda$, and $\theta_{f}(\lambda)$ is a decreasing function of $\lambda$ such that $\nobreak{0\leq\theta_{f}(\lambda)\leq1}$ whenever $\lambda>0$ and that $\theta_{f}(\lambda)\to0$ as $\lambda\to\infty$.
\end{theorem}
As to \eqref{f6.18}, recall that $\langle\lambda\rangle:=(1+|\lambda|^{2})^{1/2}$.
\begin{remark}\label{rem6.11}
Suppose that a function $\varphi\in\mathrm{OR}$ satisfies \eqref{f6.16}; then it may be represented in the form $\varphi=\phi_{1}\phi_{2}$ for some functions $\phi_{1},\phi_{2}\in\mathrm{OR}$ subject to the hypotheses of Theorem~\ref{th6.10}. Indeed, considering the function
\begin{equation*}
\eta(t):= \int\limits_t^\infty\frac{\tau^{2q+n-1}}{\varphi^2(\tau)}\,d\tau<\infty
\quad\mbox{of}\quad t\geq1
\end{equation*}
and choosing a number $\varepsilon\in(0,1/2)$, we put $\phi_{1}(t):=\eta^{-\varepsilon}(t)$ and $\phi_{2}(t):=\varphi(t)\eta^{\varepsilon}(t)$ whenever $t\geq1$. Then
$\phi_{1}(t)\to\infty$ as $t\to\infty$, and
\begin{equation*}
\int\limits_1^\infty\frac{t^{2q+n-1}}{\phi_2^2(t)}\,dt=
\int\limits_1^\infty\frac{t^{2q+n-1}}{\varphi^2(t)\eta^{2\varepsilon}(t)}
\,dt=-\int\limits_1^\infty\frac{d\eta(t)}{\eta^{2\varepsilon}(t)}=
\int\limits^{\eta(1)}_0\frac{d\eta}{\eta^{2\varepsilon}}<\infty.
\end{equation*}
To show that $\phi_{1},\phi_{2}\in\mathrm{OR}$, it suffices to prove the inclusion $\eta\in\mathrm{OR}$. Since $\varphi\in\mathrm{OR}$,
there exist numbers $a>1$ and $c\geq1$ such that $c^{-1}\leq\varphi(\lambda\zeta)/\varphi(\zeta)\leq c$ for all $\zeta\geq1$ and $\lambda\in[1,a]$. Assuming $t\geq1$ and $1\leq\lambda\leq a$, we therefore get
\begin{equation*}
\eta(\lambda t)=\int\limits_{\lambda t}^\infty
\frac{\tau^{2q+n-1}}{\varphi^2(\tau)}\,d\tau=
\lambda^{2q+n}\int\limits_{t}^\infty
\frac{\zeta^{2q+n-1}}{\varphi^2(\lambda\zeta)}\,d\zeta\leq
c^{2}\lambda^{2q+n}\int\limits_{t}^\infty
\frac{\zeta^{2q+n-1}}{\varphi^2(\zeta)}\,d\zeta
\leq c^2a^{2q+n}\eta(t)
\end{equation*}
and
\begin{equation*}
\eta(\lambda t)=\lambda^{2q+n}\int\limits_{t}^\infty
\frac{\zeta^{2q+n-1}}{\varphi^2(\lambda\zeta)}\,d\zeta\geq
c^{-2}\lambda^{2q+n}\int\limits_{t}^\infty
\frac{\zeta^{2q+n-1}}{\varphi^2(\zeta)}\,d\zeta\geq c^{-2}\eta(t);
\end{equation*}
i.e. $\eta\in\mathrm{OR}$.
\end{remark}
Before we prove Theorems \ref{th6.8} and \ref{th6.10}, we will illustrate them with three examples. As above, $0\leq q\in\mathbb{Z}$. As in Theorem~\ref{th6.10}, we let $c$ denote a positive number that does not depend on $f$ and $\lambda$.
\begin{example}\label{ex6.2.1}
Let us restrict ourselves to the Sobolev spaces $H^{s}(\mathbb{R}^{n})$, with $s\in\mathbb{R}$. Owing to Theorem~\ref{th6.8}, the spectral expansion \eqref{f6.1} converges unconditionally in $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ on the class $H^{s}(\mathbb{R}^{n})$ if and only if $s>q+n/2$. Let $s>q+n/2$, and put $r:=s-q-n/2>0$. If $0<\varepsilon<r/m$, then the degree of this convergence admits the following estimate:
\begin{equation*}
\|f-E(\widetilde{B}_{\lambda})f\|_{C,q,\mathbb{R}^{n}}\leq
c\,\|f\|_{s,\mathbb{R}^{n}}\langle\lambda\rangle^{\varepsilon-r/m}
\end{equation*}
for all $f\in H^{s}(\mathbb{R}^{n})$ and $\lambda>0$. Here, $\|\cdot\|_{s,\mathbb{R}^{n}}$ is the norm in $H^{s}(\mathbb{R}^{n})$. This estimate follows from Theorem~\ref{th6.10}, in which we put
$\phi_{1}(t):=t^{r-m\varepsilon}$ and $\phi_{2}(t):=t^{s-r+m\varepsilon}$ for every $t\geq1$. Choosing a number $\varepsilon>0$ arbitrarily and putting
\begin{equation}\label{f6.18b}
\phi_{1}(t):=t^{r}\log^{-\varepsilon-1/2}(1+t)\quad\mbox{and}\quad
\phi_{2}(t):=t^{s-r}\log^{\varepsilon+1/2}(1+t)
\end{equation}
for every $t\geq1$ in this theorem, we obtain the sharper estimate
\begin{equation*}
\|f-E(\widetilde{B}_{\lambda})f\|_{C,q,\mathbb{R}^{n}}\leq
c\,\|f\|_{s,\mathbb{R}^{n}}\langle\lambda\rangle^{-r/m}
\log^{\varepsilon+1/2}(1+\langle\lambda\rangle)
\end{equation*}
for the same $f$ and $\lambda$.
\end{example}
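Let us verify directly that the indicated choice $\phi_{1}(t)=t^{r-m\varepsilon}$ and $\phi_{2}(t)=t^{s-r+m\varepsilon}$ satisfies the hypotheses of Theorem~\ref{th6.10}. Since $s-r=q+n/2$, condition \eqref{f6.17} holds:
\begin{equation*}
\int\limits_{1}^{\infty}\frac{t^{2q+n-1}}{\phi_{2}^{2}(t)}\,dt=
\int\limits_{1}^{\infty}t^{2q+n-1-(2q+n+2m\varepsilon)}\,dt=
\int\limits_{1}^{\infty}t^{-1-2m\varepsilon}\,dt<\infty.
\end{equation*}
Moreover, $\phi_{1}(t)\to\infty$ as $t\to\infty$ because $\varepsilon<r/m$, and the supremum in \eqref{f6.18} is evaluated explicitly:
\begin{equation*}
\sup\bigl\{(\phi_{1}(t))^{-1}:t\geq\langle\lambda\rangle^{1/m}\bigr\}=
\langle\lambda\rangle^{-(r-m\varepsilon)/m}=
\langle\lambda\rangle^{\varepsilon-r/m},
\end{equation*}
which is the exponent appearing in the first estimate of Example~\ref{ex6.2.1}.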
Using the generalized Sobolev spaces $H^{\varphi}(\mathbb{R}^{n})$, with $\varphi\in\mathrm{OR}$, we may establish the unconditional convergence of \eqref{f6.1} in $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ at some functions
\begin{equation*}
f\notin H^{q+n/2+}(\mathbb{R}^{n}):=
\bigcup_{s>q+n/2}H^{s}(\mathbb{R}^{n})
\end{equation*}
and evaluate its degree. (Note that this union is narrower than $H^{q+n/2}(\mathbb{R}^{n})$.)
\begin{example}\label{ex6.2.2}
Choosing a number $\varrho>0$ arbitrarily and putting
\begin{equation}\label{f6.19}
\varphi(t):=t^{q+n/2}\log^{\varrho+1/2}(1+t)
\quad\mbox{for every}\quad t\geq1,
\end{equation}
we conclude by Theorem~\ref{th6.8} that the spectral expansion \eqref{f6.1} converges unconditionally in $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ on the class $H^{\varphi}(\mathbb{R}^{n})$. This class is evidently broader than $H^{q+n/2+}(\mathbb{R}^{n})$. If $0<\varepsilon<\varrho$, then the degree of this convergence admits the estimate
\begin{equation*}
\|f-E(\widetilde{B}_{\lambda})f\|_{C,q,\mathbb{R}^{n}}\leq
c\,\|f\|_{\varphi,\mathbb{R}^{n}}
\log^{\varepsilon-\varrho}(1+\langle\lambda\rangle)
\end{equation*}
for all $f\in H^{\varphi}(\mathbb{R}^{n})$ and $\lambda>0$. This estimate follows from Theorem~\ref{th6.10}, in which we represent $\varphi$ as the product of the functions
\begin{equation*}
\phi_{1}(t):=\log^{\varrho-\varepsilon}(1+t)\quad\mbox{and}\quad
\phi_{2}(t):=t^{q+n/2}\log^{\varepsilon+1/2}(1+t).
\end{equation*}
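For this factorization, the hypotheses of Theorem~\ref{th6.10} are verified directly. Indeed, $\phi_{1}(t)\to\infty$ as $t\to\infty$ because $\varrho-\varepsilon>0$, and condition \eqref{f6.17} holds since
\begin{equation*}
\int\limits_{1}^{\infty}\frac{t^{2q+n-1}}{\phi_{2}^{2}(t)}\,dt=
\int\limits_{1}^{\infty}\frac{dt}{t\,\log^{2\varepsilon+1}(1+t)}<\infty
\end{equation*}
owing to the inequality $2\varepsilon+1>1$. Besides,
\begin{equation*}
\sup\bigl\{(\phi_{1}(t))^{-1}:t\geq\langle\lambda\rangle^{1/m}\bigr\}=
\log^{\varepsilon-\varrho}\bigl(1+\langle\lambda\rangle^{1/m}\bigr)\leq
c_{0}\,\log^{\varepsilon-\varrho}(1+\langle\lambda\rangle)
\end{equation*}
for some number $c_{0}>0$ that does not depend on $\lambda$, because $\log(1+t^{1/m})\asymp\log(1+t)$ whenever $t\geq1$.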
\end{example}
Using iterated logarithms, we may obtain weaker sufficient conditions for the unconditional convergence of \eqref{f6.1} in $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$. The next example involves the double logarithm.
\begin{example}\label{ex6.2.3}
Choose a number $\varrho>0$ arbitrarily, and consider the function
\begin{equation}\label{f6.20}
\varphi(t):=t^{q+n/2}\,(\log(1+t))^{1/2}\,(\log\log(2+t))^{\varrho+1/2}
\quad\mbox{of}\quad t\geq1.
\end{equation}
According to Theorem~\ref{th6.8}, the spectral expansion \eqref{f6.1} converges unconditionally in $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ on the class $H^{\varphi}(\mathbb{R}^{n})$. If $0<\varepsilon<\varrho$, then the degree of this convergence admits the estimate
\begin{equation*}
\|f-E(\widetilde{B}_{\lambda})f\|_{C,q,\mathbb{R}^{n}}\leq
c\,\|f\|_{\varphi,\mathbb{R}^{n}}
\bigl(\log\log(2+\langle\lambda\rangle)\bigr)^{\varepsilon-\varrho}
\end{equation*}
for all $f\in H^{\varphi}(\mathbb{R}^{n})$ and $\lambda>0$. The estimate follows from Theorem~\ref{th6.10} provided that we represent $\varphi$ as the product of the functions $\phi_{1}(t):=(\log\log(2+t))^{\varrho-\varepsilon}$ and
\begin{equation*}
\phi_{2}(t):=t^{q+n/2}(\log(1+t))^{1/2}
(\log\log(2+t))^{\varepsilon+1/2}.
\end{equation*}
\end{example}
Let us turn to the proofs of Theorems \ref{th6.8} and \ref{th6.10}. The proofs are based on the following version of H\"ormander's embedding theorem \cite[Theorem 2.2.7]{Hermander63}:
\begin{proposition}\label{prop6.12}
Let $0\leq q\in\mathbb{Z}$ and $\varphi\in\mathrm{OR}$. Then condition \eqref{f6.16} implies the continuous embedding $H^{\varphi}(\mathbb{R}^n)\hookrightarrow C^{q}_\mathrm{b}(\mathbb{R}^n)$. Conversely, if
\begin{equation}\label{f6.21}
\{w\in H^{\varphi}(\mathbb{R}^n): \mathrm{supp}\,w\subset G\}\subseteq C^q(\mathbb{R}^n)
\end{equation}
for an open nonempty set $G\subset\mathbb{R}^n$, then condition \eqref{f6.16} is satisfied.
\end{proposition}
\begin{proof}
We first recall the definition of the H\"ormander space $\mathcal{B}_{p,k}$, which the embedding theorem deals with. Let $1\leq p\leq\infty$, and let a function $k:\mathbb{R}^{n}\to(0,\infty)$ satisfy the following condition: there exist positive numbers $c$ and $\ell$ such that
\begin{equation}\label{f6.22}
k(\xi+\zeta)\leq(1+c|\xi|)^{\ell}\,k(\zeta)\quad\mbox{for all}\quad
\xi,\zeta\in\mathbb{R}^{n}
\end{equation}
(the class of all such functions $k$ is denoted by $\mathcal{K}$). According to \cite[Definition 2.2.1]{Hermander63}, the complex linear space $\mathcal{B}_{p,k}$ consists of all distributions $w\in\mathcal{S}'(\mathbb{R}^{n})$ such that the Fourier transform $\widehat{w}$ is locally Lebesgue integrable over $\mathbb{R}^{n}$ and the product $k\widehat{w}$ belongs to the Lebesgue space $L_{p}(\mathbb{R}^{n})$. The space $\mathcal{B}_{p,k}$ is endowed with the norm $\|k\widehat{w}\|_{L_{p}(\mathbb{R}^{n})}$ and is complete with respect to it.
According to \cite[Theorem 2.2.7]{Hermander63} and its proof, the condition
\begin{equation}\label{f6.23}
\frac{(1+|\xi|)^{q}}{k(\xi)}\in L_{p'}(\mathbb{R}^{n})
\end{equation}
implies the inclusion $\mathcal{B}_{p,k}\subset C^{q}_\mathrm{b}(\mathbb{R}^n)$; here, as usual, the conjugate parameter $p'\in[1,\infty]$ is defined by $1/p+1/p'=1$. Moreover, if the set $\{w\in \mathcal{B}_{p,k}:\mathrm{supp}\,w\subset G\}$ lies in $C^{q}(\mathbb{R}^n)$ for an open nonempty set $G\subset\mathbb{R}^n$, then condition \eqref{f6.23} is satisfied. Note that the inclusion $\mathcal{B}_{p,k}\subset C^{q}_\mathrm{b}(\mathbb{R}^n)$ is continuous because its components are continuously embedded in a Hausdorff space, e.g. in $\mathcal{S}'(\mathbb{R}^{n})$.
The Hilbert space $H^{\varphi}(\mathbb{R}^n)$ is the H\"ormander space $\mathcal{B}_{2,k}$ provided that $k(\xi)=\varphi(\langle\xi\rangle)$ for every $\xi\in\mathbb{R}^n$ and that $k$ satisfies \eqref{f6.22}. Owing to \cite[Lemma~2.7]{MikhailetsMurach14}, the function $k(\xi):=\varphi(\langle\xi\rangle)$ of $\xi\in\mathbb{R}^n$ satisfies a weaker condition than \eqref{f6.22}; namely, there exist positive numbers $c_{0}$ and $\ell_0$ such that
\begin{equation*}
k(\xi+\zeta)\leq c_{0}(1+|\xi|)^{\ell_0}k(\zeta)
\quad\mbox{for all}\quad \xi,\zeta\in\mathbb{R}^{n}.
\end{equation*}
However, there exists a function $k_{1}\in\mathcal{K}$ such that both functions $k/k_{1}$ and $k_{1}/k$ are bounded on $\mathbb{R}^n$ (see \cite[the remark at the end of Section~2.1]{Hermander63}). Hence, the spaces $H^{\varphi}(\mathbb{R}^n)$ and $\mathcal{B}_{2,k_1}$ are equal with equivalence of norms. Thus, Proposition~\ref{prop6.12} holds true if we replace \eqref{f6.16} with the condition $(1+|\xi|)^{q}/k_{1}(\xi)\in L_{2}(\mathbb{R}^{n})$. The latter is equivalent to
\begin{equation}\label{f6.24}
\int\limits_{\mathbb{R}^{n}}\,
\frac{\langle\xi\rangle^{2q}d\xi}{\varphi^{2}(\langle\xi\rangle)}<\infty.
\end{equation}
It remains to show that $\eqref{f6.16}\Leftrightarrow\eqref{f6.24}$.
Passing to spherical coordinates with $r:=|\xi|$ and changing variables $t=\sqrt{1+r^{2}}$, we obtain
\begin{align*}
\int\limits_{\mathbb{R}^{n}}\,
\frac{\langle\xi\rangle^{2q}d\xi}{\varphi^{2}(\langle\xi\rangle)}&=
c_1\int\limits_{0}^{\infty}\,
\frac{(1+r^{2})^{q}\,r^{n-1}dr}{\varphi^{2}(\sqrt{1+r^{2}}\,)}=
c_1\int\limits_{1}^{\infty}\,
\frac{t^{2q+1}(t^{2}-1)^{n/2-1}dt}{\varphi^{2}(t)}\\
&=c_2+c_1\int\limits_{2}^{\infty}
\frac{t^{2q+1}(t^{2}-1)^{n/2-1}dt}{\varphi^{2}(t)}.
\end{align*}
Here, $c_1:=n\,\mathrm{mes}\,\widetilde{B}_{1}$, where $\mathrm{mes}\,\widetilde{B}_{1}$ is the volume of the unit ball in $\mathbb{R}^{n}$, and
$$
c_2:=c_1\int\limits_{1}^{2}
\frac{t^{2q+1}(t^{2}-1)^{n/2-1}dt}{\varphi^{2}(t)}<\infty
$$
because the function $1/\varphi$ is bounded on $[1,2]$ and because $n/2-1>-1$. Hence,
\begin{align*}
\eqref{f6.24}\,\Longleftrightarrow
\int\limits_{2}^{\infty}
\frac{t^{2q+1}(t^{2}-1)^{n/2-1}dt}{\varphi^{2}(t)}<\infty\,
\Longleftrightarrow
\int\limits_{2}^{\infty}\frac{t^{2q+n-1}dt}{\varphi^{2}(t)}<\infty
\,\Longleftrightarrow\,\eqref{f6.16}.
\end{align*}
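The middle equivalence here is due to an elementary two-sided estimate of $(t^{2}-1)^{n/2-1}$ for $t\geq2$:
\begin{equation*}
(t^{2}-1)^{n/2-1}=t^{n-2}\,(1-t^{-2})^{n/2-1}
\quad\mbox{with}\quad
\min\bigl\{1,(3/4)^{n/2-1}\bigr\}\leq(1-t^{-2})^{n/2-1}\leq
\max\bigl\{1,(3/4)^{n/2-1}\bigr\}
\end{equation*}
because $3/4\leq1-t^{-2}<1$ whenever $t\geq2$. Hence, $t^{2q+1}(t^{2}-1)^{n/2-1}\asymp t^{2q+n-1}$ on $[2,\infty)$, and the corresponding integrals converge simultaneously.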
\end{proof}
We systematically use the following auxiliary result:
\begin{lemma}\label{lem6.13}
Suppose that a function $\chi\in\mathrm{OR}$ is integrable over $[1,\infty)$. Then $\chi$ is bounded on $[1,\infty)$, and $t\chi(t)\to0$ as $t\to\infty$.
\end{lemma}
\begin{proof}
Let us prove by contradiction that $t\chi(t)\to0$ as $t\to\infty$. Assume the contrary; i.e., there exist a number $\varepsilon>0$ and a sequence $(t_{j})_{j=1}^{\infty}\subset[1,\infty)$ such that $t_{j}\to\infty$ as $j\to\infty$ and that $t_{j}\,\chi(t_{j})\geq\varepsilon$ for each $j\geq1$. Since $\chi\in\mathrm{OR}$, there are numbers $a>1$ and $c\geq1$ such that $c^{-1}\leq\chi(\lambda\tau)/\chi(\tau)\leq c$ for all $\tau\geq1$ and $\lambda\in[1,a]$. Since $\chi$ is integrable over $[1,\infty)$, we have
\begin{equation}\label{f6.25}
\sum_{k=0}^{\infty}\int\limits_{a^k}^{a^{k+1}}\chi(t)dt<\infty.
\end{equation}
Choosing an integer $j\geq1$ arbitrarily, we find an integer $k(j)\geq0$ such that $a^{k(j)}\leq t_{j}<a^{k(j)+1}$ and observe that $c^{-1}\leq\chi(t)/\chi(t_{j})\leq c$ whenever $t\in[a^{k(j)},a^{k(j)+1}]$. Hence,
\begin{equation*}
\int\limits_{a^{k(j)}}^{a^{k(j)+1}}\chi(t)dt\geq
\int\limits_{a^{k(j)}}^{a^{k(j)+1}}c^{-1}\chi(t_{j})dt\geq
c^{-1}\varepsilon\,t_{j}^{-1}(a^{k(j)+1}-a^{k(j)})>
c^{-1}\varepsilon(1-a^{-1})
\end{equation*}
for each integer $j\geq1$, which contradicts \eqref{f6.25} because $c^{-1}\varepsilon(1-a^{-1})>0$ and $k(j)\to\infty$ as $j\to\infty$. Thus, our assumption is wrong; i.e., $t\chi(t)\to0$ as $t\to\infty$. It follows from this that the function $\chi\in\mathrm{OR}$ is bounded on $[1,\infty)$ because it is bounded
on each compact subinterval of $[1,\infty)$.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{th6.8}$] \emph{Sufficiency.} Assume that $\varphi$ satisfies \eqref{f6.16}; let us prove that the spectral expansion \eqref{f6.1} converges unconditionally in the normed space $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ on the class $H^{\varphi}(\mathbb{R}^{n})$. Note first that $H^{\varphi}(\mathbb{R}^{n})\subset L_{2}(\mathbb{R}^{n})$ because the function $1/\varphi$ is bounded on $[1,\infty)$; the latter property follows from \eqref{f6.16} due to Lemma~\ref{lem6.13}.
We put $A:=I+L^{\ast}L$ and observe that $A$ is a positive definite self-adjoint unbounded linear operator in the Hilbert space $H=L_{2}(\mathbb{R}^{n})$ and that $\mathrm{Spec}\,A\subseteq[1,\infty)$. Here, $I$ is the identity operator in $L_{2}(\mathbb{R}^{n})$. It follows from the theorem on composition of PsDOs \cite[Theorem~1.2.4]{Agranovich94} that $A\in\Psi^{2m}(\mathbb{R}^{n})$ is uniformly elliptic on $\mathbb{R}^{n}$. Consider the functions $\chi(t):=\varphi(t^{1/(2m)})$ of $t\geq1$ and $\omega(z):=(\chi(1+|z|^{2}))^{-1}$ of $z\in\mathbb{C}$, and put $R:=\omega(L)=(1/\chi)(A)$ and $S:=I$ in Theorem~\ref{th6.2}. Since the function $1/\chi$ is bounded on $[1,\infty)$, the operator $R$ is bounded on $L_{2}(\mathbb{R}^{n})$, and $0\not\in\mathrm{Spec}\,\chi(A)$. It follows from the latter property that $H^{\chi}_{A}=\mathrm{Dom}\,\chi(A)$; hence, the operator $\chi(A)$ establishes an isometric isomorphism between $H^{\chi}_{A}$ and $L_{2}(\mathbb{R}^{n})$. Thus,
\begin{equation*}
R(L_{2}(\mathbb{R}^{n}))=H^{\chi}_{A}=H^{\varphi}(\mathbb{R}^{n})\subset
C^{q}_{\mathrm{b}}(\mathbb{R}^{n})
\end{equation*}
due to \eqref{f4.20}, Proposition~\ref{prop6.12}, and our assumption~\eqref{f6.16}. Since the norms in the spaces $L_{2}(\mathbb{R}^{n})$ and $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ are compatible, the operator $R$ acts continuously from $L_{2}(\mathbb{R}^{n})$ to $N=C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$, as was shown in Remark~\ref{rem6.3}. Thus, the operators $R$ and $S$ satisfy all the hypotheses of Theorem~\ref{th6.2}. According to this theorem, the spectral expansion \eqref{f6.1} converges unconditionally in the space $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ at every vector $f\in (RS)(L_{2}(\mathbb{R}^{n}))=H^{\varphi}(\mathbb{R}^{n})$. The sufficiency is proved.
\emph{Necessity.} Assume now that the spectral expansion \eqref{f6.1} converges unconditionally in $C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ on the class $H^{\varphi}(\mathbb{R}^{n})$. Then $f\in H^{\varphi}(\mathbb{R}^{n})$ implies $f=E(\mathbb{C})f\in C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$ by Definition~\ref{def6.1}. Hence, $\varphi$ satisfies \eqref{f6.16} due to Proposition~\ref{prop6.12}. The necessity is also proved.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{th6.10}$]
Consider the function $\chi_{j}(t):=\phi_{j}(t^{1/(2m)})$ of $t\geq1$ for each $j\in\{1,2\}$ and the functions $\eta(z):=(\chi_{1}(1+|z|^{2}))^{-1}$ and $\omega(z):=(\chi_{2}(1+|z|^{2}))^{-1}$ of $z\in\mathbb{C}$. Setting
$A:=I+L^{\ast}L$, we put $S:=\eta(L)=(1/\chi_{1})(A)$ and
$R:=\omega(L)=(1/\chi_{2})(A)$ in Theorem~\ref{th6.2}. The functions $\eta$ and $\omega$ are bounded on $\mathbb{C}$ by the hypotheses of the theorem (note that the boundedness of $\omega$ follows from \eqref{f6.17} in view of Lemma~\ref{lem6.13}). Hence, the operators $R$ and $S$ are bounded on the Hilbert space $H=L_{2}(\mathbb{R}^{n})$. It follows from \eqref{f6.17} that $R$ acts continuously from $L_{2}(\mathbb{R}^{n})$ to $N=C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$, as was shown in the proof of Theorem $\ref{th6.8}$ (the sufficiency). According to Theorem~\ref{th6.2} and Remark~\ref{rem6.4}, we have the estimate
\begin{equation}\label{f6.26}
\begin{aligned}
&\|f-E(\widetilde{B}_{\lambda})f\|_{C,q,\mathbb{R}^{n}}\\
&\leq
c'\cdot\|g\|_{\mathbb{R}^{n}}\cdot
\sup\bigl\{(\phi_{1}(\langle z\rangle^{1/m}))^{-1}:z\in\mathbb{C},\,|z|\geq\lambda\bigr\}
\cdot r_{g}(\widetilde{B}_{\lambda})
\end{aligned}
\end{equation}
for all $f\in RS(L_{2}(\mathbb{R}^{n}))$ and $\lambda>0$. Here, $c'$ denotes the norm of the bounded operator $R:L_{2}(\mathbb{R}^{n})\to C^{q}_{\mathrm{b}}(\mathbb{R}^{n})$, whereas $\|\cdot\|_{\mathbb{R}^{n}}$ stands for the norm in $L_{2}(\mathbb{R}^{n})$, and $g\in L_{2}(\mathbb{R}^{n})$ satisfies $f=RSg$. Note that $RS=(1/\chi)(A)$, where $\chi(t):=\chi_{1}(t)\chi_{2}(t)=\varphi(t^{1/(2m)})$ for every $t\geq1$. Since $0\not\in\mathrm{Spec}\,\chi(A)$, the operator $\chi(A)$ establishes an isometric isomorphism between $H^{\chi}_{A}$ and $L_{2}(\mathbb{R}^{n})$. The inverse operator $RS$ establishes an isomorphism between $L_{2}(\mathbb{R}^{n})$ and $H^{\varphi}(\mathbb{R}^{n})$ because the spaces $H^{\chi}_{A}$ and $H^{\varphi}(\mathbb{R}^{n})$ coincide up to equivalence of norms by \eqref{f4.20}. Hence, $c'\|g\|_{\mathbb{R}^{n}}\leq c\,\|f\|_{\varphi,\mathbb{R}^{n}}$
for some number $c>0$ that does not depend on $f$ and $\lambda$. Thus, formula \eqref{f6.26} yields the required estimate \eqref{f6.18} if we put $\theta_{f}(\lambda):=r_{g}(\widetilde{B}_{\lambda})$.
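Note, in this connection, that the passage from \eqref{f6.26} to \eqref{f6.18} also uses the identity
\begin{equation*}
\bigl\{\langle z\rangle^{1/m}:z\in\mathbb{C},\,|z|\geq\lambda\bigr\}=
\bigl[\langle\lambda\rangle^{1/m},\infty\bigr),
\end{equation*}
which holds because the function $\langle z\rangle=(1+|z|^{2})^{1/2}$ of $z\in\mathbb{C}$ depends only on $|z|$, increases with $|z|$, and is unbounded; hence the suprema in \eqref{f6.26} and \eqref{f6.18} coincide.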
\end{proof}
\subsection{}\label{sec6.3}
As in Subsection~\ref{sec4.2}, let $\Gamma$ be a compact boundaryless $C^{\infty}$-manifold of dimension $n\geq1$ endowed with a positive $C^{\infty}$-density $dx$. We suppose here that $L$ is a PsDO of class $\Psi^{m}(\Gamma)$ for some $m>0$ and that $L$ is elliptic on $\Gamma$. We may and will consider $L$ as a closed unbounded operator in the Hilbert space $H:=L_{2}(\Gamma)$ with $\mathrm{Dom}\,L=H^{m}(\Gamma)$ (see \cite[Sections 2.3~d and 3.1~b]{Agranovich94}). We also suppose that $L$ is a normal operator in $L_{2}(\Gamma)$. Then the Hilbert space $L_{2}(\Gamma)$ has an orthonormal basis $\mathcal{E}:=(e_{j})_{j=1}^{\infty}$ formed by some eigenvectors $e_{j}\in C^{\infty}(\Gamma)$ of $L$ (see, e.g., \cite[Section~15.2]{Shubin01}). Thus, the spectral expansion
\begin{equation}\label{f6.27}
f=\sum_{j=1}^{\infty}\varkappa_{j}(f)e_j,\quad\mbox{with}\quad \varkappa_{j}(f):=(f,e_j)_{\Gamma},
\end{equation}
holds in $L_{2}(\Gamma)$ for every $f\in L_{2}(\Gamma)$. (Recall that $(\cdot,\cdot)_{\Gamma}$ and $\|\cdot\|_{\Gamma}$ respectively stand for the inner product and norm in $L_{2}(\Gamma)$.) These eigenvectors are enumerated so that $|\lambda_{j}|\leq|\lambda_{j+1}|$ whenever $j\geq1$, with $\lambda_{j}$ denoting the eigenvalue of $L$ such that $Le_j=\lambda_{j}e_j$. Note that $|\lambda_{j}|\to\infty$ as $j\to\infty$. Moreover, if $L$ is a classical PsDO, then
\begin{equation}\label{f6.28}
|\lambda_{j}|\sim\widetilde{c}\,j^{m/n}\quad\mbox{as}\quad j\to\infty,
\end{equation}
where $\widetilde{c}$ is a certain positive number that does not depend on~$j$.
As usual, $C^{q}(\Gamma)$ denotes the Banach space of all functions $u:\Gamma\to\mathbb{C}$ that are $q$ times continuously differentiable on $\Gamma$. The norm in this space is denoted by $\|\cdot\|_{C,q,\Gamma}$.
For the spectral expansion \eqref{f6.27}, the following versions of Theorems \ref{th6.8} and \ref{th6.10} hold true:
\begin{theorem}\label{th6.14}
Let $0\leq q\in\mathbb{Z}$ and $\varphi\in\mathrm{OR}$. The series \eqref{f6.27} converges unconditionally in the normed space $C^{q}(\Gamma)$ on the function class $H^{\varphi}(\Gamma)$ if and only if $\varphi$ satisfies \eqref{f6.16}.
\end{theorem}
\begin{theorem}\label{th6.15}
Let $0\leq q\in\mathbb{Z}$, and assume that the PsDO $L$ is classical. Suppose that certain functions $\phi_{1},\phi_{2}\in\mathrm{OR}$ satisfy the hypotheses of Theorem~$\ref{th6.10}$, and consider the function $\varphi:=\phi_{1}\phi_{2}\in\mathrm{OR}$ subject to \eqref{f6.16}. Then the degree of the convergence of \eqref{f6.27} in the normed space $C^{q}(\Gamma)$ on the class $H^{\varphi}(\Gamma)$ admits the estimate
\begin{equation}\label{f6.29}
\biggl\|f-\sum_{j=1}^{k}\varkappa_j(f)e_j\biggr\|_{C,q,\Gamma}
\leq c\cdot\|f\|_{\varphi,\Gamma}\cdot
\sup\bigl\{(\phi_{1}(j^{1/n}))^{-1}:k+1\leq j\in\mathbb{Z}\bigr\}
\cdot\theta_{f,k}
\end{equation}
for every function $f\in H^{\varphi}(\Gamma)$ and each integer $k\geq1$. Here, $c$ is a certain positive number that does not depend on $f$ and $k$, and $(\theta_{f,k})_{k=1}^{\infty}$ is a decreasing sequence that lies in $[0,1]$ and tends to zero.
\end{theorem}
We illustrate these theorems with examples analogous to those given in the previous subsection. Let $0\leq q\in\mathbb{Z}$, and let $c$ denote a positive number that does not depend on the function $f$ and the integer $k$ from Theorem~\ref{th6.15}. Dealing with estimates of the form \eqref{f6.29}, we suppose that the PsDO $L$ is classical.
\begin{example}\label{ex6.3.1}
Owing to Theorem~\ref{th6.14}, the series \eqref{f6.27} converges unconditionally in $C^{q}(\Gamma)$ on the Sobolev class $H^{s}(\Gamma)$ if and only if $s>q+n/2$. This fact is known (see, e.g., \cite[Chapter~XII, Exercise~4.5]{Taylor81} in the $q=0$ case). Let $s>q+n/2$, and put $r:=s-q-n/2>0$. If $0<\varepsilon<r/n$, then
\begin{equation*}
\biggl\|f-\sum_{j=1}^{k}\varkappa_j(f)e_j\biggr\|_{C,q,\Gamma}\leq
c\,\|f\|_{s,\Gamma}(k+1)^{\varepsilon-r/n}
\end{equation*}
for all $f\in H^{s}(\Gamma)$ and $k\geq1$, with $\|\cdot\|_{s,\Gamma}$ being the norm in $H^{s}(\Gamma)$. This estimate follows from Theorem~\ref{th6.15}, in which we put $\phi_{1}(t):=t^{r-n\varepsilon}$ and $\phi_{2}(t):=t^{s-r+n\varepsilon}$ for every $t\geq1$. The estimate admits the following refinement:
\begin{equation*}
\biggl\|f-\sum_{j=1}^{k}\varkappa_j(f)e_j\biggr\|_{C,q,\Gamma}\leq
c\,\|f\|_{s,\Gamma}(k+1)^{-r/n}\log^{\varepsilon+1/2}(k+1)
\end{equation*}
for the same $f$ and $k$, the number $\varepsilon>0$ being chosen arbitrarily. This estimate follows from Theorem~\ref{th6.15} applied to the functions \eqref{f6.18b}.
\end{example}
\begin{example}\label{ex6.3.2}
We choose a number $\varrho>0$ arbitrarily and define a function $\varphi$ by formula \eqref{f6.19}. According to Theorem~\ref{th6.14}, the series \eqref{f6.27} converges unconditionally in $C^{q}(\Gamma)$ on the class $H^{\varphi}(\Gamma)$. This fact is known at least in the $q=0$ case (see \cite[Chapter~XII, Exercise~4.8]{Taylor81}). If $0<\varepsilon<\varrho$, then
\begin{equation*}
\biggl\|f-\sum_{j=1}^{k}\varkappa_j(f)e_j\biggr\|_{C,q,\Gamma}\leq
c\,\|f\|_{\varphi,\Gamma}\log^{\varepsilon-\varrho}(k+1)
\end{equation*}
for all $f\in H^{\varphi}(\Gamma)$ and $k\geq1$. This estimate follows from Theorem~\ref{th6.15} if we represent $\varphi$ in the form used in Example~\ref{ex6.2.2}. Comparing this result with the previous example, we see that $H^{\varphi}(\Gamma)$ is broader than the union
\begin{equation*}
H^{q+n/2+}(\Gamma):=\bigcup_{s>q+n/2}H^{s}(\Gamma).
\end{equation*}
\end{example}
\begin{example}\label{ex6.3.3}
We choose a number $\varrho>0$ arbitrarily and define a function $\varphi$ by formula \eqref{f6.20}. Owing to Theorem~\ref{th6.14}, the series \eqref{f6.27} converges unconditionally in $C^{q}(\Gamma)$ on the class $H^{\varphi}(\Gamma)$. This class is broader than that used in Example~\ref{ex6.3.2}. If $0<\varepsilon<\varrho$, then
\begin{equation*}
\biggl\|f-\sum_{j=1}^{k}\varkappa_j(f)e_j\biggr\|_{C,q,\Gamma}\leq
c\cdot\|f\|_{\varphi,\Gamma}\cdot(\log\log(k+2))^{\varepsilon-\varrho}
\end{equation*}
for all $f\in H^{\varphi}(\Gamma)$ and $k\geq1$. This bound follows from Theorem~\ref{th6.15} if we represent $\varphi$ in the form given in Example~\ref{ex6.2.3}.
\end{example}
These results are applicable to multiple trigonometric series. Indeed, if $\Gamma=\mathbb{T}^{n}$ and $A=\Delta_{\Gamma}$, then \eqref{f6.27} becomes the expansion of $f$ into the $n$-multiple trigonometric series (as usual, $\mathbb{T}:=\{e^{i\tau}:0\leq\tau\leq 2\pi\}$). It is known \cite[Section~6]{Golubov84} that this series converges unconditionally and uniformly (on $\Gamma$) on every H\"older class $C^{s}(\Gamma)$ of order $s>n/2$. The exponent $n/2$ is critical here; namely, there exists a function $f\in C^{n/2}(\Gamma)$ whose trigonometric series diverges at some point of $\mathbb{T}^{n}$. These results constitute a multidimensional generalization of Bernstein's theorem for trigonometric series. Since $C^{s}(\Gamma)\subset H^{s}(\Gamma)$, Example \ref{ex6.3.1} gives a weaker sufficient condition for this convergence. Examples \ref{ex6.3.2} and \ref{ex6.3.3} treat the case of the critical exponent with the help of generalized Sobolev spaces.
The proofs of Theorems \ref{th6.14} and \ref{th6.15} are similar to those of Theorems \ref{th6.8} and \ref{th6.10}; we use Theorem~\ref{th6.5} (instead of Theorem~\ref{th6.2}) and the following analog of Proposition~\ref{prop6.12}:
\begin{proposition}\label{prop6.16}
Let $0\leq q\in\mathbb{Z}$ and $\varphi\in\mathrm{OR}$. Then condition \eqref{f6.16} is equivalent to the embedding $H^{\varphi}(\Gamma)\subseteq C^{q}(\Gamma)$. Moreover, this embedding is compact under condition \eqref{f6.16}.
\end{proposition}
\begin{proof}
Suppose first that $\varphi$ satisfies condition~\eqref{f6.16}. Then the continuous embedding $H^\varphi(\mathbb{R}^n)\hookrightarrow C^q_\mathrm{b}(\mathbb{R}^n)$ holds true by Proposition~\ref{prop6.12}.
Let $\varkappa$, $\chi_j$, and $\pi_j$ be the same as those in the definition of $H^{\varphi}(\Gamma)$. Choosing $f\in H^\varphi(\Gamma)$ arbitrarily, we get the inclusion
\begin{equation*}
(\chi_jf)\circ\pi_j\in H^\varphi(\mathbb{R}^n)\hookrightarrow C^q_\mathrm{b}(\mathbb{R}^n)
\end{equation*}
for each $j\in\{1,\ldots,\varkappa\}$. Hence, each $\chi_jf\in C^q(\Gamma)$, which implies that
\begin{equation*}
f=\sum_{j=1}^\varkappa\chi_j f\in C^q(\Gamma).
\end{equation*}
Thus, $H^\varphi(\Gamma)\subseteq C^q(\Gamma)$; this embedding is continuous because both spaces are complete and continuously embedded in $\mathcal{D}'(\Gamma)$. Let us prove that it is compact.
We showed in Remark~\ref{rem6.11} that $\varphi=\phi_{1}\phi_{2}$ for some functions $\phi_{1}$ and $\phi_{2}$ satisfying the hypotheses of Theorem~\ref{th6.10}. Since $\phi_{2}(t)/\varphi(t)=1/\phi_{1}(t)\to0$ as $t\to\infty$, the compact embedding $H^\varphi(\Gamma)\hookrightarrow H^{\phi_{2}}(\Gamma)$ holds true. Indeed, let $T$ and $K$ be the bounded operators \eqref{f4.11} and \eqref{f4.18}. If a sequence $(f_{k})$ is bounded in $H^\varphi(\Gamma)$, then the sequence $(Tf_{k})$ is bounded
in $(H^{\varphi}(\mathbb{R}^n))^{\varkappa}$. It follows from this by \cite[Theorem~2.2.3]{Hermander63} that the latter sequence contains a convergent subsequence $(Tf_{k_\ell})$ in $(H^{\phi_{2}}(\mathbb{R}^n))^{\varkappa}$. Hence,
the subsequence of vectors $f_{k_\ell}=KTf_{k_\ell}$ is convergent in $H^{\phi_{2}}(\Gamma)$. Thus, the embedding $H^\varphi(\Gamma)\hookrightarrow H^{\phi_{2}}(\Gamma)$ is compact. As we showed in the previous paragraph, the continuous embedding $H^{\phi_{2}}(\Gamma)\hookrightarrow C^q(\Gamma)$ holds true because $\phi_{2}$ satisfies \eqref{f6.17}. Therefore, the embedding $H^\varphi(\Gamma)\hookrightarrow C^q(\Gamma)$ is compact.
Assume now that the embedding $H^\varphi(\Gamma)\subseteq C^q(\Gamma)$ holds true, and prove that $\varphi$ satisfies \eqref{f6.16}.
We suppose without loss of generality that $\Gamma_1$ is not contained in $\Gamma_2\cup\cdots\cup\Gamma_\varkappa$, choose an open nonempty set $U\subset\Gamma_1$ which satisfies $U\cap\Gamma_j=\emptyset$ whenever $j\neq1$, and put $G:=\pi^{-1}_1(U)$. Consider an arbitrary distribution $w\in H^{\varphi}(\mathbb{R}^n)$ subject to $\mathrm{supp}\,w\subset G$.
Owing to \eqref{f4.18} and our assumption, we have the inclusion
\begin{equation*}
u:=K(w,\underbrace{0,\ldots,0}_{\varkappa-1}\,)\in H^{\varphi}(\Gamma) \subseteq C^q(\Gamma).
\end{equation*}
Hence, $w=(\chi_1 u)\circ\pi_1\in C^q(\mathbb{R}^{n})$; note that this equality is true because $\chi_1=1$ on $U$. Thus, \eqref{f6.21} holds true, which implies \eqref{f6.16} due to Proposition~\ref{prop6.12}.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{th6.14}$]
\emph{Sufficiency} is proved in the same manner as the sufficiency part of Theorem~\ref{th6.8}. We only replace $\mathbb{R}^{n}$ with $\Gamma$ and use Theorem~\ref{th6.5} instead of Theorem~\ref{th6.2} and Proposition~\ref{prop6.16} instead of
Proposition~\ref{prop6.12}.
\emph{Necessity.} Assume that the series \eqref{f6.27} converges unconditionally in $C^{q}(\Gamma)$ on the class $H^{\varphi}(\Gamma)$. Then $H^{\varphi}(\Gamma)\subseteq C^{q}(\Gamma)$, which implies \eqref{f6.16} by Proposition~\ref{prop6.16}.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{th6.15}$.] It is very similar to the proof of Theorem $\ref{th6.10}$. Replacing $\mathbb{R}^{n}$ with $\Gamma$ in this proof and using Theorem~\ref{th6.5} and Remark~\ref{rem6.6} instead of Theorem~\ref{th6.2} and Remark~\ref{rem6.4}, we obtain the following analog of the estimate \eqref{f6.26}:
\begin{equation}\label{f6.30}
\biggl\|f-\sum_{j=1}^{k}\varkappa_j(f)e_j\biggr\|_{C,q,\Gamma}\leq
c'\cdot\|g\|_{\Gamma}\cdot\sup_{j\geq k+1}
\bigl\{(\phi_{1}(\langle\lambda_j\rangle^{1/m}))^{-1}\bigr\}
\cdot r_{g,k}
\end{equation}
for every function $f\in H^{\varphi}(\Gamma)$ and each integer $k\geq1$. Here, $c'$ denotes the norm of the bounded operator $R:L_{2}(\Gamma)\to C^{q}(\Gamma)$, and $g:=(RS)^{-1}f\in L_{2}(\Gamma)$. Reasoning in the same way as after formula \eqref{f6.26}, we arrive at the inequality $c'\|g\|_{\Gamma}\leq c''\|f\|_{\varphi,\Gamma}$, where the number $c''>0$ does not depend on $f$ and $k$. Besides, owing to the inclusion $\phi_{1}\in\mathrm{OR}$ and the asymptotic formula \eqref{f6.28}, there exist two positive numbers $c_{1}$ and $c_{2}$ such that
\begin{equation*}
c_{1}\phi_{1}(j^{1/n})\leq
\phi_{1}(\langle\lambda_j\rangle^{1/m})\leq
c_{2}\,\phi_{1}(j^{1/n})
\end{equation*}
for every integer $j\geq1$. Thus, formula \eqref{f6.30} yields the required estimate \eqref{f6.29} if we put $\theta_{f,k}:=r_{g,k}$ and $c:=c''/c_{1}$.
\end{proof}
\subsection{}\label{sec6.last}
We end Section~\ref{sec6} with two sufficient conditions under which
the spectral expansion \eqref{f6.27} converges a.e. (almost everywhere) on the manifold $\Gamma$ with respect to the measure induced by the $C^{\infty}$-density $dx$. These conditions are formulated in terms of the membership of $f$ in certain generalized Sobolev spaces on $\Gamma$. Put
\begin{equation*}
S^{\ast}(f,x):=\sup_{1\leq k<\infty}\,
\biggl|\,\sum_{j=1}^{k}\;\varkappa_{j}(f)e_{j}(x)\,\biggr|
\end{equation*}
for all $f\in L_{2}(\Gamma)$ and $x\in\Gamma$; thus, $S^{\ast}(f,x)$ is the majorant of the partial sums of \eqref{f6.27}. Consider the function $\log^{\ast}t:=\max\{1,\log t\}$ of $t\geq1$; it belongs to $\mathrm{OR}$. We suppose that the PsDO $L$ is classical.
\begin{theorem}\label{th6.17}
The series \eqref{f6.27} converges a.e. on $\Gamma$ on the function class $H^{\log^{\ast}}(\Gamma)$. Besides, there exists a number $c>0$ such that
\begin{equation*}
\|S^{\ast}(f,\cdot)\|_{\Gamma}\leq c\,\|f\|_{\log^{\ast},\Gamma}
\quad\mbox{for every}\quad f\in H^{\log^{\ast}}(\Gamma).
\end{equation*}
\end{theorem}
If $f\in H^{\log^{\ast}}(\Gamma)$, then the convergence of the series \eqref{f6.27} may be violated under a permutation of its terms. To ensure that the convergence does not depend on their order, we should subject $f$ to a stronger condition.
\begin{theorem}\label{th6.18}
Assume that a function $\varphi\in\mathrm{OR}$ (nonstrictly) increases and satisfies
\begin{equation}\label{f6.32}
\int\limits_{2}^{\infty}\frac{dt}{t\,(\log t)\,\varphi^{2}(t)}<\infty.
\end{equation}
Then the series \eqref{f6.27} converges unconditionally a.e. on $\Gamma$ on the function class $H^{\varphi\log^{\ast}}(\Gamma)$.
\end{theorem}
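For example, condition \eqref{f6.32} holds for the increasing function $\varphi(t):=\max\{1,(\log t)^{s}\}\in\mathrm{OR}$ whatever $s>0$. Indeed, $\varphi(t)=(\log t)^{s}$ whenever $t\geq e$, and
\begin{equation*}
\int\limits_{e}^{\infty}\frac{dt}{t\,(\log t)^{1+2s}}<\infty.
\end{equation*}
Hence, for each $s>0$, the series \eqref{f6.27} converges unconditionally a.e. on $\Gamma$ on the function class $H^{\varphi\log^{\ast}}(\Gamma)$ with this choice of $\varphi$.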
These theorems are proved in \cite[Section~2.3.2]{MikhailetsMurach14}, the second being demonstrated in the case where $\varphi$ varies slowly at infinity in the sense of Karamata. The proofs rely on Theorem~\ref{th5.8} and general forms of the classical Menshov--Rademacher \cite{Menschoff23, Rademacher22} and Orlicz \cite{Orlicz27} theorems about a.e. convergence of orthogonal series. We give these brief proofs for the sake of completeness.
\begin{proof}[Proof of Theorem~$\ref{th6.17}$.]
Note that the orthonormal basis $\mathcal{E}$ of $L_{2}(\Gamma)$ consists of eigenvectors of the operator $A:=(I+L^{\ast}L)^{1/m}$, to which Theorem~\ref{th5.8} is applicable. Owing to this theorem, we have
\begin{equation*}
\sum_{j=1}^{\infty}(\log^{2}(j+1))|\varkappa_{j}(f)|^{2}\asymp
\sum_{j=1}^{\infty}(\log^{\ast}(j^{1/n}))^{2}\,|\varkappa_{j}(f)|^{2}
\asymp\|f\|_{\log^{\ast},\Gamma}^{2}<\infty
\end{equation*}
whenever $f\in H^{\log^{\ast}}(\Gamma)$, with $\asymp$ meaning equivalence of norms. Now Theorem~\ref{th6.17} follows from the Menshov--Rademacher theorem, which remains true for general complex orthogonal series formed by square integrable functions (see, e.g., \cite{Meaney07, MikhailetsMurach11MFAT4, MoriczTandori96}).
\end{proof}
\begin{proof}[Proof of Theorem~$\ref{th6.18}$.]
Let $f\in H^{\varphi\log^{\ast}}(\Gamma)$, and put $\omega_{j}:=\varphi^{2}(j^{1/n})$ for every integer $j\geq1$. Owing to Theorem~\ref{th5.8} applied to $A:=(I+L^{\ast}L)^{1/m}$, we have
\begin{equation}\label{f6.33}
\sum_{j=2}^{\infty}(\log^{2}j)\,\omega_{j}\,|\varkappa_{j}(f)|^{2}\asymp
\|f\|_{\varphi\log^{\ast},\Gamma}^{2}<\infty.
\end{equation}
Besides, condition \eqref{f6.32} implies that
\begin{equation}\label{f6.34}
\sum_{j=3}^{\infty}\frac{1}{j\,(\log j)\,\omega_{j}}\leq
\int\limits_{2}^{\infty}
\frac{d\tau}{\tau\,(\log\tau)\,\varphi^{2}(\tau^{1/n})}= \int\limits_{2^{1/n}}^{\infty}
\frac{n\,t^{n-1}\,dt}{t^{n}\,n\,(\log
t)\,\varphi^{2}(t)}<\infty.
\end{equation}
The conclusion of Theorem \ref{th6.18} follows from \eqref{f6.33} and \eqref{f6.34} due to the Orlicz theorem (in Ul'janov's equivalent statement \cite[Section~9, Subsection~1]{Uljanov64}), which remains true for general complex orthogonal series \cite[Theorem~2]{MikhailetsMurach12UMJ10} (see also \cite[Theorem~3]{MikhailetsMurach11MFAT4}).
\end{proof}
As to Theorems \ref{th6.17} and \ref{th6.18}, note the following: if we restricted ourselves to Sobolev spaces, we could assert only that the series \eqref{f6.27} converges unconditionally a.e. on $\Gamma$ on the function class $H^{0+}(\Gamma):=\bigcup_{s>0}H^{s}(\Gamma)$ (cf. \cite{Meaney82}). This class is significantly narrower than the spaces used in these theorems. Using the extended Sobolev scale, we can express the hypotheses of the Menshov--Rademacher and Orlicz theorems in an adequate form.
\end{document} |
\begin{document}
\title{Area integral functions and $H^{\infty}$ functional calculus for sectorial operators on Hilbert spaces}
\thanks{{\it 2010 Mathematics Subject Classification:} 47D06, 47A60}
\thanks{{\it Key words:} Sectorial operator, $H^{\infty}$ Functional calculus, Area integral function, Square function, Hilbert space.}
\author{Zeqian Chen}
\address{Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, 30 West District, Xiao-Hong-Shan, Wuhan 430071, China}
\email{zqchen@wipm.ac.cn}
\thanks{Z. Chen is partially supported by NSFC grant No. 11171338.}
\author{Mu Sun}
\address{Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, 30 West District, Xiao-Hong-Shan, Wuhan 430071, China,
and Graduate University of Chinese Academy of Sciences, Beijing 100049, China}
\date{}
\maketitle
\markboth{Z. Chen and M. Sun}
{$H^{\infty}$ functional calculus}
\begin{abstract}
Area integral functions are introduced for sectorial operators on Hilbert spaces. We establish the equivalence relationship between the square and area integral functions. This immediately extends McIntosh/Yagi's results on the $H^{\infty}$ functional calculus of sectorial operators on Hilbert spaces to the case when the square functions are replaced by the area integral functions.
\end{abstract}
\section{Preliminaries}\label{pre}
The theory of sectorial operators, their $H^{\infty}$ functional calculus, and their associated square functions on Hilbert spaces grew out of McIntosh's seminal paper \cite{M1986} and a subsequent work by McIntosh/Yagi \cite{MY1989}, and was then generalized to the setting of Banach spaces by Cowling-Doust-McIntosh-Yagi \cite{CDMY1996} and by Kalton/Weis \cite{KW2001}. The aim of this paper is to introduce so-called area integral functions for sectorial operators on Hilbert spaces and to extend McIntosh/Yagi's theory to the case when the square functions are replaced by the area integral functions. The corresponding $L_p$ case will be given elsewhere \cite{CS}.
To this end, in this section we give a brief review of the $H^{\infty}$ functional calculus on general Banach spaces, together with preliminary results that will be used in what follows. We mainly follow the fundamental works \cite{CDMY1996, M1986, MY1989}. See also \cite{ADM1996, LeM2007} for further details. We refer to \cite{Gold1985} for the necessary background on semigroup theory.
\subsection{Sectorial operators and $C_0$-semigroups}
Let $\mathbf{X}$ be a complex Banach space. We denote by $\mathcal{B} ( \mathbf{X})$ the Banach algebra of all bounded operators on $\mathbf{X}.$ Let $A$ be a closed and densely defined operator on $\mathbf{X}.$ We let $\mathrm{D} (A),$ $\mathrm{N} (A)$ and $\mathrm{R} (A)$ denote the domain, kernel and range of $A,$ respectively. Further, we let $\sigma (A)$ and $\rho (A)$ denote the spectrum and resolvent set of $A,$ respectively. Then, for any $\lambda \in \rho(A),$ we let
\begin{eqnarray*}
R(\lambda, A)=(\lambda-A)^{-1}
\end{eqnarray*}
denote the corresponding resolvent operator.
For any $\omega\in(0,\pi)$, we let
\begin{eqnarray*}
\Sigma_\omega = \{z \in \mathbb{C}\setminus \{0\} : \; | \mathrm{Arg} (z) | < \omega\}
\end{eqnarray*}
be the open sector of angle $2\omega$ around the half-line $(0, \infty).$ Then, $A$ is said to be a sectorial operator of type $\omega$ if $A$ is closed and densely defined, $\sigma (A) \subset \overline{\Sigma}_\omega,$ and for any $\theta \in(\omega,\pi)$ there is a constant $K_\theta>0$ such that
\begin{eqnarray}\label{eq:EsitSectorialOper}
\| z R(z,A) \| \le K_\theta, \quad z \in \mathbb{C} \setminus \overline{\Sigma}_\theta .
\end{eqnarray}
We say that $A$ is sectorial of type $0$ if it is of type $\omega$ for any $\omega >0.$
Let $(T_t)_{t \ge 0}$ be a bounded $C_0$-semigroup on $\mathbf{X}$ and let $- A$ denote its infinitesimal generator. Then $A$ is closed and densely defined. Moreover, $\sigma (A) \subset \overline{\Sigma}_{\frac{\pi}{2}}$ and, for any $\lambda \in \mathbb{C} \setminus \overline{\Sigma}_{\frac{\pi}{2}},$ we have
\begin{eqnarray*}
R (\lambda, A ) = - \int^{\infty}_0 e^{\lambda t} T_t \, d t
\end{eqnarray*}
in the strong operator topology, from which it follows that $A$ is a sectorial operator of type $\frac{\pi}{2}.$
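As a simple illustration (a standard example, not taken from the references above), consider the diagonal operator $A (x_k)_{k\ge1} = (\lambda_k x_k)_{k\ge1}$ on $\mathbf{X}=\ell_2,$ with positive numbers $\lambda_k$ and the natural domain $\mathrm{D}(A)=\{x\in\ell_2:\, (\lambda_k x_k)_{k\ge1}\in\ell_2\}.$ Since $\mathrm{dist}(z,[0,\infty)) \ge |z| \sin\theta$ for every $z \in \mathbb{C} \setminus \overline{\Sigma}_\theta,$ we get
\begin{eqnarray*}
\| z R(z,A) \| = \sup_{k\ge1} \frac{|z|}{|z-\lambda_k|} \le \frac{1}{\sin\theta}, \quad z \in \mathbb{C} \setminus \overline{\Sigma}_\theta,
\end{eqnarray*}
so $A$ is sectorial of type $0.$ The semigroup generated by $-A$ is $T_z x = (e^{-\lambda_k z} x_k)_{k\ge1},$ a contraction for $\mathrm{Re}\, z \ge 0$ which is analytic in every sector $\Sigma_\alpha$ with $\alpha < \frac{\pi}{2}.$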
\begin{prop}\label{prop:AnalyticSemigroup}{\rm (see e.g. \cite{Gold1985})}
Let $(T_t)_{t \ge 0}$ be a bounded $C_0$-semigroup on $\mathbf{X}$ with infinitesimal generator $- A.$ Given $\omega \in (0, \frac{\pi}{2}),$ the following are equivalent:
\begin{enumerate}[{\rm (i)}]
\item $A$ is sectorial of type $\omega.$
\item For any $\alpha \in (0, \frac{\pi}{2} - \omega ),$ $(T_t)_{t \ge 0}$ admits a bounded analytic extension $(T_z)_{z \in \Sigma_{\alpha}}$ in $\mathcal{B} ( \mathbf{X}).$
\end{enumerate}
\end{prop}
By definition, a $C_0$-semigroup $(T_t)_{t \ge 0}$ is called a bounded analytic semigroup if there exist a positive angle $0 < \alpha <\frac{\pi}{2}$ and a bounded analytic extension of $(T_t)_{t \ge 0}$ on $\Sigma_\alpha.$ That is, there exists a bounded family of operators $(T_z)_{z \in \Sigma_\alpha}$ extending $(T_t)_{t \ge 0}$ and such that $z \mapsto T_z$ is analytic from $\Sigma_\alpha$ into $\mathcal{B} (\mathbf{X}).$ Note that such an extension necessarily satisfies $T_z T_w = T_{z + w}$ for all $z, w \in \Sigma_\alpha.$
By Proposition \ref{prop:AnalyticSemigroup}, a $C_0$-semigroup $(T_t)_{t \ge 0}$ with infinitesimal generator $- A$ is a bounded analytic semigroup if and only if $A$ is a sectorial operator of type $\omega$ for some $\omega \in (0, \frac{\pi}{2}).$
\subsection{$H^{\infty}$ functional calculus}
Given any $\theta \in (0,\pi),$ we let $H^\infty(\Sigma_\theta)$ be the set of all bounded analytic functions $f:\Sigma_\theta \to \mathbb{C}.$ This is a Banach algebra for the supremum norm
\begin{eqnarray*}
\| f \|_{\infty, \theta}: = \sup_{z \in \Sigma_\theta} | f (z) |.
\end{eqnarray*}
Then we let $H^\infty_0(\Sigma_\theta)$ be the subalgebra of all $f\in H^\infty(\Sigma_\theta)$ for which there exist two positive numbers $s,c>0$ such that
\begin{eqnarray}\label{eq:EstiH_0funct}
| f(z) | \le c \frac{| z |^s}{(1+ | z | )^{2s}},\quad z \in \Sigma_\theta.
\end{eqnarray}
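For instance, the function $\varphi(z) = z (1+z)^{-2},$ which will play a role below, belongs to $H^\infty_0(\Sigma_\theta)$ for every $\theta \in (0,\pi).$ Indeed, an elementary computation gives $|1+z| \ge \cos(\frac{\theta}{2})\,(1+|z|)$ for all $z \in \Sigma_\theta,$ so that
\begin{eqnarray*}
| \varphi(z) | = \frac{|z|}{|1+z|^2} \le \frac{1}{\cos^2(\frac{\theta}{2})}\, \frac{|z|}{(1+|z|)^2},\quad z \in \Sigma_\theta,
\end{eqnarray*}
and the above condition holds with $s=1$ and $c = \cos^{-2}(\frac{\theta}{2}).$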
Now given a sectorial operator $A$ of type $\omega\in(0,\pi)$ on a Banach space $\mathbf{X}$, a number $\theta\in(\omega, \pi ),$ and a function $f\in H^\infty_0(\Sigma_\theta),$ one may define an operator $f(A)\in \mathcal{B} ( \mathbf{X} )$ as follows. We let $\gamma \in (\omega,\theta)$ be an intermediate angle and consider the oriented contour $\Gamma_\gamma$ defined by
\begin{eqnarray*}
\Gamma_\gamma(t)= \left\{
\begin{aligned} & -te^{i\gamma},\quad t\in \mathbb{R}_-;\\[0.6mm]
& te^{-i\gamma},\quad t\in \mathbb{R}_+. \end{aligned} \right.
\end{eqnarray*}
In other words, $\Gamma_\gamma$ is the boundary of $\Sigma_\gamma$ oriented counterclockwise. For any $f\in H^\infty_0(\Sigma_\theta),$ we set
\begin{eqnarray}\label{eq:f(A)}
f(A)=\frac{1}{2\pi i}\int_{\Gamma_\gamma} f(z)R(z,A)\,dz.
\end{eqnarray}
By \eqref{eq:EsitSectorialOper} and \eqref{eq:EstiH_0funct}, this integral is absolutely convergent. Indeed, \eqref{eq:EstiH_0funct} implies that for any $\gamma \in (0, \theta),$ we have
\begin{eqnarray*}
\int_{\Gamma_{\gamma}} \Big | \frac{f(z)}{z} \Big |\, | d z| < \infty.
\end{eqnarray*}
Thus $f(A)$ is a well-defined element of $\mathcal{B} (\mathbf{X}).$ It follows from Cauchy's theorem that the definition of $f(A)$ does not depend on the choice of $\gamma.$ Furthermore, it can be shown that the mapping $f \mapsto f(A)$ is an algebra homomorphism from $H^\infty_0(\Sigma_\theta)$ into $\mathcal{B} ( \mathbf{X}).$
\begin{defi}\label{def:HinftyCalculus}
Let $A$ be a sectorial operator of type $\omega \in(0,\pi)$ on $\mathbf{X}$ and let $\theta \in (\omega, \pi)$. We say that $A$ admits a bounded $H^\infty(\Sigma_\theta)$ functional calculus if there is a constant $K>0$ such that
\begin{eqnarray}\label{eq:HinftyInequa}
\| f(A)\| \leq K \| f \|_{\infty,\theta}, \quad \forall f \in H^\infty_0(\Sigma_\theta).
\end{eqnarray}
\end{defi}
\begin{remark}\label{rk:HinftyFunctDuality}\rm
Suppose that $\mathbf{X}$ is reflexive and that $A$ is a sectorial operator of type $\omega \in (0, \pi)$ on $\mathbf{X}.$ Then $A^*$ is a sectorial operator of type $\omega$ on $\mathbf{X}^*$ as well. Given $0< \omega < \theta< \pi$ and any $f \in H^{\infty} (\Sigma_\theta),$ let us define
\begin{eqnarray*}
\tilde{f} (z) = \overline{f ( \bar{z} )},\quad \forall z \in \Sigma_\theta.
\end{eqnarray*}
Then $\tilde{f} \in H^{\infty} (\Sigma_\theta)$ and $\| \tilde{f} \|_{\infty, \theta} = \| f \|_{\infty, \theta}.$ Moreover,
\begin{eqnarray*}
\tilde{f} (A^*) = f(A)^*,\quad \forall f \in H^{\infty}_0 (\Sigma_\theta).
\end{eqnarray*}
Consequently, $A^*$ admits a bounded $H^\infty(\Sigma_\theta)$ functional calculus whenever $A$ does.
\end{remark}
\begin{remark}\label{rk:HinftyFunctExtension}\rm
For any $\lambda \in \mathbb{C} \setminus \overline{\Sigma}_\theta,$ define $R_\lambda (z) = (\lambda -z)^{-1}.$ Then $R_\lambda \in H^{\infty} (\Sigma_\theta).$ Set
\begin{eqnarray*}
\widetilde{H}^{\infty}_0 (\Sigma_\theta) = H^{\infty}_0 (\Sigma_\theta) \oplus \mathrm{span} \{ 1, R_{-1} \} \subset H^{\infty} (\Sigma_\theta).
\end{eqnarray*}
This is a subalgebra of $H^{\infty} (\Sigma_\theta).$ Now we define
\begin{eqnarray*}
u_A : \widetilde{H}^{\infty}_0 (\Sigma_\theta) \to \mathcal{B} (\mathbf{X})
\end{eqnarray*}
to be the linear mapping such that
\begin{eqnarray*}
u_A (1) = I_{\mathbf{X}},\quad u_A (R_{-1}) = - (1 + A)^{-1},
\end{eqnarray*}
and $u_A (f) = f(A)$ for any $f \in H^{\infty}_0 (\Sigma_\theta).$ Then, it is easy to check that $u_A$ is an algebra homomorphism and that for any $\lambda \in \mathbb{C} \setminus \overline{\Sigma}_\theta,$ we have
\begin{eqnarray*}
R_\lambda \in \widetilde{H}^{\infty}_0 (\Sigma_\theta) \quad \text{and}\quad u_A (R_\lambda) = R(\lambda, A).
\end{eqnarray*}
The map $u_A$ is said to be the holomorphic functional calculus of $A$ on $\widetilde{H}^{\infty}_0 (\Sigma_\theta).$
Evidently, $A$ admits a bounded $H^\infty(\Sigma_\theta)$ functional calculus if and only if the homomorphism $u_A$ is continuous.
\end{remark}
Let $A$ be a sectorial operator of type $\omega \in (0, \pi)$ and assume that $A$ has dense range. Let $\varphi (z) = z (1+z)^{-2},$ so that $\varphi (A) = A (1 + A)^{-2}.$ Then $\varphi (A)$ is one-one and has dense range (see e.g. \cite[Proposition 2.4]{LeM2007}). Following \cite{M1986, CDMY1996}, we can define an operator $f(A)$ for any $f \in H^{\infty} (\Sigma_\theta)$ whenever $\omega < \theta < \pi.$ Indeed, for each $f \in H^{\infty} (\Sigma_\theta)$ the product function $f \varphi$ belongs to $H^{\infty}_0 (\Sigma_\theta).$ Then, using the fact that $\varphi (A)$ is one-one, we set
\begin{eqnarray*}
f(A) = \varphi (A)^{-1} (f \varphi) (A)
\end{eqnarray*}
with domain
\begin{eqnarray*}
\mathrm{D} ( f(A)) = \big \{ x \in \mathbf{X}:\; (f \varphi) (A) (x) \in \mathrm{D} (A) \cap \mathrm{R} (A) \big \}.
\end{eqnarray*}
This domain contains $\mathrm{D} (A) \cap \mathrm{R} (A)$ and so is dense in $\mathbf{X}.$ Since $\varphi (A)$ is bounded, $f(A)$ is closed. Therefore, $f(A)$ is bounded if and only if $\mathrm{D} ( f(A)) = \mathbf{X}.$ Note, however, that $f(A)$ may be unbounded in general.
\begin{thm}\label{th:HinftyCalculus}{\rm (\cite{M1986, CDMY1996})}
Let $0< \omega < \theta < \pi$ and let $A$ be a sectorial operator of type $\omega$ on $\mathbf{X}$ with dense range. Then $f(A)$ is bounded for every $f \in H^{\infty} (\Sigma_\theta)$ if and only if $A$ admits a bounded $H^\infty(\Sigma_\theta)$ functional calculus. In that case, we have
\begin{eqnarray*}
\| f(A) \| \le K \| f \|_{\infty, \theta},\quad \forall f \in H^{\infty} (\Sigma_\theta),
\end{eqnarray*}
where the constant $K$ is the one appearing in \eqref{eq:HinftyInequa}.
\end{thm}
\begin{remark}\label{rk:HinftyCalculusReflexiveSpace}\rm
Let $A$ be a sectorial operator on $\mathbf{X}.$ If $\mathbf{X}$ is a reflexive Banach space, then $\mathbf{X}$ has the direct sum decomposition
\begin{eqnarray*}
\mathbf{X} = \mathrm{N} (A) \oplus \overline{\mathrm{R} (A)}
\end{eqnarray*}
(see \cite[Theorem 3.8]{CDMY1996}). Then $A$ is one-one if and only if $A$ has dense range. Moreover, the restriction of $A$ to $\overline{\mathrm{R} (A)}$ is a sectorial operator with dense range. Thus, changing $\mathbf{X}$ into $\overline{\mathrm{R} (A)},$ or changing $A$ into $A+P$ where $P$ is the projection onto $\mathrm{N} (A)$ whose kernel equals $\overline{\mathrm{R} (A)},$ one reduces to the case of a sectorial operator with dense range.
\end{remark}
\begin{remark}\label{rk:HinftyCalculusImaginaryPowers}\rm
Given $s \in \mathbb{R},$ let $f_s$ be the analytic function on $\mathbb{C} \setminus (-\infty, 0]$ defined by $f_s (z) = z^{\mathrm{i} s}.$ Then $f_s \in H^{\infty} (\Sigma_\theta)$ for any $\theta \in (0, \pi),$ with
\begin{eqnarray*}
\| f_s \|_{\infty, \theta} = e^{\theta |s|}.
\end{eqnarray*}
The imaginary powers of a sectorial operator $A$ with dense range may be defined by letting $A^{\mathrm{i}s}= f_s (A)$ for any $s \in \mathbb{R}.$ In particular, $A^{\mathrm{i}s}$ is bounded for any $s \in \mathbb{R}$ if $A$ admits a bounded $H^\infty(\Sigma_\theta)$ functional calculus for some $\theta \in (0, \pi)$ (see e.g. \cite[Section 5]{CDMY1996}).
\end{remark}
\subsection{Square functions on Hilbert spaces}
Square functions for sectorial operators on Hilbert spaces were introduced by McIntosh in \cite{M1986} and developed further, with applications to $H^{\infty}$ functional calculus, in \cite{MY1989}. We give a brief description of this theory in this subsection.
To this end, we let $\mathbb{H}$ be a Hilbert space throughout the paper. Let $A$ be a sectorial operator of type $\omega \in (0, \pi)$ on $\mathbb{H}.$ We set
\begin{eqnarray*}
H^{\infty}_0 (\Sigma_{\omega +}) = \bigcup_{\omega < \theta < \pi} H^{\infty}_0 (\Sigma_\theta).
\end{eqnarray*}
Then for any $F \in H^{\infty}_0 (\Sigma_{\omega +}),$ we set
\begin{eqnarray}\label{eq:SquareFunct}
\| x \|_F : = \left ( \int^{\infty}_0 \| F (t A ) x \|^2 \frac{d t}{ t} \right )^{\frac{1}{2}}, \quad \forall x \in \mathbb{H}.
\end{eqnarray}
In the above definition, $F(t A)$ means $F_t (A),$ where $F_t (z) = F (t z).$ By Lebesgue's dominated convergence theorem it is easy to check that, for any $x \in \mathbb{H},$ the mapping $t \mapsto F(t A)x$ is continuous, and hence $\| x \|_F$ is well defined. However, we may have $\| x \|_F = \infty$ for some $x.$ We call $\| x \|_F$ a square function associated with $A.$
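For orientation, we note a standard special case: if $A$ is moreover a one-one positive self-adjoint operator on $\mathbb{H}$ with spectral resolution $(E_\lambda)_{\lambda \ge 0},$ then Fubini's theorem and the substitution $s = t\lambda$ give
\begin{eqnarray*}
\| x \|^2_F = \int^{\infty}_0 \int^{\infty}_0 | F (t \lambda ) |^2 \, d \| E_\lambda x \|^2 \, \frac{d t}{t} = \Big ( \int^{\infty}_0 | F (s) |^2 \frac{d s}{s} \Big ) \| x \|^2, \quad \forall x \in \mathbb{H},
\end{eqnarray*}
so that every square function associated with $A$ is a constant multiple of the norm of $\mathbb{H}.$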
\begin{thm}\label{th:SquareFunctEquiv}{\rm (McIntosh/Yagi \cite{MY1989})}
Let $\mathbb{H}$ be a Hilbert space. Let $A$ be a sectorial operator of type $\omega \in (0, \pi)$ on $\mathbb{H},$ and suppose that $A$ is one-one. Given $\theta \in (\omega, \pi),$ let $F$ and $G$ be two nonzero functions in $H^{\infty}_0 (\Sigma_\theta).$
\begin{enumerate}[{\rm (i)}]
\item There is a constant $K>0$ such that for any $f \in H^{\infty} (\Sigma_\theta),$
\begin{eqnarray*}
\left ( \int^{\infty}_0 \| f (A) F (t A ) x \|^2 \frac{d t}{ t} \right )^{\frac{1}{2}} \le K \| f \|_{\infty, \theta} \| x \|_G, \quad \forall x \in \mathbb{H}.
\end{eqnarray*}
\item There is a constant $C>0$ such that
\begin{eqnarray*}
C^{-1} \| x \|_G \le \| x \|_F \le C \| x \|_G,\quad \forall x \in \mathbb{H}.
\end{eqnarray*}
\end{enumerate}
\end{thm}
Let $G \in H^{\infty}_0 (\Sigma_{\omega +}).$ We denote by $\| \cdot \|^*_G$ the square function for $G$ associated with the adjoint operator $A^*,$ that is,
\begin{eqnarray*}
\| x \|^*_G = \left ( \int^{\infty}_0 \| G (t A^* ) x \|^2 \frac{d t}{ t} \right )^{\frac{1}{2}}, \quad \forall x \in \mathbb{H}.
\end{eqnarray*}
The following theorem establishes the close connection between $H^{\infty}$ functional calculus and square functions on Hilbert spaces.
\begin{thm}\label{th:SquareFunctHinftyCalculus}{\rm (McIntosh \cite{M1986})}
Let $\mathbb{H}$ be a Hilbert space. Let $A$ be a sectorial operator of type $\omega \in (0, \pi)$ on $\mathbb{H},$ and suppose that $A$ is one-one. Given $\theta \in (\omega, \pi),$ the following assertions are equivalent:
\begin{enumerate}[{\rm (i)}]
\item $A$ has a bounded $H^\infty(\Sigma_\theta)$ functional calculus.
\item For some (equivalently, for any) pair $(F, G)$ of nonzero functions in $H^{\infty}_0 (\Sigma_{\omega +}),$ there is a constant $K>0$ such that
\begin{eqnarray*}
\| x\|_F \le K \| x \| \quad \text{and}\quad \|x\|^*_G \le K \|x\|
\end{eqnarray*}
for all $x \in \mathbb{H}.$
\item For some (equivalently, for any) nonzero function $F \in H^{\infty}_0 (\Sigma_{\omega +}),$ there is a constant $C>0$ such that
\begin{eqnarray*}
C^{-1} \| x \| \le \| x \|_F \le C \| x \|,\quad \forall x \in \mathbb{H}.
\end{eqnarray*}
\end{enumerate}
\end{thm}
Consequently, for a sectorial operator $A$ of type $\omega \in (0, \pi)$ on a Hilbert space $\mathbb{H},$ if $A$ has a bounded $H^\infty(\Sigma_\theta)$ functional calculus for some $\theta \in (\omega, \pi),$ then it has a bounded $H^\infty(\Sigma_\theta)$ functional calculus for all $\theta \in (\omega, \pi).$ In this case, we simply say that $A$ has a bounded $H^\infty$ functional calculus.
\begin{remark}\label{rk:SquareFunctHinftyCalculus}\rm
$A$ is said to satisfy a square function estimate if for some (equivalently, for any) $F \in H^{\infty}_0 (\Sigma_{\omega +}),$ there is a constant $C>0$ such that $\| x \|_F \le C \| x \|$ for all $x \in \mathbb{H}.$ As a consequence of Theorem \ref{th:SquareFunctHinftyCalculus} (and Remark \ref{rk:HinftyCalculusReflexiveSpace}), $A$ has a bounded $H^\infty$ functional calculus if and only if both $A$ and $A^*$ satisfy a square function estimate. Note that an example was given in \cite{LeM2003} of a sectorial operator $A$ which satisfies a square function estimate but does not have a bounded $H^\infty$ functional calculus.
\end{remark}
The goal of this paper is to extend Theorems \ref{th:SquareFunctEquiv} and \ref{th:SquareFunctHinftyCalculus} to the case where the square functions are replaced by the area integral functions defined below.
\section{Area integral functions}\label{AreaFunct}
First of all, we introduce the area integral functions associated with sectorial operators on Hilbert spaces.
\begin{defi}\label{df:AreaFunct}
Let $\omega \in (0, \pi)$ and $\theta \in (\omega, \pi).$ Let $A$ be a sectorial operator of type $\omega$ on a Hilbert space $\mathbb{H}.$ Given $0 < \alpha < \frac{\theta - \omega}{2},$ for any $F \in H^\infty_0(\Sigma_{\theta})$ we define
\begin{eqnarray}\label{eq:AreaFunct}
\| x \|_{F, \alpha}: = \left ( \int_{\Sigma_\alpha} \| F (z A ) x \|^2 \frac{d m(z)}{|z|^2} \right )^{\frac{1}{2}}, \quad \forall x \in \mathbb{H},
\end{eqnarray}
where $d m$ is the Lebesgue measure on $\mathbb{R}^2 \cong \mathbb{C}.$ Here, $F (z A )$ is understood as $F_z (A),$ where $F_z (w) = F (z w)$ for $w \in \Sigma_{\theta-\alpha}.$
We will call $\| x \|_{F, \alpha}$ the area integral function associated with $A.$
\end{defi}
Evidently, for any $z \in \Sigma_\alpha$ one has
\begin{eqnarray*}
F_z \in H^\infty_0(\Sigma_{\theta -\alpha}) \subset H^\infty_0(\Sigma_{\omega +}).
\end{eqnarray*}
Also, by Lebesgue's dominated convergence theorem, for any $x \in \mathbb{H}$ the mapping $z \mapsto F_z (A) x$ is continuous from $\Sigma_\alpha$ into $\mathbb{H}.$ Hence, $\| x \|_{F, \alpha}$ is well defined, but possibly $\| x \|_{F, \alpha} = \infty.$
The corresponding area integral function associated with $A^*$ is defined as
\begin{eqnarray}\label{eq:AreaFunctDualOperator}
\| x \|^*_{F, \alpha}: = \left ( \int_{\Sigma_\alpha} \| F (z A^* ) x \|^2 \frac{d m(z)}{|z|^2} \right )^{\frac{1}{2}}, \quad \forall x \in \mathbb{H}.
\end{eqnarray}
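For later orientation, we note that passing to polar coordinates $z = t e^{\mathrm{i} s},$ for which $d m (z) = t \, d t \, d s,$ yields
\begin{eqnarray*}
\| x \|^2_{F, \alpha} = \int^{\alpha}_{-\alpha} \int^{\infty}_0 \| F ( t e^{\mathrm{i} s} A ) x \|^2 \, \frac{d t}{t} \, d s, \quad \forall x \in \mathbb{H};
\end{eqnarray*}
that is, the square of the area integral function is obtained by integrating, over $s \in (-\alpha, \alpha),$ the squares of the square functions associated with the rotated functions $F_{e^{\mathrm{i} s}}.$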
Our main results read as follows.
\begin{thm}\label{th:AreaIntFunctEquiv}
Let $\mathbb{H}$ be a Hilbert space. Let $A$ be a sectorial operator of type $\omega \in (0, \pi)$ on $\mathbb{H},$ and suppose that $A$ is one-one. Given $\theta \in (\omega, \pi)$ and $0 < \alpha, \beta < \frac{\theta - \omega}{2},$ let $F$ and $G$ be two nonzero functions in $H^{\infty}_0 (\Sigma_\theta).$
\begin{enumerate}[{\rm (i)}]
\item There is a constant $K>0$ such that for any $f \in H^{\infty} (\Sigma_\theta),$
\begin{eqnarray*}
\left ( \int_{\Sigma_\alpha} \| f (A) F (z A ) x \|^2 \frac{d m(z)}{|z|^2} \right )^{\frac{1}{2}} \le K \| f \|_{\infty, \theta} \| x \|_{G, \beta}, \quad \forall x \in \mathbb{H}.
\end{eqnarray*}
\item There is a constant $C>0$ such that
\begin{eqnarray*}
C^{-1} \| x \|_{G, \beta} \le \| x \|_{F, \alpha} \le C \| x \|_{G, \beta},\quad \forall x \in \mathbb{H}.
\end{eqnarray*}
\end{enumerate}
\end{thm}
\begin{thm}\label{th:AreIntFunctHinftyCalculus}
Let $\mathbb{H}$ be a Hilbert space. Let $A$ be a sectorial operator of type $\omega \in (0, \pi)$ on $\mathbb{H},$ and suppose that $A$ is one-one. Given $\theta \in (\omega, \pi)$ and $0 < \alpha < \frac{\theta - \omega}{2},$ the following assertions are equivalent:
\begin{enumerate}[{\rm (i)}]
\item $A$ has a bounded $H^\infty(\Sigma_\theta)$ functional calculus.
\item For some (equivalently, for any) pair $(F, G)$ of nonzero functions in $H^{\infty}_0 (\Sigma_{(\omega + \alpha) +}),$ there is a constant $K>0$ such that
\begin{eqnarray*}
\| x \|_{F, \alpha} \le K \| x \| \quad \text{and}\quad \| x \|^*_{G, \alpha} \le K \|x\|
\end{eqnarray*}
for all $x \in \mathbb{H}.$
\item For some (equivalently, for any) nonzero function $F \in H^{\infty}_0 (\Sigma_{(\omega + \alpha) +}),$ there is a constant $C>0$ such that
\begin{eqnarray*}
C^{-1} \| x \| \le \| x \|_{F, \alpha} \le C \| x \|,\quad \forall x \in \mathbb{H}.
\end{eqnarray*}
\end{enumerate}
\end{thm}
\begin{ex}
Just as for the square functions used in Stein's book \cite{Stein1970}, area integral functions associated with sectorial operators arise naturally in harmonic analysis. We mention a few classical ones for illustration. For any $k \ge 1,$ let
\begin{eqnarray*}
G_k (z) = z^k e^{-z},\quad \forall z \in \mathbb{C}.
\end{eqnarray*}
Then $G_k \in H^{\infty}_0 (\Sigma_{\omega +})$ for any $\omega \in (0, \frac{\pi}{2}).$ Hence, if $A$ is a sectorial operator of type $\omega$ on a Hilbert space for some $\omega \in (0, \frac{\pi}{2}),$ then $G_k$ gives rise to area integral functions associated with $A.$ Indeed, if $(T_t)_{t \ge 0}$ is the bounded analytic semigroup generated by $- A,$ we have
\begin{eqnarray*}
G_k (z A)x = z^k A^k e^{-z A} x = (-z)^k \frac{\partial^k}{\partial z^k} (T_z x), \quad z \in \Sigma_{\frac{\pi}{2}- \omega} \; \text{and}\; x \in \mathbb{H}.
\end{eqnarray*}
Hence the corresponding area integral function is
\begin{eqnarray*}
\| x \|_{G_k, \alpha} = \left ( \int_{\Sigma_\alpha} |z|^{2(k -1)} \Big \| \frac{\partial^k}{\partial z^k} (T_z x) \Big \|^2 d m (z) \right )^{\frac{1}{2}}, \quad \forall x \in \mathbb{H},
\end{eqnarray*}
for any $0 < \alpha < \frac{\pi}{2}- \omega.$ We thus have that
\begin{eqnarray*}
\| x \|_{G_k, \alpha} \thickapprox \| x \|_{G_m, \beta},\quad \forall x \in \mathbb{H},
\end{eqnarray*}
for any $k, m \ge 1$ and any $0< \alpha, \beta < \frac{\pi}{2}- \omega.$
\end{ex}
\section{Proofs of main results}\label{pf}
This section is devoted to the proofs of Theorems \ref{th:AreaIntFunctEquiv} and \ref{th:AreIntFunctHinftyCalculus}. Our proofs require two technical variants of the square and area integral functions $\| x \|_F$ and $\| x \|_{F, \alpha}.$
Let $A$ be a sectorial operator of type $\omega \in (0, \pi)$ on $\mathbb{H}.$ Let $\theta \in (\omega, \pi)$ and $0 < \alpha < \frac{\theta - \omega}{2}.$ Given $\epsilon > 0$ and $\delta>0,$ we set, for any $F \in H^{\infty}_0 (\Sigma_{\theta}),$
\begin{eqnarray}\label{eq:SuqareFunctVariant}
G_\epsilon (F)(x): = \left ( \int^{\infty}_\epsilon \| F (t A ) x \|^2 \frac{d t}{ t} \right )^{\frac{1}{2}}, \quad \forall x \in \mathbb{H},
\end{eqnarray}
and
\begin{eqnarray}\label{eq:AreaIntFunctVariant}
S_{\alpha, \delta} ( F ) (x) : = \left ( \int_{\Sigma_{\alpha, \delta}} \| F (z A ) x \|^2 \frac{d m(z)}{|z|^2} \right )^{\frac{1}{2}}, \quad \forall x \in \mathbb{H},
\end{eqnarray}
where $\Sigma_{\alpha, \delta} = \{z \in \mathbb{C}:\; |z| > \delta,\; | \mathrm{Arg} (z)| < \alpha \}.$ Evidently,
\begin{eqnarray*}
\| x \|_F = \lim_{\epsilon \to 0} G_\epsilon (F)(x) \quad \text{and} \quad \| x \|_{F, \alpha} = \lim_{\delta \to 0} S_{\alpha, \delta} ( F ) (x).
\end{eqnarray*}
\begin{lem}\label{le:SquareAreaFunct} For any $\epsilon >0,$
\begin{eqnarray*}
G_\epsilon (F)(x) \le \frac{2}{\sqrt{\pi \sin \alpha}} S_{\alpha, \epsilon (1- \sin \alpha)} ( F ) (x),\quad \forall x \in \mathbb{H}.
\end{eqnarray*}
Consequently, for every $0 < \alpha < \frac{\theta - \omega}{2}$ we have
\begin{eqnarray}\label{eq:Square<AreaFunct}
\| x \|_F \le \frac{2}{\sqrt{\pi \sin \alpha}}\| x \|_{F, \alpha},\quad \forall x \in \mathbb{H}.
\end{eqnarray}
\end{lem}
\begin{proof}
Given $t > \epsilon,$ let $D_t$ be the disc in $\mathbb{R}^2 \cong \mathbb{C}$ centered at $(t, 0)$ and tangent to the boundary of $\Sigma_{\alpha, \epsilon (1- \sin \alpha )};$ its radius is $t \sin \alpha.$ Since the mapping $z \mapsto F(z A) x$ is analytic in $\Sigma_\alpha,$ the mean value property gives
\begin{eqnarray*}
F (t A) x = \frac{1}{ (\pi \sin^2 \alpha) t^2}\int_{D_t} F (z A) x \, d m (z).
\end{eqnarray*}
Consequently, by the Cauchy--Schwarz inequality,
\begin{eqnarray*}
\| F (t A) x \|^2 \le \frac{C_\alpha}{ t^2} \int_{D_t} \| F (z A) x \|^2 d m (z)
\end{eqnarray*}
with $C_\alpha = \frac{1}{\pi \sin^2 \alpha}.$ Then
\begin{eqnarray*}
[ G_\epsilon (F)(x)]^2 \le C_\alpha \int^{\infty}_\epsilon \int_{D_t} \| F (z A) x \|^2 \frac{d m (z) \, d t}{t^3}.
\end{eqnarray*}
Moreover, since $\frac{|z|}{1+ \sin \alpha} \le t \le \frac{|z|}{1 - \sin \alpha}$ for any $z \in D_t,$ we have
\begin{eqnarray*}\begin{split}
[ G_\epsilon (F)(x)]^2 & \le C_\alpha \int_{\Sigma_{\alpha, \epsilon (1- \sin \alpha)}} \| F( z A) x\|^2 \int^{\frac{|z|}{1- \sin \alpha}}_{\frac{|z|}{1 + \sin \alpha}} \frac{d t}{t^3} \, d m(z)\\
& = 2 C_\alpha \sin \alpha \int_{\Sigma_{\alpha, \epsilon (1 - \sin \alpha )}} \| F( z A) x\|^2 \frac{ d m (z)}{ |z|^2}\\
& = \frac{2}{\pi \sin \alpha} [ S_{\alpha, \epsilon (1- \sin \alpha)} (F)(x)]^2
\le \frac{4}{\pi \sin \alpha} [ S_{\alpha, \epsilon (1- \sin \alpha)} (F)(x)]^2.
\end{split}\end{eqnarray*}
This completes the proof.
\end{proof}
\
{\it Proof of Theorem \ref{th:AreaIntFunctEquiv}.}\; Note that the second assertion follows from the first one. Indeed, applying (i) with the constant function $f=1$ yields the estimate $\| x \|_{F, \alpha} \le K \| x \|_{G, \beta}.$ Then (ii) follows by switching the roles of $F$ and $G$ as well as those of $\alpha$ and $\beta.$
To prove (i), note that
\begin{eqnarray*}
\int_{\Sigma_\alpha} \| f (A) F (z A ) x \|^2 \frac{d m(z)}{|z|^2} = \int^\alpha_{-\alpha} d s \int^{\infty}_0 \| f (A) F (t e^{\mathrm{i} s} A ) x \|^2 \frac{d t}{t}.
\end{eqnarray*}
By the proof of Theorem \ref{th:SquareFunctEquiv} (i) (see e.g. \cite{ADM1996, MY1989}), there exists a constant $K>0$ such that for any $f \in H^{\infty} (\Sigma_\theta)$ and any $s \in (- \alpha, \alpha),$
\begin{eqnarray*}
\left ( \int^{\infty}_0 \| f (A) F (t e^{\mathrm{i} s}A ) x \|^2 \frac{d t}{ t} \right )^{\frac{1}{2}} \le K \| f \|_{\infty, \theta} \| x \|_G, \quad \forall x \in \mathbb{H}.
\end{eqnarray*}
Thus, we deduce that
\begin{eqnarray}\label{eq:Area<SquareFunct}
\left ( \int_{\Sigma_\alpha} \| f (A) F (z A ) x \|^2 \frac{d m(z)}{|z|^2} \right )^{\frac{1}{2}} \le \sqrt{2 \alpha}\, K \| f \|_{\infty, \theta} \| x \|_G, \quad \forall x \in \mathbb{H}.
\end{eqnarray}
By Lemma \ref{le:SquareAreaFunct}, applied to $G$ and the angle $\beta,$ we conclude (i).
$\Box$
\begin{rk}\label{rk:Sqare=AreaFunct}
Taking $f =1$ in \eqref{eq:Area<SquareFunct}, we obtain that
\begin{eqnarray*}
\| x \|_{F, \alpha} \le \sqrt{2 \alpha} K \| x \|_G,\quad \forall x \in \mathbb{H}.
\end{eqnarray*}
Combining this inequality with \eqref{eq:Square<AreaFunct} implies that
\begin{equation}\label{eq:Square=AreaFunct}
\| x \|_{F, \alpha} \thickapprox \| x \|_G,\quad \forall x \in \mathbb{H}.
\end{equation}
\end{rk}
\
{\it Proof of Theorem \ref{th:AreIntFunctHinftyCalculus}.}\; This is a straightforward consequence of Theorem \ref{th:SquareFunctHinftyCalculus} and the equivalence relationship \eqref{eq:Square=AreaFunct} between the square and area integral functions.
$\Box$
\begin{thebibliography}{99}
\bibitem{ADM1996} D. Albrecht, X. T. Duong, and A. McIntosh,
Operator theory and harmonic analysis,
{\it Proc. Centre Math. Analysis, Canberra} {\bf 14} (1996), 77--136.
\bibitem{CS} Z. Chen and M. Sun,
Area integral functions for sectorial operators on $L_p$ spaces,
in progress.
\bibitem{CDMY1996} M. Cowling, I. Doust, A. McIntosh, and A. Yagi,
Banach space operators with a bounded $H^{\infty}$ functional calculus,
{\it J. Austr. Math. Soc. Series A} {\bf 60} (1996), 51--89.
\bibitem{Gold1985} J. A. Goldstein,
{\it Semigroups of linear operators and applications,}
Oxford University Press, New York, 1985.
\bibitem{KW2001} N. Kalton and L. Weis,
The $H^{\infty}$ calculus and sums of closed operators,
{\it Math. Annalen} {\bf 321} (2001), 319--345.
\bibitem{LeM2003} C. Le Merdy,
The Weiss conjecture for bounded analytic semigroups,
{\it J. London Math. Soc.} {\bf 67} (2003), 715--738.
\bibitem{LeM2007} C. Le Merdy,
Square functions, bounded analytic semigroups, and applications,
In: {\it Perspectives in Operator Theory,}
Banach Center Publ. {\bf 75}, Polish Acad. Sci., Warsaw, 2007, 191--220.
\bibitem{M1986} A. McIntosh,
Operators which have an $H^{\infty}$ functional calculus,
{\it Proc. Centre Math. Analysis, Canberra} {\bf 14} (1986), 210--231.
\bibitem{MY1989} A. McIntosh and A. Yagi,
Operators of type $\omega$ without a bounded $H^{\infty}$ functional calculus,
{\it Proc. Centre Math. Analysis, Canberra} {\bf 24} (1989), 159--172.
\bibitem{Stein1970} E. M. Stein,
{\it Topics in Harmonic Analysis Related to the Littlewood-Paley Theory,}
Princeton University Press, Princeton, 1970.
\end{thebibliography}
\end{document}
\begin{document}
\title[On period polynomials of degree $2^m$]{On period polynomials of degree $\bm{2^m}$\\ for finite fields}
\author{Ioulia N. Baoulina}
\address{Department of Mathematics, Moscow State Pedagogical University, Krasnoprudnaya str. 14, Moscow 107140, Russia}
\email{jbaulina@mail.ru}
\date{}
\maketitle
\begin{abstract}
We obtain explicit factorizations of reduced period polynomials of degree $2^m$, $m\ge 4$, for finite fields of characteristic $p\equiv 3\text{\;or\;}5\pmod{8}$. This extends the results of G.~Myerson, who considered the cases $m=1$ and $m=2$, and S.~Gurak, who studied the case $m=3$.
\end{abstract}
\keywords{{\it Keywords}: Period polynomial; cyclotomic period; $f$-nomial period; reduced period polynomial; Gauss sum; Jacobi sum; factorization.}
\subjclass{2010 Mathematics Subject Classification: 11L05, 11T22, 11T24}
\thispagestyle{empty}
\section{Introduction}
Let $\mathbb F_q$ be a finite field of characteristic~$p$ with $q=p^s$ elements, $\mathbb F_q^*=\mathbb F_q^{}\setminus\{0\}$, and let $\gamma$ be a fixed generator of the cyclic group $\mathbb F_q^*$. By ${\mathop{\rm Tr}\nolimits}:\mathbb F_q\rightarrow\mathbb F_p$ we denote the trace mapping, that is, ${\mathop{\rm Tr}\nolimits}(x)=x+x^p+x^{p^2}+\dots+x^{p^{s-1}}$ for $x\in\mathbb F_q$. Let $e$ and $f$ be positive integers such that $q=ef+1$. Denote by $\mathcal{H}$ the subgroup of $e$-th powers in $\mathbb F_q^*$. For any positive integer $n$, write $\zeta_n=\exp(2\pi i/n)$.
The cyclotomic (or $f$-nomial) periods of order $e$ for $\mathbb F_q$ with respect to $\gamma$ are defined by
$$
\eta_k=\sum_{x\in\gamma^k\mathcal{H}}\zeta_p^{{\mathop{\rm Tr}\nolimits}(x)}=\sum_{h=0}^{f-1}\zeta_p^{{\mathop{\rm Tr}\nolimits}(\gamma^{eh+k})},\quad k=0,1,\dots,e-1.
$$
The period polynomial of degree $e$ for $\mathbb F_q$ is the polynomial
$$
P_e(X)=\prod_{k=0}^{e-1}(X-\eta_k).
$$
The reduced cyclotomic (or reduced $f$-nomial) periods of order $e$ for $\mathbb F_q$ with respect to $\gamma$ are defined by
$$
\eta_k^*=\sum_{x\in\mathbb F_q}\zeta_p^{{\mathop{\rm Tr}\nolimits}(\gamma^k x^e)}=1+e\eta_k,\quad k=0,1,\dots,e-1,
$$
and the reduced period polynomial of degree $e$ for $\mathbb F_q$ is
$$
P_e^*(X)=\prod_{k=0}^{e-1}(X-\eta_k^*).
$$
The polynomials $P_e(X)$ and $P_e^*(X)$ have integer coefficients and are independent of the choice of generator~$\gamma$. They are irreducible over the rationals when $s=1,$ but not necessarily irreducible when $s>1$. More precisely, $P_e(X)$ and $P_e^*(X)$ split over the rationals into $\delta=\gcd(e,(q-1)/(p-1))$ factors of degree~$e/\delta$ (not necessarily distinct), and each of these factors is irreducible or a power of an irreducible polynomial. Furthermore, the polynomials $P_e(X)$ and $P_e^*(X)$ are irreducible over the rationals if and only if $\gcd(e,(q-1)/(p-1))=1$. For proofs of these facts, see~\cite{M}.
In the case $s=1$, the period polynomials were determined explicitly by Gauss for ${e\in\{2, 3, 4\}}$ and by many others for certain small values of~$e$. In the general case, Myerson~\cite{M} derived the explicit formulas for $P_e(X)$ and $P_e^*(X)$ when $e\in\{2,3,4\}$, and also found their factorizations into irreducible polynomials over the rationals. Gurak~\cite{G3} obtained similar results for $e\in\{6,8,12,24\}$; see also \cite{G2} for the case $s=2$, $e\in\{6,8,12\}$. Note that if $-1$ is a power of $p$ modulo $e$, then the period polynomials can also be easily obtained. Indeed, if $e>2$ and $e\mid(p^{\ell}+1)$, with $\ell$ chosen minimal, then $2\ell\mid s$, and \cite[Proposition~20]{M} yields
$$
P_e^*(X)=(X+(-1)^{s/2\ell}(e-1)q^{1/2})(X-(-1)^{s/2\ell}q^{1/2})^{e-1}.
$$
Baumert and Mykkeltveit~\cite{BM} found the values of cyclotomic periods in the case when $e>3$ is a prime, $e\equiv 3\pmod{4}$ and $p$ generates the quadratic residues modulo~$e$; see also \cite[Proposition~21]{M}.
It is seen immediately from the definitions that $P_e(X)=e^{-e}P_e^*(eX+1)$, and so it suffices to factorize only $P_e^*(X)$.
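As a concrete illustration (not part of the paper's argument), the periods and the reduced period polynomial can be computed numerically for a small prime field. The Python sketch below takes $q=p=13$, $e=4$ and the primitive root $\gamma=2$, and checks that $\eta_k^*=1+e\eta_k$ and that $P_e^*(X)$ has integer coefficients; the specific choices $p=13$, $\gamma=2$ are ours, made only for this example.

```python
import cmath

# Illustrative sketch: cyclotomic periods of order e = 4 for F_13 (s = 1,
# so the trace map is the identity), with generator gamma = 2.
p, e = 13, 4
f = (p - 1) // e
gamma = 2  # a primitive root modulo 13
zeta_p = cmath.exp(2j * cmath.pi / p)

# f-nomial periods eta_k and reduced periods eta_k^*
eta = [sum(zeta_p ** pow(gamma, e * h + k, p) for h in range(f))
       for k in range(e)]
eta_star = [sum(zeta_p ** ((pow(gamma, k, p) * x ** e) % p) for x in range(p))
            for k in range(e)]

# eta_k^* = 1 + e * eta_k
assert all(abs(eta_star[k] - (1 + e * eta[k])) < 1e-9 for k in range(e))

def poly_from_roots(roots):
    """Coefficients of prod (X - r), lowest degree first."""
    coeffs = [1 + 0j]
    for r in roots:
        new = [0j] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c      # contribution of c * X
            new[i] -= r * c      # contribution of -r * c
        coeffs = new
    return coeffs

P_star = poly_from_roots(eta_star)   # reduced period polynomial P_4^*(X)
# its coefficients are (real) integers
assert all(abs(c.imag) < 1e-6 and abs(c.real - round(c.real)) < 1e-6
           for c in P_star)
```

Rounding the real parts of the coefficients recovers the integer polynomial $P_4^*(X)$ exactly.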
The aim of this paper is to obtain the explicit factorizations of the reduced period polynomials of degree $2^m$ with $m\ge 4$ in the case that $p\equiv 3\text{\;or\;}5\pmod{8}$. Notice that in this case $\mathop{\rm ord}_2(q-1)=\mathop{\rm ord}_2(p^s-1)=\mathop{\rm ord}_2 s+2$. Hence, for $p\equiv 3\pmod{8}$,
$$
\gcd(2^m,(q-1)/(p-1))=\begin{cases}
2^m&\text{if $2^{m-1}\mid s$,}\\
2^{m-1}&\text{if $2^{m-2}\parallel s$.}
\end{cases}
$$
Appealing to \cite[Theorem~4]{M}, we conclude that in the case when $2^{m-1}\mid s$, $P_{2^m}^*(X)$ splits over the rationals into linear factors. If $2^{m-2}\parallel s$, then $P_{2^m}^*(X)$ splits into irreducible polynomials of degrees at most 2. Similarly, for $p\equiv 5\pmod{8}$,
$$
\gcd(2^m,(q-1)/(p-1))=\begin{cases}
2^m&\text{if $2^m\mid s$,}\\
2^{m-1}&\text{if $2^{m-1}\parallel s$,}\\
2^{m-2}&\text{if $2^{m-2}\parallel s$.}
\end{cases}
$$
Using \cite[Theorem~4]{M} again, we see that $P_{2^m}^*(X)$ splits over the rationals into linear factors if $2^m\mid s$, splits into linear and quadratic irreducible factors if $2^{m-1}\parallel s$, and splits into linear, quadratic and biquadratic irreducible factors if $2^{m-2}\parallel s$. Our main results are Theorems~\ref{t1} and \ref{t2}, which give the explicit factorizations of $P_{2^m}^*(X)$ in the cases $p\equiv 3\pmod{8}$ and $p\equiv 5\pmod{8}$, respectively. All the evaluations in Sections~\ref{s3} and \ref{s4} are effected in terms of parameters occurring in quadratic partitions of some powers of~$p$.
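The 2-adic valuation identity used above is easy to test numerically. The following Python sketch (an illustration of ours, not part of the paper) verifies $\mathop{\rm ord}_2(p^s-1)=\mathop{\rm ord}_2 s+2$ for several primes $p\equiv 3\text{ or }5\pmod 8$ and exponents $s$ divisible by $4$, as in the setting $2^{m-2}\mid s$ with $m\ge 4$.

```python
def ord2(n):
    """2-adic valuation of a positive integer n."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

# check ord_2(p^s - 1) = ord_2(s) + 2 for p = 3 or 5 (mod 8) and 4 | s
checked = [(p, s) for p in (3, 11, 19, 5, 13, 29) for s in (4, 8, 12, 16)
           if ord2(p ** s - 1) == ord2(s) + 2]
assert len(checked) == 24    # the identity holds for all 24 pairs tested
```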
\section{Preliminary Lemmas}
\label{s2}
In the remainder of the paper, we assume that $p$ is an odd prime. Let $\psi$ be a nontrivial character on $\mathbb F_q$. We extend $\psi$ to all of $\mathbb F_q$ by setting $\psi(0)=0$. The Gauss sum
$G(\psi)$ over $\mathbb F_q$ is defined by
$$
G(\psi)=\sum_{x\in\mathbb F_q}\psi(x)\zeta_p^{{\mathop{\rm
Tr}\nolimits}(x)}.
$$
Gauss sums occur in the Fourier expansion of a reduced cyclotomic period.
\begin{lemma}
\label{l1}
Let $\psi$ be a character of order $e>1$ on $\mathbb F_q$ such that $\psi(\gamma)=\zeta_e$. Then for $k=0,1,\dots, e-1$,
$$
\eta_k^*=\sum_{j=1}^{e-1} G(\psi^j)\zeta_e^{-jk}.
$$
\end{lemma}
\begin{proof}
It follows from \cite[Theorem~1.1.3 and Equation~(1.1.4)]{BEW}.
\end{proof}
In the next three lemmas, we record some properties of Gauss sums which will be used throughout this paper. By $\rho$ we denote the quadratic character on $\mathbb F_q$ ($\rho(x)=+1, -1, 0$ according as $x$ is a square, a non-square or zero in $\mathbb F_q$).
\begin{lemma}
\label{l2}
Let $\psi$ be a nontrivial character on $\mathbb F_q$
with $\psi\ne\rho$. Then
\begin{itemize}
\item[\textup{(a)}]
$G(\psi)G(\bar\psi)=\psi(-1)q$;
\item[\textup{(b)}]
$G(\psi)=G(\psi^p)$;
\item[\textup{(c)}]
$G(\psi)G(\psi\rho)=\bar\psi(4)G(\psi^2)G(\rho)$.
\end{itemize}
\end{lemma}
\begin{proof}
See \cite[Theorems~1.1.4(a, d) and 11.3.5]{BEW} or \cite[Theorem~5.12(iv, v) and Corollary~5.29]{LN}.
\end{proof}
\begin{lemma}
\label{l3}
We have
$$
G(\rho)=
\begin{cases}
(-1)^{s-1}q^{1/2}&\text{if\,\, $p\equiv 1\pmod{4}$,}\\
(-1)^{s-1}i^s q^{1/2}&\text{if\,\, $p\equiv 3\pmod 4$.}
\end{cases}
$$
\end{lemma}
\begin{proof}
See \cite[Theorem~11.5.4]{BEW} or \cite[Theorem~5.15]{LN}.
\end{proof}
\begin{lemma}
\label{l4}
Let $p\equiv 3\pmod{8}$, $2\mid s$ and $\psi$ be a biquadratic character on $\mathbb F_q$. Then $G(\psi)=-q^{1/2}$.
\end{lemma}
\begin{proof}
It is a special case of \cite[Theorem~11.6.3]{BEW}.
\end{proof}
Let $\psi$ be a nontrivial character on $\mathbb F_q$. The Jacobi sum $J(\psi)$ over $\mathbb F_q$ is defined by
$$
J(\psi)=\sum_{x\in\mathbb F_q}\psi(x)\psi(1-x).
$$
The following lemma gives a relationship between Gauss sums and Jacobi sums.
\begin{lemma}
\label{l5}
Let $\psi$ be a nontrivial character on $\mathbb F_q$
with $\psi\ne\rho$. Then
$$
G(\psi)^2=G(\psi^2)J(\psi).
$$
\end{lemma}
\begin{proof}
See \cite[Theorem~2.1.3(a)]{BEW} or \cite[Theorem~5.21]{LN}.
\end{proof}
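Lemma~\ref{l5} can likewise be confirmed numerically in a small case. The self-contained Python sketch below (an illustration of ours) checks $G(\psi)^2=G(\psi^2)J(\psi)$ for a quartic character on $\mathbb F_{13}$, and the standard consequence $|J(\psi)|^2=q$ when $\psi$ and $\psi^2$ are nontrivial.

```python
import cmath

# Illustrative sketch: check G(psi)^2 = G(psi^2) J(psi) for a quartic
# character on F_13, built from a discrete-logarithm table (gamma = 2).
p, gamma = 13, 2
dlog = {pow(gamma, a, p): a for a in range(p - 1)}
zeta_p = cmath.exp(2j * cmath.pi / p)

def psi(x):          # quartic character with psi(gamma) = i, psi(0) = 0
    return 0 if x % p == 0 else 1j ** dlog[x % p]

def psi_sq(x):
    return psi(x) ** 2

def gauss(chi):
    return sum(chi(x) * zeta_p ** x for x in range(1, p))

def jacobi(chi):
    # chi(0) = 0 takes care of the x = 0 and x = 1 terms
    return sum(chi(x) * chi((1 - x) % p) for x in range(p))

assert abs(gauss(psi) ** 2 - gauss(psi_sq) * jacobi(psi)) < 1e-9
# |J(psi)|^2 = q here, since psi and psi^2 are both nontrivial
assert abs(abs(jacobi(psi)) ** 2 - p) < 1e-9
```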
Let $\psi$ be a character on $\mathbb F_q$. The lift $\psi'$ of
the character $\psi$ from $\mathbb F_{q^{\vphantom{r}}}$ to the
extension field $\mathbb F_{q^r}$ is given by
$$
\psi'(x)=\psi({\mathop{\rm N}}_{\mathbb F_{q^r}/\mathbb
F_{q^{\vphantom{r}}}}(x)), \qquad x\in\mathbb F_{q^r},
$$
where ${\mathop{\rm N}}_{\mathbb F_{q^r}/\mathbb
F_{q^{\vphantom{r}}}}(x)=x\cdot x^q\cdot x^{q^2}\cdots
x^{q^{r-1}}=x^{(q^r-1)/(q-1)}$ is the norm of $x$ from
$\mathbb F_{q^r}$ to $\mathbb F_{q^{\vphantom{r}}}$.
\begin{lemma}
\label{l6}
Let $\psi$ be a character on $\mathbb
F_{q^{\vphantom{r}}}$ and let $\psi'$ denote the lift of $\psi$
from $\mathbb F_{q^{\vphantom{r}}}$ to $\mathbb F_{q^r}$. Then
\begin{itemize}
\item[\textup{(a)}]
$\psi'$ is a character on $\mathbb F_{q^r}$;
\item[\textup{(b)}]
a character $\lambda$ on $\mathbb F_{q^r}$ equals the lift $\psi'$ of some character $\psi$ on $\mathbb F_q$ if and only if the order of $\lambda$ divides $q-1$;
\item[\textup{(c)}]
$\psi'$ and $\psi$ have the same order.
\end{itemize}
\end{lemma}
\begin{proof}
See \cite[Theorem~11.4.4(a, c, e)]{BEW}.
\end{proof}
The following lemma, which is due to Davenport and Hasse, connects a Gauss sum and its lift.
\begin{lemma}
\label{l7}
Let $\psi$ be a nontrivial character on $\mathbb F_q$
and let $\psi'$ denote the lift of $\psi$ from $\mathbb F_{q^{}}$
to $\mathbb F_{q^r}$. Then
$$
G(\psi')=(-1)^{r-1}G(\psi)^r.
$$
\end{lemma}
\begin{proof}
See \cite[Theorem~11.5.2]{BEW} or \cite[Theorem~5.14]{LN}.
\end{proof}
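The Davenport--Hasse relation can be tested directly in the smallest interesting case: lifting the quadratic character from $\mathbb F_3$ to $\mathbb F_9$, where the lift is again the quadratic character by Lemma~\ref{l6}(c). The Python sketch below (our illustration, not part of the paper) models $\mathbb F_9=\mathbb F_3[t]/(t^2+1)$ as pairs $(a,b)=a+bt$ and checks $G(\rho')=-G(\rho)^2$, i.e.\ the case $r=2$.

```python
import cmath

# Illustrative sketch: Davenport-Hasse for the quadratic character, r = 2,
# q = 3. Elements of F_9 = F_3[t]/(t^2 + 1) are pairs (a, b) = a + b*t.
p = 3
zeta_p = cmath.exp(2j * cmath.pi / p)

def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)   # using t^2 = -1

def fpow(u, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

nonzero = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]
squares = {mul(u, u) for u in nonzero}
rho_lift = lambda u: 1 if u in squares else -1   # quadratic character on F_9

def trace(u):                    # Tr(u) = u + u^3, an element of F_3
    v = fpow(u, 3)
    return (u[0] + v[0]) % p     # the t-component of u + u^3 vanishes

G9 = sum(rho_lift(u) * zeta_p ** trace(u) for u in nonzero)
G3 = sum((1 if x == 1 else -1) * zeta_p ** x for x in (1, 2))  # rho on F_3

# Davenport-Hasse with r = 2: G(rho') = -G(rho)^2
assert abs(G9 - (-(G3 ** 2))) < 1e-9
```

Here $G(\rho)=\zeta_3-\zeta_3^2=i\sqrt 3$, so $-G(\rho)^2=3$, in agreement with Lemma~\ref{l3} for $p=3$, $s=2$.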
Now we turn to the case $p\equiv 3\text{\;or\;}5\pmod{8}$. We recall a few facts which were established in our earlier paper~\cite{B2} in more general settings.
\begin{lemma}
\label{l8}
Let $p\equiv 3\text{\;or\;}5\pmod{8}$ and
$\psi$ be a character of order~$2^r$ on $\mathbb F_q$, where
$$
r\ge\begin{cases}
4&\text{if $p\equiv 3\pmod{8}$,}\\
3&\text{if $p\equiv 5\pmod{8}$.}
\end{cases}
$$
Then $G(\psi)=G(\psi\rho)$.
\end{lemma}
\begin{proof}
See \cite[Lemma 2.13]{B2}.
\end{proof}
\begin{lemma}
\label{l9}
Let $p\equiv 3\text{\;or\;}5\pmod{8}$, $r\ge 3$, and
$\psi$ be a character of order~$2^r$ on $\mathbb F_q$. Then
$$
\psi(4)=
\begin{cases}
1 & \text{if $p\equiv 3\pmod{8}$,}\\
(-1)^{s/2^{r-2}}&\text{if $p\equiv 5\pmod{8}$.}
\end{cases}
$$
\end{lemma}
\begin{proof}
See \cite[Lemma 2.16]{B2}.
\end{proof}
\begin{lemma}
\label{l10}
Let $p\equiv 3\text{\;or\;}5\pmod{8}$, and let $n\ge 1$ and $r\ge 3$ be integers with $r\ge n$. Then
$$
\sum_{v=0}^{2^{r-2}-1}\zeta_{2^n}^{p^v}=\begin{cases}
-2^{r-2}&\text{if $n=1$,}\\
2^{r-2}i&\text{if $n=2$ and $p\equiv 5\pmod{8}$,}\\
2^{r-3}i\sqrt{2}&\text{if $n=3$ and $p\equiv 3\pmod{8}$,}\\
0&\text{otherwise.}
\end{cases}
$$
\end{lemma}
\begin{proof}
It is an immediate consequence of \cite[Lemma 2.2]{B2}.
\end{proof}
The next lemma relates Gauss sums over $\mathbb F_q$ to Jacobi sums over a subfield of $\mathbb F_q$.
\begin{lemma}
\label{l11}
Let $p\equiv 3\text{\;or\;}5\pmod{8}$, and $\psi$ be a character of order $2^r$ on $\mathbb F_q$, where
$$
r\ge n=\begin{cases}
3&\text{if $p\equiv 3\pmod{8}$,}\\
2&\text{if $p\equiv 5\pmod{8}$.}
\end{cases}
$$
Assume that $2^{r-1}\mid s$. Then $\psi^{2^{r-n}}$ is equal to the lift of some character $\chi$ of order~$2^n$ on $\mathbb F_{p^{s/2^{r-n+1}}}$. Moreover,
$$
G(\psi)=q^{(2^{r-n+1}-1)/2^{r-n+2}}J(\chi)\cdot\begin{cases}
1&\text{if $p\equiv 3\pmod{8}$,}\\
(-1)^{s(r-1)/2^{r-1}}&\text{if $p\equiv 5\pmod{8}$.}
\end{cases}
$$
\end{lemma}
\begin{proof}
We prove the assertion of the lemma by induction on $r$, for $r\ge n$. Let $2^{n-1}\mid s$ and $\psi$ be a character of order $2^n$ on $\mathbb F_q$. As $2^n\mid(p^{s/2}-1)$, Lemma~\ref{l6} shows that $\psi$ is equal to the lift of some character $\chi$ of order $2^n$ on $\mathbb F_{p^{s/2}}$, that is, $\chi'=\psi$. Lemmas~\ref{l5} and \ref{l7} yield $G(\psi)=G(\chi')=-G(\chi)^2=-G(\chi^2)J(\chi)$. Note that $\chi^2$ has order~$2^{n-1}$. Thus, by Lemmas~\ref{l3} and \ref{l4},
$$
G(\chi^2)=\begin{cases}
-q^{1/4}&\text{if $p\equiv 3\pmod{8}$,}\\
(-1)^{(s/2)-1}q^{1/4}&\text{if $p\equiv 5\pmod{8}$,}
\end{cases}
$$
and so
$$
G(\psi)=q^{1/4}J(\chi)\cdot\begin{cases}
1&\text{if $p\equiv 3\pmod{8}$,}\\
(-1)^{s/2}&\text{if $p\equiv 5\pmod{8}$.}
\end{cases}
$$
This completes the proof for the case $r=n$.
Suppose now that $r\ge n+1$, and assume that the result is true when $r$ is replaced by $r-1$. Let $2^{r-1}\mid s$ and $\psi$ be a character of order $2^r$ on $\mathbb F_q$. Then $2^{r-2}\mid\frac s2$, and so $2^r\mid(p^{s/2}-1)$. By Lemma~\ref{l6}, $\psi$ is equal to the lift of some character $\phi$ of order $2^r$ on $\mathbb F_{p^{s/2}}$, that is $\phi'=\psi$. Applying Lemmas \ref{l2}(c), \ref{l3}, \ref{l7}, \ref{l8} and using the fact that $2^n\mid s$, we deduce
\begin{equation}
\label{eq1}
G(\psi)=-G(\phi)^2=-G(\phi)G(\phi\rho_0)=-\bar\phi(4)G(\phi^2)G(\rho_0)=\bar\phi(4)q^{1/4}G(\phi^2),
\end{equation}
where $\rho_0$ denotes the quadratic character on $\mathbb F_{p^{s/2}}$. Note that $\phi^2$ has order $2^{r-1}$ and $2^{r-2}\mid\frac s2$. Hence, by inductive hypothesis, $(\phi^2)^{2^{r-1-n}}=\phi^{2^{r-n}}$ is equal to the lift of some character $\chi$ of order $2^n$ on $\mathbb F_{p^{(s/2)/2^{r-n}}}=\mathbb F_{p^{s/2^{r-n+1}}}$ and
$$
G(\phi^2)=(p^{s/2})^{(2^{r-n}-1)/2^{r-n+1}}J(\chi)\cdot\begin{cases}
1&\text{if $p\equiv 3\pmod{8}$,}\\
(-1)^{(s/2)(r-2)/2^{r-2}}&\text{if $p\equiv 5\pmod{8}$,}
\end{cases}
$$
that is,
$$
G(\phi^2)=q^{(2^{r-n}-1)/2^{r-n+2}}J(\chi)\cdot\begin{cases}
1&\text{if $p\equiv 3\pmod{8}$,}\\
(-1)^{s(r-2)/2^{r-1}}&\text{if $p\equiv 5\pmod{8}$.}
\end{cases}
$$
Substituting this expression for $G(\phi^2)$ into \eqref{eq1} and using Lemma~\ref{l9}, we obtain
$$
G(\psi)=q^{(2^{r-n+1}-1)/2^{r-n+2}}J(\chi)\cdot\begin{cases}
1&\text{if $p\equiv 3\pmod{8}$,}\\
(-1)^{s(r-1)/2^{r-1}}&\text{if $p\equiv 5\pmod{8}$.}
\end{cases}
$$
It remains to show that $\psi^{2^{r-n}}$ is equal to the lift of $\chi$. Indeed, for any $x\in\mathbb F_q$ we have
\begin{align*}
\chi({\mathop{\rm N}}_{\mathbb F_q/\mathbb F_{p^{s/2^{r-n+1}}}}(x))&=\chi(x^{(p^s-1)/(p^{s/2^{r-n+1}}-1)})\\
&=\chi((x^{(p^s-1)/(p^{s/2}-1)})^{(p^{s/2}-1)/(p^{s/2^{r-n+1}}-1)})\\
&=\chi({\mathop{\rm N}}_{\mathbb F_{p^{s/2}}/\mathbb F_{p^{s/2^{r-n+1}}}}(x^{(p^s-1)/(p^{s/2}-1)}))=\phi^{2^{r-n}}(x^{(p^s-1)/(p^{s/2}-1)})\\
&=\left(\phi({\mathop{\rm N}}_{\mathbb F_{p^s}/\mathbb F_{p^{s/2}}}(x))\right)^{2^{r-n}}=\psi^{2^{r-n}}(x).
\end{align*}
Therefore $\chi'=\psi^{2^{r-n}}$, and the result now follows by the principle of mathematical induction.
\end{proof}
For an arbitrary integer $k$, it is convenient to set $\eta_k^*=\eta_{\ell}^*$, where $k\equiv {\ell}\pmod{e}$, $0\le\ell\le e-1$.
\begin{lemma}
\label{l12}
For any integer $k$, $\eta_{kp}^*=\eta_k^*$.
\end{lemma}
\begin{proof}
It is a straightforward consequence of \cite[Proposition~1]{G1}.
\end{proof}
From now on we shall assume that $p\equiv 3\text{\;or\;}5\pmod{8}$, $e=2^m$ with $m\ge 3$, and $\lambda$ is a character of order $2^m$ on $\mathbb F_q$ such that $\lambda(\gamma)=\zeta_{2^m}$. We observe that $2^{m-2}\mid s$.
\begin{lemma}
\label{l13}
We have
$$
P_{2^m}^*(X)=(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)\prod_{t=0}^{m-2}(X-\eta_{2^t}^*)^{2^{m-t-2}}(X-\eta_{-2^t}^*)^{2^{m-t-2}}.
$$
\end{lemma}
\begin{proof}
Write
\begin{align*}
P_{2^m}^*(X)&=(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)\prod_{t=0}^{m-2}\,\prod_{\substack{k=1\\ 2^t\parallel k}}^{2^m-1}(X-\eta_k^*)\\
&=(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)\prod_{t=0}^{m-2}\,\prod_{\substack{k_0=1\\ 2\nmid k_0}}^{2^{m-t}-1}(X-\eta_{2^tk_0}^*).
\end{align*}
Since $p\equiv 3\text{\;or\;}5\pmod{8}$, \,$\pm p^0,\pm p^1,\dots, \pm p^{2^{m-t-2}-1}$ is a reduced residue system modulo $2^{m-t}$ for each $0\le t\le m-2$. Thus
$$
P_{2^m}^*(X)=(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)\prod_{t=0}^{m-2}\,\prod_{j=0}^{2^{m-t-2}-1}(X-\eta_{2^t p^j}^*)(X-\eta_{-2^t p^j}^*).
$$
The result now follows from Lemma~\ref{l12}.
\end{proof}
\begin{lemma}
\label{l14}
We have
\begin{align*}
\eta_0^*=\,&G(\rho)+\sum_{r=2}^m 2^{r-2}\left(G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})\right),\\
\eta_{2^{m-1}}^*=\,&G(\rho)+\sum_{r=2}^{m-1} 2^{r-2}\left(G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})\right)-2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),
\end{align*}
and, for $0\le t\le m-2$,
\begin{align*}
\eta_{\pm 2^t}^*=\,&\sum_{r=2}^t 2^{r-2}\left(G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})\right)\\
&+\begin{cases}
-G(\rho)&\text{if $t=0$,}\\
G(\rho)-2^{t-1}\left(G(\lambda^{2^{m-t-1}})+G(\bar\lambda^{2^{m-t-1}})\right)&\text{if $t>0$,}
\end{cases}\\
&\mp\begin{cases}
0&\text{if $p\equiv 3\pmod{8}$,}\\
2^ti\,\left(G(\lambda^{2^{m-t-2}})-G(\bar\lambda^{2^{m-t-2}})\right)&\text{if $p\equiv 5\pmod{8}$,}
\end{cases}\\
&\mp\begin{cases}
2^ti\sqrt{2}\,\left(G(\lambda^{2^{m-t-3}})-G(\bar\lambda^{2^{m-t-3}})\right)&\text{if $p\equiv 3\pmod{8}$ and $t\le m-3$,}\\
0&\text{otherwise.}
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
From Lemma~\ref{l1} we deduce that
$$
\eta_k^*=\sum_{j=1}^{2^m-1}G(\lambda^j)\zeta_{2^m}^{-jk}=\sum_{r=1}^m \sum_{\substack{j=1\\ 2^{m-r}\parallel j}}^{2^m-1}G(\lambda^j)\zeta_{2^m}^{-jk}
=\sum_{r=1}^m \sum_{\substack{j_0=1\\ 2\nmid j_0}}^{2^r-1}G(\lambda^{2^{m-r}j_0})\zeta_{2^r}^{-j_0k}.
$$
Since $\lambda^{2^{m-r}}$ has order $2^r$ and, for $r\ge 2$, $\pm p^0,\pm p^1,\dots,\pm p^{2^{r-2}-1}$ is a reduced residue system modulo $2^r$, we conclude that
$$
\eta_k^*=(-1)^k G(\rho)+\sum_{r=2}^m\, \sum_{u\in\{\pm 1\}}\sum_{v=0}^{2^{r-2}-1}G(\lambda^{2^{m-r}up^v})\zeta_{2^r}^{-kup^v},
$$
or, in view of Lemma~\ref{l2}(b),
\begin{equation}
\label{eq2}
\eta_k^*=(-1)^k G(\rho)+\sum_{r=2}^m\left[G(\lambda^{2^{m-r}})\sum_{v=0}^{2^{r-2}-1}\zeta_{2^r}^{-kp^v}+G(\bar\lambda^{2^{m-r}})\sum_{v=0}^{2^{r-2}-1}\zeta_{2^r}^{kp^v}\right].
\end{equation}
The expressions for $\eta_0^*$ and $\eta_{2^{m-1}}^*$ follow immediately from \eqref{eq2}. Next we assume that $0\le t\le m-2$. If $r>t+3$, then, by Lemma~\ref{l10},
$$
\sum_{v=0}^{2^{r-2}-1}\zeta_{2^r}^{2^t p^v}=\sum_{v=0}^{2^{r-2}-1}\zeta_{2^r}^{-2^t p^v}=0,
$$
and so \eqref{eq2} yields
\begin{align*}
\eta_{2^t}^*=\,&\sum_{r=2}^t 2^{r-2}\left(G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})\right)\\
&+\begin{cases}
-G(\rho)&\text{if $t=0$,}\\
G(\rho)-2^{t-1}\left(G(\lambda^{2^{m-t-1}})+G(\bar\lambda^{2^{m-t-1}})\right)&\text{if $t>0$,}
\end{cases}\\
&+G(\lambda^{2^{m-t-2}})\sum_{v=0}^{2^t-1}i^{-p^v}+G(\bar\lambda^{2^{m-t-2}})\sum_{v=0}^{2^t-1}i^{p^v}\\
&+\begin{cases}
G(\lambda^{2^{m-t-3}})\sum_{v=0}^{2^{t+1}-1}\zeta_8^{-p^v}+G(\bar\lambda^{2^{m-t-3}})\sum_{v=0}^{2^{t+1}-1}\zeta_8^{p^v}&\text{if $t\le m-3$,}\\
0&\text{if $t=m-2$.}
\end{cases}
\end{align*}
The asserted result now follows from Lemmas~\ref{l4} and \ref{l10}. The expression for $\eta_{-2^t}^*$ can be obtained in a similar manner.
\end{proof}
\section{The Case $p\equiv 3\pmod{8}$}
\label{s3}
In this section, $p\equiv 3\pmod{8}$. As before, $2^m\mid(q-1)$ and $\lambda$ is a character of order~$2^m$ on $\mathbb F_q$ with $\lambda(\gamma)=\zeta_{2^m}$.
For $3\le r\le m$, define the integers $A_r$ and $B_r$ by
\begin{gather}
p^{s/2^{r-2}}=A_r^2+2B_r^2,\qquad A_r\equiv -1\pmod{4},\qquad p\nmid A_r,\label{eq3}\\
2B_r\equiv A_r(\gamma^{(q-1)/8}+\gamma^{3(q-1)/8})\pmod{p}.\label{eq4}
\end{gather}
It is well known that for each fixed $r$, the conditions \eqref{eq3} and \eqref{eq4} determine $A_r$ and $B_r$ uniquely.
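In practice these parameters are easy to produce by brute force. The Python sketch below (our illustration; it determines $A$ and $|B|$ only, ignoring the sign condition \eqref{eq4} on $B_r$) finds the representation $n=A^2+2B^2$ with $A\equiv-1\pmod 4$ and $B>0$ for a prime $n\equiv 3\pmod 8$.

```python
from math import isqrt

def rep_a_2b(n):
    """Return (A, B) with n = A^2 + 2*B^2, A = -1 (mod 4), B > 0.
    Sketch for prime n = 3 (mod 8); the sign of B is left undetermined."""
    for b in range(1, isqrt(n // 2) + 1):
        rem = n - 2 * b * b
        a = isqrt(rem)
        if a * a == rem:
            return (a if a % 4 == 3 else -a), b   # normalize A = -1 (mod 4)
    raise ValueError("no representation found")

assert rep_a_2b(3) == (-1, 1)     # 3 = (-1)^2 + 2*1^2
assert rep_a_2b(11) == (3, 1)     # 11 = 3^2 + 2*1^2
assert rep_a_2b(19) == (-1, 3)    # 19 = (-1)^2 + 2*3^2
```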
\begin{lemma}
\label{l15}
Let $r$ be an integer with $2^{r-1}\mid s$ and $3\le r\le m$. Then
$$
G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})=2A_r q^{(2^{r-2}-1)/2^{r-1}}
$$
and
$$
G(\lambda^{2^{m-r}})-G(\bar\lambda^{2^{m-r}})=2B_r q^{(2^{r-2}-1)/2^{r-1}}i\sqrt{2}.
$$
\end{lemma}
\begin{proof}
We observe that $\lambda^{2^{m-r}}$ has order $2^r$. By Lemma~\ref{l11}, $(\lambda^{2^{m-r}})^{2^{r-3}}=\lambda^{2^{m-3}}$ is equal to the lift of some octic character $\chi$ on $\mathbb F_{p^{s/2^{r-2}}}$ and
$$
G(\lambda^{2^{m-r}})\pm G(\bar\lambda^{2^{m-r}})=q^{(2^{r-2}-1)/2^{r-1}}(J(\chi)\pm J(\bar\chi)).
$$
Note that $\gamma^{(q-1)/(p^{s/2^{r-2}}-1)}$ is a generator of the cyclic group $\mathbb F_{p^{s/2^{r-2}}}^*$ and, by the definition of the lift, $\chi(\gamma^{(q-1)/(p^{s/2^{r-2}}-1)})=\chi({\mathop{\rm N}}_{\mathbb F_q/\mathbb F_{p^{s/2^{r-2}}}}(\gamma))=\lambda^{2^{m-3}}(\gamma)=\zeta_8$. By \cite[Lemma~17]{B1}, $J(\chi)=A_r+B_ri\sqrt{2}$, and the result follows.
\end{proof}
We are now in a position to prove the main result of this section.
\begin{theorem}
\label{t1}
Let $p\equiv 3\pmod{8}$ and $m\ge 4$. Then $P_{2^m}^*(X)$ has a unique decomposition into irreducible polynomials over the rationals as follows:
\begin{itemize}
\item[\rm (a)]
if $2^{m-1}\mid s$, then
\begin{align*}
P_{2^m}^*(X)=\,& (X-q^{\frac 12}+4B_3 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-4B_3 q^{\frac 14})^{2^{m-2}}\\
&\times (X-q^{\frac 12}+8B_4 q^{\frac 38})^{2^{m-3}} (X-q^{\frac 12}-8B_4 q^{\frac 38})^{2^{m-3}}\\
&\times \Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-2} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-2}A_{m-1} q^{\frac{2^{m-3}-1}{2^{m-2}}}\Bigr)^2\\
&\times \Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-1}A_m q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)\\
&\times \Bigl(X+3q^{\frac 12}-\sum_{r=3}^m 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}\Bigr)\prod_{t=2}^{m-3}Q_t(X)^{2^{m-t-2}};
\end{align*}
\item[\rm (b)]
if $2^{m-2}\parallel s$ and $m\ge 5$, then
\begin{align*}
P_{2^m}^*(X)=\,& (X-q^{\frac 12}+4B_3 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-4B_3 q^{\frac 14})^{2^{m-2}}\\
&\times (X-q^{\frac 12}+8B_4 q^{\frac 38})^{2^{m-3}} (X-q^{\frac 12}-8B_4 q^{\frac 38})^{2^{m-3}}\\
&\times \Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-2} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-2}A_{m-1} q^{\frac{2^{m-3}-1}{2^{m-2}}}\Bigr)^2\\
&\times \left(\Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}\Bigr)^2+2^{2(m-1)}A_m^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\right)\\
&\times \left(\Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-3} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-3}A_{m-2} q^{\frac{2^{m-4}-1}{2^{m-3}}}\Bigr)^2\right.\\
&\hskip42pt+\Biggl.2^{2(m-1)}B_m^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\Biggr)^2\, \prod_{t=2}^{m-4}Q_t(X)^{2^{m-t-2}};
\end{align*}
\item[\rm (c)]
if $4\parallel s$, then
\begin{align*}
P_{16}^*(X)=\,&(X+3q^{\frac 12}+4A_3 q^{\frac 14})^2 (X-q^{\frac 12}+4B_3 q^{\frac 14})^4 (X-q^{\frac 12}-4B_3 q^{\frac 14})^4\\
&\times\left((X+3q^{\frac 12}-4A_3 q^{\frac 14})^2+64A_4^2 q^{\frac 34}\right)\left((X-q^{\frac 12})^2+64B_4^2 q^{\frac 34}\right)^2.
\end{align*}
\end{itemize}
The integers $A_r$ and $|B_r|$ are uniquely determined by~\eqref{eq3}, and
\begin{align*}
Q_t(X)=\,&\Bigl(X+3q^{\frac 12}-\sum_{r=3}^t 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^t A_{t+1} q^{\frac{2^{t-1}-1}{2^t}}+2^{t+2}B_{t+3}q^{\frac{2^{t+1}-1}{2^{t+2}}}\Bigr)\\
&\times\Bigl(X+3q^{\frac 12}-\sum_{r=3}^t 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^t A_{t+1} q^{\frac{2^{t-1}-1}{2^t}}-2^{t+2}B_{t+3}q^{\frac{2^{t+1}-1}{2^{t+2}}}\Bigr).
\end{align*}
\end{theorem}
\begin{proof}
Since $4\mid s$, Lemmas~\ref{l3} and \ref{l4} yield $G(\rho)=G(\lambda^{2^{m-2}})=G(\bar\lambda^{2^{m-2}})=-q^{1/2}$. Appealing to Lemmas \ref{l14} and \ref{l15}, we deduce that
\begin{align}
\eta_0^*&=-3q^{\frac 12}+\sum\limits_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq5}\\
\eta_{2^{m-1}}^*&=-3q^{\frac 12}+\sum\limits_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}-2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq6}\\
\eta_{\pm 2^{m-2}}^*&=-3q^{\frac 12}+\sum\limits_{r=3}^{m-2} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}-2^{m-2}A_{m-1}q^{\frac{2^{m-3}-1}{2^{m-2}}},\label{eq7}\\
\eta_{\pm 2^{m-3}}^*&=\begin{cases}
q^{\frac 12}\mp 2i\sqrt{2}\left(G(\lambda)-G(\bar\lambda)\right)&\text{if $m=4$,}\\
-3q^{\frac 12}+\sum\limits_{r=3}^{m-3} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}&\\
-2^{m-3}A_{m-2}q^{\frac{2^{m-4}-1}{2^{m-3}}}\mp 2^{m-3}i\sqrt{2}\left(G(\lambda)-G(\bar\lambda)\right)&\text{if $m\ge 5$,}
\end{cases}\label{eq8}\\
\eta_{\pm 1}^*&= q^{\frac 12}\pm 4B_3 q^{\frac 14}.\label{eq9}
\end{align}
Moreover, if $m\ge 5$, then
\begin{equation}
\label{eq10}
\eta_{\pm 2}^*=q^{\frac 12}\pm 8B_4 q^{\frac 38}
\end{equation}
and, for $2\le t\le m-4$,
\begin{equation}
\eta_{\pm 2^t}^*=-3q^{\frac 12}+\sum_{r=3}^t 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}-2^t A_{t+1} q^{\frac{2^{t-1}-1}{2^t}}\pm 2^{t+2}B_{t+3}q^{\frac{2^{t+1}-1}{2^{t+2}}}.\label{eq11}
\end{equation}
Assume that $2^{m-1}\mid s$. Combining \eqref{eq5}~--~\eqref{eq11} with Lemma~\ref{l15}, we obtain the values of the cyclotomic periods, which are all integers. Part~(a) now follows from Lemma~\ref{l13}.
Next assume that $2^{m-2}\parallel s$. We have $2^m\parallel(q-1)$, and so $\lambda(-1)=-1$. Hence, by
Lemma~\ref{l2}(a),
$$
\left(G(\lambda)\pm G(\bar\lambda)\right)^2=G(\lambda)^2+G(\bar\lambda)^2\pm 2\lambda(-1)q=G(\lambda)^2+G(\bar\lambda)^2\mp 2q.
$$
Lemmas~\ref{l2}(c), \ref{l3}, \ref{l8}, \ref{l9} and \ref{l15} yield
\begin{align*}
G(\lambda)^2+G(\bar\lambda)^2&=G(\lambda)G(\lambda\rho)+G(\bar\lambda)G(\bar\lambda\rho)=\bar\lambda(4)G(\lambda^2)G(\rho)+\lambda(4)G(\bar\lambda^2)G(\rho)\\
&=-q^{1/2}(G(\lambda^2)+G(\bar\lambda^2))=-2A_{m-1}q^{(2^{m-2}-1)/2^{m-2}},
\end{align*}
and thus
\begin{equation}
\label{eq12}
\left(G(\lambda)\pm G(\bar\lambda)\right)^2=-2q^{(2^{m-2}-1)/2^{m-2}}(A_{m-1}\pm p^{s/2^{m-2}}).
\end{equation}
Note that
$$
A_{m-1}^2+2B_{m-1}^2=p^{s/2^{m-3}}=(p^{s/2^{m-2}})^2=(A_m^2+2B_m^2)^2
=(A_m^2-2B_m^2)^2+2\cdot(2A_mB_m)^2.
$$
Hence $A_{m-1}=\pm(A_m^2-2B_m^2)$. Since $p^{s/2^{m-2}}=A_m^2+2B_m^2\equiv 3\pmod{8}$, $B_m$ is odd, and so $A_{m-1}=A_m^2-2B_m^2$. Substituting the expressions
for $p^{s/2^{m-2}}$ and $A_{m-1}$ into \eqref{eq12}, we find that
\begin{align*}
\left(G(\lambda)+G(\bar\lambda)\right)^2&=-4A_m^2q^{(2^{m-2}-1)/2^{m-2}},\\
\left(G(\lambda)-G(\bar\lambda)\right)^2&=8B_m^2q^{(2^{m-2}-1)/2^{m-2}}.
\end{align*}
The last two equalities together with \eqref{eq5}, \eqref{eq6} and \eqref{eq9} imply
\begin{align}
(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)=\,&
\Bigl(X+3q^{\frac 12}-\sum\limits_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}\Bigr)^2\notag\\
&+2^{2(m-1)}A_m^2q^{\frac{2^{m-2}-1}{2^{m-2}}},\label{eq13}\\
(X-\eta_{2^{m-3}}^*)(X-\eta_{-2^{m-3}}^*)=\,&\Bigl(X+3q^{\frac 12}-\sum\limits_{r=3}^{m-3} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-3}A_{m-2}q^{\frac{2^{m-4}-1}{2^{m-3}}}\Bigr)^2\notag\\
&+2^{2(m-1)}B_m^2q^{\frac{2^{m-2}-1}{2^{m-2}}}\qquad\text{if $m\ge 5$,}\label{eq14}\\
(X-\eta_{2^{m-3}}^*)(X-\eta_{-2^{m-3}}^*)=\,&(X-q^{\frac 12})^2+64B_4^2 q^{\frac 34}\qquad\text{if $m=4$.}\label{eq15}
\end{align}
Clearly, the quadratic polynomials on the right sides of \eqref{eq13}~--~\eqref{eq15} are irreducible over the rationals.
Putting \eqref{eq7}, \eqref{eq9}~--~\eqref{eq11}, \eqref{eq13}~--~\eqref{eq15} together and appealing to Lemma~\ref{l13}, we deduce parts~(b) and (c). This completes the proof.
\end{proof}
\begin{remark}
\label{r}
{\rm The result of Gurak~\cite[Proposition~3.3(iii)]{G3} can be reformulated in terms of $A_3$ and $B_3$. Namely, $P_8^*(X)$ has the following factorization into irreducible polynomials over the rationals:
\begin{align*}
P_8^*(X)=\,& (X-q^{1/2})^2 (X-q^{1/2}+4B_3 q^{1/4})^2 (X-q^{1/2}-4B_3 q^{1/4})^2 &\\
&\times (X+3q^{1/2}+4A_3 q^{1/4})(X+3q^{1/2}-4A_3 q^{1/4})&\text{if $4\mid s$,}\\
P_8^*(X)=\,&(X-3q^{1/2})^2 &\\
&\times\left((X+q^{1/2})^2+16A_3^2 q^{1/2}\right)\left((X+q^{1/2})^2+16B_3^2 q^{1/2}\right)^2 &\text{if $2\parallel s$.}
\end{align*}
We see that Theorem~\ref{t1} is not valid for $m=3$.}
\end{remark}
\section{The Case $p\equiv 5\pmod{8}$}
\label{s4}
In this section, $p\equiv 5\pmod{8}$. As in the previous sections, $2^m\mid(q-1)$ and $\lambda$ denotes a character of order~$2^m$ on $\mathbb F_q$ such that $\lambda(\gamma)=\zeta_{2^m}$.
For $2\le r\le m-1$, define the integers $C_r$ and $D_r$ by
\begin{gather}
p^{s/2^{r-1}}=C_r^2+D_r^2,\qquad C_r\equiv 1\pmod{4},\qquad p\nmid C_r,\label{eq16}\\
D_r \gamma^{(q-1)/4}\equiv C_r\pmod{p}.\label{eq17}
\end{gather}
If $2^{m-1}\mid s$, we extend this notation to $r=m$. It is well known that for each fixed $r$, the conditions~\eqref{eq16} and \eqref{eq17} determine $C_r$ and $D_r$ uniquely.
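As in Section~\ref{s3}, these parameters are easily produced by brute force. The Python sketch below (our illustration; it determines $C$ and $|D|$ only, ignoring the congruence \eqref{eq17} fixing the sign of $D_r$) finds $n=C^2+D^2$ with $C$ odd, $C\equiv 1\pmod 4$ and $D>0$ for a prime $n\equiv 5\pmod 8$.

```python
from math import isqrt

def rep_c_d(n):
    """Return (C, D) with n = C^2 + D^2, C odd, C = 1 (mod 4), D > 0.
    Sketch for prime n = 5 (mod 8); the sign of D is left undetermined."""
    for d in range(1, isqrt(n) + 1):
        rem = n - d * d
        c = isqrt(rem)
        if c * c == rem and c % 2 == 1:
            return (c if c % 4 == 1 else -c), d   # normalize C = 1 (mod 4)
    raise ValueError("no representation found")

assert rep_c_d(5) == (1, 2)       # 5 = 1^2 + 2^2
assert rep_c_d(13) == (-3, 2)     # 13 = (-3)^2 + 2^2
assert rep_c_d(29) == (5, 2)      # 29 = 5^2 + 2^2
```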
\begin{lemma}
\label{l16}
Let $r$ be an integer with $2^{r-1}\mid s$ and $2\le r\le m$. Then
$$
G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})=\begin{cases}
-2C_r q^{(2^{r-1}-1)/2^r}&\text{if $2^r\mid s$,}\\
(-1)^r\cdot 2C_r q^{(2^{r-1}-1)/2^r}&\text{if $2^{r-1}\parallel s$,}
\end{cases}
$$
and
$$
G(\lambda^{2^{m-r}})-G(\bar\lambda^{2^{m-r}})=\begin{cases}
2D_r q^{(2^{r-1}-1)/2^r}i&\text{if $2^r\mid s$,}\\
(-1)^{r-1}\cdot 2D_r q^{(2^{r-1}-1)/2^r}i&\text{if $2^{r-1}\parallel s$.}
\end{cases}
$$
\end{lemma}
\begin{proof}
The proof proceeds exactly as for Lemma~\ref{l15}, except that at the end, \cite[Proposition~3]{KR} is invoked instead of \cite[Lemma~17]{B1}.
\end{proof}
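As a quick numerical sanity check (not part of the proof), the smallest instance of Lemma~\ref{l16} can be verified directly: for $p=5$, $s=2$ (so $q=25$, $m=r=2$), equation \eqref{eq16} gives $C_2=1$ and $|D_2|=2$, and the lemma predicts $|\mathrm{Re}\,G(\lambda)| = |C_2|\,q^{1/4}$ and $|\mathrm{Im}\,G(\lambda)| = |D_2|\,q^{1/4}$ (we check only absolute values, so as not to depend on the choice of generator entering \eqref{eq17}). The sketch models $\mathbb F_{25}$ as $\mathbb F_5[x]/(x^2-2)$:

```python
import cmath

p = 5  # F_25 = F_5[x]/(x^2 - 2); note 2 is a quadratic non-residue mod 5

def mul(u, v):
    """Multiply a + b*x and c + d*x with x^2 = 2, coefficients mod 5."""
    (a, b), (c, d) = u, v
    return ((a * c + 2 * b * d) % p, (a * d + b * c) % p)

def power(u, e):
    r = (1, 0)
    while e:
        if e & 1:
            r = mul(r, u)
        u = mul(u, u)
        e >>= 1
    return r

# an arbitrary generator gamma of the cyclic group F_25^* (order 24)
elems = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]
gamma = next(g for g in elems
             if power(g, 12) != (1, 0) and power(g, 8) != (1, 0))

def psi(u):
    """Canonical additive character: Tr(a + b*x) = (a+b*x) + (a+b*x)^5 = 2a."""
    return cmath.exp(2j * cmath.pi * (2 * u[0] % p) / p)

# Gauss sum of a character of order 4: lambda(gamma^j) = i^j
i_pow = [1, 1j, -1, -1j]
G = sum(i_pow[j % 4] * psi(power(gamma, j)) for j in range(24))
```

Since $\lambda(-1)=\lambda(\gamma^{12})=1$, we have $G(\bar\lambda)=\overline{G(\lambda)}$, so the lemma amounts to $|\mathrm{Re}\,G| = \sqrt 5$ and $|\mathrm{Im}\,G| = 2\sqrt 5$, consistent with $|G|^2 = q = 25$.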
We are now ready to establish our second main result.
\begin{theorem}
\label{t2}
Let $p\equiv 5\pmod{8}$ and $m\ge 4$. Then $P_{2^m}^*(X)$ has a unique decomposition into irreducible polynomials over the rationals as follows:
\begin{itemize}
\item[\rm (a)]
if $2^m\mid s$, then
\begin{align*}
P_{2^m}^*(X)=\,& (X-q^{\frac 12}+2D_2 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-2D_2 q^{\frac 14})^{2^{m-2}}\\
&\times\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-1}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-1}C_m q^{\frac{2^{m-1}-1}{2^m}}\Bigr)\\
&\times\Bigl(X+q^{\frac 12}+\sum_{r=2}^m 2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)\prod_{t=1}^{m-2}R_t(X)^{2^{m-t-2}};
\end{align*}
\item[\rm (b)]
if $2^{m-1}\parallel s$, then
\begin{align*}
P_{2^m}^*(X)=\,& (X-q^{\frac 12}+2D_2 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-2D_2 q^{\frac 14})^{2^{m-2}}\\
&\times\left(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-1}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2 -2^{2(m-1)}C_m^2 q^{\frac{2^{m-1}-1}{2^{m-1}}}\right)\\
&\times\Biggl(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)^2\Biggr.\\
&\hskip20pt -\Biggl.2^{2(m-1)}D_m^2 q^{\frac{2^{m-1}-1}{2^{m-1}}}\Biggr)\prod_{t=1}^{m-3}R_t(X)^{2^{m-t-2}};
\end{align*}
\item[\rm (c)]
if $2^{m-2}\parallel s$, then
\begin{align*}
P_{2^m}^*(X)=\,& (X-q^{\frac 12}+2D_2 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-2D_2 q^{\frac 14})^{2^{m-2}}\\
&\times\biggl(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-3}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-3}C_{m-2}q^{\frac{2^{m-3}-1}{2^{m-2}}}\Bigr)^2\biggr.\\
&\hskip26pt-\biggl.2^{2(m-2)}D_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\biggr)^2
\end{align*}
\begin{align*}
&\times\Biggl(\biggl(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2+2^{2(m-2)}C_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}+2^{2m-3}q\biggr)^2\Biggr.\\
&\hskip26pt -2^{2(m-1)}C_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\Biggl.\Bigl(X+(2^{m-2}+1)q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2\Biggr)\\
&\times\prod_{t=1}^{m-4}R_t(X)^{2^{m-t-2}}.
\end{align*}
\end{itemize}
The integers $C_r$ and $|D_r|$ are uniquely determined by~\eqref{eq16}, and
\begin{align*}
R_t(X)=\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^t 2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^t C_{t+1} q^{\frac{2^t-1}{2^{t+1}}}+2^{t+1} D_{t+2} q^{\frac{2^{t+1}-1}{2^{t+2}}}\Bigr)\\
&\times\Bigl(X+q^{\frac 12}+\sum_{r=2}^t 2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^t C_{t+1} q^{\frac{2^t-1}{2^{t+1}}}-2^{t+1} D_{t+2} q^{\frac{2^{t+1}-1}{2^{t+2}}}\Bigr).
\end{align*}
\end{theorem}
\begin{proof}
As $s$ is even, Lemma~\ref{l3} yields $G(\rho)=-q^{1/2}$. Then, applying Lemmas~\ref{l14} and \ref{l16}, we obtain
\begin{align}
\eta_0^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+2^{m-3}\left(G(\lambda^2)+G(\bar\lambda^2)\right)\notag\\
&+2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq18}\\
\eta_{2^{m-1}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+2^{m-3}\left(G(\lambda^2)+G(\bar\lambda^2)\right)\notag\\
&-2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq19}\\
\eta_{\pm 2^{m-2}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-3}\left(G(\lambda^2)+G(\bar\lambda^2)\right)\notag\\
&\mp 2^{m-2}i\left(G(\lambda)-G(\bar\lambda)\right),\label{eq20}\\
\eta_{\pm 2^{m-3}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-3}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+2^{m-3}C_{m-2}q^{\frac{2^{m-3}-1}{2^{m-2}}}\notag\\
&\mp 2^{m-3}i\left(G(\lambda^2)-G(\bar\lambda^2)\right),\label{eq21}\\
\eta_{\pm 1}^*=\,&q^{\frac 12}\pm 2D_2 q^{\frac 14},\label{eq22}
\end{align}
and, for $1\le t\le m-4$,
\begin{equation}
\label{eq23}
\eta_{\pm 2^t}^*=-q^{\frac 12}-\sum_{r=2}^t 2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+2^t C_{t+1} q^{\frac{2^t-1}{2^{t+1}}}\pm 2^{t+1} D_{t+2} q^{\frac{2^{t+1}-1}{2^{t+2}}}.
\end{equation}
First suppose that $2^m\mid s$. By combining \eqref{eq18}~--~\eqref{eq23} with Lemma~\ref{l16}, we find the values of the cyclotomic periods, which are all integers. Now part~(a) follows from Lemma~\ref{l13}.
Next suppose that $2^{m-1}\parallel s$. Using \eqref{eq18}~--~\eqref{eq23} and Lemma~\ref{l16} again, we find the values of the cyclotomic periods. We observe that $\eta_0^*$ and $\eta_{2^{m-1}}^*$ as well as $\eta_{2^{m-2}}^*$ and $\eta_{-2^{m-2}}^*$ are algebraic conjugates of degree~2 over the rationals, and the remaining cyclotomic periods are integers. Therefore the polynomials
$$
(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)=\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-1}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2-2^{2(m-1)}C_m^2 q^{\frac{2^{m-1}-1}{2^{m-1}}}
$$
and
\begin{align*}
(X-\eta_{2^{m-2}}^*)(X-\eta_{-2^{m-2}}^*)=\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)^2\\
&-2^{2(m-1)}D_m^2 q^{\frac{2^{m-1}-1}{2^{m-1}}}
\end{align*}
are irreducible over the rationals. Part~(b) now follows in view of Lemma~\ref{l13}.
Finally, suppose that $2^{m-2}\parallel s$. Making use of \eqref{eq18}~--~\eqref{eq20} and Lemma~\ref{l16}, we obtain
\begin{align}
\eta_0^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\notag\\
&+2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq24}\\
\eta_{2^{m-1}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\notag\\
&-2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq25}\\
\eta_{\pm 2^{m-2}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\notag\\
&\mp 2^{m-2}i\left(G(\lambda)-G(\bar\lambda)\right).\label{eq26}
\end{align}
By employing the same type of argument as in the proof of Theorem~\ref{t1}, we see that
$$
\left(G(\lambda)\pm G(\bar\lambda)\right)^2=\mp\, 2q^{(2^{m-1}-1)/2^{m-1}}\left(q^{1/2^{m-1}}\pm (-1)^m\cdot C_{m-1}\right).
$$
Combining this with \eqref{eq24}~--~\eqref{eq26}, we conclude that
\begin{align*}
(X-\eta_0^*)(X-&\eta_{2^{m-1}}^*)\\
=\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)^2\\
&+2^{2m-3}q^{\frac{2^{m-1}-1}{2^{m-1}}}\left(q^{\frac 1{2^{m-1}}}+(-1)^m\cdot C_{m-1}\right)
\end{align*}
and
\begin{align*}
(X-\eta_{2^{m-2}}^*)(X-&\eta_{-2^{m-2}}^*)\\
=\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)^2\\
&+2^{2m-3}q^{\frac{2^{m-1}-1}{2^{m-1}}}\left(q^{\frac 1{2^{m-1}}}-(-1)^m\cdot C_{m-1}\right).
\end{align*}
Since $q^{1/2^{m-2}}=p^{s/2^{m-2}}=C_{m-1}^2+D_{m-1}^2$ and $D_{m-1}\ne 0$ (otherwise $C_{m-1}^2=p^{s/2^{m-2}}$ would force $p\mid C_{m-1}$), we have $q^{1/2^{m-1}}>|C_{m-1}|$. This means that the polynomials $(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)$ and $(X-\eta_{2^{m-2}}^*)(X-\eta_{-2^{m-2}}^*)$ are irreducible over the reals. Furthermore, since $2^{m-2}\parallel s$, the polynomials $(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)$ and $(X-\eta_{2^{m-2}}^*)(X-\eta_{-2^{m-2}}^*)$ belong to $\mathbb R[X]\setminus\mathbb Q[X]$. Since $\mathbb R[X]$ is a unique factorization domain, it follows that the polynomial
\begin{align*}
(X-&\eta_0^*)(X-\eta_{2^{m-1}}^*)(X-\eta_{2^{m-2}}^*)(X-\eta_{-2^{m-2}}^*)\\
=\,&\biggl(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2+2^{2(m-2)}C_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}+2^{2m-3}q\biggr)^2\\
&-2^{2(m-1)}C_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\Bigl(X+(2^{m-2}+1)q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2
\end{align*}
is irreducible over the rationals. Further, by Lemma~\ref{l16} and \eqref{eq21},
$$
\eta_{\pm 2^{m-3}}^*=-q^{\frac 12}-\sum_{r=2}^{m-3}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+ 2^{m-3}C_{m-2}q^{\frac{2^{m-3}-1}{2^{m-2}}}\pm (-1)^m\cdot 2^{m-2}D_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}},
$$
and so $\eta_{2^{m-3}}^*$ and $\eta_{-2^{m-3}}^*$ are algebraic conjugates of degree~2 over the rationals. Hence, the polynomial
\begin{align*}
(X-\eta_{2^{m-3}}^*)(X-\eta_{-2^{m-3}}^*)=\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-3}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-3}C_{m-2}q^{\frac{2^{m-3}-1}{2^{m-2}}}\Bigr)^2\\
&-2^{2(m-2)}D_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}
\end{align*}
is irreducible over the rationals. The remaining cyclotomic periods $\eta_{\pm 2^t}^*$, $0\le t\le m-4$, are integers, in view of \eqref{eq22} and \eqref{eq23}. Now part~(c) follows by appealing to Lemma~\ref{l13}. This concludes the proof.
\end{proof}
\begin{remark}
{\rm Myerson has shown \cite[Theorem~17]{M} that $P_4^*(X)$ is irreducible if $2\nmid s$,
\begin{align*}
P_4^*(X)=\,& (X+q^{1/2}+2C_2 q^{1/4})(X+q^{1/2}-2C_2 q^{1/4})&\\
&\times(X-q^{1/2}+2D_2 q^{1/4}) (X-q^{1/2}-2D_2 q^{1/4})&\text{if $4\mid s$,}\\
\intertext{and, with a slight modification,}
P_4^*(X)=\,&\left((X+q^{1/2})^2-4C_2^2q^{1/2}\right)\left((X-q^{1/2})^2-4D_2^2q^{1/2}\right)&\text{if $2\parallel s$,}
\end{align*}
where in the latter case the quadratic polynomials are irreducible over the rationals. Furthermore, the result of Gurak~\cite[Proposition~3.3(ii)]{G3} can be reformulated in terms of $C_2$, $D_2$, $C_3$ and $D_3$. Namely, $P_8^*(X)$ has the following factorization into irreducible polynomials over the rationals:
\begin{align*}
P_8^*(X)=\,& (X-q^{1/2}+2D_2 q^{1/4})^2 (X-q^{1/2}-2D_2 q^{1/4})^2\\
&\times(X+q^{1/2}+2C_2 q^{1/4}+4C_3 q^{3/8})(X+q^{1/2}+2C_2 q^{1/4}-4C_3 q^{3/8})&\\
&\times(X+q^{1/2}-2C_2 q^{1/4}+4D_3 q^{3/8})(X+q^{1/2}-2C_2 q^{1/4}-4D_3 q^{3/8})&\!\text{if $8\mid s$,}\\
P_8^*(X)=\,& (X-q^{1/2}+2D_2 q^{1/4})^2 (X-q^{1/2}-2D_2 q^{1/4})^2&\\
&\times\left((X+q^{1/2}+2C_2 q^{1/4})^2-16C_3^2 q^{3/4}\right)&\\
&\times\left((X+q^{1/2}-2C_2 q^{1/4})^2-16D_3^2 q^{3/4}\right)&\!\text{if $4\parallel s$,}\\
P_8^*(X)=\,&\left((X-q^{1/2})^2-4D_2^2 q^{1/2}\right)^2&\\
&\times\left(\left((X+q^{1/2})^2+4C_2^2 q^{1/2}+8q\right)^2-16C_2^2 q^{1/2}(X+3q^{1/2})^2\right)&\!\text{if $2\parallel s$.}
\end{align*}
Thus part~(a) of Theorem~\ref{t2} remains valid for $m=2$ and $m=3$. Moreover, for $m=3$, part~(b) of Theorem~\ref{t2} is still valid (cf. Remark~\ref{r}).}
\end{remark}
\end{document} |
\begin{document}
\title{Sparse Euclidean Spanners with Tiny Diameter: A Tight Lower Bound}
\begin{abstract}
In STOC'95 \cite{ADMSS95} Arya et al. showed that any set of $n$ points in $\mathbb R^d$ admits a $(1+\epsilon)$-spanner with hop-diameter at most 2 (respectively, 3) and $O(n \log n)$ edges (resp., $O(n \log \log n)$ edges). They also gave a general upper bound tradeoff of hop-diameter at most $k$ and $O(n \alpha_k(n))$ edges, for any $k \ge 2$.
The function $\alpha_k$ is the inverse of a certain Ackermann-style function at the $\lfloor k/2 \rfloor$th level of the primitive recursive hierarchy, where $\alpha_0(n) = \lceil n/2 \rceil$, $\alpha_1(n) = \left\lceil \sqrt{n} \right\rceil$, $\alpha_2(n) = \lceil \log{n} \rceil$, $\alpha_3(n) = \lceil \log\log{n} \rceil$, $\alpha_4(n) = \log^* n$, $\alpha_5(n) = \lfloor \frac{1}{2} \log^*n \rfloor$, \ldots. Roughly speaking, for $k \ge 2$ the function $\alpha_{k}$ is close to $\lfloor \frac{k-2}{2} \rfloor$-iterated log-star function, i.e., $\log$ with $\lfloor \frac{k-2}{2} \rfloor$ stars. Also, $\alpha_{2\alpha(n)+4}(n) \le 4$, where $\alpha(n)$ is the one-parameter inverse Ackermann function, which is an extremely slowly growing function.
Whether or not this tradeoff is tight has remained open, even for the cases $k = 2$ and $k = 3$. Two lower bounds are known: the first applies only to spanners with stretch 1, and the second is sub-optimal and applies only to sufficiently large (constant) values of $k$. In this paper we prove a tight lower bound for any constant $k$: for any fixed
$\epsilon > 0$, any $(1+\epsilon)$-spanner for the uniform line metric with hop-diameter at most $k$ must have at least $\Omega(n \alpha_k(n))$ edges.
\end{abstract}
\section{Introduction}
Consider a set $S$ of $n$ points in $\mathbb R^d$ and a real number $t \ge 1$.
A weighted graph $G = (S,E,w)$ in which the weight function is given by the Euclidean distance,
i.e., $w(x,y) = \|x-y\|$ for each $e =(x,y)\in E$,
is called a \emph{geometric graph}.
We say that a geometric graph $G$ is a \emph{$t$-spanner} for $S$ if for every pair $p,q \in S$ of distinct points,
there is a path in $G$ between $p$ and $q$ whose {\em weight}
(i.e., the sum of all edge weights in it)
is at most $t$ times the
Euclidean distance $\|p-q\|$ between $p$ and $q$.
Such a path
is called a \emph{$t$-spanner path}. The problem of constructing
Euclidean spanners
has been studied intensively
over the years \cite{Chew86,KG92,ADDJS93,CK93,DN94,ADMSS95,DNS95,AS97,RS98,AWY05,CG06,DES09tech,SE10,Sol14,ES15,LS19}.
Euclidean spanners are of importance both in theory and in practice,
as they enable approximation of the complete Euclidean
graph in a more succinct form; in particular, they
find a plethora of applications, e.g., in geometric approximation algorithms, network topology design, geometric distance oracles, distributed systems, design of parallel machines, and other areas \cite{DN94,LNS98,RS98,GLNS02,GNS05,GLNS08,HP00,MP00}.
We refer the reader to the book by Narasimhan and Smid \cite{NS07}, which provides a thorough account on Euclidean spanners and their applications.
In terms of applications, the most basic requirement from a spanner (besides achieving a small stretch) is to be \emph{sparse}, i.e.,
to have only a small number of edges.
However, for many applications, the spanner is required to
preserve some additional properties of the underlying complete graph.
One such property, which plays a key role in various applications (such as routing protocols) \cite{AMS94,AM04,AWY05,CG06,DES09tech,KLMS21}, is the \emph{hop-diameter}:
a $t$-spanner for $S$ is said to have a hop-diameter of $k$ if, for any $p,q \in S$, there is a $t$-spanner path between $p$ and $q$ with at most $k$ edges (or hops).
\subsection{Known upper bounds}
\paragraph*{1-spanners for tree metrics.}
We denote the tree metric induced by an $n$-vertex (possibly weighted) rooted tree $(T,rt)$ by $M_T$.
A spanning subgraph $G$ of $M_T$ is said to be a \emph{1-spanner} for $T$, if for every pair of vertices, their distance in $G$ is equal to their distance in $T$.
The problem of constructing 1-spanners for tree metrics is a fundamental one, and has been studied quite extensively over the years, also in more
general settings, such as planar
metrics \cite{Thor95}, general metrics \cite{Thor92} and general graphs \cite{BGJRW09}.
This problem is also intimately related to the extremely well-studied problems of
computing partial-sums and online product queries in semigroup and their variants (see \cite{Tarj79,Yao82,AS87,CR91,PD04,AWY05}, and the references therein).
Alon and Schieber \cite{AS87} and Bodlaender et al.\ \cite{BTS94} showed that
for any $n$-point tree metric, a 1-spanner with diameter 2 (respectively, 3) and $O(n \log n)$ edges (resp., $O(n \log \log n)$ edges)
can be built within time linear in its size.
For $k \ge 4$, Alon and Schieber \cite{AS87} showed that 1-spanners with diameter at most $2k$
and $O(n \alpha_k(n))$ edges can be built in $O(n \alpha_k(n))$ time.
The function $\alpha_k$ is the inverse of a certain Ackermann-style function at the $\lfloor k/2 \rfloor$th level of the primitive recursive hierarchy, where $\alpha_0(n) = \ceil*{n/2}$, $\alpha_1(n) = \ceil*{\sqrt{n}}$, $\alpha_2(n) = \ceil{\log{n}}$, $\alpha_3(n) = \ceil{\log\log{n}}$, $\alpha_4(n) = \log^* n$, $\alpha_5(n) = \floor*{\frac{1}{2} \log^*n}$, etc. Roughly speaking, for $k \ge 2$ the function $\alpha_{k}$ is close to the $\lfloor \frac{k-2}{2} \rfloor$-iterated log-star function, i.e., $\log$ with $\lfloor \frac{k-2}{2} \rfloor$ stars. Also, $\alpha_{2\alpha(n)+4}(n) \le 4$, where $\alpha(n)$ is the one-parameter inverse Ackermann function, which is an extremely slowly growing function. (These functions are formally defined in \Cref{sec:ackermann}.)
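To make the hierarchy tangible, here is one common way to realize it in code (a sketch only: base cases and thresholds are a matter of convention and match the values listed above up to small additive constants; the paper's exact definition is the one in \Cref{sec:ackermann}):

```python
import math

def alpha(k, n):
    """Inverse-Ackermann-style hierarchy, one common convention.
    For k >= 2, alpha_k(n) counts how many times alpha_{k-2} must be
    iterated to bring n down to at most 2; this reproduces
    alpha_2 ~ log, alpha_3 ~ log log, alpha_4 ~ log*, and so on,
    up to additive constants."""
    if k == 0:
        return math.ceil(n / 2)
    if k == 1:
        return math.ceil(math.sqrt(n))
    count = 0
    while n > 2:
        n = alpha(k - 2, n)
        count += 1
    return count
```

For example, `alpha(3, 65536)` returns 4 (iterated square roots: $65536 \to 256 \to 16 \to 4 \to 2$), matching $\ceil{\log\log 65536} = 4$.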
Bodlaender et al.\ \cite{BTS94} constructed 1-spanners
with diameter at most $k$ and $O(n \alpha_k(n))$ edges, with a high running time.
Solomon~\cite{Sol13} gave a construction that achieves the best of both worlds: a tradeoff of $k$ versus $O(n\alpha_k(n))$ between the hop-diameter and the number of edges, within $O(n\alpha_k(n))$ time.
Alternative constructions, given by Yao~\cite{Yao82} for line metrics and later extended by Chazelle~\cite{Chaz87} to general tree metrics, achieve a tradeoff of $m$ edges versus $\Theta( \alpha(m,n))$ hop-diameter, where $\alpha(m,n)$ is the two-parameter inverse Ackermann function (see \Cref{def:invAck}). However, these constructions provide 1-spanners
with diameter $\Gamma' \cdot k$ rather than $2k$ or $k$, for some constant $\Gamma' > 30$.
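To illustrate the flavor of these constructions, the following is a minimal sketch, in the spirit of the hop-diameter-2, $O(n\log n)$-edge construction of Alon and Schieber (not their exact algorithm), of a 1-spanner for the uniform line metric: every point connects to the median of its interval, and the construction recurses on the two halves. Any pair of points first separated at some recursion node is joined by a monotone 2-hop path through that node's median, so the stretch is 1.

```python
def two_hop_spanner(n):
    """1-spanner with hop-diameter 2 for the points {0, ..., n-1} on a line
    (a sketch of the divide-and-conquer idea, not an optimized version).
    Edge count satisfies T(n) = 2T(n/2) + O(n), i.e., O(n log n)."""
    edges = set()

    def build(lo, hi):  # half-open interval [lo, hi)
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        for i in range(lo, hi):
            if i != mid:  # connect every point of the interval to its median
                edges.add((min(i, mid), max(i, mid)))
        build(lo, mid)
        build(mid + 1, hi)

    build(0, n)
    return edges
```

For any $p < q$, at the recursion node where they first fall on different sides of the median $m$ (or one of them equals $m$), the edges $(p,m)$ and $(m,q)$ yield a 2-hop path of weight exactly $q-p$, since $p \le m \le q$.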
\paragraph*{\bm{$(1+\epsilon)$}-spanners.~}
In STOC'95 Arya~et~al.~\cite{ADMSS95} proved the so-called ``Dumbbell Theorem'', which states that,
for any $d$-dimensional Euclidean space, a $(1+\epsilon,O(\frac{\log(1/\epsilon)}{\epsilon^d}))$-tree cover
can be constructed in $O(\frac{\log(1/\epsilon)}{\epsilon^d}\cdot n\log{n}+\frac{1}{\epsilon^{2d}}\cdot n)$ time; see \cref{sec:prelim} for the definition of tree cover. The Dumbbell Theorem implies that any construction of 1-spanners for tree metrics can be translated into a construction of Euclidean $(1+\epsilon)$-spanners. Applying the construction of 1-spanners for tree metrics from \cite{Sol13}, this gives rise to an optimal $O(n \log n)$-time construction (in the algebraic computation tree (ACT) model\footnote{Refer to Chapter 3 in \cite{NS07} for the definition of the ACT model. A matching lower bound of $\Omega(n \log n)$ on the time needed to construct Euclidean spanners is given in \cite{CDS01}.}) of Euclidean $(1+\epsilon)$-spanners.
This result can be generalized (albeit not in the ACT model) for the wider family of {\em doubling metrics}, by using the tree cover theorem of Bartal~et~al.~\cite{BFN19}, which generalizes the Dumbbell Theorem of \cite{ADMSS95} for arbitrary doubling metrics.
\subsection{Known lower bounds}
The first lower bound on 1-spanners for tree metrics was given by Yao~\cite{Yao82} and it establishes a tradeoff of $m$ edges versus hop-diameter of $\Omega(\alpha(m,n))$ for the uniform line metric.
Alon~and~Schieber~\cite{AS87} gave a stronger lower bound on 1-spanners for the uniform line metric: hop-diameter $k$ versus $\Omega(n\alpha_k(n))$ edges, for any $k$; it is easily shown that this lower bound implies that of \cite{Yao82}
(see \Cref{lemma:tradeoff}),
but the converse is not true.
The above lower bounds apply to 1-spanners. There is also a lower bound on $(1+\epsilon)$-spanners that applies to line metrics, by Chan and Gupta \cite{CG06},
which extends that of \cite{Yao82}: $m$ edges versus hop-diameter of $\Omega(\alpha(m,n))$.
As mentioned above, this tradeoff only provides a meaningful bound for sufficiently large values of hop-diameter (say, above 30); it does not apply to the small hop-diameter values that are the focus of this work.
More specifically, it can be used to show that any $(1+\epsilon)$-spanner for a certain line metric with hop-diameter at most $k$ must have $\Omega(n \alpha_{2k+6} (n))$ edges. When $k=2$ (resp. $k=3$), this gives $\Omega(n\log^{****}{n})$ (resp. $\Omega(n\log^{*****}{n})$) edges,
which is far from the upper bound of $O(n \log n)$ (resp., $O(n \log\log n)$).
Furthermore, the line metric used in the proof of \cite{CG06} is not as basic as the uniform line metric --- it is derived from hierarchically well-separated trees (HSTs), and to achieve the result for line metrics, an embedding from HSTs to the line with an appropriate separation parameter is employed. The resulting line metric is very far from a uniform one and its aspect ratio\footnote{The {\em aspect ratio} of a metric is the ratio of the maximum pairwise distance to the minimum one.} depends on the stretch --- it will be super-polynomial whenever $\epsilon$ is sufficiently small or sufficiently large;
of course, the aspect ratio of the uniform line metric (which is the metric used by \cite{Yao82,AS87}) is linear in $n$.
As point sets arising in real-life applications (e.g., for various random distributions) have polynomially
bounded aspect ratio, it is natural to ask whether one can achieve a lower bound for a point set of polynomial aspect ratio.
\subsection{Our contribution}
We prove that any $(1+\epsilon)$-spanner for the uniform line metric with hop-diameter $k$ must have at least $\Omega(n\alpha_k(n))$ edges, for any constant $k \ge 2$.
\begin{theorem}\label{thm:main}
For any positive integer $n$, any integer $k\ge2$ and any $\epsilon \in [0, 1/2]$, any $(1+\epsilon)$-spanner with hop-diameter $k$ for the uniform line metric with $n$ points must contain at least $\Omega(\frac{n}{2^{6\floor{k/2}}} \alpha_{k}(n))$ edges.
\end{theorem}
Interestingly, our lower bound applies also to any $\epsilon > 1/2$, where the bound on the number of edges degrades by a factor of $1/\epsilon$, i.e., it becomes $\Omega(n\alpha_k(n) /\epsilon)$.
We stress that our lower bound instance, namely the uniform line metric, does not depend on $\epsilon$, and the lower bound that it provides holds {\em simultaneously for all values of $\epsilon$}.
Although our lower bound on the number of edges coincides with $\Omega(n\alpha_k(n))$ only for constant $k$, we note that the values of $k$ of interest range between $1$ and $O(\alpha(n))$,
where $\alpha(\cdot)$ is a very slowly growing function, e.g., $\alpha(n)$ is asymptotically much smaller than $\log^*n$; we formally define $\alpha(n)$ in \Cref{sec:ackermann}.
Indeed, as mentioned, for $k=2\alpha(n)+4$, we have $\alpha_{2\alpha(n) + 4}(n) \le 4$,
and clearly any spanner must have $\Omega(n)$ edges.
Thus the gap between our lower bound on the number of edges and $\Omega(n\alpha_k(n))$,
namely, a multiplicative factor of $2^{6\floor{k/2}}$, which in particular is no greater than $2^{O(\alpha(n))}$, is very small.
For technical reasons we prove a more general lower bound, stated in \Cref{thm:k}. In particular, we need to consider a more general notion of Steiner spanners\footnote{A Steiner spanner for a point set $S$ is a spanner that may contain additional Steiner points (which do not belong to $S$). Clearly, a lower bound for Steiner spanners also applies to ordinary spanners.}, and to prove the lower bound for a certain family of line metrics to which the uniform line metric belongs;
\Cref{thm:main} follows directly from \Cref{thm:k}. See \Cref{sec:prelim} for the definitions.
For constant values of $k$, \Cref{thm:main} strengthens the lower bound shown by \cite{AS87}, which applies only to stretch 1, whereas our tradeoff holds for arbitrary stretch. Whether or not the term $\frac{1}{2^{6\floor{k/2}}}$ in the bound on the number of edges in \Cref{thm:main} can be removed is left open by our work. As mentioned before, we show in \Cref{sec:tradeoff} that this tradeoff implies the tradeoff of \cite{Yao82} (for stretch 1) and of \cite{CG06} (for larger stretch).
\subsection{Proof Overview}
The starting point of our lower bound argument is the one by \cite{AS87} for stretch 1, which applies to the uniform line metric, $U(n)$. The argument of \cite{AS87} crucially relies on the following {\em separation} property: for any 4 points $i,j,i',j'$ in $U(n)$ such that $i \le j < i' \le j'$, any 1-spanner path between $i$ and $j$ is completely disjoint from any 1-spanner path between $i'$ and $j'$. Thus, for any two sub-intervals that do not overlap (and are thus separated in the most basic sense), any 1-spanner path between a pair of points in one of the sub-intervals does not overlap any 1-spanner path in the other sub-interval. With this separation property in mind, a rather straightforward inductive argument can be employed. Consider for concreteness the easiest case $k = 2$. The number of spanner edges satisfies the recurrence $T_2(n) = 2T_2(n/2) + \Omega(n)$. Indeed, by induction, the number of spanner edges for the left $n/2$ points is at least $T_2(n/2)$, and the same goes for the right $n/2$ points; importantly, the corresponding sets of edges are disjoint by the separation property. All that is left is to argue that there are $\Omega(n)$ \emph{cross edges}, i.e., edges with one endpoint among the left $n/2$ points and the other among the right $n/2$ points; the separation property implies that the cross edges are disjoint from the aforementioned edge sets.

Alas, when the stretch grows from 1 to $1+\epsilon$, even for a tiny $\epsilon > 0$, the basic separation property no longer holds, so it is possible to have significant overlaps between the edge sets corresponding to the recursive calls on the left half and right half of the points, and between them and the cross edges. Of course, as $k$ increases, such overlap issues become even more challenging. To overcome these issues, \cite{CG06} resorted to a different metric space, which is far from a uniform line metric to an extent that renders such overlap issues negligible.
As mentioned before, the resulting metric of \cite{CG06} has a super-polynomial aspect ratio whenever $\epsilon$ is sufficiently small or sufficiently large. Our argument, on the other hand, applies to the uniform line metric; coping with overlap issues in this case is nontrivial. Next, we highlight some of the challenges that our argument overcomes.
The easiest case is $k=2$.
We view the line metric $U(1,n)$ as an interval $[1,n]$ of all integers from $1$ to $n$, and also consider sub-intervals $[i,j]$ of all integers from $i$ to $j$, for all $1 \le i<j \le n$.
Our first observation is that one can achieve a similar separation property as in the case of stretch 1 by, roughly speaking, restricting the attention to a subset of $\Omega(n)$ points that are sufficiently far from the boundaries of sub-intervals. More concretely, for $\epsilon = 1/2$, consider the $n/2$ points closest to $n/2$ (i.e., the sub-interval $[n/4, 3n/4]$): While these $n/2$ points still induce $\Omega(n)$ cross edges, note that any $(1+\epsilon)$-spanner path between any pair of such points is contained in $[1,n]$. To achieve the required separation property, we would like to apply this observation inductively.
However, a naive application is doomed to failure.
Indeed, if we used the induction hypothesis only on the points inside $[n/4,3n/4]$, i.e., on the points of $[n/4, n/2]$ as the left interval and the points of $[n/2+1,3n/4]$
as the right interval, the number of cross edges would degrade by a factor of 2. Losing a factor of 2 at each recursion level cannot give a lower bound larger than $\Omega(n)$. Instead,
we apply the induction hypothesis on all $n/2$ points of the left half and on all $n/2$ points of the right half, but restrict the attention to the $n/4$ points that are closest to $n/4$ in the left half and to the $n/4$ points that are closest to $3n/4$ in the right half, which gives us $n/2$ points in total, just as with the first recursion level. In this way at each level of recursion we restrict the attention to a different set of points, but of the same size $n/2$, and in this way avoid the loss.
The case $k=3$ is handled similarly, but is more intricate --- primarily since the basic argument of \cite{AS87} (for stretch 1) for $k = 3$ is more intricate than for $k = 2$.
We refer the reader to the proof of \Cref{lemma:lb2} (resp. \Cref{lemma:lb3}) for the proof of the case $k=2$ (resp. $k=3$).
For $k\ge 4$, the argument is considerably more involved.
As with \cite{AS87}, we divide the interval $[1,n]$ into consecutive sub-intervals of size $\alpha_{k-2}(n)$ --- the goal is to show that the number of spanner edges satisfies the recurrence $T_k(n) = (n/\alpha_{k-2}(n)) \cdot T_k(\alpha_{k-2}(n)) + \Omega(n)$, implying that $T_k(n) = \Omega(n\alpha_k(n))$. Using the separation property as in the case $k=2$, employing the induction hypothesis on each of the $n/\alpha_{k-2}(n)$ subintervals yields the term $(n/\alpha_{k-2}(n)) \cdot T_k(\alpha_{k-2}(n))$. The crux is to show that there are $\Omega(n)$ cross edges, now defined as edges with endpoints in different sub-intervals.
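For intuition, here is how this recurrence yields the claimed bound (a back-of-the-envelope sketch with constants suppressed; the careful accounting appears in the proof of \Cref{thm:k}). Writing $n_0 = n$ and $n_{i+1} = \alpha_{k-2}(n_i)$, each recursion level contributes $\Omega(n)$ cross edges in total (level $i$ has $n/n_i$ subproblems, each contributing $\Omega(n_i)$ cross edges), so
\begin{equation*}
T_k(n) \;\ge\; cn + \frac{n}{n_1}\,T_k(n_1) \;\ge\; 2cn + \frac{n}{n_2}\,T_k(n_2) \;\ge\; \cdots \;\ge\; cn \cdot \min\{i : n_i = O(1)\} \;=\; \Omega(n\,\alpha_k(n)),
\end{equation*}
where the last step is exactly the defining property of the hierarchy: $\alpha_k$ counts the number of iterations of $\alpha_{k-2}$ needed to reach a constant.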
To ensure that the separation property holds, as in the case $k = 2$, we consider only the $n/2$ points of $[n/4, 3n/4]$. We distinguish between \emph{global} and \emph{non-global} points: A point is called global if it is incident on at least one cross edge, and non-global otherwise. If
a constant fraction of points in $[n/4,3n/4]$ are global,
we must have $\Omega(n)$ cross edges by definition. The complementary case, where a constant fraction of points in $[n/4,3n/4]$ are non-global, is the interesting one.
We'd like to use the induction hypothesis for $k-2$, as in \cite{AS87}, to reason that the number of cross edges induced by the non-global points is at least $T_{k-2}(n/\alpha_{k-2}(n)) = \Omega(n)$. The problem is that the non-global points induce a line metric that is {\em not uniform}, hence we cannot apply the induction hypothesis --- overcoming this obstacle is the key difficulty in our argument.
To be able to apply the induction hypothesis, we prove all the results on a generalized line metric. Consider first what we call a {\em $t$-sparse line metric}, which contains a single point in every consecutive sub-interval of size $t$.
Such a line metric would suffice for applying the induction hypothesis (with hop-diameter $k-2$), {\em if all sub-intervals inside $[n/4,3n/4]$ contained at least one non-global point}. In this special case, we could apply the induction hypothesis on an $\alpha_{k-2}(n)$-sparse line metric, and if we're always in this special case, we'll always have a $t$-sparse line metric for a growing parameter $t$, and it is not difficult to show that the induction step will work fine. Alas, we only know that there is a constant fraction of points around the middle of the interval, and the intervals containing them form a subspace of a $t$-sparse line metric, which is possibly very different from this special case; thus, in general, we are unable to apply the induction hypothesis in such a way.
To overcome this hurdle, we prove a stronger, more general lower bound, which concerns subspaces of $t$-sparse line metrics, where a constant fraction of the points is missing. On the bright side, such generalized spaces
provide the required flexibility for carrying out the induction step.
On the negative side, each invocation of the induction hypothesis with a smaller value of $k$ incurs some multiplicative loss in the number of considered points, yielding the slack in our lower bound that is exponential in $k$.
We refer the reader to the proof of \Cref{thm:k}.
\section{Preliminaries}\label{sec:prelim}
\begin{definition}[Tree covers]
Let $M_X = (X, \delta_X)$ be an arbitrary metric space.
We say that a weighted tree $T$ is a {\em dominating} tree for $M_X$ if $X \subseteq V(T)$ and it holds that $\delta_T(x,y) \ge \delta_X(x,y)$, for every $x,y\in X$.
For $\gamma \ge 1$ and an integer $\zeta\ge1$, a {\em $(\gamma, \zeta)$-tree cover} of $M_X = (X,\delta_X)$ is a collection of $\zeta$ {\em dominating trees} for $M_X$, such that for every $x,y \in X$, there exists a tree $T$ in the collection with $\delta_T(x,y) \le \gamma \cdot \delta_X(x,y)$;
we say that the {\em stretch} between $x$ and $y$ in $T$ is at most $\gamma$,
and the parameter $\gamma$ is referred to as the {\em stretch} of the tree cover.
\end{definition}
\begin{definition}[Uniform line metric]
A uniform line metric $U=(\mathbb{Z}, d)$ is a metric on the set of integer points in which the distance between two points $a,b \in \mathbb{Z}$, denoted by $d(a,b)$, is their Euclidean distance $|a-b|$.
For two integers $l,r\in\mathbb{Z}$, such that $l \le r$, we define a uniform line metric on an interval $[l, r]$, denoted by $U(l, r)$, as a subspace of $U$ consisting of all the integer points $k$, such that $l \le k \le r$. We use $U(n)$ to denote a uniform line metric on the interval $[1,n]$.
\end{definition}
Although we aim to prove the lower bound for uniform line metric, the inductive nature of our argument requires several generalizations of the considered metric space and spanner.
\begin{definition}[$t$-sparse line metric]\label{def:tsparse}
Let $l$ and $r$ be two integers such that $l<r$. We call a metric space $U((l,r),t)$ {\em $t$-sparse} if:
\begin{itemize}
\item It is a subspace of $U(l,r)$.
\item Each of the consecutive intervals of $[l,r]$ of size $t$ ($[l, l+t-1], [l+t, l+2t-1], \dots$) contains exactly one point. These intervals are called $((l,r),t)$-intervals and the point inside each such interval is called the \emph{representative} of the interval.
\end{itemize}
\end{definition}
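To illustrate the definition (and the notion of global edges formalized in \Cref{def:spanner}), here is a small Python sketch of our own: it samples one representative per length-$t$ interval, checks the defining property, and classifies an edge as global when its endpoints lie in different intervals.

```python
import random

def t_sparse_points(l, n, t, seed=0):
    """One representative from each interval [l + j*t, l + (j+1)*t - 1], j = 0..n-1."""
    rng = random.Random(seed)
    return [l + j * t + rng.randrange(t) for j in range(n)]

def interval_index(p, l, t):
    """Index of the ((l, r), t)-interval containing the point p."""
    return (p - l) // t

def is_global(edge, l, t):
    """An edge is ((l, r), t)-global iff its endpoints lie in distinct intervals."""
    a, b = edge
    return interval_index(a, l, t) != interval_index(b, l, t)

l, n, t = 1, 10, 5
r = l + n * t - 1
pts = t_sparse_points(l, n, t)

# Defining property: a subspace of U(l, r) with exactly one point per interval.
assert all(l <= p <= r for p in pts)
assert sorted(interval_index(p, l, t) for p in pts) == list(range(n))
```

An edge between representatives of two different intervals, such as `(pts[0], pts[1])`, is global; an edge within a single interval is not.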
\emph{Throughout the paper, we will always consider Steiner spanners that can contain arbitrary points from the uniform line metric.}
\begin{definition}[Global hop-diameter]\label{def:spanner} For any two integers $l,r$ such that $r=l+nt-1$, let $U((l,r),t)$ be a $t$-sparse line metric with $n$ points and let $X$ be a subspace of $U((l,r),t)$. An edge that connects two points in $U((l,r),t)$ is \emph{$((l,r),t)$-global} if it has endpoints in two different $((l,r),t)$-intervals of $U((l,r),t)$.
A spanner on $X$ with stretch $(1+\epsilon)$ has its \emph{$((l,r),t)$-global hop-diameter} bounded by $k$ if every pair of points in $X$ has a path of stretch at most $(1+\epsilon)$ consisting of at most $k$ $((l,r),t)$-global edges.
\end{definition}
For ease of presentation, we focus on $\epsilon \in [0, 1/2]$, as this is the basic regime.
Our argument naturally extends to any $\epsilon>1/2$, with the lower bound degrading by a factor of $1/\epsilon$.
\begin{lemma}[Separation property]\label{lemma:sep}
Let $l, r, t \in \mathbb{N}$, $l\le r$, $t\ge 1$ and let $i \coloneqq \ceil{\frac{1+\epsilon/2}{1+\epsilon}l + \frac{\epsilon/2}{1+\epsilon}r}$, and $j \coloneqq \floor{\frac{\epsilon/2}{1+\epsilon}l + \frac{1+\epsilon/2}{1+\epsilon}r}$.
Let $a, b$ be two points in $U((l,r),t)$ such that $i\le a < b \le j$. Then, any $(1+\epsilon)$-spanner path between $a$ and $b$ contains points strictly inside $[l,r]$.
\end{lemma}
\begin{proof}
Consider a spanner path between $a$ and $b$ which contains a point $q$ outside $[l,r]$ such that $q > r$. (A similar argument holds for $q < l$.)
The length of any such path is at least $(b-a) + 2(q-b)$. Since $b \le j < r < q$, it holds that $2(q-b) > 2(r-j) \ge \frac{\epsilon}{1+\epsilon}(r-l)$.
On the other hand, since $i \le a < b \le j$, the distance between $a$ and $b$ satisfies $b-a \le j-i \le \frac{1}{1+\epsilon}(r-l)$.
The last two inequalities imply that $2(q-b) > \epsilon(b-a)$. It follows that the spanner path between $a$ and $b$ containing $q$ is of length greater than $(1+\epsilon)(b-a)$, i.e., it has stretch larger than $(1+\epsilon)$.
\end{proof}
\begin{corollary}\label{cor:sep}
For every integer $N\ge 34$ and any $t$-sparse line metric $U((1,N),t)$, any spanner path with stretch at most $3/2$ between metric points $a$ and $b$ such that $\floor{N/4} \le a \le b \le \ceil{3N/4}$ contains points strictly inside $[1,N]$.
\end{corollary}
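The separation property can also be confirmed by brute force. The sketch below (our own check; the parameters $l=1$, $r=100$ are arbitrary) verifies that for all $a,b$ in the middle range $[i,j]$ of \Cref{lemma:sep}, even the cheapest detour, through $q = r+1$ or $q = l-1$, already exceeds stretch $1+\epsilon$.

```python
import math

def middle_range(l, r, eps):
    """The indices i and j from the separation property."""
    i = math.ceil(((1 + eps / 2) * l + (eps / 2) * r) / (1 + eps))
    j = math.floor(((eps / 2) * l + (1 + eps / 2) * r) / (1 + eps))
    return i, j

def separation_holds(l, r, eps):
    """Check: any a-b path via a point outside [l, r] has stretch > 1 + eps.

    A path leaving [l, r] has length at least (b - a) + 2 * (detour), and the
    detour is minimized at q = r + 1 (or q = l - 1), so these two cases suffice.
    """
    i, j = middle_range(l, r, eps)
    for a in range(i, j + 1):
        for b in range(a + 1, j + 1):
            via_right = (b - a) + 2 * ((r + 1) - b)
            via_left = (b - a) + 2 * (a - (l - 1))
            if min(via_right, via_left) <= (1 + eps) * (b - a):
                return False
    return True

for eps in (0.0, 0.25, 0.5):
    assert separation_holds(1, 100, eps)
```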
\subsection{Ackermann functions}\label{sec:ackermann}
Following standard notions \cite{Tarjan75,AS87,Chaz87,NS07,Sol13}, we will introduce two very rapidly growing functions $A(k, n)$ and $B(k, n)$, which are variants of Ackermann's function. Later, we also introduce several inverses and state their properties that will be used throughout the paper.
\begin{definition}
For all $k \ge 0$, the functions $A(k,n)$ and $B(k,n)$ are defined as follows:
\begin{align*}
A(0, n) &\coloneqq 2n, \text{\emph{ for all }} n \ge 0,\\
A(k, n) &\coloneqq
\begin{cases}
1 &\text{\emph{ if }} k \ge 1 \text{\emph{ and }} n = 0\\
A(k-1, A(k, n-1)) & \text{\emph{ if }} k \ge 1 \text{\emph{ and }} n \ge 1\\
\end{cases}\\
B(0, n) &\coloneqq n^2, \text{\emph{ for all }} n \ge 0,\\
B(k, n) &\coloneqq
\begin{cases}
2 &\text{\emph{ if }} k \ge 1 \text{\emph{ and }} n = 0\\
B(k-1, B(k, n-1)) & \text{\emph{ if }} k \ge 1 \text{\emph{ and }} n \ge 1\\
\end{cases}
\end{align*}
\end{definition}
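These recurrences are easy to transcribe and check on small inputs. The following sketch (our own verification aid, not part of the paper) reproduces the familiar small values: $A(1,n)=2^n$, $A(2,n)$ is a tower of twos of height $n$ for $n\ge 1$, and $B(1,n)=2^{2^n}$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(k, n):
    """Ackermann-type function A(k, n) as in the definition."""
    if k == 0:
        return 2 * n
    return 1 if n == 0 else A(k - 1, A(k, n - 1))

@lru_cache(maxsize=None)
def B(k, n):
    """Ackermann-type function B(k, n) as in the definition."""
    if k == 0:
        return n * n
    return 2 if n == 0 else B(k - 1, B(k, n - 1))

# A(1, n) = 2^n and A(2, n) is a tower of twos of height n (for n >= 1):
assert [A(1, n) for n in range(5)] == [1, 2, 4, 8, 16]
assert [A(2, n) for n in range(5)] == [1, 2, 4, 16, 65536]
# B(1, n) = 2^(2^n):
assert [B(1, n) for n in range(4)] == [2, 4, 16, 256]
```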
We now define the functional inverses of $A(k,n)$ and $B(k,n)$.
\begin{definition}
For all $k \ge 0$, the function $\alpha_k(n)$ is defined as follows:
\begin{align*}
\alpha_{2k}(n) &\coloneqq \min\{s \ge 0: A(k, s) \ge n\},\text{\emph{ for all }} n\ge 0\text{\emph{, and}}\\
\alpha_{2k+1}(n) &\coloneqq \min\{s \ge 0: B(k, s) \ge n\},\text{\emph{ for all }} n\ge 0.
\end{align*}
\end{definition}
For technical convenience, we define $\log{x} = 0$ for any real $x \le 0$. All logarithms are to base 2.
It is not hard to verify that $\alpha_0(n) = \lceil n/2\rceil$,
$\alpha_1(n) = \lceil \sqrt{n} \rceil$,
$\alpha_2(n)= \lceil\log{n}\rceil$,
$\alpha_3(n)= \lceil\log\log{n}\rceil$,
$\alpha_4(n)= \log^*{n}$,
$\alpha_5(n)= \lfloor \frac{1}{2}\log^*{n} \rfloor$, etc.
We will use the following property of $\alpha_k(n)$.
\begin{lemma}[cf. Lemma 12.1.16 in \cite{NS07}]\label{lemma:alphakstep}
For each $k\ge 1$, we have:\\
$\alpha_{2k}(n) = 1 + \alpha_{2k}(\alpha_{2k-2}(n))$, for all $n\ge 2$, and\\
$\alpha_{2k+1}(n) = 1 + \alpha_{2k+1}(\alpha_{2k-1}(n))$, for all $n\ge 3$.
\end{lemma}
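The inverses, too, can be computed directly from the definition. The sketch below (again our own verification code) confirms the closed forms for $\alpha_0, \alpha_1, \alpha_2$ listed above and the identity of \Cref{lemma:alphakstep} for small $k$ on sample inputs.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def A(k, n):
    if k == 0:
        return 2 * n
    return 1 if n == 0 else A(k - 1, A(k, n - 1))

@lru_cache(maxsize=None)
def B(k, n):
    if k == 0:
        return n * n
    return 2 if n == 0 else B(k - 1, B(k, n - 1))

def alpha(k, n):
    """alpha_k(n): smallest s with A(k//2, s) >= n (even k) or B(k//2, s) >= n (odd k)."""
    f = A if k % 2 == 0 else B
    s = 0
    while f(k // 2, s) < n:
        s += 1
    return s

# Closed forms for small k, checked on a sample range.
for n in range(1, 200):
    assert alpha(0, n) == math.ceil(n / 2)
    assert alpha(1, n) == math.ceil(math.sqrt(n))
    assert alpha(2, n) == max(0, math.ceil(math.log2(n)))

# The identities alpha_{2k}(n) = 1 + alpha_{2k}(alpha_{2k-2}(n)) and
# alpha_{2k+1}(n) = 1 + alpha_{2k+1}(alpha_{2k-1}(n)) for k = 1, 2.
for n in range(2, 200):
    assert alpha(2, n) == 1 + alpha(2, alpha(0, n))
    assert alpha(4, n) == 1 + alpha(4, alpha(2, n))
for n in range(3, 200):
    assert alpha(3, n) == 1 + alpha(3, alpha(1, n))
```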
Next, another functional inverse of $A(\cdot, \cdot)$ is defined, as in \cite{Tarjan75,Chaz87,CG06,NS07,Sol13}.
\begin{definition} [Inverse Ackermann function]\label{def:invAck}
$\alpha\left(m,n\right) = \min \left\{ i \mid i \ge 1,\ A \left( i, 4\ceil{m/n} \right) > \log_2{n} \right\}$.
\end{definition}
Finally, for all $n\ge 0$, we introduce the Ackermann function as $A(n) \coloneqq A(n,n)$, and its inverse as $\alpha(n) = \min\{s \ge 0 : A(s) \ge n\}$. In \cite{NS07}, it was shown that $\alpha_{2\alpha(n)+2}(n)\le 4$. We observe that $\alpha(n)$ satisfies $\alpha(n) \le \log^*n$ for any $n \ge 2$.
\section{Warm-up: lower bounds for hop-diameters 2 and 3}
In this section, we prove the lower bound for cases $k=2$ (\Cref{lemma:lb2} in \Cref{sec:lb2}) and $k=3$ (\Cref{lemma:lb3} in \Cref{sec:lb3}). In fact, we prove more general statements (\Cref{thm:2,thm:3}), which apply not only to uniform line metric, but to subspaces of $t$-sparse line metrics, where a constant fraction of the points is missing.
We use these general statements in \Cref{sec:lbk}, to prove the result for general $k$ (cf.\ \Cref{thm:k}).
\subsection{Hop diameter 2}\label{sec:lb2}
\begin{theorem}\label{thm:2}
For any two positive integers $n\ge 1000$ and $t$, and any two integers $l,r$ such that $r = l + nt-1$, let $U((l,r),t)$ be a $t$-sparse line metric with $n$ points and let $X$ be a subspace of $U((l,r),t)$ which contains at least $\frac{31}{32}n$ points. Then, for any choice of $\epsilon \in [0,1/2]$, any spanner on $X$ with $((l,r),t)$-global hop-diameter 2 and stretch $1+\epsilon$ contains at least $T'_2(n) \ge \frac{n}{256}\cdot\alpha_{2}(n)$ $((l,r),t)$-global edges which have both endpoints inside $[l,r]$.
\end{theorem}
\begin{remark}
Recall that we consider Steiner spanners, which could possibly contain additional Steiner points from the uniform line metric.
\end{remark}
\begin{remark}
\Cref{thm:2} can be extended to $\epsilon > 1/2$. The only required change in the proof is to decrease the lengths of intervals by a factor of $1+\epsilon$, as provided by \Cref{lemma:sep}; it is readily verified that, as a result, the lower bound decreases by a factor of $\Theta(\epsilon)$.
\end{remark}
The theorem is proved in three steps. First, we prove \Cref{lemma:lb2}, which concerns uniform line metrics. Then, we prove \Cref{lemma:2prim} for a subspace that contains at least a $31/32$ fraction of the points of the original metric. In the third step, we observe that the same argument applies to $t$-sparse line metrics.
\begin{lemma}\label{lemma:lb2}
For any positive integer $n$, and any two integers $l,r$ such that $r = l + n-1$, let $U(l,r)$ be a uniform line metric with $n$ points. Then, for any choice of $\epsilon \in [0,1/2]$, any spanner on $U(l,r)$ with hop-diameter 2 and stretch $1+\epsilon$ contains at least $T_2(n) \ge \frac{1}{16}\cdot n\log{n}$ edges which have both endpoints inside $[l,r]$.
\end{lemma}
\begin{proof}
Suppose without loss of generality that we are working on the uniform line metric $U(1,n)$. Let $H$ be an arbitrary $(1+\epsilon)$-spanner for $U(1,n)$ with hop-diameter 2.
For the base case, we take $n < 128$. In that case our lower bound is
$\frac{n}{16}\cdot \log{n} < n-1$, which is a trivial lower bound for the number of edges in $H$.
For the proof of the inductive step, we can assume that $n \ge 128$. We would like to prove that the number of spanner edges in $H$ is lower bounded by $T_2(n)$, which satisfies the recurrence $T_2(n) = 2T_2(\floor{n/2})+ \Omega(n)$ with the base case $T_2(n) = (n/16)\log{n}$ when $n < 128$. Split the interval into two disjoint parts: the left part $[1,\floor{n/2}]$ and the right part $[\floor{n/2}+1, n]$. From the induction hypothesis on the uniform line metric $U(1,\floor{n/2})$ we know that any spanner with hop-diameter 2 and stretch $1+\epsilon$ contains at least $T_2(\floor{n/2})$ edges that have both endpoints inside $[1,\floor{n/2}]$. Similarly, any spanner for $U(\floor{n/2}+1,n)$ contains at least $T_2(\floor{n/2})$ edges that have both endpoints inside $[\floor{n/2}+1,n]$. This means that the sets of edges considered on the left side and the right side are disjoint. We will show below that there are $\Omega(n)$ edges that have one endpoint in the left part and the other in the right part.
Consider the set $L$, consisting of the points inside $[n/4, \floor{n/2}]$ and the set $R$, consisting of the points in $[\floor{n/2}+1, 3n/4]$. From \Cref{cor:sep}, since $n$ is sufficiently large, we know that any $(1+\epsilon)$-spanner path connecting point $a \in L$ and $b\in R$ has to have all its points inside $[1,n]$.
We use term \emph{cross edge} to denote any edge that has one endpoint in the left part and the other endpoint in the right part.
We claim that any spanner with hop-diameter at most $2$ and stretch $1+\epsilon$ has to contain at least $\min(|L|,|R|)$ cross edges. Without loss of generality, assume that $|L| \le |R|$. Suppose for contradiction that the spanner contains fewer than $|L|$ cross edges. This means that at least one point $x\in L$ is not connected via a direct edge to any point on the right. Observe that, for every point $r \in R$, the 2-hop spanner path between $x$ and $r$ must be of the form $(x,l_r,r)$ for some point $l_r$ in the left set. It follows that every $r\in R$ induces a different cross edge $(l_r, r)$. Thus, the number of cross edges is at least $|R| \geq |L|$, which is a contradiction. From the definition of $L$ and $R$, we know that $\min(|L|,|R|) \ge n/4-2$, implying that the number of cross edges is at least $n/4-2 \ge 11n/64$, which holds for all $n \ge 26$. (See also \Cref{fig:lb2} for an illustration.) Thus, we have:
\begin{align*}
T_2(n) ~=~ 2T_2(\floor{n/2}) + \frac{11n}{64}
~\stackrel{\mbox{\tiny{induction}}}{\ge}~ 2\cdot\frac{\floor{n/2}}{16}\log\floor{n/2} + \frac{11n}{64}
~\ge~ \frac{n}{16}\cdot \log{n}~,
\end{align*}
as claimed.
\end{proof}
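The recurrence and its solution can be sanity-checked numerically; the following sketch (our own, with the base case and the $11n/64$ cross-edge term taken directly from the proof) verifies $T_2(n) \ge \frac{n}{16}\log n$ over a range of $n$.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T2(n):
    """Lower-bound recurrence from the proof; base case below 128 points."""
    if n < 128:
        return (n / 16) * math.log2(n)
    return 2 * T2(n // 2) + 11 * n / 64

# The recurrence dominates (n/16) * log n on a sample range.
assert all(T2(n) >= (n / 16) * math.log2(n) for n in range(1, 20000))
```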
\begin{figure}
\caption{An illustration of the first two levels of the recurrence for the lower bound for $k=2$ and $\epsilon=1/2$. We split the interval $U(1, n)$ into two disjoint parts.
In \Cref{lemma:lb2}, we show that at least $\min(|L|,|R|)$ cross edges connect the left and right parts.
\label{fig:lb2}
\end{figure}
\begin{lemma}\label{lemma:2prim}
For any positive integer $n$, and any two integers $l,r$ such that $r = l + n-1$, let $U(l,r)$ be a uniform line metric with $n$ points and let $X$ be a subspace of $U(l,r)$ which contains at least $\frac{31}{32}n$ points. Then, for any choice of $\epsilon \in [0,1/2]$, any spanner on $X$ with hop-diameter 2 and stretch $1+\epsilon$ contains at least $T'_2(n) \ge 0.48 \cdot \frac{n}{16}\log{n}$ edges which have both endpoints inside $[l,r]$.
\end{lemma}
\begin{proof}
Recall the recurrence used in \Cref{lemma:lb2}, $T_2(n) =2 T_2(\floor{n/2}) + \frac{11n}{64}$, which provides a lower bound on the number of edges of any $(1+\epsilon)$-spanner with hop-diameter 2 for $U(l,r)$ for any $l$ and $r$ such that $r = l+n-1$. The base case for this recurrence occurs when the considered interval contains less than 128 points. Consider the recursion tree of $T_2(n)$ and denote its depth by $\ell$ and the number of nodes at depth $i$ by $c_i$. In addition, denote by $n_{i,j}$ the number of points in the $j$th interval of the $i$th recursion level and by $e_{i,j}$ the number of cross edges contributed by this interval. By definition, we have
\begin{align*}
T_2(n)
&= \sum_{i=0}^{\ell}\sum_{j=1}^{c_i} e_{i,j}
\ge \frac{n}{16}\cdot \log{n}.
\end{align*}
The recursion tree of $T_2(n)$ contains one node at depth $i=0$, which corresponds to an interval of $n$ points, two nodes at depth $i=1$, corresponding to intervals of $\floor{n/2}$ points, etc. Letting $n_i$ denote the total size of the intervals corresponding to nodes at depth $i$ of the recursion tree, we get
\begin{align*}
n_i = 2^i\cdot\floor*{\frac{n}{2^i}} \ge n-2^i + 1.
\end{align*}
The base case of the recurrence occurs whenever the considered interval contains less than 128 points. In other words, leaves of the recursion tree have $n_{i,j} < 128$.
Since the size of all the intervals at depth $i$ is the same and equals $\floor{n/2^{i}}$, it follows that the leaves of the tree are at some level $i$ which satisfies $64 \le \floor{n/2^{i}} \le 127$. At that level, $n_i \ge 0.98\cdot n$. For some level $i$ of the recursion tree, we refer to all the $n-n_i$ points which are not contained in any of the intervals of that level as \emph{ignored points}. We refer to the collection of intervals containing ignored points as \emph{ignored intervals}.
Let $H'$ be any $(1+\epsilon)$-spanner on $X$ with hop-diameter 2. To lower bound the number of spanner edges in $H'$, we now consider the same recursion tree but take into consideration the fact that we are working on the metric $X$, which is a subspace of $U(l,r)$. Hence, at each level of recursion, instead of $n$ points, there are at least $31n/32$ points in all the intervals of that level. We call the $j$th interval in the $i$th level \emph{good} if it contains at least $15n_{i,j}/16$ points from $X$. (Recall that we have used $n_{i,j}$ to denote the number of points from $U(l,r)$ in the $j$th interval of the $i$th level.) From the definition of a good interval and the fact that the intervals of each recursion level together contain at least $31n/32$ points, it follows that at least $n/2$ points are contained in the good intervals at the $i$th level. Indeed, if the bad intervals contained more than $n/2$ of the points of $U(l,r)$, then, since each bad interval has more than a $1/16$-fraction of its points missing, more than $n/32$ points would be missing, contradicting the fact that $X$ contains at least $31n/32$ points. Recalling that we have at most $0.02n$ ignored points, we conclude that there are at least $0.48n$ points contained in the good intervals which were not ignored.
In \Cref{lemma:lb2}, we worked on the uniform line metric $U(1,n)$ and considered two intervals $L$ and $R$. We lower bounded the number of cross edges in this interval by $\min(|L|,|R|)\ge n/4-2 \ge 11n/64$, which holds for all $n \ge 26$. Observe that if the interval $[1,n]$ contained at least $15n/16$ points, rather than $n$, the same bound on the number of cross edges would hold, since $\min(|L|,|R|)-n/16 \ge n/4-2-n/16\ge 11n/64$, for all $n \ge 128$. This means that every good interval of size $n_{i,j}$ contributes at least $11n_{i,j}/64$ cross edges, as long as it is not considered as the base case (i.e. as long as $n_{i,j} \ge 128$). The same reasoning is applied inductively, so it holds also for $n_{i,j}$, rather than $n$, for any $i,j$.
Next, we analyze the recurrence $T'_2(n)$ representing the contribution of the intervals to the number of spanner edges for $X$. A lower bound on $T'_2(n)$ will provide a lower bound on the number of spanner edges of any spanner $H'$ of $X$ as defined above. Denote by $\Gamma_i$ all the good intervals in the $i$th level which were not ignored and by $e'_{i,j}$ the number of cross edges contributed by the $j$th interval in the $i$th level.
Then we have
\begin{align*}
T'_2(n)
&= \sum_{i=0}^{\ell}\sum_{j=1}^{c_i} e'_{i,j} \ge \sum_{i=0}^{\ell}\sum_{j\in \Gamma_i} e_{i,j} \ge 0.48\cdot T_2(n) \ge 0.48 \cdot \frac{n}{16}\log{n}
\end{align*}
as claimed.
This completes the proof of \Cref{lemma:2prim}.
\end{proof}
\begin{proof}[Completing the proof of \Cref{thm:2}] Note that $\alpha_2(n)=\ceil{\log{n}}$; hence, we will show that $T'_2(n)\geq \frac{n}{256}\ceil{\log n}$. Suppose without loss of generality that we are working on a $t$-sparse line metric with $n$ points, $U((1,N),t)$, where $N = nt$. Let $H$ be an arbitrary $(1+\epsilon)$-spanner for $U((1,N),t)$ with $((1,N),t)$-global hop-diameter 2. We would like to lower bound the number of $((1,N),t)$-global edges required for $H$. Let $M=\floor{n/2}t$, let $L$ be the set of $((1,N),t)$-intervals that are fully inside $[N/4,M]$, and let $R$ be the set of $((1,N),t)$-intervals that are fully inside $[M,3N/4]$. The number of $((1,N),t)$-intervals inside $L$ can be lower bounded by $|L|\ge\floor{(M-N/4+1)/t}\ge n/4-2$, which matches the bound used for $L$ in \Cref{lemma:lb2}. Similarly, we obtain that $|R|\ge n/4-1$. The cross edges are those edges that have one endpoint in $[1,M]$ and the other endpoint in $[M+1,N]$; in particular, every cross edge is a $((1,N),t)$-global edge. The same argument as before lower bounds the number of cross edges, and hence the number of $((1,N),t)$-global edges. The same proof as in \Cref{lemma:2prim} then gives:
\begin{equation*}
T'_2(n)\geq 0.48\cdot \frac{n}{16}\log{n}\geq \frac{n}{256}\ceil{\log n},
\end{equation*}
when $n\geq 1000$, as desired.
\end{proof}
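For intuition on why $\Theta(n\log n)$ is the right answer for $k=2$, recall the classical matching construction (a well-known folklore scheme, sketched here in our own code and not part of the paper's argument): recursively connect every point to the midpoint of its interval. Every pair is then joined by an exact, at most 2-hop path through the first midpoint separating it, and the edge count satisfies $E(n) = 2E(n/2) + O(n) = O(n\log n)$.

```python
import math

def two_hop_spanner(n):
    """Edges of a stretch-1, hop-diameter-2 spanner of U(1, n) (points 0..n-1)."""
    edges = set()

    def build(l, r):
        if l >= r:
            return
        m = (l + r) // 2
        # Connect every point of [l, r] to the midpoint m.
        for x in range(l, r + 1):
            if x != m:
                edges.add((min(x, m), max(x, m)))
        build(l, m - 1)
        build(m + 1, r)

    build(0, n - 1)
    return edges

n = 64
edges = two_hop_spanner(n)
adj = {v: set() for v in range(n)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Every pair has an exact (stretch-1) path with at most 2 hops.
for a in range(n):
    for b in range(a + 1, n):
        direct = (a, b) in edges
        via = any(a <= c <= b for c in adj[a] & adj[b])
        assert direct or via

assert len(edges) <= n * math.ceil(math.log2(n))
```

The midpoint $c$ of the first interval separating $a$ and $b$ lies between them, so the 2-hop path $(a,c,b)$ has length exactly $b-a$.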
\subsection{Hop diameter 3}\label{sec:lb3}
\begin{theorem}\label{thm:3}
For any two positive integers $n\ge 1000$ and $t$, and any two integers $l,r$ such that $r = l + nt-1$, let $U((l,r),t)$ be a $t$-sparse line metric with $n$ points and let $X$ be a subspace of $U((l,r),t)$ which contains at least $\frac{127}{128}n$ points. Then, for any choice of $\epsilon \in [0,1/2]$, any spanner on $X$ with $((l,r),t)$-global hop-diameter 3 and stretch $1+\epsilon$ contains at least $T'_3(n) \ge \frac{n}{1024}\cdot\alpha_{3}(n)$ $((l,r),t)$-global edges which have both endpoints inside $[l,r]$.
\end{theorem}
\begin{remark}
Recall that we consider Steiner spanners, which could possibly contain additional Steiner points from the uniform line metric.
\end{remark}
\begin{remark}
\Cref{thm:3} can be extended to $\epsilon > 1/2$. The only required change in the proof is to decrease the lengths of intervals by a factor of $1+\epsilon$, as provided by \Cref{lemma:sep}; it is readily verified that, as a result, the lower bound decreases by a factor of $\Theta(\epsilon)$.
\end{remark}
The theorem is proved in three steps. First, we prove \Cref{lemma:lb3}, which concerns uniform line metrics. Then, we prove \Cref{lemma:3prim} for a subspace that contains at least a $127/128$ fraction of the points of the original metric. In the third step, we observe that the same argument applies to $t$-sparse line metrics.
\begin{lemma}\label{lemma:lb3}
For any positive integer $n$, and any two integers $l,r$ such that $r = l + n-1$, let $U(l,r)$ be a uniform line metric with $n$ points. Then, for any choice of $\epsilon \in [0,1/2]$, any spanner on $U(l,r)$ with hop-diameter 3 and stretch $1+\epsilon$ contains at least $T_3(n) \ge \frac{n}{40} \log\log{n}$ edges which have both endpoints inside $[l,r]$.
\end{lemma}
\begin{proof}
Suppose without loss of generality that we are working on the uniform line metric $U(1,n)$. Let $H$ be an arbitrary $(1+\epsilon)$-spanner for $U(1,n)$ with hop-diameter 3.
For the base case, we assume that $n < 128$. We have that $ \frac{n}{40} \log\log{n} < n-1$, which is a trivial lower bound on the number of edges of $H$.
We now assume that $n \ge 128$. Divide the interval $[1,n]$ into consecutive subintervals containing $b\coloneqq\floor{\sqrt{n}}$ points: $[1, b], [b+1, 2b]$, etc.
Our goal is to show that the number of spanner edges is lower bounded by $T_3(n)$, which satisfies recurrence
\begin{align*}
T_3(n) = \floor*{\frac{n}{\floor*{\sqrt{n}}}} \cdot T_3\left(\floor*{\sqrt{n}}\right) + \Omega(n),
\end{align*}
with the base case $T_3(n) = (n/40)\log\log{n}$ when $n< 128$.
For any $j$ such that $1 \le j \le \floor{n/b}$, the $j$th subinterval is $[(j-1)b+1, jb]$. Using the induction hypothesis, any spanner on $U((j-1)b+1, jb)$ contains at least $T_3(b)$ edges that are inside $[(j-1)b+1, jb]$. This means that all the subintervals together contribute at least $\floor{n/b} \cdot T_3(b)$ spanner edges that are mutually disjoint and, in addition, do not go outside of $[1,n]$. We will show that there are $\Omega(n)$ edges that have endpoints in two different subintervals, called \emph{cross edges}. By definition, the set of cross edges is disjoint from the set of spanner edges counted in the term $\floor{n/b}\cdot T_3(b)$.
Consider the points that are within interval $[n/4, 3n/4]$. From \Cref{cor:sep}, since $n$ is sufficiently large, we know that any $(1+\epsilon)$-spanner path connecting two points in $[n/4,3n/4]$ has to have all its points inside $[1,n]$.
We call a point \emph{global} if it is adjacent to at least one cross edge. Otherwise, the point is \emph{non-global}. The following two claims bound the number of cross edges induced by global and non-global points, respectively.
\begin{claim}\label{claim:g3}
Suppose that among points inside interval $[n/4,3n/4]$,
$m$ of them are global. Then, they induce at least $m/2$ spanner edges.
\end{claim}
\begin{proof}
Each global point contributes at least one cross edge and each edge is counted at most twice.
\end{proof}
\begin{claim}\label{claim:ng3}
Suppose that among points inside interval $[n/4,3n/4]$, $m$ of them are non-global. Then, they induce at least $\binom{m/\sqrt{n}}{2}$ cross edges.
\end{claim}
\begin{proof}
Consider two distinct subintervals $A$ and $B$ such that $A$ contains a non-global point $a \in [n/4,3n/4]$ and $B$ contains a non-global point $b \in [n/4,3n/4]$. Since $a$ is non-global, it can be connected via an edge only to a point inside $A$ or to a point outside of $[1,n]$. Similarly, $b$ can be connected only to a point inside $B$ or to a point outside of $[1,n]$. From \Cref{cor:sep}, and since $a,b \in [n/4,3n/4]$, we know that every spanner path with stretch $(1+\epsilon)$ connecting $a$ and $b$ has to use only points inside $[1,n]$. This means that the spanner path with stretch $(1+\epsilon)$ has to have the form $(a,a',b',b)$, where $a' \in A$ and $b' \in B$.
In other words, the points $a'$ and $b'$ have to be connected by a cross edge; furthermore, every pair of subintervals containing at least one non-global point induces one such edge, and these edges are distinct for different pairs.
Each subinterval contains at most $b = \floor{\sqrt{n}}$ non-global points, so the number of subintervals containing at least one non-global point is at least $m/b$.
Interconnecting all these subintervals requires $\binom{m/b}{2} \ge \binom{m/\sqrt{n}}{2}$ edges.
\end{proof}
The number of points inside $[n/4,3n/4]$ is at least $n/2+1$, but we shall use a slightly weaker lower bound of $15n/32$.
We consider two complementary cases. In the first case, at least a $1/4$ fraction of these $15n/32$ points is global. \Cref{claim:g3} implies that the number of cross edges induced by these points is at least $15n/256$.
In the other case, at least a $3/4$ fraction of these $15n/32$ points is non-global. \Cref{claim:ng3} implies that, for a sufficiently large $n$, the number of cross edges induced by these points can be lower bounded by $15n/256$ as well.
In other words, we have shown that in both cases, the number of cross edges is at least $\frac{15}{256}n > \frac{n}{18}$. Thus, we have:
\begin{align*}
T_3(n) &\geq \floor*{\frac{n}{\floor*{\sqrt{n}}}} \cdot T_3\left(\floor{\sqrt{n}}\right) + \frac{n}{18} ~\stackrel{\mbox{\tiny{induction}}}{\ge}~ \floor{\sqrt{n}} \cdot \frac{\floor{\sqrt{n}}}{40}(\log\log\floor{\sqrt{n}}) + \frac{n}{18}
~\ge~ \frac{n}{40}\log\log{n}~,
\end{align*}
as claimed.
\end{proof}
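As for hop-diameter 2, the recurrence can be checked numerically; the sketch below (our own, with the base case and the $n/18$ cross-edge term taken from the proof, and the paper's convention $\log x = 0$ for $x \le 0$) verifies $T_3(n) \ge \frac{n}{40}\log\log n$ on sample inputs.

```python
import math
from functools import lru_cache

def loglog(n):
    """log2(log2(n)), with the paper's convention that log of a nonpositive value is 0."""
    x = math.log2(n)
    return math.log2(x) if x > 1 else 0.0

@lru_cache(maxsize=None)
def T3(n):
    """Lower-bound recurrence from the proof; base case below 128 points."""
    if n < 128:
        return (n / 40) * loglog(n)
    b = math.isqrt(n)
    return (n // b) * T3(b) + n / 18

# The recurrence dominates (n/40) * loglog n on a sample range.
ns = list(range(128, 5000)) + [10**4, 10**5, 10**6, 10**7]
assert all(T3(n) >= (n / 40) * loglog(n) for n in ns)
```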
\begin{lemma}\label{lemma:3prim}
For any positive integer $n$, and any two integers $l,r$ such that $r = l + n-1$, let $U(l,r)$ be a uniform line metric with $n$ points and let $X$ be a subspace of $U(l,r)$ which contains at least $\frac{127}{128}n$ points. Then, for any choice of $\epsilon \in [0,1/2]$, any spanner on $X$ with hop-diameter 3 and stretch $1+\epsilon$ contains at least $T'_3(n) \ge 0.18\cdot\frac{n}{40}\log\log{n}$ edges which have both endpoints inside $[l,r]$.
\end{lemma}
\begin{proof}
Recall the recurrence used in \Cref{lemma:lb3}, $T_3(n) = \floor{n/\floor{\sqrt{n}}}\cdot T_3(\floor{\sqrt{n}}) + \frac{n}{18}$, which provides a lower bound on the number of edges of any $(1+\epsilon)$-spanner with hop-diameter 3 for $U(l,r)$ for any $l$ and $r$ such that $r = l+n-1$. The base case for this recurrence occurs whenever the considered interval contains less than 128 points. Consider the recursion tree of $T_3(n)$ and denote its depth by $\ell$ and the number of nodes at depth $i$ by $c_i$. In addition, denote by $n_{i,j}$ the number of points in the $j$th interval of the $i$th recursion level and by $e_{i,j}$ the number of cross edges contributed by this interval. By definition, we have
\begin{align*}
T_3(n)
&= \sum_{i=0}^{\ell}\sum_{j=1}^{c_i} e_{i,j}
~\ge~ \frac{n}{40}\cdot \log\log{n}.
\end{align*}
The recursion tree of $T_3(n)$ contains one node at depth $i=0$, which corresponds to an interval of $n$ points, $\floor{n/\floor{\sqrt{n}}}$ nodes at depth $i=1$, corresponding to intervals of $\floor{\sqrt{n}}$ points, etc. Letting $n_i$ denote the total size of the intervals corresponding to nodes at depth $i$ of the recursion tree, we get
\begin{align*}
n_i \ge n - \sum_{j=1}^{i-1} n ^ {1 - 1/2^j}.
\end{align*}
Since the size of all the intervals at depth $i$ is the same and equals $\floor{n^{1-1/2^i}}$, it follows that the leaves of the tree are at some level $i$ which satisfies $11 \le \floor{n^{1-1/2^i}} \le 127$. At that level, $n_i \ge 0.68\cdot n$. For some level $i$ of the recursion tree, we refer to all the $n-n_i$ points which are not contained in any of the intervals of that level as \emph{ignored points}. We refer to the collection of intervals containing ignored points as \emph{ignored intervals}.
Let $H'$ be any $(1+\epsilon)$-spanner on $X$ with hop-diameter 3. To lower bound the number of spanner edges in $H'$, we now consider the same recursion tree but take into consideration the fact that we are working on metric $X$, which is a subspace of $U(l,r)$. Hence, at each level of recursion,
instead of $n$ points, there are at least $127n/128$ points in all the intervals of that level. We call the $j$th interval in the $i$th level \emph{good} if it contains at least $63n_{i,j}/64$ points from $X$. (Recall that we have used $n_{i,j}$ to denote the number of points from $U(l,r)$ in the $j$th interval of the $i$th level.) From the definition of a good interval and the fact that the intervals of each recursion level together contain at least $127n/128$ points, it follows that at least $n/2$ points are contained in the good intervals at the $i$th level. Indeed, if the bad intervals contained more than $n/2$ of the points of $U(l,r)$, then, since each bad interval has more than a $1/64$-fraction of its points missing, more than $n/128$ points would be missing; this contradicts the fact that $X$ contains at least $127n/128$ points. In conclusion, at each level $i$, we have at least $0.18n$ points contained in the good intervals which were not ignored.
In \Cref{lemma:lb3}, we worked on the uniform line metric $U(1,n)$ and considered the points inside $[n/4,3n/4]$. We lower bounded the number of points inside $[n/4,3n/4]$ by $15n/32$. Observe that if the interval $[1,n]$ contained at least $63n/64$ points, rather than $n$, the same bound on the number of cross edges would hold, for any $n\ge 128$. This means that every good interval of size $n_{i,j}$ contributes at least $n_{i,j}/18$ cross edges. The same reasoning is applied inductively, so it holds also for $n_{i,j}$ rather than $n$ for any $i,j$.
Next, we analyze the recurrence $T'_3(n)$ representing the contribution of the intervals to the number of spanner edges for $X$. A lower bound on $T'_3(n)$ will provide a lower bound on the number of edges of any spanner $H'$ of $X$ as defined above.
Denote by $\Gamma_i$ all the good intervals in the $i$th level which were not ignored and by $e'_{i,j}$ the number of cross edges contributed by the $j$th interval in the $i$th level. Then, we have
\begin{align*}
T'_3(n)
&= \sum_{i=0}^{\ell}\sum_{j=1}^{c_i} e'_{i,j}~\ge~ \sum_{i=0}^{\ell}\sum_{j\in \Gamma_i} e_{i,j}~\ge~ 0.18\cdot T_3(n) \ge 0.18\cdot\frac{n}{40}\log\log{n}
\end{align*}
as claimed. This completes the proof of \Cref{lemma:3prim}.
\end{proof}
\begin{proof}[Completing the proof of \Cref{thm:3}] Note that $\alpha_3(n)=\ceil{\log\log{n}}$; hence, we will show that $T'_3(n) \ge \frac{n}{1024}\cdot \ceil{\log\log n}$. Suppose without loss of generality that we are working on a $t$-sparse line metric with $n$ points, $U((1,N),t)$, where $N = nt$. Let $H$ be an arbitrary $(1+\epsilon)$-spanner for $U((1,N),t)$ with $((1,N),t)$-global hop-diameter 3. We would like to lower bound the number of $((1,N),t)$-global edges required for $H$. Consider the set of $((1,N),t)$-intervals that are fully inside $[N/4,3N/4]$. The number of such intervals can be lower bounded by $(3N/4-N/4)/t-2 = n/2-2$, which is larger than the bound of $15n/32$ that we used (for $n \ge 64$).
The cross edges are $((1,N),t)$-global edges, and the same argument applies to lower bound their number. The same proof as in \Cref{lemma:3prim} gives:
\begin{equation*}
T'_3(n)\geq 0.18\cdot \frac{n}{40}\log\log{n}\geq \frac{n}{1024}\cdot \ceil{\log\log n}
\end{equation*}
when $n\geq 1000$, as desired.
\end{proof}
\section{Lower bound for constant hop-diameter}\label{sec:lbk}
We proceed to prove our main result, which is a generalization of \Cref{thm:main}. In particular, invoking \Cref{thm:k}, stated below, with $X$ being the uniform line metric $U(1,n)$ gives \Cref{thm:main}.
\begin{theorem}\label{thm:k}
For any two positive integers $n\ge 1000$ and $t$, and any two integers $l,r$ such that $r = l + nt-1$, let $U((l,r),t)$ be a $t$-sparse line metric with $n$ points and let $X$ be a subspace of $U((l,r),t)$ which contains at least $n(1-\frac{1}{2^{k+4}})$ points. Then, for any choice of $\epsilon \in [0,1/2]$ and any integer $k\ge 2$, any spanner on $X$ with $((l,r),t)$-global hop-diameter $k$ and stretch $1+\epsilon$ contains at least $T'_k(n) \ge \frac{n}{2^{6\floor{k/2}+4}} \cdot \alpha_{k}(n)$ $((l,r),t)$-global edges which have both endpoints inside $[l,r]$.
\end{theorem}
\begin{remark}
Recall that we consider Steiner spanners, which could possibly contain additional Steiner points from the uniform line metric.
\end{remark}
\begin{remark}
\Cref{thm:k} can be extended to $\epsilon > 1/2$. The only required change in the proof is to decrease the lengths of intervals by a factor of $1+\epsilon$, as provided by \Cref{lemma:sep}; it is readily verified that, as a result, the lower bound decreases by a factor of $\Theta(\epsilon)$.
\end{remark}
\begin{proof}
We will prove the theorem by double induction on $k\ge 2$ and $n$.
The base case for $k=2$ and $k=3$ and every $n$ is proved in \Cref{thm:2,thm:3}, respectively.
For every $k\ge 4$, we shall prove the following two assertions.
\begin{assertions}
\item\label{assertion:1} For any two positive integers $n$ and $t$, and any two integers $l,r$ such that $r = l + nt-1$, let $U((l,r),t)$ be a $t$-sparse line metric with $n$ points. Then, for any choice of $ \epsilon \in [0,1/2]$, any spanner on $U((l,r),t)$ with $((l,r),t)$-global hop-diameter $k$ and stretch $1+\epsilon$ contains at least $T_k(n) \ge \frac{n}{2^{6\floor{k/2}+2}} \alpha_{k}(n)$ $((l,r),t)$-global edges which have both endpoints inside $[l,r]$.
\item\label{assertion:2} For any two positive integers $n$ and $t$, and any two integers $l,r$ such that $r = l + nt-1$, let $U((l,r),t)$ be a $t$-sparse line metric with $n$ points and let $X$ be a subspace of $U((l,r),t)$ which contains at least $n(1-\frac{1}{2^{k+4}})$ points. Then, for any choice of $\epsilon \in [0,1/2]$, any spanner on $X$ with $((l,r),t)$-global hop-diameter $k$ and stretch $1+\epsilon$ contains at least $T'_k(n) \ge \frac{n}{2^{6\floor{k/2}+4}} \cdot \alpha_{k}(n)$ $((l,r),t)$-global edges which have both endpoints inside $[l,r]$.
\end{assertions}
For every $k \ge 4$, we first prove the first assertion, which relies on the second assertion for $k-2$. Then, we prove the second assertion which relies on the first assertion for $k$.
We proceed to prove \cref{assertion:1}.
\paragraph*{Proof of \cref{assertion:1}.}
Suppose without loss of generality that we are working on any $t$-sparse line metric $U((1,N),t)$. Let $H$ be an arbitrary $(1+\epsilon)$-spanner for $U((1,N),t)$ with $((1,N),t)$-global hop-diameter $k$.
For the base case, we take $1 \le n \le 10000$. Then $\frac{n}{2^{6\floor{k/2}+2}} \alpha_{k}(n) \le \frac{n}{2^{6\floor{k/2}+2}}\log^*(n)\le n-1$, and $n-1$ is a trivial lower bound on the number of edges of any spanner.
Next, we prove the induction step. We shall assume the correctness of the two statements: (i) for $k$ and all smaller values of $n$,
and (ii) for $k' < k$ and all values of $n$.
Let $N \coloneqq nt$ and let $b\coloneqq\alpha_{k-2}(n)$. Divide the interval $[1,N]$ into consecutive $((1,N),bt)$-intervals containing $b$ points: $[1, bt], [bt+1, 2bt]$, etc. We would like to prove that the number of spanner edges is lower bounded by the recurrence
\begin{align*}
T_k(n) = \floor*{\frac{n}{\alpha_{k-2}(n)}}\cdot T_{k}(\alpha_{k-2}(n)) + \Omega{\left(\frac{n}{2^{3k}}\right)},
\end{align*}
with the base case $T_k(n) = \frac{n}{2^{6\floor{k/2}+2}}\alpha_k(n)$ for $n \le 10000$.
There are $\floor{n/b}$ $((1,N),bt)$-intervals containing exactly $b$ points. For any $j$ such that $1 \le j \le \floor{n/b}$, the $j$th $((1,N),bt)$-interval is $[(j-1)bt+1, jbt]$. Using \cref{assertion:1} inductively for $k$ and a value $b < n$, any spanner on $U(((j-1)bt+1, jbt),t)$ contains at least $T_k(b)$ edges that are inside $[(j-1)bt+1, jbt]$. This means that all the $((1,N),bt)$-intervals together contribute at least $\floor{n/b} \cdot T_k(b)$ spanner edges that are mutually disjoint and, in addition, do not go outside of $[1,N]$.
We will show that there are $\Omega(n/2^{3k})$ edges that have endpoints in two different $((1,N),bt)$-intervals, i.e. edges that are $((1,N),bt)$-global. Since these edges are $((1,N),bt)$-global, they are disjoint from the spanner edges considered in the term $\floor{n/b}\cdot T_k(b)$. We shall focus on points that are inside $((1,N),bt)$-intervals fully inside $[N/4, 3N/4]$;
denote the number of such points by $p$. We have $p\ge n/2-2\alpha_{k-2}(n)$, but we will use a weaker bound:
\begin{equation} \label{e:pnt}
p\ge n/4.
\end{equation}
\begin{definition}
A point that is incident on at least one $((1,N),bt)$-global edge is called a $((1,N),bt)$-global point.
\end{definition}
Among the $p$ points inside $[N/4, 3N/4]$, denote by $p'$ the number of $((1,N),bt)$-global points. Let $p'' = p - p'$, and $m$ be the number of $((1,N),bt)$-global edges incident on the $p$ points.
Since each $((1,N),bt)$-global point contributes at least one $((1,N),bt)$-global edge and each such edge is counted at most twice,
we have
\begin{equation}\label{e:global}
m ~\ge~ p' / 2.
\end{equation}
Next, we prove that
\begin{equation} \label{e:main}
m ~\ge~ \frac{n}{2^{6\floor{k/2}+1}}, \mbox{~~if~} \ceil*{\frac{p''}{b}}
~\ge~ \left(1-\frac{1}{2^{k+2}}\right)\cdot\ceil*{\frac{p}{b}}
\end{equation}
Recall that we have divided $[1,N]$ into consecutive $((1,N),bt)$-intervals containing $b\coloneqq\alpha_{k-2}(n)$ points.
Consider now all the $((1,N),bt)$-intervals that are fully inside $[N/4,3N/4]$, and denote this collection of $((1,N),bt)$-intervals by $\mathcal{C}$. Let $l'$ (resp. $r'$) be the leftmost (resp. rightmost) point
of the leftmost (resp. rightmost) interval in $\mathcal{C}$; note that $l'$ and $r'$ may not coincide with points of the input metric,
they are simply the leftmost and rightmost boundaries of the intervals in $\mathcal{C}$.
\paragraph*{Constructing a new line metric.}
For each $((1,N),bt)$-interval $I$ in $\mathcal{C}$, if $I$ contains a point that is not $((1,N),bt)$-global, assign an arbitrary such point in $I$ as its representative; otherwise, assign an arbitrary point as its representative. The collection $\mathcal{C}$ of $((1,N),bt)$-intervals, together with the set of representatives, uniquely defines a $(bt)$-sparse line metric, $U((l',r'),bt)$.
This metric has $\ceil{p/b}$ $((1,N),bt)$-intervals, since there are $\ceil{p/b}$ intervals covering $p$ points in the input $t$-sparse metric $U((1,N),t)$ inside the interval $[N/4,3N/4]$. Recall from \Cref{def:tsparse} that a $bt$-sparse metric is uniquely defined given its $((1,N),bt)$-intervals and representatives.
Let $X$ be the subspace of $U((l',r'),bt)$ induced by the representatives of all intervals in $\mathcal{C}$ that contain points that are not $((1,N),bt)$-global. Under the assumption in \labelcref{e:main},
we have
\begin{equation} \label{eq:fraction}
|X| ~\ge~ \ceil*{\frac{p''}{b}}
~\ge~ \left(1-\frac{1}{2^{k+2}}\right)\cdot\ceil*{\frac{p}{b}}.
\end{equation}
Recall that $H$ is an arbitrary $(1+\epsilon)$-spanner for $U((1,N),t)$ with $((1,N),t)$-global hop-diameter $k$.
Let $x$ and $y$ be two arbitrary points in $X$, and denote their corresponding $((1,N),bt)$-intervals by $A$ and $B$, respectively.
Since $x$ (resp., $y$) is not $((1,N),bt)$-global, it can be adjacent either to points outside of $[1,N]$ or to points inside $A$ (resp., $B$).
By \Cref{cor:sep} and since $x,y\in[N/4,3N/4]$, any spanner path with stretch $(1+\epsilon)$ connecting $x$ and $y$ must remain inside $[1,N]$. Hence, any $(1+\epsilon)$-spanner path in $H$ between $x$ and $y$ is of the form $(x, x',\dots, y', y)$, where $x' \in A$ and $y' \in B$. Consider now the same path in the metric $X$. It has at most $k$ hops, and its first and last edges are not $((1,N),bt)$-global. Thus, although this path contains at most $k$ $((1,N),t)$-global edges in $U((1,N),t)$, it has at most $k-2$ $((1,N),bt)$-global edges in $X$.
It follows that $H$ is a (Steiner) $(1+\epsilon)$-spanner with $((1,N),bt)$-global hop-diameter $k-2$ for $X$. See \Cref{fig:lb4} for an illustration.
Denote by $n' \coloneqq \ceil{p/b}$ the number of points in $U((l',r'), bt)$.
Since $p\ge n/4$, it follows that $n' \ge \ceil{n/(4b)}$.
By \labelcref{eq:fraction}, $X$ is a subspace of $U((l',r'), bt)$,
and its size is at least a $(1-1/2^{k+2})$-fraction (i.e., a $(1-1/2^{(k-2)+4})$-fraction) of that of $U((l',r'), bt)$.
Hence, by the induction hypothesis of \cref{assertion:2} for $k-2$,
we know that any spanner on $X$ with $((l',r'),bt)$-global hop-diameter $k-2$ and stretch $1+\epsilon$ contains at least $T'_{k-2}(n') \ge \frac{n'}{2^{6\floor{(k-2)/2}+4}} \cdot \alpha_{k-2}(n')$ $((l',r'),bt)$-global edges which have both endpoints inside $[l',r']$.
Since every $((l',r'),bt)$-global edge is also a $((1,N),bt)$-global edge,
we conclude with the following lower bound on the number of $((1,N), bt)$-global edges required by $H$:
\begin{align*}
T'_{k-2}\left(n'\right) &\ge \frac{n'}{2^{6\floor{(k-2)/2}+4}}\cdot\alpha_{k-2}\left(n'\right)\\
&\ge \frac{n}{4\cdot 2^{6\floor{(k-2)/2}+4}\cdot\alpha_{k-2}(n)}\cdot\alpha_{k-2}{\left(\ceil*{\frac{n}{4\alpha_{k-2}(n)}}\right)}\\
&\ge \frac{n}{8\cdot 2^{6\floor{(k-2)/2}+4}}\\
&= \frac{n}{2^{6\floor{k/2}+1}}
\end{align*}
The last inequality holds since, for $k\ge 4$ and $n$ sufficiently large (i.e., larger than the value considered in the base case), the ratio between $\alpha_{k-2}(\ceil{n/4\alpha_{k-2}(n)})$ and $\alpha_{k-2}(n)$ is at least $1/2$.
In other words, we have shown that whenever $\ceil{{p''}/{b}}
~\ge~ (1-{1}/{2^{k+2}})\cdot\ceil{{p}/{b}}$, the number of $((1,N),bt)$-global edges incident on the $p$ points inside $[N/4,3N/4]$ is lower bounded by ${n}/{2^{6\floor{k/2}+1}}$; we have thus proved \labelcref{e:main}.
Recall (see \labelcref{e:pnt}) that we lower bounded the number $p$ of points inside $[N/4, 3N/4]$ by $p\ge n/4$. We consider two complementary cases: either $\ceil{p''/b} ~\ge~ (1-{1}/{2^{k+2}})\cdot\ceil{p/b}$, or $\ceil{p''/b} ~<~ (1-{1}/{2^{k+2}})\cdot\ceil{p/b}$, where $p''$ is the number of points in $[N/4,3N/4]$ that are not $((1,N),bt)$-global. In the former case,
by \labelcref{e:main}, the number of $((1,N),bt)$-global edges is lower bounded by $n/2^{6\floor{k/2}+1}$. In the latter case, we have
\begin{align*}
\frac{p''}{b} ~\le~ \ceil*{\frac{p''}{b}} ~<~ \left(1-\frac{1}{2^{k+2}}\right)\cdot\ceil*{\frac{p}{b}} ~<~ \left(1-\frac{1}{2^{k+2}}\right)\cdot\frac{p}{b} + 1.
\end{align*}
In other words, we can lower bound $p'$ by $p/2^{k+2} - 2b$. From \labelcref{e:global} and using that $p \ge n/4$,
the number of $((1,N),bt)$-global edges is lower bounded by $m \ge p'/2 \ge p/2^{k+3} - b \ge n/2^{k+5}-\alpha_{k-2}(n)$. Since the former bound, $n/2^{6\floor{k/2}+1}$, is always the smaller of the two for $n$ sufficiently large (i.e., larger than the value considered in the base case), we shall use it as a lower bound on the number of $((1,N),bt)$-global edges required by $H$ in both cases. We note that every $((1,N),bt)$-global edge is also $((1,N),t)$-global, as required by \cref{assertion:1}. It follows that
\begin{align*}
T_k(n) &\ge \floor*{\frac{n}{\alpha_{k-2}(n)}}\cdot\frac{\alpha_{k-2}(n)}{2^{6\floor{k/2}+2}}\cdot\alpha_k(\alpha_{k-2}(n)) + \frac{n}{2^{6\floor{k/2}+1}}\\
&\ge \left(\frac{n}{\alpha_{k-2}(n)}-1 \right)\cdot \frac{\alpha_{k-2}(n)}{2^{6\floor{k/2}+2}} \cdot (\alpha_k(n)-1) + \frac{n}{2^{6\floor{k/2}+1}}\\
&\ge \frac{n}{2^{6\floor{k/2}+2}}\cdot\alpha_k(n) - \frac{2n}{2^{6\floor{k/2}+2}} + \frac{n}{2^{6\floor{k/2}+1}}\\
&= \frac{n}{2^{6\floor{k/2}+2}} \alpha_k(n)
\end{align*}
For the second inequality we have used \Cref{lemma:alphakstep}, and for the third, the fact that $\alpha_{k-2}(n) \cdot (\alpha_{k}(n) - 1) \le n$ for sufficiently large $n$ (i.e. larger than the value considered in the base case).
This concludes the proof of \cref{assertion:1}.
\begin{figure}
\caption{Constructing a new line metric and invoking the induction hypothesis.}
\label{fig:lb4}
\end{figure}
\paragraph*{Proof of \cref{assertion:2}.}
Suppose without loss of generality that we are working on any $t$-sparse line metric $U((1,N),t)$. Let $H$ be an arbitrary $(1+\epsilon)$-spanner for $U((1,N),t)$ with $((1,N),t)$-global hop-diameter $k$.
We shall inductively assume the correctness of \cref{assertion:1} and \cref{assertion:2}: (i) for $k$ and all smaller values of $n$,
and (ii) for $k' < k$ and all values of $n$.
Recall the recurrence we used in the proof of \cref{assertion:1}, $T_k(n) = \floor{n/\alpha_{k-2}(n)}\cdot T_k(\alpha_{k-2}(n)) + \Omega(n/2^{3k})$, which provides a lower bound on the number of $((l,r),t)$-global edges of $H$.
The base case for this recurrence is whenever $n \le 10000$. Consider the recursion tree of $T_k(n)$ and denote its depth by $\ell$ and the number of nodes at depth $i$ by $c_i$. In addition, denote by $n_{i,j}$ the number of points in the $j$th interval of the $i$th level and by $e_{i,j}$ the number of $((1,N),bt)$-global edges contributed by this interval; the contribution of an interval is at least $n_{i,j}/2^{6\floor{k/2}+1}$. By definition, we have
\begin{align*}
T_k(n)
&= \sum_{i=1}^{\ell}\sum_{j=1}^{c_i} e_{i,j}
\ge \frac{n}{2^{6\floor{k/2}+2}}\alpha_k(n)
\end{align*}
Let $H'$ be any $(1+\epsilon)$ spanner on $X$ with $((1,N),t)$-global hop-diameter $k$. To lower bound the number of spanner edges in $H'$, we now consider the same recursion tree, but take into consideration the fact that we are working on metric $X$, which is a subspace of $U((1,N),t)$.
This means that at each level of recursion, instead of $n$ points, there are at least $n(1-1/2^{k+4})$ points in $X$.
The contribution of the $j$th interval in the $i$th level is denoted by $e'_{i,j}$.
We call the $j$th interval in the $i$th level \emph{good} if it contains at least $n_{i,j}(1-1/2^{k+3})$ points from $X$. (Recall that we have used $n_{i,j}$ to denote the number of points from $U((1,N),t)$ in the $j$th interval of the $i$th level.) From the definition of a good interval and the fact that each level of recurrence contains at least $n(1-1/2^{k+4})$ points, it follows that there are at least $n/2$ points contained in the good intervals at the $i$th level. Denote the collection of all the good intervals at the $i$th level by $\Gamma_i$.
Recall that we are working with recurrence $T_k(n) = \floor{n/\alpha_{k-2}(n)}\cdot T_k(\alpha_{k-2}(n)) + \Omega(n/2^{3k})$. In particular, in the first level of recurrence, we consider the contribution of $n$ points, whereas in the second level, we consider the contribution of $\floor{n/\alpha_{k-2}(n)}\cdot \alpha_{k-2}(n)$ points. Denote by $n_i$ the number of points whose contribution we consider in the $i$th level of recurrence. Then, we have $n_1 = n$ and $n_2 = \floor{n/\alpha_{k-2}(n)}\cdot \alpha_{k-2}(n) \ge n - \alpha_{k-2}(n)$. Denote by $\alpha_{k-2}^{(j)}(n)$ the value of $\alpha_{k-2}(\cdot)$ iterated $j$ times on $n$, i.e. $\alpha_{k-2}^{(0)}(n) = n$, $\alpha_{k-2}^{(1)}(n) = \alpha_{k-2}(n)$, $\alpha_{k-2}^{(2)}(n) = \alpha_{k-2}(\alpha_{k-2}(n))$, etc. In general, for $i\ge 2$, we have
\begin{align*}
n_i &\ge n- \sum_{j=2}^{i}\frac{n\alpha_{k-2}^{(j-1)}(n)}{\alpha_{k-2}^{(j-2)}(n)}\\
&\ge n - n \cdot\sum_{j=2}^{i}\frac{\ceil*{\log^{(j-1)}(n)}}{\ceil*{\log^{(j-2)}(n)}}.
\end{align*}
We observe that the terms of the sum grow with $j$, with an exponential gap between the numerator and the denominator of each term. Since we do not consider intervals in the base case, we also know that $\ceil{\log^{(i-1)}(n)} \ge 10000$, so the largest term in the sum is at most $10000 / 2^{9999}$. Since every two consecutive terms differ by a factor larger than $2$, the sum is at most twice its largest term, and we conclude that $n_i \ge 0.99n$.
Since at each level there are at least $n/2$ points inside good intervals and $n_i \ge 0.99n$, there are at least $0.49n$ points inside good intervals whose contribution is not ignored. Denote by $\Gamma'_i$ the set of good intervals in the $i$th level whose contribution is not ignored. Then we have
\begin{align*}
T'_k(n)
&= \sum_{i=1}^{\ell}\sum_{j=1}^{c'_i} e'_{i,j}
~\ge~ \sum_{i=1}^{\ell}\sum_{j\in \Gamma'_i} e_{i,j}
~\ge~ 0.49\cdot T_k(n) \ge \frac{n}{2^{6\floor{k/2}+4}}\alpha_k(n),
\end{align*}
as claimed.
This concludes the proof of \cref{assertion:2}.
We have thus completed the inductive step for $k$.
\end{proof}
\appendix
\section{Tradeoff using two-parameter Ackermann function}\label{sec:tradeoff}
In this \namecref{sec:tradeoff}, we prove that the tradeoff of \cite{AS87} of $k$ vs. $\Omega(n \alpha_k(n))$ between hop-diameter and number of edges implies the tradeoff of \cite{Yao82} and \cite{CG06} of $\Omega(\alpha(m,n))$ vs. $m$. We also prove a variant of this result,
relevant to our lower bound (\Cref{thm:main}), which has an exponential-in-$k$ slack in the number of edges.
Specifically, we prove \Cref{lemma:tradeoff} and \Cref{cor:tradeoff}.
We start by making the following simple claim.
\begin{claim}\label{clm:ackermann}
For every $i \ge 0$ and every $j \ge 4$, we have $A(i+1, j) \ge A(i, 2j)$.
\end{claim}
\begin{proof}
Observe that $A(1,j) = 2^j$. We have $A(i+1, j) = A(i, A(i+1, j-1)) \ge A(i, 2^{j-1}) \ge A(i, 2j)$, whenever $j \ge 4$.
\end{proof}
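The claim can also be checked numerically for small parameters. The base cases below, $A(0,j)=2j$ and $A(i,1)=2$, are an assumption (this section does not restate the definition of $A$); they describe one standard variant that is consistent with the identity $A(1,j)=2^j$ used in the proof.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(i: int, j: int) -> int:
    # Two-parameter Ackermann function. The base cases A(0, j) = 2j and
    # A(i, 1) = 2 are an assumed standard variant; they are consistent
    # with the identity A(1, j) = 2^j used in the proof above.
    if i == 0:
        return 2 * j
    if j == 1:
        return 2
    return A(i - 1, A(i, j - 1))

# Claim: A(i+1, j) >= A(i, 2j) for all i >= 0 and j >= 4.
# Only tiny parameters are feasible, since the values explode very fast.
for i, j in [(0, 4), (0, 5), (0, 10), (1, 4)]:
    assert A(i + 1, j) >= A(i, 2 * j), (i, j)
```

For instance, $A(2,4)=A(1,16)=2^{16}=65536$, while $A(1,8)=2^8=256$.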
\begin{lemma}\label{lemma:tradeoff}
Let $\mathcal{G}$ be a graph family and let $k'$, $n_0$ be constants such that for every integer $n'\ge n_0$ any $k'$-hop spanner on an $n'$-vertex graph $G_{n'}$ from $\mathcal{G}$ has at least $n' \alpha_{k'}(n')$ edges. Then, for any choice of integers $n\ge n_0$ and $m\ge n$, any $m$-edge spanner on an $n$-vertex graph $G_{n}$ from $\mathcal{G}$ has hop-diameter at least $\alpha(m,n)$.
\end{lemma}
\begin{proof}
Let $H$ be an arbitrary $m$-edge spanner on $G_{n}$ from $\mathcal{G}$ as in the lemma statement.
If $n\alpha_0(n) \le m$, then $\alpha(m,n)$ is a small constant, and the hop-diameter of $H$ is trivially $\Omega(1)$.
We henceforth assume that $n\alpha_0(n) > m$, hence there is a unique $k$ such that
\begin{equation}
n\cdot\alpha_{k}(n) ~\le~ m ~<~ n\cdot \alpha_{k-1}(n).\label{eq:region}
\end{equation}
By the hypothesis of the lemma and the right inequality of \labelcref{eq:region}, the hop-diameter of $H$ is greater than $k-1$, i.e., it is at least $k$. We can lower bound $k$ as follows:
\begin{align*}
k &\ge \min\left\{i \mid A(i, \min\left\{s \mid A(k, s) \ge n \right\}) \ge n \right\}\\
&\ge \min\left\{i \mid A(i, \alpha_{2k}(n)) \ge n \right\} &\text{ (by definition of $\alpha_{2k}(n)$)}\\
&\ge \min\left\{i \mid A(i, \alpha_{2k}(n)) > \log{n} \right\}\\
&\ge \min\left\{i \mid A(i, \alpha_{k}(n)) > \log{n} \right\}\\
&\ge \min\left\{i \mid A\left(i, \frac{m}{n}\right) > \log{n} \right\} &\text{ (\Cref{eq:region}) }\\
&\ge \min\left\{i \mid A\left(i, 4\ceil*{\frac{m}{n}}\right) > \log{n} \right\}\\
&= \alpha(m,n)
\end{align*}
This completes the proof of \Cref{lemma:tradeoff}.
\end{proof}
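The last line of the chain is just the definition $\alpha(m,n)=\min\{i \mid A(i,4\ceil{m/n})>\log n\}$, which can be evaluated directly. The sketch below assumes the same variant of $A$ as before (base cases $A(0,j)=2j$ and $A(i,1)=2$) and reads $\log$ as the base-2 logarithm.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def A(i: int, j: int) -> int:
    # Same assumed variant of the two-parameter Ackermann function
    # as before: A(0, j) = 2j, A(i, 1) = 2.
    if i == 0:
        return 2 * j
    if j == 1:
        return 2
    return A(i - 1, A(i, j - 1))

def inverse_ackermann(m: int, n: int) -> int:
    # alpha(m, n) = min{ i : A(i, 4 * ceil(m/n)) > log n }, matching the
    # last line of the chain; log is read as the base-2 logarithm.
    target = math.log2(n)
    j = 4 * math.ceil(m / n)
    i = 0
    while A(i, j) <= target:
        i += 1
    return i
```

For example, with $n=2^{16}$ and $m=n$ we get $\alpha(m,n)=2$: $A(1,4)=16$ is not strictly greater than $\log n=16$, while $A(2,4)=2^{16}$ is.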
We proceed to prove \Cref{cor:tradeoff}, a variant of \Cref{lemma:tradeoff} relevant to our lower bound (\Cref{thm:main}), which has an exponential-in-$k$ slack in the number of edges.
By \Cref{thm:main}, any $(1+\epsilon)$-spanner with hop-diameter $k$ on any $n$-point uniform line metric must have at least $\frac{n}{2^{6\floor{k/2}+4}} \alpha_{k}(n)$ edges. (This refined bound on the number of edges, which does not use the $\Omega$-notation, is due to \Cref{thm:k}, which is a generalization of \Cref{thm:main}.)
\begin{corollary}\label{cor:tradeoff}
For any two positive integers $m$ and $n$ such that $m < n^2/32$, let $k$ be the unique integer such that $\frac{n}{2^{6\floor{k/2}+4}}\cdot \alpha_{k}(n) \le m < \frac{n}{2^{6\floor{(k-1)/2}+4}}\cdot \alpha_{k-1}(n)$. For any choice of $\epsilon \in [0, 1/2]$, any $m$-edge $(1+\epsilon)$-spanner for the uniform line metric with $n$ points must have hop diameter of at least $\Omega(\alpha(m,n)-k)$.
\end{corollary}
\begin{proof}
The value $k$ as in the statement always exists since
for $k=0$, we have $\frac{n}{2^{6\floor{k/2}+4}} \alpha_{k}(n) \ge \frac{n^2}{32} > m$ and the sequence $\{\frac{n}{2^{6\floor{k/2}+4}} \alpha_{k}(n)\}_{k=0}^\infty$ is decreasing.
Similarly to the proof of \Cref{lemma:tradeoff}, we can lower bound $k$ as follows.
\begin{align*}
k &\ge \min\left\{i \mid A(i, \min\left\{s \mid A(k, s) \ge n \right\}) \ge n \right\}\\
&\ge \min\left\{i \mid A(i, \alpha_{2k}(n)) \ge n \right\} &\text{ (by definition of $\alpha_{2k}(n)$)}\\
&\ge \min\left\{i \mid A(i, \alpha_{2k}(n)) > \log{n} \right\}\\
&\ge \min\left\{i \mid A(i, \alpha_{k}(n)) > \log{n} \right\}\\
&\ge \min\left\{i \mid A\left(i, 2^{6\floor{k/2}+4}\cdot\frac{m}{n}\right) > \log{n}\right\}\\
&\ge\min\left\{i \mid A\left(i + 6\floor{k/2}+4,\frac{m}{n}\right) > \log{n}\right\} &\text{(\Cref{clm:ackermann})}\\
&\ge\min\left\{i \mid A\left(i,\frac{m}{n}\right) > \log{n}\right\} - 6\floor{k/2}-4\\
&\ge\min\left\{i \mid A\left(i,4\ceil*{\frac{m}{n}}\right) > \log{n}\right\} - 6\floor{k/2}-4\\
&\ge\alpha(m,n) - 6\floor{k/2}-4\\
&= \Omega(\alpha(m,n) - k)
\end{align*}
This completes the proof of \Cref{cor:tradeoff}.
\end{proof}
\end{document} |
\begin{document}
\title{A uniform boundedness principle in pluripotential theory}
\author[Kosi\'nski]{\L ukasz Kosi\'nski}
\address{Institute of Mathematics, Jagiellonian University, \L ojasiewicza 6, 30-348 Krak\'ow, Poland.}
\email{lukasz.kosinski@im.uj.edu.pl}
\thanks{\L K supported by the Ideas Plus grant 0001/ID3/2014/63 of the Polish Ministry of Science
and Higher Education. }
\author[Martel]{\'Etienne Martel}
\address{D\'epartement d'informatique et de g\'enie logiciel, Universit\'e Laval, Qu\'ebec City (Qu\'ebec), Canada G1V 0A6}
\email{etienne.martel.1@ulaval.ca}
\thanks{EM supported by an NSERC undergraduate student research award}
\author[Ransford]{Thomas Ransford}
\address{D\'epartement de math\'ematiques et de statistique, Universit\'e Laval,
Qu\'ebec City (Qu\'ebec), Canada G1V 0A6.}
\email{thomas.ransford@mat.ulaval.ca}
\thanks{TR supported by grants from NSERC and the Canada research chairs program}
\date{7 August 2017}
\begin{abstract}
For families of continuous plurisubharmonic functions
we show that, in a local sense, separately bounded above implies bounded above.
\end{abstract}
\maketitle
\section{The uniform boundedness principle}\label{S:ubp}
Let $\Omega$ be an open subset of ${\mathbb C}^N$.
A function $u:\Omega\to[-\infty,\infty)$ is called
\emph{plurisubharmonic} if:
\begin{enumerate}
\item $u$ is upper semicontinuous, and
\item $u|_{\Omega\cap L}$ is subharmonic, for each complex line $L$.
\end{enumerate}
For background information on plurisubharmonic functions,
we refer to the book of Klimek \cite{Kl91}.
It is apparently an open problem whether in fact (2) implies (1) if $N\ge2$.
In attacking this problem,
we have repeatedly run up against an
obstruction in the form of a uniform boundedness principle for plurisubharmonic functions.
This principle, which we think is of interest in its own right, is the main subject of this note.
Here is the formal statement.
\begin{theorem}[Uniform boundedness principle]\label{T:ubp}
Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains, where $N,M\ge1$, and let ${\cal U}$ be a family of continuous
plurisubharmonic functions on $D\times G$. Suppose that:
\begin{enumerate}[label=\normalfont(\roman*)]
\item ${\cal U}$ is locally uniformly bounded above on $D\times\{w\}$, for each $w\in G$;
\item ${\cal U}$ is locally uniformly bounded above on $\{z\}\times G$, for each $z\in D$.
\end{enumerate}
Then:
\begin{enumerate}[resume, label=\normalfont(\roman*)]
\item ${\cal U}$ is locally uniformly bounded above on $D\times G$.
\end{enumerate}
\end{theorem}
In other words, if there is an upper bound for ${\cal U}$ on each compact subset of $D\times G$ of the form $K\times\{w\}$ or $\{z\}\times L$, then there is an upper bound for ${\cal U}$ on every compact subset of $D\times G$. The point is that we have no \textit{a priori} quantitative information about these upper bounds, merely that they exist. In this respect, the result resembles the classical Banach--Steinhaus theorem from functional analysis.
The proof of Theorem~\ref{T:ubp} is based on two well-known but non-trivial results from several complex variables: the equivalence (under appropriate assumptions) of plurisubharmonic hulls and polynomial hulls, and Hartogs' theorem on separately holomorphic functions. The details of the proof are presented in \S\ref{S:pfubp}.
The Banach--Steinhaus theorem is usually stated as saying that a family of bounded linear operators on a Banach space $X$ that is pointwise-bounded on $X$ is automatically
norm-bounded. There is a stronger version of the result in which one assumes merely that the operators are pointwise-bounded on a non-meagre subset $Y$ of $X$, but with the same conclusion. This sharper form leads to new applications (for example, a nice one in the theory of Fourier series can be found in \cite[Theorem~5.12]{Ru87}). Theorem~\ref{T:ubp} too possesses a sharper form, in which one of the conditions (i),(ii) is merely assumed to hold on a non-pluripolar set. This improved version of the theorem is the subject of \S\ref{S:ubpgen}.
We conclude the paper in \S\ref{S:appl} by considering applications of these results, and we also discuss the connection with the upper semicontinuity problem mentioned at the beginning of the section.
\section{Proof of the uniform boundedness principle}\label{S:pfubp}
We shall need two auxiliary results. The first one concerns hulls. Given a compact subset $K$ of ${\mathbb C}^N$, its
\emph{polynomial hull} is defined by
\[
\widehat{K}:=\{z\in{\mathbb C}^N:|p(z)|\le\sup_K|p|\text{~for every polynomial~$p$ on ${\mathbb C}^N$}\}.
\]
Further, given an open subset $\Omega$ of ${\mathbb C}^N$ containing $K$, the \emph{plurisubharmonic hull} of $K$ with respect to $\Omega$ is defined by
\[
\widehat{K}_{\psh(\Omega)}:=\{z\in\Omega:u(z)\le\sup_Ku\text{~for every plurisubharmonic $u$ on $\Omega$}\}.
\]
Since $|p|$ is plurisubharmonic on $\Omega$ for every polynomial $p$,
it is evident that $\widehat{K}_{\psh(\Omega)}\subset \widehat{K}$.
In the other direction, we have the following result.
\begin{lemma}[\protect{\cite[Corollary~5.3.5]{Kl91}}]\label{L:hulls}
Let $K$ be a compact subset of ${\mathbb C}^N$ and let $\Omega$ be an open subset of ${\mathbb C}^N$ such that $\widehat{K}\subset\Omega$. Then
$\widehat{K}_{\psh(\Omega)}=\widehat{K}$.
\end{lemma}
The second result that we shall need is Hartogs' theorem \cite{Ha06} that separately holomorphic functions are holomorphic.
\begin{lemma}\label{L:Hartogs}
Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains,
and let $f:D\times G\to{\mathbb C}$ be a function such that:
\begin{itemize}
\item $z\mapsto f(z,w)$ is holomorphic on $D$, for each $w\in G$;
\item $w\mapsto f(z,w)$ is holomorphic on $G$, for each $z\in D$.
\end{itemize}
Then $f$ is holomorphic on $D\times G$.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{T:ubp}]
Suppose that the result is false.
Then there exist sequences $(a_n)$ in $D\times G$ and $(u_n)$
in ${\cal U}$ such that $u_n(a_n)>n$ for all $n$ and $a_n\to a\in D\times G$. Let $P$ be a compact polydisk with centre $a$ such that $P\subset D\times G$. For each $n$, set
\[
P_n:=\{\zeta\in P:u_n(\zeta)\le n\}.
\]
Then $P_n$ is compact, because the functions in ${\cal U}$ are assumed continuous. Further, since $P$ is convex, we have
$\widehat{P_n}\subset P\subset D\times G$.
By Lemma~\ref{L:hulls}, we have $\widehat{P_n}=\widehat{(P_n)}_{\psh(D\times G)}$.
As $a_n$ clearly lies outside this plurisubharmonic hull,
it follows that $a_n$ also lies outside the polynomial hull of $P_n$. Thus there exists a polynomial $q_n$
such that $\sup_{P_n}|q_n|<1$ and $|q_n(a_n)|>1$. Let $r_n$ be a polynomial vanishing at $a_1,\dots,a_{n-1}$
but not at $a_n$, and set $p_n:=q_n^mr_n$, where $m$ is chosen large enough so that
\[
\sup_{P_n}|p_n|<2^{-n}
\quad\text{and}\quad
|p_n(a_n)|>n+\sum_{k=1}^{n-1}|p_k(a_n)|.
\]
Let us write $P=Q\times R$, where $Q,R$ are compact polydisks such that $Q\subset D$ and $R\subset G$.
Then, for each $w\in R$, the family ${\cal U}$ is uniformly bounded above on $Q\times\{w\}$,
so eventually $u_n\le n$ on $Q\times\{w\}$. For these $n$, we then have $Q\times\{w\}\subset P_n$ and hence $|p_n|\le 2^{-n}$ on $Q\times\{w\}$. Thus the series
\[
f(z,w):=\sum_{n\ge1}p_n(z,w)
\]
converges uniformly on $Q\times\{w\}$. Likewise, it converges uniformly on $\{z\}\times R$, for each $z\in D$. We deduce that:
\begin{itemize}
\item $z\mapsto f(z,w)$ is holomorphic on $\inter(Q)$, for each $w\in \inter(R)$;
\item $w\mapsto f(z,w)$ is holomorphic on $\inter(R)$, for each $z\in \inter(Q)$.
\end{itemize}
By Lemma~\ref{L:Hartogs}, $f$ is holomorphic on $\inter(P)$.
On the other hand, for each $n$, our construction gives
\[
|f(a_n)|
\ge|p_n(a_n)|-\sum_{k=1}^{n-1}|p_k(a_n)|-\sum_{k=n+1}^\infty| p_k(a_n)|>n.
\]
Since $a_n\to a$, it follows that $f$ is discontinuous at $a$, the central point of $P$. We thus have arrived at a contradiction, and the proof is complete.
\end{proof}
One might wonder if Theorem~\ref{T:ubp} remains true if we drop one of the assumptions (i) or (ii). Here is a simple example to show that it does not. For each $n\ge1$, set
\[
K_n:=\{z\in{\mathbb C}:|z|\le n,~1/n\le \arg(z)\le 2\pi\},
\]
and let $(z_n)$ be a sequence such that $z_n\in{\mathbb C}\setminus K_n$ for all $n$ and $z_n\to0$. By Runge's theorem, for each $n$ there exists a polynomial $p_n$ such that $\sup_{K_n}|p_n|\le 1$ and $|p_n(z_n)|>n$. The sequence $|p_n|$ is then pointwise bounded on ${\mathbb C}$, but not uniformly bounded in any neighborhood of $0$.
Thus, if we define $u_n(z,w):=|p_n(z)|$,
then we obtain a sequence of continuous plurisubharmonic functions on ${\mathbb C}\times{\mathbb C}$ satisfying (ii) but not (iii).
Although we cannot drop (i) or (ii) altogether, it \emph{is} possible to weaken one of the conditions (i) or (ii) to hold merely on a set that is `not too small', and still obtain the conclusion (iii). This is the subject of the next section.
\section{A stronger form of the uniform boundedness principle}\label{S:ubpgen}
A subset $E$ of ${\mathbb C}^N$ is called \emph{pluripolar} if there exists a plurisubharmonic function $u$ on ${\mathbb C}^N$ such that $u=-\infty$ on $E$ but $u\not\equiv-\infty$ on ${\mathbb C}^N$. Pluripolar sets have Lebesgue measure zero, and a countable union of pluripolar sets is again pluripolar. For further background on pluripolar sets, we again refer to Klimek's book \cite{Kl91}.
In this section we establish the following generalization of Theorem~\ref{T:ubp}, in which we weaken one of the assumptions (i),(ii) to hold merely on a non-pluripolar set.
\begin{theorem}\label{T:ubpgen}
Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains, where $N,M\ge1$,
and let ${\cal U}$ be a family of continuous
plurisubharmonic functions on $D\times G$. Suppose that:
\begin{enumerate}[label=\normalfont(\roman*)]
\item ${\cal U}$ is locally uniformly bounded above on $D\times\{w\}$, for each $w\in G$;
\item ${\cal U}$ is locally uniformly bounded above on $\{z\}\times G$, for each $z\in F$,
\end{enumerate}
where $F$ is a non-pluripolar subset of $D$. Then:
\begin{enumerate}[resume, label=\normalfont(\roman*)]
\item ${\cal U}$ is locally uniformly bounded above on $D\times G$.
\end{enumerate}
\end{theorem}
For the proof, we need the following generalization of Hartogs' theorem, due to Terada \cite{Te67,Te72}.
\begin{lemma}\label{L:Terada}
Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains,
and let $f:D\times G\to{\mathbb C}$ be a function such that:
\begin{itemize}
\item $z\mapsto f(z,w)$ is holomorphic on $D$, for each $w\in G$;
\item $w\mapsto f(z,w)$ is holomorphic on $G$, for each $z\in F$,
\end{itemize}
where $F$ is a non-pluripolar subset of $D$.
Then $f$ is holomorphic on $D\times G$.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{T:ubpgen}]
We define two subsets $A,B$ of $D$ as follows. First, $z\in A$ if $w\mapsto\sup_{u\in{\cal U}}u(z,w)$ is locally bounded above on $G$. Second, $z\in B$ if there exists a neighborhood $V$ of $z$ in $D$ such that $(z,w)\mapsto\sup_{u\in{\cal U}}u(z,w)$ is locally bounded above on $V\times G$. Clearly $B$ is open in $D$ and $B\subset A$. Also $F\subset A$, so $A$ is non-pluripolar.
Let $z_0\in D\setminus B$. Then there exists $w_0\in G$ such that ${\cal U}$ is not uniformly bounded above on any neighborhood of $(z_0,w_0)$. The same argument as in the proof of Theorem~\ref{T:ubp} leads to the existence of a compact polydisk $P=Q\times R$ around $(z_0,w_0)$ and a function $f:Q\times R\to{\mathbb C}$ such that:
\begin{itemize}
\item $z\mapsto f(z,w)$ is holomorphic on $\inter(Q)$, for each $w\in\inter(R)$,
\item $w\mapsto f(z,w)$ is holomorphic on $\inter(R)$, for each $z\in \inter(Q)\cap A$,
\end{itemize}
and at the same time $f$ is unbounded in each neighborhood of $(z_0,w_0)$. By Lemma~\ref{L:Terada}, this is possible only if $\inter(Q)\cap A$ is pluripolar.
Resuming what we have proved: if $z\in D$ and every neighborhood of $z$ meets $A$ in a non-pluripolar set,
then $z\in B$.
We now conclude the proof with a connectedness argument. As $A$ is non-pluripolar, and a countable union of pluripolar sets is pluripolar, there exists $z_1\in D$ such that every neighborhood of $z_1$ meets $A$ in a non-pluripolar set, and consequently $z_1\in B$. Thus $B\ne\emptyset$. We have already remarked that $B$ is open in $D$. Finally, if $z\in D\setminus B$, then there is an open neighborhood $W$ of $z$ that meets $A$ in a pluripolar set, hence $B\cap W$ is both pluripolar and open, and consequently empty. This shows that $D\setminus B$ is open in $D$. As $D$ is connected, we conclude that $B=D$, which proves the theorem.
\end{proof}
We end the section with some remarks concerning the sharpness of Theorem~\ref{T:ubpgen}.
Firstly, we cannot weaken both conditions (i) and (ii) simultaneously.
Indeed, let ${\mathbb D}$ be the unit disk, and define a sequence $u_n:{\mathbb D}\times{\mathbb D}\to{\mathbb R}$ by
\[
u_n(z,w):=n(|z+w|-3/2).
\]
Then
\begin{itemize}
\item $z\mapsto\sup_n u_n(z,w)$ is bounded above on ${\mathbb D}$ for all $|w|\le1/2$,
\item $w\mapsto\sup_n u_n(z,w)$ is bounded above on ${\mathbb D}$ for all $|z|\le1/2$,
\end{itemize}
but the sequence $u_n(z,w)$ is not even pointwise bounded above at the point $(z,w):=(\frac{4}{5},\frac{4}{5})$.
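To make the failure explicit, a one-line check:
\[
u_{n}\Bigl(\frac{4}{5},\frac{4}{5}\Bigr)=n\Bigl(\frac{8}{5}-\frac{3}{2}\Bigr)=\frac{n}{10}\to\infty .
\]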
Secondly, the condition in Theorem~\ref{T:ubpgen} that $F$ be non-pluripolar is sharp, at least for $F_\sigma$-sets. Indeed, let $F$ be an $F_\sigma$-pluripolar subset of $D$. Then there exists a plurisubharmonic function $v$ on ${\mathbb C}^N$ such that $v=-\infty$ on $F$ and $v(z_0)>-\infty$ for some $z_0\in D$. By convolving $v$ with suitable smoothing functions, we can construct a sequence $(v_n)$ of continuous plurisubharmonic functions on ${\mathbb C}^N$ decreasing to $v$ and such that the sets $\{v_n\le -n\}$ cover $F$. Let $(p_n)$ be a sequence of polynomials in one variable that is pointwise bounded in ${\mathbb C}$ but not uniformly bounded on any neighborhood of $0$ (such a sequence was constructed at the end of \S\ref{S:pfubp}). Choose positive integers $N_n$ such that
$\sup_{|w|\le n}|p_n(w)|\le N_n$, and define $u_n:D\times{\mathbb C}\to{\mathbb R}$ by
\[
u_n(z,w):=v_{N_n}(z)+|p_n(w)|.
\]
Then
\begin{itemize}
\item $z\mapsto\sup_n u_n(z,w)$ is locally bounded above on $D$ for all $w\in{\mathbb C}$,
\item $w\mapsto\sup_n u_n(z,w)$ is locally bounded above on ${\mathbb C}$ for all $z\in F$,
\end{itemize}
but $\sup_nu_n(z,w)$ is not bounded above on any neighborhood of $(z_0,0)$.
\section{Applications of the uniform boundedness principle}\label{S:appl}
Our first application is to null sequences of plurisubharmonic functions.
\begin{theorem}\label{T:null}
Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains,
and let $(u_n)$ be a sequence of positive continuous
plurisubharmonic functions on $D\times G$. Suppose that:
\begin{itemize}
\item $u_n(\cdot,w)\to0$ locally uniformly on $D$ as $n\to\infty$, for each $w\in G$,
\item $u_n(z,\cdot)\to0$ locally uniformly on $G$ as $n\to\infty$, for each $z\in F$,
\end{itemize}
where $F\subset D$ is non-pluripolar.
Then $u_n\to0$ locally uniformly on $D\times G$.
\end{theorem}
\begin{proof}
Let $a\in D\times G$. Choose $r>0$ such that $\overline{B}(a,2r)\subset D\times G$.
Writing $m$ for Lebesgue measure on ${\mathbb C}^N\times {\mathbb C}^M$,
we have
\begin{align*}
\sup_{\zeta\in\overline{B}(a,r)}u_n(\zeta)
&\le \sup_{\zeta\in\overline{B}(a,r)}\frac{1}{m({B}(\zeta,r))}\int_{{B}(\zeta,r)}u_n\,dm\\
&\le \frac{1}{m({B}(0,r))}\int_{{B}(a,2r)}u_n\,dm.
\end{align*}
Clearly $u_n\to0$ pointwise on $B(a,2r)$. Also, by Theorem~\ref{T:ubpgen}, the sequence $(u_n)$ is uniformly
bounded on $B(a,2r)$. By the dominated convergence theorem, it follows that $\int_{B(a,2r)}u_n\,dm\to0$ as $n\to\infty$. Hence $\sup_{\zeta\in\overline{B}(a,r)}u_n(\zeta)\to0$ as $n\to\infty$.
\end{proof}
Our second application relates to the problem mentioned at the beginning of \S\ref{S:ubp}. Recall that $u:\Omega\to[-\infty,\infty)$ is plurisubharmonic if
\begin{enumerate}
\item $u$ is upper semicontinuous, and
\item $u|_{\Omega\cap L}$ is subharmonic, for each complex line $L$,
\end{enumerate}
and the problem is to determine whether in fact (2) implies (1).
Here are some known partial results:
\begin{itemize}
\item[-] Lelong \cite{Le45} showed that (2) implies (1) if, in addition, $u$ is locally bounded above.
\item[-] Arsove \cite{Ar66} generalized Lelong's result by showing that, if $u$ is separately subharmonic and locally bounded above, then $u$ is upper semicontinuous. (Separately subharmonic means that (2) holds just with lines $L$ parallel to the coordinate axes.) Further results along these lines were obtained in \cite{AG93,KT96,Ri14}.
\item[-] Wiegerinck \cite{Wi88} gave an example of a separately subharmonic function that is not upper semicontinuous. Thus Arsove's result no longer holds without the assumption that $u$ be locally bounded above.
\end{itemize}
In seeking an example to show that (2) does not imply (1), it is natural to try to emulate Wiegerinck's example,
which was constructed as follows.
Let $K_n, z_n$ and $p_n$ be defined as in the example at the end of \S\ref{S:pfubp}. For each $n$ define $v_n(z):=\max\{|p_n(z)|-1,\,0\}$. Then $v_n$ is a subharmonic function, $v_n=0$ on $K_n$ and $v_n(z_n)>n-1$. Set
\[
u(z,w):=\sum_kv_k(z)v_k(w).
\]
If $w\in{\mathbb C}$, then $w\in K_n$ for all large enough $n$, so $v_n(w)=0$. Thus, for each fixed $w\in{\mathbb C}$, the function $z\mapsto u(z,w)$ is a finite sum of subharmonic functions, hence subharmonic. Evidently, the same is true with roles of $z$ and $w$ reversed. Thus $u$ is separately subharmonic. On the other hand, for each $n$ we have
\[
u(z_n,z_n)\ge v_n(z_n)v_n(z_n)>(n-1)^2,
\]
so $u$ is not bounded above on any neighborhood of $(0,0)$.
This example does not answer the question of whether (2) implies (1) because the summands $v_k(z)v_k(w)$ are not plurisubharmonic as functions of $(z,w)\in{\mathbb C}^2$.
It is tempting to try to modify the construction by replacing $v_k(z)v_k(w)$ by a positive plurisubharmonic sequence $v_k(z,w)$ such that the partial sums $\sum_{k=1}^nv_k$
are locally bounded above on each complex line, but not on any open neighborhood of $(0,0)$.
However, Theorem~\ref{T:ubp} demonstrates immediately that this endeavor is doomed to failure, at least if we restrict ourselves to continuous plurisubharmonic functions.
This raises the following question, which, up till now, we have been unable to answer.
\begin{question}\label{Q:cts}
Does Theorem~\ref{T:ubp} remain true without the assumption that the functions in ${\cal U}$ be continuous?
\end{question}
This is of interest because of the following result.
\begin{theorem}
Assume that the answer to Question~\ref{Q:cts} is positive.
Let $\Omega$ be an open subset of ${\mathbb C}^N$ and let $u:\Omega\to[-\infty,\infty)$ be a function such that $u|_{\Omega\cap L}$ is subharmonic for each complex line $L$. Define
\[
s(z):=\sup\{v(z):v \text{~plurisubharmonic on~}\Omega,~v\le u\}.
\]
Then $s$ is plurisubharmonic on $\Omega$.
\end{theorem}
\begin{proof}
Let ${\cal U}$ be the family of plurisubharmonic functions $v$ on $\Omega$ such that $v\le u$. If the answer to Question~\ref{Q:cts} is positive, then ${\cal U}$ is locally uniformly bounded above on $\Omega$. Hence, by \cite[Theorem~2.9.14]{Kl91}, the upper semicontinuous regularization $s^*$ of $s$ is plurisubharmonic on $\Omega$,
and, by \cite[Proposition~2.6.2]{Kl91}, $s^*=s$ Lebesgue-almost everywhere on $\Omega$. Fix $z\in\Omega$. Then there exists a complex line $L$ passing through $z$ such that $s^*=s$ almost everywhere on $\Omega\cap L$. Let $\mu_r$ be normalized Lebesgue measure on $B(z,r)\cap L$. Then
\[
s^*(z)\le\int_{B(z,r)\cap L} s^*\,d\mu_r=\int_{B(z,r)\cap L} s\,d\mu_r\le\int_{B(z,r)\cap L} u\,d\mu_r.
\]
(Note that $u$ is Borel-measurable by \cite[Lemma~1]{Ar66}.)
Since $u|_{\Omega\cap L}$ is upper semicontinuous, we can let $r\to0^+$ to deduce that $s^*(z)\le u(z)$. Thus $s^*$ is itself a member of ${\cal U}$, so $s^*\le s$, and thus finally $s=s^*$ is plurisubharmonic on $\Omega$.
\end{proof}
Of course, $s=u$ if and only if $u$ is itself plurisubharmonic.
Maybe this could provide a way of attacking the problem of showing that $u$ is plurisubharmonic?
\end{document} |
\begin{document}

\begin{center}
{\Large Some remarks about Fibonacci elements in an arbitrary
algebra}

\bigskip

Cristina FLAUT and Vitalii SHPAKIVSKYI

\bigskip
\end{center}
\textbf{Abstract. }{\small In this paper, we prove some relations between
Fibonacci elements in an arbitrary algebra. Moreover, we define imaginary
Fibonacci quaternions and imaginary Fibonacci octonions and we prove that
any three imaginary Fibonacci quaternions are linearly dependent and that
the mixed product of any three imaginary Fibonacci octonions is zero.}

\medskip

Keywords: Fibonacci quaternions, Fibonacci octonions, Fibonacci elements.

2000 AMS Subject Classification: 11B83, 11B99.

\medskip
\textbf{1. Introduction}

\medskip
Fibonacci elements over some special algebras have been intensively studied in
recent years in various papers, as for example: [Akk; ], [Fl, Sa; 15], [Fl,
Sh; 13(1)], [Fl, Sh; 13(2)], [Ha; ], [Ha1; ], [Ho; 61], [Ho; 63], [Ke; ]. All
these papers studied properties of Fibonacci quaternions and Fibonacci
octonions in quaternion or octonion algebras, or in generalized quaternion or
octonion algebras, or studied dual vectors or dual Fibonacci quaternions
(see [Gu; ], [Nu; ]).

In this paper, we prove that some of these identities hold over an
arbitrary algebra. We introduce the notions of imaginary
Fibonacci quaternions and imaginary Fibonacci octonions and we prove, using
the structure of the quaternion and octonion algebras, that
any three such elements are linearly dependent. For other
details, properties and applications regarding quaternion and
octonion algebras, the reader is referred, for example, to [Sc; 54], [Sc;
66], [Fl, St; 09], [Sa, Fl, Ci; 09].
\medskip

\textbf{2. Fibonacci elements in an arbitrary algebra}

\medskip
Let $A$ be a unitary algebra over $K$ ($K=\mathbb{R},\mathbb{C}$)
with a basis $\{e_{0}=1,e_{1},e_{2},...,e_{n}\}.$ Let $\{f_{n}\}_{n\in
\mathbb{N}}$ be the Fibonacci sequence
\begin{equation*}
f_{n}=f_{n-1}+f_{n-2},n\geq 2,f_{0}=0,f_{1}=1.
\end{equation*}
In the algebra $A,$ we define the Fibonacci elements as follows:
\begin{equation*}
F_{m}=\overset{n}{\underset{k=0}{\sum }}f_{m+k}e_{k}.
\end{equation*}
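For instance, if $A$ is the real quaternion algebra with the basis $\{e_{0}=1,e_{1}=\mathbf{i},e_{2}=\mathbf{j},e_{3}=\mathbf{k}\},$ then
\begin{equation*}
F_{1}=f_{1}+f_{2}\mathbf{i}+f_{3}\mathbf{j}+f_{4}\mathbf{k}=1+\mathbf{i}+2\mathbf{j}+3\mathbf{k},
\end{equation*}
and we recover the Fibonacci quaternions studied in [Ho; 61].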
\textbf{Proposition 2.1.} \textit{With the above notations, the following
relations hold:}
\textit{1)} $F_{m+2}=F_{m+1}+F_{m};$
\textit{2)} $\overset{p}{\underset{i=1}{\sum }}F_{i}=F_{p+2}-F_{2}.$
\textbf{Proof.} 1) $F_{m+1}+F_{m}=\overset{n}{\underset{k=0}{\sum }}
f_{m+k+1}e_{k}+\overset{n}{\underset{k=0}{\sum }}f_{m+k}e_{k}=\overset{n}{
\underset{k=0}{\sum }}(f_{m+k+1}+f_{m+k})e_{k}=\overset{n}{\underset{k=0}{
\sum }}$ $f_{m+k+2}e_{k}=F_{m+2}.$
2) $\overset{p}{\underset{i=1}{\sum }}F_{i}=F_{1}+F_{2}+...+F_{p}=$\newline
$=\overset{n}{\underset{k=0}{\sum }}f_{k+1}e_{k}+\overset{n}{\underset{k=0}{
\sum }}f_{k+2}e_{k}+...+\overset{n}{\underset{k=0}{\sum }}f_{k+p}e_{k}=$
\newline
$=e_{0}\left( f_{1}+...+f_{p}\right) +e_{1}\left( f_{2}+...+f_{p+1}\right) +$
\newline
$+e_{2}\left( f_{3}+...+f_{p+2}\right) +...+e_{n}\left(
f_{n+1}+...+f_{p+n}\right) =$\newline
$=$ $e_{0}\left( f_{p+2}-1\right) +e_{1}\left( f_{p+3}-1-f_{1}\right)
+e_{2}\left( f_{p+4}-1-f_{1}-f_{2}\right) +$\newline
$+e_{3}\left( f_{p+5}-1-f_{1}-f_{2}-f_{3}\right) +...+e_{n}\left(
f_{p+n+2}-1-f_{1}-f_{2}-...-f_{n}\right) =$\newline
$=F_{p+2}-F_{2}.$
We used the identity $\overset{p}{\underset{i=1}{\sum }}f_{i}=f_{p+2}-1$
(for usual Fibonacci numbers) and $1+f_{1}+f_{2}+...+f_{n}=f_{n+2}.\Box
$
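As a quick check of relation 2), take $n=1$ and $p=3$: component-wise,
\begin{equation*}
F_{1}+F_{2}+F_{3}=(1+1+2)e_{0}+(1+2+3)e_{1}=4e_{0}+6e_{1},
\end{equation*}
while $F_{5}-F_{2}=(f_{5}-f_{2})e_{0}+(f_{6}-f_{3})e_{1}=(5-1)e_{0}+(8-2)e_{1}=4e_{0}+6e_{1}.$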
\textbf{Remark 2.2}. The equalities 1) and 2) from the above proposition
generalize the corresponding formulae from [Ke; ], [Ha; ], [Nu; ] and [Ha1; ].
\textbf{Proposition 2.3.} \textit{We have the following formula (Binet's
formula):}
\begin{equation*}
F_{m}=\frac{\alpha ^{\ast }\alpha ^{m}-\beta ^{\ast }\beta ^{m}}{\alpha
-\beta },
\end{equation*}
\textit{where} $\alpha =\frac{1+\sqrt{5}}{2},\beta =\frac{1-\sqrt{5}}{2}
,\alpha ^{\ast }=\underset{k=0}{\overset{n}{\sum }}\alpha ^{k}e_{k},~\beta
^{\ast }=\underset{k=0}{\overset{n}{\sum }}\beta ^{k}e_{k}.$
\textbf{Proof.} \ Using Binet's formula for the Fibonacci numbers, $f_{m}=\frac{
\alpha ^{m}-\beta ^{m}}{\alpha -\beta },$ we obtain\newline
$F_{m}=\overset{n}{\underset{k=0}{\sum }}f_{m+k}e_{k}=\frac{\alpha
^{m}-\beta ^{m}}{\alpha -\beta }e_{0}+\frac{\alpha ^{m+1}-\beta ^{m+1}}{
\alpha -\beta }e_{1}+\frac{\alpha ^{m+2}-\beta ^{m+2}}{\alpha -\beta }
e_{2}+...+$\newline
$+\frac{\alpha ^{m+n}-\beta ^{m+n}}{\alpha -\beta }e_{n}=\frac{\alpha ^{m}}{\alpha
-\beta }\left( e_{0}+\alpha e_{1}+\alpha ^{2}e_{2}+...+\alpha
^{n}e_{n}\right) -$\newline
$-\frac{\beta ^{m}}{\alpha -\beta }\left( e_{0}+\beta e_{1}+\beta
^{2}e_{2}+...+\beta ^{n}e_{n}\right) =\frac{\alpha ^{\ast }\alpha ^{m}-\beta
^{\ast }\beta ^{m}}{\alpha -\beta }.
\Box
$
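As a simple verification of the above formula, take $n=1$ and $m=1.$ Then $\alpha ^{\ast }=1+\alpha e_{1},$ $\beta ^{\ast }=1+\beta e_{1}$ and
\begin{equation*}
\frac{\alpha ^{\ast }\alpha -\beta ^{\ast }\beta }{\alpha -\beta }=\frac{(\alpha -\beta )+(\alpha ^{2}-\beta ^{2})e_{1}}{\alpha -\beta }=1+(\alpha +\beta )e_{1}=1+e_{1}=f_{1}+f_{2}e_{1}=F_{1},
\end{equation*}
since $\alpha +\beta =1.$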
\textbf{Remark 2.4.} The above result generalizes the Binet formulae from
the papers [Gu; ], [Akk; ], [Ke; ], [Ha; ], [Nu; ] and [Ha1; ].
\textbf{Theorem 2.5.} \textit{The generating function for the Fibonacci
elements over an algebra is of the form}
\begin{equation*}
G\left( t\right) =\frac{F_{0}+\left( F_{1}-F_{0}\right) t}{1-t-t^{2}}.
\end{equation*}
\textbf{Proof.} We consider the generating function of the form
\begin{equation*}
G\left( t\right) =\overset{\infty }{\underset{m=0}{\sum }}F_{m}t^{m}.
\end{equation*}
We consider the product\newline
$G\left( t\right) \left( 1-t-t^{2}\right) =\overset{\infty }{\underset{m=0}{
\sum }}F_{m}t^{m}-\overset{\infty }{\underset{m=0}{\sum }}F_{m}t^{m+1}-\overset{\infty }{
\underset{m=0}{\sum }}F_{m}t^{m+2}=$\newline
$=F_{0}+F_{1}t+F_{2}t^{2}+F_{3}t^{3}+...-F_{0}t-F_{1}t^{2}-F_{2}t^{3}-...-$
\newline
$-F_{0}t^{2}-F_{1}t^{3}-F_{2}t^{4}-...=F_{0}+\left( F_{1}-F_{0}\right)
t.\Box
$
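The formula can also be checked directly. Expanding $\frac{1}{1-t-t^{2}}=\overset{\infty }{\underset{m=0}{\sum }}f_{m+1}t^{m}$ gives
\begin{equation*}
G\left( t\right) =\overset{\infty }{\underset{m=0}{\sum }}\left( f_{m+1}F_{0}+f_{m}\left( F_{1}-F_{0}\right) \right) t^{m}=\overset{\infty }{\underset{m=0}{\sum }}\left( f_{m}F_{1}+f_{m-1}F_{0}\right) t^{m},
\end{equation*}
and indeed $F_{m}=f_{m}F_{1}+f_{m-1}F_{0},$ by the identity $f_{m+k}=f_{m}f_{k+1}+f_{m-1}f_{k}.$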
\textbf{Remark 2.6.} The above theorem generalizes results from the papers
[Gu; ], [Akk; ], [Ke; ], [Ha; ] and [Nu; ].
\medskip

\textbf{The Cassini identity}

\medskip
First, we obtain the following identity.
\textbf{Proposition 2.7.}
\begin{equation}
F_{-m}=\left( -1\right) ^{m+1}f_{m}F_{1}+\left( -1\right) ^{m}f_{m+1}F_{0}.
\tag{2.1.}
\end{equation}
\textbf{Proof.} We use induction on $m$. For $m=1,$ we obtain $
F_{-1}=f_{1}F_{1}-f_{2}F_{0}=F_{1}-F_{0},$ which is true. Now, we assume that the
relation holds for all integers up to an arbitrary $k,$ in particular
\begin{equation*}
F_{-k}=\left( -1\right) ^{k+1}f_{k}F_{1}+\left( -1\right) ^{k}f_{k+1}F_{0}.
\end{equation*}
For $k+1,$ we obtain\newline
$\left( -1\right) ^{k+2}f_{k+1}F_{1}+\left( -1\right)
^{k+1}f_{k+2}F_{0}=$\newline
$=\left( -1\right) ^{k}f_{k}F_{1}+\left( -1\right) ^{k}f_{k-1}F_{1}+\left(
-1\right) ^{k-1}f_{k+1}F_{0}+$\newline
$+\left( -1\right) ^{k-1}f_{k}F_{0}=F_{-\left( k-1\right) }-F_{-k}=F_{-(k+1)},$\newline
the last equality following from the recurrence $F_{m+2}=F_{m+1}+F_{m}$ taken at $m=-(k+1)$.
Therefore, the statement is true.$\Box
$
\textbf{Theorem 2.8.} (Cassini's identity) \textit{With the above notations,
we have the following formula}
\begin{equation*}
F_{m-1}F_{m+1}-F_{m}^{2}=\left( -1\right) ^{m}(F_{-1}F_{1}-F_{0}^{2}).
\end{equation*}
\textbf{Proof.}
We consider\newline
$
F_{m-1}=f_{m-1}e_{0}+f_{m}e_{1}+f_{m+1}e_{2}+f_{m+2}e_{3}+...+f_{m+n-1}e_{n},
$\newline
$
F_{m+1}=f_{m+1}e_{0}+f_{m+2}e_{1}+f_{m+3}e_{2}+f_{m+4}e_{3}+...+f_{m+n+1}e_{n},
$\newline
$F_{m}=f_{m}e_{0}+f_{m+1}e_{1}+f_{m+2}e_{2}+f_{m+3}e_{3}+...+f_{m+n}e_{n}.$
We compute\newline
$F_{m-1}F_{m+1}=$\newline
$=\left[
f_{m-1}f_{m+1}e_{0}^{2}+f_{m-1}f_{m+2}e_{0}e_{1}+f_{m-1}f_{m+3}e_{0}e_{2}+f_{m-1}f_{m+4}e_{0}e_{3}...+f_{m-1}f_{m+n+1}e_{0}e_{n}
\right] +$\newline
$
+[f_{m}f_{m+1}e_{1}e_{0}+f_{m}f_{m+2}e_{1}^{2}+f_{m}f_{m+3}e_{1}e_{2}+f_{m}f_{m+4}e_{1}e_{3}...+f_{m}f_{m+n+1}e_{1}e_{n}]+
$\newline
$
+[f_{m+1}^{2}e_{2}e_{0}+f_{m+1}f_{m+2}e_{2}e_{1}+f_{m+1}f_{m+3}e_{2}^{2}+f_{m+1}f_{m+4}e_{2}e_{3}...+f_{m+1}f_{m+n+1}e_{2}e_{n}]+
$\newline
$
+[f_{m+2}f_{m+1}e_{3}e_{0}+f_{m+2}^{2}e_{3}e_{1}+f_{m+2}f_{m+3}e_{3}e_{2}+f_{m+2}f_{m+4}e_{3}^{2}...+f_{m+2}f_{m+n+1}e_{3}e_{n}]+...+
$\newline
$
+[f_{m+n-1}f_{m+1}e_{n}e_{0}+f_{m+n-1}f_{m+2}e_{n}e_{1}+f_{m+n-1}f_{m+3}e_{n}e_{2}+f_{m+n-1}f_{m+4}e_{n}e_{3}...+f_{m+n-1}f_{m+n+1}e_{n}^{2}].
$
Now, we compute \newline
$F_{m}^{2}=\left[
f_{m}^{2}e_{0}^{2}+f_{m}f_{m+1}e_{0}e_{1}+f_{m}f_{m+2}e_{0}e_{2}+f_{m}f_{m+3}e_{0}e_{3}+...+f_{m}f_{m+n}e_{0}e_{n}
\right] +$\newline
$+$ $\left[
f_{m+1}f_{m}e_{1}e_{0}+f_{m+1}^{2}e_{1}^{2}+f_{m+1}f_{m+2}e_{1}e_{2}+f_{m+1}f_{m+3}e_{1}e_{3}+...+f_{m+1}f_{m+n}e_{1}e_{n}
\right] +$\newline
$+\left[
f_{m+2}f_{m}e_{2}e_{0}+f_{m+2}f_{m+1}e_{2}e_{1}+f_{m+2}^{2}e_{2}^{2}+f_{m+2}f_{m+3}e_{2}e_{3}+...+f_{m+2}f_{m+n}e_{2}e_{n}
\right] +$\newline
$+\left[
f_{m+3}f_{m}e_{3}e_{0}+f_{m+3}f_{m+1}e_{3}e_{1}+f_{m+3}f_{m+2}e_{3}e_{2}+f_{m+3}^{2}e_{3}^{2}+...+f_{m+3}f_{m+n}e_{3}e_{n}
\right] +...+$\newline
$+\left[
f_{m+n}f_{m}e_{n}e_{0}+f_{m+n}f_{m+1}e_{n}e_{1}+f_{m+n}f_{m+2}e_{n}e_{2}+f_{m+n}f_{m+3}e_{n}e_{3}+...+f_{m+n}^{2}e_{n}^{2}
\right] .$
Consider the difference\newline
$F_{m-1}F_{m+1}-F_{m}^{2}=$\newline
$=e_{0}\left[ e_{0}\left( f_{m-1}f_{m+1}-f_{m}^{2}\right) +e_{1}\left(
f_{m-1}f_{m+2}-f_{m}f_{m+1}\right) +...+e_{n}\left(
f_{m-1}f_{m+n+1}-f_{m}f_{m+n}\right) \right] +$\newline
$+e_{1}\left[ e_{0}\left( f_{m}f_{m+1}-f_{m+1}f_{m}\right) +e_{1}\left(
f_{m}f_{m+2}-f_{m+1}^{2}\right) +...+e_{n}\left(
f_{m}f_{m+n+1}-f_{m+1}f_{m+n}\right) \right] +$\newline
$+e_{2}\left[ e_{0}\left( f_{m+1}^{2}-f_{m+2}f_{m}\right) +e_{1}\left(
f_{m+1}f_{m+2}-f_{m+2}f_{m+1}\right) +...+e_{n}\left(
f_{m+1}f_{m+n+1}-f_{m+2}f_{m+n}\right) \right] +$\newline
$+e_{3}\left[ e_{0}\left( f_{m+2}f_{m+1}-f_{m+3}f_{m}\right) +e_{1}\left(
f_{m+2}^{2}-f_{m+3}f_{m+1}\right) +...+e_{n}\left(
f_{m+2}f_{m+n+1}-f_{m+3}f_{m+n}\right) \right] +...+$\newline
$+e_{n}\left[ e_{0}\left( f_{m+n-1}f_{m+1}-f_{m+n}f_{m}\right) +e_{1}\left(
f_{m+n-1}f_{m+2}-f_{m+n}f_{m+1}\right) +...+e_{n}\left(
f_{m+n-1}f_{m+n+1}-f_{m+n}^{2}\right) \right] .$
Using the formula $f_{i}f_{j}-f_{i+k}f_{j-k}=\left( -1\right)
^{j-k}f_{i+k-j}f_{k}$ (see Koshy, p. 87, formula 2) and the identities $
f_{1}=1,f_{-m}=\left( -1\right) ^{m+1}f_{m}$ (see Koshy, p. 84), we obtain
\newline
$F_{m-1}F_{m+1}-F_{m}^{2}=e_{0}\left( -1\right) ^{m}\left[
e_{0}f_{1}+e_{1}f_{2}+e_{2}f_{3}+...+e_{n}f_{n+1}\right] +$\newline
$+e_{1}\left( -1\right) ^{m+1}\left[
e_{0}f_{0}+e_{1}f_{1}+e_{2}f_{2}+...+e_{n}f_{n}\right] +$\newline
$+e_{2}\left( -1\right) ^{m}\left[
e_{0}f_{-1}+e_{1}f_{0}+e_{2}f_{1}+...+e_{n}f_{n-1}\right] +$\newline
$+e_{3}\left( -1\right) ^{m+1}\left[
e_{0}f_{-2}+e_{1}f_{-1}+e_{2}f_{0}+...+e_{n}f_{n-2}\right] +...+$\newline
$+\left( -1\right) ^{m+n}e_{n}\left[
e_{0}f_{-n+1}+e_{1}f_{-n+2}+e_{2}f_{-n+3}+...+e_{n}f_{1}\right] =$\newline
$=\left( -1\right) ^{m}\left(
e_{0}F_{1}-e_{1}F_{0}+e_{2}F_{-1}-e_{3}F_{-2}+...+\left( -1\right)
^{n}e_{n}F_{-n+1}\right) .$
Using Proposition 2.7, we have\newline
$F_{m-1}F_{m+1}-F_{m}^{2}=\left( -1\right)
^{m}[e_{0}F_{1}-e_{1}F_{0}+e_{2}\left( F_{1}-F_{0}\right) -$\newline
$-e_{3}\left( 2F_{0}-F_{1}\right) +e_{4}\left( 2F_{1}-3F_{0}\right)
-e_{5}\left( -3F_{1}+5F_{0}\right) +...+$\newline
$+e_{n}\left( -1\right) ^{n}\left( \left( -1\right) ^{n}f_{n-1}F_{1}+\left(
-1\right) ^{n-1}f_{n}F_{0}\right) ]=$\newline
$=\left( -1\right)
^{m}[(e_{0}f_{-1}+e_{1}f_{0}+e_{2}f_{1}+...+e_{n}f_{n-1})F_{1}-$\newline
$-\left( f_{0}e_{0}+f_{1}e_{1}+f_{2}e_{2}+...+f_{n}e_{n}\right) F_{0}]=$
\newline
$=\left( -1\right) ^{m}\left[ F_{-1}F_{1}-F_{0}^{2}\right] .$ The theorem is
now proved.
\textbf{Remark 2.9. }
i) Similarly, we can prove the following analogue of Cassini's formula:
\begin{equation*}
F_{m+1}F_{m-1}-F_{m}^{2}=\left( -1\right) ^{m}\left[ F_{1}F_{-1}-F_{0}^{2}
\right] .
\end{equation*}

ii) Theorem 2.8 generalizes Cassini's formula to all real algebras.

iii) If the algebra $A$ is the algebra of the real numbers $\mathbb{R},$ then
we have $F_{m}=f_{m}.$ From the above theorem, it results that
\begin{equation*}
f_{m+1}f_{m-1}-f_{m}^{2}=\left( -1\right) ^{m}\left[ f_{1}f_{-1}-f_{0}^{2}
\right] =\left( -1\right) ^{m},
\end{equation*}
which is the classical Cassini identity.
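As a concrete illustration of Theorem 2.8, let $A=\mathbb{C}$ with the basis $\{e_{0}=1,e_{1}=i\},$ so that $F_{m}=f_{m}+f_{m+1}i.$ For $m=1$ we have $F_{0}=i,$ $F_{1}=1+i,$ $F_{2}=1+2i,$ $F_{-1}=1,$ and
\begin{equation*}
F_{0}F_{2}-F_{1}^{2}=i\left( 1+2i\right) -\left( 1+i\right) ^{2}=-2-i,
\end{equation*}
while $\left( -1\right) ^{1}\left( F_{-1}F_{1}-F_{0}^{2}\right) =-\left( 1+i+1\right) =-2-i,$ in agreement with the theorem.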
\medskip

\textbf{3. Imaginary Fibonacci quaternions and imaginary Fibonacci
octonions}

\medskip
In the following, we will consider a field $K$ with $\operatorname{char}K\neq 2,3,$ $V$ a
finite dimensional vector space and $A$ a finite dimensional unitary algebra
over the field $K$, associative or nonassociative.
Let $\mathbb{H}\left( \alpha ,\beta \right) $ be the generalized real
quaternion algebra, the algebra of the elements of the form $a=a_{1}\cdot
1+a_{2}\mathbf{i}+a_{3}\mathbf{j}+a_{4}\mathbf{k},$ where $a_{i}\in \mathbb{R
},\mathbf{i}^{2}=-\alpha ,\mathbf{j}^{2}=-\beta ,$ $\mathbf{k}=\mathbf{ij}=-
\mathbf{ji}.$ We denote by $\mathbf{t}\left( a\right) $ and $\mathbf{n}
\left( a\right) $ the trace and the norm of a real quaternion $a.$ The norm
of a generalized quaternion has the following expression $\mathbf{n}\left(
a\right) =a_{1}^{2}+\alpha a_{2}^{2}+\beta a_{3}^{2}+\alpha \beta a_{4}^{2}$
and $\mathbf{t}\left( a\right) =2a_{1}.$ It is known that for $a\in $ $
\mathbb{H}\left( \alpha ,\beta \right) ,$ we have $a^{2}-\mathbf{t}\left(
a\right) a+\mathbf{n}\left( a\right) =0.$ The quaternion algebra $\mathbb{H}
\left( \alpha ,\beta \right) $ is a \textit{division algebra} if for all $
a\in \mathbb{H}\left( \alpha ,\beta \right) ,$ $a\neq 0$ we have $\mathbf{n}
\left( a\right) \neq 0,$ otherwise $\mathbb{H}\left( \alpha ,\beta \right) $
is called a \textit{split algebra}.
Let $\mathbb{O}(\alpha ,\beta ,\gamma )$ be a generalized octonion algebra
over $\mathbb{R},$ with basis $\{1,e_{1},...,e_{7}\},$ the algebra of the
elements of the form $\
a=a_{0}+a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}+a_{4}e_{4}+a_{5}e_{5}+a_{6}e_{6}+a_{7}e_{7}\,
$and the multiplication given in the following table:
\begin{center}
{\footnotesize $
\begin{tabular}{c||c|c|c|c|c|c|c|c|}
$\cdot $ & $1$ & $\,\,\,e_{1}$ & $\,\,\,\,\,e_{2}$ & $\,\,\,\,e_{3}$ & $
\,\,\,\,e_{4}$ & $\,\,\,\,\,\,e_{5}$ & $\,\,\,\,\,\,e_{6}$ & $
\,\,\,\,\,\,\,e_{7}$ \\ \hline\hline
$\,1$ & $1$ & $\,\,\,e_{1}$ & $\,\,\,\,e_{2}$ & $\,\,\,\,e_{3}$ & $
\,\,\,\,e_{4}$ & $\,\,\,\,\,\,e_{5}$ & $\,\,\,\,\,e_{6}$ & $
\,\,\,\,\,\,\,e_{7}$ \\ \hline
$\,e_{1}$ & $\,\,e_{1}$ & $-\alpha $ & $\,\,\,\,e_{3}$ & $-\alpha e_{2}$ & $
\,\,\,\,e_{5}$ & $-\alpha e_{4}$ & $-\,\,e_{7}$ & $\,\,\,\alpha e_{6}$ \\
\hline
$\,e_{2}$ & $\,e_{2}$ & $-e_{3}$ & $-\,\beta $ & $\,\,\beta e_{1}$ & $
\,\,\,\,e_{6}$ & $\,\,\,\,\,e_{7}$ & $-\beta e_{4}$ & $-\beta e_{5}$ \\
\hline
$e_{3}$ & $e_{3}$ & $\alpha e_{2}$ & $-\beta e_{1}$ & $-\alpha \beta $ & $
\,\,\,\,e_{7}$ & $-\alpha e_{6}$ & $\,\,\,\beta e_{5}$ & $-\alpha \beta
e_{4} $ \\ \hline
$e_{4}$ & $e_{4}$ & $-e_{5}$ & $-\,e_{6}$ & $-\,\,e_{7}$ & $-\,\gamma $ & $
\,\,\,\gamma e_{1}$ & $\,\,\gamma e_{2}$ & $\,\,\,\,\,\gamma e_{3}$ \\ \hline
$\,e_{5}$ & $\,e_{5}$ & $\alpha e_{4}$ & $-\,e_{7}$ & $\,\alpha e_{6}$ & $
-\gamma e_{1}$ & $-\,\alpha \gamma $ & $-\gamma e_{3}$ & $\,\alpha \gamma
e_{2}$ \\ \hline
$\,\,e_{6}$ & $\,\,e_{6}$ & $\,\,\,\,e_{7}$ & $\,\,\beta e_{4}$ & $-\,\beta
e_{5}$ & $-\gamma e_{2}$ & $\,\,\,\gamma e_{3}$ & $-\beta \gamma $ & $-\beta
\gamma e_{1}$ \\ \hline
$\,\,e_{7}$ & $\,\,e_{7}$ & $-\alpha e_{6}$ & $\,\beta e_{5}$ & $\alpha
\beta e_{4}$ & $-\gamma e_{3}$ & $-\alpha \gamma e_{2}$ & $\beta \gamma
e_{1} $ & $-\alpha \beta \gamma $ \\ \hline
\end{tabular}
\
$ }
Table 1
\end{center}
The algebra $\mathbb{O}(\alpha ,\beta ,\gamma )$ is non-commutative and
non-associative.
If $a\in \mathbb{O}(\alpha ,\beta ,\gamma ),$ $
a=a_{0}+a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}+a_{4}e_{4}+a_{5}e_{5}+a_{6}e_{6}+a_{7}e_{7}
$ then $\bar{a}
=a_{0}-a_{1}e_{1}-a_{2}e_{2}-a_{3}e_{3}-a_{4}e_{4}-a_{5}e_{5}-a_{6}e_{6}-a_{7}e_{7}
$ is called the \textit{conjugate} of the element $a.$ The scalars $\mathbf{t
}\left( a\right) =a+\overline{a}\in \mathbb{R}$ and
\begin{equation}
\,\mathbf{n}\left( a\right) =a\overline{a}=a_{0}^{2}+\alpha a_{1}^{2}+\beta
a_{2}^{2}+\alpha \beta a_{3}^{2}+\gamma a_{4}^{2}+\alpha \gamma
a_{5}^{2}+\beta \gamma a_{6}^{2}+\alpha \beta \gamma a_{7}^{2}\in \mathbb{R},
\tag{3.1.}
\end{equation}
are called the \textit{trace}, respectively, the \textit{norm} of the
element $a\in \mathbb{O}(\alpha ,\beta ,\gamma ).$ It follows that
$a^{2}-\mathbf{t}\left( a\right) a+\mathbf{n}\left( a\right) =0,$
for all $a\in \mathbb{O}(\alpha ,\beta ,\gamma ).$ The octonion algebra $\mathbb{O}\left( \alpha
,\beta ,\gamma \right) $ is a \textit{division algebra} if for all $a\in
\mathbb{O}\left( \alpha ,\beta ,\gamma \right) ,$ $a\neq 0$ we have $\mathbf{
n}\left( a\right) \neq 0,$ otherwise $\mathbb{O}\left( \alpha ,\beta ,\gamma
\right) $ is called a \textit{split algebra}.
Let $V$ be a real vector space of dimension $n$ and $<,>$ be the inner
product. The \textit{cross product} on $V$ is a continuous map
\begin{equation*}
X:V^{s}\rightarrow V,s\in \{1,2,...,n\}
\end{equation*}
with the following properties:

1) $<X\left( x_{1},...x_{s}\right) ,x_{i}>=0,i\in \{1,2,...,s\};$

2) $<X\left( x_{1},...x_{s}\right) ,X\left( x_{1},...x_{s}\right) >=\det
\left( <x_{i},x_{j}>\right) $ (see [Br; ]).

In [Ro; 96], it was proved that if $V$ admits a cross product of two vectors
and $d=\dim _{\mathbb{R}}V,$ then $d\in \{0,1,3,7\}$ (see [Ro; 96], Proposition 3).
The values $0,1,3$ and $7$ for the dimension are obtained from Hurwitz's
theorem, since the real Hurwitz division algebras $\mathcal{H}$ exist only
in dimensions $1,2,4$ and $8$. In these situations, the cross product is
obtained from the product of the normed division algebra, restricting it to
the imaginary subspace of the algebra $\mathcal{H},$ which can be of
dimension $0,1,3$ or $7$ (see [Ja; 74]). It is known that the real
Hurwitz division algebras are only: the real numbers, the complex numbers,
the quaternions and the octonions.
In $\mathbb{R}^{3}$ with the canonical basis $\{i_{1},i_{2},i_{3}\},$ the
cross product of two linearly independent vectors $
x=x_{1}i_{1}+x_{2}i_{2}+x_{3}i_{3}$ and $y=y_{1}i_{1}+y_{2}i_{2}+y_{3}i_{3}$
is a vector, denoted by $x\times y,$ which can be expressed by computing the
following formal determinant
\begin{equation}
x\times y=\left\vert
\begin{array}{ccc}
i_{1} & i_{2} & i_{3} \\
x_{1} & x_{2} & x_{3} \\
y_{1} & y_{2} & y_{3}
\end{array}
\right\vert . \tag{3.2.}
\end{equation}
The cross product can also be described using the quaternions and the basis $
\{i_{1},i_{2},i_{3}\}$ as a standard basis for $\mathbb{R}^{3}.$ If a vector
$x\in \mathbb{R}^{3}$ has the form $x=x_{1}i_{1}+x_{2}i_{2}+x_{3}i_{3}$ and
is represented as the quaternion $x=x_{1}\mathbf{i}+x_{2}\mathbf{j}+x_{3}
\mathbf{k}$, then the cross product of two vectors has the form $
x\times y=xy+<x,y>,$ where $<x,y>=x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}$ is the
inner product.
A cross product for 7-dimensional vectors can be obtained in the same way by
using the octonions instead of the quaternions. If $x=\underset{i=1}{\overset
{7}{\sum }}x_{i}e_{i}$ and $y=\underset{i=1}{\overset{7}{\sum }}y_{i}e_{i}$
are two imaginary octonions, then
\begin{eqnarray*}
x\times y
&=&(x_{2}y_{4}-x_{4}y_{2}+x_{3}y_{7}-x_{7}y_{3}+x_{5}y_{6}-x_{6}y_{5})
\,e_{1}+ \\
&&+(x_{3}y_{5}-x_{5}y_{3}+x_{4}y_{1}-x_{1}y_{4}+x_{6}y_{7}-x_{7}y_{6})
\,e_{2}+ \\
&&+(x_{4}y_{6}-x_{6}y_{4}+x_{5}y_{2}-x_{2}y_{5}+x_{7}y_{1}-x_{1}y_{7})
\,e_{3}+ \\
&&+(x_{5}y_{7}-x_{7}y_{5}+x_{6}y_{3}-x_{3}y_{6}+x_{1}y_{2}-x_{2}y_{1})
\,e_{4}+ \\
&&+(x_{6}y_{1}-x_{1}y_{6}+x_{7}y_{4}-x_{4}y_{7}+x_{2}y_{3}-x_{3}y_{2})
\,e_{5}+ \\
&&+(x_{7}y_{2}-x_{2}y_{7}+x_{1}y_{5}-x_{5}y_{1}+x_{3}y_{4}-x_{4}y_{3})
\,e_{6}+ \\
&&+(x_{1}y_{3}-x_{3}y_{1}+x_{2}y_{6}-x_{6}y_{2}+x_{4}y_{5}-x_{5}y_{4})
\,e_{7},
\end{eqnarray*}
\begin{equation}
\tag{3.3.}
\end{equation}
see [Si; 02].
Let $\mathbb{H}$ be the real division quaternion algebra and $\mathbb{H}
_{0}=\{x\in \mathbb{H}$ $\mid $ $\mathbf{t}\left( x\right) =0\}.$ An element
$F_{n}\in \mathbb{H}_{0}$ is called an\textit{\ imaginary Fibonacci
quaternion element} if it is of the form $F_{n}=f_{n}\mathbf{i}+f_{n+1}
\mathbf{j}+f_{n+2}\mathbf{k,}$ where $\left( f_{n}\right) _{n\in \mathbb{N}}$
is the Fibonacci numbers sequence$.$ Let $F_{k},F_{m},F_{n}$ be three
imaginary Fibonacci quaternions. Then we have the following result.
In the proof of the following results, we will use some relations between
Fibonacci numbers, namely:
\textit{D'Ocagne's identity}
\begin{equation}
f_{m}f_{n+1}-f_{n}f_{m+1}=\left( -1\right) ^{n}f_{m-n} \tag{3.4.}
\end{equation}
see relation (33) from [Wo], and
\textit{Johnson's identity}
\begin{equation}
f_{a}f_{b}-f_{c}f_{d}=\left( -1\right) ^{r}\left(
f_{a-r}f_{b-r}-f_{c-r}f_{d-r}\right) , \tag{3.5.}
\end{equation}
for arbitrary integers $a,b,c,d,$ and $r$ with $a+b=c+d,$ see relation (36)
from [Wo].
\textbf{Proposition 3.1.} \textit{With the above notations, for three
arbitrary imaginary Fibonacci quaternions, we have}
\begin{equation*}
<F_{k}\times F_{m},F_{n}>=0.
\end{equation*}
\textit{Therefore, the vectors} $F_{k},F_{m},F_{n}$ \textit{are linearly
dependent}.

The above result is similar to the result for dual Fibonacci vectors
obtained in [Gu; ], Theorem 11.
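A short justification, using only the defining recurrence: in $\mathbb{R}^{3},$ the mixed product $<F_{k}\times F_{m},F_{n}>$ equals the determinant
\begin{equation*}
\left\vert
\begin{array}{ccc}
f_{k} & f_{k+1} & f_{k+2} \\
f_{m} & f_{m+1} & f_{m+2} \\
f_{n} & f_{n+1} & f_{n+2}
\end{array}
\right\vert ,
\end{equation*}
and since $f_{k+2}=f_{k+1}+f_{k},$ the third column is the sum of the first two, so the determinant is zero. Three vectors of $\mathbb{R}^{3}$ with vanishing mixed product are linearly dependent.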
Let $\mathbb{O}$ be the real division octonion algebra and $\mathbb{O}
_{0}=\{x\in \mathbb{O}$ $\mid $ $\mathbf{t}\left( x\right) =0\}.$ An element
$F_{n}\in \mathbb{O}_{0}$ is called an \textit{imaginary Fibonacci octonion
element} if it is of the form $
F_{n}=f_{n}e_{1}+f_{n+1}e_{2}+f_{n+2}e_{3}+f_{n+3}e_{4}+f_{n+4}e_{5}+f_{n+5}e_{6}+f_{n+6}e_{7}
\mathbf{,}$ where $\left( f_{n}\right) _{n\in \mathbb{N}}$ is the Fibonacci
numbers sequence$.$ Let $F_{k},F_{m},F_{n}$ be three imaginary Fibonacci
octonions.
\textbf{Proposition 3.2.} \textit{With the above notations, for three
arbitrary imaginary Fibonacci octonions, we have}
\begin{equation*}
<F_{k}\times F_{m},F_{n}>=0.
\end{equation*}
\textbf{Proof.} Using formulae ($3.3$), $\left( 3.4\right) $ and $\left(
3.5\right) $, we will compute
$F_{k}\times F_{m}.$ \newline
The coefficient of \ $e_{1}$ is\newline
$f_{m+2}f_{k+4}-f_{k+2}f_{m+4}+f_{m+3}f_{k+7}-f_{k+3}f_{m+7}+f_{m+5}f_{k+6}-$
$f_{k+5}f_{m+6}=$\newline
$=f_{m}f_{k+2}-f_{k}f_{m+2}-f_{m}f_{k+4}+f_{k}f_{m+4}-f_{m}f_{k+1}+$ $
f_{k}f_{m+1}=$\newline
$=f_{m}\left( f_{k+2}-f_{k+4}-f_{k+1}\right) +f_{k}\left(
-f_{m+2}+f_{m+4}+f_{m+1}\right) =$\newline
$=f_{m}\left( f_{k}-f_{k+4}\right) +f_{k}\left( f_{m+4}-f_{m}\right) =$
\newline
$=-f_{m}\left( 3f_{k+1}+f_{k}\right) +f_{k}\left( 3f_{m+1}+f_{m}\right) =$
\newline
$=-3\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =-3\left( -1\right) ^{k}f_{m-k}.$
\newline
The coefficient of $e_{2}$ is\newline
$f_{m+3}f_{k+5}-f_{k+3}f_{m+5}+f_{m+4}f_{k+1}-f_{k+4}f_{m+1}+f_{m+6}f_{k+7}-$
$f_{k+6}f_{m+7}=$\newline
$=$ $-f_{m}f_{k+2}+f_{k}f_{m+2}-f_{m+3}f_{k}+f_{k+3}f_{m}+f_{m}f_{k+1}-$ $
f_{k}f_{m+1}=$\newline
$=f_{m}\left( -f_{k+2}+f_{k+3}+f_{k+1}\right) +f_{k}\left(
f_{m+2}-f_{m+3}-f_{m+1}\right) =$\newline
$=2\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =2\left( -1\right) ^{k}f_{m-k}.$
\newline
The coefficient of $e_{3}$ is \newline
$f_{m+4}f_{k+6}-f_{m+6}f_{k+4}+f_{m+5}f_{k+2}-f_{m+2}f_{k+5}+f_{m+7}f_{k+1}-$
$f_{m+1}f_{k+7}=$\newline
$
=f_{m}f_{k+2}-f_{m+2}f_{k}+f_{m+3}f_{k}-f_{m}f_{k+3}-f_{m+6}f_{k}+f_{m}f_{k+6}=
$\newline
$=f_{m}\left( f_{k+2}-f_{k+3}+f_{k+6}\right) +f_{k}\left(
-f_{m+2}+f_{m+3}-f_{m+6}\right) =$\newline
$=7\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =7\left( -1\right) ^{k}f_{m-k}.$
\newline
The coefficient of $e_{4}$ is\newline
$f_{m+5}f_{k+7}-f_{k+5}f_{m+7}+f_{m+6}f_{k+3}-f_{k+6}f_{m+3}+f_{m+1}f_{k+2}-$
$f_{m+2}f_{k+1}=$\newline
$
=-f_{m}f_{k+2}+f_{k}f_{m+2}-f_{m+3}f_{k}+f_{k+3}f_{m}-f_{m}f_{k+1}+f_{k}f_{m+1}=
$\newline
$=f_{m}\left( -f_{k+2}+f_{k+3}-f_{k+1}\right) +f_{k}\left( f_{m+2}-f_{m+3}+f_{m+1}\right) =0.$\newline
The coefficient of $e_{5}$ is\newline
$f_{m+6}f_{k+1}-f_{k+6}f_{m+1}+f_{m+7}f_{k+4}-f_{k+7}f_{m+4}+f_{m+2}f_{k+3}-$
$f_{k+2}f_{m+3}=$\newline
$
=-f_{m+5}f_{k}+f_{k+5}f_{m}+f_{m+3}f_{k}-f_{k+3}f_{m}+f_{m}f_{k+1}-f_{k}f_{m+1}=
$\newline
$=f_{m}\left( f_{k+5}-f_{k+3}+f_{k+1}\right) +f_{k}\left(
-f_{m+5}+f_{m+3}-f_{m+1}\right) =$\newline
$=4\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =4\left( -1\right) ^{k}f_{m-k}.$
\newline
The coefficient of $e_{6}$ is\newline
$f_{m+7}f_{k+2}-f_{k+7}f_{m+2}+f_{m+1}f_{k+5}-f_{k+1}f_{m+5}+f_{m+3}f_{k+4}-$
$f_{k+3}f_{m+4}=$\newline
$=f_{m+5}f_{k}-f_{k+5}f_{m}-f_{m}f_{k+4}+f_{k}f_{m+4}-f_{m}f_{k+1}+$ $
f_{k}f_{m+1}=$\newline
$=f_{m}\left( -f_{k+5}-f_{k+4}-f_{k+1}\right) +f_{k}\left(
f_{m+5}+f_{m+4}+f_{m+1}\right) =$\newline
$=-9\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =-9\left( -1\right) ^{k}f_{m-k}.$
\newline
The coefficient of $e_{7}$ is\newline
$f_{m+1}f_{k+3}-f_{k+1}f_{m+3}+f_{m+2}f_{k+6}-f_{k+2}f_{m+6}+f_{m+4}f_{k+5}-$
$f_{k+4}f_{m+5}=$\newline
$=f_{m}\left( -f_{k+2}+f_{k+4}+f_{k+1}\right) +f_{k}\left(
f_{m+2}-f_{m+4}-f_{m+1}\right) =$\newline
$=3\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =3\left( -1\right) ^{k}f_{m-k}.$
\newline
We obtain that\newline
$F_{k}\times F_{m}=\left( -1\right) ^{k}f_{m-k}\left(
-3e_{1}+2e_{2}+7e_{3}+4e_{5}-9e_{6}+3e_{7}\right) .$ \newline
Therefore\newline
$<F_{k}\times F_{m},F_{n}>=\left( -1\right) ^{k}f_{m-k}\left(
-3f_{n+1}+2f_{n+2}+7f_{n+3}+4f_{n+5}-9f_{n+6}+3f_{n+7}\right) =$\newline
$=\left( -1\right) ^{k}f_{m-k}\left( -2f_{n+2}+2f_{n+1}+2f_{n}\right) =0.$
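All of the reductions above rest on the d'Ocagne-type identity $f_{m}f_{k+1}-f_{k}f_{m+1}=\left( -1\right) ^{k}f_{m-k}$. As an illustration (not part of the original argument), it can be spot-checked numerically; the sketch below assumes the standard convention $f_{0}=0$, $f_{1}=1$:

```python
# Numerical spot-check of the d'Ocagne-type identity
# f_m f_{k+1} - f_k f_{m+1} = (-1)^k f_{m-k}   (f_0 = 0, f_1 = 1),
# which drives every coefficient computation above.

def fib(n: int) -> int:
    """Fibonacci numbers with f_0 = 0, f_1 = 1 (n >= 0)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for k in range(0, 10):
    for m in range(k, 15):
        lhs = fib(m) * fib(k + 1) - fib(k) * fib(m + 1)
        rhs = (-1) ** k * fib(m - k)
        assert lhs == rhs, (k, m)
```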
Cristina FLAUT
Faculty of Mathematics and Computer Science,
Ovidius University,
Bd. Mamaia 124, 900527, CONSTANTA,
ROMANIA
http://cristinaflaut.wikispaces.com/
http://www.univ-ovidius.ro/math/
e-mail:
cflaut@univ-ovidius.ro
cristina flaut@yahoo.com
Vitalii SHPAKIVSKYI
Department of Complex Analysis and Potential Theory
Institute of Mathematics of the National Academy of Sciences of Ukraine,
3, Tereshchenkivs'ka st.
01601 Kiev-4
UKRAINE
http://www.imath.kiev.ua/
e-mail: shpakivskyi@mail.ru
\end{document}
\begin{document}
\maketitle
\begin{abstract}
In the paper \cite{renato}, Renato Targino shows that the bi-Lipschitz type of a plane curve is determined by the local ambient topological properties of the curve. Here we show that this is no longer true in higher dimensions. However, we show that the bi-Lipschitz type of a space curve is determined by the number of its singular points and by the local ambient topological type of a generic projection of the curve into the affine plane.
\end{abstract}
\section{Introduction}
Let $\Lambda, \Gamma\subset \field{C}^n$ be two algebraic curves. In general, if the germs $(\Lambda, p)$ and $(\Gamma, q)$ are {\it ambient topologically equivalent}, in the sense that there exists a germ of homeomorphism $\varphi\colon (\field{C}^n,p)\rightarrow (\field{C}^n,q)$ such that $\varphi (\Lambda,p)=(\Gamma,q)$, then this does not imply that the germs $(\Lambda, p)$ and $(\Gamma, q)$ are {\it bi-Lipschitz equivalent} in the sense that there exists a germ of bi-Lipschitz homeomorphism $\varphi\colon (\Lambda,p)\rightarrow (\Gamma,q)$ (with respect to the outer metric). Moreover, we even have the following global result:
\begin{ex}
Let $n>2$. For every irreducible singular algebraic curve $\Gamma\subset \field{C}^n$, there is an algebraic curve $\Lambda$ such that the curves $\Gamma$ and $\Lambda$ are not bi-Lipschitz equivalent but are topologically equivalent.
{\rm Indeed, let $\Lambda$ be the normalization of $\Gamma$. Then $\Lambda$ is an algebraic curve and can still be embedded into $\field{C}^n$ as a smooth algebraic curve. But $\Lambda$ has no singularities, so by \cite{Bir}, \cite{Sam} the curves $\Gamma$ and $\Lambda$ are not bi-Lipschitz equivalent. We have a canonical semi-algebraic mapping $\phi: \Lambda\to\Gamma$, which by assumption is a homeomorphism. Now, by Theorem 6.6 in \cite{jel} we can extend the mapping
$\phi: \Lambda\to \Gamma$ to a global semi-algebraic homeomorphism $\Phi: \field{C}^n\to\field{C}^n$.}
\end{ex}
The situation is different in dimension $n=2$. In this dimension, two germs are ambient topologically equivalent if and only if they are
outer bi-Lipschitz equivalent (see \cite{p-t}, \cite{fer}, \cite{n-p}). In fact, in the paper \cite{renato}, Renato Targino classifies plane algebraic curves, up to global bi-Lipschitz homeomorphisms, in terms of local ambient topological properties of the projective closure of those curves. More precisely, two algebraic curves $\Lambda, \Gamma\subset \field{C}^n$ are said to be {\it bi-Lipschitz equivalent} if there exists a bi-Lipschitz homeomorphism $\varphi\colon\Lambda\rightarrow\Gamma$ (with respect to the outer Euclidean metric induced from $\field{C}^n$). In the paper \cite{renato}, Renato Targino proves Theorem \ref{renato} below.
\noindent{{\bf Notation.} Consider $\field{C}^2$ embedded into $\Bbb {CP}^2$. Given an algebraic plane curve $X\subset\field{C}^2$, let $\overline{X}\subset \Bbb{C P}^2$ be the respective projective closure of $X$ and let $\pi_{\infty}$ denote the line at infinity in $\Bbb{C P}^2$.}
\begin{theo}[\cite{renato}, Theorem 1.6]\label{renato}
Let $\Lambda, \Gamma\subset \field{C}^2$ be two algebraic plane curves with irreducible components $\Lambda = \bigcup_{i\in I} \Lambda_i$ and $\Gamma = \bigcup_{j\in J} \Gamma_j$. The following statements are mutually equivalent.
\begin{enumerate}
\item The curves $\Lambda$ and $\Gamma$ are bi-Lipschitz equivalent.
\item There are bijections $\sigma \colon I\rightarrow J$ and $\rho$ between the set of singular points of $\overline{\Lambda}\cup\pi_{\infty}$ and the set of singular points of $\overline{\Gamma}\cup\pi_{\infty}$ such that $\rho(p)\in\pi_{\infty}$ if and only if $p\in\pi_{\infty}$, $(\overline{\Lambda}\cup\pi_{\infty}, p)$ is topologically equivalent to $(\overline{\Gamma}\cup\pi_{\infty},\rho(p))$, and $(\overline{\Lambda}_i\cup\pi_{\infty}, p)$ is topologically equivalent to $(\overline{\Gamma}_{\sigma(i)}\cup\pi_{\infty},\rho(p))$ ($\forall i\in I$), for every singular point $p$ of $\overline{\Lambda}\cup\pi_{\infty}$.
\end{enumerate}
\end{theo}
In this note we address the global classification problem given by bi-Lipschitz equivalence of algebraic space curves, i.e., algebraic curves in $\field{C}^n$ ($n>2$). As in the local case, we obtain a characterization of the bi-Lipschitz equivalence classes of algebraic space curves by looking at their generic plane projections. In this direction, we point out that, given an algebraic curve $\Lambda$ in $\field{C}^n$, any two generic plane projections are bi-Lipschitz equivalent, and this is why we can refer to a generic plane projection of $\Lambda$ as {\it the generic plane projection of} $\Lambda$. We are ready to state the main results of the paper.
\begin{theo}\label{main1} Two irreducible algebraic curves in $\field{C}^n$ are bi-Lipschitz equivalent if and only if they have the same number of singular points and their generic plane projections are bi-Lipschitz equivalent.
\end{theo}
As a direct consequence of Theorem \ref{main1} and Theorem \ref{renato} (Theorem 1.6 of \cite{renato}), we get a characterization of the bi-Lipschitz equivalence classes of irreducible algebraic space curves in terms of the local ambient topology of their generic plane projections.
\begin{co}\label{co}Let $\Lambda, \Gamma\subset \field{C}^n $ ($n>2$) be two irreducible algebraic curves. Let $X_{\Lambda}$ and $X_{\Gamma}$ denote their generic plane projections (respectively). The following statements are mutually equivalent:
\begin{enumerate}
\item The curves $\Lambda$ and $\Gamma$ are bi-Lipschitz equivalent.
\item The curves $\Lambda$ and $\Gamma$ have the same number of singular points and there is a bijection $\rho$ between the set of singular points of $\overline{X}_{\Lambda}\cup\pi_{\infty}$ and the set of singular points of $\overline{X}_{\Gamma}\cup\pi_{\infty}$ such that $\rho(p)\in\pi_{\infty}$ if and only if $p\in\pi_{\infty}$ and $(\overline{X}_{\Lambda}\cup\pi_{\infty}, p)$ is topologically equivalent to $(\overline{X}_{\Gamma}\cup\pi_{\infty},\rho(p))$.
\end{enumerate}
\end{co}
\section{Main Result}
Let $B^k(R)\subset \field{C}^k$ denote the $2k$ real dimensional Euclidean ball of radius $R$ and center at $0.$
\begin{theo}\label{infinity}
Let $n>2$. If $X\subset \field{C}^n$ is a closed algebraic curve, then there are a real number $r>0$ and a proper projection
$\pi: X\to\field{C}^2$ such that $\pi: X\setminus \pi^{-1}(B^2(r))\to \field{C}^2\setminus B^2(r)$ is a bi-Lipschitz embedding.
\end{theo}
\begin{proof}
Of course it is enough to prove our theorem for a projection $\pi: X\to\field{C}^{n-1}$ and then use induction on the number $n$. Consider $\field{C}^n$ embedded into $\Bbb {CP}^n$. Let $X'$ be the projective closure of $X$ in $\Bbb {CP}^n.$
Let $Z:=X'\setminus X=\{ z_1,...,z_r\}.$ For $i\not=j$, let us denote by $L_{ij}$ the line $\overline{z_i,z_j}$ and let us denote by $\pi_\infty$ the hyperplane at infinity of $\field{C}^n.$ Thus $\pi_\infty\cong \Bbb {C P}^{n-1}$ is a projective space of dimension $n-1.$ For a non-zero vector $v\in \field{C}^n$, let $[v]$ denote the corresponding point in $\pi_\infty.$
Let $\Delta =\{ (x,y)\in {X}\times {X} : x=y\}.$ Consider the mapping $$A:
{X}\times {X}\setminus \Delta \ni (x,y)\to
[x-y]\in \pi_\infty.$$
Let $\Gamma$ be the graph of $A$ in $X'\times X'\times {\pi_\infty}$ and take $\Gamma':=\overline{\Gamma}$
(we take this closure in $X'\times X' \times {\pi_\infty}$). Let
$p:\Gamma' \to \pi_\infty$ and
$q:\Gamma' \to X'\times X'$ be the canonical projections. Note that, for $z_i\in Z$, the set $q^{-1}(z_i,z_i)=Z_{i}$ is an algebraic set of dimension at most $1$. Let $Z_i'=p(Z_i).$ The set $W:=\bigcup Z_i'$ is a closed subset of $\pi_\infty$ of dimension at most $1$, hence $\pi_\infty\setminus W \not=\emptyset.$ Let $Q\in \pi_\infty\setminus (W \cup \bigcup L_{ij})$; since $W$ is closed, there exists a small ball $B_1\subset \pi_\infty$ with center at $Q$ such that $B_1\cap (W\cup \bigcup L_{ij})=\emptyset.$
Let $B^n(R)$ be a large ball in $\field{C}^n$, take $V(R)=(X'\setminus B^n(R))\times (X'\setminus B^n(R))$ and let $O_R=q^{-1}(V(R)).$ Hence $p(O_R)$ is a neighborhood of $W$ in
$\pi_\infty$. We show that for $R$ sufficiently large, if $x,y\in X\setminus B^n(R),\ x\not=y$, then $A(x,y)\not\in B_1.$
Indeed, otherwise take $R_k=k \to \infty.$ Then for every $k\in \Bbb N$ we have points $x_k,y_k\in X\setminus B^n(R_k)$ such that $A(x_k,y_k)\in B_1.$
But then $x_k,y_k\to \infty$, so we can assume that $x_k\to z_i$ and $y_k\to z_j.$ If $z_i=z_j$, then $\lim A(x_k,y_k)=\lim p((x_k,y_k, [x_k-y_k]))\in p(Z_i)=Z_i'.$ This is a contradiction. If $z_i\not=z_j$, then $\overline{x_k,y_k}\to\overline{z_i,z_j}=L_{ij}$, and this means that $L_{ij}\cap B_1\not=\emptyset$, a contradiction again.
Hence, there is a number $R$ sufficiently large such that if $x,y\in X\setminus B^n(R), \ x\not=y$, then $A(x,y)\not\in B_1.$ Let $\Sigma=A\bigl((X\setminus B^n(R))\times (X\setminus B^n(R))\setminus \Delta\bigr).$ Then $Q\not\in \overline{\Sigma}.$ Take a hyperplane $H\subset \field{C}^n$ such that $Q\not\in \overline{H}.$ Of course $H\cong \field{C}^{n-1}.$ Let $\pi:\field{C}^n\to H$ be the projection with center $Q$ and let $K=\pi(X\cap B^n(R)).$ It is a compact set, hence there exists a ball $B^{n-1}(r)$ such that $K\subset B^{n-1}(r).$
Consider the proper mapping $\pi: X\setminus \pi^{-1}(B^{n-1}(r))\to H\setminus B^{n-1}(r)$. We show that the projection $\pi$ is a bi-Lipschitz embedding. Indeed, since a complex linear isomorphism is a bi-Lipschitz mapping, we can assume that
$Q=(0:0:...:0:1)$ and $H=\{x_n=0\}.$ Of course $||\pi(x)-\pi(y)||\le ||x-y||.$ Assume that $\pi$ is not bi-Lipschitz, i.e., there is a sequence of points $x_j,y_j\in X\setminus \pi^{-1}(B^{n-1}(r))$ such that $$\frac{||\pi(x_j)-\pi(y_j)||}{||x_j-y_j||}\to 0,$$ as $j\to \infty.$ Let $x_j-y_j=(a_1(j),...,a_{n-1}(j),b(j))$ and denote by $P_j$ the corresponding point $(a_1(j):...:a_{n-1}(j):b(j))$ in $\Bbb {CP}^{n-1}.$ Since $\displaystyle\frac{(a_1(j),...,a_{n-1}(j))}{||x_j-y_j||}=
\frac{\pi(x_j)-\pi(y_j)}{||x_j-y_j||}\to 0$, we have that $P_j\to Q.$ This is a contradiction.
\end{proof}
In the sequel we will use the following theorem of Jean-Pierre Serre (see \cite{mil}, p. 85):
\begin{theo}\label{serre}
If $\Gamma$ is an irreducible curve of degree $d$ and genus $g$ in the complex projective plane,
then $$\frac{1}{2} (d-1)(d-2)= g + \sum_{z\in Sing(\Gamma)} {\delta}_z,$$
where $\delta_z$ denotes the delta invariant of a point $z$.
\end{theo}
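As a quick sanity check of Serre's formula (a standard textbook example, not taken from the source): an irreducible plane cubic with a single ordinary node has $d=3$, geometric genus $g=0$, and one singular point $z$ with $\delta_z=1$, so

```latex
\frac{1}{2}(d-1)(d-2)=\frac{1}{2}\cdot 2\cdot 1=1=g+\delta_z=0+1 .
```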
Before starting to prove Theorem \ref{main1}, let us introduce the notion of Euclidean subsets being bi-Lipschitz equivalent at infinity.
\begin{defi}
Two subsets $X\subset\field{C}^n$ and $Y\subset\field{C}^m$ are called {\bf bi-Lipschitz equivalent at infinity} if there exist compact subsets $K_1\subset\field{C}^n$ and $K_2\subset\field{C}^m$ and a bi-Lipschitz homeomorphism $X\setminus K_1\rightarrow Y\setminus K_2.$
\end{defi}
\begin{re}\label{remark}
{\rm In order to prove that two algebraic plane curves $X$ and $Y$ are bi-Lipschitz equivalent, Renato Targino (see \cite{renato}) showed that it is enough to verify the following two conditions:
\begin{enumerate}
\item There is a bijection $\varphi\colon Sing(X)\rightarrow Sing(Y)$ such that $(X,p)$ is bi-Lipschitz equivalent to $(Y,\varphi(p))$ as germs, $\forall p\in Sing(X)$;
\item $X$ and $Y$ are bi-Lipschitz equivalent at infinity.
\end{enumerate}
Note that the proof of Renato Targino still works in the case where $X$ and $Y$ are algebraic curves in $\field{C}^n$ (not necessarily $n=2$).}
\end{re}
Now we can prove Theorem \ref{main1}.
\begin{proof}[Proof of Theorem \ref{main1}]
Let us suppose that $\Lambda$ and $\Gamma$ are bi-Lipschitz equivalent; in particular, they have the same genus and they are bi-Lipschitz equivalent at infinity. We are going to prove that their generic plane projections $\Lambda'$ and $\Gamma'$, respectively, satisfy conditions 1) and 2) of Remark \ref{remark}. By using Theorem \ref{infinity}, we see that $\Lambda$ and $\Gamma$ are bi-Lipschitz equivalent at infinity to $\Lambda'$ and $\Gamma'$, respectively. Since $\Lambda$ and $\Gamma$ are bi-Lipschitz equivalent, it follows that $\Lambda'$ and $\Gamma'$ are bi-Lipschitz equivalent at infinity as well ($\Lambda'$ and $\Gamma'$ satisfy item 2) of Remark \ref{remark}).
Before starting to show that $\Lambda'$ and $\Gamma'$ satisfy item 1) of Remark \ref{remark}, let us make some remarks about the singularities of a generic plane projection $X'$ of a space curve $X$ in $\field{C}^n$. We have a partition of the singular subset $Sing(X')$ into two types of singularities: singularities that come from singularities of $X$ via the associated linear generic projection $X\subset\field{C}^n\rightarrow X'\subset\field{C}^2$ (let us denote the set of such singularities by $S_1(X')$) and the so-called {\bf new nodes}, which are singularities that come from double points of the associated linear generic projection $X\subset\field{C}^n\rightarrow X'\subset\field{C}^2$ (let us denote the set of new nodes by $S_2(X')$).
We resume our proof that $\Lambda'$ and $\Gamma'$ satisfy item 1) of Remark \ref{remark}. It is clear that the local composition of the bi-Lipschitz homeomorphism $\Lambda'\rightarrow\Gamma'$ and the linear generic projections $\Lambda\subset\field{C}^n\rightarrow \Lambda'\subset\field{C}^2$ and $\Gamma\subset\field{C}^n\rightarrow \Gamma'\subset\field{C}^2$ gives a natural bijection $\varphi\colon S_1(\Lambda')\rightarrow S_1(\Gamma')$ such that $(\Lambda',p)$ is bi-Lipschitz equivalent to $(\Gamma',\varphi(p))$ as germs, $\forall p\in S_1(\Lambda')$. Next, we are going to extend $\varphi$ to the set of new nodes. Notice that, since $\Lambda$ and $\Gamma$ have the same number of singular points and the same genus, we can deduce from Theorem \ref{serre} that the number of new nodes appearing in $\Lambda'$ and in $\Gamma'$ is the same. Indeed, since $\Lambda'$ and $\Gamma'$ are bi-Lipschitz equivalent at infinity, they have the same degree (see Corollary 3.2 in \cite{bfs}) and they have topologically equivalent germs at infinity (as stated in Theorem 1.5 of \cite{renato}). Moreover, $\Gamma'$ and $\Lambda'$ also have the same genus. Now, by Serre's formula (Theorem \ref{serre}) we see that the number of new nodes must be the same in both cases. Since any two nodes are bi-Lipschitz equivalent as germs, we can take $\varphi\colon S_2(\Lambda')\rightarrow S_2(\Gamma')$ to be any bijection, and then $(\Lambda',p)$ is bi-Lipschitz equivalent to $(\Gamma',\varphi(p))$ as germs, $\forall p\in S_2(\Lambda')$. In other words, according to Remark \ref{remark}, we have proved that $\Lambda'$ and $\Gamma'$ are bi-Lipschitz equivalent.
On the other hand, let us suppose that the generic plane projections $\Lambda'$ (of $\Lambda$) and $\Gamma'$ (of $\Gamma$) are bi-Lipschitz equivalent. Thus, by using Theorem \ref{infinity}, we see that $\Lambda$ and $\Gamma$ are bi-Lipschitz equivalent at infinity, i.e., they satisfy item 2) of Remark \ref{remark}. Concerning item 1) of Remark \ref{remark}, we have natural bijections $\varphi_{\Lambda}\colon Sing(\Lambda)\rightarrow S_1(\Lambda')$ and $\varphi_{\Gamma}\colon Sing(\Gamma)\rightarrow S_1(\Gamma')$ such that $(\Lambda,p)$ (respectively $(\Gamma,q)$) is bi-Lipschitz equivalent to $(\Lambda',\varphi_{\Lambda}(p))$ (respectively $(\Gamma',\varphi_{\Gamma}(q))$) as germs. Now, since the linear generic projections $\Lambda\subset\field{C}^n\rightarrow \Lambda'\subset\field{C}^2$ and $\Gamma\subset\field{C}^n\rightarrow \Gamma'\subset\field{C}^2$ are local bi-Lipschitz homeomorphisms, the bi-Lipschitz homeomorphism between $\Lambda'$ and $\Gamma'$ induces a natural bijection $\varphi'\colon S_1(\Lambda')\rightarrow S_1(\Gamma')$ such that $(\Lambda',p)$ is bi-Lipschitz equivalent to $(\Gamma',\varphi'(p))$ as germs, $\forall p\in S_1(\Lambda')$. Finally, via the composite mapping $\varphi_{\Gamma}^{-1}\circ\varphi'\circ\varphi_{\Lambda}$, we conclude that $\Lambda$ and $\Gamma$ satisfy item 1) of Remark \ref{remark}.
\end{proof}
\end{document}
\begin{document}
\maketitle
\thispagestyle{empty}
\pagestyle{empty}
\begin{abstract}
We consider the problem of finding a $d$-dimensional spectral density through a moment problem which is characterized by an integer parameter $\nu$. Previous results showed that there exists an approximate solution under the regularity condition $\nu\geq d/2+1$. To realize the process corresponding to such a spectral density, one would take $\nu$ as small as possible. In this letter we show that this condition can be weakened to $\nu\geq d/2$.
\end{abstract}
\section{Introduction}
Multidimensional stationary processes (or stationary random fields) represent a fundamental tool in many applications of signal and image processing. For instance, the data collected from an automotive radar system can be modeled by a multidimensional process of dimension $d=3$, see \cite{engels2014target,ZFKZ2019fusion}. In those applications we have to estimate the multidimensional spectral density of the process. This task can be addressed by means of a moment problem, more precisely, a \emph{convex optimization problem subject to moment constraints}.
In the unidimensional case $(d=1)$ a wide range of spectral estimation paradigms based on moment problems have been proposed, see for instance \cite{BGL-98,FPR-08}: in this case the moments correspond to some covariance lags of the process and in the simplest setup the optimal spectrum maximizes the entropy rate. The appealing property of these paradigms is that the optimal spectrum is rational and thus leading to a finite-dimensional linear stochastic system (called ``shaping filter'' in the literature of signal processing) after spectral factorization.
In the case where the moments include both covariance lags and cepstral coefficients (i.e., logarithmic moments), then it is possible to characterize only an approximate solution to the moment problem: the spectrum maximizing the entropy rate matches the covariance lags and approximately the cepstral coefficients \cite{enqvist2004aconvex}. Such a solution is obtained by considering a \emph{regularized} version of the dual optimization problem.
These paradigms have been extended also to the multidimensional case, see e.g., \cite{RKL-16multidimensional,ringh2018multidimensional}. Although spectral factorization is not always possible in the multidimensional setting, rationality still seems to be a key ingredient toward a finite-dimensional realization theory \cite{geronimo2006factorization,geronimo2004positive}. The main issue is, however, that the solution of the moment problem is not necessarily a spectral density, but rather a \emph{spectral measure} that may contain a singular part \cite{KLR-16multidimensional}. In particular, if the moments are both the covariance lags and the cepstral coefficients, the existence of an approximate rational solution is only guaranteed when the dimension is $d\leq 2$ \cite{KLR-16multidimensional}.
In order to overcome this limitation on the dimension $d$, we have proposed a new moment problem, hereafter called \emph{$\nu$-moment problem}, in which the entropy rate has been replaced by a more general definition of entropy, called \emph{$\nu$-entropy}, whose derivation comes from the $\alpha$-divergence \cite{Z-14rat}. The definition of cepstral coefficients has been generalized accordingly.
The $\nu$-moment problem is characterized by the integer parameter $\nu$. In \cite{Zhu-Zorzi2023cepstral,Zhu-Zorzi2022ceps_est} we have shown that for any $d>2$ there exists an approximate rational solution to the $\nu$-moment problem under the regularity condition $\nu\geq d/2+1$.
On the other hand, if the solution admits a spectral factorization, then the estimated process can be realized through a cascade of $\nu$ identical linear filters. Therefore, the larger $\nu$ is, the larger the complexity is in order to realize such a process. Accordingly, from a practical perspective one would take $\nu$ as small as possible.
The aim of the present letter is to show that the regularity condition can be weakened as $\nu\geq d/2$ and thus it is possible to estimate a $d$-dimensional process with $\nu=d/2$ whose realization, if admissible, is simpler than the one obtained using the theory in \cite{Zhu-Zorzi2023cepstral}. Such a result is achieved through a new regularization technique. Moreover, the technical proof of the existence of such a solution takes a different route from the one in \cite{Zhu-Zorzi2023cepstral}.
The outline of this letter is as follows. In Section \ref{sec:intr} we introduce the multidimensional $\nu$-moment problem and the regularization term for the dual problem, while in Section \ref{sec:dual} we characterize the regularized dual problem. In Section
\ref{sec:weaker_cond} we prove the existence of an approximate solution for the $\nu$-moment problem for $\nu\geq d/2$. Some numerical examples are provided in Section \ref{sec:nume}. Finally, in Section \ref{sec:concl} we draw the conclusions.
\section{Problem formulation} \label{sec:intr}
Consider a $d$-dimensional real\footnote{We choose to present the theory for real random fields for simplicity. After suitable adaptation, complex random fields can be handled as well.} stationary random field $\{y(\mathbf t) : \mathbf t\in\mathbb Z^d\}$ with zero mean and a spectral density $\Phi(\boldsymbol{\theta})$. The latter is a nonnegative function on the $d$-dimensional frequency domain $\mathbb T^d:=(0,2\pi]^d$ and $\boldsymbol{\theta}=(\theta_1, \dots, \theta_d)\in\mathbb T^d$ is a frequency vector.
In the sequel, we shall use the notation $\Phi(e^{i \boldsymbol{\theta}})$ which is common in Complex Analysis. Here $e^{i\boldsymbol{\theta}}$ is a shorthand for the vector $(e^{i\theta_1}, \dots, e^{i\theta_d})$ representing a point on the $d$-torus which is isomorphic to $\mathbb T^d$.
The $\nu$-entropy of the random field $y$ with the integer parameter $\nu>1$ is defined as
\begin{align}\label{nu_entropy_formula}
\mathbb H_\nu(\Phi) := \frac{\nu^2}{\nu-1} \left(\int_{\mathbb T^d}\Phi^{\frac{\nu-1}{\nu}}\mathrm{d}\mu-1\right)
\end{align}
where
$\mathrm{d}\mu=\frac{1}{(2\pi)^d}\prod_{j=1}^{d}\mathrm{d}\theta_j$
is the normalized Lebesgue measure on $\mathbb T^d$. It is worth noting that $\mathbb H_\nu$ is a more general definition of entropy. Indeed, for the case $\nu= 1$ (which is understood in a suitable limit sense), we obtain the usual entropy rate $\mathbb H_1(\Phi) = \int_{\mathbb T^d}\log \Phi \,\mathrm{d}\mu$ \cite{Zhu-Zorzi2023cepstral}. In this letter, we will consider the following multidimensional $\nu$-moment problem
\begin{subequations}\label{primal_prob}
\begin{alignat}{2}
& \underset{ \Phi\geq 0}{\max}
& \quad & \mathbb H_\nu(\Phi) \label{nu-entropy} \\
&\hspace{0.4cm} \text{s.t.}
& \quad & c_{\mathbf k}=\int_{\mathbb T^d}e^{i\innerprod{\mathbf k}{\boldsymbol{\theta}}}\Phi \,\mathrm{d}\mu \;\ \forall \mathbf k\in\Lambda, \label{constraint_cov} \\
&
& \quad & m_{\mathbf k}= \frac{\nu}{\nu-1}\int_{\mathbb T^d}e^{i\innerprod{\mathbf k}{\boldsymbol{\theta}}}\Phi^{\frac{\nu-1}{\nu}} \,\mathrm{d}\mu\;\ \forall \mathbf k\in\Lambda_0. \label{constraint_cep}
\end{alignat}
\end{subequations}
where $\innerprod{\mathbf k}{\boldsymbol{\theta}}=\sum_{j=1}^{d} k_j\theta_j$ is the standard inner product in $\mathbb R^d$ and the multidimensional exponential function is understood as $e^{i\innerprod{\mathbf k}{\boldsymbol{\theta}}} = \prod_{j=1}^d e^{ik_j\theta_j}$. In the moment constraints we have:
\begin{itemize}
\item $\mathbf c=\{c_{\mathbf k}\,:\, \mathbf k\in\Lambda\}$ which is a covariance multisequence of the random field $y$, namely $c_{\mathbf k}=\E[y(\mathbf t+\mathbf k) y(\mathbf t)]$ where $\E$ denotes the expectation operator. The finite index set $\Lambda\subset\mathbb Z^d$ contains $\mathbf 0$ and is symmetric with respect to the origin, i.e., $\mathbf k\in\Lambda$ implies $-\mathbf k\in\Lambda$.
It is well known that the covariances are the Fourier coefficients of the spectral density $\Phi$, which is the meaning of \eqref{constraint_cov}.
\item $\mathbf m=\{m_{\mathbf k}\, :\, \mathbf k\in\Lambda_0\}$ which is a multisequence of generalized cepstral coefficients, called \emph{$\nu$-cepstral coefficients}, associated to the same random field $y$, see \cite{Zhu-Zorzi2023cepstral}. It is a generalization of the classic logarithmic moments used in \cite{enqvist2004aconvex,RKL-16multidimensional}. Notice that the notion of $\nu$-cepstral coefficients is consistent with the objective functional \eqref{nu_entropy_formula} employed in \eqref{primal_prob}. The index set $\Lambda_0:=\Lambda\backslash\{\mathbf 0\}$ is such that $m_{\mathbf 0}$ is excluded (for technical reasons).
\end{itemize}
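To make the two moment maps concrete, the sketch below evaluates the covariance and $\nu$-cepstral integrals \eqref{constraint_cov}--\eqref{constraint_cep} by Riemann sums on $\mathbb T^2$ for a hypothetical separable spectral density (our choice for illustration, not taken from the letter), whose covariances have the closed form $c_{(k_1,k_2)}=a^{|k_1|}(1-a^2)^{-1}\,b^{|k_2|}(1-b^2)^{-1}$:

```python
import numpy as np

# Illustration of the moment maps on T^2 (d = 2) for a hypothetical
# separable AR(1) x AR(1) spectral density, chosen only because its
# covariances have a closed form.
a, b, nu = 0.5, 0.3, 2
N = 512                                   # grid points per axis
th = 2 * np.pi * np.arange(N) / N
T1, T2 = np.meshgrid(th, th, indexing="ij")
Phi = 1.0 / (np.abs(1 - a * np.exp(1j * T1)) ** 2
             * np.abs(1 - b * np.exp(1j * T2)) ** 2)

def moment(F, k):
    """Riemann-sum approximation of int_{T^2} e^{i<k,theta>} F dmu."""
    ker = np.exp(1j * (k[0] * T1 + k[1] * T2))
    return np.real(np.mean(ker * F))      # mean == integral w.r.t. dmu

# covariance constraint: compare with the closed form
k = (1, 2)
c_num = moment(Phi, k)
c_exact = a ** abs(k[0]) / (1 - a ** 2) * b ** abs(k[1]) / (1 - b ** 2)
assert abs(c_num - c_exact) < 1e-8

# nu-cepstral coefficient (no closed form used here)
m_k = nu / (nu - 1) * moment(Phi ** ((nu - 1) / nu), k)
```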
The $\nu$-moment problem \eqref{primal_prob}, which is also referred to as the primal optimization problem, can be used to perform spectral estimation. Assume that a dataset generated by $y$ has been collected. Then, it is possible to compute the sample estimates $\hat c_{\mathbf k}$ and $\hat m_{\mathbf k}$ of $c_{\mathbf k}$ and $m_{\mathbf k}$, respectively. The estimate of $\Phi$ is then the solution of \eqref{primal_prob} in which $c_{\mathbf k}$ and $m_{\mathbf k}$ are replaced by $\hat c_{\mathbf k}$ and $\hat m_{\mathbf k}$.
In \cite{Zhu-Zorzi2023cepstral}, we derived the dual optimization problem and showed that, \emph{if the dual problem admits an interior-point solution}, the optimal spectral density (primal variable) is a rational function of the form
\begin{equation}\label{Phi_nu}
\Phi_\nu=(P/Q)^\nu
\end{equation}
where $P$ and $Q$ are positive\footnote{By positive we mean that $P>0$ for any point on the $d$-torus. In contrast, if $P\geq0$ and $P=0$ for certain points, we say that $P$ is \emph{nonnegative}.} trigonometric polynomials associated with the Lagrange multipliers.
However, it is highly nontrivial to prove that the optimal dual variable lies in the interior of the feasible set. In order to overcome this difficulty, we introduced a regularization term in the dual objective function which depends solely on $P$:
\begin{equation}
\frac{\lambda}{\nu-1} \int_{\mathbb T^d}\frac{1}{P^{\nu-1}}\mathrm{d}\mu
\end{equation}
where $\lambda>0$ is a regularization parameter. Such a regularizer can of course be interpreted as a \emph{barrier function} (for $P$) since under the regularity condition
\begin{equation}\label{old_cond_nu}
\nu\geq d/2+1,
\end{equation}
the regularizer takes an infinite value if $P$ has a zero on the $d$-torus. Hence, the optimal $P$ is forced to be an interior point, i.e., a positive polynomial. Obviously, one can always choose $\nu$ such that the condition \eqref{old_cond_nu} is met, and indeed in this case, we showed that the optimal $Q$ is also positive so that the optimal form \eqref{Phi_nu} is true for the primal problem \eqref{primal_prob}. The price to pay for the regularization is that the $\nu$-cepstral constraints \eqref{constraint_cep} are only approximately satisfied with an error that decreases as $\lambda\to 0_+$, i.e., the rational function \eqref{Phi_nu} represents an approximate solution to \eqref{primal_prob}. It is worth noting that a process with the spectrum (\ref{Phi_nu}) can be generated by the cascade of $\nu$ identical multidimensional filters (see Fig.~\ref{fig:cascade_linear_system}) if $P/Q$ admits a spectral factor corresponding to a state space realization. Clearly, the larger $\nu$ is, the larger the complexity of the realization is. Accordingly, the key point from a practical perspective is to have the possibility to take $\nu$ as small as possible.
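Both the condition \eqref{old_cond_nu} and the weaker threshold targeted below admit a simple local reading, sketched here under the generic assumption (ours, not a computation from the letter) that a nonnegative $P$ vanishing on the $d$-torus does so quadratically:

```latex
% Near a zero \theta_0 of a nonnegative trigonometric polynomial P one
% generically has P(\theta) \asymp \|\theta-\theta_0\|^2, hence for p>0
\int_{\mathbb T^d}\frac{\mathrm{d}\mu}{P^{p}}
  \;\gtrsim\; \int_{\|\theta-\theta_0\|<\varepsilon}
  \frac{\mathrm{d}\theta}{\|\theta-\theta_0\|^{2p}}=\infty
  \quad\Longleftrightarrow\quad 2p\ge d .
% With p=\nu-1 the barrier diverges iff \nu\ge d/2+1, i.e. \eqref{old_cond_nu};
% with p=\nu (the regularizer proposed below) it diverges iff \nu\ge d/2.
```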
\begin{figure}
\caption{A $d$-dimensional cascade linear stochastic system with two identical subsystems, where $\mathbf z=(z_1, z_2, \dots, z_d)$.}
\label{fig:cascade_linear_system}
\end{figure}
Having in mind this important requirement, in this letter we want to face the following problem.
\begin{problem}
Take the barrier function in the dual problem as
\begin{equation}\label{regularizer_new}
\frac{\lambda}{\nu-1} \int_{\mathbb T^d}\frac{1}{P^{\nu}}\mathrm{d}\mu
\end{equation}
and show that the corresponding approximate solution of the primal problem exists under the weaker condition $\nu\geq d /2$.
\end{problem}
\begin{example}
\begin{figure}
\caption{Target parameter estimation problem: T is the source generating the pulse-train signals, R is the receiver, $\alpha$ is the azimuth angle and $r$ is the range.}
\label{fig:car}
\end{figure}
Consider an automotive radar system installed in the red car of Fig.~\ref{fig:car} that employs coherent linear frequency-modulated pulse-train signals (T) and uses a uniform linear array of receive antennas for the measurement (R). The target (green car in Fig.~\ref{fig:car}) is identified by the range $r$, the azimuth angle $\alpha$ and the relative velocity $v$. The problem of estimating the target parameters can be formulated as a multidimensional spectral estimation problem with $d=3$ \cite{zhu2020m,engels2014target}. Under the aforementioned hypothesis on the existence of a ``realizable'' spectral factor, the estimated model can be realized by a cascade of at least $3$ identical filters using the theory developed in \cite{Zhu-Zorzi2023cepstral}, while only $2$ are needed using the theory developed in this letter, which makes the implementation of the model for simulation purposes more efficient.
\end{example}
\section{Dual problem} \label{sec:dual}
Let us recall the mild condition $\nu\geq 2$ which is assumed throughout this letter.
In \cite[Section~4]{Zhu-Zorzi2023cepstral}, it has been shown that the dual function of the primal problem \eqref{primal_prob} is
\begin{equation}\label{unreg_dual_func}
J_{\nu}(P,Q) := \frac{1}{\nu-1} \int_{\mathbb T^d} \frac{P^\nu}{Q^{\nu-1}} \mathrm{d}\mu + \innerprod{\mathbf q}{\mathbf c} - \innerprod{\mathbf p}{\mathbf m},
\end{equation}
where:
\begin{itemize}
\item $\mathbf q=\{q_{\mathbf k}\,:\,\mathbf k\in\Lambda\}$ contains the real coefficients of the nonnegative trigonometric polynomial
\begin{equation}
Q(e^{i\boldsymbol{\theta}}):=\sum_{\mathbf k\in\Lambda} q_{\mathbf k}e^{-i\innerprod{\mathbf k}{\boldsymbol{\theta}}}
\end{equation}
such that $q_{-\mathbf k}=q_{\mathbf k}$;
\item similarly, $\mathbf p=\{p_{\mathbf k}\,:\,\mathbf k\in\Lambda_0\}$ contains the real coefficients of the nonnegative polynomial
\begin{equation}
P(e^{i\boldsymbol{\theta}})=\sum_{\mathbf k\in\Lambda} p_{\mathbf k}e^{-i\innerprod{\mathbf k}{\boldsymbol{\theta}}}
\end{equation}
with $p_{-\mathbf k}=p_{\mathbf k}$ and $p_{\mathbf 0}=1$ \emph{fixed};
\item $\innerprod{\mathbf q}{\mathbf c}:=\sum_{\mathbf k\in\Lambda} q_{\mathbf k} c_{\mathbf k}$ is the inner product of two multisequences indexed in $\Lambda$, and $\innerprod{\mathbf p}{\mathbf m}$ is understood similarly with the index set replaced by $\Lambda_0$.
\end{itemize}
Since it is rather difficult to prove that the dual problem admits an interior-point solution, see \cite{Zhu-Zorzi2023cepstral}, we consider the regularized dual function
\begin{subequations}\label{reg_dual_func}
\begin{align}
& J_{\nu,\lambda}(P,Q) :=
J_{\nu}(P, Q) + \frac{\lambda}{\nu-1} \int_{\mathbb T^d}\frac{1}{P^\nu}\mathrm{d}\mu \label{reg_dual_func_with_barrier}\\
&\quad = \frac{1}{\nu-1} \int_{\mathbb T^d} g(P(e^{i\boldsymbol{\theta}}), Q(e^{i\boldsymbol{\theta}})) \mathrm{d}\mu + \innerprod{\mathbf q}{\mathbf c} - \innerprod{\mathbf p}{\mathbf m} \label{reg_dual_func_with_g}
\end{align}
\end{subequations}
where the bivariate function is defined as
\begin{equation}\label{func_g}
g(x,y) := x^\nu/y^{\nu-1} + \lambda/x^{\nu}
\end{equation}
with $x>0,\ y>0$ and the regularization parameter $\lambda>0$ is fixed. Now let us introduce the feasible sets
\begin{equation}
\begin{aligned}
\mathfrak{P}_{+} & := \left\{Q(e^{i\boldsymbol{\theta}})=\sum_{\mathbf k\in\Lambda} q_{\mathbf k}e^{-i\innerprod{\mathbf k}{\boldsymbol{\theta}}} : Q>0 \ \text{on}\ \mathbb T^d \right\}, \\
\mathfrak{P}_{+,o} & := \left\{P(e^{i\boldsymbol{\theta}})=\sum_{\mathbf k\in\Lambda} p_{\mathbf k}e^{-i\innerprod{\mathbf k}{\boldsymbol{\theta}}} \in \mathfrak{P}_{+} : p_{\mathbf 0}=1 \right\},\nonumber
\end{aligned}
\end{equation}
so that $Q\in \mathfrak{P}_{+}$ and $P\in \mathfrak{P}_{+,o}$.
The domain of definition of $J_{\nu,\lambda}$ can be extended to the boundary of the feasible set $\mathfrak{P}_{+,o}\times \mathfrak{P}_{+}$ by excluding the zero sets of $P$ and $Q$ from the domain of integration, which does not change the values of the integrals since the zero sets have zero Lebesgue measure. Moreover, $J_{\nu,\lambda}$ may take a value of $\infty$ at some boundary points, and hence it is understood as an \emph{extended real-valued} function.
\begin{lemma}\label{lem_strict_convex_g}
The function $g$ in \eqref{func_g} is strictly convex in the domain $x>0,\ y>0$ $($the first quadrant$)$.
\end{lemma}
\begin{proof}
We shall prove the claim via the derivative test. After some straightforward computations, we arrive at
\begin{subequations}
\begin{align}
\nabla g(x,y) = & x^{\nu-1} y^{-\nu} \left[ \begin{matrix} \nu y \\ (1-\nu) x\end{matrix} \right] + \left[ \begin{matrix} -\lambda\nu x^{-\nu-1} \\ 0 \end{matrix} \right], \label{gradient_g} \\
\nabla^2 g(x,y) = & \nu(\nu-1) x^{\nu-2} y^{-\nu-1} \left[ \begin{matrix} y^2 & -xy \\ -xy & x^2 \end{matrix} \right] \nonumber \\
& + \left[ \begin{matrix} \lambda\nu(\nu+1) x^{-\nu-2} & 0 \\
0 & 0 \end{matrix} \right], \label{Hessian_g}
\end{align}
\end{subequations}
where $\nu(\nu-1)$ is a positive integer since $\nu\geq 2$.
It is readily observed that every diagonal element in the Hessian of $g$ is positive, and the first matrix in \eqref{Hessian_g} is positive semidefinite. After checking the determinant of the Hessian, we conclude that $\nabla^2 g(x,y)$ is positive definite in the first quadrant and the strict convexity follows.
\end{proof}
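As a quick sanity check of Lemma~\ref{lem_strict_convex_g}, one can evaluate the closed-form Hessian \eqref{Hessian_g} at sample points and verify that its eigenvalues are strictly positive; the snippet below (our own check, with arbitrarily chosen $\nu$ and $\lambda$) does exactly that.

```python
import numpy as np

# Our own check with arbitrarily chosen nu and lambda (any nu >= 2, lambda > 0).
nu, lam = 3, 0.5

def hessian_g(x, y):
    # closed-form Hessian of g(x, y) = x^nu / y^(nu-1) + lam / x^nu
    rank_one = nu * (nu - 1) * x ** (nu - 2) * y ** (-nu - 1) * \
        np.array([[y ** 2, -x * y], [-x * y, x ** 2]])
    barrier = np.array([[lam * nu * (nu + 1) * x ** (-nu - 2), 0.0], [0.0, 0.0]])
    return rank_one + barrier

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.uniform(0.5, 5.0, size=2)
    eigvals = np.linalg.eigvalsh(hessian_g(x, y))
    assert np.all(eigvals > 0)   # strictly positive eigenvalues: positive definite
```

The rank-one first term is only positive semidefinite; it is the barrier contribution in the $(1,1)$ entry that makes the determinant, and hence both eigenvalues, strictly positive.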
The next proposition is a direct consequence of Lemma~\ref{lem_strict_convex_g}.
\begin{proposition}\label{prop_convex}
The regularized dual function $J_{\nu,\lambda}$ is strictly convex in the closed set $\overline{\mathfrak{P}}_{+,o}\times\overline{\mathfrak{P}}_+$.
\end{proposition}
\begin{proof}
We only need to show the strict convexity of the integral term in \eqref{reg_dual_func_with_g} since the inner products are linear in $(P, Q)$. In the interior of the feasible set, namely $P\in\mathfrak{P}_{+,o}$ and $Q\in\mathfrak{P}_{+}$, the strict convexity of the integral term follows from that of $g$ (see Lemma~\ref{lem_strict_convex_g}), and this can be seen by a pointwise argument on the integrand (see e.g., Proposition~5.3 in \cite{zhu2020m}). The same reasoning works if $P$ or $Q$ is on the boundary of the respective feasible set because, once the zero sets of $P$ and $Q$ are excluded from the domain of integration, the function $g(P(e^{i\boldsymbol{\theta}}), Q(e^{i\boldsymbol{\theta}}))$ is well defined and the proof holds verbatim.
\end{proof}
We conclude that the regularized dual optimization can be formulated as
\begin{equation}\label{reg_dual_prob}
\min\ J_{\nu, \lambda}(P, Q)\quad \text{s.t.}\ P\in \overline{\mathfrak{P}}_{+,o},\ Q\in \overline{\mathfrak{P}}_{+}.
\end{equation}
\section{A unique interior-point solution under the condition $\nu\geq {d}/{2}$}\label{sec:weaker_cond}
In this section we shall see how the condition $\nu\geq d/2$ guarantees that the regularized dual problem (\ref{reg_dual_prob}) has an interior-point solution. By Proposition~\ref{prop_convex}, we know that a minimizer of $J_{\nu, \lambda}$ in $\overline{\mathfrak{P}}_{+,o}\times\overline{\mathfrak{P}}_+$ is unique \emph{provided that it exists}. We shall first establish such existence. Then, under the regularity condition $\nu\geq d/2$, which is weaker than \eqref{old_cond_nu}, we exclude the possibility that a minimum of $J_{\nu,\lambda}$ falls on the boundary of the feasible set; this is a weaker version of Lemma~5.8 in \cite{Zhu-Zorzi2023cepstral}, obtained via a Byrnes--Gusev--Lindquist-type argument that first appeared in \cite{BGL-98} and subsequently in e.g., \cite{RKL-16multidimensional}.
\subsection{Existence of a minimizer in $\overline{\mathfrak{P}}_{+,o}\times\overline{\mathfrak{P}}_+$}\label{subsec:exist}
The existence of a solution to \eqref{reg_dual_prob} can be shown via reasonings similar to the ones in \cite{Zhu-Zorzi2023cepstral}: such existence depends on the following feasibility assumption.
\begin{assumption}[Feasibility]\label{assump_feasible}
The given covariances $\{c_{\mathbf k}\}_{\mathbf k\in\Lambda}$ admit an integral representation
\begin{equation}
c_{\mathbf k} = \int_{\mathbb T^d} e^{i\innerprod{\mathbf k}{\boldsymbol{\theta}}} \Phi_0 \mathrm{d}\mu\quad \forall \ \mathbf k\in\Lambda,
\end{equation}
where $\Phi_0$ is a nonnegative function on $\mathbb T^d$ and is positive on some open ball $B_1\subset\mathbb T^d$.
\end{assumption}
In order to prove our existence result, we need the following lemmas.
\begin{lemma}\label{lem_lower_semicont}
The unregularized dual function $J_\nu$ in \eqref{unreg_dual_func} and the regularized version $J_{\nu,\lambda}$ in \eqref{reg_dual_func} are lower-semicontinuous on $\overline{\mathfrak{P}}_{+,o}\times\overline{\mathfrak{P}}_+$. In particular, they are both continuous on $\mathfrak{P}_{+,o}\times\mathfrak{P}_+$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~5.5 in \cite{Zhu-Zorzi2023cepstral}. The only difference here is that the power of $P$ in the regularizer in \eqref{reg_dual_func_with_barrier} is $\nu$ instead of $\nu-1$ in \cite{Zhu-Zorzi2023cepstral}.
\end{proof}
\begin{lemma}\label{lem_unbounded}
Suppose that Assumption~\ref{assump_feasible} holds. If a sequence $\{(P_j,Q_j)\}_{j\geq1}\subset\overline{\mathfrak{P}}_{+,o}\times \overline{\mathfrak{P}}_+$ is such that $\|(P_j,Q_j)\|\to\infty$ as $j\to\infty$, then $J_{\nu,\lambda}(P_j,Q_j)\to\infty$.
\end{lemma}
\begin{proof}
See the proof of Lemma~5.6 in \cite{Zhu-Zorzi2023cepstral} which uses Assumption~\ref{assump_feasible}.
\end{proof}
\begin{proposition}\label{prop_exist}
Under Assumption~\ref{assump_feasible}, the regularized dual optimization problem \eqref{reg_dual_prob} admits a solution.
\end{proposition}
\begin{proof}
Take a sufficiently large real number $\beta$, so that the sublevel set of the regularized dual function
\begin{equation}\label{sublevel_set_reg}
J_{\nu,\lambda}^{-1}(-\infty,\beta] := \{ (P,Q)\in \overline{\mathfrak{P}}_{+,o}\times \overline{\mathfrak{P}}_+ \,:\, J_{\nu,\lambda}(P,Q)\leq\beta \}
\end{equation}
is not empty. Then Lemma~\ref{lem_lower_semicont} implies that the sublevel set is closed, and Lemma~\ref{lem_unbounded} implies that it is bounded.
Obviously the polynomial pair $(P,Q)$, parametrized by its coefficients, belongs to a finite-dimensional vector space. It follows that the sublevel set is compact.
Given the lower-semicontinuity of the objective function $J_{\nu,\lambda}$ (see Lemma~\ref{lem_lower_semicont}), a minimizer exists in $J_{\nu,\lambda}^{-1}(-\infty,\beta]$ by the extreme value theorem of Weierstrass.
\end{proof}
\subsection{Non-optimality of boundary points given $\nu\geq d/2$}\label{subsec:boundary}
In this subsection we prove that the optimal solution of (\ref{reg_dual_prob}) cannot belong to the boundary of $\overline{\mathfrak{P}}_{+,o}\times \overline{\mathfrak{P}}_{+}$ using arguments which conceptually differ from the ones used for the case $\nu\geq d/2+1$ in \cite{Zhu-Zorzi2023cepstral}. Notice first that if $P\in\partial\mathfrak{P}_{+,o}$, where $\partial$ denotes the boundary of a set, then the regularization term \eqref{regularizer_new}
employed in \eqref{reg_dual_func_with_barrier} takes a value of $\infty$ under the condition $\nu\geq d/2$, see Proposition~A.4 in \cite{zhu2020m}. The other term, namely the unregularized function $J_\nu(P,Q)$ whose expression is given in \eqref{unreg_dual_func}, is bounded from below under Assumption~\ref{assump_feasible}, see the proof of \cite[Lemma~5.6]{Zhu-Zorzi2023cepstral}. Therefore, in this case we have $J_{\nu,\lambda}(P,Q)=\infty$ which is certainly not a minimum. Consequently, an optimal $(P, Q)$ must have $P\in\mathfrak{P}_{+,o}$.
Next, we work on the case of $(P,Q)\in\mathfrak{P}_{+,o}\times\partial\mathfrak{P}_{+}$. It is still possible that the integral term in \eqref{unreg_dual_func} diverges so that $J_{\nu,\lambda}(P, Q)=\infty$, and such a point is obviously not a minimizer. Therefore, we only need to consider points $(P, Q)\in\mathfrak{P}_{+,o}\times\partial\mathfrak{P}_{+}$ such that the function value $J_{\nu,\lambda}(P,Q)$ is \emph{finite}. Construct the real-valued function
\begin{equation}\label{func_f}
f(t) := J_{\nu,\lambda}(P, Q+t\mathbf 1)
\end{equation}
defined for $t\geq 0$, where $\mathbf 1$ denotes the constant polynomial taking value one.
One can show without difficulty that $f$ ``inherits'' the strict convexity from $J_{\nu,\lambda}$ (see Proposition~\ref{prop_convex}). It is worth noting that the continuity of $f(t)$ can be extended to $t=0_+$ which corresponds to the boundary point $(P, Q)$, in contrast with the general result that $J_{\nu,\lambda}$ is only lower-semicontinuous on the boundary of the feasible set (see Lemma~\ref{lem_lower_semicont}). These properties are established in the next statement.
\begin{lemma}\label{lem_bdry_cont}
The function $f(t)$ in \eqref{func_f}, where $(P,Q)$ is an arbitrarily fixed point in $\mathfrak{P}_{+,o}\times\partial\mathfrak{P}_{+}$ such that $J_{\nu,\lambda}(P,Q)<\infty$, is strictly convex and continuous in $[0, \infty)$.
\end{lemma}
\begin{proof}
The strict convexity of $f$ follows directly from Proposition~\ref{prop_convex}, and it remains to show the continuity.
For $t>0$, the argument $(P, Q+t\mathbf 1)$ belongs to the interior $\mathfrak{P}_{+,o}\times \mathfrak{P}_{+}$, and the continuity of $f$ follows from that of $J_{\nu,\lambda}$ (see Lemma~\ref{lem_lower_semicont}).
Hence, we are only concerned with the right continuity of $f$ at $t=0$.
By the lower-semicontinuity of $J_{\nu,\lambda}$ at $(P,Q)$
(Lemma~\ref{lem_lower_semicont} again), for any $\varepsilon>0$, there exists $\delta>0$ such that whenever $(P_1,Q_1)\in \overline{\mathfrak{P}}_{+,o}\times\overline{\mathfrak{P}}_+$ satisfies $\|(P_1, Q_1) - (P, Q)\|<\delta$, we have $J_{\nu,\lambda}(P_1,Q_1)>J_{\nu,\lambda}(P,Q)-\varepsilon$. We can always choose $t$ sufficiently small to make the argument on the right-hand side of \eqref{func_f} sufficiently close to $(P,Q)$, which leads to the inequality $f(t)> f(0)-\varepsilon$. At the same time, by strict convexity we have $f(t)<tf(1)+(1-t)f(0) = f(0)+t[f(1)-f(0)]$.
The last term, namely $t[f(1)-f(0)]$ can be made smaller than $\varepsilon$ (in absolute value) for $t$ sufficiently small. Therefore, we reach the inequality $|f(t)-f(0)|<\varepsilon$ which proves the continuity at $t=0_+$ since $\varepsilon>0$ is arbitrarily chosen.
\end{proof}
\begin{proposition}\label{prop_boundary}
Fix any point $(P,Q)\in\mathfrak{P}_{+,o}\times\partial\mathfrak{P}_{+}$ such that $J_{\nu,\lambda}(P,Q)<\infty$. Then the derivative of the function $f(t)$ in \eqref{func_f} satisfies $f'(t)\to -\infty$ as $t\to 0_+$. Therefore, $(P,Q)$ cannot be a minimizer of $J_{\nu,\lambda}$.
\end{proposition}
\begin{proof}
After some computations, we have for $t>0$ that
\begin{equation}\label{func_f_deriva}
f'(t) = c_{\mathbf 0} - \int_{\mathbb T^d} \left[ \frac{P}{Q+t\mathbf 1}\right]^\nu \mathrm{d}\mu.
\end{equation}
Since $P\in\mathfrak{P}_{+,o}$, we know $P_{\min}:=\min_{\boldsymbol{\theta}\in\mathbb T^d} P(e^{i\boldsymbol{\theta}})>0$.
Then the following relation
\begin{equation}
\int_{\mathbb T^d} \left[ \frac{P}{Q+t\mathbf 1}\right]^\nu \mathrm{d}\mu \geq P_{\min}^\nu \int_{\mathbb T^d} \frac{1}{(Q+t\mathbf 1)^\nu} \mathrm{d}\mu \to\infty
\end{equation}
holds as $t\to 0_+$ by Lebesgue's monotone convergence theorem \cite[p.~21]{rudin1987real} and Proposition~A.4 in \cite{zhu2020m}.
This shows that $f'(t)$ in \eqref{func_f_deriva} tends to $-\infty$ as $t$ goes to zero from the right. Taking Lemma~\ref{lem_bdry_cont} into account, we conclude that $0$ is not a local minimizer of $f(t)$. Consequently, any boundary point $(P,Q)\in\mathfrak{P}_{+,o}\times\partial\mathfrak{P}_{+}$ is not a minimizer of $J_{\nu,\lambda}$, because taking an arbitrarily small step along the direction $(\mathbf 0,\mathbf 1)$, which points towards the interior $\mathfrak{P}_{+,o}\times\mathfrak{P}_{+}$, will result in a decrease of the objective function value.
\end{proof}
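The divergence $f'(t)\to-\infty$ can also be observed numerically. The toy instance below (our own, in $d=1$) takes $P=1$ and the boundary polynomial $Q=1-\cos\theta$ with a zero at $\theta=0$, and evaluates the derivative \eqref{func_f_deriva} by quadrature for decreasing $t$.

```python
import numpy as np

nu, c0 = 2, 1.0
# fine grid on [0, 2*pi); midpoints avoid hitting the zero of Q exactly
theta = (np.arange(200000) + 0.5) * 2 * np.pi / 200000
P = np.ones_like(theta)          # interior point: P = 1, so p_0 = 1
Q = 1.0 - np.cos(theta)          # boundary point: Q >= 0 with a zero at theta = 0

def f_prime(t):
    # quadrature version of f'(t) = c_0 - Int (P / (Q + t))^nu dmu
    return c0 - np.mean((P / (Q + t)) ** nu)

derivs = [f_prime(t) for t in (1.0, 1e-1, 1e-2, 1e-3)]
assert derivs[0] > derivs[1] > derivs[2] > derivs[3]   # decreasing as t -> 0+
assert derivs[3] < -1e2                                # already very negative
```

Near the zero $Q+t\approx\theta^2/2+t$, so the integral grows like $t^{-3/2}$, consistent with the monotone convergence argument in the proof.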
We summarize what we have got so far in the next theorem.
\begin{theorem}\label{thm_main}
Under Assumption~\ref{assump_feasible} and the condition $\nu\geq d/2$, the optimization problem \eqref{reg_dual_prob} admits a unique interior-point solution $(\hat{P}, \hat{Q})\in\mathfrak{P}_{+,o}\times \mathfrak{P}_{+}$ such that
\begin{subequations}\label{stationarity_cond}
\begin{align}
c_{\mathbf k} & = \int_{\mathbb T^d} e^{i\innerprod{\mathbf k}{\boldsymbol{\theta}}} ({\hat P}/{\hat Q})^{\nu}\mathrm{d}\mu \; \; \forall\mathbf k\in\Lambda, \label{part_Q} \\
m_{\mathbf k} & = \int_{\mathbb T^d} e^{i\innerprod{\mathbf k}{\boldsymbol{\theta}}} \frac{\nu}{\nu-1}\left[({\hat P}/{\hat Q})^{\nu-1}-{\lambda}/{\hat P^{\nu+1}} \right] \mathrm{d}\mu \; \; \forall\mathbf k\in\Lambda_0. \label{part_P}
\end{align}
\end{subequations}
In plain words, the spectral density $\hat{\Phi}_\nu=(\hat{P}/\hat{Q})^\nu$ achieves covariance matching and approximate $\nu$-cepstral matching with errors
\begin{equation*}
\varepsilon_{\mathbf k} = \frac{\lambda\nu}{\nu-1}\int_{\mathbb T^d} e^{i\innerprod{\mathbf k}{\boldsymbol{\theta}}} \frac{1}{\hat P^{\nu+1}} \mathrm{d}\mu,\ \mathbf k\in\Lambda_0.
\end{equation*}
\end{theorem}
\begin{proof}
The existence of a solution is guaranteed by Proposition~\ref{prop_exist} and the uniqueness by Proposition~\ref{prop_convex}. Moreover, given the reasoning at the beginning of this subsection and Proposition~\ref{prop_boundary}, the optimal $(\hat{P}, \hat{Q})$ must be an interior point, i.e., both polynomials are positive. As a consequence, it must satisfy the stationary-point equation $\nabla J_{\nu,\lambda}(P, Q) = \mathbf 0$, which is equivalent to the conditions in \eqref{stationarity_cond}.
Indeed, this point can be seen by setting the first differential of the regularized dual function
\begin{equation}\label{first_diff_reg_dual_func}
\begin{aligned}
& \delta J_{\nu,\lambda}(P, Q; \delta P, \delta Q) = \innerprod{\mathbf c}{\delta\mathbf q} - \int_{\mathbb T^d} \delta Q\, (P/Q)^\nu \mathrm{d}\mu \\
& \quad - \innerprod{\mathbf m}{\delta\mathbf p} + \frac{\nu}{\nu-1} \int_{\mathbb T^d} \delta P\left[ (P/Q)^{\nu-1} - \lambda/P^{\nu+1}\right] \mathrm{d}\mu
\end{aligned}
\end{equation}
equal to zero for any direction $(\delta P, \delta Q)$.
\end{proof}
\begin{remark}\label{remark1}
It is important to observe that the continuous dependence of the solution $(\hat P, \hat Q)$ to \eqref{reg_dual_prob} on the covariance and $\nu$-cepstral data $(\muathbf c, \muathbf m)$ can be established similarly to \cite[Sec.~6]{Zhu-Zorzi2023cepstral}, so that the optimization problem \eqref{reg_dual_prob} is in fact well-posed in the sense of Hadamard.
\end{remark}
\section{Numerical simulations}\label{sec:nume}
In this section we present some numerical experiments in which the problem \eqref{primal_prob} is used to reconstruct the spectrum of a $3$-d stationary random field $y(t_1, t_2, t_3)$ starting from a finite set of its covariance lags and $\nu$-cepstral coefficients. We assume that the underlying process $y$ is described as the output of a linear shaping filter $W(z_1, z_2, z_3)$, realized as a cascade of $\nu$ identical subfilters, with a white noise input $e(t_1, t_2, t_3)$; see Fig.~\ref{fig:cascade_linear_system} with $\nu=2$ and $d=3$. We shall also assume that the transfer function $W$ has a structure consistent with our optimal form \eqref{Phi_nu} for the spectrum. More precisely, we consider the class of rational models such that the numerator and denominator polynomials both have degree one:
\begin{equation}\label{transfer_func_W}
W(\mathbf z) = \left[ \frac{b(\mathbf z)}{a(\mathbf z)} \right]^\nu = \left[ \frac{b_0 - b_1z_1^{-1} - b_2z_2^{-1} - b_3z_3^{-1}}{a_0 - a_1z_1^{-1} - a_2z_2^{-1} - a_3z_3^{-1}} \right]^\nu
\end{equation}
where $\mathbf z$ represents $(z_1,z_2,z_3)$ for short. Obviously, the polynomials $a(\mathbf z)$ and $b(\mathbf z)$ are described by the respective vectors
$\mathbf a=[a_0,\dots,a_3]$ and $\mathbf b=[b_0,\dots,b_3]$ of coefficients.
If the white noise input $e$ has unit variance, then the spectral density of the output process $y$ is $(P/Q)^\nu$ where
$P(e^{i\boldsymbol{\theta}}) = |b(e^{i\boldsymbol{\theta}})|^2$ and $Q(e^{i\boldsymbol{\theta}}) = |a(e^{i\boldsymbol{\theta}})|^2$
are two nonnegative polynomials in $\overline{\mathfrak{P}}_+$. Throughout this section, we set the parameter $\nu=2$ to meet the condition $\nu\geq {d}/{2}=3/2$.
It is worth stressing that in this scenario the theory developed in \cite{Zhu-Zorzi2023cepstral} does not apply: that theory requires $\nu\geq 3$ to guarantee the existence of an approximate solution to the primal problem \eqref{primal_prob}.
In what follows we consider two models of the form \eqref{transfer_func_W}. We take two sets of \emph{real} parameters $(\mathbf a_j, \mathbf b_j)$ with $j=1, 2$ such that
\begin{equation}
\begin{aligned}
\mathbf a_1 = \mathbf a_2 =\mathbf a & = [1, 0.3, 0.3, 0.3], \\
\tilde{\mathbf b}_1 = [1, -0.2, -0.3, -0.4], \quad & \tilde{\mathbf b}_2 = [1, -0.2, -0.3, -0.5],
\end{aligned}
\end{equation}
and $\mathbf b_j=\tilde{\mathbf b}_j/\|\tilde{\mathbf b}_j\|$. This normalization gives $\|\mathbf b_j\|^2=1$, which is equivalent to $p_{\mathbf 0}=1$ for the numerator polynomial $P$. The first model, hereafter called ``zeroless model'', corresponds to the process with spectrum $\Phi_1=(P_1/Q)^2$ where the polynomials $Q=|a|^2$ and $P_1=|b_1|^2$ are positive on $\mathbb T^3$; the second model, hereafter called ``model with a spectral zero'', corresponds to the process with spectrum $\Phi_2=(P_2/Q)^2$ where $P_2=|b_2|^2$ (and hence also the spectrum) has a zero at the frequency vector $(\pi,\pi,\pi)$. The index set $\Lambda$ is identified as $\Lambda = \Lambda_+ - \Lambda_+$ with
\begin{equation}
\Lambda_+ := \{\, (0,0,0),\, (1,0,0),\, (0,1,0),\, (0,0,1) \,\}.
\end{equation}
Here the set difference is understood as $A-B:=\{x-y : x\in A,\ y\in B\}$.
Since $q_{-\mathbf k}=q_{\mathbf k}$ and $p_{-\mathbf k}=p_{\mathbf k}$, the total number of variables is $13$.
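Two facts stated above are easy to verify numerically: the polynomial $P_2=|b_2|^2$ indeed vanishes at $(\pi,\pi,\pi)$, and the index set $\Lambda=\Lambda_+-\Lambda_+$ indeed contains $13$ elements. A short check (our own script, not part of the experiments):

```python
import numpy as np

b2_tilde = np.array([1.0, -0.2, -0.3, -0.5])
b2 = b2_tilde / np.linalg.norm(b2_tilde)   # normalization giving p_0 = 1

def b_poly(b, z_inv):
    # b(z) = b0 - b1*z1^{-1} - b2*z2^{-1} - b3*z3^{-1}
    return b[0] - np.dot(b[1:], z_inv)

# At the frequency vector (pi, pi, pi) every z_k^{-1} equals -1:
z_inv_at_pi = -np.ones(3)
assert abs(b_poly(b2, z_inv_at_pi)) < 1e-12   # hence P_2 = |b_2|^2 vanishes there

Lambda_plus = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
Lambda = {tuple(np.subtract(x, y)) for x in Lambda_plus for y in Lambda_plus}
assert len(Lambda) == 13                      # matches the variable count above
```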
By means of the discrete formulation described in Section~7 of \cite{Zhu-Zorzi2023cepstral}, we consider the following procedure to test the ability to reconstruct the spectra through the solution of \eqref{primal_prob} for the previous two models:
\begin{enumerate}
\item Fix $N=20$ and discretize $\mathbb T^3$ into $N^3$ regular grid points by gridding the interval $[0, 2\pi]$ into $N$ equidistant points in each dimension.
\item Compute the ``true'' covariances $\{c_{\mathbf k} : \mathbf k\in\Lambda\}$ and the $\nu$-cepstral coefficients $\{m_{\mathbf k} : \mathbf k\in\Lambda_0\}$ of $\Phi=|W|^2$ via \eqref{constraint_cov} and \eqref{constraint_cep}, where $\mathrm{d}\mu$ is replaced by a discrete measure with equal mass $1/N^3$ on the grid points.
\item Solve the discrete version of the regularized dual problem \eqref{reg_dual_prob} using $(\mathbf c, \mathbf m)$ computed above and $\lambda>0$ chosen sufficiently small.
\item Let $(\hat{\mathbf p},\hat{\mathbf q})$ be the optimal solution to \eqref{reg_dual_prob}
and $(\mathbf p,\mathbf q)$ be the polynomial coefficients corresponding to the true spectrum $\Phi=|W|^2$. Finally, evaluate the reconstruction error $\|(\hat{\mathbf p},\hat{\mathbf q}) - (\mathbf p,\mathbf q)\|$.
\end{enumerate}
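Steps 1 and 2 can be sketched as follows (our own implementation of the discrete measure for the zeroless model; the actual experiments additionally require the $\nu$-cepstral coefficients and the Newton solver of Step 3):

```python
import numpy as np

N, nu = 20, 2
theta = 2 * np.pi * np.arange(N) / N
T1, T2, T3 = np.meshgrid(theta, theta, theta, indexing="ij")

a = np.array([1.0, 0.3, 0.3, 0.3])
b = np.array([1.0, -0.2, -0.3, -0.4])
b = b / np.linalg.norm(b)                 # so that p_0 = 1

def trig_poly_sq(coef):
    # |c0 - c1 e^{-i t1} - c2 e^{-i t2} - c3 e^{-i t3}|^2 on the grid
    val = coef[0] - coef[1] * np.exp(-1j * T1) \
                  - coef[2] * np.exp(-1j * T2) \
                  - coef[3] * np.exp(-1j * T3)
    return np.abs(val) ** 2

Phi = (trig_poly_sq(b) / trig_poly_sq(a)) ** nu   # zeroless model spectrum

def covariance(k):
    # c_k = Int e^{i<k, theta>} Phi dmu with equal mass 1/N^3 per grid point
    phase = np.exp(1j * (k[0] * T1 + k[1] * T2 + k[2] * T3))
    return np.mean(phase * Phi)

c0 = covariance((0, 0, 0))
assert abs(c0.imag) < 1e-10 and c0.real > 0   # c_0 is the (real) variance
```

Since $\Phi$ is real, the computed covariances satisfy $c_{-\mathbf k}=\overline{c_{\mathbf k}}$, as they must.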
In Step 3, the optimization problem is solved using Newton's method. Some computational details can be found in \cite{Zhu-Zorzi2023cepstral}, and suitable modifications on the gradient and Hessian of the objective function are needed for the current problem \eqref{reg_dual_prob}.
The left panel of Fig.~\ref{fig:secx_vs_lambda} shows the reconstruction errors for the two models with different values of the regularization parameter $\lambda= 10^{-n}$, $n=0, 2, 4, 6, 8, 10$. It is readily observed in both cases that the errors decrease monotonically as $\lambda\to 0$.
In view of Remark~\ref{remark1}, suppose that we have a dataset generated by a $d$-dimensional process and consider the approximate solution to the problem (\ref{primal_prob}) with $\nu=\lceil {d/2}\rceil$, where $\mathbf c$ and $\mathbf m$ are replaced by their sample estimators computed from the data. Then the resulting spectral estimator is characterized by a small estimation error, provided that $\lambda$ is chosen sufficiently small and the underlying process has a spectrum of the form (\ref{Phi_nu}) with $\nu=\lceil {d/2}\rceil$, in agreement with our parameter specification.
In the right panel of Fig.~\ref{fig:secx_vs_lambda}, we compare the spectral density of the model with a spectral zero and the reconstructed spectra for values of $\lambda=10^{-10}, 10^{-8}, 10^{-6}, 10^{-4}$ (corresponding to the orange line in the left panel) along a cross section $[\,\cdot \;11\;11\,]$ of the regular grid for $\mathbb T^3$, where the true spectral zero is located at the index\footnote{In Matlab, the array indices start with $1$.} $[11, 11, 11]$.
Notice that the other two cross sections $[\,11\;\cdot \;11\,]$ and $[\,11\;11\;\cdot \,]$ of the spectral densities are not shown because they are visually similar to this figure. We conclude that as $\lambda$ goes to zero, the true nonnegative spectrum is well approximated by a positive spectral density with smaller and smaller errors, which is consistent with the left panel.
\begin{figure}
\caption{\emph{Left:} reconstruction errors for the two models as functions of the regularization parameter $\lambda$. \emph{Right:} true and reconstructed spectral densities of the model with a spectral zero along the cross section $[\,\cdot \;11\;11\,]$.}
\label{fig:secx_vs_lambda}
\end{figure}
\section{Conclusions}\label{sec:concl}
We have considered a multidimensional $\nu$-moment problem which seeks a spectral density maximizing the $\nu$-entropy and matching a finite set of covariance lags and $\nu$-cepstral coefficients. A previous work showed that it is possible to guarantee the existence of a rational approximate solution only in the case $\nu\geq d/2+1$, where $d$ is the dimension of the random field described by the multidimensional spectrum. Motivated by the fact that $\nu$ should be chosen as small as possible, we have proposed a different regularization technique which ensures the existence of a rational approximate solution to the $\nu$-moment problem under the weaker condition $\nu\geq d/2$.
\end{document} |
\begin{document}
\title{The Hierarchical Spectral Merger algorithm: A New Time Series Clustering
Procedure}
\author{Carolina Eu\'an \footnote{Centro de Investigaci\'on en
Matem\'aticas, Guanajuato, Gto, M\'exico.} \footnotemark,
Hernando Ombao \footnote{Department of Statistics, University of
California, Irvine, USA.}
\addtocounter{footnote}{-2}\addtocounter{Hfootnote}{-2}\footnote{UC Irvine
Space-Time Modeling Group.}, Joaqu\'in Ortega
\addtocounter{footnote}{-2}\addtocounter{Hfootnote}{-2}\footnotemark }
\date{}
\maketitle
\begin{abstract}
We present a new method for time series clustering which we call the
Hierarchical Spectral Merger (HSM) method. This
procedure is based on the spectral theory of time series and identifies series
that
share similar oscillations or waveforms.
The extent of similarity between a pair of time series is measured using
the total variation distance between their estimated spectral densities.
At each step of the algorithm, when two clusters merge, a new spectral density is estimated using all the information present in both clusters; this density is representative of all the series in the new cluster.
The method is implemented in an R package \textit{HSMClust}. We present two
applications of the HSM method, one to data coming from
wave-height measurements in oceanography and the other to electroencephalogram
(EEG) data.
\end{abstract}
\noindent {\bf Keywords:} Hierarchical Spectral Merger Clustering, Time Series
Clustering, Hierarchical Clustering, Total Variation Distance, Time Series,
Spectral Analysis.
\section{Introduction}
The subject of time series clustering is an active research area with
applications in many fields. Finding similarity between time series frequently
plays a central role in many applications. In fact, time series clustering
problems arise in a natural way in a
wide variety of fields, including economics, finance, medicine, ecology,
environmental studies, engineering, and many others. A recent work
by \cite{Krafty16} uses a clustering model to perform a discriminant
analysis of stochastic cepstra. Time series clustering
is, in general, not an easy task due to the potential complexity of the data
and the difficulty of defining an
adequate notion of similarity between time series.
In \cite{Liao05}, and subsequently in \cite{Caiado15}, three approaches
to time series clustering are distinguished: methods based on the comparison
of raw data, methods based on parameters from models fitted to the data, and
feature-based methods where the similarity between time series is
measured through features extracted from the data.
The first approach compares raw data directly and may not be
computationally scalable for long time series. The second
approach, based on model parameters, is one of the most frequently used.
However, it requires a parametric model and may therefore suffer from
model misspecification.
\cite{Vilar14} present an R package (TSclust) for time series
clustering with a wide variety of alternative procedures.
Here, we consider the problem of clustering stationary time series and our
proposal is based on using the spectral density as the central feature for
classification purposes. To build a clustering method one needs to measure the
similarity between the spectral densities. Our method uses the total
variation (TV)
distance as a measure of dissimilarity, which was proposed in
\cite{Alvarez15}.
This distance requires the use of normalized spectral densities, which is
equivalent to standardizing the time series so that their variances are equal
to
one. Thus, it is the distribution of the power across different frequencies of
the time series that is considered the fundamental feature for clustering
purposes rather than the magnitude of the oscillations.
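On a discrete frequency grid, the dissimilarity just described reduces to a few lines of code. The sketch below is our own minimal implementation (the HSMClust package may differ in details such as the spectral estimator used):

```python
import numpy as np

def tv_distance(f1, f2):
    # total variation distance between normalized spectral densities
    p1 = f1 / f1.sum()   # normalization = standardizing the series variance
    p2 = f2 / f2.sum()
    return 0.5 * np.abs(p1 - p2).sum()

freqs = np.linspace(0, np.pi, 1000)
f_a = np.exp(-0.5 * ((freqs - 0.5) / 0.1) ** 2)   # power concentrated near 0.5 rad
f_b = np.exp(-0.5 * ((freqs - 2.0) / 0.1) ** 2)   # power concentrated near 2.0 rad

assert tv_distance(f_a, f_a) < 1e-12              # identical spectra: distance 0
d_ab = tv_distance(f_a, f_b)
assert 0.0 < d_ab <= 1.0                          # TV distance lies in [0, 1]
# Rescaling a series (changing its variance) leaves the distance unchanged:
assert abs(tv_distance(3.0 * f_a, f_b) - d_ab) < 1e-12
```

The last assertion illustrates the point made above: only the distribution of power across frequencies matters, not the magnitude of the oscillations.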
In \cite{Alvarez15}, the TV distance was used to build a dissimilarity matrix
consisting of the distances between all the spectral densities, which was then
fed into a classical hierarchical agglomerative algorithm with the complete and
average linkage functions. The method introduced in this work, which we call
the Hierarchical Spectral Merger (HSM) method, is a new clustering
algorithm that takes advantage of the spectral theory of time series.
The key difference with classical hierarchical agglomerative clustering is
that every time a cluster is updated, a new representative (new estimated
spectral density) is computed.
Each time two clusters are joined in the HSM procedure, the
information available in all the series belonging to both clusters is merged to
produce a new estimate of the common spectral density for the new cluster.
The proposed approach is appealing because it is intuitive and the updated
spectral estimates are
smoother, less noisy and hence give better estimates of the TV distance.
Thus, every time two clusters merge, the dissimilarity matrix shrinks by one
row and one column. In contrast, for the classical hierarchical agglomerative
algorithm the dissimilarity matrix remains the same throughout the procedure,
and the distances between clusters at each step are calculated as linear
combinations of the individual distances; the linear combination used depends
on the linkage function that is chosen (single, complete, average, Ward, etc.).
These methods are based on geometrical ideas which, in some cases, may not be
meaningful for clustering time series, since a linear combination of the
individual distances may not have a clear meaning in terms of the spectral
densities.
We will present two applications of the HSM method: one to data coming from
wave-height measurements in oceanography and the other to electroencephalogram
(EEG) data. Some of the numerical experiments described in Section 3 are
related
to these applications.
The rest of the paper is organized as follows: Section \ref{sec2} describes the
Hierarchical Spectral Merger procedure. Section 3 presents numerical
experiments that compare the HSM method with some existing algorithms and
considers the problem of deciding how many clusters there are in a given
sample. Finally, Section 4 gives some examples of the use of the HSM
algorithm. The paper ends with a discussion of the results and some ideas for
future work.
\section{HSM: A Method for Time Series Clustering}\label{sec2}
Our goal is to develop a method that finds groups or clusters that represent
spectrally synchronized time series. The algorithm we introduce, called the
Hierarchical Spectral Merger (HSM) method, uses the TV distance as a
dissimilarity measure within a new clustering procedure. This
algorithm is a modification of the usual agglomerative hierarchical
procedure that takes advantage of the spectral point of view in the analysis of
time series.
The first question when building a clustering algorithm is how to measure the
dissimilarity between the objects one is considering. In our case, this amounts
to measuring the dissimilarity between the spectral densities estimated from
the original time series, for which we use the TV distance between the
normalized spectra.
In what follows, Section \ref{sec2-1} presents the TV distance, while
Section \ref{sec2-2} describes the HSM method in detail.
\subsection{Total Variation (TV) Distance}\label{sec2-1}
In general, the TV distance can be defined for any pair of probability measures
defined on the same $\sigma$-algebra of sets. We focus here on the case
where these probability measures are defined on the real line and have density
functions with respect to the Lebesgue measure.
The TV distance between two probability densities, $f$ and $g$, is defined as
\begin{eqnarray}\label{TVd_1}
d_{TV}(f,g)=1-\int\min\{f(\omega),g(\omega)\}\, \mbox{d}\omega.
\end{eqnarray}
Equation (\ref{TVd_1}) suggests a graphical interpretation of the TV distance.
If $f$ and $g$ are probability density functions and $d_{TV}(f,g) = 1-\delta$,
then $\int\min\{f(\omega),g(\omega)\}\, \mbox{d}\omega = \delta$, which means
that the common area below the two densities is equal to $\delta$; this
corresponds to the shaded area in Figure \ref{Fig1}. As the area shared by the
two densities increases, the TV distance decreases.
\begin{figure}
\caption{The total variation distance measures the similarity between two
densities. The shaded area represents the common area below the two densities,
which we denote as $\delta$.}
\label{Fig1}
\end{figure}
Since spectral densities are not probability densities, they have to be
normalized, which is done by dividing the estimated spectral density
$\widetilde{f}$ by the sample variance $\widehat\gamma(0)$, since
$\widehat{\gamma}(0) = \int \widetilde{f}(\omega)\, \mbox{d}\omega$. Thus, we
write $\widehat{f}(\omega)=\widetilde{f}(\omega)/\widehat\gamma(0)$ for the
normalized estimated spectral density.
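To make this concrete, the normalization above and the distance in Eq.~(\ref{TVd_1}) can be sketched as follows (a minimal Python sketch with illustrative Gaussian-shaped spectra; the function names are ours, and a Riemann sum on a uniform grid replaces the integral):

```python
import numpy as np

def normalize_spectrum(f_tilde, omega):
    """Divide a spectral estimate by its integral (an estimate of the
    variance), so that it integrates to one like a probability density."""
    domega = omega[1] - omega[0]          # uniform grid spacing
    return f_tilde / (f_tilde.sum() * domega)

def tv_distance(f, g, omega):
    """Total variation distance 1 - integral of min(f, g): a Riemann-sum
    version of the definition, for densities on a common uniform grid."""
    domega = omega[1] - omega[0]
    return 1.0 - np.minimum(f, g).sum() * domega

# Two normalized Gaussian-shaped "spectra" on [0, pi]
omega = np.linspace(0.0, np.pi, 2000)
f = normalize_spectrum(np.exp(-0.5 * ((omega - 1.0) / 0.2) ** 2), omega)
g = normalize_spectrum(np.exp(-0.5 * ((omega - 1.2) / 0.2) ** 2), omega)
d = tv_distance(f, g, omega)  # lies in [0, 1]; identical spectra give 0
```

Identical spectra give a distance of (numerically) zero, while spectra with little overlap give values close to one.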
In comparison with other similarity measures, the TV distance has some
desirable properties. (1) The TV distance is a pseudometric: it satisfies the
symmetry condition and the triangle inequality, two reasonable properties
expected from a similarity measure. In this sense, the TV distance may be a
better choice than the Kullback-Leibler (KL) divergence (although the KL
divergence can be symmetrized). (2) The TV distance is bounded, $0 \leq
d_{TV}(f,g)\leq 1$, and can be interpreted in terms of the common area between
two densities. Having a bounded range ($[0,1]$) is a desirable feature, since
it gives a very intuitive sense to the values attained by the similarity
measure: a value near $0$ means that the densities are similar, while a value
near $1$ indicates that they are highly dissimilar. In contrast, neither the
$L^2$ distance nor the Kullback-Leibler divergence is bounded above, so they
lack this intuitive interpretation.
\subsection{The Hierarchical Spectral Merger (HSM) Algorithm}\label{sec2-2}
There are two general families of clustering algorithms: partitioning and
hierarchical. Among partitioning algorithms, K-means and K-medoids are the
most commonly used. \cite{Maharaj12} proposed a K-means fuzzy clustering
method based on wavelet coefficients and compared it with other K-means
algorithms. For hierarchical clustering algorithms, the typical
examples are agglomerative with single or complete linkage
[\cite{Xu05}]. Storage and computational properties of hierarchical
clustering methods are reviewed in \cite{Fionn15}.
The hierarchical spectral merger algorithm has two versions:
the first, the \textit{single version}, updates the spectral
estimate of a cluster from the concatenation of its time series, while the
second, the \textit{average version}, updates the spectral estimate of a
cluster from a weighted average of the spectral estimates obtained from each
signal in the cluster.
\textbf{\textit{Hierarchical Spectral Merger Algorithm.}}
Let $\{X_i=(X_i(1),\ldots,X_i(T)), i=1,\ldots,N\}$ be a set of time series. The
procedure starts with $N$ clusters, each cluster being a single time series.\\
\noindent
\textbf{Step 1.} Suppose there are $k$ clusters. For each cluster,
estimate the spectral density (using some smoothing or averaging method) and
represent the cluster by a common normalized spectral
density $\widehat{f}_j(\omega)$, $j=1,\ldots,k$.\\
\textbf{Step 2.} Compute the TV distance between these $k$ spectral
densities.\\
\textbf{Step 3.} Find the two clusters that have the smallest TV distance. \\
\textbf{Step 4.} Merge the time series in these two clusters and replace the
two clusters with the newly combined single cluster.\\
\textbf{Step 5.} Repeat Steps 1--4 until only one cluster is left.
In Step 1, at the first iteration the spectral density of each time series is
estimated using the smoothed periodogram; in our case we used a lag-window
estimator with a Parzen window. In subsequent iterations, however, the way
the common spectral density is obtained depends on the version of the
algorithm. When two clusters merge, there are two options: (1) for the single
version, we concatenate the signals in both clusters and compute the smoothed
periodogram of the concatenated signal; or (2) for the average version, we
take the weighted average of the estimated spectra of the signals in the new
cluster as the new spectral estimate.
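Assuming pre-computed normalized spectra on a common uniform frequency grid, Steps 1--5 with the average-version update can be sketched as follows (a schematic Python implementation, not the authors' code; with the single version one would instead concatenate the raw series and re-estimate the spectrum at every merge):

```python
import numpy as np

def tv(f, g, domega):
    """TV distance between normalized spectra on a uniform grid."""
    return 1.0 - np.minimum(f, g).sum() * domega

def hsm_average(spectra, domega, n_clusters=1):
    """Hierarchical Spectral Merger, average version: each cluster is
    represented by the weighted average of its members' spectra."""
    clusters = [[i] for i in range(len(spectra))]
    reps = [np.asarray(s, dtype=float) for s in spectra]  # representatives
    history = []  # minimal TV distance at each merge (the "cost" of joining)
    while len(clusters) > n_clusters:
        # find the two closest cluster representatives
        pairs = [(tv(reps[i], reps[j], domega), i, j)
                 for i in range(len(reps)) for j in range(i + 1, len(reps))]
        d, i, j = min(pairs)
        history.append(d)
        # merge: weighted average of the two representatives
        ni, nj = len(clusters[i]), len(clusters[j])
        merged_rep = (ni * reps[i] + nj * reps[j]) / (ni + nj)
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        reps = [r for k, r in enumerate(reps) if k not in (i, j)] + [merged_rep]
    return clusters, history
```

The recorded `history` of minimal TV distances is what the stopping criteria discussed later inspect.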
\begin{table}
\begin{center}
\footnotesize
\begin{tabular}{l}
\hline
\textbf{Algorithm:}\\
\hline \hline \\
\begin{minipage}{\linewidth}
\begin{enumerate}\footnotesize
\item Initial clusters: $\mathbf{C}=\{C_i\}$, $C_i=X_i$, $i=1,\ldots,N$.\\
Dissimilarity matrix entry between clusters $i$ and $j$, \\
$$D_{ij}=d(C_i,C_j):=d_{TV}(\widehat{f}_i,\widehat{f}_j),$$
TV distance between the corresponding estimated normalized spectral densities
$\widehat{f}_i$ using the signals in each cluster.
\item \textbf{for} $k$ \textbf{in} $1:N-1$
\item \hspace{.3cm}$\displaystyle (i_{k},j_{k})=\arg\!\min_{ij} D_{ij};~
\mathrm{min}_k=\min_{ij} D_{ij}$ \hspace{1.2cm} $\#$Find the closest
clusters
\item \hspace{.3cm}$C_{new}=C_{i_{k}} \cup C_{j_{k}}$\hspace{4.55cm}
$\#$Join the closest clusters
\item \hspace{.3cm}$D^{new}=D\setminus \{D_{i_{k} .} \cup D_{j_{k} .} \cup
D_{.
i_{k}} \cup D_{. j_{k}}\}$ \hspace{1.05cm} $\#$Delete rows and columns
$i_{k},j_{k}$
\item \hspace{.3cm}\textbf{for} $j$ \textbf{in} $1:N-k-1$
\item \hspace{.6cm}$D_{(N-k)j}^{new} {= D_{j(N-k)}^{new}=}d_{TV}(C_{new},C_j)$
\hspace{1.3cm}
$\#$Compute new distances
\item \hspace{.3cm}\textbf{end}
\item \hspace{.3cm}$D = D^{new}; ~ \displaystyle \mathbf{C}=
\left(\mathbf{C}\setminus \{C_{i_k},C_{j_k}\} \right) \cup C_{new}$
\hspace{.9cm}
$\#$New matrix $D$ and new clusters
\item \textbf{end}
\end{enumerate}
\end{minipage}
\\
\\
\hline
\end{tabular}
\caption{Hierarchical Spectral Merger Algorithm proposed using the Total
Variation
distance and the estimated spectra.}\label{Algo}
\end{center}
\end{table}
\textit{\bf{Remarks.}} (1) The value in Step 3 represents the ``cost'' of
joining two clusters, i.e., of having $k-1$ clusters instead of $k$ clusters.
If a significantly large value is observed, then it may be reasonable to choose
$k$ clusters instead of $k-1$.
(2) Both versions of the algorithm compute the TV distance between the new and
the old clusters from updated estimated spectral densities, which is the main
difference with classical hierarchical algorithms. While a classical
hierarchical algorithm keeps a dissimilarity matrix of size $N\times N$ during
the whole procedure, the proposed method reduces this size to $(N-k) \times
(N-k)$ at the $k$-th iteration. Table \ref{Algo} summarizes the hierarchical
spectral merger algorithm.
\begin{figure}
\caption{Estimated spectra. (a) Different colors correspond to different time
series. (b) Red spectra are from realizations of the AR(2) model with activity
in the alpha (8-12 Hz) and beta (12-30 Hz) bands, and black spectra are from
realizations of the AR(2) model with activity in the alpha and gamma (30-50 Hz)
bands.}
\label{F36a}
\label{F36b}
\end{figure}
\begin{figure}
\caption{Dynamics of the hierarchical spectral merger algorithm. (a), (c), (e)
and (g)
show the clustering process for the spectra. (b), (d), (f) and (h) show the
evolution of the estimated spectra, which improves as more series are merged in
the same cluster.}
\label{F37c}
\label{F37d}
\label{F37e}
\label{F37f}
\label{F37g}
\label{F37h}
\label{F37i}
\label{F37j}
\label{F37}
\end{figure}
\textit{\bf{Illustration.}}
Consider two different AR(2) models, both with spectra concentrated at 10
Hertz; however, one also has power at 21 Hertz while the other has power at 40
Hertz. We simulated three time series from each process, 10 seconds each with
a sampling frequency of $100$ Hertz ($t=1,\ldots,1000$).
Figure \ref{F36a} shows the estimated spectra for each series, and Figure
\ref{F36b} shows by different colors (red and black) which series belongs to
the first or the second process. If we only look at the spectra, it is hard to
recognize the number of clusters and their memberships. The method might have
difficulty in clustering some cases, such as the red and purple spectra.
The step-by-step result of the HSM method is shown in Figure \ref{F37}. We
start with six clusters; at the first iteration we find the closest spectra,
represented in Figure \ref{F37c} with the same color (red). After the first
iteration we merge these time series and get $5$ estimated spectra, one per
cluster; Figure \ref{F37d} shows the estimated spectra, where the new cluster
is represented by the dashed red curve. We can follow the procedure in Figures
\ref{F37e}, \ref{F37f}, \ref{F37g} and \ref{F37h}. In the end, the proposed
clustering algorithm reaches the correct solution: Figures \ref{F37i} and
\ref{F36b} coincide. Moreover, the estimated spectra for the two clusters,
Figure \ref{F37j}, are better than any of the initial spectra, and we can
identify the dominant frequency bands for each cluster.
\section{Numerical Experiments}
We now investigate the finite-sample performance of the HSM clustering
algorithm. First, we explain the spectrum-based simulation methods that we
will use in some of the experiments. Then, we present the results of the
experiments, assuming that the number of clusters is known. Finally, we
explore the case of an unknown number of clusters and possible criteria for
choosing it.
\textit{Simulation based on a parametric family of spectral densities.}
\begin{figure}
\caption{JONSWAP spectrum with different parameters.}
\label{JonsSpec}
\end{figure}
There exist several parametric families of spectral densities in frequent use
in oceanography, which have an interpretation in terms of the behavior of sea
waves (\cite{Ochi98}). Motivated by the applications, we will simulate time
series (Gaussian processes) using spectra from one of these families. This
methodology is implemented by \cite{Wafo11} in the WAFO toolbox for MATLAB.
An example of a group of parametric densities is the JONSWAP (Joint North-Sea
Wave Project) spectral family, which is given by the formula
$$
S(\omega) = \frac{g^2}{\omega^5}\exp\!\left(-\frac{5\omega_p^4}{4\omega^4}\right)
\gamma^{\exp\left(-(\omega-\omega_p)^2/(2\omega_p^2 s^2)\right)}, \qquad \omega \in
(-\pi,\pi),
$$
where $g$ is the acceleration of gravity; $s = 0.07$ if $\omega\leq
\omega_p$ and $s=0.09$ otherwise; $\omega_p=\pi/T_p$; and $\gamma =
\exp\!\left(3.484\left(1-0.1975\,(0.036-0.0056\, T_p/\sqrt{H_s})\, T_p^4/H_s^2\right)\right)$.
The parameters for the model are the significant wave height $H_s$,
which is defined as 4 times the standard deviation of the time series,
and the spectral peak period $T_p$, which is the period
corresponding to the modal frequency of the spectrum. Figure
\ref{JonsSpec} shows some examples of this family with different values for the
parameters $T_p$ and $H_s$.
\\
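The formula above can be evaluated directly. The following Python sketch implements it exactly as printed (full implementations such as the WAFO toolbox may include additional normalization factors that we omit here):

```python
import numpy as np

G = 9.81  # acceleration of gravity (m/s^2)

def jonswap(omega, Hs, Tp):
    """Evaluate the JONSWAP-type spectrum as printed in the text.
    Hs: significant wave height; Tp: spectral peak period."""
    omega = np.asarray(omega, dtype=float)
    wp = np.pi / Tp
    s = np.where(omega <= wp, 0.07, 0.09)
    gamma = np.exp(3.484 * (1 - 0.1975 * (0.036 - 0.0056 * Tp / np.sqrt(Hs))
                            * Tp**4 / Hs**2))
    shape = (G**2 / omega**5) * np.exp(-5 * wp**4 / (4 * omega**4))
    peak_enhancement = gamma ** np.exp(-(omega - wp)**2 / (2 * wp**2 * s**2))
    return shape * peak_enhancement
```

Evaluating this for the parameter pairs used later ($H_s=3$, $T_p=3.6\sqrt{H_s}$ or $4.1\sqrt{H_s}$) reproduces the kind of closely spaced spectra shown in the figures.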
\textit{Simulation based on AR(2) processes.}
Consider the second-order autoregressive AR(2) model, defined as
\begin{equation}
Z_t=\phi_1 Z_{t-1}+ \phi_2 Z_{t-2}+ \epsilon_t,
\end{equation}
where $\epsilon_t$ is a white noise process. The characteristic polynomial of
this model is $\phi(z)= 1-\phi_1 z-\phi_2 z^2$. The roots of the polynomial
equation (assumed to have magnitude larger than $1$ for causality) determine
the properties of the oscillations. If the roots, denoted $z_0^{1}$ and
$z_0^{2}$, are complex-valued, then they must be complex conjugates, i.e.,
$z_0^{1}=\overline{z_0^{2}}$. These roots have the polar representation
\begin{equation}\label{AR2}
|z_0^{1}|=|z_0^{2}|=M, \qquad \qquad \arg(z_0)= \frac{2 \pi \eta}{F_s},
\end{equation}
where $F_s$ denotes the sampling frequency (in Hertz), $M$ is the amplitude or
magnitude of the roots ($M >1$ for causality), and $\eta \in (0,F_s/2)$ is the
frequency index. The spectrum of the AR$(2)$ process with polynomial roots as
above has its peak at frequency $\eta$. The peak becomes broader as
$M \to \infty$ and narrower as $M \to 1^{+}$.
\begin{figure}
\caption{Top: Spectra for the AR(2) process with different peak frequency;
$\eta=10,21,40$.
Bottom: Realizations from the corresponding AR(2) process.}
\label{F2}
\end{figure}
Then, given $(\eta,M,F_s)$, we take
\begin{align}\label{AR2_2}
\phi_1=\frac{2 \cos(w_0)}{M}\qquad \mbox{and}\qquad
\phi_2=-\left(\frac{1}{M^2}\right),
\end{align}
where $w_0=\frac{2\pi\eta}{F_s}$. If one computes the roots of the
characteristic polynomial with the coefficients in \eqref{AR2_2}, they satisfy
\eqref{AR2}.
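The mapping $(\eta, M, F_s) \mapsto (\phi_1, \phi_2)$ in \eqref{AR2_2}, together with a direct simulation of the recursion, can be sketched as follows (a minimal Python sketch; the helper names and the burn-in length are ours):

```python
import numpy as np

def ar2_coefficients(eta, M, Fs):
    """Coefficients (phi1, phi2) whose characteristic roots have
    modulus M and argument 2*pi*eta/Fs (peak frequency eta, in Hertz)."""
    w0 = 2 * np.pi * eta / Fs
    return 2 * np.cos(w0) / M, -1.0 / M**2

def simulate_ar2(eta, M, Fs, T, seed=0):
    """Simulate T observations of Z_t = phi1 Z_{t-1} + phi2 Z_{t-2} + eps_t."""
    phi1, phi2 = ar2_coefficients(eta, M, Fs)
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T + 100)   # 100 burn-in samples, discarded below
    z = np.zeros(T + 100)
    for t in range(2, T + 100):
        z[t] = phi1 * z[t - 1] + phi2 * z[t - 2] + eps[t]
    return z[100:]
```

One can check numerically that the roots of the characteristic polynomial built from these coefficients have modulus $M$, as stated in \eqref{AR2}.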
To illustrate the type of oscillatory patterns that can be observed in time
series from processes with the corresponding spectra, we plot in Figure
\ref{F2} the spectral densities (top) for different values of $\eta$, with
$M=1.01$ and $F_s=1000$ Hertz, and examples of generated time series (bottom).
Larger values of $\eta$ give rise to faster oscillations (higher frequencies)
of the signal.
\subsection{Experimental Design}
We consider two different experiments. The first is motivated by
applications in oceanography, where the differences between spectra could be
produced by a small change in the modal frequency. The second experiment was
designed to test whether the algorithms are able to distinguish between
unimodal and bimodal spectra. For all the experiments, the lengths of the time
series were $T=500, 1000,$ and $2000$.
\begin{itemize}
\item \textbf{Experiment 1} is based on two different JONSWAP spectra
(i.e., two clusters). The spectral densities considered have significant wave
height $H_s$ equal to $3$; the first has a peak period $T_p$ of
$3.6\sqrt{H_s}$, while for the second $T_p= 4.1\sqrt{H_s}$. Figure \ref{SE1}
exhibits the JONSWAP spectra, showing that the curves are close to each other.
Five series from each spectrum were simulated, and $N=500$ replicates of this
experiment were made. In this case the sampling frequency was set to 1.28
Hertz, a common value for wave data recorded by sea buoys. This experiment was
carried out in \cite{Alvarez15} to compare several clustering procedures.
\item \textbf{Experiment 2} is based on the AR$(2)$ process. Let $Z_t^j$,
$j=1,2,3$, be AR(2) processes with $M_j=1.1$ for all $j$ and
peak frequencies $\eta_j =0.1, 0.13, 0.16$ for $j=1,2,3$, respectively.
$Z_t^{j}$ represents a latent signal oscillating at a pre-defined band. Define
the observed time series as a mixture of these latent AR$(2)$ processes:
\begin{equation}\label{Sim1}
\begin{pmatrix}
X_t^1\\
X_t^2\\
\vdots\\
X_t^K\\
\end{pmatrix}_{K \times 1}
=
\begin{pmatrix}
\boldsymbol{e}_1^T\\
\boldsymbol{e}_2^T\\
\vdots\\
\boldsymbol{e}_K^T\\
\end{pmatrix}_{K \times 3}
\begin{pmatrix}
Z_t^1 \\
Z_t^2 \\
Z_t^3\\
\end{pmatrix}_{3 \times 1}
+
\begin{pmatrix}
\varepsilon_t^1\\
\varepsilon_t^2\\
\vdots\\
\varepsilon_t^K\\
\end{pmatrix}_{K \times 1}
\end{equation}
where $\varepsilon_t^j$ is Gaussian white noise, $X_t^j$ is a signal with
oscillatory behavior generated by the linear combination $\boldsymbol{e}_j^T
{\bf Z}_t$, and $K$ is the number of clusters. In this experiment, we set
$K=3$, with $\boldsymbol{e}_1^T=(1,0,0)$, $\boldsymbol{e}_2^T=(0,1,0)$ and
$\boldsymbol{e}_3^T=(0,1,1)$. We simulate five replicates of each signal
$X_t^{j}$, so that we have three clusters with five members each. Figure
\ref{SE2} plots the three different spectra. For this experiment, $N=1000$
replicates were made, and the sampling frequency was set to 1 Hertz.
\end{itemize}
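The mixing step in \eqref{Sim1} amounts to a matrix product plus noise; a minimal Python sketch (the function name and the unit noise level are our choices) is:

```python
import numpy as np

def mixture_signals(Z, E, noise_sd=1.0, seed=1):
    """Observed series X_t = E Z_t + eps_t, as in the mixing equation above:
    Z has shape (3, T) (latent AR(2) signals), E has shape (K, 3)."""
    rng = np.random.default_rng(seed)
    return E @ Z + noise_sd * rng.standard_normal((E.shape[0], Z.shape[1]))

# Mixing rows of Experiment 2: the third cluster mixes the 2nd and 3rd latents
E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
```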
\begin{figure}
\caption{Spectra used in the simulation study to compare the HSM method with
other similarity measures. Each spectrum, with different color and line type,
corresponds to a cluster.}
\label{SE1}
\label{SE2}
\label{SpecExp}
\end{figure}
\subsection{Comparative Study}\label{CS}
To compare clustering results, we must take into account the ``quality of the
clustering'' produced, which depends on both the similarity measure and the
clustering algorithm used.
The HSM method has two main features: The use of the TV distance as a
similarity
measure and the hierarchical spectral merger algorithm.
The HSM method will be compared with the usual hierarchical agglomerative
clustering algorithm using the complete linkage function, which is one of the
standard clustering procedures used in the literature.
\cite{Vilar1} proposed two simulation tests to compare the
performance of several dissimilarity measures for time series clustering. We
compare the HSM method with competitors that are based on the spectral density
and performed well in P\'ertega D\'iaz and Vilar's experiments. In addition, we
also considered the distance based on the cepstral coefficients, which was
used in \cite{Maharaj11}, and the symmetric version of the Kullback-Leibler
divergence, which was used in a hierarchical clustering algorithm in
\cite{ShumStof}.
Let $ I_X(\omega_k) = T^{-1} \Big|\sum_{t=1}^T X_t e^{-i\omega_k
t}\Big|^2 $ be the periodogram for time series $X$, at frequencies
$\omega_k=2\pi k/T, \ k=1, \dots, n$ with $n=[(T-1)/2]$, and $NI_X$
be the normalized periodogram, i.e. $NI_X(\omega_k) =
I_X(\omega_k)/\hat\gamma_0^X $, with $\hat\gamma_0^X$
the sample variance of time series $X$ (notice that $\hat\gamma_0^X=
\sum_{k=-(n-1)}^n I_X(\omega_k)$). The estimator of the
spectral density $ \widehat{f}_{X}$ is
the smoothed periodogram using a Parzen window with bandwidth equal to $100/T$,
normalized by dividing by $\hat\gamma_0^X$.
The dissimilarity criteria in the
frequency domain considered were:
\begin{itemize}
\item The Euclidean distance between the normalized estimated spectra:
$
d_{NP}(X,Y) = \frac{1}{n}\Big( \sum_k \big( \widehat{f}_{X}(\omega_k)-
\widehat{f}_{Y}(\omega_k)\big)^2\Big)^{1/2}.
$
\item The Euclidean distance between the logarithm of the normalized estimated
spectra:
$
d_{LNP}(X,Y) = \frac{1}{n}\Big( \sum_k \big( \log \widehat{f}_{X}(\omega_k)-
\log \widehat{f}_{Y}(\omega_k)\big)^2\Big)^{1/2}.
$
\item The square of the Euclidean distance between the cepstral coefficients:
$
d_{CEP}(X,Y) = \sum_{k=1}^p \big( \theta_k^X-\theta_k^Y\big)^2,
$
where $\theta_0=\int_0^1 \log I(\lambda)\, \mbox{d}\lambda$ and
$\theta_k=\int_0^1 \log I(\lambda) \cos(2 \pi k \lambda )\,
\mbox{d}\lambda$.
\item The symmetric Kullback-Leibler distance between the normalized estimated
spectra:
$
d_{KL}(X,Y)= \int \widehat{f}_{X} (\omega)
\log\left(\frac{\widehat{f}_X(\omega)}{\widehat{f}_Y(\omega)}\right)
\mbox{d}\omega, \quad
d_{SKL}(X,Y)=d_{KL}(X,Y)+d_{KL}(Y,X).
$
\end{itemize}
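Two of these criteria are straightforward to compute once normalized spectra are available at the Fourier frequencies. The following Python sketch illustrates $d_{NP}$ and $d_{SKL}$ (the function names are ours, and a discrete sum stands in for the Kullback-Leibler integral):

```python
import numpy as np

def d_np(fx, fy):
    """Euclidean distance between normalized spectra at the n Fourier
    frequencies, with the 1/n factor of the definition above."""
    n = len(fx)
    return np.sqrt(np.sum((fx - fy) ** 2)) / n

def d_skl(fx, fy):
    """Symmetrized Kullback-Leibler divergence between normalized spectra
    (spectra must be strictly positive on the grid)."""
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return kl(fx, fy) + kl(fy, fx)
```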
We also added the clustering method proposed by \cite{Alvarez15}, which uses
the TV distance in a hierarchical clustering algorithm. All these
dissimilarity measures were compared with the HSM method using
normalized estimated spectra.
To evaluate the rate of success, we considered the following index, which has
already been used to compare different clustering procedures [\cite{Vilar1},
\cite{Gav00}]. Let $\{C_1,\ldots,C_g\}$ and $\{G_1,\ldots,G_k\}$ be the sets
of the $g$ true groups and of a $k$-cluster solution, respectively.
Then, $ \displaystyle \mbox{Sim}(G,C)=\frac{1}{g}\sum_{i=1}^{g} \max_{1\leq
j\leq k} \mbox{Sim}(G_j,C_i),$
where $ \displaystyle \mbox{Sim}(G_j,C_i)=\frac{2|G_j \cap C_i|}{|G_j|+|C_i|}$.
Note that this similarity measure returns 0 if the two clusterings are
completely dissimilar and 1 if they are identical.
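The rate-of-success index is simple to compute. The following Python sketch (function name ours) uses the Dice-type overlap $2|G_j \cap C_i|/(|G_j|+|C_i|)$ between each true group and its best-matching cluster, which keeps the index in $[0,1]$:

```python
def sim_index(true_groups, clusters):
    """Average, over the true groups, of the best Dice overlap
    2|G ∩ C| / (|G| + |C|) achieved by any cluster in the solution."""
    def dice(a, b):
        a, b = set(a), set(b)
        return 2 * len(a & b) / (len(a) + len(b))
    return sum(max(dice(g, c) for g in clusters)
               for g_true in [true_groups] for c in []) if False else \
        sum(max(dice(c, g) for g in clusters) for c in true_groups) / len(true_groups)
```

Identical partitions score 1, while a clustering that splits every true group in half scores 0.5.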
In the comparative study, each simulation setting was replicated $N$ times, and
the rate of success was computed for each replicate. The mean values of this
index are shown in Tables \ref{Exp1Jons} and \ref{Exp2AR2}, and boxplots of the
values obtained are shown in Figures \ref{BPExp1Jons} and \ref{BPExp2AR}. The
simulation settings were the same in all cases, and the clustering algorithm is
hierarchical with the complete linkage function (similar results are obtained
with the average linkage). For the HSM method, we write HSM1 for the single
version and HSM2 for the average version. The clustering method of
\cite{Alvarez15} is denoted by TV.
\begin{table}\footnotesize
\centering
\begin{tabular}{cccccccc}
Experiment 1 & \\
\hline \hline\\
$T$ & NP& LNP& CEP& TV& SKL& HSM1& HSM2\\
\hline \\
500& 0.979& 0.772& 0.597& 0.988& 0.994& 0.989& 0.988\\
1000& 0.998& 0.851& 0.825& 0.999& 0.999 &0.999& 0.999\\
2000& 1& 0.932& 0.908& 1& 1& 1& 1\\
\end{tabular}
\caption{Mean values of the similarity index obtained using different distances
and the two proposed methods in Experiment 1. The number of replicates is
$N=500$.}\label{Exp1Jons}
\end{table}
\begin{figure}
\caption{Boxplots of the rate of success for the replicates under the
simulation setting of Experiment 1, by using different
distances.}
\label{BPExp1Jons}
\end{figure}
\begin{table}\scriptsize
\centering
\begin{tabular}{cccccccc}
Experiment 2 & \\
\hline \hline\\
$T$ & NP& LNP& CEP& TV& SKL& HSM1& HSM2\\
\hline \\
500& 0.864& 0.949& 0.895& 0.930& 0.952& 0.836& 0.838\\
1000& 0.961& 0.996& 0.974& 0.990& 0.994& 0.983& 0.983\\
2000& 0.995& 1& 0.999& 0.999& 0.999& 0.999& 0.999\\
\\
\end{tabular}
\caption{Mean values of the similarity index obtained using different distances
and the two proposed methods in Experiment 2. The number of replicates is
$N=1000$.}\label{Exp2AR2}
\end{table}
The boxplots for \textbf{Experiment 1} show that the CEP distance yields many
values smaller than $0.9$, even for $T=2000$. In \textbf{Experiment 2}, the
HSM method did not perform well for short time series; $T=1000$ is needed for
the HSM method to identify the clusters more precisely. The NP distance has
the worst overall performance.
\begin{figure}
\caption{Boxplots of the rate of success for the replicates under the
simulation setting of Experiment 2, using different distances.}
\label{BPExp2AR}
\end{figure}
In general, the rates of success for the HSM methods are very close to one. In
some cases the HSM method has the best results, and when it is
not the case, the rates obtained by the HSM method are close to the best. The
methods based on logarithms, such as
the LNP and CEP, have in some cases a good performance but in others their
performance is very poor. Compared to the symmetric Kullback-Leibler distance,
there is no clear winner but the method
proposed here still has the advantage of
being easily interpretable because the KL (or symmetric KL) cannot indicate if
the dissimilarity value
is large since it belongs to the range $[0, \infty)$.
\subsection{When the number of clusters is unknown}
For real data the number of clusters is usually unknown, so an objective
criterion is needed to determine the optimal number of clusters. As mentioned
in Step 3 of our algorithm, the TV distance computed before joining two
clusters can be used as such a criterion.
We present two options for selecting the number of clusters: the first is
an empirical criterion, while the second is based on a bootstrap procedure.
\subsubsection{Empirical criterion}
Consider the following experimental design, similar to that of
\textbf{Experiment 2}. Let $Z_t^j$ be an AR(2) process
with $M_j=1.01$ for all $j$ and peak frequencies $\eta_j = 2, 6, 10, 21$ and
$40$
for $j=1, \ldots, 5$, respectively.
Define the observed time series to be a mixture of these latent AR$(2)$
processes.
\begin{equation}\label{Sim1a}
\begin{pmatrix}
X_t^1\\
X_t^2\\
\vdots\\
X_t^K\\
\end{pmatrix}_{K \times 1}
=
\begin{pmatrix}
\boldsymbol{e}_1^T\\
\boldsymbol{e}_2^T\\
\vdots\\
\boldsymbol{e}_K^T\\
\end{pmatrix}_{K \times 5}
\begin{pmatrix}
Z_t^1 \\
Z_t^2 \\
\vdots \\
Z_t^5\\
\end{pmatrix}_{5 \times 1}
+
\begin{pmatrix}
\varepsilon_t^1\\
\varepsilon_t^2\\
\vdots\\
\varepsilon_t^K\\
\end{pmatrix}_{K \times 1}
\end{equation}
where $\varepsilon_t^j$ is Gaussian white noise, $X_t^j$ is a signal with
oscillatory behavior generated by the linear combination $\boldsymbol{e}_j^T
{\bf Z}_t$, $K$ is the number of spectrally synchronized groups or clusters,
and $n$ denotes the number of replicates of each signal $X_t^{j}$.
\begin{figure}
\caption{Trajectory of the value of the TV distance that is achieved by the
algorithm.}
\label{F6a}
\label{F6b}
\label{F6c}
\label{F6d}
\label{F6}
\end{figure}
Figure \ref{F6} displays the graphs corresponding to the minimum values
of the TV distance between clusters as a function of the number of clusters;
Figure
\ref{F6a} corresponds to the experimental design just described;
Figure \ref{F6b} corresponds to the experimental design with the same
coefficients as the first one but $K=5$, $n=3$ and $500$ draws;
Figure \ref{F6c} corresponds to a design with $K=5$, $n=3$ and $500$ draws,
with
coefficients $\boldsymbol{e}_1^T=(1/2~ 1~ 0~ 0~ 0), \boldsymbol{e}_2^T=(0~ 1~
1/2~ 0~ 0), \boldsymbol{e}_3^T=(0~ 0~ 1/2~ 1~ 0), \boldsymbol{e}_4^T=(0~ 0~ 0~
1~ 1/2), \boldsymbol{e}_5^T=(0~ 1~ 0~ 1~ 0).$
Finally, in Figure \ref{F6d} we set $K=10$, $n=15$ and $500$ draws, but the
coefficients $\boldsymbol{e}_i$ are drawn from a Dirichlet distribution with
parameters $(.2,.2,.2,.2,.2)$. All these curves are decreasing, and the rate
of decrease slows down after the true number of clusters is reached, even
though the signals involved in each experiment are different. This ``elbow'',
which appears at the true number of clusters, can be used as an empirical
criterion to decide the number of clusters. Analogous results were obtained
with several different simulation schemes.
Similar criteria are frequently used in cross-validation methods. In this
sense, we propose this empirical criterion to determine the number of possible
clusters in real data analysis.
\subsubsection{Bootstrapping}
The second option is based on a bootstrap procedure for approximating the
distribution of the TV distance between estimated spectral densities, which
was proposed in \cite{Euan16} and will be reported in detail in a separate
manuscript. Here, we use this methodology to approximate the distribution of
the total variation distance between two clusters. We then use this
approximate distribution to choose a threshold for the TV distance between
estimated spectra, in order to decide whether or not the clusters should be
merged.
The algorithm proposed in \cite{Euan16} to obtain a bootstrap sample of the TV
distance is the following.
\begin{enumerate}
\item From $X_1(t)$ and $X_2(t)$, estimate $\widehat{f}^{X_1}(\omega)$ and
$\widehat{f}^{X_2}(\omega)$ and take $ \displaystyle
\widehat{f}(\omega)=
\frac{\widehat{f}^{X_1}(\omega)+\widehat{f}^{X_2}(\omega)}{ 2 }.$
\item \label{A3} Draw $Z(1),\ldots,Z(T)\sim N(0,1)$ i.i.d.\ random variables,
then estimate $\widehat{f}^Z(\omega)$, also using the smoothed periodogram.
\item \label{A4} The bootstrap spectral density is $\widehat{f}^B(\omega)=
\widehat{f}(\omega)\widehat{f}^Z(\omega).$
\item Repeat Steps \ref{A3} and \ref{A4} and estimate $\hat{d}_{TV}$ using the
bootstrap spectral densities, i.e.,
$$d_{TV}^B= d_{TV} (\widehat{f}^{B_1},\widehat{f}^{B_2}),$$
where $\widehat{f}^{B_i}$, $i=1,2,$ are two bootstrap spectral densities
obtained using different replicates of the process $Z(\cdot)$.
\end{enumerate}
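The four steps above can be sketched as follows (a schematic Python implementation; a moving-average-smoothed periodogram stands in for the lag-window estimator of the paper, and we renormalize the bootstrap spectra so that the TV distance is applied to densities):

```python
import numpy as np

def estimate_spectrum(x):
    """Normalized spectral estimate: a moving-average-smoothed periodogram
    (a simple stand-in for the Parzen lag-window estimator)."""
    I = np.abs(np.fft.rfft(x)) ** 2 / len(x)        # periodogram
    f = np.convolve(I, np.ones(11) / 11.0, mode="same")
    return f / f.sum()                               # normalize to sum one

def bootstrap_tv_sample(x1, x2, n_boot=200, seed=0):
    """Bootstrap sample of the TV distance under the null hypothesis that
    x1 and x2 share a common spectral density (Steps 1-4 above)."""
    rng = np.random.default_rng(seed)
    T = len(x1)
    f_common = 0.5 * (estimate_spectrum(x1) + estimate_spectrum(x2))  # Step 1
    tv = lambda f, g: 1.0 - np.minimum(f, g).sum()
    sample = []
    for _ in range(n_boot):
        # Steps 2-3: multiply the pooled spectrum by the spectral estimate of
        # white noise, then renormalize (TV needs densities)
        fB1 = f_common * estimate_spectrum(rng.standard_normal(T))
        fB2 = f_common * estimate_spectrum(rng.standard_normal(T))
        sample.append(tv(fB1 / fB1.sum(), fB2 / fB2.sum()))          # Step 4
    return np.asarray(sample)
```

The empirical quantiles of this sample then provide the rejection threshold used in the tests below.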
The bootstrap spectral density in Steps \ref{A3} and \ref{A4} is motivated
by the method presented in \cite{Paparoditis15}, and \cite{Euan16} shows that
this procedure produces a consistent estimate of the distribution of the TV
distance. To extend this procedure to the choice of the number of clusters,
first note that, due to the hierarchical structure of the algorithms used in
all the proposed methods, the following tests are equivalent:
\begin{align*}
&H_0: ~k-1 ~\mbox{clusters} \qquad\mbox{vs} \qquad H_A: ~k ~\mbox{clusters}, \\
&H_0: ~1 ~\mbox{cluster} \qquad\mbox{vs} \qquad H_A: ~2 ~\mbox{clusters},
\end{align*}
since the $k-1$ clusters are built by joining two of the $k$
clusters.
In addition, we will use this option with the method presented
by \cite{Alvarez15} to select the number of clusters and compare it with the
HSM method. The distribution of the total variation distance between two
clusters depends on the clustering procedure. When using the HSM method, we
aim to approximate the distribution of the distance between the mean spectra
of the two clusters, while for hierarchical clustering with the TV distance we
need to produce samples from each cluster in order to approximate the
distribution of the distance calculated through the linkage function.
The test procedure is as follows:
\begin{itemize}
\item Run the clustering procedure, either the HSM method or hierarchical
clustering with average or complete linkage.
\item Identify the two clusters that are joined to get the $(k-1)$ clusters.
\item Under the null hypothesis where the two clusters should be merged,
denote the common representative spectral estimate to be $\widehat{f}$.
\item Simulate the spectra with the bootstrap procedure to compute the
TV
distance. We consider two cases:
\begin{itemize}
\item[\textit{Case 1.}] When using the HSM method, simulate two spectral
densities from the common spectrum $\widehat{f}$ and compute the TV distance
between them. Repeat this procedure $M$ times.
\item[\textit{Case 2.}] When using hierarchical clustering with the TV
distance, simulate two sets of spectral densities of sizes $g_1$ and $g_2$ from
the common spectrum $\widehat{f}$, where $g_i$ is the number of members in
cluster $i=1,2$ (the clusters to be joined). Compute the link function
(complete or average) between these two sets of spectra using the TV distance.
Repeat this procedure $M$ times.
\end{itemize}
\item Run the test with the bootstrap sample.
\end{itemize}
\textbf{Remark.} Notice that this test assumes that there exists a common
spectrum $f$ within each cluster. In practice, the spectra in one
cluster could vary slightly, and in that setting we could cast this under a
mixed effects model (\cite{Krafty11}).
To investigate the performance of this procedure, we used \textbf{Experiments
1} and \textbf{2}. We used the TV distance as input to a hierarchical algorithm
with two different link functions, namely average and complete. We also
considered the HSM method with the average version. In this case, we use
$N=500$ replicates for each experiment.
Tables \ref{Exp1NC} and \ref{Exp2NC} present the proportion of times that the
null hypothesis is rejected. Rejection is based on the bootstrap quantile of
probability $\alpha$. We did not expect the proportion of rejections to equal
$\alpha$, since when the complete or average link is used, these
values are not direct observations of the TV distance.
However, we still expect good performance; if anything, the procedure is more
likely to overestimate the number of clusters than to underestimate it.
\begin{table}\footnotesize
\centering
\begin{tabular}{ccccc}
Experiment 1 & \\
\hline \hline\\
Test &$\alpha$ & Complete& Average& HSM\\
\hline \\
1 cluster vs 2 clusters& 0.01& 1& 1& 1\\
& 0.05& 1& 1& 1\\
& 0.1& 1& 1& 1\\
2 clusters vs 3 clusters& 0.01& 0.052& 0.154& 0.008\\
& 0.05& 0.206& 0.492& 0.058\\
& 0.1& 0.382& 0.670& 0.164\\
\end{tabular}
\caption{Proportion of times that the null hypothesis is rejected. Complete
corresponds to the TV distance in hierarchical algorithm with the complete link
function, Average with the average link, and HSM is the
hierarchical spectral merger with the average version.}\label{Exp1NC}
\end{table}
In \textbf{Experiment 1} the true number of clusters is $2$. From Table
\ref{Exp1NC}, we observe that all methods rejected the hypothesis of one
cluster,
at all the significance levels. This means that the procedures did not
underestimate the number of clusters. When testing $2$ versus $3$ clusters
(where the null hypothesis is true), the proportion of rejections with the
average link function is well above the nominal level. The complete link
performs better, and the best results, closest to the nominal levels, are
obtained with the HSM method.
\begin{table}\footnotesize
\centering
\begin{tabular}{ccccc}
Experiment 2 & \\
\hline \hline\\
Test &$\alpha$ & Complete& Average& HSM\\
\hline \\
2 clusters vs 3 clusters& 0.01& 0.968& 1& 0.25\\
& 0.05& 1& 1& 0.924\\
& 0.1& 1& 1& 0.998\\
3 clusters vs 4 clusters& 0.01& 0.072& 0.18& 0.002\\
& 0.05& 0.228& 0.924& 0.050\\
& 0.1& 0.376& 0.998& 0.106\\
\end{tabular}
\caption{Proportion of times that the null hypothesis is rejected, in
Experiment
2.}\label{Exp2NC}
\end{table}
\begin{figure}
\caption{P-values obtained in the test of number of clusters using bootstrap
samples.}
\label{PVTE1}
\label{PVTE2}
\label{PvaluesTest}
\end{figure}
In \textbf{Experiment 2} the true number of clusters is $3$. This is a more
difficult case, since the spectra are very close. From Table \ref{Exp2NC}, when
testing 2 versus 3 clusters, we observe that the complete and average link
functions
did not underestimate the number of clusters. However, the HSM method did not
distinguish 3 clusters at a level $\alpha=0.01$, but the results are better at
higher
levels. For testing 3 versus 4 clusters, HSM performed the best,
followed by the complete
link. Again, a small value of $\alpha$ was necessary for the average link to
give a reasonable performance.
Figure \ref{PvaluesTest} shows the p-values obtained comparing the value
from each simulation with the bootstrap distribution. We confirm the
fact
that the underestimation of the number of clusters has low probability, almost
zero in some cases, for the three methods. When the number of clusters to test
is the correct one, $2$ in \textbf{Experiment 1} and $3$ in \textbf{Experiment
2}, the p-values are widely distributed in the case of the complete link and
HSM method. With the average link, the p-values are smaller compared to the
other methods.
In general, this test has a good performance when one uses the complete link or
the HSM method.
\section{Analysis of the ocean waves and EEG data}
We developed the \textit{HSMClust} package written in R \nocite{RR} that
implements our proposed clustering method. The package can be downloaded from
\url{http://ucispacetime.wix.com/spacetime#!project-a/cxl2}.
{\bf Example 1: EEG Data.} Our first data example is the resting-state EEG data
from a single subject. The goal here is to cluster
resting-state EEG signals from different channels that are spectrally
synchronized, i.e.,
that show similar spectral profiles. This subject is from the healthy
population
and the EEG clustering here will serve as a
``standard" to
which the clustering of stroke patients (with severe motor impairment) will be
compared. Data were collected at 1000 Hz and pre-processed in the following
way. The continuous EEG
signal was low-pass filtered at 100 Hz, segmented into non-overlapping 1-second
epochs, and detrended. The original number of channels (256) had to be reduced
to 194 because of the presence of artifacts that could not be corrected (e.g.,
loose leads).
The EEG channels were grouped into $19$ pre-defined regions in the brain as
specified in \cite{Wu14}: prefrontal (left-right), dorsolateral prefrontal
(left-right), pre-motor (left-right), supplementary motor area (SMA), anterior
SMA, posterior SMA, primary motor region (left-right), parietal (left-right),
lateral parietal (left-right), media parietal (left-right) and anterior
parietal
(left-right). Figure \ref{F1} shows the locations of these regions on the
cortical surface.
\begin{figure}
\caption{Brain regions: Left/Right Prefrontal (L\_Pf,
R\_Pr), Left/Right Dorsolateral Prefrontal (L\_dPr, R\_dPr), Left/Right
Pre-motor (L\_PMd, R\_PMd), Supplementary Motor Area (SMA), anterior SMA
(aSMA),
posterior SMA (pSMA), Left/Right Primary Motor Region (L\_M1, R\_M1),
Left/Right
Parietal (L\_Pr, R\_Pr), Left/Right Lateral Parietal (L\_latPr, R\_latPr),
Left/Right Media Parietal (L\_medPr, R\_medPr), Left/Right Anterior Parietal
(L\_antPr, R\_antPr). Gray squared channels do not belong to any of these
regions; Light blue region corresponds to right and left occipital and light
green region corresponds to central occipital.}
\label{F1}
\end{figure}
We present the results for subject BLAK at epoch 25 of the
resting state. The interpretations will employ
the usual division in frequency ranges for the analysis of spectral densities
of EEG data: Delta (0.1-3 Hertz), Theta (4-7 Hertz), Alpha (8-12 Hertz), Beta
(13-30 Hertz) and Gamma (31-50 Hertz).
\begin{figure}
\caption{Results using the \textit{HSM} method: minimum TV distance at each merging step, mean normalized spectra by cluster, and cluster locations on the cortical surface.}
\label{E21}
\label{E22}
\label{E23}
\end{figure}
Figure \ref{E21} shows the minimum value of the TV distance; in this case the
``elbow'' appears around $6$ clusters. We analyze the results for 7
clusters as well; in this case the channels in two of the clusters were
grouped into one, since there was no significant evidence to reject 6 clusters
in favor of 7. Figures \ref{E22} and \ref{E23} show the shape of the mean normalized
spectra by
cluster and the location at the cortical surface. Many of the channels at the
occipital and left premotor regions belong to cluster $6$ (pink),
which is dominated by frequencies at the theta and alpha bands (4-12 Hz).
Cluster $2$ (red) comprises the channels at the right premotor region, and this
cluster is influenced only by the delta band (0-4 Hz). Clusters 1 (black) and
5 (sky blue) are the only ones with influence of the beta band and they
are located at the left motor and left anterior parietal regions.
The HSM method captures the behavior of the EEG during the resting state
through
the cluster membership of the EEG channels. In addition, the HSM method also
identifies the frequency bands
that primarily drive the formation of the clusters. The clusters produced are
consistent for the most part with the anatomical-based parcellation of the
cortical surface
and thus cluster formation based on the spectra of the EEGs could be used to
recover the spatial structure of the underlying brain process.
{\bf Example 2. Sea Waves Data.} As a second example we consider wave-height
data from the Coastal Data
Information Program (CDIP) buoy 160 (44098 for the National Data Buoy Center)
situated off the coast of New Hampshire, USA, at a water depth of 76.5 m., when
a storm took place in December 2010.
The data correspond to a period of 69 hours, divided into 138 intervals of 30
minutes and recorded with a frequency of 1.28 Hz, between the 26th and the 29th
of December.
The use of stochastic processes for the analysis and modeling of sea waves has
a
long history, starting with the work of \cite{Pierson} and \cite{l-h1}. A
model commonly used is that of a stationary Gaussian random process. For this
particular sea waves data, the presence of a storm implies that the sea
surface will be
very unstable and the hypothesis of stationarity will only be valid for
short periods of time.
Sea-height data from buoys are usually divided into short intervals, between 20
and 30 minutes of duration, which are considered to be long enough to get a
reasonable estimation of the spectral density, and short enough for stationarity
to hold. For each interval, several numerical parameters are calculated from the
estimated spectral density, which give a summary of the sea conditions. Two very
important ones are the significant wave height and the peak frequency, or
equivalently, the peak period. The former is a standard measure of sea severity
and is defined as four times the standard deviation of the time series. The
latter is just the modal spectral frequency, or the corresponding period. Figure
\ref{F-olas1} (left) presents the evolution of both parameters for the time
interval being considered.
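As an illustration of these two summary parameters, the following Python sketch computes them on synthetic data (not the buoy record); the 1.28 Hz sampling rate and 30-minute window mirror those described above.

```python
import numpy as np

def wave_parameters(eta, fs):
    """Summary parameters of a surface-elevation record eta sampled at fs Hz:
    significant wave height Hs = 4 * standard deviation of the series, and
    peak frequency = modal frequency of the periodogram (zero bin excluded)."""
    hs = 4.0 * np.std(eta)
    freqs = np.fft.rfftfreq(eta.size, d=1.0 / fs)
    pgram = np.abs(np.fft.rfft(eta)) ** 2
    fp = freqs[1 + np.argmax(pgram[1:])]  # skip the zero-frequency bin
    return hs, fp

# Synthetic 30-minute record at 1.28 Hz with a dominant 0.1 Hz swell
# (illustrative data, not the CDIP buoy record).
fs = 1.28
t = np.arange(0.0, 30 * 60, 1.0 / fs)
rng = np.random.default_rng(1)
eta = 1.5 * np.sin(2 * np.pi * 0.1 * t) + 0.2 * rng.standard_normal(t.size)
hs, fp = wave_parameters(eta, fs)
```

With the dominant 0.1 Hz swell in this synthetic record, the peak frequency is recovered at the mode of the periodogram.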
\begin{figure}
\caption{Significant wave height (black) and peak frequency (blue) for the 69
hour period (left). Trajectory for the minimum TV distance
(right).}
\label{F-olas1}
\end{figure}
In \cite{Alvarez15} a segmentation method for long time series based on a
clustering procedure was proposed and applied to a similar problem, but in a
different location and different sea conditions. We will apply the same method
with the clustering
procedure introduced in this work. The main idea is that the spectral density
should be able to identify the time intervals coming from the same stationary
process. Thus, since the data are already divided into 30-minute intervals, a
clustering procedure on the corresponding spectral densities should identify
intervals having similar oscillatory behavior. If these intervals are
contiguous in time, then they are considered part of a longer stationary
interval. More details on the procedure can be found in \cite{Alvarez15}.
As in the previous example, the trajectory for the minimum TV distance (Figure
\ref{F-olas1}, right) is used to get a first estimate for the number of
clusters, which in this case is 15. A bootstrap test supports this choice
(p-value = 0.01). The segmentation resulting from this clustering is shown in
Figure \ref{F-olas2}, where the different clusters correspond to different
colors in the vertical lines.
\begin{figure}
\caption{Segmentation for the 69 hour period. Different colors correspond to
different clusters.}
\label{F-olas2}
\end{figure}
It is interesting to compare these results with a similar analysis of a
different data set in \cite{Alvarez15}, even though the clustering procedures
differ. In that case, for a time series of 96 hours recorded in Waimea Bay,
Hawaii in January, 2003, only 5 or 6 clusters were detected, depending on the
linkage function (average or complete) used for the hierarchical agglomerative
procedure. During the period studied for the Hawaii data there is also a storm,
but of a much smaller magnitude, with a significant wave height always below 5 m.
In contrast, for the data considered in this paper, significant wave height
surpasses 7 m. at the peak of the storm. The intensity of the storm is a
possible factor in the presence of more clusters of shorter duration.
Table \ref{T-olas1} gives the duration of consecutive intervals within a common
cluster, with a mean value of 2.23 h and a median of 1.5 h.
\begin{table}\footnotesize
\centering
\begin{tabular}{ccccccccccc}
Time (h) & 0.5 & 1 & 1.5 & 2 & 2.5 & 3 & 5 & 6.5
& 7.5 & 8\\
Number of intervals & 9 & 6 & 3 & 3 & 3 & 2 & 1 & 1 & 2 & 1\\
\end{tabular}
\caption{Frequency table for the duration of consecutive intervals within a
common cluster.}\label{T-olas1}
\end{table}
Figure \ref{F-olas3} presents the normalized spectral densities for the 15
clusters. The graphics show that bimodal spectral densities characterize some
clusters, even though in some cases the secondary mode is much smaller than the
main one. Bimodal spectra correspond to the presence of a second train of waves
with different dominating frequency, and the presence of this secondary train
of
waves, even if it is small, is captured by the method as an important feature
of
the spectra. In other cases there are differences in the dispersion of the
spectral density or in the location of the modal frequency. The procedure
employed yields a spectral characterization of the different clusters, which
can
be linked to the statistical characteristics of their duration, something which
is useful for the design of marine structures and energy converters.
\begin{figure}
\caption{Normalized spectral densities for the 15 clusters, the mean spectral
density is represented in black.}
\label{F-olas3}
\end{figure}
\section{Discussion and Future Work}
A new clustering procedure for time series based on the spectral densities and
the total variation (TV) distance, the Hierarchical Spectral Merger (HSM)
algorithm, was introduced. Using numerical experiments, the proposed HSM method
was compared with other available procedures, and the results show that its
performance is comparable to the best, in addition to having an intuitive
interpretation. Applications to two data sets from different scientific fields
were also presented, showing that the method has wide applicability in many different
areas.
However, the method is not free from limitations and further developments are needed.
The fact that each cluster has a characteristic spectral density, with respect to which all distances are measured, may provide a way of correcting classification mistakes that occur due to the evolution of the clusters. This methodology would give a more robust clustering method.
\end{document} |
\begin{document}
\noindent
\title[Finiteness of meromorphic mappings from K\"{a}hler manifold]{Finiteness of meromorphic mappings from \\K\"{a}hler manifold into projective space}
\author{Pham Duc Thoan}
\address[Pham Duc Thoan]{Department of Mathematics, National University of Civil Engineering\\
55 Giai Phong street, Hai Ba Trung, Hanoi, Vietnam}
\email{thoanpd@nuce.edu.vn}
\author{Nguyen Dang Tuyen}
\address[Nguyen Dang Tuyen]{Department of Mathematics, National University of Civil Engineering\\
55 Giai Phong street, Hai Ba Trung, Hanoi, Vietnam}
\email{tuyennd@nuce.edu.vn}
\author{Noulorvang Vangty}
\address[Noulorvang Vangty]{Department of Mathematics, National University of Education\\
136-Xuan Thuy str., Hanoi, Vietnam}
\email{vangtynoulorvang@gmail.com}
\maketitle
\begin{abstract}
The purpose of this paper is to prove finiteness theorems for meromorphic mappings of a complete connected K\"{a}hler manifold into projective space sharing few hyperplanes in subgeneral position without counting multiplicity, where all zeros with multiplicities more than a certain number are omitted. Our results are extensions and generalizations of some recent ones.
\end{abstract}
\footnotetext{\textit{2010 Mathematics Subject Classification}: Primary 32H30, 32A22; Secondary 30D35.\\
\hskip8pt Key words and phrases: finiteness theorems, meromorphic mapping, complete K\"{a}hler manifold.}
\section{Introduction}
Let $f$ be a non-constant meromorphic mapping of $\mathbb C^m$ into $\mathbb P^n(\mathbb C)$ and let $H_j$ be a hyperplane in $\mathbb P^n(\mathbb C)$. Denote by $\nu_{(f, H_j)}(z)$ the intersection multiplicity of the image of $f$ and the hyperplane $H_j$ at the point $f(z)$.
For a divisor $\nu$ on $\mathbb C^m$ and for a positive integer $k$ or $k=+\infty$, we set
$$ \nu_{\leqslant k}(z)=
\begin{cases}
0& {\text{ if }} \nu(z)>k,\\
\nu(z)&{\text{ if }} \nu(z)\leqslant k.
\end{cases} $$
Similarly, we define $\nu_{>k}(z).$ If $\varphi$ is a meromorphic function, the zero divisor of $\varphi$ is denoted by $\nu_{\varphi}.$
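For instance, taking $m=1$, $\varphi(z)=z^3$ and $k=2$, we have
$$\nu_{\varphi}(0)=3>k, \qquad \nu_{\varphi,\leqslant 2}(0)=0, \qquad \nu_{\varphi,>2}(0)=3,$$
so all zeros of multiplicity greater than $k$ are omitted from the truncated divisor.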
Let $H_1,H_2,\ldots,H_{q}$ be hyperplanes of $\mathbb P^n(\mathbb C)$ (in subgeneral position or in general position) and let $k_1,\ldots,k_q$ be positive integers or $+\infty$. Assume that $f$ is a meromorphic mapping satisfying
$$ \dim \{z:\nu_{(f,H_i),\leqslant k_i}(z)\cdot\nu_{(f,H_j),\leqslant k_j}(z)>0\}\leqslant m-2\ \ (1\leqslant i<j\leqslant q).$$
Let $d$ be an integer number. We denote by $\mathcal {F}(f,\{H_j,k_j\}_{j=1}^q,d)$ the set of all meromorphic mappings $g: \mathbb C^m \to \mathbb P^n(\mathbb C)$ satisfying the following two conditions:
\begin{itemize}
\item[(a)] $\min(\nu_{(f, H_j),\leqslant k_j},d)=\min(\nu_{(g, H_j),\leqslant k_j},d)$ \ \ ($1\leqslant j \leqslant q$).
\item[(b)] $f(z)=g(z)$ on $\bigcup_{j=1}^q \{z:\nu_{(f,H_j),\leqslant k_j}(z)>0\}$.
\end{itemize}
If $k_1=\cdots=k_q=+\infty$, we will simply use notation $\mathcal {F}(f,\{H_j\}_{j=1}^q,d)$ instead of $\mathcal {F}(f,\{H_j,\infty\}_{j=1}^q,d).$
In 1926, Nevanlinna \cite{Ne} showed that two distinct nonconstant meromorphic functions $f$ and $g$ on the complex plane cannot have the same inverse images for five distinct values, and that $g$ is a linear fractional transformation of $f$ if they have the same inverse images counted with multiplicities for four distinct values. After that, many authors have extended and improved Nevanlinna's results to the case of meromorphic mappings into complex
projective spaces, such as Fujimoto \cite{Fu0, Fu2, F98}, Smiley \cite{LS}, Ru-Sogome \cite{R-S2}, Chen-Yan \cite{CY}, Dethloff-Tan \cite{DT}, Quang \cite{Q, Q1, Q2, Q3} and Nhung-Quynh \cite{NQ}. These theorems are called uniqueness theorems or finiteness theorems. The first finiteness theorem for meromorphic mappings from $\mathbb C^m$ into complex projective space $\mathbb P^n(\mathbb C)$ sharing $2n+2$ hyperplanes was given by Quang \cite{Q1} in 2012, with a correction \cite{QQ} in 2015. Recently, he \cite{Q2} extended his results and obtained the following finiteness theorem, in which zeros with multiplicities greater than certain values need not be counted.
\vskip0.2cm
\noindent
\textbf{Theorem A} (see \cite[Theorem 1.1]{Q2})\ {\it Let $f$ be a linearly nondegenerate meromorphic mapping of $\mathbb C^m$ into $\mathbb P^n(\mathbb C)$. Let $H_1,\ldots, H_{2n+2}$ be $2n+2$ hyperplanes of $\mathbb P^n(\mathbb C)$ in general position and let $k_1,\ldots,k_{2n+2}$ be positive integers or $+\infty$. Assume that
$$ \sum_{i=1}^{2n+2}\frac1{k_i+1}<\min\left\{\frac{n+1}{3n^2+n}, \frac{5n-9}{24n+12},\frac{n^2-1}{10n^2+8n}\right\}.$$
Then $\sharp\mathcal F(f,\{H_i,k_i\}_{i=1}^{2n+2},1)\leqslant2.$}
Note that the condition $\displaystyle\sum_{i=1}^{2n+2}\frac1{k_i+1}<\min\left\{\frac{n+1}{3n^2+n}, \frac{5n-9}{24n+12},\frac{n^2-1}{10n^2+8n}\right\}$ in Theorem A becomes $\displaystyle\sum_{i=1}^{2n+2}\frac1{k_i+1}<\frac{n+1}{3n^2+n}$ when $n\geq5.$
We now consider the general case, where $f : M \to \mathbb{P}^n(\mathbb{C})$ is a meromorphic mapping of an $m$-dimensional complete connected K\"{a}hler manifold $M$, whose universal covering is biholomorphic to a ball $B(R_0)=\{z\in\mathbb{C}^m\ :\ ||z||<R_0\}$ $(0<R_0\leqslant \infty)$, into $\mathbb{P}^n(\mathbb{C})$.
Let $H_1,\ldots,H_q$ be hyperplanes of $\mathbb P^n(\mathbb C)$ and let $k_1,\ldots,k_q$ be positive integers or $+\infty$. Then the family $\mathcal F(f,\{H_i,k_i\}_{i=1}^{q},d)$ is defined similarly as above, where $d$ is an integer.
For $\rho \geqslant 0,$ we say that $f$ satisfies the condition $(C_\rho)$ if there exists a nonzero bounded continuous real-valued function $h$ on $M$ such that
$$\rho \Omega_f + dd^c\log h^2\geqslant \text{Ric}\,\omega,$$
where $\Omega_f$ is the pull-back of the Fubini-Study form $\Omega$ on $\mathbb{P}^n(\mathbb{C})$, $\omega = \dfrac{\sqrt{-1}}{2}\sum_{i,j}h_{i\bar{j}}dz_i\wedge d\overline{z}_j$ is the K\"{a}hler form on $M$, $\text{Ric}\,\omega=dd^c\log(\det(h_{i\overline{j}}))$, $d = \partial + \overline{\partial}$ and $d^c = \dfrac{\sqrt{-1}}{4\pi}(\overline{\partial} - \partial)$.
Very recently, Quang \cite{Q3} obtained a finiteness theorem for meromorphic mappings from such a K\"{a}hler manifold $M$ into $\mathbb P^n(\mathbb C)$ sharing hyperplanes regardless of multiplicities, by giving new definitions of ``functions of small integration'' and ``functions of bounded integration'' as well as proposing a new method to deal with the difficulties he met on the K\"{a}hler manifold. We would like to emphasize that Quang's result is also the first finiteness theorem for meromorphic mappings on a K\"{a}hler manifold, although uniqueness theorems were discovered early by Fujimoto \cite{Fu2} and later by many authors such as Ru-Sogome \cite{R-S2}, Nhung-Quynh \cite{NQ} and others. Here is his result.
\vskip0.2cm
\noindent
\textbf{Theorem B}\ (see \cite[Theorem 1.1]{Q3}).
{\it Let $M$ be an $m$-dimensional connected K\"{a}hler manifold whose universal covering is biholomorphic to $\mathbb C^m$ or the unit ball $B(1)$ of $\mathbb C^m$, and let $f$ be a linearly nondegenerate meromorphic mapping of $M$ into $\mathbb P^n(\mathbb C)\ (n\geqslant2)$. Let $H_1,\ldots,H_q$ be $q$ hyperplanes of $\mathbb P^n(\mathbb C)$ in general position. Assume that $f$ satisfies the condition $(C_{\rho})$.
If $$\displaystyle q>n+1+\frac{3nq}{6n+1}+\rho\frac{(n^2+4q-3n)(6n+1)}{6n^2+2}$$ then $\sharp\mathcal F(f,\{H_i\}_{i=1}^{q},1)\leqslant2.$
}
Unfortunately, in this result all zeros must be counted regardless of their multiplicities, and hence Theorem B cannot be an extension or a generalization of Theorem A.
Our purpose in this article is to prove a result similar to Theorems A and B for the case of a meromorphic mapping from a complete connected K\"{a}hler manifold into projective space, in which all zeros with multiplicities more than a certain number are omitted. However, the key tool used in the proof of Theorem A is the technique of ``rearranging counting functions'' to compare counting functions with characteristic functions, which is not valid on the K\"{a}hler manifold. In addition, the proof of Theorem B does not work in the case $k_i<\infty$. To overcome these difficulties, we use the technique in \cite{TN} and the methods in \cite{Q3}, as well as new auxiliary functions, to obtain a new finiteness theorem which generalizes and extends the theorems cited above. Namely, we will prove the following theorem.
\begin{Theorem}\label{theo1}
Let $M$ be an $m$-dimensional connected K\"{a}hler manifold whose universal covering is biholomorphic to $\mathbb C^m$ or the unit ball $B(1)$ of $\mathbb C^m$, and let $f$ be a linearly nondegenerate meromorphic mapping of $M$ into $\mathbb P^n(\mathbb C)\ (n\geqslant2)$. Let $H_1,\ldots,H_q$ be $q$ hyperplanes of $\mathbb P^n(\mathbb C)$ in $N$-subgeneral position and let $k_1,\ldots,k_q$ be positive integers or $+\infty$. Assume that $f$ satisfies the condition $(C_{\rho})$. Let $k$ be the largest integer not exceeding $\dfrac{q-2N-2}{2}$ and let $l$ be the smallest integer not less than $\dfrac{2N-2}{k+2}+2$ if $k>0$, or let $l=2N+1$ if $k=0.$ Then $\sharp\mathcal F(f,\{H_i,k_i\}_{i=1}^q,1)\leqslant2$ if
\begin{align*}
q&>2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho\big( n(2N-n+1)+\frac{4(q-n)n}{n-1}\big)\\
&+\max\left\{\frac{3nq}{2\big(3n+1+\frac{n-1}l\big)}, \frac{4q+3nq-14}{4q+3n-14},\frac{3nq^2}{6nq+(n-2)(q-2)+4q-6n-8}\right\}.
\end{align*}
\end{Theorem}
\noindent {\bf Remark 1.} It is easy to see that $$\dfrac{3nq}{2\big(3n+1+\frac{n-1}l\big)}<\dfrac{3nq}{6n+2}<\dfrac{3nq}{6n+1},$$ and $$\dfrac{3nq^2}{6nq+(n-2)(q-2)+4q-6n-8}<\dfrac{3nq^2}{6nq+q}=\dfrac{3nq}{6n+1}, \quad \forall n\geq2.$$
We now show that $$ \frac{4q+3nq-14}{4q+3n-14}<\dfrac{3nq}{6n+1}, \forall n\geq 3.$$ Indeed, it suffices to prove that $12nq^2-9n^2q-69nq-4q+84n+14>0$ for all $n\geq3.$
Since $q\geq2n+2$, we have $12nq^2-9n^2q-69nq-4q\geq q(15n^2-45n-4)> 0$ for all $n\geq4.$ For $n=3,$ we have $12nq^2-9n^2q-69nq-4q+84n+14=36q^2-292q+266>0$ since $q\geq8.$
Hence, when $k_1=\cdots=k_q=+\infty$ and $N=n$, Theorem \ref{theo1} is an extension of Theorem B.
When $q=2n+2$, $M=\mathbb C^m$ and $H_1,\ldots, H_q$ are in general position, by $\rho=0$, $N=n$, $k=0$ and $l=2n+1,$ we obtain the following corollary from Theorem \ref{theo1}.
\begin{corollary} \label{theo2}
Let $f$ be a linearly nondegenerate meromorphic mapping of $\mathbb C^m$ into $\mathbb P^n(\mathbb C)$. Let $H_1,\ldots, H_{2n+2}$ be $2n+2$ hyperplanes of $\mathbb P^n(\mathbb C)$ in general position and let $k_1,\ldots,k_{2n+2}$ be positive integers or $+\infty$. Then $\sharp\mathcal F(f,\{H_i,k_i\}_{i=1}^{2n+2},1)\leqslant2$ provided
$$ \sum_{i=1}^{2n+2}\frac1{k_i+1}<\min\left\{\frac{1}{2n},\frac{n^3+2n+3}{n(7n^2+5n+3)}\right\}.$$
In particular, if $n\geq 4$ then $\sharp\mathcal F(f,\{H_i,k_i\}_{i=1}^{2n+2},1)\leqslant2$ provided
$$ \sum_{i=1}^{2n+2}\frac1{k_i+1}<\frac{1}{2n}.$$
\end{corollary}
\noindent {\bf Remark 2.} Consider the quantities $A=\min\left\{\frac{n+1}{3n^2+n}, \frac{5n-9}{24n+12},\frac{n^2-1}{10n^2+8n}\right\}$ in Theorem A and $B=\min\left\{\frac{1}{2n},\frac{n^3+2n+3}{n(7n^2+5n+3)}\right\}$ in Corollary \ref{theo2}. We have the following estimates.
$\bullet$ For $n\geq 5$, $A=\frac{n+1}{3n^2+n}<\frac1{2n}=B.$
$\bullet$ For $n=4$, $A=\frac{n^2-1}{10n^2+8n}<\frac1{2n}=B.$
$\bullet$ For $n=3$, $A=\frac{n^2-1}{10n^2+8n}<\frac{n^3+2n+3}{n(7n^2+5n+3)}=B.$
$\bullet$ For $n=2$, $A=\frac{5n-9}{24n+12}<\frac{n^3+2n+3}{n(7n^2+5n+3)}=B.$
In all cases, $A<B$. Therefore, Corollary \ref{theo2} is a nice improvement of Theorem A.
In order to prove our results, we first give a new estimate of the counting function of the Cartan auxiliary function (see Lemma 2.8). Second, we improve the algebraic dependence theorem for three meromorphic mappings (see Lemma 3.3). After that, we use arguments similar to those used by Quang \cite{Q3} to finish the proofs.
\section{Basic notions and auxiliary results from Nevanlinna theory}
We will recall some basic notions in Nevanlinna theory due to \cite{R-S1,T-Q}.
\noindent
{\bf 2.1. Counting function.}\ We set $||z|| = \big(|z_1|^2 + \dots + |z_m|^2\big)^{1/2}$ for
$z = (z_1,\dots,z_m) \in \mathbb{C}^m$ and define
\begin{align*}
B(r) := \{ z \in \mathbb{C}^m : ||z|| < r\},\quad
S(r) := \{ z \in \mathbb{C}^m : ||z|| = r\}\ (0 < r \leqslant \infty),
\end{align*}
where $B(\infty) = \mathbb{C}^m$ and $S(\infty) = \emptyset$.
Define
$$v_{m-1}(z) := \big(dd^c ||z||^2\big)^{m-1}\quad \text{and}$$
$$\sigma_m(z):= d^c \log||z||^2 \wedge \big(dd^c \log||z||^2\big)^{m-1}
\quad \text{on } \mathbb{C}^m \setminus \{0\}.$$
A divisor $E$ on a ball $B(R_0)$ is given by a formal sum $E=\sum\mu_{\nu}X_{\nu}$,
where $\{X_\nu\}$ is a locally finite family of distinct irreducible analytic hypersurfaces in $B(R_0)$ and $\mu_{\nu}\in \mathbb{Z}$. We define the support of the divisor
$E$ by setting $\mathrm{Supp}\, (E)=\cup_{\mu_{\nu}\ne 0} X_\nu$.
Sometimes, we identify the divisor $E$ with a function $E(z)$ from $B(R_0)$
into $\mathbb{Z}$ defined by $E(z):=\sum_{X_{\nu}\ni z}\mu_\nu$.
Let $M,k$ be positive integers or $+\infty$. We define the truncated divisor $E^{[M]}$ by
\begin{align*}
E^{[M]}:= \sum_{\nu}\min\{\mu_\nu, M \}X_\nu ,
\end{align*}
and the truncated counting function to level $M$ of $E$ by
\begin{align*}
N^{[M]}(r,r_0;E) := \int\limits_{r_0}^r \frac{n^{[M]}(t,E)}{t^{2m-1}}dt\quad
(r_0 < r < R_0),
\end{align*}
where
\begin{align*}
n^{[M]}(t,E): =
\begin{cases}
\int\limits_{\mathrm{Supp}\, (E) \cap B(t)} E^{[M]}v_{m-1} &\text{ if } m \geqslant 2,\\
\sum_{|z| \leqslant t} E^{[M]}(z)&\text{ if } m = 1.
\end{cases}
\end{align*}
We omit the character $^{[M]}$ if $M=+\infty$.
Let $\varphi$ be a non-zero meromorphic function on $B(R_0)$. We denote by $\nu^0_\varphi$ (resp. $\nu^{\infty}_\varphi$) the divisor of zeros (resp. the divisor of poles) of $\varphi$. The divisor of $\varphi$ is defined by
$$\nu_\varphi=\nu^0_\varphi-\nu^\infty_\varphi.$$
For a positive integer $M$ or $M= \infty$, we define the truncated divisors of $\nu_\varphi$ by
$$\nu^{[M]}_\varphi(z)=\min\{M,\nu_\varphi(z)\}, \quad
\nu^{[M]}_{\varphi, \leqslant k}(z):=\begin{cases}
\nu^{[M]}_\varphi(z)&\text{ if }\nu_\varphi(z)\leqslant k,\\
0&\text{ if }\nu_\varphi(z)> k.
\end{cases}
$$
For convenience, we will write $N_\varphi(r,r_0)$ and $N^{[M]}_{\varphi,\leqslant k}(r,r_0)$ for $N(r,r_0;\nu^0_\varphi)$ and $N^{[M]}(r,r_0;\nu^0_{\varphi,\leqslant k})$ respectively.
\vskip0.2cm
\noindent
{\bf 2.2. Characteristic function.}\ Let $f : B(R_0)\longrightarrow \mathbb{P}^n(\mathbb{C})$ be a meromorphic mapping. Fix a homogeneous coordinates system $(w_0 : \cdots : w_n)$ on $\mathbb{P}^n(\mathbb{C})$. We take a reduced representation
$f = (f_0 : \cdots : f_n)$, which means $f_i\ (0\leqslant i\leqslant n)$ are holomorphic functions and
$f(z) = \big(f_0(z) : \dots : f_n(z)\big)$ outside the analytic subset
$\{ f_0 = \dots = f_n= 0\}$ of codimension at least two.
Set $\Vert f \Vert = \big(|f_0|^2 + \dots + |f_n|^2\big)^{1/2}$. Let $H$ be a hyperplane in $\mathbb{P}^n(\mathbb{C})$ defined by $H = \{(\omega_0,\ldots,\omega_n): a_0\omega_0 + \cdots + a_n\omega_n = 0 \}$. We set $H(f) = a_0f_0 + \cdots + a_nf_n$ and $\Vert H \Vert = \big(|a_0|^2 + \dots + |a_n|^2\big)^{1/2}.$
The characteristic function of $f$ (with respect to the Fubini-Study form $\Omega$) is defined by
\begin{align*}
T_f(r,r_0) := \int_{t=r_0}^r\dfrac{dt}{t^{2m-1}}\int_{B(t)}f^*\Omega\wedge v_{m-1},\quad 0 < r_0 < r < R_0.
\end{align*}
By Jensen's formula we have
\begin{align*}
T_f(r,r_0) = \int_{S(r)}\log ||f||\sigma_m - \int_{S(r_0)}\log ||f||\sigma_m,\quad 0 < r_0 < r < R_0.
\end{align*}
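For instance, in the simplest case $m=n=1$, $R_0=\infty$ and $f=(1:z)$, Jensen's formula gives
\begin{align*}
T_f(r,r_0) = \frac{1}{2}\log(1+r^2) - \frac{1}{2}\log(1+r_0^2) = \log r + O(1)\quad (r\to\infty),
\end{align*}
so the characteristic function of a degree-one rational map grows like $\log r$.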
Throughout this paper, we assume that the numbers $r_0$ and $R_0$ are fixed with $0<r_0<R_0$. By the notation ``$||\ P$'', we mean that the assertion $P$ holds for all $r\in [r_0, R_0)$ outside a set $E$ such that $\int_E dr < \infty$ in the case $R_0 = \infty$ and $\int_E \dfrac{1}{R_0-r}dr < \infty$ in the case $R_0 < \infty$.
\vskip0.2cm
\noindent
{\bf 2.3. Functions of small integration.} We recall some definitions due to Quang \cite{Q3}.
Let $f^1,\ldots,f^k$ be $k$ meromorphic mappings from the complete K\"{a}hler manifold $B(1)$ into $\mathbb P^n(\mathbb C)$, which satisfy the condition $(C_{\rho})$ for a non-negative number $\rho$. For each $1\leqslant u\leqslant k$, we fix a reduced representation $f^u=(f_0^u:\cdots:f_n^u)$ of $f^u$.
A non-negative plurisubharmonic function $g$ on $B(1)$ is said to be of small integration with respect to $f^1,\ldots,f^k$ at level $l_0$ if there exist an element $\alpha=(\alpha_1,\ldots,\alpha_m)\in\mathbb N^m$ with $|\alpha|\leqslant l_0$ and a positive number $K$ such that for every $0\leqslant tl_0<p<1$,
$$ \int_{S(r)}|z^{\alpha}g|^t\sigma_m\leqslant K\left(\frac{R^{2m-1}}{R-r}\sum_{u=1}^kT_{f^u}(r,r_0)\right)^p $$
for all $r$ with $0<r_0<r<R<1,$ where $z^{\alpha}=z_1^{\alpha_1}\cdots z_m^{\alpha_m}.$
We denote by $S(l_0;f^1,\ldots,f^k)$ the set of all non-negative plurisubharmonic functions on $B(1)$ which are of small integration with respect to $f^1,\ldots,f^k$ at level $l_0.$ We see that if $g\in S(l_0;f^1,\ldots,f^k)$, then $g\in S(l;f^1,\ldots,f^k)$ for all $l>l_0.$ Moreover, if $g$ is a constant function, then $g\in S(0;f^1,\ldots,f^k)$.
By \cite[Proposition 3.2]{Q3}, if $g_i\in S(l_i;f^1,\ldots,f^k)$ for $1\leqslant i\leqslant s$, then $g_1\cdots g_s\in S(\sum_{i=1}^sl_i;f^1,\ldots,f^k)$.
A meromorphic function $h$ on $B(1)$ is said to be of bounded integration with bi-degree $(p,l_0)$ for the family $\{f^1,\ldots,f^k\}$ if there exists $g\in S(l_0;f^1,\ldots,f^k)$ satisfying $$ |h|\leqslant||f^1||^p\cdots||f^k||^p\cdot g,$$
outside a proper analytic subset of $B(1).$
We denote by $B(p,l_0;f^1,\ldots,f^k)$ the set of all meromorphic functions on $B(1)$ which are of bounded integration of bi-degree $(p,l_0)$ for $\{f^1,\ldots,f^k\}$. We have the following assertions:
$\bullet$ For a meromorphic function $h$, $|h|\in S(l_0;f^1,\ldots,f^k)$ if and only if $h\in B(0,l_0;f^1,\ldots,f^k)$.
$\bullet$ $B(p,l_0;f^1,\ldots,f^k)\subset B(p,l;f^1,\ldots,f^k)$ for all $0\leqslant l_0<l.$
$\bullet$ If $h_i\in B(p_i,l_i;f^1,\ldots,f^k)$ for $1\leqslant i\leqslant s$, then $h_1\cdots h_s\in B(\sum_{i=1}^sp_i,\sum_{i=1}^sl_i;f^1,\ldots,f^k)$.
\vskip0.2cm
\noindent
{\bf 2.4. Some Lemmas and Propositions.}
\begin{lemma}\label{lem2.1}\cite[Lemma 3.4]{F98}
If $\Phi^{\alpha}(F,G,H)=0$ and $\Phi^{\alpha}\left(\frac1F,\frac1G,\frac1H\right)=0$ for all $\alpha$ with $|\alpha|\leqslant1$, then one of the following assertions holds:
(i) $F=G, G=H$ or $H=F$.
(ii) $\frac FG, \frac{G}H$ and $\frac HF$ are all constants.
\end{lemma}
\begin{proposition}[see \cite{NK, NGC}]\label{B0011} Let $H_1,\ldots,H_q $\ $( q > 2N - n+ 1)$ be hyperplanes in $\mathbb{P}^n(\mathbb{C})$ located in $N$-subgeneral position. Then there exist a function $\omega:\{1,\ldots, q\}\to (0,1]$, called a Nochka weight, and a real number $\tilde{\omega}\geqslant1$, called a Nochka constant, satisfying the following conditions:\\
\indent (i) If $j\in \{1,\ldots, q\}$, then $0<\omega_j\tilde{\omega}\leqslant1.$\\
\indent (ii) $q-2N+n-1=\tilde{\omega}(\sum^{q}_{j=1}\omega_j-n-1).$\\
\indent (iii) For $R\subset \{1,\ldots, q\}$ with $|R|=N+1$, we have $\sum_{i\in R}\omega_i\leqslant n+1.$\\
\indent (iv) $\frac{N}{n}\leqslant \tilde{\omega} \leqslant \frac{2N-n+1}{n+1}.$\\
\indent (v) Given real numbers $\lambda_1, \ldots,\lambda_q$ with $\lambda_j\geqslant1$ for $1\leqslant j\leqslant q$ and given any $R\subset \{1,\ldots, q\}$ with $|R|= N+1,$ there exists a subset $R^1\subset R$ such that $|R^1|=\text{rank}\{H_i\}_{i\in R^1}=n+1$ and $$ \prod_{j\in R}\lambda_j^{\omega_j}\leqslant\prod_{i\in R^1}\lambda_i.$$
\end{proposition}
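As a sanity check (a standard observation, not taken from \cite{NK, NGC} verbatim), in the special case of general position $N=n$ one may take $\omega_j=1$ for all $j$ and $\tilde{\omega}=1$: condition (i) is clear, (ii) reads
$$q-2n+n-1=q-n-1=1\cdot\Big(\sum_{j=1}^{q}1-n-1\Big),$$
(iii) holds since every $R$ with $|R|=N+1=n+1$ gives $\sum_{i\in R}\omega_i=n+1$, and (iv) reduces to $1\leqslant\tilde{\omega}\leqslant1$.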
\noindent
\begin{proposition}[see \cite{T-Q}, Lemma 3.2]\label{prop4}
Let $\{H_i\}_{i=1}^q\ (q\geqslant n+1)$ be a set of hyperplanes of $\mathbb{P}^n(\mathbb{C})$ satisfying $\cap_{i=1}^{q}H_i = \emptyset$, and let $f: B(R_0) \longrightarrow \mathbb{P}^n(\mathbb{C})$ be a meromorphic mapping. Then there exist positive constants $\alpha$ and $\beta$ such that $$\alpha\Vert f\Vert \leqslant \max\limits_{i\in \{1,\ldots,q\}} |H_i(f)|\leqslant \beta\Vert f\Vert.$$
\end{proposition}
\begin{proposition}[see \cite{Fu1}, Proposition 4.5]\label{prop1}
Let $F_1,\ldots,F_{n+1}$ be meromorphic functions on $B(R_0)\subset\mathbb{C}^m$ such that they are linearly independent over $\mathbb{C}$. Then there exists an admissible set $\{\alpha_i=(\alpha_{i1},\ldots,\alpha_{im})\}_{i=1}^{n+1}$ with $\alpha_{ij}\ge 0$ being integers and $|\alpha_i|=\sum_{j=1}^m|\alpha_{ij}|\leqslant i$ for $1\leqslant i\leqslant n+1$ such that the generalized Wronskian $W_{\alpha_1,\ldots,\alpha_{n+1}}(F_1,\ldots,F_{n+1})\not\equiv 0$, where $W_{\alpha_1,\ldots,\alpha_{n+1}}(F_1,\ldots,F_{n+1}) = \det \left(\mathcal{D}^{\alpha_i}F_j\right)_{1\leqslant i, j \leqslant n+1}.$
\end{proposition}
Let $L_1,\ldots,L_{n+1}$ be linearly independent linear forms in $n+1$ variables. Let $F=(F_1:\cdots:F_{n+1}): B(R_0)\to\mathbb{P}^n(\mathbb{C})$ be a meromorphic mapping and let $(\alpha_1,\ldots,\alpha_{n+1})$ be an admissible set of $F$. Then we have the following proposition.
\noindent
\begin{proposition} [see \cite{R-S1}, Proposition 3.3]\label{prop3}
In the above situation, set $l_0=|\alpha_1|+\cdots+|\alpha_{n+1}|$ and take $t,p$ with $0<tl_0<p<1.$ Then, for $0<r_0<R_0$ there exists a positive constant $K$ such that for $r_0 < r < R < R_0,$
$$\int\limits_{S(r)}\left |z^{\alpha_1+\cdots+\alpha_{n+1}}\dfrac{W_{\alpha_1,\ldots,\alpha_{n+1}}(F_1,\ldots,F_{n+1})}{L_1(F)\cdots L_{n+1}(F)}\right|^t \sigma_m\leqslant K\left(\dfrac{R^{2m-1}}{R-r}T_F(R,r_0)\right)^{p},$$
where $z^\alpha = z_1^{\alpha_1}\cdots z_m^{\alpha_m}$ for $z = (z_1,\ldots,z_m)$ and $\alpha = (\alpha_1,\ldots,\alpha_m)$.
\end{proposition}
For convenience of presentation, for meromorphic mappings $f^u: B(R) \to \mathbb{P}^n(\mathbb{C})$ and hyperplanes $\{H_i\}_{i=1}^q$ of $\mathbb{P}^n(\mathbb{C})$, we denote by $\mathcal{S}$ the closure of $$\bigcup_{1\leqslant u \leqslant 3} I(f^u)\cup \bigcup_{1 \leqslant i<j \leqslant q} \{z: \nu_{(f,H_i),\leqslant k_i}(z) \cdot \nu_{(f,H_j),\leqslant k_j}(z) > 0 \}.$$
We see that $\mathcal{S}$ is an analytic subset of codimension two of $B(R)$.
\begin{lemma}\cite[Lemma 2.6]{TN}\label{2.4}
Let $f^1, f^2, f^3$ be three mappings in $\mathcal F(f,\{H_i, k_i\}_{i=1}^q,1)$. Suppose that there exist $s,t,l\in\{1,\ldots ,q\}$ such that
$$
P:=\det\left (\begin{array}{ccc}
(f^1,H_s)&(f^1,H_t)&(f^1,H_l)\\
(f^2,H_s)&(f^2,H_t)&(f^2,H_l)\\
(f^3,H_s)&(f^3,H_t)&(f^3,H_l)
\end{array}\right )\not\equiv 0.
$$
Then we have
\begin{align*}
\nu_P(z)\geq \sum_{i=s,t,l}(\min_{1\leqslant u\leqslant 3}\{\nu_{(f^u,H_i),\leqslant k_i}(z)\}-\nu^{[1]}_{(f^1,H_i),\leqslant k_i}(z))+ 2\sum_{i=1}^q\nu^{[1]}_{(f^1,H_i),\leqslant k_i}(z),\quad \forall z \not\in \mathcal{S}.
\end{align*}
\end{lemma}
\begin{lemma}\label{lem1}\cite[Lemma 2.7]{TN}
Let $f$ be a linearly nondegenerate meromorphic mapping from $B(R_0)$ into $\mathbb{P}^n(\mathbb{C})$ and let $H_1, H_2,\ldots,H_q$ be $q$ hyperplanes of $\mathbb{P}^n(\mathbb{C})$ in $N$-subgeneral position.
Set $l_0=|\alpha_0|+\cdots+|\alpha_n|$ and take $t,p$ with $0 < tl_0 < p < 1.$ Let $\omega(j)$ be the Nochka weights with respect to $H_j$, $1\leqslant j\leqslant q$, and let $k_j\ (j=1,\ldots, q)$ be positive integers not less than $n$. For each $j$, we put $\hat{\omega}(j):=\omega{(j)}\big(1-\frac{n}{k_j+1}\big).$
Then, for $0 < r_0 < R_0$ there exists a positive constant $K$ such that for $r_0 < r < R < R_0,$
$$
\int\limits_{S(r)}\left|z^{\alpha_0+\cdots+\alpha_n}\frac{W_{\alpha_0\ldots\alpha_n}(f)}{(f,H_1)^{\hat{\omega}(1)}\cdots (f,H_q)^{\hat{\omega}(q)}} \right|^{t}\bigl(\Vert f\Vert ^{\sum_{j=1}^q\hat{\omega}(j)-n-1}\bigr)^{t} \sigma_m \leqslant K\Bigl(\frac{R^{2m-1}}{R-r} T_f(R,r_0) \Bigr)^p.
$$
\end{lemma}
In fact, Lemma \ref{lem1} is another version of Lemma 8 in \cite{NT}, in which $\omega{(j)}$ is replaced by $\hat{\omega}(j)$.
\begin{lemma}\label{lem22}
Let $M$, $f$ and $H_1, H_2,\ldots,H_q$ be as in Theorem \ref{theo1}. Let $P$ be a holomorphic function on $M$ and let $\beta$ be a positive real number such that $P^{\beta}\in B(\alpha,l_0; f^1, f^2, f^3)$ and
\begin{align*}
\sum_{u=1}^3\sum_{i=1}^q\nu^{[n]}_{H_i(f^u),\leqslant k_i}\leqslant\beta\nu_{P},
\end{align*} where $f^1, f^2, f^3\in\mathcal F(f,\{H_j,k_j\}_{j=1}^q,1)$. Then $$q\leqslant 2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho\big( n(2N-n+1)+\frac23l_0\big)+{\alpha}.$$
\end{lemma}
\begin{proof}
Let $F_u=(f^u_0:\cdots:f^u_n)$ be a reduced representation of $f^u\ (1\leqslant u\leqslant 3)$. By routine arguments in Nevanlinna theory and using Proposition \ref{B0011} (i), we have
\begin{equation*}
\begin{aligned}
\sum\limits_{i=1}^q\omega_i\nu_{H_i(f^u)}(z)&-\nu_{W_{\alpha_{u,0}\cdots\alpha_{u,n}}(F_u)}(z)\\
&\leqslant \sum\limits_{i=1}^q\omega_i \min\{n,\nu_{H_i(f^u)}(z)\}\\
&= \sum\limits_{i=1}^q\omega_i \min\{n,\nu_{H_i(f^u),\leqslant k_i}(z)\} + \sum\limits_{i=1}^q\omega_i \min\{n,\nu_{H_i(f^u),> k_i}(z)\}\\
&\leqslant \sum\limits_{i=1}^q\frac1{\tilde\omega} \nu^{[n]}_{H_i(f^u),\leqslant k_i}(z) +\sum\limits_{i=1}^q\omega_i \dfrac{n}{k_i+1} \nu_{H_i(f^u)}(z).
\end{aligned}
\end{equation*}
Hence, it is easy to see from the assumption that
\begin{align}\label{th11}
\sum_{i=1}^q {\hat{\omega}_{i}}(\nu_{H_i(f^1)}+\nu_{H_i(f^2)}+\nu_{H_i(f^3)}) - (\nu_{W_{\alpha_1}(F_1)} + \nu_{W_{\alpha_2}(F_2)}+ \nu_{W_{\alpha_3}(F_3)}) \leqslant\frac{\beta}{\tilde{\omega}} \nu_P,
\end{align} where $\hat{\omega}_{i}:=\omega_i\big(1-\dfrac{n}{k_i+1}\big)$ for all $1\leqslant i\leqslant q$.
Since the universal covering of $M$ is biholomorphic to $B(R_0)$, $0<R_0\leqslant\infty$, by passing to the universal covering if necessary we may assume that $M = B(R_0)\subset \mathbb{C}^m$. We consider the following cases.
\noindent{\bf $\bullet$ First case:} $R_0 = \infty$ or $\limsup\limits_{r\to R_0}\dfrac{T_{f^1}(r,r_0)+ T_{f^2}(r,r_0) + T_{f^3}(r,r_0)}{\log(1/(R_0-r))}=\infty$.
Integrating both sides of inequality (\ref{th11}), we get
\begin{equation}\label{th12}
\begin{aligned}
\beta N_{P}(r)&\geqslant {\tilde\omega}\sum_{u=1}^3\Big(\sum_{i=1}^q {\omega_i}N_{H_i(f^u)}(r,r_0)-N_{W_\alpha(F_u)}(r,r_0)\Big)-\sum_{u=1}^3\sum_{i=1}^q\frac{\tilde\omega\omega_i n}{k_i+1}T_{f^u}(r,r_0)+O(1).
\end{aligned}
\end{equation}
Applying Lemma \ref{lem1} to $\omega_{i}\ (1\leqslant i\leqslant q),$ we have
$$
\int\limits_{S(r)}\left|z^{\alpha_0+\cdots+\alpha_{n}}\frac{W_{\alpha_0\ldots\alpha_{n}}(F_u)}{H_1^{{\omega}_1}(f^u)(z)\cdots H_q^{{\omega}_q}(f^u)(z)} \right|^{t_u}\left(\Vert f^u\Vert ^{\sum_{i=1}^q{\omega}_i-n-1}\right)^{t_u} \sigma_m \leqslant K\Bigl(\frac{R^{2m-1}}{R-r} T_{f^u}(R,r_0) \Bigr)^{p_u}.
$$
By the concavity of the logarithmic function, we obtain
\begin{align*}
\int\limits_{S(r)}\log|z^{\alpha_0+\cdots+\alpha_{n}}|\sigma_m&+(\sum_{i=1}^q{\omega}_i-n-1)\int\limits_{S(r)}\log||f^u||\sigma_m+\int\limits_{S(r)}\log|W_{\alpha_0\ldots\alpha_{n}}(F_u)|\sigma_m\\
&-\sum_{i=1}^q\omega_i\int\limits_{S(r)}\log|H_i(f^u)|\sigma_m\leqslant \frac{p_uK}{t_u}\big(\log^{+}\frac1{R_0-r}+\log^+T_{f^u}(r,r_0)\big).
\end{align*}
By the definition of the characteristic function and the counting function, we get the following estimate
\begin{align*}
||\ \Big(\sum_{i=1}^q{\omega}_i-n-1\Big)T_{f^u}(r,r_0)&\leqslant\sum_{i=1}^q\omega_iN_{H_i(f^u)}(r,r_0)-N_{W_{\alpha_0\ldots\alpha_{n}}(F_u)}(r,r_0)\\
&+K_1\big(\log^{+}\frac1{R_0-r}+\log^+T_{f^u}(r,r_0)\big).
\end{align*}
Using Proposition \ref{B0011} (ii), we get
\begin{equation*}
\begin{aligned}
||\ (q-2N+n-1)T_{f^u}(r,r_0)&\leqslant{\tilde\omega}\left(\sum_{i=1}^q{\omega_i}N_{H_i(f^u)}(r,r_0)-N_{W_{\alpha_0\ldots\alpha_{n}}(F_u)}(r,r_0)\right)\\&+{\tilde\omega}{K_1}\big(\log^{+}\frac1{R_0-r}+\log^+T_{f^u}(r,r_0)\big).
\end{aligned}
\end{equation*}
Combining these inequalities with (\ref{th12}) and noticing that $\tilde\omega\omega_i\leqslant1$, we get
\begin{equation}\label{th3}
\begin{aligned}
||\ \beta N_{P}(r)&\geqslant (q-2N+n-1)T(r,r_0)-\sum_{i=1}^q\frac{n}{k_i+1}T(r,r_0)+O(1),
\end{aligned}
\end{equation} where $T(r,r_0):=T_{f^1}(r,r_0)+T_{f^2}(r,r_0)+T_{f^3}(r,r_0).$
By the assumption $P^{\beta}\in B(\alpha,l_0; f^1, f^2, f^3)$, there exists $g\in S(l_0;f^1,f^2,f^3)$ satisfying $$ |P|^{\beta}\leqslant||f^1||^{\alpha}\cdot||f^2||^{\alpha}\cdot||f^3||^{\alpha}\cdot g$$
outside a proper analytic subset of $B(R_0).$ Hence, by Jensen's formula and the definition of the characteristic function, we have the following estimate
\begin{equation}\label{th4}
\begin{aligned}
||\ \beta N_{P}(r)=&\int_{S(r)}\log |P|^{\beta}\sigma_m + O(1)\\
\leqslant &\int_{S(r)}\Big({\alpha}\sum_{u=1}^3\log ||f^u||+\log g\Big)\sigma_m +O(1)\\
\leqslant&{\alpha}T(r,r_0)+o(T(r,r_0)).
\end{aligned}
\end{equation}Combining (\ref{th3}) with (\ref{th4}), we obtain
\begin{align*}
(q-2N+n-1)T(r,r_0)-\sum_{i=1}^q\frac{n}{k_i+1}T(r,r_0)\leqslant {\alpha}T(r,r_0)+o(T(r,r_0))
\end{align*}
for every $r$ outside a Borel set of finite measure.
Letting $r\rightarrow\infty$, we deduce that $$q-2N+n-1-\sum_{i=1}^q\frac{n}{k_i+1}\leqslant\rho\big( n(2N-n+1)+\frac23l_0\big)+{\alpha}$$ with $\rho=0$, which proves the lemma in this case.
\vskip0.2cm
\noindent
{\bf $\bullet$ Second case:} $R_0 < \infty$ and $\limsup\limits_{r\to R_0}\dfrac{T_{f^1}(r,r_0)+T_{f^2}(r,r_0) + T_{f^3}(r,r_0)}{\log(1/(R_0-r))} < \infty$.\\
It suffices to prove the lemma in the case where $B(R_0) = B(1)$.
Suppose that $$q>2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho\big( n(2N-n+1)+\frac23l_0\big)+{\alpha}.$$
Then, we have $$q>2N-n+1+\sum_{i=1}^q{\tilde\omega}{\omega_i}\frac{n}{k_i+1}+\rho \big( n(2N-n+1)+\frac23l_0\big)+\alpha.$$
It follows from Proposition \ref{B0011} (ii) and (iv) that
\begin{equation*}
\begin{aligned}
\sum_{i=1}^q{{\omega_i}}\big(1-\frac{n}{k_i+1}\big)-(n+1)-\dfrac{\alpha}{\tilde{\omega}}&>\rho\big(\frac{n(2N-n+1)}{\tilde\omega}+\frac23\frac{l_0}{\tilde\omega}\big)\\
&\geqslant\rho \big(n(n+1)+\frac23\frac{l_0}{\tilde\omega}\big).
\end{aligned}
\end{equation*}
Put $$t=\dfrac{\frac{2\rho}3}{\displaystyle\sum_{i=1}^q{\hat{\omega}_{i}}-(n+1)-\dfrac{\alpha}{\tilde{\omega}}}.$$
It implies that
\begin{equation}\label{th01}
\begin{aligned}
\big(\frac{3n(n+1)}2+\frac{l_0}{\tilde\omega}\big)t<1.
\end{aligned}
\end{equation}
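For the reader's convenience, we indicate how (\ref{th01}) follows from the definition of $t$ (when $\rho>0$; for $\rho=0$ we have $t=0$ and (\ref{th01}) is trivial): by the preceding estimate,
$$\Big(\frac{3n(n+1)}2+\frac{l_0}{\tilde\omega}\Big)t=\frac{\frac{2\rho}3\big(\frac{3n(n+1)}2+\frac{l_0}{\tilde\omega}\big)}{\sum_{i=1}^q\hat{\omega}_{i}-(n+1)-\frac{\alpha}{\tilde{\omega}}}<\frac{\frac{2\rho}3\big(\frac{3n(n+1)}2+\frac{l_0}{\tilde\omega}\big)}{\rho\big(n(n+1)+\frac23\frac{l_0}{\tilde\omega}\big)}=1.$$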
Put $\psi_u=z^{\alpha_{u,0}+\cdots+\alpha_{u,n}}\dfrac{W_{\alpha_{u,0}\cdots\alpha_{u,n}}(F_u)}{H_1^{\hat{\omega}_{1}}(f^u)\cdots H_q^{\hat{\omega}_{q}}(f^u)}\ \ (1\leqslant u\leqslant 3)$. It follows from (\ref{th11}) that $\psi_1^{t}\psi_2^{t}\psi_3^{t} P^{\frac{ t\beta}{\tilde{\omega}}}$ is holomorphic. Hence $a=\log|\psi_1^{t}\psi_2^{t}\psi_3^{t} P^{\frac{t\beta}{\tilde\omega }}|$ is plurisubharmonic on $B(1)$.
We now write the given K\"{a}hler metric form as
$${\omega}=\frac{\sqrt{-1}}{2\pi}\sum\limits_{i,j}h_{i\bar{j}}dz_i\wedge d\bar{z}_j.$$
From the assumption that $f^1$, $f^2$ and $f^3$ satisfy the condition $(C_\rho)$, there are continuous plurisubharmonic functions $a'_u$ on $B(1)$ such that
$$e^{a'_u}\det(h_{i\bar{j}})^{\frac{1}{2}}\leqslant \Vert f^u\Vert ^\rho,\quad u=1,2,3.$$
Put $a_u=\frac23a'_u$, $u=1,2,3$; then
$$e^{a_u}\det(h_{i\bar{j}})^{\frac{1}{3}}\leqslant \Vert f^u\Vert ^{\frac{2\rho}{3}}.$$ Therefore, by the definition of $t$, we get
\begin{align*}
e^{a+a_1+a_2+a_3}\det(h_{i\bar{j}})&\leqslant e^{a}\Vert f^1\Vert^{\frac{2\rho}{3}}\Vert f^2\Vert^{\frac{2\rho}{3}} \Vert f^3\Vert^{\frac{2\rho}{3}}\\
&= |\psi_1|^{t}|\psi_2|^{t}|\psi_3|^{t}|P|^{\frac{t\beta}{\tilde\omega}}\Vert f^1\Vert^{\frac{2\rho}{3}}\Vert f^2\Vert^{\frac{2\rho}{3}}\Vert f^3\Vert^{\frac{2\rho}{3}}\\
&\leqslant |\psi_1|^{t}|\psi_2|^{t}|\psi_3|^{t}\big(\Vert f^1\Vert \Vert f^2 \Vert \Vert f^3\Vert\big)^{\frac{t \alpha}{\tilde\omega}}\Vert f^1\Vert^{\frac{2\rho}{3}}\Vert f^2\Vert^{\frac{2\rho}{3}}\Vert f^3\Vert^{\frac{2\rho}{3}}\cdot|g|^{\frac{t}{\tilde\omega}}\\
&= |\psi_1|^{t}|\psi_2|^{t}|\psi_3|^{t}\big(\Vert f^1 \Vert \Vert f^2 \Vert \Vert f^3 \Vert\big)^{t(\frac{\alpha}{\tilde\omega}+\frac{2\rho}{3t})}\cdot|g|^{\frac{t}{\tilde\omega}}\\
&= |\psi_1|^{t}|\psi_2|^{t}|\psi_3|^{t}\big(\Vert f^1 \Vert \Vert f^2 \Vert \Vert f^3 \Vert\big)^{t(\sum_{i=1}^q\hat{\omega}_{i}-n-1)}\cdot|g|^{\frac{t}{\tilde\omega}}.
\end{align*}
Note that the volume form on $B(1)$ is given by
$$dV:=c_m\det(h_{i\bar{j}})v_m;$$
therefore,
$$\int\limits_{B(1)} e^{a+a_1+a_2+a_3}dV\leqslant C\int\limits_{B(1)}\prod_{u=1}^3\big( |\psi_u|\Vert f^u \Vert^{\sum_{i=1}^q\hat{\omega}_{i}-n-1}\big)^{t}\cdot|g|^{\frac{t}{\tilde\omega}}v_m,$$
with some positive constant $C.$
Setting $x=\dfrac{l_0/\tilde\omega}{3n(n+1)/2+l_0/\tilde\omega},\ y=\dfrac{n(n+1)/2}{3n(n+1)/2+l_0/\tilde\omega}$, then $x+3y=1$. Thus, by the H\"{o}lder inequality and by noticing that
$$v_m=(dd^c\Vert z\Vert^2)^m=2m\Vert z\Vert^{2m-1}\sigma_m\wedge d\Vert z\Vert,$$
we obtain
\begin{align*}
\int\limits_{B(1)} e^{a+a_1+a_2+a_3}dV&\leqslant C\prod_{u=1}^3\left(\int\limits_{B(1)}\big( |\psi_u|\Vert f^u\Vert ^{\sum_{i=1}^q\hat{\omega}_{i}-n-1}\big)^{\frac{t}y} v_m \right)^{y}\left(\int\limits_{B(1)} |z^{\beta }g|^{\frac{t}{x\tilde\omega}}v_m \right)^{x}\\
&\leqslant C\prod_{u=1}^3\biggl(2m\int\limits_0\limits^1 r^{2m-1}\biggl(\int\limits_{S(r)} \big(|\psi_u|\Vert f^u\Vert ^{\sum_{i=1}^q\hat{\omega}_{i}-n-1}\big)^{\frac{t}y} \sigma_m\biggr)dr\biggr)^{y}\\
&\times\biggl(2m\int\limits_0\limits^1 r^{2m-1}\biggl(\int\limits_{S(r)} |z^{\beta}g|^{\frac{t}{x\tilde\omega}}\sigma_m\biggr)dr\biggr)^{x}.
\end{align*}
We see from (\ref{th01}) that $\dfrac{l_0t}{\tilde\omega x}=\big(\dfrac{3n(n+1)}2+\dfrac{l_0}{\tilde\omega}\big)t<1$ and
$$\sum\limits_{s=0}^{n}|\alpha_{u,s}|\dfrac{t}y\leqslant\dfrac{n(n+1)}2\dfrac{t}y=\big(\dfrac{3n(n+1)}2+\dfrac{l_0}{\tilde\omega}\big)t<1.$$
Then, we can choose a positive number $p$ such that $\dfrac{l_0t}{\tilde\omega x}<p<1$ and $\sum\limits_{s=0}^{n}|\alpha_{u,s}|\dfrac{t}y<p<1.$
Applying Lemma \ref{lem1} to $\hat{\omega}_{i}$, and using the property of $g$, we get
$$\int\limits_{S(r)}\big(|\psi_u|\Vert f^u\Vert ^{\sum_{i=1}^q\hat{\omega}_{i}-n-1}\big)^{\frac{t}y} \sigma_m\leqslant K_1\left(\frac{R^{2m-1}}{R-r} T_{f^u}(R,r_0) \right)^p$$
and
$$\int\limits_{S(r)} |z^{\beta}g|^{\frac{t}{\tilde\omega x}}\sigma_m\leqslant K\left(\frac{R^{2m-1}}{R-r} \sum_{u=1}^3T_{f^u}(R,r_0) \right)^{p}$$
outside a subset $E\subset [0,1]$ such that $\displaystyle\int\limits_{E}\dfrac{1}{1-r}dr< +\infty.$
Choosing $R=r+\dfrac{1-r}{eT_{f^u}(r,r_0)},$ we have
$$T_{f^u}(R,r_0)\leqslant 2T_{f^u}(r,r_0).$$
Hence, the above inequality implies that
$$\int\limits_{S(r)}\big(|\psi_u|\Vert f^u\Vert ^{\sum_{i=1}^q\hat{\omega}_{i}-n-1}\big)^{\frac{t}y}\sigma_m\leqslant\frac{K_2}{(1-r)^p}(T_{f^u}(r,r_0))^{2p}
\leqslant \frac{K_2}{(1-r)^p}\Big(\log\frac{1}{1-r}\Big)^{2p},$$
since $\limsup\limits_{r\to R_0}\dfrac{T_{f^1}(r,r_0)+T_{f^2}(r,r_0)+T_{f^3}(r,r_0)}{\log (1/(R_0-r))}<\infty.$
It implies that
$$\int\limits_0\limits^1 r^{2m-1}\left(\int\limits_{S(r)} \big(|\psi_u|\Vert f^u\Vert ^{\sum_{i=1}^q\hat{\omega}_{i}-n-1}\big)^{\frac{t}{y}} \sigma_m\right)dr\leqslant \int\limits_0\limits^1 r^{2m-1}\frac{K_2}{(1-r)^p}\left(\log\frac{1}{1-r}\right)^{2p} dr <\infty.$$
Similarly,
$$\int\limits_0\limits^1 r^{2m-1}\left(\int\limits_{S(r)} |z^{\beta}g|^{\frac{t}{\tilde\omega x}}\sigma_m\right)dr\leqslant \int\limits_0\limits^1 r^{2m-1}\frac{K_2}{(1-r)^p}\left(\log\frac{1}{1-r}\right)^{2p} dr <\infty.$$
Hence, we conclude that $\int\limits_{B(1)} e^{a+a_1+a_2+a_3}dV<\infty,$
which contradicts the results of Yau \cite{Y} and Karp \cite{K}. The proof of Lemma \ref{lem22} is complete.
\end{proof}
\section{Proof of Theorem \ref{theo1}}
\begin{lemma}[see \cite{TN}, Lemma 3.1]\label{lem23}
If $q>2N+1+\sum_{v=1}^{q}\frac{n}{k_v+1}+\rho n(2N-n+1)$, then every $g\in\mathcal F(f,\{H_i,k_i\}_{i=1}^q,1)$ is linearly nondegenerate.
\end{lemma}
\begin{lemma}[see \cite{NT}, Lemma 12]\label{lem4.2}
Let $q, N$ be two integers satisfying $q\geq 2N+2$ and $N \geq 2$, with $q$ even. Let $\{a_1, a_2,\ldots,a_q\}$ be a family of vectors in a $3$-dimensional vector space such that $\text{rank}\{a_j\}_{j\in R}=2$ for any subset ${R}\subset Q= \{1,\ldots,q\}$ with cardinality $|R|=N+1$. Then there exists a partition $\bigcup_{j=1}^{q/2}I_j$ of $\{1,\ldots,q\}$ satisfying $|I_j|=2$ and $\text{rank}\{a_i\}_{i\in I_j}=2$ for all $j=1,\ldots,q/2.$
\end{lemma}
We need the following result, which slightly improves \cite[Theorem 1.3]{TN}.
\begin{lemma}\label{theo3} Let $k$ be the largest integer not exceeding $\dfrac{q-2N-2}{2}$. If $n\geqslant2$, then $f^1\wedge f^2\wedge f^3\equiv0$ for every $f^1, f^2, f^3\in\mathcal F(f,\{H_i,k_i\}_{i=1}^q,1)$ provided
$$q>2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho n(2N-n+1)+\frac{3nq}{2\big(q+(n-1)\frac{l+1}{l}\big)},$$
where $l$ is the smallest integer not less than $\dfrac{2N+2+2k}{k+2}$ if $k>0$, or $l=2N+1$ if $k=0.$
\end{lemma}
\begin{proof}
We consider $\mathcal M^{3}$ as a vector space over the field $\mathcal M$ and denote $Q=\{1,\ldots,q\}$. For each $i\in Q$, we set
$$ V_i=\left ((f^1,H_i),(f^2,H_i),(f^3,H_i)\right )\in \mathcal M^{3}.$$
By Lemma \ref{lem23}, $f^1, f^2, f^3$ are linearly nondegenerate. Suppose that $f^1\wedge f^2\wedge f^3\not\equiv 0$. Since the family of hyperplanes $\{H_1,H_2,\ldots,H_q\}$ is in $N$-subgeneral position, for each subset $R\subset Q$ with cardinality $|R|=N+1$, there exist three indices $l, t, s\in R$ such that the vectors $V_l, V_t$ and $V_s$ are linearly independent. This means that
$$
P_I:=\det\left (
\begin{array}{ccc}
(f^1,H_l)&(f^1,H_t)&(f^1,H_s)\\
(f^2,H_l)&(f^2,H_t)&(f^2,H_s)\\
(f^3,H_l)&(f^3,H_t)&(f^3,H_s)
\end{array}\right )\not\equiv 0,
$$ where $I:=\{l, t, s\}.$ We distinguish the following cases.
\noindent{$\bullet$ \bf Case 1: $q$ is even.}
By the assumption, we have $q=2N+2+2k$\ $(k\geq0)$. Applying Lemma \ref{lem4.2}, we can find a partition $\{J_1,\ldots, J_{q/2}\}$ of $Q$ satisfying $|J_j|=2$ and $\text{rank}\{V_v\}_{v\in J_j}=2$ for all $j=1,2,\ldots,q/2.$ Take a fixed subset $S_j=\{j_1,\ldots,j_{k+2}\}\subset \{1,\ldots,q\}$. We claim that:
{\it There exists a partition $J^j_1,\ldots,J^j_{N+1+k}$ with $k+2$ indices ${r^j_1,\ldots,r^j_{k+2}}\in\{1,\ldots,N+1+k\}$ satisfying $\text{rank}\{V_v,V_{j_i}\}_{v\in J^j_{r^j_i}}=3$ for all $1\leqslant i\leqslant k+2$.}
Indeed, consider the $N$ sets $J_1,\ldots, J_{N}$ and the index $j_1$. Assume that $\text{rank}\{V_{j_1}, V_{t_2}, \ldots, V_{t_u}\}=1$, where $u$ is maximal. By the assumption, we have $1\leqslant u\leqslant N-1.$ It follows that there exist $N-u$ pairs, for instance $\{V_v\}_{v\in J_1},\ldots, \{V_v\}_{v\in J_{N-u}}$, which do not contain $V_{j_1}$ or $V_{t_i}$ with $2\leqslant i\leqslant u$. Obviously, $N-u\geq1$. Without loss of generality, we can assume that $V_{j_1}\in\{V_v\}_{v\in J_{N}}$.
If $u=N-1$ then obviously, $\text{rank}\{V_v, V_{j_1}\}_{v\in J_{1}}=3$ since $\sharp(\{V_{j_1}, V_{t_2}, \ldots, V_{t_{N-1}}\}\cup\{V_v\}_{v\in J_1})=N+1.$
If $u\leqslant N-2$, there are at least two pairs of vectors which do not contain $V_{j_1}$ or $V_{t_i}$ with $2\leqslant i\leqslant u$. If $V_{j_1}\in\operatorname{span}\{V_v\}_{v\in J_{r_1}}$ for some $r_1\in\{1,\ldots,N-u\}$, then there exists at least one pair, for instance $\{V_v\}_{v\in J_{j_0}}$
with $j_0\in\{1,\ldots,N-u\}$ such that $\text{rank}\{V_v\}_{v\in (J_{r_1}\cup J_{j_0})}=3$. Indeed, otherwise
$\text{rank}\{V_v\}_{v\in (\cup_{i=1}^{N-u}J_i)\cup\{j_1, t_2,\ldots,t_u\}}=\text{rank}\{V_v\}_{v\in J_{r_1}}=2$. This is impossible since $\{V_v\}_{v\in (\cup_{i=1}^{N-u}J_i)\cup\{j_1,t_2,\ldots,t_u\}}$ has at least $N+2$ vectors.
From the sets $\{V_v\}_{v\in J_{r_1}}$ and $\{V_v\}_{v\in J_{j_0}}$, we can rebuild two linearly independent pairs $\{V_{i_1},V_{i_2}\}$ and $\{V_{i_3},V_{i_4}\}$ such that $\text{rank}\{V_{i_1},V_{i_2},V_{j_1}\}=3$, where $\{i_1,i_2,i_3,i_4\}=J_{r_1}\cup J_{j_0}.$ We relabel $J_{r_1}=\{i_1, i_2\}$ and $J_{j_0}=\{i_3, i_4\}$.
Therefore, we obtain a partition still denoted by $J_1,\ldots,J_{N+1+k}$ such that there exists an index ${r^j_1}\in\{1,\ldots,N\}$ satisfying $\text{rank}\{V_v,V_{j_1}\}_{v\in J_{r^j_1}}=3$.
Next, we consider $N$ sets $J_1,\ldots,J_{r^j_1-1},J_{r^j_1+1},\ldots, J_{N+1}$ and $j_2$. Repeating the above argument, we get a partition still denoted by $J_1,\ldots,J_{q/2}$ such that there exists an index ${r^j_2}\in\{1,\ldots,{r^j_1-1},{r^j_1+1},\ldots, N+1\}$ satisfying $\text{rank}\{V_v,V_{j_2}\}_{v\in J_{r^j_2}}=3$. Of course, this partition still satisfies $\text{rank}\{V_v,V_{j_1}\}_{v\in J_{r^j_1}}=3$.
Continuing this process, after $k+2$ steps we obtain a new partition, denoted by $J^j_1,\ldots,J^j_{N+1+k}$, such that there exist $k+2$ indices ${r^j_1,\ldots,r^j_{k+2}}\in\{1,\ldots,N+1+k\}$ satisfying $\text{rank}\{V_v,V_{j_i}\}_{v\in J^j_{r^j_i}}=3$ for all $1\leqslant i\leqslant k+2$. The claim is proved.
Put $I^j_{r^j_i}=J^j_{r^j_i}\cup\{j_i\}$; then $P_{{I^j_{r^j_i}}}\not\equiv 0$ for all $1\leqslant i\leqslant k+2$.
For each remaining index $i\in\{1,\ldots,N+1+k\}\setminus\{r^j_1,\ldots,r^j_{k+2}\}$, we choose a vector $V_{s_i}$ such that $\text{rank}\{V_v\}_{v\in J^j_i\cup\{s_i\}}=3.$ Put $I^j_i=J^j_i\cup\{s_i\}$; then $P_{{I^j_i}}\not\equiv 0$ for all $i.$
\noindent$\bullet$ If $k=0$ then $l=2N+1$ and $q=2N+2$. Put $S_1=\{1\}, S_2=\{2\},\ldots, S_{l-1}=\{2N\}, S_l=\{2N+1,2N+2\}.$
\noindent$\bullet$ If $k>0$ then $q=(k+2)(l-1)+t$ with $0<t\leqslant k+2.$ Put $S_1=\{1,\ldots,k+2\},S_2=\{(k+2)+1,\ldots,2(k+2)\},\ldots,S_{l-1}=\{(k+2)(l-2)+1,\ldots,(k+2)(l-1)\}, S_l=\{(k+2)(l-1)+1,\ldots,2N+2+2k\}.$
Applying the claim to each set $S_j$ $(1\leqslant j\leqslant l)$, we get a partition $J^j_1,\ldots,J^j_{N+1+k}$ with $s_j=\sharp S_j$ indices ${r^j_1,\ldots,r^j_{s_j}}\in\{1,\ldots,N+1+k\}$ satisfying $\text{rank}\{V_v,V_{u}\}_{v\in J^j_{r^j_i}, u\in S_j}=3$ for all $1\leqslant i\leqslant s_j$.
We put $$P_Q=\prod_{j=1}^{l}\prod_{i=1}^{N+1+k}P_{I^j_i},$$ where $I^j_i$ is defined as above.
Since $(\min\{a,b,c\}-1)\geq\min\{a,n\}+\min\{b,n\}+\min\{c,n\}-2n-1$ for any positive integers $a,b,c$, we have
\begin{align*}
\min_{1\leqslant u\leqslant 3}\{\nu_{(f^u,H_v),\leqslant k_v}(z)\}-\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z)&\geq\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z)-(2n+1)\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z),
\end{align*}
for all $z\in\mathrm{Supp}\,\nu_{(f^k,H_v),\leqslant k_v}$.
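For completeness, we verify the elementary inequality invoked above: assuming without loss of generality that $a=\min\{a,b,c\}$, we have $\min\{a,n\}\leqslant a$ and $\min\{b,n\},\min\{c,n\}\leqslant n$, whence
$$\min\{a,n\}+\min\{b,n\}+\min\{c,n\}\leqslant a+2n=\min\{a,b,c\}+2n,$$
which is exactly $\min\{a,b,c\}-1\geqslant\min\{a,n\}+\min\{b,n\}+\min\{c,n\}-2n-1$.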
Putting
$\nu_v(z)=\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z)-(2n+1)\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z)\ (1\leqslant k\leqslant 3,\ v\in Q),$
from Lemma \ref{2.4}, we have
\begin{align*}
\nu_{P_{{I}^j_i}}(z)\geq\sum_{v\in{I}^j_i}\nu_v(z)+ 2\sum_{v=1}^q\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z)
\end{align*} and
\begin{align*}
\nu_{P_{{I}^j_i}}(z)\geq\sum_{v\in{J}^j_i}\nu_v(z)+ 2\sum_{v=1}^q\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z).
\end{align*}
Note that for $k=0$ we have $l(q-2N-1)-(2N+1)=0$, while for $k>0$ we have $2N+1\leqslant\frac{q}{k+2}(2k+1)\leqslant l(2k+1)=l(q-2N-1)$. Therefore, we always have $l(q-2N-1)-(2N+1)\geqslant0.$ It implies that $l(q-2n-1)-(2n+1)\geqslant0$ since $N\geq n.$ Then, for all $z \not\in \mathcal{S}$, we obtain
\begin{align*}
\nu_{P_Q}(z)&\geq l\sum_{v=1}^q\nu_v(z)+\sum_{v=1}^q\nu_v(z)+lq\sum_{v=1}^q\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z)\\
&=(l+1)\sum_{v=1}^q\Big(\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z)-(2n+1)\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z)\Big)+lq\sum_{v=1}^q\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z)\\
&=(l+1)\sum_{v=1}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z)+\big(l(q-2n-1)-(2n+1)\big)\sum_{v=1}^q\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z)\\
&\geq\left(l+1+\frac{l(q-2n-1)-(2n+1)}{3n}\right)\sum_{v=1}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z)\\
&\geq\frac{l(q+n-1)+n-1}{3n}\sum_{v=1}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z).
\end{align*}
We put $P:=P_Q$. The above inequality implies that
\begin{align*}
\sum_{v=1}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z)\leqslant\frac{3n}{l(q+n-1)+n-1}\nu_P(z),\quad \forall z \not \in \mathcal{S}.
\end{align*}
Define $\beta:=\dfrac{3n}{l(q+n-1)+n-1}$ and $\gamma:=\dfrac{lq}2$.
\noindent{$\bullet$ \bf Case 2: $q$ is odd.}
By the assumption, we have $q-1=2N+2+2k.$ We consider any subset ${R}=\{j_1,\ldots,j_{q-1}\}$ of $\{1,\ldots,q\}$. By the same argument as in Case 1 for $R$, we get
\begin{align*}
\nu_{P_R}(z)&\geq(l+1)\sum_{v=1}^{q-1}\nu_{j_v}(z)+l(q-1)\sum_{v=1}^q\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z),\quad \forall z \not \in \mathcal{S}.
\end{align*}
We now define $P:=\prod_{|R|=q-1}P_{R},$ so we obtain
\begin{align*}
\nu_{P}(z)&=\sum_{|R|=q-1}\nu_{P_R}(z)\\
&\geq(q-1)(l+1)\sum_{v=1}^{q}\nu_{v}(z)+ql(q-1)\sum_{v=1}^q\nu^{[1]}_{(f^k,H_v),\leqslant k_v}(z)\\
&\geq(q-1)\frac{l(q+n-1)+n-1}{3n}\sum_{v=1}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z).
\end{align*}
Hence, we have
\begin{align*}
\sum_{v=1}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_v),\leqslant k_v}(z)\leqslant\frac{3n}{(l(q+n-1)+n-1)(q-1)}\nu_P(z),\quad \forall z \not \in \mathcal{S}.
\end{align*}
Define $\beta:=\dfrac{3n}{\big(l(q+n-1)+n-1\big)(q-1)}$ and $\gamma:=\dfrac{(q-1)lq}2.$
Then, from all the above cases, we always get
$$\alpha:=\beta\gamma=\dfrac{3nlq}{2(l(q+n-1)+n-1)}=\frac{3nq}{2\big(q+(n-1)\frac{l+1}{l}\big)},$$
and
\begin{align*}
\sum_{u=1}^3\sum_{v=1}^q\nu_{(f^u,H_v),\leqslant k_v}^{[n]}(z)\leqslant\beta\nu_{P}(z),\quad \forall z \not \in \mathcal{S}.
\end{align*}
It is easy to see that $|P|^{\beta}\leqslant C(\Vert f^1\Vert\Vert f^2\Vert\Vert f^3\Vert)^{\beta\gamma}=C(\Vert f^1\Vert\Vert f^2\Vert\Vert f^3\Vert)^{\alpha}$, where $C$ is some positive constant. This means that $P^{\beta}\in B(\alpha,0; f^1, f^2, f^3)$. Applying Lemma \ref{lem22}, we obtain
\begin{align*} q &\leqslant 2N-n+1+\sum_{j=1}^q\frac{n}{k_j+1}+\rho n(2N-n+1)+\alpha\\
&=2N-n+1+\sum_{j=1}^q\frac{n}{k_j+1}+\rho n(2N-n+1)+\frac{3nq}{2\big(q+(n-1)\frac{l+1}{l}\big)},
\end{align*}
which contradicts the assumption. Therefore, $f^1\wedge f^2\wedge f^3 \equiv 0$ on $M$. The proof of Lemma \ref{theo3} is complete.
\end{proof}
Based on the proofs of Quang \cite[Lemmas 3.3, 3.4, 3.5, 3.6]{Q2} or \cite[Lemmas 4.4, 4.5, 4.6, 4.8]{Q3}, we obtain the following lemmas, which are necessary for the proof of our theorem.
First, for three mappings $f^1, f^2, f^3\in\mathcal F(f,\{H_i,k_i\}_{i=1}^{q},1)$, we define
$\bullet\ F^{ij}_k=\frac{(f^k,H_i)}{(f^k,H_j)}, \ \ 1\leqslant k\leqslant 3,\ 1\leqslant i,j\leqslant q,$
$\bullet\ V_i=((f^1,H_i), (f^2,H_i), (f^3,H_i))\in\mathcal M^3,$
$\bullet\ \nu_i:$ the divisor whose support is the closure of the set $\{z:\nu_{(f^u,H_i),\leqslant k_i}(z)\geqslant\nu_{(f^v,H_i),\leqslant k_i}(z)=\nu_{(f^t,H_i),\leqslant k_i}(z) \text{ for a permutation } (u,v,t) \text{ of } (1,2,3)\}.$
We write $V_i\cong V_j$ if $V_i \wedge V_j\equiv 0$, otherwise we write $V_i\not\cong V_j$. For $V_i \not\cong V_j$, we write $V_i\sim V_j$
if there exist $1 \leqslant u < v\leqslant 3$ such that $F_u^{ij} = F_v^{ij}$; otherwise we write $V_i\not\sim V_j.$
\begin{lemma}\label{3.3}\cite[Lemma 3.3]{Q2} or \cite[Lemma 4.4]{Q3}
With the assumption of Theorem \ref{theo1}, let $h$ and $g$ be two elements of the family
$\mathcal F(f, \{H_i, k_i\}_{i=1}^{q}, 1)$. If there exist a constant $\lambda$ and two indices $i, j$ such that
$\frac{(h, H_i)}{(h, H_j)} = \lambda\frac{(g, H_i)}{(g, H_j)},$
then $\lambda = 1.$
\end{lemma}
\begin{lemma}\label{3.4} \cite[Lemma 3.4]{Q2} or \cite[Lemma 4.5]{Q3}
Let $f^1, f^2, f^3$ be three elements of $\mathcal F(f, \{H_i, k_i\}_{i=1}^{q}, 1)$. Suppose that $f^1\wedge f^2 \wedge f^3 \equiv 0$ and $V_i\sim V_j$ for some
distinct indices $i$ and $j$. Then $f^1, f^2, f^3$ are not distinct.
\end{lemma}
\begin{lemma}\label{3.5}\cite[Lemma 3.5]{Q2} or \cite[Lemma 4.6]{Q3}
With the assumption of Theorem \ref{theo1}, let $f^1, f^2, f^3$ be three maps in $\mathcal F(f, \{H_i, k_i\}_{i=1}^{q}, 1)$. Suppose that $f^1, f^2, f^3$ are distinct and there are two indices $i, j\in \{1, 2,\ldots, q\} \ (i \not= j)$ such that $V_i\not\cong V_j$ and
$${\mathbf{P}}hi^{\alpha}_{ij} := {\mathbf{P}}hi^{\alpha}(F_1^{ij}, F_2^{ij}, F_3^{ij}) \equiv 0$$ for every $\alpha = (\alpha_1,\ldots , \alpha_m)\in \mathbb Z^m_+$ with $|\alpha| = 1.$ Then for every $t\in\{1,\ldots, q\} \setminus \{i\}$, the following assertions hold:
(i) ${\mathbf{P}}hi^{\alpha}_{it} \equiv 0$ for all $|\alpha|\leqslantqslant1,$
(ii) if $V_i\not\cong V_t$, then $F^{ti}_1,F^{ti}_2,F^{ti}_3$ are distinct and there exists a meromorphic function $h_{it}\in B(0,1; f^1, f^2, f^3)$ such that
\begin{equation*}
\begin{aligned}
\nu_{h_{it}}\geq-\nu^{[1]}_{(f,H_i),\leqslant k_i}-\nu^{[1]}_{(f,H_t),\leqslant k_t}+\sum_{j\not=i,t}\nu^{[1]}_{(f,H_j),\leqslant k_j}.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{lemma}\label{3.6}\cite[Lemma 3.6]{Q2} or \cite[Lemma 4.8]{Q3}
Under the assumptions of Theorem \ref{theo1}, let $f^1, f^2, f^3$ be three maps in $\mathcal F(f, \{H_i, k_i\}_{i=1}^{q}, 1)$. Assume that there exist $i, j\in \{1, 2,\ldots, q\} \ (i \not= j)$ and $\alpha \in \mathbb Z^m_+$ with $|\alpha| = 1$ such that $\Phi^{\alpha}_{ij} \not\equiv 0$. Then there exists a holomorphic function $g_{ij}\in B(1,1;f^1,f^2,f^3)$ such that
\begin{equation*}
\begin{aligned}
\nu_{g_{ij}}&\geq\sum_{u=1}^3\nu^{[n]}_{(f^u,H_i),\leqslant k_i}+\sum_{u=1}^3\nu^{[n]}_{(f^u,H_j),\leqslant k_j}+2\sum_{t=1,t\not=i,j}\nu^{[1]}_{(f,H_t),\leqslant k_t}-(2n+1)\nu^{[1]}_{(f,H_i),\leqslant k_i}\\
&-(n+1)\nu^{[1]}_{(f,H_j),\leqslant k_j}+\nu_j.
\end{aligned}
\end{equation*}
\end{lemma}
{\it We now prove Theorem \ref{theo1}.}
\noindent
Suppose that there exist three distinct meromorphic mappings $f^1, f^2, f^3$ belonging to $\mathcal F(f, \{H_i, k_i\}_{i=1}^{q}, 1)$. By Lemma \ref{theo3}, we get $f^1\wedge f^2\wedge f^3 \equiv 0.$ We may assume that
$$ \underset{\text{group } 1}{\underbrace{V_1\cong\cdots\cong V_{l_1}}}\not\cong\underset{\text{group } 2}{\underbrace{V_{l_1+1}\cong\cdots\cong V_{l_2}}}\not\cong\underset{\text{group } 3}{\underbrace{V_{l_2+1}\cong\cdots\cong V_{l_3}}}\not\cong \cdots\not\cong\underset{\text{group } s}{\underbrace{V_{l_{s-1}+1}\cong\cdots\cong V_{l_s}}},$$ where $l_s=q.$
Denote by $P$ the set of all $i\in \{1,\ldots, q\}$ for which there exists $j\in \{1,\ldots, q\} \setminus \{i\}$ such that $V_i\not\cong V_j$ and $\Phi^{\alpha}_{ij}\equiv 0$ for all $\alpha\in\mathbb Z^m_+$ with $|\alpha| \leqslant 1.$ We distinguish three cases.
\noindent$\bullet$ {\bf Case 1:} $\sharp P \geq 2.$ Then $P$ contains two distinct elements $i, j.$ We get $\Phi^{\alpha}_{ij}=\Phi^{\alpha}_{ji}=0$ for all
$\alpha\in\mathbb Z^m_+$ with $|\alpha|\leqslant 1.$ By Lemma \ref{lem2.1}, there exist two functions, for instance $F_1^{ij}$ and
$F_2^{ij}$, and a constant $\lambda$ such that $F_1^{ij}=\lambda F_2^{ij}.$ Applying Lemma \ref{3.3}, we have $F_1^{ij}= F_2^{ij}$. Hence,
by Lemma \ref{3.5}\,(ii), we see that $V_i\cong V_j$, i.e., $V_i$ and $V_j$ belong to the same group in the partition. We may assume that $i = 1$ and $j = 2.$ Since $f^1, f^2, f^3$ are assumed to be distinct, the number of elements in each group of the partition is less than $N + 1.$
Thus, we get $V_1\cong V_2\not\cong V_t$ for all $t\in \{N+1,\ldots, q\}.$ By Lemma \ref{3.5}\,(ii), we obtain
\begin{equation*}\begin{aligned}
\nu_{h_{1t}}\geq-\nu^{[1]}_{(f,H_1),\leqslant k_1}-\nu^{[1]}_{(f,H_t),\leqslant k_t}+\sum_{s\not=1,t}\nu^{[1]}_{(f,H_s),\leqslant k_s},
\end{aligned}\end{equation*}
and
\begin{equation*}\begin{aligned}
\nu_{h_{2t}}\geq-\nu^{[1]}_{(f,H_2),\leqslant k_2}-\nu^{[1]}_{(f,H_t),\leqslant k_t}+\sum_{s\not=2,t}\nu^{[1]}_{(f,H_s),\leqslant k_s}.
\end{aligned}\end{equation*}
By summing up both sides of the above two inequalities, we have
\begin{equation*}\begin{aligned}
\nu_{h_{1t}}+\nu_{h_{2t}}\geq-2\nu^{[1]}_{(f,H_t),\leqslant k_t}+\sum_{s\not=1,2,t}\nu^{[1]}_{(f,H_s),\leqslant k_s}.
\end{aligned}\end{equation*}
Summing up both sides of the above inequalities over all $t\in\{N+ 1,\ldots, q\},$ we obtain
\begin{equation*}\begin{aligned}
\sum_{t=N+1}^q(\nu_{h_{1t}}+\nu_{h_{2t}})&\geq(q-N)\sum_{t=3}^N\nu^{[1]}_{(f,H_t),\leqslant k_t}+(q-N-3)\sum_{t=N+1}^q\nu^{[1]}_{(f,H_t),\leqslant k_t}\\
&\geq (q-N-3)\sum_{t=3}^q\nu^{[1]}_{(f,H_t),\leqslant k_t}\geq\frac{q-N-3}{3n}\sum_{u=1}^3\sum_{t=3}^q\nu^{[n]}_{(f^u,H_t),\leqslant k_t}.
\end{aligned}\end{equation*}
Hence, we get
\begin{equation*}\begin{aligned}
\sum_{u=1}^3\sum_{t=3}^q\nu^{[n]}_{(f^u,H_t),\leqslant k_t}\leqslant\frac{3n}{q-N-3}\nu_{\prod_{t=N+1}^q(h_{1t}h_{2t})}.
\end{aligned}\end{equation*}
Since $({\prod_{t=N+1}^q}(h_{1t}h_{2t}))^{\frac{3n}{q-N-3}}\in B(0,2(q-N)\frac{3n}{q-N-3};f^1,f^2,f^3)$, applying Lemma \ref{lem22}, we obtain
$$q-2\leqslant 2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho\big( n(2N-n+1)+4(q-N)\frac{n}{q-N-3}\big).$$
From the definition of $l$ and the condition on $q$, it is easy to see that $l\geq3$, and that
$$2\leqslant\frac{3nq}{2\big(q+n-1+\frac{n-1}3\big)}\leqslant\frac{3nq}{2\big(q+n-1+\frac{n-1}l\big)},$$ and $$ 4(q-N)\frac{n}{q-N-3}\leqslant \frac{4(q-n)n}{n-1}.$$
These inequalities imply that
$$q\leqslant 2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho\big( n(2N-n+1)+\frac{4(q-n)n}{n-1}\big)+\frac{3nq}{2\big(q+n-1+\frac{n-1}l\big)},$$
which is a contradiction.
\noindent$\bullet$ {\bf Case 2:} $\sharp P=1$. We may assume that $P = \{1\}.$ It is easy to see that $V_1\not\cong V_i$ for all $i = 2,\ldots,q$. By Lemma \ref{3.5}\,(ii), we obtain
\begin{equation*}
\begin{aligned}
\nu_{h_{1i}}\geq-\nu^{[1]}_{(f,H_1),\leqslant k_1}-\nu^{[1]}_{(f,H_i),\leqslant k_i}+\sum_{s\not=1,i}\nu^{[1]}_{(f,H_s),\leqslant k_s}.
\end{aligned}
\end{equation*}
Summing up both sides of the above inequalities over all $i = 2,\ldots,q,$ we have
\begin{equation}\label{thoan1}
\begin{aligned}
\sum_{i=2}^q\nu_{h_{1i}}\geq(q-3)\sum_{i=2}^q\nu^{[1]}_{(f,H_i),\leqslant k_i}-(q-1)\nu^{[1]}_{(f,H_1),\leqslant k_1}.
\end{aligned}
\end{equation}
Obviously, $i\not\in P$ for all $i = 2,\ldots,q.$ Now put
$$\sigma(i)=\begin{cases}
i+N,& \text{ if } i+N\leqslant q \\
i+N-q+1,& \text{ if }i+N>q,
\end{cases}$$ then $i$ and $\sigma(i)$ belong to distinct groups, i.e., $V_i\not\cong V_{\sigma(i)}$ for all $i = 2,\ldots,q$, and hence $\Phi^{\alpha}_{i\sigma(i)}\not\equiv0$ for some $\alpha\in\mathbb Z^m_+$ with $|\alpha|\leqslant1.$ By Lemma \ref{3.6}, we get
\begin{equation*}
\begin{aligned}
\nu_{g_{i\sigma(i)}}&\geq\sum_{u=1}^3\sum_{t=i,\sigma(i)}\nu^{[n]}_{(f^u,H_t),\leqslant k_t}-(2n+1)\nu^{[1]}_{(f,H_i),\leqslant k_i}-(n+1)\nu^{[1]}_{(f,H_{\sigma(i)}),\leqslant k_{\sigma(i)}}\\
&+2\sum_{t=1,t\not=i,\sigma(i)}\nu^{[1]}_{(f,H_t),\leqslant k_t}.
\end{aligned}
\end{equation*}
Summing up both sides of this inequality over all $i\in\{2, \ldots, q\}$ and using (\ref{thoan1}), we obtain
\begin{equation*}
\begin{aligned}
\sum_{i=2}^q\nu_{g_{i\sigma(i)}}&\geq2\sum_{i=2}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_i),\leqslant k_i}+(2q-3n-8)\sum_{i=2}^q\nu^{[1]}_{(f,H_i),\leqslant k_i}+2(q-1)\nu^{[1]}_{(f,H_1),\leqslant k_1}\\
&\geq2\sum_{i=2}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_i),\leqslant k_i}+\frac{4q-3n-14}3\sum_{u=1}^3\sum_{i=2}^q\nu^{[1]}_{(f^u,H_i),\leqslant k_i}-2\sum_{i=2}^q\nu_{h_{1i}}\\
&\geq\frac{4q+3n-14}{3n}\sum_{i=2}^q\sum_{u=1}^3\nu^{[n]}_{(f^u,H_i),\leqslant k_i}-2\sum_{i=2}^q\nu_{h_{1i}}.
\end{aligned}
\end{equation*}
It implies that $$\sum_{u=1}^3\sum_{i=2}^q\nu^{[n]}_{(f^u,H_i),\leqslant k_i}\leqslant\frac{3n}{4q+3n-14}\nu_{\prod_{i=2}^q(g_{i\sigma(i)}h^2_{1i})}.$$
Obviously, $\prod_{i=2}^q(g_{i\sigma(i)}h^2_{1i})\in B(q-1,3(q-1);f^1,f^2,f^3)$. Applying Lemma \ref{lem22}, we obtain
$$q-1\leqslant 2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho\big( n(2N-n+1)+\frac{6n(q-1)}{4q+3n-14}\big)+\frac{3n(q-1)}{4q+3n-14}.$$
Since $q\geq2n+2$, a simple calculation gives $$\frac{6n(q-1)}{4q+3n-14}\leqslant \frac{6n(q-1)}{11n-6}<\frac{4(q-n)n}{n-1}.$$ It implies that
$$q\leqslant 2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho\big( n(2N-n+1)+\frac{4(q-n)n}{n-1}\big)+\frac{4q+3nq-14}{4q+3n-14},$$
which is a contradiction.
\noindent$\bullet$ {\bf Case 3:} $\sharp P=0$. By Lemma \ref{3.6}, for all $i\not=j$, we get
\begin{equation*}
\begin{aligned}
\nu_{g_{ij}}&\geq\sum_{u=1}^3\nu^{[n]}_{(f^u,H_i),\leqslant k_i}+\sum_{u=1}^3\nu^{[n]}_{(f^u,H_j),\leqslant k_j}+2\sum_{t=1,t\not=i,j}\nu^{[1]}_{(f,H_t),\leqslant k_t}-(2n+1)\nu^{[1]}_{(f,H_i),\leqslant k_i}\\
&-(n+1)\nu^{[1]}_{(f,H_j),\leqslant k_j}+\nu_j.
\end{aligned}
\end{equation*}
Put $$ \gamma(i)=\begin{cases}
i+N& \text{ if } i\leqslant q-N\\
i+N-q& \text{ if } i> q-N.
\end{cases} $$ By summing up both sides of the above inequality over all pairs $(i, \gamma(i)),$ we obtain
\begin{equation}\label{eq3.8}
\begin{aligned}
\sum_{i=1}^q\nu_{g_{i\gamma(i)}}\geq2\sum_{u=1}^3\sum_{i=1}^q\nu^{[n]}_{(f^u,H_i),\leqslant k_i}+(2q-3n-6)\sum_{t=1}^q\nu^{[1]}_{(f,H_t),\leqslant k_t}+\sum_{t=1}^q\nu_t.
\end{aligned}
\end{equation}
By Lemma \ref{3.4}, we can see that $V_j\not\sim V_l$ for all $j\not=l.$ Thus, we have
$$ P^{i\gamma(i)}_{st}:=(f^s,H_i)(f^t,H_{\gamma(i)})-(f^t,H_i)(f^s,H_{\gamma(i)})\not\equiv0,\ s\not= t,\ 1\leqslant i\leqslant q.$$
We claim that: {\it For $j\not=i$ and $j\not=\gamma(i)$, and for every $z\in f^{-1}(H_j)$, we have
$$\sum_{1\leqslant s<t\leqslant3}\nu_{P^{i\gamma(i)}_{st}}(z)\geq4\nu^{[1]}_{(f,H_j),\leqslant k_j}(z)-\nu_j(z).$$}
Indeed, for $z\in f^{-1}(H_j)\cap\mathrm{Supp}\, {\nu_j},$ we have
$$ 4\nu^{[1]}_{(f,H_j),\leqslant k_j}(z)-\nu_j(z)\leqslant4-1=3\leqslant\sum_{1\leqslant s<t\leqslant3}\nu_{P^{i\gamma(i)}_{st}}(z).$$
For $z\in f^{-1}(H_j)\setminus \mathrm{Supp}\, \nu_j$, we assume that $\nu_{(f^1,H_j),\leqslant k_j}(z)<\nu_{(f^2,H_j),\leqslant k_j}(z)\leqslant\nu_{(f^3,H_j),\leqslant k_j}(z).$ Since $f^1\wedge f^2\wedge f^3\equiv0,$ we have $\det(V_i,V_{\gamma(i)}, V_j)\equiv0,$ and hence
$$(f^1,H_j)P^{i\gamma(i)}_{23}=(f^2,H_j)P^{i\gamma(i)}_{13}-(f^3,H_j)P^{i\gamma(i)}_{12}.$$
It implies that $ \nu_{P^{i\gamma(i)}_{23}}(z)\geq2 $
and so
$$\sum_{1\leqslant s<t\leqslant3}\nu_{P^{i\gamma(i)}_{st}}(z)\geq4=4\nu^{[1]}_{(f,H_j),\leqslant k_j}(z)-\nu_j(z).$$
The claim is proved.
On the other hand, with $j=i$ or $j=\gamma(i)$, for every $z\in f^{-1}(H_j)$, we see that
\begin{align*}\nu_{P^{i\gamma(i)}_{st}}(z)&\geq\min\{\nu_{(f^s,H_j),\leqslant k_j}(z),\nu_{(f^t,H_j),\leqslant k_j}(z)\}\\
&\geq\nu^{[n]}_{(f^s,H_j),\leqslant k_j}(z)+\nu^{[n]}_{(f^t,H_j),\leqslant k_j}(z)-n\nu^{[1]}_{(f,H_j),\leqslant k_j}(z).
\end{align*}
Hence, $ \sum_{1\leqslant s<t\leqslant3}\nu_{P^{i\gamma(i)}_{st}}(z)\geq2\sum_{u=1}^3\nu^{[n]}_{(f^u,H_j),\leqslant k_j}(z)-3n\nu^{[1]}_{(f,H_j),\leqslant k_j}(z).$
Combining this inequality with the above claim, we obtain
\begin{equation*}
\begin{aligned}
\sum_{1\leqslant s<t\leqslant3}\nu_{P^{i\gamma(i)}_{st}}(z)&\geq\sum_{j=i,\gamma(i)}\big(2\sum_{u=1}^3\nu^{[n]}_{(f^u,H_j),\leqslant k_j}(z)-3n\nu^{[1]}_{(f,H_j),\leqslant k_j}(z)\big)\\
&+\sum_{j=1,j\not=i,\gamma(i)}^{q}(4\nu^{[1]}_{(f,H_j),\leqslant k_j}(z)-\nu_j(z)).
\end{aligned}
\end{equation*}
On the other hand, it is easy to see that $\prod_{1\leqslant s<t\leqslant3}P^{i\gamma(i)}_{st}\in B(2,0;f^1,f^2,f^3).$
Summing up both sides of the above inequality over all $i,$ we obtain
\begin{equation*}
\begin{aligned}
\sum_{i=1}^q\sum_{1\leqslant s<t\leqslant3}\nu_{P^{i\gamma(i)}_{st}}\geq4\sum_{u=1}^3\sum_{i=1}^q\nu^{[n]}_{(f^u,H_i),\leqslant k_i}+(4q-6n-8)\sum_{i=1}^q\nu^{[1]}_{(f,H_i),\leqslant k_i}-(q-2)\sum_{i=1}^q\nu_i.
\end{aligned}
\end{equation*}
Thus,
$$ \sum_{i=1}^q\nu_i+\frac1{q-2}\sum_{i=1}^q\sum_{1\leqslant s<t\leqslant3}\nu_{P^{i\gamma(i)}_{st}}\geq\frac{4}{q-2}\sum_{u=1}^3\sum_{i=1}^q\nu^{[n]}_{(f^u,H_i),\leqslant k_i}+\frac{4q-6n-8}{q-2}\sum_{i=1}^q\nu^{[1]}_{(f,H_i),\leqslant k_i}.$$
Using this inequality and (\ref{eq3.8}), we have
\begin{equation*}
\begin{aligned}
\sum_{i=1}^q\nu_{g_{i\gamma(i)}}&+\frac{1}{q-2}\sum_{i=1}^q\sum_{1\leqslant s<t\leqslant3}\nu_{P^{i\gamma(i)}_{st}}\\
&\geq\big(2+\frac{4}{q-2}\big)\sum_{u=1}^3\sum_{t=1}^q\nu^{[n]}_{(f^u,H_t),\leqslant k_t}+\big(n-2+\frac{4q-6n-8}{q-2}\big)\sum_{i=1}^q\nu^{[1]}_{(f,H_i),\leqslant k_i}\\
&\geq\big(2+\frac{4}{q-2}+\frac{n-2}{3n}+\frac{4q-6n-8}{3n(q-2)}\big)\sum_{u=1}^3\sum_{t=1}^q\nu^{[n]}_{(f^u,H_t),\leqslant k_t}.
\end{aligned}
\end{equation*} It implies that
$$\sum_{u=1}^3\sum_{t=1}^q\nu^{[n]}_{(f^u,H_t),\leqslant k_t}\leqslant\frac{3n}{6nq+(n-2)(q-2)+4q-6n-8}\nu_{\prod_{i=1}^q(g^{q-2}_{i\gamma(i)}P^{i\gamma(i)}_{12}P^{i\gamma(i)}_{13}P^{i\gamma(i)}_{23})}.$$
Observe that $\prod_{i=1}^qg^{q-2}_{i\gamma(i)}P^{i\gamma(i)}_{12}P^{i\gamma(i)}_{13}P^{i\gamma(i)}_{23}\in B(q^2,q(q-2);f^1,f^2,f^3)$, hence applying Lemma \ref{lem22}, we obtain
\begin{align*}q&\leqslant 2N-n+1+\sum_{i=1}^q\frac{n}{k_i+1}+\rho\big( n(2N-n+1)+\frac{2nq(q-2)}{6nq+(n-2)(q-2)+4q-6n-8}\big)\\
&+\frac{3nq^2}{6nq+(n-2)(q-2)+4q-6n-8},
\end{align*}
which is impossible since $$ \frac{2nq(q-2)}{6nq+(n-2)(q-2)+4q-6n-8}<\frac{2nq(q-2)}{6nq+q-2}=\frac{2n(q-2)}{6n+1}\leqslant\frac{4(q-n)n}{n-1}.$$
The proof of Theorem \ref{theo1} is complete.
$\square$
\noindent
\textbf{Acknowledgement:} This work was done while the first author was staying at the Vietnam Institute for Advanced Study in Mathematics (VIASM). He would like to thank VIASM for its support.
\end{document}
\begin{document}
\begin{abstract}
This is a survey article on moduli of affine schemes equipped with an action of a reductive group.
The emphasis is on examples and applications to the classification of spherical varieties.
\end{abstract}
\title{Invariant Hilbert schemes}
\tableofcontents
\section{Introduction}
\label{sec:introduction}
The Hilbert scheme is a fundamental object of projective algebraic geometry; it parametrizes
those closed subschemes of the projective space $\bP^N$ over a field $k$, that have a prescribed
Hilbert polynomial. Many moduli schemes (for example, the moduli space of curves of a fixed genus)
are derived from the Hilbert scheme by taking locally closed subschemes and geometric invariant theory
quotients.
In recent years, several versions of the Hilbert scheme have been constructed in the setting of
algebraic group actions on affine varieties. One of them, the \emph{$G$-Hilbert scheme} $G$-$\Hilb(V)$,
is associated to a linear action of a finite group $G$ on a finite-dimensional complex vector space $V$;
it classifies those $G$-stable subschemes $Z \subset V$ such that the representation of $G$ in the coordinate
ring $\cO(Z)$ is the regular representation. The $G$-Hilbert scheme comes with a morphism to the quotient
variety $V/G$, that associates with $Z$ the point $Z/G$. This \emph{Hilbert-Chow morphism} has an inverse over
the open subset of $V/G$ consisting of orbits with trivial isotropy group, as every such orbit $Z$ is
a point of $G$-$\Hilb(V)$. In favorable cases (e.g. in dimension $2$), the Hilbert-Chow morphism is
a desingularization of $V/G$; this construction plays an essential role in the McKay correspondence
(see e.g. \cite{IN96, IN99, BKR01}).
Another avatar of the Hilbert scheme is the \emph{multigraded Hilbert scheme} introduced by Haiman and
Sturmfels in \cite{HS04}; it parametrizes those homogeneous ideals $I$ of a polynomial ring
$k[t_1,\ldots,t_N]$, graded by an abelian group $\Gamma$, such that each homogeneous component of
the quotient $k[t_1,\ldots,t_N]/I$ has a prescribed (finite) dimension. In contrast to the construction
(due to Grothendieck) of the Hilbert scheme which relies on homological methods, that of Haiman and
Sturmfels is based on commutative algebra and algebraic combinatorics only; it is valid over any base
ring $k$. Examples of multigraded Hilbert schemes include the Grothendieck-Hilbert scheme (as follows
from a result of Gotzmann, see \cite{Go78}) and the \emph{toric Hilbert scheme} (defined by Peeva and Sturmfels
in \cite{PS02}) where the homogeneous components of $I$ have codimension $0$ or $1$.
The invariant Hilbert scheme may be viewed as a common generalization of $G$-Hilbert schemes and
multigraded Hilbert schemes; given a complex reductive group $G$ and a finite-dimensional $G$-module $V$,
it classifies those closed $G$-subschemes $Z \subset V$ such that the $G$-module $\cO(Z)$ has prescribed
(finite) multiplicities. If $G$ is diagonalizable with character group $\Lambda$, then $Z$ corresponds to
a homogeneous ideal of the polynomial ring $\cO(V)$ for the $\Lambda$-grading defined by the $G$-action;
we thus recover the multigraded Hilbert scheme. But actually, the construction of the invariant Hilbert
scheme in \cite{AB05} relies on a reduction to the multigraded case via highest weight theory.
The Hilbert scheme of $\bP^N$ is known to be projective and connected; invariant Hilbert schemes are
quasi-projective (in particular, of finite type) but possibly non-projective. Also, they may be disconnected,
already for certain toric Hilbert schemes (see \cite{Sa05}). One may wonder how such moduli schemes can exist
in the setting of affine algebraic geometry, since for example any curve in the affine plane is a flat limit
of curves of higher degree. In fact, the condition for the considered subschemes $Z \subset X$ to have a
coordinate ring with finite multiplicities is quite strong; for example, it yields a proper morphism
to a punctual Hilbert scheme, that associates with $Z$ the categorical quotient $Z/\!/G \subset X/\!/G$.
In the present article, we expose the construction and fundamental properties of the invariant Hilbert scheme,
and survey some of its applications to varieties with algebraic group actions. The prerequisites are (hopefully)
quite modest: basic notions of algebraic geometry; the definitions and results that we need about actions and
representations of algebraic groups are reviewed at the beginning of Section 2. Then we introduce flat families of
closed $G$-stable subschemes of a fixed affine $G$-scheme $X$, where $G$ is a fixed reductive group, and define
the Hilbert function which encodes the $G$-module structure of coordinate rings. Given such a function $h$,
our main result asserts the existence of a universal family with base a quasi-projective scheme: the invariant
Hilbert scheme $\Hilb^G_h(X)$.
Section 3 presents a number of basic properties of invariant Hilbert schemes. We first reduce the construction
of $\Hilb^G_h(X)$ to the case (treated by Haiman and Sturmfels) that $G$ is diagonalizable and $X$ is a $G$-module.
For this, we slightly generalize the approach of \cite{AB05}, where $G$ was assumed to be connected. Then we
describe the Zariski tangent space at any closed point $Z$, with special attention to the case that $Z$ is a
$G$-orbit closure in view of its many applications. Here also, we adapt the results of \cite{AB05}. More original
are the next developments on the action of the equivariant automorphism group and on a generalization of the
Hilbert-Chow morphism, which aim at filling some gaps in the literature.
In Section 4, we first give a brief overview of invariant Hilbert schemes for finite groups and their
applications to resolving quotient singularities; here the reader may consult \cite{Be08} for a detailed
exposition. Then we survey very different applications of invariant Hilbert schemes, namely, to the
classification of \emph{spherical varieties}. These form a remarkable class of varieties equipped with
an action of a connected reductive group $G$, that includes toric varieties, flag varieties and symmetric
homogeneous spaces. A normal affine $G$-variety $Z$ is spherical if and only if the $G$-module $\cO(Z)$
is multiplicity-free; then $Z$ admits an equivariant degeneration to an affine spherical variety
$Z_0$ with a simpler structure, e.g., the decomposition of $\cO(Z_0)$ into simple $G$-modules is a grading
of that algebra. Thus, $Z_0$ is uniquely determined by the highest weights of these simple modules,
which form a finitely generated monoid $\Gamma$. We show (after \cite{AB05}) that the affine spherical
$G$-varieties with weight monoid $\Gamma$ are parametrized by an affine scheme of finite type $\M_{\Gamma}$,
a locally closed subscheme of some invariant Hilbert scheme.
Each subsection ends with examples which illustrate its main notions and results; some of these examples
have independent interest and are developed along the whole text. In particular, we present results of
Jansou (see \cite{Ja07}) that completely describe invariant Hilbert schemes associated with the
``simplest'' data: $G$ is semi-simple, $X$ is a simple $G$-module, and the Hilbert function $h$ is that
of the cone of highest weight vectors (the affine cone over the closed $G$-orbit $Y$ in the projective space
$\bP(X)$). Quite remarkably, $\Hilb^G_h(X)$ turns out to be either a (reduced) point or an affine line
$\bA^1$; moreover, the latter case occurs precisely when $Y$ can be embedded into another projective
variety $\tY$ as an ample divisor, where $\tY$ is homogeneous under a semi-simple group $\tG \supset G$,
and $G$ acts transitively on the complement $\tY \setminus Y$. Then the universal family is just the
affine cone over $\tY$ embedded via the sections of $Y$.
This relation between invariant Hilbert schemes and projective geometry has been further developed in
recent works on the classification of arbitrary spherical $G$-varieties. In loose words, one reduces
to classifying \emph{wonderful $G$-varieties} which are smooth projective $G$-varieties having the
simplest orbit structure; examples include the just considered $G$-varieties $\tY$. Taking an appropriate
affine multi-cone, one associates to each wonderful variety a family of affine spherical varieties over
an affine space $\bA^r$, which turns out to be the universal family. This approach is presented in more
detail in the final Subsections \ref{subsec:fpsv} and \ref{subsec:cwv}.
Throughout this text, the emphasis is on geometric methods and very little space is devoted to
the combinatorial and Lie-theoretical aspects of the domain, which are however quite important.
The reader may consult \cite{Bi10, Ch08, MT02, Sa05, St96} for the combinatorics of toric Hilbert
schemes, \cite{Pe10} for the classification of spherical embeddings, \cite{Bra10} for that of
wonderful varieties, and \cite{Lo10} for uniqueness properties of spherical varieties. Also, we do
not present the close relations between certain invariant Hilbert schemes and moduli of polarized
projective schemes with algebraic group action; see \cite{Al02} for the toric case (and, more generally,
semiabelic varieties), and \cite{AB06} for the spherical case. These relations would deserve further
investigation.
Also, it would be interesting to obtain a modular interpretation of the \emph{main component}
of certain invariant Hilbert schemes, that contains the irreducible varieties. For toric Hilbert
schemes, such an interpretation is due to Olsson (see \cite{Ol08}) in terms of logarithmic structures.
Finally, another interesting line of investigation concerns the moduli scheme $\M_{\Gamma}$
of affine spherical varieties with weight monoid $\Gamma$. In all known examples, the irreducible
components of $\M_{\Gamma}$ are just affine spaces, and it is tempting to conjecture that this always
holds. A positive answer would yield insight into the classification of spherical varieties and
the multiplication in their coordinate rings.
\noindent
{\bf Acknowledgements.} I thank St\'ephanie Cupit-Foutou, Ronan Terpereau, and the referee for their
careful reading and valuable comments.
\section{Families of affine schemes with reductive group action}
\label{sec:fas}
\subsection{Algebraic group actions}
\label{subsec:aga}
In this subsection, we briefly review some basic notions about algebraic groups and their actions;
details and proofs of the stated results may be found e.g. in the notes \cite{Bri10}.
We begin by fixing notation and conventions.
Throughout this article, we consider algebraic varieties, algebraic groups, and
schemes over a fixed algebraically closed field $k$ of characteristic zero.
Unless explicitly mentioned, schemes are assumed to be \emph{noetherian}.
A {\bf variety} is a reduced separated scheme of finite type; thus,
varieties need not be irreducible. By a point of such a variety $X$,
we mean a closed point, or equivalently a $k$-rational point.
An {\bf algebraic group} is a variety $G$ equipped with morphisms
$\mu_G : G \times G \to G$ (the multiplication), $\iota_G : G \to G$
(the inverse) and with a point $e_G$ (the neutral element)
that satisfy the axioms of a group; this translates into the commutativity
of certain diagrams.
Examples of algebraic groups include closed subgroups of the general linear group $\GL_n$
consisting of all $n \times n$ invertible matrices, where $n$ is a positive
integer; such algebraic groups are called {\bf linear}.
We will only consider linear algebraic groups in the sequel.
An (algebraic) {\bf action} of an algebraic group $G$ on a scheme $X$
is a morphism
$$
\alpha : G \times X \longrightarrow X, \quad
(g,x) \longmapsto g \cdot x
$$
such that the composite morphism
$$
\CD
X @>{e_G \times \id_X}>> G \times X @>{\alpha}>> X
\endCD
$$
is the identity (i.e., $e_G$ acts on $X$ via the identity $\id_X$),
and the square
\begin{equation}\label{eqn:asso}
\CD
G \times G \times X @>{\id_G \times \alpha}>> G \times X \\
@V{\mu_G \times \id_X}VV @V{\alpha}VV \\
G \times X @>{\alpha}>> X \\
\endCD
\end{equation}
commutes (the associativity property of an action).
A scheme equipped with a $G$-action is called a $G$-{\bf scheme}.
Given two $G$-schemes $X$, $Y$ with action morphisms $\alpha$, $\beta$,
a morphism $f : X \to Y$ is called {\bf equivariant}
(or a {\bf $G$-morphism}) if the square
$$
\CD
G \times X @>{\alpha}>> X\\
@V{\id_G \times f}VV @V{f}VV \\
G \times Y @>{\beta}>> Y\\
\endCD
$$
commutes. If $\beta$ is the trivial action, i.e., the projection $G \times Y \to Y$,
then we say that $f$ is $G$-{\bf invariant}.
An (algebraic, or rational) $G$-{\bf module} is a vector space $V$ over $k$,
equipped with a linear action of $G$ which is algebraic in the following sense:
every $v \in V$ is contained in a $G$-stable finite-dimensional subspace $V_v \subset V$
on which $G$ acts algebraically. (We do not assume that $V$ itself is
finite-dimensional). Equivalently, $G$ acts on $V$ via a representation
$\rho : G \to \GL(V)$ which is a union of finite-dimensional algebraic
subrepresentations.
A $G$-stable subspace $W$ of a $G$-module $V$ is called a
$G$-{\bf submodule}; $V$ is {\bf simple} if it has no non-zero proper
submodule. Note that simple modules are finite-dimensional, and
correspond to irreducible representations.
A $G$-{\bf algebra} is an algebra $A$ over $k$, equipped with an action
of $G$ by algebra automorphisms which also makes it a $G$-module.
Given a $G$-scheme $X$, the algebra $\cO(X)$ of global sections of
the structure sheaf is equipped with a linear action of $G$ via
$$
(g \cdot f)(x) = f(g^{-1} \cdot x).
$$
In fact, this action is algebraic and hence \emph{$\cO(X)$ is a $G$-algebra}.
The assignment $X \mapsto \cO(X)$ defines a bijective correspondence
between \emph{affine} $G$-schemes and $G$-algebras.
Each $G$-algebra of finite type $A$ is generated by a finite-dimensional $G$-submodule $V$.
Hence $A$ is a quotient of the symmetric algebra $\Sym(V) \cong \cO(V^*)$ by a $G$-stable ideal.
It follows that \emph{every affine $G$-scheme of finite type is isomorphic to a closed
$G$-subscheme of a finite-dimensional $G$-module}.
\begin{examples}\label{ex:aga}
(i) Let $G := \GL_1$ be the {\bf multiplicative group}, denoted by $\bG_m$. Then
$\cO(G) \cong k[t,t^{-1}]$ and the $G$-modules are exactly the graded vector spaces
\begin{equation}\label{eqn:grad}
V = \bigoplus_{n \in \bZ} V_n
\end{equation}
where $\bG_m$ acts via $t \cdot \sum_n v_n = \sum_n t^n v_n$. In particular,
the $G$-algebras are just the $\bZ$-graded algebras.
\noindent
(ii) More generally, consider a {\bf diagonalizable group} $G$, i.e., a closed
subgroup of a {\bf torus} $(\bG_m)^n$. Then $G$ is uniquely determined
by its {\bf character group} $\cX(G)$, the set of homomorphisms
$\chi : G \to \bG_m$ equipped with pointwise multiplication. Moreover,
the abelian group $\cX(G)$ is finitely generated, and the assignment
$G \mapsto \cX(G)$ defines a bijective correspondence between
diagonalizable groups and finitely generated abelian groups. This correspondence
is contravariant, and $G$-modules correspond to $\cX(G)$-graded vector spaces.
Any character of $\bG_m$ is of the form $t \mapsto t^n$ for some integer $n$;
this identifies $\cX(\bG_m)$ with $\bZ$.
\end{examples}
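As a concrete instance of (ii) (a standard example, not spelled out above), consider the subgroup $\mu_n \subset \bG_m$ of $n$-th roots of unity. Restricting characters of $\bG_m$ yields an isomorphism $\cX(\mu_n) \cong \bZ/n\bZ$, and the $\mu_n$-modules are exactly the $\bZ/n\bZ$-graded vector spaces
$$
V = \bigoplus_{\bar m \in \bZ/n\bZ} V_{\bar m},
$$
where $\zeta \in \mu_n$ acts on $V_{\bar m}$ via multiplication by $\zeta^m$.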
\subsection{Reductive groups}
\label{subsec:rg}
In this subsection, we present some basic results on reductive groups, their representations
and invariants; again, we refer to the notes \cite{Bri10} for details and proofs.
A linear algebraic group $G$ is called {\bf reductive} if every $G$-module is semi-simple,
i.e., isomorphic to a direct sum of simple $G$-modules. In view of the characteristic-$0$
assumption, this is equivalent to the condition that $G$ has no non-trivial closed normal
subgroup isomorphic to the additive group of a finite-dimensional vector space;
this is the group-theoretical notion of reductivity.
Examples of reductive groups include finite groups, diagonalizable groups,
and the classical groups such as $\GL_n$ and the special linear group
$\SL_n$ (consisting of $n \times n$ matrices of determinant $1$).
Given a reductive group $G$, we denote by $\Irr(G)$ the set of isomorphism
classes of simple $G$-modules (or of irreducible $G$-representations).
The class of the trivial $G$-module $k$ is denoted by $0$.
For any $G$-module $V$, the map
$$
\bigoplus_{M \in \Irr(G)} \Hom^G(M,V) \otimes_k M \longrightarrow V,
\quad \sum_M f_M \otimes x_M \longmapsto \sum_M f_M(x_M)
$$
is an isomorphism of $G$-modules, where $\Hom^G(M,V)$ denotes the vector space of
morphisms of $G$-modules from $M$ to $V$, and $G$ acts on the left-hand side
via
$$
g \cdot \sum_M f_M \otimes x_M = \sum_M f_M \otimes g \cdot x_M.
$$
Thus, the dimension of $\Hom^G(M,V)$ is the multiplicity of $M$ in $V$
(which may be infinite). This yields the {\bf isotypical decomposition}
\begin{equation}\label{eqn:decm}
V \cong \bigoplus_{M \in \Irr(G)} V_M \otimes_k M \quad \text{where} \quad
V_M := \Hom^G(M,V).
\end{equation}
In particular, $V_0$ is the subspace of $G$-invariants in $V$, denoted by $V^G$.
For a $G$-algebra $A$, the invariant subspace $A^G$ is a subalgebra. Moreover,
each $A_M$ is an $A^G$-module called the {\bf module of covariants of type $M$}.
Denoting by $X = \Spec(A)$ the associated $G$-scheme, we also have
$$
A_M \cong \Mor^G(X,M^*)
$$
(the set of $G$-morphisms from $X$ to the dual module $M^*$; note that $M^*$ is simple).
Also, we have an isomorphism of $A^G$-$G$-modules in an obvious sense
\begin{equation}\label{eqn:cova}
A \cong \bigoplus_{M \in \Irr(G)} A_M \otimes_k M.
\end{equation}
\begin{example}\label{ex:ag}
Let $G$ be a diagonalizable group with character group $\Lambda$. Then $G$ is reductive
and its simple modules are exactly the lines $k$ where $G$ acts via a character
$\lambda \in \Lambda$. The decompositions (\ref{eqn:decm}) and (\ref{eqn:cova}) give back
the $\Lambda$-gradings of $G$-modules and $G$-algebras.
\end{example}
Returning to an arbitrary reductive group $G$, the modules of covariants satisfy
an important finiteness property:
\begin{lemma}\label{lem:finG}
Let $A$ be a $G$-algebra, finitely generated over a subalgebra
$R \subset A^G$. Then the algebra $A^G$ is also finitely generated
over $R$. Moreover, the $A^G$-module $A_M$ is finitely generated for
any $M \in \Irr(G)$.
\end{lemma}
In the case that $R = k$, this statement is a version of the classical
Hilbert--Nagata theorem, see e.g. \cite[Theorem 1.25 and Lemma 2.1]{Bri10}.
The proof given there adapts readily to our setting; this statement will in turn
be generalized to families in Subsection \ref{subsec:fam}.
In particular, for an affine $G$-scheme of finite type $X = \Spec(A)$,
the subalgebra $A^G$ is finitely generated and hence corresponds to
an affine scheme of finite type $X/\!/G$, equipped with a $G$-invariant morphism
$$
\pi : X \longrightarrow X/\!/G.
$$
In fact, $\pi$ is universal among all invariant morphisms with source $X$;
thus, $\pi$ is called the {\bf categorical quotient}. Also, given a closed $G$-subscheme
$Y \subset X$, the restriction $\pi_{\vert Y}$ is the categorical quotient of $Y$, and
the image of $\pi_{\vert Y}$ is closed in $X/\!/G$. As a consequence, $\pi$ is surjective
and each fiber contains a unique closed $G$-orbit.
We now assume $G$ to be \emph{connected}.
Then the simple $G$-modules may be explicitly described via highest weight theory
that we briefly review. Choose a {\bf Borel subgroup} $B \subset G$, i.e.,
a maximal connected solvable subgroup. Every simple $G$-module $V$ contains
a unique line of eigenvectors of $B$, say $k v$; then $v$ is called a {\bf highest weight vector}.
The corresponding character $\lambda : B \to \bG_m$, such that $b \cdot v = \lambda(b) v$
for all $b \in B$, is called the {\bf highest weight of $V$}; it uniquely determines
the $G$-module $V$ up to isomorphism. We thus write $V = V(\lambda)$, $v = v_{\lambda}$,
and identify $\Irr(G)$ with a subset of the character group $\Lambda := \cX(B)$:
the set $\Lambda^+$ of {\bf dominant weights}, which turns out to be a finitely generated
submonoid of the {\bf weight lattice} $\Lambda$. Moreover, $V(0)$ is the trivial $G$-module $k$.
In this setting, the modules of covariants admit an alternative description in terms of
highest weights. To state it, denote by $U$ the unipotent part of $B$; this is a closed
connected normal subgroup of $B$, and a maximal unipotent subgroup of $G$.
Moreover, $B$ is the semi-direct product $UT$ where $T \subset B$ is a maximal torus,
and $\Lambda$ is identified with the character group $\cX(T)$ via the restriction.
Given a $G$-module $V$, the subspace of $U$-invariants $V^U$ is a $T$-module and hence
a direct sum of eigenspaces $V^U_{\lambda}$ where $\lambda \in \Lambda$. Moreover, the map
$$
\Hom^G\big( V({\lambda}), V) \longrightarrow V^U_{\lambda}, \quad
f \longmapsto f(v_\lambda)
$$
is an isomorphism for any $\lambda \in \Lambda^+$, and $V^U_{\lambda} = 0$
for any $\lambda \notin \Lambda^+$. As a consequence, given a $G$-algebra $A$,
we have isomorphisms
$$
A_{V(\lambda)} \cong A^U_{\lambda}
$$
of modules of covariants over
$$
A^G = A^B = A^U_0.
$$
The algebra of $U$-invariants also satisfies a useful finiteness property
(see e.g. \cite[Theorem 2.7]{Bri10}):
\begin{lemma}\label{lem:finU}
With the notation and assumptions of Lemma \ref{lem:finG}, the algebra
$A^U$ is finitely generated over $R$.
\end{lemma}
This in turn yields a categorical quotient
$$
\psi : X \longrightarrow X/\!/U
$$
for an affine $G$-scheme of finite type $X$, where $X/\!/U := \Spec\, \cO(X)^U$
is an affine $T$-scheme of finite type. Moreover, for any closed $G$-subscheme
$Y \subset X$, the restriction $\psi_{\vert Y}$ is again the categorical quotient of $Y$
(but $\psi$ is generally not surjective).
Also, many properties of $X$ may be read off $X/\!/U$. For example,
\emph{an affine $G$-scheme $X$ is of finite type (resp. reduced, irreducible, normal)
if and only if so is $X/\!/U$} (see \cite[Chapter 18]{Gr97}).
\begin{examples}\label{ex:agbis}
(i) If $G$ is a torus, then $G = B = T$ and $\Lambda^+ = \Lambda$; each $V(\lambda)$
is just the line $k$ where $T$ acts via $t \cdot z = \lambda(t) z$.
\noindent
(ii) Let $G = \SL_2$. We may take for $B$ the subgroup of upper triangular matrices with
determinant $1$. Then $U$ is the subgroup of upper triangular matrices with diagonal entries
$1$, isomorphic to the additive group $\bG_a$. Moreover, we may take for $T$ the subgroup
of diagonal matrices with determinant $1$, isomorphic to the multiplicative group $\bG_m$.
Thus, $\Lambda \cong \bZ$.
In fact, the simple $G$-modules are exactly the spaces $V(n)$ of homogeneous polynomials
of degree $n$ in two variables $x,y$ where $G$ acts by linear change of variables;
here $n \in \bN \cong \Lambda^+$. A highest weight vector in $V(n)$ is the monomial $y^n$.
Moreover, $V(n)$ is isomorphic to its dual module, and hence to the $n$-th symmetric
power $\Sym^n(k^2)$ where $k^2 \cong V(1)^* \cong V(1)$ denotes the standard $G$-module.
Since $G$ acts transitively on $V(1) \setminus \{0\}$, the categorical quotient
$V(1)/\!/G$ is just a point. One easily shows that the quotient by $U$ is the map
$$
V(1) \longrightarrow \bA^1, \quad a x + b y \longmapsto a.
$$
Also, the categorical quotient of $V(2)$ by $G$ is given by the discriminant
$$
\Delta : V(2) \longrightarrow \bA^1, \quad a x^2 + 2 b xy + c y^2 \longmapsto ac - b^2
$$
and the categorical quotient by $U$ is the map
$$
V(2) \longrightarrow \bA^2, \quad a x^2 + 2 b xy + c y^2 \longmapsto (a,ac - b^2).
$$
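As a quick consistency check, in one standard realization of the $U$-action on binary
forms (an assumption on our part, since no explicit convention is fixed above), the element
$u_t \in U$ acts by the substitution $x \mapsto x + t y$, $y \mapsto y$. Both components
of the above map are then indeed $U$-invariant:
$$
u_t \cdot \big( a x^2 + 2 b xy + c y^2 \big)
= a x^2 + 2 (b + a t) xy + (a t^2 + 2 b t + c) y^2,
$$
so the coefficient $a$ is unchanged, while
$$
a \, (a t^2 + 2 b t + c) - (b + a t)^2 = ac - b^2.
$$
The same substitution sends $a x + b y$ to $a x + (a t + b) y$, confirming the invariance
of $a$ in the quotient map for $V(1)$ as well.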
For large $n$, no explicit description of the categorical quotients $V(n)/\!/G$ and
$V(n)/\!/U$ is known, although the corresponding algebras $\Sym\big( V(n) \big)^G$
and $\Sym\big( V(n) \big)^U$ (the invariants and covariants of binary forms of degree $n$)
have been extensively studied since the mid-19th century.
\end{examples}
\subsection{Families}
\label{subsec:fam}
In this subsection, we fix a reductive group $G$ and an affine $G$-scheme of finite type,
$X = \Spec(A)$. We now introduce our main objects of interest.
\begin{definition}\label{def:fam}
A {\bf family of closed $G$-subschemes of $X$ over a scheme $S$}
is a closed subscheme $\cZ \subset X \times S$, stable by the
action of $G$ on $X \times S$ via $g \cdot (x,s) = (g \cdot x, s)$.
The family $\cZ$ is {\bf flat}, if so is the morphism $p : \cZ \to S$
induced by the projection $p_2 : X \times S \to S$.
\end{definition}
Given a family $\cZ$ as above, the morphism $p$ is $G$-invariant; thus, for any $k$-rational
point $s$ of $S$, the scheme-theoretic fiber $\cZ_s$ of $p$ at $s$ is a closed
$G$-subscheme of $X$. More generally, an arbitrary point $s \in S$ yields a closed subscheme
$\cZ_{k(\bar{s})} \subset X_{k(\bar{s})}$ where $k(s)$ denotes the residue field of $s$,
and $k(\bar{s})$ is an algebraic closure of that field. The scheme $\cZ_{k(\bar{s})}$ is
the geometric fiber of $\cZ$ at $s$, also denoted by $\cZ_{\bar{s}}$; this is a closed
subscheme of $X_{\bar{s}}$ equipped with an action of $G$ (viewed as a constant group scheme)
or equivalently of $G_{\bar{s}}$ (the group of $k(\bar{s})$-rational points of~$G$).
Since $X$ is affine, the data of the family $\cZ$ is equivalent to
that of the sheaf $p_* \cO_{\cZ}$ as a quotient of
$(p_2)_* \cO_{X \times S} = A \otimes_k \cO_S$, where both are viewed as sheaves
of $\cO_S$-$G$-algebras. Moreover, as $G$ acts trivially on $S$, we have
a canonical decomposition
\begin{equation}\label{eqn:decs}
p_* \cO_{\cZ} \cong \bigoplus_{M \in \Irr(G)} \cF_M \otimes_k M,
\end{equation}
where each {\bf sheaf of covariants}
\begin{equation}\label{eqn:covs}
\cF_M := \Hom^G(M, p_*\cO_{\cZ}) = ( p_*\cO_{\cZ} \otimes_k M^*)^G
\end{equation}
is a sheaf of $\cO_S$-modules, and (\ref{eqn:decs}) is an
isomorphism of sheaves of $\cO_S$-$G$-modules.
Also, $p_* \cO_{\cZ}$ is a sheaf of finitely generated $\cO_S$-algebras,
since $X$ is of finite type. In view of Lemma \ref{lem:finG}, it follows that
$$
\cF_0 = (p_* \cO_{\cZ})^G
$$
is a sheaf of finitely generated $\cO_S$-algebras as well; moreover, $\cF_M$
is a coherent sheaf of $\cF_0$-modules, for any $M \in \Irr(G)$. By (\ref{eqn:decs}),
the family $\cZ$ is flat if and only if each sheaf of covariants $\cF_M$ is flat.
\begin{definition}\label{def:fm}
With the preceding notation, the family $\cZ$ is {\bf multiplicity-finite}
if the sheaf of $\cO_S$-modules $\cF_M$ is coherent for any $M \in \Irr(G)$;
equivalently, $\cF_0$ is coherent.
We say that $\cZ$ is {\bf multiplicity-free} if each $\cF_M$ is zero or invertible.
\end{definition}
Since flatness is equivalent to local freeness for a finitely generated
module over a noetherian ring, we see that \emph{a multiplicity-finite
family is flat if and only if each sheaf of covariants is locally free of finite rank}.
When the base $S$ is connected, the ranks of these sheaves are well-defined
and yield a numerical invariant of the family: the {\bf Hilbert function}
$$
h = h_{\cZ} : \Irr(G) \longrightarrow \bN, \quad M \longmapsto \rk_{\cO_S}(\cF_M).
$$
This motivates the following:
\begin{definition}\label{def:hilb}
Given a function $h : \Irr(G) \to \bN$,
a {\bf flat family of closed $G$-subschemes of $X$ with Hilbert function $h$} is
a closed $G$-stable subscheme $\cZ \subset X \times S$ such that each sheaf of covariants
$\cF_M$ is locally free of rank $h(M)$.
\end{definition}
\begin{remarks}\label{rem:fam}
(i) In the case that $S = \Spec(k)$, a family is just a closed $G$-subscheme
$Z \subset X$. Then $Z$ is multiplicity-finite
if and only if the quotient $Z/\!/G$ is finite; equivalently, $Z$ contains only
finitely many closed $G$-orbits. For example, \emph{any $G$-orbit closure is multiplicity-finite}.
Also, $Z$ is multiplicity-free if and only if the $G$-module $\cO(Z)$ has multiplicities
$0$ or $1$. If $Z$ is an irreducible variety, this is equivalent to the condition that
$Z$ contains an open orbit of a Borel subgroup $B \subset G$ (see e.g. \cite[Lemma 2.12]{Bri10}).
In particular, when $G$ is a torus, say $T$, an affine irreducible $T$-variety $Z$ is multiplicity-free
if and only if it contains an open $T$-orbit. Then each non-zero eigenspace $\cO(Z)_{\lambda}$ is a line,
and the set of those $\lambda$ such that $\cO(Z)_{\lambda}\neq 0$ consists of those linear combinations
$n_1 \lambda_1 + \cdots + n_r \lambda_r$, where $n_1,\ldots,n_r$ are non-negative integers,
and $\lambda_1, \ldots, \lambda_r$ are the weights of homogeneous generators of the algebra
$\cO(Z)$. Thus, this set is a finitely generated submonoid of $\Lambda$
that we denote by $\Lambda^+(Z)$ and call the {\bf weight monoid of $Z$}.
Each affine irreducible multiplicity-free $T$-variety $Z$ is uniquely determined by its weight monoid $\Gamma$:
in fact, \emph{the $\Lambda$-graded algebra $\cO(Z)$ is isomorphic to $k[\Gamma]$}, the algebra of the monoid
$\Gamma$ over $k$. Moreover, $Z$ is normal if and only if $\Gamma$ is {\bf saturated}, i.e., equals the
intersection of the group that it generates in $\Lambda$, with the convex cone that it generates in
$\Lambda \otimes_{\bZ} \bR$. Under that assumption, $Z$ is called an (affine) {\bf toric variety}.
Returning to an affine irreducible $G$-variety $Z$, note that $Z$ is multiplicity-free if
and only if so is $Z/\!/U$. In that case, we have an isomorphism of $G$-modules
\begin{equation}\label{eqn:wm}
\cO(Z) \cong \bigoplus_{\lambda \in \Lambda^+(Z)} V(\lambda)
\end{equation}
where $\Lambda^+(Z) := \Lambda^+(Z/\!/U)$ is again called the {\bf weight monoid of $Z$}.
In other words, the Hilbert function of $Z$ is given by
\begin{equation}\label{eqn:1-0}
h(\lambda) =
\begin{cases}
1 & \text{if $\lambda \in \Lambda^+(Z)$,} \\
0 & \text{otherwise.}\\
\end{cases}
\end{equation}
Also, \emph{$Z$ is normal if and only if $\Lambda^+(Z)$ is saturated}; then $Z$ is called
an (affine) {\bf spherical $G$-variety}. In contrast to the toric case, spherical varieties
are not uniquely determined by their weight monoid, see e.g. Example \ref{ex:fam}(ii).
\noindent
(ii) A flat family $\cZ$ over a \emph{connected} scheme $S$ is multiplicity-finite
(resp. is multiplicity-free, or has a prescribed Hilbert function $h$) if and only if
so does some geometric fiber $\cZ_{\bar{s}}$. In particular, if some fiber is a spherical
variety, then the family is multiplicity-free.
\noindent
(iii) Any family of closed $G$-subschemes $\cZ \subset X \times S$ yields a family
of closed $T$-subschemes $\cZ/\!/U \subset X/\!/U \times S$; moreover, the sheaves of
covariants of $\cZ$ and $\cZ/\!/U$ are isomorphic. Thus, $\cZ$ is flat
(multiplicity-finite, multiplicity-free) if and only if so is $\cZ/\!/U$. Also,
$\cZ$ has Hilbert function $h$ if and only if $\cZ/\!/U$ has Hilbert function $\bar{h}$
such that
\begin{equation}\label{eqn:bh}
\bar{h}(\lambda) = \begin{cases}
h(\lambda) & \text{if $\lambda \in \Lambda^+$}, \\
0 & \text{otherwise}. \\
\end{cases}
\end{equation}
\end{remarks}
\begin{examples}\label{ex:fam}
(i) The surface $\cZ$ of equation $x y - z = 0$ in $\bA^3$ is stable under the action of
$\bG_m$ via $t \cdot (x,y,z) = (tx, t^{-1}y, z)$. The morphism $z : \cZ \to \bA^1$ is a flat family
of closed $\bG_m$-subschemes of $\bA^2$ (the affine plane with coordinates $x,y$). The fibers
over non-zero points of $\bA^1$ are all isomorphic to $\bG_m$; they are exactly the orbits of
points of $\bA^2$ minus the coordinate axes. In particular, the Hilbert function of the family is
the constant function $1$. The fiber at $0$ is the (reduced) union of the coordinate axes.
For the $\bG_m$-action on $\bA^3$ via $t \cdot (x,y,z) = (t^2 x, t^{-1}y, z)$, the surface
$\cW$ of equation $x y^2 = z$ yields a family with the same fibers at non-zero points, but
the fiber at $0$ is non-reduced.
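The difference between the two special fibers can be read off directly from the equations
(a routine check): the fiber of $\cZ$ at $0$ is defined by the ideal $(xy) \subset k[x,y]$,
which is radical, whereas the fiber of $\cW$ at $0$ is defined by $(x y^2)$, which is not, since
$$
(xy)^2 = x \cdot (x y^2) \in (x y^2) \quad \text{while} \quad xy \notin (x y^2).
$$
Thus the class of $xy$ is a non-zero nilpotent in $k[x,y]/(x y^2)$.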
More generally, consider a torus $T$ acting linearly on the affine space $V = \bA^N$
with pairwise distinct weights. Denote by $\lambda_1,\ldots,\lambda_N$ the opposites of these
weights, i.e., the weights of the coordinate functions. Also, let $v \in V$ be
a {\bf general point} in the sense that all its coordinates are non-zero. Then the orbit closure
$$
Z := \overline{T \cdot v} \subset V
$$
is an irreducible multiplicity-free variety, and different choices of $v$ yield isomorphic
$T$-varieties; moreover, every affine irreducible multiplicity-free $T$-variety may be obtained in this way.
The weight monoid of $Z$ is generated by $\lambda_1,\ldots,\lambda_N$.
We construct flat families over $\bA^1$ with general fiber $Z$ as follows.
Let the torus $T \times \bG_m$ act linearly on $V \times \bA^1 = \bA^{N+1}$
such that the coordinate functions have weights
$$
(\lambda_1,a_1), \ldots, (\lambda_N,a_N), (0,1)
$$
where $a_1,\ldots,a_N$ are integers, viewed as characters of $\bG_m$. Then the orbit closure
$$
\cZ := \overline{(T \times \bG_m) \cdot (v,1)} \subset V \times \bA^1
$$
may be viewed as a $T$-variety. The projection $p: \cZ \to \bA^1$ is $T$-invariant,
and flat since $\cZ$ is an irreducible variety. Moreover, $p$ is trivial over
$\bA^1 \setminus \{0\}$; specifically, the map
$$
p^{-1}(\bA^1 \setminus \{0\}) \longrightarrow Z \times (\bA^1 \setminus \{0\}),
\quad (v,s) \longmapsto (s^{-1}v, s)
$$
is a $T$-equivariant isomorphism of families over $\bA^1 \setminus \{0\}$.
In particular, the fibers of $p$ at non-zero points are all isomorphic to $Z$,
and $p$ is multiplicity-free with Hilbert function $h$ as above.
On the other hand, the special fiber $\cZ_0$ is non-empty if and only if
$p$ (viewed as a regular function on $\cZ$) is not invertible; this translates
into the condition that the convex cone generated by
$(\lambda_1,a_1), \ldots, (\lambda_N,a_N)$ does not contain $(0,-1)$.
One may show that the preceding construction yields all one-parameter families with
generic fiber a multiplicity-free variety. Also, one may show that the special fiber is reducible
unless the whole family is trivial; this contrasts with our next example, where all fibers are
irreducible varieties and the special fiber is singular while all others are smooth.
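For instance, the surfaces $\cZ$ and $\cW$ at the beginning of this example fit into this
construction as follows (with one choice of action on points; translating into the weight
conventions above is a matter of sign conventions). Letting $T \times \bG_m$ act on
$\bA^2 \times \bA^1$ via $(t,u) \cdot (x,y,z) = (t x,\, t^{-1} u y,\, u z)$, the orbit of
$\big( (1,1), 1 \big)$ is
$$
\{ (t,\, t^{-1} u,\, u) ~\vert~ t, u \in \bG_m \},
$$
on which $x y = z$; its closure is therefore the irreducible surface $\cZ$, and restricting
to $u = 1$ recovers the $T$-action $t \cdot (x,y,z) = (tx, t^{-1}y, z)$. Similarly, $\cW$ is
the orbit closure of the same point for the action
$(t,u) \cdot (x,y,z) = (t^2 x,\, t^{-1} u y,\, u^2 z)$.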
\noindent
(ii) Let $G = \SL_2$ and $\Delta : V(2) \to \bA^1$ the discriminant as in Example
\ref{ex:agbis}(ii). Then the graph
$$
\cZ = \{(f,s) \in V(2) \times \bA^1 ~\vert~ \Delta(f) = s \}
$$
is a flat family of $G$-stable closed subschemes of $V(2)$. The fibers at non-zero
closed points are exactly the $G$-orbits of non-degenerate quadratic forms, while the
fiber at $0$ consists of two orbits: the squares of non-zero linear forms, and the origin.
Since $\cZ \cong V(2)$ as $G$-varieties, the Hilbert function of $\cZ$ is given by
$$
h_2(n) =
\begin{cases}
1 & \text{if $n$ is even}, \\
0 & \text{if $n$ is odd}.\\
\end{cases}
$$
Thus, $\cZ$ is multiplicity-free. Moreover, the family $\cZ/\!/U$ is trivial with fiber $\bA^1$,
as follows from the description of $V(2)/\!/U$ in Example \ref{ex:agbis}(ii). In particular,
each fiber $\cZ_s$ is a spherical variety.
Next, consider the quotient $\cW$ of $V(2)$ by the involution $\sigma : f \mapsto -f$.
Then $\cW$ is the affine $G$-variety associated with the subalgebra of
$\cO\big(V(2)\big) \cong \Sym \big( V(2) \big)$ consisting of even polynomial functions,
i.e., the subalgebra generated by
$\Sym^2 \big( V(2) \big) \cong V(4) \oplus V(0)$. In other words,
$$
\cW \subset V(4) \times \bA^1
$$
and the resulting projection $q : \cW \to \bA^1$ may be identified with the $\sigma$-invariant
map $V(2) \to \bA^1$ given by the discriminant. It follows that $q$ is a flat family of
$G$-stable closed subschemes of $V(4)$, with Hilbert function given by
$$
h_4(n) =
\begin{cases}
1 & \text{if $n$ is a multiple of $4$}, \\
0 & \text{otherwise}.\\
\end{cases}
$$
In particular, $\cW$ is multiplicity-free. Moreover, its fibers at non-zero closed points are
exactly the orbits $G \cdot f^2 \subset V(4)$, where $f$ is a non-degenerate quadratic form,
while the fiber at $0$ consists of two orbits: the fourth powers of non-zero linear forms,
and the origin. The family $\cW/\!/U$ is again trivial with fiber $\bA^1$, so that the fibers
of $\cW$ are spherical varieties.
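The decomposition $\Sym^2 \big( V(2) \big) \cong V(4) \oplus V(0)$ used above follows from
the Clebsch--Gordan rule together with a dimension count:
$$
V(2) \otimes V(2) \cong V(4) \oplus V(2) \oplus V(0),
$$
where the alternating part $\Lambda^2 \big( V(2) \big)$ has dimension $3$ and hence equals
the summand $V(2)$, so that the symmetric part is $V(4) \oplus V(0)$, of dimension
$5 + 1 = \binom{4}{2} = 6$, as required.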
We will show that both families just constructed are universal (in the sense of the next
subsection), and that no family with similar properties exists in $V(n)$ for $n \neq 2,4$.
For this, we will apply the various techniques that we successively introduce;
see Examples \ref{ex:tuf}(ii), \ref{ex:zts}(ii), \ref{ex:aut}(ii) and \ref{ex:qsm}(ii).
\end{examples}
\subsection{The universal family}
\label{subsec:tuf}
In the setting of the previous subsection, there is a natural construction of {\bf pull-back}
for families of $G$-stable subschemes of $X$: given such a family $\cZ \subset X \times S$
and a morphism of schemes $f : S' \to S$, we can form the cartesian square
$$
\CD
\cZ' @>>> X \times S' \\
@VVV @V{\id_X \times f}VV \\
\cZ @>>> X \times S \\
\endCD
$$
where the horizontal arrows are inclusions. This yields a family of closed
$G$-subschemes of $X$ over $S'$: the pull-back of $\cZ$ under $f$, which may also be defined
via the cartesian square
$$
\CD
\cZ' @>>> S' \\
@VVV @V{f}VV \\
\cZ @>{p}>> S. \\
\endCD
$$
Note that $\cZ'$ is flat over $S'$ whenever $\cZ$ is flat over $S$; moreover, multiplicity-finiteness
and -freeness are preserved under pull-back, as well as the Hilbert function.
We may now state our main result, which asserts the existence of a universal family:
\begin{theorem}\label{thm:hilb}
Given a reductive group $G$, an affine $G$-scheme of finite type $X$ and a function
$h: \Irr(G) \to \bN$, there exists a family of closed $G$-subschemes with Hilbert function $h$,
\begin{equation}\label{eqn:univ}
\Univ^G_h(X) \subset X \times \Hilb^G_h(X),
\end{equation}
such that any family $\cZ \subset X \times S$ of closed $G$-subschemes with Hilbert
function $h$ is obtained from (\ref{eqn:univ}) by pull-back under
a unique morphism $f : S \to \Hilb^G_h(X)$. Moreover, the scheme $\Hilb^G_h(X)$ is
quasi-projective (in particular, of finite type).
\end{theorem}
The family (\ref{eqn:univ}) is of course uniquely determined up to a unique isomorphism
by its universal property. The scheme $\Hilb^G_h(X)$ is called the {\bf invariant Hilbert scheme}
associated with the affine $G$-scheme $X$ and the function $h$.
Theorem \ref{thm:hilb} may be reformulated as asserting that
\emph{the {\bf Hilbert functor} $Hilb^G_h(X)$ that associates with any scheme $S$,
the set of flat families $\cZ \subset X \times S$ with Hilbert function $h$,
is represented by the quasi-projective scheme $\Hilb^G_h(X)$}.
By taking $S = \Spec(R)$ where $R$ is an arbitrary algebra, this yields
an algebraic description of the $R$-points of the invariant Hilbert scheme:
these are those $G$-stable ideals $I \subset A \otimes_k R$ such that
each $R$-module of covariants
$$
\big( (A \otimes_k R)/I \big)_M = (A \otimes_k R)_M/I_M
$$
is locally free of rank $h(M)$.
In particular, the $k$-rational points of $\Hilb^G_h(X)$ (which are the same
as its closed points, since this scheme is of finite type) are those
$G$-stable ideals $I$ of $A = \cO(X)$ such that each simple $G$-module $M$ has
multiplicity $h(M)$ in the quotient $A/I$. These points may also be identified
with the closed $G$-stable subschemes $Z \subset X$ with Hilbert function $h$.
\begin{remarks}\label{rem:tuf}
(i) The case of the {\bf trivial group $G$} is already quite substantial.
There, $X$ is just an affine scheme of finite type, and a family with finite
multiplicities is exactly a closed subscheme $\cZ \subset X \times S$
such that the projection $p: \cZ \to S$ is finite. Moreover, a Hilbert function
is just a non-negative integer $n$. In that case, $\Hilb_n(X)$ is the
{\bf punctual Hilbert scheme} that parametrizes the closed subschemes of length $n$
of $X$. In fact, $\Hilb_n(X)$ exists more generally for any quasi-projective
scheme $X$ over a field of arbitrary characteristic.
\noindent
(ii) If $G$ is the multiplicative group $\bG_m$, we know that $X$ corresponds
to a $\bZ$-graded algebra of finite type $A$. For a Hilbert function $h : \bZ \to \bN$,
the scheme $\Hilb_h^{\bG_m}(X)$ parametrizes those graded ideals $I \subset A$
such that the vector space $(A/I)_n$ has dimension $h(n)$ for all $n \in \bZ$.
Of special interest is the case that $X$ is the affine space $\bA^N$ where $\bG_m$
acts via scalar multiplication, i.e., $A$ is a polynomial ring in $N$ indeterminates
of weight $1$. Then a necessary condition for the existence of such ideals $I$ is that
$h(n) = P(n)$ for all $n \gg 0$, where $P(t)$ is a (uniquely determined) polynomial:
the {\bf Hilbert polynomial} of the graded algebra $A/I$.
In that case, we also have the {\bf Hilbert scheme} $\Hilb_P(\bP^{N-1})$ that parametrizes
closed subschemes of the projective $(N-1)$-space with Hilbert polynomial $P$, or equivalently,
graded ideals $I \subset A$ such that $\dim(A/I)_n = P(n)$ for all $n \gg 0$.
This yields a morphism
$$
\Hilb^{\bG_m}_h(\bA^N) \longrightarrow \Hilb_P(\bP^{N-1})
$$
which is in fact an isomorphism for an appropriate choice of the Hilbert function $h$,
associated to a given Hilbert polynomial $P$ (see \cite[Lemma 4.1]{HS04}).
\noindent
(iii) More generally, if $G$ is diagonalizable with character group $\Lambda$, and
$X$ is a $G$-module of finite dimension $N$, then $A$ is a polynomial ring on homogeneous
generators with weights $\lambda_1,\ldots,\lambda_N \in \Lambda$. Moreover, $\Hilb_h^G(X)$
parametrizes those $\Lambda$-graded ideals $I \subset A$ such that each vector space
$(A/I)_{\lambda}$ has a prescribed dimension $h(\lambda)$. In that case, $\Hilb_h^G(X)$ is the
{\bf multigraded Hilbert scheme} of \cite{HS04}. As shown there, that scheme exists over any base ring,
and no noetherian assumption is needed in the definition of the corresponding functor.
\end{remarks}
\begin{examples}\label{ex:tuf}
(i) Consider a torus $T$ acting linearly on the affine space $\bA^N$ via pairwise distinct weights
and take for $h$ the Hilbert function of a general $T$-orbit closure. Denoting by
$\lambda_1,\ldots,\lambda_N$ the weights of the coordinate functions on $\bA^N$
and by $\Gamma$ the submonoid of $\Lambda$ generated by these weights, we have
\begin{equation}\label{eqn:hm}
h(\lambda) = \begin{cases}
1 & \text{if $\lambda \in \Gamma$}, \\
0 & \text{otherwise}. \\
\end{cases}
\end{equation}
The associated invariant Hilbert scheme $\Hilb^T_h(\bA^N)$ is called the {\bf toric Hilbert scheme};
it has been constructed by Peeva and Stillman (see \cite{PS02}) prior to the more general construction
of multigraded Hilbert schemes. Since $\Hilb^T_h(\bA^N)$ only depends on $T$ and
$\ulambda = (\lambda_1,\ldots,\lambda_N)$, we will denote it by $\Hilb^T(\ulambda)$.
\noindent
(ii) Let $G = \SL_2$ and take for $X$ the simple $G$-module $V(n)$. Then $X$ contains
a distinguished closed $G$-stable subvariety $Z$, consisting of the $n$-th powers of
linear forms. In other words, $Z$ is the affine cone over the image of the $n$-uple
embedding of $\bP^1$ in $\bP^n = \bP\big( V(n) \big)$. Since that image is the unique closed $G$-orbit,
$Z$ is the smallest non-zero closed $G$-stable subcone of $V(n)$. Also, $Z$ is a normal surface with
singular locus the origin if $n \geq 2$, while $Z = V(1)$ if $n = 1$. Moreover, the Hilbert function
of $Z$ is given by
\begin{equation}\label{eqn:hn}
h_n(m) =
\begin{cases}
1 & \text{if $m$ is a multiple of $n$}, \\
0 & \text{otherwise}. \\
\end{cases}
\end{equation}
As we will show, the corresponding invariant Hilbert scheme is the affine line if $n = 2$ or $4$;
in both cases, the universal family is that constructed in Example \ref{ex:fam}(ii). For all other values
of $n$, the invariant Hilbert scheme consists of the (reduced) point $Z$.
\noindent
(iii) More generally, let $G$ be an arbitrary connected reductive group,
$V= V(\lambda)$ a non-trivial simple $G$-module, and $v = v_{\lambda}$ a highest weight vector.
Then the corresponding point $[v]$ of the projective space $\bP(V)$ is the unique $B$-fixed point.
Hence $G \cdot [v]$ is the unique closed $G$-orbit in $\bP(V)$, by Borel's fixed point theorem. Thus,
$$
Z := \overline{G \cdot v} = G \cdot v \cup \{0\}
$$
is the smallest non-zero $G$-stable cone in $V$: the {\bf cone of highest weight vectors}.
Moreover, we have an isomorphism of graded $G$-modules
\begin{equation}\label{eqn:hwc}
\cO(Z) \cong \bigoplus_{n=0}^{\infty} V(n\lambda)^*
\end{equation}
where $V(n\lambda)^*$ has degree $n$. Thus, denoting by $\lambda^*$ the highest weight of the
simple $G$-module $V(\lambda)^*$, we see that the $T$-algebra $\cO(Z)^U$ is a polynomial ring
in one variable of weight $\lambda^*$. In particular, $Z/\!/U$ is normal, and hence $Z$ is a
spherical variety. Its Hilbert function is given by
$$
h_{\lambda}(\mu) =
\begin{cases}
1 & \text{if $\mu$ is a multiple of $\lambda^*$}, \\
0 & \text{otherwise}. \\
\end{cases}
$$
Again, it turns out that the corresponding invariant Hilbert scheme is the affine line for
certain dominant weights $\lambda$, and is trivial (i.e., consists of the reduced point $Z$)
for all other weights. This result is due to Jansou (see \cite[Th\'eor\`eme 1.1]{Ja07}),
who also constructed the universal family in the non-trivial cases, as follows.
Assume that the $G$-module $V(\lambda) \oplus V(0) \cong V(\lambda) \times \bA^1$
carries a linear action of a connected reductive group $\tG \supset G$. Assume moreover
that this $\tG$-module is simple, say $V(\tlambda)$, and that the associated cone of highest weight vectors
$\tZ \subset V(\tlambda)$ satisfies $Z = \tZ \cap V(\lambda)$ as schemes.
Then the projection $p : \tZ \to \bA^1$ is a flat family of $G$-subschemes of $V(\lambda)$
with fiber $Z$ at $0$, and hence has Hilbert function $h_{\lambda}$. By \cite[Section 2.2]{Ja07},
this is in fact the universal family; moreover, all non-trivial cases are obtained from this construction.
One easily shows that the projectivization $\tY := \tG \cdot [v]$, the closed $\tG$-orbit in
$\bP\big( V(\tlambda) \big)$, consists of two $G$-orbits: the closed orbit $Y := G \cdot [v]$,
a hyperplane section of $\tY$, and its (open affine) complement. Moreover, the projective data
$Y \subset \tY$ uniquely determine the affine data $Z \subset \tZ \subset V(\tlambda)$,
since the space of global sections of the ample divisor $Y$ on $\tY$ is $V(\tlambda)^*$.
In fact, the non-trivial cases correspond bijectively to the smooth projective varieties
where a connected algebraic group acts with two orbits, the closed one being an ample divisor
(see [loc.~cit., Section 2], based on Akhiezer's classification of certain varieties
with two orbits in \cite{Ak83}).
Returning to the case that $G = \SL_2$, the universal family for $n = 2$ is obtained by taking
$\tG = \SL_2 \times \SL_2$ where $\SL_2$ is embedded as the diagonal. Moreover,
$$
V(\tlambda) = V(1,1) = V(1) \otimes_k V(1) \cong V(2) \oplus V(0)
$$
where the latter isomorphism is as $\SL_2$-modules; also, $Y = \bP^1$ is the diagonal in
$\tY = \bP^1 \times \bP^1$.
For $n = 4$, one replaces $\SL_2$ with its quotient $\PSL_2 = \PGL_2$ (that we will keep
denoting by $G$ for simplicity), and takes $\tG = \SL_3$ where $G$ is embedded via its representation
in the $3$-dimensional space $V(2)$. Moreover, $V(\tlambda)$ is the symmetric square
of the standard representation $k^3$ of $\tG$, so that
$$
V(\tlambda) \cong \Sym^2 \big( V(2) \big) \cong V(4) \oplus V(0)
$$
as $G$-modules. Here $Y = \bP^1$ is embedded in $\tY \cong \bP^2$ as a conic.
\noindent
(iv) As another generalization of (ii) above, take again $G = \SL_2$ and $X = V(n)$. Assume that $n = 2m$
is even and consider the function $h = h_4$ if $m$ is even, and $h = h_2$ if $m$ is odd.
The invariant Hilbert scheme $\Hilb^G_h(X)$ has been studied in detail by Budmiger in his thesis \cite{Bu10}.
A closed point of that scheme is the (closed) orbit $G \cdot x^m y^m$, which in fact lies in an
irreducible component whose underlying reduced scheme is isomorphic to $\bA^1$. But $\Hilb^G_h(X)$ turns out
to be non-reduced for $m = 6$, and reducible for $m = 8$; see \cite[Section III.1]{Bu10}.
\end{examples}
\section{Basic properties}
\label{sec:bp}
\subsection{Existence}
\label{subsec:ex}
In this subsection, we show how to deduce the existence of the invariant Hilbert
scheme (Theorem \ref{thm:hilb}) from that of the multigraded Hilbert scheme,
proved in \cite{HS04}. We begin with three intermediate results which are of some
independent interest. The first one will allow us to enlarge the acting group $G$.
To state it, we need some preliminaries on \emph{induced schemes}.
Consider an inclusion of reductive groups $G \subset \tG$; then the homogeneous space
$\tG/G$ is an affine $\tG$-variety equipped with a base point, the image of $e_{\tG}$.
Let $X$ be an affine $G$-scheme of finite type. Then there exists an affine $\tG$-scheme
of finite type $\tX$, equipped with a $\tG$-morphism
$$
f: \tX \longrightarrow \tG/G
$$
such that the fiber of $f$ at the base point is isomorphic to $X$ as a $G$-scheme.
Moreover, $\tX$ is the quotient of $\tG \times X$
by the action of $G$ via
$$
g \cdot (\tg, x) := (\tg g^{-1}, g \cdot x)
$$
and this identifies $f$ with the morphism obtained from the projection $\tG \times X \to \tG$.
(These assertions follow e.g. from descent theory; see \cite[Proposition 7.1]{MFK94}
for a more general result).
The scheme $\tX$ satisfies the following property, analogous to \emph{Frobenius reciprocity}
relating induction and restriction in representation theory: For any $\tG$-scheme $\tY$,
we have an isomorphism
$$
\Mor^{\tG}(\tX, \tY) \cong \Mor^G(X, \tY)
$$
that assigns to any $\tG$-morphism $\tX \to \tY$ its restriction to $X$. The inverse isomorphism assigns to
any $G$-morphism $\varphi: X \to \tY$ the morphism
$\tG \times X \to \tY$, $(\tg,x) \mapsto \tg \cdot \varphi(x)$,
which is $G$-invariant and hence descends to a morphism $\tX \to \tY$.
Thus, $\tX$ is called an {\bf induced scheme}; we denote it by $\tG \times^G X$.
Taking for $\tY$ a $\tG$-module $\tV$, we obtain an isomorphism
\begin{equation}\label{eqn:frob}
\Hom^{\tG}\big( \tV, \cO(\tX) \big) \cong \Hom^G\big( \tV, \cO(X) \big).
\end{equation}
Also, we have isomorphisms of $\tG$-modules
$$
\cO(\tX) \cong \cO(\tG \times X)^G \cong \big( \cO(\tG) \otimes_k \cO(X) \big)^G \cong
\Ind_G^{\tG} \big( \cO(X) \big)
$$
where $\Ind_G^{\tG}$ denotes the induction functor from $G$-modules to $\tG$-modules.
We may now state our first reduction result:
\begin{lemma}\label{lem:ind}
Let $G \subset \tG$ be an inclusion of reductive groups, $X$ an affine $G$-scheme
of finite type, and $\tX := \tG \times^G X$.
Let $\cZ \subset X \times S$ be a flat family of closed $G$-stable subschemes with
Hilbert function $h$. Then
$$
\tcZ := \tG \times ^G \cZ \subset \tX \times S
$$
is a flat family of closed $\tG$-stable subschemes, having a Hilbert function $\th$
that depends only on $h$. Moreover, if $Hilb^{\tG}_{\th}(\tX)$ is represented by a scheme
$\cH$, then $Hilb^G_h(X)$ is represented by a union of connected components of $\cH$.
\end{lemma}
\begin{proof}
Consider a simple $\tG$-module $\tM$ and the associated sheaf of covariants
$$
\cF_{\tM} = \Hom^{\tG}(\tM, \tp_*\cO_{\tG \times^G \cZ})
$$
where $\tp : \tG \times^G \cZ \to S$ denotes the projection. Using (\ref{eqn:frob}), we obtain
$$
\cF_{\tM} \cong \Hom^G(\tM, p_*\cO_{\cZ}).
$$
Thus, the sheaf of $\cO_S$-modules $\cF_{\tM}$ is locally free of rank
$$
\sum_{M \in \Irr(G)} \dim \Hom^G(M,\tM) \; h(M) =: \th(\tM).
$$
It follows that $\tG \times^G \cZ$ is flat over $S$, with the Hilbert function $\th$ just defined.
This shows the first assertion, and defines a morphism of functors
$Hilb^G_h(X) \to Hilb^{\tG}_{\th}(\tX)$, $\cZ \mapsto \tcZ := \tG \times^G \cZ$.
Next, consider a family of closed $\tG$-stable subschemes $\tcZ \subset \tX \times S$.
Then the composite morphism
$$
\CD
\tcZ @>{\tq}>> \tX @>{f}>> \tG/G
\endCD
$$
is $\tG$-equivariant. It follows that $\tcZ = \tG \times^G \cZ$
for some $G$-stable subscheme $\cZ \subset X \times S$. If $\tcZ$ is flat over $S$,
then by the preceding step, the sheaf of $\cO_S$-modules $\Hom^G(\tM, p_*\cO_{\cZ})$
is locally free for any simple $\tG$-module $\tM$. But every simple $G$-module $M$
is a submodule of some simple $\tG$-module $\tM$ (indeed, $M$ is a quotient of $\Ind_G^{\tG}(M)$,
and hence a quotient of a simple $\tG$-submodule). It follows that the sheaf of
$\cO_S$-modules $\Hom^G(M, p_*\cO_{\cZ})$ is a direct factor of $\Hom^G(\tM, p_*\cO_{\cZ})$
and hence is locally free of finite rank. Thus, $\cZ$ is flat and multiplicity-finite over $S$;
hence, if $S$ is connected, $\cZ$ has a Hilbert function $h'$ such that $\widetilde{h'} = \th$.
When $h' = h$, the assignments $\cZ \mapsto \tcZ$ and $\tcZ \mapsto \cZ$ are mutually inverse.
Taking for $S$ a connected component of $\cH$ and for $\tcZ$ the pull-back of the
universal family, we obtain the final assertion.
\end{proof}
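To make the function $\th$ of Lemma \ref{lem:ind} concrete, here is a small worked instance; the choice of groups is ours and serves only as an illustration. Take for $G$ the diagonal maximal torus $T \subset \GL_2 = \tG$, so that $\Irr(T)$ consists of the characters $\mu = (\mu_1,\mu_2) \in \bZ^2$. For a simple $\GL_2$-module $\tM$, the multiplicity $\dim \Hom^T(k_{\mu}, \tM)$ (where $k_{\mu}$ denotes the one-dimensional $T$-module of weight $\mu$) is the dimension of the $\mu$-weight space $\tM_{\mu}$, whence
$$
\th(\tM) = \sum_{\mu \in \bZ^2} \dim(\tM_{\mu}) \, h(\mu).
$$
For instance, $\tM = \Sym^2(k^2)$ has weights $(2,0)$, $(1,1)$, $(0,2)$, each with multiplicity one, so $\th(\tM) = h(2,0) + h(1,1) + h(0,2)$.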
By the preceding lemma, we may replace $G$ with $\GL_n$; in particular,
we may assume that $G$ is \emph{connected}. Our second result, a variant of
\cite[Theorem 1.7]{AB05}, will allow us to replace $G$ with a torus.
As in Subsection \ref{subsec:rg}, we choose a Borel subgroup $B \subset G$
with unipotent part $U$, and a maximal torus $T \subset B$. We consider an affine
$G$-scheme $X = \Spec(A)$ and a function $h : \Lambda^+ \to \bN$; we extend $h$ to
a function $\bar{h}: \Lambda \to \bN$ with values $0$ outside $\Lambda^+$, as
in (\ref{eqn:bh}).
\begin{lemma}\label{lem:Uinv}
With the preceding notation, assume that $Hilb^T_{\bar{h}}(X/\!/U)$ is represented
by a scheme $\cH$. Then $Hilb^G_h(X)$ is represented by a closed subscheme of $\cH$.
\end{lemma}
\begin{proof}
We closely follow the argument of \cite[Theorem 1.7]{AB05}. Given a scheme $S$ and
a flat family $\cZ \subset X \times S$ of closed $G$-stable subschemes with Hilbert
function $h$, we obtain a family $\cZ/\!/U \subset X/\!/U \times S$ of closed
$T$-stable subschemes which is again flat and has Hilbert function $\bar{h}$, by
Remark \ref{rem:fam}(iii). Observe that $\cZ/\!/U$ uniquely determines $\cZ$:
indeed, $\cZ$ corresponds to a $G$-stable sheaf of ideals
$$
\cI \subset A \otimes_k \cO_S
$$
such that each quotient $(A \otimes_k \cO_S)^U_{\lambda}/ \cI^U_{\lambda}$
is locally free of rank $\bar{h}(\lambda)$. Moreover, $\cZ/\!/U$ corresponds
to the $T$-stable sheaf of ideals
$$
\cI^U \subset A^U \otimes_k \cO_S
$$
which generates $\cI$ as a sheaf of $\cO_S$-$G$-modules.
We now determine when a given $T$-stable sheaf of ideals
$$
\cJ \subset A^U \otimes_k \cO_S,
$$
such that each quotient $(A \otimes_k \cO_S)^U_{\lambda}/ \cJ_{\lambda}$ is locally free of rank
$\bar{h}(\lambda)$, equals $\cI^U$ for some $G$-stable sheaf of ideals $\cI$ as above.
This is equivalent to the condition that the $\cO_S$-$G$-module
$$
\cI := \langle G \cdot \cJ \rangle \subset A \otimes_k \cO_S
$$
generated by $\cJ$, is a sheaf of ideals, i.e., $\cI$ is stable under
multiplication by $A$. By highest weight theory, this means that
$$
(\cI \cdot A)^U \subset \cJ.
$$
We will translate the latter condition into the vanishing of certain morphisms of
locally free sheaves over $S$, arising from the universal family of $\cH$
via the classifying morphism
$$
f: S \longrightarrow \cH.
$$
For this, consider three dominant weights $\lambda$, $\mu$, $\nu$ and a
copy of the simple $G$-module $V(\nu)$ in $V(\lambda) \otimes_k V(\mu)$,
with highest weight vector
$$
v \in \big( V(\lambda) \otimes_k V(\mu) \big)^U_{\nu}.
$$
We may write
$$
v = \sum_i c_i (g_i \cdot v_{\lambda}) \otimes (h_i \cdot v_{\mu}),
$$
a finite sum where $c_i \in k$ and $g_i, h_i \in G$. This defines a morphism
of sheaves of $\cO_S$-modules
$$
A^U_{\lambda} \otimes_k \cJ_{\mu} \longrightarrow A^U_{\nu} \otimes_k \cO_S,
\quad
a \otimes b \longmapsto \sum_i c_i (g_i \cdot a)(h_i \cdot b).
$$
Composing with the quotient by $\cJ_{\nu}$ yields a morphism of sheaves
of $\cO_S$-modules,
$$
F_v : A^U_{\lambda} \otimes_k \cJ_{\mu} \longrightarrow
(A^U_{\nu} \otimes_k \cO_S)/\cJ_{\nu}.
$$
Our condition is the vanishing of these morphisms $F_v$ for all triples
$(\lambda,\mu,\nu)$ and all $v$ as above. Now $(A^U_{\nu} \otimes_k \cO_S)/\cJ_{\nu}$
and $\cJ_{\mu}$ are the pull-backs under $f$ of the analogous locally
free sheaves on $\cH$. This shows that the Hilbert functor $Hilb^G_h(X)$ is represented
by the closed subscheme of $\cH$ obtained as the intersection of the zero loci of the $F_v$.
\end{proof}
Our final reduction step will allow us to enlarge $X$:
\begin{lemma}\label{lem:sub}
Let $X$ be a closed $G$-subscheme of an affine $G$-scheme $Y$ of finite type.
If $Hilb^G_h(Y)$ is represented by a scheme $\cH$, then $Hilb^G_h(X)$ is represented
by a closed subscheme of $\cH$.
\end{lemma}
\begin{proof}
Let $X = \Spec(A)$ and $Y = \Spec(B)$, so that we have an exact sequence
$$
0 \longrightarrow I \longrightarrow B \longrightarrow A \longrightarrow 0
$$
where $I$ is a $G$-stable ideal of $B$. For any $M \in \Irr(G)$, this yields an
exact sequence for modules of covariants over $B^G$:
$$
0 \longrightarrow I_M \longrightarrow B_M \longrightarrow A_M \longrightarrow 0.
$$
Next, consider a scheme $S$ and a flat family $p: \cZ \to S$ of closed $G$-stable
subschemes of $Y$, with Hilbert function $h$. Then each associated sheaf of covariants $\cF_M$
is a locally free quotient of $B_M \otimes_k \cO_S$, of rank $h(M)$; this defines
a linear map $q_M : B_M \to H^0(S,\cF_M)$. Moreover, $\cZ$ is contained in $X \times S$
if and only if the image of $I \otimes_k \cO_S$ in $p_* \cO_{\cZ}$ is zero; equivalently, $q_M(I_M) = 0$
for all $M \in \Irr(G)$. Taking for $p$ the universal family of $\cH$, it follows that
the invariant Hilbert functor $Hilb^G_h(X)$ is represented by the closed subscheme of $\cH$
obtained as the intersection of the zero loci of the sections in
$q_M(I_M) \subset \Gamma(\cH, \cF_M)$ for all $M \in \Irr(G)$.
\end{proof}
Summarizing, we may reduce to the case that $G$ is a maximal torus of $\GL_n$ by
combining Lemmas \ref{lem:ind} and \ref{lem:Uinv}, and then to the case that $X$ is
a finite-dimensional $G$-module by Lemma \ref{lem:sub}. Then the invariant Hilbert
scheme is exactly the multigraded one, as noted in Remark \ref{rem:tuf}(iii).
\begin{remarks}\label{rem:ex}
(i) The proof of Lemma \ref{lem:Uinv} actually shows that the invariant Hilbert functor
$Hilb^G_h(X)$ is a closed subfunctor of $Hilb^T_{\bar{h}}(X/\!/U)$. Likewise, in the
setting of Lemma \ref{lem:ind} (resp. of Lemma \ref{lem:sub}), $Hilb^G_h(X)$ is a closed
subfunctor of $Hilb^{\tG}_{\th}(\tX)$ (resp. of $Hilb^G_h(Y)$).
\noindent
(ii) The arguments of this subsection establish the existence of the invariant Hilbert scheme
over any field of characteristic $0$. Indeed, highest weight theory holds for $\GL_n$ in
that setting, whereas it fails for non-split reductive groups.
\end{remarks}
\subsection{Zariski tangent space}
\label{subsec:zts}
In this subsection, we consider a reductive group $G$, an affine $G$-scheme of finite type
$X = \Spec(A)$, and a function $h : \Irr(G) \to \bN$. We study the Zariski tangent space
$T_Z \Hilb^G_h(X)$ to the invariant Hilbert scheme at an arbitrary closed point $Z$, i.e.,
at a closed $G$-stable subscheme of $X$ with Hilbert function $h$. As a first step, we obtain:
\begin{proposition}\label{prop:zts}
With the preceding notation, we have
\begin{equation}\label{eqn:zts}
T_Z \Hilb_h^G(X) = \Hom^G_A(I, A/I)
= \Hom^G_{\cO(Z)}\big( I/I^2,\cO(Z) \big)
\end{equation}
where $I\subset A$ denotes the ideal of $Z$, and $\Hom^G_A$ stands for the space of $A$-linear,
$G$-equivariant maps.
\end{proposition}
Indeed, $\Hom_A(I, A/I)$ parametrizes the {\bf first-order deformations of $Z$ in $X$},
i.e., those closed subschemes
$$
\cZ \subset X \times \Spec\, k[\varepsilon]
$$
(where $\varepsilon^2 = 0$) that are flat over $\Spec \, k[\varepsilon]$ and satisfy $\cZ_s = Z$
where $s$ denotes the closed point of $\Spec \, k[\varepsilon]$ (see e.g. \cite[Section 3.2]{Se06}).
The subspace $\Hom_A^G(I, A/I)$ parametrizes the $G$-stable deformations.
\begin{example}\label{ex:m2r}
Let $G = \SL_2$ and $X := r V(2)$ (the direct sum of $r$ copies of $V(2)$), where $r$ is a positive integer.
We consider the invariant Hilbert scheme $\Hilb^G_h(X)$, where $h = h_2$ is as defined in
(\ref{eqn:hn}), and show that the Zariski tangent space at any closed point $Z$ has dimension $r$.
Indeed, the $r$ projections $p_1,\ldots,p_r : Z \to V(2)$ are all proportional, since $h_2(2) = 1$.
Thus, after replacing $p_1,\ldots,p_r$ with suitable linear combinations, we may assume that $Z$ is contained in the first copy of $V(2)$.
Then the condition that $h_2(0) =1$ implies that $Z$ is contained in the scheme-theoretic fiber
of the discriminant $\Delta$ at some scalar $t$. Since that fiber has also Hilbert function $h_2$,
we see that equality holds: the ideal of $Z$ satisfies
$$
I = \big( \Delta(p_1) - t, p_2, \ldots, p_r \big).
$$
In particular, $Z$ is a complete intersection in $X$, and the $\cO(Z)$-$G$-module $I/I^2$
is freely generated by the images of $\Delta(p_1) - t, p_2, \ldots, p_r$.
This yields an isomorphism of $\cO(Z)$-$G$-modules
$$
I/I^2 \cong \cO(Z) \otimes_k \big( V(0) \oplus (r-1) V(2) \big).
$$
As a consequence,
$$
\Hom_{\cO(Z)}^G\big( I/I^2, \cO(Z) \big) \cong
\Hom^G\big( V(0) \oplus (r-1) V(2), \cO(Z) \big)
$$
has dimension $h_2(0) + (r - 1) h_2(2) = r$. Together with (\ref{eqn:zts}), this implies the statement.
In fact, $\Hilb^G_h(X)$ is a smooth irreducible variety of dimension $r$, as we will see in
Example \ref{ex:aut}(iv). Specifically, $\Hilb^G_h(X)$ is the total space of the line bundle of degree
$-2$ on $\bP^{r-1}$, see Example \ref{ex:qsm}(iv).
\end{example}
The isomorphism (\ref{eqn:zts}) is the starting point of a local analysis of the
invariant Hilbert scheme, in relation to deformation theory (for the latter, see \cite{Se06}).
We will present a basic and very useful result in that direction; it relies on the following:
\begin{lemma}\label{lem:prof}
Let $\cM$ be a coherent sheaf on an affine scheme $Z$, and $M = H^0(Z,\cM)$ the associated
finitely generated module over $R := \cO(Z)$. Let $Z_0 \subset Z$ be a dense open subscheme
and denote by $\iota : Z_0 \to Z$ the inclusion map. Then the pull-back
$$
\iota^* : \Hom_R(M,R) = \Hom_Z(\cM,\cO_Z) \longrightarrow
\Hom_{Z_0}(\cM_{\vert Z_0},\cO_{Z_0})
$$
is injective.
If $Z$ is a normal irreducible variety and the complement $Z \setminus Z_0$ has codimension $\geq 2$
in $Z$, then $\iota^*$ is an isomorphism.
\end{lemma}
\begin{proof}
Choose a presentation of the $R$-module $M$,
$$
\CD
R^m @>{A}>> R^n @>>> M @>>> 0,
\endCD
$$
where $A$ is a matrix with entries in $R$. This yields an exact sequence of $R$-modules
$$
\CD
0 @>>> \Hom_R(M,R) @>>> R^n @>{B}>> R^m
\endCD
$$
where $B$ denotes the transpose of $A$. In other words,
$\Hom_R(M,R)$ consists of those $(f_1,\ldots,f_n) \in R^n$
that are killed by $B$. This implies both assertions,
since $\iota^* : R = \cO(Z) \to \cO(Z_0)$ is injective,
and is an isomorphism under the additional assumptions.
\end{proof}
We may now obtain a more concrete description of the Zariski tangent space at
a $G$-orbit closure:
\begin{proposition}\label{prop:orb}
Let $G$ be a reductive group, $V$ a finite-dimensional $G$-module, and $v$ a point of $V$.
Denote by $Z \subset V$ the closure of the orbit $G \cdot v$ and by $h$ the Hilbert function
of $\cO(Z)$. Let $G_v \subset G$ be the isotropy group of $v$, and $\fg$ the Lie algebra of $G$.
Then
\begin{equation}\label{eqn:orb}
T_Z \Hilb^G_h(V) \hookrightarrow \big(V/ \fg \cdot v \big)^{G_v}
\end{equation}
where $G_v$ acts on $V/ \fg \cdot v$ via its linear action on $V$ which
stabilizes the subspace $\fg \cdot v = T_v(G \cdot v)$.
Moreover, equality holds in (\ref{eqn:orb}) if $Z$ is normal and the boundary
$Z \setminus G \cdot v$ has codimension $\geq 2$ in $Z$.
\end{proposition}
\begin{proof}
We apply Proposition \ref{prop:zts} and Lemma \ref{lem:prof} by taking
$M = I/I^2$ and $Z_0 = G \cdot v$. This yields an injection of $T_Z \Hilb^G_h(V)$
into
$$
W := \Hom^G_{Z_0}\big( (\cI/\cI^2)_{\vert Z_0}, \cO_{Z_0} \big),
$$
where $\cI$ denotes the ideal sheaf of $Z$ in $V$. Moreover, $T_Z \Hilb^G_h(V) = W$
under the additional assumptions. Since $Z_0$ is a smooth subvariety of the smooth
irreducible variety $V$, the conormal sheaf $(\cI/\cI^2)_{\vert Z_0}$ is locally free.
Denoting the dual (normal) sheaf by $\cN_{Z_0/V}$, we have
$$
W = H^0(Z_0,\cN_{Z_0/V})^G.
$$
But $\cN_{Z_0/V}$ is the sheaf of local sections of the normal bundle, and
the total space of that bundle is the $G$-variety $G \times^{G_v} \cN_{Z_0/V,v}$
equipped with the projection to $G/G_v = G \cdot v$. Moreover, we have isomorphisms
of $G_v$-modules
$$
\cN_{Z_0/V, v} \cong T_v(V)/T_v(G \cdot v) \cong V/\fg \cdot v.
$$
It follows that
$$
H^0(Z_0,\cN_{Z_0/V})^G \cong \big( \cO(G/G_v) \otimes_k \cN_{Z_0/V,v} \big)^G
\cong \Mor^G(G/G_v,\cN_{Z_0/V, v}) \cong \cN_{Z_0/V, v}^{G_v}.
$$
This implies our assertions.
\end{proof}
We refer to \cite[Section 1.4]{AB05} for further developments along these lines,
including a relation to (non-embedded) first-order deformations,
and to \cite[Section 3]{PvS10} for generalizations where the boundary may have irreducible components
of codimension $1$. The obstruction space for $G$-invariant deformations is considered in
\cite[Section 3.5]{Cu09}.
\begin{examples}\label{ex:zts}
(i) Let $\ulambda = (\lambda_1,\ldots,\lambda_N)$ be a list of pairwise distinct weights of a torus $T$,
and $\Hilb^T(\ulambda)$ the associated \emph{toric Hilbert scheme} as in Example \ref{ex:tuf}(i).
Let $Z = \overline{T \cdot v}$ where $v$ is a general point of $V= \bA^N$
(i.e., all of its coordinates are non-zero). Then the stabilizer $T_v$ is the kernel
of the homomorphism
\begin{equation}\label{eqn:ulambda}
\ulambda : T \longrightarrow (\bG_m)^N, \quad
t \longmapsto \big( \lambda_1(t),\ldots,\lambda_N(t) \big)
\end{equation}
and hence acts trivially on $V$. Thus, the preceding proposition just yields an inclusion
$$
\iota : T_Z \Hilb^T(\ulambda) \hookrightarrow V/ \ft \cdot v
$$
where $\ft$ denotes the Lie algebra of $T$.
In fact, \emph{$\iota$ is an isomorphism}. Indeed, moving $v$ among the general points defines
a family $p : \cZ \to (\bG_m)^N$ of $T$-orbit closures in $V$, and hence a morphism
$f: (\bG_m)^N \to \Hilb^T(\ulambda)$. Moreover, the differential of $f$
at $v$ composed with $\iota$ yields the quotient map $V \to V/\ft \cdot v$; hence
$\iota$ is surjective. (See Example \ref{ex:aut}(i) for another version of this argument,
based on the natural action of $(\bG_m)^N$ on $\Hilb^T(\ulambda)$.)
\noindent
(ii) As in Example \ref{ex:tuf}(ii), let $G = \SL_2$, $X = V(n)$ and $Z$ the variety of $n$-th powers
of linear forms. Then $Z = \overline{G \cdot v} = G \cdot v \cup \{0\}$, where $v := y^n$ is
a highest weight vector; moreover, $Z$ is a normal surface. Thus, we may apply the preceding
proposition to determine $T_Z \Hilb^G_h\big( V(n) \big)$, where $h = h_n$ is the function
(\ref{eqn:hn}).
The stabilizer $G_{y^n}$ is the semi-direct product of the additive group $U$ (acting via
$x \mapsto x + t y$, $y \mapsto y$) with the group $\mu_n$ of $n$-th roots of unity acting via
$x \mapsto \zeta x$, $y \mapsto \zeta^{-1} y$. Also, $\fg \cdot v$ is spanned by the monomials
$y^n$ and $xy^{n-1}$, and $V/\fg \cdot v$ has basis the images of the remaining monomials
$x^n, x^{n-1}y, \ldots, x^2y^{n-2}$. It follows that $(V/\fg \cdot v)^U$ is spanned by the image of
$x^2 y^{n-2}$; the latter is fixed by $\mu_n$ if and only if $n = 2$ or $n = 4$. We thus obtain:
$$
T_Z \Hilb^G_h(V) =
\begin{cases}
k & \text{if $n = 2$ or $4$}, \\
0 & \text{otherwise}. \\
\end{cases}
$$
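The case distinction above follows from a direct weight computation with the $\mu_n$-action just described: for $\zeta \in \mu_n$,
$$
\zeta \cdot x^2 y^{n-2} = (\zeta x)^2 (\zeta^{-1} y)^{n-2} = \zeta^{4-n} \, x^2 y^{n-2},
$$
so the image of $x^2 y^{n-2}$ is fixed by all of $\mu_n$ if and only if $\zeta^{4-n} = 1$ for every $n$-th root of unity $\zeta$, i.e., $n$ divides $4$; for $n \geq 2$ this means exactly $n = 2$ or $n = 4$.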
\noindent
(iii) More generally, let $G$ be an arbitrary connected reductive group, $V= V(\lambda)$
a simple $G$-module of dimension $\geq 2$, $v = v_{\lambda}$ a highest weight vector,
and $Z = \overline{G \cdot v}$ as in Example \ref{ex:tuf}(iii). Then the stabilizer
of the highest weight line $[v]$ is a parabolic subgroup $P \supset B$, and the
character $\lambda$ of $B$ extends to $P$; moreover, $G_v$ is the kernel of that extended
character. Also, $Z$ is normal and its boundary (the origin) has codimension $\geq 2$.
Thus, Proposition \ref{prop:orb} still applies to this situation. Combined with
arguments of combinatorial representation theory, it yields that $T_Z \Hilb^G_h(V) = 0$
unless $\lambda$ belongs to an explicit (and rather small) list of dominant weights;
in that case, $T_Z \Hilb^G_h(V) = k$ (see \cite[Section 1.3]{Ja07}).
\end{examples}
\subsection{Action of equivariant automorphism groups}
\label{subsec:aeag}
As in the previous subsection, we fix an affine $G$-scheme of finite type $X$ and a function
$h: \Irr(G) \to \bN$. We obtain a natural equivariance property of the corresponding
invariant Hilbert scheme.
\begin{proposition}\label{prop:aut}
Let $H$ be an algebraic group, and $\beta : H \times X \to X$ an action by $G$-automorphisms.
Then $\beta$ induces an $H$-action on $\Hilb^G_h(X)$ that stabilizes the universal family
$\Univ^G_h(X) \subset X \times \Hilb^G_h(X)$.
\end{proposition}
\begin{proof}
Given a flat family of $G$-stable subschemes $\cZ \subset X \times S$ with Hilbert function $h$,
we construct a flat family of $G$-stable subschemes
$$
\cW \subset H \times X \times S
$$
with the same Hilbert function, where $G$ acts on $H \times X \times S$ via
$g \cdot (h,x,s) = (h, g \cdot x, s)$. For this, form the cartesian square
\begin{equation}\label{eqn:act}
\CD
\cW @>>> H \times X \times S \\
@VVV @V{\beta \times \id_S}VV \\
\cZ @>>> X \times S. \\
\endCD
\end{equation}
Then $\cW$ is a closed subscheme of $H \times X \times S$, stable under the given $G$-action
since $\beta$ is $G$-equivariant. Moreover, the map
$$
H \times X \longrightarrow H \times X, \quad (h,x) \longmapsto (h, h \cdot x)
$$
is an isomorphism; thus, the square
\begin{equation}\label{eqn:com}
\CD
H \times X \times S @>>> H \times S \\
@V{\beta \times \id_S}VV @VVV \\
X \times S @>>> S \\
\endCD
\end{equation}
(where the non-labeled arrows are the projections) is cartesian. By composing
(\ref{eqn:act}) and (\ref{eqn:com}), we obtain a cartesian square
$$
\CD
\cW @>>> H \times S \\
@VVV @VVV \\
\cZ @>>> S \\
\endCD
$$
i.e., an isomorphism
$$
\cW \cong (H \times S) \times_S \cZ \cong H \times \cZ
$$
where the morphism $H \times \cZ \to \cW$ is given by $(h,z) \mapsto \beta(h^{-1}, z)$.
It follows that $\cW$ is flat over $H \times S$ with Hilbert function $h$.
Applying this construction to $\cZ = \Univ^G_h(X)$ and $S = \Hilb^G_h(X)$ yields a flat family
$\cW$ with Hilbert function $h$ and base $H \times \Hilb^G_h(X)$, and hence a morphism of schemes
$$
\varphi : H \times \Hilb^G_h(X) \longrightarrow \Hilb^G_h(X)
$$
such that $\cW$ is the pull-back of the universal family. Since the composite morphism
$$
\CD
X @>{e_H \times \id_X}>> H \times X @>{\beta}>> X
\endCD
$$
is the identity, it follows that the same holds for the composite
$$
\CD
\Hilb^G_h(X) @>{e_H \times \id_{\Hilb^G_h(X)}}>> H \times \Hilb^G_h(X) @>{\varphi}>> \Hilb^G_h(X).
\endCD
$$
Likewise, $\varphi$ satisfies the associativity property (\ref{eqn:asso}).
Thus, $\varphi$ is an $H$-action on $\Hilb^G_h(X)$.
To show that the induced $H$-action on $X \times \Hilb^G_h(X)$ stabilizes the closed
subscheme $\Univ^G_h(X)$, note that $\cW \subset H \times X \times \Hilb^G_h(X) $ is the closed
subscheme defined by $\big( \beta(h^{-1}, x), s \big) \in \Univ^G_h(X)$ with an obvious notation
(as follows from the cartesian square (\ref{eqn:act})). But $\cW$ is also defined by
$\big( x, \varphi(h,s) \big) \in \Univ^G_h(X)$ (since it is the pull-back of $\Univ^G_h(X)$).
This yields the desired statement.
\end{proof}
In the case that $X$ is a finite-dimensional $G$-module, say $V$, a natural choice for $H$ is
the automorphism group of the $G$-module $V$, i.e., the centralizer of $G$ in $\GL(V)$;
we denote that group by $\GL(V)^G$. To describe it, consider the isotypical decomposition
\begin{equation}\label{eqn:is}
V \cong m_1 V(\lambda_1) \oplus \cdots \oplus m_r V(\lambda_r),
\end{equation}
where $\lambda_1, \ldots, \lambda_r$ are pairwise distinct dominant weights, and the multiplicities
$m_1, \ldots, m_r$ are positive integers. By Schur's lemma, $\GL(V)^G$ preserves each
isotypical component $m_i V(\lambda_i) \cong k^{m_i} \otimes_k V(\lambda_i)$ and acts there
via a homomorphism to $\GL_{m_i}$. It follows that
$$
H \cong \GL_{m_1} \times \cdots \times \GL_{m_r}.
$$
In particular, the center of $\GL(V)^G$ is a torus $(\bG_m)^r$ acting on $V$ via
$$
(t_1,\ldots,t_r) \cdot (v_1, \ldots,v_r) = (t_1 v_1, \ldots, t_r v_r)
\quad \text{where} \quad v_i \in m_i V(\lambda_i).
$$
\begin{examples}\label{ex:aut}
(i) For a torus $T$ acting linearly on $V = \bA^N$ via pairwise distinct weights
$\lambda_1,\ldots,\lambda_N$, the group $\GL(V)^T$ is just the diagonal torus
$(\bG_m)^N$. In particular, this yields an action of $(\bG_m)^N$ on the toric
Hilbert scheme $\Hilb^T(\ulambda)$ where $\ulambda =(\lambda_1,\ldots,\lambda_N)$.
The stabilizer of a general orbit closure $Z = \overline{T \cdot v}$ is the image of the homomorphism
$\ulambda : T \to (\bG_m)^N$ (\ref{eqn:ulambda}) with kernel $T_v$. Thus, the orbit $(\bG_m)^N \cdot Z$
is a smooth subvariety of $\Hilb^T(\ulambda)$, of dimension
$$
N - \dim \ulambda(T) = N - \dim(T) + \dim(T_v) = \dim(V/\ft \cdot v).
$$
Since $\dim T_Z \Hilb^T(\ulambda) \leq \dim(V/\ft \cdot v)$
by Example \ref{ex:zts}(i), it follows that equality holds. We conclude that
\emph{the orbit $(\bG_m)^N \cdot Z$ is open in $\Hilb^T(\ulambda)$}.
As a consequence, the closure of that orbit is an irreducible component of the toric Hilbert scheme,
equipped with its reduced subscheme structure: the {\bf main component}, also called the
{\bf coherent component}. Its points are the general $T$-orbit closures and their flat limits
as closed subschemes of $\bA^N$. The normalization of the main component is a quasi-projective toric
variety under the quotient torus $(\bG_m)^N/\ulambda(T)$.
In particular, the main component is a point if and only if the homomorphism $\ulambda$
is surjective, i.e., the weights $\lambda_1,\ldots,\lambda_N$ are linearly independent.
In that case, one easily sees that the whole toric Hilbert scheme is a (reduced) point.
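Here is a short argument for the last claim, under the assumption (made explicit here) that the relevant Hilbert function takes the value $1$ on the monoid $\Gamma$ and $0$ elsewhere, as for a general $T$-orbit closure. If $\lambda_1,\ldots,\lambda_N$ are linearly independent, then distinct monomials in $\cO(\bA^N)$ have distinct $T$-weights, so every weight space $\cO(\bA^N)_{\lambda}$ is at most one-dimensional, and is one-dimensional exactly when $\lambda \in \Gamma$. A $T$-stable ideal $I$ with the above Hilbert function therefore satisfies $I_{\lambda} = 0$ for every $\lambda$, i.e., $I = 0$; likewise, for a family $\cJ \subset \cO(\bA^N) \otimes_k \cO_S$ over any base $S$, each $\cJ_{\lambda}$ vanishes, because the quotient $(\cO(\bA^N)_{\lambda} \otimes_k \cO_S)/\cJ_{\lambda}$ is locally free of the same rank as the ambient sheaf. Thus the only closed point is $Z = \bA^N$, and the toric Hilbert scheme is a reduced point.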
\noindent
(ii) The automorphism group of the simple $\SL_2$-module $V(n)$ is just $\bG_m$
acting by scalar multiplication. For the induced action on the invariant Hilbert scheme,
a closed point is fixed if and only if its ideal is homogeneous. If the Hilbert function is
the function $h_n$ defined in (\ref{eqn:hn}), then there is a unique such ideal $I$:
that of the variety of $n$-th powers. Indeed, we have an isomorphism of $\SL_2$-modules
$$
\cO\big(V(n)\big)/I \cong \Sym \big(V(n) \big)/I \cong \bigoplus_{m=0}^{\infty} V(mn).
$$
Moreover, the graded $G$-algebra $\Sym \big(V(n) \big)/I$ is generated by (the image of) $V(n)$,
its component of degree $1$. By an easy induction on $m$, it follows that the component
of an arbitrary degree $m$ of that algebra is isomorphic to $V(mn)$. But the $G$-submodule
$V(mn) \subset \Sym^m \big( V(n) \big)$
has a unique $G$-stable complement: the sum of all other simple submodules. This determines each
homogeneous component of the graded ideal $I$.
\noindent
(iii) The preceding argument adapts to show that the cone of highest weight vectors is the unique
fixed point for the $\bG_m$-action on $\Hilb_{h_{\lambda}}^G\big( V(\lambda) \big)$, with the notation
of Example \ref{ex:tuf}(iii).
\noindent
(iv) As in Example \ref{ex:m2r}, let $G = \SL_2$, $V = r V(2)$ and $h = h_2$.
Then $H = \GL(V)^G$ is the general linear group $\GL_r$ acting on $V \cong k^r \otimes_k V(2)$ via its standard
action on $k^r$. For the induced action of $H$ on $\Hilb^G_h(V)$, the closed points form two orbits
$\Omega_1$, $\Omega_0$, with representatives $Z_1$, $Z_0$ defined by
$\Delta(p_1) = 1$ (resp. $\Delta(p_1) = 0$) and $p_2 = \cdots = p_r = 0$.
One easily checks that the isotropy group $H_{Z_0}$ is the parabolic subgroup of $\GL_r$ that stabilizes
the first coordinate line $ke_1$; as a consequence, $\Omega_0 \cong \bP^{r-1}$. Also, $H_{Z_1}$ is the
stabilizer of $\pm e_1$, and hence $\Omega_1 \cong (\bA^r \setminus \{0\})/\pm 1$ where $\GL_r$ acts
linearly on $\bA^r$. Since the family $\cZ$ of Example \ref{ex:fam}(ii) has general fibers in $\Omega_1$
and special fiber in $\Omega_0$, we see that $\Omega_0$ is contained in the closure of $\Omega_1$.
As a consequence, \emph{$\Hilb^G_h(V)$ is irreducible of dimension $r$}. Since its Zariski tangent space has
dimension $r$ at each closed point, it follows that \emph{$\Hilb^G_h(V)$ is a smooth variety}.
\end{examples}
\subsection{The quotient-scheme map}
\label{subsec:qsm}
We keep the notation of the previous subsection, and consider a family of $G$-stable closed
subschemes $\cZ \subset X \times S$ over some scheme $S$. Recall that the sheaf of $\cO_S$-algebras
$\cF_0 = (p_* \cO_{\cZ})^G$ is a quotient of $A^G \otimes_k \cO_S$ where $A = \cO(X)$.
This defines a family of closed subschemes
$$
\cZ/\!/G \subset X/\!/ G \times S
$$
where we recall that $X/\!/G = \Spec(A^G)$. If $\cZ$ is flat over $S$ with Hilbert function $h$,
then $\cF_0$ is locally free over $S$, of rank
$$
n := h(0).
$$
Thus, $\cZ/\!/G$ defines a morphism to the punctual Hilbert scheme of the quotient,
$$
f: S \longrightarrow \Hilb_n(X/\!/G).
$$
Applying this construction to the universal family yields a morphism
\begin{equation}\label{eqn:qsm}
\gamma : \Hilb^G_h(X) \longrightarrow \Hilb_n(X/\!/G)
\end{equation}
that we call the {\bf quotient-scheme map}.
\begin{proposition}\label{prop:proj}
With the preceding notation, the morphism (\ref{eqn:qsm}) is projective.
In particular, if the scheme $X/\!/G$ is finite, or equivalently, if $X$
contains only finitely many closed $G$-orbits, then $\Hilb^G_h(X)$ is projective.
\end{proposition}
\begin{proof}
Since $\Hilb^G_h(X)$ is quasi-projective, it suffices to show that $\gamma$ is proper.
For this, we use the valuative criterion of properness for schemes of finite type:
let $R$ be a discrete valuation ring containing $k$ and denote by $K$ the fraction field of $R$.
Let $\cZ_K \in \Hilb_h^G(X)(K)$ and assume that $\gamma(\cZ_K) \in \Hilb_n(X/\!/G)(K)$
admits a lift to $\Hilb_n(X/\!/G)(R)$. Then we have to show that $\cZ_K$ admits a lift to
$\Hilb_h^G(X)(R)$.
In other words, we have a family
$$
\cZ_K \subset X \times \Spec(K)
$$
of closed $G$-stable subschemes with Hilbert function $h$, such that the family
$$
\cZ_K/\!/G \subset X/\!/G \times \Spec(K)
$$
extends to a family in $X/\!/G \times \Spec(R)$. These data correspond to a $G$-stable ideal
$$
I_K \subset A \otimes_k K
$$
such that $I_K^G \subset A^G \otimes_k K$ equals $J \otimes_R K$, where
$$
J \subset A^G \otimes_k R
$$
is an ideal such that the $R$-module $(A^G \otimes_k R)/J$ is free of rank $h(0)$. Then
$$
J \subset (A^G \otimes_k R) \cap (J \otimes_R K),
$$
where the intersection is taken in $A^G \otimes_k K$. Moreover, the quotient
$R$-module $\big( (A^G \otimes_k R) \cap (J \otimes_R K) \big)/J$ is torsion; on the other hand,
this quotient is contained in the free $R$-module $(A^G \otimes_k R)/J$. Thus,
$$
J = (A^G \otimes_k R) \cap (J \otimes_R K) = (A^G \otimes_k R) \cap I_K.
$$
We now consider
$$
I := (A \otimes_k R) \cap I_K.
$$
Clearly, $I$ is an ideal of $A \otimes_k R$ satisfying $I^G = J$ and
$I \otimes_R K = I_K$. Thus, the $R$-module
$$
\big( (A \otimes_k R)/I \big)^G = (A^G \otimes_k R)/I^G
$$
is free of rank $n$. Moreover, each module of covariants
$\big( (A \otimes_k R)/I \big)_M$ is finitely generated over
$\big( (A \otimes_k R)/I \big)^G$, and torsion-free by construction.
Hence the $R$-module $\big( (A \otimes_k R)/I \big)_M$ is free;
tensoring with $K$, we see that its rank is $h(M)$. Thus, $I$
corresponds to an $R$-point of $\Hilb_h^G(X)$, which is the desired lift.
\end{proof}
\begin{remarks}\label{rem:hs}
(i) The morphism (\ref{eqn:qsm}) is analogous to the {\bf Hilbert-Chow morphism}, or
{\bf cycle map}, that associates with any closed subscheme $Z$ of the projective space $\bP^N$,
the support of $Z$ (with multiplicities) viewed as a point of the Chow variety of $\bP^N$.
The cycle map may be refined in the setting of punctual Hilbert schemes: given a quasi-projective
scheme $X$ and a positive integer $n$, there is a natural morphism
\begin{equation}\label{eqn:cm}
\varphi_n : \Hilb_n(X) \longrightarrow X^{(n)}
\end{equation}
where $X^{(n)}$ denotes the {\bf $n$-th symmetric product of $X$}, i.e., the quotient of the
product $X \times \cdots \times X$ ($n$ factors) by the action of the symmetric group
$S_n$ that permutes the factors; this is a quasi-projective scheme with closed points the
effective $0$-cycles of degree $n$ on $X$. Moreover, $\varphi_n$ induces the cycle map on closed
points, and is a projective morphism (for these results, see \cite[Theorem 2.16]{Be08}).
In the setting of Proposition \ref{prop:proj}, let
\begin{equation}\label{eqn:hccm}
\delta : \Hilb^G_h(X) \longrightarrow (X/\!/G)^{(n)}
\end{equation}
denote the composite of (\ref{eqn:qsm}) with (\ref{eqn:cm}). Then $\delta$ is a projective morphism,
and $(X/\!/G)^{(n)}$ is affine. As a consequence,
\emph{the invariant Hilbert scheme is projective over an affine scheme.}
\noindent
(ii) The quotient-scheme map satisfies a natural equivariance property: with the notation and assumptions of
Proposition \ref{prop:aut}, the $H$-action on $X$ induces an action on $X/\!/G$ so that the quotient morphism
$\pi$ is equivariant. This yields in turn an $H$-action on the punctual Hilbert scheme $\Hilb_n(X/\!/G)$;
moreover, \emph{the morphism (\ref{eqn:qsm}) is equivariant}
(as may be checked along the lines of that proposition).
Also, $H$ acts on the symmetric product $(X/\!/G)^{(n)}$ and the morphism (\ref{eqn:hccm}) is $H$-equivariant.
\noindent
(iii) In the case that $X$ is a finite-dimensional $G$-module $V$, and $H = (\bG_m)^r$
is the center of $\GL(V)^G$, the closed $H$-fixed points of $\Hilb^G_h(X)$
may be identified with those $G$-stable ideals $I \subset \cO(V)$ that are homogeneous
with respect to the isotypical decomposition (\ref{eqn:is}). Moreover,
\emph{the closure of each $H$-orbit in $\Hilb^G_h(V)$ contains a fixed point}:
indeed, the closure of each $H$-orbit in $V$ contains the origin, and hence the same holds
for the induced action of $H$ on the symmetric product $(V/\!/G)^{(n)}$ (where the
origin is the image of $(0, \ldots,0)$). Together with the properness of the morphism
$\delta$ and Borel's fixed point theorem, this implies the assertion.
\end{remarks}
\begin{examples}\label{ex:qsm}
(i) For the toric Hilbert scheme $\Hilb^T(\lambda_1,\ldots,\lambda_N) = \Hilb^T(\ulambda)$
of Example \ref{ex:tuf}(i), the quotient-scheme map may be refined as follows.
Given $\lambda$ in the monoid $\Gamma$ generated by $\lambda_1,\ldots,\lambda_N$, consider the graded subalgebra
$$
\cO(\bA^N)^{(\lambda)} := \bigoplus_{m = 0}^{\infty} \cO(\bA^N)_{m \lambda} \subset \cO(\bA^N)
$$
with degree-$0$ component $\cO(\bA^N)_0 =\cO(\bA^N/\!/T)$. Replacing $\lambda$ with a
positive integer multiple, we may assume that the algebra $\cO(\bA^N)^{(\lambda)}$ is finite
over its subalgebra generated by its components of degrees $0$ and $1$. Then
$$
\bA^N/\!/_{\lambda}T := \Proj \big( \cO(\bA^N)^{(\lambda)} \big)
$$
is a projective variety over $\bA^N/\!/T$, and the twisting sheaf $\cO(1)$
is an ample invertible sheaf on $\bA^N/\!/_{\lambda}T$, generated by its subspace
$\cO(\bA^N)_{\lambda}$ of global sections. Moreover, one may define a morphism
$$
\gamma_{\lambda} : \Hilb^T(\ulambda) \longrightarrow \bA^N/\!/_{\lambda}T
$$
which lifts the quotient-scheme morphism $\gamma$. The collection of these morphisms forms
a finite inverse system; its inverse limit is called the {\bf toric Chow quotient}
and denoted by $\bA^N/\!/_C T$. This construction yields the {\bf toric Chow morphism}
$$
\Hilb^T(\ulambda) \longrightarrow \bA^N/\!/_C T
$$
which induces an isomorphism on the associated reduced schemes, under some additional
assumptions (see \cite[Section 5]{HS04}).
\noindent
(ii) With the notation of Example \ref{ex:tuf}(ii), we may now show that
$\Hilb_{h_n}^{\SL_2}\big( V(n) \big)$ is an affine line if $n = 2$ or $n = 4$,
and a reduced point for all other values of $n$.
Indeed, for the natural action of $\bG_m$, each orbit closure contains the unique fixed point $Z$.
If $n \neq 2$ and $n \neq 4$, then it follows that $\Hilb_{h_n}^{\SL_2}\big( V(n) \big)$ is just $Z$,
since its Zariski tangent space at that point is trivial.
On the other hand, we have constructed a family $\cZ \subset V(2) \times \bA^1$
(Example \ref{ex:fam}) and hence a morphism
$$
f : \bA^1 \to \Hilb_{h_2}^{\SL_2}\big( V(2) \big).
$$
Moreover, $f$ is injective (on closed points), since the fibers of $\cZ$ are pairwise distinct subschemes
of $V(2)$. Also, $\cZ$ is stable under the action of $\bG_m$ on $V(2) \times \bA^1$ via
$t \cdot (x,y) = (tx, t^2y)$ and hence $f$ is $\bG_m$-equivariant for the action on $\bA^1$ via
$t \cdot y = t^2 y$. In particular, $\Hilb_{h_2}^{\SL_2}\big( V(2) \big)$ has dimension $\geq 1$ at $Z$.
Since its Zariski tangent space has dimension $1$, it follows that
$\Hilb_{h_2}^{\SL_2}\big( V(2) \big)$ is smooth and $1$-dimensional at $Z$. Using the $\bG_m$-action,
it follows in turn that $f$ is an isomorphism; hence the natural map
$\cZ \to \Univ_{h_2}^{\SL_2}\big( V(2) \big)$ is an isomorphism as well. The quotient-scheme map
is also an isomorphism in that case.
Likewise, the family $\cW$ of Example \ref{ex:tuf}(ii) yields isomorphisms
$$
\bA^1 \longrightarrow \Hilb_{h_4}^{\SL_2}\big( V(4) \big),
\quad
\cW \longrightarrow \Univ_{h_4}^{\SL_2}\big( V(4) \big).
$$
Moreover, the quotient-scheme map is a closed immersion
$$
\bA^1 \hookrightarrow V(4)/\!/\SL_2 \cong \bA^2.
$$
\noindent
(iii) The preceding argument adapts to show that $\Hilb_{h_{\lambda}}^G\big( V(\lambda) \big)$
is either an affine line or a reduced point, with the notation of Example \ref{ex:tuf}(iii).
The quotient-scheme map is again a closed immersion.
\noindent
(iv) With the notation of Example \ref{ex:m2r}, we describe the quotient-scheme map $\gamma$;
it takes values in $V/\!/G$ since $h(0) = 1$.
Observe that the image of $G$ in $\GL\big(V(2)\big) \cong \GL_3$ is the special orthogonal group
associated with the non-degenerate quadratic form $\Delta$ (the discriminant). By classical invariant
theory, it follows that the algebra of invariants of $rV(2)$ is generated by the maps
$$
(A_1, \ldots, A_r) \longmapsto \Delta(A_i), \quad \delta(A_i, A_j), \quad \det(A_i,A_j,A_k),
$$
where $\delta$ denotes the bilinear form associated with $\Delta$. But $\det(A_i,A_j,A_k)$ vanishes
identically on the image of $\gamma$, which may thus be identified with the variety of symmetric
$r \times r$ matrices of rank $\leq 1$. In other words,
\emph{$\gamma\big( \Hilb^G_h(V) \big)$ is the affine cone over $\bP^{r-1}$ embedded via $\cO_{\bP^{r-1}}(2)$}.
Moreover, $\gamma$ is a $\GL_r$-equivariant desingularization of that cone, with exceptional locus
the homogeneous divisor $\Omega_0$. This easily yields an isomorphism of $\GL_r$-varieties
$$
\Hilb^G_h(V) \cong O_{\bP^{r-1}}(-2)
$$
where $O_{\bP^{r-1}}(-2)$ denotes the total space of the line bundle of degree $-2$ over $\bP^{r-1}$,
i.e., the blow-up at the origin of the image of $\gamma$.
\end{examples}
We now apply the construction of the quotient-scheme map to obtain a ``flattening'' of the categorical
quotient $\pi : X \to X/\!/G$, where \emph{$X$ is an irreducible variety}. Then $X/\!/G$ is an irreducible
variety as well, and there exists a largest open subset $Y_0 \subset Y := X/\!/G$ such that the pull-back
$$
\pi_0 : X_0 := \pi^{-1}(Y_0) \longrightarrow Y_0
$$
is flat. It follows that the (scheme-theoretic) fibers of $\pi$ at all closed points of its {\bf flat locus}
$Y_0$ have the same Hilbert function, say $h = h_X$. Since $F/\!/G$ is a (reduced) point for each such
fiber $F$, we have $h(0) = 1$. Thus, the quotient-scheme map associated with this special Hilbert function
$h$ is just a morphism
$$
\gamma : \Hilb^G_h(X) \longrightarrow X/\!/G.
$$
\begin{proposition}\label{prop:flat}
With the preceding notation and assumptions, the diagram
\begin{equation}\label{eqn:quot}
\CD
\Univ^G_h(X) @>{q}>> X \\
@V{p}VV @V{\pi}VV \\
\Hilb^G_h(X) @>{\gamma}>> X/\!/G \\
\endCD
\end{equation}
commutes, where the morphisms from $\Univ^G_h(X)$ are the projections.
Moreover, the pull-back of $\gamma$ to the flat locus of $\pi$ is an isomorphism.
\end{proposition}
\begin{proof}
Set for simplicity $\cZ:= \Univ^G_h(X)$ and $S := \Hilb^G_h(X)$.
Then the natural map $\cO_S \to (p_* \cO_{\cZ})^G$ is an isomorphism,
by the definition of the Hilbert scheme and the assumption that $h(0) = 1$.
Thus, $p$ factors as the quotient morphism $\cZ \to \cZ/\!/G$ followed by an isomorphism
$\cZ/\!/G \to S$. In view of the definition of $\gamma$, this yields the first assertion.
For the second assertion, let $S_0 := \gamma^{-1}(Y_0)$ and
$\cZ_0 := p^{-1}(S_0)$. By the preceding step, we have a closed immersion
$\iota : \cZ_0 \subset X_0 \times_{Y_0} S_0$ of $G$-schemes. But both are flat
over $S_0$ with the same Hilbert function $h$. Thus, the associated sheaves
of covariants $\cF_M$ (for $\cZ_0$) and $\cG_M$ (for $X_0 \times_{Y_0} S_0$)
are locally free sheaves of $\cO_S$-modules of the same rank, and come with a surjective
morphism $\iota_M^* : \cG_M \to \cF_M$. It follows that $\iota_M^*$ is an isomorphism,
and in turn that so is $\iota$.
\end{proof}
This construction is of special interest in the case that $G$ is a finite group, see
\cite[Section 4]{Be08} and also Subsection \ref{subsec:rcqs}. It also deserves further
study in the setting of connected algebraic groups.
\begin{example}
Let $G$ be a semi-simple algebraic group acting on its Lie algebra $\fg$ via the
adjoint representation. By a classical result of Kostant, the categorical quotient
$\fg/\!/G$ is an affine space of dimension equal to the rank $r$ of $G$ (the dimension of
a maximal torus $T \subset G$). Moreover, the quotient morphism $\pi$ is flat; its fibers
are exactly the orbit closures of regular elements of $\fg$ (i.e., those with centralizer
of minimal dimension $r$), and the corresponding Hilbert function $h = h_{\fg}$ is given by
\begin{equation}\label{eqn:hilb}
h(\lambda) = \dim V(\lambda)^T.
\end{equation}
Thus, the quotient-scheme map yields an isomorphism $\Hilb^G_h(\fg) \cong \fg/\!/G$.
If $G = \SL_2$, then $\fg \cong V(2)$ and we get back that
$\Hilb^{\SL_2}_{h_2}\big( V(2) \big) \cong \bA^1$. More generally, when applied to $G = \SL_n$,
we recover a result of Jansou and Ressayre, see \cite[Theorem 2.5]{JR09}. They also show that
$\Hilb^{\SL_n}_h(\mathfrak{sl}_n)$ is an affine space whenever $h$ is the Hilbert function
of an \emph{arbitrary} orbit closure, and they explicitly describe the universal family
in these cases; see [loc.~cit., Theorem 3.6].
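In the case $G = \SL_2$, the formula (\ref{eqn:hilb}) is easily made explicit: the weights of $V(\lambda)$ are $\lambda, \lambda - 2, \ldots, -\lambda$, each with multiplicity $1$, so that
$$
h(\lambda) = \dim V(\lambda)^T =
\begin{cases}
1 & \text{if } \lambda \text{ is even}, \\
0 & \text{if } \lambda \text{ is odd},
\end{cases}
$$
in agreement with the Hilbert function $h_2$ above.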
\end{example}
\section{Some further developments and applications}
\label{sec:sa}
\subsection{Resolution of certain quotient singularities}
\label{subsec:rcqs}
In this subsection, we assume that the group $G$ is \emph{finite}. We discuss a direct construction
of the invariant Hilbert scheme in that setting, as well as some applications; the reader may consult
the notes \cite{Be08} for further background, details, and developments.
Recall that $\Irr(G)$ denotes the set of isomorphism classes of simple $G$-modules. Since this set is finite,
functions $h : \Irr(G) \to \bN$ may be identified with isomorphism classes of arbitrary finite-dimensional
modules, by assigning to $h$ the $G$-module $\bigoplus_{M \in \Irr(G)} h(M) M$.
For example, the function $M \mapsto \dim(M)$ corresponds to the regular representation, i.e.,
$\cO(G)$ where $G$ acts via left multiplication.
Given such a function $h$ and a $G$-scheme $X$, we may consider the invariant Hilbert functor $Hilb^G_h(X)$
as in Subsection \ref{subsec:fam}: it associates with any scheme $S$ the set of closed $G$-stable
subschemes $\cZ \subset X \times S$ such that the projection $p: \cZ \to S$ is flat and the module
of covariants $\Hom^G(M,p_* \cO_{\cZ})$ is locally free of rank $h(M)$ for any $M \in \Irr(G)$.
For such a family, the sheaf $p_* \cO_{\cZ}$ is locally free of rank
$$
n = n(h) := \sum_{M \in \Irr(G)} h(M) \dim(M),
$$
in view of the isotypical decomposition (\ref{eqn:decs}). In other words, $\cZ$ is finite and flat over $S$
of constant degree $n$, the dimension of the representation associated with $h$.
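For example, if $h$ is the function $M \mapsto \dim(M)$ corresponding to the regular representation, then
$$
n = \sum_{M \in \Irr(G)} \dim(M)^2 = \vert G \vert,
$$
by the dimension count in the isotypical decomposition of $\cO(G)$.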
If $X$ is \emph{quasi-projective},
then the punctual Hilbert scheme $\Hilb_n(X)$ exists and is equipped with an action of $G$
(see Proposition \ref{prop:aut}). Thus, we have a morphism $f: S \to \Hilb_n(X)$ which is readily seen to be
$G$-invariant. In other words, $f$ factors through a morphism to the {\bf fixed point subscheme}
$\Hilb_n(X)^G \subset \Hilb_n(X)$, i.e., the largest closed $G$-stable subscheme on which $G$ acts trivially.
Moreover, the pull-back of the universal family $\Univ_n(X)$ to $\Hilb_n(X)^G$ is a finite flat family of
$G$-stable subschemes of $X$, and has a well-defined Hilbert function on each connected component
(see Remark \ref{rem:fam}(iii)).
This easily implies the following version of Theorem \ref{thm:hilb} for finite groups:
\begin{proposition}\label{prop:conn}
With the preceding notation and assumptions, the Hilbert functor $Hilb^G_h(X)$ is represented
by a union of connected components of the fixed point subscheme $\Hilb_n(X)^G$.
\end{proposition}
Also, the quotient $\pi : X \to X/G$ exists, where the underlying topological space of $X/G$ is just
the orbit space, and the structure sheaf $\cO_{X/G}$ equals $(\pi_* \cO_X)^G$. Moreover, the set-theoretical
fibers of $\pi$ are exactly the $G$-orbits. As in Subsection \ref{subsec:qsm}, this yields a
quotient-scheme map in this setting,
$$
\gamma : \Hilb^G_h(X) \longrightarrow \Hilb_n(X/G).
$$
(In fact, the assignment $\cZ \mapsto \cZ/\!/G$ yields a morphism from $\Hilb_n(X)^G$ to the disjoint union
of the punctual Hilbert schemes $\Hilb_m(X/G)$ for $m \leq n$).
We now assume that \emph{$X$ is an irreducible variety on which $G$ acts faithfully}. Then $X$ contains
$G$-{\bf regular points}, i.e., points with trivial isotropy groups, and they form an open $G$-stable
subset $X_{\reg}\subset X$. Moreover, the (scheme-theoretic) fiber of $\pi$ at a given $x \in X(k)$ equals
the orbit $G \cdot x$ if and only if $x$ is $G$-regular. In other words, the {\bf regular locus} $X_{\reg}/G$
is the largest open subset of $X/G$ over which $\pi$ induces a Galois covering with group $G$;
it is contained in the flat locus. Thus, the Hilbert function $h_X$ associated with the general fibers
of $\pi$ (as in Subsection \ref{subsec:qsm}) is just that of the regular representation.
The corresponding invariant Hilbert scheme is called the {\bf $G$-Hilbert scheme} and denoted by
$G$-$\Hilb(X)$. It is a union of connected components of $\Hilb_n(X)^G$, where $n$ is the order of $G$,
and comes with a projective morphism
\begin{equation}\label{eqn:hcf}
\gamma : G-\Hilb(X) \longrightarrow X/G
\end{equation}
which induces an isomorphism above the regular locus $X_{\reg}/G$. Moreover, $\gamma$ fits
into a commutative square
$$
\CD
G-\Univ(X) @>{q}>> X \\
@V{p}VV @V{\pi}VV \\
G-\Hilb(X) @>{\gamma}>> X/G \\
\endCD
$$
by Proposition \ref{prop:flat}. In other words, $G$-$\Univ(X)$ is a closed $G$-stable subscheme
of the fibered product $X \times_{X/G} G-\Hilb(X)$.
We denote by $G$-$\cH_X$ the closure of $\gamma^{-1}(X_{\reg}/G)$ in $G$-$\Hilb(X)$, equipped with the
reduced subscheme structure. This is an irreducible component of $G$-$\Hilb(X)$: the
{\bf main component}, also called the {\bf orbit component}. The points of $G$-$\cH_X$ are the regular
$G$-orbits and their flat limits as closed subschemes of $X$; also, the quotient-scheme map
restricts to a projective \emph{birational} morphism $G$-$\cH_X \to X/G$.
\begin{examples}
(i) If $X$ and $X/G$ are smooth, then \emph{the quotient-scheme map (\ref{eqn:hcf}) is an isomorphism}.
(Indeed, $\pi$ is flat in that case, and the assertion follows from Proposition \ref{prop:flat}).
In particular, if $V$ is a finite-dimensional vector space and $G \subset \GL(V)$ a finite
subgroup generated by pseudo-reflections, then $\cO(V)^G$ is a polynomial algebra by a theorem
of Chevalley and Shepherd-Todd. Thus, (\ref{eqn:hcf}) is an isomorphism under that assumption.
\noindent
(ii) If $X$ is a smooth surface, then every punctual Hilbert scheme $\Hilb_n(X)$ is smooth.
Since smoothness is preserved by taking fixed points under finite (or, more generally, reductive)
group actions, it follows that \emph{$\Hilb^G_h(X)$ is smooth for any function $h$.}
Thus, the quotient-scheme map $\gamma : G-\cH_X \to X/G$ is a resolution of singularities.
In particular, if $G$ is a finite subgroup of $\GL_2$ which is not generated by pseudo-reflections,
then the quotient $\bA^2/G$ is a normal surface with singular locus (the image of) the origin, and
$\gamma : G-\cH_{\bA^2} \to \bA^2/G$ is a canonical desingularization. If in addition $G \subset \SL_2$,
then $G$ contains no pseudo-reflection, and the resulting singularity is a rational double point.
In that case, $\gamma$ yields the \emph{minimal} desingularization (this result is due to Ito and
Nakamura, see \cite{IN96, IN99}; a self-contained proof is provided in \cite[Section 5]{Be08}).
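To make this concrete in the simplest instance, take $G = \{ \pm 1 \} \subset \SL_2$ acting on $\bA^2$ by $v \mapsto \pm v$. Then $\cO(\bA^2)^G = k[x^2, xy, y^2]$, so that
$$
\bA^2/G \cong \{ (u,v,w) \in \bA^3 ~\vert~ uv = w^2 \},
$$
the quadric cone, a rational double point of type $A_1$; the morphism $\gamma : G$-$\cH_{\bA^2} \to \bA^2/G$ is then the blow-up of the vertex, i.e., the minimal desingularization.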
\noindent
(iii) The preceding argument does not extend to smooth varieties $X$ of dimension $\geq 3$,
since $\Hilb_n(X)$ is generally singular in that setting. Yet it was shown by Bridgeland, King and Reid
via homological methods that \emph{$G$-$\Hilb(X)$ is irreducible and has trivial canonical sheaf, if $\dim(X) \leq 3$
and the canonical sheaf of $X$ is equivariantly trivial} (see \cite[Theorem 1.2]{BKR01}). As a consequence,
if $G \subset \SL_n$ with $n \leq 3$, then $\gamma : G-\Hilb(\bA^n) \to \bA^n/G$ is a crepant resolution;
in particular, $G -\Hilb(\bA^n)$ is irreducible. This result fails in dimension 4 as the $G$-Hilbert scheme
may be reducible; this is the case for the binary tetrahedral group $G \subset \SL_2$ acting on $k^4$
via the direct sum of two copies of its natural representation, see \cite{LS08}.
\end{examples}
\subsection{The horospherical family}
\label{subsec:thf}
In this subsection, $G$ denotes a \emph{connected} reductive group. We present a classical algebraic
construction that associates with each affine $G$-scheme $Z$, a ``simpler'' affine $G$-scheme $Z_0$
called its horospherical degeneration (see \cite{Po86} or \cite[\S 15]{Gr97}). Then we interpret this
construction in terms of families, after \cite{AB05}.
We freely use the notation and conventions from highest weight theory (Subsection \ref{subsec:rg})
and denote by $\alpha_1, \ldots, \alpha_r$ the simple roots of $(G,T)$ associated with the Borel
subgroup $B$ (i.e., the corresponding positive roots are those of $(B,T)$).
Given a $G$-algebra $A$, recall the isotypical decomposition
\begin{equation}\label{eqn:idec}
A = \bigoplus_{\lambda \in \Lambda^+} A_{(\lambda)} \quad \text{where} \quad
A_{(\lambda)} \cong A^U_{\lambda} \otimes_k V(\lambda).
\end{equation}
Also recall that when $G$ is a torus, $A_{(\lambda)}$ is just the weight space $A_{\lambda}$
and (\ref{eqn:idec}) is a \emph{grading} of $A$, i.e.,
$A_{\lambda} \cdot A_{\mu} \subset A_{\lambda + \mu}$ for all $\lambda$, $\mu$.
For an arbitrary group $G$, (\ref{eqn:idec}) is no longer a grading, but gives rise to a \emph{filtration}
of $A$. To see this, we study the multiplicative properties of the isotypical decomposition.
Given $\lambda,\mu \in \Lambda^+$, there is an isomorphism of $G$-modules
\begin{equation}\label{eqn:tens}
V(\lambda) \otimes_k V(\mu) \cong \bigoplus_{\nu \in \Lambda^+} c_{\lambda,\mu}^{\nu} V(\nu)
\end{equation}
where the $c_{\lambda,\mu}^{\nu}$'s are non-negative integers, called the
{\bf Littlewood-Richardson coefficients}.
Moreover, if $c_{\lambda,\mu}^{\nu} \neq 0$ then $\nu \leq \lambda + \mu$ where $\leq$ is the
partial ordering on $\Lambda$ defined by:
$$
\mu \leq \lambda \Leftrightarrow \lambda - \mu = \sum_{i=1}^r n_i \alpha_i
\text{ for some non-negative integers } n_1,\ldots, n_r.
$$
Finally, $c_{\lambda,\mu}^{\lambda + \mu} = 1$, i.e., the simple module with the largest weight
$\lambda + \mu$ occurs in the tensor product $V(\lambda) \otimes_k V(\mu)$ with multiplicity $1$.
This special component is called the {\bf Cartan component} of $V(\lambda) \otimes_k V(\mu)$.
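For $G = \SL_2$, where $\Lambda^+$ is identified with the non-negative integers and the unique simple root with $2$, the decomposition (\ref{eqn:tens}) is the classical Clebsch-Gordan formula
$$
V(\lambda) \otimes_k V(\mu) \cong V(\lambda + \mu) \oplus V(\lambda + \mu - 2)
\oplus \cdots \oplus V(\vert \lambda - \mu \vert);
$$
here all Littlewood-Richardson coefficients equal $0$ or $1$, every $\nu$ that occurs satisfies $\nu \leq \lambda + \mu$ (the difference being an even non-negative integer), and the Cartan component is the top summand $V(\lambda + \mu)$.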
We set
$$
A_{(\leq \lambda)} := \bigoplus_{\mu \in \Lambda^+, \, \mu \leq \lambda} A_{(\mu)}.
$$
In view of (\ref{eqn:tens}), we have
\begin{equation}\label{eqn:mult}
A_{(\leq \lambda)} \cdot A_{(\leq \mu)} \subset A_{(\leq \lambda + \mu)}
\end{equation}
for all dominant weights $\lambda,\mu$. In other words, the $G$-submodules $A_{(\leq \lambda)}$ form
an increasing filtration of the $G$-algebra $A$, indexed by the partially ordered group $\Lambda$.
The associated graded algebra $\gr(A)$ is a $G$-algebra, isomorphic to $A$ as a $G$-module
but where the product of any two isotypical components $A_{(\lambda)}$ and $A_{(\mu)}$ is
obtained from their product in $A$ by projecting on the Cartan component $A_{(\lambda + \mu)}$.
Thus, the product of any two simple submodules of $\gr(A)$ is either their Cartan product,
or zero. Also, note that $\gr(A)^U \cong A^U$ as $T$-algebras, since
$A^U_{(\lambda)} = A^U_{\lambda}$ for all $\lambda$.
Now consider the {\bf Rees algebra} associated to this filtration:
\begin{equation}\label{eqn:rees}
\cR(A) := \bigoplus_{\mu \in \Lambda} A_{(\leq \mu)} \, e^{\mu}
= \bigoplus_{\lambda \in \Lambda^+, \, \mu \in \Lambda, \, \lambda \leq \mu} A_{(\lambda)}\, e^{\mu}
\end{equation}
where $e^{\mu}$ denotes the character $\mu$ viewed as a regular function on $T$
(so that $e^{\mu} e^{\nu} = e^{\mu + \nu}$ for all $\mu$ and $\nu$). Thus, $\cR(A)$ is a subspace of
$$
A \otimes_k \cO(T) = \bigoplus_{\lambda \in \Lambda, \, \mu \in \Lambda} A_{(\lambda)}\, e^{\mu}.
$$
In fact, $A \otimes_k \cO(T)$ is a $G \times T$-algebra, and $\cR(A)$ is a
$G \times T$-subalgebra by the multiplicative property (\ref{eqn:mult}). Also, note that
$\cR(A)$ contains variables
\begin{equation}\label{eqn:var}
t_1:= e^{\alpha_1}, \ldots, t_r := e^{\alpha_r}
\end{equation}
associated with the simple roots; the monomials in these variables are just
the $e^{\mu -\lambda}$ where $\lambda \leq \mu$. By (\ref{eqn:rees}), we have
$$
\cR(A) \cong \bigoplus_{\lambda \in \Lambda^+} A_{(\lambda)} \, e^{\lambda} \otimes_k k[t_1,\ldots,t_r]
\cong A[t_1,\ldots,t_r]
$$
as $G$-$k[t_1,\ldots,t_r]$-modules. In particular, $\cR(A)$ is a free module over the polynomial ring
$k[t_1,\ldots,t_r] \subset \cO(T)$. Moreover, we have an isomorphism of $T$-$k[t_1,\ldots,t_r]$-algebras
$$
\cR(A)^U \cong A^U[t_1,\ldots,t_r]
$$
that maps each $f \in \cR(A)^U_{\lambda}$ to $f e^{\lambda}$. Also, the ideal
$(t_1,\ldots,t_r) \subset \cR(A)$ is $G$-stable, and the quotient by that ideal is just
the $G$-module $A$ where the product of any two components $A_{(\lambda)}$, $A_{(\mu)}$ is
the same as in $\gr(A)$. In other words,
$$
\cR(A)/(t_1,\ldots,t_r) \cong \gr(A).
$$
On the other hand, when inverting $t_1,\ldots,t_r$, we obtain
$$
\cR(A)[t_1^{-1},\ldots,t_r^{-1}] \cong \bigoplus A_{(\lambda)} \, e^{\mu}
$$
where the sum is over those $\lambda \in \Lambda^+$ and $\mu \in \Lambda$ such that
$\lambda - \mu = n_1 \alpha_1 + \cdots + n_r \alpha_r$ for some integers
$n_1,\ldots,n_r$ (of arbitrary signs). In other words, $\lambda - \mu$ is in
the {\bf root lattice}, i.e., the sublattice of the weight lattice $\Lambda$ generated
by the roots. The torus associated with the root lattice is the {\bf adjoint torus}
$T_{\ad}$, isomorphic to $(\bG_m)^r$ via the homomorphism
$$
\ualpha : T \longrightarrow (\bG_m)^r, \quad
t \longmapsto \big( \alpha_1(t), \ldots, \alpha_r(t) \big).
$$
Moreover, the kernel of $\ualpha$ is the center of $G$, that we denote by
$Z(G)$. This identifies $T_{\ad}$ with $T/Z(G)$, a maximal torus of the adjoint group
$G/Z(G)$ (whence the name). Now $Z(G)$ acts on $A \otimes_k \cO(T)$ via its action on $A$
as a subgroup of $G$ (then each isotypical component $A_{(\lambda)}$ is an eigenspace
of weight $\lambda_{\vert Z(G)}$) and its action on $\cO(T)$ as a subgroup of $T$
(then each $e^{\mu}$ is an eigenspace of weight $-\mu$). Moreover, the invariant ring satisfies
\begin{equation}\label{eqn:loc}
\big( A \otimes_k \cO(T) \big)^{Z(G)} \cong \cR(A)[t_1^{-1},\ldots,t_r^{-1}]
\end{equation}
as $G \times T$-algebras over $\cO(T)^{Z(G)}\cong k[t_1,t_1^{-1},\ldots,t_r,t_r^{-1}]$.
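For $G = \SL_2$, all of this is transparent: the weight lattice $\Lambda$ is identified with the integers, the unique simple root with $2$, and the root lattice with the even integers. The homomorphism $\ualpha$ becomes
$$
\ualpha : \bG_m \longrightarrow \bG_m, \quad t \longmapsto t^2,
$$
with kernel the center $Z(\SL_2) = \{ \pm 1 \}$, so that $T_{\ad} \cong \bG_m$ and there is a single variable $t_1 = e^{\alpha_1}$.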
Translating these algebraic constructions into geometric terms yields the following statement:
\begin{proposition}\label{prop:hor}
Let $Z = \Spec(A)$ be an affine $G$-scheme and $p: \cZ \to \bA^r$ the morphism
associated with the inclusion $k[t_1,\ldots,t_r] \subset \cR(A)$, where $\cR(A)$ denotes
the Rees algebra (\ref{eqn:rees}), and $t_1,\ldots,t_r$ the variables (\ref{eqn:var}).
Then $p$ is a flat family of affine $G$-schemes, and the induced family of $T$-schemes
$\cZ/\!/U \to \bA^r$ is trivial with fiber $Z/\!/U$. The fiber of $p$ at $0$ is
$\Spec\big( \gr(A) \big)$.
Moreover, the pull-back of $p$ to $T_{\ad}\subset \bA^r$ is isomorphic to the projection
$Z \times^{Z(G)} T \to T/Z(G) = T_{\ad}$. In particular, all fibers of $p$ at general points
of $\bA^r$ are isomorphic to $Z$.
\end{proposition}
This \emph{special fiber} $Z_0$ is an affine $G$-scheme such that the isotypical decomposition
of $\cO(Z_0)$ is a grading; such schemes are called {\bf horospherical}, and
$$
Z_0:= \Spec\big( \gr(A) \big)
$$
is called the {\bf horospherical degeneration} of the affine $G$-scheme $Z$.
We say that $p: \cZ \to \bA^r$ is the {\bf horospherical family} (this terminology
originates in hyperbolic geometry; note that horospherical varieties need not be
spherical).
For example, the cone of highest weight vectors
$Z = \overline{G \cdot v_{\lambda}} = G \cdot v_{\lambda} \cup \{0 \}$
is a horospherical $G$-variety, in view of the isomorphism (\ref{eqn:hwc}).
In that case, the fixed point subscheme $Z^U$ is the highest weight line
$kv_{\lambda} = V(\lambda)^U$, and thus $Z = G \cdot Z^U$. In fact, the latter property
characterizes horospherical $G$-schemes:
\begin{proposition}\label{prop:char}
An affine $G$-scheme $Z$ is horospherical if and only if $Z = G \cdot Z^U$
(as schemes).
\end{proposition}
\begin{proof}
First, note that the closed subscheme $Z^U \subset Z$ is stable under the Borel
subgroup $B$; it follows that $G \cdot Z^U$ is closed in $Z$ for an arbitrary
$G$-scheme $Z$. Indeed, the morphism
\begin{equation}\label{eqn:mor}
\varphi : G \times Z^U \longrightarrow Z, \quad (g,z) \longmapsto g \cdot z
\end{equation}
factors as the morphism
$$
\psi : G \times Z^U \longrightarrow G/B \times Z, \quad
(g,z) \longmapsto (gB, g \cdot z)
$$
followed by the projection $G/B \times Z \to Z$. The latter morphism is proper,
since $G/B$ is complete; moreover, $\psi$ is easily seen to be a closed immersion.
Also, note that the ideal of $G \cdot Z^U$ in $A = \cO(Z)$ is the intersection of
the $G$-translates of the ideal $I$ of $Z^U$. Thus, $Z = G \cdot Z^U$ if and only
if $I$ contains no non-zero simple $G$-submodule of $A$. Moreover, the ideal $I$ is generated
by the $g \cdot f - f$ where $g \in U$ and $f \in A$.
We now assume that $Z$ is horospherical. Consider a simple $G$-submodule $V(\lambda) \subset A$.
Then $V(\lambda)$ admits a unique lowest weight (with respect to the partial ordering $\leq$),
equal to $-\lambda^*$, and the corresponding eigenspace is a line. Moreover, the span of the
$g \cdot v - v$, where $g \in U$ and $v \in V(\lambda)$, is just the sum of all other
$T$-eigenspaces; we denote that span by $V(\lambda)_+$.
Since the product $V(\lambda)\cdot V(\mu)$, where $V(\mu)$ is some other simple submodule of $A$,
is either $0$ or the Cartan product $V(\lambda + \mu)$, we see that
$$
V(\lambda)_+ \cdot V(\mu) \subset V(\lambda + \mu)_+.
$$
Thus, the sum of the $V(\lambda)_+$ over all simple submodules is an ideal of $A$,
and hence equals $I$. In particular, $I$ contains no non-zero simple $G$-submodule of $A$.
Conversely, assume that $Z = G \cdot Z^U$, i.e., the morphism (\ref{eqn:mor}) is
surjective. Note that $\varphi$ is invariant under the action of $U$ via
$u \cdot (g,z) = (gu^{-1}, z)$, and also equivariant for the action of $G$ on
$G \times Z^U$ via left multiplication on $G$, and for the given action on $Z$.
Thus, $\varphi$ yields an inclusion of $G$-algebras
$$
\cO(Z) \hookrightarrow \cO(G)^U \otimes_k \cO(Z^U)
$$
where $G$ acts on the right-hand side through its action on $\cO(G)^U$ via left
multiplication. But $\cO(G)^U$ is also a $T$-algebra via right multiplication;
this action commutes with that of $G$, and we have an isomorphism of $G \times T$-modules
$$
\cO(G)^U \cong \bigoplus_{\lambda \in \Lambda^+} V(\lambda)^*
$$
where $G$ acts naturally on each $V(\lambda)^*$, and $T$ acts via its character $\lambda$
(see e.g. \cite[Section 2.1]{Bri10}).
In particular, the isotypical decomposition of $\cO(G)^U$ is a grading; thus, the
same holds for $\cO(G)^U \otimes_k \cO(Z^U)$ and for $\cO(Z)$.
\end{proof}
Next, we relate the preceding constructions to the invariant Hilbert scheme of a finite-dimensional
$G$-module $V$. Here it should be noted that the full horospherical family of a closed $G$-subscheme
$Z \subset V$ need not be a family of closed $G$-subschemes of $V \times \bA^r$ (see e.g. Example
\ref{ex:hor}). Yet \emph{the pull-back of the horospherical family to $T_{\ad}$ may be identified
with a family of closed $G$-subschemes of $V \times T_{\ad}$} as follows.
By (\ref{eqn:loc}), we have an isomorphism of $G \times T$-algebras
$$
\cO(V \times T)^{Z(G)} \cong \cR \big( \cO(V) \big)[t_1^{-1},\ldots,t_r^{-1}]
$$
over $\cO(T_{\ad}) \cong k[t_1^{\pm 1},\ldots,t_r^{\pm 1}]$. Also, each isotypical component
$V_{(\lambda)}= m_{\lambda} V(\lambda)$ yields a subspace
$$
m_{\lambda} V(\lambda)^* \, e^{\lambda^*} \subset \cO(V \times T)^{Z(G)},
$$
stable under the action of $G \times T$. Moreover, all these subspaces generate
a $G \times T$-subalgebra of $\cO(V \times T)^{Z(G)}$, equivariantly isomorphic to $\cO(V)$
where $V$ is a $G$-module via the given action, and $T$ acts linearly on $V$ so that each
$V_{(\lambda)}$ is an eigenspace of weight $-\lambda^*$. Finally, we have an isomorphism
$$
\cO(V \times T)^{Z(G)} \cong \cO(V) \otimes_k \cO(T_{\ad})
$$
of $G \times T$-algebras over $\cO(T_{\ad})$, which translates into an isomorphism
$$
p^{-1}(T_{\ad}) \cong V \times T_{\ad}
$$
of families of $G$-schemes over $T_{\ad}$. (In geometric terms, we have trivialized
the homogeneous vector bundle $V \times^{Z(G)} T \to T/Z(G)$ by extending the $Z(G)$-action
on $V$ to a $T$-action commuting with that of $G$).
This construction extends readily to the setting of families, i.e., given a family
of closed $G$-subschemes $\cZ \subset V \times S$, we obtain a family of closed $G$-subschemes
$\cW \subset V \times T_{\ad} \times S$. By arguing as in the proof of Proposition \ref{prop:aut},
this defines an \emph{action of $T_{\ad}$ on the invariant Hilbert scheme $\Hilb^G_h(V)$}.
In fact, \emph{this action arises from the linear $T$-action on $V$ for which each $V_{(\lambda)}$
has weight $-\lambda^*$}: since $\lambda + \lambda^*$ is in the root lattice for any
$\lambda \in \Lambda$, the induced action of the center $Z(G) \subset T$ coincides with its action
as a subgroup of $G$, so that $Z(G)$ acts trivially on $\Hilb^G_h(V)$.
\begin{example}\label{ex:hor}
Let $G = \SL_2$ and $Z = G \cdot xy \subset V(2)$. Then $Z = (\Delta = 1)$ is a closed $G$-subvariety
of $V(2)$ with Hilbert function $h_2$. One checks that the $G$-submodules $\cO(Z)_{(\leq 2n)}$
are just the restrictions to $Z$ of the spaces of polynomial functions on $V(2)$ with degree $\leq n$.
Moreover, $Z_0 = \overline{G \cdot y^2}$ and the horospherical family is that of Example
\ref{ex:fam}(ii).
Likewise, if $Z = G \cdot x^2y^2 \subset V(4)$ then $Z_0 = \overline{G \cdot y^4}$ and the
horospherical family is again that of Example \ref{ex:fam}(ii).
Also, $V(1)$ is its own horospherical degeneration, but the horospherical degeneration of $V(2)$
is the singular hypersurface $\{(z,t) \in V(2) \oplus V(0) ~\vert~ \Delta(z) = 0\}$.
\end{example}
\subsection{Moduli of multiplicity-free varieties with prescribed weight monoid}
\label{subsec:mav}
In this subsection, we still consider a connected reductive group $G$, and fix a finitely generated
submonoid $\Gamma \subset \Lambda^+$. We will construct a moduli space for irreducible multiplicity-free
$G$-varieties $Z$ with weight monoid $\Gamma$ or equivalently, with Hilbert function $h = h_{\Gamma}$
(\ref{eqn:hm}). Recall that $Z/\!/U$ is an irreducible multiplicity-free $T$-variety with weight monoid
$\Gamma$, and hence is isomorphic to $Y := \Spec \big( k[\Gamma] \big)$.
We begin by constructing a special such variety $Z_0$. Choose generators
$\lambda_1,\ldots,\lambda_N$ of the monoid $\Gamma$. Consider the $G$-module
$$
V := V(\lambda_1)^* \oplus \cdots \oplus V(\lambda_N)^*,
$$
the sum of highest weight vectors
$$
v := v_{\lambda_1^*} + \cdots + v_{\lambda_N^*},
$$
and define
$$
Z_0:= \overline{G \cdot v} \subset V.
$$
Since $v$ is fixed by $U$, the irreducible variety $Z_0$ is horospherical in view of Proposition
\ref{prop:char}. The $\Lambda$-graded algebra $\cO(Z_0)$ is generated by $V(\lambda_1), \ldots, V(\lambda_N)$;
thus, $\Lambda^+(Z_0) = \Gamma$. This yields a special algebra structure on the $G$-module
$$
V(\Gamma) := \bigoplus_{\lambda \in \Gamma} V(\lambda)
$$
such that the subalgebra $V(\Gamma)^U$ is isomorphic to $\cO(Y) = k[\Gamma]$.
Each irreducible multiplicity-free variety $Z$ with weight monoid $\Gamma$ satisfies $\cO(Z) \cong V(\Gamma)$
as $G$-modules and $\cO(Z)^U \cong V(\Gamma)^U$ as $T$-algebras. This motivates the following:
\begin{definition}\label{def:alg}
A {\bf family of algebra structures of type $\Gamma$ over a scheme $S$} consists of
the structure of an $\cO_S$-$G$-algebra on $V(\Gamma) \otimes_k \cO_S$ that extends the
given $\cO_S$-$T$-algebra structure on $V(\Gamma)^U$.
\end{definition}
In other words, a family of algebra structures of type $\Gamma$ over $S$ is a multiplication law
$m$ on $V(\Gamma) \otimes_k \cO_S$ which makes it an $\cO_S$-$G$-algebra and restricts to the
multiplication of the $\cO_S$-$T$-algebra $V(\Gamma)^U \otimes_k \cO_S$. We may write
\begin{equation}\label{eqn:comp}
m = \sum_{\lambda, \mu, \nu \in \Gamma} m_{\lambda, \mu}^{\nu}
\end{equation}
where each component
$$
m_{\lambda, \mu}^{\nu} :
\big( V(\lambda) \otimes_k \cO_S \big) \otimes_{\cO_S} \big( V(\mu) \otimes_k \cO_S \big)
\longrightarrow V(\nu) \otimes_k \cO_S
$$
is an $\cO_S$-$G$-morphism. Moreover, the commutativity of $m$ and its compatibility with
the multiplication on $V(\Gamma)^U \otimes_k \cO_S$ translate into linear relations between the
$m_{\lambda,\mu}^{\nu}$'s, while the associativity translates into quadratic relations. Also, each
$m_{\lambda,\mu}^{\nu}$ may be viewed as a linear map
$$
\Hom^G\big( V(\lambda) \otimes_k V(\mu), V(\nu) \big) \longrightarrow H^0(S,\cO_S)
$$
or equivalently, as a morphism of schemes
$$
S \longrightarrow \Hom^G\big( V(\nu), V(\lambda) \otimes_k V(\mu) \big),
$$
and the polynomial relations between the $m_{\lambda,\mu}^{\nu}$'s are equivalent to polynomial
relations between these morphisms. Denote by $M_{\Gamma}$ the functor that assigns to any scheme $S$
the set of families of algebra structures of type $\Gamma$ over $S$. It follows that
\emph{the functor $M_{\Gamma}$ is represented by an affine scheme $\M_{\Gamma}$}, a closed subscheme of
an infinite-dimensional affine space. Also, note that $m_{\lambda,\mu}^{\nu} = 0$ unless
$\nu \leq \lambda + \mu$.
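To illustrate the components of a multiplication law, consider the simplest example: $G = \SL_2$ and $\Gamma$ generated by $2$, so that $V(\Gamma) = \bigoplus_{n \geq 0} V(2n)$. By the Clebsch-Gordan decomposition, a component $m_{2n,2m}^{2p}$ can be non-zero only if $|n - m| \leq p \leq n + m$. Over $S = \bA^1$ with coordinate $t$, one can take
$$
m_{2,2}^{0} = t \, \Delta, \quad \text{where} \quad
\Delta \in \Hom^G\big( V(2) \otimes_k V(2), V(0) \big)
$$
denotes the (suitably normalized) discriminant pairing, while keeping the Cartan component $m_{2,2}^{4}$ fixed. The corresponding family of subvarieties of $V(2)$ is the family of affine quadrics $(\Delta = t)$, which degenerates at $t = 0$ to the cone of squares of linear forms.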
In fact, \emph{the scheme $\M_{\Gamma}$ is of finite type}; yet one does not know how to obtain this
directly from the preceding algebraic description. This finiteness property will rather be derived
from a relation of $\M_{\Gamma}$ to the invariant Hilbert scheme $\Hilb^G_h(V)$ that we now present.
Given a family of algebra structures of type $\Gamma$ over $S$, the inclusion of $G$-modules
$V^* \subset V(\Gamma)$ yields a homomorphism of $\cO_S$-$G$-algebras
$$
\varphi : \Sym(V^*) \otimes_k \cO_S \longrightarrow V(\Gamma) \otimes_k \cO_S.
$$
Moreover, the $\cO_S$-$T$-algebra $V(\Gamma)^U \otimes_k \cO_S$ is generated by the images of the highest
weight lines $V(\lambda_1)^U, \ldots,V(\lambda_N)^U \subset (V^*)^U$. In particular, the restriction
$$
\Sym\big( (V^*)^U \big) \otimes_k \cO_S \longrightarrow V(\Gamma)^U \otimes_k \cO_S
$$
is surjective; thus, $\varphi$ is surjective as well. This defines a family of closed $G$-subschemes
$\cZ \subset V \times S$ with Hilbert function $h$, such that the sheaf of $\cO_S$-algebras
$(p_*\cO_{\cZ})^U$ is generated by the preceding highest weight lines. Choosing highest weight vectors
$v_{\lambda_1},\ldots,v_{\lambda_N}$, we obtain a surjective homomorphism of $\cO_S$-$T$-algebras
$\cO_S[t_1,\ldots,t_N] \to (p_*\cO_{\cZ})^U$ that maps each $t_i$ to $v_{\lambda_i}$.
Equivalently, we obtain a closed immersion $\cZ/\!/U \hookrightarrow \bA^N \times S$
of families of closed $T$-subschemes with Hilbert function $h$, where $T$ acts linearly
on $\bA^N$ with weights $-\lambda_1,\ldots,-\lambda_N$. This is also equivalent to a morphism
$$
f: S \to \Hilb^T(\ulambda)
$$
where the target is the toric Hilbert scheme of Example \ref{ex:tuf}(i). Now the condition that
our family of algebra structures extends the given algebra structure on $V(\Gamma)^U$ means
that $f$ maps $S$ to the closed point $Z_0/\!/U$, viewed as a general $T$-orbit closure in $\bA^N$.
Conversely, given a family of closed $G$-subschemes $\cZ \subset V \times S$ with Hilbert function
$h$ such that $(p_*\cO_{\cZ})^U$ is generated by $v_{\lambda_1},\ldots,v_{\lambda_N}$ and the
resulting morphism $f$ maps $S$ to the point $Z_0/\!/U$, we obtain an isomorphism
of $\cO_S$-$G$-modules $\cO_S \otimes_k V(\Gamma) \cong p_*\cO_{\cZ}$ which restricts to an
isomorphism of $\cO_S$-$T$-algebras $\cO_S \otimes_k V(\Gamma)^U \cong (p_*\cO_{\cZ})^U$. This translates
into a family of algebra structures of type $\Gamma$ over $S$.
Summarizing, we have the following link between algebra structures and invariant Hilbert schemes:
\begin{theorem}\label{thm:mm}
With the preceding notation and assumptions, there exists an open subscheme
$$
\Hilb^G_h(V)_0 \subset \Hilb^G_h(V)
$$
that parametrizes those families $\cZ$ such that the $\cO_S$-algebra $(p_* \cO_{\cZ})^U$
is generated by the image of $(V^*)^U$ under $\varphi$. Moreover, there exists a morphism
$$
f : \Hilb^G_h(V)_0 \longrightarrow \Hilb^T(\ulambda)
$$
that sends $\cZ$ to $\cZ/\!/U$. The fiber of $f$ at the closed point
$Z_0/\!/U \in \Hilb^T(\ulambda)$ represents the functor $M_{\Gamma}$.
\end{theorem}
We denote the fiber of $f$ at $Z_0/\!/U$ by $\M_{\Gamma}$ and call it the
{\bf moduli scheme of multiplicity-free varieties with weight monoid $\Gamma$}. Since it represents the
functor $M_{\Gamma}$, the scheme $\M_{\Gamma}$ is independent of the choices of generators of $\Gamma$ and of
highest weight vectors. It comes with a special point $Z_0$, the common horospherical degeneration to
all of its closed points.
We may also consider the preimage in $\Hilb^G_h(V)_0$ of the open $(\bG_m)^N$-orbit
$\Hilb^T_{\ulambda} \subset \Hilb^T(\ulambda)$ that consists of general $T$-orbit closures.
This preimage is an open subscheme of $\Hilb^G_h(V)_0$, that we denote by $\Hilb^G_{\ulambda}$.
Its closed points are exactly the irreducible multiplicity-free varieties $Z \subset V$
having weight monoid $\Gamma$ and such that the projections $Z \to V(\lambda_1)^*, \ldots,V(\lambda_N)^*$
are all non-zero; equivalently, the horospherical degeneration of $Z$ inside $V$ is $Z_0$.
Such varieties $Z$ are called {\bf non-degenerate}.
Next, we relate these constructions to the action of the adjoint torus $T_{\ad}$ on $\Hilb^G_h(V)$,
defined in the previous subsection. The torus $(\bG_m)^N$ acts on $\Hilb^G_h(V)$ as the equivariant
automorphism group of the $G$-module $V$. This action stabilizes the open subschemes $\Hilb^G_h(V)_0$
and $\Hilb^G_{\ulambda}$; moreover, $f$ is equivariant for the natural action of $(\bG_m)^N$ on the
toric Hilbert scheme. Also, note that the $(\bG_m)^N$-orbit $\Hilb^T_{\ulambda}$ is isomorphic
to $(\bG_m)^N/\ulambda(T)$ where $\ulambda$ denotes the homomorphism (\ref{eqn:ulambda}).
This yields an action of $T$ on $\M_{\Gamma}$ and one checks that the center $Z(G)$ acts trivially.
Thus, \emph{$T_{\ad}$ acts on $\M_{\Gamma}$ and each $(\bG_m)^N$-orbit in $\Hilb^G_{\ulambda}$ intersects
$\M_{\Gamma}$ along a unique $T_{\ad}$-orbit}.
Given a family $\cZ$ of non-degenerate subvarieties of $V$, one shows that the associated horospherical
family $\cX$ is a family of non-degenerate subvarieties of $V$ as well. It follows that
\emph{the $T_{\ad}$-action on $\M_{\Gamma}$ extends to an $\bA^r$-action such that the origin of $\bA^r$
acts via the constant map to $Z_0$.} In particular, $Z_0$ is the unique $T_{\ad}$-fixed point and is
contained in each $T_{\ad}$-orbit closure; thus, \emph{the scheme $\M_{\Gamma}$ is connected}.
The $T_{\ad}$-action on $\M_{\Gamma}$ may also be seen in terms of multiplication laws (\ref{eqn:comp}):
by \cite[Proposition 2.11]{AB05},
\emph{each non-zero component $m_{\lambda, \mu}^{\nu}$ is a $T_{\ad}$-eigenvector of weight $\lambda + \mu - \nu$}
(note that $\lambda + \mu - \nu$ lies in the root lattice, and hence is indeed a character of $T_{\ad}$).
As a consequence, given an irreducible multiplicity-free variety $Z$ with weight monoid $\Gamma$, the
$T_{\ad}$-orbit closure of $Z$ (viewed as a closed point of $\M_{\Gamma}$) has weight monoid $\Sigma_Z$,
the submonoid of $\Lambda$ generated by those weights $\lambda + \mu - \nu$ such that $V(\nu)$ is
contained in the product $V(\lambda) \cdot V(\mu) \subset \cO(Z)$. In particular,
\emph{the monoid $\Sigma_Z$ is finitely generated}. Again, it is not known how to obtain this result directly
from the algebraic definition of the {\bf root monoid} $\Sigma_Z$.
In view of a deep theorem of Knop (see \cite[Theorem 1.3]{Kn96}), the saturation of the monoid $\Sigma_Z$ is free,
i.e., generated (as a monoid) by linearly independent elements. Equivalently,
\emph{the normalization of each $T_{\ad}$-orbit closure in $\M_{\Gamma}$ is equivariantly isomorphic
to an affine space on which $T_{\ad}$ acts linearly}.
We also mention a simple relation between the Zariski tangent space to $\M_{\Gamma}$ at a closed point $Z$
and the space $T^1(Z)$ parametrizing first-order deformations of $Z$: namely,
\emph{the normal space to the orbit $T_{\ad} \cdot Z$ in $\M_{\Gamma}$ is isomorphic to the $G$-invariant
subspace $T^1(Z)^G$} (see \cite[Proposition 1.13]{AB05}). In particular,
$$
T_{Z_0} \M_{\Gamma} = T^1(Z_0)^G
$$
as $Z_0$ is fixed by $T_{\ad}$.
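As an illustration of these results, take $G = \SL_2$ and $\Gamma$ generated by $2$, so that $Z_0 \subset V(2)$ is the cone of squares of linear forms, an affine quadric cone with an $A_1$-singularity at the origin. One checks that $T^1(Z_0)$ is one-dimensional with trivial $G$-action, and that
$$
\M_{\Gamma} \cong \bA^1,
$$
with universal family the affine quadrics $(\Delta = t)_{t \in \bA^1}$: the fiber at any $t \neq 0$ is isomorphic to $G/T$, while the fiber at $t = 0$ is the horospherical degeneration $Z_0$. Moreover, $T_{\ad} \cong \bG_m$ acts on $\M_{\Gamma} \cong \bA^1$ with weight $2\alpha$ (where $\alpha$ denotes the simple root), in accordance with the $T_{\ad}$-weight $\lambda + \mu - \nu = 2 + 2 - 0$ of the component $m_{2,2}^{0}$.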
In fact, many results of this subsection hold in the more general setting where the algebra $k[\Gamma]$
is replaced with an arbitrary $T$-algebra of finite type; see \cite{AB05}. The multiplicity-free case
presents remarkable special features; namely, finiteness properties that will be surveyed in the next subsection.
\begin{example}\label{ex:free}
If the monoid $\Gamma$ is free, then of course we choose $\lambda_1,\ldots,\lambda_N$ to be its minimal generators.
Since they are linearly independent, $\Hilb^T(\ulambda)$ is a (reduced) point and hence
$$
\Hilb^G_h(V)_0 = \Hilb^G_{\ulambda} = \M_{\Gamma}.
$$
Also, since the homomorphism $\ulambda$ is surjective, each $(\bG_m)^N$-orbit in $\Hilb^G_{\ulambda}$
is a single $T_{\ad}$-orbit. The pull-back $\pi : \Univ_{\Gamma} \to \M_{\Gamma}$ of the universal family of
$\Hilb^G_h(V)$ may be viewed as the universal family of non-degenerate spherical subvarieties of $V$.
\end{example}
\subsection{Finiteness properties of spherical varieties}
\label{subsec:fpsv}
In this subsection, we survey finiteness and uniqueness results relative to the structure and classification
of spherical varieties. We still denote by $G$ a connected reductive group; we fix a Borel subgroup
$B \subset G$ and a maximal torus $T \subset B$.
Recall that a (possibly non-affine) $G$-variety $X$ is spherical, if $X$ is normal and contains an
open $B$-orbit; in particular, $X$ contains an open $G$-orbit $X_0$. Choosing a base point $x \in X_0$ and
denoting by $H$ its isotropy group, we may identify $X_0$ with the homogeneous space $G/H$.
We say that $H$ is a {\bf spherical subgroup} of $G$, and the pair $(X,x)$ is an
{\bf equivariant embedding} of $G/H$; the complement $X \setminus X_0$ is called the {\bf boundary}.
Morphisms of embeddings are defined as those equivariant morphisms that preserve base points.
If the variety $X$ is complete, then $X$ is called an {\bf equivariant completion} (or equivariant
compactification) of $G/H$.
One can show that \emph{any spherical $G$-variety contains only finitely many $B$-orbits}; as a consequence,
\emph{any equivariant embedding of a spherical $G$-homogeneous space contains only finitely many $G$-orbits}.
Conversely, if a $G$-homogeneous space $X_0$ satisfies the property that all of its equivariant embeddings
contain finitely many orbits, then $X_0$ is spherical.
Spherical homogeneous spaces admit a further remarkable characterization, in terms of the existence of
equivariant completions with nice geometric properties. Specifically, consider an embedding $X$ of a homogeneous
space $X_0 = G/H$. Assume that $X$ is smooth and $X \setminus X_0$ is a union of smooth prime
divisors that intersect transversally; in other words, the boundary is a {\bf divisor with simple normal crossings}.
Then we may consider the associated {\bf sheaf of logarithmic vector fields}, consisting of those derivations of
$\cO_X$ that preserve the ideal sheaf of $D := X \setminus X_0$. This subsheaf, denoted by $T_X(-\log D)$,
is a locally free subsheaf of the tangent sheaf $T_X$ of all derivations of $\cO_X$; both sheaves coincide
along $X_0$. The {\bf logarithmic tangent bundle} is the vector bundle on $X$ associated with $T_X(-\log D)$.
The $G$-action on $(X,D)$ yields an action of the Lie algebra $\fg$ by derivations that preserve $D$,
i.e., a homomorphism of Lie algebras
$$
\fg \longrightarrow H^0 \big(X, T_X(-\log D) \big).
$$
We say that the pair $(X,D)$ is {\bf log homogeneous under $G$} if $\fg$ generates the sheaf
of logarithmic vector fields. Now \emph{any complete log homogeneous $G$-variety is spherical; moreover,
any spherical $G$-homogeneous space admits a log homogeneous equivariant completion} (as follows from
\cite[Sections 2.2, 2.5]{BB96}; see \cite{Bri07b} for further developments on log homogeneous varieties and
their relation to spherical varieties). We will need a stronger version of part of this result:
\begin{lemma}\label{lem:lh}
Let $X$ be a smooth spherical $G$-variety with boundary a divisor $D$ with simple normal crossings and
denote by $S_X$ the subsheaf of $T_X(- \log D)$ generated by $\fg$. If $S_X$ is locally free, then
$S_X = T_X(- \log D)$. In particular, $X$ is log homogeneous.
\end{lemma}
\begin{proof}
Clearly, $S_X$ and $T_X(- \log D)$ coincide along the open $G$-orbit. Since these sheaves are locally free,
the support of the quotient $T_X(- \log D)/S_X$ has pure codimension $1$ in $X$. But this support is $G$-stable,
and contains no $G$-orbit of codimension $1$ by \cite[Section 2]{BB96}. Thus, this support is empty; this yields
our assertion.
\end{proof}
Log homogeneous pairs satisfy an important rigidity property, namely,
\begin{equation}\label{eqn:rig}
H^1\big( X, T_X(-\log D) \big) = 0
\end{equation}
whenever $X$ is complete (as follows from a vanishing theorem due to Knop, see \cite[Theorem 4.1]{Kn94}).
This is the main ingredient for proving the following finiteness result (\cite[Theorem 3.1]{AB05}):
\begin{theorem}\label{thm:fin}
For any $G$-variety $X$, only finitely many conjugacy classes of spherical subgroups of $G$ occur as isotropy
groups of points of $X$.
\end{theorem}
In other words, \emph{any $G$-variety contains only finitely many isomorphism classes of spherical $G$-orbits}.
For the proof, one reduces by general arguments of algebraic transformation groups to the case that
$X$ is irreducible and admits a geometric quotient
$$
p: X \longrightarrow S
$$
where the fibers of $p$ are exactly the $G$-orbits. Arguing by induction on the dimension,
we may replace $S$ with an open subset; thus, we may further
assume that $X$ and the morphism $p$ are smooth. Then the spherical $G$-orbits form an open subset of $X$,
since the same holds for the $B$-orbits of dimension $\dim(X) - \dim(S)$. So we may assume that all fibers are
spherical. Now, by general arguments of algebraic transformation groups again, there exists an equivariant
fiberwise completion of $X$, i.e., a $G$-variety $\bar{X}$ equipped with a proper $G$-invariant morphism
$$
\bar{p}: \bar{X} \longrightarrow S
$$
such that $\bar{X}$ contains $X$ as a $G$-stable open subset, and $\bar{p}$
extends $p$. We may further perform equivariant blow-ups and hence assume that $\bar{X}$ is smooth,
the boundary $\bar{X} \setminus X$ is a divisor with simple normal crossings, and the subsheaf
$S_{\bar{X}} \subset T_{\bar{X}}$ generated by $\fg$ is locally free. By Lemma \ref{lem:lh},
it follows that $\bar{X}$ is a family of log homogeneous varieties (possibly after shrinking $S$ again).
Now the desired statement is a consequence of rigidity (\ref{eqn:rig}) together with arguments of
deformation theory; see \cite[pp. 113--115]{AB05} for details.
As a direct consequence of Theorem \ref{thm:fin}, \emph{any finite-dimensional $G$-module $V$ contains
only finitely many closures of spherical $G$-orbits, up to the action of $\GL(V)^G$} (see \cite[p. 116]{AB05}).
In view of the results of Subsection \ref{subsec:mav}, it follows that
\emph{every moduli scheme $\M_{\Gamma}$ contains only finitely many $T_{\ad}$-orbits}. In particular,
\emph{up to equivariant isomorphism, there are only finitely many affine spherical varieties having a
prescribed weight monoid}.
This suggests that spherical varieties may be classified by combinatorial invariants. Before presenting
a number of results in this direction, we associate three such invariants to a spherical homogeneous space
$X_0 = G/H$.
The first invariant is the set of weights of $B$-eigenvectors in the field of rational functions $k(X_0) = k(G)^H$;
this is a subgroup of $\Lambda$, denoted by $\Lambda(X_0)$ and called the {\bf weight lattice} of $X_0$.
The rank of this lattice is called the {\bf rank} of $X_0$ and denoted by $\rk(X_0)$.
Note that any $B$-eigenfunction is determined by its weight up to a non-zero scalar, since $k(X_0)^B = k$
as $X_0$ contains an open $B$-orbit.
The second invariant is the set $\cV(X_0)$ of those discrete valuations of the field $k(X_0)$, with values in
the field $\bQ$ of rational numbers,
that are invariant under the natural $G$-action. One can show that any such valuation is uniquely determined by
its restriction to $B$-eigenvectors; moreover, this identifies $\cV(X_0)$ to a convex polyhedral cone in
the rational vector space $\Hom\big( \Lambda(X_0), \bQ \big)$. Thus, $\cV(X_0)$ is called the {\bf valuation cone}.
The third invariant is the set $\cD(X_0)$ of $B$-stable prime divisors in $X_0$, called {\bf colors};
these are exactly the irreducible components of the complement of the open $B$-orbit. Any $D \in \cD(X_0)$
defines a discrete valuation of $k(X_0)$, and hence (by restriction to $B$-eigenvectors) a point
$\varphi_D \in \Hom\big( \Lambda(X_0), \bQ \big)$. Moreover, the stabilizer of $D$ in $G$ is a parabolic subgroup
$G_D$ containing $B$, and hence corresponds to a set of simple roots. Thus, $\cD(X_0)$ may be viewed as
an abstract finite set equipped with maps $D \mapsto \varphi_D$ to $\Hom\big( \Lambda(X_0), \bQ \big)$
and $D \mapsto G_D$ to subsets of simple roots.
The invariants $\Lambda(X_0)$, $\cV(X_0)$, $\cD(X_0)$ are the main ingredients of a
\emph{classification of all equivariant embeddings of $X_0$}, due to Luna and Vust
(see \cite{Pe10} for an overview, and \cite{Kn91} for an exposition; the original article \cite{LV83} addresses
embeddings of arbitrary homogeneous spaces, see also \cite{Ti07}). This classification is formulated in terms of
{\bf colored fans}, combinatorial objects that generalize the fans of toric geometry. Indeed, toric varieties are
exactly the equivariant embeddings of a torus $T$ viewed as a homogeneous space under itself. In that case,
$\Lambda(T)$ is just the character lattice, and the set $\cD(T)$ is empty; one shows that $\cV(T)$ is the whole space
$\Hom \big( \Lambda(T),\bQ \big)$.
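For concreteness, in the simplest toric case $T = \bG_m$ the space $\Hom\big( \Lambda(T), \bQ \big)$ is $\bQ$, and the possible fans are built from the cones $\bQ_{\leq 0}$, $\{0\}$ and $\bQ_{\geq 0}$:
$$
\{ \{0\} \} \longleftrightarrow \bG_m, \qquad
\{ \{0\}, \bQ_{\geq 0} \} \longleftrightarrow \bA^1, \qquad
\{ \{0\}, \bQ_{\leq 0}, \bQ_{\geq 0} \} \longleftrightarrow \bP^1,
$$
where $\bA^1$ also arises from the fan containing $\bQ_{\leq 0}$ instead, with the roles of the two fixed points exchanged.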
Another important result, due to Losev (see \cite[Theorem 1]{Lo09a}), asserts that
\emph{any spherical homogeneous space is uniquely determined by its weight lattice, valuation cone and colors,
up to equivariant isomorphism}.
The proof combines many methods, partial classifications, and earlier results, including the Luna-Vust
classification and the finiteness result of Theorem \ref{thm:fin}.
Returning to an affine spherical variety $Z$, one can show that the valuation cone of the open $G$-orbit $Z_0$
is dual (in the sense of convex geometry) to the cone generated by the root monoid $\Sigma_Z$.
Also, recall that the saturation of $\Sigma_Z$ is a free monoid; its generators are called the {\bf spherical roots}
of $Z$. By another uniqueness result of Losev (see \cite[Theorem 1.2]{Lo09b}),
\emph{any affine spherical $G$-variety is uniquely determined by its weight monoid and spherical roots,
up to equivariant isomorphism}. Moreover,
\emph{any smooth affine spherical $G$-variety is uniquely determined by its weight monoid},
again up to equivariant isomorphism (\cite[Theorem 1.3]{Lo09b}).
Note that all smooth affine spherical varieties are classified in \cite{KS06}; yet one does not know how to deduce
the preceding uniqueness result (a former conjecture of Knop) from that classification.
\begin{example}\label{ex:sph}
The spherical subgroups of $G = \SL_2$ are exactly the closed subgroups of positive dimension.
Here is the list of these subgroups up to conjugation in $G$:
\noindent
(i) $H = B$ (the Borel subgroup of upper triangular matrices of determinant $1$).
Then $G/H \cong \bP^1$ has rank $0$ and a unique color, the $B$-fixed point $\infty$.
\noindent
(ii) $H = U \mu_n$ where $U$ denotes the unipotent part of $B$, and $\mu_n$ the group of diagonal
matrices with eigenvalues $\zeta,\zeta^{-1}$ where $\zeta^n = 1$; here $n$ is a positive integer.
Then $H \subset B$ and via the resulting map $G/H \to G/B$, we see that $G/H$ is the total space
of the line bundle $\cO_{\bP^1}(n)$ minus the zero section. Moreover, $G/H$ has rank $1$ and a unique color,
the fiber at $\infty$ of the projection to $\bP^1$. We have
$\Lambda(G/H) = n \bZ \subset \bZ = \Lambda$, and the valuation cone is the whole space
$\Hom\big( \Lambda(G/H), \bQ \big) \cong \bQ$.
A smooth equivariant completion of $G/H$ is $\bP \big( \cO_{\bP^1} \oplus \cO_{\bP^1}(n) \big)$,
the rational ruled surface of index $n$. By contracting the unique curve of negative
self-intersection, i.e., the section of self-intersection $-n$, we obtain another equivariant
completion which is singular if $n \neq 1$, and isomorphic to $\bP\big( V(1) \oplus V(0) \big) \cong \bP^2$
if $n = 1$.
\noindent
(iii) $H = T$ (the torus of diagonal matrices of determinant $1$). Then
$G/H \cong G \cdot xy \subset V(2)$ is the affine quadric $(\Delta = 1)$; it has rank $1$
and weight lattice $2 \bZ$. Note that $H$ is the intersection of $B$ with the opposite Borel
subgroup $B^-$ (of lower triangular matrices). Thus, $G/H$ admits $\bP^1 \times \bP^1$ as an
equivariant completion via the natural morphism $G/H \to G/B \times G/B^-$; this is in fact the unique
non-trivial equivariant embedding. Also, $G/H$ has two colors $D^+, D^-$ (the fibers at $\infty$ of the
two morphisms to $\bP^1$). The valuation cone is the negative half-line in
$\Hom\big( \Lambda(G/H), \bQ \big) \cong \bQ$, and $D^+,D^-$ are mapped to the same point of the positive
half-line.
\noindent
(iv) $H = N_G(T)$ (the normalizer of $T$ in $G$). Then $G/H \cong G \cdot [xy] \subset \bP\big( V(2) \big)$
is the open affine subset $(\Delta \neq 0)$ in the projective plane $\bP\big( V(2) \big)$, which is the unique
non-trivial embedding. Moreover, $G/H$ has rank $1$ and weight lattice $4 \bZ$. There is a unique color $D$,
with closure the projective line $\bP\big( y V(1) \big) \subset \bP\big( V(2) \big)$. The valuation cone is
again the negative half-line in $\Hom\big( \Lambda(G/H), \bQ \big) \cong \bQ$, and $D$ is mapped to a point
of the positive half-line.
\end{example}
\subsection{Towards a classification of wonderful varieties}
\label{subsec:cwv}
In this subsection, we introduce the class of wonderful varieties, which play an essential role
in the structure of spherical varieties. Then we present a number of recent works that classify
wonderful varieties (possibly with additional assumptions) via Lie-theoretical or geometric methods.
\begin{definition}
A variety $X$ is called {\bf wonderful} if it satisfies the following properties:
\begin{enumerate}
\item{}
$X$ is smooth and projective.
\item{}
$X$ is equipped with an action of a connected reductive group $G$ having an open orbit $X_0$.
\item{}
The boundary $D := X \setminus X_0$ is a divisor with simple normal crossings, and its irreducible
components $D_1,\ldots,D_r$ meet.
\item{}
The $G$-orbit closures are exactly the partial intersections of $D_1,\ldots,D_r$.
\end{enumerate}
\end{definition}
Then $D_1,\ldots,D_r$ are called the \emph{boundary components}; their number $r$ is the \emph{rank}
of $X$. By (4), $X$ has a unique closed orbit
$$
Y := D_1\cap \cdots \cap D_r.
$$
The wonderful $G$-varieties of rank $0$ are just the complete $G$-homogeneous varieties, i.e., the
homogeneous spaces $G/P$ where $P \subset G$ is a parabolic subgroup containing $B$. Those of rank $1$
are exactly the smooth equivariant completions of a homogeneous space by a homogeneous divisor; they have been
classified by Akhiezer (see \cite{Ak83}) and they turn out to be spherical. The latter property extends to all ranks:
in fact, \emph{the wonderful varieties are exactly the complete log homogeneous varieties having a
unique closed orbit}, as follows from \cite{Lu96} combined with \cite[Propositions 2.2.1 and 2.5]{BB96}.
Moreover, the rank of a wonderful variety coincides with the rank of its open $G$-orbit.
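For instance, for $G = \SL_2$, the completions described in Example \ref{ex:sph} provide two wonderful varieties of rank $1$: $\bP^1 \times \bP^1$ is the wonderful completion of $G/T$, with boundary the diagonal, and $\bP\big( V(2) \big)$ is the wonderful completion of $G/N_G(T)$, with boundary the conic $(\Delta = 0)$. In both cases the boundary is a single homogeneous divisor, which is also the unique closed orbit.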
The combinatorial invariants $\Lambda(X_0)$, $\cV(X_0)$, $\cD(X_0)$ associated with the open $G$-orbit
of a wonderful variety $X$ admit simple geometric interpretations. To state them, let $X_1$ denote the
complement in $X$ of the union of the closures of the colors. Then $X_1$ is an open
$B$-stable subset of $X$. One shows that $X_1$ is isomorphic to an affine space and meets each $G$-orbit
in $X$ along its open $B$-orbit; it easily follows that
\emph{the closure in $X$ of each color $D \in \cD(X_0)$ is a base-point-free divisor, and these divisors form
a basis of the Picard group of $X$}. In particular, we have equalities in $\Pic(X)$:
\begin{equation}\label{eqn:pic}
D_i = \sum_{D \in \cD} c_{D,i} \, D \quad (i=1,\ldots,r)
\end{equation}
where the $c_{D,i}$ are uniquely determined integers. Also, $X_1 \cap Y$ is the open Bruhat cell in $Y$,
and hence equals $B \cdot y$ for a unique $T$-fixed point $y \in Y$. Thus, $T$ acts on the normal space
$T_y (X)/T_y (Y)$ of dimension $r = \rk(X)$; one shows that the corresponding weights $\sigma_1,\ldots,\sigma_r$
(called the {\bf spherical roots} of the wonderful variety $X$) are linearly independent. Now
\emph{the spherical roots form a basis of $\Lambda(X_0)$, and generate the dual cone to $\cV(X_0)$}.
The dual basis of $\Hom \big( \Lambda(X_0), \bQ \big)$ consists of the opposites of the valuations $v_1,\ldots,v_r$
associated with the boundary divisors. Moreover, (\ref{eqn:pic}) implies that the map
$\varphi : \cD \to \Hom \big( \Lambda(X_0), \bQ \big)$ is given by
$$
\varphi_D = \sum_{i=1}^r c_{D,i} \, v_i \quad (D \in \cD).
$$
To each spherical homogeneous space $X_0 = G/H$, one associates a wonderful variety as follows.
Denote by $N_G(H)$ the normalizer of $H$ in $G$, so that the quotient group $N_G(H)/H$ is isomorphic
to the equivariant automorphism group $\Aut^G(X_0)$. Since $X_0$ is spherical, the algebraic group
$N_G(H)/H$ is diagonalizable; moreover, $N_G(H)$ equals the normalizer of the Lie algebra $\fh$.
Thus, the homogeneous space $G/N_G(H)$ is the $G$-orbit of $\fh$ viewed as a point of the Grassmannian
variety $\Gr(\fg)$ of subspaces of $\fg$ (or alternatively, of the scheme of Lie subalgebras of $\fg$).
The orbit closure
$$
X:= \overline{G \cdot \fh} \subset \Gr(\fg)
$$
is a projective equivariant completion of $G/N_G(H)$, called the \emph{Demazure embedding} of that
homogeneous space. In fact, \emph{the variety $X$ is wonderful} by a result of Losev (see \cite{Lo09c})
based on earlier results of several mathematicians, including Demazure and Knop (see \cite[Corollary 7.2]{Kn96}).
Moreover, by embedding theory of spherical homogeneous spaces, \emph{the log homogeneous embeddings
of $G/H$ are exactly those smooth equivariant embeddings that admit a morphism to $X$; then the
logarithmic tangent bundle is the pull-back of the tautological quotient bundle on $\Gr(\fg)$}.
Also, by embedding theory again, \emph{a complete log homogeneous variety $X'$ is wonderful if and only
if the morphism $X' \to X$ is finite.}
It follows that every spherical homogeneous space $G/H$ such that $H = N_G(H)$ admits a wonderful
equivariant completion; in the converse direction, if $G/H$ admits such a completion $X$,
then $X$ is unique, and the quotient $N_G(H)/H$ is finite. In particular, the center of $G$ acts on $X$
via a finite quotient; thus, one may assume that $G$ is semi-simple when considering wonderful $G$-varieties.
Since the $G$-variety $\Gr(\fg)$ contains only finitely many isomorphism classes of spherical $G$-orbits,
and any $G$-homogeneous space admits only finitely many finite equivariant coverings, we see that
\emph{the number of isomorphism classes of wonderful $G$-varieties is finite} (for a given group $G$).
Also, note that the wonderful varieties are exactly those log homogeneous varieties that are
{\bf log Fano}, i.e., such that the determinant of the logarithmic tangent sheaf is ample.
To classify wonderful $G$-varieties, it suffices to characterize those triples $(\Lambda,\cV,\cD)$
that occur as combinatorial invariants of their open $G$-orbits, in view of Losev's uniqueness result.
In fact, part of the information contained in such triples is more conveniently encoded by abstract
combinatorial objects called {\bf spherical systems}. These were introduced by Luna, who obtained a complete
classification of wonderful $G$-varieties for those groups $G$ of type $A$, in terms of spherical systems only.
For an arbitrary group $G$, Luna also showed how to reduce the classification of spherical $G$-homogeneous spaces
to that of wonderful $G$-varieties, and he conjectured that
\emph{wonderful varieties are classified by spherical systems} (see \cite{Lu01}).
Luna's Lie theoretic methods were further developed by Bravi and Pezzini to classify wonderful varieties in classical
types $B,C,D$ (see \cite{BP05, BP09}); the case of exceptional types $E$ was treated by Bravi in \cite{Bra07}.
Thus, Luna's conjecture has been checked in almost all cases. The exceptional type $F_4$ was considered by Bravi and
Luna in \cite{BL08}, who listed the 266 spherical systems in that case and constructed many examples of
associated wonderful varieties.
Luna's conjecture has also been confirmed for those wonderful $G$-varieties that arise as orbit closures
in projectivizations of simple $G$-modules, via a classification due to Bravi and Cupit-Foutou (see \cite{BC10}).
These wonderful varieties are called {\bf strict}; they are characterized by the property that the isotropy
group of each point equals its normalizer, as shown by Pezzini (see \cite[Theorem 2]{Pe07}).
In \cite{BC08}, Bravi and Cupit-Foutou applied that classification to explicitly describe certain
moduli schemes of spherical varieties with a prescribed weight monoid; we now survey their results.
We say that a submonoid $\Gamma \subset \Lambda^+$ is $G$-{\bf saturated} if
$$
\Gamma = \Lambda^+ \cap \bZ \Gamma
$$
where $\bZ \Gamma$ denotes the subgroup of $\Lambda$ generated by $\Gamma$. Then $\Gamma$ is finitely generated,
and saturated in the sense arising from toric geometry. By \cite{Pa97}, the $G$-saturated submonoids of
$\Lambda^+$ are exactly the weight monoids of those affine horospherical $G$-varieties $Z_0$ such that
$Z_0$ is normal and contains an open $G$-orbit with boundary of codimension $\geq 2$.
Next, fix a $G$-saturated submonoid $\Gamma \subset \Lambda^+$ that is freely generated, with basis
$\lambda_1,\ldots,\lambda_N$. Consider the associated moduli scheme $\M_{\Gamma}$
equipped with the action of the adjoint torus $T_{\ad}$ (Subsection \ref{subsec:mav}).
Then $\M_{\Gamma}$ is an open subscheme of $\Hilb^G_h(V)$ by Example \ref{ex:free}, where
$V = V(\lambda_1)^* \oplus \cdots \oplus V(\lambda_N)^*$ and $h = h_{\Gamma}$.
By the results of \cite[Section 2.3]{BC08}, \emph{$\M_{\Gamma}$ is isomorphic to a $T_{\ad}$-module with linearly
independent weights, say $\sigma_1,\ldots,\sigma_r$. Moreover, the union of all non-degenerate $G$-subvarieties
$$
Z \subset V(\lambda_1)^* \oplus \cdots \oplus V(\lambda_N)^*
$$
with weight monoid $\Gamma$ is the affine multi-cone over a wonderful $G$-variety
$$
X \subset \bP \big( V(\lambda_1)^* \big) \times \cdots \times \bP\big( V(\lambda_N)^* \big)
$$
which is strict and has spherical roots $\sigma_1,\ldots,\sigma_r$.} The closures of the colors
of $X$ are exactly the pull-backs of the $B$-stable hyperplanes in $\bP \big( V(\lambda_i)^* \big)$
(defined by the highest weight vectors of $V(\lambda_i)$) under the projections
$X \to \bP \big( V(\lambda_i)^* \big)$.
To prove these results, one first studies the tangent space to $\M_{\Gamma}$ at the horospherical degeneration
$Z_0$, based on Proposition \ref{prop:orb}. This $T_{\ad}$-module turns out to be multiplicity-free with weights
$\sigma_1,\ldots,\sigma_r$ among an explicit list of spherical roots. Then one shows that the data of
$\lambda_1,\ldots,\lambda_N$ and $\sigma_1,\ldots,\sigma_r$ define a spherical system; finally, by the
classification of strict wonderful varieties, this spherical system corresponds to a unique such variety $X$.
Yet several $G$-saturated monoids may well yield the same strict wonderful variety, for instance in the case
that $\Gamma$ has as basis a single dominant weight $\lambda$ (any such monoid is $G$-saturated); see the final example below.
Another natural example of a $G$-saturated monoid is the whole monoid $\Lambda^+$ of dominant weights.
The affine spherical varieties $Z$ having that weight monoid are called {\bf model $G$-varieties}, as every
simple $G$-module occurs exactly once in $\cO(Z)$; the horospherical degeneration of such a $Z$ is $Z_0 = G/\!/U$.
The strict wonderful varieties associated with model varieties have been described in detail by Luna
(see \cite{Lu07}).
More recently, Cupit-Foutou has generalized the approach of \cite{BC08} in view of a geometric classification of wonderful
varieties and of a proof of Luna's conjecture in full generality (see \cite{Cu09}). For this, she associates with any
wonderful variety of rank $r$ a family of (affine) spherical varieties over the affine space $\bA^r$, having a
prescribed free monoid. Then she shows that this family is the universal family.
The first step is based on the construction of the {\bf total coordinate ring} (also called the {\bf Cox ring})
of a wonderful variety $X$. Recall that the set $\cD$ of (closure of) colors freely generates the Picard group of
$X$, and consider the $\bZ^{\cD}$-graded ring
$$
R(X) := \bigoplus_{(n_D) \in \bZ^{\cD}} H^0 \big( X, \cO_X( \sum_{D \in \cD} n_D D) \big)
$$
relative to the multiplication of sections. We may assume that $G$ is semi-simple and simply connected; then
each space $H^0 \big( X, \cO_X(\sum_{D \in \cD} n_D D) \big)$ has a unique structure of a $G$-module, and the total
coordinate ring $R(X)$ is a $\bZ^{\cD}$-graded $G$-algebra. It is also a finitely generated unique factorization
domain. Moreover, the invariant subring $R(X)^U$ is freely generated by the canonical sections $s_D$ of the colors
and by those $s_1, \ldots, s_r$ of the boundary divisors; each $s_i$ is homogeneous of weight $(c_{D,i})_{D \in \cD}$.
Each $s_D$ is a $B$-eigenvector of weight (say) $\omega_D$, and hence generates a simple submodule
$$
V_D \cong V(\omega_D) \subset H^0 \big( X, \cO_X(D) \big).
$$
As a consequence, the graded ring $R(X)$ is generated by $s_1,\ldots,s_r$ and by the $V_D$ where $D \in \cD$.
Moreover, $s_1,\ldots,s_r$ form a regular sequence in $R(X)$ (see \cite[Section 3.1]{Bri07a}).
In geometric terms, the affine variety $\tX := \Spec\big( R(X) \big)$ is equipped with an action of the
connected reductive group $\tG := G \times (\bG_m)^{\cD}$ and with a flat, $G$-invariant morphism
\begin{equation}\label{eqn:tcf}
\pi = (s_1,\ldots,s_r) : \tX \longrightarrow \bA^r
\end{equation}
which is also $(\bG_m)^{\cD}$-equivariant for the linear action of that torus on $\bA^r$ with weights
$\sum_{D\in \cD} c_{D,i} \varepsilon_D$ where $i = 1,\ldots, r$ and $\varepsilon_D :(\bG_m)^{\cD} \to \bG_m$
denotes the coefficient on $D$. Moreover, the $\tG$-variety $\tX$ is spherical and equipped with a closed
immersion into the $\tG$-module $\big( \bA^r \times \prod_{D \in \cD} V_D\big)^*$
that identifies $\pi$ with the projection to $(\bA^r)^* \cong \bA^r$. Here $(\bG_m)^{\cD}$ acts on
$\prod_{D \in \cD} V_D^*$ via multiplication by $-\varepsilon_D$ on $V_D^*$.
It follows that $\pi$ may be viewed as a family of non-degenerate spherical $G \times C$-subvarieties of
$$
V := \bigoplus_{D \in \cD} V_D^*
$$
where $C$ denotes the neutral component of the kernel of the homomorphism
$$
(\bG_m)^{\cD} \longrightarrow (\bG_m)^r, \quad
(t_D)_{D \in \cD} \longmapsto \big( \prod_{D \in \cD} t_D^{c_{D,i}} \big)_{i=1,\ldots,r}.
$$
Thus, $C$ is a torus, and $G \times C$ a connected reductive group with maximal torus $T \times C$ and adjoint
torus $T_{\ad}$. The weight monoid $\Gamma$ is freely generated by the restrictions to $T \times C$
of the weights $(\omega_D,\varepsilon_D)$ of $T \times (\bG_m)^{\cD}$, where $D \in \cD$.
Now the main results of \cite{Cu09} assert that the moduli scheme $\M_{\Gamma}$ is isomorphic to $\bA^r$, and
$\tX$ to the universal family. Moreover, $X$ is the quotient by $(\bG_m)^{\cD}$ of the union of non-degenerate
orbits (an open subset of $\tX$, stable under $\tG$). In particular, the wonderful $G$-variety $X$ is uniquely
determined by the monoid $\Gamma$.
As in \cite{BC08}, the first step in the proof is the determination of $T_{Z_0}(\M_{\Gamma})$. Then a new
ingredient is the vanishing of $T^2(Z_0)^G$, an obstruction space for the functor of invariant infinitesimal
deformations of $Z_0$. This yields the smoothness of $\M_{\Gamma}$ at $Z_0$, which easily implies the
desired isomorphism $\M_{\Gamma} \cong \bA^r$.
\begin{example}\label{ex:won}
Let $G = \SL_2$ as in Example \ref{ex:sph}. Then the wonderful $G$-varieties $X$ are those of the following
list, up to $G$-isomorphism:
\noindent
(i) $\bP^1 = G/B$. Then $X$ is strict of rank $0$: it has no spherical root. Moreover,
$R(X) = \Sym\big( V(1) \big)$, $\tG \cong \bG_m \times G$, and $\tX = V(1)$ where $\bG_m$ acts via scalar
multiplication; the map (\ref{eqn:tcf}) is constant.
\noindent
(ii) $\bP^1 \times \bP^1$, of rank $1$ with open orbit $G/T$ and closed orbit $Y = G/B$ embedded as the
diagonal. Then $X$ is not strict, of rank $1$, and its spherical root is the simple root $\alpha$;
we have $Y = D^+ + D^-$ in $\Pic(X)$.
Moreover, $R(X) \cong \Sym\big( V(1) \oplus V(1) \big)$, $\tG = G \times (\bG_m)^2$, and
$\tX = V(1) \oplus V(1)$ where $(\bG_m)^2$ acts via componentwise multiplication, and $G$ acts diagonally.
The map (\ref{eqn:tcf}) is the determinant. The torus $C$ is one-dimensional, and the monoid $\Gamma$
has basis $(1,1)$ and $(1,-1)$.
\noindent
(iii) $\bP^2 = \bP\big( V(2) \big)$, of rank $1$ with open orbit $G/N_G(T)$ and closed orbit $Y = G/B$ embedded
as the conic $(\Delta = 0)$. Here $X$ is strict, of rank $1$, and its spherical root is $2\alpha$; we have
$Y = 2D$ in $\Pic(X)$. Moreover, $R(X) = \Sym\big( V(2) \big)$, $\tG \cong G \times \bG_m$, and $\tX = V(2)$
where $\bG_m$ acts via scalar multiplication. The map (\ref{eqn:tcf}) is the discriminant $\Delta$. The torus
$C$ is trivial, and $\Gamma$ is generated by $2$. We have $\M_{\Gamma} = \Hilb^G_{h_2}\big( V(2) \big) \cong \bA^1$.
Note that the monoid generated by $4$ yields the same wonderful variety.
\end{example}
\end{document}
\begin{document}
\begin{abstract}
We prove symplectic non-squeezing for the cubic nonlinear Schr\"odinger equation on the line via finite-dimensional approximation.
\end{abstract}
\maketitle
\section{Introduction}
The main result of this paper is a symplectic non-squeezing result for the cubic nonlinear Schr\"odinger equation on the line:
\begin{align*}\label{nls}\tag{NLS}
iu_t+\Delta u= \pm |u|^2 u.
\end{align*}
We consider this equation for initial data in the underlying symplectic Hilbert space $L^2({\mathbb{R}})$. For this class of initial data, the equation is globally well-posed in both the defocusing and focusing cases, that is, with $+$ and $-$ signs in front of the nonlinearity, respectively. Correspondingly, we will be treating the defocusing and focusing cases on equal footing.
Our main result is the following:
\begin{thm}[Non-squeezing for the cubic NLS]\label{thm:nsqz}
Fix $z_*\in L^2({\mathbb{R}})$, $l\in L^2({\mathbb{R}})$ with $\|l\|_2=1$, $\alpha\in {\mathbb{C}}$, $0<r<R<\infty$, and $T>0$. Then there exists $u_0\in B(z_*, R)$ such that the solution $u$ to \eqref{nls} with initial data $u(0)=u_0$ satisfies
\begin{align}\label{th:1}
|\langle l, u(T)\rangle-\alpha|>r.
\end{align}
\end{thm}
Colloquially, this says that the flow associated to \eqref{nls} does not carry any ball of radius $R$ into any cylinder whose cross-section has radius $r<R$. Note that it is immaterial where the ball and cylinder are centered; however, it is essential that the cross-section of the cylinder is defined with respect to a pair of canonically conjugate coordinates.
The formulation of this result is dictated by the non-squeezing theorem of Gromov, \cite[\S0.3A]{Gromov}, which shows the parallel assertion for \emph{any} symplectomorphism of finite-dimensional Hilbert spaces. At the present time, it is unknown whether this general assertion extends to the infinite-dimensional setting.
Non-squeezing has been proved for a number of PDE models; see \cite{Bourg:approx,Bourg:aspects,CKSTT:squeeze,HongKwon,KVZ:nsqz2d,Kuksin,Mendelson,Roumegoux}. We have given an extensive review of this prior work in our paper \cite{KVZ:nsqz2d} and so will not repeat ourselves here. Rather, we wish to focus on our particular motivations for treating the model \eqref{nls}.
With the exception of \cite{KVZ:nsqz2d}, which considers the cubic NLS on ${\mathbb{R}}^2$, all the papers listed above considered the non-squeezing problem for equations posed on tori. One of the initial goals for the paper \cite{KVZ:nsqz2d} was to treat (for the first time) a problem in infinite volume. Moreover, we sought also to obtain a first unconditional result where the regularity required to define the symplectic form coincided with the scaling-critical regularity for the equation.
Many of the central difficulties encountered in \cite{KVZ:nsqz2d} stem from the criticality of the problem considered there, to the point that they obscure the novel aspects associated to working in infinite volume. One of our main motivations in writing this paper is to elaborate our previous approach in a setting unburdened by the specter of criticality. In this way, we hope also to provide a more transparent framework for attacking (subcritical) non-squeezing problems in infinite volume.
In keeping with the expository goal just mentioned, we have elected here to treat a single model, namely, the cubic NLS in one dimension. What follows applies equally well to any mass-subcritical NLS in any space dimension --- it is simply a matter of adjusting the H\"older/Strichartz exponents appropriately.
Let us now briefly outline the method of proof. Like previous authors, the goal is to combine a suitable notion of finite-dimensional approximation with Gromov's theorem in that setting. The particular manner in which we do this mirrors \cite{KVZ:nsqz2d}, but less so other prior work.
In the presence of a frequency truncation, NLS on a torus (of, say, large circumference) becomes a finite-dimensional system and so is non-squeezing in the sense of Gromov. In particular, there is an initial datum $u_0$ in the ball of radius $R$ about $z_*$ so that the corresponding solution $u$ obeys \eqref{th:1} at time $T$. We say that $u$ is a \emph{witness} to non-squeezing.
Now choosing a sequence of frequency cutoff parameters $N_n\to\infty$ and a sequence of circumferences $L_n\to\infty$, Gromov guarantees that there is a sequence of witnesses $u_n$. Our overarching goal is to take a ``limit'' of these solutions and so obtain a witness to non-squeezing for the full (untruncated) model on the whole line. This goal is realized in two steps: (i) Removal of the frequency cutoff for the problem in infinite volume; see Section~\ref{S:4}. (ii) Approximation of the frequency-truncated model in infinite volume by that on a large torus; see Section~\ref{S:5}. The frequency truncation is essential for the second step since it enforces a form of finite speed of propagation.
The principal simplifications afforded by working in the subcritical case appear in the treatment of step (i); they are two-fold. First, the proof of large-data space-time bounds for solutions to \eqref{nls} is elementary and applies also (after trivial modifications) to the frequency-truncated PDE. This is not true for the critical problem. Space-time bounds for the mass-critical NLS constitute a highly nontrivial result of Dodson \cite{Dodson:3,Dodson:1,Dodson}; moreover, the argument does not apply in the frequency-truncated setting because the truncation ruins the monotonicity formulae at the heart of his argument. For a proof of uniform space-time bounds for a suitably frequency-truncated cubic NLS on ${\mathbb{R}}^2$, see Section~4 in \cite{KVZ:nsqz2d}.
The second major simplification relative to \cite{KVZ:nsqz2d} appears when we prove wellposedness in the weak topology on $L^2$. Indeed, the reader will notice that the statement of Theorem~\ref{T:weak wp} here is essentially identical to that of Theorem~6.1 in \cite{KVZ:nsqz2d}; however, the two proofs bear almost no relation to one another. Here we exploit the fact that bounded-mass solutions are compact on bounded sets in space-time in a scaling-critical $L^p$ norm; this is simply not true in the mass-critical case. See Section~4 for further remarks on this topic.
\section{Preliminaries}
Throughout this paper, we will write the nonlinearity as $F(u):=\pm|u|^2u$.
\begin{defn}[Strichartz spaces] We define the Strichartz norm of a space-time function via
$$
\| u \|_{S(I\times{\mathbb{R}})} := \| u\|_{C_t L^2_x (I\times{\mathbb{R}})} + \| u\|_{L_t^4 L^\infty_x (I\times{\mathbb{R}})}
$$
and the dual norm via
$$
\| G \|_{N(I\times{\mathbb{R}})} := \inf_{G=G_1+G_2} \| G_1\|_{L^1_t L^2_x (I\times{\mathbb{R}})} + \| G_2 \|_{L^{4/3}_t L^1_x (I\times{\mathbb{R}})}.
$$
We define Strichartz spaces on the torus analogously.
\end{defn}
The preceding definition permits us to write the full family of Strichartz estimates in a very compact form; see \eqref{StrichartzIneq} below. The other basic linear estimate that we need is local smoothing; see \eqref{LocalSmoothing} below.
\begin{lem}[Basic linear estimates]
Suppose $u:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ obeys
$$
(i\partial_t +\Delta) u = G.
$$
Then for every $T>0$ and every $R>0$,
\begin{align}\label{StrichartzIneq}
\| u\|_{S([0,T]\times{\mathbb{R}})} \lesssim \| u(0) \|_{L^2_x} + \| G \|_{N([0,T]\times{\mathbb{R}})},
\end{align}
\begin{align}\label{LocalSmoothing}
\| u\|_{L^2_{t,x}([0,T]\times[-R,R])}
\lesssim R^{1/2} \Bigl\{ \| |\nabla|^{-1/2} u(0) \|_{L^2_x} + \| |\nabla|^{-1/2} G \|_{L^1_t L^2_x([0,T]\times{\mathbb{R}})} \Bigr\}.
\end{align}
\end{lem}
Let $m_{\le 1}:{\mathbb{R}}\to[0,1]$ be smooth, even, and obey
$$
m_{\le 1}(\xi) = 1 \text{ for $|\xi|\leq 1$} \qtq{and} m_{\le 1}(\xi) = 0 \text{ for $|\xi|\geq 2$.}
$$
We define Littlewood--Paley projections onto low frequencies according to
\begin{align}\label{E:LP defn}
\widehat{ P_{\leq N} f }(\xi) := m_{\le 1}(\xi/N) \hat f(\xi)
\end{align}
and then projections onto individual frequency bands via
\begin{align}\label{E:LP defn'}
f_N := P_N f := [ P_{\leq N} - P_{\leq N/2} ] f .
\end{align}
\section{Well-posedness theory for several NLS equations}
In the course of the proof of Theorem~\ref{thm:nsqz}, we will need to consider the cubic NLS both with and without frequency truncation. To consider both cases simultaneously, we consider the following general form:
\begin{align}\label{eq:1}
iu_t+\Delta u= \mathcal P F(\mathcal P u),
\end{align}
where $\mathcal P$ is either the identity or the projection to low frequencies $P_{\leq N}$ for some $N\in 2^{\mathbb{Z}}$. For the results of this section, it only matters that $\mathcal P$ is $L^p$-bounded for all $1\leq p\leq\infty$.
\begin{defn}[Solution]\label{D:solution}
Given an interval $I\subseteq{\mathbb{R}}$ with $0\in I$, we say that $u:I\times{\mathbb{R}}\to{\mathbb{C}}$ is a \emph{solution} to \eqref{eq:1} with initial data $u_0\in L^2({\mathbb{R}})$ at time $t=0$ if $u$ lies in the classes $C_t L^2_x(K\times{\mathbb{R}})$ and $L^6_{t,x}(K\times{\mathbb{R}})$ for any compact $K\subset I$ and obeys the Duhamel formula
\begin{align*}
u(t) = e^{it\Delta} u_0 - i \int_{0}^{t} e^{i(t-s)\Delta} \mathcal P F\bigl(\mathcal P u(s) \bigr) \, ds
\end{align*}
for all $t \in I$.
\end{defn}
Such solutions to \eqref{eq:1} are unique and conserve both mass and energy:
\begin{align*}
\int_{{\mathbb{R}}} |u(t,x)|^2 \,dx \qtq{and}
E(u(t)):=\int_{{\mathbb{R}}}\tfrac 12 |\nabla u(t,x)|^2 \pm\tfrac14 |\mathcal P u(t,x)|^4 \,dx,
\end{align*}
respectively. Indeed, \eqref{eq:1} is the Hamiltonian evolution associated to $E(u)$ through the standard symplectic structure:
$$
\omega:L^2({\mathbb{R}})\times L^2({\mathbb{R}})\to {\mathbb{R}} \qtq{with} \omega(u,v)=\Im \int_{\mathbb{R}} u(x)\bar v(x)\, dx.
$$
The well-posedness theory for \eqref{eq:1} reflects the subcritical nature of this equation with respect to the mass. We record this classical result without a proof.
\begin{lem}[Well-posedness of \eqref{eq:1}]\label{lm:loc}
Let $u_0\in L^2({\mathbb{R}})$ with $\|u_0\|_2\le M$. There exists a unique global solution $u:{\mathbb{R}}\times{\mathbb{R}}\to {\mathbb{C}}$ to \eqref{eq:1} with initial data $u(0)=u_0$. Moreover, for any $T>0$,
\begin{align*}
\|u\|_{S([0,T]\times{\mathbb{R}})}\lesssim_T M.
\end{align*}
If additionally $u_0\in H^1({\mathbb{R}})$, then
\begin{align*}
\|\partial_x u\|_{S([0,T]\times{\mathbb{R}})} \lesssim_{M,T} \| \partial_x u_0 \|_{L^2({\mathbb{R}})}.
\end{align*}
\end{lem}
\section{Local compactness and well-posedness in the weak topology}\label{S:4}
The arguments presented in this section show that families of solutions to \eqref{nls} that are uniformly bounded in mass are precompact in $L^p_{t,x}$ for $p<6$ on bounded sets in space-time. Furthermore, we have well-posedness in the weak topology on $L^2$; specifically, if we take a sequence of solutions $u_n$ to \eqref{nls} for which the initial data $u_n(0)$ converges \emph{weakly} in $L^2({\mathbb{R}})$, then $u_n(t)$ converges weakly at all times $t\in{\mathbb{R}}$. Moreover, the pointwise in time weak limit is in fact a solution to \eqref{nls}.
Justification of the assertions made in the first paragraph can be found within the proof of \cite[Theorem~1.1]{Nakanishi:SIAM}; however, this is not sufficient for our proof of symplectic non-squeezing. Rather, we have to prove a slightly stronger assertion that allows the functions $u_n$ to obey different equations for different $n$; specifically,
\begin{align}\label{eqpn}
i\partial_t u_n+\Delta u_n=P_{\le N_n} F(P_{\le N_n}u_n)
\end{align}
where $N_n\to \infty$; see Theorem~\ref{T:weak wp} below. For the sake of completeness, we give an unabridged proof of this theorem, despite substantial overlap with the arguments presented in~\cite{Nakanishi:SIAM}.
What follows adapts easily to the setting of any mass-subcritical NLS. It does \emph{not} apply at criticality: compactness fails in any scale-invariant space-time norm (even on compact sets). Nevertheless, well-posedness in the weak topology on $L^2$ does hold for the mass-critical NLS; see \cite{KVZ:nsqz2d}. Well-posedness in the weak topology has also been demonstrated for some energy-critical models; see \cite{BG99, KMV:cq}. In all three critical results \cite{BG99, KMV:cq, KVZ:nsqz2d}, the key to overcoming the lack of compactness is to employ concentration compactness principles. We warn the reader, however, that the arguments presented in \cite{KVZ:nsqz2d} are far more complicated than would be needed to merely verify well-posedness in the weak topology for the mass-critical NLS. In that paper, we show non-squeezing and so (as here) we were compelled to consider frequency-truncated models analogous to \eqref{eqpn}. Due to criticality, this change has a profound effect on the analysis; see \cite{KVZ:nsqz2d} for further discussion.
Simple necessary and sufficient conditions for a set $\mathcal F\subseteq L^p({\mathbb{R}}^n)$ to be precompact were given in \cite{Riesz}, perfecting earlier work of Kolmogorov and Tamarkin. In addition to boundedness (in $L^p$ norm), the conditions are tightness and equicontinuity:
$$
\lim_{R\to\infty} \sup_{f\in\mathcal F} \|f\|_{L^p(\{|x|>R\})} =0
\qtq{and} \lim_{h\to 0} \sup_{f\in\mathcal F} \|f(\cdot+h)-f(\cdot)\|_{L^p({\mathbb{R}}^n)} =0,
$$
respectively. The basic workhorse for equicontinuity in our setting is the following lemma:
\begin{lem}\label{L:workhorse}
Fix $T>0$ and suppose $u:[-2T,2T]\times{\mathbb{R}}\to{\mathbb{C}}$ obeys
\begin{align}
\| u \|_{\tilde S} := \| u \|_{L^\infty_t L^2_x([-2T,2T]\times {\mathbb{R}})} + \| (i\partial_t+\Delta) u \|_{L^2_{t,x}([-2T,2T]\times{\mathbb{R}})} < \infty.
\end{align}
Then
\begin{align}\label{22Holder}
\| u(t+\tau,x+y) - u(t,x) \|_{L^2_{t,x}([-T,T]\times[-R,R])} \lesssim_{R,T} \bigl\{ |\tau|^{1/5} + |y|^{1/3} \bigr\} \| u \|_{\tilde S},
\end{align}
uniformly for $|\tau|\leq T$ and $y\in {\mathbb{R}}$.
\end{lem}
\begin{proof}
It is sufficient to prove the result when $y=0$ and when $\tau=0$; the full result then follows by the triangle inequality. In both cases, we use \eqref{LocalSmoothing} to estimate the high-frequency portion as follows:
\begin{align}\label{22uhi}
\| u_{>N}(t+\tau,x+y) - u_{>N}(t,x) \|_{L^2_{t,x}([-T,T]\times[-R,R])} \lesssim R^{\frac12} N^{-\frac12}(1+T^{\frac12}) \| u \|_{\tilde S}.
\end{align}
Next we turn to the low-frequency contribution. Consider first the case $\tau=0$. By Bernstein's inequality,
$$
\| u_{\leq N}(t,x+y) - u_{\leq N}(t,x) \|_{L^2_x({\mathbb{R}})} \lesssim N|y| \, \| u(t) \|_{L^2_x({\mathbb{R}})} \lesssim N|y| \, \| u \|_{\tilde S}.
$$
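This is a standard Bernstein-type bound for differences; it follows from Plancherel together with the elementary multiplier estimate on the support of $P_{\leq N}$:
$$
\bigl| e^{i\xi y} - 1 \bigr| \leq |\xi|\,|y| \leq 2N|y| \qtq{for} |\xi|\leq 2N.
$$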
Therefore, setting $N=|y|^{-2/3}$, integrating in time, and using \eqref{22uhi}, we obtain
\begin{align}\label{22ulo2}
\| u(t,x+y) - u(t,x) \|_{L^2_{t,x}([-T,T]\times[-R,R])} \lesssim (R^{\frac12}+(RT)^{\frac12}+T^{\frac12} ) |y|^{\frac13} \| u \|_{\tilde S}.
\end{align}
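The exponent $1/3$ arises from optimizing the choice of $N$: balancing the high-frequency bound \eqref{22uhi} against the low-frequency bound amounts to solving
$$
N^{-\frac12} \sim N |y|, \qtq{that is,} N \sim |y|^{-\frac23},
$$
at which point both contributions are of size $|y|^{\frac13}$, up to constants depending on $R$ and $T$.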
Consider now the case $y=0$. Here it is convenient to use the Duhamel representation of $u$:
$$
u(t) = e^{it\Delta} u(0) - i \int_0^t e^{i(t-s)\Delta} (i\partial_s+\Delta) u(s)\,ds.
$$
To exploit this identity, we first observe that
$$
\bigl\| P_{\leq N} \bigl[ e^{i\tau\Delta} - 1\bigr] e^{it\Delta} u(0) \bigr\|_{L^2_{x}({\mathbb{R}})} \lesssim N^2 |\tau| \| u(0) \|_{L^2_{x}({\mathbb{R}})}.
$$
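This follows from Plancherel together with the pointwise symbol bound on the support of $P_{\leq N}$:
$$
\bigl| e^{-i\tau\xi^2} - 1 \bigr| \leq |\tau|\,\xi^2 \leq 4N^2|\tau| \qtq{for} |\xi|\leq 2N.
$$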
Then, by the Duhamel representation and the Strichartz inequality, we obtain
\begin{align*}
\bigl\| u_{\leq N}(t+\tau) - u_{\leq N}(t) \bigr\|_{L^2_x({\mathbb{R}})}
&\lesssim N^2 |\tau| \| u(0) \|_{L^2_{x}({\mathbb{R}})} + \| (i\partial_t+\Delta)u\|_{L^1_t L^2_x([t,t+\tau]\times{\mathbb{R}})} \\
&\lesssim \{ N^2 |\tau| + |\tau|^{\frac12} \} \| u \|_{\tilde S}.
\end{align*}
Combining this with \eqref{22uhi} and choosing $N=|\tau|^{-2/5}$ yields
\begin{equation}\label{22ulo1}
\bigl\| u(t+\tau) - u(t) \bigr\|_{L^2_{t,x}([-T,T]\times[-R,R])}
\lesssim (R^{\frac12}+(RT)^{\frac12}+ T^{\frac12}) (|\tau|^{\frac15} +|\tau|^{\frac12}) \| u \|_{\tilde S}.
\end{equation}
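Here the exponent $1/5$ comes from optimizing over $N$: balancing the high-frequency bound \eqref{22uhi} against the low-frequency bound gives
$$
N^{-\frac12} \sim N^2 |\tau|, \qtq{that is,} N \sim |\tau|^{-\frac25},
$$
so that both contributions are of size $|\tau|^{\frac15}$.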
This completes the proof of the lemma.
\end{proof}
\begin{prop}\label{P:compact}
Let $u_n:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ be a sequence of solutions to \eqref{eqpn} corresponding to some sequence of $N_n>0$. We assume that
\begin{equation}\label{apmassbound}
M:= \sup_n \| u_n(0) \|_{L^2_x({\mathbb{R}})} < \infty.
\end{equation}
Then there exist $v:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ and a subsequence in $n$ so that
\begin{align}\label{E:P:compact}
\lim_{n\to\infty} \| u_n - v \|_{L^p_{t,x}([-T,T]\times[-R,R])} = 0,
\end{align}
for all $R>0$, all $T>0$, and all $1\leq p <6$.
\end{prop}
\begin{proof}
A simple diagonal argument shows that it suffices to consider a single fixed pair $R>0$ and $T>0$. In what follows, implicit constants will be permitted to depend on $R$, $T$, and $M$.
In view of Lemma~\ref{lm:loc} and \eqref{apmassbound}, we have
\begin{equation}\label{apbound}
\sup_n \| u_n \|_{S([-4T,4T]\times{\mathbb{R}})} \lesssim 1.
\end{equation}
Consequently, if we define $\chi:{\mathbb{R}}^2\to[0,1]$ as a smooth cutoff satisfying
$$
\chi(t,x)= \begin{cases} 1 &:\text{ if } |t|\leq T \text{ and } |x|\leq R, \\
0 &:\text{ if } |t| > 2T \text{ or } |x|> 2R, \end{cases}
$$
then the sequence $\chi u_n$ is uniformly bounded in $L^2_{t,x}({\mathbb{R}}\times{\mathbb{R}})$. Moreover, by Lemma~\ref{L:workhorse} and \eqref{apbound}, it is also equicontinuous. As it is compactly supported, it is also tight. Thus, $\{\chi u_n\}$ is precompact in $L^2_{t,x}({\mathbb{R}}\times{\mathbb{R}})$ and so there is a subsequence such that \eqref{E:P:compact} holds with $p=2$. That it holds for the other values $1\leq p <6$ then follows from H\"older and \eqref{apbound}, which implies that $\{\chi u_n\}$ is uniformly bounded in $L^6_{t,x}([-T,T]\times{\mathbb{R}})$.
\end{proof}
\begin{thm}\label{T:weak wp}
Let $u_n:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ be a sequence of solutions to \eqref{eqpn} corresponding to a sequence $N_n\to\infty$. We assume that
\begin{equation}
u_n(0) \rightharpoonup u_{\infty,0} \quad\text{weakly in $L^2({\mathbb{R}})$}
\end{equation}
and define $u_\infty$ to be the solution to \eqref{nls} with $u_\infty(0)=u_{\infty,0}$. Then
\begin{align}\label{E:P:weak wp}
u_n(t) \rightharpoonup u_\infty(t) \quad\text{weakly in $L^2({\mathbb{R}})$}
\end{align}
for all $t\in{\mathbb{R}}$.
\end{thm}
\begin{proof}
It suffices to verify \eqref{E:P:weak wp} along a subsequence; moreover, we may restrict our attention to a fixed (but arbitrary) time window $[-T,T]$. Except where indicated otherwise, all space-time norms in this proof will be taken over the slab $[-T,T]\times{\mathbb{R}}$.
Weak convergence of the initial data guarantees that this sequence is bounded in $L^2({\mathbb{R}})$ and so by Lemma~\ref{lm:loc} we have
\begin{equation}\label{apb}
\sup_n \ \| u_n \|_{L^\infty_t L^2_x} + \| u_n \|_{L^6_{t,x}} \lesssim 1.
\end{equation}
The implicit constant here depends on $T$ and the uniform bound on $u_n(0)$ in $L^2({\mathbb{R}})$. Such dependence will be tacitly permitted in all that follows.
Passing to a subsequence, we may assume that \eqref{E:P:compact} holds for some $v$, all $R>0$, and our chosen $T$. This follows from Proposition~\ref{P:compact}. Combining \eqref{apb} and \eqref{E:P:compact} yields
\begin{equation}\label{apb'}
\| v \|_{L^\infty_t L^2_x} + \| v \|_{L^6_{t,x}} \lesssim 1.
\end{equation}
Moreover, as $L^2({\mathbb{R}})$ admits a countable dense collection of $C^\infty_c$ functions, \eqref{E:P:compact} guarantees that
\begin{equation}\label{tinOmega}
u_n(t) \rightharpoonup v(t) \quad\text{weakly in $L^2({\mathbb{R}})$ for all $t\in\Omega$},
\end{equation}
where $\Omega\subseteq[-T,T]$ is of full measure.
We now wish to take weak limits on both sides of the Duhamel formula
\begin{equation*}
u_n(t) = e^{it\Delta} u_n(0) - i \int_0^t e^{i(t-s)\Delta} P_{\leq N_n} F\bigl(P_{\leq N_n} u_n(s)\bigr)\,ds.
\end{equation*}
Clearly, $e^{it\Delta} u_n(0)\rightharpoonup e^{it\Delta} u_{\infty,0}$ weakly in $L^2({\mathbb{R}})$. We also claim that
\begin{equation}\label{v du hamel main}
\wlim_{n\to\infty} \int_0^t e^{i(t-s)\Delta} P_{\leq N_n} F\bigl(P_{\leq N_n} u_n(s)\bigr)\,ds = \int_0^t e^{i(t-s)\Delta} F\bigl(v(s)\bigr)\,ds,
\end{equation}
where the weak limit is with respect to the $L^2({\mathbb{R}})$ topology. This assertion will be justified later. Taking this for granted for now, we deduce that
\begin{equation}\label{v du hamel}
\wlim_{n\to\infty} u_n(t) = e^{it\Delta} u_\infty(0) - i \int_0^t e^{i(t-s)\Delta} F(v(s))\,ds.
\end{equation}
Moreover, we observe that RHS\eqref{v du hamel} is continuous in $t$, with values in $L^2({\mathbb{R}})$, and that LHS\eqref{v du hamel} agrees with $v(t)$ for almost every $t$. Correspondingly, after altering $v$ on a space-time set of measure zero, we obtain $v\in C([-T,T];L^2({\mathbb{R}}))$ that still obeys \eqref{apb'} but now also obeys
\begin{equation}\label{v du hamel'}
v(t) = e^{it\Delta} u_\infty(0) - i \int_0^t e^{i(t-s)\Delta} F(v(s))\,ds \qtq{and} \wlim_{n\to\infty} u_n(t) = v(t) ,
\end{equation}
for \emph{all} $t\in[-T,T]$. By Definition~\ref{D:solution} and Lemma~\ref{lm:loc}, we deduce that $v=u_\infty$ on $[-T,T]$ and that \eqref{E:P:weak wp} holds for $t\in [-T,T]$.
To complete the proof of Theorem~\ref{T:weak wp}, it remains only to justify \eqref{v du hamel main}. To this end, let us fix $\psi\in L^2({\mathbb{R}})$. We will divide our task into three parts.
Part 1: By H\"older's inequality, \eqref{apb}, and the dominated convergence theorem,
\begin{align*}
\biggl| \biggl\langle \psi,&\ \int_0^t e^{i(t-s)\Delta} \bigl[ F\bigl(P_{\leq N_n} u_n(s)\bigr) - P_{\leq N_n} F\bigl(P_{\leq N_n} u_n(s)\bigr) \bigr]\,ds \biggr\rangle \biggr| \\
&\leq \biggl| \int_0^t \ \Bigl\langle e^{-i(t-s)\Delta} P_{>N_n}\psi,\ F\bigl(P_{\leq N_n} u_n(s)\bigr) \Bigr\rangle \,ds\, \biggr| \\
&\leq \sqrt{T} \| P_{>N_n}\psi \|_{L^2_x} \| u_n \|_{L^6_{t,x}}^3 = o(1) \quad\text{as $n\to\infty$.}
\end{align*}
Part 2: Let $\chi_R$ denote the indicator function of $[-R,R]$ and let $\chi_R^c$ denote the indicator of the complementary set. Arguing much as in Part 1, we have
\begin{align*}
\sup_n \biggl| \biggl\langle \psi,&
\ \int_0^t e^{i(t-s)\Delta} \chi_R^c \bigl[ F\bigl(P_{\leq N_n} u_n(s)\bigr) - F\bigl(v(s)\bigr) \bigr]\,ds \biggr\rangle \biggr| \\
&\leq T^{1/2} \| \chi_R^c e^{it \Delta} \psi \|_{L_{t,x}^6} \Bigl\{ \| v \|_{L^6_{t,x}}^2\|v\|_{L_t^\infty L_x^2} + \sup_n \| u_n \|_{L^6_{t,x}}^2\|u_n\|_{L_t^\infty L_x^2} \Bigr\} \\
&= o(1) \quad\text{as $R\to\infty$.}
\end{align*}
Part 3: An easy application of Schur's test shows that
$$
\| \chi_R P_{\leq N_n} \chi_{2R}^c f \|_{L^p({\mathbb{R}})} \lesssim_\beta (N_nR)^{-\beta} \| f \|_{L^p({\mathbb{R}})}
$$
for any $1\leq p\leq \infty$ and any $\beta>0$. Correspondingly,
\begin{align*}
\bigl\| \chi_R P_{\leq N_n} (u_n - v) \bigr\|_{L^6_{t,x}} \lesssim \bigl\| \chi_{2R} (u_n - v) \bigr\|_{L^6_{t,x}}
+ (N_nR)^{-\beta} \bigl\{ \| u_n\|_{L^6_{t,x}} + \| v \|_{L^6_{t,x}} \bigr\}.
\end{align*}
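The gain $(N_nR)^{-\beta}$ in the Schur-test bound reflects the rapid decay of the convolution kernel of $P_{\leq N_n}$, which (up to Fourier normalization) is $N_n \check m_{\le 1}(N_n(x-y))$ with $\check m_{\le 1}$ Schwartz: for $|x|\leq R$ and $|y|\geq 2R$ we have $|x-y|\geq R$, and hence
$$
\sup_{|x|\leq R} \int_{|y|\geq 2R} N_n \bigl| \check m_{\le 1}\bigl(N_n(x-y)\bigr) \bigr| \,dy \lesssim_\beta (N_nR)^{-\beta},
$$
with the symmetric bound when the roles of $x$ and $y$ are exchanged.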
Using this estimate together with \eqref{apb}, \eqref{apb'}, \eqref{E:P:compact}, and the fact that $N_n\to\infty$, we deduce that
\begin{align*}
\bigl\| \chi_R \bigl[ (P_{\leq N_n} u_n) - v \bigr] \bigr\|_{L^6_{t,x}}
&\lesssim \bigl\| \chi_R P_{\leq N_n} (u_n - v) \bigr\|_{L^6_{t,x}} + \bigl\| P_{> N_n} v \bigr\|_{L^6_{t,x}} = o(1)
\end{align*}
as $n\to\infty$. From this, \eqref{apb}, and \eqref{apb'}, we then easily deduce that
\begin{align*}
&\limsup_{n\to\infty}\, \biggl| \biggl\langle \psi,
\ \int_0^t e^{i(t-s)\Delta} \chi_R \bigl[ F\bigl(P_{\leq N_n} u_n(s)\bigr) - F\bigl(v(s)\bigr) \bigr]\,ds \biggr\rangle \biggr| \\
&\ \ \ \lesssim T^{1/2} \| \psi \|_{2}
\limsup_{n\to\infty} \, \bigl\| \chi_R\bigl[ (P_{\leq N_n} u_n) - v \bigr] \bigr\|_{L^6_{t,x}} \Bigl\{ \| v \|_{L^6_{t,x}}^2 + \| u_n \|_{L^6_{t,x}}^2 \Bigr\} = 0.
\end{align*}
Combining all three parts proves \eqref{v du hamel main} and so completes the proof of Theorem~\ref{T:weak wp}.
\end{proof}
\section{Finite-dimensional approximation}\label{S:5}
As mentioned in the introduction, to prove the non-squeezing result for the cubic NLS on the line we will prove that solutions to this equation are well approximated by solutions to a finite-dimensional Hamiltonian system. As an intermediate step, in this section, we will prove that solutions to the frequency-localized cubic Schr\"odinger equation on the line are well approximated by solutions to the same equation on ever larger tori; see Theorem~\ref{thm:app} below.
To do this, we will need a perturbation theory for the frequency-localized cubic NLS on the torus, which in turn relies on suitable Strichartz estimates for the linear propagator. In Lemma~\ref{lm:stri} below we exploit the observation that with a suitable inter-relation between the frequency cut-off and the torus size, one may obtain the full range of mass-critical Strichartz estimates in our setting. We would like to note that other approaches to Strichartz estimates on the torus \cite{Bourg:1993,BourgainDemeter} are not well suited to the scenario considered here --- they give bounds on unnecessarily long time intervals, so long, in fact, that the bounds diverge as the circumference of the circle diverges.
\subsection{Strichartz estimates and perturbation theory on the torus}
Arguing as in \cite[\S7]{KVZ:nsqz2d} one readily obtains frequency-localized finite-time $L^1\to L^\infty$ dispersive estimates on the torus ${\mathbb{T}}_L:={\mathbb{R}}/L{\mathbb{Z}}$, provided the circumference $L$ is sufficiently large. This then yields Strichartz estimates in the usual fashion:
\begin{lem}[Torus Strichartz estimates, \cite{KVZ:nsqz2d}]\label{lm:stri}
Given $T>0$ and $1\leq N\in 2^{\mathbb{Z}}$, there exists $L_0=L_0(T,N)\geq 1$ sufficiently large so that for $L\ge L_0$,
\begin{gather*}
\|P_{\le N}^L u\|_{S([-T,T]\times{\mathbb{T}}_L)}\lesssim \|u(0)\|_{L^2({\mathbb{T}}_L)} + \|(i\partial_t+\Delta)u\|_{N([-T,T]\times{\mathbb{T}}_L)}.
\end{gather*}
Here, $P_{\le N}^L$ denotes the Fourier multiplier on ${\mathbb{T}}_L$ with symbol $m_{\leq 1}(\cdot / N)$.
\end{lem}
Using these Strichartz estimates one then obtains (in the usual manner) a stability theory for the following frequency-localized NLS on the torus ${\mathbb{T}}_L$:
\begin{align}\label{stabt:1}
\begin{cases}
(i\partial_t+\Delta )u =P_{\le N}^L F(P_{\le N}^L u),\\
u(0)=P_{\le N}^L u_0.
\end{cases}
\end{align}
\begin{lem}[Perturbation theory for \eqref{stabt:1}]\label{lm:stab} Given $T>0$ and $1\leq N\in 2^{\mathbb{Z}}$, let $L_0$ be as in Lemma~\ref{lm:stri}. Fix $L\geq L_0$ and let $\tilde u$ be an approximate solution to \eqref{stabt:1} on $[-T,T]$ in the sense that
\begin{align*}
\begin{cases}
(i\partial_t+\Delta) \tilde u=P_{\le N}^L F(P_{\le N}^L \tilde u) +e, \\
\tilde u(0)=P_{\le N}^L \tilde u_0
\end{cases}
\end{align*}
for some function $e$ and $\tilde u_0\in L^2({\mathbb{T}}_L)$. Assume
\begin{align*}
\|\tilde u\|_{L_t^\infty L_x^2([-T,T]\times{\mathbb{T}}_L)}\le M
\end{align*}
and the smallness conditions
\begin{align*}
\|u_0-\tilde u_0\|_{L^2({\mathbb{T}}_L)} \le {\varepsilon} \qtq{and} \|e\|_{N([-T,T]\times{\mathbb{T}}_L)}\le {\varepsilon}.
\end{align*}
Then if ${\varepsilon}\le {\varepsilon}_0(M,T)$, there exists a unique solution $u$ to \eqref{stabt:1} such that
\begin{align*}
\|u-\tilde u\|_{S([-T,T]\times{\mathbb{T}}_L)}\le C(M,T) {\varepsilon}.
\end{align*}
\end{lem}
\subsection{Approximation by finite-dimensional PDE}
Fix $M>0$, $T>0$, and $\eta_n\to 0$. Let $N_n\to \infty$ be given and let $L_n=L_n(M,T, N_n, \eta_n)$ be large constants to be chosen later; in particular, we will have $L_n\to \infty$. Let $\mathbb{T}_n:={\mathbb{R}}/L_n{\mathbb{Z}}$ and let
\begin{align}\label{1241}
u_{0, n}\in \mathcal H_n:=\bigl\{f\in L^2(\mathbb{T}_n):\,P_{>2N_n}^{L_n} f=0 \bigr\} \qtq{with} \|u_{0,n}\|_{L^2(\mathbb{T}_n)}\le M.
\end{align}
Consider the following finite-dimensional Hamiltonian systems:
\begin{align}\label{per1}
\begin{cases}
(i\partial_t+\Delta)u_n = P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n} u_n), \qquad (t,x)\in{\mathbb{R}}\times\mathbb{T}_n,\\
u_n(0)=u_{0,n}.
\end{cases}
\end{align}
We will show that for $n$ sufficiently large, solutions to \eqref{per1} can be well approximated by solutions to the corresponding problem posed on ${\mathbb{R}}$ on the fixed time interval $[-T,T]$. Note that as a finite-dimensional system with a coercive Hamiltonian, \eqref{per1} automatically has global solutions.
To continue, we subdivide the interval $[\frac{L_n}4,\frac{L_n}2]$ into at least $16M^2/\eta_n$ many subintervals of length $\frac{20}{\eta_n} N_n T$. This can be achieved so long as
\begin{align}\label{cf:1}
L_n\gg \tfrac{M^2}{\eta_n} \cdot \tfrac 1{\eta_n} N_nT.
\end{align}
By the pigeonhole principle, there exists one such subinterval, which we denote by
\begin{align*}
I_n:=[c_n-\tfrac {10}{\eta_n} N_n T, c_n+\tfrac {10}{\eta_n}N_nT],
\end{align*}
such that
\begin{align*}
\|u_{0,n}\chi_{I_n}\|_2\le \tfrac 14 \eta_n^{1/2}.
\end{align*}
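The constant $\tfrac14\eta_n^{1/2}$ arises from the following pigeonhole computation (a sketch added here for convenience; $I^{(k)}$ is our notation for the subintervals of the subdivision). As the $I^{(k)}$ are disjoint,
\begin{align*}
\sum_{k} \bigl\|u_{0,n}\chi_{I^{(k)}}\bigr\|_{2}^{2} \le \|u_{0,n}\|_{2}^{2}\le M^2,
\end{align*}
so at least one of the (at least $16M^2/\eta_n$ many) subintervals satisfies $\|u_{0,n}\chi_{I^{(k)}}\|_2^2 \le M^2\cdot\tfrac{\eta_n}{16M^2}=\tfrac{\eta_n}{16}$.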
For $0\leq j\leq 4$, let $\chi_{n}^j:{\mathbb{R}}\to [0,1]$ be smooth cutoff functions adapted to $I_n$ such that
\begin{align*}
\chi_{n}^j(x)=\begin{cases}1, & x\in [c_n-L_n+\tfrac {10-2j}{\eta_n}N_nT, c_n-\tfrac {10-2j}{\eta_n}N_nT],\\
0, & x\in (-\infty, c_n-L_n+\tfrac {10-2j-1}{\eta_n}N_nT)\cup(c_n-\tfrac {10-2j-1}{\eta_n}N_nT,\infty).
\end{cases}
\end{align*}
The following properties of $\chi_n^j$ follow directly from the construction above:
\begin{align}\label{cf:1.0}
\left\{\quad\begin{gathered}
\chi_n^j\chi_n^i =\chi_n^j \quad\text{for all $0\leq j<i\leq 4$,} \\
\|\partial_x^k \chi_n^j \|_{L^\infty} = o\bigl( (N_nT)^{-k} \bigr) \quad\text{for each $k\geq 1$}, \\
\|(1-\chi_n^j)u_{0,n}\|_{L^2({\mathbb{T}}_n)} = o(1) \quad\text{for all $0\le j\le 4$}.
\end{gathered}\right.
\end{align}
Here and subsequently, $o(\cdot)$ refers to the limit as $n\to\infty$. To handle the frequency truncations appearing in \eqref{per1} and \eqref{lm:1}, we need to control interactions between these cutoffs and Littlewood--Paley operators. This is the role of the next lemma.
\begin{lem}[Littlewood--Paley estimates]\label{L:LP est} For $L_n$ sufficiently large and all $0\leq j\leq 4$, we have the following norm bounds as operators on $L^2$\textup{:}
\begin{gather}
\|\chi_n^j (P_{\le N_n}- P_{\le N_n}^{L_n}) \chi_n^j \|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} =o(1), \label{E:P2P}\\
\|[\chi_n^j, P_{\le N_n}^{L_n} ]\|_{L^2(\mathbb{T}_n)\to L^2(\mathbb{T}_n)} + \|[\chi_n^j, P_{\le N_n}]\|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} =o(1), \label{E:Pcom}
\end{gather}
as $n\to\infty$. Moreover, if $i>j$ then
\begin{gather}
\|\chi_n^j P_{\le N_n}^{L_n} (1-\chi_n^i) \|_{L^2(\mathbb{T}_n)\to L^2(\mathbb{T}_n)} + \|\chi_n^j P_{\le N_n} (1-\chi_n^i) \|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} =o(1). \label{E:Pmis}
\end{gather}
\end{lem}
\begin{proof}
The proof of \eqref{E:P2P} is somewhat involved; one must estimate the difference between a Fourier sum and integral and then apply Schur's test. See \cite[\S8]{KVZ:nsqz2d} for details.
The estimate \eqref{E:Pcom} follows readily from the rapid decay of the kernels associated to Littlewood--Paley projections on both the line and the circle; specifically, by Schur's test,
\begin{gather*}
\text{LHS\eqref{E:Pcom}} \lesssim N_n^{-1} \|\nabla \chi_n^j\|_{L^\infty_x} =o(1).
\end{gather*}
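In more detail (a sketch of this Schur-test computation; $\check m_{N_n}$ is our notation for the convolution kernel of $P_{\le N_n}$ on the line): the commutator has integral kernel $K(x,y)=\bigl(\chi_n^j(x)-\chi_n^j(y)\bigr)\check m_{N_n}(x-y)$, and since $|\chi_n^j(x)-\chi_n^j(y)|\le \|\nabla\chi_n^j\|_{L^\infty}|x-y|$ while $\check m_{N_n}(z)=N_n\check m(N_n z)$ with $\check m$ Schwartz,
\begin{align*}
\sup_x \int_{{\mathbb{R}}} |K(x,y)|\,dy \lesssim \|\nabla\chi_n^j\|_{L^\infty} \int_{{\mathbb{R}}} |z|\, N_n|\check m(N_n z)|\,dz \lesssim N_n^{-1}\|\nabla\chi_n^j\|_{L^\infty},
\end{align*}
with the same bound for $\sup_y\int_{{\mathbb{R}}}|K(x,y)|\,dx$; the torus case is identical.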
We may then deduce \eqref{E:Pmis} from this. Indeed, as $\chi_n^j\chi_n^i=\chi_n^j$ we have
\begin{gather*}
\chi_n^j P_{\le N_n} (1-\chi_n^i) = [\chi_n^j, P_{\le N_n}] (1-\chi_n^i),
\end{gather*}
and analogously on the torus.
\end{proof}
Now let $\tilde u_n$ denote the solution to
\begin{align}\label{lm:1}
\begin{cases}
(i\partial_t+\Delta)\tilde u_n=P_{\le N_n} F(P_{\le N_n}\tilde u_n),\qquad (t,x)\in{\mathbb{R}}\times{\mathbb{R}},\\
\tilde u_n(0,x)=\chi_n^0(x) u_{0,n}(x+L_n{\mathbb{Z}}),
\end{cases}
\end{align}
where $u_{0,n}\in L^2(\mathbb{T}_n)$ is as in \eqref{1241}. It follows from Lemma~\ref{lm:loc} that these solutions are global and, moreover, that they obey
\begin{align}\label{c1}
\|\partial_x^k \tilde u_n \|_{S([-T,T]\times{\mathbb{R}})}\lesssim_T M N_n^k
\end{align}
uniformly in $n$ and $k\in\{0,1\}$. By providing control on the derivative of $\tilde u_n$, this estimate also controls the transport of mass:
\begin{lem}[Mass localization for $\tilde u_n$]\label{lm:sm}
Let $\tilde u_n$ be the solution to \eqref{lm:1} as above. Then for every $0\leq j\leq 4$ we have
\begin{align*}
\|(1-\chi_n^j)\tilde u_n\|_{L_t^\infty L_x^2([-T,T]\times{\mathbb{R}})} =o(1) \quad\text{as $n\to\infty$}.
\end{align*}
\end{lem}
\begin{proof}
Direct computation (cf. \cite[Lemma~8.4]{KVZ:nsqz2d}) shows
\begin{align*}
\frac{d\ }{dt} \int_{{\mathbb{R}}}|1-\chi_n^j(x)|^2 |\tilde u_n(t,x)|^2 \,dx&= - 4\Im\int_{{\mathbb{R}}}(1-\chi_n^j)(\nabla\chi_n^j) \overline{\tilde u_n} \nabla\tilde u_n \,dx \\
&\quad+2\Im\int_{{\mathbb{R}}}F(P_{\le N_n} \tilde u_n)[P_{\le N_n},(1-\chi_n^j)^2]\overline{\tilde u_n} \,dx.
\end{align*}
From this, the result can then be deduced easily using \eqref{cf:1.0}, \eqref{c1}, and \eqref{E:Pcom}.
\end{proof}
With these preliminaries complete, we now turn our attention to the main goal of this section, namely, to prove the following result:
\begin{thm}[Approximation]\label{thm:app} Fix $M>0$ and $T>0$. Let $N_n\to \infty$ and let $L_n$ be sufficiently large depending on $M, T, N_n$. Assume $u_{0,n}\in \mathcal H_n$ with $\|u_{0,n}\|_{L^2(\mathbb{T}_n)} \le M$. Let $u_n$ and $\tilde u_n$ be solutions to \eqref{per1} and \eqref{lm:1}, respectively. Then
\begin{align}\label{pert:2.0}
\lim_{n\to \infty} \|P_{\le 2N_n}^{L_n}(\chi_n^2 \tilde u_n)-u_n\|_{S([-T,T]\times \mathbb{T}_n)}=0.
\end{align}
\end{thm}
\begin{rem}
Note that for $0\leq j\leq 4$ and any $t\in{\mathbb{R}}$, the function $\chi_n^j\tilde u_n(t)$ is supported inside an interval of length $L_n$; consequently, we can view it naturally as a function on the torus ${\mathbb{T}}_n$. Conversely, the functions $\chi_n^ju_n(t)$ can be lifted to functions on ${\mathbb{R}}$ that are supported in an interval of length $L_n$. In what follows, the transition between functions on the line and the torus will be made without further explanation.
\end{rem}
\begin{proof}
The proof of Theorem~\ref{thm:app} is modeled on that of \cite[Theorem~8.9]{KVZ:nsqz2d}. For brevity, we write
\begin{align*}
z_n:=P_{\le 2 N_n}^{L_n}(\chi_n^2 \tilde u_n).
\end{align*}
We will deduce \eqref{pert:2.0} as an application of the stability result Lemma~\ref{lm:stab}. Consequently, it suffices to verify the following:
\begin{gather}
\|z_n\|_{L_t^\infty L_x^2 ([-T,T]\times\mathbb{T}_n)}\lesssim M \mbox{ uniformly in } n,\label{per:1}\\
\lim_{n\to \infty}\|z_n (0)-u_n(0)\|_{L^2(\mathbb{T}_n)}=0, \label{per:2}\\
\lim_{n\to \infty}\|(i\partial_t+\Delta)z_n- P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n} z_n)\|_{N([-T,T]\times\mathbb{T}_n)}=0.\label{per:3}
\end{gather}
Claim \eqref{per:1} is immediate:
\begin{align*}
\|z_n\|_{L_t^\infty L_x^2([-T,T]\times\mathbb{T}_n)}\lesssim \|\tilde u_n\|_{L_t^\infty L_x^2 ([-T,T]\times{\mathbb{R}})}\lesssim \|\tilde u_n(0) \|_{L_x^2({\mathbb{R}})}
\lesssim \| u_{0,n} \|_{L_x^2(\mathbb{T}_n)} \lesssim M.
\end{align*}
To prove \eqref{per:2}, we use $u_{0,n}\in \mathcal H_n$ and \eqref{cf:1.0} as follows:
\begin{align*}
\|z_n(0)-u_n(0)\|_{L^2(\mathbb{T}_n)}
&=\| P_{\le 2N_n}^{L_n} (\chi_n^2 u_{0,n}-u_{0, n})\|_{L^2(\mathbb{T}_n)}\\
&\lesssim \|\chi_n^2 u_{0,n}-u_{0,n}\|_{L^2(\mathbb{T}_n)} =o(1) \qtq{as} n\to \infty.
\end{align*}
It remains to verify \eqref{per:3}. Direct computation gives
\begin{align*}
(i\partial_t+\Delta)z_n - P_{\le N_n}^{L_n} &F( P_{\le N_n}^{L_n} z_n)\\
&= P_{\le 2N_n}^{L_n}\Bigl[2(\partial_x \chi_n^2)(\partial_x \tilde u_n) + (\Delta \chi_n^2) \tilde u_n\Bigr] \\
&\quad+ P_{\le 2N_n}^{L_n} \Bigl[\chi_n^2 P_{\le N_n} F(P_{\le N_n} \tilde u_n)- P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n}(\chi_n^2 \tilde u_n))\Bigr].
\end{align*}
In view of the boundedness of $P_{\le 2N_n}^{L_n}$, it suffices to show that the terms in square brackets converge to zero in $N([-T,T]\times\mathbb{T}_n)$ as $n\to \infty$.
Using \eqref{c1} and \eqref{cf:1.0}, we obtain
\begin{align*}
\|(\partial_x \chi_n^2) (\partial_x\tilde u_n) \|_{L^1_tL^2_x([-T,T]\times\mathbb{T}_n)}
&\le T \|\partial_x \chi_n^2\|_{L^\infty_x({\mathbb{R}})} \|\partial_x\tilde u_n \|_{L_t^\infty L^2_x([-T,T]\times{\mathbb{R}})} = o(1),\\
\|(\Delta \chi_n^2) \tilde u_n\|_{L^1_t L^2_x ([-T,T]\times\mathbb{T}_n)}&\le T\|\partial_x^2 \chi_n^2\|_{L^\infty_x({\mathbb{R}})}\|\tilde u_n\|_{L_t^\infty L_x^2([-T,T]\times{\mathbb{R}})}= o(1),
\end{align*}
as $n\to\infty$.
To estimate the remaining term, we decompose it as follows:
\begin{align}
\chi_n^2 P_{\le N_n} F(P_{\le N_n} \tilde u_n)&- P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n}(\chi_n^2 \tilde u_n))\notag\\
&=\chi_n^2 P_{\le N_n} \bigl[F(P_{\le N_n} \tilde u_n)-F(P_{\le N_n}(\chi_n^2\tilde u_n))\bigr]\label{per:6}\\
&\quad+\chi_n^2 P_{\le N_n}(1-\chi_n^3) F(P_{\le N_n}(\chi_n^2 \tilde u_n))\label{per:7}\\
&\quad+\chi_n^2P_{\le N_n}\chi_n^3 \bigl[F(P_{\le N_n}(\chi_n^2\tilde u_n))- F( P_{\le N_n}^{L_n}(\chi_n^2 \tilde u_n))\bigr]\label{per:8}\\
&\quad+\chi_n^2\bigl(P_{\le N_n}- P_{\le N_n}^{L_n}\bigr)\chi_n^3 F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n))\label{per:9}\\
&\quad+[\chi_n^2, P_{\le N_n}^{L_n}]\chi_n^3 F( P_{\le N_n}^{L_n} (\chi_n^2\tilde u_n))\label{per:10}\\
&\quad+ P_{\le N_n}^{L_n}(\chi_n^2-1) F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n)).\label{per:11}
\end{align}
To estimate \eqref{per:6}, we use H\"older and Lemma~\ref{lm:sm}:
\begin{align*}
\|\eqref{per:6}\|_{N([-T,T]\times\mathbb{T}_n)}
&\lesssim \|F(P_{\le N_n} \tilde u_n)-F(P_{\le N_n}(\chi_n^2\tilde u_n))\|_{L^{6/5}_{t,x}([-T,T]\times{\mathbb{R}})}\\
&\lesssim T^{\frac 12}\|(1-\chi_n^2)\tilde u_n\|_{L^\infty_t L^2_x([-T,T]\times{\mathbb{R}})}\|\tilde u_n\|_{L^6_{t,x}([-T,T]\times{\mathbb{R}})}^2= o(1).
\end{align*}
We next turn to \eqref{per:7}. As
\begin{align*}
\|\eqref{per:7}\|_{L^1_tL^2_x([-T,T]\times\mathbb{T}_n)}
&\lesssim \|\chi_n^2 P_{\le N_n}(1-\chi_n^3)\|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} T^{\frac 12} \|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^3,
\end{align*}
it follows from \eqref{E:Pmis} and \eqref{c1} that this is $o(1)$ as $n\to\infty$.
We now consider \eqref{per:8}. Using \eqref{E:P2P} and \eqref{c1}, we estimate
\begin{align*}
\|&\eqref{per:8}\|_{L_{t,x}^{6/5}([-T,T]\times\mathbb{T}_n)}\\
&\lesssim T^{\frac 12} \|\chi_n^3(P_{\le N_n}- P_{\le N_n}^{L_n})\chi_n^2\tilde u_n \|_{L_t^\infty L_x^2([-T,T]\times\mathbb{T}_n)}\|\chi_n^2 \tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^2\\
&\lesssim T^{\frac 12} \|\chi_n^3(P_{\le N_n}- P_{\le N_n}^{L_n})\chi_n^3 \|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})}\|\chi_n^2 \tilde u_n\|_{L^\infty_t L^2_x ([-T,T]\times{\mathbb{R}})}\|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^2 \\
&= o(1).
\end{align*}
Next we turn to \eqref{per:9}. Using \eqref{cf:1.0}, \eqref{E:P2P}, and \eqref{c1}, we get
\begin{align*}
\|\eqref{per:9}&\|_{L^1_tL^2_x([-T,T]\times\mathbb{T}_n)}\\
&\lesssim \|\chi_n^3(P_{\le N_n}- P_{\le N_n}^{L_n})\chi_n^3\|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})}\|\chi_n^4 F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n))\|_{L^1_tL^2_x([-T,T]\times{\mathbb{R}})}\\
&\lesssim o(1) \cdot T^{\frac 12} \|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^3 = o(1).
\end{align*}
To estimate \eqref{per:10}, we use \eqref{E:Pcom} and \eqref{c1} as follows:
\begin{align*}
\|\eqref{per:10}&\|_{L^1_tL^2_x([-T,T]\times\mathbb{T}_n)}\\
&\lesssim\|[\chi_n^2, P_{\le N_n}^{L_n}]\|_{L^2(\mathbb{T}_n)\to L^2(\mathbb{T}_n)} \| \chi_n^3 F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n))\|_{L_t^1L_x^2([-T,T]\times\mathbb{T}_n)}\\
&\lesssim o(1) \cdot T^{\frac 12} \|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^3 = o(1).
\end{align*}
Finally, to estimate \eqref{per:11}, we write $\tilde u_n = \chi_n^1\tilde u_n + (1-\chi_n^1)\tilde u_n$ and then use \eqref{c1}, \eqref{E:Pmis}, and \eqref{cf:1.0}:
\begin{align*}
\|\eqref{per:11}\|_{N([-T,T]\times\mathbb{T}_n)} &\lesssim \| (\chi_n^2-1) F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n)) \|_{L^{6/5}_{t,x}([-T,T]\times{\mathbb{R}})} \\
&\lesssim T^{\frac12} \|(1-\chi_n^2) P_{\le N_n}^{L_n} \chi_n^1\tilde u_n\|_{L^\infty_t L^2_x([-T,T]\times{\mathbb{R}})} \|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^2 \\
&\quad+ T^{\frac12} \|(1-\chi_n^1)\tilde u_n\|_{L^\infty_tL_x^2([-T,T]\times{\mathbb{R}})}\|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^2\\
&= o(1) \qtq{as} n\to \infty.
\end{align*}
This completes the proof of the theorem.
\end{proof}
\section{Proof of Theorem \ref{thm:nsqz}}
In this section, we complete the proof of Theorem \ref{thm:nsqz}. To this end, fix parameters $z_*\in L^2({\mathbb{R}})$, $l\in L^2({\mathbb{R}})$ with $\|l\|_2=1$, $\alpha \in {\mathbb{C}}$, $0<r<R<\infty$, and $T>0$. Let $M:=\|z_*\|_2 + R$. Let $N_n\to \infty$ and choose $L_n$ diverging to infinity sufficiently fast so that all the results of Section~\ref{S:5} hold.
By density, we can find $\tilde z_*, \tilde l \in C_c^\infty({\mathbb{R}})$ such that
\begin{align}\label{tildeapprox}
\| z_*-\tilde z_*\|_{L^2} \le \delta \qquad\qtq{and}\qquad \|l-\tilde l\|_{L^2} \le \delta M^{-1} \qtq{with} \|\tilde l\|_2=1,
\end{align}
for a small parameter $\delta>0$ chosen so that $\delta < (R-r)/8$. For $n$ sufficiently large, the supports of $\tilde z_*$ and $\tilde l$ are contained inside the interval $[-L_n/2, L_n/2]$, which means that we can view $\tilde z_*$ and $\tilde l$ as functions on ${\mathbb{T}}_n={\mathbb{R}}/L_n{\mathbb{Z}}$. Moreover,
\begin{equation}\label{z*n}
\|\tilde z_* - P_{\leq N_n}^{L_n} \tilde z_*\|_{L^2(\mathbb{T}_n)}\lesssim N_n^{-1} \|\tilde z_*\|_{H^1(\mathbb{T}_n)} = o(1),
\end{equation}
as $n\to\infty$. Similarly,
\begin{equation}\label{l*n}
\| P_{>2N_n}^{L_n} \tilde l\|_{L^2(\mathbb{T}_n)} = o(1) \quad\text{as $n\to\infty$.}
\end{equation}
Consider now the initial-value problem
\begin{equation}\label{329}
\begin{cases}
(i\partial_t+\Delta) u_n= P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n} u_n), \qquad(t,x)\in{\mathbb{R}}\times\mathbb{T}_n,\\
u_n(0)\in \mathcal H_n=\{f\in L^2(\mathbb{T}_n): \, P_{>2 N_n}^{L_n} f=0\}.
\end{cases}
\end{equation}
This is a finite-dimensional Hamiltonian system with respect to the standard Hilbert-space symplectic structure on $\mathcal H_n$; the Hamiltonian is
$$
H(u) = \int_{\mathbb{T}_n} \tfrac12 |\partial_x u|^2 \pm \tfrac14 | P_{\le N_n}^{L_n} u|^4\,dx.
$$
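As a consistency check (a sketch added here; we normalize the symplectic structure so that Hamilton's equation reads $i\partial_t u = 2\,\partial H/\partial\bar u$), the self-adjointness of $P_{\le N_n}^{L_n}$ gives
$$
2\frac{\partial H}{\partial\bar u} = -\Delta u \pm P_{\le N_n}^{L_n}\bigl(| P_{\le N_n}^{L_n} u|^2\, P_{\le N_n}^{L_n} u\bigr),
$$
so Hamilton's equation is precisely \eqref{329} with $F(u)=\pm|u|^2u$.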
Therefore, by Gromov's symplectic non-squeezing theorem, there exist initial data
\begin{align}\label{main:0}
u_{0,n}\in B_{\mathcal H_n}(P_{\leq N_n}^{L_n} \tilde z_*, R-4\delta)
\end{align}
such that the solution to \eqref{329} with initial data $u_n(0)=u_{0,n}$ satisfies
\begin{align}\label{main:1}
|\langle \tilde l, u_n(T)\rangle_{L^2(\mathbb{T}_n)} -\alpha|> r + 4\delta.
\end{align}
Just as in Section~\ref{S:5} we let $\tilde u_n:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ denote the global solution to
\begin{align*}
\begin{cases}
(i\partial_t+\Delta) \tilde u_n=P_{\le N_n} F(P_{\le N_n} \tilde u_n),\\
\tilde u_n(0)=\chi_n^0 u_{0,n},
\end{cases}
\end{align*}
and write $z_n:= P_{\le 2N_n}^{L_n} (\chi_n^2 \tilde u_n)$. By Theorem~\ref{thm:app}, we have the following approximation result:
\begin{align}\label{main:2}
\lim_{n\to \infty}\|z_n-u_n\|_{L^\infty_t L^2_x([-T,T]\times\mathbb{T}_n)}=0.
\end{align}
We are now ready to select initial data that witnesses the non-squeezing for the cubic NLS on the line. By the triangle inequality, \eqref{cf:1.0}, \eqref{main:0}, \eqref{z*n}, and \eqref{tildeapprox},
\begin{align*}
\|\chi_n^0 u_{0,n}-z_*\|_{L^2({\mathbb{R}})}
&\le \|(\chi_n^0-1) u_{0,n}\|_{L^2(\mathbb{T}_n)} + \|u_{0,n}-P_{\leq N_n}^{L_n}\tilde z_*\|_{L^2(\mathbb{T}_n)}\\
&\quad +\|P_{\leq N_n}^{L_n}\tilde z_*-\tilde z_*\|_{L^2(\mathbb{T}_n)} + \| \tilde z_*-z_*\|_{L^2({\mathbb{R}})}\\
&\le o(1) + R-4\delta+ o(1) + \delta\le R-\delta,
\end{align*}
provided we take $n$ sufficiently large. Therefore, passing to a subsequence, we may assume that
\begin{align}\label{main:3}
\chi_n^0 u_{0,n}\rightharpoonup u_{0,\infty}\in B(z_*,R) \quad \text{weakly in $L^2({\mathbb{R}})$}.
\end{align}
Now let $u_\infty:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ be the global solution to \eqref{nls} with initial data $u_\infty (0)=u_{0,\infty}$. By Theorem~\ref{T:weak wp},
\begin{align*}
\tilde u_n (T) \rightharpoonup u_\infty (T) \quad \text{weakly in $L^2({\mathbb{R}})$}.
\end{align*}
Combining this with Lemma \ref{lm:sm}, we deduce
\begin{align*}
\chi_n^2 \tilde u_n(T)\rightharpoonup u_\infty (T) \quad \text{weakly in $L^2({\mathbb{R}})$}.
\end{align*}
Thus, using also \eqref{l*n}, the definition of $z_n$, \eqref{main:2}, and \eqref{main:1}, we get
\begin{align*}
\bigl|\langle \tilde l, u_\infty (T)\rangle_{L^2({\mathbb{R}})}-\alpha\bigr |
&= \lim_{n\to \infty}\bigl|\langle \tilde l, \chi_n^2 \tilde u_n(T)\rangle_{L^2(\mathbb{T}_n)}-\alpha\bigr|\\
&= \lim_{n\to \infty}\bigl |\langle P_{\le 2N_n}^{L_n} \tilde l, \chi_n^2\tilde u_n (T)\rangle_{L^2(\mathbb{T}_n)} -\alpha\bigr|\\
&= \lim_{n\to \infty}\bigl |\langle \tilde l, z_n(T)\rangle_{L^2(\mathbb{T}_n)} -\alpha\bigr| \\
&= \lim_{n\to \infty}\bigl |\langle \tilde l, u_n(T)\rangle_{L^2(\mathbb{T}_n)} -\alpha\bigr| \\
&\geq r + 4\delta.
\end{align*}
Therefore, using
$$
\|u_\infty(T)\|_{L^2({\mathbb{R}})} = \|u_{0,\infty}\|_{L^2({\mathbb{R}})} < R + \|z_*\|_{L^2({\mathbb{R}})} = M
$$
(cf. \eqref{main:3}) together with \eqref{tildeapprox}, we deduce that
\begin{align*}
\bigl |\langle l, u_\infty(T)\rangle-\alpha\bigr|\ge r + 4\delta - \|l-\tilde l\|_2\|u_\infty(T)\|_{L^2} \geq r + 3\delta > r.
\end{align*}
This shows that $u_\infty(T)$ lies outside the cylinder $C_r(\alpha, l)$, despite the fact that $u_\infty(0)\in B(z_*,R)$, and so completes the proof of the theorem.\qed
\end{document} |
\begin{document}
\title{Foliations on $\mathbb{CP}^3$ of degree $2$ with a line as singular set}
\begin{abstract}
In this work we classify the foliations on $\mathbb{CP}^3$ of codimension 1 and degree $2$ that have a line as singular set. To achieve this, we give a complete description of the relevant components. We prove that the boundary of the exceptional component contains only 3 foliations up to change of coordinates, and that this boundary is contained in a logarithmic component. Finally, we construct examples of foliations on $\mathbb{CP}^3$ of codimension 1 and degree $s \geq 3$ that have a line as singular set and such that
either they form a family with a rational first integral of degree $s+1$, or they are logarithmic foliations, some of which have a minimal rational first integral of unbounded degree.
\end{abstract}
\section{Introduction}
A holomorphic foliation $\mathcal{F}$ on the projective space $\mathbb{CP}^3$ of codimension one and degree $s$ is given by the projective class of a 1-form:
$$ \omega=A_1(z_1,z_2,z_3,z_4)dz_1+A_2(z_1,z_2,z_3,z_4)dz_2+A_3(z_1,z_2,z_3,z_4)dz_3+A_4(z_1,z_2,z_3,z_4)dz_4,$$
\noindent where $A_1, A_2, A_3, A_4 \in \mathbb{C}[z_1,z_2,z_3,z_4]$ are homogeneous of degree $s+1$, and they satisfy:
\begin{enumerate}
\item $\sum_{i=1}^4 z_iA_i(z_1,z_2,z_3,z_4)=0$
\item The integrability condition: $\omega \wedge d\omega=0$.
\end{enumerate}
\noindent We consider the subspace of classes of 1-forms that satisfy these conditions and whose singular set $Sing(\omega)$ has codimension 2; we denote this space by $\mathcal{F}(s,3)$. Then $\mathcal{F}(s,3)$ can be identified with a Zariski-open set in the projective space $\mathbb{P}H^0(\mathbb{CP}^3, \Omega_{\mathbb{CP}^3}^1(s+2))$, which has dimension $4 \binom{s+4}{3} - \binom{s+5}{3}-1$. Throughout this article, closures will be taken with respect to $\mathcal{F}(s,3)$, unless otherwise specified.
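As a sanity check of this dimension count (our computation), for $s=2$ the formula gives
$$4\binom{6}{3}-\binom{7}{3}-1 = 80-35-1 = 44,$$
in agreement with the identification $\mathbb{P}H^0(\mathbb{CP}^3, \Omega_{\mathbb{CP}^3}^1(4))=\mathbb{CP}^{44}$ used in Section 2.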
\\
\noindent The form $ \omega=A_1(z_1,z_2,z_3,z_4)dz_1+A_2(z_1,z_2,z_3,z_4)dz_2+A_3(z_1,z_2,z_3,z_4)dz_3+A_4(z_1,z_2,z_3,z_4)dz_4$ will be called a homogeneous expression of the foliation $\mathcal{F}$. The singular set of the foliation is the algebraic variety
$$Sing(\mathcal{F})=\mathbb{V}(A_1,A_2,A_3,A_4) \subset \mathbb{CP}^3,$$
\noindent which has dimension 1. Then the simplest singular set that a foliation on $\mathbb{CP}^3$ can have is a line, which in this case is the intersection of two hyperplanes.
Due to the nature of this singular set, it is clear that in the subspace of such foliations one can carry out more explicit computations and attempt to prove more conjectures. It is for this reason that it is interesting to classify this type of foliations.
We note that the classification of these foliations is also the question proposed on page 57 of the book \cite{Deserti-Cerveau}.
\\
The main objective of this work is thus to find all the foliations in $\mathcal{F}(2,3)$ that have a line as singular set. Of course, the linear pull-backs
of foliations on $\mathbb{CP}^2$ of degree $2$ with a unique singular point satisfy this property. We can see in the article \cite{Cerveau} that there exist, up to change of coordinates, exactly four foliations with one singular point; they are:
\begin{align*}
\nu_1&=z_3^3dz_1-z_1z_2^2dz_2+(z_2^3-z_1z_3^2)dz_3\\
\nu_2&=-z_3^3dz_1+z_3(z_2^2+z_1z_3)dz_2+(z_1z_3^2-z_2^3-z_1z_2z_3)dz_3\\
\nu_3&=z_2^2z_3dz_1-z_3(z_2^2+z_3^2)dz_2+(z_2^3+z_2^2z_3-z_1z_2z_3)dz_3\\
\nu_4&=(z_1z_2z_3+z_3^3-z_2^3)dz_1+z_2(z_3^2-z_1z_2)dz_2-(z_2^2z_3+z_1^2z_2+z_1z_3^2)dz_3.
\end{align*}
The first one has a rational first integral of degree $3$; for the last one, the singularity is a saddle-node and the foliation has no algebraic invariant curves. With this in mind we can ask:
are all the foliations on $\mathbb{CP}^3$ of degree $2$ that have a line as singular set linear pull-backs of these $4$ foliations on $\mathbb{CP}^2$? The answer is no; however, all of them are somehow obtained from these $4$.
\\
More specifically, this set contains, up to change of coordinates: the linear pull-backs of the three foliations $\nu_2$, $\nu_3$, $\nu_4$ on $\mathbb{CP}^2$ of degree $2$ with a unique singular point and without rational first integral, and three foliations on
$\mathbb{CP}^3$ with a rational first integral of degree $3$, one of which is the linear pull-back of the foliation $\nu_1$ on $\mathbb{CP}^2$ (see Theorem \ref{Classification}).
\\
For the proof of this result we study each irreducible component of the space $\mathcal{F}(2,3)$ of foliations on $\mathbb{CP}^3$ of degree $2$. We know from the article \cite{Cerveau-LinsNeto} that $\mathcal{F}(2,3)$ has $6$ irreducible components,
namely:
\begin{enumerate}
\item $S(2,3)$: the foliations which are linear pull-backs of foliations on $\mathbb{CP}^2$ of degree $2$ with isolated singularities.
\item $\overline{R(2,2)}$: the closure of the space of foliations with a rational first integral $\frac{f(z_1,z_2,z_3,z_4)}{g(z_1,z_2,z_3,z_4)}$, where $f$ and $g$ in $\mathbb{C}[z_1,z_2,z_3,z_4]$ have degree 2 and $f$ defines a smooth hypersurface.
\item $\overline{R(1,3)}$: the closure of the space of foliations with a rational first integral $\frac{f(z_1,z_2,z_3,z_4)}{L^3(z_1,z_2,z_3,z_4)}$, where $f$ has degree $3$ and $L$ has degree $1$.
\item $\overline{L(1,1,1,1)}$: the closure of the space of logarithmic foliations given by the 1-forms:
$$\omega=\sum_{i=1}^4 \lambda_i L_1L_2L_3L_4 \frac{dL_i}{L_i},$$
\noindent where $L_1, L_2, L_3$ and $L_4$ in $\mathbb{C}[z_1,z_2,z_3,z_4]$ have degree $1$ and $\sum_{i=1}^4 \lambda_i=0$.
\item $\overline{L(1,1,2)}$: the closure of the space of logarithmic foliations given by the 1-forms:
$$\omega=\sum_{i=1}^3 \lambda_i f_1f_2f_3 \frac{df_i}{f_i},$$
\noindent where $f_1, f_2$ define different hyperplanes, $f_3$ defines an irreducible hypersurface of degree $2$ and $\lambda_1+\lambda_2+2\lambda_3=0$.
\item $\overline{E(3)}$: the exceptional component, which is the closure of the orbit of one foliation whose singular set contains the twisted cubic.
\end{enumerate}
The components (4) and (5) are called logarithmic components.
\\
The most detailed and difficult analysis is the one concerning the boundaries of the components. In this direction we have obtained an interesting result, which says that the boundary of the exceptional component $\overline{E(3)}$ contains, up to change of coordinates, only three foliations, and that these also lie in the logarithmic component $\overline{L(1,1,2)}$ (there are also three 1-forms whose singular set has dimension 2, but for the sake of completeness we will describe them as well). The proof of this result uses Geometric Invariant Theory and the techniques developed by Kirwan in the book \cite{Kirwan}. For the study of the boundary of the logarithmic components we mainly use the results of \cite{Cerveau-Mattei}.
\\
The structure of the paper is as follows. In Section 2 we describe all the foliations in the exceptional component, up to linear change of coordinates. Section 3 is devoted to the study of the logarithmic components. Finally, in the last section we prove the classification theorem for foliations in $\mathcal{F}(2,3)$ that have a line as singular set. We finish with the construction of two examples of foliations in $\mathcal{F}(s,3)$, for $s \geq 3$, with a line as singular set and with one of the following properties:
\begin{enumerate}
\item They form a family that have a rational first integral of degree $s+1$.
\item They form a family of logarithmic foliations, where some of them have a minimal rational first integral whose degree is not bounded.
\end{enumerate}
\section{The boundary of the exceptional component $\overline{E(3)}$}
We know that the exceptional component $\overline{E(3)}$ is the closure of the orbit of a foliation on $\mathbb{CP}^3$ whose singular set contains the twisted cubic (see example 6 of \cite{Cerveau-LinsNeto}), this orbit being taken with respect to the action of the automorphism group
of $\mathbb{CP}^3$. Hence, to find the foliations that have a line as singular set, it is necessary to find the foliations in the boundary of this orbit.
For that we use Geometric Invariant Theory, applied to the action by changes of coordinates on this algebraic variety. We obtain the following.
\begin{teo} \label{excepcional} Let $\overline{E(3)}$ be the exceptional component of the space of foliations on $\mathbb{CP}^3$ of codimension $1$ and degree $2$. Then, up to change of coordinates, it contains only the foliations associated to the following 1-forms:
\begin{enumerate}
\item $\omega=(2z_2^2z_4-z_2z_3^2-z_1z_3z_4)dz_1+(2z_3^2z_1-3z_1z_2z_4)dz_2+(3z_1^2z_4-z_1z_2z_3)dz_3+(z_2^2z_1-2z_3z_1^2)dz_4$
\item $\omega_1+\omega_2$
\item $\omega_2+\omega_3$
\item $\omega_1+\omega_3$.
\end{enumerate}
\noindent where
\begin{align*}
&\omega_1=z_1(-z_3z_4dz_1+3z_1z_4dz_3-2z_3z_1dz_4),\\
&\omega_2=z_2(2z_2z_4dz_1-3z_1z_4dz_2+z_2z_1dz_4)\\
&\omega_3=z_3(-z_2z_3dz_1+2z_3z_1dz_2-z_1z_2dz_3).
\end{align*}
\end{teo}
\begin{proof} In the article \cite{Cerveau-LinsNeto} we can see that the exceptional component $\overline{E(3)}$ of the space of foliations on $\mathbb{CP}^3$ of codimension $1$ and degree $2$ is the closure $\overline{SL_4(\mathbb{C})\cdot \omega}$ in
$\mathcal{F}(2,3)$ of the orbit of the
foliation given by:
\begin{align*}
\omega&=(2z_2^2z_4-z_2z_3^2-z_1z_3z_4)dz_1+(2z_3^2z_1-3z_1z_2z_4)dz_2\\
&+ (3z_1^2z_4-z_1z_2z_3)dz_3+(z_2^2z_1-2z_3z_1^2)dz_4,
\end{align*}
\noindent with respect to the action by change of coordinates by the reductive algebraic group $SL_4(\mathbb{C})$ on $\overline{E(3)}$.
\\
We can do this because we know that $\overline{E(3)}$ is an irreducible algebraic variety of dimension $13$; therefore the linear action of $SL_4(\mathbb{C})$ on the projective space $\mathbb{P}H^0(\mathbb{CP}^3, \Omega_{\mathbb{CP}^3}^1(4))=\mathbb{CP}^{44}$ induces an action on this variety, which is
of course $SL_4(\mathbb{C})$-invariant:
\begin{align*}
SL_4(\mathbb{C}) \times \overline{E(3)} &\to \overline{E(3)}\\
(g,\nu) &\mapsto g \cdot \nu.
\end{align*}
\noindent We recall that a 1-parameter subgroup of an algebraic group $G$ is an algebraic morphism from $\mathbb{C}^*$ to $G$, and a well-known result says that every 1-parameter subgroup of $SL_4(\mathbb{C})$ is diagonalizable. Let:
\begin{align*}
\lambda_{(n_1,n_2,n_3)}: \mathbb{C}^* &\to SL_4(\mathbb{C})\\
t &\mapsto \left(\begin{array}{cccc}
t^{n_1}&0&0&0 \\
0&t^{n_2}&0&0\\
0&0&t^{n_3}&0\\
0&0&0&t^{n_4}
\end{array}\right),
\end{align*}
\noindent be a diagonal 1-parameter subgroup of $SL_4(\mathbb{C})$, where $n_1, n_2, n_3, n_4 \in \mathbb{Z}$ and $n_1+n_2+n_3+n_4=0$. The action of $\lambda_{(n_1,n_2,n_3)}$ on $\omega$ is:
\begin{align*}
\lambda_{(n_1,n_2,n_3)}(t) \cdot \omega&=(2t^{n_3-n_4}z_2^2z_4-t^{n_2-n_3}z_2z_3^2-t^{n_1-n_2}z_1z_3z_4)dz_1\\
&+(2t^{n_3-n_4}z_3^2z_1-3t^{n_2-n_3}z_1z_2z_4)dz_2\\
&+(3t^{n_1-n_2}z_1^2z_4-t^{n_3-n_4}z_1z_2z_3)dz_3\\
&+(t^{n_2-n_3}z_2^2z_1-2t^{n_1-n_2}z_3z_1^2)dz_4.
\end{align*}
Let:
\begin{align*}
\omega_1&:=-z_1z_3z_4dz_1+3z_1^2z_4dz_3-2z_3z_1^2dz_4=z_1(-z_3z_4dz_1+3z_1z_4dz_3-2z_3z_1dz_4)\\
\omega_2&:=2z_2^2z_4dz_1-3z_1z_2z_4dz_2+z_2^2z_1dz_4=z_2(2z_2z_4dz_1-3z_1z_4dz_2+z_2z_1dz_4)\\
\omega_3&:=-z_2z_3^2dz_1+2z_3^2z_1dz_2-z_1z_2z_3dz_3=z_3(-z_2z_3dz_1+2z_3z_1dz_2-z_1z_2dz_3),
\end{align*}
\noindent then $\omega=\omega_1+\omega_2+\omega_3$ and
\begin{align*}
\lambda_{(n_1,n_2,n_3)}(t) \cdot \omega_1&=t^{n_1-n_2}\omega_1\\
\lambda_{(n_1,n_2,n_3)}(t) \cdot \omega_2&=t^{n_2-n_3}\omega_2\\
\lambda_{(n_1,n_2,n_3)}(t) \cdot \omega_3&=t^{n_3-n_4}\omega_3.\\
\end{align*}
\noindent If we take
\begin{align*}
\lambda_1(t)&=\lambda_{(3,-1,-1)}(t),\\
\lambda_2(t)&=\lambda_{(1,1,-1)}(t),\\
\lambda_3(t)&=\lambda_{(1,1,1)}(t),
\end{align*}
\noindent then we have
\begin{equation*}
\lim_{t \to \infty} \lambda_i(t) \cdot \omega=\omega_i \quad \textrm{and} \quad \lim_{t \to 0} \lambda_i(t) \cdot \omega=\omega_j+\omega_k
\end{equation*}
\noindent where $i,j,k \in \{1,2,3\}$ are pairwise distinct.
\\
Then the classes of $\omega_1+\omega_2$, $\omega_2+\omega_3$ and $\omega_1+\omega_3$ are in
$$\overline{SL_4(\mathbb{C})\cdot \omega}-SL_4(\mathbb{C}) \cdot \omega=\overline{E(3)}-SL_4(\mathbb{C})\cdot \omega,$$
\noindent here it is important to note that these are integrable 1-forms.
We note also that the 1-forms $\omega_1$, $\omega_2$, $\omega_3$ have a hyperplane as singular set and they are in the closure of $SL_4(\mathbb{C})\cdot \omega$ but considered in $\mathbb{P}H^0(\mathbb{CP}^3, \Omega_{\mathbb{CP}^3}^1(4))$.
\\
In an analogous way we can see that for $i,j=1,2,3$, $i \neq j$, the classes of $\omega_i $ are in $\overline{SL_4(\mathbb{C}) \cdot (\omega_i+\omega_j)}^{\mathbb{P}H^0(\mathbb{CP}^3, \Omega_{\mathbb{CP}^3}^1(4))}$ and the orbit $SL_4(\mathbb{C})\cdot \omega_i$ is closed in $\mathbb{P}H^0(\mathbb{CP}^3, \Omega_{\mathbb{CP}^3}^1(4))$ because $\omega_i$
is the pull-back of a 1-form of degree $1$ on $\mathbb{CP}^2$ with three distinct eigenvalues and a line of singularities. Here we recall that the orbit of a matrix under conjugation is closed if its Jordan blocks have minimal size, which is the case for three distinct eigenvalues.
\\
\noindent On the other hand we have that $\lambda_{(3,1,-1)}(t) \cdot \omega=t^2\omega$, then
\begin{equation*}
\lim_{t \to 0} \lambda_{(3,1,-1)}(t) \cdot \overline{\omega}=\lim_{t \to 0} t^2\overline{\omega}=0,
\end{equation*}
\noindent where $\overline{\omega}$ is a nonzero point of the affine cone of $\overline{E(3)}$ lying over $\omega$. Then, by the Hilbert-Mumford criterion for 1-parameter subgroups (see \cite{Mumford}), the point $\omega$
is unstable for the action. We conclude that the set of semistable points is empty and the variety $\overline{E(3)}$ is the closed set of unstable points. Moreover, $\{\lambda_{(3,1,-1)}(t): t \in \mathbb{C}^*\} \subset Aut(\omega)$; in fact, $\dim Aut(\omega)=2$.
\\
Now we are going to prove that the above are all the foliations in $\overline{E(3)}$. For this we use Theorem 12.26 in Part II of the book \cite{Kirwan}.
In order to simplify the arguments we use the same notation as Kirwan in that book.
\\
The theorem says that there exists a stratification of $\overline{E(3)}$ by disjoint, locally closed subvarieties $S_0,\dots,S_n$, invariant under the action, and these subvarieties are parametrized by a finite set $\mathcal{B}=\{\beta_0, \beta_1,\dots,\beta_n\}$ of virtual 1-parameter subgroups of
$SL_4(\mathbb{C})$ and:
\begin{align*}
S_0&=\overline{E(3)}^{ss}=\emptyset \quad \textrm{(the semistable points for the action)},\\
\overline{E(3)}&=\overline{E(3)}^{un}=\bigcup_{i=1}^n S_{i} \quad \textrm{(the unstable points for the action)}.
\end{align*}
\noindent Without giving too many details, we describe the construction of the stratum $S_i$. For that, we recall that a diagonal 1-parameter subgroup of $SL_4(\mathbb{C})$ is identified with a point in $\mathbb{Z}^3$ and that the set of virtual 1-parameter subgroups of $SL_4(\mathbb{C})$ is identified with $\mathbb{Q}^3$.
\\
\noindent As we can see in Definition 12.8 of \cite{Kirwan}, the indexing set $\mathcal{B}$ is the set of minimal combinations of weights lying in some Weyl chamber of the representation. For $\beta_i \in \mathcal{B}$ we define:
\begin{align*}
Z_i&=\{\nu \in \overline{E(3)}: \beta_i(t) \cdot \nu=\nu\}\\
Y_i&=\{\mu \in \overline{E(3)}: p_i(\mu)=\lim_{t \to 0} \beta_i(t) \cdot \mu \in Z_i\}
\end{align*}
The function $p_i:Y_i \to Z_i$ is a locally trivial fibration with affine fiber and $p_i(\mu) \in \overline{SL_4(\mathbb{C}) \cdot \mu} \cap Z_i$ for all $\mu \in Y_i$.
Finally $Y^{ss}_i=p_i^{-1}(Z_i^{ss})$, where $Z_i^{ss}$ is the set of semistable points with respect to a certain action. Then the stratum is $S_i=SL_4(\mathbb{C}) \cdot Y_i^{ss}$.
On the other hand, it is easy to prove that the unique diagonal 1-parameter subgroup, up to integer multiples, which leaves $\omega$ fixed is $\beta_1=\lambda_{(3,1,-1)}$. With $\beta_1$ we construct the stratum $S_1$, which satisfies:
$$SL_4(\mathbb{C}) \cdot \omega \subset S_1=SL_4(\mathbb{C}) \cdot Y^{ss}_1 \subset \overline{S_1} \subset \overline{E(3)},$$
\noindent then $\overline{S_1} = \overline{E(3)}$. In this case the closed sets $Z_2,\dots,Z_n$ intersect $Z_1$, because otherwise we would have two foliations $\nu_1$, $\nu_2 \in \overline{E(3)}$ such that
$\overline{SL_4(\mathbb{C}) \cdot \nu_1} \cap \overline{SL_4(\mathbb{C}) \cdot \nu_2} \neq \emptyset$. Therefore we conclude that $\overline{S_1} = SL_4(\mathbb{C}) \cdot Y_1$. This means that it is enough to study the subvariety $Y_1$; in fact, we will see that it is enough to find the foliations in $Z_1$ in order to obtain all the foliations in $\overline{E(3)}$.
\\
The stabilizer of $\beta_1$ with respect to the adjoint action of $SL_4(\mathbb{C})$ is the subgroup $D$ of diagonal matrices in $SL_4(\mathbb{C})$, and the parabolic subgroup $P_1$ associated with $\beta_1$ is the group of upper triangular matrices (see Definition 12.11 of \cite{Kirwan}). Following page 153 of \cite{Kirwan}, $D$ acts on $Z_1:=\{\nu \in \overline{E(3)}: \beta_1(t) \cdot \nu=\nu \}$ and $P_1$ acts on $Y_1$; therefore $D\cdot \omega$ and $P_1 \cdot \omega$ are open and dense in $Z_1$ and in $Y_1$, respectively.
\\
Since $Z_1$ is irreducible we have that $Z_1=\overline{D \cdot \omega}$, and it will be enough to find the orbits in this closed set. Let $a_1,a_2,a_3,a_4 \in \mathbb{C}^{*}$ such that $a_1a_2a_3a_4=1$, then
\begin{align*}
\left(\begin{array}{cccc}
a_1&0&0&0 \\
0&a_2&0&0\\
0&0&a_3&0\\
0&0&0&a_4
\end{array}\right) \cdot \omega=a_1a_2^{-1}\omega_1+a_2a_3^{-1}\omega_2+a_3a_4^{-1}\omega_3.
\end{align*}
\noindent Thus $D \cdot \omega=\{\alpha_1 \omega_1+ \alpha_2 \omega_2+ \alpha_3 \omega_3: \alpha_1, \alpha_2, \alpha_3 \in \mathbb{C}^*\}$; note also that $\omega_1$, $\omega_2$, $\omega_3$
are equivalence classes in the projectivization of different eigenspaces of the representation given by the action, with respect to the maximal torus $D$. We conclude that $\{\alpha_1 \omega_1+ \alpha_2 \omega_2+ \alpha_3 \omega_3: \alpha_1, \alpha_2, \alpha_3 \in \mathbb{C}\} \cap \overline{E(3)}$ is closed, because it is the intersection of the variety with a projective subspace; since it contains $D\cdot\omega$ and is contained in $Z_1$, it must be $Z_1$, and therefore:
$$Z_1=D\cdot\omega \cup \bigcup_{k\neq j}D\cdot (\omega_k+\omega_j).$$
\\
In this case the indexing set of virtual 1-parameter subgroups for constructing the stratification is
$$\mathcal{B}=\{\beta_1=(3,1,-1), \beta_{12}=\lambda_3=(1,1,1), \beta_{13}=\lambda_2=(1,1,-1), \beta_{23}=\lambda_1=(3,-1,-1)\}.$$
\noindent Since $Y_1=\overline{P_1 \cdot \omega}$ we have the following sets and locally trivial fibrations:
\begin{align*}
&Y_1^{ss}=P_1 \cdot \omega, &Y_{jk}^{ss}=P_{jk} \cdot (\omega_j+\omega_k)\\
& \downarrow p_1 \quad \quad \quad \quad \quad & \downarrow p_{jk} \quad \quad \quad \quad \quad \quad \\
&Z_1^{ss}=D \cdot \omega, &Z_{jk}^{ss}=D_{jk}\cdot (\omega_j+\omega_k)
\end{align*}
\noindent where $P_{jk}$ is the parabolic subgroup associated with $\beta_{jk}$ and $D_{jk}$ is the stabilizer of $\beta_{jk}$ with respect to the adjoint action (see page 154 of \cite{Kirwan}). Since $\overline{E(3)}=\overline{S_1}=SL_4(\mathbb{C})\cdot Y_1$, the only orbits in the exceptional component, up to change of coordinates, are those of $\omega$, $\omega_1+\omega_2$, $\omega_2+\omega_3$ and $\omega_1+\omega_3$.
\end{proof}
Now we describe the singular sets and the rational first integrals of the foliations in the exceptional component $\overline{E(3)}$. We use the notation of the theorem above.
The singular set of $\omega$ is the union of the following three curves in $\mathbb{CP}^3$
\begin{enumerate}
\item The plane conic $\mathbb{V}(z_1,2z_2z_4-z_3^2)$,
\item The line $\mathbb{V}(z_1,z_2)$,
\item The twisted cubic $\mathbb{V}(2z_3^2-3z_2z_4,3z_1z_4-z_2z_3,z_2^2-2z_1z_3),$
\end{enumerate}
\noindent and its rational first integral is $\frac{(3z_4z_1^2-3z_1z_2z_3+z_2^3)^2}{(2z_1z_3-z_2^2)^3}$. For the other foliations we have:
\begin{align*}
\textrm{Foliation} \quad& \textrm{Singular Set} & \textrm{Rational First Integral}\\
\omega_1+\omega_2 \quad & \mathbb{V}(z_1,z_2) \cup \mathbb{V}(z_1,z_4) \cup \mathbb{V}(z_4,2z_1z_3-z_2^2)& \frac{z_1^4z_4^2}{(2z_1z_3-z_2^2)^3}\\
\omega_1+\omega_3 \quad &\mathbb{V}(z_1,z_2) \cup \mathbb{V}(z_1,z_3) \cup \mathbb{V}(z_3,z_4)& \frac{(z_1z_4-z_2z_3)^2}{z_1z_3^3}\\
\omega_2+\omega_3 \quad &\mathbb{V}(z_1,z_2) \cup \mathbb{V}(z_2,z_3) \cup \mathbb{V}(z_1,2z_2z_4-z_3^2)& \frac{z_1^2(2z_2z_4-z_3^2)}{z_2^4}
\end{align*}
It is easy to see that, up to nonzero multiplicative constants:
\begin{align*}
\omega_1+\omega_2&=z_1z_4(2z_1z_3-z_2^2) \Big(4\frac{dz_1}{z_1}+2\frac{dz_4}{z_4}-3\frac{d(2z_1z_3-z_2^2)}{2z_1z_3-z_2^2}\Big), \\
\omega_1+\omega_3&=z_1z_3(z_1z_4-z_2z_3)\Big(2\frac{d(z_1z_4-z_2z_3)}{z_1z_4-z_2z_3}-\frac{dz_1}{z_1}-3\frac{dz_3}{z_3}\Big),\\
\omega_2+\omega_3&=z_2z_1(2z_2z_4-z_3^2)\Big(4\frac{dz_2}{z_2}-2\frac{dz_1}{z_1}-\frac{d(2z_2z_4-z_3^2)}{2z_2z_4-z_3^2}\Big),
\end{align*}
\noindent this means that these foliations are in the logarithmic component $\overline{L(1,1,2)}$. With this we obtain the following corollaries.
\begin{cor} The exceptional component $\overline{E(3)}$ contains no foliation having a line as its singular set.
\end{cor}
\begin{cor} The boundary of the exceptional component is contained in a logarithmic component, more precisely:
$$\overline{E(3)}-E(3) \subset L(1,1,2).$$
\end{cor}
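The logarithmic expressions above can be verified symbolically. The following sympy sketch (an illustration of ours) checks the first identity, confirming that $\omega_1+\omega_2$ is proportional to the displayed logarithmic 1-form, with proportionality constant $-1/2$:

```python
# Check that omega_1 + omega_2 is a constant multiple of the logarithmic form
# z1*z4*h * (4 dz1/z1 + 2 dz4/z4 - 3 dh/h), with h = 2*z1*z3 - z2**2.
import sympy as sp

z1, z2, z3, z4 = sp.symbols('z1 z2 z3 z4')

w12 = [-z1*z3*z4 + 2*z2**2*z4,      # dz1 coefficient of omega_1 + omega_2
       -3*z1*z2*z4,                 # dz2
       3*z1**2*z4,                  # dz3
       -2*z3*z1**2 + z2**2*z1]      # dz4

h = 2*z1*z3 - z2**2
dh = [sp.diff(h, v) for v in (z1, z2, z3, z4)]
log_form = [4*z4*h, 0, 0, 2*z1*h]                       # the dz1, dz4 parts
log_form = [sp.expand(c - 3*z1*z4*d)                    # minus 3*z1*z4*dh
            for c, d in zip(log_form, dh)]

ratios = {sp.cancel(a / b) for a, b in zip(w12, log_form) if b != 0}
assert ratios == {sp.Rational(-1, 2)}   # proportional, with constant -1/2
```
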
As we said before, the 1-forms $\omega_1$, $\omega_2$ and $\omega_3$ have a hyperplane as singular set, so they do not define a foliation; but if we remove the singular hyperplane we obtain foliations which are linear pull-backs of foliations on
$\mathbb{CP}^2$ of degree $1$ and such that:
\begin{align*}
\textrm{1-form} \quad& \textrm{Singular Set} & \textrm{Rational First Integral}\\
\frac{\omega_1}{z_1} \quad & \mathbb{V}(z_1,z_3) \cup \mathbb{V}(z_1,z_4) \cup \mathbb{V}(z_3,z_4) &\frac{z_1z_4^2}{z_3^3}\\
\frac{\omega_2}{z_2} \quad & \mathbb{V}(z_1,z_2) \cup \mathbb{V}(z_1,z_4) \cup \mathbb{V}(z_2,z_4) &\frac{z_1^2z_4}{z_2^3}\\
\frac{\omega_3}{z_3} \quad & \mathbb{V}(z_1,z_2) \cup \mathbb{V}(z_1,z_3) \cup \mathbb{V}(z_2,z_3) &\frac{z_2^2}{z_1z_3}\\
\end{align*}
\section{The logarithmic components of $\mathcal{F}(2,3)$}
We know that there exist two components of $\mathcal{F}(2,3)$ whose generic elements are logarithmic foliations. We are going to describe the singular set of the foliations in each component.
Recall that:
\begin{itemize}
\item $\overline{L(1,1,1,1)}$ is the closure of the space of logarithmic foliations given by the 1-forms:
$$\omega=\sum_{i=1}^4 \lambda_i L_1L_2L_3L_4 \frac{dL_i}{L_i},$$
\noindent where $L_1, L_2, L_3$ and $L_4$ in $\mathbb{C}[z_1,z_2,z_3,z_4]$ have degree $1$ and $\sum_{i=1}^4 \lambda_i=0$.
\item $\overline{L(1,1,2)}$ is the closure of the space of logarithmic foliations given by the 1-forms:
$$\omega=\sum_{i=1}^3 \lambda_i f_1f_2f_3 \frac{df_i}{f_i},$$
\noindent where $f_1, f_2$ are different hyperplanes, $f_3$ is an irreducible hypersurface of degree $2$ and $\lambda_1+\lambda_2+2\lambda_3=0$.
\end{itemize}
If $\omega= \sum_{i=1}^4 \lambda_i L_1L_2L_3L_4 \frac{dL_i}{L_i} \in L(1,1,1,1)$, using part (c) of Proposition 2.1 on page 96 of \cite{Cerveau-Mattei} we have that:
$$Sing(\omega)=\bigcup_{i \neq j} \mathbb{V}(L_i,L_j),$$
\noindent and this is not a line because
$L_1, L_2, L_3$ and $L_4$ are distinct hyperplanes. On the other hand, with Theorem 1.1 on page 91 of \cite{Cerveau-Mattei}, adapted to this case, we can prove that the foliations in $\overline{L(1,1,1,1)}-L(1,1,1,1)$ have the form:
\begin{enumerate}
\item $\omega_1=L_1^2L_2L_3\big(\sum_{i=1}^3 \lambda_i \frac{dL_i}{L_i}+d(\frac{\alpha}{L_1})\big)$, where $\alpha$ is homogeneous of degree $1$.
\item $\omega_2=L_1^3L_2\big( \lambda_1 \frac{dL_1}{L_1}+\lambda_2 \frac{dL_2}{L_2}+d(\frac{\alpha}{L_1^2})\big)$, where $\alpha$ is homogeneous of degree $2$.
\item $\omega_3=L_1^4\big( \lambda_1 \frac{dL_1}{L_1}+d(\frac{\alpha}{L_1^3})\big)=\lambda_1L_1^3dL_1+L_1d\alpha-3\alpha dL_1$, where $\alpha$ is homogeneous of degree $3$.
\end{enumerate}
\noindent By part (d) of Proposition 2.1 on page 96 of \cite{Cerveau-Mattei}, we obtain the singular set in each case:
\begin{enumerate}
\item $Sing(\omega_1)= \mathbb{V}(L_1,L_2) \cup \mathbb{V}(L_1,L_3) \cup \mathbb{V}(L_2,L_3) \cup \mathbb{V}(L_1,\alpha)$
\item $Sing(\omega_2)= \mathbb{V}(L_1,L_2) \cup \mathbb{V}(L_1,\alpha)$
\item $Sing(\omega_3)= \mathbb{V}(L_1,\alpha)$
\end{enumerate}
If $\omega= \sum_{i=1}^3 \lambda_i f_1f_2f_3 \frac{df_i}{f_i} \in L(1,1,2)$, then using part (c) of Proposition 2.1 in \cite{Cerveau-Mattei} we have that:
$$Sing(\omega)=\bigcup_{i \neq j} \mathbb{V}(f_i,f_j) \cup Sing(df_3),$$
\noindent which is not a line. Finally, we just have to note that the foliations on the boundary of this component have the same form as the foliations on the boundary of the previous logarithmic component. Hence the logarithmic components
$\overline{L(1,1,1,1)}$ and $\overline{L(1,1,2)}$ contain no foliations with a line as singular set.
\section{Foliations on $\mathbb{CP}^3$ of codimension 1 that have a line as singular set}
In this section we present the classification of codimension 1 foliations on $\mathbb{CP}^3$ of degree $2$ that have a line as singular set.
\\
As we said in the introduction, the classification of this type of foliations is the question posed on page 57 of the book \cite{Deserti-Cerveau}; in that direction, the authors found the unique $\mathcal{L}$-foliation on $\mathbb{CP}^3$ of degree $2$ with a line as singular set (see Theorem 4.5 of \cite{Deserti-Cerveau}). For this reason it is convenient to begin this section by saying something about
$\mathcal{L}$-foliations.
\begin{defin} Let $\mathcal{F}$ be a foliation on $\mathbb{CP}^3$ of codimension $1$. We will say that $\mathcal{F}$ is an $\mathcal{L}$-foliation of codimension $1$ if there exists a Lie subalgebra $\mathfrak{g}$ of the Lie algebra of the group
$Aut(\mathbb{CP}^3)$, such that for a generic point $z \in \mathbb{CP}^3$ we have the following property:
$$\textrm{$\mathfrak{g}(z)$ is the tangent space to the leaf of $\mathcal{F}$ at $z$}.$$
\noindent In particular, we have that $\dim \mathfrak{g}(z)=\dim \{X(z): X \in \mathfrak{g}\}=2$.
\end{defin}
For example, from Theorem 4.5 of \cite{Deserti-Cerveau} we have that the foliations $\omega$ and $\omega_1+\omega_3$ in the exceptional component $\overline{E(3)}$ (see Theorem \ref{excepcional}) are $\mathcal{L}$-foliations. With the same result we also have the following:
\begin{cor} \label{L-foliation} The foliation on $\mathbb{CP}^3$ of degree $2$ given by the 1-form:
$$\omega=3(z_1z_3^2+z_2z_3z_4+z_2^3)dz_3-z_3d(z_1z_3^2+z_2z_3z_4+z_2^3),$$
\noindent is the unique $\mathcal{L}$-foliation, up to linear change of coordinates, that has a line as singular set.
\end{cor}
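As an independent sanity check (ours, not part of the paper), one can verify with sympy that the coefficient ideal of the 1-form in the corollary cuts out exactly the line $\mathbb{V}(z_2,z_3)$, using the Rabinowitsch trick for radical membership:

```python
# Check that omega_f = 3 f dz3 - z3 df, f = z1*z3**2 + z2*z3*z4 + z2**3,
# has the line V(z2, z3) as its singular set.
import sympy as sp

z1, z2, z3, z4, y = sp.symbols('z1 z2 z3 z4 y')
f = z1*z3**2 + z2*z3*z4 + z2**3
df = [sp.diff(f, v) for v in (z1, z2, z3, z4)]
coeffs = [-z3*df[0], -z3*df[1], 3*f - z3*df[2], -z3*df[3]]

# the line V(z2, z3) is contained in the singular set
assert all(c.subs({z2: 0, z3: 0}) == 0 for c in coeffs)

# conversely z2 and z3 vanish on the singular set: g lies in the radical of
# the coefficient ideal iff 1 lies in the ideal together with 1 - y*g
for g in (z2, z3):
    G = sp.groebner(coeffs + [1 - y*g], z1, z2, z3, z4, y, order='grevlex')
    assert list(G.exprs) == [1]
```
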
Now we present the classification theorem for this type of foliations.
\begin{teo} \label{Classification} The foliations on $\mathbb{CP}^3$ of codimension $1$ of degree $2$ that have a line as singular set are, up to change of coordinates:
\begin{enumerate}
\item The linear pull-backs in $\mathbb{CP}^3$ of the foliations on $\mathbb{CP}^2$ of degree $2$ given by the 1-forms:
\begin{align*}
\nu_2&=-z_3^3dz_1+z_3(z_2^2+z_1z_3)dz_2+(z_1z_3^2-z_2^3-z_1z_2z_3)dz_3\\
\nu_3&=z_2^2z_3dz_1-z_3(z_2^2+z_3^2)dz_2+(z_2^3+z_2^2z_3-z_1z_2z_3)dz_3\\
\nu_4&=(z_1z_2z_3+z_3^3-z_2^3)dz_1+z_2(z_3^2-z_1z_2)dz_2-(z_2^2z_3+z_1^2z_2+z_1z_3^2)dz_3.
\end{align*}
\item Or the foliation given by the 1-form:
$$\omega_f=3fdz_3-z_3df,$$
\noindent where the polynomial $f(z_1,z_2,z_3,z_4)$ is one of the following:
\begin{align*}
&z_1z_3^2+z_3z_4^2+z_2^3,\\
& z_1z_3^2+z_2z_3z_4+z_2^3, \\
& z_1z_3^2+z_2^3.
\end{align*}
\noindent In these cases the associated foliation has the rational first integral $\frac{f}{z_3^3}$, and only the third one is a linear pull-back of a foliation on $\mathbb{CP}^2$.
\end{enumerate}
\end{teo}
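Some of the statements above are easy to test symbolically. The following sympy sketch (ours) checks that $\nu_2$ satisfies the Euler condition $\sum_i z_iA_i=0$, so it descends to a foliation on $\mathbb{CP}^2$, and that its only singular point is $(1:0:0)$:

```python
# Sanity check (ours) on nu_2 in homogeneous coordinates on CP^2.
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
nu2 = [-z3**3,
       z3*(z2**2 + z1*z3),
       z1*z3**2 - z2**3 - z1*z2*z3]

# Euler condition: sum z_i * A_i = 0, so nu_2 descends to CP^2
assert sp.expand(z1*nu2[0] + z2*nu2[1] + z3*nu2[2]) == 0

# singular set: A_1 = 0 forces z3 = 0, and then A_3 = -z2**3 forces z2 = 0,
# leaving only the point (1:0:0)
assert nu2[1].subs(z3, 0) == 0
assert sp.expand(nu2[2].subs(z3, 0) + z2**3) == 0
```
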
\begin{proof} For the proof we will study every component of the space of foliations on $\mathbb{CP}^3$ of codimension 1 of degree $2$. As we mentioned before, following the results of \cite{Cerveau-LinsNeto}, there are 6 components of
$\mathcal{F}(2,3)$, which we labeled 1,\dots,6 in the introduction. Components 4, 5 and 6 were analyzed in the preceding sections, where we saw that they contain no foliations having a line as singular set.
\\
For the study of the component $\overline{R(2,2)}$ we proceed as follows: if the polynomials $f$ and $g$ define quadric surfaces in $\mathbb{CP}^3$, with $\mathbb{V}(f)$ smooth, then the polynomial $g$ is not a double line, because
the degree of the foliation is $2$.
Let $Q=\mathbb{V}(f)$. By the adjunction formula we have $K_Q= \mathcal{O}_Q(2) \otimes K_{\mathbb{CP}^3}= \mathcal{O}_Q(-2)$, so $-K_Q=\mathcal{O}_Q(2)$, and this is the class in $Pic(Q)$ of the intersection with another reduced quadric.
\\
On the other hand, we know that $Q$ is isomorphic to $\mathbb{P}^1 \times \mathbb{P}^1$, so $K_Q= p_1^*K_{\mathbb{P}^1} \otimes p_2^*K_{\mathbb{P}^1}$; we conclude that $K_Q$ has class $(-2,-2)$.
Therefore $\mathcal{O}_Q(2)$ has class $(2,2) \neq (0,4)$, which says that the intersection of $Q$ with another reduced quadric surface is not a line. On the boundary of this component we have foliations with a rational first integral in which one of the quadrics is the product of two different hyperplanes, so the associated foliation does not have a line as singular set.
\\
Then all the foliations of $\mathcal{F}(2,3)$ that have a line as singular set are in the components 1 and 3. In component 1 we have the linear pull-backs of the foliations on $\mathbb{CP}^2$ of degree $2$ with a unique singularity. By \cite{Cerveau}
we know that the three without a rational first integral are $\nu_2$, $\nu_3$ and $\nu_4$.
\\
It remains to study the component $\overline{R(1,3)}$. Let $\mathcal{F} \in \overline{R(1,3)}$ be such that it has a line as singular set. We can suppose that the rational first integral is:
$$\frac{f(z_1,z_2,z_3,z_4)}{z_3^3}$$
\noindent where $f(z_1,z_2,z_3,z_4)$ defines a reduced cubic hypersurface in $\mathbb{CP}^3$. The algebraic variety $\mathbb{V}(f,z_3)$ is contained in the singular set $Sing(\mathcal{F})$ of the foliation, then we can suppose that this is $\mathbb{V}(z_2,z_3)$,
therefore
$$f(z_1,z_2,z_3,z_4)=z_3h(z_1,z_2,z_3)+z_3z_4L(z_1,z_2,z_3,z_4)+az_2^3,$$
\noindent where $h(z_1,z_2,z_3)$ has degree 2, $L=a_1z_1+a_2z_2+a_3z_3+a_4z_4$, for some complex numbers $a_1, a_2, a_3, a_4$, and $a \in \mathbb{C}^*$.
\\
Note that if $L=0$, then the foliation depends only on the variables $z_1,z_2,z_3$, and it is the linear pull-back
of a foliation on $\mathbb{CP}^2$ of degree $2$ with a rational first integral and a unique singularity at $(1:0:0)$.
\\
Up to change of coordinates there is only one foliation with these properties, and it has rational first integral:
$$\frac{z_1z_3^2+z_2^3}{z_3^3},$$
\noindent then we can take $h=z_1z_3$ and $a=1$. We can see that if $a_1 \neq 0$, the cubic hypersurface defined by $z_1z_3^2+z_3z_4(\sum_{i=1}^4 a_iz_i)+z_2^3$ has a singularity outside the line $\mathbb{V}(z_2,z_3)$;
since this would also be a singularity of the foliation, we must require $a_1=0$. To finish the analysis we consider the following cases for $a_2, a_3$ and $a_4$:
\begin{enumerate}
\item Suppose that $a_4 \neq 0$, and consider $f=z_1z_3^2+z_3z_4(\sum_{i=2}^4 a_iz_i)+z_2^3$; we can see that the unique singular point of the cubic hypersurface defined by $f$ is $(1:0:0:0)$. Up to change of coordinates (see case
XX of the study of singular cubic hypersurfaces in Section 9.2.3 of \cite{Dolgachev}), this hypersurface is defined by:
$$z_1z_3^2+z_3z_4^2+z_2^3.$$
\noindent This cubic hypersurface contains only the line $\mathbb{V}(z_2,z_3)$, which is the singular set of the foliation; in fact, it is the unique cubic hypersurface containing exactly one line (see Table 9.1 of \cite{Dolgachev}).
The foliation is not a linear pull-back of a foliation on $\mathbb{CP}^2$ because the hypersurface $\mathbb{V}(z_1z_3^2+z_3z_4^2+z_2^3)$ is not a cone over a cubic plane curve.
Finally, by Theorem 4.5 of \cite{Deserti-Cerveau} we can conclude that this is not an $\mathcal{L}$-foliation either.
\item If $a_2 \neq 0$ and $a_4=0$, then the cubic hypersurface defined by
$$f=z_1z_3^2+z_3z_4(a_2z_2+a_3z_3)+z_2^3,$$
\noindent has as singular set the line $\mathbb{V}(z_2,z_3)$, which is the singular set of the foliation. Then, by the form of $f$ and Theorem 9.2.1 of \cite{Dolgachev}, this is, up to linear change of coordinates:
$$f=z_1z_3^2+z_2z_3z_4+z_2^3.$$
\noindent Hence the rational first integral of the foliation is $\frac{z_1z_3^2+z_2z_3z_4+z_2^3}{z_3^3}$, and this is the $\mathcal{L}$-foliation given in Corollary \ref{L-foliation}. Since it is an $\mathcal{L}$-foliation, using Proposition 3.7 in \cite{Deserti-Cerveau} we have that it is not a linear pull-back of a foliation on $\mathbb{CP}^2$.
\item By parts 1 and 2, if $a_2 \neq 0$ or $a_4 \neq 0$ the foliation given by the 1-form $\omega_f$ is not a linear pull-back. If $a_2=a_4=0$ then the rational first integral of the foliation is:
$$\frac{(z_1+z_4)z_3^2+z_2^3}{z_3^3},$$
\noindent and with a linear change of coordinates we obtain that the rational first integral is equivalent to
$$\frac{z_1z_3^2+z_2^3}{z_3^3},$$
\noindent and the foliation is given by $\nu_1=z_3^3dz_1+3z_2^2z_3dz_2-(z_1z_3^2+3z_2^3)dz_3$; this is the linear pull-back of the unique foliation on $\mathbb{CP}^2$ of degree $2$ with a unique singular point and with a rational first integral. This means that this foliation is the only one in the intersection of the components $\overline{R(1,3)}$ and $S(2,3)$ with a line as singular set.
\end{enumerate}
\end{proof}
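The distinction between the three cubics $f$ appearing in the theorem can also be checked mechanically. The following sympy sketch (ours) examines their partial derivatives along the locus $z_2=z_3=0$: for the first cubic $z_4$ must also vanish, leaving only the point $(1:0:0:0)$, while the other two are singular along the whole line $\mathbb{V}(z_2,z_3)$:

```python
# Singular behavior of the three cubics along the line V(z2, z3).
import sympy as sp

z1, z2, z3, z4 = sp.symbols('z1 z2 z3 z4')
cubics = [z1*z3**2 + z3*z4**2 + z2**3,
          z1*z3**2 + z2*z3*z4 + z2**3,
          z1*z3**2 + z2**3]

for f in cubics:
    grads = [sp.diff(f, v) for v in (z1, z2, z3, z4)]
    on_line = [g.subs({z2: 0, z3: 0}) for g in grads]
    if f == cubics[0]:
        assert on_line == [0, 0, z4**2, 0]   # z4 must also vanish: a point
    else:
        assert on_line == [0, 0, 0, 0]       # the whole line is singular
```
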
From the proof of the previous theorem we can conclude the following.
\begin{cor} There exists, up to linear change of coordinates, only one codimension 1 foliation on $\mathbb{CP}^3$ of degree 2 with a line as singular set which is neither an $\mathcal{L}$-foliation nor a linear pull-back of a foliation
on $\mathbb{CP}^2$.
\\
This foliation has the rational first integral $\frac{z_1z_3^2+z_3z_4^2+z_2^3}{z_3^3}$.
\end{cor}
We finish with the following examples of foliations in $\mathcal{F}(s,3)$, where $s \geq 3$, that have a line as singular set.
\begin{ex} In order to give a family of foliations on $\mathbb{CP}^3$ of codimension 1 of arbitrary degree $s$ that have a line as singular set and a rational first integral of degree $s+1$, we generalize the construction used in the theorem above.
\\
Let $s \in \mathbb{N}$, $P(z_2,z_3)=\sum_{i=0}^s a_i z_2^i z_3^{s-i} \in \mathbb{C}[z_2,z_3]$ with $a_s \neq 0$, and $a \in \mathbb{C}$. We consider:
$$\frac{z_1z_3^{s}+a z_3z_4^s+Q(z_2,z_3)}{z_3^{s+1}},$$
\noindent where $Q(z_2,z_3)=\sum_{i=0}^s \frac{a_i}{i+1}z_2^{i+1}z_3^{s-i}$. Then the 1-form
$$\omega_{a}= z_3^{s+1}dz_1 +z_3P(z_2,z_3) dz_2-(z_1z_3^s+z_2P(z_2,z_3)+sa z_3z_4^s)dz_3+s a z_3^{2} z_4^{s-1} dz_4,$$
\noindent defines a foliation on $\mathbb{CP}^3$ of degree $s$ and its singular set is the line $\mathbb{V}(z_2,z_3)$. This foliation is in the rational component $\overline{R(1,s+1)}$ of $\mathcal{F}(s,3)$. If $a=0$ then $\omega_0$ is the pull-back of the foliation:
$$\omega_{0,\mathbb{CP}^2}=z_3^{s+1}dz_1 +z_3P(z_2,z_3) dz_2-(z_1z_3^s+z_2P(z_2,z_3))dz_3$$
\noindent on $\mathbb{CP}^2$ with a unique singularity and with the rational first integral:
$$\frac{z_1z_3^s+Q(z_2,z_3)}{ z_3^{s+1}}.$$
\end{ex}
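For a concrete instance of this construction (our illustration, with $s=2$ and symbolic coefficients $a_0,a_1,a_2,a$), one can clear denominators in the differential of the first integral and check that the resulting 1-form has $z_3P\,dz_2$ as its $dz_2$-component and vanishes along the line $\mathbb{V}(z_2,z_3)$:

```python
# Clearing denominators in d(N / z3**(s+1)) for s = 2.
import sympy as sp

z1, z2, z3, z4, a, a0, a1, a2 = sp.symbols('z1 z2 z3 z4 a a0 a1 a2')
s = 2
P = a0*z3**2 + a1*z2*z3 + a2*z2**2
Q = a0*z2*z3**2 + sp.Rational(1, 2)*a1*z2**2*z3 + sp.Rational(1, 3)*a2*z2**3
N = z1*z3**s + a*z3*z4**s + Q            # numerator of the first integral

# omega = z3*dN - (s+1)*N*dz3, the cleared differential of N / z3**(s+1)
dN = [sp.diff(N, v) for v in (z1, z2, z3, z4)]
omega = [sp.expand(z3*c) for c in dN]
omega[2] = sp.expand(omega[2] - (s + 1)*N)

# the dz2 coefficient is z3*P, and the singular set contains V(z2, z3)
assert sp.expand(omega[1] - z3*P) == 0
assert all(c.subs({z2: 0, z3: 0}) == 0 for c in omega)
```
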
With the following example we show that for degree greater than $2$ there exist logarithmic foliations with a line as singular set. In some cases they have a minimal rational first integral of unbounded degree.
\begin{ex} Let $s_1, s_2, s_3 \in \mathbb{N}$, $a \in \mathbb{C}$ and
\begin{align*}
f_a(z_1,z_2,z_3,z_4)&=z_2^{s_2}z_3^{s_3}(z_1z_2^{s_1-1}+az_4^{s_1})+z_2^{s_1+s_2+s_3}+z_3^{s_1+s_2+s_3}.\\
\end{align*}
\noindent Take $\lambda_1, \lambda_2, \lambda_3 \in \mathbb{C}^*$ such that $\lambda_1(s_1+s_2+s_3)+\lambda_2+\lambda_3=0$, then the 1-form
\begin{align*}
\omega_a=f_az_2z_3 \Big(&\lambda_1 \frac{df_a}{f_a}+ \lambda_2 \frac{dz_2}{z_2}+ \lambda_3 \frac{dz_3}{z_3}\Big),
\end{align*}
\noindent defines a logarithmic foliation on $\mathbb{CP}^3$ of degree $s_1+s_2+s_3$, its singular set is the line $\mathbb{V}(z_2,z_3)$. These foliations are in the irreducible logarithmic component $\overline{L(1,1,s_1+s_2+s_3)}$ of $\mathcal{F}(s_1+s_2+s_3,3)$. If $ \lambda_2, \lambda_3 \in \mathbb{N}$, then
the foliation has the minimal rational first integral:
$$\frac{f_a^{-\lambda_1}}{z_2^{\lambda_2} z_3^{\lambda_3}},$$
\noindent which has degree $\lambda_2+\lambda_3$; this means that the degree of this minimal rational first integral is not bounded. If $a=0$ the foliation is a linear pull-back of a foliation on $\mathbb{CP}^2$ of degree $s_1+s_2+s_3$.
\end{ex}
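As a minimal check of this example (ours), take $s_1=s_2=s_3=1$, $a=1$ and $(\lambda_1,\lambda_2,\lambda_3)=(-1,1,2)$, which satisfies $\lambda_1(s_1+s_2+s_3)+\lambda_2+\lambda_3=0$; the resulting 1-form is integrable, i.e., $\omega_a\wedge d\omega_a=0$:

```python
# Integrability of the logarithmic 1-form for s1 = s2 = s3 = 1, a = 1.
import sympy as sp
from itertools import combinations

z = sp.symbols('z1 z2 z3 z4')
z1, z2, z3, z4 = z
f = z2*z3*(z1 + z4) + z2**3 + z3**3
l1, l2, l3 = -1, 1, 2                      # lambda_1*3 + lambda_2 + lambda_3 = 0

# omega = f*z2*z3*(l1 df/f + l2 dz2/z2 + l3 dz3/z3), cleared of denominators
df = [sp.diff(f, v) for v in z]
omega = [sp.expand(l1*z2*z3*c) for c in df]
omega[1] += l2*f*z3
omega[2] += l3*f*z2

# exterior derivative: (d omega)_{ij} = d_i omega_j - d_j omega_i
dw = {(i, j): sp.diff(omega[j], z[i]) - sp.diff(omega[i], z[j])
      for i, j in combinations(range(4), 2)}

# (omega ^ d omega)_{ijk} = w_i dw_{jk} - w_j dw_{ik} + w_k dw_{ij} = 0
for i, j, k in combinations(range(4), 3):
    term = omega[i]*dw[(j, k)] - omega[j]*dw[(i, k)] + omega[k]*dw[(i, j)]
    assert sp.expand(term) == 0
```
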
\end{document} |
\begin{document}
\global\long\def\norm#1{\left\Vert #1\right\Vert }
\global\long\def\abs#1{\left\vert #1\right\vert }
\title{Strong Self-Concordance and Sampling}
\author{Aditi Laddha\thanks{Georgia Tech, aladdha6@gatech.edu}, Yin Tat Lee\thanks{University of Washington and Microsoft Research, yintat@uw.edu},
Santosh S. Vempala\thanks{Georgia Tech, vempala@gatech.edu}}
\maketitle
\begin{abstract}
Motivated by the Dikin walk, we develop aspects of an interior-point
theory for sampling in high dimension. Specifically, we introduce a symmetric parameter
and the notion of strong self-concordance. These properties imply that the corresponding
Dikin walk mixes in $\tilde{O}(n\bar{\nu})$ steps from a warm start
in a convex body in $\mathbb{R}^{n}$ using a strongly self-concordant barrier
with symmetric self-concordance parameter $\bar{\nu}$. For many natural
barriers, $\bar{\nu}$ is roughly bounded by $\nu$, the standard
self-concordance parameter. We show that this property and strong
self-concordance hold for the Lee-Sidford barrier. As a consequence,
we obtain the first walk to mix in $\tilde{O}(n^{2})$ steps for an
arbitrary polytope in $\mathbb{R}^{n}$. Strong self-concordance for other
barriers leads to an interesting (and unexpected) connection ---
for the universal and entropic barriers, it is implied by the KLS
conjecture.
\end{abstract}
\section{Introduction}
The interior-point method is one of the major successes of optimization,
in theory and practice \citep{karmarkar1984new,renegar1988polynomial,vaidya1989speeding}.
It has led to the currently asymptotically fastest algorithms for solving
linear and semidefinite programs and is a popular method for the
accurate solution of medium to large-sized instances. The results of Nesterov
and Nemirovski \citep{nesterov1994interior} demonstrate that $\nu=O(n)$
is possible for any convex set using their universal barrier, where $\nu$ is the self-concordance parameter of the barrier. For
linear programming with feasible region $\left\{ x\,:\,Ax\ge b\right\} $,
the simple logarithmic barrier $g(x)=-\sum_{i}\ln((Ax-b)_{i})$ has
$\nu=O(m)$ for an $m\times n$ constraint matrix $A$, and is efficiently
computable (the universal barrier is polytime to estimate, but requires
the computation of volume of a convex body). In progress over the
past decade, Lee and Sidford \citep{lee2014path,lee2015efficient,lee2019solving}
introduced a barrier for linear programming that achieves $\nu=O(n\log^{O(1)}(m))$
while being efficiently computable. The interior-point method has
also directly influenced the design of combinatorial algorithms, leading
to faster methods for maxflow/mincut and other optimization problems
\citep{madry2010fast,christiano2011electrical,sherman2013nearly,madry2013navigating,kelner2014almost,peng2016approximate,sherman2017area}.
Sampling convex bodies is a fundamental problem that has close connections
to convex optimization. Indeed, convex optimization can be reduced
to sampling \citep{bertsimas2004solving}. The most general methods that lead
to polynomial-time sampling algorithms are the ball walk and hit-and-run,
both requiring only membership oracle access to the convex set being sampled. These
methods are not affine-invariant, i.e., their complexity depends on
the affine position of the convex set. A tight bound on their complexity
is $O^{*}\left(n^{2}R^{2}/r^{2}\right)$ where the convex body contains
a ball of radius $r$ and is mostly contained in a ball of radius
$R$ \citep{kannan1997random,lovasz2007geometry,lovasz2006hit,lovasz2006fast}. The ratio $R/r$ can be made $O(\sqrt{n})$
for any convex body by a suitable affine transformation.
This effectively makes the complexity $O^{*}(n^{3})$. However, the rounding (e.g.,
by near-isotropic transformation) is an expensive step, and its current
best complexity is $O^{*}(n^{4})$ \citep{lovasz2006simulated}. Even for polytopes,
the rounding/isotropic step takes $O(mn^{4.5})$ total time for a
polytope with $m$ inequalities using an improved amortized analysis
of the per-step complexity \citep{mangoubi2019faster}.
Interior-point theory offers an alternative sampling method with no
need for rounding. A convex barrier function, via its Hessian, naturally
defines an ellipsoid centered at each interior point of a convex body,
the \emph{Dikin }ellipsoid, which is always contained in the body.
The Dikin walk, at each step, picks a uniformly random point in the
Dikin ellipsoid around the current point. To ensure a uniform stationary
density, the new point is accepted with a probability that depends on
the ratio of the volumes of the Dikin ellipsoids at the two points,
see Algorithm \ref{algo:Dikin} below. Kannan and Narayanan \citep{kannan2012random}
showed that the mixing rate of this walk with the standard logarithmic
barrier is $O(mn)$ for a polytope in $\mathbb{R}^{n}$ defined using $m$
inequalities. Each step of the walk involves computing the determinant and can be done in time $O(mn^{\omega-1})$, leading to an overall
arithmetic complexity of $O(m^{2}n^{\omega})$ (see also \citep{sachdeva2016mixing}
for a shorter proof of a Gaussian variant). Using a different more continuous
approach, where each step is the solution of an ODE (rather than a
straight-line step), Lee and Vempala \citep{lee2018convergence} showed
that the Riemannian Hamiltonian Monte Carlo improves the mixing rate
for polytopes to $O(mn^{2/3})$ while keeping the same per-step complexity.
This leads to the following basic questions:
\begin{itemize}
\item What is the fastest possible mixing rate of a Dikin walk?
\item Is a mixing rate of $O(n)$ possible while keeping each step efficient (say
matrix multiplication time or less)?
\end{itemize}
These are the natural analogs of the progress in optimization, where,
for the first, Nesterov and Nemirovski showed a convergence rate to
the optimum of $O(\sqrt{n})$, and, for the second, Lee and Sidford
showed $\tilde{O}(\sqrt{n})$ for linear programming while maintaining
per-step efficiency.
These questions, in the context of sampling, lead to new challenges.
Whereas for optimization, one step can be viewed as moving to the
optimum of the objective in the current Dikin ellipsoid (a Newton
step), for sampling, the next step is a random point in the Dikin
ellipsoid; and since these ellipsoids have widely varying volumes,
maintaining the correct stationary distribution takes some work. In
particular, one needs to show that with large probability, the Dikin
ellipsoids at the current point and proposed next point have volumes
within a constant factor; this would imply that a standard Metropolis
filter succeeds with large probability and there is no ``local''
conductance bottleneck. For global convergence, the two important
ingredients are showing that one-step distributions from nearby points
have a large overlap and a suitable isoperimetric inequality. Both
parts depart significantly from the Euclidean set-up as the notion
of distance is defined by local Dikin ellipsoids.
To address these challenges, in place of the
self-concordance parameter $\nu$, we have a symmetric self-concordance
parameter $\bar{\nu}$. It is the smallest number such that for any
point $u$ in a convex body $K$, with Dikin ellipsoid $E_{u}$, we have $E_{u}\subseteq K\cap(2u-K)\subseteq\sqrt{\bar{\nu}}E_{u}$.
In general $\bar{\nu}$ can be as high as $\nu^{2}$ but for some
important barriers, it is bounded as $O(\nu)$. This includes the
logarithmic barrier, and, as we show, the Lee-Sidford barrier. This
definition and parameter allows us to show that the isoperimetric
(Cheeger) constant for the Dikin distance is asymptotically at least
$1/\sqrt{\bar{\nu}}$.
We need a further, important refinement. The notion of self-concordance
itself bounds the rate of change of the Hessian of the barrier (i.e.,
the Dikin matrix) with respect to the local metric in the spectral
norm, i.e., the maximum change in any direction. We define \emph{strong
}self-concordance as the requirement that this derivative be bounded
in \emph{Frobenius }norm. Again, the logarithmic barrier satisfies
this property, and we show that the Lee-Sidford barrier does as well.
Our main general result then is that the Dikin walk defined using
any symmetric, strongly self-concordant barrier with convex Hessian
mixes in $O(n\bar{\nu})$ steps. We prove that the LS barrier satisfies
all these conditions with $\bar{\nu}=\tilde{O}(n)$ and so has a mixing
rate of $\tilde{O}(n^{2})$ for polytopes, completely answering the
second question, and improving on several existing bounds in \citep{chen2018fast,gustafson2018john}.
We also show that the Dikin walk with the standard logarithmic barrier
can be implemented in time $O(nnz(A)+n^{2})$ where $nnz(A)$ is the
number of nonzero entries in the constraint matrix $A$. This answers
the open question posed in \citep{lee2015efficient,kannan2012random}.
These results along with earlier work on sampling polytopes are collected
in Table \ref{tab:sampling}. We note that while for the Dikin walk
with a logarithmic barrier, there are simple examples showing that
the mixing rate of $O(mn)$ is tight (take a hypercube and duplicate
one of its facets $m-n$ times), for the Dikin walk with the LS barrier,
we are not aware of a tight example or one with mixing rate greater
than $\tilde{O}(n)$. There is the tantalizing possibility
that it mixes in nearly linear time. Thus, the overall arithmetic complexity for sampling a polytope is reduced to $m\cdot\min\left\{ \textrm{nnz}(A)\cdot n+n^{3}, n^{\omega+1}\right\}$, which improves the state of the art for all ranges of $m$.
\begin{table}
\centering
\caption{\label{tab:sampling}The complexity of uniform polytope sampling from
a warm start.}
\begin{tabular}{ccl}
\toprule
Markov Chain & Mixing Rate & Per step cost\\
\midrule
Ball Walk{\footnotemark}\citep{kannan1997random} & $n^{2}R^{2}/r^{2}$ & $mn$\\
Hit-and-Run\footnotemark[1]\citep{lovasz2006hit} & $n^{2}R^{2}/r^{2}$ & $mn$\\
Dikin \citep{kannan2012random} & $mn$ & $mn^{\omega-1}$\\
RHMC \citep{lee2018convergence} & $mn^{\frac{2}{3}}$ & $mn^{\omega-1}$\\
Geodesic Walk \citep{lee2017geodesic} & $mn^{\frac{3}{4}}$ & $mn^{\omega-1}$\\
John's Walk \citep{gustafson2018john} & $n^{7}$ & $mn^{4}+n^{8}$\\
Vaidya Walk \citep{chen2018fast} & $m^{\frac{1}{2}}n^{\frac{3}{2}}$ & $mn^{\omega-1}$\\
Approximate John Walk \citep{chen2018fast} & $n^{2.5}$ & $mn^{\omega-1}$\\
\textbf{\textcolor{red}{Dikin (this paper)}} & $mn$ & \textbf{\textcolor{red}{$\mathrm{nnz}(A)+n^{2}$}}\\
\textbf{\textcolor{red}{Weighted Dikin (this paper)}} & \textbf{\textcolor{red}{$n^{2}$}} & \textcolor{red}{$mn^{\omega-1}$}\\
\bottomrule
\end{tabular}
\end{table}
\footnotetext[1]{These entries are for general convex bodies
presented by oracles, with $R/r$ measuring the \emph{roundness} of
the input body; this can be made $O(\sqrt{n})$ with a rounding procedure
that takes $n^{4}$ steps (membership queries). \emph{After} rounding,
the amortized per-step complexity of the ball walk in a polytope is
$\tilde{O}(m)$.}
We also study the notions of symmetry and strong self-concorda\-nce introduced in this paper for three well-studied barriers, namely,
the classical universal barrier \citep{nesterov1994interior}, the
entropic barrier \citep{bubeck2014entropic} and the canonical barrier
\citep{hildebrand2014canonical}. While these barriers are not particularly
efficient to evaluate, they are interesting because they all achieve
the best (or nearly best) possible self-concordance parameter values
for arbitrary convex sets and convex cones (for the canonical barrier),
and have played an important role in shaping the theory of interior-point
methods for optimization. For the canonical barrier, the work of Hildebrand
already establishes the convexity of the log determinant function (by
definition of the barrier), and strong self-concordance \citep{hildebrand2014canonical}.
For the entropic and universal barriers, we present an unexpected
connection: the strong self-concordance is implied by the KLS isoperimetry
conjecture! This suggests the possibility of more fruitful connections
yet to be discovered using the notion of strong self-concordance.
\subsection{Dikin Walk}
The general Dikin walk is defined as follows. For a convex set $K$
with a positive definite matrix $\mathbf{H}(u)$ for each point $u\in K$,
let
\[
E_{u}(r)=\left\{ x:\,(x-u)^{\top}\mathbf{H}(u)(x-u)\le r^{2}\right\} .
\]
\begin{algorithm}
\SetAlgoLined
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{starting point $x_{0}$ in a polytope $P=\left\{ x:\,\mathbf{A}x\ge b\right\} $}
\Output{$x_T$}
Set $r=\frac{1}{512}$\;
\For{$t\leftarrow 1$ \KwTo $T$}{
$x_t \leftarrow x_{t-1}$\;
Pick $y$ uniformly from $E_{x_t}(r)$\;
$x_t \leftarrow y$ with probability $\min\left\{ 1,\frac{\mathrm{vol}(E_{x_t}(r))}{\mathrm{vol}(E_{y}(r))}\right\} $\;
}
\caption{$\mathtt{DikinWalk}$}
\label{algo:Dikin}
\end{algorithm}
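For concreteness, Algorithm \ref{algo:Dikin} instantiated with the log-barrier Hessian $\mathbf{H}(x)=\mathbf{A}^{\top}\mathbf{S}_{x}^{-2}\mathbf{A}$ can be sketched as follows; this is an illustrative implementation of ours using dense linear algebra, not the fast per-step methods discussed later.

```python
import numpy as np

def hessian(A, b, x):
    """Log-barrier Hessian H(x) = A^T S_x^{-2} A with S_x = Diag(Ax - b)."""
    s = A @ x - b
    return A.T @ ((1.0 / s**2)[:, None] * A)

def dikin_walk(A, b, x0, T, r=1/512, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(x0, dtype=float)
    n = x.size
    for _ in range(T):
        H = hessian(A, b, x)
        L = np.linalg.cholesky(H)
        # Uniform point in the Dikin ellipsoid E_x(r): map a uniform
        # ball point u through L^{-T}, since (L^{-T}u)^T H (L^{-T}u) = u^T u.
        u = rng.standard_normal(n)
        u *= r * rng.random() ** (1.0 / n) / np.linalg.norm(u)
        y = x + np.linalg.solve(L.T, u)
        # Metropolis filter: vol(E_x)/vol(E_y) = sqrt(det H(y) / det H(x)).
        log_ratio = 0.5 * (np.linalg.slogdet(hessian(A, b, y))[1]
                           - np.linalg.slogdet(H)[1])
        if np.all(A @ y - b > 0) and np.log(rng.random()) < min(0.0, log_ratio):
            x = y
    return x
```

The feasibility check is redundant in exact arithmetic (the Dikin ellipsoid is contained in $P$) and serves only as a numerical safeguard.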
\subsection{Strong Self-Concordance}
We require a family of matrices to have the following properties.
Usually but not necessarily, these matrices come from the Hessian
of some convex function.
\begin{definition}[Self-concordance]
For any convex set $K\subseteq\mathbb{R}^{n}$, we call a matrix function $\mathbf{H}:K\rightarrow\mathbb{R}^{n\times n}$ self-concordant if for any $x\in K$ and any direction $h$, we have
\[
-2\Vert h\Vert_{\mathbf{H}(x)}\mathbf{H}(x)\preceq\frac{d}{dt}\mathbf{H}(x+th)\Big|_{t=0}\preceq2\Vert h\Vert_{\mathbf{H}(x)}\mathbf{H}(x).
\]
\end{definition}
\begin{definition}[$\bar{\nu}$-Symmetry]
For any convex set $K\subseteq\mathbb{R}^{n}$, we call a matrix function $\mathbf{H}:K\rightarrow\mathbb{R}^{n\times n}$ $\bar{\nu}$-symmetric if for any $x\in K$, we have
\[
E_{x}(1)\subseteq K\cap(2x-K)\subseteq E_{x}(\sqrt{\bar{\nu}}).
\]
\end{definition}
The following lemma shows that self-concordant matrix functions enjoy regularity similar to that of the usual self-concordant functions.
\begin{lemma}
\label{lem:global_self_concordant}Given any self-concordant matrix
function $\mathbf{H}$ on $K\subseteq\mathbb{R}^{n}$, we define $\|v\|_{x}^{2}=v^{\top}\mathbf{H}(x)v$.
Then, for any $x,y\in K$ with $\Vert x-y\Vert_{x}<1$, we have
\[
\left(1-\Vert x-y\Vert_{x}\right)^{2}\mathbf{H}(x)\preceq\mathbf{H}(y)\preceq\frac{1}{\left(1-\Vert x-y\Vert_{x}\right)^{2}}\mathbf{H}(x).
\]
\end{lemma}
The proof is given in \ref{proof:2}. Many natural barriers, including the logarithmic barrier and the LS barrier,
satisfy a much stronger condition than self-concordance, which we define next.
\begin{definition}[Strong Self-Concordance]
For any convex set $K\subseteq\mathbb{R}^{n}$, we say a matrix function $\mathbf{H}:K\rightarrow\mathbb{R}^{n\times n}$
is strongly self-concordant if for any $x\in K$, we have
\[
\norm{\mathbf{H}(x)^{-1/2}D\mathbf{H}(x)[h]\mathbf{H}(x)^{-1/2}}_{F}\le2\norm{h}_{x}
\]
where $D\mathbf{H}(x)[h]$ is the directional derivative of $\mathbf{H}$
at $x$ in the direction $h$.
\begin{figure}
\caption{Strong self-concordance measures the rate of change of the Hessian of a barrier in the Frobenius norm.}
\end{figure}
\end{definition}
Similar to Lemma \ref{lem:global_self_concordant}, we have a global
version of strong self-concordance.
\begin{lemma}
\label{lem:global_strongly_self_concordant}Let $\mathbf{H}$ be a strongly self-concordant
matrix function on $K\subset\mathbb{R}^{n}$. Then, for any $x,y\in K$ with
$\Vert x-y\Vert_{x}<1$, we have
\[
\|\mathbf{H}(x)^{-\frac{1}{2}}(\mathbf{H}(y)-\mathbf{H}(x))\mathbf{H}(x)^{-\frac{1}{2}}\|_{F}\leq\frac{\|x-y\|_{x}}{(1-\|x-y\|_{x})^{2}}.
\]
\end{lemma}
The proof is given in \ref{proof:4}. We note that strong self-concordance is stronger than self-concordance,
since the Frobenius norm is always at least the spectral
norm. As an example, we verify that these conditions hold for the
standard log barrier (Lemma \ref{lem:dssc}).
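As a numerical sanity check on the log-barrier case: with $\mathbf{H}(x)=\mathbf{A}^{\top}\mathbf{S}^{-2}\mathbf{A}$, the directional derivative is $D\mathbf{H}(x)[h]=-2\,\mathbf{A}^{\top}\mathbf{S}^{-3}\mathbf{Diag}(\mathbf{A}h)\mathbf{A}$, and the ratio below should never exceed $2$. The code is our own illustration, not part of the formal development.

```python
import numpy as np

def ssc_ratio(A, b, x, h):
    """||H^{-1/2} DH(x)[h] H^{-1/2}||_F / ||h||_x for the log barrier."""
    s = A @ x - b
    H = A.T @ ((1.0 / s**2)[:, None] * A)
    DH = A.T @ ((-2.0 * (A @ h) / s**3)[:, None] * A)
    L = np.linalg.cholesky(H)
    Li = np.linalg.inv(L)
    # L^{-1} DH L^{-T} equals Q^T (H^{-1/2} DH H^{-1/2}) Q for the
    # orthogonal matrix Q = H^{-1/2} L, so the Frobenius norms agree.
    M = Li @ DH @ Li.T
    return np.linalg.norm(M, 'fro') / np.sqrt(h @ H @ h)
```

Sampling random directions at a strictly feasible point confirms the factor-$2$ bound empirically.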
\subsection{Results}
Our first theorem is the following.
\begin{restatable}{theorem}{dikingeneral}\label{lem:dikingeneral}The
mixing rate of the Dikin walk for a $\bar{\nu}$-symmetric, strongly self-concordant
matrix function with convex log determinant is $O(n\bar{\nu})$.\end{restatable}
This implies faster mixing and sampling for polytopes using the LS
barrier (see Section \ref{subsec:LS-Barrier} for the definition).
\begin{theorem}
\label{thm:mixing-ls}The mixing rate of the Dikin walk based on the
LS barrier for any polytope in $\mathbb{R}^{n}$ is $\tilde{O}(n^{2})$ and
each step can be implemented in $\tilde{O}(mn^{\omega-1})$\footnote{We use $\tilde{O}$ to hide factors polylogarithmic in $n,m$.}
arithmetic operations.
\end{theorem}
On a related note, we show that each step of the standard Dikin walk
is fast, and does not need matrix multiplication.
\begin{theorem}
\label{thm:dikin}The Dikin walk with the logarithmic barrier for
a polytope $\{\mathbf{A}x\ge b\}$ can be implemented in time $O(\mathrm{nnz}(\mathbf{A})+n^{2})$
per step while maintaining the mixing rate of $O(mn)$. See \ref{section4}.
\end{theorem}
The next lemma results from studying strong self-concordance for classical
barriers. The KLS constant below is conjectured to be $O(1)$ and
known to be $O(n^{\frac{1}{4}})$ \citep{lee2017eldan}.
\begin{lemma}
\label{lem:SSC-KLS}Let $\psi_{n}$ be the KLS constant of isotropic
logconcave densities in $\mathbb{R}^{n}$; namely, for any isotropic logconcave
density $p$ and any set $S\subset\mathbb{R}^{n}$, we have
\[
\int_{\partial S}p(x)dx\geq\frac{1}{\psi_{n}}\min\left\{ \int_{S}p(x)dx,\int_{\mathbb{R}^{n}\backslash S}p(x)dx\right\} .
\]
Let $\mathbf{H}(x)$ be the Hessian of the universal or the entropic barrier.
Then, we have
\[
\norm{\mathbf{H}(x)^{-1/2}D\mathbf{H}(x)[h]\mathbf{H}(x)^{-1/2}}_{F}=O(\psi_{n})\norm{h}_{x}.
\]
In short, the universal and entropic barriers in $\mathbb{R}^{n}$ are strongly
self-concordant up to a scaling factor depending on $\psi_{n}$.
\end{lemma}
In fact, our proof (see Section \ref{section5}) shows that, up to a logarithmic
factor, the strong self-concordance of these barriers is \emph{equivalent}
to the KLS conjecture.
\section{Mixing with Strong Self-Concordance}
A key ingredient of the proof of Theorem \ref{lem:dikingeneral} is
the following lemma.
\begin{lemma}
\label{lem:dikin-one-step}For two points $x,y\in P$, with $\Vert x-y\Vert_{x}\le\frac{1}{512\sqrt{n}}$,
we have $d_{TV}(P_{x},P_{y})\leq\frac{3}{4}$.
\end{lemma}
\begin{proof} Let $\mathcal{E}(x,\mathbf{A})$ denote the uniform distribution
over the ellipsoid of radius $r=\frac{1}{512}$ centered at $x$ and defined by the matrix $\mathbf{A}$. Then,
\begin{align}
d_{\mathrm{TV}}(P_{x},P_{y}) & \leq\frac{1}{2}\text{rej}_{x}+\frac{1}{2}\text{rej}_{y}+d_{\textrm{TV}}(\mathcal{E}(x,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(y)))\label{eq:def}
\end{align}
where $\text{rej}_{x}$ and $\text{rej}_{y}$ are the rejection probabilities
at $x$ and $y$.
We break the proof into two parts. First, we bound the rejection probability at $x$. Suppose the algorithm picks a point
$z$ from $E_{x}(r)$. Let $f(z)=\ln\det\mathbf{H}(z)$. The acceptance
probability of the sample $z$ is
\begin{equation}
\min\left\{ 1,\frac{\mathrm{vol}(E_{x}(r))}{\mathrm{vol}(E_{z}(r))}\right\} =\min\left\{ 1,\sqrt{\frac{\det(\mathbf{H}(z))}{\det(\mathbf{H}(x))}}\right\} .\label{eq:reject_prob_z}
\end{equation}
By our assumption $f$ is a convex function, and hence
\begin{equation}
\ln\frac{\det(\mathbf{H}(z))}{\det(\mathbf{H}(x))}=f(z)-f(x)\geq\langle\nabla f(x),z-x\rangle.\label{eq:log_det_Hzx}
\end{equation}
\begin{equation}
\langle\nabla f(x),z-x\rangle = \langle\mathbf{H}(x)^{-\frac{1}{2}}\nabla f(x),\mathbf{H}(x)^{\frac{1}{2}}(z-x)\rangle
\end{equation}
where $z' = \mathbf{H}(x)^{\frac{1}{2}}z$ is uniformly distributed in a ball of radius $r$ centered at $x' = \mathbf{H}(x)^{\frac{1}{2}}x$, and hence, for any fixed vector $v$,
\[
\Pr(v^{\top}(z'-x')\geq-\epsilon r\|v\|_{2})\geq1-e^{-n\epsilon^{2}/2}.
\]
In particular, with probability at least $0.99$ in $z$, we have
\begin{equation}
\langle\nabla f(x),z-x\rangle\geq-\frac{4r}{\sqrt{n}}\|\mathbf{H}(x)^{-\frac{1}{2}}\nabla f(x)\|_{2}.\label{eq:grad_prob}
\end{equation}
To bound $\|\mathbf{H}(x)^{-\frac{1}{2}} \nabla f(x)\|_{2}$, it is easier to compute
directional derivatives of $f$. Note that
\begin{align}
\|\mathbf{H}(x)^{-\frac{1}{2}}\nabla f(x)\|_{2} & = \max_{\|v\|_{2}=1}\left(\mathbf{H}(x)^{-\frac{1}{2}}\nabla f(x)\right)^{\top}v \nonumber\\
& =\max_{\|v\|_{2}=1}\mathrm{Tr}(\mathbf{H}(x)^{-1}D\mathbf{H}(x)[\mathbf{H}(x)^{-\frac{1}{2}}v]) \nonumber\\
& =\max_{\|u\|_{x}=1}\mathrm{Tr}\left(\mathbf{H}(x)^{-\frac{1}{2}}D\mathbf{H}(x)[u]\mathbf{H}(x)^{-\frac{1}{2}}\right) \nonumber\\
& \leq\max_{\|u\|_{x}=1}\sqrt{n}\|\mathbf{H}(x)^{-\frac{1}{2}}D\mathbf{H}(x)[u]\mathbf{H}(x)^{-\frac{1}{2}}\|_{F} \nonumber\\
& \leq\max_{\|u\|_{x}=1}2\sqrt{n}\|u\|_{x} \leq2\sqrt{n}\label{eq:grad_log_H}
\end{align}
where the first inequality follows from $\left|\sum_{i=1}^{n}\lambda_{i}\right|\leq\sqrt{n}\sqrt{\sum_{i=1}^{n}\lambda_{i}^{2}}$
and the second inequality follows from the definition of strong self-concordance.
Combining \eqref{eq:reject_prob_z}, \eqref{eq:log_det_Hzx}, \eqref{eq:grad_prob}
and \eqref{eq:grad_log_H}, we see that with probability at least
$0.99$ in $z$, the acceptance probability of the sample $z$ is
\begin{equation}
\min\left\{ 1,\frac{\mathrm{vol}(E_{x}(r))}{\mathrm{vol}(E_{z}(r))}\right\} \geq e^{-4r}\geq0.9922\label{eq:rejection}
\end{equation}
where we used that $r=\frac{1}{512}$. Hence, the rejection
probability $\mathrm{rej}_{x}$ (and similarly $\mathrm{rej}_{y}$)
satisfies
\begin{equation}
\mathrm{rej}_{x}\leq0.0039\qquad\text{and}\qquad\mathrm{rej}_{y}\leq0.0039.\label{eq:rej_xy}
\end{equation}
To bound the last term in \eqref{eq:def}, note that $d_{\mathrm{TV}}$ satisfies the triangle inequality, so
\begin{align}
d_{\textrm{TV}}(\mathcal{E}(x,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(y)))
& \leq d_{\textrm{TV}}(\mathcal{E}(x,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(x)))\label{eq:three_terms}\\
&\phantom{{}\leq{}} +d_{\textrm{TV}}(\mathcal{E}(y,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(y)))\notag
\end{align}
By definition of $d_{\mathrm{TV}}$, writing $E_{x},E_{y}$ for the two ellipsoids in the first term,
\begin{equation}
d_{\textrm{TV}}(\mathcal{E}(x,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(x))) =\frac{1}{2}\frac{\mathrm{vol}(E_{x}\backslash E_{y})}{\mathrm{vol}(E_{x})}+\frac{1}{2}\frac{\mathrm{vol}(E_{y}\backslash E_{x})}{\mathrm{vol}(E_{y})}
\end{equation}
This quantity is a ratio of volumes and hence is invariant under the transformation $z \rightarrow \mathbf{H}(x)^{1/2}z$, after which it becomes the total variation distance between two balls of radius $r$ whose centers are at distance at most $\frac{r}{\sqrt{n}}$. To bound this, we use Lemma 3.2 from \citep{kannan1997random}:
\begin{equation}
d_{\textrm{TV}}(\mathcal{E}(x,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(x)))\leq\frac{e}{e+1}\label{eq:vol_distance}
\end{equation}
Now, we bound $d_{\textrm{TV}}(\mathcal{E}(y,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(y)))$.
Let $Y_{x}=\{z:(z-y)^{\top}\mathbf{H}(x)(z-y)\leq r^{2}\}$ and $Y_{y}=\{z:(z-y)^{\top}\mathbf{H}(y)(z-y)\leq r^{2}\}$.
Then,
\begin{align}
d_{\textrm{TV}}(\mathcal{E}(y,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(y))) & =\frac{1}{2}\frac{\mathrm{vol}(Y_{x}\backslash Y_{y})}{\mathrm{vol}(Y_{x})}+\frac{1}{2}\frac{\mathrm{vol}(Y_{y}\backslash Y_{x})}{\mathrm{vol}(Y_{y})}\nonumber\\
& =1-\frac{1}{2}\frac{\mathrm{vol}(Y_{x}\cap Y_{y})}{\mathrm{vol}(Y_{x})}-\frac{1}{2}\frac{\mathrm{vol}(Y_{x}\cap Y_{y})}{\mathrm{vol}(Y_{y})}\label{eq:dikin_formula}
\end{align}
We bound the total variation distance by bounding the fraction of
volume in the intersection of these two concentric ellipsoids. Again, we
can assume that $\mathbf{H}(y)=\mathbf{I}$ and that $y=0$. Then, strong self-concordance
and Lemma \ref{lem:global_strongly_self_concordant} show that
\begin{equation}
\|\mathbf{I}-\mathbf{H}(x)^{-1}\|_{F}\leq2\|x-y\|_{x}\leq\frac{1}{256\sqrt{n}}.\label{eq:H_minus_I_F}
\end{equation}
In particular, we have that
\begin{equation}
\frac{255}{256}\mathbf{I}\preceq\mathbf{H}(x)^{-1}\preceq\frac{257}{256}\mathbf{I}.\label{eq:H_I_op}
\end{equation}
We partition the inverse eigenvalues $\{\lambda_{i}\}_{i\in[n]}$ of $\mathbf{H}(x)$ into those
with values at least $1$ and the rest. Then consider the ellipsoid $\mathcal{I}$
whose inverse eigenvalues are $\min\left\{ 1,\lambda_{i}\right\}$ along the eigenvectors of $\mathbf{H}(x)$. This ellipsoid is contained in both $Y_{x}$ and $Y_{y}$. We will see that
$\mathrm{vol}(\mathcal{I})$ is a constant fraction of the volume of both $Y_{x}$
and $Y_{y}$. First, we compare $\mathcal{I}$ and $Y_{y}$.
\begin{equation}
\begin{split}
\frac{\mathrm{vol}(Y_x \cap Y_y)}{\mathrm{vol}(Y_{y})} &\geq \frac{\mathrm{vol}(\mathcal{I})}{\mathrm{vol}(Y_{y})}=\left(\prod_{i:\lambda_{i}<1}\lambda_{i}\right)^{1/2}\\
&=\left(\prod_{i:\lambda_{i}<1}\left(1-(1-\lambda_{i})\right)\right)^{1/2}\\
&\geq\exp\left(-\sum_{i:\lambda_{i}<1}\left(1-\lambda_{i}\right)\right)\label{eq:vol_E_x}
\end{split}
\end{equation}
where we used that $1-x\geq e^{-2x}$ for $0\leq x\leq\frac{1}{2}$
and that $\lambda_{i}\geq\frac{1}{2}$ by \eqref{eq:H_I_op}. From the inequality
\eqref{eq:H_minus_I_F}, it follows that
\[
\sqrt{\sum_{i}(\lambda_{i}-1)^{2}}\le\frac{1}{256\sqrt{n}}.
\]
Therefore, $\sum_{i:\lambda_{i}<1}|\lambda_{i}-1|\leq\frac{1}{256}$.
Putting it into \eqref{eq:vol_E_x}, we have
\begin{equation}
\frac{\mathrm{vol}(Y_{x}\cap Y_{y})}{\mathrm{vol}(Y_{y})}\geq\frac{\mathrm{vol}(\mathcal{I})}{\mathrm{vol}(Y_{y})}\geq e^{-\frac{1}{256}}.\label{eq:volPxyPx}
\end{equation}
Similarly, we have
\begin{equation}
\begin{split}
\frac{\mathrm{vol}(Y_{x}\cap Y_{y})}{\mathrm{vol}(Y_{x})}&\geq \left(\frac{\prod_{i:\lambda_{i}<1}\lambda_{i}}{\prod_{i}\lambda_{i}}\right)^{1/2}=\left(\frac{1}{\prod_{i:\lambda_{i}>1}\lambda_{i}}\right)^{1/2}\\
&\geq \left(\frac{1}{\exp(\sum_{i:\lambda_{i}>1}(\lambda_{i}-1))}\right)^{1/2}\geq e^{-\frac{1}{512}}.\label{eq:volPxyPy}
\end{split}
\end{equation}
Putting \eqref{eq:volPxyPx} and \eqref{eq:volPxyPy} into \eqref{eq:dikin_formula},
we have
\begin{equation}
d_{\textrm{TV}}(\mathcal{E}(y,\mathbf{H}(x)),\mathcal{E}(y,\mathbf{H}(y)))\leq1-\frac{e^{-\frac{1}{256}}}{2}-\frac{e^{-\frac{1}{512}}}{2}\label{eq:ellipsoid_2}
\end{equation}
Putting \eqref{eq:rej_xy}, \eqref{eq:vol_distance} and \eqref{eq:ellipsoid_2}
into \eqref{eq:def}, we have
\[
d_{\mathrm{TV}}(P_{x},P_{y})\leq\frac{0.0039}{2}+\frac{0.0039}{2}+1-\frac{e^{-\frac{1}{256}}}{2}-\frac{e^{-\frac{1}{512}}}{2}+\frac{e}{e+1}\leq\frac{3}{4}
\]
\end{proof}
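The eigenvalue estimates \eqref{eq:volPxyPx} and \eqref{eq:volPxyPy} can also be checked numerically. The sketch below (our illustration) draws inverse eigenvalues with $\sqrt{\sum_{i}(\lambda_{i}-1)^{2}}$ exactly at the bound $\frac{1}{256\sqrt{n}}$ and evaluates both volume ratios.

```python
import numpy as np

def volume_ratio_bounds(n=100, seed=0):
    """Volume ratios vol(I)/vol(Y_y) and vol(I)/vol(Y_x) for perturbed spectra."""
    rng = np.random.default_rng(seed)
    d = rng.standard_normal(n)
    d *= 1 / (256 * np.sqrt(n)) / np.linalg.norm(d)   # ||lambda - 1||_2 = 1/(256 sqrt n)
    lam = 1 + d                                       # inverse eigenvalues of H(x)
    low = np.sqrt(np.prod(np.minimum(lam, 1.0)))      # should be >= e^{-1/256}
    high = 1 / np.sqrt(np.prod(np.maximum(lam, 1.0))) # should be >= e^{-1/512}
    return low, high
```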
The next lemma establishes isoperimetry and only needs the symmetric
containment assumption. This isoperimetry is for the cross-ratio distance.
For a convex body $K$, and any two points $x,y\in K$, suppose that
$p,q$ are the endpoints of the chord through $x,y$ in $K$, so that
these points occur in the order $p,x,y,q.$ Then, the \emph{cross-ratio}
distance between $x$ and $y$ is defined as
\[
d_{K}(x,y)=\frac{\|x-y\|_{2}\|p-q\|_{2}}{\|p-x\|_{2}\|y-q\|_{2}}.
\]
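In one dimension, where $K=[p,q]$, the cross-ratio distance is given directly by this formula; a tiny worked example of ours:

```python
def cross_ratio_1d(p, x, y, q):
    """Cross-ratio distance d_K(x, y) on the interval K = [p, q], p < x < y < q."""
    return abs(x - y) * abs(p - q) / (abs(p - x) * abs(y - q))

# On K = [0, 1] with x = 0.3, y = 0.6:
# d_K = (0.3 * 1) / (0.3 * 0.4) = 2.5
```

Note that the distance blows up as either point approaches the boundary of $K$.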
This distance enjoys the following isoperimetric inequality.
\begin{theorem}[\citep{lovasz1999hit}]
\label{thm:dK-iso}For any convex body $K$, and disjoint subsets
$S_{1},S_{2}$ of it, and $S_{3}=K\setminus S_{1}\setminus S_{2}$, we
have
\[
\mathrm{vol}(S_{3})\ge d_{K}(S_{1},S_{2})\frac{\mathrm{vol}(S_{1})\mathrm{vol}(S_{2})}{\mathrm{vol}(K)}.
\]
\end{theorem}
We now relate the cross-ratio distance to the ellipsoidal norm.
\begin{lemma}
\label{lem:dikin-dist}For any $x,y\in K$, $d_{K}(x,y)\ge\frac{\Vert x-y\Vert_{x}}{\sqrt{\bar{\nu}}}.$
\end{lemma}
\begin{proof}
Consider the Dikin ellipsoid at $x$. For the chord $[p,q]$ induced by
$x,y$ with these points in the order $p,x,y,q$, suppose that $\|p-x\|_{2}\le\|y-q\|_{2}$.
Since $\|p-x\|_{2}\le\|x-q\|_{2}$, the reflected point $2x-p$ lies on the segment $[x,q]\subseteq K$, so $p\in K\cap(2x-K)$. Hence,
by $\bar{\nu}$-symmetry, $\|p-x\|_{x}\le\sqrt{\bar{\nu}}.$ Therefore,
\begin{align*}
d_{K}(x,y)=\frac{\|x-y\|_{2}\|p-q\|_{2}}{\|p-x\|_{2}\|y-q\|_{2}}&\geq\frac{\|x-y\|_{2}}{\|p-x\|_{2}}\\
&=\frac{\|x-y\|_{x}}{\|p-x\|_{x}}\ge\frac{\|x-y\|_{x}}{\sqrt{\bar{\nu}}}.
\end{align*}
\end{proof}
We can now prove the main conductance bound.
\dikingeneral*
\begin{proof}
We follow the standard high-level outline \citep{vempala2005geometric}.
Consider any measurable subset $S_{1}\subseteq K$ and let $S_{2}=K\setminus S_{1}$
be its complement. Define the points with low escape probability for
these subsets as
\[
S_{i}'=\left\{ x\in S_{i}:\,P_{x}(K\setminus S_{i})<\frac{1}{8}\right\}
\]
and $S_{3}'=K\setminus S_{1}'\setminus S_{2}'$. Then, for any $u\in S_{1}'$,
$v\in S_{2}'$, we have $d_{TV}(P_{u},P_{v})>1-\frac{1}{4}$. Hence,
by Lemma \ref{lem:dikin-one-step}, we have $\Vert u-v\Vert_{u}\ge\frac{1}{512\sqrt{n}}$.
Therefore, by Lemma \ref{lem:dikin-dist},
\[
d_{K}(u,v)\ge\frac{1}{512\sqrt{n}\cdot\sqrt{\bar{\nu}}}.
\]
We can now bound the conductance of $S_{1}$. We may assume that $\mathrm{vol}(S_{i}')\ge\mathrm{vol}(S_{i})/2$;
otherwise, it immediately follows that the conductance of $S_{1}$
is $\Omega(1)$. Assuming this, we have
\begin{align*}
\int_{S_{1}}P_{x}(S_{2})\,dx & \ge\int_{S_{3}'}\frac{1}{8}dx\ge\frac{1}{8}\mathrm{vol}(S_{3}')\\
& \ge\frac{1}{8}d_{K}(S_{1}',S_{2}')\frac{\mathrm{vol}(S_{1}')\mathrm{vol}(S_{2}')}{\mathrm{vol}(K)}\tag*{(from Thm \ref{thm:dK-iso})} \\
& \ge\frac{1}{32768\sqrt{n\bar{\nu}}}\min\left\{ \mathrm{vol}(S_{1}),\mathrm{vol}(S_{2})\right\} .
\end{align*}
\end{proof}
It is well known that the inverse squared conductance of a Markov chain bounds its
mixing rate, e.g., in the following form.
\begin{theorem}
\citep{lovasz1993random} Let $Q_{t}$ be the distribution of the current
point after $t$ steps of a Markov chain with stationary distribution
$Q$ and conductance at least $\phi$, starting from initial distribution
$Q_{0}$. Then, with $M=\sup_{A}\frac{Q_{0}(A)}{Q(A)}$,
\[
d_{TV}(Q_{t},Q)\leq\sqrt{M}\left(1-\frac{\phi^{2}}{2}\right)^{t}
\]
where $d_{TV}(Q_{t},Q)$ is the total variation distance between $Q_{t}$
and $Q$.
\end{theorem}
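The theorem can be illustrated on the smallest nontrivial example, a two-state chain with flip probability $p$: its conductance is $\phi=p$, the stationary distribution is uniform, and a point-mass start has $M=2$. This simulation is our illustration only.

```python
import numpy as np

def two_state_mixing(p=0.3, T=40):
    """Pairs (d_TV(Q_t, Q), sqrt(M)(1 - phi^2/2)^t) with phi = p and M = 2."""
    P = np.array([[1 - p, p], [p, 1 - p]])   # symmetric chain, uniform stationary
    q = np.array([1.0, 0.0])                 # point-mass start at state 0
    pairs = []
    for t in range(1, T + 1):
        q = q @ P
        dtv = 0.5 * np.abs(q - 0.5).sum()
        pairs.append((dtv, np.sqrt(2) * (1 - p**2 / 2) ** t))
    return pairs
```

Here the true distance decays like $(1-2p)^{t}$, comfortably below the conductance bound.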
\section{Fast Polytope Sampling with the LS barrier}
\subsection{LS Barrier and its Properties\label{subsec:LS-Barrier}}
In this section, we assume the convex set is a polytope $P=\{x\in\mathbb{R}^{n}\,\vert\,\mathbf{A}x>b\}$. For any $x\in\mathrm{int}\,P$, let $\mathbf{S}_{x}=\mathbf{Diag}(\mathbf{A}x-b)$
and $\mathbf{A}_{x}=\mathbf{S}_{x}^{-1}\mathbf{A}$. We state the definition of the
Lee-Sidford barrier \citep{lee2019solving}, henceforth referred to as LS barrier.
\begin{definition}[LS Barrier]
The LS barrier is defined as
\[
\psi(x)=\max_{w\in\mathbb{R}^{m}:w\geq0}\frac{1}{2}f(x,w)\]
where \[f(x,w)=\ln\det\left(\mathbf{A}_{x}^{\top}\mathbf{W}^{1-\frac{2}{q}}\mathbf{A}_{x}\right)-\left(\frac{1}{2}-\frac{1}{q}\right)\sum_{i=1}^{m}w_{i}
\] and $\mathbf{W} = \mathbf{Diag}(w)$, and $q=2(1+\ln m)$.
\end{definition}
We follow the notation in \citep{lee2019solving}:
\begin{definition}
For any $x\in P$, we define $w_{x}=\arg\max_{w\geq0}f(x,w)$, $\mathbf{W}_{x}=\mathbf{Diag}(w_{x})$,
$s_{x}=\mathbf{A}x-b$, $\mathbf{S}_{x}=\mathbf{Diag}(s_{x})$, $\mathbf{A}_{x}=\mathbf{S}_{x}^{-1}\mathbf{A}$,
$\mathbf{P}_{x}=\mathbf{W}_{x}^{\frac{1}{2}-\frac{1}{q}}\mathbf{A}_{x}\left(\mathbf{A}_{x}^{\top}\mathbf{W}_{x}^{1-\frac{2}{q}}\mathbf{A}_{x}\right)^{-1}(\mathbf{W}_{x}^{\frac{1}{2}-\frac{1}{q}}\mathbf{A}_{x})^{\top}$,
$\sigma_{x}=\mathrm{diag}(\mathbf{P}_{x})$, $\mathbf{\Sigma}_{x}=\mathbf{Diag}(\sigma_{x})$,
$\mathbf{P}_{x}^{(2)}=\mathbf{P}_{x}\circ\mathbf{P}_{x}$, $\mathbf{\Lambda}_{x}=\mathbf{\Sigma}_{x}-\mathbf{P}_{x}^{(2)}$,
$\bar{\mathbf{\Lambda}}_{x}=\mathbf{\Sigma}_{x}^{-1/2}\mathbf{\Lambda}_{x}\mathbf{\Sigma}_{x}^{-1/2}$,
and $\mathbf{N}_{x}=2\bar{\mathbf{\Lambda}}_{x}(\mathbf{I}-(1-\frac{2}{q})\bar{\mathbf{\Lambda}}_{x})^{-1}$.
\end{definition}
\subsection{Properties of the LS Barrier}\label{properties}
\begin{lemma}[\citep{lee2019solving}]
The function $\psi(x)$ has the following properties:
\begin{enumerate}
\item (Lemma 23) $\psi(x)$ is convex.
\item (Lemma 47.2)
\begin{equation}
\mathbf{P}_{x}^{(2)}\preceq\mathbf{\Sigma}_{x}\label{eq:31}
\end{equation}
\item (Lemma 31)
\begin{equation}
0\leq\sigma_{x,i}=w_{x,i}\leq1\label{eq:32}
\end{equation}
\begin{equation}
\mathbf{A}_{x}^{\top}\mathbf{W}_{x}\mathbf{A}_{x}\preceq\nabla^{2}\psi(x)\preceq(1+q)\mathbf{A}_{x}^{\top}\mathbf{W}_{x}\mathbf{A}_{x}\label{eq:34}
\end{equation}
\item (Lemma 33) For any $x_{t}=x+th$ and $s_{t}=\mathbf{A}x_{t}-b$, we
have
\begin{equation}
\|\mathbf{S}_{t}^{-1}\frac{d}{dt}s_{t}\|_{\mathbf{W}_{t}}\leq\|h\|_{\nabla^{2}\psi(x_{t})}\label{eq:35}
\end{equation}
\item (Lemma 34) For any $x_{t}=x+th$ and $w_{t}=w_{x_{t}}$, we have
\begin{equation}
\|\mathbf{W}_{t}^{-1}\frac{d}{dt}w_{t}\|_{\mathbf{W}_{t}}\leq q\|h\|_{\nabla^{2}\psi(x_{t})}\label{eq:36}
\end{equation}
\end{enumerate}
\end{lemma}
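For intuition about these quantities, in the uniform-weight case $w=\mathbf{1}$ the matrix $\mathbf{P}_{x}$ reduces to the ordinary leverage-score projection, for which the range $0\le\sigma_{i}\le1$ and the analogue of \eqref{eq:31} can be checked numerically. The lemma itself concerns the optimal weights $w_{x}$; this sketch with random data is ours.

```python
import numpy as np

def leverage_check(m=12, n=4, seed=0):
    """Leverage scores of a random m x n matrix and min eigenvalue of Sigma - P^(2)."""
    rng = np.random.default_rng(seed)
    Ax = rng.standard_normal((m, n))
    P = Ax @ np.linalg.inv(Ax.T @ Ax) @ Ax.T   # orthogonal projection onto col(Ax)
    sigma = np.diag(P)                         # leverage scores
    lam_min = np.linalg.eigvalsh(np.diag(sigma) - P * P).min()
    return sigma, lam_min
```

Positive semidefiniteness of $\mathbf{\Sigma}-\mathbf{P}^{(2)}$ for a projection follows from $v^{\top}(\mathbf{\Sigma}-\mathbf{P}^{(2)})v=\frac{1}{2}\sum_{ij}P_{ij}^{2}(v_{i}-v_{j})^{2}\ge0$, using $\sigma_{i}=\sum_{j}P_{ij}^{2}$.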
\subsection{Mixing Rate}
\begin{definition}
The LS matrix for a point $x\in P$ is defined as
\[
\mathbf{H}(x)=(1+q^{2})(1+q)\cdot\mathbf{A}^{\top}\mathbf{S}_{x}^{-1}\mathbf{W}_{x}^{1-\frac{2}{q}}\mathbf{S}_{x}^{-1}\mathbf{A}.
\]
\end{definition}
We establish the strong self-concordance of the LS matrix in the next lemma.
\begin{lemma}[Strong Self-Concordance]
\label{lem:LS_strongly}The LS matrix is strongly self-concordant,
i.e., for any $x_{t} \in P$ given by $x_{t}=x+th$ and $\mathbf{H}_t = \mathbf{H}(x_t)$, we
have
\[
\|\mathbf{H}_{t}^{-1/2}(\frac{d}{dt}\mathbf{H}_{t})\mathbf{H}_{t}^{-1/2}\|_{F}\leq2\|h\|_{\mathbf{H}_{t}}.
\]
\end{lemma}
\begin{proof}
We redefine $$\overline{\mathbf{H}}_{t}=\mathbf{A}^{\top}\mathbf{V}_{t}\mathbf{A}$$
with $\mathbf{V}_{t}=\mathbf{S}_{t}^{-1}\mathbf{W}_{t}^{1-2/q}\mathbf{S}_{t}^{-1},\; \mathbf{P}_{t}=\sqrt{\mathbf{V}_{t}}\mathbf{A}(\mathbf{A}^{\top}\mathbf{V}_{t}\mathbf{A})^{-1}\mathbf{A}^{\top}\sqrt{\mathbf{V}_{t}}$.
Note that $\mathbf{V}_{t}$ is a diagonal matrix and that $\overline{\mathbf{H}}_{t}$
and $\mathbf{H}_{t}$ differ only by a scaling factor. Hence, we have
\begin{align*}
\allowdisplaybreaks
\|\mathbf{H}_{t}^{-1/2}(\frac{d}{dt}\mathbf{H}_{t})\mathbf{H}_{t}^{-1/2}\|_{F}^{2} & =\|\overline{\mathbf{H}}_{t}^{-1/2}(\frac{d}{dt}\overline{\mathbf{H}}_{t})\overline{\mathbf{H}}_{t}^{-1/2}\|_{F}^{2}\\
& =\mathrm{Tr}\left(\overline{\mathbf{H}}_{t}^{-1}(\frac{d}{dt}\overline{\mathbf{H}}_{t})\overline{\mathbf{H}}_{t}^{-1}(\frac{d}{dt}\overline{\mathbf{H}}_{t})\right)\\
& =\mathrm{Tr}\left((\mathbf{A}^{\top}\mathbf{V}_{t}\mathbf{A})^{-1}\mathbf{A}^{\top}(\frac{d}{dt}\mathbf{V}_{t})\mathbf{A}\right)^2\\
& =\mathrm{Tr}\left(\mathbf{P}_{t}\frac{d\ln\mathbf{V}_{t}}{dt}\mathbf{P}_{t}\frac{d\ln\mathbf{V}_{t}}{dt}\right)\\
& =\left(\frac{d\ln v_{t}}{dt}\right)^{\top}\mathbf{P}_{t}^{(2)}\frac{d\ln v_{t}}{dt}.
\end{align*}
Note that $\mathbf{P}_{t}^{(2)}\preceq\mathbf{\Sigma}_{t}$, by \eqref{eq:31}.
Therefore,
\begin{align*}
\|\mathbf{H}_{t}^{-1/2}(\frac{d}{dt}\mathbf{H}_{t})\mathbf{H}_{t}^{-1/2}\|_{F}^{2} & \leq\frac{d\ln v_{t}}{dt}^{\top}\mathbf{\Sigma}_{t}\frac{d\ln v_{t}}{dt}\\
& =\sum_{i=1}^{m}\sigma_{t,i}\left(\frac{d\ln s_{t,i}^{-2}w_{t,i}^{1-2/q}}{dt}\right)^{2}\\
& \leq4\sum_{i=1}^{m}\sigma_{t,i}\left(\left(\frac{d\ln s_{t,i}}{dt}\right)^{2}+\left(\frac{d\ln w_{t,i}}{dt}\right)^{2}\right)\\
& =4\sum_{i=1}^{m}\sigma_{t,i}\left(\left(\frac{1}{s_{t,i}}\frac{ds_{t,i}}{dt}\right)^{2}+\left(\frac{1}{w_{t,i}}\frac{dw_{t,i}}{dt}\right)^{2}\right)\\
& \leq4(1+q^{2})\|h\|_{\nabla^{2}\psi(x_{t})}^{2}
\end{align*}
where we used $\sigma_{t}=w_{t}$ \eqref{eq:32} in the second-to-last
equality and equations \eqref{eq:35} and \eqref{eq:36} for the last inequality.
Finally, \eqref{eq:34} shows that $\nabla^{2}\psi(x_{t})\preccurlyeq(1+q)\mathbf{A}_{t}^{\top}\mathbf{W}_{t}\mathbf{A}_{t}$.
Since $0\leq w_{t}=\sigma_{t}\leq1$ by the property of leverage score,
we have
\[
\nabla^{2}\psi(x)\preceq(1+q)\mathbf{A}_{t}^{\top}\mathbf{W}_{t}\mathbf{A}_{t}\preceq(1+q)\mathbf{A}_{t}^{\top}\mathbf{W}_{t}^{1-2/q}\mathbf{A}_{t}=(1+q)\overline{\mathbf{H}}_{t}.
\]
Thus, $\|h\|_{\nabla^{2}\psi(x_{t})}^{2}\leq(1+q)\|h\|_{\overline{\mathbf{H}}_{t}}^{2}$.
Hence, we have
\begin{align*}
\|\mathbf{H}_{t}^{-1/2}(\frac{d}{dt}\mathbf{H}_{t})\mathbf{H}_{t}^{-1/2}\|_{F}^{2} & \leq4(1+q^{2})(1+q)\|h\|_{\overline{\mathbf{H}}_{t}}^{2}\leq4\|h\|_{\mathbf{H}_{t}}^{2}
\end{align*}
where we used that $\mathbf{H}_{t}=(1+q^{2})(1+q)\overline{\mathbf{H}}_{t}$.
\end{proof}
\begin{lemma}\label{lem:symmetry}
The LS-ellipsoid matrix has the following properties:
\begin{enumerate}
\item $\ln\det\mathbf{H}(x)$ is convex.
\item $\mathbf{H}$ is a symmetric strongly $\bar{\nu}$-self-concordant barrier
with $\bar{\nu}=O(n\log^{3}m)$.
\end{enumerate}
\end{lemma}
\begin{proof}
For any $x\in\mathrm{int}\,P$, \eqref{eq:32} shows that
\begin{align*}
\sum_{i}w_{x,i}=\sum_{i}\sigma_{x,i}&=\mathrm{Tr}\,\mathbf{W}_{x}^{\frac{1}{2}-\frac{1}{q}}\mathbf{A}_{x}\left(\mathbf{A}_{x}^{\top}\mathbf{W}_{x}^{1-\frac{2}{q}}\mathbf{A}_{x}\right)^{-1}(\mathbf{W}_{x}^{\frac{1}{2}-\frac{1}{q}}\mathbf{A}_{x})^{\top}\\
&=\mathrm{Tr}\,\mathbf{I}_{n\times n}=n.
\end{align*}
Hence, the LS barrier can be restated as
\begin{align*}
\psi(x) & =\frac{1}{2}\ln\det(\mathbf{A}_{x}^{\top}\mathbf{W}_x^{1-2/q}\mathbf{A}_{x})-\left(\frac{1}{2}-\frac{1}{q}\right)n\\
& =\frac{1}{2}\ln\det\frac{1}{(1+q^{2})(1+q)}\mathbf{H}(x)-\left(\frac{1}{2}-\frac{1}{q}\right)n
\end{align*}
where $w_{x}$ is the maximizer of $f(x,w)$. Since $\psi(x)$
is convex, so is $\ln\det\mathbf{H}(x)$.
Next, we prove that $\bar{\nu}=O(n\log^{3}m)$. For any $x\in P$ and any $y\in E_{x}(1)$, $(y-x)^{\top}\mathbf{A}_{x}^{\top}\mathbf{W}_{x}^{1-2/q}\mathbf{A}_{x}(y-x)\leq\frac{1}{(1+q^{2})(1+q)}$
and hence
\begin{align*}
&\norm{\mathbf{A}_{x}(y-x)}_{\infty}^{2}\\
&\quad =\max_{i\in[m]}\left(e_{i}^{\top}\mathbf{A}_{x}(\mathbf{A}_{x}^{\top}\mathbf{W}_{x}^{1-2/q}\mathbf{A}_{x})^{-1/2}(\mathbf{A}_{x}^{\top}\mathbf{W}_{x}^{1-2/q}\mathbf{A}_{x})^{1/2}(y-x)\right)^{2}\\
&\quad \leq\frac{1}{(1+q^{2})(1+q)}\max_{i\in[m]}e_{i}^{\top}\mathbf{A}_{x}(\mathbf{A}_{x}^{\top}\mathbf{W}_{x}^{1-2/q}\mathbf{A}_{x})^{-1}\mathbf{A}_{x}^{\top}e_{i}\\
&\quad \leq\max_{i\in[m]}\frac{\sigma_{x,i}}{w_{x,i}^{1-2/q}}\leq\max_{i\in[m]}\frac{\sigma_{x,i}}{w_{x,i}}=1
\end{align*}
since $w_{x,i}\leq1$. So, $E_{x}\subseteq P\cap(2x-P)$
for all $x\in P$.
For any $y\in P\cap(2x-P)$, we have $\Vert\mathbf{S}_{x}^{-1}\mathbf{A}(x-y)\Vert_{\infty}\leq1$.
Hence,
\begin{align*}
\frac{(x-y)^{\top}\mathbf{H}(x)(x-y)}{(1+q^{2})(1+q)} & =(x-y)^{\top}\mathbf{A}^{\top}\mathbf{S}_{x}^{-1}\mathbf{W}_{x}^{1-2/q}\mathbf{S}_{x}^{-1}\mathbf{A}(x-y)\\
& =\sum_{i=1}^{m}w_{x,i}^{1-2/q}(\mathbf{S}_{x}^{-1}\mathbf{A}(x-y))_{i}^{2} \leq\sum_{i=1}^{m}w_{x,i}^{1-2/q}\\
&\leq\left(\sum_{i=1}^{m}\left(w_{x,i}^{1-2/q}\right)^{\frac{1}{1-(2/q)}}\right)^{1-2/q}\left(\sum_{i=1}^{m}1^{q/2}\right)^{2/q}\\
& \le\left(\sum_{i=1}^{m}w_{x,i}\right)^{1-2/q}m^{2/q}\le n^{1-2/q}m^{2/q}\leq en.
\end{align*}
\end{proof}
Lemmas \ref{lem:LS_strongly} and \ref{lem:symmetry} imply that the mixing time of the Dikin walk with the LS matrix is $\tilde{O}(n^{2})$ from a warm start.
Implementing each step of this walk involves the following tasks:
\begin{enumerate}
\item Compute $\mathbf{H}(x)^{-1/2}v$ for some vector $v$.
\item Compute the ratio $\det(\mathbf{H}(y)^{-1}\mathbf{H}(x))$ for points
$x,y$.
\end{enumerate}
Given $w_{x}$, computing $\mathbf{H}(x)$, its inverse and its determinant can all
be done in time $\tilde{O}\left(mn^{\omega-1}\right)$, and $w_{x}$ can be updated in $\tilde{O}(mn^{\omega-1})$ per step as shown in \citep[Theorem 46]{lee2019solving}. Using this, each step of the Dikin walk with the LS matrix can be implemented in time $\tilde{O}(mn^{\omega-1})$.
This means that the total time to sample a polytope from a warm start is $\tilde{O}(mn^{\omega+1})$ as claimed in Theorem \ref{thm:mixing-ls}.
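As a concrete illustration of the two per-step tasks above, here is a minimal Python sketch for the simplest possible instance: the unit box $[0,1]^{n}$ with the log barrier, where $\mathbf{H}(x)$ is diagonal and both tasks can be computed exactly. The function names are ours and this is a toy diagonal case, not the general LS-matrix computation.

```python
import math

def box_hessian(x):
    # Log-barrier Hessian of [0,1]^n: the constraints x_i >= 0 and
    # x_i <= 1 give the diagonal matrix H_ii = 1/x_i^2 + 1/(1-x_i)^2.
    return [1.0 / xi ** 2 + 1.0 / (1.0 - xi) ** 2 for xi in x]

def h_inv_sqrt_times(x, v):
    # Task 1: compute H(x)^{-1/2} v (used to draw the Gaussian proposal).
    return [vi / math.sqrt(hi) for hi, vi in zip(box_hessian(x), v)]

def det_ratio(x, y):
    # Task 2: compute det(H(x)) / det(H(y)) (used in the filter).
    r = 1.0
    for hx, hy in zip(box_hessian(x), box_hessian(y)):
        r *= hx / hy
    return r
```

At the center $x=(1/2,1/2)$ each diagonal entry is $8$, so `det_ratio(x, x)` is $1$ and `h_inv_sqrt_times` scales each coordinate of $v$ by $1/\sqrt{8}$; the general case replaces the diagonal formula by a matrix factorization.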
\section{Fast Implementation of Dikin walk}\label{section4}
\begin{lemma}[Strong Self-Concordance]
\label{lem:dssc} The matrix function $\mathbf{H}(x)=\mathbf{A}^{\top}\mathbf{S}_{x}^{-2}\mathbf{A}$, which is the Hessian of the log barrier function $\phi(x)=-\sum_{i=1}^{m}\log\left(A_{i}x-b_{i}\right)$,
is strongly self-concordant.
\end{lemma}
\begin{proof}
Let $x_{t}=x+th$ for some fixed vector $h$. Let $\mathbf{S}_{t}=\mathbf{Diag}(\mathbf{A}x_{t}\allowbreak - b)$, $\mathbf{A}_{t}=\mathbf{S}_{t}^{-1}\mathbf{A}$, $\mathbf{P}_{t}=\mathbf{A}_{t}(\mathbf{A}_{t}^{\top}\mathbf{A}_{t})^{-1}\mathbf{A}_{t}^{\top}$,
$\sigma_{t}=\mathrm{diag}(\mathbf{P}_{t})$, $\mathbf{\Sigma}_{t}=\mathbf{Diag}(\sigma_{t})$,
and $\mathbf{P}_{t}^{(2)}=\mathbf{P}_{t}\circ\mathbf{P}_{t}$. By
\citep[Lemma 47.2]{lee2019solving}, $\mathbf{P}_{t}^{(2)}\preccurlyeq\mathbf{\Sigma}_{t}\preceq\mathbf{I}$.
We are now ready to prove strong self-concordance.
\begin{align*}
&\|\mathbf{H}_{t}^{-1/2}(\frac{d}{dt}\mathbf{H}_{t})\mathbf{H}_{t}^{-1/2}\|_{F}^{2} \\
&=\mathrm{Tr}\mathbf{H}_{t}^{-1}(\frac{d}{dt}\mathbf{H}_{t})\mathbf{H}_{t}^{-1}(\frac{d}{dt}\mathbf{H}_{t})=\mathrm{Tr}\mathbf{P}_{t}\frac{d\ln s_{t}^{-2}}{dt}\mathbf{P}_{t}\frac{d\ln s_{t}^{-2}}{dt}\\
& =\frac{d\ln s_{t}^{-2}}{dt}^{\top}\mathbf{P}_{t}^{(2)}\frac{d\ln s_{t}^{-2}}{dt}\leq\sum_{i=1}^{m}\left(\frac{d\ln s_{t,i}^{-2}}{dt}\right)^{2}\\
& =\sum_{i=1}^{m}4s_{t,i}^{-2}\left(a_{i}^{\top}h\right)^{2}=4h^{\top}\mathbf{A}^{\top}\mathbf{S}_{t}^{-2}\mathbf{A}h=4\norm h_{\mathbf{H}_{t}}^{2}.
\end{align*}
\end{proof}
The function $\log\det\mathbf{A}^{\top}\mathbf{S}_{x}^{-2}\mathbf{A}$ is called the volumetric barrier and is known to be convex.
\begin{lemma}[{\citep[Lemma 3]{vaidya1996new}}]
$f(x)=\log\det\mathbf{A}^{\top}\mathbf{S}_{x}^{-2}\mathbf{A}$ is a convex function in $x$.
\end{lemma}
The main result of this section is to give an even faster implementation
by noting that we can in fact avoid explicitly computing $\mathbf{H}(x)$,
its inverse, or its determinant for the Dikin walk with the log barrier.
This resolves an open problem posed in \citep{kannan2012random,lee2015efficient}.
The main challenge is to avoid computing the determinant of $\mathbf{H}(x)$.
In fact, what one needs is an unbiased estimator of the ratio of two
such determinants. We reduce this, first to estimating a log-det,
and then to an inverse maintenance problem in the next two lemmas.
To calculate the rejection probability for the Dikin walk, we want an unbiased
estimator of $\frac{\det\mathbf{H}(x)}{\det\mathbf{H}(y)}$. We first
find an unbiased estimator $Y$ of the term $\log\det\mathbf{H}(x)-\log\det\mathbf{H}(y)$,
which can be calculated in $\widetilde{O}\left(\mathrm{nnz}(\mathbf{A})+n^{2}\right)$
time using Lemma \ref{lem:logdet}. We then obtain an
unbiased estimator $X$ of the ratio of determinants using Lemma \ref{lem:det}, which describes
an algorithm to find an unbiased estimator of a value $r$ given access
to an unbiased estimator of $\log r$.
\begin{lemma}[Determinant]
\label{lem:det}Given a random variable $Y$ with $\mathbb{E}(Y)=\log r$, the random variable $X$ defined as
\[
X=e\cdot\prod_{j=1}^{i}Y_{j}\mbox{ with probability }\dfrac{1}{e\cdot i!}
\]
with $Y_{j}$ being iid copies of $Y$ has $\mathbb{E}(X)=r.$
\end{lemma}
\begin{proof}
We know that
\[
r=\sum_{i=0}^{\infty}\dfrac{(\log(r))^{i}}{i!}.
\]
Let $X=e\cdot\prod_{j=1}^{i}Y_{j}$ with probability $\dfrac{1}{e\cdot i!}$,
where the $Y_{j}$ are iid random variables with $\mathbb{E}(Y_{j})=\log r$.
Then,
\[
\mathbb{E}[X]=\sum_{i=0}^{\infty}\frac{\mathbb{E}(Y)^{i}}{i!}=e^{\log(r)}=r.
\]
\end{proof}
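Lemma \ref{lem:det} is easy to check numerically: the random index $i$ with $\Pr[i]=\frac{1}{e\cdot i!}$ is exactly a Poisson$(1)$ variable. Below is a short Python sanity check of our own (using a constant estimator $Y=\log r$ for simplicity) confirming that the empirical mean of $X$ concentrates around $r$; the function names are ours.

```python
import math
import random

def poisson1():
    # Knuth's sampler for Poisson(1): returns i with probability 1/(e * i!).
    limit, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def unbiased_exp(sample_y):
    # X = e * prod_{j=1}^{i} Y_j with i ~ Poisson(1), so E[X] = exp(E[Y]).
    x = math.e
    for _ in range(poisson1()):
        x *= sample_y()
    return x

random.seed(0)
r = 2.5
n = 200000
est = sum(unbiased_exp(lambda: math.log(r)) for _ in range(n)) / n
# est should be close to r = 2.5
```

With a constant $Y=\log r$ the variance of $X$ is small, so the Monte Carlo average is tight; in the algorithm the $Y_{j}$ are themselves random estimators of the log-determinant difference.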
\begin{lemma}[Log Determinant]
\label{lem:logdet} Define $\mathbf{H}(t)=\mathbf{A}^{\top}\mathbf{A}+t(\mathbf{A}^{\top}\mathbf{W}\mathbf{A}-\mathbf{A}^{\top}\mathbf{A})=\mathbf{A}^{\top}(\mathbf{I}+t(\mathbf{W}-\mathbf{I}))\mathbf{A}$.
Let $v\sim N(0,I)$ and $t$ be uniform in $[0,1]$ and
\[
Y=v^{\top}\mathbf{H}(t)^{-1}\mathbf{A}^{\top}(\mathbf{W}-\mathbf{I})\mathbf{A}v+\log\det\mathbf{A}^{\top}\mathbf{A}.
\]
Then, $\mathbb{E}(Y)=\log\det\mathbf{A}^{\top}\mathbf{W}\mathbf{A}$.
\end{lemma}
\begin{proof}
\allowdisplaybreaks
We have
\begin{align*}
&\log\det(\mathbf{H}(1))-\log\det(\mathbf{A}^{\top}\mathbf{A})\\
& =\int_{0}^{1}\dfrac{d\log\det\mathbf{H}(t)}{dt}dt\\
& =\int_{0}^{1}\mathrm{Tr}(\mathbf{H}(t)^{-1}\dfrac{d\mathbf{H}(t)}{dt})dt\\
& =\int_{0}^{1}\mathrm{Tr}(\mathbf{H}(t)^{-1}\mathbf{A}^{\top}(\mathbf{W}-\mathbf{I})\mathbf{A})dt\\
& =\int_{0}^{1}\mathbb{E}_{v\sim N(0,I)}[v^{\top}\mathbf{H}(t)^{-1}\mathbf{A}^{\top}(\mathbf{W}-\mathbf{I})\mathbf{A}v]dt
\end{align*}
where the last step uses $\mathbb{E}_{v\sim N(0,I)}[v^{\top}\mathbf{M}v]=\mathrm{Tr}(\mathbf{M})$. Since $\mathbf{H}(1)=\mathbf{A}^{\top}\mathbf{W}\mathbf{A}$, this gives $\mathbb{E}(Y)=\log\det\mathbf{A}^{\top}\mathbf{W}\mathbf{A}$.
\end{proof}
Note that given access to products with $\mathbf{H}(t)^{-1}$, we can estimate the last expression by sampling $t$ and $v$ and computing $v^{\top}\mathbf{H}(t)^{-1}\mathbf{A}^{\top}(\mathbf{W}-\mathbf{I})\mathbf{A}v$.
Maintaining $\mathbf{H}(t)^{-1}$ reduces to the inverse maintenance
problem for $\mathbf{H}$. It is shown in \citep{lee2015efficient}
that a matrix inverse can be maintained efficiently in the following
sense. Suppose we have a sequence of matrices of the form $\mathbf{A}^{\top}\mathbf{D}^{(k)}\mathbf{A}$
where each $\mathbf{D}^{(k)}$ is a slowly-changing diagonal matrix.
Then for each matrix in the sequence, its inverse times any given
vector $v$ can be computed in time $\widetilde{O}\left(\mathrm{nnz}(\mathbf{A})+n^{2}\right).$
We use $\mathbf{W}=\mathbf{S}_{x}^{-2}\mathbf{S}_{y}^{2}$ to calculate
an unbiased estimate of $\log\det\mathbf{H}(x)-\log\det\mathbf{H}(y)$.
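For intuition, Lemma \ref{lem:logdet} can be checked numerically in the simplest case $\mathbf{A}=\mathbf{I}$, where $\mathbf{H}(t)=\mathbf{I}+t(\mathbf{W}-\mathbf{I})$ is diagonal and $\log\det\mathbf{A}^{\top}\mathbf{A}=0$. The following Python snippet is our own toy check, not part of the algorithm; it averages samples of $Y$ and compares against $\log\det\mathbf{W}=\sum_{i}\log w_{i}$.

```python
import math
import random

def std_normal():
    # Box-Muller transform for a standard normal sample.
    u1, u2 = 1.0 - random.random(), random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

def sample_y(w):
    # One sample of Y for A = I: H(t) = I + t(W - I) is diagonal, so
    # Y = v^T H(t)^{-1} (W - I) v = sum_i v_i^2 (w_i - 1)/(1 + t(w_i - 1)).
    t = random.random()
    return sum(std_normal() ** 2 * (wi - 1.0) / (1.0 + t * (wi - 1.0))
               for wi in w)

random.seed(1)
w = [2.0, 0.5, 3.0]
n = 200000
est = sum(sample_y(w) for _ in range(n)) / n
# E[Y] = log(2.0) + log(0.5) + log(3.0) = log 3
```

Integrating over $t$ turns each term into $\log(1+t(w_{i}-1))\big|_{0}^{1}=\log w_{i}$, which is exactly the mechanism behind the lemma.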
\begin{lemma}[{\citep[Theorem 13]{lee2015efficient}\label{lem:inv}}]
Suppose that a sequence of matrices \\ $\mathbf{A}^{\top}\mathbf{D}^{(k)}\mathbf{A}$
for the inverse maintenance problem satisfies $$\sum_{i}\left(\frac{d_{i}^{(k+1)}-d_{i}^{(k)}}{d_{i}^{(k)}}\right)^{2}=O(1).$$
Then there is an algorithm that with high probability maintains an
$\widetilde{O}\left(\mathrm{nnz}(\mathbf{A})+n^{2}\right)$-time linear system
solver for $r$ rounds in total time $\widetilde{O}\left(n^{\omega}+r(\mathrm{nnz}(\mathbf{A})+ n^{2})\right)$.
\end{lemma}
We note that the condition $\sum_{i}\left(\frac{d_{i}^{(k+1)}-d_{i}^{(k)}}{d_{i}^{(k)}}\right)^{2}=O(1)$
is satisfied since
\begin{align*}
\sum_{i}\left(\frac{d_{i}^{(k+1)}-d_{i}^{(k)}}{d_{i}^{(k)}}\right)^{2}&=\sum_{i}\left(\frac{(s_{i}^{(k+1)})^{-2}-(s_{i}^{(k)})^{-2}}{(s_{i}^{(k)})^{-2}}\right)^{2}\\
&=O\left(\sum_{i}\left(\frac{s_{i}^{(k+1)}-s_{i}^{(k)}}{s_{i}^{(k)}}\right)^{2}\right) \\&=O\left(\|x^{(k+1)}-x^{(k)}\|_{x^{(k)}}^{2}\right).
\end{align*}
Putting these together we have the following unbiased estimator for
$\sqrt{\det\mathbf{H}(x)/\det\mathbf{H}(y)}$:
Compute $X=e\cdot\prod_{j=1}^{i}\frac{Y_{j}}{2}\mbox{ with probability }\dfrac{1}{e\cdot i!}$
where each $Y_{j}$ is an iid sample generated as follows:
\begin{enumerate}
\item Pick $v\sim N(0,I)$ and $t$ uniformly in $[0,1].$
\item Set $\mathbf{W}=\mathbf{S}_{x}^{-2}\mathbf{S}_{y}^{2}.$
\item Compute $Y=v^{\top}\mathbf{H}(t)^{-1}\mathbf{A}^{\top}(\mathbf{W}-\mathbf{I})\mathbf{A}v$
where $\mathbf{H}(t)=\mathbf{A}^{\top}(\mathbf{I}+t(\mathbf{W}-\mathbf{I}))\mathbf{A}$
using efficient inverse maintenance.
\end{enumerate}
We need one more trick. In the algorithm, at each step we need to
compute $\min\left\{ 1,\frac{p(y\rightarrow x)}{p(x\rightarrow y)}\right\} $.
While we can approximate the ratio inside the min, this might make
the overall probability incorrect due to the min function not being
smooth. So instead we propose a smoother filter. This might have other
applications.
\begin{lemma}[Smooth Metropolis filter]
Let the probability of selecting the state $y$ from the state $x$ of an ergodic Markov chain be $p(x\rightarrow y)$.
Then accepting the step $x\rightarrow y$ with probability $\dfrac{p(y\rightarrow x)}{p(y\rightarrow x)+p(x\rightarrow y)}$
yields a uniform stationary distribution.
\end{lemma}
\begin{proof}
Let $\tilde{p}(x \rightarrow y)$ be the probability of taking a step from $x$ to $y$. Then $\tilde{p}$ satisfies detailed balance:
\begin{align*}
\tilde{p}(x\rightarrow y) & =p(x\rightarrow y)\cdot\dfrac{p(y\rightarrow x)}{p(y\rightarrow x)+p(x\rightarrow y)}\\
& =\dfrac{p(x\rightarrow y)p(y\rightarrow x)}{p(y\rightarrow x)+p(x\rightarrow y)}\\
& =p(y\rightarrow x)\cdot\dfrac{p(x\rightarrow y)}{p(y\rightarrow x)+p(x\rightarrow y)}\\
& =\tilde{p}(y\rightarrow x)
\end{align*}
So, $\tilde{p}(x\rightarrow y)=\tilde{p}(y\rightarrow x)$ for all
$x$ and $y$. Hence the stationary distribution is uniform.
\end{proof}
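The smooth filter is easy to verify on a small discrete chain: with acceptance probability $\frac{p(y\rightarrow x)}{p(y\rightarrow x)+p(x\rightarrow y)}$ the off-diagonal transition probabilities become symmetric, so the transition matrix is doubly stochastic and the uniform distribution is stationary. A small Python check (the proposal matrix is our own example):

```python
def smooth_filter_matrix(q):
    # Transition matrix of the chain that proposes y from x with
    # probability q[x][y] and accepts with q[y][x] / (q[y][x] + q[x][y]).
    m = len(q)
    t = [[0.0] * m for _ in range(m)]
    for x in range(m):
        for y in range(m):
            if x != y and q[x][y] + q[y][x] > 0:
                t[x][y] = q[x][y] * q[y][x] / (q[x][y] + q[y][x])
        t[x][x] = 1.0 - sum(t[x])  # stay put on rejection
    return t

# An asymmetric proposal on 4 states (rows need not be symmetric).
q = [[0.0, 0.6, 0.1, 0.0],
     [0.2, 0.0, 0.5, 0.1],
     [0.1, 0.2, 0.0, 0.4],
     [0.0, 0.1, 0.3, 0.0]]
t = smooth_filter_matrix(q)
col_sums = [sum(t[x][y] for x in range(4)) for y in range(4)]
# Every column sums to 1, i.e. the uniform distribution is stationary.
```

The acceptance ratio only needs to be known up to the (smooth, increasing) map $r\mapsto r/(1+r)$, which is what makes an approximate determinant ratio usable here.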
For the Dikin walk, $\frac{p(y\rightarrow x)}{p(x\rightarrow y)}=\sqrt{\frac{\det(\mathbf{H}_{x})}{\det(\mathbf{H}_{y})}}$.
Note that the acceptance probability $\frac{p(y\rightarrow x)}{p(y\rightarrow x)+p(x\rightarrow y)}\allowbreak=\frac{\frac{p(y\rightarrow x)}{p(x\rightarrow y)}}{1+\frac{p(y\rightarrow x)}{p(x\rightarrow y)}}$
is increasing in $\frac{p(y\rightarrow x)}{p(x\rightarrow y)}$.
As the log barrier is strongly self-concordant (Lemma \ref{lem:dssc})
and by \eqref{eq:rejection}, we get that with probability at least
$0.99$, for $y$ randomly drawn from $E_{x}$, $\frac{\mathrm{vol}(E_{x}(r))}{\mathrm{vol}(E_{y}(r))}\geq0.9922$.
Hence, the probability of accepting each step is at least $0.498$ with high probability.
\begin{proof}[Proof of Theorem \ref{thm:dikin}]
Implementing the Dikin walk requires maintaining matrices $\mathbf{H}_{t}=\mathbf{A}^{\top}\mathbf{S}_{t}^{-2}\mathbf{A}$
corresponding to the point $x_{t}$. Lemma \ref{lem:inv} shows that this can
be done in $\widetilde{O}\left(n^{\omega}+r(\mathrm{nnz}(\mathbf{A})+n^{2})\right)$
time where $r$ is the number of steps in the chain. Additionally,
each step requires calculating the acceptance probability, which is
a smooth function of $\dfrac{\det(\mathbf{H}_{t})}{\det(\mathbf{H}_{t+1})}$
and hence can be calculated in $\widetilde{O}\left(\mathrm{nnz}(\mathbf{A})+n^{2}\right)$
amortized time using Lemmas \ref{lem:det} and \ref{lem:logdet}.
\end{proof}
\section{Strong Self-Concordance of other barriers}\label{section5}
Here we analyze the strong self-concordance of the universal and entropic
barriers.
\begin{proof}[Proof of Lemma \ref{lem:SSC-KLS}.]
The entropic barrier is the dual of
\[
f(\theta)=\log\left(\int_{x\in K}\exp(\theta^{\top}x)dx\right).
\]
Its first three derivatives are moments \citep{bubeck2014entropic}:
\begin{align*}
Df(\theta)[h_{1}] & =\frac{\int_{x\in K}x^{\top}h_{1}\exp(\theta^{\top}x)dx}{\int_{x\in K}\exp(\theta^{\top}x)dx}\\
& =\mathbb{E}_{x\sim p_{\theta}}x^{\top}h_{1}.
\end{align*}
where $p_{\theta}$ is the corresponding exponential distribution
with support $K$.
\begin{align*}
D^{2}f(\theta)[h_{1},h_{2}] & =\frac{\int_{x\in K}x^{\top}h_{1}x^{\top}h_{2}\exp(\theta^{\top}x)dx}{\int_{x\in K}\exp(\theta^{\top}x)dx}\\
&\quad -\frac{\prod_{i=1}^2\int_{x\in K}x^{\top}h_{i}\exp(\theta^{\top}x)dx}
{\left(\int_{x\in K}\exp(\theta^{\top}x)dx\right)^{2}}\\
& =\mathbb{E}_{x\sim p_{\theta}}h_{2}^{\top}xx^{\top}h_{1}-h_{2}^{\top}\mu\mu^{\top}h_{1}\\
& =\mathbb{E}_{x\sim p_{\theta}}(x-\mu)^{\top}h_{1}\cdot(x-\mu)^{\top}h_{2}
\end{align*}
Next, we note that
\begin{align*}
D\mu[h] & =D\frac{\int_{x\in K}x\exp(\theta^{\top}x)dx}{\int_{x\in K}\exp(\theta^{\top}x)dx}[h]\\
& =\frac{\int_{x\in K}xx^{\top}h\exp(\theta^{\top}x)dx}{\int_{x\in K}\exp(\theta^{\top}x)dx}\\
&\quad -\frac{\int_{x\in K}x\exp(\theta^{\top}x)dx}{\int_{x\in K}\exp(\theta^{\top}x)dx}\cdot\frac{\int_{x\in K}x^{\top}h\exp(\theta^{\top}x)dx}{\int_{x\in K}\exp(\theta^{\top}x)dx}\\
& =\mathbb{E}_{x\sim p_{\theta}}xx^{\top}h-\mu\mu^{\top}h\\
& =\mathbb{E}_{y\sim p_{\theta}}(y-\mu)(y-\mu)^{\top}h.
\end{align*}
So, we have
\begin{align*}
&D^{3}f(\theta)[h_{1},h_{2},h_{3}]\\
=& \mathbb{E}_{x\sim p_{\theta}}(-\mathbb{E}_{y\sim p_{\theta}}(y-\mu)(y-\mu)^{\top}h_{3})^{\top}h_{1}\cdot(x-\mu)^{\top}h_{2}\\
& +\mathbb{E}_{x\sim p_{\theta}}(x-\mu)^{\top}h_{1}\cdot(-\mathbb{E}(y-\mu)(y-\mu)^{\top}h_{3})^{\top}h_{2}\\
& +\mathbb{E}_{x\sim p_{\theta}}(x-\mu)^{\top}h_{1}\cdot(x-\mu)^{\top}h_{2}\cdot(x-\mu)^{\top}h_{3}\\
= & \mathbb{E}_{x\sim p_{\theta}}(x-\mu)^{\top}h_{1}\cdot(x-\mu)^{\top}h_{2}\cdot(x-\mu)^{\top}h_{3}.
\end{align*}
By \citep[(2.15)]{nesterov1994interior}, we have that
\[
D^{2}f^{*}(x_{\theta})[h_{1},h_{2}]=h_{1}^{\top}\nabla^{2}f(\theta)^{-1}h_{2}
\]
and
\begin{align*}
&D^{3}f^{*}(x_{\theta})[h_{1},h_{2},h_{3}]\\&\quad=-D^{3}f(\theta)[\nabla^{2}f(\theta)^{-1}h_{1},\nabla^{2}f(\theta)^{-1}h_{2},\nabla^{2}f(\theta)^{-1}h_{3}]
\end{align*}
where $x_{\theta}=\nabla f(\theta)$. Hence, we have
\begin{align*}
& \nabla^{2}f^{*}(x_{\theta})^{-\frac{1}{2}}D^{3}f^{*}(x_{\theta})[h]\nabla^{2}f^{*}(x_{\theta})^{-\frac{1}{2}}\\
&= -\mathbb{E}_{x\sim p_{\theta}}\nabla^{2}f(\theta)^{-\frac{1}{2}}(x-\mu)(x-\mu)^{\top}\nabla^{2}f(\theta)^{-\frac{1}{2}}\\
&\qquad \cdot(x-\mu)^{\top}\nabla^{2}f(\theta)^{-1}h\\
&= -\mathbb{E}_{x\sim\tilde{p}_{\theta}}xx^{\top}\cdot x^{\top}\nabla^{2}f(\theta)^{-\frac{1}{2}}h
\end{align*}
where $\tilde{p}_{\theta}$ is the distribution given by $\nabla^{2}f(\theta)^{-\frac{1}{2}}(x-\mu)$
where $x\sim p_{\theta}$. Note that $\tilde{p}_{\theta}$ is isotropic
and \citep[Fact 6.1]{eldan2013thin} shows that
\begin{equation}
\max_{\|v\|_{2}=1}\norm{\mathbb{E}_{x\sim\tilde{p}_{\theta}}xx^{\top}(x^{\top}v)}_{F}=O(\psi_{n}).\label{eq:K_n_psi_n}
\end{equation}
Hence, we have that
\begin{align*}
&\norm{\nabla^{2}f^{*}(x_{\theta})^{-\frac{1}{2}}D^{3}f^{*}(x_{\theta})[h]\nabla^{2}f^{*}(x_{\theta})^{-\frac{1}{2}}}_{F}\\
&\quad =O(\psi_{n})\norm{\nabla^{2}f^{*}(x_{\theta})^{-\frac{1}{2}}h}_{2}=O(\psi_{n})\|h\|_{x_{\theta}}.
\end{align*}
This proves the lemma for the entropic barrier (recall that the entropic
barrier is $f^{*}$ instead of $f$).
For the universal barrier, first we recall that the polar of a convex
set $K$ is $K^{\circ}(x)=\left\{ z:z^{\top}(y-x)\le1\quad\forall y\in K\right\} $
and the barrier function is
\[
\Phi(x)=\log\mathrm{vol}(K^{\circ}(x)).
\]
Its derivatives have the following identities \citep[Page 52]{nesterov1994interior}.
Here the random point $y$ is drawn uniformly from the polar $K^{\circ}(x)$.
\begin{align*}
\nabla^{2}\Phi(x)= & (n+2)(n+1)\mathbb{E} yy^{\top}-(n+1)^{2}\mathbb{E} y\mathbb{E} y^{\top},\\
D\nabla^{2}\Phi(x)[h]= & -(n+1)(n+2)(n+3)\mathbb{E} yy^{\top}(y^{\top}h)\\
&+(n+1)^{2}(n+2)\mathbb{E} yy^{\top}\cdot\mathbb{E} y^{\top}h\\
& +2(n+1)^{2}(n+2)\mathbb{E} y(y^{\top}h)\cdot\mathbb{E} y^{\top}\\
&-2(n+1)^{3}\mathbb{E} y\cdot\mathbb{E} y^{\top}\cdot\mathbb{E} y^{\top}h
\end{align*}
Letting $\mu=\mathbb{E} y$, we can rewrite the derivatives as follows:
\begin{align*}
\nabla^{2}\Phi(x)= & (n+2)(n+1)\mathbb{E}(y-\mu)(y-\mu)^{\top}+(n+1)\mu\mu^{\top}\\
D\nabla^{2}\Phi(x)[h]= & -(n+1)(n+2)(n+3)\mathbb{E}(y-\mu)(y-\mu)^{\top}(y-\mu)^{\top}h\\
& -2(n+2)(n+1)(\mathbb{E}(y-\mu)(y-\mu)^{\top}\mu^{\top}h\\
& +\mathbb{E}\mu(y-\mu)^{\top}(y-\mu)^{\top}h+\mathbb{E}(y-\mu)\mu^{\top}(y-\mu)^{\top}h)\\
& -2(n+1)\mu\mu^{\top}\mu^{\top}h.
\end{align*}
Without loss of generality, we assume $\nabla^{2}\Phi(x)=I$. Then,
we have
\[
(n+2)(n+1)\mathbb{E}(y-\mu)(y-\mu)^{\top}\preceq I\qquad\text{and}\qquad(n+1)\mu\mu^{\top}\preceq I.
\]
For the first term, \eqref{eq:K_n_psi_n} shows that
\[
\|(n+1)(n+2)(n+3)\mathbb{E}(y-\mu)(y-\mu)^{\top}(y-\mu)^{\top}h\|_{F}=O(\psi_{n}).
\]
The Frobenius norms of the next three terms are bounded by
\[
2\left|\mu^{\top}h\right|\norm{(n+2)(n+1)\mathbb{E}(y-\mu)(y-\mu)^{\top}}_{F}\le2\sqrt{n}\norm{\mu}\le2
\]
and so is the last term:
\[
2\norm{(n+1)\mu\mu^{\top}}_{F}\left|\mu^{\top}h\right|\le2.
\]
\end{proof}
To conclude this section, we remark that the universal and entropic barriers do \emph{not} satisfy our symmetry condition. Consider a rotational cone $C=\left\{ x\,:\,\sum_{i=2}^{n}x_{i}^{2}\le x_{1}^{2},\,0\le x_{1} \le 1\right\} $ and any point $x=(x_{1},0,\ldots,0)$. Then the symmetrized body
around $x$, namely $K=C\cap(2x-C)$, has the property that (a) the
John ellipsoid satisfies $E\subseteq K\subseteq\sqrt{n}E$ (as it does
for any symmetric convex body) and (b) the inertial ellipsoid has
a sandwiching ratio of $n$, proving that $\bar{\nu}\ge n=\Omega(\nu^{2}).$
For the entropic barrier, we have a similar result because multiplying
the indicator function of this symmetric convex body with an exponential
function of the form $e^{-c^{T}x}$ still has the same property for
the inertial ellipsoid. This example highlights the advantages of
barriers with John-like ellipsoids (log barrier, LS barrier) vs Inertia-like
ellipsoids (universal, entropic).
\appendix
\section{Proofs}
\subsection{Proof of Lemma \ref{lem:global_self_concordant}}\label{proof:2}
\begin{proof}
Let $h=y-x$, $x_{t}=x+th$ and $\phi(t)=h^{\top}\mathbf{H}(x_{t})h$.
Then,
\[
\left|\phi'(t)\right|=\left|h^{\top}\frac{d}{dt}\mathbf{H}(x_{t})h\right|\leq2\|h\|_{x_{t}}^{3}=2\phi(t)^{3/2}.
\]
Hence, we have $\left|\frac{d}{dt}\frac{1}{\sqrt{\phi(t)}}\right|\leq1$.
Therefore, $\frac{1}{\sqrt{\phi(t)}}\geq\frac{1}{\sqrt{\phi(0)}}-t$
and,
\begin{equation}
\phi(t)\leq\frac{\phi(0)}{(1-t\sqrt{\phi(0)})^{2}}.\label{eq:phi_bound}
\end{equation}
Now we fix any $v$ and define $\psi(t)=v^{\top}\mathbf{H}(x_{t})v$. Then,
\[
\left|\psi'(t)\right|=\left|v^{\top}\frac{d}{dt}\mathbf{H}(x_{t})v\right|\leq2\|h\|_{x_{t}}\|v\|_{x_{t}}^{2}=2\sqrt{\phi(t)}\psi(t).
\]
\]
Using \eqref{eq:phi_bound} at the end, we have
\[
\left|\frac{d}{dt}\ln\psi(t)\right|\leq\frac{2\sqrt{\phi(0)}}{(1-t\sqrt{\phi(0)})}.
\]
Integrating both sides from $0$ to $1$,
\[
\left|\ln\frac{\psi(1)}{\psi(0)}\right|\leq\int_{0}^{1}\frac{2\sqrt{\phi(0)}}{(1-t\sqrt{\phi(0)})}dt=2\ln(\frac{1}{1-\sqrt{\phi(0)}}).
\]
The result follows from this with $\psi(1)=v^{\top}\mathbf{A}thbf{H}(y)v$, $\psi(0)=v^{\top}\mathbf{A}thbf{H}(x)v$,
and $\phi(0)=\|x-y\|_{x}^{2}$.
\end{proof}
\subsection{Proof of Lemma \ref{lem:global_strongly_self_concordant}}\label{proof:4}
\begin{proof}
Let $x_{t}=(1-t)x+ty$. Then, we have
\begin{align*}
&\|\mathbf{H}(x)^{-\frac{1}{2}}(\mathbf{H}(y)-\mathbf{H}(x))\mathbf{H}(x)^{-\frac{1}{2}}\|_{F} \\
&\quad =\int_{0}^{1}\|\mathbf{H}(x)^{-\frac{1}{2}}\frac{d}{dt}\mathbf{H}(x_{t})\mathbf{H}(x)^{-\frac{1}{2}}\|_{F}dt.
\end{align*}
We note that $\mathbf{H}$ is self-concordant. Hence, Lemma \ref{lem:global_self_concordant}
shows that
\begin{align*}
&\|\mathbf{H}(x)^{-\frac{1}{2}}\frac{d}{dt}\mathbf{H}(x_{t})\mathbf{H}(x)^{-\frac{1}{2}}\|_{F}^{2}\\
&\quad =\mathrm{Tr}\mathbf{H}(x)^{-1}\left(\frac{d}{dt}\mathbf{H}(x_{t})\right)\mathbf{H}(x)^{-1}\left(\frac{d}{dt}\mathbf{H}(x_{t})\right)\\
&\quad \leq\frac{1}{\left(1-\Vert x-x_{t}\Vert_{x}\right)^{4}}\mathrm{Tr}\mathbf{H}(x_{t})^{-1}\left(\frac{d}{dt}\mathbf{H}(x_{t})\right)\mathbf{H}(x_{t})^{-1}\left(\frac{d}{dt}\mathbf{H}(x_{t})\right)\\
&\quad \leq\frac{4}{\left(1-\Vert x-x_{t}\Vert_{x}\right)^{4}}\|x-x_{t}\|_{x_{t}}^{2}\\
&\quad \leq\frac{4}{\left(1-\Vert x-x_{t}\Vert_{x}\right)^{6}}\|x-x_{t}\|_{x}^{2}
\end{align*}
where we used the strong self-concordance in the second inequality and Lemma \ref{lem:global_self_concordant}
again for the last inequality. Hence,
\begin{align*}
\|\mathbf{H}(x)^{-\frac{1}{2}}(\mathbf{H}(y)-\mathbf{H}(x))\mathbf{H}(x)^{-\frac{1}{2}}\|_{F} & \leq\int_{0}^{1}\frac{2\|x-x_{t}\|_{x}}{\left(1-\Vert x-x_{t}\Vert_{x}\right)^{3}}dt\\
& =\int_{0}^{1}\frac{2t\|x-y\|_{x}}{(1-t\|x-y\|_{x})^{3}}dt\\
& =\frac{\|x-y\|_{x}}{(1-\|x-y\|_{x})^{2}}.
\end{align*}
\end{proof}
\end{document} |
\begin{document}
\title[measurable and topological dynamics]
{On the interplay between measurable
and topological dynamics}
\author{E. Glasner and B. Weiss}
\address{Department of Mathematics\\
Tel Aviv University\\
Tel Aviv\\
Israel}
\email{glasner@math.tau.ac.il}
\address{Institute of Mathematics\\
Hebrew University of Jerusalem\\
Jerusalem\\
Israel}
\email{weiss@math.huji.ac.il}
\begin{date}
{December 7, 2003}
\end{date}
\maketitle
\tableofcontents
\setcounter{secnumdepth}{2}
\setcounter{section}{0}
\section*{Introduction}
Recurrent - wandering, conservative - dissipative, contracting -
expanding, deterministic - chaotic, isometric - mixing, periodic -
turbulent, distal - proximal, the list can go on and on. These
(pairs of) words --- all of which can be found in the dictionary
--- convey dynamical images and were therefore adopted by
mathematicians to denote one or another mathematical aspect of a
dynamical system.
The two sister branches of the theory of dynamical systems called
{\em ergodic theory} (or {\em measurable dynamics}) and {\em
topological dynamics} use these words to describe different but
parallel notions in their respective theories and the surprising
fact is that many of the corresponding results are rather similar.
In the following article we have tried to demonstrate both the
parallelism and the discord between ergodic theory and topological
dynamics. We hope that the subjects we chose to deal with will
successfully demonstrate this duality.
The table of contents gives a detailed listing of the topics
covered. In the first part we have detailed the strong analogies
between ergodic theory and topological dynamics as shown in the
treatment of recurrence phenomena, equicontinuity and weak
mixing, distality and entropy. In the case of distality the
topological version came first and the theory of measurable
distality was strongly influenced by the topological results.
For entropy theory the influence clearly was in the opposite
direction. The prototypical result of the second part is the
statement that any abstract probability measure preserving
system can be represented as a continuous transformation of a
compact space, and thus in some sense ergodic theory embeds into
topological dynamics.
We have not attempted in any way to be either systematic or
comprehensive. Rather our choice of subjects was motivated by
taste, interest and knowledge and to great extent is random.
We did try to make the survey accessible to non-specialists,
and for this reason we deal throughout with the simplest case
of actions of $\mathbb{Z}$.
Most of the discussion carries over to noninvertible mappings
and to $\mathbb{R}$ actions. Indeed much of what we describe can be
carried over to general amenable groups.
Similarly, we have for the most part given rather complete definitions.
Nonetheless, we did take advantage of the fact that this article
is part of a handbook and for some of the definitions,
basic notions and well known results we refer the reader to the
earlier introductory chapters of volume I. Finally,
we should acknowledge the fact that we made use of parts of
our previous expositions
\cite{W4} and \cite{G}.
We made the writing of this survey more pleasurable for us by the
introduction of a few original results. In particular the
following results are entirely or partially new. Theorem
\ref{periodic} (the equivalence of the existence of a Borel
cross-section with the coincidence of recurrence and periodicity),
most of the material in Section 4 (on topological mild-mixing),
all of subsection 7.4 (the converse side of the local variational
principle) and subsection 7.6 (on topological determinism).
{\large{\part{Analogies}}}
\section{Poincar\'e recurrence vs. Birkhoff's
recurrence}\label{Sec-Poin}
\subsection{Poincar\'e recurrence theorem and
topological recurrence}
The simplest dynamical systems are the periodic ones.
In the absence of periodicity the crudest approximation
to this is approximate periodicity where instead of
some iterate $T^nx$ returning exactly to $x$ it returns
to a neighborhood of $x$. The first theorem in abstract
measure dynamics is Poincar\'{e}'s recurrence theorem
which asserts that for a finite measure preserving
system $(X,\mathcal{B},\mu,T)$ and any measurable set $A$,
$\mu$-a.e. point of $A$ returns to $A$
(see \cite[Theorem 4.3.1]{HKat}).
The proof of this
basic fact is rather simple and depends on identifying
the set of points $W\subset A$ that never return to $A$.
These are called the {\bf wandering points\/} and their
measurability follows from the formula
$$
W= A \cap \left( \bigcap_{k=1}^\infty T^{-k}
(X\setminus A)\right).
$$
Now for $n\ge 0$, the sets $T^{-n} W$ are pairwise
disjoint since $x\in T^{-n}W$ means that the forward
orbit of $x$ visits $A$ for the last time at moment $n$.
Since $\mu(T^{-n}W)=\mu(W)$ it follows that $\mu(W)=0$
which is the assertion of Poincar\'{e}'s theorem.
Noting that $A\cap T^{-n}W$ describes the points of $A$
which visit $A$ for the last time at moment $n$, and that
$\mu(\cup_{n=0}^\infty T^{-n} W)=0$ we have established the
following stronger formulation of Poincar\'{e}'s theorem.
\begin{thm}
For a finite measure preserving system $(X,\mathcal{B},\mu,T)$
and any measurable set $A$,
$\mu$-a.e. point of $A$ returns to $A$ infinitely often.
\end{thm}
Note that only sets of the form $T^{-n}B$ appeared in the
above discussion so that the invertibility of $T$ is not
needed for this result. In the situation of classical dynamics,
which was Poincar\'{e}'s main interest, $X$ is also equipped
with a separable metric topology. In such a situation we can
apply the theorem to a refining sequence of partitions $\mathcal{P}_m$,
where each $\mathcal{P}_m$ is a countable partition into sets of
diameter at most $\frac{1}{m}$. Applying the theorem to a fixed
$\mathcal{P}_m$ we see that $\mu$-a.e. point comes to within
$\frac{1}{m}$ of itself, and since the intersection of a
sequence of sets of full measure has full measure, we deduce
the corollary that $\mu$-a.e. point of $X$ is recurrent.
This is the measure theoretical path to the recurrence
phenomenon which depends on the presence of a finite invariant
measure. The necessity of such a measure is clear from considering
translation by one on the integers. This system is dissipative,
in the sense that no recurrence takes place even though there is
an infinite invariant measure.
{\begin{center}{$\divideontimes$}\end{center}}
There is also a topological path to recurrence which was developed
in an abstract setting by G. D. Birkhoff. Here the above example
is eliminated by requiring that the topological space $X$,
on which our continuous transformation $T$ acts, be compact.
It is possible to show that in this setting a finite $T$-invariant
measure always exists, and so we can retrieve the measure
theoretical picture, but a purely topological discussion will
give us better insight.
A key notion here is that of minimality. A nonempty closed,
$T$-invariant set $E\subset X$, is said to be {\bf minimal\/}
if $F\subset E$, closed and $T$-invariant implies
$F=\emptyset$ or $F=E$. If $X$ itself is a minimal set we
say that the system $(X,T)$\ is a {\bf minimal system}.
Fix now a point $x_0\in X$ and consider
$$
\omega(x_0)=\bigcap_{n=1}^\infty\overline{\{T^k x_0:
k\ge n\}}.
$$
The points of $\omega(x_0)$ are called {\bf $\omega$-limit
points of $x_0$\/}, ($\omega =$ last letter of the Greek
alphabet) and in the separable case $y\in \omega(x_0)$ if
and only if there is some sequence $k_i\to \infty$ such that
$T^{k_i}x_0 \to y$. If $x_0\in \omega(x_0)$ then $x_0$ is called
a {\bf positively recurrent point}.
Clearly $\omega(x_0)$ is a closed and $T$-invariant set.
Therefore, in any nonempty minimal set $E$, any point
$x_0\in E$ satisfies $x_0\in \omega(x_0)$ and thus we see
that minimal sets have recurrent points.
In order to see that compact systems $(X,T)$\ have recurrent points it
remains to show that minimal sets always exist. This is an immediate
consequence of Zorn's lemma applied to the family of nonempty
closed $T$-invariant subsets of $X$. A slightly more
constructive proof can be given when $X$ is a compact and
separable metric space. One can then list a sequence of
open sets $U_1, U_2, \dots$ which generate the topology,
and perform the following algorithm:
\begin{enumerate}
\item
set $X_0=X$,
\item
for $i=1,2, \dots$, \newline
if $\bigcup_{\,n=-\infty}^{\,\infty}
T^{-n}U_i\supset X_{i-1}$
put $X_i= X_{i-1}$, else put
$X_i= X_{i-1}\setminus \bigcup_{\,n=-\infty}^{\,\infty} T^{-n}U_i$.
\end{enumerate}
Note that each $X_i$ is nonempty and closed and thus
$X_\infty=\bigcap_{\,i=0}^{\,\infty} X_i$ is nonempty.
It is clearly $T$-invariant and for any $U_i$, if $U_i\cap X_\infty
\ne\emptyset$ then
$\bigcup_{\,-\infty}^{\,\infty} T^{-n}(U_i\cap X_\infty)
=X_\infty$, which shows that $(X_\infty, T)$ is minimal.
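The following elementary example (not taken from the discussion above) shows that, although compactness guarantees minimal sets and hence recurrent points, these may be very scarce:

```latex
% A compact system whose only minimal set is a fixed point.
Let $X=[0,1]$ and $Tx=x/2$. For every $x_0\in X$ we have
$T^n x_0 = x_0/2^n \to 0$, so $\omega(x_0)=\{0\}$. Thus $\{0\}$ is
the unique minimal set, and the fixed point $0$ is the only
recurrent point of the system.
```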
\subsection{The existence of Borel cross-sections}
There is a deep connection between recurrent points in
the topological context and ergodic theory. To see this
we must consider quasi-invariant measures.
For these matters it is better to enlarge the scope and
deal with continuous actions of $\mathbb{Z}$, generated by $T$,
on a {\em complete separable metric space\/} $X$.
A probability measure $\mu$ defined on the Borel subsets
of $X$ is said to be {\bf quasi-invariant\/} if $T\cdot \mu
\sim \mu$. Define such a system $(X,\mathcal{B},\mu,T)$ to be
{\bf conservative\/} if for any measurable set $A$,
$TA\subset A$ implies $\mu(A\setminus TA)=0$.
It is not hard to see that the conclusion of Poincar\'{e}'s
recurrence theorem holds for such systems; i.e.
if $\mu(A)>0$, then $\mu$-a.e. $x$ returns
to $A$ infinitely often. Thus once again $\mu$-a.e. point
is topologically recurrent. It turns out now that the
existence of a single topologically recurrent point
implies the existence of a non-atomic conservative
quasi-invariant measure. A simple proof of this fact
can be found in \cite{KW} for the case when $X$ is compact
--- but the proof given there is equally valid for complete
separable metric spaces. In this sense the phenomenon of
Poincar\'{e} recurrence and topological recurrence are
``equivalent" with each implying the other.
A Borel set $B\subset X$ such that each orbit
intersects $B$ in exactly one point is called
{\bf a Borel cross-section\/} for the system $(X,T)$.
If a Borel cross-section exists, then no
non-atomic conservative quasi-invariant measure can
exist. In \cite{Wei} it is shown that the converse is also
valid --- namely if there are no conservative quasi-invariant
measures then there is a Borel cross-section.
Note that the periodic points of $(X,T)$\ form a Borel subset
for which a cross-section always exists, so that we can
conclude from the above discussion the following statement
in which no explicit mention is made of measures.
\begin{thm}\label{periodic}
For a system $(X,T)$, with $X$ a completely metrizable
separable space, there exists a Borel cross-section
if and only if the only recurrent points are the
periodic ones.
\end{thm}
\begin{rem}
Already in \cite{Glimm} as well as in \cite{Eff} one finds
many equivalent conditions for the existence of a Borel
cross-section for a system $(X,T)$. However, one does not find
there explicit mention of conditions in terms of recurrence.
Silvestrov and Tomiyama \cite{ST} established the theorem
in this formulation for $X$ compact (using $C^*$-algebra methods).
We thank A. Lazar for drawing our attention to their paper.
\end{rem}
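Two standard examples, included here for illustration, show the two sides of Theorem \ref{periodic}:

```latex
% Two extremes for the cross-section criterion.
For the translation $Tn=n+1$ on $X=\mathbb{Z}$ (a complete separable
metric space) there are no recurrent points at all, so the
hypothesis of Theorem \ref{periodic} holds vacuously; indeed
$B=\{0\}$ is a Borel cross-section, every orbit meeting it exactly
once. For an irrational rotation of the circle, on the other hand,
every point is recurrent and none is periodic, so by the theorem no
Borel cross-section can exist (equivalently, the rotation carries a
non-atomic conservative quasi-invariant measure, namely Lebesgue
measure).
```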
\subsection{Recurrence sequences and Poincar\'e sequences}
We will conclude this section with a discussion of
recurrence sequences and Poincar\'e sequences. First for
some definitions.
Let us say that $D$ is a {\bf recurrence set} if for any
dynamical system $(Y, T)$ with compatible metric $\rho$ and
any $\epsilon >0$ there is a point $y_0$ and a $d \in D$ with
$$
\rho(T^dy_0,\ y_0)< \epsilon.
$$
Since any system contains minimal sets it suffices to restrict
attention here to minimal systems. For minimal systems the set of
such $y$'s for a fixed $\epsilon$ is a dense open set.
To see this fact, let $U$ be an open set. By the minimality there
is some $N$ such that for any $y \in Y$, and some $0 \leq n \leq
N$, we have $T^ny \in U$. Using the uniform continuity of $T^n$,
we find now a $\delta >0$ such that if $\rho(u,\ v) < \delta$ then
for all $0 \leq n \leq N$
$$
\rho(T^nu,\ T^n v)< \epsilon.
$$
Now let $z_0$ be a point in $Y$ and $d_0 \in D$ such that
\begin{equation}\label{d}
\rho(T^{d_0}z_0,\ z_0) < \delta.
\end{equation}
For some $0 \leq n_0 \leq N$ we have $T^{n_0}z_0 = y_0 \in U$ and
from \eqref{d} we get $\rho(T^{d_0}y_0,\ y_0) < \epsilon$. Thus the
points that $\epsilon$-return form an open dense set. Intersecting over
$\epsilon\to 0$ gives a dense $G_\delta$ in $Y$ of points $y$ for
which
$$
\inf_{d\in D} \ \rho(T^dy,\ y)=0.
$$
Thus there are points which actually recur along times drawn from
the given recurrence set.
A nice example of a recurrence set is the set of squares.
To see this it is easier to prove a stronger property which is
the analogue in ergodic theory of recurrence sets.
\begin{defn}
A sequence $\{s_j\}$ is said to be a
{\bf Poincar\'e sequence\/} if for any finite measure preserving system
$(X,\ \mathcal{B} ,\ \mu,\ T)$ and any $B \in \mathcal{B}$ with
positive measure we have
$$
\mu ( T ^{s_j} B \cap B ) > 0 \qquad {\text{for some $ s_j$
in the sequence.}}
$$
\end{defn}
Since any minimal topological system $(Y,T)$ carries a finite
invariant measure $\mu$ with global support, any Poincar\'e
sequence is a recurrence sequence. Indeed for any presumptive
constant $b>0$ which would witness the non-recurrence of $\{s_j\}$
for $(Y,T)$, there would have to be an open set $B$
with diameter less than $b$ and having positive $\mu$-measure such
that $T ^{s_j} B \cap B$ is empty for all $\{s_j\}$.
Here is a sufficient condition for a sequence to be a Poincar\'e
sequence:
\begin{lem}
If for every $\alpha \in (0,\ 2 \pi)$
$$
\lim_{n \to \infty}\ \frac {1}{n}\ \sum^{n}_{k=1}\
e^{i \alpha s_k}\ =\ 0
$$
then $\{s_k\}_1^\infty$ is a Poincar\'e sequence.
\end{lem}
\begin{proof}
Let $(X,\ \mathcal{B} ,\ \mu,\ T)$ be a measure preserving
system and let $U$ be the unitary operator defined on $L^2(X,\
\mathcal{B},\ \mu)$ by the action of $T$, i.e.
$$
(Uf)(x)= f(Tx).
$$
Let $H_0$ denote the subspace of invariant functions and
for a set of positive measure $B$, let $f_0$ be the projection of
$1_B$ on the invariant functions. Since this can also be seen as
a conditional expectation with respect to the $\sigma$-algebra of
invariant sets, $f_0 \geq 0$ and is not zero. Now since $\mathbf{1}_B -
f_0$ is orthogonal to the space of invariant functions its
spectral measure with respect to $U$ doesn't have any atoms at
$\{0\}$. Thus from the spectral representation we deduce that in
$L^2$-norm
$$
\left| \left| \frac {1}{n}\ \sum_1^n\ U^{s_k}
(1_B-f_0)\right|\right|_{L^2} \longrightarrow 0
$$
or
$$
\left| \left| \left( \frac {1}{n} \sum^n_1 \ U ^{s_k}\ 1_B
\right)-f_0 \right|\right|_{L^2} \longrightarrow 0
$$
and integrating against $1_B$ and using the fact that $f_0$ is the
projection of $1_B$ we see that
$$
\lim_{n \to \infty}\ \frac {1}{n}\ \sum^n_1\ \mu(B \cap
T^{-s_k}B) = \|f_0\|^2 >0
$$
which clearly implies that $\{s_k\}$ is a Poincar\'e sequence.
\end{proof}
The proof we have just given is in fact von Neumann's original
proof of the mean ergodic theorem. He used the fact that $\mathbb
{N}$ satisfies the assumptions of the lemma, which is Weyl's
famous theorem on the equidistribution of $\{n \alpha\}$.
Returning to the squares, Weyl also showed that $\{n^2 \alpha\}$ is
equidistributed for all irrational $\alpha$. For rational $\alpha$
the exponential sum in the lemma need not vanish; however, the
recurrence along squares for the rational part of the spectrum is
easily verified directly so that we can conclude that indeed the
squares are a Poincar\'e sequence and hence a recurrence sequence.
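The verification for the rational part of the spectrum, alluded to above, can be sketched as follows (a standard computation, spelled out here for convenience):

```latex
% Recurrence along the squares for a rational eigenvalue.
Suppose $U_T f = e^{2\pi i p/q} f$ is an eigenfunction with rational
frequency $p/q$. Taking the squares $s_j=(jq)^2$ we get
$$
U_T^{s_j} f \;=\; e^{2\pi i p (jq)^2/q}\, f \;=\; e^{2\pi i pqj^2} f
\;=\; f ,
$$
since $pqj^2$ is an integer. Thus the rational part of the spectrum
recurs exactly along a subsequence of the squares, while the lemma,
via Weyl's equidistribution of $\{n^2\alpha\}$ for irrational
$\alpha$, handles the remaining part of the spectrum.
```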
The converse is not always true, i.e. there are recurrence
sequences that are not Poincar\'e sequences. This was first
shown by I. Kriz \cite{Kr} in a beautiful example
(see also \cite[Chapter 5]{W4}). Finally here is a
simple problem.
{\bf Problem:}\ If $D$ is a recurrence set for all
circle rotations, is it a recurrence set for all systems?
A little bit of evidence for a positive answer to that problem
comes from looking at a slightly different characterization of
recurrence sets. Let $\mathcal{N} $ denote the collection of sets of
the form
$$
N(U,\ U)=\{n:\ T^{-n}U \cap U \neq \emptyset\},\qquad (U\ {\text{open
and nonempty}}),
$$
where $T$ is a minimal transformation. Denote by $\mathcal{N}^*$ the
subsets of $\mathbb {N}$ that have a non-empty intersection with every
element of $\mathcal{N} $. Then $\mathcal{N}^*$ is exactly the class of
recurrence sets. For minimal transformations, another description
of $N(U,\ U)$ is obtained by fixing some $y_0$ and denoting
$$
N(y_0,\ U)=\{n:\ T^ny_0 \in U\}.
$$
Then $N(U,\ U)=N(y_0,\ U)-N(y_0,\ U)$. Notice that the minimality
of $T$ implies that $N(y_0,\ U)$ is a {\bf syndetic} set (a set
with bounded gaps) and so any $N(U,\ U)$ is the set of differences
of a syndetic set. Thus $\mathcal{N}$ consists essentially of all sets
of the form $S\ -\ S$ where $S$ is a syndetic set.
Given a finite set of real numbers
$\{\lambda_1,\lambda_2,\dots,\lambda_k\}$ and $\epsilon>0$
set
$$
V(\lambda_1,\lambda_2,\dots,\lambda_k;\epsilon)=
\{n\in \mathbb{Z}: \max_{j} \{\|n\lambda_j \|<\epsilon\}\},
$$
where $\|\cdot\|$ denotes the distance to the closest integer.
The collection of such sets forms a basis of neighborhoods at zero
for a topology on $\mathbb{Z}$ which makes it a topological group.
This topology is called the {\bf Bohr topology}.
(The corresponding uniform structure is
totally bounded and the completion of $\mathbb{Z}$
with respect to it is a compact topological group
called the {\bf Bohr compactification} of $\mathbb{Z}$.)
Veech proved in \cite{Veech} that any set
of the form $S\ -\ S$ with $S\subset \mathbb{Z}$ syndetic contains
a neighborhood of zero in the Bohr topology
{\em up to a set of zero density}.
It is not known if in that statement the zero density set can be
omitted. If it could then a positive answer to the above problem
would follow (see also \cite{G26}).
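For rotations the sets $N(U,U)$ can be computed explicitly, which makes the connection with the Bohr topology transparent (a routine computation, included as an illustration):

```latex
% For an irrational rotation the sets $N(U,U)$ are exactly the
% basic Bohr neighborhoods of zero.
Let $Tz=e^{2\pi i\alpha}z$ on $S^1$ with $\alpha$ irrational, and
let $U$ be an open arc of normalized length $\epsilon$. Since
$T^{-n}U$ is $U$ rotated by $-n\alpha$, the two arcs overlap
precisely when $\|n\alpha\|<\epsilon$, i.e.
$$
N(U,U)=\{n:\ T^{-n}U \cap U \neq \emptyset\}
      =\{n:\ \|n\alpha\|<\epsilon\}
      =V(\alpha;\epsilon).
$$
Thus for circle rotations the collection $\mathcal{N}$ consists of
the basic Bohr neighborhoods of zero, which explains why the problem
above concerns exactly the gap between circle rotations and general
minimal systems.
```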
\section{The equivalence of weak mixing and continuous
spectrum}\label{Sec-WM}
In order to analyze the structure of a dynamical system
$\mathbf{X}$ there are, a priori, two possible approaches. In the first
approach one considers the collection of {\bf subsystems} $Y\subset X$
(i.e. closed $T$-invariant subsets)
and tries to understand how $X$ is built up by these subsystems.
In the other approach one is
interested in the collection of {\bf factors}
$X\overset{\pi}{\to}Y$ of the system $\mathbf{X}$.
In the measure theoretical case
the first approach leads to the ergodic decomposition and
thereby to the study of the ``indecomposable" or ergodic components
of the system. In the topological setup there is, unfortunately,
no such convenient decomposition describing the system in
terms of its indecomposable parts and one has to use some less
satisfactory substitutes. Natural candidates for indecomposable
components of a topological dynamical system are the ``orbit
closures" (i.e. the topologically transitive subsystems) or
the ``prolongation" cells (which often coincide with the
orbit closures), see \cite{AG0}. The minimal subsystems are of particular
importance here. Although we cannot say, in any reasonable sense,
that the study of the general system can be reduced to that
of its minimal components, the analysis of the minimal systems
is nevertheless an important step towards a better understanding
of the general system.
This reasoning leads us to the study of the collection of
indecomposable systems (ergodic systems
in the measure category and transitive or minimal systems
in the topological case) and their factors. The simplest
and best understood indecomposable dynamical systems are the
ergodic translations of a compact monothetic group (a
cyclic permutation on $\mathbb{Z}_p$ for a prime number $p$,
the ``adding machine" on $\prod_{n=0}^\infty \mathbb{Z}_2$,
an irrational rotation $z\mapsto e^{2\pi i\alpha}z$ on
$S^1=\{z\in \mathbb{C}:|z|=1\}$ etc.). It is not hard to show that
this class of ergodic actions is characterized as those
dynamical systems which admit a model
$(X,\mathcal{X},\mu,T)$ where $X$ is a compact metric space,
$T:X\to X$ a surjective isometry and $\mu$ is $T$-ergodic.
We call these systems
{\bf Kronecker\/} or {\bf isometric\/} systems.
Thus our first question
concerning the existence of factors should be: given
an ergodic dynamical system $\mathbf{X}$ which are its Kronecker factors?
Recall that a measure dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$
is called {\bf weakly mixing\/} if the product system
$(X\times X,\mathcal{X}\otimes\mathcal{X},\mu\times \mu,T\times T)$
is ergodic. The following classical theorem is
due to von Neumann. The short and elegant proof we
give was suggested by Y. Katznelson.
\begin{thm}\label{easy}
An ergodic system $\mathbf{X}$ is weakly mixing iff it admits no
nontrivial Kronecker factor.
\end{thm}
\begin{proof}
Suppose $\mathbf{X}$ is weakly mixing and admits an isometric
factor. Now a factor of a weakly mixing
system is also weakly mixing and the only system which is both
isometric and weakly mixing is the trivial system (an easy
exercise).
Thus a weakly mixing system does not admit a nontrivial
Kronecker factor.
For the other direction,
if $\mathbf{X}$ is non-weakly mixing then in the product space $X\times X$
there exists a $T$-invariant measurable subset $W$
such that $0<(\mu\times \mu)(W)<1$. For every $x\in X$ let
$W(x)=\{x'\in X:(x,x')\in W\}$ and let $f_x={\mathbf{1}}_{W(x)}$,
a function in $L^\infty (\mu)$.
It is easy to check that $U_T f_x=f_{T^{-1}x}$ so that the
map $\pi:X\to L^2(\mu)$ defined by $\pi(x)=f_x, x\in X$
is a Borel factor map. Denoting
$$
\pi(X)=Y\subset L^2(\mu), \quad {\text{and}} \quad \nu=\pi_*(\mu),
$$
we now have a factor map $\pi: \mathbf{X} \to (Y,\nu)$.
Now the function $\|\pi(x)\|$ is clearly measurable
and invariant and by ergodicity it is a constant
$\mu$-a.e.; say $\|\pi(x)\|=1$.
The dynamical system $(Y,\nu)$ is thus a subsystem
of the compact dynamical system $(B,U_T)$, where
$B$ is the unit ball of the Hilbert space $L^2(\mu)$
and $U_T$ is the Koopman unitary operator induced by $T$
on $L^2(\mu)$. Now it is well known (see e.g.
\cite{G}) that a compact topologically
transitive subsystem which carries an invariant probability
measure must be a Kronecker system and our proof is complete.
\end{proof}
Concerning the terminology we used in the proof of
Theorem \ref{easy}, B. O. Koopman, a student of G. D. Birkhoff and a
co-author of both
Birkhoff and von Neumann introduced the crucial idea of
associating with a measure dynamical system
$\mathbf{X}=(X,\mathcal{X},\mu,T)$ the unitary operator
$U_T$ on the Hilbert space $L^2(\mu)$.
It is now an easy matter to
see that Theorem \ref{easy} can be re-formulated
as saying that the system $\mathbf{X}$ is weakly mixing iff
the point spectrum of the Koopman operator
$U_T$ comprises the single complex
number $1$ with multiplicity $1$.
Or, put otherwise, that the one dimensional space of constant
functions is the eigenspace corresponding to the eigenvalue
$1$ (this fact alone is equivalent to the ergodicity of the
dynamical system) and that the restriction of $U_T$
to the orthogonal complement of the space of constant functions
has a continuous spectrum.
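The spectral picture is perhaps clearest in the basic non-weakly-mixing example; the following computation is standard and is recorded here only for illustration:

```latex
% The Koopman spectrum of an irrational rotation.
For the rotation $Tx=x+\alpha \pmod 1$ with $\alpha$ irrational,
the characters $f_m(x)=e^{2\pi i m x}$, $m\in\mathbb{Z}$, satisfy
$$
U_T f_m = e^{2\pi i m\alpha}\, f_m ,
$$
so $U_T$ has pure point spectrum
$\{e^{2\pi i m\alpha}: m\in\mathbb{Z}\}$ with each eigenvalue simple.
The system is therefore ergodic (the eigenvalue $1$ is simple) but
as far from weakly mixing as possible: it coincides with its own
Kronecker factor.
```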
{\begin{center}{$\divideontimes$}\end{center}}
We now consider a topological analogue of this theorem.
Recall that a topological system $(X,T)$\ is {\bf topologically
weakly mixing\/}
when the product system $(X\times X,T\times T)$ is topologically
transitive. It is {\bf equicontinuous\/} when the family
$\{T^n:n\in \mathbb{Z}\}$ is an equicontinuous family of maps.
Again an equivalent condition is the existence of a
compatible metric with respect to which $T$ is an isometry.
And, moreover, a minimal system is equicontinuous iff
it is a minimal translation on a compact monothetic group.
We will need the following lemma.
\begin{lem}\label{cont}
Let $(X,T)$\ be a minimal system and $f:X\to \mathbb{R}$ a
$T$-invariant function with at least one point of continuity
(for example this is the case when $f$ is lower or
upper semi-continuous or more generally when it is the
pointwise limit of a sequence of continuous functions),
then $f$ is a constant.
\end{lem}
\begin{proof}
Let $x_0$ be a continuity point and $x$ an arbitrary point
in $X$. Since $\{T^n x:n \in \mathbb{Z}\}$ is dense and as the value
$f(T^n x)$ does not depend on $n$ it follows that
$f(x)=f(x_0)$.
\end{proof}
\begin{thm}\label{eswdisp}
Let $(X,T)$\ be a minimal system.
Then $(X,T)$\ is topologically weakly mixing iff it
has no non-trivial equicontinuous factor.
\end{thm}
\begin{proof}
Suppose $(X,T)$\ is minimal and topologically weakly mixing and let
$\pi:(X,T)\to(Y,T)$ be an equicontinuous factor.
If $(x,x')$ is a point whose $T\times T$ orbit
is dense in $X\times X$ then $(y,y')=(\pi(x),\pi(x'))$
has a dense orbit in $Y\times Y$. However, if $(Y,T)$\ is
equicontinuous then $Y$ admits a compatible metric
with respect to which $T$ is an isometry and the
existence of a transitive point in $Y\times Y$
implies that $Y$ is a trivial one point space.
Conversely,
assuming that $(X\times X,T\times T)$ is not transitive we will
construct an equicontinuous factor $(Z,T)$ of $(X,T)$.
As $(X,T)$ is a minimal system, there exists a $T$-invariant
probability measure $\mu$ on $X$ with full support.
By assumption there exists an open $T$-invariant subset $U$
of $X\times X$, such that ${\rm{cls\,}} U:= M \subsetneq X\times X$.
By minimality the projections of $M$ to both $X$ coordinates
are onto.
For every $y\in X$ let $M(y)=\{x\in X:(x,y)\in M\}$,
and let $f_y=\mathbf{1}_{M(y)}$ be the indicator function of the set
$M(y)$, considered as an element of $L^1(X,\mu)$.
Denote by $\pi:X\to L^1(X,\mu)$ the map $y\mapsto f_y$.
We will show that $\pi$ is a continuous
homomorphism, where we consider $L^1(X,\mu)$ as a dynamical
system with the isometric action of the group
$\{U^n_T:n\in \mathbb{Z}\}$ and $U_Tf(x)=f(Tx)$.
Fix $y_0\in X$ and $\epsilon>0$.
There exists an open neighborhood $V$ of the closed set
$M(y_0)$ with $\mu(V\setminus M(y_0))<\epsilon$.
Since $M$ is closed the set map $y\mapsto M(y), X\to 2^X$ is
upper semi-continuous and we can find a neighborhood $W$ of
$y_0$ such that $M(y)\subset V$ for every $y\in W$.
Thus for every $y\in W$ we have $\mu(M(y)\setminus M(y_0))<\epsilon$.
In particular,
$\mu(M(y))\le\mu(M(y_0))+\epsilon$ and it follows that the map
$y\mapsto \mu(M(y))$ is upper semi-continuous.
A simple computation shows that
it is $T$-invariant, hence, by Lemma \ref{cont}, a constant.
With $y_0,\epsilon$ and $V, W$ as above,
for every $y\in W$, $\mu(M(y)\setminus M(y_0))<\epsilon$ and
$\mu(M(y))=\mu(M(y_0))$, thus
$\mu(M(y)\Delta M(y_0))<2\epsilon$,
i.e., $\| f_y-f_{y_0} \|_1<2\epsilon$.
This proves the claim that $\pi$ is continuous.
Let $Z=\pi(X)$ be the image of $X$ in $L^1(\mu)$. Since
$\pi$ is continuous, $Z$ is compact.
It is easy to see that the $T$-invariance of $M$ implies that
for every $n\in \mathbb{Z}$ and $y\in X$,\
$f_{T^{-n}y}=f_y\circ T^n$ so that
$Z$ is $U_T$-invariant and
$\pi:(X,T)\to (Z,U_T)$ is a homomorphism.
Clearly $(Z,U_T)$ is minimal and equicontinuous (in fact
isometric).
\end{proof}
Theorem \ref{eswdisp} is due to
Keynes and Robertson \cite{KRo} who developed an idea of
Furstenberg, \cite{Fur2};
and independently to K. Petersen \cite{Pe1} who utilized a
previous work of W. A. Veech, \cite{Veech}.
The proof we presented is an elaboration of a work of
McMahon \cite{McM2}
due to Blanchard, Host and Maass, \cite{BHM}.
We take this opportunity to point out a curious phenomenon
which recurs again and again. Some problems in topological
dynamics --- like the one we just discussed --- whose formulation
is purely topological, can be solved using the
fact that a $\mathbb{Z}$ dynamical system always carries an invariant
probability measure, and then employing a machinery provided
by ergodic theory. In several cases this approach is the only
one presently known for solving the problem. In the present
case however purely topological proofs exist, e.g. the
Petersen-Veech proof is one such.
\section{Disjointness: measure vs. topological}\label{Sec-disj}
In the ring of integers $\mathbb{Z}$ two integers $m$ and $n$ have no
common factor if whenever $k|m$ and $k|n$ then $k=\pm 1$.
They are disjoint if $m\cdot n$ is the least common multiple of
$m$ and $n$.
Of course in $\mathbb{Z}$ these two notions coincide. In his seminal paper
of 1967 \cite{Fur3},
H. Furstenberg introduced the same notions in the
context of dynamical systems, both measure-preserving
transformations and homeomorphisms
of compact spaces, and asked whether in these categories as well
the two are equivalent.
The notion of a factor in, say the measure category, is the natural
one: the dynamical system
$\mathbf{Y}=(Y,\mathcal{Y},\nu,T)$ is a {\bf factor\/} of the dynamical system
$\mathbf{X}=(X,\mathcal{X},\mu,T)$ if there exists a measurable map
$\pi:X \to Y$ with $\pi(\mu)=\nu$ such that
$T\circ \pi= \pi \circ T$.
A common factor of two systems $\mathbf{X}$ and $\mathbf{Y}$ is thus
a third system $\mathbf{Z}$ which is a factor of both.
A {\bf joining\/} of the two systems $\mathbf{X}$ and $\mathbf{Y}$ is
any system $\mathbf{W}$ which admits both as factors and is in turn
spanned by them. According to Furstenberg's definition
the systems $\mathbf{X}$ and $\mathbf{Y}$ are {\bf disjoint\/} if the product
system $\mathbf{X}\times \mathbf{Y}$ is the only joining they admit.
In the topological category, a joining of $(X,T)$\ and $(Y,S)$
is any subsystem $W\subset X\times Y$ of the product system
$(X\times Y, T\times S)$ whose projections on both
coordinates are full; i.e. $\pi_X(W)=X$ and $\pi_Y(W)=Y$.
$(X,T)$\ and $(Y,S)$ are {\bf disjoint\/} if $X\times Y$
is the unique joining of these two systems.
It is easy to verify that if $(X,T)$\ and $(Y,S)$ are disjoint
then at least one of them is minimal. Also, if both systems
are minimal then they are disjoint iff the product
system $(X\times Y, T\times S)$ is minimal.
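The criterion just stated can be tested on rotations; the following standard example (not part of the discussion above) shows both how disjointness can hold and how it can fail:

```latex
% Disjointness of two circle rotations.
Let $R_\alpha$ and $R_\beta$ be irrational rotations of $S^1$, both
minimal. Their product is the rotation of the $2$-torus by
$(\alpha,\beta)$, which is minimal iff $1,\alpha,\beta$ are
rationally independent; by the criterion above this is exactly when
$R_\alpha$ and $R_\beta$ are disjoint. In particular a system is
never disjoint from itself: for $\beta=\alpha$ the diagonal
$\{(z,z):z\in S^1\}$ is a proper joining of $R_\alpha$ with
$R_\alpha$.
```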
In 1979, D. Rudolph, using joining techniques,
provided the first example of a pair of ergodic
measure preserving transformations with no common
factor which are not disjoint \cite{Ru1}.
In this work Rudolph laid the foundation
of joining theory. He introduced the class of
dynamical systems having ``minimal self-joinings"
(MSJ), and constructed a rank one mixing dynamical system
having minimal self-joinings of all orders.
Given a dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$
a probability measure $\lambda$ on the
product of $k$ copies of $X$ denoted $X_1,X_2,\ldots, X_k$,
invariant under the product transformation and
projecting onto $\mu$ in each coordinate
is a {\bf $k$-fold self-joining\/}. It is called an
{\bf off-diagonal\/} \label{def-offdiag} if it is
a ``graph" measure of the form
$\lambda={{\rm{gr\,}}}(\mu,T^{n_1},\dots,T^{n_k})$,
i.e. $\lambda$ is the image of $\mu$ under the map
$x\mapsto\big(T^{n_1}x,T^{n_2}x,\ldots, T^{n_k}x\big)$
of $X$ into $\prod\limits^k_{i=1}X_i$.
The joining $\lambda$ is a {\bf product of off-diagonals\/}
if there exists a
partition $(J_1,\ldots, J_m)$ of $\{1,\ldots, k\}$ such that
(i)\ For each $l$, the projection of $\lambda$ on
$\prod\limits_{i\in J_l}X_i$ is an off-diagonal,
(ii)\ The systems $\prod\limits_{i\in J_l}X_i$,
$1\le l\le m$, are independent.
An ergodic system $\mathbf{X}$ has
{\bf minimal self-joinings of order $k$\/}
if every $k$-fold ergodic self-joining of $\mathbf{X}$ is a
product of off-diagonals.
In \cite{Ru1} Rudolph shows how any dynamical system with MSJ
can be used to construct a counter example
to Furstenberg's question as well as
a wealth of other counter examples to various questions
in ergodic theory.
In \cite{JRS} del Junco, Rahe and Swanson were able
to show that the classical example
of Chac{\'o}n \cite{Chacon} has MSJ,
answering a question of Rudolph whether a weakly
but not strongly mixing system with MSJ exists.
In \cite{GW1} Glasner and Weiss provide a topological
counterexample, which also serves as a natural
counterexample in the measure category.
The example consists of two
horocycle flows which have no nontrivial common factor but are
nevertheless not disjoint.
It is based on deep results of
Ratner \cite{Rat} which provide a complete
description of the self joinings
of a horocycle flow.
More recently an even more striking example was given in the
topological category by E. Lindenstrauss, where two minimal
dynamical systems with no nontrivial factor share a common almost
1-1 extension, \cite{Lis1}.
Beginning with the pioneering works of Furstenberg and Rudolph,
the notion of joinings was exploited by many authors; Furstenberg
1977 \cite{Fur4}, Rudolph 1979 \cite{Ru1}, Veech 1982 \cite{V2},
Ratner 1983 \cite{Rat}, del Junco and Rudolph 1987 \cite{JR1},
Host 1991 \cite{Host},
King 1992 \cite{King2}, Glasner, Host and Rudolph 1992 \cite{GHR},
Thouvenot 1993 \cite{Thou2}, Ryzhikov 1994 \cite{Ry},
Kammeyer and Rudolph 1995 (2002) \cite{KR},
del Junco, Lema\'nczyk and Mentzen 1995 \cite{JLM},
and Lema\'nczyk, Parreau and Thouvenot 2000 \cite{LPT}, to mention
a few.
The negative answer to Furstenberg's question and the
consequent works on joinings and disjointness
show that in order to study the relationship between two
dynamical systems it is necessary to know all the possible joinings
of the two systems and to understand the nature of these joinings.
Some of the best known disjointness relations among families
of dynamical systems are the following:
\begin{itemize}
\item ${{\rm{id}}} \ \bot \ $ ergodic,
\item distal $\ \bot \ $ weakly mixing (\cite{Fur3}),
\item rigid $\ \bot \ $ mild mixing (\cite{FW2}),
\item zero entropy $\ \bot \ $ $K$-systems (\cite{Fur3}),
\end{itemize}
in the measure category and
\begin{itemize}
\item $F$-systems $ \ \bot \ $ minimal (\cite{Fur3}),
\item minimal distal $\ \bot \ $ weakly mixing,
\item minimal zero entropy $\ \bot \ $ minimal UPE-systems
(\cite{Bla2}),
\end{itemize}
in the topological category.
\section{Mild mixing: measure vs. topological}\label{Sec-mm}
\begin{defn}\label{def-mild}
Let $\mathbf{X}=(X,\mathcal{X},\mu,T)$ be a measure dynamical system.
\begin{enumerate}
\item
The system $\mathbf{X}$ is {\bf rigid\/} if there exists a
sequence $n_k\nearrow \infty$ such that
$$
\lim \mu
\left(T^{n_k}A\cap A\right)=\mu (A)
$$
for every measurable subset
$A$ of $X$. We say that $\mathbf{X}$ is $\{n_k\}$-{\bf rigid\/}.
\item
An ergodic system is {\bf mildly mixing\/}
if it has no non-trivial rigid factor.
\end{enumerate}
\end{defn}
These notions were introduced in \cite{FW2}.
The authors show that the mild mixing property is
equivalent to the following multiplier property.
\begin{thm}
An ergodic system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is mildly mixing
iff for every ergodic (finite or infinite) measure preserving
system $(Y,\mathcal{Y},\nu,T)$, the product system
$$
(X\times Y, \mu \times \nu, T\times T),
$$
is ergodic.
\end{thm}
Since every Kronecker system is rigid it follows from Theorem
\ref{easy} that mild mixing implies weak mixing.
Clearly strong mixing implies mild mixing.
It is not hard to construct rigid weakly mixing
systems, so that the class of mildly mixing systems
is properly contained in the class of weakly mixing
systems.
Finally there are mildly but not strongly mixing
systems; e.g. Chac{\'o}n's system is an example
(see Aaronson and Weiss \cite{AW}).
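The rigidity of Kronecker systems, used above to deduce that mild mixing implies weak mixing, can be checked directly in the model case of a rotation (a routine verification, included for illustration):

```latex
% Why Kronecker systems are rigid: the case of a rotation.
For an irrational rotation $Tx=x+\alpha \pmod 1$, choose $n_k$ with
$\|n_k\alpha\|\to 0$ (e.g.\ the denominators of the continued
fraction convergents of $\alpha$). Then $T^{n_k}\to \mathrm{id}$
uniformly, and by continuity of translation in measure
$$
\mu(T^{n_k}A\cap A)\longrightarrow \mu(A)
$$
for every measurable $A$, so the rotation is $\{n_k\}$-rigid. Hence
a mildly mixing system admits no nontrivial Kronecker factor, and
mild mixing implies weak mixing by Theorem \ref{easy}.
```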
We also have the following analytic characterization of
mild mixing.
\begin{prop}\label{ex-rigid-matrix}
An ergodic system $\mathbf{X}$ is mildly
mixing iff
$$
\limsup_{n\to\infty}\phi_f(n)<1,
$$
for every matrix coefficient $\phi_f$, where
for $f\in L^2_0(X,\mu), \|f\|=1$,\
$\phi_f(n):=
\langle U_{T^n} f,f\rangle$.
\end{prop}
\begin{proof}
If $\mathbf{X}\to \mathbf{Y}$ is a rigid factor, then there exists
a sequence $ n_i\to \infty$ such that
$U_{T^{n_i}}\to {{\rm{id}}}$ strongly on $L^2(Y,\nu)$.
For any function $f\in L^2_0(Y,\nu)$ with $\|f\|=1$,
we have $\lim_{i\to\infty}\phi_f(n_i)=1$. Conversely, if
$\lim_{i\to\infty}\phi_f(n_i)=1$ for some
$n_i\nearrow \infty$ and $f\in L^2_0(X,\mu),
\|f\|=1$, then $\lim_{i\to\infty} U_{T^{n_i}}f=f$. Clearly
$f$ can be replaced by a bounded function and we let
$A$ be the sub-algebra of $L^\infty(X,\mu)$ generated
by $\{U_{T^n}f:n \in \mathbb{Z} \}$. The algebra
$A$ defines a non-trivial factor $\mathbf{X}\to \mathbf{Y}$ such that
$U_{T^{n_i}}\to {{\rm{id}}}$ strongly on $L^2(Y,\nu)$.
\end{proof}
We say that a collection $\mathcal{F}$ of nonempty subsets of $\mathbb{Z}$ is a
{\bf family\/} if it is hereditary upward and {\bf proper\/} (i.e.
$A\subset B$ and $A\in \mathcal{F}$ implies $B\in \mathcal{F}$, and
$\mathcal{F}$ is neither empty nor all of $2^\mathbb{Z}$).
With a family $\mathcal{F}$ of nonempty subsets of $\mathbb{Z}$
we associate the {\bf dual family\/}
$$
\mathcal{F}^{*}=\{E:E\cap F\ne\emptyset, \forall \ F\in \mathcal{F}\}.
$$
It is easily verified that $\mathcal{F}^{*}$ is indeed a family.
Also, for families, $\mathcal{F}_1\subset \mathcal{F}_2\
\Rightarrow\ \mathcal{F}^{*}_1\supset \mathcal{F}^{*}_2$, and $\mathcal{F}^{**}=\mathcal{F}$.
We say that a subset $J$ of $\mathbb{Z}$ has {\bf uniform
density 1\/} if for every $0<\lambda<1$ there exists
an $N$ such that for every interval $I\subset \mathbb{Z}$
of length $> N$ we have $|J \cap I| \ge \lambda |I|$.
We denote by $\mathcal{D}$ the family of subsets of $\mathbb{Z}$
of uniform density 1.
It is also easy to see that $\mathcal{D}$ has the
finite intersection property.
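For example, the complement $J$ of the set of perfect squares belongs to
$\mathcal{D}$: any interval $I$ of length $N$ contains at most
$\sqrt{N}+1$ squares, so that
$|J\cap I|\ge N-\sqrt{N}-1\ge \lambda N$ once $N$ is sufficiently large.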
Let $\mathcal{F}$ be a family of nonempty subsets of $\mathbb{Z}$
which is closed under finite intersections
(i.e. $\mathcal{F}$ is a filter). Following
\cite{Fur5} we say that a sequence $\{x_n: n\in \mathbb{Z}\}$
in a topological space $X$ {\bf $\mathcal{F}$-converges} to a point
$x\in X$ if for every neighborhood $V$ of $x$ the
set $\{n: x_n\in V\}$ is in $\mathcal{F}$. We denote this by
$$
\mathcal{F}\,{\text -}\,\lim x_n =x.
$$
We have the following characterization
of weak mixing for measure preserving systems
which explains more clearly its name.
\begin{thm}
The dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is
weakly mixing iff
for every $A, B \in \mathcal{X}$ we have
$$
\mathcal{D}\,{\text -}\,\lim \mu(T^{-n}A\cap B)
=\mu(A)\mu(B).
$$
\end{thm}
An analogous characterization of measure theoretical
mild mixing is obtained by considering the families of $I\!P$
and $I\!P^*$ sets.
An $I\!P$-{\bf set\/} is any subset of $\mathbb{Z}$ containing a subset
of the form
$I\!P\{n_i\}= \{n_{i_1}+n_{i_2}+\cdots+n_{i_k}:i_1<i_2<\cdots<i_k\}$,
for some infinite sequence $\{n_i\}_{i=1}^\infty$.
We let $\mathcal{I}$ denote the family of $I\!P$-sets and call
the elements of the dual family $\mathcal{I}^*$, {\bf $I\!P^*$-sets\/}.
Again it is not hard to see that the family of $I\!P^*$-sets
is closed under finite intersections.
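For instance, taking $n_i=10^i$ one gets
$$
I\!P\{n_i\}=\{10^{i_1}+10^{i_2}+\cdots+10^{i_k}: i_1<i_2<\cdots<i_k\},
$$
the set of positive integers whose decimal expansion consists of $0$'s
and $1$'s only (and ends in $0$); every subset of $\mathbb{Z}$ containing
this set is an $I\!P$-set.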
For a proof of the next theorem we refer to \cite{Fur5}.
\begin{thm}
The dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is
mildly mixing iff
for every $A, B \in \mathcal{X}$ we have
$$
\mathcal{I}^*\,{\text -}\,\lim\mu(T^{-n}A\cap B)
=\mu(A)\mu(B).
$$
\end{thm}
{\begin{center}{$\divideontimes$}\end{center}}
We now turn to the topological category.
Let $(X,T)$\ be a topological dynamical system.
For two non-empty open sets $U,V\subset X$ and a point
$x\in X$ set
\begin{gather*}
N(U,V)=\{n\in \mathbb{Z}: T^n U\cap V\ne\emptyset\},\quad
N_+(U,V)=N(U,V)\cap \mathbb{Z}_+\\
{\text{and}} \qquad N(x,V)=\{n\in \mathbb{Z}: T^n x\in V\}.
\end{gather*}
Notice that sets of the form $N(U,U)$ are symmetric.
We say that $(X,T)$\ is {\bf topologically transitive\/}
(or just {\bf transitive\/}) if
$N(U,V)$ is nonempty whenever $U,V \subset X$ are two non-empty
open sets.
Using Baire's category theorem it is easy to see that (for metrizable
$X$) a system $(X,T)$\ is topologically transitive iff there exists
a dense $G_\delta$ subset $X_0\subset X$ such that ${\bar{\mathcal{O}}}_T(x)=X$
for every $x\in X_0$.
We define the family $\mathcal{F}_{\rm thick}$ of {\bf thick sets\/}
to be the collection of sets which contain arbitrary long
intervals. The dual family $\mathcal{F}_{\rm synd}=\mathcal{F}_{\rm thick}^*$
is the collection of {\bf syndetic sets\/} --- those sets $A\subset \mathbb{Z}$
such that for some positive integer $N$ the intersection
of $A$ with every interval of length $N$ is nonempty.
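For example, $\bigcup_{n\ge 1}[n!,\,n!+n]$ is thick, while $2\mathbb{Z}$
is syndetic (with $N=2$). By duality a set is syndetic iff it meets every
thick set, and a set is thick iff it meets every syndetic set.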
Given a family $\mathcal{F}$ we say that a topological
dynamical system $(X,T)$
is {\bf $\mathcal{F}$-recurrent\/} if $N(A,A)\in \mathcal{F}$
for every nonempty open set $A\subset X$.
We say that a dynamical system is {\bf $\mathcal{F}$-transitive\/}
if $N(A,B)\in \mathcal{F}$
for every nonempty open sets $A, B \subset X$.
The class of $\mathcal{F}$-transitive systems is denoted by $\mathcal{E}_{\mathcal{F}}$.
E.g. in this notation the class of {\bf topologically mixing systems\/}
is $\mathcal{E}_{\rm cofinite}$, where we call a subset $A\subset \mathbb{Z}$
co-finite when $\mathbb{Z}\setminus A$ is a finite set.
We write simply $\mathcal{E}=\mathcal{E}_{\rm{infinite}}$
for the class of {\bf recurrent
transitive\/} dynamical systems.
It is not hard to see that
when $X$ has no isolated points $(X,T)$\ is topologically transitive
iff it is recurrent transitive.
From this we then deduce that a weakly mixing system is
necessarily recurrent transitive.
In a dynamical system $(X,T)$\ a point $x\in X$ is a
{\bf wandering point\/} if there exists an open neighborhood
$U$ of $x$ such that the collection $\{T^n U: n\in \mathbb{Z}\}$ is
pairwise disjoint.
\begin{prop}\label{rec-tra}
Let $(X,T)$ be a topologically transitive dynamical system; then
the following conditions are equivalent:
\begin{enumerate}
\item
$(X,T)\in \mathcal{E}_{\text{\rm infinite}}$.
\item
The recurrent points are dense in $X$.
\item
$(X,T)$ has no wandering points.
\item
The dynamical system $(X_\infty,T)$, the one point compactification
of the integers with translation and a fixed point at infinity, is not a
factor of $(X,T)$.
\end{enumerate}
\end{prop}
\begin{proof}
1 $\Rightarrow$ 4\
If $\pi:X\to X_\infty$ is a factor
map then clearly $N(\pi^{-1}(0),\pi^{-1}(0))=\{0\}$.
4 $\Rightarrow$ 3\
If $U$ is a nonempty open wandering subset of
$X$ then $\{T^jU : j\in \mathbb{Z}\}\cup (X\setminus \bigcup
\{T^jU : j\in \mathbb{Z}\})$
is a partition of $X$. It is easy to see that this
partition defines a factor map $\pi : X\to X_\infty$.
3 $\Rightarrow$ 2\
This implication is a consequence of the following:
\begin{lem}
If the dynamical system $(X,T)$ has no wandering points then
the recurrent points are dense in $X$.
\end{lem}
\begin{proof}
For every $\delta>0$ put
$$
A_\delta=\{x\in X:\exists j\not=0,\ d(T^jx,x)<\delta\}.
$$
Clearly $A_\delta$ is an open set and we claim that it is dense. In fact
given $x\in X$ and $\epsilon>0$ there exists $j\not=0$ with
$$
T^jB_\epsilon(x)\cap B_\epsilon(x)\not=\emptyset.
$$
If $y$ is a point in this intersection then $d(T^{-j}y,y)<2\epsilon$.
Thus for $\epsilon<\delta/2$ we have $y\in A_\delta$ and $d(x,y)<\epsilon$.
Now by Baire's theorem
$$
A=\bigcap_{k=1}^\infty A_{1/k}
$$
is a dense $G_\delta$ subset of $X$ and each point in $A$ is
recurrent.
\end{proof}
2 $\Rightarrow$ 1\
Given $U,V$ nonempty open subsets of $X$ and $k\in N(U,V)$
let $U_0$ be the nonempty open subset $U_0=U\cap T^{-k}V$. Check
that $N(U_0,U_0)+k\subset N(U,V)$. By assumption $N(U_0,U_0)$ is
infinite and a fortiori so is $N(U,V)$. This completes the proof
of Proposition \ref{rec-tra}.
\end{proof}
A well known characterization of the class
$\mathbf{WM}$ of topologically weakly mixing systems is due
to Furstenberg:
\begin{thm}
$\mathbf{WM}=\mathcal{E}_{\text{\rm thick}}$.
\end{thm}
Following \cite{AG} we call the systems in
$\mathcal{E}_{\text{\rm synd}}$ {\bf topologically ergodic\/}
and write $\mathbf{TE}$ for this class.
This is a rich class as we can see from the following
claim from \cite{GW}. Here $\mathbf{MIN}$ is the class of minimal
systems and $\mathbf{E}$ the class of $E$-systems; i.e.
those transitive dynamical systems $(X,T)$\ for which there exists
an invariant probability measure with full support.
\begin{thm}\label{MIN<TE}
$\mathbf{MIN}, \mathbf{E} \subset \mathbf{TE}$.
\end{thm}
\begin{proof}
1.\
The claim for $\mathbf{MIN}$ is immediate by the well
known characterization of minimal systems:
$(X,T)$\ is minimal iff $N(x,U)$ is syndetic for
every $x\in X$ and nonempty open $U\subset X$.
2.\
Given two non-empty open sets $U,V$
in $X$, choose $k\in \mathbb{Z}$ with $T^kU\cap V\not=\emptyset$.
Next set $U_0=T^{-k}V\cap U$, and observe that $k+N(U_0,U_0)
\subset N(U,V)$. Thus it is enough to show that $N(U,U)$ is
syndetic for every non-empty open $U$.
We have to show that
$N(U,U)$ meets every thick subset $B\subset\mathbb{Z}$.
By Poincar\'e's recurrence theorem,
$N(U,U)$ meets every set of the form $A-A=\{n-m:n,m\in A\}$
with $A$ infinite.
It is an easy exercise to show that every thick set $B$ contains
some $D^+(A)=\{a_n-a_m:n>m\}$ for an infinite sequence
$A=\{a_n\}$. Thus $\emptyset\not= N(U,U)\cap \pm D^+(A)
\subset N(U,U)\cap \pm B$. Since $N(U,U)$ is symmetric,
this completes the proof.
\end{proof}
We recall (see the previous section)
that two dynamical systems $(X,T)$\ and $(Y,T)$\ are disjoint
if every closed $T\times T$-invariant subset of $X\times Y$
whose projections on $X$ and $Y$ are full, is necessarily the
entire space $X\times Y$. It follows easily that when
$(X,T)$\ and $(Y,T)$\ are disjoint, at least one of them must be minimal.
If both $(X,T)$\ and $(Y,T)$\ are minimal then they are disjoint iff
the product system is minimal. We say that $(X,T)$\ and $(Y,T)$\ are
{\bf weakly disjoint\/} when the product system
$(X\times Y, T\times T)$ is transitive.
This is indeed a very
weak sense of disjointness as there are systems
which are weakly disjoint from themselves. In fact, by definition
a dynamical system is topologically weakly mixing iff it is weakly
disjoint from itself.
If $\mathbf{P}$ is a class of recurrent transitive dynamical systems
we let $\mathbf{P}^{\curlywedge}$ be the class of recurrent transitive
dynamical systems which are weakly disjoint from every member
of $\mathbf{P}$
$$
\mathbf{P}^{\curlywedge}=\{(X,T) : X\times Y\in \mathcal{E} \ {\text
{\rm for every \ }} (Y,T)\in \mathbf{P} \}.
$$
We clearly have $\mathbf{P}\subset \mathbf{Q}\ \Rightarrow\ \mathbf{P}^{\curlywedge}\supset
\mathbf{Q}^{\curlywedge}$ and $\mathbf{P}^{\curlywedge \curlywedge \curlywedge}= \mathbf{P}^{\curlywedge}$.
For the discussion of topologically mildly mixing systems it will
be convenient to deal with families of subsets
of $\mathbb{Z}_+$ rather than $\mathbb{Z}$.
If $\mathcal{F}$ is such a family then
$$
\mathcal{E}_\mathcal{F}=
\{(X,T) : N_+(A,B) \in \mathcal{F}\
{\text {\rm for every nonempty open}}\ A,B \subset X\}.
$$
Let us call a subset of $\mathbb{Z}_+$ a $SI\!P$-{\bf set\/}
(symmetric $I\!P$-set), if it contains a subset of the form
$$
SI\!P\{n_i\}=\{n_\alpha-n_\beta > 0: \ n_\alpha,n_\beta\in I\!P\{n_i\}
\cup \{0\}\},
$$
for an $I\!P$ sequence $I\!P\{n_i\}\subset \mathbb{Z}_+$.
Denote by $\mathcal{S}$ the family of $SI\!P$ sets.
It is not hard to show that
$$
\mathcal{F}_{\rm thick}\subset\mathcal{S}\subset\mathcal{I},
$$
(see \cite{Fur5}).
Hence
$\mathcal{F}_{\rm synd} \supset \mathcal{S}^* \supset \mathcal{I}^*$,
so that
$\mathcal{E}_{{\text {\rm synd}}}\supset
\mathcal{E}_{\mathcal{S}^*}\supset
\mathcal{E}_{\mathcal{I}^*}$,
and finally
$$
\mathcal{E}^{\curlywedge}_{{\text {\rm synd}}}\subset
\mathcal{E}^{\curlywedge}_{\mathcal{S}^*}\subset
\mathcal{E}^{\curlywedge}_{\mathcal{I}^*}.
$$
\begin{defn}
A topological dynamical system
$(X,T)$\ is called {\bf topologically mildly mixing\/}
if it is in $\mathcal{E}_{\mathcal{S}^*}$ and we denote the collection of
topologically mildly mixing systems by $\mathbf{MM}=\mathcal{E}_{\mathcal{S}^*}$.
\end{defn}
\begin{thm}\label{MM=E^}
A dynamical system is in $\mathcal{E}$ iff it is weakly disjoint from
every topologically mildly mixing system:
$$
\mathcal{E}=\mathbf{MM}^{\curlywedge}.
$$
And conversely it is topologically mildly mixing iff it is
weakly disjoint from every recurrent transitive system:
$$
\mathbf{MM}=\mathcal{E}^{\curlywedge}.
$$
\end{thm}
\begin{proof}
1.\
Since $\mathcal{E}_{\mathcal{S}^*}$ is nonvacuous (for example every
topologically mixing system is in $\mathcal{E}_{\mathcal{S}^*}$), it follows
that every system in $\mathcal{E}_{\mathcal{S}^*}^{\curlywedge}$ is in $\mathcal{E}$.
Conversely, assume that $(X,T)$ is in $\mathcal{E}$
but $(X,T)\not\in \mathcal{E}_{\mathcal{S}^*}^{\curlywedge}$, and we
will arrive at a contradiction.
By assumption there exists $(Y,T)\in \mathcal{E}_{\mathcal{S}^*}$ and a nondense
nonempty open invariant subset $W\subset X\times Y$.
Then $\pi_X(W)=O$ is a nonempty open invariant subset of $X$.
By assumption $O$ is dense in $X$.
Choose open nonempty sets $U_0\subset X$ and $V_0\subset Y$
with $U_0\times V_0\subset W$.
By Proposition \ref{rec-tra} there exists a recurrent point
$x_0$ in $U_0\subset O$. Then
there is a sequence $n_i\to\infty$ such that for the
$I\!P$-sequence $\{n_\alpha\}=I\!P\{n_i\}_{i=1}^\infty$,\
$I\!P{\text{-}}\lim T^{n_\alpha}x_0=x_0$ (see \cite{Fur5}).
Choose $i_0$ such that $T^{n_\alpha}x_0\in U_0$ for $n_\alpha\in
J=I\!P\{n_i\}_{i\ge i_0}$ and set $D=SI\!P(J)$.
Given $V$ a nonempty open subset of $Y$, since $N(V_0,V)$ is an
$SI\!P^*$-set and $D$ is an $SI\!P$-set, we have:
$$
D\cap N(V_0,V) \not=\emptyset.
$$
Thus for some $\alpha,\beta$ and $v_0\in V_0$,
$$
T^{n_\alpha-n_\beta}(T^{n_\beta}x_0,v_0)=
(T^{n_\alpha}x_0,T^{n_\alpha-n_\beta}v_0)
\in (U_0\times V)\cap W.
$$
We conclude that
$$
\{x_0\}\times Y\subset {\rm{cls\,}} W.
$$
The fact that in an $\mathcal{E}$ system the recurrent points are dense
together with the observation that $\{x_0\}\times Y\subset {\rm{cls\,}} W$
for every recurrent point $x_0\in O$, imply that
$W$ is dense in $X\times Y$, a contradiction.
2.\
From part 1 of the proof we have $\mathcal{E}=\mathcal{E}_{\mathcal{S}^*}^{\curlywedge}$,
hence $\mathcal{E}^{\curlywedge}= \mathcal{E}_{\mathcal{S}^*}^{\curlywedge\curlywedge}
\supset\mathcal{E}_{\mathcal{S}^*}$.
Suppose $(X,T)\in \mathcal{E}$ but $(X,T)\not\in \mathcal{E}_{\mathcal{S}^*}$;
we will show that $(X,T)\not\in \mathcal{E}^{\curlywedge}$. There
exist $U,V\subset X$, nonempty open subsets and an $I\!P$-set
$I=I\!P\{n_i\}$ for a monotone increasing sequence $\{n_1<n_2<
\cdots\}$ with
$$
N(U,V)\cap D=\emptyset,
$$
where
$$
D=\{n_\alpha-n_\beta: n_\alpha,n_\beta\in I,\ n_\alpha>n_\beta\}.
$$
If $(X,T)$ is not topologically weakly mixing then
$X\times X\not\in \mathcal{E}$,
hence $(X,T)\not\in \mathcal{E}^{\curlywedge}$. So we can assume that
$(X,T)$ is topologically weakly mixing. Now in $X\times X$
$$
N(U\times V,V\times U)= N(U,V)\cap N(V,U)=N(U,V)\cap -N(U,V),
$$
is disjoint from $D\cup -D$, and replacing $X$ by $X\times X$
we can assume that $N(U,V)\cap (D\cup -D)=\emptyset$.
In fact, if $X\in \mathcal{E}^{\curlywedge}$ then $X\times Y \in \mathcal{E}$
for every $Y \in \mathcal{E}$, therefore
$X\times (X\times Y) \in \mathcal{E}$ and we see that
also $X\times X \in \mathcal{E}^{\curlywedge}$.
By going to a subsequence, we can assume that
$$
\lim_{k\to\infty} \Big(n_{k+1}-\sum_{i=1}^k n_i\Big) =\infty,
$$
in which case the representation
of each $n\in I$ as $n=n_\alpha=n_{i_1}+n_{i_2}+\cdots
+n_{i_k}; \ \alpha=\{i_1<i_2<\cdots<i_k\}$ is unique.
Next let $y_0\in\{0,1\}^\mathbb{Z}$ be the sequence $y_0=\mathbf{1}_I$.
Let $Y$ be the orbit closure of $y_0$ in $\{0,1\}^\mathbb{Z}$
under the shift $T$, and let $[1]=\{y\in Y:y(0)=1\}$.
Observe that
$$
N(y_0,[1])=I.
$$
It is easy to check that
$$
I\!P{\text{-}}\lim T^{n_\alpha}y_0=y_0.
$$
Thus the system $(Y,T)$ is topologically transitive with
$y_0$ a recurrent point; i.e. $(Y,T)\in \mathcal{E}$.
We now observe that
$$
N([1],[1])= N(y_0,[1])- N(y_0,[1])=I-I= D\cup -D\cup \{0\}.
$$
If $X\times Y$ is topologically transitive then
in particular
\begin{gather*}
N(U\times [1],V\times [1])=N(U,V)\cap N([1],[1])
=\\
N(U,V)\cap (D\cup -D\cup \{0\})=\ {\text{\rm infinite\ set}}.
\end{gather*}
But this contradicts our assumption.
Thus $X\times Y\not\in \mathcal{E}$ and $(X,T)\not\in \mathcal{E}^{\curlywedge}$.
This completes the proof.
\end{proof}
We now have the following:
\begin{cor}
Every topologically mildly mixing system is weakly
mixing and topologically ergodic:
$$
\mathbf{MM}\subset \mathbf{WM}\cap \mathbf{TE}.
$$
\end{cor}
\begin{proof}
We have $\mathcal{E}_{\mathcal{S}^*}\subset \mathcal{E}
=\mathcal{E}_{\mathcal{S}^*}^{\curlywedge}$,
hence for every $(X,T)\in \mathcal{E}_{\mathcal{S}^*}$,
$X\times X \in \mathcal{E}$; i.e. $(X,T)$ is topologically weakly mixing.
And, as we have already observed, the inclusion
$\mathcal{F}_{\rm synd} \supset \mathcal{S}^*$
entails
$\mathbf{TE}=\mathcal{E}_{{\text {\rm synd}}}\supset
\mathcal{E}_{\mathcal{S}^*} = \mathbf{MM}$.
\end{proof}
To complete the analogy with the measure theoretical
setup we next define a topological analogue of rigidity.
This is just one of several possible definitions
of topological rigidity and we refer to \cite{GM}
for a treatment of these notions.
\begin{defn}
A dynamical system
$(X,T)$\ is called {\bf uniformly rigid\/} if there exists a
sequence $n_k\nearrow \infty$ such that
$$
\lim_{k\to\infty} \sup_{x\in X} d(T^{n_k}x,x)=0,
$$
i.e. $\lim_{k\to\infty} T^{n_k}={\rm{id}}$
in the uniform topology on the group $H(X)$
of homeomorphisms of $X$.
We denote by $\mathcal{R}$ the collection of topologically
transitive uniformly rigid systems.
\end{defn}
In \cite{GM} the existence of minimal weakly mixing
but nonetheless uniformly rigid dynamical systems
is demonstrated. However, we have the following:
\begin{lem}
A system which is both topologically mildly mixing and
uniformly rigid is trivial.
\end{lem}
\begin{proof}
Let $(X,T)$ be both topologically mildly mixing and uniformly rigid.
Then
$$
\Lambda={\rm{cls\,}} \{T^n: n\in \mathbb{Z}\}\subset H(X),
$$
is a Polish monothetic group.
Let $T^{n_i}$ be a sequence converging uniformly to ${\rm{id}}$, the
identity element of $\Lambda$. For a subsequence we can ensure
that $\{n_\alpha\}=I\!P\{n_i\}$ is an $I\!P$-sequence such that
$I\!P{\text{-}}\lim T^{n_\alpha} = {\rm{id}}$ in $\Lambda$.
If $X$ is nontrivial we can now find an open ball
$B=B_\delta(x_0)\subset X$ with $TB\cap B=\emptyset$.
Put $U=B_{\delta/2}(x_0)$ and $V=TU$; then by assumption
$N(U,V)$ is an $SI\!P^*$-set and in particular:
$$
\forall \alpha_0\ \exists \alpha,\beta>\alpha_0,\ n_\alpha-n_\beta\in N(U,V).
$$
However, since $I\!P{\text{-}}\lim T^{n_\alpha} = {\rm{id}}$,
we also have eventually,
$T^{n_\alpha-n_\beta}U\subset B$; a contradiction.
\end{proof}
\begin{cor}
A topologically mildly mixing system has no nontrivial
uniformly rigid factors.
\end{cor}
We conclude this section with the following result
which shows how these topological and measure theoretical
notions are related.
\begin{thm}
Let $(X,T)$\ be a topological dynamical system with the
property that there exists an invariant probability measure
$\mu$ with full support such that the
associated measure preserving dynamical system
$(X,\mathcal{X},\mu,T)$ is measure theoretically
mildly mixing then $(X,T)$\ is topologically mildly mixing.
\end{thm}
\begin{proof}
Let $(Y,S)$ be any system in $\mathcal{E}$; by
Theorem \ref{MM=E^} it suffices to show that
$(X\times Y,T\times S)$ is topologically transitive.
Suppose $W\subset
X\times Y$ is a closed $T\times S$-invariant set with
${\rm{int\,}} W\ne\emptyset$.
Let $U\subset X, V\subset Y$ be two nonempty open
subsets with $U\times V\subset W$. By transitivity of $(Y,S)$
there exists a transitive recurrent point $y_0\in V$.
By theorems of Glimm and Effros
\cite{Glimm}, \cite{Eff}, and Katznelson and Weiss \cite{KW} (see also
Weiss \cite{Wei}),
there exists a (possibly infinite) invariant
ergodic measure $\nu$ on $Y$ with $\nu(V)>0$.
Let $\mu$ be the probability invariant measure of full
support on $X$ with respect to which $(X,\mathcal{X},\mu,T)$ is
measure theoretically mildly mixing.
Then by \cite{FW2} the measure $\mu\times \nu$ is ergodic.
Since $\mu\times\nu(W) \ge \mu\times\nu(U\times V) >0$
we conclude that $\mu\times\nu(W^c)=0$ which clearly implies
$W= X\times Y$.
\end{proof}
We note that the definition of topological mild mixing
and the results described above concerning this notion
are new. However, independently of our work,
Huang and Ye have recently defined
a similar notion and given it a comprehensive
and systematic treatment, \cite{HY2}. The first named
author would like to thank E. Akin for instructive
conversations on this subject.
Regarding the classes $\mathbf{WM}$ and $\mathbf{TE}$ let us mention the
following result from \cite{W}.
\begin{thm}
$$
\mathbf{TE} = \mathbf{WM}^{\curlywedge}.
$$
\end{thm}
For more on these topics we refer to
\cite{Fur5}, \cite{A}, \cite{W}, \cite{AG}, \cite{HY1}
and \cite{HY2}.
\section{Distal systems: topological vs. measure}\label{Sec-distal}
As noted above the Kronecker or minimal equicontinuous dynamical
systems can be considered as the most elementary type of systems.
What is then the next stage? The clue in the topological case,
which chronologically came first, is to be found in the notion
of distality.
A topological system $(X,T)$\ is called {\bf distal \/} if
$$
\inf_{n\in \mathbb{Z}} d(T^nx,T^nx')>0
$$
for every $x\ne x'$ in $X$.
It is easy to see that this property does not depend on
the choice of a metric. And, of course, every equicontinuous
system is distal. Is the converse true? Are these notions
one and the same? The dynamical system given on the unit
disc $D=\{z\in \mathbb{C}:|z|\le 1\}$ by the formula $Tz=z\exp(2\pi i |z|)$
is a counterexample: it is distal but not equicontinuous.
However, it is not minimal.
skew products over an equicontinuous basis with compact group
translations as fiber maps are always distal, often minimal,
but rarely equicontinuous, \cite{Fur2}.
A typical example is the homeomorphism
of the two torus $\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2$ given by $T(x,y)=(x+\alpha,y+x)$
where $\alpha\in \mathbb{R}/\mathbb{Z}$ is irrational. Independently and at about the
same time, it was shown by L. Auslander, L. Green and F. Hahn
that minimal nilflows are distal
but not equicontinuous, \cite{AGH}.
These examples led Furstenberg to his path
breaking structure theorem, \cite{Fur2}.
Given a homomorphism $\pi:(X,T)\to (Y,T)$ let
$R_\pi=\{(x,x'):\pi(x)=\pi(x')\}$. We say that the homomorphism
$\pi$ is an {\bf isometric extension\/} if there exists a
continuous function $d:R_\pi\to \mathbb{R}$ such that for each
$y\in Y$ the restriction of $d$ to $\pi^{-1}(y)\times \pi^{-1}(y)$
is a metric and for every $x,x'\in \pi^{-1}(y)$ we have
$d(Tx,Tx')=d(x,x')$.
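For instance, for the skew product $T(x,y)=(x+\alpha,y+x)$ on
$\mathbb{T}^2$ mentioned above, the projection $\pi(x,y)=x$ onto the
irrational rotation is an isometric extension: taking
$d((x,y),(x,y'))=\|y-y'\|$ on each fiber, we get
$$
d(T(x,y),T(x,y'))=\|(y+x)-(y'+x)\|=d((x,y),(x,y')).
$$
It is in fact a group extension, with $K$ the group of fiber
rotations $(x,y)\mapsto (x,y+\beta)$.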
If $K$ is a compact subgroup of ${\rm Aut}(X,T)$ (the group
of homeomorphisms of $X$ commuting with $T$, endowed
with the topology of uniform convergence)
then the map $x\mapsto Kx$ defines a factor map
$(X,T)\overset\pi\to(Y,T)$\ with $Y=X/K$ and
$R_\pi=\{(x,kx):x\in X,\ k\in K\}$.
Such an extension is called a
{\bf group extension\/}\label{def-gr-ext-t}.
It turns out, although this is not so easy
to see, that when $(X,T)$\ is minimal then $\pi:(X,T)\to (Y,T)$
is an isometric extension iff there exists a commutative diagram:
\begin{equation}\label{iso-gr-diag}
\xymatrix
{
(\tilde X,T) \ar[dd]_{\tilde\pi} \ar[dr]^{\rho} & \\
& (X,T) \ar[dl]^{\pi}\\
(Y,T) &
}
\end{equation}
where $(\tilde X,T)$ is minimal and
$(\tilde X,T)\overset{\tilde\pi}\to(X,T)$
is a group extension with some compact group
$K\subset {\rm Aut}(\tilde X,T)$
and the map $\rho$ is the quotient map from
$\tilde X$ onto $X$ defined by a closed subgroup $H$ of $K$.
Thus $Y=\tilde X/K$ and $X=\tilde X/H$ and we can think
of $\pi$ as a {\bf homogeneous space extension\/} with fiber
$K/H$.
We say that a (metrizable) minimal system $(X,T)$\ is an {\bf $I$-system}
if there is a (countable) ordinal $\eta$ and a family of systems
$\{(X_\theta,x_\theta)\}_{\theta\le\eta}$
such that (i) $X_0$ is the trivial system, (ii) for every
$\theta<\eta$ there exists an isometric homomorphism
$\phi_\theta:X_{\theta+1}\to X_\theta$,
(iii) for a limit ordinal $\lambda\le\eta$ the system $X_\lambda$
is the inverse limit of the systems $\{X_\theta\}_{\theta<\lambda}$
(i.e. $X_\lambda=\bigvee_{\theta<\lambda}(X_\theta,x_\theta)$),
and
(iv) $X_\eta=X$.
\begin{thm}[Furstenberg's structure theorem]
\label{furst-structure-tm}
A minimal system is distal iff it is an $I$-system.
\end{thm}
{\begin{center}{$\divideontimes$}\end{center}}
W. Parry in his 1967 paper \cite{Pa1} suggested an intrinsic
definition of measure distality. He defines in this paper
a property of measure dynamical systems, called
``admitting a separating sieve", which imitates the intrinsic
definition of topological distality.
\begin{defn}\label{defn-ssieve}
Let $\mathbf{X}$ be an ergodic dynamical system.
A sequence $A_1\supset A_2\supset \cdots$ of sets in $\mathcal{X}$
with $\mu(A_n)>0$ and $\mu(A_n)\to 0$,
is called a {\bf separating sieve\/}
if there exists a
subset $X_0\subset X$ with $\mu(X_0)=1$ such that for every
$x,x'\in X_0$ the condition
``for every $n\in \mathbb{N}$ there exists $k\in \mathbb{Z}$ with
$T^k x,T^k x'\in A_n$" implies $x=x'$, or
in symbols:
$$
\bigcap_{n=1}^\infty\left(\bigcup_{k\in \mathbb{Z}}
T^k(A_n\times A_n)\right)\cap
(X_0 \times X_0)\subset \Delta.
$$
We say that the ergodic system $\mathbf{X}$ is
{\bf measure distal\/} if either $\mathbf{X}$ is finite
or there exists a separating sieve.
\end{defn}
Parry showed that every
measure dynamical system admitting a separating sieve
has zero entropy and that any
$T$-invariant measure on a minimal topologically distal
system gives rise to a measure dynamical system
admitting a separating sieve.
If $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is an ergodic dynamical system and
$K\subset {\rm Aut}(\mathbf{X})$ is a compact subgroup (where
${\rm Aut}(\mathbf{X})$ is endowed with the weak topology)
then the system $\mathbf{Y}=\mathbf{X}/K$ is well defined and we say
that the extension $\pi:\mathbf{X} \to\mathbf{Y}$ is a {\bf group extension\/}.
Using \eqref{iso-gr-diag} we can define the notion of
isometric extension or homogeneous extension in the
measure category. We will say that an ergodic system
{\bf admits a Furstenberg tower\/} if it is obtained
as a (necessarily countable) transfinite tower of
measure isometric extensions.
In 1976 in two outstanding papers \cite{Z1}, \cite{Z2}
R. Zimmer developed the theory of distal systems
(for a general locally compact acting
group). He showed that, as in the topologically distal case,
systems admitting Parry's separating sieve are exactly
those systems which admit Furstenberg towers.
\begin{thm}
\label{zimmer-structure-tm}
An ergodic dynamical system is measure distal iff
it admits a Furstenberg tower.
\end{thm}
In \cite{Lis2} E. Lindenstrauss shows that
every ergodic measure distal $\mathbb{Z}$-system
can be represented as a minimal topologically distal
system. For the exact result see Theorem \ref{distal-model}
below.
\section{Furstenberg-Zimmer structure theorem vs. its topological PI
version}\label{Sec-FZ}
Zimmer's theorem for distal systems leads directly to a
structure theorem for the general ergodic system.
Independently, and at about the same time,
Furstenberg proved the same theorem,
\cite{Fur4}, \cite{Fur5}.
He used it as the main tool for his proof of Szemer\'edi's theorem
on arithmetical progressions.
Recall that an extension
$\pi:(X,\mathcal{X},\mu,T)\to (Y,\mathcal{Y},\nu,T)$ is a {\bf weakly
mixing extension\/} if the relative product system
$\mathbf{X}\underset{\mathbf{Y}}{\times}\mathbf{X}$ is ergodic.
(The system $\mathbf{X}\underset{\mathbf{Y}}{\times}\mathbf{X}$ is defined by
the $T\times T$ invariant measure
$$
\mu\underset{\nu}{\times}\mu=\int_Y\mu_y\times\mu_y\,d\nu(y),
$$
on $X\times X$, where $\mu=\int_Y\mu_y\,d\nu(y)$
is the disintegration of $\mu$ over $\nu$.)
\begin{thm}[The Furstenberg-Zimmer structure theorem]\label{FZsth}
Let $\mathbf{X}$ be an ergodic dynamical system.
\begin{enumerate}
\item
There exists a maximal distal factor
$\phi:\mathbf{X}\to\mathbf{Z}$ such that
$\phi$ is a weakly mixing extension.
\item
This factorization is unique.
\end{enumerate}
\end{thm}
{\begin{center}{$\divideontimes$}\end{center}}
Is there a general structure theorem for minimal topological
systems?
Here, for the first time, we see a strong divergence
between the measure and the topological theories.
The culprit for this divergence is to be found in the notions of
proximality and proximal extension, which arise
naturally in the topological theory but do not appear
at all in the measure theoretical context. In building
towers for minimal systems we have to use two building
blocks of extremely different nature (isometric and proximal)
rather than one (isometric) in the measure category.
A pair of points $(x,x')\in X\times X$ is called {\bf proximal\/}
if it is not distal, i.e. if $\inf_{n\in \mathbb{Z}} d(T^nx,T^nx')=0$.
An extension $\pi:(X,T)\to (Y,T)$ is called {\bf proximal\/}
if every pair in $R_\pi$ is proximal. The next theorem was
developed gradually by several authors (Veech, Glasner-Ellis-Shapiro,
and McMahon, \cite{V}, \cite{EGS}, \cite{McM1}, \cite{V1}).
We need first to introduce some definitions.
We say that a minimal dynamical system $(X,T)$\ is {\bf strictly PI\/}
(proximal isometric) if it admits a tower consisting of
proximal and isometric extensions. It is called a {\bf PI
system\/} if there is a strictly PI minimal system
$(\tilde{X},T)$ and a proximal extension $\theta:\tilde{X}\to X$.
An extension $\pi:X \to Y$ is a {\bf RIC extension\/}
(relatively incontractible) if for every $n\in \mathbb{N}$
and every $y\in Y$ the set of almost periodic points in
$X_y^n=\pi^{-1}(y)\times\pi^{-1}(y)\times\dots\times
\pi^{-1}(y)$ ($n$ times) is dense. (A point is called
{\bf almost periodic\/} if its orbit closure is minimal.)
It can be shown that
every isometric (and, more generally, distal) extension
is RIC. Also, every RIC extension is open.
Finally a homomorphism $\pi:X\to Y$ is called
{\bf topologically weakly mixing\/} if the dynamical system
$(R_\pi,T\times T)$ is topologically transitive.
The philosophy in the next theorem is to regard proximal
extensions as `negligible' and then the claim is, roughly (i.e.
up to proximal extensions), that every minimal system is
a weakly mixing extension of its maximal PI factor.
\begin{thm}[Structure theorem for minimal systems]\label{minsth}
Given a metric minimal system $(X,T)$, there exists a
countable ordinal $\eta$ and a canonically defined
commutative diagram (the canonical PI-Tower)
\begin{equation}
\xymatrix
{X \ar[d]_{\pi} &
X_0 \ar[l]_{\tilde{\theta_0}}
\ar[d]_{\pi_0}
\ar[dr]^{\sigma_1} & &
X_1 \ar[ll]_{\tilde{\theta_1}}
\ar[d]_{\pi_1}
\ar@{}[r]|{\cdots} &
X_{\nu}
\ar[d]_{\pi_{\nu}}
\ar[dr]^{\sigma_{\nu+1}} & &
X_{\nu+1}
\ar[d]_{\pi_{\nu+1}}
\ar[ll]_{\tilde{\theta_{\nu+1}}}
\ar@{}[r]|{\cdots} &
X_{\eta}=X_{\infty}
\ar[d]_{\pi_{\infty}} \\
pt &
Y_0 \ar[l]^{\theta_0} &
Z_1 \ar[l]^{\rho_1} &
Y_1 \ar[l]^{\theta_1}
\ar@{}[r]|{\cdots} &
Y_{\nu} &
Z_{\nu+1}
\ar[l]^{\rho_{\nu+1}} &
Y_{\nu+1}
\ar[l]^{\theta_{\nu+1}}
\ar@{}[r]|{\cdots} &
Y_{\eta}=Y_{\infty}
}
\nonumber
\end{equation}
where for each $\nu\le\eta$, $\pi_{\nu}$
is RIC, $\rho_{\nu}$ is isometric, $\theta_{\nu},
{\tilde\theta}_{\nu}$ are proximal extensions and
$\pi_{\infty}$ is a RIC and topologically weakly mixing extension.
For a limit ordinal
$\nu ,\ X_{\nu}, Y_{\nu}, \pi_{\nu}$
etc. are the inverse limits (or joins) of
$ X_{\iota}, Y_{\iota}, \pi_{\iota}$ etc. for $\iota
< \nu$.
Thus $X_\infty$ is a proximal extension of $X$ and a RIC
topologically weakly mixing extension of the strictly PI-system
$Y_\infty$.
The homomorphism $\pi_\infty$ is an isomorphism (so that
$X_\infty=Y_\infty$) iff $X$ is a PI-system.
\end{thm}
We refer to \cite{Gl3} for a review on structure theory
in topological dynamics.
\section{Entropy: measure and topological}\label{Sec-ent}
\subsection{The classical variational principle}
For the definitions and the classical results concerning
entropy theory we refer to \cite{HKat}, Section 3.7 for
measure theory entropy and Section 4.4 for metric and
topological entropy.
The variational principle asserts that for a topological
$\mathbb{Z}$-dynamical system $(X,T)$ the topological entropy equals the
supremum of the measure entropies computed over all
the invariant probability measures on $X$. It was already
conjectured in the original paper of Adler, Konheim and McAndrew
\cite{AKM}, where topological entropy was introduced,
and then, after many stages
(mainly by Goodwyn, Bowen and Dinaburg; see for example \cite{DGS}),
matured into a theorem in Goodman's paper \cite{Goodm}.
\begin{thm}[The variational principle]\label{var-princ}
Let $(X,T)$\ be a topological dynamical system, then
$$
h_{{\rm top}}(X,T)={\sup}\{h_\mu:\mu\in M_T(X)\}
={\sup}\{h_\mu:\mu\in M^{{\rm{erg}}}_T(X)\}.
$$
\end{thm}
This classical theorem has had a tremendous
influence on the theory of dynamical systems
and a vast amount of literature ensued,
which we will not try to trace here (see \cite[Theorem 4.4.4]{HKat}).
Instead we would like to present a more
recent development.
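To make the statement concrete, one can check it numerically in the simplest case: for the full shift on two symbols $h_{{\rm top}}=\log 2$, while a Bernoulli$(p,1-p)$ measure has entropy $-p\log p-(1-p)\log(1-p)$, which is maximized at $p=1/2$. The following minimal sketch (function names are ours, chosen for illustration) tabulates the measure entropies over a grid of $p$ and compares the supremum with $\log 2$:

```python
import math

def bernoulli_entropy(p):
    """Measure entropy of the Bernoulli(p, 1-p) shift."""
    phi = lambda t: -t * math.log(t) if t > 0 else 0.0
    return phi(p) + phi(1 - p)

# Topological entropy of the full 2-shift is log 2; the supremum of
# the Bernoulli entropies over a grid of p is attained at p = 1/2.
h_top = math.log(2)
best = max(bernoulli_entropy(p / 1000) for p in range(1, 1000))
print(h_top, best)
```

Of course this only samples the Bernoulli measures, a small slice of $M_T(X)$, but the variational principle guarantees that no invariant measure can do better than $\log 2$.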
\subsection{Entropy pairs and UPE systems}
As we have noted in the introduction,
the theories of measurable dynamics (ergodic theory) and
topological dynamics exhibit a remarkable parallelism.
Usually one translates `ergodicity' as
`topological transitivity', `weak mixing' as `topological
weak mixing', `mixing' as `topological
mixing' and `measure distal' as `topologically distal'.
One often obtains this way parallel
theorems in both theories, though the methods of proof
may be very different.
What is then the topological analogue of being a K-system?
In \cite{Bla1} and \cite{Bla2} F. Blanchard introduced a notion of
`topological $K$' for $\mathbb{Z}$-systems which he called UPE
(uniformly positive entropy).
This is defined as follows: a topological dynamical system $(X,T)$
is called a UPE system if every open cover
of $X$ by two non-dense open sets $U$ and $V$ has
positive topological entropy. A local version of this
definition led to the concept of an entropy pair.
A pair $(x,x') \in X \times X,\ x\not=x'$ is an
entropy pair if for every open cover
$\mathcal{U}=\{U,V\}$ of $X$, with
$x \in {{\rm{int\,}}} (U^c)$ and $x' \in {{\rm{int\,}}} (V^c)$,
the topological entropy $h(\mathcal{U})$
is positive.
The set of entropy pairs is denoted
by $E_X=E_{(X,T)}$ and it follows that the system $(X,T)$ is UPE
iff $E_X= (X\times X)\setminus \Delta$.
In general $E_X^*=E_X\cup \Delta$ is
a $T\times T$-invariant closed symmetric and reflexive relation.
Is it also transitive? When the answer to this
question is affirmative then the quotient system
$X/E_X^*$ is the topological analogue of the
Pinsker factor. Unfortunately this need not
always be true even when $(X,T)$ is a minimal
system (see \cite{GW55} for a counter example).
The following theorem was proved in
Glasner and Weiss \cite{GW35}.
\begin{thm}\label{K>UPE}
If the compact system $(X,T)$ supports an
invariant measure $\mu$ for which the corresponding measure
theoretical system $(X,\mathcal{X},\mu,T)$ is a $K$-system,
then $(X,T)$ is UPE.
\end{thm}
Applying this theorem together with the Jewett-Krieger theorem
it is now possible to obtain a great variety
of strictly ergodic UPE systems.
Given a $T$-invariant probability measure $\mu$
on $X$, a pair $(x,x') \in X \times X,\ x\not=x'$
is called a $\mu$-entropy pair if for every Borel partition
$\alpha =\{Q,Q^c\}$ of $X$ with
$x \in {{\rm{int\,}}}(Q)$ and $x' \in {{\rm{int\,}}}(Q^c)$
the measure entropy $h_\mu(\alpha)$ is positive. This definition was
introduced by Blanchard, Host, Maass, Mart\'{\i}nez and Rudolph in
\cite{B-R} as a local generalization of Theorem \ref{K>UPE}.
It was shown in \cite{B-R} that
for every invariant probability measure $\mu$
the set $E_\mu$ of $\mu$-entropy pairs is contained
in $E_X$.
\begin{thm}\label{mu-subset-e}
Every measure entropy pair is a topological entropy pair.
\end{thm}
As in \cite{GW35} the main issue here is to understand
the, sometimes intricate, relation between the combinatorial
entropy $h_c(\mathcal{U})$ of a cover $\mathcal{U}$ and the measure theoretical
entropy $h_\mu(\gamma)$ of a measurable partition $\gamma$ subordinate
to $\mathcal{U}$.
\begin{prop}\label{pro-mu-subset-e}
Let $\mathbf{X}=(X,\mathcal{X},\mu,T)$ be a measure dynamical system.
Suppose $\mathcal{U}=\{U,V\}$ is a measurable cover such that
every measurable two-set partition $\gamma=\{H,H^c\}$ which
(as a cover) is finer than $\mathcal{U}$ satisfies
$h_\mu(\gamma)>0$; then $h_c(\mathcal{U})>0$.
\end{prop}
Since for a $K$-measure $\mu$ clearly every
pair of distinct points is in $E_\mu$, Theorem \ref{K>UPE}
follows from Theorem \ref{mu-subset-e}.
It was shown in \cite{B-R} that
when $(X,T)$ is uniquely ergodic the converse of Theorem
\ref{mu-subset-e} is also true:
$E_X=E_\mu$ for the unique invariant measure $\mu$ on $X$.
\subsection{A measure attaining the topological entropy
of an open cover}
In order to gain a better understanding of the
relationship between measure entropy pairs and topological
entropy pairs one direction of a
variational principle for open covers (Theorem
\ref{mcvp} below) was proved in Blanchard, Glasner and Host \cite{BGH}.
Two applications of this
principle were given in \cite{BGH}:
(i) the construction, for a general system $(X,T)$,
of a measure $\mu\in M_T(X)$ with $E_X=E_\mu$,
and (ii) the proof that under a homomorphism
$\pi:(X,\mu,T)\to (Y,\nu,T)$ every entropy pair in
$E_\nu$ is the image of an entropy pair in $E_\mu$.
We now proceed with the statement and proof of this
theorem which is of independent interest.
The other direction of this variational principle
will be proved in the following subsection.
\begin{thm}\label{mcvp}
Let $(X,T)$ be a topological dynamical system,
and $\mathcal{U}$ an open cover of $X$, then there exists a measure
$\mu\in M_T(X)$ such that $h_\mu(\alpha)\ge h_{{\rm top}}(\mathcal{U})$ for
all Borel partitions $\alpha$ finer than $\mathcal{U}$.
\end{thm}
A crucial element of the proof of the
variational principle is a combinatorial lemma which we present
next.
We let
$\phi: [0,1]\to\mathbb{R}$ denote the function
$$
\phi(t)=-t\log t\quad {\text{ for\ }} 0<t\le 1 ;\ \phi(0)=0 \ .
$$
Let $\mathfrak{L}=\{1,2,\dots,\ell\}$ be a
finite set, called the {\bf alphabet\/}; sequences
$\omega=\omega_1\ldots \omega_n\in \mathfrak{L}^n$, for $n\ge 1$, are called
{\bf words of length $n$ on the alphabet $\mathfrak{L}$\/} .
Let $n$ and $k$ be two integers with $1\leq k\le n$.
For every word $\omega$ of length $n$ and every word $\theta$ of length $k$
on the same alphabet, we denote by $p(\theta|\omega)$ the frequency of
appearances of $\theta$ in $\omega$,\ i.e.
$$
p(\theta|\omega)=\frac{1}{n-k+1}
{{\rm{card\,}}}\big\{i :\ 1\leq i\le n-k+1,\;
\omega_i\omega_{i+1}\ldots\omega_{i+k-1}=
\theta_1\theta_2\ldots\theta_{k}\big\} \ .
$$
For every word $\omega$ of
length $n$ on the alphabet $\mathfrak{L}$, we let
$$
H_k(\omega)=\sum_{\theta\in
\mathfrak{L}^k}\phi\big(p(\theta|\omega)\big) \ .
$$
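The quantities $p(\theta|\omega)$ and $H_k(\omega)$ are purely combinatorial and can be computed directly from the definitions above. A minimal sketch (function names are ours), with words and the alphabet given as strings:

```python
import math
from itertools import product

def phi(t):
    # phi(t) = -t log t for 0 < t <= 1, with phi(0) = 0
    return -t * math.log(t) if t > 0 else 0.0

def freq(theta, omega):
    """Frequency p(theta|omega) of appearances of theta inside omega."""
    n, k = len(omega), len(theta)
    hits = sum(1 for i in range(n - k + 1) if omega[i:i + k] == theta)
    return hits / (n - k + 1)

def H(k, omega, alphabet):
    """H_k(omega): sum of phi(p(theta|omega)) over all words theta of length k."""
    return sum(phi(freq(''.join(t), omega))
               for t in product(alphabet, repeat=k))

# A constant word has H_1 = 0, while a word using both letters
# equally often has H_1 = log 2.
print(H(1, '1111', '12'), H(1, '1212', '12'))
```

This matches the intuition behind the lemma: words whose $H_k$ is small are combinatorially "regular", and there are exponentially few of them.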
\begin{lem}\label{combin} For every $h>0$, $\epsilon>0$, every integer
$k\ge 1$ and every sufficiently large integer $n$,
$$
{{\rm{card\,}}}\big\{
\omega\in \mathfrak{L}^n : H_k(\omega) \le kh \big\} \le \exp\big(
n(h+\epsilon)\big) \ .
$$
\end{lem}
{\bf Remark.} It is equally true that, if $h \le \log({{\rm{card\,}}} \mathfrak{L})$,
for sufficiently large $n$,
$$
{{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n : H_k(\omega) \leq kh \big\} \geq \exp\big(
n(h-\epsilon)\big) \ .
$$
We do not prove this inequality here,
since we have no use for it in the sequel.
\begin{proof}
{\bf The case $ k=1$.}
We have
\begin{equation}\label{H1}
{{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n :\ H_1(\omega) \le h \big\}
=\sum_{q \in K} \frac{n!}{q_1!\ldots q_{\ell}!}
\end{equation}
where $K$ is the set of $q=(q_1,\ldots ,q_{\ell})\in \mathbb{N}^{\ell}$
such that
$$
\sum_{i=1}^{\ell} q_i=n\ {\text{ and }}\ \sum_{i=1}^{\ell}
\phi(\frac{q_i}{n})\le h \ .
$$
By Stirling's formula, there exist two
universal constants $c$ and
$c'$ such that
$$
c\big(\frac{m}{e}\big)^m\sqrt m \le m!\,
\le c'\big(\frac{m}{e}\big)^m\sqrt m
$$
for every $m>0$.
From this we deduce the existence of a constant $C(\ell)$ such
that for every $ q\in K$,
$$
{\frac{n!}{q_1!\ldots q_{\ell}!}} \leq C(\ell)
\exp\big( n\sum_{i=1}^{\ell}
\phi(\frac{q_i}{n})\big)
\leq C({\ell})\exp(nh)\ .
$$
Now the sum \eqref{H1} contains at most $(n+1)^{\ell}$ terms;
so that we have
$$
{{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n :\ H_1(\omega) \le h \big\} \le
(n+1)^{\ell}C(\ell)\exp(nh)\le \exp\big( n(h+\epsilon)\big)
$$
for all sufficiently large $n$, as was to be proved.
{\bf The case $k>1$.}
For every word $\omega$ of length $n\ge 2k$ on the alphabet
$\mathfrak{L}$, and for
$0\le j<k$, we let $n_j$ be the integral part of $\frac{n-j}{k}$,
and $\omega^{(j)}$ the word
$$
(\omega_{j+1}\ldots\omega_{j+k})\;
(\omega_{j+k+1}\ldots\omega_{j+2k})\;
\ldots\;
(\omega_{j+(n_j-1)k+1}\ldots\omega_{j+n_jk})
$$
of length $n_j$ on the
alphabet $B=\mathfrak{L}^k$.
Let now $\theta$ be a word of length $k$ on the alphabet
$\mathfrak{L}$; we also
consider $\theta$ as an element of $B$.
One easily verifies that, for every word $\omega$ of length $n$ on the
alphabet $\mathfrak{L}$,
$$
\big| p(\theta|\omega)-\frac{1}{k}\sum_{j=0}^{k-1}
p(\theta|\omega^{(j)})\big| \le \frac{k}{n-2k+1} \ .
$$
The function $\phi$ being uniformly continuous,
we see that for sufficiently large $n$,
and for every word $\omega$ of length $n$ on $\mathfrak{L}$,
$$
\sum_{\theta\in B} \left| \phi\big( p(\theta|\omega)\big) -\phi\left(
\frac{1}{k}\sum_{j=0}^{k-1}p(\theta|\omega^{(j)})\right) \right| <
\frac{\epsilon}{2}
$$
and by convexity of $\phi$,
$$
{\frac{1}{k}}\sum_{j=0}^{k-1}H_1(\omega^{(j)})= \frac{1}{
k}\sum_{j=0}^{k-1}\sum_{\theta\in B} \phi\big(p(\theta|\omega^{(j)})\big)
\leq \frac{\epsilon}{2} +
\sum_{\theta\in \mathfrak{L}^k}\phi\big(p(\theta|\omega)\big) =\frac{\epsilon}{2} +
H_k(\omega).
$$
Thus, if $H_k(\omega)\le kh$, there exists a $j$ such that
$H_1(\omega^{(j)})\le \frac{\epsilon}{2} + kh$.
Now, given $j$ and a word $u$ of length
$n_j$ on the alphabet $B$, there
exist $\ell^{n-n_jk}\leq \ell^{2k-2}$ words $\omega$ of length
$n$ on $\mathfrak{L}$ such that $\omega^{(j)}=u$. Thus for sufficiently large $n$,
by the first part of the proof,
\begin{align*}
{{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n : H_k(\omega) \le kh \big\} & \le
{\ell}^{2k-2}\sum_{j=0}^{k-1}{{\rm{card\,}}}\big\{ u\in B^{n_j}:H_1(u) \le
\frac{\epsilon}{2} + kh\big\}\\
& \le {\ell}^{2k-2}\sum_{j=0}^{k-1}\exp\big(n_j(\epsilon+kh)\big)\\
& \le
{\ell}^{2k-2}k\exp\big(n(\frac{\epsilon}{k}+h)\big) \le
\exp\big(n(h+\epsilon)\big)\ .
\end{align*}
\end{proof}
Let $(X,T)$ be a compact dynamical system.
As usual we denote by $M_T(X)$ the set of
$T$-invariant probability measures on $X$, and by
$M_T^{{\rm{erg}}}(X)$ the subset of ergodic measures.
We say that a partition $\alpha$ is finer than a cover
$\mathcal{U}$ when every atom of $\alpha$ is contained in an element of
$\mathcal{U}$.
If $\alpha=\{A_1,\ldots,A_{\ell}\}$ is a partition of $X$,
$x\in X$ and $N\in\mathbb{N}$, we write $\omega(\alpha,N,x)$ for the word of
length $N$ on the alphabet
$\mathfrak{L}=\{1,\ldots,{\ell}\}$ defined by
$$
\omega(\alpha,N,x)_n=i\quad {\text{if}}\quad T^{n-1}x\in A_i,\qquad
1\le n\le N\ .
$$
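The name map $\omega(\alpha,N,x)$ can be illustrated with the doubling map $Tx=2x \bmod 1$ and the partition $\alpha=\{[0,\tfrac12),[\tfrac12,1)\}$, for which the name of a point is (up to relabeling) its binary expansion. A small sketch (names ours), following the definition's 1-indexing of atoms:

```python
def name_word(orbit_map, partition, N, x):
    """Word of length N recording which atom of the partition each of
    x, Tx, ..., T^{N-1}x lies in (atoms are numbered from 1)."""
    word = []
    for _ in range(N):
        word.append(next(i for i, A in enumerate(partition, start=1) if A(x)))
        x = orbit_map(x)
    return word

# Doubling map with the partition {[0,1/2), [1/2,1)}; atoms are given
# as membership predicates.
T = lambda x: (2 * x) % 1.0
alpha = [lambda x: x < 0.5, lambda x: x >= 0.5]
print(name_word(T, alpha, 4, 0.3))  # the orbit 0.3, 0.6, 0.2, 0.4
```

Here the orbit of $0.3$ visits the atoms $1,2,1,1$, reproducing the first binary digits of $0.3$.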
\begin{lem}Let $\mathcal{U}$ be a cover of $X$, $h=h_{{\rm top}}(\mathcal{U})$,
$K\ge 1$ an integer, and $\{\alpha_l:1\le l\le K\}$ a finite
sequence of partitions of $X$, all finer than $\mathcal{U}$.
For every $\epsilon>0$ and sufficiently large $N$,
there exists an $x\in X$ such that
$$
H_k\big(\omega(\alpha_l,N,x)\big)
\ge k(h-\epsilon)
\ {\text {for every} }\ k,l\ {\text { with} }\ 1\leq k,\;l\le K.
$$
\end{lem}
\begin{proof}
One can assume that all the partitions
$\alpha_l$ have the same number of elements $\ell$ and we
let $\mathfrak{L}=\{1,\ldots,\ell\}$.
For $1\le k \le K$ and $N\ge K$, denote
$$
\Omega(N,k)=\{\omega \in \mathfrak{L}^N :\ H_k(\omega) < k(h-\epsilon)\} \ .
$$
By Lemma \ref{combin}, for sufficiently large $N$
$$
{{\rm{card\,}}}(\Omega(N,k))\leq \exp(N(h-\epsilon/2))\ {\text{ for all}}\ k\le K.
$$
Let us choose such an $N$ which moreover satisfies
$K^2 <\exp(N\epsilon/2)$.
For $1\le k,l\le K$, let
$$
Z(k,l)= \{x\in X :\omega(\alpha_l, N,x)\in\Omega(N,k)\} \ .
$$
The set $Z(k,l)$ is the union of ${{\rm{card\,}}}(\Omega(N,k))$ elements of
$(\alpha_l)_0^{N-1}$. Now this partition is finer than the cover
$\mathcal{U}_0^{N-1}$, hence $Z(k,l)$ is covered by
$$
{{\rm{card\,}}}(\Omega(N,k))\leq\exp(N(h-\epsilon/2))
$$
elements of $\mathcal{U}_0^{N-1}$.
Finally,
$$
\bigcup_{1\leq k,l\leq K} Z(k,l)
$$
is covered by
$K^2\exp(N(h-\epsilon/2))<\exp(Nh)$ elements of $\mathcal{U}_0^{N-1}$.
As every subcover of $\mathcal{U}_0^{N-1}$ has at least
$\exp(Nh)$ elements,
$$
\bigcup_{1\le k,l\le K} Z(k,l)\neq X.
$$
This completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mcvp}]
Let $\mathcal{U}=\{U_1,\ldots,U_\ell\}$ be an open cover of $X$.
It is clearly
sufficient to consider Borel partitions $\alpha$ of $X$ of the form
\begin{equation}\label{one*}
\alpha=\{ A_1,\ldots,A_\ell\}\ {\text{ with}}\ A_i\subset U_i
\ {\text{ for every} }\ i.
\end{equation}
{\bf Step 1:\ }Assume first that $X$
is $0$-dimensional.
The family of partitions finer than
$\mathcal{U}$, consisting of clopen sets and satisfying
\eqref{one*} is countable;
let $\{\alpha_l:l\ge 1\}$ be an enumeration of this family.
According to the previous lemma, there exists a sequence of integers
$N_K$ tending to
$+\infty$ and a sequence $x_K$ of elements of $X$ such that:
\begin{equation}\label{doub*}
H_k\big(\omega(\alpha_l, N_K,x_K)\big) \ge k(h-\frac{1}{K}) \
{\text{ for
every} }\ 1\le k,\;l \le K.
\end{equation}
Write
$$
\mu_K=\frac{1}{N_K}\sum_{i=0}^{N_K-1} \delta_{T^ix_K} \ .
$$
Replacing the
sequence $\mu_K$ by a subsequence
(this means replacing the sequence
$N_K$ by a subsequence, and the sequence $x_K$ by the
corresponding subsequence preserving the property \eqref{doub*}),
one can assume that the sequence
of measures $\mu_K$ converges weak$^*$ to a probability
measure $\mu$.
This measure $\mu$ is clearly $T$-invariant. Fix $k,l\ge 1$,
and let $F$ be an atom of the partition $(\alpha_l)_0^{k-1}$,
with name
$\theta\in\{1,\ldots, \ell\}^k$. For every $K$ one has
$$
\big|\mu_K(F)-p\big(\theta| \omega(\alpha_l , N_K,x_K)\big)
\big| \le \frac{2k}{N_K}.
$$
Now as $F$ is clopen,
\begin{align*}
\mu(F)&
=\lim_{K\to\infty}\mu_K(F)=
\lim_{K\to\infty} p\big(\theta| \omega(\alpha_l, N_K,x_K)\big)\;
\ {\text{hence}}\\
\phi(\mu(F) )
&=\lim_{K\to\infty} \phi\big(p\big(\theta| \omega(\alpha_l, N_K,x_K)\big)
\big)
\end{align*}
and, summing over $\theta\in \{1,\ldots,\ell\}^k$, one gets
$$
H_\mu\big( (\alpha_l)_0^{k-1}\big)
=\lim_{K\to\infty} H_k\big( \omega(\alpha_l , N_K,x_K)\big) \ge kh.
$$
Finally, by sending $k$ to infinity one obtains $h_\mu(\alpha_l)\ge h$.
Now, as $X$ is $0$-dimensional, the family of partitions
$\{\alpha_l\}$ is dense in the
collection of Borel partitions of $X$ satisfying \eqref{one*},
with respect to the
distance associated with $L^1(\mu)$. Thus, $h_\mu(\alpha)\ge h$
for every partition of this kind.
{\bf Step 2:\ }The general case.
Let us recall a well known fact: there
exists a topological system $(Y,T)$, where $Y$ is $0$-dimensional,
and a continuous surjective map $\pi: Y\to X$ with
$\pi\circ T=T\circ \pi$.
(Proof : as $X$ is a compact metric space, it is easy to construct
a Cantor set
$K$ and a continuous surjective $f: K\to X$.
Put
$$
Y=\{ y \in K^{\mathbb{Z}}:\ f(y_{n+1})=T f(y_n)\
{\text{ for every }}\ n\in\mathbb{Z}\}
$$
and let $\pi: Y\to X$ be defined by $\pi(y)=f(y_0)$.
$Y$ is a closed subset of $K{^\mathbb{Z}}$ --- where the latter is
equipped with the product topology --- and is invariant under the
shift $T$ on $K^{\mathbb{Z}}$. It is
easy to check that $\pi$ satisfies the required conditions.)
Let $\mathcal{V}=\pi^{-1}(\mathcal{U})=\{\pi^{-1}(U_1),
\ldots,\pi^{-1}(U_\ell)\}$
be the preimage of $\mathcal{U}$ under $\pi$;
one has $h_{{\rm top}}(\mathcal{V})=h_{{\rm top}}(\mathcal{U})=h$. By the
above remark, there exists $\nu\in M_T(Y)$ such that
$h_\nu(\mathcal{Q})\ge h$
for every Borel partition
$\mathcal{Q}$ of $Y$ finer than $\mathcal{V}$. Let
$\mu=\nu\circ\pi^{-1}$ be the measure which is the image of
$\nu$ under $\pi$.
One has $\mu\in M_T(X)$ and, for every Borel partition $\alpha$ of
$X$ finer than $\mathcal{U}$, $\pi^{-1}(\alpha)$ is a Borel partition of
$Y$ which is finer than $\mathcal{V}$ with
$$
h_\mu(\alpha)=h_\nu\big( \pi^{-1}(\alpha)\big)\ge h.
$$
This completes the proof of the theorem.
\end{proof}
\begin{cor}\label{corol1} Let $(X,T)$ be a topological system,
$\mathcal{U}$ an open cover of $X$ and $\alpha$
a Borel partition finer than $\mathcal{U}$, then, there exists a
$T$-invariant ergodic measure $\mu$ on $X$ such that
$h_\mu(\alpha)\ge h_{{\rm top}}(\mathcal{U})$.
\end{cor}
\begin{proof} By Theorem \ref{mcvp} there exists
$\mu\in M_T(X)$ with $h_{\mu}(\alpha)\ge h_{{\rm top}}(\mathcal{U})$;
let $\mu=\int_\Omega \mu_\omega\,dm(\omega)$
be its ergodic decomposition. The corollary
follows from the formula
$$
\int h_{\mu_\omega}(\alpha)\,dm(\omega) =h_\mu(\alpha).
$$
\end{proof}
\subsection{The variational principle for open covers}
Given an open cover $\mathcal{U}$ of the dynamical system $(X,T)$,
the results of the previous subsection imply the inequality
$$
\sup_{\mu\in M_T(X)}\inf_{\alpha\succ\mathcal{U}} h_\mu(\alpha) \ge
h_{{\rm top}}(\mathcal{U}).
$$
We will now present a new result which will
provide the fact that
$$
\sup_{\mu\in M_T(X)}\inf_{\alpha\succ\mathcal{U}} h_\mu(\alpha) =
h_{{\rm top}}(\mathcal{U})
$$
thus completing the proof of a variational
principle for $\mathcal{U}$.
We first need a universal version of the Rohlin lemma.
\begin{prop}\label{univ-roh}
Let $(X,T)$ be a (Polish) dynamical system
and assume that there exists on $X$ a $T$-invariant
aperiodic probability measure. Given a positive
integer $n$ and a real number $\delta>0$ there exists
a Borel subset $B\subset X$ such that the sets
$B, TB, \dots, T^{n-1}B$ are pairwise disjoint and for every
aperiodic $T$-invariant probability measure $\mu\in M_T(X)$
we have $\mu(\bigcup_{j=0}^{n-1} T^j B) > 1 - \delta$.
\end{prop}
\begin{proof}
Fix $N$ (it should be larger than $n/\delta$ for the required height
$n$ and error $\delta$). The set of points that are periodic with
period $\le N$ is closed. Any point in the complement
(which by our assumption is nonempty) has,
by continuity, a neighborhood $U$ with $N$ disjoint forward iterates.
There is a countable subcover
$\{U_m\}$ of such sets since the space is Polish.
Take $A_1=U_1$ as a base for a {\em Kakutani sky-scraper}
\begin{gather*}
\{T^j A_1^k: j=0,\dots, k-1;\ k=1,2,\dots\},\\
A_1^k=\{x\in A_1: r_{A_1}(x)=k\},
\end{gather*}
where $r_{A_1}(x)$ is the first integer $j\ge 1$ with $T^jx\in A_1$.
Next set
$$
B_1= \bigcup_{k\ge 1}\bigcup_{j=0}^{[(k-n-1)/n]} T^{jn}A_1^k,
$$
so that the sets $B_1, TB_1,\dots, T^{n-1}B_1$ are pairwise disjoint.
Remove the full forward $T$ orbit of $U_1$ from the space and repeat
to find $B_2$ using as a base for the next Kakutani
sky-scraper $A_2$ defined as $U_2$ intersected with the part of $X$ not
removed earlier. Proceed by induction to define the
sequence $B_i, \ i=1,2,\dots$ and set $B=\bigcup_{i=1}^\infty B_i$.
By Poincar\'e recurrence for any aperiodic invariant measure we exhaust
the whole space except for $n$ iterates of the union $A$ of the bases
of the Kakutani sky-scrapers.
By construction $A=\bigcup_{m=1}^\infty A_m$ has $N$ disjoint iterates
so that $\mu(A) \le 1/N$ for every $\mu\in M_T(X)$.
Thus $B, TB,\dots, T^{n-1}B$ fill all but $n/N < \delta$
of the space uniformly
over the aperiodic measures $\mu\in M_T(X)$.
\end{proof}
Let $(X,T)$\ be a dynamical system and $\mathcal{U}=\{U_1,U_2,
\dots U_{\ell}\}$ a finite open cover.
We denote by $\mathcal{A}$ the collection of all finite Borel
partitions $\alpha$ which refine $\mathcal{U}$, i.e. for every
$A\in \alpha$ there is some $U\in \mathcal{U}$ with $A\subset U$.
We set
$$
\check{h}(\mathcal{U})=
\sup_{\mu\in M_T(X)}\inf_{\alpha\in\mathcal{A}} h_\mu(\alpha) \qquad
{\text{and}} \qquad
\hat h(\mathcal{U})=
\inf_{\alpha\in\mathcal{A}}\sup_{\mu\in M_T(X)} h_\mu(\alpha).
$$
\begin{prop}\label{bar-hat}
Let $(X,T)$ be a dynamical system, $\mathcal{U}=\{U_1,U_2,
\dots U_{\ell}\}$ a finite open cover, then
\begin{enumerate}
\item
$\check{h}(\mathcal{U}) \le \hat h(\mathcal{U})$,
\item
$\hat h(\mathcal{U}) \le h_{{\rm top}}(\mathcal{U})$.
\end{enumerate}
\end{prop}
\begin{proof}
1.\
Given $\nu\in M_T(X)$ and $\alpha\in \mathcal{A}$ we obviously have
$h_\nu(\alpha) \le \sup_{\mu\in M_T(X)} h_\mu(\alpha)$. Thus
$$
\inf_{\alpha\in\mathcal{A}}h_\nu(\alpha) \le \inf_{\alpha\in\mathcal{A}}
\sup_{\mu\in M_T(X)} h_\mu(\alpha) = \hat h(\mathcal{U}),
$$
and therefore also $\check{h}(\mathcal{U}) \le \hat h(\mathcal{U})$.
2.\
Choose for $\epsilon> 0$ an integer $N$ large enough so that
there is a subcover $\mathcal{D} \subset \mathcal{U}_0^{N-1}=
\bigvee_{j=0}^{N-1} T^{-j}\mathcal{U}$ of cardinality
$2^{N(h_{{\rm top}}(\mathcal{U})+\epsilon)}$. Apply Proposition \ref{univ-roh}
to find a set $B$ such that the sets
$B, TB, \dots ,T^{N-1}B$ are pairwise disjoint and for every
$T$-invariant Borel probability measure $\mu\in M_T(X)$
we have $\mu(\bigcup_{j=0}^{N-1} T^j B) > 1 - \delta$.
Consider $\mathcal{D}_B=\{D\cap B: D\in \mathcal{D}\}$,
the restriction of the cover $\mathcal{D}$ to $B$, and find
a partition $\beta$ of $B$ which refines $\mathcal{D}_B$.
Thus each element $P\in \beta$ has the form
$$
P=P_{i_0,i_1,\dots,i_{N-1}} \subset
\left(\bigcap_{j=0}^{N-1} T^{-j} U_{i_j}\right)
\cap B,
$$
where $\bigcap_{j=0}^{N-1} T^{-j} U_{i_j}$ represents a
typical element of $\mathcal{D}$.
Next use the partition $\beta$ of $B$ to define a partition
$\alpha=\{A_i:i=1,\dots,\ell\}$ of $\bigcup_{j=0}^{N-1} T^{j}B$
by assigning to the set $A_i$ all sets of the form
$T^jP_{i_0,i_1,\dots,i_j,\dots,i_{N-1}}$ where $i_j=i$
($j$ can be any number in $[0,N-1]$).
On the remainder of the space $\alpha$ can be taken to be any
partition refining $\mathcal{U}$.
Now if $N$ is large and $\delta$ small
enough then
\begin{equation}\label{estim}
h_\mu(\alpha) \le h_{{\rm top}}(\mathcal{U})+2\epsilon.
\end{equation}
Here is a sketch of how one establishes this inequality.
For $n \gg N$ we will estimate $H_\mu(\alpha_0^{n-1})$ by
counting how many $(n,\alpha)$-names are needed to cover
most of the space. We take $\delta>0$ so that $\sqrt{\delta}\ll \epsilon$.
Denote $E = B \cup TB \cup \dots \cup T^{N-1}B$
(so that $\mu(E) > 1 - \delta$).
Define
$$
f(x)=\frac{1}{n}\sum_{i=0}^{n-1} \mathbf{1}_E(T^ix),
$$
and observe that $0 \le f \le1$
and
$$
\int_X f(x)\, d\mu(x) > 1 - \delta,
$$
since $T$ is measure preserving.
Therefore $\int (1-f) < \delta$ and (Markov's
inequality)
$$
\mu \{x : (1-f) \ge \sqrt{\delta}\}
\le \frac{1}{\sqrt{\delta}} \int(1-f) \le \sqrt{\delta}.
$$
It follows that for points $x$ in $G =\{f > 1- \sqrt{\delta}\}$,
we have the property that $T^i x \in E$ for most $i$ in $[0,n-1]$.
Partition $G$ according to the values of $i$ for which $T^i x\in B$.
This partition has at most
$$
\sum_{j\le\frac{n}{N}}\binom{n}{j} \le \frac{n}{N}\binom{n}{n/N}
$$
sets, a number whose exponential growth rate in $n$ is small
(if $N$ is sufficiently large).
For a fixed choice of these values the times when we are
not in $E$ take only $n\sqrt{\delta}$ values and there we have
$< \ell^{n\sqrt{\delta}}$ choices.
Finally, when $T^i x \in B$ we have at most
$2^{N(h_{{\rm top}}(\mathcal{U}) + \epsilon)}$
names, so that the total contribution is $<
2^{N(h_{{\rm top}}(\mathcal{U}) + \epsilon)\frac{n}{N}}$.
Collecting these estimations we find that
$$
H_\mu(\alpha_0^{n-1}) < n\big(h_{{\rm top}}(\mathcal{U}) + 2\epsilon\big),
$$
whence \eqref{estim}.
This completes the proof of the proposition.
\end{proof}
We finally obtain:
\begin{thm}[The variational principle for open covers]\label{cvp}
Let $(X,T)$ be a dynamical system, $\mathcal{U}=\{U_1,U_2,
\dots U_k\}$ a finite open cover
and denote by $\mathcal{A}$ the collection of all finite Borel
partitions $\alpha$ which refine $\mathcal{U}$, then
\begin{enumerate}
\item
for every $\mu \in M_T(X)$,
$\inf_{\alpha\in\mathcal{A}} h_\mu(\alpha) \le h_{{\rm top}}(\mathcal{U})$,
and
\item
there exists an ergodic measure $\mu_0\in M_T(X)$
with $h_{\mu_0}(\alpha) \ge h_{{\rm top}}(\mathcal{U})$ for every
Borel partition $\alpha \in \mathcal{A}$.
\item
$$
\check{h}(\mathcal{U}) = \hat h(\mathcal{U}) = h_{{\rm top}}(\mathcal{U}).
$$
\end{enumerate}
\end{thm}
\begin{proof}
1.\
This assertion can be formulated as the inequality
$\check{h}(\mathcal{U}) \le h_{{\rm top}}(\mathcal{U})$ and it
follows by combining the two parts of Proposition \ref{bar-hat}.
2.\
This is the content of Theorem \ref{mcvp}.
3.\
Combine assertions 1 and 2.
\end{proof}
\subsection{Further results connecting topological
and measure entropy}
Given a topological dynamical system $(X,T)$\ and a measure
$\mu\in M_T(X)$,
let $\pi:(X,\mathcal{X},\mu,T)\to (Z,\mathcal{Z},\eta,T)$ be the
{\bf measure-theoretical\/} Pinsker factor of $(X,\mathcal{X},\mu,T)$, and let
$\mu=\int_Z \mu_z\,d\eta(z)$ be the disintegration of
$\mu$ over $(Z,\eta)$. Set
$$
\lambda=\int_Z (\mu_z \times \mu_z)\,d\eta(z),
$$
the relatively independent joining of $\mu$ with itself over $\eta$.
Finally let $\Lambda_\mu={{\rm{supp\,}}}(\lambda)$
be the topological support of $\lambda$ in $X\times X$.
Although the Pinsker factor is, in general, only defined
measure theoretically, the measure $\lambda$ is a well defined
element of $M_{T\times T}(X\times X)$.
It was shown in Glasner \cite{Gl25} that $E_\mu=
\Lambda_\mu\setminus \Delta$.
\begin{thm}\label{char-mu-ep}
Let $(X,T)$\ be a topological dynamical system and let
$\mu\in\linebreak[0] M_T(X)$.
\begin{enumerate}
\item
$E_\mu=\Lambda_\mu\setminus \Delta$ and
$\Lambda_\mu=E_\mu\cup \{(x,x): x\in {{\rm{supp\,}}}(\mu)\}$.
\item
${{\rm{cls\,}}} E_\mu\subset \Lambda_\mu$.
\item
If $\mu$ is ergodic with positive entropy then
${{\rm{cls\,}}} E_\mu=\Lambda_\mu$.
\end{enumerate}
\end{thm}
One consequence of this characterization of the set of
$\mu$-entropy pairs is a description of the set
of entropy pairs of a product system.
Recall that an $E$-system is a system which carries
an invariant probability measure with full support.
\begin{cor}\label{prod-ep}
Let $(X_1,T)$ and $(X_2,T)$ be two topological $E$-systems. Then:
\begin{enumerate}
\item
$E_{X_1\times X_2}=
(E_{X_1}\times E_{X_2}) \cup (E_{X_1}\times \Delta_{X_2})
\cup (\Delta_{X_1}\times E_{X_2})$.
\item
The product of two UPE systems is UPE.
\end{enumerate}
\end{cor}
Another consequence is:
\begin{cor}\label{ep-proximal}
Let $(X,T)$ be a topological dynamical system,
$P$ the proximal relation on $X$. Then:
\begin{enumerate}
\item
For every $T$-invariant ergodic measure $\mu$
of positive entropy
the set $P\cap E_\mu$ is residual in the
$G_\delta$ set $E_\mu$ of $\mu$-entropy pairs.
\item
When $E_X\ne\emptyset$ the set $P\cap E_X$ is residual in the
$G_\delta$ set $E_X$ of topological entropy pairs.
\end{enumerate}
\end{cor}
Given a dynamical system $(X,T)$,
a pair $(x,x')\in X\times X$ is called
a {\bf Li--Yorke pair\/} if it is a proximal pair but not
an asymptotic pair.
A set $S\subseteq X$ is called {\bf scrambled\/} if any pair of
distinct points $\{x,y\}\subseteq S$ is a Li--Yorke pair. A dynamical
system $(X,T)$ is called {\bf chaotic in the sense
of Li and Yorke\/} if there is an uncountable scrambled set.
In \cite{BGKM} Theorem \ref{char-mu-ep} is applied to answer
the question of whether positive topological entropy implies
Li--Yorke chaos, as follows.
\begin{thm}
Let $(X,T)$ be a topological dynamical system.
\begin{enumerate}
\item
If $(X,T)$ admits
a $T$-invariant ergodic measure $\mu$ with respect to which the measure
preserving system $(X,\mathcal{X},\mu,T)$ is not measure distal then $(X,T)$
is Li--Yorke chaotic.
\item
If $(X,T)$ has positive topological entropy then it is
Li--Yorke chaotic.
\end{enumerate}
\end{thm}
In \cite{BHR} Blanchard, Host and Ruette show that
in positive entropy systems there are also many
asymptotic pairs.
\begin{thm}\label{BHR}
Let $(X,T)$ be a topological dynamical system with positive
topological entropy. Then
\begin{enumerate}
\item
The set of points $x\in X$ for which there is
some $x'\ne x$ with $(x,x')$ an asymptotic pair, has
measure $1$ for every invariant probability measure on $X$
with positive entropy.
\item
There exists a probability measure $\nu$ on
$X\times X$ such that $\nu$ a.e. pair $(x,x')$
is Li--Yorke and positively asymptotic; or more precisely
for some $\delta >0$
\begin{gather*}
\lim_{n\to+\infty}d(T^nx,T^nx')=0, \qquad {\text{and}}\\
\liminf_{n\to+\infty}d(T^{-n}x,T^{-n}x')=0,
\qquad
\limsup_{n\to+\infty}d(T^{-n}x,T^{-n}x')\ge\delta.
\end{gather*}
\end{enumerate}
\end{thm}
\subsection{Topological determinism and zero entropy}
Following \cite{KSS} call a dynamical system $(X,T)$\
{\bf deterministic\/} if every $T$-factor is
also a $T^{-1}$-factor. In other words
every closed equivalence relation $R\subset
X\times X$ which has the property $TR\subset R$
also satisfies $T^{-1}R\subset R$.
It is not hard to see that an equivalent condition is
as follows. For every continuous real valued function
$f\in C(X)$ the function $f\circ T^{-1}$ is
contained in the smallest closed subalgebra $\mathcal{A}\subset C(X)$
which contains the constant function $\mathbf{1}$ and the
collection $\{f\circ T^n: n\ge 0\}$.
The folklore question whether the latter condition implies zero
entropy was open for a while. Here we note that
the affirmative answer is a direct consequence of
Theorem \ref{BHR} (see also \cite{KSS}).
\begin{prop}\label{non-inv-f}
Let $(X,T)$ be a topological dynamical system
such that there exists a $\delta>0$ and a pair
$(x,x')\in X\times X$ as in
Theorem \ref{BHR}.2. Then $(X,T)$\ is not deterministic.
\end{prop}
\begin{proof}
Set
$$
R=\{(T^nx,T^nx'): n\geq 0\}
\cup \{(T^nx',T^nx): n\geq 0\}
\cup \Delta.
$$
Clearly $R$ is a closed equivalence relation
which is $T$-invariant but not $T^{-1}$-invariant.
\end{proof}
\begin{cor}
A topologically deterministic dynamical system has zero entropy.
\end{cor}
\begin{proof}
Let $(X,T)$ be a topological dynamical system with positive
topological entropy; by Theorem \ref{BHR}.2.
and Proposition \ref{non-inv-f} it is not deterministic.
\end{proof}
{\large{\part{Meeting grounds}}}
\section{Unique ergodicity}\label{Sec-JK}
The topological system
$(X,T)$ is called {\bf uniquely ergodic\/} if $M_T(X)$
consists of a single element $\mu$. If in addition $\mu$ has
full support (i.e. ${{\rm{supp\,}}} \mu=X$) then the system
is called {\bf strictly ergodic\/}
(see \cite[Section 4.3]{HKat}).
Since the ergodic measures are characterized as the extreme points
of the Choquet simplex $M_T(X)$, the unique invariant measure
of a uniquely ergodic system is necessarily ergodic.
For a while it was believed that strict ergodicity ---
which is known to imply some strong topological consequences
(like in the case of $\mathbb{Z}$-systems, the fact that {\em every\/}
point of $X$ is a generic point and moreover that the
convergence of the ergodic sums $\mathbb{A}_n(f)$
to the integral $\int f\, d\mu, \ f\in C(X)$
is {\em uniform\/}) ---
entails some severe restrictions on the measure-theoretical
behavior of the system. For example, it was believed that
unique ergodicity implies zero entropy. This turned out not to be
the case: Furstenberg in \cite{Fur3} and Hahn and Katznelson
in \cite{HK} gave examples of uniquely ergodic systems with
positive entropy. Later, in 1970, R. I. Jewett surprised everyone
with his outstanding result: every weakly mixing measure
preserving $\mathbb{Z}$-system has a strictly ergodic model,
\cite{Jew}.
This was strengthened by Krieger \cite{K} who showed that
even the weak mixing assumption is redundant and that
the result holds for every ergodic $\mathbb{Z}$-system.
We recall the following well known characterizations
of unique ergodicity (see \cite[Theorem 4.9]{G}).
\begin{prop}\label{unique-erg}
Let $(X,T)$\ be a topological system. The following conditions are
equivalent.
\begin{enumerate}
\item
$(X,T)$\ is uniquely ergodic.
\item
$C(X)=\mathbb{R}+\bar B$, where $B=\{g-g\circ T: g\in C(X)\}$.
\item
For every continuous function $f\in C(X)$
the sequence of functions
$$
\mathbb{A}_nf(x)=\frac{1}{n}\sum_{j=0}^{n-1}f(T^jx)
$$
converges uniformly to a constant function.
\item
For every continuous function $f\in C(X)$
the sequence of functions $\mathbb{A}_n(f)$ converges pointwise to a
constant function.
\item
For every function $f\in A$, for a collection
$A\subset C(X)$ which linearly spans a uniformly dense
subspace of $C(X)$, the sequence of functions $\mathbb{A}_n(f)$
converges pointwise to a constant function.
\end{enumerate}
\end{prop}
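To illustrate how criterion (5) is applied, recall the classical example of
an irrational rotation $Tx=x+\alpha \pmod 1$, $\alpha\notin\mathbb{Q}$, on
the circle $\mathbb{T}$. Take for $A$ the characters
$e_k(x)=e^{2\pi i kx}$, $k\in\mathbb{Z}$, whose linear span is uniformly
dense in $C(\mathbb{T})$. Clearly $\mathbb{A}_n e_0\equiv 1$, while for
$k\ne 0$ a geometric summation gives
$$
|\mathbb{A}_n e_k(x)|=\frac{1}{n}
\left|\sum_{j=0}^{n-1}e^{2\pi i k(x+j\alpha)}\right|
\le \frac{2}{n\,|e^{2\pi i k\alpha}-1|}
\xrightarrow[n\to\infty]{} 0.
$$
Thus each $\mathbb{A}_n(e_k)$ converges (even uniformly) to a constant, and
the rotation is uniquely ergodic with Lebesgue measure as its unique
invariant measure. Since Lebesgue measure has full support, the rotation is
in fact strictly ergodic.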
Given an ergodic dynamical system
$\mathbf{X}=(X,\mathcal{X},\mu,T)$
we say that the system $\hat \mathbf{X}=(\hat X,\hat \mathcal{X},\hat \mu,T)$ is
a {\bf topological model\/} (or just a model) for
$\mathbf{X}$ if $(\hat X,T)$ is a topological system,
$\hat \mu\in M_T(\hat X)$ and the systems
$\mathbf{X}$ and $\hat \mathbf{X}$ are measure theoretically isomorphic.
Similarly we say that $\hat \pi:\hat \mathbf{X} \to \hat \mathbf{Y}$
is a {\bf topological model\/} for $\pi:\mathbf{X}\rightarrow \mathbf{Y}$
when $\hat \pi$ is a topological factor map and there
exist measure theoretical isomorphisms $\phi$
and $\psi$ such that the diagram
\begin{equation*}\label{eq-model2}
\xymatrix
{
\mathbf{X} \ar[d]_{\pi} \ar[r]^{\phi} &
\hat \mathbf{X} \ar[d]^{\hat \pi} \\
\mathbf{Y}\ar[r]_{\psi} & \hat\mathbf{Y}
}
\end{equation*}
is commutative.
\section{The relative Jewett-Krieger theorem}
In this subsection we will prove the following generalization
of the Jewett-Krieger theorem (see \cite[Theorem 4.3.10]{HKat}).
\begin{thm}\label{rel-jk}
If $\pi:\mathbf{X}=(X,\mathcal{X},\mu,T)\rightarrow \mathbf{Y}=(Y,\mathcal{Y},\nu,T)$ is
a factor map with $\mathbf{X}$ ergodic and $\hat\mathbf{Y}$ is a uniquely
ergodic model for $\mathbf{Y}$ then there
is a uniquely ergodic model $\hat \mathbf{X}$ for $\mathbf{X}$ and a factor map
$\hat \pi:\hat \mathbf{X} \to \hat \mathbf{Y}$ which is a model for
$\pi:\mathbf{X}\rightarrow \mathbf{Y}$.
\end{thm}
In particular, taking $\mathbf{Y}$ to be the trivial one point system we
get:
\begin{thm}\label{jk}
Every ergodic system has a uniquely ergodic model.
\end{thm}
Several proofs have been given of this theorem, e.g.
see \cite{DGS} and \cite{BF}.
We will sketch a proof which will serve the relative case as well.
\begin{proof}[Proof of Theorem \ref{rel-jk}]
A key notion for this proof is that of a {\bf uniform} partition
whose importance in this context was emphasized by
G. Hansel and J.-P. Raoult, \cite{HR}.
\begin{defn}
A set $B \in \mathcal{X}$ is uniform if
$$
\lim_{N \to \infty}\ \mathop{{\rm ess\,sup}}_x
\left|\ \frac 1N\ \sum^{N-1}_0\ 1_B(T^ix)- \mu(B)\right|=0.
$$
A partition $\mathcal{P}$ is uniform if, for all $N$, every set in
$\bigvee^N_{-N}\ T^{-i}\mathcal{P}$ is uniform.
\end{defn}
The connection between uniform sets, partitions and unique
ergodicity lies in Proposition \ref{unique-erg}.
It follows easily from that
proposition that if $\mathcal{P}$ is a uniform partition, say into the
sets $\{P_1,\ P_2, \ldots, P_a\}$, and we denote by $\mathcal{P}$ also
the mapping that assigns to $x \in X$ the index $1 \leq i \leq a$
such that $x \in P_i$, then we can map $X$ to $\{1,\ 2, \ldots,
a\}^\mathbb Z = A^\mathbb Z$ by:
$$
\pi(x)= (\ldots, \mathcal{P} (T^{-1}x),\ \mathcal{P}(x),\ \mathcal{P}(Tx), \ldots,
\mathcal{P}(T^nx), \ldots).
$$
Pushing forward the measure $\mu$ by $\pi$ gives $\pi_*\mu$,
and the closed support of this measure will be a closed shift
invariant subset, say $E \subset A^\mathbb Z$. Now the indicator
functions of finite cylinder sets span the continuous functions on
$E$, and the fact that $\mathcal{P}$ is a uniform partition and
Proposition \ref{unique-erg} combine to establish that
$(E,\,{\rm shift})$ is uniquely
ergodic. This will not be a model for $(X,\ \mathcal{X},\ \mu,\ T)$
unless $\bigvee^\infty_{- \infty}\ T^{-i}\mathcal{P}= \mathcal{X}$
modulo null sets, but in any case this does give a model for a
nontrivial factor of $X$.
Our strategy for proving Theorem \ref{jk} is to first construct a single
nontrivial uniform partition. Then this partition will be refined
more and more via uniform partitions until we generate the entire
$\sigma$-algebra $\mathcal{X}$. Along the way we will be showing how
one can prove a relative version of the basic Jewett--Krieger
theorem. Our main tool is the use of Rohlin towers. These are
sets $B \in \mathcal{X}$ such that for some $N,\ B,\ TB, \ldots,
T^{N-1}B$ are disjoint while $\bigcup^{N-1}_0\ T^iB$ fill up
most of the space. Actually we need Kakutani--Rohlin towers, which
are like Rohlin towers but fill up the whole space. If the
transformation does not have rational numbers in its point spectrum
this is not possible with a single height, but two heights that are
relatively prime, like $N$ and $N+1$ are certainly possible. Here
is one way of doing this. The ergodicity of $(X,\ \mathcal{X},\ \mu,\
T)$ with $\mu$ non atomic easily yields, for any $n$, the existence
of a positive measure set $B$, such that
$$
T^iB \cap B = \emptyset, \qquad i=1,\ 2, \ldots, n.
$$
With $N$ given, choose $n \geq 10 \cdot N^2$ and find $B$ that
satisfies the above. It follows that the return time
$$
r_B(x)= \inf\{i>0:T^ix \in B\}
$$
is greater than $10 \cdot N^2$ on $B$. Let
$$
B_\ell = \{x:r_B(x)=\ell\}.
$$
Since $\ell$ is large (if $B_\ell$ is nonempty) one can write
$\ell$ as a nonnegative combination of $N$ and $N+1$, say
$$
\ell = Nu_\ell + (N+1) v_\ell.
$$
Now divide the column of sets $\{T^iB_\ell : 0 \leq i < \ell\}$
into $u_\ell$-blocks of size $N$ and $v_\ell$-blocks of size $N+1$
and mark the first layer of each of these blocks as belonging to
$C$. Taking the union of these marked levels ($T^iB_\ell$ for
suitably chosen $i$) over the various columns gives us a set $C$
such that $r_C$ takes only two values -- either $N$ or $N+1$ as
required.
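The arithmetic behind this decomposition is worth making explicit. Since
$\gcd(N,N+1)=1$, the largest integer {\em not\/} of the form
$Nu_\ell+(N+1)v_\ell$ with $u_\ell, v_\ell\ge 0$ is the Frobenius number
$$
N(N+1)-N-(N+1)=N^2-N-1 < 10 \cdot N^2 < \ell,
$$
so the representation always exists. Concretely, writing $\ell=qN+r$ with
$0\le r<N$, one may take $v_\ell=r$ and $u_\ell=q-r$, which is nonnegative
because $\ell > 10\cdot N^2$ forces $q\ge 10N>r$; indeed
$$
Nu_\ell+(N+1)v_\ell=N(q-r)+(N+1)r=qN+r=\ell.
$$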
It will be important for us to have at our disposal K-R towers like
this such that the columns of say the second K-R tower are composed
of entire subcolumns of the earlier one. More precisely we want
the base $C_2$ to be a subset of $C_1$ -- the base of the first
tower. Although we are not sure that this can be done with just two
column heights we can guarantee a bound on the number of columns
that depends only on the maximum height of the first tower. Let us
define formally:
\begin{defn}
A set $C$ will be called the base of
a {\bf bounded} K-R tower if for some
$N,\ \bigcup^{N-1}_0\ T^iC=X$ up to a $\mu$-null set.
The least $N$ that satisfies this will be called the
{\bf height} of $C$,
and partitioning $C$ into sets of constancy of $r_C$ and viewing
the whole space $X$ as a tower over $C$ will be called the
K-R tower with columns the sets $\{T^iC_\ell:\ 0 \leq i <\ell\}$
for $C_\ell=\{x \in C: r_C(x)=\ell\}$.
\end{defn}
Our basic lemma for nesting these K-R towers is:
\begin{lem}\label{4N}
Given a bounded K-R tower with base $C$ and height $N$, for
any $n$ sufficiently large there is a bounded K-R tower with base
$D$ contained in $C$ whose column heights are all at least $n$ and
at most $n+4N$.
\end{lem}
\begin{proof}
We take an auxiliary set $B$ such that $T^i\ B \cap
B= \emptyset$ for all $0 <i<10(n+2N)^2$ and look at the unbounded
(in general) K-R tower over $B$. Using the ergodicity it is easy
to arrange that $B \subset C$. Now let us look at a single column
over $B_m$, with $m \geq 10\ (n+2N)^2$. We try to put down blocks
of size $n+2N$ and $n+2N+1$, to fill up the tower. This can
certainly be done but we want our levels to belong to $C$. We can
refine the column over $B_m$ into a finite number of columns so
that each level is either entirely within $C$ or in $X \backslash
C$. This is done by partitioning the base $C$ according to the
finite partition:
$$
\bigvee^{m-1}_{i=0}\ T^{-i}\{C,\ X \backslash C\}.
$$
Then we move the edge of each block to the nearest level that
belongs to $C$. Since the height of $C$ is $N$, no level needs to
be moved more than $N-1$ steps, so each block loses or gains at most
$2N-2$ levels; thus our blocks, with bases
now all in $C$, have sizes in the interval $[n,\ n+4N]$ as
required.
\end{proof}
It is clear that this procedure can be iterated to give an infinite
sequence of nested K-R towers with a good control on the variation
in the heights of the columns. These can be used to construct
uniform partitions in a pretty straightforward way, but we need one
more lemma which slightly strengthens the ergodic theorem. We will
want to know that when we look at a bounded K-R tower with base $C$
whose minimum column height is sufficiently large, then for most of
the fibers of the tower (that is, for $x \in C$, the sets $\{T^ix:\ 0 \leq
i<r_C(x)\}$) the ergodic averages of some finite set of functions
are close to the integrals of those functions.
there is a problem because the base of the tower is a set of very
small measure (less than 1/min column height) and it may be that
the ergodic theorem is not valid there. However, a simple
averaging argument using an intermediate size gets around this
problem. Here is the result which we formulate for simplicity for
a single function $f$:
\begin{lem}\label{erg}
Let $f$ be a bounded
function and $(X,\ \mathcal{X},\ \mu,\ T)$ ergodic. Given $\epsilon
>0$, there is an $n_0$ such that if a bounded K-R tower with base
$C$ has minimum column height at least $n_0$, then those fibers
$\{T^ix:\ 0 \leq i < r_C(x)\}$, $x \in C$, that satisfy
$$
\left| \frac {1}{r_C(x)}\ \sum^{r_C(x)-1}_{i=0}\
f(T^ix)-\int_X\ f\,d\mu \right| <\epsilon
$$
fill up at least $1-\epsilon$ of the space.
\end{lem}
\begin{proof}
Assume without loss of generality that $|f| \le 1$. For
a $\delta$ to be specified later find an $N$ such that the set of
$y \in X$ which satisfy
\begin{equation}\label{*}
\left|\frac 1N\ \sum^{N-1}_0\ f(T^iy)-\int f\,d \mu\right| <
\delta
\end{equation}
has measure at least $1-\delta$. Let us denote the set of $y$ that
satisfy \eqref{*} by $E$. Suppose now that $n_0$ is large enough so that
$N/n_0$ is negligible -- say at most $\delta$. Consider a bounded
K-R tower with base $C$ and with minimum column height greater than
$n_0$. For each fiber of this tower, let us ask what is the
fraction of its points that lie in $E$. Those fibers with at least
a $\sqrt{\delta}$ fraction of their points not in $E$ cannot fill up
more than a $\sqrt{\delta}$ fraction of the space, because
$\mu(E)>1-\delta$.
Fibers with more than $1-\sqrt{\delta}$ of their points
lying in $E$ can be divided into disjoint blocks of size $N$
that cover all the points that lie in $E$.
This is done by starting at $x\in C$, and moving up the fiber,
marking the first point in $E$, skipping $N$ steps and continuing
to the next point in $E$ until we exhaust the fiber.
On each of these $N$-blocks the average of $f$ is within $\delta$
of its integral, and since $|f|\le 1$, if $\sqrt{\delta} <
\epsilon/10$ this guarantees that the average of $f$ over the
whole fiber is within $\epsilon$ of its integral.
\end{proof}
We are now prepared to construct uniform partitions. Start with
some fixed nontrivial partition $\mathcal{P}_0$. By Lemma \ref{erg},
for any tall enough bounded K-R tower at least 9/10 of the columns will
have the 1-block distribution of each $\mathcal{P}_0$-name within $\frac
{1}{10}$ of the actual distribution. We build a bounded K-R tower
with base $C_1(1)$ and heights $N_1,\ N_1+1$ with $N_1$ large
enough for this to be valid. It is clear that we can modify
$\mathcal{P}_0$ on the bad fibers so that all fibers have a
distribution of 1-blocks within $\frac {1}{10}$ of a fixed
distribution; we call the resulting partition $\mathcal{P}_1$. Our
further changes in $\mathcal{P}_1$ will not change the $N_1,\ N_1+1$
blocks that we see on fibers of a tower over our ultimate $C_1$.
Therefore, we will get a uniformity on all blocks of size $100N_1$.
The 100 is to get rid of the edge effects since we only know the
distribution across fibers over points in $C_1(1)$.
Next we apply Lemma \ref{erg} to the 2-blocks in $\mathcal{P}_1$ with 1/100.
We choose $N_2$ so large that $N_1/N_2$ is negligible and so that
any bounded K-R tower with minimum column height at least $N_2$ has for at least
99/100 of its fibers a distribution of 2-blocks within $1 / 100$ of
the global $\mathcal{P}_1$ distribution.
Apply Lemma \ref{4N} to find a bounded K-R tower
with base $C_2(2) \subset C_1(1)$ such that its column heights are
between $N_2$ and $N_2+4N_1$. For the fibers with good $\mathcal{P}_1$
distribution we make no change. For the others, we copy on most of
the fiber (except for the top $10 \cdot N^2_1$ levels) the
corresponding $\mathcal{P}_1$-name from one of the good columns. In
this copying we also copy the $C_1(1)$-name so that we preserve the
blocks. The final $10 \cdot N^2_1$ spaces are filled in with $N_1,\
N_1+1$ blocks. This gives us a new base for the first tower that
we call $C_1(2)$, and a new partition $\mathcal{P}_2$. The features of
$\mathcal{P}_2$ are that all its fibers over $C_1(2)$ have good (up to
$1 /10$) 1-block distribution, and all its fibers over $C_2(2)$
have good (up to $1 /100$) 2-block distributions. These will not
change in the subsequent steps of the construction.
Note too that the change from $C_1(1)$, to $C_1(2)$, could have been
made arbitrarily small by choosing $N_2$ sufficiently large.
There is one problem in trying to carry out the next step
and that is, the filling in of the top relatively small portion of
the bad fibers after copying most of a good fiber. We cannot copy
an exact good fiber because it is conceivable that no fiber with
the precise height of the bad fiber is good. The filling in is
possible if the column heights of the previous level are relatively
prime. This was the case in step 2, because in step 1 we began with
a K-R tower with heights $N_1,\ N_1+1$. However, Lemma \ref{4N}
does not guarantee relatively prime heights.
This is automatically the case if there is no rational
spectrum. If there are only a finite number of rational points in
the spectrum then we could have made our original columns with
heights $LN_1,\ L(N_1+1)$ with $L$ being the highest power so that
$T^L$ is not ergodic and then worked with multiples of $L$ all the
time. If the rational spectrum is infinite then we get an infinite
group rotation factor and this gives us the required uniform
partition without any further work.
With this point understood it is now
clear how one continues to build a sequence of
partitions $\mathcal{P}_n$ that converge to $\mathcal{P}$ and $C_i(k)
\rightarrow C_i$ such that the $\mathcal{P}$-names of all fibers over
points in $C_i$ have a good (up to $1 / {10^i}$) distribution of
$i$-blocks. This gives the uniformity of the partition
$\mathcal{P}$ as required and establishes
\begin{prop}
Given any $\mathcal{P}_0$ and any $\epsilon >0$ there
is a uniform partition $\mathcal{P}$ such that
$d(\mathcal{P}_0,\mathcal{P})<\epsilon$ in the $\ell_1$-metric on partitions.
\end{prop}
As we have already remarked the uniform partition that we have
constructed gives us a uniquely ergodic model for the factor system
generated by this partition. We need now a relativized version of
the construction we have just carried out. We formulate this as
follows:
\begin{prop}
Given a uniform partition
$\mathcal{P}$ and an arbitrary partition $\mathcal{Q}_0$ that refines
$\mathcal{P}$, for any $\epsilon >0$ there is a uniform partition $\mathcal{Q}$
that also refines $\mathcal{P}$ and satisfies
$$
\|\mathcal{Q}_0 - \mathcal{Q}\|_1<\epsilon.
$$
\end{prop}
Even though we write things for finite alphabets, everything makes
good sense for countable partitions as well and the arguments need
no adjusting. However, the metric used to compare partitions
becomes important since not all metrics on $\ell_1$ are equivalent.
We use always:
$$
\|\mathcal{Q} - \overline {\mathcal{Q}}\|_1=\sum_j\ \int_X \
|1_{Q_j}-1_{\overline Q_j}| d \mu
$$
where the partitions $\mathcal{Q}$ and $\overline {\mathcal{Q}}$ are ordered
partitions into sets $\{Q_j\},\ \{\overline Q _j\}$ respectively.
We also assume that the $\sigma$-algebra generated by the partition
$\mathcal{P}$ is nonatomic -- otherwise there is no real difference
between what we did before and what has to be done here.
We will try to follow the same proof as before. The problem is
that when we redefine $\mathcal{Q}_0$ to $\mathcal{Q}$ we are not allowed to
change the $\mathcal{P}$-part of the name of points. That greatly
restricts us in the kind of names we are allowed to copy on columns
of K-R towers and it is not clear how to proceed. The way to
overcome the difficulty is to build the K-R towers inside the
uniform algebra generated by $\mathcal{P}$. This being done we look,
for example, at our first tower and the first change we wish to
make in $\mathcal{Q}_0$. We divide the fibers into a finite number of
columns according to the height and according to the $\mathcal{P}$-name.
Next each of these is divided into subcolumns,
called $\mathcal{Q}_0$-columns, according to the
$\mathcal{Q}_0$-names of points. If a $\mathcal{P}$-column has some good (i.e. good
1-block distribution of $\mathcal{Q}_0$-names) $\mathcal{Q}_0$-subcolumn it can
be copied onto all the ones that are not good. Next notice that a
$\mathcal{P}$-column that contains not even one good $\mathcal{Q}_0$-name is
a set defined in the uniform algebra. Therefore if these sets have
small measure then for some large enough $N$, uniformly over the
whole space, we will not encounter these bad columns too many
times.
In brief the solution is to change the nature of the uniformity. We
do not make all of the columns of the K-R tower good -- but we make
sure that the bad ones are seen infrequently, uniformly over the
whole space. With this remark the proof of the proposition is
easily accomplished using the same nested K-R towers as before --
{\em but inside the uniform algebra}.
Finally the J-K theorem is established by constructing a refining
sequence of uniform partitions and looking at the inverse limit of
the corresponding topological spaces. Notice that if $\mathcal{Q}$
refines $\mathcal{P}$, and both are uniform, then there is a natural
homeomorphism from $X_{\mathcal{Q}}$ onto $X_{\mathcal{P}}$. The way in
which the theorem is established also yields a proof of the
relative J-K theorem, Theorem \ref{rel-jk}.
\end{proof}
Using similar methods E. Lehrer \cite{Leh} shows that
in the Jewett-Krieger theorem one can find,
for any ergodic system, a strictly
ergodic model which is topologically mixing.
\section{Models for other commutative diagrams}
One can describe Theorem \ref{rel-jk} as asserting that
every diagram of ergodic systems of the form
$\mathbf{X}\to\mathbf{Y}$ has a strictly ergodic model. What can we say
about more complicated commutative diagrams?
A moment's reflection will show that a repeated application
of Theorem \ref{rel-jk} proves the first assertion
of the following theorem.
\begin{thm}\label{CD}
Any commutative diagram in the category of ergodic $\mathbb{Z}$-dynamical
systems with the structure of an inverted tree,
i.e. no portion of it looks like
\begin{equation}\label{Z>>XY}
\xymatrix
{
& \mathbf{Z} \ar[dl]_\alpha \ar[dr]^\beta & \\
\mathbf{X} & & \mathbf{Y}
}
\end{equation}
has a strictly ergodic model.
On the other hand there exists a diagram of the form
\eqref{Z>>XY} that does not admit a strictly ergodic model.
\end{thm}
For the proof of the second assertion we need the following
theorem.
\begin{thm}\lambdabel{top-dis>meas-dis}
If $(Z,\eta,T)$ is a strictly ergodic system and
$(Z,T)\overset{\alpha}{\to} (X,T)$ and
$(Z,T)\overset{\beta}{\to}(Y,T)$
are topological factors such that
$\alpha^{-1}(U)\cap\beta^{-1}(V)\ne\emptyset$ whenever
$U\subset X$ and $V\subset Y$ are nonempty open sets,
then the measure-preserving systems $\mathbf{X}=(X,\mathcal{X},\mu,T)$ and
$\mathbf{Y}=(Y,\mathcal{Y},\nu,T)$ are measure-theoretically disjoint.
In particular this is the case if the systems
$(X,T)$ and $(Y,T)$ are topologically disjoint.
\end{thm}
\begin{proof}
It suffices to show that the map $\alpha\times \beta:Z\to X\times Y$
is onto since this will imply that the topological system
$(X\times Y,T)$ is strictly ergodic.
We establish this by showing that the measure
$\lambda=(\alpha\times\beta)_*(\eta)$ (a joining of $\mu$ and $\nu$)
is full;\ i.e. that it assigns positive measure to
every set of the form $U\times V$ with $U$ and $V$ as in the
statement of the theorem.
In fact, since by assumption $\eta$ is full we have
$$
\lambda(U\times V)=
\eta((\alpha\times\beta)^{-1}(U\times V))=
\eta(\alpha ^{-1}(U) \cap \beta^{-1}(V))>0.
$$
This completes the proof of the first assertion. The second
follows since topological disjointness of $(X,T)$ and $(Y,T)$
implies that $\alpha\times \beta:Z\to X\times Y$ is onto.
\end{proof}
\begin{proof}[Proof of Theorem \ref{CD}]
We only need to prove the last assertion.
Take $\mathbf{X}=\mathbf{Y}$ to be any
nontrivial weakly mixing system, then $\mathbf{X}\times\mathbf{X}$ is
ergodic and the diagram
\begin{equation}
\xymatrix
{
& \mathbf{X}\times\mathbf{X} \ar[dl]_{p_1} \ar[dr]^{p_2} & \\
\mathbf{X} & & \mathbf{X}
}
\end{equation}
is our counter example.
In fact if a diagram of the form \eqref{Z>>XY} has a uniquely ergodic model in this
situation then it is easy to establish that the condition in Theorem
\ref{top-dis>meas-dis} is satisfied and we apply this theorem
to conclude that $\mathbf{X}$ is disjoint from itself.
Since in a nontrivial system
$\mu\times\mu$ and ${\rm gr}\,(\mu,{\rm id})$ are different
ergodic joinings, this contradiction proves our assertion.
\end{proof}
\section{The Furstenberg-Weiss almost 1-1 extension theorem}
It is well known that in a topological measure space one can
have sets that are large topologically but small in the sense of the
measure. In topological dynamics when $(X,T)$\ is a factor of $(Y,T)$\
and the projection $\pi: Y \to X$ is one to one on a topologically
large set (i.e. the complement of a set of first category),
one calls $(Y,T)$\ an {\bf almost 1-1 extension} of $(X,T)$\ and considers
the two systems to be very closely related. Nonetheless,
in view of the opening sentence, it is possible that the measure
theory of $(Y,T)$\ will be quite different from the measure theory of
$(X,T)$. The following theorem realizes this possibility in an
extreme way (see \cite{FW15}).
\begin{thm}
Let $(X,T)$\ be a non-periodic minimal dynamical system, and let
$\pi: Y \to X$ be an extension of $(X,T)$\ with $(Y,T)$\ topologically
transitive and $Y$ a compact metric space. Then there exists an
almost 1-1 minimal extension, $\overline{\pi}:(\overline{Y},T)\to (X,T)$
and a Borel subset $Y_0\subset Y$ with a Borel measurable map
$\theta: Y_0 \to \overline{Y}$ satisfying (1) $\theta T = T \theta$,
(2) $\overline{\pi}\theta =\pi$, (3) $\theta$ is 1-1 on $Y_0$, (4)
$\mu(Y_0)=1$ for any $T$-invariant measure $\mu$ on $Y$.
\end{thm}
In words, one can find an almost 1-1 minimal extension of $X$
such that the measure theoretic structure is as rich as that of
an arbitrary topologically transitive extension of $X$.
An almost 1-1 extension of a minimal equicontinuous system
is called an {\bf almost automorphic} system.
The next corollary demonstrates the usefulness of this
modelling theorem. Other applications appeared e.g.
in \cite{GW35} and \cite{DL}.
\begin{cor}
Let $(X,\mathcal{X},\mu,T)$ be an ergodic measure preserving
transformation with infinite point spectrum defined by
$(G,\rho)$ where $G$ is a compact monothetic group
$G=\overline{\{\rho^n\}}_{n\in \mathbb{Z}}$. Then there is an almost
1-1 minimal extension of $(G,\rho)$ (i.e. a minimal almost
automorphic system), $(\tilde Z,\sigma)$ and an invariant
measure $\nu$ on $\tilde Z$ such that $(\tilde Z,\sigma,\nu)$ is isomorphic
to $(X,\mathcal{X},\mu,T)$.
\end{cor}
\section{Cantor minimal representations}
A {\em Cantor minimal dynamical system} is
a minimal topological system $(X,T)$\ where $X$ is the Cantor set.
Two Cantor minimal systems $(X,T)$\ and $(Y,S)$ are called
{\em orbit equivalent\/} (OE)
if there exists a homeomorphism $F: X\to Y$
such that $F(\mathcal{O}_T(x))=\mathcal{O}_S(Fx)$ for every $x\in X$.
Equivalently: there are functions
$n:X\to \mathbb{Z}$ and $m: X \to \mathbb{Z}$ such that for every $x\in X$,
$F(Tx)=S^{n(x)}(Fx)$ and $F(T^{m(x)}x)=S(Fx)$.
An old result of M. Boyle implies that the requirement that,
say, the function $n(x)$ be continuous already implies that
the two systems are {\em flip conjugate\/}; i.e. $(Y,S)$
is isomorphic either to $(X,T)$\ or to $(X,T^{-1})$. However,
if we require that both $n(x)$ and $m(x)$ have at most
one point of discontinuity we get the new and, as it turns out,
useful notion of {\em strong orbit equivalence\/} (SOE).
A complete characterization of both OE and SOE of Cantor minimal systems
was obtained by Giordano, Putnam and Skau \cite{GPS}
in terms of an algebraic invariant of Cantor minimal systems called
the {\em dimension group}. (See \cite{Glasner} for
a review of these results.)
We conclude this section with
the following remarkable theorems, due to N. Ormes \cite{Ormes},
which simultaneously generalize the theorems of Jewett and Krieger
and a theorem of Downarowicz \cite{Dow} which,
given any Choquet simplex $Q$, provides
a Cantor minimal system $(X,T)$\ with $M_T(X)$ affinely homeomorphic with $Q$.
(See also Downarowicz and Serafin \cite{DS},
and Boyle and Downarowicz \cite{BD}.)
\begin{thm}\label{ort}
\begin{enumerate}
\item
Let $(\Omega,\mathcal{B},\nu,S)$ be an ergodic, non-atomic, probability measure
preserving, dynamical system. Let $(X,T)$ be a Cantor minimal
system such that whenever $\exp(2\pi i/p)$
is a (topological) eigenvalue of
$(X,T)$\ for some $p\in \mathbb{N}$ it is also a (measurable) eigenvalue
of $(\Omega,\mathcal{B},\nu,S)$.
Let $\mu$ be any element of the set of extreme points
of $M_T(X)$. Then, there exists
a homeomorphism $T':X \to X$ such that (i) $T$ and $T'$ are
strong orbit equivalent, (ii) $(\Omega,\mathcal{B},\nu,S)$ and $(X,\mathcal{X},\mu,T')$
are isomorphic as measure preserving dynamical systems.
\item
Let $(\Omega,\mathcal{B},\nu,S)$ be an ergodic, non-atomic, probability measure
preserving, dynamical system. Let $(X,T)$ be a Cantor minimal
system and $\mu$ any element of the set of extreme points
of $M_T(X)$. Then, there exists
a homeomorphism $T':X \to X$ such that (i) $T$ and $T'$ are
orbit equivalent, (ii) $(\Omega,\mathcal{B},\nu,S)$ and $(X,\mathcal{X},\mu,T')$
are isomorphic as measure preserving dynamical systems.
\item
Let $(\Omega,\mathcal{B},\nu,S)$ be an ergodic, non-atomic, probability measure
preserving dynamical system. Let $Q$ be any Choquet simplex and
$q$ an extreme point of $Q$. Then there exists a
Cantor minimal system $(X,T)$\ and an affine homeomorphism $\phi: Q \to M_T(X)$
such that, with $\mu=\phi(q)$,
$(\Omega,\mathcal{B},\nu,S)$ and $(X,\mathcal{X},\mu,T)$
are isomorphic as measure preserving dynamical systems.
\end{enumerate}
\end{thm}
\section{Other related theorems}
Let us mention a few more striking representation results.
For the first one recall that a topological dynamical system
$(X,T)$ is said to be \textbf{prime} if it has no non-trivial
factors. A similar definition can be given for measure preserving
systems. There it is easy to see that a prime system $(X,\mathcal{X},\mu,T)$
must have zero entropy. It follows from a construction in \cite{SW}
that the same holds for topological entropy, namely any system $(X,T)$
with positive topological entropy has non-trivial factors. In
\cite{Wei36} it is shown that any ergodic zero entropy
dynamical system has a minimal model $(X,T)$ with the property
that any pair of points $(u,v)$ not on the same orbit has a dense
orbit in $X \times X$. Such minimal systems are necessarily prime,
and thus we have the following result:
\begin{thm}\label{prime}
An ergodic dynamical system has a topological, minimal,
prime model iff it has zero entropy.
\end{thm}
The second theorem,
due to Glasner and Weiss \cite{GW35},
treats the positive entropy systems.
\begin{thm}
An ergodic dynamical system has a strictly ergodic,
UPE model iff it has positive entropy.
\end{thm}
We also have the following surprising result which is due
to Weiss \cite{Wei35}.
\begin{thm}
There exists a minimal metric dynamical system $(X,T)$
with the property that for every ergodic probability
measure preserving system $(\Omega,\mathcal{B},\mu,S)$ there
exists a $T$-invariant Borel probability measure
$\nu$ on $X$ such that the systems
$(\Omega,\mathcal{B},\mu,S)$ and $(X,\mathcal{X},\nu,T)$ are isomorphic.
\end{thm}
In \cite{Lis2} E. Lindenstrauss proves the following:
\begin{thm}\label{distal-model}
Every ergodic measure distal $\mathbb{Z}$-system
$\mathbf{X}=(X,\mathcal{X},\mu,T)$ can be
represented as a minimal topologically distal
system $(X,T,\mu)$ with $\mu\in M_T^{{\rm{erg}}}(X)$.
\end{thm}
This topological model need not, in general,
be uniquely ergodic. In other words there are measure
distal systems for which no uniquely ergodic topologically distal
model exists.
\begin{prop}
\begin{enumerate}
\item
There exists an ergodic non-Kronecker measure distal system
$(\Omega,\mathcal{F},m,T)$ with nontrivial maximal
Kronecker factor
$(\Omega_0,\mathcal{F}_0,m_0,T)$ such that (i) the extension
$(\Omega,\mathcal{F},m,T)\to (\Omega_0,\mathcal{F}_0,m_0,T)$ is finite to one a.e.
and (ii) every nontrivial factor map of $(\Omega_0,\mathcal{F}_0,m_0,T)$
is finite to one.
\item
A system $(\Omega,\mathcal{F},m,T)$ as in part 1 does not admit
a topologically distal strictly ergodic model.
\end{enumerate}
\end{prop}
\begin{proof}
1.\ Irrational rotations of the circle as well as
adding machines are examples of Kronecker systems satisfying
condition (ii).
There are several constructions in the literature of ergodic,
non-Kronecker, measure distal, two point extensions of these
Kronecker systems.
A well known explicit example is the strictly ergodic Morse
minimal system.
2.\
Assume to the contrary that $(X,\mu,T)$ is a distal
strictly ergodic model for $(\Omega,\mathcal{F},m,T)$.
Let $(Z,T)$ be the maximal equicontinuous factor of
$(X,T)$ and let $\eta$ be the unique invariant probability
measure on $Z$. Since by assumption $(X,\mu,T)$ is
not Kronecker it follows that $\pi: X \to Z$ is not
one to one. By Furstenberg's structure theorem for minimal distal
systems $(Z,T)$ is nontrivial and moreover there exists an
intermediate extension
$X \to Y \overset{\sigma}{\to}Z$ such that $\sigma$ is
an isometric extension. A well known construction implies
the existence of a minimal group extension
$\rho:(\tilde Y,T) \to (Z,T)$, with compact fiber group $K$,
such that the following diagram is commutative
(see Section \ref{Sec-distal} above).
We denote by $\nu$ the unique invariant measure on $Y$
(the image of $\mu$) and let $\tilde \nu$ be an ergodic
measure on $\tilde Y$ which projects onto $\nu$.
The dotted arrows denote measure theoretic factor maps.
\begin{equation*}
\xymatrix
{
& (X,\mu)\ar@{.>}[dl]\ar[dd]_{\pi} \ar[dr]& & &\\
(\Omega_0,m_0) \ar@{.>}[dr] & &(Y,\nu)\ar[dl]_{\sigma} &
(\tilde Y,\tilde\nu)\ar[l]_{\phi}
\ar[dll]^{\rho, K}\\
& (Z,\eta)& & &
}
\end{equation*}
Next form the measure
$ \theta = \int_K R_k\tilde\nu\, dm_K,$
where $m_K$ is Haar measure on $K$ and for each $k\in K$,
$R_k$ denotes right translation by $k$ on $\tilde Y$
(an automorphism of the system $(\tilde Y,T)$).
We still have $\phi(\theta)=\nu$.
A well known theorem in topological dynamics (see \cite{SS}) implies
that a minimal distal finite to one extension of a minimal equicontinuous
system is again equicontinuous and since $(Z,T)$ is the maximal
equicontinuous factor of $(X,T)$ we conclude that
the extension $\sigma: Y \to Z$ is not finite to one.
Now the fibers of the extension $\sigmama$ are homeomorphic
to a homogeneous space $K/H$, where $H$ is a closed subgroup of $K$.
Considering the measure disintegration $\theta = \int_Z \theta_z\,
d\eta(z)$ of $\theta$ over $\eta$ and its projection
$\nu = \int_Z \nu_z\, d\eta(z)$, the disintegration of $\nu$ over
$\eta$, we see that a.e. $\theta_z \equiv m_K$ and
$\nu_z \equiv m_{K/H}$. Since $K/H$ is infinite we conclude
that the {\em measure theoretical extension}
$\sigma: (Y,\nu) \to (Z,\eta)$
is not finite to one. However, considering the dotted part
of the diagram, we arrive at the opposite conclusion.
This contradiction completes the proof of the proposition.
\end{proof}
In \cite{OW} Ornstein and Weiss introduced the notion
of tightness for measure preserving systems and the analogous
notion of mean distality for topological systems.
\begin{defn}
Let $(X,T)$\ be a topological system.
\begin{enumerate}
\item
A pair $(x,y)$ in $X\times X$ is {\bf mean proximal\/}
if for some (hence any) compatible metric $d$
$$
\limsup_{n\to\infty}\frac{1}{2n+1}\sum_{i=-n}^{n}
d(T^i x, T^i y) = 0.
$$
If this $\limsup$ is positive the pair is called
{\bf mean distal\/}.
\item
The system $(X,T)$\ is {\bf mean distal\/} if every pair $(x,y)$ with $x\ne y$
is mean distal.
\item
Given a $T$-invariant probability measure $\mu$ on $X$,
the triple $(X,\mu,T)$ is called {\bf tight\/} if there is
a $\mu$-conull set $X_0\subset X$ such that every pair of
distinct points $(x,y)$ in $X_0\times X_0$ is mean distal.
\end{enumerate}
\end{defn}
Ornstein and Weiss show that tightness is in fact a property
of the measure preserving system $(X,\mu,T)$ (i.e. if the
measure system $(X,\mathcal{X},\mu,T)$ admits one tight model then
every topological model is tight). They obtain the following results.
\begin{thm}
\mbox{}
\begin{enumerate}
\item
If the entropy of $(X,\mu,T)$ is positive and finite
then $(X,\mu,T)$ is not tight.
\item
There exist strictly ergodic non-tight systems
with zero entropy.
\end{enumerate}
\end{thm}
Surprisingly, the proof in \cite{OW} of the
non-tightness of a positive entropy system does not
work when the entropy is infinite; this case is
still open.
J. King gave an example of a tight system with a non-tight
factor. Following this, King and Weiss \cite{OW} established
the following result. Note that this theorem
implies that tightness and mean distality are not
preserved by factors.
\begin{thm}
If $(X,\mathcal{X},\mu,T)$ is ergodic with zero entropy
then there exists a mean-distal system $(Y,\nu,S)$ which
admits $(X,\mathcal{X},\mu,T)$ as a factor.
\end{thm}
\end{document} |
\begin{document}
\begin{center}
\large{\bf Generalizations of $R_0$ and SSM properties; Extended Horizontal Linear Complementarity Problem}
\end{center}
\begin{center}
\textsc{Punit Kumar Yadav}\\
Department of Mathematics\\ Malaviya National Institute of Technology, Jaipur, 302017, India\\
E-mail address: punitjrf@gmail.com\\
\textsc{K. Palpandi}\\
Department of Mathematics\\ Malaviya National Institute of Technology, Jaipur, 302017, India\\
E-mail address: kpalpandi.maths@mnit.ac.in
\end{center}
\begin{abstract}
In this paper, we first introduce the $R_0$-$W$ and ${\bf SSM}$-$W$ properties for a set of matrices, which generalize the $R_0$ and strictly semimonotone (SSM) matrices. We then prove some existence results for the extended horizontal linear complementarity problem when the involved matrices have these properties. With an additional condition on the set of matrices, we prove that the ${\bf SSM}$-$W$ property is equivalent to the uniqueness of the solution of the corresponding extended horizontal linear complementarity problems. Finally, we give a necessary and sufficient condition for the connectedness of the solution set of the extended horizontal linear complementarity problems.
\end{abstract}
\section{Introduction}
The standard linear complementarity problem (for short LCP), LCP($C,q$), is to find vectors $x,y$ such that \begin{equation}
x\in \mathbb{R}^n, ~y=Cx + q\in\mathbb{R}^n ~\text{and}~ x\wedge y = 0,\end{equation} where $C\in \mathbb{R}^{n\times n},~q\in \mathbb{R}^n$ and $'\wedge'$ is the componentwise minimum map. The LCP has applications in numerous domains, such as optimization, economics, and game theory. Cottle and Pang's monograph \cite{LCP} is the primary reference for the standard LCP. Various generalisations of the linear complementarity problem have been developed and discussed in the literature during the past three decades (see, \cite{n2n,elcp,telcp,hlcp,hlcpm,rhlcp}). The extended horizontal linear complementarity problem is one of the most important extensions of the LCP, which various authors have studied; see \cite{PP0,exis,n2n} and references therein. For a given ordered set of matrices ${\bf C}:=\{C_0,C_1,...,C_k\} \subseteq \mathbb{R}^{n \times n}$, vector $q\in \mathbb{R}^n$ and ordered set of positive vectors ${\bf d}:=\{d_{1},d_{2},...,d_{k-1}\} \subseteq \mathbb{R}^n $, the extended horizontal linear complementarity problem (for short EHLCP), denoted by EHLCP(${\bf C},{\bf d},q$), is to find vectors $x_{0},x_{1},...,x_{k} \in \mathbb{R}^n$ such that \begin{equation}\label{e1}
\begin{aligned}
C_0 x_{0}=&q+\sum_{i=1}^{k} C_ix_{i},\\
x_{0}\wedge x_{1}=0 ~~\text{and} ~~ (d_{j}-&x_{j})\wedge x_{j+1}=0, ~1\leq j\leq k-1.\\
\end{aligned}
\end{equation}
If $k=1$, then EHLCP becomes the horizontal linear complementarity problem (for short HLCP), that is, \begin{equation*}
\begin{aligned}
C_0 x_{0}-C_1x_{1}=q~~\text{and}~~x_{0}\wedge x_{1}=0.
\end{aligned}
\end{equation*}
Further, the HLCP reduces to the standard LCP by taking $C_0 =I$. Due to its widespread applications in numerous domains, the horizontal linear complementarity problem has received substantial research attention; see \cite{hlcp,hlcpm,rhlcp, homo} and references therein.
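Since an HLCP with $C_0=I$ is a standard LCP, small instances can be checked directly from the definition by enumerating, for each index, which member of the complementary pair is allowed to be nonzero. The following is a minimal brute-force sketch (the function name, the $2\times 2$ instance and tolerances are our own illustration, not taken from the cited references):

```python
import itertools
import numpy as np

def solve_hlcp(C0, C1, q, tol=1e-9):
    # Brute-force HLCP(C0, C1, q): find x0, x1 >= 0 with
    # C0 x0 - C1 x1 = q and x0 * x1 = 0, by choosing, for each index i,
    # which of (x0)_i, (x1)_i may be nonzero (2^n cases).
    n = len(q)
    for supp in itertools.product([0, 1], repeat=n):
        cols = [C0[:, i] if s == 0 else -C1[:, i] for i, s in enumerate(supp)]
        M = np.column_stack(cols)
        try:
            z = np.linalg.solve(M, q)
        except np.linalg.LinAlgError:
            continue
        if np.all(z >= -tol):
            supp = np.array(supp)
            x0 = np.where(supp == 0, z, 0.0)
            x1 = np.where(supp == 1, z, 0.0)
            return x0, x1
    return None

# With C0 = I the problem is the standard LCP: x0 = C1 x1 + q.
C0 = np.eye(2)
C1 = np.array([[2.0, 1.0], [1.0, 2.0]])   # a P matrix, so a unique solution exists
q = np.array([-1.0, -1.0])
x0, x1 = solve_hlcp(C0, C1, q)
print(x0, x1)   # x0 = (0, 0), x1 = (1/3, 1/3)
```

The enumeration is exponential in $n$ and is meant only as a check of the definitions on toy examples.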
Various authors have introduced new classes of matrices for analysing the structure of LCP solution sets in recent years; see for example, \cite{LCP, fvi,PP0}.
The classes of $R_0$, $P_0$, $P$, and strictly semimonotone (SSM) matrices play a crucial role in the existence and uniqueness of the solution to the LCP. For instance, a matrix $A$ is a $P$ matrix if [$x\in\mathbb{R}^n,~x*Ax\leq 0\implies x=0$], and this property gives a necessary and sufficient condition for the uniqueness of the solution of the LCP (see, Theorem 3.3.7 in \cite{LCP}). To get similar existence and uniqueness results for generalized LCPs, the notion of a $P$ matrix was extended to a set of matrices as the column $W$-property by Gowda et al. \cite{PP0}. They proved that the column $W$-property gives the solvability and the uniqueness of the solution for the extended horizontal linear complementarity problem (EHLCP). Also, they generalized the concept of the $P_0$ matrix as the column $W_0$-property.
Another class of matrices, the so-called SSM matrices, is important in LCP theory. This class provides a unique solution to the LCP on $\mathbb{R}^n_+$ and also gives the existence of a solution for the LCP (see, \cite{LCP}). For a $Z$ matrix (a matrix whose off-diagonal entries are all non-positive), being a $P$ matrix is equivalent to being an SSM matrix (see, Theorem 3.11.10 in \cite{LCP}). A natural question arises whether the SSM matrix can be generalized to a set of matrices in view of the EHLCP and whether a similar equivalence holds for sets of $Z$ matrices. In this paper, we answer this question.
The connectedness of the solution set of LCP has a prominent role in the study of the LCP. We say a matrix is connected if the solution set of the corresponding LCP is connected.
In \cite{ctd}, Jones and Gowda addressed the connectedness of the solution set of the LCP. They proved that a matrix is connected whenever it is a $P_0$ matrix and the solution set has a bounded connected component. Also, they showed that if the solution set of the LCP is connected, then there is at most one solution of the LCP for all $q>0.$ Due to the specially structured matrices involved in the study of the connectedness of the solution to the LCP, various authors have studied the connectedness of the LCP; see for example \cite{ctd,ctdl,Cntd}. The main objectives of this paper are to answer the following questions:
\begin{itemize}
\item[(Q1)] In LCP theory, it is a well-known result that the $R_0$ matrix gives boundedness of the LCP solution set. The same holds true for the HLCP \cite{szn}. This motivates the question of whether the notion of an $R_0$ matrix can be generalized to a set of matrices. If so, can we expect the same kind of outcome in the EHLCP?
\item [(Q2)] Given that a strictly semimonotone matrix guarantees the existence of the LCP solution and its uniqueness for $q\geq 0$, it is natural to wonder whether the concept of an SSM matrix can be extended to a set of matrices. If so, does the same result hold true for the EHLCP?
\item [(Q3)] Motivated by the results of Jones and Gowda \cite{ctd} regarding the connectedness of the solution set of the LCP, one can ask whether the solution set of the EHLCP is connected if the set of matrices has the column $W_0$-property and the solution set of the corresponding EHLCP has a bounded connected component.
\end{itemize}
The paper's outline is as follows: We present some basic definitions and results in Section 2. We generalize the concept of the $R_0$ matrix and prove an existence result for the EHLCP in Section 3. In Section 4, we introduce the {\bf SSM}-$W$ property, and we then study an existence and uniqueness result for the EHLCP when the underlying set of matrices has this property. In the last section, we give a necessary and sufficient condition for the connectedness of the solution set of the EHLCP.
\section{Notations and Preliminaries}
\subsection{Notations}Throughout this paper, we use the following notations: \begin{itemize}
\item[(i)] The $n$ dimensional Euclidean space with the usual inner product will be denoted by $\mathbb{R}^n$. The set of all non-negative vectors (respectively, positive vectors) in $\mathbb{R}^n$ will be denoted by $\mathbb{R}^n_+$ (respectively, $\mathbb{R}^n_{++}$). We say $x \geq 0$ (respectively, $x>0$) if and only if $x\in\mathbb{R}^n_+$ (respectively, $x\in\mathbb{R}^n_{++})$.
\item [(ii)] The $k$-ary Cartesian power of $\mathbb{R}^n$ will be denoted by $\Lambda^{(k)}_n$ and the $k$-ary Cartesian power of $\mathbb{R}^n_{++}$ will be denoted by $\Lambda^{(k)}_{n,++}$. The bold zero '${\bf 0}$' will be used for denoting the zero vector $(0,0,...,0)\in \Lambda^{(k)}_n.$
\item [(iii)] The set of all $n\times n$ real matrices will be denoted by $\mathbb{R}^{n\times n}$. We use the symbol $\Lambda^{(k)}_{n\times n}$ to denote the $k$-ary Cartesian power of $\mathbb{R}^{n\times n}$.
\item [(iv)] We use $[n]$ to denote the set $\{1,2,...,n\}$.
\item [(v)] Let $M\in\mathbb{R}^{n\times n}$. We use $\text{diag}(M)$ to denote the vector $(M_{11},M_{22},...,M_{nn})\in \mathbb{R}^n$, where $M_{ii}$ is the $i^{\rm th}$ diagonal entry of matrix $M$, and $\text{det}(M)$ is used to denote the determinant of matrix $M$.
\item[(vi)] SOL(${\bf C}, {\bf d}, q$) will be used for denoting the set of all solution to EHLCP(${\bf C},{\bf d},q$).
\end{itemize}
We now recall some definitions and results from the LCP theory, which will be used frequently in our paper.
\begin{proposition}[\cite{wcp}]\label{star}
Let $V=\mathbb{R}^n.$ Then, the following statements are equivalent.
\begin{itemize}
\item [\rm(i)] $x\wedge y=0.$
\item [\rm(ii)] $x,y\geq 0$ and $~x*y=0,$ where $*$ is the Hadamard product.
\item[\rm(iii)] $x,y\geq 0~\text{and}~\langle x,y\rangle=0.$
\end{itemize} \end{proposition}
\begin{definition}[\cite{PP0}]\rm
Let ${\bf C}=(C_0,C_1,...,C_k)\in\Lambda^{(k+1)}_{n\times n}$. Then a matrix $R\in\mathbb{R}^{n\times n}$ is a column representative of ${\bf C}$ if $$R._j\in\big\{(C_0)._j,(C_1)._j,...,(C_k)._j\big\},~\forall j\in[n],$$
where $R._j$ is the $j^{{\rm th}}$ column of matrix $R.$
\end{definition}
Next, we define the column $W$-property.
\begin{definition}[\cite{PP0}] \rm
Let ${\bf C}:=(C_0,C_1,...,C_k)\in\Lambda^{(k+1)}_{n\times n}$. Then we say that ${\bf C}$ has the
\begin{itemize}
\item[\rm (i)] {\it column $W$-property} if the determinants of all the column representative matrices of ${\bf C}$ are all positive or all negative.
\item[\rm (ii)] {\it column $W_0$-property} if there exists ${\bf N}:=(N_0,N_1,...,N_k)\in \Lambda^{(k+1)}_{n\times n}$ such that ${\bf C+\epsilon N}:=(C_0+ \epsilon N_0,C_1+\epsilon N_1,...,C_k+\epsilon N_k)$ has the column $W$-property for all $\epsilon>0$.
\end{itemize}
\end{definition}
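For small $n$ and $k$, the column $W$-property can be tested directly from the definition by enumerating all $(k+1)^n$ column representative matrices and checking the signs of their determinants. A brute-force sketch (the function name and tolerance are our own illustration):

```python
import itertools
import numpy as np

def has_column_w_property(mats, tol=1e-10):
    # Enumerate all (k+1)^n column representative matrices of
    # C = (C_0, ..., C_k): column j is drawn from one of the C_i.
    # C has the column W-property iff all determinants are positive
    # or all are negative.
    k_plus_1 = len(mats)
    n = mats[0].shape[0]
    dets = []
    for choice in itertools.product(range(k_plus_1), repeat=n):
        R = np.column_stack([mats[choice[j]][:, j] for j in range(n)])
        dets.append(np.linalg.det(R))
    return all(d > tol for d in dets) or all(d < -tol for d in dets)

C0 = np.eye(2)
C1 = np.array([[2.0, 1.0], [1.0, 2.0]])
print(has_column_w_property([C0, C1]))               # True: all four dets positive
print(has_column_w_property([C0, np.ones((2, 2))]))  # False: one representative is singular
```

The enumeration grows as $(k+1)^n$, so this is only practical for toy instances; it is a check of the definition, not an efficient test.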
Due to Gowda and Sznajder \cite{PP0}, we have the following result.
\begin{theorem}[\cite{PP0}] \label{P1}
For ${\bf C}=(C_0,C_1,...,C_k)\in\Lambda^{(k+1)}_{n\times n}$, the following are equivalent:\begin{itemize}
\item[\rm(i)]${\bf C}$ has the column $W$-property.
\item[\rm(ii)] For arbitrary non-negative diagonal matrices $D_{0},D_{1},...,D_{k}\in\mathbb{R}^{n\times n}$
with $\text{\rm diag}(D_{0}+D_{1}+D_{2}+...+D_{k})>0$,
$$\text{\rm det}\big(C_0D_{0}+C_1D_{1}+...+C_kD_{k}\big)\neq 0.$$
\item [\rm(iii)]$C_0$ is invertible and $(I,C_0^{-1}C_1,...,C_0^{-1}C_k)$ has the column $W$-property.
\item [\rm(iv)] For all $q\in\mathbb{R}^n$ and ${\bf d}\in\Lambda^{(k-1)}_{n,++}$, {\rm EHLCP}$({\bf C},{\bf d},q)$ has a unique solution. \end{itemize}
\end{theorem}
If $k=1$ and $C_0^{-1}$ exists, then HLCP($C_0,C_1,q$) is equivalent to LCP($C_0^{-1}C_1,C_0^{-1}q$). In this case, $C_0^{-1}C_1$ is a $P$ matrix if and only if for all $q\in\mathbb{R}^n$, LCP($C_0^{-1}C_1,C_0^{-1}q$) has a unique solution (see, Theorem 3.3.7 in \cite{LCP}). Hence, in view of the previous theorem, we have the following result.
\begin{theorem}[\cite{PP0}]\label{C1}
Let $(C_0,C_1)\in\Lambda^{(2)}_{n\times n}$. Then the following are equivalent.
\begin{itemize}
\item [\rm(i)] $(C_0,C_1)$ has the column $W$-property.
\item [\rm(ii)] $C_0$ is invertible and $C_0^{-1}C_1$ is a $P$ matrix.
\item [\rm(iii)] For all $q\in\mathbb{R}^n$, {\rm HLCP}$(C_0,C_1,q)$ has a unique solution.
\end{itemize}
\end{theorem}
\subsection{Degree theory}
We now recall the definition and some properties of the degree from \cite{fvi,deg} for our discussion.
Let $\Omega$ be an open bounded set in $\mathbb{R}^n$. Suppose $h:\bar{\Omega}\rightarrow \mathbb{R}^n$ is a continuous function and $p\notin h(\partial\Omega)$, where $\partial\Omega$ and $\bar{\Omega}$ denote the boundary and closure of $\Omega$, respectively. Then the degree of $h$ with respect to $p$ over $\Omega$ is defined; it is denoted by $\text{deg}(h,\Omega,p).$ The equation $h(x)=p$ has a solution whenever $\text{deg}(h,\Omega,p)$ is non-zero. If $h(x)=p$ has only one solution, say $y$ in $\mathbb{R}^n$, then the degree is the same over all bounded open sets containing $y$. This common degree is denoted by $\text{deg}(h,p)$.
\subsubsection{Properties of the degree} The following properties are used frequently here.
\begin{itemize}
\item[(D1)] deg($I,\Omega,\cdot)=1$, where $I$ is the identity function.
\item [(D2)] {\bf Homotopy invariance}: Let a homotopy $\Phi(x,s):\mathbb{R}^n\times[0,1]\rightarrow \mathbb{R}^n $ be continuous.
If the zero set of $\Phi(x,s),~X=\{x:\Phi(x,s)={0}~\text{for some}~s\in[0,1]\}$ is bounded, then for any bounded open set $\Omega$ in $\mathbb{R}^n$ containing the zero set $X$, we have $$\text{deg}(\Phi(x,1),\Omega,{ 0})=\text{deg}(\Phi(x,0),\Omega,{0}).$$
\item[(D3)] {\bf Nearness property}: Assume $\text{deg}(h_1(x),\Omega,p)$ is defined and $h_2:{\bar\Omega}\rightarrow \mathbb{R}^n$ is a continuous function. If
$\displaystyle\sup_{x\in\Omega}\| h_2(x)-h_1(x)\|<\text{dist}(p,\partial\Omega)$, then $\text{deg}(h_2(x),\Omega,p)$ is defined and equals $\text{deg}(h_1(x),\Omega,p)$.
\end{itemize}
The following result from Facchinei and Pang \cite{fvi} will be used later.
\begin{proposition}[\cite{fvi}]\label{ND}
Let $\Omega$ be a non-empty, bounded open subset of $\mathbb{R}^n$
and let $\Phi:\bar{\Omega}\rightarrow \mathbb{R}^n$ be a continuous injective mapping. Then $\text{\rm deg}(\Phi,\Omega,p)\neq0$ for all
$p\in\Phi(\Omega)$.
\end{proposition}
\noindent{\bf Note}: All the degree theoretic results and concepts are also applicable over any finite dimensional Hilbert space (like $\mathbb{R}^n$ or $\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n$ etc).
\section{$R_0$-$W$ property} In this section, we first define the $R_0$-$W$ property for a set of matrices, which is a natural generalization of the $R_0$ matrix in LCP theory. We then show that the $R_0$-$W$ property gives the boundedness of the solution set of the corresponding EHLCP.
\begin{definition}\rm
Let ${\bf C}=(C_0,C_1,...,C_k) \in \Lambda^{(k+1)}_{n\times n}$. We say that ${\bf C}$ has the $R_0$-$W$ {\it property} if the system
$$C_0x_{0}=\sum_{i=1}^{k} C_ix_{i}~\text{and}~x_{0}\wedge x_{j}=0 ~~\forall~j\in [k]$$ has only the zero solution.
\end{definition}
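For a pair $(C_0,C_1)$ and small $n$, the $R_0$-$W$ property can be tested by enumerating the $2^n$ complementary support patterns and searching each for a nonzero nonnegative null vector. A brute-force sketch (our own illustration; for simplicity it assumes the null spaces encountered are at most one-dimensional):

```python
import itertools
import numpy as np

def is_r0_w_pair(C0, C1, tol=1e-9):
    # R_0-W for (C0, C1): the only solution of C0 x0 = C1 x1 with
    # x0 ∧ x1 = 0 is zero.  For each support pattern, the nonzero
    # components form a homogeneous system M z = 0 with z >= 0.
    n = C0.shape[0]
    for supp in itertools.product([0, 1], repeat=n):
        M = np.column_stack([C0[:, i] if s == 0 else -C1[:, i]
                             for i, s in enumerate(supp)])
        _, sv, vt = np.linalg.svd(M)
        if sv[-1] > tol:
            continue              # nonsingular: only the zero solution
        v = vt[-1]                # a null vector (assumed 1-D null space)
        if np.all(v >= -tol) or np.all(v <= tol):
            return False          # signed null vector -> nonzero solution
    return True

print(is_r0_w_pair(np.eye(2), np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
print(is_r0_w_pair(np.eye(2), np.zeros((2, 2))))                    # False
```

When a null space has dimension greater than one, deciding whether it meets the nonnegative orthant nontrivially requires a small linear program, which this sketch omits.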
It can easily be seen that the $R_0$-$W$ property coincides with the $R_0$ matrix property when $k=1$ and $C_0 =I$. It is also noted (see, \cite{wcp}) that if $k=1$, then the $R_0$-$W$ property is referred to as the $R_0$ pair. To proceed further, we prove the following result.
\begin{lemma}\label{l1} Let ${\bf C}=(C_0,C_1,...,C_k) \in \Lambda^{(k+1)}_{n\times n}$ and ${\bf{x}} =(x_{0},x_{1},...,x_{k})\in\text{\rm SOL}({\bf C},{\bf d},q)$. Then ${\bf{x}}$ satisfies the following system $$C_0x_{0}=q+\sum_{i=1}^{k} C_ix_{i}~\text{and}~x_{0}\wedge x_{j}=0~\forall~j\in [k].$$
\end{lemma}
\begin{proof}
As $x_{0}\geq0$, there exists an index set $\alpha \subseteq [n]$ such that $(x_{0})_i>0$ for $i\in \alpha$ and $(x_{0})_i=0$ for $i\in [n]\setminus \alpha$. Since $x_{0} \wedge x_{1}=0$, we have $(x_{1})_i=0$ for all $i\in \alpha$. From $(d_{1}-x_{1})\wedge x_{2}=0$, we get $(d_{1})_i (x_{2})_i =0~\forall i\in \alpha$. This gives that $(x_{2})_i =0~\forall i\in \alpha$. By substituting $(x_{2})_i =0~\forall i\in \alpha$ in $(d_{2}-x_{2})\wedge x_{3}=0$, we obtain $(x_{3})_i =0~\forall i\in \alpha $. Continuing in the same way, one gets $(x_{4})_i =(x_{5})_i=...=(x_{k})_i=0~\forall i\in \alpha$. So, $x_{0}\wedge x_{j}=0~ \forall ~j \in[k]$. This completes the proof.
\end{proof}
We now prove the boundedness of the solution set of EHLCP when the involved set of matrices has the $R_0$-$W$ property.
\begin{theorem}\label{R_0}
Let ${\bf C}=(C_0,C_1,...,C_k) \in \Lambda^{(k+1)}_{n\times n}$. If ${\bf C}$ has the $R_0$-$W$ property, then $\text{\rm SOL}({\bf C},{\bf d},q)$ is bounded for every $q\in\mathbb{R}^n$ and ${\bf d} \in \Lambda^{(k-1)}_{n,++}$.
\end{theorem}
\begin{proof} Suppose there exist $q\in\mathbb{R}^n$ and ${\bf d}=(d_{1}, d_{2},...,d_{k-1})\in \Lambda^{(k-1)}_{n,++}$ such that $\text{SOL}({\bf C},{\bf d},q)$ is unbounded. Then there exists a sequence ${\bf x}^{(m)}=( {x^{(m)}_{0}},{x^{(m)}_{1}},...,{x^{(m)}_{k}})$ in $\Lambda^{(k+1)}_n$ such that $||{\bf x}^{(m)}|| \to \infty $ as $m\to \infty$ and it satisfies
\begin{equation}\label{bound}
\begin{aligned}
~~~~& C_0 {x^{(m)}_{0}} =q+\sum_{i=1}^{k} C_i {x^{(m)}_{i}} \\
~~~~& {x^{(m)}_{0}} \wedge {x^{(m)}_{1}}=0 ~~\text{and}~~ (d_{j}-{x^{(m)}_{j}})\wedge {x^{(m)}_{j+1}}=0 ~\forall j\in[k-1].
\end{aligned}
\end{equation}
From Lemma \ref{l1}, equation (\ref{bound}) gives that
\begin{equation}\label{bd1}
\begin{aligned}
C_0 {x^{(m)}_{0}} =&q+\sum_{i=1}^{k} C_i {x^{(m)}_{i}} ~~\text{and}~~
{x^{(m)}_{0}} \wedge {x^{(m)}_{j}} =&0 ~\forall j\in[k].\\
\end{aligned}
\end{equation}
As $\dfrac{{\bf x}^{(m)}}{\|{\bf x}^{(m)}\|}$ is a unit vector for all $m$, a subsequence of $\dfrac{{\bf x}^{(m)}}{\|{\bf x}^{(m)}\|}$ converges to some vector ${\bf{y}}=( y_{0},y_{1},...,y_{k}) \in \Lambda^{(k+1)}_{n}$ with $||{\bf{y}}||=1$.
Dividing equation (\ref{bd1}) by $\|{\bf x}^{(m)}\|$ and taking the limit $m\rightarrow \infty$, we get $$ C_0 y_{0}=\sum_{i=1}^{k} C_i y_{i} ~~\text{and}~~ y_{0}\wedge y_{j}=0 ~\forall j\in[k].$$ This implies that ${\bf{y}}$ must be the zero vector as ${\bf C}$ has the $R_0$-$W$ property, which contradicts the fact that $||{\bf{y}}||=1$. Therefore $\text{SOL}({\bf C},{\bf d},q)$ is bounded.
\end{proof}
\subsection{Degree of EHLCP}
Let ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n \times n}$ and ${\bf d}=(d_{1}, d_{2}, ...., d_{k-1})\in \Lambda^{(k-1)}_{n,++}$. We define a function $F:\Lambda^{(k+1)}_n \to \Lambda^{(k+1)}_n $ as \begin{equation}\label{F}
\begin{aligned}
F({\bf{x}})=\begin{bmatrix}
C_0 x_{0} -\sum_{i=1}^{k} C_ix_{i}\\ x_{0}\wedge x_{1}\\ (d_{1}-x_{1})\wedge x_{2}\\ (d_{2}-x_{2})\wedge x_{3}\\ \vdots\\ (d_{k-1}-x_{k-1})\wedge x_{k}\\
\end{bmatrix}.\end{aligned}\end{equation}
We denote the degree of $F$ with respect to ${\bf 0}$ over a bounded open set $\Omega \subseteq \Lambda^{(k+1)}_n$ by $\rm{deg}({\bf C},\Omega,{\bf 0})$. It is noted that if ${\bf C}$ has the $R_0$-$W$ property, then in view of Lemma \ref{l1}, $F({\bf{x}})={\bf 0} \Leftrightarrow {{\bf{x}}}={\bf 0}$, which implies that $\text{deg}({\bf C},\Omega,{\bf 0})=\text{deg} ({\bf C},{\bf 0})$ for any bounded open set $\Omega$ containing the origin in $\Lambda^{(k+1)}_n$. We call this degree the EHLCP-degree of ${\bf C}.$
We now prove an existence result for EHLCP.
\begin{theorem}\label{T2}
Let ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n\times n}$. Suppose the following hold:
\begin{itemize}
\item[\rm (i)] ${\bf C}$ has the $R_0$-$W$ property.
\item[\rm (ii)] ${\rm{deg}}({\bf C},{\bf 0})\neq 0$.
\end{itemize}
Then {\rm EHLCP(${\bf C},{\bf d},q$)} has a non-empty compact solution set for all $q\in\mathbb{R}^n$ and ${\bf d} \in \Lambda^{(k-1)}_{n, ++}$.
\end{theorem}
\begin{proof}
As the solution set of the EHLCP is closed, it is enough to prove that it is non-empty and bounded. We first define a homotopy $\Phi: \Lambda_n^{(k+1)} \times [0,1] \to \Lambda_n^{(k+1)}$ as $$\Phi({\bf{x}},s)=\begin{bmatrix}
C_0 x_{0} -\sum_{i=1}^{k} C_ix_{i}-sq\\ x_{0}\wedge x_{1}\\ (d_{1}-x_{1})\wedge x_{2}\\ (d_{2}-x_{2})\wedge x_{3}\\ \vdots\\ (d_{k-1}-x_{k-1})\wedge x_{k}\\
\end{bmatrix}.$$ Then, $$\Phi({\bf{x}},0)=F({\bf{x}})~~\text{and}~\Phi({\bf{x}},1)=F({\bf{x}})- \hat{q},~ \text{where}~~\hat{q}=(q,0,0,...,0)\in \Lambda^{(k+1)}_n.$$
By using an argument similar to that in Theorem \ref{R_0}, we can easily show that the zero set of the homotopy, $X=\{{\bf{x}}:\Phi({\bf{x}},s)={\bf 0}~\text{for some}~s\in[0,1]\}$, is bounded. From the property of degree (D2), we get
$\text{deg}(F,\Omega,{\bf 0})=\text{deg}(F-\hat{q},\Omega,{\bf 0})$ for any open bounded set $\Omega$ containing $X$. As $\text{deg}(F, \Omega,{\bf 0})=\text{deg}({\bf C},{\bf 0})\neq 0$, we obtain $\text{deg}(F-\hat{q},\Omega,{\bf 0})\neq 0$, which implies $\text{SOL}({\bf C},{\bf d},q) $ is non-empty. As ${\bf C}$ has the $R_0$-$W$ property, by Theorem \ref{R_0}, $\text{SOL}({\bf C},{\bf d},q) $ is bounded. This completes the proof.
\end{proof}
\section{${\bf SSM}$-$W$ property}
In this section, we first define the ${\bf SSM}$-$W$ {\it property} for a set of matrices, which is a generalization of the SSM matrix in LCP theory, and we then prove an existence and uniqueness result for the EHLCP when the involved set of matrices has the ${\bf SSM}$-$W$ property.
We now recall that an $n\times n$ real matrix $M$ is called a strictly semimonotone (SSM) matrix if [$x\in \mathbb{R}^n_+,~x*Mx\leq 0\Rightarrow x=0$]. We generalize this concept to a set of matrices.
\begin{definition}\rm
We say that ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n\times n}$ has the ${\bf SSM}$-$W$ property if
\begin{equation*}
\{C_0x_{0}=\sum_{i=1}^{k} C_ix_{i},~x_{i}\geq 0~\text{and}~~ x_{0}* x_{i}\leq 0~~\forall i\in[k]\}\Rightarrow {{\bf{x}}}=(x_0,x_1,..,x_k)={\bf 0}.
\end{equation*}
\end{definition}
We prove the following result.
\begin{proposition}\label{P2}
Let ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n\times n}$. If ${\bf C}$ has the ${\bf SSM}$-$W$ property, then the following hold:
\begin{itemize}
\item[\rm (i)] $C_0^{-1}$ exists and $C_0^{-1}C_i$ is a strictly semimonotone matrix for all $i\in[k].$
\item[\rm(ii)] $(I,C_0^{-1} C_1,...,C_0^{-1}C_k)$ has the ${\bf SSM}$-$W$ property.
\item [\rm(iii)] $(P^TC_0P,P^TC_1P,...,P^TC_kP)$ has the ${\bf SSM}$-$W$ property for any permutation matrix $P$ of order $n$.
\end{itemize}
\end{proposition}
\begin{proof}
(i): Suppose there exists a vector $x_{0}\in \mathbb{R}^n$ such that $C_0x_{0}=0$. Then we have $$C_0x_{0}=C_1 0+C_2 0+ ...+C_k 0.$$ This gives that $x_{0}=0$ as ${\bf C}$ has the ${\bf SSM}$-$ W$ property. Thus $C_0$ is invertible.
Now we prove the second part of (i). Without loss of generality, it is enough to prove that $C^{-1}_0 C_1$ is a strictly semimonotone matrix. Suppose there exists a vector $y \in\mathbb{R}^n$ such that $y \geq 0$ and $y * (C_0^{-1}C_1) y \leq 0.$ Let $y_0:=(C_0^{-1}C_1)y$, $y_1:=y$ and $y_i:=0$ for all $2\leq i\leq k$. Then we get $$C_0y_{0}=C_1 y_{1}+C_2 y_{2}+...+C_k y_{k},~~y_{j} \geq 0~\text{and}~y_{0} * y_{j}\leq 0~\forall j\in [k].$$
Since ${\bf C}$ has the ${\bf SSM}$-$W$ property, $y_{j}=0~~\forall j\in [k]$. Thus $C_0^{-1}C_1$ is a strictly semimonotone matrix. This completes the proof.
(ii): It follows from the definition of the ${\bf SSM}$-$W$ property.
(iii): Let ${\bf x}=(x_{0},x_{1},...,x_{k})\in \Lambda^{(k+1)}_n$ be such that
$$ (P^TC_0P )x_{0}=\sum_{i=1}^{k} (P^TC_iP) x_{i},~x_{j}\geq 0~\text{and}~x_{0}* x_{j}\leq 0 ~\forall j\in[k].$$ As $P$ is a non-negative matrix with $PP^T=I$, multiplying by $P$ we can rewrite the above as
$$ C_0Px_{0}=\sum_{i=1}^{k} C_iP x_{i},~ P x_{j} \geq 0 ~\text{and}~Px_{0}* Px_{j}\leq 0 ~\forall j\in[k].$$
By the ${\bf SSM}$-$W$ property of ${\bf C}$, $P x_{j}=0$ for all $0\leq j \leq k$, which implies ${\bf{x}} ={\bf 0}$. This completes the proof.
\end{proof}
It can easily be seen that the converses of items (ii) and (iii) of the above Proposition \ref{P2} are valid. But the converse of item (i) need not be true. The following example illustrates this.
\begin{example}\rm
Let ${\bf C}=(C_0,C_1,C_2)\in \Lambda^{(3)}_{2\times 2}$, where $$C_0=\begin{bmatrix}
1&0\\0&1\\
\end{bmatrix},~C_1=\begin{bmatrix}
1&-2\\0&1\\
\end{bmatrix},C_2=\begin{bmatrix}
1&0\\-2&1\\
\end{bmatrix}.$$ It is easy to check that $C_0^{-1}C_1=C_1$ and $C_0^{-1}C_2=C_2$ are $P$ matrices; hence $C_0^{-1}C_1$ and $ C_0^{-1}C_2$ are SSM matrices. Let ${\bf{x}}=(x_{0},x_{1},x_{2})=((0,0)^T,(1,1)^T,(1,1)^T)\in \Lambda^{(3)}_2$. Then we can see that the non-zero $\bf{x}$ satisfies $$C_0x_{0}=C_1x_{1}+C_2x_{2},~x_{1}\geq 0,~x_{2} \geq 0~\text{and}~x_{0}*x_{1}=0=x_{0} * x_{2}.$$ So ${\bf C}$ cannot have the ${\bf SSM}$-$W$ property.
\end{example}
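The claims in this example can be verified numerically. A short sketch (using the all-principal-minors-positive characterization of a $P$ matrix; the helper name is our own):

```python
import itertools
import numpy as np

def is_p_matrix(M, tol=1e-10):
    # A P matrix has all principal minors positive (an equivalent
    # characterization of the sign-reversal definition in the text).
    n = M.shape[0]
    return all(np.linalg.det(M[np.ix_(idx, idx)]) > tol
               for r in range(1, n + 1)
               for idx in itertools.combinations(range(n), r))

C0 = np.eye(2)
C1 = np.array([[1.0, -2.0], [0.0, 1.0]])
C2 = np.array([[1.0, 0.0], [-2.0, 1.0]])

# C0^{-1} C_i = C_i here; both are P (hence SSM) matrices.
assert is_p_matrix(C1) and is_p_matrix(C2)

# Yet the nonzero triple below satisfies the SSM-W system,
# so (C0, C1, C2) fails the SSM-W property.
x0, x1, x2 = np.zeros(2), np.ones(2), np.ones(2)
assert np.allclose(C0 @ x0, C1 @ x1 + C2 @ x2)
assert np.all(x0 * x1 <= 0) and np.all(x0 * x2 <= 0)
print("counterexample verified")
```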
The following result is a generalization of the well-known fact in matrix theory that every $P$ matrix is an SSM matrix.
\begin{theorem}\label{T4}
Let ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n\times n}$. If ${\bf C}$ has the column $W$-property, then ${\bf C}$ has the ${\bf SSM}$-$W$ property.
\end{theorem}
\begin{proof}
Suppose there exists a non-zero vector ${\bf x}=(x_{0},...,x_{k})\in \Lambda^{(k+1)}_n$ such that $$C_0x_{0}=\sum_{i=1}^{k} C_ix_{i},~x_{j}\geq 0,~ x_{0}* x_{j}\leq 0~\forall j\in[k].$$
Consider a vector $y\in\mathbb{R}^n$ whose $j^{\rm{th}}$ component is given by $$y_j=\begin{cases}
-1 & \text{if} ~(x_{0})_j>0\\ 1 & \text{if} ~(x_{0})_j<0\\1 & \text{if} ~(x_{0})_j=0 ~\text{and}~(x_{i})_j\neq 0~\text{for some}~ i\in[k]\\0 & \text{if}~ (x_{0})_j=0 ~\text{and} ~(x_{i})_j= 0 ~\text{for all}~i\in[k]
\end{cases}.$$
As ${\bf{x}}$ is a non-zero vector, ${\bf{y}}$ must be a non-zero vector. Consider the diagonal matrices $D_{0}, D_{1},...,D_{k}$ which are defined by $$(D_{0})_{jj}=\begin{cases}(x_{0})_j & \text{if} ~(x_{0})_j>0\\ -(x_{0})_j & \text{if} ~(x_{0})_j<0\\0 & \text{if} (x_{0})_j=0~\text{and}~(x_{i})_j\neq 0~\text{for some}~ i\in[k] \\1 & \text{if}~ (x_{0})_j=0 ~\text{and} ~(x_{i})_j= 0 ~\text{for all}~i\in[k]
\end{cases}$$ and for all $i\in [k]$,
$$(D_{i})_{jj}=\begin{cases}
0&\text{if } (x_{0})_j>0\\ (x_{i})_j&\text{ else }
\end{cases}.$$
It is easy to verify that $D_{0}, D_{1},...,D_{k}$ are non-negative diagonal matrices and $\text{diag}(D_{0}+ D_{1}+...+D_{k})>0$. And also note that
\begin{equation}\label{22}
x_{0}=-D_{0}y~\text{and}~x_{i}=D_{i}y~\forall i\in [k].
\end{equation}
By substituting equation (\ref{22}) in $C_0x_{0}=\sum_{i=1}^{k} C_ix_{i}$, we get \begin{equation*}
C_0(-D_{0}y)=\sum_{i=1}^{k} C_iD_{i}y \Rightarrow
\big(C_0D_{0}+C_1D_{1}+...+C_kD_{k}\big)y=0.
\end{equation*}
This implies that $\text{det}\big(C_0D_{0}+C_1D_{1}+...+C_kD_{k}\big)=0$. So, ${\bf C}$ does not have the column $W$-property by Theorem \ref{P1}. Thus we get a contradiction. Therefore, ${\bf C}$ has the ${\bf SSM}$-$W$ property.
\end{proof}
The following example shows that the converse of the above theorem does not hold.
\begin{example}\rm
Let ${\bf C}=(C_0,C_1,C_2) \in \Lambda^{(3)}_{2\times 2}$ such that $$C_0=\begin{bmatrix}
1&0\\0&1\\
\end{bmatrix},~C_1=\begin{bmatrix}
1&1\\1&1\\
\end{bmatrix},C_2=\begin{bmatrix}
1&1\\1&1\\
\end{bmatrix}.$$ Suppose $ {\bf{w}} =(x,y,z)\in \Lambda^{(3)}_2$ is such that $$ C_{0}x=C_{1}y+C_{2}z~\text{and}~ y,z\geq 0,~ x*y\leq 0,~ x*z\leq 0.$$
From $C_{0}x=C_{1}y+C_{2}z$, we get
$$\begin{bmatrix}
x_{1}\\ x_{2}
\end{bmatrix}=\begin{bmatrix}
y_1+y_2+z_1+z_2\\ y_1+y_2+z_1+z_2\\
\end{bmatrix}.$$ As $x*y\leq 0,~x*z\leq 0$ and from the above equation, we have
\begin{equation}\label{33}
\begin{aligned}
y_1&(y_1+y_2+z_1+z_2)\leq 0~\text{and}~ y_2(y_1+y_2+z_1+z_2)\leq 0,\\
z_1&(y_1+y_2+z_1+z_2)\leq 0~\text{and}~
z_2(y_1+y_2+z_1+z_2)\leq 0.\\
\end{aligned}
\end{equation}
Since $y,z\geq 0$, from equation (\ref{33}) we get $x=y=z=0.$ Hence ${\bf C}$ has the ${\bf SSM}$-$W$ property. As $\text{det}(C_1)=0$, by the definition of the column $W$-property, ${\bf C}$ does not have the column $W$-property.
\end{example}
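A quick numerical sanity check of this example (a random search, not a proof, since the ${\bf SSM}$-$W$ property is a universally quantified statement; the seed and sample count are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
C0 = np.eye(2)
C1 = np.ones((2, 2))
C2 = np.ones((2, 2))

# det(C1) = 0, so the column W-property fails by definition.
assert abs(np.linalg.det(C1)) < 1e-12

# Random search for a nonzero witness against the SSM-W property;
# by the argument in the example, none should exist.
for _ in range(10_000):
    y = rng.uniform(0.0, 1.0, 2)
    z = rng.uniform(0.0, 1.0, 2)
    x = C1 @ y + C2 @ z          # C0 x = C1 y + C2 z with C0 = I
    if np.all(x * y <= 0) and np.all(x * z <= 0) and y.sum() + z.sum() > 1e-9:
        raise AssertionError("unexpected nonzero SSM-W witness")
print("no witness found")
```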
We now give a characterization for ${\bf SSM}$-$W$ property.
\begin{theorem}\label{CW}
${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n\times n}$ has the ${\bf SSM}$-$W$ property if and only if $(C_0,C_1D_{1}+C_2D_{2}+...+C_kD_{k})\in \Lambda_{n\times n}^{(2)}$ has the ${\bf SSM}$-$W$ property for every set of non-negative diagonal matrices $(D_{1},D_{2},...,D_{k})\in \Lambda^{(k)}_{n\times n}$ with $\text{\rm diag}(D_{1}+D_{2}+...+D_{k})>0$.
\end{theorem}
\begin{proof}
{\it Necessary part}: Let $(D_{1},D_{2},...,D_{k})\in \Lambda^{(k)}_{n\times n}$ be a set of non-negative diagonal matrices with $\text{\rm diag}(D_{1}+D_{2}+...+D_{k})>0$. Suppose there exist vectors $x_{0}\in\mathbb{R}^n$ and $y\in \mathbb{R}^n_+ $ such that $$C_0x_{0}=\big(C_1D_{1}+C_2D_{2}+...+C_kD_{k}\big)y~~\text{and}~~x_{0}*y\leq 0.$$
For each $i\in[k]$, we set $x_{i}:=D_{i}y$. As each $D_{i}$ is a non-negative diagonal matrix, from $x_{0}*y\leq 0$, we get $x_{0}*x_{i}\leq 0~ \forall i\in[k]$. Then we have $$C_0x_{0}=C_1x_{1}+C_2x_{2}+...+C_kx_{k},$$ $$x_{i}\geq 0,~x_{0}*x_{i}\leq 0~\forall i\in[k].$$ As ${\bf C}$ has the ${\bf SSM}$-$W$ property, we must have $x_{0}=x_{1}=...=x_{k}=0.$ This implies $x_{1}+x_{2}+...+x_{k}=(D_{1}+D_{2}+...+D_{k})y=0$. As $\text{\rm diag}(D_{1}+D_{2}+....+D_{k})>0$, we have $y=0$. This completes the necessary part.
\noindent{\it Sufficiency part}: Let ${\bf x}=(x_{0},x_{1},...,x_{k})\in \Lambda^{(k+1)}_n$ be such that \begin{equation} \label{SS}
C_{0}x_{0}=C_1x_{1}+C_2x_{2}+...+C_kx_{k}~~\text{and}~~x_{j}\geq 0,~x_{0}*x_{j}\leq 0~\forall j\in[k].
\end{equation}
We now consider the $n\times k$ matrix $X$ whose $j^{\rm th}$ column is $x_{j}$ for $j\in[k]$; so, $X=[x_{1}~x_{2}~...~x_{k}]$. Let $S:=\{i\in [n]: i^{\rm{th}} ~\text{row sum of $X$ is zero}\}$. From this, we define a vector $y\in\mathbb{R}^n$ and diagonal matrices $D_{1},D_{2},...,D_{k}$ such that
$$y_i=\begin{cases}
1 & i\notin S\\ 0 & i\in S\\
\end{cases}~~\text{and}~~ (D_{j})_{ii}=\begin{cases}
(x_{j})_i & i\notin S\\ 1 & i\in S\\
\end{cases},$$ where $(D_{j})_{ii}$ is the $i^{\rm th}$ diagonal entry of $D_{j}$ for all $j\in[k]$.
It can easily be seen that $D_{j}y=x_{j}$ for all $j\in[k]$ and that each $D_{j}$ is a non-negative diagonal matrix with $\text{\rm diag}(D_{1}+D_{2}+...+D_{k})>0$. Therefore, from equation \ref{SS}, we get
$$C_0x_{0}=\big(C_1D_{1}+C_2D_{2}+...+C_kD_{k}\big)y,$$
$$x_{0}*y\leq 0.$$ From the hypothesis, we get $x_{0}=0=y$ which implies ${\bf{x}}={\bf 0}$. This completes the sufficiency part.
\end{proof} We now give a characterization of the column $W$-property.
\begin{theorem}\label{CD}
A tuple ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n\times n}$ has the column $W$-property if and only if $(C_0,C_1D_{1}+C_2D_{2}+...+C_kD_{k})\in \Lambda_{n\times n}^{(2)}$ has the column $W$-property for every set of non-negative diagonal matrices $D_{1},D_{2},...,D_{k}$ of order $n$ with ${\rm{diag}}(D_{1}+D_{2}+...+D_{k})>0$.
\end{theorem}
\begin{proof}
{\it Necessary part}: It is obvious.
\noindent{\it Sufficiency part}: Let $\{E^{0},E^{1},...,E^{k}\}$ be a set of non-negative diagonal matrices of order $n$ such that ${\rm{diag}}(E^{0}+E^{1}+...+E^{k})>0$. We claim that $\det(C_0E^{0}+C_1E^{1}+...+C_kE^{k}) \neq 0$.
To prove this, we first construct a set of non-negative diagonal matrices $D_{1},D_{2},...,D_{k}$ and $E$ as follows: $$(D_{j})_{ii}=\begin{cases}
E^{j}_{ii}&\text{~if~} \sum_{m=1}^{k}E^{m}_{ii}\neq0\\
1&\text{~if~} \sum_{m=1}^{k}E^{m}_{ii}=0\\
\end{cases}\text{ and } E_{ii}=\begin{cases}
1 &\text{~if~} \sum_{m=1}^{k}E^{m}_{ii}\neq0\\
0&\text{~if~} \sum_{m=1}^{k}E^{m}_{ii}=0\\
\end{cases},$$ where $(D_{j})_{ii}$ is the $i^{\rm th}$ diagonal entry of $D_{j}$ for $j\in[k]$ and $E_{ii}$ is the $i^{\rm th}$ diagonal entry of the matrix $E$. By an easy computation, we have $D_{j}E=E^{j}~\forall j\in[k]$ and ${\rm diag}(D_{1}+D_{2}+...+D_{k})>0$. From ${\rm{diag}}(E^{0}+E^{1}+...+E^{k})>0$, we get ${\rm diag}(E^{0}+E)>0$. As $D_{j}E=E^{j}~\forall j\in[k]$ and $(C_0,C_1D_{1}+C_2D_{2}+...+C_kD_{k})$ has the column $W$-property, by Theorem \ref{P1}, we have \begin{equation*}
\begin{aligned}
\det(C_0E^{0}+C_1E^{1}+...+C_kE^{k})&=\det(C_0E^{0}+C_1D_{1}E+...+C_kD_{k}E)\\
&=\det(C_0E^{0}+(C_1D_{1}+...+C_kD_{k})E) \neq 0.
\end{aligned}
\end{equation*}
Hence ${\bf C}$ has the column $W$-property. This completes the proof.
\end{proof}
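To see the reduction used in Theorems \ref{CW} and \ref{CD} at a glance, here is a small illustration of our own (the entries $a_{ij},b_{ij}$ are generic placeholders, not taken from the text): with $k=n=2$ and the $0$--$1$ weights $D_{1}={\rm diag}(1,0)$ and $D_{2}={\rm diag}(0,1)$, for which ${\rm diag}(D_{1}+D_{2})>0$, the matrix $C_1D_{1}+C_2D_{2}$ simply mixes columns of $C_1$ and $C_2$:

```latex
% Column-selection effect of non-negative diagonal weights (our example):
% D_1 = diag(1,0) keeps the first column of C_1, while D_2 = diag(0,1)
% keeps the second column of C_2.
\[
C_1=\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix},\qquad
C_2=\begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22}\end{bmatrix}
\qquad\Longrightarrow\qquad
C_1D_{1}+C_2D_{2}=\begin{bmatrix} a_{11} & b_{12}\\ a_{21} & b_{22}\end{bmatrix}.
\]
```

Both theorems thus say that a property of the whole tuple ${\bf C}$ can be tested on all such column mixtures paired with $C_0$.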
A well-known result for the standard LCP is that, within the class of $Z$ matrices, the strictly semimonotone matrices and the $P$ matrices coincide (see Theorem 3.11.10 in \cite{LCP}). Analogous to this result, we prove the following theorem.
\begin{theorem}\label{cssm}
Let ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda_{n\times n}^{(k+1)}$ be such that $C_0^{-1}C_i$ is a $Z$ matrix for all $i\in[k]$. Then the following statements are equivalent.\begin{itemize}
\item [\rm (i)] ${\bf C}$ has the column $W$-property.
\item[\rm(ii)] ${\bf C}$ has the ${\bf SSM}$-$ W$ property.
\end{itemize}
\end{theorem}
\begin{proof}
(i)$\implies$(ii): It follows from Theorem \ref{T4}.
(ii)$\implies$(i): Let $\{D_{1}, D_{2},...,D_{k}\}$ be a set of non-negative diagonal matrices of order $n$ such that $\text{\rm diag}(D_{1}+D_{2}+...+D_{k})>0$. In view of Theorem \ref{CD}, it is enough to prove that $(C_0,C_1D_{1}+C_2D_{2}+...+C_kD_{k})$ has the column $W$-property.
As ${\bf C}$ has the ${\bf SSM}$-$W$ property, Theorem \ref{CW} implies that $(C_0,C_1D_{1}+...+C_kD_{k})$ has the ${\bf SSM}$-$W$ property. So, by Proposition \ref{P2}, $\big(I,C_0^{-1}\big(C_1D_{1}+...+C_kD_{k}\big)\big)$ has the ${\bf SSM}$-$W$ property and $C_0^{-1}\big(C_1D_{1}+C_2D_{2}+...+C_kD_{k}\big)$ is a strictly semimonotone matrix. As each $C_0^{-1}C_i$ is a $Z$ matrix, $C_0^{-1}\big(C_1D_{1}+C_2D_{2}+...+C_kD_{k}\big)$ is also a $Z$ matrix. Hence $C_0^{-1}\big(C_1D_{1}+C_2D_{2}+...+C_kD_{k}\big)$ is a $P$ matrix. So, by Theorem \ref{C1}, $(C_0,C_1D_{1}+C_2D_{2}+...+C_kD_{k})$ has the column $W$-property. This proves the claim.
\end{proof}
\begin{corollary}
Let ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda_{n\times n}^{(k+1)}$ be such that $C_0^{-1}C_i$ is a $Z$ matrix for all $i\in[k]$. Then the following statements are equivalent.
\begin{itemize}
\item [\rm (i)] ${\bf C}$ has the ${\bf SSM}$-$ W$ property.
\item [\rm(ii)] For all $q\in\mathbb{R}^n$ and ${\bf d}\in\Lambda^{(k-1)}_{n,++}$, {\rm EHLCP}$({\bf C},{\bf d},q)$ has a unique solution.
\end{itemize}
\end{corollary}
\begin{proof}
(i) $\implies$(ii): It follows from Theorem \ref{cssm} and Theorem \ref{P1}.
(ii)$\implies$(i): It follows from Theorem \ref{P1} and Theorem \ref{T4}.
\end{proof}
In the standard LCP \cite{deg}, strict semimonotonicity of the matrix guarantees the existence of a solution of the LCP. We now prove that the same result holds for the EHLCP.
\begin{theorem}\label{degg}
If ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n\times n}$ has the ${\bf SSM}$-$W$ property, then $\rm{{SOL}}({\bf C},{\bf d},q)\neq \emptyset$ for all $q\in\mathbb{R}^n$ and ${\bf d}\in \Lambda^{(k-1)}_{n,++}$.
\end{theorem}
\begin{proof}
As ${\bf C}$ has the ${\bf SSM}$-$W$ property, ${\bf C}$ has the $R_0$-$W$ property. From Theorem \ref{P2}, it is enough to prove that ${\rm deg}({\bf C},{\bf 0})\neq 0$. To prove this,
we consider the homotopy $\Phi: \Lambda_n^{(k+1)} \times [0,1] \to \Lambda_n^{(k+1)}$ defined by $$\Phi({\bf x},t)=t\begin{bmatrix}
C_0x_{0} \\ x_{1}\\ x_{2}\\ x_{3}\\ \vdots\\ x_{k}
\end{bmatrix}+(1-t)\begin{bmatrix}
C_0 x_{0} -\sum_{i=1}^{k} C_ix_{i}\\ x_{0}\wedge x_{1}\\ (d_{1}-x_{1})\wedge x_{2}\\ (d_{2}-x_{2})\wedge x_{3}\\ \vdots\\ (d_{k-1}-x_{k-1})\wedge x_{k}
\end{bmatrix}.$$
Let $F({\bf x}):=\Phi({\bf x},0)$ and $G({\bf x}):=\Phi({\bf x},1)$.
We first prove that the zero set $X=\{{\bf{x}}:\Phi({\bf{x}},t)={\bf 0}~\text{for some}~t\in[0,1]\}$ of homotopy $\Phi$ contains only zero. We consider the following cases.
{\it Case 1}: Suppose $t=0$ or $t=1$. If $t=0$, then $\Phi({\bf x},0)={\bf 0}\implies F({\bf{x}})={\bf 0}$. As ${\bf C}$ has the ${\bf SSM}$-$W$ property, by Lemma \ref{l1}, we have
$F({\bf x})={\bf 0}\Rightarrow {\bf x}={\bf 0}$. If $t=1$, then $\Phi({\bf x},1)={\bf 0}\implies G({\bf{x}})={\bf 0}$. Again, since ${\bf C}$ has the ${\bf SSM}$-$W$ property, $C^{-1}_0$ exists, which implies that $G$ is a one-one map. So, $G({\bf x})={\bf 0}\Rightarrow {\bf x}={\bf 0}.$
{\it Case 2}: Suppose $t\in(0,1)$. Then $\Phi({\bf{x}},t)={\bf 0}$ gives
\begin{equation}\label{pp}
\begin{bmatrix}
C_0 x_{0} -\sum_{i=1}^{k} C_ix_{i}\\ x_{0}\wedge x_{1}\\ (d_{1}-x_{1})\wedge x_{2}\\ (d_{2}-x_{2})\wedge x_{3}\\ \vdots\\ (d_{k-1}-x_{k-1})\wedge x_{k}
\end{bmatrix}=-\alpha\begin{bmatrix}
C_0x_{0} \\ x_{1}\\ x_{2}\\ x_{3}\\ \vdots\\ x_{k}
\end{bmatrix},~~\text{where}~~\alpha=\dfrac{t}{1-t}>0.
\end{equation}
From the second row of the above equation, we have $$x_{0}\wedge x_{1}=-\alpha {x_{1}}\implies \min\{{x_{0}}+\alpha {x_{1}} ,(1+\alpha){x_{1}}\}=0.$$ By Proposition \ref{star}, we get $x_{1} \geq 0$ and $({x_{0}}+\alpha {x_{1}}) * (1+\alpha){x_{1}} =0$, which implies that ${x_{0}} * {x_{1}} \leq 0.$ Set $\Delta:=\{i\in [n]: ({x_{1}})_{i} >0\}$. So, we have \begin{equation}\label{EX}
(x_{0})_{i}=\begin{cases}
~\leq 0~&{\text{if}}~ i\in \Delta\\
~\geq 0~& {\text{if}} ~i \notin \Delta
\end{cases}~~~\text{and}~~(x_{1})_{i}=\begin{cases}
>0~~\text{if}~i\in \Delta\\
=0~~\text{if}~i\notin \Delta
\end{cases}.
\end{equation}
From the third row of equation \ref{pp}, we have $(d_{1}-x_{1})\wedge x_{2}=-\alpha {x_{2}}$, which is equivalent to $$\min\{d_{1}-x_{1}+\alpha {x_{2}}, (1+\alpha){x_{2}}\}=0.$$ This gives that $x_{2}\geq 0$ and $(d_{1}-x_{1}+\alpha {x_{2}})*(1+\alpha){x_{2}}=0$. As $d_{1}>0$ and from the last term in equation \ref{EX}, we have $$(x_{2})_{i}=\begin{cases}
\geq 0~\text{if}~i\in \Delta\\
=0 ~\text{if}~i\notin \Delta
\end{cases}. $$
This yields $x_{0} * x_{2}\leq 0$. Continuing the same argument for the remaining rows, we get $$x_{j}\geq 0~~\text{and}~x_{0}*x_{j}\leq 0~\forall j\in [k].$$
From the first row of equation \ref{pp}, the vector ${\bf{x}}=(x_{0},x_{1},...,x_{k})$ satisfies $$ C_0(1+\alpha) x_{0} =\sum_{i=1}^{k} C_i x_{i}~~\text{and}~~x_{j}\geq 0,~{x_{0}}*{x_{j}}\leq 0, ~j\in[k].$$ So ${{\bf{x}}}={\bf 0}$, as ${\bf C}$ has the ${\bf SSM}$-$W$ property.
From both cases, we get that $X$ contains only zero. By the homotopy invariance property of the degree (D2), we have $\text{deg}\big(\Phi({\bf{x}},0),\Omega,{\bf 0}\big)=\text{deg}\big(\Phi({\bf{x}},1),\Omega,{\bf 0}\big)$ for any bounded open set $\Omega$ containing ${\bf 0}$. As $G$ is a continuous one-one function, by Proposition \ref{ND}, we have $$\text{deg}\big({\bf C},{\bf 0}\big)=\text{deg}\big(F,\Omega,{\bf 0}\big)=\text{deg}\big(G,\Omega,{\bf 0}\big)\neq 0.$$ This completes the proof.
\end{proof}
We now recall that a matrix $A \in\mathbb{R}^{n\times n}$ is said to be an $M$ matrix if it is a $Z$ matrix and $A^{-1}(\mathbb{R}^n_+)\subseteq \mathbb{R}^n_+.$ We prove a uniqueness result for the EHLCP when $q\geq 0$ and ${\bf d}\in\Lambda^{(k-1)}_{n,++}.$
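For a concrete instance (ours, not from the original text), consider:

```latex
% A 2x2 M matrix: non-positive off-diagonal entries (a Z matrix) and a
% non-negative inverse, so that C_0^{-1}(R^2_+) is contained in R^2_+.
\[
C_0=\begin{bmatrix} 2 & -1\\ -1 & 2 \end{bmatrix},\qquad
C_0^{-1}=\frac{1}{3}\begin{bmatrix} 2 & 1\\ 1 & 2 \end{bmatrix}.
\]
```

Here $C_0$ is a $Z$ matrix and $C_0^{-1}\geq 0$ entrywise, so $C_0$ is an $M$ matrix.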
\begin{theorem}\label{smq}
Let ${\bf C}=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n\times n}$ have the ${\bf SSM}$-$W$ property. If $C_0$ is an $M$ matrix, then for every $q \in \mathbb{R}^n_+$ and for every ${\bf d} \in \Lambda^{(k-1)}_{n,++}$, $\text{\rm EHLCP}({\bf C},{\bf d},q)$ has a unique solution.
\end{theorem}
\begin{proof}
Let $q \in \mathbb{R}^n_+$ and ${\bf d}=(d_{1}, d_{2},...,d_{k-1}) \in \Lambda^{(k-1)}_{n,++}$. We first show that $(C_0^{-1}q,0,...,0)\in\text{SOL}({\bf C},{\bf d},q).$ As $C_0$ is an $M$ matrix and $q \in \mathbb{R}^n_+$, we have $C_0^{-1}q\geq 0$. If we set ${\bf{y}}=(y_{0},y_{1},...,y_{k}):=(C_0^{-1}q,0,...,0)\in \Lambda^{(k+1)}_{n}$, then it is easily verified that $(y_{0},y_{1},...,y_{k})$ satisfies $$C_0 y_{0}=q+\sum_{i=1}^k C_i y_{i},~ y_{0}\wedge y_{1}=0~\text{and}~(d_{j}-y_{j})\wedge y_{j+1}=0~~\forall j\in[k-1].$$ Hence $(C_0^{-1}q,0,...,0)\in \text{SOL}({\bf C},{\bf d},q).$
Suppose ${\bf x}=(x_{0},x_{1},...,x_{k})\in \Lambda^{(k+1)}_n$ is another solution to EHLCP$({\bf C},{\bf d},q)$. Then,
\begin{equation}\label{ssm}
\begin{aligned}
C_0 x_{0}=q+\sum_{i=1}^{k} C_ix_{i},~
x_{0}\wedge x_{1}=0, ~ (d_{j}-x_{j})\wedge x_{j+1}=0~ \forall j\in[k-1].
\end{aligned}
\end{equation}
From Lemma \ref{l1}, we have
\begin{equation}\label{unique}
C_0x_{0}=q+\sum_{i=1}^{k} C_ix_{i}~\text{and}~x_{0}\wedge x_{j}=0~\forall~j\in [k].
\end{equation}
Let ${\bf{z}}:={\bf{x}}-{\bf{y}}$; then ${\bf{z}}=(x_{0}-C_0^{-1}q, x_{1},x_{2},...,x_{k})$. By an easy computation from equation \ref{unique}, we get
$$C_0 (x_{0}-C_0^{-1}q)=\sum_{i=1}^k C_i x_{i}$$ and
$$ x_{j}\geq 0,~~(x_{0}-C_0^{-1}q)*x_{j}=x_{0}*x_{j}-C_0^{-1}q*x_{j}=-C_0^{-1}q*x_{j}\leq 0~\forall j\in [k].$$
Since ${\bf C}$ has the ${\bf SSM}$-$ W$ property, ${\bf{z}}={\bf 0}$ which implies that $(x_{0},x_{1},...,x_{k})=(C_0^{-1}q,0,...,0).$ This completes the proof.
\end{proof}
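As a quick sanity check of the explicit solution in Theorem \ref{smq} (a worked instance of ours; the matrix $C_0$ and vector $q$ below are our choices, not from the text), take $n=2$ and $q=(1,2)^T\in\mathbb{R}^2_+$:

```latex
% The degenerate tuple (C_0^{-1} q, 0, ..., 0) solves the EHLCP:
\[
C_0=\begin{bmatrix} 2 & -1\\ -1 & 2 \end{bmatrix},\qquad
y_{0}=C_0^{-1}q=\frac{1}{3}\begin{bmatrix} 2 & 1\\ 1 & 2 \end{bmatrix}
\begin{bmatrix} 1\\ 2 \end{bmatrix}
=\begin{bmatrix} 4/3\\ 5/3 \end{bmatrix}\geq 0.
\]
% With y_1 = ... = y_k = 0: C_0 y_0 = q = q + sum_i C_i y_i,
% y_0 /\ y_1 = min(y_0, 0) = 0, and (d_j - y_j) /\ y_{j+1} = d_j /\ 0 = 0
% since each d_j > 0; this holds regardless of C_1, ..., C_k.
```

Note that this degenerate solution does not involve $C_1,...,C_k$ at all; the ${\bf SSM}$-$W$ property is what rules out any other solution.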
\section{Connected solution set and column $W_0$-property}
In this section, we give a necessary and sufficient condition for the connected solution set of the EHLCP.
\begin{definition}\rm
Let ${\bf C}=(C_0,C_1,...,C_k)\in\Lambda^{(k+1)}_{n\times n}$. We say that ${\bf C}$ is connected if $\text{SOL}({\bf C},{\bf d},q)$ is connected for all $q\in\mathbb{R}^n$ and all ${\bf d}\in\Lambda^{(k-1)}_{n,++}$.
\end{definition}
We now recall some definitions and results to proceed further.
\begin{definition}\rm\cite{semi}
A subset $S$ of $\mathbb{R}^n$ is said to be semi-algebraic if it can be represented as $$S=\displaystyle\bigcup_{u=1}^{s}\bigcap_{v=1}^{r_u}\{{\bf{x}}\in\mathbb{R}^n : f_{u,v}({\bf{x}})*_{uv} 0\},$$
where, for all $u\in[s]$ and all $v\in[r_u]$, $*_{uv}\in\{>, =\}$ and $f_{u,v}$ is a real polynomial.
\end{definition}
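For instance (our example, not part of \cite{semi}), the closed half-line $[0,\infty)$ is semi-algebraic even though only the relations $>$ and $=$ are allowed:

```latex
% [0, infinity) as a union of two basic sets, taking s = 2,
% r_1 = r_2 = 1 and f_{1,1}(x) = f_{2,1}(x) = x:
\[
[0,\infty)=\{x\in\mathbb{R} : x>0\}\cup\{x\in\mathbb{R} : x=0\}.
\]
```

In the same way, the weak inequalities and complementarity conditions defining $\text{SOL}({\bf C},{\bf d},q)$ can be written in this form.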
\begin{theorem}[\cite{semi}]\label{sctd}
Let $S$ be a semi-algebraic set. Then $S$ is connected if and only if it is path-connected.
\end{theorem}
\begin{lemma}\label{CC}
The set $\text{\rm SOL}({\bf C},{\bf d},q)$ is semi-algebraic.
\end{lemma}
\begin{proof}
This is clear from the definition of $\text{SOL}({\bf C},{\bf d},q)$: its defining conditions are polynomial equations and inequalities.
\end{proof}
The following result gives a necessary condition for the connectedness of the solution set when $C_0$ is an $M$ matrix.
\begin{theorem}
Let $C_0\in\mathbb{R}^{n\times n}$ be an $M$ matrix. If ${\bf C}=(C_0,C_1,...,C_k)\in\Lambda^{(k+1)}_{n\times n}$ is connected, then $\text{\rm SOL}({\bf C},{\bf d},q)=\{(C_0^{-1}q,0,...,0)\}$ for all $q\in\mathbb{R}^n_{++}$ and all ${\bf d}\in\Lambda^{(k-1)}_{n,++}$.
\end{theorem}
\begin{proof}
Let $q\in\mathbb{R}^n_{++}$ and ${\bf d}=(d_{1}, d_{2},...,d_{k-1})\in\Lambda^{(k-1)}_{n,++}$. It can be seen from the proof of Theorem \ref{smq} that ${\bf{x}}=(C_0^{-1}q,0,...,0)\in\text{SOL}({\bf C},{\bf d},q)$. We now show that ${\bf{x}}$ is the only solution to EHLCP(${\bf C},{\bf d},q$).
Assume the contrary: suppose ${\bf{y}}$ is another solution to EHLCP(${\bf C},{\bf d},q$). As $\text{SOL}({\bf C},{\bf d},q)$ is connected, by Lemma \ref{CC} and Theorem \ref{sctd}, it is path-connected. So, there exists a path $\gamma=(\gamma^{0},\gamma^{1},...,\gamma^{k}):[0,1]\rightarrow \text{SOL}({\bf C},{\bf d},q)$ such that $$\gamma(0)={\bf{x}}, ~\gamma(1)={\bf{y}}~ \text{and}~ \gamma(t)\neq {\bf{x}} ~\forall t>0.$$
Let $\{t_m \} \subseteq(0,1)$ be a sequence such that ${t_m}\to 0$ as $m\rightarrow \infty$. Then, by the continuity of $\gamma$, $\gamma(t_m)\rightarrow \gamma(0)={\bf{x}}$ as $m\to \infty$. Since $\big(\gamma^{0}(t_m),\gamma^{1}(t_m),...,\gamma^{k}(t_m)\big)\in\text{SOL}({\bf C},{\bf d},q),$ \begin{equation*}\begin{aligned}
C_0\gamma^{0}(t_m) &=q+\sum_{i=1}^{k} C_i\gamma^{i}(t_m),\\
\gamma^{0}(t_m)\wedge\gamma^{1}(t_m)=0& ~\text{and}~\big(d_{j}-\gamma^{j}(t_m)\big)\wedge
\gamma^{({j+1})}(t_m) =0~\forall j\in[k-1].\\ \end{aligned}
\end{equation*}
Now we claim that there exists a subsequence $\{t_{m_l}\}$ of $\{t_{m}\}$ such that $$\big(\gamma^{j}(t_{m_l})\big)_i\neq 0, \text{ for some }j\in[k] \text{ and some } i\in[n].$$ Suppose the claim is not true. Then, given any subsequence $\{t_{m_l}\}$ of $\{t_m\}$, there exists $m_0\in \mathbb{N}$ such that for all $m_l \geq m_0$, we have $$\big(\gamma^{j}(t_{m_l})\big)_i =0~~\forall i\in [n]~~\forall j\in [k].$$ So, $\gamma^{j}(t_m)$ is an eventually zero sequence for all $j\in[k]$. This implies that there exists a natural number $m_0$ such that $$\gamma^{1}(t_m)=\gamma^{2}(t_m)=...=\gamma^{k}(t_m)=0~~\forall m\geq m_0.$$ As $\big(\gamma^{0}(t_m),\gamma^{1}(t_m),...,\gamma^{k}(t_m)\big)\in\text{SOL}({\bf C},{\bf d},q)$, we get $\gamma^{0}(t_m)=C^{-1}_0 q~~\forall m \geq m_0$. This gives $\gamma(t_m)={\bf{x}}$ for all $m\geq m_0$, which contradicts the fact that $\gamma(t_m)\neq{\bf{x}}$ for all $m$. Therefore, our claim is true.
Without loss of generality, we may assume that the sequence $\{t_m\}$ itself satisfies $$\big(\gamma^{j}(t_{m})\big)_i\neq 0, \text{ for some }j\in[k] \text{ and some } i\in[n].$$
We now consider the following cases for the possible values of $j$.
{\it Case 1}: If $j=1$, then $(\gamma^{0}(t_{m}))_i(\gamma^{1}(t_{m}))_i=0$, which leads to $(\gamma^{0}(t_{m}))_i=0.$ This implies that $$0=\displaystyle\lim_{m \to \infty} \big(\gamma^{0}(t_{m})\big)_{i}=(C_0^{-1}q)_i.$$ But $C_0^{-1}q>0$, as $C_0$ is an $M$ matrix and $q\in\mathbb{R}^n_{++}$. This is not possible. So, $j\neq 1$.
{\it Case 2}: If $2\leq j\leq k$, then we have $(d_{j-1}-\gamma^{j-1}(t_{m}))_i(\gamma^{j}(t_{m}))_i=0$, which gives $(d_{j-1}-\gamma^{j-1}(t_{m}))_i=0.$ Taking the limit $m\rightarrow\infty$,
$$0=\lim_{m\rightarrow\infty}(d_{j-1}-\gamma^{j-1}(t_{m}))_i=(d_{j-1})_i-(\gamma^{j-1}(0))_i= (d_{j-1})_i>0.$$ This is not possible.
In both cases we reach a contradiction, so no such $j$ exists. Hence ${\bf{x}}=(C_0^{-1}q,0,...,0)$ is the only solution to $\text{EHLCP}({\bf C},{\bf d},q)$.
\end{proof}
The following result gives a sufficient condition for the connectedness of the solution set of the EHLCP.
\begin{theorem}
Let ${\bf C}:=(C_0,C_1,...,C_k)\in \Lambda^{(k+1)}_{n \times n}$ have the column $W_0$-property. If $\text{\rm SOL}({\bf C},{\bf d},q)$ has a bounded connected component, then $\text{\rm SOL}({\bf C},{\bf d},q)$ is connected.
\end{theorem}
\begin{proof} If SOL$({\bf C},{\bf d},q)= \emptyset$, then we have nothing to prove.
Let SOL$({\bf C},{\bf d},q)\neq \emptyset$ and let $A$ be a bounded connected component of SOL$({\bf C},{\bf d},q)$. If SOL$({\bf C},{\bf d},q)=A$, then we are done. Suppose SOL$({\bf C},{\bf d},q)\neq A.$ Then there exists ${\bf{y}}=(y_{0},y_{1},...,y_{k})\in \text{SOL}({\bf C},{\bf d},q)\setminus A$.
As $A$ is a bounded connected component of SOL$({\bf C},{\bf d},q)$, we can find an open bounded set $\Omega \subseteq \Lambda^{(k+1)}_{n}$ which contains $A$ and does not intersect any other component of $\text{SOL}({\bf C},{\bf d},q)$. Therefore ${\bf{y}} \notin\Omega$ and $\partial\Omega\cap\text{SOL}({\bf C},{\bf d},q)=\emptyset.$ Since ${\bf C}$ has the column $W_0$-property, there exists ${\bf N}:=(N_0,N_1,...,N_k)\in \Lambda^{(k+1)}_{n\times n}$ such that ${\bf C+\epsilon N}:=(C_0+\epsilon N_0,C_1+\epsilon N_1,...,C_k+\epsilon N_k)$ has the column $W$-property for every $\epsilon>0$.
Let ${\bf z}=(z_{0},z_{1},...,z_{k})\in A$ and $\epsilon >0$. We define functions $H_1$, $H_2$ and $H_3$ as follows:
$$H_1 ({\bf{x}})=\begin{bmatrix}
C_0 x_{0} -\sum_{i=1}^{k} C_ix_{i}-q\\ x_{0}\wedge x_{1}\\ (d_{1}-x_{1})\wedge x_{2}\\ \vdots\\ (d_{k-1}-x_{k-1})\wedge x_{k}\\
\end{bmatrix},$$ $$H_2 ({\bf{x}})=\begin{bmatrix}
(C_0+\epsilon N_0) x_{0} -\sum_{i=1}^{k} (C_i+\epsilon N_i)x_{i}+\big(\sum_{i=1}^{k}\epsilon N_iy_{i}-\epsilon N_0y_{0}-q\big)\\ x_{0}\wedge x_{1}\\ (d_{1}-x_{1})\wedge x_{2}\\ \vdots\\ (d_{k-1}-x_{k-1})\wedge x_{k}\\
\end{bmatrix},$$ $$H_3 ({\bf{x}})=\begin{bmatrix}
(C_0+\epsilon N_0) x_{0} -\sum_{i=1}^{k} (C_i+\epsilon N_i)x_{i}+\big(\sum_{i=1}^{k}\epsilon N_iz_{i}-\epsilon N_0z_{0}-q\big)\\ x_{0}\wedge x_{1}\\ (d_{1}-x_{1})\wedge x_{2}\\ \vdots\\ (d_{k-1}-x_{k-1})\wedge x_{k}\\
\end{bmatrix}.$$ By putting ${\bf{x}}={\bf{y}}$ in $H_2({\bf{x}})$, and ${\bf{x}}={\bf z}$ in $H_1({\bf{x}})$ and $H_3({\bf{x}})$, we get $$H_1({\bf z})=H_2({\bf{y}})=H_3({\bf z})=0.$$ For $\epsilon$ sufficiently close to zero, deg$(H_1,\Omega,{\bf 0})$= deg$(H_2,\Omega,{\bf 0})$= deg$(H_3,\Omega,{\bf 0})$ by the nearness property of the degree (D3). As ${\bf z} \in \Omega$ is a solution to $H_3 ({\bf{x}})={\bf 0}$ and ${\bf C+\epsilon N}$ has the column $W$-property, we get deg$(H_3, \Omega, {\bf 0}) \neq 0$ by Theorems \ref{T4} and \ref{degg}. Since deg$(H_2,\Omega,{\bf 0})$= deg$(H_3,\Omega,{\bf 0})$, we have deg$(H_2,\Omega,{\bf 0})\neq 0$. This implies that if we set ${q_2}:=q+\epsilon N_0 y_{0}-\sum_{i=1}^{k}\epsilon N_i y_{i}$, then EHLCP$({\bf C+\epsilon N},{\bf d},q_2)$ must have a solution in $\Omega$. As ${\bf C+\epsilon N}$ has the column $W$-property, by Theorem \ref{P1}, EHLCP$({\bf C+\epsilon N},{\bf d},q_2)$ has a unique solution, which must be equal to ${\bf{y}}$. So, ${\bf{y}}\in \Omega$, a contradiction. Hence SOL$({\bf C},{\bf d},q)=A$. Thus SOL$({\bf C},{\bf d},q)$ is connected.
\end{proof}
\section{Conclusion}
In this paper, we introduced the $R_0$-$W$ and ${\bf SSM}$-$W$ properties and studied existence and uniqueness results for the EHLCP when the underlying set of matrices has these properties. Finally, we gave a necessary condition and a sufficient condition for the connectedness of the solution set of the EHLCP.
\section*{Declaration of Competing Interest} The authors have no competing interests.
\section*{Acknowledgements} The first author is a CSIR-SRF fellow and thanks the Council of Scientific \& Industrial Research (CSIR) for its financial support.
\end{document}
\begin{document}
\title{Poset Edge-Labellings and Left Modularity}
\author{Peter McNamara}
\address{Laboratoire de Combinatoire et d'Informatique Math\'ematique\\
Universit\'e du Qu\'ebec \`a Montreal\\
Case Postale 8888, succursale Centre-ville\\
Montr\'eal (Qu\'ebec) H3C 3P8\\
Canada}
\email{mcnamara@lacim.uqam.ca}
\author{Hugh Thomas}
\address{Fields Institute, 222 College St., Toronto, ON, M5T 3J1, Canada}
\email{hugh@math.unb.ca}
\begin{abstract}
It is known that a graded lattice of rank $n$ is supersolvable if and only if
it has an EL-labelling where the labels along any maximal chain are exactly
the numbers $1,2,\ldots,n$ without repetition. These labellings are called
$S_n$ EL-labellings, and having such a labelling is also equivalent to
possessing a maximal chain of left modular elements. In the case of an
ungraded lattice, there is a natural extension of $S_n$ EL-labellings,
called interpolating labellings. We show that admitting an
interpolating labelling is again equivalent to possessing a maximal chain
of left modular elements.
Furthermore, we work in the setting of an arbitrary bounded poset as all the
above results generalize to this case.
We conclude by applying our results to show that
the lattice of non-straddling partitions,
which is not graded in general, has a maximal chain of left modular elements.
\end{abstract}
\maketitle
\section{Introduction}
An \emph{edge-labelling} of a poset $P$ is a map
from the edges of the Hasse diagram of $P$ to $\mathbb{Z}$.
Our primary goal is to express certain classical properties of
$P$ in terms of edge-labellings admitted by $P$.
The idea of studying edge-labellings of posets goes back to \cite{St}.
An important milestone was \cite{Bj}, where
A. Bj\"orner defined EL-labellings,
and showed that if a poset admits an EL-labelling, then it is
shellable and hence Cohen-Macaulay.
We will be interested in a subclass of
EL-labellings, known as $S_n$ EL-labellings.
In \cite{St1}, R. Stanley
introduced supersolvable lattices and showed that they admit
$S_n$ EL-labellings. Examples of supersolvable lattices include
distributive lattices, the lattice of partitions of $[n]$, the
lattice of non-crossing partitions of $[n]$ and the lattice of
subgroups of a supersolvable group (hence the terminology).
It was shown in \cite{Mc} that a finite graded lattice of rank $n$ is
supersolvable if and only if it admits an $S_n$ EL-labelling. In many
ways, this characterization of lattice supersolvability in terms of
edge-labellings serves as the starting point for our investigations.
For basic definitions concerning partially
ordered sets, see \cite{ec1}.
We will say that a poset $P$ is \emph{bounded} if it contains a unique
minimal element and a unique maximal element, denoted $\hat{0}$ and $\hat{1}$
respectively.
All the posets we will consider will be finite and bounded.
A chain of a poset $P$ is said to be \emph{maximal} if it is maximal under
inclusion.
We say that $P$ is \emph{graded} if all the maximal chains of $P$ have the same
length, and we call this length the \emph{rank} of $P$.
We will write $x \lessdot y$ if $y$ covers $x$ in $P$ and
$x \lessdoteq y$ if $y$ either covers or equals $x$.
The edge-labelling $\gamma$ of $P$ is
said to be an \emph{EL-labelling} if for any $y<z$ in $P$,
\begin{enumerate}
\item[(i)] there is a unique unrefinable chain
$y=w_0 \lessdot w_1 \lessdot \cdots \lessdot w_r = z$ such that
$\gamma(w_0,w_1) \leq \gamma(w_1,w_2) \leq \cdots \leq \gamma(w_{r-1},w_r)$, and
\item[(ii)] the sequence of labels of this chain (referred to as the
\emph{increasing chain} from $y$ to $z$), when read from bottom
to top, lexicographically precedes the labels of any other unrefinable chain
from $y$ to $z$.
\end{enumerate}
This concept originates in \cite{Bj}; for the case where
$P$ is not graded, see \cite{BW1,BW2}.
If $P$ is graded of rank $n$ with an EL-labelling $\gamma$, then $\gamma$ is
said to be an \emph{$S_n$ EL-labelling} if the labels along any maximal
chain of $P$ are all distinct and are elements of $[n]$. In other words,
for every maximal chain
$\hat{0}=w_0 \lessdot w_1 \lessdot \cdots \lessdot w_n = \hat{1}$ of $P$, the map sending $i$ to
$\gamma(w_{i-1},w_i)$ is a permutation of $[n]$.
Note that the second condition in the definition of an EL-labelling is
redundant in this case.
\begin{example}\label{eg:distributive}
Any finite distributive lattice has an $S_n$ EL-labelling.
Let $L$ be a finite distributive lattice of rank $n$. By
the Fundamental Theorem of Finite Distributive Lattices
\cite[p. 59, Thm. 3]{Bi}, this is
equivalent to saying that
$L = J(Q)$, the lattice of order ideals of some $n$-element poset $Q$.
Let $\omega : Q \to [n]$ be a linear extension of $Q$, i.e., any
bijection labelling the vertices of $Q$ that is
order-preserving (if $a<b$ in $Q$ then $\omega(a)<\omega(b)$).
This labelling of the vertices of $Q$ defines a labelling of the edges of
$J(Q)$ as follows. If $y$ covers $x$ in $J(Q)$, then the order ideal
corresponding to $y$ is obtained from the order ideal corresponding to
$x$ by adding a single element, labelled by $i$, say. Then we set
$\gamma(x,y) = i$. This gives us an $S_n$ EL-labelling for $L=J(Q)$.
Figure \ref{fig:distexample} shows a labelled poset and its lattice
of order ideals with the appropriate edge-labelling.
\begin{figure}\label{fig:distexample}
\end{figure}
\end{example}
A finite lattice $L$ is said to be
\emph{supersolvable} if it contains a maximal chain,
called an
\emph{M-chain} of $L$, which together with any other chain
in $L$ generates a distributive sublattice.
We can label each such distributive sublattice by the method described in
Example \ref{eg:distributive} in such a way that the M-chain is the
unique increasing maximal chain.
As shown in \cite{St1}, this will assign
a unique label to each edge of $L$ and the resulting global labelling
of $L$ is an $S_n$ EL-labelling.
There is also a characterization of lattice supersolvability in
terms of
left modularity. Given an element $x$ of a finite lattice $L$, and
a pair of elements $y \leq z$, it is always true that
\begin{equation}\label{eq:modularineq}
(x \vee y) \wedge z \geq (x \wedge z) \vee y.
\end{equation}
The element $x$ is said to be \emph{left modular} if, for all $y \leq z$,
equality holds in \eqref{eq:modularineq}.
Following A. Blass and B. Sagan \cite{BS}, we will say that a lattice itself is
\emph{left modular} if it contains a left modular maximal chain, that is, a
maximal chain each of whose elements is left modular. (One might
guess that we should define a lattice to be left modular if all of its
elements are left modular, but this is equivalent to the definition of
a modular lattice.)
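A standard small illustration (ours, not drawn from the sources above): let $N_5$ be the pentagon lattice with elements $\hat{0}<a<b<\hat{1}$ and $\hat{0}<c<\hat{1}$, where $c$ is incomparable to both $a$ and $b$. Then $c$ is not left modular, since for $y=a\leq z=b$,

```latex
% Strict inequality in \eqref{eq:modularineq} at x = c in the pentagon N_5:
\[
(c \vee a) \wedge b = \hat{1} \wedge b = b
\;>\; a = \hat{0} \vee a = (c \wedge b) \vee a.
\]
```

On the other hand, one can check that every element of the chain $\hat{0}\lessdot a\lessdot b\lessdot\hat{1}$ is left modular, so $N_5$ is a left modular lattice that is not graded.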
As shown in \cite{St1},
any M-chain of a supersolvable lattice
is always a left modular maximal chain, and so supersolvable lattices are
left modular.
Furthermore, it is shown by L. S.-C. Liu
\cite{L} that if
$L$ is a finite graded lattice with a left modular maximal chain $M$, then
$L$ has an $S_n$ EL-labelling with increasing maximal chain $M$.
In turn, as shown in \cite{Mc}, this implies that $L$ is
supersolvable, and so
we conclude the following.
\begin{theorem}\label{thm:gradedlattice}
Let $L$ be a finite graded lattice of rank $n$. Then the following
are equivalent:
\begin{enumerate}
\item $L$ has an $S_n$ EL-labelling,
\item $L$ is left modular,
\item $L$ is supersolvable.
\end{enumerate}
\end{theorem}
It is shown in \cite{St1} that if $L$ is upper-semimodular, then $L$ is
left modular if and only if $L$ is supersolvable.
Theorem \ref{thm:gradedlattice} is
a considerable strengthening of this. Here we used $S_n$ EL-labellings
to connect left modularity and supersolvability.
It is natural to ask for a more direct proof that (2) implies (3); such
a proof has recently been
provided by the second author in \cite{Th}.
Our goal is to generalize Theorem \ref{thm:gradedlattice}
to the case when $L$ is not
graded and, moreover, to the case when $L$ is not necessarily a lattice.
We now wish to define natural generalizations of $S_n$ EL-labellings and
of maximal left modular chains.
\begin{definition}
An EL-labelling $\gamma$ of a poset
$P$ is said to be \emph{interpolating} if, for any
$y \lessdot u \lessdot z$, either
\begin{enumerate}
\item[(i)] $\gamma(y,u) < \gamma(u,z)$ or
\item[(ii)] the increasing
chain from $y$ to $z$, say $y = w_0 \lessdot w_1 \lessdot \cdots \lessdot w_r = z$, has
the properties that its labels are strictly increasing and that
$\gamma(w_0,w_1) = \gamma(u,z)$ and $\gamma(w_{r-1},w_r)=\gamma(y,u)$.
\end{enumerate}
\end{definition}
\begin{example}
The reader is invited to
check that the labelling of the non-graded poset shown in Figure
\ref{fig:tamari} is an interpolating EL-labelling.
\begin{figure}
\caption{The Tamari lattice $T_4$ and its interpolating EL-labelling}
\label{fig:tamari}
\end{figure}
In fact, the poset shown is the so-called ``Tamari lattice'' $T_4$.
For all positive integers $n$, there exists a Tamari lattice $T_n$
with $C_n$ elements, where $C_n = \frac{1}{n+1}\binom{2n}{n}$,
the $n$th Catalan number.
More information on the Tamari lattice can be found in \cite[\S 9]{BW2},
\cite[\S 7]{BS}
and the references given there, and in
\cite[\S 3.2]{L}, where this interpolating EL-labelling appears.
The Tamari lattice is shown to have an EL-labelling in \cite{BW2} and
is shown to be left modular in \cite{BS}.
\end{example}
If $P$ is graded of rank $n$ and has an interpolating labelling $\gamma$
in which the labels on the increasing maximal chain reading from bottom to top
are $1,2,\ldots,n$, then we can check (cf. Lemma \ref{lem:labelsdistinct})
that $\gamma$ is an
$S_n$ EL-labelling.
Our next step is to define left modularity in the non-lattice case.
Let $x$ and $y$ be elements of $P$. We know that $x$ and $y$ have at least
one common upper bound, namely $\hat{1}$. If the set of common upper bounds
of $x$ and $y$ has a least element, then we denote it by $x \vee y$.
Similarly, if $x$ and $y$ have a greatest common lower bound, then
we denote it by $x \wedge y$.
Now let $w$ and $z$ be elements of $P$ with $w, z \geq y$. Consider the set
of common lower bounds for $w$ and $z$ that are also greater than or equal
to $y$. Clearly, $y$ is in this set.
If this set has a greatest element, then we denote it by $w \wedge_y z$ and
we say that $w \wedge_y z$ is well-defined (in $[y,\hat{1}]$).
We see that $(x \vee y)\wedge_y z$ is well-defined
in the poset shown in Figure \ref{fig:snellable}, even though
$(x \vee y) \wedge z$ is not. Similarly, let $w$ and $y$ be
elements of $P$ with $w, y \leq z$. If the set
$\{u \in P\ |\ u \geq w,y \mbox{\ and\ } u \leq z\}$ has a least element, then
we denote it by $w \vee^z y$ and we say that $w \vee^z y$ is
well-defined (in $[\hat{0},z]$). We will usually be interested in expressions
of the form $(x \vee y)\wedge_y z$ and $(x \wedge z)\vee^z y$.
The reader who is solely interested
in the lattice case can choose to ignore the subscripts
and superscripts
on the meet and
join symbols.
\begin{definition}
An element $x$ of a poset
$P$ is said to be \emph{compatible} if, for all $y \leq z$ in $P$,
$(x \vee y) \wedge_y z$ and $(x \wedge z) \vee^z y$
are well-defined.
A maximal chain of $P$ is said to be compatible if each of its
elements is compatible.
\end{definition}
\begin{example}
The poset shown in Figure \ref{fig:snellable} is certainly not a
lattice but the reader can check that the increasing maximal chain
is compatible.
\begin{figure}\label{fig:snellable}
\end{figure}
\end{example}
\begin{definition}
A compatible element $x$ of a poset
$P$ is said to be \emph{left modular} if, for all $y \leq z$ in $P$,
\[
(x \vee y) \wedge_y z = (x \wedge z) \vee^z y.
\]
A maximal chain of $P$ is said to be left modular if each of its elements
is compatible and left modular, and $P$ is said to be left modular if it possesses
a left modular maximal chain.
\end{definition}
This brings us to the first of our main theorems.
\begin{theorem}\label{thm:theorem2}
Let $P$ be a bounded poset with a
left modular maximal chain $M$.
Then $P$ has an interpolating EL-labelling with $M$ as its
increasing maximal chain.
\end{theorem}
The proof of this theorem will be the content of the next section.
In Section 3, we will prove the following converse result.
\begin{theorem}\label{thm:theorem3}
Let $P$ be a bounded poset with an interpolating EL-labelling. The unique
increasing chain from $\hat{0}$ to $\hat{1}$ is a
left modular maximal chain.
\end{theorem}
These two theorems, when compared with Theorem \ref{thm:gradedlattice}, might
lead one to ask about possible supersolvability results
for bounded posets that aren't graded lattices. This problem is
discussed in Section 4.
In the case of graded posets, we obtain a satisfactory result, namely
Theorem \ref{thm:posetsupersolvable}. As a consequence,
we have given an answer to the
question of when a graded poset $P$ has an $S_n$ EL-labelling.
This has ramifications
on the existence of a ``good 0-Hecke algebra action'' on the maximal
chains of the poset, as discussed in \cite{Mc}.
However, it remains an open problem to appropriately extend the
definition of supersolvability to ungraded posets.
An explicit
application
of Theorem \ref{thm:theorem3} is the subject of Section 5. As a variation on non-crossing
partitions and non-nesting partitions, we define non-straddling
partitions. Ordering the set of non-straddling partitions of $[n]$
by refinement gives a poset, denoted $NS_n$,
that is generally a non-graded lattice.
We define an edge-labelling $\gamma$ for $NS_n$ that is analogous
to the usual EL-labelling for the lattice of partitions of $[n]$.
In order to show that $NS_n$ is left modular, we then
prove that $\gamma$ is an interpolating EL-labelling.
\section{Proof of Theorem \ref{thm:theorem2}}
Throughout this section, we suppose that
$P$ is a bounded poset with a left modular
maximal chain $M: \hat{0} =x_0 \lessdot x_1 \lessdot \cdots \lessdot x_n = \hat{1}$.
We want to show that $P$ has an interpolating EL-labelling.
Our approach will be as follows: we will begin by specifying
an edge-labelling $\gamma$ for $P$ such that $M$ is an increasing chain
with respect to
$\gamma$.
We will then prove a
series of lemmas which build on the compatibility and
left modularity properties. These
culminate with Proposition
\ref{prop:labelsequal} which, roughly speaking, gives a more
local definition for $\gamma$. We will then be ready to show
that $\gamma$ is an EL-labelling and is, furthermore, an
interpolating EL-labelling.
We choose a label set
$l_1 < \cdots < l_n$ of natural numbers. (For most purposes, we can let
$l_i = i$.) We define an edge-labelling $\gamma$ on $P$ by setting
$\gamma(y,z)=l_i$ for $y\lessdot z$ if
\[ (x_{i-1} \vee y)\wedge_y z = y \mbox{\ \ and\ \ }
(x_i \vee y) \wedge_y z = z .\]
It is easy to see that $\gamma$ is well-defined.
We will refer to it as the labelling induced by $M$ and the label set
$\{l_i\}$.
When $P$ is a lattice, this labelling appears, for example, in \cite{L,W}.
As in \cite{L}, we can give an equivalent definition of $\gamma$ as follows.
\begin{lemma}
Suppose $y \lessdot z$ in $P$. Then $\gamma(y,z)= l_i$ if and only if
\[ i = \min\{j\ |\ x_j \vee y \geq z \} = \max\{ j+1\ |\ x_j \wedge z \leq y\}.\]
\end{lemma}
\begin{proof}
That $i=\min\{j\ |\ x_j \vee y \geq z\}$ is immediate from the definition
of $\gamma$. By left modularity, $\gamma(y,z)=l_i$ if and only if
$(x_{i-1} \wedge z)\vee^z y = y$ and $(x_i \wedge z) \vee^z y = z.$
In other words, $x_{i-1}\wedge z \leq y$ and $x_i \wedge z \nleq y$.
It follows that $i = \max\{ j+1\ |\ x_j \wedge z \leq y \}$.
\end{proof}
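To make the induced labelling concrete, here is a small computational sketch (our own illustration; the choice of poset, chain, and all identifiers are ours, not the paper's). In the Boolean lattice $B_3$ of subsets of $\{1,2,3\}$ ordered by inclusion, the maximal chain $\emptyset \lessdot \{1\} \lessdot \{1,2\} \lessdot \{1,2,3\}$ is left modular, joins are unions, and the induced label of a cover works out to be the element added:

```python
from itertools import combinations

# Boolean lattice B_3: subsets of {1,2,3} ordered by inclusion.
# Left modular maximal chain M: {} < {1} < {1,2} < {1,2,3}.
elements = [frozenset(c) for r in range(4)
            for c in combinations((1, 2, 3), r)]
chain = [frozenset(range(1, j + 1)) for j in range(4)]  # x_0, ..., x_3

def covers(y, z):
    # z covers y in B_3 iff z is y plus exactly one new element
    return y < z and len(z - y) == 1

def gamma(y, z):
    # Induced label of a cover y <. z: min{ j : x_j v y >= z },
    # with joins given by unions and label set l_i = i.
    return min(j for j in range(4) if chain[j] | y >= z)

labels = {(y, z): gamma(y, z)
          for y in elements for z in elements if covers(y, z)}

# Each cover is labelled by its unique added element, recovering the
# standard S_3 EL-labelling of the Boolean lattice B_3.
assert all(lab == next(iter(z - y)) for (y, z), lab in labels.items())
```

Since every interval of $B_3$ is again a Boolean lattice, one can check in the same way that each interval has a unique increasing maximal chain, as the EL-property requires.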
\begin{lemma}\label{lem:repeatedjoin}
Suppose that $y \leq w \leq z$ in $P$ and let $x\in M$.
Then
$((x \wedge z)\vee^z y) \vee^z w$ is well-defined
and
equals $(x \wedge z) \vee^z w$.
Similarly, $((x \vee y)\wedge_y z) \wedge_y w$ is well-defined
and
equals $(x \vee y) \wedge_y w$.
\end{lemma}
\begin{proof}
It is routine to check that, in $[\hat{0},z]$, $(x \wedge z) \vee^z w$
is the least common upper bound for $w$
and $(x \wedge z)\vee^z y$, and that, in $[y,\hat{1}]$,
$(x \vee y)\wedge_y w$ is the greatest common lower bound for
$(x \vee y)\wedge_y z$ and $w$.
\end{proof}
\begin{lemma}\label{lem:intervalmod}
Suppose that $t \leq u$ in $[y,z]$ and $x \in M$.
Let $w=(x \vee y) \wedge_y z
=(x \wedge z) \vee^z y$ in $[y,z]$. Then $(w \vee^z t) \wedge_t u$ and
$(w \wedge_y u) \vee^u t$ are well-defined elements of $[t,u]$ and are equal.
\end{lemma}
\begin{proof}
We see that, by Lemma \ref{lem:repeatedjoin},
\begin{equation*}
\begin{split}
(x \vee t) \wedge_t u &= ((x \vee t) \wedge_t z)\wedge_t u
= ((x\wedge z) \vee^z t)\wedge_t u \\
&= (((x \wedge z)\vee^z y)\vee^z t)\wedge_t u
= (w \vee^z t) \wedge_t u. \\
\end{split}
\end{equation*}
Similarly,
\[ (x \wedge u)\vee^u t = (w \wedge_y u)\vee^u t.\]
But $(x \vee t)\wedge_t u = (x \wedge u)\vee^u t$ by the left modularity of $x$, yielding the result.
\end{proof}
\begin{lemma}\label{lem:covers}
Suppose $x$ and $w$ are compatible and that $x$ is left modular in $P$.
\begin{enumerate}
\item[(a)]
If $x \lessdot w$ then for any $z$ in $P$ we have $x \wedge z \lessdoteq w \wedge z$.
\item[(b)]
If $w \lessdot x$ then for any $y$ in $P$ we have $w \vee y \lessdoteq x \vee y$.
\end{enumerate}
\end{lemma}
Part (b) appears in the lattice case in \cite[Lemma 2.5.6]{L} and
\cite[Lemma 5.3]{LS}.
\begin{proof}
We prove (a); (b) is similar. Assume, seeking a contradiction, that
$x \wedge z < u < w \wedge z$ for some $u \in P$. Now $u \leq z$ and
$u \leq w$. It follows that $u \nleq x$.
Now $x < x \vee u \leq w$. Therefore, $w = x \vee u$. So
\[ u = (x \wedge z) \vee^z u = (x \vee u)\wedge_u z = w \wedge z ,\]
which is a contradiction.
\end{proof}
We now prove a slight extension of
\cite[Lemma 2.5.7]{L} and \cite[Lemma 5.4]{LS}.
\begin{lemma}\label{lem:wichain}
The elements of $[y,z]$ of the form $(x_i\vee y)\wedge_y z$ form a
left modular maximal chain in $[y,z]$.
\end{lemma}
\begin{proof}
Lemma \ref{lem:intervalmod} gives the compatibility and left modularity
properties. By Lemma \ref{lem:covers}(b),
$x_i \vee y \lessdoteq x_{i+1} \vee y$. By Lemma \ref{lem:intervalmod} with
$z = \hat{1}$, we have that $x_i \vee y$ is left modular in $[y,\hat{1}]$.
Therefore, $(x_i \vee y) \wedge_y z \lessdoteq (x_{i+1} \vee y) \wedge_y z$ by Lemma
\ref{lem:covers}(a).
\end{proof}
We are now ready for the last, and most important, of our preliminary
results.
Let $[y,z]$ be an interval in $P$.
We call the maximal chain of $[y,z]$ from Lemma \ref{lem:wichain} the
\emph{induced}
left modular maximal chain of $[y,z]$.
One way to get a second edge-labelling for $[y,z]$ would be to take
the labelling induced in $[y,z]$ by this induced maximal chain.
We now prove that, for a suitable choice of label set, this
labelling coincides with $\gamma$.
\begin{proposition}\label{prop:labelsequal}
Let $P$ be a bounded poset,
$\hat{0} =x_0 \lessdot x_1 \lessdot \cdots \lessdot x_n = \hat{1}$ a
left modular maximal
chain and $\gamma$ the corresponding edge-labelling with label set
$\{l_i\}$. Let $y < z$, and define $c_i$ by saying
\begin{eqnarray*}
y & = & (x_{0} \vee y) \wedge_y z = \cdots = (x_{c_1-1} \vee y) \wedge_y z \\
& &\lessdot\ (x_{c_1} \vee y) \wedge_y z = \cdots = (x_{c_2-1} \vee y) \wedge_y z
\lessdot \cdots \\
& & \lessdot\ (x_{c_r} \vee y) \wedge_y z = \cdots = (x_{n} \vee y) \wedge_y z.
\end{eqnarray*}
Let $m_i = l_{c_i}$. Let $\delta$ be the labelling of $[y,z]$ induced
by its induced left modular maximal chain and the label set $\{m_i\}$.
Then $\delta$ agrees with $\gamma$ restricted to the edges of $[y,z]$.
\end{proposition}
\begin{proof}
Suppose $t \lessdot u$ in $[y,z]$.
Using ideas from the proof of Lemma \ref{lem:intervalmod},
\begin{eqnarray*}
\delta(t,u)=m_i & \Leftrightarrow &
(((x_{c_i-1} \vee y)\wedge_y z)\vee^z t)\wedge_t u = t \mbox{\ and\ } \\
& & (((x_{c_i} \vee y)\wedge_y z)\vee^z t)\wedge_t u = u \\
& \Leftrightarrow & (x_{c_i-1} \vee t)\wedge_t u = t
\mbox{\ and\ } (x_{c_i} \vee
t)\wedge_t u = u \\
& \Leftrightarrow & \gamma(t,u)=l_{c_i}.
\end{eqnarray*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:theorem2}.]
We now know that the induced left modular chain in $[y,z]$ has (strictly)
increasing labels, say $m_1 < m_2 < \cdots < m_r$.
Our first step is
to show that it is the
only maximal chain with (weakly) increasing labels.
Suppose that $y = w_0 \lessdot w_1 \lessdot \cdots \lessdot w_r = z$ is the induced chain
and that $y = u_0 \lessdot u_1 \lessdot \cdots \lessdot u_s = z$ is another chain with
increasing labels.
If $s=1$ then $y \lessdot z$ and the result is
clear. Suppose $s \geq 2$.
By Proposition \ref{prop:labelsequal},
we may assume that the labelling on $[y,z]$ is induced by
the induced left modular chain $\{w_i\}$.
In particular, we have that $\gamma(u_i, u_{i+1}) = m_l$ where
$l = \min\{j\ |\ w_j \vee^z u_i \geq u_{i+1}\}$.
Let $k$ be the least number such that $u_k \geq w_1$. Then it is
clear that $\gamma(u_{k-1},u_k)=m_1$. Note that this is the smallest
label that can occur on any edge in $[y,z]$.
Since the labels on the chain $\{u_i\}$ are assumed to be increasing,
we must have $\gamma(u_0,u_1)=m_1$. It follows that $w_1 \vee^z u_0 \geq u_1$
and since $y \lessdot w_1$, we must have $u_1 = w_1$.
Thus, by induction, the two chains coincide.
We conclude that the induced left modular maximal chain is the only
chain in $[y,z]$ with increasing labels.
It also has the lexicographically
least set of labels. To see this,
suppose that $y = u_0 \lessdot u_1 \lessdot \cdots \lessdot u_s = z$ is another chain
in $[y,z]$. We assume that $u_1 \neq w_1$ since, otherwise, we can just
restrict our attention to $[u_1, z]$.
We have
$\gamma(u_0, u_1) = m_l$, where
$l=\min\{j\ |\ w_j \geq u_1\} \geq 2$ since $w_1 \ngeq u_1$. Hence
$\gamma(u_0,u_1) \geq m_2 > \gamma(w_0, w_1)$.
This gives that $\gamma$ is an EL-labelling.
(That $\gamma$ is an EL-labelling
was already shown in the lattice case in \cite{L,W}.)
Finally, we show that it is an interpolating EL-labelling.
If $y \lessdot u \lessdot z$ is not the induced left modular maximal chain in $[y,z]$,
then let $y = w_0 \lessdot w_1 \lessdot \cdots \lessdot w_r = z$ be the induced left
modular maximal chain.
We have that $\gamma(y,u) = m_l$ where
\[
l=\min\{j\ |\ w_j\vee^z y \geq u\} =\min\{j\ |\ w_j \geq u\} = r
\]
since $u \lessdot z$.
Therefore, $\gamma(y,u) = m_r$.
Also, $\gamma(u,z) = m_l$ where
\[
l=\max\{j+1\ |\ w_j\wedge_y z \leq u\} =\max\{j+1\ |\ w_j \leq u\} = 1
\]
since $y \lessdot u$.
Therefore, $\gamma(u,z) = m_1$, as required.
\end{proof}
\section{Proof of Theorem \ref{thm:theorem3}}
We suppose that $P$
is a bounded poset with an interpolating EL-labelling $\gamma$.
Let $\hat{0} = x_0 \lessdot x_1 \lessdot \cdots \lessdot x_n = \hat{1}$ be the increasing chain
from $\hat{0}$ to $\hat{1}$ and let $l_i = \gamma(x_{i-1},x_i)$.
We will begin by
establishing some basic facts about interpolating labellings.
These results will enable us to show certain meets and joins exist by
looking at the labels that appear along particular increasing chains.
We will thus show that the $x_i$ are compatible.
We will finish by showing that the $x_i$ are left
modular, again by looking at the labels on increasing chains.
Let $y=w_0 \lessdot w_1 \lessdot \cdots \lessdot w_r = z$. Suppose that, for
some $i$, we have
$\gamma(w_{i-1},w_i) > \gamma(w_i, w_{i+1})$. Then the ``basic
replacement'' at $i$ takes the given chain and replaces the subchain
$w_{i-1} \lessdot w_i \lessdot w_{i+1}$ by the increasing chain from $w_{i-1}$ to
$w_{i+1}$. The basic tool for dealing with interpolating labellings
is the following well-known fact about EL-labellings.
\begin{lemma}\label{lem:basicreps}
Let $y=w_0 \lessdot w_1 \lessdot \cdots \lessdot w_r = z$. Successively perform basic
replacements on this chain, and stop when no more basic replacements
can be made. This algorithm terminates, and yields the increasing chain
from $y$ to $z$.
\end{lemma}
\begin{proof}
At each step, the sequence of labels on the new chain lexicographically
precedes the sequence on the old chain, so the process must terminate,
and it is clear that it terminates in an increasing chain.
\end{proof}
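The straightening procedure behind Lemma \ref{lem:basicreps} can be sketched in the same toy setting (again $B_3$ with its standard labelling standing in for a general bounded poset; the function names are ours). In $B_3$ the increasing chain from $y$ to $z$ simply adjoins the elements of $z \setminus y$ in increasing order:

```python
def label(y, z):
    # Standard label of a cover y <. z in B_3: the element added.
    return next(iter(z - y))

def increasing_chain(y, z):
    # The unique increasing chain from y to z in B_3.
    out, cur = [y], y
    for k in sorted(z - y):
        cur = cur | {k}
        out.append(cur)
    return out

def straighten(chain):
    # Repeatedly perform a basic replacement at a descent; by the
    # lemma this terminates in the increasing chain of the interval.
    chain = list(chain)
    while True:
        descents = [i for i in range(1, len(chain) - 1)
                    if label(chain[i - 1], chain[i])
                    > label(chain[i], chain[i + 1])]
        if not descents:
            return chain
        i = descents[0]
        chain[i - 1:i + 2] = increasing_chain(chain[i - 1], chain[i + 1])

# Straightening the chain {} < {3} < {2,3} < {1,2,3} yields the
# increasing chain {} < {1} < {1,2} < {1,2,3}.
f = frozenset
assert straighten([f(), f({3}), f({2, 3}), f({1, 2, 3})]) == \
       [f(), f({1}), f({1, 2}), f({1, 2, 3})]
```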
We now prove some simple consequences of this lemma.
\begin{lemma}\label{lem:labelsdistinct}
Let $m$ be the chain $y=w_0 \lessdot w_1 \lessdot \cdots \lessdot w_r = z$. Then the
labels on $m$ all occur on the increasing chain from $y$ to $z$ and
are all different. Furthermore, all the labels on the increasing
chain from $y$ to $z$ are bounded between the lowest and highest labels on
$m$.
\end{lemma}
\begin{proof}
That the labels on the given chain all occur on the increasing chain
follows immediately from Lemma \ref{lem:basicreps} and the fact that
after a basic replacement, the labels on the old chain all occur on
the new chain. Similar reasoning implies that the labels on
the increasing chain are bounded between the lowest and highest labels
on $m$.
That the labels are all different again follows from Lemma
\ref{lem:basicreps}. Suppose otherwise. By repeated basic replacements,
one obtains a chain which has two successive equal labels, which is not
permitted by the definition of an interpolating labelling.
\end{proof}
\begin{lemma}\label{lem:lessthanxi}
Let $z \in P$ be such that there is some chain from $\hat{0}$ to $z$ all of
whose labels are in $\{l_1,\ldots,l_i\}$. Then $z \leq x_i$. Conversely,
if $z \leq x_i$, then all the labels on any chain from $\hat{0}$ to $z$ are
in $\{l_1,\ldots,l_i\}$.
\end{lemma}
\begin{proof}
We begin by proving the first statement. By Lemma \ref{lem:labelsdistinct},
the labels on the increasing chain from $\hat{0}$ to $z$ are in
$\{l_1,\ldots,l_i\}$. Find the increasing chain from $z$ to $\hat{1}$.
Let $w$ be the element in that chain such that all the labels below
it on the chain are in $\{l_1,\ldots,l_i\}$, and those
above it are in $\{l_{i+1},\ldots,l_n\}$. Again, by Lemma
\ref{lem:labelsdistinct}, the increasing chain from $\hat{0}$ to $w$ has all
its labels in $\{l_1,\ldots,l_i\}$, and the increasing
chain from $w$ to $\hat{1}$ has all its labels in $\{l_{i+1},\ldots,l_n\}$.
Thus $w$ is on the increasing chain from $\hat{0}$ to $\hat{1}$, and so $w=x_i$.
But by construction $w \geq z$. So $x_i \geq z$.
To prove the converse, observe that by Lemma \ref{lem:labelsdistinct}, no
label can occur more than once on any chain. But since every label
in $\{l_{i+1},\ldots,l_n\}$ occurs on the increasing chain
from $x_i$ to $\hat{1}$, no label from among that set can occur on any edge
below $x_i$.
\end{proof}
The obvious dual of Lemma \ref{lem:lessthanxi} is proved similarly:
\begin{corollary}
Let $z \in P$ be such that there is some chain from $z$ to $\hat{1}$ all of
whose labels are in $\{l_{i+1},\ldots,l_n\}$. Then $z \geq x_i$. Conversely,
if $z \geq x_i$, then all the labels on any chain from $z$ to $\hat{1}$ are
in $\{l_{i+1},\ldots,l_n\}$.
\end{corollary}
We are now ready to prove the necessary compatibility properties.
\begin{lemma}\label{lem:meetandjoin}
$x_i \vee z$ and $x_i \wedge z$ are well-defined for any $z \in P$
and for $i=1,2,\ldots,n$.
\end{lemma}
\begin{proof}
We will prove that $x_i \wedge z$ is well-defined. The proof that
$x_i \vee z$ is well-defined is similar.
Let $w$ be the maximum element on the increasing chain from $\hat{0}$ to $z$
such that all labels on the increasing chain between $\hat{0}$ and $w$
are in $\{l_1,\ldots,l_i\}$.
Clearly $w \leq z$ and, by Lemma \ref{lem:lessthanxi}, $w \leq x_i$.
Suppose $y \leq z, x_i$. It follows that all labels from $\hat{0}$ to $y$ are
in $\{l_1, \ldots,l_i\}$. Consider the increasing chain from $y$ to
$z$. There exists an element $u$ on this chain such that all the labels
on the increasing chain from $\hat{0}$ to $u$ are in $\{l_1, \ldots,l_i\}$
and all the labels on the increasing chain from $u$ to $z$ are in
$\{l_{i+1},\ldots,l_n\}$. Therefore, $u$ is on the increasing chain
from $\hat{0}$ to $z$ and, in fact, $u = w$. Also, we have that
$\hat{0} \leq y \leq u = w \leq z$. We conclude that $w$ is the greatest
common lower bound for $z$ and $x_i$.
\end{proof}
\begin{lemma}
$\hat{0} = x_0 \wedge z \leq x_1 \wedge z \leq \cdots \leq x_n \wedge z = z$,
after we delete repeated elements, is the increasing chain in $[\hat{0},z]$.
Hence, $(x_i \wedge z) \vee^z y$ is well-defined for $y \leq z$.
Similarly, $(x_i \vee y)\wedge_y z$ is well-defined.
\end{lemma}
\begin{proof}
From the previous proof, we know that $x_i \wedge z$ is the maximum
element on the increasing chain from $\hat{0}$ to $z$ such that
all labels on the increasing chain between $\hat{0}$ and $x_i \wedge z$ are
in $\{l_1,\ldots,l_i\}$. The first assertion follows easily from
this.
Now apply Lemma \ref{lem:meetandjoin} to the bounded poset $[\hat{0},z]$.
It has an obvious interpolating labelling induced from the interpolating
labelling of $P$.
Recall that our definition of the existence of $(x_i \wedge z) \vee^z y$
only requires it to be well-defined in $[\hat{0},z]$.
The result follows.
\end{proof}
We conclude that the increasing maximal chain
$\hat{0} = x_0 \lessdot x_1 \lessdot \cdots \lessdot x_n = \hat{1}$ of $P$ is compatible.
It remains
to show that it is left modular.
\begin{proof}[Proof of Theorem \ref{thm:theorem3}.]
Suppose that $x_i$ is not left modular for some $i$. Then there exists
some pair $y \leq z$ such that $(x_i \vee y) \wedge_y z > (x_i \wedge z)\vee^z y$.
Set $x=x_i$, $b=(x_i \wedge z)\vee^z y$ and $c=(x_i \vee y)\wedge_y z$.
Observe that $d:=x\vee b \geq c$ while $a:= x \wedge c \leq b$. So the
picture is as shown in Figure \ref{fig:proofmodular}.
\begin{figure}\label{fig:proofmodular}
\end{figure}
By Lemma \ref{lem:lessthanxi}, the labels on the increasing chain from
$\hat{0}$ to $a$ are less than or equal to $l_i$. Consider the increasing chain
from $a$ to $c$. Let $w$ be the first element along the chain. If
$\gamma(a,w) \leq l_i$, then by Lemma \ref{lem:lessthanxi}, $w \leq x_i$,
contradicting the fact that $a = x \wedge c$. Thus the labels on the
increasing chain from $a$ to $c$ are all greater than $l_i$. Dually, the
labels on the increasing chain from $b$ to $d$ are less than or equal to
$l_i$. But now, by Lemma \ref{lem:labelsdistinct}, the labels on
the increasing chain from $b$ to $c$ must be contained in the labels on the
increasing chain from $a$ to $c$, and also from $b$ to $d$. But there are
no such labels, implying a contradiction. We conclude that the $x_i$ are
all left modular.
\end{proof}
We have shown that if $P$ is a bounded poset with an interpolating
labelling $\gamma$, then the unique increasing maximal chain $M$ is a
left modular maximal chain. By Theorem \ref{thm:theorem2},
$M$ then induces an
interpolating EL-labelling of $P$. We now show that this labelling agrees
with $\gamma$ for a suitable choice of label set, which is a special case of
the following proposition.
\begin{proposition}\label{prop:labellinguniqueness}
Let $\gamma$ and $\delta$ be two interpolating EL-labellings of a bounded poset $P$.
If $\gamma$ and $\delta$ agree on the $\gamma$-increasing chain from $\hat{0}$ to $\hat{1}$,
then $\gamma$ and $\delta$ coincide.
\end{proposition}
\begin{proof}
Let $m: \hat{0} = w_0 \lessdot w_1 \lessdot \cdots \lessdot w_r = \hat{1}$ be
the maximal chain with the lexicographically first $\gamma$
labelling among those chains for which $\gamma$ and $\delta$ disagree.
Since $m$ is not the
$\gamma$-increasing chain from $\hat{0}$ to $\hat{1}$, we
can find an $i$ such that $\gamma(w_{i-1},w_i) > \gamma(w_i,w_{i+1})$. Let
$m'$ be the result of the basic replacement at $i$ with respect to
the labelling $\gamma$. Then the $\gamma$-label sequence of $m'$
lexicographically precedes that of $m$, so $\gamma$ and $\delta$ agree on
$m'$. But using the fact that $\gamma$ and $\delta$ are interpolating,
it follows that they also agree on $m$. Thus they agree everywhere.
\end{proof}
\section{Generalizing Supersolvability}
Suppose $P$ is a bounded poset. For now, we consider the case of $P$ being
graded of rank $n$. We would like to define what it means for $P$ to
be \emph{supersolvable}, thus generalizing Stanley's definition of
lattice supersolvability. A definition of poset supersolvability with
a different purpose appears
in \cite{W} but we would like a more general definition. In particular,
we would like $P$ to be supersolvable if and only if
$P$ has an $S_n$ EL-labelling.
For example, the poset shown in Figure \ref{fig:snellable}, while it doesn't
satisfy V. Welker's definition,
should satisfy our definition.
We need to define, in the poset case, the equivalent of a sublattice
generated by two chains.
Suppose $P$ has a compatible maximal chain $M$.
Thus $(x \vee y) \wedge_y z$ and $(x \wedge z)\vee^z y$ are well-defined
for $x \in M$ and $y \leq z$ in $P$. Given any chain $c$ of $P$, we define
$R_{\M}(\ch)$ to be the smallest subposet of $P$ satisfying the following two
conditions:
\begin{enumerate}
\item[(i)] $M$ and $c$ are contained in $R_{\M}(\ch)$,
\item[(ii)] If $y \leq z$ in $P$ and $y$ and $z$ are in $R_{\M}(\ch)$, then so are
$(x \vee y) \wedge_y z$ and $(x \wedge z)\vee^z y$ for any $x$ in $M$.
\end{enumerate}
\begin{definition}
We say that a bounded poset $P$ is \emph{supersolvable} with M-chain $M$
if $M$ is a compatible maximal chain and $R_{\M}(\ch)$ is a distributive lattice
for any chain $c$ of $P$.
\end{definition}
Since distributive lattices are graded, it is clear that a poset must
be graded in order to be supersolvable. We now come to the main result of
this section.
\begin{theorem}\label{thm:posetsupersolvable}
Let $P$ be a bounded graded poset of rank $n$. Then the following
are equivalent:
\begin{enumerate}
\item $P$ has an $S_n$ EL-labelling,
\item $P$ is left modular,
\item $P$ is supersolvable.
\end{enumerate}
\end{theorem}
\begin{proof} Observe that for a graded poset, Lemma \ref{lem:labelsdistinct}
implies that an interpolating labelling is an $S_n$ EL-labelling, and
the converse is obvious. Thus,
Theorems \ref{thm:theorem2} and \ref{thm:theorem3}
restricted to the graded case give us that
$(1) \Leftrightarrow (2)$.
Our next step is to show that $(1)$ and $(2)$ together imply $(3)$.
Suppose $P$
is a bounded graded poset of rank $n$ with an $S_n$ EL-labelling.
Let $M$ denote the increasing maximal chain
$\hat{0}=x_0 \lessdot x_1 \lessdot \cdots \lessdot x_n = \hat{1}$
of $P$. We also know that $M$
is compatible and left modular and induces the same $S_n$ EL-labelling.
Given any maximal chain $m$ of $P$,
we define $Q_{\M}(\m)$ to be the closure of $m$ in $P$ under basic replacements.
In other words, $Q_{\M}(\m)$ is the smallest subposet of $P$
which contains $M$ and $m$ and
which has the property that, if $y$ and $z$ are in $Q_{\M}(\m)$ with $y \leq z$, then
the increasing chain between $y$ and $z$ is also in $Q_{\M}(\m)$.
It is shown in
\cite[Proof of Thm. 1]{Mc}
that $Q_{\M}(\m)$ is a distributive lattice.
There, $P$ is a lattice, but the proof of distributivity doesn't use this
fact. Now consider $R_{\M}(\ch)$. We will show that there exists a maximal
chain $m$ of $P$ such that $R_{\M}(\ch) = Q_{\M}(\m)$.
Let $m$ be the maximal
chain of $P$ which contains $c$ and which has increasing labels between
successive elements of $c \, \cup \{\hat{0},\hat{1}\}$.
The only idea we need is that, for
$y \leq z$ in $P$, the increasing chain from $y$ to $z$ is given by
$y = (x_0\vee y) \wedge_y z \leq (x_1\vee y)\wedge_y z \leq \cdots \leq
(x_n \vee y)\wedge_y z = z$, where we delete repeated elements.
This follows from Lemma \ref{lem:wichain} since the induced left modular
chain in $[y,z]$ has increasing labels.
It now follows that $R_{\M}(\ch) = Q_{\M}(\m)$, and hence
$R_{\M}(\ch)$ is a distributive lattice.
Finally, we will show that $(3) \Rightarrow (2)$. We suppose that
$P$ is a bounded supersolvable poset with M-chain $M$.
Suppose $y \leq z$ in $P$ and let $c$ be the chain $y \leq z$.
For any $x$ in $M$, $x \vee y$ is well-defined in $P$
(because $M$ is assumed to be compatible) and equals
the usual join of $x$ and $y$ in the lattice $R_{\M}(\ch)$. The same idea
applies to $x \wedge z$, $(x \vee y)\wedge_y z$ and $(x\wedge z)\vee^z y$.
Since $R_{\M}(\ch)$ is distributive, we have that
\[
(x\vee y)\wedge_y z
= (x\vee y)\wedge z
= (x\wedge z)\vee (y\wedge z)
= (x \wedge z)\vee y
= (x \wedge z)\vee^z y
\]
in $R_{\M}(\ch)$ and so $M$ is left modular in $P$.
\end{proof}
\begin{remark}
We know from Theorem \ref{thm:gradedlattice} that a graded lattice of
rank $n$ is supersolvable if and only if it has an $S_n$ EL-labelling.
Therefore, it follows from Theorem \ref{thm:posetsupersolvable} that
the definition of a supersolvable poset when restricted to graded
lattices yields the usual definition of a supersolvable lattice. (Note
that this is not {\it a priori} obvious from our definition of a supersolvable
poset.)
\end{remark}
\begin{remark}
The argument above for the equality of $R_{\M}(\ch)$ and $Q_{\M}(\m)$ holds even if
$P$ is not graded. However, in the ungraded case, it is certainly
not true that $Q_{\M}(\m)$ is distributive. The search for a full generalization
of Theorem \ref{thm:gradedlattice} thus leads us to ask what can be
said about $Q_{\M}(\m)$ in the ungraded case. Is it a lattice? Can we
say anything even in the case that $P$ is a lattice?
\end{remark}
\section{Non-straddling partitions}
Let $\Pi_n$ denote the lattice of partitions of the set
$[n]$ into blocks, where we order
partitions by refinement: if $y$ and $z$ are partitions of $[n]$
we say that $y \leq z$
if every block of $y$ is contained in some block of $z$.
Equivalently, $z$ covers $y$ in
$\Pi_n$ if $z$ is obtained from $y$ by merging two blocks
of $y$. Therefore, $\Pi_n$ is
graded of rank $n-1$. $\Pi_n$ is shown to be supersolvable in
\cite{St1} and hence has
an $S_{n-1}$ EL-labelling, which we denote by $\delta$. In fact, it will
simplify our discussion if we use the label set $\{2,\ldots ,n\}$ for
$\delta$, rather than the label set $[n-1]$. We choose
the M-chain, and hence the
increasing maximal chain for $\delta$, to be the maximal chain
consisting of the bottom element and those
partitions of $[n]$ whose only non-singleton block is $[i]$,
where $2 \leq i \leq n$. In the literature, $\delta$ is often defined in
the following form, which can be shown to be equivalent. If $z$ is obtained
from $y$ by merging the blocks $B$ and $B'$, then we set
\[
\delta(y, z) = \max \{ \min B, \min B' \} .
\]
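For example, if $z$ is obtained from $y$ by merging the blocks $B = \{1,3\}$
and $B' = \{2,5\}$, then $\delta(y,z) = \max\{1,2\} = 2$.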
For any $x \in \Pi_n$,
we will say that $j \in \{2, \ldots, n\}$ is a \emph{block minimum} in $x$
if $j = \min B$ for some block $B$ of $x$.
In particular, we see that $\delta(y,z)$ is the unique block minimum in $y$
that is not a block minimum in $z$.
Recall that a \emph{non-crossing} partition of $[n]$ is a partition with the
property that if
some block $B$ contains $a$ and $c$ and some block $B'$ contains
$b$ and $d$ with
$a < b < c < d$, then $B=B'$. Again, we can order the set of non-crossing
partitions of $[n]$
by refinement and we denote the resulting poset by $NC_n$. This
poset, which can be shown to be a lattice, has many nice properties and
has been studied extensively. More information can be found in
R. Simion's survey article \cite{Si} and the references given there.
Since $NC_n$ is a subposet of $\Pi_n$, we can consider $\delta$ restricted to
the edges of $NC_n$. It was observed by Bj{\"{o}}rner and P. Edelman in
\cite{Bj} that this gives an EL-labelling for $NC_n$ and we can easily see that this
EL-labelling is, in fact, an $S_{n-1}$ EL-labelling (once we subtract 1 from
every label).
We are now ready to state our main definition for this section, which should be
compared with the definition above of non-crossing partitions.
\begin{definition}
A partition of $[n]$ is said to be \emph{non-straddling} if whenever some
block $B$ contains $a$ and $d$ and some block $B'$ contains $b$ and $c$
with $a < b < c < d$, then $B=B'$.
\end{definition}
This definition is also very similar to that of \emph{non-nesting} partitions, as
defined by A. Postnikov and discussed in \cite[Remark 2]{Re} and \cite{At}.
The only difference in the definition of non-nesting partitions is that we
do not require $B=B'$ if there is also an element of $B$ between $b$ and $c$.
So, for example, $\{1,3,5\}\{2,4\}$ is a non-nesting partition in $\Pi_5$ but is not
a non-straddling partition. We say that $\{1,3,5\}\{2,4\}$ is a
\emph{straddling partition}, that $1 < 2 < 4 < 5$ is a \emph{straddle}, and that
the blocks $\{1,3,5\}$ and $\{2,4\}$ form a straddle.
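The definition can be checked mechanically; the following is a hypothetical helper of our own (the function name and the representation of a partition as a list of blocks are not from the paper):

```python
from itertools import combinations, permutations

def is_non_straddling(blocks):
    # True iff no two distinct blocks B, B' admit a < b < c < d
    # with a, d in B and b, c in B'.
    for B, Bp in permutations(blocks, 2):
        for a, d in combinations(sorted(B), 2):
            if any(a < b < c < d
                   for b, c in combinations(sorted(Bp), 2)):
                return False
    return True

# {1,3,5}{2,4} contains the straddle 1 < 2 < 4 < 5,
# while {1,4}{2,5}{3,6} is non-straddling.
assert not is_non_straddling([{1, 3, 5}, {2, 4}])
assert is_non_straddling([{1, 4}, {2, 5}, {3, 6}])
```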
Let $NS_n$ be the subposet of $\Pi_n$ consisting of those partitions that are
non-straddling.
To distinguish the interval $[x,y]$ in $\Pi_n$ from the interval $[x,y]$ in $NS_n$,
we will use the notation $[x,y]_{\Pi_n}$ and $[x,y]_{NS_n}$,
respectively. We note that the meet in $\Pi_n$ of two non-straddling partitions
is again non-straddling, implying that $NS_n$ is a meet-semilattice. Since
$\{1,2,\ldots, n\}$ is a top element for $NS_n$, we conclude that $NS_n$ is a
lattice. On the other hand, $NS_n$ is not graded. For example, consider those
elements of $\Pi_6$ that cover $\{1,4\}\{2,5\}\{3,6\}$, as represented
in Figure \ref{fig:nonstraddlinginterval}(a).
\begin{figure}\label{fig:nonstraddlinginterval}
\end{figure}
$\{1,2,4,5\}\{3,6\}$,
$\{1,3,4,6\}\{2,5\}$ and $\{1,4\}\{2,3,5,6\}$ are all straddling partitions, so
$\{1,4\}\{2,5\}\{3,6\}$ is covered in $NS_6$ by
$\{1,2,3,4,5,6\}$.
Figure \ref{fig:nonstraddlinginterval}(b) shows
$[ \{1,4\}\{2,5\}\{3\}\{6\} ,\hat{1}]_{NS_6}$.
Therefore, unlike $\Pi_n$ and $NC_n$, $NS_n$ cannot have an $S_{n-1}$
EL-labelling. However, we can ask if it has an interpolating EL-labelling.
We see that the following three ways of defining an edge-labelling
$\gamma$ for $NS_n$ are
equivalent. Observe that if $y \lessdot z$ in $NS_n$, then $z$ is obtained from $y$
by merging the blocks $B_1, B_2, \ldots , B_r$ of $y$ into a single block $B$ in
$z$. We set
\begin{eqnarray}
\gamma(y,z) & = & \mbox{second smallest element of $\{\min B_1, \ldots , \min B_r\}$} \nonumber \\
& = & \mbox{smallest block minimum in $y$ that is not a block} \nonumber \\
& & \mbox{minimum in $z$} \nonumber \\
& = & \mbox{smallest edge label of $[y,z]_{\Pi_n}$ under the edge-labelling $\delta$.} \label{eq:def3}
\end{eqnarray}
See Figure \ref{fig:nonstraddlinginterval}(b) for examples.
Note that the label set for $\gamma$ is $\{2, 3, \ldots , n\}$ and that if $r = 2$, then
$\gamma(y, z)$ equals $\delta(y,z)$. We see that the chain
\[
\hat{0} < \{1,2\}\{3\}\cdots \{n\}
< \{1,2,3\}\{4\}\cdots \{n\}
< \cdots
< \{1,2,\ldots ,n-1\}\{n\} < \hat{1}
\]
is an increasing maximal chain in $NS_n$ under $\gamma$.
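As a quick illustration (a sketch of our own, with hypothetical function names), the labelling $\gamma$ along the chain above can be computed directly from its first definition: the second smallest minimum among the blocks being merged.

```python
def gamma(y, z):
    """Edge label for a cover y < z in NS_n: the second smallest minimum
    among the blocks of y that are merged into a single block of z."""
    merged_minima = sorted(min(B) for B in y if B not in z)
    return merged_minima[1]

# the increasing maximal chain 0-hat < {1,2}{3}{4} < {1,2,3}{4} < 1-hat in NS_4
n = 4
chain = [[{i} for i in range(1, n + 1)]]
for k in range(2, n + 1):
    chain.append([set(range(1, k + 1))] + [{i} for i in range(k + 1, n + 1)])
labels = [gamma(chain[i], chain[i + 1]) for i in range(len(chain) - 1)]
print(labels)  # [2, 3, 4]
```

The printed labels $2, 3, \ldots, n$ are strictly increasing, matching the claim that this chain is increasing under $\gamma$.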
\begin{theorem}\label{thm:gammainterpolating}
The edge-labelling $\gamma$ is an interpolating EL-labelling for $NS_n$.
\end{theorem}
Applying Theorem \ref{thm:theorem3}, we get the following result:
\begin{corollary}
$NS_n$ is left modular.
\end{corollary}
In preparation for proving Theorem \ref{thm:gammainterpolating}, we wish to get a
firmer grasp on $NS_n$.
Suppose $x, y \in NS_n$. While the meet of $x$ and $y$ in $NS_n$ is just the meet of
$x$ and $y$ in $\Pi_n$, the situation for joins is more complicated. The next lemma,
crucial to the proof that $\gamma$ is an EL-labelling, helps us to understand important
types of joins. From now on, unless otherwise specified, $x \vee y$ with $x, y \in NS_n$ will
denote the join of $x$ and $y$ in $NS_n$. Furthermore, if
$l_0 < l_1 < \cdots < l_r$ are block minima in $y$, then $\bl{l_i}$ will denote the block of
$y$ with minimum element $l_i$, and $\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_r}$ will denote
the minimum element $z \in NS_n$ for which the elements of $\bl{l_0}, \bl{l_1}, \cdots , \bl{l_r}$ are
all in a single block. Note that $z$ is well-defined, since it is the meet of all those elements of
$NS_n$ that have the required elements in a single block.
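A minimal sketch of how the element $\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_r}$ can be computed in practice (our own construction, assuming the closure is obtained by merging the forced blocks and then repeatedly merging any pair of blocks that forms a straddle; helper names are ours):

```python
from itertools import combinations

def straddle(B1, B2):
    """True if blocks B1, B2 contain a < b < c < d with a, d in one
    block and b, c in the other."""
    for A, C in ((B1, B2), (B2, B1)):
        for a, d in combinations(sorted(A), 2):
            if sum(1 for x in C if a < x < d) >= 2:
                return True
    return False

def merge_to_ns(y, to_merge):
    """Minimal z >= y in NS_n in which all elements of `to_merge` lie in
    a single block: merge the blocks of y meeting `to_merge`, then
    repeatedly merge any pair of blocks forming a straddle."""
    big, rest = set(), []
    for B in y:
        (big.update(B) if B & to_merge else rest.append(set(B)))
    blocks = [big] + rest
    merged = True
    while merged:
        merged = False
        for i, j in combinations(range(len(blocks)), 2):
            if straddle(blocks[i], blocks[j]):
                blocks[i] |= blocks.pop(j)
                merged = True
                break
    return sorted(blocks, key=min)

# merging the blocks with minima 1 and 2 in y = {1,4}{2,5}{3,6} forces
# all three blocks together, matching the cover relation discussed earlier
y = [{1, 4}, {2, 5}, {3, 6}]
print(merge_to_ns(y, {1, 2}))  # [{1, 2, 3, 4, 5, 6}]
```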
\begin{lemma}\label{lem:merging}
Suppose $l_0 < l_1 < \cdots < l_r$ are block minima in $y$ and that
\[
y \vee (\bl{l_0} \cup \bl{l_1})
= y \vee (\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_r}).
\]
Then
\[
y \vee (\bl{l_i} \cup \bl{l_j})
= y \vee (\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_r}).
\]
for any $0 \leq i < j \leq r$.
\end{lemma}
In words, this says that if merging the blocks $\bl{l_0}$ and $\bl{l_1}$ in $y$ requires us to
merge all of $\bl{l_0}, \bl{l_1}, \ldots , \bl{l_r}$, then merging any two of these blocks
also requires us to merge all of them.
\begin{proof}
The proof is by induction on $r$, with the result being trivially true when $r=1$.
While elementary, the details are a little intricate. To gain a better understanding,
the reader may wish to treat the proof as an exercise.
If $i < j \leq r-1$, then by the induction assumption and the hypothesis that
$y \vee (\bl{l_0} \cup \bl{l_1})
= y \vee (\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_r})$, we have
\[
y \vee (\bl{l_i} \cup \bl{l_j}) = y \vee (\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_{r-1}})
= y \vee (\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_r}),
\]
as required.
Therefore, it suffices to let $j = r$.
Since
$y \vee (\bl{l_0} \cup \bl{l_1}) = y \vee (\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_{r-1}})
= y \vee (\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_r})$, we know that
$\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_{r-1}}$ forms a straddle with $\bl{l_r}$.
There are two ways in which this might happen.
First, suppose we have $a < b < c < d$ with $a, d \in \bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_{r-1}}$
and $b, c \in \bl{l_r}$. Suppose $d \in \bl{l_s}$ in $y$. Then, since $l_s < l_r \leq b < c$, we
have that $l_s < b < c < d$ is a straddle in $y$, which contradicts $y \in NS_n$.
Secondly, suppose we have $a < b < c < d$ with $a, d \in \bl{l_r}$
and $b, c \in \bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_{r-1}}$.
Suppose $b \in \bl{l_s}$ and $c \in \bl{l_t}$. Now $c > b > a \geq l_r > l_s, l_t$.
If $s=t$ then $y$ has a straddle, so we can assume that $l_s \neq l_t$ and that
$l_i \neq l_t$, with the argument being similar if $l_i \neq l_s$.
If $l_i < l_t$, then $l_i < l_t < c < d$ is a straddle when we merge blocks
$\bl{l_i}$ and $\bl{l_r}$ in $y$.
Therefore,
\begin{equation}\label{eq:merging}
y \vee (\bl{l_i} \cup \bl{l_r}) = y \vee (\bl{l_i} \cup \bl{l_t} \cup \bl{l_r})
= y \vee (\bl{l_0} \cup \bl{l_1} \cup \cdots \cup \bl{l_r})
\end{equation}
by the induction assumption.
If $l_i > l_t$, then $l_t < l_i < l_r < c$ is a straddle when we merge blocks
$\bl{l_i}$ and $\bl{l_r}$ in $y$, also implying \eqref{eq:merging}.
\end{proof}
\begin{lemma}\label{lem:firstx}
Suppose $y < z$ in $NS_n$ and that $[y, z]_{\Pi_n}$ has edge labels
$l_1 < l_2 < \cdots < l_s$ under the edge-labelling $\delta$.
\begin{enumerate}
\item[(i)] There is exactly one edge of the form $y \lessdot w$
with $\gamma(y, w) = l_1$ in $[y, z]_{NS_n}$.
\item[(ii)]
On any unrefinable chain $y = u_0 \lessdot u_1 \lessdot \cdots \lessdot u_k = z$ in $NS_n$, the
label $l_1$ has to appear.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) We first prove the existence of $w$.
Let $l_0$ be the minimum of the block of $z$ containing $l_1$
and set $w = y \vee (\bl{l_0} \cup \bl{l_1})$. Suppose $y < u \leq w$. We know
$w$ is obtained from $y$ by merging the blocks $\bl{l_0}, \bl{l_1}, \bl{l_{i_1}},
\bl{l_{i_2}}, \ldots , \bl{l_{i_r}}$, for some $0 \leq r < s$. Applying Lemma \ref{lem:merging},
we get that $u = w$ and so $y \lessdot w$. By definition of $\gamma$, we have that
$\gamma(y, w) = l_1$.
It remains to prove uniqueness. Suppose $w' \in NS_n$ with $y \lessdot w'$ in $[y,z]$.
If $\gamma(y, w') = l_1$, then we see that the blocks $\bl{l_0}$ and $\bl{l_1}$ must
be merged in $w'$. Therefore, these two blocks are merged in $w \wedge w'$, which
is thus greater than $y$.
Since $y \lessdot w, w'$, we conclude that $w = w'$.
(ii) Consider the chain $y=u_0 < u_1 < \cdots < u_k=z$ as a chain in $\Pi_n$.
Since $\delta$ is an $S_{n-1}$ EL-labelling for $\Pi_n$ (once we subtract 1 from every
label), the label $l_1$ has to appear on every maximal chain of $[y,z]_{\Pi_n}$.
In particular, it has to appear in one of the intervals $[u_i, u_{i+1}]_{\Pi_n}$
for $0 \leq i < k$.
Therefore, by \eqref{eq:def3}, we get that $\gamma(u_i, u_{i+1}) = l_1$ for some
$0 \leq i < k$.
\end{proof}
\begin{proposition}
The edge-labelling $\gamma$ is an EL-labelling for $NS_n$.
\end{proposition}
\begin{proof}
Consider $y, z \in NS_n$ with $y < z$. Suppose $[y,z]_{\Pi_n}$ has
edge labels $l_1 < l_2 < \cdots < l_s$. By \eqref{eq:def3}, these are the only edge
labels that can appear in $[y,z]_{NS_n}$.
We now describe a recursive construction of an unrefinable chain
$\lambda: y = w_0 \lessdot w_1 \lessdot \cdots \lessdot w_k = z$ in $NS_n$.
We let $w_1$ be the $w$ of Lemma \ref{lem:firstx}, i.e. $w_1$ is that unique element
of the interval $[y,z]$ in $NS_n$ that covers $y$ and satisfies $\gamma(y, w_1)=l_1$.
Obviously, the labels in the interval $[w_1, z]$ are all greater than $l_1$.
Now we apply the same argument in the interval $[w_1, z]$ to define $w_2$ and repeat until
we have constructed all of $\lambda$. Clearly, $\lambda$ is then an increasing chain.
By Lemma \ref{lem:firstx}(i), it has the lexicographically least set of labels. By Lemma
\ref{lem:firstx}(ii), it is the only increasing chain from $y$ to $z$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:gammainterpolating}.]
Suppose we have $y \lessdot u \lessdot z$ in $NS_n$ with $\gamma(y, u) > \gamma(u, z)$. Let
$y = w_0 \lessdot w_1 \lessdot \cdots \lessdot w_k = z$ be the unique increasing chain of $[y, z]$ in
$NS_n$. By Lemma \ref{lem:firstx}, we know that
$\gamma(w_0, w_1)=\gamma(u,z)=l_1$, the smallest edge label
of $[y, z]_{\Pi_n}$.
To show that $\gamma(y, u) = \gamma(w_{k-1}, w_k)$, we have to work considerably harder.
We will continue to write $\bl{m}$ to denote the block of $y$ whose minimum is
$m$ and we suppose that $u$ is obtained from $y$ by merging blocks
$\bl{m_0}, \bl{m_1}, \ldots , \bl{m_s}$ of $y$, with
$m_0 < m_1 < \cdots < m_s$. We will write $\bl{l}_u$ to denote
the block of $u$ whose minimum is $l$, and we suppose that $z$ is obtained
from $u$ by merging blocks $\bl{l_0}_u, \bl{l_1}_u, \ldots , \bl{l_r}_u$,
with $l_0 < l_1 < \cdots < l_r$.
With the structure of the chain $y \lessdot u \lessdot z$ thus fixed, we now can deduce
information about the structure of the increasing chain.
If $l_0$ and $m_0$ are distinct and are both block minima in $z$, then all the
$l_i$'s and $m_j$'s are distinct. It follows that
$z$ is obtained from $y$ by merging blocks $\bl{l_0}, \bl{l_1}, \ldots , \bl{l_r}$ and
separately merging blocks $\bl{m_0}, \bl{m_1}, \ldots , \bl{m_s}$.
Since $y \lessdot u \lessdot z$, we get that $k=2$ and
$\gamma(y, u) = \gamma(w_1, z) = m_1$.
We assume, therefore, that $m_0 = l_i$ for some $0 \leq i \leq r$.
As usual, we let $w_1 = y \vee (\bl{l_0} \cup \bl{l_1})$. Now consider
\[
w = y \vee (\bigcup_{i: \, l_i < m_1} \bl{l_i}) .
\]
Since $l_1 < m_1$, we know that $w \geq w_1$. Let $B$ denote
the block of $w$ containing all $\bl{l_i}$
satisfying $l_i < m_1$. Since $m_0 < m_1$, we know that
$m_0 \in B$. In fact, if we can show that $m_1 \not\in B$, then
the proof can be completed as follows. Assume $m_1 \not\in
B$ and let $w' = w \vee (B \cup \bl{m_1})$.
Now $w'$ has
$m_0$ and $m_1$ in the same block and so satisfies $w' \geq u$, since
$u = y \vee (\bl{m_0} \cup \bl{m_1})$.
Also, $w'$ has $l_0$ and $l_1$ in the same block and so satisfies
$w' \geq z$, since $z = u \vee (\bl{l_0} \cup \bl{l_1})$.
Hence, $w' = z$. By Lemma \ref{lem:merging} (substitute $w$ for $y$,
and $m_1 < \cdots < m_s$ for $l_1 < \cdots < l_r$), we see that $w \lessdot w'$.
Now $\gamma(w, w') = m_1$, while
the edge labels of $[y, w]_{\Pi_n}$ all come from
the set $\{l_i\ | \ l_i < m_1\}$, implying that $w$ is on the increasing chain
between $y$ and $z$. Therefore, $w = w_{k-1}$ and so $\gamma(w_{k-1},w_k)=
\gamma(y,u)$.
It remains to show that $m_1 \not\in B$.
In fact, we will show that $m_j \not\in B$ for any $j \geq 1$.
Consider the set
\[
\tilde{B} = \bigcup_{i: \, l_i < m_1} \bl{l_i} .
\]
We will show that $\tilde B$ does not form a straddle with any
$\bl{m_j}$ for $j\geq 1$. From that, it follows immediately that
$B=\tilde B$, and therefore that $m_1 \not\in B$, as desired.
For $j\geq 1$, if $\bl{m_j}$ is a singleton, then $\tilde B$ does not
form a straddle with $\bl{m_j}$. So suppose that $|\bl{m_j}|\geq 2$.
Let $m_j'$ denote
the second smallest element of $\bl{m_j}$. Observe the following:
\begin{itemize}
\item If $\bl{m_0}$ contains an element greater than $m_j'$, then $\bl{m_0}$ and $\bl{m_j}$
form a straddle in $y$, which is impossible.
\item If $\bl{m_0}$ has more than one element between $m_j$ and $m_j'$, then we can draw
the same conclusion.
\item Consider those $l_i < m_1$ with $l_i \neq m_0$. If $\bl{l_i}$
contains an element greater than $m_j$, then
$\bl{l_i}_u$ forms a straddle in $u$ with $\bl{m_0} \cup \bl{m_1} \cup \cdots \cup \bl{m_s}$,
which is impossible.
\end{itemize}
Combining these three observations, we see that $\tilde B$ contains
no elements greater than $m_j'$, and at most one element
between $m_j$ and $m_j'$.
In particular, it does not form a straddle with $\bl{m_j}$, as desired.
\end{proof}
\end{document}
\begin{document}
\title{Entanglement as the symmetric portion of correlated coherence}
\author{Kok Chuan Tan}
\email{bbtankc@gmail.com}
\author{Hyunseok Jeong}
\email{jeongh@snu.ac.kr}
\affiliation{Center for Macroscopic Quantum Control \& Institute of Applied Physics, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742, Korea}
\date{\today}
\begin{abstract}
We show that the symmetric portion of correlated coherence is always a valid quantifier of entanglement, and that this property is independent of the particular choice of coherence measure. This leads to an infinitely large class of coherence-based entanglement monotones, which are always computable for pure states whenever the underlying coherence measure is computable. It is already known that every entanglement measure can be constructed as a coherence measure. The results presented here show that the converse is also true. The constructions presented can be extended to more general notions of nonclassical correlations, leading to quantifiers that are related to quantum discord.
\end{abstract}
\maketitle
\section{Introduction}
An important pillar in the field of quantum information is the study of the quantumness of correlations, the most well known of which is the notion of entangled quantum states~\cite{Einstein1935}. Entanglement is now the basis of many of the most useful and powerful quantum protocols, such as quantum cryptography~\cite{Ekert1991}, quantum teleportation~\cite{Bennett1991} and superdense coding. In the past several decades, generalized notions of quantum correlations that include but supersede entanglement have also been considered, most prominently in the form of quantum discord~\cite{Ollivier2001, Henderson2001}. There is mounting evidence that such notions of quantum correlations can also lead to nonclassical effects in multipartite scenarios~\cite{Datta2008, Chuan2012, Dakic2012, Chuan2013}, even when entanglement is not available.
In a separate development, the past several years have also seen a growing amount of interest in the recently formalized resource theory of coherence~\cite{Aberg2006,Baumgratz2014, Levi2014}. Such theories are primarily interested in identifying the quantumness of a given quantum state, and are not limited to a multipartite setting as in the case of entanglement or discord. Nonetheless, there is considerable interest in the study of correlations from the point of view of coherence~\cite{Streltsov2015, Tan2016, Ma2016, Tan2018}. In this picture, one may view quantum correlations as a single aspect of the more general notion of nonclassicality, which in this article we will take to imply coherence. Beyond the study of quantum correlations, the resource theory of coherence has been applied to an ever increasing number of physical scenarios, ranging from macroscopicity~\cite{Yadin2016, Kwon2017}, to quantum algorithms~\cite{Hillery2016,Matera2016}, to interferometry~\cite{YTWang2017}, to nonclassical light~\cite{Tan2017, Zhang2015, Xu2016}. Ref.~\cite{Streltsov2017} provides a recent overview of the developments to date. Especially relevant are the results in~\cite{Streltsov2015}, where it was shown that coherence can be faithfully converted into entanglement, and that each entanglement measure corresponds to a coherence measure in the sense of \cite{Baumgratz2014}.
In this article, we report a series of constructions which allow notions of nonclassical correlations to be quantified using coherence measures. The arguments are general in the sense that they do not depend on the particular coherence measure used, nor even on the particular flavour of coherence resource theory being employed, so long as the measure satisfies a minimal set of properties that any reasonable coherence measure should satisfy. This suggests that notions of entanglement and discord are intrinsically tied to any reasonable resource theory of coherence. In essence, our results establish that the converse of the relationship proposed in~\cite{Streltsov2015} is also true, so that to every coherence measure there corresponds an entanglement measure. In fact, we go beyond this by demonstrating that this correspondence holds not only for the coherence resource theory proposed in \cite{Baumgratz2014}, since our framework does not depend on the choice of free operations required by a particular resource theory \cite{Streltsov2017}. In addition, as a natural consequence of our operation-independent approach, we also see that discord-like quantifiers of nonclassical correlations are naturally embedded in any such resource theory of coherence.
This operation-free approach contrasts with other approaches considered in~\cite{Chitambar2016, Streltsov2016}, where coherence and entanglement are bridged by forming a hybrid resource theory based on some combination of free operations from both theories. Such a hybridization approach often requires additional constraints, such as requiring that operations be local on top of being incoherent, which may bring about extra complications. For instance, it is sometimes difficult to physically justify the set of operations being considered in the hybridized theory, and one may also have to deal with the accounting of not one but two species of resource states (i.e. simultaneously keeping track of available maximally coherent qubits on top of maximally entangled qubits).
\section{Preliminaries}
We review some elementary concepts concerning coherence measures. Coherence is a basis-dependent property of a quantum state. For a given fixed basis $\mathcal{B} = \{ \ket{i} \}$, the set of incoherent states $\cal I$ is the set of quantum states with diagonal density matrices with respect to this basis, and is considered to be the set of classical states. Correspondingly, states that have nonzero off-diagonal elements form the set of coherent states, which are nonclassical.
The notion of nonclassicality from the point of view of coherence is an unambiguous aspect of all coherence theories, but different flavours of coherence resource theories sometimes consider different sets of non-coherence-producing operations in order to justify different coherence measures (see~\cite{Streltsov2017} for a summary). For our purposes, we will not require any specific properties of such non-coherence-producing operations. The following is a set of axioms that such resource theories of coherence generally obey: let $\mathcal{C}$ be a measure of coherence belonging to some coherence resource theory; then $\mathcal{C}(\rho)$ must satisfy
(C1) $\mathcal{C}(\rho) \geq 0$ for any quantum state $\rho$ and equality holds if and only if $\rho \in \cal I$.
(C2) The measure is non-increasing under a non-coherence-producing map $\Phi$, i.e., $\mathcal{C}(\rho) \geq \mathcal{C}(\Phi(\rho))$.
(C3) Convexity, i.e. $\lambda \mathcal{C}(\rho) + (1-\lambda) \mathcal{C}(\sigma) \geq \mathcal{C}(\lambda \rho + (1-\lambda) \sigma)$, for any density matrices $\rho$ and $\sigma$ with $0\leq \lambda \leq 1$.
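As a concrete sanity check (our own illustration, not part of the formal development), properties (C1) and (C3) can be verified numerically for the $l_1$-norm of coherence on randomly generated states:

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_coherence(rho):
    """l1-norm of coherence: sum of the moduli of the off-diagonal entries."""
    return np.sum(np.abs(rho)) - np.sum(np.diag(np.abs(rho)))

def random_state(d):
    """Random density matrix built from a Ginibre matrix."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

# (C1): an incoherent (diagonal) state has zero coherence
print(np.isclose(l1_coherence(np.diag([0.2, 0.3, 0.5])), 0.0))  # True
# (C3): convexity on a pair of random qutrit states
rho, sigma = random_state(3), random_state(3)
lam = 0.4
lhs = lam * l1_coherence(rho) + (1 - lam) * l1_coherence(sigma)
rhs = l1_coherence(lam * rho + (1 - lam) * sigma)
print(lhs + 1e-12 >= rhs)  # True
```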
The following quantity was considered by Tan {\it et al.}~\cite{Tan2016} while studying the relationship between coherence and quantum correlations: $$\mathcal{C}(A:B \mid \rho_{AB}) \coloneqq \mathcal{C}(\rho_{AB}) - \mathcal{C}(\rho_{A}) - \mathcal{C}(\rho_{B}).$$
This quantity was referred to as correlated coherence, and the coherence measure $\mathcal{C}$ in \cite{Tan2016} was chosen to be the $l_1$-norm of coherence. There, it was noted that since it is always possible to choose a local basis for the subsystems $A$ and $B$ in which $\mathcal{C}(\rho_{A})$ and $\mathcal{C}(\rho_{B})$ vanish, the coherence in the system is then no longer stored locally, and must exist amongst the correlations between subsystems $A$ and $B$.
Based on this observation, it was demonstrated there that if one minimizes the correlated coherence over all such local bases, i.e. all local bases $\mathcal{B}_A$ and $\mathcal{B}_B$ satisfying $\mathcal{C}(\rho_{A})=\mathcal{C}(\rho_{B})=0$, then the resulting quantity may be related to quantum correlations such as discord and entanglement. Formally, they considered the quantity:
\begin{definition}[Correlated coherence] $$\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) \coloneqq \min_{\mathcal{B}_{A:B}}\mathcal{C}(A:B \mid \rho_{AB}),$$ where the minimization is performed over the set of local bases $\mathcal{B}_{A:B} \coloneqq \{ (\mathcal{B}_A, \mathcal{B}_B) \mid \mathcal{C}(\rho_{A})=\mathcal{C}(\rho_{B})=0 \}.$ \end{definition}
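For a worked example (our own, using the $l_1$-norm as the coherence measure), consider the Bell state $(\ket{00}+\ket{11})/\sqrt{2}$. The computational basis is one admissible basis for the minimization, since both marginals are maximally mixed and hence incoherent, and in that basis the correlated coherence evaluates to 1:

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm of coherence: sum of the moduli of the off-diagonal entries."""
    return np.sum(np.abs(rho)) - np.sum(np.diag(np.abs(rho)))

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi)
rho4 = rho.reshape(2, 2, 2, 2)              # indices (a, b, a', b')
rho_A = np.einsum('abcb->ac', rho4)         # trace out B
rho_B = np.einsum('abac->bc', rho4)         # trace out A
cc = l1_coherence(rho) - l1_coherence(rho_A) - l1_coherence(rho_B)
print(np.isclose(cc, 1.0))  # True: both marginals are diagonal, so all
                            # coherence resides in the correlations
```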
The quantity is invariant under local unitary operations, since it is clear that for any state $\rho_{AB}$ and local basis $\mathcal{B}_{A:B} = \{\ket{i}_A\ket{j}_B \}$, the correlated coherence of the state $U_A \rho_{AB} U^\dag_A$ with respect to the basis $\mathcal{B}_{A:B} = \{U_A\ket{i}_A\ket{j}_B \}$ is identical. Subsequently, entropic versions of correlated coherence were also studied in \cite{Wang2017} and more recently in \cite{Kraft2018, Ma2018}, where operational scenarios were considered.
In the next section, we prove that using $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB})$ as our basic building block, every coherence measure can be used to construct a valid entanglement quantifier, which establishes that entanglement may be interpreted as the symmetric portion of correlated coherence.
\section{Quantifying entanglement with correlated coherence}
We begin with some necessary definitions:
\begin{definition}[Symmetric extensions]
A symmetric extension of a bipartite state $\rho_{A_1B_1}$ is an extension $\rho_{A_1\ldots A_n B_1\ldots B_n}$ satisfying $\mathrm{Tr}_{A_2 \ldots A_n B_2 \ldots B_n}(\rho_{A_1\ldots A_n B_1\ldots B_n}) = \rho_{A_1B_1}$ that is, up to local unitaries, invariant under the swap operation $\Phi_{\mathrm{SWAP}}^{A_i \leftrightarrow B_i}$ between any subsystems $A_i$ of Alice and $B_i$ of Bob, i.e. there exists some unitary $U_{A_1 \ldots A_n}$ such that $$\Phi_{\mathrm{SWAP}}^{A_i \leftrightarrow B_i}(U_{A_1 \ldots A_n}\rho_{A_1 \ldots A_n B_1\ldots B_n}U^\dag_{A_1 \ldots A_n}) = U_{A_1 \ldots A_n}\rho_{A_1 \ldots A_n B_1\ldots B_n}U^\dag_{A_1 \ldots A_n}.$$
\end{definition}
A symmetric extension is therefore, up to a local unitary on Alice's side (or Bob's side), an extension of the quantum state that exists within the symmetric subspace. Subsequently, for notational simplicity, we will use unprimed letters $A,B$ for the system of interest, and primed letters $A', B'$ for the ancillas in the extension. Let us now consider the correlated coherence of such extensions.
\begin{definition}[Symmetric correlated coherence]
The symmetric correlated coherence, for any given coherence measure $\mathcal{C}$, is defined to be the following quantity: $$E_{\mathcal{C}}(\rho_{AB}) = \min_{A'B'}\mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho_{AA'BB'})$$ where the minimization is performed over all possible symmetric extensions of $\rho_{AB}$. Note that the ancillas $A'$ and $B'$ may, in general, be composite systems.
\end{definition}
The above definition quantifies the minimum correlated coherence that exists within a symmetric subspace of an extended Hilbert space, up to some local unitary on Alice's side or Bob's side. For this reason, we interpret this quantity as the portion of the correlated coherence that is symmetric.
For the rest of this article, we will prove several elementary properties of the above correlation measure, which will finally establish it as a valid entanglement monotone.
First, we will demonstrate that $E_{\mathcal{C}}(\rho_{AB})$ is a convex function of states:
\begin{proposition}
$E_{\mathcal{C}}(\rho_{AB})$ is a convex function of the state, i.e. $$\sum_i p_i E_{\mathcal{C}}(\rho^i_{AB}) \geq E_{\mathcal{C}}(\sum_i p_i\rho^i_{AB})$$ where $p_i$ defines some probability distribution s.t. $\sum_i p_i = 1$ and each $\rho^i_{AB}$ is a normalized quantum state.
\end{proposition}
\begin{proof}
Let $\rho^{i*}_{AA'BB'}$ be the optimal extension such that $E_{\mathcal{C}}(\rho^i_{AB})= \mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho^{i*}_{AA'BB'})$. We have the following chain of inequalities:
\begin{align}
\sum_i p_i E_{\mathcal{C}}(\rho^i_{AB})& = \sum_i p_i \mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho^{i*}_{AA'BB'}) \\
&= \sum_i p_i \mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}) \\
&\geq \mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \sum_i p_i \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}) \\
&\geq E_\mathcal{C}( \sum_i p_i \rho^i_{AB} )
\end{align}
The inequality in Line 3 occurs because there is at least one local basis that is upper bounded by Line 2. To see this, suppose for every $i$ and $\rho^{i*}_{AA'BB'}$, the optimal basis for evaluating $\mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho^{i*}_{AA'BB'})$ is $\{ \ket{\alpha_{i,j}}_{AA'} \ket{\beta_{i,k}}_{BB'} \}$. Then it is clear that the optimal local basis for $\rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}$ must be $\{ \ket{\alpha_{i,j}}_{AA'} \ket{i}_{A''}\ket{\beta_{i,k}}_{BB'} \ket{i}_{B''} \}$, since this is essentially just a relabelling of the basis. Since the coherence measure $\mathcal{C}$ is convex, a classical mixture of quantum states cannot increase the amount of coherence with respect to the basis $\{ \ket{\alpha_{i,j}}_{AA'} \ket{i}_{A''}\ket{\beta_{i,k}}_{BB'} \ket{i}_{B''} \}$. Finally, one can verify that the local coherences with respect to this basis are always zero, so this is just one particular local basis that satisfies the necessary constraints. In sum, this implies \begin{align*}\sum_i p_i &\mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}) \\
&\geq \mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \sum_i p_i \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}),\end{align*} which was the required inequality.
The inequality in Line 4 comes from the observation that $\sum_i p_i \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}$ is a particular symmetric extension of $\sum_i p_i \rho^i_{AB}$. The final inequality is simply the convexity condition which we needed to prove.
\end{proof}
In the next proposition, we demonstrate the connection between $E_{\mathcal{C}}(\rho_{AB})$ and nonseparability, which defines entanglement.
\begin{proposition} [Faithfulness]
$E_{\mathcal{C}}(\rho_{AB}) = 0$ iff $\rho_{AB}$ is separable, and strictly positive otherwise.
\end{proposition}
\begin{proof}
First of all, we note that all coherence measures are nonnegative over valid quantum states, and as such, since $E_{\mathcal{C}}(\rho_{AB})$ is defined as a form of coherence over some quantum state, $E_{\mathcal{C}}(\rho_{AB})$ must be nonnegative.
Suppose some bipartite state $\rho_{AB}$ is separable. By definition, this necessarily implies that there exists some decomposition for which $\rho_{AB} = \sum_i p_i \ket{a_i}_A\bra{a_i} \otimes \ket{b_i}_B\bra{b_i}$. This always permits an extension of the form $\rho_{AA'BB'} = \sum_i p_i \ket{a_i}_A\bra{a_i} \otimes \ket{i}_{A'}\bra{i} \otimes \ket{b_i}_B\bra{b_i} \otimes \ket{i}_{B'}\bra{i}$ for some orthonormal set $\{ \ket{i} \}$.
It can then be directly verified that $\mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho_{AA'BB'}) = 0$ so we must have $E_{\mathcal{C}}(\rho_{AB})=0$ for every separable state.
We now prove the converse. Suppose $E_{\mathcal{C}}(\rho_{AB})=0$. Then there must exist some extension for which $\mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho_{AA'BB'}) = 0$. This implies that there exist local bases on $AA'$ and on $BB'$ with respect to which the coherence is zero, so $\rho_{AA'BB'}$ must be diagonal in this product basis, i.e. $\rho_{AA'BB'} = \sum_{i} q_i \ket{\alpha_i}_{AA'}\bra{\alpha_i} \otimes \ket{\beta_i}_{BB'}\bra{\beta_i}$. Directly tracing out the subsystems $A'$ and $B'$ will lead to a decomposition of the form $\rho_{AB} = \sum_i p_i \ket{a_i}_A\bra{a_i} \otimes \ket{b_i}_B\bra{b_i}$, so $\rho_{AB}$ must be a separable state.
We then observe that since $E_{\mathcal{C}}(\rho_{AB})$ must be nonnegative, and it is zero iff $\rho_{AB}$ is separable, then it must be strictly positive for every entangled state. This completes the proof.
\end{proof}
Finally, we show that $E_{\mathcal{C}}(\rho_{AB})$ is non-increasing under LOCC-type operations.
\begin{proposition} [Monotonicity] \label{thm::monotonicity} For any LOCC protocol represented by a quantum map $\Phi_{\mathrm{LOCC}}$, we have $$E_{\mathcal{C}}( \rho_{AB}) \geq E_{\mathcal{C}}[\Phi_{\mathrm{LOCC}}(\rho_{AB})].$$
\end{proposition}
\begin{proof}
Any LOCC operation can always be decomposed into some local quantum operation, a communication of classical information stored in a classical register, and finally, another local operation that is dependent on the classical information received.
Let us suppose that Bob, representing subsystem $B$, is the one who will communicate classical information to Alice, representing subsystem $A$. His local operation can always be represented by adding ancillary subsystems $B'B''$ in some initial pure state $\ket{0}_{B'}\bra{0} \otimes \ket{0}_{B''}\bra{0}$, followed by a unitary operation on all of the subsystems on his side. Without any loss of generality, we will assume that $B''$ contains all the classical information (i.e. it is a classical register) after the unitary is performed, and that $B'$ is traced out. Bob will then communicate this classical information to Alice, who will then perform some quantum operation depending on the information she received.
Based on the above, we have the following chain of inequalities:
\begin{align}
E_{\mathcal{C}}(\rho_{AB})&= E_{\mathcal{C}}( \rho_{AB} \otimes \ket{0}_{B'}\bra{0} \otimes \ket{0}_{B''}\bra{0}) \\
&= E_{\mathcal{C}}( U_{BB'B''}\rho_{AB} \otimes \ket{0}_{B'}\bra{0} \otimes \ket{0}_{B''}\bra{0}U_{BB'B''}^\dag) \\
&\geq E_\mathcal{C}(\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i})
\end{align} where the last line makes use of the observation that a symmetric extension of the argument in Line 6 is also a symmetric extension of the argument of Line 7.
From the above, we see that a local POVM performed on Bob's side necessarily decreases the $E_\mathcal{C}$. The next part of the protocol requires Bob to communicate the classical information in the register $B''$ over to Alice. We need to demonstrate that this can be done for free, without increasing the correlated coherence.
To see this, let $\sigma_{AA'A''BB'B''}^*$ be the optimal symmetric extension of $\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}$. We then have $E_\mathcal{C}(\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) = \mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \sigma_{AA'A''BB'B''}^*)$. Recall that the register $B''$ stores the classical information of Bob's POVM outcomes. By definition, $\sigma_{AA'A''BB'B''}^*$ must be a symmetric extension, so there exists a local unitary that Alice can perform such that $U_{AA'A''}\sigma_{AA'A''BB'B''}^* U_{AA'A''}^\dag = \Phi_{\mathrm{SWAP}}^{AA'A'' \leftrightarrow BB'B''}(U_{AA'A''}\sigma_{AA'A''BB'B''}^* U_{AA'A''}^\dag)$. Since local unitaries do not affect the measure $E_\mathcal{C}$, we will assume that $\sigma_{AA'A''BB'B''}^*$ is itself already symmetric.
Suppose we add registers, denoted $M_{A}$ and $M_{B}$, initialized in the states $\ket{0}_{M_A}$ and $\ket{0}_{M_B}$, and locally copy the classical information on registers $A''$ and $B''$ via CNOT operations $U_{\mathrm{CNOT}}^{M_AA''}$ and $U_{\mathrm{CNOT}}^{M_BB''}$. This results in the state $$\mathcal{U}_{\mathrm{CNOT}}^{M_AA''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_BB''}(\ket{0}_{M_A}\bra{0} \otimes \sigma_{AA'A''BB'B''}^* \otimes \ket{0}_{M_B}\bra{0}),$$ where $\mathcal{U}_{\mathrm{CNOT}}^{AB}(\rho_{AB}) \coloneqq U_{\mathrm{CNOT}}^{AB} \rho_{AB} U_{\mathrm{CNOT}}^{AB \dag}$. Note that as identical unitary operations are performed on Alice's and Bob's sides, the above state is symmetric since $\sigma_{AA'A''BB'B''}^*$ is symmetric. Due to symmetry, we must have the following chain of equalities:
\begin{align} &\mathcal{U}_{\mathrm{CNOT}}^{M_AA''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_BB''}(\ket{0}_{M_A}\bra{0} \otimes \sigma_{AA'A''BB'B''}^* \otimes \ket{0}_{M_B}\bra{0}) \\& \;= \Phi_{\mathrm{SWAP}}^{A''\leftrightarrow B''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_AA''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_BB''}[\ket{0}_{M_A}\bra{0} \otimes \Phi_{\mathrm{SWAP}}^{A''\leftrightarrow B''} (\sigma_{AA'A''BB'B''}^*) \otimes \ket{0}_{M_B}\bra{0}] \\ & \;= \mathcal{U}_{\mathrm{CNOT}}^{M_AB''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_BA''}(\ket{0}_{M_A}\bra{0} \otimes \sigma_{AA'A''BB'B''}^* \otimes \ket{0}_{M_B}\bra{0})\end{align} where Equation~10 uses the fact that $\Phi_{\mathrm{SWAP}}^{A\leftrightarrow B}(\rho_{AB}) = U_{\mathrm{SWAP}}^{A\leftrightarrow B} \rho_{AB} U_{\mathrm{SWAP}}^{A\leftrightarrow B\; \dag}$, $ U_{\mathrm{SWAP}}^{A\leftrightarrow B} = U_{\mathrm{SWAP}}^{A\leftrightarrow B\; \dag}$ and $U_{\mathrm{SWAP}}^{B\leftrightarrow C} U_{\mathrm{CNOT}}^{AB} U_{\mathrm{SWAP}}^{B\leftrightarrow C \; \dag} = U_{\mathrm{CNOT}}^{AC}$. One can verify that Equation~10 is a symmetric extension of $\sum_i \ket{i}_{M_A}\bra{i}\otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}$, which is just the state if Bob communicates the classical information on the register $B''$ to Alice. As such, we determine that the copying of classical information to Alice cannot increase the measure, so we have $$E_\mathcal{C}(\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \geq E_\mathcal{C}(\sum_i \ket{i}_{M_A}\bra{i}\otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}).$$ This is already sufficient for us to prove that $E_\mathcal{C}$ cannot increase under classical communication.
Continuing from where we left off:
\begin{align}
E_{\mathcal{C}}(\rho_{AB})& \geq E_\mathcal{C}(\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \\
&\geq E_\mathcal{C}(\sum_i \ket{i}_{A''}\bra{i} \otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \\
&= E_\mathcal{C}(\ket{0}_{A'}\bra{0} \otimes \sum_i \ket{i}_{A''}\bra{i} \otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \\
&= E_\mathcal{C}\Big(U_{AA'A''}\Big[\ket{0}_{A'}\bra{0} \otimes \sum_i \ket{i}_{A''}\bra{i} \otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}\Big] U_{AA'A''}^\dag\Big) \\
&\geq E_\mathcal{C}( \sum_{i,j} \ket{i}_{A''}\bra{i} \otimes K_A^{i,j}K_B^i \rho^{*}_{AB}K_B^{i\dag} K_A^{i,j\dag} \otimes \ket{i}_{B''}\bra{i} )
\end{align} where the first two inequalities use the fact that local operations and classical communication cannot increase $E_\mathcal{C}$, and the final inequality holds because every symmetric extension of the state in the preceding line is also a symmetric extension of the state in the final line. The final line shows that when Alice performs an operation conditioned on the classical communication from Bob, the measure again does not increase.
From the above arguments, we see that any local POVM performed by Bob, followed by communication of the classical measurement outcomes to Alice and by a local quantum operation on Alice's side conditioned on those outcomes, cannot increase $E_\mathcal{C}$. Since any LOCC protocol is a series of such steps between Alice and Bob, possibly with their roles reversed, this implies that $E_\mathcal{C}$ is contractive under LOCC operations.
\end{proof}
In sum, the above propositions directly imply the following theorem, which is the key result of this article:
\begin{theorem}
$E_\mathcal{C}$ is a valid entanglement monotone for every choice of coherence measure $\mathcal{C}$.
\end{theorem}
We observe that if we choose the coherence measure to be the relative entropy of coherence, defined as $\mathcal{C}(\rho_{AB}) = \mathcal{S}[\Delta(\rho_{AB})]- \mathcal{S}(\rho_{AB})$ where $\Delta(\rho_{AB})$ is the completely dephased state~\cite{Baumgratz2014}, then for pure states the measure exactly coincides with the well-known entropy of entanglement. This is because pure quantum states have only trivial extensions, and every pure state admits a Schmidt decomposition $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$, where the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ satisfy the condition $\mathcal{C}(\rho_A)=\mathcal{C}(\rho_B)=0$. W.r.t.\ this basis, $\mathcal{S}[\Delta(\rho_{AB})]= \mathcal{S}(\sum_i{\lambda_i \ket{i,i}_{AB}\bra{i,i}}) = \mathcal{S}[\mathrm{Tr}_B(\ket{\psi}_{AB}\bra{\psi})]$, while $\mathcal{S}(\ket{\psi}_{AB}\bra{\psi})=0$ for a pure state, so the result is just the expression for the entropy of entanglement. It remains to be proven that the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ achieve the minimization required in the definition of correlated coherence. In fact, this is always true and is a generic property of all coherence measures, as we show in the following theorem.
\begin{theorem}[$E_{\mathcal{C}}$ for pure states] \label{thm::purestates}
For any continuous coherence measure $\mathcal{C}$ and pure state $\ket{\psi}_{AB}$ with Schmidt decomposition $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$, $E_\mathcal{C}(\ket{\psi}_{AB}) = \mathcal{C}(\ket{\psi}_{AB})$ where the coherence is measured w.r.t. the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ specified by the Schmidt decomposition.
\end{theorem}
\begin{proof}
Consider the Schmidt decomposition $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$ of any pure state. We need not consider extensions, since every pure state has only trivial extensions. Suppose first that the coefficients are nondegenerate, in the sense that $\lambda_i \neq \lambda_j$ if $i\neq j$. Performing a partial trace, we see that $\rho_A = \mathrm{Tr}_B(\ket{\psi}_{AB}\bra{\psi})= \sum_i \lambda_i \ket{i}_A\bra{i}$. As the coefficients are nondegenerate, $\{\ket{i}_A\}$ (up to overall phase factors) is the unique local basis satisfying $\mathcal{C}(\rho_A)=0$. Identical arguments apply for subsystem $B$. As such, the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ necessarily achieve the minimum for the correlated coherence, i.e.\ $\mathcal{C}_{\mathrm{min}}(A:B \mid \ket{\psi}_{AB})$ and $E_\mathcal{C}(\ket{\psi}_{AB})$ are just the coherence $\mathcal{C}(\ket{\psi}_{AB})$ w.r.t.\ the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$.
The above demonstrates that the local bases defined by the Schmidt decomposition achieve the necessary minimization when the coefficients are nondegenerate. We now extend the argument to the general case. Consider a general Schmidt decomposition $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$. Even if the coefficients are degenerate, the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ nonetheless satisfy the constraints $\mathcal{C}(\rho_A)=\mathcal{C}(\rho_B)=0$, so $\mathcal{C}(\ket{\psi}_{AB})$ w.r.t.\ this basis is at least an upper bound on $E_{\mathcal{C}}(\ket{\psi}_{AB})$.
Consider again the partial trace $\rho_A = \sum_i \lambda_i \ket{i}_A\bra{i}$. Without loss of generality, we assume that the $\lambda_i$ are in decreasing order, so that $\lambda_j \leq \lambda_i$ if $j\geq i$. Suppose one of the coefficients is $m$-degenerate, so that for some $k$, $\lambda_k > \lambda_{k+1}= \ldots= \lambda_{k+m} > \lambda_{k+m+1}$. Note the strict inequality on both ends. We now consider a slightly perturbed state $\ket{\psi(\epsilon)}_{AB} = \sum_i\sqrt{\lambda_i(\epsilon)}\ket{i,i}_{AB}$ where $\lambda_i(\epsilon) = \lambda_i-\epsilon \lfloor i- k - \frac{m}{2} \rfloor$ whenever $k<i<k+m+1$ and $\lambda_i(\epsilon) = \lambda_i$ otherwise. The corresponding partial trace is denoted $\rho_A(\epsilon) = \sum_i \lambda_i(\epsilon) \ket{i}_A\bra{i}$. For sufficiently small $\epsilon >0$, we can verify that the majorization condition $\rho_A \prec \rho_A(\epsilon)$ is satisfied, which, due to Nielsen's theorem \cite{Nielsen2010}, implies that there exists some LOCC operation $\Phi_{\mathrm{LOCC}}$ that performs the transformation $\ket{\psi}_{AB} \rightarrow\ket{\psi(\epsilon)}_{AB}$. From Theorem~\ref{thm::monotonicity}, we know that this implies $E_\mathcal{C}(\ket{\psi}_{AB}) \geq E_\mathcal{C}\left(\ket{\psi(\epsilon)}_{AB} \right)$ since the quantity cannot increase under LOCC operations. At the same time, for sufficiently small $\epsilon>0$, the coefficients $\lambda_i(\epsilon)$ are nondegenerate, so $E_\mathcal{C}(\ket{\psi(\epsilon)}_{AB}) = \mathcal{C}(\ket{\psi(\epsilon)}_{AB})$ w.r.t.\ the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$. This implies that $\mathcal{C}(\ket{\psi(\epsilon)}_{AB}) \leq E_\mathcal{C}(\ket{\psi}_{AB}) \leq \mathcal{C}(\ket{\psi}_{AB})$.
In the limit $\epsilon \rightarrow 0$, $\mathcal{C}(\ket{\psi(\epsilon)}_{AB})\rightarrow \mathcal{C}(\ket{\psi}_{AB})$, so by the squeeze theorem we must have $E_\mathcal{C}(\ket{\psi}_{AB}) = \mathcal{C}(\ket{\psi}_{AB})$, where the implied basis is given by $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$. We have considered the case where only one coefficient has $m$-degeneracy, but the same argument can be repeated as necessary for every degenerate coefficient, which suffices to prove the general case.
\end{proof}
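The perturbation step in the proof above can be checked numerically. The following Python sketch (an illustration, not part of the original argument; the sample spectrum and the value of $\epsilon$ are arbitrary choices) verifies for a degenerate Schmidt spectrum that $\lambda(\epsilon)$ preserves the trace, lifts the degeneracy, and majorizes the original spectrum, as required for Nielsen's theorem to apply.

```python
import numpy as np

def majorizes(p, q):
    """Return True if p majorizes q: the partial sums of the
    descending-sorted entries of p dominate those of q."""
    p, q = np.sort(p)[::-1], np.sort(q)[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - 1e-12))

# Degenerate Schmidt spectrum: lambda_2 = lambda_3 = lambda_4
lam = np.array([0.4, 0.2, 0.2, 0.2])

# Perturbation from the proof (k = 1, m = 3): shift the degenerate
# block by -eps * floor(i - k - m/2), which preserves the trace
eps = 1e-3
lam_eps = lam + eps * np.array([0.0, 1.0, 0.0, -1.0])

assert abs(lam_eps.sum() - 1.0) < 1e-9        # still a valid spectrum
assert majorizes(lam_eps, lam)                # rho_A(eps) majorizes rho_A
assert len(set(np.round(lam_eps, 9))) == 4    # degeneracy is lifted
```

Note that the reverse relation fails: `majorizes(lam, lam_eps)` is `False`, so the LOCC transformation goes only in the direction used in the proof.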
Theorem~\ref{thm::purestates} reveals that for every coherence measure and every pure bipartite state, there is always a basis in which the coherence exactly quantifies the entanglement. In a more practical sense, it also shows that to every computable coherence measure there corresponds a computable entanglement measure for pure states. We have already seen that the relative entropy of coherence, which has a closed-form expression, corresponds to the entropy of entanglement for pure states. We can similarly choose the $l_1$ norm of coherence, for which we get the simple closed-form formula $E_{\mathcal{C}}(\ket{\psi}_{AB}) = \sum_{i \neq j } \sqrt{\lambda_i\lambda_j}$ where $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$. Theorem~\ref{thm::purestates} states that this expression is also a valid entanglement monotone for pure states. In general, there exist infinitely many computable coherence measures. We also note that once one has an entanglement monotone for pure states, it is possible to generalize it to mixed states via a convex-roof construction~\cite{Horodecki2001}, which provides yet another avenue for generating new entanglement measures from coherence measures.
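The closed-form expression above is easy to evaluate from the Schmidt coefficients. A minimal Python sketch (illustrative only; the helper name `E_l1` is ours, not the paper's notation): for a Bell state it gives 1, and for a product state it gives 0.

```python
import numpy as np

def E_l1(schmidt):
    """l1-norm correlated coherence of a pure state with Schmidt
    spectrum `schmidt`: sum_{i != j} sqrt(lambda_i * lambda_j)."""
    s = np.sqrt(np.asarray(schmidt, dtype=float))
    # sum over all pairs (i, j) minus the diagonal i == j
    return float(np.sum(np.outer(s, s)) - np.sum(s**2))

# Bell state |00> + |11> (lambda = [1/2, 1/2]): E = 2*sqrt(1/4) = 1
assert abs(E_l1([0.5, 0.5]) - 1.0) < 1e-12
# Product state (single Schmidt coefficient): no entanglement
assert abs(E_l1([1.0])) < 1e-12
```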
\section{Asymmetric quantifiers of quantum correlations}
In the previous section, the symmetric portion of the correlated coherence was considered, in which case it was found to directly address the entangled part of quantum correlations. We now show that simply dropping the requirement of symmetry naturally leads to discord-like measures of correlations.
For quantum discord, the set of states that have zero discord, and are thus ``classical'', is the set of classical-quantum states, which can be written in the form $\rho_{AB}=\sum_i p_i \ket{i}_A\bra{i}\otimes \rho_B^i$. One may readily characterize this set by considering extensions without the requirement that the extension be symmetric. Let us consider the following:
\begin{definition} [Asymmetric discord of coherence]
The asymmetric discord of coherence, for any given coherence measure $\mathcal{C}$, is defined to be the following quantity: $$D_{\mathcal{C}}(\rho_{AB}) = \min_{B'}\mathcal{C}_{\mathrm{min}}(A:BB' \mid \rho_{ABB'})$$ where the minimization is performed over all possible extensions satisfying $\mathrm{Tr}_{B'}(\rho_{ABB'}) = \rho_{AB}$.
\end{definition}
We can then observe that this always defines a discord-like quantifier for every coherence measure $\mathcal{C}$.
\begin{theorem}
$D_{\mathcal{C}}(\rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-quantum, i.e. the state can be written as $\rho_{AB}= \sum_i p_i \ket{i}_A\bra{i}\otimes \rho_B^i$ where $\{\ket{i}_A\}$ is some orthonormal set. It is strictly positive otherwise.
\end{theorem}
\begin{proof}
First, suppose $\rho_{AB}= \sum_i p_i \ket{i}_A\bra{i}\otimes \rho_B^i$. Writing $\rho_B^i$ in terms of its pure state decomposition, we have $$\rho_{AB}= \sum_i p_i \ket{i}_A\bra{i}\otimes \sum_j q_{ij}\ket{\beta_j}_B\bra{\beta_j}.$$ This state always permits an extension on Bob's side of the form $$\rho_{ABB'}= \sum_i p_i \ket{i}_A\bra{i}\otimes \sum_j q_{ij}\ket{\beta_j}_B\bra{\beta_j}\otimes \ket{i,j}_{B'}\bra{i,j}$$ for which $\mathcal{C}_{\mathrm{min}}(A:BB' \mid \rho_{ABB'})=0$ and so $D_{\mathcal{C}}(\rho_{AB})=0$.
Conversely, if $D_{\mathcal{C}}(\rho_{AB})=0$ then this implies that we can write $\rho_{ABB'} = \sum_i p_i \ket{i}_A\bra{i} \otimes \ket{\beta_i}_{BB'}\bra{\beta_i}$, which is a classical-quantum state and remains classical-quantum even if we trace out the subsystem $B'$. This proves the converse statement, so we must have $D_{\mathcal{C}}(\rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-quantum.
Since $D_{\mathcal{C}}(\rho_{AB})$ is a minimization of a coherence measure and so is nonnegative, and $D_{\mathcal{C}}(\rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-quantum, we must have $D_{\mathcal{C}}(\rho_{AB})> 0$ for any state that is not classical-quantum. This completes the proof.
\end{proof}
The most general notion of nonclassical correlations is one where the set of classical states is the set of classical-classical states, or completely classical states. These are quantum states that can always be written in the form $\rho_{AB}=\sum_{i,j} p_{ij} \ket{i}_A\bra{i}\otimes \ket{j}_B\bra{j}$. This case can be addressed directly via the correlated coherence itself, without considering any extensions of the state, and is the natural end point of the progressive relaxation of the constraints considered in $E_{\mathcal{C}}$ and $D_\mathcal{C}$.
\begin{theorem}
$\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-classical, i.e. the state can be written as $\rho_{AB}=\sum_{i,j} p_{ij} \ket{i}_A\bra{i}\otimes \ket{j}_B\bra{j}$ where $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$ are some orthonormal sets. It is strictly positive otherwise.
\end{theorem}
\begin{proof}
First, suppose $\rho_{AB}=\sum_{i,j} p_{ij} \ket{i}_A\bra{i}\otimes \ket{j}_B\bra{j}$. It is then immediately clear, by considering the basis $\{ \ket{i}_A \ket{j}_B \}$, that $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$.
Conversely, if $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$ then this implies that we can write $\rho_{AB} = \sum_{i,j} p_{ij} \ket{i}_A\bra{i} \otimes \ket{j}_B\bra{j}$ since there must be some local basis $\{ \ket{i}_A \}$ and $\{ \ket{j}_B \}$ for which $\rho_{AB}$ is diagonal. This proves the converse statement so we must have $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-classical.
Since $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB})$ is a minimization of a coherence measure and so is nonnegative, and $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-classical, we must have $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB})> 0$ for any state that is not classical-classical. This completes the proof.
\end{proof}
We also observe that for pure bipartite states, $\mathcal{C}_{\mathrm{min}}(A:B \mid \ket{\psi}_{AB}) = D_{\mathcal{C}}(\ket{\psi}_{AB})=E_{\mathcal{C}}(\ket{\psi}_{AB})$; the discord-like quantifiers thus converge with entanglement, which is a known property of measures of quantum discord.
\section{Conclusion}
In the preceding sections, we presented a construction that is a valid quantifier of entanglement. The construction also generalizes to larger classes of quantum correlations, leading to discord-like quantifiers of nonclassicality. The arguments are independent not only of the type of coherence measure used but also of the kind of non-coherence-producing operation being considered. Such entanglement measures must therefore exist for any convex coherence quantifier that shares a common notion of classicality. This leads to the conclusion that such constructions, and thus notions of entanglement and discord, must exist in every reasonable resource theory of coherence.
In~\cite{Streltsov2015}, it was demonstrated that for every entanglement measure, there corresponds a coherence measure. This was achieved by considering the entanglement of the state after performing some preprocessing in the form of an incoherent operation. In a sense, this article asks the converse question: does every coherence measure correspond to some entanglement measure? The results discussed in this article prove this in the affirmative. Therefore, if one were interested in keeping count, the number of possible entanglement measures must be exactly equal to the number of coherence measures.
The fact that entanglement can always be defined as the symmetric portion of correlated coherence also further illuminates the role played by the incoherent operation in~\cite{Streltsov2015}, despite it not being a crucial element for the construction of entanglement measures. Recall that incoherent operations are operations that do not produce coherence. This does not, however, preclude the moving of coherence from one portion of the Hilbert space to another. Since coherence can always be faithfully converted into entanglement, we see that the incoherent operation, in such a context, performs the role of converting any local coherences into the symmetric portion of the correlated coherence, at least when one restricts oneself to the resource theory of coherence considered in~\cite{Baumgratz2014,Streltsov2015}.
We hope that the discussion presented here will inspire further research into the interplay between coherence and quantum correlations.
\acknowledgements This work was supported by the National Research Foundation of Korea (NRF) through a grant funded by the Korea government (MSIP) (Grant No. 2010-0018295) and by the Korea Institute of Science and Technology Institutional Program (Project No. 2E27800-18-P043). K.C. Tan was supported by Korea Research Fellowship Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (Grant No. 2016H1D3A1938100). We would also like to thank H. Kwon for helpful discussions.
\end{document} |
\begin{document}
\title{Nonlocal entanglement concentration scheme for partially entangled
multipartite systems with nonlinear optics\footnote{Published in
Phys. Rev. A 77, 062325 (2008)}}
\author{Yu-Bo Sheng,$^{1,2,3}$ Fu-Guo Deng,$^{1,4}$\footnote{Email address:
fgdeng@bnu.edu.cn} and Hong-Yu Zhou$^{1,2,3}$}
\address{$^1$The Key Laboratory of Beam Technology and Material
Modification of Ministry of Education, Beijing Normal University,
Beijing 100875, China\\
$^2$Institute of Low Energy Nuclear Physics, and Department of
Material Science and Engineering, Beijing Normal University,
Beijing 100875, China\\
$^3$Beijing Radiation Center, Beijing 100875, China\\
$^4$Department of Physics, Applied Optics Beijing Area Major
Laboratory, Beijing Normal University, Beijing 100875, China }
\date{\today }
\begin{abstract}
We present a nonlocal entanglement concentration scheme for
reconstructing some maximally entangled multipartite states from
partially entangled ones by exploiting cross-Kerr nonlinearities
to distinguish the parity of two polarization photons. Compared
with the entanglement concentration schemes based on two-particle
collective unitary evolution, this scheme does not require the
parties to accurately know the partially
entangled states---i.e., their coefficients. Moreover, it does not
require the parties to possess sophisticated single-photon
detectors, which makes this protocol feasible with present
techniques. By iteration of entanglement concentration processes,
this scheme has a higher efficiency and yield than those with
linear optical elements. All these advantages make this scheme
more efficient and more convenient than others in practical
applications.
\end{abstract}
\pacs{ 03.67.Pp, 03.67.Mn, 03.67.Hk, 42.50.-p} \maketitle
\section{introduction}
Entanglement is a unique phenomenon in quantum mechanics and it
plays an important role in quantum-information processing and
transmission. For instance, quantum computers exploit entanglement
to speed up the computation of problems in mathematics
\cite{computation1,computation2}. The two legitimate users in
quantum communication---say, the sender Alice and the receiver
Bob---use an entangled quantum system to transmit a private key
\cite{Ekert91,BBM92,rmp,LongLiu,CORE} or a secret message
\cite{QSDC}. Also quantum dense coding \cite{densecoding,super2},
quantum teleportation \cite{teleportation}, controlled
teleportation \cite{cteleportation}, and quantum-state sharing
\cite{QSTS} need entanglements to set up the quantum channel.
However, in a practical transmission or in the process of storing
quantum systems, we cannot avoid channel noise, which makes
the entangled quantum system less entangled. For example, the
Bell state $\vert \phi^+\rangle_{AB}=\frac{1}{\sqrt{2}}(\vert
H\rangle_A\vert H\rangle_B + \vert V\rangle_A \vert V\rangle_B)$
may become a mixed one such as a Werner state \cite{werner}:
\begin{eqnarray}
W_{F} &=& F \vert \phi^{+}\rangle\langle\phi^{+}\vert + \frac{1-F}{3}(\vert
\phi^{-}\rangle\langle\phi^{-}\vert \nonumber\\
&+& \vert \psi^{+}\rangle\langle\psi^{+} \vert + \vert
\psi^{-}\rangle\langle\psi^{-} \vert),
\end{eqnarray}
where
\begin{eqnarray}
\vert \phi^{\pm} \rangle_{AB} =\frac{1}{\sqrt{2}}(\vert H\rangle_A
\vert H \rangle_B \pm \vert V\rangle_A\vert V\rangle_B),\\
\vert \psi^{\pm} \rangle_{AB} =\frac{1}{\sqrt{2}}(\vert H\rangle_A
\vert V \rangle_B \pm \vert V\rangle_A\vert H\rangle_B).
\end{eqnarray}
Here $H$ and $V$ represent the horizontal and vertical
polarizations of photons, respectively. The Bell state $\vert
\phi^+\rangle$ can also be degraded to a partially entangled pure state
such as $|\Psi\rangle=\alpha\vert H\rangle_A \vert V\rangle_B
+\beta\vert V\rangle_A \vert H\rangle_B$, where
$|\alpha|^{2}+|\beta|^{2}=1$. Multipartite entanglement states
also suffer from channel noise. For instance,
$|\Phi^{\pm}\rangle=\frac{1}{\sqrt{2}}(\vert HH \cdots H\rangle
\pm \vert VV \cdots V\rangle)$ will become
$|\Phi'^{\pm}\rangle=\alpha|HH \cdots H \rangle \pm \beta
|VV\cdots V\rangle$. For three-particle quantum systems, their
states with the form $\vert \Phi^\pm\rangle$ are called
Greenberger-Horne-Zeilinger (GHZ) states. Now, the multipartite
entangled states like
$|\Phi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|00\cdot\cdot\cdot0\rangle\pm|11\cdot\cdot\cdot1\rangle)$
are also called multipartite GHZ states.
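The Werner state introduced above can be constructed numerically. The following Python sketch (illustrative only; the Bell states are built from the computational basis and the value of $F$ is an arbitrary choice) checks that $W_F$ is normalized and has fidelity $F$ with $\vert \phi^+\rangle$.

```python
import numpy as np

# Polarization basis: H = [1, 0], V = [0, 1]
H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])
phi_p = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)
phi_m = (np.kron(H, H) - np.kron(V, V)) / np.sqrt(2)
psi_p = (np.kron(H, V) + np.kron(V, H)) / np.sqrt(2)
psi_m = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

def werner(F):
    """Werner state W_F: weight F on |phi+><phi+| and the remaining
    1 - F spread evenly over the other three Bell states."""
    proj = lambda v: np.outer(v, v)
    return F * proj(phi_p) + (1 - F) / 3 * (proj(phi_m) + proj(psi_p) + proj(psi_m))

W = werner(0.8)
assert np.isclose(np.trace(W), 1.0)        # valid density matrix
assert np.isclose(phi_p @ W @ phi_p, 0.8)  # fidelity with |phi+> is F
```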
The method of distilling a mixed state into a maximally
entangled state is named entanglement purification, which has
been widely studied in recent years \cite{C.H.Bennett1,D.
Deutsch,Pan1,Pan2,M. Murao,M. Horodecki,Yong,shengpra}. Another
way of distilling less entangled pure states into maximally
entangled states that will be detailed here is called entanglement
concentration. Several entanglement concentration protocols of
pure nonmaximally entangled states have been proposed recently.
The first entanglement concentration protocol was proposed by
Bennett \emph{et al.} \cite{C.H.Bennett2} in 1996 and is called the
Schmidt projection method. In their protocol \cite{C.H.Bennett2},
the two parties of quantum communication need some collective and
nondestructive measurements of photons, which, however, are not
easy to manipulate in experiment. Also the two parties should
know accurately the coefficients $\alpha$ and $\beta$ of the
partially entangled state $\alpha|01\rangle+\beta|10\rangle$
before entanglement concentration. That is, their protocol works
under the condition that the two users obtain information about
the coefficients and possess the collective and nondestructive
measurement technique. Another similar scheme is called
entanglement swapping \cite{swapping1,swapping2}. In these schemes
\cite{swapping1,swapping2}, two less entangled photon pairs
belong to Alice and Bob. Alice sends one of her particles to
Bob, and Bob performs a Bell-state measurement on one of his
particles and the one from Alice. Bob thus has to hold three
photons of the two pairs, and the parties have to perform
collective Bell-state measurements. Moreover, the parties should
exploit a two-particle collective unitary evolution of the
quantum system and an auxiliary particle to project the partially
entangled state into a maximally entangled one probabilistically.
Recently, two protocols of entanglement concentration based on a
polarization beam splitter (PBS) were proposed independently by
Yamamoto \emph{et al.} \cite{Yamamoto} and Zhao \emph{et al.}
\cite{zhao1}. The experimental demonstration of the latter has
been reported \cite{zhao2}. In these protocols, the parties exploit
two PBSs to complete the parity-check measurements of polarization
photons. However, each of the two users, Alice and Bob, has to
postselect the instances in which each of the spatial modes contains
exactly one photon. With current technology, sophisticated
single-photon detectors are not likely to be available, which
means that these schemes cannot be accomplished simply
with linear optical elements.
Cross-Kerr nonlinearity is a powerful tool to construct a
nondestructive quantum nondemolition detector (QND). It also has
the function of constructing a controlled-not (CNOT) gate and a
Bell-state analyzer \cite{QND1}. Cross-Kerr nonlinearity was
widely studied in the generation of qubits
\cite{qubit1,qubit2,qubit3} and the discrimination of unknown
optical qubits \cite{discriminator}. Cross-Kerr nonlinearities can
be described with the Hamiltonian $H_{ck}=\hbar\chi
a^{+}_{s}a_{s}a^{+}_{p}a_{p}$ \cite{QND1,QND2}, where $a^{+}_{s}$
and $a^{+}_{p}$ are the creation operators and $a_{s}$ and
$a_{p}$ are the annihilation operators. If we consider a coherent
beam in the state $|\alpha\rangle_{p}$ with a signal pulse in the
Fock state $|\Psi\rangle_s=c_{0}|0\rangle_{s}+c_{1}|1\rangle_{s}$
($|0\rangle_{s}$ and $|1\rangle_{s}$ denote that there are no
photons and one photon, respectively, in this state), after the
interaction with the cross-Kerr nonlinear medium the whole system
evolves as
\begin{eqnarray}
U_{ck}|\Psi\rangle_{s}|\alpha\rangle_{p}&=& e^{iH_{ck}t/\hbar}[c_{0}|0\rangle_{s}+c_{1}
|1\rangle_{s}]|\alpha\rangle_{p} \nonumber\\
&=& c_{0}|0\rangle_{s}|\alpha\rangle_{p}+c_{1}|1\rangle_{s}|
\alpha
e^{i\theta}\rangle_{p},
\end{eqnarray}
where $\theta=\chi t$ and $t$ is the interaction time. From this
equation, one sees that the coherent beam picks up a phase shift directly
proportional to the number of photons in the Fock state
$|\Psi\rangle_s$. This good feature can be used to construct a
parity-check measurement device \cite{QND1}.
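The phase-shift rule of the equation above can be summarized in a short numeric sketch (Python, illustrative only; `probe_after_kerr` is our hypothetical helper, not part of the paper): the probe amplitude $\alpha$ is multiplied by $e^{in\theta}$ when $n$ photons occupy the signal mode, so an $X$ homodyne measurement distinguishing the phases $0$, $\theta$ and $2\theta$ acts as a parity check.

```python
import numpy as np

theta = 0.1  # theta = chi * t, the per-photon cross-Kerr phase

def probe_after_kerr(alpha, n_photons):
    """Coherent-probe amplitude after the cross-Kerr interaction:
    |alpha> -> |alpha * exp(i * n * theta)> for n signal photons."""
    return alpha * np.exp(1j * n_photons * theta)

alpha = 2.0
assert probe_after_kerr(alpha, 0) == alpha                        # vacuum: no shift
assert np.isclose(np.angle(probe_after_kerr(alpha, 1)), theta)    # one photon: theta
assert np.isclose(np.angle(probe_after_kerr(alpha, 2)), 2 * theta)  # two photons: 2*theta
```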
In this paper, we present a different scheme for nonlocal
entanglement concentration of partially entangled multipartite
states with cross-Kerr nonlinearities. By exploiting a new
nondestructive QND, the parties of quantum communication can
accomplish entanglement concentration efficiently without
sophisticated single-photon detectors. Compared with the
entanglement concentration schemes based on linear optical
elements \cite{Yamamoto,zhao1}, the present scheme has a higher
efficiency and yield. Moreover, it does not require that the
parties know accurately information about the partially entangled
states---i.e., the coefficients of the states---different from
schemes based on two-particle collective unitary evaluation
\cite{C.H.Bennett2,swapping1,swapping2}. These good features give
this scheme the advantage of high efficiency and feasibility in
practical applications.
\begin{figure}
\caption{The principle of our nondestructive quantum nondemolition
detector (QND). Two cross-Kerr nonlinearities are used to
distinguish superpositions and mixtures of $|HH\rangle$ and
$|VV\rangle$ from $|HV\rangle$ and $|VH\rangle$. Each polarization
beam splitter (PBS) is used to pass through $|H\rangle$
polarization photons and reflect $|V\rangle$ polarization
photons. Cross-Kerr nonlinearity will cause the coherent beam to
pick up a phase shift $\theta$ if there is a photon in the mode.
So the probe beam $\vert \alpha \rangle$ will pick up a phase
shift of $\theta$ if the state is $|HH\rangle$ or $|VV\rangle$.
Here $b_1$ and $b_2$ represent the up spatial mode and the down
spatial mode, respectively.}
\end{figure}
\section{entanglement concentration of pure entangled photon pairs}
\subsection{Primary entanglement concentration of less entangled photon pairs }
\label{ecp}
The principle of our nondestructive QND is shown in Fig.1. It is
made up of four PBSs, two identical cross-Kerr nonlinear media,
and an $X$ homodyne measurement. If two polarization photons are
initially prepared in the states $|\varphi\rangle_{b_1}=c_{0}|H
\rangle_{b_1}+c_{1}|V\rangle_{b_1}$ and
$|\varphi\rangle_{b_2}=d_{0}|H\rangle_{b_2}+d_{1}|V\rangle_{b_2}$,
the two photons combined with a coherent beam whose initial state
is $\vert \alpha \rangle_p$ interact with cross-Kerr
nonlinearities, which will evolve the state of the composite
quantum system from the original one $\vert \Psi\rangle_{O}=\vert
\varphi\rangle_{b_1}\otimes \vert \varphi\rangle_{b_2}\otimes
\vert \alpha\rangle_{p}$ to
\begin{eqnarray}
|\Psi\rangle_{T} &=& [c_{0}d_{0}|HH\rangle +
c_{1}d_{1}|VV\rangle]|\alpha e^{i\theta}\rangle_{p} \nonumber\\
&+& c_{0}d_{1}|HV\rangle|\alpha
e^{i2\theta}\rangle_{p}+c_{1}d_{0}|VH\rangle|\alpha\rangle_{p}.
\end{eqnarray}
One can find immediately that $|HH\rangle$ and $|VV\rangle$
cause the coherent beam $\vert\alpha\rangle_p$ to pick up a phase
shift $\theta$, $|HV\rangle$ to pick up a phase shift $2\theta$,
and $|VH\rangle$ to pick up no phase shift. The different phase
shifts can be distinguished by a general homodyne-heterodyne
measurement ($X$ homodyne measurement). In this way, one can
distinguish $|HH\rangle$ and $|VV\rangle$ from $|HV\rangle$ and
$|VH\rangle$. This device is also called a two-qubit polarization
parity QND detector. Our QND shown in Fig.1 is a little different
from the one proposed by Nemoto and Munro \cite{QND1}. With the
QND in \cite{QND1}, the states $|HH\rangle$ and $|VV\rangle$ pick up no
phase shift. However, a vacuum state
(zero-photon state) also causes no phase shift on
the coherent beam, so one cannot distinguish whether two photons
or no photons pass through the two spatial modes. Our modified
QND can exactly check the number of photons when they have the same
parity.
\begin{figure}
\caption{Schematic diagram of the proposed entanglement
concentration protocol. Two identical pairs of less entangled
photons are sent to Alice and Bob from source 1 ($S_1$) and source
2 ($S_2$). The QND is a parity-checking device. The wave plates
$R_{45}$ and $R_{90}$ rotate the photon polarizations by
$45^{\circ}$ and $90^{\circ}$, respectively.}
\end{figure}
With the QND shown in Fig.1, the principle of our entanglement
concentration protocol is shown in Fig.2. Suppose there are two
identical photon pairs with less entanglement $a_1b_1$ and
$a_2b_2$. The photons $a$ belong to Alice and photons $b$ to Bob.
The photon pairs $a_1b_1$ and $a_2b_2$ are initially in the
following unknown polarization entangled states:
\begin{eqnarray}
|\Phi\rangle_{a_1b_1}=\alpha|H\rangle_{a_1}|H\rangle_{b_1}+\beta|V\rangle_{a_1}|V\rangle_{b_1},\nonumber\\
|\Phi\rangle_{a_2b_2}=\alpha|H\rangle_{a_2}|H\rangle_{b_2}+\beta|V\rangle_{a_2}|V\rangle_{b_2},
\end{eqnarray}
where $|\alpha|^{2}+|\beta|^{2}=1$. The original state of the four
photons can be written as
\begin{eqnarray}
|\Psi\rangle &\equiv& |\Phi\rangle_{a_1b_1}\otimes
|\Phi\rangle_{a_2b_2}=\alpha^{2}|H\rangle_{a_1}|H\rangle_{b_1}|H\rangle_{a_2}|H\rangle_{b_2}\nonumber\\
&+& \alpha\beta|H\rangle_{a_1}|H\rangle_{b_1}|V\rangle_{a_2}|V\rangle_{b_2}\nonumber\\
&+&\alpha\beta|V\rangle_{a_1}|V\rangle_{b_1}|H\rangle_{a_2}|H\rangle_{b_2}\nonumber\\
&+&
\beta^{2}|V\rangle_{a_1}|V\rangle_{b_1}|V\rangle_{a_2}|V\rangle_{b_2}.
\end{eqnarray}
After the two parties Alice and Bob rotate the polarization states
of their second photons $a_2$ and $b_2$ by $90^{\circ}$ with
half-wave plates (i.e., $R_{90}$ shown in Fig.2), the state of the
four photons can be written as
\begin{eqnarray}
|\Psi\rangle^{'} &=&\alpha^{2}|H\rangle_{a_1}|V\rangle_{a_3}|H\rangle_{b_1}|V\rangle_{b_3}\nonumber\\
&+& \alpha\beta|H\rangle_{a_1}|H\rangle_{a_3}|H\rangle_{b_1}|H\rangle_{b_3}\nonumber\\
&+&
\alpha\beta|V\rangle_{a_1}|V\rangle_{a_3}|V\rangle_{b_1}|V\rangle_{b_3}\nonumber\\
&+&
\beta^{2}|V\rangle_{a_1}|H\rangle_{a_3}|V\rangle_{b_1}|H\rangle_{b_3}.\label{staterotation}
\end{eqnarray}
Here $a_3$ ($b_3$) is used to label the photon $a_2$ ($b_2$) after
the half-wave plate $R_{90}$.
From Eq.(\ref{staterotation}), one can see that the terms
$|H\rangle_{a_1}|H\rangle_{a_3}|H\rangle_{b_1}|H\rangle_{b_3}$ and
$|V\rangle_{a_1}|V\rangle_{a_3}|V\rangle_{b_1}|V\rangle_{b_3}$
have the same coefficient of $\alpha\beta$, but the other two
terms are different. Now Bob lets the two photons $b_1$ and $b_3$
enter the QND. With his homodyne measurement, Bob may get one
of three different results: $|HH\rangle$ and $|VV\rangle$ lead to
a phase shift of $\theta$ on the coherent beam, $|HV\rangle$ leads
to a phase shift of $2\theta$, and $|VH\rangle$ leads to no
phase shift. If the homodyne measurement yields a phase shift of
$\theta$, Bob asks Alice to keep these two pairs; otherwise, both
pairs are discarded. After only this parity-check measurement,
state of the photons remaining becomes
\begin{eqnarray}
|\Psi\rangle^{''} &=& \frac{1}{\sqrt{2}}(|H\rangle_{a_1}|H\rangle_{a_3}|H\rangle_{b_1}|H\rangle_{b_3}\nonumber\\
&+& |V\rangle_{a_1}|V\rangle_{a_3}|V\rangle_{b_1}|V\rangle_{b_3}).
\label{maxstate}
\end{eqnarray}
The probability that Alice and Bob get the above state is
$P_{s_1}=2|\alpha\beta|^{2}$.
Now both pairs $a_1b_1$ and $a_3b_3$ are in the same
polarization. Alice and Bob use their $\lambda/4$-wave plates
$R_{45}$ to rotate the polarizations of the photons $a_3$ and $b_3$ by
$45^{\circ}$. This $45^{\circ}$ rotation is described by the
unitary transformation
\begin{eqnarray}
|H\rangle_{a_3} & \rightarrow & \frac{1}{\sqrt{2}}(|H\rangle_{a_3}+|V\rangle_{a_3}),\nonumber\\
|H\rangle_{b_3} & \rightarrow & \frac{1}{\sqrt{2}}(|H\rangle_{b_3}+|V\rangle_{b_3}),\nonumber\\
|V\rangle_{a_3} & \rightarrow & \frac{1}{\sqrt{2}}(|H\rangle_{a_3}-|V\rangle_{a_3}),\nonumber\\
|V\rangle_{b_3} & \rightarrow &
\frac{1}{\sqrt{2}}(|H\rangle_{b_3}-|V\rangle_{b_3}).
\end{eqnarray}
After the rotations, Eq. (\ref{maxstate}) will evolve into
\begin{eqnarray}
|\Psi\rangle^{'''}&=&\frac{1}{2\sqrt{2}}(|H\rangle_{a_1}|H\rangle_{b_1}+|V\rangle_{a_1}|V\rangle_{b_1})
(|H\rangle_{a_3}|H\rangle_{b_3}\nonumber\\
&+&
|V\rangle_{a_3}|V\rangle_{b_3})+\frac{1}{2\sqrt{2}}(|H\rangle_{a_1}|H\rangle_{b_1}-
|V\rangle_{a_1}|V\rangle_{b_1})\nonumber\\
&&
(|H\rangle_{a_3}|V\rangle_{b_3}+|V\rangle_{a_3}|H\rangle_{b_3}).\label{statedistinguish}
\end{eqnarray}
The last step is to distinguish the polarizations of the photons
$a_3$ and $b_3$. Two PBSs are used, which transmit
$|H\rangle$-polarized photons and reflect $|V\rangle$-polarized
ones. From Eq.(\ref{statedistinguish}), one can see that
if the two detectors $D_1$ and $D_2$ or the two detectors $D_3$
and $D_4$ fire, the photon pair $a_1b_1$ is left in the state
\begin{eqnarray}
|\phi^{+}\rangle_{a_1b_1}=\frac{1}{\sqrt{2}}(|H\rangle_{a_1}|H\rangle_{b_1}+|V\rangle_{a_1}|V\rangle_{b_1}).
\end{eqnarray}
If $D_1$ and $D_3$ or $D_2$ and $D_4$ fire, the photon pair
$a_1b_1$ is left in the state
\begin{eqnarray}
|\phi^{-}\rangle_{a_1b_1}=\frac{1}{\sqrt{2}}(|H\rangle_{a_1}|H\rangle_{b_1}-|V\rangle_{a_1}|V\rangle_{b_1}).
\end{eqnarray}
Both of these states are maximally entangled. To obtain the same
state $|\phi^{+}\rangle_{a_1b_1}$, one of the two parties, Alice or Bob,
need only perform a simple local phase rotation on her or his photon. The
maximally entangled states are thus generated with the above operations.
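The full two-pair protocol can be checked numerically. The sketch below is our own illustration, not part of the original scheme: it assumes ideal optical elements and an ideal QND projection, tracks the four-photon amplitudes through the $90^{\circ}$ rotations, the even-parity postselection and the $45^{\circ}$ rotations, and confirms the success probability $2|\alpha\beta|^2$ and the two Bell-state outcomes.

```python
import numpy as np

alpha, beta = 0.6, 0.8  # assumed Schmidt coefficients, alpha^2 + beta^2 = 1

# Four-photon amplitudes, axis order (a1, b1, a2, b2); H = 0, V = 1.
psi = np.zeros((2, 2, 2, 2))
psi[0, 0, 0, 0] = alpha**2
psi[0, 0, 1, 1] = alpha * beta
psi[1, 1, 0, 0] = alpha * beta
psi[1, 1, 1, 1] = beta**2

# 90-degree rotations (H <-> V) on the second photons a2, b2 -> a3, b3.
psi = np.flip(np.flip(psi, axis=2), axis=3)

# Ideal QND parity check on (b1, b3): keep the even-parity part (HH or VV).
kept = np.zeros_like(psi)
for b in range(2):
    kept[:, b, :, b] = psi[:, b, :, b]
p_success = np.sum(kept**2)          # postselection probability
psi = kept / np.sqrt(p_success)      # normalized state |Psi''>

# 45-degree rotations on a3 and b3: H -> (H+V)/sqrt(2), V -> (H-V)/sqrt(2).
R = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
psi = np.einsum('ca,db,ijab->ijcd', R, R, psi)

# Conditioning on the PBS outcomes for (a3, b3) leaves a1b1 in |phi+> or |phi->.
phi_HH = psi[:, :, 0, 0] / np.linalg.norm(psi[:, :, 0, 0])  # D1 & D2 fire
phi_HV = psi[:, :, 0, 1] / np.linalg.norm(psi[:, :, 0, 1])  # D1 & D3 fire
```

Conditioning on the other detector pairs gives the same two states, in agreement with Eq.~(\ref{statedistinguish}).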
In our scheme, only one QND is used to detect the parity of the
two polarization photons. If the two photons have the same
polarization, $|HH\rangle$ or $|VV\rangle$, the phase shift of the
coherent beam is $\theta$, which is easy to detect with the homodyne
measurement. Furthermore, our scheme does not require
sophisticated single-photon detectors, but only conventional
photon detectors. This is a good feature of our scheme compared
with other schemes.
\subsection{Reusing resource-based entanglement concentration of partially entangled photon pairs}
\label{iterationsection}
With only one QND, our entanglement concentration scheme has the same
efficiency as those based on linear optics \cite{Yamamoto,zhao1}.
The yield of maximally entangled states is $Y=|\alpha \beta|^2$.
Here the yield is defined as the ratio of the number of maximally
entangled photon pairs obtained, $N_m$, to the number of originally less
entangled photon pairs, $N_l$. That is, the yield of the scheme
discussed above is $Y_1=\frac{N_m}{N_l}=|\alpha \beta|^2$. In
fact, $Y_1$ is not the maximal yield achievable by the
entanglement concentration scheme with the QND.
In the entanglement concentration scheme above, the two parties
Alice and Bob only pick up the instances in which Bob gets the phase
shift $\theta$ on his coherent beam and remove the other
instances. In this way, the photon pairs kept are in the state
$\vert \Psi\rangle''$. However, if Bob chooses a suitable
cross-Kerr medium and controls the interaction time
$t$ accurately, he can make the phase shift $\theta=\chi t=\pi$. In this case,
$2\theta$ and $0$ represent the same phase shift $0$ (mod $2\pi$). The two
photon pairs removed by Alice and Bob in the scheme above are then
in the state
\begin{eqnarray}
|\Phi_1\rangle^{''} &=& \alpha^2
|H\rangle_{a_1}|V\rangle_{a_3}|H\rangle_{b_1}|V\rangle_{b_3}\nonumber\\
&+& \beta^2
|V\rangle_{a_1}|H\rangle_{a_3}|V\rangle_{b_1}|H\rangle_{b_3}.
\label{lessstate2}
\end{eqnarray}
This four-photon system is not in a maximally entangled state, but
it can still be used to produce maximally entangled states by
entanglement concentration. In detail, Alice and Bob apply a
rotation by $90^{\circ}$ to each photon of a second such four-photon
system, which transforms the state of this system into
\begin{eqnarray}
|\Phi_2\rangle^{''}&=& \beta^2
|H\rangle_{a'_1}|V\rangle_{a'_3}|H\rangle_{b'_1}|V\rangle_{b'_3}\nonumber\\
&+& \alpha^2
|V\rangle_{a'_1}|H\rangle_{a'_3}|V\rangle_{b'_1}|H\rangle_{b'_3}.
\label{lessstate3}
\end{eqnarray}
The state of the composite eight-photon system then
becomes
\begin{eqnarray}
|\Phi_s\rangle^{''} &\equiv& |\Phi_1\rangle^{''}\otimes
|\Phi_2\rangle^{''}\nonumber\\
&=&
\alpha^2\beta^2(|H\rangle_{a_1}|V\rangle_{a_3}|H\rangle_{b_1}|V\rangle_{b_3}
|H\rangle_{a'_1}|V\rangle_{a'_3}|H\rangle_{b'_1}|V\rangle_{b'_3}\nonumber\\
&+&
|V\rangle_{a_1}|H\rangle_{a_3}|V\rangle_{b_1}|H\rangle_{b_3}
|V\rangle_{a'_1}|H\rangle_{a'_3}|V\rangle_{b'_1}|H\rangle_{b'_3})\nonumber\\
&+& \alpha^4
|H\rangle_{a_1}|V\rangle_{a_3}|H\rangle_{b_1}|V\rangle_{b_3}
|V\rangle_{a'_1}|H\rangle_{a'_3}|V\rangle_{b'_1}|H\rangle_{b'_3}\nonumber\\
&+& \beta^4
|V\rangle_{a_1}|H\rangle_{a_3}|V\rangle_{b_1}|H\rangle_{b_3}
|H\rangle_{a'_1}|V\rangle_{a'_3}|H\rangle_{b'_1}|V\rangle_{b'_3}.\nonumber\\
\label{lessstate4}
\end{eqnarray}
To pick out the first two terms, Bob need only detect the
parity of the two photons $b_3$ and $b'_3$ with the QND. As the
two polarization photons $b_3$ and $b'_3$ in the first two terms
have the same polarization, they cause the coherent beam $\vert
\alpha \rangle_p$ to acquire a phase shift $\theta=\pi$, while those in the
other two terms leave the coherent beam $\vert \alpha \rangle_p$
with a phase shift $0$.
When Bob gets the phase shift $\theta=\pi$, the eight photons
collapse to the state
\begin{eqnarray}
|\Phi_s\rangle^{'''} &=&\frac{1}{\sqrt{2}}
(|H\rangle_{a_1}|V\rangle_{a_3}|H\rangle_{b_1}|V\rangle_{b_3}
|H\rangle_{a'_1}|V\rangle_{a'_3}|H\rangle_{b'_1}|V\rangle_{b'_3}\nonumber\\
&+& |V\rangle_{a_1}|H\rangle_{a_3}|V\rangle_{b_1}|H\rangle_{b_3}
|V\rangle_{a'_1}|H\rangle_{a'_3}|V\rangle_{b'_1}|H\rangle_{b'_3}).\nonumber\\
\label{lessstate5}
\end{eqnarray}
The probability that Alice and Bob get this state is
\begin{eqnarray}
P_{s_2}=\frac{2|\alpha\beta|^{4}}{(|\alpha|^4 + |\beta|^4)^2}.
\end{eqnarray}
They have the probability $P'_{f_2}=1 - P_{s_2}$ of obtaining the
less entangled state
\begin{eqnarray}
|\Phi_1\rangle^{'''} &=& \alpha^4
|H\rangle_{a_1}|V\rangle_{a_3}|H\rangle_{b_1}|V\rangle_{b_3}
|V\rangle_{a'_1}|H\rangle_{a'_3}|V\rangle_{b'_1}|H\rangle_{b'_3}\nonumber\\
&+& \beta^4
|V\rangle_{a_1}|H\rangle_{a_3}|V\rangle_{b_1}|H\rangle_{b_3}
|H\rangle_{a'_1}|V\rangle_{a'_3}|H\rangle_{b'_1}|V\rangle_{b'_3},\nonumber\\
\label{lessstate6}
\end{eqnarray}
which can be used to concentrate entanglement by iterating the
process discussed above. In this way, one easily obtains the
probability
\begin{eqnarray}
P_{s_n}=\frac{2|\alpha\beta|^{2^n}}{(|\alpha|^{2^n} +
|\beta|^{2^n})^2},
\end{eqnarray}
where $n$ is the iteration number of the entanglement
concentration processes.
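The closed-form probability $P_{s_n}$ can be verified against a direct recursion: after each failed round the surviving pairs carry squared, renormalized amplitudes. The sketch below is our own numerical check, with $\alpha=0.6$, $\beta=0.8$ as assumed values.

```python
import numpy as np

alpha, beta = 0.6, 0.8
a, b = alpha, beta                       # amplitudes of the current pairs
probs = []
for n in range(1, 7):
    # closed-form P_{s_n} = 2|alpha*beta|^(2^n) / (|alpha|^(2^n)+|beta|^(2^n))^2
    x, y = alpha**(2**n), beta**(2**n)
    p_closed = 2 * x * y / (x + y)**2
    p_step = 2 * (a * b)**2              # success probability of this round
    probs.append((p_step, p_closed))
    # on failure, the amplitudes are squared and renormalized
    a, b = a**2, b**2
    norm = np.hypot(a, b)
    a, b = a / norm, b / norm
```

For $n=1$ both expressions reduce to $2|\alpha\beta|^2$, and the recursion reproduces the closed form at every subsequent round.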
For the four photons in the state described by
Eq.(\ref{lessstate5}), Alice and Bob can obtain a maximally
entangled photon pair with some single-photon measurements on the
other six photons by choosing the basis $X=\{\vert \pm
x\rangle=\frac{1}{\sqrt{2}}(\vert H\rangle \pm \vert V\rangle)\}$.
That is, Alice and Bob first rotate their polarization photons
$a_3$, $b_3$, $a'_1$, $b'_1$, $a'_3$ and $b'_3$ by $45^{\circ}$,
similar to the case discussed above (shown in Fig.2), and then
measure these six photons. If the number of the antiparallel
outcomes obtained by Alice and Bob is even, the photon pair
$a_1b_1$ collapses to the state $\vert
\phi^+\rangle_{a_1b_1}=\frac{1}{\sqrt{2}}(\vert
H\rangle_{a_1}\vert H\rangle_{b_1} + \vert V\rangle_{a_1}\vert
V\rangle_{b_1})$; otherwise the photon pair $a_1b_1$ collapses to
the state $\vert \phi^-\rangle_{a_1b_1}=\frac{1}{\sqrt{2}}(\vert
H\rangle_{a_1}\vert H\rangle_{b_1} - \vert V\rangle_{a_1}\vert
V\rangle_{b_1})$.
With the iteration of the entanglement concentration process, the
yield of our scheme is improved to be $Y$---i.e.,
\begin{eqnarray}
Y &=& \sum_{i=1}^{n} Y_i,
\end{eqnarray}
where
\begin{eqnarray}
Y_1 &=& |\alpha\beta|^{2},\nonumber\\
Y_2 &=&
\frac{1}{2}(1-2|\alpha\beta|^2)\frac{|\alpha\beta|^4}{(|\alpha|^4
+
|\beta|^4)^2}\nonumber,\\
Y_3&=&
\frac{1}{2^2}(1-2|\alpha\beta|^2)[1-\frac{2|\alpha\beta|^4}{(|\alpha|^4
+ |\beta|^4)^2}]\frac{|\alpha\beta|^8}{(|\alpha|^8 +
|\beta|^8)^2},\nonumber\\
&& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \ldots \nonumber\\
Y_n &=& \frac{1}{2^{n-1}}
(1-2|\alpha\beta|^2)\left(\prod^{n-1}_{j=3}[1-\frac{2|\alpha\beta|^{2^{j-1}}}{(|\alpha|^{2^{j-1}}
+
|\beta|^{2^{j-1}})^2}]\right)\nonumber\\
&&\frac{|\alpha\beta|^{2^{n}}}{(|\alpha|^{2^n} +
|\beta|^{2^n})^2}.
\end{eqnarray}
Figure 3 shows the yield $Y$ as a function of the iteration number
$n$ of entanglement concentration processes and the coefficient
$\alpha \in[0,\frac{1}{\sqrt{2}}]$.
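The terms $Y_n$ admit a compact reformulation, $Y_n=\frac{1}{2^n}P_{s_n}\prod_{k<n}(1-P_{s_k})$, which reproduces $Y_1$, $Y_2$ and the general expression term by term. The sketch below is our own illustration; it evaluates the cumulative yield, using the ratio $t=(\beta/\alpha)^{2^n}$ to keep $P_{s_n}$ numerically stable, and checks that the yield grows with $n$ and, under this reformulation, saturates at $1/3$ for a maximally entangled input $\alpha=\beta=1/\sqrt{2}$ (to be compared with $Y_1=1/4$).

```python
import numpy as np

def p_s(n, alpha, beta):
    """Success probability of the n-th concentration round,
    2|ab|^(2^n)/(|a|^(2^n)+|b|^(2^n))^2, written via t = (b/a)^(2^n)."""
    t = (beta / alpha)**(2**n)
    return 2 * t / (1 + t)**2

def total_yield(alpha, beta, n_max):
    """Y = sum_n Y_n with Y_n = P_{s_n}/2^n * prod_{k<n} (1 - P_{s_k})."""
    Y, survive = 0.0, 1.0
    for n in range(1, n_max + 1):
        Y += survive * p_s(n, alpha, beta) / 2**n
        survive *= 1 - p_s(n, alpha, beta)
    return Y
```

For $n_{\max}=1$ this gives $P_{s_1}/2=|\alpha\beta|^2=Y_1$, and every additional iteration strictly increases the yield.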
\begin{figure}
\caption{(Color online) The yield $Y$ as a function of the
iteration number $n$ of entanglement concentration processes and the
coefficient $\alpha\in [0,\frac{1}{\sqrt{2}}]$.}
\end{figure}
Certainly, Alice and Bob can also accomplish the iteration of the
entanglement concentration by first measuring the two photons
$a_3$ and $b_3$ of the state $|\Phi_1\rangle^{''}$ described by
Eq. (\ref{lessstate2}) in the basis $X$ and then
concentrating maximally entangled states from the partially
entangled quantum systems composed of the pairs $a_1b_1$. In fact,
after the measurements of the two photons in the basis $X$,
Alice and Bob can transfer the state of the photon pair $a_1b_1$ to
$\alpha^2 \vert H\rangle_{a_1}\vert H\rangle_{b_1} + \beta^2 \vert
V\rangle_{a_1}\vert V\rangle_{b_1}$ (up to normalization), with or without a local unitary
operation. Alice and Bob can then accomplish the entanglement
concentration in the same way as discussed in Sec. \ref{ecp}.
As in the entanglement concentration schemes with linear
optical elements \cite{Yamamoto,zhao1}, the present scheme has the
advantage that the two parties of quantum communication are not
required to know the coefficients of the less entangled states in
advance in order to reconstruct maximally entangled states.
Moreover, this scheme does not require sophisticated single-photon
detectors and has a higher yield of maximally entangled states
than those based on linear optical elements \cite{Yamamoto,zhao1},
as the yield of the latter is just $|\alpha\beta|^2$ [the
probability that Alice and Bob get an Einstein-Podolsky-Rosen
(EPR) pair from two partially entangled photon pairs is
$2|\alpha\beta|^2$ in Refs. \cite{Yamamoto,zhao1}]. These good
features make the present entanglement concentration scheme more
efficient and more convenient in practical applications than
others.
\section{entanglement concentration of less entangled multipartite GHZ-class states}
It is straightforward to generalize our entanglement concentration
scheme to reconstruct maximally entangled multipartite GHZ states
from partially entangled GHZ-class states.
Suppose the partially entangled $N$-particle GHZ-class states are
described as follows:
\begin{eqnarray}
|\Phi'^{+}\rangle=\alpha|HH\cdots
H\rangle+\beta|VV\cdots V\rangle,
\end{eqnarray}
where $|\alpha|^{2}+|\beta|^{2}=1$. For two GHZ-class states, the
composite state can be written as
\begin{eqnarray}
|\Psi'\rangle &=& |\Phi'^+\rangle_{1}\otimes|\Phi'^+\rangle_{2}=(\alpha|H\rangle_{1}|H\rangle_{2}\cdots
|H\rangle_{N}\nonumber\\
&&+\beta|V\rangle_{1}|V\rangle_{2}\cdots|V\rangle_{N})\otimes\nonumber\\
&&(\alpha|H\rangle_{N+1}|H\rangle_{N+2}\cdots|H\rangle_{2N}\nonumber\\
&&+\beta|V\rangle_{N+1}|V\rangle_{N+2}\cdots|V\rangle_{2N}).
\end{eqnarray}
\begin{figure}
\caption{Schematic diagram of the multipartite entanglement
concentration scheme. $2N$ particles in two partially entangled
$N$-particle GHZ-class states are sent to $N$ parties of quantum
communication---say Alice, Bob, Charlie, etc. Photons $2$ and
$N+2$ are sent to Bob and enter the QND to complete a
parity-check measurement. After the QND measurement, Bob asks the
others to retain their photons if his two photons have the same
parity ($|HH\rangle$ or $|VV\rangle$) and to remove them for the next
iteration if he gets an odd parity ($|HV\rangle$ or
$|VH\rangle$).}
\end{figure}
The principle of our entanglement concentration scheme for
multipartite GHZ-class states is shown in Fig.4. $2N$ photons in
two partially entangled $N$-particle GHZ-class states
are sent to Alice, Bob, Charlie, etc.\ (i.e., the $N$ parties of
quantum communication). Each party gets two photons: one comes
from the state $|\Phi'^+\rangle_{1}$ and the other from
$|\Phi'^+\rangle_{2}$, as shown in Fig.4. Suppose Alice gets photons 1
and $N+1$ and Bob gets photons $2$ and $N+2$.
Before the entanglement concentration, each party rotates his second
polarization photon by $90^{\circ}$, as in the case of
concentrating two-photon pairs. After the $90^{\circ}$ rotations,
the state of the $2N$ photons becomes
\begin{eqnarray}
|\Psi'\rangle' &=& \alpha^{2}|H\rangle_{1}|H\rangle_{2}\cdots|H\rangle_{N}|V\rangle_{N+1}
|V\rangle_{N+2}\cdots|V\rangle_{2N}\nonumber\\
&+&\alpha\beta|H\rangle_{1}|H\rangle_{2}\cdots|H\rangle_{N}|H\rangle_{N+1}|H\rangle_{N+2}\cdots
|H\rangle_{2N}\nonumber\\
&+&\alpha\beta|V\rangle_{1}|V\rangle_{2}\cdots|V\rangle_{N}|V\rangle_{N+1}|V\rangle_{N+2}\cdots
|V\rangle_{2N}\nonumber\\
&+&\beta^{2}|V\rangle_{1}|V\rangle_{2}\cdots|V\rangle_{N}|H\rangle_{N+1}|H\rangle_{N+2}\cdots
|H\rangle_{2N}.\nonumber\\
\end{eqnarray}
Bob lets photons $2$ and $N+2$ pass through his QND detector,
whose principle is shown in Fig.2. For $|HH\rangle$ and
$|VV\rangle$, Bob's $X$ homodyne measurement gives the phase shift
$\theta$; for $|HV\rangle$, the result is $2\theta$, and
$|VH\rangle$ produces no phase shift. If he obtains the phase shift
$\theta$, Bob asks the others to retain their photons; otherwise,
all the parties remove their photons. In this way, the whole state
of the retained photons can be described as
\begin{eqnarray}
|\Psi'\rangle'' &=& \frac{1}{\sqrt{2}}(|H\rangle_{1}|H\rangle_{2}\cdots|H\rangle_{N}
|H\rangle_{N+1}|H\rangle_{N+2}\cdots|H\rangle_{2N}\nonumber\\
&&+|V\rangle_{1}|V\rangle_{2}\cdots|V\rangle_{N}|V\rangle_{N+1}|V\rangle_{N+2}\cdots
|V\rangle_{2N}).\nonumber\\
\end{eqnarray}
The success probability is $2|\alpha\beta|^2$, the same as
$P_{s_1}$ for two-photon pairs. The above state is a maximally
entangled $2N$-particle state. By measuring each of the photons
coming from the second GHZ-class state in the basis $X$, the
parties will obtain a maximally entangled $N$-particle state.
Indeed, after the photons $N+1$, $N+2$, $\ldots$, and $2N$ pass through
the $R_{45}$ plates, which rotate the polarizations of the photons by
$45^{\circ}$, the state of the composite system becomes
\begin{eqnarray}
|\Psi'\rangle''' &=&
\frac{1}{\sqrt{2}}[|H\rangle_{1}|H\rangle_{2}\cdots|H\rangle_{N}
\left(\frac{1}{\sqrt{2}}\right)^{N}(|H\rangle+|V\rangle)^{\otimes N}\nonumber\\
&+&|V\rangle_{1}|V\rangle_{2}\cdots|V\rangle_{N}
\left(\frac{1}{\sqrt{2}}\right)^{N}(|H\rangle-|V\rangle)^{\otimes N}].\nonumber\\
\end{eqnarray}
By measuring these $N$ photons with the conventional photon
detectors, the $N$ parties will obtain the maximally entangled state
$\vert GHZ^+\rangle_{12\cdots N}$ if the number of parties who
obtain a single-photon measurement outcome $\vert V\rangle$ is
even; otherwise, they will obtain the maximally entangled state
$\vert GHZ^-\rangle_{12\cdots N}$. Here
\begin{eqnarray}
|GHZ^+\rangle &=&
\frac{1}{\sqrt{2}}(|H\rangle_{1}|H\rangle_{2}\cdots|H\rangle_{N}+
|V\rangle_{1}
|V\rangle_{2}\cdots|V\rangle_{N}),\nonumber\\
\end{eqnarray}
and
\begin{eqnarray}
|GHZ^-\rangle &=&
\frac{1}{\sqrt{2}}(|H\rangle_{1}|H\rangle_{2}\cdots|H\rangle_{N}-
|V\rangle_{1}
|V\rangle_{2}\cdots|V\rangle_{N}).\nonumber\\
\end{eqnarray}
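The $N$-independence of the success probability can be verified numerically. The sketch below is our own check, assuming an ideal parity projection (H = 0, V = 1): it builds the $2N$-photon state, applies the $90^{\circ}$ rotations and the parity projection on photons $2$ and $N+2$, and confirms that the retained state is the maximally entangled $2N$-particle state with probability $2|\alpha\beta|^2$.

```python
import numpy as np

def concentrate(N, alpha, beta):
    """Parity-check step for two N-photon GHZ-class states."""
    ghz = np.zeros((2,) * N)
    ghz[(0,) * N], ghz[(1,) * N] = alpha, beta
    psi = np.tensordot(ghz, ghz, axes=0)      # photons 1..N and N+1..2N
    # 90-degree rotation (H <-> V) on every photon of the second state
    for q in range(N, 2 * N):
        psi = np.flip(psi, axis=q)
    # keep the even-parity part for photons 2 and N+2 (axes 1 and N+1)
    idx = np.indices(psi.shape)
    kept = np.where(idx[1] == idx[N + 1], psi, 0.0)
    p = np.sum(kept**2)                       # success probability
    return p, kept / np.sqrt(p)               # probability, retained state

alpha, beta = 0.6, 0.8
p3, post3 = concentrate(3, alpha, beta)
```

The same probability $2|\alpha\beta|^2$ is obtained for any $N$, in agreement with the two-pair case.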
For the photons removed by the parties, the method discussed in
Sec. \ref{iterationsection} also works to improve the
efficiency of a successful concentration of GHZ-class states and
the yield. In this case, one need only replace $\vert HH\rangle$
and $\vert VV\rangle$ in Sec. \ref{iterationsection} with $\vert
HH\cdots H\rangle$ and $\vert VV\cdots V\rangle$, respectively.
\section{discussion and summary}
Compared with the entanglement concentration schemes
\cite{C.H.Bennett2,swapping1,swapping2} based on evolving the composite
system and an auxiliary particle, the present scheme does not
require the parties of quantum communication to know accurate
information about the less entangled states. This good
feature makes the present scheme more efficient than those in
Refs. \cite{C.H.Bennett2,swapping1,swapping2}, as the decoherence
of entangled quantum systems depends on the noise of the quantum
channels or the interaction with the environment, which leaves the
two parties blind to the information about the state. With
sophisticated single-photon detectors, the entanglement concentration
schemes \cite{Yamamoto,zhao1} with linear optical elements are
efficient for concentrating some partially entangled states. With
the development of technology, sophisticated single-photon
detectors may become available in the future, even though they are far
beyond what is experimentally feasible at present. Cross-Kerr
nonlinearity provides a good QND with which a parity-check
measurement can be accomplished perfectly in principle
\cite{QND1}. With the QND, our entanglement concentration scheme
has a higher efficiency and yield than those with linear optical
elements \cite{Yamamoto,zhao1}.
In summary, we have proposed a different scheme for the nonlocal
entanglement concentration of partially entangled multipartite
states. We exploit cross-Kerr nonlinearities to distinguish the
parity of two polarization photons. Compared with other
entanglement concentration schemes, this scheme requires neither a
collective measurement nor knowledge by the parties of
quantum communication of the coefficients $\alpha$ and
$\beta$ of the less entangled states. This advantage gives our
scheme the capability of distilling arbitrary multipartite
GHZ-class states. Moreover, it does not require the parties to
adopt sophisticated single-photon detectors, which makes this
scheme feasible with present techniques. By iterating the
entanglement concentration processes, this scheme achieves a higher
efficiency than those with linear optical elements. All these
advantages make this scheme more convenient in practical
applications than others.
\end{document}
\begin{document}
\title{Compact equations for the envelope theory}
\author{Lorenzo \surname{Cimino}}
\email[E-mail: ]{lorenzo.cimino@umons.ac.be}
\thanks{ORCiD: 0000-0002-6286-0722}
\author{Claude \surname{Semay}}
\email[E-mail: ]{claude.semay@umons.ac.be}
\thanks{ORCiD: 0000-0001-6841-9850}
\affiliation{Service de Physique Nucl\'{e}aire et Subnucl\'{e}aire,
Universit\'{e} de Mons,
UMONS Research Institute for Complex Systems,
Place du Parc 20, 7000 Mons, Belgium}
\date{\today}
\begin{abstract}
The envelope theory is a method to easily obtain approximate, but reliable, solutions for some quantum many-body problems. Quite general Hamiltonians can be considered for systems composed of an arbitrary number of different particles in $D$ dimensions. In the case of identical particles, a compact set of 3 equations can be written to find the eigensolutions. This set also provides a nice interpretation and a starting point to improve the method. It is shown here that a similar set of 7 equations can be determined for a system containing arbitrary numbers of two different types of particles.
\keywords{Envelope theory; Many-body quantum systems; Approximation methods}
\end{abstract}
\maketitle
\section{introduction}
\label{sec:intro}
The envelope theory (ET) \cite{hall80,hall83,hall04} is a technique to compute approximate eigenvalues and eigenvectors of $N$-body systems. This method, first developed for systems with identical particles, has been extended to treat non-standard kinematics in $D$ dimensions in \cite{sema13,sema19}, and it has been recently generalized to systems with different particles \cite{sema20}. The main advantage of this method is that its computational cost is independent of the number of particles. Quite general Hamiltonians can be considered, and the approximate eigenvalues are lower or upper bounds in favorable cases. The method relies on the existence of an exact solution for the $N$-body harmonic oscillator Hamiltonian \cite{hall79,cint01}. The accuracy of the method has been checked for various three-dimensional systems \cite{sema15a} and one-dimensional systems containing up to 100 bosons \cite{sema19}.
It is worth noting that the ET was rediscovered in 2008 under the name of the auxiliary field method, following an approach different from the one used by Hall \cite{hall80,hall83,hall04}. It was later recognized that both methods are actually completely equivalent. This story is told in \cite{silv12}, where a lot of information about this approximation method is gathered.
The ET has been used to obtain physical results about hadronic systems, as in \cite{sema09}, and is especially useful when the number of particles can be arbitrarily large, as in the large-$N$ formulation of QCD \cite{buis11,buis12}. The method has allowed the study of a possible quasi Kepler's third law for quantum many-body systems \cite{sema21}. It can also simply be used to test accurate numerical calculations, as in \cite{char15}.
Let us consider the $N$-body Hamiltonian
\begin{equation}\label{trueH}
H=\sum_{i=1}^N T_i(p_i) + \sum_{i<j=2}^N V_{ij}(r_{ij}),
\end{equation}
where $T_i$ is an arbitrary kinetic energy with some constraints \cite{sema18a} and $V_{ij}$ is a two-body central potential. We also define $p_i=\abs{\bm{p}_i}$ and $r_{ij}=\abs{\bm{r}_i-\bm{r}_j}$, where $\bm{r}_i$ and $\bm{p}_i$ are respectively the position and the momentum of the $i$th particle. It is assumed in the following that we are always working in the centre of mass (CM) frame, $\bm{P} = \sum_{i=1}^N \bm{p}_i=\bm{0}$, and with natural units ($\hbar=c=1$).
As explained in \cite{silv10,sema20}, in the framework of the ET, Hamiltonian (\ref{trueH}) is replaced by an auxiliary Hamiltonian (it is the origin of the other name of the method)
\begin{equation}\label{auxH}
\tilde{H}(\{\alpha\})=\sum_{i=1}^N \left[\frac{\bm{p}_i^2}{2\mu_i}+T_i(G_i(\mu_i))-\dfrac{G_i^2(\mu_i)}{2\mu_i}\right] + \sum_{i<j=2}^N\left[ \rho_{ij}\bm{r}_{ij}^2+V_{ij}(J_{ij}(\rho_{ij}))-\rho_{ij}J_{ij}^2(\rho_{ij})\right],
\end{equation}
where $\{\alpha\} = \{\{\mu_i\},\{\rho_{ij}\}\}$ is a set of auxiliary parameters to determine later, and where the auxiliary functions $G_i$ and $J_{ij}$ are such that
\begin{equation}\label{deffunc}
\begin{array}{cc}
T'_i(G_i(x))-\dfrac{G_i(x)}{x}=0,\\[0.5cm]
V'_{ij}(J_{ij}(x))-2xJ_{ij}(x)=0,
\end{array}
\end{equation}
where $U'(x)=dU(x)/dx$. It is useful to write Hamiltonian (\ref{auxH}) in the form
\begin{equation}
\tilde{H}(\{\alpha\}) = H_\text{ho}(\{\alpha\})+B(\{\alpha\}),
\end{equation}
where $H_\text{ho}$ is the harmonic oscillator part and $B$ is a function obtained by subtracting the harmonic oscillator contributions from (\ref{auxH}).
An eigenvalue of (\ref{auxH}) is given by
\begin{equation}\label{energy}
\tilde{E}(\{\alpha\})=E_\text{ho}(\{\alpha\})+B(\{\alpha\}),
\end{equation}
where $E_\text{ho}$ is an eigenvalue of $H_\text{ho}$. A procedure in \cite{silv10,sema20} explains how to compute $E_\text{ho}$ but an example will be given below. An eigenvalue $\tilde{E}$ also depends on the set of parameters $\{\alpha\}=\{\{\mu_i\},\{\rho_{ij}\}\}$. The principle of the method is to search for the set of parameters $\{\alpha_0\}=\{\{\mu_{i0}\},\{\rho_{ij0}\}\}$ such that
\begin{equation}\label{mineq}
\frac{\partial\tilde{E}}{\partial\mu_i}\biggr\rvert_{\{\alpha_0\}}=\frac{\partial\tilde{E}}{\partial\rho_{ij}}\biggr\rvert_{\{\alpha_0\}}=0 \hspace{5 mm} \forall\ i,j.
\end{equation}
Equations (\ref{mineq}) can be easily implemented and the solutions $\{\alpha_0\}$ are easily found since we only need to find an extremum \cite{sema20}. After solving (\ref{mineq}), we obtain the desired approximate energy by substituting the set $\{\alpha_0\}$ back into (\ref{energy}), $\tilde{E}(\{\alpha_0\})=\tilde{E}_0$.
In the case of identical particles, it has been shown \cite{sema13,sema19} that the eigenvalue $\tilde{E}_0$ can equivalently be found by using a set of three compact equations
\begin{subequations}\label{compacteq}
\begin{equation}\label{compacteq1}
\tilde{E}_0 = N\,T(p_0)+C^2_N\,V(\rho_0),
\end{equation}
\begin{equation}\label{compacteq3}
N\,T'(p_0)\,p_0=C^2_N\,V'(\rho_0)\,\rho_0,
\end{equation}
\begin{equation}\label{compacteq2}
Q(N) = \sqrt{C^2_N}\,p_0\,\rho_0,
\end{equation}
\end{subequations}
where $C^2_N=N(N-1)/2$ is the number of pairs, and where $p_0^2=\bk{\bm{p}_i^2}$ and $\rho_0^2 = \bk{\bm{r}_{ij}^2} \hspace{2mm} \forall\ i,j$. The mean values are taken with an eigenstate of the auxiliary Hamiltonian corresponding to the global quantum number $Q(N)$ for the set $\{\alpha_0\}$ ensuring the constraints (\ref{mineq}). The eigenstate is also completely (anti)symmetric for the exchange between particles. The global quantum number is given by
\begin{equation}
Q(N)=
\begin{cases}
\sum\limits_{i=1}^{N-1} \left(2n_i+l_i+\frac{D}{2}\right) & \text{ if }D\geq2\\[15pt]
\sum\limits_{i=1}^{N-1} \left(n_i+\frac{1}{2}\right) & \text{ if }D=1
\end{cases},
\end{equation}
where the quantum numbers $\{n_i,l_i\}$ are associated with the internal Jacobi variables. Some values of $Q(N)$ for the bosonic and fermionic ground states are given in \cite{sema20,sema19}. In previous papers \cite{sema13,sema19}, the variable $r_0^2 = N^2 \bk{\left(\bm{r}_i-\bm{R}\right)^2}$, where $\bm{R}$ is the CM position, was used instead of $\rho_0$ because one-body and two-body potentials are treated together.
These equations are called compact because all the relevant variables appear in 3 equations giving the definition of the energy (\ref{compacteq1}), the equation of motion (\ref{compacteq3}) and the rule for the quantization (\ref{compacteq2}). Moreover, the uninteresting auxiliary parameters and functions are not present. Equations (\ref{compacteq}) can also be easily implemented and solved. There are good reasons to prefer the compact equations (\ref{compacteq}) over the ``extremization'' equations (\ref{mineq}). First, the quantities $p_0$ and $\rho_0$ give direct access to more interesting expectation values than $\{\alpha_0\}$. Secondly, these equations have a nice semiclassical interpretation as explained in \cite{sema13}. Thirdly, it is possible to improve the ET with the dominantly orbital state method starting from these equations \cite{sema15b}, which is the main motivation to write these equations. As the improvement obtained can be significant in some cases, it is worth generalizing it beyond systems of identical particles. In the following section, we will present the compact equations for a system composed of two different sets of $N_a$ and $N_b$ identical particles.
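As an illustration of how Eqs.~(\ref{compacteq}) are used in practice, the sketch below (our own numerical example, not taken from the references) solves them for $N$ nonrelativistic bosons with pairwise harmonic interactions, $T(p)=p^2/2m$ and $V(r)=kr^2$, a case where the ET is exact, and compares the result with the exact harmonic-oscillator energy $Q(N)\sqrt{2kN/m}$.

```python
import numpy as np

# N nonrelativistic bosons of mass m, pairwise potential V(r) = k r^2, D = 3
# (illustrative parameter values).
N, m, k, D = 5, 1.0, 0.5, 3
C2 = N * (N - 1) / 2                  # number of pairs C_N^2
Q = (N - 1) * D / 2                   # bosonic ground-state quantum number

def residual(p0):
    """Equation of motion N T'(p0) p0 - C_N^2 V'(rho0) rho0, with rho0
    eliminated through the quantization rule Q(N) = sqrt(C_N^2) p0 rho0."""
    rho0 = Q / (np.sqrt(C2) * p0)
    return N * p0**2 / m - 2 * k * C2 * rho0**2

# The residual is increasing in p0, so a simple bisection suffices.
lo, hi = 1e-6, 1e3
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid) < 0 else (lo, mid)
p0 = 0.5 * (lo + hi)
rho0 = Q / (np.sqrt(C2) * p0)
E = N * p0**2 / (2 * m) + C2 * k * rho0**2      # definition of the energy
E_exact = Q * np.sqrt(2 * k * N / m)            # exact N-body HO energy
```

For non-harmonic potentials, the same three-equation structure applies; only the closed-form comparison is specific to this exactly solvable case.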
\section{$\bm{N_a + N_b}$ systems}
\label{sec:nanb}
Let us specify the auxiliary Hamiltonian (\ref{auxH}) for this system. The harmonic oscillator Hamiltonian for a system of $N_a$ particles of type $a$ and $N_b$ particles of type $b$ is given by
\begin{equation}\label{ho}
H_\text{ho}=\sum_{i=1}^{N_a} \frac{\bm{p}_i^2}{2\mu_a}+\sum_{j=1}^{N_b} \frac{\bm{p}_j^2}{2\mu_b}+\sum_{i<i'=2}^{N_a}\rho_{aa}\bm{r}_{ii'}^2+\sum_{j<j'=2}^{N_b}\rho_{bb}\bm{r}_{jj'}^2+\sum_{i=1}^{N_a}\sum_{j=1}^{N_b}\rho_{ab}\bm{r}_{ij}^2.
\end{equation}
In the following, letters $i\text{ }(j)$ are reserved for particles of type $a\text{ }(b)$. As explained in \cite{sema20,hall79}, it is useful to write (\ref{ho}) in the form
\begin{subequations}\label{ho2}
\begin{equation}\label{ho20}
H_\text{ho}=H_a+H_b+H_\text{CM} \hspace{1cm} \text{with}
\end{equation}
\begin{equation}\label{ho21}
H_a = \sum_{i=1}^{N_a}{\frac{\bm{p}_i^2}{2\mu_a}}-\frac{\bm{P}_a^2}{2M_a}+\sum_{i<i'=2}^{N_a}\left(\rho_{aa}+\frac{N_b}{N_a}\rho_{ab}\right)\bm{r}_{ii'}^2,
\end{equation}
\begin{equation}\label{ho22}
H_b = \sum_{j=1}^{N_b}{\frac{\bm{p}_j^2}{2\mu_b}}-\frac{\bm{P}_b^2}{2M_b}+\sum_{j<j'=2}^{N_b}\left(\rho_{bb}+\frac{N_a}{N_b}\rho_{ab}\right)\bm{r}_{jj'}^2,
\end{equation}
\begin{equation} \label{ho23}
H_\text{CM} = \frac{\bm{p}^2}{2\mu}+N_aN_b\rho_{ab}\bm{r}^2,
\end{equation}
\end{subequations}
where $\bm{P}_\alpha$ and $M_\alpha=N_\alpha\,\mu_\alpha$ are the total momentum and mass for the set $\alpha \in \{a,b\}$, $\mu=\frac{M_aM_b}{M_a+M_b}$ is a reduced mass, and $\bm{p}=\frac{M_b\bm{P}_a-M_a\bm{P}_b}{M_a+M_b}$ and $\bm{r}=\bm{R}_a-\bm{R}_b$ are the relative momentum and position between the CM of the two sets, respectively. The three parts of (\ref{ho20}) are entirely decoupled since (\ref{ho21}) and (\ref{ho22}) depend on the internal coordinates of their respective sets, and (\ref{ho23}) on the relative coordinates between the two CM.
Then, an eigenvalue $E_\text{ho}$ is easily obtained since (\ref{ho2}) is composed of three decoupled parts \cite{sema20}
\begin{equation}\label{enho}
E_\text{ho}=Q(N_a)\sqrt{\frac{2}{\mu_a}(N_a\rho_{aa}+N_b\rho_{ab})}+Q(N_b)\sqrt{\frac{2}{\mu_b}(N_b\rho_{bb}+N_a\rho_{ab})}+Q(2)\sqrt{\frac{2}{\mu}N_aN_b\rho_{ab}}.
\end{equation}
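A quick consistency check on Eq.~(\ref{enho}): for bosonic ground states with equal masses and equal spring constants, it must reduce to the one-species formula $Q(N)\sqrt{2N\rho/\mu}$ with $N=N_a+N_b$, since $Q(N_a)+Q(N_b)+Q(2)=Q(N_a+N_b)$ when $Q(N)=(N-1)D/2$. A minimal numerical sketch, with parameter values assumed for illustration:

```python
import numpy as np

def Q_boson(N, D=3):
    """Bosonic ground-state global quantum number, Q(N) = (N - 1) D / 2."""
    return (N - 1) * D / 2

def E_ho_ab(Na, Nb, mu_a, mu_b, rho_aa, rho_bb, rho_ab, D=3):
    """Ground-state harmonic-oscillator eigenvalue for Na + Nb bosons."""
    mu = (Na * mu_a) * (Nb * mu_b) / (Na * mu_a + Nb * mu_b)
    return (Q_boson(Na, D) * np.sqrt(2 / mu_a * (Na * rho_aa + Nb * rho_ab))
            + Q_boson(Nb, D) * np.sqrt(2 / mu_b * (Nb * rho_bb + Na * rho_ab))
            + Q_boson(2, D) * np.sqrt(2 / mu * Na * Nb * rho_ab))

# identical masses and spring constants must reproduce the one-species formula
Na, Nb, mu0, rho = 3, 4, 1.3, 0.7
E = E_ho_ab(Na, Nb, mu0, mu0, rho, rho, rho)
E_id = Q_boson(Na + Nb) * np.sqrt(2 * (Na + Nb) * rho / mu0)
```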
To be complete, the expression of the function $B(\{\alpha\})$ is given by
\begin{equation}\label{B}
\begin{aligned}
B & = N_a\left[T_a(G_a(\mu_a))-\frac{G^2_a(\mu_a)}{2\mu_a}\right]+C^2_{N_a}\left[V_{aa}(J_{aa}(\rho_{aa}))-\rho_{aa}J^2_{aa}(\rho_{aa})\right]\\
& +N_b\left[T_b(G_b(\mu_b))-\frac{G^2_b(\mu_b)}{2\mu_b}\right]+C^2_{N_b}\left[V_{bb}(J_{bb}(\rho_{bb}))-\rho_{bb}J^2_{bb}(\rho_{bb})\right] \\
& +N_aN_b\left[V_{ab}(J_{ab}(\rho_{ab}))-\rho_{ab}J^2_{ab}(\rho_{ab})\right].
\end{aligned}
\end{equation}
When combining (\ref{ho2}) and (\ref{B}), we can see that our auxiliary Hamiltonian (\ref{auxH}) is also composed of three distinct parts: one for the particles of type $a$, another for the particles of type $b$ and a last one for the relative motion between the two sets.
The compact equations can then be established in a similar way as done for identical particles \cite{silv12}. First, we apply the Hellmann-Feynman theorem \cite{hell} on Hamiltonian (\ref{auxH}) to evaluate extremization conditions (\ref{mineq}). By using definitions (\ref{deffunc}) we get the following results
\begin{equation}\label{hf}
\begin{array}{lllll}
& G_a^2(\mu_{a0})=p_a^2+\frac{P_0^2}{N_a^2} = {p^\prime_a}^2, \\[0.3cm]
& G_b^2(\mu_{b0})=p_b^2+\frac{P_0^2}{N_b^2} = {p^\prime_b}^2, \\[0.3cm]
& J^2_{aa}(\rho_{aa0})=r_{aa}^2, \\[0.3cm]
& J^2_{bb}(\rho_{bb0})=r_{bb}^2, \\[0.3cm]
& J^2_{ab}(\rho_{ab0})=\frac{N_a-1}{2N_a}r_{aa}^2+\frac{N_b-1}{2N_b}r_{bb}^2+R_0^2= {r^\prime_0}^2,
\end{array}
\end{equation}
where we have defined the six physical parameters
\begin{equation}
\begin{array}{lll}
& p_a^2=\bk{\bm{p}_i^2-\frac{\bm{P}_a^2}{N_a^2}}\text{ and } p_b^2=\bk{\bm{p}_j^2-\frac{\bm{P}_b^2}{N_b^2}},\\[0.3cm]
& r_{aa}^2= \bk{\bm{r}_{ii'}^2}\text{ and } r_{bb}^2= \bk{\bm{r}_{jj'}^2},\\[0.3cm]
& P_0^2=\bk{\bm{p}^2}\text{ and } R_0^2=\bk{\bm{r}^2}.
\end{array}
\end{equation}
The mean values are taken with an eigenstate of the auxiliary Hamiltonian corresponding to the quantum numbers $Q(N_a)$, $Q(N_b)$ and $Q(2)$ for the set $\{\alpha_0\}$ insuring the constraints (\ref{mineq}). The eigenstate is also completely (anti)symmetric for the exchange between the $N_a$ or the $N_b$ particles.
Then, by evaluating $\tilde{E}_0=\bk{\tilde{H}(\{\alpha_0\})}$ and using results (\ref{hf}), we obtain the following equation for the energy
\begin{equation}\label{eqen}
\tilde{E}_0=N_aT_a\left(p'_a\right)+N_bT_b\left(p'_b\right)+C^2_{N_a}V_{aa}\left(r_{aa}\right)+C^2_{N_b}V_{bb}\left(r_{bb}\right)+N_aN_bV_{ab}\left(r_0'\right).
\end{equation}
It is interesting to look at the meaning of the linear combinations $p'_a$, $p'_b$ and $r'_0$ since they appear in (\ref{eqen}). As shown in \cite{sema20}, we can derive a similar equation for the energy by using the form (\ref{ho}), instead of (\ref{ho2}), of the harmonic oscillator. By comparing, one can identify ${p'_a}^2 = \bk{\bm{p}_i^2}$, ${p'_b}^2=\bk{\bm{p}_j^2}$ and ${r'_0}^2=\bk{\bm{r}_{ij}^2}$.
In order to find these parameters, we need 6 additional equations. We can find three of them by applying the virial theorem separately on each of the three parts of the auxiliary Hamiltonian \cite{sema20}. One gets
\begin{subequations}\label{eqvirial}
\begin{equation}\label{eqvirial1}
N_aT'_a(p'_a)\frac{p_a^2}{p'_a}=C^2_{N_a}V'_{aa}(r_{aa})r_{aa}+\frac{N_b}{N_a}C^2_{N_a}V'_{ab}(r'_0)\frac{r_{aa}^2}{r'_0},
\end{equation}
\begin{equation}\label{eqvirial2}
N_bT'_b(p'_b)\frac{p_b^2}{p'_b}=C^2_{N_b}V'_{bb}(r_{bb})r_{bb}+\frac{N_a}{N_b}C^2_{N_b}V'_{ab}(r'_0)\frac{r_{bb}^2}{r'_0},
\end{equation}
\begin{equation}\label{eqvirial3}
\frac{1}{N_a} T'_a(p'_a)\frac{P_0^2}{p'_a}+\frac{1}{N_b} T'_b(p'_b)\frac{P_0^2}{p'_b}=N_aN_bV'_{ab}(r'_0)\frac{R_0^2}{r'_0}.
\end{equation}
\end{subequations}
Finally, we obtain the three last equations by using the exact eigenvalue (\ref{enho}) of the harmonic oscillator and comparing it to $\bk{H_\text{ho}(\{\alpha_0\})}$. Thanks to (\ref{ho2}), the comparison is done in a similar way as in \cite{silv12} and one gets
\begin{subequations}\label{eqcomp}
\begin{equation}\label{eqcomp1}
Q(N_a)=\sqrt{C^2_{N_a}}p_ar_{aa},
\end{equation}
\begin{equation}\label{eqcomp2}
Q(N_b)=\sqrt{C^2_{N_b}}p_br_{bb},
\end{equation}
\begin{equation}\label{eqcomp3}
Q(2)=P_0R_0.
\end{equation}
\end{subequations}
Equations (\ref{eqvirial}) and (\ref{eqcomp}) form a set of six equations which, combined with (\ref{eqen}), form the ET compact equations for a system of $N_a+N_b$ particles, and allow us to compute the approximate eigenvalue $\tilde{E}_0$. We have verified on several systems that these equations give the same results as those found with the extremization equations in \cite{sema20}. Let us note that the three equations (\ref{eqvirial}) can also be derived by minimizing (\ref{eqen}) with respect to $r_{aa}$, $r_{bb}$ and $R_0$, and using (\ref{eqcomp}).
Equations (\ref{eqen})-(\ref{eqcomp}) are more complicated than equations (\ref{compacteq}), but a comparison of the two sets suggests an interpretation for them. Equation (\ref{eqen}) is obviously the energy computed in terms of the mean momenta and relative distances. Equations (\ref{eqvirial}) are the equations of motion determining these mean quantities, and equations (\ref{eqcomp}) are the semiclassical quantizations of the various orbital and radial motions. These equations make clear which quantities are relevant in a quantum system containing two different sets of identical particles. It is worth recalling that the solutions obtained by the ET are full quantum ones with associated eigenfunctions \cite{silv10,sema20}, and that observables can be computed \cite{sema15a}.
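To make the structure of (\ref{eqvirial}) more concrete, consider for instance nonrelativistic kinetic energies $T_a(p)=p^2/(2m_a)$ and $T_b(p)=p^2/(2m_b)$. Since $T'(p)=p/m$, the kinetic sides become purely algebraic and (\ref{eqvirial1}) and (\ref{eqvirial3}) read
\begin{equation*}
\frac{N_a\,p_a^2}{m_a}=C^2_{N_a}V'_{aa}(r_{aa})r_{aa}+\frac{N_b}{N_a}C^2_{N_a}V'_{ab}(r'_0)\frac{r_{aa}^2}{r'_0}
\quad\text{and}\quad
\left(\frac{1}{N_a m_a}+\frac{1}{N_b m_b}\right)P_0^2=N_aN_bV'_{ab}(r'_0)\frac{R_0^2}{r'_0},
\end{equation*}
with a similar simplification for (\ref{eqvirial2}).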
As a first check of these equations, we must recover the three equations (\ref{compacteq}) when all particles are identical. In this case $T_a = T_b$ and $V_{aa}=V_{bb}=V_{ab}$, and we must impose the following symmetries
\begin{equation}\label{sym}
\begin{array}{cc}
& \bk{\bm{p}_i^2} = \bk{\bm{p}_j^2}, \hspace{5 mm} \forall\ i,j, \\[0.5cm]
& \bk{\bm{r}_{ii'}^2} = \bk{\bm{r}_{jj'}^2} = \bk{\bm{r} _{ij}^2}, \hspace{5mm} \forall\ i,i',j,j'.
\end{array}
\end{equation}
From the definitions of our 6 parameters, we conclude
\begin{equation}\label{sym2}
\begin{array}{cc}
& p'_a = p'_b = p_0, \\[0.5cm]
& r_{aa} = r_{bb} = r'_0=\rho_0,
\end{array}
\end{equation}
where $p_0$ and $\rho_0$ are defined as before in (\ref{compacteq}). Then, we easily see that equation (\ref{eqen}) reduces to (\ref{compacteq1}) with $N = N_a + N_b$. It is a matter of algebra to show that the sum of the three equations (\ref{eqvirial})
\begin{equation}
N_aT'_a(p'_a)p'_a + N_bT'_b(p'_b)p'_b = C^2_{N_a}V'_{aa}(r_{aa})r_{aa} + C^2_{N_b}V'_{bb}(r_{bb})r_{bb} + N_aN_bV'_{ab}(r'_0)r'_0,
\end{equation}
reduces to (\ref{compacteq3}).
When all the particles are identical, it is no longer relevant to split the energy into contributions from the different subsets. We notice that $Q(N_a)+Q(N_b)+Q(2) = Q(N_a+N_b)$. This is a hint that the sum of the three equations (\ref{eqcomp}) must reduce to (\ref{compacteq2}), but the proof is more subtle. Thanks to the symmetries (\ref{sym}) and (\ref{sym2}), one can express $R_0$ in terms of $\rho_0$, and $p_a$, $p_b$ and $P_0$ in terms of $p_0$. Simple calculations then show that (\ref{eqcomp}) reduces to (\ref{compacteq2}). Finally, all equations (\ref{compacteq}) are recovered.
Note that (\ref{sym2}) also implies symmetries on the auxiliary parameters, $\mu_a = \mu_b$ and $\rho_{aa} = \rho_{bb} = \rho_{ab}$, which is also expected as explained in \cite{sema20}.
As a second test, we have substituted the harmonic oscillator Hamiltonian (\ref{ho}) into our 7 equations. It is then a matter of algebra to recover the exact solution (\ref{enho}). A third check is given in the following section.
\section{$\bm{N_a=1}$ and/or $\bm{N_b=1}$}
\label{sec:na1}
The 7 equations (\ref{eqen}), (\ref{eqvirial}) and (\ref{eqcomp}) were established for a system of $N_a+N_b$ particles. It is interesting to look at what happens when a set contains only one particle. Let us consider, for example, the case $N_b=1$. Then, all the terms in $C^2_{N_b}$ and $Q(N_b)$ vanish. Equation (\ref{eqcomp2}) becomes trivial and (\ref{eqvirial2}) leads to $p_b = 0$. As $p_b=0$, we also have $p'_b = P_0$. In the end, we are left with a system of 5 equations
\begin{subequations}\label{eqn+1}
\begin{equation}
\tilde{E}_0=N_aT_a\left(p'_a\right)+T_b\left(P_0\right)+C^2_{N_a}V_{aa}\left(r_{aa}\right)+N_aV_{ab}\left(r_0'\right),
\end{equation}
\begin{equation}\label{eqn+12}
N_aT'_a(p'_a)\frac{p_a^2}{p'_a}=C^2_{N_a}V'_{aa}(r_{aa})r_{aa}+\frac{N_a-1}{2}V'_{ab}(r'_0)\frac{r_{aa}^2}{r'_0},
\end{equation}
\begin{equation}
\frac{1}{N_a} T'_a(p'_a)\frac{P_0^2}{p'_a}+T'_b(P_0)P_0=N_aV'_{ab}(r'_0)\frac{R_0^2}{r'_0},
\end{equation}
\begin{equation}\label{eqn+14}
Q(N_a)=\sqrt{C^2_{N_a}}p_ar_{aa},
\end{equation}
\begin{equation}
Q(2)=P_0R_0,
\end{equation}
\end{subequations}
where our four parameters are now defined as $p_a^2=\bk{\bm{p}_i^2-\frac{\bm{P}_a^2}{N_a^2}}$, $P_0^2=\bk{\left(\frac{\mu_b\bm{P}_a-M_a\bm{p}_b}{M_a + \mu_b}\right)^2}$, $r_{aa}^2= \bk{\bm{r}_{ii'}^2}$ and $R_0^2=\bk{\left(\bm{R}_a-\bm{r}_b\right)^2}$. We also have ${p'_a}^2=p_a^2+\frac{P_0^2}{N_a^2}$ and ${r'_0}^2=\frac{N_a-1}{2N_a}r_{aa}^2+R_0^2$. The five equations (\ref{eqn+1}) can also be obtained from scratch with the procedure explained above.
Another special case is $N_a=N_b=1$, that is, a two-body system. Similar simplifications as in the previous case then occur and we obtain the equations of the envelope theory for $N=2$, which are a generalization of the results obtained in \cite{sema13,sema12}
\begin{subequations}
\begin{equation}
\tilde{E}_0=T_a(P_0)+T_b(P_0) + V_{ab}(R_0),
\end{equation}
\begin{equation}
T'_a(P_0)P_0 + T'_b(P_0)P_0 = V'_{ab}(R_0)R_0,
\end{equation}
\begin{equation}
Q(2)=P_0R_0,
\end{equation}
\end{subequations}
where $P_0^2=\bk{\left(\frac{\mu_b\bm{p}_a-\mu_a\bm{p}_b}{\mu_a + \mu_b}\right)^2}$ and $R_0^2=\bk{\left(\bm{r}_a-\bm{r}_b\right)^2}$. The fact that the correct limits are obtained for $N_a=1$ or $N_b=1$ is also a test of coherence for the set~(\ref{eqen})-(\ref{eqvirial})-(\ref{eqcomp}).
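For instance, for two nonrelativistic particles interacting through a harmonic potential, $T_a(P)=P^2/(2m_a)$, $T_b(P)=P^2/(2m_b)$ and $V_{ab}(R)=kR^2/2$, the second equation gives $P_0^2/\mu=kR_0^2$, with $\mu=m_am_b/(m_a+m_b)$ the reduced mass. Combined with $Q(2)=P_0R_0$, this yields
\begin{equation*}
\tilde{E}_0=\frac{P_0^2}{2\mu}+\frac{kR_0^2}{2}=Q(2)\sqrt{\frac{k}{\mu}},
\end{equation*}
which is the exact two-body harmonic result, in agreement with the test performed with the Hamiltonian (\ref{ho}).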
\section{Concluding remarks}
\label{sec:conclu}
We were able to build the 7 compact equations of the envelope theory for a system of $N_a+N_b$ particles. These equations reduce to the 3 usual ones when all particles are identical. We also presented the special cases $N_b =1$ and/or $N_a=1$. Starting from these equations, it is possible to improve the envelope theory with a procedure similar to the one used in \cite{sema15b}. This is performed for a system of $N_a+1$ particles in \cite{chev21}. With these 7 equations, it is possible to open new domains of applicability of the envelope theory, especially in hadronic physics, where the method has proven useful, as mentioned in the introduction. But the method can also be used to estimate the binding energies of other systems such as nuclei or clusters of cold atoms, for which ab initio calculations are already available, as for instance in \cite{gatt13}. In particular, accurate calculations have been performed for large helium clusters \cite{gatt11,kiev20}. For such systems, it is necessary to take into account three-body forces, which can be handled by the envelope theory \cite{sema18b}.
\begin{acknowledgments}
L.C. would like to thank the Fonds de la Recherche Scientifique - FNRS for financial support. This work was also supported under Grant Number 4.45.10.08.
\end{acknowledgments}
\end{document}
\begin{document}
\title{Multisymplectic 3-forms on 7-dimensional manifolds}
\begin{abstract}
A 3-form $\omega\in\Lambda^3\mathbb R^{7\ast}$ is called multisymplectic if it satisfies some natural non-degeneracy requirement. It is well known that there are 8 orbits (or types) of multisymplectic 3-forms on $\mathbb R^7$ under the canonical action of $\mathrm GL(7,\mathbb R)$ and that two types are open. This leads to 8 types of global multisymplectic 3-forms on 7-dimensional manifolds without boundary.
The existence of a global multisymplectic 3-form of a fixed type is a classical problem in differential topology which is equivalent to the existence of a certain $G$-structure. The open types are the most interesting cases as they are equivalent to a $\mathrm G_2$- and a $\tilde{\mathrm{G}}_2$-structure, respectively. The existence of these two structures is a well known and solved problem. In this article, the problem of the existence of multisymplectic 3-forms of the remaining types is solved (under some convenient assumptions).
\end{abstract}
\section{Introduction}
Put $\mathrm V:=\mathbb R^7$.
There are finitely many orbits of the canonical action of $\mathrm GL(\mathrm V)$ on $\Lambda^3\mathrm V^\ast$.
We will also call the orbits types. A linear isomorphism $\Phi:\mathbb R^7\rightarrow\mathrm W$ induces a map $\Phi^\ast:\Lambda^3\mathrm W^\ast\rightarrow\Lambda^3\mathbb R^{7\ast}$. The type of $\Phi^\ast\omega$ does not depend on the choice of the linear isomorphism and thus, we can define the type of any skew-symmetric 3-form on any 7-dimensional real vector space.
A 3-form $\omega\in\Lambda^3\mathrm V^\ast$ is called \textit{multisymplectic} if the insertion map
\begin{equation}
\mathrm V\rightarrow\Lambda^2\mathrm V^\ast, \ v\mapsto i_v\omega:=\omega(v,-,-)
\end{equation}
is injective. There are (see \cite{Dj} and \cite{W}) eight types of multisymplectic 3-forms, two of which are open.
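For instance, the 3-form $\omega=\alpha_{123}+\alpha_{456}\in\Lambda^3\mathbb R^{7\ast}$ is not multisymplectic: no summand involves the index 7, so
\begin{equation*}
i_{e_7}\omega=\omega(e_7,-,-)=0,
\end{equation*}
and the insertion map has a non-trivial kernel containing $e_7$.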
Let $\Omega$ be a global 3-form on a 7-dimensional manifold $N$ without boundary and $i\in\{1,\dots,8\}$. We call $\Omega$ a \textit{multisymplectic 3-form of algebraic type} $i$ if for each $x\in N$, $\Omega_x$ is a multisymplectic 3-form of type $i$. The existence of such a 3-form is a classical problem in differential topology: if $\mathrm O_i$ is the stabilizer of a fixed multisymplectic 3-form $\omega_i\in\Lambda^3\mathrm V^\ast$ of algebraic type $i$, then $N$ admits a multisymplectic 3-form of algebraic type $i$ if, and only if, it has an $\mathrm O_i$-structure. The groups $\mathrm O_i$ were studied in \cite{BV}, where they were given as semi-direct products of some well known Lie groups.
By the Cartan-Iwasawa-Malcev theorem (see \cite[Theorem 1.2]{Bo}), a connected Lie group $\mathrm H$ has a maximal compact subgroup and any two such subgroups are conjugate. Let us fix one such subgroup and denote it by $\mathrm K$. By Cartan's result, the group $\mathrm H$ has the homotopy type of $\mathrm K$ and, by a standard argument from obstruction theory, any $\mathrm H$-principal bundle reduces to a $\mathrm K$-principal bundle.
Hence, the first goal is (see Section \ref{section max compact subgroups}) to find a maximal compact subgroup $\mathrm K_i$ of each group $\mathrm O_i$. Then we solve (see Section \ref{section global forms}) the problem of the existence of a multisymplectic form of algebraic type $i$ on a closed 7-manifold. The problem is not solved completely, as for some types we assume that the underlying manifold is orientable or simply-connected.
The most interesting and well known cases are types 8 and 5 as $\mathrm O_8=\mathrm G_2$ and $\mathrm O_5=\tilde{\mathrm{G}}_2$. The existence of a $\mathrm G_2$-structure was solved in \cite{G} and the existence of a $\tilde{\mathrm{G}}_2$-structure in \cite{Le}.
Let us summarize the main results of this article in a single theorem. See Section \ref{section spin char class} for the definition of the characteristic classes $q(N)$ and $q(N;\ell)$.
\begin{thm}\label{metatheorem}
Let $N$ be a closed and connected 7-manifold.
\begin{enumerate}
\item Suppose that $N$ is orientable, spin$^c$ and that there are $e,f\in H^2(N,\mathbb Z)$ such that
\begin{equation*}
w_2(N)=\rho_2(e+f)\ \ \mathrm{and}\ \ q(N;e+f)=-ef,
\end{equation*}
then $N$ admits a multisymplectic 3-form of algebraic type 1.
If $N$ is simply-connected, then the assumptions are also necessary.
\item Suppose that $N$ is orientable, spin and that there are $e,f\in H^2(N,\mathbb Z)$ such that $$-q(N)=e^2+f^2+3ef,$$
then $N$ admits a multisymplectic 3-form of algebraic type 2.
If $N$ is simply-connected, then the assumptions are also necessary.
\item $N$ admits a multisymplectic 3-form of algebraic type $3$ if, and only if, $N$ is orientable and spin$^c$.
\item Suppose that $N$ is orientable, spin and there is $u\in H^4(N,\mathbb Z)$ such that $$q(N)=-4u,$$
then $N$ admits a multisymplectic 3-form of algebraic type 4.
On the other hand, if $N$ admits a multisymplectic 3-form of algebraic type $4$, then $N$ is orientable and spin.
\item $N$ admits a multisymplectic 3-form of algebraic type $i=5,6,7,8$ if, and only if, $N$ is orientable and spin.
\end{enumerate}
\end{thm}
\textit{Acknowledgement and dedication}. The author is grateful to Michael C. Crabb for pointing out several mistakes in the original article and for considerably simplifying many arguments in the present work. I would also like to thank J. Van\v zura, M. \v Cadek and the unknown referee for several valuable comments and suggestions.
I would like to dedicate this article to M. Doubek, who drew the author's attention to this subject and who passed away in a car accident at the age of 33.
\subsection{Notation}\label{section notation}
We will use the following notation:
\begin{tabular}{rl}
$1_n:=$& identity $n\times n$ matrix,\\
$[v_1,\dots,v_i]:=$& the linear span of vectors $v_1,\dots,v_i$,\\
$\alpha_{i_1i_2\dots i_\ell}:=$&$\alpha_{i_1}\wedge\alpha_{i_2}\wedge\dots\wedge\alpha_{i_\ell}$,\\
$M(k,\mathbb R):=$& the algebra of $k\times k$ real matrices.
\end{tabular}
Let $\mathrm V,\mathrm W$ be real vector spaces, $\mathrm V^\ast$ be the dual vector space to $\mathrm V$, $\mathrm{End}(\mathrm V)$ be the algebra of linear endomorphisms of $\mathrm V$, $\mathrm GL(\mathrm V)$ be the group of linear automorphisms of $\mathrm V$. Suppose that $\mathrm W'$ is a vector subspace of $\mathrm W$,\ \ $A$ is a subalgebra of $\mathrm{End}(\mathrm V)$,\ \ $\omega\in\otimes^i\mathrm W^\ast$,\ \ $\varphi\in\mathrm GL(\mathrm W)$,\ \ $H\subset\mathrm GL(\mathrm W)$, \ \ $ \mathcal{B}=\{v_1,\dots,v_n\}$ is a basis of $\mathrm V$ and $\Phi:\mathrm V\rightarrow\mathrm W$ is a linear isomorphism. Then we put
\begin{align}
&\Phi^\ast\varphi:=\Phi^{-1}\circ\varphi\circ\Phi\in\mathrm GL(\mathrm V)\ \ \mathrm{ and} \ \ \varphi^\ast H:=\{\varphi^\ast h:\ h\in H\},\label{notation pullback of map}\\
&\Phi^\ast\omega\in\otimes^i\mathrm V^\ast,\ \Phi^\ast\omega(v_1,\dots,v_i):=\omega(\Phi(v_1),\dots,\Phi(v_i)), \ v_1,\dots,v_i\in\mathrm V,\label{notation pullback of tensor}\\
&\varphi.\omega:=(\varphi^{-1})^\ast\omega,\\
&\mathrm{Stab}(\omega):=\{\varphi\in\mathrm GL(\mathrm W):\ \varphi.\omega=\omega\},\label{notation stabilizer of tensor}\\
&\mathrm{Stab}(\mathrm W'):=\{\varphi\in\mathrm GL(\mathrm W):\ \varphi(\mathrm W')\subset\mathrm W'\},\label{notation stabilizer of subspace}\\
&\mathrm GL_A(\mathrm V):=\{\varphi\in\mathrm GL(\mathrm V):\ \forall a\in A,\ \varphi\circ a=a\circ\varphi\},\label{notation equivariant maps}\\
&\phi_{\mathcal{B}}:=(a_{ij})\in M(n,\mathbb R),\ \phi\in\mathrm{End}(\mathrm V),\ \phi(v_j)=\sum_i a_{ij}v_i,\\
&\triangle^k(\omega):=\{w\in\mathrm W:\ (i_w\omega)^{\wedge^k}=0\}\label{notation isotropy set}.
\end{align}
It is easy to see that $\varphi^\ast\mathrm{Stab}(\omega)=\mathrm{Stab}(\varphi^\ast\omega)$.
From now on, $\mathcal{B}_{st}=\{e_1,\dots,e_\ell\}$ will denote the standard basis of $\mathbb R^\ell,\ \ell\ge1$. If $f_1,\dots,f_\ell\in\mathbb R^\ell$, then $(f_1|\dots|f_\ell)$ denotes the matrix with $i$-th column equal to $f_i$. We put $E_{ij}:=(0|\dots|e_i|\dots|0)\in M(\ell,\mathbb R)$ where $e_i$ is the $j$-th column. For brevity, we put $\triangle^j_i:=\triangle^j(\omega_i)$ where $\omega_i$ is the fixed multisymplectic 3-form of type $i$ from \cite{BV}. It follows that $\triangle^j_i$ is an $\mathrm O_i$-invariant subset.
\section{$\ast$-Algebras, some subgroups of $\mathrm G_2,\tilde{\mathrm{G}}_2$ and semi-direct products of Lie groups}
In this Section we will review some well known theory, set notation and prove some elementary facts about maximal compact subgroups of semi-direct products of Lie groups.
\subsection{$\ast$-algebras}\label{section algebras}
A $\ast$-\textit{algebra} is a pair $(A,\ast)$ where:
\begin{itemize}
\item $A$ is a real algebra\footnote{We do not assume that $A$ is associative.} with a unit $1$ and
\item $\ast:A\rightarrow A$ is an involution which satisfies: $$\ast 1=1,\ \ast(a.b)=(\ast b).(\ast a), \ a,b\in A.$$
\end{itemize}
We put $\bar a:=\ast(a)$, $\Re A:=\{a\in A:\ \bar a=a\}$ and $\Im A:=\{a\in A:\ \bar a=-a\}$. Then $A=\Re A\oplus\Im A$ and we denote by $\Re(a)$ and $\Im(a)$ the projections of $a\in A$ onto $\Re A$ and $\Im A$, respectively.
Suppose that $\Re A=\mathbb R.1$. Then $B_A(a,b):=\Re(a\bar b),\ a,b\in A$ is a \textit{standard bilinear form} on $A$ and $Q_A(a):=B_A(a,a)$ is a \textit{standard quadratic form}. Finally, we define a multi-linear form on $\Im A$:
\begin{equation}\label{tautological form on A}
\omega_A(a,b,c):=B_A(a.b,c),\ a,b,c\in\Im A.
\end{equation}
We will consider two products on the vector space $A\oplus A$, namely
\begin{equation}\label{CD multiplication}
(a,b)\cdot(c,d)=(ac-\bar d b,b\bar c+da)
\end{equation}
and
\begin{equation}\label{alternative multiplication}
(a,b)\tilde\cdot(c,d)=(ac+\bar d b,b\bar c+da).
\end{equation}
Then $A_2:=(A\oplus A,\cdot)$ and $\tilde A_2:=(A\oplus A,\tilde\cdot)$ are real algebras with a unit $(1,0)$.
It is clear that
\begin{equation}\label{conjugation}
\star:A\oplus A\rightarrow A\oplus A,\ \star{(a,b)}=(\bar a,-b)
\end{equation}
is an involution.
Then (see \cite[Section 1.13]{Y}) $\mathbb CD(A):=(A_2,\star)$ and $\widetilde{\mathbb CD}(A):=(\tilde A_2,\star)$ are also $\ast$-algebras and $\mathbb CD(A)$ is the \textit{Cayley-Dickson algebra associated to} $(A,\ast)$.
It is well known that $\mathbb C=\mathbb CD(\mathbb R),$ the algebra of quaternions $\mathbb H=\mathbb CD(\mathbb C)$ and the algebra of octonions $\mathbb O=\mathbb CD(\mathbb H)$.
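The difference between the two constructions is already visible for $A=\mathbb R$, where the conjugation is trivial: formulas (\ref{CD multiplication}) and (\ref{alternative multiplication}) give
\begin{equation*}
(0,1)\cdot(0,1)=(-1,0)\quad\mathrm{while}\quad(0,1)\tilde\cdot(0,1)=(1,0),
\end{equation*}
so the Cayley-Dickson doubling adjoins a square root of $-1$, whereas the alternative product adjoins a square root of $+1$; this explains the indefinite signature of the standard quadratic forms of the pseudo-algebras below.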
The algebra of pseudo-quaternions ${\tilde{\mathbb H}}=\widetilde{\mathbb CD}(\mathbb C)$ is isomorphic to $(\mathrm M(2,\mathbb R),\ast)$ where
\begin{equation}\label{involution on PQ}
\ast\left(
\begin{array}{cc}
a&b\\
c&d\\
\end{array}
\right)=
\left(
\begin{array}{cc}
d&-b\\
-c&a\\
\end{array}
\right).
\end{equation}
The standard quadratic form is the determinant. In particular, the signature of $Q_{{\tilde{\mathbb H}}}$ is $(2,2)$.
We will use the following conventions. We put
\begin{equation}\label{pseudo-quaternions}
\ \tilde I:=\left(
\begin{array}{cc}
0&-1\\
1&0\\
\end{array}
\right),\
\tilde J:=
\left(
\begin{array}{cc}
0&1\\
1&0\\
\end{array}
\right),
\ \tilde K:=
\left(
\begin{array}{cc}
1&0\\
0&-1\\
\end{array}
\right).
\end{equation}
Be aware that $\tilde I.\tilde J=-\tilde K$.
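A direct matrix computation gives the quaternion-like relations with flipped signs,
\begin{equation*}
\tilde I^2=-1_2,\qquad\tilde J^2=\tilde K^2=1_2,\qquad Q_{{\tilde{\mathbb H}}}(\tilde I)=1,\qquad Q_{{\tilde{\mathbb H}}}(\tilde J)=Q_{{\tilde{\mathbb H}}}(\tilde K)=-1,
\end{equation*}
in agreement with the signature $(2,2)$ of $Q_{{\tilde{\mathbb H}}}$ on the basis $\{1_2,\tilde I,\tilde J,\tilde K\}$ of ${\tilde{\mathbb H}}$.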
The algebra of pseudo-octonions is ${\tilde{\mathbb O}}=\mathbb CD({\tilde{\mathbb H}})\cong \widetilde{\mathbb CD}(\mathbb H)$, see \cite[Section 1.13]{Y}.
\subsection{Subgroups of $\mathrm G_2$ and $\tilde{\mathrm{G}}_2$}\label{section subgroups}
We have $\mathrm G_2=\{\varphi\in\mathrm GL(\mathbb O)|\ \forall a,b\in\mathbb O: \varphi(a.b)=\varphi(a).\varphi(b)\}$. It is well known that $\mathrm G_2\subset\mathrm{Stab}(1)\cap\mathrm{Stab}(\Im\mathbb O)$. Hence, it is more natural to view $\mathrm G_2$ as a subgroup of $\mathrm GL(\Im\mathbb O)$. It is well known that $\omega_\mathbb O\in\Lambda^3\Im\mathbb O^\ast$ and that $\mathrm G_2=\mathrm{Stab}(\omega_{\mathbb O})$. If we replace the algebra $\mathbb O$ by ${\tilde{\mathbb O}}$ in the definition of $\mathrm G_2$, then we get the group $\tilde{\mathrm{G}}_2$. As above, $\omega_{\tilde{\mathbb O}}\in\Lambda^3\Im{\tilde{\mathbb O}}^\ast$ and $\tilde{\mathrm{G}}_2=\mathrm{Stab}(\omega_{\tilde{\mathbb O}})$. We also view $\tilde{\mathrm{G}}_2$ as a subgroup of $\mathrm GL(\Im{\tilde{\mathbb O}})$.
Let $\mathrm{Sp}(1)$ be the group of unit quaternions. We define
\begin{equation}\label{SO434}
\phi_{p,q}\in\mathrm GL(\Im\mathbb H\oplus\mathbb H),\ \phi_{p,q}(a,b):=(p.a.\bar p,q.b.\bar p),\ p,q\in\mathrm{Sp}(1).
\end{equation}
It is well known (see \cite[Proposition 1.20.1]{Y}) that $\mathrm{SO}(4)_{3,4}:=\{\phi_{p,q}:\ p,q\in\mathrm{Sp}(1)\}$ is a subgroup of $\mathrm G_2$. If we view ${\tilde{\mathbb O}}$ as $\widetilde{\mathbb CD}(\mathbb H)$, then it is shown in \cite[Section 1.13]{Y} that $\mathrm{SO}(4)_{3,4}\subset\tilde{\mathrm{G}}_2$.
We put
\begin{equation*}
\mathrm{\widetilde{Sp}}(1)_{\pm}:=\{A\in{\tilde{\mathbb H}}: Q_{{\tilde{\mathbb H}}}(A)=\pm1\}\ \ \mathrm{and}\ \ \mathrm{\widetilde{Sp}}(1):=\{A\in{\tilde{\mathbb H}}: Q_{{\tilde{\mathbb H}}}(A)=1\}.
\end{equation*}
Let $A,B\in\mathrm{\widetilde{Sp}}(1)_\pm$ and consider
\begin{equation}\label{SO2234}
\tilde\phi_{A,B}\in\mathrm GL(\Im{\tilde{\mathbb H}}\oplus{\tilde{\mathbb H}}),\ \tilde\phi_{A,B}(X,Y)=Q_{{\tilde{\mathbb H}}}(B).(A.X.\bar A,B.Y.\bar A).
\end{equation}
It can be verified directly that $\mathrm{SO}(2,2)_{3,4}:=\{\tilde\phi_{A,B}|\ A,B\in\mathrm{\widetilde{Sp}}(1)_\pm:\ Q_{{\tilde{\mathbb H}}}(A)=Q_{{\tilde{\mathbb H}}}(B)\}$ is a subgroup of $\tilde{\mathrm{G}}_2$. This uses the same computation which shows that $\mathrm{SO}(4)_{3,4}\subset\mathrm G_2$ together with the identity $A^{-1}=Q_{{\tilde{\mathbb H}}}(A).\bar A,\ A\in\mathrm{\widetilde{Sp}}(1)_\pm$.
On the other hand, the group $\widetilde\mathrm{SO}(2,2)_{3,4}:=\{\tilde\phi_{A,B}|\ A,B\in\mathrm{\widetilde{Sp}}(1)_\pm\}$ is not contained in $\tilde{\mathrm{G}}_2$. Nevertheless, we still have the following.
\begin{lemma}\label{lemma invariance of omega 5}
Let $X\in\Im{\tilde{\mathbb H}},\ Y,Z\in{\tilde{\mathbb H}}$ and $\tilde\phi_{A,B}\in\widetilde\mathrm{SO}(2,2)_{3,4}$. Then
\begin{equation}
\tilde\phi_{A,B}.\omega_{\tilde{\mathbb O}}((X,0),(0,Y),(0,Z)) =\omega_{\tilde{\mathbb O}}((X,0),(0,Y),(0,Z)).
\end{equation}
\end{lemma}
\begin{proof}
The right hand side is
\begin{eqnarray}\label{help I}
&&\omega_{\tilde{\mathbb O}}((X,0),(0,Y),(0,Z))=B_{\tilde{\mathbb O}}((X,0).(0,Y),(0,Z))=\Re((0,Y.X).\overline{(0,Z)})\nonumber\\
&&\ \ =\Re((0,Y.X).(0,-Z))=\Re(\bar Z.Y.X,0)=\Re(\bar Z.Y.X).
\end{eqnarray}
The left hand side is
\begin{eqnarray*}
&&\omega_{\tilde{\mathbb O}}(Q_{{\tilde{\mathbb H}}}(B)(\bar A.X.A,0),Q_{{\tilde{\mathbb H}}}(A)(0,\bar B.Y. A),Q_{{\tilde{\mathbb H}}}(A)(0,\bar B.Z.A))\\
&&=Q_{{\tilde{\mathbb H}}}(B)\Re(\bar A.\bar Z .B.\bar B.Y.A.\bar A.X.A)=\Re(\bar Z .Y.X).
\end{eqnarray*}
\end{proof}
The restriction map $\phi_{p,q}\mapsto\phi_{p,q}|_{\Im\mathbb H}$ induces a homomorphism $\pi:\mathrm{SO}(4)_{3,4}\rightarrow\mathrm{SO}(\Im\mathbb H)$. It is well known that this homomorphism is surjective.
If $A,B\in\mathrm{\widetilde{Sp}}(1)_\pm$ and $X\in\Im{\tilde{\mathbb H}}$, then $Q_{{\tilde{\mathbb H}}}(Q_{{\tilde{\mathbb H}}}(B) A.X.\bar A)=Q_{\tilde{\mathbb H}}(X)$. It follows that the restriction map $\tilde\phi_{A,B}\mapsto\tilde\phi_{A,B}|_{\Im{\tilde{\mathbb H}}}$ induces a map $\tilde\pi:\widetilde\mathrm{SO}(2,2)_{3,4}\rightarrow\mathrm O(\Im{\tilde{\mathbb H}})$.
\begin{lemma}\label{lemma split ses of groups}
There are split short exact sequences of Lie groups
\begin{align}\label{split ses with SO432}
&0\rightarrow\mathrm{Sp}(1)\rightarrow\mathrm{SO}(4)_{3,4}\xrightarrow{\pi}\mathrm{SO}(\Im\mathbb H)\rightarrow0\ \ \mathrm{and}\\
& 0\rightarrow\mathrm{\widetilde{Sp}}(1)\rightarrow\widetilde\mathrm{SO}(2,2)_{3,4}\xrightarrow{\tilde\pi}\mathrm O(\Im{\tilde{\mathbb H}})\rightarrow0\label{split ses with O2234}
\end{align}
where $\pi$ and $\tilde\pi$ are defined above. The second short exact sequence contains a split short exact sequence
\begin{equation}
0\rightarrow\mathrm{\widetilde{Sp}}(1)\rightarrow\mathrm{SO}(2,2)_{3,4}\rightarrow\mathrm{SO}(\Im{\tilde{\mathbb H}})\rightarrow0 \label{split SO2234}.
\end{equation}
\end{lemma}
\begin{proof}
We have $\ker\pi=\{\phi_{\pm1,q}: q\in\mathrm{Sp}(1)\}$. Now $\phi_{p,q}=\phi_{p',q'}$ if, and only if, $p=p',q=q'$ or $p=-p',q=-q'$. It follows that $\ker\pi=\{\phi_{1,q}:\ q\in\mathrm{Sp}(1)\}\cong\mathrm{Sp}(1)$. It remains to show that the short exact sequence splits. For this, consider the group $\mathrm{SO}(3)\cong\{\phi_{p,p}:\ p\in\mathrm{Sp}(1)\}$. Then $\pi|_{\mathrm{SO}(3)}$ induces an isomorphism $\mathrm{SO}(3)\rightarrow\mathrm{SO}(\Im\mathbb H)$. The inverse of this map is a splitting of $\pi$.
It is easy to see that $\tilde\phi_{A,B}=\tilde\phi_{A',B'}$ if, and only if, $A=A',B=B'$ or $A=-A',B=-B'$. We have $\ker\tilde\pi=\{\tilde\phi_{\pm1,B}:\ B\in\mathrm{\widetilde{Sp}}(1)\}=\{\tilde\phi_{1,B}:\ B\in\mathrm{\widetilde{Sp}}(1)\}\cong\mathrm{\widetilde{Sp}}(1)$. It is well known that any element of $\mathrm O(\Im{\tilde{\mathbb H}})$ is of the form $X\mapsto \pm A.X.\bar A$ for some $A\in\mathrm{\widetilde{Sp}}(1)_\pm$. Hence, $\tilde\pi$ is surjective and, arguing as above, it is easy to see that the sequence splits.
The last claim is easy to verify.
\end{proof}
\subsection{Maximal compact subgroup of semi-direct product}\label{section semi-direct products}
A semi-direct product of Lie groups is a short exact sequence
\begin{equation}
0\rightarrow\mathrm H_1\rightarrow\mathrm H\xrightarrow{\pi}\mathrm H_2\rightarrow0
\end{equation}
of Lie groups which splits, i.e. there is a homomorphism $\iota:\mathrm H_2\rightarrow\mathrm H$ such that $\pi\circ\iota=\mathrm{Id}_{\mathrm H_2}$. In this way, we can view $\mathrm H_2$ as a subgroup of $\mathrm H$ and we will do so without further comment. We write $\mathrm H=\mathrm H_1\rtimes\mathrm H_2$.
\begin{lemma}\label{lemma max compact in product}
Let $\mathrm G$ be a Lie group with a compact subgroup $\mathrm K$. Suppose that $\mathrm H_1,\mathrm H_2$ are closed subgroups of $\mathrm G$ such that the group generated by $\mathrm H_1,\mathrm H_2$ is a semi-direct product $\mathrm H_1\rtimes\mathrm H_2$. Assume that $\mathrm K_i:=\mathrm K\cap\mathrm H_i,\ i=1,2$ is a maximal compact subgroup of $\mathrm H_i$.
Then the group generated by $\mathrm K_1,\mathrm K_2$ is a semi-direct product $\mathrm K_1\rtimes\mathrm K_2$ and this is a maximal compact subgroup of $\mathrm H_1\rtimes\mathrm H_2$.
\end{lemma}
\begin{proof}
Let $k_i\in\mathrm K_i,\ i=1,2$ be arbitrary. Then $k_2^{-1}k_1k_2\in\mathrm H_1\cap\mathrm K=\mathrm K_1$ and so the first claim follows. The group $\mathrm K_1\rtimes\mathrm K_2$ is homeomorphic to $\mathrm K_1\times\mathrm K_2$ and so it is compact. It remains to show that it is maximal among all compact subgroups of $\mathrm H_1\rtimes\mathrm H_2$.
Let $\pi:\mathrm H_1\rtimes\mathrm H_2\rightarrow\mathrm H_2$ be the canonical projection and $\iota:\mathrm H_2\rightarrow\mathrm H_1\rtimes\mathrm H_2$ be the inclusion. Put $\bar h:=\iota\circ\pi(h),\ h\in\mathrm H_1\rtimes\mathrm H_2$. Let $\mathrm L$ be a compact group such that $\mathrm K_1\rtimes\mathrm K_2\subset\mathrm L\subset\mathrm H_1\rtimes\mathrm H_2$. Then in particular, $\mathrm K_i\subset\mathrm L,\ i=1,2$. It is clearly enough to show that for each $\ell\in\mathrm L:\ \ell.\bar \ell^{-1}\in\mathrm K_1$ and $\bar \ell\in\mathrm K_2$.
The group $\pi(\mathrm L)$ is a compact subgroup of $\mathrm H_2$ which contains $\mathrm K_2$. By the maximality of $\mathrm K_2$, $\pi(\mathrm L)=\mathrm K_2$. Thus $\bar \ell\in\iota(\pi(\mathrm L))=\iota(\mathrm K_2)=\mathrm K_2$.
It is clear that $\ell.\bar \ell^{-1}\in\ker\pi=\mathrm H_1$. As $\bar \ell\in\mathrm K_2\subset\mathrm L$, we have that $\ell.\bar \ell^{-1}\in\mathrm L$ and thus $\ell.\bar \ell^{-1}\in\mathrm L\cap\mathrm H_1$. Now $\mathrm L\cap\mathrm H_1$ is a compact subgroup of $\mathrm H_1$ which contains $\mathrm K_1$. Again by the maximality of $\mathrm K_1$, $\mathrm K_1=\mathrm L\cap\mathrm H_1$ and the proof is complete.
\end{proof}
\begin{lemma}\label{lemma max compact in product I}
Let $\mathrm H_1\rtimes\mathrm H_2$ be a semi-direct product of Lie groups. Assume that the trivial group $\{e\}$ is a maximal compact subgroup of $\mathrm H_i$ and that $\mathrm K$ is a maximal compact subgroup of $\mathrm H_j$ where $i,j\in\{1,2\},\ i\ne j$. Then $\mathrm K$ is a maximal compact subgroup of $\mathrm H_1\rtimes\mathrm H_2$.
\end{lemma}
\begin{proof}
As $\mathrm H_1\rtimes\mathrm H_2$ is diffeomorphic to $\mathrm H_1\times\mathrm H_2$, the groups $\mathrm H_i,\ i=1,2$ are closed in the semi-direct product. The group $\mathrm K$ is a compact subgroup of $\mathrm H_1\rtimes\mathrm H_2$. By the assumptions, $\mathrm K\cap\mathrm H_i$ is a maximal compact subgroup of $\mathrm H_i,\ i=1,2$. The claim follows from Lemma \ref{lemma max compact in product}.
\end{proof}
\begin{lemma}\label{lemma max compact in upper trian}
Let $\mathrm L$ be a subgroup of $\mathrm GL(n,\mathbb R)$. Suppose that for each $A\in\mathrm L$ the matrix $A-1_n$ is strictly upper triangular. Then $\{1_n\}$ is a maximal compact subgroup of $\mathrm L$.
\end{lemma}
\begin{proof}
Let $\mathrm K$ be a maximal compact subgroup of $\mathrm L$ and $B$ be a $\mathrm K$-invariant inner product on $\mathbb R^n$. Let $\{f_1,\dots,f_n\}$ be the $B$-orthonormal basis obtained by applying the Gram-Schmidt algorithm to the standard basis of $\mathbb R^n$. Let $F=(f_1|f_2|\dots|f_n)$ be the corresponding matrix. Then $F^{-1}.\mathrm K.F\subset\mathrm O(n)$. On the other hand, $F$ and $F^{-1}$ are upper triangular matrices with positive numbers on the diagonal. Thus, for each $A\in\mathrm K$ the matrix $F^{-1}.A.F$ is upper triangular and its diagonal coefficients are equal to 1. Hence, $F^{-1}.A.F=1_n$ and $A=1_n$.
\end{proof}
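A minimal example of Lemma \ref{lemma max compact in upper trian}: for the one-parameter group
\begin{equation*}
\mathrm L=\left\{\left(
\begin{array}{cc}
1&t\\
0&1\\
\end{array}
\right):\ t\in\mathbb R\right\}\cong(\mathbb R,+),
\end{equation*}
every matrix $A-1_2$ is strictly upper triangular, and indeed the only compact subgroup of $(\mathbb R,+)$ is the trivial one.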
\section{Maximal compact subgroups}\label{section max compact subgroups}
This Section is organized as follows. Each type is treated in its own section. We first recall some results about $\mathrm O_i:=\mathrm{Stab}(\omega_i)$ from \cite{BV} where $\omega_i$ is a fixed representative of the type $i=1,\dots,8$. Then we use these results to find a maximal compact subgroup of $\mathrm O_i$. We separate the summary part from the genuine research part of this article by \textemdash\ x \textemdash.
\subsection{Type 8}\label{section type 8}
A representative chosen in \cite[Section Type 8.]{BV} is
\begin{equation}\label{omega8}
\omega_8=\alpha_{123}+\alpha_{145}-\alpha_{167}+\alpha_{246}+\alpha_{257}+\alpha_{347}-\alpha_{356}.
\end{equation}
It is well known that $\mathrm O_8=\mathrm G_2$.
\begin{center}
\textemdash\ x \textemdash
\end{center}
In Section \ref{section type 7} we will need the following observation.
Recall (see Section \ref{section algebras}) that $\mathbb O=\mathbb CD(\mathbb H)$ and the definition (\ref{tautological form on A}) of the tautological 3-form $\omega_\mathbb O$.
Let $\Phi_8:\mathrm V\rightarrow\Im\mathbb O$ be the linear isomorphism that maps the standard basis of $\mathrm V$ to the standard basis
\begin{equation}
\label{standard basis of O}
\{(i,0),(j,0),(k,0),(0,1),(0,i),(0,j),(0,k)\}
\end{equation}
of $\Im\mathbb O$ where $\{1,i,j,k\}$ is the standard basis of $\mathbb H$.
\begin{lemma}\label{lemma type 8}
$\omega_8=\Phi_8^\ast\omega_\mathbb O$.
\end{lemma}
\begin{proof}
This is a straightforward computation which uses (\ref{help I}) together with $\omega_\mathbb O((i,0),(j,0),(k,0))=\mathbb Re((k,0).\overline{(k,0)})=\mathbb Re(-k^2)=\mathbb Re(1)=1.$
\end{proof}
\subsection{Type 7}\label{section type 7}
Let us first recall some facts from \cite[Section Type 7.]{BV}.
A representative is
\begin{equation}
\omega_7=\alpha_{125}+\alpha_{136}+\alpha_{147}+\alpha_{237}-\alpha_{246}+\alpha_{345}.
\end{equation}
Then $\mathrm V_3:=\triangle_7^3=[e_5,e_6,e_7]$ and we put $\mathrm W_4:=\mathrm V/\mathrm V_3$. The 3-form $\omega_7$ induces a bilinear form\footnote{Here $v=(c_1,\dots,c_7),\ w=(d_1,\dots,d_7)$} $B(v,w)=c_1d_1+c_2d_2+c_3d_3+c_4d_4$ on $\mathrm V$. The form $B$ descends to a positive definite bilinear form $B_4$ on $\mathrm W_4$. The associated quadratic form is denoted by $Q_4$. We have\footnote{Recall notation from (\ref{notation stabilizer of subspace}). In particular, $\mathrm{Stab}([B])$ is the stabilizer of the \textbf{line} spanned by $B$.} $\mathrm O_7\subset\mathrm{Stab}(\mathrm V_3)\cap\mathrm{Stab}([B])$.
The insertion map $v\mapsto i_v\omega_7$ induces a linear map $\lambda:\mathrm V_3\rightarrow\Lambda^2\mathrm W_4^\ast$. We put $\sigma_i:=\lambda(e_{i+4}),\ i=1,2,3$. As $B_4$ is non-degenerate, there are unique $E,F,G\in\mathrm{End}(\mathrm W_4)$ such that
\begin{equation}\label{type 7 quaternionic triple}
\sigma_1=B_4\circ E,\ \sigma_2=B_4\circ F,\ \sigma_3=B_4\circ G
\end{equation}
where $(B_4\circ A)(u,v):=B_4(A(u),v),\ A\in\mathrm{End}(\mathrm W_4),\ u,v\in\mathrm W_4$. The endomorphisms satisfy $E^2= F^2=-Id_{W_4}$ and $E F=-F E=G$. Hence, $\mathrm W_4$ is a 1-dimensional $\mathbb H$-vector space.
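The quaternionic relations can be checked mechanically. In the basis of $\mathrm W_4$ induced by $e_1,\dots,e_4$, the form $B_4$ is represented by the identity matrix, so $\sigma_i=B_4\circ A$ forces $A$ to be the transpose of the matrix of $\sigma_i$. A small sympy sketch (the encoding of $\omega_7$ as signed index triples and the helper `sigma` are ours):

```python
import sympy as sp

# omega_7 = a_{125} + a_{136} + a_{147} + a_{237} - a_{246} + a_{345},
# stored as signed, sorted index triples (1-based)
omega7 = [((1, 2, 5), 1), ((1, 3, 6), 1), ((1, 4, 7), 1),
          ((2, 3, 7), 1), ((2, 4, 6), -1), ((3, 4, 5), 1)]

def sigma(v):
    """Matrix of the 2-form i_{e_v} omega_7 on W_4 = span(e_1, ..., e_4)."""
    S = sp.zeros(4, 4)
    for (i, j, k), c in omega7:
        if k == v:  # e_v only ever occurs in the last slot of a term
            S[i - 1, j - 1] += c
            S[j - 1, i - 1] -= c
    return S

# with B_4 the identity matrix, sigma_i = B_4 o A forces A = sigma_i^T
E, F, G = (sigma(v).T for v in (5, 6, 7))
I4 = sp.eye(4)
assert E**2 == -I4 and F**2 == -I4
assert E*F == G and F*E == -G
```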
The scalar product $B_4$ induces a scalar product on $\Lambda^2\mathrm W_4^\ast$. As $\lambda$ is injective, $B_4$ thus induces a scalar product $B_3$ on $\mathrm V_3$, and one finds that $\{e_5,e_6,e_7\}$ is an orthonormal basis. We denote by $Q_3$ the quadratic form associated to $B_3$.
The homomorphism $\rho:\mathrm O_7\rightarrow\mathrm GL(\mathrm V_3),\ \varphi\mapsto \varphi|_{\mathrm V_3}$ induces a split short exact sequence $0\rightarrow\mathrm K\rightarrow\mathrm O_7\rightarrow\mathrm{CSO}(Q_3)\rightarrow0$. Any $\varphi\in\mathrm O_7$ induces an endomorphism $\tilde\varphi$ of $\mathrm W_4$ which preserves $Q_4$ up to scale. If $\varphi\in\mathrm K$, then $\tilde\varphi\in\mathrm GL_\mathbb H(\mathrm W_4)$ and $\tilde\varphi\in\mathrm{Stab}(B_4)$. This induces another split short exact sequence $0\rightarrow\mathrm L\rightarrow\mathrm K\rightarrow\mathrm{Sp}(1)\rightarrow0$. The group $\mathrm L$ is isomorphic to the Lie group $\mathbb R^8$. Notice that $
\ell_{\mathcal{B}_{st}}=\left(
\begin{matrix}
1_4&0\\
\ast&1_3\\
\end{matrix}
\right)$ for every $\ell\in\mathrm L$.
All together, $\mathrm O_7$ is isomorphic to a semi-direct product
\begin{equation}\label{semi-direct product type 7}
(\mathbb R^8\rtimes\mathrm{Sp}(1))\rtimes\mathrm{CSO}(3)
\end{equation}
as claimed in \cite[10. Proposition]{BV}.
\begin{center}
\textemdash\ x \textemdash
\end{center}
We can now proceed. We put $$f_1:=-e_6,\ f_2:=e_7,\ f_3:=e_5, \ f_4:=-e_1,\ f_5:=e_3,\ f_6:=-e_4,\ f_7:=-e_2$$
and $\mathcal{B}_7:=\{f_1,\dots,f_7\}$. Let $\mathcal{B}_7^\ast=\{\beta_1,\dots,\beta_7\}$ be a dual basis. Then
\begin{equation}
\omega_7=\beta_{145}-\beta_{167}+\beta_{246}+\beta_{257}+\beta_{347}-\beta_{356}.
\end{equation}
Consider a linear isomorphism $\Phi_7:\mathrm V\rightarrow\Im\mathbb O$ which maps the basis $\mathcal{B}_7$ to the standard basis (\ref{standard basis of O}) of $\Im\mathbb O$. Recall from Section \ref{section subgroups} that $\mathrm{SO}(4)_{3,4}$ is a subgroup of $\mathrm GL(\Im\mathbb O)$. Thus, $\mathrm K_7:=\Phi_7^\ast\mathrm{SO}(4)_{3,4}$ is a subgroup of $\mathrm GL(\mathrm V)$. Here we use the notation set in (\ref{notation pullback of map}).
\begin{lemma}\label{lemma inclusion type 7}
The subspaces $\mathrm V_3$ and $\mathrm V_4:=[f_4,\dots,f_7]$ are $\mathrm K_7$-invariant and
$\mathrm K_7\subset\mathrm O_7$.
\end{lemma}
\begin{proof}
Since $\mathrm V_3=\Phi_7^{-1}(\Im\mathbb H)$ and $\mathrm V_4=\Phi_7^{-1}(\mathbb H)$, the first claim follows from the definition (\ref{SO434}).
By Lemma \ref{lemma type 8}, $\omega_7=\Phi_7^\ast\omega_\mathbb O-\beta_{123}$. As $\mathrm{SO}(4)_{3,4}\subset\mathrm G_2$, we see that $\mathrm K_7\subset\Phi_7^\ast\mathrm G_2=\mathrm{Stab}(\Phi_7^\ast\omega_\mathbb O)$. From (\ref{SO434}) it is also clear that $\mathrm K_7\subset\mathrm{SO}(B_3)\subset\mathrm{Stab}(\beta_{123})$. Thus $\mathrm K_7\subset\mathrm O_7$.
\end{proof}
Put
\begin{equation}\label{group R+}
\mathrm{R}^+:=\bigg\{
\varphi\in\mathrm GL(\mathrm V)\bigg|
\ \varphi_{\mathcal{B}_7}=
\left(
\begin{matrix}
\lambda^{-2}.1_3&0\\
0&\lambda.1_4\\
\end{matrix}
\right),\lambda>0
\bigg\}.
\end{equation}
It is clear that $\mathrm{R}^+$ is a subgroup of $\mathrm GL(\mathrm V)$ and that $\mathrm{R}^+\subset\mathrm O_7$. From Lemma \ref{lemma inclusion type 7} it follows that $\mathrm K_7$ commutes with $\mathrm{R}^+$. It is easy to see that $k.\ell.k^{-1}\in\mathrm L$ whenever $\ell\in\mathrm L,\ k\in\mathrm K_7\times\mathrm{R}^+$. Hence, the group generated by $\mathrm L,\ \mathrm K_7$ and $\mathrm{R}^+$ is isomorphic to $\mathrm L\rtimes(\mathrm K_7\times\mathrm{R}^+)$. This is a subgroup of $\mathrm O_7$.
\begin{lemma}\label{lemma semi-direct product type 7}
$\mathrm O_7=\mathrm L\rtimes(\mathrm K_7\times\mathrm{R}^+)$.
\end{lemma}
\begin{proof}
Let $v\in\mathrm V_3$. The map $\Phi_7$ restricts to an isomorphism $\mathrm V_3\rightarrow\Im\mathbb H$ and so there is a unique $a\in\Im\mathbb H$ so that $\Phi_7(v)=(a,0)$.
The composition $\mathrm V\xrightarrow{\Phi_7}\Im\mathbb O\rightarrow\mathbb H$ where the second map is the canonical projection descends to an isomorphism $\underline\Phi_7:\mathrm W_4\rightarrow\mathbb H$. Notice that $\underline\Phi_7^\ast B_\mathbb H=B_4$. We know that there is a unique $A\in\mathrm{End}(\mathrm W_4)$ such that $\lambda(v)=B_4\circ A$. We will now show that $A$ corresponds to the multiplication by $a$ on the right.
Let $w_1,w_2\in\mathrm W_4$ and put $\ b=\underline\Phi_7(w_1),\ c=\underline\Phi_7(w_2)$. Then
\begin{eqnarray*}
&&\lambda(v)(w_1,w_2)=\omega_\mathbb O((a,0),(0,b),(0,c))=\mathbb Re(\bar cba)=\mathbb Re(ba \bar c)=B_\mathbb H(b.a,c)\\
&&=B_4(\underline\Phi_7^\ast(r_a)(w_1),w_2)
\end{eqnarray*}
where $r_a:\mathbb H\rightarrow\mathbb H$ is the right-action by $a$. We see that $A=\underline\Phi_7^\ast(r_a)$.
Recall from Lemma \ref{lemma split ses of groups} that $\tilde\pi:\mathrm{SO}(4)_{3,4}\rightarrow\mathrm{SO}(\Im\mathbb H),\ \varphi\mapsto\varphi|_{\Im\mathbb H}$, induces the split short exact sequence (\ref{split ses with SO432}), where $\mathrm{Sp}(1)=\{\tilde\phi_{1,q}:\ q\in\mathrm{Sp}(1)\}$ and $\tilde\phi_{1,q}(a,b)=(a,q.b)$. Now it is easy to see that $\mathrm K_7\times\mathrm{R}^+$ is a splitting of the subgroup $\mathrm{Sp}(1)\rtimes\mathrm{CSO}(3)$ from (\ref{semi-direct product type 7}). The claim then follows from the definition of $\mathrm L$.
\end{proof}
\begin{thm} \label{thm max compact type 7}
$\mathrm K_7$ is a maximal compact subgroup of $\mathrm O_7$.
\end{thm}
\begin{proof}
By Lemma \ref{lemma max compact in upper trian}, applied after reversing the order of the basis vectors so that the elements of $\mathrm L$ become upper triangular, $\{1_7\}$ is a maximal compact subgroup of $\mathrm L$. It is also clear that $\{1_7\}$ is a maximal compact subgroup of $\mathrm{R}^+$. The claim follows from Lemma \ref{lemma max compact in product I}.
\end{proof}
\subsection{Type 5}\label{section type 5}
A representative chosen in \cite[Section type 5]{BV} is
\begin{eqnarray}\label{type 5}
\omega_5=\alpha_{123}-\alpha_{145}+\alpha_{167}+\alpha_{246}+\alpha_{257}+\alpha_{347}-\alpha_{356}.
\end{eqnarray}
It is well known that $\mathrm O_5=\tilde{\mathrm{G}}_2$.
\begin{center}
\textemdash\ x \textemdash
\end{center}
We will later need two more representatives of type 5.
Recall from Section \ref{section algebras} that ${\tilde{\mathbb O}}=\widetilde{\mathbb{CD}}(\mathbb H)$ and recall the definition of $\omega_{\tilde{\mathbb O}}$. Let $\Phi_5:\mathrm V\rightarrow\Im{\tilde{\mathbb O}}$ be the linear isomorphism that maps the standard basis of $\mathrm V$ to the basis (\ref{standard basis of O}) of $\Im{\tilde{\mathbb O}}$. From (\ref{help I}) and (\ref{alternative multiplication}) it follows at once that $\Phi_5^\ast\omega_{\tilde{\mathbb O}}=2\alpha_{123}-\omega_8$, i.e.
\begin{eqnarray}\label{type 5.I}
\Phi^\ast_5\omega_{\tilde{\mathbb O}}=\alpha_{123}-\alpha_{145}+\alpha_{167}-\alpha_{246}-\alpha_{257}-\alpha_{347}+\alpha_{356}.
\end{eqnarray}
Notice that $\varphi.\omega_5=\Phi^\ast_5\omega_{\tilde{\mathbb O}}$ where $\varphi\in\mathrm GL(\mathrm V)$ is determined by $\varphi(e_i)=e_i, \ 1\le i\le 5$ and $\varphi(e_j)=-e_j,\ j =6,7$.
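The sign bookkeeping behind this remark is elementary: flipping the signs of $e_6$ and $e_7$ negates exactly the terms of $\omega_5$ containing an odd number of indices from $\{6,7\}$. A short Python check (our own encoding of 3-forms as dictionaries of signed index triples):

```python
# omega_5 as {index triple: coefficient}
omega5 = {(1, 2, 3): 1, (1, 4, 5): -1, (1, 6, 7): 1, (2, 4, 6): 1,
          (2, 5, 7): 1, (3, 4, 7): 1, (3, 5, 6): -1}
# expected Phi_5^* omega_O-tilde, cf. (type 5.I)
target = {(1, 2, 3): 1, (1, 4, 5): -1, (1, 6, 7): 1, (2, 4, 6): -1,
          (2, 5, 7): -1, (3, 4, 7): -1, (3, 5, 6): 1}
# phi flips e_6 and e_7, hence negates every term containing an odd
# number of indices from {6, 7}
acted = {t: c * (-1) ** sum(i in (6, 7) for i in t) for t, c in omega5.items()}
assert acted == target
```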
Let us now view ${\tilde{\mathbb O}}$ as $\mathbb CD({\tilde{\mathbb H}})$ and ${\tilde{\mathbb H}}$ as $M(2,\mathbb R)$. Consider the linear isomorphism $\tilde\Phi_5:\mathrm V\rightarrow\Im{\tilde{\mathbb O}}$ that sends the standard basis of $\mathrm V$ to the basis
\begin{equation}\label{basis of pO}
\{(\tilde I,0),(\tilde J,0),(\tilde K,0),(0,\sqrt2E_{11}),(0,\sqrt2E_{21}),(0,\sqrt2E_{12}),(0,\sqrt2E_{22})\}.
\end{equation}
\begin{lemma}\label{lemma type 5 II}
\begin{eqnarray}
\tilde\Phi_5^\ast\omega_{\tilde{\mathbb O}}=\alpha_{123}+\alpha_{145}+\alpha_{167}-\alpha_{245}+\alpha_{267}+\alpha_{347}-\alpha_{356}.
\end{eqnarray}
\end{lemma}
\begin{proof}
Let $\mathrm{tr}$ be the usual trace and
\begin{equation}
A=\left(
\begin{matrix}
a_3&a_2-a_1\\
a_1+a_2&-a_3
\end{matrix}
\right), \
B=\sqrt2\left(
\begin{matrix}
b_1&b_3\\
b_2&b_4
\end{matrix}
\right),\ C=\sqrt2\left(
\begin{matrix}
c_1&c_3\\
c_2&c_4
\end{matrix}
\right).
\end{equation}
From (\ref{help I}) follows that
\begin{align*}
\omega_{\tilde{\mathbb O}}((A,0),(0,B),(0,C))&=\mathbb Re(\bar CBA)=\frac{1}{2}\mathrm{tr}(\bar CBA)\\
&=a_1(b_1c_2-b_2c_1+b_3c_4-b_4c_3)\\
&+a_2(-b_1c_2+b_2c_1+b_3c_4-b_4c_3)\\
&+a_3(b_1c_4-b_4c_1-b_2c_3+b_3c_2).
\end{align*}
As $\omega_{\tilde{\mathbb O}}((\tilde I,0),(\tilde J,0),(\tilde K,0))=\mathbb Re((-\tilde K,0).\overline{(\tilde K,0)})=\mathbb Re(\tilde K^2)=\mathbb Re(1)=1,$
the claim follows.
\end{proof}
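The trace identity used in the proof can be verified mechanically, assuming the conjugation on ${\tilde{\mathbb H}}\cong M(2,\mathbb R)$ is $\bar X=\mathrm{tr}(X)1_2-X$ and $\mathbb Re(X)=\frac{1}{2}\mathrm{tr}(X)$ (our reading of the conventions of Section \ref{section algebras}). A sympy sketch:

```python
import sympy as sp

a1, a2, a3 = sp.symbols('a1:4')
b1, b2, b3, b4 = sp.symbols('b1:5')
c1, c2, c3, c4 = sp.symbols('c1:5')

A = sp.Matrix([[a3, a2 - a1], [a1 + a2, -a3]])
B = sp.sqrt(2) * sp.Matrix([[b1, b3], [b2, b4]])
C = sp.sqrt(2) * sp.Matrix([[c1, c3], [c2, c4]])

def conj(X):
    # conjugation on M(2, R): X-bar = tr(X) 1_2 - X (the adjugate)
    return X.trace() * sp.eye(2) - X

def Re(X):
    # real part, i.e. one half of the usual trace
    return X.trace() / 2

lhs = sp.expand(Re(conj(C) * B * A))
rhs = sp.expand(a1 * (b1*c2 - b2*c1 + b3*c4 - b4*c3)
                + a2 * (-b1*c2 + b2*c1 + b3*c4 - b4*c3)
                + a3 * (b1*c4 - b4*c1 - b2*c3 + b3*c2))
assert sp.expand(lhs - rhs) == 0
```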
\subsection{Type 2}\label{section type 2}
Let us first recapitulate \cite[Section Type 2.]{BV}.
A representative is
\begin{equation*}
\omega_2=\alpha_{125}+\alpha_{127}+\alpha_{147}-\alpha_{237}+\alpha_{346}+\alpha_{347}.
\end{equation*}
We will use a basis $\mathcal{B}_2=\{f_5,f_6,f_7,e_1,e_2,e_3,e_4\}$ of $\mathrm V$ where
$$f_5:=e_5+e_6,\ f_6:=-e_5+e_6,\ f_7:=-e_5-e_6+e_7.$$
For a dual basis $\mathcal{B}_2^\ast=\{\beta_1,\ldots,\beta_7\}$ we have
$\beta_1=\frac{1}{2}(\alpha_5+\alpha_6+2\alpha_7),\ \beta_2=\frac{1}{2}(-\alpha_5+\alpha_6),\ \beta_3=\alpha_7$ and $\beta_{i+3}=\alpha_i,\ i=1,\dots,4$
and so
$$\alpha_i=\beta_{i+3},\ i=1,\dots,4,\ \alpha_5=\beta_1-\beta_2-\beta_3,\ \alpha_6=\beta_1+\beta_2-\beta_3,\ \alpha_7=\beta_3.$$
Then
\begin{equation}
\omega_2=\beta_{145}+\beta_{167}-\beta_{245}+\beta_{267}+\beta_{347}-\beta_{356}.
\end{equation}
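The change of basis can be double-checked by expanding $\omega_2$ in the $\beta$'s directly. A small Python sketch (the helpers `wedge3` and `add` are ours; indices in the code are 0-based):

```python
from itertools import combinations, permutations

EVEN = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}

def wedge3(u, v, w):
    """Wedge of three 1-forms (coefficient 7-vectors, 0-based indices),
    returned as a dict {sorted index triple: coefficient}."""
    out = {}
    for t in combinations(range(7), 3):
        c = sum((1 if p in EVEN else -1) * u[t[p[0]]] * v[t[p[1]]] * w[t[p[2]]]
                for p in permutations(range(3)))
        if c:
            out[t] = c
    return out

def add(f, g, s=1):
    """Add s*g to the 3-form f, dropping zero coefficients."""
    out = dict(f)
    for t, c in g.items():
        out[t] = out.get(t, 0) + s * c
        if not out[t]:
            del out[t]
    return out

def e(i):
    v = [0] * 7
    v[i] = 1
    return v

# alpha_i in beta-coordinates (1-based): a_i = b_{i+3} for i = 1..4,
# a5 = b1 - b2 - b3, a6 = b1 + b2 - b3, a7 = b3
a = {i: e(i + 2) for i in (1, 2, 3, 4)}
a[5] = [1, -1, -1, 0, 0, 0, 0]
a[6] = [1, 1, -1, 0, 0, 0, 0]
a[7] = e(2)

# omega_2 = a_{125} + a_{127} + a_{147} - a_{237} + a_{346} + a_{347}
omega2 = {}
for (i, j, k), s in [((1, 2, 5), 1), ((1, 2, 7), 1), ((1, 4, 7), 1),
                     ((2, 3, 7), -1), ((3, 4, 6), 1), ((3, 4, 7), 1)]:
    omega2 = add(omega2, wedge3(a[i], a[j], a[k]), s)

# b_{145} + b_{167} - b_{245} + b_{267} + b_{347} - b_{356} (0-based triples)
assert omega2 == {(0, 3, 4): 1, (0, 5, 6): 1, (1, 3, 4): -1,
                  (1, 5, 6): 1, (2, 3, 6): 1, (2, 4, 5): -1}
```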
The subspace $\mathrm V_3:=[f_5,f_6,f_7]$ is $\mathrm O_2$-invariant and we put $\mathrm W_4:=\mathrm V/\mathrm V_3$. We denote by $B_3$ the symmetric bilinear form on $\mathrm V_3$ for which $\{f_5,f_6,f_7\}$ is an orthogonal basis with $B_3(f_5,f_5)=1,\ B_3(f_6,f_6)=B_3(f_7,f_7)=-1$. The 3-form induces a quadratic form $Q(v)=2(c_1c_4-c_2c_3)$, where $v=(c_1,\dots,c_7)$. The associated symmetric bilinear form $B$ descends to a bilinear form $B_4$ on $\mathrm W_4$. We denote by $Q_3$ and $Q_4$ the quadratic forms on $\mathrm V_3$ and $\mathrm W_4$ associated to $B_3$ and $B_4$, respectively. We have $\mathrm O_2\subset\mathrm{Stab}([B_3])\cap\mathrm{Stab}([B])$.
The insertion map $v\mapsto i_v\omega_2$ induces an injective linear map $\lambda:\mathrm V_3\rightarrow\Lambda^2\mathrm W_4^\ast$. We put $\sigma_i:=\lambda(f_{4+i}),\ i=1,2,3$. As $B_4$ is non-degenerate, there are unique $E,F,G\in\mathrm{End}(\mathrm W_4)$ such that
\begin{eqnarray*} &&\sigma_1= B_4\circ E,\ \sigma_2=B_4\circ F,\ \sigma_3=B_4\circ G \end{eqnarray*}
where we use the notation from (\ref{type 7 quaternionic triple}).
Then $E^2=-F^2=-Id_{\mathrm W_4}$ and $EF=-FE=G$ and hence, $\mathrm W_4$ is a 1-dimensional free ${\tilde{\mathbb H}}$-module.
There is a split short exact sequence $0\rightarrow\mathrm O_2^+\rightarrow\mathrm O_2\xrightarrow{\det}\mathbb R^\ast\rightarrow0$. The restriction map $\mathrm O^+_2\rightarrow\mathrm GL(\mathrm V_3),\ \varphi\mapsto\varphi|_{\mathrm V_3}$, induces another split short exact sequence $0\rightarrow\mathrm K\rightarrow\mathrm O^+_2\rightarrow\mathrm{SO}(Q_3)\rightarrow0$. Next, any $\varphi\in\mathrm O_2$ induces $\tilde\varphi\in\mathrm GL(\mathrm W_4)$. If $\varphi\in\mathrm K$, then $\tilde\varphi\in\mathrm GL_{\tilde{\mathbb H}}(\mathrm W_4)\cap\mathrm{Stab}(B_4)\cong\mathrm{\widetilde{Sp}}(1)$. We obtain another split short exact sequence $0\rightarrow\mathrm L\rightarrow\mathrm K\rightarrow\mathrm{\widetilde{Sp}}(1)\rightarrow0$. The group $\mathrm L$ is isomorphic to $\mathbb R^8$. Notice that
$\ell_{\mathcal{B}_2}=\left(
\begin{matrix}
1_3&\ast\\
0&1_4\\
\end{matrix}
\right)$
for every $\ell\in\mathrm L$. All together, $\mathrm O_2$ is isomorphic to a semi-direct product
\begin{equation}\label{semi-direct product type 2}
((\mathbb R^8\rtimes\mathrm{\widetilde{Sp}}(1))\rtimes\mathrm{SO}(1,2))\rtimes\mathbb R^\ast
\end{equation}
as claimed in \cite[5. Proposition]{BV}.
\begin{center}
\textemdash\ x \textemdash
\end{center}
We can now continue. Let $\Phi_2:\mathrm V\rightarrow\Im{\tilde{\mathbb O}}$ be a linear isomorphism that sends $\mathcal{B}_2$ to the basis (\ref{basis of pO}). Recall from Section \ref{section subgroups} that $\mathrm{SO}(2,2)_{3,4}\subset\widetilde{\mathrm{SO}}(2,2)_{3,4}\subset\mathrm GL(\Im{\tilde{\mathbb O}})$. Put $\widetilde{\mathrm H}_2:=\Phi_2^\ast\widetilde{\mathrm{SO}}(2,2)_{3,4}$ and $\mathrm H_2:=\Phi_2^\ast\mathrm{SO}(2,2)_{3,4}$.
\begin{lemma}\label{lemma inclusion type 2}
The subspaces $\mathrm V_3$ and $\mathrm V_4:=[e_1,\dots,e_4]$ are $\widetilde{\mathrm H}_2$-invariant and
$\mathrm H_2\subset\mathrm O_2$.
\end{lemma}
\begin{proof}
The first claim follows from the fact that $\Phi_2(\mathrm V_3)=\Im{\tilde{\mathbb H}}$ and $\Phi_2(\mathrm V_4)={\tilde{\mathbb H}}$ are $\widetilde{\mathrm{SO}}(2,2)_{3,4}$-invariant subspaces.
From Lemma \ref{lemma type 5 II} it follows that $\omega_2=\Phi_2^\ast\omega_{{\tilde{\mathbb O}}}-\beta_1\wedge\beta_2\wedge\beta_3$. The second claim is then a consequence of Lemma \ref{lemma invariance of omega 5}.
\end{proof}
Let $\mathrm{R}^+$ be the group from (\ref{group R+}). It is easy to see that $\mathrm{R}^+\subset\mathrm O_2$. From Lemma \ref{lemma inclusion type 2} it follows that $\mathrm{R}^+$ commutes with $\widetilde{\mathrm H}_2$. Using the definition of $\mathrm L$, one can easily check that $g.\ell.g^{-1}\in\mathrm L$ whenever $\ell\in\mathrm L,\ g\in\widetilde{\mathrm H}_2\times\mathrm{R}^+$. We see that $\mathrm L,\ \widetilde{\mathrm H}_2$ and $\mathrm{R}^+$ generate a group $\mathrm L\rtimes(\widetilde{\mathrm H}_2\times\mathrm{R}^+)$ inside $\mathrm O_2$.
\begin{lemma}\label{lemma semi-direct product type 2}
$\mathrm O_2=\mathrm L\rtimes(\widetilde{\mathrm H}_2\times\mathrm{R}^+)$.
\end{lemma}
\begin{proof}
Let $v\in\mathrm V_3$. The map $\Phi_2$ restricts to an isomorphism $\mathrm V_3\rightarrow\Im{\tilde{\mathbb H}}$ and so there is a unique $a\in\Im{\tilde{\mathbb H}}$ so that $\Phi_2(v)=(a,0)$.
The composition $\mathrm V\xrightarrow{\Phi_2}\Im{\tilde{\mathbb O}}\rightarrow{\tilde{\mathbb H}}$ where the second map is the canonical projection descends to an isomorphism $\underline\Phi_2:\mathrm W_4\rightarrow{\tilde{\mathbb H}}$. From the summary given above, there is a unique $A\in\mathrm{End}(\mathrm W_4)$ such that $\lambda(v)=B_4\circ A$.
Following the proof of Lemma \ref{lemma semi-direct product type 7}, we find that $A=\underline\Phi_2^\ast(r_a)$.
It is easy to see that $\det:\mathrm L\rtimes(\widetilde{\mathrm H}_2\times\mathrm{R}^+)\rightarrow\mathbb R^\ast$ is surjective and that $\ker(\det)=\mathrm L\rtimes\mathrm H_2$.
Recall from Lemma \ref{lemma split ses of groups} that $\tilde{\tilde\pi}$ induces the split short exact sequence (\ref{split ses with SO432}), where $\mathrm{\widetilde{Sp}}(1)=\{\tilde{\tilde\phi}_{1,B}:\ B\in\mathrm{\widetilde{Sp}}(1)\}$ and $\tilde{\tilde\phi}_{1,B}(X,Y)=(X,B.Y)$. From this it easily follows that $\mathrm H_2$ is a splitting of $\mathrm{\widetilde{Sp}}(1)\rtimes\mathrm{SO}(1,2)$ in (\ref{semi-direct product type 2}). The claim now follows from the definition of $\mathrm L$.
\end{proof}
It remains to find a maximal compact subgroup of $\mathrm O_2$. For simplicity, we will only consider a maximal connected compact subgroup.
The group $\mathrm{SO}(2)$ is a maximal compact subgroup of $\mathrm{SL}(2,\mathbb R)\cong\mathrm{\widetilde{Sp}}(1)$. From the proof of Lemma \ref{lemma split ses of groups} it follows that the subgroup $\mathrm K^o$ of $\mathrm{SO}(2,2)_{3,4}$ generated by $\mathrm{SO}(2)_1:=\{\tilde{\tilde\phi}_{1,B}:\ B\in\mathrm{SO}(2)\}$ and $\mathrm{SO}(2)_2:=\{\tilde{\tilde\phi}_{A,A}:\ A\in\mathrm{SO}(2)\}$ is a semi-direct product $\mathrm{SO}(2)_1\rtimes\mathrm{SO}(2)_2$. As $\mathrm{SO}(2)$ is commutative, this product is in fact direct. Put $\mathrm K^o_2:=\Phi_2^\ast\mathrm K^o$.
\begin{thm}\label{thm max compact type 2}
The group $\mathrm K^o_2\cong\mathrm{SO}(2)\times\mathrm{SO}(2)$ is a maximal connected and compact subgroup of $\mathrm O_2$. There is an isomorphism of $\mathrm K_2^o$-modules
\begin{equation}
\mathrm V\cong\mathbb R\oplus\mathbb C_1\oplus\mathbb C_2\oplus\mathbb C_1\otimes_\mathbb C\mathbb C_2
\end{equation}
where we denote by $\mathbb C_i,\ i=1,2$ the standard complex representation of the $i$-th factor of $\mathrm{SO}(2)\times\mathrm{SO}(2)$ on $\mathbb C=\mathbb R^2$.
\end{thm}
\begin{proof}
By Lemma \ref{lemma max compact in upper trian}, $\{1_7\}$ is a maximal compact subgroup of $\mathrm L$. It is also clear that $\{1_7\}$ is a maximal compact subgroup of $\mathrm{R}^+$. Hence, a maximal compact subgroup of $\widetilde{\mathrm H}_2$ is a maximal compact subgroup of $\mathrm O_2$.
From Lemma \ref{lemma max compact in product} it easily follows that $\mathrm K^o$ is a maximal compact subgroup of the connected component of the identity element of $\mathrm{SO}(2,2)_{3,4}$.
Hence, $\mathrm K^o_2$ is a maximal compact subgroup of the connected component $\mathrm O_2^o$ of the identity element of $\mathrm O_2$. By the Cartan-Iwasawa-Malcev theorem, any two maximal compact subgroups of $\mathrm O_2^o$ are conjugate and hence isomorphic. The first claim follows and it remains to show the second claim.
Put
$$\mathcal{B}_{{\tilde{\mathbb O}}}:=\{(\tilde I,0),(\tilde J,0),(-\tilde K,0),(0,1),(0,\tilde I),(0,\tilde J),(0,-\tilde K)\}.$$
Let $R_t:\mathbb R^2\rightarrow\mathbb R^2$ be the anti-clockwise rotation by the angle $t\in\mathbb R$. With respect to the decomposition of $\mathcal{B}_{{\tilde{\mathbb O}}}$ into blocks of sizes $1,2,2,2$, it is straightforward to find that
\begin{equation}\label{element of max compact type 2}
(\tilde{\tilde\phi}_{1,R_s})_{\mathcal{B}_{{\tilde{\mathbb O}}}}=
\left(
\begin{matrix}
1&0&0&0\\
0&1_2&0&0\\
0&0&R_{s}&0\\
0&0&0&R_{s}\\
\end{matrix}
\right),\ (\tilde{\tilde\phi}_{R_t,R_t})_{\mathcal{B}_{{\tilde{\mathbb O}}}}=
\left(
\begin{matrix}
1&0&0&0\\
0&R_{2t}&0&0\\
0&0&1_2&0\\
0&0&0&R_{2t}\\
\end{matrix}
\right)
\end{equation}
and the second claim now easily follows.
\end{proof}
\subsection{Type 6}\label{section type 6}
A representative is
\begin{equation}
\omega_6=\alpha_{127}-\alpha_{136}+ \alpha_{145}+\alpha_{235}+\alpha_{246}.
\end{equation}
Invariant subspaces are $\triangle_6^2=[e_7]$ and $\triangle_6^3=[e_3,e_4,\dots,e_7]$. We denote them by $\mathrm V_1$ and $\mathrm V_5$, respectively. We put $\mathrm W_2:=\mathrm V/\mathrm V_5,\ \mathrm Z_4:=\mathrm V_5/\mathrm V_1$. The 3-form $\omega_6$ induces a quadratic form $Q(v)=c_1^2+c_2^2,\ v=(c_1,\dots,c_7)$. Let $B$ be the associated bilinear form. Then $\mathrm O_6\subset\mathrm{Stab}([B])$. The form $Q$ descends to a regular quadratic form $Q_2$ on $\mathrm W_2$. We denote by $B_2$ the associated bilinear form.
The insertion map $v\mapsto i_v\omega_6|_{V_5}$ induces a monomorphism $\lambda:\mathrm W_2\rightarrow\Lambda^2\mathrm Z_4^\ast$ and we put $\sigma_i:=\lambda(e_i),\ i=1,2$.
Each $\varphi\in\mathrm O_6$ induces $\tilde\varphi\in\mathrm GL(\mathrm W_2)$. Since $\tilde\varphi\in\mathrm{Stab}([B_2])$, we get a map $\mu:\mathrm O_6\rightarrow\mathrm{CO}(Q_2)$ and a split short exact sequence $0\rightarrow\mathrm O_6^1\rightarrow\mathrm O_6\xrightarrow{\det\mu}\mathbb R^\ast\rightarrow0$. Then $\mu|_{\mathrm O_6^1}$ induces another split short exact sequence $0\rightarrow\mathrm L\rightarrow\mathrm O_6^1\rightarrow\mathrm{SO}(Q_2)\rightarrow0$. Each $\varphi\in\mathrm O_6$ descends to $\bar\varphi\in\mathrm GL(\mathrm Z_4)$ and we get a map $\rho:\mathrm O_6\rightarrow\mathrm GL(\mathrm Z_4)$. If $\varphi\in\mathrm L$, then $\bar\varphi.\sigma_i=\sigma_i,\ i=1,2$. It can be shown that $\mathrm{Stab}(\sigma_1)\cap\mathrm{Stab}(\sigma_2)\cong\mathrm{SL}(2,\mathbb C)$ and we get a split short exact sequence $0\rightarrow\mathrm M\rightarrow\mathrm L\rightarrow\mathrm{SL}(2,\mathbb C)\rightarrow0$. Moreover, it can be shown that
$\varphi_{\mathcal{B}_{st}}=\left(
\begin{matrix}
1_2&0&0\\
\ast&1_4&0\\
\ast&\ast&1\\
\end{matrix}
\right)$ for every $\varphi\in\mathrm M$ and that $\mathrm M\cong(\mathbb R^6\rtimes\mathbb R^2)\rtimes\mathbb R^2$. All together, $\mathrm O_6$ is isomorphic to a semi-direct product
\begin{equation}\label{semidirect product type 6}
((((\mathbb R^6\rtimes\mathbb R^2)\rtimes\mathbb R^2)\rtimes\mathrm{SL}(2,\mathbb C))\rtimes\mathrm{SO}(2))\rtimes\mathbb R^\ast
\end{equation}
as claimed in \cite[9. Proposition]{BV}.
\begin{center}
\textemdash\ x \textemdash
\end{center}
Let us first relabel the standard basis. We put
$$f_1:=e_7,\ f_2:=e_1,\ f_3:=e_2,\ f_4:=e_4,\ f_5:=-e_3,\ f_6:=-e_5,\ f_7:=-e_6.$$ Let $\mathcal{B}_6^\ast=\{\beta_1,\dots,\beta_7\}$ be a dual basis to $\mathcal{B}_6:=\{f_1,f_2,\dots,f_7\}$. Then we have $$\beta_1=\alpha_7,\ \beta_2=\alpha_1,\ \beta_3=\alpha_2,\ \beta_4=\alpha_4,\ \beta_5=-\alpha_3,\ \beta_6=-\alpha_5,\ \beta_7=-\alpha_6.$$ From this one finds that
\begin{equation}
\omega_6=\beta_{123}-\beta_{246}-\beta_{257}-\beta_{347}+\beta_{356}.
\end{equation}
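Since the relabeling is a signed permutation of the basis vectors, the coefficients of $\omega_6$ transform by a signed permutation of index triples. A short Python check (the helper `relabel` is ours; indices are 1-based):

```python
def relabel(form, sub):
    """Rewrite a 3-form, stored as {sorted index triple: coefficient}, under
    a signed substitution alpha_i -> sign * beta_j, given as sub[i] = (j, sign)."""
    out = {}
    for (i, j, k), c in form.items():
        idx = [sub[i][0], sub[j][0], sub[k][0]]
        s = c * sub[i][1] * sub[j][1] * sub[k][1]
        # bubble-sort the three indices, tracking the permutation parity
        for _ in range(3):
            for b in range(2):
                if idx[b] > idx[b + 1]:
                    idx[b], idx[b + 1] = idx[b + 1], idx[b]
                    s = -s
        out[tuple(idx)] = out.get(tuple(idx), 0) + s
    return {t: c for t, c in out.items() if c}

# omega_6 = a_{127} - a_{136} + a_{145} + a_{235} + a_{246}
omega6 = {(1, 2, 7): 1, (1, 3, 6): -1, (1, 4, 5): 1, (2, 3, 5): 1, (2, 4, 6): 1}
# a1=b2, a2=b3, a3=-b5, a4=b4, a5=-b6, a6=-b7, a7=b1
sub6 = {1: (2, 1), 2: (3, 1), 3: (5, -1), 4: (4, 1),
        5: (6, -1), 6: (7, -1), 7: (1, 1)}
# expected: b_{123} - b_{246} - b_{257} - b_{347} + b_{356}
expected6 = {(1, 2, 3): 1, (2, 4, 6): -1, (2, 5, 7): -1,
             (3, 4, 7): -1, (3, 5, 6): 1}
assert relabel(omega6, sub6) == expected6
```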
Consider the linear isomorphism $\mathbb C^2\rightarrow\mathbb H,\ (x_1+iy_1, x_2+iy_2)\mapsto x_1+iy_1+y_2j+x_2k$. This isomorphism is complex linear with respect to right multiplication by $i$ on $\mathbb H$. It induces an embedding $\mathrm{SL}(2,\mathbb C)\hookrightarrow\mathrm GL(\mathbb H)$. We compose this with the canonical map $\mathrm GL(\mathbb H)\hookrightarrow\mathrm GL(\Im\mathbb H\oplus\mathbb H),\ \varphi\mapsto Id_{\Im\mathbb H}\oplus\varphi$, so that we can view $\mathrm{SL}(2,\mathbb C)$ as a subgroup of $\mathrm GL(\Im\mathbb H\oplus\mathbb H)$.
Put $\widetilde{\mathrm U}(1):=\{p\in\mathrm{Sp}(1):p.i=\pm i.p\}$. It is easy to see that $\widetilde{\mathrm U}(1)$ has two connected components $\widetilde{\mathrm U}(1)_0=\{e^{it}:\ t\in\mathbb R\}$ and $\widetilde{\mathrm U}(1)_1=\{e^{it}.j:\ t\in\mathbb R\}$. We put $\mathrm K_6':=\{\tilde\phi_{p,q}:\ p\in\widetilde{\mathrm U}(1),\ q\in\mathrm{Sp}(1)\}$. It is easy to see that $\mathrm K_6'$ is a subgroup of $\mathrm GL(\Im\mathbb H\oplus\mathbb H)$ and we denote by $\mathrm H_6'$ the group generated by $\mathrm K_6'$ and $\mathrm{SL}(2,\mathbb C)$.
\begin{lemma}\label{lemma split ses with H_6'}
The map $\tilde\pi_6:\mathrm H_6'\rightarrow\mathrm{SO}(\Im\mathbb H),\ \tilde\pi_6(\varphi)=\varphi|_{\Im\mathbb H}$, induces a split short exact sequence
\begin{equation}\label{ses with H_6'}
0\rightarrow\mathrm{SL}(2,\mathbb C)\rightarrow\mathrm H_6'\rightarrow\mathrm O(2)\rightarrow0
\end{equation}
where $\mathrm O(2)\subset\mathrm{SO}(\Im\mathbb H)$ is the stabilizer of the line spanned by $i$. The group $\mathrm K_6'$ is a maximal compact subgroup of $\mathrm H'_6$.
\end{lemma}
\begin{proof}
It is easy to see that the image of $\tilde\pi_6$ is $\mathrm O(2)$. From Lemma \ref{lemma split ses of groups} it follows that $\ker\tilde\pi_6$ is generated by $\mathrm{SL}(2,\mathbb C)$ and by $\mathrm{Sp}(1)=\{\tilde\phi_{1,q}:\ q\in\mathrm{Sp}(1)\}$. From (\ref{SO434}) it is clear that $\mathrm{Sp}(1)\subset\mathrm{SL}(2,\mathbb C)$ and hence, $\tilde\pi_6$ induces the short exact sequence (\ref{ses with H_6'}). This sequence is split by repeating the proof of Lemma \ref{lemma split ses of groups}. It remains to prove the second claim.
The group $\mathrm K_6'$ is a compact subgroup of $\mathrm H_6'=\mathrm{SL}(2,\mathbb C)\rtimes\mathrm O(2)$. We have that $\mathrm K_6'\cap\mathrm O(2)=\mathrm O(2)$ and that $\mathrm K_6'\cap\mathrm{SL}(2,\mathbb C)=\mathrm{Sp}(1)$ is a maximal compact subgroup of $\mathrm{SL}(2,\mathbb C)$. The claim follows from Lemma \ref{lemma max compact in product}.
\end{proof}
Let us view ${\tilde{\mathbb O}}$ as $\widetilde{\mathbb{CD}}(\mathbb H)$. Let $\Phi_6:\mathrm V\rightarrow\Im{\tilde{\mathbb O}}=\Im\mathbb H\oplus\mathbb H$ be the linear isomorphism which maps the basis $\mathcal{B}_6$ to the basis (\ref{standard basis of O}) of $\Im{\tilde{\mathbb O}}$. We put $\mathrm H_6:=\Phi_6^\ast\mathrm H_6'$ and $\mathrm K_6:=\Phi_6^\ast\mathrm K_6'$. These are by definition subgroups of $\mathrm GL(\mathrm V)$.
\begin{lemma}\label{lemma inclusion type 6}
The subspaces $[f_1],\ [f_2,f_3],\ [f_4,\dots,f_7]$ are $\mathrm H_6$-invariant and
$\mathrm H_6\subset\mathrm O_6$.
\end{lemma}
\begin{proof}
Notice that $[f_1]=\Phi_6^{-1}([(i,0)]),\ [f_2,f_3]=\Phi_6^{-1}([(j,0),(k,0)])$ and $[f_4,\dots,f_7]=\Phi_6^{-1}({\tilde{\mathbb H}})$. Hence, the first claim follows from the definition of $\mathrm H_6'$ and Lemma \ref{lemma split ses with H_6'}. Notice that $\mathrm H_6\subset\mathrm{Stab}(\beta_1)\cap\mathrm{Stab}(\beta_{23})$.
The 3-form $\Phi_6^\ast\omega_{{\tilde{\mathbb O}}}$ is obtained from $\Phi_5^\ast\omega_{{\tilde{\mathbb O}}}$ given in (\ref{type 5.I}) by replacing each $\alpha_i$ by $\beta_i,\ i=1,2,\dots,7$. We see that $\omega_6+\beta_1\wedge\gamma_1=\Phi_6^\ast\omega_{{\tilde{\mathbb O}}}$ where $\gamma_1:=-\beta_4\wedge\beta_5+\beta_6\wedge\beta_7$.
The group $\mathrm H_6$ is generated by $\mathrm K_6$ and $\Phi_6^\ast\mathrm{SL}(2,\mathbb C)$. Hence, it is enough to show that $\mathrm K_6\subset\mathrm{Stab}(\omega_6)$ and $\Phi_6^\ast\mathrm{SL}(2,\mathbb C)\subset\mathrm{Stab}(\omega_6)$.
Let $\varphi\in\mathrm K_6$. As $\mathrm K_6'\subset\mathrm{SO}(4)_{3,4}\subset\tilde{\mathrm{G}}_2$, it follows that $\mathrm K_6\subset\Phi_6^\ast\tilde{\mathrm{G}}_2=\mathrm{Stab}(\Phi^\ast_6\omega_{{\tilde{\mathbb O}}})$. Hence, it is enough to show $\varphi.\gamma_1=\gamma_1$. We have
\begin{eqnarray*}
\varphi.(\Phi_6^\ast\omega_{{\tilde{\mathbb O}}})&=&\varphi.(\beta_{123}+\beta_1\wedge\gamma_1-\beta_{246}-\beta_{257}+\beta_{356}-\beta_{347})\\
&=&\beta_{123}+\beta_1\wedge\varphi.\gamma_1+\varphi.(-\beta_{246}-\beta_{257}+\beta_{356}-\beta_{347})\\
&=&\beta_{123}+\beta_1\wedge\varphi.\gamma_1+\beta_2\wedge\gamma_2+\beta_3\wedge\gamma_3
\end{eqnarray*}
for some 2-forms $\gamma_2,\gamma_3$ which belong to the subalgebra of $\Lambda^\bullet\mathrm V^\ast$ generated by $\beta_4,\beta_5,\beta_6,\beta_7$. But since $\varphi.(\Phi_6^\ast\omega_{{\tilde{\mathbb O}}})=\Phi_6^\ast\omega_{{\tilde{\mathbb O}}}$, it follows that $\varphi.\gamma_1=\gamma_1$ and $\gamma_2=-\beta_{46}-\beta_{57},\ \gamma_3=\beta_{56}-\beta_{47}$.
As $\mathrm{SL}(2,\mathbb C)$ acts by the identity on $\Im\mathbb H$, we know that $\beta_i,\ i=1,2,3$, are invariant under $\Phi_6^\ast\mathrm{SL}(2,\mathbb C)$. The standard complex volume form on $\mathbb C^2$ induces, via the isomorphism $\mathbb C^2\rightarrow\mathbb H$ and the inclusion $\mathbb H\hookrightarrow\Im\mathbb H\oplus\mathbb H$ given above, a complex 2-form $\theta$ on $\Im\mathbb H\oplus\mathbb H$. It is straightforward to verify that $\Phi_6^\ast\theta=-\gamma_3-i\gamma_2$. Hence, $\Phi_6^\ast\mathrm{SL}(2,\mathbb C)\subset\mathrm{Stab}(\gamma_2)\cap\mathrm{Stab}(\gamma_3)$.
\end{proof}
We will need one more subgroup of $\mathrm GL(\mathrm V)$. Put
\begin{equation}
\mathrm{R}^+:=\bigg\{\varphi\in\mathrm GL(\mathrm V)\bigg|\ \varphi_{\mathcal{B}_6}=\left(
\begin{matrix}
\lambda^{-2}&0&0\\
0&\lambda.1_2&0\\
0&0&\lambda^{-\frac{1}{2}}.1_4\\
\end{matrix}
\right),\ \lambda>0
\bigg\}.
\end{equation}
It is easy to check that $\mathrm{R}^+$ is a subgroup of $\mathrm O_6$. From Lemma \ref{lemma inclusion type 6} it follows that $\mathrm{R}^+$ commutes with $\mathrm H_6$.
By the summary given above, $g.m.g^{-1}\in\mathrm M$ whenever $m\in\mathrm M,\ g\in\mathrm H_6\times\mathrm{R}^+$. This shows that $\mathrm O_6$ contains a subgroup $\mathrm M\rtimes(\mathrm H_6\times\mathrm{R}^+)$.
\begin{lemma}\label{lemma semidirect product type 6}
$\mathrm O_6=\mathrm M\rtimes(\mathrm H_6\times\mathrm{R}^+)$.
\end{lemma}
\begin{proof}
Put $\mathrm O_6':=\mathrm M\rtimes(\mathrm H_6\times\mathrm{R}^+)$. Then $\mu|_{\mathrm O_6'}$ induces an epimorphism $\mathrm O_6'\rightarrow\mathrm{CO}(Q_2)$. By Lemma \ref{lemma split ses with H_6'}, we get a split short exact sequence $0\rightarrow\mathrm L'\rightarrow\mathrm O_6'\rightarrow\mathrm{CO}(Q_2)\rightarrow0$ where $\mathrm L'=\mathrm M\rtimes\Phi_6^\ast\mathrm{SL}(2,\mathbb C)$. Then $\rho|_{\mathrm L'}$ induces a split short exact sequence $0\rightarrow\mathrm M\rightarrow\mathrm L'\rightarrow\Phi_6^\ast\mathrm{SL}(2,\mathbb C)\rightarrow0$. By the proof of Lemma \ref{lemma inclusion type 6}, $\Phi_6^\ast\mathrm{SL}(2,\mathbb C)\subset\mathrm{Stab}(\gamma_2)\cap\mathrm{Stab}(\gamma_3)$. It is straightforward to verify that the reverse inclusion also holds. The 2-form $\gamma_{i},\ i=2,3$, descends to the form $\sigma_{i-1}$ on $\mathrm Z_4$. Thus $\Phi_6^\ast\mathrm{SL}(2,\mathbb C)=\mathrm{Stab}(\sigma_1)\cap\mathrm{Stab}(\sigma_2)$ and so $\Phi_6^\ast\mathrm{SL}(2,\mathbb C)$ is a splitting of the subgroup $\mathrm{SL}(2,\mathbb C)$ from (\ref{semidirect product type 6}). The claim now follows from the definition of $\mathrm M$.
\end{proof}
\begin{thm}\label{thm max compact type 6}
$\mathrm K_6$ is a maximal compact subgroup of $\mathrm O_6$.
\end{thm}
\begin{proof}
By Lemma \ref{lemma max compact in upper trian}, $\{1_7\}$ is a maximal compact subgroup of $\mathrm M$. The same is obviously true also for $\mathrm{R}^+$. From Lemma \ref{lemma split ses with H_6'} it follows that $\mathrm K_6$ is a maximal compact subgroup of $\mathrm H_6$. The claim is then a consequence of Lemma \ref{lemma max compact in product I}.
\end{proof}
\subsection{Type 3}\label{section type 3}
A representative is
\begin{equation}
\omega_3=\alpha_{123}-\alpha_{167}+\alpha_{145}.
\end{equation}
Then $\mathrm V_6:=\triangle_3^2=\triangle_3^3=[e_2,\ldots,e_7]$. This is an $\mathrm O_3$-invariant subspace. We put $\mathrm W_1:=\mathrm V/\mathrm V_6$. The insertion map $v\mapsto i_v\omega_3$ induces a monomorphism $\lambda:\mathrm W_1\rightarrow\Lambda^2\mathrm V_6^\ast$. The image of $\lambda$ is an $\mathrm O_3$-invariant conformally symplectic structure on $\mathrm V_6$. The restriction map $\varphi\mapsto\varphi|_{\mathrm V_6}$ induces a split short exact sequence $0\rightarrow\mathrm K\rightarrow\mathrm O_3\rightarrow\mathrm{CSp}(3,\mathbb R)\rightarrow0$. Moreover, $\mathrm K=
\bigg\{\varphi\in\mathrm GL(\mathrm V)
\bigg|
\ \varphi_{\mathcal{B}_{st}}=
\left(
\begin{matrix}
1&\ast\\
0&1_6
\end{matrix}
\right)
\bigg\}.$
\begin{center}
\textemdash\ x \textemdash
\end{center}
We put $f_1:=e_1,\ f_2:=e_2,\ f_3:=e_4,\ f_4:=e_6,\ f_5:=e_7,\ f_6:=e_5,\ f_7:=e_3$. Let $\mathcal{B}_3^\ast=\{\beta_1,\dots,\beta_7\}$ be a dual basis to $\mathcal{B}_3:=\{f_1,\dots,f_7\}$. Then
\begin{equation}
\omega_3=\beta_{125}+\beta_{136}+\beta_{147}.
\end{equation}
Let $\omega$ be the imaginary part of the standard Hermitian form on $\mathbb C^3$. Let $\mathrm{CSp}(\omega):=\mathrm{Stab}([\omega])$ and $\mathrm{Sp}(\omega):=\mathrm{Stab}(\omega)$. The map $\mu:\mathrm{CSp}(\omega)\rightarrow\mathbb R^\ast$ determined by $ \mu(\varphi).\omega=\varphi.\omega$ is a group homomorphism.
Let $C:\mathbb C^3\rightarrow\mathbb C^3$ be the standard conjugation. Then $C.\omega=-\omega$ and so $C\in\mathrm{CSp}(\omega),\ \mu(C)=-1$. Let $\widetilde\mathrm U(3)$ be the group generated by $\mathrm U(3)$ and $C$. It is easy to see that $\widetilde\mathrm U(3)$ has two connected components, the connected component of the identity $\widetilde\mathrm U(3)_0=\mathrm U(3)$ and $\widetilde\mathrm U(3)_1:=\{\varphi\circ C:\ \varphi\in\mathrm U(3)\}$.
\begin{lemma}\label{lemma split ses with CSp}
There is a split short exact sequence
\begin{equation}
0\rightarrow\mathrm{Sp}(\omega)\rightarrow\mathrm{CSp}(\omega)\xrightarrow{\mu}\mathbb R^\ast\rightarrow0.
\end{equation}
The group $\widetilde\mathrm U(3)$ is a maximal compact subgroup of $\mathrm{CSp}(\omega)$.
\end{lemma}
\begin{proof}
Consider the subgroup $\mathrm{R}^\ast$ of $\mathrm{CSp}(\omega)$ generated by $C$ and $\{\lambda.Id_{\mathbb C^3}:\ \lambda>0\}$. Then it is easy to see that $\mu$ restricts to an isomorphism $\mathrm{R}^\ast\rightarrow\mathbb R^\ast$. The inverse of this isomorphism is a splitting of $\mu$.
We have $\{Id_{\mathbb C^3},C\}=\mathrm O(\mathbb C^3)\cap\mathrm{R}^\ast$ and $\mathrm U(3)=\mathrm{Sp}(\omega)\cap\mathrm O(\mathbb C^3)$. As $\mathrm{CSp}(\omega)$ and $\mathrm{R}^\ast$ are closed in $\mathrm GL(\mathbb C^3)$, the second claim is a corollary of Lemma \ref{lemma max compact in product}.
\end{proof}
For $A\in\mathrm{CSp}(\omega)$ we define $\phi_A\in\mathrm GL(\mathbb R\oplus\mathbb C^3),\ \phi_A(x,v)=(\mu(A).x,Av)$. Let
\begin{equation}
\Phi_3:\mathrm V\rightarrow\mathbb R\oplus\mathbb C^3,\ \
\Phi_3\big(\sum_{i=1}^7 x_if_i\big)=(x_1,(x_2+ix_5,x_3+ix_6,x_4+ix_7)).
\end{equation}
We put $\mathrm H_3:=\{\Phi_3^\ast(\phi_A):\ A\in\mathrm{CSp}(\omega)\}$ and $\mathrm K_3:=\{\Phi_3^\ast(\phi_A):\ A\in\widetilde\mathrm U(3)\}$.
\begin{lemma}
$\mathrm H_3\subset\mathrm O_3$.
\end{lemma}
\begin{proof}
Let $A\in\mathrm{CSp}(\omega)$ and $\alpha\in\mathbb R^\ast$ be the linear form corresponding to $Id_\mathbb R$. Then $\alpha\wedge\omega\in\Lambda^3(\mathbb R\oplus\mathbb C^3)^\ast$ and $$\phi_A.(\alpha\wedge\omega)=\mu(A)^{-1}.\alpha\wedge A.\omega=\mu(A)^{-1}.\alpha\wedge\mu(A).\omega=\alpha\wedge\omega.$$ Since $\omega_3=\Phi_3^\ast(\alpha\wedge\omega)$, the claim follows.
\end{proof}
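For completeness, the identity $\omega_3=\Phi_3^\ast(\alpha\wedge\omega)$ can be checked in coordinates: the real and imaginary parts of the $k$-th complex coordinate pull back to $\beta_{k+1}$ and $\beta_{k+4}$, respectively, so that
\begin{equation*}
\Phi_3^\ast\alpha=\beta_1,\qquad\Phi_3^\ast\omega=\beta_{25}+\beta_{36}+\beta_{47},\qquad\Phi_3^\ast(\alpha\wedge\omega)=\beta_{125}+\beta_{136}+\beta_{147}=\omega_3.
\end{equation*}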
It is clear that $\mathrm K$ and $\mathrm H_3$ generate a subgroup $\mathrm K\rtimes\mathrm H_3$ of $\mathrm O_3$.
\begin{lemma}
$\mathrm O_3=\mathrm K\rtimes\mathrm H_3$.
\end{lemma}
\begin{proof}
The restriction map $\varphi\in\mathrm O_3\mapsto\varphi|_{\mathrm V_6}$ induces an isomorphism $\mathrm H_3\rightarrow\mathrm{CSp}(3,\mathbb R)$. This readily proves the claim.
\end{proof}
\begin{thm}\label{thm max compact type 3}
The group $\mathrm K_3$ is a maximal compact subgroup of $\mathrm O_3$.
\end{thm}
\begin{proof}
It follows from Lemma \ref{lemma max compact in upper trian} that $\{1_7\}$ is a maximal compact subgroup of $\mathrm K$. The claim is then a consequence of Lemmata \ref{lemma split ses with CSp} and \ref{lemma max compact in product I}.
\end{proof}
It is clear that $\mathrm K_3\subset\mathrm{SO}(7)$. We will now show that $\mathrm K_3$ can be viewed as a subgroup of $\mathrm{Spin}^c(7)$, i.e. there is an embedding $\mathrm K_3\rightarrow\mathrm{Spin}^c(7)$ such that the composition $\mathrm K_3\rightarrow\mathrm{Spin}^c(7)\rightarrow\mathrm{SO}(7)$ induces the identity on $\mathrm K_3$. Recall that $\mathrm{Spin}^c(n)$ is the quotient of $\mathrm{Spin}(n)\times\mathrm U(1)$ by the subgroup $\{\pm(1,1)\}$ where $\{\pm1\}=\ker\rho_n$ and $\rho_n:\mathrm{Spin}(n)\rightarrow\mathrm{SO}(n)$ is the usual 2:1 covering. We denote the class of $(a,e^{it})$ in the quotient by $\langle a,e^{it}\rangle$. The canonical homomorphism $\rho_n^c:\mathrm{Spin}^c(n)\rightarrow\mathrm{SO}(n)$ is $\langle a,e^{it}\rangle\mapsto\rho_n(a)$.
\begin{lemma}\label{lemma subgroup of Spin^c(7)}
$\mathrm K_3$ is a subgroup of $\mathrm{Spin}^c(7)$.
\end{lemma}
\begin{proof}
$\widetilde\mathrm U(3)$ has two connected components and so the same is true for $\mathrm K_3$.
The connected component $\mathrm K_3^o$ of the neutral element of $\mathrm K_3$ is isomorphic to $\mathrm U(3)$. It is well known (see \cite[Section 3.4]{Mo}) that $\mathrm U(3)$ is a subgroup of $\mathrm{Spin}^c(6)$. Using the standard inclusion $\mathrm{Spin}^c(6)\hookrightarrow\mathrm{Spin}^c(7)$, we see that there is a subgroup $\tilde\mathrm K_3^o$ of $\mathrm{Spin}^c(7)$ such that $\rho_7^c|_{\tilde\mathrm K_3^o}$ induces an isomorphism of Lie groups $\tilde\mathrm K_3^o\rightarrow\mathrm K_3^o$. Hence, it remains to show that there is a subgroup $\tilde\mathrm K_3$ of $\mathrm{Spin}^c(7)$ which contains $\tilde\mathrm K_3^o$ such that $\rho_7^c$ restricts to an isomorphism of Lie groups $\tilde\mathrm K_3\rightarrow\mathrm K_3$.
Notice that $\psi_{C}:=\Phi_3^\ast(\phi_C)\in\mathrm{SO}(7)$ is determined by $f_i\mapsto- f_i,\ i=1,2,3,4$, and $f_{j}\mapsto f_j,\ j=5,6,7$. Let $B_{\mathrm V}$ be the standard inner product on $\mathrm V$. Let us view $\mathrm{Spin}(7)$ as the subgroup of the Clifford algebra of $(\mathrm V,B_{\mathrm V})$ that is generated by products of an even number of unit vectors from $\mathrm V$. Put $\alpha:= f_1.f_3.f_5.f_7\in\mathrm{Spin}(7)$. Then $\psi_C=\rho_7(\alpha)$ and $\alpha^2=1$. It is easy to verify that we can take as $\tilde\mathrm K_3$ the subgroup of $\mathrm{Spin}^c(7)$ that is generated by $\tilde\mathrm K_3^o$ and $\langle\alpha,1\rangle$.
\end{proof}
\subsection{Type 4}\label{section type 4}
A representative of the orbit is
\begin{equation}
\omega_4=\alpha_{123}-\alpha_{167}+\alpha_{145}+\alpha_{246}.
\end{equation}
With respect to the basis $\mathcal{B}_3$ from Section \ref{section type 3} we have that
\begin{equation}
\omega_4=\beta_{125}+\beta_{136}+\beta_{147}+\beta_{234}.
\end{equation}
Put $\mathrm V_3:=[f_5,f_6,f_7],\ \mathrm V_6:=[f_2,f_3,f_4,f_5,f_6,f_7]$. It is claimed in \cite[Section type 3]{BV} that $\triangle^2_4=\mathrm V_3$ and $\triangle_4^3=\mathrm V_6$ and so these are $\mathrm O_4$-invariant subspaces. We put $\mathrm W_1:=\mathrm V/\mathrm V_6,\ \mathrm W_4:=\mathrm V/\mathrm V_3$ and $\mathrm Z_3:=\mathrm V_6/\mathrm V_3$.
Each $\varphi\in\mathrm O_4$ descends to $\tilde\varphi\in\mathrm GL(\mathrm W_1)\cong\mathbb R^\ast$ and $\hat\varphi\in\mathrm GL(\mathrm Z_3)$. The homomorphism $\mu:\mathrm O_4\rightarrow\mathrm GL(\mathrm W_1),\ \varphi\mapsto\tilde\varphi$ induces a split short exact sequence $0\rightarrow\mathrm O_4^+\rightarrow\mathrm O_4\xrightarrow{\mu}\mathbb R^\ast\rightarrow0$. The 3-form $\omega_4$ induces a volume form on $\mathrm Z_3$. It follows that the image of the homomorphism $\nu:\mathrm O_4\rightarrow\mathrm GL(\mathrm Z_3),\ \varphi\mapsto\hat\varphi$ is contained in $\mathrm{SL}(\mathrm Z_3)$. There is a split short exact sequence $0\rightarrow\mathrm K\rightarrow\mathrm O_4^+\rightarrow\mathrm{SL}(\mathrm Z_3)\rightarrow0$. If $\varphi\in\mathrm K$, then $\varphi_{\mathcal{B}_3}=
\left(
\begin{matrix}
1&0&0\\
\ast&1_3&0\\
\ast&\ast&1_3\\
\end{matrix}
\right)$. Moreover, it can be shown that $\mathrm K\cong(((\mathbb R^3\rtimes\mathbb R^6)\rtimes\mathbb R)\rtimes\mathbb R)\rtimes\mathbb R$.
Fixing an isomorphism $\mathrm Z_3\rightarrow\mathbb R^3$, we obtain an isomorphism between $\mathrm O_4$ and a semi-direct product
\begin{equation}
(((((\mathbb R^3\rtimes\mathbb R^6)\rtimes\mathbb R)\rtimes\mathbb R)\rtimes\mathbb R)\rtimes\mathrm{SL}(3,\mathbb R))\rtimes\mathbb R^\ast
\end{equation}
as in \cite[7. Proposition]{BV}.
\begin{center}
\textemdash\ x \textemdash
\end{center}
Put
\begin{equation}
\mathrm H_4:=\Bigg\{\varphi\in\mathrm GL(\mathrm V): \varphi_{\mathcal{B}_3}=\left(
\begin{matrix}
1&0&0\\
0&A&0\\
0&0&(A^T)^{-1}\\
\end{matrix}
\right),\ A\in\mathrm{SL}(3,\mathbb R) \Bigg\}.
\end{equation}
It is clear that $\mathrm K_4^o:=\mathrm H_4\cap\mathrm{SO}(7)$ is a maximal compact subgroup of $\mathrm H_4$.
\begin{lemma}
$\mathrm H_4\subset\mathrm O_4$.
\end{lemma}
\begin{proof}
Notice that $\omega_4=\omega_3+\beta_{234}$. It is obvious that $\mathrm H_4\subset\mathrm{Stab}(\beta_{234})\cap\mathrm{Stab}(\beta_{1})$ and it is straightforward to verify that $\mathrm H_4\subset\mathrm{Stab}(\beta_{25}+\beta_{36}+\beta_{47})$.
\end{proof}
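Let us make the last verification explicit: in the block form above an element of $\mathrm H_4$ acts by $A$ on $[f_2,f_3,f_4]$ and by $(A^T)^{-1}$ on $[f_5,f_6,f_7]$, and the 2-form in question is precisely the canonical pairing between these two summands, which is preserved since
\begin{equation*}
(Au)^T(A^T)^{-1}v=u^TA^T(A^T)^{-1}v=u^Tv,\qquad u,v\in\mathbb R^3.
\end{equation*}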
Put
\begin{equation}\label{R subgroup of O4}
\mathrm{R}^\ast=\Bigg\{\varphi\in\mathrm GL(\mathrm V):\ \varphi_{\mathcal{B}_3}=\left(
\begin{matrix}
\lambda&0&0\\
0&1_3&0\\
0&0&\lambda^{-1}.1_3\\
\end{matrix}
\right),\ \lambda\in\mathbb R^\ast\Bigg\}.
\end{equation}
It is clear that $\mathrm{R}^\ast\subset\mathrm O_4$ and that $\mathrm{R}^\ast$ commutes with $\mathrm H_4$. For $k\in\mathrm K$ and $g\in\mathrm H_4\times\mathrm{R}^\ast$, one easily checks that $g.k. g^{-1}\in\mathrm K$. This shows that $\mathrm O_4$ contains a subgroup $\mathrm K\rtimes(\mathrm H_4\times\mathrm{R}^\ast)$.
\begin{lemma}
$\mathrm O_4=\mathrm K\rtimes(\mathrm H_4\times\mathrm{R}^\ast)$.
\end{lemma}
\begin{proof}
This easily follows from the summary given above.
\end{proof}
$\mathrm{R}^\ast\cap\mathrm O(7)$ consists of the two elements corresponding to $\lambda=\pm1$. We denote this intersection by $\mathbb Z_2$.
\begin{thm}\label{thm max compact type 4}
The group $\mathrm K_4:=\mathrm K^o_4\times\mathbb Z_2$ is a maximal compact subgroup of $\mathrm O_4$. The group $\mathrm K_4$ is a subgroup of $\mathrm G_2$.
\end{thm}
\begin{proof}
By Lemma \ref{lemma max compact in upper trian}, $\{1_7\}$ is a maximal compact subgroup of $\mathrm K$. Hence, it is (see Lemma \ref{lemma max compact in product I}) enough to show that $\mathrm K_4$ is a maximal compact subgroup of $\mathrm H_4\times\mathrm{R}^\ast$. We have that $\mathrm K_4=(\mathrm H_4\times\mathrm{R}^\ast)\cap\mathrm O(7),\ \mathrm K_4^o=\mathrm H_4\cap\mathrm O(7)$ and $\mathbb Z_2=\mathrm{R}^\ast\cap\mathrm O(7)$. Since $\mathrm H_4$ and $\mathrm{R}^\ast$ are closed subgroups of $\mathrm GL(\mathrm V)$ and since $\mathrm K_4^o$ and $\mathbb Z_2$ are maximal compact subgroups of $\mathrm H_4$ and $\mathrm{R}^\ast$, respectively, the claim follows from Lemma \ref{lemma max compact in product}.
To complete the proof, let $\Phi_4:\mathrm V\rightarrow\Im\mathbb H\oplus\mathbb H$ be the linear isomorphism that sends the basis $\mathcal{B}_3$ to the basis $$\{(0,1),(i,0),(j,0),(k,0),(0,i),(0,j),(0,k)\}.$$ Then it is easy to see that $\mathrm K_4=\Phi_4^\ast\{\phi_{p,\pm 1.p}:\ p\in\mathrm{Sp}(1)\}$. Thus, $\mathrm K_4\subset\Phi_4^\ast\mathrm G_2$.
\end{proof}
\subsection{Type 1}\label{section type 1}
A representative of the orbit is
\begin{eqnarray}
\omega_1=\alpha_{127}+\alpha_{134}+\alpha_{256}.
\end{eqnarray}
We have $\triangle^2_1=\mathrm V^a_3\cup\mathrm V^b_3$ and $\triangle_1^3=\mathrm V_6^a\cup\mathrm V_6^b$ where
\begin{equation*}
\mathrm V^a_3:=[e_3,e_4,e_7],\ \mathrm V^b_3:=[e_5,e_6,e_7], \ \mathrm V_6^a=[e_1,e_3,\dots,e_7],\ \mathrm V_6^b=[e_2,\dots,e_7].
\end{equation*}
A subspace $\mathrm V_1:=\mathrm V^a_3\cap\mathrm V^b_3$ is $\mathrm O_1$-invariant and we put $\mathrm Z_2^a:=\mathrm V_3^a/\mathrm V_1,\ \mathrm Z^b_2:=\mathrm V^b_3/\mathrm V_1$.
Each element $\varphi\in\mathrm O_1$ induces an automorphism $\tilde\varphi$ of $\mathrm Z_2^a\oplus\mathrm Z_2^b$ such that $\tilde\varphi(\mathrm Z_2^a)=\mathrm Z_2^a,\ \tilde\varphi(\mathrm Z_2^b)=\mathrm Z_2^b$ or $\tilde\varphi(\mathrm Z_2^a)=\mathrm Z_2^b,\ \tilde\varphi(\mathrm Z_2^b)=\mathrm Z_2^a$. A map
\begin{displaymath}
sg:\mathrm O_1\rightarrow\mathbb Z_2,\ sg(\varphi)=
\begin{cases}
1&\ \mathrm{if}\ \ \tilde\varphi(\mathrm Z_2^a)=\mathrm Z_2^a,\ \tilde\varphi(\mathrm Z_2^b)=\mathrm Z_2^b\\
-1&\ \mathrm{if}\ \ \tilde\varphi(\mathrm Z_2^a)=\mathrm Z_2^b,\ \tilde\varphi(\mathrm Z_2^b)=\mathrm Z_2^a
\end{cases}
\end{displaymath}
is a split group homomorphism. Put $\mathrm O_1^+:=\ker(sg)$. If $\varphi\in\mathrm O_1^+$, then it is natural to view $\tilde\varphi$ as an element of $\mathrm GL(\mathrm Z_2^a)\times\mathrm GL(\mathrm Z_2^b)$. The map $\varphi\mapsto\tilde\varphi$ induces a split short exact sequence $0\rightarrow\mathrm K\rightarrow\mathrm O_1^+\rightarrow\mathrm GL(\mathrm Z_2^a)\times\mathrm GL(\mathrm Z_2^b)\rightarrow0$. Moreover, it can be shown that
$\varphi_\mathcal{B}=
\left(
\begin{matrix}
1_2&0&0\\
\ast&1_4&0\\
\ast&\ast&1
\end{matrix}
\right)$ for every $\varphi\in\mathrm K$ and that $\mathrm K\cong(H\oplus H)\rtimes\mathbb R^4$ where $H$ is the 3-dimensional Heisenberg group. Altogether, $\mathrm O_1$ is isomorphic to a semi-direct product
\begin{equation}
(((H\oplus H)\rtimes\mathbb R^4)\rtimes(\mathrm GL(\mathrm Z_2^a)\times\mathrm GL(\mathrm Z_2^b)))\rtimes\mathbb Z_2
\end{equation}
as in \cite[4. Proposition]{BV}.
\begin{center}
\textemdash\ x \textemdash
\end{center}
Let $A,B\in\mathrm GL(2,\mathbb R)$. We define two linear automorphisms of $\mathbb R\oplus\mathbb R\oplus\mathbb R^2\oplus\mathbb R^2\oplus\mathbb R$, namely
\begin{align*}
&\phi^+_{A,B}(u_1,u_2,v_1,v_2,u_3):=\bigg(\frac{u_1}{\det A},\frac{u_2}{\det B}, A(v_1),B(v_2),\det A.\det B.u_3\bigg)\ \mathrm{and}\\
&\phi^{-}_{A,B}(u_1,u_2,v_1,v_2,u_3):=\bigg(\frac{u_2}{\det B},\frac{u_1}{\det A}, B(v_2),A(v_1),-\det A.\det B.u_3\bigg)
\end{align*}
where $u_1,u_2,u_3\in\mathbb R,\ v_1,v_2\in\mathbb R^2$. Let
$$\Phi_1:\mathbb R^7\rightarrow \mathbb R\oplus\mathbb R\oplus\mathbb R^2\oplus\mathbb R^2\oplus\mathbb R,\ \Phi_1(x_1,\dots,x_7)=(x_1,x_2,(x_3,x_4),(x_5,x_6),x_7)$$
and put $\varphi^\bullet_{A,B}:=\Phi_1^\ast(\phi^\bullet_{A,B})$ where $\bullet=\pm$.
It is straightforward to verify that
\begin{eqnarray}
\varphi^+_{A,B}\circ\varphi^+_{C,D}=\varphi^+_{A.C,B.D},\ \varphi^{-}_{A,B}\circ\varphi^{-}_{C,D}=\varphi^+_{B.C,A.D},\\
\varphi^+_{A,B}\circ\varphi^{-}_{C,D}=\varphi^{-}_{B.C,A.D},\ \varphi^{-}_{A,B}\circ\varphi^{+}_{C,D}=\varphi^{-}_{A.C,B.D}
\end{eqnarray}
and so $\mathrm H_1:=\{\varphi^\pm_{A,B}|\ A,B\in\mathrm GL(2,\mathbb R)\}$ is a subgroup of $\mathrm GL(\mathrm V)$. The following observation is straightforward.
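As an illustration, the second of these rules follows by a direct computation (identifying $\mathrm V$ with $\mathbb R\oplus\mathbb R\oplus\mathbb R^2\oplus\mathbb R^2\oplus\mathbb R$ via $\Phi_1$):
\begin{align*}
\big(\varphi^{-}_{A,B}\circ\varphi^{-}_{C,D}\big)(u_1,u_2,v_1,v_2,u_3)&=\varphi^{-}_{A,B}\bigg(\frac{u_2}{\det D},\frac{u_1}{\det C},D(v_2),C(v_1),-\det C.\det D.u_3\bigg)\\
&=\bigg(\frac{u_1}{\det(B.C)},\frac{u_2}{\det(A.D)},B.C(v_1),A.D(v_2),\det(B.C).\det(A.D).u_3\bigg),
\end{align*}
which is $\varphi^+_{B.C,A.D}(u_1,u_2,v_1,v_2,u_3)$; the remaining rules are verified in the same way.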
\begin{lemma}\label{lemma inclusion type 1}
$\mathrm H_1\subset\mathrm O_1$
\end{lemma}
It is easy to check that $h.k.h^{-1}\in\mathrm K$ whenever $h\in\mathrm H_1,\ k\in\mathrm K$. Hence, $\mathrm O_1$ contains a subgroup $\mathrm K\rtimes\mathrm H_1$.
\begin{lemma}\label{lemma semi-direct product type 1}
$\mathrm O_1=\mathrm K\rtimes\mathrm H_1$
\end{lemma}
\begin{proof}
It is easy to see that the inclusion $\mathrm H_1\hookrightarrow\mathrm O_1$ is a splitting of the subgroup $(\mathrm GL(Z_2^a)\times\mathrm GL(Z_2^b))\rtimes\mathbb Z_2$. The claim follows from the definition of $\mathrm K$.
\end{proof}
\begin{thm}\label{thm max compact type 1}
A maximal compact subgroup of $\mathrm H_1$ is a maximal compact subgroup of $\mathrm O_1$.
\end{thm}
\begin{proof}
It follows from Lemma \ref{lemma max compact in upper trian} that $\{1_7\}$ is a maximal compact subgroup of $\mathrm K$. The claim is then a consequence of Lemma \ref{lemma max compact in product I}.
\end{proof}
\section{Multisymplectic 3-forms of a fixed algebraic type}\label{section global forms}
\subsection{Characteristic classes of spin and spin$^c$ vector bundles}\label{section spin char class}
We will denote by $\underline\mathbb R^i$ a trivial vector bundle with fiber $\mathbb R^i$. For a real vector bundle $\xi$ we denote by $w_i(\xi), \ p_1(\xi)$ and $e(\xi)$ the $i$-th Stiefel-Whitney class, the first Pontryagin class and the Euler class of $\xi$, respectively. If $\xi$ is a complex vector bundle, then $c_i(\xi)$ is the $i$-th Chern class of $\xi$. We will need (see also \cite[Section 2]{CCV}) two more characteristic classes, one is defined for a spin vector bundle and the other is defined for a spin$^c$ vector bundle.
Suppose that $\xi$ is a spin vector bundle over a base $N$. Then there is a class $q(\xi)\in H^4(N,\mathbb Z)$ which is independent of the choice of a spin structure on $\xi$. We have:
\begin{equation}
q(\xi\oplus\xi')=q(\xi)+ q(\xi') \ \ \mathrm{and} \ \ 2q(\xi)=p_1(\xi)
\end{equation}
if $\xi$ and $\xi'$ are spin. If $\xi$ is a complex bundle, then $\xi$ admits a spin structure if, and only if $c_1(\xi)$ is divisible by two, say $2m=c_1(\xi)$ for some $m\in H^2(N,\mathbb Z)$. Then
\begin{equation}
q(\xi)=2m^2-c_2(\xi).
\end{equation}
If $\xi=TN$, then we put $q(N):=q(TN)$.
A vector bundle $\xi$ admits a spin$^c$ structure if, and only if $w_2(\xi)$ is a reduction of an integral class, say $\rho_2(\ell)=w_2(\xi)$ for some $\ell\in H^2(N,\mathbb Z)$. Then we can define $q(\xi;\ell):=q(\xi-\lambda)\in H^4(N,\mathbb Z)$ where $\lambda$ is a complex line bundle with $c_1(\lambda)=\ell$. The spin$^c$ characteristic class satisfies:
\begin{equation}
2q(\xi;\ell)=p_1(\xi)-\ell^2,\ \rho_2(q(\xi;\ell))=w_4(\xi),\ q(\xi;\ell+2m)=q(\xi;\ell)-2\ell m-2m^2
\end{equation}
where $m\in H^2(N,\mathbb Z)$. If $\xi$ is a complex vector bundle with $c_1(\xi)=\ell$, then
\begin{equation}
q(\xi;\ell)=-c_2(\xi).
\end{equation}
If $\xi=TN$, then we put $q(N;\ell):=q(TN;\ell)$.
The following theorem can be found in \cite{CCS}.
\begin{thm}\label{fundamental thm}
Let $N$ be a closed, connected manifold of dimension 7. Consider two orientable 7-dimensional real vector bundles $\xi$ and $\xi'$ over $N$ with $w_2(\xi) = w_2(\xi')=\rho_2(\ell)$, where $\ell \in H^2(N;\mathbb Z)$. Then $\xi$ and $\xi'$ are isomorphic as vector bundles if, and only if $q(\xi;\ell) = q(\xi'; \ell)$.
\end{thm}
\subsection{Multisymplectic 3-forms of algebraic type 5,6,7,8}
\begin{thm}\label{thm global forms 5,6,7,8}
Let $N$ be a closed connected 7-dimensional manifold. Then the following are equivalent:
\begin{enumerate}
\item $N$ is orientable and spin.
\item $N$ admits a multisymplectic 3-form of algebraic type 8.
\item $N$ admits a multisymplectic 3-form of algebraic type 5.
\item $N$ admits a multisymplectic 3-form of algebraic type 6.
\item $N$ admits a multisymplectic 3-form of algebraic type 7.
\end{enumerate}
\end{thm}
\begin{proof}
(1)"$\Leftrightarrow$"(2) is proved in \cite{G}.
(2)"$\Leftrightarrow$"(3) is proved in \cite{Le}.
(1)"$\Rightarrow$"(4) and (5).
By a result from \cite{Th}, any closed, orientable, spin manifold of dimension 7 admits two everywhere linearly independent vector fields. This gives a reduction of the $\mathrm{Spin}(7)$-structure to a $\mathrm{Spin}(5)$-structure. Following \cite[Proposition 2.2]{CCS}, an inclusion
$$\mathrm{Sp}(1)=\mathrm{Sp}(1)\times 1\hookrightarrow\mathrm{Sp}(1)\times\mathrm{Sp}(1)=\mathrm{Spin}(4)\hookrightarrow\mathrm{Spin}(5)$$
induces an isomorphism on homotopy groups $\pi_i$ for $i\le 5$ and an epimorphism on $\pi_6$. Hence, the $\mathrm{Spin}(5)$-structure reduces to a $\mathrm{Spin}(4)$-structure, which shows that $N$ admits three everywhere linearly independent vector fields, i.e. $TN\cong\underline\mathbb R^3\oplus\eta$ where $\eta$ is spin. Furthermore, the reduction from $\mathrm{Spin}(4)$ to $\mathrm{Sp}(1)$ means that $\eta$ has an $\mathrm{Sp}(1)$-structure or, equivalently, $\eta$ is a 1-dimensional $\mathbb H$-vector bundle. Notice that a composition
$$\mathrm{Sp}(1)\hookrightarrow\mathrm{Spin}(5)\hookrightarrow\mathrm{Spin}(7)\xrightarrow{\rho_7}\mathrm{SO}(7)\ \mathrm{is}\footnote{Up to conjugation.}\ \ q\in\mathrm{Sp}(1)\mapsto\phi_{1,q}\in\mathrm{SO}(7).$$
This is an embedding $\mathrm{Sp}(1)\hookrightarrow\mathrm{SO}(7)$. We know from Section \ref{section type 6} that $\mathrm{Sp}(1)\subset\mathrm K_6$ and from Section \ref{section type 7} that $\mathrm{Sp}(1)\subset\mathrm K_7$. We see that the $\mathrm{Sp}(1)$-structure extends to a $\mathrm K_6$- and a $\mathrm K_7$-structure.
(4)"$\Rightarrow$" (3) By Section \ref{section type 6}, $\mathrm K_6\subset\Phi_6^\ast\tilde{\mathrm{G}}_2$.
(5)"$\Rightarrow$" (2) By Section \ref{section type 7}, $\mathrm K_7\subset\Phi_7^\ast\mathrm G_2$.
\end{proof}
\subsection{Multisymplectic 3-forms of algebraic type 4}
\begin{thm}\label{thm gl form 4}
Let $N$ be a closed, connected 7-manifold. If $N$ is orientable and spin and there is $u\in H^4(N;\mathbb Z)$ such that $q(N) = -4u$, then
$N$ admits a multisymplectic 3-form of algebraic type 4.
If $N$ admits a multisymplectic 3-form of algebraic type 4, then $N$ is orientable and spin.
\end{thm}
\begin{proof}
Recall from Section \ref{section type 4} that the maximal compact, connected subgroup $\mathrm K_4^o$ of $\mathrm O_4$ is isomorphic to $\mathrm{SO}(3)$ and that $\mathrm V\cong\mathbb R\oplus\mathbb R^3\oplus\mathbb R^3$ as a $\mathrm K_4^o$-module where $\mathbb R$ is a trivial module and $\mathbb R^3$ is the standard representation. Alternatively, $\mathrm V\cong\mathbb R\oplus\mathbb C^3$ where $\mathbb C^3=\mathbb R^3\otimes\mathbb C$. We see that $N$ admits a $\mathrm K_4^o$-structure if, and only if there is a 3-dimensional orientable vector bundle $\alpha$ over $N$ such that $TN\cong\underline\mathbb R\oplus\alpha_\mathbb C$ where $\alpha_\mathbb C$ is the complexification of $\alpha$.
Let us assume that $q(N)=-4u$ for some $u\in H^4(N,\mathbb Z)$. We will first show that there is an $\mathbb H$-line bundle $\mu$ such that $-q(\mu)=e(\mu)=u$. For this we follow \cite[Proposition 2.5]{CCS}. The map $e: BSU(2)\rightarrow K(\mathbb Z,4)$ is an isomorphism on $\pi_i,\ i\le 4$, and an epimorphism on $\pi_5$. This implies that there is an $\mathbb H$-line bundle $\eta$ over the 5-skeleton $N^{(5)}$ of $N$ with $e(\eta)=u$. Since $\pi_5(BO(\infty))=\pi_6(BO(\infty))=0$, it follows that the stable bundle $\eta\oplus\underline\mathbb R^3$ extends to $N$. Since any stable vector bundle over $N$ is stably isomorphic to a vector bundle of rank $7$, there is a vector bundle $\xi$ over $N$ of rank 7 with $-q(\xi)=u$ and $w_2(\xi)=w_2(\eta)=0$. By a result from \cite{CS}, the bundle $\xi$ admits a $\mathrm{Spin}(5)$-structure. By the same argument as in the proof of Theorem \ref{thm global forms 5,6,7,8}, this structure reduces to a $\mathrm{Sp}(1)$-structure, i.e. $\xi\cong\mu\oplus\underline\mathbb R^3$ where $\mu$ has a $\mathrm{Sp}(1)$-structure. We have $-q(\mu)=-q(\xi)=u$.
Next we take the associated 3-dimensional bundle $\alpha=\rho_-(\mu)$, see \cite[Proposition 2.1]{CCVa}. It follows from \cite[Lemma 2.4]{CCVa} that $$p_1(\alpha)=p_1(\mu)-2e(\mu)=2q(\mu)-2e(\mu)=-4u=q(N).$$ On the other hand, $p_1(\alpha)=-c_2(\alpha_\mathbb C)=q(\alpha_\mathbb C)$. By Theorem \ref{fundamental thm}, $TN\cong\underline\mathbb R\oplus\alpha_\mathbb C$ and the sufficient condition follows.
By Theorem \ref{thm max compact type 4}, $\mathrm K_4\subset\mathrm G_2$ and so the necessary condition follows from Theorem \ref{thm global forms 5,6,7,8}.
\end{proof}
Notice that by \cite[Lemma 2.6]{CD}, $0=w_4(N)=\rho_2(q(N))$ and so for a closed spin manifold $N$ there is always $v\in H^4(N;\mathbb Z)$ such that $q(N) = 2v$.
\subsection{Multisymplectic 3-forms of algebraic type 3}
\begin{thm}\label{global form 3}
Let $N$ be a 7-dimensional manifold. Then $N$ admits a multisymplectic 3-form of algebraic type 3 if, and only if $N$ is orientable and spin$^c$.
\end{thm}
\begin{proof}
$"\Leftarrow"$ This is proved in \cite[Theorem 5.7]{D}. $"\Rightarrow"$
By Lemma \ref{lemma subgroup of Spin^c(7)}, $\mathrm K_3$ is a subgroup of $\mathrm{Spin}^c(7)$ and so any $\mathrm K_3$-structure extends to a $\mathrm{Spin}^c(7)$-structure.
\end{proof}
\subsection{Multisymplectic 3-forms of algebraic type 2}
\begin{thm}\label{thm gl form 2}
Let $N$ be a 7-dimensional closed and connected manifold. If $N$ is orientable, spin and there are $e,f\in H^2(N,\mathbb Z)$ such that $e^2+f^2+3e f=-q(N)$, then $N$ admits a multisymplectic 3-form of algebraic type 2. If $N$ is simply-connected, then this condition is also necessary.
On the other hand, suppose that $N$ is orientable and admits a multisymplectic 3-form of algebraic type 2. Then $N$ is spin.
\end{thm}
\begin{proof} Let $\alpha$ and $\beta$ be complex line bundles with $c_1(\alpha)=e$ and $c_1(\beta)=f$. Put $\xi:=\underline\mathbb R\oplus\alpha\oplus\beta\oplus\alpha\otimes\beta$. By Theorem \ref{thm max compact type 2}, $\xi$ has a $\mathrm K_2^o$-structure. On the other hand, $w_2(\xi)=0$ and $q(\xi)=-c_2(\xi)=-e^2-f^2-3ef$. It follows from Theorem \ref{fundamental thm} that $TN\cong\xi$. It is obvious that the condition is also necessary if $N$ is simply-connected.
Let us assume that $N$ is orientable and admits an $\mathrm O_2^+$-structure.
We have seen in the proof of Lemma \ref{lemma semi-direct product type 2} that $\mathrm O_2^+=\mathrm O_2\cap\mathrm{SL}(\mathrm V)=\mathrm L\rtimes\mathrm H_2$. Moreover, the inclusion $\mathrm H_2\hookrightarrow\mathrm O_2^+$ is a homotopy equivalence and thus any $\mathrm O_2^+$-structure reduces to an $\mathrm H_2$-structure. As $\mathrm H_2\subset\Phi_6^\ast\tilde{\mathrm{G}}_2$, any $\mathrm H_2$-structure extends to a $\tilde{\mathrm{G}}_2$-structure. Theorem \ref{thm global forms 5,6,7,8} implies that $N$ is spin.
\end{proof}
\subsection{Multisymplectic 3-forms of algebraic type 1}
\begin{lemma}
Let $N$ be a 7-manifold without boundary. Suppose that there are oriented vector bundles $\alpha,\beta$ of rank 2 such that $TN\cong\underline\mathbb R^3\oplus\alpha\oplus\beta$. Then $N$ admits a multisymplectic 3-form of algebraic type 1. If $N$ is simply-connected, then the assumption is also necessary.
\end{lemma}
\begin{proof}
Let us first show that the condition is sufficient. If $TN\cong\underline\mathbb R^3\oplus\alpha\oplus\beta$, then dually $T^\ast N\cong\underline\mathbb R^{3\ast}\oplus\alpha^\ast\oplus\beta^\ast$. Let $\theta_i,\ i=1,2,3$ be differential forms on $N$ which trivialize $\underline\mathbb R^{3\ast}$.
Next we choose everywhere non-zero sections $\mu_1$ and $\mu_2$ of $\Lambda^2\alpha^\ast$ and $\Lambda^2\beta^\ast$, respectively. Now it is easy to see that $\theta_1\wedge\theta_2\wedge\theta_3+\theta_1\wedge\mu_1+\theta_2\wedge\mu_2$ is a multisymplectic 3-form of algebraic type 1.
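Pointwise this construction reproduces the representative $\omega_1$ from Section \ref{section type 1}: choosing at a point $\theta_1=\alpha_1,\ \theta_2=\alpha_2,\ \theta_3=\alpha_7,\ \mu_1=\alpha_{34}$ and $\mu_2=\alpha_{56}$ gives
\begin{equation*}
\theta_1\wedge\theta_2\wedge\theta_3+\theta_1\wedge\mu_1+\theta_2\wedge\mu_2=\alpha_{127}+\alpha_{134}+\alpha_{256}=\omega_1.
\end{equation*}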
Let us now assume that $\pi_1(N)=1$. Recall from Lemma \ref{lemma semi-direct product type 1} that $\mathrm O_1=\mathrm K\rtimes\mathrm H_1$. As $\mathrm K\cong\mathbb R^{10}$ we know that $\pi_i(\mathrm O_1/\mathrm H_1)=1,\ i\ge0$. It follows from classical obstruction theory (see for example \cite{Th}) that any $\mathrm O_1$-structure reduces to an $\mathrm H_1$-structure. If $N$ is simply-connected, then any $\mathrm H_1$-structure reduces to an $\mathrm H_1^o$-structure where $\mathrm H_1^o$ is the connected component of the identity element of $\mathrm H_1$. Now $\mathrm H_1^o\cong\mathrm GL^+(2,\mathbb R)\times\mathrm GL^+(2,\mathbb R)$ where $\mathrm GL^+(2,\mathbb R)=\{A\in\mathrm GL(2,\mathbb R):\ \det(A)>0\}$. As an $\mathrm H_1^o$-module, $\mathrm V\cong\mathbb R^3\oplus\mathbb R^2_1\oplus\mathbb R^2_2$ where $\mathbb R^3$ is a trivial representation and $\mathbb R^2_i$ is the standard representation of the $i$-th factor of $\mathrm H_1^o$. From this the necessary condition easily follows.
\end{proof}
We see that a simply-connected manifold $N$ with a global multisymplectic 3-form of algebraic type 1 admits a spin$^c$-structure. Theorem \ref{fundamental thm} implies the following.
\begin{thm}\label{thm gl form 1}
Let $N$ be a connected, closed and spin$^c$ 7-manifold. Suppose that there are $e,f\in H^2(N,\mathbb Z)$ such that $\rho_2(e+f)=w_2(N)$ and $-e f=q(N;e+f)$. Then $N$ admits a multisymplectic 3-form of algebraic type 1. If $N$ is simply-connected, then the assumption is also necessary.
\end{thm}
\end{document} |
\begin{document}
\date{\today}
\title{Completely positive maps and classical correlations}
\author{C\'{e}sar A. Rodr\'{i}guez-Rosario}
\email[email: ]{carod@physics.utexas.edu}
\affiliation{ The University of Texas at Austin, Center for Complex Quantum Systems, 1 University Station C1602, Austin TX 78712}
\author{Kavan Modi}
\affiliation{ The University of Texas at Austin, Center for Complex Quantum Systems, 1 University Station C1602, Austin TX 78712}
\author{Aik-meng Kuah}
\affiliation{ The University of Texas at Austin, Center for Complex Quantum Systems, 1 University Station C1602, Austin TX 78712}
\author{Anil Shaji}
\affiliation{ Department of Physics and Astronomy, University of New Mexico,
Albuquerque NM 87131}
\author{E.~C.~G. Sudarshan}
\affiliation{ The University of Texas at Austin, Center for Complex Quantum Systems, 1 University Station C1602, Austin TX 78712}
\begin{abstract}
We expand the set of initial states of a system and its environment that are
known to guarantee completely positive reduced dynamics for the system when the
combined state evolves unitarily. We characterize the correlations in the
initial state in terms of its quantum discord [H.~Ollivier and W.~H.~Zurek,
Phys.~Rev.~Lett.~{\bf 88}, 017901 (2001)]. We prove that initial
states that have only classical correlations lead to completely positive reduced
dynamics. The induced maps can be not completely positive when quantum
correlations including, but not limited to, entanglement are present. We
outline the implications of our results to quantum process tomography
experiments.
\end{abstract}
\pacs{03.65.-w,03.65.Yz,03.67.Mn}
\keywords{entanglement, open systems, positive maps, qubit}
\maketitle
In the mathematical theory of open quantum systems
\cite{davies76a,spohn80a,breuer02a} it is often assumed that the system of
interest and its environment are initially in a product state. This extremely
restrictive assumption precludes the theory from describing a wide variety of
experimental situations including the one in which an open system is simply
observed for some interval of time without attempting to initialize it in any
particular state at the beginning of the observation period. If dynamical maps
\cite{PhysRev.121.920} are used to describe the open evolution, then it is known
that an initial product state leads to dynamics of the system
described in terms of completely positive maps
\cite{choi72a,choi75,kraus83b,sudarshan86,SudChaos}. There has been significant
experimental and theoretical interest in quantum correlations, entanglement and
coherence in the context of quantum information theory \cite{Nielsen00a}. It is
only recently that interest has picked up in investigating how these properties,
when present in the initial state of a system and its environment, affect the
open evolution of the system
\cite{pechukas94a,jordan05a,shaji05dis,jordan:052110,terno05,shabani06a,ziman06}. In this Letter we investigate the related question of how to relax the initial
product state assumption and still obtain dynamics for the system that are
described by completely positive transformations.
Consider a generic bipartite state $\rho^{\mathcal{SE}}$ of a quantum system
$\mathcal{S}$ and its environment $\mathcal{E}$. Unitary evolution of
the combined state induces transformations on the system state described
by a dynamical map $\mathfrak{B}$ defined as
\begin{equation}\label{TotalTrace}
\eta\rightarrow\mathfrak{B}(\eta)\equiv\mbox{Tr}_{\mathcal{E}}\left[U
\rho^{\mathcal{SE}} U^\dag\right]=\eta^{\prime},
\end{equation}
where $\eta = {\mbox{Tr}}_{\mathcal{E}} \rho^{\mathcal{SE}}$ is the
initial state of ${\mathcal{S}}$ and $\eta'$ is its final state. We use $\eta$
to represent density matrices of the system $\mathcal{S}$ and $\tau$ to
represent density matrices of the environment $\mathcal{E}$. The action of the
map can be written in terms of its eigenmatrices $\{\zeta^{( \alpha )} \}$ and
eigenvalues $\{\lambda_\alpha\}$,
\begin{equation*}\label{oper1}
\mathfrak{B}(\eta)=\sum_{\alpha}\lambda_{\alpha}\;\zeta^{(\alpha)}\eta
\;{\zeta^{(\alpha)}}^{\dagger}.\\
\end{equation*}
If the initial state of the system and its environment is simply separable
(product) so that $\rho^{\mathcal{SE}}=\eta \otimes \tau$, then the
eigenvalues of the dynamical map are all positive for any choice of unitary
evolution \cite{sudarshan86,SudChaos}. In this case we can define
$C^{(\alpha)}\equiv\sqrt{\lambda_{\alpha}}\zeta^{(\alpha)}$ to get
\begin{equation}\label{oper2}
\mathfrak{B}(\eta)=\sum_{\alpha}C^{(\alpha)}\eta {C^{(\alpha)}}^{\dagger},
\end{equation}
with $\sum_{\alpha}{C^{(\alpha)}}^{\dagger}C^{(\alpha)}=1$. Any map that can be
written in this form is completely positive \cite{choi72a,choi75}.
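The eigenmatrix construction above can be checked numerically. The following sketch is our own illustration (not part of the Letter; all helper names are ours): it builds the dynamical matrix of $\mathfrak{B}$ for a random unitary acting on a random product state, verifies that its eigenvalues are nonnegative, and reassembles the Kraus form of Eq.~(\ref{oper2}).

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_unitary(n):
    # random unitary from the QR decomposition of a complex Gaussian matrix
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(a)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def rand_density(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    m = a @ a.conj().T
    return m / np.trace(m)

dS, dE = 2, 2
U = rand_unitary(dS * dE)
eta = rand_density(dS)   # initial system state
tau = rand_density(dE)   # initial environment state

def B(x):
    # B(x) = Tr_E[ U (x (x) tau) U^dagger ], the map of Eq. (1) for a product state
    full = U @ np.kron(x, tau) @ U.conj().T
    return np.trace(full.reshape(dS, dE, dS, dE), axis1=1, axis2=3)

# dynamical matrix: block (r, s) holds B(|r><s|); Hermitian by construction
dyn = np.zeros((dS * dS, dS * dS), complex)
for r in range(dS):
    for s in range(dS):
        E_rs = np.zeros((dS, dS), complex)
        E_rs[r, s] = 1.0
        dyn[r*dS:(r+1)*dS, s*dS:(s+1)*dS] = B(E_rs)

# eigenvalues lambda_alpha and eigenmatrices zeta^(alpha); C = sqrt(lambda) zeta
lam, vecs = np.linalg.eigh(dyn)
kraus = [np.sqrt(max(l, 0.0)) * vecs[:, a].reshape(dS, dS).T
         for a, l in enumerate(lam)]

min_lam = lam.min()
resid = np.linalg.norm(sum(K @ eta @ K.conj().T for K in kraus) - B(eta))
norm_resid = np.linalg.norm(sum(K.conj().T @ K for K in kraus) - np.eye(dS))
```

For a product initial state the eigenvalues come out nonnegative for every sampled unitary, the Kraus sum reproduces $\mathfrak{B}(\eta)$, and $\sum_\alpha {C^{(\alpha)}}^\dagger C^{(\alpha)}=\openone$.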
If the initial system and environment state is not a product state,
then the map induced by arbitrary unitary evolution on $\rho^{\mathcal{SE}}$ is
in general not completely positive \cite{pechukas94a,jordan05a}. In
this Letter we identify a general class of initial states such that
\emph{any} unitary transformation on it leads to completely positive reduced
dynamics for the system. Simply separable states are a subset of this general
class of states. To characterize this class we use the notion of
quantum discord introduced by Ollivier and Zurek \cite{PhysRevLett.88.017901}
and independently by Henderson and Vedral \cite{henderson01a}.
First, we consider an example that shows how not completely positive dynamics
arise in physically realizable situations where the initial state is separable
but not simply separable. Let ${\mathcal{S}}$ and ${\mathcal{E}}$
both be qubits in a combined initial state,
\begin{equation}\label{sep1}
\rho^{\mathcal{SE}} = \frac{1}{4} (\openone\otimes\openone+a_j
\sigma_j\otimes\openone+c_{23}\sigma_2\otimes\sigma_3),
\end{equation}
where $j =1,2,3$, $\sigma_j$ are the Pauli matrices and repeated
indices are summed over. To show that $\rho^{\mathcal{SE}}$ is a separable state
we use the Peres partial transpose test \cite{peres96a} which is a necessary and
sufficient test for entanglement in two qubit systems. The transpose operation
takes $\sigma_2$ to $ -\sigma_2$ while leaving the other two Pauli matrices
intact. If we apply the partial transpose test to $\rho^{\mathcal{SE}}$ by
transposing ${\mathcal{E}}$, we see from Eq.~(\ref{sep1}) that
$(\rho^{\mathcal{SE}})^{\mbox{PT}} = \rho^{\mathcal{SE}}$ and so it is a
separable state. The initial state of the system
is $\eta=\mbox{Tr}_\mathcal{E}[\rho^{\mathcal{SE}}] =
( \openone+a_j \sigma_j )/2$.
Consider a unitary evolution of
$\rho^{\mathcal{SE}}$ given by $U = e^{-iHt} = \cos(\omega
t)\openone\otimes\openone - i \sin(\omega t) \sigma_j
\otimes \sigma_j$, where $H = \omega \sum_j \sigma_j \otimes \sigma_j$. The
state of the system at time $t$ is given by \cite{rodr,jordan06a}
\begin{eqnarray*}
\eta'
&=& \frac{1}{2}\big[ \openone + \cos^2\left(2\omega t\right) a_j
\sigma_j + c_{23}\cos\left(2\omega t\right) \sin\left(2\omega
t\right)\sigma_1 \big].
\end{eqnarray*}
The dynamical map $\mathfrak{B}$ that describes the open evolution of the
system qubit ${\mathcal{S}}$ is an affine transformation \cite{jordan:034101}
that squeezes the Bloch sphere of the qubit into a sphere of radius $\cos^2 (2
\omega t)$ and shifts its center by $c_{23}\cos(2\omega t)\sin(2 \omega t)$ in
the $\sigma_1$ direction. The eigenvalues of the map are
\begin{eqnarray*}
\lambda_{1,2}&=&\frac{1}{2}\big[1-\cos^2 (2\omega t)\pm c_{23}\cos (2 \omega
t)\sin (2 \omega t ) \big], \\
\lambda_{3,4}&=&\frac{1}{2}\Big[1+ \cos^2 (2\omega t ) \nonumber
\\
&& \hspace{5 mm} \pm \cos (2\omega t )\sqrt{ 4\cos^2 ( 2\omega
t )+c_{23}^{2} \sin^2 ( 2\omega t )} \Big].
\end{eqnarray*}
It is easily seen that $\lambda_{3,4}$ are always positive, while for
$\lambda_{1,2}$ to be positive we need $\sin^2 (2\omega t ) \geq \pm c_{23}\cos
(2\omega t)\sin (2\omega t)$. We can choose $c_{23}$ such that this condition
will be violated for some values of $\omega t$ making the map $\mathfrak{B}$ not
completely positive. It has been previously shown that not completely positive
maps come from initial entanglement \cite{jordan:052110}. This example shows
that even separable states can lead to not completely positive maps. A similar
example has been worked out in \cite{terno05}. The map $\mathfrak{B}$ has a
physical interpretation as long as it is applied to initial states $\eta$ that
are {\em compatible} with the total state $\rho^{\mathcal{SE}}$
\cite{shaji05dis}.
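The eigenvalues quoted above can be reproduced numerically. The sketch below is our illustration (the values $c_{23}=1$ and $\omega t=0.1$ are chosen arbitrarily): it builds the affine map directly from its Bloch-sphere description, assembles its dynamical matrix, and confirms both the closed-form eigenvalues and the appearance of a negative one.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

c23, wt = 1.0, 0.1                      # c_23 and the dimensionless time omega*t
c, s = np.cos(2 * wt), np.sin(2 * wt)

def B(x):
    # affine map of the example: Bloch vector scaled by cos^2(2wt),
    # center shifted by c23*cos(2wt)*sin(2wt) along sigma_1
    out = np.trace(x) / 2 * (I2 + c23 * c * s * sx)
    for p in (sx, sy, sz):
        out = out + 0.5 * c**2 * np.trace(x @ p) * p
    return out

# dynamical matrix: block (r, t) holds B(|r><t|)
dyn = np.zeros((4, 4), complex)
for r in range(2):
    for t in range(2):
        E = np.zeros((2, 2), complex)
        E[r, t] = 1.0
        dyn[r*2:(r+1)*2, t*2:(t+1)*2] = B(E)
lam_numeric = np.sort(np.linalg.eigvalsh(dyn))

# closed-form eigenvalues lambda_{1,2} and lambda_{3,4} from the text
root = c * np.sqrt(4 * c**2 + c23**2 * s**2)
lam_formula = np.sort([
    0.5 * (1 - c**2 + c23 * c * s),
    0.5 * (1 - c**2 - c23 * c * s),
    0.5 * (1 + c**2 + root),
    0.5 * (1 + c**2 - root),
])
```

For these parameters $\lambda_2\approx-0.078$, so the map is not completely positive even though the initial state is separable.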
From this example we see that correlations, and not necessarily entanglement, in
the initial state of the system and its environment can lead to not completely
positive reduced dynamics for ${\mathcal{S}}$. Do all correlations lead to not
completely positive maps? If not, is there a way of characterizing these
correlations that lets us easily see if a given initial state will lead to
completely positive dynamics or not?
The traditional division of bipartite density matrices $\rho^{\mathsf{XY}}$
into separable ($\rho^{\mathsf{XY}}=\sum_j p_j \eta_j \otimes \tau_j $) and
entangled is often taken to be synonymous with classical correlations and
quantum correlations respectively \cite{PhysRevA.40.4277}. Ollivier and Zurek
\cite{PhysRevLett.88.017901} and independently Henderson and Vedral
\cite{henderson01a} have proposed a different definition for classical and
quantum correlations in density matrices based on information theoretic
considerations. Suggestions for characterizing the correlations along similar
lines were also made by Bennett et al. in \cite{bennett99b,bennett02a}.
To quantify the correlations between two systems $\mathsf{X}$ and $\mathsf{Y}$,
we can either compute the mutual information,
\begin{equation}
\label{mutual}
\mathbf{I}(\mathsf{Y}:\mathsf{X})=\mathbf{H}(\mathsf{X})+\mathbf{H}(\mathsf{Y}
)-\mathbf{H}(\mathsf{X}\cup\mathsf{Y}),
\end{equation}
or its classical equivalent,
\begin{equation}
\label{conditional}
\mathbf{J}(\mathsf{Y}:\mathsf{X})=\mathbf{H}(\mathsf{Y})-\mathbf{H}(\mathsf{Y}
|\mathsf{X}),
\end{equation}where $\mathbf{H}$ is the Shannon
entropy \cite{shannon48a}.
If $\mathsf{X}$ and $\mathsf{Y}$ are classical systems in the
sense that their states are described by probability distributions over two
random variables $\mathsf{X}$ and $\mathsf{Y}$, then $\mathbf{J}=\mathbf{I}$ as a
consequence of Bayes' rule.
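The classical equality $\mathbf{I}=\mathbf{J}$ can be seen on any joint distribution; a short sketch (ours, with an arbitrarily chosen distribution of two correlated bits):

```python
import numpy as np

# joint distribution p(x, y) of two correlated classical bits (numbers arbitrary)
p = np.array([[0.4, 0.1],
              [0.2, 0.3]])

def H(dist):
    # Shannon entropy in bits, ignoring zero-probability outcomes
    dist = dist[dist > 0]
    return -np.sum(dist * np.log2(dist))

px, py = p.sum(axis=1), p.sum(axis=0)
# mutual information I(Y:X) = H(X) + H(Y) - H(X,Y)
I_cl = H(px) + H(py) - H(p.ravel())
# J(Y:X) = H(Y) - H(Y|X), with H(Y|X) = sum_x p(x) H(Y|X=x)
H_Y_given_X = sum(px[x] * H(p[x] / px[x]) for x in range(2))
J_cl = H(py) - H_Y_given_X
gap = abs(I_cl - J_cl)
```

The two expressions agree to machine precision, as Bayes' rule guarantees.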
If $\mathsf{X}$ and $\mathsf{Y}$ are quantum systems with their state described
by the density matrix $\rho^{\mathsf{XY}}$, then the mutual information between
the two can be computed by replacing the Shannon entropy in Eq.~(\ref{mutual})
by the von Neumann entropy ${\mathbf H}(\rho)= -{\mbox{Tr}} \rho \log \rho$ \cite{vonNeu}. To
compute $\mathbf{J}(\mathsf{Y}:\mathsf{X})$, the definition of
$\mathbf{H}(\mathsf{Y}|\mathsf{X})$ has to be generalized to
\begin{equation}
\label{gen1}
{\mathbf H}\big({\mathsf Y} \big| \big\{ \Pi^{\mathsf{X}}_j \big\} \big) = \sum
_j p_j {\mathbf H} ( \rho_{{\mathsf Y}| \Pi^{\mathsf X}_j} ),
\end{equation}
where $p_j = {\mbox{Tr}}_{\mathsf{X}, \, \mathsf{Y}} \Pi^{\mathsf X}_j
\rho^{\mathsf{XY}}$ and $\rho_{\mathsf{Y}|\Pi^{\mathsf X}_j}=
\Pi^\mathsf{X}_j \rho^{\mathsf{XY}}
\Pi^\mathsf{X}_j/p_j$.
Such a generalization is needed because quantum information differs from
classical information in that the information that can be obtained from a
quantum system depends not only on its state but also on the choice of
measurements that is performed on it. So, in generalizing
$\mathbf{H}(\mathsf{Y}|\mathsf{X})$, we first had to choose a particular set of
one-dimensional orthogonal projectors $\big\{\Pi^\mathsf{X}_j\big\}$ acting on
the system $\mathsf{X}$.
It turns out that for general bipartite quantum states, the mutual information
is not identical to $\mathbf{J}(\mathsf{Y}:\mathsf{X})$ defined using
Eq.~(\ref{gen1}). The difference between $\mathbf{I}$ and $\mathbf{J}$ is called
\emph{quantum discord} and it is taken as a measure of non-classical
correlations in a quantum state \cite{PhysRevLett.88.017901}.
A quantum state with only classical correlations satisfies the condition $\rho^{\mathsf{XY}}= \sum_j \Pi^\mathsf{X}_j \rho^{\mathsf{XY}} \Pi^\mathsf{X}_j$ for some complete set of orthogonal projectors $\big\{\Pi^\mathsf{X}_j\big\}$. States of this form are a subset of the separable states, a
subset that includes all simply separable states. On the other hand, not all separable
states have only classical correlations. This implies that quantum correlations
must be taken to mean more than just entanglement. The information theoretic
characterization of quantum states based on the nature of the correlations
present is compared with the traditional division into separable and
entangled states in Fig.~\ref{fig1}.
\begin{figure}
\caption{Quantum states of bipartite systems can be divided into two classes
based on their discord. States with quantum correlations have non-zero discord
while classically correlated states have zero discord. Separable states can have
quantum correlations while simply separable states have only classical
correlations. Not all quantum correlations are equivalent to entanglement. Also
shown is the nature of the dynamical maps induced by the unitary evolution of
the state of a system and its environment when the initial state belongs to each
class. Classically correlated states always lead to completely positive (CP)
maps while there are examples, indicated by the arrows, showing that states with
quantum correlations lead to not completely positive maps (Not-CP).}
\label{fig1}
\end{figure}
In experiments, quantum systems are often initialized in desired states by
first performing a complete set of orthogonal projective measurements $\big\{
\Pi_j \big\}$ on the system and then super-selecting the desired state from the
post-measurement state. After the measurements, the initial state of the system
and its environment has the form
\begin{equation}
\label{initial}
\rho^{\mathcal{SE}} = \sum_j \Pi_j \rho^{\mathcal{SE}} \Pi_j =\sum_{j}p_j
\Pi_j\otimes\tau_j,
\end{equation}
where $\tau_j$ are density matrices for ${\mathcal{E}}$, $\big\{\Pi_j\big\}$ are a
complete set of orthogonal projectors on ${\mathcal{S}}$, $p_j\geq 0$ and
$\sum_j p_j =1$.
We now show that initial states of the system and the environment with only
classical correlations with respect to the system will always lead to completely positive
maps on the system under \emph{any} unitary evolution. Previously, only
simply separable states were known to lead to completely positive reduced
dynamics for any choice of unitary evolution for the
combined state \cite{terno05,ziman06}.
We start from the classically correlated state from Eq.~(\ref{initial}). The initial state of the system is $\eta = \sum_j p_j \Pi_j$.
From Eq.~(\ref{TotalTrace}) we have
\begin{eqnarray*}
\eta'_{rs}&=& \big[\mathfrak{B}\big]_{rr';ss'}\eta_{r's'} \\
&=& \mbox{Tr}_\mathcal{E} \bigg\{ \left[U\right]_{ra;r'a'} \Big(\sum_j p_j
\left[\Pi_j\right]_{r's'} [\tau_j]_{a'b'} \Big)[U]^{*}_{sb;s'b'}\bigg\}.
\end{eqnarray*}
Take the trace with respect to the environment by contracting indices $a$ and
$b$,
\begin{equation*}
\eta'_{rs}=\sum_j p_j \big[D^{kl}_{j}\big]_{rr'} [\Pi_j]_{r's'}
{\big[D^{kl}_{j}\big]^{*}_{ss'}},
\end{equation*}
where $[D^{kl}_{j}]_{rr'} \equiv [U]_{rl;r'a'}[\sqrt{\tau_{j}}]_{a'k}$. We have used
the fact that $\{\tau_j\}$ are positive to take their square root. After
combining indices $k$ and $l$ into a single index $\alpha$ we obtain,
\begin{equation*}
\eta'= \mathfrak{B}(\eta)=\sum_{j,\alpha} p_j D_{j}^{(\alpha)} \Pi_j
{D_{j}^{(\alpha)}}^{\dagger}.
\end{equation*}
Expanding $D_{j}^{(\alpha)}$ as $\sum_m D_{m}^{(\alpha)}\delta_{jm}$
and using $\Pi^{2}_{j} = \Pi_j$ we obtain
\begin{equation*}
\eta' =\sum_{j,\alpha} p_{j} \Big(\sum_{m}
D_{m}^{(\alpha)}\delta_{jm} \Pi_{j}\Big) \Pi_{j} \Big(\sum_{n} \Pi_{j}
\delta_{jn} {D_{n}^{(\alpha)}}^{\dagger} \Big).
\end{equation*}
Now we can use the orthogonality of projectors, $\Pi_m \Pi_j
= \delta_{mj}\Pi_j$ to drop the dependency of $D_{j}^{(\alpha)}$ on index $j$
and write
\begin{equation*}
\eta'=\sum_{j,\alpha} p_{j} \Big(\sum_{m} D_{m}^{(\alpha)} \Pi_{m}\Big)
\Pi_{j} \Pi_{j} \Pi_{j} \Big(\sum_{n} {D_{n}^{(\alpha)}}\Pi_{n}
\Big)^{\dagger}.
\end{equation*}
We can redefine $C^{(\alpha)} \equiv\sum_m D_{m}^{(\alpha)}\Pi_{m} $ to obtain,
\begin{equation}
\label{final}
\eta^\prime= \sum_\alpha C^{(\alpha)} \Big(\sum_{j} p_{j}
\Pi_{j}\Big) {C^{(\alpha)}}^{\dagger}= \sum_\alpha C^{(\alpha)} \eta
{C^{(\alpha)}}^{\dagger}.
\end{equation}
Eq.~(\ref{final}) is identical to Eq.~(\ref{oper2}) showing that $\mathfrak{B}$
indeed is a completely positive map. This demonstrates that the reduced dynamics of an
open system that is initially \emph{classically correlated} with its environment will
be given by a completely positive map under \emph{any} unitary evolution. The evolution of an open system
that has initial quantum correlations with the environment, on the other hand,
might lead to not completely positive maps as shown in Fig.~\ref{fig1}.
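The construction of the operators $D_j^{(\alpha)}$ and $C^{(\alpha)}$ used in the proof can be checked numerically. In the sketch below (ours; the projector basis, probabilities, environment states and unitary are all random) the Kraus operators assembled exactly as in the proof reproduce the exact reduced dynamics $\mbox{Tr}_{\mathcal{E}}[U\rho^{\mathcal{SE}}U^\dagger]$; the map is completely positive by construction, being in Kraus form.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_unitary(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(a)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def rand_density(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    m = a @ a.conj().T
    return m / np.trace(m)

def psd_sqrt(m):
    # square root of a positive semidefinite matrix via its eigenbasis
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

dS = dE = 2
W = rand_unitary(dS)   # random orthonormal basis defining the projectors
Pi = [np.outer(W[:, j], W[:, j].conj()) for j in range(dS)]
p = np.array([0.6, 0.4])
tau = [rand_density(dE) for _ in range(dS)]
rho_SE = sum(p[j] * np.kron(Pi[j], tau[j]) for j in range(dS))
U = rand_unitary(dS * dE)

# exact reduced dynamics: Tr_E[ U rho_SE U^dagger ]
full = U @ rho_SE @ U.conj().T
eta_out = np.trace(full.reshape(dS, dE, dS, dE), axis1=1, axis2=3)

# Kraus operators C^{(kl)} = sum_m D_m^{(kl)} Pi_m, with
# [D_j^{(kl)}]_{r r'} = U_{r l; r' a'} [sqrt(tau_j)]_{a' k}
U4 = U.reshape(dS, dE, dS, dE)
kraus = []
for k in range(dE):
    for l in range(dE):
        C = np.zeros((dS, dS), complex)
        for j in range(dS):
            Dj = U4[:, l, :, :] @ psd_sqrt(tau[j])[:, k]
            C += Dj @ Pi[j]
        kraus.append(C)

eta_in = sum(p[j] * Pi[j] for j in range(dS))
eta_kraus = sum(K @ eta_in @ K.conj().T for K in kraus)
err = np.linalg.norm(eta_kraus - eta_out)
```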
Note that by specifying the initial state $\rho^{\mathcal{SE}}$ in
Eq.~(\ref{initial}) we have restricted ourselves to a subset of all possible initial system states. This subset is spanned by
the projectors $\big\{ \Pi_j \big\}$. The map $\mathfrak{B}$ from Eq.~(\ref{final}),
on the other hand, can be applied to any state of the system. Since the map is
completely positive it will map any system state to another valid state.
We will not, however, be able to understand the action of the map on states
outside the subset spanned by $\big\{ \Pi_j \big\}$ as coming from the
contraction of unitary evolution of the combined state in Eq.~(\ref{initial}).
Experimentally reconstructing dynamical maps corresponding to open quantum
evolution is called quantum process tomography \cite{chuang97a,Nielsen00a}. A
number of known initial states, sufficient to span the space of density matrices
of the system, are allowed to evolve as a result of an unknown process. The final
state corresponding to each initial state is then determined by quantum state
tomography. With the knowledge of the initial and corresponding final states the
linear dynamical map describing this unknown process is determined. The
complete set of initial states for the system is typically generated by
creating a fiducial state and then applying controlled evolution on it to obtain
the other states. The fiducial state, in turn, is obtained by doing a
complete set of orthogonal measurements on the system as described earlier.
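Linear-inversion process tomography can be sketched as follows. This is our illustration, not the protocol of any particular experiment: the amplitude-damping channel (with an arbitrarily assumed damping parameter) is a stand-in for the unknown process, and the four pure input states span the space of qubit density matrices.

```python
import numpy as np

# stand-in "unknown" process: amplitude damping with an assumed gamma
g = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], complex)
def channel(r):
    return K0 @ r @ K0.conj().T + K1 @ r @ K1.conj().T

# four linearly independent input states spanning the qubit density matrices
z0 = np.array([1, 0], complex); z1 = np.array([0, 1], complex)
xp = (z0 + z1) / np.sqrt(2); yp = (z0 + 1j * z1) / np.sqrt(2)
inputs = [np.outer(v, v.conj()) for v in (z0, z1, xp, yp)]
outputs = [channel(r) for r in inputs]   # "state tomography" results

# linear inversion: write each basis matrix E_rs in the input basis,
# then assemble the dynamical matrix block by block
A = np.array([r.ravel() for r in inputs]).T
Ainv = np.linalg.inv(A)
dyn = np.zeros((4, 4), complex)
for r in range(2):
    for s in range(2):
        E = np.zeros((2, 2), complex)
        E[r, s] = 1.0
        c = Ainv @ E.ravel()
        BE = sum(ci * out for ci, out in zip(c, outputs))
        dyn[r*2:(r+1)*2, s*2:(s+1)*2] = BE
lam = np.linalg.eigvalsh((dyn + dyn.conj().T) / 2)

# the reconstructed map agrees with the channel on an arbitrary new state
test_state = 0.5 * (np.eye(2) + 0.3 * np.array([[0, 1], [1, 0]]))
c = Ainv @ test_state.ravel()
recon = sum(ci * out for ci, out in zip(c, outputs))
err = np.linalg.norm(recon - channel(test_state))
```

Here the reconstructed dynamical matrix has nonnegative eigenvalues because the inputs really were prepared as assumed; negative eigenvalues in an experiment signal that this assumption failed, which is the point of the discussion below.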
We look at a representative quantum process tomography experiment on a
solid-state qubit performed by Howard et al. \cite{Howard05, Howard06} in the light of
the results presented above. In this experiment, the system
of interest is a qubit formed in a nitrogen vacancy defect in a diamond
lattice. Under ideal conditions, the experiment requires the initial state
of the qubit to be the pure state $|\phi \rangle \langle \phi|$, but in
reality the qubit is initialized in the state $\eta_{0}$
with $p_0=\mbox{Tr}\left[|\phi\rangle\langle\phi|\eta_0 \right] =0.7$.
It is argued in \cite{Howard06} that the population considered was high enough
to effectively treat the initial state as $|\phi\rangle\langle\phi|$. From this
state a complete set of linearly independent states is constructed
stochastically. This provides the set of initial states necessary to perform process
tomography on the decoherence occurring to the qubit. The map corresponding to
the decoherence process was found and it had negative eigenvalues, making it
not completely positive. The experimentally obtained not completely positive
maps were then discarded in favor of their ``closest" completely positive
counterparts \cite{havel03}. The occurrence of negative eigenvalues for the
dynamical maps was attributed to experimental errors.
However, if we do not discard the negative eigenvalues as unphysical, this will
yield information about the initial preparation of the system.
If the system is indeed in a pure state $\eta_{0}\rightarrow
|\phi\rangle\langle\phi|$ initially then the combined state of the system
and its environment would necessarily be of the form $\rho^{\mathcal{SE}} =
|\phi\rangle\langle\phi|\otimes\tau$. Maps coming from such initially simply
separable states should be completely positive. This contradicts
what was found in the experiment. In addition to ruling out an initial simply separable state, we can now also
rule out initial states of the form
\begin{equation*}
\rho^{\mathcal{SE}}=p_0|\phi\rangle\langle\phi|\otimes\tau^\prime+\left(1-p_0\right)|\phi_\bot\rangle\langle\phi_\bot|\otimes\tau^{\prime\prime},
\end{equation*}
with $\langle \phi |\phi_\bot\rangle=0$, even though states of this form are
consistent with the measured population $p_0$. However, a state like this only
has classical correlations, and we know that the map induced by any unitary
evolution of such a state should be completely positive.
The not completely positive map found in this experiment could be interpreted as
an indication that the initial state of the system is not just classically
correlated with the environment. Given that the qubit is in a large crystal
lattice, it is perhaps not very surprising that it had quantum correlations with
the surrounding environment.
We propose that if after performing quantum process tomography a not completely
positive map is found, this should be considered as a signature that the system
had quantum correlations with the environment. Our definition of quantum
correlations differs from the ones considered in previous studies by
other authors \cite{terno05,ziman06}.
In conclusion, we have studied the effect of initial correlations with the
environment on the complete positivity of dynamical maps that describe the
open-systems evolution. We proved that if there are only classical correlations
in the state of the system and its environment, as indicated by zero discord,
then the maps induced by any unitary evolution of the combined state must be
completely positive. This result is more general
than the previously known result for simply separable initial states.
{\bf Acknowledgments}: The authors wish to thank T. F. Jordan and W. Zurek for useful
comments and discussions. Anil Shaji acknowledges the support of the US Office
of Naval Research through Contract No.~N00014-03-1-0426.
\end{document}
\begin{document}
\begin{center}
\begin{large}
{\bf Geometric measure of entanglement of multi-qubit graph states and its detection on a quantum computer}
\end{large}
\end{center}
\centerline {Kh. P. Gnatenko \footnote{E-Mail address: khrystyna.gnatenko@gmail.com}, N. A. Susulovska}
\centerline {\small \it Ivan Franko National University of Lviv,}
\centerline {\small \it Professor Ivan Vakarchuk Department for Theoretical Physics,}
\centerline {\small \it 12 Drahomanov St., Lviv, 79005, Ukraine}
\abstract{Multi-qubit graph states generated by the action of controlled phase shift operators on a separable quantum state of a system, in which all the qubits are in arbitrary identical states, are examined. The geometric measure of entanglement of a qubit with other qubits is found for the graph states represented by arbitrary graphs. The entanglement depends on the degree of the vertex representing the qubit, the absolute values of the parameter of the phase shift gate and the parameter of the state the gate acts on. The geometric measure of entanglement of the graph states is also quantified on the quantum computer $\textrm{ibmq\_athens}$. The results obtained on the quantum device are in good agreement with the analytical ones.
}
\section{Introduction}
Studies of entanglement of quantum states and its quantifying on a quantum computer have received much attention (see, for instance, \cite{Horodecki,Shimony,Behera,Scott,Horodecki1,Torrico,Sheng,Samar,Kuzmak,Kuzmak1,Gnatenko,Kuzmak2,Wang,Mooney} and references therein). Entanglement corresponds to non-classical correlations between the subsystems and presupposes that the state of a system cannot be factorized \cite{Horodecki}. This physical phenomenon plays a critical role in quantum information,
in particular in quantum cryptography and quantum teleportation (see, for example, \cite{Horodecki,Feynman,Bennett,Bouwmeester,Ekert,Raussendorf,Lloyd,Buluta,Shi,Llewellyn,Huang,Yin,Jennewein,Karlsson}).
The geometric measure of entanglement proposed by Shimony \cite{Shimony} is defined as
a minimal squared Fubini-Study distance $d_{FS}^2 (\vert\psi\rangle, \vert\psi_s\rangle)=1-|\langle\psi|\psi_s\rangle|^2$ between an entangled state $\vert\psi\rangle$ and a set of separable pure states $\vert\psi_s\rangle$. It reads
\begin{eqnarray}
E(\vert\psi\rangle)=\min_{\vert\psi_s\rangle}(1-|\langle\psi|\psi_s\rangle|^2).
\end{eqnarray}
The authors of paper \cite{Samar} showed that the geometric measure of entanglement of a spin one-half (or qubit) with a quantum system
in a pure state $\vert\psi\rangle$ is entirely determined by the mean spin in this state.
Namely, the following relation is satisfied:
\begin{eqnarray}
E(\vert\psi\rangle)=\frac{1}{2}\left(1-|\langle{\bm \sigma}\rangle|\right),\label{ent}
\end{eqnarray}
$|\langle{\bm \sigma}\rangle|=\sqrt{\langle\sigma^x\rangle^2+\langle\sigma^y\rangle^2+\langle\sigma^z\rangle^2}$, here
$\sigma^x$, $\sigma^y$, $\sigma^z$ are the Pauli matrices, $\langle...\rangle=\bra \psi ...\ket \psi$.
Therefore, in order to quantify the geometric measure of entanglement the mean values of the Pauli matrices have to be calculated. A quantum protocol for detecting $\langle\sigma^x\rangle$, $\langle\sigma^y\rangle$, $\langle\sigma^z\rangle$ on a quantum computer is presented in \cite{Kuzmak}.
Mean value $\langle\sigma^x \rangle$ can be represented as
$\langle\sigma^x \rangle=\langle \psi \vert \sigma^x\vert\psi\rangle=\langle \tilde\psi^y \vert \sigma^z\vert \tilde\psi^y\rangle=\vert \langle \tilde\psi^y \vert 0 \rangle \vert^2-\vert \langle \tilde\psi^y \vert 1 \rangle \vert^2,$
where $\vert\tilde\psi^y\rangle=\exp(i\pi\sigma^y/4)\vert\psi\rangle$ and $\vert\langle\tilde\psi^y\vert 0 \rangle\vert^2$, $\vert\langle\tilde\psi^y\vert 1 \rangle \vert^2$ are probabilities that define the result of measurement in the standard basis. Thus, to quantify $\langle\sigma^x \rangle$ the $RY(\pi/2)$ gate has to be applied to the state of a qubit before conducting the measurement in the standard basis (the state of the qubit has to be rotated around the $y$ axis by $\pi/2$).
Similarly, to detect $\langle\sigma^y \rangle$ one has to apply the $RX(\pi/2)$ gate and then measure the qubit in the standard basis
$ \langle\sigma^y \rangle=\langle \psi \vert \sigma^y\vert\psi\rangle=\langle \tilde\psi^x \vert \sigma^z\vert \tilde\psi^x\rangle=\vert \langle \tilde\psi^x \vert 0 \rangle \vert^2-\vert \langle \tilde\psi^x \vert 1 \rangle \vert^2$,
here $\vert\tilde\psi^x\rangle=\exp(-i\pi\sigma^x/4)\vert\psi\rangle$ .
Lastly, for $\sigma^z$ we have
$\langle\sigma^z \rangle=\langle \psi \vert \sigma^z\vert\psi\rangle=\vert \langle\psi\vert 0 \rangle \vert^2-\vert \langle \psi \vert 1 \rangle\vert^2.$
This way one can obtain the values $\langle\sigma^x\rangle$, $\langle\sigma^y\rangle$, $\langle\sigma^z\rangle$ on the basis of the results of measurement of the qubit in the standard basis \cite{Kuzmak}.
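The measurement protocol above can be simulated directly; the sketch below is our own illustration (the state parameters are arbitrary). We implement the rotations through the explicit exponentials $e^{i\pi\sigma^y/4}$ and $e^{-i\pi\sigma^x/4}$ given in the text, and recover each mean value from the computational-basis probabilities.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)

def exp_is(a, s):
    # exp(i a s) for a Pauli matrix s, using s^2 = 1
    return np.cos(a) * I2 + 1j * np.sin(a) * s

# arbitrary one-qubit state |psi(alpha, theta)> (parameters chosen arbitrarily)
alpha, theta = 0.8, 1.1
psi = np.array([np.cos(theta/2), np.exp(1j*alpha)*np.sin(theta/2)])

def z_probs(state):
    # probabilities of the two outcomes of a standard-basis measurement
    return abs(state[0])**2, abs(state[1])**2

# <sigma_x>: rotate with exp(i pi sigma_y / 4), then measure
p0, p1 = z_probs(exp_is(np.pi/4, sy) @ psi)
mx = p0 - p1
# <sigma_y>: rotate with exp(-i pi sigma_x / 4), then measure
p0, p1 = z_probs(exp_is(-np.pi/4, sx) @ psi)
my = p0 - p1
# <sigma_z>: measure directly
p0, p1 = z_probs(psi)
mz = p0 - p1
```

Each difference of probabilities matches the direct expectation value, and for a pure one-qubit state the recovered Bloch vector has unit length, so the entanglement (\ref{ent}) of an isolated qubit vanishes, as it should.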
Graph states are quantum states that can be represented by graphs \cite{Wang,Mooney,Schlingemann,Bell,Mazurek,Markham,Qian,Shettell,Hein,Guhne}. These states have various applications in quantum information, for instance in
quantum error correction \cite{Schlingemann,Bell,Mazurek},
quantum cryptography \cite{Markham,Qian} and practical quantum metrology in the presence of noise \cite{Shettell}.
Much attention has been devoted to examining multi-qubit graph states generated by 2-qubit controlled-Z operators acting on a separable quantum state of the system, in which all qubits are in the state $\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$ (see, for example, \cite{Wang,Mooney,Cabello,Alba,Mezher,Akhound,Haddadi} and references therein). The state represented by undirected graph $G(V,E)$ ($V$, $E$ denote vertices and edges of the graph, respectively) reads
\begin{eqnarray}
\ket{\psi_G} = \prod_{(i,j) \in E} CZ_{ij}(\phi) \ket{+}^{\otimes V}. \label{eq:2:1z}
\end{eqnarray}
Here $CZ_{ij}(\phi)$ is the controlled-Z gate acting on the states of qubits $q[i]$, $q[j]$.
The authors of paper \cite{Wang} studied graph states (\ref{eq:2:1z}) corresponding to rings on IBM's quantum computer ibmqx5 and showed that the 16-qubit quantum computer can be fully entangled. In \cite{Mooney} the entanglement of graph states was examined on the basis of calculations on the quantum computer IBM Q Poughkeepsie. Entanglement of graph states of spin systems
generated by the operator of evolution with Ising Hamiltonian was examined analytically and on the 5-qubit quantum computer IBM Q Valencia in \cite{Gnatenko}. It was shown that the entanglement of a spin with other spins in the graph state is related to the degree of the vertex representing the spin \cite{Gnatenko}.
In the present paper we study graph states obtained as a result of the action of controlled phase shift operators on a separable quantum state of the system, in which all the qubits are in arbitrary identical states
\begin{eqnarray}
\ket{\psi_G(\phi,\alpha,\theta)} = \prod_{(i,j) \in E} CP_{ij}(\phi) \ket{\psi(\alpha,\theta)}^{\otimes V}, \label{eq:2:1}\\
\ket{\psi(\alpha,\theta)} = \cos \frac{\theta}{2} \ket{0}+ e^{i\alpha} \sin \frac{\theta}{2} \ket{1}, \label{eq:2:2}
\end{eqnarray}
here $CP_{ij}(\phi)$ is the controlled phase shift gate that acts on the qubits $q[i]$, $q[j]$.
State (\ref{eq:2:2}) is an arbitrary one-qubit state, $\theta\in[0,\pi]$, $\phi\in[0,2\pi]$.
In a particular case $\phi=\pi$, $\alpha=0$, $\theta = \pi/2$ state (\ref{eq:2:1}) coincides with (\ref{eq:2:1z}).
We find the expression for the entanglement of a qubit with other qubits in graph state (\ref{eq:2:1}) represented by an arbitrary graph. It is shown that the entanglement is determined by the absolute values of the parameters $\phi$ and $\theta$ as well as the degree of the vertex representing the qubit in the graph.
The entanglement of graph states is also studied on IBM's quantum computer $\textrm{ibmq\_athens}$.
The paper is organized as follows. In Section 2 an expression for the geometric measure of entanglement of a qubit with other qubits in graph state (\ref{eq:2:1}) is found. In Section 3 we present results of quantum computations on $\textrm{ibmq\_athens}$ for the entanglement of the graph states (\ref{eq:2:1}) represented by the chain, the claw and the complete graphs. Conclusions are made in Section 4.
\section{Geometric measure of entanglement of multi-qubit graph states}
According to the result (\ref{ent}) in order to find the geometric measure of entanglement of the qubit $q[l]$ with other qubits in graph state (\ref{eq:2:1}) one has to calculate $\langle{\bm \sigma}_l\rangle=\langle \psi_G (\phi,\alpha,\theta)\vert{\bm \sigma}_l\vert\psi_G (\phi,\alpha,\theta)\rangle$.
Note that, up to a phase factor, the quantum state $\ket{\psi(\alpha,\theta)}$ can be prepared by the action of the rotation operators $RZ(\alpha)$, $RY(\theta)$ on $\ket 0$. We have
\begin{eqnarray}
\ket{\psi(\alpha,\theta)} =e^{i\frac{\alpha}{2}}RZ(\alpha)RY(\theta)\ket 0=\nonumber\\=e^{i\frac{\alpha}{2}}e^{-i\frac{\alpha}{2}\sigma^z}e^{-i\frac{\theta}{2}\sigma^y}\ket 0.\label{eq:2:3}
\end{eqnarray}
The controlled phase shift gate can be represented as
\begin{eqnarray}
CP_{ij}(\phi)=|0\rangle_i{}_i\langle0| \hat{1}_j+|1\rangle_i{}_i\langle1| P_j(\phi)
=\nonumber\\=e^{\frac{i \phi}{4} (\hat{1}_i- \sigma^z_i) (\hat{1}_j - \sigma^z_j)},
\end{eqnarray}
where $\hat{1}_i$ is the unit operator, $P_j(\phi)$ is the phase gate acting on the state of the qubit $q[j]$, $P_j(\phi)=|0\rangle_j{}_j\langle0|+e^{i\phi}|1\rangle_j{}_j\langle1|$.
Thus, for $\braket{\sigma^x_l}$ we obtain
\begin{eqnarray}
\braket{\sigma^x_l} = \bra{\psi_0} \prod_{q\in V} e^{i\frac{\theta}{2} \sigma^y_q} e^{i\frac{ \alpha}{2} \sigma^z_q} \prod_{(j,k) \in E} (CP_{jk}(\phi))^+ \times\nonumber\\ \times\sigma^x_l \prod_{(m,n) \in E} CP_{mn}(\phi) \prod_{p \in V} e^{-i\frac{ \alpha}{2} \sigma^z_p} e^{-i\frac{ \theta}{2} \sigma^y_p} \ket{\psi_0}, \label{eq:2:5}
\end{eqnarray}
here we use the notation $\ket{\psi_0}=\ket{0}^{\otimes V}$.
Taking into account that operators $\sigma^x_l$, $\sigma^z_l$ anticommute ($\{\sigma^z_l, \sigma^x_l\}=0$), we can write
\begin{eqnarray}
\prod_{(j,k) \in E} (CP_{jk}(\phi))^+ \sigma^x_l \prod_{(m,n) \in E} CP_{mn}(\phi) =\nonumber\\=\prod_{(j,k) \in E} e^{-i\frac{ \phi}{4} (\hat{1}_j- \sigma^z_j) (\hat{1}_k - \sigma^z_k)}\sigma^x_l \times \nonumber\\ \times\prod_{(m,n) \in E} e^{i\frac{ \phi}{4} (\hat{1}_m-\sigma^z_m) (\hat{1}_n - \sigma^z_n)}=\nonumber\\=e^{i\frac{\phi}{2} n_l \sigma^z_l} e^{-i \frac{\phi}{2} \sum_{j \in N_G(l)} \sigma^z_j \sigma^z_l} \sigma^x_l,
\end{eqnarray}
here $n_l$ is the degree of the vertex representing the qubit $q[l]$ in the graph (the number of edges incident to the vertex), $N_G(l)$ is the neighborhood of the vertex $l$ (the set of vertices adjacent to the vertex $l$).
Hence, for $\braket{\sigma^x_l}$ we obtain
\begin{eqnarray}
\braket{\sigma^x_l} =\nonumber\\ = \bra{\psi_0} \prod_{q \in N_G[l]} e^{i\frac{\theta}{2} \sigma^y_q} e^{i\frac{ \alpha}{2} \sigma^z_q}e^{i\frac{\phi}{2} n_l \sigma^z_l} e^{-i \frac{\phi}{2} \sum_{j \in N_G(l)}\sigma^z_j \sigma^z_l} \times \nonumber\\ \times \sigma^x_l
\prod_{p\in N_G[l]} e^{-i\frac{ \alpha}{2} \sigma^z_p} e^{-i\frac{ \theta}{2} \sigma^y_p} \ket{\psi_0}=
\sin \theta \; \textrm{Re}\;z, \label{eq:2:6}
\end{eqnarray}
where $N_G[l]$ is the closed neighborhood of the vertex $l$ (the set of vertices adjacent to the vertex $l$, together with the vertex $l$ itself). We use the notation $z$ for the complex number
\begin{eqnarray}
z=e^{-i(\alpha + \frac{\phi}{2} n_l)} \left(\cos \frac{\phi}{2} + i \sin \frac{\phi}{2} \cos \theta \right)^{n_l}.\label{zz}
\end{eqnarray}
Similarly, for $\braket{\sigma^y_l}$ we find
\begin{eqnarray}
\braket{\sigma^y_l} = \bra{\psi_G} \sigma^y_l \ket{\psi_G} = \nonumber\\
\bra{\psi_0} \prod_{q\in N_G[l]} e^{i\frac{\theta}{2} \sigma^y_q} e^{i\frac{ \alpha}{2} \sigma^z_q}e^{i\frac{\phi}{2} n_l \sigma^z_l} e^{-i \frac{\phi}{2} \sum_{j \in N_G(l)} \sigma^z_j \sigma^z_l} \times \nonumber\\ \times \sigma^y_l
\prod_{p\in N_G[l]} e^{-i\frac{ \alpha}{2} \sigma^z_p} e^{-i\frac{ \theta}{2} \sigma^y_p} \ket{\psi_0}
= - \sin \theta \; \textrm{Im}\;z, \label{eq:2:20}
\end{eqnarray}
number $z$ is given by (\ref{zz}).
Mean value $\braket{\sigma^z_l}$ reads
\begin{eqnarray}
\braket{\sigma^z_l} &= \bra{\psi_G} \sigma^z_l \ket{\psi_G} = \bra{\psi_0} e^{i \frac{\theta}{2} \sigma^y_l} \; \sigma^z_l e^{-i \frac{\theta}{2} \sigma^y_l} \ket{\psi_0}=\nonumber\\=\cos \theta. \label{eq:2:22}
\end{eqnarray}
Finally, on the basis of (\ref{ent}) for the geometric measure of entanglement of the qubit $q[l]$ with other qubits in graph state (\ref{eq:2:1}) we obtain the following expression
\begin{eqnarray}
E_l = \frac{1}{2} \left(1 - \sqrt{\sin^2\theta \;|z|^2 + \cos^2\theta} \right)=
\frac{1}{2}-\nonumber\\-\frac{1}{2}\sqrt{\sin^2\theta \left(\cos^2\frac{\phi}{2} + \sin^2\frac{\phi}{2} \cos^2 \theta \right)^{n_l} + \cos^2 \theta }.\label{eq:2:28}
\end{eqnarray}
Note that the geometric measure of entanglement of the qubit $q[l]$ with other qubits in graph state (\ref{eq:2:1}) depends on the degree of the vertex $n_l$ representing $q[l]$ in the graph, the absolute values of the parameter of the controlled phase gate $\phi$ and the parameter of state (\ref{eq:2:2}) $\theta$. It does not depend on the value of $\alpha$.
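Formula (\ref{eq:2:28}) can be verified against an exact state-vector simulation. The sketch below is our own check (the parameters and the three-qubit chain are arbitrary choices): it builds the graph state with diagonal $CP(\phi)$ gates, traces out all qubits but one, and compares $E_l$ with the analytic expression for vertex degrees 1 and 2.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def graph_state(n, edges, phi, alpha, theta):
    # |psi(alpha, theta)>^(x n), then a diagonal CP(phi) for every edge
    one = np.array([np.cos(theta/2), np.exp(1j*alpha)*np.sin(theta/2)])
    psi = one
    for _ in range(n - 1):
        psi = np.kron(psi, one)
    for (i, j) in edges:
        d = np.ones(2**n, complex)
        for b in range(2**n):
            if (b >> (n-1-i)) & 1 and (b >> (n-1-j)) & 1:
                d[b] = np.exp(1j*phi)   # phase only when both qubits are |1>
        psi = d * psi
    return psi

def E_qubit(psi, n, l):
    # reduced density matrix of qubit l, then E = (1 - |<sigma>|)/2
    t = np.moveaxis(psi.reshape([2]*n), l, 0).reshape(2, -1)
    rho = t @ t.conj().T
    b = np.sqrt(sum(np.trace(rho @ s).real**2 for s in (sx, sy, sz)))
    return 0.5 * (1 - b)

def E_formula(n_l, phi, theta):
    # Eq. (16) of the text: entanglement vs vertex degree n_l
    base = np.cos(phi/2)**2 + np.sin(phi/2)**2 * np.cos(theta)**2
    return 0.5 * (1 - np.sqrt(np.sin(theta)**2 * base**n_l
                              + np.cos(theta)**2))

phi, alpha, theta = 1.2, 0.4, 0.7
edges = [(0, 1), (1, 2)]              # three-qubit chain
psi = graph_state(3, edges, phi, alpha, theta)
E_mid = E_qubit(psi, 3, 1)            # degree-2 vertex
E_end = E_qubit(psi, 3, 0)            # degree-1 vertex
gap_mid = abs(E_mid - E_formula(2, phi, theta))
gap_end = abs(E_end - E_formula(1, phi, theta))
```

The simulated values coincide with the analytic ones to machine precision, and, as stated above, they are independent of $\alpha$.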
\section{Preparation of multi-qubit graph states and detection of their entanglement on a quantum computer}
We quantify the geometric measure of entanglement of graph states (\ref{eq:2:1}) on IBM's 5-qubit quantum computer $\textrm{ibmq\_athens}$ \cite{kk}.
The structure of the quantum computer is presented in Fig.~\ref{fig:1}, where the arrows link qubits to which the CNOT gate can
be directly applied.
\begin{figure}
\caption{Connectivity map of IBM's quantum computer $\textrm{ibmq\_athens}$.}
\label{fig:1}
\end{figure}
We consider graph state (\ref{eq:2:1}) corresponding to the graph with the structure of $\textrm{ibmq\_athens}$. It reads
\begin{eqnarray}
\ket{\psi_G^{(1)}(\phi,\alpha,\theta)} =\nonumber\\=CP_{01}(\phi)CP_{12}(\phi)CP_{23}(\phi)CP_{34}(\phi) \ket{\psi(\alpha,\theta)}^{\otimes 5}. \label{eq:2:13}
\end{eqnarray}
The degrees of the vertices in the graph (see Fig.~\ref{fig:2}~(a)) corresponding to state (\ref{eq:2:13}) are $\textrm{deg}(V_0)=\textrm{deg}(V_4)=1$, $\textrm{deg}(V_1)=\textrm{deg}(V_2)=\textrm{deg}(V_3)=2$, where $V_i$ is the vertex representing the qubit $q[i]$.
We also study graph states (\ref{eq:2:1}) associated with the claw and the complete graphs (see Fig.\ref{fig:2} (b), (c)) and determine the geometric measure of entanglement of qubits represented by vertices with degrees 3 and 4.
These graph states are defined as follows
\begin{eqnarray}
\ket{\psi_G^{(2)}(\phi,\alpha,\theta)} =CP_{10}(\phi)CP_{12}(\phi)CP_{13}(\phi)\ket{\psi(\alpha,\theta)}^{\otimes 4}, \label{eq:2:14}\\
\ket{\psi_G^{(3)}(\phi,\alpha,\theta)} =\mathop{\prod^4_{i,j=0}}\limits_{i\neq j}CP_{ij}(\phi)\ket{\psi(\alpha,\theta)}^{\otimes 5}. \label{eq:2:15}
\end{eqnarray}
State (\ref{eq:2:14}) corresponds to the claw graph (see Fig.~\ref{fig:2}~(b)), which is the simplest graph with maximal vertex degree $\textrm{deg}(V_1)=3$. State (\ref{eq:2:15}) can be represented by the complete graph (see Fig.~\ref{fig:2}~(c)). In this case $\textrm{deg}(V_i)=4$, $i=0,\ldots,4$.
\begin{figure}
\caption{Graphs corresponding to graph states $\ket{\psi_G^{(1)}}$ (\ref{eq:2:13}) (a), $\ket{\psi_G^{(2)}}$ (\ref{eq:2:14}) (b), and $\ket{\psi_G^{(3)}}$ (\ref{eq:2:15}) (c).}
\label{fig:2}
\end{figure}
To quantify the dependence of the geometric measure of entanglement of a graph state on the angle $\phi$, we fix the parameter $\theta=\pi/2$.
Taking into account that expression (\ref{eq:2:28}) obtained in the previous section does not depend on the value of $\alpha$, for convenience we set $\alpha=0$.
In this case from (\ref{eq:2:2}) we obtain $\ket{\psi(0,\pi/2)} =\ket{+}$ and the graph state reads
\begin{eqnarray}
\ket{\psi_G(\phi,0,\pi/2)} = \prod_{(i,j) \in E} CP_{ij}(\phi) \ket{+}^{\otimes V}. \label{eq:2:11}
\end{eqnarray}
Note that for $\phi=\pi$ state (\ref{eq:2:11}) coincides with (\ref{eq:2:1z}).
If instead we fix $\phi=\pi$ and keep $\alpha=0$, graph state (\ref{eq:2:1}) transforms to
\begin{eqnarray}
\ket{\psi_G(\pi,0,\theta)} = \prod_{(i,j) \in E} CZ_{ij} \ket{\psi(0,\theta)}^{\otimes V}. \label{eq:2}
\end{eqnarray}
Quantum protocols for preparing graph states $\ket{\psi_G(\phi,0,\pi/2)}$, $\ket{\psi_G(\pi,0,\theta)}$ corresponding to the chain, the claw and the complete graphs (see (\ref{eq:2:13}), (\ref{eq:2:14}), (\ref{eq:2:15}), respectively) are presented in
Figs. \ref{fig:5}, \ref{fig:6}.
\begin{figure}
\caption{Quantum protocols for preparing graph states $\ket{\psi_G^{(1)}(\phi,0,\pi/2)}$ (a), $\ket{\psi_G^{(2)}(\phi,0,\pi/2)}$ (b), and $\ket{\psi_G^{(3)}(\phi,0,\pi/2)}$ (c).}
\label{fig:5}
\end{figure}
\begin{figure}
\caption{Quantum protocols for preparing graph states $\ket{\psi_G^{(1)}(\pi,0,\theta)}$ (a), $\ket{\psi_G^{(2)}(\pi,0,\theta)}$ (b), and $\ket{\psi_G^{(3)}(\pi,0,\theta)}$ (c).}
\label{fig:6}
\end{figure}
We detect entanglement of qubits corresponding to vertices with degrees 1, 2, 3, 4 in graph states $\ket{\psi_G^{(1)}(\phi,0,\theta)}$ (\ref{eq:2:13}), $\ket{\psi_G^{(2)}(\phi,0,\theta)}$ (\ref{eq:2:14}), $\ket{\psi_G^{(3)}(\phi,0,\theta)}$ (\ref{eq:2:15}) on the quantum computer $\textrm{ibmq\_athens}$. For this purpose the graph states have been prepared with quantum protocols presented in Figs. \ref{fig:5}, \ref{fig:6} and the mean values $\braket{\sigma_l^x}$, $\braket{\sigma_l^y}$, $\braket{\sigma_l^z}$ have been measured using protocols presented in \cite{Kuzmak}.
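The post-processing step from measured counts to the entanglement value is straightforward: $\braket{\sigma^z_l}$ is estimated from a computational-basis measurement, and $\braket{\sigma^x_l}$, $\braket{\sigma^y_l}$ after the appropriate basis-change rotations, after which $E_l=(1-|r_l|)/2$. A minimal Python sketch (the counts below are made-up placeholders for illustration, not data from $\textrm{ibmq\_athens}$):

```python
import numpy as np

def pauli_mean(counts):
    """<sigma> = (N_0 - N_1)/N from single-qubit counts {'0': N_0, '1': N_1}."""
    n0, n1 = counts.get('0', 0), counts.get('1', 0)
    return (n0 - n1) / (n0 + n1)

def entanglement_from_counts(cx, cy, cz):
    """E_l = (1 - |r_l|)/2; shot noise can give |r_l| > 1, so clip at 1."""
    r = np.sqrt(pauli_mean(cx)**2 + pauli_mean(cy)**2 + pauli_mean(cz)**2)
    return 0.5*(1 - min(r, 1.0))

# placeholder counts (1024 shots per basis), NOT device data
E = entanglement_from_counts({'0': 768, '1': 256},   # x-basis measurement
                             {'0': 512, '1': 512},   # y-basis measurement
                             {'0': 512, '1': 512})   # z-basis measurement
# here <sx> = 0.5, <sy> = <sz> = 0, so E == 0.25
assert abs(E - 0.25) < 1e-12
```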
We quantify geometric measure of entanglement of qubit $q[0]$ corresponding to the vertex with degree 1 with other qubits in the states $\ket{\psi_G^{(1)}(\pi,0,\theta)}$, $\ket{\psi_G^{(1)}(\phi,0,\pi/2)}$ for different values of $\theta$ (see Fig. \ref{fig:7} (a)) and for different values of $\phi$ (see Fig. \ref{fig:8} (a)). In addition, the entanglement of the qubit $q[1]$ corresponding to the vertex with degree 2 was quantified in the states $\ket{\psi_G^{(1)}(\pi,0,\theta)}$, $\ket{\psi_G^{(1)}(\phi,0,\pi/2)}$ (see Fig. \ref{fig:7} (b), Fig. \ref{fig:8} (b)). In order to detect the geometric measure of entanglement of qubits represented by vertices with degrees 1 and 2 we choose qubits $q [0]$ and $q[1]$ because of small readout errors for these qubits in comparison with other qubits corresponding to vertices with the same degrees in the graph state $\ket{\psi_G^{(1)}(\phi,0,\theta)}$ (see Table \ref{trr1}).
\begin{table}[!h]
\caption{The calibration parameters of IBM's quantum computer $\textrm{ibmq\_athens}$ on 9 June 2021 \cite{kk}.}\label{trr1}
\begin{center}
\begin{tabular}{lccccc}
 & $Q_0$ & $Q_1$ & $Q_2$ & $Q_3$ & $Q_4$ \\
Readout error ($10^{-2}$) & 1.07 & 1.30 & 1.70 & 1.31 & 2.00 \\
Gate error ($10^{-4}$) & 2.98 & 3.16 & 5.26 & 2.54 & 2.89 \\
\end{tabular}

\begin{tabular}{lcccc}
CNOT error ($10^{-3}$) & CX0$\_$1 & CX1$\_$0 & CX1$\_$2 & CX2$\_$1 \\
 & 12.04 & 12.04 & 11.13 & 11.13 \\
 & CX2$\_$3 & CX3$\_$2 & CX3$\_$4 & CX4$\_$3 \\
 & 18.50 & 18.50 & 6.80 & 6.80 \\
\end{tabular}
\end{center}
\end{table}
The geometric measure of entanglement of the qubit $q[1]$ corresponding to the vertex with degree 3 in the claw graph with other qubits in the states $\ket{\psi_G^{(2)}(\pi,0,\theta)}$, $\ket{\psi_G^{(2)}(\phi,0,\pi/2)}$ was also calculated (see Fig. \ref{fig:7} (c), Fig. \ref{fig:8} (c)). To quantify the geometric measure of entanglement in the case of $n_l=4$ graph states $\ket{\psi_G^{(3)}(\pi,0,\theta)}$, $\ket{\psi_G^{(3)}(\phi,0,\pi/2)}$ represented by the complete graph were prepared using protocols (see Fig. \ref{fig:5} (c), Fig. \ref{fig:6} (c)) and the geometric measure of entanglement of the qubit $q[0]$ with other qubits was quantified (see Fig. \ref{fig:7} (d), Fig. \ref{fig:8} (d)).
\begin{figure}
\caption{Results of detecting the geometric measure of entanglement on $\textrm{ibmq\_athens}$ for graph states $\ket{\psi_G^{(1)}(\pi,0,\theta)}$ ((a), (b)), $\ket{\psi_G^{(2)}(\pi,0,\theta)}$ (c), and $\ket{\psi_G^{(3)}(\pi,0,\theta)}$ (d) as functions of $\theta$.}
\label{fig:7}
\end{figure}
\begin{figure}
\caption{Results of detecting the geometric measure of entanglement on $\textrm{ibmq\_athens}$ for graph states $\ket{\psi_G^{(1)}(\phi,0,\pi/2)}$ ((a), (b)), $\ket{\psi_G^{(2)}(\phi,0,\pi/2)}$ (c), and $\ket{\psi_G^{(3)}(\phi,0,\pi/2)}$ (d) as functions of $\phi$.}
\label{fig:8}
\end{figure}
The results for the geometric measure of entanglement obtained on the quantum computer for qubits in the graph states $\ket{\psi_G^{(1)}(\pi,0,\theta)}$, $\ket{\psi_G^{(1)}(\phi,0,\pi/2)}$ (see Fig. \ref{fig:7} (a), (b), Fig. \ref{fig:8} (a), (b)) are in good agreement with theoretical ones. In the case of graph states $\ket{\psi_G^{(2)}(\pi,0,\theta)}$, $\ket{\psi_G^{(2)}(\phi,0,\pi/2)}$, $\ket{\psi_G^{(3)}(\pi,0,\theta)}$, $\ket{\psi_G^{(3)}(\phi,0,\pi/2)}$ the results for the entanglement agree less well with the analytical ones, because preparing these states requires two-qubit gates between qubits that are not directly connected according to the connectivity map of IBM's quantum computer $\textrm{ibmq\_athens}$ (see Fig. \ref{fig:1}). In addition, the quantum protocols for preparing graph states represented by the complete graph (see Fig. \ref{fig:5} (c), Fig. \ref{fig:6} (c)) contain more gates than those for preparing graph states corresponding to the chain and the claw (see Fig. \ref{fig:5} (a), (b), Fig. \ref{fig:6} (a), (b)). This leads to an accumulation of errors.
\section{Conclusion}
In the paper we have studied the geometric measure of entanglement of graph states generated by the action of controlled phase shift operators on a separable quantum state of the system, in which all the qubits are in arbitrary identical states (\ref{eq:2:1}). The expression for the geometric measure of entanglement of a qubit with the other qubits in graph state (\ref{eq:2:1}) represented by an arbitrary graph has been found (\ref{eq:2:28}). We have concluded that the entanglement depends on the absolute value of the parameter $\phi$ of the phase shift operator and the parameter $\theta$ of states (\ref{eq:2:2}), as well as on the degree of the vertex representing the qubit in the graph.
The geometric measure of entanglement has also been calculated on IBM's 5-qubit quantum computer $\textrm{ibmq\_athens}$. Graph states corresponding to the graph with the structure of the quantum computer $\textrm{ibmq\_athens}$, the claw and the complete graphs have been prepared, and the geometric measure of entanglement of qubits represented by vertices with degrees 1, 2, 3, 4 has been found. Fixing the parameter $\phi=\pi$, we have prepared graph states (\ref{eq:2:13})-(\ref{eq:2:15}) and quantified their entanglement for different values of $\theta$ (see Fig. \ref{fig:7}). In addition, we have set $\theta=\pi/2$ and examined the dependence of the geometric measure of entanglement on the parameter $\phi$ of the controlled phase gate for graph states (\ref{eq:2:13})-(\ref{eq:2:15}) (see Fig. \ref{fig:8}). The results obtained with quantum calculations on the quantum device $\textrm{ibmq\_athens}$ are in good agreement with theoretical ones (see Figs. \ref{fig:7}, \ref{fig:8}).
\section{Acknowledgments}
The authors thank Prof.\ V.~M.~Tkachuk for his great support and useful comments during the research studies.
This work was supported by Project 2020.02/0196 (No. 0120U104801) from the National Research Foundation of Ukraine.
\end{document}
\begin{document}
\title{Radius of Starlikeness for Bloch Functions}
\author[Somya Malik]{Somya Malik}
\address{Department of Mathematics \\National Institute of Technology\\Tiruchirappalli-620015, India }
\email{arya.somya@gmail.com}
\author{V. Ravichandran}
\address{Department of Mathematics \\National Institute of Technology\\Tiruchirappalli-620015, India }
\email{vravi68@gmail.com; ravic@nitt.edu}
\begin{abstract}
For normalised analytic functions $f$ defined on the open unit disc $\mathbb{D}$ satisfying the condition $\sup_{z\in \mathbb{D}}(1-|z|^2) |f'(z)|\leq 1$, known as Bloch functions, we determine various starlikeness radii.
\end{abstract}
\subjclass[2010]{30C80, 30C45}
\thanks{The first author is supported by the UGC-JRF Scholarship.}
\maketitle
\section{Introduction}The class $\mathcal{A}$ consists of all analytic functions $f$ on the disc $\mathbb{D}:=\{z\in \mathbb{C}: |z|<1\}$ normalized by the conditions $f(0)=0$ and $ f'(0)=1$. The class $\mathcal{S}$ consists of all univalent functions $f\in \mathcal{A}$. The class $\mathcal{B}$ of Bloch functions consists of all functions $f\in\mathcal{A}$ satisfying $ \sup_{z\in \mathbb{D}}(1-|z|^2)|f'(z)|\leq 1$ (see \cite{Bonk}, \cite{Pom}). Bonk \cite{Bonk} has shown that the radius of starlikeness of the class $\mathcal{B}$ is the same as the radius of univalence and equals $1/\sqrt{3}\approx 0.57735$. It also follows from his distortion inequalities that the radius of close-to-convexity (with respect to $z$) is $1/\sqrt{3}$ as well. We extend the radius of starlikeness result by finding various other starlikeness radii. An analytic function $f$ is subordinate to the analytic function $g$, denoted as $f\prec g$, if there exists a function $w:\mathbb{D}\rightarrow \mathbb{D}$ with $w(0)=0$ satisfying $f(z)=g(w(z))$. If $g$ is univalent, then $f\prec g$ if and only if $f(0)=g(0)$ and $f(\mathbb{D})\subseteq g(\mathbb{D})$. The subclass $\mathcal{S}^{*}$ of $\mathcal{S}$ of starlike functions is the collection of functions $f\in \mathcal{S}$ with $\RE (zf'(z)/f(z))>0$ for $z \in \mathbb{D}$. The subclass $\mathcal{K}$ of convex functions consists of the functions in $\mathcal{S}$ with $\RE (1+zf''(z)/f'(z))>0$ for $z\in \mathbb{D}$. These conditions lead to a characterization of the two classes in terms of the class $\mathcal{P}$ of Carath\'{e}odory functions, or functions with positive real part, comprising analytic functions $p$ with $p(0)=1$ satisfying $\RE (p(z))>0$, or equivalently the subordination $p(z)\prec (1+z)/(1-z)$. Thus, the classes of starlike and convex functions consist of $f\in \mathcal{A}$ with $zf'(z)/f(z) \in \mathcal{P}$ and $1+zf''(z)/f'(z) \in \mathcal{P}$ respectively. 
Several subclasses of starlike and convex functions were defined using subordination of $zf'(z)/f(z)$ and $1+zf''(z)/f'(z)$ to some function in $\mathcal{P}$. Ma and Minda \cite{MaMinda} gave a unified treatment of growth, distortion, rotation and coefficient inequalities for functions in the classes $\mathcal{S}^{*}(\varphi)=\{f\in \mathcal{A}:zf'(z)/f(z) \prec \varphi (z) \}$ and $\mathcal{K}(\varphi)=\{f\in \mathcal{A}:1+zf''(z)/f'(z) \prec \varphi (z)\}$, where $\varphi \in \mathcal{P}$ is starlike with respect to $1$, symmetric about the real axis and satisfies $\varphi '(0)>0$. Numerous classes were defined for various choices of the function $\varphi$ such as $(1+Az)/(1+Bz),\ \mathit{e}^{z},\ z+\sqrt{1+z^2}$ and so on.
For any two subclasses $\mathcal{F}$ and $\mathcal{G}$ of $\mathcal{A}$, the $\mathcal{G}$-radius of the class $\mathcal{F}$, denoted by $R_{\mathcal{G}} (\mathcal{F})$, is the largest number $R_{\mathcal{G}} \in (0,1)$ such that $r^{-1}f(rz)\in \mathcal{G}$ for all $f\in \mathcal{F}$ and $0<r<R_{\mathcal{G}}$. We determine the radii of various subclasses of starlike functions, such as starlikeness associated with the exponential function, the lune, the cardioid and a particular rational function, for the class of Bloch functions.
\section{Radius Problems}
In 2015, Mendiratta \emph{et al.} \cite{Exp} introduced a subclass $S^{*}_{\mathit{e}}$ of starlike functions associated with the exponential function. This class $S^{*}_{\mathit{e}}$ consists of all functions $f\in\mathcal{A}$ satisfying the subordination $ zf'(z)/f(z)\prec e^z$. This subordination is equivalent to the inequality $|\log (zf'(z)/f(z))| <1$. Our first theorem gives the $S^{*}_{\mathit{e}}$-radius of Bloch functions.
\begin{theorem}
The $S^{*}_{\mathit{e}}$-radius of the class $\mathcal{B}$ of Bloch functions is \[\mathcal{R}_{S^{*}_{\mathit{e}}}(\mathcal{B})= \frac{1}{4} \sqrt{3} \left(3-3 e+\sqrt{1-10 e+9 e^2}\right) \approx 0.517387.\] The obtained radius is sharp.
\end{theorem}
\begin{proof} For functions $f \in \mathcal{B}$, Bonk \cite{Bonk} proved the following inequality
\begin{equation}\label{eqn1}
\left|\dfrac{zf'(z)}{f(z)}-\frac{\sqrt{3}}{\sqrt{3}-r}\right|\leq \frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)},\ \ |z|=r<\frac{1}{\sqrt{3}}.
\end{equation}
The function \[h(r):=\frac{\sqrt{3}}{\sqrt{3}-r}-\frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}=\frac{3-3\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}\]
is a decreasing function of $r$ for $0\leq r<1/\sqrt{3}=\mathcal{R}_{S^{*}}(\mathcal{B})$. The number $R=\mathcal{R}_{S^{*}_{\mathit{e}}}(\mathcal{B})<1/\sqrt{3}=\mathcal{R}_{S^{*}}(\mathcal{B})$ is the smallest positive root of the polynomial
\begin{equation}\label{eqR}
2R^2+3\sqrt{3}(\mathit{e}-1)R+3(1-\mathit{e})=0
\end{equation} or
$h(R)=1/\mathit{e}$. Therefore, for $0\leq r< R$, it follows that $ 1/e=h(R)<h(r)$ and hence \begin{align}\label{eqn3}
\frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}<\frac{\sqrt{3}}{\sqrt{3}-r}-\frac{1}{\mathit{e}}.
\end{align}
Thus \eqref{eqn1} and \eqref{eqn3} give \begin{align}\label{eqn3a} \left|\dfrac{zf'(z)}{f(z)}-\frac{\sqrt{3}}{\sqrt{3}-r}\right|< \frac{\sqrt{3}}{\sqrt{3}-r}-\frac{1}{\mathit{e}}, \quad |z|=r< R.\end{align}
The function $C(r)=\sqrt{3}/(\sqrt{3}-r )$ is an increasing function of $r$, so for $r\in [0,R)$,
it follows that $C(r) \in [1,C(R))\subseteq [1,C(0.6))\approx [1,1.53001)\subseteq (0.367879,1.54308)\approx (1/\mathit{e},(\mathit{e}+\mathit{e}^{-1})/2)$.
By \cite[Lemma 2.2]{Exp}, for $1/\mathit{e}<c<\mathit{e}$, we have $\{w:\ \left|w-c\right|<r_c\}\subseteq \{w:\ \left|\log (w)\right|<1\}$
when $r_c$ is given by
\begin{align}\label{eqn2}
r_c &=
\begin{dcases}
c- \mathit{e}^{-1} & \text{ if }\ \mathit{e}^{-1}<c\leq \frac{\mathit{e} +\mathit{e}^{-1}}{2} ,\\
\mathit{e} -c & \text{ if }\ \frac{\mathit{e} +\mathit{e}^{-1}}{2}\leq c<\mathit{e}.
\end{dcases}
\end{align}
By \eqref{eqn3a}, we see that $w=zf'(z)/f(z)$, $|z|<R$, satisfies $ |w-c|<c-\mathit{e}^{-1}$ and hence it follows that $ \left|\log (w)\right|<1 $. This shows that $S^{*}_{\mathit{e}}$-radius of the class $\mathcal{B}$ is at least $R$.
We now show that $R$ is the exact $S^{*}_{\mathit{e}}$-radius of the class $\mathcal{B}$.
The function $f:\mathbb{D}\to\mathbb{C}$ defined by \[f(z)=\dfrac{\sqrt{3}}{4}\left\{1-3\left(\dfrac{z-\sqrt{1/3}}{1-\sqrt{1/3}z}\right)^2\right\}\ =\ \dfrac{3z(3-2\sqrt{3}z)}{(3-\sqrt{3}z)^2}\]
is an example of a function in the class $\mathcal{B}$ and serves as an extremal function for the various problems.
For this function, we have \[\dfrac{zf'(z)}{f(z)}=\dfrac{3\sqrt{3}-9z}{2\sqrt{3}z^2-9z+3\sqrt{3}}.\]
Using the equation \eqref{eqR}, we get
$2\sqrt{3}R^2-9R+3\sqrt{3}=\mathit{e} (3\sqrt{3}-9R)$, thus, for $z=R$
\begin{align*}
\left|\log \left(\dfrac{zf'(z)}{f(z)}\right)\right| &=\left|\log \left(\dfrac{3\sqrt{3}-9z}{2\sqrt{3}z^2-9z+3\sqrt{3}}\right)\right|
= \left|\log \left(\dfrac{1}{\mathit{e}}\right)\right|=1.
\end{align*}
This proves that $R$ is the exact $S^{*}_{\mathit{e}}$-radius of the class $\mathcal{B}$.
\end{proof}
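The closed form for $R$ and the defining relation $h(R)=1/\mathit{e}$ are easy to confirm numerically; the following short Python check is illustrative only and is not part of the proof.

```python
import numpy as np

e, s3 = np.e, np.sqrt(3)

# closed form for the radius from the theorem
R = 0.25*s3*(3 - 3*e + np.sqrt(1 - 10*e + 9*e**2))

# unique positive root of 2R^2 + 3*sqrt(3)*(e-1)*R + 3*(1-e) = 0
b, c = 3*s3*(e - 1), 3*(1 - e)
R_root = (-b + np.sqrt(b**2 - 8*c)) / 4.0

# the function h from the proof
h = lambda r: (3 - 3*s3*r) / ((s3 - r)*(s3 - 2*r))

assert abs(R - R_root) < 1e-10     # closed form solves the quadratic
assert abs(R - 0.517387) < 1e-5    # quoted decimal value
assert abs(h(R) - 1/e) < 1e-10     # defining relation h(R) = 1/e
```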
Sharma \emph{et al.}\ studied the class $\mathcal{S}^{*}_{c}=\mathcal{S}^{*} (\phi _c)= \ \mathcal{S}^{*} (1+(4/3)z+(2/3)z^2)$ and proved \cite[Lemma 2.5]{Cardioid} that, for $1/3<c<3$, if
\begin{align}\label{eqn4}
r_c &=
\begin{dcases}
\frac{3c-1}{3} & \text{ if }\ \frac{1}{3}<c\leq \frac{5}{3}\\
3-c & \text{ if } \ \frac{5}{3}\leq c<3
\end{dcases}
\end{align}
then $\{w: |w-c|<r_c\} \subseteq \Omega _c$. Here $\Omega_c$ is the region bounded by the cardioid $\{x+\iota y: (9x^2+9y^2-18x+5)^2 -16(9x^2+9y^2-6x+1)=0\}.$
\begin{theorem}
The $\mathcal{S}^{*}_{c}$-radius of the class $\mathcal{B}$ satisfies $\mathcal{R}_{\mathcal{S}^{*}_{c}}(\mathcal{B})\approx 0.524423.$ This radius is sharp.
\end{theorem}
\begin{proof}
$R=\mathcal{R}_{S^{*}_{c}}$ is the smallest positive root of the equation \[R^2+3\sqrt{3}R-3=0.\]
The function \[h(r):=\frac{\sqrt{3}}{\sqrt{3}-r}-\frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}=\frac{3-3\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}\]
is a decreasing function of $r$ for $0\leq r<1/\sqrt{3}=\mathcal{R}_{S^{*}}$ \cite[Corollary, p.~455]{Bonk}. Note that the class $\mathcal{S}^{*}_{c}$ is a subclass of the starlike class $\mathcal{S}^{*}$. The number $R=\mathcal{R}_{S^{*}_{c}}$ is the smallest positive root of the equation $h(r)=1/3$, so for $0\leq r< R$, we have
\begin{align}\label{eqn5}
\frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}<\frac{\sqrt{3}}{\sqrt{3}-r}-\frac{1}{3}
\end{align}
Thus \eqref{eqn1} and \eqref{eqn5} give \[\left|\dfrac{zf'(z)}{f(z)}-\dfrac{1}{1-ar}\right|< \frac{1}{1-ar}-\frac{1}{3}; \ |z|\leq r,\ a=\frac{1}{\sqrt{3}}.\]
The center $C(r)$ of \eqref{eqn1} is an increasing function of $r$, so for $r\in [0,R),\ C(r) \in [1,C(R))\subseteq [1,C(0.6))\approx [1,1.53001)\subseteq (1/3,5/3)$. Now, by \eqref{eqn4} we get that the disc $\{w: |w-c|<c-1/3\} \subseteq \Omega _c. $
To prove sharpness, consider the function \[f(z)=\dfrac{\sqrt{3}}{4}\left\{1-3\left(\dfrac{z-\sqrt{1/3}}{1-\sqrt{1/3}z}\right)^2\right\}.\]
For this function, $\dfrac{zf'(z)}{f(z)}=\dfrac{3\sqrt{3}-9z}{2\sqrt{3}z^2-9z+3\sqrt{3}},$
and the equation for $R$ gives
$2\sqrt{3}R^2-9R+3\sqrt{3}= 3(3\sqrt{3}-9R)$; thus, for $z=R$,
\begin{align*}
\dfrac{zf'(z)}{f(z)}
&= \dfrac{1}{3}\\
&= \phi _c (-1).
\end{align*}
\end{proof}
The class $\mathcal{S}^{*}_{\leftmoon}=\mathcal{S}^{*} (z+\sqrt{1+z^2})$ was introduced by Raina and Sok\'{o}\l{} \cite{Sokol} in 2015, who proved that $f\in \mathcal{S}^{*}_{\leftmoon}$ if and only if $zf'(z)/f(z)$ lies in the lune region $\{w: |w^2-1|<2|w|\}$. Gandhi and Ravichandran \cite[Lemma 2.1]{Lune} proved that
for $\sqrt{2}-1<c\leq \sqrt{2}+1$,
\begin{align}\label{eqn6}
\{w: |w-c|<1-|\sqrt{2}-c|\}\subseteq \{w: |w^2-1|<2|w|\}
\end{align}
\begin{theorem}
The $\mathcal{S}^{*}_{\leftmoon}$-radius of the class $\mathcal{B}$ satisfies $\mathcal{R}_{\mathcal{S}^{*}_{\leftmoon}}(\mathcal{B}) \approx 0.507306.$ The radius is sharp.
\end{theorem}
\begin{proof}
Let $R=\mathcal{R}_{\mathcal{S}^{*}_{\leftmoon}}$ be the smallest positive root of the equation \[(2-2\sqrt{2})R^2+\sqrt{3}(3\sqrt{2}-6)R+3(2-\sqrt{2})=0.\]
For this $R$, the center of \eqref{eqn1} satisfies $C(R)=\sqrt{2}$; since $C(r)$ is an increasing function of $r$, for $0\leq r< R$ we have $1\leq C(r)<\sqrt{2}$, that is, $\sqrt{2}-C(r)> 0$.
The function \[h(r):=\frac{\sqrt{3}}{\sqrt{3}-r}-\frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}=\frac{3-3\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}\]
is a decreasing function of $r$ for $0\leq r<1/\sqrt{3}=\mathcal{R}_{S^{*}}$ \cite[Corollary, p.~455]{Bonk}. Note that the class $\mathcal{S}^{*}_{\leftmoon}$ is a subclass of the starlike class $\mathcal{S}^{*}$. The number $R=\mathcal{R}_{\mathcal{S}^{*}_{\leftmoon}}$ is also the smallest positive root of the equation $h(r)=\sqrt{2}-1$, so for $0\leq r< R$, we have
\begin{align}\label{eqn7}
\frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}<1-\sqrt{2} +\frac{\sqrt{3}}{\sqrt{3}-r}=1-\left|\sqrt{2}-\frac{\sqrt{3}}{\sqrt{3}-r}\right|.
\end{align}
Thus \eqref{eqn1} and \eqref{eqn7} give \[\left|\dfrac{zf'(z)}{f(z)}-\dfrac{1}{1-ar}\right|<1-\left|\sqrt{2}-\frac{1}{1-ar}\right|; \ |z|\leq r,\ a=\frac{1}{\sqrt{3}}.\]
The center $C(r)$ of \eqref{eqn1} is an increasing function of $r$, so for $r\in [0,R),\ C(r) \in [1,C(R))\subseteq [1,C(0.6))\approx [1,1.53001)\subseteq (\sqrt{2}-1,\sqrt{2}+1)$. Now, by \eqref{eqn6} we conclude that $R$ is the required radius.
To prove sharpness, consider the function \[f(z)=\dfrac{\sqrt{3}}{4}\left\{1-3\left(\dfrac{z-\sqrt{1/3}}{1-\sqrt{1/3}z}\right)^2\right\}.\]
For this function, $\dfrac{zf'(z)}{f(z)}=\dfrac{3\sqrt{3}-9z}{2\sqrt{3}z^2-9z+3\sqrt{3}},$
and we can easily see that for $z=\frac{1}{2}[2\sqrt{3}-\sqrt{6}],$
\[\left|\left(\dfrac{zf'(z)}{f(z)}\right)^2-1\right|=\ 2\left(\dfrac{zf'(z)}{f(z)}\right)\ =\ 2(\sqrt{2}-1).\] Thus, the result is sharp.
\end{proof}
The next class that we consider is the class of starlike functions associated with a rational function. Kumar and Ravichandran \cite{Rational} introduced the class of starlike functions associated with the rational function $\psi (z)=1+ ((z^2+kz)/(k^2-kz))$ where $k=\sqrt{2}+1$, denoted by $\mathcal{S}^{*}_{R}=\mathcal{S}^{*}(\psi(z))$. They proved \cite[Lemma 2.2]{Rational} that, for $2(\sqrt{2}-1)<c<2$, if
\begin{align}\label{eqn8}
r_c &=
\begin{dcases}
c-2(\sqrt{2}-1)\ & \text{ if }\ 2(\sqrt{2}-1)<c\leq \sqrt{2}\\
2-c\ & \text{ if }\ \sqrt{2}\leq c<2
\end{dcases}
\end{align}
then $\{w: |w-c|<r_c\} \subseteq \psi (\mathbb{D})$.
\begin{theorem}
The $\mathcal{S}^{*}_{R}$-radius of the class $\mathcal{B}$ is the smallest positive root of the polynomial $4(1-\sqrt{2})r^2+3\sqrt{3}(2\sqrt{2}-3)r+3(3-2\sqrt{2})$, that is, $\mathcal{R}_{\mathcal{S}^{*}_{R}}(\mathcal{B}) \approx 0.349865.$ The result is sharp.
\end{theorem}
\begin{proof}
$R=\mathcal{R}_{\mathcal{S}^{*}_{R}}$ is the smallest positive root of the equation \[4(1-\sqrt{2})R^2+3\sqrt{3}(2\sqrt{2}-3)R+3(3-2\sqrt{2})=0.\]
The function \[h(r):=\frac{\sqrt{3}}{\sqrt{3}-r}-\frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}=\frac{3-3\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}\]
is a decreasing function of $r$ for $0\leq r<1/\sqrt{3}=\mathcal{R}_{S^{*}}$ \cite[Corollary, p.~455]{Bonk}. Note that the class $\mathcal{S}^{*}_{R}$ is a subclass of the starlike class $\mathcal{S}^{*}$. The number $R=\mathcal{R}_{\mathcal{S}^{*}_{R}}$ is the smallest positive root of the equation $h(r)=2(\sqrt{2}-1)$, so for $0\leq r< R$, we have
\begin{align}\label{eqn9}
\frac{\sqrt{3}r}{(\sqrt{3}-r)(\sqrt{3}-2r)}<\frac{\sqrt{3}}{\sqrt{3}-r}-2(\sqrt{2}-1)
\end{align}
Thus \eqref{eqn1} and \eqref{eqn9} give \[\left|\dfrac{zf'(z)}{f(z)}-\dfrac{1}{1-ar}\right|< \frac{1}{1-ar}-2(\sqrt{2}-1); \ |z|\leq r,\ a=\frac{1}{\sqrt{3}}.\]
The center $C(r)$ of \eqref{eqn1} is an increasing function of $r$, so for $r\in [0,R),\ C(r) \in [1,C(R))\subseteq [1,C(0.4))\approx [1,1.30029)\subseteq (2(\sqrt{2}-1),\sqrt{2})$. Now, by \eqref{eqn8} we get that the disc $\{w: |w-c|<c-2(\sqrt{2}-1)\} \subseteq \psi (\mathbb{D}). $
To show that the result is sharp, consider the function \[f(z)=\dfrac{\sqrt{3}}{4}\left\{1-3\left(\dfrac{z-\sqrt{1/3}}{1-\sqrt{1/3}z}\right)^2\right\}.\]
For this function, $\dfrac{zf'(z)}{f(z)}=\dfrac{3\sqrt{3}-9z}{2\sqrt{3}z^2-9z+3\sqrt{3}},$
and the equation for $R$ gives
$3\sqrt{3}-9R=(2\sqrt{2}-2)(2\sqrt{3}R^2-9R+3\sqrt{3})$; thus, for $z=R$,
\begin{align*}
\dfrac{zf'(z)}{f(z)}
&= 2\sqrt{2}-2\\
&= \psi(-1).
\end{align*}
\end{proof}
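The three radii above all arise from the same decreasing function $h$ by solving $h(R)=\varphi(-1)$ for the appropriate boundary value ($1/3$, $\sqrt{2}-1$ and $2(\sqrt{2}-1)$, respectively). A quick numerical cross-check in Python (illustrative only, not part of any proof), using bisection since $h$ is decreasing:

```python
import numpy as np

s3 = np.sqrt(3)
# the function h from the proofs above
h = lambda r: (3 - 3*s3*r) / ((s3 - r)*(s3 - 2*r))

def radius(target):
    """Smallest positive root of h(r) = target on [0, 1/sqrt(3)); h decreases from h(0) = 1."""
    lo, hi = 0.0, 1/s3 - 1e-12
    for _ in range(200):                          # bisection to machine precision
        mid = 0.5*(lo + hi)
        if h(mid) < target:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

assert abs(radius(1/3)               - 0.524423) < 1e-5  # cardioid class S*_c
assert abs(radius(np.sqrt(2) - 1)    - 0.507306) < 1e-5  # lune class
assert abs(radius(2*(np.sqrt(2)-1))  - 0.349865) < 1e-5  # rational function psi
```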
\begin{theorem}
For the class $\mathcal{B}$ the following results hold:
\begin{enumerate}
\item The Lemniscate starlike radius,\ $R_{\mathcal{S}^{*}_{L}}=\frac{2\sqrt{3}-\sqrt{6}}{4}\approx 0.253653.$
\item The starlike radius associated with the sine function,\ $R_{\mathcal{S}^{*}_{\sin}}=\frac{\sqrt{3}\sin 1}{2+2\sin 1}\approx 0.395735.$
\item The nephroid radius, \ $R_{\mathcal{S}^{*}_{Ne}}=\frac{\sqrt{3}}{5}\approx 0.34641.$
\item The sigmoid radius,\ $R_{\mathcal{S}^{*}_{SG}}=\frac{\sqrt{3}(\mathit{e}-1)}{4\mathit{e}}\approx 0.273716.$
\end{enumerate}
\end{theorem}
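The four closed-form radii in the theorem can be checked against their quoted decimal values directly; the following Python snippet is illustrative only.

```python
import numpy as np

s3, e = np.sqrt(3), np.e

assert abs((2*s3 - np.sqrt(6))/4           - 0.253653) < 1e-5  # lemniscate
assert abs(s3*np.sin(1)/(2 + 2*np.sin(1))  - 0.395735) < 1e-5  # sine function
assert abs(s3/5                            - 0.34641)  < 1e-5  # nephroid
assert abs(s3*(e - 1)/(4*e)                - 0.273716) < 1e-5  # sigmoid
```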
\end{document}
\begin{document}
\title{Convexity of the orbit-closed $C$-numerical range and majorization}
\begin{abstract}
We introduce and investigate the orbit-closed $C$-numerical range, a natural modification of the $C$-numerical range of an operator introduced for $C$ trace-class by Dirr and vom Ende.
Our orbit-closed $C$-numerical range is a conservative modification of theirs because these two sets have the same closure and even coincide when $C$ is finite rank.
Since Dirr and vom Ende's results concerning the $C$-numerical range depend only on its closure, our orbit-closed $C$-numerical range inherits these properties, but we also establish more.
For $C$ selfadjoint, Dirr and vom Ende were only able to prove that the closure of their $C$-numerical range is convex, and asked whether it is convex without taking the closure.
We establish the convexity of the orbit-closed $C$-numerical range for selfadjoint $C$ without taking the closure by providing a characterization in terms of majorization, unlocking the door to a plethora of results which generalize properties of the $C$-numerical range known in finite dimensions or when $C$ has finite rank.
Under rather special hypotheses on the operators, we also show the $C$-numerical range is convex, thereby providing a partial answer to the question posed by Dirr and vom Ende.
\end{abstract}
\begin{keywords}
numerical range, $C$-numerical range, convex, trace-class, Toeplitz--Hausdorff Theorem, unitary orbit, Hausdorff distance, essential spectrum
\end{keywords}
\begin{amscode}
Primary 47A12, 47B15; Secondary 52A10, 52A40, 26D15.
\end{amscode}
\section{Introduction}
Herein we let $\mathcal{H}$ denote a separable complex Hilbert space and $B(\mathcal{H})$ the collection of all bounded linear operators on $\mathcal{H}$.
For $A \in B(\mathcal{H})$, the \term{numerical range} $W(A)$ is the image of the unit sphere of $\mathcal{H}$ under the continuous quadratic form $x \mapsto \angles{Ax,x}$, where $\angles{\bigcdot,\bigcdot}$ denotes the inner product on $\mathcal{H}$.
Of course, the numerical range has a long history, but perhaps the most impactful result is the Toeplitz--Hausdorff Theorem, which asserts that the numerical range is convex \cite{Toe-1918-MZ,Hau-1919-MZ}; an intuitive proof is given by Davis in \cite{Dav-1971-CMB}.
In this paper we are interested in unitarily invariant generalizations of the numerical range and their associated properties, especially convexity and its relation to majorization.
By considering an alternative definition of the numerical range, some generalizations become readily apparent.
Notice that
\begin{equation*}
W(A) = \set{\angles{Ax,x} \mid x \in \mathcal{H}, \norm{x}=1} = \set{ \trace(PA) \mid P\ \text{is a rank-1 projection}}.
\end{equation*}
As Halmos recognized in \cite{Hal-1964-ASM}, one could generalize this by fixing $k \in \mathbb{N}$ and requiring $P$ to be a rank-$k$ projection.
In this way, we arrive at the \term{$k$-numerical range}
\begin{equation*}
\knr(A) := \vset{ \trace\Big(\frac{1}{k}PA\Big) \,\middle\vert\, P\ \text{is a rank-$k$ projection}}.
\end{equation*}
The normalization constant $\frac{1}{k}$ is actually quite natural;
among other things, it ensures $\knr(A)$ is bounded independent of $k$ by $\norm{A}$.
In \cite[\textsection{}12]{Ber-1963}, Berger proved a few fundamental facts about the $k$-numerical range including its convexity, as well as the inclusion property $\knr[k+1](A) \subseteq \knr(A)$.
We will see shortly that these convexity and inclusion properties are actually consequences of more general phenomena (see \Cref{cor:c-numerical-range-convex,cor:majorization-c-numerical-range-inclusion}).
In \cite{FW-1971-GMJ}, Fillmore and Williams examined $\knr(A)$, but restricted their attention to the finite dimensional setting.
There they established
\begin{equation}
\label{eq:knr-majorization-description}
\knr(A) = \vset{ \frac{1}{k} \trace(XA) \,\middle\vert\, 0 \le X \le I, \trace X = k },
\end{equation}
which was generalized by Goldberg and Straus to the $C$-numerical range, as we describe below.
Moreover, Fillmore and Williams showed that if $A \in M_n(\mathbb{C})$ is normal, then
\begin{equation}
\label{eq:knr-extreme-points}
\knr(A) = \conv \vset{ \frac{1}{k} \sum_{i=1}^k \lambda_i \,\middle\vert\, \parbox{32ex}{$\lambda_i$ is an eigenvalue of $A$, repeated at most according to multiplicity} \ },
\end{equation}
which says that the extreme points of $\knr(A)$ are contained in the set of averages of $k$ eigenvalues of $A$ (allowing repetitions according to multiplicity).
This is a clear analogue of the standard fact for numerical ranges that $W(A) = \conv \spec(A)$ when $A \in M_n(\mathbb{C})$ is normal.
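This fact can be seen concretely: expanding a unit vector in an eigenbasis writes $\angles{Ax,x}$ as a convex combination of eigenvalues. A small Python illustration (a randomly generated example, not tied to any result in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# a random normal matrix: unitary conjugate of a diagonal matrix
lam = rng.standard_normal(n) + 1j*rng.standard_normal(n)            # eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n)))
A = Q @ np.diag(lam) @ Q.conj().T

for _ in range(100):
    x = rng.standard_normal(n) + 1j*rng.standard_normal(n)
    x /= np.linalg.norm(x)
    w = np.vdot(x, A @ x)            # a point of W(A): the quadratic form x* A x
    c = Q.conj().T @ x               # coordinates of x in the eigenbasis
    weights = np.abs(c)**2           # convex weights: nonnegative, summing to 1
    assert abs(weights.sum() - 1) < 1e-12
    assert abs(w - np.dot(weights, lam)) < 1e-10   # w = sum_i |c_i|^2 lambda_i
```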
In order to further generalize the $k$-numerical range, yet another new perspective is necessary.
The unitary group $\mathcal{U}$ of $B(\mathcal{H})$ acts by conjugation on $B(\mathcal{H})$, and the orbit $\mathcal{U}(C)$ of an operator $C \in B(\mathcal{H})$ under this action is called the \term{unitary orbit}.
When $P$ is any rank-$k$ projection ($k < \infty$), $\mathcal{U}(P)$ consists of all rank-$k$ projections in $B(\mathcal{H})$.
Therefore, if $P$ is a rank-$k$ projection, then
\begin{equation*}
\knr(A) = \vset{ \trace(XA) \,\middle\vert\, X \in \mathcal{U}\left(\frac{1}{k}P\right) }.
\end{equation*}
The above representation of the $k$-numerical range suggests the natural generalization to the \term{$C$-numerical range},
\begin{equation*}
\cnr(A) := \set{ \trace(XA) \mid X \in \mathcal{U}(C) }.
\end{equation*}
Of course, this requires $\trace(XA)$ to make sense, which can be achieved in several different ways, each investigated by various authors.
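For intuition, one can sample points of $\cnr(A)$ in finite dimensions by drawing Haar-random unitaries; every sample obeys the trace-duality bound $|\trace(XA)| \le \norm{C}_1 \norm{A}$, since the trace norm is unitarily invariant. A Python sketch (a finite-dimensional stand-in for the trace-class setting, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
C = np.diag([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])    # positive "trace-class" C with ||C||_1 = 1
A = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
op_norm = np.linalg.norm(A, 2)                 # operator (spectral) norm of A
tr_norm_C = np.abs(np.diag(C)).sum()           # trace norm of the diagonal C

def haar_unitary(n):
    """Haar-random unitary via QR of a complex Gaussian matrix, with phase fix."""
    Z = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d/np.abs(d))                   # rescale columns so R has positive diagonal

points = []
for _ in range(500):
    U = haar_unitary(n)
    X = U @ C @ U.conj().T                     # a point of the unitary orbit U(C)
    points.append(np.trace(X @ A))             # the corresponding point of W_C(A)

# trace-duality bound |tr(XA)| <= ||X||_1 ||A|| = ||C||_1 ||A||
assert all(abs(p) <= tr_norm_C*op_norm + 1e-9 for p in points)
```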
In \cite{Wes-1975-LMA}, Westwick considered $\cnr(A)$ when $C$ is a finite rank selfadjoint operator and proved that $\cnr(A)$ is convex by means of Morse theory.
When $\dim \mathcal{H} = n < \infty$ so that $B(\mathcal{H}) \cong M_n(\mathbb{C})$, $\cnr(A)$ is well-defined for an arbitrary $C \in M_n(\mathbb{C})$.
The $C$-numerical range was first studied in this generality by Goldberg and Straus in \cite{GS-1977-LAA}.
There, they proved a generalization of \eqref{eq:knr-majorization-description} for an arbitrary selfadjoint matrix $C$, which we extend to the infinite dimensional setting in \Cref{thm:c-numerical-range-via-majorization}.
Chi-Kwong Li provides in \cite{Li-1994-LMA} a comprehensive survey of the properties of the $C$-numerical range in finite dimensions, including the highlights which we now describe.
When $C$ is selfadjoint the $C$-numerical range is convex, but this may fail even if $C$ is normal \cite{Wes-1975-LMA,AT-1983-LMA}.
However, the $C$-numerical range is always star-shaped relative to the star center $\trace(C) \big(\frac{1}{n} \trace(A)\big)$ \cite{CT-1996-LMA}.
Moreover, there is a set $\cspec(A)$ associated to the pair $C,A$ called the \term{$C$-spectrum of $A$} which, when $C$ is a rank-1 projection, coincides with the usual spectrum of $A$.
When $A$ is normal and $C$ is selfadjoint, $\cnr(A) = \conv \cspec(A)$ \cite[Theorem~4]{Mar-1979-ANYAS}, which generalizes \eqref{eq:knr-extreme-points}.
In the recent paper \cite{DvE-2020-LaMA}, Dirr and vom Ende study a generalization of the $C$-numerical range to the infinite dimensional setting.
In this case, it again becomes necessary to ensure that the trace $\trace(XA)$ is well-defined, which they naturally enforce by requiring $C$ to be trace-class.
In \cite{DvE-2020-LaMA}, they prove extensions of some finite dimensional results by means of limiting arguments.
As a result of these limiting arguments, all of their major results pertain to the \emph{closure} $\closure{\cnr(A)}$ of the $C$-numerical range.
Dirr and vom Ende prove that $\closure{\cnr(A)}$ is star-shaped and that any element of $\trace(C) \nr_{\textrm{ess}}(A)$ is a star center \cite[Theorem~3.10]{DvE-2020-LaMA}.
They asked explicitly \cite[Open Problem (b)]{DvE-2020-LaMA} whether $\cnr(A)$ is convex without taking the closure, and we provide a partial answer in \Cref{cor:c-numerical-range-convex-diagonalizable}.
Moreover, they show that $\closure{\cnr(A)}$ is convex whenever $C$ is selfadjoint\footnote{or only slightly more generally, $C$ normal with collinear eigenvalues. In this paper, we have many results for selfadjoint $C$, but they generally have trivial unmentioned corollaries for $C$ normal with collinear eigenvalues by means of \Cref{prop:c-numerical-range-basics}\ref{item:similarity-preserving-ocnr}. We neglect these slightly more general statements in favor of the selfadjoint ones solely for clarity and simplicity of exposition.} or $A$ is a rotation and translation of a selfadjoint operator \cite[Theorem~3.8]{DvE-2020-LaMA}.
Additionally, they prove that if $C,A$ are both normal, $A$ is compact and the eigenvalues of either $C$ or $A$ are collinear, then $\closure{\cnr(A)} = \conv(\closure{\cspec(A)})$ \cite[Corollary~3.1]{DvE-2020-LaMA}.
In this paper we introduce and investigate a natural modification of the $C$-numerical range with $C$ trace-class which we call the \term{orbit-closed $C$-numerical range}, denoted $\ocnr(A)$ (see \Cref{def:orbit-closed-c-numerical-range}).
The only difference between $\ocnr(A)$ and $\cnr(A)$ is that the former allows $X$ which are \emph{approximately} unitarily equivalent (in trace norm) to $C$, that is,
\begin{equation*}
\ocnr(A) := \set{ \trace(XA) \mid X \in Mhcal{O}(C) },
\end{equation*}
where $Mhcal{O}(C) := \closure[\norm{\bigcdot}_1]{Mhcal{U}(C)}$.
Considering closures of unitary orbits in various operator topologies serves an important purpose and has precedent in the literature, especially in relation to majorization (see the discussion which introduces \cref{sec:orbit-closed-c-numerical-range}).
This relatively small difference between $\cnr(A)$ and $\ocnr(A)$ has significant consequences.
In particular, for $C$ selfadjoint we give a characterization of $\ocnr(A)$ in terms of majorization (\Cref{thm:c-numerical-range-via-majorization}) which is an appropriate extension to infinite dimensions of \cite[Theorem~1.2]{FW-1971-GMJ} (included herein as \eqref{eq:knr-majorization-description}) and its generalization \cite[Theorem~7]{GS-1977-LAA}, and whose proof is inspired by \cite[Theorem~2.14]{DS-2017-PEMSIS}.
Because in general $\cnr(A) \not= \ocnr(A)$, necessarily $\cnr(A)$ cannot enjoy this same characterization.
Moreover, this majorization characterization of $\ocnr(A)$ is the backbone of this paper and it provides a gateway to the rest of our major results.
One immediate corollary is the convexity of $\ocnr(A)$ when $C$ is selfadjoint (\Cref{cor:c-numerical-range-convex}) which generalizes and provides an independent and purely operator-theoretic proof of Westwick's theorem \cite{Wes-1975-LMA} for $C$ a finite rank selfadjoint operator.
Moreover, to our knowledge our \Cref{cor:c-numerical-range-convex} constitutes the \emph{only}\footnote{
In the \emph{finite dimensional} setting there is an independent proof of Westwick's theorem due to Poon \cite{Poo-1980-LMA} using a result of Goldberg and Straus \cite[Theorem~7]{GS-1977-LAA}.
This proof is similar in spirit to our \Cref{cor:c-numerical-range-convex} because it involves majorization.
However, it seems to us that the techniques in \cite{Poo-1980-LMA} cannot be used to reprove Westwick's result in the infinite dimensional setting even for finite rank $C$.
We say this because both \cite[Lemma~1]{Poo-1980-LMA} and \cite[Theorem~7]{GS-1977-LAA} rely in an essential way on Birkhoff's Theorem \cite{Bir-1946-UNTRA}.
The dependence of \cite[Theorem~7]{GS-1977-LAA} on Birkhoff's Theorem is not readily apparent, but can be observed through a careful analysis of the proof of \cite[Lemma~7]{GS-1977-LAA}.
}
independent proof of Westwick's convexity result in the infinite dimensional setting in the 45 years since its publication, which is especially significant because Westwick's proof used an unusual technique: Morse theory.
In addition, $\ocnr(A)$ is a \emph{conservative} modification of $\cnr(A)$ in the sense that $\cnr(A) \subseteq \ocnr(A) \subseteq \closure{\cnr(A)}$ (see \Cref{thm:cnr-dense-in-ocnr}), and moreover, if $C$ is finite rank, then $\ocnr(A) = \cnr(A)$.
Therefore, the orbit-closed $C$-numerical range constitutes an alternate natural extension of the $C$-numerical range to the infinite dimensional (and infinite rank) setting.
Moreover, because $\closure{\ocnr(A)} = \closure{\cnr(A)}$, all of Dirr and vom Ende's results (which concern the closure of the $C$-numerical range) are inherited by the orbit-closed $C$-numerical range.
Our main results are summarized in the list below.
Here $\lambda(C)$ denotes the eigenvalue sequence of a compact operator $C$ (see \cref{sec:notation}), $\prec$, $\pprec$ denote majorization and submajorization (see \Cref{def:sequence-majorization}), and $\ocspec(A)$ denotes the $Mhcal{O}(C)$-spectrum (see \Cref{def:c-spectrum}).
See \cref{sec:notation} for any other unfamiliar notation.
\begin{enumerate}
\item $\ocnr(A) = \cnr(A)$ if $C$ is finite rank (\Cref{prop:orbit-closure-equivalences}).
\item $\closure{\ocnr(A)} = \closure{\cnr(A)}$ (\Cref{thm:cnr-dense-in-ocnr}).
\item The map $(C,A) \mapsto \ocnr(A)$ is continuous (\Cref{thm:continuity}).
\item If $C \in Mhcal{L}_1^{sa}$, then $\ocnr(A) = \set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \prec \lambda(C) }$ (\Cref{thm:c-numerical-range-via-majorization}).
\item If $C \in Mhcal{L}_1^{sa}$, then $\ocnr(A)$ is convex (\Cref{cor:c-numerical-range-convex}).
\item If $C,C' \in Mhcal{L}_1^{sa}$ and $\lambda(C) \prec \lambda(C')$, then $\ocnr(A) \subseteq \ocnr[C'](A)$ (\Cref{cor:majorization-c-numerical-range-inclusion}).
\item If $C \in Mhcal{L}_1^{sa}$, $A \in Mhcal{K}$, then
\begin{equation*}
\closure{\ocnr(A)} = \set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C)} = \ocnr[C \oplus Mhbf{0}](A \oplus Mhbf{0})
\end{equation*}
as long as $Mhbf{0}$ acts on a space of dimension at least $\rank C$
(\Cref{thm:compact-closed-iff-contains-weak-majorization} and \Cref{cor:compact-closure-direct-sum-zero}).
\item\label{item:viii} For $C \in Mhcal{L}_1^+$, $\ocnr(A)$ is closed if for every $\theta$, $\rank (\Re(e^{i\theta}A)-m_{\theta}I)_+ \ge \rank C$, where $m_{\theta} := \max \spec_{\mathrm{ess}}(\Re(e^{i\theta} A))$ (\Cref{thm:ocnr-closed-rank-condition}).
\item If $C \in Mhcal{L}_1^+$, then (\Cref{thm:direct-sum-characterization})
\begin{equation*}
\ocnr(A_1 \oplus A_2) = \conv \quad \bigcup_{Mhclap{\quad C_1 \oplus C_2 \in Mhcal{O}(C)}} \ \big( \ocnr[C_1](A_1) + \ocnr[C_2](A_2) \big).
\end{equation*}
\item If $C \in Mhcal{L}_1^+$, $A \in Mhcal{K}$ normal, then $\ocnr(A) = \conv \ocspec(A)$ (\Cref{thm:normal-convex-c-spectrum}).
\item If $C \in Mhcal{L}_1^+$ with $\dim \ker C \in \set{0,\infty}$, and $A \in B(Mhcal{H})$ is diagonalizable, then $\cnr(A)$ is convex (\Cref{cor:c-numerical-range-convex-diagonalizable}).
\end{enumerate}
Many of the results listed above are extensions of facts which are known in either the finite dimensional or finite rank setting.
However, to our knowledge, \ref{item:viii} appears to be entirely new.
This paper is structured as follows.
In \cref{sec:notation} we specify some notation.
\Cref{sec:orbit-closed-c-numerical-range} contains fundamental properties of the orbit-closed $C$-numerical range for general trace-class operators $C$.
Then in \cref{sec:majorization-and-convexity} we restrict attention to selfadjoint $C$ and establish a characterization of the orbit-closed $C$-numerical range in terms of majorization (\Cref{thm:c-numerical-range-via-majorization}) which is the main theorem that paves the way for all our other primary results; it has as a direct corollary the convexity of the orbit-closed $C$-numerical range (\Cref{cor:c-numerical-range-convex}).
In \cref{sec:boundary-points} we undertake a thorough investigation of points on the boundary $\partial \ocnr(A)$, including an analysis specific to the case when $A$ is compact in subsection~\ref{subsec:compact-operators}.
We obtain necessary and sufficient conditions for $\ocnr(A)$ to be closed when $A$ is compact and $C$ is selfadjoint (\Cref{thm:compact-closed-iff-contains-weak-majorization}).
Beginning in subsection~\ref{subsec:bounded-operators} we restrict our attention to positive $C$ for the remainder of the paper, and there we provide a sufficient condition for $\ocnr(A)$ to be closed when $A \in B(Mhcal{H})$ (\Cref{thm:ocnr-closed-rank-condition}).
In \cref{sec:oc-spectrum} we characterize the behavior of the orbit-closed $C$-numerical range under finite direct sums (\Cref{thm:direct-sum-characterization}) and prove $\ocnr(A) = \conv \ocspec(A)$ when $A$ is compact normal (\Cref{thm:normal-convex-c-spectrum}).
Finally, in \cref{sec:c-numerical-range-convexity} we use variations of the Schur--Horn theorem for positive compact operators to prove that the $C$-numerical range $W_C(A)$ is convex when $A$ is diagonalizable and $C$ is positive with either trivial or infinite dimensional kernel (\Cref{cor:c-numerical-range-convex-diagonalizable}), thereby providing a partial answer to the question \cite[Open Problem (b)]{DvE-2020-LaMA} posed by Dirr and vom Ende.
\section{Notation}
\label{sec:notation}
Let $Mhcal{K}$ denote the ideal of compact operators in $B(Mhcal{H})$ and $Mhcal{L}_1$ the ideal of trace-class operators, and $Mhcal{K}^{sa}, Mhcal{L}_1^{sa}$ and $Mhcal{K}^+, Mhcal{L}_1^+$ the selfadjoint and positive operators in these ideals.
For a compact operator $C$, let $\lambda(C)$ denote the \term{eigenvalue sequence} of $C$, that is, $\lambda(C)$ is the sequence of eigenvalues of $C$ listed in order of decreasing modulus and repeated according to algebraic multiplicity, concatenated with zeros if there are only finitely many nonzero eigenvalues; when $C$ is normal the algebraic and geometric multiplicities coincide.
Note that the sequence is not necessarily uniquely determined (since unequal eigenvalues may have the same modulus), and it omits any zero eigenvalue entirely if $C$ has infinitely many nonzero eigenvalues.
Let $c_0star$ denote the set of all nonnegative nonincreasing sequences converging to zero.
Given a nonnegative sequence $\lambda$ converging to zero (not necessarily monotone), the \term{monotonization} $\lambda^{*} \in c_0star$ of $\lambda$ is the measure-theoretic \term{nonincreasing rearrangement} relative to the counting measure on $Mhbb{N}$.
In other words, $\lambda^{*}_k$ is the $k$\textsuperscript{th} largest entry of $\lambda$ repeated according to multiplicity.
Note that if $\lambda$ has infinite support, then $\lambda^{*}$ is never zero.
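To illustrate with a simple (hypothetical) sequence: if
\begin{equation*}
\lambda = \big( 0, \tfrac{1}{2}, 1, \tfrac{1}{4}, \tfrac{1}{3}, \tfrac{1}{5}, \tfrac{1}{6}, \ldots \big), \quad\text{then}\quad \lambda^{*} = \big( 1, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}, \tfrac{1}{5}, \ldots \big);
\end{equation*}
the zero entry of $\lambda$ does not appear in $\lambda^{*}$ precisely because $\lambda$ has infinite support.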
For a real-valued sequence $\lambda$ converging to zero, it is often useful to ``split'' $\lambda$ into its positive and negative parts.
To this end, we define $\lambda^+ := (\max\set{\lambda,0})^{*}$, where the maximum is taken pointwise, and $\lambda^- := (-\lambda)^+$.
So the nonzero entries of $\lambda^+$ and $-\lambda^-$ are precisely the nonzero entries of $\lambda$, but it is possible that one of $\lambda^{\pm}$ may have zero entries which do not appear in the sequence $\lambda$.
When $C$ is a selfadjoint compact operator, we can apply the above splitting to the eigenvalue sequence $\lambda(C)$.
Then the nonzero entries of $\lambda^+(C)$ and $-\lambda^-(C)$ are precisely the nonzero entries of $\lambda(C)$, but it is possible that one of $\lambda^{\pm}(C)$ may have zero entries which are \emph{not} eigenvalues of $C$.
Indeed, this occurs when exactly one of $C_{\pm}$ is finite rank and $\ker C$ is trivial.
This is a technical issue which plays a minor role.
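A concrete illustration of this technicality: if $C$ is positive with trivial kernel and infinite rank, say $\lambda(C) = \big(1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots\big)$, then $C_- = 0$ is finite rank and
\begin{equation*}
\lambda^+(C) = \big(1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots\big), \qquad \lambda^-(C) = (0, 0, 0, \ldots),
\end{equation*}
so the zero entries of $\lambda^-(C)$ are not eigenvalues of $C$.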
For a compact operator $C$, we denote by $s(C)$ the \term{singular value sequence} ($=\lambda(\abs{C})$), which for $C \in Mhcal{K}^+$ coincides with the eigenvalue sequence.
For a positive compact operator $C$, we will often write $s(C)$ in place of the eigenvalue sequence $\lambda(C)$ in order to emphasize the positivity of $C$.
For $A \in B(Mhcal{H})$, we denote by $Mhcal{U}(A)$ the unitary orbit of $A$ under the action of the unitary group $Mhcal{U}$ by conjugation.
For a trace-class operator $C$, we will let $Mhcal{O}(C)$ denote the trace-norm closure of the unitary orbit $Mhcal{U}(C)$, and we refer to $Mhcal{O}(C)$ as the \term{orbit} of $C$.
For $A \in B(Mhcal{H})$, $\Re A, \Im A$ denote the real and imaginary parts of $A$, and $\spec(A), \spec_{\mathrm{pt}}(A), \spec_{\mathrm{ess}}(A)$ are the spectrum, point spectrum and essential spectrum of $A$, respectively.
If $A$ is selfadjoint, then $A_+,A_-$ denote the positive and negative parts of $A$.
In addition, if $E \subseteq Mhbb{R}$ is Borel, then $\chi_E(A)$ denotes the spectral projection of $A$ corresponding to the set $E$.
For a set $S$ in a (real or complex) vector space we let $\conv S$ denote the (not necessarily closed) \emph{convex hull} of $S$.
That is, $\conv S$ is the smallest convex set containing $S$.
\section{The orbit-closed $C$-numerical range}
\label{sec:orbit-closed-c-numerical-range}
When working in an infinite dimensional operator algebra such as $B(Mhcal{H}), Mhcal{K}$ or a type II factor, it is often important to substitute the unitary orbit $Mhcal{U}(C)$ of an operator with its closure in an appropriate operator topology.
This appears repeatedly throughout the literature, especially in relation to majorization.
For example, Arveson and Kadison \cite{AK-2006-OTOAaA} considered $Mhcal{O}(C)$ for $C \in Mhcal{L}_1^+$ when investigating diagonals of positive trace-class operators and the Schur--Horn theorem, which is a characterization of the diagonals in terms of majorization.
Likewise, when Kaftal and Weiss extended the Schur--Horn theorem to positive compact operators $C \in Mhcal{K}^+$, they implicitly provided their primary characterization in terms\footnote{In \cite{KW-2010-JFA}, this is actually stated in terms of the so-called \emph{partial isometry orbit} $Mhcal{V}(C)$, but \cite[Proposition~2.1.12]{Lor-2016} guarantees that $\closure[\norm{\bigcdot}]{Mhcal{U}(C)} = Mhcal{V}(C)$ for $C \in Mhcal{K}^+$.} of the norm closure $\closure[\norm{\bigcdot}]{Mhcal{U}(C)}$ of the unitary orbit \cite[Proposition~6.4]{KW-2010-JFA}.
In addition, when Dykema and Skoufranis studied numerical ranges in II$_1$ factors \cite{DS-2017-PEMSIS}, they also used the norm closure of the unitary orbit.
For $C$ selfadjoint, the net effect of taking the closure in each of these situations is to make the eigenvalue sequence\footnote{in the case of II$_1$ factors, the analogous notion is the \emph{spectral scale}.} $\lambda(C)$ a complete invariant for the closure of the unitary orbit of $C$.
The reason this phenomenon does not appear in the finite dimensional setting, or even in the case of $C$ finite rank, is that the unitary orbit is already closed.
The next proposition is a generalization of \cite[Proposition~3.1]{AK-2006-OTOAaA} and makes all of this intuition precise.
\begin{proposition}
\label{prop:orbit-closure-equivalences}
If $C \in Mhcal{K}$ is a compact normal operator, then the following are equivalent.
\begin{enumerate}
\item\label{item:1} $X \in \closure[\norm{\bigcdot}]{Mhcal{U}(C)}$; that is, $X$ is approximately unitarily equivalent to $C$.
\item\label{item:2} $X$ is compact normal and $\lambda(X) = \lambda(C)$ (up to a suitable permutation).
\item\label{item:3} $X \oplus Mhbf{0} \in Mhcal{U}(C \oplus Mhbf{0})$ where the size of $Mhbf{0}$ is infinite.
\end{enumerate}
If in addition $C \in Mhcal{L}_1$, then these are also equivalent to
\begin{enumerate}[resume]
\item\label{item:4} $X \in Mhcal{O}(C)$.
\end{enumerate}
When $C$ has finite rank, even if $C$ is not normal, then
\begin{equation*}
\closure[\norm{\bigcdot}]{Mhcal{U}(C)} = Mhcal{O}(C) = Mhcal{U}(C).
\end{equation*}
\end{proposition}
\begin{proof}
\ref{item:1} $\Leftrightarrow$ \ref{item:2}.
This is due to Gellar and Page \cite[Theorem~1]{GP-1974-DMJ} and the fact that all nonzero eigenvalues of a compact operator are isolated.
\ref{item:2} $\Rightarrow$ \ref{item:3}.
If $X,C$ are compact normal and $\lambda(X) = \lambda(C)$, then $X$ and $C$ have the same nonzero eigenvalues including multiplicity.
Therefore $X \oplus Mhbf{0}, C \oplus Mhbf{0}$ not only have the same nonzero eigenvalues with multiplicity, but they also have zero as an eigenvalue of infinite multiplicity.
Therefore $X \oplus Mhbf{0}, C \oplus Mhbf{0}$ are unitarily equivalent.
\ref{item:3} $\Rightarrow$ \ref{item:2}.
If $X \oplus Mhbf{0} \in Mhcal{U}(C \oplus Mhbf{0})$, then $X$ is compact normal since $C$ is also.
Moreover, $\lambda(X) = \lambda(X \oplus Mhbf{0}) = \lambda(C \oplus Mhbf{0}) = \lambda(C)$.
Now suppose that $C \in Mhcal{L}_1$.
\ref{item:4} $\Rightarrow$ \ref{item:1}.
Trivial because the trace-norm topology on $Mhcal{U}(C)$ is stronger than the operator norm topology.
\ref{item:2} $\Rightarrow$ \ref{item:4}.
Suppose that $X$ is compact normal and $\lambda(X) = \lambda(C)$.
Clearly this implies that $X \in Mhcal{L}_1$ since $C \in Mhcal{L}_1$.
Let $\varepsilon > 0$ and take $N$ so that $\sum_{n=N+1}^{\infty} \abs{\lambda_n(C)} < \frac{\varepsilon}{2}$.
Since $X,C$ are normal and trace-class, they have orthonormal bases $\set{e_n}_{n=1}^{\infty}, \set{f_n}_{n=1}^{\infty}$ consisting of eigenvectors so that for $1 \le n \le N$, $X e_n = \lambda_n(X) e_n$ and $C f_n = \lambda_n(C) f_n$.
Let $U$ be the unitary for which $Ue_n = f_n$.
Then $UXU^{*},C$ are both diagonalized by the basis $\set{f_n}_{n=1}^{\infty}$.
Therefore,
\begin{equation*}
\norm{UXU^{*} - C}_1 = \trace\big(\abs{UXU^{*} - C}\big) \le \sum_{n=N+1}^{\infty} \abs{\lambda_n(X)} + \sum_{n=N+1}^{\infty} \abs{\lambda_n(C)} < \varepsilon.
\end{equation*}
Therefore $X \in Mhcal{O}(C)$.
The claim for finite rank operators follows from the fact that the unitary orbit of an operator is norm closed if and only if the C*-algebra it generates is finite dimensional \cite[Proposition~2.4]{Voi-1976-RRMPA}, which is certainly the case for finite rank operators.
\end{proof}
Of particular importance to us here are the equivalences \ref{item:2} $\Leftrightarrow$ \ref{item:3} $\Leftrightarrow$ \ref{item:4} when $C$ is normal and trace-class, which we will make use of repeatedly throughout.
\begin{definition}
\label{def:orbit-closed-c-numerical-range}
Given a trace-class operator $C \in Mhcal{L}_1$, we define the \term{orbit-closed $C$-numerical range} of an operator $A \in B(Mhcal{H})$ by
\begin{equation*}
\ocnr(A) := \set{ \trace(XA) \mid X \in Mhcal{O}(C) }.
\end{equation*}
\end{definition}
It is clear from the definition of the orbit-closed $C$-numerical range that $\cnr(A) \subseteq \ocnr(A)$ but the inclusion is, in general, strict as the next example shows.
\begin{example}
\label{ex:cnr-not-equal-ocnr}
If $C$ is a strictly positive trace-class operator and $A$ is a nonzero positive operator with infinite dimensional kernel, then $0 \in \ocnr(A) \setminus \cnr(A)$.
Indeed, if $X \in Mhcal{U}(C)$, then $X$ is strictly positive and therefore $\trace(XA) = \trace(X^{\frac{1}{2}} A X^{\frac{1}{2}}) > 0$ since $X^{\frac{1}{2}} A X^{\frac{1}{2}}$ is a nonzero positive operator and the trace is faithful.
Therefore $0 \notin \cnr(A)$ since $X \in Mhcal{U}(C)$ was arbitrary.
On the other hand, since $\ker A$ is infinite dimensional, there is some positive trace-class $X'$ which acts on $\ker A$ with $\lambda(X') = \lambda(C)$.
Then $X := X' \oplus Mhbf{0}_{\ker^{\perp}\!\!A}$ satisfies $\lambda(X) = \lambda(C)$, so $X \in Mhcal{O}(C)$ by \Cref{prop:orbit-closure-equivalences}.
Moreover, $0 = \trace(XA) \in \ocnr(A)$.
\end{example}
By \Cref{prop:orbit-closure-equivalences}, for finite rank operators $Mhcal{U}(C) = Mhcal{O}(C)$, and hence in this case we have equality $\ocnr(A) = \cnr(A)$.
In particular, if $P$ is a rank-$k$ projection, then $\ocnr[\frac{1}{k}P](A)$ is just the $k$-numerical range $\knr(A)$.
This, along with the following theorem, justifies our claim that the orbit-closed $C$-numerical range is a conservative modification of the $C$-numerical range.
\begin{theorem}
\label{thm:cnr-dense-in-ocnr}
If $C \in Mhcal{L}_1$ is a trace-class operator and $A \in B(Mhcal{H})$, then $\cnr(A)$ is dense in $\ocnr(A)$.
In particular, $\closure{\cnr(A)} = \closure{\ocnr(A)}$.
\end{theorem}
\begin{proof}
This is a direct consequence of the continuity of the map $(X,A) \mapsto \trace(XA)$ from $Mhcal{L}_1 \times B(Mhcal{H})$ to $Mhbb{C}$, where $Mhcal{L}_1$ denotes the ideal of trace-class operators equipped with the trace norm.
To be more specific, if $X \in Mhcal{O}(C)$, then there is a sequence of unitaries $U_n \in Mhcal{U}$ such that $U_n C U_n^{*} \xrightarrow{\norm{\bigcdot}_1} X$.
Then
\begin{equation*}
\abs{\trace(XA) - \trace(U_n C U_n^{*} A)} = \abs{\trace\big((X-U_n C U_n^{*})A\big)} \le \norm{X-U_n C U_n^{*}}_1 \norm{A}.
\end{equation*}
Therefore $\trace(U_n C U_n^{*} A) \to \trace(XA)$, proving that $\cnr(A)$ is dense in $\ocnr(A)$.
Because the inclusion $\cnr(A) \subseteq \ocnr(A)$ is trivial, this yields $\closure{\cnr(A)} = \closure{\ocnr(A)}$.
\end{proof}
We note that in general $\ocnr(A)$ is not closed, so it is not simply the closure of $\cnr(A)$.
Indeed, when $C$ is a rank-one projection, $\ocnr(A) = W(A)$, which need not be closed.
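A standard example: let $A$ be the diagonal operator with diagonal sequence $\big(1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots\big)$ relative to an orthonormal basis $\set{e_n}_{n=1}^{\infty}$. Then $\langle Ax, x \rangle > 0$ for every unit vector $x$ since $A$ is positive with trivial kernel, but $\langle Ae_n, e_n \rangle = \tfrac{1}{n} \to 0$, so
\begin{equation*}
W(A) = (0,1],
\end{equation*}
which is not closed.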
As a follow-up to the previous theorem, we prove that the orbit-closed $C$-numerical range is a continuous function from pairs of operators (trace-class and bounded) to bounded subsets of the plane equipped with the Hausdorff distance $d_H$, which is only a pseudometric unless one restricts to compact sets.
The Hausdorff distance on bounded sets is defined as
\begin{equation*}
d_H(Y,Z) := \max \left\{ \sup_{y\in Y} d(y,Z),\ \sup_{z \in Z} d(z,Y) \right\}.
\end{equation*}
As with any pseudometric, the Hausdorff distance $d_H$ generates a topology (ironically, a non-Hausdorff one) whose basis consists of the open balls.
Since this topological space is not Hausdorff, limits are not unique, but two sets $Y,Z$ are limits of the same sequence if and only if $d_H(Y,Z) = 0$ if and only if $\closure{Y} = \closure{Z}$.
This latter fact about the closures follows immediately from the definition of $d_H$, which guarantees that two bounded sets have Hausdorff distance zero if and only if they have the same closure.
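A simple example of this failure to separate sets:
\begin{equation*}
d_H\big( (0,1), [0,1] \big) = 0,
\end{equation*}
since the open and closed unit intervals have the same closure; thus $d_H$ does not distinguish distinct bounded sets and is genuinely only a pseudometric.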
\begin{theorem}
\label{thm:continuity}
The function $(C,A) \mapsto \ocnr(A)$ from $Mhcal{L}_1 \times B(Mhcal{H})$ equipped with the norm $\norm{(C,A)} = \norm{C}_1 + \norm{A}$ to bounded subsets of $Mhbb{C}$ is continuous, where the latter is equipped with the Hausdorff pseudometric, denoted $d_H$.
In fact, the function is Lipschitz in each variable separately with Lipschitz constant the norm (or trace norm) of the fixed operator.
That is,
\begin{equation*}
d_H\big(\ocnr(A),\ocnr[C'](A)\big) \le \norm{C-C'}_1 \norm{A},
\end{equation*}
and
\begin{equation*}
d_H\big(\ocnr(A),\ocnr(A')\big) \le \norm{C}_1 \norm{A-A'}.
\end{equation*}
\end{theorem}
\begin{proof}
This is a direct consequence of the continuity of the map $(X,A) \mapsto \trace(XA)$.
Indeed, notice that for any $X \in Mhcal{O}(C)$ and $A,A' \in B(Mhcal{H})$, we have
\begin{equation*}
\abs{\trace(XA) - \trace(XA')} = \abs{\trace\big(X(A-A')\big)} \le \norm{X}_1 \norm{A-A'} = \norm{C}_1 \norm{A-A'}.
\end{equation*}
Since $\trace(XA), \trace(XA')$ represent arbitrary members of $\ocnr(A),\ocnr(A')$, we find
\begin{equation*}
d_H\big(\ocnr(A),\ocnr(A')\big) \le \norm{C}_1 \norm{A-A'}.
\end{equation*}
For the Lipschitz continuity in the other variable, notice that $d_H\big(\ocnr(A),\cnr(A)\big) = 0$ since these sets have the same closure by \Cref{thm:cnr-dense-in-ocnr}.
So, it suffices to prove the result for the $C$-numerical range.
Let $X \in Mhcal{U}(C)$, so that $X = UCU^{*}$ for some unitary $U$.
Then let $X' := UC'U^{*}$.
Therefore
\begin{equation*}
\abs{\trace(XA) - \trace(X'A)} = \abs{\trace\big((X-X')A\big)} \le \norm{X-X'}_1 \norm{A} = \norm{C-C'}_1 \norm{A}.
\end{equation*}
By a symmetric argument we obtain
\begin{equation*}
d_H\big(\cnr(A),\cnr[C'](A)\big) \le \norm{C-C'}_1 \norm{A},
\end{equation*}
and hence also
\begin{equation*}
d_H\big(\ocnr(A),\ocnr[C'](A)\big) \le \norm{C-C'}_1 \norm{A}. \qedhere
\end{equation*}
\end{proof}
\begin{corollary}
\label{cor:approximate-unitary-equivalence-same-closure}
$\closure{\ocnr(A)} = \closure{\ocnr(A')}$ if $A,A'$ are approximately unitarily equivalent.
\end{corollary}
\begin{proof}
Since $A,A'$ are approximately unitarily equivalent, there are unitaries $U_n$ such that $U_n A U_n^{*} \to A'$, and therefore $d_H \big( \ocnr(U_n A U_n^{*}), \ocnr(A') \big) \to 0$ by \Cref{thm:continuity}.
However, $\ocnr(A) = \ocnr(U_n A U_n^{*})$ since conjugation by the unitary $U_n$ may be absorbed into $Mhcal{O}(C)$, whence $d_H \big( \ocnr(A), \ocnr(A') \big) = 0$.
Thus $\closure{\ocnr(A)} = \closure{\ocnr(A')}$.
\end{proof}
The following proposition provides some basic facts concerning the orbit-closed $C$-numerical range, all of which follow easily from fundamental properties of the trace.
\begin{proposition}
\label{prop:c-numerical-range-basics}
Given a trace-class operator $C \in Mhcal{L}_1$, $A \in B(Mhcal{H})$ and $a,b \in Mhbb{C}$,
\begin{enumerate}
\item\label{item:positivity-ocnr} if $A \in B(Mhcal{H})^+$ and $C \in Mhcal{L}_1^+$, then $\ocnr(A) \subseteq [0,\infty)$;
\item\label{item:hermitian-ocnr} if $C$ is selfadjoint, then for any $X \in Mhcal{O}(C)$, $\Re(\trace(XA)) = \trace(X\Re A)$, and so $\Re \ocnr(A) = \ocnr(\Re A)$;
\item\label{item:similarity-preserving-ocnr} $\ocnr(aI+bA) = a \trace C + b \ocnr(A)$.
\end{enumerate}
Moreover, the same results hold for $\cnr(A)$.
\end{proposition}
\begin{proof}
\ref{item:positivity-ocnr}.
Consider $X \in Mhcal{O}(C)$, so that $X$ is positive and trace-class.
If $A$ is also positive, then
\begin{equation*}
\trace(XA) = \trace(X^{\frac{1}{2}}AX^{\frac{1}{2}}) \ge 0,
\end{equation*}
since the trace is a positive linear functional.
\ref{item:hermitian-ocnr}.
If $C = C^{*}$, then for any $X \in Mhcal{O}(C)$ we have $X = X^{*}$.
Therefore
\begin{equation*}
\trace(XA) + \overline{\trace(XA)} = \trace(XA) + \trace(A^{*}X^{*}) = \trace(XA) + \trace(XA^{*}) = \trace\big(X(A + A^{*})\big).
\end{equation*}
\ref{item:similarity-preserving-ocnr}.
Note that $\trace\big(X(aI+bA)\big) = a \trace X + b \trace(XA)$, and since $\trace X = \trace C$ (because $X \in Mhcal{O}(C)$), we obtain $\ocnr(aI+bA) = a \trace C + b \ocnr(A)$.
A simple examination of the above proof shows that everything works equally well for $X \in Mhcal{U}(C)$; therefore, these results also apply to $\cnr(A)$.
\end{proof}
\section{Majorization and convexity}
\label{sec:majorization-and-convexity}
In this section we establish our main theorem which characterizes the orbit-closed $C$-numerical range for $C$ selfadjoint in terms of majorization (\Cref{thm:c-numerical-range-via-majorization}), which directly yields convexity (\Cref{cor:c-numerical-range-convex}).
We begin by recalling the notion of majorization.
\begin{definition}
\label{def:sequence-majorization}
Given nonnegative sequences $d,\lambda$ converging to zero, we say that $d$ is \term{submajorized} by $\lambda$ and write $d \pprec \lambda$ if, for all $n \in Mhbb{N}$,
\begin{equation*}
\sum_{k=1}^n d^{*}_k \le \sum_{k=1}^n \lambda^{*}_k.
\end{equation*}
If, in addition, equality of the sums holds when $n=\infty$ (including the possibility that both sums are infinite), we say that $d$ is \term{majorized} by $\lambda$ and write $d \prec \lambda$.
If equality of the sums holds for infinitely many $n \in Mhbb{N}$, we say that $d$ is \term{block majorized} by $\lambda$.
For real-valued sequences $d,\lambda \in \ell^1$, we say that $d$ is \term{submajorized} by $\lambda$, and write $d \pprec \lambda$, if $d^+ \pprec \lambda^+$ and $d^- \pprec \lambda^-$.
If in addition $\sum_{k=1}^{\infty} d_k = \sum_{k=1}^{\infty} \lambda_k$, we say that $d$ is \term{majorized} by $\lambda$, and we write $d \prec \lambda$.
\end{definition}
The reader should take note: if $d,\lambda \in \ell^1$ are real-valued sequences, $d \prec \lambda$ is strictly weaker than satisfying both $d^+ \prec \lambda^+$ and $d^- \prec \lambda^-$.
For example, the zero sequence is majorized by any sequence in $\ell^1$ whose sum is zero.
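For a concrete instance of majorization, consider $d = \big(\tfrac{1}{2}, \tfrac{1}{2}, 0, 0, \ldots\big)$ and $\lambda = (1, 0, 0, \ldots)$. Then for every $n \in Mhbb{N}$,
\begin{equation*}
\sum_{k=1}^n d^{*}_k \le \sum_{k=1}^n \lambda^{*}_k, \quad\text{and}\quad \sum_{k=1}^{\infty} d_k = 1 = \sum_{k=1}^{\infty} \lambda_k,
\end{equation*}
so $d \prec \lambda$; intuitively, $d$ ``averages'' the mass of $\lambda$. By contrast, $\big(\tfrac{1}{2}, 0, 0, \ldots\big)$ is submajorized but not majorized by $\lambda$, since the total sums differ.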
The next two results are due to Hiai and Nakamura in \cite{HN-1991-TAMS} and link majorization and submajorization to the closed convex hulls of unitary orbits in various operator topologies.
Their results apply in von Neumann algebras more generally, not just $B(Mhcal{H})$, so we are stating simplified versions for our own needs.
\begin{proposition}[\protect{\cite[Theorem~3.3]{HN-1991-TAMS}}]
\label{prop:wot-closure-convex-orbit-weak-majorization}
For a selfadjoint compact operator $C \in Mhcal{K}^{sa}$,
\begin{equation*}
\{ X \in Mhcal{K}^{sa} \mid \lambda(X) \pprec \lambda(C) \} = \closure[Mhrm{wot}]{\conv Mhcal{U}(C)} = \closure[\norm{\bigcdot}]{\conv Mhcal{U}(C)}.
\end{equation*}
\end{proposition}
Note that for $C$ trace-class, since the trace-norm topology on $\conv Mhcal{U}(C)$ is stronger than the norm topology (or the weak operator topology), we may replace $Mhcal{U}(C)$ in \Cref{prop:wot-closure-convex-orbit-weak-majorization} with $Mhcal{O}(C)$.
\begin{proposition}[\protect{\cite[Theorem~3.5(4)]{HN-1991-TAMS}}]
\label{prop:convex-orbit-majorization}
For a selfadjoint trace-class operator $C \in Mhcal{L}_1^{sa}$,
\begin{equation*}
\{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \prec \lambda(C) \} = \closure[\norm{\bigcdot}_1]{\conv Mhcal{U}(C)} = \closure[\norm{\bigcdot}_1]{\conv Mhcal{O}(C)}.
\end{equation*}
\end{proposition}
Before we begin the proof of the main theorem of this section, which has as a corollary that $\ocnr(A)$ is convex when $C$ is selfadjoint, we must prove a key technical lemma.
This lemma says in a rather strong way that the extreme points of $\set{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \prec \lambda(C) }$ form a subset of $Mhcal{O}(C)$.
\begin{lemma}
\label{lem:majorization-trace-zero-perturbation}
Suppose that $X,C \in Mhcal{L}_1^{sa}$ with $\lambda(X) \prec \lambda(C)$ but $X \notin Mhcal{O}(C)$.
Then there is a nonzero projection $P$ of rank at least $2$ and an $\varepsilon > 0$ such that $\lambda(X+S) \prec \lambda(C)$ for any selfadjoint $S$ with $S = PS = SP$, $\trace S = 0$ and $\norm{S} < \varepsilon$.
\end{lemma}
\begin{proof}
Suppose that $X,C \in Mhcal{L}_1^{sa}$ with $\lambda(X) \prec \lambda(C)$ but $X \notin Mhcal{O}(C)$.
There are two distinct cases, when $\trace X_+ = \trace C_+$ (necessitating $\trace X_- = \trace C_-$) and when $\trace X_+ < \trace C_+$ (necessitating $\trace X_- < \trace C_-$).
\begin{case}{$\trace X_+ = \trace C_+$.}
Since $X \notin Mhcal{O}(C)$, we have $\lambda(X) \not= \lambda(C)$ by \Cref{prop:orbit-closure-equivalences}, and hence either $\lambda^+(X) \not= \lambda^+(C)$ or $\lambda^-(X) \not= \lambda^-(C)$.
Without loss of generality we may assume the former.
So, in this case it suffices to prove the result when $X,C \in Mhcal{L}_1^+$ and $s(X) \prec s(C)$ but $s(X) \not= s(C)$ since $s(X) = \lambda(X) = \lambda^+(X)$ for \emph{positive} compact operators.
Let $n \in Mhbb{N}$ be the first index for which the sequences $s(X), s(C)$ differ.
Necessarily $s_n(X) < s_n(C)$.
Moreover, $s_{n+1}(X) > 0$ since
\begin{equation*}
\sum_{j=1}^n s_j(X) < \sum_{j=1}^n s_j(C) \quad\text{necessitates}\quad \sum_{j=n+1}^\infty s_j(X) > \sum_{j=n+1}^\infty s_j(C).
\end{equation*}
Let $m \ge n+1$ be the first index such that $s_{m+1}(X) < s_{n+1}(X)$, and hence $s_{n+1}(X) = s_{n+2}(X) = \cdots = s_m(X)$.
Such an index $m$ occurs because $s(X) \in c_0star$ and $s_{n+1}(X) > 0$.
Let $\delta_k := \sum_{j = 1}^k \big( s_j(C) - s_j(X) \big)$ and note that $\delta_k \ge 0$ for all $k \in Mhbb{N}$ since $s(X) \prec s(C)$.
Also $\delta_k = 0$ for $1 \le k < n$, and $\delta_n = s_n(C) - s_n(X) > 0$.
Moreover, for $n < k \le m$, $s_k(X)$ is constant ($= s_{n+1}(X)$), and therefore on the interval $n \le k \le m$ the sequence $\delta_k$ is nondecreasing (as long as $s_k(C) \ge s_{n+1}(X)$) and then possibly strictly decreasing (once $s_k(C) < s_{n+1}(X)$).
Consequently, $\delta_{m-1} > 0$, as it is either greater than or equal to $\delta_n > 0$ or strictly greater than $\delta_m \ge 0$.
Furthermore, $\min_{n \le k < m} \delta_k = \min \set{\delta_n, \delta_{m-1}} > 0$.
Set $\varepsilon := \min \set{ \delta_n, \delta_{m-1}, s_m(X)-s_{m+1}(X) }$.
Note that
\begin{equation*}
\delta_n = s_n(C) - s_n(X) \le s_{n-1}(C) - s_n(X) = s_{n-1}(X) - s_n(X).
\end{equation*}
Let $\{e_j\}_{j=1}^{\infty}$ be an orthonormal set of eigenvectors for $X$ corresponding to the eigenvalues in the sequence $s(X)$.
Let $P$ be the projection onto $\spans\{e_n,e_m\}$.
Let $S$ be any selfadjoint operator with $S = PS = SP$, $\trace(S) = 0$, and $\norm{S} < \varepsilon$.
Then because $S = PS = SP$, if $m \not= j \not= n$, then $Se_j = SPe_j = 0$, and hence $(X+S)e_j = s_j(X) e_j$.
Moreover, because $\norm{S} < \varepsilon$ and $\trace(S) = 0$, $(X+S)f_n = (s_n(X) + \eta) f_n$ and $(X+S)f_m = (s_m(X) - \eta) f_m$, for some $\abs{\eta} \le \norm{S} < \varepsilon$ and orthonormal vectors $f_n,f_m$ with $\spans \set{f_n,f_m} = \spans \set{e_n,e_m}$.
We will establish $s(X+S) \prec s(C)$, and we deal with the case when $\eta \ge 0$ first because it implies the case when $\eta \le 0$.
Notice that
\begin{equation*}
s_{n+1}(X) \le s_n(X) + \eta < s_n(X) + \big(s_{n-1}(X) - s_n(X)\big) = s_{n-1}(X),
\end{equation*}
and also
\begin{equation*}
s_{m-1}(X) \ge s_m(X) - \eta > s_m(X) - \big(s_m(X) - s_{m+1}(X)\big) = s_{m+1}(X).
\end{equation*}
Therefore the order of the singular values is preserved between $X$ and $X+S$;
in particular, $s_n(X+S) = s_n(X) + \eta$ and $s_m(X+S) = s_m(X) - \eta$ and $s_j(X+S) = s_j(X)$ for all $n \not= j \not= m$.
Thus to ensure $s(X+S) \prec s(C)$ we only need to check the partial sums for indices $n \le k < m$, because for all other values of $k$, $\sum_{j=1}^k s_j(X+S) = \sum_{j=1}^k s_j(X)$ and $s(X) \prec s(C)$.
So for any $n \le k < m$, we have
\begin{align*}
\sum_{j=1}^k s_j(X+S) &= \sum_{j=1}^{n-1} s_j(X+S) + s_n(X+S) + \sum_{j=n+1}^k s_j(X+S) \\
&= \sum_{j=1}^{n-1} s_j(X) + (s_n(X) + \eta) + \sum_{j=n+1}^k s_j(X) \\
&= \sum_{j=1}^{n-1} s_j(C) + (s_n(C) + \eta) + \sum_{j=n+1}^k s_j(C) - \delta_k \\
&\le \sum_{j=1}^k s_j(C),
\end{align*}
where the last line follows because $\eta - \delta_k \le \varepsilon - \min \set{\delta_n,\delta_{m-1}} \le 0$.
Thus $s(X+S) \prec s(C)$.
Now suppose $\eta \le 0$.
In this case it is clear that the sequence with $\eta$ is majorized by the same sequence with $\abs{\eta}$.
Indeed, this is due to a fundamental fact about majorization: given a decreasing nonnegative sequence $(d_j)_{j=1}^{\infty}$, if $n < m$ and $d_n > d_m$, and we consider the sequence $(d'_j)_{j=1}^{\infty}$ which is equal to the original sequence except that $d'_n = d_n - \varepsilon$ and $d'_m = d_m + \varepsilon$ for some $0 \le \varepsilon \le d_n - d_m$, then $(d'_j)_{j=1}^{\infty} \prec (d_j)_{j=1}^{\infty}$, and this happens even if the decreasing order is no longer preserved for the sequence $(d'_j)_{j=1}^{\infty}$.
In our case, for $\eta \le 0$, we are using $d_n = s_n(X) - \eta$, $d_m = s_m(X) + \eta$ and $\varepsilon = 2\abs{\eta}$.
Therefore we still obtain $s(X+S) \prec s(C)$ even for $\eta \le 0$.
\end{case}
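As a concrete illustration of the Case 1 construction (the specific sequences here are our choice and not part of the argument), consider the following finite-rank pair, tracking each quantity defined above.

```latex
% Illustration: take s(C) = (4,2,1,1,0,\ldots) and s(X) = (3,2,2,1,0,\ldots), so the
% partial sums (3,5,7,8,\ldots) \le (4,6,7,8,\ldots) give s(X) \prec s(C) with equal traces.
% The first differing index is n = 1, and s_2(X) = s_3(X) = 2 > 1 = s_4(X), so m = 3.
\delta_1 = \delta_2 = 1, \quad \delta_3 = \delta_4 = 0, \qquad
\varepsilon = \min\set{\delta_1,\, \delta_2,\, s_3(X) - s_4(X)} = 1.
% For any traceless selfadjoint S supported on \spans\set{e_1,e_3} with \norm{S} < 1,
% the singular values of X + S are (3+\eta,\, 2,\, 2-\eta,\, 1,\, 0, \ldots) with
% \abs{\eta} < 1, and the partial sums (3+\eta,\, 5+\eta,\, 7,\, 8) remain dominated
% by (4,6,7,8), so s(X+S) \prec s(C).
```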
\begin{case}{$\trace X_+ < \trace C_+$ and both $X_{\pm}$ are finite rank.}
Since $\trace X = \trace C$, we must also have $\trace X_- < \trace C_-$.
If $X_{\pm}$ are finite rank with ranks $n_{\pm}$, then $X$ has two orthonormal eigenvectors $e_{\pm}$ corresponding to the eigenvalue zero.
Set $\eta_{\pm} := \sum_{n=1}^{n_{\pm}} \big( \lambda^{\pm}_n(C) - \lambda^{\pm}_n(X) \big) + \lambda^{\pm}_{n_{\pm}+1}(C) > 0$ (if either $\eta_{\pm}$ were zero, it would imply both $\trace X_{\pm} = \trace C_{\pm}$).
Then set $\varepsilon := \min \set{\eta_{\pm}, \lambda^{\pm}_{n_{\pm}}(X)}$ and let $P$ be the projection onto $\spans\set{e_+,e_-}$.
Let $S$ be any selfadjoint operator for which $SP = PS = S$ and $\norm{S} < \varepsilon$ and $\trace S = 0$.
Adding $S$ to $X$ produces two new eigenvalues smaller in modulus than the rest.
That is, for $1 \le n \le n_{\pm}$, $\lambda^{\pm}_n(X+S) = \lambda^{\pm}_n(X)$, while $\lambda^{\pm}_{n_{\pm} + 1}(X+S) = \norm{S}$, and $\lambda^{\pm}_n(X+S) = 0$ for all $n > n_{\pm} + 1$.
Therefore, to see that $\lambda^{\pm}(X+S) \pprec \lambda^{\pm}(C)$, it suffices to check the partial sums for the indices $n_{\pm}+1$.
Thus,
\begin{equation*}
\sum_{n=1}^{n_{\pm} + 1} \big( \lambda^{\pm}_n(C) - \lambda^{\pm}_n(X+S) \big) = \sum_{n=1}^{n_{\pm}} \big( \lambda^{\pm}_n(C) - \lambda^{\pm}_n(X) \big) + \lambda^{\pm}_{n_{\pm} + 1}(C) - \norm{S} \ge \eta_{\pm} - \varepsilon \ge 0.
\end{equation*}
Finally, since $\trace(X+S) = \trace X + \trace S = \trace C$, we obtain $\lambda^{\pm}(X+S) \prec \lambda^{\pm}(C)$.
\end{case}
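A minimal illustration of Case 2 (our example, not part of the argument): take $X = \diag(1,-1,0,0,\ldots)$ and $C = \diag(2,-2,0,0,\ldots)$, so that $\trace X_{\pm} = 1 < 2 = \trace C_{\pm}$, $\trace X = \trace C = 0$, and $n_{\pm} = 1$.

```latex
% Following the definitions above:
\eta_{\pm} = \big( \lambda^{\pm}_1(C) - \lambda^{\pm}_1(X) \big) + \lambda^{\pm}_2(C)
           = (2 - 1) + 0 = 1,
\qquad
\varepsilon = \min\set{\eta_{\pm},\, \lambda^{\pm}_1(X)} = 1.
% A traceless selfadjoint S on \spans\set{e_+,e_-} with \norm{S} = s < 1 contributes
% the eigenvalues \pm s, so \lambda^{\pm}(X+S) = (1, s, 0, \ldots), and the partial sums
% 1 \le 2 and 1 + s \le 2 confirm \lambda^{\pm}(X+S) \pprec \lambda^{\pm}(C).
```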
\begin{case}{$\trace X_+ < \trace C_+$ and one of $X_{\pm}$ is infinite rank.}
Again, since $\trace X = \trace C$, we must also have $\trace X_- < \trace C_-$.
By symmetry, we may assume without loss of generality that $X_+$ has infinite rank.
Then set $\gamma := \frac{1}{2}\trace (C_+ - X_+) > 0$ and choose $n_+ \in Mhbb{N}$ such that for all $N \ge n_+$ we have $\sum_{n=1}^N \big( \lambda^+_n(C) - \lambda^+_n(X) \big) \ge \gamma$.
Moreover, since $X_+$ is infinite rank, we may select $r > m \ge n_+$ such that $\lambda^+_{m-1}(X) > \lambda^+_m(X) \ge \lambda^+_r(X) > \lambda^+_{r+1}(X)$.
Set $\varepsilon := \min\set{\gamma,\lambda^+_{m-1}(X) - \lambda^+_m(X), \lambda^+_r(X) - \lambda^+_{r+1}(X)}$ and $P$ the projection onto $\spans\set{e_m,e_r}$.
Let $S$ be a selfadjoint operator for which $SP = PS = S$ and $\norm{S} < \varepsilon$ and $\trace S = 0$.
As with Case 1, $\lambda^+_n(X+S) = \lambda^+_n(X)$ for all $m \not= n \not= r$, and $\lambda^+_m(X+S) = \lambda^+_m(X) + \eta$ and $\lambda^+_r(X+S) = \lambda^+_r(X) - \eta$ for some $0 \le \eta \le \norm{S} < \varepsilon$ (the situation when $\eta \le 0$ is handled in the same manner as Case 1).
Of course, $\lambda^-(X+S) = \lambda^-(X)$.
To verify that $\lambda^+(X+S) \pprec \lambda^+(C)$, it suffices to check the partial sums for indices $m \le k < r$.
We obtain
\begin{align*}
\sum_{j=1}^k \lambda^+_j(X+S) &= \sum_{j=1}^{m-1} \lambda^+_j(X+S) + \lambda^+_m(X+S) + \sum_{j=m+1}^k \lambda^+_j(X+S) \\
&= \sum_{j=1}^k \lambda^+_j(X) + \eta \\
&\le \sum_{j=1}^k \lambda^+_j(C) - \gamma + \eta \\
&\le \sum_{j=1}^k \lambda^+_j(C),
\end{align*}
where the last line follows since $\eta \le \norm{S} < \varepsilon \le \gamma$.
Thus $\lambda^{\pm}(X+S) \pprec \lambda^{\pm}(C)$ and $\trace(X+S) = \trace X = \trace C$, so $\lambda^{\pm}(X+S) \prec \lambda^{\pm}(C)$.
\qedhere
\end{case}
\end{proof}
We now have the tools necessary (\Cref{prop:convex-orbit-majorization} and \Cref{lem:majorization-trace-zero-perturbation}) to prove our main theorem.
The proof is adapted from and follows closely the one given by Dykema and Skoufranis \cite[Theorem~2.14]{DS-2017-PEMSIS} for numerical ranges in II$_1$ factors, but there is one substantial difference.
A key step in the proof is obtaining an extreme point of a certain closed convex subset of $\{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \prec \lambda(C) \}$.
In the context of II$_1$ factors (or in any finite factor), this set happens to be weak* compact and so Dykema and Skoufranis are able to employ the Krein--Milman Theorem.
However, in $B(Mhcal{H})$, this set is definitely not weak* compact since it contains elements of arbitrarily small norm and therefore the zero operator is in the weak* closure.
Instead, $\{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \prec \lambda(C) \}$ is only a trace-norm closed and bounded convex set, and so the Krein--Milman Theorem cannot be invoked.
In order to circumvent this issue, we use the Radon--Nikodym Property of the Banach space of trace-class operators to obtain the desired extreme point.
\begin{theorem}
\label{thm:c-numerical-range-via-majorization}
For a selfadjoint trace-class operator $C \in Mhcal{L}_1^{sa}$ and any $A \in B(Mhcal{H})$,
\begin{equation*}
\ocnr(A) = \{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \prec \lambda(C) \}.
\end{equation*}
\end{theorem}
\begin{proof}
Given $X \in Mhcal{L}_1^{sa}$ with $\lambda(X) \prec \lambda(C)$ we will show there is some $Y \in Mhcal{O}(C)$ for which $\trace(XA) = \trace(YA)$.
For this consider the trace-norm continuous map $\Phi : Z \mapsto \trace(ZA)$ from $\{ Z \in Mhcal{L}_1^{sa} \mid \lambda(Z) \prec \lambda(C) \}$ to $Mhbb{C}$.
Then by \Cref{prop:convex-orbit-majorization} and continuity and linearity of $\Phi$, the set
\begin{equation*}
\Phi^{-1}(\trace(XA)) = \{ Z \in Mhcal{L}_1^{sa} \mid \lambda(Z) \prec \lambda(C), \trace(ZA) = \trace (XA) \}
\end{equation*}
is a nonempty, convex, trace-norm closed and bounded set.
The trace-class operators with the trace-norm form a Banach space, and moreover, this space has the Radon--Nikodym Property \cite[Lemma~2]{Chu-1981-JLMSIS}.
It is well-known (due to Lindenstrauss \cite[Theorem~2]{Phe-1974-JFA}) that the Radon--Nikodym Property implies the Krein--Milman Property: every convex, closed and bounded set is the closed convex hull of its extreme points.
In particular, $\Phi^{-1}(\trace(XA))$ has an extreme point, which we label $Y$.
We claim that $Y \in Mhcal{O}(C)$.
Suppose not.
Then we may apply \Cref{lem:majorization-trace-zero-perturbation} to obtain a nonzero projection $P$ as in that lemma.
Consider the real vector space $Mhcal{S}_P := \{ S \in B(Mhcal{H}) \mid S= S^{*}, S = SP = PS, \trace S = 0 \}$ and the linear map $S \mapsto \trace(SA)$.
Note that $Mhcal{S}_P$ has dimension at least two since $P$ must have rank at least two, and therefore this linear map has a nonzero element in the kernel.
By scaling we obtain an $S \in Mhcal{S}_P$ in the kernel of this map for which $\lambda(Y \pm S) \prec \lambda(C)$ by \Cref{lem:majorization-trace-zero-perturbation}.
Thus $\trace((Y\pm S)A) = \trace(YA) = \trace(XA)$, and therefore $Y \pm S \in \Phi^{-1}(\trace(XA))$, and hence
\begin{equation*}
Y = \frac{Y+S}{2} + \frac{Y-S}{2},
\end{equation*}
contradicting the fact that $Y$ is extreme in $\Phi^{-1}(\trace(XA))$.
Thus $Y \in Mhcal{O}(C)$.
Therefore $\{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \prec \lambda(C) \} \subseteq \ocnr(A)$, and the other inclusion follows since $Mhcal{O}(C) \subseteq \{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \prec \lambda(C) \}$.
\end{proof}
Since the collection $\{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \prec \lambda(C) \}$ is convex (e.g., by \Cref{prop:convex-orbit-majorization}, but this can also be proven directly rather easily) and the map $X \mapsto \trace(XA)$ is linear, it is clear that $\ocnr(A)$ is a convex set.
This generalizes \cite{Wes-1975-LMA} and is, to our knowledge, the only independent proof of this result when the underlying Hilbert space is infinite dimensional.
\begin{corollary}
\label{cor:c-numerical-range-convex}
If $C \in Mhcal{L}_1^{sa}$, then $\ocnr(A)$ is convex.
\end{corollary}
We remark that combining \Cref{cor:c-numerical-range-convex} with \Cref{thm:cnr-dense-in-ocnr} yields an independent proof of \cite[Theorem~3.8]{DvE-2020-LaMA} that $\closure{\cnr(A)}$ is convex under the stated hypothesis that $C \in Mhcal{L}_1^{sa}$.
In addition, \Cref{thm:c-numerical-range-via-majorization} has as a direct corollary the following inclusion relationship among orbit-closed $C$-numerical ranges.
This extends \cite[Theorem~7]{GS-1977-LAA} to the infinite dimensional setting.
\begin{corollary}
\label{cor:majorization-c-numerical-range-inclusion}
If $C,C' \in Mhcal{L}_1^{sa}$ and $\lambda(C') \prec \lambda(C)$, then $\ocnr[C'](A) \subseteq \ocnr(A)$.
\end{corollary}
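A simple instance of this inclusion (the choice of $C$ and $C'$ here is ours, for illustration only): let $C = \diag(1,0,0,\ldots)$ be a rank-one projection, so that $\ocnr(A) = W(A)$, and let $C' = \diag(\tfrac{1}{2},\tfrac{1}{2},0,\ldots)$.

```latex
% The partial sums 1/2 \le 1 and 1 \le 1, together with equal traces, give
\lambda(C') = (\tfrac{1}{2}, \tfrac{1}{2}, 0, \ldots) \prec (1, 0, 0, \ldots) = \lambda(C),
% so by the corollary every average \tfrac{1}{2}\big( \angles{Ax,x} + \angles{Ay,y} \big)
% over orthonormal pairs x \perp y lies in \ocnr(A) = W(A), consistent with the
% convexity of \Cref{cor:c-numerical-range-convex}.
```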
\section{Boundary points}
\label{sec:boundary-points}
In the study of numerical ranges, it is often of interest to investigate the boundary.
We now determine some conditions under which points on the boundary $\partial\ocnr(A)$ actually belong to $\ocnr(A)$.
In general this is a nontrivial question; in this section we provide several partial answers.
\subsection{Compact operators}
\label{subsec:compact-operators}
We begin with the case when $A \in Mhcal{K}$ is a compact operator.
For this we have a very satisfying set of conditions in \Cref{thm:compact-closed-iff-contains-weak-majorization} equivalent to $\ocnr(A)$ being closed, and a simpler sufficient (but not necessary) condition in \Cref{cor:compact-closure-direct-sum-zero}.
In \Cref{thm:c-numerical-range-via-majorization}, we saw that the orbit-closed $C$-numerical range was the image of the operators whose eigenvalue sequences are majorized by $\lambda(C)$ under the map $X \mapsto \trace(XA)$.
A natural question to ask is whether or not there is a similar characterization for the image of those operators which are only submajorized by $C$.
The following lemma proves that this is indeed the case.
\begin{lemma}
\label{lem:weak-majorization-c-numerical-range}
For a selfadjoint trace-class operator $C \in Mhcal{L}_1^{sa}$ and an operator $A \in B(Mhcal{H})$,
\begin{equation*}
\set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C) } = \conv \ \bigcup_{Mhclap{\qquad 0 \le m_{\pm} \le \rank C_{\pm}}} \ \ocnr[C_{m_-,m_+}](A),
\end{equation*}
where $C_{m_-,m_+}$ is the operator $C (P^-_{m_-} + P^+_{m_+})$ where $\trace P^{\pm}_{m_{\pm}} = m_{\pm}$, and for some $\lambda_- \le 0 \le \lambda_+$, $\chi_{(-\infty,\lambda_-)}(C) \le P^-_{m_-} \le \chi_{(-\infty,\lambda_-]}(C)$ and $\chi_{(\lambda_+,\infty)}(C) \le P^+_{m_+} \le \chi_{[\lambda_+,\infty)}(C)$.
In other words, $C_{m_-,m_+}$ is the selfadjoint operator whose eigenvalues are the smallest $m_-$ negative eigenvalues of $C$ along with the largest $m_+$ positive eigenvalues of $C$, namely $-\lambda^-_1(C),\ldots,-\lambda^-_{m_-}(C)$ and $\lambda^+_1(C),\ldots,\lambda^+_{m_+}(C)$, along with the eigenvalue $0$ repeated with multiplicity $\trace(I-P_{m_-}^- - P_{m_+}^+)$.
\end{lemma}
\begin{proof}
Notice that the set $\set{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \pprec \lambda(C) }$ is convex (e.g., by \Cref{prop:wot-closure-convex-orbit-weak-majorization}, but this can also be proven directly) and the trace is a linear functional, hence the set $\set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C) }$ is convex.
Moreover, any $X \in Mhcal{O}(C_{m_-,m_+})$ satisfies $\lambda(X) = \lambda(C_{m_-,m_+}) \pprec \lambda(C)$.
Therefore the right-hand set is included in the left-hand set.
For the other inclusion, take any $X \in Mhcal{L}_1^{sa}$ with $\lambda(X) \pprec \lambda(C)$.
If $\trace(X_+) = \trace(C_+)$, set $m_+ = \rank C_+$, and if $\trace(X_-) = \trace(C_-)$, set $m_- = \rank C_-$ (we allow for $m_{\pm} = \infty$).
Otherwise, let $m_{\pm} \in Mhbb{N}$ be the unique positive integers for which
\begin{equation*}
\sum_{n=1}^{m_{\pm}-1} \lambda^{\pm}_n(C) \le \trace(X_{\pm}) < \sum_{n=1}^{m_{\pm}} \lambda^{\pm}_n(C).
\end{equation*}
Then there are $t_{\pm} \in [0,1]$ for which
\begin{equation*}
\trace(X_{\pm}) = \sum_{n=1}^{m_{\pm} - 1} \lambda^{\pm}_n(C) + t_{\pm} \lambda^{\pm}_{m_{\pm}}(C).
\end{equation*}
Then consider the operator $C'$ which is the convex combination
\begin{equation*}
(1-t_-)(1-t_+) C_{m_- - 1,m_+ - 1} + (1-t_-)t_+ C_{m_- - 1,m_+} + t_-(1-t_+) C_{m_- ,m_+ - 1} + t_- t_+ C_{m_-,m_+}.
\end{equation*}
Here, for convenience, we simply adopt the convention that $\infty = \infty - 1$ in case either of $m_{\pm}$ is infinite.
Therefore, the nonzero eigenvalues of $C'$ are $\lambda^{\pm}_1(C),\ldots,\lambda^{\pm}_{m_{\pm} - 1}(C),t_{\pm} \lambda^{\pm}_{m_{\pm}}(C)$ (or if $m_+ = \infty$, the positive eigenvalues are just $\lambda^+(C)$, and similarly for when $m_- = \infty$).
The operator $C'$ was constructed specifically so that $\lambda(X) \prec \lambda(C')$.
Therefore, by \Cref{thm:c-numerical-range-via-majorization}
\begin{equation*}
\trace(XA) \in \ocnr[C'](A) \subseteq \conv \ \bigcup_{Mhclap{\qquad 0 \le k_{\pm} \le \rank C_{\pm}}} \ \ocnr[C_{k_-,k_+}](A)
\end{equation*}
where the second inclusion holds because any $X' \in Mhcal{O}(C')$ is a convex combination of four $X_{k_-,k_+} \in Mhcal{O}(C_{k_-,k_+})$.
To see this, notice that the equation defining $C'$ actually establishes that $\lambda(C')$ is a convex combination of four appropriately permuted $\lambda(C_{k_-,k_+})$ (with up to two zeros added to any of these sequences).
Then by \Cref{prop:orbit-closure-equivalences} any $X' \in Mhcal{O}(C')$ has the form $\diag(\lambda(C')) \oplus Mhbf{0}$ in some basis for an appropriately sized $Mhbf{0}$, and this is clearly a convex combination of the same four $\diag(\lambda(C_{k_-,k_+})) \oplus Mhbf{0} \in Mhcal{O}(C_{k_-,k_+})$.
\end{proof}
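To fix ideas about the truncations $C_{m_-,m_+}$ of \Cref{lem:weak-majorization-c-numerical-range} (the specific eigenvalues here are our choice), suppose the nonzero eigenvalues of $C$ are $3$, $1$ and $-2$.

```latex
% Then \lambda^+(C) = (3, 1, 0, \ldots) and \lambda^-(C) = (2, 0, \ldots), and, up to
% unitary equivalence, the truncations are
C_{0,0} = 0, \qquad C_{0,1} = \diag(3, 0, \ldots), \qquad C_{1,0} = \diag(-2, 0, \ldots),
\qquad C_{1,2} = \diag(3, 1, -2, 0, \ldots) = C,
% together with C_{0,2} = \diag(3, 1, 0, \ldots) and C_{1,1} = \diag(3, -2, 0, \ldots);
% the union in the lemma runs over these six operators.
```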
The following theorem provides a complete characterization of when the orbit-closed $C$-numerical range of a compact operator is closed in terms of submajorization.
The equivalence \ref{item:ocnr-compact-closed} $\Leftrightarrow$ \ref{item:ocnr-contains-initial-ocnrs} generalizes \cite[Theorem~1(i)]{dBGS-1972-JLMSIS} for the standard numerical range and \cite[Result~(2.5)]{LP-1995-FDaaMaE} for finite rank $C$.
The proof of \cite[Theorem~1(i)]{dBGS-1972-JLMSIS} utilized weak sequential compactness of the unit ball in $Mhcal{H}$ in order to obtain the requisite limit vector, whereas \cite[Result~(2.5)]{LP-1995-FDaaMaE} applied the weak operator topology compactness of the unit ball of $B(Mhcal{H})$ to obtain the limiting operator.
Our proof below shows that the true essence of this phenomenon actually takes place relative to a different topology.
In particular, the key is the nontrivial weak* compactness of $\set{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \pprec \lambda(C) }$, where this set is viewed not as a subset of $B(Mhcal{H})$, but as a subset of $Mhcal{L}_1 \cong Mhcal{K}^{*}$, which is why the condition that $A \in Mhcal{K}$ is essential for these limit processes.
\begin{theorem}
\label{thm:compact-closed-iff-contains-weak-majorization}
Let $C \in Mhcal{L}_1^{sa}$ be a selfadjoint trace-class operator and let $A \in Mhcal{K}$ be a compact operator.
Then
\begin{equation*}
\closure{\ocnr(A)} = \set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C) }.
\end{equation*}
Consequently, the following are equivalent.
\begin{enumerate}
\item\label{item:ocnr-compact-closed} $\ocnr(A)$ is closed.
\item\label{item:ocnr-equal-weak-majorization} $\ocnr(A) = \set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C) }$.
\item\label{item:ocnr-contains-initial-ocnrs} $\ocnr(A) \supseteq \ocnr[C_{m_-,m_+}](A)$ for every $0 \le m_{\pm} \le \rank C_{\pm}$,
\end{enumerate}
where $C_{m_-,m_+}$ are defined as in \Cref{lem:weak-majorization-c-numerical-range}.
\end{theorem}
\begin{proof}
Recall that $Mhcal{L}_1$ is the dual $Mhcal{K}^{*}$ of the compact operators via the isometric isomorphism $C \mapsto \trace(C\,\bigcdot)$.
By the Banach--Alaoglu theorem, bounded subsets of $Mhcal{L}_1$ which are weak* closed are weak* compact.
Since the weak* topology on $Mhcal{L}_1$ is finer than the weak operator topology and, on trace-norm bounded sets, coarser than the (operator) norm topology, by \Cref{prop:wot-closure-convex-orbit-weak-majorization} the set $\set{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \pprec \lambda(C) }$ is bounded and weak* closed, and therefore weak* compact.
Because $A \in Mhcal{K}$, the map $X \mapsto \trace(XA)$ is weak* continuous, and therefore $\set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C) }$ is compact since it is the continuous image of a compact set.
Because the weak* topology on $Mhcal{L}_1$ is weaker than the trace-norm topology, we have
\begin{equation*}
\closure[w*]{\closure[\norm{\bigcdot}_1]{\conv Mhcal{U}(C)}} = \closure[Mhrlap{w*}]{\conv Mhcal{U}(C)}.
\end{equation*}
Moreover, by \Cref{prop:wot-closure-convex-orbit-weak-majorization,prop:convex-orbit-majorization} the weak* closure of $\set{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \prec \lambda(C) }$ is $\set{ X \in Mhcal{L}_1^{sa} \mid \lambda(X) \pprec \lambda(C) }$ and therefore by \Cref{thm:c-numerical-range-via-majorization} and weak* continuity of $X \mapsto \trace(XA)$, $\ocnr(A)$ is dense in $\set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C) }$.
Hence
\begin{equation*}
\closure{\ocnr(A)} = \set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C) }.
\end{equation*}
\ref{item:ocnr-compact-closed} $\Leftrightarrow$ \ref{item:ocnr-equal-weak-majorization}.
This is immediate from what we have just proven.
\ref{item:ocnr-equal-weak-majorization} $\Leftrightarrow$ \ref{item:ocnr-contains-initial-ocnrs}.
This is immediate from \Cref{thm:c-numerical-range-via-majorization}, \Cref{cor:c-numerical-range-convex} and \Cref{lem:weak-majorization-c-numerical-range}.
\end{proof}
As previously remarked, the equivalence \ref{item:ocnr-compact-closed} $\Leftrightarrow$ \ref{item:ocnr-contains-initial-ocnrs} of \Cref{thm:compact-closed-iff-contains-weak-majorization} generalizes the original result of de Barra, Giles and Sims \cite[Theorem~1(i)]{dBGS-1972-JLMSIS} concerning the standard numerical range, which states that if $A$ is a compact operator, then $0 \in W(A)$ if and only if $W(A)$ is closed.
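The reduction can be made explicit (a sketch, in the notation of \Cref{thm:compact-closed-iff-contains-weak-majorization}): when $C$ is a rank-one projection, $Mhcal{O}(C)$ consists of all rank-one projections and $\ocnr(A) = W(A)$.

```latex
% With \rank C_- = 0 and \rank C_+ = 1, the only truncations are
C_{0,0} = 0 \qquad\text{and}\qquad C_{0,1} = C,
% so the last condition of the theorem reduces to
% W(A) \supseteq \ocnr[0](A) = \set{0}, i.e., to 0 \in W(A),
% recovering: for compact A, W(A) is closed if and only if 0 \in W(A).
```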
One might wonder if there is a condition analogous to that of de Barra, Giles and Sims which is somehow tied only to $0$.
The following example shows that for a na\"ive analogue, the result is false, but the corollary after that shows that not all hope is lost.
\begin{example}
This example shows that, unlike for the case of the standard numerical range, it is insufficient to simply have $0 \in \ocnr(A)$ for $A$ compact in order to guarantee that $\ocnr(A)$ is closed.
Indeed, it is even insufficient to have an orthonormal basis $\set{e_n}_{n=1}^{\infty}$ for which $\angles{A e_n,e_n} = 0$ for all $n \in Mhbb{N}$, even if $A$ is selfadjoint and trace-class.
Consider $A = \diag(-1,\frac{1}{2},\frac{1}{4},\ldots)$.
Then $A$ is selfadjoint and trace-class and $\trace(A) = 0$.
Therefore by \cite[Theorem~1]{FF-1980-PAMS} there is an orthonormal basis with respect to which the diagonal of $A$ is the zero sequence.
Now let $P$ be a rank-$2$ projection.
From \Cref{thm:c-numerical-range-selfadjoint-formula} we see that $\ocnr[P](A) = (-1,\frac{3}{4}]$, which clearly contains $0$ and yet is not closed.
\end{example}
Although \Cref{thm:compact-closed-iff-contains-weak-majorization} provides a complete characterization of when the orbit-closed $C$-numerical range of a compact operator is closed, the condition seems nontrivial to check.
The following corollary provides a sufficient condition which is hopefully easier to verify in practice.
\begin{corollary}
\label{cor:compact-closure-direct-sum-zero}
Let $C \in Mhcal{L}_1^{sa}$ be a selfadjoint trace-class operator and let $A \in Mhcal{K}$ be a compact operator.
Then $\closure{\ocnr(A)} = \ocnr[C \oplus Mhbf{0}](A \oplus Mhbf{0})$, where the $Mhbf{0}$ acts on a space of dimension at least $\rank C$.
In particular, if $P$ is a projection of rank at least $\rank C$ for which $PA = AP = 0$, then $\ocnr(A)$ is closed.
\end{corollary}
In the following proof of this corollary, we will be considering operators acting on Hilbert spaces $Mhcal{H}_1$ (separable, infinite dimensional) and on $Mhcal{H}_1 \oplus Mhcal{H}_2$ (with $Mhcal{H}_2$ separable).
It will sometimes be convenient to think of these operators acting on the same space which we do by selecting a fixed, but arbitrary isometric isomorphism $Mhcal{H}_1 \to Mhcal{H}_1 \oplus Mhcal{H}_2$.
This induces a *-isomorphism $B(Mhcal{H}_1) \to B(Mhcal{H}_1 \oplus Mhcal{H}_2)$.
Crucially, while the resulting *-isomorphism depends on the specific isometric isomorphism, objects and properties that are invariant under unitary conjugation, such as $\ocnr(A)$, $Mhcal{O}(C)$ or approximate unitary equivalence, are independent of this choice.
Moreover, under this identification $Mhcal{O}(C) = Mhcal{O}(C \oplus Mhbf{0})$ because $\lambda(C) = \lambda(C \oplus Mhbf{0})$ and the eigenvalue sequence is a complete invariant by \Cref{prop:orbit-closure-equivalences}.
This also makes it possible to read \Cref{cor:compact-closure-direct-sum-zero} as $\closure{\ocnr(A)} = \ocnr(A \oplus Mhbf{0})$.
\begin{proof}
Since $A \in Mhcal{K}$, it is clear that $A$ and $A \oplus Mhbf{0}$ are approximately unitarily equivalent (via the identification $B(Mhcal{H}_1) \to B(Mhcal{H}_1 \oplus Mhcal{H}_2)$ mentioned prior to the proof).
Indeed, if $A$ acts on $Mhcal{H}_1$ and $A \oplus Mhbf{0}$ acts on $Mhcal{H}_1 \oplus Mhcal{H}_2$, consider a sequence of finite projections $P_n$ converging in the strong operator topology to the identity, and notice that $P_n A, A P_n \to A$ in norm since $A \in Mhcal{K}$.
Let $U_n : Mhcal{H}_1 \to Mhcal{H}_1 \oplus Mhcal{H}_2$ be any unitary which acts as the inclusion $x \mapsto x \oplus 0$ on $P_n Mhcal{H}_1$ (these exist since each $P_n^{\perp}$ is an infinite projection), and notice that $U_n A U_n^{*} \to A \oplus Mhbf{0}$.
Therefore, the closures of $\ocnr(A)$ and $\ocnr[C \oplus Mhbf{0}](A \oplus Mhbf{0})$ coincide by \Cref{cor:approximate-unitary-equivalence-same-closure}.
To complete the proof, it suffices to prove that $\ocnr(A \oplus Mhbf{0})$ is closed.
For this, let $C_{m,k}$ acting on $Mhcal{H}_1$ be defined as in
\Cref{lem:weak-majorization-c-numerical-range}.
Then there is a $C'_{m,k}$ acting on $Mhcal{H}_2$ such that $C_{m,k} \oplus C'_{m,k} \in Mhcal{O}(C \oplus Mhbf{0})$, where it suffices by \Cref{prop:orbit-closure-equivalences} to select a selfadjoint operator $C'_{m,k}$ whose nonzero eigenvalues are precisely the terms of $\lambda(C)$ missing from $\lambda(C_{m,k})$.
Thus, for any $X \in Mhcal{O}(C_{m,k})$ we have $X \oplus C'_{m,k} \in Mhcal{O}(C \oplus Mhbf{0})$, and therefore
\begin{equation*}
\trace(XA) = \trace \big( (X \oplus C'_{m,k}) (A \oplus Mhbf{0}) \big) \in \ocnr[C \oplus Mhbf{0}](A \oplus Mhbf{0}).
\end{equation*}
Since $X \in Mhcal{O}(C_{m,k})$ was arbitrary, as were $m,k$, we find that $\ocnr(A \oplus Mhbf{0}) \supseteq \ocnr[C_{m,k}](A)$ for all $m,k \in Mhbb{N} \cup \set{0,\infty}$.
Therefore, by \Cref{lem:weak-majorization-c-numerical-range} and \Cref{thm:compact-closed-iff-contains-weak-majorization},
\begin{equation*}
\ocnr[C \oplus Mhbf{0}](A \oplus Mhbf{0}) \supseteq \set{ \trace(XA) \mid X \in Mhcal{L}_1^{sa}, \lambda(X) \pprec \lambda(C) } = \closure{\ocnr(A)} = \closure{\ocnr[C \oplus Mhbf{0}](A \oplus Mhbf{0})}.
\end{equation*}
Now suppose that $A \in Mhcal{K}$ is an operator for which there is a projection of rank at least $\rank C$ for which $PA = AP = 0$.
If $P^{\perp}$ is finite we may pass from $P$ to an infinite, co-infinite subprojection to ensure the complement is infinite.
Then if $A'$ denotes the compression of $A$ to $P^{\perp} Mhcal{H}$, we certainly have $A = A' \oplus Mhbf{0}$ where $Mhbf{0}$ acts on $P Mhcal{H}$.
Moreover, since $P^{\perp}$ is infinite, there is some $C'$ acting on $P^{\perp} Mhcal{H}$ such that $\lambda(C') = \lambda(C)$.
Consequently,
\begin{equation*}
\ocnr(A) = \ocnr[C' \oplus Mhbf{0}](A' \oplus Mhbf{0}) = \closure{\ocnr[C'](A')}
\end{equation*}
is closed.
\end{proof}
\subsection{Bounded operators}
\label{subsec:bounded-operators}
The situation for $A \in Mhcal{K}$ compact was made especially tractable because of the duality $Mhcal{L}_1 \cong Mhcal{K}^{*}$.
As we now turn our attention to arbitrary operators $A \in B(Mhcal{H})$, the analysis becomes significantly more complex.
However, as we will observe, much of the analysis can be restricted to the compact portion of $A$ which lies outside the essential spectrum; for selfadjoint $A$, we mean the operator $(A-mI)_+$ where $m := \max \spec_{\mathrm{ess}}(A)$.
From now on, we will restrict our attention primarily to $C$ \emph{positive} and trace-class.
The reason is essentially to make the complicated analysis somewhat manageable.
In order to emphasize positivity, we will use the singular value sequence $s(C)$ to refer to the eigenvalue sequence (as opposed to $\lambda(C)$) since these coincide.
Since the orbit-closed $C$-numerical range is convex when $C$ is selfadjoint, one natural way to analyze boundary points is to first rotate the operator and then take the real part, as in the diagram:
\begin{center}
\begin{tikzcd}
A \arrow[r, maps to] \arrow[d, "\ocnr(\bigcdot)"] & e^{i\theta}A \arrow[r, maps to] \arrow[d, "\ocnr(\bigcdot)"] & \Re(e^{i\theta}A) \arrow[d, "\ocnr(\bigcdot)"] \\
\ocnr(A) \arrow[r, maps to] & e^{i\theta}\ocnr(A) \arrow[r, maps to] & \Re(e^{i\theta}\ocnr(A)),
\end{tikzcd}
\end{center}
where we have used \Cref{prop:c-numerical-range-basics} to commute both $\Re$ and multiplication by $e^{i\theta}$ with $\ocnr(\bigcdot)$.
In so doing one is able essentially to reduce the investigation of points on the boundary of the numerical range to the case when $A$ is selfadjoint.
However, there are often technicalities that arise when there is a line segment on the boundary because, after rotation, there is more than one point on the boundary with maximal real part (see \Cref{fig:rotation-technique}).
This rotation and real part technique goes all the way back to Kippenhahn in \cite{Kip-1951-MN} (or the English translation \cite[\textsection{}3]{Kip-2008-LMA}), but appears elsewhere in the literature, such as \cite{Joh-1978-SJNA}.
\begin{figure}
\caption{Rotation technique for points on the boundary of $\ocnr(A)$.}
\label{fig:rotation-technique}
\end{figure}
We begin with a simple but rather important lemma concerning submajorization which will be essential in our analysis of the boundary.
Effectively, it says that if $(d_n) \pprec (c_n)$ then $(d_n a_n) \pprec (c_n a_n)$ for any nonnegative decreasing sequence $(a_n)$;
moreover, if $(d_n a_n) \prec (c_n a_n)$ and $(a_n) \in c_0star$ is strictly positive, then $(d_n)$ is block majorized by $(c_n)$.
The first part of the lemma, namely that $(d_n) \pprec (c_n)$ implies $(d_n a_n) \pprec (c_n a_n)$, is a known result concerning submajorization (see, for example \cite[5.A.4.d]{MOA-2011}), but to the authors' knowledge the remainder of the lemma has not appeared in the literature and we will make full use of these additional facts later on.
To see the connection between the above formulation in terms of majorization and the actual statement of the lemma, consider $\delta_n := c_n - d_n$.
\begin{lemma}
\label{lem:majorization-product-decreasing-sequence}
Suppose that $(\delta_n)$ is a real-valued sequence and $(a_n)$ is a nonnegative decreasing sequence (even finite sequences are considered).
If for every $N$, $\sum_{n=1}^N \delta_n \ge 0$, then
\begin{enumerate}
\item \label{item:weak-majorization-implies-c-weak-majorization} for every $N$, $\displaystyle\sum_{n=1}^N \delta_n a_n \ge 0$;
\item \label{item:c-majorization-and-jump-implies-majorization} if $\displaystyle\sum_{n=1}^N \delta_n a_n = 0$, then $\displaystyle\sum_{n=1}^L \delta_n = 0$ whenever $a_L > a_{L+1}$ with $L < N$, \\ and if, in addition, $a_N > 0$, then $\displaystyle\sum_{n=1}^N \delta_n = 0$;
\item \label{item:liminf-c-majorization-implies-block-majorization} if $\displaystyle\liminf_{N \to \infty} \sum_{n=1}^N \delta_n a_n = 0$, then $\displaystyle\sum_{n=1}^L \delta_n = 0$ whenever $a_L > a_{L+1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is a simple application of summation by parts.
Indeed, for any $N$,
\begin{equation*}
\sum_{n=1}^N \delta_n a_n = a_N \sum_{n=1}^N \delta_n + \sum_{L=1}^{N-1} (a_L - a_{L+1}) \sum_{n=1}^L \delta_n.
\end{equation*}
Notice that $a_N$ and $\sum_{n=1}^N \delta_n$ are nonnegative by hypothesis, as are $a_L - a_{L+1}$ and $\sum_{n=1}^L \delta_n$ for each $L < N$.
Therefore, $\sum_{n=1}^N \delta_n a_n \ge 0$ as well, proving \ref{item:weak-majorization-implies-c-weak-majorization}.
Moreover, if $\sum_{n=1}^N \delta_n a_n = 0$, then every (nonnegative) term on the right-hand side of the display above vanishes; in particular, if $a_L > a_{L+1}$ for some $L < N$, then $\sum_{n=1}^L \delta_n = 0$.
In addition, if $a_N > 0$, then $\sum_{n=1}^N \delta_n = 0$, which establishes \ref{item:c-majorization-and-jump-implies-majorization}.
Finally, notice that
\begin{align*}
\liminf_{N \to \infty} \sum_{n=1}^N \delta_n a_n &\ge \liminf_{N \to \infty} a_N \sum_{n=1}^N \delta_n + \liminf_{N \to \infty} \sum_{L=1}^{N-1} (a_L - a_{L+1}) \sum_{n=1}^L \delta_n \\
&\ge \liminf_{N \to \infty} \sum_{L=1}^{N-1} (a_L - a_{L+1}) \sum_{n=1}^L \delta_n.
\end{align*}
Therefore, if the limit inferior on the left-hand side is zero, then we conclude $\sum_{n=1}^L \delta_n = 0$ whenever $a_L > a_{L+1}$.
\end{proof}
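As a quick numerical sanity check (not part of the formal argument; the random setup below is purely illustrative), the summation-by-parts identity and item \ref{item:weak-majorization-implies-c-weak-majorization} can be verified on randomly generated data:

```python
# Verify the Abel summation identity from the proof and item (i) of the
# lemma on random data with nonnegative partial sums.
import numpy as np

rng = np.random.default_rng(0)

def abel_identity(delta, a):
    """Return both sides of
    sum_n delta_n a_n = a_N S_N + sum_{L<N} (a_L - a_{L+1}) S_L,
    where S_L is the L-th partial sum of (delta_n)."""
    N = len(delta)
    S = np.cumsum(delta)
    lhs = float(np.dot(delta, a))
    rhs = a[N - 1] * S[N - 1] + sum((a[L] - a[L + 1]) * S[L] for L in range(N - 1))
    return lhs, rhs

N = 12
delta = rng.normal(size=N)
delta[0] = abs(delta[0]) + np.abs(delta).sum()  # force all partial sums >= 0
a = np.sort(rng.random(N))[::-1]                # nonnegative and decreasing

lhs, rhs = abel_identity(delta, a)
ok_identity = bool(np.isclose(lhs, rhs))
ok_item_i = bool(lhs >= 0)                      # item (i) of the lemma
```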
The next proposition guarantees that the supremum of the orbit-closed $C$-numerical range is attained whenever $A$ is a positive compact operator and $C$ is a positive trace-class operator.
This is our first sufficient condition for a point on the boundary to be included in $\ocnr(A)$.
\begin{proposition}
\label{prop:c-numerical-range-maximum}
Let $C,A$ be positive compact operators with $C$ trace-class.
Then
\begin{equation*}
\sup \ocnr(A) = \sum_{n=1}^{\infty} s_n(C) s_n(A),
\end{equation*}
and moreover the supremum is attained.
\end{proposition}
\begin{proof}
Take any $X \in Mhcal{U}(C)$.
Then since $X$ is a positive compact operator it is diagonalizable and so in some basis $X = \diag s(C)$.
Let $(d_n)$ be the diagonal of $A$ in this basis, which is necessarily nonnegative since $A$ is a positive operator.
It is well-known that $(d_n) \pprec s(A)$ (e.g., see \cite[Theorem~4.2]{AK-2006-OTOAaA}).
Therefore, since $s(C)$ is a nonincreasing nonnegative sequence, we may apply \Cref{lem:majorization-product-decreasing-sequence} to conclude for all $N \in Mhbb{N}$,
\begin{equation*}
\sum_{n=1}^N d_n s_n(C) \le \sum_{n=1}^N s_n(A) s_n(C) \le \sum_{n=1}^{\infty} s_n(A) s_n(C).
\end{equation*}
Taking the limit as $N \to \infty$, we find
\begin{equation*}
\trace(XA) = \sum_{n=1}^{\infty} d_n s_n(C) \le \sum_{n=1}^{\infty} s_n(A) s_n(C).
\end{equation*}
Moreover, since the trace is trace-norm continuous, we have $\trace(XA) \le \sum_{n=1}^{\infty} s_n(A) s_n(C)$ for any $X \in Mhcal{O}(C)$.
Thus $\sup \ocnr(A) \le \sum_{n=1}^{\infty} s_n(A) s_n(C)$.
To show equality and thus that the supremum is attained, simply note that there is a (likely different) basis which diagonalizes $A$ since it is a positive compact operator, and in this basis $A = Mhbf{0}_{\ker A} \oplus \diag s(A)$ (or $A = \diag s(A)$ if $A$ has finite rank).
Then $X := Mhbf{0}_{\ker A} \oplus \diag s(C) \in Mhcal{O}(C)$ (or $X := \diag s(C) \in Mhcal{O}(C)$ if $A$ has finite rank) by \Cref{prop:orbit-closure-equivalences} and we obtain
\begin{equation*}
\trace(XA) = \sum_{n=1}^{\infty} s_n(A) s_n(C). \qedhere
\end{equation*}
\end{proof}
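In finite dimensions the proposition reduces to the classical trace inequality $\sup_U \trace(UCU^{*}A) = \sum_n s_n(C) s_n(A)$; the snippet below (our own illustration, with all names hypothetical) checks both the upper bound and its attainment by aligning eigenbases, exactly as in the proof:

```python
# Finite-dimensional check: for positive matrices C and A,
# max over unitaries U of tr(U C U* A) equals sum_n s_n(C) s_n(A).
import numpy as np

rng = np.random.default_rng(1)
n = 6

def random_positive(n):
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return B @ B.conj().T

def random_unitary(n):
    Q, R = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))  # fix column phases

C, A = random_positive(n), random_positive(n)
sC = np.sort(np.linalg.eigvalsh(C))[::-1]   # s-numbers = eigenvalues here
sA = np.sort(np.linalg.eigvalsh(A))[::-1]
bound = float(np.dot(sC, sA))

# Random unitaries never beat the bound (von Neumann's trace inequality)...
vals = [np.trace(U @ C @ U.conj().T @ A).real
        for U in (random_unitary(n) for _ in range(200))]
never_exceeds = max(vals) <= bound + 1e-6

# ...and aligning the eigenbasis of C with that of A attains it.
_, VA = np.linalg.eigh(A)
VA = VA[:, ::-1]                            # columns by decreasing eigenvalue
X = VA @ np.diag(sC) @ VA.conj().T          # X lies in the unitary orbit of C
attained = bool(np.isclose(np.trace(X @ A).real, bound))
```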
Although the statement of the following theorem is restricted to the selfadjoint case, by the standard rotation argument mentioned at the beginning of this section it provides a necessary and sufficient condition for a supporting line\footnotemark{} of $\ocnr(A)$ to contain \emph{at least one} point of $\ocnr(A)$.
Notice that if this supporting line intersects $\closure{\ocnr(A)}$ in exactly one point (in particular, if this point does not lie on a line segment on the boundary), then this theorem gives a necessary and sufficient condition for that point to lie in $\ocnr(A)$ (cf. \Cref{fig:rotation-technique}).
\footnotetext{
recall that a \term{supporting line} $L$ for a convex set $S$ in the plane is a line such that $L \cap \closure{S} \not= \varnothing$ and $S$ is entirely contained within one of the closed half-planes determined by $L$.
Notice that this latter condition ensures $L \cap \closure{S} \subseteq \partial S$.
}
We record for the reader's convenience a basic fact which will be used in the following theorem and repeatedly throughout the remainder of this paper.
If $A \in B(Mhcal{H})$ and $m := \max \spec_{\mathrm{ess}}(\Re A)$, then $(\Re A - mI)_+$ is a positive compact operator.
Indeed, it clearly suffices to assume $m = 0$, and then simply notice that for every $\varepsilon > 0$ the spectral projections $\chi_{(-\infty,-\varepsilon)}\big((\Re A)_+\big) = 0$ and $\chi_{(\varepsilon,\infty)}\big((\Re A)_+\big) = \chi_{(\varepsilon,\infty)}(\Re A)$ are all finite.
\begin{theorem}
\label{thm:c-numerical-range-selfadjoint-formula}
Let $C \in Mhcal{L}_1^+$ be a positive trace-class operator and suppose $A \in B(Mhcal{H})$ is selfadjoint.
Let $m := \max \spec_{\mathrm{ess}}(A)$.
Then
\begin{equation*}
\sup \ocnr(A) = m\trace C + \sup \ocnr\big((A-mI)_+\big).
\end{equation*}
Moreover, if $P := \chi_{[m,\infty)}(A)$ denotes the spectral projection of $A$ onto the interval $[m,\infty)$, then $\sup \ocnr(A)$ is attained if and only if $\rank C \le \trace P$.
In fact, if $X \in Mhcal{O}(C)$ attains the supremum, then $XP = PX = X$.
\end{theorem}
\begin{proof}
For $A \in B(Mhcal{H})$ and $m \in Mhbb{C}$, since $\spec_{\mathrm{ess}}(A-mI) = \spec_{\mathrm{ess}}(A) - m$, by \Cref{prop:c-numerical-range-basics}\ref{item:similarity-preserving-ocnr}, we may assume without loss of generality that $m=0$.
The inequality $\sup \ocnr(A) \le \sup \ocnr(A_+)$ is immediate because for any $X \in Mhcal{O}(C)$, since $A_+ - A \ge 0$, we have $\trace(X(A_+ - A)) \ge 0$ by \Cref{prop:c-numerical-range-basics}\ref{item:positivity-ocnr}.
Therefore
\begin{equation*}
\trace(XA) \le \trace(XA_+) \le \sup \ocnr(A_+),
\end{equation*}
and taking the supremum over $X \in Mhcal{O}(C)$ yields $\sup \ocnr(A) \le \sup \ocnr(A_+)$.
It remains to prove the reverse inequality and the claim concerning when the supremum is attained.
We begin by proving the former.
Now, if $\rank C \le \trace P$, there is some $C' \in Mhcal{O}(C)$ such that $PC' = C'P = C'$ by \Cref{prop:orbit-closure-equivalences} (e.g., take $C' := Mhbf{0}_{P^{\perp}Mhcal{H}} \oplus \diag_{PMhcal{H}} \big(s_n(C)\big)_{n=1}^{\trace P}$).
Since $C',A_+$ are positive compact operators which are zero on $P^{\perp} Mhcal{H}$, we may view them as operators acting on $PMhcal{H}$.
By \Cref{prop:c-numerical-range-maximum}, there is some $X' \in Mhcal{O}_{PMhcal{H}}(C')$ for which
\begin{equation*}
\trace_{PMhcal{H}}(X' A_+) = \sup \cnr[Mhcal{O}_{PMhcal{H}}(C')](A_+) = \sum_{n=1}^{\infty} s_n(C') s_n(A_+) = \sum_{n=1}^{\infty} s_n(C) s_n(A_+) = \sup \ocnr(A_+).
\end{equation*}
Then setting $X := Mhbf{0}_{P^{\perp}Mhcal{H}} \oplus X' \in Mhcal{O}(C)$ we find that $\trace(XA) = \trace_{PMhcal{H}}(X'A_+) = \sup \ocnr(A_+)$ which we already established is at least $\sup \ocnr(A)$.
Moreover, notice that $\sup \ocnr(A)$ is attained in this case.
Now suppose $\rank C > \trace P$.
For $\varepsilon > 0$, let $P_{\varepsilon} := \chi_{(-\varepsilon,0)}(A)$.
Since $\rank C > \trace P$, we know that $P$ is a finite projection.
But since $0 \in \spec_{\mathrm{ess}}(A)$, we must have that $P_{\varepsilon} + P = \chi_{(-\varepsilon,\infty)}(A)$ is infinite for every $\varepsilon > 0$, and hence $P_{\varepsilon}$ is infinite.
Then consider a basis $Mhfrak{e} = \set{e_n}_{n \in Mhbb{Z}}$ such that for $1 \le n \le \trace P$, $A e_n = s_n(A_+) e_n$, and for which $\set{e_n}_{n=\trace P + 1}^{\infty}$ is an orthonormal set in $P_{\varepsilon}Mhcal{H}$.
Define $X \in Mhcal{O}(C)$ to be the diagonal operator $X e_n = s_n(C) e_n$ for $n \in Mhbb{N}$ and $X e_n = 0$ for $n \le 0$.
Then by construction and since $s_n(A_+) = 0$ for $n > \trace P$, we find
\begin{align*}
\trace(XA) = \trace(XAP) + \trace(XAP_{\varepsilon}) &\ge \sum_{n=1}^{\trace P} s_n(C) s_n(A_+) - \norm{X}_1 \norm{AP_{\varepsilon}} \\
&\ge \sum_{n=1}^{\infty} s_n(C) s_n(A_+) - \varepsilon \trace C \\
&= \sup \ocnr(A_+) - \varepsilon \trace C.
\end{align*}
Since $\varepsilon$ was arbitrary, this proves $\sup \ocnr(A) \ge \sup \ocnr(A_+)$, and thus we have equality.
Suppose $X \in Mhcal{O}(C)$ attains the supremum, that is, $\trace(XA) = \sup \ocnr(A)$.
As we have just proved that $\sup \ocnr(A) = \sup \ocnr(A_+)$, it follows that $\trace(XA) = \sup \ocnr(A_+)$.
Moreover, as $PA = A_+$ and $P^{\perp} A = -A_-$,
\begin{equation}
\label{eq:2}
\begin{aligned}
\trace(XA) &= \trace(XPA) + \trace(XP^{\perp}A) \\
&= \trace(XA_+) - \trace(XP^{\perp}A_-) \\
&\le \left( \sup \ocnr(A_+) \right) - \trace(XP^{\perp}A_-) \\
&= \trace(XA) - \trace(XA_-).
\end{aligned}
\end{equation}
Since $\trace(XA_-) = \trace(XP^{\perp}A_-) = \trace(X^{\frac{1}{2}} P^{\perp} A_- P^{\perp} X^{\frac{1}{2}}) \ge 0$ and the trace is faithful, equality holds in \eqref{eq:2} if and only if $X^{\frac{1}{2}} P^{\perp} A_- P^{\perp} X^{\frac{1}{2}} = 0$, if and only if $A^{\frac{1}{2}}_- P^{\perp} X^{\frac{1}{2}} = 0$.
Now because $P^{\perp}$ is the spectral projection of $A$ on the interval $(-\infty,0)$, we see that $A_-^{\frac{1}{2}}$ is strictly positive on $P^{\perp}Mhcal{H}$ (or $P^{\perp} = 0$).
Therefore, $A^{\frac{1}{2}}_- P^{\perp} X^{\frac{1}{2}} = 0$ if and only if $P^{\perp} X^{\frac{1}{2}} = 0$ if and only if $R_X = R_{X^{\smash[t]{\frac{1}{2}}}} \le P$ ($R_X$ denotes the range projection of $X$) if and only if $PX=XP=X$.
This proves the claim about $X \in Mhcal{O}(C)$ which attain the supremum.
Finally, if $\rank C > \trace P$, then for any $X \in Mhcal{O}(C)$, $XP \not= X$ and so by the above, $X$ does not attain the supremum.
Since $X$ was arbitrary, the supremum cannot be attained in this case.
\end{proof}
The following example shows how the techniques developed thus far can be used to compute the orbit-closed $C$-numerical range in certain circumstances.
\begin{example}
Let $S$ denote the shift operator on either $\ell^2(Mhbb{N})$ or $\ell^2(Mhbb{Z})$.
It is well known that the standard numerical range is $W(S) = Mhbb{D}$, the open unit disk.
Let $C \in Mhcal{L}_1^+$ with $\trace C = \norm{C}_1 = 1$.
We will show $\ocnr(S) = Mhbb{D}$ also.
Notice first that $\lambda(C) \prec \lambda(P)$ where $P$ is a rank-$1$ projection, so by \Cref{cor:majorization-c-numerical-range-inclusion}, $\ocnr(S) \subseteq \ocnr[P](S) = W(S) = Mhbb{D}$.
Moreover, $S$ is unitarily equivalent to $e^{i\theta} S$ via the diagonal unitary $U_{\theta} := \diag(e^{in\theta})$.
Therefore, since the orbit-closed $C$-numerical range is unitarily invariant and using \Cref{prop:c-numerical-range-basics}\ref{item:similarity-preserving-ocnr}, $\ocnr(S) = \ocnr(e^{i\theta}S) = e^{i\theta}\ocnr(S)$ and so $\ocnr(S)$ is radially symmetric.
Because $\max \spec(\Re S) = \max \spec_{\mathrm{ess}}(\Re S) = 1$, we know $(\Re S - I)_+ = 0$, and therefore by \Cref{prop:c-numerical-range-basics}\ref{item:hermitian-ocnr} and \Cref{thm:c-numerical-range-selfadjoint-formula},
\begin{equation*}
\sup \Re \ocnr(S) = \sup \ocnr(\Re S) = \trace C + \sup \ocnr\big((\Re S - I)_+\big) = \trace C = 1.
\end{equation*}
Consequently, by the radial symmetry and convexity (using \Cref{cor:c-numerical-range-convex}) of $\ocnr(S)$ it must contain the open unit disk.
Therefore $\ocnr(S) = Mhbb{D}$.
We note that $\cnr(S)$ must be dense in $Mhbb{D}$ by \Cref{thm:cnr-dense-in-ocnr}, but it seems rather hard to conclude these sets are equal without convexity.
\end{example}
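A finite-dimensional illustration of this example (an informal check, not a proof): the top eigenvalue of the real part of the $n \times n$ truncated shift is $\cos(\pi/(n+1))$, which increases to $1 = \sup \Re \ocnr(S)$ as $n \to \infty$:

```python
# Check that the largest eigenvalue of Re(S_n), for S_n the n x n
# truncated shift, equals cos(pi/(n+1)) and tends to 1.
import numpy as np

def truncated_shift(n):
    return np.diag(np.ones(n - 1), k=-1)   # S e_k = e_{k+1} on n basis vectors

tops = []
for n in [5, 50, 500]:
    S = truncated_shift(n)
    top = float(np.linalg.eigvalsh((S + S.T) / 2).max())
    # classical eigenvalues of the tridiagonal Toeplitz matrix Re(S_n)
    assert np.isclose(top, np.cos(np.pi / (n + 1)))
    tops.append(top)

approaches_one = tops[-1] > 0.99995
```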
We now build towards \Cref{thm:ocnr-closed-rank-condition} which provides a sufficient condition for $\ocnr(A)$ to be closed for $A \in B(Mhcal{H})$.
We begin with a bootstrapping of a standard result by induction.
\begin{lemma}
\label{lem:close-projections-unitary-close-to-1}
Given $\varepsilon > 0$ there is some $\delta > 0$ such that whenever $\set{P_j}_{j=1}^N, \set{Q_j}_{j=1}^N$ are each collections of mutually orthogonal projections with $\norm{P_j - Q_j} < \delta$ for each $1 \le j \le N$, then there is a unitary $U$ conjugating each pair $P_j,Q_j$ such that $\norm{U-I} < \varepsilon$.
\end{lemma}
\begin{proof}
We proceed by induction on $N$.
The case when $N=1$ is standard, but a good reference is \cite[II.3.3.4]{Bla-2006}.
The argument is essentially this: set $Z := P_1 Q_1 + (1-P_1)(1-Q_1)$, then $Z$ is invertible and $U = Z\abs{Z}^{-1}$ is the desired unitary.
Now let $N \in Mhbb{N}$ and suppose the result holds for pairs of collections of mutually orthogonal projections of length at most $N$.
Let $\varepsilon > 0$; by the inductive hypothesis there is some $\delta > 0$ corresponding to $\frac{\varepsilon}{2}$.
Moreover, by the base case $N = 1$ there is some $\eta > 0$ corresponding to $\min\set{\frac{\delta}{3},\frac{\varepsilon}{2}}$.
Suppose that $\set{P_j}_{j=1}^{N+1}, \set{Q_j}_{j=1}^{N+1}$ are each collections of mutually orthogonal projections with $\norm{P_j - Q_j} < \min\set{\eta,\frac{\delta}{3}}$.
Then there is a unitary $U'$ with $\norm{U' - I} < \min\set{\frac{\delta}{3},\frac{\varepsilon}{2}}$ conjugating $P_{N+1}$ to $Q_{N+1}$.
Then $U'$ also conjugates $\set{P_j}_{j=1}^N$ to a mutually orthogonal collection $\set{P'_j}_{j=1}^N$.
Moreover,
\begin{equation*}
\norm{P'_j - Q_j} = \norm{U' P_j U'^{*} - Q_j} \le \norm{U'-I} + \norm{U'^{*} - I} + \norm{P_j - Q_j} \le 2\min\vset{\frac{\delta}{3},\frac{\varepsilon}{2}} + \norm{P_j - Q_j} < \delta.
\end{equation*}
Then let $V$ be a unitary conjugating $\set{P'_j}_{j=1}^N$ to $\set{Q_j}_{j=1}^N$ inside the Hilbert space $Q_{N+1}^{\perp} Mhcal{H}$ such that $\norm{V - I_{Q_{N+1}^{\perp} Mhcal{H}}} < \frac{\varepsilon}{2}$.
Then $W := I_{Q_{N+1}Mhcal{H}} \oplus V$ is a unitary on $Mhcal{H}$ and $\norm{W - I} < \frac{\varepsilon}{2}$.
Finally, set $U = WU'$ and notice that $U$ conjugates $\set{P_j}_{j=1}^{N+1}$ to $\set{Q_j}_{j=1}^{N+1}$.
Moreover,
\begin{equation*}
\norm{U-I} \le \norm{W-I} + \norm{U'-I} < \frac{\varepsilon}{2} + \min\vset{\frac{\delta}{3},\frac{\varepsilon}{2}} \le \varepsilon.
\end{equation*}
This completes the induction, and hence the proof.
\end{proof}
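The base-case construction can be tested numerically; in the sketch below (our own setup, with the perturbation scheme chosen purely for illustration), $Z := PQ + (1-P)(1-Q)$ and its polar part $U = Z\abs{Z}^{-1}$ are computed via an eigendecomposition, and one checks that $U$ is a unitary conjugating $Q$ to $P$ with $\norm{U-I}$ controlled by $\norm{P-Q}$:

```python
# Numerical sketch of the N = 1 step: Z = PQ + (1-P)(1-Q), U = Z|Z|^{-1}.
import numpy as np

rng = np.random.default_rng(2)
n, r, eps = 8, 3, 1e-3

def herm(n):
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (B + B.conj().T) / 2

# Q: a random rank-r projection; P: a small unitary perturbation of it.
_, V = np.linalg.eigh(herm(n))
Q = V[:, :r] @ V[:, :r].conj().T
wH, VH = np.linalg.eigh(herm(n))
W = VH @ np.diag(np.exp(1j * eps * wH)) @ VH.conj().T   # unitary near I
P = W @ Q @ W.conj().T

I = np.eye(n)
Z = P @ Q + (I - P) @ (I - Q)
wz, Vz = np.linalg.eigh(Z.conj().T @ Z)                 # |Z|^2 is positive
U = Z @ (Vz @ np.diag(wz ** -0.5) @ Vz.conj().T)        # polar part of Z

is_unitary = np.allclose(U.conj().T @ U, I)
conjugates = np.allclose(U @ Q @ U.conj().T, P)         # U Q U* = P
close_to_identity = (np.linalg.norm(U - I, 2)
                     < 10 * np.linalg.norm(P - Q, 2))
```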
Using \Cref{lem:close-projections-unitary-close-to-1} we now establish a sufficient condition for when certain points on the boundary $\partial\ocnr(A)$ can be obtained by elements of $Mhcal{O}(C)$ which are close in trace norm.
This approximation result is a key step in the proof of \Cref{thm:ocnr-closed-rank-condition}.
\begin{proposition}
\label{prop:boundary-approximation}
Let $C \in Mhcal{L}_1^+$ and suppose that $\rank (\Re A-mI)_+ \ge \rank C$, where $m := \max \spec_{\mathrm{ess}}(\Re A)$.
Let $[x_-,x_+]$ denote the (possibly degenerate) line segment on $\partial\ocnr(A)$ consisting of the points with maximal real part.
Furthermore, suppose that there are arbitrarily small $\theta > 0$ for which there is a point $x_{\theta} \in \ocnr(A)$ on its boundary whose supporting line intersects $\ocnr(A)$ only at this point $x_{\theta}$, and that $x_{\theta} \to x_-$ as $\theta \to 0$.
Then given any $\varepsilon > 0$, for sufficiently small $\theta$ there are some $X_{\theta}, X \in Mhcal{O}(C)$ with $\trace(X_{\theta}A) = x_{\theta}$ and $\Re\trace(XA) = \sup \Re\ocnr(A)$, and $\norm{X_{\theta} - X}_1 < \varepsilon$.
Consequently, $\ocnr(A)$ contains points on the line segment arbitrarily close to $x_-$.
\end{proposition}
\begin{proof}
By translating, we may clearly suppose that $m = 0$.
Define for each $\theta \in Mhbb{R}$ the selfadjoint operator $A_{\theta} := \Re(e^{i\theta} A)$.
Let $\varepsilon > 0$.
Since $C \in Mhcal{L}_1^+$, there is some $N \in Mhbb{N}$ such that $\sum_{n=N+1}^{\infty} s_n(C) < \frac{\varepsilon}{8\norm{A}}$;
if $\rank C < \infty$, set $N := \rank C$.
Let $\lambda_1 > \cdots > \lambda_m > 0$ be the $m$ largest eigenvalues of $A_0 = \Re A$ with associated (mutually orthogonal) spectral projections $P_j := \chi_{\set{\lambda_j}}(A_0)$ for $1 \le j \le m$.
Choose $m$ so that $\sum_{j=1}^m \trace(P_j) \ge N$, which is possible since $\rank (A_0)_+ \ge \rank C$ by hypothesis.
Set $P := \sum_{j=1}^m P_j$ and define $n_j := \sum_{i=1}^j \trace P_i$ and $n_0 := 0$.
We remark for future reference that $s_k((A_0)_+) = \lambda_j$ when $n_{j-1} < k \le n_j$.
Notice that
\begin{equation*}
A_{\theta} = \Re(e^{i\theta} A) = (\cos\theta) \Re A + (\sin\theta) \Im A = A_0 + B_{\theta},
\end{equation*}
where $B_{\theta} := (\cos\theta - 1) \Re A + (\sin\theta) \Im A$ and that $\norm{B_{\theta}} \le 2\theta \norm{A}$.
Set
\begin{equation*}
\delta_1 = \frac{1}{4} \min_{1 \le j \le m} \dist(\lambda_j, \spec(A_0) \setminus \set{\lambda_j}).
\end{equation*}
By the upper semicontinuity of the spectrum (and the essential spectrum), for all sufficiently small $\theta > 0$ we can guarantee that $m_{\theta} := \max \spec_{\mathrm{ess}}(A_{\theta}) < \lambda_m - \delta_1$ and that $\spec(A_{\theta})$ is contained in the $\delta_1$-neighborhood of $\spec(A_0)$.
By \Cref{lem:close-projections-unitary-close-to-1}, there is some $\delta > 0$ associated to $\frac{\varepsilon}{8 \norm{A} \trace C}$.
Then we may choose $\theta > 0$ small enough so that both $\abs{x_{\theta} - x_-} < \frac{\varepsilon}{2}$ and $\norm{B_{\theta}}$ is small enough \cite[Theorem~3.4]{MS-2015-JRAM}\footnote{This result is actually much stronger than we need because it provides tight bounds on the required size of the norm $\norm{B_{\theta}}$. For our purposes, the result we need could be obtained by straightforward, albeit somewhat tedious, arguments using the continuous functional calculus.} that if $Q_j := \chi_{[\lambda_j-\delta_1,\lambda_j+\delta_1]}(A_{\theta})$, then $\norm{P_j - Q_j} < \delta$.
Moreover, let $Q := \sum_{j=1}^m Q_j$.
By \Cref{lem:close-projections-unitary-close-to-1} there is a unitary $U$ with $\norm{U-I} < \frac{\varepsilon}{8 \norm{A} \trace C}$ conjugating $Q_j$ to $P_j$ (i.e., $U Q_j U^{*} = P_j$) for each $1 \le j \le m$.
Let $Mhfrak{e} := \set{e_k}_{k \in Mhbb{Z}}$ be an orthonormal basis so that for $1 \le k \le \max \set{\rank C, \trace Q}$ (note: $\trace Q = \trace P$), $e_k$ is an eigenvector of $A_{\theta}$ for the eigenvalue $m_{\theta} + s_k((A_{\theta} - m_{\theta}I)_+)$;
this is possible since by hypothesis $\rank (A_{\theta} - m_{\theta}I)_+ \ge \rank C$, and also $\chi_{(m_{\theta},\infty)}(A_{\theta}) \ge Q$ so $\rank (A_{\theta} - m_{\theta}I)_+ \ge \trace Q$.
The eigenvectors $\set{e_k}_{k=1}^{\trace Q}$ are in the subspaces $Q_j Mhcal{H}$.
More specifically, $\set{e_k}_{k=n_{j-1} + 1}^{n_j}$ is a basis for $Q_j Mhcal{H}$.
Consequently, $\set{U e_k}_{k=n_{j-1} + 1}^{n_j}$ is a basis for $P_j Mhcal{H}$ since $U$ conjugates $Q_j$ to $P_j$.
Therefore, for $n_{j-1} < k \le n_j$, we have $A_0 U e_k = \lambda_j U e_k$; that is, these $U e_k$ are eigenvectors of $A_0$.
Now let $Mhfrak{f} := \set{f_k}_{k \in Mhbb{Z}}$ be an orthonormal basis for which $A_0 f_k = s_k((A_0)_+) f_k$ when $1 \le k \le \max \set{\rank C, \trace P}$ (note: $\trace P = \trace Q$);
again, this is possible since $\rank (A_0)_+ \ge \rank C$, and because $\chi_{(0,\infty)}(A_0) \ge P$, so $\rank(A_0)_+ \ge \trace P$.
By the previous paragraph we may select $f_k = U e_k$ for $1 \le k \le \trace Q = \trace P$.
Let $V$ be the unitary which maps $U e_k$ to $f_k$ for all $k \in Mhbb{Z}$.
Notice that $PV = VP = P$ since $PMhcal{H} = \spans \set{f_k}_{k=1}^{\trace P}$ and $V$ acts as the identity here since $Ue_k = f_k$ for $1 \le k \le \trace P$.
Define $X_{\theta}$ to be the operator which is diagonal with respect to the basis $Mhfrak{e}$ such that $X_{\theta} e_k = s_k(C) e_k$ for $1 \le k \le \rank C$ and $X_{\theta} e_k = 0$ for all other values of $k$.
Clearly $X_{\theta} \in Mhcal{O}(C)$ by \Cref{prop:orbit-closure-equivalences}.
Moreover, notice that
\begin{align*}
\trace(X_{\theta} A_{\theta}) &= \sum_{k=1}^{\rank C} s_k(C) \big( m_{\theta} + s_k((A_{\theta}-m_{\theta}I)_+) \big) \\
&= m_{\theta} \trace C + \sum_{k=1}^{\infty} s_k(C) s_k((A_{\theta}-m_{\theta}I)_+) \\
&= m_{\theta} \trace C + \sup \ocnr \big( (A_{\theta} - m_{\theta} I)_+ \big) \\
&= \sup \ocnr(A_{\theta}) = \sup \Re(\ocnr(e^{i\theta}A)).
\end{align*}
Then since $\Re \trace(X_{\theta} e^{i\theta} A) = \trace(X_{\theta} A_{\theta})$ maximizes $\Re(\ocnr(e^{i\theta}A))$, and because the supporting line for $x_{\theta}$ intersects the boundary only at that point, we must have $\trace(X_{\theta} A) = x_{\theta}$.
Now define $X := (VU)X_{\theta}(VU)^{*} \in Mhcal{O}(C)$.
Since $VU$ maps the basis $Mhfrak{e}$ onto the basis $Mhfrak{f}$, we see that $X$ is diagonal with respect to the basis $Mhfrak{f}$.
Moreover,
\begin{equation*}
\trace(X A_0) = \sum_{k=1}^{\rank C} s_k(C) s_k((A_0)_+) = \sum_{k=1}^{\infty} s_k(C) s_k((A_0)_+) = \sup \ocnr(A_0) = \sup \Re(\ocnr(A)).
\end{equation*}
Since $\Re \trace(XA) = \trace(XA_0)$, this entails $\trace(XA) \in [x_-,x_+]$.
We now estimate the trace norm of $X - X_{\theta}$.
Since $P,X$ (or $Q,X_{\theta}$) are diagonal with respect to the basis $Mhfrak{f}$ (or $Mhfrak{e}$) and therefore commute, we have
\begin{equation*}
X - X_{\theta} = PXP - QX_{\theta}Q + P^{\perp}XP^{\perp} - Q^{\perp}X_{\theta}Q^{\perp}.
\end{equation*}
Additionally, since $U$ conjugates $Q$ to $P$, we know $UQU^{*} = P$ and so $QU^{*} = U^{*}P$ and $UQ = PU$.
In addition, $PV = VP = P$, and combining these we obtain
\begin{align*}
PXP - QX_{\theta}Q &= PXP - QU^{*}V^{*}XVUQ \\
&= PXP - U^{*}PV^{*} XVPU \\
&= PXP - U^{*}PXPU \\
&= PX - U^{*}PX + U^{*}PX - U^{*}PXPU.
\end{align*}
Combining these we find
\begin{align*}
\norm{X-X_{\theta}}_1 &\le \norm{PXP - QX_{\theta}Q + P^{\perp}XP^{\perp} - Q^{\perp}X_{\theta}Q^{\perp}}_1 \\
&\le \norm{PX - U^{*}PX}_1 + \norm{U^{*}PX - U^{*}PXPU}_1 + \norm{P^{\perp}XP^{\perp}}_1 + \norm{Q^{\perp}X_{\theta}Q^{\perp}}_1 \\
&\le \norm{P-U^{*}P} \norm{X}_1 + \norm{U^{*}P} \norm{X}_1 \norm{P - PU} + \trace(P^{\perp}XP^{\perp}) + \trace(Q^{\perp}X_{\theta}Q^{\perp}) \\
&\le \frac{\varepsilon}{8 \norm{A} \trace C} \trace C + \frac{\varepsilon}{8 \norm{A} \trace C} \trace C + \sum_{n=1+\trace P}^{\infty} s_n(C) + \sum_{n=1+\trace Q}^{\infty} s_n(C) \\
&\le \frac{\varepsilon}{4\norm{A}} + 2 \sum_{n=N+1}^{\infty} s_n(C) \\
&< \frac{\varepsilon}{2\norm{A}}.
\end{align*}
Therefore,
\begin{equation*}
\abs{\trace(XA) - x_-} \le \abs{\trace(XA) - \trace(X_{\theta}A)} + \abs{x_{\theta} - x_-} < \norm{X-X_{\theta}}_1 \norm{A} + \frac{\varepsilon}{2} < \varepsilon.
\end{equation*}
Since $\trace(XA) \in [x_-,x_+]$ and $\varepsilon > 0$ was arbitrary, we conclude that $\ocnr(A)$ contains points on $[x_-,x_+]$ which are arbitrarily close to $x_-$.
\end{proof}
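The final estimate rests on the H\"older-type bound $\abs{\trace((X - X_{\theta})A)} \le \norm{X - X_{\theta}}_1 \norm{A}$, which is used twice above; a quick randomized check of this inequality (illustrative only):

```python
# Randomized check of |tr(M A)| <= ||M||_1 ||A|| on dense complex matrices.
import numpy as np

rng = np.random.default_rng(3)
n = 7

def rand_c(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

holder_ok = True
for _ in range(100):
    X, Y, A = rand_c(n), rand_c(n), rand_c(n)
    lhs = abs(np.trace((X - Y) @ A))
    trace_norm = np.linalg.svd(X - Y, compute_uv=False).sum()  # ||X - Y||_1
    op_norm = np.linalg.svd(A, compute_uv=False).max()         # ||A||
    holder_ok = holder_ok and bool(lhs <= trace_norm * op_norm + 1e-9)
```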
We are almost ready to provide a sufficient condition for $\ocnr(A)$ to be closed when $C \in Mhcal{L}_1^+$ and $A \in B(Mhcal{H})$, but before we proceed we need two more technical results concerning majorization, spectral projections and operators which maximize $\ocnr(A)$ for $A$ selfadjoint.
\Cref{lem:extreme-points-k-numerical-range} concerns, in essence, the properties of projections which maximize the $k$-numerical range.
Then \Cref{prop:block-diagonal-decomposition} bootstraps \Cref{lem:extreme-points-k-numerical-range} to conclude that a maximizer of the orbit-closed $C$-numerical range has a certain block diagonal decomposition.
\begin{lemma}
\label{lem:extreme-points-k-numerical-range}
Let $X$ be a positive compact operator and $P$ a rank-$N$ projection.
If
\begin{equation*}
\trace (P X) = \sup \ocnr[P](X) = \sum_{n=1}^N s_n(X),
\end{equation*}
and $Q := \chi_{[s_N(X),\infty)}(X)$, then $P \le Q$ and $Q - P \le \chi_{\set{s_N(X)}}(X)$.
Consequently, $X$ commutes with $P$.
\end{lemma}
\begin{proof}
In the case when $s_N(X) = 0$, then $Q = I \ge P$ and $Q - P = I - P = P^{\perp}$.
In this case $X$ has rank less than $N$, so $\trace X = \sum_{n=1}^N s_n(X)$ and hence $\trace(P^{\perp} X P^{\perp}) = \trace(P^{\perp} X) = \trace X - \trace(P X) = 0$.
Since the trace is faithful, this implies $P^{\perp} X P^{\perp} = 0$, and therefore $P^{\perp} X^{\frac{1}{2}} = 0$.
Consequently $Q - P = P^{\perp} \le \chi_{\set{0}}(X)$, as desired.
Hence we may now suppose $s_N(X) > 0$.
Let $\lambda_1 > \lambda_2 > \cdots > \lambda_m = s_N(X) > 0$ be the distinct eigenvalues of $X$ greater than or equal to $s_N(X)$, and let $Q_j := \chi_{\set{\lambda_j}}(X)$ for $1 \le j \le m$ be the associated spectral projections.
Then $Q = \sum_{j=1}^m Q_j$, and $X Q_j = \lambda_j Q_j$.
Let $\lambda'$ be the largest eigenvalue of $X$ less than $\lambda_m$ (or $\lambda' := 0$ if no such eigenvalue exists).
Set $\lambda_{m+1} := \frac{1}{2} ( \lambda_m + \lambda' )$, so that $\lambda_m > \lambda_{m+1} > \lambda' \ge 0$.
For convenience of notation we set $Q_{m+1} := Q^{\perp}$.
We remark that $X Q_{m+1} \le \lambda_{m+1} Q_{m+1}$.
Notice that for $1 \le j \le m+1$, $\trace (P Q_j) = \trace (Q_j P Q_j) \le \trace Q_j$ and that
\begin{equation*}
\sum_{j=1}^{m+1} \trace (P Q_j) = \trace \big( P (Q_1 + \cdots + Q_{m+1}) \big) = \trace \big( P (Q+Q^{\perp}) \big) = \trace P = N.
\end{equation*}
Therefore, we have majorization of the finite sequences
\begin{equation}
\label{eq:finite-majorization-pq}
\big( \trace(P Q_1), \ldots, \trace(P Q_{m+1}) \big) \prec \bigg( \trace Q_1, \ldots, \trace Q_{m-1}, \trace P - \sum_{j=1}^{m-1} \trace Q_j, 0 \bigg).
\end{equation}
Consider the difference of these sequences which, since $\trace Q_j - \trace (PQ_j) = \trace((I-P)Q_j) = \trace(P^{\perp}Q_j)$, has the form
\begin{equation*}
(\delta_j)_{j=1}^{m+1} := \bigg( \trace (P^{\perp}Q_1), \ldots, \trace (P^{\perp}Q_{m-1}), \trace P - \sum_{j=1}^{m-1} \trace Q_j - \trace( P Q_m ), - \trace (PQ^{\perp}) \bigg).
\end{equation*}
Then because $\sum_{j=1}^{m+1} Q_j = I$ and $XQ_j \le \lambda_j Q_j$ for $1 \le j \le m+1$, and since $(\delta_j)_{j=1}^{m+1}$ has nonnegative partial sums,
\begin{align*}
\trace (PX) = \sum_{j=1}^{m+1} \trace (PXQ_j) &\le \sum_{j=1}^{m+1} \lambda_j \trace (PQ_j) \\
&\le \sum_{j=1}^{m-1} \lambda_j \trace Q_j + \lambda_m \bigg( \trace P - \sum_{j=1}^{m-1} \trace Q_j \bigg) \tag*{by \Cref{lem:majorization-product-decreasing-sequence} with \eqref{eq:finite-majorization-pq},}\\
&= \sum_{j=1}^M s_j(X) + \lambda_m (N - M) \\
&= \sum_{j=1}^N s_j(X),
\end{align*}
where $M := \sum_{j=1}^{m-1} \trace Q_j$.
By hypothesis the first and last expressions in the above chain are equal, and therefore we must have equality throughout.
Since from the previous display $\sum_{j=1}^{m+1} \delta_j \lambda_j = 0$, and because the $\lambda_j$ are distinct and positive, \Cref{lem:majorization-product-decreasing-sequence} guarantees $\delta_j = 0$ for all $1 \le j \le m+1$.
Therefore, $\trace (PQ^{\perp}) = \trace (P Q_{m+1}) = 0$ and hence $P \le Q$.
Similarly, for $1 \le j \le m-1$, $\trace (P^{\perp} Q_j) = 0$, and thus $Q_j \le P$,
so we may write $P = Q_1 + \cdots + Q_{m-1} + P'$ for some projection $P'$.
Finally, the projection $Q-P = Q_m - P' \le Q_m = \chi_{\set{s_N(X)}}(X)$.
Notice that $X$ commutes with any subprojection of $\chi_{\set{s_N(X)}}(X)$ (because $X$ is scalar relative to this subspace), hence $X$ commutes with $Q-P$.
Since $X$ also commutes with $Q$ (because it is a spectral projection), it must commute with $P$ as well.
\end{proof}
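The finite-rank content of the lemma is the classical Ky Fan maximum principle; the following randomized check (our own illustration, with hypothetical names) verifies both the maximal value and that the spectral maximizer commutes with $X$:

```python
# Among rank-N projections P, tr(PX) is maximized at the sum of the N
# largest eigenvalues of X >= 0, attained by the top spectral projection.
import numpy as np

rng = np.random.default_rng(4)
n, N = 8, 3

B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
X = B @ B.conj().T                      # positive, a.s. distinct eigenvalues
w, V = np.linalg.eigh(X)
w, V = w[::-1], V[:, ::-1]              # decreasing order
ky_fan = float(w[:N].sum())

P = V[:, :N] @ V[:, :N].conj().T        # projection onto top-N eigenspace
attained = bool(np.isclose(np.trace(P @ X).real, ky_fan))
commutes = np.allclose(P @ X, X @ P)

below = True
for _ in range(200):
    M = rng.normal(size=(n, N)) + 1j * rng.normal(size=(n, N))
    Qr, _ = np.linalg.qr(M)             # isometry onto a random N-dim subspace
    R = Qr @ Qr.conj().T                # random rank-N projection
    below = below and bool(np.trace(R @ X).real <= ky_fan + 1e-9)
```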
The next proposition guarantees a kind of block diagonal decomposition for those $X \in Mhcal{O}(C)$ which maximize $\ocnr(A)$ for selfadjoint $A \in B(Mhcal{H})$.
This proposition is essential in proving: \Cref{thm:ocnr-closed-rank-condition}, which establishes a sufficient condition for $\ocnr(A)$ to be closed for some $A \in B(Mhcal{H})$; \Cref{thm:direct-sum-characterization}, which characterizes the behavior of the orbit-closed $C$-numerical range under direct sums; and \Cref{thm:normal-convex-c-spectrum}, which establishes an analogue for the orbit-closed $C$-numerical range of $W(A) = \conv \spec_{\mathrm{pt}}(A)$ when $A \in Mhcal{K}$ is normal.
\begin{proposition}
\label{prop:block-diagonal-decomposition}
Suppose that $C \in Mhcal{L}_1^+$ is a positive trace-class operator and $A \in B(Mhcal{H})$ is selfadjoint with $m := \max \spec_{\mathrm{ess}}(A)$.
Let $\set{\lambda_l}_{l=1}^N$ denote the distinct elements of $\spec(A)$ greater than $m$ listed in decreasing order, and including $\lambda_N = m$ when this list is finite.
If $X \in Mhcal{O}(C)$ is a maximizer of $\ocnr(A)$, that is, if $\trace(XA) = \sup \ocnr(A)$, then $X$ commutes with each of the projections $P_l = \chi_{\set{\lambda_l}}(A)$ for $1 \le l \le N$.
Moreover, for $1 \le l < N$, the compression of $X$ to $P_l$ is unitarily equivalent to $\diag(s_{n_{l-1}+1}(C),\ldots,s_{n_l}(C))$ where $n_0 := 0$ and $n_l := \sum_{j=1}^l \trace P_j$.
Furthermore, if $N < \infty$, then the compression of $X$ to $\chi_{\set{m}}(A)$ lies in $Mhcal{O}\big( \diag(s_{n_{N-1}+1}(C),s_{n_{N-1}+2}(C),\ldots,s_{n_N}(C)) \big)$, where this sequence is infinite if $n_N = \infty$.
\end{proposition}
\begin{proof}
By translating and applying \Cref{prop:c-numerical-range-basics} and \Cref{thm:c-numerical-range-selfadjoint-formula} we may assume without loss of generality that $m = 0$.
Then $\set{\lambda_l}_{l=1}^N$ are the distinct terms in the sequence $s(A_+)$ listed in decreasing order, and for $1 \le l < N$, the multiplicity of $\lambda_l$ in this sequence is exactly $\trace P_l$.
Set $P_0 := I - \sum_{l=1}^N P_l$.
Let $\set{e_j}_{j=-M}^{n_N}$ be an orthonormal basis where $\set{e_j}_{j=-M}^0$ is a basis for $P_0 Mhcal{H}$ and for each $1 \le l \le N$, the collection $\set{e_j}_{j=n_{l-1} + 1}^{n_l}$ is a basis for $P_l Mhcal{H}$.
Then let $(d_n)_{n=-M}^{n_N}$ be the diagonal of $X$ relative to this basis.
We have $(d_n)_{n=1}^{\rank A_+} \pprec s(X) = s(C)$.
Therefore
\begin{align*}
\sum_{n=1}^{\rank A_+} s_n(C) s_n(A_+) &= \sum_{n=1}^{\infty} s_n(C) s_n(A_+) \\
&= \sup \ocnr(A) \\
&= \trace(XA) \\
&= -\trace(XA_-) + \sum_{l=1}^{N-1} \trace(XAP_l) \\
&= \sum_{l=1}^{N-1} \lambda_l \trace(XP_l) \tag*{since $XP = X$ by \Cref{thm:c-numerical-range-selfadjoint-formula}, so $\trace(XA_-) = 0$,} \\
&= \sum_{l=1}^{N-1} \sum_{n=n_{l-1}+1}^{n_l} d_n \lambda_l \\
&= \sum_{n=1}^{\rank A_+} d_n s_n(A_+).
\end{align*}
Then by \Cref{lem:majorization-product-decreasing-sequence} we obtain for each $1 \le l < N$, $\sum_{n=1}^{n_l} d_n = \sum_{n=1}^{n_l} s_n(C)$.
Therefore, for each $1 \le l < N$, we find
\begin{equation*}
\trace (P_l X) = \sum_{n=n_{l-1} + 1}^{n_l} d_n = \sum_{n=n_{l-1} + 1}^{n_l} s_n(C).
\end{equation*}
Then by \Cref{lem:extreme-points-k-numerical-range}, $P_1$ commutes with $X$;
moreover, if $X_1$ denotes the compression of $X$ to $P_1$, then $X_1 \in Mhcal{U}\big(\diag(s_1(C),\ldots,s_{n_1}(C))\big)$.
Consequently, if we consider $X'_1$ to be the compression of $X$ to $P_1^{\perp}$, then $X'_1$ lies in $Mhcal{O}\big(\diag(s_{n_1 + 1}(C),s_{n_1 + 2}(C),\ldots)\big)$.
Therefore, we may again apply \Cref{lem:extreme-points-k-numerical-range} to conclude that $X'_1$ (and hence also $X$) commutes with $P_2$.
Moreover, if $X_2$ denotes the compression of $X$ to $P_2$, then $X_2 \in Mhcal{U}\big(\diag(s_{n_1 + 1}(C),\ldots,s_{n_2}(C))\big)$.
Continuing this procedure, by induction on $l$ we obtain for each $1 \le l < N$ that $P_l$ commutes with $X$, and that the compression $X_l$ of $X$ to $P_l$ is a matrix of size $n_l - n_{l-1} = \trace P_l$ with $X_l \in Mhcal{U} \big(\diag(s_{n_{l-1} + 1}(C),\ldots,s_{n_l}(C))\big)$.
If $\rank C \le \rank A_+ = n_{N-1}$, then the proof is already complete.
If, on the other hand, $\rank C > \rank A_+$, then $N < \infty$ and we must consider $X'_{N-1}$, which is the compression of $X$ to $P_0 + P_N$.
From the above, we know that $X'_{N-1} \in Mhcal{O}\big(\diag(s_{n_{N-1} + 1}(C),s_{n_{N-1} + 2}(C),\ldots)\big)$.
However, $P_0 = \chi_{(-\infty,0)}(A)$ and $P_N := \chi_{\set{0}}(A)$.
By \Cref{thm:c-numerical-range-selfadjoint-formula}, since $X$ is a maximizer of $\ocnr(A)$, we must have that $P_0 X = X P_0 = 0$, and therefore the compression of $X$ to $P_N$ lies in $Mhcal{O}\big(\diag(s_{n_{N-1} + 1}(C),s_{n_{N-1} + 2}(C),\ldots,s_{n_N}(C))\big)$.
\end{proof}
Using \Cref{prop:block-diagonal-decomposition}, it is possible to give a condition under which $\sup \ocnr(A)$, or even $\sup \cnr(A)$, is attained when $A \in \mathcal{K}^+$ and $C \in \mathcal{L}_1^+$.
\begin{remark}
\label{rem:equal-kernel-positive-compact}
Suppose that $A,C$ are positive compact operators with infinite rank and that $C$ is trace-class.
Every $X \in \mathcal{O}(C)$ for which $\trace(XA) = \sup \ocnr(A)$ satisfies $\ker X = \ker A$.
Indeed, by \Cref{prop:block-diagonal-decomposition}, $X$ commutes with each of the projections $P_l$, $l \in \mathbb{N}$, and the compression $X_l$ of $X$ to $P_l$ lies in $\mathcal{U}\big( \diag(s_{n_{l-1}+1}(C),\ldots,s_{n_l}(C)) \big)$.
Consequently, $X$ is strictly positive on $(\sum_{l=1}^{\infty} P_l) \mathcal{H}$ and must be zero on the complement.
Notice that $P_0 := I - \sum_{l=1}^{\infty} P_l$ is the projection onto $\ker A$.
Thus $\ker A = \ker X$.
Therefore, there is an $X \in \mathcal{U}(C)$ for which $\trace(XA) = \sup \cnr(A) = \sup \ocnr(A)$ if and only if\footnote{If $\dim \ker C = \dim \ker A$, then $X := \mathbf{0}_{\ker A} \oplus \diag(s(C)) \in \mathcal{U}(C)$, and $A = \mathbf{0}_{\ker A} \oplus \diag(s(A))$ relative to the proper basis, so $\trace(XA) = \sup \ocnr(A) = \sup \cnr(A)$ by \Cref{prop:c-numerical-range-maximum}.} $\dim \ker C = \dim \ker A$.
\end{remark}
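To make the kernel condition above concrete, here is a small worked instance (the specific operators are our illustrative choices, not drawn from the surrounding results):

```latex
\begin{example}
Let $C = \diag(2^{-1},2^{-2},\ldots)$ and $A = \diag(1,\tfrac{1}{2},\tfrac{1}{3},\ldots)$,
both positive compact operators of infinite rank with $C$ trace-class and
$\dim \ker C = 0 = \dim \ker A$.
Then $X := C \in \mathcal{U}(C)$ attains
$\trace(XA) = \sum_{n=1}^{\infty} 2^{-n} n^{-1} = \log 2 = \sup \cnr(A) = \sup \ocnr(A)$.
If instead $A' := \mathbf{0}_1 \oplus A$, so that $\dim \ker A' = 1 \neq 0 = \dim \ker C$,
then $X' := \mathbf{0}_1 \oplus \diag(s(C)) \in \mathcal{O}(C)$ still attains
$\sup \ocnr(A')$, but by the remark no element of $\mathcal{U}(C)$ does.
\end{example}
```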
We conclude this section by using \Cref{thm:c-numerical-range-selfadjoint-formula} and \Cref{prop:boundary-approximation,prop:block-diagonal-decomposition} to establish in \Cref{thm:ocnr-closed-rank-condition} a sufficient condition for $\ocnr(A)$ to be closed.
\begin{theorem}
\label{thm:ocnr-closed-rank-condition}
Let $C$ be a positive trace-class operator and let $A \in B(Mhcal{H})$.
Then $\ocnr(A)$ is closed if for every $\theta$, $\rank (\Re(e^{i\theta}A)-m_{\theta}I)_+ \ge \rank C$, where $m_{\theta} := \max \spec_{\mathrm{ess}}(\Re(e^{i\theta} A))$.
\end{theorem}
\begin{proof}
Suppose that the rank condition holds for every angle $\theta$.
Let $x \in \partial\ocnr(A)$.
There are two possibilities.
\begin{case}{There is a supporting line for $\closure{\ocnr(A)}$ which intersects $\closure{\ocnr(A)}$ only at $x$.}
After applying a suitable rotation, we may assume that $\Re x = \sup \ocnr(\Re(A))$ and that the supporting line is vertical, so that $x$ is the unique point of $\closure{\ocnr(A)}$ with maximal real part.
Since $\rank (\Re A - m_0 I)_+ \ge \rank C$, \Cref{thm:c-numerical-range-selfadjoint-formula} guarantees that $\sup \ocnr(\Re(A)) = \sup \Re(\ocnr(A))$ is attained, and by uniqueness this must be achieved by $x \in \ocnr(A)$.
\end{case}
\begin{case}{The only supporting line for $\closure{\ocnr(A)}$ containing $x$ intersects $\closure{\ocnr(A)}$ in a line segment $[x_-,x_+]$.}
After applying a suitable rotation, we may assume that $\Re x = \sup \ocnr(\Re(A))$, so the line segment $[x_-,x_+]$ is vertical and has maximal real part.
Moreover, by translating we may further assume $m_0 = 0$.
In order to prove that $x \in \ocnr(A)$, it suffices to show that $x_{\pm} \in \ocnr(A)$ since this set is convex by \Cref{cor:c-numerical-range-convex}.
We consider only $x_-$, as the analysis for $x_+$ is identical.
Then there are two possibilities.
The first is that $x_-$ itself has a (different) supporting line for $\closure{\ocnr(A)}$ which intersects $\closure{\ocnr(A)}$ only at $x_-$, in which case $x_- \in \ocnr(A)$ by Case 1;
this happens precisely when $x_-$ is a corner of $\ocnr(A)$.
The alternative is that there are no other supporting lines passing through $x_-$.
This implies that for any $\theta > 0$, the supporting line of $\closure{\ocnr(A)}$ with slope $\cot \theta$ intersects the boundary at a point distinct from $x_-$.
Now, we claim that there are arbitrarily small $\theta > 0$ such that this line intersects $\closure{\ocnr(A)}$ at a \emph{unique} point $x_{\theta}$, which must be in $\ocnr(A)$ by Case 1.
Indeed, if not, for each sufficiently small angle $\theta > 0$, $\closure{\ocnr(A)}$ would contain a nondegenerate line segment with slope $\cot \theta$, but this would imply that $\partial\ocnr(A)$ has infinite length (since the sum of uncountably many positive numbers is necessarily infinite), which would violate the fact that $\partial\ocnr(A)$ is rectifiable --- a well-known consequence of being a bounded convex curve.
Moreover, it is clear that $x_{\theta} \to x_-$ as $\theta \to 0^+$ along those positive $\theta$ for which $x_{\theta}$ is defined.
Thus the situation satisfies the hypotheses of \Cref{prop:boundary-approximation}, and so we are guaranteed that $\ocnr(A)$ contains points on the line segment $[x_-,x_+]$ arbitrarily close to $x_-$.
So consider a sequence of points $(x_j)$ in $\ocnr(A) \cap [x_-,x_+]$ converging to $x_-$.
Then there are $X_j \in Mhcal{O}(C)$ with $\trace(X_j A) = x_j$, and hence $\trace(X_j \Re A) = \Re x_j = \sup \ocnr(\Re A)$.
We may therefore apply \Cref{prop:block-diagonal-decomposition} to obtain finite projections $\set{P_l}_{l=1}^N$ such that the compression of $X_j$ to $P_l$ lies in $\mathcal{U}\big(\diag(s_{n_{l-1}+1}(C),\ldots,s_{n_l}(C))\big)$ and moreover $X_j$ commutes with each $P_l$.
If we set $P_0 := I - \sum_{l=1}^N P_l$, then since $\rank C \le \rank (\Re A)_+$, we see that $X_j P_0 = 0$.
So $X_j$ is block diagonal with respect to the blocks $P_l$, and $P_0 X_j = 0$.
Note, the projections for these blocks are \emph{independent} of $j$.
Now, by the Schur--Horn theorem (\cite{Sch-1923-SBMG,Hor-1954-AJM}, but see \cite[Theorem~1.1]{KW-2010-JFA} for a concise, self-contained statement), there are block unitaries $U^{(j)} = \bigoplus_{l \ge 0} U^{(j)}_l$ for which $X_j = U^{(j)}\big(\mathbf{0}_{P_0 \mathcal{H}} \oplus \diag(s(C))\big) U^{(j)*}$.
Moreover, we can select $U^{(j)}_0 = I_{P_0 \mathcal{H}}$.
It is important to note that for each $l \ge 1$, $U^{(j)}_l$ is a finite matrix of size $n_l - n_{l-1}$.
We now apply the standard recursive subsequence technique to obtain a subsequence of the unitaries $U^{(j)}$ with desirable properties.
More specifically, by compactness of the unitary group in finite dimensions, there is a subsequence $U^{(j_{1,n})}$ such that $U^{(j_{1,n})}_1$ converges to some unitary matrix $U_1$ (of size $n_1 - n_0$).
Then for $l \ge 1$ we inductively construct a subsequence $U^{(j_{l+1,n})}$ of $U^{(j_{l,n})}$ for which $U^{(j_{l+1,n})}_{l+1}$ converges to some unitary matrix $U_{l+1}$.
Then consider the subsequence of the original sequence $U^{(j)}$ given by $V_n := U^{(j_{n,n})}$.
Define $U := \bigoplus_{l \ge 0} U_l$ and $X := U\big(\mathbf{0}_{P_0 \mathcal{H}} \oplus \diag(s(C))\big) U^{*} \in \mathcal{O}(C)$.
Note that $V_n$ converges entrywise to $U$, but not necessarily in any operator topology.
We claim that $X_{j_{n,n}} = V_n \big(\mathbf{0}_{P_0 \mathcal{H}} \oplus \diag(s(C))\big) V_n^{*}$ converges in trace norm to $X$.
Indeed, let $\varepsilon > 0$. Since $C$ is trace-class, there is some $M$ such that $\sum_{n=n_M+1}^{\infty} s_n(C) < \frac{\varepsilon}{4}$.
Set $P := \bigoplus_{l=1}^M P_l$, which is a finite projection that commutes with $U$, $V_n$, and $C' := \mathbf{0}_{P_0 \mathcal{H}} \oplus \diag(s(C))$ because each $P_l$ does.
Moreover, because $V_n$ converges entrywise to $U$ and $P$ is a finite projection, $V_n P$ converges to $UP$ in trace norm (or any other norm topology since all norms on a finite dimensional space are equivalent).
Therefore there is some $K$ such that for all $k \ge K$, $\norm{V_k P - U P}_1 < \frac{\varepsilon}{4 \norm{C}}$.
Thus we obtain
\begin{align*}
\norm{(V_k C' V_k^{*} - U C' U^{*})P}_1 &\le \norm{V_k PC' V_k^{*} - U PC' V_k^{*}}_1 + \norm{U C'P V_k^{*} - U C'P U^{*}}_1 \\
&\le \norm{V_k P - U P}_1 \norm{C'V_k^{*}} + \norm{UC'} \norm{PV_k^{*} - PU^{*}}_1 \\
&< \frac{\varepsilon}{4\norm{C}} \norm{C} + \norm{C} \frac{\varepsilon}{4\norm{C}} = \frac{\varepsilon}{2}.
\end{align*}
In addition,
\begin{align*}
\norm{(V_k C' V_k^{*} - U C' U^{*})P^{\perp}}_1 &\le \norm{V_k(C' P^{\perp})V_k^{*}}_1 + \norm{U(C' P^{\perp})U^{*}}_1 \\
&\le\norm{V_k}\norm{C' P^{\perp}}_1 \norm{V_k^{*}} + \norm{U}\norm{C' P^{\perp}}_1 \norm{U^{*}} \\
&< \frac{\varepsilon}{4} + \frac{\varepsilon}{4} = \frac{\varepsilon}{2}.
\end{align*}
Therefore, combining the above displays yields
\begin{align*}
\norm{X_{j_{k,k}} - X}_1 &= \norm{V_k C' V_k^{*} - U C' U^{*}}_1 \\
&= \norm{(V_k C' V_k^{*} - U C' U^{*})P}_1 + \norm{(V_k C' V_k^{*} - U C' U^{*})P^{\perp}}_1 \\
&< \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.
\end{align*}
Thus $x_{j_{n,n}} = \trace(X_{j_{n,n}} A) \to \trace(XA)$.
Since $x_{j_{n,n}} \to x_-$, we find $x_- = \trace(XA) \in \ocnr(A)$.
Finally, a symmetric argument applies to $x_+$, and hence $x_{\pm} \in \ocnr(A)$.
Because $\ocnr(A)$ is convex by \Cref{cor:c-numerical-range-convex}, $x \in [x_-,x_+] \subseteq \ocnr(A)$, thereby completing the proof. \qedhere
\end{case}
\end{proof}
The following example shows that although the hypothesis of \Cref{thm:ocnr-closed-rank-condition} is not a necessary condition for $\ocnr(A)$ to be closed, it is in some sense sharp.
\begin{example}
\label{ex:single-angle-nonclosed}
This example shows that if the rank condition in \Cref{thm:ocnr-closed-rank-condition} fails for even a single angle $\theta$, it is possible for $\ocnr(A)$ not to be closed, even if $\ocnr(A)$ contains at least one boundary point for every angle $\theta$.
In fact, this example even uses the usual numerical range $W(A)$ and a diagonalizable operator $A$.
Consider the diagonalizable operator $A$ whose eigenvalues are $1$ and $\pm i + e^{\pm \frac{i\pi}{n}}$ for all $n \in \mathbb{N}$, each with multiplicity one.
Then the line segment $[1-i,1+i]$ lies on the boundary $\partial W(A)$, but $W(A) \cap [1-i,1+i] = \set{1}$, although nonempty, contains only a single point.
Moreover, $\rank \Re(A - m_0 I)_+ = 0$, but $\rank \Re(A - m_{\theta} I)_+ = \infty$ for any $0 < \theta < 2\pi$.
See \Cref{fig:example-rank} for a diagram of this situation.
The part of the proof which breaks down because $\rank \Re(A - m_0 I)_+ = 0$ is in the approximation result \Cref{prop:boundary-approximation}.
In particular, there are not enough spectral projections $P_j$ corresponding to nonzero eigenvalues of $\Re(A - m_0 I)_+$; indeed, there are none at all.
Of course, if we modify $A$ to have the eigenvalues $1 \pm i$ as well, then $W(A)$ becomes closed even though we still have $\rank \Re(A - m_0 I)_+ = 0$.
Therefore the sufficient condition given in \Cref{thm:ocnr-closed-rank-condition} is not necessary.
There are even simpler examples: the orbit-closed $C$-numerical range of a scalar is closed (a singleton), but $\rank \Re(A - m_\theta I)_+ = 0$ for all $\theta$.
\end{example}
\begin{figure}
\caption{The numerical range and eigenvalues of a diagonalizable operator for which $\rank \Re(A-m_0 I)_+ = 0$ and $\rank \Re(A-m_{\theta} I)_+ = \infty$ for every $0 < \theta < 2\pi$.}
\label{fig:example-rank}
\end{figure}
\section{Compact normal operators and the $\mathcal{O}(C)$-spectrum}
\label{sec:oc-spectrum}
It is a standard result in linear algebra that for a normal matrix $A \in M_n(\mathbb{C})$ the numerical range satisfies $W(A) = \conv \spec(A)$, which is an immediate consequence of the elementary (finite or infinite dimensional) fact that $W(A_1 \oplus A_2) = \conv \big( W(A_1) \cup W(A_2) \big)$.
Of course, there are many ways to extend or generalize this result.
One can extend it to the infinite dimensional setting in two ways.
For normal $A \in B(Mhcal{H})$, there is the folklore result $\closure{W(A)} = \conv \spec(A)$.
However, restricting to normal $A \in Mhcal{K}$, de Barra, Giles and Sims proved $W(A) = \conv \spec_{\mathrm{pt}}(A)$ \cite[Theorem~2]{dBGS-1972-JLMSIS}.
The other option is to generalize the matrix result to other numerical ranges, such as the $k$-numerical range or the $C$-numerical range.
In this case, one needs a substitute for the spectrum $\spec(A)$ which is somehow relativized to the matrix $C$.
For $C \in M_n(\mathbb{C})$ normal, Marcus \cite{Mar-1979-ANYAS} introduced a substitute, now referred to as the \term{$C$-spectrum} and denoted\footnote{
This notation is common in the later literature, but Marcus actually used the notation $\cspec(A)$ to refer to the \emph{convex hull} of the $C$-spectrum, and he called this the \term{$C$-eigenpolygon}.
}
$\cspec(A)$, consisting of all sums $\sum_{k=1}^{n} \lambda_k(C) \lambda_{\pi(k)}(A)$ of products of the eigenvalues of $C$ and $A$, where $\pi$ ranges over the permutations of $\set{1,\ldots,n}$.
There he proved that if $A \in M_n(Mhbb{C})$ is also normal, then $\cnr(A) = \conv \cspec(A)$.
Dirr and vom Ende extended the notion of $C$-spectrum to the infinite dimensional setting with $C \in \mathcal{L}_1$ and $A \in \mathcal{K}$ \cite[Definition~3.2]{DvE-2020-LaMA}, where they also managed to prove \cite[Theorem~3.4, Corollary~3.1]{DvE-2020-LaMA} that if $C,A$ are both normal, then
\begin{equation}
\label{eq:dirr-vom-ende-c-spectrum-results}
\cspec(A) \subseteq \cnr(A) \subseteq \conv \closure{\cspec(A)} \quad\text{and thus, if $C=C^{*}$,}\quad \closure{\cnr(A)} = \conv \closure{\cspec(A)}.
\end{equation}
Moreover, if $C$ is normal and $A$ is upper triangular, or vice versa, $\cspec(A) \subseteq \cnr(A)$ \cite[Theorem~3.5]{DvE-2020-LaMA}.
While Dirr and vom Ende's results are impressive, in light of the hypothesis $A \in \mathcal{K}$ and the result of de Barra, Giles and Sims that $W(A) = \conv \spec_{\mathrm{pt}}(A)$, one might hope to remove the closures from $\closure{\cnr(A)} = \conv \closure{\cspec(A)}$ in \eqref{eq:dirr-vom-ende-c-spectrum-results}.
In this section, for $C \in \mathcal{L}_1^+$, we do precisely that for the orbit-closed $C$-numerical range and the $\mathcal{O}(C)$-spectrum (see \Cref{def:c-spectrum}), which is a (not necessarily closed) slight modification of the $C$-spectrum defined by Dirr and vom Ende (see \Cref{rem:relation-cspec-to-ocspec}).
In particular, \Cref{thm:normal-convex-c-spectrum} says that if $C \in \mathcal{L}_1^+$ and $A \in \mathcal{K}$ is normal, then $\ocnr(A) = \conv \ocspec(A)$.
Along the way, with \Cref{thm:direct-sum-characterization} we characterize the behavior of the orbit-closed $C$-numerical range under direct sums, thereby generalizing the finite rank result \cite[Result~(4.4)]{LP-1995-FDaaMaE}.
\begin{lemma}
\label{lem:selfadjoint-direct-sum-characterization}
Let $C$ be a positive trace-class operator, $P$ an arbitrary projection, and suppose $A = A_{P^{\vphantom{\perp}}} \oplus A_{P^{\perp}} \in B(\mathcal{H})$ is selfadjoint, where $A_{P^{\vphantom{\perp}}},A_{P^{\perp}}$ act on $P\mathcal{H}, P^{\perp} \mathcal{H}$, respectively.
Then
\begin{equation*}
\sup \ocnr(A) = \sup \set{ \trace(XA) \mid X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C) },
\end{equation*}
and if either supremum is attained, then they both are.
\end{lemma}
\begin{proof}
The inequality $\sup \ocnr(A) \ge \sup \set{ \trace(XA) \mid X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C) }$ is trivial since the latter set is a subset of the former.
We split the remainder of the proof into cases.
By translating, we may assume $\max \spec_{\mathrm{ess}}(A) = 0$.
\begin{case}{The supremum $\sup \ocnr(A)$ is attained.}
We must produce an $X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C)$ with $\trace(XA) = \sup \ocnr(A)$.
By \Cref{thm:c-numerical-range-selfadjoint-formula} we are guaranteed that $\rank C \le \trace \chi_{[0,\infty)}(A)$.
Let $\set{ \lambda_l }_{l=1}^N$ denote the distinct nonnegative eigenvalues of $A$ listed in decreasing order and including zero if and only if this set is finite.
Then let $P_l$ be the associated spectral projections.
Set $P_0 := I - \sum_{l=1}^N P_l$.
Since $P$ commutes with $A$, it commutes with each $P_l$, so we may write $P_l = P_{l} P + P_l P^{\perp}$ as a sum of orthogonal projections.
Let $n_0 := 0$ and for $1 \le l \le N$ set $n_l := \sum_{j=1}^l \trace P_j$.
Note that if $N < \infty$, then $P_N = \chi_{\set{0}}(A)$ which may be either a finite or infinite projection, so $n_N$ may be either finite or infinite.
Then for each $1 \le l < N$, we may select the finite matrix $\diag((s_n(C))_{n=n_{l-1}+1}^{n_l})$ acting on $P_l \mathcal{H}$ and respecting the decomposition $P_l \mathcal{H} = P_l P \mathcal{H} \oplus P_l P^{\perp} \mathcal{H}$, so that $\diag((s_n(C))_{n=n_{l-1}+1}^{n_l}) = X'_l \oplus X''_l$, where $X'_l$ is a matrix of size $\trace P_l P$ and $X''_l$ is a matrix of size $\trace P_l P^{\perp}$.
The situation for $P_N$ is similar, except that the matrices involved might be infinite.
That is, we can consider the operator $\diag((s_n(C))_{n=n_{N-1}+1}^{n_N}) = X'_N \oplus X''_N$ acting on the (possibly infinite dimensional) space $P_N \mathcal{H} = P_N P \mathcal{H} \oplus P_N P^{\perp} \mathcal{H}$.
Set $X'_0 = 0 = X''_0$ acting on $P_0 P \mathcal{H}$ and $P_0 P^{\perp} \mathcal{H}$, respectively.
Setting $X_{P^{\vphantom{\perp}}} := \bigoplus_{l=0}^N X'_l$ and $X_{P^{\perp}} := \bigoplus_{l=0}^N X''_l$, we claim that $X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}}$ is the desired operator.
Indeed, notice that $\sum_{l=1}^N \trace P_l$ is either infinite or equal to $\trace \chi_{[0,\infty)}(A)$, which is in either case greater than or equal to $\rank C$.
Therefore, the operators $X'_l,X''_l$ have exhausted all the nonzero values of the sequence $s(C)$ (i.e., $s_j(C) = 0$ if $j > n_N$) and hence $X \in \mathcal{O}(C)$.
Moreover, since $s_n(A_+) = \lambda_l$ whenever $n_{l-1} < n \le n_l$, we find
\begin{align*}
\trace (XA) = \sum_{l=0}^N \trace (X P_l A) &= \sum_{l=0}^N \trace ((X'_l \oplus X''_l) \lambda_l) \\
&= \sum_{l=1}^N \sum_{n=n_{l-1}+1}^{n_l} s_n(C) \lambda_l \\
&= \sum_{l=1}^N \sum_{n=n_{l-1}+1}^{n_l} s_n(C) s_n(A_+) \\
&= \sum_{n=1}^{n_N} s_n(C) s_n(A_+).
\end{align*}
Now $n_N \ge \rank A_+$, and so we have
\begin{equation*}
\trace (XA) = \sum_{n=1}^{n_N} s_n(C) s_n(A_+) = \sum_{n=1}^{\infty} s_n(C) s_n(A_+) = \sup \ocnr(A),
\end{equation*}
where the last equality is due to \Cref{thm:c-numerical-range-selfadjoint-formula}.
\end{case}
\begin{case}{The supremum $\sup \ocnr(A)$ is not attained.}
In this case, by \Cref{thm:c-numerical-range-selfadjoint-formula} $M := \trace \chi_{[0,\infty)}(A) < \rank C$, and so this projection is finite.
Since $0 \in \spec_{\mathrm{ess}}(A)$, the projection $P_{\varepsilon} := \chi_{(-\varepsilon,0)}(A)$ must be infinite for any $\varepsilon > 0$, and since $P_{\varepsilon}$ is a spectral projection for $A$, it commutes with $P$ because $A$ does.
Therefore $P_{\varepsilon} = P_{\varepsilon} P + P_{\varepsilon} P^{\perp}$ is a sum of projections and at least one of these projections must be infinite.
Let $C' := \diag(s_1(C),\ldots,s_M(C),0,0,\ldots)$.
Since $\rank C' = M$, by Case 1 we know that there is some $X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C')$ for which
\begin{equation*}
\trace (X A) = \sup \ocnr[C'](A) = \sum_{n=1}^M s_n(C) s_n(A_+) = \sum_{n=1}^{\infty} s_n(C) s_n(A_+) = \sup \ocnr(A),
\end{equation*}
where the third equality follows because $s_n(A_+) = 0$ for $n > M$.
Moreover, by \Cref{thm:c-numerical-range-selfadjoint-formula} $X P_{\varepsilon} = P_{\varepsilon} X = 0$.
Now let $Y$ be the operator given by $\diag(s_{M+1}(C),s_{M+2}(C),\ldots)$ on the subspace $P_{\varepsilon} P\mathcal{H}$ (or $P_{\varepsilon} P^{\perp}\mathcal{H}$, whichever is infinite) and zero on the orthogonal complement in $\mathcal{H}$.
Then $X' = X + Y \in \mathcal{O}(C)$ commutes with $P$ (so it has a direct sum decomposition), and
\begin{align*}
\trace(X' A) = \trace(XA) + \trace (YA) &= \trace(XA) + \trace (Y P_{\varepsilon} A) \\
&\ge \sup \ocnr(A) - \norm{P_{\varepsilon}A} \norm{Y}_1 \\
&\ge \sup \ocnr(A) - \varepsilon \norm{C}_1.
\end{align*}
Since $\varepsilon > 0$ is arbitrary, this proves
\begin{equation*}
\sup \ocnr(A) \le \sup \set{ \trace(XA) \mid X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C) },
\end{equation*}
and therefore we must have equality.
Finally, because $\set{ \trace(XA) \mid X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C) } \subseteq \ocnr(A)$ and $\sup \ocnr(A)$ is not attained, the equality of the suprema guarantees that $\sup \set{ \trace(XA) \mid X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C) }$ is not attained either. \qedhere
\end{case}
\end{proof}
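As a sanity check on \Cref{lem:selfadjoint-direct-sum-characterization}, consider the rank-one specialization below (our illustration; it uses the identification of $\ocnr(A)$ with $W(A)$ when $C$ is a rank-one projection), which recovers a familiar fact:

```latex
\begin{example}
If $C$ is a rank-one projection, then $\ocnr(A) = W(A)$, and the operators
$X = X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C)$ are precisely
the rank-one projections supported in either $P\mathcal{H}$ or $P^{\perp}\mathcal{H}$.
For selfadjoint $A = A_{P^{\vphantom{\perp}}} \oplus A_{P^{\perp}}$,
\Cref{lem:selfadjoint-direct-sum-characterization} therefore reduces to
\begin{equation*}
\sup W(A_{P^{\vphantom{\perp}}} \oplus A_{P^{\perp}})
= \max \set{ \sup W(A_{P^{\vphantom{\perp}}}),\ \sup W(A_{P^{\perp}}) }.
\end{equation*}
\end{example}
```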
Of course, by replacing $A$ with $-A$ in \Cref{lem:selfadjoint-direct-sum-characterization} one immediately obtains the exact same result with the suprema replaced by infima.
Moreover, the finite dimensional counterpart of \Cref{lem:selfadjoint-direct-sum-characterization} is a known result (see \cite[Result~(4.2)]{Li-1994-LMA}), and in that case the suprema are always attained by compactness.
We will make use of both of these facts in order to establish the following theorem.
\begin{theorem}
\label{thm:direct-sum-characterization}
Let $C$ be a positive trace-class operator, $P$ an arbitrary projection, and suppose $A = A_{P^{\vphantom{\perp}}} \oplus A_{P^{\perp}} \in B(\mathcal{H})$, where $A_{P^{\vphantom{\perp}}},A_{P^{\perp}}$ act on $P\mathcal{H}, P^{\perp} \mathcal{H}$, respectively.
Then
\begin{equation*}
\ocnr(A_{P^{\vphantom{\perp}}} \oplus A_{P^{\perp}}) = \conv \quad \bigcup_{\mathclap{\quad C_{P^{\vphantom{\perp}}} \oplus C_{P^{\perp}} \in \mathcal{O}(C)}} \ \big( \ocnr[C_{P^{\vphantom{\perp}}}](A_{P^{\vphantom{\perp}}}) + \ocnr[C_{P^{\perp}}](A_{P^{\perp}}) \big).
\end{equation*}
\end{theorem}
\begin{proof}
The case when $C$ has finite rank appears in \cite[Result~(4.4)]{LP-1995-FDaaMaE}, so we will prove the result when $C$ has infinite rank.
One inclusion is immediate.
Indeed, given $X_{P^{\vphantom{\perp}}} \in \mathcal{O}(C_{P^{\vphantom{\perp}}}), X_{P^{\perp}} \in \mathcal{O}(C_{P^{\perp}})$ with $C_{P^{\vphantom{\perp}}} \oplus C_{P^{\perp}} \in \mathcal{O}(C)$, it is clear that $X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C)$.
Therefore,
\begin{equation*}
\trace(X_{P^{\vphantom{\perp}}} A_{P^{\vphantom{\perp}}}) + \trace(X_{P^{\perp}} A_{P^{\perp}}) = \trace \big( (X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}})(A_{P^{\vphantom{\perp}}} \oplus A_{P^{\perp}}) \big) \in \ocnr(A).
\end{equation*}
Since $C_{P^{\vphantom{\perp}}} \oplus C_{P^{\perp}}\in \mathcal{O}(C)$ was arbitrary, as were $X_{P^{\vphantom{\perp}}} \in \mathcal{O}(C_{P^{\vphantom{\perp}}})$ and $X_{P^{\perp}} \in \mathcal{O}(C_{P^{\perp}})$, we obtain
\begin{equation*}
\bigcup_{\mathclap{C_{P^{\vphantom{\perp}}} \oplus C_{P^{\perp}} \in \mathcal{O}(C)}} \ \big( \ocnr[C_{P^{\vphantom{\perp}}}](A_{P^{\vphantom{\perp}}}) + \ocnr[C_{P^{\perp}}](A_{P^{\perp}}) \big) \subseteq \ocnr(A).
\end{equation*}
By \Cref{cor:c-numerical-range-convex}, $\ocnr(A)$ is convex and so contains the convex hull of this union.
We now prove the other inclusion.
For convenience, we replace $C$ by $\diag(s(C))$ since $\mathcal{O}(C) = \mathcal{O}(\diag(s(C)))$.
For $m \in \mathbb{N}$ set $C_m := \diag \big( s_1(C),\ldots,s_m(C),0,0,\ldots \big)$ and notice that $C_m \xrightarrow{\norm{\bigcdot}_1} C$.
Therefore $\ocnr[C_m](A) \to \ocnr(A)$ in the Hausdorff pseudometric by \Cref{thm:continuity}.
Given any $X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C)$, let $Q_m$ denote the projection onto the span of the eigenvectors associated to $s_1(C),\ldots,s_m(C)$.
Note that $Q_m$ commutes with $P$ and therefore we can naturally obtain $Y_{P^{\vphantom{\perp}}} \oplus Y_{P^{\perp}} \in \mathcal{O}(C_m)$ via $Y_{P^{\vphantom{\perp}}} \oplus Y_{P^{\perp}} = (X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}}) Q_m$.
Doing so ensures that $\norm{X_i - Y_i}_1 \le \norm{C_m - C}_1$ for $i \in \set{P, P^{\perp}}$.
Conversely, given $Y_{P^{\vphantom{\perp}}} \oplus Y_{P^{\perp}} \in \mathcal{O}(C_m)$, both $Y_{P^{\vphantom{\perp}}},Y_{P^{\perp}}$ are positive and of finite rank, and at least one of them acts on an infinite dimensional space and therefore has an infinite dimensional reducing subspace on which it vanishes.
Then adding $\diag(s_{m+1}(C),s_{m+2}(C),\ldots)$ acting on that subspace yields an operator $X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C)$ which again satisfies $\norm{X_i - Y_i}_1 \le \norm{C_m - C}_1$ for $i \in \set{P, P^{\perp}}$.
Hence, by \Cref{thm:continuity} the Hausdorff distance between these orbit-closed $C$-numerical ranges satisfies
\begin{equation*}
d_H \big( \ocnr[X_i](A_i), \ocnr[Y_i](A_i) \big) \le \norm{C_m - C}_1 \norm{A_i} \le \norm{C_m - C}_1 \norm{A} \qquad (i \in \set{P, P^{\perp}}).
\end{equation*}
Therefore, the corresponding unions converge in the Hausdorff pseudometric:
\begin{equation*}
\quad\bigcup_{\mathclap{\quad Y_{P^{\vphantom{\perp}}} \oplus Y_{P^{\perp}} \in \mathcal{O}(C_m)}} \ \big( \ocnr[Y_{P^{\vphantom{\perp}}}](A_{P^{\vphantom{\perp}}}) + \ocnr[Y_{P^{\perp}}](A_{P^{\perp}}) \big) \xrightarrow{d_H} \quad \bigcup_{\mathclap{\quad X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C)}} \ \big( \ocnr[X_{P^{\vphantom{\perp}}}](A_{P^{\vphantom{\perp}}}) + \ocnr[X_{P^{\perp}}](A_{P^{\perp}}) \big).
\end{equation*}
Additionally, their convex hulls converge in the Hausdorff pseudometric as well.
Since the theorem is valid when the trace-class operator is finite rank \cite[Result (4.4)]{LP-1995-FDaaMaE}, we have the convergence
\begin{align*}
&\ocnr[C_m](A)& &\qquad=& &\conv \quad \bigcup_{\mathclap{\quad Y_{P^{\vphantom{\perp}}} \oplus Y_{P^{\perp}} \in \mathcal{O}(C_m)}} \ \big( \ocnr[Y_{P^{\vphantom{\perp}}}](A_{P^{\vphantom{\perp}}}) + \ocnr[Y_{P^{\perp}}](A_{P^{\perp}}) \big)& \\
&\qquad \bigg\downarrow d_H & & & &\qquad\qquad\qquad\qquad\qquad\bigg\downarrow d_H& \\
&\ocnr(A)& &\text{same closure}& &\conv \quad \bigcup_{\mathclap{\quad X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C)}} \ \big( \ocnr[X_{P^{\vphantom{\perp}}}](A_{P^{\vphantom{\perp}}}) + \ocnr[X_{P^{\perp}}](A_{P^{\perp}}) \big).&
\end{align*}
Since $\ocnr(A)$ and $\conv \big( \bigcup_{X_{P^{\vphantom{\perp}}} \oplus X_{P^{\perp}} \in \mathcal{O}(C)} \big( \ocnr[X_{P^{\vphantom{\perp}}}](A_{P^{\vphantom{\perp}}}) + \ocnr[X_{P^{\perp}}](A_{P^{\perp}}) \big) \big)$ are convex and have the same closure, they must have the same interior.
Hence it suffices to prove any boundary point of $\ocnr(A)$ lies in the above convex hull.
Now suppose $x = \trace(XA) \in \ocnr(A)$ lies on the boundary.
By the usual rotation and translation technique, we may assume $x$ has maximal real part and $\max \spec_{\mathrm{ess}}(\Re(A)) = 0$.
Since $\trace(XA) = \sup \Re \ocnr(A) = \sup \ocnr(\Re A)$, we may apply \Cref{prop:block-diagonal-decomposition}.
Let $\set{\lambda_l}_{l=1}^N$ denote the distinct nonnegative eigenvalues of $\Re A$ listed in decreasing order, and including zero if and only if $N < \infty$.
Let $\set{P_l}_{l=1}^N$ be the associated spectral projections and set $P_0 := I - \sum_{l=1}^N P_l$.
Let $n_0 := 0$ and for $1 \le l \le N$, $n_l := \sum_{j=1}^l \trace P_j$.
Then \Cref{prop:block-diagonal-decomposition} guarantees that $X$ commutes with each $P_l$, so that $X = \bigoplus_{l=0}^N X_l$ where $X_l$ acts on $P_l \mathcal{H}$.
Moreover, $X_0 = 0$ and for $1 \le l < N$, $X_l \in \mathcal{U}(C_l)$, where $ C_l := \diag(s_{n_{l-1}+1}(C),\ldots,s_{n_l}(C))$.
If $N < \infty$, then $X_N \in \mathcal{O}(C_N)$ where $C_N := \diag((s_n(C))_{n=n_{N-1}+1}^{n_N})$.
Note that \emph{any} operator $Y$ with the properties of $X$ listed in the previous paragraph (block diagonal with respect to the $P_l$, with blocks in the associated orbits) satisfies $\Re(\trace(YA)) = \sup \Re \ocnr(A)$.
We will use this property shortly.
Let $A_l$ denote the compression of $\Im A$ to $P_l \mathcal{H}$.
In general, $\Im A$ will not be block diagonal with respect to these blocks because $A$ is not necessarily normal so $\Im A$ may not commute with $P_l$.
However, $P$ commutes with $A$, and therefore with $\Re A$ and $\Im A$, which implies that it also commutes with each spectral projection $P_l$.
Therefore, we may write each $P_l = P_l P + P_l P^{\perp}$ as a sum of projections, and also $A_l = A'_l \oplus A''_l$, where $A'_l, A''_l$ act on $P_l P \mathcal{H}$ and $P_l P^{\perp} \mathcal{H}$, respectively.
Now,
\begin{equation}
\label{eq:imaginary-part-inequality}
\Im(\trace(XA)) = \trace(X \Im A) = \sum_{l=1}^N \trace(X_l A_l) \le \sum_{l=1}^N \sup \ocnr[C_l](A_l),
\end{equation}
where we have omitted the $l=0$ term from the sum since $X_0 = 0$.
For each $1 \le l < N$, the operators $C_l,A_l$ act on the finite dimensional space $P_l \mathcal{H}$, and so the supremum $\sup \ocnr[C_l](A_l)$ is attained.
Moreover, because $A_l = A'_l \oplus A''_l$, by \Cref{lem:selfadjoint-direct-sum-characterization} (or rather, its finite dimensional counterpart) there exists $Y_l = Y'_l \oplus Y''_l \in \mathcal{U}(C_l)$ such that $\trace(Y_l A_l) = \sup \ocnr[C_l](A_l)$.
If $N < \infty$, then \Cref{lem:selfadjoint-direct-sum-characterization} still allows us to obtain $Y_N = Y'_N \oplus Y''_N \in \mathcal{O}(C_N)$ such that $\trace(X_N A_N) \le \trace(Y_N A_N)$ regardless of whether or not $\sup \ocnr[C_N](A_N)$ is attained.
Set $Y'_0 = 0 = Y''_0$.
Then set $Y_{P^{\vphantom{\perp}}} := \bigoplus_{l=0}^N Y'_l$ and $Y_{P^{\perp}} := \bigoplus_{l=0}^N Y''_l$ and $Y := Y_{P^{\vphantom{\perp}}} \oplus Y_{P^{\perp}} \in \mathcal{O}(C)$.
As previously remarked, $Y$ satisfies the same decomposition property as $X$, and therefore $\Re \trace (YA) = \sup \Re \ocnr(A) = \Re \trace (XA)$.
Moreover, $\Im \trace (XA) \le \Im \trace (YA)$.
Notice that
\begin{equation*}
\Im(\trace(XA)) \ge \sum_{l=1}^N \inf \ocnr[C_l](A_l).
\end{equation*}
Therefore, a symmetric argument to the one given in the previous paragraph allows us to produce a $Z = Z_{P^{\vphantom{\perp}}} \oplus Z_{P^{\perp}} \in \mathcal{O}(C)$ such that $\Re(\trace XA) = \Re(\trace ZA)$ and $\Im(\trace XA) \ge \Im(\trace ZA)$.
This proves that
\begin{equation*}
x = \trace(XA) \in \conv \set{ \trace(YA), \trace(ZA) } \subseteq \conv \quad \bigcup_{\mathclap{\quad C_{P^{\vphantom{\perp}}} \oplus C_{P^{\perp}} \in \mathcal{O}(C)}} \quad \big( \ocnr[C_{P^{\vphantom{\perp}}}](A_{P^{\vphantom{\perp}}}) + \ocnr[C_{P^{\perp}}](A_{P^{\perp}}) \big) ,
\end{equation*}
as desired.
\end{proof}
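As with the lemma, taking $C$ to be a rank-one projection (an illustrative special case of ours) shows how \Cref{thm:direct-sum-characterization} generalizes the elementary direct-sum formula recalled at the start of this section:

```latex
\begin{example}
If $C$ is a rank-one projection, then in every decomposition
$C_{P^{\vphantom{\perp}}} \oplus C_{P^{\perp}} \in \mathcal{O}(C)$ one summand is a
rank-one projection and the other is zero, so the union in
\Cref{thm:direct-sum-characterization} is $W(A_{P^{\vphantom{\perp}}}) \cup W(A_{P^{\perp}})$
and the theorem collapses to
\begin{equation*}
W(A_{P^{\vphantom{\perp}}} \oplus A_{P^{\perp}}) = \conv \big( W(A_{P^{\vphantom{\perp}}}) \cup W(A_{P^{\perp}}) \big).
\end{equation*}
\end{example}
```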
In \cite{DvE-2020-LaMA}, Dirr and vom Ende introduced an analogue of the $C$-spectrum for trace-class $C$ when $A$ is compact, which they also denoted $\cspec(A)$.
We will also need a notion of the $C$-spectrum of a compact operator $A$, but ours will differ slightly from the one given by Dirr and vom Ende, and for this reason we will instead use the notation $\ocspec(A)$ and refer to it as the $\mathcal{O}(C)$-spectrum.
As in \cite{DvE-2020-LaMA}, we must invoke the concept of the \term{modified eigenvalue sequence} of a compact operator $A$.
This is the sequence $\tilde{\eig}(A)$ obtained by mixing $\dim \ker A$ many zeros into the usual eigenvalue sequence $\lambda(A)$.
For the purposes of the $\mathcal{O}(C)$-spectrum, the order of these eigenvalues does not matter.
\begin{definition}
\label{def:c-spectrum}
Let $C$ be a trace-class operator of (possibly infinite) rank $N$, and let $A \in \mathcal{K}$.
The \term{$\mathcal{O}(C)$-spectrum} of $A$ is the collection
\begin{equation*}
\ocspec(A) := \vset{ \sum_{n=1}^{\infty} \lambda_n(C) \tilde{\eig}_{\pi(n)}(A) \,\middle\vert\, \pi : \mathbb{N} \to \mathbb{N}\ \text{injective} }.
\end{equation*}
One should think of the $\mathcal{O}(C)$-spectrum $\ocspec(A)$ as an $\mathcal{O}(C)$-relativized analogue of the point spectrum $\spec_{\mathrm{pt}}(A)$.
\end{definition}
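A small worked instance of \Cref{def:c-spectrum} may be helpful (the particular operators are our illustrative choices):

```latex
\begin{example}
Let $C = \diag(2,1,0,0,\ldots)$ and let $A \in \mathcal{K}$ be a rank-two normal
operator with eigenvalues $3$ and $i$, so that $\tilde{\eig}(A) = (3,i,0,0,\ldots)$.
Since $\lambda_n(C) = 0$ for $n \ge 3$, each sum in \Cref{def:c-spectrum} reduces to
$2\,\tilde{\eig}_{\pi(1)}(A) + \tilde{\eig}_{\pi(2)}(A)$ with $\pi(1) \neq \pi(2)$, and hence
\begin{equation*}
\ocspec(A) = \set{ 6+i,\ 3+2i,\ 6,\ 3,\ 2i,\ i,\ 0 }.
\end{equation*}
\end{example}
```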
This definition of the $\mathcal{O}(C)$-spectrum differs from the definition of the $C$-spectrum given in \cite{DvE-2020-LaMA} only in that we allow $\pi$ to be injective instead of a permutation, and that we use the standard eigenvalue sequence of $C$ instead of the modified eigenvalue sequence.
\begin{remark}
\label{rem:relation-cspec-to-ocspec}
The terminology $\mathcal{O}(C)$-spectrum and the notation $\ocspec(A)$ are not haphazard, but allude to the following relationship between the $C$-spectrum, the $\mathcal{O}(C)$-spectrum and the point spectrum.
For a normal operator $C \in \mathcal{L}_1$ and $A \in \mathcal{K}$, by \Cref{prop:orbit-closure-equivalences}
\begin{equation*}
\ocspec(A) = \ \bigcup_{\mathclap{X \in \mathcal{O}(C)}} \ \cspec[X](A) = \ \bigcup_{\mathclap{0 \le n \le \infty}} \ \cspec[C \oplus \mathbf{0}_n](A).
\end{equation*}
So, in essence, the $\mathcal{O}(C)$-spectrum is just a version of the $C$-spectrum in which the size of the kernel of $C$ can vary, at least when $C$ is normal.
Moreover, if $C$ is finite rank, then $\ocspec(A) = \cspec(A)$, and if $P$ is a rank-$1$ projection, then $\ocspec[P](A) = \cspec[P](A) = \spec_{\mathrm{pt}}(A)$.
\end{remark}
It is a trivial fact that the point spectrum $\spec_{\mathrm{pt}}(A)$ of an operator $A$ is contained in the numerical range $W(A)$, and by convexity $\conv \spec_{\mathrm{pt}}(A) \subseteq W(A)$.
The following proposition establishes an analogous fact for the $Mhcal{O}(C)$-spectrum and the orbit-closed $C$-numerical range.
\begin{proposition}
\label{prop:c-spectrum-subset-ocnr}
If $C \in \mathcal{L}_1$ is normal and $A \in \mathcal{K}$ is upper triangular relative to some orthonormal basis, then $\ocspec(A) \subseteq \ocnr(A)$.
If, in addition, $C$ is selfadjoint, then the inclusion $\conv \ocspec(A) \subseteq \ocnr(A)$ also holds.
\end{proposition}
\begin{proof}
Suppose that $A \in \mathcal{K}$ is upper triangular relative to an orthonormal basis $\set{e_n}_{n=1}^{\infty}$ for $\mathcal{H}$.
Then it is well known that the diagonal entries of $A$ constitute precisely the modified eigenvalue sequence $\tilde{\eig}(A)$, suitably permuted.
Indeed, the sequence of subspaces
\begin{equation*}
\set{0} \subseteq \spans \set{e_1} \subseteq \spans \set{e_1,e_2} \subseteq \spans \set{e_1,e_2,e_3} \subseteq \cdots \subseteq Mhcal{H},
\end{equation*}
forms a triangularizing chain for $A$, and so the nonzero diagonal entries are precisely the eigenvalues by Ringrose's Theorem, and they are repeated according to algebraic multiplicity (see \cite[Theorems~7.2.3 and 7.2.9]{RR-2000}).
Then take any injective $\pi : \mathbb{N} \to \mathbb{N}$ and define a sequence $(x_n)$ by
\begin{equation*}
x_n :=
\begin{cases}
\lambda_{\pi^{-1}(n)}(C) & \text{if } n \in \pi(Mhbb{N}), \\
0 & \text{otherwise.} \\
\end{cases}
\end{equation*}
Then, since $C$ is normal, by \Cref{prop:orbit-closure-equivalences} $X := \diag(x_n) \in \mathcal{O}(C)$.
Moreover,
\begin{equation*}
\trace(XA) = \sum_{n=1}^{\infty} x_n \tilde{\eig}_n(A) = \sum_{n \in \pi(\mathbb{N})} \lambda_{\pi^{-1}(n)}(C) \tilde{\eig}_n(A) = \sum_{n=1}^{\infty} \lambda_n(C) \tilde{\eig}_{\pi(n)}(A).
\end{equation*}
Since $\pi$ was arbitrary, $\ocspec(A) \subseteq \ocnr(A)$.
Finally, if $C$ is selfadjoint, then by \Cref{cor:c-numerical-range-convex}, $\ocnr(A)$ is convex and therefore $\conv \ocspec(A) \subseteq \ocnr(A)$.
\end{proof}
Before we prove our last main theorem in this section (\Cref{thm:normal-convex-c-spectrum}), which says that $\conv \ocspec(A) = \ocnr(A)$ when $A \in \mathcal{K}$ is normal and $C \in \mathcal{L}_1^+$, we need lemmas corresponding to two special cases: $A$ selfadjoint, and $A$ normal with finite spectrum.
\begin{lemma}
\label{lem:selfadjoint-convex-c-spectrum}
If $A \in \mathcal{K}^{sa}$ and $C \in \mathcal{L}_1^+$, then $\ocnr(A) = \conv \ocspec(A)$.
\end{lemma}
\begin{proof}
Since $A \in \mathcal{K}^{sa}$ is diagonalizable, by \Cref{prop:c-spectrum-subset-ocnr} we only need to prove the inclusion $\ocnr(A) \subseteq \conv \ocspec(A)$.
Notice that $\ocnr(A)$ is an interval since it is convex and contained in $\mathbb{R}$.
We will prove that when $\sup \ocnr(A)$ is attained then it is an element of $\ocspec(A)$, and when the supremum is not attained, $\ocspec(A)$ contains elements arbitrarily close to $\sup \ocnr(A)$.
Of course, symmetric arguments apply to the infimum, thereby establishing the desired equality $\ocnr(A) = \conv \ocspec(A)$.
By \Cref{thm:c-numerical-range-selfadjoint-formula} and \Cref{prop:c-numerical-range-maximum} we know that
\begin{equation*}
\sup \ocnr(A) = \sum_{n=1}^{\infty} s_n(C) s_n(A_+),
\end{equation*}
and that this supremum is attained if and only if $N := \trace \chi_{[0,\infty)}(A) \ge \rank C$.
Moreover, since for $1 \le n \le N$, $s_n(A_+)$ is an eigenvalue for $A$, when the inequality $N \ge \rank C$ holds, we obtain
\begin{equation*}
\sup \ocnr(A) = \sum_{n=1}^{\rank C} s_n(C) s_n(A_+) \in \ocspec(A).
\end{equation*}
In the case when $\sup \ocnr(A)$ is not attained, we know that $N < \rank C$ (so $N < \infty$), and therefore $\chi_{[0,\infty)}(A)$ is a finite projection.
Since $0 \in \spec_{\mathrm{ess}}(A)$, for every $\varepsilon > 0$, the projection $\chi_{(-\varepsilon,0)}(A)$ is infinite.
Therefore $A$ has infinitely many arbitrarily small negative eigenvalues.
Let $\pi : \mathbb{N} \to \mathbb{N}$ be an injective function such that $\tilde{\eig}_{\pi(n)}(A) = s_n(A_+)$ for $1 \le n \le N$ and, for $n > N$, $\frac{-\varepsilon}{\sum_{k=N+1}^{\infty} s_k(C)} < \tilde{\eig}_{\pi(n)}(A) < 0$.
Multiplying this inequality by $s_n(C)$ and summing over $n > N$ yields
$-\varepsilon < \sum_{n=N+1}^{\infty} s_n(C) \tilde{\eig}_{\pi(n)}(A) < 0$.
Therefore,
\begin{align*}
\sup \ocnr(A) - \varepsilon &= \sum_{n=1}^N s_n(C) s_n(A_+) - \varepsilon \\
&< \sum_{n=1}^N s_n(C) \tilde{\eig}_{\pi(n)}(A) + \sum_{n=N+1}^{\infty} s_n(C) \tilde{\eig}_{\pi(n)}(A) \\
&= \sum_{n=1}^{\infty} s_n(C) \tilde{\eig}_{\pi(n)}(A) \in \ocspec(A).
\end{align*}
Therefore $\ocspec(A)$ contains elements which are arbitrarily close to $\sup \ocnr(A)$.
As remarked at the beginning of the proof, symmetric arguments hold for $\inf \ocnr(A)$, and therefore $\ocnr(A) = \conv \ocspec(A)$.
\end{proof}
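The non-attained case may be seen concretely in the following example of our own making: take $C = \diag(2^{-1}, 2^{-2}, \ldots)$, so $\rank C = \infty$, and $A = \diag(1, -1, -\tfrac{1}{2}, -\tfrac{1}{3}, \ldots)$, so that $N = \trace \chi_{[0,\infty)}(A) = 1 < \rank C$ and $\sup \ocnr(A) = s_1(C) s_1(A_+) = \tfrac{1}{2}$.
This value is never attained: for any injective $\pi$, at most one term of $\sum_n s_n(C) \tilde{\eig}_{\pi(n)}(A)$ is positive, that term is at most $\tfrac{1}{2}$, and all remaining terms are strictly negative.
On the other hand, taking $\pi(1) = 1$ and letting $\pi(n)$, for $n \ge 2$, select eigenvalues $-\tfrac{1}{k_n}$ with $k_n \ge M$ gives
\begin{equation*}
\sum_{n=1}^{\infty} s_n(C) \tilde{\eig}_{\pi(n)}(A) = \frac{1}{2} - \sum_{n=2}^{\infty} \frac{2^{-n}}{k_n} \ge \frac{1}{2} - \frac{1}{2M},
\end{equation*}
so $\ocspec(A)$ approaches the supremum as $M \to \infty$, exactly as in the proof above.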
\begin{lemma}
\label{lem:finite-direct-sum-c-spectrum}
If $A \in \mathcal{K}$ is normal with finite spectrum and $C \in \mathcal{L}_1^+$, then $\ocnr(A) = \conv \ocspec(A)$.
\end{lemma}
\begin{proof}
Let $\spec(A) = \set{\lambda_1,\ldots,\lambda_m}$ be listed in order of decreasing modulus, and let $P_1,\ldots,P_m$ be the corresponding spectral projections.
Of course, $0 \in \spec(A)$, so after relabeling we may assume $\lambda_m = 0$, and therefore $P_m$ is the only infinite projection in the list since $A \in \mathcal{K}$.
Now $A = \bigoplus_{j=1}^m \lambda_j I_{P_j \mathcal{H}}$, so by \Cref{thm:direct-sum-characterization}, every element of $\ocnr(A)$ is a convex combination of terms of the form $\trace(XA)$ where $X = \bigoplus_{j=1}^m X_j \in \mathcal{O}(C)$ and $X_j$ acts on $P_j \mathcal{H}$.
We claim that any such term lies in $\ocspec(A)$.
Indeed, suppose that for $1 \le j < m$ (with $n_0 := 0$), $\set{e_k}_{k=n_{j-1}+1}^{n_j}$ is a basis for $P_j \mathcal{H}$ which diagonalizes $X_j$, and that $\set{e_k}_{k=n_{m-1}+1}^{\infty}$ is a basis for $P_m \mathcal{H}$ which diagonalizes $X_m$.
Since $X \in \mathcal{O}(C)$ is diagonal relative to this orthonormal basis $\set{e_k}_{k=1}^{\infty}$, the nonzero terms of its diagonal sequence $(d_k)_{k=1}^{\infty}$ must consist precisely of the nonzero terms of $s(C)$.
Moreover, relative to this basis, $A$ is already diagonalized and its diagonal is precisely $\lambda(A) = \tilde{\eig}(A)$.
Let $\pi : \mathbb{N} \to \mathbb{N}$ be an injective function such that $d_{\pi(k)} = s_k(C)$.
Then we find
\begin{equation*}
\trace(XA) = \sum_{k=1}^{\infty} d_k \tilde{\eig}_k(A) = \sum_{k \in \pi(\mathbb{N})} d_k \tilde{\eig}_k(A) = \sum_{n=1}^{\infty} d_{\pi(n)} \tilde{\eig}_{\pi(n)}(A) = \sum_{n=1}^{\infty} s_n(C) \tilde{\eig}_{\pi(n)}(A) \in \ocspec(A).
\end{equation*}
Since any element of $\ocnr(A)$ is a convex combination of such terms, we obtain the inclusion $\ocnr(A) \subseteq \conv \ocspec(A)$, and equality follows from \Cref{prop:c-spectrum-subset-ocnr}.
\end{proof}
\begin{theorem}
\label{thm:normal-convex-c-spectrum}
If $C \in \mathcal{L}_1^+$ is a positive trace-class operator and $A \in \mathcal{K}$ is normal, then the orbit-closed $C$-numerical range and the convex hull of the $\mathcal{O}(C)$-spectrum coincide.
That is,
\begin{equation*}
\ocnr(A) = \conv \ocspec(A).
\end{equation*}
\end{theorem}
\begin{proof}
Since a normal operator $A \in \mathcal{K}$ is diagonalizable, by \Cref{prop:c-spectrum-subset-ocnr} we only need to prove the inclusion $\ocnr(A) \subseteq \conv \ocspec(A)$.
Consider a basis diagonalizing $A$, so that $A = \diag(\tilde{\eig}(A))$. Then for $m \in \mathbb{N}$ we define the finite rank operators $A_m := \diag \big( \tilde{\eig}_1(A), \ldots, \tilde{\eig}_m(A), 0, \ldots \big)$ and notice $A_m \xrightarrow{\norm{\bigcdot}} A$.
Therefore $\ocnr(A_m) \to \ocnr(A)$ in the Hausdorff pseudometric by \Cref{thm:continuity}.
Now $A_m$ is a normal compact operator with finite spectrum, so by \Cref{lem:finite-direct-sum-c-spectrum} we obtain $\conv \ocspec(A_m) = \ocnr(A_m)$.
We now prove that $\ocspec(A_m) \to \ocspec(A)$.
Let $\varepsilon > 0$, and choose $M \in \mathbb{N}$ such that for all $m \ge M$, $\norm{A_m - A} < \frac{\varepsilon}{\norm{C}_1}$.
Let $\pi : \mathbb{N} \to \mathbb{N}$ be any injective function.
Then
\begin{align*}
\Big\vert \sum_{n=1}^{\infty} s_n(C) \tilde{\eig}_{\pi(n)}(A_m) - \sum_{n=1}^{\infty} s_n(C) \tilde{\eig}_{\pi(n)}(A) \Big\vert
&\le \sum_{n=1}^{\infty} s_n(C) \big\vert \tilde{\eig}_{\pi(n)}(A_m) - \tilde{\eig}_{\pi(n)}(A) \big\vert \\
&\le \norm{C}_1 \norm{A_m - A} < \norm{C}_1 \frac{\varepsilon}{\norm{C}_1} = \varepsilon.
\end{align*}
Consequently, $d_H \big( \ocspec(A_m), \ocspec(A) \big) < \varepsilon$, so $\ocspec(A_m) \to \ocspec(A)$.
Moreover, this implies $\conv \ocspec(A_m)$ converges to $\conv \ocspec(A)$ in the Hausdorff pseudometric as well.
Thus
\begin{equation*}
\conv \ocspec(A) \xleftarrow{d_H} \conv \ocspec(A_m) = \ocnr(A_m) \xrightarrow{d_H} \ocnr(A),
\end{equation*}
and hence $\closure{\conv \ocspec(A)} = \closure{\ocnr(A)}$.
By the above, it suffices to prove that every element of the boundary of $\ocnr(A)$ is also an element of $\conv \ocspec(A)$.
The argument is very similar to the one in the proof of \Cref{thm:direct-sum-characterization}, except we apply \Cref{lem:selfadjoint-convex-c-spectrum} in place of \Cref{lem:selfadjoint-direct-sum-characterization}.
Suppose that $\trace(XA) \in \ocnr(A)$ lies on the boundary.
By rotating, we may suppose that $\Re \trace(XA) = \sup \Re \ocnr(A) = \sup \ocnr(\Re A)$.
Then by \Cref{prop:block-diagonal-decomposition} we get spectral projections $\set{ P_l }_{l=1}^N$ associated to the distinct nonnegative eigenvalues $\set{ \lambda_l }_{l=1}^N$ of $\Re A$, including zero if and only if $N < \infty$.
We set $P_0 := I - \sum_{l=1}^N P_l$, $n_0 := 0$, and $n_l := \sum_{j=1}^l \trace P_j$.
In addition, $X$ commutes with each $P_l$ and if $X_l$ denotes the compression of $X$ to $P_l \mathcal{H}$, then $X = \bigoplus_{l=0}^N X_l$.
Moreover, $X_0 = 0$ and for $1 \le l < N$ we have $X_l \in \mathcal{U}(C_l)$ where $C_l := \diag(s_{n_{l-1}+1}(C),\ldots,s_{n_l}(C))$.
If $N < \infty$, then $X_N \in \mathcal{O}(C_N)$ where $C_N := \diag(s_{n_{N-1}+1}(C), s_{n_{N-1}+2}(C), \ldots)$.
Since $A$ is normal, $\Re A$ and $\Im A$ commute, and therefore $\Im A$ commutes with each $P_l$, and so $A$ commutes with each $P_l$ too.
Let $A_l$ be the compression of $A$ to $P_l \mathcal{H}$, so that $A = \bigoplus_{l=0}^N A_l$.
Now for each $1 \le l \le N$, by \Cref{lem:selfadjoint-convex-c-spectrum} we know that there are elements $y_l, z_l \in \ocspec[C_l](\Im A_l)$ such that $z_l \le \trace(X_l \Im A_l) \le y_l$.
Moreover, since $\Re A_l = \lambda_l I_{P_l Mhcal{H}}$, then $z'_l := \lambda_l \trace C_l + i z_l, y'_l := \lambda_l \trace C_l + i y_l \in \ocspec[C_l](A_l)$.
Notice that $\trace(X_l A_l) = \lambda_l \trace C_l + i \trace(X_l \Im A_l)$, hence $\trace (X_l A_l) \in [z'_l, y'_l]$.
Summing over $1 \le l \le N$, we obtain
\begin{equation*}
\trace(XA) = \sum_{l=1}^N \trace(X_l A_l) \in [z,y], \quad\text{where}\quad z := \sum_{l=1}^N z'_l \quad \text{and}\quad y := \sum_{l=1}^N y'_l.
\end{equation*}
Finally, $z,y \in \sum_{l=1}^N \ocspec[C_l](A_l) \subseteq \ocspec[\bigoplus_{l=1}^N C_l] \big( \bigoplus_{l=1}^N A_l \big) \subseteq \ocspec(A)$, and therefore we obtain $\trace(XA) \in [z,y] \subseteq \conv \ocspec(A)$.
\end{proof}
\section{Convexity of the $C$-numerical range.}
\label{sec:c-numerical-range-convexity}
In their paper \cite{DvE-2020-LaMA}, Dirr and vom Ende asked whether the $C$-numerical range $\cnr(A)$ is convex when $C$ is normal with collinear eigenvalues.
We will now show that when $A$ is diagonalizable and $C$ is positive and has either trivial or infinite dimensional kernel, then this is indeed the case (see \Cref{cor:c-numerical-range-convex-diagonalizable}).
We make no claim that these circumstances are exhaustive, but we are limited by the proof technique and the underlying results.
Nevertheless, we felt that a partial answer to the question of the convexity of $\cnr(A)$ would contribute some value.
Let $E : B(\mathcal{H}) \to \mathcal{D}$ denote the canonical trace-preserving conditional expectation onto a diagonal masa $\mathcal{D}$.
In other words, $E$ is the operation of ``taking the main diagonal.''
When applied to the unitary orbit of an operator $C$, there is a natural bijection between $E(\mathcal{U}(C))$ and the set of all diagonal sequences of $C$ as the orthonormal basis giving rise to the matrix representation of $C$ varies.
The study of diagonals of operators has a rich history in the literature.
For a survey, see \cite{LW-2020-OT27}.
The following\footnote{In \cite{KW-2010-JFA}, this is stated in terms of the so-called \emph{partial isometry orbit} $\mathcal{V}(C)$, but \cite[Proposition~2.1.12]{Lor-2016} guarantees that $\closure[\norm{\bigcdot}]{\mathcal{U}(C)} = \mathcal{V}(C)$ for $C \in \mathcal{K}^+$.} gives a complete characterization of diagonals of compact operators modulo the dimension of the kernel.
\begin{proposition}[\protect{\cite[Proposition~6.4]{KW-2010-JFA}}]
\label{prop:compact-schur-horn}
For a positive compact operator $C$,
\begin{equation*}
E\Big(\closure[\norm{\bigcdot}]{\mathcal{U}(C)}\Big) = \{ X \in \mathcal{D} \cap \mathcal{K}^+ \mid s(X) \prec s(C) \}.
\end{equation*}
\end{proposition}
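\Cref{prop:compact-schur-horn} may be viewed as an infinite dimensional analogue of the classical Schur--Horn theorem, which states that the diagonals of the selfadjoint matrices with eigenvalue list $\lambda$ are exactly the vectors majorized by $\lambda$.
For example, if $C = \diag(2,1,0)$, then $(1,1,1)$ is majorized by $(2,1,0)$ since the sorted partial sums satisfy
\begin{equation*}
1 \le 2, \qquad 1 + 1 \le 2 + 1, \qquad 1 + 1 + 1 = 2 + 1 + 0,
\end{equation*}
and indeed there is a unitary $U$ for which $U^* C U$ has constant diagonal $(1,1,1)$.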
Since the set $\{ X \in \mathcal{D} \cap \mathcal{K}^+ \mid s(X) \prec s(C) \}$ is readily seen to be convex, \Cref{prop:compact-schur-horn} can be used to give a one-line proof that $\ocnr(A)$ is convex whenever $A$ is diagonalizable, thereby providing yet another proof of \Cref{thm:c-numerical-range-via-majorization} in this restricted setting.
Indeed, suppose $A \in \mathcal{D}$, let $t \in [0,1]$ and let $X_1, X_2 \in \mathcal{O}(C)$.
Then there is some $X \in \mathcal{O}(C)$ for which $E(X) = E(t X_1 + (1-t) X_2)$, and therefore
\begin{equation*}
\trace(E((t X_1 + (1-t)X_2)A)) = \trace(E(t X_1 + (1-t)X_2)A) = \trace(E(X)A) = \trace(E(XA)).
\end{equation*}
Since the conditional expectation is trace-preserving, $\trace((t X_1 + (1-t)X_2)A) = \trace(XA) \in \ocnr(A)$.
It turns out that there are certain circumstances under which $E(\mathcal{U}(C))$ has been characterized, namely when $\ker C$ is either trivial \cite[Proposition~6.6]{KW-2010-JFA} or infinite dimensional \cite[Corollary~3.5]{LW-2015-JFA}.
In both cases, the characterization is still linked to majorization, but the details of the definitions are a bit too technical for our present purposes.
Nevertheless, it is known that $E(\mathcal{U}(C))$ is convex if $\ker C$ is trivial \cite[Corollary~6.7]{KW-2010-JFA} or infinite dimensional\footnote{In the case when $\ker C$ is nontrivial but finite dimensional, the first author has conjectured a characterization of $E(\mathcal{U}(C))$ and has established that this conjectured set is convex. See \cite[Conjecture~3.6, Lemma~4.2]{LW-2015-JFA} for details.} \cite[Corollary~4.3]{LW-2015-JFA}.
\begin{proposition}[\protect{\cite[Corollary~6.7]{KW-2010-JFA},\cite[Corollary~4.3]{LW-2015-JFA}}]
\label{prop:convex-schur-horn}
Let $C$ be a positive compact operator.
If $\ker C$ is either trivial or infinite dimensional, then $E(\mathcal{U}(C))$ is convex.
\end{proposition}
This immediately yields the following corollary concerning the convexity of $\cnr(A)$.
\begin{corollary}
\label{cor:c-numerical-range-convex-diagonalizable}
Let $C$ be a positive trace-class operator and suppose that $\ker C$ is either trivial or infinite dimensional.
For any diagonalizable operator $A$, $\cnr(A)$ is convex.
\end{corollary}
\begin{proof}
Suppose $A$ is diagonalizable.
Then after conjugating by a suitable unitary, which we can absorb into $\mathcal{U}(C)$, we may assume $A \in \mathcal{D}$.
Let $t \in [0,1]$ and suppose $X_1, X_2 \in \mathcal{U}(C)$.
Then by \Cref{prop:convex-schur-horn} there is some $X \in \mathcal{U}(C)$ for which $E(X) = E(t X_1 + (1-t) X_2)$, and therefore
\begin{equation*}
\trace(E((t X_1 + (1-t)X_2)A)) = \trace(E(t X_1 + (1-t)X_2)A) = \trace(E(X)A) = \trace(E(XA)).
\end{equation*}
Then $\trace((t X_1 + (1-t)X_2)A) = \trace(XA) \in \cnr(A)$ since the conditional expectation is trace-preserving.
\end{proof}
\end{document} |
\begin{document}
\title{Rarefaction pulses for the Nonlinear Schr\"odinger Equation\\ in the transonic limit}

\begin{abstract}
We investigate the properties of finite energy travelling waves to the nonlinear Schr\"odinger equation with nonzero conditions at infinity for a wide class of nonlinearities. In space dimension two and three we prove that travelling waves converge in the transonic limit (up to rescaling) to ground states of the Kadomtsev-Petviashvili equation. Our results generalize an earlier result of F. B\'ethuel, P. Gravejat and J-C. Saut for the two-dimensional Gross-Pitaevskii equation, and provide a rigorous proof of a conjecture by C. Jones and P. H. Roberts about the existence of an upper branch of travelling waves in dimension three.
\end{abstract}

\noindent {\bf Keywords.} Nonlinear Schr\"odinger equation, Gross-Pitaevskii equation, Kadomtsev-Petviashvili equation, travelling waves, ground state.

\noindent {\bf MSC (2010)} Main: 35C07, 35B40, 35Q55, 35Q53. Secondary: 35B45, 35J20, 35J60, 35Q51, 35Q56, 35Q60.

\tableofcontents
\section{Introduction}
We consider the nonlinear Schr\"odinger equation in $\mathbb R^N$
\begin{equation}
\tag{NLS}
i \frac{\partial \Psi}{\partial t} + \Delta \Psi + F(|\Psi|^2) \Psi = 0
\end{equation}
with the condition $|\Psi(t,x)| \to r_0$ as $|x| \to \infty$, where $r_0 > 0$ and $F(r_0^2) = 0$. This equation arises as a relevant model in many physical situations, such as the theory of Bose-Einstein condensates, superfluidity (see \cite{Cos}, \cite{G}, \cite{IS}, \cite{JR}, \cite{JPR} and the surveys \cite{RB}, \cite{AHMNPTB}), or as an approximation of the Maxwell-Bloch system in Nonlinear Optics (cf. \cite{KL}, \cite{KivPeli}). When $F(\varrho) = 1 - \varrho$, the corresponding (NLS) equation is called the Gross-Pitaevskii equation and is a common model for Bose-Einstein condensates. The so-called ``cubic-quintic'' (NLS), where
$$ F(\varrho) = - \alpha_1 + \alpha_3 \varrho - \alpha_5 \varrho^2 $$
for some positive constants $\alpha_1$, $\alpha_3$ and $\alpha_5$ such that $F$ has two positive roots, is also of high interest in Physics (see, e.g., \cite{BP}). In Nonlinear Optics, the nonlinearity $F$ can take various forms (cf. \cite{KL}), for instance
\begin{equation}
\label{nonlin}
F(\varrho) = - \alpha \varrho^\nu - \beta \varrho^{2\nu},
\quad \quad
F(\varrho) = - \alpha \Big( 1 - \frac{1}{(1 + \frac{\varrho}{\varrho_0})^\nu} \Big),
\quad \quad
F(\varrho) = - \alpha \varrho \Big( 1 + \gamma \, \tanh \Big( \frac{\varrho^2 - \varrho_0^2}{\sigma^2} \Big) \Big), \qquad \mbox{etc.,}
\end{equation}
where $\alpha$, $\beta$, $\gamma$, $\nu$, $\sigma > 0$ are given constants (the second formula, for instance, was proposed to take into account saturation effects). It is therefore important to allow the nonlinearity to be as general as possible.
The travelling wave solutions propagating with speed $c$ in the $x_1$-direction are the solutions of the form $\Psi(x,t) = U(x_1 - ct, x_2, \dots, x_N)$. The profile $U$ satisfies the equation
\begin{equation}
\tag{TW$_c$}
- i c \partial_{x_1} U + \Delta U + F(|U|^2) U = 0.
\end{equation}
They are supposed to play an important role in the dynamics of (NLS). Since $(U, c)$ is a solution of (TW$_c$) if and only if $(\overline{U}, -c)$ is also a solution, we may assume that $c \geq 0$. The nonlinearities we consider are general, and we will merely make use of the following assumptions:

\noindent {\bf (A1)} The function $F$ is continuous on $[0, +\infty)$, of class $C^1$ near $r_0^2$, $F(r_0^2) = 0$ and $F'(r_0^2) < 0$.

\noindent {\bf (A2)} There exist $C > 0$ and $p_0 \in [1, \frac{2}{N-2})$ ($p_0 < \infty$ if $N = 2$) such that $|F(\varrho)| \leq C (1 + \varrho^{p_0})$ for all $\varrho \geq 0$.

\noindent {\bf (A3)} There exist $C_0 > 0$, $\alpha_0 > 0$ and $\varrho_0 > r_0$ such that $F(\varrho) \leq - C_0 \varrho^{\alpha_0}$ for all $\varrho \geq \varrho_0$.
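For instance, the Gross-Pitaevskii nonlinearity $F(\varrho) = 1 - \varrho$ satisfies all three assumptions with $r_0 = 1$: one checks directly that
\begin{equation*}
F(1) = 0, \quad F'(1) = -1 < 0, \qquad |F(\varrho)| \leq 1 + \varrho, \qquad F(\varrho) = 1 - \varrho \leq - \tfrac{1}{2} \varrho \ \text{ for } \varrho \geq 2,
\end{equation*}
so (A1) holds, (A2) holds with $p_0 = 1$ (admissible for $N = 2, 3$), and (A3) holds with $C_0 = \tfrac12$, $\alpha_0 = 1$, $\varrho_0 = 2$.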
Assumptions (A1) and ((A2) or (A3)) are sufficient to guarantee the existence of travelling waves. However, in order to get some sharp results we will sometimes need more information about the behavior of $F$ near $r_0^2$, so we will replace (A1) by

\noindent {\bf (A4)} The function $F$ is continuous on $[0, +\infty)$, of class $C^2$ near $r_0^2$, with $F(r_0^2) = 0$, $F'(r_0^2) < 0$ and
$$ F(\varrho) = F(r_0^2) + F'(r_0^2)(\varrho - r_0^2) + \frac12 F''(r_0^2)(\varrho - r_0^2)^2 + \mathcal{O}((\varrho - r_0^2)^3) \quad \quad \mbox{ as } \quad \varrho \to r_0^2. $$

If $F$ is $C^2$ near $r_0^2$, we define, as in \cite{CM1},
\begin{equation}
\label{Gamma}
\Gamma = 6 - \frac{4 r_0^4}{{\mathfrak c}_s^2} F''(r_0^2).
\end{equation}
The coefficient $\Gamma$ is positive for the Gross-Pitaevskii nonlinearity ($F(\varrho) = 1 - \varrho$) as well as for the cubic-quintic Schr\"odinger equation. However, for the nonlinearity $F(\varrho) = b e^{-\varrho/\alpha} - a$, where $\alpha > 0$ and $0 < a < b$ (which arises in nonlinear optics and takes into account saturation effects, see \cite{KL}), we have $\Gamma = 6 + 2 \ln(a/b)$, so that $\Gamma$ can take any value in $(-\infty, 6)$, including zero. The coefficient $\Gamma$ may also vanish for some polynomial nonlinearities (see \cite{C1d} for some examples and for the study of travelling waves in dimension one in that case). In this paper we shall be concerned only with the nondegenerate case $\Gamma \neq 0$.
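These values follow by direct computation, using ${\mathfrak c}_s = \sqrt{-2 r_0^2 F'(r_0^2)}$ (the speed of sound introduced below). For the Gross-Pitaevskii nonlinearity one has $r_0 = 1$, $F'(r_0^2) = -1$ and $F'' \equiv 0$, hence ${\mathfrak c}_s = \sqrt{2}$ and $\Gamma = 6$. For the saturated nonlinearity $F(\varrho) = b e^{-\varrho/\alpha} - a$ one finds $r_0^2 = \alpha \ln(b/a)$, $F'(r_0^2) = -a/\alpha$ and $F''(r_0^2) = a/\alpha^2$, so that
\begin{equation*}
{\mathfrak c}_s^2 = \frac{2 a r_0^2}{\alpha} \qquad \text{and} \qquad \Gamma = 6 - \frac{4 r_0^4}{{\mathfrak c}_s^2} \cdot \frac{a}{\alpha^2} = 6 - \frac{2 r_0^2}{\alpha} = 6 + 2 \ln(a/b),
\end{equation*}
in agreement with the formula quoted above.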
{\bf Notation and function spaces.}
For $x = (x_1, x_2, \dots, x_N) \in \mathbb R^N$, we denote $x = (x_1, x_\perp)$, where $x_\perp = (x_2, \dots, x_N) \in \mathbb R^{N-1}$.
Given a function $f$ defined on $\mathbb R^N$, we denote $\nabla_{x_\perp} f = \big( \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_N} \big)$.
We will write $\Delta_{x_\perp} = \frac{\partial^2}{\partial x_2^2} + \dots + \frac{\partial^2}{\partial x_N^2}$.
By ``$f(t) \sim g(t)$ as $t \to t_0$'' we mean $\lim_{t \to t_0} \frac{f(t)}{g(t)} = 1$.
We denote by $\mathcal{F}$ the Fourier transform, defined by $\mathcal{F}(f)(\xi) = \displaystyle \int_{\mathbb R^N} e^{-i x \cdot \xi} f(x) \, dx$ whenever $f \in L^1(\mathbb R^N)$.
Unless otherwise stated, the $L^p$ norms are computed on the whole space $\mathbb R^N$.
We fix an odd function $\chi : \mathbb R \to \mathbb R$ such that $\chi(s) = s$ for $0 \leq s \leq 2 r_0$, $\chi(s) = 3 r_0$ for $s \geq 4 r_0$ and $0 \leq \chi' \leq 1$ on $\mathbb R_+$.
As usual, we denote $\dot{H}^1(\mathbb R^N) = \{ h \in L_{loc}^1(\mathbb R^N) \; | \; \nabla h \in L^2(\mathbb R^N) \}$.
We define the Ginzburg-Landau energy of a function $\psi \in \dot{H}^1(\mathbb R^N)$ by
$$
E_{\rm GL}(\psi) = \int_{\mathbb R^N} |\nabla \psi|^2 + (\chi^2(|\psi|) - r_0^2)^2 \, dx.
$$
We will use the function space
$$ \mathcal{E} = \left\{ \psi \in \dot{H}^1(\mathbb R^N) \; \big| \; \chi^2(|\psi|) - r_0^2 \in L^2(\mathbb R^N) \right\}
= \left\{ \psi \in \dot{H}^1(\mathbb R^N) \; \big| \; E_{\rm GL}(\psi) < \infty \right\}. $$
The basic properties of this space have been discussed in the Introduction of \cite{CM1}.
We will also consider the space
$$ \mathcal{X} = \left\{ u \in \mathcal{D}^{1,2}(\mathbb R^N) \; \big| \; \chi^2(|r_0 - u|) - r_0^2 \in L^2(\mathbb R^N) \right\}, $$
where $\mathcal{D}^{1,2}(\mathbb R^N)$ is the completion of $C_c^{\infty}(\mathbb R^N)$ for the norm $\| u \|_{\mathcal{D}^{1,2}} = \| \nabla u \|_{L^2(\mathbb R^N)}$.
If $N \geq 3$ it can be proved that $\mathcal{E} = \{ \alpha (r_0 - u) \; \big| \; u \in \mathcal{X}, \; \alpha \in \mathbb C, \; |\alpha| = 1 \}$.
{\bf Hamiltonian structure.}
The flow associated to (NLS) formally preserves the energy
$$ E(\psi) = \int_{\mathbb R^N} |\nabla \psi|^2 + V(|\psi|^2) \, dx, $$
where $V$ is the antiderivative of $-F$ which vanishes at $r_0^2$, that is $V(s) = \int_s^{r_0^2} F(\varrho) \, d\varrho$, as well as the momentum.
The momentum (with respect to the direction of propagation $x_1$) is a functional $Q$ defined on $\mathcal{E}$ (or, alternatively, on $\mathcal{X}$) in the following way.
Denoting by $\langle \cdot, \cdot \rangle$ the standard scalar product in $\mathbb C$, it has been proven in \cite{CM1} and \cite{Maris} that for any $\psi \in \mathcal{E}$ we have $\langle i \frac{\partial \psi}{\partial x_1}, \psi \rangle \in \mathcal{Y} + L^1(\mathbb R^N)$, where $\mathcal{Y} = \{ \frac{\partial h}{\partial x_1} \; | \; h \in \dot{H}^1(\mathbb R^N) \}$ and $\mathcal{Y}$ is endowed with the norm $\| \partial_{x_1} h \|_{\mathcal{Y}} = \| \nabla h \|_{L^2(\mathbb R^N)}$.
It is then possible to define the linear continuous functional $L$ on $\mathcal{Y} + L^1(\mathbb R^N)$ by
$$
L \left( \frac{\partial h}{\partial x_1} + \Theta \right) = \int_{\mathbb R^N} \Theta(x) \, dx
\qquad \mbox{ for any } \frac{\partial h}{\partial x_1} \in \mathcal{Y} \mbox{ and } \Theta \in L^1(\mathbb R^N).
$$
The momentum (with respect to the direction $x_1$) of a function $\psi \in \mathcal{E}$ is
$Q(\psi) = L \left( \langle i \frac{\partial \psi}{\partial x_1}, \psi \rangle \right)$.

If $\psi \in \mathcal{E}$ does not vanish, it can be lifted in the form $\psi = \rho e^{i \phi}$ and we have
\begin{equation}
\label{momentlift}
Q(\psi) = \int_{\mathbb R^N} (r_0^2 - \rho^2) \frac{\partial \phi}{\partial x_1} \, dx.
\end{equation}
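At least formally, the lifted formula (\ref{momentlift}) follows from the definition of $Q$: writing $\psi = \rho e^{i\phi}$, a direct computation gives
\begin{equation*}
\Big\langle i \frac{\partial \psi}{\partial x_1}, \psi \Big\rangle = - \rho^2 \frac{\partial \phi}{\partial x_1}
= (r_0^2 - \rho^2) \frac{\partial \phi}{\partial x_1} - \frac{\partial}{\partial x_1} \big( r_0^2 \phi \big),
\end{equation*}
and the last term belongs (formally) to $\mathcal{Y}$, on which the functional $L$ vanishes; applying $L$ therefore leaves exactly the integral in (\ref{momentlift}). We refer to \cite{CM1} for the rigorous statement.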
Any solution $U \in \mathcal{E}$ of (TW$_c$) is a critical point of the functional $E_c = E + cQ$ and satisfies the standard Pohozaev identities (see Proposition 4.1 p. 1091 in \cite{M2}):
\begin{equation}
\label{Pohozaev}
\left\{ \begin{array}{ll}
P_c(U) = 0, \qquad \mbox{ where }
\displaystyle{ P_c(U) = E(U) + cQ(U) - \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U|^2 \, dx, } \qquad \mbox{ and} \\
\\
\displaystyle{ E(U) = 2 \int_{\mathbb R^N} |\partial_{x_1} U|^2 \, dx }.
\end{array} \right.
\end{equation}
We denote
\begin{equation}
\label{Cc}
\mathcal{C}_c = \{ \psi \in \mathcal{E} \; \big| \; \psi \mbox{ is not constant and } P_c(\psi) = 0 \}.
\end{equation}
Using the Madelung transform $\Psi = \sqrt{\varrho} \, e^{i \theta}$ (which makes sense in any domain where $\Psi \neq 0$), equation (NLS) can be put into a hydrodynamical form.
In this context one may compute the associated speed of sound at infinity (see, for instance, the introduction of \cite{M2}):
$$ {\mathfrak c}_s = \sqrt{- 2 r_0^2 F'(r_0^2)} > 0. $$
Under general assumptions it was proved that finite energy travelling waves to (NLS) with speed $c$ exist if and only if $|c| < {\mathfrak c}_s$ (see \cite{M2, Maris}).
Let us recall the existence results of nontrivial travelling waves that we use.
\begin{theo}[\cite{CM1}] \label{th2dposit} Let $N = 2$ and assume that the nonlinearity $F$ satisfies (A2) and (A4) and that $\Gamma \neq 0$.

(a) Suppose moreover that $V$ is nonnegative on $[0, \infty)$. Then for any $q \in (-\infty, 0)$ there exists $U \in \mathcal{E}$ such that $Q(U) = q$ and
$$ E(U) = \inf \{ E(\psi) \; \big| \; \psi \in \mathcal{E}, \; Q(\psi) = q \}. $$

(b) Without any assumption on the sign of $V$, there is $q_{\infty} > 0$ such that for any $q \in (- q_{\infty}, 0)$ there is $U \in \mathcal{E}$ satisfying $Q(U) = q$ and
$$ E(U) = \inf \Big\{ E(\psi) \; \big| \; \psi \in \mathcal{E}, \; Q(\psi) = q, \; \int_{\mathbb R^2} V(|\psi|^2) \, dx > 0 \Big\}. $$

For any $U$ satisfying (a) or (b) there exists $c = c(U) \in (0, {\mathfrak c}_s)$ such that $U$ is a nonconstant solution to {\rm (TW$_{c(U)}$)}. Moreover, if $Q(U_1) < Q(U_2) < 0$ we have $0 < c(U_1) < c(U_2) < {\mathfrak c}_s$ and $c(U) \to {\mathfrak c}_s$ as $q \to 0$.
\end{theo}
\begin{theo}[\cite{CM1}] \label{th2d} Let $N = 2$. Assume that the nonlinearity $F$ satisfies (A2) and (A4) and that $\Gamma \neq 0$. Then there exists $0 < k_{\infty} \leq \infty$ such that for any $k \in (0, k_{\infty})$, there is $\mathbf{U} \in \mathcal{E}$ such that $\displaystyle{\int_{\mathbb R^2} |\nabla \mathbf{U}|^2 \, dx = k}$ and
$$ \int_{\mathbb R^2} V(|\mathbf{U}|^2) \, dx + Q(\mathbf{U}) =
\inf \left\{ \int_{\mathbb R^2} V(|\psi|^2) \, dx + Q(\psi) \; \Big| \; \psi \in \mathcal{E}, \;
\int_{\mathbb R^2} |\nabla \psi|^2 \, dx = k \right\}. $$
For any such $\mathbf{U}$ there exists $c = c(\mathbf{U}) \in (0, {\mathfrak c}_s)$ such that the function $U(x) = \mathbf{U}(x/c)$ is a solution to {\rm (TW$_c$)}. Moreover, if $\mathbf{U}_1$, $\mathbf{U}_2$ are as above and $\displaystyle \int_{\mathbb R^2} |\nabla \mathbf{U}_1|^2 \, dx < \int_{\mathbb R^2} |\nabla \mathbf{U}_2|^2 \, dx$, then ${\mathfrak c}_s > c(\mathbf{U}_1) > c(\mathbf{U}_2) > 0$ and we have $c(\mathbf{U}) \to {\mathfrak c}_s$ as $k \to 0$.
\end{theo}
\begin{theo}[\cite{Maris}] \label{thM} Assume that $N \geq 3$ and the nonlinearity $F$ satisfies (A1) and (A2). Then for any $0 < c < {\mathfrak c}_s$ there exists a nonconstant $\mathbf{U} \in \mathcal{E}$ such that $P_c(\mathbf{U}) = 0$ and $E(\mathbf{U}) + c Q(\mathbf{U}) = \displaystyle \inf_{\psi \in \mathcal{C}_c} (E(\psi) + c Q(\psi))$.
If $N \geq 4$, any such $\mathbf{U}$ is a nontrivial solution to {\rm (TW$_c$)}. If $N = 3$, for any $\mathbf{U}$ as above there exists $\sigma > 0$ such that $U(x) = \mathbf{U}(x_1, \sigma x_\perp) \in \mathcal{E}$ is a nontrivial solution to {\rm (TW$_c$)}.
\end{theo}
If (A3) holds it was proved that there is $C_0 > 0$, depending only on $F$, such that for any $c \in (0, {\mathfrak c}_s)$ and for any solution $U \in \mathcal{E}$ to (TW$_c$) we have $|U| \leq C_0$ in $\mathbb R^N$ (see Proposition 2.2 p. 1079 in \cite{M2}).
If (A3) is satisfied but (A2) is not, one can modify $F$ in a neighborhood of infinity in such a way that the modified nonlinearity $\tilde{F}$ satisfies (A2) and (A3) and $F = \tilde{F}$ on $[0, 2 C_0]$.
Then the solutions of (TW$_c$) are the same as the solutions of (TW$_c$) with $F$ replaced by $\tilde{F}$.
Therefore all the existence results above hold if (A2) is replaced by (A3); however, the minimizing properties hold only if we replace throughout $F$ and $V$ by $\tilde{F}$ and $\tilde{V}$, respectively, where $\tilde{V}(s) = \displaystyle \int_s^{r_0^2} \tilde{F}(\tau) \, d\tau$.
The above results provide, under various assumptions, travelling waves to (NLS) with speed close to the speed of sound ${\mathfrak c}_s$.
We will study the behavior of travelling waves in the transonic limit $c \to {\mathfrak c}_s$ in each of the previous situations.
\subsection{Convergence to ground states for (KP-I)}
In the transonic limit, the travelling waves are expected
to be rarefaction pulses close, up to a rescaling, to ground
states of the Kadomtsev-Petviashvili I (KP-I) equation. We refer to
\cite{JR} in the case of the Gross-Pitaevskii equation ($F(\varrho) = 1 - \varrho$)
in space dimension $N=2$ or $N=3$, and to \cite{KL}, \cite{KAL},
\cite{KivPeli} in the context of Nonlinear Optics.
In our setting, the (KP-I) equation associated to (NLS) is
\begin{equation}
\tag{KP-I}
2 \partial_\tau \zeta + \Gamma \zeta \partial_{z_1} \zeta
- \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1}^3 \zeta
+ \Delta_{z_\perp} \partial_{z_1}^{-1} \zeta = 0,
\end{equation}
where $\Delta_{z_\perp} = \displaystyle{\sum_{j=2}^N \partial_{z_j}^2}$ and the coefficient $\Gamma$ is related to the nonlinearity
$F$ by (\ref{Gamma}).
The (KP-I) flow preserves (at least formally) the $L^2$ norm
$$ \int_{\mathbb{R}^N} \zeta^2 \ dz $$
and the energy
$$ \mathscr{E}(\zeta) = \int_{\mathbb{R}^N}
\frac{1}{{\mathfrak c}_s^2} \, (\partial_{z_1} \zeta)^2
+ |\nabla_{z_\perp} \partial_{z_1}^{-1} \zeta|^2
+ \frac{\Gamma}{3} \, \zeta^3 \ dz . $$
A solitary wave of speed $1/(2{\mathfrak c}_s^2)$, moving to the left in the
$z_1$ direction, is a particular solution of (KP-I) of the form
$\zeta(\tau, z) = \mathbf{W}(z_1 + \tau/(2{\mathfrak c}_s^2), z_\perp)$.
The profile $\mathbf{W}$ solves the equation
\begin{equation}
\tag{SW}
\frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1} \mathbf{W} + \Gamma \mathbf{W} \partial_{z_1} \mathbf{W}
- \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1}^3 \mathbf{W} + \Delta_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}
= 0 .
\end{equation}
Equation (SW) has no nontrivial solution in the degenerate
linear case $\Gamma = 0$ or in space dimension $N \geq 4$ (see Theorem 1.1 p. 214 in \cite{dBSIHP}
or the beginning of Section \ref{proofGS}). If $\Gamma \neq 0$,
since the nonlinearity is homogeneous, one
can construct solitary waves of any (positive) speed just by using the
scaling properties of the equation.
The solutions of (SW) are critical points of the associated action
$$ \mathscr{S}(\mathbf{W}) = \mathscr{E}(\mathbf{W})
+ \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb{R}^N} \mathbf{W}^2 \ dz . $$
The natural energy space for (KP-I) is $\mathscr{Y}(\mathbb{R}^N)$, which is
the closure of $\partial_{z_1} C_c^{\infty}(\mathbb{R}^N)$
for the (squared) norm
$$
\| \mathbf{W} \|_{\mathscr{Y}(\mathbb{R}^N)}^2 =
\int_{\mathbb{R}^N} \frac{1}{{\mathfrak c}_s^2} \, \mathbf{W}^2 + \frac{1}{{\mathfrak c}_s^2} \, (\partial_{z_1} \mathbf{W})^2
+ |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}|^2 \ dz . $$
From the anisotropic Sobolev embeddings (see \cite{BIN}, p. 323) it follows that
$\mathscr{S}$ is well-defined and is a continuous functional on $\mathscr{Y}(\mathbb{R}^N)$ for $N=2$ and $N=3$.
Here we are not interested in arbitrary solitary
waves for (KP-I), but only in {\it ground states}. A ground state
of (KP-I) with speed $1/(2{\mathfrak c}_s^2)$ (or, equivalently, a ground state of (SW)) is
a nontrivial solution of (SW) which minimizes the action
$\mathscr{S}$ among all solutions of (SW). We shall
denote by $\mathscr{S}_{\rm min}$ the corresponding action:
$$ \mathscr{S}_{\rm min} = \inf \Big\{ \mathscr{S}(\mathbf{W}) \; \Big| \; \mathbf{W} \in \mathscr{Y}(\mathbb{R}^N) \setminus \{0\},
\ \mathbf{W} \ {\rm solves \ (SW)} \Big\} . $$
The existence of ground states (with speed $1/(2{\mathfrak c}_s^2)$) for (KP-I)
in dimensions $N=2$ and $N=3$ follows from Lemma 2.1 p. 1067 in \cite{dBSSIAM}.
In dimension $N=2$, we may use the variational characterization
provided by Lemma 2.2 p. 78 in \cite{dBS}:
\begin{theo}[\cite{dBS}] \label{gs2d} Assume that $N=2$ and
$\Gamma \neq 0$. There exists $\mu > 0$ such that the set of solutions to the
minimization problem
\begin{equation}
\label{minimiz}
\mathscr{M}(\mu) = \inf \left\{ \mathscr{E}(\mathbf{W}) \; \Big| \; \mathbf{W} \in \mathscr{Y}(\mathbb{R}^2), \
\int_{\mathbb{R}^2} \frac{1}{{\mathfrak c}_s^2} \, \mathbf{W}^2 \ dz = \mu \right\} ,
\end{equation}
is precisely the set of ground states of {\rm (KP-I)}
and it is not empty.
Moreover, any sequence $(\mathbf{W}_n)_{n \geq 1} \subset \mathscr{Y}(\mathbb{R}^2)$
such that $\displaystyle \int_{\mathbb{R}^2} \frac{1}{{\mathfrak c}_s^2} \, \mathbf{W}_n^2 \ dz \to \mu$ and
$\mathscr{E}(\mathbf{W}_n) \to \mathscr{M}(\mu)$
contains a convergent subsequence
in $\mathscr{Y}(\mathbb{R}^2)$ (up to
translations). Finally, we have
$$ \mu = \frac{3}{2} \, \mathscr{S}_{\rm min}
\qquad {\it and} \qquad
\mathscr{M}(\mu) = -\frac{1}{2} \, \mathscr{S}_{\rm min} . $$
\end{theo}
We emphasize that this characterization of ground states is specific to the two-dimensional case.
Indeed, since $\mathscr{E}$ and the $L^2$ norm are conserved by (KP-I),
it implies the orbital stability of the set of ground states
for (KP-I) if $N=2$ (cf. \cite{dBS}). On the other hand, it is known that this set is orbitally unstable
if $N=3$ (see \cite{Liu}). In the three-dimensional case we need the following result, which
shows that ground states
are minimizers of the action under a Pohozaev-type constraint.
Notice that any solution of (SW) in $\mathscr{Y}(\mathbb{R}^N)$
satisfies the Pohozaev identity
$$ \int_{\mathbb{R}^N} \frac{1}{{\mathfrak c}_s^2} \, (\partial_{z_1} \mathbf{W})^2
+ |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}|^2
+ \frac{\Gamma}{3} \, \mathbf{W}^3 + \frac{1}{{\mathfrak c}_s^2} \, \mathbf{W}^2 \ dz =
\frac{2}{N-1} \int_{\mathbb{R}^N} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}|^2 \ dz , $$
which is (formally) obtained by multiplying (SW) by
$z_\perp \cdot \nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}$ and integrating by
parts (see Theorem 1.1 p. 214 in \cite{dBSIHP} for a rigorous justification).
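Note that the left-hand side of the Pohozaev identity is precisely $\mathscr{S}(\mathbf{W})$; in particular, in dimension $N = 3$, where $\frac{2}{N-1} = 1$, every solution of (SW) in $\mathscr{Y}(\mathbb{R}^3)$ satisfies
$$ \mathscr{S}(\mathbf{W}) = \int_{\mathbb{R}^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}|^2 \ dz . $$
This is the constraint appearing in the minimization problem considered next.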
Taking into account how travelling wave solutions to (NLS)
are constructed in Theorem \ref{thM} above, in the case $N=3$ we consider the minimization problem
\begin{equation}
\label{miniGS}
\mathscr{S}_* = \inf \Big\{ \mathscr{S}(\mathbf{W}) \; \Big| \;
\mathbf{W} \in \mathscr{Y}(\mathbb{R}^3) \setminus \{0\}, \ \mathscr{S}(\mathbf{W})
= \int_{\mathbb{R}^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}|^2 \ dz \Big\} .
\end{equation}
Our first result shows that in space dimension $N=3$ the ground states (with speed $1/(2{\mathfrak c}_s^2)$)
of (KP-I) are the solutions of the minimization problem (\ref{miniGS}).
\begin{theo} \label{gs}
Assume that $N = 3$ and $\Gamma \neq 0$.
Then $\mathscr{S}_* > 0$ and the problem (\ref{miniGS}) has minimizers.
Moreover, $\mathbf{W}_0$ is a minimizer for the problem (\ref{miniGS}) if and only if
there exist a ground state $\mathbf{W}$ for {\rm (KP-I)} (with speed $1/(2{\mathfrak c}_s^2)$) and $\sigma > 0$
such that $\mathbf{W}_0(z) = \mathbf{W}(z_1, \sigma z_\perp)$.
In particular, we have $\mathscr{S}_* = \mathscr{S}_{\rm min}$.
Furthermore, let $(\mathbf{W}_n)_{n \geq 1} \subset \mathscr{Y}(\mathbb{R}^3)$ be a sequence satisfying:
(i) There are positive constants $m_1, m_2$ such that
$m_1 \leq \displaystyle \int_{\mathbb{R}^3} \mathbf{W}_n^2 + (\partial_{z_1} \mathbf{W}_n)^2 \, dz \leq m_2$.
(ii) $\displaystyle \int_{\mathbb{R}^3} \frac{1}{{\mathfrak c}_s^2} \mathbf{W}_n^2 + \frac{1}{{\mathfrak c}_s^2} (\partial_{z_1} \mathbf{W}_n)^2 + \frac{\Gamma}{3} \mathbf{W}_n^3 \, dz \to 0$ as $n \to \infty$.
(iii) $\displaystyle \liminf_{n \to \infty} \mathscr{S}(\mathbf{W}_n) \leq \mathscr{S}_* .$
\noindent
Then there exist $\sigma > 0$, $\mathbf{W} \in \mathscr{Y}(\mathbb{R}^3) \setminus \{0\}$,
a subsequence $(\mathbf{W}_{n_j})_{j \geq 0}$, and a sequence
$(z^j)_{j \geq 0} \subset \mathbb{R}^3$ such that
$z \mapsto \mathbf{W}(z_1, \sigma^{-1} z_\perp)$ is a ground state for
{\rm (KP-I)} with speed $1/(2{\mathfrak c}_s^2)$ and
$$ \mathbf{W}_{n_j}(\cdot - z^j) \to \mathbf{W} \qquad
{\it in} \quad \mathscr{Y}(\mathbb{R}^3) . $$
\end{theo}
\bigskip
We will study the behavior of travelling waves to (TW$_c$) in the transonic limit $c \nearrow {\mathfrak c}_s$
in space dimension $N=2$ and $N=3$ under the assumption $\Gamma \neq 0$
(so that (KP-I) has nontrivial solitary waves).
For $0 < \varepsilon < {\mathfrak c}_s$, we define $c(\varepsilon) > 0$ by
$$
c(\varepsilon) = \sqrt{{\mathfrak c}_s^2 - \varepsilon^2} .
$$
As already mentioned, in this asymptotic regime the
travelling waves are expected to be close to ``the'' ground state of (KP-I)
(to the best of our knowledge, the uniqueness of this solution up to translations has not been proven yet).
Let us give the formal derivation of this result, which follows the
arguments given in \cite{JR} for the
Gross-Pitaevskii equation
in dimensions $N=2$ and $N=3$.
We insert the ansatz
\begin{equation}
\label{ansatzKP}
U(x) = r_0 \Big( 1 + \varepsilon^2 A_\varepsilon(z) \Big)
\exp \big( i \varepsilon \varphi_\varepsilon(z) \big) \qquad \mbox{ where }
z_1 = \varepsilon x_1, \quad z_\perp = \varepsilon^2 x_\perp
\end{equation}
in (TW$_{c(\varepsilon)}$), cancel the phase factor and separate the real
and imaginary parts to obtain the system
\begin{equation}
\label{MadTW}
\left\{ \begin{array}{ll}
\displaystyle{ - c(\varepsilon) \partial_{z_1} A_\varepsilon
+ 2 \varepsilon^2 \partial_{z_1} \varphi_\varepsilon \partial_{z_1} A_\varepsilon
+ 2 \varepsilon^4 \nabla_{z_\perp} \varphi_\varepsilon \cdot \nabla_{z_\perp} A_\varepsilon
+ (1 + \varepsilon^2 A_\varepsilon) \Big( \partial_{z_1}^2 \varphi_\varepsilon +
\varepsilon^2 \Delta_{z_\perp} \varphi_\varepsilon \Big)} = 0 \\ \ \\
\displaystyle{ - c(\varepsilon) \partial_{z_1} \varphi_\varepsilon
+ \varepsilon^2 (\partial_{z_1} \varphi_\varepsilon)^2
+ \varepsilon^4 |\nabla_{z_\perp} \varphi_\varepsilon|^2
- \frac{1}{\varepsilon^2} F\Big( r_0^2 (1 + \varepsilon^2 A_\varepsilon)^2 \Big)
- \varepsilon^2 \frac{\partial_{z_1}^2 A_\varepsilon + \varepsilon^2 \Delta_{z_\perp} A_\varepsilon}{1 + \varepsilon^2 A_\varepsilon}} = 0.
\end{array} \right.
\end{equation}
Formally, if $A_\varepsilon \to A$ and $\varphi_\varepsilon \to \varphi$ as $\varepsilon \to 0$
in some reasonable sense, then to the leading order the first equation
in \eqref{MadTW} gives $- {\mathfrak c}_s \partial_{z_1} A + \partial_{z_1}^2 \varphi = 0$. Since $F$ is of class $C^2$ near $r_0^2$, using the Taylor expansion
$$ F\Big( r_0^2 (1 + \varepsilon^2 A_\varepsilon)^2 \Big) =
F(r_0^2) - {\mathfrak c}_s^2 \varepsilon^2 A_\varepsilon + \mathcal{O}(\varepsilon^4) $$
with $F(r_0^2) = 0$ and ${\mathfrak c}_s^2 = -2 r_0^2 F'(r_0^2)$, the second equation in \eqref{MadTW}
implies $- {\mathfrak c}_s \partial_{z_1} \varphi + {\mathfrak c}_s^2 A = 0$. In both cases, we obtain
the constraint
\begin{equation}
\label{prepar}
{\mathfrak c}_s A = \partial_{z_1} \varphi .
\end{equation}
We multiply the first equation in \eqref{MadTW} by $c(\varepsilon)/{\mathfrak c}_s^2$
and we apply the operator $\frac{1}{{\mathfrak c}_s^2} \partial_{z_1}$ to the second one, then we add the resulting equalities.
Using the Taylor expansion
$$ F\Big( r_0^2 (1 + \alpha)^2 \Big) = - {\mathfrak c}_s^2 \alpha
- \Big( \frac{{\mathfrak c}_s^2}{2} - 2 r_0^4 F''(r_0^2) \Big) \alpha^2
+ F_3(\alpha), \qquad {\rm where} \ F_3(\alpha) = \mathcal{O}(\alpha^3)
\ {\rm as} \ \alpha \to 0 , $$
we get
\begin{align}
\label{desing}
\frac{{\mathfrak c}_s^2 - c^2(\varepsilon)}{\varepsilon^2 {\mathfrak c}_s^2} \, \partial_{z_1} A_\varepsilon &
- \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1} \Big(
\frac{\partial_{z_1}^2 A_\varepsilon + \varepsilon^2 \Delta_{z_\perp} A_\varepsilon}{1 + \varepsilon^2 A_\varepsilon} \Big)
+ \frac{c(\varepsilon)}{{\mathfrak c}_s^2} (1 + \varepsilon^2 A_\varepsilon) \Delta_{z_\perp} \varphi_\varepsilon
\nonumber \\ & +
\Big\{ 2 \frac{c(\varepsilon)}{{\mathfrak c}_s^2} \partial_{z_1} \varphi_\varepsilon \partial_{z_1} A_\varepsilon
+ \frac{c(\varepsilon)}{{\mathfrak c}_s^2} \, A_\varepsilon \partial_{z_1}^2 \varphi_\varepsilon
+ \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1} \left( (\partial_{z_1} \varphi_\varepsilon)^2 \right)
+ \Big[ \frac{1}{2} - 2 r_0^4 \frac{F''(r_0^2)}{{\mathfrak c}_s^2} \Big]
\partial_{z_1} ( A_\varepsilon^2 ) \Big\} \nonumber \\ & =
- 2 \varepsilon^2 \frac{c(\varepsilon)}{{\mathfrak c}_s^2} \nabla_{z_\perp} \varphi_\varepsilon \cdot \nabla_{z_\perp} A_\varepsilon
- \frac{\varepsilon^2}{{\mathfrak c}_s^2} \partial_{z_1} \left( |\nabla_{z_\perp} \varphi_\varepsilon|^2 \right)
- \frac{1}{{\mathfrak c}_s^2 \varepsilon^4} \, \partial_{z_1} \left( F_3(\varepsilon^2 A_\varepsilon) \right) .
\end{align}
If $A_\varepsilon \to A$ and $\varphi_\varepsilon \to \varphi$ as $\varepsilon \to 0$
in a suitable sense, we have ${\mathfrak c}_s^2 - c^2(\varepsilon) = \varepsilon^2$ and
$\partial_{z_1}^{-1} A = \varphi/{\mathfrak c}_s$ by \eqref{prepar}, and then
\eqref{desing} gives
$$ \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1} A - \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1}^3 A
+ \Gamma A \partial_{z_1} A + \Delta_{z_\perp} \partial_{z_1}^{-1} A = 0 , $$
which is (SW).\\
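For the reader's convenience, let us indicate how the coefficient $\Gamma$ is identified in this formal limit. Letting $\varepsilon \to 0$ in \eqref{desing} (so $c(\varepsilon) \to {\mathfrak c}_s$) and using $\partial_{z_1} \varphi = {\mathfrak c}_s A$, the four terms in the bracket $\{\cdots\}$ contribute respectively $2$, $1$, $2$ and $1 - \frac{4 r_0^4 F''(r_0^2)}{{\mathfrak c}_s^2}$ times $A \, \partial_{z_1} A$, that is,
$$ \Big( 6 - \frac{4 r_0^4}{{\mathfrak c}_s^2} \, F''(r_0^2) \Big) A \, \partial_{z_1} A , $$
so that comparison with (SW) formally identifies the coefficient in (\ref{Gamma}) as $\Gamma = 6 - \frac{4 r_0^4}{{\mathfrak c}_s^2} F''(r_0^2)$.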
The main result of this paper is as follows.
\begin{theo}
\label{res1}
Let $N \in \{2, 3\}$ and assume that the nonlinearity $F$ satisfies (A2) and (A4) with $\Gamma \neq 0$.
Let $(U_n, c_n)_{n \geq 1}$ be any sequence such that $U_n \in \mathcal{E}$ is a nonconstant solution of (TW$_{c_n}$),
$c_n \in (0, {\mathfrak c}_s)$, $c_n \to {\mathfrak c}_s$ as $n \to \infty$, and one of the following situations occurs:
(a)
$N=2$ and $U_n$ minimizes $E$ under the constraint $Q = Q(U_n)$, as in Theorem \ref{th2dposit} (a) or (b).
(b) $N=2$ and $U_n(c_n \cdot)$ minimizes the functional
$I(\psi) := Q(\psi) + \displaystyle \int_{\mathbb{R}^N} V(|\psi|^2) \, dx$
under the constraint $\displaystyle \int_{\mathbb{R}^N} |\nabla \psi|^2 \, dx = \int_{\mathbb{R}^N} |\nabla U_n|^2 \, dx$,
as in Theorem \ref{th2d}.
(c) $N=3$ and $U_n$ minimizes $E_{c_n} = E + c_n Q$ under the constraint $P_{c_n} = 0$,
as in Theorem \ref{thM}.
Then there exists $n_0 \in \mathbb{N}$ such that $|U_n| \geq r_0/2$ in $\mathbb{R}^N$ for all $n \geq n_0$ and,
denoting $\varepsilon_n = \sqrt{{\mathfrak c}_s^2 - c_n^2}$ (so that $c_n = c(\varepsilon_n)$), we have
\begin{eqnarray}
\label{energy}
E(U_n) \sim - {\mathfrak c}_s Q(U_n) \sim
r_0^2 {\mathfrak c}_s^4 (7 - 2N) \, \mathscr{S}_{\rm min} \Big( {\mathfrak c}_s^2 - c_n^2 \Big)^{\frac{5-2N}{2}}
= r_0^2 {\mathfrak c}_s^4 (7 - 2N) \, \mathscr{S}_{\rm min} \, \varepsilon_n^{5-2N}
\end{eqnarray}
and
\begin{eqnarray}
\label{Ec}
E(U_n) + c_n Q(U_n) \sim {\mathfrak c}_s^2 r_0^2 \, \mathscr{S}_{\rm min} \, \varepsilon_n^{7-2N}
\qquad \mbox{ as } n \to \infty.
\end{eqnarray}
Moreover, $U_n$ can be written in the form
$$ U_n(x) = r_0 \Big( 1 + \varepsilon_n^2 A_n(z) \Big)
\exp \big( i \varepsilon_n \varphi_n(z) \big) , \qquad \mbox{ where } \quad
z_1 = \varepsilon_n x_1, \quad z_\perp = \varepsilon_n^2 x_\perp , $$
and there exist a subsequence $(U_{n_k}, c_{n_k})_{k \geq 1}$, a ground state $\mathbf{W}$ of {\rm (KP-I)}
and a sequence $(z^k)_{k \geq 1} \subset \mathbb{R}^N$
such that, denoting $\tilde{A}_k = A_{n_k}(\cdot - z^k)$, $\tilde{\varphi}_k = \varphi_{n_k}(\cdot - z^k)$,
for any $1 < p < \infty$ we have
$$
\tilde{A}_k \to \mathbf{W}, \qquad
\partial_{z_1} \tilde{A}_k \to \partial_{z_1} \mathbf{W}, \qquad
\partial_{z_1} \tilde{\varphi}_k \to {\mathfrak c}_s \mathbf{W} \quad \mbox{{\it and}} \quad
\partial_{z_1}^2 \tilde{\varphi}_k \to {\mathfrak c}_s \partial_{z_1} \mathbf{W} \quad {\it in} \quad W^{1,p}(\mathbb{R}^N) \mbox{ as } k \to \infty.
$$
\end{theo}
As already mentioned,
if $F$ satisfies (A3) and (A4) it is possible to modify $F$
in a neighborhood of infinity such that the modified nonlinearity $\tilde{F}$ also satisfies (A2)
and (TW$_c$) has the same solutions as the same equation with $\tilde{F}$ instead of $F$.
Then one may use Theorems \ref{th2dposit}, \ref{th2d} and \ref{thM} to construct travelling waves for (NLS).
It is obvious that Theorem \ref{res1} above also applies to the solutions constructed in this way.
Let us mention that in the case of the Gross-Pitaevskii
nonlinearity $F(\varrho) = 1 - \varrho$ and in dimension $N=2$,
F. B\'ethuel, P. Gravejat and
J.-C. Saut proved in \cite{BGS1} the same type of convergence for
the solutions constructed in \cite{BGS2}. Those solutions are global
minimizers of the energy with prescribed momentum, which allows one to
derive {\it a priori} bounds: for instance, their energy is small. In fact,
if $V$ is nonnegative and $N=2$, Theorem \ref{th2dposit} provides travelling
wave solutions with speed $\simeq {\mathfrak c}_s$ for $|q|$ small and the proof
of Theorem \ref{res1} is quite similar to \cite{BGS1}, and therefore we will focus on the
other cases. However, if the potential $V$ achieves negative values,
the minimization of the energy under the constraint of fixed
momentum on the whole space $\mathcal{E}$ is no longer possible,
hence the approach in Theorem \ref{th2d} or the local minimization approach in
Theorem \ref{th2dposit} $(b)$. In dimension $N=3$ (even for the
Gross-Pitaevskii nonlinearity $F(\varrho) = 1 - \varrho$), the travelling waves we deal with
have high energy and momentum and are {\it not} minimizers of the energy at fixed momentum
(which are the vortex rings, see \cite{BOS}).
In particular, we have to show that the $U_n$'s are vortexless
($|U_n| \geq r_0/2$). For the Gross-Pitaevskii nonlinearity,
Theorem \ref{res1} provides a rigorous proof of the existence of
the upper branch in the so-called Jones-Roberts curve in
dimension three (\cite{JR}). This upper branch was conjectured from
formal expansions and numerical simulations (however limited to not
so large momentum). In dimension $N=3$, the solutions on this
upper branch are expected to be unstable (see \cite{BR}), and these
rarefaction pulses should evolve by creating vortices (cf. \cite{B}).\\
It is also natural to investigate the one-dimensional case.
Firstly, the (KP-I) equation has to be replaced by the (KdV)
equation
\begin{equation}
\tag{KdV}
2 \partial_\tau \zeta + \Gamma \zeta \partial_z \zeta
- \frac{1}{{\mathfrak c}_s^2} \, \partial_z^3 \zeta = 0 ,
\end{equation}
and (SW) becomes
$$ \frac{1}{{\mathfrak c}_s^2} \, \partial_z \mathbf{W} + \Gamma \mathbf{W} \partial_z \mathbf{W}
- \frac{1}{{\mathfrak c}_s^2} \, \partial_z^3 \mathbf{W} = 0 . $$
If $\Gamma \neq 0$, the only
nontrivial travelling wave for (KdV) (up to space translations) is given by
$$ {\rm w}(z) = - \frac{3}{{\mathfrak c}_s^2 \Gamma \cosh^2(z/2)} , $$
and there holds
$$ \mathscr{S}({\rm w}) = \int_{\mathbb{R}} \frac{1}{{\mathfrak c}_s^2} \, (\partial_z {\rm w})^2
+ \frac{\Gamma}{3} \, {\rm w}^3 \ dz + \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb{R}} {\rm w}^2 \ dz
= \int_{\mathbb{R}} \frac{2}{{\mathfrak c}_s^2} \, (\partial_z {\rm w})^2 \ dz
= \frac{48}{5 {\mathfrak c}_s^6 \Gamma^2} . $$
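The value of $\mathscr{S}({\rm w})$ follows from the classical integrals for this profile, which we record for the reader's convenience:
$$ \int_{\mathbb{R}} {\rm w}^2 \ dz = \frac{24}{{\mathfrak c}_s^4 \Gamma^2}, \qquad
\int_{\mathbb{R}} (\partial_z {\rm w})^2 \ dz = \frac{24}{5 {\mathfrak c}_s^4 \Gamma^2}, \qquad
\int_{\mathbb{R}} {\rm w}^3 \ dz = - \frac{288}{5 {\mathfrak c}_s^6 \Gamma^3} , $$
whence
$$ \mathscr{S}({\rm w}) = \frac{24}{5 {\mathfrak c}_s^6 \Gamma^2} - \frac{96}{5 {\mathfrak c}_s^6 \Gamma^2} + \frac{120}{5 {\mathfrak c}_s^6 \Gamma^2} = \frac{48}{5 {\mathfrak c}_s^6 \Gamma^2} . $$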
The following
result, which corresponds to Theorem \ref{res1} in dimension $N=1$,
was proved in \cite{C1d} by using ODE techniques.
\begin{theo}[\cite{C1d}] \label{res2}
Let $N = 1$ and assume that $F$ satisfies (A4) with $\Gamma \neq 0$.
Then, there are $\delta > 0$ and $0 < \mathfrak{c}_0 < {\mathfrak c}_s$ with
the following properties. For any $\mathfrak{c}_0 \leq c < {\mathfrak c}_s$,
there exists a solution $U_c$ to {\rm (TW$_c$)} satisfying
$\| \, |U_c| - r_0 \|_{L^\infty(\mathbb{R})} \leq \delta$.
Moreover, for
$\mathfrak{c}_0 \leq c < {\mathfrak c}_s$ any nonconstant solution $u$
of {\rm (TW$_c$)} verifying $\| \, |u| - r_0 \|_{L^\infty(\mathbb{R})} \leq \delta$ is of the form
$u(x) = e^{i\theta} U_c(x - \xi)$
for some $\theta \in \mathbb{R}$ and $\xi \in \mathbb{R}$. The map $U_c$ can be written in
the form
$$ U_c(x) = r_0 \left( 1 + \varepsilon^2 A_\varepsilon(z) \right)
\exp( i \varepsilon \varphi_\varepsilon(z) ) , \qquad \mbox{ where } z = \varepsilon x \quad \mbox{ and }
\quad \varepsilon = \sqrt{{\mathfrak c}_s^2 - c^2} , $$
and for any $1 \leq p \leq \infty$,
$$ \partial_z \varphi_\varepsilon \to {\mathfrak c}_s {\rm w}
\qquad {\it and} \qquad
A_\varepsilon \to {\rm w} \quad {\it in} \quad W^{1,p}(\mathbb{R})
\qquad {\it as} \quad \varepsilon \to 0 . $$
Finally, as $\varepsilon \to 0$,
$$ E(U_{c(\varepsilon)}) \sim - {\mathfrak c}_s Q(U_{c(\varepsilon)}) \sim
5 r_0^2 {\mathfrak c}_s^4 \, \mathscr{S}({\rm w}) \Big( {\mathfrak c}_s^2 - c^2(\varepsilon) \Big)^{\frac{3}{2}}
= \varepsilon^3 \, \frac{48 r_0^2}{{\mathfrak c}_s^2 \Gamma^2} $$
and
$$ E(U_{c(\varepsilon)}) + c(\varepsilon) Q(U_{c(\varepsilon)}) \sim {\mathfrak c}_s^2 r_0^2 \, \mathscr{S}({\rm w}) \, \varepsilon^5
= \frac{48 r_0^2}{5 {\mathfrak c}_s^4 \Gamma^2} \, \varepsilon^5 . $$
\end{theo}
\begin{rem} \rm
In the one-dimensional case it can be easily shown that
the mapping $(\mathfrak{c}_0, {\mathfrak c}_s) \ni c \mapsto (A_c - r_0, \partial_z \phi) \in W^{1,p}(\mathbb{R})$,
where $U_c = A_c \exp(i \phi)$, is continuous for every $1 \leq p \leq \infty$.
\end{rem}
A natural question is to investigate the dynamical counterparts of Theorems
\ref{res1} and \ref{res2}. If $\Psi_\varepsilon^0$ is an initial datum for (NLS)
of the type
$$ \Psi_\varepsilon^0(x) = r_0 \Big( 1 + \varepsilon^2 A_\varepsilon^0(z) \Big)
\exp \Big( i \varepsilon \varphi_\varepsilon^0(z) \Big) , $$
with $z = (z_1, z_\perp) = (\varepsilon x_1, \varepsilon^2 x_\perp)$ and
${\mathfrak c}_s A_\varepsilon^0 \simeq \partial_{z_1} \varphi_\varepsilon^0$, we use for $\Psi_\varepsilon$
the ansatz at time $t > 0$, for some functions
$A_\varepsilon$, $\varphi_\varepsilon$ depending on $(\tau, z)$,
$$ \Psi_\varepsilon(t, x) = r_0 \Big( 1 + \varepsilon^2 A_\varepsilon(\tau, z) \Big)
e^{i \varepsilon \varphi_\varepsilon(\tau, z)} , \qquad
\tau = {\mathfrak c}_s \varepsilon^3 t , \quad z_1 = \varepsilon (x_1 - {\mathfrak c}_s t) ,
\quad z_\perp = \varepsilon^2 x_\perp . $$
Similar computations imply that, for times $\tau$ of order one
(that is, $t$ of order $\varepsilon^{-3}$), we have ${\mathfrak c}_s A_\varepsilon \simeq \partial_{z_1} \varphi_\varepsilon$
and $A_\varepsilon$ converges to a solution of the (KP-I) equation.
This (KP-I) asymptotic dynamics for
the Gross-Pitaevskii equation
in dimension $N=3$ is formally derived in \cite{BR} and is used to investigate
the linear instability
of the solitary waves of speed close to ${\mathfrak c}_s = \sqrt{2}$. The one-dimensional
analogue, where the (KP-I) equation has to be replaced by the corresponding
Korteweg-de Vries equation, can be found in \cite{ZK} and \cite{KAL}.
The rigorous mathematical proofs of these regimes have been provided
in \cite{CR2} in arbitrary space dimension and for a general nonlinearity $F$
(the coefficient $\Gamma$ may even vanish), and in \cite{BGSS} for the one-dimensional
Gross-Pitaevskii equation by using
the complete integrability of the equation (more precisely, the existence of
sufficiently many conservation laws).
\subsection{Scheme of the proof of Theorem \ref{res1}}
In case $(a)$ there is a direct proof of Theorem \ref{res1} which is quite similar
to the one in \cite{BGS1}. Moreover, it follows from
Proposition 5.12 in \cite{CM1} that if $(U_n, c_n)$ satisfies $(a)$
then it also satisfies $(b)$, so it suffices to prove Theorem \ref{res1}
in cases $(b)$ and $(c)$.
The first step is to give sharp asymptotics for the quantities
minimized in \cite{CM1} and \cite{Maris} in order to prove the existence
of travelling waves, namely to estimate
$$ I_{\rm min}(k) =
\inf \Big\{ \int_{\mathbb{R}^2} V(|\psi|^2) \ dx + Q(\psi) \; \Big\vert \; \psi \in \mathcal{E}, \
\int_{\mathbb{R}^2} |\nabla \psi|^2 \ dx = k \Big\} \qquad \mbox{ as } k \to 0$$
and
$$ T_c = \inf \Big\{ E(\psi) + c Q(\psi) \; \Big\vert \; \psi \in \mathcal{E}, \; \psi \mbox{ is not constant, }
E(\psi) + c Q(\psi) =
\int_{\mathbb{R}^3} |\nabla_{x_\perp} \psi|^2 \ dx \Big\} \qquad \mbox{ as } c \to {\mathfrak c}_s.$$
These bounds are obtained by plugging test functions with the
ansatz \eqref{ansatzKP} into the corresponding minimization problems,
where $(A_\varepsilon, \varphi_\varepsilon) \simeq (A, {\mathfrak c}_s^{-1} \partial_{z_1}^{-1} A)$
and $A$ is a ground state for (KP-I).
A similar upper bound for $I_{\rm min}(k)$ was already a
crucial point in \cite{CM1} to rule out the dichotomy of
minimizing sequences.
\begin{prop}
\label{asympto}
Assume that $F$ satisfies (A2) and (A4) with $\Gamma \neq 0$. Then:
(i) If $N=2$, we have as $k \to 0$
$$ I_{\rm min}(k) \leq - \frac{k}{{\mathfrak c}_s^2}
- \frac{4 k^3}{27 r_0^4 {\mathfrak c}_s^{12} \, \mathscr{S}^2_{\rm min}} + \mathcal{O}(k^5) . $$
(ii) If $N = 3$, the following upper bound holds as
$\varepsilon \to 0$ (that is, as $c(\varepsilon) \to {\mathfrak c}_s$):
$$ T_{c(\varepsilon)} \leq
{\mathfrak c}_s^2 r_0^2 \, \mathscr{S}_{\rm min} \, ({\mathfrak c}_s^2 - c^2(\varepsilon))^{\frac{1}{2}}
+ \mathcal{O}\Big( ({\mathfrak c}_s^2 - c^2(\varepsilon))^{\frac{3}{2}} \Big)
= {\mathfrak c}_s^2 r_0^2 \, \mathscr{S}_{\rm min} \, \varepsilon + \mathcal{O}(\varepsilon^3) . $$
\end{prop}
The second step is to derive upper bounds for the energy and the momentum.
In space dimension three (case $(c)$) this is tricky.
Indeed, if $U_c$ is a minimizer of $E_c$ under the constraint $P_c = 0$, the only
information we have is about $T_c = \displaystyle \int_{\mathbb{R}^N} |\nabla_{x_\perp} U_c|^2 \, dx$
(see the first identity in (\ref{Pohozaev})).
In particular, we have no {\it a priori} bounds on
$\displaystyle \int_{\mathbb{R}^N} \Big| \frac{\partial U_c}{\partial x_1} \Big|^2 \, dx$, $Q(U_c)$ and the potential energy
$\displaystyle \int_{\mathbb{R}^N} V(|U_c|^2) \, dx$.
Using an averaging argument, we infer that there is a sequence $(U_n, c_n)$ for which we have ``good'' bounds
on the energy and the momentum.
Then we prove a rigidity property of ``good sequences'':
any sequence $(U_n, c_n)$ that satisfies the ``good bounds''
has a subsequence that satisfies the conclusion
of Theorem \ref{res1}.
This rigid behavior, combined with the existence of a sequence with ``good bounds''
and a continuation argument, allows us to conclude that
Theorem \ref{res1} holds for {\it any} sequence $(U_n, c_n)$ with $c_n \to {\mathfrak c}_s$ (as in (c)).
More precisely, we will prove:
\begin{prop} \label{monoto}
Let $N \geq 3$ and assume that $F$ satisfies (A1) and (A2). Then:
(i) For any $c \in (0, {\mathfrak c}_s)$ and any minimizer $U$ of $E_c$ in $\mathcal{C}_c$ we have $Q(U) < 0$.
(ii) The function $(0, {\mathfrak c}_s) \ni c \longmapsto T_c \in \mathbb{R}_+$ is decreasing,
thus has a derivative almost everywhere.
(iii) The function $c \longmapsto T_c$ is left continuous on $(0, {\mathfrak c}_s)$.
If it
has a derivative at $c_0$, then for any minimizer $U_0$ of $E_{c_0}$ under the constraint $P_{c_0} = 0$,
scaled so that $U_0$ solves {\rm (TW}$_{c_0}${\rm )}, there holds
$$ \frac{d T_c}{dc}\Big|_{c=c_0} = Q(U_0) . $$
(iv) Let $c_0 \in (0, {\mathfrak c}_s)$.
Assume that there is a sequence $(c_n)_{n \geq 1}$ such that $c_n > c_0$, $c_n \to c_0$,
and for any $n$ there is a minimizer $U_n \in \mathcal{E}$
of $E_{c_n}$ on $\mathcal{C}_{c_n}$ which solves (TW$_{c_n}$), and the sequence $(Q(U_n))_{n \geq 1}$ is bounded.
Then $c \longmapsto T_c$ is continuous at $c_0$.
(v) Let $0 < c_1 < c_2 < {\mathfrak c}_s$. Let $U_i$ be minimizers of $E_{c_i}$
on $\mathcal{C}_{c_i}$, $i = 1, 2$,
such that $U_i$ solves {\rm (TW}$_{c_i}${\rm )}.
Denote $q_1 = Q(U_1)$ and $q_2 = Q(U_2)$. Then we have
$$
\frac{T_{c_1}^2}{q_1^2} - c_1^2 \geq \frac{T_{c_2}^2}{q_2^2} - c_2^2 .
$$
(vi) If $N=3$, $F$ verifies (A4) and $\Gamma \neq 0$, there exist a constant $C > 0$ and
a sequence $\varepsilon_n \to 0$ such that for any minimizer $U_n \in \mathcal{E}$ of $E_{c(\varepsilon_n)}$ on $\mathcal{C}_{c(\varepsilon_n)}$
which solves {\rm (TW}$_{c(\varepsilon_n)}${\rm )} we have
$$ E(U_n) \leq \frac{C}{\varepsilon_n} \qquad \mbox{ and } \qquad |Q(U_n)| \leq \frac{C}{\varepsilon_n} . $$
\end{prop}
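Let us point out the formal computation behind (iii): if $U_c$ denotes a minimizer of $E_c$ under the constraint $P_c = 0$, scaled so that it solves (TW$_c$), then $T_c = E(U_c) + c \, Q(U_c)$ and, differentiating formally at $c = c_0$,
$$ \frac{d T_c}{dc}\Big|_{c=c_0} = Q(U_{c_0}) + \Big\langle E_{c_0}'(U_{c_0}), \frac{d U_c}{dc}\Big|_{c=c_0} \Big\rangle = Q(U_{c_0}) , $$
because $E_{c_0}'(U_{c_0}) = 0$ precisely when $U_{c_0}$ solves (TW$_{c_0}$). This argument is only heuristic, since the map $c \mapsto U_c$ need not be differentiable, nor even well defined.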
\begin{prop} \label{convergence}
Assume that $N=3$, that (A2) and (A4) hold, and that $\Gamma \neq 0$.
Let $(U_n, \varepsilon_n)_{n \geq 1}$ be a sequence such that $\varepsilon_n \to 0$,
$U_n$ minimizes $E_{c(\varepsilon_n)}$ on $\mathcal{C}_{c(\varepsilon_n)}$, satisfies {\rm (TW}$_{c(\varepsilon_n)}${\rm )},
and there exists a constant $C > 0$ such that
$$
E(U_n) \leq \frac{C}{\varepsilon_n} \qquad \mbox{ and } \qquad |Q(U_n)| \leq \frac{C}{\varepsilon_n}
\qquad \mbox{ for all } n.
$$
Then there is a subsequence of $(U_n, c(\varepsilon_n))_{n \geq 1}$ which satisfies the conclusion of Theorem \ref{res1}.
\end{prop}
\begin{prop} \label{global3}
Let $N = 3$ and suppose that (A2) and (A4) hold with $\Gamma \neq 0$.
There are $K > 0$ and $\varepsilon_* > 0$ such that for any $\varepsilon \in (0, \varepsilon_*)$ and for any
minimizer $U$ of $E_{c(\varepsilon)}$ on $\mathcal{C}_{c(\varepsilon)}$ scaled so that $U$ satisfies {\rm (TW}$_{c(\varepsilon)}${\rm )}
we have
$$
E(U) \leq \frac{K}{\varepsilon} \qquad \mbox{ and } \qquad |Q(U)| \leq \frac{K}{\varepsilon} .
$$
\end{prop}
It is now obvious that the proof of Theorem \ref{res1} in the three-dimensional
case follows directly from Propositions \ref{convergence} and \ref{global3} above.
The most difficult and technical point in the above program is to prove Proposition \ref{convergence}.
Let us describe our strategy to carry out that proof, as well as the proof of Theorem \ref{res1} in the two-dimensional case.
Once we have a sequence of travelling waves to (NLS) with ``good bounds'' on the energy and the momentum
and speeds that tend to ${\mathfrak c}_s$, we need to show that those solutions do not vanish and can be lifted.
We recall the following result, which is a consequence of Lemma 7.1 in \cite{CM1}:
\begin{lem}[\cite{CM1}] \label{liftingfacile}
Let $N \geq 2$ and suppose that the nonlinearity
$F$ satisfies (A1) and ((A2) or (A3)).
Then for any $\delta > 0$ there is $M(\delta) > 0$ such that for all $c \in [0, {\mathfrak c}_s]$ and for all solutions
$U \in \mathcal{E}$ of {\rm (TW}$_c${\rm )} such that $\| \nabla U \|_{L^2(\mathbb R^N)} < M(\delta)$ we have
$$ \| \, |U| - r_0 \|_{L^{\infty}(\mathbb R^N)} \leq \delta. $$
\end{lem}
In the two-dimensional case the lifting properties follow immediately from Lemma \ref{liftingfacile}.
However, in dimension $N = 3$, for travelling waves $U_{c(\varepsilon)}$
which minimize $E_{c(\varepsilon)}$ on $\mathcal{C}_{c(\varepsilon)}$ the quantity
$\Big\| \frac{\partial U_{c(\varepsilon)}}{\partial x_1} \Big\|_{L^2}^2$
is large, of order $\simeq \varepsilon^{-1}$ as $\varepsilon \to 0$.
We give a lifting
result for those solutions, based on the fact that
$\| \nabla_{x_{\perp}} U_{c(\varepsilon)} \|_{L^2}^2 = \frac{N-1}{2} T_{c(\varepsilon)}$ is sufficiently small.
\begin{prop} \label{lifting}
We consider a nonlinearity $F$ satisfying (A1) and ((A2) or (A3)).
Let $U \in \mathcal{E}$ be a travelling wave to {\rm (NLS)} of speed $c \in [0, {\mathfrak c}_s]$.

(i) If $N \geq 3$, for any $0 < \delta < r_0$ there exists
$\mu = \mu(\delta) > 0$ such that
$$
\Big\| \frac{\partial U}{\partial x_1} \Big\|_{L^2(\mathbb R^N)} \cdot \| \nabla_{x_{\perp}} U \|_{L^2(\mathbb R^N)}^{N-1} \leq \mu(\delta)
\qquad \mbox{ implies } \qquad
\| \, |U| - r_0 \|_{L^{\infty}(\mathbb R^N)} \leq \delta.
$$

(ii) If $N \geq 4$ and, moreover, (A3) holds or
$\displaystyle \Big\| \frac{\partial U}{\partial x_1} \Big\|_{L^2(\mathbb R^N)} \cdot \| \nabla_{x_{\perp}} U \|_{L^2(\mathbb R^N)}^{N-1} \leq 1$,
then for any $\delta > 0$ there is $m(\delta) > 0$ such that
$$
\int_{\mathbb R^N} |\nabla_{x_{\perp}} U|^2 \, dx \leq m(\delta)
\qquad \mbox{ implies } \qquad
\| \, |U| - r_0 \|_{L^{\infty}(\mathbb R^N)} \leq \delta.
$$
\end{prop}
As an immediate consequence, the three-dimensional travelling
wave solutions provided by Theorem \ref{thM} have modulus close to $r_0$
(hence do not vanish) as $c \to {\mathfrak c}_s$:

\begin{cor}
\label{sanszero}
Let $N = 3$ and consider a nonlinearity $F$ satisfying (A2)
and (A4) with $\Gamma \neq 0$.
Then, the travelling wave solutions $U_{c(\varepsilon)}$ to (NLS)
provided by Theorem \ref{thM}
which satisfy an additional bound $E(U_{c(\varepsilon)}) \leq \frac{C}{\varepsilon}$
(with $C$ independent of $\varepsilon$)
verify
$$ \| \, |U_{c(\varepsilon)}| - r_0 \|_{L^{\infty}(\mathbb R^3)} \to 0
\quad \quad \mbox{ as } \quad \varepsilon \to 0. $$
In particular, for $\varepsilon$ sufficiently close to $0$ we have $|U_{c(\varepsilon)}| \geq r_0/2$
in $\mathbb R^3$.
\end{cor}
\noindent {\it Proof.}
By the second identity in \eqref{Pohozaev} we have
$$
\int_{\mathbb R^3} \Big| \frac{\partial U_{c(\varepsilon)}}{\partial x_1} \Big|^2 \, dx = \frac{1}{2} E(U_{c(\varepsilon)}) \leq \frac{C}{\varepsilon}.
$$
Moreover, the first identity in \eqref{Pohozaev} and Proposition \ref{asympto} $(ii)$ imply
$$ \int_{\mathbb R^3} |\nabla_{x_{\perp}} U_{c(\varepsilon)}|^2 \, dx = E_{c(\varepsilon)}(U_{c(\varepsilon)}) = T_{c(\varepsilon)} \leq C \varepsilon. $$
Hence
$\Big\| \frac{\partial U_{c(\varepsilon)}}{\partial x_1} \Big\|_{L^2(\mathbb R^3)} \| \nabla_{x_{\perp}} U_{c(\varepsilon)} \|_{L^2(\mathbb R^3)}^{2} \leq C \sqrt{\varepsilon}$
and the result follows from Proposition \ref{lifting} $(i)$. \hfill $\Box$ \\
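As an informal sanity check of the power counting in the proof above (entirely outside the paper's argument), the following Python/SymPy sketch confirms that the two Pohozaev bounds force the product in the hypothesis of the lifting proposition to vanish as $\varepsilon \to 0$; the symbols `C` and `varepsilon` are illustrative stand-ins for the constants in the proof.

```python
import sympy as sp

# Stand-ins for the bounds in the proof of Corollary "sanszero":
#   ||d_{x1} U||_{L^2}^2 <= C/eps   and   ||nabla_perp U||_{L^2}^2 <= C*eps.
C, eps = sp.symbols('C varepsilon', positive=True)

# Product in the hypothesis of the lifting proposition (N = 3, exponent N-1 = 2):
#   ||d_{x1} U|| * ||nabla_perp U||^2  <=  sqrt(C/eps) * (C*eps).
product_bound = sp.sqrt(C / eps) * (C * eps)

# The bound simplifies to C^(3/2) * sqrt(eps), hence tends to 0 with eps.
assert sp.simplify(product_bound - C**sp.Rational(3, 2) * sp.sqrt(eps)) == 0
assert sp.limit(product_bound, eps, 0, '+') == 0
```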
We now give some properties of the two-dimensional travelling wave solutions provided by Theorem \ref{th2d}.

\begin{prop} \label{prop2d}
Let $N = 2$ and assume that $F$ verifies (A2) and (A4)
with $\Gamma \neq 0$. Then there exist constants $C_1, \, C_2, \, C_3, \, C_4 > 0$ and
$0 < k_* < k_{\infty}$ such that all travelling wave solutions
$U_k$ provided by Theorem \ref{th2d} with
$0 < k = \displaystyle \int_{\mathbb R^2} |\nabla U_k|^2 \, dx < k_*$
satisfy $|U_k| \geq r_0/2$ in $\mathbb R^2$,
\begin{equation}
\label{estim2d}
C_1 k \leq -Q(U_k) \leq C_2 k, \qquad
C_1 k \leq \int_{\mathbb R^2} V(|U_k|^2) \, dx \leq C_2 k, \qquad
C_1 k \leq \int_{\mathbb R^2} (\chi^2(|U_k|) - r_0^2)^2 \, dx \leq C_2 k
\end{equation}
and have a speed $c(U_k) = \sqrt{{\mathfrak c}_s^2 - \varepsilon_k^2}$ satisfying
\begin{equation}
\label{kifkif}
C_3 k \leq \varepsilon_k \leq C_4 k.
\end{equation}
\end{prop}
At this stage, we know that the travelling waves provided by Theorems \ref{th2d} and \ref{thM}
do not vanish if their speed is sufficiently close to ${\mathfrak c}_s$.
Using the above lifting results, we may write such a solution $U_c$ in the form
\begin{equation}
\label{ansatz}
U_c(x) = \rho(x) e^{i\phi(x)}
= r_0 \sqrt{1 + \varepsilon^2 \mathcal{A}_{\varepsilon}(z)} \; e^{i\varepsilon \varphi_{\varepsilon}(z)},
\quad \quad \mbox{ where } \quad \varepsilon = \sqrt{{\mathfrak c}_s^2 - c^2}, \quad z_1 = \varepsilon x_1, \ z_\perp = \varepsilon^2 x_\perp,
\end{equation}
and we use the same scaling as in \eqref{ansatzKP}. The interest of
writing the modulus in this way (and not as in \eqref{ansatzKP}) is just to
simplify a little bit the algebra and to have expressions similar to
those in \cite{BGS1}. Since $\mathcal{A}_{\varepsilon} = 2 A_{\varepsilon} + \varepsilon^2 A_{\varepsilon}^2$,
bounds in Sobolev spaces for $\mathcal{A}_{\varepsilon}$ imply similar Sobolev bounds for $A_{\varepsilon}$ and conversely.
We shall now find Sobolev bounds for $\mathcal{A}_{\varepsilon}$ and $\varphi_{\varepsilon}$.
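The algebraic relation between the two ways of writing the modulus is a one-line computation; as a quick check (not part of the argument), the following Python/SymPy sketch verifies it:

```python
import sympy as sp

# Check the link between the two parametrizations of the modulus:
#   r0*sqrt(1 + eps^2 * A_big) = r0*(1 + eps^2 * A)
# forces A_big = 2*A + eps^2*A^2. We compare squares to avoid branch issues.
eps, A = sp.symbols('varepsilon A', positive=True)
A_big = 2 * A + eps**2 * A**2

lhs = sp.sqrt(1 + eps**2 * A_big)
rhs = 1 + eps**2 * A
assert sp.simplify(lhs**2 - rhs**2) == 0
```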
It is easy to see that (TW$_{c}$) is equivalent to the following system for the phase $\phi$
and the modulus $\rho$ (in the original variable $x$):
\begin{equation}
\label{phasemod}
\left\{ \begin{array}{l}
\displaystyle{ c \frac{\partial}{\partial x_1} (\rho^2 - r_0^2) =
2 \, \mbox{div} (\rho^2 \nabla \phi) }, \\ \\
\displaystyle{ \Delta \rho - \rho |\nabla \phi|^2 + \rho F(\rho^2)
= - c \rho \frac{\partial \phi}{\partial x_1} }.
\end{array} \right.
\end{equation}
Multiplying the second equation by $2\rho$, we write \eqref{phasemod} in the form
\begin{equation}
\label{phasemod2}
\left\{ \begin{array}{l}
2 \, \mbox{div} ((\rho^2 - r_0^2) \nabla \phi)
- \displaystyle{ c \frac{\partial}{\partial x_1} (\rho^2 - r_0^2) } = - 2 r_0^2 \Delta \phi, \\ \\
\displaystyle{ \Delta (\rho^2 - r_0^2) - 2 |\nabla U_c|^2 + 2 \rho^2 F(\rho^2)
+ 2c (\rho^2 - r_0^2) \frac{\partial \phi}{\partial x_1}
= - 2 c r_0^2 \frac{\partial \phi}{\partial x_1} }.
\end{array} \right.
\end{equation}
Let $\eta = \rho^2 - r_0^2$.
We apply the operator $\displaystyle -2c \frac{\partial}{\partial x_1}$ to the first equation in \eqref{phasemod2}
and we take the Laplacian of the second one, then we add the resulting equalities to get
\begin{equation}
\label{fond}
\left[ \Delta^2 - {\mathfrak c}_s^2 \Delta + c^2 \frac{\partial^2}{\partial x_1^2} \right] \eta
= \Delta \left( 2 |\nabla U_c|^2 - 2c \eta \frac{\partial \phi}{\partial x_1}
- 2 \rho^2 F(\rho^2) - {\mathfrak c}_s^2 \eta \right)
+ 2c \frac{\partial}{\partial x_1} (\mbox{div} (\eta \nabla \phi)).
\end{equation}
Since ${\mathfrak c}_s^2 = -2 r_0^2 F'(r_0^2)$, using the Taylor expansion
$$
2 (s + r_0^2) F(s + r_0^2) + {\mathfrak c}_s^2 s
= - \frac{{\mathfrak c}_s^2}{r_0^2} \left( 1 - \frac{r_0^4 F''(r_0^2)}{{\mathfrak c}_s^2} \right) s^2 + r_0^2 \tilde{F}_3(s),
$$
where $\tilde{F}_3(s) = \mathcal{O}(s^3)$ as $s \to 0$, we
see that the right-hand side in \eqref{fond} is at least quadratic in
$(\eta, \phi)$. Then we perform a scaling and pass to the variable $z = (\varepsilon x_1, \varepsilon^2 x_{\perp})$
(where $\varepsilon = \sqrt{{\mathfrak c}_s^2 - c^2}$), so that \eqref{fond} becomes
\begin{align}
\label{Fonda}
\Big\{ \partial_{z_1}^4 - \partial_{z_1}^2 - {\mathfrak c}_s^2 \Delta_{z_\perp}
+ 2 \varepsilon^2 \partial_{z_1}^2 \Delta_{z_\perp} + \varepsilon^4 \Delta^2_{z_\perp} \Big\}
\mathcal{A}_{\varepsilon} = & \ \mathcal{R}_{\varepsilon},
\end{align}
where $\mathcal{R}_{\varepsilon}$ contains terms at least quadratic in $(\mathcal{A}_{\varepsilon}, \varphi_{\varepsilon})$:
\begin{align*}
\mathcal{R}_{\varepsilon} = & \
\{ \partial_{z_1}^2 + \varepsilon^2 \Delta_{z_\perp} \} \Big[
2 (1 + \varepsilon^2 \mathcal{A}_{\varepsilon}) \Big( (\partial_{z_1} \varphi_{\varepsilon})^2
+ \varepsilon^2 |\nabla_{z_\perp} \varphi_{\varepsilon}|^2 \Big) + \varepsilon^2 \frac{(\partial_{z_1} \mathcal{A}_{\varepsilon})^2
+ \varepsilon^2 |\nabla_{z_\perp} \mathcal{A}_{\varepsilon}|^2}{2 (1 + \varepsilon^2 \mathcal{A}_{\varepsilon})} \Big]
\\ & \
- 2c \varepsilon^2 \Delta_{z_\perp} (\mathcal{A}_{\varepsilon} \partial_{z_1} \varphi_{\varepsilon})
+ 2c \varepsilon^2 \displaystyle \sum_{j=2}^N \partial_{z_1} \partial_{z_j} (\mathcal{A}_{\varepsilon} \partial_{z_j} \varphi_{\varepsilon})
\\ & \
+ \{ \partial_{z_1}^2 + \varepsilon^2 \Delta_{z_\perp} \} \Big[
{\mathfrak c}_s^2 \Big( 1 - \frac{r_0^4 F''(r_0^2)}{{\mathfrak c}_s^2} \Big) \mathcal{A}_{\varepsilon}^2
- \frac{1}{\varepsilon^4} \tilde{F}_3(r_0^2 \varepsilon^2 \mathcal{A}_{\varepsilon}) \Big].
\end{align*}
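The passage from the linear operator of \eqref{fond} to the one on the left of \eqref{Fonda} is pure bookkeeping under the scaling $z = (\varepsilon x_1, \varepsilon^2 x_\perp)$, $c^2 = {\mathfrak c}_s^2 - \varepsilon^2$; a symbolic check (illustrative, with commuting symbols standing for the differential operators) is:

```python
import sympy as sp

# Symbolic check of the rescaling turning the linear operator of (fond),
#   Delta^2 - cs^2*Delta + c^2 * d^2/dx1^2,
# into the operator of (Fonda). Commuting symbols: a stands for d^2/dz1^2
# and b for Delta_{z_perp}; under z1 = eps*x1, z_perp = eps^2*x_perp we have
# d^2/dx1^2 -> eps^2*a and Delta_{x_perp} -> eps^4*b.
a, b, eps, cs = sp.symbols('a b varepsilon c_s', positive=True)

lap = eps**2 * a + eps**4 * b        # full Laplacian after the scaling
c2 = cs**2 - eps**2                  # c^2 = cs^2 - eps^2
op_x = lap**2 - cs**2 * lap + c2 * (eps**2 * a)

# Dividing by eps^4 should recover the operator in (Fonda).
op_fonda = a**2 - a - cs**2 * b + 2 * eps**2 * a * b + eps**4 * b**2
assert sp.expand(op_x / eps**4 - op_fonda) == 0
```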
In the two-dimensional case,
uniform bounds (with respect to $\varepsilon$) in Sobolev spaces have been derived in \cite{BGS1}
by using \eqref{Fonda} and a bootstrap argument. This technique is based upon the fact that some kernels related
to the linear part in \eqref{Fonda}, such as
$$ \mathcal{F}^{-1} \Big( \frac{\xi_1^2}{\xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_{\perp}|^2
+ 2 \varepsilon^2 \xi_1^2 |\xi_{\perp}|^2 + \varepsilon^4 |\xi_{\perp}|^4} \Big)
\quad \quad {\rm and} \quad \quad
\mathcal{F}^{-1} \Big( \frac{\varepsilon^2 |\xi_\perp|^2}{\xi_1^4 + \xi_1^2
+ {\mathfrak c}_s^2 |\xi_{\perp}|^2 + 2 \varepsilon^2 \xi_1^2 |\xi_{\perp}|^2
+ \varepsilon^4 |\xi_{\perp}|^4} \Big) $$
are bounded in $L^p(\mathbb R^2)$ for $p$ in some interval $[2, \bar{p})$,
uniformly with respect to $\varepsilon$. However, this is no longer true
in dimension $N = 3$: the above-mentioned kernels are not in $L^2(\mathbb R^3)$
(but their Fourier transforms are uniformly bounded), and from the analysis
in \cite{G}, the kernel
$$ \mathcal{F}^{-1} \Big( \frac{\xi_1^2}{\xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_{\perp}|^2} \Big) $$
is presumably too
singular near the origin to be in $L^p(\mathbb R^3)$ if $p \geq 5/3$.
This lack of integrability of the kernels makes the analysis in the three-dimensional
case much more difficult than in the case $N = 2$.
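The uniform boundedness of the Fourier transforms mentioned parenthetically above is elementary (each numerator is dominated by one group of terms in the denominator); a numerical spot check, with an arbitrary illustrative value of ${\mathfrak c}_s$ and playing no role in the argument, is:

```python
import numpy as np

# Spot check that the Fourier symbols of the two kernels displayed above are
# bounded uniformly in eps: xi1^2 is dominated by xi1^4 + xi1^2, and
# eps^2*|xi_perp|^2 by cs^2*|xi_perp|^2. The value of cs is arbitrary here.
cs = 1.3

def sym1(x1, xp, eps):
    den = x1**4 + x1**2 + cs**2 * xp**2 + 2 * eps**2 * x1**2 * xp**2 + eps**4 * xp**4
    return x1**2 / den

def sym2(x1, xp, eps):
    den = x1**4 + x1**2 + cs**2 * xp**2 + 2 * eps**2 * x1**2 * xp**2 + eps**4 * xp**4
    return eps**2 * xp**2 / den

rng = np.random.default_rng(0)
for eps in (1e-3, 1e-1, 1.0):
    x1 = rng.uniform(-10, 10, 10_000)
    xp = rng.uniform(1e-6, 10, 10_000)      # |xi_perp|
    assert np.all(sym1(x1, xp, eps) <= 1.0 + 1e-12)
    assert np.all(sym2(x1, xp, eps) <= eps**2 / cs**2 + 1e-12)
```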
One of the main difficulties in the three-dimensional case is to
prove that for $\varepsilon$ sufficiently small, $\mathcal{A}_{\varepsilon}$ is uniformly bounded in $L^p$ for some $p > 2$.
To do this we use a suitable decomposition of $\mathcal{A}_{\varepsilon}$ in the Fourier space
(see the proof of Lemma \ref{Grenouille} below).
Then we improve the exponent $p$
by using a bootstrap argument, combining the iterative argument in \cite{BGS1}
(which uses the quadratic nature of $\mathcal{R}_{\varepsilon}$ in \eqref{Fonda}) and the
appropriate decomposition of $\mathcal{A}_{\varepsilon}$ in the Fourier space. This leads to
some $L^p$ bound with $p > 3 = N$. Once this bound is proved, the proof
of the $W^{1,p}$ bounds follows the scheme in \cite{BGS1}. We get:
\begin{prop}
\label{Born} Under the assumptions of Theorem \ref{res1}, there is $\varepsilon_0 > 0$ such that
$\mathcal{A}_{\varepsilon} \in W^{4,p}(\mathbb R^N)$ and $\nabla \varphi_{\varepsilon} \in W^{3,p}(\mathbb R^N)$
for all $\varepsilon \in (0, \varepsilon_0)$ and all $p \in (1, \infty)$.
Moreover, for any $p \in (1, \infty)$
there exists $C_p > 0$ satisfying for all $\varepsilon \in (0, \varepsilon_0)$
\begin{eqnarray}
\label{goodestimate}
\| \mathcal{A}_{\varepsilon} \|_{L^p} + \| \nabla \mathcal{A}_{\varepsilon} \|_{L^p} + \| \partial^2_{z_1} \mathcal{A}_{\varepsilon} \|_{L^p}
+ \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon} \|_{L^p}
+ \varepsilon^2 \| \nabla_{z_\perp}^2 \mathcal{A}_{\varepsilon} \|_{L^p} \leq C_p \qquad \mbox{ and }
\end{eqnarray}
\begin{eqnarray}
\label{goodestimate2}
\begin{array}{l}
\| \partial_{z_1} \varphi_{\varepsilon} \|_{L^p} + \varepsilon \| \nabla_{z_{\perp}} \varphi_{\varepsilon} \|_{L^p}
+ \| \partial_{z_1}^2 \varphi_{\varepsilon} \|_{L^p} + \varepsilon \| \nabla_{z_{\perp}} \partial_{z_1} \varphi_{\varepsilon} \|_{L^p}
+ \varepsilon^2 \| \nabla_{z_{\perp}}^2 \varphi_{\varepsilon} \|_{L^p}
\\
\\
+ \| \partial_{z_1}^3 \varphi_{\varepsilon} \|_{L^p} + \varepsilon \| \nabla_{z_{\perp}} \partial_{z_1}^2 \varphi_{\varepsilon} \|_{L^p}
+ \varepsilon^2 \| \nabla_{z_{\perp}}^2 \partial_{z_1} \varphi_{\varepsilon} \|_{L^p} \leq C_p.
\end{array}
\end{eqnarray}
The estimate \eqref{goodestimate} is also valid with $A_{\varepsilon}$ instead of $\mathcal{A}_{\varepsilon}$.
\end{prop}
Once these bounds are established, the estimates in Proposition \ref{asympto} show that
$({\mathfrak c}_s^{-1} \partial_{z_1} \varphi_n)_{n \geq 0}$ is a minimizing sequence for the problem \eqref{minimiz} if $N = 2$,
respectively for the problem \eqref{miniGS} if $N = 3$. Since
Theorems \ref{gs2d} and \ref{gs} provide compactness properties for
minimizing sequences, we get (pre)compactness of
$({\mathfrak c}_s^{-1} \partial_{z_1} \varphi_n)_{n \geq 0}$ in
$\mathscr{Y}(\mathbb R^N) \hookrightarrow L^2(\mathbb R^N)$, and then we complete
the proof of Theorem \ref{res1} by standard interpolation
in Sobolev spaces.
\subsection{On the higher-dimensional case}

It is natural to ask what happens in the transonic limit in dimension $N \geq 4$.
Firstly, it should be noticed that even for the
Gross-Pitaevskii nonlinearity the problem is critical if $N = 4$ and
supercritical in higher dimensions, hence Theorem \ref{thM} does not apply directly.
The first crucial step is to investigate the behaviour
of $T_c$ as $c \to {\mathfrak c}_s$. In particular, in order to be able to
use Proposition \ref{lifting} to show that the solutions
are vortexless in this limit, we would need to prove that $T_c \to 0$
as $c \to {\mathfrak c}_s$. We have not been able to prove (or disprove)
this in dimension $N = 4$ and $N = 5$, except for the case $\Gamma = 0$.
Quite surprisingly, for nonlinearities satisfying (A3) and (A4) (this is the case for both the Gross-Pitaevskii and the cubic-quintic nonlinearity),
this is not true in dimension higher than $5$, as shown by the following
\begin{prop}
\label{dim6}
Suppose that $F$ satisfies (A3) and (A4) (and $\Gamma$ is arbitrary).
If $N \geq 6$, there exists $\delta > 0$
such that for any $0 \leq c \leq {\mathfrak c}_s$ and for any nonconstant solution $U \in \mathcal{E}$
of {\rm (TW$_{c}$)}, we have
$$
E(U) + c Q(U) \geq \delta.
$$
In particular,
$$
\inf_{0 < c < {\mathfrak c}_s} T_c > 0.
$$
The same conclusion holds if $N \in \{4, 5\}$ provided that $\Gamma = 0$.
\end{prop}
Therefore we do not know if the solutions constructed in
Theorem \ref{thM} (for a subcritical nonlinearity) may vanish or not as
$c \to {\mathfrak c}_s$ if $N \geq 6$. On the other hand we can show, in
any space dimension $N \geq 4$, that we cannot scale the
solutions in order to have compactness and convergence to a localized
and nontrivial object in the transonic limit as soon as the quantity
$E + cQ$ tends to zero.
\begin{prop}
\label{vanishing}
Let $N \geq 4$ and
suppose that $F$ satisfies (A2), (A3) and (A4) (and $\Gamma$ is arbitrary).
Assume that there exists a
sequence $(U_n, c_n)$ such that $c_n \in (0, {\mathfrak c}_s]$, $U_n \in \mathcal{E}$ is
a nonconstant solution of {\rm (TW$_{c_n}$)} and $E_{c_n}(U_n) \to 0$
as $n \to \infty$. Then, for $n$ large enough, there exist
$\alpha_n, \beta_n, \lambda_n, \sigma_n \in \mathbb R$, $A_n \in H^1(\mathbb R^N)$ and $\varphi_n \in \dot{H}^1(\mathbb R^N)$
uniquely determined such that
$$ U_n(x) = r_0 \Big( 1 + \alpha_n A_n(z) \Big)
\exp \Big( i \beta_n \varphi_n(z) \Big), \quad \quad \quad \mbox{ where } \quad
z_1 = \lambda_n x_1, \quad z_\perp = \sigma_n x_\perp, $$
$$
\alpha_n \to 0 \qquad \mbox{ and } \qquad
\| A_n \|_{L^{\infty}(\mathbb R^N)} = \| A_n \|_{L^2(\mathbb R^N)}
= \| \partial_{z_1} \varphi_n \|_{L^2(\mathbb R^N)}
= \| \nabla_{z_\perp} \varphi_n \|_{L^2(\mathbb R^N)} = 1.
$$
Then we have $c_n \to {\mathfrak c}_s$ and
$$ \| \partial_{z_1} A_n \|_{L^2(\mathbb R^N)} \to 0 \qquad \mbox{ as } n \to +\infty. $$
\end{prop}
Consequently, even if one could show
that $T_c \to 0$ as $c \to {\mathfrak c}_s$
in space dimension $4$ or $5$,
we would not have a nontrivial limit (after rescaling) of the corresponding rarefaction pulses.
\section{Three-dimensional ground states for (KP-I)}
\label{proofGS}

We recall the
anisotropic Sobolev inequality (see \cite{BIN}, p.~323): for $N \geq 2$ and
for any $2 \leq p < \frac{2(2N-1)}{2N-3}$, there exists $C = C(p, N)$
such that for all $\Theta \in C_c^{\infty}(\mathbb R^N)$ we have
\begin{equation}
\label{Pastis}
\| \partial_{z_1} \Theta \|_{L^p(\mathbb R^N)} \leq C
\| \partial_{z_1} \Theta \|_{L^2(\mathbb R^N)}^{1 - \frac{(2N-1)(p-2)}{2p}}
\| \partial^2_{z_1} \Theta \|_{L^2(\mathbb R^N)}^{\frac{N(p-2)}{2p}}
\| \nabla_{z_\perp} \Theta \|_{L^2(\mathbb R^N)}^{\frac{(N-1)(p-2)}{2p}}.
\end{equation}
This shows that the energy $\mathscr{E}$ is well-defined on $\mathscr{Y}(\mathbb R^N)$
if $N = 2$ or $N = 3$. By \eqref{Pastis} and the density of $\partial_{z_1} C_c^{\infty}(\mathbb R^3)$ in $\mathscr{Y}(\mathbb R^3)$
we get for any $w \in \mathscr{Y}(\mathbb R^3)$:
\begin{equation}
\label{SobAnis}
\| w \|_{L^3(\mathbb R^3)} \leq C
\| w \|_{L^2(\mathbb R^3)}^{\frac16}
\| \partial_{z_1} w \|_{L^2(\mathbb R^3)}^{\frac12}
\| \nabla_{z_\perp} \partial_{z_1}^{-1} w \|_{L^2(\mathbb R^3)}^{\frac13}.
\end{equation}
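The exponent bookkeeping in \eqref{Pastis} (the three exponents must sum to $1$, and for $N = 3$, $p = 3$ they reduce to the exponents $\frac16, \frac12, \frac13$ of \eqref{SobAnis} with $w = \partial_{z_1}\Theta$) can be verified symbolically; the following Python/SymPy sketch is purely a consistency check:

```python
import sympy as sp

# Exponents of the three norms in the anisotropic Sobolev inequality (Pastis).
N, p = sp.symbols('N p', positive=True)
e1 = 1 - (2 * N - 1) * (p - 2) / (2 * p)
e2 = N * (p - 2) / (2 * p)
e3 = (N - 1) * (p - 2) / (2 * p)

# They sum to 1 (homogeneity of the inequality) ...
assert sp.simplify(e1 + e2 + e3 - 1) == 0
# ... and for N = 3, p = 3 they give the exponents 1/6, 1/2, 1/3 of (SobAnis).
vals = [e.subs({N: 3, p: 3}) for e in (e1, e2, e3)]
assert vals == [sp.Rational(1, 6), sp.Rational(1, 2), sp.Rational(1, 3)]
```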
On the other hand, the following identities hold for any solution
$\mathcal{W} \in \mathscr{Y}(\mathbb R^N)$ of (SW):
\begin{equation}
\label{identites}
\left\{ \begin{array}{ll}
\displaystyle{ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2} \, (\partial_{z_1} \mathcal{W})^2
+ |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2
+ \frac{\Gamma}{2} \, \mathcal{W}^3 + \frac{1}{{\mathfrak c}_s^2} \, \mathcal{W}^2 \ dz = 0 } \\ \ \\
\displaystyle{ \int_{\mathbb R^N} \frac{-1}{{\mathfrak c}_s^2} \, (\partial_{z_1} \mathcal{W})^2
+ 3 |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2
+ \frac{\Gamma}{3} \, \mathcal{W}^3 + \frac{1}{{\mathfrak c}_s^2} \, \mathcal{W}^2 \ dz } = 0 \\ \ \\
\displaystyle{ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2} \, (\partial_{z_1} \mathcal{W})^2
+ |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2
+ \frac{\Gamma}{3} \, \mathcal{W}^3 + \frac{1}{{\mathfrak c}_s^2} \, \mathcal{W}^2 \ dz =
\frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz }.
\end{array} \right.
\end{equation}
The first identity is obtained by multiplying (SW) by $\partial_{z_1}^{-1} \mathcal{W}$
and integrating, whereas the two other equalities are the Pohozaev
identities associated to the scalings in the $z_1$ and
$z_\perp$ variables respectively. Formally, they are obtained by multiplying (SW)
by $z_1 \mathcal{W}$ and $z_\perp \cdot \nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}$
respectively and integrating by parts (see \cite{dBSIHP} for a
complete justification). Combining the equalities in \eqref{identites} we get
\begin{equation}
\label{Ident}
\left\{ \begin{array}{c}
\displaystyle{ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2} \, (\partial_{z_1} \mathcal{W})^2 \ dz =
\frac{N}{N-1} \int_{\mathbb R^N} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz } \\ \\
\displaystyle{ \frac{\Gamma}{6} \int_{\mathbb R^N} \mathcal{W}^3 \ dz =
- \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz } \\ \\
\displaystyle{ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2} \, \mathcal{W}^2 \ dz
= \frac{7-2N}{N-1} \int_{\mathbb R^N} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz }.
\end{array} \right.
\end{equation}
Notice that for $N \geq 4$ we have $7 - 2N < 0$ and the last equality implies $\mathcal{W} = 0$.
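The passage from \eqref{identites} to \eqref{Ident} is a small linear elimination; as a check (illustrative shorthands, $N$ kept symbolic), the following Python/SymPy sketch solves the three relations:

```python
import sympy as sp

# Shorthands for the integrals in (identites):
#   P = int (1/cs^2) (d_{z1} W)^2,   G = int |grad_{z_perp} d_{z1}^{-1} W|^2,
#   T = Gamma * int W^3,             M = int (1/cs^2) W^2.
P, G, T, M, N = sp.symbols('P G T M N', real=True)

eqs = [
    sp.Eq(P + G + T / 2 + M, 0),
    sp.Eq(-P + 3 * G + T / 3 + M, 0),
    sp.Eq(P + G + T / 3 + M, 2 * G / (N - 1)),
]
sol = sp.solve(eqs, (P, T, M), dict=True)[0]

assert sp.simplify(sol[P] - N * G / (N - 1)) == 0            # first line of (Ident)
assert sp.simplify(sol[T] / 6 + 2 * G / (N - 1)) == 0        # second line of (Ident)
assert sp.simplify(sol[M] - (7 - 2 * N) * G / (N - 1)) == 0  # third line of (Ident)
```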
We recall the following results about the ground states of (SW)
and the compactness of minimizing sequences in $\mathscr{Y}(\mathbb R^3)$.

\begin{Lemma}[\cite{dBSIHP}, \cite{dBSSIAM}]
\label{gs3}
Let $N = 3$ and $\Gamma \neq 0$.

(i) For $\lambda \in \mathbb R^*$, denote
$I_{\lambda} = \inf \Big\{ \| w \|_{\mathscr{Y}(\mathbb R^3)}^2 \; \Big| \; \displaystyle \int_{\mathbb R^3} w^3(z) \, dz = \lambda \Big\}.$
Then for any $\lambda \in \mathbb R^*$ we have $I_{\lambda} > 0$ and there is $w_{\lambda} \in \mathscr{Y}(\mathbb R^3)$ such that
$\displaystyle \int_{\mathbb R^3} w_{\lambda}^3(z) \, dz = \lambda$ and $\| w_{\lambda} \|_{\mathscr{Y}(\mathbb R^3)}^2 = I_{\lambda}$.
Moreover, any sequence $(w_n)_{n \geq 1} \subset \mathscr{Y}(\mathbb R^3)$ such that
$\displaystyle \int_{\mathbb R^3} w_n^3(z) \, dz \to \lambda$ and $\| w_n \|_{\mathscr{Y}(\mathbb R^3)}^2 \to I_{\lambda}$
has a subsequence that converges in $\mathscr{Y}(\mathbb R^3)$ (up to translations) to a minimizer of $I_{\lambda}$.

(ii) There is $\lambda^* \in \mathbb R^*$ such that $w^* \in \mathscr{Y}(\mathbb R^3)$ is a ground state for {\rm (SW)}
(that is, minimizes the action $\mathcal{S}$ among all solutions of {\rm (SW)})
if and only if $w^*$ is a minimizer of $I_{\lambda^*}$.
\end{Lemma}
The first part of Lemma \ref{gs3} is a consequence of
the proof of Theorem 3.2 p.~217 in \cite{dBSIHP}
and the second part follows from Lemma 2.1 p.~1067 in \cite{dBSSIAM}.
{\it Proof of Theorem \ref{gs}.}
Given $w \in \mathscr{Y}(\mathbb R^3)$ and $\sigma > 0$, we denote
$P(w) = \displaystyle \int_{\mathbb R^3} \frac{1}{{\mathfrak c}_s^2} w^2 + \frac{1}{{\mathfrak c}_s^2} |\partial_{z_1} w|^2 + \frac{\Gamma}{3} w^3 \, dz$
and $w_{\sigma}(z) = w(z_1, \frac{z_{\perp}}{\sigma})$. It is obvious that
$$
\begin{array}{c}
\displaystyle \int_{\mathbb R^3} w_{\sigma}^p \, dz = \sigma^2 \int_{\mathbb R^3} w^p \, dz, \qquad
\int_{\mathbb R^3} |\partial_{z_1} (w_{\sigma})|^2 \, dz = \sigma^2 \int_{\mathbb R^3} |\partial_{z_1} w|^2 \, dz \qquad \mbox{ and }
\\ \\
\displaystyle \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} (w_{\sigma})|^2 \, dz
= \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w|^2 \, dz.
\end{array}
$$
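The first scaling identity can be checked on an explicit profile; the following Python/SymPy sketch uses a Gaussian, which is purely illustrative and plays no role in the paper:

```python
import sympy as sp

# Sanity check (N = 3) of  int w_sigma^p dz = sigma^2 int w^p dz  for the
# Gaussian w(z) = exp(-|z|^2) and w_sigma(z) = w(z1, z_perp/sigma), whose
# integrals are exact. Here wp denotes w^p.
z1, z2, z3 = sp.symbols('z1 z2 z3', real=True)
sigma, p = sp.symbols('sigma p', positive=True)

wp = sp.exp(-p * (z1**2 + z2**2 + z3**2))                  # w^p
wp_sigma = wp.subs({z2: z2 / sigma, z3: z3 / sigma})       # (w_sigma)^p

I = sp.integrate(wp, (z1, -sp.oo, sp.oo), (z2, -sp.oo, sp.oo), (z3, -sp.oo, sp.oo))
I_sigma = sp.integrate(wp_sigma, (z1, -sp.oo, sp.oo), (z2, -sp.oo, sp.oo), (z3, -sp.oo, sp.oo))

assert sp.simplify(I_sigma - sigma**2 * I) == 0
```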
Let $w^*$ be a ground state for (SW) (the existence of $w^*$ is guaranteed by Lemma \ref{gs3} above).
Since $w^*$ satisfies \eqref{identites}, we have $P(w^*) = 0$ and
$\mathcal{S}(w^*) = \displaystyle \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w^*|^2 \, dz.$
Consider $w \in \mathscr{Y}(\mathbb R^3)$ such that $w \neq 0$ and $P(w) = 0$.
Then
$\displaystyle \frac{\Gamma}{3} \int_{\mathbb R^3} w^3 \, dz = - \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^3} w^2 + |\partial_{z_1} w|^2 \, dz < 0$
and it is easy to see that there is $\sigma > 0$ such that
$\displaystyle \int_{\mathbb R^3} w_{\sigma}^3 \, dz = \int_{\mathbb R^3} (w^*)^3 \, dz = \lambda^*$.
From Lemma \ref{gs3} it follows that $\| w_{\sigma} \|_{\mathscr{Y}(\mathbb R^3)}^2 \geq \| w^* \|_{\mathscr{Y}(\mathbb R^3)}^2$, that is
$$
\frac{\sigma^2}{{\mathfrak c}_s^2} \int_{\mathbb R^3} w^2 + |\partial_{z_1} w|^2 \, dz +
\int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w|^2 \, dz
\geq
\frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^3} (w^*)^2 + |\partial_{z_1} w^*|^2 \, dz +
\int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w^*|^2 \, dz.
$$
Since $P(w) = 0$ and $P(w^*) = 0$ we have
$$
\frac{\sigma^2}{{\mathfrak c}_s^2} \int_{\mathbb R^3} w^2 + |\partial_{z_1} w|^2 \, dz
= - \sigma^2 \frac{\Gamma}{3} \int_{\mathbb R^3} w^3 \, dz
= - \frac{\Gamma}{3} \int_{\mathbb R^3} (w^*)^3 \, dz
= \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^3} (w^*)^2 + |\partial_{z_1} w^*|^2 \, dz
$$
and the previous inequality gives
$\displaystyle \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w|^2 \, dz
\geq
\int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w^*|^2 \, dz,$
that is $\mathcal{S}(w) \geq \mathcal{S}(w^*)$.
So far we have proved that the set ${\mathcal P} = \{ w \in \mathscr{Y}(\mathbb R^3) \; | \; w \neq 0, \; P(w) = 0 \}$
is not empty and any ground state $w^*$ of (SW) minimizes the action $\mathcal{S}$ in this set.
It is then clear that for any $\sigma > 0$, $w_{\sigma}^*$ also belongs to ${\mathcal P}$ and minimizes $\mathcal{S}$ on ${\mathcal P}$.
Conversely, let $w \in {\mathcal P}$ be such that $\mathcal{S}(w) = \mathcal{S}_*$.
Let $w^*$ be a ground state for (SW).
It is clear that $\displaystyle \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w|^2 \, dz = \mathcal{S}_*
= \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w^*|^2 \, dz$.
As above, there is a unique $\sigma > 0$ such that
$\displaystyle \int_{\mathbb R^3} w_{\sigma}^3 \, dz = \int_{\mathbb R^3} (w^*)^3 \, dz = \lambda^*$ and then we have
$\displaystyle \int_{\mathbb R^3} w_{\sigma}^2 + |\partial_{z_1} w_{\sigma}|^2 \, dz
= \int_{\mathbb R^3} (w^*)^2 + |\partial_{z_1} w^*|^2 \, dz$. We find
$\| w_{\sigma} \|_{\mathscr{Y}(\mathbb R^3)}^2 = \| w^* \|_{\mathscr{Y}(\mathbb R^3)}^2 = I_{\lambda^*}$,
thus $w_{\sigma}$ is a minimizer for $I_{\lambda^*}$ and Lemma \ref{gs3} (ii) implies that
$w_{\sigma}$ is a ground state for (SW).
Let $(\mathcal{W}_n)_{n \geq 1}$ be a sequence satisfying $(i)$, $(ii)$ and $(iii)$.
We have $P(\mathcal{W}_n) \to 0$ and
$$
\frac{\Gamma}{3} \int_{\mathbb R^3} \mathcal{W}_n^3 \, dz
= P(\mathcal{W}_n) - \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^3} \mathcal{W}_n^2 + |\partial_{z_1} \mathcal{W}_n|^2 \, dz
\in \left[ \frac{-2 m_2}{{\mathfrak c}_s^2}, - \frac{m_1}{2 {\mathfrak c}_s^2} \right]
\qquad \mbox{ for all $n$ sufficiently large.}
$$
We infer that there are $n_0 \in \mathbb N$, $\underline{\sigma}, \bar{\sigma} > 0$
and a sequence $(\sigma_n)_{n \geq n_0} \subset [\underline{\sigma}, \bar{\sigma}]$ such that
$\displaystyle \int_{\mathbb R^3} ((\mathcal{W}_n)_{\sigma_n})^3 \, dz = \lambda^*$ for all $n \geq n_0$.
Moreover,
$$
\begin{array}{rcl}
\| (\mathcal{W}_n)_{\sigma_n} \|_{\mathscr{Y}(\mathbb R^3)}^2
& = & \displaystyle \frac{\sigma_n^2}{{\mathfrak c}_s^2} \int_{\mathbb R^3} \mathcal{W}_n^2 + |\partial_{z_1} \mathcal{W}_n|^2 \, dz +
\int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} \mathcal{W}_n|^2 \, dz
\\
\\
& = & \displaystyle \sigma_n^2 \left( P(\mathcal{W}_n) - \frac{\Gamma}{3} \int_{\mathbb R^3} \mathcal{W}_n^3 \right) + (\mathcal{S}(\mathcal{W}_n) - P(\mathcal{W}_n))
\\
& = & \displaystyle (\sigma_n^2 - 1) P(\mathcal{W}_n) + \mathcal{S}(\mathcal{W}_n) - \frac{\Gamma}{3} \int_{\mathbb R^3} (\mathcal{W}_n)_{\sigma_n}^3 \, dz.
\end{array}
$$
Passing to the limit in the above equality we get
$$
\displaystyle \liminf_{n \to \infty} \| (\mathcal{W}_n)_{\sigma_n} \|_{\mathscr{Y}(\mathbb R^3)}^2
= \liminf_{n \to \infty} \mathcal{S}(\mathcal{W}_n) - \frac{\Gamma}{3} \lambda^* \leq \mathcal{S}_* - \frac{\Gamma}{3} \lambda^*
= \mathcal{S}(w^*) - \frac{\Gamma}{3} \int_{\mathbb R^3} (w^*)^3 \, dz
= \| w^* \|_{\mathscr{Y}(\mathbb R^3)}^2 = I_{\lambda^*}.
$$
Hence there is a subsequence of $((\mathcal{W}_n)_{\sigma_n})_{n \geq 1}$ which is a minimizing sequence for $I_{\lambda^*}$.
Using Lemma \ref{gs3} we infer that there exist a subsequence $(n_j)_{j \geq 1}$ such that
$\sigma_{n_j} \to \sigma \in [\underline{\sigma}, \bar{\sigma}]$,
a sequence $(z_j)_{j \geq 1} \subset \mathbb R^3$ and a minimizer $\mathcal{W}$ of $I_{\lambda^*}$
(hence a ground state for (SW)) such that
$(\mathcal{W}_{n_j})_{\sigma_{n_j}}(\cdot - z_j) \to \mathcal{W}$ in $\mathscr{Y}(\mathbb R^3)$.
It is then straightforward that $\mathcal{W}_{n_j}(\cdot - z_j) \to \mathcal{W}_{\frac{1}{\sigma}}$ in $\mathscr{Y}(\mathbb R^3)$.
\hfill $\Box$ \\
We may give an alternative proof of Theorem \ref{gs} which does not rely directly
on the analysis in \cite{dBSIHP}, \cite{dBSSIAM} by following the strategy
of \cite{Maris}, which can be adapted to our problem up to some details.
\section{Proof of Theorem \ref{res1}}

\subsection{Proof of Proposition \ref{asympto}}
\label{preuveasympto}

For some given real-valued functions $A_\varepsilon$ and $\varphi_\varepsilon$,
we consider the mapping
$$ U_{\varepsilon}(x) = |U_{\varepsilon}|(x) \, e^{i\phi(x)} =
r_0 \Big( 1 + \varepsilon^2 A_\varepsilon(z) \Big) e^{i \varepsilon \varphi_\varepsilon(z)},
\qquad \mbox{ where } \quad z = (z_1, z_\perp) = (\varepsilon x_1, \varepsilon^2 x_\perp). $$
It is obvious that $U_{\varepsilon} \in \mathcal E$ provided that
$A_{\varepsilon} \in H^1(\mathbb R^N)$ and $\nabla \varphi_{\varepsilon} \in L^2(\mathbb R^N)$.
If $\varepsilon$ is small and $A_\varepsilon$ is uniformly bounded in $\mathbb R^N$, $U_{\varepsilon}$ does not
vanish and the momentum $Q(U_{\varepsilon})$ is given by
$$ Q(U_{\varepsilon}) = - \int_{\mathbb R^N} ( |U_{\varepsilon}|^2 - r_0^2 ) \frac{\partial \phi}{\partial x_1} \, dx
= - \varepsilon^{5-2N} r_0^2 \int_{\mathbb R^N} \Big( 2 A_\varepsilon + \varepsilon^2 A_\varepsilon^2 \Big)
\frac{\partial \varphi_\varepsilon}{\partial z_1} \, dz, $$
while the energy of $U_{\varepsilon}$ is
\begin{align*}
E(U_{\varepsilon}) = & \ \int_{\mathbb R^N} |\nabla U_{\varepsilon}|^2 + V(|U_{\varepsilon}|^2) \, dx \\
= & \ \varepsilon^{5-2N} r_0^2 \int_{\mathbb R^N}
(\partial_{z_1} \varphi_\varepsilon)^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2
+ \varepsilon^2 |\nabla_{z_\perp} \varphi_\varepsilon|^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2
+ \varepsilon^2 (\partial_{z_1} A_\varepsilon)^2
+ \varepsilon^4 |\nabla_{z_\perp} A_\varepsilon|^2 \\
& \hspace{2cm} + {\mathfrak c}_s^2 A_\varepsilon^2
+ \varepsilon^2 {\mathfrak c}_s^2 \Big( 1 - \frac{4 r_0^4}{3 {\mathfrak c}_s^2} F''(r_0^2) \Big) A_\varepsilon^3
+ \frac{{\mathfrak c}_s^2}{\varepsilon^4} V_4 \Big( \varepsilon^2 A_\varepsilon \Big) \, dz,
\end{align*}
where we have used the Taylor expansion
\begin{equation}
\label{V}
V \Big( r_0^2 (1 + \alpha)^2 \Big) = r_0^2 \Big\{
{\mathfrak c}_s^2 \alpha^2 + {\mathfrak c}_s^2 \Big( 1 - \frac{4 r_0^4}{3 {\mathfrak c}_s^2}
F''(r_0^2) \Big) \alpha^3 + {\mathfrak c}_s^2 V_4(\alpha) \Big\}
= r_0^2 {\mathfrak c}_s^2 \Big\{ \alpha^2 + \Big( \frac{\Gamma}{3} - 1 \Big) \alpha^3 + V_4(\alpha) \Big\}
\end{equation}
with $V_4(\alpha) = \mathcal O(\alpha^4)$ as $\alpha \to 0$.
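Comparing the coefficients of $\alpha^3$ in the two expressions of \eqref{V} makes the role of $\Gamma$ explicit (assuming, as the matching suggests, that $\Gamma$ is defined precisely by this cubic coefficient):

```latex
{\mathfrak c}_s^2 \Big( 1 - \frac{4 r_0^4}{3 {\mathfrak c}_s^2} F''(r_0^2) \Big)
= {\mathfrak c}_s^2 \Big( \frac{\Gamma}{3} - 1 \Big),
\qquad \mbox{that is,} \qquad
\Gamma = 6 - \frac{4 r_0^4}{{\mathfrak c}_s^2} F''(r_0^2).
```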
Consequently, with ${\mathfrak c}_s^2 = c^2(\varepsilon) + \varepsilon^2$ we get
\begin{align}
\label{develo}
E_{c(\varepsilon)}(U_{\varepsilon}) & = \ E(U_{\varepsilon}) + c(\varepsilon) Q(U_{\varepsilon}) \nonumber \\
& = \varepsilon^{5-2N} r_0^2 \int_{\mathbb R^N}
(\partial_{z_1} \varphi_\varepsilon)^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2
+ \varepsilon^2 |\nabla_{z_\perp} \varphi_\varepsilon|^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2
+ \varepsilon^2 (\partial_{z_1} A_\varepsilon)^2
+ \varepsilon^4 |\nabla_{z_\perp} A_\varepsilon|^2 \nonumber \\
& \hspace{1cm} + {\mathfrak c}_s^2 A_\varepsilon^2
+ \varepsilon^2 {\mathfrak c}_s^2 \Big( 1 - \frac{4 r_0^4}{3 {\mathfrak c}_s^2} F''(r_0^2) \Big) A_\varepsilon^3
+ \frac{{\mathfrak c}_s^2}{\varepsilon^4} V_4 \Big( \varepsilon^2 A_\varepsilon \Big)
- c(\varepsilon) \Big( 2 A_\varepsilon + \varepsilon^2 A_\varepsilon^2 \Big) \partial_{z_1} \varphi_\varepsilon \, dz \nonumber \\
& = \varepsilon^{7-2N} r_0^2 \int_{\mathbb R^N} \frac{1}{\varepsilon^2}
\Big( \partial_{z_1} \varphi_\varepsilon - c(\varepsilon) A_\varepsilon \Big)^2
+ (\partial_{z_1} \varphi_\varepsilon)^2 ( 2 A_\varepsilon + \varepsilon^2 A_\varepsilon^2 )
+ |\nabla_{z_\perp} \varphi_\varepsilon|^2 ( 1 + \varepsilon^2 A_\varepsilon )^2 + (\partial_{z_1} A_\varepsilon)^2
\nonumber \\
& \hspace{1cm} + \varepsilon^2 |\nabla_{z_\perp} A_\varepsilon|^2 + A_\varepsilon^2
+ {\mathfrak c}_s^2 \Big( 1 - \frac{4 r_0^4}{3 {\mathfrak c}_s^2} F''(r_0^2) \Big)
A_\varepsilon^3 + \frac{{\mathfrak c}_s^2}{\varepsilon^6} V_4( \varepsilon^2 A_\varepsilon)
- c(\varepsilon) A_\varepsilon^2 \, \partial_{z_1} \varphi_\varepsilon \, dz.
\end{align}
Since the first term in the last integral is penalised by the factor $\varepsilon^{-2}$,
in order to get sharp estimates on $E_{c(\varepsilon)}$ one needs
$\partial_{z_1} \varphi_\varepsilon \simeq c(\varepsilon) A_\varepsilon$.
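The passage to the last expression in \eqref{develo} is a completion of the square in $\partial_{z_1} \varphi_\varepsilon$; it can be checked term by term, using ${\mathfrak c}_s^2 = c^2(\varepsilon) + \varepsilon^2$ to split the $A_\varepsilon^2$ term:

```latex
\frac{{\mathfrak c}_s^2}{\varepsilon^2} A_\varepsilon^2 = \frac{c^2(\varepsilon)}{\varepsilon^2} A_\varepsilon^2 + A_\varepsilon^2,
\qquad
\frac{(\partial_{z_1} \varphi_\varepsilon)^2}{\varepsilon^2}
- \frac{2 c(\varepsilon)}{\varepsilon^2} A_\varepsilon \, \partial_{z_1} \varphi_\varepsilon
+ \frac{c^2(\varepsilon)}{\varepsilon^2} A_\varepsilon^2
= \frac{1}{\varepsilon^2} \big( \partial_{z_1} \varphi_\varepsilon - c(\varepsilon) A_\varepsilon \big)^2,
```

while the leftover pieces $- c(\varepsilon) A_\varepsilon^2 \, \partial_{z_1} \varphi_\varepsilon$ and $(\partial_{z_1} \varphi_\varepsilon)^2 ( 2 A_\varepsilon + \varepsilon^2 A_\varepsilon^2 )$ appear unchanged in the last integral.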
Let $N=3$. By Theorem \ref{gs},
there exists a ground state $A \in \mathcal Y(\mathbb R^3)$ for (SW).
It follows from Theorem 4.1 p. 227 in \cite{dBSSIAM} that $A \in H^s(\mathbb R^3)$ for any $s \in \mathbb N$.
Let $\varphi = {\mathfrak c}_s \partial_{z_1}^{-1} A$.
We use \eqref{develo} with
$A_\varepsilon(z) = \frac{\lambda {\mathfrak c}_s}{c(\varepsilon)} A(\lambda z_1, z_\perp)$
and $\varphi_\varepsilon(z) = \varphi(\lambda z_1, z_\perp)$.
For $\varepsilon > 0$ small
and $\lambda \simeq 1$ (to be chosen later) we define
$$
U_\varepsilon(x) = |U_\varepsilon|(x) \, e^{i\phi_\varepsilon(x)} =
r_0 \Big( 1 + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A(z) \Big) e^{i \varepsilon \varphi(z)},
\qquad \mbox{ where } \quad z = (z_1, z_\perp) = (\varepsilon \lambda x_1, \varepsilon^2 x_\perp).
$$
Notice that $U_{\varepsilon}$ does not vanish if $\varepsilon$ is sufficiently small.
Since $\partial_{z_1} \varphi = {\mathfrak c}_s A$, we have
$\partial_{z_1} \varphi_\varepsilon(z) = \lambda \, \partial_{z_1} \varphi(\lambda z_1, z_\perp) = \lambda {\mathfrak c}_s A(\lambda z_1, z_\perp) = c(\varepsilon) A_\varepsilon(z)$
and therefore
\begin{align*}
\lambda E_{c(\varepsilon)}(U_\varepsilon) = & \ {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^3}
\lambda^3 \frac{{\mathfrak c}_s}{c(\varepsilon)} A^2 \Big( 2A + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A^2 \Big)
+ \lambda^2 |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2
\Big( 1 + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big)^2
+ \frac{\lambda^4}{c^2(\varepsilon)} (\partial_{z_1} A)^2 \\
& \hspace{2cm} + \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2
+ \frac{\lambda^2}{c^2(\varepsilon)} A^2
+ \frac{{\mathfrak c}_s^3}{c^3(\varepsilon)} \lambda^3 \Big( 1 - \frac{4 r_0^4}{3 {\mathfrak c}_s^2} F''(r_0^2) \Big) A^3
+ \frac{1}{\varepsilon^6} V_4 \Big( \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big) \\
& \hspace{2cm} - \lambda^3 \frac{{\mathfrak c}_s}{c(\varepsilon)} A^3 \, dz \\
= & \ {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^3}
\lambda^3 \frac{{\mathfrak c}_s}{c(\varepsilon)} \Big( 1 + \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)}
\Big[ \frac{\Gamma}{3} - 1 \Big] \Big) A^3
+ \lambda^2 |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2
\Big( 1 + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big)^2
+ \frac{\lambda^4}{c^2(\varepsilon)} (\partial_{z_1} A)^2 \\
& \hspace{2cm} + \frac{\lambda^2}{c^2(\varepsilon)} A^2
+ \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2
+ \varepsilon^2 \lambda^4 \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)} A^4
+ \frac{1}{\varepsilon^6} V_4 \Big( \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big) \, dz.
\end{align*}
On the other hand,
\begin{align*}
\lambda \int_{\mathbb R^3} |\nabla_{\perp} U_\varepsilon|^2 \, dx = & \,
r_0^2 \varepsilon \int_{\mathbb R^3} |\nabla_{z_\perp} \varphi|^2
\Big( 1 + \varepsilon^2 \lambda \frac{{\mathfrak c}_s}{c(\varepsilon)} A \Big)^2
+ \varepsilon^2 \lambda^2 \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2 \, dz \\
= & \, {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^3}
|\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2
\Big( 1 + \varepsilon^2 \lambda \frac{{\mathfrak c}_s}{c(\varepsilon)} A \Big)^2
+ \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2 \, dz.
\end{align*}
Hence $U_\varepsilon$ satisfies the constraint $P_{c(\varepsilon)}(U_{\varepsilon}) = 0$ (or, equivalently,
$\displaystyle E_{c(\varepsilon)}(U_\varepsilon) = \int_{\mathbb R^3} |\nabla_{\perp} U_\varepsilon|^2 \, dx$)
if and only if $G(\lambda, \varepsilon^2) = 0$, where
\begin{align*}
G(\lambda, \varepsilon^2) = & \, \int_{\mathbb R^3}
\lambda^3 \frac{{\mathfrak c}_s}{c(\varepsilon)} \Big( 1 + \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)}
\Big[ \frac{\Gamma}{3} - 1 \Big] \Big) A^3
+ \lambda^2 |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2
\Big( 1 + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big)^2
+ \frac{\lambda^4}{c^2(\varepsilon)} (\partial_{z_1} A)^2 \\
& \hspace{2cm} + \frac{\lambda^2}{c^2(\varepsilon)} A^2
+ \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2
+ \varepsilon^2 \lambda^4 \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)} A^4
+ \frac{1}{\varepsilon^6} V_4 \Big( \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big) \, dz \\
& \ - \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2
\Big( 1 + \varepsilon^2 \lambda \frac{{\mathfrak c}_s}{c(\varepsilon)} A \Big)^2
+ \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2 \, dz.
\end{align*}
Denote $\epsilon = \varepsilon^2$.
Since $A$ is a ground state for (SW), it satisfies
the Pohozaev identities \eqref{identites}. The last of these identities is $\mathscr{S}(A) =
\displaystyle \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \, dz$, or equivalently
$$
G(\lambda = 1, \epsilon = 0) = 0.
$$
A straightforward computation using \eqref{Ident} gives
$$ \frac{\partial G}{\partial \lambda}\Big|_{(\lambda = 1, \, \epsilon = 0)} =
\int_{\mathbb R^3} \Gamma A^3 + 2 |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2
+ \frac{4}{{\mathfrak c}_s^2} (\partial_{z_1} A)^2 + \frac{2}{{\mathfrak c}_s^2} A^2 \, dz
= 3 \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \, dz \neq 0. $$
Then the implicit function theorem implies that there
exists a function $\epsilon \longmapsto \lambda(\epsilon) = 1 + \mathcal O(\epsilon) = 1 + \mathcal O(\varepsilon^2)$ such that
for all $\epsilon$ sufficiently small we have $G(\lambda(\epsilon), \epsilon) = 0$,
that is, $U_{\varepsilon}$ satisfies the Pohozaev identity $P_{c(\varepsilon)}(U_{\varepsilon}) = 0$.
Choosing $\lambda = \lambda(\varepsilon^2)$ and taking into account the last identity in \eqref{identites}, we find
$$ T_{c(\varepsilon)} \leq E_{c(\varepsilon)}(U_\varepsilon) =
\int_{\mathbb R^3} |\nabla_{\perp} U_\varepsilon|^2 \, dx =
{\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \, dz + \mathcal O(\varepsilon^3)
= {\mathfrak c}_s^2 r_0^2 \varepsilon \, \mathscr S_{\rm min} + \mathcal O(\varepsilon^3)
$$
and the proof of $(ii)$ is complete.
Next we turn our attention to the case $N=2$.
Let $A = {\mathfrak c}_s^{-1} \partial_{z_1} \varphi \in \mathcal Y(\mathbb R^2)$
be a ground state of (SW). The existence of $A$ is given by Theorem \ref{gs2d}.
By Theorem 4.1 p. 227 in \cite{dBSIHP} we have $A \in H^s(\mathbb R^2)$ for all $s \in \mathbb N$.
For $\varepsilon$ small, we define the map
$$
U_\varepsilon(x) = |U_\varepsilon|(x) \, e^{i\phi_\varepsilon(x)} =
r_0 \Big( 1 + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} A(z) \Big) e^{i \varepsilon \varphi(z)},
\qquad \mbox{ where } \quad z = (z_1, z_2) = (\varepsilon x_1, \varepsilon^2 x_2).
$$
From the above computations and \eqref{Ident} we have
\begin{align*}
k_\varepsilon = & \ \int_{\mathbb R^2} |\nabla U_\varepsilon|^2 \, dx =
r_0^2 \varepsilon \int_{\mathbb R^2} (\partial_{z_1} \varphi_\varepsilon)^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2
+ \varepsilon^2 (\partial_{z_1} A_\varepsilon)^2
+ \varepsilon^2 (\partial_{z_2} \varphi_\varepsilon)^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2
+ \varepsilon^4 (\partial_{z_2} A_\varepsilon)^2 \, dz \\
= & \ r_0^2 {\mathfrak c}_s^2 \varepsilon \int_{\mathbb R^2}
A^2 \Big( 1 + \frac{\varepsilon^2 {\mathfrak c}_s}{c(\varepsilon)} A \Big)^2
+ \frac{\varepsilon^2}{c^2(\varepsilon)} (\partial_{z_1} A)^2
+ \varepsilon^2 (\partial_{z_2} \partial_{z_1}^{-1} A)^2 \Big( 1 + \frac{\varepsilon^2 {\mathfrak c}_s}{c(\varepsilon)} A \Big)^2
+ \frac{\varepsilon^4}{c^2(\varepsilon)} (\partial_{z_2} A)^2 \, dz \\
= & \ r_0^2 {\mathfrak c}_s^2 \Big\{ \varepsilon \int_{\mathbb R^2} A^2 \, dz
+ \varepsilon^3 \int_{\mathbb R^2} \Big( 2 A^3 + \frac{(\partial_{z_1} A)^2}{{\mathfrak c}_s^2}
+ (\partial_{z_2} \partial_{z_1}^{-1} A)^2 \Big) \, dz
+ \mathcal O(\varepsilon^5) \Big\} \\
= & \ r_0^2 {\mathfrak c}_s^2 \Big\{ \varepsilon \, \frac 32 {\mathfrak c}_s^2 \, \mathscr S(A)
+ \varepsilon^3 \Big( 2 - \frac{12}{\Gamma} - \frac 12 \Big) \mathscr S(A)
+ \mathcal O(\varepsilon^5) \Big\}.
\end{align*}
It is easy to see that $\varepsilon \mapsto k_\varepsilon$ is a
smooth increasing diffeomorphism from an interval $[0, \bar{\varepsilon}]$ onto an interval
$[0, \bar{k} = \bar{k}_{\bar{\varepsilon}}]$, and that
$\varepsilon = \displaystyle \frac{k_\varepsilon}{r_0^2 {\mathfrak c}_s^2 \| A \|_{L^2}^2} + \mathcal O(k_\varepsilon^3)
= \frac{k_{\varepsilon}}{\frac 32 r_0^2 {\mathfrak c}_s^4 \, \mathscr S(A)} + \mathcal O(k_{\varepsilon}^3)$
as $\varepsilon \to 0$. Moreover, denoting $U_{\varepsilon}^\sigma(x) = U_{\varepsilon}(x/\sigma)$, we have
$$ \int_{\mathbb R^2} |\nabla U_\varepsilon^\sigma|^2 \, dx = \int_{\mathbb R^2} |\nabla U_\varepsilon|^2 \, dx $$
because $N=2$.
Using the test function $U_\varepsilon^\sigma$, it follows that
$$ I_{\rm min}(k_\varepsilon) \leq I(U_\varepsilon^\sigma) \qquad \mbox{ for any } \sigma > 0. $$
Since $Q(U_{\varepsilon}) < 0$, the mapping
$$
\sigma \longmapsto I(U_\varepsilon^\sigma) = Q(U_\varepsilon^\sigma) + \int_{\mathbb R^2} V(|U_\varepsilon^\sigma|^2) \, dx
= \sigma Q(U_\varepsilon) + \sigma^2 \int_{\mathbb R^2} V(|U_\varepsilon|^2) \, dx
$$
achieves its minimum at $\sigma_0 =
\displaystyle \frac{- Q(U_\varepsilon)}{2 \displaystyle\int_{\mathbb R^2} V(|U_\varepsilon|^2) \, dx} > 0$, and the minimum value is
$
I(U_\varepsilon^{\sigma_0}) = \displaystyle \frac{- Q^2(U_\varepsilon)}{4 \displaystyle\int_{\mathbb R^2} V(|U_\varepsilon|^2) \, dx}.
$
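The minimization in $\sigma$ is that of a quadratic polynomial: writing $q = Q(U_\varepsilon) < 0$ and $v = \int_{\mathbb R^2} V(|U_\varepsilon|^2) \, dx$ (positive for $\varepsilon$ small, in view of the expansion of this integral given below),

```latex
g(\sigma) = \sigma q + \sigma^2 v, \qquad
g'(\sigma_0) = q + 2 \sigma_0 v = 0
\;\Longrightarrow\; \sigma_0 = - \frac{q}{2v} > 0,
\qquad
g(\sigma_0) = - \frac{q^2}{2v} + \frac{q^2}{4v} = - \frac{q^2}{4v}.
```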
Hence
$$
I_{\rm min}(k_\varepsilon) \leq I(U_\varepsilon^{\sigma_0})
= \frac{- Q^2(U_\varepsilon)}{4 \displaystyle\int_{\mathbb R^2} V(|U_\varepsilon|^2) \, dx}.
$$
Using \eqref{V} and \eqref{Ident} we find
$$
\begin{array}{rcl}
\displaystyle \int_{\mathbb R^2} V(|U_\varepsilon|^2) \, dx & = & \displaystyle {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^2} A^2
+ \varepsilon^2 \Big( \frac{\Gamma}{3} - 1 \Big) A^3
+ \frac{1}{\varepsilon^4} V_4(\varepsilon^2 A) \, dz
\\
\\
& = & \displaystyle \frac 32 {\mathfrak c}_s^4 r_0^2 \, \mathscr S(A) \, \varepsilon
- {\mathfrak c}_s^2 r_0^2 \Big( \frac{\Gamma}{3} - 1 \Big) \frac{6}{\Gamma} \, \mathscr S(A) \, \varepsilon^3 + \mathcal O(\varepsilon^5)
\end{array}
$$
and
\begin{align*}
Q(U_\varepsilon) = - \varepsilon r_0^2 {\mathfrak c}_s \int_{\mathbb R^2} \Big( 2 A^2 + \varepsilon^2 A^3 \Big) \, dz
= - 3 r_0^2 {\mathfrak c}_s^3 \, \mathscr S(A) \, \varepsilon + r_0^2 {\mathfrak c}_s \frac{6}{\Gamma} \, \mathscr S(A) \, \varepsilon^3.
\end{align*}
Finally we obtain
$$
\begin{array}{l}
\displaystyle I_{\rm min}(k_\varepsilon) + \frac{k_{\varepsilon}}{{\mathfrak c}_s^2}
\leq \frac{- Q^2(U_\varepsilon)}{4 \displaystyle\int_{\mathbb R^2} V(|U_\varepsilon|^2) \, dx} + \frac{1}{{\mathfrak c}_s^2} \displaystyle \int_{\mathbb R^2} |\nabla U_{\varepsilon}|^2 \, dx
\\
\\
\displaystyle = - \frac{\left( -3 {\mathfrak c}_s^2 + \frac{6}{\Gamma} \varepsilon^2 \right)^2 r_0^4 {\mathfrak c}_s^2 \, \mathscr S^2(A) \, \varepsilon^2}
{4 \Big[ \frac 32 {\mathfrak c}_s^2 - \Big( 2 - \frac{6}{\Gamma} \Big) \varepsilon^2 + \mathcal O(\varepsilon^4) \Big] r_0^2 {\mathfrak c}_s^2 \, \mathscr S(A) \, \varepsilon}
+
\left[ \frac 32 r_0^2 {\mathfrak c}_s^2 \varepsilon + r_0^2 \Big( \frac 32 - \frac{12}{\Gamma} \Big) \varepsilon^3 + \mathcal O(\varepsilon^5) \right] \mathscr S(A)
\\
\\
\displaystyle = - \frac{\left( 3 r_0^2 {\mathfrak c}_s^2 \varepsilon^3 + \mathcal O(\varepsilon^5) \right) \mathscr S(A)}
{2 \left[ 3 {\mathfrak c}_s^2 - \left( 4 - \frac{12}{\Gamma} \right) \varepsilon^2 + \mathcal O(\varepsilon^4) \right]}
= - \frac 12 r_0^2 \, \mathscr S(A) \, \varepsilon^3 + \mathcal O(\varepsilon^5)
\\
\\
\displaystyle = - \frac 12 r_0^2 \, \mathscr S(A) \left[ \frac{k_{\varepsilon}}{\frac 32 r_0^2 {\mathfrak c}_s^4 \, \mathscr S(A)} + \mathcal O(k_{\varepsilon}^3) \right]^3 +
\mathcal O \left( \left( \frac{k_{\varepsilon}}{\frac 32 r_0^2 {\mathfrak c}_s^4 \, \mathscr S(A)} + \mathcal O(k_{\varepsilon}^3) \right)^5 \right)
= \frac{- 4 k_{\varepsilon}^3}{27 r_0^4 {\mathfrak c}_s^{12} \, \mathscr S_{\rm min}^2} + \mathcal O(k_{\varepsilon}^5).
\end{array}
$$
Since $\varepsilon \longmapsto k_{\varepsilon}$ is a diffeomorphism from $[0, \bar{\varepsilon}]$ onto
$[0, \bar{k}]$, Proposition \ref{asympto} (i) is proven.
\hfill $\Box$
\subsection{Proof of Proposition \ref{monoto}}
Given a function $f$ defined on $\mathbb R^N$ and $a, b > 0$,
we denote $f_{a,b}(x) = f\big( \frac{x_1}{a}, \frac{x_\perp}{b} \big)$.
By Proposition 2.2 p. 1078 in \cite{M2}, any solution
of (TW$_{c}$) belongs to $W_{loc}^{2,p}(\mathbb R^N)$ for all $p \in [2, \infty)$, hence to $C^{1,\alpha}(\mathbb R^N)$ for all $\alpha \in (0,1)$.

($i$) Let $U$ be a minimizer of $E_c = E + cQ$ on $\mathcal C_c$ (where $\mathcal C_c$ is as in \eqref{Cc})
such that $U$ solves (TW$_c$).
Then $U$ satisfies the Pohozaev identities \eqref{Pohozaev}.
If $Q(U) > 0$, let $\tilde{U}(x) = U(-x_1, x_\perp)$,
so that $Q(\tilde{U}) = - Q(U) < 0$ and
$P_c(\tilde{U}) = P_c(U) - 2cQ(U) = - 2cQ(U) < 0$.
Since for any function $\phi \in \mathcal E$ we have
\begin{equation}
\label{Pca}
P_c(\phi_{a,1}) = \frac 1a \int_{\mathbb R^N} \Big| \frac{\partial \phi}{\partial x_1} \Big|^2 \, dx
+ a \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} \phi|^2 \, dx + c Q(\phi) + a \int_{\mathbb R^N} V(|\phi|^2) \, dx,
\end{equation}
we see that there is $a_0 \in (0,1)$ such that $P_c(\tilde{U}_{a_0,1}) = 0$.
We infer that
$$
T_c \leq E_c(\tilde{U}_{a_0,1})
= \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} \tilde{U}_{a_0,1}|^2 \, dx
= a_0 \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U|^2 \, dx
= a_0 E_c(U)
= a_0 T_c,
$$
contradicting the fact that $T_c > 0$.
Thus $Q(U) \leq 0$.
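For completeness, the scaling relations behind \eqref{Pca} follow from the change of variables $x_1 = a y_1$; here $\theta$ denotes the phase of $\phi$, as in the definition of the momentum $Q$, and we assume $P_c$ has the form obtained from \eqref{Pca} at $a = 1$:

```latex
\int_{\mathbb R^N} \Big| \frac{\partial \phi_{a,1}}{\partial x_1} \Big|^2 \, dx
= \frac 1a \int_{\mathbb R^N} \Big| \frac{\partial \phi}{\partial x_1} \Big|^2 \, dx,
\qquad
\int_{\mathbb R^N} |\nabla_{x_\perp} \phi_{a,1}|^2 \, dx = a \int_{\mathbb R^N} |\nabla_{x_\perp} \phi|^2 \, dx,
```

```latex
\int_{\mathbb R^N} V(|\phi_{a,1}|^2) \, dx = a \int_{\mathbb R^N} V(|\phi|^2) \, dx,
\qquad
Q(\phi_{a,1}) = - \int_{\mathbb R^N} \big( |\phi_{a,1}|^2 - r_0^2 \big) \frac{\partial \theta_{a,1}}{\partial x_1} \, dx = Q(\phi).
```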
Assume that $ Q( U ) = 0. $
From the identities \eqref{Pohozaev} with $Q(U) = 0$ we get
\begin{equation}
\label{pr}
\int_{\mathbb R^N} \Big| \frac{\partial U}{\partial x_1} \Big|^2 \, dx = - \frac{1}{N-2} \int_{\mathbb R^N} V(|U|^2) \, dx
\qquad \mbox{ and } \qquad
\int_{\mathbb R^N} |\nabla_{x_\perp} U|^2 \, dx = - \frac{N-1}{N-2} \int_{\mathbb R^N} V(|U|^2) \, dx.
\end{equation}
Since $U \in \mathcal E$ and $U$ is not constant,
necessarily
$\displaystyle \int_{\mathbb R^N} V(|U|^2) \, dx = - (N-2) \int_{\mathbb R^N} \Big| \frac{\partial U}{\partial x_1} \Big|^2 \, dx < 0$
and this implies that the potential $V$ must achieve negative values.
Then it follows from Theorem 2.1 p. 100 in \cite{brezis-lieb} that there is $\tilde{\psi}_0 \in \mathcal E$
such that
$\displaystyle \int_{\mathbb R^N} |\nabla \tilde{\psi}_0|^2 \, dx =
\inf \Big\{ \int_{\mathbb R^N} |\nabla \phi|^2 \, dx \; \Big| \; \phi \in \mathcal E, \;
\int_{\mathbb R^N} V(|\phi|^2) \, dx = -1 \Big\}.$
Using Theorem 2.2 p. 102 in \cite{brezis-lieb} we see that there is $\sigma > 0$ such that, denoting
$\psi_0 = (\tilde{\psi}_0)_{\sigma,\sigma}$ and
$- v_0 = \displaystyle \int_{\mathbb R^N} V(|\psi_0|^2) \, dx = - \sigma^N$,
we have $\Delta \psi_0 + F(|\psi_0|^2) \psi_0 = 0$ in $\mathbb R^N$.
Hence $\psi_0$ solves (TW$_0$) and
$$
\int_{\mathbb R^N} |\nabla \psi_0|^2 \, dx = \inf \Big\{ \int_{\mathbb R^N} |\nabla \phi|^2 \, dx \; \Big| \;
\phi \in \mathcal E, \; \int_{\mathbb R^N} V(|\phi|^2) \, dx = - v_0 \Big\}.
$$
Since all minimizers of this problem solve (TW$_0$) (after possibly rescaling),
we know that they are $C^1$ in $\mathbb R^N$, and then Theorem 2 p. 314 in \cite{MarisARMA}
implies that they are all radially symmetric (after translation). In particular, we have $Q(\psi_0) = 0$
and $\displaystyle \int_{\mathbb R^N} \Big| \frac{\partial \psi_0}{\partial x_j} \Big|^2 \, dx
= \frac 1N \int_{\mathbb R^N} |\nabla \psi_0|^2 \, dx$ for $j = 1, \dots, N$.
By Lemma 2.4 p. 104 in \cite{brezis-lieb} we know that $\psi_0$ satisfies the Pohozaev identity
$
\displaystyle \int_{\mathbb R^N} |\nabla \psi_0|^2 \, dx = \frac{N}{N-2} \, v_0.
$
It follows that $P_c(\psi_0) = 0$, hence $\psi_0 \in \mathcal C_c$ and we infer that
$E_c(\psi_0) \geq T_c$,
that is,
$\displaystyle \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} \psi_0|^2 \, dx
\geq \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U|^2 \, dx$.
Taking into account \eqref{pr} and the radial symmetry of $\psi_0$,
this gives $v_0 \geq - \displaystyle \int_{\mathbb R^N} V(|U|^2) \, dx$.
On the other hand, by scaling it is easy to see that $\psi_0$ is a minimizer of the functional
$\phi \longmapsto \| \nabla \phi \|_{L^2(\mathbb R^N)}^2$ in the set
${\mathcal P} = \Big\{ \phi \in \mathcal E \; \Big| \; \displaystyle \int_{\mathbb R^N} |\nabla \phi|^2 \, dx = - \frac{N}{N-2} \int_{\mathbb R^N} V(|\phi|^2) \, dx \Big\}$.
By \eqref{pr} we have $U \in {\mathcal P}$, hence
$\| \nabla U \|_{L^2(\mathbb R^N)}^2 \geq \| \nabla \psi_0 \|_{L^2(\mathbb R^N)}^2$ and consequently
$- \displaystyle \int_{\mathbb R^N} V(|U|^2) \, dx \geq v_0$.
Thus
$\| \nabla U \|_{L^2(\mathbb R^N)}^2 = \| \nabla \psi_0 \|_{L^2(\mathbb R^N)}^2$,
$\displaystyle \int_{\mathbb R^N} V(|U|^2) \, dx = \int_{\mathbb R^N} V(|\psi_0|^2) \, dx$, and $U$ minimizes
$\| \nabla \cdot \|_{L^2(\mathbb R^N)}^2$
in the set $\Big\{ \phi \in \mathcal E \; \Big| \; \displaystyle \int_{\mathbb R^N} V(|\phi|^2) \, dx = - v_0 \Big\}$.
By Theorem 2.2 p. 103 in \cite{brezis-lieb},
$U$ solves the equation $\Delta U + \lambda F(|U|^2) U = 0$ in ${\mathcal D}'(\mathbb R^N)$ for some $\lambda > 0$,
and using the Pohozaev identity associated to this equation we see that $\lambda = 1$, hence $U$ solves (TW$_0$).
Since $U$ also solves (TW$_c$) for some $c > 0$ and $\frac{\partial U}{\partial x_1}$ is continuous,
we must have $\frac{\partial U}{\partial x_1} = 0$ in $\mathbb R^N$.
Together with the fact that $U \in \mathcal E$, this implies that $U$ is constant, a contradiction.
Therefore we cannot have $Q(U) = 0$ and we conclude that $Q(U) < 0$.
($ii$) Fix $c_0 \in (0, {\mathfrak c}_s)$ and let $U_0 \in \mathcal E$ be a minimizer of $E_{c_0}$ on $\mathcal C_{c_0}$,
as given by Theorem \ref{thM}.
It follows from \eqref{Pca} that $P_c((U_0)_{a,1}) = \frac 1a R_{c, U_0}(a)$, where
\begin{equation}
\label{Rca}
R_{c, U_0}(a) = \int_{\mathbb R^N} \Big| \frac{\partial U_0}{\partial x_1} \Big|^2 \, dx + a c Q(U_0) +
a^2 \left[ \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U_0|^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx \right]
\end{equation}
is a polynomial in $a$ of degree at most 2.
It is clear that $R_{c, U_0}(0) > 0$, $R_{c_0, U_0}(1) = P_{c_0}(U_0) = 0$, and for any $c > c_0$ we have
$R_{c, U_0}(1) = P_{c_0}(U_0) + (c - c_0) Q(U_0) < 0$ because $Q(U_0) < 0$.
Hence there is a unique $a(c) \in (0,1)$ such that $R_{c, U_0}(a(c)) = 0$, which means $P_c((U_0)_{a(c),1}) = 0$.
We infer that
\begin{equation}
\label{ac}
T_c \leq E_c((U_0)_{a(c),1}) = \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} (U_0)_{a(c),1}|^2 \, dx
= a(c) \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U_0|^2 \, dx
= a(c) T_{c_0}.
\end{equation}
Since $a(c) \in (0,1)$, we have proved that $T_c < T_{c_0}$ whenever $c_0 \in (0, {\mathfrak c}_s)$ and $c \in (c_0, {\mathfrak c}_s)$,
thus $c \longmapsto T_c$ is decreasing. By a well-known result of Lebesgue, the function
$c \longmapsto T_c$ has a derivative almost everywhere.
($iii$)
Notice that \eqref{ac} holds whenever $c_0$, $U_0$ are as above and $a(c)$ is a positive
root of $R_{c, U_0}$. Using the Pohozaev identities \eqref{Pohozaev} we find
$$
2 \int_{\mathbb R^N} \Big| \frac{\partial U_0}{\partial x_1} \Big|^2 \, dx
= \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U_0|^2 \, dx - c_0 Q(U_0)
= T_{c_0} - c_0 Q(U_0), \qquad \mbox{ and then}
$$
\begin{equation}
\label{ps}
\frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U_0|^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx
= - c_0 Q(U_0) - \int_{\mathbb R^N} \Big| \frac{\partial U_0}{\partial x_1} \Big|^2 \, dx
= - \frac 12 c_0 Q(U_0) - \frac 12 T_{c_0}.
\end{equation}
We now distinguish two cases, according to whether $R_{c, U_0}$ has degree one or two.

Case $(a)$:
If $\displaystyle \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U_0|^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx = 0$,
then $R_{c, U_0}$ has degree one and we have
$\displaystyle \int_{\mathbb R^N} \Big| \frac{\partial U_0}{\partial x_1} \Big|^2 \, dx + c_0 Q(U_0) = 0$ because
$P_{c_0}(U_0) = 0$. Since $R_{c, U_0}$ is an affine function, we find $a(c) = \frac{c_0}{c}$
for all $c > 0$, hence $a(c_0) = 1$. Moreover, the left-hand side in \eqref{ps} is zero, thus we have
$c_0 Q(U_0) + T_{c_0} = 0$ and consequently
$a'(c_0) = - \frac{1}{c_0} = \frac{Q(U_0)}{T_{c_0}}$.

Case $(b)$: If $\displaystyle \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U_0|^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx
\neq 0$, then $R_{c, U_0}$ has degree two, and the discriminant of this second-order polynomial
is equal to
$$
\Delta_{c, U_0} = (c^2 - c_0^2) Q^2(U_0) + T_{c_0}^2.
$$
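As a check, the coefficients of $R_{c, U_0}(a) = \alpha + \beta a + \gamma a^2$ read off from \eqref{Rca}, the display above and \eqref{ps} are $\alpha = \frac 12 \big( T_{c_0} - c_0 Q(U_0) \big)$, $\beta = c \, Q(U_0)$ and $\gamma = - \frac 12 \big( T_{c_0} + c_0 Q(U_0) \big)$, whence

```latex
\Delta_{c, U_0} = \beta^2 - 4 \alpha \gamma
= c^2 Q^2(U_0) + \big( T_{c_0} - c_0 Q(U_0) \big) \big( T_{c_0} + c_0 Q(U_0) \big)
= (c^2 - c_0^2) Q^2(U_0) + T_{c_0}^2 .
```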
Consequently $R_{c, U_0}$ has real roots as long as $(c^2 - c_0^2) Q^2(U_0) + T_{c_0}^2 \geq 0$.
It is easy to see that if there are real roots, at least one of them is positive.
Indeed, $R_{c, U_0}(0) > 0 > R_{c, U_0}'(0)$. If $\Delta_{c, U_0} \geq 0$,
regardless of the sign of the leading coefficient
$\frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U_0|^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx \neq 0$,
the smallest positive root $a(c)$ of $R_{c, U_0}$ is given by the formula
\begin{equation}
\label{root}
a(c) = \frac{- c Q(U_0) - \sqrt{(c^2 - c_0^2) Q^2(U_0) + T_{c_0}^2}}{- c_0 Q(U_0) - T_{c_0}}
= \frac{- c_0 Q(U_0) + T_{c_0}}{- c Q(U_0) + \sqrt{(c^2 - c_0^2) Q^2(U_0) + T_{c_0}^2}}.
\end{equation}
Therefore, the function $c \longmapsto a(c)$ is defined on the interval
$[\tilde{c}_0, \infty)$, where $\tilde{c}_0 = \sqrt{c_0^2 - \frac{T_{c_0}^2}{Q^2(U_0)}} < c_0$,
it is differentiable on $(\tilde{c}_0, \infty)$ and $a(c_0) = 1$.
Moreover, a straightforward computation gives $a'(c_0) = \frac{Q(U_0)}{T_{c_0}}$.
Note that in Case $(a)$, the last expression in \eqref{root} is equal to
$\frac{c_0}{c}$, which is then indeed $a(c)$.
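For the reader's convenience, here is the computation behind $a'(c_0) = \frac{Q(U_0)}{T_{c_0}}$ in Case $(b)$: write $q = Q(U_0)$, $T = T_{c_0}$, and differentiate the denominator of the second expression in \eqref{root},

```latex
a(c) = \frac{T - c_0 q}{- c q + \sqrt{(c^2 - c_0^2) q^2 + T^2}},
\qquad
\frac{d}{dc} \Big( - c q + \sqrt{(c^2 - c_0^2) q^2 + T^2} \Big) \Big|_{c = c_0}
= - q + \frac{c_0 q^2}{T},
```

so that, since the numerator of $a$ is constant and equals the value of the denominator at $c = c_0$,

```latex
a'(c_0) = - \frac{- q + \frac{c_0 q^2}{T}}{T - c_0 q}
= \frac{q \big( 1 - \frac{c_0 q}{T} \big)}{T - c_0 q}
= \frac{q}{T}.
```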
By \eqref{ac} we have $T_c \leq a(c) T_{c_0}$ and passing to the limit we get
$\displaystyle \lim_{c \to c_0, \, c < c_0} T_c \leq \lim_{c \to c_0, \, c < c_0} a(c) T_{c_0} = T_{c_0}$.
Since $c \longmapsto T_c$ is decreasing, $T_c > T_{c_0}$ for $c < c_0$ and we see that it is left continuous at $c_0$.
Moreover, we have
$$
\frac{T_c - T_{c_0}}{c - c_0} \leq \frac{a(c) - a(c_0)}{c - c_0} T_{c_0} \quad \mbox{ for } c > c_0,
\qquad \mbox{ respectively } \qquad
\frac{T_c - T_{c_0}}{c - c_0} \geq \frac{a(c) - a(c_0)}{c - c_0} T_{c_0} \quad \mbox{ for } c \in [\tilde{c}_0, c_0).
$$
Passing to the limit in the above inequalities we obtain, since $a'(c_0) = \frac{Q(U_0)}{T_{c_0}}$
in both Cases $(a)$ and $(b)$,
$$
\limsup_{c \to c_0, \, c > c_0} \frac{T_c - T_{c_0}}{c - c_0} \leq a'(c_0) T_{c_0} = Q(U_0),
\qquad \mbox{ respectively } \qquad
\liminf_{c \to c_0, \, c < c_0} \frac{T_c - T_{c_0}}{c - c_0} \geq a'(c_0) T_{c_0} = Q(U_0).
$$
It is then clear that if $c \longmapsto T_c$ is differentiable at $c_0$, necessarily
$\displaystyle \frac{d T_c}{dc}\Big|_{c = c_0} = Q(U_0).$
($iv$) Fix $c_* \in (c_0, {\mathfrak c}_s)$. Passing to a subsequence we may assume that
$c_0 < c_n < c_*$ for all $n$ and $Q(U_n) \to - q_0 \leq 0$.
Then $T_{c_0} > T_{c_n} > T_{c_*} > 0$
and $(c_0^2 - c_n^2) Q^2(U_n) + T_{c_n}^2 > (c_0^2 - c_n^2) Q^2(U_n) + T_{c_*}^2 > 0$
for all sufficiently large $n$.
Hence for large $n$ we may use \eqref{ac} and \eqref{root} with $(c_n, c_0)$ instead of $(c_0, c)$ and we get
$$
T_{c_0} \leq
\frac{- c_n Q(U_n) + T_{c_n}}{- c_0 Q(U_n) + \sqrt{(c_0^2 - c_n^2) Q^2(U_n) + T_{c_n}^2}} \, T_{c_n}.
$$
Since $T_{c_n}$ has a positive limit, passing to the limit as $n \to \infty$ in the
above inequality and using the monotonicity of $c \longmapsto T_c$ we get
$\displaystyle T_{c_0} \leq \liminf_{n \to \infty} T_{c_n} = \liminf_{c \to c_0, \, c > c_0} T_c$.
This and the fact that $T_c$ is decreasing and left continuous imply that $T_c$ is continuous at $c_0$.
($v$) Let $0 < c_1 < c_2 < {\mathfrak c}_s$ and $U_1, \; U_2$, $q_1 = Q(U_1) < 0$, $q_2 = Q(U_2) < 0$
be as in Proposition \ref{monoto} ($v$).
If $c_1^2 \leq c_2^2 - \frac{T_{c_2}^2}{q_2^2}$, the inequality in Proposition \ref{monoto} ($v$)
obviously holds. From now on we assume that $c_1^2 > c_2^2 - \frac{T_{c_2}^2}{q_2^2}$.
The two discriminants $\Delta_{c_2, U_1} = (c_2^2 - c_1^2) q_1^2 + T_{c_1}^2$
and $\Delta_{c_1, U_2} = (c_1^2 - c_2^2) q_2^2 + T_{c_2}^2$ are positive: since
$0 < c_1 < c_2$ for the first one, and by the assumption $c_1^2 > c_2^2 - \frac{T_{c_2}^2}{q_2^2}$
for the second one. Therefore, we may use \eqref{ac} and \eqref{root} with the couples $(c_1, c_2)$,
respectively $(c_2, c_1)$ instead of $(c_0, c)$ to get
$$
T_{c_2} \leq \frac{-c_1 q_1 + T_{c_1}}{-c_2 q_1 + \sqrt{(c_2^2 - c_1^2) q_1^2 + T_{c_1}^2}} \, T_{c_1},
\qquad \mbox{ respectively } \qquad
T_{c_1} \leq \frac{-c_2 q_2 + T_{c_2}}{-c_1 q_2 + \sqrt{(c_1^2 - c_2^2) q_2^2 + T_{c_2}^2}} \, T_{c_2}.
$$
Since $T_{c_i} > 0$, we must have
$$
\frac{-c_1 q_1 + T_{c_1}}{-c_2 q_1 + \sqrt{(c_2^2 - c_1^2) q_1^2 + T_{c_1}^2}}
\cdot
\frac{-c_2 q_2 + T_{c_2}}{-c_1 q_2 + \sqrt{(c_1^2 - c_2^2) q_2^2 + T_{c_2}^2}}
\geq 1.
$$
We set $y_1 = -\frac{T_{c_1}}{c_1 q_1} > 0$, and recast this inequality as
\begin{equation}
\label{ineqmagique}
\frac{1 + y_1}{\frac{c_2}{c_1} + \sqrt{\frac{c_2^2}{c_1^2} - 1 + y_1^2}}
\geq
\frac{-c_1 q_2 + \sqrt{(c_1^2 - c_2^2) q_2^2 + T_{c_2}^2}}{-c_2 q_2 + T_{c_2}}
= \frac{1 + \sqrt{1 - \frac{c_2^2}{c_1^2} + \frac{T_{c_2}^2}{c_1^2 q_2^2}}}{
\frac{c_2}{c_1} - \frac{T_{c_2}}{c_1 q_2}}.
\end{equation}
Denoting, for $y \in \mathbb R$,
$g(y) = \displaystyle \frac{1 + y}{\frac{c_2}{c_1} + \sqrt{\frac{c_2^2}{c_1^2} - 1 + y^2}}$,
\eqref{ineqmagique} is exactly
$$
g \Big( -\frac{T_{c_1}}{c_1 q_1} \Big) = g(y_1) \geq
g \Big( \sqrt{1 - \frac{c_2^2}{c_1^2} + \frac{T_{c_2}^2}{c_1^2 q_2^2}} \Big).
$$
If we show that $g$ is increasing, then we obtain
$$
-\frac{T_{c_1}}{c_1 q_1} \geq \sqrt{1 - \frac{c_2^2}{c_1^2} + \frac{T_{c_2}^2}{c_1^2 q_2^2}},
\quad \quad \quad {\rm or} \quad \quad \quad
\frac{T_{c_1}^2}{q_1^2} - c_1^2 \geq \frac{T_{c_2}^2}{q_2^2} - c_2^2,
$$
which is the desired inequality. To check that $g$ is increasing, we simply compute
$$
g'(y) = \displaystyle \frac{\displaystyle \frac{c_2^2}{c_1^2} - 1 + \frac{c_2}{c_1} \sqrt{\frac{c_2^2}{c_1^2} - 1 + y^2} - y}{\Big( \displaystyle \frac{c_2}{c_1} + \sqrt{\frac{c_2^2}{c_1^2} - 1 + y^2} \Big)^2
\sqrt{\frac{c_2^2}{c_1^2} - 1 + y^2}},
$$
which is positive since $\frac{c_2}{c_1} > 1$ and
$\sqrt{\frac{c_2^2}{c_1^2} - 1 + y^2} > |y|$.
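For completeness, the quotient-rule computation behind this expression for $g'$ can be sketched as follows, with the shorthand (introduced here) $s = \frac{c_2}{c_1} > 1$ and $R(y) = \sqrt{s^2 - 1 + y^2}$, so that $g(y) = \frac{1+y}{s + R(y)}$ and $R'(y) = \frac{y}{R(y)}$:
\begin{align*}
g'(y) &= \frac{(s + R(y)) - (1+y)\,R'(y)}{(s + R(y))^2}
       = \frac{(s + R(y))\,R(y) - (1+y)\,y}{(s + R(y))^2 R(y)} \\
      &= \frac{s R(y) + (s^2 - 1 + y^2) - y - y^2}{(s + R(y))^2 R(y)}
       = \frac{s^2 - 1 + s R(y) - y}{(s + R(y))^2 R(y)},
\end{align*}
% the numerator is positive because s^2 - 1 > 0 and s R(y) > R(y) > |y| \geq y.
which is the displayed formula.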
($vi$)
Since $c \longmapsto -T_c$ is increasing, by a well-known result of Lebesgue this map is differentiable a.e.,
the function $c \longmapsto \frac{d T_c}{dc}$ belongs to $L_{loc}^1(0, {\mathfrak c}_s)$ and for any $0 < c_1 < c_2 < {\mathfrak c}_s$ we have
$\displaystyle \int_{c_1}^{c_2} -\frac{d T_c}{dc} \, dc \leq -T_{c_2} + T_{c_1}.$
We recall that $c(\varepsilon) = \sqrt{{\mathfrak c}_s^2 - \varepsilon^2}$ for all $\varepsilon \in (0, {\mathfrak c}_s)$.
If $N = 3$, (A2) and (A4) hold and $\Gamma \neq 0$, by Proposition \ref{asympto} ($ii$)
there is $K > 0$ such that $T_{c(\varepsilon)} \leq K \varepsilon$ for all sufficiently small $\varepsilon$.
Thus for $n \in \mathbb N$ large we have
$$
\int_{c(2/n)}^{c(1/n)} -\frac{d T_c}{dc} \, dc
\leq T_{c(2/n)} - T_{c(1/n)} \leq T_{c(2/n)}
\leq \frac{2K}{n}.
$$
Hence there exists $c_n \in (c(2/n), c(1/n))$
such that $c \mapsto T_c$ is differentiable at $c_n$ and
$$
-\frac{d T_c}{dc}_{|c=c_n}
\leq \frac{1}{c(\frac 1n) - c(\frac 2n)} \cdot \frac{2K}{n}
\leq K' n.
$$
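The factor $K' n$ can be checked directly: since $c(\varepsilon) = \sqrt{{\mathfrak c}_s^2 - \varepsilon^2}$, the length of the interval of integration satisfies, for large $n$,
\begin{align*}
c\Big(\frac 1n\Big) - c\Big(\frac 2n\Big)
 = \sqrt{{\mathfrak c}_s^2 - \frac{1}{n^2}} - \sqrt{{\mathfrak c}_s^2 - \frac{4}{n^2}}
 = \frac{3/n^2}{\sqrt{{\mathfrak c}_s^2 - \frac{1}{n^2}} + \sqrt{{\mathfrak c}_s^2 - \frac{4}{n^2}}}
 \geq \frac{3}{2 {\mathfrak c}_s n^2},
\end{align*}
% hence (c(1/n) - c(2/n))^{-1} \cdot 2K/n \leq (4 K {\mathfrak c}_s / 3) n,
so one may take $K' = \frac{4 K {\mathfrak c}_s}{3}$ for all $n$ large enough.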
Let $\varepsilon_n = \sqrt{{\mathfrak c}_s^2 - c_n^2}$, so that
$c(\varepsilon_n) = c_n$. Since $c(2/n) \leq c_n \leq c(1/n)$, we have
$\frac 1n \leq \varepsilon_n \leq \frac 2n$,
so that $\varepsilon_n \to 0$ as $n \to \infty$.
Let $U_n$ be a minimizer
of $E_{c_n}$ on $\mathcal{C}_{c_n}$, scaled so
that $U_n$ solves (TW$_{c_n}$). From ($i$) and ($iii$) we get
$$
|Q(U_n)| = -Q(U_n) = -\frac{d T_c}{dc}_{|c=c_n}
\leq K' n \leq \frac{2K'}{\varepsilon_n}.
$$
Since
$E(U_n) + c_n Q(U_n) = T_{c_n} = \mathcal{O}(\varepsilon_n)$, it follows that
$$
E(U_n) \leq -c_n Q(U_n) + T_{c_n} \leq \frac{K''}{\varepsilon_n}
$$
and the proof is complete. $\Box$
\subsection{Proof of Proposition \ref{global3}}
We postpone the proof of Proposition \ref{convergence} and we prove Proposition \ref{global3}.
Let $(\varepsilon_n)_{n \geq 1}$ be the sequence given by Proposition \ref{monoto} ($vi$).
For each $n$ let $U_n \in \mathcal{E}$ be a minimizer of $E_{c_n}$ on $\mathcal{C}_{c_n}$ which solves (TW$_{c_n}$).
Passing to a subsequence if necessary and using Proposition \ref{convergence}, we may assume that
$(\varepsilon_n)_{n \geq 1}$ is strictly decreasing, that $(\varepsilon_n, U_n)_{n \geq 1}$
satisfies the conclusion of Theorem \ref{res1} and
\begin{equation}
\label{en}
\frac 12 r_0^2 {\mathfrak c}_s^4 \mathcal{S}_{\rm min} \frac{1}{\varepsilon_n} < E(U_n) < 2 r_0^2 {\mathfrak c}_s^4 \mathcal{S}_{\rm min} \frac{1}{\varepsilon_n},
\end{equation}
\begin{equation}
\label{mom}
\frac 12 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min} \frac{1}{\varepsilon_n} < -Q(U_n) < 2 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min} \frac{1}{\varepsilon_n}
\qquad \mbox{ for all } n.
\end{equation}
We shall argue by contradiction. More precisely, we shall prove by contradiction
that there exists $\varepsilon_* > 0$ such that for any $\varepsilon \in (0, \varepsilon_*)$ and for any
minimizer $U$ of $E_{c(\varepsilon)}$ on $\mathcal{C}_{c(\varepsilon)}$ scaled so that $U$ satisfies
(TW$_{c(\varepsilon)}$), we have
$$
|Q(U)| \leq \frac{5 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min}}{\varepsilon}.
$$
In view of Proposition \ref{asympto} ($ii$), we then infer that
$$
E(U) = T_{c(\varepsilon)} - c(\varepsilon) Q(U) \leq \frac{K}{\varepsilon}
$$
for some constant $K$ depending only on $r_0$, ${\mathfrak c}_s$ and $\mathcal{S}_{\rm min}$, which
is the desired result. We thus assume that there exist infinitely many $n$'s such that
there is $\tilde{\varepsilon}_n \in (\varepsilon_n, \varepsilon_{n-1})$ and there is a minimizer
$\tilde{U}_n$ of $E_{c(\tilde{\varepsilon}_n)}$ on $\mathcal{C}_{c(\tilde{\varepsilon}_n)}$ which
satisfies (TW$_{c(\tilde{\varepsilon}_n)}$) and
\begin{equation}
\label{mauvais}
|Q(\tilde{U}_n)| = -Q(\tilde{U}_n) > 5 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min} \frac{1}{\tilde{\varepsilon}_n}.
\end{equation}
Passing again to a subsequence of $(\varepsilon_n)_{n \geq 1}$, we may assume that \eqref{mauvais} holds for
all $n \geq 1$. Then for each $n \in \mathbb N^*$ we define
$$
\begin{array}{rcl}
I_n & = & \Big\{ \varepsilon \in (\varepsilon_n, \varepsilon_{n-1}) \; \Big| \; \mbox{ for all } \varepsilon' \in [\varepsilon_n, \varepsilon]
\mbox{ and for any minimizer } U_{\varepsilon'} \mbox{ of }
E_{c(\varepsilon')} \mbox{ on } \mathcal{C}_{c(\varepsilon')}
\\
& &
\mbox{ which solves (TW$_{c(\varepsilon')}$) there holds }
| Q(U_{\varepsilon'})| \leq 4 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min} \cdot \frac{1}{\varepsilon'} \Big\}
\end{array}
$$
and
$$
\varepsilon_n^{\#} = \sup I_n.
$$
By Proposition \ref{monoto} ($v$), for $\varepsilon' \in (\varepsilon_n, {\mathfrak c}_s)$ and for any minimizer $U_{\varepsilon'}$
of $E_{c(\varepsilon')}$ on $\mathcal{C}_{c(\varepsilon')}$ which solves (TW$_{c(\varepsilon')}$) we have
$$
\frac{T_{c(\varepsilon')}^2}{Q^2(U_{\varepsilon'})} + (\varepsilon')^2 \geq \frac{T_{c(\varepsilon_n)}^2}{Q^2(U_n)} + \varepsilon_n^2,
$$
which can be written as
$\displaystyle \frac{Q^2(U_{\varepsilon'})}{T_{c(\varepsilon')}^2} \leq \frac{Q^2(U_n)}{T_{c(\varepsilon_n)}^2 + (\varepsilon_n^2 - (\varepsilon')^2) Q^2(U_n)} \;$
and this gives
\begin{equation}
\label{estimate1}
(\varepsilon')^2 Q^2(U_{\varepsilon'}) \leq \frac{(\varepsilon')^2 Q^2(U_n) T_{c(\varepsilon')}^2}{T_{c(\varepsilon_n)}^2 + (\varepsilon_n^2 - (\varepsilon')^2) Q^2(U_n)}.
\end{equation}
The mapping $\varepsilon \longmapsto T_{c(\varepsilon)}$ is right continuous (because $c \longmapsto T_c$ is left continuous) and using \eqref{mom} we find
$$
\lim_{\varepsilon' \to \varepsilon_n, \, \varepsilon' > \varepsilon_n}
\frac{(\varepsilon')^2 Q^2(U_n) T_{c(\varepsilon')}^2}{T_{c(\varepsilon_n)}^2 + (\varepsilon_n^2 - (\varepsilon')^2) Q^2(U_n)}
= \varepsilon_n^2 Q^2(U_n) < (2 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min})^2.
$$
Thus all $\varepsilon' \in (\varepsilon_n, \varepsilon_{n-1})$ sufficiently close to $\varepsilon_n$ belong to $I_n$.
In particular, $I_n$ is not empty.
On the other hand, \eqref{mauvais} implies that any $\varepsilon' \in (\tilde{\varepsilon}_n, \varepsilon_{n-1})$ does not belong to $I_n$, hence
$\varepsilon_n^{\#} = \sup I_n \in (\varepsilon_n, \tilde{\varepsilon}_n] \subset (\varepsilon_n, \varepsilon_{n-1}).$
Let $U_n^{\#}$ be a minimizer of $E_{c(\varepsilon_n^{\#})}$ on $\mathcal{C}_{c(\varepsilon_n^{\#})}$ which solves (TW$_{c(\varepsilon_n^{\#})}$).
We claim that
\begin{equation}
\label{claim1}
| Q(U_n^{\#}) | = 4 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min} \frac{1}{\varepsilon_n^{\#}}.
\end{equation}
Indeed, proceeding as in \eqref{estimate1} we have for any $\varepsilon' \in (\varepsilon_n, \varepsilon_n^{\#})$ and any minimizer $U_{\varepsilon'}$ of
$E_{c(\varepsilon')}$ on $\mathcal{C}_{c(\varepsilon')}$ which satisfies (TW$_{c(\varepsilon')}$)
\begin{equation}
\label{estimate2}
(\varepsilon_n^{\#})^2 Q^2(U_n^{\#}) \leq
\frac{\left( \frac{\varepsilon_n^{\#}}{\varepsilon'} \right)^2 (\varepsilon')^2 Q^2(U_{\varepsilon'}) T_{c(\varepsilon_n^{\#})}^2}
{T_{c(\varepsilon')}^2 + \left( 1 - \left( \frac{\varepsilon_n^{\#}}{\varepsilon'} \right)^2 \right) (\varepsilon')^2 Q^2(U_{\varepsilon'})}.
\end{equation}
Notice that $(\varepsilon')^2 Q^2(U_{\varepsilon'}) \leq (4 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min})^2$ because $\varepsilon' \in I_n$.
In particular, $Q(U_{\varepsilon'})$ is bounded for $\varepsilon' \in (\varepsilon_n, \varepsilon_n^{\#}).$
Since $c(\varepsilon') \searrow c(\varepsilon_n^{\#})$ as $\varepsilon' \nearrow \varepsilon_n^{\#}$, Proposition \ref{monoto} ($iv$)
implies that $c \longmapsto T_c$ is continuous at $c(\varepsilon_n^{\#})$.
Then passing to $\displaystyle \liminf$ as $\varepsilon' \nearrow \varepsilon_n^{\#}$ in \eqref{estimate2} we get
$(\varepsilon_n^{\#})^2 Q^2(U_n^{\#}) \leq (4 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min})^2$.
We conclude that $\varepsilon_n^{\#} \in I_n$.
Next, for any $\varepsilon' \in (\varepsilon_n^{\#}, {\mathfrak c}_s)$ and any minimizer $U_{\varepsilon'}$ of
$E_{c(\varepsilon')}$ on $\mathcal{C}_{c(\varepsilon')}$ that solves (TW$_{c(\varepsilon')}$), inequality \eqref{estimate1}
holds with $\varepsilon_n^{\#}$ and $U_n^{\#}$ instead of $\varepsilon_n$ and $U_n$, respectively.
The limit of the right-hand side as $\varepsilon' \searrow \varepsilon_n^{\#}$ is $(\varepsilon_n^{\#})^2 Q^2(U_n^{\#})$.
If $\varepsilon_n^{\#} | Q(U_n^{\#}) | < 4 r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min}$,
as above we infer that there is $\delta_n > 0$ such that $[\varepsilon_n^{\#}, \varepsilon_n^{\#} + \delta_n] \subset I_n$,
contradicting the fact that $\varepsilon_n^{\#} = \sup I_n$.
The claim \eqref{claim1} is thus proved.
Now we turn our attention to the sequence $(\varepsilon_n^{\#}, U_n^{\#})_{n \geq 1}$.
It is clear that $\varepsilon_n^{\#} \to 0$ (because $\varepsilon_n^{\#} \in (\varepsilon_n, \varepsilon_{n-1})$).
By Proposition \ref{asympto} ($ii$) there is $K > 0$ such that
$$
E(U_n^{\#}) + c(\varepsilon_n^{\#}) Q(U_n^{\#}) = E_{c(\varepsilon_n^{\#})}(U_n^{\#}) = T_{c(\varepsilon_n^{\#})} \leq K \varepsilon_n^{\#}
$$
and using \eqref{claim1} we find $|E(U_n^{\#})| \leq \frac{K'}{\varepsilon_n^{\#}}$
for some constant $K' > 0$ and for all $n$ sufficiently large.
Hence we may use Proposition \ref{convergence} and we infer that there is a subsequence
$(\varepsilon_{n_k}^{\#}, U_{n_k}^{\#})_{k \geq 1}$ which satisfies the conclusion of Theorem \ref{res1}.
In particular, we have
$$
\lim_{k \to \infty} \varepsilon_{n_k}^{\#} | Q(U_{n_k}^{\#}) | = r_0^2 {\mathfrak c}_s^3 \mathcal{S}_{\rm min}
$$
and this contradicts the fact that $U_{n_k}^{\#}$ satisfies \eqref{claim1}.
Proposition \ref{global3} is thus proven. $\Box$
\subsection{Proof of Proposition \ref{lifting}}
\label{preuvelifting}
($i$) Since $U \in \mathcal{E}$, we have $|U| - r_0 \in H^1(\mathbb R^N)$ (see the Introduction of \cite{CM1})
and then $\Big| \displaystyle \frac{\partial}{\partial x_i}(|U| - r_0) \Big| \leq \Big| \frac{\partial U}{\partial x_i} \Big|$ a.e. in $\mathbb R^N$.
It is well-known (see, for instance, \cite{brezis} p. 164) that for any $\phi \in H^1(\mathbb R^N)$ there holds
$$
\| \phi \|_{L^{2^*}(\mathbb R^N)} \leq C_S \prod_{i=1}^N \Big\| \frac{\partial \phi}{\partial x_i} \Big\|_{L^2(\mathbb R^N)}^{\frac 1N}.
$$
Since $\big\| \frac{\partial U}{\partial x_i} \big\|_{L^2(\mathbb R^N)} \leq \| \nabla_{x_{\perp}} U \|_{L^2(\mathbb R^N)}$ for $i \geq 2$, we infer that
\begin{equation}
\label{sobo}
\| \, |U| - r_0 \|_{L^{2^*}(\mathbb R^N)} \leq C_S \prod_{i=1}^N \Big\| \frac{\partial U}{\partial x_i} \Big\|_{L^2(\mathbb R^N)}^{\frac 1N}
\leq C_S \Big\| \frac{\partial U}{\partial x_1} \Big\|_{L^2(\mathbb R^N)}^{\frac 1N} \cdot
\| \nabla_{x_{\perp}} U \|_{L^2(\mathbb R^N)}^{\frac{N-1}{N}}.
\end{equation}
Assume first that (A2) holds.
If $\Big\| \frac{\partial U}{\partial x_1} \Big\|_{L^2(\mathbb R^N)} \cdot \| \nabla_{x_{\perp}} U \|_{L^2(\mathbb R^N)}^{N-1} \leq 1$,
from \eqref{sobo} we get $\| \, |U| - r_0 \|_{L^{2^*}(\mathbb R^N)} \leq C_S$.
Let $\tilde{U}(x) = e^{-\frac{i c x_1}{2}} U(x)$.
Then $\tilde{U} \in H_{loc}^1(\mathbb R^N)$ and $\tilde{U}$ solves the equation
$$
\Delta \tilde{U} + \left( \frac{c^2}{4} + F(|\tilde{U}|^2) \right) \tilde{U} = 0 \qquad \mbox{ in } \mathbb R^N.
$$
Since $\| \tilde{U} \|_{L^{2^*}(B(x, 1))} \leq C$ for any $x \in \mathbb R^N$ and for some constant $C > 0$,
using the above equation and a standard bootstrap argument
(which works thanks to (A2)), we infer that $\| \tilde{U} \|_{W^{2,p}(B(x, \frac{1}{2^{n_0}}))} \leq \tilde{C}_p$
for some $n_0 \in \mathbb N$, $\tilde{C}_p > 0$ and for any $x \in \mathbb R^N$ and any $p \in [2, \infty)$. This clearly implies
$\| U \|_{W^{2,p}(B(x, \frac{1}{2^{n_0}}))} \leq C_p$ for any $x \in \mathbb R^N$ and any $p \in [2, \infty)$.
In particular, using the Sobolev embedding we see that there is $L > 0$ (independent of $U$)
such that $\| \nabla U \|_{L^{\infty}(\mathbb R^N)} \leq L$.
Fix $\delta > 0$. If there is $x_0 \in \mathbb R^N$ such that $| \, |U(x_0)| - r_0 | \geq \delta$,
we infer that $| \, |U(x)| - r_0 | \geq \frac{\delta}{2}$ for any $x \in B(x_0, \frac{\delta}{2L})$
and consequently
\begin{equation}
\label{low}
\| \, |U| - r_0 \|_{L^{2^*}(\mathbb R^N)} \geq \frac{\delta}{2} \left( {\mathcal L}^N \left( B\big(x_0, \tfrac{\delta}{2L}\big) \right) \right)^{\frac{1}{2^*}}
= \frac{\delta}{2} \left( \frac{\delta}{2L} \right)^{\frac{N}{2^*}} \left( {\mathcal L}^N(B(0,1)) \right)^{\frac{1}{2^*}}.
\end{equation}
Let $\mu(\delta) = \min \left(
1, \frac{\delta}{2} \left( \frac{\delta}{2L} \right)^{\frac{N}{2^*}} \left( {\mathcal L}^N(B(0,1)) \right)^{\frac{1}{2^*}}
\right).$
From \eqref{sobo} and \eqref{low} we infer that $| \, |U(x)| - r_0 | < \delta$ for any solution $U \in \mathcal{E}$ of (TW$_c$)
satisfying
$\Big\| \frac{\partial U}{\partial x_1} \Big\|_{L^2(\mathbb R^N)} \cdot \| \nabla_{x_{\perp}} U \|_{L^2(\mathbb R^N)}^{N-1} \leq \mu(\delta).$
If (A3) holds, it follows from the proof of Proposition 2.2 p. 1078-1080 in \cite{M2} that there is $L > 0$, independent of $U$,
such that $\| \nabla U \|_{L^{\infty}(\mathbb R^N)} \leq L$.
The rest of the proof is as above.
($ii$) By Proposition 2.2 p. 1078 in \cite{M2} we know that $U \in W_{loc}^{2,p}(\mathbb R^N)$ for any $p \in [2, \infty)$.
In particular, $U \in C^1(\mathbb R^N)$.
As in the proof of ($i$) we see that there is $L > 0$, independent of $U$, such that $\| \nabla U \|_{L^{\infty}(\mathbb R^N)} \leq L$.
Fix $\delta > 0$ and assume that there is $x^0 = (x_1^0, \dots, x_N^0)$ such that $| \, |U(x^0)| - r_0 | \geq \delta$.
Then we have $| \, |U(x)| - r_0 | \geq \frac{\delta}{2}$ for any $x \in B(x^0, \frac{\delta}{2L})$
and, in particular,
$| \, |U(x_1, x_2^0, \dots, x_N^0)| - r_0 | \geq \frac{\delta}{2}$ for any $x_1 \in [x_1^0 - \frac{\delta}{2L}, x_1^0 + \frac{\delta}{2L}]$.
We infer that
$| \, |U(x_1, x_{\perp})| - r_0 | \geq \frac{\delta}{4}$ for any $x_1 \in [x_1^0 - \frac{\delta}{2L}, x_1^0 + \frac{\delta}{2L}]$
and any $x_{\perp} \in B_{\mathbb R^{N-1}}(x_{\perp}^0, \frac{\delta}{4L}).$
Consequently
$$
\begin{array}{l}
\| \, |U(x_1, \cdot)| - r_0 \|_{L^{\frac{2(N-1)}{N-3}}(\mathbb R^{N-1})} \geq
\frac{\delta}{4} \left( {\mathcal L}^{N-1} \left( B_{\mathbb R^{N-1}}\left(x_{\perp}^0, \frac{\delta}{4L}\right) \right) \right)^{\frac{N-3}{2(N-1)}}
\\ \\
\geq \frac{\delta}{4} \left( \frac{\delta}{4L} \right)^{\frac{N-3}{2}} \left( {\mathcal L}^{N-1}(B_{\mathbb R^{N-1}}(0, 1)) \right)^{\frac{N-3}{2(N-1)}}
= K \delta^{\frac{N-1}{2}}
\end{array}
$$
for all $x_1 \in [x_1^0 - \frac{\delta}{2L}, x_1^0 + \frac{\delta}{2L}]$.
Using the Sobolev inequality in $\mathbb R^{N-1}$ we get for $x_1 \in \left[x_1^0 - \frac{\delta}{2L}, x_1^0 + \frac{\delta}{2L}\right]$,
$$
\int_{\mathbb R^{N-1}} |\nabla_{x_{\perp}} U(x_1, x_{\perp})|^2 \, dx_{\perp}
\geq \frac{1}{\tilde{C}_S^2} \| \, |U(x_1, \cdot)| - r_0 \|_{L^{\frac{2(N-1)}{N-3}}(\mathbb R^{N-1})}^2
\geq \frac{K^2}{\tilde{C}_S^2} \delta^{N-1}.
$$
Integrating the above inequality on $[x_1^0 - \frac{\delta}{2L}, x_1^0 + \frac{\delta}{2L}]$ we obtain
$\| \nabla_{x_{\perp}} U \|_{L^2(\mathbb R^N)}^2 \geq \frac{K^2}{L \tilde{C}_S^2} \delta^N = K_1 \delta^N.$
We conclude that if $\| \nabla_{x_{\perp}} U \|_{L^2(\mathbb R^N)}^2 < \min(1, K_1 \delta^N)$, then necessarily
$| \, |U| - r_0 | < \delta$ in $\mathbb R^N$. $\Box$
\subsection{Proof of Proposition \ref{prop2d}}
It follows from Lemma 4.1 in \cite{CM1}
that there are $k_0 > 0$, $C_1, C_2 > 0$ such that for all $\psi \in \mathcal{E}$ with $\displaystyle \int_{\mathbb R^2} |\nabla \psi|^2 \, dx \leq k_0$
we have
\begin{equation}
\label{kifkifpot}
C_1 \int_{\mathbb R^2} (\chi^2(|\psi|) - r_0^2)^2 \, dx \leq
\int_{\mathbb R^2} V(|\psi|^2) \, dx \leq C_2 \int_{\mathbb R^2} (\chi^2(|\psi|) - r_0^2)^2 \, dx.
\end{equation}
We recall that in space dimension two, nontrivial solutions $U_k$ to
(TW$_c$) have been constructed in Theorem \ref{th2d} by considering the minimization problem
$$
\mbox{ minimize } I(\psi) = Q(\psi) + \int_{\mathbb R^2} V(|\psi|^2) \, dx \quad \mbox{ in } \mathcal{E} \; \mbox{ under the constraint }
\int_{\mathbb R^2} |\nabla \psi|^2 \, dx = k.
\eqno{({\mathcal I}_k)}
$$
If $\mathcal{U}_k$ is a minimizer for $({\mathcal I}_k)$, there is $c_k > 0$ such that $U_k = (\mathcal{U}_k)_{c_k, c_k}$
solves (TW$_{c_k}$) and minimizes $E_{c_k} = E + c_k Q$ in the set
$\Big\{ \psi \in \mathcal{E} \; \Big| \; \displaystyle \int_{\mathbb R^2} |\nabla \psi|^2 \, dx = k \Big\}$.
Moreover, we have $c_k \to {\mathfrak c}_s$ as $k \to 0$.
Lemma \ref{liftingfacile} implies that $|U_k| \to r_0$ uniformly on $\mathbb R^2$ as $k \to 0$;
in particular, there is $k_1 > 0$ such that if $k \in (0, k_1)$,
we have $|U_k| \geq \frac{r_0}{2}$ in $\mathbb R^2$.
From the Pohozaev identities \eqref{Pohozaev} we get $c_k Q(U_k) + 2 \displaystyle \int_{\mathbb R^2} V(|U_k|^2) \, dx = 0$, and this gives
\begin{equation}
\label{scaling2}
I_{\rm min}(k) = I(\mathcal{U}_k) = \frac{1}{c_k} Q(U_k) + \frac{1}{c_k^2} \int_{\mathbb R^2} V(|U_k|^2) \, dx
= \frac{1}{2 c_k} Q(U_k) = -\frac{1}{c_k^2} \int_{\mathbb R^2} V(|U_k|^2) \, dx.
\end{equation}
By Lemma 5.2 in \cite{CM1} there is $k_2 > 0$ such that $-\frac{2k}{{\mathfrak c}_s^2} \leq I_{\rm min}(k) \leq -\frac{k}{{\mathfrak c}_s^2}$
for all $k \in (0, k_2)$.
Since $c_k \to {\mathfrak c}_s$ as $k \to 0$, the estimates \eqref{estim2d} follow directly from \eqref{kifkifpot} and \eqref{scaling2}.
It remains to prove \eqref{kifkif}.
By Proposition \ref{asympto}, there is $\mu_0 > 0$ such that for $k$ sufficiently small we have
$I_{\rm min}(k) \leq -\frac{k}{{\mathfrak c}_s^2} - \mu_0 k^3.$
By scaling we have
$$
\frac{1}{c_k^2} \Big( E_{c_k}(U_k) - \int_{\mathbb R^2} |\nabla U_k|^2 \, dx \Big)
= \frac{1}{c_k^2} \Big( c_k Q(U_k) + \int_{\mathbb R^2} V(|U_k|^2) \, dx \Big)
= I(\mathcal{U}_k) = I_{\rm min}(k) \leq -\frac{k}{{\mathfrak c}_s^2} - \mu_0 k^3.
$$
Since ${\mathfrak c}_s^2 - c_k^2 = \varepsilon_k^2$ and $\displaystyle \int_{\mathbb R^2} |\nabla U_k|^2 \, dx = k$, we get
\begin{equation}
\label{ecusson}
E_{c_k}(U_k) \leq k \Big( 1 - \frac{c_k^2}{{\mathfrak c}_s^2} \Big) - \mu_0 c_k^2 k^3
= \frac{k \varepsilon_k^2}{{\mathfrak c}_s^2} - \mu_0 c_k^2 k^3.
\end{equation}
The second Pohozaev identity \eqref{Pohozaev} yields $E_{c_k}(U_k) =
2 \displaystyle \int_{\mathbb R^2} |\partial_2 U_k|^2 \, dx \geq 0$, thus
$0 \leq k \Big( \frac{\varepsilon_k^2}{{\mathfrak c}_s^2} - \mu_0 c_k^2 k^2 \Big)$
and this implies
$$
\frac{\varepsilon_k^2}{{\mathfrak c}_s^2} \geq \mu_0 c_k^2 k^2.
$$
Since $c_k \geq {\mathfrak c}_s/2$ for $k$ small,
the left-hand side inequality in \eqref{kifkif} follows.
In order to prove the second inequality in \eqref{kifkif}, we need the next lemma.
In the case of the Gross-Pitaevskii nonlinearity, this result follows from Lemma 2.12 p. 597 in \cite{BGS1}.
In the case of general nonlinearities, it was proved in \cite{CM1}.
\begin{lem}[\cite{BGS1, CM1}]
\label{tools}
Let $N \geq 2$.
There is $\beta_* > 0$ such that any solution $U = \rho e^{i \phi} \in \mathcal{E}$ of (TW$_c$) verifying
$r_0 - \beta_* \leq \rho \leq r_0 + \beta_*$
satisfies the identities
\begin{equation}
\label{blancheneige}
E(U) + c Q(U) = \frac{2}{N} \int_{\mathbb R^N} |\nabla \rho|^2 \, dx \qquad \mbox{ and }
\end{equation}
\begin{equation}
\label{grincheux}
2 \int_{\mathbb R^N} \rho^2 |\nabla \phi|^2 \, dx =
c \int_{\mathbb R^N} (\rho^2 - r_0^2) \partial_1 \phi \, dx = -c Q(U).
\end{equation}
Furthermore, there exist $a_1, a_2 > 0$ such that
\begin{equation}
\label{balai}
a_1 \| \rho^2 - r_0^2 \|_{L^2(\mathbb R^N)} \leq \| \nabla U \|_{L^2(\mathbb R^N)} \leq a_2 \| \rho^2 - r_0^2 \|_{L^2(\mathbb R^N)}.
\end{equation}
\end{lem}
\noindent {\it Proof.}
Identity \eqref{grincheux} is Lemma 7.3 ($i$) in \cite{CM1}.
Formally, it follows by
multiplying the first equation in \eqref{phasemod} by $\phi$ and
integrating by parts over $\mathbb R^N$; see \cite{CM1} for a rigorous justification.
Combining the two Pohozaev identities in \eqref{Pohozaev}, we have
$$
(N-2) \int_{\mathbb R^N} |\nabla U|^2 \, dx + N \int_{\mathbb R^N} V(|U|^2) \, dx
+ c(N-1) Q(U) = 0.
$$
Using that $|\nabla U|^2 = |\nabla \rho|^2 + \rho^2 |\nabla \phi|^2$,
we infer from \eqref{grincheux}
\begin{align*}
N(E(U) + c Q(U)) = 2 \int_{\mathbb R^N} |\nabla U|^2 \, dx + c Q(U)
= & \ 2 \int_{\mathbb R^N} |\nabla \rho|^2 \, dx
+ \Big( 2 \int_{\mathbb R^N} \rho^2 |\nabla \phi|^2 \, dx + c Q(U) \Big)
\\ = & \ 2 \int_{\mathbb R^N} |\nabla \rho|^2 \, dx,
\end{align*}
and this establishes \eqref{blancheneige}.
The estimate \eqref{balai} has been proven in \cite{CM1} (see inequality (7.17) there). $\Box$
We come back to the proof of Proposition \ref{prop2d}.
We write $U_k = \rho e^{i \phi}$ and we denote $\eta = \rho^2 - r_0^2$, so that $\rho$, $\phi$ and $\eta$ satisfy
\eqref{phasemod}$-$\eqref{fond} (with $c_k$ instead of $c$).
Taking the Fourier transform of \eqref{fond} we get
\begin{equation}
\label{fondfou}
\begin{array}{rcl}
\widehat{\eta}(\xi) & = & \displaystyle \frac{|\xi|^2}{|\xi|^4 + {\mathfrak c}_s^2 |\xi|^2 - c_k^2 \xi_1^2}
\, \mathcal{F} \left( -2 |\nabla U_k|^2 + 2 c_k \eta \frac{\partial \phi}{\partial x_1} + 2 \rho^2 F(\rho^2) + {\mathfrak c}_s^2 \eta \right)
\\
\\
& &
\displaystyle - \, 2 c_k \sum_{j=1}^N \frac{\xi_1 \xi_j}{|\xi|^4 + {\mathfrak c}_s^2 |\xi|^2 - c_k^2 \xi_1^2}
\, \mathcal{F} \left( \eta \frac{\partial \phi}{\partial x_j} \right).
\end{array}
\end{equation}
It is easy to see that $2 \rho^2 F(\rho^2) + {\mathfrak c}_s^2 \eta = \mathcal{O}((\rho^2 - r_0^2)^2) = \mathcal{O}(\eta^2)$, hence
$$
\| \mathcal{F} \left( 2 \rho^2 F(\rho^2) + {\mathfrak c}_s^2 \eta \right) \|_{L^{\infty}(\mathbb R^N)} \leq \| 2 \rho^2 F(\rho^2) + {\mathfrak c}_s^2 \eta \|_{L^1(\mathbb R^N)}
\leq C \| \eta \|_{L^2(\mathbb R^N)}^2.
$$
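The cancellation behind the bound $2 \rho^2 F(\rho^2) + {\mathfrak c}_s^2 \eta = \mathcal{O}(\eta^2)$ can be checked by a Taylor expansion of $F$ at $r_0^2$, recalling that $F(r_0^2) = 0$, $-F'(r_0^2) = 2a^2$ and ${\mathfrak c}_s = 2 a r_0$:
\begin{align*}
2 \rho^2 F(\rho^2)
 &= 2 (r_0^2 + \eta) \big( F'(r_0^2)\, \eta + \mathcal{O}(\eta^2) \big)
  = 2 r_0^2 F'(r_0^2)\, \eta + \mathcal{O}(\eta^2) \\
 &= -4 a^2 r_0^2\, \eta + \mathcal{O}(\eta^2)
  = -{\mathfrak c}_s^2\, \eta + \mathcal{O}(\eta^2),
\end{align*}
% the linear terms cancel, leaving only the quadratic remainder.
so that $2 \rho^2 F(\rho^2) + {\mathfrak c}_s^2 \eta = \mathcal{O}(\eta^2)$ indeed.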
Since $r_0 - \beta_* < |U_k| < r_0 + \beta_*$ if $k$ is sufficiently small and $|\nabla U_k|^2 = |\nabla \rho|^2 + \rho^2 |\nabla \phi|^2$,
using \eqref{balai} we get
$$
\Big\| \mathcal{F} \left( \eta \frac{\partial \phi}{\partial x_j} \right) \Big\|_{L^{\infty}(\mathbb R^N)}
\leq \Big\| \eta \frac{\partial \phi}{\partial x_j} \Big\|_{L^1(\mathbb R^N)}
\leq \| \eta \|_{L^2(\mathbb R^N)} \Big\| \frac{\partial \phi}{\partial x_j} \Big\|_{L^2(\mathbb R^N)}
\leq C \| \eta \|_{L^2(\mathbb R^N)}^2
$$
and $\| \mathcal{F}(|\nabla U_k|^2) \|_{L^{\infty}(\mathbb R^N)} \leq \| \nabla U_k \|_{L^2(\mathbb R^N)}^2 \leq C \| \eta \|_{L^2(\mathbb R^N)}^2.$
Coming back to \eqref{fondfou} we discover
$$
|\widehat{\eta}(\xi)| \leq C \| \eta \|_{L^2(\mathbb R^N)}^2 \cdot \frac{|\xi|^2}{|\xi|^4 + {\mathfrak c}_s^2 |\xi|^2 - c_k^2 \xi_1^2}.
$$
Using Plancherel's formula and the above estimate we find
\begin{equation}
\label{planchereta}
\| \eta \|_{L^2(\mathbb R^N)}^2 = \frac{1}{(2\pi)^N} \int_{\mathbb R^N} |\widehat{\eta}(\xi)|^2 \, d\xi
\leq C \| \eta \|_{L^2(\mathbb R^N)}^4 \int_{\mathbb R^N} \frac{|\xi|^4}{(|\xi|^4 + {\mathfrak c}_s^2 |\xi|^2 - c_k^2 \xi_1^2)^2} \, d\xi.
\end{equation}
If $N = 2$, a straightforward computation using polar coordinates gives (see the proof of (2.59) p. 598 in \cite{BGS2}):
$$
\int_{\mathbb R^2} \frac{|\xi|^4}{(|\xi|^4 + {\mathfrak c}_s^2 |\xi|^2 - c_k^2 \xi_1^2)^2} \, d\xi
= \frac{\pi}{{\mathfrak c}_s \sqrt{{\mathfrak c}_s^2 - c_k^2}} = \frac{\pi}{{\mathfrak c}_s \varepsilon_k}.
$$
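For completeness, the polar-coordinate computation behind this identity can be sketched as follows, writing $\xi = (r \cos\theta, r \sin\theta)$ and using the classical formula $\int_0^{2\pi} \frac{d\theta}{a - b \cos^2\theta} = \frac{2\pi}{\sqrt{a(a-b)}}$ for $a > b \geq 0$:
\begin{align*}
\int_{\mathbb R^2} \frac{|\xi|^4}{(|\xi|^4 + {\mathfrak c}_s^2 |\xi|^2 - c_k^2 \xi_1^2)^2} \, d\xi
 &= \int_0^{2\pi}\!\! \int_0^{\infty} \frac{r^4 \cdot r}{r^4 \, (r^2 + {\mathfrak c}_s^2 - c_k^2 \cos^2\theta)^2} \, dr \, d\theta \\
 &= \int_0^{2\pi} \frac{d\theta}{2 \, ({\mathfrak c}_s^2 - c_k^2 \cos^2\theta)}
  \qquad \mbox{since } \int_0^{\infty} \frac{r \, dr}{(r^2 + A)^2} = \frac{1}{2A} \\
 &= \frac 12 \cdot \frac{2\pi}{\sqrt{{\mathfrak c}_s^2 \, ({\mathfrak c}_s^2 - c_k^2)}}
  = \frac{\pi}{{\mathfrak c}_s \sqrt{{\mathfrak c}_s^2 - c_k^2}}.
\end{align*}
% The classical angular formula applies with a = {\mathfrak c}_s^2, b = c_k^2, and a > b since c_k < {\mathfrak c}_s.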
From \eqref{planchereta} we get $\| \eta \|_{L^2(\mathbb R^2)}^2 \leq \frac{C}{\varepsilon_k} \| \eta \|_{L^2(\mathbb R^2)}^4$
and taking into account \eqref{balai} we infer that
$\varepsilon_k \leq C \| \eta \|_{L^2(\mathbb R^2)}^2 \leq \tilde{C} \| \nabla U_k \|_{L^2(\mathbb R^2)}^2 = \tilde{C} k.$ $\Box$
Notice that at this stage, we have only upper bounds on the energy of travelling waves,
and we will have to prevent convergence towards the trivial solution to (SW).
This will be done with the help of the following result. It was proven in \cite{BGS2} in the case
of the Gross-Pitaevskii nonlinearity (see Proposition 2.4 p. 595 there). We extend the proof
to general nonlinearities.
\begin{lem}
\label{minoinf}
Let $N \geq 2$ and assume that (A1) holds and $F$ is twice differentiable at $r_0^2$.
There is $C > 0$, depending only on $N$ and on $F$, such that any travelling wave $U \in \mathcal{E}$ of {\rm (NLS)}
of speed $c \in [0, {\mathfrak c}_s]$ such that $\frac{r_0}{2} \leq |U| \leq \frac{3 r_0}{2}$ satisfies
$$
\| \, |U| - r_0 \|_{L^{\infty}(\mathbb R^N)} \geq C({\mathfrak c}_s^2 - c^2) = C \varepsilon^2(U).
$$
\end{lem}
\|oindent {{\rm in}t Proof.}
Let $U \in \mathcal{E}$ be a travelling wave such that $\frac{r_0}{2} \leq |U| \leq \frac{3 r_0}{2}$ in $\mathbb R^N$.
Then $U \in W_{loc}^{2,p}(\mathbb R^N)$, $\nabla U \in W^{1,p}(\mathbb R^N)$ for all $p \in [2, \infty)$
(see Proposition 2.2, p. 1078-1079 in \cite{M2}), and $U$ admits a lifting $U = \rho e^{i \phi}$, where
$\rho$ and $\phi$ satisfy \eqref{phasemod}.
Since $U \in \mathcal{E}$ we have $\rho^2 - r_0^2 \in H^1(\mathbb R^N)$, and then it is easy to see that
$\frac{\rho^2 - r_0^2}{\rho} \in H^1(\mathbb R^N)$.
Multiplying the second equation in \eqref{phasemod} by $\frac{\rho^2 - r_0^2}{\rho}$
and integrating by parts we get
\begin{equation}
\label{ident1}
\int_{\mathbb R^N} \left( 1 + \frac{r_0^2}{\rho^2} \right) |\nabla \rho|^2 \, dx
+ \int_{\mathbb R^N} ( \rho^2 - r_0^2) |\nabla \phi|^2 - ( \rho^2 - r_0^2) F( \rho^2) - c ( \rho^2 - r_0^2) \frac{\partial \phi}{\partial x_1} \, dx = 0.
\end{equation}
Denote $\delta = \| \, |U| - r_0 \|_{L^{\infty}(\mathbb R^N)} = \| \rho - r_0 \|_{L^{\infty}(\mathbb R^N)}$. We have
\begin{equation}
\label{inegradrho}
\int_{\mathbb R^N} \left( 1 + \frac{r_0^2}{\rho^2} \right) |\nabla \rho|^2 \, dx
\geq \left( 1 + \frac{r_0^2}{(r_0 + \delta)^2} \right) \int_{\mathbb R^N} |\nabla \rho|^2 \, dx
\qquad \mbox{ and }
\end{equation}
\begin{equation}
\label{majophase}
\Big| \int_{\mathbb R^N} ( \rho^2 - r_0^2) |\nabla \phi|^2 \, dx \Big|
\leq \int_{\mathbb R^N} \frac{| \rho^2 - r_0^2 |}{\rho^2} \rho^2 |\nabla \phi|^2 \, dx
\leq \frac{2 r_0 \delta + \delta^2}{(r_0 - \delta)^2} \int_{\mathbb R^N} |\nabla U|^2 \, dx.
\end{equation}
There is $\tilde{C} > 0$ such that $| F(s^2) - F'(r_0^2)(s^2 - r_0^2) | \leq \tilde{C} (s^2 - r_0^2)^2$
for all $s \in [\frac{r_0}{2}, \frac{3r_0}{2}]$.
Remember that $-F'(r_0^2) = 2a^2$ and ${\mathfrak c}_s = 2 a r_0$, thus
\begin{equation}
\label{60}
- ( \rho^2 - r_0^2) F( \rho^2) \geq - F'( r_0^2) ( \rho^2 - r_0^2)^2 - \tilde{C} | \rho^2 - r_0^2 |^3
\geq \left( 2 a^2 - \tilde{C} ( 2 r_0 \delta + \delta^2 ) \right) (\rho^2 - r_0^2)^2.
\end{equation}
Using \eqref{grincheux} and \eqref{momentlift}, then \eqref{ident1} and \eqref{inegradrho}-\eqref{60}, we get
$$
\begin{array}{l}
\displaystyle - 2 c Q( U) = 2 \int_{\mathbb R^N} \rho^2 |\nabla \phi|^2 \, dx + c \int_{\mathbb R^N} ( \rho^2 - r_0^2) \frac{\partial \phi}{\partial x_1} \, dx
\\
\\
\displaystyle = 2 \int_{\mathbb R^N} \rho^2 |\nabla \phi|^2 \, dx +
\int_{\mathbb R^N} \left( 1 + \frac{r_0^2}{\rho^2} \right) |\nabla \rho|^2 \, dx
+ \int_{\mathbb R^N} ( \rho^2 - r_0^2) |\nabla \phi|^2 - ( \rho^2 - r_0^2) F( \rho^2) \, dx
\\
\\
\displaystyle \geq
2 \int_{\mathbb R^N} \rho^2 |\nabla \phi|^2 \, dx
+ \int_{\mathbb R^N} \left( 1 + \frac{r_0^2}{(r_0 + \delta)^2} \right) |\nabla \rho|^2
- \frac{2 r_0 \delta + \delta^2}{(r_0 - \delta)^2} |\nabla U|^2
+ \left( 2 a^2 - \tilde{C} ( 2 r_0 \delta + \delta^2 ) \right) (\rho^2 - r_0^2)^2 \, dx
\end{array}
$$
and we infer that there exists $K>0$, depending only on $F$, such that
\begin{equation}
\label{61}
-2 c Q(U) \geq 2 ( 1 - K \delta ) \int_{\mathbb R^N} |\nabla U|^2 + a^2 ( \rho^2 - r_0^2)^2 \, dx.
\end{equation}
On the other hand, using \eqref{momentlift} we have
\begin{equation}
\label{62}
\begin{array}{l}
\displaystyle - Q( U) = \frac{2 a r_0}{{\mathfrak c}_s} \int_{\mathbb R^N} ( \rho^2 - r_0^2) \frac{\partial \phi}{\partial x_1} \, dx
\leq \frac{1}{{\mathfrak c}_s} \int_{\mathbb R^N} r_0^2 \Big| \frac{\partial \phi}{\partial x_1} \Big|^2 + a^2 ( \rho^2 - r_0^2)^2 \, dx
\\
\\
\displaystyle \leq \frac{1}{{\mathfrak c}_s} \int_{\mathbb R^N} \frac{r_0^2}{(r_0 - \delta)^2} \rho^2 \Big| \frac{\partial \phi}{\partial x_1} \Big|^2
+ a^2 ( \rho^2 - r_0^2)^2 \, dx
\leq \frac{1}{{\mathfrak c}_s} \frac{r_0^2}{(r_0 - \delta)^2} \int_{\mathbb R^N} |\nabla U|^2 + a^2 ( \rho^2 - r_0^2)^2 \, dx.
\end{array}
\end{equation}
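The first inequality above is simply the pointwise Young inequality
$$
2 a r_0 \, s t \leq a^2 s^2 + r_0^2 t^2, \qquad s, t \in \mathbb R,
$$
applied with $s = \rho^2 - r_0^2$ and $t = \frac{\partial \phi}{\partial x_1}$, since ${\mathfrak c}_s = 2 a r_0$.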
Since $U$ is not constant we have $\displaystyle \int_{\mathbb R^N} |\nabla U|^2 + a^2 ( \rho^2 - r_0^2)^2 \, dx > 0$,
and comparing \eqref{61} and \eqref{62} we get
$$
\frac{c}{{\mathfrak c}_s} \frac{r_0^2}{(r_0 - \delta)^2} \geq 1 - K \delta.
$$
If $\delta > \frac{1}{2K}$ the conclusion of Lemma \ref{minoinf} holds because $\varepsilon (U)$ is bounded.
Otherwise, the previous inequality is equivalent to
$\frac{r_0^2}{(r_0 - \delta)^2} \frac{1}{1 - K \delta} \geq \frac{{\mathfrak c}_s}{\sqrt{{\mathfrak c}_s^2 - \varepsilon^2(U)}}.$
There are $K_1, \; K_2 > 0$ such that $\frac{r_0^2}{(r_0 - \delta)^2} \frac{1}{1 - K \delta} \leq 1 + K_1 \delta$ and
$\frac{{\mathfrak c}_s}{\sqrt{{\mathfrak c}_s^2 - \varepsilon^2}} \geq 1 + K_2 \varepsilon^2$ for all $\delta \in [0, \frac{1}{2K}]$ and all $\varepsilon \in [0, {\mathfrak c}_s)$, and we
infer that $1 + K_1 \delta \geq 1 + K_2 \varepsilon^2(U)$, that is
$\delta = \| \, |U| - r_0 \|_{L^{\infty}(\mathbb R^N)} \geq \frac{K_2}{K_1} \varepsilon^2(U)$.
\hfill $\Box$
\subsection{Initial bounds for $A_{\varepsilon}$}

Let $U_c \in \mathcal{E}$ be a travelling wave to (NLS) of speed $c$
provided by Theorem \ref{th2dposit} or Theorem \ref{th2d} if $N=2$, respectively by Theorem \ref{thM} if $N=3$,
such that $\frac{r_0}{2} \leq |U_c| \leq \frac{3r_0}{2}$ in $\mathbb R^N$.
As in \eqref{ansatz}, we write $U_c(x) = \rho(x) e^{i \phi(x)} = r_0 \sqrt{1 + \varepsilon^2 A_{\varepsilon}(z)} \, e^{i \varepsilon \varphi_{\varepsilon}(z)},$ where
$\varepsilon = \sqrt{{\mathfrak c}_s^2 - c^2}$, $z_1 = \varepsilon x_1$, $z_\perp = \varepsilon^2 x_\perp$.
According to Proposition 2.2, p. 1078-1079 in \cite{M2}, we have
$$
\| U_c \|_{C_b^1( \mathbb R^N)} \leq C \qquad \mbox{ and } \qquad \| \nabla U_c \|_{W^{1,p}(\mathbb R^N)} \leq C_p
\quad \mbox{ for } p \in [2, \infty).
$$
By scaling, we obtain the initial (rough) estimates
\begin{equation}
\label{bourrinska}
\| A_{\varepsilon} \|_{L^{\infty}} \leq \frac{C}{\varepsilon^2}, \quad
\| \partial_{z_1} A_{\varepsilon} \|_{L^{\infty}} \leq \frac{C}{\varepsilon^3}, \quad
\| \nabla_{z_\perp} A_{\varepsilon} \|_{L^{\infty}} \leq \frac{C}{\varepsilon^4}, \quad
\| \partial_{z_1} \varphi_{\varepsilon} \|_{L^{\infty}} \leq \frac{C}{\varepsilon^2}, \quad
\| \nabla_{z_\perp} \varphi_{\varepsilon} \|_{L^{\infty}} \leq \frac{C}{\varepsilon^3}
\end{equation}
and
\begin{equation}
\label{bourrinSKF}
\Big\| \frac{\partial^2 A_{\varepsilon}}{\partial z_1^2} \Big\|_{L^p} \leq C_p \varepsilon^{-4 + \frac{2N-1}{p}}, \qquad
\Big\| \frac{\partial^2 A_{\varepsilon}}{\partial z_1 \partial z_j} \Big\|_{L^p} \leq C_p \varepsilon^{-5 + \frac{2N-1}{p}}, \qquad
\Big\| \frac{\partial^2 A_{\varepsilon}}{\partial z_j \partial z_k} \Big\|_{L^p} \leq C_p \varepsilon^{-6 + \frac{2N-1}{p}}
\end{equation}
for any $p \in [2, \infty)$ and all $j, k \in \{2, \dots, N\}$.
We have:
\begin{lem}
\label{BornEnergy}
Assume that (A2) and (A4) are satisfied and $\Gamma \neq 0$.
Let $U_c$ be a solution to {\rm (TW$_{c}$)} provided by Theorem \ref{th2d} if $N=2$,
respectively by Theorem \ref{thM} if $N=3$, and let $\varepsilon = \sqrt{{\mathfrak c}_s^2 - c^2}$.
If $N=3$ we assume moreover that $E(U_c) \leq \frac{K}{\varepsilon}$, where $K$ does not depend on $\varepsilon$.
There exist $\varepsilon_0 > 0$ and $C > 0$ (depending only on $F$, $N$, $K$) such that
$U_c$ admits a lifting as in \eqref{ansatz} whenever $\varepsilon \in (0, \varepsilon_0)$, and the following estimate holds:
$$
\int_{\mathbb R^N} | \partial_{z_1} \varphi_{\varepsilon} |^2 + |\nabla_{z_\perp} \varphi_{\varepsilon}|^2
+ A_{\varepsilon}^2 + | \partial_{z_1} A_{\varepsilon} |^2
+ \varepsilon^2 | \nabla_{z_\perp} A_{\varepsilon} |^2 \, dz \leq C. $$
\end{lem}

\noindent {\it Proof.}
If $N=2$ it follows from Theorem \ref{th2d} that
$k = \displaystyle \int_{\mathbb R^2} |\nabla U_c|^2 \, dx$ is small if $\varepsilon$ is small.
Using Lemma \ref{liftingfacile} in the case $N=2$, respectively Corollary \ref{sanszero} if $N=3$,
we infer that $|U_c|$ is arbitrarily close to $r_0$ if $\varepsilon$ is sufficiently small,
and then it is clear that we have a lifting as in \eqref{ansatz}.
We will repeatedly use the fact that there is a constant $C$ depending only on $F$ such that
$$
C |\partial_j U_c|^2 \geq |\partial_j (\rho^2)|^2 + |\partial_j \phi|^2 \qquad \mbox{ for } 1 \leq j \leq N.
$$
In view of the Taylor expansion of $V$ near $r_0^2$, for $\varepsilon$ sufficiently close to $0$ (so that $|U_c|$ is sufficiently close to $r_0$) we have
$$ V(|U_c|^2) \geq C (|U_c| - r_0)^2. $$
By scaling, we infer that for some $\delta_1 > 0$ depending only on $F$ there holds
$$ E(U_c) = \int_{\mathbb R^N} |\nabla U_c|^2 + V(|U_c|^2) \, dx
\geq \delta_1 \varepsilon^{5-2N} \int_{\mathbb R^N} | \partial_{z_1} \varphi_{\varepsilon} |^2 + A_{\varepsilon}^2 \, dz. $$
In the case $N=2$ it follows from Proposition \ref{prop2d}
that $E(U_c) \leq C \varepsilon$ for some $C$ independent of $\varepsilon$.
In the case $N=3$ we use the assumption $E(U_c) \leq \frac{K}{\varepsilon}$.
In both cases the previous inequality implies that
\begin{equation}
\label{bourrin2}
\int_{\mathbb R^N} | \partial_{z_1} \varphi_{\varepsilon} |^2 + A_{\varepsilon}^2 \, dz \leq C.
\end{equation}
We have $E_{c}(U_c) = T_{c} = \mathcal{O}(\varepsilon)$ if $N=3$ by Proposition \ref{asympto} $(ii)$,
respectively $E_{c}(U_c) = \mathcal{O}(k \varepsilon^2) = \mathcal{O}(\varepsilon^3)$ by \eqref{ecusson} and \eqref{kifkif} in the case $N=2$.
From the Pohozaev identity $P_c(U_c) = 0$ (see \eqref{Pohozaev}) we deduce
$$
\frac{2 r_0^2 \varepsilon^{7-2N}}{N-1} \int_{\mathbb R^N}
|\nabla_{z_\perp} \varphi_{\varepsilon}|^2 + \varepsilon^2 |\nabla_{z_\perp} A_{\varepsilon}|^2 \, dz
\leq C \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{\perp} U_c|^2 \, dx
= C E_{c}(U_c) = \mathcal{O}(\varepsilon^{7-2N}).
$$
Thus we get
\begin{equation}
\label{bourrin3}
\int_{\mathbb R^N} |\nabla_{z_\perp} \varphi_{\varepsilon}|^2 + \varepsilon^2 |\nabla_{z_\perp} A_{\varepsilon}|^2 \, dz \leq C.
\end{equation}
Furthermore, by scaling the identity \eqref{blancheneige} in Lemma \ref{tools} we obtain
$$ r_0^2 \varepsilon^{7-2N} \int_{\mathbb R^N} |\partial_{z_1} A_{\varepsilon}|^2 \, dz
\leq C \int_{\mathbb R^N} |\partial_{x_1} \rho|^2 \, dx \leq C
\int_{\mathbb R^N} |\nabla \rho|^2 \, dx = C \frac{N}{2} E_{c}(U_c) = \mathcal{O}(\varepsilon^{7-2N}), $$
so that
\begin{equation}
\label{bourrin4}
\int_{\mathbb R^N} |\partial_{z_1} A_{\varepsilon}|^2 \, dz \leq C.
\end{equation}
Gathering \eqref{bourrin2}, \eqref{bourrin3} and \eqref{bourrin4}
yields the desired inequality. \hfill $\Box$ \\
Using the above estimates, we shall find $L^q$ bounds
for $A_{\varepsilon}$. The proof is based on equation \eqref{Fonda}, that is
$$
\Big\{ \partial_{z_1}^4 - \partial_{z_1}^2 - {\mathfrak c}_s^2 \Delta_{z_\perp}
+ 2 \varepsilon^2 \partial_{z_1}^2 \Delta_{z_\perp} + \varepsilon^4 \Delta^2_{z_\perp} \Big\}
A_{\varepsilon} = R_{\varepsilon},
\eqno{\mbox{\eqref{Fonda}}}
$$
where
\begin{align*}
R_{\varepsilon} = & \
\{ \partial_{z_1}^2 + \varepsilon^2 \Delta_{z_\perp} \} \Big[
2(1 + \varepsilon^2 A_{\varepsilon}) \Big( (\partial_{z_1} \varphi_{\varepsilon})^2
+ \varepsilon^2 |\nabla_{z_\perp} \varphi_{\varepsilon}|^2 \Big) + \varepsilon^2 \frac{(\partial_{z_1} A_{\varepsilon})^2
+ \varepsilon^2 |\nabla_{z_\perp} A_{\varepsilon}|^2}{2(1 + \varepsilon^2 A_{\varepsilon})} \Big]
\\ & \
- 2 c \varepsilon^2 \Delta_{z_\perp} ( A_{\varepsilon} \partial_{z_1} \varphi_{\varepsilon})
+ 2 c \varepsilon^2 \sum_{j=2}^N \partial_{z_1} \partial_{z_j} ( A_{\varepsilon} \partial_{z_j} \varphi_{\varepsilon})
\\ & \
+ \{ \partial_{z_1}^2 + \varepsilon^2 \Delta_{z_\perp} \} \Big[
{\mathfrak c}_s^2 \Big( 1 - \frac{r_0^4 F''(r_0^2)}{{\mathfrak c}_s^2} \Big) A_{\varepsilon}^2
- \frac{1}{\varepsilon^4} \tilde{F}_3(r_0^2 \varepsilon^2 A_{\varepsilon}) \Big]
\end{align*}
and we recall that $\tilde{F}_3(\alpha) = \mathcal{O}(\alpha^3)$ as $\alpha \to 0$.
Let
$$
D_{\varepsilon}(\xi) = \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_\perp|^2
+ 2 \varepsilon^2 \xi_1^2 |\xi_\perp|^2 + \varepsilon^4 |\xi_\perp|^4
= ( \xi_1^2 + \varepsilon^2 |\xi_\perp|^2)^2 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_\perp|^2.
$$
We will consider the following kernels:
$$ K^1_{\varepsilon}(z) = \mathcal{F}^{-1} \Big( \frac{\xi_1^2}{D_{\varepsilon}(\xi)} \Big),
\qquad K^\perp_{\varepsilon}(z) = \mathcal{F}^{-1} \Big( \frac{|\xi_\perp|^2}{D_{\varepsilon}(\xi)} \Big)
\qquad \mbox{and} \qquad
K^{1,j}_{\varepsilon}(z) = \mathcal{F}^{-1} \Big( \frac{\xi_1 \xi_j}{D_{\varepsilon}(\xi)} \Big), \quad j = 2, \dots, N.$$
Then we may rewrite \eqref{Fonda} as a convolution equation
\begin{equation}
\label{Henry}
A_{\varepsilon} = \Big( K^1_{\varepsilon} + \varepsilon^2 K^\perp_{\varepsilon} \Big) * G_{\varepsilon}
+ 2 c(\varepsilon) \varepsilon^2 K^{\perp}_{\varepsilon} * (A_{\varepsilon} \partial_{z_1} \varphi_{\varepsilon})
- 2 c(\varepsilon) \varepsilon^2 \sum_{j=2}^N K^{1,j}_{\varepsilon} * (A_{\varepsilon} \partial_{z_j} \varphi_{\varepsilon}),
\end{equation}
where
\begin{align*}
G_{\varepsilon} = & \
(1 + \varepsilon^2 A_{\varepsilon}) \Big( (\partial_{z_1} \varphi_{\varepsilon})^2
+ \varepsilon^2 |\nabla_{z_\perp} \varphi_{\varepsilon}|^2 \Big)
+ \varepsilon^2 \frac{(\partial_{z_1} A_{\varepsilon})^2 + \varepsilon^2 |\nabla_{z_\perp} A_{\varepsilon}|^2}{4(1 + \varepsilon^2 A_{\varepsilon})} \\ & \
+ \frac{{\mathfrak c}_s^2}{4} ( \Gamma - 2 ) A_{\varepsilon}^2
- \frac{1}{\varepsilon^4} \tilde{F}_3(r_0^2 \varepsilon^2 A_{\varepsilon}).
\end{align*}
\begin{lem}
\label{Grenouille}
The following estimates hold for $N=2$, $3$ and $\varepsilon$ small enough:

(i) For all $2 \leq p \leq \infty$ we have $\| \partial_{z_1} A_{\varepsilon} \|_{L^p}
+ \varepsilon \| \nabla_{z_\perp} A_{\varepsilon} \|_{L^p} \leq C \varepsilon^{\frac{6}{p} - 3}$.

(ii) There exists $C>0$ such that
$\| A_{\varepsilon} \|_{L^{3q}} \leq C \varepsilon^{-\frac23} \| A_{\varepsilon} \|^{\frac23}_{L^{2q}}$ for any $1 \leq q \leq \infty$.

(iii) If $N=3$, for any $2 \leq p < 8/3$ there is $C_p > 0$ such that $\| A_{\varepsilon} \|_{L^p(\mathbb R^3)} \leq C_p$.

(iv) If $N=2$, for any $2 \leq p < 4$ there is $C_p > 0$ such that $\| A_{\varepsilon} \|_{L^p(\mathbb R^2)} \leq C_p$.
\end{lem}
\noindent {\it Proof.} For $(i)$, it suffices to notice that the
estimate is true for $p=2$ by Lemma \ref{BornEnergy} and for $p=\infty$
by \eqref{bourrinska}; therefore it holds for any $2 \leq p \leq \infty$ by interpolation.
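Explicitly, for $2 < p < \infty$ the interpolation reads
$$
\| \partial_{z_1} A_{\varepsilon} \|_{L^p}
\leq \| \partial_{z_1} A_{\varepsilon} \|_{L^2}^{\frac{2}{p}} \, \| \partial_{z_1} A_{\varepsilon} \|_{L^{\infty}}^{1 - \frac{2}{p}}
\leq C \Big( \frac{C}{\varepsilon^3} \Big)^{1 - \frac{2}{p}}
\leq C' \varepsilon^{\frac{6}{p} - 3},
$$
and the same computation gives the bound for $\varepsilon \nabla_{z_\perp} A_{\varepsilon}$.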
For $(ii)$ we just interpolate the exponent $3q$ between $2q$ and $\infty$ and use \eqref{bourrinska}:
$$
\| A_{\varepsilon} \|_{L^{3q}} \leq
\| A_{\varepsilon} \|_{L^{2q}}^{\frac 23}
\| A_{\varepsilon} \|_{L^{\infty}}^{\frac 13} \leq C \varepsilon^{-\frac 23}
\| A_{\varepsilon} \|_{L^{2q}}^{\frac 23}.
$$
Next we prove $(iii)$. As already
mentioned, a uniform $L^p$ bound (for $2 \leq p \leq 8/3$) on
the kernels $K^1_{\varepsilon}$, $\varepsilon^2 K^\perp_{\varepsilon}$ and $\varepsilon^2 K^{1,j}_{\varepsilon}$
is established in \cite{BGS1} by using a Sobolev estimate. Unfortunately this is no longer possible
in dimension $N=3$. We thus rely on a suitable decomposition of $A_{\varepsilon}$ in the Fourier space.
Some terms are controlled by using the energy bounds in Lemma \ref{BornEnergy}, the others by using \eqref{Henry}.
We consider a set of parameters $\alpha$, $\beta$, $\gamma \in (1,2)$ and $\nu > 5/2$,
with $\alpha \geq \beta$ and $\alpha \geq \gamma$ (to be fixed later).
For $\varepsilon \in (0,1)$, let
$$
\begin{array}{c}
E^I = \{ \xi \in \mathbb R^N \; \big| \; |\xi_{\perp}| < 1 \}, \quad
E^{II} = \{ \xi \in \mathbb R^N \; \big| \; |\xi_\perp| > \varepsilon^{-\alpha} \}, \quad
E^{III} = \{ \xi \in \mathbb R^N \; \big| \; \varepsilon^{-\beta} \leq |\xi_\perp| \leq \varepsilon^{-\alpha}, \, |\xi_1| < 1 \},
\\
\\
E^{IV} = \{ \xi \in \mathbb R^N \; \big| \; \varepsilon^{-\gamma} \leq |\xi_\perp| \leq \varepsilon^{-\alpha}, \, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \}, \qquad
E^{V} = \{ \xi \in \mathbb R^N \; \big| \; 1 \leq |\xi_\perp| \leq \varepsilon^{-\alpha}, \, |\xi_1|^\nu > |\xi_\perp| \},
\\
\\
E^{VI} = \{ \xi \in \mathbb R^N \; \big| \; 1 \leq |\xi_\perp| < \varepsilon^{-\beta}, \, |\xi_1| < 1 \}, \qquad
E^{VII} = \{ \xi \in \mathbb R^N \; \big| \; 1 \leq |\xi_\perp| < \varepsilon^{-\gamma}, \, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \}.
\end{array}
$$
It is easy to see that the sets $E^I, \dots, E^{VII}$ are disjoint and cover $\mathbb R^N$.
For $J \in \{ I, \dots, VII \}$ we denote $A_{\varepsilon}^J = \mathcal{F}^{-1} (\widehat{A}_{\varepsilon} \mathbf{1}_{E^J})$,
so that $A_{\varepsilon} = A_{\varepsilon}^I + \dots + A_{\varepsilon}^{VII}$, and we estimate each term separately.
For $A_{\varepsilon}^{I}$ we use
$$ \| \nabla_{z_\perp} A_{\varepsilon}^{I} \|_{L^2}
= \| \xi_{\perp} \widehat{A}_{\varepsilon} \mathbf{1}_{\{ |\xi_\perp| < 1 \}} \|_{L^2}
\leq \| \widehat{A}_{\varepsilon} \mathbf{1}_{\{ |\xi_\perp| \leq 1 \}} \|_{L^2}
\leq \| \widehat{A}_{\varepsilon} \|_{L^2} = \| A_{\varepsilon} \|_{L^2} \leq C. $$
By Lemma \ref{BornEnergy}, $A_{\varepsilon}$ and $\partial_{z_1} A_{\varepsilon}$
are uniformly bounded in $L^2$, thus we have
$$ \| A_{\varepsilon}^{I} \|_{L^2} + \| \partial_{z_1} A_{\varepsilon}^{I} \|_{L^2} \leq C. $$
Hence $A_{\varepsilon}^{I}$ is uniformly bounded in $H^1$, and using the Sobolev
embedding we deduce
\begin{equation}
\label{Timide1}
\forall \; 2 \leq p \leq 6, \quad \quad \quad
\| A_{\varepsilon}^{I} \|_{L^p} \leq C.
\end{equation}
We will use the Riesz-Thorin theorem to bound $A_{\varepsilon}^{II}$: if $1 < q = \frac{p}{p-1} < 2$ is the conjugate exponent
of $p \in (2, \infty)$, there holds
$$ \| A_{\varepsilon}^{II} \|_{L^p} \leq C \| \widehat{A}_{\varepsilon}^{II} \|_{L^q}. $$
Thus it suffices to bound $\| \widehat{A}_{\varepsilon}^{II} \|_{L^q}$.
Using the H\"older inequality with exponents $\frac{2}{q}$ and $\frac{2}{2-q}$,
we have
\begin{align*}
\| \widehat{A}_{\varepsilon}^{II} \|_{L^q}^q
= & \ \int_{\mathbb R^3}
\Big( (|\xi_1| + \varepsilon |\xi_\perp|) |\widehat{A}_{\varepsilon}| \Big)^q
\times \frac{\mathbf{1}_{\{ |\xi_\perp| > \varepsilon^{-\alpha} \}}}{(|\xi_1| + \varepsilon |\xi_\perp|)^q} \, d\xi \\
\leq & \ \| (|\xi_1| + \varepsilon |\xi_\perp|) \widehat{A}_{\varepsilon} \|_{L^2}^q
\left( \int_{\mathbb R^3} \frac{\mathbf{1}_{\{ |\xi_\perp| \geq \varepsilon^{-\alpha} \}}}{(|\xi_1| + \varepsilon |\xi_\perp|)^{\frac{2q}{2-q}}} \, d\xi \right)^{\frac{2-q}{2}} \\
\leq & \ C_q ( \| \partial_{z_1} A_{\varepsilon} \|_{L^2}
+ \varepsilon \| \nabla_{z_\perp} A_{\varepsilon} \|_{L^2})^q
\left( \int_{\varepsilon^{-\alpha}}^{\infty} \frac{R \, dR}{(\varepsilon R)^{\frac{3q-2}{2-q}}} \right)^{\frac{2-q}{2}}.
\end{align*}
(We have computed the integral in $\xi_1$ and we
used cylindrical coordinates for the third line.) Provided that $\frac{3q-2}{2-q} > 2$ (or, equivalently,
$q > 6/5$), the last integral in $R$ is
$$ C(q) \varepsilon^{-\frac{3q-2}{2-q}} \times \varepsilon^{\alpha \frac{5q-6}{2-q}} \leq C_q $$
as soon as $\alpha \geq \frac{3q-2}{5q-6} = \frac{2+p}{6-p}$, that is $p \leq 6 - \frac{8}{\alpha+1}$.
Notice that $2 < 6 - \frac{8}{\alpha+1} < 6$ because $\alpha > 1$.
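Let us record the elementary algebra behind this equivalence: since $q = \frac{p}{p-1}$,
$$
\frac{3q-2}{5q-6} = \frac{3p - 2(p-1)}{5p - 6(p-1)} = \frac{p+2}{6-p},
$$
and the condition $\alpha (6-p) \geq p+2$ amounts to $p(\alpha+1) \leq 6\alpha - 2 = 6(\alpha+1) - 8$, that is $p \leq 6 - \frac{8}{\alpha+1}$.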
By Lemma \ref{BornEnergy} we get
\begin{equation}
\label{Timide2}
\forall \; 2 \leq p \leq 6 - \frac{8}{\alpha+1},
\quad \quad \quad \| A_{\varepsilon}^{II} \|_{L^p} \leq C(\alpha).
\end{equation}
Using similar arguments, we have
\begin{align*}
\| A_{\varepsilon}^{III} \|_{L^p}^q
\leq & \ C \| \widehat{A}_{\varepsilon}^{III} \|_{L^q}^q \\
= & \ C \int_{\mathbb R^3}
\Big( \varepsilon |\xi_\perp| \cdot |\widehat{A}_{\varepsilon}| \Big)^q
\times \frac{\mathbf{1}_{\{ \varepsilon^{-\beta} \leq |\xi_\perp| \leq \varepsilon^{-\alpha}, \, |\xi_1| < 1 \}}}{(\varepsilon |\xi_\perp|)^q} \, d\xi \\
\leq & \ C ( \varepsilon \| \nabla_{z_\perp} A_{\varepsilon} \|_{L^2})^q
\left( \int_{\mathbb R^3} \frac{\mathbf{1}_{\{ \varepsilon^{-\beta} \leq |\xi_\perp| \leq \varepsilon^{-\alpha}, \, |\xi_1| \leq 1 \}}}{(\varepsilon |\xi_\perp|)^{\frac{2q}{2-q}}} \, d\xi \right)^{\frac{2-q}{2}} \\
\leq & \ C_q \left( \varepsilon^{-\frac{2q}{2-q}} \int_{\varepsilon^{-\beta}}^{\varepsilon^{-\alpha}} \frac{dR}{R^{\frac{4q-4}{2-q}+1}} \right)^{\frac{2-q}{2}} \leq C_q
\end{align*}
if $\beta \frac{4q-4}{2-q} - \frac{2q}{2-q} \geq 0$,
that is $2\beta \geq \frac{q}{q-1} = p$. Consequently,
\begin{equation}
\label{Timide3}
\forall \; 2 \leq p \leq 2\beta,
\quad \quad \quad \| A_{\varepsilon}^{III} \|_{L^p} \leq C(\beta).
\end{equation}
Similarly we get a bound for $A_{\varepsilon}^{IV}$:
\begin{align*}
\| A_{\varepsilon}^{IV} \|_{L^p}^q
\leq & \ C ( \varepsilon \| \nabla_{z_\perp} A_{\varepsilon} \|_{L^2})^q
\left( \int_{\mathbb R^3} \frac{\mathbf{1}_{\{ \varepsilon^{-\gamma} \leq |\xi_\perp| \leq \varepsilon^{-\alpha}, \, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \}}}{(\varepsilon |\xi_\perp|)^{\frac{2q}{2-q}}} \, d\xi \right)^{\frac{2-q}{2}} \\
\leq & \ C_q \left( \varepsilon^{-\frac{2q}{2-q}} \int_{\varepsilon^{-\gamma}}^{\varepsilon^{-\alpha}} \frac{R^{\frac{1}{\nu}} \, dR}{R^{\frac{4q-4}{2-q}+1}} \right)^{\frac{2-q}{2}}
\leq C_q
\end{align*}
provided that $\gamma \frac{4q-4}{2-q} - \frac{2q}{2-q} - \frac{\gamma}{\nu} \geq 0$, which is equivalent to $p \leq \frac{2\gamma (2\nu+1)}{2\nu+\gamma}$ (notice that
$\frac{2\gamma (2\nu+1)}{2\nu+\gamma} > 2$ because $\gamma > 1$). Therefore,
\begin{equation}
\label{Timide4}
\forall \; 2 \leq p \leq \frac{2\gamma (2\nu+1)}{2\nu+\gamma},
\quad \quad \quad \| A_{\varepsilon}^{IV} \|_{L^p} \leq C(\nu).
\end{equation}
We use the fact that $\| \partial_{z_1} A_{\varepsilon} \|_{L^2}$
is bounded independently of $\varepsilon$ (see part $(i)$) in order to estimate $A_{\varepsilon}^V$:
\begin{align*}
\| A_{\varepsilon}^{V} \|^q_{L^p} \leq & \ C \| \widehat{A}_{\varepsilon}^{V} \|^q_{L^q} \\
= & \ C \int_{\mathbb R^3} |\xi_1 \widehat{A}_{\varepsilon}|^q \times
\frac{\mathbf{1}_{\{ 1 \leq |\xi_\perp| \leq \varepsilon^{-\alpha}, \, |\xi_\perp| < |\xi_1|^\nu \}}}{|\xi_1|^q} \, d\xi \\
\leq & \ C \| \partial_{z_1} A_{\varepsilon} \|_{L^2}^{q} \left(
\int_{\mathbb R^3} \frac{\mathbf{1}_{\{ 1 \leq |\xi_\perp| \leq \varepsilon^{-\alpha}, \, |\xi_\perp| \leq |\xi_1|^\nu \}}}{|\xi_1|^{\frac{2q}{2-q}}} \, d\xi \right)^{\frac{2-q}{2}} \\
\leq & \ C \left( \int_1^{\varepsilon^{-\alpha}} \frac{R \, dR}{R^{(\frac{2q}{2-q}-1)/\nu}} \right)^{\frac{2-q}{2}},
\end{align*}
by using cylindrical coordinates in the fourth line.
We have $\frac{2q}{2-q} > 1$ for $q \in [1, 2)$ and the last integral is bounded
independently of $\varepsilon$ as soon as $\frac{1}{\nu} \left( \frac{2q}{2-q} - 1 \right) > 2$,
that is $p < \frac{4\nu+2}{2\nu-1}$.
It is obvious that $\frac{4\nu+2}{2\nu-1} > 2$ for $\nu > 1/2$. As a consequence, we get
\begin{equation}
\label{Timide5}
\forall \; 2 \leq p < \frac{4\nu+2}{2\nu-1},
\quad \quad \quad \| A_{\varepsilon}^{V} \|_{L^p} \leq C(p).
\end{equation}
We use the convolution equation \eqref{Henry} to estimate $A_{\varepsilon}^{VI}$ and $A_{\varepsilon}^{VII}$.
Applying the Fourier transform to \eqref{Henry} we obtain the pointwise bound
\begin{align*}
|\widehat{A}_{\varepsilon}(\xi)| = & \ \Big|
\Big( \widehat{K}^1_{\varepsilon} + \varepsilon^2 \widehat{K}^\perp_{\varepsilon} \Big) \widehat{G}_{\varepsilon}
+ 2 c(\varepsilon) \varepsilon^2 \widehat{K}^{\perp}_{\varepsilon} \mathcal{F} ( A_{\varepsilon} \partial_{z_1} \varphi_{\varepsilon} )
- 2 c(\varepsilon) \varepsilon^2 \sum_{j=2}^N \widehat{K}^{1,j}_{\varepsilon}
\mathcal{F} (A_{\varepsilon} \partial_{z_j} \varphi_{\varepsilon}) \Big| \\
\leq & \ C \Big( |\widehat{K}^1_{\varepsilon}| + \varepsilon^2 |\widehat{K}^\perp_{\varepsilon}|
+ \varepsilon^2 \sum_{j=2}^N |\widehat{K}^{1,j}_{\varepsilon}| \Big)
\Big( \| \widehat{G}_{\varepsilon} \|_{L^\infty} + \| \mathcal{F} ( A_{\varepsilon} \partial_{z_1} \varphi_{\varepsilon} ) \|_{L^\infty}
+ \sum_{j=2}^N \| \mathcal{F} ( A_{\varepsilon} \partial_{z_j} \varphi_{\varepsilon} ) \|_{L^\infty} \Big).
\end{align*}
The estimates in Lemma \ref{BornEnergy} and the boundedness of
$\mathcal{F} : L^1 \to L^{\infty}$ imply that the second factor is bounded independently
of $\varepsilon$. Therefore
\begin{equation}
\label{Atchoum}
|\widehat{A}_{\varepsilon}(\xi)| \leq C \Big( |\widehat{K}^1_{\varepsilon}|
+ \varepsilon^2 |\widehat{K}^\perp_{\varepsilon}|
+ \varepsilon^2 \sum_{j=2}^N |\widehat{K}^{1,j}_{\varepsilon}| \Big)
\leq C \frac{\xi_1^2 + \varepsilon^2 |\xi_\perp|^2 + \varepsilon^2 |\xi_1| \cdot |\xi_\perp|}{D_{\varepsilon}(\xi)}
\leq C \frac{\xi_1^2 + \varepsilon^2 |\xi_\perp|^2}{D_{\varepsilon}(\xi)}
\end{equation}
because $2 \varepsilon^2 |\xi_1| \cdot |\xi_\perp| \leq \xi_1^2 + \varepsilon^4 |\xi_\perp|^2$.
If $\xi \in E^{VI}$ we have $|\xi_1| \leq 1$ and
$1 \leq |\xi_\perp| \leq \varepsilon^{-\beta} \leq \varepsilon^{-2}$ (because
$\beta < 2$), hence there is some constant $C$ depending only on
${\mathfrak c}_s$ such that
$$ C |\xi_\perp|^2 \geq D_{\varepsilon}(\xi) = \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_\perp|^2
+ 2 \varepsilon^2 \xi_1^2 |\xi_\perp|^2 + \varepsilon^4 |\xi_\perp|^4
\geq \frac{|\xi_\perp|^2}{C}. $$
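Indeed, on $E^{VI}$ we have $\xi_1^4 + \xi_1^2 \leq 2 \leq 2 |\xi_\perp|^2$, $2 \varepsilon^2 \xi_1^2 |\xi_\perp|^2 \leq 2 |\xi_\perp|^2$ and $\varepsilon^4 |\xi_\perp|^4 = (\varepsilon^2 |\xi_\perp|)^2 |\xi_\perp|^2 \leq |\xi_\perp|^2$ because $|\xi_\perp| \leq \varepsilon^{-2}$, so that
$$
{\mathfrak c}_s^2 |\xi_\perp|^2 \leq D_{\varepsilon}(\xi) \leq (5 + {\mathfrak c}_s^2) |\xi_\perp|^2.
$$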
Using the Riesz-Thorin theorem
with exponents $2 < p < \infty$ and $q = p/(p-1) \in (1,2)$, as well as \eqref{Atchoum}, we find
\begin{align*}
\| A_{\varepsilon}^{VI} \|_{L^p}^q
\leq & \ C \| \widehat{A}_{\varepsilon}^{VI} \|_{L^q}^q \\
\leq & \ C \int_{\mathbb R^3}
\mathbf{1}_{\{ 1 \leq |\xi_\perp| \leq \varepsilon^{-\beta}, \, |\xi_1| \leq 1 \}}
\frac{( \xi_1^2 + \varepsilon^2 |\xi_\perp|^2 )^q}{|\xi_\perp|^{2q}} \, d\xi \\
\leq & \ C \int_{\mathbb R^3}
\mathbf{1}_{\{ 1 \leq |\xi_\perp| \leq \varepsilon^{-\beta}, \, |\xi_1| \leq 1 \}}
\left( \frac{\xi_1^{2q}}{|\xi_\perp|^{2q}} + \varepsilon^{2q} \right) \, d\xi \\
\leq & \ C \int_{|\xi_\perp| \geq 1} \frac{d\xi_\perp}{|\xi_\perp|^{2q}}
+ C \varepsilon^{2q - 2\beta} \leq C_q
\end{align*}
provided that $q > 1$ and $q \geq \beta$.
We have $q \geq \beta$ if and only if $p \leq \frac{\beta}{\beta-1}$. It is obvious that
$\frac{\beta}{\beta-1} > 2$ because $1 < \beta < 2$. Hence we obtain
\begin{equation}
\label{Timide6}
\forall \; 2 \leq p \leq \frac{\beta}{\beta-1},
\quad \quad \quad \| A_{\varepsilon}^{VI} \|_{L^p} \leq C(\beta).
\end{equation}
In order to estimate $\BA_{\varepsilon } ^{VII}$ we notice that for $ \xi \in E^{VII} $ we have
$ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\gamma}$ and
$ 1 \leq |\xi_1|^\nu \leq |\xi_\perp|$, thus
$|\xi_1|^2 \leq |\xi_\perp| \leq \varepsilon ^{-2}$ because
$\nu \geq 5/2 > 2 $ and $ \gamma \leq 2$. Hence there exists
$C > 0$ depending only on ${\mathfrak c}_s$ such that
$$
C |\xi_\perp|^2 \geq D_{\varepsilon} (\xi) = \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_\perp|^2
+ 2 \varepsilon ^2 \xi_1^2 |\xi_\perp|^2 + \varepsilon ^4 |\xi_\perp|^4
\geq \frac{|\xi_\perp|^2}{C} .
$$
Using \eqref{Atchoum} we get
\begin{align*}
\| \BA_{\varepsilon }^{VII} \|_{L^p}^q
\leq & \ C \int_{\mathbb R^3}
{\bf 1}_{ \{ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\gamma}, \,
1 \leq |\xi_1|^\nu \leq |\xi_\perp| \}}
\left( \frac{\xi_1^{2q}}{ |\xi_\perp|^{2q} } + \varepsilon ^{2q} \right) \, d \xi \\
\leq & \ C \int_{|\xi_\perp|\geq 1} \frac{|\xi_\perp|^{\frac{2q+1}{\nu}}}{|\xi_\perp|^{2q} } \, d\xi_\perp
+ C \varepsilon ^{2q} \int_1^{\varepsilon ^{-\gamma}} R^{1+\frac{1}{\nu}} \, dR \leq C_q
\end{align*}
provided that $2q - \frac{2q+1}{\nu} > 2 $
and $2q - \gamma (2 + \frac{1}{\nu} ) \geq 0 $.
These inequalities are equivalent to $ p < \frac{2\nu +1}{3}$ and
$ p \leq \frac{\gamma(2\nu +1)}{\gamma(2\nu +1) - 2 \nu}$, respectively.
Since $\nu > 5/2$, we have $ \frac{2\nu +1}{3} > 2 $ and $ \frac{4\nu}{2\nu+1} > 5/3 $ and $ \frac{\nu}{\nu-1} < 5/3 $.
It is easy to see that $ \frac{\gamma(2\nu +1)}{\gamma(2\nu +1)-2\nu} > 2$ if and only if $ \gamma < \frac{4\nu}{2\nu+1} $,
and that $ \frac{\gamma(2\nu +1)}{\gamma(2\nu +1)-2\nu} > \frac{2\nu +1}{3}$ if and only if $ \gamma < \frac{\nu}{\nu-1} $.
Hence
\be
\label{Timide7}
\left\{ \begin{array}{ll}
\displaystyle{\forall \; 1 \leq \gamma \leq \frac{\nu}{\nu-1}, \quad
\forall \; 2 \leq p < \frac{2\nu +1}{3}} ,
& \quad \quad \quad \| \BA_{\varepsilon }^{VII} \|_{L^p} \leq C(p,\nu) \\ \\
\displaystyle{ \forall \; \frac{\nu}{\nu-1} < \gamma \leq \frac53, \quad
\forall \; 2 \leq p \leq \frac{\gamma(2\nu +1)}{\gamma(2\nu +1)-2\nu} ,}
& \quad \quad \quad \| \BA_{\varepsilon }^{VII} \|_{L^p} \leq C(\gamma,\nu) .
\end{array}
\right.
\ee
We now choose the parameters $\alpha$, $\beta$, $\gamma$ and $\nu$. In view of
\eqref{Timide3} and \eqref{Timide6}, we fix $\beta = 3/2$, so that
$ 2 \beta = \beta/(\beta-1) = 3 $. We set $\alpha = 5/3 > 3/2 = \beta $.
Then by \eqref{Timide1}, \eqref{Timide2}, \eqref{Timide3} and
\eqref{Timide6} it follows that
$$ \forall \; 2 \leq p \leq 3, \quad \quad \quad
\| \BA_{\varepsilon }^{I} \|_{L^p} + \| \BA_{\varepsilon }^{II} \|_{L^p} + \| \BA_{\varepsilon }^{III} \|_{L^p}
+ \| \BA_{\varepsilon }^{VI} \|_{L^p} \leq C . $$
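For the reader's convenience, the elementary arithmetic behind this choice of $\beta$ can be recorded explicitly (this verification is an aside and not part of the original argument):

```latex
% Reader's aside: check of the choice \beta = 3/2.
\[
2\beta = 2 \cdot \tfrac32 = 3 , \qquad
\frac{\beta}{\beta-1} = \frac{3/2}{1/2} = 3 ,
\]
```

so the admissible ranges coming from \eqref{Timide3} and \eqref{Timide6} both extend exactly to $p = 3$, while $\alpha = 5/3$ is compatible with the constraint $\beta \leq \alpha$.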
For the other terms,
we notice that in the case $1 \leq \gamma \leq \frac{\nu}{\nu-1}$ we have
$$ \frac{2\gamma (2 \nu + 1)}{2\nu+\gamma} \leq \frac{4 \nu + 2}{2\nu- 1} , $$
with equality if $\gamma = \frac{\nu}{\nu-1}$.
We also observe that
$$ \frac{2\nu +1}{3} < \frac83 < \frac{4 \nu + 2}{2\nu-1}
\quad {\rm if} \quad \nu < \frac72,
\quad \quad \quad {\rm respectively} \quad \quad \quad
\frac{4 \nu + 2}{2\nu-1} < \frac83 < \frac{2\nu +1}{3} \quad
{\rm if} \quad \nu > \frac72 . $$
Then we fix
$\nu = 7 / 2$ and $\gamma = \frac{\nu}{\nu-1} = 7 / 5 < 5 / 3 $
and using \eqref{Timide4}, \eqref{Timide5} and \eqref{Timide7}
we obtain
$$ \forall \; 2 \leq p < \frac83, \quad \quad \quad
\| \BA_{\varepsilon }^{IV} \|_{L^p} + \| \BA_{\varepsilon }^{V} \|_{L^p}
+ \| \BA_{\varepsilon }^{VII} \|_{L^p} \leq C . $$
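As a quick check of this choice of parameters (a reader's aside, not part of the argument), the three critical exponents indeed coincide at $8/3$ when $\nu = 7/2$ and $\gamma = \nu/(\nu-1) = 7/5$:

```latex
% Reader's aside: at \nu = 7/2, \gamma = 7/5 all three thresholds equal 8/3.
\[
\frac{2\nu+1}{3} = \frac83 , \qquad
\frac{4\nu+2}{2\nu-1} = \frac{16}{6} = \frac83 , \qquad
\frac{\gamma(2\nu+1)}{\gamma(2\nu+1) - 2\nu}
= \frac{\frac75 \cdot 8}{\frac75 \cdot 8 - 7}
= \frac{56/5}{21/5} = \frac83 .
\]
```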
This concludes the proof of $(iii)$.

$(iv)$ We use
the same inequalities as in the three-dimensional case with
$ 1 < \nu < 3$ and $\alpha$, $\beta$, $\gamma \in (1,2)$ satisfying
$\beta \leq \alpha$ and $ \gamma \leq \alpha$. We get
$$ \begin{array}{llll}
\displaystyle{ \forall \; 2 \leq p < \infty},
& \quad \| \BA_{\varepsilon }^{I} \|_{L^p} \leq C_p;
& \quad \quad \quad \displaystyle{ \forall \; 2 \leq p \leq 4 \alpha - 2},
& \quad \| \BA_{\varepsilon } ^{II} \|_{L^p} \leq C_p; \\ \\
\displaystyle{ \forall \; 2 \leq p \leq \frac{2\beta}{2 - \beta},}
& \quad \| \BA_{\varepsilon }^{III} \|_{L^p} \leq C(\beta);
& \quad \quad \quad \displaystyle{ \forall \; 2 \leq p \leq
\frac{2\gamma(\nu+1)}{\gamma + \nu(2-\gamma)},}
& \quad \| \BA_{\varepsilon }^{IV} \|_{L^p} \leq C(\gamma,\nu) ; \\ \\
\displaystyle{ \forall \; 2 \leq p < 2\frac{\nu+1}{\nu-1},}
& \quad \| \BA_{\varepsilon }^{V} \|_{L^p} \leq C_p ;
& \quad \quad \quad \displaystyle{ \forall \; 2 \leq p < \infty,}
& \quad \| \BA_{\varepsilon } ^{VI} \|_{L^p} \leq C_p
\end{array} $$
and
$$ \forall \; 1 \leq \gamma \leq \frac{\nu}{\nu-1}, \quad
\forall \; 2 \leq p < \frac{\nu +1}{3-\nu} , \quad \quad \quad
\| \BA_{\varepsilon }^{VII} \| _{L^p} \leq C_p . $$
Then we choose
$$ \beta = \frac43, \quad \quad \quad
\alpha = \frac53 ,
\quad \quad \quad \nu = 3^- , \quad \quad \quad
\gamma = \frac{\nu}{\nu-1} = \frac32^+ ,$$
so that $ \alpha > \beta $ and $ \alpha > \gamma$. We infer that
$$ \forall \; 2 \leq p < 4 , \quad \quad \quad
\| \BA_{\varepsilon } \|_{L^p} \leq C_p . $$
This completes the proof in the case $N=2$. $\Box$
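As a sanity check of the two-dimensional parameter choice (a reader's aside, not part of the proof), each of the exponent thresholds above is at least $4$ in the limit $\nu \to 3^-$, $\gamma \to \frac32^+$:

```latex
% Reader's aside: with \beta = 4/3, \alpha = 5/3, \nu -> 3^-, \gamma -> 3/2^+,
% every admissible range reaches p = 4, and the binding thresholds equal 4.
\[
\frac{2\beta}{2-\beta} = \frac{8/3}{2/3} = 4 , \qquad
4\alpha - 2 = \frac{14}{3} > 4 , \qquad
2\,\frac{\nu+1}{\nu-1} \to 4^+ , \qquad
\frac{\nu+1}{3-\nu} \to +\infty , \qquad
\frac{2\gamma(\nu+1)}{\gamma + \nu(2-\gamma)} \to \frac{2 \cdot \frac32 \cdot 4}{\frac32 + \frac32} = 4 ,
\]
```

so the intersection of all the admissible ranges is precisely $2 \leq p < 4$.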
\subsection{Proof of Proposition \ref{Born}}

We first recall the Fourier multiplier properties of the kernels $\BK_{\varepsilon }^1$, $ \BK_{\varepsilon }^{\perp}$ and $\BK_{\varepsilon}^{1,j }$.
We skip the proof since it is the same as in section
5.2 in \cite{BGS1} and does not depend on the space dimension $N$.
\begin{lem}
\label{Multiply}
Let $1 < q < \infty$. There exists $C_q>0$ (depending also on ${\mathfrak c}_s$)
such that for any $ \varepsilon \in (0, 1 )$, any $2 \leq j \leq N$ and $h \in L^q$
we have
\begin{align*}
& \| \BK^1_{\varepsilon } \star h \|_{L^q}
+ \| \partial_{z_1} \BK^1_{\varepsilon } \star h \|_{L^q}
+ \| \nabla_{z_\perp} \BK^1_{\varepsilon } \star h \|_{L^q} \\
& + \| \partial_{z_1}^2 \BK^1_{\varepsilon } \star h \|_{L^q}
+ \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \BK^1_{\varepsilon } \star h \|_{L^q}
+ \varepsilon ^2 \| \nabla_{z_\perp}^2 \BK^1_{\varepsilon} \star h \|_{L^q}
\leq C_q \| h \|_{L^q} ,
\end{align*}
\begin{align*}
& \| \BK^\perp_{\varepsilon } \star h \|_{L^q}
+ \varepsilon \| \partial_{z_1} \BK^\perp_{\varepsilon } \star h \|_{L^q}
+ \varepsilon ^2 \| \nabla_{z_\perp} \BK^\perp_{\varepsilon } \star h \|_{L^q} \\
& + \varepsilon ^2 \| \partial_{z_1}^2 \BK^\perp_{\varepsilon } \star h \|_{L^q}
+ \varepsilon ^3 \| \partial_{z_1} \nabla_{z_\perp} \BK^\perp_{\varepsilon } \star h \|_{L^q}
+ \varepsilon ^4 \| \nabla_{z_\perp}^2 \BK^\perp_{\varepsilon } \star h \|_{L^q}
\leq C_q \| h \|_{L^q}
\end{align*}
and
\begin{align*}
& \| \BK^{1,j}_ {\varepsilon } \star h \|_{L^q}
+ \| \partial_{z_1} \BK^{1,j}_{\varepsilon } \star h \|_{L^q}
+ \varepsilon \| \nabla_{z_\perp} \BK^{1,j}_{\varepsilon } \star h \|_{L^q} \\
& + \varepsilon \| \partial_{z_1}^2 \BK^{1,j}_{\varepsilon } \star h \|_{L^q}
+ \varepsilon ^2 \| \partial_{z_1} \nabla_{z_\perp} \BK^{1,j}_{\varepsilon } \star h \|_{L^q}
+ \varepsilon ^3 \| \nabla_{z_\perp}^2 \BK^{1,j}_{\varepsilon } \star h \|_{L^q}
\leq C_q \| h \|_{L^q}.
\end{align*}
\end{lem}
The proof of
\eqref{goodestimate} is then divided into 5 Steps.

\bigskip
\noindent {\bf Step 1.} There is $ \varepsilon _ 1 > 0 $ and for any $1 < q < \infty $ there exists $C_q$
(depending also on $F$) such that for all $\varepsilon \in (0, \varepsilon _1)$,
\begin{align*}
\| \BA_{\varepsilon } \|_{L^q} + & \ \| \nabla_z \BA_{\varepsilon } \|_{L^q}
+ \| \partial^2_{z_1} \BA_{\varepsilon } \|_{L^q}
+ \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^q}
+ \varepsilon ^2 \| \nabla_{z_\perp}^2 \BA_{\varepsilon } \|_{L^q} \\
& \leq C_q \Big( \| \BA_{\varepsilon } \|^2_{L^{2q}}
+ \varepsilon ^2 \Big[ \| \partial_{z_1} \BA_{\varepsilon } \|_{L^{2q}}
+ \varepsilon \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^{2q}} \Big]^2 \Big) .
\end{align*}
The proof is very similar to that of Lemma 6.2 p. 268 in \cite{BGS1} and thus is only sketched.
Indeed, if $ U = \rho e ^{ i \phi }$ is a finite energy solution to (TW$_c$) such that $ \frac{ r_0 }{2 } \leq \rho \leq 2 r _0$,
then the first equation in \eqref{phasemod}
can be written as
$$
2 r_0 ^2 \Delta \phi = c \frac{\partial }{\partial x_1} ( \rho ^2 - r_0 ^2) - 2 \mbox{div}\left( (\rho ^2 - r_0 ^2) \nabla \phi \right)
$$
and this gives
$$
2 r_0 ^2 \frac{ \partial \phi}{\partial x_j } = c R_j R_1 ( \rho ^2 - r_0 ^2) - 2 \sum_{k = 1}^{N} R_j R_k \left( ( \rho ^2 - r_0 ^2 ) \frac{ \partial \phi}{\partial x_k } \right),
$$
where $R_k$ is the Riesz transform (defined by $ R_k f = \Fu ^{-1} \left( \frac{ i \xi _k}{|\xi |} \wh{f} \right)$).
It is well known that the Riesz transform maps $ L^p(\mathbb R^N)$ continuously into $ L^p(\mathbb R^N)$ for $ 1 < p < \infty$.
From the above we infer that for any $ q \in (1, \infty) $ and any $ j \in \{ 1, \dots, N \}$ we have
$$
\Big\| \frac{ \partial \phi}{\partial x_j } \Big\|_{L^q} \leq C(q) \| \rho ^2 - r_0 ^2 \|_{L^q}
+ C(q) \sum_{k=1}^N \Big\| (\rho ^2 - r_0 ^2 ) \frac{ \partial \phi}{\partial x_k } \Big\|_{L^q}
\leq C(q) \| \rho ^2 - r_0 ^2 \|_{L^q} + C(q) \| \rho ^2 - r_0 ^2 \|_{L^{\infty } } \| \nabla \phi \|_{L^q}
$$
and this implies
$$
\| \nabla \phi \|_{L^q} \leq C(q) \| \rho ^2 - r_0 ^2 \|_{L^q} + C(q) \| \rho ^2 - r_0 ^2 \|_{L^{\infty } } \| \nabla \phi \|_{L^q}.
$$
If $\| \rho ^2 - r_0 ^2 \|_{L^{\infty } } $ is sufficiently small we get
$ \| \nabla \phi \| _{L^q} \leq \tilde{C} (q) \| \rho ^2 - r_0 ^2 \|_{L^q} \leq K(q) \| \rho - r_0 \| _{L^q}.$
By scaling, this estimate implies that for $1 < q < \infty$,
\begin{eqnarray}
\label{phasestimate1}
\| \partial_{z_1} \varphi_{\varepsilon } \|_{L^q}
+ \varepsilon \| \nabla_{z_\perp} \varphi_{\varepsilon } \|_{L^q} \leq C_q \| \BA_{\varepsilon } \|_{L^q} .
\end{eqnarray}
Hence, by H\"older's inequality and Lemma \ref{Grenouille} $(ii)$,
\begin{align*}
\| G_{\varepsilon } \|_{L^q} \leq & \ C_q \Big( \| \BA_{\varepsilon } \|^2_{L^{2q}}
+ \varepsilon ^2 \| \BA_{\varepsilon } \|^3_{L^{3q}}
+ \varepsilon ^2 \| \partial_{z_1} \BA_{\varepsilon } \|^2_{L^{2q}}
+ \varepsilon ^4 \| \nabla_{z_\perp} \BA_{\varepsilon } \|^2_{L^{2q}} \Big) \\
\leq & \ C_q \Big( \| \BA_{\varepsilon } \|^2_{L^{2q}}
+ \varepsilon ^2 \Big[ \| \partial_{z_1} \BA_{\varepsilon } \|_{L^{2q}}
+ \varepsilon \| \nabla_{z_\perp} \BA_ {\varepsilon } \|_{L^{2q}} \Big]^2 \Big) .
\end{align*}
We take the derivatives up to order $2$
of \eqref{Henry} and then the conclusion follows from Lemma \ref{Multiply}.
\bigskip
\noindent {\bf Step 2.} Let $N=3$. There is $ \varepsilon _2 > 0$ and for any $1 < p < 3/2 $ there exists
$C_p$ (also depending on $F$) such that for any $\varepsilon \in (0, \varepsilon _2)$ there holds
\begin{align*}
\| \BA_{\varepsilon } \|_{L^p} + \| \nabla \BA_{\varepsilon } \|_{L^p}
+ \| \partial^2_{z_1} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon ^2 \| \nabla_{z_\perp}^2 \BA_{\varepsilon } \|_{L^p} \leq C_p .
\end{align*}
If $1 \leq q \leq 3/2$, we have by Lemma \ref{Grenouille} $(i)$
$$ \varepsilon \Big[ \| \partial_{z_1} \BA_{\varepsilon } \|_{L^{2q}}
+ \varepsilon \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^{2q}} \Big] \leq C . $$
Thus for $ 1 < q \leq 3/2$ we infer from Step 1 that
\begin{align}
\label{Garulfo}
\| \BA_{\varepsilon } \|_{L^q} + \| \nabla_z \BA_{\varepsilon } \|_{L^q}
+ & \ \| \partial^2_{z_1} \BA_{\varepsilon } \|_{L^q}
+ \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^q}
+ \varepsilon ^2 \| \nabla_{z_\perp}^2 \BA_{\varepsilon } \|_{L^q}
\leq C_q + C_q \| \BA_{\varepsilon } \|^2_{L^{2q}} .
\end{align}
If $ 1 < p < 4/3 $, we use \eqref{Garulfo}
combined with Lemma \ref{Grenouille} $(iii)$ with exponent
$ 2p \in [2,8/3)$ to get
\begin{align}
\label{Prof}
\| \BA_{\varepsilon } \|_{L^p} + \| \nabla_z \BA_{\varepsilon } \|_{L^p}
+ & \, \| \partial^2_{z_1} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon ^2 \| \nabla_{z_\perp}^2 \BA_{\varepsilon } \|_{L^p} \leq C_p .
\end{align}
This proves Step 2 for $1 < p < 4/3$. In dimension $N=3$, the Sobolev
inequality does not enable us to improve the $L^q$ integrability of $\BA_{\varepsilon }$ to some
$q>8/3$. We thus rely on
the decomposition of $\BA_{\varepsilon }$ as
$ \BA_{\varepsilon } = \BA_{\varepsilon }^{I} + \BA_{\varepsilon }^{II} + \BA_{\varepsilon }^{III} + \BA_{\varepsilon }^{IV} + \BA_{\varepsilon }^{V}
+ \BA_{\varepsilon }^{VI} + \BA_{\varepsilon }^{VII} , $
exactly as in Lemma \ref{Grenouille}. We choose $\alpha = 5/3$, $\beta = 3/2$.
By the estimates in the proof of Lemma \ref{Grenouille} $(iii)$ we have then
$$ \forall \; 2 \leq p \leq 3, \quad \quad \quad
\| \BA_{\varepsilon }^{I} \|_{L^p} + \| \BA_{\varepsilon }^{II} \|_{L^p} + \| \BA_{\varepsilon }^{III} \|_{L^p}
+ \| \BA_{\varepsilon }^{VI} \|_{L^p} \leq C . $$
It remains to bound $\BA_{\varepsilon }^{IV}$, $\BA_{\varepsilon }^{V}$ and $\BA_{\varepsilon }^{VII}$ in
$L^{3^-}$. In view of \eqref{Timide5}, we choose $\nu = 5/2$, so
that $\frac{4\nu+2}{2\nu-1} = 3$, and thus
$$ \forall \; 2 \leq p < 3, \quad \quad \quad \| \BA_{\varepsilon }^V \|_{L^p} \leq C_p . $$
We cancel out $\BA_{\varepsilon }^{IV}$ by taking $\gamma = 5/3 = \alpha$.
Next we turn our attention to the ``bad term'' $\BA_{\varepsilon }^{VII}$. By \eqref{Prof} we get
$$ \forall \; 1 < p < \frac43 , \quad \quad \quad
\| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^p} \leq C_p , $$
hence, by the Riesz-Thorin theorem,
$$ \forall \; 4 < r < \infty , \quad \quad \quad
\| \xi_{\perp} \widehat{\BA}_{\varepsilon } \|_{L^r} =
\| \Fu( \nabla_{z_\perp} \BA_{\varepsilon } ) \|_{L^r} \leq C_r . $$
Consequently, for $4 < r < \infty$, $ 2 < p < \infty $ and $q= p / (p-1) \in (1,2)$, using once again the Riesz-Thorin theorem
and the H\"older inequality with exponents $\frac{r}{q} $ and $ \frac{r}{r-q} $ we get
\begin{align*}
\| \BA_{\varepsilon }^{VII} \|_{L^p}^q \leq & \ C \| \widehat{\BA}_{\varepsilon }^{VII} \|_{L^q}^q \\
= & \ C \int_{\mathbb R^3} ( |\xi_\perp| \cdot |\widehat{\BA}_ {\varepsilon } | )^q
\times \frac{{\bf 1}_{ \{ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\gamma},\,
1 \leq |\xi_1|^\nu \leq |\xi_\perp| \} }}{|\xi_\perp|^q} \ d \xi \\
\leq & \ C \| \xi_\perp \widehat{\BA}_{\varepsilon } \|_{L^r}^{q}
\Big( \int_{\mathbb R^3} \frac{{\bf 1}_{ \{ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\gamma},\,
1 \leq |\xi_1|^\nu \leq |\xi_\perp| \} }}{|\xi_\perp|^{\frac{rq}{r-q}}} \ d \xi \Big)^{\frac{r-q}{r}} \\
\leq & \ C_{r,q} \Big( \int_1^{\varepsilon ^{-\gamma}} \frac{R^{1+\frac{1}{\nu}}}{R^{\frac{rq}{r-q}}} \ d R \Big)^{\frac{r-q}{r}}
\leq C_{r,q}
\end{align*}
provided that $ \frac{rq}{r-q} > 2 + \frac{1}{\nu} = 12 / 5 $.
Now let $ 2 \leq p < 3 $ be fixed, so that $ 3/2 < q \leq 2 $. Since
$ 3/2 < q \leq 2 $ and $q \longmapsto \frac{4q}{4-q} $ is increasing on $ (3/2 , 2 ] $,
we have $\frac{4q}{4-q} > 12 / 5$. Furthermore, we have
$ \frac{rq}{r-q} \to \frac{4q}{4-q} > 12 / 5$ as $r \to 4$. Hence we may choose
$r > 4$ such that $ \frac{rq}{r-q} > 2 + \frac{1}{\nu} = 12 / 5 $.
As a consequence, we have
$$ \forall \; 2 \leq p < 3, \quad \quad \quad
\| \BA_{\varepsilon }^{VII} \|_{L^p} \leq C_p . $$
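The numerology here can be verified directly (a reader's aside, not part of the argument):

```latex
% Reader's aside: with \nu = 5/2, the threshold is 2 + 1/\nu = 12/5,
% and the limiting exponent 4q/(4-q) reaches exactly 12/5 at q = 3/2, i.e. p = 3.
\[
2 + \frac{1}{\nu} = \frac{12}{5} , \qquad
p < 3 \iff q = \frac{p}{p-1} > \frac32 , \qquad
\frac{4q}{4-q}\bigg|_{q = 3/2} = \frac{6}{5/2} = \frac{12}{5} ,
\]
```

so for every $p < 3$ the limit $\frac{4q}{4-q}$ strictly exceeds $\frac{12}{5}$, leaving room to pick $r > 4$.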
Collecting the above estimates for $\BA_{\varepsilon }^{I} $, \dots, $\BA_{\varepsilon }^{VII}$
we deduce
$$ \forall \; 2 \leq p < 3, \quad \quad \quad \| \BA_{\varepsilon } \|_{L^p} \leq C_p . $$
Then we use once again \eqref{Garulfo} with exponent $ p/2 \in (1,3/2) $
to infer that Step 2 holds for $ 1 < p < 3/2 $.\\
In order to be able to use Step 1 with some $q > 3/2$, we need to prove that
$\BA_{\varepsilon }$, $\varepsilon \partial_{z_1} \BA_{\varepsilon }$ and $ \varepsilon^2 \nabla_{z_\perp} \BA_{\varepsilon }$
are uniformly bounded in $L^p$ for some $p>3$. This is what we will prove next.
\bigskip
\noindent {\bf Step 3.} If $N=3$, the following bounds hold:
$$ \left\{ \begin{array}{ll}
\displaystyle{ \forall \; 2 \leq p < 15/4 = 3.75 }, &
\quad \quad \quad \| \BA_{\varepsilon } \|_{L^p} \leq C_p ;
\\ \\
\displaystyle{ \forall \; 2 \leq p < 18/5 = 3.6 }, &
\quad \quad \quad \varepsilon \| \partial_{z_1} \BA_{\varepsilon } \|_{L^p} \leq C_p ;
\\ \\
\displaystyle{ \forall \; 2 \leq p < 18/5 = 3.6 }, &
\quad \quad \quad \varepsilon ^2 \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^p} \leq C_p .
\end{array}
\right. $$
Fix $ r \in (3, \infty)$, $ p \in (2, \infty )$ and let $q= p/(p-1) \in (1,2)$ be the conjugate exponent of $p$.
By the Riesz-Thorin theorem and the H\"older inequality with exponents ${\frac{r}{q}} $ and $ {\frac{r}{r-q}}$ we have
\begin{align}
\label{sorciere}
\| \BA_{\varepsilon } \|_{L^p}^q \leq & \ C \| \widehat{\BA}_{\varepsilon } \|_{L^q}^q \nonumber \\
= & \ C \int_{\mathbb R^3} \Big[ (1 +|\xi_1|^2 + |\xi_\perp|) \cdot
|\widehat{\BA}_{\varepsilon }| \Big]^q
\times \frac{d \xi}{(1 + |\xi_1|^2 + |\xi_\perp|)^q} \nonumber \\
\leq & \ C \Big( \| \widehat{\BA}_{\varepsilon } \|_{L^r} +\| \xi_1^2 \widehat{\BA}_{\varepsilon } \|_{L^r}
+ \| \xi_\perp \widehat{\BA}_{\varepsilon } \|_{L^r} \Big)^{q}
\Big( \int_{\mathbb R^3} \frac{ d \xi}{( 1+ |\xi_1|^2 + |\xi_\perp|)^{\frac{rq}{r-q}}} \Big)^{\frac{r-q}{r}} .
\end{align}
We bound the first parenthesis using again the Riesz-Thorin theorem:
since $ r \in(3, \infty) $, its conjugate exponent $ r/(r-1) $ belongs to
$ (1,3/2) $ and then Step 2 holds for the exponent $ r/(r-1)$ instead of $p$, hence
\begin{align*}
\| \widehat{\BA}_{\varepsilon } \|_{L^r} + \| \xi_1^2 \widehat{\BA}_{\varepsilon } \|_{L^r}
+ \| \xi_\perp \widehat{\BA}_{\varepsilon } \|_{L^r}
= & \ \| \Fu( \BA_{\varepsilon }) \|_{L^r} + \| \Fu( \partial_{z_1}^2 \BA_{\varepsilon }) \|_{L^r}
+ \| \Fu( \nabla_{z_\perp} \BA_{\varepsilon } ) \|_{L^r} \\
\leq & \ C \Big( \| \BA_{\varepsilon } \|_{L^{\frac{r}{r-1}}}
+ \| \partial_{z_1}^2 \BA_{\varepsilon } \|_{L^{\frac{r}{r-1}}}
+ \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^{\frac{r}{r-1}}} \Big) \leq C_r .
\end{align*}
Next, we compute using cylindrical coordinates
\begin{align*}
\int_{\mathbb R^3} & \ \frac{ d \xi }{( 1+ |\xi_1|^2 + |\xi_\perp|)^{\frac{rq}{r-q}}} \\
& \leq 4 \pi \Big[
\int_0^1 \int_0^{+\infty} \frac{ R dR }{(1+R)^{\frac{rq}{r-q}}} \ d \xi_1
+ \int_1^{+\infty} \int_0^{\xi_1^2} \frac{ R dR }{\xi_1^{\frac{2rq}{r-q}}}
\ d \xi_1 + \int_1^{+\infty} \int_{\xi_1^2}^{+\infty}
\frac{ R dR }{R^{\frac{rq}{r-q}}} \ d \xi_1 \Big] \\
& \leq 4 \pi \Big[ \int_0^{+\infty} \frac{ R dR }{(1+R)^{\frac{rq}{r-q}}}
+ \frac12 \int_1^{+\infty} \frac{\xi_1^4}{\xi_1^{\frac{2rq}{r-q}}} \ d \xi_1
+ \frac{1}{\frac{rq}{r-q}-2}
\int_1^{+\infty} \frac{d \xi_1}{\xi_1^{2(\frac{rq}{r-q}-2)} } \Big] .
\end{align*}
The integrals in the last line are finite
provided that $\frac{rq}{r-q} > 2$ (for the first integral), $\frac{2rq}{r-q} > 5$
(for the second integral) and $ 2(\frac{rq}{r-q}-2) > 1$ (for the third integral), hence their sum is finite if
$\frac{rq}{r-q} > 5/2 $. Note that $\frac{rq}{r-q} \to \frac{3q}{3-q} $ as $r \to 3$ and $ \frac{3q}{3-q} > 5/2 $
for $ q \in (\frac{15}{11}, 3)$. If $ 2 < p < 15 / 4 = 3.75$ we have
$ 15 / 11 < q < 2 $ and we may choose $ r > 3 $
(and $r$ close to $3$) such that $\frac{rq}{r-q} > 5/2 $. Then it follows from
the two estimates above that
$$ \forall \; 2 \leq p < \frac{15}{4} , \quad \quad \quad
\| \BA_{\varepsilon } \|_{L^p} \leq C_p . $$
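The endpoint $15/4$ can again be checked by hand (a reader's aside):

```latex
% Reader's aside: the condition 3q/(3-q) > 5/2 translates exactly into p < 15/4.
\[
\frac{3q}{3-q} > \frac52 \iff 6q > 15 - 5q \iff q > \frac{15}{11} ,
\qquad
q = \frac{p}{p-1} > \frac{15}{11} \iff 11p > 15p - 15 \iff p < \frac{15}{4} ,
\]
```

which is exactly the range stated in Step 3 for $\| \BA_{\varepsilon} \|_{L^p}$.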
Now we turn our attention to the bound on $\varepsilon \partial_{z_1} \BA_{\varepsilon }$.
Let $r \in ( 1, \frac 32)$, $ q \in [2, \infty)$ and $ s \in (r,q)$.
We use the estimates in Step 2 for $ \Big\| \frac{ \partial ^2 \BA_{\varepsilon }}{\partial z_i \partial z_j} \Big\|_{L^r}$ and \eqref{bourrinSKF} with $N=3$
for $ \Big\| \frac{ \partial ^2 \BA_{\varepsilon }}{\partial z_i \partial z_j} \Big\|_{L^q}$, then we interpolate to get
\begin{eqnarray}
\label{81a}
\Big\| \frac{ \partial ^2 \BA_{\varepsilon }}{\partial z_1 ^2} \Big\|_{L^s}
+ \varepsilon \Big\| \frac{ \partial ^2 \BA_{\varepsilon }}{\partial z_1 \partial z_j} \Big\|_{L^s}
+ \varepsilon ^2 \Big\| \nabla_{\perp} ^2 \BA _{\varepsilon} \Big\|_{L^s} \leq C_{r, q} \varepsilon^{\left(-4 + \frac{2N-1}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq}}.
\end{eqnarray}
If $ s \in (r,3)$, from the Sobolev inequality and the above estimate we obtain
\begin{eqnarray}
\label{81}
\| \partial_{z_1} \BA _{\varepsilon} \|_{L^{\frac{3s}{3-s}}} \leq C_s \| \partial_{z_1} ^2 \BA_{\varepsilon } \|_{L^s} ^{\frac 13} \| \partial _{z_1} \nabla_{\perp} \BA_{\varepsilon } \|_{L^s} ^{\frac 23}
\leq C_{s,r,q} \varepsilon ^{-\frac 23} \varepsilon^{\left(-4 + \frac{5}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq}}.
\end{eqnarray}
We have $ - \frac 23 + \left(-4 + \frac{5}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq} \to - \frac{14}{3} + \frac{4r}{s} $
as $ q \to \infty$, uniformly with respect to $ r \in [1, \frac 32]$ and $ s \in [1,3]$.
If $ 1 < s < \frac{18}{11} \approx 1.636$
we have $ - \frac{14}{3} + \frac{4r}{s} \to - \frac{14}{3} + \frac 6s > -1$ as $ r \to \frac 32$.
For any fixed $ s \in (1, \frac{18}{11})$ we may choose $q$ sufficiently large and $ r \in (1,\frac 32)$
sufficiently close to $ \frac 32$ such that
$- \frac 23 + \left(-4 + \frac{5}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq} > -1$.
Since $\frac{3s}{3-s} \nearrow \frac{18}{5} $ as $ s \nearrow \frac{18}{11}$, from \eqref{81} we get
$$
\forall \; p \in \left(1, \frac {18}{5} \right), \qquad \quad \| \partial_{z_1} \BA_{\varepsilon } \|_{L^p} \leq C_p \varepsilon^{-1}.
$$
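The endpoint $18/5$ in this bound arises as follows (a reader's aside, not part of the argument):

```latex
% Reader's aside: the exponent condition and the Sobolev exponent at s = 18/11.
\[
-\frac{14}{3} + \frac{6}{s} > -1 \iff \frac{6}{s} > \frac{11}{3} \iff s < \frac{18}{11} ,
\qquad
\frac{3s}{3-s}\bigg|_{s = 18/11} = \frac{54/11}{15/11} = \frac{54}{15} = \frac{18}{5} .
\]
```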
Let $r \in ( 1, \frac 32)$, $ q \in [3, \infty)$ and $ s \in (r,3)$.
Using the Sobolev inequality and \eqref{81a} we have
$$ \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^{\frac{3s}{3-s}}} \leq
C_s \| \partial_{z_1} \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^s}^{\frac13}
\| \nabla^2_{z_\perp} \BA_{\varepsilon } \|_{L^s}^{\frac23}
\leq C_{s,r,q} \varepsilon ^{-\frac 53} \varepsilon^{\left(-4 + \frac{5}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq}}.
$$
Proceeding as above we infer that
$$ \forall \; 1 < p < 18 / 5, \quad \quad \quad
\varepsilon ^2 \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^p} \leq C_p . $$
\bigskip
\noindent {\bf Step 4.} Conclusion in the case $N=3$.
Fix $ 1 < p < 9 / 5 = 1.8 $.
Since $ 2 < 2p < 18/ 5 < 15/4 $, we may use Step 1 (with $p$ instead of $q$)
and Step 3 to deduce that
\begin{align}
\label{princecharmant}
\| \BA_{\varepsilon } \|_{L^p} + & \ \| \nabla_z \BA_{\varepsilon } \|_{L^p}
+ \| \partial^2_{z_1} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon^2 \| \nabla_{z_\perp}^2 \BA_{\varepsilon } \|_{L^p} \nonumber \\
& \leq C_p \Big( \| \BA_{\varepsilon } \|^2_{L^{2p}}
+ \Big[ \varepsilon \| \partial_{z_1} \BA_{\varepsilon } \|_{L^{2p}}
+ \varepsilon^2 \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^{2p}} \Big]^2 \Big) \leq C_p .
\end{align}
Hence \eqref{goodestimate} holds for $p \in (1, \; 9/5)$.
In particular, by the Sobolev embedding
$ W^{1,p} \hookrightarrow L^{\frac{3p}{3-p}}$ with $ 1 < p < 9/5 $ we have
$$ \forall \, 1 < q < 9/2 = 4.5 , \quad \quad \quad \| \BA_{\varepsilon } \|_{L^q} \leq C_q . $$
On the other hand, for any $1 < p < 9/5 $,
$$ \varepsilon \| \partial_{z_1} \BA_{\varepsilon } \|_{W^{1,p}}
= \varepsilon \| \partial_{z_1} \BA_{\varepsilon } \|_{L^{p}}
+ \varepsilon \| \partial^2_{z_1} \BA_{\varepsilon } \|_{L^{p}}
+ \varepsilon \| \nabla_{z_\perp} \partial_{z_1} \BA_{\varepsilon } \|_{L^{p}} \leq C_p
\quad \quad {\rm and} \quad \quad \varepsilon ^2 \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{W^{1,p}}
\leq C_p , $$
hence by the Sobolev embedding,
$$ \forall \; 1 < q < 9/2 = 4.5 , \quad \quad \quad
\varepsilon \| \partial_{z_1} \BA_{\varepsilon } \|_{L^q} + \varepsilon^2 \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^q}
\leq C_q . $$
Thus we may apply Step 1 again to infer
that \eqref{princecharmant} holds now for $1 < p < 9/4 = 2.25 $. By the
Sobolev embedding $ W^{1,p} \hookrightarrow L^{\frac{3p}{3-p}}$, we
deduce as before that
$$ \forall \; 1 < q < 9, \quad \quad \quad \| \BA_{\varepsilon } \|_{L^q}
+ \varepsilon \| \partial_{z_1} \BA_{\varepsilon } \|_{L^q}
+ \varepsilon ^2 \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^q} \leq C_q . $$
Applying Step 1, we find that \eqref{princecharmant}
holds for any $1 < p < 9/2 $. Since $ 9/2 > 3 $, the Sobolev
embedding yields
$$ \forall \; 1 < p \leq \infty, \quad \quad \quad
\| \BA_{\varepsilon } \|_{L^p} + \varepsilon \| \partial_{z_1} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon ^2 \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^p} \leq C_p , $$
and the conclusion follows using again Step 1.
\bigskip
\noindent {\bf Step 5.} Conclusion in the case $N=2$.
The proof of \eqref{goodestimate} in the two-dimensional case is much easier: for any $1 < p < \frac 32$,
we have by Step 1 and Lemma \ref{Grenouille} $(i)$ and $(iv)$
$$ \| \BA_{\varepsilon } \|_{L^p} + \| \nabla_z \BA_{\varepsilon } \|_{L^p}
+ \| \partial^2_{z_1} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^p}
+ \varepsilon ^2 \| \nabla_{z_\perp}^2 \BA_{\varepsilon } \|_{L^p} \leq C_p . $$
Thus, by the Sobolev embedding $W^{1,p} (\mathbb R^2 ) \hookrightarrow L^{\frac{2p}{2-p}} (\mathbb R^2 ) $,
\begin{eqnarray}
\label{85}
\forall \; 1 < q < 6, \quad \quad \quad \| \BA_{\varepsilon } \|_{L^q} \leq C_q
\quad \quad \quad {\rm and} \quad \quad \quad
\varepsilon \Big[ \| \partial_{z_1} \BA_{\varepsilon } \|_{L^{q}}
+ \varepsilon \| \nabla_{z_\perp} \BA_{\varepsilon } \|_{L^{q}} \Big] \leq C_q .
\end{eqnarray}
Applying Step 1 once again, we infer that \eqref{princecharmant} holds for any $ p \in (1, 3)$.
Since $ 3 >2$, the Sobolev embedding implies that \eqref{85} holds for any $ q \in (1, \infty]$.
Repeating the argument we get the desired conclusion. \\
Since
$A_{\varepsilon } = \varepsilon^{-2} ( \sqrt{1+\varepsilon ^2 \BA_{\varepsilon }} - 1 )$, uniform
bounds on $ A_{\varepsilon }$ and its derivatives up to order 2 follow immediately from \eqref{goodestimate}.
\
$\Box$egin{itemize}gskip
It remains to prove \eqref{goodestimate2}. The uniform bounds on
$\partial_{z_1} \varphi_{\varepsilon }$ and $ \varepsilon \nabla_{z_\perp} \varphi_{\varepsilon }$ follow from \eqref{phasestimate1} and \eqref{goodestimate}.
Let $ U = \rho e^{i \phi } $ be a finite energy solution to (TW$_c$); from the first equation in \eqref{phasemod}
we have
$$
2 \rho ^2 \Delta \phi = c \frac{ \partial }{ \partial x_1 } ( \rho ^2 - r_0 ^2) - 2 \nabla ( \rho ^2) \cdot \nabla \phi.
$$
If $ \rho \geq \frac{ r_0}{2} $ and $ c \in (0, {\mathfrak c}_s)$, using the properties of the Riesz transforms we get
for any $ j, k \in \{ 1, \dots, N \}$ and any $ q \in (1, \infty)$
$$
\Big\| \frac{ \partial ^2 \phi}{\partial x_j \partial x_k } \Big\|_{L^q} = \| R_j R_k ( \Delta \phi ) \|_{L^q}
\leq C \| \Delta \phi \|_{L^q}
\leq C \Big\| \frac{ \partial }{ \partial x_1 } ( \rho ^2 - r_0 ^2) \Big\| _{L^q} + C \| \nabla ( \rho ^2) \cdot \nabla \phi \|_{L^q}.
$$
In the case $ U = U_{\varepsilon}$, $ \rho (x) = r_0 \sqrt{ 1 + \varepsilon ^2 \mathbf{A}_{\varepsilon} (z) }$, $\phi (x) = \varepsilon \varphi _{\varepsilon} (z)$,
using \eqref{goodestimate} and \eqref{phasestimate1} we get
$$
\Big\| \frac{ \partial ^2 \phi}{\partial x_j \partial x_k } \Big\|_{L^q}
\leq C \varepsilon^{ 3 - \frac{2N -1}{q} } \Big\| \frac{ \partial \mathbf{A}_{\varepsilon}}{\partial z _1} \Big\| _{L^q}
+ C \varepsilon^{ 5 - \frac{2N -1}{q}} \Big\| \frac{ \partial \mathbf{A}_{\varepsilon}}{\partial z _1} \cdot \frac{ \partial \varphi_{\varepsilon }}{\partial z_1} \Big\| _{L^q}
+ C \varepsilon^{7 - \frac{2N -1}{q}} \sum_{ j =2}^N \Big\| \frac{ \partial \mathbf{A}_{\varepsilon}}{\partial z _j} \cdot \frac{ \partial \varphi_{\varepsilon }}{\partial z_j} \Big\| _{L^q}
\leq C_q \varepsilon^{ 3 - \frac{2N -1}{q} }.
$$
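Throughout these computations we use, without further comment, the effect of the anisotropic scaling $z_1 = \varepsilon x_1$, $z_\perp = \varepsilon^2 x_\perp$ on Lebesgue norms: since $dx = \varepsilon^{1-2N}\, dz$, for $g(x) = f(z(x))$ we have
$$
\| g \|_{L^q(\mathbb R^N, \, dx)} = \varepsilon^{\frac{1-2N}{q}} \| f \|_{L^q(\mathbb R^N, \, dz)},
$$
and each derivative $\partial_{x_1}$ (respectively $\partial_{x_j}$, $j \geq 2$) contributes a factor $\varepsilon$ (respectively $\varepsilon^2$). This accounts for all the powers of $\varepsilon$ appearing in the estimates above and below.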
By scaling we find for $ j,k \in \{2, \dots, N \}$,
\begin{eqnarray}
\label{phasestimate3}
\Big\| \frac{ \partial ^2 \varphi _{\varepsilon}}{\partial z_1 ^2} \Big\|_{L^q}
+ \varepsilon \Big\| \frac{ \partial ^2 \varphi _{\varepsilon}}{\partial z_1 \partial z_j } \Big\|_{L^q}
+ \varepsilon ^2 \Big\| \frac{ \partial ^2 \varphi _{\varepsilon}}{\partial z_j \partial z_k } \Big\|_{L^q} \leq C_q.
\end{eqnarray}
By assumption (A4) there is $ \delta > 0$ such that $F$ is $C^2$ on $ (\, ( r_0 -2 \delta )^2, ( r_0 + 2 \delta )^2) $.
Let $ U = \rho e^{ i \phi }$ be a solution to (TW$_c$) such that $ r_0 - \delta \leq \rho \leq r_0 + \delta$.
Differentiating (TW$_c$) and using standard elliptic regularity theory it is not hard to see that $ U \in W_{loc}^{4, p} ( \mathbb R^N)$
and $ \nabla U \in W^{3, p}(\mathbb R^N)$ for any $ p \in (1, \infty) $
(see the proof of Proposition 2.2 (ii) p. 1079 in \cite{M2}). We infer that $ \nabla \rho, \, \nabla \phi \in W^{3, p} ( \mathbb R^N)$
for $ p \in (1, \infty) $.
Differentiating the first equation in \eqref{phasemod} with respect to $ x_1 $ we find
\begin{eqnarray}
\label{deriveq}
c \frac{ \partial ^2}{\partial x_1 ^2} \left( \rho ^2 - r_0 ^2\right) = 2 \nabla \left( \frac{ \partial ( \rho ^2)}{\partial x_1 } \right) \cdot \nabla \phi
+ 2 \nabla ( \rho ^2) \cdot \nabla \left( \frac{ \partial \phi}{\partial x_1} \right) + 2 \frac{ \partial ( \rho ^2)}{\partial x_1 } \Delta \phi
+ 2 \rho ^2 \Delta \left( \frac{ \partial \phi}{\partial x_1} \right).
\end{eqnarray}
If $ U = U_{\varepsilon}$, $ \rho (x) = r_0 \sqrt{ 1 + \varepsilon ^2 \mathbf{A}_{\varepsilon } (z) } $ and
$ \phi (x) = \varepsilon \varphi_{\varepsilon} (z)$, we perform a scaling and then we use
\eqref{goodestimate}, \eqref{phasestimate1} and \eqref{phasestimate3} to get, for $ 1 < q < \infty$ and all $ \varepsilon $ sufficiently small,
$$
\begin{array}{c}
\displaystyle \Big\| \frac{ \partial ^2}{\partial x_1 ^2} \left( \rho ^2 - r_0 ^2 \right) \Big\|_{L^q}
= \varepsilon^{ 4 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial ^2 \mathbf{A}_{\varepsilon}}{\partial z_1 ^2 } \Big\|_{L^q} \leq C_q \varepsilon^{ 4 + \frac{1 - 2N}{q}},
\\
\\
\displaystyle \Big\| \frac{ \partial ^2 ( \rho ^2)}{\partial x_1 ^2 } \cdot \frac{\partial \phi}{\partial x_1} \Big\|_{L^q}
\leq \Big\| \frac{ \partial ^2 ( \rho ^2)}{\partial x_1 ^2 } \Big\|_{L^{2q}} \Big\| \frac{\partial \phi}{\partial x_1} \Big\|_{L^{2q}}
= \varepsilon^{ 6 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial^2 \mathbf{A}_{\varepsilon}}{\partial z_1 ^2} \Big\|_{L^{2q}} \Big\| \frac{\partial \varphi _{\varepsilon}}{\partial z_1} \Big\|_{L^{2q}}
\leq C_q \varepsilon^{ 6 + \frac{1 - 2N}{q}},
\\
\\
\displaystyle \Big\| \frac{ \partial ^2 ( \rho ^2)}{\partial x_1 \partial x_{k} } \cdot \frac{\partial \phi}{\partial x_k} \Big\|_{L^q}
\leq \Big\| \frac{ \partial ^2 ( \rho ^2)}{\partial x_1 \partial x_k } \Big\|_{L^{2q}} \Big\| \frac{\partial \phi}{\partial x_k} \Big\|_{L^{2q}}
= \varepsilon^{ 8 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial^2 \mathbf{A}_{\varepsilon}}{\partial z_1 \partial z_k} \Big\|_{L^{2q}} \Big\| \frac{\partial \varphi _{\varepsilon}}{\partial z_k} \Big\|_{L^{2q}}
\leq C_q \varepsilon^{ 6 + \frac{1 - 2N}{q}},
\\
\\
\displaystyle \Big\| \frac{ \partial ( \rho ^2)}{\partial x_1 } \cdot \frac{\partial ^2 \phi}{\partial x_1 ^2 } \Big\|_{L^q}
\leq \Big\| \frac{ \partial ( \rho ^2)}{\partial x_1 } \Big\|_{L^{2q}} \Big\| \frac{\partial ^2 \phi}{\partial x_1 ^2 } \Big\|_{L^{2q}}
= \varepsilon^{ 6 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial \mathbf{A}_{\varepsilon}}{\partial z_1 } \Big\|_{L^{2q}} \Big\| \frac{\partial ^2 \varphi _{\varepsilon}}{\partial z_1 ^2 } \Big\|_{L^{2q}}
\leq C_q \varepsilon^{ 6 + \frac{1 - 2N}{q}},
\\
\\
\displaystyle \Big\| \frac{ \partial ( \rho ^2)}{\partial x_k } \cdot \frac{\partial ^2 \phi}{\partial x_1 \partial x_k } \Big\|_{L^q}
\leq \Big\| \frac{ \partial ( \rho ^2)}{\partial x_k } \Big\|_{L^{2q}} \Big\| \frac{\partial ^2 \phi}{\partial x_1 \partial x_k } \Big\|_{L^{2q}}
= \varepsilon^{ 8 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial \mathbf{A}_{\varepsilon}}{\partial z_k } \Big\|_{L^{2q}} \Big\| \frac{\partial ^2 \varphi _{\varepsilon}}{\partial z_1 \partial z_k } \Big\|_{L^{2q}}
\leq C_q \varepsilon^{ 7 + \frac{1 - 2N}{q}},
\\
\\
\displaystyle \Big\| \frac{ \partial ( \rho ^2)}{\partial x_1 } \Big\|_{L^q} = \varepsilon^{ 3 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial \mathbf{A}_{\varepsilon}}{\partial z_1} \Big\|_{L^q}
\leq C_q \varepsilon^{ 3 + \frac{1 - 2N}{q}},
\\
\\
\displaystyle \Big\| \frac{ \partial ^2 \phi }{\partial x_1 ^2 } \Big\|_{L^q} = \varepsilon^{ 3 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial ^2 \varphi_{\varepsilon}}{\partial z_1 ^2 } \Big\|_{L^q}
\leq C_q \varepsilon^{ 3 + \frac{1 - 2N}{q}}
\qquad \mbox{ and } \qquad
\Big\| \frac{ \partial ^2 \phi }{\partial x_k ^2 } \Big\|_{L^q} = \varepsilon^{ 5 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial ^2 \varphi_{\varepsilon}}{\partial z_k ^2 } \Big\|_{L^q}
\leq C_q \varepsilon^{ 3 + \frac{1 - 2N}{q}} .
\end{array}
$$
Hence $ \| \Delta \phi \|_{L^q} \leq C_q \varepsilon^{ 3 + \frac{1 - 2N}{q}} $ and then
$\displaystyle \Big\| \frac{ \partial (\rho ^2)}{\partial x_1} \cdot \Delta \phi \Big\|_{L^q} \leq C_q \varepsilon^{ 6 + \frac{1 - 2N}{q}} $.
From \eqref{deriveq} and the above estimates we infer that
$\displaystyle \Big\| \Delta \left(\frac{ \partial \phi }{\partial x_1} \right) \Big\|_{L^q} \leq C_q \varepsilon^{ 4 + \frac{1 - 2N}{q}} $.
As before, this implies $\displaystyle \Big\| \frac{ \partial ^3 \phi}{\partial x_1 \partial x_i \partial x_j } \Big\|_{L^q} \leq C_q \varepsilon^{ 4 + \frac{1 - 2N}{q}} $
for any $ i, j \in \{ 1, \dots, N \}$.
By scaling we find
$$
\Big\| \frac{ \partial ^3 \varphi _{\varepsilon}}{\partial z_1 ^3} \Big\|_{L^q}
+ \varepsilon \Big\| \nabla_{z_{\perp}} \frac{ \partial ^2 \varphi _{\varepsilon}}{\partial z_1 ^2} \Big\|_{L^q}
+ \varepsilon ^2 \Big\| \nabla_{z_{\perp}} ^2 \frac{ \partial \varphi _{\varepsilon}}{\partial z_1 } \Big\|_{L^q} \leq C_q.
$$
Then \eqref{goodestimate2} follows from the last estimate, \eqref{phasestimate1} and \eqref{phasestimate3}.
$\Box$
\subsection{Proof of Proposition \ref{convergence}}
Let $(U_n, \varepsilon _n)_{n \geq 1}$ be a sequence as in Proposition \ref{convergence}.
We denote $ c_n = \sqrt{{\mathfrak c}_s ^2 - \varepsilon _n ^2}$.
By Corollary \ref{sanszero} we have $ \| \, |U_n| - r_0 \| _{L^{\infty}(\mathbb R^3 )} \to 0 $ as $ n \to \infty$,
hence $|U_n| \geq \frac{ r_0}{2}$ in $ \mathbb R^3$ for all sufficiently large $n$, say $ n \geq n_0$.
For $ n \geq n_0$ we have a lifting as in Theorem \ref{res1} or in \eqref{ansatz}, that is
$$
U_n (x) = \rho _n (x) e^{i\phi _n(x)}
= r_0 \left( 1 + \varepsilon _n ^2 A_n(z) \right) e^{i \varepsilon _n \varphi _n(z) }
= r_0 \sqrt{1+\varepsilon_n ^2 \mathbf{A}_{n }(z) }\ e^{i\varepsilon _n \varphi_{n } (z)},
$$
where $ z_1 = \varepsilon _n x_1 $ and $ z_\perp = \varepsilon_n ^2 x_\perp $.
Let $\mathbf{W}_n = \partial_{z_1} \varphi_n / {\mathfrak c}_s $.
Our aim is to show that $(\mathbf{W}_n )_{n \geq n_0}$ is a minimizing sequence for $\mathscr{S}_*$ in the sense of Theorem \ref{gs}.
To that purpose we expand the functional $E_{c_n} (U_n)$ in terms of
the (KP-I) action of $\mathbf{W}_n$.
Recall that by \eqref{develo} we have
\begin{align*}
E_{c_n} (U_n) = & \ \varepsilon_n r_0^2 \int_{\mathbb R^3} \frac{1}{\varepsilon_n^2}
\Big( \partial_{z_1} \varphi_n - c_n A_n \Big)^2
+ (\partial_{z_1} \varphi_n)^2 ( 2 A_n + \varepsilon_n^2 A_n^2 )
+ |\nabla_{z_\perp} \varphi_n |^2 ( 1 + \varepsilon_n^2 A_n )^2
\nonumber \\
& \hspace{2cm} + (\partial_{z_1} A_n)^2
+ \varepsilon_n^2 |\nabla_{z_\perp} A_n|^2 + A_n^2
+ {\mathfrak c}_s^2 \Big( \frac{\Gamma}{3} - 1 \Big) A_n^3
+ \frac{{\mathfrak c}_s^2}{\varepsilon_n^6} V_4( \varepsilon_n^2 A_n) \nonumber \\
& \hspace{2cm} - c_n A_n^2 \partial_{z_1} \varphi_n \ dz .
\end{align*}
By Proposition \ref{Born}, $(A_n)_{n \geq n_0 }$ is bounded in $W^{1,p}(\mathbb R^N)$ for all $ p \in (1, \infty)$,
hence it is bounded in $L^{\infty }(\mathbb R^3)$.
Since $ F( r_0 ^2 ( 1 + \varepsilon ^2 A_{\varepsilon } )) = F( r_0 ^2) - {\mathfrak c}_s ^2 \varepsilon ^2 A_{\varepsilon } + \mathcal{O} ( \varepsilon ^4 A_{\varepsilon } ^2 )
= - c^2( \varepsilon) \varepsilon ^2 A_{\varepsilon } - \varepsilon^4 A_{\varepsilon } + \mathcal{O} ( \varepsilon ^4 A_{\varepsilon } ^2 ) $,
from the second equation in \eqref{MadTW}, Lemma \ref{BornEnergy} and Proposition \ref{Born} we get
\begin{eqnarray}
\label{approx1}
\| \partial _{z_1} \varphi _{n } - c _n A_{n } \|_{L^2} = \mathcal{O}(\varepsilon_n ^2).
\end{eqnarray}
In particular, we have $ \displaystyle \int_{\mathbb R^3} \frac{1}{\varepsilon_n^2} \Big( \partial_{z_1} \varphi_n - c_n A_n \Big)^2 \, dz = \mathcal{O}(\varepsilon_n ^2) $ as $ n \to \infty$.
By Proposition \ref{Born}, $ \partial_{z_1} \varphi _n \in W^{2, p} (\mathbb R^N)$ for $ p \in (1, \infty)$. Integrating by parts we have
$$
\int_{\mathbb R^N} ( \partial _{z_1} A _n ) ^2 - \frac{( \partial ^2_{z_1} \varphi _n ) ^2}{ c_n^2 }\, dz
= - \int_{\mathbb R^N} \left( A _n - \frac{ \partial _{z_1} \varphi _n}{ c_n }\right) \left( \partial _{z_1} ^2 A _n + \frac{ \partial ^3_{z_1} \varphi _n}{ c _n }\right)\, dz .
$$
From the above identity, the Cauchy-Schwarz inequality, \eqref{approx1} and Proposition \ref{Born} we get
$$
\Big\vert \int_{\mathbb R^N} ( \partial _{z_1} A _n ) ^2 - \frac{ ( \partial_{z_1} ^2\varphi _n )^2}{{\mathfrak c}_s ^2} \, dz \Big\vert
\leq \left( \frac{1}{ c_n ^2 } - \frac{1}{{\mathfrak c}_s ^2} \right) \int_{\mathbb R^N} (\partial_{z_1} ^2 \varphi _n )^2 \, dz +
\Big\| A_n - \frac{ \partial _{z_1} \varphi _n}{ c_n } \Big\|_{L^2}
\Big\| \partial _{z_1} ^2 A _n + \frac{ \partial ^3_{z_1} \varphi _n}{ c _n } \Big\|_{L^2}
= \mathcal{O}( \varepsilon _n ^2).
$$
Similarly, using \eqref{approx1}, H\"older's inequality and Proposition \ref{Born} we find
$$
\begin{array}{l}
\displaystyle \Big| \int_{\mathbb R^3} A_n^2 - \frac{(\partial_{z_1} \varphi_n)^2}{{\mathfrak c}_s^2} \, dz \Big|
+ \Big| \int_{\mathbb R^3} A_n^3 - \frac{(\partial_{z_1} \varphi_n)^3}{{\mathfrak c}_s^3} \, dz \Big|
\\
\\
\displaystyle + \Big| \int_{\mathbb R^3} A_n^2 \partial_{z_1} \varphi_n - \frac{(\partial_{z_1} \varphi_n)^3}{{\mathfrak c}_s^2} \, dz \Big|
+ \Big| \int_{\mathbb R^3} A_n ( \partial_{z_1} \varphi _n )^2 - \frac{(\partial_{z_1} \varphi_n)^3}{{\mathfrak c}_s} \, dz \Big|
= \mathcal{O}(\varepsilon_n ^2) .
\end{array}
$$
Since $(A_n)_{n \geq n_0}$ is bounded in $L^{\infty}(\mathbb R^3)$, using Lemma \ref{BornEnergy} we find
$$
\int_{\mathbb R^3 } |\nabla_{z_\perp} \varphi_n |^2 ( 1 + \varepsilon_n^2 A_n )^2 \, dz
= \int_{\mathbb R^3 } |\nabla_{z_\perp} \varphi_n |^2 \, dz + \mathcal{O}( \varepsilon _n ^2)
= {\mathfrak c}_s ^2 \int_{\mathbb R^3 } |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W} _n |^2 \, dz + \mathcal{O}( \varepsilon _n ^2) .
$$
Recall that $ V_4( \alpha ) = \mathcal{O}(\alpha ^4) $ as $ \alpha \to 0$, hence Proposition \ref{Born} implies that
$$
\int_{\mathbb R^3 } \varepsilon _n ^2 A_n ^2 (\partial_{z_1} \varphi _n )^2 + \varepsilon_n^2 |\nabla_{z_\perp} A_n|^2 + \frac{{\mathfrak c}_s^2}{\varepsilon_n^6} V_4( \varepsilon_n^2 A_n) \, dz = \mathcal{O} (\varepsilon _n ^2).
$$
Inserting the above estimates into \eqref{develo}
we obtain
\begin{eqnarray}
\label{ecun}
\frac{E_{c(\varepsilon_n)} (U_n)}{{\mathfrak c}_s^2 r_0^2 \varepsilon_n} =
\int_{\mathbb R^3} \Big| \nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}_n \Big|^2
+ \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathbf{W}_n )^2
+ \frac{\Gamma}{3}\, \mathbf{W}_n^3 + \frac{1}{{\mathfrak c}_s^2}\, \mathbf{W}_n^2 \, dz + \mathcal{O}(\varepsilon_n ^2)
= \mathscr{S} ( \mathbf{W}_n ) + \mathcal{O}(\varepsilon_n ^2) .
\end{eqnarray}
From the above estimate and the upper bound on $E_{c_n} (U_n) = T_{c_n}$ given by Proposition \ref{asympto} $(ii)$
we infer that
$$
\mathscr{S} ( \mathbf{W}_n ) = \frac{E_{c(\varepsilon_n)} (U_n)}{{\mathfrak c}_s^2 r_0^2 \varepsilon_n} + \mathcal{O}(\varepsilon_n ^2)
= \frac{ T_{c_n}}{{\mathfrak c}_s^2 r_0^2 \varepsilon_n} + \mathcal{O}(\varepsilon_n ^2)
\leq \mathscr{S}_{\rm min} + \mathcal{O}(\varepsilon_n ^2) = \mathscr{S}_{*} + \mathcal{O}(\varepsilon_n ^2) .
$$
Similarly we have
$$
\int_{\mathbb R^3} |\nabla _{x_{\perp}} U_n |^2 \, dx
= r_0 ^2 \varepsilon _n \int_{\mathbb R^3} ( 1 + \varepsilon _n ^2 A_n) ^2 |\nabla _{z_{\perp}} \varphi _n |^2 + \varepsilon _n ^2 |\nabla _{z_{\perp}} A_n |^2 \, dz
= r_0 ^2 {\mathfrak c}_s ^2 \varepsilon _n \int_{\mathbb R^3} \Big| \nabla_{z_\perp} \partial_{z_1}^{-1} \mathbf{W}_n \Big|^2 \, dz + \mathcal{O}( \varepsilon_n ^3).
$$
Since $U_n$ satisfies the Pohozaev identity
$\displaystyle E_{c_n}(U_n) = \int_{\mathbb R^3} | \nabla_{x_\perp} U_n |^2 \ dx $,
comparing the above equation to the expression of $E_{c_n}(U_n) $ in \eqref{ecun} we find
$$
\int_{\mathbb R^3} \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathbf{W}_n )^2
+ \frac{\Gamma}{3}\, \mathbf{W}_n^3 + \frac{1}{{\mathfrak c}_s^2}\, \mathbf{W}_n^2 \ dz = \mathcal{O}(\varepsilon _n^2) .
$$
In order to apply Theorem \ref{gs}, we have to check that there is $ m_1 >0 $ such that
for all $n$ sufficiently large there holds
$$ \int_{\mathbb R^3} \mathbf{W}_n^2 + (\partial_{z_1} \mathbf{W}_n)^2 \ dz \geq m_1 . $$
By Lemma \ref{minoinf}, there are $ k>0 $ depending only on $F$ and $ n_1 \geq n_0$ such that
$$ \forall n \geq n_1, \quad \| A_n \|_{L^\infty} \geq k . $$
Since $A_n$ tends to $0$ at infinity, after a translation we may assume that
$$ |A_n(0)| = \| A_n \|_{L^\infty} \geq k . $$
By Proposition \ref{Born} we know that
for all $ p \in (1, \infty)$ there is $ C_p > 0 $ such that $ \| A_{n } \|_{W^{1, p } } \leq C_p$
for any $ n \geq n_0$.
Then Morrey's inequality (see e.g. Theorem IX.12 p. 166 in \cite{brezis}) implies that for any $ \alpha \in (0,1)$ there is $ C_{\alpha } > 0$ such that
for all $ n \geq n_0$ and all $x, y \in \mathbb R^3$ we have $|A_{n }(x) - A_{n }(y) | \leq C_{\alpha } |x -y |^{\alpha}.$
We infer that $ |A_n| \geq k/2 $ in $B_r(0)$ for some $r>0$
independent of $n$, hence there is $m_1 > 0$ such that
$$
\| A_n \|_{L^2} \geq \| A_n \|_{L^2(B_r(0))} \geq 2 m_1.
$$
From \eqref{approx1} it follows that $\| \mathbf{W} _n - A_n \|_{L^2} \to 0 $ as $ n \to \infty$, hence
$$ \| \mathbf{W}_n \|_{L^2} \geq \| \mathbf{W}_n \|_{L^2(B_r(0))}
\geq m_1 \qquad \mbox{ for all } n \mbox{ sufficiently large.} $$
Then Theorem \ref{gs} implies that there exist $\mathbf{W} \in \mathscr{Y}(\mathbb R^3)$,
a subsequence of $(\mathbf{W}_n)_{n \geq n_0}$ (still denoted $(\mathbf{W}_n)_{n \geq n_0}$),
and a sequence $(z^n)_{n\geq n_0} \subset \mathbb R^3$ such that
$$
\mathbf{W}_n ( \cdot - z^n ) \to \mathbf{W} \quad \quad \quad {\rm in} \quad \mathscr{Y}(\mathbb R^3) .
$$
Moreover, there is $ \sigma > 0 $ such that $ z \longmapsto \mathbf{W}(z_1, \frac{1}{\sigma } z_{\perp})$ is a ground state
(with speed $1/(2 {\mathfrak c}_s ^2)$) of (KP-I). We will prove that $ \sigma = 1$.
Let $ x^n = \left( \frac{z_1^n}{\varepsilon _n}, \frac{ z_{\perp}^n}{\varepsilon_n ^2} \right).$
We denote $ \tilde{\mathbf{W}}_n = \mathbf{W}_n( \cdot - z^n)$, $ \tilde{A}_n = A_n (\cdot - z ^n)$, $ \tilde{ \varphi}_n = \varphi_n (\cdot - z^n)$, $\tilde{U}_n = U_n (\cdot -x^n).$
It is obvious that $\tilde{U}_n$ satisfies (TW$_{c_n}$) and all the previous estimates hold with
$ \tilde{A}_n$, $ \tilde{ \varphi}_n $ and $\tilde{U}_n$ instead of $ A_n$, $\varphi_n$ and $U_n$, respectively.
Since $ \tilde{\mathbf{W}}_n = \frac{1}{{\mathfrak c}_s } \partial_{z_1} \tilde{\varphi}_n$ and $ \tilde{\mathbf{W}}_n \to \mathbf{W}$ in $\mathscr{Y}(\mathbb R^3)$, we have
\begin{eqnarray}
\label{conv1}
\partial_{z_1} \tilde{\varphi}_n \to {\mathfrak c}_s \mathbf{W}, \qquad \quad
\partial_{z_1}^2 \tilde{\varphi}_n \to {\mathfrak c}_s \partial_{z_1} \mathbf{W} \qquad \mbox{ and } \qquad
\nabla_{z _{\perp}} \tilde{\varphi}_n \to {\mathfrak c}_s \nabla_{z _{\perp}} \partial_{z_1}^{-1} \mathbf{W}
\qquad \mbox{ in } L^2(\mathbb R^3).
\end{eqnarray}
Integrating by parts, then using the Cauchy-Schwarz inequality, Proposition \ref{Born} and \eqref{approx1} we find
$$
\begin{array}{l}
\displaystyle \int_{\mathbb R^3} \Big| \partial _{z_1}^2 \tilde{\varphi} _n - c_n \partial_{z_1} \tilde{A}_n \Big| ^2 \, dz
= - \int_{\mathbb R^3} ( \partial _{z_1} \tilde{\varphi} _n - c_n \tilde{A}_n ) ( \partial _{z_1}^3 \tilde{\varphi} _n - c_n \partial_{z_1}^2 \tilde{A}_n ) \, dz
\\
\\
\leq \| \partial _{z_1} \tilde{\varphi} _n - c_n \tilde{A}_n \|_{L^2} \|\partial _{z_1}^3 \tilde{\varphi} _n - c_n \partial_{z_1}^2 \tilde{A}_n \|_{L^2}
= \mathcal{O}(\varepsilon_n ^2),
\end{array}
$$
hence $\| \partial _{z_1}^2 \tilde{\varphi} _n - c_n \partial_{z_1} \tilde{A}_n \|_{L^2} = \mathcal{O}(\varepsilon _n) \to 0$. Since $ c_n \to {\mathfrak c}_s$,
from \eqref{approx1} and \eqref{conv1} we get
\begin{eqnarray}
\label{conv2}
\tilde{A}_n \to \mathbf{W} \qquad \mbox{ and } \qquad \partial_{z_1} \tilde{A}_n \to \partial_{z_1} \mathbf{W}
\qquad \mbox{ in } L^2( \mathbb R^3) \quad \mbox{ as } n \to \infty.
\end{eqnarray}
It is obvious that $ \tilde{A}_n$, $\tilde{\varphi}_n $ and $ \varepsilon _n$ satisfy \eqref{desing}.
Let $ \psi \in C_c^{\infty}(\mathbb R^3)$.
We multiply \eqref{desing} by $ \psi $, integrate by parts, then pass to the limit as $ n \to \infty$.
Using Proposition \ref{Born}, \eqref{conv1} and \eqref{conv2}, a straightforward computation shows that $ \mathbf{W}$ satisfies the equation (SW)
in ${\mathcal D} '( \mathbb R^3 )$.
This implies that necessarily $ \sigma = 1$ and $ \mathbf{W} $ is a ground state of speed $1/(2 {\mathfrak c}_s ^2)$ of (KP-I).
In particular, $ \mathbf{W} $ satisfies the Pohozaev identities \eqref{identites} and \eqref{Ident}.
Since $ \tilde{\mathbf{W}}_n \to \mathbf{W}$ in $ \mathscr{Y}(\mathbb R^3) $,
we have $ \mathscr{S} ( \mathbf{W}_n) = \mathscr{S} ( \tilde{\mathbf{W}}_n) \to \mathscr{S} ( \mathbf{W}) $ and \eqref{ecun} implies
$$
\frac{ E_{c( \varepsilon _n)} ( U_n) }{{\mathfrak c}_s ^2 r_0 ^2 \varepsilon _n } = \mathscr{S} ( \mathbf{W} _n) + \mathcal{O}( \varepsilon_n ^2)
= \mathscr{S}( \mathbf{W}) + o(1) = \mathscr{S}_{\rm min} + o(1),
$$
that is, \eqref{Ec} holds.
Using the expression for the momentum in \eqref{momentlift}, then \eqref{conv1}, \eqref{conv2}, Proposition \ref{Born}
and the Pohozaev identities \eqref{identites} and \eqref{Ident} we get
$$
- \frac{ \varepsilon_n}{r_0 ^2 {\mathfrak c}_s ^3 } Q(U_n) = \frac{ \varepsilon_n}{r_0 ^2 {\mathfrak c}_s ^3 } \int_{\mathbb R^3} ( \rho _n ^2 - r_0 ^2) \frac { \partial \phi_n}{\partial x_1 } \, dx
= \frac{1}{{\mathfrak c}_s ^3} \int_{\mathbb R^3} ( 2 A_n (z) + \varepsilon _n ^2 A_n ^2 (z) ) \frac{ \partial \varphi _n}{ \partial z_1} (z) \, dz
\longrightarrow \frac{2}{{\mathfrak c}_s ^2} \int_{\mathbb R^3} \mathbf{W}^2 (z) \, dz = \mathscr{S}( \mathbf{W}) .
$$
Hence $ - {\mathfrak c}_s Q(U_n) \sim r_0 ^2 {\mathfrak c}_s ^4 \, \mathscr{S}_{\rm min} \, \varepsilon_n^{-1} $ as $ n \to \infty$.
Together with \eqref{Ec} this implies that $(U_n)_{n\geq n_0 } $ satisfies \eqref{energy}.
By Proposition \ref{Born} we know that $ (\tilde{A}_n)_{n \geq n_0}$, $ (\partial_{z_1} \tilde{A}_n)_{n \geq n_0}$,
$ (\partial_{z_1} \tilde{\varphi }_n)_{n \geq n_0}$ and $ (\partial_{z_1} ^2 \tilde{\varphi }_n)_{n \geq n_0}$
are bounded in $L^p(\mathbb R^3) $ for $1 < p < \infty$.
From \eqref{conv1}, \eqref{conv2} and standard interpolation in $L^p $ spaces we find, as $ n \to \infty$,
\begin{eqnarray}
\label{conv3}
\tilde{A}_n \to \mathbf{W}, \qquad
\partial_{z_1} \tilde{A}_n \to \partial_{z_1} \mathbf{W}, \qquad
\partial_{z_1} \tilde{\varphi }_n \to {\mathfrak c}_s \mathbf{W} \quad \mbox{ and } \quad
\partial_{z_1}^2 \tilde{\varphi }_n \to {\mathfrak c}_s \partial_{z_1} \mathbf{W}
\quad \mbox{ in } L^p
\end{eqnarray}
for any $ p \in (1, \infty)$.
Proceeding as in \cite{BGS1} (see Lemma 4.6 p. 262 and Proposition 6.1 p. 266 there)
one can prove that for any multi-index $\alpha \in \mathbb N^N$ with $ |\alpha | \leq 2$, the sequences
$ (\partial ^{\alpha } \tilde{A}_n)_{n \geq n_0}$, $ (\partial ^{\alpha } \partial_{z_1} \tilde{A}_n)_{n \geq n_0}$,
$ ( \partial ^{\alpha } \partial_{z_1} \tilde{\varphi }_n)_{n \geq n_0}$ and $ (\partial ^{\alpha } \partial_{z_1} ^2 \tilde{\varphi }_n)_{n \geq n_0}$
are bounded in $L^p(\mathbb R^3) $ for $1 < p < \infty$.
Then by interpolation we see that \eqref{conv3} holds in $W^{1,p}(\mathbb R^3) $ for all $ p \in (1, \infty)$.
$\Box$
\subsection{Proof of Theorem \ref{res1} completed in the case $N=2$}
Assume that $ N =2$. Let $ (U_n, c_n )$ be a sequence of travelling waves to (NLS)
satisfying assumption (b) in Theorem \ref{res1} such that $ c_n \to {\mathfrak c}_s$ as $ n \to \infty$. Let $ \varepsilon _n = \sqrt{{\mathfrak c}_s ^2 - c_n ^2}$.
By Theorem \ref{th2dposit} we have $ \displaystyle \int_{\mathbb R^2} | \nabla U_n |^2 \, dx \to 0 $ as $ n \to \infty$ and then
Lemma \ref{liftingfacile} implies that $\| \, | U_n | - r_0 \|_{L^{\infty}} \to 0 $;
in particular, for $n$ sufficiently large we have a lifting
$U_n (x) = \rho_n(x) e^{i \phi _n (x)} = r_0 \Big( 1 + \varepsilon_n^2 A_n (z) \Big)
e ^{ i\varepsilon _n \varphi_n (z) } $
as in \eqref{ansatzKP} and the conclusion of Proposition \ref{Born} holds for $A_n$ and $ \varphi _n$.
As in the proof of Proposition \ref{convergence} we obtain
\begin{eqnarray}
\label{approx2}
\| \partial _{z_1} \varphi _{n } - c _n A_{n } \|_{L^2} = \mathcal{O}(\varepsilon_n ^2)
\qquad \mbox{ and } \qquad
\| \partial _{z_1} ^2 \varphi _{n } - c _n \partial_{z_1} A_{n } \|_{L^2} = \mathcal{O}(\varepsilon_n ) \qquad \mbox{ as } n \to \infty.
\end{eqnarray}
Let $ k_n = \displaystyle \int_{\mathbb R^2} |\nabla U_n(x)|^2 \, dx$. We denote
$\mathbf{W}_n = {\mathfrak c}_s^{-1} \partial_{z_1} \varphi_n $. By \eqref{approx2} we have $\| \mathbf{W}_n - A_n \|_{L^2} = \mathcal{O}(\varepsilon_n^2)$.
As in the proof of Proposition \ref{convergence} we find
$ \displaystyle \Big| \int_{\mathbb R^2} (\partial_{z_1} A_n )^2 - (\partial_{z_1} \mathbf{W} _n )^2 \, dz \Big|
= \Big\vert \int_{\mathbb R^2} ( \partial _{z_1} A _n ) ^2 - \frac{ ( \partial_{z_1} ^2\varphi _n )^2}{{\mathfrak c}_s ^2} \, dz \Big\vert = \mathcal{O}( \varepsilon _n ^2).$
Using \eqref{approx2} and Proposition \ref{Born} we get
\begin{align}
\label{kn}
k_n = & \ \int_{\mathbb R^2} |\nabla U_n|^2 \ dx =
\varepsilon_n r_0^2 \int_{\mathbb R^2} (\partial_{z_1} \varphi_n)^2 (1 + \varepsilon_n^2 A_n)^2
+ \varepsilon_n^2 (\partial_{z_1} A_n)^2 + \varepsilon_n^2 (\partial_{z_2} \varphi_n)^2 (1 + \varepsilon_n^2 A_n)^2
+ \varepsilon_n^4 (\partial_{z_2} A_n)^2 \ dz
\nonumber
\\
= & \ \varepsilon_n r_0^2 \int_{\mathbb R^2} (\partial _{z_1} \varphi_n)^2 \ dz
+ \varepsilon_n^3 r_0^2 \int_{\mathbb R^2}
\Big( 2 A_n (\partial_{z_1} \varphi_n ) ^2 + (\partial_{z_1} A_n )^2
+ ( \partial_{z_2} \varphi _n )^2 \Big) \ dz
+ \mathcal{O}(\varepsilon_n^5)
\nonumber
\\
= & \ \varepsilon_n r_0^2 {\mathfrak c}_s^2 \int_{\mathbb R^2} \mathbf{W}_n^2 \ dz
+ \varepsilon_n^3 r_0^2 {\mathfrak c}_s^2 \int_{\mathbb R^2} \Big( 2 \mathbf{W}_n^3
+ \frac{1}{{\mathfrak c}_s^2} ( \partial_{z_1} \mathbf{W}_n )^2
+ ( \partial_{z_2}\partial_{z_1}^{-1} \mathbf{W}_n )^2 \Big) \ dz
+ \mathcal{O}(\varepsilon_n^5) .
\end{align}
Inverting this expansion we find the following expression of $ \varepsilon _n$ in terms of $ k_n$:
\begin{equation}
\label{pepsi}
\varepsilon_n = \frac{k_n}{r_0^2 {\mathfrak c}_s^2 \| \mathbf{W}_n \|_{L^2}^2 }
- \frac{k_n^3}{r_0^6 {\mathfrak c}_s^6 \| \mathbf{W}_n \|_{L^2}^8}
\int_{\mathbb R^2} \Big( 2 \mathbf{W}_n^3 + \frac{1}{{\mathfrak c}_s^2} ( \partial_{z_1} \mathbf{W}_n )^2
+ ( \partial_{z_2}\partial_{z_1}^{-1} \mathbf{W}_n )^2 \Big) \ dz + \mathcal{O}(k_n^5) .
\end{equation}
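The series inversion leading to \eqref{pepsi} is elementary but error-prone. The following symbolic computation (an illustrative check, not part of the proof) abbreviates $a = r_0^2 {\mathfrak c}_s^2 \|\mathbf{W}_n\|_{L^2}^2$ and writes $b$ for the coefficient of $\varepsilon_n^3$ in \eqref{kn}; note that $b/a^4$ is exactly the coefficient of $k_n^3$ written out in \eqref{pepsi}:

```python
import sympy as sp

eps, k, a, b = sp.symbols('eps k a b', positive=True)

# Expansion (kn): k = a*eps + b*eps**3 + O(eps**5).
k_of_eps = a*eps + b*eps**3

# Candidate inversion (pepsi): eps = k/a - (b/a**4)*k**3 + O(k**5).
eps_of_k = k/a - b/a**4 * k**3

# Composing the two expansions must return k up to O(k**5).
residual = sp.series(k_of_eps.subs(eps, eps_of_k), k, 0, 5).removeO() - k
assert sp.simplify(residual) == 0
```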
Recall that the mapping $U_n (c_n \cdot )$ is a minimizer of the functional $I(\psi ) = Q( \psi)+ \displaystyle \int_{\mathbb R^2} V( |\psi |^2) \, dx $
under the constraint $\displaystyle \int_{\mathbb R^2} |\nabla \psi |^2 \, dx = k_n$.
Using this information, Proposition \ref{asympto} $(i)$, the fact that
$c_n^2 = {\mathfrak c}_s^2 - \varepsilon_n^2 $ and \eqref{pepsi} we get
\begin{align}
\label{lapubelle}
c_n Q(U_n) + \int_{\mathbb R^2} V(|U_n|^2) \ dx = & \,
c_n ^2 I(U_n (c_n \cdot ) ) = c_n^2 I_{\rm min} (k_n)
\nonumber
\\
\leq & \, c_n^2 \left( - \frac{k_n}{{\mathfrak c}_s^2} - \frac{4 k_n^3}{27 r_0^4 {\mathfrak c}_s^{12} \mathscr{S}^2_{\rm min}} + \mathcal{O}(k_n^5) \right)
\nonumber
\\
= & \, - k_n + \frac{k_n^3}{r_0^4 {\mathfrak c}_s^6 \| \mathbf{W}_n \|_{L^2}^4}
- \frac{4 k_n^3}{27 r_0^4 {\mathfrak c}_s^{10} \mathscr{S}_{\rm min}^2} + \mathcal{O}(k_n^5) .
\end{align}
Moreover, using the Taylor expansion \eqref{V}, we find
$$ \int_{\mathbb R^2} V(|U_n|^2) \ dx = r_0^2 {\mathfrak c}_s^2 \varepsilon_n
\int_{\mathbb R^2} \Big( A_n^2 + \varepsilon_n^2 \Big[ \frac{\Gamma}{3} - 1 \Big] A_n^3
+ \frac{V_4(\varepsilon_n^2 A_n)}{\varepsilon_n^4} \Big) \ dz $$
and by \eqref{momentlift} we have
$$ Q(U_n) = - \varepsilon_n r_0^2 \int_{\mathbb R^2} \Big( 2 A_n + \varepsilon_n^2 A_n^2 \Big)
\frac{\partial \varphi_n}{\partial z_1} \ dz . $$
Taking into account \eqref{approx2} and the equality $c_n^2 = {\mathfrak c}_s^2 - \varepsilon_n^2$, then using the expansion \eqref{pepsi}
of $ \varepsilon _n$ in terms of $ k_n$, we get
\begin{align}
\label{99}
c_n Q(U_n) + & \ \int_{\mathbb R^2} V(|U_n|^2) \ dx
\nonumber \\
= & \ r_0^2 {\mathfrak c}_s^2 \left( \varepsilon_n \int_{\mathbb R^2}
\Big( - 2 A_n \BW_n + A_n^2 \Big) \ dz
+ \varepsilon_n^3 \int_{\mathbb R^2}
\Big( - A_n^2 \BW_n + \Big[ \frac{\Gamma}{3} - 1 \Big] A_n^3
+ \frac{1}{{\mathfrak c}_s^2} A_n \BW_n \Big) \ dz
+ \BO(\varepsilon_n^5) \right)
\nonumber \\
= & \ r_0^2 {\mathfrak c}_s^2 \left( \varepsilon_n \| \BW_n - A_n \|_{L^2}^2
- \varepsilon_n \int_{\mathbb R^2} \BW_n^2 \ dz
+ \varepsilon_n^3 \int_{\mathbb R^2}
\Big[ \frac{\Gamma}{3} - 2 \Big] \BW_n^3 + \frac{\BW_n^2}{{\mathfrak c}_s^2} \ dz
+ \BO(\varepsilon_n^5) \right)
\nonumber \\
= & \ r_0^2 {\mathfrak c}_s^2 \left( - \varepsilon_n \int_{\mathbb R^2} \BW_n^2 \ dz
+ \varepsilon_n^3 \int_{\mathbb R^2}
\Big[ \frac{\Gamma}{3} - 2 \Big] \BW_n^3 + \frac{\BW_n^2}{{\mathfrak c}_s^2} \ dz
+ \BO(\varepsilon_n^5) \right)
\nonumber \\
= & \ - k_n
+ \frac{k_n^3}{r_0^4 {\mathfrak c}_s^4 \| \BW_n \|_{L^2}^6} \, \bS(\BW_n)
+ \BO(k_n^5) .
\end{align}
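To make the last equality explicit: at leading order \eqref{pepsi} gives $\varepsilon_n \approx \frac{k_n}{r_0^2 {\mathfrak c}_s^2 \| \BW_n \|_{L^2}^2}$ (the corrections being absorbed in the $\BO$ terms), and with this value
$$ \varepsilon_n \, r_0^2 {\mathfrak c}_s^2 \int_{\mathbb R^2} \BW_n^2 \ dz = k_n
\qquad \mbox{and} \qquad
\varepsilon_n^3 \, r_0^2 {\mathfrak c}_s^2 = \frac{k_n^3}{r_0^4 {\mathfrak c}_s^4 \| \BW_n \|_{L^2}^6} , $$
which accounts for the coefficients of $-k_n$ and of $\bS(\BW_n)$ in the last line of \eqref{99}.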
Inserting \eqref{99} into \eqref{lapubelle} we discover
$$ \frac{k_n^3}{r_0^4 {\mathfrak c}_s^4 \| \BW_n \|_{L^2}^6} \, \bS(\BW_n)
+ \BO(k_n^5) \leq \frac{k_n^3}{r_0^4 {\mathfrak c}_s^6 \| \BW_n \|_{L^2}^4}
- \frac{4 k_n^3}{27 r_0^4 {\mathfrak c}_s^{10} \bS_{\rm min}^2} + \BO(k_n^5) , $$
that is
$$ \bS(\BW_n) \leq \frac{1}{{\mathfrak c}_s^2} \| \BW_n \|_{L^2}^2
- \frac{4}{27 {\mathfrak c}_s^{6} \bS_{\rm min}^2} \| \BW_n \|_{L^2}^6
+ \BO(k_n^2) , $$
or equivalently
\begin{equation}
\label{topbelle}
\mathscr{E}(\BW_n) = \bS(\BW_n) - \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^2} \BW_n^2 \ dz
\leq - \frac{1}{2 \bS_{\rm min}^2}
\Big( \frac{2}{3} \Big)^3
\cdot \Big( \frac{1}{{\mathfrak c}_s^{2}} \| \BW_n \|_{L^2}^2 \Big)^3 + \BO(k_n^2) .
\end{equation}
As in the proof of Proposition \ref{convergence}, it follows from Lemma \ref{minoinf} and Proposition \ref{Born} that there are
positive constants $m_1, \, m_2$ such that
$$ m_1 \leq \| \BW_n \|_{L^2}^2 \leq m_2 \qquad \mbox{ for all sufficiently large } n. $$
Denote $\lambda_n = \frac{\| \BW_n \|_{L^2}^2}{{\mathfrak c}_s^2}$.
Passing to a subsequence if necessary, we may assume that $\lambda_n \to \lambda$, where $\lambda \in (0, +\infty)$.
Let
$$ {\BW}_n^{\#}(z) = \frac{\mu^2}{\lambda_n^2}
\BW_n \Big( \frac{\mu}{\lambda_n} z_1, \frac{\mu^2}{\lambda_n^2} z_2 \Big) , $$
where $\mu$ is as in Theorem \ref{gs2d}. Then ${\BW}_n^{\#}$ satisfies
$$ \int_{\mathbb R^2} \frac{1}{{\mathfrak c}_s^2} \, ({\BW}_n^{\#})^2 \ dz =
\frac{\mu}{\lambda_n} \int_{\mathbb R^2} \frac{1}{{\mathfrak c}_s^2} \, {\BW}_n^2 \ dz
= \mu \qquad {\rm and} \qquad
\mathscr{E}({\BW}_n^{\#}) = \frac{\mu^3}{\lambda_n^3} \mathscr{E}(\BW_n) . $$
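For the reader's convenience we verify the first identity, which only uses the definition of ${\BW}_n^{\#}$ and of $\lambda_n$: the change of variables $y_1 = \frac{\mu}{\lambda_n} z_1$, $y_2 = \frac{\mu^2}{\lambda_n^2} z_2$, for which $dz = \frac{\lambda_n^3}{\mu^3} \, dy$, yields
$$ \int_{\mathbb R^2} ({\BW}_n^{\#})^2 \ dz
= \frac{\mu^4}{\lambda_n^4} \cdot \frac{\lambda_n^3}{\mu^3} \int_{\mathbb R^2} \BW_n^2 \ dy
= \frac{\mu}{\lambda_n} \int_{\mathbb R^2} \BW_n^2 \ dy , $$
and dividing by ${\mathfrak c}_s^2$ gives $\frac{\mu}{\lambda_n} \cdot \lambda_n = \mu$ by the definition of $\lambda_n$.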
Plugging this into \eqref{topbelle} and recalling that $\mu = \frac32 \bS_{\rm min}$, we infer that
$$ \mathscr{E}({\BW}_n^{\#}) =
\frac{\mu^3}{\lambda_n^3} \mathscr{E}(\BW_n)
\leq - \frac{1}{2 \bS_{\rm min}^2}
\Big( \frac{2\mu}{3} \Big)^3 + \BO(k_n^2)
= - \frac{1}{2} \bS_{\rm min} + \BO(k_n^2) . $$
Therefore $({\BW}_n^{\#})_{n \geq n_0}$ is a minimizing sequence for \eqref{minimiz}.
By Theorem \ref{gs2d} we infer that there exist a subsequence of
$({\BW}_n^{\#})_{n \geq n_0}$, still denoted
$({\BW}_n^{\#})_{n \geq n_0}$, a sequence $(z^n)_{n \geq n_0} = (z_1^n, z_2^n)_{n \geq n_0} \subset \mathbb R^2$ and
a ground state $\BW$ (with speed $1/(2{\mathfrak c}_s^2)$) of (KP-I) such that
${\BW}_n^{\#}(\cdot - z^n) \longrightarrow \BW$
strongly in $\mathscr{Y}(\mathbb R^2)$
as $n \to \infty$.
Let $x^n = \left( \frac{\mu}{\varepsilon_n \lambda_n} z_1^n, \frac{\mu^2}{\varepsilon_n^2 \lambda_n^2} z_2^n \right)$
and $\tilde{U}_n = U_n(\cdot - x^n)$,
$\tilde{A}_n(z) = A_n \left( z_1 - \frac{\mu}{\lambda_n} z_1^n, z_2 - \frac{\mu^2}{\lambda_n^2} z_2^n \right)$,
$\tilde{\varphi}_n(z) = \varphi_n \left( z_1 - \frac{\mu}{\lambda_n} z_1^n, z_2 - \frac{\mu^2}{\lambda_n^2} z_2^n \right)$,
$\tilde{\BW}_n(z) = \BW_n \left( z_1 - \frac{\mu}{\lambda_n} z_1^n, z_2 - \frac{\mu^2}{\lambda_n^2} z_2^n \right)$.
We denote $\tilde{\BW}(z) = \frac{\lambda^2}{\mu^2} \BW \Big( \frac{\lambda}{\mu} z_1, \frac{\lambda^2}{\mu^2} z_2 \Big)$.
It is obvious that $\tilde{U}_n(x) = r_0 \left( 1 + \varepsilon_n^2 \tilde{A}_n(z) \right) e^{i \varepsilon_n \tilde{\varphi}_n(z)}$
is a solution to (TW$_{c_n}$) with the same properties as $U_n$, and the functions
$\tilde{A}_n$, $\tilde{\varphi}_n$, $\tilde{\BW}_n$ satisfy the same estimates as $A_n$, $\varphi_n$ and $\BW_n$, respectively.
Moreover, we have $\tilde{\BW}_n = \frac{1}{{\mathfrak c}_s} \partial_{z_1} \tilde{\varphi}_n$
and $\tilde{\BW}_n \longrightarrow \tilde{\BW}$ strongly in $\mathscr{Y}(\mathbb R^2)$ as $n \to \infty$.
It is clear that $\tilde{A}_n$, $\tilde{\varphi}_n$ and $\varepsilon_n$ satisfy \eqref{desing}.
For any fixed $\psi \in C_c^{\infty}(\mathbb R^2)$
we multiply \eqref{desing} by $\psi$, integrate by parts, then pass to the limit as $n \to \infty$.
Proceeding as in the proof of Proposition \ref{convergence} we find that $\tilde{\BW}$ satisfies equation (SW) in ${\mathcal D}'(\mathbb R^2)$.
We know that $\BW$ also solves (SW) and comparing the equations for $\BW$ and $\tilde{\BW}$ we infer that
$\left( \frac{\lambda^3}{\mu^3} - \frac{\lambda^5}{\mu^5} \right) \partial_{z_1} \BW = 0$ in $\mathbb R^2$.
Since $\partial_{z_1} \BW \neq 0$, $\lambda > 0$ and $\mu > 0$, we necessarily have $\lambda = \mu$, that is $\tilde{\BW} = \BW$.
In particular, we have $\bS(\BW_n) = \bS(\tilde{\BW}_n) \longrightarrow \bS(\BW) = \bS_{\rm min}$ as $n \to \infty$.
Since $\displaystyle \int_{\mathbb R^2} |\nabla U_n|^2 \, dx = k_n$, using \eqref{99} and \eqref{kn} we get
$$
E(U_n) + c_n Q(U_n) = \frac{k_n^3}{r_0^4 {\mathfrak c}_s^4 \| \BW_n \|_{L^2}^6} \, \bS(\BW_n) + \BO(k_n^5)
\sim \varepsilon_n^3 r_0^2 {\mathfrak c}_s^2 \bS_{\rm min}
\qquad \mbox{ as } n \to \infty .
$$
Hence \eqref{Ec} holds.
As in the proof of Proposition \ref{convergence} we have
$$
\begin{array}{l}
\displaystyle Q(U_n) = - \int_{\mathbb R^2} (\rho_n^2 - r_0^2) \frac{\partial \phi_n}{\partial x_1} \, dx
= - r_0^2 \varepsilon_n \int_{\mathbb R^2} \big( 2 A_n(z) + \varepsilon_n^2 A_n^2(z) \big) \frac{\partial \varphi_n}{\partial z_1}(z) \, dz
\\ \\
\displaystyle \sim - 2 r_0^2 {\mathfrak c}_s \varepsilon_n \int_{\mathbb R^2} \BW^2(z) \, dz = - 3 r_0^2 {\mathfrak c}_s^3 \bS(\BW) \varepsilon_n .
\end{array}
$$
The above computation and \eqref{Ec} imply \eqref{energy}.
Finally, the convergence in \eqref{conv3} as well as the similar property in $W^{1,p}(\mathbb R^2)$ are proven exactly
as in the three-dimensional case. $\Box$
\section{The higher dimensional case}

\subsection{Proof of Proposition \ref{dim6}}
We argue by contradiction. Suppose that the assumptions of Proposition \ref{dim6} hold and there is a sequence
$(U_n)_{n \geq 1} \subset \BE$ of nonconstant solutions to (TW$_{c_n}$)
such that $E_{c_n}(U_n) \to 0$ as $n \to +\infty$.
By Proposition \ref{lifting} $(ii)$ we have $|U_n| \to r_0 > 0$ uniformly in
$\mathbb R^N$. Hence for $n$ sufficiently large we have the lifting
$U_n(x) = \rho_n(x) e^{i\phi_n(x)}$.
We write
$$ \BB_n = \frac{|U_n|}{r_0} - 1 ,
\qquad \mbox{ so that } \qquad
\rho_n = r_0 (1 + \BB_n) \qquad \mbox{ and } \qquad \BB_n \to 0
\quad \mbox{ as } n \to \infty .
$$
Recall that $U_n$ satisfies the Pohozaev identities \eqref{Pohozaev}. The identity $P_{c_n}(U_n) = 0$ can be written as
$$
\int_{\mathbb R^N} \Big| \frac{\partial U_n}{\partial x_1} \Big|^2 + \frac{N-3}{N-1} |\nabla_{x_{\perp}} U_n|^2 \, dx
+ c_n Q(U_n) + \int_{\mathbb R^N} V(|U_n|^2) \, dx = 0 .
$$
Using the formula \eqref{momentlift} for $Q(U_n)$ and the Taylor expansion \eqref{V} for $V(r_0^2 (1 + \BB_n)^2)$ we get
\begin{align*}
r_0^2 \int_{\mathbb R^N} & \Big| \frac{\partial \BB_n}{\partial x_1} \Big|^2 + (1 + \BB_n)^2 \Big| \frac{\partial \phi_n}{\partial x_1} \Big|^2
+ \frac{N-3}{N-1} |\nabla_{x_{\perp}} \BB_n|^2
+ \frac{N-3}{N-1} (1 + \BB_n)^2 |\nabla_{x_{\perp}} \phi_n|^2
\\
& - c_n (2 \BB_n + \BB_n^2) \frac{\partial \phi_n}{\partial x_1}
+ {\mathfrak c}_s^2 \left( \BB_n^2 + \Big( \frac{\Gamma}{3} - 1 \Big) \BB_n^3 + V_4(\BB_n) \right) \, dx = 0 ,
\end{align*}
where $V_4(\alpha) = \BO(\alpha^4)$ as $\alpha \to 0$.
After rearranging terms, the above equality yields
\begin{align*}
& \int_{\mathbb R^N}
( \partial_{x_1} \phi_n - c_n \BB_n )^2
+ (\partial_{x_1} \BB_n)^2
+ \frac{N-3}{N-1} |\nabla_{x_\perp} \phi_n|^2 (1 + \BB_n)^2
+ \frac{N-3}{N-1} |\nabla_{x_\perp} \BB_n|^2 + \varepsilon_n^2 \BB_n^2 \ dx
\nonumber \\ & = - \int_{\mathbb R^N}
(\partial_{x_1} \phi_n)^2 (2 \BB_n + \BB_n^2)
+ {\mathfrak c}_s^2 \Big( \frac{\Gamma}{3} - 1 \Big) \BB_n^3 + {\mathfrak c}_s^2 V_4(\BB_n)
- c_n \BB_n^2 \partial_{x_1} \phi_n \ dx \nonumber \\ &
= - \Big[ \frac{\Gamma}{3} {\mathfrak c}_s^2 - \varepsilon_n^2 \Big] \int_{\mathbb R^N} \BB_n^3 \ dx
- {\mathfrak c}_s^2 \int_{\mathbb R^N} V_4(\BB_n) \ dx
- \int_{\mathbb R^N} (\partial_{x_1} \phi_n)^2 \BB_n^2 \ dx \nonumber
\\ & \quad \quad
+ \int_{\mathbb R^N} \BB_n \Big( ( \partial_{x_1} \phi_n - c_n \BB_n )^2
- 3 c_n \BB_n (\partial_{x_1} \phi_n - c_n \BB_n) \Big) \ dx
\end{align*}
and this can be written as
\begin{eqnarray}
\label{Dev3}
\begin{array}{l}
\displaystyle \int_{\mathbb R^N}
( \partial_{x_1} \phi_n - c_n \BB_n )^2
+ (\partial_{x_1} \BB_n)^2
+ \frac{N-3}{N-1} |\nabla_{x_\perp} \phi_n|^2 (1 + \BB_n)^2
+ \frac{N-3}{N-1} |\nabla_{x_\perp} \BB_n|^2 + \varepsilon_n^2 (1 - \BB_n) \BB_n^2 \ dx
\\ \\
= \displaystyle - \frac{\Gamma}{3} {\mathfrak c}_s^2 \int_{\mathbb R^N} \BB_n^3 \ dx
- {\mathfrak c}_s^2 \int_{\mathbb R^N} V_4(\BB_n) \ dx
- \int_{\mathbb R^N} (\partial_{x_1} \phi_n)^2 \BB_n^2 \ dx
\\ \\ \quad \quad \displaystyle
+ \int_{\mathbb R^N} \BB_n \Big( ( \partial_{x_1} \phi_n - c_n \BB_n )^2
- 3 c_n \BB_n (\partial_{x_1} \phi_n - c_n \BB_n) \Big) \ dx .
\end{array}
\end{eqnarray}
For $n$ sufficiently large we have $\frac 12 \BB_n^2 \leq (1 - \BB_n) \BB_n^2 \leq \frac 32 \BB_n^2$,
and then all the terms in the left-hand side of \eqref{Dev3} are nonnegative.
We will find an upper bound for the right-hand side of \eqref{Dev3}. First we notice
that the third integral there is nonnegative. Since $\BB_n \to 0$ in $L^{\infty}$ and
$V_4(\alpha) = \BO(\alpha^4)$ as $\alpha \to 0$, we have
\begin{eqnarray}
\label{good1}
\Big| {\mathfrak c}_s^2 \int_{\mathbb R^N} V_4(\BB_n) \ dx \Big|
\leq C \| \BB_n \|_{L^4}^4
\leq C \| \BB_n \|_{L^{\infty}} \| \BB_n \|_{L^3}^3 .
\end{eqnarray}
Using the fact that $\| \BB_n \|_{L^{\infty}} \leq 1/4$ for
$n$ large enough and the inequality $2ab \leq a^2 + b^2$, we get
\begin{eqnarray}
\label{good2}
\int_{\mathbb R^N} \BB_n \Big( ( \partial_{x_1} \phi_n - c_n \BB_n )^2
- 3 c_n \BB_n (\partial_{x_1} \phi_n - c_n \BB_n) \Big) \ dx \leq
\frac12 \int_{\mathbb R^N} ( \partial_{x_1} \phi_n - c_n \BB_n )^2 \ dx
+ 9 {\mathfrak c}_s^2 \int_{\mathbb R^N} \BB_n^4 \ dx .
\end{eqnarray}
It is easy to see that $\BB_n \in H^1(\mathbb R^N)$ (see the Introduction of \cite{CM1}).
We recall the critical Sobolev embedding: for any $h \in H^1(\mathbb R^N)$ (with $N \geq 3$) there holds
\begin{eqnarray}
\label{Sobol}
\| h \|_{L^{\frac{2N}{N-2}}} \leq C
\| \partial_{x_1} h \|_{L^2}^{\frac{1}{N}}
\| \nabla_{x_\perp} h \|_{L^2}^{\frac{N-1}{N}} .
\end{eqnarray}
Assume first that $N \geq 6$. Then $2^* = \frac{2N}{N-2} \leq 3$.
Using the Sobolev embedding \eqref{Sobol} and the fact that $\| \BB_n \|_{L^{\infty}}$ is bounded we get
\begin{eqnarray}
\label{good3}
\| \BB_n \|_{L^3}^3 \leq \| \BB_n \|_{L^{\infty}}^{3 - 2^*} \| \BB_n \|_{L^{2^*}}^{2^*}
\leq C \| \partial_{x_1} \BB_n \|_{L^2}^{\frac{2^*}{N}} \| \nabla_{x_{\perp}} \BB_n \|_{L^2}^{\frac{2^*(N-1)}{N}} .
\end{eqnarray}
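Since $2^* = \frac{2N}{N-2}$, the exponents in \eqref{good3} can be rewritten as
$$ \frac{2^*}{N} = \frac{2}{N-2}
\qquad \mbox{and} \qquad
\frac{2^*(N-1)}{N} = \frac{2N-2}{N-2} , $$
which is the form in which they appear in \eqref{sobolomenthos} below. Note also that the interpolation in the first inequality of \eqref{good3} requires $2^* \leq 3$, that is $2N \leq 3(N-2)$, which holds precisely because $N \geq 6$.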
Using the inequalities $\| \BB_n \|_{L^4}^4 \leq
\| \BB_n \|_{L^{\infty}} \| \BB_n \|_{L^3}^3$ and
$1 + \BB_n \geq 1/2$ for $n$ large, we deduce from \eqref{Dev3} that
\begin{eqnarray}
\label{sobolomenthe}
\int_{\mathbb R^N}
( \partial_{x_1} \phi_n - c_n \BB_n )^2
+ (\partial_{x_1} \BB_n)^2
+ |\nabla_{x_\perp} \phi_n|^2
+ |\nabla_{x_\perp} \BB_n|^2 + \varepsilon_n^2 \BB_n^2 \ dx
\leq C \| \BB_n \|_{L^3}^3 .
\end{eqnarray}
From \eqref{sobolomenthe} and \eqref{good3} we obtain
\begin{equation}
\label{sobolomenthos}
\| \nabla_{x_\perp} \phi_n \|_{L^2}^2 +
\| \partial_{x_1} \BB_n \|_{L^2}^2
+ \| \nabla_{x_\perp} \BB_n \|_{L^2}^2
\leq C \| \BB_n \|_{L^3}^3 \leq C
\| \partial_{x_1} \BB_n \|_{L^2}^{\frac{2}{N-2}}
\| \nabla_{x_\perp} \BB_n \|_{L^2}^{\frac{2N-2}{N-2}} .
\end{equation}
Assume now that ($N = 4$ or $N = 5$) and $\Gamma = 0$. From \eqref{Dev3}, \eqref{good1} and
\eqref{good2} we get
\begin{eqnarray}
\label{sobolomenthe1}
\int_{\mathbb R^N}
( \partial_{x_1} \phi_n - c_n \BB_n )^2
+ (\partial_{x_1} \BB_n)^2
+ |\nabla_{x_\perp} \phi_n|^2
+ |\nabla_{x_\perp} \BB_n|^2 + \varepsilon_n^2 \BB_n^2 \ dx
\leq C \| \BB_n \|_{L^4}^4 .
\end{eqnarray}
We have $2^* = 4$ if $N = 4$ and $2^* = \frac{10}{3} < 4$ if $N = 5$.
By the Sobolev embedding we have
\begin{eqnarray}
\label{good4}
\| \BB_n \|_{L^4}^4 \leq \| \BB_n \|_{L^{\infty}}^{4 - 2^*} \| \BB_n \|_{L^{2^*}}^{2^*}
\leq C \| \BB_n \|_{L^{2^*}}^{2^*}
\leq C \| \partial_{x_1} \BB_n \|_{L^2}^{\frac{2^*}{N}} \| \nabla_{x_{\perp}} \BB_n \|_{L^2}^{\frac{2^*(N-1)}{N}} .
\end{eqnarray}
The two inequalities above give
\begin{equation}
\label{sobolomenthos1}
\| \nabla_{x_\perp} \phi_n \|_{L^2}^2 +
\| \partial_{x_1} \BB_n \|_{L^2}^2
+ \| \nabla_{x_\perp} \BB_n \|_{L^2}^2
\leq C \| \BB_n \|_{L^4}^4 \leq C
\| \partial_{x_1} \BB_n \|_{L^2}^{\frac{2}{N-2}}
\| \nabla_{x_\perp} \BB_n \|_{L^2}^{\frac{2N-2}{N-2}} .
\end{equation}
From either \eqref{sobolomenthos} or \eqref{sobolomenthos1} we obtain
$$
\| \partial_{x_1} \BB_n \|_{L^2}^2 \leq C
\| \partial_{x_1} \BB_n \|_{L^2}^{\frac{2}{N-2}}
\| \nabla_{x_\perp} \BB_n \|_{L^2}^{\frac{2N-2}{N-2}} ,
$$
which gives $\| \partial_{x_1} \BB_n \|_{L^2}^{\frac{2N-6}{N-2}} \leq C
\| \nabla_{x_\perp} \BB_n \|_{L^2}^{\frac{2N-2}{N-2}}$, or equivalently
\begin{eqnarray}
\label{good5}
\| \partial_{x_1} \BB_n \|_{L^2} \leq C \| \nabla_{x_\perp} \BB_n \|_{L^2}^{\frac{N-1}{N-3}} .
\end{eqnarray}
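The exponents above come from the elementary arithmetic
$$ 2 - \frac{2}{N-2} = \frac{2N-6}{N-2}
\qquad \mbox{and} \qquad
\frac{2N-2}{N-2} \cdot \frac{N-2}{2N-6} = \frac{2N-2}{2N-6} = \frac{N-1}{N-3} , $$
the second computation corresponding to raising both sides to the power $\frac{N-2}{2N-6}$, which is legitimate since $2N - 6 > 0$ for $N \geq 4$.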
Now we plug \eqref{good5} into \eqref{sobolomenthe} or \eqref{sobolomenthe1} to discover
$$
\| \nabla_{x_{\perp}} \BB_n \|_{L^2}^2
\leq C
\| \partial_{x_1} \BB_n \|_{L^2}^{\frac{2}{N-2}}
\| \nabla_{x_{\perp}} \BB_n \|_{L^2}^{\frac{2N-2}{N-2}}
\leq C \| \nabla_{x_{\perp}} \BB_n \|_{L^2}^{\frac{2(N-1)}{N-3}} .
$$
Since $\frac{2(N-1)}{N-3} > 2$, we infer that there is a constant $m > 0$ such that
$\| \nabla_{x_{\perp}} \BB_n \|_{L^2} \geq m$ for all sufficiently large $n$.
On the other hand $U_n$ satisfies the Pohozaev identity $P_{c_n}(U_n) = 0$, hence for large $n$ we have
$$
E_{c_n}(U_n) = \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_{\perp}} U_n|^2 \, dx
\geq \frac{2}{N-1} r_0^2 \int_{\mathbb R^N} |\nabla_{x_{\perp}} \BB_n|^2 \, dx
\geq \frac{1}{N-1} r_0^2 m^2 .
$$
This contradicts the assumption that $E_{c_n}(U_n) \to 0$ as $n \to \infty$.
The proof of Proposition \ref{dim6} is complete. $\Box$

\begin{rem} \rm We do not know whether $T_{c}$
tends to zero or not as $c \to {\mathfrak c}_s$ if $N = 4$ or $N = 5$ and $\Gamma \neq 0$.
\end{rem}
\subsection{Proof of Proposition \ref{vanishing}}
Let $N \geq 4$ and let
$(U_n, c_n)_{n \geq 1}$ be a sequence of nonconstant, finite energy solutions of
(TW$_{c_n}$) such that $E_{c_n}(U_n) \to 0$. By Proposition
\ref{lifting} $(ii)$ we have $|U_n| \to r_0 > 0$ uniformly in $\mathbb R^N$,
hence for $n$ sufficiently large we may write
$$ U_n(x) = \rho_n(x) e^{i\phi_n(x)}
= r_0 \Big( 1 + \alpha_n A_n(z) \Big)
\exp \Big( i \beta_n \varphi_n(z) \Big) \qquad \mbox{ where }
z_1 = \lambda_n x_1, \quad z_\perp = \sigma_n x_\perp , $$
and $\alpha_n = \frac{1}{r_0} \| \rho_n - r_0 \|_{L^{\infty}} \to 0$.
Using the Pohozaev identity $P_{c_n}(U_n) = 0$ and \eqref{blancheneige} we have
$$
\frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_{\perp}} U_n(x)|^2 \, dx = E(U_n) + c_n Q(U_n)
= \frac{2}{N} \int_{\mathbb R^N} |\nabla \rho_n|^2 \ dx .
$$
Since $U_n \in \BE$ and $U_n$
is not constant, we have $\displaystyle \int_{\mathbb R^N} |\nabla_{x_{\perp}} U_n(x)|^2 \, dx > 0$ and the above identity implies
that $\rho_n$ is not constant. The equality
$E(U_n) + c_n Q(U_n) = \displaystyle \frac{2}{N} \int_{\mathbb R^N} |\nabla \rho_n|^2 \ dx$
can be written as
$$
\left( 1 - \frac 2N \right) \int_{\mathbb R^N} |\nabla \rho_n|^2 \, dx
+ \int_{\mathbb R^N} \rho_n^2 |\nabla \phi_n|^2 \, dx
+ c_n Q(U_n) + \int_{\mathbb R^N} V(\rho_n^2) \, dx = 0 .
$$
Since $\rho_n \to r_0$ uniformly in $\mathbb R^N$ as $n \to \infty$, for $n$ large we have $V(\rho_n^2) \geq 0$ and
from the last identity we infer that $\displaystyle 0 > c_n Q(U_n) = c_n \int_{\mathbb R^N} (r_0^2 - \rho_n^2) \frac{\partial \phi_n}{\partial x_1} \, dx$,
which implies $\| \partial_{x_1} \phi_n \|_{L^2} > 0$.
We must have $\| \nabla_{x_{\perp}} \phi_n \|_{L^2} > 0$ (otherwise $\phi_n$ would depend only on $x_1$,
contradicting the fact that $\displaystyle \int_{\mathbb R^N} |\nabla \phi_n|^2 \, dx$ is finite).
The choice of $\alpha_n$ implies $\| A_n \|_{L^{\infty}} = 1$. Since $A_n$, $\partial_{z_1} \varphi_n$ and
$\nabla_{z_\perp} \varphi_n$ are nonzero, by scaling it is easy to see that
\begin{equation}
\label{normal}
\| A_n \|_{L^2} = \| \partial_{z_1} \varphi_n \|_{L^2} =
\| \nabla_{z_\perp} \varphi_n \|_{L^2} = 1
\end{equation}
if and only if
$$ \lambda_n \sigma_n^{N-1} = \frac{\| \, |U_n| - r_0 \|_{L^{\infty}}^2}{\| \, |U_n| - r_0 \|_{L^2}^2} ,
\quad \quad
\lambda_n \beta_n = \| \partial_{x_1} \phi_n \|_{L^2}
\frac{\| \, |U_n| - r_0 \|_{L^{\infty}}}{\| \, |U_n| - r_0 \|_{L^2}} ,
\quad \quad
\beta_n \sigma_n = \| \nabla_{x_\perp} \phi_n \|_{L^2}
\frac{\| \, |U_n| - r_0 \|_{L^{\infty}}}{\| \, |U_n| - r_0 \|_{L^2}} . $$
Since $N \geq 3$, the above equalities allow us to compute
$\lambda_n$, $\beta_n$ and $\sigma_n$.
Hence the scaling parameters $(\alpha_n, \beta_n, \lambda_n, \sigma_n)$ are uniquely determined if \eqref{normal} holds and $\| A_n \|_{L^{\infty}} = 1$.
The Pohozaev identity $P_{c_n}(U_n) = 0$ gives
\begin{align}
\label{Dev2}
& \int_{\mathbb R^N}
\lambda_n^2 \beta_n^2 (\partial_{z_1} \varphi_n)^2 \Big( 1 + \alpha_n A_n \Big)^2
+ \alpha_n^2 \lambda_n^2 (\partial_{z_1} A_n)^2
\nonumber \\ &
+ \frac{N-3}{N-1} \beta_n^2 \sigma_n^2 |\nabla_{z_\perp} \varphi_n|^2
\Big( 1 + \alpha_n A_n \Big)^2
+ \frac{N-3}{N-1} \alpha_n^2 \sigma_n^2 |\nabla_{z_\perp} A_n|^2
+ \frac{1}{r_0^2} V \Big( r_0^2 (1 + \alpha_n A_n)^2 \Big) \ dz
\nonumber \\ & \hspace{1cm}
= 2 c_n \int_{\mathbb R^N} 2 \lambda_n \alpha_n \beta_n A_n \partial_{z_1} \varphi_n
+ \lambda_n \alpha_n^2 \beta_n A_n^2 \partial_{z_1} \varphi_n \ dz .
\end{align}
By \eqref{normal}, the right-hand side of \eqref{Dev2} is
$\BO(\lambda_n \alpha_n \beta_n)$. Since $\alpha_n \to 0$ and
$\| A_n \|_{L^{\infty}} = 1$, for $n$ large enough we have $1 + \alpha_n A_n \geq 1/2$,
and by \eqref{V} we get $V(r_0^2 (1 + \alpha_n A_n)^2) \geq \frac 12 r_0^2 {\mathfrak c}_s^2 \alpha_n^2 A_n^2$.
If $N \geq 3$,
all the terms in the left-hand side of \eqref{Dev2} are non-negative
and we infer that
$$
\int_{\mathbb R^N} \lambda_n^2 \beta_n^2 (\partial_{z_1} \varphi_n)^2
+ \alpha_n^2 A_n^2 \ dz = \BO(\lambda_n \alpha_n \beta_n) .
$$
From the normalization \eqref{normal} it follows that
$$ \lambda_n^2 \beta_n^2 = \BO(\lambda_n \alpha_n \beta_n)
\qquad {\rm and} \qquad \alpha_n^2 = \BO(\lambda_n \alpha_n \beta_n) , $$
which yields
\begin{equation}
\label{tutu1}
C_1 \leq \frac{\lambda_n \beta_n}{\alpha_n} \leq C_2 \qquad \mbox{ for some } C_1, \, C_2 > 0 .
\end{equation}
Let $\theta_n = \frac{\lambda_n \beta_n}{\alpha_n}$.
We use the Taylor expansion \eqref{V} for the potential $V$, multiply
\eqref{Dev2} by $\frac{1}{\alpha_n^2}$ and write the resulting equality in the form
\begin{align*}
& \int_{\mathbb R^N}
\Big( \theta_n \partial_{z_1} \varphi_n - c_n A_n \Big)^2
+ \lambda_n^2 (\partial_{z_1} A_n)^2
+ \frac{N-3}{N-1} \frac{\theta_n^2 \sigma_n^2}{\lambda_n^2} |\nabla_{z_\perp} \varphi_n|^2
\Big( 1 + \alpha_n A_n \Big)^2
+ \frac{N-3}{N-1} \sigma_n^2 |\nabla_{z_\perp} A_n|^2
\nonumber \\ & \hspace{2cm}
+ ({\mathfrak c}_s^2 - c_n^2) A_n^2 \ dz
\nonumber \\ & = - \int_{\mathbb R^N}
\theta_n^2 \alpha_n (\partial_{z_1} \varphi_n)^2 \Big( 2 A_n + \alpha_n A_n^2 \Big)
+ {\mathfrak c}_s^2 \alpha_n \Big( \frac{\Gamma}{3} - 1 \Big) A_n^3
+ {\mathfrak c}_s^2 \frac{V_4(\alpha_n A_n)}{\alpha_n^2}
- 2 c_n \theta_n \alpha_n A_n^2 \partial_{z_1} \varphi_n \ dz .
\end{align*}
By \eqref{normal} and \eqref{tutu1}, the right-hand side of the above equality is $\BO(\alpha_n)$.
If $N \geq 3$, all the terms in the left-hand side are nonnegative.
In particular, we get
$\displaystyle ({\mathfrak c}_s^2 - c_n^2) \int_{\mathbb R^N} A_n^2 \, dz
= {\mathfrak c}_s^2 - c_n^2 = \BO(\alpha_n) ,$
so that $c_n \to {\mathfrak c}_s$. Assuming that $N \geq 4$, we also infer that
$$ \int_{\mathbb R^N} \lambda_n^2 (\partial_{z_1} A_n)^2
+ \frac{\sigma_n^2}{\lambda_n^2} |\nabla_{z_\perp} \varphi_n|^2 \ dz = \BO(\alpha_n) . $$
Together with \eqref{normal} and \eqref{tutu1}, this implies
\begin{equation}
\label{tutu2}
\frac{\sigma_n^2}{\lambda_n^2} = \BO(\alpha_n)
\qquad {\rm and} \qquad
\int_{\mathbb R^N} (\partial_{z_1} A_n)^2 \ dz = \BO \Big( \frac{\alpha_n}{\lambda_n^{2}} \Big) .
\end{equation}
The Pohozaev identity $P_{c_n}(U_n) = 0$ and \eqref{normal} imply that for each $n$ such that
$1 + \alpha_n A_n \geq \frac 12$ we have
\begin{align}
\label{final1}
E_{c_n}(U_n) = & \ \frac{2}{N-1}
\int_{\mathbb R^N} |\nabla_\perp U_n|^2 \ dx \nonumber \\
= & \ \frac{2 r_0^2}{(N-1) \lambda_n \sigma_n^{N-1}} \int_{\mathbb R^N}
\beta_n^2 \sigma_n^2 |\nabla_{z_\perp} \varphi_n|^2
\Big( 1 + \alpha_n A_n \Big)^2 + \alpha_n^2 \sigma_n^2 |\nabla_{z_\perp} A_n|^2 \ dz \nonumber \\
\geq & \ \frac{r_0^2 \alpha_n^2 \theta_n^2}{2 (N-1) \lambda_n^3 \sigma_n^{N-3}} \int_{\mathbb R^N}
|\nabla_{z_\perp} \varphi_n|^2 \ dz \geq \frac{\alpha_n^2}{C \lambda_n^3 \sigma_n^{N-3}} .
\end{align}
However, in view of \eqref{tutu2} we have
\begin{eqnarray}
\label{final2}
\frac{\alpha_n^2}{\lambda_n^3 \sigma_n^{N-3}}
= \frac{\alpha_n^2}{\lambda_n^N (\sigma_n / \lambda_n)^{N-3}}
\geq \Big( \frac{\alpha_n}{\lambda_n^2} \Big)^{N/2} \frac{\alpha_n^2}{C \alpha_n^{N/2} \alpha_n^{(N-3)/2}}
= \Big( \frac{\alpha_n}{\lambda_n^2} \Big)^{N/2} \frac{1}{C \alpha_n^{(2N-7)/2}} .
\end{eqnarray}
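The exponent count in \eqref{final2} is elementary: by \eqref{tutu2} we have $\big( \frac{\sigma_n}{\lambda_n} \big)^{N-3} \leq C \alpha_n^{\frac{N-3}{2}}$, while
$$ \frac{1}{\lambda_n^N} = \Big( \frac{\alpha_n}{\lambda_n^2} \Big)^{\frac{N}{2}} \frac{1}{\alpha_n^{\frac{N}{2}}}
\qquad \mbox{and} \qquad
2 - \frac{N}{2} - \frac{N-3}{2} = \frac{7 - 2N}{2} = - \frac{2N-7}{2} . $$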
Notice that $\alpha_n^{(2N-7)/2} \to 0$ as $\alpha_n \to 0$ because $N \geq 4$.
The fact that $E_{c_n}(U_n) \longrightarrow 0$, \eqref{final1} and \eqref{final2} imply that
$\frac{\alpha_n}{\lambda_n^2} \to 0$ as $n \to +\infty$.
Then using \eqref{tutu2} we find
$$ \int_{\mathbb R^N} (\partial_{z_1} A_n)^2 \ dz = \BO \Big( \frac{\alpha_n}{\lambda_n^{2}} \Big)
\to 0 $$
and the proof is complete. $\Box$
\bigskip
\noindent
{\bf Acknowledgement:} We gratefully acknowledge the support of the French ANR (Agence Nationale de la Recherche)
under Grant ANR JC {ArDyPitEq}.
\begin{thebibliography}{99}

\bibitem{AHMNPTB} {\sc M. Abid, C. Huepe, S. Metens, C. Nore, C. T. Pham,
L. S. Tuckerman and M. E. Brachet,}
Gross-Pitaevskii dynamics of Bose-Einstein condensates and superfluid turbulence.
{\it Fluid Dynamics Research {\bf 33}, 5-6 (2003), 509-544.}

\bibitem{BP} {\sc I. Barashenkov and E. Panova,}
Stability and evolution of the quiescent and travelling solitonic bubbles.
{\it Physica D: Nonlinear Phenomena {\bf 69}, 1-2 (1993), 114-134.}

\bibitem{B} {\sc N. Berloff,}
Evolution of rarefaction pulses into vortex rings.
{\it Phys. Rev. B {\bf 65}, 174518 (2002).}

\bibitem{B2} {\sc N. Berloff,}
Quantised vortices, travelling coherent structures and superfluid turbulence.
{\it In Stationary and time dependent Gross-Pitaevskii equations,
A. Farina and J.-C. Saut Eds., Contemp. Math. Vol. {\bf 473}, AMS,
Providence, RI, (2008), 26-54.}

\bibitem{BR} {\sc N. Berloff and P. H. Roberts,}
Motions in a Bose condensate: X. New results on stability of axisymmetric
solitary waves of the Gross-Pitaevskii equation.
{\it J. Phys. A: Math. Gen. {\bf 37} (2004), 11333-11351.}

\bibitem{BRsurvey} {\sc N. Berloff and P. H. Roberts,}
Nonlinear Schr\"odinger equation as a model of superfluid helium.
{\it In "Quantized Vortex Dynamics and Superfluid Turbulence", edited by
C. F. Barenghi, R. J. Donnelly and W. F. Vinen, Lecture Notes in Physics,
volume {\bf 571}, Springer-Verlag, (2001).}

\bibitem{BIN} {\sc O. V. Besov, V. P. Il'in and S. M. Nikolskii,}
Integral Representations of Functions and Imbedding Theorems. Vol. I.
{\it J. Wiley, (1978).}

\bibitem{BGS1} {\sc F. B\'ethuel, P. Gravejat and J.-C. Saut,}
On the KP-I transonic limit of two-dimensional Gross-Pitaevskii travelling waves.
{\it Dynamics of PDE {\bf 5}, 3 (2008), 241-280.}

\bibitem{BGS2} {\sc F. B\'ethuel, P. Gravejat and J.-C. Saut,}
Travelling waves for the Gross-Pitaevskii equation. II.
{\it Comm. Math. Phys. {\bf 285}, no. 2 (2009), 567-651.}

\bibitem{BGSsurvey} {\sc F. B\'ethuel, P. Gravejat and J.-C. Saut,}
Existence and properties of travelling waves for the Gross-Pitaevskii equation.
{\it Stationary and time dependent Gross-Pitaevskii equations, 55-103,
Contemp. Math., {\bf 473}, Amer. Math. Soc., Providence, RI, (2008).}

\bibitem{BGSS} {\sc F. B\'ethuel, P. Gravejat, J.-C. Saut and D. Smets,}
On the Korteweg-de Vries long-wave approximation of the Gross-Pitaevskii equation I.
{\it Internat. Math. Res. Notices, no. 14, (2009), 2700-2748.}

\bibitem{BGSS2} {\sc F. B\'ethuel, P. Gravejat, J.-C. Saut and D. Smets,}
On the Korteweg-de Vries long-wave approximation of the Gross-Pitaevskii equation II.
{\it Comm. Partial Differential Equations {\bf 35}, no. 1 (2010), 113-164.}

\bibitem{BOS} {\sc F. B\'ethuel, G. Orlandi and D. Smets,}
Vortex rings for the Gross-Pitaevskii equation.
{\it J. Eur. Math. Soc. (JEMS) {\bf 6}, no. 1 (2004), 17-94.}

\bibitem{brezis} {\sc H. Br\'ezis,} Analyse fonctionnelle. {\it Masson, Paris, 1983.}

\bibitem{brezis-lieb} {\sc H. Br\'ezis and E. H. Lieb,}
Minimum Action Solutions for Some Vector Field Equations.
{\it Comm. Math. Phys. {\bf 96} (1984), 97-113.}

\bibitem{C1d} {\sc D. Chiron,}
Travelling waves for the Nonlinear Schr\"odinger Equation with
general nonlinearity in dimension one. {\it Nonlinearity {\bf 25} (2012), 813-850.}

\bibitem{CM1} {\sc D. Chiron and M. Mari\c{s},}
Travelling waves for nonlinear
Schr\"odinger equations with nonzero conditions at infinity, II.
{\it Preprint, arXiv 1203.1912.}

\bibitem{CR2} {\sc D. Chiron and F. Rousset,}
The KdV/KP-I limit of the Nonlinear Schr\"odinger Equation.
{\it SIAM J. Math. Anal. {\bf 42}, no. 1 (2010), 64-96.}

\bibitem{Cos} {\sc C. Coste,}
Nonlinear Schr\"odinger equation and superfluid hydrodynamics.
{\it Eur. Phys. J. B {\bf 1} (1998), 245-253.}

\bibitem{dBSIHP} {\sc A. de Bouard and J.-C. Saut,}
Solitary waves of generalized Kadomtsev-Petviashvili equations.
{\it Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 14}, no. 2
(1997), 211-236.}

\bibitem{dBSSIAM} {\sc A. de Bouard and J.-C. Saut,}
Symmetries and decay of the generalized Kadomtsev-Petviashvili solitary waves.
{\it SIAM J. Math. Anal. {\bf 28}, no. 5 (1997), 1064-1085.}

\bibitem{dBS} {\sc A. de Bouard and J.-C. Saut,}
Remarks on the stability of generalized KP solitary waves.
{\it In Mathematical problems in the theory of water waves (Luminy, 1995),
vol. 200 of Contemp. Math., p. 75-84, Amer. Math. Soc., Providence,
RI, (1996).}

\bibitem{G} {\sc P. Gravejat,}
Asymptotics of the solitary waves for the generalized Kadomtsev-Petviashvili equations.
{\it Discrete Contin. Dyn. Syst. {\bf 21}, no. 3 (2008), 835-882.}

\bibitem{IS} {\sc S. Iordanskii and A. Smirnov,}
Three-dimensional solitons in He II.
{\it JETP Lett. {\bf 27}, (10) (1978), 535-538.}

\bibitem{JPR} {\sc C. Jones, S. Putterman and P. H. Roberts,}
Motions in a Bose condensate V. Stability of wave solutions of
nonlinear Schr\"odinger equations in two and three dimensions.
{\it J. Phys. A: Math. Gen. {\bf 19} (1986), 2991-3011.}

\bibitem{JR} {\sc C. Jones and P. H. Roberts,}
Motion in a Bose condensate IV. Axisymmetric solitary waves.
{\it J. Phys. A: Math. Gen. {\bf 15} (1982), 2599-2619.}

\bibitem{K} {\sc Y. S. Kivshar,}
Dark-soliton dynamics and shock-waves induced by the stimulated
Raman effect in optical fibers.
{\it Phys. Rev. A {\bf 42} (1990), 1757-1761.}

\bibitem{KAL} {\sc Y. S. Kivshar, D. Anderson and M. Lisak,}
Modulational instabilities and dark solitons in a generalized nonlinear Schr\"odinger-equation.
{{\rm in}t Phys. Scr. {\
$\Box$f 47} (1993), 679-681.}
\
$\Box$egin{itemize}bitem{KL} {\sigmac Y. S. Kivshar and B. Luther-Davies},
Dark optical solitons: physics and applications.
{{\rm in}t Physics Reports {\
$\Box$f 298} (1998), 81-197.}
\
$\Box$egin{itemize}bitem{KivPeli} {\sigmac Y. S. Kivshar and D. Pelinovsky,}
Self-focusing and transverse instabilities of solitary waves.
{{\rm in}t Physics Reports {\
$\Box$f 331} (2000), 117-195.}
\
$\Box$egin{itemize}bitem{Lin} {\sigmac Z. Lin,}
Stability and instability of traveling solitonic bubbles.
{{\rm in}t Adv. Differential Equations {\
$\Box$f 7}, no. 8 (2002), 897-918.}
\
$\Box$egin{itemize}bitem{PLL} {\sigmac P.-L. Lions,}
The concentration-compactness principle in the calculus of variations.
The locally compact case, part I.
{{\rm in}t Ann. Inst. H. Poincar\'e, Anal. non lin\'eaire {\
$\Box$f 1} (1984), 109-145.}
\
$\Box$egin{itemize}bitem{Liu} {\sigmac Y. Liu,}
Strong instability of solitary-wave solutions to a Kadomtsev-Petviashvili
equation in three dimensions.
{{\rm in}t J. Differential Equations, {\
$\Box$f 180}, no. 1 (2002), 153-170.}
\
$\Box$egin{itemize}bitem{M2} {\sigmac M. Mari\c{s},}
Nonexistence of supersonic traveling waves for nonlinear Schr\"odinger
equations with nonzero conditions at infinity.
{{\rm in}t SIAM J. Math. Anal. {\
$\Box$f 40}, no. 3 (2008), 1076-1103.}
\
$\Box$egin{itemize}bitem{MarisARMA} {\sigmac M. Mari\c{s},}
On the symmetry of minimizers.
{{\rm in}t Arch. Rational Mech. Anal. {\
$\Box$f 192}, no 2 (2009), 311-330.}
\
$\Box$egin{itemize}bitem{Maris} {\sigmac M. Mari\c{s},}
Traveling waves for nonlinear Schr\"odinger equations with nonzero
conditions at infinity. {{\rm in}t Preprint, arXiv 0903.0354.}
\
$\Box$egin{itemize}bitem{RB} {\sigmac P. H. Roberts and N. Berloff,}
Nonlinear Schr\"odinger equation as a model of superfluid helium.
{{\rm in}t In "Quantized Vortex Dynamics and Superfluid Turbulence" edited by C.F. Barenghi,
R.J. Donnelly and W.F. Vinen, Lecture Notes in Physics, volume 571, Springer-Verlag, 2001.}
\
$\Box$egin{itemize}bitem{T} {\sigmac E. Tarquini,}
A lower bound on the energy of travelling waves of fixed speed for
the Gross-Pitaevskii equation.
{{\rm in}t Monatsh. Math. {\
$\Box$f 151}, no. 4 (2007), 333-339.}
\
$\Box$egin{itemize}bitem{ZK} {\sigmac V. Zakharov and A. Kuznetsov,}
Multi-scale expansion in the theory of systems integrable by
the inverse scattering transform.
{{\rm in}t Physica D, {\
$\Box$f 18} (1-3) (1986), 455-463.}
\end{thebibliography}

\end{document}
\begin{document}

\begin{center}
{\Large \bf Borodin--Okounkov and Szeg\H{o} for Toeplitz \\[0.5ex] operators on model spaces}

{\Large Albrecht B\"ottcher}
\end{center}

\begin{quote}
\footnotesize{
We consider the determinants of compressions of Toeplitz operators to finite-dimensional model spaces
and establish analogues of the Borodin--Okounkov formula and the strong Szeg\H{o} limit theorem in this setting.}
\let\thefootnote\relax\footnote{\hspace*{-7.5mm} MSC 2010: 47B35, 30J10}
\let\thefootnote\relax\footnote{\hspace*{-7.5mm} Keywords: Toeplitz determinant, model space, Blaschke product, truncated Toeplitz operator}
\end{quote}
\section{Introduction and main results}
Although compressions of Toeplitz operators to model spaces have been studied for a long time, see, for example,
\cite{Nik}, \cite{Treil}, it was
Sarason's paper \cite{Sar} which initiated the recent increasing activity in research into
such operators\footnote[1]{These operators are
now called ``truncated Toeplitz operators'', although that name is already occupied
by the classical finite Toeplitz matrices. Moreover, I see a difference between truncation and compression.
However, since Donald Sarason is one of my mathematical
top heroes, I will not vote against that name. I will nevertheless not follow the custom
and will instead refer to these operators simply as Toeplitz operators on model spaces.}, see,
for instance, the survey \cite{GaRoss} and the ample list of references therein.
The number one theorem in the theory of classical Toeplitz matrices is Szeg\H{o}'s strong limit theorem,
and curiously, I have not seen the model space version of this theorem among the many results which
have so far been carried over from the classical setting to the model space level. In fact the
strong Szeg\H{o} limit theorem is a straightforward consequence of another great theorem,
namely, the Borodin--Okounkov formula. My favorite proof of the Borodin--Okounkov formula
is the one in \cite{Botok}, and the purpose of this note is to show that this proof works
equally well for Toeplitz operators on model spaces.
Our context is that of the usual Hardy spaces of the unit disk $\mathbb{D}$ or, with functions interpreted via their nontangential limits,
of the unit circle $\mathbb{T}$. We let $P$ stand for the orthogonal projection of $L^2$ onto $H^2$.
The Toeplitz operator $T(a)$ induced by a function $a \in L^\infty$ is the operator on $H^2$
which acts by the rule $T(a)f=P(af)$.
Let $u \in H^\infty$ be an inner function. The space $K_u:=H^2 \omegainus uH^2$ is referred to as
the model space generated by $u$. We denote by $P_u$ and $Q_u=I-P_u$ the orthogonal
projections of $H^2$ onto $K_u$ and $uH^2$, respectively. It is well known
that $P_u=I-T(u)T(\overline{u})$, the bar denoting complex conjugation.
We are interested in the compression
of $T(a)$ to $K_u$, that is, in the operator $T_u(a)=P_u T(a)|K_u$.
We will actually consider the matrix case. Thus, $a$ is supposed to be a matrix function in the
$\mathbb{C}^{m \times m}$-valued $L^\infty$, and $T(a)$ and $T_u(a)$ act on the $\mathbb{C}^m$-valued $H^2$ and $K_u$,
respectively.
The inner function $u$ remains scalar-valued.
We make the following assumptions on $a$. It is required that $a$ is in the intersection of the Wiener algebra $W$
and the Krein algebra $K_{2,2}^{1/2,1/2}$, that is,
the Fourier coefficients $a_n$ satisfy
$\sum_{n=-\infty}^\infty \| a_n\|+\sum_{n=-\infty}^\infty |n|\, \| a_n\|^2 < \infty$,
where $\| \cdot\|$ is any matrix norm on $\mathbb{C}^{m \times m}$. We furthermore assume that $a$ has right and left
canonical Wiener--Hopf factorizations $a=w_-w_+=v_+v_-$ in $W \cap K_{2,2}^{1/2,1/2}$. This means that $w_+, v_+,
\overline{w_-}, \overline{v_-}$ and their inverses belong to $W \cap K_{2,2}^{1/2,1/2}\cap H^\infty$. In the scalar case ($m=1$),
the existence of such factorizations is guaranteed if $a$ has no zeros on $\mathbb{T}$ and
vanishing winding number about the origin. Our assumptions imply in particular that $T(a)$, $T(a^{-1})$, and $T(\widetilde{a})$
are invertible on $H^2$. Here and in what follows, $\widetilde{a}$ results from $a$ by reversal of the Fourier coefficients,
$\widetilde{a}(t):=a(1/t)$ for $t \in \mathbb{T}$.
The Hankel operator $H(a)$ generated by $a \in L^\infty$ is defined on the space $H^2$ by $H(a)f=P(a\cdot (I-P)Jf)$,
where $J$ is the flip operator, $(Jf)(t)=(1/t)f(1/t)$ for $t \in \mathbb{T}$.
Put $b=v_-w_+^{-1}$ and $c=w_-^{-1}v_+$. Then $b$ and $c$ are in the Krein algebra and hence the Hankel
operators $H(b)$ and $H(\widetilde{c})$ are Hilbert--Schmidt operators. This implies that $H(b)H(\widetilde{c})$ is
in the trace class. As $T(b)=T(v_-)T(w_+^{-1})$ and $T(c)=T(w_-^{-1})T(v_+)$ are invertible, so also
is $I-H(b)H(\widetilde{c})=T(b)T(c)$.
For $\alpha \in \mathbb{D}$, we define the inner functions $\mu_\alpha$ and $B_\alpha$ by
\[\mu_\alpha(z)=\frac{z-\alpha}{1-\overline{\alpha}z}, \quad B_\alpha(z)=\frac{-\overline{\alpha}}{|\alpha|}\frac{z-\alpha}{1-\overline{\alpha}z}
\quad (z \in \mathbb{D}),\]
with the convention to put $B_0(z)=z$. The space $K_u$ is known to be finite-dimensional if and only if
$u$ is a finite Blaschke product, that is, if and only if there are $\alpha_1, \ldots, \alpha_N$ in $\mathbb{D}$ such that
$u=B_{\alpha_1}\cdots B_{\alpha_N}$. We let $\sigma(u)$ denote the multiset of the numbers $\alpha_1, \ldots, \alpha_N$, each repeated
according to the number of times it appears in $u=B_{\alpha_1}\cdots B_{\alpha_N}$. Finally, as usual,
the geometric mean of a (matrix) function $\varphi$ on $\mathbb{T}$ is defined by
\[G(\varphi)=\exp(\log \det \varphi)_0:=\exp\left(\frac{1}{2\pi}\int_0^{2\pi}\log\det\varphi(e^{i\theta})\,d\theta\right).\]
Here is the model space version of the Borodin--Okounkov formula.
\begin{thm} \label{Theo 1.1}
If $u=B_{\alpha_1}\cdots B_{\alpha_N}$ is a finite Blaschke product, then
\begin{equation}
\det T_u(a)=\left(\prod_{\alpha \in \sigma(u)} G(a \circ \mu_{-\alpha})\right)\frac{\det (I-Q_u H(b)H(\widetilde{c})Q_u)}{\det (I-H(b)H(\widetilde{c}))}.
\label{1.1}
\end{equation}
\end{thm}
An alternative expression for the product of the numbers $G(a\circ\mu_{-\alpha})$ is
\begin{equation}
\prod_{\alpha \in \sigma(u)} G(a \circ \mu_{-\alpha})=\prod_{\alpha \in \sigma(u)} \det v_+(\alpha)\det v_-(1/\overline{\alpha}). \label{1.2}
\end{equation}
For $u(z)=z^N$, the products (\ref{1.2}) become $G(a)^N$ and (\ref{1.1}) turns into the classical
Borodin--Okounkov formula, which was originally established in \cite{Borodok}, reformulated, extended to the block case, and
equipped with two new proofs in \cite{BasWid}, and with still another proof in \cite{Botok}.
For positive functions $a$, the formula was even already in \cite{GerCase}, which, however,
was not known to the authors of \cite{BasWid}, \cite{Borodok}, \cite{Botok} at the time they wrote
their papers. Taking into account that $Q_u=T(u)T(\overline{u})$ for an arbitrary inner function, it is easy to see that
\[\det(I-Q_u H(b)H(\widetilde{c})Q_u)=\det(I-H(\overline{u}b)H(\widetilde{c}\widetilde{u}))\]
for every inner function $u$.
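To make the specialization fully explicit (a sketch; here $Q_N$, a notation not used elsewhere in this note, denotes the orthogonal projection of $H^2$ onto the closed span of $z^N, z^{N+1}, \ldots$): for $u(z)=z^N$ the space $K_u$ has the monomial basis $1, z, \ldots, z^{N-1}$, in which $T_u(a)$ becomes the $N \times N$ block Toeplitz matrix, and (\ref{1.1}) takes its classical form

```latex
% Classical Borodin--Okounkov formula as the u(z)=z^N case of (1.1):
\[
\det T_N(a) \;=\; G(a)^N\,
\frac{\det\big(I-Q_N H(b)H(\widetilde{c})Q_N\big)}{\det\big(I-H(b)H(\widetilde{c})\big)},
\qquad T_N(a)=(a_{j-k})_{j,k=0}^{N-1}.
\]
```

Indeed, every $\alpha \in \sigma(z^N)$ equals $0$ and $\mu_0(z)=z$, so all $N$ factors $G(a \circ \mu_{-\alpha})$ reduce to $G(a)$.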
Now suppose $\{\alpha_j\}_{j=1}^\infty$ is a sequence of points in $\mathbb{D}$. Put
\[u_N(z)=\prod_{j=1}^N B_{\alpha_j}(z).\]
The following is a model space version of the strong Szeg\H{o} limit theorem.
\begin{thm} \label{Theo 1.2}
If $\sum_{j=1}^\infty (1-|\alpha_j|) = \infty$, then $u_N(z) \to 0$ for $z \in \mathbb{D}$, $Q_{u_N} \to 0$ strongly and
\begin{equation}
\lim_{N \to \infty} \det T_{u_N}(a)\prod_{\alpha \in \sigma(u_N)} G(a \circ \mu_{-\alpha})^{-1}= \frac{1}{\det (I-H(b)H(\widetilde{c}))}. \label{1.3}
\end{equation}
If $\sum_{j=1}^\infty (1-|\alpha_j|) < \infty$, then $u_N(z)$ converges to the infinite Blaschke product
\[B(z)=\prod_{j=1}^\infty B_{\alpha_j}(z)\] for $z \in \mathbb{D}$, $Q_{u_N} \to Q_B$ strongly, and
\begin{equation}
\lim_{N \to \infty} \det T_{u_N}(a)\prod_{\alpha \in \sigma(u_N)} G(a \circ \mu_{-\alpha})^{-1}= \frac{\det(I-Q_B H(b)H(\widetilde{c})Q_B)}{\det (I-H(b)H(\widetilde{c}))}. \label{1.4}
\end{equation}
\end{thm}
Again, in the case where $u_N(z)=z^N$, this theorem implies that
\[\lim_{N \to \infty} \det T_{z^N}(a)\, G(a)^{-N} = \frac{1}{\det (I-H(b)H(\widetilde{c}))},\]
which is the classical Szeg\H{o}--Widom limit theorem, established by Szeg\H{o} \cite{Sz}
in the scalar case ($m=1$) and by Widom \cite{Wid} in the block case ($m \ge 1$).
Note that for $m=1$ we have
\[\frac{1}{\det (I-H(b)H(\widetilde{c}))}=\exp \sum_{k=1}^\infty k (\log a)_k (\log a)_{-k},\]
and that for $m \ge 1$ we may also write
\[\frac{1}{\det (I-H(b)H(\widetilde{c}))}=\det T(a) T(a^{-1}).\]
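For illustration, consider the standard scalar example $a(e^{i\theta})=e^{2\gamma\cos\theta}$ with fixed $\gamma \in \mathbb{C}$ (a symbol not discussed elsewhere in this note). Then $\log a = \gamma t + \gamma t^{-1}$, so $(\log a)_0=0$ and $G(a)=1$, and the constant evaluates in closed form:

```latex
% Only the Fourier coefficients (\log a)_{\pm 1} = \gamma are nonzero, whence
\[
\frac{1}{\det (I-H(b)H(\widetilde{c}))}
=\exp \sum_{k=1}^\infty k (\log a)_k (\log a)_{-k}
=\exp(\gamma^2),
\quad\text{so}\quad
\lim_{N \to \infty} \det T_{z^N}(a)=e^{\gamma^2}.
\]
```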
We refer to the books \cite{BoSi} and \cite{Simon} for more on this topic, including the history. Incidentally,
sequences of Toeplitz operators $T_{u_N}(a)$ with $u_{N+1}$ divisible by $u_N$ and with
$P_{u_N}$ converging strongly to $I$ appeared already in Treil's paper \cite{Treil} (and his results are also quoted
on p. 394 of \cite{BoSi}).
\section{Proofs}
We first prove Theorem \ref{Theo 1.1} and formula (\ref{1.2}).
Let $u$ be a finite Blaschke product. As shown in \cite{Botok} (or see \cite[p. 552]{BoSi} or \cite{BoWi}),
Jacobi's formula for the minors of the inverse matrix can be extended to identity minus trace class operators:
\[\det P_u (I-L)^{-1} P_u = \frac{\det (I-Q_u L Q_u)}{\det (I-L)}\]
whenever $L$ is of trace class and $I-L$ is invertible. This formula with $L=H(b)H(\widetilde{c})$ will give
Theorem~\ref{Theo 1.1} provided we can prove that
\begin{equation}
\det P_u(I-H(b)H(\widetilde{c}))^{-1}P_u=\det T_u(a) \prod_{\alpha \in \sigma(u)} G(a \circ \mu_{-\alpha})^{-1}. \label{2.1}
\end{equation}
It is readily seen that if $\varphi \in H^\infty$, then
\begin{equation}
P_u T(\varphi)=P_u T(\varphi) P_u, \quad T(\overline{\varphi}) P_u = P_u T(\overline{\varphi}) P_u.\label{2.1a}
\end{equation}
Consequently,
\begin{eqnarray*}
& & P_u (I-H(b)H(\widetilde{c}))^{-1}P_u=P_u T(c)^{-1}T(b)^{-1}P_u\\
& & = P_u T(v_+^{-1})T(w_-)T(w_+)T(v_-^{-1})P_u
=T_u(v_+^{-1}) T_u(a) T_u(v_-^{-1}).
\end{eqnarray*}
Taking determinants, we see that the left-hand side of (\ref{2.1}) equals
\[\det T_u(a) /( \det T_u(v_+) \det T_u(v_-)).\]
We are thus left with proving that
\begin{eqnarray}
& & \det T_u(v_+)=\prod_{\alpha \in \sigma(u)}\det v_+(\alpha), \quad \det T_u(v_-)=\prod_{\alpha \in \sigma(u)} \det v_-(1/\overline{\alpha}), \label{2.2}\\
& & \prod_{\alpha \in \sigma(u)}\det v_+(\alpha)\det v_-(1/\overline{\alpha})= \prod_{\alpha \in \sigma(u)} G(a \circ \mu_{-\alpha}). \label{2.3}
\end{eqnarray}
The determinant is the product of the eigenvalues. A complex number $\lambda$ is an eigenvalue of $T_u(v_+)$
if and only if $T_u(v_+)-\lambda I=T_u(v_+-\lambda I)$ is not invertible. We may think of $T_u(v_+-\lambda I)$ as an $m \times m$
block matrix whose blocks $T_u(v_+^{jk}-\lambda \delta_{jk})$ are generated by scalar-valued functions. By virtue of~(\ref{2.1a}),
the blocks commute pairwise, and hence $T_u(v_+-\lambda I)$ is not invertible if and only if the block determinant
$\det T_u(v_+-\lambda I)$ is not invertible. Again by~(\ref{2.1a}), $\det T_u(v_+-\lambda I)=T_u(\det(v_+-\lambda I))$.
But the operator $T_u(\det(v_+-\lambda I))$ is known to be not invertible if and only if $\det(v_+(\alpha)-\lambda I)=0$ for
some $\alpha \in \sigma(u)$; see~\cite[p. 66]{Nik} or~\cite[Theorem 15(ii)]{GaRoss}. Equivalently, $T_u(\det(v_+-\lambda I))$ is
not invertible if and only if $\lambda$ is an eigenvalue of $v_+(\alpha)$ for some $\alpha \in \sigma(u)$. Thus, the set of the eigenvalues
of $T_u(v_+)$ is the union of the sets of the eigenvalues of $v_+(\alpha)$ for $\alpha \in \sigma(u)$, multiplicities taken into account.
This proves the first formula in~(\ref{2.2}). The second now follows from the equalities
\[\det T_u(v_-)=\overline{\det T_u(v_-^*)}=\prod_{\alpha \in \sigma(u)}\overline{\det v_-^*(\alpha)}=\prod_{\alpha \in \sigma(u)}\det v_-(1/\overline{\alpha}).\]
Finally, we have
\begin{eqnarray*}
& & \prod_{\alpha \in \sigma(u)}\det v_+(\alpha) \det v_-(1/\overline{\alpha})=
\prod_{\alpha \in \sigma(u)}\det(v_+\circ \mu_{-\alpha})(0) \det(v_- \circ \mu_{-\alpha})(\infty)\\
& & = \exp \sum_{\alpha \in \sigma(u)}\Big(\log\det(v_+\circ \mu_{-\alpha})(0)+ \log\det(v_- \circ \mu_{-\alpha})(\infty)\Big)\\
& & = \exp \sum_{\alpha \in \sigma(u)}\Big([\log\det(v_+\circ \mu_{-\alpha})]_0+ [\log\det(v_- \circ \mu_{-\alpha})]_0\Big)\\
& & = \exp \sum_{\alpha \in \sigma(u)}[\log \det (a\circ \mu_{-\alpha})]_0=\prod_{\alpha \in \sigma(u)} G(a \circ \mu_{-\alpha}),
\end{eqnarray*}
which gives (\ref{2.3}) and completes the proof of Theorem \ref{Theo 1.1} and formula (\ref{1.2}).
Once Theorem \ref{Theo 1.1} is available, Theorem \ref{Theo 1.2} is no surprise. Indeed, the assertions
concerning the limit of $u_N(z)$ are well known, and the theorem on the lower limits of model spaces
on page 35 of \cite{Nik} implies that $P_{u_N}$ converges strongly to $I$ if $u_N(z) \to 0$
and to $P_B$ if $u_N(z) \to B(z)$. Formulas~(\ref{1.3}) and~(\ref{1.4}) then result from Theorem~\ref{Theo 1.1}
and the continuity of the determinant on $I$ minus the trace ideal.
\section{Three Examples}
As already said, for $u(z)=z^N$ the term (\ref{1.2}) is simply $G(a)^N$. For general inner functions $u$,
it is less harmless. It suffices to illustrate things
in the simple case where $v_+(z)=1-vz$ with $|v| <1$. We put
\[G_u(v)=\prod_{\alpha \in \sigma(u)} v_+(\alpha)=\prod_{\alpha \in \sigma(u)} (1-v\alpha).\]
{\bf Example 1.} Let $\alpha_j=1-1/j^2$ and $u_N(z)=\prod_{j=1}^N B_{\alpha_j}(z)$. Then
\begin{eqnarray*}
\log G_{u_N}(v) & = & \sum_{j=1}^N \log (1-v\alpha_j)= \sum_{j=1}^N \log\left(1-v+\frac{v}{j^2}\right)\\
& = & N \log (1-v)+\sum_{j=1}^N \log \left(1+\frac{v}{1-v}\,\frac{1}{j^2}\right)\\
& = & N \log (1-v)+\sum_{j=1}^\infty \log \left(1+\frac{v}{1-v}\,\frac{1}{j^2}\right)+O\left(\frac{1}{N}\right)
\end{eqnarray*}
and hence
\begin{eqnarray*}
G_{u_N}(v) & = & (1-v)^N \prod_{j=1}^\infty\left(1+\frac{v}{1-v}\,\frac{1}{j^2}\right)\left(1+O\left(\frac{1}{N}\right)\right)\\
& = & (1-v)^N \:\frac{\sinh\left(\pi \sqrt{\frac{v}{1-v}}\right)}{\pi \sqrt{\frac{v}{1-v}}}\left(1+O\left(\frac{1}{N}\right)\right).
\end{eqnarray*}
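The infinite product in the last line was evaluated by Euler's product formula for the hyperbolic sine, taken here with $z=\sqrt{v/(1-v)}$:

```latex
% Euler's product formula behind the sinh in Example 1
\[
\frac{\sinh(\pi z)}{\pi z}=\prod_{j=1}^\infty\left(1+\frac{z^2}{j^2}\right).
\]
```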
{\bf Example 2.} Now take $\alpha_j=1-1/j$ and $u_N(z)=\prod_{j=1}^N B_{\alpha_j}(z)$. This time, with $q:=v/(1-v)$,
\begin{eqnarray*}
\log G_{u_N}(v) & = & N \log (1-v)+\sum_{j=1}^N \log\left(1+\frac{q}{j}\right)\\
& = & N\log(1-v)+\sum_{j=1}^N \left(\log\left(1+\frac{q}{j}\right)-\frac{q}{j}\right)+\sum_{j=1}^N\frac{q}{j},
\end{eqnarray*}
and this equals
\[N\log(1-v)+\sum_{j=1}^\infty \left(\log\left(1+\frac{q}{j}\right)-\frac{q}{j}\right)+O\left(\frac{1}{N}\right)+
q\left(\log N + C + O\left(\frac{1}{N}\right)\right),\]
where $C=0.5772\ldots$ is Euler's constant. It follows that
\[G_{u_N}(v)=(1-v)^N \,N^q\,e^{qC}\prod_{j=1}^\infty \left(1+\frac{q}{j}\right)e^{-q/j}\left(1+O\left(\frac{1}{N}\right)\right),\]
and taking into account that
\[\prod_{j=1}^\infty \left(1+\frac{q}{j}\right)e^{-q/j}=\frac{e^{-qC}}{\Gamma(q+1)},\]
we arrive at the formula
\[G_{u_N}(v)=\frac{(1-v)^N\, N^{v/(1-v)}}{\Gamma\left(\frac{1}{1-v}\right)}\left(1+O\left(\frac{1}{N}\right)\right).\]
{\bf Example 3.} The previous two examples raise the question whether the limits of $G_{u_{N+1}}(v)/G_{u_N}(v)$ and
$G_{u_N}(v)^{1/N}$ always exist. Surprisingly, the answer is NO. Since $G_{u_{N+1}}(v)/G_{u_N}(v)=1-v\alpha_{N+1}$, this
is clear for the quotient. To give a counterexample for the root, we construct a sequence $\{u_N\}$ with a subsequence
$\{u_{N_i}\}$ such that $G_{u_{N_i}}(v)^{1/N_i}$ alternately assumes two different values.
We take $u_N(z)=\prod_{j=1}^N B_{\alpha_j}(z)$
where $\alpha_j =r_j z_j$, $r_j \in (0,1)$, $z_j \in \mathbb{T}$, and
$\sum_{j=1}^\infty (1-r_j) < \infty$. Then
\begin{eqnarray*}
G_{u_N}(v) & = & \prod_{j=1}^N (1-vr_jz_j)=\prod_{j=1}^N (1-vz_j+vz_j(1-r_j))\\
& = & \prod_{j=1}^N (1-vz_j)\prod_{j=1}^N \left(1+\frac{vz_j}{1-vz_j}(1-r_j)\right)\\
& = & \prod_{j=1}^N (1-vz_j)\prod_{j=1}^\infty \left(1+\frac{vz_j}{1-vz_j}(1-r_j)\right)\:(1+o(1)),
\end{eqnarray*}
and it is sufficient to choose $\{z_j\}_{j=1}^\infty$ so that the limit of $\prod_{j=1}^N(1-vz_j)^{1/N}$
does not exist. We successively take $z_j=-1$ or $z_j=1$ and denote by $f(N)$ the number of choices
of $z_j=1$ after $N$ steps. Here $f: \mathbb{N} \to \mathbb{N}$ may be any function such that
\begin{equation}
f(N-1) \le f(N) \le f(N-1)+1 \quad \mbox{for} \quad N \ge 2. \label{3.1}
\end{equation}
Then
\[\prod_{j=1}^N(1-vz_j)^{1/N}=(1-v)^{f(N)/N}(1+v)^{(N-f(N))/N}=(1+v)\left(\frac{1-v}{1+v}\right)^{f(N)/N},\]
and we are left with finding a function satisfying (\ref{3.1}) such that $f(N)/N$ has no limit as $N \to \infty$.
Such functions obviously exist: start with $f(1)=1$, leave $f(N)$ constant until $f(N)/N=1/4$, then increase $f(N)$ successively by $1$
until $f(N)/N=1/2$, after that leave $f(N)$ again constant to reach $f(N)/N=1/4$, then increase
$f(N)$ anew by ones until $f(N)/N=1/2$, etc.
Here is this function explicitly. Every natural number $N \ge 3$ may uniquely be written as $N=2\cdot 3^k+\ell$ with $k \ge 0$
and $1 \le \ell \le 4\cdot 3^k$. We put
\[f(2\cdot 3^k+\ell)=\left\{\begin{array}{lll} 3^k & \mbox{for} & 1 \le \ell \le 2\cdot 3^k,\\
\ell-3^k & \mbox{for} & 2\cdot 3^k \le \ell \le 4\cdot 3^k, \end{array}\right.\]
and we also define $f(1)=f(2)=1$. Thus, our choice for $z_1$ is $1$, the following three choices are $z_2=z_3=z_4=-1$,
the following two are $z_5=z_6=1$, the following six $z_j$ are $-1$, the next six $z_j$ are $1$, and so on.
It can be verified straightforwardly that $f$ satisfies~(\ref{3.1}), and since
$f(N)/N=1/2$ for $N=2\cdot 3^k$ and $f(N)/N=1/4$ for $N=4\cdot 3^k$, the limit of $f(N)/N$ does
not exist.
\begin{thebibliography}{99}

\bibitem{BasWid}
E. Basor and H. Widom,
{\em On a Toeplitz determinant identity of Borodin and Okounkov}.
Integral Equations Operator Theory 37 (2000), 397--401.

\bibitem{Borodok}
A. Borodin and A. Okounkov,
{\em A Fredholm determinant formula for Toeplitz determinants}.
Integral Equations Operator Theory 37 (2000), 386--396.

\bibitem{Botok}
A. B\"ottcher,
{\em On the determinant formulas by Borodin, Okounkov, Baik, Deift, and Rains}.
Oper. Theory Adv. Appl. 135 (2002), 91--99.

\bibitem{BoSi}
A. B\"ottcher and B. Silbermann,
{\em Analysis of Toeplitz Operators}. Second edition, Springer-Verlag, Berlin, 2006.

\bibitem{BoWi}
A. B\"ottcher and H. Widom,
{\em Szeg\"o via Jacobi}.
Linear Algebra Appl. 419 (2006), 656--667.

\bibitem{GaRoss}
S. R. Garcia and W. T. Ross,
{\em Recent progress in truncated Toeplitz operators}.
Preprint, arXiv: 1108.1858v4 [math.CV] 22 Jun 2012.

\bibitem{GerCase}
J. S. Geronimo and K. M. Case,
{\em Scattering theory and polynomials orthogonal on the unit circle}.
J. Math. Phys. 20 (1979), 299--310.

\bibitem{Nik}
N. K. Nikolski,
{\em Treatise on the Shift Operator}.
Springer-Verlag, Berlin, 1986.

\bibitem{Sar}
D. Sarason,
{\em Algebraic properties of truncated Toeplitz operators}.
Oper. Matrices 1 (2007), 491--526.

\bibitem{Simon}
B. Simon,
{\em Orthogonal Polynomials on the Unit Circle. Part 1: Classical Theory}.
Amer. Math. Soc., Providence, RI, 2005.

\bibitem{Sz}
G. Szeg\H{o},
{\em On certain Hermitian forms associated with the Fourier series of a positive function}.
Festschrift Marcel Riesz, 228--238, Lund, 1952.

\bibitem{Treil}
S. R. Treil,
{\em Invertibility of a Toeplitz operator does not imply its invertibility by the projection method}.
Soviet Math. Dokl. 35 (1987), 103--107.

\bibitem{Wid}
H. Widom,
{\em Asymptotic behavior of block Toeplitz matrices and determinants. II}.
Advances in Math. 21 (1976), 1--29.

\end{thebibliography}
Albrecht B\"ottcher
Fakult\"at f\"ur Mathematik
TU Chemnitz
09107 Chemnitz
Germany
{\tt aboettch@mathematik.tu-chemnitz.de}
\end{document}
\begin{document}
\newcommand{\preprinttitle}{Consistency of orthology and paralogy constraints in the presence of gene transfers}
\newcommand{\listauthors}{\raggedright
Mark Jones\textsuperscript{1}, \space
Manuel Lafond\textsuperscript{2},
Celine Scornavacca\textsuperscript{3}
}
\newcommand{\listinstitutions}{
\textsuperscript{1} Delft Institute of Applied Mathematics, Delft University of Technology, The Netherlands, \texttt{M.E.L.Jones@tudelft.nl}
\\
\textsuperscript{2} Departement d'informatique, Université de Sherbrooke, Canada, \texttt{manuel.lafond@USherbrooke.ca}
\\
\textsuperscript{3} Institut des Sciences de l'Evolution,
Universit\'{e} de Montpellier, CNRS, IRD, EPHE
34095 Montpellier Cedex 5 - France
\texttt{Celine.Scornavacca@umontpellier.fr}
}
\newcommand{\preprintdate}{15 February 2022}
\newcommand{\recommender}{Barbara Holland}
\newcommand{\DOIrecommendation}{10.24072/pci.mcb.100009}
\newcommand{\preprintcitation}{Jones M, Lafond M, Scornavacca C (2022) Consistency of orthology and paralogy constraints in the presence of gene transfers. arXiv:1705.01240 [cs], ver.6 peer-reviewed and recommended by Peer Community in Mathematical and Computational Biology. https://arxiv.org/abs/1705.01240.}
\newcommand{\keywords}{Algorithms, phylogenetics, orthology, horizontal gene transfer}
\newcommand{\correspondingauthors}{M.E.L.Jones@tudelft.nl, manuel.lafond@USherbrooke.ca, and \\ Celine.Scornavacca@umontpellier.fr}
\newcommand{\reviewers}{Two anonymous reviewers.}
\newcommand{\preprintabstract}{
Orthology and paralogy relations are often inferred by methods based on gene sequence similarity \cs{that} yield
a graph depicting the relationships between gene pairs. Such relation graphs frequently contain errors,
as they cannot be explained via a gene tree that contains the depicted orthologs/paralogs while being consistent with the species evolution.
Previous research has mostly focused on correcting such errors in some minimal way, for instance by changing a minimum number of relations to attain consistency.
In this work, we ask: could the errors in the orthology predictions be explained by lateral gene transfer? We formalize this question
by allowing gene transfers to behave either as a speciation or as a duplication, expanding the space of valid orthology graphs.
We then provide a variety of algorithmic
results regarding the underlying problems. Namely, we show that deciding if a relation graph $R$ is consistent with a given species network $N$ with known transfer highways is
NP-hard, and that it is W[1]-hard under the parameter ``minimum number of transfers''.
During the process, we define a novel algorithmic problem called \emph{Antichain on trees}, which may be useful for other reductions.
We then present an FPT algorithm for the decision problem
based on the degree of the gene tree associated with $R$.
We also study analogous problems in the case that the transfer highways on a species tree are unknown.
}
\beginingpreprint
\section{Introduction}
In phylogenetics, evolutionary relationships between genes and species are often represented via phylogenetic trees.
\emph{Species trees} are phylogenetic trees displaying the evolutionary relationships among a set of species, while \emph{gene trees} are phylogenetic trees displaying the evolutionary relationships among genes.
Vertical descent with modification (speciation)
constitutes only part of the events shaping
a gene history;
\mj{other such events include,}
for example, duplications,
losses and transfers of genes.
When gene trees are used to estimate the evolutionary relationships of the species containing
\mj{those genes,}
only \emph{homologous} genes -- genes sharing a common ancestor -- should be compared.
Homology can be refined into the concepts of \emph{orthology} and \emph{paralogy}: two genes from two different species are said to be \emph{orthologous} if they are derived from a single gene present in the last common ancestor of the two species via a speciation event, and \emph{paralogous} if they were derived via a duplication event \citep{fitch1970distinguishing}.
Orthology inference is the starting point of several comparative genomics studies, and is also a key instrument for functional annotation of new genomes~\citep{gabaldon2013functional}. Several
methods have been designed to distinguish orthologs from paralogs.
These can be roughly divided into two groups \citep{altenhoff2012inferring}. The first group of methods, based on phylogenetic inference,
reconstructs a \emph{gene tree}
and deduces orthology relationships from this tree by comparing it with the species tree via \emph{reconciliation algorithms}
(see \cite{boussau:hal-02535529}
for a review).
Another class of methods estimates orthology using sequence similarity \ml{(see e.g.~\cite[among others]{li2003orthomcl,emms2015orthofinder} and~\cite{kristensen2011computational} for a survey)},
hypothesising that orthologs are more similar than paralogs.
Both methods can yield a \emph{relation graph}, in which vertices are genes, edges represent putative orthologous gene pairs and non-edges represent putative paralogs.
Phylogeny-based methods
require a prior knowledge of the species tree, and are very dependent on the accuracy of the gene trees. Unfortunately, the species phylogeny is not always known and gene trees can be highly inaccurate as a result of several kinds of reconstruction artefact, e.g. long-branch attraction (LBA) \citep{bergsten2005review}.
Similarity-based methods do not suffer from these drawbacks but still have an important weakness:
the inferred relation graph $R$ may fail to be
\emph{consistent}, meaning that there is no gene tree, labeled by speciation and duplication events, that can both explain the relations depicted by $R$
and ``agree'' with a known species tree $S$.
Moreover, approaches based on sequences tend to miss orthologs whose evolutionary path involves a duplication followed by high divergence, which occurs for instance in neofunctionalisation~\citep{lafond2018accurate}.
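A minimal example of an inconsistent relation graph, assuming the cograph characterization established in \citep{hellmuth2013orthology} (the orthology graphs of event-labeled gene trees are exactly the cographs, i.e.\ the graphs with no induced path on four vertices):

```latex
% Four genes a, b, c, d with orthology edges ab, bc, cd and all other
% pairs paralogous form an induced P_4; no DS-tree displays these
% relations, so this R is inconsistent with every species tree.
\[
R:\qquad a \;\text{---}\; b \;\text{---}\; c \;\text{---}\; d
\]
```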
In
\mj{recent years,}
the decision problems of consistency of orthology/paralogy relations have been extensively studied \citep{hernandez2012event,hellmuth2013orthology,lafond2014orthology,hellmuth2015phylogenomics,jones2016,lafond2016link,dondi2017approximating}.
Two possible explanations for the inconsistency of a relation graph $R$ are that either the set of relations contains errors,
or the evolutionary model used to assess consistency is not appropriate for the gene family at hand.
Most of the previous work in this field has been devoted to detection and correction of errors in relation graphs.
\man{The second possibility has recently been considered in~\cite{hellmuth2019reconciling}. The authors ask, given an \cs{event-labeled} gene tree $G$ that displays a given \cs{set} of relations, whether there is a species network $N$ that \cs{can} be reconciled with $G$.}
In a similar vein,
in this paper we ask:
can inconsistent relations be explained by extending the usual speciation/duplication model to lateral gene transfers?
Two genes are said to be \emph{xenologous} if at least one of the two genes has been acquired by gene transfer.
As discussed in~\cite{koonin2005orthologs}, genes related by transfer may appear either as orthologs or paralogs, even though they are not related by speciation or duplication at their lowest common ancestor.
The terms \emph{pseudoorthologs} and \emph{pseudoparalogs} were used to designate homologous genes mimicking orthology and paralogy, respectively,
after one or more lateral gene transfers. Here, we provide a variety of algorithmic results regarding the question
of explaining inconsistent relations using these new types of relations.
The paper is organized as follows.
In Section~\ref{sec:prelim}, we introduce the notion of orthology/paralogy consistency with a given species network $N$,
and show how it relates to $DS$-trees, which are gene trees labeled by speciation and duplication only.
Then, in Section~\ref{sec:w1hardness} we study the question of deciding whether a relation graph $R$ is
consistent with $N$, meaning that $R$ can be represented by a gene history, possibly undergoing lateral transfers,
that agrees with $N$.
We show that, unfortunately,
this is
\mj{an NP-hard problem. Furthermore, the problem is unlikely to be fixed-parameter tractable with respect to the number of transfers, as this parameterized version of the problem is $W[1]$-hard.}
On the positive side, we show in Section~\ref{sec:dpalgo} that these problems can be solved in time
\mj{$O(2^{k}k!k|V(R)||V(N)|^4)$,}
where
$k$ is the maximum degree of the smallest $DS$-tree exhibiting the relations of $R$.
In Section~\ref{sec:unknownhighways}, we turn to the variant where we have a species tree $S$ rather than a network, and
ask if transfer arcs can be inserted into $S$ so that $R$ becomes consistent.
Some proofs are quite technical and can be found in the Appendix.
\section{Preliminaries}\ellabel{sec:prelim}
We use the notation $[n] = \{1, 2, \ldots, n\}$.
Throughout the paper, let $\Gamma$ be a set of genes, $\Sigma$ a set of species, and
$\sigma : \Gamma \rightarrow \Sigma$ the mapping between genes and species.
\begin{figure*}
\caption{An illustration of an LGT network \mj{with secondary arc $(n_4,n_5)$}: (a) an LGT network $N$\label{fig1a}; (b) a gene tree $G$\label{fig1b}; (c) a relation graph $R$\label{fig1c}.\label{fig1}}
\end{figure*}
All trees in this paper are assumed to be rooted and directed, each edge being oriented away from the root.
A \emph{species network} $N$ on $\Sigma$ is a directed acyclic graph with a single indegree-0 node (the \emph{root}) and $|\Sigma|$ outdegree-0 nodes (the \emph{leaves}), such that each leaf is labeled by a different element of $\Sigma$.
Here we will consider only \emph{binary} species networks,
\mj{in which}
internal nodes have either indegree 1 and outdegree 2 (\emph{principal} nodes) or indegree 2 and outdegree 1 (\emph{secondary} nodes or \emph{reticulations}).
\cs{A \emph{Lateral Gene Transfer (LGT) network}} $N$ is a species network along with a partition of $E(N) = E_p \cup E_s$ into a set of \emph{principal arcs} $E_p$ and a set of \emph{secondary arcs} $E_s$~\citep{Cardona2015}. The $E_p$ arcs correspond to vertical descent, whereas the $E_s$ arcs correspond to pairs of species that may transfer genetic content. The subnetwork $N' = (V(N), E_p)$ obtained after removing the $E_s$ arcs must be a tree \ml{in which the root has outdegree $2$}. We denote by $T_0(N)$ the tree obtained from $N'$ after \cs{suppressing} indegree-1 outdegree-1 nodes. Roughly speaking, an LGT network can also be seen as a network obtained by starting with a species tree $S = T_0(N)$, and then adding secondary arcs with endpoints located on the edges of $S$. Note that LGT networks are \emph{tree-based networks}, where $T_0(N)$ is a \emph{distinguished base tree} \citep{Francis2015}.
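To make the definition of $T_0(N)$ concrete, the base tree can be extracted mechanically: drop the secondary arcs, then repeatedly suppress indegree-1 outdegree-1 nodes. The following Python sketch is illustrative only (the arc-set representation and all names are ours, not part of the paper's formalism):

```python
def base_tree(principal):
    """Obtain T_0(N) from the principal-arc tree N' = (V(N), E_p):
    repeatedly suppress nodes with indegree 1 and outdegree 1.
    Arcs are (parent, child) pairs; secondary arcs are assumed removed."""
    arcs = set(principal)
    while True:
        indeg, outdeg = {}, {}
        for u, v in arcs:
            outdeg[u] = outdeg.get(u, 0) + 1
            indeg[v] = indeg.get(v, 0) + 1
        # nodes left with exactly one parent and one child get suppressed
        mid = [v for v in indeg if indeg[v] == 1 and outdeg.get(v, 0) == 1]
        if not mid:
            return arcs
        v = mid[0]
        (p,) = [u for u, w in arcs if w == v]
        (c,) = [w for u, w in arcs if u == v]
        arcs -= {(p, v), (v, c)}
        arcs.add((p, c))
```

For instance, if a transfer tail $m$ sits on the edge between an internal node and a leaf, suppressing $m$ reconnects the leaf directly to that node.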
As defined in~\cite{Gorecki2004,nojgaard2018time}, we say that an LGT network $N$ is \emph{time-consistent} if there exists a function $t:V(N) \rightarrow \mathbb{N}$ such that:
\begin{enumerate}
\item $t(u)=t(v)$, if $(u,v) \in E_s$, and
\item $t(u)<t(v)$, if $(u,v) \in E_p$.
\end{enumerate}
\ml{Note that although time-consistency forbids directed cycles, not all directed acyclic graphs are time-consistent. For instance, one can easily construct an acyclic LGT network that contains two principal arcs $(a, b)$ and $(c, d)$, and secondary arcs $(a, d)$ and $(b, c)$; no time-consistent labeling is possible for $a, b, c, d$. It is also worth mentioning that LGT networks that admit a time-consistent map were characterized in~\cite{Gorecki2004}, where a linear-time algorithm is given to find such a map.}
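The time-consistency condition can be tested mechanically: the endpoints of every secondary arc must share a time slot, so one merges them into equality classes and checks that the principal arcs order the classes acyclically. A simple (not linear-time) Python sketch, with illustrative names of our own:

```python
from collections import defaultdict, deque

def is_time_consistent(nodes, principal, secondary):
    """Check whether an LGT network admits a time map t with
    t(u) = t(v) for secondary arcs and t(u) < t(v) for principal arcs."""
    # Union-find: endpoints of a secondary arc must share a time slot.
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in secondary:
        parent[find(u)] = find(v)
    # Quotient graph over the equality classes, keeping principal arcs.
    succ = defaultdict(set)
    indeg = defaultdict(int)
    classes = {find(v) for v in nodes}
    for u, v in principal:
        cu, cv = find(u), find(v)
        if cu == cv:          # t(u) < t(v) contradicts t(u) = t(v)
            return False
        if cv not in succ[cu]:
            succ[cu].add(cv)
            indeg[cv] += 1
    # Time-consistent iff the quotient graph is acyclic (Kahn's algorithm).
    queue = deque(c for c in classes if indeg[c] == 0)
    seen = 0
    while queue:
        c = queue.popleft()
        seen += 1
        for d in succ[c]:
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    return seen == len(classes)
```

On the acyclic counterexample above (principal arcs $(a,b)$ and $(c,d)$, secondary arcs $(a,d)$ and $(b,c)$), the quotient graph contains a 2-cycle between the classes $\{a,d\}$ and $\{b,c\}$, so the check fails.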
Here a \emph{gene tree} $G$ on $\Gamma$ is a \cs{binary} tree with $|\Gamma|$ leaves such that each leaf is labeled by a different element of $\Gamma$.
For a binary network $N$, the root node is denoted by $r(N)$, the set of leaves is denoted by ${L}(N)$ and the set of internal nodes is denoted by $I(N)$.
An internal node $x$ of $N$ has either two children, which we will usually denote $x_l$ and $x_r$, or one child, which we will denote $x_l$. The parent of a node $x$ of in-degree $1$ is denoted $p(x)$.
If $x$ has out-degree $2$, the subnetwork rooted at $x$, denoted $N_x$, is the network consisting of the root $x$ and all the nodes
reachable from $x$ (hence if $N$ is a tree, then $N_x$ is a subtree). If $N$ is a rooted tree, $\textsc{lca}(x, y)$ denotes the lowest common ancestor of $x$ and $y$.
Note that all these notations apply to LGT networks and to gene trees (which are special cases of networks).
If $N$ is a species network, since $L(N)$ and $\Sigma$ are in bijection, we will not make the distinction between a leaf of $N$ and a member of $\Sigma$. The same applies to gene tree leaves and $\Gamma$.
\subsection{Reconciliations between gene trees and species networks}
\mj{A \ensuremath{\mathbb{DTL}}\xspace reconciliation aims at explaining how an evolutionary history for a family of genes (given by a gene tree) may fit within a given species network $N$, using speciation, duplication, transfer and gene loss events.
The internal nodes of gene trees, representing ancestral genes, are mapped to ancestral species.}
Furthermore, the branches of a gene tree
may hide multiple events that have not been observed, mainly due to losses. Hence, a reconciliation $\alpha$ maps a node $x$ of $G$
to the sequence of species for the genes that should appear on its parent branch.
Possible mappings are restricted by a few conditions aimed at describing only biologically-meaningful evolutionary histories.
\ml{A reconciliation model for gene trees and time-consistent LGT networks (called H-trees) was proposed in~\cite{Gorecki2010,gorecki2012inferring}, along with algorithms to minimize the duplication, loss and transfer cost.}
\ml{We use~\cite[Definition 3]{Scornavacca2016}, which uses the following formalization:}
\begin{definition}[\cite{Scornavacca2016}]
\label{def:DTLrecLGTNetwork}
Given an LGT network $N$ and a gene tree $G$, let $\alpha$ be a function that maps each node $u$ of $G$ onto a directed path of $N$, denoted $\alpha(u) = (\alpha_1(u), \ldots, \alpha_{\ell}(u))$. Then $\alpha$ is a \emph{\ensuremath{\mathbb{DTL}}\xspace reconciliation}
between $G$ and $N$ if and only if exactly one of the following events
occurs for each node $u$ of $G$ and each $\alpha_i(u)$.
\mjn{For each $\alpha_i(u)$ we also specify a label $e_\alpha(u,i)$ corresponding to the case that holds between $u$ and $\alpha_i(u)$, given in square brackets below}
(for simplicity, let $x:=\alpha_i(u)$ below):
\begin{itemize}
\item [a)] if $x$ is the last node of $\alpha(u)$, one of the cases below is true:
\begin{enumerate}
\item[1.] $u \in L(G)$, $x \in L(N)$ and $\sigma(u)=x$;
[extant leaf]
\item[2.] $\{\alpha_1(u_l), \alpha_1(u_r)\} =\{x_l,x_r\}$, where $(x, x_l), (x, x_r) \in E_p$;
$[\ensuremath{\mathbb{S}}\xspace]$
\item[3.] $\alpha_1(u_l)=x$ and $\alpha_1(u_r)=x$;
$[\ensuremath{\mathbb{D}}\xspace]$
\item[4.] $\{\alpha_1(u_l), \alpha_1(u_r)\} = \{x, y\}$, where $(x,y) \in E_s$;
$[\ensuremath{\mathbb{T}}\xspace]$
\end{enumerate}
\item [b)] otherwise, one of the cases below is true:
\begin{enumerate}
\item[5.] $\alpha_{i+1}(u) =y$, where $(x,y)$ \cs{is one of the two outgoing arcs of $x$ in $E_p$};
$[\ensuremath{\mathbb{SL}}\xspace]$
\item[6.] $\alpha_{i+1}(u)=y$, where $(x,y)$ is in $E_s$;
$[\ensuremath{\mathbb{TL}}\xspace]$
\item[7.] $\alpha_{i+1}(u)=y$ and $(x,y)$ is the only outgoing arc of $x$ in $E_p$;
$[\emptyset]$
\end{enumerate}
\end{itemize}
\mj{When $\alpha$ is a \ensuremath{\mathbb{DTL}}\xspace reconciliation between $G$ and $N$, we call the pair $(G,\alpha)$ a \emph{reconciled gene tree}.}
\end{definition}
By a slight abuse of notation, we may write $|\alpha(u)|$ to denote the number of vertices on the path $\alpha(u)$.
If $\alpha$ is clear from the context, we may write $e(u, i)$ \mjn{in place of $e_{\alpha}(u,i)$}.
With a slight abuse of terminology, we will write $e(\alpha_i(u))$ to denote $e(u,i)$.
We will also write \mj{$\alpha_{\textsc{last}}(u)$ to denote $\alpha_\ell(u)$ and} $e(u, \textsc{last})$ or $e(\alpha_{\textsc{last}}(u))$ to denote $e(u, \ell)$ where $\ell = |\alpha(u)|$.
A speciation (\ensuremath{\mathbb{S}}\xspace) sends its child genes to the child species through principal arcs. A duplication (\ensuremath{\mathbb{D}}\xspace) makes two copies of the gene in the current species. A transfer (\ensuremath{\mathbb{T}}\xspace) corresponds to transferring the lineage of a child of a gene to another branch of the species tree,
while the sibling lineage still evolves within the lineage of the parent. A speciation-loss (\ensuremath{\mathbb{SL}}\xspace) is a speciation where one of the descending genes is absent. A transfer-loss (\ensuremath{\mathbb{TL}}\xspace) is a transfer of one of the two descendants of a gene combined with the loss of its sibling lineage. A no-event ($\emptyset$) indicates that the gene is not transferred and follows the primary species history.
Note that, if $N$ is time-consistent, all \ensuremath{\mathbb{T}}\xspace and \ensuremath{\mathbb{TL}}\xspace events can be guaranteed to happen between co-existing species.
\ml{Moreover, it is not hard to see that for a given root-to-leaf path $g_1, \ldots, g_k$ of $G$, the concatenation of the $\alpha(g_i)$ paths corresponds to a directed path in $N$ (with some nodes that may occur multiple times in a row because of $\mathbb{D}$ nodes). Hence, if $N$ is time-consistent, $\alpha$ ensures that genes evolve without going back in time. Also note that some models only specify the last element of each $\alpha(u)$ (e.g. the $\mu$ map in~\cite{lafond2020reconstruction,nojgaard2018time}).}
An example of a \ensuremath{\mathbb{DTL}}\xspace reconciliation between the LGT network in Figure \ref{fig1a} and the gene tree in Figure \ref{fig1b} is as follows: $\alpha(g_1)=(n_1)$, $\alpha(g_2)=(n_1)$, $\alpha(g_3)=(n_1)$, $\alpha(g_4)=(n_2)$, $\alpha(g_5)=(n_2,n_4)$, $\alpha(g_6)=(n_2)$, $\alpha(g_7)=(n_3)$, $\alpha(a_1)=(A)$, $\alpha(b_1)=(n_5,B)$, $\alpha(c_1)=(n_4,C)$, $\alpha(d_1)=(D)$, $\alpha(a_2)=(n_3,A)$, $\alpha(b_2)=(n_5,B)$, $\alpha(c_2)=(C)$, $\alpha(d_2)=(n_2,D)$.
\mjn{See Figure \ref{fig:embedding}}.
\mj{For this \ensuremath{\mathbb{DTL}}\xspace reconciliation, we have $e(\alpha_1(g_1)) = e(\alpha_1(g_4)) = \ensuremath{\mathbb{D}}\xspace$, $e(\alpha_1(g_2)) = e(\alpha_1(g_3)) = e(\alpha_1(g_6)) = e(\alpha_1(g_7)) = \ensuremath{\mathbb{S}}\xspace$, $\cs{e(\alpha_2(g_5))} = \ensuremath{\mathbb{T}}\xspace$, $e(\alpha_1(b_1)) = \cs{e(\alpha_1(b_2))} = \emptyset$, $e(\alpha_1(c_1)) = \ensuremath{\mathbb{TL}}\xspace$,
$e(\alpha_1(a_2)) = e(\alpha_1(d_2)) = \cs{e(\alpha_1(g_5))} = \ensuremath{\mathbb{SL}}\xspace$, and $e(\alpha_{\textsc{last}}(u)) =$ extant leaf for all $u \in L(G)$.}
\begin{figure*}
\caption{Illustration of a \ensuremath{\mathbb{DTL}}\xspace reconciliation between the gene tree $G$ and the LGT network $N$ of Figure~\ref{fig1}.\label{fig:embedding}}
\end{figure*}
Given $x,y \in \Gamma$, let $u = \textsc{lca}_G(x,y)$.
Then we say that $x$ and $y$ are
\emph{orthologs} w.r.t a reconciled gene tree $G$ if
$e(\alpha_\textsc{last}(u)) = \ensuremath{\mathbb{S}}\xspace$,
\emph{paralogs} if
$e(\alpha_\textsc{last}(u)) = \ensuremath{\mathbb{D}}\xspace$,
and \emph{xenologs} if
$e(\alpha_\textsc{last}(u)) = \ensuremath{\mathbb{T}}\xspace$.
Note that
one of these cases must hold for all distinct
$x,y \in \Gamma$.
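Given the event label at the last element of $\alpha(u)$ for each internal node $u$, the three relations can be read off the gene tree directly via lowest common ancestors. A small illustrative sketch; the parent map and event labels below are hypothetical, not those of the paper's running example:

```python
def classify_pairs(parent, event, leaves):
    """Classify every pair of genes by the event at their lowest common
    ancestor: 'S' -> orthologs, 'D' -> paralogs, 'T' -> xenologs."""
    def ancestors(v):
        path = [v]
        while v in parent:
            v = parent[v]
            path.append(v)
        return path
    def lca(x, y):
        anc_x = set(ancestors(x))
        for v in ancestors(y):
            if v in anc_x:
                return v
        raise ValueError("nodes are not in the same tree")
    names = {"S": "orthologs", "D": "paralogs", "T": "xenologs"}
    return {frozenset((x, y)): names[event[lca(x, y)]]
            for i, x in enumerate(leaves) for y in leaves[i + 1:]}
```

For a tree with root $u_1$ labeled \ensuremath{\mathbb{S}}\xspace whose children are a duplication node $u_2$ (with leaf children $a, b$) and a leaf $c$, the pair $\{a,b\}$ comes out as paralogs while $\{a,c\}$ and $\{b,c\}$ come out as orthologs.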
\subsection{Orthology/paralogy relation graphs}
\ml{An undirected} graph $R$ is called a \emph{relation graph} if $V(R)=\Gamma$ (see Figure \ref{fig1c}). \ml{Since $R$ is undirected, we may denote an edge $\{x, y\}$ of $R$ as $xy$.}
Relation graphs are often used to depict orthology and paralogy relationships \citep{hellmuth2013orthology}: for any pair $x,y$ of \ml{distinct} vertices in $R$, $xy$ is an edge in $R$ if $x$ and $y$ are orthologs, otherwise $x$ and $y$ are paralogs.
Several orthology-detection methods such as OrthoMCL \citep{li2003orthomcl}, ProteinOrtho~\citep{lechner2011proteinortho} and OrthoFinder \citep{emms2015orthofinder} use sequence similarity as a proxy for orthology. Roughly speaking, similar sequences are presumed more likely to be orthologs.
When transfers are present, \ml{sequence similarity predictions} get trickier: xenologs can be ``interpreted'' as either orthologs, in case the two copies retained the same function (and thus their sequences are likely to be similar), or paralogs, if they did not (and thus their sequences are likely to be different). In the following, we adapt the framework described in \cite{hellmuth2013orthology} to the presence of xenologs. Note that in~\cite{hellmuth2016mathematics,geiss2017short,geiss2018reconstructing,lafond2020reconstruction}, the authors approach this problem from a different angle, supposing the xenology relationships are given in the relation graph.
We say that a reconciled gene tree $(G, \alpha)$ \emph{displays} a relation graph $R$,
if there is a way of reinterpreting transfers as either speciation or duplication events,
such that for any pair $x,y$ of vertices in $R$,
$xy$ is an edge in $R$ if and only if $x$ and $y$
are orthologs according to $(G, \alpha)$. More precisely, we introduce two new types of events $\ensuremath{\mathbb{TS}}\xspace, \ensuremath{\mathbb{TD}}\xspace$, which correspond to transfers that behave as a speciation and a duplication, respectively. We then have the following definition:
\begin{definition}
Let $N$ be an LGT network, $R = (\Gamma, E)$ a relation graph, and $(G, \alpha)$ a reconciled gene tree with respect to $N$.
We say that $(G, \alpha)$ \emph{displays} $R$ if there exists a labeling $e^*$ of $\alpha$ satisfying:
\begin{itemize}
\item $e^*(u,i) \in \{\ensuremath{\mathbb{TS}}\xspace,\ensuremath{\mathbb{TD}}\xspace\}$ if $e(u,i) = \ensuremath{\mathbb{T}}\xspace$;
\item $e^*(u,i) = e(u,i)$ if $e(u,i) \neq \ensuremath{\mathbb{T}}\xspace$;
\item for any distinct $x,y \in \Gamma$, if $xy \in E$ then $e^*(\textsc{lca}_G(x,y), \textsc{last}) \in \{\ensuremath{\mathbb{S}}\xspace,\ensuremath{\mathbb{TS}}\xspace\}$,
and otherwise $e^*(\textsc{lca}_G(x,y), \textsc{last}) \in \{\ensuremath{\mathbb{D}}\xspace,\ensuremath{\mathbb{TD}}\xspace\}$.
\end{itemize}
\end{definition}
Note that, \man{if $(G, \alpha)$ and $R$ are known}, there is only one relabeling $e^*$ that \man{ensures that $(G, \alpha)$ displays $R$}.
\mjn{Indeed, if $e(u,i) \neq \ensuremath{\mathbb{T}}\xspace$ then $e^*(u,i) = e(u,i)$ and thus fixed by $(G,\alpha)$; otherwise, $\alpha_i(u)$ is the last element of $\alpha(u)$ and $\alpha_i(u) \notin L(N)$, and thus the value of $e^*(u,i)$ (either $\ensuremath{\mathbb{TS}}\xspace$ or $\ensuremath{\mathbb{TD}}\xspace$) depends on whether $xy \in E$, for any $x,y \in \Gamma$ such that $\alpha_i(u) = \textsc{lca}_G(x,y)$.}
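The uniqueness of $e^*$ translates directly into code: non-transfer labels are copied, and each transfer at an internal node $u$ is resolved to TS or TD by checking whether the gene pairs whose lca is $u$ are edges of $R$. An illustrative sketch under the simplifying assumption that only the last event of each $\alpha(u)$ is labeled (the data layout is our own, not the paper's):

```python
def relabel(event_last, lca_pairs, edges):
    """Compute the unique e* labeling: copy non-T events; resolve each T
    at an internal node u to TS or TD according to whether the gene pairs
    whose lca is u are edges of the relation graph R."""
    estar = {}
    for u, ev in event_last.items():
        if ev != "T":
            estar[u] = ev
        else:
            # any pair with lca u decides; all such pairs agree
            # exactly when (G, alpha) displays R
            x, y = lca_pairs[u][0]
            estar[u] = "TS" if frozenset((x, y)) in edges else "TD"
    return estar
```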
The question of interest in this paper is, \man{if only $R$ is known}, whether there exists a gene tree that
displays $R$ and that can be reconciled with \cs{a given network} $N$.
\begin{definition}
Let $N$ be a species network and $R = (\Gamma, E)$ a relation graph. We say that $R$ is \emph{consistent} with $N$ (or $N$-consistent)
if there exists a reconciled gene tree $(G, \alpha)$ with respect to $N$ that displays $R$.
In addition, we say that $R$ is \emph{$N$-consistent using $k$ transfers} if $(G, \alpha)$ contains at most $k$ transfers, that is, $e(u, i) = \ensuremath{\mathbb{T}}\xspace$ or $\ensuremath{\mathbb{TL}}\xspace$ for at most $k$ choices of $(u,i)$.
\end{definition}
For an example, see Figure \ref{fig1}: $R$ is consistent using one transfer with $N$ because $(G, \alpha)$ displays $R$ \mj{(setting $e^*(g_5, \textsc{last}) = \ensuremath{\mathbb{TS}}\xspace$)} and can be reconciled using one transfer (see the reconciliation given above). It is straightforward to see that $R$ is not consistent using no transfers, thus $R$ is not consistent according to the definition of consistency without xenology \citep{hernandez2012event,hellmuth2013orthology,lafond2014orthology,hellmuth2015phylogenomics,jones2016}.
\man{It is worth mentioning that the question studied in~\cite{hellmuth2019reconciling} can be interpreted as asking whether $R$ is consistent with \emph{some} network $N$. It turns out that the answer is always yes, albeit a slightly different model is used.}
The main question of interest is to decide whether a set of orthology/paralogy relations can be explained by a gene tree that can be reconciled with a given species network.
\noindent \textbf{\textsc{Network Consistency (NC):}}\\
\noindent {\bf Input}: A relation graph $R$ and a time-consistent species network $N$.\\
\noindent {\bf Question}: Is $R$ $N$-consistent? \\
We can also consider the minimization version.
It is the same as \textsc{NC}, but we are also given a parameter $k$ and ask
whether $R$ is $N$-consistent using $k$ transfers.
\noindent \textbf{\textsc{Transfer Minimization Network Consistency (TMNC):}}\\
\noindent {\bf Input}: A relation graph $R$, a time-consistent species network $N$, and an integer $k$.\\
\noindent {\bf Question}: Is $R$ $N$-consistent using at most $k$ transfers? \\
\subsection{Relation graphs and least-resolved DS-trees}
It will be useful to view the problem in terms of a gene tree instead of dealing with relations directly.
Before proceeding with our algorithmic results, we establish the equivalence between relation graphs and \emph{least-resolved $DS$-trees}. \ml{This relationship was initially established in~\cite{bocker1998recovering}.} In essence, a $DS$-tree is simply a gene tree $D$ in which each internal node is labeled $\ensuremath{\mathbb{S}}\xspace$ or $\ensuremath{\mathbb{D}}\xspace$. This labeling does not have to be valid with respect to any species tree or network.
More formally, a \emph{DS-tree for $\Gamma$} is a pair $(D,l)$, where $D$ is a
rooted tree with $L(D) = \Gamma$, and
$l:I(D) \rightarrow \{\ensuremath{\mathbb{D}}\xspace, \ensuremath{\mathbb{S}}\xspace\}$
is a function labeling each internal node of $D$ as a \emph{duplication} or \emph{speciation}.
Note that $D$ is not necessarily binary.
The graph $R(D, l) = (\Gamma, E)$
is the relation graph such that for any pair \man{$\{x,y\}$}
of genes in $\Gamma$,
if $l(\textsc{lca}_D(x,y)) = \ensuremath{\mathbb{S}}\xspace$ then $xy \in E$,
and if $l(\textsc{lca}_D(x,y)) = \ensuremath{\mathbb{D}}\xspace$ then $xy \notin E$.
We say that $(D, l)$ \emph{displays} a relation graph $R$ if $R(D, l) = R$.
An $l$-contraction in a $DS$-tree $(D, l)$ consists of contracting an arc $(u,v)$ of $D$ with $u ,v \in I(D)$ and
$l(u) = l(v)$, and assigning the same label to the node resulting from the contraction.
We say that $(D, l)$ is \emph{least-resolved} if no $l$-contraction is possible.
Note that if $(D, l)$ is least-resolved, then it has
\emph{alternating} duplication and speciation nodes. That is,
each child of a speciation node is a duplication or a leaf, and each child of a duplication node is a speciation or a leaf.
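An $l$-contraction is straightforward to implement: absorb an internal child into its parent whenever both carry the same label, until no such edge remains; the result is the alternating, least-resolved form. A hedged dict-based sketch (the representation is ours; leaves are the names that never appear as keys of `children`):

```python
def least_resolved(children, label):
    """Apply l-contractions until no internal child shares its parent's
    label; the result has alternating S/D labels (least-resolved form)."""
    children = {u: list(cs) for u, cs in children.items()}
    label = dict(label)
    changed = True
    while changed:
        changed = False
        for u in list(children):
            if u not in children:   # u may have been contracted away
                continue
            for v in list(children[u]):
                if v in children and label[v] == label[u]:
                    # contract the arc (u, v): v's children move up to u
                    children[u].remove(v)
                    children[u].extend(children.pop(v))
                    del label[v]
                    changed = True
    return children, label
```

Since contractions preserve the label at every pair's lowest common ancestor, the displayed relation graph $R(D, l)$ is unchanged by this procedure.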
A $DS$-tree $(D, l)$ is a \emph{refinement} of another $DS$-tree $(D', l')$
if $(D', l')$ can be obtained from $(D, l)$ by a sequence of $l$-contractions.
If $D$ is binary, then $(D, l)$ is a \emph{binary refinement} of $(D', l')$.
Observe that $l$-contractions do not change $l(\textsc{lca}_{D'}(x,y))$ for any pair of genes $(x,y)$.
Thus if $(D, l)$ is a refinement of $(D', l')$, then $R(D, l) = R(D', l')$.
It is known that all $DS$-trees that display $R$, if any \mj{exist}, are refinements of the same least-resolved $DS$-tree.
\begin{lemma}[\cite{hellmuth2013orthology,lafond2014orthology}]\label{lem:unique-ds-tree}
Assume that some $DS$-tree displays a relation graph $R$.
Then the least-resolved $DS$-tree $(D, l)$ that displays $R$ is unique. Moreover, $(D, l)$ can be found in linear time.
\end{lemma}
We now want to relate $DS$-trees with \ensuremath{\mathbb{DTL}}\xspace reconciliations by reinterpreting some internal nodes as transfers.
\begin{definition}\label{def:nreconciable}
Let $N$ be an LGT network and $(D, l)$ a $DS$-tree with $D$ binary.
We say $(D, l)$ is \emph{$N$-reconcilable} if there exists a
\mj{\ensuremath{\mathbb{DTL}}\xspace reconciliation $\alpha$ between $D$ and $N$}
such that
for every internal node $u \in I(D)$, the following holds:
\begin{itemize}
\item
if $l(u) = \ensuremath{\mathbb{S}}\xspace$, then $e(\alpha_{\textsc{last}}(u)) \in \{\ensuremath{\mathbb{S}}\xspace, \ensuremath{\mathbb{T}}\xspace\}$;
\item
if $l(u) = \ensuremath{\mathbb{D}}\xspace$, then $e(\alpha_{\textsc{last}}(u)) \in \{\ensuremath{\mathbb{D}}\xspace, \ensuremath{\mathbb{T}}\xspace\}$.
\end{itemize}
Moreover, $(D, l)$ is $N$-reconcilable using $k$ transfers if $\alpha$ uses $k$ transfers.
If $D$ is non-binary, we say that $(D, l)$ is $N$-reconcilable \mj{(using $k$ transfers)} if there exists a binary refinement
$(D', l')$ of $(D, l)$ such that $(D', l')$ is $N$-reconcilable \mj{(using $k$ transfers).}
\end{definition}
Since relation graphs correspond to a unique \man{least-}resolved $DS$-tree, asking about the consistency of a relation graph $R$ is
equivalent to asking a similar question about a least-resolved DS-tree $(D, l)$ that displays $R$, if it exists (see Appendix for a proof).
\begin{lemma}\label{lem:equiv-ds-tree}
Let $N$ be an LGT network and $R = (\Gamma, E)$ a relation graph. Then $R$ is $N$-consistent
(using $k$ transfers)
if and only if
there exists a DS-tree $(D, l)$ for $\Gamma$ such that $R(D, l) = R$ and such that $(D, l)$ is $N$-reconcilable (using $k$ transfers).
\end{lemma}
Note that in particular, Lemma~\ref{lem:equiv-ds-tree} implies that for $R$ to be $N$-consistent
\mj{for} an LGT network $N$, there must exist a $DS$-tree $(D', l')$ such that $R(D', l') = R$. Moreover, we may assume that $(D', l')$ is a binary refinement of the unique least-resolved $DS$-tree $(D, l)$ that displays $R$.
By Lemma~\ref{lem:unique-ds-tree}, we can check in linear time whether $(D, l)$ exists, and if so construct it. Therefore,
we will often describe an instance of our problem by giving the least-resolved DS-tree $(D, l)$ satisfying $R(D, l) = R$.
\ml{We close this subsection by mentioning that the notion of consistency of a gene tree (or DS-tree) has been studied the other way around. That is, in~\cite{markin2018solving,gorecki2019feasibility}, we are instead given a species tree and a gene family, and must find a feasible gene scenario under certain constraints.}
\subsection{Basics of parameterized complexity}
We finish this section with some basics of parameterized complexity.
A \emph{parameterized problem} is a language $L \subseteq \Sigma^* \times \mathbb{N}$, where $\Sigma$ is a fixed alphabet and $\Sigma^*$ is the set of strings over this alphabet.
A pair $(x,k) \in \Sigma^* \times \mathbb{N}$ is a {\sc Yes}-instance of a parameterized problem $L$ if $(x,k) \in L$.
We call the second element $k$ the \emph{parameter} of the instance.
A parameterized problem is \emph{fixed-parameter tractable (FPT)} if there exists an algorithm that decides whether a given instance $(x,k)$ is a {\sc Yes}-instance in time $f(k)\cdot|x|^{O(1)}$, where $f$ is a computable function depending only on $k$; such an algorithm is called an \emph{FPT algorithm}.
The class $W[1]$ is a class of parameterized problems which are strongly believed not to be FPT.
A parameterized problem $L$ is \emph{$W[1]$-hard} if there exists $L' \in W[1]$ such that an FPT algorithm for $L$ would imply an FPT algorithm for $L'$.
For more information we refer the reader to~\cite{DowneyFellows2013}.
Here we sketch an NP-hardness proof for deciding whether a set of relations $R$ can be displayed
by a $CDST$-tree, even if $S(H)$ is time-consistent.
We use a reduction from the \satpb problem, which consists in deciding whether a boolean formula in conjunctive normal form
is satisfiable, with the additional constraints that each clause has $2$ or $3$ variables,
and each variable appears in $2$ or $3$ clauses, counting every positive or negative appearance (Tovey, 1984).
We may assume that each variable $x_i$ appears positively and negatively at least once.
Let $\phi$ be a \satpb instance, with variables $\{x_1, \ldots, x_n\}$
and clauses $\mathcal{C} = \{ C_1, \ldots, C_m \}$. The ordering of the $x_i$'s can be arbitrary,
but must remain fixed as it will be of importance later on.
From $\phi$ we construct a corresponding instance of $RET$, that is, a relation graph $R$, a species tree $S$
and transfer highways $H$.
We start by constructing $S$ and $H$. As an intuitive aid, the whole point of this construction is to build a species network
that allows, for some specific gene trees $G_1, \elldots, G_m$ which we describe later, multiple choices
for each $s(r(G_i))$ by allowing transfers. We will construct the trees so that $\phi$ is satisfiable
iff $s(r(G_1)), \ldots, s(r(G_m))$ can be chosen to be pairwise-separated in $S$ (and thus pairwise related by speciation).
Let $x_i$ be a variable of $\phi$, and define a corresponding tree $X_i$
(of which the root is of degree one, as it will become a transfer node), in the following way:
\begin{itemize}
\item
if $x_i$ appears twice positively in clauses $C_{j_1}$ and $C_{j_2}$, and once negatively in clause $C_k$,
then let $X_i'$ be a binary tree with $3$ leaves. Insert a degree one node labeled $(x_i, C_k)$ by connecting it to
$r(X_i')$ by an edge, and set it as the new root of $X_i'$.
Moreover take any $2$ leaves and label them by $(x_i, C_{j_1})$ and $(x_i, C_{j_2})$.
Label the other leaf $\ell(x_i, C_k)$.
To obtain $X_i$, add $2$ new pending leaves: one leaf $\ell(x_i, C_{j_1})$ connected to $(x_i, C_{j_1})$,
one leaf $\ell(x_i, C_{j_2})$ connected to $(x_i, C_{j_2})$.
\item
if $x_i$ appears once positively in clause $C_{k}$, and twice negatively in clauses $C_{j_1}$
and $C_{j_2}$,
apply the same construction as above.
\item
if $x_i$ appears only twice in $\mathcal{C}$, say positively in clause $C_j$ and negatively in clause $C_k$,
then $X_i$ consists of a path of length $2$ (i.e. a $P_3$) with nodes $a, b, c$ and edges $ab$ and $bc$,
to which we add two leaves adjacent to $c$ ($X_i$ is known as a \emph{fork}).
Assign the $a$ node as the root, and label it $(x_i, C_j)$.
Label the $b$ node by $(x_i, C_k)$,
and label the inserted leaves $\ell(x_i, C_j)$ and $\ell(x_i, C_k)$.
\end{itemize}
Intuitively, to each variable $x_i$ and clause $C_j$ appearance corresponds a transfer node
$(x_i, C_j)$ in $X_i$. If some clauses $C_j$ and $C_k$ contain $x_i$ with the same value
(i.e. both positive or both negative), then $(x_i, C_j)$ and $(x_i, C_k)$ are separated in $X_i$.
Otherwise, one of $(x_i, C_j)$ or $(x_i, C_k)$ is an ancestor of the other. The pending leaves are added so that
each $(x_i, C_j)$ node has its own corresponding leaf $\ell(x_i, C_j)$ in $X_i$.
Call a node of $X$ \emph{marked} if it is labeled by some $(x_i, C_j)$.
Let $X'$ be any tree containing each $X_i$ separately, i.e. take a tree on $n$ leaves,
then replace each leaf by the root of a distinct $X_i$.
Obtain the tree $X$ by adding a new leaf $\alpha$ as an outgroup, that is
by connecting $\alpha$ and $r(X')$ under a common parent.
To obtain $S$ we add transfer nodes to $X$, and construct the transfer highways $H$.
For each marked node $(x_i, C_j)$ in $X$, add a transfer node $\alpha_{ij}$
on the $\{r(X), \alpha\}$ edge, making sure that if $(x_i, C_j)$ has a marked descendant,
$(x_{i}, C_{k})$, then $\alpha_{ij}$ lies above $\alpha_{ik}$ (i.e. $\alpha_{ij}$ is closer to the root).
Then, add the transfer highway $((x_i, C_j), \alpha_{ij})$ to $H$.
Thus from $(x_i, C_j)$, one can always transfer to $\alpha$.
It is not hard to see that time-consistency is preserved by the inserted transfer highways.
We finally describe transfers between $X$ leaves, and in the meantime provide a time slot for each transfer node.
For each leaf $\ell(x_i, C_j)$, let $e_{ij}$ be the edge between $\ell(x_i, C_j)$ and its parent.
Add exactly $n - i + 1$ transfer nodes
$t^{ij}_{i}, t^{ij}_{i + 1}, \ldots, t^{ij}_{n}$ on the $e_{ij}$ edge, and assign $t^{ij}_{k}$ the time slot $k$,
for each $i \leq k \leq n$. This finishes the construction of $S$.
As for $H$, for each clause $C_j \in \mathcal{C}$ containing
variables $x_{a}, x_{b}$ and $x_{c}$ with $a < b < c$, add to $H$ the transfer highways
$(t^{bj}_{b}, t^{aj}_{b})$ and $(t^{cj}_{c}, t^{aj}_{c})$.
The point of these transfers is to allow a path from each of
$(x_b, C_j)$ and $(x_c, C_j)$ to $\ell(x_a, C_j)$.
Note that for some $i$, the highest (i.e. oldest) transfer node $t^{ij}_i$ is always
the donor, whereas the lower ones are the receivers. Moreover a donor $t^{ij}_i$ can only
give to a receiver $t^{i'j}_i$ with $i' < i$.
Intuitively, all transfers go ``to the left'', which is what enforces time consistency.
We now construct a set of relations $R$, or more precisely a least-resolved $DS$-tree $G$ for $R$
(they are in one-to-one correspondence).
For each clause $C_j \in \mathcal{C}$ with variables $x_{a}, x_{b}, x_{c}$ with $a < b < c$,
we first build a corresponding subtree $G_j$.
The tree $G_j$ is a binary tree on $2$ leaves, one of which is labeled $\alpha$,
and the other $\ell(x_a, C_j)$ (we label genes by their corresponding species).
The $DS$-tree $G$ is obtained by taking a polytomy on $m$ children $1, \ldots, m$, then replacing
each child $j$ by $G_j$.
The root of $G$ is a speciation, whereas every other internal node (i.e. the $G_j$ roots) is a duplication.
By Lemma~\ref{lem:refinement}, $R$ can be displayed by a $CDST$-tree if and only if
$G$ admits a binary refinement for which there is a $CDST$-completion $G'$.
We show that $G'$ exists if and only if $\phi$ is satisfiable.
The proof is divided in a series of claims.
\begin{claim}
For every $C_j \in \mathcal{C}$
with $C_j = (x_a \vee x_b \vee x_c)$ and $a < b < c$, by using transfers $s(r(G_j))$ can be any of $(x_a, C_j), (x_b, C_j)$ or $(x_c, C_j)$. Moreover $s(r(G_j))$ must be an ancestor of one of those nodes.
\end{claim}
\begin{proof}
Denote $r = r(G_j)$.
To verify the first assertion, first notice that $s(r) = (x_a, C_j)$ is possible if
$\{r, \alpha\}$ is made a transfer arc and $\{r, \ell(x_a, C_j)\}$ a regular edge.
If instead $s(r) = (x_b, C_j)$, then make $\{r, \alpha\}$ a transfer arc,
add a transfer-loss node mapped to $t^{bj}_b$ on the other edge, and transfer to $\ell(x_a, C_j)$.
The case $s(r) = (x_c, C_j)$ is identical.
For the second assertion, suppose that $s(r)$ is not an ancestor of any of the specified nodes.
Observe that in this case, there is no directed path from $s(r)$ to $\ell(x_a, C_j)$ in $S(H)$.
Thus the $\{r, \ell(x_a, C_j)\}$ edge cannot be a transfer arc. But then, $s(r)$ is not an ancestor of
$\ell(x_a, C_j)$, and so $s$ is not a valid gene-species mapping.
\qed
\end{proof}
\begin{claim}
Let $G'$ be a $CDST$-completion of a binary refinement of $G$. Then the only transfer arcs of $G'$
have tail $r(G_j)$ for some $1 \leq j \leq m$.
\end{claim}
\begin{proof}
Suppose that in $G'$, there is a transfer arc $(x, y)$ on the path between the root and $r(G_j)$
for some $j$. Choose $(x, y)$ to be as far from the root as possible. By the previous claim,
$s(r(G_j))$ must be an ancestor of $(x_i, C_j)$ for some $i$. Since there is no transfer on the path between $y$ and $r(G_j)$, $s(y)$ must be an ancestor of $s(r(G_j))$. But no transfer arc leads
to $y$ nor to any of its ancestors, and therefore the $(x, y)$ transfer arc is not feasible.
\qed
\end{proof}
\begin{claim}
$G$ admits a binary refinement for which there is a $CDST$-completion $G'$
if and only if there is a choice of $s_1 = s(r(G_1)), s_2 = s(r(G_2)), \dots, s_m = s(r(G_m))$
such that $s_1, s_2, \dots, s_m$ are pairwise-separated in $S$.
\end{claim}
(Proof idea: every node above the $G_j$'s must be a speciation.)
\begin{claim}
There is a choice of $s_1 = s(r(G_1)), s_2 = s(r(G_2)), \dots, s_m = s(r(G_m))$
such that $s_1, s_2, \dots, s_m$ are pairwise-separated in $S$ if and only if $\phi$ is satisfiable.
\end{claim}
(Proof idea: $(\Rightarrow)$ the choice of $s(r(G_j)) = (x_i, C_j)$ corresponds to setting $x_i$ to the value that satisfies the clause $C_j$; no literal is chosen both true and false, by the $X_i$ construction.
$(\Leftarrow)$ if $C_j$ is satisfied by $x_i$, set $s(r(G_j)) = (x_i, C_j)$; pairwise-separation follows since
positive-negative pairs cannot both be chosen.)
We show that the following problem is NP-hard: Given a (least-resolved) DS-tree $G_{DS}$ for $\Gamma$ and a species network $N$ for $\Sigma$, decide whether there exists a CDST-completion $G$ of $G_{DS}$ such that $G$ is consistent with $N$.
We give a reduction from the NP-hard problem {\sc Independent Set}: given a graph $H=(V,E)$ and an integer $k$, decide whether there exists a set $V' \subseteq V$ such that $|V'| \ge k$ and $\{u,v\} \setminus V' \neq \emptyset$ for any edge $uv \in E$. We call such a set $V'$ an \emph{independent set of size $k$}.
The intuition behind our construction is as follows: for each vertex in $H$ there will be a node in our species network whose descendants have transfer arcs leading to nodes representing the edges with which that vertex is incident. Our DS-tree will require us to use at least one transfer arc to each edge node. Our choice of DS-tree will also force us to choose $k$ vertex nodes as the members of our independent set, but choosing a given vertex will ``block'' the transfer nodes leading to edges from that vertex node. Thus, each edge must be incident with at least one vertex not chosen. It follows that our choice of $k$ vertices must indeed form an independent set.
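As a quick sanity check on the definition above, the {\sc Independent Set} condition ($\{u,v\} \setminus V' \neq \emptyset$ for every edge $uv$) can be tested directly. The following Python sketch (function names are ours and purely illustrative) checks a candidate set and, by brute force, decides the problem on tiny instances.

```python
from itertools import combinations

def is_independent(edges, chosen):
    # V' is independent iff every edge keeps at least one endpoint outside V'
    chosen = set(chosen)
    return all(set(e) - chosen for e in edges)

def has_independent_set(vertices, edges, k):
    # brute force over all k-subsets; only suitable for very small graphs
    return any(is_independent(edges, c) for c in combinations(vertices, k))
```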
Let $(H=(V,E), k)$ be an instance of {\sc Independent Set}.
Let $v_1, v_2, \dots, v_n$ be a numbering of the vertices in $V$, and $e_1, e_2, \dots, e_m$ a numbering of the edges in $E$.
We construct an instance of [our problem] as follows.
\begin{itemize}
\item Let $\Sigma$ consist of a species $specVert_i$ for each vertex $v_i$, a species $specEdge_j$ for each edge $e_j$, and four special species $specLeft$, $specRight$, $specIn$, $specOut$. (Note that our DS-tree will not use any genes from $specVert_i$ for any vertex $v_i$, but we need them for the construction of our species network.)
\item To describe our species network, we first describe the distinguished base tree after indegree-$1$ and outdegree-$1$ nodes have been contracted. Let $r$ be the root node, with children $x_0$ and $y_0$.
Let $x_0$ have children $leafLeft$ and $x_1$.
For $i \in [n-2]$, let $x_i$ have children $leafVert_i$ and $x_{i+1}$, and let $x_{n-1}$ have children $leafVert_{n-1}$ and $leafVert_n$.
Let $y_0$ have children $y_1$ and $z_0$.
For each $j \in [m-2]$, let $y_j$ have children $leafEdge_j$ and $y_{j+1}$, and let $y_{m-1}$ have children $leafEdge_{m-1}$ and $leafEdge_m$.
Finally let $z_0$ have children $leafIn$ and $z_1$, and let $z_1$ have children $leafOut$ and $leafRight$.
We label the leaf nodes with species as follows.
Leaves $leafLeft$, $leafRight$, $leafIn$ and $leafOut$ are labelled with species $specLeft$, $specRight$, $specIn$ and $specOut$ respectively.
For each $i \in [n]$, let leaf node $leafVert_i$ be labelled with $specVert_i$, and for each $j \in [m]$, let $leafEdge_j$ be labelled with $specEdge_j$.
\item We now describe how to subdivide edges and add secondary arcs in order to get our species network $N$.
The order in which we subdivide edges will be important for the purposes of preserving time-consistency.
For each $i \in [n]$ in order, do the following:
Subdivide the arc between $leafVert_i$ and its parent with a new node $tailVert_iIn$, and subdivide the arc between $leafIn$ and its parent with a new node $headVert_iIn$. Add $(tailVert_iIn,headVert_iIn)$ as an arc in $E_s$.
Then, for each $j \in [m]$ in order, such that vertex $v_i$ is incident with edge $e_j$, do the following: Subdivide the arc between $leafVert_i$ and its parent with a new node $tailVert_iEdge_j$, and subdivide the arc between $leafEdge_j$ and its parent with a new node $headVert_iEdge_j$. Add $(tailVert_iEdge_j, headVert_iEdge_j)$ as an arc in $E_s$.
Finally, subdivide the arc between $leafVert_i$ and its parent with a new node $tailVert_iOut$, and subdivide the arc between $leafOut$ and its parent with a new node $headVert_iOut$. Add $(tailVert_iOut,headVert_iOut)$ as an arc in $E_s$.
\item After this process is complete, add any arcs not in $E_s$ to $E_p$, and we have our species network $N$. We show that $N$ is time-consistent as follows: let $t: \{r, x_0, \dots, x_{n-1}, y_0, \dots, y_{m-1}, z_0, z_1\} \rightarrow \mathbb{N}$ be an arbitrary function such that $t(u) < t(v)$ whenever $u$ is an ancestor of $v$.
Let $n_0$ be the maximum value assigned by $t$, and now extend $t$ as follows.
For each $i \in [n]$, let $t(tailVert_iIn) = t(headVert_iIn) = n_0 + i$.
For each $i \in [n]$ and each $j \in [m]$ such that vertex $v_i$ is incident with edge $e_j$, let $t(tailVert_iEdge_j) = t(headVert_iEdge_j) = n_0 + n + m(i-1) + j$.
For each $i \in [n]$, let $t(tailVert_iOut) = t(headVert_iOut) = n_0 + n + mn + i$.
Finally, for each leaf node $leaf$, let $t(leaf) = n_0 + n + mn + n + 1$.
It is straightforward to verify that $t$ satisfies the requirements for time-consistency.
(Note that so far, we have not used the parameter $k$ in our construction).
\item
Let $\Gamma$ consist of a gene $geneEdge_j$ assigned to $specEdge_j$ for each $j \in [m]$, a gene $geneLeft$ assigned to $specLeft$, a gene $geneRight$ assigned to $specRight$, and for each $h \in [k]$, a gene $geneIn_h$ assigned to $specIn$ and a gene $geneOut_h$ assigned to $specOut$.
\item We now describe our (least-resolved) DS-tree $G_{DS}$.
Let the root of $G_{DS}$ be a speciation node $spec_0$.
Let the children of $spec_0$ be a duplication node $dup_0$ and a leaf node $leafGeneRight$.
Let the children of $dup_0$ be a leaf node $leafGeneLeft$ and a speciation node $spec_1$.
Let the children of $spec_1$ consist of a leaf node $leafGeneEdge_j$ for each $j \in [m]$, and a duplication node $dup_h$ for each $h \in [k]$.
For each $h \in [k]$, let the children of $dup_h$ be a leaf node $leafGeneIn_h$ and a leaf node $leafGeneOut_h$.
We label the leaf nodes of $G$ with genes as follows.
Leaves $leafGeneLeft$ and $leafGeneRight$ are labelled with $geneLeft$ and $geneRight$ respectively.
For each $j \in [m]$, the leaf node $leafGeneEdge_j$ is labelled with $geneEdge_j$.
For each $h\in [k]$, leaves $leafGeneIn_h$ and $leafGeneOut_h$ are labelled with $geneIn_h$ and $geneOut_h$ respectively.
This concludes the construction of $G_{DS}$, and of our instance.
\end{itemize}
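The time labels defined in the construction can be checked mechanically: on each subdivided branch, nodes inserted later sit lower and must receive larger labels, and both endpoints of each secondary arc receive equal labels by definition. The following Python sketch (names are ours, for illustration only) computes $t(tailVert_iIn) = n_0+i$, $t(tailVert_iEdge_j) = n_0+n+m(i-1)+j$ and $t(tailVert_iOut) = n_0+n+mn+i$, and verifies monotonicity along every affected branch of a small example.

```python
def time_labels(n, m, incidence, n0):
    """Labels from the construction; each head gets the same value as its tail."""
    t = {}
    for i in range(1, n + 1):
        t[('In', i)] = n0 + i
        for j in incidence[i]:                       # edges e_j incident with v_i
            t[('Edge', i, j)] = n0 + n + m * (i - 1) + j
        t[('Out', i)] = n0 + n + m * n + i
    return t

def strictly_increasing(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))

def check(n, m, incidence, n0=0):
    t = time_labels(n, m, incidence, n0)
    # branch above leafVert_i, top to bottom: In, Edge_j (j increasing), Out
    for i in range(1, n + 1):
        branch = ([t[('In', i)]]
                  + [t[('Edge', i, j)] for j in sorted(incidence[i])]
                  + [t[('Out', i)]])
        if not strictly_increasing(branch):
            return False
    # heads above leafIn (resp. leafOut) were inserted for i = 1..n, top to bottom
    if not strictly_increasing([t[('In', i)] for i in range(1, n + 1)]):
        return False
    if not strictly_increasing([t[('Out', i)] for i in range(1, n + 1)]):
        return False
    # heads above leafEdge_j were inserted for incident i in increasing order
    for j in range(1, m + 1):
        heads = [t[('Edge', i, j)] for i in range(1, n + 1) if j in incidence[i]]
        if not strictly_increasing(heads):
            return False
    return True
```

Here `incidence[i]` lists the indices $j$ of edges incident with $v_i$, in increasing order, mirroring the order in which arcs were subdivided.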
\begin{figure}
\caption{The DS-tree $G_{DS}$.}
\end{figure}
\begin{figure}
\caption{The network $N$. Transfer arcs and their vertices (except for $tailVert_iIn$) are not pictured.}
\end{figure}
\begin{lemma}
Suppose that there is a $CDST$-completion $G$ of $G_{DS}$ such that $G$ is consistent with $N$. Then $H$ has an independent set $V'$ of size $k$.
\end{lemma}
\begin{proof}
Recall that $G$ is derived from $G_{DS}$ by refining non-binary nodes into binary subtrees (made of nodes of the same type), subdividing edges, relabelling some duplication or speciation nodes as transfer nodes, and marking at least one out-arc of each transfer node as a transfer arc.
Let $s:V(G) \rightarrow V(N)$ be a consistent gene-mapping.
We will speak of a subtree of $G$ as ``corresponding'' to a vertex in $G_{DS}$ when that subtree was the result of refining the vertex.
Similarly we may speak of a single vertex in $G$ as corresponding to a vertex in $G_{DS}$ if that vertex was not refined.
We note that the binary nodes in $G_{DS}$ (such as $dup_h$ for each $h \in [k]$) cannot be refined, and for such a node $v$ there must be a single node in $G$ corresponding to $v$.
In such cases we will refer to the node corresponding to $v$ by the name $v$ as well.
We also note that every leaf in $G$ must correspond to a leaf in $G_{DS}$.
Observe that if $u$ is an ancestor of $v$ in $G$, then by the definition of consistency there must be a directed $s(u) - s(v)$ path in $N$.
We begin with a technical claim that imposes some basic limitations on the structure of $G$ and $s$.
\begin{claim}
Let $rootSpec$ be the root of the subtree in $G$ corresponding to $spec_1$,
and let $v$ be any leaf of the form $leafGeneEdge_j$ for $j \in [m]$, or $leafGeneIn_h$ or $leafGeneOut_h$ for $h \in [k]$.
Then any $s(rootSpec) - s(v)$ path uses at least one transfer arc.
Furthermore such a path must pass through $tailVert_iIn$ for some $i \in [n]$.
\end{claim}
\begin{proof}
Since $s$ is a gene-species mapping, we have that $s(leafGeneEdge_j) = leafEdge_j$ for each $j \in [m]$ and $s(leafGeneIn_h) = leafIn$, $s(leafGeneOut_h) = leafOut$ for each $h \in [k]$.
Observe that by construction of $N$, the only directed paths leading to $leafEdge_j$, $leafIn$ or $leafOut$ that do not use transfer arcs are those beginning with either $r$ or some descendant of $y_0$.
Therefore, to prove the first claim, it will be enough to show that $s(rootSpec)$ is not $r$ and is not a descendant of $y_0$.
Consider the root $spec_0$ of $G_{DS}$ (and also of $G$).
As $spec_0$ is an ancestor of $leafGeneLeft$ and $leafGeneRight$, we must have that $s(spec_0) = r$, as this is the only node that is an ancestor of $leafLeft = s(leafGeneLeft)$ and $leafRight = s(leafGeneRight)$ in $N$.
Furthermore, $spec_0$ cannot have been relabelled as a transfer node in $G$, as $r$ is not the tail of a transfer arc.
Therefore $spec_0$ remains a speciation node in $G$.
As $dup_0$ and $leafGeneRight$ are the children of $spec_0$, by definition of consistency the $r - s(dup_0)$ and $r - leafRight$ paths separate in $N$.
The $r-leafRight$ path necessarily has $y_0$ as its first node after $r$.
Therefore the $r - s(dup_0)$ path cannot pass through $y_0$, and instead must have $x_0$ as its first node after $r$.
Therefore $s(dup_0)$ is a descendant of $x_0$, from which it follows that $s(rootSpec)$ is a descendant of $x_0$ (as $rootSpec$ is a descendant of $dup_0$ in $G$).
As $s(rootSpec)$ is a descendant of $x_0$, it cannot be $r$ nor a descendant of $y_0$.
(We note that $N$ does in fact have nodes that are descendants of both $x_0$ and $y_0$. However, all such nodes are descendants of the head of some transfer arc. Observe that no such node is an ancestor of both $leafIn = s(leafGeneIn_1)$ and $leafEdge_1 = s(leafGeneEdge_1)$, which $s(rootSpec)$ is required to be.)
So we now have that any $s(rootSpec) - s(v)$ path in $N$ uses at least one transfer arc.
To see the second claim, observe that every tail of a transfer arc in $N$ is a descendant of $tailVert_iIn$ for some $i \in [n]$.
(Every tail node was inserted between some $leafVert_i$ and its parent, and $tailVert_iIn$ was the first node added in this way.)
\qed
\end{proof}
Our next two claims show that for every $h \in [k]$, $s(dup_h)$ corresponds to a choice of vertex for our independent set $V'$.
\begin{claim}
For each $h \in [k]$, $s(dup_h)$ is an ancestor of $tailVert_iIn$ for some $i \in [n]$.
\end{claim}
\begin{proof}
As $dup_h$ is on the path from $rootSpec$ to $leafGeneIn_h$ in $G$, by the previous claim $s(dup_h)$ is either an ancestor or a descendant of $tailVert_iIn$ for some $i \in [n]$.
Note however that $tailVert_iIn$ itself is the only descendant of $tailVert_iIn$ from which there are directed paths to both $leafIn = s(leafGeneIn_h)$ and $leafOut = s(leafGeneOut_h)$.
As $dup_h$ is an ancestor of $leafGeneIn_h$ and $leafGeneOut_h$, this implies that $s(dup_h)$ is an ancestor of $tailVert_iIn$, as required.
\qed
\end{proof}
From this point onwards, for each $h \in [k]$, let $i(h)$ be an arbitrary index such that $s(dup_h)$ is an ancestor of $tailVert_{i(h)}In$.
\begin{claim}
For any $h,h' \in [k]$ such that $h \neq h'$, we have $i(h) \neq i(h')$.
\end{claim}
\begin{proof}
Let $x = \textsc{lca}_G(dup_h, dup_{h'})$, and note that $x$ is part of the subtree of $G$ corresponding to $spec_1$. As $spec_1$ was a speciation node in $G_{DS}$, $x$ is either a speciation node or a transfer node. As $s(dup_h)$ is an ancestor of $tailVert_{i(h)}In$, and no ancestor of $tailVert_{i(h)}In$ is the tail of a transfer arc (except for $tailVert_{i(h)}In$ itself), $s(x)$ cannot be the tail of a transfer arc. Therefore $x$ cannot be a transfer node, and so $x$ is a speciation node.
(The case that $s(x)=s(dup_h)=s(dup_{h'}) = tailVert_{i(h)}In$ cannot occur, as in this case neither of the paths from $s(x)$ to the images of $x$'s children uses the transfer arc $(tailVert_{i(h)}In, headVert_{i(h)}In)$.)
By the definition of consistency, the $s(x)-s(dup_h)$ and $s(x)-s(dup_{h'})$ paths must separate in $N$.
It follows that the descendants $tailVert_{i(h)}In$ of $s(dup_h)$ and $tailVert_{i(h')}In$ of $s(dup_{h'})$ must be distinct, and so $i(h) \neq i(h')$.
\qed
\end{proof}
Thus, the set $V'= \{v_{i(1)}, \dots, v_{i(k)}\}$ is a set of $k$ vertices.
The next claim will allow us to show that $V'$ is an independent set.
For each $j \in [m]$ let $i[j]$ be an arbitrary index such that $tailVert_{i[j]}In$ is on a $s(rootSpec) - s(leafGeneEdge_j)$ path in $N$ (which must exist by the first claim).
\begin{claim}
For any $j \in [m]$ and any $h \in [k]$, we have $i[j]\neq i(h)$.
\end{claim}
\begin{proof}
The proof is similar to the previous claim.
Let $x = \textsc{lca}_G(leafGeneEdge_j, dup_h)$, and note that $x$ is part of the subtree of $G$ corresponding to $spec_1$.
As $spec_1$ was a speciation node in $G_{DS}$, $x$ is either a speciation node or a transfer node.
But as $s(dup_h)$ is an ancestor of $tailVert_{i(h)}In$, and no ancestor of $tailVert_{i(h)}In$ is the tail of a transfer arc (except for $tailVert_{i(h)}In$ itself), the only possibility for $x$ to be a transfer node is if $s(x)=s(dup_h)=tailVert_{i(h)}In$.
But in this case neither the $s(x)-s(dup_h)$ path nor the $s(x)-s(leafGeneEdge_j)$ path in $N$ uses the transfer arc $(tailVert_{i(h)}In, headVert_{i(h)}In)$, so this is impossible.
Hence $x$ is a speciation node in $G$.
By definition of consistency, the $s(x)-s(dup_h)$ and $s(x)-s(leafGeneEdge_j)$ paths must separate in $N$.
It follows that the $s(x)-s(leafGeneEdge_j)$ path, and hence the $s(rootSpec) - s(leafGeneEdge_j)$ path, cannot pass through $tailVert_{i(h)}In$, and hence $i[j] \neq i(h)$.
\qed
\end{proof}
We are now ready to put the proof together.
Let $V'= \{v_{i(1)}, \dots, v_{i(k)}\}$.
As $i(h) \neq i(h')$ for any $h, h' \in [k]$ with $h \neq h'$,
$V'$ is a set of $k$ vertices.
To see that $V'$ is an independent set, observe that by the previous claim, for each edge $e_j$ the vertex $v_{i[j]}$ is not in $V'$.
By definition $tailVert_{i[j]}In$ is on a $s(rootSpec) - s(leafGeneEdge_j)$ path in $N$,
and by construction of $N$, this can only happen if
$N$ has the transfer arc $(tailVert_{i[j]}Edge_j, headVert_{i[j]}Edge_j)$, i.e.
$v_{i[j]}$ is incident to $e_j$.
Therefore every edge is incident with at least one vertex not in $V'$, from which it follows that $V'$ is an independent set.
\end{proof}
\begin{lemma}
Suppose that $H$ has an independent set $V'$ of size $k$. Then there is a $CDST$-completion $G$ of $G_{DS}$ such that $G$ is consistent with $N$.
\end{lemma}
\begin{proof}
Let $i(1), \dots, i(k)$ be the indices such that $v_{i(1)}, \dots, v_{i(k)}$ are the vertices of $V'$.
For each $j \in [m]$, let $i[j]$ be an index such that $v_{i[j]}$ is a vertex incident to edge $e_j$ and not contained in $V'$.
Note that such an $i[j]$ must exist, as $V'$ is an independent set.
We note now, as it will be relevant later, that for each $h \in [k]$ the node $tailVert_{i(h)}In$ is not an ancestor of $tailVert_{i[j]}Edge_j$ for any $j \in [m]$, nor of $tailVert_{i(h')}In$ for any $h' \in [k]$ with $h' \neq h$.
We now describe how to derive $G$ from $G_{DS}$, starting from the root and working down, describing the consistent gene-mapping $s$ as we go.
Note that initially $G$ will contain some unary speciation and transfer nodes. This will be fixed at the end of the construction.
Let $spec_0$ remain a speciation node and set $s(spec_0) = r$.
Let $s(leafGeneRight) = leafRight$.
Let $dup_0$ remain a duplication node and set $s(dup_0)=x_0$.
Let $s(leafGeneLeft) = leafLeft$.
We will expand $spec_1$ into a binary subtree, with some of the nodes relabelled as transfer nodes.
The structure of the binary tree depends on our choices of $i(h), i[j]$ for $h \in [k], j \in [m]$, and is constructed as follows.
Let $S'$ be the minimal subtree in $N$ that spans $tailVert_{i(h)}In$ for each $h \in [k]$ and $tailVert_{i[j]}Edge_j$ for each $j \in [m]$.
Note that as $tailVert_{i(h)}In$ is not an ancestor of $tailVert_{i[j]}Edge_j$ for any $j \in [m]$, the node $tailVert_{i(h)}In$ is a leaf in $S'$ for each $h \in [k]$.
So let $S$ be the tree derived from $S'$ by removing the leaves $tailVert_{i(h)}In$ for each $h \in [k]$.
We now expand $spec_1$ into a tree $T$ isomorphic to $S$. More specifically, $|V(T)| = |V(S)|$, and $s$ restricted to $V(T)$ is a one-to-one function such that for each $u,v \in V(T)$, $u$ is a child of $v$ if and only if $s(u)$ is a child of $s(v)$.
If $s(u) = tailVert_{i[j]}Edge_j$ for any $j \in [m]$, let $u$ be relabelled as a transfer node, and otherwise let $u$ remain a speciation node.
We next describe how the children of $spec_0$ in $G_{DS}$ are joined up to $T$ in $G$.
For each $j\in [m]$, let $s(leafGeneEdge_j) = leafEdge_j$, and let the parent of $leafGeneEdge_j$ be the transfer node $u$ for which $s(u) = tailVert_{i[j]}Edge_j$.
(Note that the $tailVert_{i[j]}Edge_j - leafEdge_j$ path in $N$ uses the transfer arc $(tailVert_{i[j]}Edge_j ,headVert_{i[j]}Edge_j)$, satisfying the requirements of $u$ as a transfer node.)
For each $h \in [k]$, let $s(dup_h) = tailVert_{i(h)}In$, and let the parent of $dup_h$ be the $u$ such that $s(u)$ was the parent of $tailVert_{i(h)}In$ in $S'$.
Finally, for each $h \in [k]$, let $dup_h$ be relabelled as a transfer node, let $s(leafGeneIn_h) = leafIn$ and $s(leafGeneOut_h) = leafOut$.
Observe that this satisfies the requirement of $dup_h$ as a transfer node, since the $tailVert_{i(h)}In - leafIn$ path in $N$ uses the transfer arc $(tailVert_{i(h)}In , headVert_{i(h)}In)$.
Now contract any unary nodes to get the $CDST$-completion $G$ of $G_{DS}$.
Now we show that $s$ is a consistent gene-species mapping for $G$.
It is clear from the construction of $G$ and $s$ that $s$ is a gene-species mapping, and that for every node $u$ with child $v$ in $G$, there is a $s(u)-s(v)$ path in $N$.
The only nodes of $G$ assigned to a tail of an arc in $E_s$ were those relabelled as transfer nodes, and we noted during the construction of $G$ and $s$ that for each of these nodes $u$, the $s(u)-s(v)$ path took the corresponding transfer arc for one of $u$'s children $v$.
It remains to show that for each speciation node $u$ with children $v$ and $v'$, the $s(u) - s(v)$ and $s(u) - s(v')$ paths separate in $N$.
For the speciation node $spec_0$, this is clear by inspection: the $s(spec_0) - s(dup_0)$ path begins with $x_0$ and the $s(spec_0) - s(leafGeneRight)$ path begins with $y_0$, so the two paths separate at $r$.
Every other speciation node is a node in $T$, the subtree of $G$ corresponding to $spec_1$.
Observe that as $s^{-1}(tailVert_{i[j]}Edge_j)$ and $s^{-1}(tailVert_{i(h)}In)$ were relabelled as transfer nodes for every $h \in [k], j \in [m]$, for every speciation node $u$ in $T$ the node $s(u)$ is an internal node of $S'$.
It follows by construction that if $u$ has distinct children $v$ and $v'$, then the $s(u) - s(v)$ and $s(u) - s(v')$ paths separate in $N$ (after contracting unary nodes, $s(v)$ and $s(v')$ need not be distinct children of $s(u)$, but the paths still separate).
Thus, we have that $s$ is a consistent gene-species mapping for $G$, as required.
\end{proof}
\section{Hardness of minimizing transfers on LGT networks}\label{sec:w1hardness}
In this section, we consider the \textsc{NC} and \textsc{TMNC} problems.
We will show that \textsc{NC} is NP-hard.
Moreover, we will show that the minimization version \textsc{TMNC} is not only NP-hard, but also $W[1]$-hard parameterized by $k$, the number of transfers.
We give a reduction from the following problem, which is known to be
NP-hard and $W[1]$-hard with respect to $k$~\citep{FELLOWS200953}:
\noindent \textbf{$k$-Multicolored Clique:}\\
\noindent {\bf Input}: A graph $H = (V,E)$, a partition of $V$ into color classes $V_1, \dots, V_k$. \\
\noindent {\bf Parameter}: $k$.\\
\noindent {\bf Question}: Is there a clique $C$ in $H$ containing exactly one vertex from each color class $V_i$?
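For concreteness, the {\sc $k$-Multicolored Clique} question can be decided by brute force on small instances: pick one vertex per color class and test that all picked pairs are adjacent. The following Python sketch (naming is ours, purely illustrative) does exactly that.

```python
from itertools import combinations, product

def has_multicolored_clique(edges, color_classes):
    # pick one vertex per color class; accept if every picked pair is adjacent
    adjacent = {frozenset(e) for e in edges}
    return any(all(frozenset(pair) in adjacent for pair in combinations(pick, 2))
               for pick in product(*color_classes))
```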
The full version of the reduction can be found in the Appendix, but we can sketch the essential ideas here.
We describe the NP-hardness proof; the $W[1]$-hardness proof is similar, but additionally ensures
that the parameter of the constructed instance is a function of $k$.
We first reduce \textbf{$k$-Multicolored Clique} to a novel intermediate problem, \textsc{Antichain on Trees (ACT)}, then reduce \textsc{ACT} to \textsc{NC}.
\textsc{ACT} is formally defined below, but the intuition is as follows:
we are given a tree $T$, a set $X$ of elements to place on the nodes of $T$,
and a weight function $w:X \times V(T) \rightarrow \mathbb{N}_0 \cup \{\infty\}$ indicating the cost of
placing $x \in X$ on $v \in V(T)$. We interpret $w(x, v) < \infty$ as ``$x$ can go on $v$'' and
$w(x, v) = \infty$ as ``$x$ cannot go on $v$''. Our goal is to place each $x \in X$ on an allowable node
such that the elements of $X$ are pairwise incomparable (i.e. none is an ancestor of the other).
\noindent \textbf{\textsc{Antichain on Trees (ACT):}}\\
\noindent {\bf Input}: A rooted tree $T$, a set $X$, a cost function $w:X \times V(T) \rightarrow \mathbb{N}_0 \cup \{\infty\}$.\\
\noindent {\bf Question}: Does there exist an assignment $f:X \rightarrow V(T)$ such that $f(x)$ and $f(y)$ are incomparable in $T$ (that is, neither is an ancestor of the other) for each $x \neq y \in X$, and $w(x,f(x)) < \infty$ for each $x \in X$? \\
We call an assignment $f$ an \emph{incomparable assignment} if it satisfies the conditions of an \textsc{ACT} instance.
In the minimization version of \textsc{ACT}, which we call
\textsc{Minimum Weight Antichain on Trees (MWACT)},
we are given a parameter $k$ and ask if there is an incomparable assignment of weight at most $k$.
\noindent \textbf{\textsc{Minimum Weight Antichain on Trees (MWACT):}}\\
\noindent {\bf Input}: A rooted tree $T$, a set $X$, a cost function $w:X \times V(T) \rightarrow \mathbb{N}_0 \cup \{\infty\}$, and an integer $k$.\\
\noindent {\bf Question}: Does there exist an assignment $f:X \rightarrow V(T)$ such that $f(x)$ and $f(y)$ are incomparable in $T$ (that is, neither is an ancestor of the other) for each $x \neq y \in X$, and such that $\sum_{x \in X} w(x,f(x)) \leq k$? \\
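A brute-force decision procedure for \textsc{MWACT} on toy instances makes the definitions concrete: enumerate all assignments $f:X \rightarrow V(T)$, keep those whose images are pairwise incomparable, and minimize the total weight. The Python sketch below (our naming; the tree is given as a child-to-parent map) illustrates this.

```python
from itertools import product
from math import inf

def ancestors(parent, v):
    # all proper ancestors of v in the tree given by the child -> parent map
    seen = set()
    while v in parent:
        v = parent[v]
        seen.add(v)
    return seen

def incomparable(parent, u, v):
    # distinct nodes, neither an ancestor of the other
    return u != v and u not in ancestors(parent, v) and v not in ancestors(parent, u)

def min_weight_antichain(parent, nodes, X, w):
    """Minimum cost of an incomparable assignment f: X -> V(T); inf if none exists."""
    best = inf
    X = list(X)
    for assign in product(nodes, repeat=len(X)):
        cost = sum(w.get((x, v), inf) for x, v in zip(X, assign))
        if cost < best and all(incomparable(parent, assign[i], assign[j])
                               for i in range(len(X))
                               for j in range(i + 1, len(X))):
            best = cost
    return best
```

Weights absent from `w` are treated as $\infty$, i.e. ``$x$ cannot go on $v$''.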
To see the relationship between \textsc{ACT} and \textsc{NC}, consider an \textsc{ACT} instance
$(T, X, w)$. In the \textsc{NC} setting, $N$ is obtained from $T$ after incorporating some
specific secondary arcs, and the given relations $R$ have, as their unique least-resolved $DS$-tree $(D, l)$,
a speciation root with $|X|$ children, each child being a duplication corresponding to an element of $X$.
Then being able to place $x \in X$ on $v \in V(T)$ represents ``$\alpha_\textsc{last}(x) = v$ is possible'',
i.e. the $x$ node of $D$ is \emph{mappable} onto $v$.
That is, the node $v$ has a directed path to every species present at a leaf below $x$, and the weight $w(x, v)$ is the number
of transfers required to do so.
To enforce the $\alpha_\textsc{last}(x)$ to be pairwise incomparable,
we ensure that transfers can only be undertaken by descendants of the $X$ nodes of $D$.
Thus the speciation root of $D$ cannot be explained by any transfer whatsoever, ensuring that its children must be
incomparable. We now proceed with the formalization of these ideas, and direct the reader to the Appendix for
the details of the constructions.
We first show that \textsc{ACT} is NP-hard and \textsc{MWACT} is $W[1]$-hard even under certain restrictions; these will allow us to reduce \textsc{ACT} to \textsc{NC} and \textsc{MWACT} to \textsc{TMNC}.
The main idea is that the incomparability requirement can be used to create gadgets as subtrees of an \textsc{ACT} or \textsc{MWACT} instance -- if some node is assigned to a variable in $X$, then none of its descendants can be assigned to any variable in $X$. In addition, the weight function allows us to limit the number of nodes that can be assigned to a given variable. Using these ideas, we can create an instance of \textsc{ACT} such that an incomparable assignment of finite weight exists if and only if a given instance of {\sc $k$-Multicolored Clique} is a {\sc Yes}-instance.
\begin{lemma}\label{lem:MWACTconstruction}
Let $H = (V = V_1 \cup V_2 \cup \dots \cup V_k, E)$ be an instance of {\sc $k$-Multicolored Clique}.
Then in polynomial time, we can construct an instance $(T,X,w)$ of \textsc{ACT} such that
$(T,X,w)$ has an incomparable assignment of weight $< \infty$ if and only if
$H$ has a $k$-multicolored clique.
Furthermore, if an incomparable assignment of finite weight exists,
then there exists an incomparable assignment with weight $\le k' = k^2+2k$,
and $(T,X,w)$ satisfies the following properties:
\begin{itemize}
\item $w(x,v) \in \{0,1,\infty\}$ for all $x \in X, v \in V(T)$;
\item $w(x,v)= 0$ for exactly one $v$ for each $x \in X$;
\item if $w(x,v) = 0$ then $w(y,v) = \infty$ for all $y \neq x$;
\item for any $x \in X$, $u,v \in V(T)$ such that $w(x,u),w(x,v) < \infty$, $u$ and $v$ are incomparable.
\end{itemize}
\end{lemma}
As $(T,X,w)$ is a {\sc Yes}-instance of \textsc{ACT} if and only if the corresponding instance of {\sc $k$-multicolored clique} is a {\sc Yes}-instance, we have that \textsc{ACT} is NP-hard.
Moreover, let \linebreak $(T,X,w,k')$ be the instance of \textsc{MWACT} with $k' = k^2 + 2k$ and $T,X,w$ as in Lemma~\ref{lem:MWACTconstruction}. Then Lemma~\ref{lem:MWACTconstruction} also implies that $(T,X,w,k')$ is a {\sc Yes}-instance of \textsc{MWACT} if and only if the corresponding instance of {\sc $k$-multicolored clique} is a {\sc Yes}-instance. As $k'$ is expressible as a function of $k$, any FPT algorithm for \textsc{MWACT} implies an FPT algorithm for {\sc $k$-multicolored clique}. Therefore, as {\sc $k$-multicolored clique} is $W[1]$-hard, so is \textsc{MWACT}. Moreover, as $(T,X,w)$ satisfies the properties of Lemma~\ref{lem:MWACTconstruction}, we have the following:
\begin{lemma}
\textsc{ACT} is NP-hard and \textsc{MWACT} is $W[1]$-hard, even under the following conditions:
\begin{itemize}
\item $w(x,v) \in \{0,1,\infty\}$ for all $x \in X, v \in V(T)$;
\item $w(x,v)= 0$ for exactly one $v$ for each $x \in X$;
\item if $w(x,v) = 0$ then $w(y,v) = \infty$ for all $y \neq x$;
\item for any $x \in X$, $u,v \in V(T)$ such that $w(x,u),w(x,v) < \infty$, $u$ and $v$ are incomparable.
\end{itemize}
\end{lemma}
We next reduce \textsc{ACT} to \textsc{NC}.
The main idea behind this reduction is that every element of $X$ can be represented by a child of the same speciation node in a least-resolved DS-tree.
The tree $T$ can be represented by the distinguished base tree in the species network, and secondary arcs can be added in such a way that, for any DTL reconciliation, the node corresponding to $x \in X$ can only be mapped to nodes $v$ for which $w(x,v) < \infty$.
\begin{lemma}\label{lemma:NCreduction}
Let $(T,X,w)$ be an instance of \textsc{ACT}, such that $w(x,v) \in \{0,1,\infty\}$ for all $x \in X, v \in V(T)$, $w(x,v)= 0$ for exactly one $v$ for each $x \in X$,
if $w(x,v) = 0$ then $w(y,v) = \infty$ for all $y \neq x$,
and for any $x \in X$, $u,v \in V(T)$ such that $w(x,u),w(x,v) < \infty$, $u$ and $v$ are incomparable.
Then in polynomial time, we can construct both a least-resolved DS-tree $(D, l)$ and a time-consistent LGT network $N$ such that for any integer $k$,
$(T,X,w)$ has an incomparable assignment of cost at most $k$ if and only if there exists a binary refinement $(D', l')$ of $(D, l)$ such that $(D', l')$ is $N$-reconcilable using at most $2k$ transfers.
\end{lemma}
By setting $R = R(D, l)$, Lemma~\ref{lemma:NCreduction} implies that $R$ is $N$-consistent if and only if $(T,X,w)$ has an incomparable assignment of cost $< \infty$, i.e. $(T,X,w)$ is a {\sc Yes}-instance of \textsc{ACT}. As \textsc{ACT} is NP-hard (under the restrictions in Lemma~\ref{lemma:NCreduction}), so is \textsc{NC}.
Moreover, for any integer $k$, Lemma~\ref{lemma:NCreduction} implies that $R$ is $N$-consistent using at most $k'=2k$ transfers if and only if $(T,X,w,k)$ is a {\sc Yes}-instance of \textsc{MWACT}. As \textsc{MWACT} is $W[1]$-hard (under the restrictions in Lemma~\ref{lemma:NCreduction}), so is \textsc{TMNC}.
\begin{theorem}
\textsc{NC} is NP-hard and \textsc{TMNC} is $W[1]$-hard.
\end{theorem}
\section{Dynamic programming for bounded degree DS-trees}\label{sec:dpalgo}
\setlength{\textfloatsep}{2pt}
\begin{algorithm}[!b]
\KwData{A DS-tree $D$, an LGT network $N$}
\KwResult{$\infty$ if $D$ is not $N$-reconcilable, or otherwise the minimum number of transfers}
\caption{minTransferCost($D, N$)}
Initialize $f(g, s) = \infty$ for all $g \in V(D), s \in V(N)$
\For{$g \in V(D)$ in post-order traversal}{
\For{$s \in V(N)$ in post-order traversal}{
\uIf{$g$ is a leaf}{
$f(g, s) = 0$ if $\sigma(g) = s$, otherwise $f(g, s) = \infty$
}
\Else{
$best = \infty$
\For{$(D', l') \in \mathcal{B}(g)$}{
$b = reconcileLBR((D', l'), N, s, f)$
\lIf{$b < best$}{$best = b$}
}
$f(g, s) = best$
}
}
}
return $\min_{s \in V(N)} f(r(D), s)$
\end{algorithm}
In this section, we show that given a relation graph $R$ and its least-resolved DS-tree $(D, l)$, if every node of $D$ has degree at most $k$, then
one can decide if $(D, l)$ is $N$-reconcilable in time
\mj{$O(2^{k}k!k|V(D)||V(N)|^4)$.}
Moreover, if $(D, l)$ is $N$-reconcilable, our algorithm finds the minimum number of transfers
required by any possible reconciliation.
In particular, if $D$ is binary, then \textsc{TMNC} can be solved in polynomial time.
\ml{Note that in~\cite{hellmuth2019reconciling}, it is shown that a DS-tree can always be reconciled with some network in a similar reconciliation model, and the authors characterized precisely when a DS-tree can be reconciled with a given network (although transfers are not studied and, hence, not minimized as we do here). Let us also mention that in a series of papers~\citep{hellmuth2017biologically,nojgaard2018time,hellmuth2019reconciling}, it is shown how, given a DS-tree with known transfer events but no species phylogeny, one can find a species tree/network that it can be reconciled with.}
The idea of the algorithm is similar to those of~\cite{Scornavacca2016,kordi2017exact,mykowiecka2017inferring}.
We use dynamic programming over $V(D)$, from the leaves to the root, and when we encounter a non-binary node, we try every way of refining it. This is a relatively standard procedure, although ensuring a valid reconciliation while minimizing transfers requires care.
For each $g \in V(D)$ and each $s \in V(N)$, we denote by $f(g, s)$ the minimum number of transfers needed
by a reconciliation
$(D_g, \alpha)$ with respect to $N$ if we require $\alpha_\textsc{last}(g) = s$ (recall that $D_g$ is the subtree of $D$ rooted at $g$).
If $g$ is a binary node, we try mapping $g_l$ and $g_r$ to every pair of species $s_1$ and $s_2$
that allow $e(\alpha_\textsc{last}(g)) \in \{l(g), \ensuremath{\mathbb{T}}\xspace \}$, and $f(g, s)$ is the minimum over all possibilities. For fixed $s, s_1$ (resp. $s_2$), the number of transfers required
on the branch $(g, g_l)$ (resp. $(g, g_r)$) is the minimum number of secondary arcs on a path from $s$ to $s_1$ (resp. $s_2$).
This path would constitute the sequence $\alpha(g_l)$ (resp. $\alpha(g_r)$).
Then $f(g, s)$ can be computed from these values,
plus those of $f(g_l, s_1)$ and $f(g_r, s_2)$.
If $g$ is a non-binary node with children $g_1, \ldots, g_k$, we simply try to refine $g$ in every possible way, then do as in the binary case. In such a binary refinement $B$ of $g$, we may treat the $g_1, \ldots, g_k$ nodes of $B$ as leaves
and use the previously computed $f(g_i, s')$ values for each $(g_i, s')$ pair.
Let us turn to the algorithmic details.
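The binary-case recurrence can be sketched in Python (a minimal illustration, not the paper's implementation; the dictionaries \texttt{t} and \texttt{reachable} stand for the values $t(s, s')$ and the sets $P(s)$ defined later in this section, and all names are ours):

```python
from itertools import product

INF = float("inf")

def binary_node_cost(s, f_left, f_right, t, reachable):
    """Sketch of the recurrence for f(g, s) at a binary node g.

    f_left, f_right: dicts mapping species to the children's DP costs.
    t[s][s1]: minimum number of secondary arcs on a path from s to s1
    (entries absent from the dict are treated as infinity).
    reachable[s]: the set P(s) of species reachable from s.
    Here we minimise over all reachable pairs (s1, s2), as in the
    duplication case; the speciation and transfer cases restrict the
    admissible pairs further.
    """
    best = INF
    for s1, s2 in product(reachable[s], repeat=2):
        cost = (f_left.get(s1, INF) + t[s].get(s1, INF)
                + f_right.get(s2, INF) + t[s].get(s2, INF))
        best = min(best, cost)
    return best
```

For instance, placing one child in a species reachable with no secondary arc and the other in a species reachable with one secondary arc yields a combined cost of $1$; the costs of the two subtrees and of the two connecting paths simply add up.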
\setlength{\textfloatsep}{2pt}
\begin{algorithm}[!t]
\KwData{A binary DS-tree $(D', l')$ which is an LBR of some subtree of $D$, an LGT network $N$, the desired species $s$ for $r(D')$, a cost function $f$ on the leaves of $D'$}
\KwResult{The minimum cost to reconcile $D'$ with $N$ such that $\alpha_\textsc{last}(r(D')) = s$}
\caption{reconcileLBR($D', N, s, f)$}
Set $f' = f$ (we maintain temporary costs $f'$ for $D'$)
\For{$g \in I(D')$ in post-order traversal}{
\For{$s' \in V(N)$ in post-order traversal}{
\uIf{$l'(g) = \ensuremath{\mathbb{S}}\xspace$}{
\uIf{\man{$s'$ has two children and } $(s', s'_l), (s', s'_r) \in E_p$}{
$cost12 = \min_{(s_1, s_2) \in P(s'_l) \times P(s'_r)} (f'(g_l, s_1) + t(s'_l, s_1) + f'(g_r, s_2) + t(s'_r, s_2))$
$cost21 = \min_{(s_1, s_2) \in P(s'_l) \times P(s'_r)} (f'(g_r, s_1) + t(s'_l, s_1) + f'(g_l, s_2) + t(s'_r, s_2))$
$f'(g, s') = \min(cost12, cost21)$ \label{line:the-s-case}
}
\ElseIf{$s'$ is the tail of a secondary arc $(s', s'')$ ($s'' \in \{s'_l, s'_r\}$)}{
$cost12 = 1 +
\min_{(s_1, s_2) \in P(s') \times P(s'')} (f'(g_l, s_1) + t(s', s_1) + f'(g_r, s_2) + t(s'', s_2))$
$cost21 = 1 +
\min_{(s_1, s_2) \in P(s') \times P(s'')} (f'(g_r, s_1) + t(s', s_1) + f'(g_l, s_2) + t(s'', s_2))$
$f'(g, s') = \min(cost12, cost21)$ \label{line:the-t-case}
}
}
\ElseIf{$l'(g) = \ensuremath{\mathbb{D}}\xspace$}{
$f'(g, s') = \min_{(s_1, s_2) \in P(s') \times P(s')} (f'(g_l, s_1) + t(s', s_1) + f'(g_r, s_2) + t(s', s_2))$ \label{line:the-d-case}
}
}
}
return $f'( r(D'), s)$
\end{algorithm}
Let $g \in I(D)$ with children $g_1, \ldots, g_k$.
A binary DS-tree $(D', l')$ with root $g$ and leaf set $g_1, \ldots, g_k$ such that $l'(g') = l(g)$ for every $g' \in I(D')$ will be called a \emph{local binary refinement}
of $g$ (we write LBR for short). We denote by $\mathcal{B}(g)$ the set of possible LBRs of $g$.
For $s \in V(N)$, denote by $P(s)$ the set of vertices of $N$ that can be reached by some directed path starting from $s$,
and let $t(s, s')$ denote the minimum number of secondary arcs necessary to go from $s$ to $s'$
(note that $t(s, s')$ is easy to compute using weighted shortest path algorithms).
We let $t(s, s') = \infty$ if there is no path from $s$ to $s'$.
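Since every arc of $N$ is either principal (cost $0$) or secondary (cost $1$), all values $t(s, \cdot)$ for a fixed source can be obtained with a 0--1 breadth-first search rather than a general weighted shortest-path algorithm. A minimal Python sketch, with illustrative adjacency dictionaries of our own naming:

```python
from collections import deque

INF = float("inf")

def min_secondary_arcs(source, principal, secondary):
    """t(source, .): fewest secondary arcs on any directed path from source.

    principal / secondary: adjacency dicts {node: [successors]} for the two
    arc types of the network.  Principal arcs have weight 0 and secondary
    arcs weight 1, so a deque-based 0-1 BFS computes all distances at once.
    """
    dist = {source: 0}
    dq = deque([source])
    while dq:
        u = dq.popleft()
        for v in principal.get(u, []):       # weight-0 arc: push to the front
            if dist[u] < dist.get(v, INF):
                dist[v] = dist[u]
                dq.appendleft(v)
        for v in secondary.get(u, []):       # weight-1 arc: push to the back
            if dist[u] + 1 < dist.get(v, INF):
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist   # nodes absent from dist have t(source, .) = infinity
```

Running this once per source gives the same all-pairs table as a generic shortest-path computation, in time linear in the size of $N$ per source.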
The algorithm $minTransferCost$ traverses $D$ in post-order and,
for each node $g$ and each LBR $D'$ in $\mathcal{B}(g)$,
calls $reconcileLBR$ to reconcile $D'$. Note that in the case that $g$ is binary,
only one LBR is tested, namely the tree with two leaves $g_l$ and $g_r$.
The proof of correctness can be done by induction over the height of $D_g$ and can be found in the Appendix.
For the complexity, we first compute the all-pairs shortest paths
in $N$ in time $O(|V(N)|^3)$ (this is only done once and will not contribute to the final complexity).
It is known that the number of binary trees on $k$ leaves
is
\mj{$(2k-3)!! = O(2^kk!)$~\citep{felsenstein2004inferring}}
which bounds the size of each set of LBRs.
The main algorithm computes $\mathcal{B}(g)$ up to $|V(D)||V(N)|$
times. Each member of each $\mathcal{B}(g)$ results in a call to $reconcileLBR$, which is done with a tree $D'$ on
at most $k$ leaves. Then in this subroutine for each $(g, s)$ pair with $g \in V(D')$
and $s \in V(N)$, $O(|V(N)|^2)$ pairs of the form $(s_1, s_2)$ are tested --
this takes time $O(k|V(N)|^3)$.
The total time is thus
$O(2^{k}k!k|V(D)||V(N)|^4)$.
\ml{The space taken by the algorithm is $O(|V(D)||V(N)| + |V(N)|^2)$. To see this, observe that $O(|V(N)|^2)$ space is needed to store the aforementioned all-pairs shortest path values and $O(|V(D)||V(N)|)$ space is needed for the $f(g, s)$ values. Each enumerated $(D', l') \in \mathcal{B}(g)$ takes space $O(k) = O(|V(D)|)$, which does not add to the space complexity if only the current such $(D', l')$ is kept in memory at all times. Also, one can check that $reconcileLBR$ can be done without additional space (the $P(s)$ sets can be computed on the fly when needed).}
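The double-factorial bound on the number of LBRs can be checked by direct enumeration, via the standard leaf-insertion construction (a small Python sketch; the nested-tuple encoding of trees is our own):

```python
def num_binary_topologies(k):
    """(2k-3)!!, the number of rooted binary tree topologies on k >= 2 leaves."""
    out, n = 1, 2 * k - 3
    while n > 1:
        out *= n
        n -= 2
    return out

def insert_leaf(tree, leaf):
    """Yield every tree obtained by grafting `leaf` onto an edge of `tree`
    (including the edge above the root); trees are nested (left, right) tuples."""
    yield (tree, leaf)
    if isinstance(tree, tuple):
        left, right = tree
        for new_left in insert_leaf(left, leaf):
            yield (new_left, right)
        for new_right in insert_leaf(right, leaf):
            yield (left, new_right)

def all_topologies(leaves):
    """All rooted binary topologies on the given leaf labels."""
    trees = [leaves[0]]
    for leaf in leaves[1:]:
        trees = [t2 for t in trees for t2 in insert_leaf(t, leaf)]
    return trees
```

Each topology is produced exactly once because the newest leaf is always grafted as a right child; a tree on $m$ leaves has $2m-1$ grafting positions, which is precisely the factor in $(2k-3)!!$.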
\begin{theorem}\label{thm:dp-algo-is-ok}
Algorithm $minTransferCost$ is correct. Moreover, it runs in time \linebreak
$O(2^{k}k!k|V(D)||V(N)|^4)$ \ml{and space $O(|V(N)||V(D)| + |V(N)|^2)$}.
\end{theorem}
\ml{Note that while we focused on minimizing the contribution of the $k$ parameter in the above algorithm, it is plausible that techniques developed for similar dynamic programming algorithms in~\cite{kordi2017exact,mykowiecka2017inferring} could help reduce the $|V(D)||V(N)|^4$ portion of the complexity. In essence, a factor of $|V(N)|^2$ is saved in~\cite{kordi2017exact,mykowiecka2017inferring} by defining $f(g, s)$ as the best cost of a reconciliation in which $\alpha_{\textsc{last}}(g)$ is mapped to any node reachable from $s$ (instead of requiring $s$ itself), which avoids having to minimize over all reachable pairs $(s_1, s_2)$ for every node of $D$ as in our algorithm.}
\section{With unknown transfer highways}\label{sec:unknownhighways}
The set of secondary arcs on a species network cannot always be known with confidence. In fact, reconciliation is sometimes used to infer such arcs on a given species tree~\citep{THL2011}.
In this section, we remove the assumption that transfer arcs are known. We are given a species \emph{tree} $S$ with $|L(S)| > 1$, and
the secondary arcs $E_s$ are to be determined in a time-consistent manner. The question is whether,
for a relation graph $R$, there is
a species network $N$ with base tree $T_0(N) = S$ such that $R$ is $N$-consistent.
\begin{definition}
Let $S$ be a species tree.
We say \man{that a relation graph} $R$ is \ml{\emph{$S$-base-consistent}} (using $k$ transfers) if
there exists a time-consistent \cs{LGT} network $N$ such that
$T_0(N) = S$ and
$R$ is $N$-consistent (using $k$ transfers).
\end{definition}
We will show that a relation graph $R$ is always \ml{\emph{$S$-base-consistent}},
provided there is a $DS$-tree $(D, l)$ that displays $R$. In fact, we prove that \emph{any} binary $DS$-tree
can be made to ``agree'' with any species tree, no matter how inconsistent they appear to be (provided that each $DS$-tree leaf can be mapped to a corresponding species tree leaf).
Beforehand, we can easily establish the equivalence between relation graphs and $DS$-trees as we did for $N$-consistency.
We say that a $DS$-tree $(D, l)$ is \ml{\emph{$S$-base-reconcilable}} (using $k$ transfers) if there exists a
time-consistent species network $N$
such that $T_0(N) = S$ and $(D, l)$ is $N$-reconcilable (using $k$ transfers).
\begin{lemma}\label{lem:equiv-ds-tree-unknown}
Let $R$ be a relation graph and $S$ be a species tree.
Then $R$ is \ml{$S$-base-consistent} (using $k$ transfers) if and only if
there exists a least-resolved $DS$-tree $(D, l)$ \mj{that displays $R$} and a binary refinement $(D', l')$ of $(D, l)$ such that
$(D', l')$ is \ml{$S$-base-reconcilable} (using $k$ transfers).
\end{lemma}
To show that any $DS$-tree $(D, l)$ is \ml{$S$-base-reconcilable}, we add to $S$ a set of secondary arcs $E_s$ of size
\mj{$O(h(D)|V(S)|^2)$,} \man{where $h(D)$ is the height of $D$ (see below)}.
\ml{We then obtain a reconciliation $\alpha$ in which $e(\alpha_{\textsc{last}}(u)) = \mathbb{T}$ for every internal node $u$ of $D$}, which might be necessary in some cases.
For a node $v \in V(D)$, we denote by $d(v)$ the
\emph{depth} of $v$, which is the number of edges on the path between $v$ and $r(D)$.
The \emph{height} of $D$, denoted $h(D)$, is the maximum depth of a node of $D$.
Let $m = |L(S)|$, and let $(s_1, \ldots, s_m)$ be an arbitrary ordering of $L(S)$.
Recall that for $i \in [m]$, $s_i$ is a leaf of $S$, and that $p(s_i)s_i$ refers to the arc from the parent of $s_i$ to $s_i$.
We construct the network $N(D)$ from $S$ using the following algorithm:
\setlength{\textfloatsep}{2pt}
\begin{algorithm}[H]
\caption{constructNetwork($D, S$)}
\label{algo:everything-consistent}
\For{$d = 0$ to $h(D) + 1$}{
\For{$i = 1$ to $m$}{
\For{$j = 1$ to $m$, $j \neq i$}{
Subdivide the arc $p(s_i)s_i$, creating a donor node $\sdon{i}{j}{d}$ \;
Subdivide the arc $p(s_j)s_j$, creating a receiver node $\srec{j}{i}{d}$ \;
Add the secondary arc $(\sdon{i}{j}{d}, \srec{j}{i}{d})$ to $E_s$ \;
}
}
}
\end{algorithm}
Thus we add every transfer from the $s_1$ branch to the $s_i$ branch with $i \neq 1$,
then every transfer from the $s_2$ branch to the other $s_i$ branches, and so on, and
repeat this process $h(D) + 2$ times. Note that $p(s_i)$ changes with each subdivision.
It is not hard to see that $N(D)$ is time-consistent,
\man{since each time we insert a new arc $(x, y)$, its two endpoints $x$ and $y$ are
below every other previously inserted node.}
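The bookkeeping above can be made concrete by listing the arcs the algorithm inserts; the Python sketch below (which records only arc labels, not the subdivided tree, and whose names are ours) confirms the $O(h(D)|V(S)|^2)$ count stated earlier:

```python
def secondary_arcs(h, m):
    """Secondary arcs inserted by constructNetwork for a DS-tree of height h
    and m = |L(S)| species: one arc (don_{i,j,d}, rec_{j,i,d}) per round
    d = 0, ..., h + 1 and per ordered pair i != j, mirroring the three
    nested loops of the algorithm."""
    return [(("don", i, j, d), ("rec", j, i, d))
            for d in range(h + 2)
            for i in range(1, m + 1)
            for j in range(1, m + 1)
            if j != i]
```

In particular, exactly $(h(D)+2)\,m\,(m-1)$ secondary arcs are added, all pairwise distinct.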
\begin{lemma}\label{lem:all-ds-trees-reconcilable}
Let $(D, l)$ be any binary $DS$-tree and let $N := N(D)$ be the species network obtained from $S$ after
applying Algorithm~\ref{algo:everything-consistent}.
Then $(D, l)$ is $N$-reconcilable.
\end{lemma}
The detailed proof can be found in the Appendix. The idea is that each $v \in I(D)$ at depth
$d(v)$ has the
secondary edge $(\sdon{i}{j}{d(v)}, \srec{j}{i}{d(v)})$
\mj{at its disposal.}
It can be shown that for any $v \in I(D)$ and any distinct $s_i, s_j \in L(N)$,
$D_v$ can be reconciled with $N$ such that $\alpha(v) = (\sdon{i}{j}{d(v)})$.
The idea is illustrated in Figure~\ref{fig:dumb-reconciliation}. The highest node of $D$ is mapped to a highest donor node of $N$, and the descendants transfer back and forth, each time being mapped to a deeper donor node of $N$.
\begin{figure}
\caption{On the left, a species tree with two leaves, with \mj{the horizontal} secondary arcs added by Algorithm~\ref{algo:everything-consistent}.}\label{fig:dumb-reconciliation}
\end{figure}
\begin{theorem}\label{thm:sconsistent-iff-dstree}
A relation graph $R$ is \ml{$S$-base-consistent} if and only if
there exists a $DS$-tree $(D, l)$ that displays $R$.
Therefore, deciding if a relation graph $R$ is \ml{$S$-base-consistent} can be done in polynomial time.
\end{theorem}
Thus, unlike $N$-consistency,
deciding \ml{$S$-base-consistency} of $R$ can be done quickly by verifying if $R$ admits a $DS$-tree.
However, the explanation of $R$ resulting from the above algorithm
will produce scenarios with many transfers, all of which are located between a leaf and its parent. Thus it makes sense to ask
if there is a scenario with at most $k$ transfers.
This problem is closely related to reconciling a gene tree with a species tree while minimizing the number of transfers. In~\cite{THL2011}, this problem is shown to be NP-hard.
In fact, we present a reduction for minimizing transfers that is very similar in spirit to the one given in~\cite{THL2011}.
There are, however, many differences between their problem and ours that prevent us from using the previous reduction as a black box for our purposes.
First, our definition of reconciliation is different, and in particular,
in~\cite{THL2011}, transfer-loss events are not allowed.
Also, in the $DS$-tree formulation derived from Lemma~\ref{lem:equiv-ds-tree}, we are given which
nodes of $D$ must be speciations, and which must be duplications.
Finally, the authors require that the output network contains no directed cycle,
whereas we require time-consistency, which is more restrictive.
\man{We invite the interested reader to consult the last \cs{section of the Appendix for} details.}
\begin{theorem}\label{thm:hard-unknown-highways}
The problem of deciding if a relation graph $R$ is \ml{$S$-base-consistent} using $k$ transfers
is NP-hard, even if the least-resolved $DS$-tree $(D, l)$ for $R$ is binary.
\end{theorem}
\section{\mj{Discussion}}
In this work, we have shown that consistency of relations in the presence of transfers is
computationally hard to deal with, making its application difficult in practice. One possible avenue would be to attempt to apply our FPT algorithm to real datasets.
A similar algorithm was reported in~\cite{kordi2017exact} to be able to handle nodes with up to
8 children, so a next step would be to check the size of non-binary nodes of DS-trees.
It would also be interesting to study the problem of error correction of relations
in the presence of transfers; although this is almost certainly NP-hard,
approximation or FPT algorithms may be applicable.
\subsection*{Acknowledgment}
Version 6 of this preprint has been peer-reviewed and recommended by Peer Community In Mathematical and Computational Biology (\url{https://doi.org/10.24072/pci.mcb.100009}).
\subsection*{Data, script and code availability}
There is no data, script or code associated with the work presented in this paper.
\subsection*{Conflict of interest disclosure}
The authors of this preprint declare that they have no financial conflict of interest with the content of this article.
Celine Scornavacca is one of the PCI Math Comp Biol recommenders.
\printbibliography[notcategory=ignore]
\section*{Appendix}
Here we include the details of the proofs that were left out of the main text.
\noindent
\textbf{Lemma \ref{lem:equiv-ds-tree}.}
\emph{Let $N$ be an LGT network and $R = (\Gamma, E)$ a relation graph. Then $R$ is $N$-consistent
(using $k$ transfers)
if and only if
there exists a DS-tree $(D, l)$ for $\Gamma$ such that $R(D, l) = R$ and such that $(D, l)$ is $N$-reconcilable (using $k$ transfers).}
\begin{proof}
($\Rightarrow$) Let $(G, \alpha)$ be a gene tree reconciled with $N$ such that $(G, \alpha)$ displays $R$ using $k$ transfers, and let $e^*$ be a labeling such that $e^*(u,i) \in \{\ensuremath{\mathbb{T}}\xspaceS,\ensuremath{\mathbb{T}}\xspaceD\}$ if $e(u,i) = \ensuremath{\mathbb{T}}\xspace$, $e^*(u,i) = e(u,i)$ if $e(u,i) \neq \ensuremath{\mathbb{T}}\xspace$, and if $xy \in E$ then $e^*(\textsc{lca}_G(x,y), \textsc{last}) \in \{\ensuremath{\mathbb{S}}\xspace,\ensuremath{\mathbb{T}}\xspaceS\}$,
and otherwise $e^*(\textsc{lca}_G(x,y), \textsc{last}) \in \{\ensuremath{\mathbb{D}}\xspace,\ensuremath{\mathbb{T}}\xspaceD\}$.
Now define a binary DS-tree $(D,l)$ as follows.
Let $D = G$, and let $l(u) = \ensuremath{\mathbb{S}}\xspace$ if \linebreak $e^*(\alpha_\textsc{last}(u)) \in \{\ensuremath{\mathbb{S}}\xspace,\ensuremath{\mathbb{T}}\xspaceS\}$, and $l(u) = \ensuremath{\mathbb{D}}\xspace$ otherwise (in which case $e^*(\alpha_\textsc{last}(u)) \in \{\ensuremath{\mathbb{D}}\xspace,\ensuremath{\mathbb{T}}\xspaceD\}$).
Observe that by definition of $e^*$,
if $l(\textsc{lca}_{D}(x,y)) = \ensuremath{\mathbb{S}}\xspace$ then $xy \in E$,
and if $l(\textsc{lca}_{D}(x,y)) = \ensuremath{\mathbb{D}}\xspace$ then $xy \notin E$.
Thus we have that $R = R(D, l)$.
Also, note that $(D, l)$ is $N$-reconcilable using $k$ transfers,
since $\alpha$ satisfies the conditions of
Definition~\ref{def:nreconciable}.
($\Leftarrow$): Let $(D, l)$ be a $DS$-tree such that $R(D, l) = R$.
Note that $D$ is not necessarily binary.
Let $(D', l')$ be a binary refinement
of $(D, l)$ such that $(D', l')$ is $N$-reconcilable (such a refinement is assumed to exist by the lemma statement and by the definition of $N$-reconcilable for non-binary gene trees).
Since $(D', l')$ is $N$-reconcilable, there exists $\alpha'$ such that $(D', \alpha')$ is a reconciled gene tree with respect to $N$ such that
for every $u \in I(D')$, $l'(u) = \ensuremath{\mathbb{S}}\xspace$ implies $e(\alpha'_{\textsc{last}}(u)) \in \{\ensuremath{\mathbb{S}}\xspace, \ensuremath{\mathbb{T}}\xspace\}$
and $l'(u) = \ensuremath{\mathbb{D}}\xspace$ implies $e(\alpha'_{\textsc{last}}(u)) \in \{\ensuremath{\mathbb{D}}\xspace, \ensuremath{\mathbb{T}}\xspace\}$.
Define $e^*$ as follows:
if $e(u, i) \neq \ensuremath{\mathbb{T}}\xspace$, then $e^*(u, i) = e(u, i)$;
otherwise if $e(u, i) = \ensuremath{\mathbb{T}}\xspace$, if $l'(u) = \ensuremath{\mathbb{S}}\xspace$ then $e^*(u, i) = \ensuremath{\mathbb{T}}\xspaceS$ and if $l'(u) = \ensuremath{\mathbb{D}}\xspace$ then $e^*(u, i) = \ensuremath{\mathbb{T}}\xspaceD$.
Note that no additional transfer is created in this manner, and hence $e^*$ still uses $k$ transfers.
Also, for any pair of distinct genes $x, y \in \Gamma$ with $u = \textsc{lca}_{D'}(x, y)$,
$l'(u) = \ensuremath{\mathbb{S}}\xspace$ implies $e^*(\alpha'_{\textsc{last}}(u)) \in \{\ensuremath{\mathbb{S}}\xspace, \ensuremath{\mathbb{T}}\xspaceS\}$
and $l'(u) = \ensuremath{\mathbb{D}}\xspace$ implies $e^*(\alpha'_{\textsc{last}}(u)) \in \{\ensuremath{\mathbb{D}}\xspace, \ensuremath{\mathbb{T}}\xspaceD\}$.
It follows that $(D', \alpha')$ displays $R$.
\end{proof}
\noindent
\textbf{Lemma \ref{lem:MWACTconstruction}.}
\emph{Let $H = (V = V_1 \cup V_2 \cup \dots \cup V_k, E)$ be an instance of {\sc $k$-Multicolored Clique}.
Then in polynomial time, we can construct an instance $(T,X,w)$ of \textsc{ACT} such that
$(T,X,w)$ has an incomparable assignment of weight $< \infty$ if and only if
$H$ has a $k$-multicolored clique.
Furthermore, if an incomparable assignment of weight $w< \infty$ exists,
then there exists an incomparable assignment with weight $\le k' = k^2+2k$,
and $(T,X,w)$ satisfies the following properties:
\begin{itemize}
\item $w(x,v) \in \{0,1,\infty\}$ for all $x \in X, v \in V(T)$;
\item $w(x,v)= 0$ for exactly one $v$ for each $x \in X$;
\item If $w(x,v) = 0$ then $w(y,v) = \infty$ for all $y \neq x$;
\item for any $x \in X$, $u,v \in V(T)$ such that $w(x,u),w(x,v) < \infty$, $u$ and $v$ are incomparable.
\end{itemize}}
\begin{proof}
{\bf Construction of \textsc{ACT} instance:}
Let $H = (V = V_1 \cup V_2 \cup \dots \cup V_k, E)$ be an instance of {\sc $k$-Multicolored Clique}.
We now construct a tree $T$ together with a set $X$ and cost function $w:X \times V(T) \rightarrow \mathbb{N}_0 \cup \{\infty\}$.
For each element $x \in X$, there will be a single ``in''-element $x\_in$ of $V(T)$, for which $w(x,x\_in) = 0$.
There will also be some number of ``out''-elements $v$ for which $w(x,v) = 1$.
We begin by describing $T$.
$T$ is made up of a series of subtrees, each of which will act as a gadget in our reduction from {\sc $k$-Multicolored Clique}.
Every subtree consists of a root with several leaves as children.
The subtrees of $T$ are as follows:
\begin{itemize}
\item A tree {\bf Start}, with root $s\_in$ and children $class\_i\_in$ for each $i \in [k]$;
\item For each $i \in [k]$, $v \in V_i$, a tree {\bf Choose\_$v$}, with root $v\_in$, and children $class\_i\_out\_v$, together with $u\_to\_i\_out\_v$ for each $u \in V \setminus V_i$ such that $uv \in E$;
\item For each $i \in [k]$, $v \in V_i$, a tree {\bf Cover\_$v$}, with root $v\_out$, and children $count\_v\_in$, together with $v\_to\_j\_in$ for each $j \neq i \in [k]$;
\item For each $i \in [k]$,
a singleton tree consisting of the node $count\_i\_out$.
\end{itemize}
See Figure~\ref{fig:clique_to_mwact}.
Finally we add a root node whose children are the roots of all the subtrees given above.
This concludes our construction of $T$.
\begin{figure*}
\caption{Figures used in the reduction from {\sc $k$-Multicolored Clique}.}\label{fig:clique_to_mwact}
\end{figure*}
The set $X$ contains all vertices from $V$. In addition it contains a `start' element $s$, an element $class\_i$ for each $i \in [k]$, an element $count\_v$ for each $v \in V$, and an element $v\_to\_j$ for each $v \in V_i$ and $j \neq i \in [k]$.
The cost function $w:X \times V(T) \rightarrow \mathbb{N}_0 \cup \{\infty\}$ is defined as follows:
For each $i \in [k], v\in V_i$ and $j \neq i \in [k]$, set $w(s,s\_in)= w(class\_i, class\_i\_in)= w(v,v\_in)=w(count\_v, count\_v\_in)=w(v\_to\_j, v\_to\_j\_in)=0$.
For each $i \in [k]$ and $v \in V_i$, set $w(class\_i, class\_i\_out\_v) = 1$, set $w(v,v\_out) = 1$, and set $w(count\_v, count\_i\_out) = 1$.
(Note that there are therefore multiple elements $x \in X$ for which $w(x, count\_i\_out) = 1$.)
Finally, for each $i \in [k]$ and $v \in V_i$, and each edge $uv \in E$ with $u \in V_j, j\neq i\in[k]$, set $w(v\_to\_j, v\_to\_j\_out\_u)=1$.
For all other $x \in X$ and $v \in V(T)$, set $w(x,v) = \infty$.
This concludes the construction of our \textsc{ACT} instance $(T,X,w)$.
The construction can be done in polynomial time.
We observe that by construction, $w(x,v) \in \{0,1,\infty\}$ for all $x \in X, v \in V(T)$,
$w(x,v)= 0$ for exactly one $v$ for each $x \in X$,
and if $w(x,v) = 0$ then $w(y,v) = \infty$ for all $y \neq x$.
To see that $u$ and $v$ are incomparable for $x \in X$, $u,v \in V(T)$ such that $w(x,u),w(x,v) < \infty$, observe that each subtree in the construction
contains at most one node $z$ with $w(x,z) < \infty$ for each $x \in X$.
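These properties can be machine-checked on a toy instance. The sketch below (with illustrative node names of our own; finite costs are stored in a dictionary and a missing entry means $\infty$) rebuilds the gadgets and lets one verify the listed properties for a two-class instance:

```python
def build_act_instance(classes, edges):
    """Sketch of the ACT instance of the reduction (names illustrative).

    classes: dict i -> list of vertices of V_i; edges: set of frozensets.
    Returns (subtrees, X, w): subtrees maps each gadget root to its
    children; w holds only the finite costs (a missing entry is infinity).
    """
    subtrees, X, w = {}, set(), {}
    subtrees["s_in"] = [f"class_{i}_in" for i in classes]   # Start gadget
    X.add("s"); w[("s", "s_in")] = 0
    for i, Vi in classes.items():
        X.add(f"class_{i}"); w[(f"class_{i}", f"class_{i}_in")] = 0
        subtrees[f"count_{i}_out"] = []                     # singleton tree
        for v in Vi:
            X.update({v, f"count_{v}"})
            w[(v, f"{v}_in")] = 0; w[(v, f"{v}_out")] = 1
            w[(f"count_{v}", f"count_{v}_in")] = 0
            w[(f"count_{v}", f"count_{i}_out")] = 1
            w[(f"class_{i}", f"class_{i}_out_{v}")] = 1
            # Choose_v: class_i_out_v plus u_to_i_out_v for neighbours u
            subtrees[f"{v}_in"] = [f"class_{i}_out_{v}"] + [
                f"{u}_to_{i}_out_{v}" for j, Vj in classes.items() if j != i
                for u in Vj if frozenset((u, v)) in edges]
            # Cover_v: count_v_in plus v_to_j_in for every other class j
            subtrees[f"{v}_out"] = [f"count_{v}_in"] + [
                f"{v}_to_{j}_in" for j in classes if j != i]
            for j, Vj in classes.items():
                if j != i:
                    X.add(f"{v}_to_{j}")
                    w[(f"{v}_to_{j}", f"{v}_to_{j}_in")] = 0
                    for u in Vj:
                        if frozenset((u, v)) in edges:
                            w[(f"{v}_to_{j}", f"{v}_to_{j}_out_{u}")] = 1
    return subtrees, X, w
```

On $k = 2$, $V_1 = \{a\}$, $V_2 = \{b\}$, $E = \{ab\}$, one can assert that every stored cost is $0$ or $1$, that each $x \in X$ has exactly one zero-cost node, and that zero-cost nodes are private to their element.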
It remains to show that $(T,X,w)$ has an incomparable assignment of weight $< \infty$ if and only if
$H$ has a $k$-multicolored clique and that if an incomparable assignment of weight $w< \infty$ exists,
then there exists an incomparable assignment with weight $\le k'$.
To do this, we will first show that the existence of a $k$-multicolored clique implies the existence of an incomparable assignment with weight $\le k'$,
and then show that the existence of an incomparable assignment of weight $w< \infty$ implies the existence of a $k$-multicolored clique.
{\bf $k$-multicolored clique implies assignment of weight $\le k'$:}
First suppose that a $k$-multicolored clique $C$ exists, and let $v_i$ denote the single vertex in $C \cap V_i$, for each $i \in [k]$.
Let $f:X \rightarrow V(T)$ be defined as follows:
Set $f(s) = s\_in$.
For each $i \in [k]$, set $f(class\_i) = class\_i\_out\_{v_i}$.
For each $i \in [k]$, set $f(v_i) = {v_i}\_out$, and for all other $v \in V$ set $f(v)=v\_in$.
For each $i \in [k]$, set $f(count\_{v_i}) = count\_i\_out$, and for all other $v \in V$ set $f(count\_v)=count\_v\_in$.
For each $i \in [k]$, $j \neq i \in [k]$, set $f({v_i}\_to\_j) = {v_i}\_to\_j\_out\_{v_j}$
(note that ${v_i}\_to\_j\_out\_{v_j}$ exists because $v_j \in V_j$ and $v_i,v_j$ are adjacent).
For all other $v \in V_i$, set $f(v\_to\_j) = v\_to\_j\_in$.
Observe that $\sum_{x\in X}w(x,f(x)) = k + k + k + k(k - 1) = k^2+2k = k'$.
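As a quick sanity check of this count (a trivial sketch; the function name is ours):

```python
def assignment_weight(k):
    """Weight of the assignment f built from a k-multicolored clique:
    k elements class_i, k elements v_i, and k elements count_{v_i},
    each assigned at cost 1, plus k(k-1) elements v_i_to_j at cost 1;
    every other element is assigned at cost 0."""
    return k + k + k + k * (k - 1)
```

which indeed equals $k^2 + 2k = k'$ for every $k$.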
It remains to show that $f(x)$ and $f(y)$ are incomparable for each $x \neq y \in X$.
As nodes in different subtrees are pairwise incomparable, it is enough to show that, within each subtree, there are no comparable nodes $y,z$ assigned to different elements of $X$.
In {\bf Start}, the root $s\_in$ is assigned but none of the children $class\_i\_in$ are assigned, so we have no comparable assigned nodes.
In {\bf Choose\_$v$}, if $v = v_i$ for some $i \in [k]$, then the root ${v_i}\_in$ is not assigned, and as all other nodes are children of ${v_i}\_in$, there are no comparable assigned nodes.
For all other $v$ in class $V_i$, the root ${v_i}\_in$ is assigned. However, the child $class\_i\_out\_v$ is not assigned (as $class\_i$ is assigned to $class\_i\_out\_{v_i}$), and the other children $u\_to\_i\_out\_v$ are not assigned ($u\_to\_i\_out\_v$ is only assigned if $v = v_i, u = v_j$ for some $i \neq j \in [k]$).
In {\bf Cover\_$v$}, if $v = v_i$ for some $i \in [k]$, then the root ${v_i}\_out$ is assigned, but none of its children ${v_i}\_to\_j\_in$ or $count\_{v_i}\_in$ are assigned, as ${v_i}\_to\_j$ is assigned to ${v_i}\_to\_j\_out\_{v_j}$ and $count\_{v_i}$ is assigned to $count\_i\_out$.
For other $v \in V$, the root $v\_out$ is not assigned, and as all other nodes are children of $v\_out$, there are no comparable assigned nodes.
The nodes $count\_i\_out$ are the only nodes in $T$ that may be assigned to more than one element of $X$. However, by definition of $f$ we have that for each $i \in [k]$, $count\_{v_i}$ is the only element assigned to $count\_i\_out$.
As $\sum_{x\in X}w(x,f(x)) \le k'$ and $f(x),f(y)$ are incomparable for all $x\neq y \in X$, we have that $(T,X,w,k')$ is a {\sc Yes}-instance, as required.
{\bf Assignment of finite weight implies $k$-multicolored clique:}
Suppose $f:X \rightarrow V(T)$ is an incomparable assignment with $\sum_{x \in X}w(x,f(x)) < \infty$.
Note that $f(s) = s\_in$, as there is no other node $z$ for which $w(s,z)< \infty$.
It follows that $f(class\_i) \neq class\_i\_in$ for each $i \in [k]$.
Therefore $f(class\_i) = class\_i\_out\_v$ for some $v \in V_i$.
Denote this $v$ by $v_i$.
As $class\_i\_out\_{v_i}$ is a child of ${v_i}\_in$ in {\bf Choose\_$v_i$}, we must have that $f(v_i) \neq {v_i}\_in$, and so instead $f(v_i) = {v_i}\_out$.
As ${v_i}\_out$ is the root of {\bf Cover\_$v_i$}, it follows that for each $j \neq i \in [k]$, we cannot have $f({v_i}\_to\_j) = {v_i}\_to\_j\_in$.
Therefore $f({v_i}\_to\_j) = {v_i}\_to\_j\_out\_u$ for some $u \in V_j$ adjacent to $v_i$.
Denote this $u$ by $u_{ij}$.
It remains to show that $u_{ij} = v_j$ for each $i\neq j \in [k]$, as this implies that $v_1, \dots, v_k$ form a clique.
As $f({v_i}\_to\_j) = {v_i}\_to\_j\_out\_{u_{ij}}$ is a child of ${u_{ij}}\_in$ in {\bf Choose\_$u_{ij}$}, we must have that $f(u_{ij}) \neq {u_{ij}}\_in$, and so instead $f(u_{ij}) = {u_{ij}}\_out$.
As $count\_u_{ij}\_in$ is a child of ${u_{ij}}\_out$ in {\bf Cover\_$u_{ij}$}, we must have that $f(count\_u_{ij}) \neq count\_u_{ij}\_in$ and so instead $f(count\_u_{ij}) = count\_j\_out$ (recall that $u_{ij} \in V_j$).
By a similar argument, since $f(v_j) = {v_j}\_out$ we also have $f(count\_v_j) = count\_j\_out$.
But then $f$ is not an incomparable assignment unless $u_{ij} = v_j$ (since $f(count\_u_{ij})$ and $f(count\_v_j)$ are the same node, and therefore comparable).
Therefore we must have that $u_{ij} = v_j$ for all $i\neq j \in [k]$, as required.
\end{proof}
\noindent
\textbf{Lemma \ref{lemma:NCreduction}.}
\emph{ Let $(T,X,w)$ be an instance of \textsc{ACT}, such that $w(x,v) \in \{0,1,\infty\}$ for all $x \in X, v \in V(T)$, $w(x,v)= 0$ for exactly one $v$ for each $x \in X$,
\mj{if $w(x,v) = 0$ then $w(y,v) = \infty$ for all $y \neq x$,}
and for any $x \in X$, $u,v \in V(T)$ such that $w(x,u),w(x,v) < \infty$, $u$ and $v$ are incomparable.
\\
Then in polynomial time, we can construct a least-resolved DS-tree $(D, l)$ and \mjn{time-consistent} LGT network $N$ such that for any integer $k$,
$(T,X,w)$ has an incomparable assignment of cost at most $k$ if and only if there exists a binary refinement $(D', l')$ of $(D, l)$ such that $(D', l')$ is $N$-reconcilable using at most $2k$ transfers.}
\begin{proof}
Let $(T,X,w)$ be an instance of \textsc{ACT} satisfying the specified properties.
We begin by adjusting $T$ to ensure that it is binary.
If an internal node $u$ has a single child, we add an additional child of $u$ as a leaf of the tree.
If $u$ has more than two children, we refine $u$ into a binary tree with the same leaf set (treating $u$ as the root of this binary tree).
For any new node $v$ introduced in this way, we set $w(x,v)=\infty$ for all $x \in X$.
Observe that for the resulting tree $T'$, two nodes $u,v \in V(T)$ are incomparable in $T$ if and only if they are incomparable in $T'$.
Thus, changing $T$ in this way gives us an equivalent instance.
So we may now assume that $T$ is binary.
We next describe how to construct a least-resolved DS-tree $(D, l)$.
Let $\Gamma$ be a set of genes as follows.
For each $x\in X$, $\Gamma$ contains two new genes $x\_left$ and $x\_right$.
Let $\Sigma$ contain species $spec\_x\_left$ and $spec\_x\_right$ for each $x\in X$, with $\sigma(x\_left) = spec\_x\_left$, $\sigma(x\_right)=spec\_x\_right$.
Let the DS-tree $(D, l)$ contain a speciation node $r$ as the root, and let $\{gene\_x : x \in X\}$ be the set of children of $r$.
For each $x \in X$, let $gene\_x$ be a duplication node with children $x\_left$ and $x\_right$.
Note that $(D, l)$ is a least-resolved DS-tree.
We next describe how to construct the LGT network $N$, beginning with the distinguished base tree $T_0(N)$.
Initially, let $T_0(N)=T$, the input tree of our \textsc{ACT} instance (in its binary version).
To avoid confusion with the MWACT instance later, we rename each node $v \in V(T)$ to $spec\_v$.
In addition, for each $x \in X$
let $u_x$ be the unique node in $T$ for which $w(x,u_x)=0$, with $spec\_u_x$ the corresponding node in $N$.
Now for each $v \in V(T)$,
we will add $spec\_v\_left$ and $spec\_v\_right$ as descendants \mj{(not necessarily children)} of $spec\_v$, as follows.
If $spec\_v$ is a leaf in $T_0(N)$, then add $spec\_v\_left$ and $spec\_v\_right$ as children of $spec\_v$.
Otherwise, add $spec\_v\_left$ and $spec\_v\_right$ as descendants of different children of $spec\_v$.
(This can be done by subdividing any arc incident to a leaf descended from a given child of $spec\_v$, and adding $spec\_v\_left$ or $spec\_v\_right$ as a child of the newly added node.)
Observe that after $spec\_v\_left$ and $spec\_v\_right$ have been added, $spec\_v$ is the least common ancestor of $spec\_v\_left$ and $spec\_v\_right$. Furthermore this process does not change the least common ancestor of any pair of leaves.
Therefore, after doing this process for each $v \in V(T)$, we will have that for every $v \in V(T)$, $spec\_v$ is the least common ancestor of $spec\_v\_left$ and $spec\_v\_right$.
When $v = u_x$ for some $x \in X$, we also denote $spec\_v\_left$ and $spec\_v\_right$ by $spec\_x\_left$ and $spec\_x\_right$ respectively.
This completes the construction of the distinguished base tree; now we describe how to add secondary arcs.
For each $x \in X$ and each $v \in V(T)$ with $w(x,v)=1$, we do the following.
Add a new tail node between $spec\_v\_left$ and its parent, add a new head node between $spec\_x\_left$ and its parent, and add an arc from the tail to the head as a secondary arc.
Similarly, add a new tail node between $spec\_v\_right$ and its parent, and add a new head node between $spec\_x\_right$ and its parent, and add an arc from the tail to the head as a secondary arc.
Observe that after this, $spec\_v$ has paths to $spec\_x\_left$ and $spec\_x\_right$ in $N$, and these paths each use one secondary arc.
See Figure~\ref{fig:mwact_to_tmnc}.
Furthermore (by virtue of the fact that $w(y, u_x) \neq 1$ for any $x,y \in X$, and therefore a tail node is never added above $spec\_u_x\_left$ or $spec\_u_x\_right$), every path in $N$ has at most one secondary arc.
This completes the construction of the species network $N$, and our problem instance.
\mjn{Observe that $N$ is time-consistent, since each time we insert a new secondary arc, its two endpoints are below every other previously inserted node.}
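The LCA-preservation property of this gadget can be sanity-checked in code. The following is a minimal illustrative sketch (the parent-map encoding and all helper names are ours, not the paper's): it attaches $spec\_v\_left$ and $spec\_v\_right$ below an internal node by subdividing an arc to a leaf under each of two different children, then checks that $spec\_v$ becomes their least common ancestor while the LCA of the pre-existing leaves is unchanged.

```python
# Toy parent-map trees (our own encoding, not the paper's notation).
def depth(parent, u):
    d = 0
    while u in parent:
        u, d = parent[u], d + 1
    return d

def lca(parent, a, b):
    """Least common ancestor via walking up from the deeper node."""
    da, db = depth(parent, a), depth(parent, b)
    while da > db:
        a, da = parent[a], da - 1
    while db > da:
        b, db = parent[b], db - 1
    while a != b:
        a, b = parent[a], parent[b]
    return a

def subdivide_and_attach(parent, leaf, new_inner, new_leaf):
    """Subdivide the arc (parent[leaf], leaf) with new_inner,
    then hang new_leaf from the newly created node."""
    parent[new_inner] = parent[leaf]
    parent[leaf] = new_inner
    parent[new_leaf] = new_inner

# v has two children c1, c2, with leaves a and b below them.
parent = {"c1": "v", "c2": "v", "a": "c1", "b": "c2"}
subdivide_and_attach(parent, "a", "t1", "spec_v_left")
subdivide_and_attach(parent, "b", "t2", "spec_v_right")

# v is now the LCA of the two new leaves, and the LCA of the
# old leaves a, b is unchanged (still v).
assert lca(parent, "spec_v_left", "spec_v_right") == "v"
assert lca(parent, "a", "b") == "v"
```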
We now show that
$(T,X,w)$ has an incomparable assignment of cost at most $k$ if and only if $(D, l)$ is $N$-consistent using at most $2k$ transfers.
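To keep the left-hand side of this equivalence concrete, here is a hedged brute-force sketch of incomparable assignments as used above: it searches over all maps $f:X \rightarrow V(T)$ whose images are pairwise incomparable in $T$ and whose cost $\sum_{x \in X} w(x,f(x))$ is at most $k$. The tree encoding and function names are ours, purely for illustration.

```python
from itertools import product

def is_ancestor(parent, a, b):
    """True iff a equals b or a is a strict ancestor of b
    (parent: child -> parent map; the root has no entry)."""
    while True:
        if a == b:
            return True
        if b not in parent:
            return False
        b = parent[b]

def has_cheap_incomparable_assignment(parent, nodes, X, w, k):
    """Brute force: does f : X -> nodes exist with pairwise-incomparable
    images and total cost sum_x w[(x, f(x))] <= k?  Missing (x, v) pairs
    in w are treated as weight infinity."""
    for images in product(nodes, repeat=len(X)):
        comparable = any(
            is_ancestor(parent, u, v) or is_ancestor(parent, v, u)
            for i, u in enumerate(images) for v in images[i + 1:])
        if comparable:
            continue
        cost = sum(w.get((x, fx), float("inf")) for x, fx in zip(X, images))
        if cost <= k:
            return True
    return False
```

For example, on a two-leaf tree with root $r$ and leaves $a, b$, with $w(x,a)=0$ and $w(y,b)=1$, an assignment of cost $1$ exists but no assignment of cost $0$ does.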
\begin{figure*}
\caption{Part of the species network $N$ constructed in the reduction from \textsc{ACT}.}
\label{fig:mwact_to_tmnc}
\end{figure*}
First suppose that $(D, l)$ is $N$-consistent using at most $2k$ transfers.
We will first show the following claim.
In this claim and its proof, we use the terms ``ancestor'' and ``descendant'' to exclusively refer to ancestors or descendants with respect to the distinguished base tree $T_0(N)$:
\begin{nclaim}\label{claim:0-or-2}
For $x \in X$, suppose $u \in V(N)$ is such that there exist paths from $u$ to $spec\_x\_left$ and from $u$ to $spec\_x\_right$, using at most $k_x$ secondary arcs in total.
If $k_x = 0$ then $u$ is an ancestor of $spec\_u_x$, and otherwise $u$ is an ancestor of
some $spec\_v$ such that $w(x,v) \leq 1$.
Moreover, if $u$ is not an ancestor of $spec\_u_x$ then $k_x \geq 2$.
\end{nclaim}
\begin{proof}
First, recall that $spec\_u_x$ is the least common ancestor of $spec\_x\_left$ and \linebreak $spec\_x\_right$ in $T_0(N)$. Since $k_x = 0$ implies that $u$ is an ancestor of both $spec\_x\_left$ and $spec\_x\_right$, we have that if $k_x = 0$ then $u$ is an ancestor of $spec\_u_x$.
Since there is a path from $u$ to $spec\_x\_left$, $u$ must be an ancestor of $spec\_v\_left$ for some $v$ such that $w(x,v) \leq 1$ (such nodes are the only ones that have a path to $spec\_x\_left$, either using exclusively principal arcs or using a single secondary arc).
Similarly, $u$ must be an ancestor of $spec\_v'\_right$ for some $v'$ such that $w(x,v') \leq 1$.
If $v = v'$ then $u$ is an ancestor of both $spec\_v\_left$ and $spec\_v\_right$ and is therefore an ancestor of $spec\_v$, as required.
So assume that $v \neq v'$. If $u$ is an ancestor of $spec\_v$ or $spec\_v'$ then we are done, and otherwise $u$ must be a descendant of both $spec\_v$ and $spec\_v'$ (since it is an ancestor of descendants of both of these). But this implies that $v$ and $v'$ are comparable, a contradiction as $w(x,v),w(x,v') < \infty$.
Finally, we observe that $k_x < 2$ only if $u$ is an ancestor of at least one of $spec\_x\_left$ and $spec\_x\_right$, since each of the two paths uses at most one secondary arc. Therefore if $k_x < 2$ and $u$ is not an ancestor of $spec\_u_x$, it is a descendant of $spec\_u_x$. But this again implies a contradiction as $u$ is an ancestor of some $spec\_v$ with $w(x,v)=1$, which would then be a descendant of $spec\_u_x$.
\end{proof}
Now consider a binary refinement $(D', l')$ of $(D, l)$ that is $N$-consistent using at most $2k$ transfers.
Thus there exists $\alpha$ such that $(D', \alpha)$ is a reconciled gene tree with respect to $N$.
Note that by construction of $(D, l)$, there is a rooted subtree in $D'$ whose leaves are the set \mj{of duplication nodes} $\{gene\_x: x \in X\}$ and whose internal nodes are all speciation nodes according to $l'$.
For each $x \in X$, there are paths in $D'$ from $gene\_x$ to $x\_left$ and to $x\_right$, and so there are paths in $N$ from $\alpha_\textsc{last}(gene\_x)$ to $\sigma(x\_left) = spec\_x\_left$ and to $\sigma(x\_right)= spec\_x\_right$.
It follows from Claim~\ref{claim:0-or-2} that $\alpha_\textsc{last}(gene\_x)$ is an ancestor of $spec\_v$ for some $v \in V(T)$ such that $w(x,v) \leq 1$.
By construction of $N$, there are no paths to such a $spec\_v$ using a secondary arc, and therefore, as all ancestors of $gene\_x$ in $D'$ are speciation nodes, $\{\alpha_\textsc{last}(gene\_x): x \in X\}$ must form the leaves of a subtree in $T$.
It follows that $\alpha_\textsc{last}(gene\_x)$ and $\alpha_\textsc{last}(gene\_y)$ are incomparable for any $x \neq y \in X$.
Now we can define $f:X \rightarrow V(T)$ as follows.
For each $x \in X$, let $f(x) = u_x$ if $\alpha_\textsc{last}(gene\_x)$ is an ancestor of $spec\_u_x$,
and otherwise let $f(x)$ be a $v \in V(T)$ such that $w(x,v) \leq 1$ and $\alpha_\textsc{last}(gene\_x)$ is an ancestor of $spec\_v$.
As their ancestors $\alpha_\textsc{last}(gene\_x)$ and $\alpha_\textsc{last}(gene\_y)$ are incomparable, it follows that $f(x)$ and $f(y)$ are also incomparable, for any $x \neq y \in X$.
Furthermore, by Claim~\ref{claim:0-or-2} we have that either $\alpha_\textsc{last}(gene\_x)$ is an ancestor of $spec\_u_x$, or the paths from $\alpha_\textsc{last}(gene\_x)$ to $\sigma(x\_left)$ and to $\sigma(x\_right)$ use $2$ secondary arcs.
Therefore $\alpha$ uses $2$ transfer arcs for every $x \in X$ with $w(x,f(x)) = 1$.
Thus $2k \geq 2\sum_{x \in X}w(x,f(x))$, and so $f$ is an incomparable assignment with $\sum_{x \in X}w(x,f(x)) \leq k$, as required.
Now suppose that $(T,X,w)$ has an incomparable assignment $f:X \rightarrow V(T)$ such that $\sum_{x \in X}w(x,f(x)) \leq k$.
We will show that $(D, l)$ has a binary refinement $(D', l')$ that is $N$-reconcilable using at most $2k$ transfers.
In particular, we will show that there is a reconciliation $\alpha$ such that $\alpha_\textsc{last}(gene\_x) = spec\_f(x)$ for all $x \in X$.
Observe first that as $f$ is an incomparable assignment, there exists a subtree $T'$ of $T$ whose leaves are $\{f(x): x \in X\}$.
By refining the root $r$ of $D$ into a subtree isomorphic to $T'$, we get a refinement $(D', l')$ such that $D'$ with the leaves $\{x\_left,x\_right: x \in X\}$ removed has a reconciliation with $N$ using $0$ transfers.
Furthermore this reconciliation $\alpha$ is such that $\alpha_\textsc{last}(gene\_x) = spec\_f(x)$ for all $x \in X$.
It remains to show how to extend $\alpha$ to the leaves $\{x\_left,x\_right: x \in X\}$ of $D'$.
For each $x \in X$, let $P_{x\_left}$ be a path in $N$ from $spec\_f(x)$ to $spec\_x\_left$ using a minimum number of secondary arcs.
By construction, this path uses $0$ secondary arcs if $w(x,f(x)) = 0$, and at most $1$ secondary arc if $w(x,f(x)) = 1$.
Similarly, let $P_{x\_right}$ be a path in $N$ from $spec\_f(x)$ to $spec\_x\_right$ using a minimum number of secondary arcs.
Then for each $x \in X$, we let $\alpha(x\_left) = P_{x\_left}$ and $\alpha(x\_right) = P_{x\_right}$.
It can be seen that $(D', \alpha)$ is a valid reconciliation with respect to $N$ that agrees with $(D', l')$.
Furthermore, $\alpha$ uses $2$ transfers for each $x \in X$ such that $w(x,f(x)) = 1$, and no others.
Therefore $D'$ is reconcilable using at most $\sum_{x \in X}2w(x,f(x)) \leq 2k$ transfers, as required.
\end{proof}
\noindent
\textbf{Theorem \ref{thm:dp-algo-is-ok}.}
\emph{Algorithm $minTransferCost$ is correct. Moreover, it runs in time \linebreak
$O(2^{k}k!k|V(D)||V(N)|^4)$ \ml{and space $O(|V(N)||V(D)| + |V(N)|^2)$}.}
\begin{proof}
We prove the following statement by induction:
for each $g \in V(D)$ and $s \in V(N)$, the algorithm finds the minimum number of required transfers for a reconciliation between
the subtree $D_g$ and $N$ such that $g$ is mapped to $s$.
If $g$ is a leaf of $D$, the statement is easy to see, so suppose $g \in I(D)$.
Let $(\hat{D}_g, \alpha)$ be an optimal solution for $D_g, s$ and $N$, i.e. $\hat{D}_g$ is a binary refinement of $D_g$, $\alpha$ is
a reconciliation between $\hat{D}_g$ and $N$ such that $\alpha_\textsc{last}(g) = s$,
and the pair $(\hat{D}_g, \alpha)$ minimizes the number $t$ of required transfers.
If $g$ is binary, then $g_l$ and $g_r$ are children of $g$ in both $D_g$ and $\hat{D}_g$.
Let $s_1 = \alpha_\textsc{last}(g_l)$ and $s_2 = \alpha_\textsc{last}(g_r)$.
It is clear that $\alpha$ restricted to $\hat{D}_{g_l}$\footnote{By the restriction $\alpha'$ of $\alpha$ to $\hat{D}_{g_l}$, we mean
$\alpha'(v) = \alpha(v)$ for all strict descendants $v$ of $g_l$, and
\mj{$\alpha'(g_l) = (\alpha_\textsc{last}(g_l))$}} yields a reconciliation of
$\hat{D}_{g_l}$ using $f(g_l, s_1)$ transfers, since if there were a refinement of $D_{g_l}$ admitting a reconciliation with fewer transfers
in which $g_l$ is mapped to $s_1$, then we could include this subsolution in $(\hat{D}_g, \alpha)$ and obtain a lower transfer cost.
The same argument holds for $g_r$ and $f(g_r, s_2)$.
We thus need to show that the algorithm will, at some point, consider the scenario of mapping
$g_l$ with $s_1$ and $g_r$ with $s_2$.
If $l(g) = \ensuremath{\mathbb{S}}\xspace$, two cases may occur, according to Definition~\ref{def:DTLrecLGTNetwork}:
(1) $e(\alpha_{\textsc{last}}(g)) = \ensuremath{\mathbb{S}}\xspace$, in which case $\alpha_1(g_l) = s_l$ and $\alpha_1(g_r) = s_r$ (or vice-versa, w.l.o.g.).
This implies $s_1 \in P(s_l)$ and $s_2 \in P(s_r)$, and this scenario is tested on line~\ref{line:the-s-case} of $reconcileLBR$;
(2) $e(\alpha_{\textsc{last}}(g)) = \ensuremath{\mathbb{T}}\xspace$, in which case $(s, s') $
is a transfer-arc, say $s' = s_r$ without loss of generality.
Then
$\alpha_1(g_l) \in \{s, s_l\}$ and $\alpha_1(g_r) = s_r$ (or
vice-versa, w.l.o.g.), which imply $s_1 \in P(s)$ and $s_2 \in P(s_r)$. This is tested by line~\ref{line:the-t-case} of $reconcileLBR$.
If $l(g) = \ensuremath{\mathbb{D}}\xspace$, we have $\alpha_1(g_l) = \alpha_1(g_r) = s$ and thus it is only required that $\alpha_\textsc{last}(g_l) \in P(s)$ and $\alpha_\textsc{last}(g_r) \in P(s)$,
which is tested on line~\ref{line:the-d-case}.
Therefore, the desired scenario of mapping $g_l$ to $s_1$ and $g_r$ to $s_2$ is considered.
One can also observe that no invalid mappings of $g_l$ and $g_r$ are considered by the algorithm
(if $l(g) = \ensuremath{\mathbb{S}}\xspace$, we test only the $s_1$ and $s_2$ that allow $e(\alpha_{\textsc{last}}(g)) \in \{\ensuremath{\mathbb{S}}\xspace, \ensuremath{\mathbb{T}}\xspace\}$,
and similarly for $l(g) = \ensuremath{\mathbb{D}}\xspace$).
The fact that the computed value $f'(g, s)$ (and hence $f(g, s)$) is minimum follows
from the induction hypothesis on $g_l$ and $g_r$.
Suppose instead that $g$ has children $g_1, \ldots, g_k$, $k \geq 3$.
For a fixed $(D', l') \in \mathcal{B}(g)$, by the induction hypothesis we have that
$f(g_i, s')$ is correct for every $i \in [k]$ and $s' \in V(N)$.
Using the same argument as in the binary case, it follows that after calling $reconcileLBR$,
we have correctly computed the minimum number of transfers for
the tree obtained from $D_g$ after replacing $g$ by its local binary refinement $D'$.
The connected subtree $B_g$ of $\hat{D}$ induced by $g, g_1, \ldots, g_k$ is in $\mathcal{B}(g)$,
and hence $minTransferCost$ will find $f(g, s)$ correctly when trying $D' = B_g$.
This concludes the proof, since the \ml{time and space} complexity of the algorithm was argued in the main text.
\end{proof}
\noindent
\textbf{Lemma \ref{lem:equiv-ds-tree-unknown}.}
\emph{Let $R$ be a relation graph and $S$ be a species tree.
Then $R$ is \ml{$S$-base-consistent} (using $k$ transfers) if and only if
there exists a least-resolved $DS$-tree $(D, l)$ that displays $R$ and a binary refinement $(D', l')$ of $(D, l)$ such that
$(D', l')$ is \ml{$S$-base-reconcilable} (using $k$ transfers).}
\begin{proof}
($\Rightarrow$) Assume that $R$ is \ml{$S$-base-consistent} using $k$ transfers. Then there exists an LGT network $N$ such that $T_0(N) = S$ and $R$ is $N$-consistent
using $k$ transfers. Then by Lemma~\ref{lem:equiv-ds-tree}, there is a $DS$-tree $(D, l)$ and a binary refinement
$(D', l')$ such that $(D', l')$ is $N$-reconcilable using $k$ transfers. Thus by definition,
\mj{$(D', l')$}
is \ml{$S$-base-reconcilable} using $k$ transfers.
($\Leftarrow$) Assume that there is a DS-tree $(D, l)$ that displays $R$ and a binary refinement $(D', l')$ of $(D, l)$ such that $(D', l')$ is \ml{$S$-base-reconcilable} using $k$ transfers. Then there is an LGT network $N$ such that $T_0(N) = S$ and $(D', l')$ is $N$-reconcilable using
$k$ transfers. Again, by Lemma~\ref{lem:equiv-ds-tree}, $R$ is $N$-consistent using $k$ transfers.
So $R$ is also \ml{$S$-base-consistent} using $k$ transfers.
\end{proof}
\noindent
\textbf{Lemma \ref{lem:all-ds-trees-reconcilable}.}
\emph{Let $(D, l)$ be a binary $DS$-tree and let $N := N(D)$ be the species network obtained from $S$ after
applying Algorithm~\ref{algo:everything-consistent}.
Then $(D, l)$ is $N$-reconcilable.}
\begin{proof}
We show that for any $v \in I(D)$, the subtree
$(D_v, l)$ is $N$-reconcilable (where here, we slightly abuse notation by using $l$ to label $D_v$). Moreover, we show that if $v$ is not a leaf and $s_i, s_j \in L(S)$ are distinct, then there is a reconciliation $(D_v, \alpha)$ with respect to $N$
such that $\alpha(v) = (\sdon{i}{j}{d(v)})$ and $e(\alpha_\textsc{last}(v)) = \ensuremath{\mathbb{T}}\xspace$
(here, and for the rest of the proof, $d(v)$ refers to the depth of $v$ in $D$, and \emph{not}
its depth in $D_v$).
We use induction on the height $h(D_v)$.
First note that if $h(D_v) = 0$, then the statement is trivially true.
As an additional base case, suppose that $h(D_v) = 1$ and fix some $\sdon{i}{j}{d(v)}$, with $i \neq j$. Then both children $v_l$ and $v_r$ of $v$ are leaves.
Let $s_p = \sigma(v_l)$ and $s_q = \sigma(v_r)$ for some $p, q \in [m]$.
Note that $p = q$ is possible.
We find two paths $P_1$ and $P_2$ that correspond to $\alpha(v_l)$ and $\alpha(v_r)$.
We first claim that in $N$, there exists a directed path $P_1 = (\sdon{i}{j}{d(v)} = x_1, x_2, \ldots, x_{k_1} = s_p)$
such that $x_2 = \srec{j}{i}{d(v)}$ (i.e. $P_1$ starts with the $(\sdon{i}{j}{d(v)}, \srec{j}{i}{d(v)})$ arc).
Observe that there exists a directed path $P_1'$ from
$\srec{j}{i}{d(v)}$ to $s_p$. Indeed, if $s_j = s_p$, then $\srec{j}{i}{d(v)} = \srec{p}{i}{d(v)}$
is an ancestor of $s_p$ and $P_1'$ obviously exists.
Otherwise, $P_1'$ starts from $\srec{j}{i}{d(v)}$, goes to its descendant
$\sdon{j}{p}{d(v) + 1}$, takes the $(\sdon{j}{p}{d(v)+1}, \srec{p}{j}{d(v)+1})$ arc and then goes to $s_p$
(observe that $\sdon{j}{p}{d(v) + 1}$ does exist, since the first loop of the algorithm creating $N$
takes $c$ from $1$ to $h(D) + 1$, and $d(v) \leq h(D)$). Since $P_1'$ exists and $(\sdon{i}{j}{d(v)}, \srec{j}{i}{d(v)})$ is an arc of $N$, the $P_1$ path exists.
By the same arguments, there is a path $P_2 = (\sdon{i}{j}{d(v)} = y_1, y_2, \ldots, y_{k_2} = s_q)$.
Now, the existence of $P_1$ and $P_2$ implies that we can make $v$ a transfer node.
More precisely, we let
\begin{align*}
\alpha(v) &= (\sdon{i}{j}{d(v)}) \\
\alpha(v_l) &= (x_2, x_3, \ldots, x_{k_1}= s_p) \\
\alpha(v_r) &= (\sdon{i}{j}{d(v)}, y_2, y_3, \ldots, y_{k_2} = s_q)
\end{align*}
Set $e(\alpha_\textsc{last}(v)) = \ensuremath{\mathbb{T}}\xspace$
and $e(v_l, k) \in \{\ensuremath{\mathbb{SL}}\xspace, \ensuremath{\mathbb{TL}}\xspace, \emptyset\}$ for $k \in [|\alpha(v_l)| - 1]$ depending on what type of arc
$x_kx_{k + 1}$ is, then do the same for each $e(v_r, k)$ and $k \in [|\alpha(v_r)| - 1]$.
We have $\alpha_\textsc{last}(v) = \sdon{i}{j}{d(v)}$, $\alpha_1(v_l) = \srec{j}{i}{d(v)}$ and
$\alpha_1(v_r) = \sdon{i}{j}{d(v)}$, and since $(\sdon{i}{j}{d(v)}, \srec{j}{i}{d(v)}) \in E_s(N)$, condition a.4 of Definition~\ref{def:DTLrecLGTNetwork}
is satisfied, and so $\alpha$ is a reconciliation in which $e(\alpha_\textsc{last}(v)) = \ensuremath{\mathbb{T}}\xspace$.
This proves the base case.
Let $v \in V(D)$ such that $h(D_v) > 1$, and assume now by induction that the claim holds
for any internal node $v'$ such that $D_{v'}$ has height smaller than $h(D_v)$.
Let $v_l, v_r$ be the children of $v$.
At least one of $v_l, v_r$ must be an internal node, say $v_r$ without loss of generality.
Suppose first that $v_l$ is a leaf.
As before, in $N$ there is a path $P_1 = (x_1, x_2, \ldots, x_{k_1})$
starting with the $(x_1, x_2) = (\sdon{i}{j}{d(v)}, \srec{j}{i}{d(v)})$ arc
and that goes to $x_{k_1} = \sigma(v_l)$. As for $v_r$, by induction $D_{v_r}$ is $N$-reconcilable by some reconciliation $(D_{v_r}, \alpha')$ such that
$\alpha'(v_r) = (\sdon{i}{j}{d(v) + 1})$.
Now, in $N$ there is a path $P_2 = (\sdon{i}{j}{d(v)} = y_1, y_2, \ldots, y_{k_2} = \sdon{i}{j}{d(v) + 1})$
from $\sdon{i}{j}{d(v)}$
to $\sdon{i}{j}{d(v) + 1}$ in which each arc is in $E_p(N)$.
We can obtain the desired reconciliation $\alpha$ from $\alpha'$ in the following manner.
First let $\alpha(v) = (\sdon{i}{j}{d(v)})$ and $\alpha(v_l) = (x_2, x_3, \ldots, x_{k_1})$.
For every strict descendant $v_r'$ of $v_r$, let $\alpha(v_r') = \alpha'(v_r')$,
and finally let $\alpha(v_r) = (\sdon{i}{j}{d(v)} = y_1, y_2, y_3, \ldots, y_{k_2} = \sdon{i}{j}{d(v) + 1})$.
As in the base case, we can set $e(\alpha_\textsc{last}(v)) = \ensuremath{\mathbb{T}}\xspace$ and satisfy condition a.4 of Definition~\ref{def:DTLrecLGTNetwork}. We set $e(v_l, k), e(v_r, k') \in \{\ensuremath{\mathbb{SL}}\xspace, \ensuremath{\mathbb{TL}}\xspace, \emptyset\}$ accordingly for every $k \in [|\alpha(v_l)| - 1]$ and $k' \in [|\alpha(v_r)| - 1]$
(depending on what type of arc $x_kx_{k+1}$, respectively $y_{k'}y_{k'+1}$, is)
and set $e(\alpha_\textsc{last}(v_r)) = e(\alpha'_\textsc{last}(v_r))$.
Finally we set $e(\alpha_k(v'_r)) = e(\alpha'_k(v'_r))$ for every strict descendant $v'_r$ of $v_r$
and every $k \in [|\alpha(v'_r)|]$. We have that $\alpha(v), \alpha(v_l)$ and $\alpha(v_r)$ satisfy Definition~\ref{def:DTLrecLGTNetwork},
$e(\alpha_\textsc{last}(v_r)) = e(\alpha'_\textsc{last}(v_r))$
and every other gene-species mapping and event is unchanged from $\alpha'$.
It follows that $\alpha$ is a reconciliation.
Since $e(\alpha_\textsc{last}(v)) = \ensuremath{\mathbb{T}}\xspace$, the claim is proved for this case.
If instead both $v_l, v_r \in I(D)$, then by induction, $D_{v_l}$
is $N$-reconcilable with
reconciliation $\alpha^l$
such that $\alpha^l(v_l) = (\sdon{j}{i}{d(v) + 1})$ (notice the use of $j \rightarrow i$ and not $i \rightarrow j$).
Moreover, $D_{v_r}$ is $N$-reconcilable with reconciliation $\alpha^r$ such that
$\alpha^r(v_r) = (\sdon{i}{j}{d(v) + 1})$.
In $N$, there is a path $P_1 = (x_1, x_2, \ldots, x_{k_1})$
starting with the $(x_1, x_2) = (\sdon{i}{j}{d(v)}, \srec{j}{i}{d(v)})$ arc
that goes to $x_{k_1} = \sdon{j}{i}{d(v) + 1}$.
There is also a path $P_2 = (y_1, y_2, \ldots, y_{k_2})$
from $y_1 = \sdon{i}{j}{d(v)}$ to $y_{k_2} = \sdon{i}{j}{d(v) + 1}$ that uses only
arcs from $E_p(N)$.
Thus as before, we can make $v$ a transfer node. That is, we set $\alpha(v) = (\sdon{i}{j}{d(v)})$
and $e(\alpha_\textsc{last}(v)) = \ensuremath{\mathbb{T}}\xspace$, $\alpha(v_l) = (x_2, \ldots, x_{k_1} = \sdon{j}{i}{d(v) + 1})$ and
$\alpha(v_r) = (\sdon{i}{j}{d(v)} = y_1, y_2, \ldots, y_{k_2} = \sdon{i}{j}{d(v) + 1})$.
We set $e(v_l, k), e(v_r, k') \in \{\ensuremath{\mathbb{SL}}\xspace, \ensuremath{\mathbb{TL}}\xspace, \emptyset\}$ accordingly
for every $k \in [|\alpha(v_l)| - 1], k' \in [|\alpha(v_r)| - 1]$,
set $e(\alpha_\textsc{last}(v_l)) = e(\alpha^l_\textsc{last}(v_l)), e(\alpha_\textsc{last}(v_r)) = e(\alpha^r_\textsc{last}(v_r))$, and keep every other gene-species mapping and event from $\alpha^l$ and $\alpha^r$ unchanged.
In this manner $\alpha(v)$ satisfies Definition~\ref{def:DTLrecLGTNetwork},
and $\alpha$ is a reconciliation.
Again since $e(\alpha_\textsc{last}(v)) = \ensuremath{\mathbb{T}}\xspace$, the claim is proved.
\end{proof}
\noindent
\textbf{Theorem \ref{thm:sconsistent-iff-dstree}.}
\emph{A relation graph $R$ is \ml{$S$-base-consistent} if and only if
there exists a $DS$-tree $(D, l)$ such that $R(D, l) = R$.}
\begin{proof}
If there is no $DS$-tree $(D, l)$ such that $R(D, l) = R$, then by Lemma~\ref{lem:equiv-ds-tree}
there exists no species network $N$ with which $R$ is consistent, and thus $R$ cannot be
\ml{$S$-base-consistent}.
Conversely, let $(D', l')$ be a $DS$-tree such that $R(D', l') = R$, and let $(D, l)$ be a binary refinement
of $(D', l')$ (recalling that $R(D, l) = R(D', l') = R$).
Then by Lemma~\ref{lem:all-ds-trees-reconcilable}, $(D, l)$ is $N(D)$-reconcilable, where
the network $N(D)$ is the one constructed from $S$ by the algorithm described above.
By Lemma~\ref{lem:equiv-ds-tree}, $R$ is $N(D)$-consistent
and thus $R$ is also \ml{$S$-base-consistent}.
\end{proof}
\subsection*{Proof of Theorem~\ref{thm:hard-unknown-highways}: NP-hardness of minimizing transfers with unknown transfer highways}
The formal problem that we show NP-hard is the following.
\noindent \textsc{Transfer Minimization Species Tree Consistency (TMSTC):}\\
\noindent {\bf Input}: A relation graph $R$, a species tree $S$, an integer $k$.\\
\noindent {\bf Question}: Is $R$ \ml{$S$-base-consistent} using at most $k$ transfers? \\
We reduce the feedback arc set problem to TMSTC.
\noindent \textbf{Feedback Arc Set (FAS):}\\
\noindent {\bf Input}: A directed graph $H = (V, A)$ and an integer $k$.\\
\noindent {\bf Question}: Does there exist a \emph{feedback arc set} of size at most $k$,
i.e. a set of arcs $A' \subseteq A$ of size at most $k$ such that
$H' = (V, A \setminus A')$ contains no directed cycle?\\
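For concreteness, small FAS instances can be decided by brute force: try every arc subset of size at most $k$ and test whether the remaining digraph is acyclic. The following sketch (ours, purely illustrative) does exactly that, using a Kahn-style in-degree check for acyclicity.

```python
from itertools import combinations

def is_acyclic(vertices, arcs):
    """Kahn-style check: repeatedly remove vertices of in-degree 0;
    the digraph is acyclic iff every vertex gets removed."""
    indeg = {v: 0 for v in vertices}
    for (_, v) in arcs:
        indeg[v] += 1
    queue = [v for v in vertices if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for (a, b) in arcs:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(vertices)

def has_feedback_arc_set(vertices, arcs, k):
    """Brute force: is there A' of size <= k with (V, A \\ A') acyclic?"""
    arcs = list(arcs)
    for size in range(min(k, len(arcs)) + 1):
        for removed in combinations(arcs, size):
            if is_acyclic(vertices, [a for a in arcs if a not in removed]):
                return True
    return False
```

For instance, a directed triangle $v_1 \rightarrow v_2 \rightarrow v_3 \rightarrow v_1$ has a feedback arc set of size $1$ but none of size $0$.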
Given a FAS instance $H = (V, A)$, we construct a DS-tree $(D, l)$ and a species tree $S$
such that $H$ admits a feedback arc set of size at most $k$ if and only if
$R(D, l)$ is \ml{$S$-base-consistent} using at most $K = 2|A| + k$ transfers.
A \emph{caterpillar} is a rooted binary tree in which every internal node has exactly one child that is a
leaf, except for one node that has two leaf children.
We denote a caterpillar on leafset $x_1, x_2, \ldots, x_n$ by $(x_1|x_2|\ldots|x_n)$, where
the $x_i$ nodes are ordered by depth in non-decreasing order (thus $x_1$ is the leaf child of the root).
A \emph{subtree caterpillar} is a rooted binary tree obtained by replacing some leaves
of a caterpillar by rooted subtrees.
If each $x_i$ is replaced by a subtree $X_i$, we denote this by $(X_1|X_2|\ldots|X_n)$.
If some $X_i$ is a leaf $x_i$ (i.e.\ $X_i$ is a tree with a single vertex $x_i$), we may write
$(X_1|\ldots|X_{i - 1}|x_i|\ldots|X_n)$.
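The caterpillar notation can be made concrete with a small sketch (our own encoding, not part of the reduction): build $(x_1|x_2|\ldots|x_n)$ as nested left/right pairs, so that $x_1$ hangs directly off the root and $x_{n-1}, x_n$ are the two leaf children of the deepest internal node.

```python
def caterpillar(leaves):
    """Build the caterpillar (x1|x2|...|xn) as nested (left, right)
    pairs: x1 is the leaf child of the root, and the deepest internal
    node has the two leaf children x_{n-1}, x_n."""
    assert len(leaves) >= 2
    node = (leaves[-2], leaves[-1])       # deepest cherry
    for leaf in reversed(leaves[:-2]):    # hang remaining leaves above
        node = (leaf, node)
    return node
```

Since the list elements may themselves be nested pairs, the same helper also builds subtree caterpillars $(X_1|X_2|\ldots|X_n)$.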
Given the FAS instance $H = (V, A)$, first order $V$ and $A$ arbitrarily,
and denote $V = (v_1, v_2, \ldots, v_n)$ and $A = (a_1, a_2, \ldots, a_m)$.
The species tree $S$ has a corresponding subtree for each vertex of $V$ and each arc of $A$. For each vertex $v_i \in V$,
let $S_{v_i}$ be a caterpillar $(v_{i,1} | v_{i,2} | \ldots | v_{i,2K})$ with $2K$ leaves.
For each $j \in [2K]$, denote $z_{i, j} = p(v_{i,j})$ (noting that $z_{i, 2K - 1} = z_{i, 2K}$).
Then, for each arc $a \in A$, let $S_{a}$ be the binary tree on two leaves $p_{a}, q_{a}$.
Then $S$ is the subtree-caterpillar
$(S_{a_1} | S_{a_2} | \ldots | S_{a_m} | S_{v_1} | S_{v_2} | \ldots | S_{v_n} )$.
See Figure~\ref{fig:fas-reduction}.
\begin{figure*}
\caption{The $S$ and $D$ trees constructed for our reduction. Duplication nodes appear as squares, and the absence of a square indicates speciation.}
\label{fig:fas-reduction}
\end{figure*}
The DS-tree $(D, l)$ has one subtree for each arc of $A$. For each $a = (v_i, v_j) \in A$,
let $D_{a} = D_{i,j}$ be a caterpillar with $4K + 2$ leaves
such that
\[
D_{i,j} = (v_{i,1}^1 | v_{i,1}^2 | v_{i,2}^1 | v_{i,2}^2 | \ldots | v_{i,2K}^1 | v_{i,2K}^2 | w_{j,1}^i | w_{j,2}^i)
\](we will interchangeably use the $D_a$ and $D_{i,j}$ notations whenever convenient).
Here the indices of the leaf labels indicate the species containing them,
i.e. for each $h \in [2K], \sigma(v_{i, h}^1) = \sigma(v_{i, h}^2) = v_{i,h}$,
and $\sigma(w_{j,1}^i) = v_{j,1}, \sigma(w_{j,2}^i) = v_{j,2}$.
Thus all the leaves of $L(D_{i,j})$ are from the $S_{v_i}$ subtree, with the exception of
$w_{j,1}^i$ and $w_{j,2}^i$ at the bottom.
For each $h \in [2K]$, the parent of $v_{i, h}^1$ is labeled by $\ensuremath{\mathbb{D}}\xspace$ whereas the parent of $v_{i, h}^2$
is labeled by $\ensuremath{\mathbb{S}}\xspace$.
The parent of $w_{j,1}^i$ and $w_{j,2}^i$ is labeled by $\ensuremath{\mathbb{D}}\xspace$.
We define another tree $D'_a = D'_{i,j} = (p_a^1 | p_a^2 | q_a^1 | q_a^2 | D_{i,j})$.
The parents of $p_a^1$ and $q_a^1$ are labeled $\ensuremath{\mathbb{D}}\xspace$,
whereas the parents of $p_a^2$ and $q_a^2$ are labeled $\ensuremath{\mathbb{S}}\xspace$
(here $\sigma(p_a^1) = \sigma(p_a^2) = p_a$ and $\sigma(q_a^1) = \sigma(q_a^2) = q_a$).
Finally, we let
\[D = (D'_{a_1} | p^3_{a_2} | D'_{a_2} | p^3_{a_3} | D'_{a_3} | \ldots | p^3_{a_{m - 2}} | D'_{a_{m - 2}} | p^3_{a_{m - 1}} | D'_{a_{m - 1}} | D'_{a_m})\]
where each $p_{a_i}^3$ is a new leaf with $\sigma(p_{a_i}^3) = p_{a_i}$. The purpose of the $p_{a_i}^3$ is to enforce a binary DS-tree.
The root is a speciation, and the main path of $D$ alternates labelings, i.e.\ for each $1 < i < m$,
the parent of $p^3_{a_i}$ is labeled $\ensuremath{\mathbb{D}}\xspace$ and the parent of $r(D'_{a_i})$ is labeled $\ensuremath{\mathbb{S}}\xspace$.
The parent of $r(D'_{a_m})$ is labeled $\ensuremath{\mathbb{S}}\xspace$.
It is not hard to see that this construction can be carried out in polynomial time.
Note that $D$ is binary and is also a least-resolved $DS$-tree.
Thus by Lemma~\ref{lem:equiv-ds-tree-unknown}, $R(D, l)$ is \ml{$S$-base-consistent} using $K$ transfers if and only if
$(D, l)$ is \ml{$S$-base-reconcilable} using $K$ transfers.
\begin{lemma}
If $H$ admits a feedback arc set $A' \subset A$ of size $k$, then
$(D, l)$ is \ml{$S$-base-reconcilable} using at most $K = 2m + k$ transfers.
\end{lemma}
\begin{proof}
The intuition behind the proof is as follows.
Each $D_{i,j}$ subtree and its $\ensuremath{\mathbb{D}}\xspace, \ensuremath{\mathbb{S}}\xspace$ labeling could be part of a valid reconciliation with respect to $S$, if it were not for the
$w^i_{j, 1}$ and $w^i_{j, 2}$ leaves at the bottom, which prevent their ancestors from being speciations. These need to be handled either by making the two edges incident to
$w^i_{j, 1}$ and $w^i_{j, 2}$ transfers to $v_{j, 1}$ and $v_{j, 2}$ respectively, or better, by making the edge
above their common parent a transfer to some common ancestor of $v_{j, 1}$ and $v_{j, 2}$. The latter option is preferred as it requires one less transfer, but it cannot be taken
for every $D_{i, j}$ subtree, because doing so would likely create time-inconsistencies.
As it turns out, given a feedback arc set $A'$ of size $k$, we have a way of taking these `double-transfers' only $k$ times.
As mentioned before, this is similar to the proof in~\cite{THL2011}. The difficulty here, however, is to ensure that both time-consistency and
the $\ensuremath{\mathbb{D}}\xspace, \ensuremath{\mathbb{S}}\xspace$ labeling are preserved.
We first show how to add secondary arcs to $S$ in a time-consistent manner in order to obtain $N$,
by making the time function $t$ explicit. We will add more arcs than necessary, but this simplifies the exposition.
Let $s_1, \ldots, s_{n + m - 1}$ be the vertices on the $r(S) - r(S_{v_n})$ path in $S$ (excluding $r(S_{v_n})$),
ordered by depth in increasing order. Assign time slot $t(s_{\ell}) = \ell$ for each $\ell \in [n + m - 1]$.
We then describe the transformation from $S$ to $N$ in three steps.
\noindent
\textbf{Step 1: transfer arcs from $q_{a_{\ell}}$ to $S_{v_i}$.}
We process each arc $a_{\ell} \in A$ for $\ell = 1, 2, \ldots, m$ in increasing order as follows:
first let $(v_i, v_j) = a_{\ell}$ (i.e. $v_i, v_j$ are the vertices of the $a_{\ell}$ arc in $H$).
Assign time slot $\ell + 1$ to the parent of nodes $p_{a_{\ell}}$ and $q_{a_{\ell}}$.
Then, subdivide $(q_{a_{\ell}}, p(q_{a_{\ell}}))$, creating a new node that we call $send\_q_{a_{\ell}}\_to\_i$. Next, subdivide $(p(r(S_{v_i})), r(S_{v_i}))$, creating a new node that we call $recv\_i\_from\_q_{a_\ell}$. After that, we add the secondary arc
$(send\_q_{a_{\ell}}\_to\_i, recv\_i\_from\_q_{a_\ell})$.
See Figure~\ref{fig:fas-reconcil}(1) for an illustration. Assign the time slot $m + n + \ell$ to the two newly created nodes.
Note that this process is repeated for each arc $a_{\ell}$ in order. Therefore, $p(r(S_{v_i}))$ may change during the process as new secondary arcs are inserted.
In the end, there is exactly one outbound transfer node inserted above each $q_{a_{\ell}}$, and $|N^+(v_i)|$ inbound transfer nodes inserted above each $r(S_{v_i})$, where
$N^+(v_i)$ is the set of out-neighbors of $v_i$ in $H$.
One can check that no time inconsistency is created so far, since every time a node is inserted, it is added
below every other internal node having a defined time slot so far, and it is assigned a higher time slot (since $m + n + \ell$ is always the highest time slot so far, for each $\ell \in [m]$).
Also note for later reference that, assuming $n \leq m$, $t(p(r(S_{v_i}))) \leq m + n + \ell$ for some $\ell \leq m$, and therefore $t(p(r(S_{v_i}))) \leq 3m$ after these operations.
\begin{figure*}
\caption{An illustration of the modifications from $S$ to $N$. (1) We first add the transfers between the $S_{a_{\ell}}$ subtrees and the $S_{v_i}$ subtrees.}
\label{fig:fas-reconcil}
\end{figure*}
For what follows, let $H' = (V, A \setminus A')$. Since $H'$ is a directed acyclic graph, it admits a
topological sort, i.e. an ordering $(v_{l_1}, v_{l_2}, \ldots, v_{l_n})$ of $V$
such that if $i < j$, then $(v_{l_j}, v_{l_i})$ is not an arc of $H'$ (in other words, there are no backwards arcs).
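The topological sort underlying the next two steps can be sketched as follows (our own helper, not part of the construction): compute an order $(v_{l_1}, \ldots, v_{l_n})$ of the DAG $H'$ and verify that every remaining arc points forward in that order.

```python
def topological_order(vertices, arcs):
    """DFS-based topological sort of a DAG given as vertex/arc lists."""
    out = {v: [] for v in vertices}
    for (u, v) in arcs:
        out[u].append(v)
    order, visited = [], set()
    def visit(u):
        if u in visited:
            return
        visited.add(u)
        for v in out[u]:
            visit(v)
        order.append(u)       # post-order: u after all its successors
    for v in vertices:
        visit(v)
    order.reverse()
    return order

# Example: H' obtained after removing a feedback arc set.
vertices = ["v1", "v2", "v3"]
arcs = [("v1", "v2"), ("v1", "v3"), ("v2", "v3")]
order = topological_order(vertices, arcs)
pos = {v: i for i, v in enumerate(order)}
# No backwards arcs: every arc goes from an earlier to a later vertex.
assert all(pos[u] < pos[v] for (u, v) in arcs)
```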
\man{We now add two new sets of arcs that are entirely based on the ordering $(v_{l_1}, \ldots, v_{l_n})$.}
\noindent
\textbf{Step 2: transfer arcs from $v_{l_i, 2K}$ to its successor subtrees.}
What we want to achieve in this step is that for each $v_{l_i}$, we can transfer from the parent of $v_{l_i, 2K}$ to any subtree $S_{v_{l_h}}$ such that $h > i$.
An example is provided in Figure~\ref{fig:fas-reconcil}(2).
Process each vertex $v_{l_i} \in V$ for $i = 1, 2, \ldots, n$ in increasing order as follows.
First we create the transfer nodes above $r(S_{v_{l_i}})$ that are destined to receive from the predecessors of $v_{l_i}$.
For each $j = 1, 2, \ldots, i - 1$ in order, add a node $recv\_l_i\_from\_l_j$ on the edge between $r(S_{v_{l_i}})$
and its parent, and assign the time slot
\[t(recv\_l_i\_from\_l_j) = (4 + i)Km + j\]
Then, we create the nodes above $v_{l_i, 2K}$ that are destined to send to the successor subtrees of $v_{l_i}$. For each $j = i + 1, i + 2, \ldots, n$ in increasing order,
add a node $send\_l_i\_to\_l_j$ on the $(p(v_{l_i,2K}),v_{l_i, 2K})$ arc.
For each such $j$, assign time slot
\[t(send\_l_i\_to\_l_j) = (4 + j)Km + i\]
Then, for each $i, j \in [n]$ with $i < j$, add a transfer arc from
$send\_l_i\_to\_l_j$ to \linebreak $recv\_l_j\_from\_l_i$.
Note that this transfer arc satisfies our time consistency requirement since
$t(send\_l_i\_to\_l_j) = (4 + j)Km + i = t(recv\_l_j\_from\_l_i)$.
Also note that for each arc $(v_{l_i}, v_{l_j})$ in $A \setminus A'$,
there is a corresponding secondary arc from
$send\_l_i\_to\_l_j$ to $recv\_l_j\_from\_l_i$.
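The time-slot bookkeeping above is easy to sanity-check numerically. This sketch (with arbitrary small parameters of our choosing) verifies that $t(send\_l_i\_to\_l_j) = (4 + j)Km + i = t(recv\_l_j\_from\_l_i)$ for every $i < j$, and that all these slots are pairwise distinct, so no two inserted nodes compete for the same slot.

```python
# Arbitrary small instance parameters, chosen for illustration only.
n, K, m = 5, 3, 4
assert n <= K * m  # distinctness below uses |i - i'| < Km

def t_send(i, j):   # time slot of send_l_i_to_l_j, defined for j > i
    return (4 + j) * K * m + i

def t_recv(i, j):   # time slot of recv_l_i_from_l_j, defined for j < i
    return (4 + i) * K * m + j

pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]

# Each secondary arc joins two nodes with the same time slot...
assert all(t_send(i, j) == t_recv(j, i) for (i, j) in pairs)
# ...and all the inserted slots are pairwise distinct.
slots = [t_send(i, j) for (i, j) in pairs]
assert len(set(slots)) == len(slots)
```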
We argue that $S$ is still time-consistent. We already know that each secondary arc added so far joins two nodes with the same time slot,
so we must show that
(1) no node has a child with a greater time slot, and
(2) there is a way to assign a time slot to the nodes $z_{i, 1}, \ldots, z_{i, 2K-1}$ within the $S_{v_i}$ trees.
For (1), all the receiving and sending nodes
inserted at the last step have
a time slot greater than $3m$ and are inserted below the nodes that had a time slot assigned
at the previous step (which were assigned a time slot at most $3m$).
Moreover, the $recv\_l_i\_from\_l_j$ nodes are inserted on the
$p(r(S_{v_{l_i}}))r(S_{v_{l_i}})$ arc in increasing order of time,
as well as the $send\_l_i\_to\_l_j$ nodes on the $(p(v_{l_i,2K}), v_{l_i, 2K})$ arc.
Hence no inconsistency is created within the $S_{v_i}$ trees.
For (2), note that for each $i \in [n]$,
the nodes $z_{i, 1}, \ldots, z_{i,2K-1}$ of $S_{v_i}$ lying on the path between $recv\_l_i\_from\_l_{i - 1}$
(above $r(S_{v_i})$)
and $send\_l_i\_to\_l_{i + 1}$ (at the bottom of $S_{v_i}$) all have an available time slot between
$(4 + i)Km + i - 1$ and $(4 + i + 1)Km + i$, since there are $2K - 1$
such nodes and there are $Km + 1$ available time slots.
Therefore, we can assign a time to each $z_{i, h}$ so that time consistency holds. Note that all internal nodes of $S$ have been assigned a time slot so far.
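The counting behind (2) can also be checked numerically. The sketch below (with assumed small instance sizes, restricted to $m \ge 2$) counts the integer slots strictly between the two bounds and confirms there is room for the $2K - 1$ nodes:

```python
# Toy check: between t(recv_li_from_l(i-1)) = (4+i)Km + i - 1 and
# t(send_li_to_l(i+1)) = (5+i)Km + i there must be room for the
# 2K - 1 nodes z_{i,1}, ..., z_{i,2K-1}.
for m in range(2, 30):          # m = |A|; instances with m >= 2
    for k in range(0, m + 1):   # k = feedback arc set budget
        K = 2 * m + k
        for i in range(1, 10):
            lo = (4 + i) * K * m + i - 1
            hi = (4 + i + 1) * K * m + i
            free = hi - lo - 1   # integer slots strictly between
            assert free >= 2 * K - 1, (m, k, i)
print("enough slots for every m >= 2")
```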
\noindent
\textbf{Step 3: escape route from \man{$v_{l_i, 2K}$ to $v_{l_j, 1}$, then to $v_{l_j, 2}$.}}
Again, process each vertex $v_{l_i}$ for $i = 1,2, \ldots, n$ in increasing order.
We make, for $j < i$, a ``last-resort escape route'' from $v_{l_i, 2K}$ to $v_{l_j, 1}$,
followed by a transfer arc going from $v_{l_j, 1}$ to $v_{l_j, 2}$. Taking these arcs in a reconciliation corresponds to taking ``backwards arcs'',
i.e. arcs that belong to $A'$.
For that purpose, we add, on the arc between $v_{l_i, 2K}$ and its parent, $i - 1$ transfer nodes to send backwards.
Then on the arc between $v_{l_i, 1}$ and its parent, we add $n - i$ transfer nodes
to receive from the front. This step is illustrated in Figure~\ref{fig:fas-reconcil}(3).
More precisely,
for each $j = 1, 2, \ldots, i - 1$, add a node $backsend\_l_i\_to\_l_j$ on the edge between
$v_{l_i, 2K}$ and its parent. Assign a high time slot to this node,
say for example $t(backsend\_l_i\_to\_l_j) = (Km)^{10} + i + j$.
Then for each $j = i + 1, i + 2, \ldots, n$, add a node $backrecv\_l_i\_from\_l_j$ on the
edge between $v_{l_i, 1}$ and its parent. Assign the time slot $t(backrecv\_l_i\_from\_l_j) = (Km)^{10} + i + j$.
Note that time consistency is still preserved by these node insertions.
Then for each $i, j \in [n]$ with $i > j$, add a secondary arc from
$backsend\_l_i\_to\_l_j$ to $backrecv\_l_j\_from\_l_i$.
Again, these arcs are time-consistent since $t(backsend\_l_i\_to\_l_j) = (Km)^{10} + i + j = t(backrecv\_l_j\_from\_l_i)$.
To finish the network, for each $i \in [n]$, add a node $send12\_i$ on the $(p(v_{i, 1}), v_{i, 1})$ arc
and a node $recv12\_i$ on the $(p(v_{i, 2}), v_{i, 2})$ arc, joined by a secondary arc $(send12\_i, recv12\_i)$.
To preserve time-consistency, assign a large enough time slot,
say $m^{100}$ to both newly created nodes.
This finally concludes the construction. Let us call the resulting network $N$.
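A quick numeric check (toy sizes only; for the sketch we assume $n = m$, which is inessential) that the ``large'' slots of Step 3 dominate every slot assigned earlier, so these insertions preserve time consistency:

```python
# Toy check: every back-transfer slot (Km)^10 + i + j exceeds every
# Step-2 slot, and m^100 exceeds every back-transfer slot, so the
# send12/recv12 nodes can sit below the back-transfer nodes.
for m in range(2, 12):
    for k in range(0, m + 1):
        K = 2 * m + k
        n = m                       # assumed vertex count for the sketch
        max_step2 = (4 + n) * K * m + n
        max_back = (K * m) ** 10 + 2 * n
        assert max_step2 < (K * m) ** 10 + 2   # smallest back slot
        assert max_back < m ** 100
print("large slots dominate all earlier ones")
```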
For the remainder, let $u, v \in V(N)$ and
\mj{suppose}
that there is a path from $u$ to $v$ in $N$ that does not use a secondary arc. We denote this path by $[u \mathrel{{.}\,{.}}\nobreak v]$.
We will also denote by $]u \mathrel{{.}\,{.}}\nobreak v]$ the path $[u \mathrel{{.}\,{.}}\nobreak v]$, but excluding $u$ from this path.
\noindent
\textbf{Reconciling $(D, l)$ with $N$.}
We are finally ready to show that $(D, l)$ is $N$-reconcilable using at most $K$
transfers. We begin by showing how to reconcile $D_{i,j}$ for
$a = (v_i, v_j) \in A$.
For reasons that will become apparent later, the edge above $r(D_{i,j})$ will always contain a transfer.
To be more precise, set $\alpha_1(p(v^1_{i,1})) = recv\_i\_from\_q_a$ with
$e(p(v^1_{i,1}), 1) = \emptyset$ (setting it up to receive a transfer). Then set
$\alpha_\textsc{last}(p(v^1_{i,1})) = z_{i,1}$ with $e(p(v^1_{i,1}), \textsc{last}) = \ensuremath{\mathbb{D}}\xspace$.
Since there is a directed path from $recv\_i\_from\_q_a$ to $z_{i, 1}$ that uses
no secondary arc of $N$, $\alpha(p(v^1_{i,1}))$ can be completed with the appropriate
$\ensuremath{\mathbb{SL}}\xspace$ events.
Set $\alpha(p(v^2_{i,1})) = (z_{i, 1})$ and
for each $2 \leq h \leq 2K - 1$,
set
$\alpha(p(v^1_{i, h})) = \alpha(p(v^2_{i, h})) = (z_{i, h})$
(we will handle the case $h = 2K$ later). Then
set $e(p(v^1_{i,h}), \textsc{last}) = \ensuremath{\mathbb{D}}\xspace$ and
$e(p(v^2_{i,h}), 1) = \ensuremath{\mathbb{S}}\xspace$. Note that the assigned events are the same as in
the $DS$ labeling $l$ of $D$, and that so far $\alpha$ satisfies Definition~\ref{def:DTLrecLGTNetwork}.
It is straightforward to set $\alpha(v^1_{i, h})$ and $\alpha(v^2_{i,h})$ appropriately.
\begin{figure*}
\caption{Top left: how the $D_{i,j}$ subtrees are reconciled.}
\label{fig:fas-reconcile_G}
\end{figure*}
We now handle the nodes $p(v^1_{i,2K})$ and $p(v^2_{i,2K})$ (see Figure~\ref{fig:fas-reconcile_G} for an illustration).
First denote by $w$ the parent of both $w_{j,1}^i$ and $w_{j,2}^i$ in $D_{i,j}$.
Suppose that $a = (v_i, v_j)$ is not in $A'$.
Recall the ordering $v_{l_1}, \ldots, v_{l_n}$ from above.
Then there are $i'$ and $j'$ such that $i = l_{i'}$ and $j = l_{j'}$, with $i' < j'$.
Therefore $N$ has a secondary arc
\[
(send\_l_{i'}\_to\_l_{j'}, recv\_l_{j'}\_from\_l_{i'}) = (send\_i\_to\_j, recv\_j\_from\_i)
\]
starting above $v_{i, 2K}$ and ending above $S_{v_j}$.
We make the parent edge of $w$ borrow this transfer arc.
For that purpose, set \\
\man{$\alpha(p(v^1_{i, 2K})) =~]z_{i, 2K - 1} \mathrel{{.}\,{.}}\nobreak send\_i\_to\_j]$}
and $\alpha(p(v^2_{i, 2K})) = (send\_i\_to\_j)$, \\
setting $e(p(v^1_{i, 2K}), \textsc{last}) = \ensuremath{\mathbb{D}}\xspace$ and $e(p(v^2_{i, 2K}), \textsc{last}) = \ensuremath{\mathbb{T}}\xspace$.
For the child leaves, set $\alpha(v^1_{i, 2K}) = \alpha(v^2_{i, 2K}) = [send\_i\_to\_j \mathrel{{.}\,{.}}\nobreak v_{i, 2K}]$.
Then we set $\alpha(w) = [recv\_j\_from\_i \mathrel{{.}\,{.}}\nobreak z_{j, 1}]$
with $e(w, \textsc{last}) = \ensuremath{\mathbb{D}}\xspace$. It is straightforward to check that $\alpha(w_{j, 1}^i)$
and $\alpha(w_{j, 2}^i)$ can be set without requiring any additional transfer,
since $z_{j,1}$ is an ancestor of both $v_{j, 1}$ and $v_{j, 2}$.
Now, suppose instead that
$a = (v_i, v_j) \in A'$. Then the transfer arc used in the previous case does not exist,
since it is backwards with respect to our ordering.
In this case, we must use the last-resort route, namely the secondary arc\\ $(backsend\_i\_to\_j, backrecv\_j\_from\_i)$,
then the $(send12\_j, recv12\_j)$ arc.
More precisely, set
\[
\alpha(p(v^1_{i, 2K})) =~]z_{i, 2K - 1} \mathrel{{.}\,{.}}\nobreak backsend\_i\_to\_j]
\]
and
\[
\alpha(p(v^2_{i, 2K})) = (backsend\_i\_to\_j)
\]
with $e(p(v^1_{i, 2K}), \textsc{last}) = \ensuremath{\mathbb{D}}\xspace$ and $e(p(v^2_{i, 2K}), \textsc{last}) = \ensuremath{\mathbb{T}}\xspace$. Then set $\alpha(v^1_{i, 2K}) = \alpha(v^2_{i, 2K}) = [backsend\_i\_to\_j \mathrel{{.}\,{.}}\nobreak v_{i, 2K}]$.
Then let $\alpha(w) = [backrecv\_j\_from\_i \mathrel{{.}\,{.}}\nobreak send12\_j]$
with $e(w, \textsc{last}) = \ensuremath{\mathbb{T}}\xspace$. Set $\alpha(w_{j, 1}^i) = [send12\_j \mathrel{{.}\,{.}}\nobreak v_{j, 1}]$ and
$\alpha(w_{j, 2}^i) = [recv12\_j \mathrel{{.}\,{.}}\nobreak v_{j,2}]$. One can check that $\alpha$
satisfies Definition~\ref{def:DTLrecLGTNetwork} and in this case, $D_{i,j}$ requires two transfers.
It remains to reconcile the rest of $D$.
We exhibit $\alpha$ for the nodes of $D'_{i,j}$ that are not in $D_{i, j}$.
Denote $a = (v_i, v_j)$.
In $S$, denote $r_a = p(p_a) = p(q_a)$. Set $\alpha(p(p^1_a)) = \alpha(p(p^2_a)) = (r_a)$,
and $e(p(p^1_a), \textsc{last}) = \ensuremath{\mathbb{D}}\xspace$, $e(p(p^2_a), \textsc{last}) = \ensuremath{\mathbb{S}}\xspace$ (we will adjust $\alpha(p(p^1_a))$ later).
Then set $\alpha(p(q^1_a)) = [r_a \mathrel{{.}\,{.}}\nobreak send\_q_a\_to\_i]$ with $e(p(q^1_a), \textsc{last}) = \ensuremath{\mathbb{D}}\xspace$,
and $\alpha(p(q^2_a)) = (send\_q_a\_to\_i)$. Recall that $p(v^1_{i, 1})$ is a child of $p(q^2_a)$
and that $\alpha_1(p(v^1_{i, 1})) = recv\_i\_from\_q_a$. Thus by setting $e(p(q^2_a), \textsc{last}) = \ensuremath{\mathbb{T}}\xspace$
we satisfy Definition~\ref{def:DTLrecLGTNetwork}. It is clear that the $\alpha$ values for the leaves
$p^1_a, p^2_a, q^1_a$ and $q^2_a$ can be set without requiring any additional transfer.
We have now reconciled $D'_{i,j}$ such that $\alpha_\textsc{last}(r(D'_{i,j})) = r_a$,
adding one transfer in the process.
What remains now are the nodes $g_1, g_2, \ldots, g_{\ell}$, ordered by increasing depth, that lie on the path
between $r(D)$ and $r(D'_{a_m})$ (excluding the latter).
We claim that none of these nodes requires any transfer.
The node $g_{\ell}$ is a speciation and has two children $r(D'_{a_{m - 1}})$ and $r(D'_{a_m})$: one mapped by $\alpha$ to species $r_{a_{m - 1}}$ and the other to $r_{a_m}$.
Then we can set $\alpha(g_{\ell}) = (lca_S(r_{a_{m - 1}}, r_{a_m}))$ and $e(g_{\ell}, \textsc{last}) = \ensuremath{\mathbb{S}}\xspace$, and adjust $\alpha(r(D'_{a_{m - 1}}))$ and $\alpha(r(D'_{a_{m }}))$ accordingly. Now, $g_{\ell - 1} = p(g_{\ell})$ is a duplication whose
other child is $p^3_{m - 1}$, and thus it is safe to set $\alpha(g_{\ell - 1}) = (lca_S(r_{a_{m - 1}}, r_{a_m}))$ as well
and set $e(g_{\ell - 1}, \textsc{last}) = \ensuremath{\mathbb{D}}\xspace$. Since the $D_{a}$ subtrees are ordered in the same manner in $D$ as the $S_a$ subtrees in $S$, it is not hard to see inductively that for $i < \ell - 1$,
if $l(g_{i}) = \ensuremath{\mathbb{S}}\xspace$, then $g_i$ has $r(D'_{a_h})$ as a child for some $h < m - 1$, which is mapped to
$r_{a_h}$, and the other child is $g_{i + 1}$,
mapped to $x := lca_S(r_{a_{h + 1}}, r_{a_{h + 2}})$. Hence we can set $\alpha(g_i) = (x, r_{a_h})$
and adjust the $\alpha$ values of the two children of $g_i$ accordingly. If $l(g_i) = \ensuremath{\mathbb{D}}\xspace$, we simply set
$\alpha(g_i) = \alpha(g_{i + 1})$.
We are done with the reconciliation $\alpha$ between $D$ and $N$.
To sum up, if $a \notin A'$, then $D'_a$ requires $2$ transfers,
and if $a \in A'$, then $D'_a$ requires $3$ transfers, and $|A'| = k$.
Thus $K = 2m + k$ transfers are added in total.
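The accounting can be replayed mechanically; a minimal sketch over hypothetical instance sizes:

```python
# Toy check of the transfer budget: m - k forward gadgets use 2 transfers
# each, k backward gadgets use 3 each, giving K = 2m + k in total.
for m in range(1, 50):
    for k in range(0, m + 1):
        total = 2 * (m - k) + 3 * k
        assert total == 2 * m + k
print("K = 2m + k")
```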
\end{proof}
We now undertake the converse direction of the proof.
We will make use of the following well-known fact on reconciliations.
\begin{lemma}\label{lem:lcaspec}
Let $S$ be a species tree and let $N$ be an LGT network obtained by adding secondary arcs to $S$.
Let $(D, \alpha)$ be a reconciliation with respect to $N$. Let $u \in I(D)$ such that $e(u, \textsc{last}) = \ensuremath{\mathbb{S}}\xspace$ and let $v, w$ be two leaves descending from $u$ such that,
for every node $z$ on the path between $u$ and $v$ or on the path between $u$ and $w$, $\alpha(z)$ contains no $\ensuremath{\mathbb{T}}\xspace$ or $\ensuremath{\mathbb{TL}}\xspace$ event.
Then $\alpha_{\textsc{last}}(u) = lca_{S}(\sigma(v), \sigma(w))$.
\end{lemma}
\begin{proof}
First note that, by the definition of a reconciliation, $e(u, \textsc{last}) = \ensuremath{\mathbb{S}}\xspace$ implies that $\alpha_{\textsc{last}}(u)$ must be a node of $S$, since only those nodes can be the tail of two principal arcs in $N$ (recall that this is required by speciation).
Assume without loss of generality that $v$ descends from $u_l$ and $w$ from $u_r$. Let $P_v = (u = v_1, \ldots, v_a = v)$ be the path from $u$ to $v$ and $P_w = (u = w_1, \ldots, w_b = w)$ the path from $u$ to $w$. By the definition of speciation, $\alpha_1(u_l)$ and $\alpha_1(u_r)$ are the two children of $\alpha_{\textsc{last}}(u)$.
Moreover, by appending the paths $\alpha(v_2), \ldots, \alpha(v_a)$ and eliminating possible repetitions due to duplications, we obtain a path $P'_v$ of $N$ that uses only principal arcs, starts at $\alpha_1(u_l)$ and ends at $\sigma(v)$. Similarly, appending the paths $\alpha(w_2), \ldots, \alpha(w_b)$, we obtain a path $P'_w$ of $N$ that uses only principal arcs, starts at $\alpha_1(u_r)$ and ends at $\sigma(w)$.
Because $\alpha_1(u_l)$ and $\alpha_1(u_r)$ are the children of $\alpha_{\textsc{last}}(u)$ and $P'_v$ and $P'_w$ use only $E_p$ arcs, $P'_v$ and $P'_w$ are vertex-disjoint.
Thus $\alpha_{\textsc{last}}(u)$ is a node of $N$ whose two children can start disjoint paths that lead to $\sigma(v)$ and $\sigma(w)$, respectively.
The only node of $N$ from which this is possible is $lca_S(\sigma(v), \sigma(w))$.
\end{proof}
\begin{lemma}
If $(D, l)$ is \ml{$S$-base-reconcilable} using at most $K = 2m + k$ transfers,
then $H$ admits a feedback arc set $A' \subseteq A$ of size at most $k$.
\end{lemma}
\begin{proof}
Suppose that $(D, l)$ is \ml{$S$-base-reconcilable} using at most $K$ transfers, let $N$ be the species network
such that $T_0(N) = S$
and let $(D, \alpha)$ be a reconciliation with respect to $N$ using $K$ transfers showing that $(D, l)$ is $N$-reconcilable.
We divide this proof into a series of claims.
Without loss of generality, we assume that the secondary arcs of $N$ are minimal, in the sense \mj{that}
every secondary arc of $N$ is used by $\alpha$.
\begin{nclaim}\label{claim:gprime-tl}
For every arc $a = (v_i, v_j) \in A$, in the $D'_{i,j}$ subtree, there is a
node $x$ and an integer $h$ such that $e(x, h) \in \{\ensuremath{\mathbb{T}}\xspace, \ensuremath{\mathbb{TL}}\xspace\}$ and $x$ does not belong to $D_{i,j}$.
\end{nclaim}
\begin{proof}
Suppose for contradiction that the claim is false.
Denote $y_p := p(p^2_a)$ and $y_q = p(q^2_a)$.
Because there is no transfer, we have $e(y_p, \textsc{last}) = e(y_q, \textsc{last}) = \ensuremath{\mathbb{S}}\xspace$, by the orthology requirements of $(D, l)$.
By Lemma~\ref{lem:lcaspec}, $\alpha_{\textsc{last}}(y_p) = lca_S(\sigma(p^2_a), \sigma(q^2_a)) = p(p_a) = p(q_a)$.
Now consider $\alpha_{\textsc{last}}(y_q)$.
By definition of speciation and by the absence of transfers in $\alpha(p(p_a^1))$ and $\alpha(y_q)$, $\alpha_{\textsc{last}}(y_q)$ must be a strict descendant of $\alpha_{\textsc{last}}(y_p) = p(q_a)$.
On the other hand, $\alpha_{\textsc{last}}(y_q)$ is a strict ancestor of $q_a$ since $e(y_q, \textsc{last}) = \ensuremath{\mathbb{S}}\xspace$.
Moreover, $\alpha_{\textsc{last}}(y_q)$ is a node of $S$ \man{(this is because $e(y_q, \textsc{last}) = \ensuremath{\mathbb{S}}\xspace$, and thus by definition, $\alpha_{\textsc{last}}(y_q)$ must be a node whose two children are principal arcs)}. We have reached a contradiction, since $S$ contains no node that is a strict descendant of $p(q_a)$ and a strict ancestor of $q_a$.
\end{proof}
\begin{nclaim}\ellabel{claim:in_svi}
Let $(v_i, v_j) \in A$. Then there is an internal node $x$ of $D_{i,j}$ such that
$\alpha_{\textsc{last}}(x)$ is a node of $S_{v_i}$.
\end{nclaim}
\begin{proof}
Suppose that for every internal node $x$ of $D_{i,j}$, $\alpha_{\textsc{last}}(x)$ is not a node of $S_{v_i}$.
Let $h \in [2K]$ such that $h$ is odd.
We show that there must be a transfer in some node of the path
between $v^2_{i,h}$ and $v^2_{i, h + 1}$ in $D_{i,j}$.
Let us assume that this is not the case.
We can thus assume that $e(p(v^2_{i,h}), \textsc{last}) = \ensuremath{\mathbb{S}}\xspace$ \man{and
that $\alpha(v^2_{i,h})$ does not contain a $\ensuremath{\mathbb{TL}}\xspace$ event.}
It follows that $\alpha_{\textsc{last}}(p(v^2_{i,h}))$ is an ancestor of $v_{i,h}$ which, by assumption, does not belong to $S_{v_i}$.
Since we further assume that there is no transfer in $\alpha(p(v_{i,h+1}^1)), \alpha(p(v_{i,h+1}^2))$ or $\alpha(v_{i,h+1}^2)$, by Lemma~\ref{lem:lcaspec}, we must have $\alpha_{\textsc{last}}(p(v^2_{i, h})) = lca_S(\sigma(v_{i,h}), \sigma(v_{i, h+1}))$.
This node is in $S_{v_i}$, and we have reached a contradiction.
Therefore, some transfer must be present in some node of the $v_{i, h}^2 - v^2_{i,h+1}$ path.
This holds for every odd $h$, so $D_{i,j}$ has at least $K$ transfers.
But by the previous claim, $D'_{i,j}$ has at least one transfer that is not in $D_{i,j}$, so in total $D$ has strictly more
than $K$ transfers, a contradiction.
\end{proof}
\begin{nclaim} \label{claim:gij-paths}
Let $(v_i, v_j) \in A$. Then in $N$, there is a node $s$ of $S_{v_i}$ such that
there exists a directed path $P_1$ from $s$ to $v_{j,1}$ containing a secondary arc $(t_1, t_1')$,
and a directed path $P_2$ from $s$ to $v_{j,2}$ containing a secondary arc $(t_2, t_2')$,
and such that
$D_{i,j}$ uses these secondary arcs (i.e. for each $h \in \{1,2\}$,
either $(\alpha_g(x), \alpha_{g + 1}(x)) = (t_h, t_h')$ for some $x \in V(D_{i,j})$ and integer $g$,
or $(\alpha_\textsc{last}(x), \alpha_1(y)) = (t_h, t_h')$ for some $x, y \in V(D_{i,j})$).
\man{Note that $(t_1, t'_1) = (t_2, t'_2)$ is possible.}
\end{nclaim}
\begin{proof}
Let $x$ be a node of $D_{i,j}$ satisfying Claim~\ref{claim:in_svi} above. Since $s:= \alpha_\textsc{last}(x)$ is in
the $S_{v_i}$ subtree and $x$ has descendants $w^i_{j,1}$ and $w^i_{j, 2}$ mapped to $v_{j,1}$ and $v_{j,2}$, there must be a path in $N$ from $s$ to $v_{j, 1}$ and from $s$ to $v_{j ,2}$.
Since $s$ and $v_{j,1}$ (or $v_{j, 2}$) are incomparable in $S$,
these paths must contain a secondary arc.
Moreover, there must be such paths $P_1$ and $P_2$ such that
some node of $D_{i,j}$ on the $x - w^i_{j,1}$ path (resp. the $x - w^i_{j,2}$ path)
uses the $(t_1, t_1')$ arc (resp. the $(t_2, t_2')$ arc).
\end{proof}
As noted in the previous claim, $(t_1, t_1') = (t_2, t_2')$ is possible. In essence, this happens when $N$ contains a directed path from $S_{v_i}$ to $S_{v_j}$.
In the following, let $\hat{A} \subseteq A$ be the set of arcs such that $(v_i,v_j) \in \hat{A}$ if and only if there is
a directed path in $N$ from $r(S_{v_i})$ to $r(S_{v_j})$.
The set $A' = A \setminus \hat{A}$ will form our feedback arc set, i.e. the arcs to remove to eliminate all cycles.
\begin{nclaim}\label{claim:hat-has-nocycle}
$H' = (V, \hat{A})$ contains no directed cycle.
\end{nclaim}
\begin{proof}
Suppose instead that in $H'$, there is a cycle $C = x_1x_2\ldots x_{\ell}x_1$.
By the definition of $\hat{A}$, in $N$ there is a directed path from $r(S_{x_i})$ to
$r(S_{x_{i + 1}})$ for every $i \in [\ell - 1]$, and from $r(S_{x_{\ell}})$ to $r(S_{x_1})$.
Thus $N$ contains a cycle, contradicting time-consistency.
\end{proof}
\begin{nclaim}
$|\hat{A}| \geq m - k$.
\end{nclaim}
\begin{proof}
Recall that by Claim~\ref{claim:gprime-tl}, $D$ has a transfer in each $D'_{i,j}$ that is not in $D_{i,j}$,
and these together account for $m$ transfers.
Moreover by Claim~\ref{claim:gij-paths}, each $D_{i,j}$ subtree uses at least one transfer.
Since $D$ uses at most $K = 2m + k$ transfers, there can be at most $k$ of the $D_{i,j}$ subtrees that
use more than one transfer, and hence at least $m - k$ that only use one.
By Claim~\ref{claim:gij-paths}, for each $(v_i, v_j) \in A$, there is a directed path $P_1$ in $N$ from
$r(S_{v_i})$ to $v_{j, 1}$ and a directed path $P_2$ from $r(S_{v_i})$ to $v_{j, 2}$, such that
$D_{i,j}$ uses the transfer arc $(t_1, t_1')$ from $P_1$ and $(t_2, t_2')$ from $P_2$.
If $D_{i,j}$ uses one transfer, we must have $(t_1, t_1') = (t_2, t_2')$.
This is only possible if $t_1' = t_2'$ is an ancestor of $\textsc{lca}_S(v_{j,1}, v_{j,2}) = r(S_{v_j})$.
This shows that there are at least $m - k$ subtrees $D_{i,j}$, and hence arcs $(v_i, v_j)$ such that $N$ has a path from
$r(S_{v_i})$ to $r(S_{v_j})$.
\end{proof}
We are done with the proof, since
$A' = A \setminus \hat{A}$ is a feedback arc set of $H$ by Claim~\ref{claim:hat-has-nocycle},
and $|A'| = |A| - |\hat{A}| \leq m - (m - k) = k$.
\end{proof}
We have shown that $H$ has a feedback arc set of size $k$
if and only if $(D, l)$ is \ml{$S$-base-reconcilable} using $K = 2m + k$ transfers.
By Lemma~\ref{lem:equiv-ds-tree-unknown}, $H$ has a feedback arc set of size $k$
if and only if the relation graph $R(D)$ is \ml{$S$-base-consistent} using $K$ transfers.
Therefore we get the following.\\
\noindent
\textbf{Theorem \ref{thm:hard-unknown-highways}.}
\emph{The TMSTC problem is NP-hard, even if the input relation graph $R$ has a corresponding
least-resolved $DS$-tree that is binary.}
\end{document} |
\begin{document}
\title{Entanglement robustness and geometry in systems of identical particles}
\begin{abstract}
\noindent
The robustness properties of bipartite entanglement in systems of $N$
bosons distributed in $M$ different modes are analyzed using a definition of separability
based on commuting algebras of observables, a natural choice
when dealing with identical particles. Within this framework,
expressions for the robustness and generalized robustness of entanglement can be
explicitly given for large classes of boson states: their
entanglement content turns out in general to be much more stable than that of
states of distinguishable particles. Using these results,
the geometrical structure of the space of $N$ boson states can be explicitly addressed.
\end{abstract}
\section{Introduction}
When dealing with many-body systems made of identical particles, the usual definitions
of separability and entanglement appear problematic since the natural particle
tensor product structure on which these notions are based is no longer available.
This comes from the fact that in such systems the microscopic constituents
cannot be individually addressed, nor can their properties be directly measured
\cite{Feynman,Sakurai}.
This observation points to the need of generalized notions of separability and entanglement
not explicitly referring to the set of system states, or more generally
to the ``particle'' aspect of first quantization: they should rather be based
on the second quantized description of many-body systems,
in terms of the algebra of observables
and of the behavior of the associated correlation functions.
This ``dual'' point of view stems from the fact that
in systems of identical particles there is not a preferred
notion of separability, it can be meaningful only in reference to a
given choice of (commuting) sets of observables \cite{Zanardi1}-\cite{Viola2}.
This new approach to separability and entanglement
has been formalized in \cite{Benatti1}-\cite{Benatti3}:
it is valid in all situations, while reducing to the standard one for
systems of distinguishable particles;
\footnote{The notion of entanglement
in many-body systems has been widely discussed
in the recent literature ({\it e.g.}, see \cite{Schliemann}-\cite{Buchleitner});
nevertheless, we stress that
only a limited part of those results is relevant for systems of identical particles.}
in particular, it has been successfully applied
to the treatment of the behavior of trapped ultracold bosonic gases,
giving rise to new, testable predictions in quantum metrology
\cite{Benatti1}-\cite{Argentieri}.
As in the case of systems of distinguishable particles,
\footnote{For general reviews on the role of quantum correlations in systems
with large number of constituents see \cite{Lewenstein}-\cite{Amico}.}
suitable criteria able to detect non-classical correlations through
the implementations of practical tests are needed in order
to easily identify entangled bosonic many-body states \cite{Horodecki}-\cite{Modi}.
In the case of bipartite entanglement, the operation of partial transposition \cite{Peres}-\cite{Werner}
turns out to be again very useful; actually,
it has been found that in general this operation gives rise to a much more exhaustive
criterion for detecting bipartite entanglement than in the case of
distinguishable particles \cite{Argentieri,Benatti3}.
As a byproduct, this allows a rather complete classification
of the structure of bipartite entangled states in systems composed of $N$
bosons that can occupy $M$ different modes \cite{Benatti3},
becoming completely exhaustive in some relevant special cases, as for $M=2$.
In the following we shall further explore the properties of bipartite entanglement
in bosonic systems made of a fixed number of elementary constituents. We shall first
study to what extent an entangled bosonic state remains robust against its
mixing with another state (separable or not): we shall find that, in general, bosonic
entanglement is much more robust than that of distinguishable particles.
In particular, we shall give an explicit expression for the so-called
``robustness'' \cite{Vidal} and upper bounds for the ``generalized robustness'' \cite{Steiner}.
As a byproduct, a characterization of the geometry of the space of
bosonic states will also be given; the structure
of this space turns out to be much richer than in the case of systems
with $N$ distinguishable constituents. One of the most striking results is that
the totally mixed separable state, proportional to the unit matrix, no longer lies
in the interior of the subspace of separable states: in any of its neighborhoods
entangled states can always be found.
\section{Entanglement of multimode boson systems}
As mentioned above, we shall focus on bosonic many-body systems made of $N$
elementary constituents that can occupy $M$ different modes. From the physical point of view,
this is a quite general framework, relevant in the study of quite different systems
in quantum optics, atom and condensed matter physics. For instance,
this theoretical paradigm is of special importance in modelling the behavior of ultracold bosonic gases
confined in multi-site optical lattices, that are becoming so relevant in the study of
quantum phase transitions and other quantum many-body effects ({\it e.g.}, see
\cite{Lewenstein,Bloch},
\cite{Stringari}-\cite{Yukalov} and references therein).
In order to properly describe the $N$ boson system,
let us thus introduce creation $a^\dagger_i$ and annihilation
operators $a_i$, $i=1, 2,\ldots,M$, for the $M$ different modes that the bosons can occupy,
obeying the standard canonical
commutation relations, $[a_i,\,a^\dagger_j]=\delta_{ij}$.
The total Hilbert space $\cal H$ of the system
is then spanned by the many-body Fock states, obtained by applying creation operators to the
vacuum:
\begin{equation}
|n_1, n_2,\ldots,n_M\rangle= {1\over \sqrt{n_1!\, n_2!\cdots n_M!}}
(a_1^\dagger)^{n_1}\, (a_2^\dagger)^{n_2}\, \cdots\, (a_M^\dagger)^{n_M}\,|0\rangle\ ,
\label{2-1}
\end{equation}
the integers $n_1, n_2, \ldots, n_M$ representing the occupation numbers of the different modes.
Since the number of bosons is fixed,
the total number operator $\sum_{i=1}^M a_i^\dagger a_i$
is a conserved quantity and the occupation numbers must satisfy the
additional constraint $\sum_{i=1}^M n_i=N$; in other words, all states must
contain exactly $N$ particles. As a consequence, the dimension $D$ of the system Hilbert space $\cal H$
is finite; one easily finds: $D={N+M-1\choose N}$.
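The dimension formula can be checked by brute-force enumeration of the occupation numbers; a small Python sketch:

```python
# Check D = C(N+M-1, N): it counts the occupation-number tuples
# (n_1, ..., n_M) with n_1 + ... + n_M = N.
from itertools import product
from math import comb

def dim_by_enumeration(N, M):
    # enumerate all occupation-number tuples summing to N
    return sum(1 for ns in product(range(N + 1), repeat=M) if sum(ns) == N)

for N in range(0, 6):
    for M in range(1, 5):
        assert dim_by_enumeration(N, M) == comb(N + M - 1, N)
print("D = C(N+M-1, N) verified for small N, M")
```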
In addition, the set of polynomials in all creation and annihilation operators,
$\{a^\dagger_i,\, a_i\}$, $i=1,2,\ldots, M$,
forms an algebra that, together with its norm-closure, coincides with the algebra
${\cal B}({\cal H})$ of bounded operators;
\footnote{The algebra ${\cal B}({\cal H})$ is generated by the so-called Weyl operators;
all polynomials in the creation and annihilation operators are obtained from them
by proper differentiation \cite{Thirring,Strocchi}.}
the observables of the systems are part of this algebra.
When dealing with systems of identical particles,
instead of focusing on partitions of the Hilbert space $\cal H$, it seems natural to define the notion
of bipartite entanglement by the presence of non-classical correlations among
observables belonging to two commuting subalgebras ${\cal A}_1$ and
${\cal A}_2$ of ${\cal B}({\cal H})$ \cite{Benatti1}.
Quite generally, one can then introduce the following definition:
\noindent
{\bf Definition 1.} {\sl An {\bf algebraic bipartition} of the algebra ${\cal B}({\cal H})$ is any pair
$({\cal A}_1, {\cal A}_2)$ of commuting subalgebras of ${\cal B}({\cal H})$,
${\cal A}_1, {\cal A}_2\subset {\cal B}({\cal H})$.}
\noindent
More explicitly, a bipartition
of the $M$-oscillator algebra ${\cal B}({\cal H})$ can be given by splitting the collection
of creation and annihilation operators into two disjoint sets,
$\{a_i^\dagger,\, a_i\ |\ i=1,2,\ldots,m\}$ and
$\{a_j^\dagger,\, a_j\ |\ j=m+1,m+2,\ldots,M\}$;
it is thus uniquely determined by
the choice of the integer $m$, with $0\leq m \leq M$.
All polynomials in the first set (together with their norm-closures)
form a subalgebra ${\cal A}_1$, while the remaining set analogously generates
a subalgebra ${\cal A}_2$.
Since operators pertaining to different modes
commute, one sees that any element of the subalgebra ${\cal A}_1$
commutes with each element of ${\cal A}_2$,
in short $[{\cal A}_1, {\cal A}_2]=\,0$.
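This commutation property is easy to illustrate numerically. The sketch below is a toy model, not part of the paper's formalism: each mode's Fock space is truncated at an assumed cutoff $n_{\max}$, and two-mode operators are built with Kronecker products:

```python
# Toy model (assumed truncation): operators built from disjoint mode sets
# commute. Represent a_1 = a (x) 1 and a_2 = 1 (x) a on a two-mode space.
import numpy as np

n_max = 5
dim = n_max + 1
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # truncated annihilation op
I = np.eye(dim)

a1 = np.kron(a, I)   # acts on mode 1 only -> generates A_1
a2 = np.kron(I, a)   # acts on mode 2 only -> generates A_2

# any polynomial in {a1, a1^dag} commutes with any polynomial in {a2, a2^dag}
A1 = a1 @ a1.conj().T + 0.3 * a1          # sample element of A_1
A2 = a2.conj().T @ a2 @ a2 + a2           # sample element of A_2
assert np.allclose(A1 @ A2 - A2 @ A1, 0)
print("[A_1, A_2] = 0 on the truncated space")
```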
\noindent
{\bf Remark 1:} {\sl i)} Note that there is no loss of generality
in assuming the modes forming the subalgebras ${\cal A}_1$
(and ${\cal A}_2$) to be contiguous; if in the chosen bipartition this is not the case,
one can always re-label the modes in such a way as to achieve this convenient ordering.
\break
{\sl ii)} Further, when the two commuting algebras ${\cal A}_1$ and ${\cal A}_2$ are generated
only by a subset $M'<M$ of modes, one can simply proceed
as if the $N$ boson system contained just the $M'$ modes actually used, since
all operators in ${\cal B}({\cal H})$ pertaining to the modes not involved in the bipartition
commute with any element of the two subalgebras ${\cal A}_1$ and ${\cal A}_2$,
and therefore effectively act as ``spectators''. As a consequence, all the considerations
and results discussed below hold also in this situation, provided one replaces
the total number of modes $M$ with $M'$, the actual number of modes used
in the chosen bipartition.
$\Box$
Having introduced the notion of algebraic bipartition $({\cal A}_1, {\cal A}_2)$
of the system operator algebra ${\cal B}({\cal H})$, one can now define
the notion of local observable:
\noindent
{\bf Definition 2.} {\sl An element (operator) of ${\cal B}({\cal H})$ is said to be
$({\cal A}_1, {\cal A}_2)$-{\bf local}, {\it i.e.} local with respect to
a given bipartition $({\cal A}_1, {\cal A}_2)$, if it is the product $A_1 A_2$ of an element
$A_1$ of ${\cal A}_1$ and another $A_2$ in ${\cal A}_2$.}
\noindent
From this notion of operator locality, a natural definition of state separability and entanglement
follows \cite{Benatti1}:
\noindent
{\bf Definition 3.} {\sl A state $\rho$ (density matrix) will be called {\bf separable} with
respect to the bipartition $({\cal A}_1, {\cal A}_2)$, in short $({\cal A}_1, {\cal A}_2)$-separable,
if the expectation ${\rm Tr}\big[\rho\, A_1 A_2\big]$
of any local operator $A_1 A_2$ can be decomposed into a linear convex combination of
products of expectations:
\begin{equation}
{\rm Tr}\big[\rho\, A_1 A_2\big]=\sum_k\lambda_k\, {\rm Tr}\big[\rho_k^{(1)}\, A_1\big]\,
{\rm Tr}\big[\rho_k^{(2)}\, A_2\big]\ ,\qquad
\lambda_k\geq0\ ,\quad \sum_k\lambda_k=1\ ,
\label{2-2}
\end{equation}
where $\{\rho_k^{(1)}\}$ and $\{\rho_k^{(2)}\}$ are collections of admissible states for the systems;
otherwise the state $\rho$ is said to be {\bf entangled} with respect the bipartition
$({\cal A}_1, {\cal A}_2)$.}
\noindent
{\bf Remark 2:} {\sl i)} This generalized definition of separability can easily be extended to the case
of more than two partitions; for instance, in the case of an $n$-partition,
Eq.(\ref{2-2}) above would extend to:
\begin{equation}
{\rm Tr}\big[\rho\,A_1 A_2\cdots A_n]=\sum_k\lambda_k\, {\rm Tr}\big[\rho_k^{(1)}A_1\big]\,
{\rm Tr}\big[\rho_k^{(2)}A_2\big]\cdots
{\rm Tr}\big[\rho_k^{(n)}A_n\big]\, ,\quad
\lambda_k\geq0\ ,\,\, \sum_k\lambda_k=1\ .
\label{2-3}
\end{equation}
\noindent
{\sl ii)} When dealing with systems of {\sl distinguishable} particles, one finds
that {\sl Definition 3} gives the standard notion of separability \cite{Benatti1}.
\break
\noindent
{\sl iii)} In this respect, it should be noticed that when dealing with systems of identical particles,
there is no {\it a priori} given, natural partition,
so that questions about entanglement and separability, non-locality and locality
are meaningful only with reference to a specific choice
of commuting algebraic sets of observables \cite{Zanardi1}-\cite{Benatti3};
this general observation, often overlooked in the literature, is at the basis of the definitions
given in (\ref{2-2}) and (\ref{2-3}).
\break
\noindent
{\sl iv)} A special situation is represented by pure states \cite{Benatti3}.
In fact, when dealing with pure states,
instead of general statistical mixtures, and with bipartitions that involve the whole algebra
${\cal B}({\cal H})$, the separability condition in (\ref{2-2}) (and similarly in (\ref{2-3}))
simplifies, becoming:
\begin{equation}
{\rm Tr}\big[\rho\, A_1 A_2\big]={\rm Tr}\big[\rho^{(1)}\, A_1\big]\,
{\rm Tr}\big[\rho^{(2)}\, A_2\big]\ ,\quad\rho=|\psi\rangle\langle\psi|\ ,
\label{2-4}
\end{equation}
with $\rho^{(1)}$, $\rho^{(2)}$ projections on the restrictions of $|\psi\rangle$
to the first, respectively the second, partition;
in other terms, separable pure states turn out to be just product states.
$\Box$
Examples of separable pure states of $N$ bosons are provided by the Fock states.
Using the notation and specifications introduced before ({\it cf.} (\ref{2-1})),
they can be recast in the form:
\begin{equation}
| k_1, \ldots, k_m\,;\, k_{m+1}, \ldots, k_M\rangle\ ,\quad
\sum_{i=1}^m k_i =k\ ,\ \sum_{j=m+1}^M k_j=N-k\ ,\ \ 0\leq k \leq N\ ,
\label{2-5}
\end{equation}
where $k$ indicates the number of bosons in the first partition; by varying it together with
the integers $k_i$, these states generate the whole Hilbert space $\cal H$.
These basis states can be relabeled in a different, more convenient way as:
\begin{equation}
| k, \sigma; N-k, \sigma'\rangle\ ,\quad \sigma=1,2, \ldots, {k+m-1\choose k}\ ,\
\sigma'=1, 2,\ldots, {N-k+M-m-1\choose N-k}\ ,
\label{2-6}
\end{equation}
where, as before, the integer $k$ represents the number of particles found in the first $m$ modes,
while $\sigma$ counts the different ways in which those particles can fill those modes;
similarly, $\sigma'$ labels the ways in which the remaining $N-k$ particles
can occupy the other $M-m$ modes.
\footnote{Clearly, we need two extra labels $\sigma$ and $\sigma'$ for each value of $k$, so that
these labels (as well as the range of values they take) are in general $k$-dependent: in order to keep
the notation as simple as possible, in the following these dependences will be tacitly understood.}
In this new labelling, the property of orthonormality of the states
in (\ref{2-5}) becomes:
$\langle k, \sigma; N-k, \sigma'| l, \tau; N-l, \tau'\rangle=
\delta_{kl}\,\delta_{\sigma\tau}\,\delta_{\sigma'\tau'}$.
For fixed $k$, the basis vectors $\{| k, \sigma; N-k, \sigma'\rangle\}$ span a subspace ${\cal H}_k$
of dimension $D_k\, D_{N-k}$, where for later convenience we have defined ({\it cf.} (\ref{2-6}) above):
\begin{equation}
D_k\equiv{k+m-1\choose k}\ ,\qquad
D_{N-k}\equiv{N-k+M-m-1\choose N-k}\ .
\label{2-7}
\end{equation}
\noindent
{\bf Remark 3:} Note that the space ${\cal H}_k$ is naturally isomorphic to the tensor
product space $\mathbb{C}^{D_k}\otimes \mathbb{C}^{D_{N-k}}$; through this isomorphism, the states
$| k, \sigma; N-k, \sigma'\rangle$ can then be identified with the
corresponding basis states of the form $| k, \sigma\rangle \otimes | N-k, \sigma'\rangle$.
$\Box$
\noindent
By summing over all values of $k$, thanks to a known binomial summation formula \cite{Prudnikov},
one recovers the dimension $D$ of the whole Hilbert space $\cal H$:
\begin{equation}
\sum_{k=0}^N D_k\, D_{N-k}=D= {N+M-1\choose N}\ .
\label{2-8}
\end{equation}
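As a quick numerical illustration of the counting identity (\ref{2-8}) (a check of our own; the helper names below are not part of the paper), one can sum the block dimensions (\ref{2-7}) directly:

```python
from math import comb

def block_dims(N, M, m):
    """D_k and D_{N-k} from Eq. (2-7), for k = 0, ..., N."""
    return [(comb(k + m - 1, k), comb(N - k + M - m - 1, N - k))
            for k in range(N + 1)]

def total_dim(N, M):
    """Dimension D of the full N-boson, M-mode Hilbert space, Eq. (2-8)."""
    return comb(N + M - 1, N)

# Identity (2-8) for, e.g., N = 5 bosons in M = 4 modes, bipartition m = 2:
lhs = sum(dk * dnk for dk, dnk in block_dims(5, 4, 2))
rhs = total_dim(5, 4)       # both sides equal 56
```

The sum is independent of the chosen bipartition $m$, as expected, since the left-hand side is a Vandermonde-type convolution.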
Using this notation, a generic mixed state $\rho$
can then be written as:
\begin{equation}
\rho=\sum_{k,l=0}^N\ \sum_{\sigma,\sigma',\tau,\tau'}\
\rho_{k \sigma\sigma', l\tau\tau'}\ | k, \sigma; N-k, \sigma'\rangle \langle l, \tau; N-l, \tau' |\ ,
\quad \sum_{k=0}^N\ \sum_{\sigma,\sigma'}\
\rho_{k \sigma\sigma', k\sigma\sigma'}=1\ .
\label{2-9}
\end{equation}
In general, to determine whether a given density matrix $\rho$ can be written in separable
form is a hard task and one is forced to rely on suitable separability tests.
One of the most useful such criteria involves the operation
of partial transposition \cite{Peres,Horodecki2}:
a state $\rho$ for which the partially transposed density matrix $\tilde\rho$ is
no longer positive is surely entangled. This lack of positivity can be witnessed by the
so-called negativity \cite{Zyczkowski,Vidal1,Horodecki}:
\begin{equation}
{\cal N}(\rho)=\, {1\over2}\Big(\!||\tilde\rho||_1 - {\rm Tr}[\rho]\Big)\ ,\qquad
||\tilde\rho||_1={\rm Tr}\Big[\sqrt{\tilde\rho^\dagger \tilde\rho}\Big]\ .
\label{2-10}
\end{equation}
This quantity is nonvanishing only in the presence of a non-positive $\tilde\rho$.
Although this criterion is not exhaustive (there are entangled states that remain
positive under partial transposition), it turns out to be much more reliable in systems
made of identical particles \cite{Argentieri,Benatti3}.
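The negativity (\ref{2-10}) is straightforward to evaluate numerically; the following minimal numpy sketch (function names are ours) computes it through an explicit partial transposition of a density matrix on $\mathbb{C}^{d_A}\otimes\mathbb{C}^{d_B}$:

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Transpose the second factor of a density matrix on C^dA (x) C^dB."""
    r = rho.reshape(dA, dB, dA, dB)                  # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # swap b <-> b'

def negativity(rho, dA, dB):
    """Negativity (2-10): N(rho) = (||rho~||_1 - Tr[rho]) / 2."""
    eigs = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    return 0.5 * (np.abs(eigs).sum() - np.trace(rho).real)

# Sanity check on a maximally entangled two-qubit state:
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2.0)
n = negativity(np.outer(psi, psi), 2, 2)             # -> 0.5
```

The same routine is reused below (in embedded form) for two-mode bosonic states.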
Indeed, the operation of partial transposition
gives a necessary and sufficient criterion for entanglement detection for very general
classes of bosonic states (\ref{2-9}), {\it e.g.} in the presence of only two modes ($M=2$), or,
for arbitrary
$M$, when the $({\cal A}_1,\, {\cal A}_2)$-bipartition is such that the algebra
${\cal A}_1$ is generated by the creation and annihilation operators of just one mode,
while the remaining $M-1$ modes generate ${\cal A}_2$.
Even more interestingly, it turns out that entangled $N$-body bosonic states must be
of a definite, specific form \cite{Benatti3}:
\noindent
{\bf Proposition 1.} {\sl A generic $(m, M-m)$-mode bipartite state (\ref{2-9}) is entangled
if and only if either it cannot be cast in the following block diagonal form
\begin{equation}
\rho=\sum_{k=0}^N p_k\ \rho_k\ ,\qquad \sum_{k=0}^N p_k=1\ ,\quad {\rm Tr}[\rho_k]=1\ ,
\label{2-11}
\end{equation}
with
\begin{equation}
\rho_k=\sum_{\sigma,\sigma',\tau,\tau'}\
\rho_{k \sigma\sigma', k\tau\tau'}\ | k, \sigma; N-k, \sigma'\rangle \langle k, \tau; N-k, \tau' |\ ,
\quad \sum_{\sigma,\sigma'}\rho_{k \sigma\sigma', k\sigma\sigma'}=1\ ,
\label{2-12}
\end{equation}
({\it i.e.} at least one of its coefficients $\rho_{k \sigma\sigma', l\tau\tau'}$ with $k\neq l$
is nonvanishing),
or it can, but at least one of its diagonal blocks $\rho_k$ is non-separable.}
\footnote{For each block $\rho_k$, separability is understood with reference to
the isomorphic structure $\mathbb{C}^{D_k}\otimes \mathbb{C}^{D_{N-k}}$
mentioned before (see {\sl Remark 3}).}
\noindent
{\sl Proof.} Assume first that the state $\rho$ cannot be written in block diagonal form;
one can then show \cite{Benatti3} that it does not remain positive under the operation
of partial transposition and is therefore entangled. Next, take $\rho$
in block diagonal form as in (\ref{2-11}), (\ref{2-12}) above. If all its blocks
$\rho_k$ are separable, then clearly $\rho$ itself is separable.
Assume then that at least one of the diagonal blocks
is entangled. Mixing it with the remaining blocks as in (\ref{2-11}) will not
spoil its entanglement, since all blocks $\rho_k$ have support on orthogonal subspaces;
as a consequence, the state $\rho$ is itself non-separable.
$\Box$
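Proposition 1 can be illustrated numerically in the simplest setting of $M=2$ modes (treated in detail in the next sections), where every block ${\cal H}_k$ is one-dimensional, so that block diagonal simply means diagonal in the Fock basis; the sketch below (helper names are ours) implements the embedding $|k;N-k\rangle\mapsto|k\rangle\otimes|N-k\rangle$ of {\sl Remark 3} by hand:

```python
import numpy as np

N = 3            # boson number; M = 2 modes, one mode per partition
d = N + 1        # occupation of the first mode runs over 0, ..., N

def embed(k):
    """Fock vector |k; N-k> embedded as |k> (x) |N-k> in C^d (x) C^d."""
    v = np.zeros(d * d)
    v[k * d + (N - k)] = 1.0
    return v

def min_pt_eigenvalue(rho):
    """Smallest eigenvalue after transposing the second mode."""
    r = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
    return np.linalg.eigvalsh(r).min()

# A Fock-diagonal (i.e. block diagonal) mixture stays PPT ...
rho_diag = sum(np.outer(embed(k), embed(k)) for k in range(d)) / d
# ... while a single coherence between k = 0 and k = N makes the state NPT:
psi = (embed(0) + embed(N)) / np.sqrt(2.0)
rho_coh = np.outer(psi, psi)
```

The diagonal mixture is left positive by partial transposition, while the coherent superposition acquires a negative eigenvalue and is therefore entangled, in agreement with the proposition.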
Having found the general form of non-separable $N$-boson states, one can next ask
how robust their entanglement content is against mixing with other states.
This question has been extensively studied for states of distinguishable particles
\cite{Vidal,Steiner,Vidal1,Plenio,Horodecki};
in the next section we shall analyze to what extent the results obtained in that case
can be extended to systems with a fixed number of bosons.
\section{Robustness of entanglement}
Several measures of entanglement have been introduced in the literature with the aim of
characterizing quantum correlations and their usefulness in specific applications
\cite{Horodecki}-\cite{Modi}.
Starting with the entanglement of formation, most of these measures point to the
quantification of the entanglement content of a given state. A different approach
to this general problem has been proposed in \cite{Vidal,Steiner}: the idea is to obtain information
about the magnitude of non-classical correlations contained in
a state $\rho$ by studying how much it can be mixed with other states before
becoming separable.
More precisely, let us indicate with $\cal M$ the set of all
system states and with ${\cal S}\subset{\cal M}$ that of the separable ones;
then, with reference to an arbitrary $(m, M-m)$-mode bipartition, one can introduce the
following definition:
\noindent
{\bf Definition 4.} {\sl Given a state $\rho$, its {\bf robustness of entanglement}
is defined by
\begin{equation}
R(\rho)={\rm inf}\Big\{ t\ |\ t\geq0,\ \exists\, \sigma\in{\cal M}'\subset{\cal M},\ {\rm for\ which} \
\eta\equiv{\rho + t\,\sigma\over(1+t)} \in{\cal S}\Big\}\ ,
\label{3-1}
\end{equation}
{\it i.e.} it is the smallest, non-negative $t$ such that a state $\sigma$ exists for which the
(unnormalized) combination $\rho + t\,\sigma$ is separable.}
\noindent
Actually, various forms of robustness have been introduced: they all share the definition
(\ref{3-1}), but differ in the choice of the subset
${\cal M}'$ from which the mixing state $\sigma$ is drawn.
In particular, one speaks of {\it generalized robustness} $R_g$ when $\sigma$ can be any state \cite{Steiner},
and simply of {\it robustness} $R_s$ when $\sigma$ must be separable \cite{Vidal}.
All robustnesses defined in this way satisfy nice properties; more specifically, they are
entanglement monotones, {\it i.e.} they do not increase under local operations and classical communication,
and they are convex, $R\big(\lambda\, \rho_1+(1-\lambda)\rho_2\big)\leq \lambda\, R(\rho_1) +
(1-\lambda)\, R(\rho_2)$; further, $R(\rho)$ in (\ref{3-1}) vanishes if and only if
$\rho$ itself is separable. Although originally proven for states of distinguishable
particles \cite{Vidal,Steiner}, these properties hold true also in the case of $N$-boson systems. Nevertheless,
there are striking differences in the behavior of the robustness of states of identical particles
with respect to what is known for systems with distinguishable constituents.
Let us first focus on the robustness $R_s(\rho)$, which measures how strong the entanglement
content of a state $\rho$ is when mixed with separable states. One finds:
\noindent
{\bf Proposition 2.} {\sl The robustness of entanglement of a generic $(m, M-m)$-mode
bipartite state $\rho$ is given by
\begin{equation}
R_s(\rho)=\sum_{k=0}^N p_k\, R_s(\rho_k)\ ,
\label{3-2}
\end{equation}
for states that are in block diagonal form as in (\ref{2-11}), (\ref{2-12}), while it is infinitely large
otherwise.}
\noindent
{\sl Proof.} From the results of {\it Proposition 1}, we know that separable $N$-boson states
must be block diagonal. If the state $\rho$ is not in this form, it can
never be made block diagonal by mixing it with any separable one; therefore, in this case, the combination
$\rho + t\, \sigma$ will never be separable, unless $t$ is infinitely large.
Next, consider the case in which the state $\rho$ is in block diagonal
form, {\it i.e.} it can be written as in (\ref{2-11}), (\ref{2-12}).
First, if $\rho$ is separable, then clearly $R_s(\rho)=\,0$.
For an entangled $\rho$, one can discuss each block $\rho_k$ separately: this is allowed
since they have support on orthogonal Hilbert subspaces. Then, let us indicate by $t_k$ the robustness
of the block density matrix $\rho_k$; by {\sl Remark 3} and the definition of robustness,
the numbers $t_k$ are finite and non-negative,
vanishing only when the corresponding state $\rho_k$ is separable \cite{Vidal}. More specifically, for each
$k$, there exist separable states $\sigma_k$ and $\eta_k$, such that:
\begin{equation}
\rho_k +t_k\, \sigma_k =(1+t_k)\, \eta_k\ .
\label{3-3}
\end{equation}
Multiplying both sides of this relation by the positive number $p_k$ and then summing over $k$,
one gets
\begin{equation}
\rho +t\, \sigma =(1+t)\, \eta\ ,\qquad t=\sum_{k=0}^N p_k\, t_k\ ,
\label{3-4}
\end{equation}
where the separable states $\sigma$ and $\eta$ are explicitly given by
\begin{equation}
\sigma=\sum_{k=0}^N \bigg({p_k\, t_k\over t}\bigg)\ \sigma_k\ ,\qquad
\eta=\sum_{k=0}^N \bigg({1+t_k \over 1+t}\bigg)\, p_k\, \eta_k\ .
\label{3-5}
\end{equation}
To prove that indeed $t$ given in (\ref{3-4}) is really the robustness of $\rho$, one needs to
check that no better decomposition
\begin{equation}
\rho +t'\, \sigma' =(1+t')\, \eta'\ ,
\label{3-6}
\end{equation}
with $t'\leq t$, exists. In order to show this, let us proceed {\it ad absurdum}
and assume that such a decomposition can indeed be found.
Since the states $\sigma'$ and $\eta'$ are separable, they must be block diagonal, {\it i.e.}
of the form $\sigma'=\sum_k q_k\, \sigma'_k$ and $\eta'=\sum_k r_k\, \eta'_k$
with $\sum_k q_k=\sum_k r_k=1$ and $\sigma'_k$ and $\eta'_k$ separable density matrices.
By the orthogonality of the Hilbert subspaces with fixed $k$, from (\ref{3-6}) one then gets
\begin{equation}
p_k\, \rho_k + t'\, q_k\, \sigma'_k = (1+t')\, r_k\, \eta'_k\ ,
\label{3-7}
\end{equation}
and further, by taking its trace, $p_k+ t'\, q_k = (1+t')\, r_k$. In addition, from the previous identity,
one sees that the combination
\begin{equation}
\rho_k + t'\, {q_k\over p_k}\, \sigma'_k\ ,
\label{3-8}
\end{equation}
is separable. By definition of robustness of the block $\rho_k$ as given in (\ref{3-3}),
it then follows that:
\begin{equation}
t'\, {q_k\over p_k}\geq t_k\ ,
\label{3-9}
\end{equation}
or equivalently, $ t_k\, p_k \leq t'\, q_k$. By summing over $k$, one then finds
$t\leq t'$; this result is compatible with the initial assumption $t\geq t'$
only if $t'$ coincides with $t$. Therefore, the robustness of the block diagonal state $\rho$
is indeed given by the weighted sum of the robustness of each block.
$\Box$
The problem of finding the robustness $R_s(\rho)$ of a generic $N$-boson state $\rho$ is then
reduced to the more manageable task of identifying the robustness of its diagonal
blocks, which are finite-dimensional density matrices
for which standard techniques and results can be used.
\noindent
{\bf Remark 4:} {\sl i)} A remarkable property of the robustness of entanglement of states describing
distinguishable particles is that it equals the negativity for pure states.
In the case of identical particles, this property no longer holds, as the robustness of entanglement
of non-block-diagonal pure states is infinitely large. Nevertheless, one can easily show that, in
general, for pure states ${\cal N}(\rho)\leq R_s(\rho)$.
\break
{\sl ii)} The robustness of entanglement of states that, with respect to the given partition, are
mixtures of pure block states, $\rho=\sum_k p_k\rho_k$, $\rho_k{}^2=\rho_k$, equals
their negativity, $R_s(\rho)=\sum_k p_k\, {\cal N}(\rho_k)={\cal N}(\rho)$,
since now $R_s(\rho_k)={\cal N}(\rho_k)$, as in the standard case (see \cite{Benatti3}).
$\Box$
More difficult is the task of computing the generalized robustness
$R_g(\rho)$ of a generic $N$-boson state: in general, only upper bounds can be given.
In any case, note that $R_g(\rho)\leq R_s(\rho)$, since the optimization procedure of
{\sl Definition 4} is performed over a larger subset of states in the case of the
generalized robustness.
By fixing as before a $(m, M-m)$-mode bipartition, a first bound on $R_g(\rho)$
can be easily obtained. Let us extract from $\rho$ its
diagonal part $\rho_D$, as defined in terms of
a Fock basis determined by the given bipartition ({\it cf.} (\ref{2-6})),
and call $\rho_{ND}\equiv \rho-\rho_D$ the rest. By definition of separability,
$\rho_D$ is surely separable with respect to
the chosen bipartition, so that an easy way to get a separable state
by mixing $\rho$ with another state $\sigma$ is to subtract from it its non-diagonal part.
However, $-\rho_{ND}$ alone is not in general a density matrix since it might have negative
eigenvalues. Let us denote by $\lambda$ the modulus of its largest negative eigenvalue;
then the quantity $\lambda \mathbbm{1} -\rho_{ND}$, where $\mathbbm{1}$ is the identity matrix,
will surely be positive and therefore, once
normalized, can play the role of the density matrix $\sigma$ in the separable combination
$\rho +t\, \sigma\equiv\rho_D +\lambda\, \mathbbm{1}$. By taking the trace, one finds
for the normalization factor $t$ the following expression: $t=\lambda\, D$,
where $D$ is as before the dimension of the total Hilbert space. By definition of robustness,
it then follows that
\begin{equation}
R_g(\rho)\leq \lambda\, D
\label{3-10}\ ;
\end{equation}
as a consequence, the generalized robustness
of a generic $N$-boson state is always finite.
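The construction behind the bound (\ref{3-10}) can be checked numerically: for a random pure state, with the computational basis playing the role of the separable Fock basis (\ref{2-6}), the matrix $\lambda \mathbbm{1} -\rho_{ND}$ is indeed positive and $\rho+(\lambda \mathbbm{1}-\rho_{ND})=\rho_D+\lambda\mathbbm{1}$ is diagonal. A sketch of ours, with names of our own choosing:

```python
import numpy as np

# Random pure state on a D-dimensional space; the computational basis plays
# the role of the separable Fock basis.
D = 6
rng = np.random.default_rng(0)
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

rho_D = np.diag(np.diag(rho))              # diagonal (separable) part
rho_ND = rho - rho_D                       # traceless off-diagonal rest

# lambda = modulus of the most negative eigenvalue of -rho_ND,
# i.e. the largest eigenvalue of rho_ND:
lam = np.linalg.eigvalsh(rho_ND).max()

sigma_unnorm = lam * np.eye(D) - rho_ND    # positive by construction
t = np.trace(sigma_unnorm).real            # equals lam * D, the bound (3-10)
```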
A different bound on $R_g(\rho)$ can be obtained using a refined decomposition for $\rho$,
\begin{equation}
\rho=\rho_B + \rho_{NB}\ ,\qquad
\rho_B=\sum_{k=0}^N p_k\ \rho_k\ ,\qquad \sum_{k=0}^N p_k=1\ ,\quad {\rm Tr}[\rho_k]=1\ ,
\label{3-11}
\end{equation}
where $\rho_B$ is the block diagonal part of $\rho$, whose blocks $\rho_k$ can be written
as in (\ref{2-12}), while $\rho_{NB}\equiv\rho-\rho_B$ is the rest, containing the
non block diagonal pieces. One can first ask for the generalized robustness
of $\rho_B$, which is a {\it bona fide} state, being a normalized, positive
matrix.
\footnote{This is a direct consequence of the positivity of $\rho$, since
$\rho_B$ is built out of its principal submatrices.}
Quite in general, one has:
\noindent
{\bf Proposition 3.} {\sl The generalized robustness of entanglement of a generic $(m, M-m)$-mode
bipartite state $\rho$ given in block diagonal form as in (\ref{2-11}), (\ref{2-12}) is given by
\begin{equation}
R_g(\rho)=\sum_{k=0}^N p_k\, R_g(\rho_k)\ .
\label{3-12}
\end{equation}
}
\noindent
{\sl Proof.} It is the same as in {\sl Proposition 2\,}; the only difference
is that now the states $\sigma_k$ are in general entangled.
$\Box$
\noindent
As a consequence, for a generic state as in (\ref{3-11}) above, due to the presence of the
additional term $\rho_{NB}$, one surely has:
$R_g(\rho)\geq R_g(\rho_B)=\sum_{k=0}^N p_k\, R_g(\rho_k)$.
To get an upper bound for $R_g(\rho)$, let us consider the form that the
(unnormalized) density matrices $\sigma$ must take in order to make
the combination $\rho+\sigma$ separable; by \hbox{\sl Definition 4}, the generalized
robustness of entanglement coincides with the minimum value of their traces:
$R_g(\rho)={\rm inf}\big\{{\rm Tr}[\sigma]\big\}$.
Since the combination $\rho+\sigma$ is separable, it must be in block diagonal form;
therefore, $\sigma$ must surely contain the contribution $-\rho_{NB}$. Further,
it must also take care of the entanglement in the remaining block
diagonal term $\rho_B$; in view of the results of {\sl Proposition 3},
this can be obtained (and in an optimal way)
by the contribution $\sum_{k=0}^N p_k\, R_g(\rho_k)\, \sigma_k$,
where $\sigma_k$ is the optimal density matrix that makes the diagonal block $\rho_k$
separable.
However, while $\sum_{k=0}^N p_k\, R_g(\rho_k)\, \sigma_k$ is a positive matrix, in general $\rho_{NB}$
is not; therefore $\sigma$ should contain a further contribution,
an (unnormalized) positive and separable matrix $\tilde\sigma$ curing the negativity
induced by $-\rho_{NB}$. As a consequence, the generic form of the positive matrix $\sigma$
making the combination $\rho+\sigma$ separable is given by:
\begin{equation}
\sigma=\sum_{k=0}^N p_k\, R_g(\rho_k)\, \sigma_k -\rho_{NB} + \tilde\sigma\ ,
\label{3-14}
\end{equation}
and the computation of the generalized robustness of $\rho$ is reduced to the
determination of the optimal $\tilde\sigma$; indeed:
\begin{equation}
R_g(\rho)=\sum_{k=0}^N p_k\, R_g(\rho_k) + {\rm inf}\big\{{\rm Tr}[\tilde\sigma]\big\}\ .
\label{3-15}
\end{equation}
Upper bounds on $R_g(\rho)$ can then be obtained by estimating the above minimum value
through specific choices of $\tilde\sigma$.
A simple possibility for curing the non-positivity of $-\rho_{NB}$ is to add to it
a multiple of the identity, with coefficient given by the modulus of its largest negative eigenvalue;
in general, however, the value of this eigenvalue is
difficult to estimate. Another possibility is suggested by the general theory
of positive matrices (see Theorem 6.1.1 and 6.1.10 in \cite{Horn}):
a sufficient condition for a generic hermitian matrix $M$ to be positive is that it be
``diagonally dominant'', {\it i.e.} $M_{ii}\geq \sum_{j\neq i} |M_{ij}|$, $\forall i$.
\footnote{Note that this condition is basis-dependent: inequivalent conditions
are obtained by expressing the matrix $M$ in different bases.}
Then, in a fixed separable basis, by choosing for $\tilde\sigma$
the diagonal matrix whose entries are given by
the sums of the moduli of the elements of the corresponding rows of $-\rho_{NB}$,
the matrix $\sigma$ in (\ref{3-14}) is positive and
makes the combination $\rho+\sigma$ separable. One can then conclude that:
\begin{equation}
R_g(\rho)\leq\sum_{k=0}^N p_k\, R_g(\rho_k) + ||\rho_{NB}||_{\ell_1}\ ,
\label{3-16}
\end{equation}
where, for any matrix $M$, $|| M||_{\ell_1}=\sum_{i,j} |M_{ij}|$ is the so-called
$\ell_1$-norm (see \cite{Horn}).
\footnote{The same procedure can also be applied to the previously used decomposition
of $\rho$ into its diagonal and off-diagonal parts: $\rho=\rho_D +\rho_{ND}$;
the mixing matrix $\sigma$ would now be composed of $-\rho_{ND}$ plus a diagonal matrix
whose entries are given by the sums of the moduli of the elements of each row of $\rho_{ND}$.
In this case, one easily finds that $R_g(\rho)\leq ||\rho_{ND}||_{\ell_1}$;
although in general $||\rho_{ND}||_{\ell_1} \geq ||\rho_{NB}||_{\ell_1}$, this constitutes a
different upper bound for the generalized robustness, independent of that given in (\ref{3-16}).}
In the presence of just two modes, $M=2$, each of which forms a partition,
the above considerations simplify further. In this case,
the Fock basis in (\ref{2-6}) is given by the set of $N+1$ vectors $\{ |k;N-k\rangle,\ 0\leq k\leq N \}$,
without the need of further labels; indeed, the $N$ bosons simply distribute themselves
between the two modes.
Notice that by (\ref{2-4}) this set of Fock vectors
constitutes the only basis made of separable pure states \cite{Benatti3}.
In this basis, a generic density matrix for the system can then be written as:
\begin{equation}
\rho=\sum_{k,l=0}^N\ \rho_{kl}\ | k; N-k\rangle \langle l; N-l |\ ,
\quad \sum_{k=0}^N\ \rho_{kk}=1\ .
\label{3-17}
\end{equation}
By {\sl Proposition 1}, once adapted to this simplified case,
it follows that a state as in (\ref{3-17}) is separable
if and only if $\rho_{kl}\sim\delta_{kl}$, {\it i.e.} if
the density matrix $\rho$ is diagonal in the Fock basis. As a consequence,
an entangled state can never be made separable by mixing it with a separable state,
so that its robustness of entanglement is always infinite.
In the case of the generalized robustness, since there are only diagonal and off-diagonal
terms and no blocks, the above discussed upper bounds (\ref{3-10}) and (\ref{3-16}) simplify, becoming:
\begin{equation}
i)\ R_g(\rho)\leq \lambda\, (N+1)\ ,\qquad ii)\
R_g(\rho)\leq ||\rho_{ND}||_{\ell_1}\ ,
\label{3-18}
\end{equation}
where, as before, $\rho_{ND}$ is the non-diagonal part of $\rho$ in the Fock basis, and
$\lambda$ the modulus of the largest negative eigenvalue of $-\rho_{ND}$.
These bounds can be explicitly evaluated for specific classes of states as shown below.
\noindent
{\bf Examples:} {\sl i)} Let us first consider pure states of the form:
\begin{equation}
\rho\equiv|\psi\rangle\langle\psi|\ ,\qquad
|\psi\rangle= {1\over\sqrt{N+1}}\sum_{k=0}^N p_k\, |k;N-k\rangle\ ,\qquad p_k=e^{i\varphi_k}\ .
\label{3-19}
\end{equation}
The non-diagonal part of the matrix $\rho$ is given by $\rho_{ND}=(\Phi-\mathbbm{1})/(N+1)$
where $\Phi=\sum_{k,l} e^{i(\varphi_k-\varphi_l)}\, |k;N-k\rangle \langle l;N-l|$
is the $(N+1)\times(N+1)$ matrix of phase differences; its eigenvalues are
zero and $N+1$, so that the largest negative eigenvalue of $-\rho_{ND}$ is in modulus
$N/(N+1)$. From the first bound in (\ref{3-18}) one therefore gets: $\ R_g(\rho)\leq N$.
This is also the result of the second bound, since the norm $||\rho_{ND}||_{\ell_1}$
is also equal to $N$.
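The eigenvalue count used in this example is easily confirmed numerically for arbitrary phases $\varphi_k$ (a small check of ours, not part of the derivation):

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 2.0 * np.pi, N + 1)        # arbitrary phases phi_k

# Phase matrix Phi and off-diagonal part of rho = |psi><psi|, Eq. (3-19):
Phi = np.exp(1j * np.subtract.outer(phi, phi))
rho_ND = (Phi - np.eye(N + 1)) / (N + 1)

# Bound i): lambda * D, with lambda the modulus of the most negative
# eigenvalue of -rho_ND, and D = N + 1:
lam = abs(np.linalg.eigvalsh(-rho_ND).min())
bound_i = lam * (N + 1)                           # -> N

# Bound ii): the ell_1 norm of rho_ND:
bound_ii = np.abs(rho_ND).sum()                   # -> N
```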
\break
{\sl ii)} Nevertheless, the two bounds in (\ref{3-18})
do not coincide in general, as can be seen by slightly generalizing the
states in (\ref{3-19}), allowing the coefficients $p_k$ to acquire a non-unit modulus, $p_k=|p_k| e^{i\varphi_k}$,
$\sum_k |p_k|^2=1$. By choosing the moduli and phases of the $p_k$ to be uniformly distributed, one can
easily generate states for which the second bound in (\ref{3-18})
is the more stringent one.
\break
{\sl iii)} Note, however, that the hierarchy of the bounds in (\ref{3-18}) can
be reversed, as it happens for instance with the following mixed state:
\begin{equation}
\rho=\frac{1}{N+1}\sum_{k=0}^N |k;N-k\rangle\langle k;N-k|-\frac{1}{N(N+1)}
\sum_{\substack{k,l=0\\k\neq l}}^N |k;N-k\rangle\langle l;N-l|\ .
\label{3-20}
\end{equation}
Indeed, now one has
$\rho_{ND}=(\mathbbm{1}-E)/N(N+1)$, where $E$ is the matrix with all entries equal to one.
One easily checks that $||\rho_{ND}||_{\ell_1}=1$, while the modulus
of the largest negative eigenvalue $\lambda$
of $-\rho_{ND}$ is $1/N(N+1)$. Therefore, from the first bound in (\ref{3-18}) one gets
$R_g(\rho)\leq 1/N$, which is lower than the second one.
$\Box$
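The reversed hierarchy of the bounds in example {\sl iii)} can likewise be confirmed numerically (again a check of ours):

```python
import numpy as np

N = 5
d = N + 1
E = np.ones((d, d))                      # matrix with all entries equal to one

# Off-diagonal part of the mixed state (3-20):
rho_ND = (np.eye(d) - E) / (N * d)

lam = abs(np.linalg.eigvalsh(-rho_ND).min())   # -> 1/(N(N+1))
bound_i = lam * d                              # -> 1/N
bound_ii = np.abs(rho_ND).sum()                # -> 1
```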
\section{On the geometry of $N$-boson states}
As shown by the previous results, the properties of the states of a system of $N$ bosons
are rather different from, and to a certain extent richer than,
those describing distinguishable particles. This is surely a consequence
of the indistinguishability of the system's elementary constituents, but also
of the presence of the additional constraint that fixes the total number of
particles to $N$. This additional ``rigidity'' nevertheless allows a detailed
description of the geometric structure of the set of $N$-boson states.
Let us first consider the set of entangled states. As mentioned before,
for systems of identical particles, the negativity, as defined in (\ref{2-10}),
provides a much more exhaustive entanglement criterion than for systems
of distinguishable particles: it seems then appropriate to look for the states
that maximize it.
\noindent
{\bf Proposition 4.} {\sl Given any bipartition $(m, M-m)$,
the negativity ${\cal N}(\rho)$ is maximized by pure states
that have all the Schmidt coefficients nonvanishing and equal to a normalizing constant.}
\noindent
{\sl Proof.} First observe that the negativity is a convex function \cite{Vidal1}, {\it i.e.} it satisfies the inequality
\begin{equation}
{\cal N}\Big(\sum_i p_i\,\rho_i\Big)\leq \sum_i p_i\, {\cal N}(\rho_i)\ ,\qquad p_i\geq0\ ,\qquad
\sum_i p_i=1\ ,
\label{4-1}
\end{equation}
for any convex combination $\sum_i p_i\,\rho_i$
of density matrices $\rho_i$. Since any density matrix can be written as a convex combination
of projectors over pure states, one can limit the search to pure states;
in a given $(m, M-m)$-bipartition, they can be expanded in terms of the Fock basis (\ref{2-6}) as:
\begin{equation}
|\psi\rangle=\sum_{k=0}^N |\psi_k\rangle\ ,\qquad
|\psi_k\rangle\equiv\sum_{\sigma=1}^{D_k}\sum_{\sigma'=1}^{D_{N-k}}
\psi_{k\sigma\sigma'}\, |k,\sigma;N-k,\sigma'\rangle\ ,\qquad
\sum_k\sum_{\sigma\sigma'} |\psi_{k\sigma\sigma'}|^2=1\ .
\label{4-2}
\end{equation}
As observed before, for each $k$, the set of vectors $\{|k,\sigma;N-k,\sigma'\rangle\}$
spans a subspace ${\cal H}_k\subset {\cal H}$ of finite dimension
$D_k\, D_{N-k}$, and the component $|\psi_k\rangle$ is a vector of this
subspace. Recalling {\sl Remark 3}, one can then write it in Schmidt form,
\begin{equation}
|\psi_k\rangle\equiv\sum_{\alpha=1}^{{\cal D}_k}
\tilde\psi_{k\alpha}\, ||k,\alpha;N-k,\alpha\rangle\rangle\ ;
\label{4-3}
\end{equation}
in this decomposition, the orthonormal vectors $\{||k,\alpha;N-k,\alpha\rangle\rangle\}$ form the
Schmidt basis, with ${\cal D}_k={\rm min}\{D_k, D_{N-k}\}$ ({\it cf.} (\ref{2-6})), while the Schmidt coefficients
$\tilde\psi_{k\alpha}$ are non negative real numbers, satisfying
the normalization condition $\sum_k\sum_\alpha (\tilde\psi_{k\alpha})^2=1$.
In this new basis, one can easily compute the negativity of the density matrix $|\psi\rangle\langle\psi|$,
\begin{equation}
{\cal N}\big(|\psi\rangle\langle\psi|\big)=
{1\over2}\bigg[\bigg(\sum_{k=0}^N\sum_{\alpha=1}^{{\cal D}_k} \tilde\psi_{k\alpha}\bigg)^2-1\bigg]\ .
\label{4-4}
\end{equation}
Clearly, the negativity increases monotonically with the sum
$\sum_k\sum_\alpha \tilde\psi_{k\alpha}$, which therefore needs to be maximized
under the constraint $\sum_k\sum_\alpha (\tilde\psi_{k\alpha})^2=1$.
One easily shows that the maximum is attained when all coefficients $\tilde\psi_{k\alpha}$ are
equal,
\begin{equation}
\tilde\psi_{k\alpha}={1\over\sqrt{\cal D}}\ ,\qquad {\cal D}=\sum_{k=0}^N {\cal D}_k\ ,
\label{4-5}
\end{equation}
so that ${\cal N}\big(|\psi\rangle\langle\psi|\big)=({\cal D}-1)/2$. Further, all Schmidt coefficients
need to be nonvanishing in order to reach this maximum value of $\cal N$; indeed, for any state
$|\psi'\rangle$ with Schmidt number ${\cal D}'< {\cal D}$, one has
${\cal N}\big(|\psi'\rangle\langle\psi'|\big)<({\cal D}-1)/2$.
$\Box$
\noindent
In analogy with the standard case, states for which the negativity reaches the maximum value
$({\cal D}-1)/2$ can be called ``maximally entangled states''.
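For the two-mode case $M=2$, where every block is one-dimensional and ${\cal D}=N+1$, the maximal value $({\cal D}-1)/2=N/2$ is easily reproduced numerically for $|\psi\rangle=\sum_k|k;N-k\rangle/\sqrt{N+1}$, again using the embedding of {\sl Remark 3} (a sketch of ours):

```python
import numpy as np

N = 3
d = N + 1     # for M = 2 every block H_k is one-dimensional, so cal(D) = N + 1

# |psi> = sum_k |k; N-k> / sqrt(N+1), embedded as |k> (x) |N-k>:
psi = np.zeros(d * d)
for k in range(d):
    psi[k * d + (N - k)] = 1.0 / np.sqrt(d)
rho = np.outer(psi, psi)

# Negativity via partial transposition of the second mode:
pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
neg = 0.5 * (np.abs(np.linalg.eigvalsh(pt)).sum() - 1.0)    # -> N/2
```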
\noindent
{\bf Remark 5:} {\sl i)} Given a fixed bipartition $(m, M-m)$, let us consider such
a maximally entangled state $\rho=|\psi\rangle\langle\psi|$. By tracing over the degrees of freedom
of the modes pertaining to the second partition, one obtains the reduced density matrix
$\rho^{(1)}$ describing the first $m$ modes only:
\begin{equation}
\rho^{(1)}={1\over {\cal D}} \sum_{k=0}^N\sum_{\alpha=1}^{{\cal D}_k}
||k,\alpha\rangle\rangle\, \langle\langle k,\alpha||\ ;
\label{4-6}
\end{equation}
similarly, by tracing over the first partition, one obtains the reduced density matrix $\rho^{(2)}$
describing the remaining $M-m$ modes.
In the case of distinguishable particles, either $\rho^{(1)}$ or $\rho^{(2)}$
is proportional to the identity matrix; this is no longer true here:
in fact, within a block of fixed $k$, this happens only when $D_k\leq D_{N-k}$, {\it i.e.} when
${\cal D}_k=D_k$.
\break
{\sl ii)} On the other hand, given the expression (\ref{4-6}), one easily computes its von Neumann entropy,
obtaining: $S(\rho^{(1)})=-{\rm Tr}\big[\rho^{(1)}{\rm ln}\rho^{(1)}\big]={\rm ln}\,{\cal D}$,
as in the standard case.
\break
{\sl iii)} Similarly, the purity of $\rho^{(1)}$ is given by: ${\rm Tr}\big[\big(\rho^{(1)}\big)^2\big]=1/{\cal D}$.
This result and that of the previous remark can equally be taken as alternative, equivalent definitions
of the notion of maximally entangled states.
$\Box$
Let us now come to the analysis of the space $\cal S$ of separable states
as determined by the generic bipartition $(m, M-m)$.
Among them, the totally mixed state $\rho_{\rm mix}$, proportional to the unit matrix,
stands out because of its peculiar properties. In fact, recall that in the case
of distinguishable particles, $\rho_{\rm mix}$ always lies in the interior of $\cal S$
\cite{Zyczkowski,Bengtsson}; instead, one now finds:
\noindent
{\bf Proposition 5.} {\sl Given any bipartition $(m, M-m)$ and an associated
separable basis made of the Fock vectors $|k, \sigma;N-k, \sigma'\rangle$,
the totally mixed state,
\begin{equation}
\rho_{\rm mix}={1\over D}\sum_{k=0}^N\ \sum_{\sigma,\sigma'}\
| k, \sigma; N-k, \sigma'\rangle \langle k, \sigma; N-k, \sigma' |\ ,
\label{4-7}
\end{equation}
lies on the boundary of the space $\cal S$ of separable states.}
\noindent
{\sl Proof.} Let us take a state $\rho_{\rm ent}$ which cannot be written in block diagonal form
as in (\ref{2-11}), (\ref{2-12}); by {\sl Proposition 1}, it is entangled.
Then, consider the combination
\begin{equation}
\rho_\epsilon={1\over 1+\epsilon}\big(\rho_{\rm mix}+\epsilon\, \rho_{\rm ent}\big)\ , \qquad \epsilon>0\ .
\label{4-8}
\end{equation}
For any $\epsilon>0$, this combination
is never separable, since it is not block diagonal.
In other terms, in any neighborhood of $\rho_{\rm mix}$ one always
finds entangled states, so that $\rho_{\rm mix}$ must lie on the boundary of $\cal S$.
$\Box$
\noindent
{\bf Remark 6:} {\sl i)} Note that similar considerations apply to all separable states:
there always exist small perturbations of separable, necessarily block diagonal, states that
make them not block diagonal, hence entangled. Instead, in the case of distinguishable
particles, almost all separable states remain separable under sufficiently small
arbitrary perturbations \cite{Bengtsson,Bandyopadhyay}.
\break
{\sl ii)} Further, using analogous steps, one can show
that a Werner-like state, $\rho_W=p\, |\psi\rangle\langle\psi| + (1-p)\, \rho_{\rm mix}$,
$0\leq p\leq 1$, with $|\psi\rangle$ a maximally entangled state,
is entangled for any nonvanishing value
of the parameter $p$, while for distinguishable $d$-level particles, this happens only when
$1/(d+1)<p\leq 1$ \cite{Werner1}.
$\Box$
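In the two-mode case discussed below, where separability is equivalent to diagonality in the Fock basis $\{|k;N-k\rangle\}$, the behavior of the Werner-like state is immediate to check numerically: the off-diagonal entries of $\rho_W$ have magnitude $p/(N+1)$, nonvanishing for every $p>0$. A minimal Python sketch (with an illustrative photon number $N$):

```python
import numpy as np

N = 4                                 # total photon number; sector dimension N + 1
D = N + 1
psi = np.ones(D) / np.sqrt(D)         # maximally entangled: uniform over |k; N-k>
rho_mix = np.eye(D) / D
for p in (1e-3, 0.1, 0.5, 1.0):
    rho_W = p * np.outer(psi, psi) + (1 - p) * rho_mix
    off = rho_W - np.diag(np.diag(rho_W))
    # non-diagonal (hence entangled, by the diagonality criterion) for every p > 0
    assert np.isclose(np.abs(off).max(), p / D)
```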
The result of {\sl Proposition 5} is most strikingly illustrated by considering
a system of $N$ bosons that can occupy
two modes ($M=2$), each forming a partition. In the previous section, we have seen
that a generic state
$\rho$ for this system can be written in terms of the Fock basis as in (\ref{3-17}); further,
it is separable if and only if the density matrix $\rho$ is diagonal in this basis.
Instead, $\rho_\epsilon$ given in (\ref{4-8})
develops non-diagonal entries as soon as $\epsilon$ becomes nonvanishing.
As mentioned before, in this case the set of $N+1$ vectors $\{|k;N-k\rangle\}$
constitutes the only basis made of separable pure states \cite{Benatti3}.
Therefore, the decomposition of a generic separable state $\rho$ in terms
of projections on separable states turns out to be unique.
As a consequence, the set $\cal S$ of separable states is a sub-variety
of the convex space $\cal M$ of all states that turns out to be a simplex,
whose vertices are given precisely by these projections.
The space of states of $N$ bosons confined in two modes
is then much more geometrically structured than in the case
of systems made of distinguishable particles.
Indeed, given a complete set of
observables $\{ {\cal O}_i \}$, one can decompose any density matrix $\rho$ as:
\begin{equation}
\rho=\sum_i\, \rho_i\ \Bigg( { \sqrt{\rho}\, {\cal O}_i\, \sqrt{\rho} \over
{\rm Tr}\big[{\cal O}_i\rho\big] }\Bigg)\ ,\qquad
\rho_i\equiv{\rm Tr}\big[{\cal O}_i\rho\big]\ ,\quad {\cal O}_i>0,\quad \sum_i {\cal O}_i={1};
\label{4-9}
\end{equation}
this decomposition is over pure states whenever the operators ${\cal O}_i$ are chosen to be rank-one projectors.
Therefore, in general, there are infinitely many ways of expressing a density matrix
as a convex combination of projectors, even when these projectors are made of separable states.
As seen, this conclusion no longer holds for systems of identical particles.
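The identity (\ref{4-9}) holds because $\sum_i \sqrt{\rho}\,{\cal O}_i\sqrt{\rho}=\sqrt{\rho}\,\big(\sum_i{\cal O}_i\big)\sqrt{\rho}=\rho$. It can be rehearsed numerically with a random state and a random positive resolution of the identity; the Python sketch below is purely illustrative (dimensions and seeds are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ops = 4, 6

def random_pos(d):
    """Random positive (Wishart-type) matrix."""
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return X @ X.conj().T

def mat_pow(H, p):
    """Power of a positive Hermitian matrix via its spectral decomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.clip(w, 1e-12, None) ** p) @ V.conj().T

rho = random_pos(d)
rho /= np.trace(rho).real                       # random density matrix
A = [random_pos(d) for _ in range(n_ops)]
S_inv_sqrt = mat_pow(sum(A), -0.5)
O = [S_inv_sqrt @ a @ S_inv_sqrt for a in A]    # O_i > 0 and sum_i O_i = 1
sq = mat_pow(rho, 0.5)                          # sqrt(rho)
# Eq. (4-9): rho = sum_i rho_i * sqrt(rho) O_i sqrt(rho) / Tr[O_i rho]
recon = sum(np.trace(o @ rho) * (sq @ o @ sq) / np.trace(o @ rho) for o in O)
assert np.allclose(recon, rho)
```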
The totally mixed state $\rho_{\rm mix}$ also has another interesting property:
\footnote{For systems of distinguishable particles, the problem of finding so-called
``absolutely separable states'', {\it i.e.} states that are separable in any choice of
tensor product structure, has been discussed in \cite{Kus,Bengtsson}.}
\noindent
{\bf Proposition 6.} {\sl The totally mixed state is the only
state that remains separable for any choice of bipartition.}
\noindent
{\sl Proof.} First, let us again consider the
simplest case $M=2$. Any Bogolubov transformation maps the set of creation and annihilation
operators $a_i^\dagger$ and $a_i$, $i=1,2$ into new ones $b_i^\dag$, $b_i$, $i=1,2$;
a simple example is given by
\begin{equation}
b_1={a_1+a_2\over\sqrt{2}}\ ,\qquad
b_2={a_1-a_2\over\sqrt{2}}\ ,
\label{4-10}
\end{equation}
and their hermitian conjugates. The operators $b_i^\dag$, $b_i$
define a new bipartition $({\cal B}_1, {\cal B}_2)$ of the full algebra
$\cal B$ of bounded operators, distinct from the original one $({\cal A}_1, {\cal A}_2)$
generated by $a_i^\dag$, $a_i$. States (and operators as well) that are local
in one bipartition might turn out to be non-local in the other.
For instance, the Fock states $\{|k,N-k\rangle\}$ are entangled with respect
to the new bipartition defined by the transformation (\ref{4-10}); indeed, one finds:
\begin{equation}
|k,N-k\rangle={1\over 2^{N/2}}{1\over\sqrt{k!(N-k)!}}\sum_{r=0}^k\sum_{s=0}^{N-k}
{k\choose r}{N-k\choose s}(-1)^{N-k-s} \big(b_1^\dag\big)^{r+s}\, \big(b_2^\dag\big)^{N-r-s}\, |0\rangle\ ,
\label{4-11}
\end{equation}
so that $|k,N-k\rangle$ is a nontrivial superposition of $({\cal B}_1, {\cal B}_2)$-separable states, and hence entangled in the new bipartition.
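As a consistency check of the expansion (\ref{4-11}), one can collect the terms with $r+s=m$ and verify that the resulting coefficients on the $b$-mode Fock states $|m,N-m\rangle$ are normalized. A short Python sketch using only the standard library (an illustrative check, not part of the argument):

```python
from math import comb, factorial, sqrt

def b_coeffs(N, k):
    """Coefficients c_m of |k, N-k>_a on the b-mode Fock states |m, N-m>_b,
    obtained by grouping the terms of Eq. (4-11) with r + s = m."""
    pref = 2 ** (-N / 2) / sqrt(factorial(k) * factorial(N - k))
    coeffs = []
    for m in range(N + 1):
        s_m = sum(comb(k, r) * comb(N - k, m - r) * (-1) ** (N - k - (m - r))
                  for r in range(max(0, m - (N - k)), min(k, m) + 1))
        coeffs.append(pref * sqrt(factorial(m) * factorial(N - m)) * s_m)
    return coeffs

for N in (1, 4, 7):
    for k in range(N + 1):
        norm = sum(c * c for c in b_coeffs(N, k))
        assert abs(norm - 1.0) < 1e-9      # expansion is correctly normalized
```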
A similar conclusion applies to the mixed states (\ref{3-17}): in general, any
separable state $\rho_{\rm sep}=\sum_k \rho_k\, |k;N-k\rangle\langle k; N-k|$
is mapped by a Bogolubov transformation into a non-diagonal
density matrix, and therefore into an entangled state.
In fact, one can always find a unitary transformation $U$ that maps any
diagonal matrix $\rho$ into a non-diagonal one $U\rho\, U^\dagger$. In particular,
when $\rho$ is not degenerate, the transformed matrix $U\rho\, U^\dagger$ is
diagonal only if the operator $U$ is itself a diagonal matrix;
in this case, however, the corresponding Bogolubov transformation
is trivial and does not define a new bipartition.
The only density matrix that
remains invariant under all unitary transformations
is the one proportional to the unit matrix, {\it i.e.}
the totally mixed one. This conclusion can easily be extended to the
multimode case: given any separable state in a given $(m, M-m)$ bipartition,
$\rho_{\rm sep}=\sum_{k=0}^N\ \sum_{\sigma,\sigma'}\ \rho_{k\sigma\sigma'}
| k, \sigma; N-k, \sigma'\rangle \langle k, \sigma; N-k, \sigma' |$,
one can always construct a Bogolubov transformation that maps it into
an entangled one: it is sufficient to apply the above considerations
to any pair of modes belonging to different partitions. Therefore, also in this more general
setting, only the state $\rho_{\rm mix}$
in (\ref{4-7}) is left invariant by all Bogolubov transformations.
$\Box$
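The mechanism of the proof can be illustrated numerically in the two-mode case: the change of basis induced by (\ref{4-10}) on the $N$-photon sector is a unitary matrix whose entries are read off from the expansion (\ref{4-11}); it maps any non-degenerate diagonal, hence separable, state into a non-diagonal, hence entangled, one, while leaving $\rho_{\rm mix}$ untouched. An illustrative Python sketch (arbitrary small $N$):

```python
import numpy as np
from math import comb, factorial, sqrt

def bs_unitary(N):
    """Unitary connecting the a-mode and b-mode Fock bases of the N-photon
    sector; its columns are the coefficients of |k, N-k>_a from Eq. (4-11)."""
    V = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        pref = 2 ** (-N / 2) / sqrt(factorial(k) * factorial(N - k))
        for m in range(N + 1):
            s_m = sum(comb(k, r) * comb(N - k, m - r) * (-1) ** (N - k - (m - r))
                      for r in range(max(0, m - (N - k)), min(k, m) + 1))
            V[m, k] = pref * sqrt(factorial(m) * factorial(N - m)) * s_m
    return V

N = 5
V = bs_unitary(N)
assert np.allclose(V.conj().T @ V, np.eye(N + 1))   # change of basis is unitary
# non-degenerate diagonal (separable) state -> non-diagonal in the new bipartition
rho = np.diag(np.arange(1, N + 2) / np.arange(1, N + 2).sum())
rot = V @ rho @ V.conj().T
assert np.linalg.norm(rot - np.diag(np.diag(rot))) > 1e-6
# the totally mixed state is left invariant
rho_mix = np.eye(N + 1) / (N + 1)
assert np.allclose(V @ rho_mix @ V.conj().T, rho_mix)
```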
Thanks to these results, the global geometry of the space of $N$-boson states
starts to emerge more clearly. Again the two-mode case is easier to describe.
We have seen that by fixing a bipartition one selects the set $\cal S$ of separable states;
this forms a sub-variety of the convex space $\cal M$ of all states, namely
an $(N+1)$-dimensional simplex, with the projectors over Fock states as generators.
Changing the bipartition through a Bogolubov transformation produces a new
simplex, having only one point in common with the starting one,
the state $\rho_{\rm mix}$. The geometry of the space of two-mode $N$-boson states
has therefore a sort of star-like topology, with the various simplexes sharing just one point,
the totally mixed state.
The case of $M$ modes is more complex. For a fixed bipartition,
the space of separable states is a sub-space of the convex space of all
states which is no longer strictly a simplex:
the decomposition of a generic separable state $\rho$ is no longer unique,
since the Fock states in (\ref{2-6}) are no longer the only separable pure states:
for each $k$, reshufflings over the indices $\sigma$, $\sigma'$
are still allowed. Nevertheless, also in this case the global state space
presents a sort of star-like topology: only one point is shared by all separable bipartition sub-spaces,
the totally mixed state $\rho_{\rm mix}$.
\section{Outlook}
In many-body systems composed of a fixed number of identical particles,
the associated Hilbert space no longer exhibits the familiar particle tensor product structure.
The usual notions of separability and entanglement
based on such a structure are therefore inapplicable: a generalized
definition of separability is needed and can be given in terms
of commuting algebras of observables instead of focusing on
the particle tensor decomposition of states.
The selection of these algebras is largely arbitrary, making it apparent
that in systems of identical particles entanglement
is not an absolute notion: it is strictly tied
to specific choices of commuting sets of system observables.
Using these generalized definitions, we have studied bipartite entanglement in systems composed
of $N$ identical bosons that can occupy $M$ different modes.
More specifically, we have analyzed to what extent entangled states
remain robust against mixing with another state, either separable or entangled.
We have found that in general, the entanglement contained in bosonic states is much more
robust than the one found in systems of distinguishable particles.
This result has been obtained by analyzing the
so-called {\sl robustness} and {\sl generalized robustness} of entanglement, for which explicit
expressions and upper bounds, respectively, have been given.
A quite general characterization of the geometry of the space of bosonic states has also been
obtained: this space exhibits a star-like structure composed of intersecting subspaces,
each determined by a given bipartition
through the subset of separable states.
All these separable subspaces share one and only one point, the totally mixed state,
hence the star-shaped topology.
As a final remark, notice that all above results can be generalized to the case
of systems where the total number of particles is not fixed,
but commutes with all physical observables ({\it i.e.} we are in
the presence of a superselection rule \cite{Bartlett}).
In such a situation, a general density matrix $\rho$ can be written as an incoherent mixture
of states $\rho_N$ with fixed number $N$ of
particles, having the general form (\ref{2-9}); explicitly:
\begin{equation}
\rho=\sum_N \lambda_N \rho_N \ ,\qquad \lambda_N\geq 0\ ,\qquad \sum_N \lambda_N=1\ .
\label{5-1}
\end{equation}
The state $\rho$ is thus a convex combination of matrices $\rho_N$ having support
on orthogonal spaces. Arguments similar to those used in proving {\sl Proposition 2}
(and {\sl Proposition 3}) allow us to conclude that both the robustness and the
generalized robustness of entanglement of the state $\rho$ in (\ref{5-1})
are given by the weighted average of the robustness of the components $\rho_N$, {\it i.e.}
$R(\rho)=\sum_N \lambda_N\, R(\rho_N)$ in both cases. The problem of computing the
robustness of incoherent particle number mixtures (\ref{5-1}) is then reduced
to that of determining the robustness of the
corresponding components at fixed particle number, for which the considerations
and results discussed in the previous sections apply.
\end{document}
\begin{document}
\title{Detection of multimode spatial correlation in PDC and application to the absolute calibration of a CCD camera}
\author{Giorgio Brida, Ivo Pietro Degiovanni, Marco Genovese, Maria Luisa Rastello, Ivano Ruo-Berchera\textsuperscript{*}}
\address{Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, 10135 Torino, Italy}
\email{$^*$i.ruoberchera@inrim.it}
\begin{abstract}
We propose and demonstrate experimentally a new method, based on
spatial entanglement, for the absolute calibration of analog
detectors. The idea consists in measuring the sub-shot-noise
intensity correlation between two branches of parametric down
conversion, containing many pairwise correlated spatial modes. We
calibrate a scientific CCD camera, and a preliminary evaluation of
the statistical uncertainty indicates the metrological interest of
the method.
\end{abstract}
\ocis{(270.4180) Multiphoton processes; (120.1880) Detection; (120.3940) Metrology.}
\begin{thebibliography}{99}
\bibitem{Kol2007} Quantum Imaging, M. I. Kolobov, ed. (Springer, New York, 2007), and references therein.
\bibitem{BrambPRA2004} E. Brambilla, A. Gatti, M. Bache, and L. A. Lugiato, "Simultaneous near-field
and far-field spatial quantum correlations in the high-gain regime
of parametric down-conversion," Phys. Rev. A \textbf{69}, 023802 (2004).
\bibitem{BlanchetPRL2008} J.-L. Blanchet \textit{et al.}, "Measurement of Sub-Shot-Noise Correlations of Spatial Fluctuations in the Photon-Counting
Regime," Phys. Rev. Lett. \textbf{101}, 233604 (2008).
\bibitem{JedrkPRL2004} O. Jedrkiewicz \textit{et al.}, "Detection of Sub-Shot-Noise Spatial Correlation in High-Gain Parametric Down Conversion,"
Phys. Rev. Lett. \textbf{93}, 243601 (2004).
\bibitem{BridaPRL2009} G. Brida \textit{et al.}, "Measurement of Sub-Shot-Noise Spatial Correlations without Background Subtraction,"
Phys. Rev. Lett. \textbf{102}, 213602 (2009).
\bibitem{BridaNPHOT2010} G. Brida, M. Genovese, and I. Ruo Berchera, "Experimental realization of sub-shot-noise quantum imaging," Nature Photonics \textbf{4}, 227-230
(2010).
\bibitem{mat} M. Bondani, A. Allevi, G. Zambra, M. Paris, and A. Andreoni, "Sub-shot-noise photon-number correlation in a mesoscopic twin beam of
light," Phys. Rev. A \textbf{76}, 013833 (2007).
\bibitem{mc} T. Sh. Iskhakov, M. Chekhova, and G. Leuchs,
"Generation and Direct Detection of Broadband Mesoscopic
Polarization-Squeezed Vacuum," Phys. Rev. Lett. \textbf{102}, 183602 (2009).
\bibitem{BrambPRA2008} E. Brambilla \textit{et al.}, "High-sensitivity imaging with multi-mode twin beams," Phys. Rev. A \textbf{77}, 053807 (2008).
\bibitem{BoyerPRL2008} V. Boyer, A. M. Marino, and P. D. Lett, "Generation of spatially broadband twin beams for quantum imaging," Phys. Rev. Lett. \textbf{100}, 143601 (2008).
\bibitem{ge} M. Genovese, "Research on hidden variable theories: A review on recent progresses,"
Phys. Rep. \textbf{413}(6), 319-398 (2005), and references therein.
\bibitem{PolMig2007} S. Polyakov and A. Migdall, "High accuracy verification of a correlated-photon-based method for determining photon-counting detection efficiency,"
Opt. Express \textbf{15}(4), 1390 (2007).
\bibitem{bp1} B.Y. Zel'dovich and D.N. Klyshko, "Statistics of field in parametric luminescence," Sov. Phys. JETP Lett. \textbf{9} 40-44 (1969).
\bibitem{Burnham} D.C. Burnham and D.L. Weinberg, "Observation of Simultaneity in Parametric Production of Optical Photon Pairs," Phys. Rev. Lett. \textbf{25}, 84-87 (1970).
\bibitem{klysh} D.N. Klyshko, "Use of two-photon light for absolute calibration of photoelectric detectors," Sov. J. Quant. Elect. {\bf 10}, 1112-1116 (1980).
\bibitem{malygin} A. A. Malygin, A. N. Penin and A. V. Sergienko, "Absolute Calibration of the Sensitivity of Photodetectors Using a Two-Photon Field," Sov. Phys. JETP Lett. {\bf 33}, 477-480 (1981).
\bibitem{Ginzburg} V. M. Ginzburg, N. G. Keratishvili, E. L. Korzhenevich, G. V. Lunev, A. N. Penin, and V. I. Sapritsky, "Absolute meter of photodetector quantum efficiency based on the parametric down-conversion effect," Opt. Eng. \textbf{32}(11), 2911-2916 (1993).
\bibitem{alan} A. Migdall, "Correlated-photon metrology without absolute standards," Physics Today January, 41-46 (1999).
\bibitem{LindAO2006} M. Lindenthal and J. Kofler, "Measuring the absolute photodetection efficiency using photon number correlations," Appl. Opt. \textbf{45}(24), 6059 (2006).
\bibitem{n1} M. Genovese, G. Brida, and C. Novero, "An application of two photons entangled states to quantum metrology," J. Mod. Opt. \textbf{47}, 2099 (2000).
\bibitem{n2} G. Brida, S. Castelletto, I. Degiovanni, M. Genovese, C. Novero, and M. L. Rastello, "Toward an accuracy budget in quantum efficiency measurement with parametric fluorescence," Metrologia \textbf{37}(5), 629 (2000).
\bibitem{n3} G. Brida, M. Chekhova, M. Genovese, M. Gramegna, L. Krivitsky, and M. L. Rastello, "Single-photon detectors calibration by means of conditional polarization rotation," J. Opt. Soc. Am. B \textbf{22}, 488 (2005).
\bibitem{n4} G. Brida, M. Genovese, and M. Gramegna, "Twin-photon techniques for photo-detector calibration," Laser Phys. Lett. \textbf{3}, 115 (2006).
\bibitem{BridaJOSAB2006} Giorgio Brida, Maria Chekhova, Marco Genovese, Alexander Penin, Ivano Ruo-Berchera,
"The possibility of absolute calibration of analog detectors by using parametric down-conversion: a systematic study,"
J. Opt. Soc. Am. B \textbf{23}, 2185-2193 (2006).
\bibitem{RuoASL2009} I. Ruo Berchera, "Theory of PDC in a continuous variables framework and its applications to the absolute calibration of
photo-detectors", Adv. Sci. Lett. 2, 407–429 (2009).
\bibitem{BridaOE2008} G. Brida, M. Chekhova, M. Genovese, and I. Ruo-Berchera, "Analysis of the possibility of analog detectors calibration by
exploiting stimulated parametric down conversion," Opt. Express \textbf{16},
12550 (2008).
\bibitem{BridaJMO2009} G. Brida, M. Chekhova, M. Genovese, M. L. Rastello, and I. Ruo Berchera,
"Absolute calibration of analog detector using stimulated parametric down conversion," J. Mod. Opt. \textbf{56}(2-3), 401-408 (2009).
\bibitem{Masha} T. Sh. Iskhakov, E. D. Lopaeva, A.
N. Penin, G. O. Rytikov, and M. V. Chekhova, "Two Methods for
Detecting Nonclassical Correlations in Parametric Scattering of
Light," JETP Lett. \textbf{88}(10), 660-664 (2008).
\bibitem{BGMPR-IGQI2009} G. Brida, M. Genovese, A. Meda, E. Predazzi, and I. Ruo Berchera, "Systematic study of the PDC speckle structure
for quantum imaging applications," Int. J. Quant. Inf. \textbf{7}, 139
(2009).
\bibitem{BGMPR-JMO2009} G. Brida, M. Genovese, A. Meda, E. Predazzi, and I. Ruo Berchera, "Tailoring PDC speckle structure," J. Mod. Opt. \textbf{56}, 201 (2009).
\bibitem{Chekhova2010} I. N. Agafonov,
M. V. Chekhova, and G. Leuchs, "Two-Color Bright Squeezed Vacuum,"
arXiv:0910.4831.
\bibitem{GUM} "Guide to the expression of uncertainty in measurement (GUM)", ISO/IEC Guide 98:1995
\bibitem{qc} N. Fox, E. Ikonen, M. Rastello, G. Ulm, and J. Zwinkels, "Radiometry, photometry and the candela: evolution in the classical
and quantum world," in
press.
\end{thebibliography}
\section{Introduction}
The possibility of realizing a sub-shot-noise regime by exploiting
multimode spatial correlations at the quantum level \cite{Kol2007,
BrambPRA2004}, sometimes called spatial entanglement, has been
experimentally demonstrated by using traveling-wave parametric
amplifiers and CCD array detectors, both at the single-photon level
\cite{BlanchetPRL2008} and at high photon flux \cite{JedrkPRL2004,
BridaPRL2009, BridaNPHOT2010,mat,mc}, and by exploiting four-wave
mixing in rubidium vapors \cite{BoyerPRL2008}. Very recently,
following the proposal of \cite{BrambPRA2008}, our group showed that
this capability finds a natural application in the quantum imaging of weakly
absorbing objects beyond the shot-noise level
(standard quantum limit) \cite{BridaNPHOT2010}. All these findings
indicate that the detection of multimode spatial correlations is
a mature field that can lead to other interesting applications. In
particular, in this framework we have developed the capability of an
almost perfect spatial selection of the modes that are pairwise
correlated at the quantum level in parametric down conversion, and
the optimization of the noise reduction below the shot-noise level.
On the other side, the quantum correlations of twin beams generated by
Parametric Down Conversion (PDC) are a well-recognized tool for
calibrating single-photon detectors, competing with the traditional
techniques of optical metrology
\cite{ge,PolMig2007,bp1,Burnham,klysh,malygin,Ginzburg,alan,n1,n2,n3,n4}.
Extending this technique to higher photon fluxes, for calibrating
analog detectors, may have great importance in metrology
\cite{BridaJOSAB2006, BridaOE2008, BridaJMO2009, RuoASL2009,
LindAO2006,Masha}. So far, the main obstacle to achieving this goal has
been the difficulty of accurately selecting the
correlated modes within the spatially broadband emission of PDC.
In the work presented here, this know-how in the detection of spatial correlations in PDC is fruitfully applied to the absolute calibration of CCD cameras. The method is based on the measurement of the degree of correlation between symmetric areas belonging to the twin beams, which in principle depends only on the transmission and detection efficiency. The achieved statistical uncertainty indicates the effectiveness of the method for metrology applications.
\section{Multimode spatial correlation in PDC}\label{Multimode correlation}
\begin{figure}
\caption{\textsf{Scheme for spatial correlations detection in PDC.
The PDC emission from the non-linear crystal is detected in the far
field, reached through an optical $f-f$ configuration. The pump transverse
size determines an uncertainty in the photon propagation direction.
The speckled image shown has been obtained experimentally in the
very-high-gain regime; for this reason the spatial fluctuations are very
strong, and the coherence area is roughly represented by the typical
size of the speckles.}}
\label{Correlations}
\end{figure}
The state produced by spontaneous PDC, in the approximation of a plane-wave pump field of frequency $\omega_{p}$ propagating
in the $z$ direction, presents perfect transverse-momentum phase matching. Thus, it can be expressed as a tensor product of
bipartite states univocally identified by frequency $\omega_{p}/2\pm\omega$ and transverse momentum $\pm\mathbf{q}$, i.e.
$|\Psi\rangle=\bigotimes_{\mathbf{q},\omega}|\psi(\mathbf{q},\omega)\rangle$. Since we are mainly interested in the frequencies
near degeneracy ($\omega\sim0$), the state of the single bipartite transverse mode reduces to
\begin{equation}\label{two-mode state}
|\psi(\mathbf{q})\rangle=\sum_{n}C_{\mathbf{q}}(n)|n\rangle_{i,\mathbf{q}}|n\rangle_{s,-\mathbf{q}},
\end{equation}
where the coefficients $C_{\mathbf{q}}(n) \propto \sqrt{\langle n_{\mathbf{q}}\rangle^{n}/(\langle
n_{\mathbf{q}}\rangle+1)^{n+1}}$ are related to the mean number of photons in the mode $\mathbf{q}$, assumed to be the same for all
the modes, i.e. $\langle n_{\mathbf{q}}\rangle = \mu$, and the subscripts $s$ and $i$ indicate the signal and idler fields.
The two-mode state in Eq. (\ref{two-mode state}) is entangled in the number of photons for each pair of modes $\pm\mathbf{q}$,
whereas the statistics of the single mode is thermal with mean value $\langle n_{i,\mathbf{q}}\rangle = \langle
n_{s,\mathbf{-q}}\rangle = \mu$ and variance $\langle \delta^{2}n_{i,\mathbf{q}}\rangle=\langle
\delta^{2}n_{s,\mathbf{-q}}\rangle=\mu (\mu+1)$. Now, we focus on the far field region, obtained as the focal plane of a thin
lens of focal length $f$ in an $f-f$ configuration (Fig. \ref{Correlations}). Here, any transverse mode $\mathbf{q}$ is
associated with a single position $\mathbf{x}$ in the detection (focal) plane according to the geometric transformation $(2c
f/\omega_{p})\mathbf{q}\rightarrow \mathbf{x}$, with $c$ the speed of light. Therefore, a perfect correlation appears in the
photon number $n_{i,\mathbf{x}}$ and $n_{s,-\mathbf{x}}$ registered by two detectors placed in two symmetrical positions
$\mathbf{x}$ and $-\mathbf{x}$, where the center of symmetry (CS) is basically the intersection of the pump direction with the detection plane (neglecting
the walk-off).
In real experiments the pump laser is not a plane wave; rather, it
can be reasonably represented by a Gaussian distribution with
spatial waist $w_p$. This induces an uncertainty in the relative
propagation directions of the twin photons of the order of the
angular bandwidth of the pump. This uncertainty defines the coherence
area of the process, roughly corresponding to the transverse size of
the mode in the far field (see Fig. \ref{Correlations}). The numbers
of photons collected in symmetrical portions of the far-field zone
are perfectly quantum correlated only when the detection areas
$\mathcal{A}_{det}$ are broader than a coherence area, whose size
$\mathcal{A}_{coh}$ is of the order of $[(2\pi c
f)/(\omega_{p}w_{p})]^{2}$ \cite{BrambPRA2004,BrambPRA2008} (only
for very large parametric gain does it deviate from this behavior
\cite{BrambPRA2008,JedrkPRL2004,BGMPR-IGQI2009,BGMPR-JMO2009}).
Therefore, let us consider collecting photons over two perfectly
symmetrical and correlated areas $\mathcal{A}_{det,s}$ and
$\mathcal{A}_{det,i}$, with detection efficiencies $\eta_{s}$ and
$\eta_{i}$, belonging to the signal and idler beams respectively, and
containing many transverse spatial modes
$\mathcal{M}_{spatial}=\mathcal{A}_{det,j}/\mathcal{A}_{coh}$
($j=s,i$). We also consider a situation in which the detection time
$\mathcal{T}_{det}$ is much larger than the coherence time
$\mathcal{T}_{coh}$ of the process, so that the number of temporal modes
is large, $\mathcal{M}_{t}=\mathcal{T}_{det}/\mathcal{T}_{coh}\gg1$.
Since the modes within a single region are independent, the statistics
is multithermal with mean value $\langle
N_{j}\rangle=\mathcal{M}_{tot} \eta_{j} \mu$ (with
$\mathcal{M}_{tot}=\mathcal{M}_{t}\mathcal{M}_{spatial} $) and
variance
\begin{equation}\label{EN}
\left\langle\delta ^{2}N_{j}\right\rangle\equiv\left\langle N_{j}\right\rangle\left(1+\mathcal{E}\right)=\left\langle
N_{j}\right\rangle\left(1+\frac{\left\langle N_{j}\right\rangle}{\mathcal{M}_{tot}}\right)= \mathcal{M}_{tot}\eta_{j}
\mu \left(1+\eta_{j}\mu\right),
\end{equation}
with $j=s,i$ and $\mathcal{E}$ the excess noise, usually defined as the fluctuations that exceed the shot-noise level (SNL). The
SNL, or standard quantum limit, represents the level of noise associated with the detection of coherent light, i.e.
$\langle\delta ^{2}N_{j}\rangle_{SNL}=\langle N_{j}\rangle$. In this theoretical description, the excess noise is only related
to the intrinsic thermal statistics of the single PDC beam, $\mathcal{E}=\langle N_{j}\rangle/\mathcal{M}_{tot}$. However, we
will discuss in the following that experimental imperfections give the major contribution to the excess noise in our setup.
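The multithermal statistics of Eq. (\ref{EN}) can be reproduced by a simple Monte Carlo model: each mode carries a thermal occupation with mean $\mu$, detection is a binomial process with efficiency $\eta$, and the counts of $\mathcal{M}$ independent modes are summed. The following Python sketch (with purely illustrative parameter values, not those of our setup) checks the mean and variance against the formula:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, eta, M, shots = 0.3, 0.7, 200, 20000   # illustrative values
# thermal occupation per mode: P(n) = mu^n / (1 + mu)^(n+1)
n = rng.geometric(1.0 / (1.0 + mu), size=(shots, M)) - 1
detected = rng.binomial(n, eta)            # binomial detection losses
N_tot = detected.sum(axis=1)               # counts summed over the M modes

mean_th = M * eta * mu                     # <N> = M eta mu
var_th = mean_th * (1.0 + eta * mu)        # Eq. (EN): M eta mu (1 + eta mu)
assert abs(N_tot.mean() - mean_th) / mean_th < 0.02
assert abs(N_tot.var() - var_th) / var_th < 0.05
```

Note that the variance exceeds the shot-noise level $M\eta\mu$ by the factor $(1+\eta\mu)$, i.e. by the thermal excess noise.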
The covariance between the signal and idler numbers of photon is
\begin{equation}\label{correlation}
\left\langle\delta N_{i}\delta N_{s}\right\rangle=\mathcal{M}_{tot}\eta_{s}\eta_{i} \mu(1+ \mu).
\end{equation}
The amount of correlation between the signal and idler fields is usually expressed in terms of the noise reduction factor $\sigma$,
defined as the fluctuation of the difference $N_{-}\equiv N_{s}-N_{i}$ between the photon numbers, normalized to the
corresponding level of shot noise:
\begin{equation}\label{sigma}
\sigma\equiv\frac{\left\langle\delta ^{2}N_{-}\right\rangle} {\left\langle N_{i}+N_{s}\right\rangle}=
1-\eta_{+}+\frac{\eta_{-}^{2}}{2\eta_{+}}\left(\frac{1}{2}+ \mu \right)=
1-\eta_{+}+\frac{\eta_{-}^{2}}{4\eta_{+}^{2}}\left(\eta_{+}+\frac{\langle N_{s}+N_{i}\rangle}{\mathcal{M}_{tot}}\right),
\end{equation}
where $\eta_{+}=(\eta_{s}+\eta_{i})/2$ and $\eta_{-}=\eta_{s}-\eta_{i}$. It has been evaluated by inserting Eqs.
(\ref{correlation}) and (\ref{EN}) into the expression of the fluctuation $\langle\delta ^{2}N_{-}\rangle\equiv\langle\delta
^{2}N_{s}\rangle+\langle\delta ^{2}N_{i}\rangle-2\langle\delta N_{i}\delta N_{s}\rangle$.
In the case of perfectly balanced losses $\eta_{s}=\eta_{i}=\eta$, one gets $\sigma=1-\eta$, depending only on the quantum
efficiency. Therefore, in the ideal case $\eta\rightarrow 1$, $\sigma$ approaches zero. On the other hand, for
classical states of light the degree of correlation is bounded by $\sigma\geq1$, where the lower limit is reached for coherent
beams, $\sigma=1$.
According to Eq. (\ref{sigma}), the absolute estimation of the quantum efficiency by measuring the noise reduction factor can
be achieved only when the excess noise cancels, which is realized in the case of balanced losses. The balancing of the two
channels can be performed physically by adding proper absorbing filters in the optical paths. However, a more convenient approach
is to compensate a posteriori, for instance by multiplying the values of $N_{i}$ by a factor $\alpha=\left\langle
N_{s}\right\rangle/\left\langle N_{i}\right\rangle=\eta_{s}/\eta_{i}$. This corresponds to evaluating the redefined noise reduction
factor $\sigma_{\alpha}$ (instead of the one in Eq. (\ref{sigma})):
\begin{equation}\label{sigma_alfa}
\sigma_{\alpha}\equiv\frac{\left\langle\delta ^{2}(N_{s}-\alpha N_{i})\right\rangle} {2\left\langle
N_{s}\right\rangle}=\frac{1}{2}(1+\alpha)-\eta_{s}.
\end{equation}
This relation shows that the quantum efficiency $\eta_{s}$ can be evaluated by measuring $\sigma_{\alpha}$ and the ratio
$\alpha=\left\langle N_{s}\right\rangle/\left\langle N_{i}\right\rangle$, without the need of physically balancing the two
optical paths.
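The whole scheme of Eq. (\ref{sigma_alfa}) can be rehearsed with a toy Monte Carlo: photon pairs are generated thermally in each of the modes, the two members of each pair undergo independent binomial losses $\eta_s$ and $\eta_i$, and $\alpha$ and $\sigma_\alpha$ are estimated from the simulated images. The Python sketch below (all parameter values are illustrative, not those of our setup) recovers $\eta_s$ from $\frac{1}{2}(1+\alpha)-\sigma_\alpha$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, eta_s, eta_i = 0.25, 0.62, 0.48        # illustrative occupancy and efficiencies
M, images = 500, 10000                     # modes per detection area, number of frames
# thermal number of photon pairs in each mode: P(n) = mu^n / (1 + mu)^(n+1)
n = rng.geometric(1.0 / (1.0 + mu), size=(images, M)) - 1
Ns = rng.binomial(n, eta_s).sum(axis=1)    # signal counts per image
Ni = rng.binomial(n, eta_i).sum(axis=1)    # idler counts per image

alpha = Ns.mean() / Ni.mean()              # a-posteriori balancing factor
sigma_alpha = np.var(Ns - alpha * Ni) / (2.0 * Ns.mean())
eta_est = 0.5 * (1.0 + alpha) - sigma_alpha
assert abs(eta_est - eta_s) < 0.03         # Eq. (sigma_alfa) recovers eta_s
```

In this idealized model the estimator is unbiased even for strongly unbalanced losses; in the experiment, the residual excess noise discussed below must be accounted for.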
As a final remark, we observe that the result in Eq.
(\ref{sigma_alfa}) relies on the assumption that each spatial mode
collected in the region $\mathcal{A}_{det,i}$ finds its correlated
partner in the region $\mathcal{A}_{det,s}$, and vice versa. Otherwise, the
presence of uncorrelated modes in the two regions would prevent a
complete cancellation of the excess noise in the subtraction, leading
to an underestimate of the quantum efficiency \cite{Chekhova2010}.
Therefore, experimental control of the detected spatial
modes, by means of precise positioning and sizing of the regions and
an accurate determination of the CS, is fundamental for the accuracy of
the estimation of the quantum efficiency.
\section{The experimental procedure}
In our setup, a type-II BBO non-linear crystal ($l=7$ mm) is pumped by the third harmonic (355 nm) of a Q-switched Nd:YAG laser.
The pulses have a duration of $\mathcal{T}_p=5$ ns with a repetition rate of 10 Hz and a maximum energy, at the selected
wavelength, of about 200 mJ. The pump beam crosses a spatial filter (a lens with a focal length of 50 cm and a diamond pin-hole,
250 $\mu$m in diameter), in order to eliminate the non-Gaussian components, and is then collimated by a system of lenses to a
diameter of $w_{p}=1.25$ mm. After the crystal, the pump is stopped by a couple of UV mirrors, transparent in the visible
($\simeq 98\%$ transmission at 710 nm), and by a low-frequency-pass filter ($\simeq 95\%$ transmission at 710 nm). The down-converted
beams (signal and idler) are separated in polarization by two polarizers ($97\%$ transmission) and finally the far
field is imaged by a CCD camera. We used a 1340$\times$400 CCD array, Princeton Pixis:400BR (pixel size of 20 $\mu$m), with high
quantum efficiency (around 80\%) and low noise in the read-out process ($4$ electrons/pixel). The CCD exposure time is set by a
mechanical shutter to 90 ms, thus each image acquired corresponds to the PDC emission generated by a single shot of the laser.
The far field is observed at the focal plane of a lens with a 10 cm focal length in an $f-f$ optical configuration.
For reasons related to the visibility of the correlation, and in order to reduce the contribution of the read noise of the CCD,
it is convenient to perform a hardware binning of the physical pixels. This consists in grouping the physical pixels into square
blocks, each of them being processed by the CCD electronics as a single "superpixel". Depending on the measurement, the size of
the superpixel can be set accordingly. Typically we choose it of the same order as, or larger than, $\mathcal{A}_{coh}$.
The expected number of temporal modes $\mathcal{M}_{t}=\mathcal{T}_{p} / \mathcal{T}_{coh}$ detected in one image is $5\cdot
10^{3}$, considering the coherence time $\mathcal{T}_{coh}$ of PDC around one picosecond. The number of spatial modes
$\mathcal{M}_{spatial}=\mathcal{A}_{det,j}/\mathcal{A}_{coh}$ ($j=s,i$) depends only on the size of the detection areas, since
$\mathcal{A}_{coh}\sim 120\times120~(\mu \mathrm{m})^{2}$ is fixed by the pump transverse size. The level of excess noise due to the
thermal statistics in the single beam is also fixed, since we keep fixed (aside from unwanted pulse-to-pulse fluctuations) the power
of the laser. The total number of modes $\mathcal{M}_{tot}$ turns out to be compatible with the level of excess noise
$\mathcal{E}\equiv\left\langle N_{s}\right\rangle/\mathcal{M}_{tot}\sim 0.1-0.2$. It can be measured by performing spatial
statistics as described in Sec. (\ref{evaluation of the CS}).
For an accurate estimation of quantum efficiency the following steps should be performed:
\begin{itemize}
\item [a)]\textit{Determination of the center of symmetry (CS)} \\ Positioning of the correlated areas and determination of the center of symmetry of the spatial correlations within sub-coherence-area uncertainty, according to the experimental procedure presented in Subsection (\ref{evaluation of the CS}).
\item [b)]\textit{Determination of the minimum size of $\mathcal{A}_{det,j}$}\\
The size of the detection areas must satisfy the condition $\mathcal{M}_{spatial}=\mathcal{A}_{det,j}/\mathcal{A}_{coh}\gg1$, for the purpose of an unbiased estimation of $\sigma$.
This can be achieved following the procedure sketched in Subsection (\ref{A_det aetimation}).
\item [c)]\textit{Analysis of experimental contributions to the excess noise}\\
Evaluation of the noise coming from experimental imperfections, such as the instability of the laser pulse-to-pulse
energy and the background due to stray light and electronic noise of the CCD. Eqs. (\ref{alfaE}) and
(\ref{sigmaalfaE}) are modified in order to account for these noise contributions. A detailed discussion of this item can be found in
Subsection (\ref{Contributions to the excess noise}).
\item [d)]\textit{Estimation of $\eta_j$ and of its statistical uncertainty according to Eq. (\ref{sigma_alfa})}\\
$\alpha$ and $\sigma_{\alpha}$ are estimated experimentally over a set of $\mathcal{N}$ images according to the formula
\begin{equation}\label{alfaE}
\alpha=\frac{E[N_s (k)]}{E[N_i (k)]},
\end{equation}
and to the formula
\begin{equation}\label{sigmaalfaE}
\sigma_{\alpha}=\frac{E[(N_{s} (k)- \alpha N_{i} (k) )^2]-E[N_{s} (k)- \alpha N_{i} (k)]^2 }{E[N_s (k)]+\alpha E[N_i (k)]},
\end{equation}
respectively, where $E[N_j(k)]=\mathcal{N}^{-1} \sum_{k=1}^{\mathcal{N}}N_j(k)$, and $N_j(k)$ is the number of photons observed in
the detection
area $\mathcal{A}_{det,j}$ in the $k$-th image. [See Subsection (\ref{Efficiency estimation and uncertainty})].
\item [e)]\textit{Evaluation of the optical losses}\\
The estimated quantum efficiency $\eta_j$ also subsumes the losses due to the crystal, the lens and the mirrors.
Thus, the quantum efficiency of the CCD is obtained as the ratio between the estimated quantum efficiency
$\eta_j$ and the transmission of the $j$-channel, i.e.
\begin{equation}\label{etaT}
\eta_{true, j}=\frac{\eta_{j} }{\tau_{j}},
\end{equation}
with $j=s,i$. $\tau_{j}$ should be evaluated by means of an independent "classical" transmittance measurement. Since in this paper
we present just a proof of principle of the proposed technique, we do not discuss these transmittance measurements
further. Thus, instead of providing the quantum efficiency of the CCD alone, the results presented in the following can
be interpreted as the quantum efficiency of the whole optical system preceding the CCD, including the CCD itself.
\end{itemize}
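The estimators of step d) can be sketched in a few lines of Python. This is an illustrative sketch, not the software actually used for the analysis; the test below uses binomially thinned Poisson pairs as a stand-in for the twin beams.

```python
import numpy as np

def estimate_alpha_sigma(Ns, Ni):
    """Estimate the loss-balancing factor alpha (Eq. alfaE) and the noise
    reduction factor sigma_alpha (Eq. sigmaalfaE) from the per-image photon
    counts Ns(k), Ni(k) measured in the two detection areas."""
    Ns = np.asarray(Ns, dtype=float)
    Ni = np.asarray(Ni, dtype=float)
    alpha = Ns.mean() / Ni.mean()            # E[Ns] / E[Ni]
    d = Ns - alpha * Ni                      # per-image balanced difference
    var_d = (d ** 2).mean() - d.mean() ** 2  # <delta^2(Ns - alpha Ni)>
    sigma_alpha = var_d / (Ns.mean() + alpha * Ni.mean())
    return alpha, sigma_alpha
```

For perfectly correlated beams detected with efficiencies $\eta_s$, $\eta_i$, the sketch reproduces $\sigma_{\alpha}=(1+\alpha)/2-\eta_s$ with $\alpha=\eta_s/\eta_i$, as in Eq. (\ref{sigma_alfa}).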
\subsection{Determination of the CS}\label{evaluation of the CS}
\begin{figure}
\caption{\textsf{Determination of the center of symmetry. The right-hand side presents a typical image obtained by the CCD
in the working conditions. The image is separated into two portions, one collecting the light of the signal beam (H-polarized)
and the other the light of the idler beam (V-polarized). Two regions $R_{s}$ and $R_{i}$ are selected in the two portions.}}
\label{CMdetermination}
\end{figure}
The single-shot image is stored as a "superpixel" matrix (Fig. \ref{CMdetermination}). We select two equal rectangular regions
$\mathcal{A}_{det,s}$ and $\mathcal{A}_{det,i}$, belonging to the signal and idler portions of the image and containing a certain
number of "superpixels". In our case, we choose $9~mm^2$ areas that span a wavelength bandwidth of the order of $10$ nm around the
degeneracy at $710$ nm. Fixing the region $\mathcal{A}_{det,s}$, we evaluate the noise reduction factor $\sigma_{spatial}$ as a spatial average over the pairs of conjugated "superpixels" inside the two regions, as a function of the position $\mathbf{\xi}=(\xi_1, \xi_2)$ of the center of the region $\mathcal{A}_{det,i}$. In this way we obtain a matrix of values $\sigma_{spatial}(\mathbf{\xi})$ as a function of the
center $\mathbf{\xi}$ of the detection area $\mathcal{A}_{det,i}$.
The results obtained from the analysis of a typical image are presented in Fig. \ref{CMdetermination}, where a 6$\times$6 binning
of the physical pixels has been applied and the number of photons per superpixel is $\simeq1700$. The presence of correlations
around $\mathbf{\xi}=0$ shows up as a dip in the values of the spatial average $\sigma_{spatial}$, whereas far from the
minimum the correlation decreases, because conjugated pixels no longer detect correlated photons. Thus, this measurement
of the spatial correlation allows one to determine the best position of $\mathcal{A}_{det,i}$ with respect to $\mathcal{A}_{det,s}$,
and hence the center of symmetry of the correlation. The size of the correlation dip represents the coherence area, which in our experiment
is $\mathcal{A}_{coh}\sim 120\times120(\mu m)^{2}$.
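The scan just described can be sketched numerically. The sketch below is illustrative only: for simplicity it displaces the idler region by a pure (periodic) translation, whereas the real geometry involves point-symmetric regions; the minimum of the map plays the role of the dip in Fig. \ref{CMdetermination}.

```python
import numpy as np

def cs_scan(signal, idler, max_shift):
    """Scan a noise-reduction-like quantity over trial displacements of the
    idler region; the minimum of the resulting map locates the relative
    position at which the two regions are conjugated."""
    sigma_map = {}
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(idler, (dx, dy), axis=(0, 1))
            diff = signal - shifted
            # variance of the difference normalized to the total counts
            sigma_map[(dx, dy)] = diff.var() / (signal + shifted).mean()
    best = min(sigma_map, key=sigma_map.get)
    return best, sigma_map
```

On synthetic images sharing a common photon pattern, the scan recovers the displacement between the two regions, mimicking the sub-superpixel positioning procedure of the text.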
The superpixel size used for this measurement must realize a tradeoff between the visibility of the correlation and the final
uncertainty in the CS determination. On the one hand, as discussed in Sec.~\ref{Multimode correlation}, for a good visibility of the
quantum correlation (and to increase the signal-to-electronic-noise ratio) the "superpixel" area should be much greater than
$\mathcal{A}_{coh}$. On the other hand, a small pixel size leads to a small uncertainty. We found that in our setup the best choice is
a pixel size equal to the coherence area, which can be obtained by a 6$\times$6 binning, as in Fig. \ref{CMdetermination}.
From the discussion above, it is clear that the described measurement allows a positioning of the two correlated (symmetrical)
regions within a single-"superpixel" uncertainty, while the center of symmetry is identified within half a "superpixel". However,
it can be demonstrated that even a shift of a small fraction of a "superpixel" with respect to the real CS determines an increase of
the noise reduction factor \cite{BrambPRA2004,BrambPRA2008}. In practice, the optimization of the NRF by micro-positioning of
the CCD allows one to determine the physical center of symmetry within a final uncertainty of less than 1/10 of the "superpixel",
hence of the coherence area in our setup \cite{BrambPRA2004,BrambPRA2008}.
\subsection{Determination of the minimum size of $\mathcal{A}_{det,j}$}\label{A_det aetimation}
In the previous Subsection we showed that the CS can be determined with good precision; the regions $\mathcal{A}_{det,s}$ and
$\mathcal{A}_{det,i}$ are consequently fixed to be strictly symmetrical and correlated even "locally" (i.e. for each pair of
twin spatial modes).
However, as pointed out in Sec. \ref{Multimode correlation}, perfect detection of the quantum correlation requires the condition
$\mathcal{M}_{spatial}=\mathcal{A}_{det,j}/\mathcal{A}_{coh}\gg1$. In order to establish when this occurs, we measure the noise
reduction factor as a function of the detection area, $\sigma_{\alpha}(\mathcal{A}_{det})$, i.e. as a function of the number of detected
spatial modes. The measurement has been performed on a set of $\mathcal{N}=4000$ images using a $12\times12$ binning, corresponding to a
superpixel size of $240\times240(\mu m)^{2}$. Here, we define $N_{j}(k)$ as the total number of photons detected in the region
$\mathcal{A}_{det,j}$ of the $k$-th image of the set. $\sigma_{\alpha}(\mathcal{A}_{det,j})$ has been evaluated according to Eq.
(\ref{sigmaalfaE}) over the set of 4000 images. The single determination of the noise reduction factor is obtained by
subtracting the total numbers of photons detected in the two large regions $\mathcal{A}_{det,j}$ of the same image, and the
statistics is obtained over the set of images. The results are reported in Fig. \ref{SigmaVsAdetection}.
As expected, $\sigma_{\alpha}(\mathcal{A}_{det,j})$ is a decreasing function reaching an asymptotic value for
$\mathcal{A}_{det,j}>\mathcal{A}_{0}=1440\times1440(\mu m)^{2}$, which corresponds roughly to a number of spatial modes larger
than 150. Therefore, working with detection areas larger than $\mathcal{A}_{0}$ ensures the condition
$\mathcal{M}_{spatial}=\mathcal{A}_{det,j}/\mathcal{A}_{coh}\gg1$. This prevents a possible bias in the estimation of the quantum
efficiency due to the uncertainty in the propagation direction of the correlated photons.
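The bookkeeping of this measurement can be sketched as follows. The sketch is illustrative: on toy stacks of independently correlated pixels the decrease with area (which physically comes from coherence-area misalignment at the region borders) does not appear, so only the flat asymptotic regime is reproduced.

```python
import numpy as np

def sigma_alpha(Ns, Ni):
    # estimators of Eqs. (alfaE)-(sigmaalfaE) on per-image total counts
    a = Ns.mean() / Ni.mean()
    d = Ns - a * Ni
    return d.var() / (Ns.mean() + a * Ni.mean())

def sigma_vs_area(stack_s, stack_i, sides):
    """Noise reduction factor versus the linear size (in superpixels) of
    square detection areas cut from image stacks of shape (n_img, ny, nx)."""
    return [sigma_alpha(stack_s[:, :s, :s].sum(axis=(1, 2)),
                        stack_i[:, :s, :s].sum(axis=(1, 2)))
            for s in sides]
```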
\begin{figure}
\caption{\textsf{Noise reduction factor as a function of the detection area. The x-axis reports the linear size of the detection areas in $\mu$m. The two experimental curves refer to the noise reduction factor $\sigma_{\alpha}$.}}
\label{SigmaVsAdetection}
\end{figure}
\subsection{Experimental contributions to the excess noise}\label{Contributions to the excess noise}
Other sources of experimental noise lead to systematic effects on the experimental values of
$\sigma_{\alpha}$ and consequently on the quantum efficiency obtained from Eq. (\ref{sigma_alfa}). In
the following we address these problems.
The largest contribution to the excess noise in our setup is related to the
instability of the Q-switched laser pulse. In particular, we observed a pulse-to-pulse fluctuation of the
energy of more than $10\%$. The power $P$ of the pulse is directly related to the mean
number of photons per mode, $\mu\propto\sinh^{2}(\mathrm{const}\cdot\sqrt{P})$. Therefore, the temporal statistics
of the PDC emission is drastically influenced by the pump power fluctuations; in particular, $\mu$ is
not a constant. A temporal statistics over many pulses (many images) will be characterized by a mean
value $\overline{\mu}$ and a variance $V\left(\mu \right)$. It can be demonstrated that the
contribution of the pump fluctuations modifies the expected value of the noise reduction factor with
respect to Eq.~(\ref{sigma}) in the form
\begin{equation}\label{laser fluct}
\sigma\equiv\frac{\left\langle\delta ^{2}N_{-}\right\rangle} {\left\langle
N_{s}+N_{i}\right\rangle}=
1-\eta_{+}+\frac{\eta_{-}^{2}}{2\eta_{+}}\left[\overline{\mu}+\frac{1}{2}+
\frac{V\left(\mu\right)}{\overline{\mu}}\left(1+\mathcal{M}_{tot} \right)\right].
\end{equation}
From this equation it is clear that the instability of the pump generates a contribution to the
excess noise that can be relevant, since it includes a factor $\mathcal{M}_{tot}$.%
\footnote{The order of magnitude in our experiment can be estimated to be as large as
$2\eta_{+}\mathcal{M}_{tot} V\left(\mu\right)/\overline{\mu}=V\left(\langle
N_{s}+N_{i}\rangle\right)/\overline{\langle N_{s}+N_{i}\rangle}\sim5\cdot 10^{3}$ (see Tab.
\ref{Tab}, first two columns). This is four orders of magnitude larger than the excess noise due to
the thermal fluctuations.} Following the same argument as in Section \ref{Multimode correlation},
the effect of this enhanced excess noise can be suppressed if the losses on the two beams are
compensated a posteriori, by evaluating $\sigma_{\alpha}$ instead of $\sigma$. In this case
$\sigma_{\alpha}$ reduces again to Eq. (\ref{sigma_alfa}).
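The cancellation mechanism can be verified with a toy Monte Carlo. The sketch below uses binomially thinned Poisson pairs (not a full multimode thermal model) with a common-mode $\sim10\%$ pulse-to-pulse drift of the mean photon number; it only illustrates how the a posteriori balancing suppresses the pump-induced excess noise.

```python
import numpy as np

rng = np.random.default_rng(0)
eta_s, eta_i, n_img = 0.6, 0.5, 20000
# pulse-to-pulse pump instability: ~10% fluctuation of the mean photon number
lam = 1e5 * rng.normal(1.0, 0.1, n_img).clip(0.5)
M = rng.poisson(lam)                  # correlated photons per pulse
Ns = rng.binomial(M, eta_s)           # detected signal counts
Ni = rng.binomial(M, eta_i)           # detected idler counts

# raw NRF (Eq. sigma): blown up by the common-mode pump fluctuation
sigma_raw = (Ns - Ni).var() / (Ns + Ni).mean()
# loss-compensated NRF (Eq. sigma_alfa): the pump term cancels
alpha = Ns.mean() / Ni.mean()
d = Ns - alpha * Ni
sigma_alpha = d.var() / (Ns.mean() + alpha * Ni.mean())
```

In this toy model `sigma_raw` is well above the shot-noise level while `sigma_alpha` returns to $(1+\alpha)/2-\eta_s$.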
Another important source of excess noise is the background generated by the electronics of the
CCD (digitization, dark counts) and by the stray light, mostly caused by the fluorescence of the
filter and mirrors used for stopping the pump, and by residuals of the pump itself. The first,
electronic, contribution depends on the level of binning and can be considered independent
of the thermal noise of the PDC and of the stray-light noise. In principle, the stray-light
noise is also uncorrelated with the thermal fluctuations of the PDC light, and can be represented
by a Poissonian-like statistics. However, some correlation between the stray light and the PDC
emission is introduced by the fluctuations of the pump, since when the pump pulse is more (less)
energetic both the PDC and the stray light increase (decrease) accordingly. Anyway, it can be
demonstrated that even this correlation cancels out in the photon-number difference when the
transmissions of the signal and idler paths are balanced. We define the number of counts $N_{s/i}'$
registered in the region $R_{s/i}$ as the sum of the PDC photons $N_{s/i}$ and the
background $M_{s/i}$.
The expression linking the quantum efficiency to the expectation values of measurable quantities in
the presence of background, analogous to Eq. (\ref{sigma_alfa}), is
\begin{equation}\label{sigma_alfaB}
\sigma_{\alpha,B}\equiv\frac{\left\langle\delta
^{2}(N'_{s}-\alpha_{B}N'_{i})\right\rangle-\left\langle\delta
^{2}(M_{s}-\alpha_{B}M_{i})\right\rangle} {2\left(\left\langle N'_{s}\right\rangle-\left\langle
M_{s}\right\rangle\right)}=\frac{1}{2}(1+\alpha_{B})-\eta_{s},
\end{equation}
where $\alpha_{B}\equiv\frac{\eta_{s}}{\eta_{i}}=\frac{\langle N'_{s}\rangle-\langle
M_{s}\rangle}{\langle N'_{i}\rangle-\langle M_{i}\rangle}$. The background and its statistics can
be measured easily and independently, by collecting a set of $\mathcal{M}$ images when the PDC is
turned off, simply by a $90^{\circ}$ rotation of the crystal. Following the same formalism as in point d) of
the experimental procedure, $\alpha_{B}$ is obtained as:
\begin{equation}\label{alfaBE}
\alpha_{B}=\frac{E[N'_s (k)]-E[M_s (p)]}{E[N'_i (k)]-E[M_i (p)]},
\end{equation}
where $E[N'_j(k)]=\mathcal{N}^{-1} \sum_{k=1}^{\mathcal{N}}N'_j(k)$, and
$E[M_j(p)]=\mathcal{M}^{-1} \sum_{p=1}^{\mathcal{M}}M_j(p)$ represent the experimental
determination of the quantum expectation values $\langle N'_{j}\rangle$ and $\langle M_{j}\rangle$
respectively. At the same time, $\sigma_{\alpha,B}$ in Eq. (\ref{sigma_alfaB}) is obtained by the
following experimental estimates:
\begin{eqnarray}\label{sigmaalfaBE}
\left\langle\delta^{2}(N'_{s}-\alpha_{B}N'_{i})\right\rangle\mapsto E[(N'_{s} (k)- \alpha_{B}
N'_{i} (k) )^2]-E[N'_{s} (k)- \alpha_{B} N'_{i} (k)]^2\\\nonumber
\left\langle\delta^{2}(M_{s}-\alpha_{B}M_{i})\right\rangle\mapsto
E[(M_{s} (p)- \alpha_{B} M_{i} (p) )^2]-E[M_{s} (p)- \alpha_{B} M_{i} (p)]^2
\end{eqnarray}
We also mention that, for each measurement performed in the present work, the images affected by
cosmic rays have been discarded by a dedicated algorithm.
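The background-corrected estimation can be sketched as follows. This is an illustrative implementation, not the analysis software; the test models the background as an independent Poissonian contribution added on top of thinned twin-beam counts.

```python
import numpy as np

def eta_from_nrf(Nps, Npi, Ms, Mi):
    """Background-corrected estimation: alpha_B from Eq. (alfaBE),
    sigma_{alpha,B} from Eqs. (sigma_alfaB)-(sigmaalfaBE), and eta_s by
    inversion of Eq. (sigma_alfaB)."""
    Nps, Npi, Ms, Mi = map(np.asarray, (Nps, Npi, Ms, Mi))
    alpha_B = (Nps.mean() - Ms.mean()) / (Npi.mean() - Mi.mean())
    var_N = np.var(Nps - alpha_B * Npi)   # PDC + background variance
    var_M = np.var(Ms - alpha_B * Mi)     # background-only variance
    sigma_aB = (var_N - var_M) / (2.0 * (Nps.mean() - Ms.mean()))
    eta_s = 0.5 * (1.0 + alpha_B) - sigma_aB
    return alpha_B, sigma_aB, eta_s
```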
\subsection{Efficiency estimation and uncertainty evaluation }\label{Efficiency estimation and uncertainty}
In the previous Subsections we have presented the procedure to obtain an unbiased estimation of the
detection efficiency $\eta_s$ by appropriately positioning and sizing the detection regions,
balancing the losses a posteriori, and analyzing the background contributions. In this Subsection we
present the experimental estimate of $\eta_s$ as well as its uncertainty budget.
The sizing parameters chosen for this experimental proof of principle
are: detection areas $\mathcal{A}_{det,j}=2400\times3840(\mu
m)^{2}$ ($5\times8$ superpixels of size $480\times480(\mu m)^{2}$
obtained by a $24\times24$ binning of the physical pixels), corresponding to
about $640\times\mathcal{A}_{coh}$. The estimated value of $\eta_s$
is obtained by inverting Eq. (\ref{sigma_alfaB}), where
$\alpha_B$ is calculated according to Eq. (\ref{alfaBE}) and
$\sigma_{\alpha,B}$ by performing the substitutions of Eqs.
(\ref{sigmaalfaBE}), on the basis of $\mathcal{N}$ images with PDC
light and $\mathcal{M}$ images of background light. Obviously, since $\alpha_B\equiv\eta_s/\eta_i$, $\eta_i$ can be
evaluated as $\eta_i=\eta_s/\alpha_B$.
Since we intend to provide an uncertainty associated with the estimated value of $\eta_s$ (and
$\eta_i$), we repeat the experiment $\mathcal{Z}$ times, i.e. we collect
$\mathcal{Z}\cdot \mathcal{N}$ images with PDC light and $\mathcal{Z}\cdot \mathcal{M}$ with
background light. Thus the estimated values of $\alpha_B$ and $\sigma_{\alpha,B}$ can be obtained
from
\begin{equation}\label{alfa^estim}
\alpha|_{estim}=\mathcal{Z}^{-1}\sum^{\mathcal{Z}}_{l=1} \alpha(l) \qquad
\sigma_{\alpha}|_{estim}={\mathcal{Z}}^{-1}\sum^{\mathcal{Z}}_{l=1} \sigma_{\alpha} (l)
\end{equation}
and the associated uncertainty is obtained following the guidelines of Ref. \cite{GUM}.
Specifically, the uncertainty propagation is performed on the $2 \mathcal{N} + 2 \mathcal{M}$
measured quantities, namely $N'_s(k)$, $N'_i(k')$, $M_s(p)$, $M_i(p')$ with
$k,k'=1,...,\mathcal{N}$ and $p,p'=1,...,\mathcal{M}$. We accounted for the correlations between
$N'_s(k)$ and $N'_i(k')$ when $k=k'$, due mainly to the PDC light, and between $M_s(p)$ and $M_i(p')$
when $p=p'$, due to the pulse-to-pulse laser energy instability; it is assumed that there is no
correlation between quantities measured in different images. In our experiment
$\mathcal{N}=\mathcal{M}=500$ and $\mathcal{Z}=8$.
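The block-wise estimation of Eq. (\ref{alfa^estim}) and the standard error of Eq. (\ref{sigma^estim}) can be sketched as follows (illustrative only; for simplicity the sketch omits the background correction and works on background-free counts):

```python
import numpy as np

def blockwise_estimates(Ns_all, Ni_all, Z):
    """Estimate alpha and sigma_alpha in each of Z blocks of images and
    return the block means together with the standard errors of the means
    (Eqs. alfa^estim and sigma^estim)."""
    alphas, sigmas = [], []
    for Ns, Ni in zip(np.array_split(np.asarray(Ns_all), Z),
                      np.array_split(np.asarray(Ni_all), Z)):
        a = Ns.mean() / Ni.mean()
        d = Ns - a * Ni
        alphas.append(a)
        sigmas.append(d.var() / (Ns.mean() + a * Ni.mean()))
    alphas, sigmas = np.array(alphas), np.array(sigmas)
    sem = lambda x: np.sqrt(((x - x.mean()) ** 2).sum() / (Z * (Z - 1)))
    return alphas.mean(), sem(alphas), sigmas.mean(), sem(sigmas)
```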
The estimated values of $\alpha_B$ and $\sigma_{\alpha,B}$ together with their uncertainties are
presented in Table \ref{Tab}, where, for the sake of comparison, the corresponding values
without background subtraction ($\alpha$ and $\sigma_{\alpha}$, respectively) are also shown. According to Eq.
(\ref{sigma_alfaB}) we obtained $\eta_{s}=0.613 \pm 0.011$.
We underline that the estimated value of $\eta_s$ corresponds to the efficiency of the whole
quantum channel including the CCD (not only to the efficiency of the CCD itself), and that the
uncertainty associated with $\eta_s$ accounts only for the statistical contributions due to the
fluctuations of the measured quantities $N$ and $M$ (Type A uncertainty contributions,
according to Ref. \cite{GUM}).
Typically, further non-statistical uncertainty contributions (Type B \cite{GUM}) should be
accounted for in a complete uncertainty budget. For example, in this case a proper evaluation of
$\alpha_B$ with small uncertainty is mandatory to nullify the effect of the excess noise in Eq.
(\ref{laser fluct}), i.e. to ensure the validity of Eq. (\ref{sigma_alfaB}). A Type B uncertainty
contribution associated with the nullification of the excess-noise term should therefore be considered. In Section
\ref{Contributions to the excess noise} we showed that the excess noise due to the pulse-to-pulse
instability is of the order of $5\cdot10^{3}$ (see Footnote). From Eq. (\ref{laser fluct}) it turns
out that the condition for neglecting that term is
$\eta_{-}^{2}/(4\eta_{+}^{2})\cdot5\cdot10^{3}\ll1$, which in our case means $\eta_{-}\ll10^{-2}$.
The uncertainty on $\alpha_{B}$ presented in Table \ref{Tab} shows that the balancing of the two
arms can be performed within $10^{-5}$, equivalent to the condition $\eta_{-}=10^{-5}$. Thus, the
possible contribution to the uncertainty due to this term is less than $10^{-6}$.
\begin{table}
\centering
\small
\begin{tabular}{||c|c|c|c|c|c|c|c|c||} \hline\hline
$E[N'_s]$ & $\sqrt{E[\delta^{2} N'_s]}$ & $E[M_s]$ & $\sqrt{E[\delta^{2} M_s]}$ & $\alpha^{estim}$ & $\alpha^{estim}_{B}$ & $\sigma^{estim}$ & $\sigma^{estim}_{\alpha}$ & $\sigma^{estim}_{\alpha,B}$\\\hline\hline
262710 & 35982 & 12751 & 1318 & 0.99952 & 0.99416 & 0.454 & 0.449 & 0.384 \\
(620) & (437) & (158) & (30) & (0.00003) & (0.00004) &(0.010) &(0.010) & (0.011)\\\hline\hline
\end{tabular}
\caption{\textsf{Experimental estimates (row 2) and their uncertainties (row 3). Columns 1-2 show the mean counts in the detection area $\mathcal{A}_{det,s}$ of the single images and the mean square fluctuation $\delta^{2} N'_s(k)= (N'_s(k)-E[N'_s])^{2}$. Columns 3-4 report the same quantities for the background images. Columns 5-6 present the experimental values of the compensation factor $\alpha$ and the value corrected for the background counts. Columns 7-9 report the raw noise reduction factor, the one after compensation of the losses, and the one after compensation and background correction, respectively.}}\label{Tab}
\end{table}
In the determination of $\sigma_{\alpha,B}$ this uncertainty contribution is several orders of
magnitude below the statistical (Type A) contributions and is thus absolutely negligible in the complete
uncertainty budget. A larger Type B contribution comes from the bias in the determination of the center of symmetry, which in our present experiment is 1/10 of the coherence area in both coordinates of the detection plane. Following the argument at the end of Section \ref{evaluation of the CS}, it generates an uncertainty of 1.5\%.
The actual relative uncertainty in the estimation of $\eta_s$ is $2.3\%$; it is
expected that it can easily be reduced by more than one order of magnitude by increasing the number of
collected images, until the bottleneck of the Type B uncertainty contributions is reached.
\section{Discussions and Conclusions}
As a test of consistency for the theoretical model at the basis of the proposed CCD calibration
technique, and, in particular, for the associated uncertainty model, we evaluate the statistical
uncertainty associated with the mean values of $\alpha_B$ and $\sigma_{\alpha,B}$, according to
\begin{eqnarray}\label{sigma^estim}
\Delta\alpha^{estim}=\sqrt{[\mathcal{Z}(\mathcal{Z}-1)]^{-1}\sum^{\mathcal{Z}}_{l=1}
\left[\alpha(l)-\alpha^{estim}\right]^{2}}\\ \nonumber
\Delta\sigma^{estim}_{\alpha}=\sqrt{[\mathcal{Z}(\mathcal{Z}-1)]^{-1}\sum^{\mathcal{Z}}_{l=1}
\left[\sigma_{\alpha}(l)-\sigma^{estim}_{\alpha}\right]^{2} },
\end{eqnarray}
obtaining a good agreement with the estimated uncertainty.
Furthermore, according to the principle that the accuracy of a measurement depends
on the measuring time (in our case the number of acquired images), and not on how the data
are arranged, we verified that the final uncertainty on the mean values is not influenced by the
different possible choices of $\mathcal{Z}$ and $\mathcal{N}$, provided that the total number of images
$\mathcal{Z}\cdot\mathcal{N}$ is kept constant. On the contrary, the standard deviation of the populations is
a function of $\mathcal{N}$, as shown in Fig. \ref{Std Sigma}.
\begin{figure}
\caption{\textsf{Standard deviation of $\sigma_{\alpha,B}$ as a function of $\mathcal{N}$.}}
\label{Std Sigma}
\end{figure}
We note that Tab. \ref{Tab} shows that $\langle M_{s}\rangle$ is 5\% of the total counts $\langle
N'_{s}\rangle$, although the weight of the background correction is 15\% of the estimated noise
reduction factor. Nevertheless, the uncertainty on the value of $\sigma_{\alpha,B}$ is only
slightly influenced by the background correction. For the sake of completeness, we observe that the
electronic noise contributes with a standard deviation $\Delta\sim70$ counts. In a forward-looking
perspective, in order to push the uncertainty of these measurements to the level of the best values
obtained in the single-photon counting regime, it would be very important to reduce the stray light.
Indeed, it is possible to design a different experimental configuration that limits the
fluorescence induced by the pump.
In conclusion, we have proposed and experimentally demonstrated a new
method for the absolute calibration of analog detectors, based on
measuring the sub-shot-noise intensity correlation between the two
branches of parametric down-conversion. The results on the
calibration of a scientific CCD camera demonstrate the metrological
interest of the method, which could find various applications,
starting from the possibility of providing a key element for redefining
the candela unit \cite{qc} in terms of photon counting.
\section*{Acknowledgments}
This work has been supported in part by MIUR (PRIN 2007FYETBY), NATO
(CBP.NR.NRCL.983251) and QuCandela EU project.
\end{document}
\begin{document}
\title{Unified Riccati theory for optimal permanent and sampled-data control problems in finite and infinite time horizons}
\begin{abstract}
We revisit and extend the Riccati theory, unifying continuous-time linear-quadratic optimal permanent and sampled-data control problems, in finite and infinite time horizons. In a nutshell, we prove that the following diagram commutes:
\begin{equation*}
\xymatrix@R=2cm@C=4cm {
\mathrm{(SD\text{-}DRE)} \hspace{-5cm} & E^{T,\Delta} \ar[r]^{T \to +\infty} \ar[d]_{\Vert \Delta \Vert \to 0} & E^{\infty,\Delta} \ar[d]^{\Vert \Delta \Vert \to 0} & \hspace{-5cm} \mathrm{(SD\text{-}ARE)} \\
\mathrm{(P\text{-}DRE)} \hspace{-5cm} & E^T \ar[r]_{T \to +\infty} & E^\infty & \hspace{-5cm} \mathrm{(P\text{-}ARE)}
}
\end{equation*}
i.e., that:
\begin{itemize}
\item[--] when the time horizon $T$ tends to $+\infty$, one passes from the Sampled-Data Difference Riccati Equation~$\mathrm{(SD\text{-}DRE)}$ to the Sampled-Data Algebraic Riccati Equation~$\mathrm{(SD\text{-}ARE)}$, and from the Permanent Differential Riccati Equation~$\mathrm{(P\text{-}DRE)}$ to the Permanent Algebraic Riccati Equation~$\mathrm{(P\text{-}ARE)}$;
\item[--] when the maximal step~$\Vert \Delta \Vert$ of the time partition~$\Delta$ tends to~$0$, one passes from~$\mathrm{(SD\text{-}DRE)}$ to~$\mathrm{(P\text{-}DRE)}$, and from~$\mathrm{(SD\text{-}ARE)}$ to~$\mathrm{(P\text{-}ARE)}$.
\end{itemize}
The notation $E$ in the above diagram (with various superscripts) refers to the solution of each of the Riccati equations listed above. Our notations and analysis provide a unified framework in order to settle all corresponding results.
\end{abstract}
\textbf{Keywords:} optimal control; sampled-data control; linear-quadratic (LQ) problems; Riccati theory; feedback control; convergence.
\textbf{AMS Classification:} 49J15; 49N10; 93C05; 93C57; 93C62.
\section{Introduction}
Optimal control theory is concerned with acting on controlled dynamical systems by minimizing a given criterion. We speak of a \textit{Linear-Quadratic~(LQ)} optimal control problem when the control system is a linear differential equation and the cost is given by a quadratic integral (see \cite{Kwakernaak}). One of the main results of LQ theory is that the optimal control is expressed as a linear state feedback called \textit{Linear-Quadratic Regulator~(LQR)}. The linear state feedback is described by using the \textit{Riccati matrix}, which is the solution to a nonlinear backward matrix Cauchy problem in finite time horizon (DRE: Differential Riccati Equation), and to a nonlinear algebraic matrix equation in infinite time horizon (ARE: Algebraic Riccati Equation). The LQR problem is a fundamental issue in optimal control theory. Since the pioneering works by Maxwell, Lyapunov and Kalman (see the textbooks \cite{Kwakernaak, lee1986, sontag1998}), it has been extended to many contexts, among which: discrete-time~\cite{kuvera1972}, stochastic~\cite{zhu2005}, infinite-dimensional~\cite{curtain1974}, fractional~\cite{li2008}. One of these concerns the case where controls must be piecewise constant, which is particularly important in view of engineering applications. We speak, there, of \textit{sampled-data controls} (or \textit{digital controls}), in contrast to \textit{permanent controls}. Recall that a control problem is said to be \textit{permanent} when the control function is authorized to be modified at any time. In many problems, achieving the corresponding solution trajectory requires a permanent modification of the control. However such a requirement is not conceivable in practice for human beings, even for mechanical or numerical devices. Therefore sampled-data controls, for which only a finite number of modifications is authorized over any compact time interval, are usually considered for engineering issues. 
The corresponding set of \tauextit{sampling times} (at which the control value can be modified) is called \tauextit{time partition}. A vast literature deals with sampled-data control systems, as evidenced by numerous references and books (see, e.g., \cite{acker,acker2,azhm,bami,chen,fada,gero,Iser,land,nesi,raga,souz,toiv,tou} and references therein). One of the first contributions on LQ optimal sampled-data control problems can be found in~\cite{kalman1958}. This field has significantly grown since the 70's, motivated by the electrical and mechanical engineering issues with applications for example to strings of vehicles (see~\cite{astrom1963, dorato1971, levis1968, levis1971, melzer1971, middleton1990, salgado1988}). Sampled-data versions of feedback controls and of Riccati equations have been derived and, like in the fully discrete-time case (see \cite[Remark~2]{liu2014}), these two concepts in the sampled-data control case have various equivalent formulations in the literature, due to different developed approaches: in most of the references, LQ optimal sampled-data control problems are recast as fully discrete-time problems, and then the feedback control and the Riccati equation are obtained by applying the discrete-time dynamical programming principle (see \cite{bini2009,dorato1971,kalman1958}) or by applying a discrete-time version of the Pontryagin maximum principle (see \cite{astrom1963,dorato1971,kleinman1966}).
In the present paper our objective is to provide a mathematical framework in which LQ theories in the permanent and in the sampled-data case can be settled in a unified way. We build on our recent article~\cite{bourdin2017} in which we have developed a novel approach keeping the initial continuous-time formulation of the sampled-data problem, based on a sampled-data version of the Pontryagin maximum principle (see \cite{bourdin2013,bourdin2016}). Analogies between LQ optimal permanent and sampled-data controls have already been noticed in several works (see, e.g.,~\cite{salgado1988} or~\cite[Remark~5.4]{yuz2005}). In this article we gather in a unified setting the main results of LQ optimal control theory in the following four situations: permanent / sampled-data control, finite / infinite time horizon.
To this aim, an important tool is the map~$\mathcal{F}$ defined in Section~\ref{secF}, thanks to which we formulate, in the above-mentioned four situations, feedback controls and Riccati equations in Propositions~\ref{thmriccperm}, \ref{thmriccsample}, \ref{thmriccperminf} and~\ref{thmriccsampleinf} (Sections \ref{secfinitehorizon} and \ref{secinfinitehorizon}). Moreover, exploiting the continuity of $\mathcal{F}$, we establish convergence results between the involved Riccati matrices, either as the length of the time partition goes to zero or as the finite time horizon goes to infinity. Four convergence results are summarized in the diagram presented in the abstract, and we refer to our main result, Theorem~\ref{thmmain1} (stated in Section~\ref{secmain}), for the complete mathematical statement. Some of the convergence results are already known, some others are new. Hence, Theorem~\ref{thmmain1} fills some gaps in the existing literature and, in some sense, it closes the loop, which is the meaning of the commutative diagram that conveys the main message of this article.
Theorem~\ref{thmmain1} is proved in Appendix~\ref{app1}.
An important role in the proof is played by the \textit{optimizability} property (or \textit{finite cost} property), which is well known in infinite time horizon problems and is related to various notions of controllability and of stabilizability (see \cite{datta2004,terrell2009,weiss2000}). For sampled-data controls, when rewriting the original problem as a fully discrete-time problem, optimizability is formulated on the corresponding discrete-time problem (see \cite[Theorem~3]{dorato1971} or~\cite[p.~348]{levis1971}). Here, we prove in the instrumental Lemma~\ref{lemimportant} that, if the permanent optimizability property is satisfied, then the sampled-data optimizability property is satisfied for all time partitions of sufficiently small length (moreover, a bound of the minimal sampled-data cost is given, uniform with respect to the length of the time partition). This lemma plays a key role in order to prove convergence of the sampled-data Riccati matrix to the permanent one in infinite time horizon when the length of the time partition goes to zero.
\section{Preliminaries on linear-quadratic optimal control problems}\label{secprelim}
Throughout the paper, given any $p \in \mathbb{N}^*$, we denote by $\mathcal{S}^p_+$ (resp., $\mathcal{S}^p_{++}$) the set of all symmetric positive semi-definite (resp., positive definite) matrices of $\mathbb{R}^{p\times p}$. Let $n$, $m \in \mathbb{N}^*$, let $P \in \mathcal{S}^n_+$, and for every $t\in\mathbb{R}$, let~$A(t)\in\mathbb{R}^{n\times n}$, $B(t)\in\mathbb{R}^{n\times m}$, $Q(t)\in\mathcal{S}^n_+$ and $R(t)\in\mathcal{S}^{m}_{++}$ be matrices depending continuously on $t$. Let~$\Phi(\cdot,\cdot)$ be the \textit{state-transition matrix} (\textit{fundamental matrix solution}) associated to~$A(\cdot)$ (see \cite[Appendix~C.4]{sontag1998}).
\begin{definition}\label{defautonomous}
We speak of an \textit{autonomous setting} when $A(t)\equiv A \in \mathbb{R}^{n\times n}$, $B(t)\equiv B \in \mathbb{R}^{n\times m}$, $Q(t)\equiv Q \in \mathcal{S}^n_{+}$ and $R(t)\equiv R \in \mathcal{S}^{m}_{++}$ are constant with respect to $t$.
\end{definition}
\subsection{Notations for a unified setting}\label{secF}
In this paper we consider four different LQ optimal control problems: permanent control versus sampled-data control, and finite time horizon versus infinite time horizon. To provide a unified presentation of our results (see Propositions~\ref{thmriccperm}, \ref{thmriccsample}, \ref{thmriccperminf} and~\ref{thmriccsampleinf}), we define the map
$$ \fonction{\mathcal{F}}{\mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+}{\mathbb{R}^{n\times n}}{(t,E,h)}{\mathcal{F}(t,E,h) := \mathcal{M}(t,E,h) \mathcal{N}(t,E,h)^{-1} \mathcal{M}(t,E,h)^\top - \mathcal{G}(t,E,h) } $$
where $\mathcal{M}(t,E,h) := \mathcal{M}_1 (t,E,h) + \mathcal{M}_2(t,E,h)$, $\mathcal{N}(t,E,h) := \mathcal{N}_1(t,E,h) + \mathcal{N}_2 (t,E,h) + \mathcal{N}_3(t,E,h)$ and $\mathcal{G} (t,E,h) := \mathcal{G}_1(t,E,h) + \mathcal{G}_2(t,E,h)$, with
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& if $h > 0$ & if $h=0$ \\ \hline
& & \\
$\mathcal{M}_1(t,E,h) := $ & $\Phi(t,t-h)^\top E \left( \dfrac{1}{h} \displaystyle \int_{t-h}^t \Phi(t,\tau) B(\tau) \; d\tau \right)$ & $EB(t)$ \\
& & \\ \hline
& & \\
$ \mathcal{M}_2(t,E,h) :=$ & $\dfrac{1}{h} \displaystyle \int_{t-h}^t \Phi (\tau,t-h)^\top Q(\tau) \left( \int_{t-h}^\tau \Phi(\tau,\xi) B(\xi) \; d\xi \right) \; d\tau$ & $0_{\mathbb{R}^{n\times m}}$ \\
& & \\ \hline
& & \\
$ \mathcal{N}_1(t,E,h) := $ & $\displaystyle \dfrac{1}{h} \int_{t-h}^t R(\tau) \; d\tau$ & $R(t)$ \\
& & \\ \hline
& & \\
$ \mathcal{N}_2(t,E,h) := $ & $\displaystyle \dfrac{1}{h} \int_{t-h}^t \left( \int_{t-h}^\tau B(\xi)^\top \Phi(\tau,\xi)^\top \; d\xi \right) Q(\tau) \left( \int_{t-h}^\tau \Phi(\tau,\xi)B(\xi) \; d\xi \right) \; d\tau $ & $0_{\mathbb{R}^{m\times m}}$ \\
& & \\ \hline
& & \\
$ \mathcal{N}_3(t,E,h) := $ & $\displaystyle \dfrac{1}{h} \left( \int^t_{t-h} B(\tau)^\top \Phi(t,\tau)^\top \; d\tau \right) E \left( \int^t_{t-h} \Phi(t,\tau)B(\tau) \; d\tau \right)$ & $0_{\mathbb{R}^{m\times m}}$ \\
& & \\ \hline
& & \\
$ \mathcal{G}_1(t,E,h) := $ & $\displaystyle \dfrac{1}{h} \int_{t-h}^t \Phi (\tau,t-h)^\top Q(\tau) \Phi(\tau,t-h) \; d\tau $ & $Q(t)$ \\
& & \\ \hline
& & \\
$ \mathcal{G}_2(t,E,h) := $ & $\displaystyle \dfrac{1}{h} \Big( \Phi(t,t-h)^\top E \Phi(t,t-h) - E \Big)$ & $A(t)^\top E + E A(t)$ \\
& & \\ \hline
\end{tabular}
\end{center}
The map~$\mathcal{F}$ is well-defined and is continuous (see Lemma~\ref{lemF} in Appendix~\ref{appprelim}). Moreover, for $h=0$, we have
$$ \mathcal{F}(t,E,0) = EB(t)R(t)^{-1} B(t)^\top E - Q(t) - A(t)^\top E - E A(t) \qquad \forall (t,E) \in \mathbb{R} \times \mathcal{S}^n_+. $$
One recognizes here the second member of the Permanent Differential Riccati Equation (see Proposition~\ref{thmriccperm} and Remark~\ref{remanalog}). The map~$\mathcal{F}$ is designed to provide a unified notation for the permanent and sampled-data control settings.
\begin{remark}
In the \textit{autonomous setting} (see Definition~\ref{defautonomous}), the state-transition matrix is $\Phi (t,\tau) = e^{(t-\tau)A}$ for all~$(t,\tau) \in \mathbb{R} \times \mathbb{R}$ (see, e.g.,~\cite[Lemma~C.4.1]{sontag1998}) and hence in this case the map~$\mathcal{F}$ does not depend on $t$, and
$$
\mathcal{F}(E,h) = \mathcal{M}(E,h) \mathcal{N}(E,h)^{-1} \mathcal{M}(E,h)^\top - \mathcal{G}(E,h)\qquad \forall E \in \mathcal{S}^n_+ \quad\forall h\geq 0
$$
where $\mathcal{M}(E,h) := \mathcal{M}_1 (E,h) + \mathcal{M}_2(E,h)$, $\mathcal{N}(E,h) := \mathcal{N}_1(E,h) + \mathcal{N}_2 (E,h) + \mathcal{N}_3(E,h)$ and $\mathcal{G} (E,h) := \mathcal{G}_1(E,h) + \mathcal{G}_2(E,h)$, with
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& if $h > 0$ & if $h=0$ \\ \hline
& & \\
$\mathcal{M}_1(E,h) := $ & $\displaystyle e^{hA^\top} E \left( \dfrac{1}{h} \int_{0}^h e^{\tau A} \; d\tau \right) B$ & $EB$ \\
& & \\ \hline
& & \\
$ \mathcal{M}_2(E,h) :=$ & $\displaystyle \dfrac{1}{h} \left( \int_{0}^h e^{\tau A^\top} Q \left( \int_0^\tau e^{\xi A} \; d\xi \right) \; d\tau \right) B$ & $0_{\mathbb{R}^{n\times m}}$ \\
& & \\ \hline
& & \\
$ \mathcal{N}_1(E,h) := $ & $R$ & $R$ \\
& & \\ \hline
& & \\
$ \mathcal{N}_2(E,h) := $ & $\displaystyle B^\top \left( \dfrac{1}{h} \int_{0}^h \left( \int_{0}^\tau e^{\xi A^\top} \; d\xi \right) Q \left( \int_{0}^\tau e^{\xi A} \; d\xi \right) \; d\tau \right) B $ & $0_{\mathbb{R}^{m\times m}}$ \\
& & \\ \hline
& & \\
$ \mathcal{N}_3(E,h) := $ & $\displaystyle B^\top \left( \dfrac{1}{h} \left( \int_0^{h} e^{\tau A^\top} \; d\tau \right) E \left( \int_0^{h}e^{\tau A} \; d\tau \right) \right) B$ & $0_{\mathbb{R}^{m\times m}}$ \\
& & \\ \hline
& & \\
$ \mathcal{G}_1(E,h) := $ & $\displaystyle \dfrac{1}{h} \int_{0}^h e^{\tau A^\top} Q e^{\tau A }\; d\tau $ & $Q$ \\
& & \\ \hline
& & \\
$ \mathcal{G}_2(E,h) := $ & $\displaystyle \dfrac{1}{h} \Big( e^{hA^\top} E e^{hA} - E \Big)$ & $A^\top E + E A$ \\
& & \\ \hline
\end{tabular}
\end{center}
In particular, in the autonomous setting and for $h=0$, we have
$$ \mathcal{F}(E,0) = EBR^{-1} B^\top E - Q - A^\top E - E A \qquad \forall E \in \mathcal{S}^n_+. $$
\end{remark}
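The autonomous formulas above can be made concrete in the scalar case $n=m=1$, where all the integrals of $e^{\tau A}$ admit closed forms. The following sketch (the numerical values of $a$, $b$, $q$, $r$ are illustrative choices, not taken from the paper) evaluates $\mathcal{F}(E,h)$ and checks numerically that $\mathcal{F}(E,h)\to\mathcal{F}(E,0)$ as $h \to 0$, in accordance with the continuity of $\mathcal{F}$.

```python
import math

# Illustrative scalar data (n = m = 1): A = a, B = b, Q = q, R = r.
# These values are chosen for this sketch only.
a, b, q, r = 0.5, 1.0, 2.0, 1.0

def F(E, h):
    """Scalar autonomous map F(E, h) from the table above, using
    closed forms of the integrals of e^{a tau} over [0, h]."""
    if h == 0.0:
        # F(E, 0) = E b^2 E / r - q - 2 a E
        return E * b * b * E / r - q - 2.0 * a * E
    e1 = (math.exp(a * h) - 1.0) / a                 # int_0^h e^{a tau} dtau
    e2 = (math.exp(2.0 * a * h) - 1.0) / (2.0 * a)   # int_0^h e^{2 a tau} dtau
    M = math.exp(a * h) * E * (e1 / h) * b \
        + (q * b / (a * h)) * (e2 - e1)                       # M1 + M2
    N = r + (b * b * q / (a * a * h)) * (e2 - 2.0 * e1 + h) \
        + (b * b / h) * e1 * e1 * E                           # N1 + N2 + N3
    G = q * e2 / h + (math.exp(2.0 * a * h) - 1.0) * E / h    # G1 + G2
    return M * M / N - G

gap = abs(F(1.0, 1e-4) - F(1.0, 0.0))   # small for small h
```

For these data, $\mathcal{F}(1,0) = 1 - 2 - 1 = -2$, and the gap above is of the order of $10^{-4}$.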
\subsection{Finite time horizon: permanent / sampled-data control}\label{secfinitehorizon}
Given any $T >0$, we denote by~$\mathrm{AC}([0,T],\mathbb{R}^n)$ the space of absolutely continuous functions defined on $[0,T]$ with values in $\mathbb{R}^n$, and by $\mathrm{L}^2([0,T],\mathbb{R}^m)$ the Lebesgue space of square-integrable functions defined almost everywhere on $[0,T]$ with values in $\mathbb{R}^m$. In what follows $\mathrm{L}^2([0,T],\mathbb{R}^m)$ is the set of \textit{permanent controls}.
A \textit{time partition} of the interval $[0,T]$ is a finite set~$\Delta = \{ t_i \}_{i=0,\ldots,N}$, with $N \in \mathbb{N}^*$, such that~$ 0 = t_0 < t_1 < \ldots < t_{N-1} < t_N = T $. We denote by~$\mathrm{PC}^\Delta ([0,T],\mathbb{R}^m)$ the space of functions defined on $[0,T]$ with values in $\mathbb{R}^m$ that are piecewise constant according to the time partition~$\Delta$, that is
$$
\mathrm{PC}^\Delta ([0,T],\mathbb{R}^m) := \{ u : [0,T] \to \mathbb{R}^m \ \mid\ u(t) = u_i\in\mathbb{R}^m\quad\forall t\in[t_i,t_{i+1}), \ i=0,\ldots,N-1 \}.
$$
In what follows $\mathrm{PC}^\Delta([0,T],\mathbb{R}^m)$ is the set of \textit{sampled-data controls} according to the time partition $\Delta$ (it is a vector space of dimension $Nm$).
We set $\Vert \Delta \Vert := \max\{ h_i \mid i=1,\ldots,N\} > 0$, where $h_i := t_i - t_{i-1} > 0$ for all~$i=1,\ldots,N$. When $h_i = h$ for some~$h > 0$ and every~$i=1,\ldots,N$, the time partition $\Delta$ is said to be \textit{$h$-uniform} (which corresponds to \textit{periodic sampling}, see~\cite[Section~II.A]{bini2014}).
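As a small illustration of the definition of $\mathrm{PC}^\Delta$, a sampled-data control can be represented by the nodes of a (possibly non-uniform) time partition together with the values $u_0,\ldots,u_{N-1}$; the function and variable names below are illustrative, not taken from the paper.

```python
import bisect

def make_control(partition, values):
    """Represent u in PC^Delta([0, T], R) by the partition nodes
    t_0 < ... < t_N and the constant values u_0, ..., u_{N-1}."""
    assert len(partition) == len(values) + 1
    def u(t):
        # index i such that t lies in [t_i, t_{i+1})
        i = bisect.bisect_right(partition, t) - 1
        return values[min(i, len(values) - 1)]
    return u

delta = [0.0, 0.25, 0.5, 1.0]          # a non-uniform partition of [0, 1]
u = make_control(delta, [1.0, -2.0, 0.5])
```

By construction, $u$ takes the value $u_i$ on the half-open interval $[t_i, t_{i+1})$.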
In this section we consider two LQ optimal control problems in finite time horizon: permanent control~$u \in \mathrm{L}^2([0,T],\mathbb{R}^m)$ (Proposition~\ref{thmriccperm}) and sampled-data control $u \in \mathrm{PC}^\Delta([0,T],\mathbb{R}^m)$ (Proposition~\ref{thmriccsample}).
\begin{proposition}[Permanent control in finite time horizon]\label{thmriccperm}
Let $T > 0$ and let $x_0 \in \mathbb{R}^n$. The LQ optimal permanent control problem in finite time horizon $T$ given by
\begin{equation}\tag{$\mathrm{OCP}^T_{x_0}$}
\begin{array}{rl}
\text{minimize} & \langle P x(T) , x(T) \rangle_{\mathbb{R}^n} + \displaystyle \int_0^T \Big( \langle Q(\tau) x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R(\tau) u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \\[18pt]
\text{subject to} & \left\lbrace \begin{array}{l}
x \in \mathrm{AC}([0,T],\mathbb{R}^n), \qquad u \in \mathrm{L}^2([0,T],\mathbb{R}^m) \\[8pt]
\dot{x}(t) = A(t)x(t) + B(t)u(t) \qquad \text{for a.e.}\ t \in [0,T] \\[8pt]
x(0)=x_0
\end{array} \right.
\end{array}
\end{equation}
has a unique optimal solution $(x^*,u^*)$. Moreover $u^*$ is the time-varying state feedback
$$ u^*(t) = - \mathcal{N}(t,E^T(t),0)^{-1} \mathcal{M}(t,E^T(t),0)^\top x^*(t) \qquad \text{for a.e.}\ t \in [0,T] $$
where $E^T: [0,T] \to \mathcal{S}^{n}_+$ is the unique solution to the Permanent Differential Riccati Equation $\mathrm{(P\text{-}DRE)}$
\begin{equation}\tag{$\mathrm{P\text{-}DRE}$}
\left\lbrace
\begin{array}{l}
\dot{E^T}(t) = \mathcal{F}(t,E^T(t),0) \qquad \forall t \in [0,T] \\[5pt]
E^T(T) = P.
\end{array}
\right.
\end{equation}
Furthermore, the minimal cost of $(\mathrm{OCP}^T_{x_0})$ is equal to $ \langle E^T(0) x_0 , x_0 \rangle_{\mathbb{R}^{n}}$.
\end{proposition}
\begin{proposition}[Sampled-data control in finite time horizon]\label{thmriccsample}
Let $T > 0$, let $\Delta = \{ t_i \}_{i=0,\ldots,N}$ be a time partition of the interval $[0,T]$ and let~$x_0 \in \mathbb{R}^n$. The LQ optimal sampled-data control problem in finite time horizon $T$ given by
\begin{equation}\tag{$\mathrm{OCP}^{T,\Delta}_{x_0}$}
\begin{array}{rl}
\text{minimize} & \langle P x(T) , x(T) \rangle_{\mathbb{R}^n} + \displaystyle \int_0^T \Big( \langle Q(\tau) x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R(\tau) u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \\[18pt]
\text{subject to} & \left\lbrace \begin{array}{l}
x \in \mathrm{AC}([0,T],\mathbb{R}^n), \qquad u \in \mathrm{PC}^\Delta([0,T],\mathbb{R}^m) \\[8pt]
\dot{x}(t) = A(t)x(t) + B(t)u(t) \qquad \text{for a.e.}\ t \in [0,T] \\[8pt]
x(0)=x_0
\end{array} \right.
\end{array}
\end{equation}
has a unique optimal solution $(x^*,u^*)$. Moreover $u^*$ is the time-varying state feedback
$$ u^*_i = - \mathcal{N}(t_{i+1},E^{T,\Delta}_{i+1},h_{i+1})^{-1} \mathcal{M}(t_{i+1},E^{T,\Delta}_{i+1},h_{i+1} )^\top x^*(t_i) \qquad \forall i=0,\ldots,N-1 $$
where $E^{T,\Delta} = (E^{T,\Delta}_i)_{i=0,\ldots,N} \subset \mathcal{S}^{n}_+$ is the unique solution to the Sampled-Data Difference Riccati Equation~$\mathrm{(SD\text{-}DRE)}$
\begin{equation}\tag{$\mathrm{SD\text{-}DRE}$}
\left\lbrace
\begin{array}{l}
E^{T,\Delta}_{i+1}-E^{T,\Delta}_i = h_{i+1} \mathcal{F} (t_{i+1},E^{T,\Delta}_{i+1},h_{i+1}) \qquad \forall i=0,\ldots,N-1 \\[5pt]
E^{T,\Delta}_N = P.
\end{array}
\right.
\end{equation}
Furthermore, the minimal cost of $(\mathrm{OCP}^{T,\Delta}_{x_0})$ is equal to $ \langle E^{T,\Delta}_0 x_0 , x_0 \rangle_{\mathbb{R}^{n}}$.
\end{proposition}
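For the illustrative scalar autonomous data used earlier (values chosen for the sketch, not taken from the paper), the backward recursion $(\mathrm{SD\text{-}DRE})$ can be run directly: starting from $E_N = P$, one computes $E_i = E_{i+1} - h\,\mathcal{F}(E_{i+1},h)$ down to $E_0$. Comparing a coarse and a fine $h$-uniform partition of $[0,T]$ hints at the convergence stated in Theorem~\ref{thmmain1}.

```python
import math

# Illustrative scalar autonomous data (chosen for this sketch only).
a, b, q, r, P, T = 0.5, 1.0, 2.0, 1.0, 0.0, 1.0

def F(E, h):
    """Scalar autonomous version of the map F(E, h) (closed-form integrals)."""
    if h == 0.0:
        return E * b * b * E / r - q - 2.0 * a * E
    e1 = (math.exp(a * h) - 1.0) / a
    e2 = (math.exp(2.0 * a * h) - 1.0) / (2.0 * a)
    M = math.exp(a * h) * E * (e1 / h) * b + (q * b / (a * h)) * (e2 - e1)
    N = r + (b * b * q / (a * a * h)) * (e2 - 2.0 * e1 + h) + (b * b / h) * e1 * e1 * E
    G = q * e2 / h + (math.exp(2.0 * a * h) - 1.0) * E / h
    return M * M / N - G

def sd_dre_E0(num_steps):
    """E_N = P, then E_i = E_{i+1} - h F(E_{i+1}, h) backward down to E_0."""
    h = T / num_steps
    E = P
    for _ in range(num_steps):
        E -= h * F(E, h)
    return E

E0_coarse = sd_dre_E0(100)     # h = 0.01
E0_fine = sd_dre_E0(10000)     # h = 0.0001
```

For these data the permanent Riccati value $E^T(0)$ is about $1.73$, and the two discrete values above agree with it up to the discretization error.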
\begin{remark}\label{remanalog}
The mathematical contents of Propositions~\ref{thmriccperm} and~\ref{thmriccsample} are not new. The time-varying state feedback~$u^*$ in Proposition~\ref{thmriccperm} is usually written as
$$
u^*(t) = - R(t)^{-1} B(t)^\top E^T(t) x^*(t) \qquad \text{for a.e.}\ t \in [0,T]
$$
and $\mathrm{(P\text{-}DRE)}$ is usually written as
\begin{equation*}
\left\lbrace
\begin{array}{l}
\dot{E^T}(t) = E^T(t) B(t) R(t)^{-1} B(t)^\top E^T(t) - Q(t) - A(t)^\top E^T(t) - E^T(t)A(t) \qquad \forall t \in [0,T] \\[5pt]
E^T(T) = P
\end{array}
\right.
\end{equation*}
(see \cite{bressan2007, Kwakernaak, lee1986, sontag1998, trelat2005}). As in the fully discrete-time case~\cite[Remark~2]{liu2014}, the analogous results in the sampled-data control case admit various equivalent formulations in the literature.
Using the Duhamel formula,
Problem~$(\mathrm{OCP}^{T,\Delta}_{x_0})$ can be recast as a fully discrete-time linear-quadratic optimal control problem. In this way, the time-varying state feedback control $u^*$ in Proposition~\ref{thmriccsample} and $\mathrm{(SD\text{-}DRE)}$ were first obtained in~\cite{kalman1958} by applying the discrete-time dynamic programming principle (a method revisited in~\cite[p.~616]{dorato1971} and more recently in~\cite[Theorem~4.1]{bini2009}), while they are derived in~\cite[Appendix~B]{astrom1963} and in \cite[p.~618]{dorato1971} by applying a discrete-time version of the Pontryagin maximum principle (see \cite{kleinman1966}).
In Theorem~\ref{thmmain1} hereafter, we are going to prove convergence of $E^{T,\Delta}$ to $E^T$ when $\Vert \Delta \Vert \to 0$.
\end{remark}
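The recasting step rests on the Duhamel formula over one sampling interval: with $u$ constant on the interval, $x(t_{i+1}) = \Phi(t_{i+1},t_i)x(t_i) + \big(\int_{t_i}^{t_{i+1}}\Phi(t_{i+1},\tau)B(\tau)\,d\tau\big)u_i$. A minimal numerical sanity check of this one-step formula in the scalar autonomous case (illustrative values, not from the paper):

```python
import math

# Scalar autonomous system x' = a x + b u with u constant on [0, h]:
#   x(h) = e^{a h} x(0) + (int_0^h e^{a tau} dtau) b u.
a, b = 0.5, 1.0
x0, u, h = 1.0, -2.0, 0.1

x_exact = math.exp(a * h) * x0 + ((math.exp(a * h) - 1.0) / a) * b * u

# Cross-check against explicit Euler integration with a very fine step.
steps = 100000
dt = h / steps
x = x0
for _ in range(steps):
    x += dt * (a * x + b * u)
```

The fine Euler trajectory agrees with the closed-form value up to the integrator's truncation error.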
\subsection{Infinite time horizon: permanent / sampled-data control (autonomous setting and uniform time partition)}\label{secinfinitehorizon}
This section is dedicated to the infinite time horizon case. We denote by~$\mathrm{AC}([0,+\infty),\mathbb{R}^n)$ the space of functions defined on~$[0,+\infty)$ with values in $\mathbb{R}^n$ which are absolutely continuous over all intervals~$[0,T]$ with~$T> 0$, and by $\mathrm{L}^2([0,+\infty),\mathbb{R}^m)$ the Lebesgue space of square-integrable functions defined almost everywhere on $[0,+\infty)$ with values in $\mathbb{R}^m$. Assume that we are in the autonomous setting (see Definition~\ref{defautonomous}). We consider the following assumptions:
\begin{enumerate}
\item[$\mathrm{(H_1)}$] $Q \in \mathcal{S}^n_{++}$.
\item[$\mathrm{(H_2)}$] For every $x_0 \in \mathbb{R}^n$, there exists a pair $(x,u) \in \mathrm{AC}([0,+\infty),\mathbb{R}^n) \times \mathrm{L}^2 ([0,+\infty),\mathbb{R}^m)$ such that $\dot{x}(t) = Ax(t) + Bu(t)$ for almost every $t \geq 0$ and~$x(0)=x_0$, satisfying
$$ \displaystyle \int_0^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau < +\infty. $$
\end{enumerate}
Assumption~$\mathrm{(H_2)}$ is known in the literature as the \textit{optimizability} assumption (or \textit{finite cost} assumption) and is related to various notions of \textit{stabilizability} of linear permanent control systems (see \cite{weiss2000}). A wide literature is dedicated to this topic (see \cite{terrell2009} and the references mentioned in \cite[Section~10.10]{datta2004}). Recall that, if the pair~$(A,B)$ satisfies the Kalman condition (see \cite[Theorem~1.2]{zabczyk2008}) or only the weaker Popov-Belevitch-Hautus test condition (see \cite[Theorem~6.2]{terrell2009}), then~$\mathrm{(H_2)}$ is satisfied.
Let $h > 0$. The $h$-uniform time partition of the interval~$[0,+\infty)$ is the sequence $\Delta = \{ t_i \}_{i \in \mathbb{N}}$, where~$t_i := ih$ for every~$i \in \mathbb{N}$. We set $\Vert \Delta \Vert := h$ and denote by~$\mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)$ the space of functions defined on $[0,+\infty)$ with values in $\mathbb{R}^m$ that are piecewise constant according to the time partition $\Delta$, that is
$$ \mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m) := \{ u : [0,+\infty) \to \mathbb{R}^m\ \mid\ u(t) = u_i\quad \forall t \in [t_i,t_{i+1}), \ i\in\mathbb{N} \}. $$
We also consider the following assumption, which we call the \textit{$h$-optimizability} assumption:
\begin{enumerate}
\item[$\mathrm{(H}_2^h\mathrm{)}$] For every $x_0 \in \mathbb{R}^n$, there exists a pair $(x,u) \in \mathrm{AC}([0,+\infty),\mathbb{R}^n) \times \mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)$ such that $\dot{x}(t) = Ax(t) + Bu(t)$ for almost every $t \geq 0$ and $x(0)=x_0$, satisfying
$$ \int_0^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau < +\infty. $$
\end{enumerate}
Obviously, if $\mathrm{(H}_2^h\mathrm{)}$ is satisfied for some $h > 0$, then $\mathrm{(H_2)}$ is satisfied. In other words, $\mathrm{(H}_2^h\mathrm{)}$ for a given $h>0$ is stronger than~$\mathrm{(H}_2\mathrm{)}$. Conversely, we will prove in Lemma~\ref{lemimportant} below that, if $\mathrm{(H_1)}$ and~$\mathrm{(H_2)}$ are satisfied, then there exists~$\overline{h} > 0$ such that $\mathrm{(H}_2^h\mathrm{)}$ is satisfied for every $h\in(0,\overline{h}]$.
In this section, in the autonomous setting (see Definition~\ref{defautonomous}), we consider two infinite time horizon LQ optimal control problems: permanent control~$u \in \mathrm{L}^2([0,+\infty),\mathbb{R}^m)$ (Proposition~\ref{thmriccperminf}) and sampled-data control~$u \in \mathrm{PC}^\Delta([0,+\infty),\mathbb{R}^m)$ (Proposition~\ref{thmriccsampleinf}).
\begin{proposition}[Permanent control in infinite time horizon]\label{thmriccperminf}
Assume that we are in the autonomous setting (see Definition~\ref{defautonomous}). Let $x_0 \in \mathbb{R}^n$. Under Assumptions $\mathrm{(H_1)}$ and $\mathrm{(H_2)}$, the LQ optimal permanent control problem in infinite time horizon given by
\begin{equation}\tag{$\mathrm{OCP}^\infty_{x_0}$}
\begin{array}{rl}
\text{minimize} & \displaystyle \int_0^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \\[18pt]
\text{subject to} & \left\lbrace \begin{array}{l}
x \in \mathrm{AC}([0,+\infty),\mathbb{R}^n), \quad u \in \mathrm{L}^2([0,+\infty),\mathbb{R}^m) \\[8pt]
\dot{x}(t) = Ax(t) + Bu(t) \qquad \text{for a.e.}\ t \geq 0 \\[8pt]
x(0)=x_0
\end{array} \right.
\end{array}
\end{equation}
has a unique optimal solution $(x^*,u^*)$. Moreover $u^*$ is the state feedback
$$ u^*(t) = - \mathcal{N}(E^\infty,0)^{-1} \mathcal{M}(E^\infty,0) ^\top x^*(t) \qquad \text{for a.e.}\ t \geq 0 $$
where $E^\infty \in \mathcal{S}^{n}_{++}$ is the unique solution to the Permanent Algebraic Riccati Equation~$\mathrm{(P\text{-}ARE)}$
\begin{equation}\tag{$\mathrm{P\text{-}ARE}$}
\left\lbrace
\begin{array}{l}
\mathcal{F}(E^\infty,0) = 0_{\mathbb{R}^{n\times n}} \\[5pt]
E^\infty \in \mathcal{S}^n_{+}.
\end{array}
\right.
\end{equation}
Furthermore, the minimal cost of $(\mathrm{OCP}^\infty_{x_0})$ is equal to $ \langle E^\infty x_0 , x_0 \rangle_{\mathbb{R}^{n}}$.
\end{proposition}
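In the scalar case $n=m=1$, $(\mathrm{P\text{-}ARE})$ is a quadratic equation $b^2E^2/r - q - 2aE = 0$ with a unique nonnegative root, which gives a quick sanity check (the data below are illustrative choices, not from the paper):

```python
import math

# Illustrative scalar autonomous data (chosen for this sketch only).
a, b, q, r = 0.5, 1.0, 2.0, 1.0

# Unique nonnegative root of b^2 E^2 / r - q - 2 a E = 0.
E_inf = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)

# Residual of the scalar P-ARE at E_inf (should vanish).
residual = b * b * E_inf * E_inf / r - q - 2.0 * a * E_inf
```

For these data, $E^\infty = 0.5 + \sqrt{2.25} = 2$.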
\begin{proposition}[Sampled-data control in infinite time horizon]\label{thmriccsampleinf}
Assume that we are in the autonomous setting (see Definition~\ref{defautonomous}). Let $\Delta = \{ t_i \}_{i \in \mathbb{N}} $ be an $h$-uniform time partition of the interval~$[0,+\infty)$ and let~$x_0 \in \mathbb{R}^n$. Under Assumptions~$\mathrm{(H_1)}$ and $\mathrm{(H}^h_2\mathrm{)}$, the LQ optimal sampled-data control problem in infinite time horizon given by
\begin{equation}\tag{$\mathrm{OCP}^{\infty,\Delta}_{x_0}$}
\begin{array}{rl}
\text{minimize} & \displaystyle \int_0^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \\[18pt]
\text{subject to} & \left\lbrace \begin{array}{l}
x \in \mathrm{AC}([0,+\infty),\mathbb{R}^n), \quad u \in \mathrm{PC}^\Delta([0,+\infty),\mathbb{R}^m) \\[8pt]
\dot{x}(t) = Ax(t) + Bu(t) \qquad \text{for a.e.}\ t \geq 0 \\[8pt]
x(0)=x_0
\end{array} \right.
\end{array}
\end{equation}
has a unique optimal solution $(x^*,u^*)$. Moreover $u^*$ is the state feedback
$$ u^*_i = - \mathcal{N}(E^{\infty,\Delta},h)^{-1} \mathcal{M}(E^{\infty,\Delta},h)^\top x^*(t_i) \qquad \forall i \in \mathbb{N} $$
where $E^{\infty,\Delta} \in \mathcal{S}^{n}_{++}$ is the unique solution to the Sampled-Data Algebraic Riccati Equation~$\mathrm{(SD\text{-}ARE)}$
\begin{equation}\tag{$\mathrm{SD\text{-}ARE}$}
\left\lbrace
\begin{array}{l}
\mathcal{F}(E^{\infty,\Delta},h) = 0_{\mathbb{R}^{n\times n}} \\[5pt]
E^{\infty,\Delta} \in \mathcal{S}^n_{+}.
\end{array}
\right.
\end{equation}
Furthermore, the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ is equal to $ \langle E^{\infty,\Delta} x_0 , x_0 \rangle_{\mathbb{R}^{n}}$.
\end{proposition}
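A simple way to approximate the solution of $(\mathrm{SD\text{-}ARE})$ in the scalar autonomous sketch used above is to iterate the map $E \mapsto E - h\,\mathcal{F}(E,h)$ to a fixed point: a stationary point of the $(\mathrm{SD\text{-}DRE})$ recursion satisfies $\mathcal{F}(E,h)=0$. The data are illustrative values, not from the paper.

```python
import math

# Illustrative scalar autonomous data and sampling period (sketch only).
a, b, q, r, h = 0.5, 1.0, 2.0, 1.0, 0.05

def F(E, h):
    """Scalar autonomous map F(E, h) (closed-form integrals)."""
    e1 = (math.exp(a * h) - 1.0) / a
    e2 = (math.exp(2.0 * a * h) - 1.0) / (2.0 * a)
    M = math.exp(a * h) * E * (e1 / h) * b + (q * b / (a * h)) * (e2 - e1)
    N = r + (b * b * q / (a * a * h)) * (e2 - 2.0 * e1 + h) + (b * b / h) * e1 * e1 * E
    G = q * e2 / h + (math.exp(2.0 * a * h) - 1.0) * E / h
    return M * M / N - G

# Fixed-point iteration: once E stops changing, F(E, h) = 0 (SD-ARE).
E = 0.0
for _ in range(5000):
    E -= h * F(E, h)

residual = F(E, h)
```

For these data, the fixed point lies close to the permanent value $E^\infty = 2$, as expected for a small sampling period.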
\begin{remark}
The mathematical content of Proposition~\ref{thmriccperminf} is well known in the literature (see \cite{bressan2007, Kwakernaak, lee1986, sontag1998, trelat2005}). The state feedback control $u^*$ in Proposition~\ref{thmriccperminf} is usually written as
$$ u^*(t) = - R^{-1} B^\top E^\infty x^*(t) \qquad \text{for a.e.}\ t \geq 0 $$
and~$\mathrm{(P\text{-}ARE)}$ is usually written as
\begin{equation*}
\left\lbrace
\begin{array}{l}
E^\infty B R^{-1} B^\top E^\infty - Q - A^\top E^\infty - E^\infty A = 0_{\mathbb{R}^{n\times n}} \\[5pt]
E^\infty \in \mathcal{S}^n_{+}.
\end{array}
\right.
\end{equation*}
As noted in Remark~\ref{remanalog}, our formulation of Proposition~\ref{thmriccperminf}, using the continuous map~$\mathcal{F}$ defined in Section~\ref{secF}, provides a unified presentation of the permanent and sampled-data cases. In Theorem~\ref{thmmain1} hereafter, we are going to prove convergence of $E^{\infty,\Delta}$ to $E^\infty$ when $h=\Vert \Delta \Vert \to 0$.
\end{remark}
\begin{remark}\label{remanalog2}
Similarly to the finite time horizon case (see Remark~\ref{remanalog}), the state feedback control in Proposition~\ref{thmriccsampleinf} and $\mathrm{(SD\text{-}ARE)}$ have various equivalent formulations in the literature (see \cite{bini2014,levis1968,levis1971,melzer1971,middleton1990}) and in most of these references Problem~$(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ is recast as a fully discrete-time LQ optimal control problem with infinite time horizon. In particular the optimizability property for Problem~$(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ is equivalent to the optimizability of the corresponding fully discrete-time problem (see \cite[Theorem~3]{dorato1971} or~\cite[p.~348]{levis1971}). In the present work we will prove that, if~$\mathrm{(H_1)}$ and~$\mathrm{(H_2)}$ are satisfied, then there exists~$\overline{h} > 0$ such that the $h$-optimizability assumption~$\mathrm{(H}_2^h\mathrm{)}$ is satisfied for every $h\in(0,\overline{h}]$ (see Lemma~\ref{lemimportant} below). Moreover, in that context, a uniform bound on the minimal cost of Problem~$(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ (independent of~$h\in(0,\overline{h}]$) is obtained. It plays a key role in the proof of convergence of~$E^{\infty,\Delta}$ to~$E^\infty$ when~$h=\Vert \Delta \Vert\to 0$.
We provide in Appendix~\ref{appthmriccsampleinf} a proof of Proposition~\ref{thmriccsampleinf} based on the $h$-optimizability assumption~$\mathrm{(H}^h_2\mathrm{)}$, by keeping the initial continuous-time formulation of Problem~$(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ as in~\cite{bourdin2017}. This proof is an adaptation to the sampled-data control case of the proof of Proposition~\ref{thmriccperminf} (see \cite[p.153]{bressan2007}, \cite[Theorem~7 p.198]{lee1986} or~\cite[Theorem~4.13]{trelat2005}). Moreover it contains in particular the proof of convergence of $E^{T,\Delta}$ to $E^{\infty,\Delta}$ when~$T \tauo + \infty$.
\end{remark}
\section{Main result}\label{secmain}
Propositions~\ref{thmriccperm}, \ref{thmriccsample}, \ref{thmriccperminf} and~\ref{thmriccsampleinf} in Section~\ref{secprelim} give state feedback optimal controls for permanent and sampled-data LQ problems in finite and infinite time horizons. In each case, the optimal control is expressed in terms of a Riccati matrix: $E^T$, $E^{T,\Delta}$, $E^\infty$ and $E^{\infty,\Delta}$, respectively. Our main result (Theorem~\ref{thmmain1} below) asserts that the following diagram commutes:
\begin{equation*}
\xymatrix@R=2cm@C=4cm {
\mathrm{(SD\text{-}DRE)} \hspace{-5cm} & E^{T,\Delta} \ar[r]^{T \to +\infty} \ar[d]_{\Vert \Delta \Vert \to 0} & E^{\infty,\Delta} \ar[d]^{\Vert \Delta \Vert \to 0} & \hspace{-5cm} \mathrm{(SD\text{-}ARE)} \\
\mathrm{(P\text{-}DRE)} \hspace{-5cm} & E^T \ar[r]_{T \to +\infty} & E^\infty & \hspace{-5cm} \mathrm{(P\text{-}ARE)}
}
\end{equation*}
The precise mathematical meaning of the above convergences is provided in the next theorem, which is the main contribution of the present work. Let us first state the following lemma (proved in Appendix~\ref{applemimportant}).
\begin{lemma}\label{lemimportant}
In the autonomous setting (see Definition~\ref{defautonomous}), under Assumptions~$\mathrm{(H_1)}$ and~$\mathrm{(H_2)}$, there exist~$\overline{h} > 0$ and $\overline{c} \geq 0$ such that, for all $h$-uniform time partitions $\Delta$ of the interval~$[0,+\infty)$, with~$0 < h \leq \overline{h}$, and for every~$x_0 \in \mathbb{R}^n$, there exists a pair $(x,u) \in \mathrm{AC}([0,+\infty),\mathbb{R}^n) \times \mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)$ such that~$\dot{x}(t) = Ax(t) + Bu(t)$ for almost every $t \geq 0$ and~$x(0)=x_0$, satisfying
$$ \int_0^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \leq \overline{c} \langle E^\infty x_0 , x_0 \rangle_{\mathbb{R}^n} < +\infty. $$
\end{lemma}
Not only does Lemma~\ref{lemimportant} assert that, if $\mathrm{(H_1)}$ and~$\mathrm{(H_2)}$ are satisfied, then there exists~$\overline{h} > 0$ such that~$\mathrm{(H}_2^h\mathrm{)}$ is satisfied for every $h\in(0,\overline{h}]$, but it also provides a \textit{uniform} $h$-optimizability for all~$0 < h \leq \overline{h}$ (in the sense that the finite right-hand term is independent of $h$). This uniform bound plays a crucial role in deriving convergence of $E^{\infty,\Delta}$ to $E^\infty$ when $h = \Vert \Delta \Vert \to 0$ (which corresponds to the right arrow of the above diagram and to the fourth item of Theorem~\ref{thmmain1} below). Finally, from the proof of Lemma~\ref{lemimportant} in Appendix~\ref{applemimportant}, note that a lower bound on the threshold~$\overline{h} > 0$ can be expressed in terms of the norms of~$A$, $B$, $Q$, $R$ and~$E^\infty$.
\begin{theorem}[Commutative diagram]\label{thmmain1}
We have the following convergence results:
\begin{enumerate}
\item[\rm{(i)}] \textbf{Left arrow of the diagram:} Given any $T > 0$, we have
$$ \lim\limits_{ \Vert \Delta \Vert \to 0} \ \ \max_{i=0,\ldots,N} \Vert E^T(t_i) - E^{T,\Delta}_i \Vert_{\mathbb{R}^{n\times n}} = 0 $$
for all time partitions $\Delta = \{ t_i \}_{i=0,\ldots,N}$ of the interval~$[0,T]$.
\item[\rm{(ii)}] \textbf{Bottom arrow of the diagram:} Assume that $P=0_{\mathbb{R}^{n\times n}}$ and that we are in the autonomous setting (see Definition~\ref{defautonomous}). Under Assumptions~$\mathrm{(H_1)}$ and $\mathrm{(H_2)}$, we have
$$ \lim\limits_{ T \to +\infty } E^T (t) = E^\infty \qquad \forall t \geq 0. $$
\item[\rm{(iii)}] \textbf{Top arrow of the diagram:} Assume that $P=0_{\mathbb{R}^{n\times n}}$ and that we are in the autonomous setting (see Definition~\ref{defautonomous}). Let $\Delta = \{ t_i \}_{i \in \mathbb{N}} $ be an $h$-uniform time partition of the interval~$[0,+\infty)$. For all~$N \in \mathbb{N}^*$, we denote by~$\Delta_N := \Delta \cap [0,t_N]$ the~$h$-uniform time partition of the interval~$[0,t_N]$. Under Assumptions~$\mathrm{(H_1)}$ and $\mathrm{(H}^h_2\mathrm{)}$, we have
$$ \lim\limits_{ N \to +\infty } E^{t_N , \Delta_N }_i = E^{\infty , \Delta} \qquad \forall i \in \mathbb{N}. $$
\item[\rm{(iv)}] \textbf{Right arrow of the diagram:} In the autonomous setting (see Definition~\ref{defautonomous}), under Assumptions~$\mathrm{(H_1)}$ and $\mathrm{(H_2)}$, we have
$$ \lim\limits_{ h \to 0} E^{\infty,\Delta} = E^\infty $$
for all $h$-uniform time partitions $\Delta = \{ t_i \}_{i \in \mathbb{N}}$ of the interval $[0,+\infty)$ with $0 < h \leq \overline{h}$ (where $\overline{h} > 0$ is given by Lemma~\ref{lemimportant}).
\end{enumerate}
\end{theorem}
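The right arrow of the diagram (item (iv)) can be observed numerically on the scalar autonomous sketch used throughout: solving $(\mathrm{SD\text{-}ARE})$ for a decreasing sequence of sampling periods, the solutions approach $E^\infty$ (here $E^\infty = 2$ for the illustrative data, which are not taken from the paper).

```python
import math

# Illustrative scalar autonomous data (sketch only); E_inf solves P-ARE.
a, b, q, r = 0.5, 1.0, 2.0, 1.0
E_inf = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)  # = 2.0 here

def F(E, h):
    e1 = (math.exp(a * h) - 1.0) / a
    e2 = (math.exp(2.0 * a * h) - 1.0) / (2.0 * a)
    M = math.exp(a * h) * E * (e1 / h) * b + (q * b / (a * h)) * (e2 - e1)
    N = r + (b * b * q / (a * a * h)) * (e2 - 2.0 * e1 + h) + (b * b / h) * e1 * e1 * E
    G = q * e2 / h + (math.exp(2.0 * a * h) - 1.0) * E / h
    return M * M / N - G

def solve_sd_are(h, iters=20000):
    """Fixed-point iteration E <- E - h F(E, h); the limit solves SD-ARE."""
    E = 0.0
    for _ in range(iters):
        E -= h * F(E, h)
    return E

errors = [abs(solve_sd_are(h) - E_inf) for h in (0.2, 0.1, 0.05, 0.025)]
```

The error $|E^{\infty,\Delta} - E^\infty|$ shrinks as the sampling period decreases, as asserted by item (iv).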
\begin{remark}
The proof of Theorem~\ref{thmmain1} is given in Appendix~\ref{appthmmain1}. Some results similar to the four items of Theorem~\ref{thmmain1} have already been discussed and can be found in the literature. For example, in the autonomous case and with $h$-uniform time partitions, the first item of Theorem~\ref{thmmain1} has been proved in~\cite[Corollary~2.3]{astrom1963} (a second-order convergence has even been derived). The second item of Theorem~\ref{thmmain1} is a well known fact and follows from the proof of Proposition~\ref{thmriccperminf} (see \cite[p.~153]{bressan2007}, \cite[Theorem~7]{lee1986} or \cite[Theorem~4.13]{trelat2005}). The third item of Theorem~\ref{thmmain1} follows from the proof of Proposition~\ref{thmriccsampleinf} given in Appendix~\ref{appthmriccsampleinf}, which keeps the initial continuous-time writing of Problem~$(\mathrm{OCP}^{\infty,\Delta}_{x_0})$. As mentioned in Remarks~\ref{remanalog} and~\ref{remanalog2}, in the literature, LQ optimal sampled-data control problems are usually rewritten as fully discrete-time LQ optimal control problems. As a consequence, the result of the third item of Theorem~\ref{thmmain1} is usually reduced in the literature to the corresponding result at the discrete level (see \cite[Theorem~3]{dorato1971} or~\cite[p.~348]{levis1971}). The last item of Theorem~\ref{thmmain1} is proved in Appendix~\ref{appthmmain1} by using the uniform $h$-optimizability obtained in Lemma~\ref{lemimportant}. Note that a sensitivity analysis of $\mathrm{(SD\text{-}ARE)}$ with respect to $h$ has been carried out in~\cite{fukata1979,levis1968,levis1971,melzer1971} by computing its derivative algebraically in view of optimizing the sampling period~$h$. Note that the map~$\mathcal{F}$ defined in Section~\ref{secF} is a suitable candidate in order to invoke the classical implicit function theorem and justify the differentiability of~$E^{\infty,\Delta}$ with respect to~$h$.
Finally, the contribution of the present work is to provide a framework gathering Propositions~\ref{thmriccperm}, \ref{thmriccsample}, \ref{thmriccperminf} and~\ref{thmriccsampleinf} in a unified setting, based on the continuous map~$\mathcal{F}$, which moreover allows us to prove several convergence results for Riccati matrices and to summarize them in a single diagram.
\end{remark}
\appendix
\section{Proofs}\label{app1}
Preliminaries and reminders are gathered in Section~\ref{appprelim}. We prove Proposition~\ref{thmriccsampleinf} in Section~\ref{appthmriccsampleinf}, Lemma~\ref{lemimportant} in Section~\ref{applemimportant} and Theorem~\ref{thmmain1} in Section~\ref{appthmmain1}.
\subsection{Preliminaries}\label{appprelim}
\begin{lemma}[A backward discrete Gr\"onwall lemma]\label{lemgronwall}
Let $N \in \mathbb{N}^*$ and $(w_i)_{i=0,\ldots,N}$, $(z_i)_{i=1,\ldots,N}$ and $(\mu_i)_{i=1,\ldots,N}$ be three finite nonnegative real sequences which satisfy $w_N = 0$ and
$$ w_i \leq (1+\mu_{i+1})w_{i+1} + z_{i+1} \qquad \forall i = 0,\ldots,N-1. $$
Then
$$ w_i \leq \sum_{j=i+1}^N \left( \prod_{q=i+1}^{j-1} (1+\mu_q) \right) z_j \leq \sum_{j=i+1}^N e^{ \sum_{q=i+1}^{j-1} \mu_q } z_j \qquad \forall i=0,\ldots,N-1. $$
\end{lemma}
\begin{proof}
The first inequality follows from a backward induction. The second inequality comes from the inequality $1+\mu \leq e^\mu$ for all $\mu \geq 0$.
\end{proof}
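Both inequalities of the lemma can be checked numerically on arbitrarily chosen sequences; in the sketch below (data chosen for illustration only), the recursion is taken with equality, so the first bound holds with equality and the second follows from $1+\mu \leq e^\mu$.

```python
import math

# Illustrative nonnegative sequences mu_1..mu_N and z_1..z_N
# (index 0 is a dummy so that mu[i] corresponds to mu_i).
N = 20
mu = [0.0] + [0.1 + 0.01 * i for i in range(1, N + 1)]
z = [0.0] + [0.5 / (i + 1) for i in range(1, N + 1)]

# Build w by the recursion taken with equality, with w_N = 0.
w = [0.0] * (N + 1)
for i in range(N - 1, -1, -1):
    w[i] = (1.0 + mu[i + 1]) * w[i + 1] + z[i + 1]

# Check both bounds of the lemma for every i = 0, ..., N-1.
ok = True
for i in range(N):
    s1 = sum(math.prod(1.0 + mu[qq] for qq in range(i + 1, j)) * z[j]
             for j in range(i + 1, N + 1))
    s2 = sum(math.exp(sum(mu[qq] for qq in range(i + 1, j))) * z[j]
             for j in range(i + 1, N + 1))
    ok = ok and (w[i] <= s1 + 1e-9) and (s1 <= s2 + 1e-9)
```

Empty products and sums (the case $j=i+1$) contribute the factor $1$, matching the convention in the statement.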
\begin{lemma}[Some reminders on symmetric matrices]\label{lemmatrice}
Let $p \in \mathbb{N}^*$. The following properties are satisfied:
\begin{enumerate}
\item[\rm{(i)}] Let $E \in \mathcal{S}^p_+$ (resp., $E \in \mathcal{S}^p_{++}$). Then all eigenvalues of $E$ are nonnegative (resp., positive) real numbers.
\item[\rm{(ii)}] Let $E \in \mathcal{S}^p_{+}$. Then $ \rho_{\mathrm{min}}(E) \Vert y \Vert_{\mathbb{R}^p}^2 \leq \langle Ey,y \rangle_{\mathbb{R}^p} \leq \rho_{\mathrm{max}}(E) \Vert y \Vert_{\mathbb{R}^p}^2 $ for all $y \in \mathbb{R}^p$, where $\rho_{{\mathrm{min}}}(E)$ and~$\rho_{{\mathrm{max}}}(E)$ stand respectively for the smallest and the largest nonnegative eigenvalues of $E$.
\item[\rm{(iii)}] Let $E \in \mathcal{S}^p_{++}$. Then $E$ is invertible and $E^{-1} \in \mathcal{S}^p_{++}$. Moreover we have~$\rho_{{\mathrm{min}}}(E^{-1}) = 1/\rho_{\mathrm{max}}(E)$ and~$\rho_{{\mathrm{max}}}(E^{-1}) = 1/\rho_{\mathrm{min}}(E)$.
\item[\rm{(iv)}] Let $E \in \mathcal{S}^p_+$. It holds that $\Vert E \Vert_{\mathbb{R}^{p \times p}} = \rho_{{\mathrm{max}}}(E) $.
\item[\rm{(v)}] Let $E \in \mathcal{S}^p_+$. If there exists $c \geq 0$ such that $\langle E y , y \rangle_{\mathbb{R}^p} \leq c \Vert y \Vert^2_{\mathbb{R}^p}$ for every $y \in \mathbb{R}^p$, then $\Vert E \Vert_{\mathbb{R}^{p \times p}} \leq c$.
\item[\rm{(vi)}] Let $E_1$, $E_2 \in \mathcal{S}^p_+$. If $\langle E_1 y , y \rangle_{\mathbb{R}^p} = \langle E_2 y , y \rangle_{\mathbb{R}^p}$ for every $y \in \mathbb{R}^p$, then $E_1 = E_2$.
\item[\rm{(vii)}] Let $(E_k)_{k \in \mathbb{N}}$ be a sequence of matrices in $ \mathcal{S}^p_+$. If $\langle E_k y , y \rangle_{\mathbb{R}^p}$ converges when $k \to +\infty$ for all~$y \in \mathbb{R}^p$, then~$(E_k)_{k \in \mathbb{N}}$ has a limit $E \in \mathcal{S}^p_+$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first four items are classical results (see, e.g.,~\cite{horn2013}). The fifth item follows from the fourth one. The last two items follow from the following fact: if $E \in \mathcal{S}^p_+$, with $E = (e_{ij})_{i,j=1,\ldots,p}$, then
$$ e_{ij} = \langle E b_j , b_i \rangle_{\mathbb{R}^p} = \dfrac{1}{2} \Big( \langle E (b_i+b_j) , b_i+b_j \rangle_{\mathbb{R}^p} - \langle E b_i , b_i \rangle_{\mathbb{R}^p} - \langle E b_j , b_j \rangle_{\mathbb{R}^p} \Big) \qquad \forall i, j = 1, \ldots,p $$
where $\{ b_i \}_{i=1,\ldots,p}$ stands for the canonical basis of $\mathbb{R}^p$.
\end{proof}
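As a quick numerical sanity check (independent of the proof), the sketch below verifies items (ii) and (iii) of the lemma on an arbitrarily chosen matrix $E \in \mathcal{S}^2_{++}$; the matrix, the test vector and the closed-form $2 \times 2$ eigenvalue formula are illustrative choices, not taken from the text.

```python
import math

# Arbitrary test matrix E in S^2_{++}: the eigenvalues of [[2,1],[1,2]] are 1 and 3.
E = [[2.0, 1.0], [1.0, 2.0]]

def eig_sym_2x2(M):
    """Eigenvalues of a symmetric 2x2 matrix, smallest first."""
    a, b, d = M[0][0], M[0][1], M[1][1]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr - disc) / 2.0, (tr + disc) / 2.0

def quad_form(M, y):
    """Quadratic form <My, y> on R^2."""
    return sum(M[i][j] * y[i] * y[j] for i in range(2) for j in range(2))

rho_min, rho_max = eig_sym_2x2(E)

# Item (ii): rho_min ||y||^2 <= <Ey,y> <= rho_max ||y||^2 for a sample vector y.
y = (1.0, 2.0)
norm2 = y[0] ** 2 + y[1] ** 2
q = quad_form(E, y)

# Item (iii): E^{-1} via the 2x2 inverse formula; its extreme eigenvalues
# must be the reciprocals of those of E, in swapped order.
det = E[0][0] * E[1][1] - E[0][1] * E[1][0]
Einv = [[E[1][1] / det, -E[0][1] / det], [-E[1][0] / det, E[0][0] / det]]
rho_min_inv, rho_max_inv = eig_sym_2x2(Einv)
```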
\begin{lemma}[Properties of the function~$\mathcal{F}$]\label{lemF}
The three following properties are satisfied:
\begin{enumerate}[label=\rm{(\roman*)}]
\item The map $\mathcal{F}$ is well-defined on $\mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+$.
\item The map $\mathcal{F}$ is continuous on $\mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+$.
\item If $\mathcal{K}$ is a compact subset of $\mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+$, then there exists a constant $c \geq 0$ such that
$$ \Vert \mathcal{F}(t,E_2,h) - \mathcal{F}(t,E_1,h) \Vert_{\mathbb{R}^{n\times n}} \leq c \Vert E_2-E_1 \Vert_{\mathbb{R}^{n\times n}} $$
for all $(t,E_1,E_2,h)$ such that $(t,E_1,h) \in \mathcal{K}$ and $(t,E_2,h) \in \mathcal{K}$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\rm (i)} For $(t,E,h) \in \mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+$, note that $\mathcal{N}_1(t,E,h) \in \mathcal{S}^m_{++}$, $\mathcal{N}_2(t,E,h) \in \mathcal{S}^m_+$ and $\mathcal{N}_3(t,E,h) \in \mathcal{S}^m_+$. Hence the sum $\mathcal{N}(t,E,h) $ belongs to~$\mathcal{S}^m_{++}$ and thus is invertible from~(iii) of Lemma~\ref{lemmatrice}.
{\rm (ii)} Since taking the inverse of a matrix is a continuous operation, we only need to prove that $\mathcal{M}$, $\mathcal{N}$ and~$\mathcal{G}$ are continuous over~$\mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+$. Let~$(t_k,E_k,h_k)_{k\in \mathbb{N}}$ be a sequence of $\mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+$ which converges to some~$(t,E,h) \in \mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+$. We need to prove that~$\mathcal{M}(t_k,E_k,h_k)$, $\mathcal{N}(t_k,E_k,h_k)$ and $\mathcal{G}(t_k,E_k,h_k)$ converge respectively to~$\mathcal{M} (t,E,h)$, $\mathcal{N}(t,E,h)$ and $\mathcal{G} (t,E,h)$ when~$k \to +\infty$. The case $h \neq 0$ can be treated using, for instance, the Lebesgue dominated convergence theorem. Let us discuss the case $h=0$ and let us assume, without loss of generality (since~$A$, $B$, $Q$ and $R$ are continuous matrices), that $h_k > 0$ for every $k \in \mathbb{N}$. In that situation we conclude by using in particular the fact that $t$ is a Lebesgue point of all integrands involved in the definitions of the functions~$\mathcal{M}$, $\mathcal{N}$ and $\mathcal{G}$.
{\rm (iii)} It is clear that $\mathcal{F}$ is continuously differentiable over $\mathcal{S}^n_+$ with respect to its second variable. Similarly to the previous item, we can moreover prove that the map $(t,E,h) \mapsto \mathcal{D}_2 \mathcal{F}(t,E,h)$ is continuous over~$\mathbb{R} \times \mathcal{S}^n_+ \times \mathbb{R}_+$. Thus the third item follows by applying the Taylor expansion formula with integral remainder.
\end{proof}
\begin{lemma}[A uniform bound for $E^T$ and $E^{T,\Delta}$]\label{lembound}
Let $T > 0$. We have
$$ \Vert E^T(t) \Vert_{\mathbb{R}^{n\times n}} \leq \Big( \Vert P \Vert_{\mathbb{R}^{n\times n}} + (T-t) \Vert Q_{|[t,T]} \Vert_{\infty} \Big) e^{2 \Vert A_{|[t,T]} \Vert_{\infty} (T-t)} \qquad \forall t \in [0,T]. $$
If $\Delta = \{ t_i \}_{i=0,\ldots,N}$ is a time partition of the interval $[0,T]$, then
$$ \Vert E^{T,\Delta}_i \Vert_{\mathbb{R}^{n\times n}} \leq \Big( \Vert P \Vert_{\mathbb{R}^{n\times n}} + (T-t_i) \Vert Q_{|[t_i,T]} \Vert_{\infty} \Big) e^{2 \Vert A_{|[t_i,T]} \Vert_{\infty} (T-t_i)} \qquad \forall i=0,\ldots,N. $$
\end{lemma}
\begin{proof}
Let us prove the first part of Lemma~\ref{lembound}. We first deal with the case $t=0$. Taking the null control in Problem~$(\mathrm{OCP}^T_y)$ and using the Duhamel formula, we deduce that its minimal cost satisfies
$$ \langle E^T(0)y,y \rangle_{\mathbb{R}^n} \leq \Big( \Vert P \Vert_{\mathbb{R}^{n\times n}} + T \Vert Q_{|[0,T]} \Vert_{\infty} \Big) e^{2T \Vert A_{|[0,T]} \Vert_{\infty} } \Vert y \Vert_{\mathbb{R}^n}^2 \qquad \forall y \in \mathbb{R}^n. $$
The result at $t=0$ then follows from (v) in Lemma~\ref{lemmatrice}. The case $0 < t < T$ can be treated similarly by considering the restriction of Problem~$(\mathrm{OCP}^T_y)$ to the time interval $[t,T]$ (instead of $[0,T]$). Finally the case~$t=T$ is obvious since $E^T(T) = P$. The second part of Lemma~\ref{lembound} is derived in a similar way.
\end{proof}
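The bound of Lemma~\ref{lembound} can be checked directly in the scalar autonomous case against a numerical integration of the backward Riccati equation. The sketch below is purely illustrative: the data $A=B=Q=R=P=1$ (all scalars) and the explicit Euler scheme are arbitrary choices made for the check, not objects from the text.

```python
import math

# Arbitrary scalar data (illustrative assumption): A = B = Q = R = P = 1, T = 1.
A, B, Q, R, P, T = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
N = 10000
h = T / N

E = P                      # terminal condition E(T) = P
E_upper = 1.0 + math.sqrt(2.0)  # positive root of 2AE - B^2 E^2 / R + Q = 0
bound_ok = True
for k in range(N):
    t = T - (k + 1) * h
    # one Euler step of the backward Riccati equation -dE/dt = 2AE - B^2 E^2/R + Q
    E = E + h * (2.0 * A * E - (B * B / R) * E * E + Q)
    # bound of the lemma at time t (scalar norms are absolute values)
    bound = (abs(P) + (T - t) * Q) * math.exp(2.0 * A * (T - t))
    bound_ok = bound_ok and (abs(E) <= bound)
E0 = E  # numerical approximation of E^T(0)
```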
\begin{lemma}[Zero limit of finite cost trajectories at infinite time horizon]\label{leminfzero}
In the autonomous setting (see Definition~\ref{defautonomous}), under Assumption $\mathrm{(H_1)}$, for every $(x,u) \in \mathrm{AC}([0,+\infty),\mathbb{R}^n) \times \mathrm{L}^2 ([0,+\infty),\mathbb{R}^m)$ such that~$\dot{x}(t) = Ax(t) + Bu(t)$ for almost every $t \geq 0$ and satisfying
$$ \int_0^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau < +\infty , $$
we have $ \lim_{t \to +\infty} x(t) = 0_{\mathbb{R}^n} $.
\end{lemma}
\begin{proof}
Since $Q \in \mathcal{S}^n_{++}$, we have $\Vert x(t) \Vert^2_{\mathbb{R}^n} \leq \frac{1}{\rho_{\mathrm{min}} (Q)} \langle Q x(t) , x(t) \rangle_{\mathbb{R}^n}$ for all $t \geq 0$. Using the assumptions we deduce that $x \in \mathrm{L}^2 ([0,+\infty),\mathbb{R}^n)$. Let us introduce $X \in \mathrm{AC}([0,+\infty),\mathbb{R})$ defined by~$X(t) := \Vert x(t) \Vert^2_{\mathbb{R}^n} \geq 0$ for all $t \geq 0$. Since~$\dot{X}(t) = 2 \langle A x(t) + Bu(t), x(t) \rangle_{\mathbb{R}^n} $ for almost every $t \geq 0$, with $x$ and $u$ both square-integrable, we deduce that~$\dot{X} \in \mathrm{L}^1([0,+\infty),\mathbb{R})$ and thus~$X(t)$ admits a limit $\ell \geq 0$ when $t \to +\infty$. By contradiction let us assume that $\ell > 0$. Then there exists $s \geq 0$ such that $X(t) \geq \frac{\ell}{2} > 0$ for all~$t \geq s$. We get that
\begin{multline*}
\int_0^{\overline{t}} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \geq \rho_{\mathrm{min}} (Q) \left( \int_0^{\overline{t}} X(\tau) \; d\tau \right) \\
= \rho_{\mathrm{min}} (Q) \left( \int_0^{s} X(\tau) \; d\tau + \int_s^{\overline{t}} X(\tau) \; d\tau \right)
\geq \rho_{\mathrm{min}} (Q) \left( \int_0^{s} X(\tau) \; d\tau + (\overline{t}-s) \dfrac{\ell}{2} \right) \qquad \forall \overline{t} \geq s.
\end{multline*}
A contradiction is obtained by letting $\overline{t} \to + \infty$.
\end{proof}
\subsection{Proof of Proposition~\ref{thmriccsampleinf}}\label{appthmriccsampleinf}
This proof is inspired by the proof of Proposition~\ref{thmriccperminf} (see \cite[p.153]{bressan2007}, \cite[Theorem~7 p.198]{lee1986} or \cite[Theorem~4.13]{trelat2005}) and is an adaptation to the sampled-data control case. We denote by~$\Delta_N := \Delta \cap [0,t_N]$ the~$h$-uniform time partition of the interval~$[0,t_N]$ for every $N \in \mathbb{N}^*$.
\paragraph{Existence and uniqueness of the optimal solution.}
Let $x_0 \in \mathbb{R}^n$. For every $u \in \mathrm{L}^2([0,+\infty),\mathbb{R}^m)$, we denote by $x(\cdot,u) \in \mathrm{AC}([0,+\infty),\mathbb{R}^n)$ the unique solution to the Cauchy problem
$$ \left\lbrace
\begin{array}{l}
\dot{x}(t) = A x(t) + Bu(t) \qquad \text{for a.e.}\ t \geq 0, \\[5pt]
x(0)= x_0.
\end{array}
\right. $$
We define the cost function
$$ \fonction{\mathcal{C}}{\mathrm{L}^2([0,+\infty),\mathbb{R}^m)}{\mathbb{R} \cup \{ +\infty \} }{u}{\mathcal{C} (u) := \displaystyle \int_0^{+\infty} \Big( \langle Q x(\tau,u) , x(\tau,u) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau. } $$
Problem~$(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ can be recast as $ {\mathrm{min}} \{ \mathcal{C}(u) \mid u \in \mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)\}$. Since $\mathrm{(H}^h_2\mathrm{)}$ is satisfied, we have
$$ \mathcal{C}^* := \inf\{ \mathcal{C}(u) \mid u \in \mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)\} < +\infty. $$
Let us consider a minimizing sequence~$(u_k)_{k \in \mathbb{N}} \subset \mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)$ and, without loss of generality, we assume that $\mathcal{C} (u_k) < +\infty$ for every $k \in \mathbb{N}$. Since $R \in \mathcal{S}^m_{++}$, we deduce that the sequence $(u_k)_{k \in \mathbb{N}}$ is bounded in~$\mathrm{L}^2([0,+\infty),\mathbb{R}^m)$ and thus, up to a subsequence (that we do not relabel), converges weakly to some $u^* \in \mathrm{L}^2([0,+\infty),\mathbb{R}^m)$. Since $\mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)$ is a weakly closed subspace of $\mathrm{L}^2([0,+\infty),\mathbb{R}^m)$, it follows that $u^* \in \mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)$. Moreover, denoting by $x_k := x(\cdot,u_k)$ for every $k \in \mathbb{N}$, the Duhamel formula gives
$$ x_k (t) = e^{tA} x_0 + \int_0^t e^{(t-\tau)A} B u_k(\tau) \; d\tau \qquad \forall t \geq 0 \qquad \forall k \in \mathbb{N}. $$
By weak convergence, the sequence $(x_k)_{k \in \mathbb{N}}$ converges pointwise on $[0,+\infty)$ to
$$ x^* (t) := e^{tA} x_0 + \int_0^t e^{(t-\tau)A} B u^*(\tau) \; d\tau . $$
Then, obviously, $x^* = x(\cdot,u^*)$. Moreover, by Fatou's lemma (see, e.g.,~\cite[Lemma~4.1]{brezis2011}) and by weak convergence, we get that
\begin{multline*}
\mathcal{C}^* = \lim_{k \to + \infty} \mathcal{C}(u_k) = \liminf_{k \to + \infty} \mathcal{C}(u_k)
= \liminf_{k \to +\infty} \int_0^{+\infty}\Big( \langle Q x_k(\tau) , x_k(\tau) \rangle_{\mathbb{R}^n} + \langle R u_k(\tau) , u_k(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \\
\geq \liminf_{k \to +\infty} \int_0^{+\infty} \langle Q x_k(\tau) , x_k(\tau) \rangle_{\mathbb{R}^n} \; d\tau + \liminf_{k \to +\infty} \Vert u_k \Vert^2_{\mathrm{L}^2_R} \\
\geq \int_0^{+\infty} \langle Q x^*(\tau) , x^*(\tau) \rangle_{\mathbb{R}^n} \; d\tau + \Vert u^* \Vert^2_{\mathrm{L}^2_R} = \int_0^{+\infty} \left( \langle Q x^*(\tau) , x^*(\tau) \rangle_{\mathbb{R}^n} + \langle R u^*(\tau) , u^*(\tau) \rangle_{\mathbb{R}^m} \right) d\tau = \mathcal{C}(u^*)
\end{multline*}
where the norm defined by $ \Vert u \Vert_{\mathrm{L}^2_R} := ( \int_0^{+\infty} \langle R u(\tau),u(\tau) \rangle_{\mathbb{R}^m} \; d\tau )^{1/2}$ for every $u \in \mathrm{L}^2([0,+\infty),\mathbb{R}^m)$ is equivalent to the usual one since $R \in \mathcal{S}^m_{++}$. We conclude that $(x^*,u^*)$ is an optimal solution to $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$.
Let us prove uniqueness. Note that $x(\cdot,\lambda u + (1-\lambda) v) = \lambda x(\cdot,u) + (1-\lambda) x(\cdot,v)$ for all $u$, $v \in \mathrm{L}^2([0,+\infty),\mathbb{R}^m)$ and all $\lambda \in [0,1]$. Hence, since moreover $Q \in \mathcal{S}^n_{++}$ and $R \in \mathcal{S}^m_{++}$, the cost function $\mathrm{C}C$ is strictly convex and thus the optimal solution to $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ is unique.
\paragraph{Existence of a solution to $\mathrm{(SD\text{-}ARE)}$.}
Let us introduce the sequence $(D_i )_{i \in \mathbb{N}} \subset \mathbb{R}^{n\times n}$ defined as the solution to the forward matrix induction given by
$$
\left\lbrace
\begin{array}{l}
D_{i+1}-D_i = - h \mathcal{F}(D_{i},h) \qquad \forall i \in \mathbb{N}, \\[5pt]
D_0 = 0_{\mathbb{R}^{n\times n}}.
\end{array}
\right.
$$
Taking $P = 0_{\mathbb{R}^{n\times n}}$, one has $D_i = E^{t_N,\Delta_N}_{N-i}$ for every $i = 0, \ldots,N$ and every $N \in \mathbb{N}^*$. Hence the sequence~$ ( D_i )_{i \in \mathbb{N}} $ is well defined and is in $\mathcal{S}^n_+$.
Our aim now is to prove that the sequence $ ( D_i )_{i \in \mathbb{N}}$ converges. Let $x_0 \in \mathbb{R}^n$. We denote by
$$ M := \int_0^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau < +\infty $$
where $(x,u) \in \mathrm{AC}([0,+\infty),\mathbb{R}^n) \times \mathrm{PC}^\Delta ([0,+\infty),\mathbb{R}^m)$ is the pair provided in $\mathrm{(H}_2^h\mathrm{)}$. Since the minimal cost of~$(\mathrm{OCP}^{t_N,\Delta_N}_{x_0})$ (with $P = 0_{\mathbb{R}^{n\times n}}$) is given by $\langle E^{t_N,\Delta_N}_0 x_0 , x_0 \rangle_{\mathbb{R}^n} = \langle D_N x_0,x_0 \rangle_{\mathbb{R}^n}$ and is increasing with respect to $N$, we deduce that $\langle D_N x_0,x_0 \rangle_{\mathbb{R}^n}$ is increasing with respect to $N$. Since it is also bounded by $M$, we deduce that it converges when $N \to +\infty$. By~(vii) of Lemma~\ref{lemmatrice}, we conclude that the sequence $ ( D_i )_{i \in \mathbb{N}}$ in $\mathcal{S}^n_+$ converges to some $D \in \mathcal{S}^n_+$ which satisfies~$\mathcal{F}(D,h) = 0_{\mathbb{R}^{n\times n}}$ by continuity of $\mathcal{F}$ (see Lemma~\ref{lemF}).
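The convergence of the forward induction can be illustrated numerically in the scalar case. The sketch below is only a sketch under two stated assumptions: the data $A=B=Q=R=1$ are arbitrary scalars, and the sampled-data map $\mathcal{F}(\cdot,h)$ is replaced by its small-$h$ limit, minus the right-hand side of the classical differential Riccati equation, so the iteration becomes the Euler map $D \mapsto D + h(2AD - B^2D^2/R + Q)$. Its fixed point is the positive root $1+\sqrt{2}$ of the corresponding algebraic Riccati equation, and the iterates increase monotonically toward it from $D_0 = 0$, as in the proof.

```python
import math

# Arbitrary scalar data (illustrative assumption): A = B = Q = R = 1.
A, B, Q, R = 1.0, 1.0, 1.0, 1.0
h = 0.001

D = 0.0                      # D_0 = 0
for _ in range(20000):
    # forward induction D_{i+1} = D_i - h * F(D_i, h); for small h the map F
    # is approximated by -(2AD - B^2 D^2 / R + Q), the sign-reversed Riccati field
    D = D + h * (2.0 * A * D - (B * B / R) * D * D + Q)

D_star = 1.0 + math.sqrt(2.0)  # positive root of 2AD - B^2 D^2 / R + Q = 0
```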
\paragraph{Positive definiteness of $D$.}
Let $x_0 \in \mathbb{R}^n \backslash \{ 0 \}$. Since $Q \in \mathcal{S}^n_{++}$, the minimal cost of~$(\mathrm{OCP}^{t_N,\Delta_N}_{x_0})$ (with~$P = 0_{\mathbb{R}^{n\times n}}$) given by $\langle E^{t_N,\Delta_N}_0 x_0 , x_0 \rangle_{\mathbb{R}^n} = \langle D_N x_0 , x_0 \rangle_{\mathbb{R}^n} $ for every~$N \in \mathbb{N}^*$ is positive. Since $\langle D_N x_0 , x_0 \rangle_{\mathbb{R}^n}$ is increasing with respect to $N$ and converges to $\langle D x_0 , x_0 \rangle_{\mathbb{R}^n}$, we deduce that $\langle D x_0 , x_0 \rangle_{\mathbb{R}^n} > 0$ and thus~$D \in \mathcal{S}^n_{++}$.
\paragraph{Lower bound of the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$.}
Our aim in this paragraph is to prove that, if~$Z \in \mathcal{S}^n_+$ satisfies $\mathcal{F}(Z,h) = 0_{\mathbb{R}^{n\times n}}$, then $\langle Z x_0 , x_0 \rangle_{\mathbb{R}^n}$ is a lower bound of the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ for every~$x_0 \in \mathbb{R}^n$.
Let $x_0 \in \mathbb{R}^n$. Let $(x,u) \in \mathrm{AC}([0,+\infty),\mathbb{R}^n) \times \mathrm{PC}^\Delta([0,+\infty),\mathbb{R}^m)$ be a pair such that~$\dot{x}(t) = Ax(t) + Bu(t)$ for almost every $t \geq 0$ and $x(0) = x_0$. Our objective is to prove that
$$ \langle Z x_0 , x_0 \rangle_{\mathbb{R}^n} \leq \int_{0}^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau. $$
If the integral at the right-hand side is infinite, the result is obvious. Let us assume that the integral is finite. By Lemma~\ref{leminfzero}, $x(t)$ tends to $0_{\mathbb{R}^n}$ when $t \to +\infty$. By Proposition~\ref{thmriccsample}, the minimal cost of~$(\mathrm{OCP}^{t_N,\Delta_N}_{x_0})$ with~$P = Z$ is given by~$\langle E^{t_N,\Delta_N}_0 x_0 , x_0 \rangle_{\mathbb{R}^n}$ for every $N \in \mathbb{N}^*$. Since~$E^{t_N,\Delta_N}_N = Z$ and~$\mathcal{F}(Z,h) = 0_{\mathbb{R}^{n\times n}}$, from the backward matrix induction, we get that $E^{t_N,\Delta_N}_i = Z$ for every $i = 0, \ldots ,N$ and every $N \in \mathbb{N}^*$. In particular the minimal cost of $(\mathrm{OCP}^{t_N,\Delta_N}_{x_0})$ with $P = Z$ is given by $\langle Z x_0 , x_0 \rangle_{\mathbb{R}^n}$ for every $N \in \mathbb{N}^*$. Hence
$$ \langle Z x_0 , x_0 \rangle_{\mathbb{R}^n} \leq \langle Z x(t_N),x(t_N) \rangle_{\mathbb{R}^n} + \int_0^{t_N} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau. $$
Taking the limit $N \to +\infty$, the proof is complete.
\paragraph{Upper bound of the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$.}
Our aim in this paragraph is to prove that, if~$Z \in \mathcal{S}^n_+$ satisfies $\mathcal{F}(Z,h) = 0_{\mathbb{R}^{n\times n}}$, then $\langle Z x_0 , x_0 \rangle_{\mathbb{R}^n}$ is an upper bound of the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ for every~$x_0 \in \mathbb{R}^n$. Denote by $\mathcal{M} := \mathcal{M} (Z,h)$, $\mathcal{N} := \mathcal{N}(Z,h)$ and $\mathcal{G} := \mathcal{G}(Z,h)$. We similarly use the notations~$\mathcal{M}_i$, $\mathcal{N}_i$ and~$\mathcal{G}_i$ for $i=1,2,3$ (see Section~\ref{secF} for details).
Let $x_0 \in \mathbb{R}^n$. Let $x \in \mathrm{AC}([0,+\infty),\mathbb{R}^n)$ be the unique solution to
$$
\left\lbrace
\begin{array}{l}
\dot{x}(t) = A x(t) - B \mathcal{N}^{-1} \mathcal{M}^\top x(t_i) \qquad \text{for a.e.}\ t \in [t_i,t_{i+1}) \qquad \forall i \in \mathbb{N} \\[5pt]
x(0) = x_0,
\end{array}
\right.
$$
and let $u \in \mathrm{PC}^\Delta([0,+\infty),\mathbb{R}^m)$ be defined by $u_i := - \mathcal{N}^{-1} \mathcal{M}^\top x(t_i)$ for every $i \in \mathbb{N}$. In particular~$\dot{x}(t) = Ax(t) + Bu(t)$ for almost every $t \geq 0$ and $x(0) = x_0$.
By the Duhamel formula, we have $x(t) = (\alpha_i(t) - \beta_i(t)) x(t_i)$ for all $t \in [t_i,t_{i+1})$ and every $i \in \mathbb{N}$, where
$$ \alpha_i (t) := e^{(t-t_i)A} \quad \text{and} \quad \beta_i (t) := \left( \int_{0}^{t-t_i} e^{\xi A} \; d\xi \right) B \mathcal{N}^{-1} \mathcal{M}^\top \qquad \forall t \in [t_i,t_{i+1}) \qquad \forall i \in \mathbb{N}. $$
Using the above expressions of $\alpha_i$ and $\beta_i$, and after some computations, we get that
$$ \int_{t_i}^{t_{i+1}}\Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau = h \langle W_1 x(t_i) , x(t_i) \rangle_{\mathbb{R}^n} \qquad \forall i \in \mathbb{N} $$
where $ W_1 := \mathcal{G}_1 + \mathcal{M} \mathcal{N}^{-1} \mathcal{N}_2 \mathcal{N}^{-1} \mathcal{M}^\top - 2 \mathcal{M}_2 \mathcal{N}^{-1} \mathcal{M}^\top + \mathcal{M} \mathcal{N}^{-1} \mathcal{N}_1 \mathcal{N}^{-1} \mathcal{M}^\top $. On the other hand, using again the above expressions of $\alpha_i$ and $\beta_i$, we compute
$$ \langle Z x(t_i),x(t_i) \rangle_{\mathbb{R}^n} - \langle Z x(t_{i+1}),x(t_{i+1}) \rangle_{\mathbb{R}^n} = h \langle W_2 x(t_i),x(t_i) \rangle_{\mathbb{R}^n} \qquad \forall i \in \mathbb{N} $$
where $W_2 := - \mathcal{G}_2 + 2 \mathcal{M}_1 \mathcal{N}^{-1} \mathcal{M}^\top - \mathcal{M} \mathcal{N}^{-1} \mathcal{N}_3 \mathcal{N}^{-1} \mathcal{M}^\top$.
Using that~$\mathcal{F}(Z,h) = \mathcal{M} \mathcal{N}^{-1} \mathcal{M}^\top - \mathcal{G} = 0_{\mathbb{R}^{n\times n}}$, we obtain $W_2 - W_1 = 0_{\mathbb{R}^{n\times n}}$ and thus $W_2 = W_1$. We deduce that
$$ \int_{t_i}^{t_{i+1}} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau = \langle Z x(t_i),x(t_i) \rangle_{\mathbb{R}^n} - \langle Z x(t_{i+1}),x(t_{i+1}) \rangle_{\mathbb{R}^n} \qquad \forall i \in \mathbb{N}. $$
Summing these equalities and using that $Z \in \mathcal{S}^n_+$, we get
$$ \int_{0}^{t_{N}}\Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau = \langle Z x_0 , x_0 \rangle_{\mathbb{R}^n} - \langle Z x(t_N) ,x(t_N) \rangle_{\mathbb{R}^n} \leq \langle Z x_0 , x_0 \rangle_{\mathbb{R}^n} \qquad \forall N \in \mathbb{N}^*. $$
Passing to the limit $N \to +\infty$, we finally obtain
$$ \int_{0}^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \leq \langle Z x_0 , x_0 \rangle_{\mathbb{R}^n}. $$
We deduce that $\langle Z x_0 , x_0 \rangle_{\mathbb{R}^n}$ is an upper bound of the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ for every $x_0 \in \mathbb{R}^n$.
\paragraph{Minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ and state feedback control.}
Let $x_0 \in \mathbb{R}^n$. By the previous paragraphs, since $D \in \mathcal{S}^n_{++} \subset \mathcal{S}^n_+$ satisfies $\mathcal{F}(D,h) = 0_{\mathbb{R}^{n\times n}}$, the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ is equal to $\langle D x_0 , x_0 \rangle_{\mathbb{R}^n}$. Moreover, by the previous paragraph, denoting by $x \in \mathrm{AC}([0,+\infty),\mathbb{R}^n)$ the unique solution to
$$
\left\lbrace
\begin{array}{l}
\dot{x}(t) = A x(t) - B \mathcal{N}(D,h)^{-1} \mathcal{M}(D,h)^\top x(t_i) \qquad \text{for a.e.}\ t \in [t_i,t_{i+1}) \qquad \forall i \in \mathbb{N} \\[5pt]
x(0) = x_0,
\end{array}
\right.
$$
and by $u \in \mathrm{PC}^\Delta([0,+\infty),\mathbb{R}^m)$ the control defined by $u_i := - \mathcal{N}(D,h)^{-1} \mathcal{M}(D,h)^\top x(t_i)$ for every $i \in \mathbb{N}$, we get that $\dot{x}(t) = Ax(t) + Bu(t)$ for almost every $t \geq 0$ and $x(0) = x_0$, and
$$ \int_{0}^{+\infty} \Big( \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} + \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \Big) \; d\tau \leq \langle D x_0 , x_0 \rangle_{\mathbb{R}^n}. $$
Since $\langle D x_0 , x_0 \rangle_{\mathbb{R}^n}$ is the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$, the above inequality is actually an equality. By uniqueness of the optimal solution~$(x^*,u^*)$, we get that $(x,u) = (x^*,u^*)$ and thus the optimal sampled-data control~$u^*$ is given by $u^*_i = - \mathcal{N}(D,h)^{-1} \mathcal{M}(D,h)^\top x^*(t_i)$ for every $i \in \mathbb{N}$.
\paragraph{Uniqueness of the solution to $\mathrm{(SD\text{-}ARE)}$.}
Assume that there exist $Z_1$, $Z_2 \in \mathcal{S}^n_+$ satisfying $\mathcal{F}(Z_1,h) = \mathcal{F}(Z_2,h) = 0_{\mathbb{R}^{n\times n}}$. By the previous paragraphs, the minimal cost of $(\mathrm{OCP}^{\infty,\Delta}_{x_0})$ is equal to $\langle Z_1 x_0 , x_0 \rangle_{\mathbb{R}^n} = \langle Z_2 x_0 , x_0 \rangle_{\mathbb{R}^n} $ for every $x_0 \in \mathbb{R}^n$. By (vi) of Lemma~\ref{lemmatrice}, we conclude that $Z_1 = Z_2$.
\paragraph{End of the proof.}
Defining $E^{\infty,\Delta} := D \in \mathcal{S}^n_{++}$, the proof of Proposition~\ref{thmriccsampleinf} is complete.
\subsection{Proof of Lemma~\ref{lemimportant}}\label{applemimportant}
This proof is inspired by the techniques developed in~\cite{nesic1999} for preserving the stabilizing property of controls of nonlinear systems under sampling. We set $W := BR^{-1}B^\top E^\infty \in \mathbb{R}^{n\times n}$ where $E^\infty$ is given by Proposition~\ref{thmriccperminf}. Note that $E^\infty W \in \mathcal{S}^n_+$. Using $\mathrm{(P\text{-}ARE)}$, we obtain
$$ 2 \langle E^\infty y , (A-W)y \rangle_{\mathbb{R}^n} = - \langle Q y , y \rangle_{\mathbb{R}^n} - \langle E^\infty W y, y \rangle_{\mathbb{R}^n} \leq -\rho_{\mathrm{min}} (Q) \Vert y \Vert^2_{\mathbb{R}^n} \qquad \forall y \in \mathbb{R}^n $$
where $ \rho_{\mathrm{min}} (Q) > 0 $ since $Q \in \mathcal{S}^n_{++}$. Let $\overline{h} > 0$ be such that
$$ h \Vert A - W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}} < 1 \qquad \text{and} \qquad 2 \rho_{\mathrm{max}}(E^\infty W) \dfrac{h \Vert A-W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}}}{1 - h \Vert A-W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}} } \leq \dfrac{\rho_{\mathrm{min}} (Q)}{2} $$
for every $h\in(0,\overline{h}]$.
Now, let $x_0 \in \mathbb{R}^n$ and let $\Delta = \{ t_i \}_{i \in \mathbb{N}}$ be an $h$-uniform time partition of the interval $[0,+\infty)$ satisfying~$h\in(0,\overline{h}]$. Let~$x \in \mathrm{AC}([0,+\infty),\mathbb{R}^n)$ be the unique solution to
$$
\left\lbrace
\begin{array}{l}
\dot{x}(t) = A x(t) - W x(t_i) \qquad \text{for a.e.}\ t \in [t_i,t_{i+1}) \qquad \forall i \in \mathbb{N} \\[5pt]
x(0) = x_0,
\end{array}
\right.
$$
and let $u \in \mathrm{PC}^\Delta([0,+\infty),\mathbb{R}^m)$ be defined by $u_i := - R^{-1} B^\tauop E^\infty x(t_i)$ for every $i \in \mathbb{N}$. In particular~$\dot{x}(t) = Ax(t) + Bu(t)$ for almost every $t \geq 0$ and $x(0) = x_0$.
On the one hand, we have
\begin{multline*}
\Vert x(t) - x(t_i) \Vert_{\mathbb{R}^n} = \left\Vert \int_{t_i}^t \Big( Ax(\tau) - W x(t_i) \Big) \; d\tau \right\Vert_{\mathbb{R}^n} = \left\Vert \int_{t_i}^t \Big( A(x(\tau)-x(t_i)) + (A- W) x(t_i) \Big) \; d\tau \right\Vert_{\mathbb{R}^n} \\
\leq h \Vert A - W \Vert_{\mathbb{R}^{n\times n}} \Vert x(t_i) \Vert_{\mathbb{R}^n} + \Vert A \Vert_{\mathbb{R}^{n\times n}} \int_{t_i}^t \Vert x(\tau) - x(t_i) \Vert_{\mathbb{R}^n} \; d\tau
\end{multline*}
and, by the Gr\"onwall lemma (see \cite[Appendix~C.3]{sontag1998}), we get that
$$ \Vert x(t) - x(t_i) \Vert_{\mathbb{R}^n} \leq h \Vert A - W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}} \Vert x(t_i) \Vert_{\mathbb{R}^n} \qquad \forall t \in [t_i,t_{i+1}) \qquad \forall i \in \mathbb{N}. $$
Since $\Vert x(t_i) \Vert_{\mathbb{R}^{n}} \leq \Vert x(t)-x(t_i) \Vert_{\mathbb{R}^{n}} + \Vert x(t) \Vert_{\mathbb{R}^{n}}$ and $ h \Vert A - W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}} < 1 $, we get that
$$ \Vert x(t_i) \Vert_{\mathbb{R}^{n}} \leq \dfrac{1}{1-h \Vert A - W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}}} \Vert x(t) \Vert_{\mathbb{R}^{n}}, \qquad \forall t \in [t_i,t_{i+1}) \qquad \forall i \in \mathbb{N} $$
and thus
$$ \Vert x(t) - x(t_i) \Vert_{\mathbb{R}^{n}} \leq \dfrac{h \Vert A - W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}}}{1-h \Vert A - W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}}} \Vert x(t) \Vert_{\mathbb{R}^{n}}, \qquad \forall t \in [t_i,t_{i+1}) \qquad \forall i \in \mathbb{N}. $$
On the other hand, we have
\begin{multline*}
\dfrac{d}{dt} \langle E^\infty x(t),x(t) \rangle_{\mathbb{R}^n} = 2 \langle E^\infty x(t) , \dot{x}(t) \rangle_{\mathbb{R}^n}
= 2 \langle E^\infty x(t) , A x(t) - W x(t_i) \rangle_{\mathbb{R}^n} \\
= 2 \langle E^\infty x(t) , (A-W) x(t) \rangle_{\mathbb{R}^n} + 2 \langle E^\infty x(t) , W (x(t)-x(t_i)) \rangle_{\mathbb{R}^n} \qquad \text{for a.e.}\ t \in [t_i,t_{i+1}) \qquad \forall i \in \mathbb{N}.
\end{multline*}
We deduce that
\begin{multline*}
\dfrac{d}{dt} \langle E^\infty x(t),x(t) \rangle_{\mathbb{R}^n}
\leq \left( - \rho_{\mathrm{min}} (Q) + 2 \rho_{\mathrm{max}}(E^\infty W) \dfrac{h \Vert A-W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}}}{1 - h \Vert A-W \Vert_{\mathbb{R}^{n\times n}} e^{h \Vert A \Vert_{\mathbb{R}^{n\times n}}} } \right) \Vert x(t) \Vert^2_{\mathbb{R}^n} \\
\leq - \dfrac{\rho_{\mathrm{min}} (Q)}{2} \Vert x(t) \Vert^2_{\mathbb{R}^n} \leq - \dfrac{\rho_{\mathrm{min}} (Q)}{2 \rho_{\mathrm{max}}(E^\infty) } \langle E^\infty x(t) , x(t) \rangle_{\mathbb{R}^n} \qquad \text{for a.e.}\ t \geq 0.
\end{multline*}
We deduce from the Gr\"onwall lemma that
$$ \Vert x(t) \Vert^2_{\mathbb{R}^n} \leq \dfrac{1}{\rho_{\mathrm{min}} (E^\infty)} \langle E^\infty x(t) , x(t) \rangle_{\mathbb{R}^n} \leq \dfrac{1}{\rho_{\mathrm{min}} (E^\infty)} \langle E^\infty x_0 , x_0 \rangle_{\mathbb{R}^n} e^{- \frac{\rho_{\mathrm{min}} (Q)}{2 \rho_{\mathrm{max}}(E^\infty) } t} \qquad \forall t \geq 0. $$
We deduce that
\begin{multline*}
\int_0^{+\infty} \langle Q x(\tau) , x(\tau) \rangle_{\mathbb{R}^n} \; d\tau \leq \dfrac{\rho_{\mathrm{max}}(Q)}{\rho_{\mathrm{min}} (E^\infty)} \langle E^\infty x_0 , x_0 \rangle_{\mathbb{R}^n} \int_0^{+\infty} e^{- \frac{\rho_{\mathrm{min}} (Q)}{2 \rho_{\mathrm{max}}(E^\infty) } \tau} \; d\tau \\
= \dfrac{2 \rho_{\mathrm{max}}(Q) \rho_{\mathrm{max}}(E^\infty)}{\rho_{\mathrm{min}} (Q) \rho_{\mathrm{min}} (E^\infty)} \langle E^\infty x_0 , x_0 \rangle_{\mathbb{R}^n} < +\infty.
\end{multline*}
Moreover, using that $t_i = ih$ for every $i \in \mathbb{N}$, we have
\begin{multline*}
\int_0^{+\infty} \langle R u(\tau) , u(\tau) \rangle_{\mathbb{R}^m} \; d\tau \leq h \rho_{\mathrm{max}}(R) \sum_{i \in \mathbb{N}} \Vert u_i \Vert^2_{\mathbb{R}^m} \leq h \rho_{\mathrm{max}}(R) \Vert R^{-1} B^\top E^\infty \Vert^2_{\mathbb{R}^{m \times n}} \sum_{i \in \mathbb{N}} \Vert x(t_i) \Vert^2_{\mathbb{R}^n} \\
\leq h \dfrac{\rho_{\mathrm{max}}(R)}{\rho_{\mathrm{min}} (E^\infty)} \Vert R^{-1} B^\top E^\infty \Vert^2_{\mathbb{R}^{m \times n}} \langle E^\infty x_0 , x_0 \rangle_{\mathbb{R}^n} \sum_{i \in \mathbb{N}} \Big( e^{- \frac{\rho_{\mathrm{min}} (Q)}{2 \rho_{\mathrm{max}}(E^\infty) } h} \Big)^i \\
= h \dfrac{\rho_{\mathrm{max}}(R)}{\rho_{\mathrm{min}} (E^\infty)} \Vert R^{-1} B^\top E^\infty \Vert^2_{\mathbb{R}^{m \times n}} \langle E^\infty x_0 , x_0 \rangle_{\mathbb{R}^n} \dfrac{1}{1 - e^{- \frac{\rho_{\mathrm{min}} (Q)}{2 \rho_{\mathrm{max}}(E^\infty) } h}} \\
\leq \dfrac{2 \rho_{\mathrm{max}}(R) \rho_{\mathrm{max}}(E^\infty)}{\rho_{\mathrm{min}} ( Q ) \rho_{\mathrm{min}} (E^\infty)} \Vert R^{-1} B^\top E^\infty \Vert^2_{\mathbb{R}^{m \times n}} \langle E^\infty x_0 , x_0 \rangle_{\mathbb{R}^n} e^{\frac{\rho_{\mathrm{min}} (Q)}{2 \rho_{\mathrm{max}}(E^\infty) } \overline{h}} < +\infty.
\end{multline*}
Taking
$$ \overline{c} := \dfrac{2 \rho_{\mathrm{max}}(E^\infty)}{\rho_{\mathrm{min}} ( Q ) \rho_{\mathrm{min}} (E^\infty)} \Big( \rho_{\mathrm{max}} ( Q ) +\rho_{\mathrm{max}} ( R ) \Vert R^{-1} B^\top E^\infty \Vert^2_{\mathbb{R}^{m \times n}} e^{\frac{\rho_{\mathrm{min}} (Q)}{2 \rho_{\mathrm{max}}(E^\infty) } \overline{h}} \Big) \geq 0 ,$$
the proof is complete.
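The stabilization mechanism behind this lemma is easy to visualize numerically in the scalar case. The sketch below is an illustration under assumed data: with the arbitrary scalars $A=B=Q=R=1$, the permanent Riccati solution is $E^\infty = 1+\sqrt{2}$ in closed form, $W = BR^{-1}B^\top E^\infty$, and the zero-order-hold dynamics $\dot{x} = Ax - Wx(t_i)$ integrate exactly to a per-step multiplier whose modulus is below one for a small step $h$, so the sampled trajectory decays geometrically.

```python
import math

# Arbitrary scalar data (illustrative assumption): A = B = Q = R = 1.
A, B, Q, R = 1.0, 1.0, 1.0, 1.0
E_inf = 1.0 + math.sqrt(2.0)          # positive root of 2AE - B^2 E^2 / R + Q = 0
W = B * (1.0 / R) * B * E_inf         # W = B R^{-1} B^T E^inf

h = 0.1
# On [t_i, t_{i+1}) the dynamics x' = A x - W x(t_i) integrate exactly to
# x(t_{i+1}) = m * x(t_i), with the step multiplier below (A != 0 assumed).
m = math.exp(A * h) * (1.0 - W / A) + W / A

x = 1.0
traj_decreasing = True
for _ in range(100):
    x_next = m * x
    traj_decreasing = traj_decreasing and abs(x_next) < abs(x)
    x = x_next
```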
\subsection{Proof of Theorem~\ref{thmmain1}}\label{appthmmain1}
\paragraph*{First item.}
This proof is inspired by the classical Lax theorem in numerical analysis (see \cite[p.73]{polyanin2018}). Let~$\varepsilon > 0$. We define the map
$$ \fonction{\varphi}{[0,T] \times [0,T]}{\mathbb{R}^{n\times n}}{(t,h)}{\varphi(t,h) := \mathcal{F}(t,E^T(t),h).} $$
By continuity of $E^T$ on $[0,T]$ and by Lemma~\ref{lemF}, the map $\varphi$ is uniformly continuous on the compact set~$[0,T] \times [0,T]$. Hence there exists $\delta > 0$ such that
$$ \Vert \varphi(t_2,h_2) - \varphi(t_1,h_1) \Vert_{\mathbb{R}^{n\times n}} \leq \dfrac{\varepsilon}{2Te^{cT}} $$
for all $(t_1,h_1)$, $ (t_2,h_2) \in [0,T] \times [0,T]$ satisfying $\vert t_2-t_1 \vert + \vert h_2 - h_1 \vert \leq \delta$, where $c \geq 0$ is the constant given in Lemma~\ref{lemF} associated with the compact set $\mathcal{K} := [0,T] \times \mathrm{K} \times [0,T]$ where
$$ \mathrm{K} := \left\lbrace E \in \mathcal{S}^n_+ \mid \Vert E \Vert_{\mathbb{R}^{n\tauimes n}} \leq \Big( \Vert P \Vert_{\mathbb{R}^{n\tauimes n}} + T \Vert Q_{|[0,T]} \Vert_{\infty} \Big) e^{2T \Vert A_{|[0,T]} \Vert_{\infty} } \right\rbrace . $$
In the sequel we consider a time partition $\Delta = \{ t_i \}_{i=0,\ldots,N}$ of the interval~$[0,T]$ such that $0 < \Vert \Delta \Vert \leq \delta$. Note that
\begin{multline*}
E^{T,\Delta}_i = E^{T,\Delta}_{i+1} - h_{i+1} \mathcal{F} (t_{i+1},E^{T,\Delta}_{i+1},h_{i+1}) \\
\text{and} \quad E^T(t_i) = E^T(t_{i+1}) - h_{i+1} \mathcal{F}(t_{i+1},E^T(t_{i+1}),h_{i+1}) +\eta_{i+1} \qquad \forall i=0,\ldots,N-1
\end{multline*}
where
$$ \eta_{i+1} := E^T(t_i)-E^T(t_{i+1}) + h_{i+1} \mathcal{F} (t_{i+1},E^T(t_{i+1}),h_{i+1}) \qquad \forall i=0,\ldots,N-1 .$$
By Lemmas~\ref{lemF} and~\ref{lembound}, we have
$$ \Vert E^T(t_i)-E^{T,\Delta}_i \Vert_{\mathbb{R}^{n\times n}} \leq (1+c h_{i+1} ) \Vert E^T(t_{i+1})-E^{T,\Delta}_{i+1} \Vert_{\mathbb{R}^{n\times n}} + \Vert \eta_{i+1} \Vert_{\mathbb{R}^{n\times n}} \qquad \forall i=0,\ldots,N-1 .$$
It follows from the backward discrete Gr\"onwall lemma (see Lemma~\ref{lemgronwall}) that
$$ \Vert E^T(t_i)-E^{T,\Delta}_i \Vert_{\mathbb{R}^{n\times n}} \leq \sum_{j=i+1}^N e^{ c \sum_{q=i+1}^{j-1} h_q } \Vert \eta_{j} \Vert_{\mathbb{R}^{n\times n}} \leq e^{cT} \sum_{j=1}^N \Vert \eta_{j} \Vert_{\mathbb{R}^{n\times n}} \qquad \forall i=0,\ldots,N-1 . $$
Since
\begin{multline*}
\eta_{j} = h_j \Big( \mathcal{F}(t_j,E^T(t_j),h_j) - \mathcal{F}(t_j,E^T(t_j),0) \Big)
+ \int_{t_{j-1}}^{t_j} \left( \mathcal{F}(t_j,E^T(t_j),0) - \mathcal{F}(\tau,E^T(\tau),0) \right) d\tau \\
= h_j \Big( \varphi(t_j,h_j) - \varphi(t_j,0) \Big) + \int_{t_{j-1}}^{t_j} \left( \varphi(t_j,0) - \varphi(\tau,0) \right) d\tau \qquad \forall j=1,\ldots,N
\end{multline*}
we obtain, by uniform continuity of $\varphi$ and using that $0 < \Vert \Delta \Vert \leq \delta$,
$$ \Vert \eta_{j} \Vert_{\mathbb{R}^{n\times n}} \leq 2 h_j \dfrac{\varepsilon }{2Te^{cT}} = h_j \dfrac{\varepsilon }{Te^{cT}} \qquad \forall j=1,\ldots,N. $$
We conclude that
$$ \Vert E^T(t_i)-E^{T,\Delta}_i \Vert_{\mathbb{R}^{n\times n}} \leq e^{cT} \sum_{j=1}^N \Vert \eta_{j} \Vert_{\mathbb{R}^{n\times n}} \leq e^{cT} \sum_{j=1}^N h_j \dfrac{\varepsilon }{Te^{cT}} = \dfrac{\varepsilon}{T} \sum_{j=1}^N h_j = \varepsilon \qquad \forall i=0,\ldots,N-1. $$
The proof is complete.
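The convergence just proved can be illustrated numerically in a minimal scalar setting. The sketch below is ours, not part of the paper: the quadratic right-hand side is an assumed stand-in for the Riccati map $\mathcal{F}(t,E,0)$ (coefficients $a=b=q=r=1$ are illustrative only), and the scheme marches $E_i = E_{i+1} - h\,\mathcal{F}(t_{i+1},E_{i+1},h_{i+1})$ on a uniform mesh.

```python
# A minimal scalar illustration of the backward scheme analyzed above; the
# quadratic right-hand side is an assumed stand-in for the Riccati map F(t,E,0).
def f(E, a=1.0, b=1.0, q=1.0, r=1.0):
    """dE/dt for the scalar differential Riccati equation with E(T) = P."""
    return -(2.0 * a * E + q - (b * b / r) * E * E)

def backward_scheme(N, T=1.0, P=1.0):
    """E_i = E_{i+1} - h * f(E_{i+1}): march from t_N = T down to t_0 = 0."""
    h = T / N
    E = P
    for _ in range(N):
        E = E - h * f(E)
    return E

# The approximation of E(0) settles down as the mesh is refined, mirroring
# the uniform convergence statement proved above.
E_ref = backward_scheme(1 << 16)
assert abs(backward_scheme(200) - E_ref) < abs(backward_scheme(100) - E_ref)
```

Halving the mesh size roughly halves the error at $t=0$, consistent with the first-order local truncation bound on $\eta_j$ used in the proof.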
\paragraph*{Second item.}
The second item of Theorem~\ref{thmmain1} is well known and follows from the proof of Proposition~\ref{thmriccperminf} (see \cite[p.153]{bressan2007}, \cite[Theorem~7]{lee1986} or \cite[Theorem~4.13]{trelat2005}).
\paragraph*{Third item.}
This result follows from the proof of Proposition~\ref{thmriccsampleinf}. Indeed, using the notations from Appendix~\ref{appthmriccsampleinf}, it is clear that
$$ \lim\limits_{N \to + \infty} E^{t_N,\Delta_N}_i = \lim\limits_{N \to + \infty} D_{N-i} = D = E^{\infty,\Delta} \qquad \forall i \in \mathbb{N}. $$
\paragraph*{Fourth item.}
By contradiction let us assume that $E^{\infty,\Delta}$ does not converge to $E^\infty$ when $h \to 0$. Then there exist $\varepsilon > 0$ and a positive sequence $(h_k)_{k \in \mathbb{N}}$ converging to $0$ such that $\Vert E^{\infty,\Delta_k} - E^\infty \Vert_{\mathbb{R}^{n\times n}} \geq \varepsilon$ for every $k \in \mathbb{N}$, where $\Delta_k$ stands for the $h_k$-uniform time partition of the interval $[0,+\infty)$. Without loss of generality, we assume that $0 < h_k \leq \overline{h}$ for every $k \in \mathbb{N}$. It follows from Proposition~\ref{thmriccsampleinf} and from Lemma~\ref{lemimportant} that the minimal cost of $(\mathrm{OCP}^{\infty,\Delta_k}_{x_0})$ satisfies
$$ \langle E^{\infty,\Delta_k} x_0 , x_0 \rangle_{\mathbb{R}^{n}} \leq \overline{c} \langle E^{\infty} x_0 , x_0 \rangle_{\mathbb{R}^{n}} \leq \overline{c} \Vert E^\infty \Vert_{\mathbb{R}^{n\times n} } \Vert x_0 \Vert_{\mathbb{R}^n}^2 \qquad \forall x_0 \in \mathbb{R}^n. $$
Hence $\Vert E^{\infty,\Delta_k} \Vert_{\mathbb{R}^{n\times n}} \leq \overline{c} \Vert E^{\infty} \Vert_{\mathbb{R}^{n\times n}}$ for every $k \in \mathbb{N}$ by (v) of Lemma~\ref{lemmatrice}. Thus the sequence~$(E^{\infty,\Delta_k})_{k \in \mathbb{N}}$ is bounded in $\mathbb{R}^{n\times n}$ and, up to a subsequence (that we do not relabel), converges to some~$L \in \mathbb{R}^{n\times n}$. In particular $\Vert L - E^\infty \Vert_{\mathbb{R}^{n\times n}} \geq \varepsilon$. Since $E^{\infty,\Delta_k} \in \mathcal{S}^n_{++} \subset \mathcal{S}^n_+$ for every~$k \in \mathbb{N}$, it is clear that~$L \in \mathcal{S}^n_+$. Moreover, by $\mathrm{(SD\text{-}ARE)}$ associated with $h_k$ (see Proposition~\ref{thmriccsampleinf}), we know that~$\mathcal{F}(E^{\infty,\Delta_k},h_k) = 0_{\mathbb{R}^{n\times n}}$ for all $k \in \mathbb{N}$. By continuity of $\mathcal{F}$ (see Lemma~\ref{lemF}), we conclude that~$\mathcal{F}(L,0) = 0_{\mathbb{R}^{n\times n}}$. By uniqueness (see Proposition~\ref{thmriccperminf}) we deduce that $L = E^\infty$, contradicting the inequality $\Vert L - E^\infty \Vert_{\mathbb{R}^{n\times n}} \geq \varepsilon$. The proof is complete.
\end{document}
\begin{document}
\title{\textbf{Independent screening for single-index hazard rate \\ models with ultra-high dimensional features}}
\renewcommand{\abstractname}{Summary}
\begin{abstract}
\noindent In data sets with many more features than observations, independent
screening based on all univariate regression models leads to a
computationally convenient variable selection method. Recent efforts
have shown that in the case of generalized linear models,
independent screening may suffice to capture all relevant features
with high probability, even in ultra-high dimension. It is unclear
whether this formal sure screening property is attainable when the
response is a right-censored survival time. We propose a
computationally very efficient independent screening method for
survival data which can be viewed as the natural survival equivalent
of correlation screening. We state conditions under which the method
admits the sure screening property within a general class of
single-index hazard rate models with ultra-high dimensional
features. An iterative variant is also described which combines
screening with penalized regression in order to handle more complex
feature covariance structures. The methods are evaluated through
simulation studies and through application to a real gene expression
data~set.
\end{abstract}
\section{Introduction}
\label{sec:intro}
With the increasing proliferation of biomarker
studies, there is a need for efficient methods for relating a
survival time response to a large number of features. In typical
genetic microarray studies, the sample size $n$ is measured in
hundreds whereas the number of features $p$ per sample can be in
excess of millions. Sparse regression techniques such as lasso
\citep{tibshirani97:_lasso_cox} and SCAD \citep{fan01:_variab} have
proved useful for dealing with such high-dimensional features but
their usefulness diminishes when $p$ becomes extremely large compared
to $n$. The notion of NP-dimensionality
\citep{fan09:_non_concav_penal_likel_np_dimen} has been conceived to
describe such ultra-high dimensional settings which are formally
analyzed in an asymptotic regime where $p$ grows at a non-polynomial
rate with~$n$. Despite recent progress \citep{bradic11:_penal},
theoretical knowledge about sparse regression techniques under
NP-dimensionality is still in its infancy. Moreover, NP-dimensionality
poses substantial computational challenges. When for example pairwise
interactions among gene expressions in a genetic microarray study are
of interest, the dimension of the feature space will trouble even the
most efficient algorithms for fitting sparse regression models. A
popular ad hoc solution is to simply pretend that feature correlations
are ignorable and resort to computationally swift univariate
regression methods; so-called independent screening methods.
In an important paper, \cite{fan08:_sure} laid the formal foundation
for using independent screening to distinguish `relevant' features
from `irrelevant' ones. For the linear regression model they showed
that, when the design is close to orthogonal, a superset of the true
set of nonzero regression coefficients can be estimated consistently
by simple hard-thresholding of feature-response correlations. This
sure independent screening (SIS) property of correlation screening is
a rather trivial one, if not for the fact that it holds true in the
asymptotic regime of NP-dimensionality. Thus, when the feature
covariance structure is sufficiently simple, SIS methods can overcome
the noise accumulation in extremely high dimension. In order to
accommodate more complex feature covariance structures
\cite{fan08:_sure} and \cite{fan09:_ultrah_dimen_featur_selec}
developed heuristic, iterated methods combining independent screening
with forward selection techniques. Recently, \cite{fan10:_sure_np}
extended the formal basis for SIS to generalized linear models.
In biomedical applications, the response of interest is often a
right-censored survival time, making the study of screening methods
for survival data an important one. \cite{fan10:_borrow_stren}
investigated SIS methods for the Cox proportional hazards model based
on ranking features according to the univariate partial log-likelihood
but gave no formal
justification. \cite{tibshirani09:_univar_shrin_cox_model_high_dimen_data}
suggested soft-thresholding of univariate Cox score statistics with
some theoretical justification but under strong assumptions. Indeed,
independent screening methods for survival data are apt to be
difficult to justify theoretically due to the presence of censoring
which can confound marginal associations between the response and the
features. Recent work by \cite{zhao10:_princ_sure_indep_screen_cox}
contains ideas which indicate that independent screening based on the
Cox model may have the SIS property in the absence of censoring.
In the present paper, we depart from the standard approach of studying
SIS as a rather specific type of model misspecification in which the
univariate versions of a particular regression model are used to infer
the structure of the joint version of the same particular regression
model. Instead, we propose a survival variant of independent screening
based on a model-free statistic which we call the `Feature Aberration
at Survival Times' (FAST) statistic. The FAST statistic is a simple
linear statistic which aggregates across survival times the aberration
of each feature relative to its time-varying average. Independent
screening based on this statistic can be regarded as a natural
survival equivalent of correlation screening. We study the SIS
property of FAST screening in ultra-high dimension for a general class
of single-index hazard rate regression models in which the risk of an
event depends on the features through some linear functional. A key
aim has been to derive simple and operational sufficient conditions
for the SIS property to hold. Accordingly, our main result states that
the FAST statistic has the SIS property in an ultra-high dimensional
setting under covariance assumptions as in
\cite{fan09:_ultrah_dimen_featur_selec}, provided that censoring is
essentially random and that features satisfy a technical condition
which holds when they follow an elliptically contoured distribution.
Utilizing the fact that the FAST statistic is related to the
univariate regression coefficients in the semiparametric additive
hazards model (\cite{lin94:_semip_analy}; \cite{mckeague94}), we
develop methods for iterated SIS. The techniques are evaluated in a
simulation study where we also compare with screening methods for the
Cox model \citep{fan10:_borrow_stren}. Finally, an application to a
real genetic microarray data set is presented.
\section{The FAST statistic and its motivation}
\label{sec:classical}
Let $T$ be a survival time which is subject to right-censoring by some
random variable $C$. Denote by $N(t):=\mathrm{1}(T\land C \leq t \land T \leq
C)$ the counting process which counts events up to time $t$, let
$Y(t):=\mathrm{1}(T \land C \geq t)$ be the at-risk process, and let $\vek{Z}
\in \mathbb{R}^p$ denote a random vector of explanatory variables or
features. It is assumed throughout that $\vek{Z}$ has finite variance
and is standardized, i.e.~centered and with a covariance matrix
$\vekg{\Sigma}$ with unit diagonal. We observe $n$ independent and
identically distributed (i.i.d.) replicates $\{(N_i(t),Y_i(t),\vek{Z}_i)\,:\,0
\leq t \leq \tau\}$, $i=1,\ldots,n$, where $[0,\tau]$ is the
observation time window.
Define the `Feature Aberration at Survival Times' (FAST) statistic as
follows:
\begin{equation}
\label{eq:dev-at-event-defin}
\vek{d} := n^{-1} \int_0^\tau \sum_{i=1}^n \{\vek{Z}_i-\bar{\vek{Z}}(t)\} \mathrm{d} N_i(t);
\end{equation}
where $\bar{\vek{Z}}$ is the at-risk-average of the $\vek{Z}_i$s,
\begin{displaymath}
\bar{\vek{Z}}(t):=\frac{\sum_{i=1}^n \vek{Z}_iY_i(t)}{\sum_{i=1}^n Y_i(t)}.
\end{displaymath}
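For concreteness, $\vek{d}$ can be computed directly from right-censored data $(X_i,\Delta_i,\vek{Z}_i)$ with $X_i = T_i \land C_i$. The sketch below is ours (Python with NumPy assumed; the function name is not from the paper): it loops over the observed event times and recomputes the at-risk average $\bar{\vek{Z}}$ at each one.

```python
import numpy as np

def fast_statistic(time, status, Z):
    """FAST statistic: d = n^{-1} * sum over observed events of Z_i - Zbar(T_i),
    where Zbar(t) averages Z over subjects still at risk at time t.
    `time` holds X_i = min(T_i, C_i); `status` is 1 for an event, 0 if censored."""
    time = np.asarray(time, dtype=float)
    status = np.asarray(status)
    Z = np.asarray(Z, dtype=float)
    n = len(time)
    d = np.zeros(Z.shape[1])
    for i in np.flatnonzero(status == 1):
        at_risk = time >= time[i]            # Y_k(T_i) = 1(X_k >= T_i)
        d += Z[i] - Z[at_risk].mean(axis=0)  # feature aberration at this event
    return d / n
```

The cost is one pass over the events per feature block, which is what makes independent screening with this statistic computationally cheap even for very large $p$.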
Components of the FAST statistic define basic measures of the
marginal association between each feature and survival. In the following,
we provide two motivations for using the FAST statistic for screening
purposes. The first, being model-based, is perhaps the most intuitive
-- the second shows that, even in a model-free setting, the FAST
statistic may provide valuable information about marginal
associations.
\subsection{A model-based interpretation of the FAST statistic}
\label{sec:mod-based-inter}
Assume in this section that the $T_i$s have hazard
functions of the form
\begin{equation}
\label{eq:addriskmod}
\lambda_j(t)= \lambda_0(t)+\vek{Z}_j^\top \vekg{\alpha}^0; \qquad j=1,2,\ldots,n;
\end{equation}
with $\lambda_0$ an unspecified baseline hazard rate and
$\vekg{\alpha}^0 \in \mathbb{R}^p$ a vector of regression
coefficients. This is the so-called semiparametric additive hazards
model (\cite{lin94:_semip_analy}; \cite{mckeague94}), henceforth
simply the Lin-Ying model. The Lin-Ying model corresponds to assuming
for each $N_j$ an intensity function of the form
$Y_j(t)\{\lambda_0(t)+\vek{Z}_j^\top \vekg{\alpha}^0 \}$. From the
Doob-Meyer decomposition $\mathrm{d} N_j(t)= \mathrm{d} M_j(t) +
Y_j(t)\{\lambda_0(t)+\vek{Z}_j^\top \vekg\alpha^0 \} \mathrm{d} t$ with $M_j$
a martingale, it is easily verified that
\begin{equation}
\label{eq:decomp-of-ahaz}
\sum_{i=1}^n\{\vek{Z}_i-\bar{\vek{Z}}(t)\}\mathrm{d} N_i(t) = \Big[\sum_{i=1}^n\{\vek{Z}_i-\bar{\vek{Z}}(t)\}^{\otimes 2}Y_i(t) \mathrm{d} t\Big]\vekg{\alpha}^0 + \sum_{i=1}^n\{\vek{Z}_i-\bar{\vek{Z}}(t)\} \mathrm{d} M_i(t), \quad t \in [0,\tau].
\end{equation}
This suggests that $\vekg\alpha^0$ is estimable as the solution to the
$p \times p$ linear system of equations
\begin{equation}
\label{eq:linahaz}
\vek{d}=\vek{D}\vekg{\alpha};
\end{equation}
where
\begin{equation}
\label{eq:def-of-smalldn-bigdn}
\vek{d}:=n^{-1}\sum_{i=1}^n \int_0^\tau \{ \vek{Z}_i-\bar{\vek{Z}}(t)\} \mathrm{d} N_i(t), \quad \textrm{and }\vek{D}:=n^{-1}\sum_{i=1}^n \int_0^\tau Y_i(t)\{ \vek{Z}_i-\bar{\vek{Z}}(t)\}^{\otimes 2} \mathrm{d} t.
\end{equation}
Suppose $\hat{\vekg\alpha}$ solves \eqref{eq:linahaz}.
Standard martingale arguments \citep{lin94:_semip_analy}
imply root $n$ consistency of~$\hat{\vekg\alpha}$,
\begin{equation}
\label{eq:lin-ying-rootn-consistency}
\sqrt{n}(\hat{\vekg\alpha}-\vekg\alpha^0) \stackrel{d}{\to} \mathrm{N}(0,\vek{D}^{-1} \vek{B} \vek{D}^{-1}), \quad \textrm{ where } \vek{B}=n^{-1}\sum_{i=1}^n\int_0^\tau \{ \vek{Z}_i-\bar{\vek{Z}}(t)\}^{\otimes 2} \mathrm{d} N_i(t).
\end{equation}
For now, simply observe that the left-hand side of \eqref{eq:linahaz}
is exactly the FAST statistic; whereas $d_{j}D_{jj}^{-1}$ for
$j=1,2,\ldots,p$ estimate the regression coefficients in the
corresponding $p$ univariate Lin-Ying models. Hence we can
interpret $\vek{d}$ as a (scaled) estimator of the univariate regression
coefficients in a working Lin-Ying~model.
A nice heuristic interpretation of $\vek{d}$ results from the
pointwise signal/error decomposition~\eqref{eq:decomp-of-ahaz} which
is essentially a reformulated linear regression model $\vek{X}^\top
\vek{X} \vekg{\alpha}^0 + \vek{X}^\top \vekg{\varepsilon}=\vek{X}^\top
\vek{y}$ with `responses' $y_j:=\mathrm{d} N_j(t)$ and `explanatory variables'
$\vek{X}_j:=\{\vek{Z}_j-\bar{\vek{Z}}(t)\}Y_j(t)$. The FAST statistic is given by the time average of $\E\{\vek{X}^\top
\vek{y}\}$ and may accordingly be viewed as a survival
equivalent of the usual predictor-response correlations.
\subsection{A model-free interpretation of the FAST statistic}
For a feature to be judged (marginally) associated with survival in
any reasonable interpretation of survival data, one would first
require that the feature is correlated with the probability of
experiencing an event -- second, that this correlation persists
throughout the time window. The FAST statistic can be shown to reflect
these two requirements when the censoring mechanism is sufficiently
simple.
Specifically, assume administrative censoring at time $\tau$ (so that
$C_1 \equiv \tau$). Set $V(t):=\mathrm{Var}\{F(t|\vek{Z}_1)\}^{1/2}$ where
$F(t| \vek{Z}_1):=P(T_1 \leq t|\vek{Z}_1)$ denotes the conditional probability of death
before time~$t$. For each $j$, denote by
$\delta_j$ the population version of $d_{j}$ (the in-probability
limit of $d_{j}$ when $n \to \infty$). Then
\begin{align*}
\delta_j &= \E \Big(\int_0^\tau \Big[Z_{1j}-\frac{\E\{Z_{1j}Y_1(t)\} }{\E\{Y_1(t)\}} \Big]\,\mathrm{d} \mathrm{1}(T_1 \leq t \land \tau)\Big) \\
&=\E\{Z_{1j} F(\tau|\vek{Z}_1)\} - \int_0^\tau \frac{\E\{Z_{1j}Y_1(t)\} }{\E\{Y_1(t)\}} \E\{\mathrm{d} F(t|\vek{Z}_1)\}\\
&= V(\tau)\mathrm{Cor}\{Z_{1j},F(\tau|\vek{Z}_1)\}+\int_0^\tau \mathrm{Cor}\{Z_{1j},F(t|\vek{Z}_1)\} \frac{V(t)}{\E\{Y_1(t)\}} \E\{\mathrm{d} F(t|\vek{Z}_1)\}.
\end{align*}
We can make the following observations:
\begin{enumerate}
\item[(i)] If $\mathrm{Cor}\{Z_{1j},F(t|\vek{Z}_1)\}$ has
constant sign throughout $[0,\tau]$, then $|\delta_{j}|
\geq |V(\tau)\mathrm{Cor}\{Z_{1j},F(\tau|\vek{Z}_1)\}|$.
\item[(ii)] Conversely, if $\mathrm{Cor}\{Z_{1j},F(t|\vek{Z}_1)\}$ changes sign, so
that the direction of association with $F(t|\vek{Z}_1)$ is not persistent
throughout $[0,\tau]$, then this will lead to a smaller value of
$|\delta_{j}|$ compared to (i).
\item[(iii)] Lastly, if $\mathrm{Cor}\{Z_{1j},F(t|\vek{Z}_1)\} \equiv 0$, then $\delta_{j}=0$.
\end{enumerate}
In other words, the sample version $d_{j}$ estimates a time-averaged summary
of the correlation function $t \mapsto \mathrm{Cor}\{Z_{1j},F(t|\vek{Z}_1)\}$
which takes into account both magnitude and persistent behavior
throughout $[0,\tau]$. This indicates that the FAST statistic is
relevant for judging marginal association of features with survival
beyond the model-specific setting of
Section~\ref{sec:mod-based-inter}.
\section{Independent screening with the FAST statistic}
In this section, we extend the heuristic arguments of the
previous section and provide theoretical justification for using the
FAST statistic to screen for relevant features when the data-generating model belongs to a class of single-index hazard
rate regression models.
\subsection{The general case of single-index hazard rate models}
In the notation of Section \ref{sec:classical}, we assume survival times $T_j$ to have hazard rate functions of single-index form:
\begin{equation}
\label{eq:model-for-haz}
\lambda_j(t) = \lambda(t,\vek{Z}_j^\top \vekg{\alpha}^0), \quad j=1,2,\ldots,n.
\end{equation}
Here $\lambda \colon [0,\infty) \times \mathbb{R} \to [0,\infty)$ is a
continuous function, $\vek{Z}_1,\ldots,\vek{Z}_n$ are random vectors
in $\mathbb{R}^{p_n}$, $\vekg \alpha^0 \in \mathbb{R}^{p_n}$ is a
vector of regression coefficients, and $\vek{Z}_j^\top
\vekg{\alpha}^0$ defines a risk score. We subscript $p$ by $n$ to
indicate that the dimension of the feature space can grow with the
sample size. Censoring will always be assumed at least independent so
that $C_j$ is independent of $T_j$ conditionally on $\vek{Z}_j$. We
impose the following assumption on the hazard `link
function'~$\lambda$:
\begin{assumption}
\label{assumption:monotonicity}
The survival function $\exp\{-\int_0^t \lambda(s,\,\cdot\,) \mathrm{d} s\}$ is strictly monotonic for each $t \geq 0$.
\end{assumption}
\noindent Requiring the survival function to depend monotonically on $\vek{Z}_j^\top
\vekg{\alpha}^0$ is natural in order to enable interpretation of the
components of $\vekg{\alpha}^0$ as indicative of positive or negative
association with survival. Note that it suffices that
$\lambda(t,\,\cdot\,)$ is strictly monotonic for each $t \geq
0$. Assumption \ref{assumption:monotonicity} holds for a range of
popular survival regression models. For example,
$\lambda(t,x):=\lambda_0(t)+x$ with $\lambda_0$ some baseline hazard
yields the Lin-Ying model~\eqref{eq:addriskmod};
$\lambda(t,x):=\lambda_0(t)\mathrm{e}^x$ is a Cox model; and
$\lambda(t,x):=\mathrm{e}^{x}\lambda_0(t\mathrm{e}^{x})$ is an accelerated failure
time model.
Denote by $\vekg\delta$ the population version of the FAST statistic
under the model~\eqref{eq:model-for-haz} which, by the Doob-Meyer
decomposition $\mathrm{d} N_1(t)= \mathrm{d} M_1(t) + Y_1(t)\lambda(t,\vek{Z}_1^\top
\vekg\alpha^0)\mathrm{d} t$ with $M_1$ a martingale, takes the form
\begin{equation}
\label{eq:cp-decom}
\vekg{\delta} = \E\Big[\int_0^\tau \{\vek{Z}_1-\vek{e}(t)\}Y_1(t) \lambda(t,\vek{Z}_1^\top \vekg\alpha^0) \mathrm{d} t\Big]; \qquad \textrm{where } \vek{e}(t):= \frac{\E\{\vek{Z}_1 Y_1(t)\}}{\E\{Y_1(t)\}}.
\end{equation}
Our proposed FAST screening procedure is as follows: given some (data-dependent) threshold $\gamma_n>0$,
\begin{enumerate}
\item[(i).] calculate the FAST statistic $\vek{d}$ from the available data and
\item[(ii).] declare the `relevant
features' to be the set $\{1 \leq j \leq p_n\,:\, |d_{j}|>\gamma_n\}$.
\end{enumerate}
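In code, step (ii) amounts to a single hard-thresholding pass over the statistic computed in step (i); a minimal Python sketch (NumPy assumed, names ours):

```python
import numpy as np

def fast_screen(d, gamma):
    """Step (ii): declare relevant the features {j : |d_j| > gamma}.
    Indices are 0-based here, unlike the 1-based notation in the text."""
    return set(np.flatnonzero(np.abs(d) > gamma).tolist())
```

The whole procedure is embarrassingly parallel across features, which is the practical appeal of independent screening in ultra-high dimension.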
By the arguments in Section \ref{sec:classical}, this procedure
defines a natural survival equivalent of correlation screening. Define the following sets of features:
\begin{align*}
\widehat{\mc{M}}_{d}^n&:= \{1 \leq j \leq p_n \,:\, |d_{j}| > \gamma_n \}, \\
\mc{M}^n &:= \{1 \leq j \leq p_n\,:\, \alpha_j^0 \neq 0\}, \\
\mc{M}_{\delta}^n&:= \{1 \leq j \leq p_n \,:\, \delta_{j} \neq 0\}.
\end{align*}
The problem of establishing the SIS property of FAST screening amounts
to determining when $\mc{M}^n \subseteq
\widehat{\mc{M}}_{d}^n$ holds with large probability for
large $n$. This translates into two questions: first,
when do we have $\mc{M}_\delta^n \subseteq
\widehat{\mc{M}}_d^n$; second, when do we have $\mc{M}^n
\subseteq \mc{M}_\delta^n$? The first question is essentially
model-independent and requires establishing an exponential bound for
$n^{1/2}|d_{j}-\delta_{j}|$ as $n\to \infty$. The second question is
strongly model-dependent and is answered by manipulating expectations
under the single-index model~(\ref{eq:model-for-haz}).
We state the main results here and relegate proofs to the appendix where we also state various regularity conditions. The
following principal assumptions, however, deserve separate attention:
\begin{assumption}
\label{assumption:lr}
There exists $\vek{c} \in \mathbb{R}^{p_n}$ such that $\E(\vek{Z}_1|\vek{Z}_1^\top \vekg{\alpha}^0)=\vek{c}\vek{Z}_1^\top \vekg\alpha^0$.
\end{assumption}
\begin{assumption}
\label{assumption:ran-cens}
The censoring time $C_1$ depends on $T_1,\vek{Z}_1$ only through $Z_{1j}$, $j \notin \mc{M}^n$.
\end{assumption}
\begin{assumption}
\label{assumption:po}
$Z_{1j}$, $j \in \mc{M}^n$
is independent of $Z_{1j}$, $j \notin \mc{M}^n$.
\end{assumption}
Assumption \ref{assumption:lr} is a `linear regression' property which holds true for Gaussian features and, more
generally, for features following an elliptically contoured
distribution \citep{hardin82}. In view of \cite{hall93}
which states that most low dimensional projections of high dimensional
features are close to linear, Assumption~\ref{assumption:lr} may not
be unreasonable a priori even for general feature
distributions when $p_n$ is large.
Assumption \ref{assumption:ran-cens} restricts the censoring mechanism
to be partially random in the sense of depending only on irrelevant
features. As we will discuss in detail below, such rather strong
restrictions on the censoring distribution seem necessary for
obtaining the SIS property; Assumption \ref{assumption:ran-cens} is
both general and convenient.
Assumption \ref{assumption:po} is the partial orthogonality condition
also used by \cite{fan10:_sure_np}. Under this assumption and Assumption \ref{assumption:ran-cens}, it follows
from~\eqref{eq:cp-decom} that $\delta_{j}=0$ whenever $j \notin
\mc{M}^n$, implying $\mc{M}_{\delta}^n \subseteq
\mc{M}^n$. Provided that we also have $\delta_{j} \neq 0$ for $j \in
\mc{M}^n$ (that is, $\mc{M}^n \subseteq
\mc{M}_\delta^n$), there exists a threshold $\zeta_n>0$ such
that
\begin{displaymath}
\min_{j \in \mc{M}^n}|\delta_{j}| \geq \zeta_n \qquad \textrm{and} \qquad \max_{j \notin \mc{M}^n}|\delta_{j}|=0.
\end{displaymath}
Consequently, Assumptions
\ref{assumption:ran-cens}-\ref{assumption:po} are needed to enable
consistent model selection via independent screening. Although model
selection consistency is not essential in order to capture just some
superset of the relevant features via independent screening, it is pertinent in order to limit
the size of such a superset.
The following theorem on FAST screening (FAST-SIS) is our
main theoretical result. It states that the screening property
$\mc{M}^n \subseteq \widehat{\mc{M}}_d^n$ may hold with
large probability even when $p_n$ grows exponentially fast in a
certain power of $n$ which depends on the tail behavior of
features. The covariance condition in the theorem is analogous to that of \cite{fan10:_sure_np} for SIS in generalized linear models
with Gaussian features.
\begin{theorem}
\label{thm:mainthm}
Suppose that Assumptions \ref{assumption:monotonicity}-\ref{assumption:ran-cens} hold alongside the regularity conditions
of the appendix and that $P(|Z_{1j}|>s) \leq l_0\exp(-l_1 s^\eta)$
for some positive constants $l_0,l_1,\eta$ and sufficiently large $s$. Suppose
moreover that for some $c_1>0$ and $\kappa<1/2$,
\begin{equation}
\label{eq:covariance-in-mainthm}
|\mathrm{Cov}(Z_{1j},\vek{Z}_1^\top \vekg{\alpha}^0)| \geq c_1n^{-\kappa}, \quad j \in \mc{M}^n.
\mathrm{e}nd{equation}
Then $\mc{M}^n \subseteq \mc{M}^n_\delta$. Suppose in
addition that $\gamma_n=c_2 n^{-\kappa}$ for some constant $0<c_2
\leq c_1/2$ and that $\log
p_n=o\{n^{(1-2\kappa)\eta/(\eta+2)}\}$. Then the SIS property
holds, $P(\mc{M}^n \subseteq \widehat{\mc{M}}_d^n)
\to 1$ when $n \to \infty$.
\end{theorem}
Observe that with bounded features, we may take $\eta=\infty$ and
handle dimension of order $\log p_n=o(n^{1-2\kappa})$.
We may dispense with Assumption~\ref{assumption:lr} on the feature distribution by
revising~\eqref{eq:covariance-in-mainthm}. By
Lemma~\ref{lemma:cum-haz-decom} in the appendix, taking
$\tilde{e}_j(t):= \E\{Z_{1j} P(T_1 \geq t|\vek{Z}_1)\}/\E\{P(T_1 \geq
t|\vek{Z}_1)\}$, it holds generally under Assumption \ref{assumption:ran-cens}~that
\begin{displaymath}
\delta_j= \E\{\tilde{e}_j(T_1 \land C_1 \land \tau)\}, \quad j \in \mc{M}^n.
\end{displaymath}
Accordingly, if we replace \eqref{eq:covariance-in-mainthm} with the
assumption that $\E|Z_{1j} P(T_1 \geq t|\vek{Z}_1)| \geq c_1n^{-\kappa}$
uniformly in $t$ for $j \in \mc{M}^n$, the conclusions of Theorem
\ref{thm:mainthm} still hold. In other words, we can generally expect FAST-SIS to
detect features which are `correlated with the chance of
survival', much in line with
Section~\ref{sec:classical}. While this is valuable structural
insight, the covariance assumption \eqref{eq:covariance-in-mainthm} seems
a more operational condition.
Assumption \ref{assumption:ran-cens} is crucial to the proof of
Theorem~\ref{thm:mainthm} and to the general idea of translating a
model-based feature selection problem into a problem of
hard-thresholding $\vekg{\delta}$. A weaker assumption is not possible
in general. For example, suppose that only Assumption~\ref{assumption:lr}
holds and that the censoring time also follows some single-index model of the form
\eqref{eq:model-for-haz} with regression coefficients
$\vekg\beta^0$. Applying Lemma~2.1~of \cite{cheng94:_adjus} to~\eqref{eq:cp-decom}, there exist finite constants $\zeta_1,\zeta_2$
(depending on $n$) such that
\begin{equation}
\label{eq:effect-of-cens}
\vekg\delta = \vekg\Sigma(\zeta_1 \vekg\alpha^0+\zeta_2\vekg\beta^0).
\end{equation}
It follows that unrestricted censoring will generally confound the relationship
between $\vekg\delta$ and $\vekg\Sigma\vekg\alpha^0$,
hence~$\vekg\alpha^0$. The precise impact of such unrestricted censoring seems difficult to discern,
although \eqref{eq:effect-of-cens} suggests that FAST-SIS may still be
able to capture the underlying model (unless $\zeta_1
\vekg\alpha^0+\zeta_2\vekg\beta^0$ is particularly ill-behaved). We
will have more to say about unrestricted censoring in the next
section.
Theorem~\ref{thm:mainthm} shows that FAST-SIS can consistently
capture a superset of the relevant features. A priori, this superset
can be quite large; indeed, `perfect' screening would result by simply
including all features. For FAST-SIS to be useful, it must
substantially reduce feature space dimension. Below we
state a survival analogue of Theorem~5 in \cite{fan10:_sure_np}, providing
an asymptotic rate on the FAST-SIS model size.
\begin{theorem}
\label{thm:bound-on-sel-var}
Suppose that Assumptions \ref{assumption:monotonicity}-\ref{assumption:po} hold alongside
the regularity conditions of the appendix and that $P(|Z_{1j}|>s)
\leq l_0\exp(-l_1 s^\eta)$ for positive constants $l_0,l_1,\eta$ and sufficiently
large $s$. If $\gamma_n=c_4n^{-2\kappa}$ for some $\kappa< 1/2$ and
$c_4>0$, there exists a positive constant $c_5$ such~that
\begin{displaymath}
P[|\widehat{\mc{M}}^n_d| \leq O\{n^{2\kappa}\lambda_\mathrm{max}(\vekg{\Sigma})\}] \geq 1-O(p_n\exp\{-c_5n^{(1-2\kappa)\eta/(\eta+2)}\});
\end{displaymath}
where $\lambda_{\mathrm{max}}(\vekg{\Sigma})$ denotes the maximal
eigenvalue of the covariance matrix $\vekg{\Sigma}$ of the feature
distribution.
\end{theorem}
Informally, the theorem states that, under similar assumptions as in
Theorem~\ref{thm:mainthm} and the partial orthogonality condition
(Assumption \ref{assumption:po}), if features are not too strongly
correlated (as measured by the maximal eigenvalue of the covariance
matrix) and $p_n$ grows sufficiently fast, we can choose a threshold
$\gamma_n$ for hard-thresholding such that the false selection rate
becomes asymptotically negligible.
Our theorems say little about how to actually select the
hard-thresholding parameter $\gamma_n$ in practice. Following
\cite{fan08:_sure} and \cite{fan09:_ultrah_dimen_featur_selec}, we
would typically choose $\gamma_n$ such that $|\widehat{\mc{M}}_d^n|$
is of order $n/\log n$. Devising a general data-adaptive way of
choosing $\gamma_n$ is an open problem; false-selection-based criteria
are briefly mentioned in Section \ref{sec:scaling-fast}.
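One concrete reading of this rule of thumb (our illustration, not a prescription from the text) keeps the $\lceil n/\log n\rceil$ components of $\vek{d}$ that are largest in absolute value:

```python
import numpy as np

def fast_screen_topk(d, n):
    """Retain the ceil(n / log n) features with the largest |d_j| (0-based)."""
    k = int(np.ceil(n / np.log(n)))
    order = np.argsort(-np.abs(np.asarray(d)))  # descending |d_j|
    return sorted(order[:k].tolist())
```

Ranking by $|d_j|$ rather than thresholding at a fixed $\gamma_n$ sidesteps the choice of threshold at the price of fixing the screened model size in advance.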
\subsection{The special case of the Aalen model}
Additional insight into the impact of censoring on FAST-SIS is
possible within the more restrictive context of the nonparametric
Aalen model with Gaussian features (\cite{aalen80};
\cite{aalen89}). This particular model asserts a hazard rate function
for $T_i$ of the form
\begin{equation}
\label{eq:def-of-aalenmodel}
\lambda_j(t) = \lambda_0(t)+\vek{Z}_j^\top \vekg\alpha^0(t), \quad j=1,2,\ldots,n;
\end{equation}
for some baseline hazard rate function $\lambda_0$ and $\vekg{\alpha}^0$ a vector of continuous regression coefficient functions. The Aalen model extends the
Lin-Ying model of Section \ref{sec:classical} by allowing time-varying
regression coefficients. Alternatively, it can be viewed as defining
an expansion to the first order of a general hazard rate function in
the class~\eqref{eq:model-for-haz} in the sense that
\begin{equation}
\label{eq:aalen-expansion}
\lambda\big(t,\vek{Z}_1^\top \vekg\alpha^0\big) \approx \lambda(t,0)+\vek{Z}_1^\top \vekg\alpha^0\frac{\partial \lambda(t,x)}{\partial x}\Big|_{x=0}.
\end{equation}
For Aalen models with Gaussian features, we have the following analogue to Theorem~\ref{thm:mainthm}.
\begin{theorem}
\label{thm:mainthm-aalen}
Suppose that Assumptions
\ref{assumption:monotonicity}-\ref{assumption:lr} hold alongside the
regularity conditions of the appendix. Suppose moreover that
$\vek{Z}_1$ is mean-zero Gaussian and that $T_1$ follows a model of
the form~\eqref{eq:def-of-aalenmodel} with regression coefficients
$\vekg{\alpha}^0$. Assume that $C_1$ also follows a model of the form
\eqref{eq:def-of-aalenmodel} conditionally on $\vek{Z}_1$ and that
censoring is independent. Let
$\vek{A}^0(t):=\int_0^t \vekg{\alpha}^0(s) \mathrm{d} s$. If for some
$\kappa<1/2$ and $c_1>0$, we have
\begin{equation}
\label{eq:aalen-eq}
|\mathrm{Cov}[Z_{1j},\vek{Z}_1^\top \E\{\vek{A}^0(T_1 \land C_1 \land \tau)\}]| \geq c_1n^{-\kappa}, \quad j \in \mc{M}^n,
\end{equation}
then the conclusions of Theorem~\ref{thm:mainthm} hold with $\eta=2$.
\end{theorem}
In view of \eqref{eq:aalen-expansion}, Theorem~\ref{thm:mainthm-aalen}
can be viewed as establishing, within the model class
\eqref{eq:model-for-haz}, conditions for first-order validity of
FAST-SIS under a general (independent) censoring mechanism and Gaussian
features. The expectation term in \eqref{eq:aalen-eq} is essentially
the `expected regression coefficients at the exit time', which depends
strongly on censoring through the symmetric roles of the survival and
censoring times.
In fact, general independent censoring is a nuisance even in the
Lin-Ying model which would otherwise seem the `natural model' in which
to use FAST-SIS. Specifically, assuming only independent censoring,
suppose that $T_1$ follows a Lin-Ying model with regression
coefficients $\vekg\alpha^0$ conditionally on $\vek{Z}_1$ and that
$C_1$ also follows some Lin-Ying model conditionally on $\vek{Z}_1$. If
$\vek{Z}_1 = \vekg{\Sigma}^{1/2}\tilde{\vek{Z}}_1$ where the
components of $\tilde{\vek{Z}}_1$ are i.i.d. with mean zero and unit
variance, there exists a $p_n \times p_n$ diagonal matrix $\vek{C}$
such that
\begin{equation}
\label{eq:single-index-w-censoring}
\vekg{\delta} = \vekg\Sigma^{1/2} \vek{C} \vekg{\Sigma}^{1/2} \vekg\alpha^0.
\end{equation}
See Lemma \ref{prop:aalen-screen} in the appendix. It holds that
$\vek{C}$ has constant diagonal iff features are Gaussian; otherwise
the diagonal is nonconstant and depends nontrivially on the regression
coefficients of the censoring model. A curious implication is
that, under Gaussian features, FAST screening has the
SIS property for this `double' Lin-Ying model irrespective of the
(independent) censoring mechanism. Conversely, sufficient conditions
for a SIS property to hold here under more general feature
distributions would require the $j$th component of
$\vekg\Sigma^{1/2} \vek{C} \vekg{\Sigma}^{1/2} \vekg\alpha^0$ to be
`large' whenever $\alpha_j^0$ is `large'; hardly a very operational
assumption. In other words, even in the simple Lin-Ying
model, unrestricted censoring complicates analysis of
FAST-SIS considerably.
\subsection{Scaling the FAST statistic}
\label{sec:scaling-fast}
The FAST statistic is easily generalized to incorporate
scaling. Inspection of the results in the appendix immediately shows
that multiplying the FAST statistic by some strictly positive,
deterministic weight does not alter its asymptotic behavior. Under
suitable assumptions, this also holds when weights are
stochastic. In the notation of Section \ref{sec:classical}, the
following two types of scaling are immediately relevant:
\begin{alignat}{2}
\label{eq:tfast} d_{j}^{Z} &= d_{j}B_{jj}^{-1/2} &&\textrm{ ($Z$-FAST);}\\
\label{eq:lyfast} d_{j}^{\mathrm{LY}} &= d_{j}D_{jj}^{-1} && \textrm{ (Lin-Ying-FAST).}
\end{alignat}
The $Z$-FAST statistic corresponds to standardizing $\vek{d}$ by its
estimated standard deviation; screening with this statistic is
equivalent to the standard approach of ranking features according to
univariate Wald $p$-values. Various forms of asymptotic false-positive
control can be implemented for $Z$-FAST, courtesy of the central limit
theorem. Note that $Z$-FAST is model-independent in the sense that its
interpretation (and asymptotic normality) does not depend on a
specific model. In contrast, the Lin-Ying-FAST statistic is
model-specific and corresponds to calculating the univariate
regression coefficients in the Lin-Ying model, thus leading to an
analogue of the idea of `ranking by absolute regression coefficients' of
\cite{fan10:_sure_np}.
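The two scalings are straightforward to apply once the ingredients of Section \ref{sec:classical} are available. A minimal sketch, assuming the vector $\vek{d}$ and the diagonals of $\vek{B}$ and $\vek{D}$ have already been computed (all names are illustrative):

```python
import numpy as np

def scaled_fast_statistics(d, B_diag, D_diag):
    """Compute the scaled FAST statistics (Z-FAST) and (Lin-Ying-FAST).

    d      -- vector of vanilla FAST statistics d_j
    B_diag -- diagonal elements B_jj of the variance estimator
    D_diag -- diagonal elements D_jj
    """
    z_fast = d / np.sqrt(B_diag)   # d_j B_jj^{-1/2}: standardize by estimated SD
    ly_fast = d / D_diag           # d_j D_jj^{-1}: univariate Lin-Ying coefficients
    return z_fast, ly_fast

# Screening then ranks features by decreasing absolute value, e.g.
d = np.array([0.5, -1.2, 0.1])
z_fast, ly_fast = scaled_fast_statistics(d, np.array([0.25, 1.0, 0.04]),
                                         np.array([0.5, 2.0, 0.2]))
ranking = np.argsort(-np.abs(z_fast))  # feature indices, most relevant first
```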
We may even devise a scaling of $\vek{d}$ which mimics the `ranking
by marginal likelihood ratio' screening of \cite{fan10:_sure_np} by
considering univariate versions of the natural loss function
$\vekg{\beta} \mapsto \vekg{\beta}^\top \vek{D} \vekg{\beta} -2
\vekg{\beta}^\top \vek{d}$ for the Lin-Ying model. The components of
the resulting statistic are rather similar to \eqref{eq:lyfast}, taking the form
\begin{equation}
\label{eq:lossfast} d_{j}^{\mathrm{loss}} = d_{j}D_{jj}^{-1/2} \textrm{ (loss-FAST).}
\end{equation}
Additional flexibility can be gained by using a time-dependent scaling
where some strictly positive (stochastic) weight is multiplied
on the integrand in (\ref{eq:dev-at-event-defin}). This is beyond the
scope of the present paper.
\section{Beyond simple independent screening -- iterated FAST screening}
\label{sec:iteratedsis}
The main assumption underlying any SIS method, including FAST-SIS, is
that the design is close to orthogonal. This assumption is easily
violated: a relevant feature may have a low marginal association with
survival; an irrelevant feature may be indirectly associated with
survival through associations with relevant features etc. To address
such issues, \cite{fan08:_sure} and
\cite{fan09:_ultrah_dimen_featur_selec} proposed various heuristic
iterative SIS (ISIS) methods which generally work as follows. First,
SIS is used to recruit a small subset of features within which an even
smaller subset of features is selected using a (multivariate) variable
selection method such as penalized regression. Second, the (univariate) relevance
of each feature not selected in the variable selection step is
re-evaluated, adjusted for all the selected features. Third, a small
subset of the most relevant of these new features is joined to the set
of already selected features, and the variable selection step is
repeated. The last two steps are iterated until the set of selected
features stabilizes or some stopping criterion of choice is reached.
We advocate a similar strategy to extend the application domain of
FAST-SIS. In view of Section~\ref{sec:mod-based-inter}, a variable
selection step using a working Lin-Ying model is intuitively sensible. We may
also provide some formal justification. Firstly, estimation in a
Lin-Ying model corresponds to optimizing the loss function
\begin{equation}
\label{eq:lossf-for-ahaz}
L(\vekg\beta):=\vekg\beta^\top \vek{D} \vekg\beta -2 \vekg\beta^\top \vek{d};
\end{equation}
where $\vek{D}$ was defined in Section \ref{sec:mod-based-inter}. As
discussed by \cite{martinussen09:_covar_selec_semip}, the loss
function \eqref{eq:lossf-for-ahaz} is meaningful for general hazard
rate models: it is the empirical version of the mean squared
prediction error for predicting, with a working Lin-Ying model, the
part of the intensity which is orthogonal to the at-risk indicator. In
the present context, we are mainly interested in the model selection
properties of a working Lin-Ying model. Suppose that $T_1$
conditionally on $\vek{Z}_1$ follows a single-index model of the
form~\eqref{eq:model-for-haz} and that Assumptions~\ref{assumption:ran-cens}-\ref{assumption:po}
hold. Suppose that $\vekg\Delta \vekg\beta^0=\vekg\delta$ with
$\vekg\Delta$ the in-probability limit of $\vek{D}$. Then $\alpha_j^0
\equiv 0$ implies $\beta_j^0=0$ \citep{hattori06:_some}, so that a
working Lin-Ying model will yield conservative model selection in a quite general setting. Under
stronger assumptions, the following result, related to work by
\cite{brillinger83:_gauss} and \cite{li89:_regres}, is available.
\begin{theorem}
\label{thm:consistent-joint-single-index}
Assume that $T_1$ conditionally on $\vek{Z}_1$ follows a single-index model of the
form \eqref{eq:model-for-haz}. Suppose moreover that
Assumption~\ref{assumption:lr} holds and that $C_1$ is independent of
$(T_1,\vek{Z}_1)$ (random censoring). If $\vekg\beta^0$ defined by
$\vekg\Delta \vekg\beta^0 =\vekg\delta$ is the vector of regression
coefficients of the associated working Lin-Ying model and
$\vekg\Delta$ is nonsingular, then there exists a nonzero constant
$\nu$ depending only on the distributions of $\vek{Z}_1^\top
\vekg\alpha^0$ and $C_1$ such that $\vekg\beta^0 = \nu
\vekg\alpha^0$.
\end{theorem}
Thus a working Lin-Ying model can consistently estimate regression
coefficient signs under misspecification. From the efforts of
\cite{zhu09:_variab} and \cite{zhu09:_noncon} for other types of
single-index models, it seems conceivable that variable selection
methods designed for the Lin-Ying model will enjoy certain consistency
properties within the model class \eqref{eq:model-for-haz}. The
conclusion of Theorem~\ref{thm:consistent-joint-single-index}
continues to hold when $\vekg{\Delta}$ is replaced by any matrix
proportional to the feature covariance matrix $\vekg{\Sigma}$. This is
a consequence of Assumption~\ref{assumption:lr} and underlines the
considerable flexibility available when estimating in single-index
models.
Variable selection based on the Lin-Ying loss
\eqref{eq:lossf-for-ahaz} can be accomplished by optimizing a
penalized loss function of the form
\begin{equation}
\label{eq:pen-loss}
\vekg\beta \mapsto L(\vekg\beta)+\sum_{j=1}^{p}p_\lambda(|\beta_j|);
\end{equation}
where $p_\lambda\colon \mathbb{R} \to \mathbb{R}$ is some nonnegative
penalty function, singular at the origin to facilitate model selection
\citep{fan01:_variab} and depending on some tuning parameter $\lambda$
controlling the sparsity of the penalized estimator. A popular
choice is the lasso penalty
\citep{tibshirani09:_univar_shrin_cox_model_high_dimen_data} and its
adaptive variant \citep{zou06}, corresponding to penalty functions
$p_\lambda(|\beta_j|)=\lambda |\beta_j|$ and
$p_\lambda(|\beta_j|)=\lambda |\beta_j|/|\hat{\beta}_j|$ with
$\hat{\vekg{\beta}}$ some root $n$ consistent estimator of
$\vekg{\beta}^0$, respectively. These penalties were studied by
\cite{ma07:_path} and \cite{martinussen09:_covar_selec_semip} for the
Lin-Ying model. Empirically, we have had better success with the
one-step SCAD (OS-SCAD) penalty of \cite{zou08:_one} than with
lasso penalties. Letting
\begin{equation}
\label{eq:scad-pen}
w_\lambda(x) := \lambda\mathrm{1}(x\leq \lambda)+\frac{(a\lambda -x)_+}{a-1}\mathrm{1}(x>\lambda), \quad a>2
\end{equation}
an OS-SCAD penalty function for the Lin-Ying model can be defined as follows:
\begin{equation}
\label{eq:oss-scad-penalty}
p_\lambda(|\beta_j|):= w_\lambda(\bar{D}|\hat{\beta}_j|) |\beta_j|.
\end{equation}
Here $\hat{\vekg\beta}:=\mathrm{argmin}\,_{\vekg\beta} L(\vekg\beta)$ is the
unpenalized estimator and $\bar{D}:=n^{-1}\mathrm{tr}\,(\vek{D})$ is
the average diagonal element of~$\vek{D}$; this particular re-scaling
is just one way to lessen dependency of the penalization on the time scale. If
$\vek{D}$ has approximately constant diagonal (which is often the case
for standardized features), then re-scaling by $\bar{D}$ leads to a
penalty similar to that of OS-SCAD in the linear regression model with
standardized features. The choice $a=3.7$ in \eqref{eq:scad-pen} was
recommended by \cite{fan01:_variab}. OS-SCAD has not previously been
explored for the Lin-Ying model but its favorable
performance in ISIS for other regression models is well known
\citep{fan09:_ultrah_dimen_featur_selec,fan10:_borrow_stren}. OS-SCAD
can be implemented efficiently using, for example, coordinate descent
methods for fitting the lasso
\citep{gorst-rasmussen11:_effic,friedman07:_pathw}. For fixed~$p$, the
OS-SCAD penalty \eqref{eq:oss-scad-penalty} has the oracle property if
the Lin-Ying model holds true. A proof is beyond the scope of this paper but follows by
adapting \cite{zou08:_one} along the lines of
\cite{martinussen09:_covar_selec_semip}.
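For concreteness, the weight function \eqref{eq:scad-pen} and its role in the one-step penalty \eqref{eq:oss-scad-penalty} can be sketched as follows (the numerical values are purely illustrative; in practice the weights feed into a weighted-lasso solver):

```python
import numpy as np

# SCAD-derivative weight w_lambda from eq. (scad-pen), with a = 3.7 as
# recommended by Fan and Li (2001).
def scad_weight(x, lam, a=3.7):
    """w(x) = lam for x <= lam, then linear decay, reaching 0 at x = a*lam."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= lam, lam, np.maximum(a * lam - x, 0.0) / (a - 1.0))

# OS-SCAD reduces to a weighted lasso: coefficient beta_j receives the fixed
# penalty weight w_lambda(Dbar * |beta_hat_j|), with beta_hat the unpenalized
# estimate and Dbar the average diagonal element of D (illustrative values).
lam, Dbar = 1.0, 1.0
beta_hat = np.array([0.1, 0.8, 5.0])
weights = scad_weight(Dbar * np.abs(beta_hat), lam)
# Small unpenalized coefficients keep the full weight lam; coefficients
# exceeding a*lam are not penalized at all.
```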
In the basic FAST-ISIS algorithm proposed below, the initial recruitment step
corresponds to ranking the regression coefficients in the univariate
Lin-Ying models. This is a convenient generic choice because it enables
interpretation of the algorithm as standard `vanilla ISIS'
\citep{fan09:_ultrah_dimen_featur_selec} for the Lin-Ying model.
\begin{algorithm}[Lin-Ying-FAST-ISIS]
\label{alg:isis}
Set $\mc{M}:=\{1,\ldots,p\}$, let $r_\mathrm{max}$ be some pre-defined maximal number of iterations of the algorithm.
\begin{enumerate}
\item (\textit{Initial recruitment}). Perform SIS by ranking
$|d_{j}D_{jj}^{-1}|$, $j=1,\ldots,p_n$,
according to decreasing order of magnitude and retaining the $k_0 \leq d$ most
relevant features~$\mc{A}_1 \subseteq \mc{M}$.
\item For $r=1,2,\ldots$ do:
\begin{enumerate}
\item (\textit{Feature selection}). Define $\omega_j:=1$ if $j \in \mc{A}_r$ and $\omega_j:=\infty$ otherwise. Estimate
\begin{displaymath}
\hat{\vekg{\beta}}:=\mathrm{argmin}\,_{ \vekg\beta}\Big\{L(\vekg{\beta})+\sum_{j=1}^{p_n} \omega_jp_{\hat\lambda}(|\beta_j|) \Big\};
\end{displaymath}
with $p_\lambda$ defined in \eqref{eq:oss-scad-penalty} for some suitable tuning parameter $\hat\lambda$. Set
$\mc{B}_r := \{j\,:\,\hat{\beta}_j\neq 0\}$.
\item If $r>1$ and $\mc{B}_r = \mc{B}_{r-1}$, or if
$r=r_{\mathrm{max}}$, return $\mc{B}_r$.
\item (\textit{Re-recruitment}). Otherwise, re-evaluate relevance of
features in $\mc{M}\backslash \mc{B}_r$ according to the
absolute value of their regression coefficient $|\tilde{\beta}_j|$ in the
$|\mc{M}\backslash \mc{B}_r|$ unpenalized Lin-Ying models including each feature in $\mc{M}\backslash
\mc{B}_r$ and all features in $\mc{B}_r$, i.e.
\begin{equation}
\label{eq:fulladj}
\tilde{\beta}_j :=\hat{\beta}_1^{(j)}, \quad \textrm{where } \hat{\vekg\beta}^{(j)}=\mathrm{argmin}\,_{\vekg\beta_{\{j\} \cup \mc{B}_r}} L(\vekg\beta_{\{j\} \cup \mc{B}_r}), \quad j \in \mc{M}\backslash \mc{B}_r.
\end{equation}
Take $\mc{A}_{r+1}:=\mc{C}_r \cup \mc{B}_r$ where
$\mc{C}_r$ is the set of the $k_r$ most relevant features in
$\mc{M}\backslash \mc{A}_r$, ranked according to decreasing order of
magnitude of $|\tilde{\beta}_j|$.
\end{enumerate}
\end{enumerate}
\end{algorithm}
\cite{fan08:_sure} recommended choosing $d$ to be of order $n/\log
n$. Following \cite{fan09:_ultrah_dimen_featur_selec}, we may take
$k_0=\lfloor 2d/3 \rfloor$ and $k_r = d-|\mc{B}_r|$ at each step. This
$k_0$ ensures that we complete at least one iteration of the
algorithm; the choice of $k_r$ for $r>0$ ensures that at most $d$
features are included in the final solution.
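The control flow of Algorithm \ref{alg:isis} with these choices of $k_0$ and $k_r$ can be sketched as follows; the three callables stand in for the marginal ranking, the penalized selection step, and the adjusted re-ranking \eqref{eq:fulladj}, whose implementations are omitted here:

```python
import numpy as np

def fast_isis(rank_marginal, select, rank_adjusted, p, d, r_max=10):
    """Skeleton of Lin-Ying-FAST-ISIS (placeholder callables).

    rank_marginal()        -- vector of marginal statistics d_j / D_jj
    select(A)              -- penalized variable selection restricted to A
    rank_adjusted(rest, B) -- adjusted coefficients beta_tilde_j, j in rest
    """
    k0 = (2 * d) // 3
    A = set(np.argsort(-np.abs(rank_marginal()))[:k0])  # initial recruitment
    B_prev = None
    for r in range(1, r_max + 1):
        B = select(A)                                   # feature selection
        if B == B_prev or r == r_max:
            return B
        rest = [j for j in range(p) if j not in B]      # re-recruitment
        scores = rank_adjusted(rest, B)
        k_r = d - len(B)                                # cap final size at d
        C = {rest[i] for i in np.argsort(-np.abs(scores))[:k_r]}
        A, B_prev = C | set(B), B
    return B
```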
Algorithm \ref{alg:isis} defines an iterated variant of SIS with the
Lin-Ying-FAST statistic (\ref{eq:lyfast}). We can devise an analogous
iterated variant of $Z$-FAST-SIS in which the initial recruitment is
performed by ranking based on the statistic (\ref{eq:tfast}), and the
subsequent re-recruitments are performed by ranking $|Z|$-statistics
in the multivariate Lin-Ying model according to decreasing order of
magnitude, using the variance
estimator~\eqref{eq:lin-ying-rootn-consistency}. A third option would
be to base recruitment on (\ref{eq:lossfast}) and re-recruitments on the decrease in the multivariate loss
\eqref{eq:lossf-for-ahaz} when joining a given feature to the set of
features picked out in the variable selection step.
The re-recruitment step (b.iii) in Algorithm \ref{alg:isis} resembles
that of \cite{fan09:_ultrah_dimen_featur_selec}. Its naive
implementation will be computationally burdensome when $p_n$ is large,
requiring a low-dimensional matrix inversion per feature. Significant
speedup over the naive implementation is possible via the matrix
identity
\begin{equation}
\label{eq:matrixid}
\vek{D}=\left(\begin{matrix}
e & \vek{f}^\top \\
\vek{f} & \tilde{\vek{D}}
\end{matrix}\right) \Rightarrow \vek{D}^{-1} = \left(\begin{matrix}
k^{-1} & -k^{-1}\vek{f}^\top \tilde{\vek{D}}^{-1} \\
-k^{-1} \tilde{\vek{D}}^{-1}\vek{f}& (\tilde{\vek{D}}-e^{-1}\vek{f}\vek{f}^\top)^{-1}
\end{matrix}\right) \quad \textrm{where } k=e-\vek{f}^\top \tilde{\vek{D}}^{-1} \vek{f}.
\end{equation}
Note that only the first row
of $\vek{D}^{-1}$ is required for the re-recruitment step so that
\eqref{eq:fulladj} can be implemented using just a single low-dimensional
matrix inversion alongside $O(p_n)$ matrix/vector multiplications. Combining
\eqref{eq:matrixid} with \eqref{eq:lin-ying-rootn-consistency}, a
similarly efficient implementation applies for $Z$-FAST-ISIS.
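A minimal sketch of this speedup (names illustrative): the adjusted coefficient of each candidate feature is obtained from one inversion of the selected-set block, using the Schur complement from the identity above.

```python
import numpy as np

def adjusted_coefficients(D, d, selected, candidates):
    """Compute beta_tilde_j of eq. (fulladj) for each candidate j, reusing a
    single |selected| x |selected| inversion via the block-inverse identity."""
    B = list(selected)
    D_BB_inv = np.linalg.inv(D[np.ix_(B, B)])  # the only matrix inversion
    d_B = d[B]
    out = {}
    for j in candidates:
        f = D[B, j]                            # cross terms between j and B
        k = D[j, j] - f @ D_BB_inv @ f         # Schur complement (scalar)
        out[j] = (d[j] - f @ D_BB_inv @ d_B) / k
    return out
```

Each candidate then costs only $O(|\mc{B}_r|^2)$ arithmetic, so the whole re-recruitment step is a single low-dimensional inversion plus $O(p_n)$ matrix/vector multiplications, as stated above.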
The variable selection step (b.i) of Algorithm \ref{alg:isis} requires
the choice of an appropriate tuning parameter. This is traditionally a
difficult part of penalized regression, particularly when the aim is
model selection where methods such as cross-validation are prone to
overfitting \citep{leng07}. Previous work on ISIS used the Bayesian
information criterion (BIC) for tuning parameter selection
\citep{fan09:_ultrah_dimen_featur_selec}. Although BIC is based on the
likelihood, we may still define the following `pseudo BIC' based on
the loss \eqref{eq:lossf-for-ahaz}:
\begin{equation}
\label{eq:def-of-pbic}
\mathrm{PBIC}(\lambda) = \kappa\{L(\hat{\vekg\beta}_\lambda)-L(\hat{\vekg\beta})\}+ n^{-1} \mathrm{df}_\lambda \log n.
\end{equation}
Here $\hat{\vekg\beta}_\lambda$ is the penalized estimator,
$\hat{\vekg\beta}$ is the unpenalized estimator, $\kappa>0$ is a
scaling constant of choice, and $\mathrm{df}_\lambda$ estimates the degrees of freedom of the penalized estimator. A computationally
convenient choice is $\mathrm{df}_\lambda =
\|\hat{\vekg\beta}_\lambda\|_0$ \citep{zou07l}. It turns out that
choosing $\hat{\lambda}= \mathrm{argmin}\,_\lambda \mathrm{PBIC}(\lambda)$ may lead to
model selection consistency. Specifically, the loss
\eqref{eq:lossf-for-ahaz} for the Lin-Ying model is of the
least-squares type. Then we can repeat the arguments of
\cite{wang07:_unified_lasso_estim_least_squar_approx} and
show that, under suitable consistency assumptions for the penalized estimator, there exists a sequence $\lambda_n \to 0$ yielding selection
consistency for $\hat{\vekg\beta}_{\lambda_n}$ and
satisfying
\begin{equation}
\label{eq:cons-property}
P\Big\{\inf_{\lambda \in S}
\mathrm{PBIC}(\lambda)>\mathrm{PBIC}(\lambda_n) \Big\} \to 1, \qquad n \to \infty;
\end{equation}
with $S$ the union of the sets of tuning parameters $\lambda$ leading
to overfitted models (strict supermodels of the true model) and to
underfitted models (models that do not include the true
model). While \eqref{eq:cons-property} holds independently of
the scaling constant $\kappa$, the finite-sample behavior of PBIC
depends strongly on~$\kappa$. A sensible value may be inferred heuristically as
follows: the range of a `true' likelihood BIC is asymptotically
equivalent to a Wald statistic in the sense that (for fixed
$p$),
\begin{equation}
\label{eq:bicexpansion}
\mathrm{BIC}(0)-\mathrm{BIC}(\infty) =\hat{\vekg\beta}_{\mathrm{ML}}^\top
\mc{I}(\vekg\beta_0)\hat{\vekg\beta}_{\mathrm{ML}}+o_p(n^{-1/2});
\end{equation}
with $\hat{\vekg\beta}_{\mathrm{ML}}$ the maximum likelihood estimator
and $\mc{I}(\vekg\beta_0) \approx
n^{-1}\mathrm{Var}(\hat{\vekg\beta}_\mathrm{ML}-\vekg\beta_0)^{-1}$ the
information matrix. We may specify $\kappa$ by requiring that
$\mathrm{PBIC}(0)-\mathrm{PBIC}(\infty)$ admits an analogous
interpretation as a Wald statistic. Since $
\mathrm{PBIC}(0)-\mathrm{PBIC}(\infty) = \kappa\vek{d}^\top
\vek{D}^{-1} \vek{d} + o_p(n^{-1/2})$, it follows from
(\ref{eq:lin-ying-rootn-consistency}) that we should choose
\begin{displaymath}
\kappa := \frac{\vek{d}^\top \vek{B}^{-1}\vek{d}}{\vek{d}^\top \vek{D}^{-1} \vek{d}}.
\end{displaymath}
This choice of $\kappa$ also removes the dependency of PBIC on the time scale.
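The resulting tuning-parameter criterion can be sketched as follows, assuming the penalized and unpenalized losses, the degrees-of-freedom estimate, and the matrices $\vek{B}$ and $\vek{D}$ are already available:

```python
import numpy as np

def kappa_scaling(d, B, D):
    """kappa = (d' B^{-1} d) / (d' D^{-1} d): makes PBIC(0) - PBIC(inf)
    interpretable as a Wald statistic and removes time-scale dependence."""
    return (d @ np.linalg.solve(B, d)) / (d @ np.linalg.solve(D, d))

def pbic(loss_pen, loss_unpen, df, n, kappa):
    """Pseudo-BIC of eq. (def-of-pbic); df = number of nonzero coefficients."""
    return kappa * (loss_pen - loss_unpen) + df * np.log(n) / n

# The tuning parameter is then chosen as the minimizer of pbic over a grid
# of candidate lambda values.
```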
\section{Simulation studies}
\label{sec:numerical-results}
In this section, we investigate the performance of FAST screening on
simulated data. Rather than comparing with popular variable selection methods
such as the lasso, we will compare with analogous
screening methods based on the Cox model
\citep{fan10:_borrow_stren}. This seems a more pertinent benchmark
since previous work has already demonstrated that (iterated)
SIS can outperform variable
selection based on penalized regression in a number of cases
(\cite{fan08:_sure}; \cite{fan09:_ultrah_dimen_featur_selec}).
For all the simulations, survival times were generated from three
different conditionally exponential models of the generic form
(\ref{eq:model-for-haz}); that is, a time-independent hazard `link function' applied to
a linear functional of features. For suitable
constants $c$, the link functions were as follows (see also
Figure \ref{fig:linkfun}):
\begin{displaymath}
\begin{array}{rrcl}
\mathrm{Logit}:& \lambda_\mathrm{logit}(t,x)&:=&\{1+\exp(c_\mathrm{logit}x)\}^{-1} \\
\mathrm{Cox}:& \lambda_\mathrm{cox}(t,x)&:=&\exp(c_\mathrm{cox}x) \\
\mathrm{Log}:& \lambda_\mathrm{log}(t,x)&:=& \log\{\mathrm{e}+(c_\mathrm{log}x)^2\}\{1+\exp(c_\mathrm{log}x)\}^{-1}.
\end{array}
\end{displaymath}
The link functions represent different characteristic effects of the
feature functional, ranging from uniformly bounded (logit), through fast
decay/increase (Cox), to fast decay/slow increase (log). We took
$c_\mathrm{logit}=1.39$, $c_\mathrm{cox}=0.68$, and
$c_\mathrm{log}=1.39$ and, unless otherwise stated, survival times
were right-censored by independent exponential random variables with
rate parameters 0.12 (logit link), 0.3 (Cox link) and 0.17 (log
link). These constants were selected to provide a crude `calibration'
to make the simulation models more comparable: for a univariate
standard Gaussian feature $Z_1$, a regression coefficient $\beta=1$,
and a sample size of $n=300$, the expected $|Z|$-statistic was 8 for
all three link functions with an expected censoring rate of~25\%, as
evaluated by numerical integration based on the true likelihood.
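For reference, a sketch of the three calibrated link functions and of how survival times follow from them; since the hazard is constant in $t$, survival times are conditionally exponential given the features (function names are illustrative):

```python
import numpy as np

c_logit, c_cox, c_log = 1.39, 0.68, 1.39

def lam_logit(x): return 1.0 / (1.0 + np.exp(c_logit * x))
def lam_cox(x):   return np.exp(c_cox * x)
def lam_log(x):   return np.log(np.e + (c_log * x) ** 2) / (1.0 + np.exp(c_log * x))

# Given the feature functional x = Z' alpha, the hazard lam(x) is constant in
# t, so T | Z is exponential with rate lam(x):
rng = np.random.default_rng(0)
x = 0.5
T = rng.exponential(1.0 / lam_cox(x))  # numpy parametrizes by scale = 1/rate
```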
Methods for FAST screening have been implemented in the R package
`ahaz' \citep{gorst-rasmussen11}.
\begin{figure}[h!]
\center
\includegraphics[scale=.75]{linkf}
\caption{\label{fig:linkfun} The three hazard rate link functions used in the simulation studies}
\end{figure}
\subsection{Performance of FAST-SIS}
\label{sec:is-weighting-useful}
We first considered the performance of basic, non-iterated FAST-SIS. Features were generated as in scenario~1 of \cite{fan10:_sure_np}. Specifically, let
$\epsilon$ be standard Gaussian. Define
\begin{equation}
\label{eq:sis-scenario}
Z_{1j} := \frac{\epsilon_j+a_j\epsilon}{\sqrt{1+a_j^2}}, \quad j=1,\ldots,p;
\end{equation}
where $\epsilon_j$ is independently distributed as a standard
Gaussian for $j=1,2,\ldots,\lfloor p/3\rfloor$; independently
distributed according to a double exponential distribution with
location parameter zero and scale parameter 1 for $j=\lfloor p/3
\rfloor +1,\ldots,\lfloor 2p/3 \rfloor$; and independently distributed
according to a Gaussian mixture $0.5N(-1,1)+0.5N(1,0.5)$ for
$j=\lfloor 2p/3\rfloor+1,\ldots,p$. The constants $a_j$ satisfy
$a_1=\cdots=a_{15}$ and $a_j=0$ for $j>15$. With the choice
$a_1=\sqrt{\rho/(1-\rho)}$, $0 \leq \rho \leq 1$, we obtain
$\mathrm{Cor}(Z_{1i},Z_{1j})=\rho$ for $i \neq j$, $i,j \leq 15$, enabling
crude adjustment of the correlation structure of the feature
distribution. Regression coefficients were chosen to be of the generic
form $ \vekg\alpha^0=(1,1.3,1,1.3,\ldots)^\top$ with exactly the first
$s$ components nonzero.
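A sketch of this feature-generating mechanism: the shared factor $\epsilon$ induces correlation $\rho$ among the first 15 features, while the marginal distributions vary by thirds.

```python
import numpy as np

def draw_features(n, p, rho, rng):
    """Draw n feature vectors per eq. (sis-scenario): thirds of the epsilon_j
    are standard Gaussian, double exponential (scale 1), and a Gaussian
    mixture 0.5 N(-1,1) + 0.5 N(1,0.5), respectively."""
    a = np.zeros(p)
    a[:15] = np.sqrt(rho / (1.0 - rho))            # Cor = rho for i,j <= 15
    third = p // 3
    eps_j = np.empty((n, p))
    eps_j[:, :third] = rng.standard_normal((n, third))
    eps_j[:, third:2 * third] = rng.laplace(0.0, 1.0, (n, third))
    mix = rng.random((n, p - 2 * third)) < 0.5     # mixture component labels
    eps_j[:, 2 * third:] = np.where(mix,
                                    rng.normal(-1.0, 1.0, mix.shape),
                                    rng.normal(1.0, np.sqrt(0.5), mix.shape))
    eps = rng.standard_normal((n, 1))              # shared factor
    return (eps_j + a * eps) / np.sqrt(1.0 + a ** 2)
```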
\begin{table}
\caption{MMMS and RSD (in parentheses) for basic SIS with $n=300$ and $p=20,000$ (100 simulations).\label{tab:tab1}}
\centering
\fbox{
{\footnotesize
\begin{tabular}{llrrrrrrrrrrr}
\\[-7pt]
&&\multicolumn{3}{c}{$\lambda_\mathrm{logit}$ }&& \multicolumn{3}{c}{$\lambda_\mathrm{cox}$ } && \multicolumn{3}{c}{$\lambda_\mathrm{log}$ } \\[5pt]
\cline{3-5} \cline{7-9} \cline{11-13}\\[-5pt]
$\rho$ & & $s=3$ & $s=6$ & $s=9$ && $s=3$ & $s=6$ & $s=9$ && $s=3$ & $s=6$ & $s=9$ \\[5pt]
\hline
\\[-5pt]
0 & $\vek{d}$& 3 (1) & 32 (53) & 530 (914) & & 3 (0) & 7 (5) & 45 (103) & & 3 (0) & 22 (44) & 202 (302) \\
& $\vek{d}^{\mathrm{LY}}$& 4 (1) & 66 (95) & 678 (939) & & 3 (0) & 11 (14) & 96 (176) & & 3 (1) & 41 (87) & 389 (466) \\
& $\vek{d}^{Z}$& 3 (1) & 40 (71) & 522 (873) & & 3 (0) & 7 (7) & 48 (105) & & 3 (0) & 22 (45) & 262 (318) \\
& \textbf{Cox}& 3 (1) & 44 (68) & 572 (928) & & 3 (0) & 7 (4) & 40 (117) & & 3 (0) & 26 (51) & 280 (306) \\[4pt]
0.25 & $\vek{d}$& 3 (0) & 6 (1) & 11 (1) & & 3 (0) & 6 (0) & 9 (1) & & 3 (0) & 6 (1) & 10 (1) \\
& $\vek{d}^{\mathrm{LY}}$& 3 (0) & 7 (1) & 11 (2) & & 3 (0) & 6 (1) & 10 (1) & & 3 (0) & 7 (1) & 11 (1) \\
& $\vek{d}^{Z}$& 3 (0) & 6 (1) & 11 (1) & & 3 (0) & 6 (0) & 10 (1) & & 3 (0) & 6 (1) & 10 (1) \\
& \textbf{Cox}& 3 (0) & 6 (1) & 11 (1) & & 3 (0) & 6 (0) & 9 (1) & & 3 (0) & 6 (1) & 10 (1) \\[4pt]
0.5 & $\vek{d}$& 3 (0) & 7 (2) & 12 (2) & & 3 (0) & 6 (1) & 10 (1) & & 3 (0) & 7 (1) & 11 (2) \\
& $\vek{d}^{\mathrm{LY}}$& 3 (0) & 9 (3) & 13 (1) & & 3 (0) & 8 (2) & 13 (2) & & 3 (0) & 8 (2) & 12 (2) \\
& $\vek{d}^{Z}$& 3 (0) & 8 (3) & 12 (1) & & 3 (0) & 7 (2) & 12 (2) & & 3 (0) & 7 (2) & 12 (2) \\
& \textbf{Cox}& 3 (1) & 9 (3) & 13 (2) & & 3 (0) & 6 (1) & 11 (2) & & 3 (0) & 8 (2) & 12 (2) \\[4pt]
0.75 & $\vek{d}$& 3 (1) & 9 (2) & 13 (1) & & 3 (0) & 8 (2) & 12 (1) & & 3 (1) & 9 (3) & 12 (2) \\
& $\vek{d}^{\mathrm{LY}}$& 4 (2) & 11 (3) & 14 (2) & & 4 (1) & 11 (3) & 14 (1) & & 4 (2) & 10 (2) & 13 (1) \\
& $\vek{d}^{Z}$& 4 (1) & 10 (2) & 13 (1) & & 3 (1) & 10 (3) & 13 (1) & & 3 (1) & 9 (2) & 13 (1) \\
& \textbf{Cox}& 5 (3) & 12 (2) & 14 (1) & & 3 (0) & 7 (2) & 12 (2) & & 4 (1) & 11 (3) & 14 (2) \\[5pt]
\end{tabular}
}
}
\end{table}
\begin{figure}[h!]
\center
\includegraphics[scale=.7]{box1}
\caption{\label{fig:box1} Minimum observed
$|Z|$-statistics in the oracle model under
$\lambda_\mathrm{log}$, for the SIS simulation study.}
\end{figure}
For each combination of hazard link function, non-sparsity level $s$, and
correlation $\rho$, we performed 100 simulations with $p=20,000$
features and $n=300$ observations. Features were ranked using the
vanilla FAST statistic, the scaled FAST statistics (\ref{eq:tfast})
and (\ref{eq:lyfast}), and SIS based on a Cox working model
(Cox-SIS), the latter ranking features according to their absolute univariate regression
coefficients. Results are shown in Table~\ref{tab:tab1}. As a
performance measure, we report the median minimum model size
(MMMS) needed to detect all relevant features alongside its robust
standard deviation (RSD), the interquartile range divided by 1.34. The
minimum model size is a useful performance measure for this type of study since it
eliminates the need to select a threshold parameter for SIS. The
censoring rate in the simulations was typically 30\%-40\%.
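The minimum model size for one replicate is simply the worst rank among the relevant features; as a sketch:

```python
import numpy as np

def minimum_model_size(scores, relevant):
    """Smallest number of top-ranked features (by decreasing |score|) needed
    to include every truly relevant feature."""
    order = np.argsort(-np.abs(scores))            # best feature first
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(1, len(scores) + 1)   # rank of each feature
    return int(ranks[list(relevant)].max())

# The reported MMMS is the median of this quantity over the 100 replicates,
# and the RSD is the interquartile range of the replicates divided by 1.34.
mms = minimum_model_size(np.array([0.1, 5.0, -3.0, 0.2]), [1, 2])
```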
For all methods, the MMMS is seen to increase with feature
correlation $\rho$ and non-sparsity~$s$. As
also noted by \cite{fan10:_sure_np} for the case of SIS for
generalized linear models, some correlation among features can
actually be helpful since it increases the strength of marginal
signals. Overall, the statistic $\vek{d}^\mathrm{\mathrm{LY}}$ seems
to perform slightly worse than both $\vek{d}$ and
$\vek{d}^\mathrm{Z}$ whereas the latter two statistics perform
similarly to Cox-SIS. In our basic implementation, screening with any
of the FAST statistics was more than 100 times faster than Cox-SIS,
providing a rough indication of the relative computational efficiency
of FAST-SIS.
To gauge the relative difficulty of the different simulation
scenarios, Figure \ref{fig:box1} shows box plots of the minimum of the
observed $|Z|$-statistics in the oracle model (the joint model with
only the relevant features included and estimation based on the
likelihood under the true link function) for the link function
$\lambda_\mathrm{log}$. This particular link function represents an
`intermediate' level of difficulty, with $|Z|$-statistics for
$\lambda_\mathrm{cox}$ generally being somewhat larger and
$|Z|$-statistics for $\lambda_\mathrm{logit}$ slightly
smaller. Even with oracle information and the correct working model,
these are evidently difficult data to deal with.
\subsection{FAST-SIS with non-Gaussian features and
nonrandom censoring}
\label{sec:non-random-censoring}
We next investigated FAST-SIS with non-Gaussian features and a more
complex censoring mechanism. The simulation scenario
was inspired by the previous section but with all features generated
according to either a standard Gaussian distribution, a
$t$-distribution with~4 degrees of freedom, or a unit rate exponential
distribution. Features were standardized to have
mean zero and variance one, and the feature correlation structure was such
that $\mathrm{Cor}(Z_{1i},Z_{1j})=0.125$ for $i,j < 15$, $i \neq j$
and $\mathrm{Cor}(Z_{1i},Z_{1j})=0$ otherwise. Survival times were
generated according to the link function $\lambda_\mathrm{log}$ with
regression coefficients $\vekg{\beta}=(1,1.3,1,1.3,1,1.3,0,0,\ldots)$
while censoring times were generated according to the same model (link
function $\lambda_\mathrm{log}$ and conditionally on the same feature
realizations) with regression coefficients
$\tilde{\vekg{\beta}}=k\vekg{\beta}$. The constant $k$
controls the association between censoring and survival
times, leading to a basic example of nonrandom censoring
(competing risks).
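This competing-risks mechanism is straightforward to simulate because both times are conditionally exponential given the features; a sketch (names illustrative):

```python
import numpy as np

def draw_times(Z, beta, k, rng):
    """Survival and censoring times sharing the link lam_log and the feature
    realizations; k = 0 gives random censoring, k != 0 nonrandom censoring."""
    c_log = 1.39
    lam = lambda x: np.log(np.e + (c_log * x) ** 2) / (1.0 + np.exp(c_log * x))
    eta = Z @ beta
    T = rng.exponential(1.0 / lam(eta))        # survival times
    C = rng.exponential(1.0 / lam(k * eta))    # censoring times
    return np.minimum(T, C), (T <= C).astype(int)  # observed time, event flag

rng = np.random.default_rng(0)
Z = rng.standard_normal((300, 6))
beta = np.array([1.0, 1.3, 1.0, 1.3, 1.0, 1.3])
time, status = draw_times(Z, beta, 0.5, rng)
```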
Using $p=20,000$ features and~$n=300$ observations, we performed 100
simulations under each of the three feature distributions, for
different values of $k$. Table \ref{tab:tab-cens} reports the MMMS and
RSD for the four different screening methods of the previous section,
as well as for the statistic $\vek{d}^{\mathrm{loss}}$ in
(\ref{eq:lossfast}). The censoring rate in all scenarios was around
50\%.
From the column with $k=0$ (random censoring), we see that the heavier tails of
the $t$-distribution increase the MMMS, particularly for
$\vek{d}^{\mathrm{LY}}$. The vanilla FAST statistic $\vek{d}$ seems
the least affected here, most likely because it does not directly
involve second-order statistics which are poorly estimated due to the
heavier tails. While $\vek{d}^{Z}$ and $\vek{d}^{\mathrm{loss}}$ are
also scaled by second-order statistics, the impact of the tails is
dampened by the square-root transformation in the scaling factors. In
contrast, the more distinctly non-Gaussian exponential distribution is
problematic for $\vek{d}^{Z}$. Overall, the statistics~$\vek{d}$ and
$\vek{d}^{\mathrm{loss}}$ seem to have the best and most consistent
performance across feature distributions. Nonrandom censoring
generally increases the MMMS and RSD, particularly for the
non-Gaussian distributions. There appears to be no clear difference
between the effect of positive and negative values of $k$. We found
that the effect of $k \neq 0$ diminished when the sample size was
increased (results not shown), suggesting that nonrandom censoring in
the present example leads to a power issue rather than a bias issue. This may
not be surprising in view of the considerations below
\eqref{eq:single-index-w-censoring}. However, the example still shows
the dramatic impact of nonrandom censoring on the performance of~SIS.
\begin{table}
\caption{\label{tab:tab-cens}MMMS and RSD (in parentheses) for SIS under non-Gaussian features/nonrandom censoring with $n=300$ and $p=20,000$ (100 simulations).}
\centering
\fbox{
{\footnotesize
\begin{tabular}{llcccrrrr}
\\[-7pt]
&&&&&\multicolumn{4}{c}{$k$} \\[5pt]
\cline{6-9}\\[-5pt]
\textsl{Feature distr.} & && $k=0$ && $-0.5$ & $-0.25$ & $0.25$ & $0.5$\\[5pt]
\hline
\\[-5pt]
\textsl{Gaussian} & $\vek{d}$&& 6 (1) && 8 (8) & 7 (4) & 6 (1) & 7 (3) \\
& $\vek{d}^{\mathrm{LY}}$&& 6 (1) && 8 (6) & 7 (3) & 7 (2) & 8 (5) \\
& $\vek{d}^{Z}$&& 6 (1) && 7 (6) & 7 (2) & 6 (1) & 7 (2) \\
& $\vek{d}^{\mathrm{loss}}$&& 6 (1) && 8 (6) & 7 (3) & 6 (1) & 7 (3) \\
& \textbf{Cox} && 6 (1) && 8 (5) & 7 (2) & 6 (1) & 7 (2) \\[4pt]
$t$ ($df=4$) & $\vek{d}$&& 6 (1) && 13 (17) & 7 (5) & 6 (1) & 7 (3) \\
& $\vek{d}^{\mathrm{LY}}$&& 11 (7) && 12 (8) & 9 (7) & 48 (136) & 99 (185) \\
& $\vek{d}^{Z}$&& 7 (3) && 17 (20) & 8 (5) & 7 (2) & 7 (3) \\
& $\vek{d}^{\mathrm{loss}}$&& 6 (1) && 8 (7) & 7 (4) & 8 (15) & 10 (10) \\
& \textbf{Cox} && 7 (4) && 15 (23) & 8 (10) & 8 (4) & 9 (5) \\[4pt]
\textsl{Exponential} & $\vek{d}$&& 6 (1) && 6 (2) & 6 (1) & 7 (4) & 8 (7) \\
& $\vek{d}^{\mathrm{LY}}$&& 6 (1) && 11 (12) & 7 (3) & 6 (1) & 6 (1) \\
& $\vek{d}^{Z}$&& 15 (10) && 34 (36) & 24 (17) & 22 (28) & 26 (29) \\
& $\vek{d}^{\mathrm{loss}}$&& 6 (0) && 7 (4) & 6 (1) & 6 (1) & 6 (1) \\
& \textbf{Cox} && 8 (4) && 22 (31) & 14 (11) & 9 (6) & 9 (8) \\[5pt]
\end{tabular}
}}
\end{table}
\subsection{Performance of FAST-ISIS}
We lastly evaluated the ability of FAST-ISIS (Algorithm
\ref{alg:isis}) to cope with scenarios where FAST-SIS fails. As in
the previous sections, we compare our results with the analogous ISIS
screening method for the Cox model. To perform Cox-ISIS, we used the R
package `SIS', with (re)recruitment based on the absolute Cox
regression coefficients and variable selection based on OS-SCAD. We
also compared with the $Z$-FAST-ISIS variant described below Algorithm
\ref{alg:isis}, in which (re)recruitment is based on the Lin-Ying model
$|Z|$-statistics (results for FAST-ISIS with (re)recruitment based on the
loss function were very similar).
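The (re)recruitment-and-selection cycle that the ISIS variants share can be sketched generically. The least-squares quantities below are hedged stand-ins for the survival-specific FAST and Cox statistics (this is an illustration of the iteration structure, not the actual algorithm):

```python
import numpy as np

def iterated_screening(X, y, d, n_select=5, max_iter=5):
    """Generic ISIS-style loop: recruit the d features most associated
    with the current residual, jointly refit on the pooled features,
    keep the strongest coefficients, and repeat until stable."""
    n, p = X.shape
    selected = []
    resid = y - y.mean()
    for _ in range(max_iter):
        # (re)recruitment: marginal association with the current residual
        cand = [j for j in range(p) if j not in selected]
        score = np.abs(X[:, cand].T @ resid) / n
        recruited = [cand[j] for j in np.argsort(-score)[:d]]
        pool = sorted(set(selected) | set(recruited))
        # joint fit on the pooled features; keep the largest coefficients
        beta, *_ = np.linalg.lstsq(X[:, pool], y, rcond=None)
        keep = np.argsort(-np.abs(beta))[:n_select]
        new_selected = sorted(np.array(pool)[keep].tolist())
        if new_selected == selected:
            break
        selected = new_selected
        resid = y - X[:, selected] @ np.linalg.lstsq(X[:, selected], y, rcond=None)[0]
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X[:, 0] - 2 * X[:, 7] + 0.1 * rng.standard_normal(200)
sel = iterated_screening(X, y, d=10, n_select=2)  # recovers features 0 and 7
```

In the actual FAST-ISIS and Cox-ISIS, the recruitment score and the variable-selection step (OS-SCAD) replace the correlations and top-coefficient truncation used here.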
For the simulations, we adopted the structural form of the feature
distributions used by \cite{fan10:_borrow_stren}. We
considered $n=300$ observations and $p=500$ features which were
jointly Gaussian and marginally standard Gaussian. Only regression
coefficients and feature correlations differed between cases as
follows:
\begin{enumerate}
\item[(a)] The regression coefficients are $\beta_1=-0.96$,
$\beta_2=0.90$, $\beta_3=1.20$, $\beta_4=0.96$, $\beta_5=-0.85$,
$\beta_6=1.08$ and $\beta_j=0$ for $j>6$. Features are
independent, $\mathrm{Cor}(Z_{1i},Z_{1j})=0$ for $i \neq j$.
\item[(b)] The regression coefficients are the same as in (a) while $\mathrm{Cor}(Z_{1i},Z_{1j})=0.5$ for $i \neq j$.
\item[(c)] The regression coefficients are
$\beta_1=\beta_2=\beta_3=4/3$, $\beta_4=-2\sqrt{2}$. The correlation
between features is given by $\mathrm{Cor}(Z_{1,4},Z_{1j})=1/\sqrt{2}$ for $j \neq
4$ and $ \mathrm{Cor}(Z_{1i},Z_{1j})=0.5$ for $i \neq j$, $i,j\neq 4$.
\item[(d)] The regression coefficients are
$\beta_1=\beta_2=\beta_3=4/3$, $\beta_4=-2\sqrt{2}$ and
$\beta_5=2/3$. The correlation between features is
$\mathrm{Cor}(Z_{1,4},Z_{1j})=1/\sqrt{2}$ for $j \notin \{4,5\}$, $\mathrm{Cor}(Z_{1,5},Z_{1j})=0$
for $j \neq 5$, and $ \mathrm{Cor}(Z_{1i},Z_{1j})=0.5$ for $i \neq j$, $i,j \notin \{4,5\}$.
\end{enumerate}
Case (a) serves as a basic benchmark whereas case (b) is harder
because of the correlation between relevant and irrelevant
features. Case (c) introduces a strongly relevant feature $Z_4$ which
is not marginally associated with survival; lastly, case (d) is
similar to case (c) but also includes a feature $Z_5$ which is weakly
associated with survival and does not `borrow' strength from its
correlation with other relevant features.
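Case (c) is constructed so that the jointly strong feature $Z_4$ has exactly zero marginal covariance with the single index $\vek{Z}^\top\vekg{\beta}$: $\mathrm{Cov}(Z_4,\vek{Z}^\top\vekg{\beta}) = \beta_4 + (\beta_1+\beta_2+\beta_3)/\sqrt{2} = -2\sqrt{2}+2\sqrt{2}=0$. A numerical check under the stated covariance structure (using a smaller $p$ than in the text, since only the structure matters):

```python
import numpy as np

p = 10  # smaller than the p=500 of the simulations; the structure is identical
beta = np.zeros(p)
beta[:3] = 4.0 / 3.0        # beta_1 = beta_2 = beta_3 = 4/3
beta[3] = -2 * np.sqrt(2)   # beta_4 = -2*sqrt(2)

# correlation matrix of case (c): Cor(Z_i, Z_j) = 0.5 throughout,
# except Cor(Z_4, Z_j) = 1/sqrt(2) for j != 4
Sigma = np.full((p, p), 0.5)
Sigma[3, :] = Sigma[:, 3] = 1 / np.sqrt(2)
np.fill_diagonal(Sigma, 1.0)

marginal_cov = Sigma @ beta  # Cov(Z_j, Z^T beta) for each feature j
# marginal_cov[3] vanishes (up to rounding): Z_4 is marginally invisible,
# while the other relevant features retain nonzero marginal association
```

This is precisely what defeats non-iterated marginal screening in case (c) and motivates the iterated variants.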
Following \cite{fan10:_borrow_stren}, we took $d=\lfloor n/(3\log n)\rfloor
= 17$ for the initial dimension reduction; performance did not depend
much on the detailed choice of $d$ of order $n/\log n$. For the three different screening
methods, ISIS was run for a maximum of 5 iterations. (P)BIC was used for
tuning the variable selection steps. Results are shown in Table
\ref{tab:table3}, summarized over 100 simulations. We report the
average number of truly relevant features selected by ISIS and the average
final model size, alongside standard deviations in parentheses. To
provide an idea of the improvement over basic SIS, we also report the
median of the minimum model size (MMMS) for the initial SIS step
(based on vanilla FAST-SIS only). The censoring rate in the different
scenarios was 25--35\%.
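The initial screening step itself is a simple truncation of the ranked marginal statistics. A minimal sketch, with simulated placeholder statistics in place of the actual FAST values, confirming the screening dimension used above:

```python
import math
import numpy as np

n, p = 300, 500
d = math.floor(n / (3 * math.log(n)))  # the paper's choice, d = 17 here

# screening step: keep the d features with the largest marginal statistic
rng = np.random.default_rng(1)
abs_stats = np.abs(rng.standard_normal(p))  # placeholder |d_j| values
kept = np.argsort(-abs_stats)[:d]           # indices surviving the screen
```

The iterated methods then refine `kept` rather than trusting this single marginal ranking.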
The overall performance of the three ISIS methods is comparable
between the different cases. All methods deliver a dramatic
improvement over non-iterated SIS, but no one method performs
significantly better than the others. The two FAST-ISIS methods have a
surprisingly similar performance. As one would expect, Cox-ISIS does
particularly well under the link function $\lambda_\mathrm{cox}$ but
does not appear to be uniformly better than the two FAST-ISIS methods even in
this ideal setting. Under the link function $\lambda_\mathrm{logit}$,
both FAST-ISIS methods outperform Cox-ISIS in terms of the number of
true positives identified, as do they for the link function
$\lambda_\mathrm{log}$, although less convincingly. On the other hand,
the two FAST-ISIS methods generally select slightly larger models
than Cox-ISIS and their false-positive rates (not shown) are
correspondingly slightly larger. FAST-ISIS was 40--50
times faster than Cox-ISIS, typically completing calculations in 0.5--1
seconds in our specific implementation. Figure \ref{fig:box2} shows
box plots of the minimum of the observed $|Z|$-statistics in the
oracle model (based on the likelihood under the true model).
\begin{table}
\caption{\label{tab:table3}Simulation results for ISIS with $n=300$, $p=500$ and $d=17$ (100 simulations). Numbers in parentheses are standard deviations (or relative standard deviation, for the MMMS). }
\centering
\fbox{
{\footnotesize
\begin{tabular}{rrrrrrrrrrrr}
\\[-7pt]
&&&& \multicolumn{3}{c}{\textsl{Average no.~true positives (ISIS)}} && \multicolumn{3}{c}{\textsl{Average model size (ISIS)}} \\[5pt]
\cline{5-7} \cline{9-11}\\[-5pt]
\textsl{Link} & \textsl{Case} & \textsl{MMMS (RSD)} && LY-FAST & $Z$-FAST & Cox && LY-FAST & $Z$-FAST & Cox \\[5pt]
\hline
\\[-5pt]
$\lambda_\mathrm{logit}$ & (a)& 7 (3) & & 6.0 (0) & 6.0 (0) & 5.5 (1) & & 7.8 (1) & 7.9 (2) & 6.3 (2) \\
& (b)& 500 (1) & & 5.5 (1) & 5.5 (1) & 3.4 (1) & & 7.0 (2) & 6.7 (2) & 4.3 (2) \\
& (c)& 240 (125) & & 3.7 (1) & 3.8 (1) & 3.0 (2) & & 5.2 (2) & 5.7 (3) & 4.5 (4) \\
& (d)& 230 (124) & & 4.8 (1) & 4.7 (1) & 3.5 (2) & & 5.9 (2) & 6.2 (3) & 4.9 (4) \\[4pt]
$\lambda_\mathrm{cox}$ & (a)& 7 (1) & & 6.0 (0) & 6.0 (0) & 6.0 (0) & & 7.5 (1) & 7.5 (1) & 6.2 (1) \\
& (b)& 500 (1) & & 5.8 (1) & 5.8 (1) & 5.6 (1) & & 7.2 (2) & 6.8 (1) & 6.4 (2) \\
& (c)& 218 (120) & & 3.7 (1) & 3.6 (1) & 3.0 (2) & & 5.1 (3) & 5.3 (3) & 4.9 (4) \\
& (d)& 258 (129) & & 4.9 (1) & 4.8 (1) & 3.8 (2) & & 6.3 (2) & 6.0 (2) & 6.4 (5) \\[4pt]
$\lambda_\mathrm{log}$ & (a)& 6 (1) & & 6.0 (0) & 6.0 (0) & 6.0 (0) & & 7.3 (1) & 7.4 (1) & 6.3 (1) \\
& (b)& 500 (1) & & 5.8 (1) & 5.7 (1) & 4.9 (1) & & 7.2 (2) & 6.7 (1) & 5.7 (2) \\
& (c)& 252 (150) & & 3.9 (0) & 3.9 (1) & 3.4 (1) & & 5.3 (2) & 4.9 (2) & 5.5 (5) \\
& (d)& 223 (132) & & 4.9 (1) & 4.8 (1) & 4.0 (2) & & 6.0 (2) & 6.1 (2) & 5.9 (5) \\[5pt]
\end{tabular}
}}
\end{table}
\begin{figure}[h!]
\center
\includegraphics[scale=.7]{box2}
\caption{\label{fig:box2} Minimum observed $|Z|$-statistics
in the oracle models for the FAST-ISIS simulation study.}
\end{figure}
\pagebreak
We have experimented with other link functions and feature
distributions than those described above (results not shown). Generally, we
found that Cox-ISIS performs worse than FAST-ISIS for bounded link
functions. The observation from Table \ref{tab:table3}, that FAST-ISIS
may improve upon Cox-ISIS even under the link function
$\lambda_\mathrm{cox}$, does not necessarily hold when the signal
strength is increased. Then Cox-ISIS will be
superior, as expected. Changing the feature distribution to
one for which the linear regression property (Assumption
\ref{assumption:lr}) does not hold leads to a decrease in the overall
performance for all three ISIS methods.
\section{Application to AML data}
The study by \cite{metzeler08} concerns the development and evaluation
of a prognostic gene expression marker for overall survival among
patients diagnosed with cytogenetically normal acute myeloid leukemia
(CN-AML). A total of 44,754 gene expressions were recorded among 163
adult patients using Affymetrix HG-U133 A1B microarrays. Based on the
method of supervised principal components
\citep{bair04:_semi_super_method_predic_patien}, the gene
expressions were used to develop an 86-gene signature for predicting
survival. The signature was validated on an external test data set
consisting of 79 patients profiled using Affymetrix HG-U133 Plus 2.0
microarrays. All data is publicly
available on the Gene Expression Omnibus web site
(http://www.ncbi.nlm.nih.gov/geo/) under the accession number
GSE12417. The CN-AML data was recently used by
\cite{benner10:_high_dimen_cox_model} for comparing the
performance of variable selection~methods.
Median survival time was 9.7 months in the training data (censoring
rate 37\%) and 17.7 months in the test data (censoring rate 41\%).
Preliminary to analysis, we followed the scaling approach employed by
\cite{metzeler08} and centered the gene expressions separately within
the test and training data set, followed by a scaling of the training
data with respect to the test
data.
We first applied vanilla FAST-SIS to the $n=163$ patients in the
training data to reduce the dimension from $p=44,754$ to $d=\lfloor
n/\log(n)\rfloor=31$. We then used OS-SCAD to select a final set
among these 31 genes. Since the PBIC criterion can be somewhat
conservative in practice, we selected the OS-SCAD tuning parameter using 5-fold
cross-validation based on the loss
function~(\ref{eq:lossf-for-ahaz}). Specifically, using a random split
of $\{1,\ldots,163\}$ into folds $F_1,\ldots,F_5$ of approximately equal size, we chose
$\lambda$ as:
\begin{displaymath}
\hat\lambda = \mathop{\mathrm{argmin}}_\lambda \sum_{i=1}^5 L^{(F_i)}\{\hat{\vekg\beta}_{-F_i}(\lambda)\};
\end{displaymath}
with $L^{(F_i)}$ the loss function using only observations from
$F_i$ and $\hat{\vekg\beta}_{-F_i}(\lambda)$ the regression
coefficients estimated for a tuning parameter $\lambda$, omitting
observations from $F_i$. This approach yielded a set of 7 genes, 5 of
which also appeared in the signature of \cite{metzeler08}. For
$\hat{\vekg{\beta}}$ the estimated penalized regression coefficients, we
calculated a risk score $\vek{Z}_j^\top\hat{\vekg{\beta}}$
for each patient in the test data. In a Cox model, the
standardized risk score had a hazard ratio of 1.69 ($p=6 \cdot
10^{-4}$; Wald test). In comparison, lasso based on the
Lin-Ying model (\cite{leng07}; \cite{martinussen09:_covar_selec_semip}) with
5-fold cross-validation gave a standardized risk score with
a hazard ratio of 1.56 ($p=0.003$; Wald test) in the test data,
requiring 5 genes; \cite{metzeler08} reported a hazard ratio of 1.85
($p = 0.002$) for their 86-gene signature.
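The cross-validation scheme in the display above can be sketched generically: fit on the complement of each fold $F_i$, evaluate the held-out loss on $F_i$, and minimize the sum over folds. The sketch below uses ridge-penalized least squares as a hedged stand-in for the penalized Lin-Ying estimator and its loss (the fold structure, not the estimator, is the point):

```python
import numpy as np

def cv_choose_lambda(X, y, lambdas, n_folds=5, seed=0):
    """Pick the lambda minimizing the summed held-out loss over folds,
    mirroring the display: fit omitting fold F_i, score on F_i, sum over i."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), n_folds)

    def fit(Xt, yt, lam):  # ridge stand-in for the penalized estimator
        p = Xt.shape[1]
        return np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)

    scores = []
    for lam in lambdas:
        total = 0.0
        for F in folds:
            train = np.setdiff1d(np.arange(n), F)
            beta = fit(X[train], y[train], lam)
            total += np.sum((y[F] - X[F] @ beta) ** 2)  # held-out loss L^{(F_i)}
        scores.append(total)
    return lambdas[int(np.argmin(scores))]

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 10))
y = X @ np.concatenate([np.ones(3), np.zeros(7)]) + 0.5 * rng.standard_normal(100)
lam_hat = cv_choose_lambda(X, y, lambdas=[0.01, 1.0, 100.0, 10000.0])
```

In the application, the quadratic loss above is replaced by the loss function (\ref{eq:lossf-for-ahaz}) and the fit by OS-SCAD at tuning parameter $\lambda$.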
We repeated the above calculations for the three scaled versions of
the FAST statistic (\ref{eq:tfast})-(\ref{eq:lossfast}). Since
assessment of prediction performance using only a single data set may
be misleading, we also validated the screening methods via
leave-one-out (LOO) cross-validation based on the 163 patients in the
training data. For each patient $j$, we used FAST-SIS as above (or
Lin-Ying lasso) to obtain regression coefficients
$\hat{\vekg{\beta}}_{-j}$ based on the remaining 162 patients and
defined the $j$th LOO risk score as the percentile of $\vek{Z}_j^\top
\hat{\vekg{\beta}}_{-j}$ among $\{\vek{Z}_i^\top
\hat{\vekg{\beta}}_{-j}\}_{i \neq j}$. We calculated Wald $p$-values
in a Cox regression model including the LOO score as a continuous
predictor. Results are shown in Table~\ref{tab:tab-real-example} while
Table \ref{tab:tab-real-example-overlap} shows the overlap between
gene sets selected in the training data. Some overlap is seen
between the different methods, particularly between vanilla FAST-SIS
and the lasso, and many of the selected genes also appear in the
signature of \cite{metzeler08}. In the test data, the prediction
performance of the different screening methods was comparable whereas
the lasso had a slight edge in the LOO
calculations. Lin-Ying SIS selected only a single gene in the test
data and typically selected no genes in the LOO calculations. We
found FAST screening to be slightly more sensitive to the
cross-validation procedure than the lasso.
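The LOO validation scheme just described can be sketched as follows; an ordinary least-squares fit serves as a hedged stand-in for the FAST-SIS/OS-SCAD pipeline, since only the leave-one-out percentile construction is being illustrated:

```python
import numpy as np

def loo_percentile_scores(X, y, fit):
    """For each subject j: fit on the other n-1 subjects, then record the
    percentile of subject j's risk score among all subjects' scores under
    the leave-j-out coefficients (the scheme described in the text)."""
    n = X.shape[0]
    pct = np.empty(n)
    for j in range(n):
        keep = np.delete(np.arange(n), j)
        beta = fit(X[keep], y[keep])        # coefficients omitting subject j
        scores = X @ beta                   # risk scores Z_i^T beta_{-j}
        pct[j] = np.mean(scores[keep] < scores[j])  # percentile in [0, 1]
    return pct

ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
rng = np.random.default_rng(3)
X = rng.standard_normal((40, 3))
y = X[:, 0] + 0.2 * rng.standard_normal(40)
pct = loo_percentile_scores(X, y, ols)
```

The resulting percentiles are then entered as a continuous predictor in a Cox regression, as done for Table~\ref{tab:tab-real-example}.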
\begin{table}
\caption{\label{tab:tab-real-example}Prediction performance of FAST-SIS and Lin-Ying lasso in the AML data, evaluated in terms of the Cox hazard ratio for the standardized continuous risk score. The LOO calculations are based on the training data only.}
\centering
\fbox{
{\footnotesize
\begin{tabular}{llrrrrrr}
\\[-7pt]
&&&\multicolumn{5}{c}{\textsl{Screening method}} \\[5pt]
\cline{4-8}\\[-5pt]
\textsl{Scenario} & \textsl{Summary statistic} && $\vek{d}$ & $\vek{d}^{\mathrm{LY}}$ & $\vek{d}^{|Z|}$ & $\vek{d}^{\mathrm{loss}}$& Lasso \\[5pt]
\hline
\\[-5pt]
Test data & Hazard ratio&& 1.69 & 1.59 & 1.46 & 1.58 & 1.54 \\
& $p$-value&& $6 \cdot 10^{-4}$ & 0.0007 & 0.01 & 0.002 & 0.004\\
& No. predictors && 7 & 1 & 3 & 7 & 5 \\[3pt]
LOO & $p$-value&& $4 \cdot 10^{-7}$ & $0.16$ & $5 \cdot 10^{-5}$ & $4 \cdot 10^{-4}$ & $4 \cdot 10^{-8}$\\
& Median no. predictors && 7 & 0 & 3 & 5 & 5\\[5pt]
\end{tabular}
}}
\end{table}
\begin{table}
\caption{Overlap between gene sets selected by the different screening methods and the signature of \cite{metzeler08}.\label{tab:tab-real-example-overlap}}
\centering
\fbox{
{\footnotesize
\begin{tabular}{c| ccccccc}
\\[-7pt]
& $\vek{d}$ & $\vek{d}^{\mathrm{LY}}$ & $\vek{d}^{|Z|}$ & $\vek{d}^{\mathrm{loss}}$& Lasso & Metzeler\\[5pt]
\cline{1-7}\\[-5pt]
$\vek{d}$ & 7 & 0 & 1 & 2 & 4 & 5\\
$\vek{d}^{\mathrm{LY}}$ & & 1 & 0 & 0 & 0 &0 \\
$\vek{d}^{|Z|}$ & & & 3 & 2 & 2 &2 \\
$\vek{d}^{\mathrm{loss}}$ & & & & 7 & 2 & 5 \\
Lasso & & & & & 5 & 5\\
Metzeler & \phantom{Lasso} & \phantom{Lasso} & \phantom{Lasso} & \phantom{Lasso} & \phantom{Lasso} & 86 \\[5pt]
\end{tabular}
}
}
\end{table}
We next evaluated the extent to which iterated FAST-SIS might improve
upon the above results. From our limited experience with applying
ISIS to real data, instability can become an issue
when several iterations of ISIS are run, particularly when
cross-validation is involved. Accordingly, we ran only a single
iteration of ISIS using $Z$-FAST-ISIS. The algorithm kept 2 of the
genes from the first FAST-SIS round and selected 3 additional
genes so that the total number of genes was 5. Calculating in the test
data a standardized risk score based on the final regression coefficients,
we obtained a Cox hazard ratio of only 1.06 ($p=0.6$; Wald test) which
is no improvement over non-iterated FAST-SIS. A similar conclusion was
reached for the corresponding LOO calculations in the training data
which gave a Cox Wald $p$-value of 0.001 for the LOO risk score, using
a median of 4 genes. None of the other FAST-ISIS methods led to
improved prediction performance compared to their non-iterated
counterparts.
FAST-ISIS runs swiftly on this large data set: one iteration of
the algorithm (re-recruitment and OS-SCAD feature selection with
5-fold cross-validation) completes in under 5 seconds on a standard
laptop.
Altogether, the example shows that FAST-SIS can compete with a
computationally more demanding full-scale variable selection method in
the sense of providing similarly sparse models with competitive
prediction properties. FAST-ISIS, while computationally very feasible,
did not seem to improve prediction performance over simple independent
screening in this particular data set.
\section{Discussion}
Independent screening -- the general idea of looking at the effect of
one feature at a time -- is a well-established method for
dimensionality reduction. It constitutes a simple and highly
scalable approach to analyzing high-dimensional data. The SIS property
introduced by \cite{fan08:_sure} has enabled a basic formal assessment
of the reasonableness of general independent screening
methods. Although the practical relevance of the SIS property has been
subject to scepticism \citep{roberts00:_discus_sure}, the formal context
needed to develop the SIS property is clearly useful for
identifying the many implicit assumptions made when applying
univariate screening methods to multivariate data.
We have introduced a SIS method for survival
data based on the notably simple FAST statistic. In simulation
studies, FAST-SIS performed on par with SIS based on the popular Cox
model, while being considerably more amenable to analysis. We have
shown that FAST-SIS may admit the formal SIS property
within a class of single-index hazard rate models. In addition to
assumptions on the feature distribution which are well known in the
literature, a principal assumption for the SIS property to hold is
that censoring times depend on neither the relevant features nor
survival. While such partially random censoring may be a reasonable
assumption in many clinical settings, it indicates that additional
caution is called for when applying univariate screening in settings
where competing risks are suspected.
A formal consistency property such as the SIS property is but one
aspect of a statistical method and does not make FAST-SIS universally
preferable. Not only is the SIS property unlikely to be unique to
FAST screening, but different screening methods often highlight
different aspects of data \citep{ma11:_rankin}, making it impossible
and undesirable to recommend one generic method. We do, however,
consider FAST-SIS a good generic choice of initial screening method
for general survival data. Ultimately, the initial choice of a
statistical method is likely to be made on the basis of parsimony,
computational speed, and ease of implementation. The FAST statistic
is about as difficult to evaluate as a collection of
correlation coefficients while iterative FAST-SIS
only requires solving one linear system of equations. This yields
substantial computational savings over methods not sharing
the advantage of linearity of estimating equations.
Iterated SIS has so far been studied to a very limited extent in an
empirical context. The iterated approach works well on simulated data,
but it is not obvious whether this necessarily translates into good
performance on real data. In our example involving a large gene
expression data set, ISIS did not improve results in terms of
prediction accuracy. Several issues may affect the performance of ISIS
on real data. First, it is our experience that the `Rashomon effect',
the multitude of well-fitting models \citep{breiman01:_statis_model},
can easily lead to stability issues for this type of forward
selection. Second, it is often difficult to choose a good tuning
parameter for the variable selection part of ISIS. Using BIC may lead
to overly conservative results, whereas cross-validation may lead to
overfitting when only the variable selection step -- and not the
recruitment steps -- are cross-validated. \cite{he11} recently
discussed how to combine ISIS with stability selection
\citep{meinshausen10:_stabil} in order to tackle instability issues
and to provide a more informative output than the concise `list of
indices' obtained from standard ISIS. Their proposed scheme requires
running many subsampling iterations of ISIS, a purpose for which
FAST-ISIS will be ideal because of its computational efficiency. The
idea of incorporating stability considerations is also attractive from
a foundational point of view, being a pragmatic departure from the
limiting \emph{de facto} assumption that there is a single, true
model. Investigation of such computationally intensive frameworks,
alongside a study of the behavior of ISIS on a range of
different real data sets, is a pertinent future research topic.
A number of other extensions of our work may be of interest. We have
focused on the important case of time-fixed features and
right-censored survival times but the FAST statistic can also be used
with time-varying features alongside other censoring and truncation
mechanisms supported by the counting process formalism. Theoretical
analysis of such extensions is a relevant future research topic, as is
analysis of more flexible, time-dependent scaling strategies for the
FAST statistic. \cite{fan11:_nonpar} recently discussed SIS where
features enter in a nonparametric, smooth manner, and an extension of
their framework to FAST-SIS appears both theoretically and
computationally feasible. Lastly, the FAST statistic is closely
related to the univariate regression coefficients in the Lin-Ying
model, which is rather forgiving towards misspecification: under
feature independence, the univariate estimator is consistent whenever
the particular feature under investigation enters the hazard rate model as a
linear function of regression coefficients \citep{hattori06:_some}. The Cox model
does not have a similar property \citep{struthers86:_missp}. Whether
such internal consistency under misspecification or lack thereof
affects screening in a general setting is an open question.
\section*{Appendix: proofs}
\label{sec:appendix}
In addition to Assumptions 1--4 stated in the main text, we will make use of
the following assumptions for the quantities defining the class of
single-index hazard rate models \eqref{eq:model-for-haz}:
\begin{enumerate}
\item[\textbf{A.}] $\E(Z_{1j})=0$ and $\E(Z_{1j}^2) =1$, $j=1,2,\ldots,p_n$.
\item[\textbf{B.}] $P\{Y_1(\tau)=1\}>0$.
\item[\textbf{C.}] $\mathrm{Var}(\vek{Z}_1^\top \vekg{\alpha}^0)$ is uniformly bounded above.
\end{enumerate}
The details in Assumption A are included mainly for convenience; it suffices to assume that $\E(Z_{1j}^2) <\infty$.
Our first lemma is a basic symmetrization result, included for completeness.
\begin{lemma}
\label{lem:symmetrize}
Let $X$ be a random variable with mean $\mu$ and finite variance $\sigma^2$. For $t> \sqrt{8}\sigma $, it holds that $P(|X-\mu|>t) \leq 4
P(|X|>t/4)$.
\end{lemma}
\begin{proof}
First note that when $t> \sqrt{8}\sigma $ we have
$P(|X-\mu|>t/2) \leq 1/2$, by Chebyshev's inequality. Let $X'$
be an independent copy of $X$. Then
\begin{equation}
\label{eq:probeq1}
2P(|X| \geq t/4) \geq P(|X'-X|>t/2) \geq P(|X-\mu|>t \land |X'-\mu|\leq t/2).
\end{equation}
But
\begin{displaymath}
P(|X-\mu|>t \land |X'-\mu|\leq t/2)=P(|X-\mu|>t)P(|X'-\mu| \leq t/2) \geq \frac{1}{2}P(|X-\mu|>t).
\end{displaymath}
Combining this with \eqref{eq:probeq1}, the statement of the lemma follows.
\end{proof}
The next lemma provides a universal exponential bound for the FAST
statistic and is of independent interest. It bears some similarity to
exponential bounds reported by
\cite{bradic10:_regul_coxs_propor_hazar_model} for the Cox~model.
\begin{lemma}
\label{prop:tail-bound}
Under Assumptions A--B, there exist constants $C_1,C_2,C_3>0$
independent of $n$ such that for any $K>0$, $t>0$ and $1 \leq j \leq
p_n$, it holds that
\begin{displaymath}
P\{n^{1/2}|d_{j}-\delta_{j}|>C_1(1+t)\} \leq 10\exp\{-t^2/(2K^2)\}+C_2\exp(-C_3n)+nP(|Z_{1j}|>K).
\end{displaymath}
\end{lemma}
\begin{proof}
Fix $j$ throughout. Assume first that $|Z_{ij}| \leq K$ for some finite $K$. Define the random variables
\begin{displaymath}
A_n:=n^{-1}\sum_{i=1}^n\int_0^\tau\{Z_{ij}-e_j(t)\}\mathrm{d} N_i(t),\quad B_n:=\int_0^\tau\{\bar{Z}_j(t)-e_j(t)\}\mathrm{d} \bar{N}(t);
\end{displaymath}
where $\bar{N}(t):=n^{-1}\{N_1(t)+\cdots+N_n(t)\}$ and $e_j(t)=\E\{\bar{Z}_j(t)\}$. Then we can write
\begin{displaymath}
n^{1/2}(d_{j}-\delta_{j}) = n^{1/2}\{A_n-\E(A_n)\}+n^{1/2}\{B_n-\E(B_n)\}.
\end{displaymath}
We will deal with each term in the display separately. Since $\mathrm{d} N_i(t) \leq 1$, it holds that
\begin{displaymath}
|A_n| \leq \max_{1 \leq i \leq n}|Z_{ij}|+\|e_j\|_\infty
\leq 2K,
\end{displaymath}
and Hoeffding's inequality \citep{hoeffding63:_probab} implies
\begin{equation}
\label{eq:tail-one}
P(n^{1/2}|A_n-\E(A_n)|>t) \leq 2\exp\{-t^2/(2K^2)\}.
\end{equation}
Obtaining an analogous bound for $n^{1/2}\{B_n-\E(B_n)\}$ requires a
more detailed analysis. Since $\mathrm{d} \bar{N}(t) \leq 1$,
\begin{equation}
\label{eq:expression-for-Bn}
|B_n| \leq \int_0^\tau|\bar{Z}_j(t)-e_j(t)|\mathrm{d} \bar{N}(t) \leq \|\bar{Z}_j-e_j\|_\infty.
\end{equation}
We will obtain an exponential bound for the right-hand side via empirical process
methods. Define $E^{(k)}(t):=n^{-1}\sum_{i=1}^n Z_{ij}^{k}
Y_i(t)$ and $e^{(k)}(t):=\E\{E^{(k)}(t)\}$ for $k=0,1$. Denote
$m:=\inf_{t \in [0,\tau]}e^{(0)}(t)$ and observe that $m>0$, by Assumption B. Moreover, by the Cauchy--Schwarz inequality,
\begin{displaymath}
\|e^{(1)}/e^{(0)}\|_\infty \leq m^{-1}\sqrt{\E|Z_{1j}|^2e^{(0)}(t)} \leq m^{-1}.
\end{displaymath}
Define $\Omega_n:=\{\inf_{t \in [0,\tau]}E^{(0)}(t) \geq m/2\}$ and
let $\mathrm{1}_{\Omega_n}$ be the indicator of this event. In
view of the preceding display, we can write
\begin{align}
|\bar{Z}_j(t)-e_j(t)|\mathrm{1}_{\Omega_n}
& \leq \frac{1}{E^{(0)}(t)}\Big\{\Big|\frac{e^{(1)}(t)}{e^{(0)}(t)}\Big||e^{(0)}(t)-E^{(0)}(t)|+|E^{(1)}(t)-e^{(1)}(t)|\Big\}\mathrm{1}_{\Omega_n} \\
& \leq 2m^{-2}(\|P_n -P\|_{\mc{F}_0}+\|P_n -P\|_{\mc{F}_1})\mathrm{1}_{\Omega_n}\label{eq:someempproc}
\end{align}
with function classes $\mc{F}_k:= \{t \mapsto Z^{k} \mathrm{1}(T \geq t \land C \geq
t)\}$. We proceed to establish exponential bounds for the
empirical process suprema in \eqref{eq:someempproc}. Each of the
$\mc{F}_k$s is a Vapnik--Chervonenkis subgraph
class, and from \cite{pollard89:_asymp} there exists
some finite constant $\zeta$ depending only on intrinsic
properties of the $\mc{F}_k$s such that
\begin{equation}
\label{eq:emp-proc-momentbound}
\E(\|P_n -P\|_{\mc{F}_k}^2) \leq \zeta n^{-1} \E (Z_{1j}^2) =n^{-1}\zeta.
\end{equation}
In particular, it also holds that $ \E(\|P_n -P\|_{\mc{F}_k})
\leq n^{-1/2}\zeta^{1/2}$. Moreover,
\begin{displaymath}
|Z_{1j}^{k} \mathrm{1}(T_1 \geq t \land C_1 \geq t)-Z_{1j}^{k} \mathrm{1}(T_1 \geq s \land C_1 \geq s)|^2 \leq K^{2k}, \quad s,t \in [0,\tau].
\end{displaymath}
With $k_1:=\zeta^{1/2}$, the concentration theorem of \cite{massart00:_about_talag} implies
\begin{equation}
\label{eq:empproc-bound}
P\{n^{1/2}\|P_n -P\|_{\mc{F}_k} > k_1(1+t)\} \leq \exp\{-t^2/(2K^2)\}, \quad k=0,1.
\end{equation}
Combining \eqref{eq:expression-for-Bn}--\eqref{eq:someempproc} and taking $k_2:=k_1 m^2/2$, we obtain
\begin{equation}
\label{eq:first-prob-bound}
P (\{n^{1/2} |B_n| > k_2(1+t)\} \cap \Omega_n ) \leq 2\exp\{-t^2/(2K^2)\},
\end{equation}
whereas the Cauchy--Schwarz inequality implies
\begin{displaymath}
\E (B_n^2\mathrm{1}_{\Omega_n}) \leq \E (\|\bar{Z}_j-e_j\|_\infty^2 \mathrm{1}_{\Omega_n})
\leq 4m^{-4}\E\{(\|P_n -P\|_{\mc{F}_0}+\|P_n -P\|_{\mc{F}_1})^2\mathrm{1}_{\Omega_n}\}
\leq 12 m^{-4}\zeta n^{-1}.
\end{displaymath}
Combining Lemma \ref{lem:symmetrize} and
\eqref{eq:first-prob-bound}, there exists a nonnegative constant
$k_3$ (depending only on $m$ and $\zeta$) such that
\begin{equation}
\label{eq:tail-two}
P \{n^{1/2} |B_n-\E(B_n)|\geq k_3(1+t) \} \leq 8\exp\{-t^2/(2K^2)\}+ P(\Omega_n^c).
\end{equation}
To bound $P(\Omega_n^c)$, recall that $e^{(0)}(t) \geq m$ by the definition of $m$. Consequently,
\begin{displaymath}
\Omega_n^c
\subseteq \{|E^{(0)}(t)-e^{(0)}(t)|>m/2 \textrm{ for some } t\} \subseteq \{\|P_n
-P\|_{\mc{F}_0}>m/2\}.
\end{displaymath}
By \mathrm{e}qref{eq:emp-proc-momentbound}, we have $\E(\|P_n
-P\|_{\mc{F}_0}) \leq m/4$ eventually. By another application of the
concentration theorem \citep{massart00:_about_talag}, there exists
finite $k_4$ so that $ P\{\|P_n
-P\|_{\mc{F}_0}>m/4(1+t)\} \leq k_4\exp(-nt^2/2)$. Setting $t=1$,
\begin{displaymath}
P(\Omega_n^c) \leq
P(\|P_n -P\|_{\mc{F}_0}>m/2) \leq
k_4\exp(-n/2).
\end{displaymath}
Substituting this bound in \eqref{eq:tail-two} and combining with
\eqref{eq:tail-one}, omitting now the assumption that $Z_{ij}$ is bounded,
it follows that there exist constants
$C_1,C_2,C_3>0$ such that for any $K>0$ and $t>0$,
\begin{displaymath}
P\{n^{1/2}|d_{j}-\delta_{j}|>C_1(1+t)\}
\leq 10\exp\{-t^2/(2K^2)\}+C_2\exp(-C_3 n)+P\Big(\max_{1 \leq i \leq n}|Z_{ij}| > K\Big).
\end{displaymath}
The statement of the lemma then follows from the union bound.
\end{proof}
\begin{lemma}
\label{prop:sure-pre-sreen}
Suppose that Assumptions A--B hold and that there exist positive constants
$l_0,l_1,\eta$ such that $P(|Z_{1j}|>s) \leq l_0\exp(-l_1 s^\eta)$
for sufficiently large $s$. If $\kappa<1/2$ then for any $k_1>0$ there
exists $k_2>0$ such that
\begin{equation}
\label{eq:sure-scr-1}
P\Big(\max_{1 \leq j \leq p_n}|d_{j}-\delta_{j}|>k_1 n^{-\kappa}\Big) \leq O[p_n\exp\{-k_2n^{(1-2\kappa)\eta/(\eta+2)}\}].
\end{equation}
Suppose in addition that $|\delta_{j}|>k_3n^{-\kappa}$ whenever $j
\in \mc{M}_\delta^n$ and that $\gamma_n=k_4 n^{-\kappa}$
where $k_3,k_4$ are positive constants and $k_4 \leq k_3/2$. Then
\begin{equation}
\label{eq:sure-scr-2}
P(\mc{M}_\delta^n \subseteq \widehat{\mc{M}}_{d}^n) \geq 1-O[p_n\exp\{-k_2n^{(1-2\kappa)\eta/(\eta+2)}\}].
\end{equation}
In particular, if $\log p_n=o\{n^{(1-2\kappa)\eta/(\eta+2)}\}$ then $P(\mc{M}_\delta^n \subseteq \widehat{\mc{M}}_{d}^n) \to 1$ when $n \to \infty.$
\end{lemma}
\begin{proof}
In Lemma \ref{prop:tail-bound}, take $1+t=k_1 n^{1/2-\kappa}/C_1$
and $K:=n^{(1-2\kappa)/(\eta+2)}$. Then there exist positive constants
$\tilde{k}_2,\tilde{k}_3$ such that for each
$j=1,\ldots,p_n$,
\begin{displaymath}
P(|d_{j}-\delta_{j}|>k_1 n^{-\kappa}) \leq 10\exp\{-\tilde{k}_2n^{(1-2\kappa)\eta/(\eta+2)}\}+nl_0\exp\{-\tilde{k}_3 n^{(1-2\kappa)\eta/(\eta+2)}\}.
\end{displaymath}
By the union bound, there exists $k_2>0$ such that
\begin{displaymath}
P\Big(\max_{1 \leq j \leq p_n}|d_{j}-\delta_{j}|>k_1 n^{-\kappa}\Big) \leq O[p_n \exp\{-k_2n^{(1-2\kappa)\eta/(\eta+2)}\}];
\end{displaymath}
which proves \eqref{eq:sure-scr-1}. Concerning \eqref{eq:sure-scr-2},
$k_3n^{-\kappa} -|d_{j}|\leq |\delta_{j}-d_{j}|$ for $j \in \mc{M}_\delta^n$ by assumption and so
\begin{displaymath}
P\Big(\min_{j \in \mc{M}_\delta^n}|d_{j}| < \gamma_n\Big) \leq P\Big(\max_{j \in \mc{M}_\delta^n}|d_{j}-\delta_{j}| \geq k_3 n^{-\kappa}-\gamma_n\Big) \leq P\Big(\max_{j \in \mc{M}_\delta^n}|d_{j}-\delta_{j}| \geq n^{-\kappa} k_3/2 \Big);
\end{displaymath}
where the last inequality follows since we assume $k_4 \leq k_3/2$. Taking $k_1=k_3/2$ in \eqref{eq:sure-scr-1},
we arrive at the desired conclusion:
\begin{displaymath}
P(\mc{M}_\delta^n \subseteq \widehat{\mc{M}}^n_d)
\geq 1 - P\Big(\min_{j \in \mc{M}_\delta^n}|d_{j}| < \gamma_n\Big)
\geq 1-O[p_n \exp\{-k_2n^{(1-2\kappa)\eta/(\eta+2)}\}].
\end{displaymath}
Finally, $P(\mc{M}_\delta^n \subseteq
\widehat{\mc{M}}_d^n) \to 1$ when $n\to \infty$ follows
immediately when $\log p_n=o\{n^{(1-2\kappa)\eta/(\eta+2)}\}$.
\end{proof}
\begin{lemma}
\label{lem:fansong}
Let $\vek{Z} \in \mathbb{R}^p$ be a random vector with zero mean and
covariance matrix $\vekg{\Sigma}$. Let $\vek{b} \in \mathbb{R}^p$ and suppose that
$\E(\vek{Z}|\vek{Z}^\top \vek{b})=\vek{c}\vek{Z}^\top \vek{b}$ for some constant vector $\vek{c} \in
\mathbb{R}^p$. Assume that $f$ is some real function. Then
\begin{equation}
\label{eq:ellip-one}
\E\{\vek{Z} f(\vek{Z}^\top \vek{b})\} = \vekg{\Sigma} \vek{b} \frac{\E\{\vek{Z}^\top \vek{b} f(\vek{Z}^\top \vek{b})\}}{\mathrm{Var}(\vek{Z}^\top \vek{b})};
\end{equation}
taking $0/0:=0$. If moreover $f$ is differentiable and strictly monotonic, there exists
$\varepsilon>0$ such that
\begin{equation}
\label{eq:ellip-two}
\E |\vek{Z} f(\vek{Z}^\top \vek{b})| \geq |\vekg{\Sigma}\vek{b}| \varepsilon/\mathrm{Var}(\vek{Z}^\top \vek{b}).
\end{equation}
In particular, $\E\{Z_j f(\vek{Z}^\top \vek{b})\}=0$ iff $\mathrm{Cov}(Z_j,\vek{Z}^\top \vek{b})=0$.
\end{lemma}
\begin{proof}
Set $W:=\vek{Z}^\top \vek{b}$. By standard properties of conditional expectations, it
holds that
\begin{displaymath}
0=\E\{W (\vek{Z}-\E(\vek{Z}|W))\}= \vekg{\Sigma} \vek{b} - \E\{W\E(\vek{Z}|W)\}=\vekg{\Sigma} \vek{b} -\vek{c} \E(W^2),
\end{displaymath}
implying $\E(\vek{Z}|W)=\vekg\Sigma \vek{b} W/\mathrm{Var}(W)$. We then
obtain~\eqref{eq:ellip-one}:
\begin{displaymath}
\E\{\vek{Z} f(\vek{Z}^\top \vek{b})\} = \E\{\E(\vek{Z}|W)f(W)\}= \vekg\Sigma \vek{b}\E\{W f(W)\}/\mathrm{Var}(W).
\end{displaymath}
To show \eqref{eq:ellip-two}, the mean value theorem implies the
existence of some $\tilde{W}$ between $0$ and $W$ such that
\begin{displaymath}
\E\{W f(W)\} = \E[W\{f(0)+f'(\tilde{W})W\}] = \E\{W^2f'(\tilde{W})\},
\end{displaymath}
using that $\E(W)=0$ in the last step.
Then
\begin{displaymath}
\E|W^2f'(\tilde{W})| \geq \E\{|f'(\tilde{W})| W^2\mathrm{1}(W^2 \leq 1)\} \geq \inf_{0 \leq x \leq 1} |f'(x)| \E\{W^2\mathrm{1}(W^2 \leq 1)\}.
\end{displaymath}
Strict monotonicity of $f$ then yields \eqref{eq:ellip-two}.
\end{proof}
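A canonical example satisfying the conditional linearity assumption of Lemma \ref{lem:fansong} is the Gaussian case: if $\vek{Z} \thicksim \mc{N}(0,\vekg{\Sigma})$, then $\E(\vek{Z}|\vek{Z}^\top \vek{b})=\vekg{\Sigma}\vek{b}\,\vek{Z}^\top \vek{b}/\mathrm{Var}(\vek{Z}^\top \vek{b})$, so the hypothesis holds with $\vek{c}=\vekg{\Sigma}\vek{b}/\mathrm{Var}(\vek{Z}^\top \vek{b})$. More generally, the assumption is satisfied whenever $\vek{Z}$ follows an elliptical distribution with finite second moments.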
\begin{lemma}
\label{lemma:cum-haz-decom}
Assume that the survival time $T$ has a general, continuous hazard rate function
$\lambda_T(t|Z)$ depending on the random variable $Z \in
\mathbb{R}$ and that the censoring time $C$ is independent of $Z$,
$T$. Then
\begin{displaymath}
\delta = \int_0^\tau \tilde{e}(t) \mathrm{d} F(t) = \E\{\tilde{e}(T \land C \land \tau)\};
\end{displaymath}
where $F(t) := P(T
\land C \land \tau \leq t)$ and $\tilde{e}(t):= \E\{Z
P(T \geq t|Z)\}/\E\{P(T \geq t|Z)\}$.
\end{lemma}
\begin{proof}
Let $S_T,S_C$ denote the survival functions of $T,C$,
conditionally on~$Z$. Using the expression
\eqref{eq:cp-decom} for $\delta$ alongside the assumption of
random censoring, we obtain
\begin{align}
\delta &= \E\Big[\int_0^\tau \{Z-e(t)\}Y(t) \lambda_T(t|Z) \mathrm{d} t\Big]\\
&= \int_0^\tau S_C(t)\E\{ ZS_T(t) \lambda_T(t|Z) \}\mathrm{d} t-\int_0^\tau \frac{\E\{ZS_T(t)\}}{\E\{Y(t)\}}S_C(t)\E\{Y(t)\lambda_T(t|Z)\}\mathrm{d} t \label{eq:expansion2} \\
&= -\int_0^\tau \frac{\mathrm{d}}{\mathrm{d} t} \tilde{e}(t)\E\{Y(t)\} \mathrm{d} t;
\end{align}
where the last equality follows since $S_T'=-\lambda_T S_T$. Integrating by parts, we obtain the statement of the lemma:
\begin{displaymath}
\delta = -\int_0^\tau \frac{\mathrm{d}}{\mathrm{d} t} \tilde{e}(t) \E\{Y(t)\} \mathrm{d} t = -\int_0^\infty \frac{\mathrm{d}}{\mathrm{d} t}\tilde{e}(t) \E\{P(T \land C \land \tau \geq t|Z) \} \mathrm{d} t= \E\{\tilde{e}(T \land C \land \tau)\}.
\end{displaymath}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:mainthm}]
Set $\tilde{e}_j(t):= \E\{Z_{1j} S_T(t,\vek{Z}_1^\top
\vekg{\alpha}^0)\}/\E\{S_T(t,\vek{Z}_1^\top \vekg{\alpha}^0)\}$ with
$S_T(t,\,\cdot\,)=\exp\{-\int_0^t \lambda(s,\cdot)\mathrm{d}
s\}$. From Assumptions
\ref{assumption:monotonicity}-\ref{assumption:ran-cens}, Assumption
C, and Lemma \ref{lem:fansong}, there exists a
universal positive constant $k_1$ such that
\begin{displaymath}
|\delta_j| \geq \inf_{0 \leq t \leq \tau}|\tilde{e}_j(t)| \geq \inf_{0 \leq t \leq \tau}|\E\{Z_{1j}S_T(t,\vek{Z}_1^\top \vekg{\alpha}^0)\}| \geq k_1|\mathrm{Cov}(Z_{1j},\vek{Z}_1^\top \vekg{\alpha}^0)|, \quad j \in \mc{M}^n.
\end{displaymath}
Then $\mc{M}^n \subseteq \mc{M}_\delta^n$. The sure
screening property follows from Lemma \ref{prop:sure-pre-sreen} and
the assumptions.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:bound-on-sel-var}]
Suppose that
\begin{equation}
\label{eq:bound-on-deltasqnorm}
\|\vekg{\delta}\|^2 =O\{\lambda_\mathrm{max}(\vekg{\Sigma})\}.
\end{equation}
For any $\varepsilon>0$, on the set $B_n:=\{\max_{1 \leq j \leq p_n}|d_{j}-\delta_{j}|\leq \varepsilon n^{-\kappa}\}$, it then holds that
\begin{displaymath}
|\{1 \leq j \leq p_n\,:\,|d_{j}|>2\varepsilon n^{-\kappa} \}| \leq |\{1 \leq j \leq p_n\,:\, |\delta_{j}|>\varepsilon n^{-\kappa}\}| \leq O\{n^{2\kappa}
\lambda_\mathrm{max}(\vekg{\Sigma})\}.
\end{displaymath}
Taking $k_1=2\varepsilon$ in Lemma \ref{prop:sure-pre-sreen}, we have
\begin{displaymath}
P[|\widehat{\mc{M}}_d^n| \leq O\{n^{2\kappa} \lambda_\mathrm{max}(\vekg\Sigma)\}] \geq
P[|\{j \,:\,|d_{j}|>k_1 n^{-\kappa} \}|\leq O\{n^{2\kappa} \lambda_\mathrm{max}(\vekg\Sigma)\}] \geq P(B_n).
\end{displaymath}
By Lemma \ref{prop:sure-pre-sreen}, $P(B_n)=1-O[p_n \exp\{-c_3 n^{(1-2\kappa)\eta/(\eta+2)}\}]$ as claimed.
So we need only verify~\eqref{eq:bound-on-deltasqnorm}.
By Lemma \ref{lemma:cum-haz-decom}, there exists a positive constant $c_1$ such that
$|\delta_{j}| \leq c_1\int_0^\tau |\E\{Z_{1j} S_T(t,\vek{Z}_1^\top
\vekg{\alpha}^0)\}|\mathrm{d} F(t)$ for $j \in \mc{M}^n$ with $F$
the unconditional distribution function of $T_1 \land C_1 \land
\tau$. In contrast, $\delta_j=0$ for $j \notin \mc{M}^n$, by Assumptions
\ref{assumption:ran-cens}-\ref{assumption:po}. It follows from Jensen's inequality that there exists a positive constant $c_2$
such that
\begin{equation}
\label{eq:deltasqnorm}
\|\vekg{\delta}\|^2 \leq c_2\int_0^\tau \|\E\{\vek{Z}_{1} S_T(t,\vek{Z}_1^\top \vekg{\alpha}^0)\}\|^2 \mathrm{d} F(t).
\end{equation}
Lemma \ref{lem:fansong} implies
\begin{equation}
\E\{\vek{Z}_{1} S_T(t,\vek{Z}_1^\top \vekg{\alpha}^0)\}= \frac{\E\{\vek{Z}_{1}^\top \vekg{\alpha}^0 S_T(t,\vek{Z}_1^\top \vekg{\alpha}^0)\}}{\mathrm{Var}(\vek{Z}_1^\top \vekg{\alpha}^0)}\vekg{\Sigma} \vekg{\alpha}^0. \label{eq:alt-expr-for-deltansq}
\end{equation}
By the Cauchy--Schwarz inequality, using that $\|\vekg{\Sigma}
\vekg{\alpha}^0\|^2 \leq \|\vekg{\Sigma}^{1/2}\|^2\|\vekg{\Sigma}^{1/2}\vekg{\alpha}^0\|^2 \leq \lambda_\mathrm{max}(\vekg{\Sigma})\|\vekg{\Sigma}^{1/2}\vekg{\alpha}^0\|^2$, we get
\begin{displaymath}
\|\E\{\vek{Z}_{1} S_T(t,\vek{Z}_1^\top \vekg{\alpha}^0)\}\|^2 \leq \|\vekg{\Sigma} \vekg{\alpha}^0\|^2/\mathrm{Var}(\vek{Z}_1^\top \vekg{\alpha}^0) \leq \lambda_\mathrm{max}(\vekg\Sigma).
\end{displaymath}
Inserting this in \eqref{eq:deltasqnorm} then yields~the desired
result~\eqref{eq:bound-on-deltasqnorm}. Note that this result does not
rely on the uniform boundedness of $\mathrm{Var}(\vek{Z}_1^\top \vekg{\alpha}^0)$ (Assumption C).
\end{proof}
\begin{lemma}
\label{prop:aalen-screen}
Suppose that Assumption A holds and that both the survival time $T_1$ and censoring time $C_1$
follow a nonparametric Aalen model~\eqref{eq:def-of-aalenmodel} with time-varying
parameters $\vekg{\alpha}^0$ and $\vekg{\beta}^0$, respectively. Suppose
moreover that $\vek{Z}_1 = \vekg{\Sigma}^{1/2} \tilde{\vek{Z}}_1$
where $\tilde{\vek{Z}}_1$ has i.i.d.~components and denote
by $\phi(x):=\E\{\exp(\tilde{Z}_{1j}x)\}$ the moment generating
function of $\tilde{Z}_{1j}$. Then
\begin{equation}
\label{eq:aalen-general-sigma}
\vekg{\delta} = \vekg{\Sigma}^{1/2}\int_0^\tau \mathrm{diag}\Big\{\frac{\mathrm{d}}{\mathrm{d} x}\frac{\phi'(x)}{\phi(x)}\Big|_{x=-\Gamma_j^0(t)}\Big\} \E\{Y_1(t)\}\, \vekg{\Sigma}^{1/2}\vekg{\alpha}^0(t) \,\mathrm{d} t;
\end{equation}
where $\vekg{\Gamma}^0(t):=\vekg\Sigma^{1/2} \int_0^t \{\vekg{\alpha}^0(s)+\vekg{\beta}^0(s)\} \mathrm{d} s$. In particular,
if $\vek{Z}_1 \thicksim \mc{N}(0,\vekg{\Sigma})$ then
\begin{equation}
\label{eq:aalen-normal-sigma}
\vekg{\delta} = \vekg{\Sigma}\Big\{\int_0^\tau \vekg{\alpha}^0(t)\E\{Y_1(t)\}\mathrm{d} t\Big\}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\Lambda_T$ and $\Lambda_C$ denote the cumulative baseline
hazard functions associated with $T_1$ and $C_1$. Combining
\eqref{eq:cp-decom} and \eqref{eq:def-of-aalenmodel}, we get
\begin{align}
\vekg{\delta} &= \E \Big\{\int_0^\tau \vek{Z}_1 \vek{Z}_1^\top Y_1(t) \vekg{\alpha}^0(t) \mathrm{d} t\Big\} - \int_0^\tau \E\{\vek{Z}_1Y_1(t)\}^{\otimes 2} \E\{Y_1(t)\}^{-1} \vekg{\alpha}^0(t) \mathrm{d} t \\
&= \int_0^\tau \vekg{\Sigma}^{1/2}\vek{H}(t) \vekg{\Sigma}^{1/2} \E\{Y_1(t)\} \vekg{\alpha}^0(t)\, \mathrm{d} t;
\end{align}
defining here
\begin{displaymath}
\vek{H}(t):=\frac{\E\{Y_1(t)\} \E\{\tilde{\vek{Z}}_1 \tilde{\vek{Z}}_1^\top Y_1(t)\}-\E\{\tilde{\vek{Z}}_1Y_1(t)\}^{\otimes 2}}{\E\{Y_1(t)\}^2}.
\end{displaymath}
Conditionally on $\tilde{\vek{Z}}_1$, we have
$\E\{Y_1(t)|\tilde{\vek{Z}}_1\}=\exp[-\{\Lambda_T(t)+\Lambda_C(t)+\tilde{\vek{Z}}_1^\top
\vekg{\Gamma}^0(t)\}]$, so independence of the components of $\tilde{\vek{Z}}_1$ implies $[\vek{H}(t)]_{ij} \equiv 0$ for $i
\neq j$. For $i=j$, factor the conditional at-risk probability as
$\E\{Y_1(t)|\tilde{\vek{Z}}_1\}=Y_1^{(j)}(t)Y_1^{(-j)}(t)$ where
$Y_1^{(j)}(t):=\exp\{-\tilde{Z}_{1j}\Gamma_j^0(t)\}$. Utilizing independence
again, we get
\begin{displaymath}
[\vek{H}(t)]_{jj}=\frac{\E\{Y_1^{(j)}(t)\} \E\{\tilde{Z}_{1j}^2 Y_1^{(j)}(t)\}-\E\{Y_1^{(j)}(t)\tilde{Z}_{1j}\}^2}{\E\{Y_1^{(j)}(t)\}^2}=\frac{\mathrm{d}}{\mathrm{d} x}\frac{\phi'(x)}{\phi(x)}\Big|_{x=-\Gamma_j^0(t)}.
\end{displaymath}
This proves \eqref{eq:aalen-general-sigma}. To verify
\eqref{eq:aalen-normal-sigma}, simply note that the moment generating
function of a standard Gaussian is $\phi(x)=\exp(x^2/2)$, for which
$\mathrm{d}/\mathrm{d}x\,(\phi'(x) \phi(x)^{-1}) = 1$.
\end{proof}
From \eqref{eq:aalen-general-sigma}, a `simple' description of
$\vekg{\delta}$ (which does not involve factorizing a matrix in terms of
$\vekg{\Sigma}^{1/2}$) is available exactly when features are
Gaussian. Specifically, it holds for some fixed $K>0$ that
\begin{displaymath}
\frac{\mathrm{d}}{\mathrm{d} x}\frac{\phi'(x)}{\phi(x)}=K,\quad \textrm{ and } \phi(0)=1,
\end{displaymath}
iff $\phi(x)=\exp(Kx^2/2)$, the moment generating function of a
centered Gaussian random variable.
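Indeed, writing the condition as $(\log \phi)''(x)\equiv K$ and integrating twice, with $\phi(0)=1$ and $\phi'(0)=\E(\tilde{Z}_{1j})=0$, gives $\log \phi(x)=Kx^2/2$, i.e., $\phi(x)=\exp(Kx^2/2)$.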
\begin{proof}[Proof of Theorem \ref{thm:mainthm-aalen}]
We apply Lemma \ref{prop:aalen-screen}. Denote by $\vek{v}_j$ the $j$th
canonical basis vector in $\mathbb{R}^{p_n}$. Integrating by parts in \mathrm{e}qref{eq:aalen-normal-sigma}, we obtain
\begin{displaymath}
\delta_{j}= \vek{v}_j^\top \vekg{\Sigma} \int_0^\tau \vekg{\alpha}^0(t) \E\{Y_1(t)\} \mathrm{d} t=\vek{v}_j^\top \vekg\Sigma \int_0^\infty \vekg{\alpha}^0(t) \E\{P(T_1 \land C_1 \land \tau \geq t)\} \mathrm{d} t= \vek{v}_j^\top \vekg{\Sigma} \E\{\vek{A}^0(T_1\land C_1\land \tau)\}.
\end{displaymath}
By the assumptions, $|\vek{v}_j^\top \vekg{\Sigma} \E\{\vek{A}^0(T_1\land C_1 \land
\tau)\}|\geq c_1n^{-\kappa}$ whenever $j \in \mc{M}^n$. Thus
$\mc{M}^n \subseteq \mc{M}_\delta^n$. For Gaussian $Z_{1j}$,
we have $P(|Z_{1j}|>s) \leq \exp(-s^2/2)$, and the SIS property then follows from~Lemma~\ref{prop:sure-pre-sreen}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:consistent-joint-single-index}]
Recall that
\begin{displaymath}
\vekg{\Delta} = \E\Big[\int_0^\tau \{\vek{Z}_1-\vek{e}(t)\}^{\otimes 2} Y_1(t) \mathrm{d} t\Big].
\end{displaymath}
Then
\begin{displaymath}
\vekg{\Delta} \vekg{\alpha}^0 = \int_0^\tau \frac{\E\{Y_1(t)\} \E\{Y_1(t)\vek{Z}_1 \vek{Z}_1^\top\vekg{\alpha}^0 \}-\E\{ Y_1(t) \vek{Z}_1^\top \vekg{\alpha}^0\}\E\{Y_1(t)\vek{Z}_1 \}}{\E\{Y_1(t)\}} \mathrm{d} t.
\end{displaymath}
By Lemma \ref{lem:fansong} and the assumption of random censoring,
\begin{displaymath}
\E\{Y_1(t)\vek{Z}_1 \vek{Z}_1^\top\vekg{\alpha}^0 \} = \vekg{\Sigma} \vekg{\alpha}^0 \frac{\E\{(\vek{Z}_1^\top \vekg{\alpha}^0)^2 Y_1(t)\}}{\mathrm{Var}(\vek{Z}_1^\top \vekg{\alpha}^0)}, \quad \textrm{and } \E\{\vek{Z}_1 Y_1(t)\}=\vekg{\Sigma} \vekg{\alpha}^0\frac{\E\{Y_1(t)\vek{Z}_1^\top \vekg{\alpha}^0\}}{\mathrm{Var}(\vek{Z}_1^\top \vekg{\alpha}^0)}.
\end{displaymath}
So we can construct a function $\xi$ such that $ \vekg{\Delta}
\vekg{\alpha}^0 =\vekg{\Sigma} \vekg{\alpha}^0 \int_0^\tau
\xi(\vek{Z}_1^\top \vekg{\alpha}^0,t)\mathrm{d} t$ where $\int_0^\tau
\xi(\vek{Z}_1^\top \vekg{\alpha}^0,t)\mathrm{d} t \neq 0$, by nonsingularity of $\vekg{\Delta}$. Similarly, using
Lemma~\ref{lemma:cum-haz-decom}, we may construct a function $\zeta$
such that $\vekg{\delta} = \vekg{\Sigma} \vekg{\alpha}^0 \int_0^\tau
\zeta(\vek{Z}_1^\top \vekg{\alpha}^0,t) \mathrm{d} t$. Taking $
\nu:=\int_0^\tau \zeta(\vek{Z}_1^\top \vekg{\alpha}^0,t)\mathrm{d}
t/\int_0^\tau \xi(\vek{Z}_1^\top \vekg{\alpha}^0,t) \mathrm{d} t$,
$\vekg{\beta}^0= \nu \vekg{\alpha}^0$ solves $\vekg{\Delta}
\vekg{\beta}^0 = \vekg{\delta}$.
\end{proof}
\end{document}
\begin{document}
\author{Paavo Salminen\\{\small Åbo Akademi University}
\\{\small Mathematical Department}
\\{\small FIN-20500 Åbo, Finland} \\{\small email: phsalmin@abo.fi}
\and
Pierre Vallois
\\{\small Universit\'e Henri Poincar\'e}
\\{\small D\'epartement de Math\'ematique}
\\{\small F-54506 Vandoeuvre les Nancy, France}
\\{\small email: vallois@iecn.u-nancy.fr}
}
\title{On subexponentiality of the L\'evy measure of the diffusion inverse local time; with applications to penalizations}
\date{}
\maketitle
\begin{abstract} For a recurrent linear diffusion on ${\bf R}_+$
we study the asymptotics of the distribution of its local time at 0 as
the time parameter tends to infinity. Under the assumption
that the L\'evy measure of the inverse local time is subexponential,
this distribution behaves asymptotically as a multiple of the L\'evy
measure. Using spectral representations we find the exact value of the
multiple. For this we also need a result on the asymptotic
behavior of the convolution of a
subexponential distribution and an arbitrary distribution on ${\bf R}_+.$
The exact knowledge of the asymptotic behavior of the distribution of the
local time allows us to analyze the process
derived via a penalization procedure with the local time. This result
generalizes the penalizations obtained in Roynette, Vallois and
Yor \cite{rvyV} for Bessel processes.
\\ \\
\noindent
{\rm Keywords: Brownian motion, Bessel process, Hitting time,
Tauberian theorem, excursions}
\\ \\
{\rm AMS Classification: 60J60, 60J65, 60J30}
\end{abstract}
\eject
\section{Introduction}
\label{sec0}
{\bf 1.} Let $X$ be a linear regular recurrent
diffusion taking values in ${\bf R}_+$ with 0 an instantaneously reflecting
boundary and $+\infty$ a natural boundary. Let ${\bf P}_x$ and ${\bf E}_x$
denote, respectively, the probability measure and the
expectation associated with $X$ when started from $x\geq 0.$ We assume that $X$ is
defined in the canonical space $C$ of continuous functions $\omega:{\bf R}_+\mapsto {\bf R}_+.$
Let
$$
{\cal C}_t:=\sigma\{\omega(s): s\leq t\}
$$
denote the smallest $\sigma$-algebra making the co-ordinate mappings up to time $t$ measurable
and take ${\cal C}$ to be the smallest $\sigma$-algebra including all $\sigma$-algebras ${\cal C}_t,\ t\geq 0.$
We let $m$ and $S$ denote the speed measure and the scale function of
$X,$ respectively. We normalize $S$ by $S(0)=0$ and remark that
$S(+\infty)=+\infty$ since we assume $X$ to be recurrent.
It is also assumed that $m$ does not have atoms. Recall that
$X$ has a jointly continuous transition density
$p(t;x,y)$ with respect to $m,$ i.e.,
$$
{\bf P}_x(X_t\in A)=\int_A p(t;x,y)\, m(dy),
$$
where $A$ is a Borel subset of ${\bf R}_+.$ Moreover, $p$ is symmetric in $x$ and $y,$ that is,
$p(t;x,y)=p(t;y,x).$ The Green or resolvent kernel of $X$ is defined for $\lambda>0$ via
\begin{equation}
\label{a0}
R_\lambda(x,y):=\int_0^\infty {\rm e}^{-\lambda t}\,p(t;x,y)\,dt.
\end{equation}
Let $\{L^{(y)}_t\,:\, t\geq 0\}$ denote the local time of $X$ at $y$
normalized via
\begin{equation}
\label{e000}
L^{(y)}_t=\lim_{\delta\downarrow 0}\frac 1{m((y,y+\delta))} \int_0^t
{\bf 1}_{[y,y+\delta)}(X_s)\, ds.
\end{equation}
For $y=0$ we write simply $L_t,$ and define for $\ell\geq 0$
\begin{equation}
\label{e00}
\tau_\ell:=\inf\{s: L_s>\ell\},
\end{equation}
i.e., $\tau:=\{\tau_\ell:\ell\geq 0\}$ is the right continuous inverse of $\{L_t\}.$ As is well known,
$\tau$
is an increasing L\'evy process, in other words, a subordinator,
and its L\'evy exponent is given by
\begin{eqnarray}
\label{e1}
&&\hskip-1cm
\nonumber
{\bf E}_0\left(\exp(-\lambda \tau_\ell)\right)=\exp\left(-\ell/R_\lambda(0,0)\right)
\\
&&\hskip1.8cm=
\exp\left(-\ell\int_0^\infty \nu(dv)(1-{\rm e}^{-\lambda v})\right),
\end{eqnarray}
where $\nu$ is the L\'evy measure of $\tau.$
The assumption that the speed measure does not have an atom at 0 implies that
$\tau$ does not have a drift.
\vskip.4cm
\noindent
{\bf 2.} We are interested in the asymptotic behavior of the distribution of
$L_t$ as $t$ tends to infinity. The basic assumption under which this
study is done is the subexponentiality of the L\'evy measure of $\tau$
(see Section \ref{sec3}). The subexponentiality assumption is equivalent with the relation
(cf. Proposition \ref{prop31})
$$
{\bf P}(\tau_\ell\geq t) \,\mathop{\sim}_{t\to +\infty}\,
\ell\,\nu((t,+\infty))\quad \forall\ \ell>0.
$$
Here and throughout the paper the notation
$$
f(x)
\,\mathop{\sim}_{x\to a}\,
g(x),
$$
where $f$ and $g$ are real valued functions and $a$ is allowed to take
also the ``values'' $+\infty$ or $-\infty,$ means that
$$
\lim_{x\to a}\frac{f(x)}{g(x)}=1.
$$
Since $\tau$ is the inverse of $L,$ it also holds (see Proposition \ref{prop31})
$$
{\bf P}_0(L_t\leq \ell) \,\mathop{\sim}_{t\to +\infty}\, \ell\,\nu((t,+\infty)).
$$
To extend this for an arbitrary starting state $x>0,$ we first show
that (see Proposition \ref{prop32})
$$
{\bf P}_x(H_0>t) \,\mathop{\sim}_{t\to +\infty}\, S(x)\,\nu((t,+\infty)),
$$
where
$
H_0:=\inf\{t:\, X_t=0\},
$
and then (see Proposition \ref{prop33})
\begin{equation}
\label{f04}
{\bf P}_x(L_t\leq \ell) \,\mathop{\sim}_{t\to +\infty}\, (S(x)+\ell)\,\nu((t,+\infty)).
\end{equation}
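Note that, since $S(0)=0,$ relation (\ref{f04}) reduces for $x=0$ to ${\bf P}_0(L_t\leq \ell) \,\mathop{\sim}_{t\to +\infty}\, \ell\,\nu((t,+\infty)),$ in agreement with the preceding display.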
Our motivation for relation (\ref{f04}) arose from the desire to generalize the
penalization result obtained for Bessel processes in Roynette, Vallois and
Yor \cite{rvyV} (see also \cite{rvyCR} and \cite{rvyI}). From our point of view, since many of the
penalization results are derived for Brownian motion and Bessel
processes, it is important to increase
understanding of the assumptions needed to guarantee the validity of such
results for more general diffusions. In particular, we prove that
(see Theorem \ref{thm62} and Example \ref{ex61})
\begin{equation}
\label{f05}
\lim_{t\to\infty}\frac{{\bf E}_0(h(L_t)\,|\,{\cal C}_u)}{{\bf E}_0(h(L_t))}=S(X_u)h(L_u)+1-
H(L_u)=:M^h_u \qquad \text{a.s.},
\end{equation}
where $h$ is a probability density function on ${\bf R}_+$ (with some nice
properties) and $H$ is the corresponding distribution function.
\vskip.4cm
\noindent
{\bf 3.} The paper is organised as follows. In the next section basic
properties of subexponentiality are presented and a new result
(Lemma \ref{prop0}) on the limiting behavior of the convolution of a
subexponential and a more general distribution is derived. In Section
3 we study the spectral representations of the hitting time
distributions and the L\'evy measure. In Section 4 results on
subexponentiality and the spectral representations are combined to
yield relation (\ref{f04}). Hereby we also need a weak form of a
Tauberian theorem, given as Lemma \ref{lemma31} in the Appendix.
The application to penalizations is discussed in Section 5. To make
the paper more readable we first state and prove the general theorem on
penalizations. After this the penalization with local time is treated
and (\ref{f05}) is proved. The paper is concluded by characterizing
the law of the canonical process under the penalized measure induced
by the martingale $M^h.$ Using
absolute continuity and the compensation formula for excursions,
we are able to shorten the proof when compared
with the one in \cite{rvyV}.
\section{Subexponentiality}
\label{sec10}
In this section we present some basic results on subexponential
probability distributions. Later, in Section \ref{sec3}, it is assumed
that the probability distribution induced by the tail of the L\'evy
measure of $\tau$ is subexponential. This assumption allows us to
deduce the crucial limiting behavior of the first hitting time
distribution (see Proposition \ref{prop32}).
\begin{definition}
A probability distribution function $F$ on $(0,+\infty)$ such that
\begin{equation}
\label{ee040}
F(0+)=0,\quad F(x)<1\quad\forall x>0,\quad \lim_{x\to\infty}F(x)=1
\end{equation}
is called subexponential if
\begin{equation}
\label{ee04}
\lim_{x\to +\infty}\overline{F*F}(x)\,/\,\overline F(x)=2,
\end{equation}
where $*$ denotes convolution and $\overline F(x):=1-F(x)$ the complementary distribution function.
\end{definition}
For the following two lemmas and their proofs we refer to Chistyakov
\cite{chistyakov64} and
Embrechts et al. \cite{embgolver79}.
\begin{lemma}
\label{slovar}
If $F$ is a probability distribution function satisfying (\ref{ee040})
and
$$
\overline F(x)\,\mathop{\sim}_{x\to \infty}\,x^{-\alpha}\, H(x)
$$
with $\alpha\geq 0$ and $H$ a slowly varying function, then $F$ is subexponential.
\end{lemma}
\begin{lemma}
\label{uni}
If $F$ is subexponential then
\begin{description}
\item{(i)}\ uniformly on compact $y$-sets
\begin{equation}
\label{ee05}
\lim_{x\to\infty} {\overline F(x+y)}/{\overline F(x)}=1,
\end{equation}
\item{(ii)}\ for all $\epsilon>0,$\,
\begin{equation}
\label{ee051}
\lim_{x\to+\infty}{\rm e}^{\epsilon\,x}\overline F(x)= +\infty.
\end{equation}
\end{description}
\end{lemma}
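For example, Lemma \ref{slovar} shows that any distribution with a regularly varying tail, such as the Pareto tail $\overline F(x)=(1+x)^{-\alpha}$ with $\alpha>0,$ is subexponential. On the other hand, the exponential distribution with parameter $\lambda>0$ is not subexponential, since $\overline F(x+y)/\overline F(x)={\rm e}^{-\lambda y}\neq 1$ violates (\ref{ee05}).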
The proof of the next lemma uses some ideas from Teugels \cite{teugels75}, p. 1006.
\begin{lemma}
\label{prop0} Let $F$ and $G$ be two probability distributions on ${\bf R}_+.$
Assume that
\begin{description}
\item{(1)}\hskip3mm $F$ is subexponential,
\item{(2)}\hskip3mm $\lim_{x\to\infty} \overline G(x)/\overline F(x)=c>0.$
\end{description}
Then
\begin{equation}
\label{ee1}
\lim_{x\to\infty} \overline{F*G}(x)/\left(\overline G(x)+\overline F(x)\right)=1.
\end{equation}
\end{lemma}
\begin{proof}
Let $\epsilon\in(0,1).$ By assumption {\sl (2)} there exists $\delta=\delta(\epsilon)$
such that for $x>\delta$
\begin{equation}
\label{ee2}
c\,(1-\epsilon)\overline F(x)\leq \overline G(x)\leq c\,(1+\epsilon)\overline F(x).
\end{equation}
Observe that
\begin{eqnarray*}
&&
\hskip-1cm
\overline{F*G}(x)=1-{F*G}(x)=1-F(x)+F(x)-\int_0^x G(x-y)\,dF(y)
\\
&&
\hskip3.6cm
=\overline F(x)+\int_0^x \overline G(x-y)\,dF(y).
\end{eqnarray*}
We assume now, throughout the proof, that $x>\delta$ and write
\begin{equation}
\label{ee3}
\frac{\overline{F*G}(x)}{\overline G(x)+\overline F(x)}=
\frac{\overline{G}(x)}{\overline G(x)+\overline F(x)}\left(I_1(x)+I_2(x)\right)
+\frac{\overline{F}(x)}{\overline G(x)+\overline F(x)},
\end{equation}
where
$$
I_1(x):=\int_0^{x-\delta} \frac{\overline G(x-y)}{\overline G(x)}\,dF(y)
$$
and
$$
I_2(x):=\int_{x-\delta}^x \frac{\overline G(x-y)}{\overline G(x)}\,dF(y).
$$
Obviously, by assumption {\sl (2)}, the claim (\ref{ee1}) follows if we show that
\begin{equation}
\label{ee4}
\lim_{x\to\infty} I_1(x)=1
\end{equation}
and
\begin{equation}
\label{ee5}
\lim_{x\to\infty} I_2(x)=0.
\end{equation}
{\sl Proof of (\ref{ee5}).}\ Since $\overline G(x-y)\leq 1$ we have
\begin{eqnarray*}
&&
\hskip-1.65cm
I_2(x)\leq \int_{x-\delta}^x \frac{dF(y)}{\overline
G(x)}=\frac{F(x)-F(x-\delta)}{\overline G(x)}
\\
&&\hskip2cm
=\frac{\overline F(x-\delta)-\overline F(x)}{\overline G(x)}
\\
&&\hskip2cm
=\frac{\overline F(x)}{\overline G(x)}\left(\frac{\overline F(x-\delta)}{\overline F(x)}-1\right).
\end{eqnarray*}
Using now (\ref{ee05}) and assumption {\sl (2)} yields (\ref{ee5}).
\noindent
{\sl Proof of (\ref{ee4}).}\ Since
$$
\overline G(x-y)\geq \overline G(x)
$$
we have
$$
I_1(x)=\int_0^{x-\delta} \frac{\overline G(x-y)}{\overline G(x)}\,dF(y)\geq F(x-\delta).
$$
Consequently,
\begin{equation}
\label{ee60}
\liminf_{x\to +\infty}I_1(x)\geq 1.
\end{equation}
To derive an upper estimate, notice first that
\begin{equation}
\label{ee6}
I_1(x)\leq \frac{1+\epsilon}{1-\epsilon}\int_0^{x-\delta} \frac{\overline F(x-y)}{\overline F(x)}\,dF(y),
\end{equation}
because, from (\ref{ee2}),
$$
x>\delta\quad\Rightarrow\quad \overline G(x)\geq c(1-\epsilon)\overline F(x)
$$
and
$$
y\leq x-\delta\quad\Rightarrow\quad x-y\geq \delta\quad\Rightarrow\quad \overline G(x-y)\leq c(1+\epsilon)\overline F(x-y).
$$
Next we develop the integral term in (\ref{ee6}) as follows:
\begin{eqnarray*}
&&
\int_0^{x-\delta} \overline F(x-y)\,dF(y)
\\
&&
\hskip2cm
=\int_0^{x-\delta}\left( 1- F(x-y)\right)\,dF(y)
\\
&&
\hskip2cm
=F(x-\delta)-\int_0^{x-\delta} F(x-y)\,dF(y)
\\
&&
\hskip2cm
=F(x)-\int_0^{x-\delta} F(x-y)\,dF(y)+F(x-\delta)-F(x)
\\
&&
\hskip2cm
=F(x)-\int_0^{x-\delta} F(x-y)\,dF(y)-\int_{x-\delta}^x\,dF(y)
\\
&&
\hskip2cm
=F(x)-\int_0^{x-\delta}F(x-y)\,dF(y)
\\
&&
\hskip3.5cm
-\int_{x-\delta}^xF(x-y)\,dF(y)-\int^x_{x-\delta}\overline F(x-y)\,dF(y)
\\
&&
\hskip2cm
\leq
F(x)-\int_0^{x}F(x-y)\,dF(y).
\end{eqnarray*}
Hence,
$$
\int_0^{x-\delta} \overline F(x-y)\,dF(y)\leq F(x)-F*F(x)
=\overline{F*F}(x)-\overline F(x).
$$
Consequently, from (\ref{ee6}),
$$
I_1(x)\leq \left(\frac{1+\epsilon}{1-\epsilon}\right)\,\left(\frac{\overline{F*F}(x)-\overline F(x)}{\overline F(x)}\right),
$$
and using (\ref{ee04}) and letting $\epsilon\to 0$ we obtain
$$
\limsup_{x\to +\infty}I_1(x)\leq 1,
$$
which together with (\ref{ee60}) proves (\ref{ee4}), completing the
proof of Lemma \ref{prop0}.
\end{proof}
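Note that taking $G=F$ (so that $c=1$) in Lemma \ref{prop0} recovers the defining relation (\ref{ee04}). Iterating the lemma, with $G$ the $(n-1)$-fold convolution of $F,$ also yields the familiar property $\overline{F^{*n}}(x)\,\mathop{\sim}_{x\to+\infty}\,n\,\overline F(x)$ for every $n\geq 1.$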
\section{Spectral representations}
\label{sec2}
Spectral representations play a crucial role in our study of
asymptotic properties of the hitting time
distributions. In this section we recall basic properties of these
representations and derive some useful estimates. For references on
the spectral theory of strings, we list \cite{kackrein74}, \cite{kasahara75},
\cite{dymmckean76}, \cite{kent80}, \cite{kuchler80}, \cite{knight81},
\cite{kotaniwatanabe81}, \cite{kent82}, and
\cite{kuchlersalminen89}.
Besides the diffusion $X$ itself, it is important to study $X$ killed at the first hitting time of 0, denoted
$\widehat X=\{\widehat X_t :t\geq 0\}$, i.e., the diffusion with the sample paths
\begin{equation}
\label{kill}
\widehat X_t:=
\begin{cases}
X_t, & t<H_0,\\
\partial,& t\geq H_0,
\end{cases}
\end{equation}
where
$
H_0:=\inf\{t:\, X_t=0\}
$
and
$\partial$ is a point isolated from ${\bf R}_+$ (a ``cemetery'' point).
Then $\{\widehat X_t:t\geq 0\}$ is a diffusion
with the same scale and speed as $X.$
Let $\hat p$ denote the transition density of $\widehat X$ with respect to $m:$
\begin{equation}
\label{kill2}
{\bf P}_x(\widehat X_t\in dy)={\bf P}_x(X_t\in dy; t<H_0)=\hat p(t;x,y)\,m(dy).
\end{equation}
Recall that the density of the ${\bf P}_x$-distribution of $H_0$ exists and
is given by
\begin{equation}
\label{f00}
f_{x0}(t):={\bf P}_x(H_0\in dt)/dt=\lim_{y\downarrow 0}\frac{\hat p(t;x,y)}{S(y)}.
\end{equation}
Moreover, the L\'evy measure $\nu$ of the inverse local time $\tau,$
see (\ref{e000}) and (\ref{e00}), is absolutely continuous with respect to the Lebesgue measure,
and the density of $\nu$ satisfies
\begin{eqnarray}
\label{v00}
&&\hskip-.5cm
\dot\nu(v):=\nu(dv)/dv=\lim_{x\downarrow 0}\frac{f_{x0}(v)}{S(x)}.
\end{eqnarray}
We now define the basic eigenfunctions $A(x;\gamma)$ and $C(x;\gamma)$
associated with $X$ and $\widehat X,$ respectively,
via the integral equations
(recall that $S$ is continuous and $m$ has no atoms)
$$
A(x;\gamma)=1-\gamma\int_0^xdS(y)\,\int_0^ym(dz)\, A(z;\gamma),
$$
\begin{equation}
\label{e201}
C(x;\gamma)=S(x)-\gamma\int_0^xdS(y)\,\int_0^ym(dz)\, C(z;\gamma),
\end{equation}
and the initial values
\begin{equation}
\label{e2015}
A(0;\gamma)=1,\quad A'(0;\gamma):=\lim_{x\downarrow 0}\frac{A(x;\gamma)-1}{S(x)}=0,
\end{equation}
\begin{equation}
\label{e2016}
C(0;\gamma)=0,\quad C\,'(0;\gamma):=\lim_{x\downarrow 0}\frac{C(x;\gamma)}{S(x)}=1.
\end{equation}
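As an illustration, for reflecting Brownian motion on ${\bf R}_+,$ for which $S(x)=x$ and $m(dx)=2\,dx,$ the equations above are solved by
$$
A(x;\gamma)=\cos(\sqrt{2\gamma}\,x),\qquad C(x;\gamma)=\frac{\sin(\sqrt{2\gamma}\,x)}{\sqrt{2\gamma}},
$$
as is readily checked from (\ref{e201}).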
Let $\{A_n\}$
and $\{C_n\}$ be two families of functions defined by
\begin{equation}
\label{e210}
A_0(x)=1,\qquad A_{n+1}(x)=\int_0^xdS(y)\,\int_0^ym(dz)\, A_n(z)
\end{equation}
and
\begin{equation}
\label{e21}
C_0(x)=S(x),\qquad C_{n+1}(x)=\int_0^xdS(y)\,\int_0^ym(dz)\, C_n(z),
\end{equation}
respectively. Then the functions $A(x;\gamma)$ and $C(x;\gamma)$ are explicitly given by
\begin{equation}
\label{e2150}
A(x;\gamma)=\sum_{n=0}^\infty (-\gamma)^n\,A_n(x)
\end{equation}
and
\begin{equation}
\label{e215}
C(x;\gamma)=\sum_{n=0}^\infty (-\gamma)^n\,C_n(x),
\end{equation}
respectively (see Kac and Krein \cite{kackrein74}, p. 29). In the next
lemma we give an estimate which shows that the series for $C$
converges rapidly for all values of $\gamma$ and $x\geq 0$. A similar
estimate for $A$ can be
found in Dym and McKean \cite{dymmckean76}, p. 162.
\begin{lemma}
\label{lemma1}
The functions $x\mapsto C_n(x),\, x\geq 0,\, n=0,1,2,\dots,$ are positive, increasing
and satisfy
\begin{equation}
\label{e22}
C_n(x)\leq \frac 1{n!}\, S(x) \left(\int_0^x M(y)\,dS(y)\right)^n,
\end{equation}
where $M(z)=m((0,z)).$
\end{lemma}
\begin{proof} The fact that the $C_n$ are positive and increasing is immediate from (\ref{e21}).
Clearly (\ref{e22}) holds for $n=0.$ Hence, consider
\begin{eqnarray*}
&&
C_{n+1}(x)=\int_0^xdS(y)\,\int_0^ym(du)\, C_n(u)\\
&&\hskip1.55cm
\leq
\int_0^xdS(y)\,\int_0^ym(du)\, \frac 1{n!}\, S(u) \left(\int_0^u M(z)\,dS(z)\right)^n
\\
&&\hskip1.55cm
\leq
\frac 1{n!}\, S(x)
\int_0^xdS(y)\,\int_0^ym(du)\,\left(\int_0^u M(z)\,dS(z)\right)^n
\\
&&\hskip1.55cm
\leq
\frac 1{n!}\, S(x)
\int_0^xdS(y)\,\left(\int_0^y M(z)\,dS(z)\right)^n M(y)
\\
&&\hskip1.55cm
=
\frac 1{(n+1)!}\, S(x)
\left(\int_0^x M(y)dS(y)\right)^{n+1},
\end{eqnarray*}
where we have used the facts that $x\mapsto S(x)$ is increasing and
$x\mapsto M(x)$ is positive.
\end{proof}
\begin{lemma}
\label{lemma2}
The function $x\mapsto C(x;\gamma)$ satisfies the inequality
\begin{equation}
\label{e23}
|C(x;\gamma)|\leq S(x)\,\exp\left(|\gamma|\int_0^x M(z)dS(z)\right).
\end{equation}
\end{lemma}
\begin{proof}
This follows readily from (\ref{e215}) and (\ref{e22}).
\end{proof}
From Krein's
theory of strings it is known (see \cite{dymmckean76} p. 176, and
\cite{kackrein74,kuchler80,kotaniwatanabe81}) that there exists a $\sigma$-finite measure, denoted
$\Delta$ and called the principal spectral measure of $X$, with the property
\begin{equation}
\label{repk1}
\int_0^\infty \frac{\Delta(dz)}{z+1} < \infty
\end{equation}
such that the transition density of $X$ can be represented as
\begin{equation}
\label{krein0}
p(t;x,y)= \int_0^\infty {\rm e}^{-\gamma t}\,A(x;\gamma)\,A(y;\gamma)\,\Delta(d\gamma).
\end{equation}
We remark that from the assumption that $m$ does not have an atom at 0
it follows (see \cite{dymmckean76} p. 192) that $\Delta([0,\infty))=\infty.$
Analogously, for the killed process $\widehat X$ there exists (see
\cite{knight81}, \cite{kuchlersalminen89}) a $\sigma$-finite measure, denoted
$\widehat\Delta$ and called the principal spectral measure of $\widehat X,$
such that
\begin{equation}
\label{rep}
\int_0^\infty \frac{\widehat\Delta(dz)}{z(z+1)} < \infty,
\end{equation}
and
\begin{equation}
\label{repdelta}
\int_0^\infty \frac{\widehat\Delta(dz)}{z} = \infty.
\end{equation}
The transition density of $\widehat X$ can be represented as
\begin{equation} \label{krein1}
\hat p(t;x,y)= \int_0^\infty {\rm e}^{-\gamma t}\,C(x;\gamma)\,C(y;\gamma)\,\widehat\Delta(d\gamma).
\end{equation}
The result of the next proposition can also be found in
\cite{kuchlersalminen89}. Since the proof in \cite{kuchlersalminen89}
is not complete in all details, we find it worthwhile to give a
new proof here.
\begin{proposition}
\label{prop1}
(i)\ The density of the ${\bf P}_x$-distribution of the first hitting time $H_0$ has the spectral representation
\begin{equation}
\label{krein2}
f_{x0}(t)=\int_0^\infty {\rm e}^{-\gamma t}\,C(x;\gamma)\,\widehat\Delta(d\gamma).
\end{equation}
(ii) The density of the L\'evy measure of the inverse local time at 0 has the spectral representation
\begin{equation}
\label{krein21}
\dot\nu(t)=\int_0^\infty {\rm e}^{-\gamma t}\,\widehat\Delta(d\gamma).
\end{equation}
\end{proposition}
\begin{proof}
(i)\ Combining (\ref{f00}) and (\ref{krein1}) yields
\begin{eqnarray*}
&&
f_{x0}(t)=\lim_{y\downarrow 0}\frac{\hat p(t;x,y)}{S(y)}
\\
&&
\hskip1.15cm
=\lim_{y\downarrow 0}
\int_0^\infty {\rm e}^{-\gamma t}\,C(x;\gamma)\,\frac{C(y;\gamma)}{S(y)}\,\widehat\Delta(d\gamma).
\end{eqnarray*}
We show that the limit can be taken inside the integral by the Lebesgue dominated convergence
theorem. Let $t>0$ be fixed and choose $\varepsilon>0$ such that
$$
t-\int_0^\varepsilon M(z)\,dS(z)\geq t/2.
$$
Then, from Lemma \ref{lemma2}, for $\gamma>0$ and $0<y<\varepsilon$ we have
$$
{\rm e}^{-\gamma t}\,\frac{|C(y;\gamma)|}{S(y)}
\leq
\exp\left(-\gamma\left(t-\int_0^y M(z)\,dS(z)\right)\right)
\leq
{\rm e}^{-\gamma t/2}.
$$
Consequently, it remains to show that
\begin{equation}
\label{e24}
\int_0^\infty {\rm e}^{-\gamma t/2}\left|C(x;\gamma)\right|\widehat\Delta(d\gamma)<\infty.
\end{equation}
By the Cauchy--Schwarz inequality
\begin{eqnarray*}
&&
\left(\int_0^\infty {\rm e}^{-\gamma
t/2}\left|C(x;\gamma)\right|\widehat\Delta(d\gamma)\right)^2
\\
&&\hskip1.5cm
\leq
\int_0^\infty {\rm e}^{-\gamma t/2}\left(C(x;\gamma)\right)^2\widehat\Delta(d\gamma)
\int_0^\infty {\rm e}^{-\gamma t/2}\widehat\Delta(d\gamma)
\\
&&\hskip1.5cm
=
\hat p(t/2;x,x)
\int_0^\infty {\rm e}^{-\gamma t/2}\widehat\Delta(d\gamma).
\end{eqnarray*}
Clearly,
$
\hat p(t/2;x,x)<\infty
$
and, by (\ref{rep}),
$
\int_0^\infty {\rm e}^{-\gamma t/2}\widehat\Delta(d\gamma)<\infty.
$
These estimates allow us to use the Lebesgue dominated
convergence theorem, and since (cf. (\ref{e2016}))
$$
\lim_{y\to 0}C(y;\gamma)/S(y)= C\,'(0;\gamma)=1,
$$
the proof of (i) is complete. Representation
(\ref{krein21}) can be proved similarly using formulas (\ref{v00}),
(\ref{krein2}), (\ref{e2016}) and
the estimates derived above. We leave the details to the reader.
\end{proof}
\begin{remark}
Consider
\begin{eqnarray*}
&&
\hskip-1.7cm
\int_0^\infty (1 \wedge t) \,\dot\nu(t)\,dt =\int_0^\infty dt\, (1 \wedge
t)
\int_0^\infty\widehat\Delta(d\gamma)\, {\rm e}^{-\gamma t}
\\
&&
\hskip1.6cm
=
\int_0^\infty\widehat\Delta(d\gamma) \int_0^\infty dt\, (1 \wedge
t)\, {\rm e}^{-\gamma t}.
\end{eqnarray*}
A straightforward integration yields
$$
\int_0^\infty (1 \wedge
t)\, {\rm e}^{-\gamma t}\,dt= \frac 1{\gamma^2}\left(1-{\rm e}^{-\gamma}\right),
$$
and, consequently, $(\ref{rep})$ is equivalent with
(cf. \cite{knight81})
$$
\int_0^\infty (1 \wedge t) \,\dot\nu(t)\,dt<\infty,
$$
which is the crucial property of
the L\'evy measure of a subordinator.
For (\ref{repdelta}), see \cite{kackrein74} p. 82
and \cite{kuchlersalminen89}.
\end{remark}
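As an informal numerical sanity check of the elementary integral used in the remark (not part of the original argument), one can verify $\int_0^\infty(1\wedge t)\,{\rm e}^{-\gamma t}\,dt=(1-{\rm e}^{-\gamma})/\gamma^2$ with SciPy quadrature; the split of the integration range at $t=1$ is only for quadrature accuracy.

```python
import numpy as np
from scipy.integrate import quad

def lhs(gamma):
    # Integral of (1 ∧ t) e^{-γt} over (0, ∞), split at t = 1
    # where the integrand changes from t e^{-γt} to e^{-γt}.
    part1 = quad(lambda t: t * np.exp(-gamma * t), 0, 1)[0]
    part2 = quad(lambda t: np.exp(-gamma * t), 1, np.inf)[0]
    return part1 + part2

def rhs(gamma):
    # Closed form obtained by the straightforward integration in the remark.
    return (1.0 - np.exp(-gamma)) / gamma ** 2

for g in (0.1, 1.0, 5.0):
    assert abs(lhs(g) - rhs(g)) < 1e-8
```

The same computation, integrated against $\widehat\Delta(d\gamma)$, is what links finiteness of $\int_0^\infty(1\wedge t)\,\dot\nu(t)\,dt$ to condition (\ref{rep}).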
\begin{example}
\label{example1}
Let $R=\{R_t: t\geq 0\}$ and $\widehat R=\{\widehat R_t: t\geq 0\}$ be Bessel processes of dimension $0<\delta<2$
reflected at 0 and killed at 0, respectively. We
compute explicit spectral representations associated with $R$ and
$\widehat R.$
From, e.g., \cite{borodinsalminen02} p. 133
the following information concerning $R$ and $\widehat R$ can be found:
\begin{description}
\item Speed measure
\begin{equation}
\label{m}
m(dx)=2\,x^{1-2\alpha}\, dx,\qquad \alpha:=(2-\delta)/2.
\end{equation}
\item Scale function
\begin{equation}
\label{s}
S(x)= \frac{1}{2\alpha}\,x^{2\alpha}.
\end{equation}
\item Transition density of $R$ (w.r.t. $m$)
\begin{equation}
\label{p}
p(t;x,y)= \frac 1 {2t}\, (xy)^\alpha\exp\left(-\frac{x^2+y^2}{2t}\right)
I_{-\alpha}\left(\frac{xy}{t}\right),\quad x,y>0.
\end{equation}
\item Transition density of $\widehat R$ (w.r.t. $m$)
\begin{equation}
\label{hatp}
\hat p(t;x,y)= \frac 1 {2t}\, (xy)^\alpha\exp\left(-\frac{x^2+y^2}{2t}\right)
I_{\alpha}\left(\frac{xy}{t}\right),\quad x,y>0.
\end{equation}
\end{description}
To find the Krein measure $\Delta$ associated with $R$ we exploit
formulas (\ref{krein0}) and (\ref{p}) with $x=y=0$ and use
$$
I_\nu(z)\,\sim\,\frac 1{\Gamma(\nu+1)}\left(\frac z2\right)^\nu,\quad
z\to 0,
$$
to obtain
$$
p(t;0,0)=\lim_{x,y\to 0}p(t;x,y)=
\frac{t^{-(1-\alpha)}}{2^{1-\alpha}\,\Gamma(1-\alpha)}=\int_0^\infty {\rm e}^{-\gamma
t}\,\Delta(d\gamma).
$$
Inverting the Laplace transform yields
\begin{equation}
\label{e241}
\Delta(d\gamma)=\frac{\gamma^{-\alpha}\,d\gamma}{2^{1-\alpha}\,(\Gamma(1-\alpha))^2}.
\end{equation}
We apply formulas (\ref{e2150}), (\ref{m}), and (\ref{s}) to find the function $A(x;\gamma),$
and, hence, compute first directly via (\ref{e210})
$$
A_n(x)=\frac{\Gamma(1-\alpha)\,x^{2n}}{2^n\,\Gamma(n+1)\,\Gamma(n+1-\alpha)},\quad n=0,1,2,\dots.
$$
Consequently, after some manipulations, we have
$$
A(x;\gamma)=\Gamma(1-\alpha)\,2^{-\alpha}\,\left(x\sqrt{2\gamma}\right)^\alpha\,J_
{-\alpha}\left(x\sqrt{2\gamma}\right),
$$
where $J$ denotes the usual Bessel function of the first kind, i.e.,
$$
J_\nu(z)=\sum_{n=0}^\infty \frac{(-1)^n(z/2)^{\nu+2n}}{\Gamma(n+1)\,\Gamma(\nu+ n+1)},
$$
and, finally, putting the pieces together in (\ref{krein0}) yields
\begin{equation}
\label{e2411}
p(t;x,y)=\frac 12\,\int_0^\infty {\rm e}^{-\gamma t}\, (xy)^{\alpha}\,J_
{-\alpha}\left(x\sqrt{2\gamma}\right)\,J_
{-\alpha}\left(y\sqrt{2\gamma}\right)\, d\gamma.
\end{equation}
Next we compute the Krein measure $\widehat\Delta$ associated with
$\widehat R.$ For this, we deduce from (\ref{f00}), (\ref{v00}),
(\ref{s}), and (\ref{hatp})
\begin{equation}
\label{e24115}
\dot\nu(t)=\lim_{x,y\to 0}\frac{\hat p(t;x,y)}{S(x)S(y)}
=\frac{2^{1-\alpha}\, \alpha\, t^{-(1+\alpha)}}{\Gamma(\alpha)}=\int_0^\infty {\rm e}^{-\gamma
t}\,\widehat\Delta(d\gamma),
\end{equation}
and, consequently, inverting the Laplace transform gives
\begin{equation}
\label{e2412}
\widehat \Delta(d\gamma)=\frac{2^{1-\alpha}\, \gamma^{\alpha}}{(\Gamma(\alpha))^2}\, d\gamma.
\end{equation}
Similarly as above, we apply formula (\ref{e215}) to find the function $C(x;\gamma),$
and, hence, compute first directly via (\ref{e21})
$$
C_n(x)=\frac{\Gamma(\alpha)\,x^{2\alpha+2n}}{2^{n+1}\,\Gamma(n+1)\,\Gamma(n+1+\alpha)},\quad n=0,1,2,\dots.
$$
Consequently, after some manipulations,
$$
C(x;\gamma)=\Gamma(\alpha)\,2^{(\alpha-2)/2}\,\gamma^{-\alpha/2}\,x^\alpha\, J_{\alpha}\left(x\sqrt{2\gamma}\right),
$$
and
\begin{equation}
\label{e2413}
\hat p(t;x,y)=\frac 12\,\int_0^\infty {\rm e}^{-\gamma t}\, (xy)^{\alpha}\,J_
{\alpha}\left(x\sqrt{2\gamma}\right)\,J_
{\alpha}\left(y\sqrt{2\gamma}\right)\, d\gamma.
\end{equation}
See also Karlin and Taylor \cite{karlintaylor81} p. 338.
\end{example}
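The spectral integral (\ref{e2413}) can be checked against the closed-form density (\ref{hatp}) numerically; the values of $\alpha$, $t$, $x$, $y$ below are illustrative choices, not taken from the paper, and the agreement reflects Weber's second exponential integral for Bessel functions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, iv

# Illustrative parameters: delta = 1/2, so alpha = (2 - delta)/2 = 3/4.
alpha = 0.75
t, x, y = 0.7, 0.9, 1.3

# Spectral side: (1/2) ∫_0^∞ e^{-γt} (xy)^α J_α(x√(2γ)) J_α(y√(2γ)) dγ
integrand = lambda g: np.exp(-g * t) * (x * y) ** alpha \
    * jv(alpha, x * np.sqrt(2 * g)) * jv(alpha, y * np.sqrt(2 * g))
spectral = 0.5 * quad(integrand, 0, np.inf, limit=200)[0]

# Closed form: killed Bessel transition density w.r.t. the speed measure m
closed = (x * y) ** alpha / (2 * t) \
    * np.exp(-(x ** 2 + y ** 2) / (2 * t)) * iv(alpha, x * y / t)

assert abs(spectral - closed) < 1e-5
```

The analogous check for (\ref{e2411}) uses $J_{-\alpha}$ and $I_{-\alpha}$ in place of $J_\alpha$ and $I_\alpha$.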
\begin{example}
\label{example11} Taking above $\alpha=1/2$ yields formulas for
Brownian motion. Recall
$$
J_{1/2}(z)=\sqrt{\frac 2{\pi z}}\,\sin{z},
\quad \text{and}\quad
J_{-1/2}(z)=\sqrt{\frac 2{\pi z}}\,\cos{z}.
$$
Consequently, from (\ref{e2411})
\begin{eqnarray}
\label{ex1}
&&
p(t;x,y)=\frac 1{\pi}\int_0^\infty {\rm e}^{-\gamma\,t}\cos(x\sqrt{2\gamma})\cos(y\sqrt{2\gamma})\, \frac{d\gamma}{\sqrt{2\gamma}}
\\
&&
\nonumber
\hskip1.6cm
=\frac{1}{2\sqrt{2\pi t}}\left({\rm e}^{-(x-y)^2/(2t)}+{\rm e}^{-(x+y)^2/(2t)}\right),
\end{eqnarray}
and from (\ref{e2413})
\begin{eqnarray*}
&&
\hat p(t;x,y)=\frac 1\pi\int_0^\infty {\rm e}^{-\gamma
t}\,\frac{\sin(x\sqrt{2\gamma})}{\sqrt{2\gamma}}\,
\frac{\sin(y\sqrt{2\gamma})}{\sqrt{2\gamma}}\,{\sqrt{2\gamma}}\,d\gamma
\\
&&
\hskip1.6cm
=\frac{1}{2\sqrt{2\pi t}}\left({\rm e}^{-(x-y)^2/(2t)}-{\rm e}^{-(x+y)^2/(2t)}\right).
\end{eqnarray*}
Moreover,
\begin{eqnarray*}
&&
f_{x0}(t)
=
\frac 1\pi\int_0^\infty\,{\rm e}^{-\gamma t}\,\sin(x\sqrt{2\gamma})\, d\gamma
=\frac{x}{t^{\,3/2}\sqrt{2\pi}}\,{\rm e}^{-x^2/(2t)},
\end{eqnarray*}
and
\begin{eqnarray}
\label{ex2}
&&
\dot\nu(t)=
\frac 1\pi\int_0^\infty {\rm e}^{-\gamma t} \,\sqrt{2\gamma}\,d\gamma
=\frac1{t^{\,3/2}\sqrt{2\pi}}.
\end{eqnarray}
From (\ref{ex1}) we obtain $\Delta(d\gamma)=d\gamma/(\pi\sqrt{2\gamma}),$ and
from (\ref{ex2}) $\widehat\Delta(d\gamma)=\sqrt{2\gamma}\,d\gamma/\pi.$
See also Karlin and Taylor \cite{karlintaylor81} p. 337 and 393, and \cite{borodinsalminen02} p. 120.
\end{example}
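The Brownian formula (\ref{ex1}) is easy to verify numerically. A small sketch, with illustrative values of $t$, $x$, $y$: after the substitution $u=\sqrt{2\gamma}$ (so $d\gamma/\sqrt{2\gamma}=du$) the spectral side becomes $(1/\pi)\int_0^\infty {\rm e}^{-u^2t/2}\cos(xu)\cos(yu)\,du$, which is compared with the Gaussian expression for the reflected Brownian transition density w.r.t. $m(dx)=2\,dx$.

```python
import numpy as np
from scipy.integrate import quad

t, x, y = 0.5, 0.4, 1.1  # illustrative values

# Spectral side after the substitution u = sqrt(2*gamma):
# (1/pi) ∫_0^∞ e^{-u^2 t/2} cos(xu) cos(yu) du
integrand = lambda u: np.exp(-u ** 2 * t / 2) * np.cos(x * u) * np.cos(y * u)
spectral = quad(integrand, 0, np.inf, limit=200)[0] / np.pi

# Gaussian side: reflected Brownian density w.r.t. m(dx) = 2 dx
gaussian = (np.exp(-(x - y) ** 2 / (2 * t)) + np.exp(-(x + y) ** 2 / (2 * t))) \
    / (2 * np.sqrt(2 * np.pi * t))

assert abs(spectral - gaussian) < 1e-6
```

Replacing the cosines by sines and the plus by a minus checks the killed density $\hat p$ in the same way.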
\begin{proposition}
\label{prop2}
(i)\ The complementary ${\bf P}_x$-distribution function of $H_0$ has the spectral representation
\begin{equation}
\label{krein4}
{\bf P}_x(H_0>t)=\int_0^\infty \frac 1\gamma\, {\rm e}^{-\gamma t}\,C(x;\gamma)\,\widehat\Delta(d\gamma).
\end{equation}
(ii)\ The L\'evy measure has the spectral representation
\begin{equation}
\label{krein41}
\nu((t,\infty))=\int_t^{\infty} \dot\nu(s)\,ds =\int_0^\infty \frac 1\gamma\, {\rm e}^{-\gamma t}\, \widehat\Delta(d\gamma).
\end{equation}
\end{proposition}
\begin{proof}
Formulas (\ref{krein4}) and (\ref{krein41}) follow from (\ref{krein2})
and (\ref{krein21}), respectively, using Fubini's theorem. Obtaining
(\ref{krein41}) is straightforward, but for (\ref{krein4}) the
applicability of Fubini's theorem needs to be justified.
Indeed, from (\ref{krein2}) we have informally
\begin{eqnarray*}
&&
{\bf P}_x(H_0>t)=\int_t^\infty f_{x0}(s)\,ds =\int_t^\infty
ds\int_0^\infty\widehat\Delta(d\gamma)\, {\rm e}^{-\gamma s}\,C(x;\gamma)
\\
&&
\hskip2.1cm
=\int_0^\infty\widehat\Delta(d\gamma)\int_t^\infty
ds\, {\rm e}^{-\gamma s}\,C(x;\gamma),
\end{eqnarray*}
leading to (\ref{krein4}). To make this rigorous, we verify that for
all $x>0$
$$
\int_0^\infty \frac 1\gamma\, {\rm e}^{-\gamma t} \,|C(x;\gamma)|\,\widehat\Delta(d\gamma)<\infty.
$$
Consider first for $\varepsilon>0$
$$
K_1:=\int_0^\varepsilon \frac 1\gamma\, {\rm e}^{-\gamma t}\,|C(x;\gamma)|\,\widehat\Delta(d\gamma).
$$
By the basic estimate (\ref{e23}), for $0<\gamma<\varepsilon$
$$
|C(x;\gamma)|\leq S(x)\,\exp\left(\varepsilon\,\int_0^x M(z)\,dS(z)\right),
$$
and, consequently,
$$
K_1\leq S(x)\,\exp\left(\varepsilon\,\int_0^x M(z)\,dS(z)\right)
\int_0^\infty \frac 1\gamma\, {\rm e}^{-\gamma
t}\,\widehat\Delta(d\gamma)<\infty
$$
by (\ref{rep}). Next, let
$$
K_2:=\int_\varepsilon^\infty \frac 1\gamma\, {\rm e}^{-\gamma t}\,|C(x;\gamma)|\,\widehat\Delta(d\gamma).
$$
By the Cauchy--Schwarz inequality
$$
K_2^2\leq \int_\varepsilon^\infty \gamma^{-2} \, {\rm e}^{-\gamma
t}\,\widehat\Delta(d\gamma)\
\int_\varepsilon^\infty \, {\rm e}^{-\gamma t}\,(C(x;\gamma))^2\,\widehat\Delta(d\gamma).
$$
The first term on the right-hand side is finite by (\ref{rep}). For
the second term we have
\begin{eqnarray*}
&&
\int_\varepsilon^\infty \, {\rm e}^{-\gamma t}\,(C(x;\gamma))^2\,\widehat\Delta(d\gamma)
\leq
\int_0^\infty \, {\rm e}^{-\gamma t}\,(C(x;\gamma))^2\,\widehat\Delta(d\gamma)
\\
&&
\hskip4.7cm
= \hat p(t;x,x)<\infty.
\end{eqnarray*}
The proof of (\ref{krein4}) is now complete.
\end{proof}
\section{Asymptotic behavior of the distribution of $L_t$ as $t\to+\infty$}
\label{sec3}
We make the following assumption concerning the
L\'evy measure of the inverse local time process
$\{\tau_\ell\,:\,\ell\geq 0\},$ valid throughout the rest of the paper (if nothing else is stated):
\begin{description}
\item{(A)}\hskip3mm {\sl The probability distribution function
$$
x\mapsto \frac{\nu((1,x])}{\nu((1,+\infty))},\quad x>1,
$$
is assumed to be subexponential.}
\end{description}
It is known, see Sato \cite{sato99} p. 164, that Assumption (A) is
equivalent with
\begin{equation}
\label{3e00}
{\bf P}(\tau_\ell\geq t) \,\mathop{\sim}_{t\to +\infty}\,
\ell\,\nu((t,+\infty))\quad \forall\ \ell>0,
\end{equation}
and also with
\begin{equation}
\label{3e000}
{\text{\sl The\ law\ of\ }} \tau_\ell\ {\text{\sl
is\ subexponential\ for\ every\ }} \ell>0.
\end{equation}
\begin{proposition}
\label{prop31}
For any fixed $\ell>0$, it holds
\begin{equation}
\label{3e0}
{\bf P}_0(L_t\leq \ell) \,\mathop{\sim}_{t\to +\infty}\, \ell\,\nu((t,+\infty)).
\end{equation}
\end{proposition}
\begin{proof}
The claim follows immediately from (\ref{3e00}) since
$$
{\bf P}_0(L_t\leq \ell)= {\bf P}(\tau_\ell\geq t).
$$
\end{proof}
Our goal is to study the asymptotic behavior of $L_t$ under ${\bf P}_x.$
For this, we first analyze the distribution of the hitting
time $H_0.$ The proof of the next proposition is based on Lemma
\ref{lemma31}, stated and proved in Section \ref{20} below.
\begin{proposition}
\label{prop32}
For any $x> 0,$ it holds
\begin{equation}
\label{3e1}
{\bf P}_x(H_0>t) \,\mathop{\sim}_{t\to +\infty}\, S(x)\,\nu((t,+\infty)).
\end{equation}
\end{proposition}
\begin{proof}
Recall from (\ref{krein4}) and (\ref{krein41}) in Proposition
\ref{prop2} the spectral representations
\begin{equation}
\label{krein4n}
{\bf P}_x(H_0>t)=\int_0^\infty \frac 1\gamma\, {\rm e}^{-\gamma t}\,C(x;\gamma)\,\widehat\Delta(d\gamma)
\end{equation}
and
\begin{equation}
\label{krein41n}
\nu((t,+\infty))=\int_0^\infty \frac 1\gamma\, {\rm e}^{-\gamma t}\, \widehat\Delta(d\gamma).
\end{equation}
We apply Lemma \ref{lemma31} with $\mu(d\gamma)=\widehat
\Delta(d\gamma)/\gamma,$ $g_1(\gamma)=C(x;\gamma)$ and $g_2(\gamma)=S(x).$ Then
the mapping $t\mapsto {\bf P}_x(H_0>t)$ has the r\^ole of
$f_1$ and $t\mapsto S(x)\,\nu((t,+\infty))$ the r\^ole of $f_2.$
Condition (\ref{c1}) takes the form
$$
\lim_{t\to\infty}S(x)\,\nu((t,+\infty))\,{\rm e}^{bt}=0,
$$
and this holds by Assumption (A) and (\ref{ee051}).
Moreover, condition (\ref{c2}) now means
$$
\lim_{\gamma\to 0}C(x;\gamma)/S(x)=1,
$$
and this is true since using estimate (\ref{e22}) in (\ref{e215}) we obtain
$$
\left|\frac{C(x;\gamma)}{S(x)}-1\right|\leq \alpha\,|\gamma|\, {\rm e}^{\beta\,|\gamma|}
$$
with some $\alpha$ and $\beta$ depending only on $x.$
Consequently, (\ref{c3}) in
Lemma \ref{lemma31} holds and, hence, the proof of the proposition is complete.
\end{proof}
The main result of this section is as follows.
\begin{proposition}
\label{prop33}
For any $x> 0$ and $\ell>0,$ it holds
\begin{equation}
\label{3e2}
{\bf P}_x(L_t\leq \ell) \,\mathop{\sim}_{t\to +\infty}\, (S(x)+\ell)\,\nu((t,+\infty)).
\end{equation}
\end{proposition}
\begin{proof}
Since $L_t$ increases only when $X$ is at 0 we may write
\begin{eqnarray*}
&&
{\bf P}_x(L_t\leq \ell) = {\bf P}_x(H_0>t) + {\bf P}_x(H_0< t\,,\, L_t\leq \ell)
\\
&&
\hskip2.1cm
=
{\bf P}_x(H_0>t) + {\bf P}_x(H_0< t\,,\, L_{t-H_0}\circ\theta_{H_0}\leq \ell)
\\
&&
\hskip2.1cm
=
{\bf P}_x(H_0>t) + {\bf P}_x(H_0< t\,,\, t-H_0\leq \hat \tau_\ell),
\end{eqnarray*}
where $\theta_\cdot$ denotes the usual shift operator and, by the strong Markov property, $\hat
\tau_\ell$ is a subordinator starting from 0, independent of $H_0$ and
identical in law with $\tau_\ell$ (under ${\bf P}_0$).
Consequently,
\begin{eqnarray*}
&&
\hskip-.5cm
{\bf P}_x(L_t\leq \ell)={\bf P}_x(H_0>t) + {\bf P}_x( \hat\tau_\ell+H_0\geq t) -{\bf P}_x( \hat\tau_\ell+H_0\geq t\,,\, H_0> t)
\\
&&
\hskip1.6cm
=
{\bf P}_x( \hat\tau_\ell+H_0\geq t).
\end{eqnarray*}
We use Lemma \ref{prop0} and take therein $F$ to be the ${\bf P}_x$-distribution
of $\hat\tau_\ell$ (which is the same as the ${\bf P}_0$-distribution of
$\tau_\ell$) and $G$ the ${\bf P}_x$-distribution of $H_0.$ Then, by
(\ref{3e000}), $F$ is subexponential, and from (\ref{3e0}) and
(\ref{3e1}) we have
$$
\lim_{t\to\infty}\frac{{\bf P}_x(H_0> t)}{{\bf P}_x( \hat\tau_\ell> t)}=\frac{S(x)}\ell>0.
$$
Consequently, by Lemma \ref{prop0},
$$
\lim_{t\to\infty}\frac{{\bf P}_x( \hat\tau_\ell+H_0> t)}{{\bf P}_x(
\hat\tau_\ell> t)+{\bf P}_x( H_0> t)}
=1,
$$
in other words,
\begin{eqnarray*}
&&
{\bf P}_x(L_t\leq \ell)
\,\mathop{\sim}_{t\to \infty}\,
{\bf P}_x( H_0> t)+{\bf P}_x(
\hat\tau_\ell> t)
\\
&&
\hskip2cm
\,\mathop{\sim}_{t\to \infty}\,
S(x)\,\nu((t,\infty))+\ell\,\nu((t,\infty)),
\end{eqnarray*}
as claimed.
\end{proof}
\begin{example}
\label{bessel1}
{\rm For a Bessel process of dimension $d\in(0,2)$ reflected at 0
we have from (\ref{e24115}) in Example \ref{example1}
$$
\nu((t,+\infty))=\frac{2^{1-\alpha}}{\Gamma(\alpha)}\,t^{-\alpha},
$$
and Assumption (A) holds by Lemma \ref{slovar}. Consequently,
$$
{\bf P}_x(L_t<\ell)
\,\mathop{\sim}_{t\to \infty}\,
(S(x)+\ell)\,\nu((t,+\infty)),
$$
where the scale function is as in Example \ref{example1}. Taking here
$\alpha=1/2$ gives formulae for reflecting Brownian motion. We remark
that our normalization of the local time (see (\ref{e000})) is different from the one
used in Roynette
et al. \cite{rvyV} Section 2. In our case, from (\ref{e1}) and
(\ref{e24115}) it follows (cf. also \cite{borodinsalminen02} p. 133,
where the resolvent kernel is explicitly given) that
\begin{equation}
\label{cf1}
{\bf E}_0\left(\exp(-\lambda
\tau_\ell)\right)=\exp\left(-\ell\, \frac{\Gamma(1-\alpha)}{\Gamma(\alpha)}\,
2^{1-\alpha}\, \lambda^{\alpha}\right).
\end{equation}
Comparing now formula (2.11) in
\cite{rvyV} with (\ref{cf1}), it is seen that
$$
\widehat L_t = 2\alpha\, L_t,
$$
where $\widehat L$ denotes the local time used in \cite{rvyV}.
}
\end{example}
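The Laplace exponent in (\ref{cf1}) can be recovered from the L\'evy density (\ref{e24115}) via the L\'evy--Khintchine formula for subordinators, $-\log {\bf E}_0({\rm e}^{-\lambda\tau_\ell})=\ell\int_0^\infty(1-{\rm e}^{-\lambda t})\,\dot\nu(t)\,dt$. A numerical sketch (the values of $\alpha$ and $\lambda$ below are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

alpha, lam = 0.3, 1.7  # illustrative values, 0 < alpha < 1

# Density of the Lévy measure from (e24115)
nu_dot = lambda t: 2 ** (1 - alpha) * alpha * t ** (-(1 + alpha)) / G(alpha)

# Laplace exponent per unit of local time, split at t = 1 for accuracy
f = lambda t: (1 - np.exp(-lam * t)) * nu_dot(t)
exponent = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]

# Closed form appearing in (cf1)
closed = G(1 - alpha) / G(alpha) * 2 ** (1 - alpha) * lam ** alpha

assert abs(exponent - closed) < 1e-5
```

This uses only the standard identity $\int_0^\infty(1-{\rm e}^{-\lambda t})\,t^{-1-\alpha}\,dt=\Gamma(1-\alpha)\lambda^\alpha/\alpha$.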
\section{Penalization of the diffusion with its local time}
\label{sec6}
\subsection{General theorem of penalization}
\label{sec61}
Recall that $(C,{\cal C},\{{\cal C}_t\})$ denotes the canonical space of continuous functions, and let ${\bf P}$ be a probability measure defined therein. In the next theorem we present the general penalization result which we then specialize to the penalization with local time.
\begin{theorem}
\label{thm61}
Let $\{F_t: t\geq 0\}$ be a stochastic process (a so-called weight process) satisfying
$$
0<{\bf E}(F_t)<\infty\quad \forall\ t>0.
$$
Suppose that for any $u\geq 0$
\begin{equation}
\label{61}
\lim_{t\to\infty}\frac{{\bf E}(F_t\,|\,{\cal C}_u)}{{\bf E}(F_t)}=:M_u
\end{equation}
exists a.s. and
\begin{equation}
\label{62}
{\bf E}(M_u)=1.
\end{equation}
Then
\begin{description}
\item{1)} $M=\{M_u: u\geq 0\}$ is a non-negative martingale with $M_0=1,$
\item{2)} for any $u\geq 0$ and $\Lambda\in{\cal C}_u$
\begin{equation}
\label{63}
\lim_{t\to\infty}\frac{{\bf E}({\bf 1}_{\Lambda}\,F_t)}{{\bf E}(F_t)}={\bf E}({\bf 1}_{\Lambda}\,M_u)=:{\bf Q}^{(u)}(\Lambda),
\end{equation}
\item{3)} there exists a probability measure ${\bf Q}$ on $(C,{\cal C})$ such
that for any $u>0$
$$
{\bf Q}(\Lambda)={\bf Q}^{(u)}(\Lambda)\qquad \forall\ \Lambda\in{\cal C}_u.
$$
\end{description}
\end{theorem}
\begin{proof} We have (cf. Roynette et al. \cite{rvyII})
$$
\frac{{\bf E}({\bf 1}_{\Lambda_u}\,F_t)}{{\bf E}(F_t)}={\bf E}\left({\bf 1}_{\Lambda_u}\,\frac{{\bf E}(F_t\,|\,{\cal C}_u)}{{\bf E}(F_t)}\right),
$$
and by (\ref{61}) and (\ref{62}) the family of random variables
$$
\left\{\frac{{\bf E}(F_t\,|\,{\cal C}_u)}{{\bf E}(F_t)}\,:\, t\geq 0\right\}
$$
is uniformly integrable by Scheff\'e's lemma (see, e.g., Meyer
\cite{meyer66}),
and, hence, (\ref{63}) holds in ${\bf L}^1(\Omega).$ To verify the
martingale property of $M$, notice that if $u<v$ then
$\Lambda_u\in{\cal C}_v$ and by (\ref{63}) we also have
$$
\lim_{t\to\infty}\frac{{\bf E}({\bf 1}_{\Lambda_u}\,F_t)}{{\bf E}(F_t)}
={\bf E}({\bf 1}_{\Lambda_u}\,M_v).
$$
Consequently,
$$
{\bf E}({\bf 1}_{\Lambda_u}\,M_v) ={\bf E}({\bf 1}_{\Lambda_u}\,M_u),
$$
i.e., $M$ is a martingale. Since the family $\{{\bf Q}^{(u)}: u\geq 0\}$ of
probability measures is consistent, claim 3) follows from Kolmogorov's
existence theorem (see, e.g., Billingsley \cite{billingsley68} p. 228--230).
\end{proof}
\subsection{Penalization with local time}
\label{sec62}
We are interested in analyzing the penalizations of the diffusion $X$ with the weight process
given by
\begin{equation}
\label{635}
F_t:= h(L_t),\quad t\geq 0,
\end{equation}
with a suitable function $h.$
In particular, if $h={\bf 1}_{[0,\ell)}$ for some fixed $\ell>0$ then
$F_t={\bf 1}_{\{L_t<\ell\}}.$ In the next theorem we prove, under some
assumptions on $h,$ the validity of the basic penalization hypotheses
(\ref{61}) and (\ref{62})
for the weight process $\{F_t: t\geq 0\}.$ The explicit form of the
corresponding martingale $M^h$ is given. In Section 6.3 it is
seen that $M^h$ remains a martingale for more general functions
$h,$ and properties of $X$ under the probability measure induced by
$M^h$ are discussed.
In Roynette et al. \cite{rvyV} penalizations of this kind via local times are studied for Bessel processes with dimension
parameter $d\in(0,2)$. Our work generalizes Theorem 1.5 in
\cite{rvyV} to diffusions
with subexponential L\'evy measure.
\begin{theorem}
\label{thm62}
Let $h:[0,\infty)\mapsto [0,\infty)$ be a Borel measurable,
right-continuous and non-increasing
function with compact support in $[0,K]$ for some given $K>0.$ Assume
also that
$$
\int_0^K h(y)\,dy =1,
$$
and define for $x\geq 0$
$$
H(x):=\int_0^x h(y)\,dy.
$$
Then for any $u\geq 0$
\begin{equation}
\label{64}
\lim_{t\to\infty}\frac{{\bf E}_0(h(L_t)\,|\,{\cal C}_u)}{{\bf E}_0(h(L_t))}=S(X_u)h(L_u)+1-
H(L_u)=:M^h_u \qquad \text{a.s.}
\end{equation}
and
\begin{equation}
\label{645}
{\bf E}_0\left(M^h_u\right)=1.
\end{equation}
Consequently, statements 1), 2) and 3) in Theorem \ref{thm61} hold.
\end{theorem}
\begin{proof}
{\sl\ I)} We first prove (\ref{64}).
\break
{\sl a)} To begin with, the following result on
the behavior of the denominator in (\ref{64}) is needed:
for any $a\geq 0$
\begin{equation}
\label{65}
{\bf E}_a(h(L_t))\,\mathop{\sim}_{t\to +\infty}\,
\left(S(a)h(0)+1\right)\,\nu((t,\infty)).
\end{equation}
To show this, let $\mu$ denote the measure induced by $h,$ i.e., $\mu(dy)=-dh(y).$ Then
\begin{equation}
\label{636}
h(x)=\int_{(x,K]}\mu(dy)=\int_{(0,K]}{\bf 1}_{\{y>x\}}\mu(dy),
\end{equation}
and, consequently,
$$
{\bf E}_a(h(L_t))={\bf E}_a\left( \int_{(0,K]}{\bf 1}_{\{\ell>L_t\}}\mu(d\ell)\right)
= \int_{(0,K]}{\bf P}_a(L_t<\ell)\,\mu(d\ell).
$$
By Proposition \ref{prop33}
$$
\lim_{t\to\infty}\frac{{\bf P}_a(L_t<\ell)}{\nu((t,\infty))}=S(a)+\ell.
$$
Moreover, for $\ell\leq K$
$$
\frac{{\bf P}_a(L_t<\ell)}{\nu((t,\infty))}\leq
\frac{{\bf P}_a(L_t<K)}{\nu((t,\infty))}\to S(a)+K\quad \text{as}\ t\to\infty,
$$
and, by the dominated convergence theorem,
$$
\lim_{t\to\infty}\int_{(0,K]}\frac{{\bf P}_a(L_t<\ell)}{\nu((t,\infty))}\,\mu(d\ell)
=\int_{(0,K]}(S(a)+\ell)\,\mu(d\ell).
$$
Hence,
\begin{equation}
\label{66}
{\bf E}_a(h(L_t))\,\mathop{\sim}_{t\to +\infty}\,\left(\int_{(0,K]}(S(a)+\ell)\,\mu(d\ell)\right) \,\nu((t,\infty)),
\end{equation}
and the integral in (\ref{66}) can be evaluated as follows:
\begin{eqnarray*}
&&
\hskip-1cm
\int_{(0,K]}(S(a)+\ell)\,\mu(d\ell)=S(a)\int_{(0,K]}\,\mu(d\ell)+\int_{(0,K]}\ell\,\mu(d\ell)
\\
&&
\hskip3cm
=S(a)h(0)+\int_{(0,K]}\mu(d\ell)\int_0^\ell du
\\
&&
\hskip3cm
=S(a)h(0)+\int_0^K du\int_{(u,K]}\mu(d\ell)
\\
&&
\hskip3cm
=S(a)h(0)+\int_0^K h(u)\, du
\\
&&
\hskip3cm
=S(a)h(0)+1.
\end{eqnarray*}
This concludes the proof of (\ref{65}).
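As a small numerical aside (not part of the proof), the evaluation $\int_{(0,K]}(S(a)+\ell)\,\mu(d\ell)=S(a)h(0)+1$ can be checked on a concrete weight function. The choice $h(x)=2(1-x)$ on $[0,1]$ and the value of $S(a)$ below are purely illustrative.

```python
from scipy.integrate import quad

# Illustrative choice (not from the paper): h(x) = 2(1 - x) on [0, 1],
# so K = 1, ∫_0^K h = 1, h(0) = 2, and mu(dl) = -dh(l) = 2 dl.
S_a = 0.37  # arbitrary illustrative value of the scale function S(a)

lhs = quad(lambda l: (S_a + l) * 2.0, 0, 1)[0]  # ∫_(0,K] (S(a)+l) mu(dl)
rhs = S_a * 2.0 + 1.0                           # S(a) h(0) + 1

assert abs(lhs - rhs) < 1e-9
```

The Fubini step in the display above is exactly what this computation instantiates: $\int_{(0,K]}\ell\,\mu(d\ell)=\int_0^K h(u)\,du=1$ for this $h$.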
\break
{\sl b)} To proceed with the proof of (\ref{64}),
recall that $\{L_s\,:\,s\geq 0\}$ is an additive functional, that is,
$L_t=L_u+L_{t-u}\circ \theta_u$ for $t>u,$ where $\theta_u$ is the usual shift operator.
Hence, by the Markov property, for $t>u$
\begin{equation}
\label{67}
{\bf E}_0(h(L_t)\,|\,{\cal C}_u)={\bf E}_0(h(L_u+L_{t-u}\circ \theta_u)\,|\,{\cal C}_u)=H(X_u,L_u,t-u)
\end{equation}
with
$$
H(a,\ell,r):={\bf E}_a(h(\ell+L_r)).
$$
By (\ref{65}), since $x\mapsto h(\ell+x)$ is non-increasing with compact support,
$$
H(a,\ell,r)\,\mathop{\sim}_{r\to +\infty}\, \left(S(a)h(\ell)+\int_0^\infty h(\ell+u)\,du\right)\,\nu((r,\infty)).
$$
Bringing together (\ref{67}) and (\ref{65})
with $a=0,$ and noting that $\nu((t-u,\infty))\sim\nu((t,\infty))$ as $t\to\infty$ by Assumption (A), yields
$$
\lim_{t\to\infty}\frac{{\bf E}_0(h(L_t)\,|\,{\cal C}_u)}{{\bf E}_0(h(L_t))}=\frac{S(X_u)h(L_u)+\int_{L_u}^\infty h(x)\,dx}{\int_{0}^\infty h(x)\,dx},
$$
completing the proof of (\ref{64}).
\\
\noindent
{\sl II)} To verify (\ref{645}), we show that
$\{M^h_t\,:\, t\geq 0\}$ defined in (\ref{64}) is a
non-negative martingale with $M^h_0=1$ (cf. Theorem \ref{thm61},
statement {\sl 1)}).
\break
{\sl a)} First, consider the process $S(X)=\{S(X_t)\,:\,t\geq
0\}.$ Since the scale function is increasing, $S(X)$ is a
non-negative linear diffusion. Moreover, e.g., from M\'el\'eard
\cite{meleard86}, it is, in fact, a sub-martingale with the
Doob--Meyer decomposition
\begin{equation}
\label{701}
S(X_t)= \tilde Y_t+\tilde L_t,
\end{equation}
where $\tilde Y$ is a martingale and $\tilde L$ is a non-decreasing adapted
process. Because $\tilde L$ increases only when $S(X)$ is at 0 or,
equivalently, $X$ is at 0, $\tilde L$ is a local time of $X$ at
0. Consequently, $\tilde L$ is a multiple of $L$ a.s. (for the
normalization of $L,$ see (\ref{e000})), i.e., there is a non-random constant $c$ such that for
all $t\geq 0$
\begin{equation}
\label{7012}
\tilde L_t= c\, L_t.
\end{equation}
We claim that $\tilde L$ coincides with $L,$ that is, $c=1.$
To show this, recall that
$$
{\bf E}_x(L^{(y)}_t)=\int_0^t p(s;x,y)\, ds,
$$
which yields (cf. (\ref{a0}))
$$
R_ \lambda(0,0)=\int_0^\infty \lambda\,{\rm e}^{-\lambda t}\,
{\bf E}_0(L_t)\,dt.
$$
From (\ref{701}) and (\ref{7012}) we have ${\bf E}_0(L_t)={\displaystyle \frac 1c\,{\bf E}_0(S(X_t))},$ and, hence,
\begin{eqnarray}
\label{n70}
&&
\nonumber
R_ \lambda(0,0)=\frac 1 c \int_0^\infty \lambda\,{\rm e}^{-\lambda t}\,
{\bf E}_0(S(X_t))\,dt
\\
&&
\hskip1.6cm
=
\frac 1 c \int_0^\infty S(y)\,\lambda\,
R_ \lambda(0,y)\, m(dy).
\end{eqnarray}
Next recall that the resolvent kernel can be expressed as
\begin{equation}
\label{f000}
R_ \lambda(x,y)= w^{-1}_\lambda\, \psi_\lambda(x)\,
\varphi_\lambda(y),\quad 0\leq x\leq y,
\end{equation}
where $w_\lambda$ is a constant (the Wronskian) and $\varphi_\lambda$ and
$\psi_\lambda$ are the
fundamental decreasing and increasing, respectively, solutions of the generalized differential
equation
\begin{equation}
\label{f001}
\frac d{dm}\frac d{dS}\, u= \lambda u,
\end{equation}
characterized (probabilistically) by
\begin{equation}
\label{f002}
{\bf E}_x\left({\rm e}^{-\lambda H_y}\right)=\frac{R_ \lambda(x,y)}{R_
\lambda(y,y)}.
\end{equation}
Consequently, (\ref{n70}) is equivalent with
\begin{eqnarray}
\label{n701}
&&
\nonumber
\varphi_\lambda(0)=
\frac 1 c \int_0^\infty S(y)\,\lambda\,
\varphi_\lambda(y)\, m(dy)
\\
&&
\nonumber
\hskip1.2cm
=
\frac 1 c \int_0^\infty S(y)\,\frac d{dm}\frac d{dS}\, \varphi_\lambda(y)\, m(dy)
\\
\nonumber
&&
\hskip1.2cm
=
\frac 1 c \int_0^\infty dS(y)\,\int_y^\infty m(dz)\, \frac d{dm}\frac d{dS}\, \varphi_\lambda(z)
\\
&&
\hskip1.2cm
=
\frac 1 c \int_0^\infty dS(y)\left(\frac d{dS}\,
\varphi_\lambda(+\infty)-\frac d{dS}\, \varphi_\lambda(y)\right),
\end{eqnarray}
where for the third equality we have used Fubini's theorem. Next we claim that
\begin{equation}
\label{n71}
\frac d{dS}\,\varphi_\lambda(+\infty):=\lim_{x\to\infty} \frac d{dS}\,\varphi_\lambda(x)=0.
\end{equation}
To prove this, recall that the Wronskian (a constant) is given for all $z\geq 0$ by
\begin{equation}
\label{n72}
w_\lambda=\varphi_\lambda(z)\,\frac d{dS}\,
\widehat{\psi}_\lambda(z)+\widehat{\psi}_\lambda(z)\,\left( -\frac d{dS}\,\varphi_\lambda(z)\right).
\end{equation}
Notice that both terms on the right hand side are non-negative. Since the boundary point $+\infty$ is
assumed to be natural it holds that $\lim_{z\to\infty} H_z=+\infty$
a.s. and, therefore, (cf. (\ref{f002}))
$$
\lim_{z\to\infty}\widehat{\psi}_\lambda(z)=+\infty.
$$
Consequently, letting $z\to +\infty$ in (\ref{n72}) we obtain
(\ref{n71}). Now (\ref{n701}) takes the form
\begin{eqnarray*}
&&
\varphi_\lambda(0)
=
-\frac 1 c \left( \varphi_\lambda(+\infty)-\varphi_\lambda(0)\right) .
\end{eqnarray*}
This implies that $c=1$ since $\varphi_\lambda(+\infty)=0$ by the assumption that $+\infty$ is
natural (cf. (\ref{f002})).
\break
{\sl b)} To proceed with the proof that $M^h$ is a martingale,
consider first the case with continuously differentiable $h.$ Then,
applying (\ref{701}),
\begin{equation}
\label{71}
dM^h_t=h(L_t)(d\tilde Y_t+dL_t) +S(X_t)h'(L_t)dL_t-h(L_t)dL_t=h(L_t)\,d\tilde Y_t,
\end{equation}
where we have used that
$$
S(X_t)h'(L_t)dL_t=S(0)h'(L_t)dL_t=0.
$$
Consequently, $M^h$ is a continuous local martingale, and it
is a continuous martingale if for any $T>0$ the process
$\{M^h_t\,:\, 0\leq t\leq T\}$ is
uniformly integrable (u.i.). To prove this, we use again (\ref{701}) and
write
\begin{equation}
\label{72}
M^h_t=h(L_t)\tilde Y_t+h(L_t)L_t+1-H(L_t).
\end{equation}
Since $h$ is non-increasing and has a compact support in $[0,K]$
we have
$$
|h(L_t)L_t+1-H(L_t)|\leq K\,\sup_{x\in[0,K]} h(x)+\int_0^\infty h(u)du,
$$
showing that $\{ h(L_t)L_t+1-H(L_t): t\geq 0\}$ is u.i. Moreover, since $\{
h(L_t): t\geq 0\}$ is bounded and $\{\tilde Y_t\,:\, 0\leq t\leq T\}$ is u.i. it
follows that $\{h(L_t)\tilde Y_t\,:\, 0\leq t\leq T\}$ is u.i. Consequently,
$\{M^h_t\,:\, 0\leq t\leq T\}$ is u.i., as claimed, and, hence,
$\{M^h_t: t\geq 0\}$ is a true martingale implying (\ref{645}).
By the monotone class theorem (see, e.g., Meyer \cite{meyer66} T20 p.~28)
we can deduce that $\{M^h_t: t\geq 0\}$ remains a martingale if
the assumption
``$h$ is continuously differentiable'' is relaxed to ``$h$ is bounded and
Borel-measurable''. The proof of Theorem \ref{thm62} is now complete.
\begin{example}
\label{ex61} Let $h(x):= {\bf 1}_{[0,\ell)}(x)$ with $\ell>0.$ Then
$$
h(0)=1,\quad \int_x^\infty h(y)dy=(\ell-x)^+,\quad \int_0^\infty
h(y)dy=\ell,
$$
and the martingale $M^h$ takes the form
\begin{eqnarray}
\label{68}
&&
\nonumber
M^h_u=\frac 1\ell\left(S(X_u){\bf 1}_{\{L_u<\ell\}}+ (\ell-L_u)^+\right)
\\
&&
\nonumber
\hskip.8cm
=\frac 1\ell\left(S(X_u)+ \ell-L_u\right){\bf 1}_{\{L_u<\ell\}}
\\
&&
\nonumber
\hskip.8cm
=\frac 1\ell\left(S(X_{u\wedge{\tau_\ell}})+ \ell-L_{u\wedge{\tau_\ell}}\right)
\\
&&
\nonumber
\hskip.8cm
=1+\frac 1\ell\left(S(X_{u\wedge{\tau_\ell}})-L_{u\wedge{\tau_\ell}}\right)
\\
&&
\hskip.8cm
=1+\frac 1\ell\,\tilde Y_{u\wedge{\tau_\ell}}.
\end{eqnarray}
\end{example}
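Example \ref{ex61} can be sanity-checked by simulation. The sketch below is an illustration of ours (not part of the original argument): it takes $X$ to be reflected Brownian motion, for which $S(x)=x$ and, by the Skorokhod construction, $X_t=W_t+L_t$ with $L_t=-\min(0,\inf_{s\le t}W_s)$. Then $\tilde Y_t=S(X_t)-L_t=W_t$, so the last line of (\ref{68}) reads $M^h_u=1+W_{u\wedge\tau_\ell}/\ell$ and the martingale property gives ${\bf E}_0(M^h_u)=1$.

```python
import random

def simulate_mean_M(ell=1.0, u=1.0, n_steps=2000, n_paths=3000, seed=7):
    """Monte Carlo check of E_0[M^h_u] = 1 in Example ex61, with X a
    reflected Brownian motion: S(x) = x and, by the Skorokhod
    construction, X_t = W_t + L_t where L_t = -min(0, inf_{s<=t} W_s).
    Then tilde Y = S(X) - L = W, so M^h = 1 + W_{u ^ tau_ell} / ell,
    where tau_ell is the first time the local time L reaches ell."""
    rng = random.Random(seed)
    sd = (u / n_steps) ** 0.5
    total = 0.0
    for _ in range(n_paths):
        w = 0.0
        run_min = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, sd)
            run_min = min(run_min, w)
            if -run_min >= ell:      # local time has reached ell: tau_ell
                break
        total += 1.0 + w / ell       # value of M^h at time u ^ tau_ell
    return total / n_paths

mean_M = simulate_mean_M()
print(mean_M)  # should be close to 1, the martingale mean
```

The empirical mean deviates from $1$ only by a Monte Carlo error of order $n_{\rm paths}^{-1/2}$ plus a small discretization bias.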
\subsection{The law of $X$
under the penalized measure}
\label{sec63}
In this section we study the process $X$ under the penalized measure
${\bf Q}$ introduced in Theorem \ref{thm62}. In fact, we consider a more
general situation, and assume that $h$ is ``only'' a Borel measurable and
non-negative function defined on ${\bf R}_+$ such that
\begin{equation}
\label{685}
\int_0^\infty h(x)dx =1.
\end{equation}
For such a function $h$ we define
\begin{equation}
\label{69}
M^h_t:= S(X_t)h(L_t)+1-H(L_t),
\end{equation}
where
$$
H(x):=\int_0^x h(y)dy.
$$
It can be proved (see Roynette et al. \cite{rvyII} Section 3.2 and
\cite{rvyV} Section 3) that $\{M^h_t:t\geq 0\}$ is also in this more
general case a martingale such that ${\bf E}_0(M^h_t)=1$ and
$\lim_{t\to\infty}M^h_t=0.$
Therefore, we may define, for each $u\geq 0,$ a probability measure
${\bf Q}^h$ on $({\cal C},{\cal C}_u)$ by setting
\begin{equation}
\label{73}
{\bf Q}^h(\Lambda_u):={\bf E}_0\left({\bf 1}_{\Lambda_u}\, M^h_u\right),\qquad \Lambda_u\in{\cal C}_u.
\end{equation}
The notation ${\bf E}^h$ is used for the expectation with respect to
${\bf Q}^h.$ The next two propositions constitute a generalization of Theorem
1.5 in \cite{rvyV}.
\begin{proposition}
\label{prop61}
Under ${\bf Q}^h,$ the random variable $L_\infty:=\lim_{t\to \infty}L_t$ is finite
a.s. and
$$
{\bf Q}^h(L_\infty\in d\ell)=h(\ell)\,d\ell.
$$
\end{proposition}
\begin{proof}
For $u\geq 0$ and $\ell\geq 0$ it holds that $\{L_u\geq \ell\}\in{\cal C}_u,$
and, consequently,
$$
{\bf Q}^h(L_u\geq \ell)={\bf E}_0\left({\bf 1}_{\{L_u\geq \ell\}}\, M^h_u\right)
={\bf E}_0\left({\bf 1}_{\{\tau_\ell\leq u\}}\, M^h_u\right).
$$
By optional stopping,
$$
{\bf E}_0\left({\bf 1}_{\{\tau_\ell\leq u\}}\, M^h_u\right)
={\bf E}_0\left({\bf 1}_{\{\tau_\ell\leq u\}}\, M^h_{\tau_\ell}\right),
$$
but
\begin{eqnarray}
&&
\label{735}
\nonumber
M^h_{\tau_\ell}
= S(X_{\tau_\ell})h(L_{\tau_\ell})+1-H(L_{\tau_\ell})
= S(0)h(\ell)+1-H(\ell)
\\
&&
\hskip.8cm
=\int_\ell^\infty h(y)\,dy.
\end{eqnarray}
As a result,
$$
{\bf Q}^h(L_u\geq \ell)=\left(\int_\ell^\infty h(y)\,dy\right)\,{\bf P}_0(\tau_\ell\leq u).
$$
Letting here $u\to \infty$ and using the fact that $\tau_\ell$ is
finite ${\bf P}_0$-a.s. shows that
$$
{\bf Q}^h(L_\infty\geq \ell)=\int_\ell^\infty h(y)\,dy.
$$
Moreover, from assumption (\ref{685}) it now follows that $L_\infty$ is
${\bf Q}^h$-a.s. finite, and the proof is complete.
\end{proof}
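The key identity in the proof, ${\bf Q}^h(L_u\geq\ell)=\big(\int_\ell^\infty h(y)\,dy\big)\,{\bf P}_0(\tau_\ell\leq u)$, can likewise be checked numerically. The sketch below is again an illustration of ours, with $X$ reflected Brownian motion ($S(x)=x$, $X=W+L$, $L_t=-\min(0,\inf_{s\le t}W_s)$) and $h={\bf 1}_{[0,1)}$, so that $M^h_u=S(X_u){\bf 1}_{\{L_u<1\}}+(1-L_u)^+$; it compares Monte Carlo estimates of the two sides on the same sample paths.

```python
import random

def check_identity(ell=0.5, u=1.0, n_steps=1500, n_paths=3000, seed=11):
    """Compare E_0[1_{L_u >= ell} M^h_u] with (1-ell) P_0(L_u >= ell)
    for h = 1_[0,1), with X a reflected Brownian motion: X = W + L and
    L_t = -min(0, inf_{s<=t} W_s), so M^h_u = X_u 1_{L_u<1} + (1-L_u)^+.
    Note that {tau_ell <= u} = {L_u >= ell}."""
    rng = random.Random(seed)
    sd = (u / n_steps) ** 0.5
    lhs = rhs = 0.0
    for _ in range(n_paths):
        w = 0.0
        run_min = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, sd)
            run_min = min(run_min, w)
        L = -run_min
        x = w + L                    # reflected path X_u = W_u + L_u >= 0
        M = x * (1.0 if L < 1.0 else 0.0) + max(1.0 - L, 0.0)
        if L >= ell:
            lhs += M
            rhs += 1.0 - ell         # = int_ell^infty h(y) dy
    return lhs / n_paths, rhs / n_paths

lhs, rhs = check_identity()
print(lhs, rhs)  # the two Monte Carlo estimates should nearly agree
```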
In the proof of the next proposition we use the process $X^\uparrow=\{ X^\uparrow_t:t\geq 0\}$
which is obtained from $\widehat X$ (cf. (\ref{kill})) by conditioning $\widehat X$ not to
hit 0. The process $X^\uparrow$ can be described as Doob's
$h$-transform of $\widehat X,$ see, e.g., Salminen, Vallois and Yor
\cite{SVY07} p.~105. The probability measure and the expectation
operator associated with $X^\uparrow$ are denoted by ${\bf P}^\uparrow$
and ${\bf E}^\uparrow,$ respectively. The transition density and the speed measure associated with $X^\uparrow$
are given by
\begin{equation}
\label{f01}
p^\uparrow(t;x,y):= \frac{\hat p(t;x,y)}{ S(y)S(x)},\quad\,m^\uparrow(dy):=S(y)^2\,m(dy).
\end{equation}
Notice (cf. (\ref{f00})) that
\begin{equation}
\label{f02}
p^\uparrow(t;0,y):= \lim_{x\downarrow 0}
p^\uparrow(t;x,y)
=
\frac{f_{y0}(t)}{S(y)}.
\end{equation}
Consequently, we have the formula
\begin{equation}
\label{f03}
1={\bf P}_0^\uparrow\left( X^\uparrow_t> 0\right) =\int_0^\infty
p^\uparrow(t;0,y)\,m^\uparrow(dy)
=\int_0^\infty f_{y0}(t)\, S(y) \,m(dy).
\end{equation}
\begin{proposition}
\label{prop62}
Let $\lambda$ denote the last exit time from 0, i.e.,
$$
\lambda:=\sup\{t\,:\, X_t=0\}
$$
with $\lambda=0$ if $\{\cdot\}=\emptyset.$ Then
\begin{description}
\item{{\bf 1)}} ${\bf Q}^h(0<\lambda<\infty)=1,$
\item{{\bf 2)}} under ${\bf Q}^h$
\begin{description}
\item{{\bf a)}} \hskip.2cm $\{X_t\,:\,t\leq \lambda\}$ and $\{X_{\lambda+t}\,:\,t\geq 0\}$
are independent,
\item{{\bf b)}} \hskip.1cm conditionally on $L_\infty=\ell,$ the process
$\{X_t\,:\,t\leq \lambda\}$ is distributed as $\{X_t\,:\,t\leq \tau_\ell\}$
under ${\bf P}_0,$ in other words,
\begin{eqnarray}
\label{74}
&&
\nonumber
\hskip-2cm
{\bf E}^h\left(F(X_t\,:\, t\leq \lambda)\,f(L_\infty)\right)
\\
&&
=
\int_0^\infty f(\ell)h(\ell)\,{\bf E}_0\left(F(X_t\,:\, t\leq
\tau_\ell)\right)\, d\ell,
\end{eqnarray}
where $F$ is a bounded and measurable
functional
defined in the canonical space $(C,{\cal C},({\cal C}_t))$ and
$f: [0,\infty)\mapsto [0,\infty)$ is a bounded and measurable function,
\item{{\bf c)}} \hskip.2cm the process
$\{X_{\lambda+t}\,:\,t\geq 0\}$ is distributed as
$\{X^\uparrow_t\,:\,t\geq 0 \}$ started from 0.
\end{description}
\end{description}
\end{proposition}
\begin{proof}
Consider for a given $T>0$
\begin{eqnarray*}
&&
\Delta
:=
{\bf E}^h\left(F_1(X_u\,:\, u\leq
\lambda)\,F_2(X_{\lambda+v}\,:\, v\leq T)\,f(L_\lambda)\,{\bf 1}_{\{0<\lambda<\infty\}} \right),
\end{eqnarray*}
where $F_1$ and $F_2$ are bounded and measurable
functionals
defined in the canonical space $(C,{\cal C},({\cal C}_t))$
and
$f: [0,\infty)\mapsto [0,\infty)$ is a bounded and measurable function.
For $N>0$ define
$$
\lambda_N:=\sup\{u\leq N\,:\,X_u=0\}
$$
and
$$
\Delta^{(1)}_N
:={\bf E}^h\left(F_1(X_u\,:\, u\leq
\lambda_N)\,F_2(X_{\lambda_N+v}\,:\, v\leq T)\,f(L_{\lambda_N})
\,{\bf 1}_{\{\lambda_N+T<N\}} \right).
$$
Then
$$
\Delta=\lim_{N\to\infty}\Delta^{(1)}_N.
$$
By absolute continuity, cf. (\ref{73}),
\begin{eqnarray*}
&&
\hskip-.5cm
\Delta^{(1)}_N
={\bf E}_0\left(F_1(X_u\,:\, u\leq
\lambda_N)\,F_2(X_{\lambda_N+v}\,:\, v\leq T)\,f(L_{\lambda_N})\,{\bf 1}_{\{\lambda_N+T<N\}}\, M^h_N\right)
\\
&&
\hskip.4cm
=
{\bf E}_0\Big(F_1(X_u\,:\, u\leq
\lambda_N)\,F_2(X_{\lambda_N+v}\,:\, v\leq T)\,f(L_{\lambda_N})\,{\bf 1}_{\{\lambda_N+T<N\}}
\\
&&
\hskip5cm
\times
\left( S(X_N)h(L_N)+1-H(L_N)\right)\Big).
\end{eqnarray*}
Since $F_1, F_2,$ and $f$ are bounded and
$$
\lim_{N\to\infty}\left( 1-H(L_N)\right) =0\quad {\bf P}_0\text{-a.s.}
$$
we have
\begin{eqnarray*}
&&
\hskip-.5cm
\Delta=
\lim_{N\to\infty}
{\bf E}_0\Big(F_1(X_u\,:\, u\leq
\lambda_N)\,F_2(X_{\lambda_N+v}\,:\, v\leq T)\,f(L_{\lambda_N})
\\
&&
\hskip4cm
\times{\bf 1}_{\{\lambda_N+T<N\}}
\,S(X_N)h(L_N)\Big).
\end{eqnarray*}
Let $\Delta^{(2)}_N$ denote the expression after the limit sign. Then
we write
\begin{eqnarray*}
&&
\hskip-.5cm
\Delta^{(2)}_N=
{\bf E}_0\Big( \sum_{\ell} F_1(X_u\,:\, u\leq
\tau_{\ell-})
\,F_2(X_{\tau_{\ell-}+v}\,:\, v\leq T)
\\
&&
\hskip4cm
\times f(\ell)\,
\,{\bf 1}_{\{\tau_{\ell-}+T< N<\tau_{\ell}\}}
\,S(X_N)h(\ell)\Big),
\end{eqnarray*}
where $\{\tau_\ell\}$
is the right continuous inverse of $\{L_t\}$ (see (\ref{e00})).
By the Master formula (see Revuz and Yor \cite{revuzyor01} p.~475 and 483)
\begin{eqnarray*}
&&
\hskip-1.5cm
\Delta^{(2)}_N=
\int_0^\infty d\ell\, h(\ell)f(\ell)\,{\bf E}_0\Big( F_1(X_u\,:\, u\leq
\tau_{\ell})
\\
&&
\hskip.4cm
\times\,\int_{{\cal E}}{\bf n}(de)\,
\,F_2(e_v\,:\, v\leq T)\,{\bf 1}_{\{T\leq N-\tau_{\ell}\leq \zeta(e)\}}\,S(e_{N-\tau_\ell})\Big),
\end{eqnarray*}
where ${\cal E}$ denotes the excursion space, $e$ is a generic excursion,
$\zeta(e)$ is the life time of the excursion $e,$ and ${\bf n}$ is the
It\^o measure in the excursion space (see, e.g., \cite{revuzyor01}
p.~480 and \cite{SVY07}). We claim that
\begin{eqnarray}
\label{745}
&&
\nonumber
\hskip-2cm
I:=\int_{{\cal E}}
\,F_2(e_v\,:\, v\leq T)\,{\bf 1}_{\{T\leq T'\leq \zeta(e)\}}\,S(e_{T'})\,{\bf n}(de)
\\
&&
\hskip4cm
={\bf E}_0^\uparrow\left( F_2(X_v\,:\, v\leq T)\right).
\end{eqnarray}
Notice that the right hand side of (\ref{745}) does not depend on
$T'.$ We prove (\ref{745}) for $F_2$ of the form
$$
F_2(e_v\,:\, v\leq T)=G(e_{t_1},\dots, e_{t_k}), \quad
t_1<t_2<\dots<t_k=T,
$$
where $G$ is a bounded and measurable function. For simplicity, take
$k=2$ and use Theorem 2 in \cite{SVY07} to obtain (for the notation and
results needed, see
(\ref{kill2}), (\ref{f00}), (\ref{f01}) and (\ref{f02}))
\begin{eqnarray*}
&&
\hskip-1cm
I=\int_{[0,\infty)^3}f_{x_1,0}(t_1)\,\hat p(t_2-t_1;x_1,x_2)\,\hat
p(T'-t_2;x_2,x_3)
\\
&&
\hskip3cm
\times\,
G(x_1,x_2)\, S(x_3)\,m(dx_1)\,m(dx_2)\,m(dx_3)
\\
&&
\hskip-.5cm
=\int_{[0,\infty)^3}S(x_1)\,f_{x_1,0}(t_1)\, p^\uparrow(t_2-t_1;x_1,x_2)\,
p^\uparrow(T'-t_2;x_2,x_3)
\\
&&
\hskip2cm
\times
\,G(x_1,x_2)\, S(x_2)^2\,S(x_3)^2\,m(dx_1)\,m(dx_2)\,m(dx_3)
\\
&&
\hskip-.5cm
={\bf E}_0^\uparrow\left( G(X_{t_1},X_{t_2})\right)
\end{eqnarray*}
proving (\ref{745}). Consequently, we have (for all $N$)
\begin{eqnarray*}
&&
\hskip-1.5cm
\Delta^{(2)}_N={\bf E}_0^\uparrow\left( F_2(X_v\,:\, v\leq T)\right)
\int_0^\infty d\ell\, h(\ell)f(\ell)\,{\bf E}_0\left( F_1(X_u\,:\, u\leq
\tau_{\ell})\right),
\end{eqnarray*}
and choosing here $F_1,$ $F_2,$ and $f$ appropriately implies
all the claims of the proposition. In particular, $F_1=F_2=1$ and
$f=1$ yields ${\bf Q}^h(0<\lambda<\infty)=1,$ and, hence,
$L_\infty=L_\lambda$ ${\bf Q}^h$-a.s.
\end{proof}
\section{Appendix: a technical lemma}
\label{20}
The following lemma could be viewed as a ``weak'' form of the Tauberian
theorem (cf. Feller \cite{feller71} Theorem 1 p.~443) stating, roughly
speaking, that if
two functions behave similarly at zero then their Laplace transforms
behave similarly at infinity.
\begin{lemma}
\label{lemma31}
Let $\mu$ be a $\sigma$-finite measure on $[0,+\infty)$ and $g_1$ and $g_2$ two real valued functions such that
for some $\lambda_0>0$
$$
C_i:=\int_{[0,+\infty)}{\rm e}^{-\lambda_0 \gamma}\,|g_i(\gamma)|\,\mu(d\gamma)<\infty,\quad i=1,2.
$$
Assume also that $g_2(\gamma)>0$ for all $\gamma.$ Introduce for $\lambda\geq \lambda_0$
$$
f_i(\lambda):=\int_{[0,+\infty)}{\rm e}^{-\lambda \gamma}\,g_i(\gamma)\,\mu(d\gamma),\quad i=1,2,
$$
and suppose
\begin{equation}
\label{c1}
\lim_{\lambda\to +\infty}f_2(\lambda)\,{\rm e}^{b\lambda}=+\infty\qquad {\rm for\ all\ } b>0.
\end{equation}
Then
\begin{equation}
\label{c2}
g_1(\gamma)\,\mathop{\sim}_{\gamma\to 0}\, g_2(\gamma)
\end{equation}
implies
\begin{equation}
\label{c3}
f_1(\lambda)
\,\mathop{\sim}_{\lambda\to +\infty}\,
f_2(\lambda).
\end{equation}
\end{lemma}
\begin{proof} By property (\ref{c2})
there exist two functions $\theta_*$ and $\theta^*$
such that for some $\varepsilon>0$ and for all $\gamma\in(0,\varepsilon)$
\begin{equation}
\label{c4}
\theta_*(\varepsilon)\,g_2(\gamma)\leq g_1(\gamma)\leq \theta^*(\varepsilon)\,g_2(\gamma)
\end{equation}
and
\begin{equation}
\label{c41}
\lim_{\varepsilon\to 0}\theta_*(\varepsilon)=\lim_{\varepsilon\to 0}\theta^*(\varepsilon)=1.
\end{equation}
We assume also that $\theta_*(\varepsilon)>0$ and $\theta^*(\varepsilon)>0.$
Letting $\lambda\geq 2\lambda_0$ we have for
$\gamma\geq \varepsilon$
$$
\lambda\gamma\geq \lambda_0\gamma+\frac{\lambda\gamma} 2\geq \lambda_0\gamma+\frac{\lambda\varepsilon} 2
$$
and
\begin{equation}
\label{c5}
\int_\varepsilon^\infty {\rm e}^{-\lambda \gamma}\,|g_i(\gamma)|\,\mu(d\gamma)
\leq {\rm e}^{-\lambda\varepsilon/2} \,\int_\varepsilon^\infty\, {\rm e}^{-\lambda_0
\gamma}\,|g_i(\gamma)|\,\mu(d\gamma)\leq
{\rm e}^{-\lambda\varepsilon/2}\,C_i.
\end{equation}
Furthermore, from (\ref{c4})
\begin{eqnarray}
\label{c6}
&&
\nonumber
\hskip-1.7cm
\int_0^\varepsilon {\rm e}^{-\lambda \gamma}\,g_1(\gamma)\,\mu(d\gamma)
\leq
\theta^*(\varepsilon)\,\int_0^\varepsilon {\rm e}^{-\lambda \gamma}\,g_2(\gamma)\,\mu(d\gamma)
\\
&&
\nonumber
\hskip2cm
\leq
\theta^*(\varepsilon)\,\int_0^\infty {\rm e}^{-\lambda \gamma}\,g_2(\gamma)\,\mu(d\gamma)
\\
&&
\hskip2cm
\leq
\theta^*(\varepsilon)\,f_2(\lambda)
\end{eqnarray}
since $g_2$ is assumed to be positive. Writing
$$
f_1(\lambda)=\int_0^\varepsilon {\rm e}^{-\lambda \gamma}\,g_1(\gamma)\,\mu(d\gamma)+\int_\varepsilon^\infty {\rm e}^{-\lambda \gamma}\,g_1(\gamma)\,\mu(d\gamma),
$$
the estimates in (\ref{c5}) and (\ref{c6}) yield
$$
f_1(\lambda)\leq \theta^*(\varepsilon)\,f_2(\lambda)+ {\rm e}^{-\lambda\varepsilon/2}\,C_1,
$$
which, after dividing by $f_2(\lambda)>0$ and using (\ref{c1}) and (\ref{c41}), implies
\begin{equation}
\label{c7}
\limsup_{\lambda\to+\infty}\frac{f_1(\lambda)}{f_2(\lambda)}\leq 1.
\end{equation}
For a lower bound, consider
\begin{eqnarray*}
&&
\hskip-1.7cm
f_1(\lambda)=\int_{[0,\infty)} {\rm e}^{-\lambda \gamma}\,g_1(\gamma)\,\mu(d\gamma)
\\
&&
\hskip-.6cm
\geq
\int_{[0,\varepsilon)} {\rm e}^{-\lambda \gamma}\,g_1(\gamma)\,\mu(d\gamma)
-\int_\varepsilon^\infty {\rm e}^{-\lambda \gamma}\,|g_1(\gamma)|\,\mu(d\gamma)
\\
&&
\hskip-.6cm
\geq
\theta_*(\varepsilon)\,\int_{[0,\varepsilon)} {\rm e}^{-\lambda \gamma}\,g_2(\gamma)\,\mu(d\gamma)
- {\rm e}^{-\lambda\varepsilon/2}\,C_1
\\
&&
\hskip-.6cm
\geq
\theta_*(\varepsilon)\left( f_2(\lambda)-\int_\varepsilon^\infty {\rm e}^{-\lambda \gamma}\,g_2(\gamma)\,\mu(d\gamma)\right)
- {\rm e}^{-\lambda\varepsilon/2}\,C_1
\\
&&
\hskip-.6cm
\geq
\theta_*(\varepsilon)\,f_2(\lambda)- \theta_*(\varepsilon)\,{\rm e}^{-\lambda\varepsilon/2}\,C_2- {\rm e}^{-\lambda\varepsilon/2}\,C_1.
\end{eqnarray*}
Hence,
$$
\frac{f_1(\lambda)}{f_2(\lambda)}\geq \theta_*(\varepsilon)-\left(\theta_*(\varepsilon)\,C_2+C_1\right)
\frac 1{{\rm e}^{\lambda\varepsilon/2}\,f_2(\lambda)},
$$
showing that
$$
\liminf_{\lambda\to+\infty}\frac{f_1(\lambda)}{f_2(\lambda)}\geq 1,
$$
and completing the proof.
\end{proof}
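Lemma \ref{lemma31} can be illustrated numerically. In the sketch below (our illustration; the helper name is hypothetical) $\mu$ is Lebesgue measure, $g_1(\gamma)=\gamma+\gamma^2$ and $g_2(\gamma)=\gamma$, so $g_1\sim g_2$ at $0$; condition (\ref{c1}) holds because $f_2(\lambda)\,{\rm e}^{b\lambda}={\rm e}^{b\lambda}/\lambda^2\to\infty$, and indeed $f_1(\lambda)/f_2(\lambda)=1+2/\lambda\to 1$ as $\lambda\to+\infty$.

```python
import math

def laplace(g, lam, upper=2.0, n=200000):
    # Midpoint rule for f(lam) = int_0^infty e^{-lam*x} g(x) dx
    # (truncated at `upper`; the tail is negligible for the lam used here).
    h = upper / n
    return h * sum(math.exp(-lam * (i + 0.5) * h) * g((i + 0.5) * h)
                   for i in range(n))

g1 = lambda x: x + x * x    # g1 ~ g2 as x -> 0
g2 = lambda x: x            # f2(lam) = 1/lam^2, so condition (c1) holds
ratios = [laplace(g1, lam) / laplace(g2, lam) for lam in (10.0, 50.0, 200.0)]
print(ratios)  # approx [1.2, 1.04, 1.01]: the exact ratio is 1 + 2/lam
```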
\end{document}
\begin{document}
\title{The Cauchy problem for an inviscid and non-diffusive Oldroyd-{B} model in two dimensions}
\author[a]{Yuanzhi Tu}
\author[a]{Yinghui Wang}
\author[a]{Huanyao Wen \thanks{Corresponding author.}}
\affil[a]{School of Mathematics, South China University of Technology, Guangzhou, China}
\date{}
\maketitle
\renewcommand{\thefootnote}{}
\footnote{ {E}-mail: mayztu@mail.scut.edu.cn (Tu); yhwangmath@163.com (Wang); mahywen@scut.edu.cn (Wen).}
\begin{abstract}
A two-dimensional inviscid and diffusive Oldroyd-B model was investigated by [T. M. Elgindi, F. Rousset, Commun. Pure Appl. Math. 68 (2015), 2005--2021], where the global existence and uniqueness of the strong solution were established for arbitrarily large initial data. As pointed out by [A. V. Bhave, R. C. Armstrong, R. A. Brown, J. Chem. Phys., 95 (1991), 2988--3000], the diffusion coefficient is significantly smaller than the other effects, so it is interesting to study the non-diffusive model. In the present work, we obtain the global-in-time existence and uniqueness of the strong solution to the non-diffusive model with small initial data by deriving some uniform regularity estimates and taking the vanishing diffusion limit. In addition, the large time behavior of the solution is studied and the optimal time-decay rates for each order of spatial derivatives are obtained. The main challenges lie in the lack of dissipation and regularity effects of the system and in the slower decay in the two-dimensional setting. A combination of spectral analysis and the Fourier splitting method is adopted.
\end{abstract}
{\noindent \textbf{Keywords:} An Oldroyd-B model; global existence and uniqueness; long time behavior; vanishing diffusion limits.}
{\noindent\textbf{AMS Subject Classification (2020):} 76A10, 76B03, 74H40.}
\section{Introduction}
Interest in viscoelastic fluids has increased considerably due to their connections with the applied sciences. The motion of such fluids can be described by the Navier-Stokes equations coupled with constitutive laws of different types, see \cite{Bird_1, Bird_2} for more details. In this paper, we consider the following {O}ldroyd-{B} type model in Eulerian coordinates:
\begin{equation} \label{Oldroyd_B_d}
\begin{cases}
\partial_tu+(u\cdot\nabla) u+\nabla p=K\, {\rm div}\tau,\\
\partial_t\tau+(u\cdot\nabla)\tau+\beta\tau=\alpha\mathbb{D}(u),\\
{\rm div}\,u=0, \\[2mm]
(u,\tau)(x,0)=(u_0,\tau_0)(x),
\end{cases}
\end{equation}
on $\mathbb{R}^2\times(0,\infty)$. System (\ref{Oldroyd_B_d}) with a diffusion term $-\mu\Delta\tau$ added to the left-hand side of the equation for $\tau$ was investigated by Elgindi and Rousset in \cite{Elgindi Rousset 2015}, where the global existence and uniqueness of the strong solution were established for arbitrarily large initial data. In this paper, we aim to study the global wellposedness and the large time behavior of the non-diffusive model (\ref{Oldroyd_B_d}).
We now give an overview of studies of the model. In fact, it is a simplified model of the following classical incompressible {O}ldroyd-{B} model\footnote{\eqref{Oldroyd_B_d} is the case that $\mu=0$, $\nu=0$ and $Q=0$.}:
\begin{eqnarray} \label{Oldroyd_B}
\begin{cases}
\partial_tu+(u\cdot\nabla) u+\nabla p-\nu\Delta u=K \,{\rm div}\tau,\\
\partial_t\tau+(u\cdot\nabla)\tau-\mu\Delta\tau+\beta\tau=Q(\nabla u,\tau)+\alpha\mathbb{D}(u),\\
{\rm div}\,u=0,
\end{cases}
\end{eqnarray}
where $u=u(x,t)$, $p=p(x,t)$, and $\tau=\tau(x,t)$ denote the velocity field of the fluid, the scalar pressure, and the tangential part of the stress tensor represented by a symmetric matrix, respectively. $\mathbb{D}(u)=\frac12(\nabla u+ \nabla u^T)$ is the symmetric part of the velocity gradient. The nonlinear term $Q(\nabla u,\tau)$ is a bilinear form:
\begin{equation*}
Q(\nabla u,\tau)=\Omega\tau - \tau\Omega + b(\mathbb{D}(u)\tau+\tau\mathbb{D}(u)).
\end{equation*}
$\Omega=\frac12(\nabla u- \nabla u^T)$ is the skew-symmetric part of the velocity gradient and $b\in[-1,1]$. The physical coefficients $\alpha,\beta,\mu,\nu,K$ are constants satisfying $\alpha,\beta, K,\mu, \nu>0.$
As pointed out by Bhave, Armstrong and Brown (\cite{Bhave 1991}), the diffusion coefficient $\mu$ is significantly smaller than the other effects. Thus some early works on the mathematical theory of the system \eqref{Oldroyd_B} focused on the non-diffusive case (i.e. $\nu>0,\mu = 0$ in \eqref{Oldroyd_B}). In this case, the model \eqref{Oldroyd_B} without the diffusive term was first introduced by {O}ldroyd (\cite{Oldroyd 1958}) to describe the behavior of viscoelastic fluids, which consist of both viscous and elastic components and thus behave as a viscous fluid in some circumstances and as an elastic solid in others. For the initial-boundary value problem, Guillop\'{e} and Saut (\cite{Guillo 1990}) established the local wellposedness of strong solutions in the Sobolev space $H^s$ and obtained the global existence and uniqueness with small initial data and a small coupling parameter $\alpha$. Later, Fern\'{a}ndez-Cara, Guill\'{e}n, and Ortega (\cite{Ortega 1998}) extended the result to the $L^p$ setting. Molinet and Talhouk (\cite{Molinet 2004}) proved that the results obtained in \cite{Guillo 1990} remain true without any restriction on the smallness of the coupling parameter. When considering exterior domains, one needs to overcome the difficulties caused by both the boundary effect and the unboundedness of the domain. Hieber, Naito, and Shibata (\cite{Hieber Naito 2012}) obtained a unique global strong solution with small initial data and a small coupling parameter, see also \cite{Fang Hieber Zi 2013} by Fang, Hieber, and Zi for the non-small coupling parameter case. Chemin and Masmoudi (\cite{Chemin 2001}) studied the global wellposedness in the framework of critical Besov spaces and some blow-up criteria were also obtained. See also \cite{Chen Miao 2008, Zi 2014} for the case of a non-small coupling parameter in critical Besov spaces. 
Lions and Masmoudi (\cite{Lions 2000}) considered the case $b=0$ and proved the existence of global weak solutions for arbitrarily large initial data; the case $b\neq0$ is still open. For some studies of blow-up criteria, please refer to \cite{Lei 2010, Kupferman 2008}. Lei (\cite{Lei 2006}) obtained the global existence of classical solutions via the incompressible limit in periodic domains. Recently, Hieber, Wen, and Zi (\cite{Hieber Wen 2019}) studied the long time behavior of the solutions in three dimensions and obtained the same decay rates as for the heat equation, see also the extension by Huang, Wang, Wen, and Zi (\cite{Huang 2022}). For the case of infinite Weissenberg number, an energetic variational approach was first introduced by Lin, Liu, and Zhang (\cite{Lin-Liu-Zhang}) to understand the physical structure of the related systems (see for instance \cite{Hu-Lin,Hu-Wu,Lai,Lei1,Lei2,Lin} for more progress).
For the diffusive model (i.e. $\mu > 0$ in \eqref{Oldroyd_B}),
Constantin and Kliegl (\cite{Constantin 2012}) proved the global wellposedness of strong solutions for the two-dimensional Cauchy problem with large initial data and $\nu>0$. For the inviscid case, Elgindi and Rousset (\cite{Elgindi Rousset 2015}) proved that the problem \eqref{Oldroyd_B} is globally wellposed in $\mathbb{R}^2$ provided that the initial data are small enough. Later, Elgindi and Liu (\cite{Elgindi Liu 2015}) extended the results to the three-dimensional case. Very recently, Huang, Wang, Wen and Zi (\cite{Huang 2022}) obtained the optimal decay estimates with vanishing viscosity ($\nu\geq 0$) in three dimensions. When $\nu=0$, Deng, Luo and Yin (\cite{Yin_Deng}) obtained the global wellposedness of strong solutions and the $H^1$ time-decay rate $(1+t)^{-\frac12}$ with small initial data in $\mathbb{R}^2$. When $\nu=0$ and $Q=0$, Elgindi and Rousset (\cite{Elgindi Rousset 2015})
established the global existence and uniqueness of strong solutions in $\mathbb{R}^2$.
More precisely, they proved the following result with the diffusion coefficient $\mu>0$.
\begin{pro}[Theorem 1.1, \cite{Elgindi Rousset 2015}]\label{proposition_1}
Assume that the initial data satisfy $(u_0,\tau_0)\in H^{s}(\mathbb{R}^2)$ with $\mathrm{div}\, u_0 = 0$, $\tau_0$ symmetric and $s>2$. Then there exists a unique global solution $(u,\tau)\in C([0,\infty);H^{s}(\mathbb{R}^2))$ to the initial-value problem for \eqref{Oldroyd_B} with $\nu=0$ and $Q=0$.
\end{pro}
It is interesting to see whether the solution obtained in Proposition \ref{proposition_1} exists globally or not for the non-diffusive case.
\subsection{Main results}
Our aim in this paper is to investigate the global-in-time existence and uniqueness and the optimal time-decay rates of the solutions to the initial-value problem of \eqref{Oldroyd_B_d}. The first main result concerning the global existence and uniqueness is stated as follows.
\begin{theorem}\label{wellposedness}
Assume that $(u_0,\tau_0)\in H^3(\mathbb{R}^2)$ with $\mathrm{div} \,u_0 = 0$ and $\tau_0$ symmetric. Then there exists a sufficiently small constant $\epsilon_0 >0$ such that
the Cauchy problem \eqref{Oldroyd_B_d} admits a unique global solution $(u,\tau)\in L^\infty([0,\infty);H^3(\mathbb{R}^2))$ satisfying the following uniform regularity estimate:
\begin{equation*}
\|(u,\tau)(t)\|_{H^3}^2 + \int_0^t (\|\nabla u(s)\|_{H^2}^2 + \|\tau(s)\|_{H^3}^2 ){\rm d}s \leq C\|(u_0,\tau_0)\|_{H^3}^2,
\end{equation*}
provided that $\|(u_0,\tau_0)\|_{H^3} \leq \epsilon_0.$
\end{theorem}
Based on the global existence and uniqueness of the solution, we get the second main result concerning the time-decay estimates.
\begin{theorem}\label{thm_OB_d_decay}
Under the assumptions of Theorem \ref{wellposedness}, assume in addition that $(u_0,\tau_0)\in L^1(\mathbb{R}^2)$. Then the following optimal time-decay estimates for the solution to the problem \eqref{Oldroyd_B_d} hold.
\begin{enumerate}[i)]
\item Upper time-decay estimates of the solutions:
\begin{eqnarray}\label{opti1}
\ \|\nabla^ku(t)\|_{L^2}\le C (1+t)^{-\frac12-\frac{k}{2}},\ k=0,1,2,3,
\end{eqnarray}
and
\begin{eqnarray}\label{opti2}
\ \|\nabla^{k}\tau(t)\|_{L^2}\le C(1+t)^{-1-\frac{k}{2}},\ k=0,1,2,3,
\end{eqnarray}
for all $t>0$, where $C$ is a positive constant independent of time.
\item In addition, assume that $\Big|\int_{\mathbb{R}^2}u_0(x){\rm d}x\Big| = c_2>0.$ Then there exists a positive time $t_1=t_1(\beta)$ such that
\begin{eqnarray}\label{opti3}
\|\nabla^ku(t)\|_{L^2}\ge \frac{1}{C} (1+t)^{-\frac12-\frac{k}{2}},\ k=0,1,2,3,
\end{eqnarray}
and
\begin{eqnarray}\label{opti4}
\|\nabla^{k}\tau(t)\|_{L^2}\ge \frac{1}{C} (1+t)^{-1-\frac{k}{2}},\ k=0,1,2,3,
\end{eqnarray}
for all $t\geq t_1$.
\end{enumerate}
\end{theorem}
\begin{rem}
For any $\mu>0$, Theorem \ref{thm_OB_d_decay} still holds for the system with diffusion.
\end{rem}
\subsection{Main ideas}
In order to establish the global wellposedness result, we choose (\ref{Oldroyd_B_d}) with the diffusive term $-\mu\Delta\tau$ as an approximate system. To obtain regularity estimates uniform in $\mu,$ the diffusive term cannot play much of a role; instead we make full use of the damping term $\tau$. Combining this with some compactness arguments, the unique global solution of the Cauchy problem for \eqref{Oldroyd_B_d} can be obtained via the vanishing diffusion limit.
To obtain the optimal time-decay estimates of the solution, the main challenge lies in deriving the sharp decay rate of the solution itself in the $L^2$ norm, due to the lower dimension. In fact, one can see from Lemma \ref{lemma_Greenfunction_7} that the time-decay rates of the low-frequency part of the solution to the linearized system \eqref{Oldroyd_B_d} decrease as the dimension does. Our main strategy is to use spectral analysis together with an energy method, in which the upper bound of the low-frequency region is a constant, to get the sharp time-decay rates of the higher order derivatives of the solution. However, this does not seem to work for the lower orders. More specifically, inspired by \cite{Dong 2006}, the rate $$\|u(t)\|_{L^2}\le C (1+t)^{-\frac14}$$ can be derived by using the observation that $$(1+t)^{\frac{1}{2}}\| \nabla u(t)\|_{L^2}\longrightarrow 0 \quad\text{as}\quad t\rightarrow\infty,$$ together with Lemma \ref{lemma_Greenfunction_8}; see Lemma \ref{lemma_upper_decay} for more details. To get the sharp time-decay rate $$\|u(t)\|_{L^2}\le C (1+t)^{-\frac12},$$ note that if we replace the upper bound of the low frequency $g_1^2(t)$ in \eqref{new_H1_L2_29} by a constant and use Lemma \ref{lemma_Greenfunction_7}, then an integral like $\int_0^t(1+t-s)^{-\frac12}(1+s)^{-1}{\rm d}s$ turns up and it cannot be dominated by $(1+t)^{-\frac12}$. The Fourier splitting method, in which the upper bound of the low-frequency region depends on a function of time, overcomes this difficulty.
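The failure of the naive estimate can be seen numerically. The following sketch (our illustration; the helper name is hypothetical) evaluates $\int_0^t(1+t-s)^{-\frac12}(1+s)^{-1}{\rm d}s$ and multiplies it by $(1+t)^{\frac12}$: the resulting values keep growing (like $\log(1+t)$) instead of staying bounded, confirming that the integral is not dominated by $(1+t)^{-\frac12}$.

```python
def convolution_integral(t, n=200000):
    # Midpoint rule for I(t) = int_0^t (1+t-s)^{-1/2} (1+s)^{-1} ds.
    h = t / n
    return h * sum((1.0 + t - (i + 0.5) * h) ** -0.5 / (1.0 + (i + 0.5) * h)
                   for i in range(n))

# If I(t) = O((1+t)^{-1/2}) held, these normalized values would stay
# bounded; instead they keep growing, logarithmically in t.
normalized = [(1.0 + t) ** 0.5 * convolution_integral(t)
              for t in (10.0, 100.0, 1000.0)]
print(normalized)
```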
The rest of the paper is organized as follows. In Section \ref{Section_2} we prove the regularity estimates for \eqref{Oldroyd_B_1} uniformly in $\mu$ and obtain Theorem \ref{approximate solution}. In Section \ref{Section_4} we use the vanishing diffusion limit to obtain the unique global solution of system \eqref{Oldroyd_B_d} and finish the proof of Theorem \ref{wellposedness}. In Section \ref{Section_5} we first analyze the linear part of the system \eqref{Oldroyd_B_d} and obtain the corresponding estimates for the Green functions and for the low-frequency part of the solution, and then derive the optimal time-decay rates for $u$ and $\tau$, which proves Theorem \ref{thm_OB_d_decay}.
Throughout the rest of the paper, let $C$ denote a generic positive constant depending on some known constants but independent of $\mu$, $\delta$, $t$, and $\eta_i$ for $i=1,2,3$.
\section{Uniform regularity}\label{Section_2}
To begin with, we use the following initial-value problem as an approximation of the problem \eqref{Oldroyd_B_d} as $\mu \to 0$, namely,
\begin{eqnarray} \label{Oldroyd_B_1}
\begin{cases}
\partial_tu^\mu+(u^\mu\cdot\nabla) u^\mu+\nabla p^\mu=K \,{\rm div}\tau^\mu,\\
\partial_t\tau^\mu+(u^\mu\cdot\nabla)\tau^\mu-\mu\Delta\tau^\mu+\beta\tau^\mu=\alpha\mathbb{D}(u^\mu),\\
{\rm div}\,u^\mu=0, \quad (u^\mu,\tau^\mu)(x,0) = (u_0,\tau_0).
\end{cases}
\end{eqnarray}
The global wellposedness of problem \eqref{Oldroyd_B_1} for fixed $\mu>0$ was already stated in Proposition \ref{proposition_1}. In this section, we will establish the uniform regularity of the solutions to the problem \eqref{Oldroyd_B_1}, i.e.,
\begin{theorem}\label{approximate solution}
Suppose that $(u_0,\tau_0)\in H^3(\mathbb{R}^2)$ with $\mathrm{div} \,u_0 = 0$ and $\tau_0$ symmetric. Then there exists a sufficiently small constant $\epsilon_0 >0$, independent
of $\mu$ and $t$, such that the solutions to the Cauchy problem \eqref{Oldroyd_B_1} satisfy the following uniform estimates:
\begin{equation*}\label{uniform_estimates}
\|(u^\mu,\tau^\mu)(t)\|_{H^3}^2 + \int_0^t (\|\nabla u^\mu(s)\|_{H^2}^2 + \|\tau^\mu(s)\|_{H^3}^2 + \mu\|\nabla\tau^\mu(s)\|_{H^3}^2){\rm d}s \leq C\|(u_0,\tau_0)\|_{H^3}^2,
\end{equation*}
for all $t>0$, provided that $\|(u_0,\tau_0)\|_{H^3} \leq \epsilon_0.$
\end{theorem}
For simplicity, we write $(u,\tau)$ for $(u^\mu,\tau^\mu)$. Before proving Theorem \ref{approximate solution}, we reformulate the original system, motivated by \cite{Zi 2014} and the references therein. More specifically, applying the Leray projection operator $\mathbb{P}$ to the first equation of \eqref{Oldroyd_B_1} and the operator $\mathbb{P}{\rm div}\,$ to the second equation of \eqref{Oldroyd_B_1}, respectively, we obtain that
\begin{eqnarray} \label{OB_d_1}
\begin{cases}
\partial_tu+\mathbb{P}\left(u\cdot\nabla u\right)=K\, \mathbb{P}{\rm div}\tau,\\
\partial_t\mathbb{P}{\rm div}\tau+\mathbb{P}{\rm div}\left(u\cdot\nabla\tau\right)-\mu\,\mathbb{P}{\rm div}\Delta\tau+\beta\,\mathbb{P}{\rm div}\tau=\frac{\alpha}{2}\Delta u.
\end{cases}
\end{eqnarray}
Then, applying $\Lambda^{-1}=(\sqrt{-\Delta})^{-1}$ to (\ref{OB_d_1})$_2$ and setting
\begin{eqnarray}\label{sigma}
\sigma := \Lambda^{-1}\mathbb{P}{\rm div}\tau,
\end{eqnarray} we can rewrite (\ref{OB_d_1}) as follows:
\begin{eqnarray} \label{u_sigma_d}
\begin{cases}
\partial_tu-K \Lambda\sigma=\mathcal{F}_1,\\
\partial_t\sigma -\mu\Delta\sigma +\beta\sigma+\frac{\alpha}{2}\Lambda u=\mathcal{F}_2,
\end{cases}
\end{eqnarray} where
\begin{eqnarray*}
\mathcal{F}_1=-\mathbb{P}\left(u\cdot\nabla u\right),\
\mathcal{F}_2=-\Lambda^{-1}\mathbb{P}{\rm div}\left(u\cdot\nabla\tau\right).
\end{eqnarray*}
Here $\hat{\sigma}^j=i\left(\delta_{j,k}-\frac{\xi_j\xi_k}{|\xi|^2}\right)\frac{\xi_l}{|\xi|}\hat{\tau}^{l,k}$ (with summation over repeated indices), where $\hat{f}$ denotes the Fourier transform of $f$.
It is worth noticing that for any $u\in L^2(\mathbb{R}^2)$, there holds
\begin{equation*}
\mathbb{P}(u)_i=u_i-\sum_{k=1}^{2}R_iR_k u_k,
\end{equation*}
where $R_iR_k=(-\Delta)^{-1}\partial_i\partial_k$.
It is not difficult to get that
\begin{equation}\label{bu_7}
\|\mathbb{P}u\|_{L^2}^2\le C \|u\|_{L^2}^2.
\end{equation}
Combining \eqref{sigma} and \eqref{bu_7}, we can estimate $\sigma$ as follows:
\begin{equation}\label{bu_8}
\|\nabla^k\sigma\|_{L^2}^2\le C\|\nabla^k\tau\|_{L^2}^2,
\end{equation}for $k = 0,1,2,3$.
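As a sanity check (not part of the argument), the pointwise mechanism behind \eqref{bu_7} can be tested numerically: on the Fourier side $\mathbb{P}$ acts as the matrix $I-\xi\xi^{T}/|\xi|^{2}$, an orthogonal projection, so it never increases the Euclidean norm and is idempotent; \eqref{bu_8} then follows from \eqref{sigma} and Plancherel's theorem. The random sampling below is purely illustrative.

```python
import random

def leray_symbol_apply(xi, v):
    # apply the Fourier symbol of the Leray projector, I - xi xi^T / |xi|^2, to v (2D)
    n2 = xi[0] ** 2 + xi[1] ** 2
    dot = xi[0] * v[0] + xi[1] * v[1]
    return (v[0] - xi[0] * dot / n2, v[1] - xi[1] * dot / n2)

random.seed(0)
for _ in range(1000):
    xi = (random.uniform(-1, 1), random.uniform(-1, 1))
    if xi == (0.0, 0.0):
        continue  # the symbol is undefined at xi = 0
    v = (random.uniform(-1, 1), random.uniform(-1, 1))
    pv = leray_symbol_apply(xi, v)
    # projection does not increase the norm: |P v| <= |v|
    assert pv[0] ** 2 + pv[1] ** 2 <= v[0] ** 2 + v[1] ** 2 + 1e-12
    # idempotency: P(P v) = P v
    ppv = leray_symbol_apply(xi, pv)
    assert abs(ppv[0] - pv[0]) < 1e-9 and abs(ppv[1] - pv[1]) < 1e-9
```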
The proof of Theorem \ref{approximate solution} highly relies on the following proposition.
\begin{pro}\label{Prop2}
Under the conditions of Theorem \ref{approximate solution}, there exist sufficiently small positive constants $\epsilon_0$ and $\delta$ independent of $\mu$ and $T$ such that
if
\begin{equation*}\label{apriori-assum}
\sup_{0\leq s \leq T}\|(u,\tau)(s)\|_{H^3}\leq \delta,
\end{equation*}
for any given $T>0$, there holds
\begin{equation*}\label{apriori-result}
\sup_{0\leq s \leq T}\|(u,\tau)(s)\|_{H^3}\leq \frac{\delta}{2},
\end{equation*}
provided that $\|(u_0,\tau_0)\|_{H^3} \leq \epsilon_0.$
\end{pro}
The proof of Proposition \ref{Prop2} consists of the following Lemmas \ref{lemma_regularity_1}, \ref{lemma_regularity_2} and \ref{lemma_regularity_3}.
\begin{lemma}\label{lemma_regularity_1}
Under the assumptions of Proposition \ref{Prop2}, there exists a sufficiently small positive constant $\eta_1$ independent of $\mu, T$ such that
\begin{equation}\label{est_H1}
\begin{split}
& \frac{\rm d}{{\rm d}t} (\alpha\| u\|_{H^1}^2 + K\|\tau\|_{H^1}^2 + \eta_1\langle\Lambda u, \sigma\rangle) + \frac{\beta K}{2}\|\tau\|_{H^1}^2 +\frac{\eta_1\alpha}{4}\|\Lambda u\|_{L^2}^2 + \mu K\|\nabla\tau\|_{H^1}^2
\leq 0,
\end{split}
\end{equation}
for all $0\leq t \leq T$.
\end{lemma}
\begin{proof}
Multiplying (\ref{Oldroyd_B_1})$_1$ and (\ref{Oldroyd_B_1})$_2$ by $\alpha u$ and $K \tau$, respectively, summing the results up, and using integration by parts, we have
\begin{equation}\label{est_L2}
\begin{split}
&\frac12 \frac{\rm d}{{\rm d}t} (\alpha\|u\|_{L^2}^2 + K\|\tau\|_{L^2}^2) + \beta K\|\tau\|_{L^2}^2 + \mu K\|\nabla\tau\|_{L^2}^2 = 0.
\end{split}
\end{equation}
Similarly, multiplying $\nabla$(\ref{Oldroyd_B_1})$_1$ and $\nabla$(\ref{Oldroyd_B_1})$_2$ by $\alpha\nabla u$ and $K \nabla \tau$, respectively, we have
\begin{equation*}
\begin{split}
&\frac12 \frac{\rm d}{{\rm d}t} (\alpha\|\nabla u\|_{L^2}^2 + K\|\nabla\tau\|_{L^2}^2) + \beta K\|\nabla\tau\|_{L^2}^2 + \mu K\|\nabla^2\tau\|_{L^2}^2\\
=& - \langle K \nabla(u\cdot\nabla\tau),\nabla\tau \rangle
\,\le\, K\|\nabla u\|_{L^\infty}\|\nabla\tau\|_{L^2}^2
\,\le\, C\delta K \|\nabla\tau\|_{L^2}^2.
\end{split}
\end{equation*}
Taking $\delta\le \frac{\beta}{2C}$, we obtain
\begin{equation}\label{est_first}
\begin{split}
& \frac{1}{2}\frac{\rm d}{{\rm d}t} (\alpha\|\nabla u\|_{L^2}^2 + K\|\nabla\tau\|_{L^2}^2) + \frac{\beta K}{2}\|\nabla\tau\|_{L^2}^2 + \mu K\|\nabla^2\tau\|_{L^2}^2
\leq 0.
\end{split}
\end{equation}
To derive the dissipative estimate of the velocity gradient, the equation of $\sigma$ plays an important role. More specifically, multiplying $\Lambda$(\ref{u_sigma_d})$_1$ and (\ref{u_sigma_d})$_2$ by $\sigma$ and $\Lambda u$, respectively, summing the results up, and using integration by parts, we have
\begin{equation}\label{bu}
\begin{split}
&\partial_t\langle\Lambda u, \sigma\rangle + \frac{\alpha}{2}\|\Lambda u\|_{L^2}^2\\
= &\Big(K\|\Lambda \sigma\|_{L^2}^2 + \langle \mu\Delta\sigma,\Lambda u\rangle - \langle\beta\sigma,\Lambda u \rangle\Big)\\ &- \Big(\langle \Lambda\mathbb{P}(u\cdot \nabla u),\sigma\rangle + \langle \Lambda^{-1}\mathbb{P}{\rm div}(u\cdot \nabla \tau),\Lambda u\rangle\Big)\\
=:&\, I_1 - I_2.
\end{split}
\end{equation}
For $I_1$ and $I_2$, using (\ref{bu_7}), we have that
\begin{equation*}\label{bu_1}
\begin{split}
|I_1|&\le K\|\Lambda \sigma\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda u\|_{L^2}^2 + \frac{4\mu^2}{\alpha}\|\Delta\sigma\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda u\|_{L^2}^2 + \frac{4\beta^2}{\alpha}\|\sigma\|_{L^2}^2,\\
|I_2|&\le \frac12\|\Lambda \sigma\|_{L^2}^2
+ \frac12\|\mathbb{P}(u\cdot \nabla u)\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda u\|_{L^2}^2 + \frac{4}{\alpha}\|\Lambda^{-1}\mathbb{P}\mathrm{d}iv(u\cdot \nabla \tau)\|_{L^2}^2\\
&\le \frac12\|\Lambda \sigma\|_{L^2}^2 + C\|u\cdot \nabla u\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda u\|_{L^2}^2 + C\|u\cdot \nabla \tau\|_{L^2}^2.
\end{split}
\end{equation*}
Then, substituting the above inequalities into (\ref{bu}), we obtain that
\begin{equation}\label{est_first_u}
\begin{split}
&\partial_t\langle\Lambda u, \sigma\rangle + \frac{\alpha}{2}\|\Lambda u\|_{L^2}^2\\
\leq & \,(\frac{3\alpha}{16} + C\delta^2)\|\Lambda u\|_{L^2}^2 + (K + \frac{4\beta^2}{\alpha} + \frac12)\|\sigma\|_{H^1}^2 + C\delta^2\|\tau\|_{H^1}^2 + C\mu^2\|\nabla^2\tau\|_{L^2}^2.
\end{split}
\end{equation}
Taking $\delta$ and $\eta_1>0$ small enough, summing \eqref{est_L2}, \eqref{est_first} and $\eta_1$\eqref{est_first_u} up, and using \eqref{bu_8}, we get (\ref{est_H1}).
\end{proof}
In a similar way, we can obtain the following higher order estimates.
\begin{lemma}\label{lemma_regularity_2}
Under the assumptions of Proposition \ref{Prop2}, there exists a sufficiently small positive constant $\eta_2=\frac{\eta_1}{4}$ independent of $\mu, T$ such that
\begin{equation}\label{est_H2}
\begin{split}
\frac{\rm d}{{\rm d}t} (\alpha\| u\|_{H^2}^2 &+ K\|\tau\|_{H^2}^2 + \eta_1\langle\Lambda u, \sigma\rangle + \eta_2\langle\Lambda^2 u,\Lambda \sigma\rangle)\\ &+ \frac{\beta K}{4}\|\tau\|_{H^2}^2 + \frac{\eta_2\alpha}{8}\|\Lambda u\|_{H^1}^2 + \mu K\|\nabla\tau\|_{H^2}^2
\leq 0,
\end{split}
\end{equation} for all $0\leq t \leq T$.
\end{lemma}
\begin{proof}
Multiplying $\nabla^2$(\ref{Oldroyd_B_1})$_1$ and $\nabla^2$ (\ref{Oldroyd_B_1})$_2$ by $\alpha \nabla^2 u$ and $K \nabla^2 \tau$, respectively, summing the results up, and using integration by parts, we have
\begin{align}\label{est_second}
\begin{split}
&\frac12 \frac{\rm d}{{\rm d}t} (\alpha\|\nabla^2 u\|_{L^2}^2 + K\|\nabla^2\tau\|_{L^2}^2) + \beta K\|\nabla^2\tau\|_{L^2}^2 + \mu K\|\nabla^3\tau\|_{L^2}^2\\
= & - \,\langle K \nabla^2(u\cdot\nabla\tau),\nabla^2\tau \rangle - \langle\alpha \nabla^2(u\cdot\nabla u),\nabla^2u \rangle\\
\leq & \,C(\|\nabla \tau\|_{L^\infty}\|\nabla^2 u\|_{L^2}\|\nabla^2 \tau\|_{L^2} + \|\nabla u\|_{L^\infty}\|\nabla^2 \tau\|_{L^2}^2 + \|\nabla u\|_{L^\infty}\|\nabla^2 u\|_{L^2}^2)\\
\leq &\, C\delta \|\nabla^2 u\|_{L^2}^2 + C\delta \|\nabla^2\tau\|_{L^2}^2.
\end{split}
\end{align}
Multiplying $\Lambda^2$(\ref{u_sigma_d})$_1$ and $\Lambda$(\ref{u_sigma_d})$_2$ by $\Lambda\sigma$ and $\Lambda^2 u$, respectively, summing the results up, and using integration by parts, we have
\begin{align}\label{bu_2}
\begin{split}
&\partial_t\langle\Lambda^2 u,\Lambda \sigma\rangle + \frac{\alpha}{2}\|\Lambda^2 u\|_{L^2}^2\\
= & \,\Big(K\|\Lambda^2 \sigma\|_{L^2}^2 + \langle \mu\Lambda\,\Delta\sigma,\Lambda^2 u\rangle - \langle\beta\Lambda\sigma,\Lambda^2 u \rangle\Big)\\ &- \Big(\langle \Lambda^2\mathbb{P}(u\cdot \nabla u),\Lambda\sigma\rangle + \langle \mathbb{P}{\rm div}(u\cdot \nabla \tau),\Lambda^2 u\rangle\Big)\\
=: & \,I_3 - I_4.
\end{split}
\end{align}
For $I_3$ and $I_4$, using (\ref{bu_7}), we have that
\begin{align}
|I_3|&\le K\|\Lambda^2 \sigma\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^2 u\|_{L^2}^2 + \frac{4\mu^2}{\alpha}\|\Lambda\,\Delta\sigma\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^2 u\|_{L^2}^2 + \frac{4\beta^2}{\alpha}\|\Lambda\sigma\|_{L^2}^2,\label{I3}\\
\nonumber |I_4|&\le \frac12\|\Lambda^2 \sigma\|_{L^2}^2
+ \frac12\|\Lambda\mathbb{P}(u\cdot \nabla u)\|_{L^2}^2
+ \frac{\alpha}{16}\|\Lambda^2 u\|_{L^2}^2 + \frac{4}{\alpha}\|\mathbb{P}{\rm div}(u\cdot \nabla \tau)\|_{L^2}^2\\
&\le \frac12\|\Lambda^2 \sigma\|_{L^2}^2
+ C\|\nabla(u\cdot \nabla u)\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^2 u\|_{L^2}^2 + C\|\nabla(u\cdot \nabla \tau)\|_{L^2}^2\label{I4}.
\end{align}
Substituting (\ref{I3}) and (\ref{I4}) into (\ref{bu_2}), we get
\begin{align}\label{est_second_u}
\begin{split}
&\partial_t\langle\Lambda^2 u,\Lambda \sigma\rangle + \frac{\alpha}{2}\|\Lambda^2 u\|_{L^2}^2\\
\leq & \,(\frac{3\alpha}{16} + C\delta^2)\|\Lambda^2 u\|_{L^2}^2 + C\delta^2\|\nabla u\|_{L^2}^2\\
&+ (K + \frac{4\beta^2}{\alpha} + \frac12)\|\nabla\sigma\|_{H^1}^2 + C\delta^2\|\nabla\tau\|_{H^1}^2 + C\mu^2\|\nabla^3\tau\|_{L^2}^2,
\end{split}
\end{align}
where the facts that
\begin{equation*}\label{bu_20}
\begin{split}
\|\nabla(u\cdot \nabla u)\|_{L^2}\le \| \nabla u\|_{L^\infty}\|\nabla u\|_{L^2} + \| u\|_{L^\infty}\|\nabla^2 u\|_{L^2},\\
\|\nabla(u\cdot \nabla \tau)\|_{L^2}\le \|\nabla u\|_{L^\infty}\|\nabla \tau\|_{L^2} + \| u\|_{L^\infty}\|\nabla^2 \tau\|_{L^2},
\end{split}
\end{equation*}
are used.
Taking $\eta_2= \frac14 \eta_1$ and $\delta$ small enough, summing \eqref{est_H1}, \eqref{est_second} and $\eta_2$\eqref{est_second_u} up, and using \eqref{bu_8}, we get (\ref{est_H2}).
\end{proof}
Finally, we get the following uniform estimates up to the third order.
\begin{lemma}\label{lemma_regularity_3}
Under the assumptions of Proposition \ref{Prop2}, there holds
\begin{equation}\label{eq_a_priori_est}
\begin{cases}
\|(u,\tau)(t)\|_{H^3}^2 \leq \frac{\delta}{2},\\[4mm]
\displaystyle\int_0^t\left(\|\nabla u(s)\|_{H^2}^2 + \|\tau(s)\|_{H^3}^2 + \mu\|\nabla\tau(s)\|_{H^3}^2\right){\rm d}s\le C,
\end{cases}
\end{equation}for all $0\leq t \leq T.$
\end{lemma}
\begin{proof}
Multiplying $\nabla^3$(\ref{Oldroyd_B_1})$_1$ and $\nabla^3$ (\ref{Oldroyd_B_1})$_2$ by $\alpha \nabla^3 u$ and $K \nabla^3 \tau$, respectively, summing the results up, and using integration by parts, we have
\begin{equation}\label{est_third}
\begin{split}
&\frac12 \frac{\rm d}{{\rm d}t} (\alpha\|\nabla^3 u\|_{L^2}^2 + K\|\nabla^3\tau\|_{L^2}^2) + \beta K\|\nabla^3\tau\|_{L^2}^2 + \mu K\|\nabla^4\tau\|_{L^2}^2\\
= & -\, \langle K \nabla^3(u\cdot\nabla\tau),\nabla^3\tau \rangle - \langle\alpha \nabla^3(u\cdot\nabla u),\nabla^3u \rangle\\
\leq & \,C(\|\nabla u\|_{L^\infty}\|\nabla^3 \tau\|_{L^2}^2 + \|\nabla \tau\|_{L^\infty}\|\nabla^3 u\|_{L^2}\|\nabla^3 \tau\|_{L^2} + \|\nabla u\|_{L^\infty}\|\nabla^3 u\|_{L^2}^2 )\\
\leq & \,C\delta \|\nabla^3 u\|_{L^2}^2 + C\delta\|\nabla^3\tau\|_{L^2}^2.
\end{split}
\end{equation}
Similarly, multiplying $\Lambda^3$(\ref{u_sigma_d})$_1$ and $\Lambda^2$(\ref{u_sigma_d})$_2$ by $\Lambda^2\sigma$ and $\Lambda^3 u$, respectively, and using integration by parts, we have
\begin{align}\label{bu_4}
\begin{split}
&\partial_t\langle\Lambda^3 u,\Lambda^2 \sigma\rangle + \frac{\alpha}{2}\|\Lambda^3 u\|_{L^2}^2\\
= & \,\Big(K\|\Lambda^3 \sigma\|_{L^2}^2 + \langle \mu\Lambda^2\Delta\sigma,\Lambda^3 u\rangle - \langle\beta\Lambda^2\sigma,\Lambda^3 u \rangle\Big)\\ &- \Big(\langle \Lambda^3\mathbb{P}(u\cdot \nabla u),\Lambda^2\sigma\rangle + \langle \Lambda\mathbb{P}{\rm div}(u\cdot \nabla \tau),\Lambda^3 u\rangle\Big)\\
=: &\, I_5 - I_6.
\end{split}
\end{align}
Using (\ref{bu_7}), we can obtain the estimates of $I_5$ and $I_6$ as follows
\begin{align}
|I_5|&\le K\|\Lambda^3 \sigma\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^3 u\|_{L^2}^2 + \frac{4\mu^2}{\alpha}\|\Lambda^2\Delta\sigma\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^3 u\|_{L^2}^2 + \frac{4\beta^2}{\alpha}\|\Lambda^2\sigma\|_{L^2}^2,\label{I5}\\
\nonumber|I_6|&\le \frac12\|\Lambda^3 \sigma\|_{L^2}^2
+ \frac12\|\Lambda^2\mathbb{P}(u\cdot \nabla u)\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^3 u\|_{L^2}^2 + \frac{4}{\alpha}\|\Lambda\mathbb{P}{\rm div}(u\cdot \nabla \tau)\|_{L^2}^2\\
&\le \frac12\|\Lambda^3 \sigma\|_{L^2}^2
+ C\|\nabla^2(u\cdot \nabla u)\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^3 u\|_{L^2}^2 + C\|\nabla^2(u\cdot \nabla \tau)\|_{L^2}^2.\label{I6}
\end{align}
Substituting (\ref{I5}) and (\ref{I6}) into (\ref{bu_4}), we can deduce that
\begin{equation}\label{est_third_u}
\begin{split}
&\partial_t\langle\Lambda^3 u,\Lambda^2 \sigma\rangle + \frac{\alpha}{2}\|\Lambda^3 u\|_{L^2}^2\\
\leq & \,(\frac{3\alpha}{16} + C\delta^2)\|\Lambda^3 u\|_{L^2}^2 + C\delta^2\|\nabla^2 u\|_{L^2}^2\\ &+ (K + \frac{4\beta^2}{\alpha} + \frac12)\|\nabla^2\sigma\|_{H^1}^2 + C\delta^2 \|\nabla^2\tau\|_{H^1}^2 + C\mu^2 \|\nabla^4\tau\|_{L^2}^2,
\end{split}
\end{equation}
where the facts that
\begin{equation}\label{bu_21}
\begin{split}
\|\nabla^2(u\cdot \nabla u)\|_{L^2}&\le \| u\|_{L^\infty}\|\nabla^3 u\|_{L^2} + 3\| \nabla u\|_{L^\infty}\|\nabla^2 u\|_{L^2},\\
\|\nabla^2(u\cdot \nabla \tau)\|_{L^2}&\le \| u\|_{L^\infty}\|\nabla^3 \tau\|_{L^2} + 2\|\nabla u\|_{L^\infty}\|\nabla^2 \tau\|_{L^2} + \|\nabla \tau\|_{L^\infty}\|\nabla^2 u\|_{L^2},
\end{split}
\end{equation}
are used.
Taking $\eta_3:= \frac14 \eta_2$ and $\delta$ small enough, summing \eqref{est_H2}, \eqref{est_third} and $\eta_3$\eqref{est_third_u} up, and using \eqref{bu_8}, we obtain that
\begin{equation}\label{est_H3}
\begin{split}
\frac{\rm d}{{\rm d}t} (\alpha\| u\|_{H^3}^2 &+ K\|\tau\|_{H^3}^2 + \sum_{i=1}^{3}\eta_i\langle\Lambda^i u,\Lambda^{i-1}\sigma\rangle)\\ &+ \frac{\beta K}{8}\|\tau\|_{H^3}^2 + \frac{\eta_3\alpha}{16}\|\Lambda u\|_{H^2}^2 + \mu K\|\nabla\tau\|_{H^3}^2
\leq 0.
\end{split}
\end{equation}
From the definition of $\eta_1$, $\eta_2$ and $\eta_3$, we have that
\begin{equation}\label{bu_9}
\begin{split}
\frac12(\alpha\| u\|_{H^3}^2 + K\|\tau\|_{H^3}^2)&\le \alpha\| u\|_{H^3}^2 + K\|\tau\|_{H^3}^2 + \sum_{i=1}^{3}\eta_i\langle\Lambda^i u,\Lambda^{i-1}\sigma\rangle\\ &\le 2(\alpha\| u\|_{H^3}^2 + K\|\tau\|_{H^3}^2).
\end{split}
\end{equation}
For all $0\leq t \leq T,$ integrating (\ref{est_H3}) over $[0, t]$ and utilizing (\ref{bu_9}), we have that
\begin{equation}\label{bu_10}
\begin{split}
&\frac12(\alpha\| u(t)\|_{H^3}^2 + K\|\tau(t)\|_{H^3}^2) \\&+\int_0^t\left(\frac{\eta_3\alpha}{16}\|\nabla u(s)\|_{H^2}^2 + \frac{\beta K}{8}\|\tau(s)\|_{H^3}^2 + \mu K\|\nabla\tau(s)\|_{H^3}^2\right){\rm d}s\\
\le &\,2(\alpha\| u_0\|_{H^3}^2 + K\|\tau_0\|_{H^3}^2)\,\le\,(2\alpha + 2K )\epsilon_0^2.
\end{split}
\end{equation}
Letting
\begin{equation*}\label{bu_11}
\begin{split}
\frac{4(\alpha+K)}{\min\{\alpha,K\}}\epsilon_0^2\le \frac{\delta}{2},
\end{split}
\end{equation*}
we get (\ref{eq_a_priori_est})$_1$ from (\ref{bu_10}). Using (\ref{bu_10}) again, we get (\ref{eq_a_priori_est})$_2$ for some known positive constant $C$.
\end{proof}
With Lemma \ref{lemma_regularity_3}, we finish the proof of Proposition \ref{Prop2}. Now we come to the proof of Theorem \ref{approximate solution} by using the standard continuity method.
\subsection*{Proof of Theorem \ref{approximate solution}}
For any fixed $\mu>0$, since
\begin{equation*}
\|(u_0,\tau_0)\|_{H^3}^2\le \epsilon_0^2\le \frac{\delta}{2},
\end{equation*}
and
\begin{equation*}
(u^\mu,\tau^\mu)\in C([0,\infty);H^{3}(\mathbb{R}^2)),
\end{equation*}
there exists a time $T=T(\mu)>0$, such that
\begin{equation}\label{conclusion}
\|(u^\mu,\tau^\mu)(t)\|_{H^3}\leq \delta,
\end{equation} for all $t\in[0,T]$.
Let $T^*$ be the maximal time such that (\ref{conclusion}) holds. In view of (\ref{conclusion}), $T^*>0$.
Suppose that $T^*<+\infty$. Then the continuity of the solution with respect to time yields that (\ref{conclusion}) holds on $[0,T^*]$, i.e.,
\begin{equation}\label{continuity_method}
\sup_{0\le s \le T^*}\|(u^\mu,\tau^\mu)(s)\|_{H^3}\le \delta.
\end{equation}
Then (\ref{continuity_method}) and Proposition \ref{Prop2} conclude that
\begin{equation}\label{apriori-assum1}
\|(u^\mu,\tau^\mu)(t)\|_{H^3}^2 \leq \frac{\delta}{2},
\end{equation} for all $t\in[0,T^*].$
Using (\ref{apriori-assum1}) and the continuity of the solution with respect to time again, we find that
$T^*$ in (\ref{continuity_method}) can be replaced by $T^*+\sigma_0$ for some positive constant $\sigma_0$, which contradicts the definition of $T^*$. Therefore $T^*=+\infty$.
Hence for all $\mu>0$ and $t> 0$, there holds
\begin{equation*}
\begin{split}
\|(u^\mu,\tau^\mu)(t)\|_{H^3}^2\le\delta.
\end{split}
\end{equation*}
This together with (\ref{bu_10}) finishes the proof of Theorem \ref{approximate solution}.
\section{Global existence and uniqueness}\label{Section_4}
This section aims to complete the proof of Theorem \ref{wellposedness}. Recall the uniform estimates established in Theorem \ref{approximate solution}, i.e.,
\begin{equation*}
\|(u^\mu,\tau^\mu)(t)\|_{H^3}^2 + \int_0^t (\|\nabla u^\mu(s)\|_{H^2}^2 + \|\tau^\mu(s)\|_{H^3}^2 + \mu\|\nabla\tau^\mu(s)\|_{H^3}^2){\rm d}s \leq C\|(u_0,\tau_0)\|_{H^3}^2.
\end{equation*}
Combining the above inequality with the equations in \eqref{Oldroyd_B_1}, we easily obtain that
\begin{equation*}
\|(\partial_t u^\mu,\partial_t \tau^\mu)(t)\|_{H^2}^2\le C.
\end{equation*}
By virtue of some standard weak (or weak$^*$) convergence results and the Aubin--Lions lemma (see for instance \cite{Simon 1987}), there exists $(u,\tau)\in L^\infty([0,\infty);H^3(\mathbb{R}^2))$ which is the limit of $(u^\mu,\tau^\mu)$ (up to a subsequence) in a suitable sense and solves \eqref{Oldroyd_B_d}.
For the uniqueness, suppose that there are two solutions $(u_1,\tau_1)$ and $(u_2,\tau_2)$. Denote $w=u_1-u_2$ and $v=\tau_1-\tau_2$, which satisfy
\begin{equation} \label{convergence_6}
\begin{cases}
\partial_tw+(w\cdot\nabla) u_1 +u_2\cdot\nabla w +\nabla (p_1-p_2)=K\, {\rm div}\,v,\\
\partial_tv+(w\cdot\nabla)\tau_1+u_2\cdot\nabla v+\beta v=\alpha\mathbb{D}(w).
\end{cases}
\end{equation}
Multiplying (\ref{convergence_6})$_1$ and (\ref{convergence_6})$_2$ by $\alpha w$ and $K v$, respectively, summing the results up, and using integration by parts, we have
\begin{equation*}
\begin{split}
&\frac12 \frac{\rm d}{{\rm d}t} (\alpha\|w\|_{L^2}^2 + K\|v\|_{L^2}^2) + \beta K\|v\|_{L^2}^2\\
\le&-\alpha\langle w\cdot\nabla u_1,w\rangle- \alpha\langle u_2\cdot\nabla w,w\rangle-K\langle w\cdot\nabla\tau_1,v\rangle-K\langle u_2\cdot\nabla v,v\rangle\\
\le&\,C(\alpha\|w\|_{L^2}^2 + K\|v\|_{L^2}^2),
\end{split}
\end{equation*}
which implies
\begin{equation*}
\alpha\|w(t)\|_{L^2}^2 + K\|v(t)\|_{L^2}^2\le e^{Ct}(\alpha\|w(0)\|_{L^2}^2 + K\|v(0)\|_{L^2}^2)=0.
\end{equation*}
Thus, $w=u_1-u_2=0$ and $v=\tau_1-\tau_2=0$. The proof of the uniqueness is complete.
\section{Decay estimates for the nonlinear system}\label{Section_5}
In this section, we establish the upper and lower decay estimates for the solutions of the Cauchy problem \eqref{Oldroyd_B_d} and finish the proof of Theorem \ref{thm_OB_d_decay}. Setting $\mu=0$ in \eqref{u_sigma_d}, we consider
\begin{eqnarray} \label{u_sigma_1}
\begin{cases}
\partial_tu-K \Lambda\sigma=\mathcal{F}_1,\\
\partial_t\sigma +\beta\sigma+\frac{\alpha}{2}\Lambda u=\mathcal{F}_2,
\end{cases}
\end{eqnarray} where
\begin{eqnarray*}
\mathcal{F}_1=-\mathbb{P}\left(u\cdot\nabla u\right),\
\mathcal{F}_2=-\Lambda^{-1}\mathbb{P}{\rm div}\left(u\cdot\nabla\tau\right).
\end{eqnarray*}
\subsection{Some estimates of the low-frequency parts}
Consider the linear part of the system \eqref{u_sigma_1}, i.e.,
\begin{equation} \label{Greenfunction_1}
\begin{cases}
\partial_tu-K \Lambda\sigma=0,\\
\partial_t\sigma +\beta\sigma+\frac{\alpha}{2}\Lambda u=0.
\end{cases}
\end{equation}
Note that the 3D case of \eqref{Greenfunction_1} with viscosity and diffusion has already been analyzed by Huang, the second author, the third author, and Zi \cite{Huang 2022} (see Lemmas 2.1, 4.1, 4.5 and 4.6 and Proposition 2.3 therein). After some slight modifications, we can get similar results in the 2D case.
To begin with, applying the Fourier transform to system \eqref{Greenfunction_1}, we get that
\begin{equation} \label{Greenfunction_2}
\begin{cases}
\partial_t\hat{u}^j-K|\xi|\hat{\sigma}^j=0,\\
\partial_t\hat{\sigma}^j+\beta\hat{\sigma}^j+\frac{\alpha}{2}|\xi| \hat{u}^j=0.
\end{cases}
\end{equation}
\begin{lemma}\label{lemma_Greenfunction_1}
(Lemma 2.1 of \cite{Huang 2022} with $\mu=\varepsilon=0$) The system \eqref{Greenfunction_2} can be solved as follows:
\begin{equation*}
\begin{cases}
\hat{u} = \mathcal{G}_3 \hat{u}_0 + K|\xi|\mathcal{G}_1\hat{\sigma}_0,\\
\hat{\sigma} = -\frac{\alpha}{2}|\xi|\mathcal{G}_1 \hat{u}_0 + \mathcal{G}_2\hat{\sigma}_0,
\end{cases}
\end{equation*}
where
\begin{equation}\label{lemma_Greenfunction_3+1}
\begin{split}
\mathcal{G}_1(\xi,t)=\frac{e^{\lambda_+t}-e^{\lambda_-t}}{\lambda_+-\lambda_-}, \ \mathcal{G}_2(\xi,t)&=\frac{\lambda_+e^{\lambda_+t}-\lambda_-e^{\lambda_-t}}{\lambda_+-\lambda_-}, \\ \mathcal{G}_3(\xi,t)=\frac{\lambda_+e^{\lambda_-t}-\lambda_-e^{\lambda_+t}}{\lambda_+-\lambda_-},\,
\lambda_{\pm}&=\frac{-\beta\pm\sqrt{\beta^2-2\alpha K|\xi|^2}}{2}.
\end{split}
\end{equation}
\end{lemma}
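The closed-form solution of Lemma \ref{lemma_Greenfunction_1} can be cross-checked (outside the proof) by integrating \eqref{Greenfunction_2} numerically and comparing with \eqref{lemma_Greenfunction_3+1}. The parameter values $\alpha=\beta=K=1$ and $|\xi|=0.5$ below are illustrative choices for which $\lambda_{\pm}$ are real.

```python
import math

# Illustrative parameters (assumptions for this check only)
alpha, beta, K = 1.0, 1.0, 1.0
xi = 0.5  # |xi|, chosen so that beta^2 - 2*alpha*K*xi^2 > 0

disc = math.sqrt(beta ** 2 - 2.0 * alpha * K * xi ** 2)
lam_p = (-beta + disc) / 2.0
lam_m = (-beta - disc) / 2.0

def G1(t): return (math.exp(lam_p * t) - math.exp(lam_m * t)) / (lam_p - lam_m)
def G2(t): return (lam_p * math.exp(lam_p * t) - lam_m * math.exp(lam_m * t)) / (lam_p - lam_m)
def G3(t): return (lam_p * math.exp(lam_m * t) - lam_m * math.exp(lam_p * t)) / (lam_p - lam_m)

def closed_form(u0, s0, t):
    # the representation of Lemma lemma_Greenfunction_1
    u = G3(t) * u0 + K * xi * G1(t) * s0
    s = -0.5 * alpha * xi * G1(t) * u0 + G2(t) * s0
    return u, s

def rk4(u0, s0, t, n=2000):
    # direct RK4 integration of du/dt = K|xi| s,  ds/dt = -beta s - (alpha/2)|xi| u
    h = t / n
    f = lambda u, s: (K * xi * s, -beta * s - 0.5 * alpha * xi * u)
    u, s = u0, s0
    for _ in range(n):
        k1 = f(u, s)
        k2 = f(u + 0.5 * h * k1[0], s + 0.5 * h * k1[1])
        k3 = f(u + 0.5 * h * k2[0], s + 0.5 * h * k2[1])
        k4 = f(u + h * k3[0], s + h * k3[1])
        u += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        s += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return u, s

u_cf, s_cf = closed_form(1.0, 1.0, 2.0)
u_rk, s_rk = rk4(1.0, 1.0, 2.0)
print(abs(u_cf - u_rk), abs(s_cf - s_rk))  # both differences should be tiny
```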
Due to the explicit expression of the solution, we can easily get some estimates as follows.
\begin{lemma}\label{lemma_Greenfunction_4}
There exist positive constants $R=R(\alpha,\beta,K)$, $\theta=\theta(\alpha,\beta,K)$ and $C=C(\alpha,\beta,K)$, such that, for any $|\xi|\leq R$ and $t>0$, it holds that
\begin{equation*}\label{lemma_Greenfunction_5}
\begin{split}
&\left|\mathcal{G}_1(\xi,t)\right|,\left|\mathcal{G}_3(\xi,t)\right|\leq Ce^{-\theta|\xi|^2t},\\
&|\mathcal{G}_2(\xi,t)|
\leq C\left(|\xi|^2 e^{-\theta|\xi|^2t} + e^{-\frac{\beta t}{2}}\right).
\end{split}
\end{equation*}
\end{lemma}
\begin{rem}
The proof of Lemma \ref{lemma_Greenfunction_4} is similar to the proof of Proposition $2.3$ and Lemma $4.5$ in \cite{Huang 2022} with only the difference of dimension.
\end{rem}
Consequently, we can obtain the upper bound of the low-frequency part of the solution satisfying \eqref{u_sigma_1}.
\begin{lemma} \label{lemma_Greenfunction_7}
Assume that $(u_0,\tau_0)\in L^1(\mathbb{R}^2)$. Then the solution to \eqref{u_sigma_1} satisfies
\begin{equation*} \label{lemma_Greenfunction_8}
\begin{split}
\left(\int_{|\xi|\leq R}
|\xi|^{2k}|\hat{u}(t)|^2
{\rm d}\xi\right)^\frac{1}{2}
\le&\,C(1+t)^{-\frac12-\frac{k}{2}} + C\int_0^t (1+t-s)^{-\frac12-\frac{k}{2}}(\|\hat{\mathcal{F}}_1 \|_{L^\infty} + \|\hat{\mathcal{F}}_2 \|_{L^\infty})
{\rm d}s,\\
\left(\int_{|\xi|\leq R}
|\xi|^{2k}|\hat{\sigma}(t)|^2
{\rm d}\xi\right)^\frac{1}{2}
\le &\,C(1+t)^{-1-\frac{k}{2}} + C\int_0^t (1+t-s)^{-1-\frac{k}{2}}(\|\hat{\mathcal{F}}_1 \|_{L^\infty} + \|\hat{\mathcal{F}}_2 \|_{L^\infty})
{\rm d}s.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
By Duhamel's principle, we have that
\begin{align}
\label{Greenfunction_13-1} \hat{u}(t) =& \,\mathcal{G}_3 \hat{u}_0 + K|\xi|\mathcal{G}_1\hat{\sigma}_0 + \int_0^t\mathcal{G}_3(t-s)\hat{\mathcal{F}}_1 (s) + K|\xi|\mathcal{G}_1(t-s)\hat{\mathcal{F}}_2 (s)
{\rm d}s,\\
\label{Greenfunction_13-2} \hat{\sigma}(t) =& -\frac{\alpha}{2}|\xi|\mathcal{G}_1 \hat{u}_0 + \mathcal{G}_2\hat{\sigma}_0 + \int_0^t-\frac{\alpha}{2}|\xi|\mathcal{G}_1(t-s)\hat{\mathcal{F}}_1 (s) + \mathcal{G}_2(t-s)\hat{\mathcal{F}}_2 (s){\rm d}s.
\end{align}
It follows from Lemma \ref{lemma_Greenfunction_4} and Minkowski's inequality that
\begin{align*}
&\left(\int_{|\xi|\leq R}|\xi|^{2k}|\hat{u}(t)|^2{\rm d}\xi\right)^{\frac12}\\
\le & ~C(\|\hat{u}_0\|_{L^\infty} + \|\hat{\tau}_0\|_{L^\infty})\left(\int_{|\xi|\leq R}|\xi|^{2k}e^{-2\theta|\xi|^2t}{\rm d}\xi\right)^{\frac12}\\
&+ ~C\left(\int_{|\xi|\leq R}|\xi|^{2k}\Big|\int_0^t\mathcal{G}_3(t-s)\hat{\mathcal{F}}_1 (\xi,s) + K|\xi|\mathcal{G}_1(t-s)\hat{\mathcal{F}}_2 (\xi,s){\rm d}s\Big|^2{\rm d}\xi\right)^\frac12\\
\le & ~C(1+t)^{-\frac12-\frac{k}{2}} + C\int_0^t\Big(\int_{|\xi|\leq R}|\xi|^{2k}e^{-2\theta|\xi|^2(t-s)}(|\hat{\mathcal{F}}_1 (\xi,s)|^2 + |\hat{\mathcal{F}}_2 (\xi,s)|^2){\rm d}\xi\Big)^\frac12{\rm d}s\\
\le &~C(1+t)^{-\frac12-\frac{k}{2}} + C\int_0^t(1+t-s)^{-\frac12-\frac{k}{2}}(\|\hat{\mathcal{F}}_1 (\cdot,s)\|_{L^\infty} + \|\hat{\mathcal{F}}_2 (\cdot,s)\|_{L^\infty}){\rm d}s.
\end{align*}
By similar calculations, we obtain that
\begin{align*}
&\left(\int_{|\xi|\leq R}
|\xi|^{2k}|\hat{\sigma}(t)|^2{\rm d}\xi\right)^\frac{1}{2}\\
\le& ~C\|\hat{u}_0\|_{L^\infty}\left(\int_{|\xi|\leq R}|\xi|^{2k+2}e^{-2\theta|\xi|^2t}{\rm d}\xi\right)^{\frac12} + C\|\hat{\tau}_0\|_{L^\infty}\left(\int_{|\xi|\leq R} |\xi|^{2k+4}e^{-2\theta|\xi|^2 t}+|\xi|^{2k}e^{-\beta t}{\rm d}\xi \right)^\frac12\\
&+ ~C\left(\int_{|\xi|\leq R}|\xi|^{2k}\Big|\int_0^t-\frac{\alpha}{2}|\xi|\mathcal{G}_1(t-s)\hat{\mathcal{F}}_1 (\xi,s) + \mathcal{G}_2(t-s)\hat{\mathcal{F}}_2 (\xi,s){\rm d}s\Big|^2{\rm d}\xi\right)^\frac12\\
\le& C(1+t)^{-1-\frac{k}{2}} + C\int_0^t (1+t-s)^{-1-\frac{k}{2}}(\|\hat{\mathcal{F}}_1(\cdot,s)\|_{L^\infty} + \|\hat{\mathcal{F}}_2(\cdot,s) \|_{L^\infty})
{\rm d}s.
\end{align*}
The proof of Lemma \ref{lemma_Greenfunction_7} is complete.
\end{proof}
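The rates in Lemma \ref{lemma_Greenfunction_7} stem from two-dimensional Gaussian-type integrals: in polar coordinates, $\int_{|\xi|\leq R}|\xi|^{2k}e^{-2\theta|\xi|^2t}{\rm d}\xi=2\pi\int_0^R r^{2k+1}e^{-2\theta r^2t}{\rm d}r$, whose square root decays like $(1+t)^{-\frac12-\frac{k}{2}}$. A quadrature illustration (outside the proof) with the assumed values $\theta=R=1$ and $k=1$: replacing $t$ by $4t$ should shrink the square root by a factor close to $4$.

```python
import math

def J(t, k=1, theta=1.0, R=1.0, n=100_000):
    # midpoint rule for 2*pi * ∫_0^R r^{2k+1} e^{-2 theta r^2 t} dr  (polar coordinates in 2D)
    h = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += r ** (2 * k + 1) * math.exp(-2.0 * theta * r * r * t)
    return 2.0 * math.pi * total * h

a = math.sqrt(J(100.0))
b = math.sqrt(J(400.0))
print(a / b)  # ≈ 4 for k = 1, matching the rate (1+t)^{-1/2 - k/2} = (1+t)^{-1}
```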
Next, we consider the lower bound estimates of $\mathcal{G}_1(\xi,t)$ and $\mathcal{G}_3(\xi,t)$.
\begin{lemma}\label{lemma_Greenfunction_10}
Let $R$ be the constant chosen in Lemma \ref{lemma_Greenfunction_4}. There exist three positive constants $\eta=\eta(\alpha,\beta,K)$, $C=C(\alpha,\beta,K)$ and $t_1=t_1(\beta)$, such that
\begin{equation}\label{Greenfunction_16}
|\mathcal{G}_1(\xi,t)| \geq \frac{1}{C} e^{-\eta |\xi|^2 t},~~|\mathcal{G}_3(\xi,t)| \geq \frac{1}{C} e^{-\eta |\xi|^2 t}, \ {\rm for}\ {\rm all}\ |\xi| \leq R \ {\rm and}\ t\geq t_1.
\end{equation}
\end{lemma}
\begin{proof}
From Lemma \ref{lemma_Greenfunction_4}, there holds
\begin{equation} \label{Greenfunction_17}
\frac{\sqrt{2}}{2}\beta\leq \lambda_+ - \lambda_-=\sqrt{\beta^2-2\alpha K|\xi|^2}\leq \beta,
\end{equation} for all $|\xi|\leq R$, where $R$ is sufficiently small.
Noticing that
\begin{equation*}\label{Greenfunction_18}
\lambda_+ =\frac{-\alpha K|\xi|^2}{\beta+\sqrt{\beta^2-2\alpha K|\xi|^2}} \geq - \frac{\alpha K}{\beta}|\xi|^2 =:-\eta|\xi|^2,
\end{equation*}
we can choose $t_1 = \frac{\sqrt{2}\ln 2}{\beta}$ such that
\begin{equation} \label{Greenfunction_19}
|e^{\lambda_+t}-e^{\lambda_-t}| = \left|e^{\lambda_+ t}\big(1 - e^ {-(\lambda_+-\lambda_-)t}\big)\right| \geq \frac{1}{2}e^{-\eta |\xi|^2 t},
\end{equation}
and
\begin{equation} \label{Greenfunction_20}
|\lambda_+e^{\lambda_-t}-\lambda_-e^{\lambda_+t}| = |e^{\lambda_+ t}\big(\lambda_+ e^ {-(\lambda_+-\lambda_-)t} - \lambda_-\big)| \geq (\lambda_+-\lambda_-)e^{-\eta |\xi|^2t},
\end{equation} for any $t\geq t_1$.
Then (\ref{lemma_Greenfunction_3+1}) combined with (\ref{Greenfunction_17}), (\ref{Greenfunction_19}), and (\ref{Greenfunction_20}) yields (\ref{Greenfunction_16}). Hence we finish the proof of Lemma \ref{lemma_Greenfunction_10}.
\end{proof}
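The constant $t_1=\frac{\sqrt{2}\ln 2}{\beta}$ comes from elementary arithmetic: since $\lambda_+-\lambda_-\geq\frac{\sqrt{2}}{2}\beta$, we have $e^{-(\lambda_+-\lambda_-)t}\leq e^{-\ln 2}=\frac12$ for all $t\geq t_1$. As an illustrative spot-check (outside the proof), the bound \eqref{Greenfunction_19} with $\eta=\frac{\alpha K}{\beta}$ can be verified on a grid; the values $\alpha=\beta=K=1$ and $R=\frac12$ below are assumptions chosen so that $\beta^2-2\alpha K|\xi|^2\geq\frac{\beta^2}{2}$ on $|\xi|\leq R$.

```python
import math

alpha, beta, K = 1.0, 1.0, 1.0   # illustrative parameters (assumptions)
R = 0.5                          # chosen so that beta^2 - 2*alpha*K*R^2 >= beta^2 / 2
eta = alpha * K / beta           # eta from the proof: lambda_+ >= -eta |xi|^2
t1 = math.sqrt(2.0) * math.log(2.0) / beta

for i in range(1, 51):
    xi = R * i / 50.0
    disc = math.sqrt(beta ** 2 - 2.0 * alpha * K * xi ** 2)
    lam_p = (-beta + disc) / 2.0
    lam_m = (-beta - disc) / 2.0
    for j in range(1, 51):
        t = t1 + j * 0.2
        lhs = abs(math.exp(lam_p * t) - math.exp(lam_m * t))
        rhs = 0.5 * math.exp(-eta * xi * xi * t)
        # the lower bound of (Greenfunction_19) on this grid
        assert lhs >= rhs
```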
With Lemmas \ref{lemma_Greenfunction_4} and \ref{lemma_Greenfunction_10}, the lower bounds of the linear part of the solution can be estimated as follows.
\begin{lemma}\label{lemma_Greenfunction_11}
Under the assumptions of Lemma \ref{lemma_Greenfunction_10}, assume in addition that $\Big|\int_{\mathbb{R}^2}u_0(x) {\rm d}x\Big| = c_2>0.$ Then there exists a positive generic constant $C = C(\alpha,\beta,K,c_2,\|\tau_0\|_{L^1})$, such that
\begin{equation}\label{lemma_Greenfunction_12}
\begin{split}
\||\xi|^k\left(\mathcal{G}_3 \hat{u}_0 + K|\xi|\mathcal{G}_1\hat{\sigma}_0\right)\|_{L^2} \geq \frac{1}{C}(1 + t)^{-\frac12-\frac{k}{2}},
\end{split}
\end{equation}
\begin{equation}\label{lemma_Greenfunction_13}
\,\,\,\,\,\||\xi|^k\left(-\frac{\alpha}{2}|\xi|\mathcal{G}_1 \hat{u}_0 + \mathcal{G}_2\hat{\sigma}_0\right)\|_{L^2} \geq \frac{1}{C}(1 + t)^{-1-\frac{k}{2}},
\end{equation}for all $t\geq t_1$ and $k=0, 1, 2, 3 $.
\end{lemma}
\begin{proof}
First of all, since $u_0\in L^1(\mathbb{R}^2)$, its Fourier transform $\hat{u}_0$ is continuous on $\mathbb{R}^2$. Hence there exists a constant $R'>0$ such that
\begin{equation*}
|\hat{u}_0(\xi)|\ge \frac{c_2}{2},\quad \text{for all } 0\le|\xi|\leq R'.
\end{equation*}
Without loss of generality, we assume $R'\le R$. Then we have
\begin{equation}\label{Greenfunction_21}
\begin{split}
&\||\xi|^k\left(\mathcal{G}_3 \hat{u}_0 + K|\xi|\mathcal{G}_1\hat{\sigma}_0\right)\|_{L^2}\\
=&
\left\||\xi|^k\mathcal{G}_3(\xi,t)\hat{u}_0(\xi) + K|\xi|^{k+1}\mathcal{G}_1(\xi,t)\hat{\sigma}_0(\xi)\right\|_{L^2}\\
\geq & \left(\int_{|\xi|\leq R'}|\xi|^{2k}\big|\mathcal{G}_3(\xi,t)\hat{u}_0(\xi) \big|^2 {\rm d}\xi\right)^\frac12 -\left(\int_{|\xi|\leq R'} K^2|\xi|^{2k+2}|\mathcal{G}_1(\xi,t)\hat{\sigma}_0(\xi)|^2 {\rm d}\xi\right)^\frac12
\\ =:&\,
K_1-K_2.
\end{split}
\end{equation}
From Lemma \ref{lemma_Greenfunction_10}, we have that
\begin{equation}\label{Greenfunction_22}
\begin{split}
K_1 \geq \frac{1}{C}\left(\int_{|\xi|\leq R'}|\xi|^{2k}e^{-2\eta |\xi|^2t}{\rm d} \xi\right)^\frac12
\ge \frac{1}{C}(1+t)^{-\frac12-\frac{k}{2}},
\end{split}
\end{equation} for all $t\geq t_1$.
On the other hand, Lemma \ref{lemma_Greenfunction_4} yields
\begin{equation}\label{Greenfunction_23}
\begin{split}
K_2 \leq & ~C\|\hat{\sigma}_0\|_{L^\infty}\left( \int_{|\xi|\leq R'} |\xi|^{2k+2} e^{-2\theta|\xi|^2t} {\rm d} \xi\right)^\frac12
\le
C(1+t)^{-1-\frac{k}{2}}.
\end{split}
\end{equation}
Combining (\ref{Greenfunction_21}), (\ref{Greenfunction_22}), and (\ref{Greenfunction_23}) yields (\ref{lemma_Greenfunction_12}) for all $t\ge t_1$.
Next, notice that
\begin{equation}\label{Greenfunction_24}
\begin{split}
&\||\xi|^k\left(-\frac{\alpha}{2}|\xi|\mathcal{G}_1 \hat{u}_0 + \mathcal{G}_2\hat{\sigma}_0\right)\|_{L^2}
\\= & \left\|-\frac{\alpha}{2}|\xi|^{k+1}\mathcal{G}_1(\xi,t)\hat{u}_0(\xi) + |\xi|^k \mathcal{G}_2(\xi,t)\hat{\sigma}_0(\xi)\right\|_{L^2}\\
\ge&\frac{\alpha}{2}\left(\int_{|\xi|\leq R'}|\xi|^{2k+2}|\mathcal{G}_1(\xi,t)|^2|\hat{u}_0(\xi)|^2 {\rm d}\xi\right)^\frac12
- \left(\int_{|\xi|\leq R'}|\xi|^{2k}|\mathcal{G}_2(\xi,t)|^2|\hat{\sigma}_0(\xi)|^2 {\rm d}\xi\right)^\frac12\\
=:&\, K_3 - K_4.
\end{split}
\end{equation}
Similar to the analysis of $K_1$ and $K_2$, we have
\begin{equation}\label{Greenfunction_25}
K_3 \ge \frac{1}{C}(1 + t)^{-1-\frac{k}{2}},
\end{equation}
and
\begin{equation}\label{Greenfunction_26}
\begin{split}
K_4 \le& \|\hat{\sigma}_0\|_{L^\infty} \left(\int_{|\xi|\leq R'} |\xi|^{2k}|\mathcal{G}_2(\xi,t)|^2{\rm d} \xi\right)^\frac12
\\ \le & C\left(\int_{|\xi|\leq R'} |\xi|^{4+2k}e^{-2\theta|\xi|^2 t}+|\xi|^{2k}e^{-\beta t}{\rm d}\xi \right)^\frac12\\
\le & \,C(1 + t)^{-\frac{3}{2}-\frac{k}{2}},
\end{split}
\end{equation} for all $t\geq t_1$.
It follows from \eqref{Greenfunction_24}, (\ref{Greenfunction_25}), and \eqref{Greenfunction_26} that \eqref{lemma_Greenfunction_13} holds for all $t\ge t_1$. This completes the proof of Lemma \ref{lemma_Greenfunction_11}.
\end{proof}
\subsection{Upper time-decay estimates}
To begin with, we define that
\begin{equation*}
\begin{split}
\mathcal{H}_1(t) & := \alpha\| u\|_{H^1}^2 + K\| \tau\|_{H^1}^2 + \eta_1\langle\Lambda u,\sigma\rangle = O(\|(u,\tau)\|_{H^1}^2),\\
\mathcal{H}_2(t) & := \alpha\|\nabla u\|_{H^1}^2 + K\|\nabla \tau\|_{H^1}^2 + \eta_2\langle\Lambda^2 u,\Lambda\sigma\rangle = O(\|\nabla(u,\tau)\|_{H^1}^2),\\
\mathcal{H}_3(t) & := \alpha\|\nabla^2 u\|_{H^1}^2 + K\|\nabla^2 \tau\|_{H^1}^2 + \eta_3\langle\Lambda^3 u,\Lambda^2\sigma\rangle = O(\|\nabla^2(u,\tau)\|_{H^1}^2).
\end{split}
\end{equation*}
\begin{lemma}\label{lemma_upper_decay}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, we have
\begin{equation*}\label{utauH1-1}
\| u(t)\|_{H^1}^2 + \| \tau(t)\|_{H^1}^2\le C(1+t)^{-\frac{1}{2}},
\end{equation*} for all $t>0.$
\end{lemma}
\begin{proof}
Recalling from \eqref{est_first} ($\mu=0$) that
\begin{equation*}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} (\alpha\|\nabla u(t)\|_{L^2}^2 + K\|\nabla\tau(t)\|_{L^2}^2) \leq 0,
\end{split}
\end{equation*}
then we have
\begin{equation}\label{nau}
\begin{split}
\alpha\|\nabla u(t)\|_{L^2}^2 + K\|\nabla\tau(t)\|_{L^2}^2\le \alpha\|\nabla u(s)\|_{L^2}^2 + K\|\nabla\tau(s)\|_{L^2}^2,
\end{split}
\end{equation}for $t\ge s\ge 0$.
By virtue of \eqref{uniform_estimates} with $\mu=0$, there holds
\begin{equation*}
\int_0^{+\infty} (\|\nabla u(s)\|_{H^2}^2 + \|\tau(s)\|_{H^3}^2){\rm d}s \leq C.
\end{equation*}
This combined with (\ref{nau}) yields
\begin{equation*}\label{new_H1_L2_7}
\begin{split}
\frac{t}{2}\alpha\|\nabla u(t)\|_{L^2}^2 + \frac{t}{2}K\|\nabla\tau(t)\|_{L^2}^2\le \int_{\frac{t}{2}}^t (\alpha\|\nabla u(s)\|_{L^2}^2 + K\|\nabla\tau(s)\|_{L^2}^2){\rm d}s\longrightarrow 0\,\,\,\,\text{as}\,\,\,\,t\rightarrow+\infty.
\end{split}
\end{equation*}
Namely, we have
\begin{equation}\label{new_H1_L2_19}
\begin{split}
\varphi(t):=(1+t)^{\frac12}\|\nabla u(t)\|_{L^2}\longrightarrow 0 \,\,\,\,\text{and}\,\,\,\, \psi(t):=(1+t)^{\frac12}\|\nabla \tau(t)\|_{L^2}\longrightarrow 0 \,\,\,\,\text{as}\,\,\,\,t\rightarrow\infty.
\end{split}
\end{equation}
Next, from \eqref{est_H1} ($\mu=0$), we have
\begin{equation}\label{new_H1_L2}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_1(t) + \frac{\beta K}{2}\|\tau\|_{H^1}^2 +\frac{\eta_1\alpha}{4}\|\Lambda u\|_{L^2}^2
\leq 0.
\end{split}
\end{equation}
Noticing that
\begin{equation*}\label{notice}
\begin{split}
\|\Lambda u\|_{L^2}^2 = \|\nabla u\|_{L^2}^2 &= \frac12 \|\nabla u\|_{L^2}^2 + \frac12 \int_{|\xi|\ge R_{1}}
|\xi|^2|\hat{u}|^2 {\rm d}\xi + \frac12 \int_{|\xi|\le R_{1}}
|\xi|^2|\hat{u}|^2 {\rm d}\xi\\
&\ge \frac12 \|\nabla u\|_{L^2}^2 + \frac12 R_{1}^{2}\int_{|\xi|\ge R_{1}}
|\hat{u}|^2 {\rm d}\xi,
\end{split}
\end{equation*}
where $R_{1}:=\min\{1, R\}$. Without loss of generality, we assume that $\frac{\eta_1}{8}\le \frac{\beta}{2}$, then \eqref{new_H1_L2} can be rewritten as
\begin{equation}\label{dtH1}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_1(t) + \frac{\eta_1 R_{1}^{2}}{16}(2\alpha\| u\|_{H^1}^2 + 2K\| \tau\|_{H^1}^2)\le \frac{\eta_1\alpha}{8}\int_{|\xi|\le R_{1}}
|\hat{u}|^2 {\rm d}\xi.
\end{split}
\end{equation}
Substituting \eqref{bu_9} into (\ref{dtH1}), we obtain
\begin{equation} \label{new1}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_1(t) + \frac{\eta_1 R_{1}^{2}}{16}\mathcal{H}_1(t)
\le \frac{\eta_1\alpha}{8}\int_{|\xi|\le R_{1}}
|\hat{u}|^2 {\rm d}\xi.
\end{split}
\end{equation}
From Lemma \ref{lemma_Greenfunction_7} ($k=0$), we have that
\begin{equation} \label{new2}
\begin{split}
\left(\int_{|\xi|\leq R_{1}}
|\hat{u}|^2
{\rm d}\xi\right)^\frac{1}{2}&\le\left(\int_{|\xi|\leq R}
|\hat{u}|^2
{\rm d}\xi\right)^\frac{1}{2}\\
&\le C(1+t)^{-\frac12} + C\int_0^t(1+t-s)^{-\frac12}\|u\|_{L^{2}}(\|\nabla u\|_{L^2} + \|\nabla\tau\|_{L^{2}})
{\rm d}s.
\end{split}
\end{equation}
Then, (\ref{new1}) and (\ref{new2}) yield
\begin{equation}\label{new_H1_L2_6}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_1(t) &+ \frac{\eta_1 R_{1}^{2}}{16}\mathcal{H}_1(t)
\le C (1+t)^{-1}\\ &+ C \left(\int_0^t(1+t-s)^{-\frac12}\|u\|_{L^{2}}(\|\nabla u\|_{L^2} + \|\nabla\tau\|_{L^{2}})
{\rm d}s\right)^{2}.
\end{split}
\end{equation}
Substituting \eqref{new_H1_L2_19} into \eqref{new_H1_L2_6}, we obtain
\begin{equation}\label{new_H1_L2_20}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_1(t) &+ \frac{\eta_1 R_{1}^{2}}{16}\mathcal{H}_1(t)
\le C (1+t)^{-1}\\ &+ C \left(\int_0^t(1+t-s)^{-\frac12}(1+s)^{-\frac12}\|u(s)\|_{L^{2}}(\varphi(s) + \psi(s))
{\rm d}s\right)^{2}.
\end{split}
\end{equation}
Define that
\begin{equation}\label{M}
\mathcal{M}(t): = \sup_{0\leq s \leq t}(1+s)^{\frac{1}{2}}\mathcal{H}_1(s).
\end{equation}
Notice that $\mathcal{M}(t)$ is non-decreasing and for all $t\ge 0$,
$$\mathcal{H}_1(t) \le (1+t)^{-\frac{1}{2}}\mathcal{M}(t)\,\,\,\,\, \text{and}\,\,\,\,\, \|u(t)\|_{L^{2}} \le C(1+t)^{-\frac{1}{4}}\mathcal{M}(t)^{\frac12}.$$
Then, \eqref{new_H1_L2_20} and (\ref{M}) immediately yield
\begin{equation}\label{new_H1_L2_21}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_1(t) &+ \frac{\eta_1 R_{1}^{2}}{16}\mathcal{H}_1(t)
\le C (1+t)^{-1}\\ &+ C \mathcal{M}(t)\left(\int_0^t(1+t-s)^{-\frac12}(1+s)^{-\frac34}(\varphi(s) + \psi(s))
{\rm d}s\right)^{2}.
\end{split}
\end{equation}
Motivated by Dong and Chen \cite{Dong 2006}, we define that
\begin{equation*}
\mathcal{J}(t):= (1+t)^{\frac14}\int_0^t(1+t-s)^{-\frac12}(1+s)^{-\frac34}(\varphi(s) + \psi(s)){\rm d}s.
\end{equation*}
Owing to \eqref{new_H1_L2_19}, for any given small constant $\epsilon>0$, there exists a $T_\epsilon> 0$, such that
$$\varphi(s) + \psi(s)\le\epsilon,\,\,\,\,\text{for all}\,\,\,\, s\ge T_\epsilon.$$
Then, we have
\begin{equation*}\label{new_H1_L2_22}
\begin{split}
\mathcal{J}(t) = &\,(1+t)^{\frac14}\int_0^{T_\epsilon}(1+t-s)^{-\frac12}(1+s)^{-\frac34}(\varphi(s) + \psi(s)){\rm d}s \\&+ (1+t)^{\frac14}\int_{T_\epsilon}^t(1+t-s)^{-\frac12}(1+s)^{-\frac34}(\varphi(s) + \psi(s)){\rm d}s\\
\le&\, (1+t)^{\frac14}\Big(C(T_\epsilon)\int_0^{T_\epsilon}(1+t-s)^{-\frac12}(1+s)^{-\frac34}{\rm d}s + \epsilon\int_{0}^t(1+t-s)^{-\frac12}(1+s)^{-\frac34}{\rm d}s\Big)\\
\le& \,C(T_\epsilon)(1+t)^{-\frac14} +C\epsilon,
\end{split}
\end{equation*}
for $t\ge 2T_\epsilon.$ Letting $t\rightarrow+\infty$, and using the fact that $\epsilon$ is arbitrarily small, we have
\begin{equation}\label{new_H1_L2_23}
\begin{split}
\mathcal{J}(t)\longrightarrow 0 \,\,\,\,\text{as}\,\,\,\,t\rightarrow +\infty.
\end{split}
\end{equation}
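In the estimate of $\mathcal{J}(t)$ above, we used the elementary convolution inequality, obtained by splitting the integral at $s=\frac{t}{2}$:
\begin{equation*}
\int_0^t(1+t-s)^{-\frac12}(1+s)^{-\frac34}{\rm d}s \le C(1+t)^{-\frac12}\int_0^{\frac{t}{2}}(1+s)^{-\frac34}{\rm d}s + C(1+t)^{-\frac34}\int_{\frac{t}{2}}^t(1+t-s)^{-\frac12}{\rm d}s\le C(1+t)^{-\frac14}.
\end{equation*}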
Returning to \eqref{new_H1_L2_21}, by Gronwall's inequality and \eqref{new_H1_L2_23} (which provides a $T_1>0$ such that $C\mathcal{J}(s)^2\le\frac12$ for all $s\ge T_1$), we have
\begin{equation*}\label{new_H1_L2_24}
\begin{split}
\mathcal{H}_1(t)&\le e^{-Ct}\mathcal{H}_1(0)+ C\int_0^t e^{-C(t-s)}\left((1+s)^{-1} + \mathcal{M}(s)(1+s)^{-\frac12} \mathcal{J}(s)^2 \right) {\rm d}s\\
&\le C (1+t)^{-1} + C(T_1)\int_0^{T_1} e^{-C(t-s)}(1+s)^{-\frac12} {\rm d}s + C\int_{T_1}^t e^{-C(t-s)}\mathcal{M}(s)(1+s)^{-\frac12} \mathcal{J}(s)^2 {\rm d}s\\
&\le C(T_1) (1+t)^{-\frac{1}{2}} + \frac12(1+t)^{-\frac12}\sup_{0\leq s \leq t}(1+s)^{\frac{1}{2}}\mathcal{H}_1(s),
\end{split}
\end{equation*} for any $t\ge T_2:=2T_1$. Namely, for $t\ge T_2,$ there holds
\begin{equation}\label{new_H1_L2_26}
\begin{split}
(1+t)^{\frac12}\mathcal{H}_1(t)\le C(T_1) + \frac12\sup_{0\leq s \leq t}(1+s)^{\frac{1}{2}}\mathcal{H}_1(s).
\end{split}
\end{equation}
Taking the supremum with respect to $t$ over $[T_2, t]$ in \eqref{new_H1_L2_26}, and using the boundedness of $\mathcal{H}_1$ on $[0,T_2]$, we get
\begin{equation*}\label{new_H1_L2_27}
\begin{split}
\frac12\sup_{T_2\leq s \leq t}(1+s)^{\frac{1}{2}}\mathcal{H}_1(s)\le C,
\end{split}
\end{equation*}
which yields
\begin{equation}\label{new_H1_L2_28}
\begin{split}
\mathcal{H}_1(t)\le C (1+t)^{-\frac12},
\end{split}
\end{equation} for $t\ge T_2$. Since $\mathcal{H}_1(t)$ is bounded for all $t\le T_2$, (\ref{new_H1_L2_28}) in fact holds for all $t>0$. The proof of Lemma \ref{lemma_upper_decay} is complete.
\end{proof}
To obtain a sharper decay rate for the quantities on the left-hand side of (\ref{utauH1}), we employ the Fourier splitting method (see \cite{Schonbek 1985}).
\begin{lemma}\label{lemma_upper_decay+1}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, we have
\begin{equation}\label{utauH1}
\| u(t)\|_{H^1}^2 + \| \tau(t)\|_{H^1}^2\le C(1+t)^{-1},
\end{equation} for all $t>0.$
\end{lemma}
\begin{proof}
Decomposing the term $\|\Lambda u\|_{L^2}^2$ again, there holds
\begin{equation}\label{new_H1_L2_29}
\begin{split}
\|\Lambda u\|_{L^2}^2 = \|\nabla u\|_{L^2}^2 &= \frac12 \|\nabla u\|_{L^2}^2 + \frac12 \int_{|\xi|\ge g_1(t)}
|\xi|^2|\hat{u}|^2 {\rm d}\xi + \frac12 \int_{|\xi|\le g_1(t)}
|\xi|^2|\hat{u}|^2 {\rm d}\xi\\
&\ge \frac12 \|\nabla u\|_{L^2}^2 + \frac12 g_1(t)^{2}\int_{|\xi|\ge g_1(t)}
|\hat{u}|^2 {\rm d}\xi,
\end{split}
\end{equation}
where $g_1^2(t)=\frac{24}{\eta_1}(1+t)^{-1}$. Then $g_1^2(t)\le 1$ for all $t\ge \frac{24}{\eta_1}-1$.
Substituting \eqref{new_H1_L2_29} into \eqref{new_H1_L2}, we get
\begin{equation}\label{new_H1_L2_12}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_1(t) + \frac{3}{2}(1+t)^{-1}\mathcal{H}_1(t)
\le C(1+t)^{-1}\int_{|\xi|\le g_1(t)}
|\hat{u}|^2 {\rm d}\xi,
\end{split}
\end{equation}for all $t\ge \max\{\frac{24}{\eta_1}-1,\frac{6}{\beta}-1\}=:t_2$.
Recalling (\ref{Greenfunction_13-1}), we have
\begin{equation} \label{duhamel}
\hat{u}(t) = \,\mathcal{G}_3 \hat{u}_0 + K|\xi|\mathcal{G}_1\hat{\sigma}_0 + \int_0^t\mathcal{G}_3(t-s)\hat{\mathcal{F}}_1 (s) + K|\xi|\mathcal{G}_1(t-s)\hat{\mathcal{F}}_2 (s)
{\rm d}s,
\end{equation}
where
\begin{eqnarray*}
\mathcal{F}_1=-\mathbb{P}\left(u\cdot\nabla u\right),\
\mathcal{F}_2= -\Lambda^{-1}\mathbb{P}{\rm div}\left(u\cdot\nabla\tau\right).
\end{eqnarray*}
It is easy to see that
\begin{equation*}
g_1^2(t)=\frac{24}{\eta_1}(1+t)^{-1}\le R^2 \doteq \frac{\beta^2}{4\alpha K},
\end{equation*}
for all $t\ge \max\{\frac{96\alpha K}{\eta_1\beta^2}-1, t_2\}=:t_3$.
By virtue of Lemma \ref{lemma_Greenfunction_4}, (\ref{duhamel}) can be estimated as follows:
\begin{align}\label{hatu}
\nonumber |\hat{u}|\le \,&Ce^{-\theta|\xi|^2t}|\hat{u}_0| + C |\xi| e^{-\theta|\xi|^2t}|\hat{\sigma}_0|
+ C\int_0^t e^{-\theta|\xi|^2(t-s)}|\xi||\widehat{u\otimes u}(s)| {\rm d}s
\\ \nonumber &+ C\int_0^t |\xi| e^{-\theta|\xi|^2(t-s)}|\xi||\widehat{u \otimes\tau}(s)| {\rm d}s\\
\le \,&C + C|\xi|\int_0^t \|u\|_{L^2}^2{\rm d}s + C|\xi|^2\int_0^t \|u\|_{L^2}\|\tau\|_{L^2}{\rm d}s,
\end{align}for all $|\xi|\le g_1(t)\le R$ as $t\ge t_3$.
By virtue of \eqref{new_H1_L2_28} and (\ref{hatu}), we get
\begin{equation}\label{u_L00_low_2}
\begin{split}
|\hat{u}(\xi,t)|\le C,
\end{split}
\end{equation} for $t\ge t_3$ and $|\xi|\le g_1(t)$.
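Indeed, by \eqref{new_H1_L2_28} we have $\|u(s)\|_{L^2}^2\le C(1+s)^{-\frac12}$ and $\|u(s)\|_{L^2}\|\tau(s)\|_{L^2}\le C(1+s)^{-\frac12}$, so that, for $|\xi|\le g_1(t)$,
\begin{equation*}
C|\xi|\int_0^t \|u\|_{L^2}^2{\rm d}s + C|\xi|^2\int_0^t \|u\|_{L^2}\|\tau\|_{L^2}{\rm d}s \le C\big(g_1(t) + g_1^2(t)\big)(1+t)^{\frac12}\le C,
\end{equation*}
since $g_1^2(t)=\frac{24}{\eta_1}(1+t)^{-1}$.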
Substituting \eqref{u_L00_low_2} into \eqref{new_H1_L2_12}, we get
\begin{equation}\label{new_H1_L2_15}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_1(t) + \frac{3}{2}(1+t)^{-1}\mathcal{H}_1(t)
\le C(1+t)^{-2},
\end{split}
\end{equation} for all $t\ge t_3$.
Multiplying \eqref{new_H1_L2_15} by $(1+t)^{\frac32}$, we can deduce that
\begin{equation*}\label{new_H1_L2_13}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} ((1+t)^{\frac32}\mathcal{H}_1(t))\le C(1+t)^{-\frac12},
\end{split}
\end{equation*}for all $t\ge t_3$, which yields
\begin{equation}\label{new_H1_L2_14}
\begin{split}
\mathcal{H}_1(t)\le C (1+t)^{-1}.
\end{split}
\end{equation}
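More precisely, integrating the previous differential inequality over $[t_3,t]$ gives
\begin{equation*}
(1+t)^{\frac32}\mathcal{H}_1(t)\le (1+t_3)^{\frac32}\mathcal{H}_1(t_3) + C\int_{t_3}^t(1+s)^{-\frac12}{\rm d}s\le C(1+t)^{\frac12},
\end{equation*}
which yields \eqref{new_H1_L2_14} for $t\ge t_3$.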
Since $\mathcal{H}_1(t)$ is bounded for all $t\le t_3$, (\ref{new_H1_L2_14}) also holds for all $t>0$.
\end{proof}
Next, we will develop a way to capture the optimal time-decay rates for
the higher-order derivatives of the solution.
\begin{lemma}\label{lemma_upper_decay_2}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, we have
\begin{equation}\label{upper decay2}
\|\nabla u(t)\|_{H^2}^2 + \|\nabla \tau(t)\|_{H^2}^2\le C (1+t)^{-2},
\end{equation}
for all $t>0$.
\end{lemma}
\begin{proof}
Summing \eqref{est_first}, \eqref{est_second} and $\eta_2\times$\eqref{est_second_u} ($\mu=0$) up, we obtain that
\begin{equation}\label{new_H2_L2_1}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_2(t) + \frac{\beta K}{2}\|\nabla\tau\|_{H^1}^2 +\frac{\eta_2\alpha}{8}\|\Lambda^2 u\|_{L^2}^2 \le C\eta_2\big(\|\nabla u\|_{L^\infty}^{2}+\|\nabla \tau\|_{L^\infty}^{2}\big)\|\nabla u\|_{L^2}^2.
\end{split}
\end{equation}Then, \eqref{new_H2_L2_1} yields
\begin{equation}\label{new_H2_L2_2}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_2(t) + \bar{c}_0\mathcal{H}_2(t) \le C\|\nabla u\|_{L^2}^2,
\end{split}
\end{equation} for some positive constants $\bar{c}_0$ and $C$.
By virtue of \eqref{utauH1}, \eqref{new_H2_L2_2} yields
\begin{equation}\label{new_H2_L2_3}
\begin{split}
\mathcal{H}_2(t)\le &C(1+t)^{-1},
\end{split}
\end{equation}for all $t>0$.
Similarly, summing \eqref{est_second}, \eqref{est_third} and $\eta_3\times$\eqref{est_third_u} ($\mu=0$) up, we obtain that
\begin{equation} \label{new5}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_3(t) + \frac{\beta K}{2}\|\nabla^2\tau\|_{H^1}^2 +\frac{\eta_3\alpha}{8}\|\Lambda^3 u\|_{L^2}^2 \le C\big(\|\nabla u\|_{L^\infty}+\|\nabla \tau\|_{L^\infty}\big)\|\nabla^2 u\|_{L^2}^2,
\end{equation}which yields
\begin{equation}\label{new_H3_L2_2}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_3(t) + \bar{c}_1\mathcal{H}_3(t) \le C\|\nabla^2 u\|_{L^2}^2,
\end{split}
\end{equation}for some positive constants $\bar{c}_1$ and $C$.
Combining \eqref{new_H2_L2_3} with \eqref{new_H3_L2_2}, we get
\begin{equation} \label{new3}
\mathcal{H}_3(t)\le C(1+t)^{-1},
\end{equation} for all $t>0$.
Similar to \eqref{new_H1_L2_29}, we have
\begin{equation}\label{na2u}
\|\Lambda^2 u\|_{L^2}^2 = \|\nabla^2 u\|_{L^2}^2 \ge \frac12 \|\nabla^2 u\|_{L^2}^2 + \frac12 g_2^2(t)\int_{|\xi|\ge g_2(t)}
|\xi|^2|\hat{u}|^2 {\rm d}\xi,
\end{equation}
where $g_2(t)>0$ is to be determined.
Substituting (\ref{na2u}) into \eqref{new_H2_L2_1}, and using (\ref{new_H2_L2_3}) and (\ref{new3}), we have
\begin{equation}\label{new_H2_L2_4}
\begin{split}
&\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_2(t) + \frac{\beta K}{2}\|\nabla\tau\|_{H^1}^2 +\frac{\eta_2\alpha}{16}\|\nabla^2 u\|_{L^2}^2 + \frac{\eta_2\alpha}{16}g_2^2(t)\|\nabla u\|_{L^2}^2\\
\le\,&\frac{\eta_2\alpha}{16}g_2^2(t)\int_{|\xi|\le g_2(t)}
|\xi|^2|\hat{u}|^2 {\rm d}\xi + C\eta_2\big(\|\nabla u\|_{L^\infty}^{2}+\|\nabla \tau\|_{L^\infty}^{2}\big)\|\nabla u\|_{L^2}^2\\
\le\,&\frac{\eta_2\alpha}{16}g_2^2(t)\int_{|\xi|\le g_2(t)}
|\xi|^2|\hat{u}|^2 {\rm d}\xi +\frac{C\eta_2}{1+t}\|\nabla u\|_{L^2}^2.
\end{split}
\end{equation}
Here, taking $g_2^2(t)=\frac{160}{\eta_2}(1+t)^{-1}$, then $g_2^2(t)\le 1$ for all $t\ge \frac{160}{\eta_2}-1=:t_4$. In addition, letting $\eta_2 \le \min \{16\beta, \frac{5\alpha}{C}\}$, then \eqref{new_H2_L2_4} yields
\begin{equation}\label{new4}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_2(t) + \frac{\eta_2}{64}g_2^2(t)\,(2\alpha\|\nabla u\|_{H^1}^2 + 2K\| \nabla\tau\|_{H^1}^2)
\le C(1+t)^{-3},
\end{split}
\end{equation}where we have used the boundedness of $|\hat{u}|$ for all $|\xi|\le g_2(t)$, which is similar to (\ref{u_L00_low_2}).
Combining (\ref{new4}) with \eqref{bu_9}, we obtain
\begin{equation}\label{new_H2_L2_6}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_2(t) + \frac{\eta_2}{64}g_2^2(t)\mathcal{H}_2(t)
\le C(1+t)^{-3}.
\end{split}
\end{equation}
Multiplying \eqref{new_H2_L2_6} by $(1+t)^\frac52$, we can deduce that for all $t\ge t_4$,
\begin{equation}\label{new_H2_L2_7}
\begin{split}
\mathcal{H}_2(t)\le &C(1+t)^{-2}.
\end{split}
\end{equation}
Substituting \eqref{new_H2_L2_7} into \eqref{new_H3_L2_2}, we can also deduce that for all $t\ge t_4$,
\begin{equation}\label{new_H3_L2_8}
\begin{split}
\mathcal{H}_3(t)\le &C(1+t)^{-2}.
\end{split}
\end{equation}
Since for all $t\le t_4$, $\mathcal{H}_2(t)$ and $\mathcal{H}_3(t)$ are bounded, thus \eqref{new_H2_L2_7} and \eqref{new_H3_L2_8} also hold for all $t>0$.
\end{proof}
\begin{corollary}\label{cor1}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, we have
\begin{equation}\label{new_tau_L2_2}
\begin{split}
\|\tau(t)\|_{L^2}^2\le C(1+t)^{-2},
\end{split}
\end{equation}for all $t>0.$
\end{corollary}
\begin{proof}
Applying $\nabla^k$ ($k=0,1,2$) to (\ref{Oldroyd_B_d})$_2$, multiplying the result by $\nabla^k \tau$, and integrating with respect to $x$, we have that
\begin{equation}\label{new_tau_L2_1}
\begin{split}
&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \|\nabla^k \tau\|_{L^2}^2 + \frac{\beta}{2}\|\nabla^k \tau\|_{L^2}^2 \\&\le C \|\nabla^{k+1}u\|_{L^2}^2 + C\|\nabla^{k}(u\cdot \nabla\tau)\|_{L^2}^2\\
&\le C\|\nabla^{k+1}u\|_{L^2}^2 + C\|\nabla^{k+1}\tau\|_{L^2}^2\|u\|_{L^\infty}^2 + C\|\nabla \tau\|_{L^\infty}^2\|\nabla^{k}u\|_{L^2}^2.
\end{split}
\end{equation}
Note that (\ref{new_tau_L2_1}) with $k=1,2$ will be used later. To prove \eqref{new_tau_L2_2}, it suffices to take $k=0$ in \eqref{new_tau_L2_1}; then, by virtue of (\ref{new_H2_L2_7}) and \eqref{new_H3_L2_8}, the inequality \eqref{new_tau_L2_2} holds.
\end{proof}
\begin{lemma}\label{lemma_upper_decay_3}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, we have
\begin{equation}\label{na2udecay}
\|\nabla^2 u(t)\|_{H^1}^2 + \|\nabla^2 \tau(t)\|_{H^1}^2\le C(1+t)^{-3},
\end{equation} for all $t>0.$
\end{lemma}
\begin{proof} By virtue of (\ref{upper decay2}) and (\ref{new5}), we have
\begin{equation*}\label{new_H3_L2_2+1}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_3(t) + \bar{c}_2\mathcal{H}_3(t) \le& C\big(\|\nabla u\|_{L^\infty}+\|\nabla \tau\|_{L^\infty}\big)\|\nabla^2 u\|_{L^2}^2+C\int_{|\xi|\leq R}|\xi|^{4}|\hat{u}(t)|^2{\rm d}\xi\\
\le& C\big(\|\nabla u\|_{H^2}^3+\|\nabla \tau\|_{H^2}^3\big) +C\int_{|\xi|\leq R}|\xi|^{4}|\hat{u}(t)|^2{\rm d}\xi\\
\le& C(1+t)^{-3} +C\int_{|\xi|\leq R}|\xi|^{4}|\hat{u}(t)|^2{\rm d}\xi,
\end{split}
\end{equation*}for some positive constants $\bar{c}_2$ and $C$, which together with Lemma \ref{lemma_Greenfunction_7} yields
\begin{equation}\label{new_H3_L2_2+2}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_3(t) + \bar{c}_2\mathcal{H}_3(t) \le& C(1+t)^{-3}.
\end{split}
\end{equation}
Then (\ref{na2udecay}) can be obtained by using (\ref{new_H3_L2_2+2}). The proof of Lemma \ref{lemma_upper_decay_3} is complete.
\end{proof}
\begin{lemma}\label{lemma_upper_decay_4}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, we have
\begin{equation}\label{na3udecay}
\begin{split}
\|\nabla^3 u(t)\|_{L^2}^2 + \|\nabla^3 \tau(t)\|_{L^2}^2\le C (1+t)^{-4},
\end{split}
\end{equation}for all $t>0.$
\end{lemma}
\begin{proof}
Here, we choose the standard cut-off function $0\le \varphi_0(\xi)\le 1$ in $C_c^\infty(\mathbb{R}^2)$ such that
\begin{equation*}
\varphi_0(\xi) =
\begin{cases}
1, \,&\text{for } |\xi|\le \frac{R}{2},\\
0, \,&\text{for } |\xi|\ge R,
\end{cases}
\end{equation*}
where $R$ is defined in Lemma \ref{lemma_Greenfunction_4}. The low-high-frequency decomposition $(f^l(x), f^h(x))$ of a function $f(x)$ is defined as follows:
\begin{equation*}
f^l(x):=\mathcal{F}^{-1}(\varphi_0(\xi)\hat{f}(\xi))\,\,\,\ \text{and} \,\,\,\,f^h(x):=f(x)-f^l(x).
\end{equation*}
Multiplying $\Lambda^3$(\ref{u_sigma_1})$_1$ and $\Lambda^2(\ref{u_sigma_1})^l_2$ by $-\Lambda^2\sigma^l$ and $-\Lambda^3 u$, respectively, summing the results up, and using integration by parts, we have
\begin{align}\label{bu_15}
\begin{split}
&-\partial_t\langle\Lambda^3 u,\Lambda^2 \sigma^l\rangle\\
= & \,-\Big(K\langle\Lambda^3\sigma, \Lambda^3 \sigma^l\rangle - \langle\beta\Lambda^2\sigma^l,\Lambda^3 u \rangle - \frac{\alpha}{2}\langle\Lambda^3 u, \Lambda^3 u^l\rangle \Big)\\ &+ \Big(\langle \Lambda^3\mathbb{P}(u\cdot \nabla u),\Lambda^2\sigma^l\rangle + \langle\big(\Lambda \mathbb{P}\mathrm{div}(u\cdot \nabla \tau)\big)^l,\Lambda^3 u\rangle\Big)\\
=: &\,I_7 + I_8.
\end{split}
\end{align}
For $I_7$ and $I_8$, using (\ref{bu_7}), we have
\begin{align}
|I_7|\le\, &K\|\Lambda^3 \sigma\|_{L^2}^2 + \frac{\alpha}{32}\|\Lambda^3 u\|_{L^2}^2 + \frac{8\beta^2}{\alpha}\|\Lambda^2\sigma\|_{L^2}^2 + \frac{\alpha}{2}\|\Lambda^3 u^l\|_{L^2}^2 + \frac{\alpha}{8}\|\Lambda^3 u\|_{L^2}^2,\label{I7}\\
\nonumber|I_8|\le\, &\frac12\|\Lambda^3 \sigma\|_{L^2}^2
+ \frac12\|\Lambda^2\mathbb{P}(u\cdot \nabla u)\|_{L^2}^2
+ \frac{\alpha}{32}\|\Lambda^3 u\|_{L^2}^2 + \frac{8}{\alpha}\|\Lambda\mathbb{P}\mathrm{div}(u\cdot \nabla \tau)\|_{L^2}^2\\
\le\, &\frac12\|\Lambda^3 \sigma\|_{L^2}^2
+ C\|\nabla^2(u\cdot \nabla u)\|_{L^2}^2 + \frac{\alpha}{32}\|\Lambda^3 u\|_{L^2}^2 + C\|\nabla^2(u\cdot \nabla \tau)\|_{L^2}^2.\label{I8}
\end{align}
Substituting (\ref{I7}) and (\ref{I8}) into (\ref{bu_15}), we get
\begin{equation}\label{new_H5_L2_1}
\begin{split}
&-\partial_t\langle\Lambda^3 u,\Lambda^2 \sigma^l\rangle\\
\leq \,& \frac{\alpha}{2}\|\Lambda^3 u^l\|_{L^2}^2 + \big(\frac{3\alpha}{16} + C\| u\|_{L^\infty}^{2}\big)\|\Lambda^3 u\|_{L^2}^2 + (K + \frac{8\beta^2}{\alpha} + \frac12)\|\nabla^2\sigma\|_{H^1}^2\\
\, &+ C\big(\|u\|_{L^\infty}^{2}+\|\nabla u\|_{L^\infty}^{2}+\|\nabla \tau\|_{L^\infty}^{2}\big)\big(\|\nabla^2\tau\|_{H^1}^2 + \|\nabla^2 u\|_{L^2}^2\big),
\end{split}
\end{equation}
where \eqref{bu_21} is used.
Summing \eqref{est_third_u} ($\mu=0$) and \eqref{new_H5_L2_1} up, and using the smallness of the solution stated in Theorem \ref{wellposedness}, we get
\begin{equation}\label{new_H5_L2_2}
\begin{split}
&\partial_t\langle\Lambda^3 u,\Lambda^2 \sigma^h\rangle + \frac{\alpha}{16}\|\Lambda^3 u\|_{L^2}^2\\
\leq \, &\frac{\alpha}{2}\|\Lambda^3 u^l\|_{L^2}^2 + (2K + \frac{12\beta^2}{\alpha} + 1)\|\nabla^2\sigma\|_{H^1}^2 \\
& + C\big(\|u\|_{L^\infty}^{2}+\|\nabla u\|_{L^\infty}^{2}+\|\nabla \tau\|_{L^\infty}^{2}\big)\big(\|\nabla^2\tau\|_{H^1}^2 + \|\nabla^2 u\|_{L^2}^2\big).
\end{split}
\end{equation}
Letting $\eta_4>0$ be small enough, and summing 2$\times$\eqref{est_third} ($\mu=0$) and $\eta_4\times$\eqref{new_H5_L2_2} up, we have
\begin{equation}\label{new_H5_L2_4}
\begin{split}
&\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_4(t) + \frac{\eta_4\alpha}{16}\|\Lambda^3 u\|_{L^2}^2 + \frac{\beta K}{100}\|\nabla^3\tau\|_{L^2}^2\\
\leq\, & C\big(\|\nabla u\|_{L^\infty}+\|\nabla \tau\|_{L^\infty}\big)\|\nabla^3 u\|_{L^2}^2 + \eta_4\Big(\frac{\alpha}{2} \|\Lambda^3 u^l\|_{L^2}^2
+ (2K + \frac{12\beta^2}{\alpha} + 1)\|\nabla^2\sigma\|_{H^1}^2 \\
\, &+ C\big(\|u\|_{L^\infty}^{2}+\|\nabla u\|_{L^\infty}^{2}+\|\nabla \tau\|_{L^\infty}^{2}\big)\big(\|\nabla^2\tau\|_{H^1}^2 + \|\nabla^2 u\|_{L^2}^2\big)\Big),
\end{split}
\end{equation} where
\begin{equation*}
\mathcal{H}_4:=\alpha\|\nabla^3 u\|_{L^2}^2 + K\|\nabla^3\tau\|_{L^2}^2+\eta_4\langle\Lambda^3 u,\Lambda^2 \sigma^h\rangle=O(\|\nabla^3 u\|_{L^2}^2+\|\nabla^3 \tau\|_{L^2}^2).
\end{equation*}
By virtue of (\ref{bu_7}), (\ref{bu_8}), and (\ref{upper decay2}), together with the smallness of the solution and of $\eta_4$, (\ref{new_H5_L2_4}) yields
\begin{equation}\label{new_H5_L2_5}
\begin{split}
&\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_4(t) + \frac{\eta_4\alpha}{32}\|\Lambda^3 u\|_{L^2}^2 + \frac{\beta K}{100}\|\nabla^3\tau\|_{L^2}^2\\
\leq \,& \eta_4\Big(\frac{\alpha}{2} \|\Lambda^3 u^l\|_{L^2}^2
+ (2K + \frac{12\beta^2}{\alpha} + 1)\|\nabla^2\sigma\|_{L^2}^2\\
\,&+ C\big(\|u\|_{L^\infty}^{2}+\|\nabla u\|_{L^\infty}^{2}+\|\nabla \tau\|_{L^\infty}^{2}\big)\big(\|\nabla^2\tau\|_{L^2}^2 + \|\nabla^2 u\|_{L^2}^2\big)\Big).
\end{split}
\end{equation}
Using \eqref{bu_8}, we have that
\begin{equation}\label{bu_17}
\frac{\beta K}{200}\|\nabla^3\tau\|_{L^2}^2\ge \frac{1}{C}\|\nabla^3\sigma\|_{L^2}^2 \ge \frac{1}{C} \frac{R^2}{4}\int_{|\xi|\ge \frac{R}{2}}
|\xi|^4|\hat{\sigma}|^2 {\rm d}\xi\ge \frac{1}{C}\|\nabla^2\sigma^h\|_{L^2}^2.
\end{equation}
In addition, letting
\begin{equation}\label{bu_18}
\eta_4 \le \frac{C}{2K + \frac{12\beta^2}{\alpha} + 1},
\end{equation}
substituting \eqref{bu_17} and \eqref{bu_18} into \eqref{new_H5_L2_5}, and using (\ref{utauH1}), \eqref{upper decay2} and \eqref{na2udecay}, we get
\begin{equation}\label{new_H5_L2_6}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_4(t) + \bar{c}_3\mathcal{H}_4(t)
\leq& \, C\|\Lambda^3 u^l\|_{L^2}^2
+ C\|\nabla^2\sigma^l\|_{L^2}^2\\
&+ C\big(\|u\|_{L^\infty}^{2}+\|\nabla u\|_{L^\infty}^{2}+\|\nabla \tau\|_{L^\infty}^{2}\big)\big(\|\nabla^2\tau\|_{L^2}^2 + \|\nabla^2 u\|_{L^2}^2\big)\\
\leq& \, C\|\Lambda^3 u^l\|_{L^2}^2
+ C\|\nabla^2\sigma^l\|_{L^2}^2 + C(1+t)^{-4},
\end{split}
\end{equation}
for some positive constants $\bar{c}_3$ and $C$.
By virtue of (\ref{Greenfunction_13-1}), (\ref{utauH1}), and (\ref{upper decay2}), we have
\begin{align}\label{new_H5_L2_8}
&\left(\int_{|\xi|\leq R}
|\xi|^{6}|\hat{u}(t)|^2{\rm d}\xi\right)^\frac{1}{2}\notag\\
\le & ~C(1+t)^{-2} + C\int_0^t\Big(\int_{|\xi|\leq R}|\xi|^{6}e^{-2\theta|\xi|^2(t-s)}(|\hat{\mathcal{F}}_1 (\xi,s)|^2 + |\hat{\mathcal{F}}_2 (\xi,s)|^2){\rm d}\xi\Big)^\frac12{\rm d}s\notag\\
\le &~C(1+t)^{-2} +C\int_0^{\frac{t}{2}}(1+t-s)^{-2}(\|\hat{\mathcal{F}}_1 (\cdot,s)\|_{L^\infty} + \|\hat{\mathcal{F}}_2 (\cdot,s)\|_{L^\infty}){\rm d}s \\
&+C\int_{\frac{t}{2}}^{t}\Big(\int_{|\xi|\leq R}|\xi|^{4}\frac{(1+t-s)^{2}}{(1+t-s)^{2}}e^{-2\theta|\xi|^2(1+t-s)}|\xi|^2(|\hat{\mathcal{F}}_1 (\xi,s)|^2 + |\hat{\mathcal{F}}_2 (\xi,s)|^2){\rm d}\xi\Big)^\frac12{\rm d}s\notag\\
\le &~C(1+t)^{-2} +C\int_{\frac{t}{2}}^{t}(1+t-s)^{-1}\Big(\big\||\xi|\hat{\mathcal{F}}_1 (\xi,s)\big\|_{L^2} + \big\||\xi|\hat{\mathcal{F}}_2 (\xi,s)\big\|_{L^2} \Big){\rm d}s,\notag
\end{align} where we have used
\begin{equation*}
\begin{split}
\|\hat{\mathcal{F}}_1(\xi,s) \|_{L^\infty} + \|\hat{\mathcal{F}}_2(\xi,s)\|_{L^\infty}\le& C\|u(s)\|_{L^2}\big(\|\nabla u(s)\|_{L^2}+\|\nabla \tau(s)\|_{L^2}\big)
\\ \le& C(1+s)^{-\frac{3}{2}}.
\end{split}
\end{equation*}
Using \eqref{upper decay2}, \eqref{na2udecay}, and the Gagliardo--Nirenberg inequality, we have
\begin{equation}\label{new_H5_L2_9}
\begin{split}
\big\||\xi|\hat{\mathcal{F}}_1 (\xi,s)\big\|_{L^2}&\le C\|\nabla(u\cdot\nabla u)\|_{L^2}\\
&\le C\big(\| u\|_{L^\infty}\|\nabla^2 u\|_{L^2} + \|\nabla u\|_{L^\infty}\|\nabla u\|_{L^2}\big)\\
&\le C\big(\| u\|_{L^2}^{\frac12}\|\nabla^2 u\|_{L^2}^{\frac12}\|\nabla^2 u\|_{L^2} + \|\nabla u\|_{L^2}^{\frac12}\|\nabla^3 u\|_{L^2}^{\frac12}\|\nabla u\|_{L^2}\big)\\
&\le C(1+s)^{-\frac{9}{4}}.
\end{split}
\end{equation}
Similarly, we have
\begin{equation}\label{new_H5_L2_10}
\begin{split}
\||\xi|\hat{\mathcal{F}}_2 (\xi,s)\|_{L^2}&\le C(1+s)^{-\frac{9}{4}}.
\end{split}
\end{equation}
Substituting \eqref{new_H5_L2_9} and \eqref{new_H5_L2_10} into \eqref{new_H5_L2_8}, we can deduce that
\begin{equation}\label{new_H5_L2_11}
\begin{split}
\|\Lambda^3 u^l\|_{L^2}^2&\le C(1+t)^{-4}.
\end{split}
\end{equation}
It is worth noticing that $\|\nabla^2\sigma^l\|_{L^2}$ has a structure similar to that of $\|\Lambda^3 u^l\|_{L^2}$ (see the proof of Lemma \ref{lemma_Greenfunction_7}). Thus we have
\begin{equation}\label{new_H5_L2_12}
\begin{split}
\|\nabla^2\sigma^l\|_{L^2}^2&\le C(1+t)^{-4}.
\end{split}
\end{equation}
Substituting \eqref{new_H5_L2_11} and \eqref{new_H5_L2_12} into \eqref{new_H5_L2_6}, we get
\begin{equation}\label{new_H5_L2_13}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_4(t) + \bar{c}_3\mathcal{H}_4(t)\le C(1+t)^{-4}.
\end{split}
\end{equation}
Then, \eqref{na3udecay} can be obtained by using \eqref{new_H5_L2_13}.
\end{proof}
\begin{corollary}\label{cor2}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, we have
\begin{equation}\label{new_tau_L2_}
\begin{split}
\|\nabla\tau(t)\|_{L^2}^2\le C (1+t)^{-3},\,\,\,\,
\|\nabla^2\tau(t)\|_{L^2}^2\le C (1+t)^{-4},
\end{split}
\end{equation}for all $t>0.$
\end{corollary}
\begin{proof}
\eqref{new_tau_L2_} can be obtained by using (\ref{upper decay2}), \eqref{new_tau_L2_1}, \eqref{na2udecay}, and \eqref{na3udecay}.
\end{proof}
Notice that \eqref{new_tau_L2_1} cannot be used to obtain a further time-decay rate for the quantity $\|\nabla^3 \tau(t)\|_{L^2}$, since the decay of $\|\nabla^4 u(t)\|_{L^2}$ is unknown. Here we consider a combination of $\|\nabla^3 u^h(t)\|_{L^2}^2$ and $\|\nabla^3 \tau(t)\|_{L^2}^2$.
\begin{lemma}\label{lemma_upper_decay_5}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, we have
\begin{equation}\label{highest_order_decay_0}
\begin{split}
\|\nabla^3 u^h(t)\|_{L^2}^2 + \|\nabla^3 \tau(t)\|_{L^2}^2\le C (1+t)^{-5},
\end{split}
\end{equation}
for all $t>0.$
\end{lemma}
\begin{proof}
Multiplying $\nabla^3$(\ref{Oldroyd_B_d})$_1^h$ and $\nabla^3$ (\ref{Oldroyd_B_d})$_2$ by $\alpha \nabla^3 u^h$ and $K \nabla^3 \tau$, respectively, summing the results up, and using integration by parts, we have
\begin{equation}\label{highest_order_decay_1}
\begin{split}
&\frac12 \frac{\mathrm{d}}{\mathrm{d}t} (\alpha\|\nabla^3 u^h\|_{L^2}^2 + K\|\nabla^3\tau\|_{L^2}^2) + \beta K\|\nabla^3\tau\|_{L^2}^2\\
= & -\, \Big(\langle\alpha \nabla^3(u\cdot\nabla u)^h,\nabla^3u^h \rangle + \langle K \nabla^3(u\cdot\nabla\tau),\nabla^3\tau \rangle\Big)\\
& + \, \Big(\langle \alpha \nabla^3\mathbb{D}(u^h),K\nabla^3\tau^l \rangle + \langle \alpha \nabla^3\mathbb{D}(u^l),K\nabla^3\tau \rangle\Big)\\
=: &\,I_{9} + I_{10},
\end{split}
\end{equation}
where we have used $$\langle K \nabla^3{\rm div}\tau^h,\alpha\nabla^3u^h \rangle = -\,\langle \alpha \nabla^3\mathbb{D}(u^h),K\nabla^3\tau^h \rangle.$$
Using Plancherel's theorem, we obtain
\begin{equation}\label{highest_order_decay_2}
\begin{split}
\langle\alpha \nabla^3(u\cdot\nabla u)^h,\nabla^3u^h \rangle
= &\,\langle\alpha|\xi|^3 \big(1-\varphi_0(\xi)\big)\widehat{u\cdot\nabla u},\,\,\,|\xi|^3\big(1-\varphi_0(\xi)\big)\hat{u} \rangle\\
= &\,\langle\alpha|\xi|^3\widehat{u\cdot\nabla u},\,\,\,|\xi|^3(1-\varphi_0(\xi))^2\hat{u}\rangle\\
= &\,\langle\alpha\nabla^3(u\cdot\nabla u),\,\,\nabla^3u^{\widetilde{h}}\rangle,
\end{split}
\end{equation}
where we define that
\begin{equation*}
u^{\widetilde{h}}(x):=\mathcal{F}^{-1}((1-\varphi_0(\xi))^2\hat{u}(\xi))\,\,\,\ \text{and} \,\,\,\,u^{\widetilde{l}}(x):=u(x)-u^{\widetilde{h}}(x).
\end{equation*}
Next, we split the term $\langle\alpha\nabla^3(u\cdot\nabla u),\nabla^3u^{\widetilde{h}}\rangle$ in (\ref{highest_order_decay_2}) into two parts; using (\ref{upper decay2}), (\ref{na2udecay}) and (\ref{na3udecay}), we have
\begin{equation}\label{highest_order_decay_3}
\begin{split}
&\big|\langle\alpha\nabla^3(u\cdot\nabla u),\nabla^3u^{\widetilde{h}}\rangle\big|\\
= &\,\big|\langle\alpha\nabla^3(u\cdot\nabla u^{\widetilde{h}}),\nabla^3u^{\widetilde{h}}\rangle + \langle\alpha\nabla^3(u\cdot\nabla u^{\widetilde{l}}),\nabla^3u^{\widetilde{h}}\rangle\big|\\
\le &\,C\Big(\|\nabla u\|_{L^\infty}\|\nabla^3 u^{\widetilde{h}}\|_{L^2}^2 + \|\nabla u^{\widetilde{h}}\|_{L^\infty}\|\nabla^3 u\|_{L^2}\|\nabla^3 u^{\widetilde{h}}\|_{L^2}\\&+\|\nabla^2u\|_{H^1}\|\nabla^2u^{\widetilde{h}}\|_{H^1}\|\nabla^3u^{\widetilde{h}}\|_{L^2}
+ \|\nabla u\|_{L^\infty}\|\nabla^3 u^{\widetilde{l}}\|_{L^2}\|\nabla^3 u^{\widetilde{h}}\|_{L^2}\\ & + \|\nabla u^{\widetilde{l}}\|_{L^\infty}\|\nabla^3 u\|_{L^2}\|\nabla^3 u^{\widetilde{h}}\|_{L^2}+ \|\nabla^2 u^{\widetilde{l}}\|_{H^1}\|\nabla^2 u\|_{H^1}\|\nabla^3 u^{\widetilde{h}}\|_{L^2} \\&+\|u\|_{L^\infty}\|\nabla^4 u^{\widetilde{l}}\|_{L^2}\|\nabla^3 u^{\widetilde{h}}\|_{L^2}\Big)\\
\le& C(1+t)^{-5} + C(1+t)^{-\frac{5}{2}}\|\nabla^4 u^{\widetilde{l}}\|_{L^2}.
\end{split}
\end{equation}
Similarly to (\ref{new_H5_L2_8}) and (\ref{new_H5_L2_9}), we have
\begin{equation}\label{highest_order_decay_}
\begin{split}
\|\nabla^4 u^{\widetilde{l}}\|_{L^2}&\le C(1+t)^{-\frac{5}{2}} + C\int_{\frac{t}{2}}^{t}(1+t-s)^{-1}\Big(\||\xi|^2\hat{\mathcal{F}}_1 (\xi,s)\|_{L^2} + \||\xi|^2\hat{\mathcal{F}}_2 (\xi,s)\|_{L^2} \Big){\rm d}s\\
&\le C(1+t)^{-\frac{5}{2}} + C\int_{\frac{t}{2}}^{t}(1+t-s)^{-1}(\|\nabla^2(u\cdot\nabla u)\|_{L^2} + \|\nabla^2(u\cdot\nabla \tau)\|_{L^2}) {\rm d}s\\
&\le C(1+t)^{-\frac{5}{2}}.
\end{split}
\end{equation}
Combining \eqref{highest_order_decay_2}--\eqref{highest_order_decay_} with a direct estimate of the $\tau$-term in $I_{9}$, we have
\begin{equation}\label{highest_order_decay_4}
\begin{split}
|I_{9}|&\le C\,(1+t)^{-5} + C (\|\nabla u\|_{L^\infty}\|\nabla^3\tau\|_{L^2}^2 + \|\nabla \tau\|_{L^\infty}\|\nabla^3u\|_{L^2}\|\nabla^3\tau\|_{L^2})\\
&\le C\,(1+t)^{-5}.
\end{split}
\end{equation}
Noticing that
\begin{equation*}\label{highest_order_decay_5}
\begin{split}
\langle \alpha \nabla^3\mathbb{D}(u^h),K\nabla^3\tau^l \rangle = \langle \alpha \nabla^3\mathbb{D}(u^l),K\nabla^3\tau^h \rangle
\le\frac{\beta K}{4}\|\nabla^3 \tau\|_{L^2}^2 + \frac{\alpha^2 K}{\beta}\|\nabla^4 u^{l}\|_{L^2}^2,
\end{split}
\end{equation*}
and
$$\langle \alpha \nabla^3\mathbb{D}(u^l),K\nabla^3\tau \rangle\le\frac{\beta K}{4}\|\nabla^3 \tau\|_{L^2}^2 + \frac{\alpha^2 K}{\beta}\|\nabla^4 u^{l}\|_{L^2}^2,$$ we can estimate $I_{10}$ as follows:
\begin{equation}\label{highest_order_decay_6}
\begin{split}
|I_{10}|\le \frac{\beta K}{2}\|\nabla^3 \tau\|_{L^2}^2 + \frac{2\alpha^2 K}{\beta}\|\nabla^4 u^{l}\|_{L^2}^2.
\end{split}
\end{equation}
Substituting \eqref{highest_order_decay_4} and \eqref{highest_order_decay_6} into \eqref{highest_order_decay_1}, we get
\begin{equation}\label{highest_order_decay_7}
\begin{split}
\frac12 \frac{\mathrm{d}}{\mathrm{d}t} (\alpha\|\nabla^3 u^h\|_{L^2}^2 + K\|\nabla^3\tau\|_{L^2}^2) + \frac{\beta K}{2}\|\nabla^3\tau\|_{L^2}^2\le C\,(1+t)^{-5},
\end{split}
\end{equation} where the estimate of the second term on the right-hand side of (\ref{highest_order_decay_6}) is similar to that of (\ref{highest_order_decay_}).
Multiplying $\Lambda^3(\ref{u_sigma_1})_1^h$ and $\Lambda^2(\ref{u_sigma_1})_2^h$ by $\Lambda^2\sigma^h$ and $\Lambda^3 u^h$, respectively, adding the results, and integrating by parts, we have
\begin{equation}\label{highest_order_decay_8}
\begin{split}
&\partial_t\langle\Lambda^3 u^h,\Lambda^2 \sigma^h\rangle + \frac{\alpha}{2}\|\Lambda^3 u^h\|_{L^2}^2\\
= & \,\Big(K\|\Lambda^3 \sigma^h\|_{L^2}^2 - \langle\beta\Lambda^2\sigma^h,\Lambda^3 u^h \rangle\Big)\\ &- \Big(\langle \Lambda^3\mathbb{P}(u\cdot \nabla u)^h,\Lambda^2\sigma^h\rangle + \langle \Lambda\mathbb{P}{\rm div}(u\cdot \nabla \tau)^h,\Lambda^3 u^h\rangle\Big)\\
=: &\,I_{11} - I_{12}.
\end{split}
\end{equation}
Then, $I_{11}$ and $I_{12}$ can be estimated as follows:
\begin{align}
\label{new6} |I_{11}|&\le K\|\Lambda^3 \sigma^h\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^3 u^h\|_{L^2}^2 + \frac{4\beta^2}{\alpha}\|\Lambda^2\sigma^h\|_{L^2}^2,\\
\nonumber|I_{12}|&\le \frac12\|\Lambda^3 \sigma^h\|_{L^2}^2
+ C\|\Lambda^2\mathbb{P}(u\cdot \nabla u)^h\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^3 u^h\|_{L^2}^2 + C\|\Lambda\mathbb{P}{\rm div}(u\cdot \nabla \tau)^h\|_{L^2}^2\\ \label{new7}
&\le \frac12\|\Lambda^3 \sigma^h\|_{L^2}^2
+ C\|\nabla^2(u\cdot \nabla u)\|_{L^2}^2 + \frac{\alpha}{16}\|\Lambda^3 u^h\|_{L^2}^2 + C\|\nabla^2(u\cdot \nabla \tau)\|_{L^2}^2.
\end{align}
Substituting (\ref{new6}) and (\ref{new7}) into (\ref{highest_order_decay_8}), we can easily deduce that
\begin{equation}\label{highest_order_decay_9}
\begin{split}
\partial_t\langle\Lambda^3 u^h,\Lambda^2 \sigma^h\rangle + \frac{\alpha}{4}\|\Lambda^3 u^h\|_{L^2}^2
\le (K+\frac12)\|\Lambda^3 \sigma^h\|_{L^2}^2 + \frac{4\beta^2}{\alpha}\|\Lambda^2\sigma^h\|_{L^2}^2 + C(1+t)^{-5},
\end{split}
\end{equation} for all $t\ge0$.
We define
\begin{equation*}
\begin{split}
\mathcal{H}_5(t): = \alpha\|\nabla^3 u^h\|_{L^2}^2 + K\|\nabla^3\tau\|_{L^2}^2 + \eta_3\langle\Lambda^3 u^h,\Lambda^2 \sigma^h\rangle.
\end{split}
\end{equation*}
Then, for $\eta_3>0$ sufficiently small,
\begin{equation}\label{highest_order_decay_10}
\frac12\alpha\|\nabla^3 u^h\|_{L^2}^2 + \frac12K\|\nabla^3\tau\|_{L^2}^2\le\mathcal{H}_5(t)\le 2\alpha\|\nabla^3 u^h\|_{L^2}^2 + 2K\|\nabla^3\tau\|_{L^2}^2.
\end{equation}
Adding $2\times$\eqref{highest_order_decay_7} to $\eta_3\times$\eqref{highest_order_decay_9},
and using \eqref{highest_order_decay_10} and the smallness of $\eta_3$, we have
\begin{equation}\label{new8}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_5(t) + \bar{c}_4\mathcal{H}_5(t)\le C\,(1+t)^{-5},
\end{equation}
for some positive constants $\bar{c}_4$ and $C$, where we have used
\begin{equation*}
\|\Lambda^3 \sigma^h\|_{L^2}^2 +\|\Lambda^2\sigma^h\|_{L^2}^2\le C\|\Lambda^3 \tau\|_{L^2}^2.
\end{equation*}
Then, (\ref{new8}) yields (\ref{highest_order_decay_0}). The proof of Lemma \ref{lemma_upper_decay_5} is complete.
\end{proof}
With Lemmas \ref{lemma_upper_decay+1}, \ref{lemma_upper_decay_2}, \ref{lemma_upper_decay_3}, \ref{lemma_upper_decay_4}, and \ref{lemma_upper_decay_5}, and Corollaries \ref{cor1} and \ref{cor2}, we get (\ref{opti1}) and (\ref{opti2}) in Theorem \ref{thm_OB_d_decay}.
\subsection{Lower time-decay estimates}
To finish the proof of Theorem \ref{thm_OB_d_decay}, we will establish the lower decay estimates for the system \eqref{Oldroyd_B_d}.
\begin{lemma}\label{lower_bound}
Under the assumptions of Theorem \ref{thm_OB_d_decay}, there exists a positive time $t_1$ such that the estimates
\begin{equation}\label{lower_bound_1}
\|\nabla^k u(t)\|_{L^2} \ge \frac{1}{C} (1 + t)^{-\frac{1}{2}-\frac{k}{2}}, \quad k=0,1,2,3,
\end{equation}
and
\begin{equation}\label{lower_bound_2}
\|\nabla^k \tau (t)\|_{L^2} \ge \frac{1}{C} (1 + t)^{-1-\frac{k}{2}}, \quad k=0, 1, 2, 3,
\end{equation}
hold for all $t\geq t_1$.
\end{lemma}
\begin{proof} Recalling (\ref{Greenfunction_13-1}) and (\ref{Greenfunction_13-2}), we have
\begin{equation*}
\begin{split}
\hat{u}(t) =& \mathcal{G}_3 \hat{u}_0 + K|\xi|\mathcal{G}_1\hat{\sigma}_0 + \int_0^t\big(\mathcal{G}_3(t-s)\hat{\mathcal{F}}_1 (s) + K|\xi|\mathcal{G}_1(t-s)\hat{\mathcal{F}}_2 (s)\big)
{\rm d}s,\\
\hat{\sigma}(t) =& -\frac{\alpha}{2}|\xi|\mathcal{G}_1 \hat{u}_0 + \mathcal{G}_2\hat{\sigma}_0 + \int_0^t\big(-\frac{\alpha}{2}|\xi|\mathcal{G}_1(t-s)\hat{\mathcal{F}}_1 (s) + \mathcal{G}_2(t-s)\hat{\mathcal{F}}_2 (s)\big){\rm d}s,
\end{split}
\end{equation*}
where
\begin{eqnarray*}
\mathcal{F}_1=-\mathbb{P}\left(u\cdot\nabla u\right),\
\mathcal{F}_2=-\Lambda^{-1}\mathbb{P}{\rm div}\left(u\cdot\nabla\tau\right).
\end{eqnarray*}
From \eqref{lemma_Greenfunction_12} and \eqref{lemma_Greenfunction_13}, we obtain, for all $t\geq t_1$,
\begin{equation}\label{lower_bound_3}
\begin{split}
\|\nabla^ku(t)\|_{L^2} = &\left\||\xi|^k\hat{u}(t)\right\|_{L^2}\\
\ge\,&\frac{1}{C}(1 + t)^{-\frac12-\frac{k}{2}} -
C\int_0^t \Big(\int_{|\xi|\leq R'}\Big||\xi|^k\mathcal{G}_3(t-s)\hat{\mathcal{F}}_1 (\xi,s)\\& + |\xi|^{k+1}\mathcal{G}_1(t-s)\hat{\mathcal{F}}_2 (\xi,s)\Big|^2{\rm d}\xi\Big)^\frac12{\rm d}s,
\end{split}
\end{equation}
and
\begin{equation}\label{lower_bound_4}
\begin{split}
\|\nabla^k\sigma(t)\|_{L^2}= &\left\||\xi|^k\hat{\sigma}(t)\right\|_{L^2}\\
\ge\, &\frac{1}{C}(1 + t)^{-1-\frac{k}{2}}-
C\int_0^t \Big(\int_{|\xi|\leq R'}\Big||\xi|^{k+1}\mathcal{G}_1(t-s)\hat{\mathcal{F}}_1 (\xi,s)\\ &+ |\xi|^k\mathcal{G}_2(t-s)\hat{\mathcal{F}}_2 (\xi,s)\Big|^2{\rm d}\xi\Big)^\frac12{\rm d}s.
\end{split}
\end{equation}
Now we estimate the nonlinear term in \eqref{lower_bound_3} for $k=0, 1, 2, 3$. In fact, using Lemmas \ref{lemma_Greenfunction_4} and \ref{lemma_upper_decay}, we obtain that for $k=0, 1, 2$,
\begin{equation}\label{lower_bound_5}
\begin{split}
& \int_0^t \Big(\int_{|\xi|\leq R'}\Big||\xi|^k\mathcal{G}_3(t-s)\hat{\mathcal{F}}_1 (\xi,s) + |\xi|^{k+1}\mathcal{G}_1(t-s)\hat{\mathcal{F}}_2 (\xi,s)\Big|^2{\rm d}\xi\Big)^\frac12{\rm d}s\\
\le C &\int_0^{\frac{t}{2}} \left(\|\frac{1}{|\xi|} \hat{\mathcal{F}}_1 (\cdot,s)\|_{L^\infty} + \|\hat{\mathcal{F}}_2 (\cdot,s)\|_{L^\infty}\right)\left(\int_{|\xi|\leq R'} |\xi|^{2(k+1)}e^{-2\theta|\xi|^2(t-s )} {\rm d} \xi\right)^{\frac{1}{2}}{\rm d}s \\
+ &C\int_{\frac{t}{2}}^t \left(\int_{|\xi|\leq R'} e^{-2\theta|\xi|^2(t-s )}\big(|\xi|^{2k}|\hat{\mathcal{F}}_1(\xi,s)|^2 + |\xi|^{2k}|\hat{\mathcal{F}}_2(\xi,s)|^2\big) {\rm d} \xi\right)^{\frac{1}{2}}{\rm d}s \\
\le C &\int_0^{\frac{t}{2}}(1+s)^{-1}(1+t-s)^{-1-\frac{k}{2}}{\rm d}s + C\int_{\frac{t}{2}}^t(1+s)^{-\frac{3}{2}-\frac{k}{2}}(1+t-s)^{-\frac{1}{2}}{\rm d}s\\
\le C& \,(1+t)^{-\frac{3}{4}-\frac{k}{2}}.
\end{split}
\end{equation}
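The last inequality above follows from the elementary convolution bounds
\begin{equation*}
\int_0^{\frac t2}(1+s)^{-1}(1+t-s)^{-1-\frac k2}\,{\rm d}s \le C\,(1+t)^{-1-\frac k2}\ln(1+t), \quad
\int_{\frac t2}^{t}(1+s)^{-\frac32-\frac k2}(1+t-s)^{-\frac12}\,{\rm d}s \le C\,(1+t)^{-1-\frac k2},
\end{equation*}
together with $\ln(1+t)\le C\,(1+t)^{\frac14}$.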
For $k=3$, we have
\begin{equation}\label{lower_bound_7}
\begin{split}
& \int_0^t \Big(\int_{|\xi|\leq R'}\Big||\xi|^3\mathcal{G}_3(t-s)\hat{\mathcal{F}}_1 (\xi,s) + |\xi|^{4}\mathcal{G}_1(t-s)\hat{\mathcal{F}}_2 (\xi,s)\Big|^2{\rm d}\xi\Big)^\frac12{\rm d}s\\
\le C &\,(1+t)^{-\frac{9}{4}}
+ C\int_{\frac{t}{2}}^t \left(\big\||\xi|^{2}\hat{\mathcal{F}}_1(\cdot,s)\big\|_{L^\infty} + \big\||\xi|^{2}\hat{\mathcal{F}}_2(\cdot,s)\big\|_{L^\infty}\right)(1+t-s)^{-1}{\rm d}s \\
\le C& \,(1+t)^{-\frac{9}{4}} + C\int_{\frac{t}{2}}^t(1+s)^{-\frac{5}{2}}(1+t-s)^{-1}{\rm d}s\\
\le C& \,(1+t)^{-\frac{9}{4}}.
\end{split}
\end{equation}
Next, we estimate the nonlinear term in \eqref{lower_bound_4} for $k=0, 1, 2, 3$. Similarly to \eqref{lower_bound_5}, using Lemmas \ref{lemma_Greenfunction_4} and \ref{lemma_upper_decay}, we deduce that for $k=0, 1, 2$,
\begin{equation}\label{lower_bound_11}
\begin{split}
& \int_0^t \underbrace{\Big(\int_{|\xi|\leq R'}\Big||\xi|^{k+1}\mathcal{G}_1(t-s)\hat{\mathcal{F}}_1 (\xi,s) + |\xi|^k\mathcal{G}_2(t-s)\hat{\mathcal{F}}_2 (\xi,s)\Big|^2{\rm d}\xi\Big)^\frac12}_{N(k,s)}{\rm d}s
\\=&\int_0^{\frac{t}{2}} N(k,s){\rm d}s + \int_{\frac{t}{2}}^{t} N(k,s){\rm d}s.
\end{split}
\end{equation}For the first term of \eqref{lower_bound_11}, we have
\begin{align}\label{lower_bound_6}
\int_0^{\frac{t}{2}} N(k,s){\rm d}s \le C &\int_0^{\frac{t}{2}} \left(\|\frac{1}{|\xi|} \hat{\mathcal{F}}_1 (\cdot,s)\|_{L^\infty} +\|\hat{\mathcal{F}}_2 (\cdot,s)\|_{L^\infty}\right)\left(\int_{|\xi|\leq R'} |\xi|^{2k+4}e^{-2\theta|\xi|^2(t-s )}{\rm d} \xi \right)^{\frac{1}{2}}{\rm d}s\notag\\
+ &C\int_{0}^{\frac{t}{2}}\||\xi|^{k}\hat{\mathcal{F}}_2(\cdot,s)\|_{L^\infty}\left(\int_{|\xi|\leq R'} e^{-\beta(t-s)} {\rm d} \xi\right)^{\frac{1}{2}}{\rm d}s\\
\le C &\int_0^{\frac{t}{2}}(1+s)^{-1}(1+t-s)^{-\frac{3}{2}-\frac{k}{2}}{\rm d}s +C\,(1+t)^{-\frac{3}{2}-\frac{k}{2}}\notag\\
\le C&\, (1+t)^{-\frac{5}{4}-\frac{k}{2}}\notag.
\end{align}For the second term of \eqref{lower_bound_11}, we have
\begin{align}
\int_{\frac{t}{2}}^{t} N(k,s){\rm d}s\le &\,C\int_{\frac{t}{2}}^t \left(\||\xi|^{k}\hat{\mathcal{F}}_1(\cdot,s)\|_{L^\infty} + \||\xi|^{k}\hat{\mathcal{F}}_2(\cdot,s)\|_{L^\infty}\right)\left(\int_{|\xi|\leq R'} |\xi|^{2}e^{-2\theta|\xi|^2(t-s )} {\rm d} \xi\right)^{\frac{1}{2}}{\rm d}s \notag\\
&+ C\int_{\frac{t}{2}}^t\||\xi|^{k}\hat{\mathcal{F}}_2(\cdot,s)\|_{L^\infty}\left(\int_{|\xi|\leq R'} e^{-\beta(t-s)} {\rm d} \xi\right)^{\frac{1}{2}}{\rm d}s\notag\\
\le &C\int_{\frac{t}{2}}^t(1+s)^{-\frac{3}{2}-\frac{k}{2}}(1+t-s)^{-1}{\rm d}s +C\,(1+t)^{-\frac{3}{2}-\frac{k}{2}}\notag\\
\le &C\, (1+t)^{-\frac{5}{4}-\frac{k}{2}}\notag.
\end{align}
For $k=3$, we have
\begin{equation}\label{lower_bound_8}
\begin{split}
& \int_0^t \Big(\int_{|\xi|\leq R'}\Big||\xi|^{4}\mathcal{G}_1(t-s)\hat{\mathcal{F}}_1 (\xi,s) + |\xi|^3\mathcal{G}_2(t-s)\hat{\mathcal{F}}_2 (\xi,s)\Big|^2{\rm d}\xi\Big)^\frac12{\rm d}s\\
\le C&\,(1+t)^{-\frac{11}{4}}
+ C\int_{\frac{t}{2}}^t \left(\||\xi|^{2}\hat{\mathcal{F}}_1(\cdot,s)\|_{L^2} + \||\xi|^{2}\hat{\mathcal{F}}_2(\cdot,s)\|_{L^2}\right)(1+t-s)^{-1}{\rm d}s \\
\le C& \,(1+t)^{-\frac{11}{4}} + C\int_{\frac{t}{2}}^t\Big(\|\nabla^{2}(u\cdot\nabla u)(s)\|_{L^2} + \|\nabla^{2}(u\cdot\nabla\tau)(s)\|_{L^2}\Big)(1+t-s)^{-1}{\rm d}s\\
\le C& \,(1+t)^{-\frac{11}{4}},
\end{split}
\end{equation}
where we have used the estimate:
\begin{equation*}\label{lower_bound_9}
\begin{split}
&\|\nabla^{2}(u\cdot\nabla u)(t)\|_{L^2} + \|\nabla^{2}(u\cdot\nabla\tau)(t)\|_{L^2}\\
\le &C\big(\| u\|_{L^2}^{\frac12}\|\nabla^2 u\|_{L^2}^{\frac12}\|\nabla^3 u\|_{L^2} + \|\nabla u\|_{L^2}^{\frac12}\|\nabla^3 u\|_{L^2}^{\frac12}\|\nabla^2 u\|_{L^2}\big)\\
&+C\big(\| u\|_{L^2}^{\frac12}\|\nabla^2 u\|_{L^2}^{\frac12}\|\nabla^3 \tau\|_{L^2} + \|\nabla u\|_{L^2}^{\frac12}\|\nabla^3 u\|_{L^2}^{\frac12}\|\nabla^2 \tau\|_{L^2}+ \|\nabla \tau\|_{L^2}^{\frac12}\|\nabla^3 \tau\|_{L^2}^{\frac12}\|\nabla^2 u\|_{L^2}\big)\\
\le &C (1+t)^{-3},
\end{split}
\end{equation*}for all $t\geq 0$, which is similar to \eqref{new_H5_L2_9}.
Using \eqref{lower_bound_3} -- \eqref{lower_bound_8}, and the fact that $\|\nabla^k\sigma\|_{L^2}\le C \|\nabla^k\tau\|_{L^2},$ we conclude that \eqref{lower_bound_1} and \eqref{lower_bound_2} hold for $t\ge t_1$ ($t_1$ is sufficiently large), i.e., \eqref{opti3} and \eqref{opti4} hold. Thus the proof of Theorem \ref{thm_OB_d_decay} is complete.
\end{proof}
\section*{Acknowledgments}This work was supported by the Guangdong Basic and Applied Basic Research Foundation (\#2020B1515310015 and \#2022A1515012112), the National Natural Science Foundation of China (\#12071152), and the Guangdong Provincial Key Laboratory of Human Digital Twin (\#2022B1212010004).
\end{document} |
\begin{document}
\title{A Myhill-Nerode Theorem for Higher-Dimensional Automata}
\author{Uli Fahrenberg\inst{1} \and Krzysztof Ziemiański\inst{2}}
\institute{EPITA Research Laboratory (LRE), France \and University of Warsaw, Poland}
\maketitle
\begin{abstract}
We establish a Myhill-Nerode type theorem for higher-dim\-en\-sio\-nal automata (HDAs),
stating that a language is regular precisely if it has finite prefix quotient.
HDAs extend standard automata
with additional structure,
making it possible to distinguish between interleavings and concurrency.
We also introduce deterministic HDAs and show that not all HDAs are determinizable,
that is, there exist regular languages that cannot be recognised by a deterministic HDA.
Using our theorem, we develop an internal characterisation of deterministic languages
and also show that there exist infinitely ambiguous languages.
\keywords{higher-dimensional automata; Myhill-Nerode theorem; concurrency theory; determinism; ambiguity}
\end{abstract}
\section{Introduction}
Higher-dimensional automata (HDAs),
introduced by Pratt and van Glabbeek \cite{Pratt91-geometry, Glabbeek91-hda, DBLP:journals/tcs/Glabbeek06},
extend standard automata
with additional structure
that makes it possible to distinguish between interleavings and concurrency.
That puts them in a class with other non-interleaving models for concurrency
such as Petri nets \cite{book/Petri62},
event structures \cite{DBLP:journals/tcs/NielsenPW81},
configuration structures \cite{DBLP:conf/lics/GlabbeekP95, DBLP:journals/tcs/GlabbeekP09},
asynchronous transition systems \cite{Bednarczyk87-async, DBLP:journals/cj/Shields85},
and similar approaches \cite{pratt95chu, DBLP:journals/acta/GlabbeekG01, Pratt03trans_cancel, P15jlamp_STstruct},
while retaining some of the properties and intuition of automata-like models.
We have recently introduced languages of HDAs \cite{Hdalang},
which consist of partially ordered multisets with interfaces (ipomsets),
and shown a Kleene theorem for them \cite{conf/concur/FahrenbergJSZ22}.
Here we continue to develop the language theory of HDAs.
Our first contribution is a Myhill-Nerode type theorem for HDAs,
stating that a language is regular iff it has finite prefix quotient.
This provides a necessary and sufficient condition for regularity.
Our proof is inspired by the standard proofs of the Myhill-Nerode theorem,
but the higher-dimensional structure introduces some difficulties.
For example, we cannot use the standard prefix quotient relation
but need to develop a stronger one which takes concurrency of events into account.
As a second contribution,
we give a precise definition of deterministic HDAs
and show that there exist regular languages that cannot be recognised by deterministic HDAs.
(Our Myhill-Nerode construction will produce a non-de\-ter\-min\-is\-tic HDA for these.)
Our definition of determinism is more subtle than for standard automata
as it is not always possible to remove non-accessible
parts of HDAs.
We develop an internal characterisation of deterministic languages, without reference to HDAs.
We also introduce a notion of ambiguity for HDA languages
and show that there exist regular languages of unbounded ambiguity.
\section{HDAs and their languages}
An HDA is a collection of \emph{cells} which are connected according to specified \emph{face maps}.
Each cell has an associated list of \emph{labelled events}
which are interpreted as being executed in that cell,
and the face maps may terminate some events or, inversely,
indicate cells in which some of the current events were not yet started.
Additionally, some cells are designated \emph{start} cells and some others \emph{accept} cells;
computations of an HDA begin in a start cell and proceed by starting and terminating events
until they reach an accept cell.
To make the above precise, let $\Sigma$ be a finite alphabet.
An \emph{loset}\footnote{Pronunciation: ell-oh-set}
$(U, {\evord}, \lambda)$ is a finite totally ordered set $(U, {\evord})$
together with a labelling $\lambda: U\to \Sigma$.
Losets are interpreted as lists of labelled events.
The set of losets is denoted $\sq$.
A \emph{precubical set} consists of a set of cells $X$
together with a mapping $\ev: X\to \sq$ which to every cell assigns its list of active events.
For an loset $U$ we write $X[U]=\{x\in X\mid \ev(x)=U\}$ for the cells of type $U$.
Further, for every $U\in \sq$ and subset $A\subseteq U$ there are face maps
$\delta_A^0, \delta_A^1: X[U]\to X[U\setminus A]$.
The \emph{upper} face maps $\delta_A^1$ terminate the events in $A$,
whereas the \emph{lower} face maps $\delta_A^0$ ``unstart'' these events:
they map cells $x\in X[U]$ to cells $\delta_A^0(x)\in X[U\setminus A]$
where the events in $A$ are not yet active.
If $A, B\subseteq U$ are disjoint,
then the order in which events in $A$ and $B$ are terminated or unstarted
should not matter, so we require that $\delta_A^\nu \delta_B^\mu = \delta_B^\mu \delta_A^\nu$
for $\nu, \mu\in\{0, 1\}$: the \emph{precubical identities}.
A \emph{higher-dimensional automaton} (\emph{HDA})
is a precubical set together with subsets $X_\bot, X^\top\subseteq X$
of \emph{start} and \emph{accept} cells.
For a precubical set $X$ and subsets $A, B\subseteq X$
we denote by $X_A^B$ the HDA with precubical set $X$, start cells $A$ and accept cells $B$.
We do not assume that precubical sets or HDAs are finite.
The \emph{dimension} of an HDA $X$ is $\dim(X)=\sup\{|\ev(x)|\mid x\in X\}\in \Nat\cup\{\infty\}$.
Precubical sets may be defined as presheaves over a category, also denoted $\sq$,
consisting of losets $U$ and generated by coface maps $d_A^0, d_A^1: U\setminus A\to U$,
see \cite{conf/concur/FahrenbergJSZ22},
but we will not need this here.
We only note that for reasons described in the above paper,
we may switch freely between losets and their isomorphism classes,
where two losets are \emph{isomorphic} if there is an order- and label-preserving bijection
between their elements.
Passing from losets to isomorphism classes hides the identity of events,
but keeps information on their ordering and labelling.
A \emph{morphism} of HDAs $X$ and $Y$ is a function $f: X\to Y$ that preserves structure:
types of cells ($\ev_Y\circ f=\ev_X$),
face maps ($f(\delta^\nu_A(x))=\delta^\nu_A(f(x))$)
and start/accept cells ($f(X_\bot)\subseteq Y_\bot$, $f(X^\top)\subseteq Y^\top$).
\begin{figure}
\caption{A two-dimensional HDA $X$ on $\Sigma=\{a,b\}$, shown as a combinatorial object (left) and in a geometric realisation (right).}
\label{fig:abcube}
\end{figure}
\begin{example}
\label{ex:abcube}
Figure \ref{fig:abcube} shows an HDA
both as a combinatorial object (left) and in a more geometric realisation (right).
We write isomorphism classes of losets as lists of labels
and omit the set braces in $\delta_{\{a\}}^0$ etc.
\end{example}
\subsubsection*{Paths.}
Computations of HDAs are paths:
sequences of cells connected by (co)face maps which start and terminate events.
A \emph{path} in $X$ is, thus, a sequence
\begin{equation}
\label{eq:path}
\alpha=(x_0, \phi_1, x_1, \dotsc, x_{n-1}, \phi_n, x_n),
\end{equation}
where the $x_i$ are cells of $X$ and the $\phi_i$ are coface maps,
so that for every $i$, $(x_{i-1},\phi_i,x_i)$ is either
\begin{compactitem}
\item $(\delta^0_A(x_i),d^0_A, x_i)$ for $A\subseteq \ev(x_i)$
(an \emph{upstep} $x_{i-1}\arrO{A} x_i$),
\item or $(x_{i-1},d^1_B,\delta^1_B(x_{i-1}))$ for $B\subseteq \ev(x_{i-1})$
(a \emph{downstep} $x_{i-1}\arrI{B} x_i$).
\end{compactitem}
The \emph{source} and \emph{target} of $\alpha$ as in \eqref{eq:path} are $\src(\alpha)=x_0$ and $\tgt(\alpha)=x_n$.
The set of all paths in $X$ starting at $A\subseteq X$ and terminating in $B\subseteq X$
is denoted by $\Path(X)_A^B$;
if $A$ or $B$ is omitted, we assume $A=X$ or $B=X$.
A path $\alpha$ is \emph{accepting} if $\src(\alpha)\in X_\bot$ and $\tgt(\alpha)\in X^\top$.
Paths $\alpha$ and $\beta$ may be concatenated
if $\tgt(\alpha)=\src(\beta)$;
their concatenation is written $\alpha*\beta$,
and we omit the ``$*$'' in concatenations if convenient.
\emph{Path equivalence} is the congruence $\simeq$
generated by $(z\arrO{A} y\arrO{B} x)\simeq (z\arrO{A\cup B} x)$,
$(x\arrI{A} y\arrI{B} z)\simeq (x\arrI{A\cup B} z)$, and
$\gamma \alpha \delta\simeq \gamma \beta \delta$ whenever $\alpha\simeq \beta$.
Intuitively, this relation allows one to assemble consecutive upsteps or downsteps into one ``bigger'' step.
A path is \emph{sparse} if its upsteps and downsteps are alternating,
so that no more such assembling may take place.
Every equivalence class of paths contains a unique sparse path.
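Concretely, the sparse representative of a path's step sequence is obtained by merging maximal runs of equal-direction steps. The following Python sketch illustrates this; the encoding of steps as direction/event-set pairs is ours and only serves as an illustration, not as part of the formal development.

```python
def sparsify(steps):
    """Merge consecutive steps of the same direction into one.

    A path is given as a list of (direction, events) pairs, where
    direction is "up" (start events) or "down" (terminate events)
    and events is a set of event names.  Adjacent steps with equal
    direction are merged by taking the union of their event sets,
    mirroring the generators of path equivalence.
    """
    sparse = []
    for direction, events in steps:
        if sparse and sparse[-1][0] == direction:
            _, prev_events = sparse.pop()
            sparse.append((direction, prev_events | events))
        else:
            sparse.append((direction, frozenset(events)))
    return sparse
```

For instance, `sparsify([("up", {"a"}), ("up", {"b"}), ("down", {"a"}), ("down", {"b"})])` merges the two upsteps and the two downsteps into one step each.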
\begin{example}
\label{ex:paths}
The HDA $X$ of Fig.~\ref{fig:abcube} admits
five sparse accepting paths:
\begin{gather*}
v\arrO{a} e\arrI{a} w\arrO{b} h, \qquad\quad
v\arrO{a} e\arrI{a} w\arrO{b} h\arrI{b} y, \\
v\arrO{ab} q\arrI{a} h, \qquad
v\arrO{ab} q\arrI{ab} y, \qquad
v\arrO{b} g\arrI{b} x\arrO{a} f\arrI{a} y.
\end{gather*}
\end{example}
\subsubsection*{Ipomsets.}
The observable content of a path is given as an ipomset,
a generalisation of words which we introduce now.
A \emph{labelled iposet} $(P, {<}, {\evord}, S, T, \lambda)$
consists of a finite set $P$ with two (partial) orders:
the \emph{precedence order} $<$ and the \emph{event order} $\evord$
such that each pair in $P$ is comparable by $\le$ or $\evord$.
The subsets $S, T\subseteq P$ are the source and target \emph{interfaces} of $P$,
and $\lambda:P\to \Sigma$ is a labelling function.
Elements of $S$ must be $<$-minimal and those of $T$ $<$-maximal,
hence $S$ and $T$ are losets.
Elements of $P$ are interpreted as events;
source events in $S$ are already active at the beginning,
while target events in $T$ are still active at the end.
Source and target events will be marked by ``$\bullet$'' at the left or right side.
If the event order is not shown, we assume that it goes downwards.
Labelled iposets $P$ and $Q$ are \emph{isomorphic},
denoted $P\cong Q$,
if there exists a bijection $f: P\to Q$ which preserves all structure,
and isomorphism classes of labelled iposets are called \emph{ipomsets}.
Isomorphisms between labelled iposets are unique,
hence we may switch freely between ipomsets and concrete representations,
see \cite{conf/concur/FahrenbergJSZ22} for details.
\begin{figure}
\caption{Some ipomsets; see Example \ref{ex:ipomsets}.}
\label{fig:ipomsets}
\end{figure}
\begin{example}
\label{ex:ipomsets}
Figure \ref{fig:ipomsets} shows some simple examples of ipomsets.
The first one (from left to right) consists of three events labelled $a$, $b$ and $c$,
with the $a$-event preceding the $c$-event and both running in parallel with the $b$-event.
The second one has the same precedence structure,
but now the $a$-event is in the source interface.
In the third one, the source interface is the loset $\loset{a\\b}$
and the target interface is $\{b\}$.
The rightmost ipomset has a more complicated structure:
$a<c$, $a<d$, $b<d$, with source interface $\{b\}$ and target interface $\{d\}$.
\end{example}
Ipomsets may be glued using a generalisation of the standard serial composition of pomsets
\cite{DBLP:journals/fuin/Grabowski81}.
For ipomsets $P$ and $Q$, their \emph{gluing} $P*Q$ is defined
if the targets of $P$ match the sources of $Q$: $T_P\cong S_Q$.
In that case, its carrier set is the quotient $(P\sqcup Q)_{/x\equiv f(x)}$,
where $f: T_P\to S_Q$ is the unique isomorphism,
the interfaces are $S_{P*Q}=S_P$ and $T_{P*Q}=T_Q$,
$\evord_{P*Q}$ is the transitive closure of ${\evord_P}\cup {\evord_Q}$,
and $x<_{P*Q} y$ iff $x<_P y$, $x<_Q y$, or $x\in P\setminus T_P$ and $y\in Q\setminus S_Q$.
We will often omit the ``$*$'' in gluing compositions.
For ipomsets with empty interfaces, $*$ is serial pomset composition;
in general, matching interface points are glued,
see \cite{Hdalang, DBLP:journals/iandc/FahrenbergJSZ22} or below for examples.
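To make the gluing concrete, here is a small Python sketch of $P*Q$ on an ad-hoc dictionary encoding of ipomsets; labels and the event order are omitted for brevity, and the sketch is an illustration only, not part of the formal development.

```python
def glue(P, Q):
    """Gluing P * Q of two ipomsets (illustrative sketch).

    An ipomset is a dict with keys 'points' (set of event names),
    'prec' (set of pairs (x, y) meaning x < y), and 'S', 'T'
    (interface lists, matched by position).  Labels and the event
    order are omitted; event names of P and Q are assumed disjoint
    outside the glued interface.
    """
    if len(P['T']) != len(Q['S']):
        raise ValueError("targets of P do not match sources of Q")
    # identify each source point of Q with the matching target point of P
    rename = dict(zip(Q['S'], P['T']))
    ren = lambda x: rename.get(x, x)
    points = P['points'] | {ren(x) for x in Q['points']}
    prec = set(P['prec']) | {(ren(x), ren(y)) for x, y in Q['prec']}
    # every non-target point of P precedes every non-source point of Q
    prec |= {(x, ren(y))
             for x in P['points'] - set(P['T'])
             for y in Q['points'] - set(Q['S'])}
    return {'points': points, 'prec': prec,
            'S': list(P['S']), 'T': [ren(x) for x in Q['T']]}
```

With empty interfaces this reduces to serial pomset composition, while matching interface points are identified rather than ordered.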
An ipomset $P$ is \emph{discrete} if $<_P$ is empty and $\evord_P$ total.
We often write $\ilo{S}{U}{T}$ for the discrete ipomset $U$
with source and target interfaces $S, T\subseteq U$.
Discrete ipomsets $\ilo{\emptyset}{U}{\emptyset}$ are losets,
and the $\ilo{U}{U}{U}$ are identities for gluing composition and written $\id_U$.
A \emph{starter} is an ipomset $\ilo{U\setminus A}{U}{U}$,
a \emph{terminator} is $\ilo{U}{U}{U\setminus A}$;
these will be written $\starter{U}{A}$ and $\terminator{U}{A}$, respectively.
\begin{figure}
\caption{Decompositions of a discrete ipomset into starters and terminators.}
\label{fig:discrete}
\end{figure}
\begin{example}
Figure \ref{fig:discrete} shows a discrete ipomset together with decompositions
\begin{equation*}
\loset{\bullet\!&a&\!\bullet\\&b\\&c&\!\bullet\\\bullet\!&a} =
\starter{\loset{a\\b\\c\\a}}{\loset{b\\c}} *
\terminator{\loset{a\\b\\c\\d}}{\loset{b\\a}} =
\starter{\loset{a\\c\\a}}{c} * \starter{\loset{a\\b\\c\\a}}{b} *
\terminator{\loset{a\\b\\c\\a}}{a} * \terminator{\loset{a\\b\\c}}{b}.
\end{equation*}
\end{example}
An ipomset $P$ is \emph{interval}
if it admits an interval representation \cite{book/Fishburn85}:
functions $b$ and $e$ from $P$ to real numbers such that
$b(x)\le e(x)$ for all $x\in P$ and
$x <_P y$ iff $e(x)<b(y)$ for all $x, y\in P$.
We will only consider interval ipomsets in this paper and hence omit the qualification ``interval''.
The set of ipomsets is denoted $\iiPoms$,
and for an loset $U$ we write $\iiPoms_{U}=\{P\in\iiPoms\mid T_P\cong U\}$.
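By Fishburn's characterisation \cite{book/Fishburn85}, interval orders are exactly the posets with no induced ``2+2'' (two disjoint incomparable two-chains). Assuming a transitively closed precedence relation, this yields a simple membership test, sketched here in Python as an illustration:

```python
def is_interval_order(prec):
    """Check whether a finite poset is an interval order.

    prec is a set of pairs (x, y) meaning x < y, assumed to be
    transitively closed.  By Fishburn's characterisation, the poset
    is an interval order iff it contains no 2+2: relations a < b and
    c < d with neither a < d nor c < b.  (Transitivity guarantees
    that any such witness consists of four distinct elements.)
    """
    for a, b in prec:
        for c, d in prec:
            if (a, d) not in prec and (c, b) not in prec:
                return False
    return True
```

For example, the N-shaped order $a<c$, $b<c$, $b<d$ passes the test, while the disjoint union of two two-chains does not.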
\begin{figure}
\caption{Decomposition of an ipomset into starters and terminators.}
\label{fig:Ndecomp}
\end{figure}
Proposition~21 of \cite{DBLP:journals/iandc/FahrenbergJSZ22} shows
that any ipomset can be presented as a gluing of starters and terminators,
see Fig.~\ref{fig:Ndecomp} for an example.
Such a presentation we call a \emph{step decomposition};
if starters and terminators are alternating, the decomposition is \emph{sparse}.
The following extension of the above statement is proven in Appendix~\ref{s:Step}.
\begin{proposition}
\label{p:SparsePresentation}
Every ipomset $P$ has a unique sparse step decomposition.
\end{proposition}
\begin{example}
The leftmost decomposition shown in Fig.~\ref{fig:discrete} is sparse,
but the rightmost is not.
Also the decomposition in Fig.~\ref{fig:Ndecomp} is sparse.
\end{example}
\subsubsection*{Labels of paths.}
We may now define the observable content or \emph{event ipomset} $\ev(\alpha)$
of a path $\alpha$ in an HDA recursively as follows:
\begin{compactitem}
\item If $\alpha=(x)$, then
$\ev(\alpha)=\id_{\ev(x)}$.
\item If $\alpha=(y\arrO{A} x)$, then
$\ev(\alpha)=\starter{\ev(x)}{A}$.
\item If $\alpha=(x\arrI{B} y)$, then
$\ev(\alpha)=\terminator{\ev(x)}{B}$.
\item If $\alpha=\alpha_1*\dotsm*\alpha_n$ is a concatenation, then
$\ev(\alpha)=\ev(\alpha_1)*\dotsm*\ev(\alpha_n)$.
\end{compactitem}
Lemma~8 of \cite{conf/concur/FahrenbergJSZ22} shows that $\alpha\simeq \beta$ implies $\ev(\alpha)=\ev(\beta)$.
Further, if $\alpha=\alpha_1*\dotsm*\alpha_n$ is a sparse path,
then $\ev(\alpha)=\ev(\alpha_1)*\dotsm*\ev(\alpha_n)$ is a sparse step decomposition.
\begin{figure}
\caption{HDA $Y$ consisting of three squares glued along common faces.}
\label{fig:ab*cb*cd}
\end{figure}
\begin{example}
The event ipomsets of the five sparse accepting paths in the HDA $X$ of Fig.~\ref{fig:abcube}
are $a b\,\bullet$, $a b$, $\loset{a\\b&\!\bullet}$, $\loset{a\\b}$, and $b a$.
Figure \ref{fig:ab*cb*cd} shows another HDA which admits an accepting path
$(\delta_a^0 x\arrO{a} x\arrI{a} \delta_a^1 x\arrO{c} y\arrI{b} \delta_b^1 y\arrO{d} z\arrI{d} \delta_d^1 z)$.
Its event ipomset is precisely the N-shaped ipomset of Fig.~\ref{fig:Ndecomp},
with the indicated sparse step decomposition arising directly from the sparse presentation above.
\end{example}
The \emph{language} of an HDA $X$ is
\begin{equation*}
\Lang(X) = \{ \ev(\alpha)\mid \alpha \text{ accepting path in } X\}.
\end{equation*}
Ipomsets may be \emph{smoothened}, or made less concurrent, by strengthening the precedence order.
The corresponding relation for pomsets has been introduced in \cite{DBLP:journals/fuin/Grabowski81}.
For ipomsets $P$ and $Q$, we say that $Q$ subsumes $P$ and write $P\subsu Q$
if there exists a bijection $f: P\to Q$
which respects interfaces, reflects precedence and preserves essential event order:
$f(S_P)=S_Q$, $f(T_P)=T_Q$, $f(x)<_Q f(y)$ implies $x<_P y$,
and $x\evord_P y$, $x\not<_P y$ and $y\not<_P x$ imply $f(x)\evord_Q f(y)$.
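Since subsumption is witnessed by a structure-respecting bijection, it can be decided for small ipomsets by brute force over all bijections. The following Python sketch, on an ad-hoc encoding of our own and intended only as an illustration, checks the conditions above together with label preservation:

```python
from itertools import permutations

def subsumes(P, Q):
    """Brute-force check whether Q subsumes P, i.e. P is subsumed by Q.

    Ipomsets are dicts with keys 'points' (tuple of event names),
    'prec' and 'ev' (sets of pairs), 'S' and 'T' (sets of interface
    points), 'lab' (dict).  Searches for a label-preserving bijection
    that respects interfaces, reflects precedence, and preserves the
    essential event order.
    """
    pts_P, pts_Q = tuple(P['points']), tuple(Q['points'])
    if len(pts_P) != len(pts_Q):
        return False
    for perm in permutations(pts_Q):
        f = dict(zip(pts_P, perm))
        if any(P['lab'][x] != Q['lab'][f[x]] for x in pts_P):
            continue
        if {f[x] for x in P['S']} != set(Q['S']):
            continue
        if {f[x] for x in P['T']} != set(Q['T']):
            continue
        # f must reflect precedence
        if any((f[x], f[y]) in Q['prec'] and (x, y) not in P['prec']
               for x in pts_P for y in pts_P):
            continue
        # f must preserve event order on <-incomparable pairs
        if any((x, y) in P['ev']
               and (x, y) not in P['prec'] and (y, x) not in P['prec']
               and (f[x], f[y]) not in Q['ev']
               for x in pts_P for y in pts_P):
            continue
        return True
    return False
```

For instance, the word $ab$ is subsumed by $a$ run in parallel with $b$, but not conversely.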
Proposition~10 of \cite{conf/concur/FahrenbergJSZ22} shows that languages of HDAs
are sets of ipomsets which are closed under subsumption,
hence we shall assume these properties when defining the notion of language:
\begin{definition}
A \emph{language} is a subset $L\subseteq \iiPoms$
such that $P\subsu Q$ and $Q\in L$ imply $P\in L$.
\end{definition}
The set of all languages is denoted $\Langs\subseteq 2^{\iiPoms}$,
and a language is \emph{regular} if it is the language of a finite HDA.
\begin{example}
Using $\down$ to denote subsumption closure, the languages of our example HDAs are
$
\Lang(X) = \big\{ \loset{a\\b&\!\bullet}, \loset{a\\b}\big\}\down
=\big\{ \loset{a\\b&\!\bullet}, a b\,\bullet, \loset{a\\b}, ab, ba\big\}
$
and
\[
\Lang(Y) = \left\{ \left[\vcenter{\hbox{
\begin{tikzpicture}[x=.8cm, y=.6cm]
\node (a) at (0,0) {$a$};
\node (c) at (1,0) {$c$};
\node at (1.25,0) {$\bullet\!\!$};
\node (b) at (0,-.75) {$b$};
\node at (-.25,-.79) {$\!\bullet$};
\node (d) at (1,-.75) {$d$};
\path (a) edge (c);
\path (b) edge (d);
\path (a) edge (d);
\end{tikzpicture}
}}\right]
\right\}\down.
\]
\end{example}
\subsubsection*{Essential cells.}
Let $X$ be an HDA.
We say that a cell $x\in X$ is
\begin{compactitem}
\item
\emph{accessible} if $\Path(X)_\bot^x\ne \emptyset$, \ie
$x$ can be reached by a path from a start cell;
\item
\emph{coaccessible} if $\Path(X)_x^\top\ne \emptyset$, \ie
there is a path from $x$ to an accept cell;
\item
\emph{essential} if it is both accessible and coaccessible.
\end{compactitem}
A path is \emph{essential} if its source and target cells are essential.
This implies that all its cells are essential.
Segments of accepting paths are always essential.
The set of essential cells of $X$ is denoted by $\ess(X)$;
this is not necessarily a sub-HDA of $X$
given that faces of essential cells may be non-essential.
For example, all bottom cells of the HDA $Y$
in Fig.~\ref{fig:ab*cb*cd} are inaccessible.
\begin{lemma}
\label{l:HDAGen}
Let $X$ be an HDA.
There exists a smallest sub-HDA $X^{\ess}\subseteq X$ that contains all essential cells,
and\/ $\Lang(X^\ess)=\Lang(X)$.
If $\ess(X)$ is finite, then $X^{\ess}$ is also finite.
\end{lemma}
\begin{proof}
The set of all faces of essential cells
\[
X^{\ess}=\{\delta^0_A\delta^1_B(x)\mid x\in \ess(X),\; A,B\subseteq \ev(x),\; A\cap B=\emptyset\}
\]
is a sub-HDA of $X$, since faces of faces are faces again.
Clearly every sub-HDA of $X$ that contains $\ess(X)$ must also contain $X^\ess$.
Since all accepting paths are essential, $\Lang(X^\ess)=\Lang(X)$.
If $|\ess(X)|=n$ and $|\ev(x)|\leq d$ for all $x\in \ess(X)$,
then $|X^{\ess}|\leq n\cdot 3^d$,
since every event of a cell lies in $A$, in $B$, or in neither. \qed
\end{proof}
\subsubsection*{Track objects,}
introduced in \cite{Hdalang},
provide a mapping from ipomsets to HDAs and are a powerful tool for reasoning about languages.
We need only a few of their properties in proofs,
so we do not define them here but instead refer to \cite[Sect.~5.3]{Hdalang}.
Let $\sq^P$ denote the track object of an ipomset $P$;
this is an HDA with one start cell $c_\bot^P$ and one accept cell $c^\top_P$.
Below we list properties of track objects needed in the paper.
\begin{lemma}
\label{l:PathTrack}
Let $X$ be an HDA, $x,y\in X$ and $P\in\iiPoms$.
The following conditions are equivalent:
\begin{compactenum}
\item There exists a path $\alpha\in\Path(X)_x^y$ such that $\ev(\alpha)=P$.
\item There is an HDA-map $f:\sq^P\to X_x^y$ (\ie $f(c_\bot^P)=x$ and $f(c^\top_P)=y$).
\end{compactenum}
\end{lemma}
\begin{proof}
This is an immediate consequence of \cite[Prop.~89]{Hdalang}. \qed
\end{proof}
\begin{lemma}
\label{l:PathDivision}
Let $X$ be an HDA, $x,y\in X$ and $\gamma\in\Path(X)_x^y$.
Assume that $\ev(\gamma)=P*Q$ for ipomsets $P$ and $Q$.
Then there exist paths $\alpha\in\Path(X)_x$ and $\beta\in\Path(X)^y$
such that $\ev(\alpha)=P$, $\ev(\beta)=Q$ and $\tgt(\alpha)=\src(\beta)$.
\end{lemma}
\begin{proof}
By Lemma \ref{l:PathTrack},
there is an HDA map $f:\sq^{P*Q}\to X_x^y$.
By \cite[Lemma 65]{Hdalang},
there exist precubical maps $j_P:\sq^P\to\sq^{P*Q}$ and $j_Q:\sq^Q\to\sq^{P*Q}$
such that $f(j_P(c_\bot^P))=x$, $f(j_P(c^\top_P))=f(j_Q(c_\bot^Q))$ and $f(j_Q(c^\top_Q))=y$.
Applying Lemma \ref{l:PathTrack} again, now to $f\circ j_P$ and $f\circ j_Q$,
we obtain $\alpha$ and $\beta$ with the required properties. \qed
\end{proof}
\section{Myhill-Nerode theorem}
The \emph{prefix quotient} of
a language $L\in \Langs$ by an ipomset $P$ is the language
\begin{equation*}
P\backslash L=\{Q\in \iiPoms\mid P Q\in L\}.
\end{equation*}
Similarly, the \emph{suffix quotient} of $L$ by $P$ is $L/P=\{Q\in \iiPoms\mid Q P\in L\}$.
Denote
\[
\suff(L)=\{P\backslash L\mid P\in \iiPoms\},
\qquad
\pref(L)=\{L/P\mid P\in\iiPoms\}.
\]
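To illustrate, for the language
$\Lang(X)=\big\{ \loset{a\\b&\!\bullet}, a b\,\bullet, \loset{a\\b}, ab, ba\big\}$
of our first example, one computes
\[
a\backslash \Lang(X)=\{b,\; b\,\bullet\},
\qquad
b\backslash \Lang(X)=\{a\}.
\]
In particular, $\loset{a\\b}$ contributes nothing to either quotient:
gluing places all events of the first factor below the non-interface events of the second,
so no composite $aQ$ or $bQ$ can leave $a$ and $b$ concurrent.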
We record the following property of quotient languages.
\begin{lemma}
\label{l:QuotIncl}
If $L$ is a language and $P\subsu Q$,
then $Q\backslash L\subseteq P\backslash L$.
\end{lemma}
\begin{proof}
If $P\subsu Q$, then $PR\subsu QR$: the subsumption map extends by the identity on $R$.
Thus,
\[
R\in Q\backslash L
\iff
QR\in L
\implies
PR\in L
\iff
R\in P\backslash L. \quad\qed
\]
\end{proof}
The main goal of this section is to show the following.
\begin{theorem}
\label{t:MN}
For a language $L\in\Langs$
the following conditions are equivalent.
\begin{compactitem}
\item[\rm (a)] $L$ is regular.
\item[\rm (b)] The set $\suff(L)\subseteq\Langs$ is finite.
\item[\rm (c)] The set $\pref(L)\subseteq\Langs$ is finite.
\end{compactitem}
\end{theorem}
We prove only the equivalence of (a) and (b);
the equivalence of (a) and (c) is shown similarly.
First we prove the implication (a)$\implies$(b).
Let $X$ be an HDA with $\Lang(X)=L$.
For $x\in X$ define
languages $\mathsf{Pre}(x)=\Lang(X_\bot^x)$
and $\mathsf{Post}(x)=\Lang(X_x^\top)$.
\begin{lemma}
For every $P\in\iiPoms$,
$P\backslash L = \bigcup\{\mathsf{Post}(x)\mid x\in X,\; P\in \mathsf{Pre}(x) \}$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
Q\in P\backslash L
\iff
P Q\in L
\iff &
\exists\; f: \sq^{P Q}\to X = X_\bot^\top \\
\iff &
\exists\; x\in X, g: \sq^{P}\to X_\bot^x,\; h: \sq^{Q}\to X_x^\top \\
\iff &
\exists\; x\in X:
P\in \Lang(X_\bot^x),\;
Q\in \Lang(X_x^\top) \\
\iff &
\exists\; x\in X:
P\in \mathsf{Pre}(x),\;
Q\in \mathsf{Post}(x).
\end{align*}
The last condition says that $Q$ belongs to the right-hand side of the equation. \qed
\end{proof}
\begin{varproof}[of Thm.~\ref{t:MN}, {\rm (a)$\implies$(b)}]
The family of languages $\{P\backslash L\mid P\in \iiPoms\}$
is a subfamily of
$ \{\bigcup_{x\in Y} \mathsf{Post}(x)\bigmid Y\subseteq X\}$
which is finite. \qed
\end{varproof}
\subsubsection*{HDA construction.}
Now we show that (b) implies (a).
Fix a language $L\in\Langs$; for now, $\suff(L)$ may be finite or infinite.
We will construct an HDA $\MN(L)$ that recognises $L$
and show that if $\suff(L)$ is finite,
then its essential part $\MN(L)^\ess$ is finite.
The cells of $\MN(L)$ are equivalence classes of ipomsets
under a relation $\Lapprox$ induced by $L$ which we will introduce below.
The relation $\Lapprox$ is defined using prefix quotients,
but needs to be stronger than prefix quotient equivalence.
This is because events may be concurrent and because ipomsets have interfaces.
We give examples just after the construction.
For an ipomset $\ilo{S}{P}{T}$ define its \emph{(target) signature}
to be the starter $\fin(P)=\starter{T}{T-S}$.
Thus $\fin(P)$ collects all target events of $P$,
and its source interface contains those events that are also in the source interface of $P$.
We also write $\sfin(P)=S_{\fin(P)}=S\cap T$ and
$\rfin(P)=\fin(P)-S_{\fin(P)}=T-S$.
For example,
\[
\fin
\left(
\loset{\bullet\!&a&\!\bullet \\
\bullet\!&a\\
&c&\!\bullet
}
\right)
=
\loset{\bullet\!&a&\!\bullet \\
&c&\!\bullet
}
,
\quad
\fin
\left(
\loset{\bullet\!&a c&\!\bullet\\\bullet\!&b&\!\bullet}
\right)
=
\loset{&c&\!\bullet\\ \bullet\!&b&\!\bullet},
\quad
\fin
\left(
\loset{a c&\!\bullet\\b&\!\bullet}
\right)
=
\loset{c&\!\bullet\\ b&\!\bullet}.
\]
We define two equivalence relations on $\iiPoms$ induced by $L$:
\begin{compactitem}
\item
Ipomsets $P$ and $Q$ are \emph{weakly equivalent} ($P\Lsim Q$)
if $\fin(P)\cong \fin(Q)$ and $P\backslash L=Q\backslash L$.
Obviously, $P\Lsim Q$ implies $T_P\cong T_Q$
and $\rfin(P)\cong \rfin(Q)$.
\item
Ipomsets $P$ and $Q$ are \emph{strongly equivalent} ($P\Lapprox Q$)
if
$P\Lsim Q$ and
for all $A\subseteq \rfin(P)\cong\rfin(Q)$
we have $(P-A)\backslash L=(Q-A)\backslash L$. (Here $P-A$ means $P$ with $A$ removed.)
\end{compactitem}
Evidently $P\Lapprox Q$ implies $P\Lsim Q$, but the converse does not always hold.
We will give examples below
which show why $\Lapprox$ is the proper relation to use for constructing $\MN(L)$.
\begin{lemma}
\label{l:StrongEqDef}
If $P\Lapprox Q$,
then $P-A\Lapprox Q-A$ for all $A\subseteq \rfin(P)\cong\rfin(Q)$.
\end{lemma}
\begin{proof}
For every $A$ we have $(P-A)\backslash L=(Q-A)\backslash L$, and
\[
\fin(P-A)=\fin(P)-A\cong \fin(Q)-A=\fin(Q-A).
\]
Thus, $P-A\Lsim Q-A$.
Further, for every $B\subseteq \rfin(P-A)\cong \rfin(Q-A)$,
\[
((P-A)-B)\backslash L
=
(P-(A\cup B))\backslash L
=
(Q-(A\cup B))\backslash L
=
((Q-A)-B)\backslash L,
\]
which shows that $P-A\Lapprox Q-A$. \qed
\end{proof}
Now define an HDA $\MN(L)$ as follows.
For $U\in\sq$,
\[
\MN(L)[U]
=
\left(\iiPoms_U/\Lapprox\right) \cup \{w_U\}.
\]
The $\Lapprox$-equivalence class of $P$
will be denoted by $\eqcl{P}$ (but often just by $P$ in examples).
Face maps are defined as follows,
for $A\subseteq U\in\sq$ and $P\in \iiPoms_U$:
\begin{equation}
\label{e:FaceMaps}
\delta^0_A(\eqcl{P})
=
\begin{cases}
\eqcl{P-A} & \text{if $A\subseteq\rfin(P)$},\\
w_{U-A} & \text{if $A\cap \sfin(P)\neq\emptyset$},
\end{cases}
\qquad
\delta^1_A(\eqcl{P})
=
\eqcl{P*\terminator{U}{A}},
\end{equation}
\[
\delta^0_A(w_U)=\delta^1_A(w_U)=w_{U-A}.
\]
Finally, start and accept cells are given by
\[
\MN(L)_\bot = \{\eqcl{\id_U}\}_{U\in \sq},
\qquad
\MN(L)^\top=\{\eqcl{P}\mid P\in L\}.
\]
The cells $\eqcl{P}$ will be called \emph{regular}.
They are $\Lapprox$-equivalence classes of ipomsets,
lower face maps unstart events, and upper face maps terminate events.
The cells $w_U$ will be called \emph{subsidiary}:
they are not accessible but necessary to define some lower faces.
All faces of subsidiary cells are subsidiary,
and upper faces of regular cells are regular.
Below we present several examples,
in which we show only the essential part $\MN(L)^\ess$ of $\MN(L)$.
\begin{figure}
\caption{HDA $\MN(L)$ of Example \ref{ex:nondet}}
\label{fig:ex.nondet}
\end{figure}
\begin{example}
\label{ex:nondet}
Let $L=\{\loset{a\\b}, a b, b a, a b c\}=\{\loset{a\\b}, a b c\}\down$.
Fig.~\ref{fig:ex.nondet} shows the HDA $\MN(L)^\ess$
together with a list of essential cells of $\MN(L)$
and their prefix quotients in $L$.
Note the non-determinism introduced by the cells $\eqcl{a b\,\bullet}$ and $\eqcl{\loset{a\\b&\!\bullet}}$:
the generating ipomsets have different prefix quotients,
since $\loset{a\\b}, a b c\in L$,
yet they share the lower face $\eqcl{a}$.
We will see below that no deterministic HDA $X$ exists with $\Lang(X)=L$.
\end{example}
\begin{example}
\label{ex:strongeq}
Here we explain why we need to use $\Lapprox$-equivalence classes
and not $\Lsim$-equivalence classes.
Let $L=\{\loset{a\\b},aa\}\down$. Then $\MN(L)^\ess$ is as below.
\[
\begin{tikzpicture}[x=1cm, y=.8cm]
\filldraw[color=black!10!white] (0,0)--(2,0)--(2,2)--(0,2)--(0,0);
\node[state] (eps) at (0,0) {$\epsilon$};
\node[below left] at (eps) {$\bot$};
\node[state] (a) at (2,0) {${a}$};
\node[state] (b) at (0,2) {${b}$};
\node[state] (ab) at (2,2) {${\loset{a\\b}}$};
\node[above right=0.2] at (ab) {$\top$};
\node[state] (aa) at (4,0) {${aa}$};
\node[above right=0.18] at (aa) {$\top$};
\path (eps) edge node[swap] {${a\,\bullet}$} (a);
\path (eps) edge node {${b\,\bullet}$} (b);
\path (a) edge node[swap] {${aa\,\bullet}$} (aa);
\path (b) edge node {${ba\,\bullet}$} (ab);
\path (a) edge node[swap, pos=.55] {${ab\,\bullet}$} (ab);
\node (m) at (1,1) {${\loset{ a&\!\bullet \\ b&\!\bullet}}$};
\end{tikzpicture}
\]
Note that $(aa\,\bullet)\backslash L=(ba\,\bullet)\backslash L=\{\bullet\, a\}$,
yet $(aa\,\bullet)$ and $(ba\,\bullet)$ are not strongly equivalent,
because $a\backslash L=\{a,b\}\neq \{a\}=b\backslash L$.
\end{example}
\begin{example}
\label{ex:aa}
The language $L=\{\loset{\bullet\!& aa&\!\bullet\\ \bullet\!& a &\!\bullet }\}$
is recognised by the HDA $\MN(L)^\ess$ below:
\[
\begin{tikzpicture}[x=1cm, y=.8cm]
\filldraw[color=black!10!white] (0,0)--(4,0)--(4,2)--(0,2)--(0,0);
\node[state] (eps) at (0,0) {$w_\epsilon$};
\node[state] (a) at (2,0) {$w_\epsilon$};
\node[state] (b) at (0,2) {$w_\epsilon$};
\node[state] (ab) at (2,2) {$y$};
\node[state] (aa) at (4,0) {$w_\epsilon$};
\node[state] (aab) at (4,2) {$y$};
\path (eps) edge node[swap] {$w_a$} (a);
\path (eps) edge node {$w_a$} (b);
\path (a) edge node[swap] {$w_a$} (aa);
\path (b) edge node {$y_{\bullet a\bullet}$} (ab);
\path (a) edge(ab);
\path (ab) edge node {$y_{a\bullet}$} (aab);
\path (aa) edge node[swap] {$y_{\bullet a\bullet}$} (aab);
\node at (1,1) {${\loset{ \bullet\!& a&\!\bullet \\ \bullet\!& a &\!\bullet}}$};
\node at (3,1) {${\loset{ \bullet\!& aa&\!\bullet \\ \bullet\!& a &\!\bullet}}$};
\node at (0.5,0.7) {$\bot$};
\node at (3.6,1.3) {$\top$};
\end{tikzpicture}
\]
Cells with the same names are identified.
Here we see subsidiary cells $w_\epsilon$ and $w_a$,
and regular cells (denoted by $y$ indexed with their signature) that are not coaccessible.
The middle vertical edge is $\eqcl{\loset{\bullet\!& a \\ \bullet\!& a&\!\bullet }}$,
$y_{\bullet a\bullet}
=\eqcl{\loset{\bullet\!& a&\!\bullet\\ \bullet\!& a}}
=\eqcl{\loset{\bullet\!& aa\\ \bullet\!& a&\!\bullet }}$,
$y_{a\bullet}=\eqcl{\loset{\bullet\!& aa&\!\bullet\\ \bullet\!& a}}$,
and $y=\eqcl{\loset{\bullet\!& a\\ \bullet\!& a}}=\eqcl{\loset{\bullet\!& aa\\ \bullet\!& a}}$.
\end{example}
\subsubsection*{$\MN(L)$ is well-defined.}
We need to show that the formulas \eqref{e:FaceMaps}
do not depend on the choice of representative of $\eqcl{P}$
and that the precubical identities are satisfied.
\begin{lemma}
\label{l:ExtQ}
Let $P$, $Q$ and $R$ be ipomsets with $T_P=T_Q=S_R$. Then
\[
P\backslash L \subseteq Q\backslash L \implies (P R)\backslash L
\subseteq (Q R)\backslash L.
\]
In particular,
$P\backslash L = Q\backslash L$ implies $(P R)\backslash L = (Q R)\backslash L$.
\end{lemma}
\begin{proof}
For $N\in \iiPoms$ we have
\begin{multline*}
N\in (P R)\backslash L \iff P R N\in L \iff
R N\in P\backslash L \\
\implies R N\in Q\backslash L \iff Q R N\in L \iff N\in
(Q R)\backslash L.\quad \qed
\end{multline*}
\end{proof}
The next lemma describes an operation that ``adds order'' to an ipomset $P$.
One first removes some events $A\subseteq \rfin(P)$
and then adds them back, forced above all terminated events of $P$.
The result is obviously subsumed by~$P$.
\begin{lemma}
\label{l:MinExt}
For $P\in\iiPoms$ and $A\subseteq \rfin(P)$, $(P-A)*\starter{T_P}{A}\subsu P$. \qed
\end{lemma}
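For a concrete instance, take $P=\loset{a\\b&\!\bullet}$,
so that $T_P=\{b\}$ and $\rfin(P)=\{b\}$.
With $A=\{b\}$,
\[
(P-A)*\starter{T_P}{A}
=
a * b\,\bullet
=
a b\,\bullet
\subsu
\loset{a\\b&\!\bullet},
\]
as the gluing forces $a<b$ while leaving the interfaces unchanged.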
The next two lemmas, whose proofs are again obvious,
state that events may be unstarted or terminated in any order.
\begin{lemma}
\label{l:TerminatorComp}
Let $U$ be an loset and $A, B\subseteq U$ disjoint subsets.
Then
\[
\terminator{U}{B}*\terminator{(U-B)}{A}=\terminator{U}{A\cup B}=\terminator{U}{A}*\terminator{(U-A)}{B}. \quad\qed
\]
\end{lemma}
\begin{lemma}
\label{l:Comm}
Let $P\in \iiPoms$ and $A, B\subseteq T_P$ disjoint subsets.
Then
\begin{equation*}
(P*\terminator{T_P}{B})-A
=
(P-A)*\terminator{(T_P-A)}{B}. \quad\qed
\end{equation*}
\end{lemma}
\begin{lemma}
\label{l:Approx}
Assume that $P\Lapprox Q$ for $P,Q\in\iiPoms_U$.
Then $P*\terminator{U}{B}\Lapprox Q*\terminator{U}{B}$ for every $B\subseteq U$.
\end{lemma}
\begin{proof}
Obviously $\fin(P*\terminator{U}{B})=\fin(P)-B\cong \fin(Q)-B=\fin(Q*\terminator{U}{B})$.
For every $A\subseteq \rfin(P)-B\cong \rfin(Q)-B$ we have
\[
((P-A)*\terminator{(U-A)}{B})\backslash L
=
((Q-A)*\terminator{(U-A)}{B})\backslash L
\]
by assumption and Lemma \ref{l:ExtQ}.
But
$(P*\terminator{U}{B})-A=(P-A)*\terminator{(U-A)}{B}$
and
$(Q*\terminator{U}{B})-A=(Q-A)*\terminator{(U-A)}{B}$
by Lemma \ref{l:Comm}. \qed
\end{proof}
\begin{proposition}
$\MN(L)$ is a well-defined HDA.
\end{proposition}
\begin{proof}
The face maps are well-defined:
for $\delta^0_A$ this follows from
Lemma \ref{l:StrongEqDef},
for $\delta^1_B$ from Lemma \ref{l:Approx}.
The precubical identities $\delta^\nu_A \delta^\mu_B=\delta^\mu_B\delta^\nu_A$
are clear for $\nu=\mu=0$,
follow from Lemma \ref{l:TerminatorComp} for $\nu=\mu=1$,
and from Lemma \ref{l:Comm} for $\{\nu,\mu\}=\{0,1\}$.\! \qed
\end{proof}
\subsubsection*{Paths and essential cells of $\MN(L)$.}
The next lemma provides paths in $\MN(L)$.
\begin{lemma}
\label{l:PathDetail}
For every $N,P\in \iiPoms$ such that $T_N\cong S_P$
there exists a path $\alpha\in \Path(\MN(L))_{\eqcl{N}}^{\eqcl{N P}}$
such that $\ev(\alpha)=P$.
\end{lemma}
\begin{proof}
Choose a decomposition
$P=Q_1*\dotsm*Q_n$ into starters and terminators.
Denote $U_k=T_{Q_k}=S_{Q_{k+1}}$
and define
\begin{equation*}
x_k=\eqcl{N*Q_1*\dotsm*Q_k}, \qquad
\phi_k=
\begin{cases}
d^0_A & \text{if $Q_k=\starter{U_k}{A}$}, \\
d^1_B & \text{if $Q_k=\terminator{U_{k-1}}{B}$}
\end{cases}
\end{equation*}
for $k=1,\dotsc,n$.
If $\phi_k=d^0_A$ and $Q_k=\starter{U_k}{A}$, then
\begin{multline*}
\delta^0_A(x_k)
=
\eqcl{N*Q_1*\dotsm*Q_{k-1}*\starter{U_k}{A}-A} \\
=
\eqcl{N*Q_1*\dotsm*Q_{k-1}*\id_{U_k-A}}
=
x_{k-1}.
\end{multline*}
If $\phi_k=d^1_B$ and $Q_k=\terminator{U_{k-1}}{B}$, then
\[
\delta^1_B(x_{k-1})
=
\eqcl{N*Q_1*\dotsm*Q_{k-1}*\terminator{U_{k-1}}{B}}
=
x_{k}.
\]
Thus, $\alpha=(x_0,\phi_1,x_1,\dotsc,\phi_n,x_n)$
is a path
with $\ev(\alpha)=P$, $\src(\alpha)=\eqcl{N}$ and $\tgt(\alpha)=\eqcl{N*P}$. \qed
\end{proof}
Our goal is now to describe essential cells of $\MN(L)$.
\begin{lemma}
\label{l:AccPos}
All regular cells of $\MN(L)$ are accessible.
If $P\backslash L\neq \emptyset$, then $\eqcl{P}$ is coaccessible.
\end{lemma}
\begin{proof}
Both claims follow from Lemma \ref{l:PathDetail}.
For every $P$ there exists a path
from $\eqcl{\id_{S_P}}$ to $\eqcl{\id_{S_P}*P}=\eqcl{P}$.
If $Q\in P\backslash L$,
then there exists a path $\alpha\in \Path(\MN(L))_{\eqcl{P}}^{\eqcl{PQ}}$,
and $\eqcl{PQ}\in \MN(L)^\top$. \qed
\end{proof}
\begin{lemma}
\label{l:AccNeg}
Subsidiary cells of $\MN(L)$ are not accessible.
If $P\backslash L=\emptyset$,
then $\eqcl{P}$ is not coaccessible.
\end{lemma}
\begin{proof}
If $\alpha\in\Path(\MN(L))_\bot^{w_U}$,
then it contains a step $\beta$ from a regular cell to a subsidiary cell
(since all start cells are regular).
Yet $\beta$ can be neither an upstep (since lower faces of subsidiary cells are subsidiary)
nor a downstep (since upper faces of regular cells are regular).
This contradiction proves the first claim.
To prove the second part we use a similar argument.
If $P\backslash L=\emptyset$, then a path $\alpha\in\Path(\MN(L))_{\eqcl{P}}^\top$
contains only regular cells (as shown above).
Therefore, $\alpha$ contains a step $\beta$ from $\eqcl{Q}$ to $\eqcl{R}$
such that $Q\backslash L=\emptyset$ and $R\backslash L\neq\emptyset$.
If $\beta$ is a downstep, \ie $\beta=(\eqcl{Q}\arrI{A} \eqcl{Q*\terminator{U}{A}})$,
and $N\in R\backslash L=(Q*\terminator{U}{A})\backslash L$,
then $\terminator{U}{A}*N\in Q\backslash L\neq\emptyset$: a contradiction.
If $\beta=(\eqcl{R-A}\arrO{A} \eqcl{R})$ and $N\in R\backslash L$,
then, by Lemma \ref{l:MinExt},
\[
(R-A)*\starter{U}{A}*N\subsu
R*N
\in L,
\]
implying that $(R-A)\backslash L\neq \emptyset$ by Lemma \ref{l:QuotIncl}. \qed
\end{proof}
Lemmas \ref{l:AccPos} and \ref{l:AccNeg} together immediately imply the following.
\begin{proposition}
\label{p:Acc}
$\ess(\MN(L))=\{\eqcl{P}\mid P\backslash L\neq \emptyset\}.$ \qed
\end{proposition}
\subsubsection*{$\MN(L)$ recognises $L$.}
One inclusion follows immediately from Lemma \ref{l:PathDetail}:
\begin{lemma}
\label{l:Path}
$L\subseteq\Lang(\MN(L))$.
\end{lemma}
\begin{proof}
For every $P\in \iiPoms$
there exists a path $\alpha\in\Path(\MN(L))_{\eqcl{\id_{S_P}}}^{\eqcl{P}}$
such that $\ev(\alpha)=P$.
If $P\in L$, then $\epsilon\in P\backslash L$,
\ie $\eqcl{P}$ is an accept cell.
Thus $\alpha$ is accepting and
$P=\ev(\alpha)\in \Lang(\MN(L))$. \qed
\end{proof}
The converse inclusion requires more work.
For a regular cell $\eqcl{P}$ of $\MN(L)$ denote $\eqcl{P}\backslash L=P\backslash L$
(this obviously does not depend on the choice of $P$).
\begin{lemma}
\label{l:Limit}
If $S\in\sq$ and $\alpha\in\Path(\MN(L))_{\eqcl{\id_S}}$,
then $\tgt(\alpha)\backslash L\subseteq \ev(\alpha)\backslash L$.
\end{lemma}
\begin{proof}
By Lemma \ref{l:AccNeg},
all cells appearing along $\alpha$ are regular.
We proceed by induction on the length of $\alpha$.
For $\alpha=(\eqcl{\id_S})$ the claim is obvious.
If $\alpha$ is non-trivial, we have two cases.
\begin{compactitem}
\item
$\alpha=\beta*(\delta^0_A(\eqcl{P})\arrO{A} \eqcl{P})$,
where $\eqcl{P}\in\MN(L)[U]$ and $A\subseteq \rfin(P)\subseteq U\cong T_P$.
By the induction hypothesis,
\[
(P-A)\backslash L
=\delta^0_A(\eqcl{P})\backslash L
=\tgt(\beta)\backslash L
\subseteq \ev(\beta)\backslash L.
\]
For $Q\in \iiPoms$ we have
\begin{align*}
Q\in P\backslash L
\iff P Q\in L
&\implies (P-A)*\starter{U}{A}*Q\in L
\tag{Lemma \ref{l:MinExt}}\\
&\iff \starter{U}{A}*Q \in (P-A)\backslash L\\
&\implies \starter{U}{A}*Q \in \ev(\beta)\backslash L
\tag{induction hypothesis}\\
&\iff \ev(\beta)*\starter{U}{A}*Q \in L\\
&\iff \ev(\alpha)*Q \in L
\iff Q \in \ev(\alpha)\backslash L.
\end{align*}
Thus, $\eqcl{P}\backslash L=P\backslash L\subseteq \ev(\alpha)\backslash L$.
\item
$\alpha=\beta*(\eqcl{P}\arrI{B}\delta^1_B(\eqcl{P}))$,
where $\eqcl{P}\in\MN(L)[U]$ and $B\subseteq U\cong T_P$.
By the induction hypothesis,
$
P\backslash L=\tgt(\beta)\backslash L \subseteq \ev(\beta)\backslash L
$.
Thus,
\begin{equation*}
\tgt(\alpha)\backslash L
=
\delta^1_B(\eqcl{P})\backslash L
=
\eqcl{P*\terminator{U}{B}}\backslash L
\subseteq
(\ev(\beta)*\terminator{U}{B})\backslash L
=
\ev(\alpha)\backslash L.
\end{equation*}
\end{compactitem}
The inclusion above follows from Lemma \ref{l:ExtQ}. \qed
\end{proof}
\begin{proposition}
\label{p:MNLang}
$\Lang(\MN(L))=L$.
\end{proposition}
\begin{proof}
The inclusion $L\subseteq \Lang(\MN(L))$ is shown in Lemma \ref{l:Path}.
For the converse, let $S\in\sq$ and $\alpha\in\Path(\MN(L))_{\eqcl{\id_S}}$,
then Lemma \ref{l:Limit} implies
\[
\tgt(\alpha)\in \MN(L)^\top
\iff
\epsilon\in \tgt(\alpha)\backslash L
\implies
\epsilon\in \ev(\alpha)\backslash L
\iff
\ev(\alpha)\in L,
\]
that is, if $\alpha$ is accepting, then $\ev(\alpha)\in L$. \qed
\end{proof}
\subsubsection*{Finiteness of $\MN(L)$.}
The HDA $\MN(L)$ is not finite,
since it contains infinitely many subsidiary cells $w_U$.
Below we show that its essential part $\MN(L)^\ess$ is finite
if $L$ has finitely many prefix quotients.
\begin{lemma}
\label{l:FiniteEss}
If $\suff(L)$ is finite,
then $\ess(\MN(L))$ is finite.
\end{lemma}
\begin{proof}
For $\eqcl{P},\eqcl{Q}\in\ess(\MN(L))$,
we have $\eqcl{P}=\eqcl{Q}$ iff $f(\eqcl{P})=f(\eqcl{Q})$,
where
\[
f(\eqcl{P})
=
(
P\backslash L,
\fin(P),
((P-A)\backslash L)_{A\subseteq \rfin(P)}
).
\]
We will show that $f$ takes only finitely many values on $\ess(\MN(L))$.
Indeed, $P\backslash L$ belongs to the finite set $\suff(L)$.
Further, all ipomsets in $P\backslash L$ have source interfaces equal to $T_P$.
Since $P\backslash L$ is non-empty, $\fin(P)$ is a starter with $T_P$ as underlying loset,
and there are only finitely many starters on any given loset.
The last coordinate also may take only finitely many values,
since $\rfin(P)$ is finite and $(P-A)\backslash L\in\suff(L)$. \qed
\end{proof}
\begin{varproof}[of Thm.~\ref{t:MN}, {\rm (b)$\implies$(a)}]
From Lemma \ref{l:FiniteEss} and Lemma \ref{l:HDAGen},
$\MN(L)^\ess$ is a finite HDA.
By Prop.~\ref{p:MNLang} we have $\Lang(\MN(L)^\ess)=\Lang(\MN(L))=L$. \qed
\end{varproof}
\begin{figure}
\caption{Two HDAs recognising the language of Example \ref{exa:hda-loop}}
\label{fig:hda-loop}
\end{figure}
\begin{example}
\label{exa:hda-loop}
We finish this section with another example,
which shows some subtleties related to higher-dimensional loops.
Let $L$ be the language of the HDA shown to the left of Fig.~\ref{fig:hda-loop}
(a looping version of the HDA of Fig.~\ref{fig:ab*cb*cd}),
then
\begin{equation*}
L = \{\bullet\,a\,\bullet\} \cup \{\loset{\bullet\!& aa&\!\bullet \\ &b}^n\mid n\ge 1\}\down.
\end{equation*}
Our construction yields $\MN(L)^\ess$ as shown on the right of the figure.
Here, $e=\eqcl{\loset{\bullet\!&a \\ &b&\!\bullet}}$, and
cells with the same names are identified.
These identifications follow from the fact that
$\loset{ \bullet\!& aa \\ &bb&\!\bullet}
\Lapprox \loset{\bullet\!&a \\ &b&\!\bullet}$,
$\loset{ \bullet\!& aa \\ &bb }
\Lapprox \loset{\bullet\!&a \\ &b}$, and
$\loset{ \bullet\!& aa \\ &b }
\Lapprox \bullet\, a$.
Note that $\loset{ \bullet\!&a&\!\bullet \\ &b&\!\bullet}$
and $\loset{ \bullet\!& aa&\!\bullet \\ &bb&\!\bullet}$
are not strongly equivalent, since they have different signatures:
$\loset{ \bullet\!& a&\!\bullet \\ &b&\!\bullet}$
and $\loset{ a&\!\bullet \\ b&\!\bullet}$, respectively.
\end{example}
\section{Determinism}
We now make our notion of determinism precise
and show that not all HDAs can be determinized.
Recall that we do not assume finiteness.
An HDA $X$ is \emph{deterministic} if
\begin{compactenum}
\item
for every $U\in \sq$ there is at most one start cell in $X[U]$, and
\item
for all $V\in \sq$, $A\subseteq V$ and essential cell $x\in X[V-A]$
there exists at most one essential cell $y\in X[V]$
such that $\delta^0_A(y)=x$.
\end{compactenum}
A language is \emph{deterministic}
if it is recognised by a deterministic HDA.
That is, from any essential cell $x$ in a deterministic HDA $X$ and for any set $A$ of events,
there is at most one way to start $A$ at $x$ and remain in the essential part of $X$.
We allow multiple initial cells because ipomsets in $L$ may have different source interfaces;
for each source interface in $L$, there can be at most one matching start cell in $X$.
Note that we must restrict our definition to essential cells,
as inessential cells cannot always be removed
(in contrast to the case of standard automata).
A language $L$ is \emph{swap-invariant}
if it satisfies one of the following equivalent conditions:
\begin{compactenum}
\item[(1)]
$
PP', QQ'\in L\text{ and } P\subsu Q
\implies
QP'\in L.
$
($P, Q, P', Q'\in\iiPoms$)
\item[(2)]
$
P\subsu Q \implies P\backslash L = Q\backslash L \text{ or } Q\backslash L=\emptyset.
$
($P,Q\in\iiPoms$)
\end{compactenum}
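These two conditions are indeed equivalent.
Assuming (1), let $P\subsu Q$ with $Q\backslash L\neq\emptyset$ and pick $Q'\in Q\backslash L$.
For any $P'\in P\backslash L$ we have $PP', QQ'\in L$, so (1) yields $QP'\in L$, \ie $P'\in Q\backslash L$;
together with Lemma \ref{l:QuotIncl} this shows $P\backslash L=Q\backslash L$.
Conversely, assuming (2), let $PP', QQ'\in L$ with $P\subsu Q$.
Then $Q'\in Q\backslash L\neq \emptyset$, so $P\backslash L=Q\backslash L$,
and $P'\in P\backslash L$ gives $QP'\in L$.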
Our main goal is to show the following criterion,
which will be implied by Props.~\ref{l:DetImpliesSInv} and \ref{l:SInvImpliesDet} below.
\begin{theorem}
\label{t:swapdet}
A language $L$ is deterministic iff it is swap-invariant.
\end{theorem}
\begin{example}
The regular language $L=\{\loset{a\\b}, a b, b a, a b c\}$ from Example \ref{ex:nondet}
is not swap-invariant:
using condition (2) above,
$ab\,\bullet\subsu\loset{a\\b&\!\bullet}$, but
$(ab\,\bullet)\backslash L
=
\{\bullet\, b, \bullet\, bc\}
\neq
\{\bullet\, b\}
=
\loset{a\\b&\!\bullet}\backslash L$.
Hence $L$ is not deterministic.
\end{example}
The next examples explain why we need to restrict to essential cells
in the definition of determinism.
\begin{example}
The HDA in Example \ref{ex:aa} is deterministic.
There are two different $a$-labelled edges starting at $w_\epsilon$
($w_a$ and $\eqcl{\loset{\bullet\!& a \\ \bullet\!& a&\!\bullet }}$),
but this does not violate determinism, since $w_\epsilon$ is not accessible.
\end{example}
\begin{example}
Let $L=\{ab, \loset{a&\!\bullet\\ b&\!\bullet}\}$.
Then $\MN(L)^{\ess}$ is as follows:
\[
\begin{tikzpicture}[x=1cm, y=.8cm]
\filldraw[color=black!10!white] (0,0)--(2,0)--(2,2)--(0,2)--(0,0);
\node[state] (eps) at (0,0) {$\epsilon$};
\node[below left] at (eps) {$\bot$};
\node[state] (a) at (2,0) {$a$};
\node[state] (b) at (0,2) {$y$};
\node[state] (ab) at (2,2) {$y$};
\node[state] (aa) at (4,0) {$ab$};
\node[above right=0.1] at (aa) {$\top$};
\path (eps) edge node[swap] {$a\,\bullet$} (a);
\path (eps) edge node {$b\,\bullet$} (b);
\path (a) edge node[swap] {$ab\,\bullet$} (aa);
\path (b) edge node {$y_{a\bullet}$} (ab);
\path (a) edge node[swap] {$y_{b\bullet}$} (ab);
\node (m) at (1,1) {${\loset{ a&\!\bullet \\ b&\!\bullet}}$};
\node at (1.4,1.2) {$\top$};
\end{tikzpicture}
\]
It is deterministic; there are two $b$-labelled edges leaving $a$,
namely $y_{b\bullet}$ and $ab\,\bullet$,
but only the latter is coaccessible.
\end{example}
\begin{lemma}
\label{l:DetEquiPaths}
Let $X$ be a deterministic HDA and $\alpha,\beta\in\Path(X)_\bot$
with $\tgt(\alpha),$ $\tgt(\beta)\in\ess(X)$.
If $\src(\alpha)=\src(\beta)$ and $\ev(\alpha)=\ev(\beta)$,
then $\tgt(\alpha)= \tgt(\beta)$.
\end{lemma}
\begin{proof}
We can assume that $\alpha=\alpha_1*\dotsm*\alpha_n$ and $\beta=\beta_1*\dotsm*\beta_m$ are sparse;
note that all of these cells are essential.
Denote $P=\ev(\alpha)=\ev(\beta)$,
then
\[
P=\ev(\alpha)=\ev(\alpha_1)*\dotsm*\ev(\alpha_n)
\]
is a sparse step decomposition of $P$.
Similarly, $P=\ev(\beta_1)*\dotsm*\ev(\beta_m)$ is a sparse step decomposition.
Yet sparse step decompositions are unique by Lemma~\ref{p:SparsePresentation};
hence, $m=n$ and $\ev(\alpha_k)=\ev(\beta_k)$ for every $k$.
We show by induction that $\alpha_k=\beta_k$.
Assume that $\alpha_{k-1}=\beta_{k-1}$.
Let $x=\src(\alpha_k)=\tgt(\alpha_{k-1})=\tgt(\beta_{k-1})=\src(\beta_k)$.
If $P_k=\ev(\alpha_k)=\ev(\beta_k)$ is a terminator $\terminator{U}{B}$,
then both $\alpha_k$ and $\beta_k$ are the downstep $(x\arrI{B}\delta^1_B(x))$,
so $\alpha_k=\beta_k$.
If $P_k$ is a starter $\starter{U}{A}$,
then $\alpha_k=(x\arrO{A}y)$ and $\beta_k=(x\arrO{A}z)$
for cells $y,z\in X$ with $\delta^0_A(y)=\delta^0_A(z)=x$.
As $y$ and $z$ are essential and $X$ is deterministic,
this implies $y=z$ and hence $\alpha_k=\beta_k$. \qed
\end{proof}
\begin{lemma}
\label{l:DetSubPaths}
Let $\alpha$ and $\beta$ be essential paths on a deterministic HDA $X$.
Assume that $\src(\alpha)=\src(\beta)$ and $\ev(\alpha)\subsu\ev(\beta)$.
Then $\tgt(\alpha)=\tgt(\beta)$.
\end{lemma}
\begin{proof}
Denote $x=\src(\alpha)=\src(\beta)$ and $y=\tgt(\beta)$.
By Lemma \ref{l:PathTrack} there exists an HDA map
$f:\sq^{\ev(\beta)}\to X_x^y$.
By \cite[Lemma 63]{Hdalang}
there is an HDA map $i:\sq^{\ev(\alpha)}\to \sq^{\ev(\beta)}$.
We apply Lemma \ref{l:PathTrack} again to the composition $f\circ i$
and obtain that there is a path $\alpha'\in\Path(X)_x^y$
such that $\ev(\alpha')=\ev(\alpha)$.
Lemma \ref{l:DetEquiPaths} then implies $\tgt(\alpha)=\tgt(\alpha')=y$. \qed
\end{proof}
\begin{proposition}
\label{l:DetImpliesSInv}
If $L$ is deterministic, then $L$ is swap-invariant.
\end{proposition}
\begin{proof}
Let $X$ be a deterministic HDA that recognises $L$
and fix ipomsets $P\subsu Q$.
By Lemma \ref{l:QuotIncl}, $Q\backslash L\subseteq P\backslash L$.
It remains to prove that if $Q\backslash L\neq\emptyset$,
then $P\backslash L\subseteq Q\backslash L$.
Denote $U\cong S_P\cong S_Q$
and let $x_U$ be the start cell of $X$ in $X[U]$ (unique by determinism).
Let $R\in Q\backslash L$
and let $\omega\in\Path(X)_{x_U}^\top$ be an accepting path with $\ev(\omega)=QR$.
By Lemma \ref{l:PathDivision}, there exists a path $\beta\in\Path(X)_{x_U}$
such that $\ev(\beta)=Q$.
Now assume that $R'\in P\backslash L$,
and let $\omega'\in\Path(X)_{x_U}^\top$ be an accepting path such that $\ev(\omega')=PR'$.
By Lemma \ref{l:PathDivision},
there exist paths $\alpha\in\Path(X)_{x_U}$ and
$\gamma\in\Path(X)^{\tgt(\omega')}$
such that $\tgt(\alpha)=\src(\gamma)$,
$\ev(\alpha)=P$ and $\ev(\gamma)=R'$.
From Lemma \ref{l:DetSubPaths} and $P\subsu Q$ it follows that
$\tgt(\alpha)=\tgt(\beta)$.
Thus, $\beta$ and $\gamma$ may be concatenated
to an accepting path $\beta*\gamma$.
Since $\ev(\beta*\gamma)=QR'$,
we have $QR'\in L$, \ie $R'\in Q\backslash L$. \qed
\end{proof}
\begin{lemma}
\label{l:EssentialLowerFaces}
If $\eqcl{P}\in\ess(\MN(L))$ and $A\subseteq \rfin(P)$, then $\eqcl{P-A}\in\ess(\MN(L))$.
\end{lemma}
\begin{proof}
By Lemma \ref{l:Path}, $\eqcl{P-A}$ is accessible.
By assumption, $\eqcl{P}$ is coaccessible and $(\eqcl{P-A}\arrO{A}\eqcl{P})$
is a path, so $\eqcl{P-A}$ is also coaccessible. \qed
\end{proof}
\begin{proposition}
\label{l:SInvImpliesDet}
If $L$ is swap-invariant, then $\MN(L)$ is deterministic.
\end{proposition}
\begin{proof}
$\MN(L)$ contains only one start cell $\eqcl{\id_U}$ for every $U\in\sq$.
Fix $U\in\sq$, $P,Q\in\iiPoms_U$ and $A\subseteq U$.
Assume that $\delta_A^0(\eqcl{P})=\delta_A^0(\eqcl{Q})$, \ie
$\eqcl{P-A}=\eqcl{Q-A}$,
and $\eqcl{P},\eqcl{Q},\eqcl{P-A}\in\ess(\MN(L))$.
We will prove that $\eqcl{P}=\eqcl{Q}$,
or equivalently, $P\Lapprox Q$.
We have $\fin(P-A)=\fin(Q-A)=:\starter{(U-A)}{S}$.
First, notice that $A$, regarded as a subset of $P$ (or $Q$), contains no events of $\sfin(P)$:
otherwise, we would have $\delta^0_A(\eqcl{P})=w_{U-A}$ (or $\delta^0_A(\eqcl{Q})=w_{U-A}$).
As a consequence, $A\subseteq\rfin(P)$ and $\fin(P)=\fin(Q)=\starter{U}{(S\cup A)}$.
For every $B\subseteq \rfin(P)=\rfin(Q)$ we have
\begin{align*}
(P-A)&\Lapprox(Q-A) \implies\\
(P-(A\cup B))\backslash L &= (Q-(A\cup B))\backslash L\implies\\
((P-(A\cup B))*\starter{(U-B)}{(A-B)})\backslash L &= ((Q-(A\cup B))*\starter{(U-B)}{(A-B)})\backslash L.
\end{align*}
The first implication follows from the definition,
and the second from Lemma \ref{l:ExtQ}.
From Lemma \ref{l:MinExt} it follows that
\[
(P-(A\cup B))*\starter{(U-B)}{(A-B)}\subsu P-B,\quad
(Q-(A\cup B))*\starter{(U-B)}{(A-B)}\subsu Q-B.
\]
Thus, by swap-invariance we have
$
(P-B)\backslash L=(Q-B)\backslash L;
$
note that Lemma \ref{l:EssentialLowerFaces} guarantees that neither of these languages is empty. \qed
\end{proof}
\section{Ambiguity}
We say that an HDA $X$ is \emph{$k$-ambiguous}, for $k\ge 1$,
if every $P\in \Lang(X)$ is the event ipomset of at most $k$ sparse accepting paths in $X$.
Evidently deterministic HDAs are $1$-ambiguous (\ie unambiguous).
A language $L$ is \emph{of bounded ambiguity}
if it is recognised by a $k$-ambiguous HDA for some $k$.
Let $L = (\loset{a\\b} c d + a b \loset{c\\d})^+$; note that $L$ is regular \cite{conf/concur/FahrenbergJSZ22}.
We will show that $L$ is of unbounded ambiguity.
Let $X$ be an HDA such that $\Lang(X)=L$.
\begin{lemma}
\label{l:BrickPaths}
Let $\alpha$ and $\beta$ be essential sparse paths in $X$
with $\ev(\alpha)=\loset{a\\b} c d$ and $\ev(\beta)=a b \loset{c\\d}$.
Then
\begin{align*}
\alpha &= (v\arrO{ab}q\arrI{ab} x\arrO{c}e\arrI{c}y\arrO{d}f\arrI{d}z), \\
\beta &= (v'\arrO{a}g\arrI{a}w'\arrO{b}h'\arrI{b} x'\arrO{cd} r'\arrI{cd}z')
\end{align*}
for some $v,x,y,z,v',w',x',z'\in X[\epsilon]$,
$e\in X[c]$, $f\in X[d]$, $g\in X[a]$, $h'\in X[b]$,
$q\in X[\loset{a\\b}]$, $r'\in X[\loset{c\\d}]$.
Furthermore, $x\neq x'$, and for
\begin{align*}
\bar\alpha &= (v\arrO{a}\delta^0_b(q)\arrI{a} \delta^0_a\delta^1_a(q)\arrO{b}\delta^1_a(q)\arrI{b} x\arrO{c}e\arrI{c}y\arrO{d}f\arrI{d}z), \\
\bar\beta &= (v'\arrO{a}g\arrI{a}w'\arrO{b}h'\arrI{b} x'\arrO{c}\delta^0_d(r')\arrI{c} \delta^0_d\delta^1_c(r')\arrO{d}\delta^1_c(r')\arrI{d}z')
\end{align*}
we have $\ev(\bar{\alpha})=\ev(\bar\beta)=abcd$ and $\bar\alpha\neq\bar\beta$.
\end{lemma}
\begin{proof}
The unique sparse step decomposition of $\loset{a\\b}cd$ is
\[ \loset{a\\b}cd=
\loset{a&\!\bullet\\b&\!\bullet}
*\loset{\bullet\!& a\\ \bullet\!& b}
* [c\,\bullet] * [\bullet\, c] * [d\,\bullet] * [\bullet\, d].\]
Thus, $\alpha$ must be as described above.
A similar argument applies for $\beta$.
Now assume that $x=x'$. Then
\[
\gamma=(v\arrO{ab}q\arrI{ab} x=x'\arrO{cd} r'\arrI{cd}z')
\]
is a path on $X$ for which $\ev(\gamma)=\loset{a\\b}*\loset{c\\d}$.
Since $\gamma$ is essential,
there are paths $\gamma'\in\Path(X)_\bot^v$ and $\gamma''\in\Path(X)_{z'}^\top$.
The composition $\omega=\gamma' \gamma \gamma''$ is an accepting path.
Thus, $\ev(\gamma')*\loset{a\\b}*\loset{c\\d}*\ev(\gamma'')\in L$: a contradiction,
since no element of $L$ contains the factor $\loset{a\\b}*\loset{c\\d}$.
Calculation of $\ev(\bar\alpha)$ and $\ev(\bar\beta)$ is elementary,
and $\bar\alpha\neq \bar\beta$ because $x\neq x'$. \qed
\end{proof}
\begin{proposition}
The language $L = (\loset{a\\b} c d + a b \loset{c\\d})^+$ is of unbounded ambiguity.
\end{proposition}
\begin{proof}
Let $X$ be any HDA with $\Lang(X)=L$.
We will show that there exist at least $2^n$ different sparse accepting paths
accepting $(abcd)^n$.
Let $P=\loset{a\\b}cd$, $Q=ab\loset{c\\d}$.
For every sequence $\mathbf{R}=(R_1,\dotsc,R_n)\in \{P,Q\}^n$
let $\omega_{\mathbf{R}}$ be an accepting path
such that $\ev(\omega_{\mathbf{R}})=R_1*\dotsm*R_n$.
By Lemma \ref{l:PathDivision},
there exist paths $\omega_{\mathbf{R}}^1,\dotsc,\omega_{\mathbf{R}}^n$
such that $\ev(\omega_{\mathbf{R}}^k)=R_k$
and $\omega'_{\mathbf{R}}=\omega_{\mathbf{R}}^1*\dotsm*\omega_{\mathbf{R}}^n$
is an accepting path.
Let $\bar{\omega}_{\mathbf{R}}^k$ be the path defined as in Lemma \ref{l:BrickPaths}
(\ie like $\bar\alpha$ if $R_k=P$ and $\bar\beta$ if $R_k=Q$).
Finally, put
$
\bar\omega_{\mathbf{R}}
=\bar\omega_{\mathbf{R}}^1*\dotsm*\bar\omega_{\mathbf{R}}^n.
$
Now choose ${\mathbf{R}}\neq \mathbf{S}\in\{P,Q\}^n$.
Assume that $\bar\omega_{\mathbf{R}}=\bar\omega_{\mathbf{S}}$.
This implies that $\bar\omega_{\mathbf{R}}^k=\bar\omega_{\mathbf{S}}^k$
for all $k$ (all segments have the same length).
But there exists $k$ such that $R_k\neq S_k$ (say $R_k=P$ and $S_k=Q$),
and, by Lemma \ref{l:BrickPaths} again, applied to $\alpha=\bar\omega_{\mathbf{R}}^k$
and $\beta=\bar\omega_{\mathbf{S}}^k$, we get $\bar\omega_{\mathbf{R}}^k\neq \bar\omega_{\mathbf{S}}^k$: a contradiction.
As a consequence, the paths $\{\bar\omega_{\mathbf{R}}\}_{\mathbf{R}\in\{P,Q\}^n}$
are sparse and pairwise different, and $\ev(\bar\omega_{\mathbf{R}})=(abcd)^n$ for all $\mathbf{R}$. \qed
\end{proof}
\section{Conclusion and Further Work}
We have proven a Myhill-Nerode type theorem for higher-dimensional automata (HDAs),
stating that a language is regular iff it has finite prefix quotient.
We have also introduced deterministic HDAs and shown that not all finite HDAs are determinizable,
and further that there exist infinitely ambiguous regular languages.
An obvious follow-up question to ask is whether finite HDAs are \emph{learnable},
that is, whether our Myhill-Nerode construction can be used
to introduce a learning procedure for HDAs akin to Angluin's L$^*$ algorithm \cite{DBLP:journals/iandc/Angluin87}
or some other recent approaches
\cite{DBLP:conf/rv/IsbernerHS14, DBLP:conf/birthday/HowarS22, DBLP:conf/fossacs/BarloccoKR19}.
Our Myhill-Nerode theorem provides a language-internal criterion for whether a language is regular,
and we have developed a similar one to distinguish deterministic languages.
Analogous, but more complicated, criteria should be available for different levels of ambiguity.
Another important aspect is the \emph{decidability} of these questions,
together with other standard problems such as membership or language equivalence.
We believe that membership of an ipomset in a regular language is decidable,
but we are less sure about decidability of the other problems.
\subsubsection*{Acknowledgement.}
We are in great debt to Christian Johansen and Georg Struth
for numerous discussions regarding the subjects of this paper;
any errors, however, are exclusively ours.
\newcommand{\Afirst}[1]{#1} \newcommand{\afirst}[1]{#1}
\appendix
\section{Step decompositions}
\label{s:Step}
A \emph{step decomposition} of an ipomset $P$ is a presentation $P=Q_1*\dotsm *Q_n$
such that every $Q_k$ is a starter or a terminator.
A step decomposition is
\emph{sparse} if it contains no identities and starters and terminators alternate.
We show below that every ipomset
admits a \emph{unique} \emph{sparse} step decomposition.
For an ipomset $P$,
we denote by $P^{m}\subseteq P$ the subset of $<$-minimal elements
and
\[
P^{s}=\{p\in P\mid \forall\; p'\in P-P^{m}:\; p<p'\}.
\]
That is, $P^s$ contains precisely those minimal elements
which have arrows to all non-minimal elements.
Clearly, both $P^{m}$ and $P^{s}$ are losets, and $P^{s}\subseteq P^{m}\supseteq S_P$.
We need a few technical lemmas; the first is proven by simple calculations.
\begin{lemma}
\label{l:STGluing}
Let $P$ be an ipomset, $A\subseteq U\in\sq$.
\begin{compactenum}
\item
Assume that $U\cong S_P$ and $P'=\starter{U}{A}* P$.
Then $P'$ and $P$ are isomorphic as pomsets,
$T_P\cong T_{P'}$ and $S_{P'}\cong S_P-A$.
\item
Assume that $U-A\cong S_P$ and $P'=\terminator{U}{A}* P$.
Then $P'\cong P\cup A$ as sets, and $P\cong P'-A$ as pomsets,
$T_P\cong T_{P'}$ and $S_{P'}\cong U$. \qed
\end{compactenum}
\end{lemma}
Consider a presentation $P\cong Q R$.
From the definition it follows that $P^{m}\cong Q^{m}$ and $S_P\cong S_Q$.
This implies:
\begin{lemma}
\label{l:StartCrit}
Assume that $P\cong Q R$ and $Q$ is either a (non-identity) starter or a terminator.
Then $Q$ is a starter if{}f $S_P\subsetneq P^{m}$,
and $Q$ is a terminator if{}f $S_P=P^{m}$.
\end{lemma}
\begin{proof}
We have $P^{m}\cong Q=Q^m$ and $S_P\cong S_Q$.
But $Q$ is a terminator iff $S_Q=Q$ and a (non-identity) starter iff $S_Q\subsetneq Q$.
\qed
\end{proof}
\begin{lemma}
\label{l:SparsePreLemma}
Assume that $P\cong Q Q' R$.
\begin{compactenum}
\item
If $Q$ is a non-identity starter and $Q'$ is a non-identity terminator, then $Q\cong \starter{P^{m}}{P^{m}-S_P}$.
\item
If $Q$ is a non-identity terminator and $Q'$ is a non-identity starter, then $Q\cong\terminator{P^m}{P^s}$.
\end{compactenum}
\end{lemma}
\begin{proof}
Consider the first case. Then $P$ and $Q' R$ are isomorphic as pomsets,
and
\[
Q=T_Q\cong S_{Q'R}
\overset{\text{Lemma \ref{l:StartCrit}}}=
(Q'R)^m
\overset{\text{Lemma \ref{l:STGluing}}}\cong
P^m.
\]
Equality $S_Q=S_P$ follows immediately from the definition.
In the second case, we have $Q=S_Q\cong S_P\overset{\text{Lemma \ref{l:StartCrit}}}=P^m$,
and $Q' R\cong P-(Q-T_Q)$ as pomsets (Lemma \ref{l:STGluing}).
By Lemma \ref{l:StartCrit} we have
\[
P^m\cap (Q'R)= Q\cap (Q'R)=T_Q\cong S_{Q'R}
\overset{\text{Lemma \ref{l:StartCrit}}}\subsetneq
(Q'R)^m.
\]
Hence there exists an element $p\in Q' R$
that is minimal in $Q' R$ but not in $P$.
For every $p'\in P^s$ we have $p'<p$ and, therefore, $p'\not\in Q'R$.
As a consequence, $P^s\subseteq P-(Q'R)=Q-T_Q$ (Lemma \ref{l:STGluing}).
On the other hand, if $p'\in P^m-P^s$, then there exists $p\in P-P^m=P-Q$
such that $p'\not< p$. Thus, $p'$ must belong to $T_Q$. \qed
\end{proof}
\begin{varproof}[of Prop.~{\ref{p:SparsePresentation}}]
Let $P = P_1*\dotsm*P_n = Q_1*\dotsm*Q_m$ be two sparse presentations.
If $n=1$, then $m=1$ and equality follows trivially,
so assume $n, m\ge 2$ and write $P_2*\dotsm*P_n=P'$ and $Q_2*\dotsm*Q_m=Q'$.
Assume first that $P_1$ is a starter.
By Lemma \ref{l:SparsePreLemma}, $P_1\cong \starter{P^m}{P^m-S_P}$.
By Lemma \ref{l:STGluing},
$S_P\cong S_{P'}-(P^m-S_P)$.
Hence $S_{P'}\cong P^m$, implying $S_P\subsetneq P^m$.
By Lemma \ref{l:StartCrit}, $Q_1$ is a starter.
By Lemma \ref{l:SparsePreLemma}, $Q_1\cong \starter{P^m}{P^m-S_P}$.
Thus $P_1\cong Q_1$, and we may proceed inductively with $P'=Q'$.
Now assume instead that $P_1$ is a terminator.
By Lemma \ref{l:SparsePreLemma}, $P_1\cong \terminator{P^m}{P^s}$.
By Lemma \ref{l:STGluing},
$S_P\cong P^m$.
By Lemma \ref{l:StartCrit}, $Q_1$ is a terminator.
By Lemma \ref{l:SparsePreLemma}, $Q_1\cong \terminator{P^m}{P^s}$.
Thus $P_1\cong Q_1$, and we may proceed inductively with $P'=Q'$. \qed
\end{varproof}
\end{document} |
\begin{document}
\title{\Large\bf
Triplets and Symmetries of Arithmetic mod $p^k$}
\begin{abstract}
The finite ring $Z_k=Z$(+,~.) mod $p^k$ of residue arithmetic with
odd prime power modulus is analysed. The cyclic group of units $G_k$
in $Z_k$(.) has order $(p-1).p^{k-1}$, implying product structure
$G_k\equiv A_k.B_k$ with $|A_k|=p-1$ and $|B_k|=p^{k-1}$, the ``core''
and ``extension subgroup'' of $G_k$ respectively. It is shown that
each subgroup $S \supset 1$ of core $A_k$ has zero sum, and $p$+1
generates subgroup $B_k$ of all $n\equiv 1$ mod $p$ in $G_k$.
The $p$-th power residues $n^p$ mod $p^k$ in $G_k$ form an order
$|G_k|/p$ subgroup $F_k$, with $|F_k|/|A_k|=p^{k-2}$, so $F_k$
properly contains core $A_k$ for $k \geq 3$.
By quadratic analysis (mod $p^3$) rather than linear analysis (mod $p^2$,
re: Hensel's lemma [5]~), the additive structure of subgroups $G_k$
and $F_k$ is derived. ~Successor function $n$+1 combines with the two
arithmetic $symmetries$ $-n$ and $n^{-1}$ to yield a $triplet$
structure in $G_k$ of three inverse pairs ($n_i,~n_i^{-1}$) with:
$n_i +1 \equiv -(n_{i+1})^{-1}$, ~indices mod 3, and $n_0.n_1.n_2 \equiv 1$
mod $p^k$. In case $n_0 \equiv n_1 \equiv n_2 \equiv n$ this reduces to the cubic
root solution $n+1 \equiv - n^{-1} \equiv - n^2$ mod $p^k ~(p$=1 mod 6).
The property of exponent $p$ distributing over a sum of core residues:
$(x+y)^p \equiv x+y \equiv x^p + y^p$ mod $p^k$ is employed to derive the known
$FLT$ inequality for integers. In other words, to a $FLT$ mod $p^k$
equivalence for $k$ digits correspond $p$-th power integers of $pk$
digits, and the $(p-1)k$ carries make the difference, representing
the sum of mixed-terms in the binomial expansion.
\end{abstract}
{\bf Keywords}: Residue arithmetic, ring, group of units,
multiplicative semigroup, \\additive structure, triplet,
cubic roots of unity,~ $carry$, ~Hensel, Fermat, $FST, FLT$.
{\bf MSC-class}: ~11D41
\section*{Introduction}
The commutative semigroup $Z_k$(.) of multiplication mod $p^k$
(prime $p>$2) has for all $k>$0 ~just two idempotents: $1^2 \equiv 1$
and $0^2 \equiv 0$, and is the disjoint union of the corresponding
maximal subsemigroups (~Archimedian components~[3], [4]~).
Namely the group $G_k$ of units ($n^i\equiv 1$ mod $p^k$ for some $i>$0)
which are all relative prime to $p$, and maximal ideal $N_k$ as
nilpotent subsemigroup of all $p^{k-1}$ multiples of $p$ ($n^i\equiv 0$
mod $p^k$ for some $i>$0). Order $|G_k|=(p-1)p^{k-1}$ has two
coprime factors, so that $G_k\equiv A_kB_k$, with ``core'' $|A_k|=p-1$
and ``extension group'' $|B_k|=p^{k-1}$.
Residues of $n^p$ form a subgroup $F_k \subset G_k$ of order
$|F_k|=|G_k|/p$, to be analysed for its additive structure. Each
$n$ in core $A_k$ satisfies $n^p\equiv n$ mod $p^k$, a generalization
of Fermat's Small Theorem ($FST$) for $k>1$, denoted as $FST_k$.
{\bf Base $p$} number representation is used, which notation is
useful for computer experiments, as reported in tables 1,2. This
models residue arithmetic mod $p^k$ by considering only the $k$
less significant digits, and ignoring the more significant digits.
Congruence class [$n$] mod $p^k$ is represented by natural number
$n<p^k$, encoded in $k$ digits (base $p$). Class [$n$] consists of
all integers with the same least significant $k$ digits as $n$.
{\bf Define} the {\bf 0-extension} of residue $n$ mod $p^k$ as the
natural number $n < p^k$ with the same $k$-digit representation
(base $p$), and all more significant digits (at $p^m, ~m \geq k)$
set to 0.
Signed residue $-n$ is only a convenient notation for the complement
$p^k-n$ of $n$, which are both positive. $C[n]$ or $C_n$ is a cyclic
group of order $n$, such as $Z_k(+) \cong C[p^k]$. The units mod $p$
form a cyclic group $G_1=C_{p-1}$, and $G_k$ of order $(p-1).p^{k-1}$
is also cyclic for $k>$1 ~[1]. ~~Finite {\it semigroup structure} is
applied, and {\it digit analysis} of prime-base residue arithmetic,
to study the combination of (+) and (.) mod $p^k$, especially the
additive properties of multiplicative subgroups of ring $Z_k(+, .)~$.
Only elementary residue arithmetic, cyclic groups, and (associative)
function composition are used (thm3.2), to begin with the known
cyclic (one generator) nature of group $G_k$ of units mod $p^k$ [1].
Lemma 1.1 on the direct product structure of $G_k$, and cor1.2 on
all $p$-th power residues mod $p^k$ as all extensions of those
mod $p^2$, are known in some form but are derived for completeness.
Lemma 1.4 on $B_k=(p+1)^*$, and further results are believed to be new.
The {\bf two symmetries} of residue arithmetic mod $p^k$, defined as
automorphisms of order 2, are {\bf complement} $-n$ under (+) with
factor $-1$, and {\bf inverse} $n^{-1}$ under (.) with exponent $-1$.\\
Their essential role in the triplet- structure (thm3.1) of this finite
ring is emphasized throughout. \\The main emphasis is on additive
analysis of multiplicative semigroup $Z$(.) mod $p^k$.\\
Concatenation will be used to indicate multiplication.
\begin{tabular}{ll}
{\bf Symbols} & and {\bf Definitions}~~~~~~(~odd prime $p$~)\\
\hline
$Z_k$(+,~.) & the finite ring of residue arithmetic mod $p^k$\\
$M_k$ & multiplication $Z_k$(.) mod $p^k$, semigroup ($k$-digit
arithmetic base $p$)\\
$N_k$ & maximal ideal of $M_k:~n^i\equiv 0$ mod $p^k$~(some $i>$0),
$|N_k|=p^{k-1}$\\
$n \in M_k$ & unique product ~$n = g^i.p^{k-j}$ mod $p^k$ ~($g^i
\in G_j$ coprime to $p$)\\
0-extension X & of residue $x$ mod $p^k$:
the smallest non-negative integer $X \equiv x$ mod $p^k$\\
(finite) extension $U$ & of $x$ mod $p^k$:
any integer $U \equiv x$ mod $p^k$\\
$C_m$ or $C[m]$ & cyclic group of order $m$:
~~e.g. ~$Z_k(+) \cong C[p^k]$\\
$G_k\equiv A_k.B_k$ & group of units: all ~$n^i \equiv 1$ mod $p^k$
~(some $i>$0), ~~$|G_k|\equiv (p-1)p^{k-1}$\\
$A_k$ & ~~~~~{\bf core} of $G_k$, ~~$|A_k|=p-1~~~(n^p$=$n$
mod $p^k$ for $n \in A_k$)\\
$B_k\equiv (p+1)^*$ & extension group of all $n\equiv 1$ mod $p$ ,
~~$|B_k|=p^{k-1}$\\
$F_k$ & subgroup of all $p$-th power residues in $G_k$ ,
~~$|F_k|=|G_k|/p$ \\
$A_k \subset F_k \subset G_k$ &
proper inclusions only for $k \geq 3~~(A_2\equiv F_2 \subset G_2)$\\
$d(n)$ & core increment $A(n+1)-A(n)$ of core function
$A(n)\equiv n^q,~q=|B_k|$\\
$FST_k$ & core $A_k$ extends $FST ~(n^p \equiv n$ mod $p$) to
mod $p^{k>1}$ for $p-1$ residues\\
solution in core & $x^p+y^p \equiv z^p$ mod $p^k$ ~with~ $x,y,z$
~in core $A_k$.\\
period of $n \in G_k$ & order $|n^*|$ of subgroup generated by
$n$ in $G_k(.)$\\
normation & divide $x^p+y^p \equiv z^p$ mod $p^k$ by one term (in $F_k$),
yielding one term $\pm 1$\\
complement $-n$ & unique in $Z_k$(+) : ~$-n+n\equiv 0$ mod $p^k$\\
inverse $n^{-1}$ & unique in ~$G_k$(.) : ~$n^{-1}.~n\equiv 1$ mod $p^k$\\
1-complement $n"$ & unique in $Z_k$(+) : ~$n"+n\equiv -1$ mod $p^k$\\
inverse-pair & pair ($a, ~a^{-1}$) of inverses in $G_k$ \\
{\bf triplet} & 3 inv.pairs: ~$a+b^{-1}\equiv b+c^{-1}\equiv c+a^{-1}\equiv -1,
~(abc\equiv 1$ mod $p^k$)\\
triplet$^p$ & a triplet of three $p$-th power residues in subgroup $F_k$ (thm3.1)\\
triplet$^p$ equiv'ce & one of the three equivalences of a triplet$^p$\\
symmetry mod $p^k$ & $-n$ and $n^{-1}$: ~order 2 automorphism of
$Z_k(+)$ resp. $G_k(.)$\\
$EDS$ property & Exponent Distributes over a Sum:
$(a+b)^p\equiv a^p+b^p$ mod $p^k$
\end{tabular}
\section{ Structure of the group $G_k$ of units }
\begin{lem}
~~~~$G_k ~\cong ~A'_k \times B'_k ~\cong ~C[p-1]~.~C[p^{k-1}]$
~~~~~~~~~~~~ and $M_k$ (mod $p^k$) has a sub-semigroup isomorphic to
$M_1$ (mod $p$).
\end{lem}
\begin{proof}
~Cyclic group $G_k$ of {\it units} $n$ ($n^i\equiv 1$ for some $i>0$)
has order $(p-1)p^{k-1}$, namely $p^{k}$ minus $p^{k-1}$ multiples
of $p$. Then $G_k=A'_k \times B'_k$, the direct product of two
relative prime cycles, with corresponding subgroups $A_k$ and $B_k$,
so that $G_k\equiv A_k.B_k$ where\\
{\bf extension group} $B_k=C[~p^{k-1}~]$ consists of all $p^{k-1}$
residues mod $p^k$ that are 1 mod $p$, \\
and ~{\bf core} $A_k=C[p-1]$, ~so $M_k$ contains sub-semigroup
$A_k \cup 0 \cong M_1$.
\end{proof}
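As a numerical sanity check of lemma 1.1 (our illustration, not part of the paper; parameters $p=5$, $k=3$ chosen for speed), the product structure $G_k\equiv A_k.B_k$ can be confirmed by brute force:

```python
p, k = 5, 3
pk = p**k
units = [n for n in range(1, pk) if n % p != 0]        # G_k
core = [n for n in units if pow(n, p, pk) == n]        # A_k: n^p = n mod p^k
ext = [n for n in units if n % p == 1]                 # B_k: n = 1 mod p
assert len(units) == (p - 1) * p**(k - 1)              # |G_k|
assert len(core) == p - 1 and len(ext) == p**(k - 1)   # |A_k|, |B_k|
# every unit factors uniquely as a.b with a in the core, b in the extension group
products = sorted((a * b) % pk for a in core for b in ext)
assert products == units
```

Since $|A_k|$ and $|B_k|$ are coprime, the $|A_k|\cdot|B_k|$ products are pairwise distinct, which is exactly what the last assertion tests.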
{\bf Core $A_k$}, as $p-1$ cycle mod $p^k$, is Fermat's Small Theorem
$n^{p}\equiv n$ mod $p$ extended to $k>$1 for $p$ residues (including 0),
to be denoted as $FST_k$.\\ Recall that $n^{p-1} \equiv 1$ mod $p$ for
$n \not\equiv 0$ mod $p$ ($FST$), then lem1.1 implies:
\begin{cor}~~With $|B|=p^{k-1}=q$ and $|A|=p-1$:\\ \hspace*{1cm}
Core $A_k=\{~n^q~\}$ mod $p^k~~(n=1.. p$-1) ~extends $FST$ for $k>$1,
~and: \\ \hspace*{1cm}
~~~~$B_k= \{n^{p-1}\}$ mod $p^k$ ~consists of all $p^{k-1}$
residues 1 mod $p$ in $G_k$.
\end{cor}
Subgroup $F_k \equiv \{n^p\}$ mod $p^k$ of all $p$-th power residues
in $G_k$, with $F_k \supseteq A_k$ (only $F_2 \equiv A_2$) and order
$|F_k|=|G_k|/p=(p-1)p^{k-2}$, consists of {\bf all} $p^{k-2}$
extensions mod $p^k$ of the $p-1$ ~~$p$-th power residues in $G_2$,
which has order $(p-1)p$. Consequently we have:
\begin{cor}
Each extension of $n^p$ mod $p^2$ ~(in $F_2$)
is a $p$-th power residue (in $F_k$)
\end{cor}
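A small check of cor1.2 (our sketch; $p=5$, $k=3$): the $p$-th power residues mod $p^3$ are exactly the $p^{k-2}$ extensions of the $p-1$ residues in $F_2$.

```python
p = 5
F2 = {pow(n, p, p**2) for n in range(1, p**2) if n % p}   # F_2
F3 = {pow(n, p, p**3) for n in range(1, p**3) if n % p}   # F_3
assert len(F2) == p - 1                                   # |F_2| = |G_2|/p
assert len(F3) == (p - 1) * p                             # |F_3| = |G_3|/p
# every extension mod p^3 of an element of F_2 lies in F_3
assert all(f + m * p**2 in F3 for f in F2 for m in range(p))
```

The counts match ($|F_3| = p\,|F_2|$), so each element of $F_2$ has exactly $p$ extensions in $F_3$.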
{\bf Core generation}:
~The $p-1$ residues $n^q$ mod $p^k ~(q=p^{k-1})$
define core $A_k$ for 0$<n<p$.\\Cores $A_k$ for successive $k$
are produced as the $p$-th power of each $n_0<p$ recursively:\\
~$(n_0)^p \equiv n_1, ~(n_1)^p \equiv n_2,~(n_2)^p \equiv n_3$, etc.,
where $n_i$ has $i$+1 digits. In more detail:
\begin{lem}.\\ \hspace*{.5cm}
The $p-1$ values $a_0<p$ define core $A_k$ by~
$(a_0)^{p^{k-1}}=a_0+\sum_{i=1}^{k-1}a_ip^i$ ~(digits $a_i<p$).
\end{lem}
\begin{proof}
Let $a=a_0+mp<p^2$ be in core $A_2$, so $a^p\equiv a$ mod $p^2$.
Then $a^p= (a_0+mp)^p=a_0^p+p.a_0^{p-1}.mp\equiv a_0^p+mp^2$ mod $p^3$,
using $FST$. Clearly the second core digit, of weight $p$, is not
found this way as function of $a_0$, but requires actual computation
(unless $a\equiv p \pm 1$ as in lem1.3-4). It depends on the $carries$
produced in computing the $p$-th power of $a_0$. Recursively, each
next core digit can be found by computing the $p$-th power of a
core $A_k$ residue with $k$+1 digit precision; here core $A_k$
remains fixed since $a^p\equiv a$ mod $p^k$.
\end{proof}
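The recursion in this proof is easy to run (our sketch, $p=7$): each $p$-th powering appends one base-$p$ core digit while fixing the digits already found, and the core mod $7^2$ of figure 1 comes out.

```python
p = 7
# core A_2 = {n^p mod p^2}; in base 7: 01 24 25 42 43 66, matching figure 1
core2 = sorted(pow(n, p, p**2) for n in range(1, p))
assert core2 == [1, 18, 19, 30, 31, 48]
# recursive generation: (n_0)^p = n_1, (n_1)^p = n_2, ... at growing precision
a = 3                                       # a_0 = 3
for kk in range(2, 6):
    prev, a = a, pow(a, p, p**kk)
    assert a % p**(kk - 1) == prev          # lower digits unchanged
    assert pow(a, p, p**kk) == a            # a lies in core A_kk
```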
Notice $(p^2 \pm 1)^p\equiv p^3 \pm 1$ mod $p^5$. Moreover, initial
$(p+1)^p\equiv p^2+1$ mod $p^3$ yields in general for $(p \pm 1)^{p^m}$
the next property:
\begin{lem} ~~$(p+1)^{p^m}\equiv p^{m+1}+1$ {\rm ~~mod} $p^{m+2}$\\
\hspace*{1cm}
and: ~~~$(p-1)^{p^m}\equiv p^{m+1}-1$ {\rm ~~mod} $p^{m+2}$
\end{lem}
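Lemma 1.3 is stated without proof; a direct numerical check (ours, $p=7$, small $m$) agrees with both congruences:

```python
p = 7
for m in range(4):
    mod = p**(m + 2)
    # (p+1)^(p^m) = p^(m+1) + 1  and  (p-1)^(p^m) = p^(m+1) - 1, both mod p^(m+2)
    assert pow(p + 1, p**m, mod) == p**(m + 1) + 1
    assert pow(p - 1, p**m, mod) == p**(m + 1) - 1
```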
\begin{lem}
~Extension group $B_k$ is generated by $p$+1 (mod $p^k$),
with $|B_k|=p^{k-1}$, \\ \hspace*{1cm} and each subgroup
$S \subseteq B_k$, ~$|S|=|B_k|/p^s$ has
sum $\sum S \equiv |S|$ {\rm ~mod} $p^k$.
\end{lem}
\begin{proof}
By lemma 1.3, $(p+1)^{p^m}\equiv p^{m+1}+1$ mod $p^{m+2}$, so the
period of $p+1$, the smallest $x$ with $(p+1)^x\equiv 1$ mod $p^k$,
requires $m+1=k$. So $m=k-1$, yielding period $p^{k-1}$.
No smaller exponent generates 1 mod $p^k$, since $|B_k|$ has only
divisors $p^s$.
$B_k$ consists of all $p^{k-1}$ residues which are 1 mod $p$.
The order of each subgroup $S \subset B_k$ must divide $|B_k|$,
so that $|S|=|B_k|/p^s$ ~($0 \leq s < k$) and~ $S=\{1+m.p^{s+1}\}
~(m=0~..~|S|-1)$.
Then ~$\sum S= |S|+p^{s+1}.|S|(|S|-1)/2$ ~mod $p^k$, ~where~
$p^{s+1}.|S|=p.|B_k|=p^k$,
so that~ $\sum S \equiv |S|= p^{k-1-s}$ mod $p^k$. ~~Hence no subgroup
of $B_k$ ~sums to 0 mod $p^k$.
\end{proof}
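Both claims of lemma 1.4 can be confirmed numerically (our parameters $p=5$, $k=3$): $p+1$ generates all residues 1 mod $p$, and every subgroup of $B_k$ sums to its own order mod $p^k$.

```python
p, k = 5, 3
pk = p**k
B = {pow(p + 1, j, pk) for j in range(p**(k - 1))}      # B_k = (p+1)*
assert B == {n for n in range(1, pk) if n % p == 1}     # all residues 1 mod p
for s in range(k):                                      # subgroups S of B_k
    size = p**(k - 1 - s)                               # |S| = |B_k| / p^s
    S = {pow(p + 1, j * p**s, pk) for j in range(size)}
    assert len(S) == size and sum(S) % pk == size       # sum(S) = |S| mod p^k
```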
\begin{cor}
~~For core $A_k \equiv g^*$: each unit $n \in G_k \equiv A_kB_k$ has the form:
\\ \hspace*{1cm} $n \equiv g^i(p+1)^j$ mod $p^k$ for a unique pair of
non-neg. exponents $i<|A_k|$ and $j<|B_k|$.
\end{cor}
Pair $(i,j)$ are the exponents in the core- and extension- component
of unit $n$.
\begin{thm}
Each subgroup $S \supset 1$ of core $A_k$ sums to ~0
{\rm ~mod} $p^k~~(k>$0).
\end{thm}
\begin{proof}
~For {\it even} $|S|$: $-1$ in $S$ implies pairwise zero-sums.
In general: $c.S=S$ for all $c$ in $S$, and $c \sum S =\sum S$,
so~ $S.x=x$, writing $x$ for $\sum S$. Now for any $g$ in $G_k$:
$|S.g|=|S|$, so that $|S.x|=1$ implies $x$ not in $G_k$, hence~
$x=g.p^e$ ~for some $g$ in $G_k$ and $0<e<k$, or $x=0 ~(e=k)$.
Then:
~~$S.x=S(g.p^e)=(S.g)p^e$ with $|S.g|=|S|$ if $e<k$.
~So~ $|S.x|=1$ ~yields~ $e=k$ ~and~ $x=\sum S=0$.
\end{proof}
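For $p=7$, $k=2$ (our example, matching figure 1), both the full core and its order-3 subgroup of cubic roots indeed sum to 0 mod $p^2$:

```python
p, k = 7, 2
pk = p**k
core = {pow(n, p, pk) for n in range(1, p)}        # A_2 = {01,24,25,42,43,66} base 7
assert sum(core) % pk == 0                         # the whole core sums to 0
cubic = {n for n in core if pow(n, 3, pk) == 1}    # subgroup C_3 of cubic roots
assert cubic == {1, 18, 30}                        # 01, 24, 42 in base 7
assert sum(cubic) % pk == 0                        # 1 + 18 + 30 = 49
```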
Consider the {\bf normation} of an additive equivalence $a+b \equiv c$
mod $p^k$ in units group $G_k$, by multiplying all terms with the
inverse of one of these terms, for instance to yield rhs $-1$:
{\bf (1)} ~~~1-complement form: ~~$a+b \equiv -1$ mod $p^k$ in $G_k$
~~~(digitwise sum $p-1$, no carry).
For instance the well known $p$-th power residue equivalence:~
~$x^p+y^p \equiv z^p$ ~in $F_k$ yields: \\[1ex]
{\bf (2)} ~~~~normal form:~~~~~~~ $a^p+b^p \equiv -1$ mod $p^k$ ~in $G_k$,
with a special case (in core $A_k$) considered next.
\begin{figure}
\begin{picture}(100,230)(-180,0)
\setlength{\unitlength}{1.5mm}
\put( 0,50){Core ~A = (43)* = 43 42 66 24 25 01 ~(mod $7^2$)}
\put( 5,45){ Cubic rootpair: 42 + 24 = 66 = $-1$ }
\put(-30,35){\shortstack{
Complement $C(n)=-n$ \\
Inverse ~~~$I(n)=n^{-1}$\\
Successor $S(n)=n+1$ }}
\put( 4,21){\line(1,0){35}}
\put(21,12){\line(0,1){17}}
\put(13, 5){\vector(1, 2){17}}
\put(12,39){\vector(1,-2){18}}
\put(11,39){\vector(0,-1){35}}
\put(31, 4){\vector(0, 1){35}}
\put(12,39){\line(2,-1){30}}
\put(12, 4){\line(2, 1){30}}
\put( 9,30){I~~~~~~C~~~~~~~~C~~~~~~~~I}
\put(-30,10){\shortstack{
{\large Symmetries:}\\
$-n$ (diagonal) ~C\\
$n^{-1}$ (vertical) ~I\\
$-n^{-1}$ (horizontal) IC=CI }} \put(4,22){$-1$}
\put(40,20){\framebox(3.5,3.5){01}} \put(18,40){+1}
\put(10,39){\framebox(3.5,3.5){42}} \put(14,39){- - - - -$>$- - - -}
\put(30,39){\framebox(3.5,3.5){43}} \put(35,40){$42+1=-(42)^{-1}$}
\put( 0,20){\framebox(3.5,3.5){66}} \put(40,35){$42^3$=1 mod $7^2$}
\put(10, 0){\framebox(3.5,3.5){24}} \put(14, 3){- - - - -$>$- - - -}
\put(30, 0){\framebox(3.5,3.5){25}} \put(20, 0){+1}
\put( 7,40){$a$} \put(33,37){$-a^{-1}$}
\put( 5, 1){$a^{-1}$} \put(35, 1){$-a$}
\end{picture}
\caption{{ ~~Core $A_2$ mod $7^2$ ($C_6$),
~~~Cubic roots $C_3$=\{42,~24,~01\}}}
\label{fig.1}
\end{figure}
\section{The cubic root solution in core, ~and core symmetries}
\begin{lem}
The cubic roots of 1 mod $p^k$ ($p \equiv 1$ mod 6) ~are $p$-th
power residues in core $A_k$,
\\ \hspace*{1cm}
and for $a^3 \equiv 1 ~(a \not\equiv 1):~ a+a^{-1} \equiv -1$ mod $p^{k>1}$
has no 0-extension to integers.
\end{lem}
\begin{proof}
~If $p \equiv 1$ mod 6 then $3|(p-1)$ implies a core-subgroup
$S=\{a^2,a,1\}$ of three $p$-th powers: the cubic roots of 1
($a^3 \equiv 1$) in $G_k$ that sum to 0 mod $p^k$ (thm1.1). Now
$a^3-1=(a-1)(a^2+a+1)$, so if $a \not\equiv 1$ then $a^2+a+1 \equiv 0$,
hence $a+a^{-1} \equiv -1$ solves (1): a {\bf root-pair} of inverses,
with $a^2 \equiv a^{-1}$. ~$S$ in core consists of $p$-th power residues
with $n^{p}\equiv n$ mod $p^{k}$. Write $b$ for $a^{-1}$, then
$a^p+b^p \equiv -1$ {\bf and} $a+b \equiv -1$, so that $a^p+b^p\equiv (a+b)^p$
mod $p^k$. ~Notice the {\bf \it Exponent Distributes over a Sum ($EDS$)},
implying inequality $A^p+B^p<(A+B)^p$ for the corresponding
0-extensions $A,~B,~A+B$ of core residues $a,~b,~a+b$ mod $p^k$.
\end{proof}
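The cubic rootpair and the $EDS$ property can be checked concretely (our pick $p=7$, $k=3$; the search happens to return $a=18$, with $b=a^{-1}=324$):

```python
p, k = 7, 3
pk = p**k
a = next(n for n in range(2, pk) if pow(n, 3, pk) == 1)   # nontrivial cubic root
b = pow(a, -1, pk)                                        # b = a^(-1) = a^2
assert b == pow(a, 2, pk)
assert (a + b) % pk == pk - 1                             # a + a^(-1) = -1 mod p^k
assert pow(a, p, pk) == a and pow(b, p, pk) == b          # both lie in core A_k
# EDS: the exponent p distributes over the sum mod p^k ...
assert (pow(a, p, pk) + pow(b, p, pk)) % pk == pow(a + b, p, pk)
# ... yet the 0-extensions obey the strict inequality over the integers
assert a**p + b**p < (a + b)**p
```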
\begin{enumerate} {\small
\item
{\bf ~Display} $G_k\equiv g^*$ by equidistant points on a {\bf unit circle}
in the plane, with 1 and $-1$ on the horizontal axis (fig1,~2).
The successive powers $g^i$ of generator $g$ produce $|G_k|$ points
($k$-digit residues) counter- clockwise. In this circle each inverse pair
$(a,a^{-1})$ is connected $vertically$, complements $(a,-a)~diagonally$,
and pairs $(a,-a^{-1})~horizontally$, representing functions $I, C$ and
$IC=CI$ resp. (thm3.2). Figures 1, 2 depict for $p$=7, 5 these symmetries
of residue arithmetic.
\item
{\bf Scaling} any equation, such as $a+1\equiv -b^{-1}$, by a factor
$s\equiv g^i \in G_k\equiv g^*$, yields $s(a+1)\equiv -s/b$ mod $p^k$,
represented by a {\bf rotation} counter clockwise over $i$ positions.
}
\end{enumerate}
\subsection{Core increment symmetry at double precision, and asymmetry beyond}
Consider {\bf core function} $A_k(n)=n^{|B_k|} ~(~|B_k|=p^{k-1} ~cor1.1)$
as integer polynomial of odd degree, and {\bf core increment}
function $d_k(n)=A_k(n+1)-A_k(n)$ of even degree one less than $A_k(n)$.
Computing $A_k(n)$ up to precision $2k+1$ (base $p$) shows $d_k(n)$ mod
$p^{2k+1}$ to have a ``double precision'' symmetry for 1-complements
$m+n=p-1$. Only $n<p$ need be considered due to periodicity $p$.\\
This naturally reflects in the additive properties of core $A_k$,
as in table 1 for $p$=7 and $k$=1, with $n<p$ in $A_1$ by $FST$:
symmetry of core increment $d_1(n)$ mod $p^3$ but not so mod $p^4$.
Due to $A_k(n) \equiv n$ mod $p ~(FST)$ we have $d_k(n) \equiv 1$ mod $p$,
so $d_k(n)$ is referred to as core ``increment'', although in general
$d_k(n) \not\equiv 1$ mod $p^{k>1}$.
\begin{lem} ( Core increment at {\bf double precision} )~~
For $q=|B_k|=p^{k-1}$ and $k>0$:\\
(a)~~ Core function $A_k(n) \equiv n^q$ mod $p^k$ and increment
$d_k(n) \equiv A_k(n+1)-A_k(n)$ have {\bf period $p$}\\
(b)~~ for $m+n=p ~~~~:~ A_k(m) \equiv -A_k(n)$ mod $p^k$ {\rm ~~~(odd symmetry)}\\
(c)~~ for $m+n=p-1:~ d_k(m) \equiv d_k(n)$ mod $p^{2k+1}$
and $\not\equiv$ mod $p^{2(k+1)}$ \\ \hspace*{1cm}
{\rm (``double precision'' ~even symmetry and -inequivalence respectively).}
\end{lem}
\begin{proof}
{\bf(a)} ~Core function $A_k(n)\equiv n^q$ mod $p^k ~(q=p^{k-1},~n \not\equiv 0$
mod $p$) has just $p-1$ distinct residues with $(n^q)^p \equiv n^q$ mod $p^k$,
and $A_k(n) \equiv n$ mod $p$ ($FST$). Including (non-core) $A_k(0) \equiv 0$ makes
$A_k(n)$ mod $p^k$ periodic in $n$ with {\bf period $p$} :
~$A_k(n+p) \equiv A_k(n)$ mod $p^k$, so $n<p$ suffices for core analysis.
Increment $d_k(n)$, as difference of two functions of period $p$,
also has period $p$.
{\bf(b)} ~$A_k(n)$ is a polynomial of odd degree with
{\bf odd symmetry}~ $A_k(-n) \equiv (-n)^q \equiv -n^q \equiv -A_k(n)$.
{\bf(c)}~ Difference polynomial $d_k(n)$ is of even degree $q-1$ with
leading term $q.n^{q-1}$, and residues 1 mod $p$ in extension group
$B_k$. The even degree of $d_k(n)$ results in {\bf even symmetry},
because \\[1ex] \hspace*{1cm}
$d_k(n-1) = n^q-(n-1)^q = -(-n)^q+(-n+1)^q = d_k(-n)$.
Denote $q=p^{k-1}$, then for ~$m+n=p-1$ follows:~
$d_k(m)=A_k(m+1)-A_k(m)=(p-n)^q-m^q$
and $d_k(n)=A_k(n+1)-A_k(n)=(p-m)^q-n^q$,
yielding:~ $d_k(m)-d_k(n) = [~(p-n)^q+n^q~] -[~(p-m)^q+m^q~]$.
By binomial expansion and $n^{q-1} \equiv m^{q-1} \equiv 1$ mod $p^k$
in core $A_k$:~ $d_k(m)-d_k(n) \equiv 0$ mod $p^{2k+1}$. ~With~
$n \not\equiv m$ mod $p$:~ $m^{q-2} \equiv m^{-1} \not\equiv n^{-1} \equiv n^{q-2}$
mod $p$, ~causing~ $d_k(m) \not\equiv d_k(n)$ mod $p^{2k+2}$.
\end{proof}
Table 1 ~($p$=7) shows, for $\{n,m\}$ in core $A_1 ~(FST)$ with
$n+m \equiv -1$ mod $p$, the core increment symmetry mod $p^3$ and difference
mod $p^4 ~(k$=1). Core residue $024^7 \equiv 024$ in $A_3 ~(k$=3) has core
increment 1 mod $p^7$, but not 1 mod $p^8$, and similarly at the
1-complementary cubic root $642^7 \equiv 642$.
\subsection{ Another derivation of the cubic root of 1 mod $p^k$ }
The cubic root solution was derived, for 3 dividing $p-1$, via
subgroup $S \subset A_k$ of order 3 (thm1.1). For completeness a
derivation using elementary arithmetic follows.
Notice ~$a+b \equiv -1$ ~~to yield ~~~$a^2+b^2 \equiv (a+b)^2-2ab \equiv 1-2ab$,
~and:\\ \hspace*{1cm}
$a^3+b^3 \equiv (a+b)^3-3(a+b)ab \equiv -1+3ab$. ~~The combined sum is $ab-1$:
\\[1ex] \hspace*{.5cm}
$\sum_{i=1}^3(a^i+b^i) \equiv \sum_{i=1}^3 a^i + \sum_{i=1}^3 b^i \equiv ab-1$
~mod $p^k$. ~~Find $a,b$ for $ab \equiv 1$ mod $p^k$.
Since $n^2+n+1=(n^3-1)/(n-1)=0$ for $n^3 \equiv 1$ ~($n \neq 1$), we have
$ab \equiv 1$ mod $p^{k>0}$ if $a^3 \equiv b^3 \equiv 1$ mod $p^k$, with 3 dividing
$p-1~(p \equiv 1$ mod 6). Cubic roots $a^3 \equiv 1$ mod $p^k$ exist for any
prime $p \equiv 1$ mod 6 at any precision $k>0$.
In the next section other solutions of $\sum_{i=1}^3 a^i +
\sum_{i=1}^3 b^i \equiv 0$ mod $p^k$ will be shown, depending not
only on $p$ but also on $k$, with $ab \equiv 1$ mod $p^2$
but $ab \not\equiv 1$ mod $p^3$, for some primes $p \geq 59$.
\section{Triplets, and the Core}
Any solution of (2): ~$a^p+b^p=-1$ mod $p^k$ has at least one term
($-1$) in core, and at most all three terms in core $A_k$.
To characterize such solution by the number of terms in core $A_k$,
quadratic analysis (mod $p^3$) is essential since proper inclusion
$A_k \subset F_k$ requires $k \geq 3$.
The cubic root solution, with one inverse pair (lem2.1), has all
three terms in core $A_{k>1}$. However, a computer search (table 2)
does reveal another type of solution of (2) mod $p^2$ for some
$p \geq 59$: three inverse pairs of $p$-th power residues,
denoted triplet$^p$, ~in core $A_2$.
\begin{thm}
A {\bf triplet$^p$} of three inverse-pairs of $p$-th power residues
in $F_k$ satisfies:
\\ \hspace*{1in} {\bf (3a)}~~~~~~$a+b^{-1} \equiv -1$ ~(mod $p^k$)
\\ \hspace*{1in} {\bf (3b)}~~~~~~$b+c^{-1} \equiv -1$ ~~~,,
\\ \hspace*{1in} {\bf (3c)}~~~~~~$c+a^{-1} \equiv -1$ ~~~,,
~~~with $abc \equiv 1$ mod $p^k$.
\end{thm}
\begin{proof} ~Multiplying by $b,~c,~a$ resp. maps
(3a) to (3b) if $ab \equiv c^{-1}$, and
(3b) to (3c) if $bc \equiv a^{-1}$, and
(3c) to (3a) if $ac \equiv b^{-1}$. ~All three conditions imply
$abc \equiv 1$ mod $p^k$. \end{proof}
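As an illustration only (our search sketch; table 2 reports triplets at $p$=59), one can enumerate triplet$^p$ orbits mod $p^2$ by iterating the dfs map $SCI(n)=-(1+n)^{-1}$ over the $p$-th power residues; each triplet contributes the orbit $\{a,b,c\}$ and the orbit of their inverses:

```python
p = 59
pk = p * p
F = {pow(n, p, pk) for n in range(1, pk) if n % p}  # p-th power residues mod p^2

def g(n):                 # the dfs map SCI: n -> -(1+n)^(-1) mod p^2
    return -pow(1 + n, -1, pk) % pk

found = set()
for a in F:
    if (a + 1) % p == 0:  # 1+a not invertible, g undefined
        continue
    b = g(a)
    if b == a or b not in F:
        continue          # b == a would be a cubic-root fixed point
    c = g(b)              # 1+b is automatically a unit here
    if c not in F:
        continue
    # verify the three equivalences (3a)-(3c) and the product condition
    assert (a + pow(b, -1, pk)) % pk == pk - 1
    assert (b + pow(c, -1, pk)) % pk == pk - 1
    assert (c + pow(a, -1, pk)) % pk == pk - 1
    assert a * b * c % pk == 1
    found.add(frozenset((a, b, c)))

assert found              # consistent with table 2: triplets occur at p = 59
```

Per table 2, the same loop comes up empty for most primes below 200; existence of a triplet$^p$ depends on the prime.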
Table 2 shows all normed solutions of (2) mod $p^2$ for $p<200$, with
triplets at $p$= 59, 79, 83, 179, 193. The cubic roots, indicated by
$C_3$, occur only at $p \equiv 1$ mod 6, while a triplet$^p$ can occur for
either prime type $\pm 1$ mod 6. More than one triplet$^p$ can occur
per prime (two at $p$=59, three at 1093, four at 36847: each first
occurrence of such multiple triplet$^p$). There are primes for which
both rootforms occur, e.g. $p=79$ has a cubic root solution as well
as a triplet$^p$.
The question is if such {\bf loop structure} of inverse-pairs can have
a length beyond 3. Consider the successor $S(n)=n$+1 and the two
arithmetic symmetries, complement $C(n)=-n$ and inverse $I(n)=n^{-1}$,
as {\bf functions}, which compose associatively.\\ Then looplength $>$3
is impossible in arithmetic ring $Z_k(+,~.)$ mod $p^k$, seen as follows.
\begin{thm} (two basic solution types)\\ \hspace*{1cm}
Each normed solution of (2) is (an extension of) a triplet$^p$
or an inverse- pair.
\equivnd{thm}
\begin{proof}
Assume $r$ equations $1-n_i^{-1}\equiv n_{i+1}$ form a loop of length $r$
(indices mod $r$). Consider function $ICS(n)\equiv 1-n^{-1}$, composed of
the three elementary functions: Inverse, Complement and Successor, in
that sequence.~
Let $E(n)\equiv n$ be the identity function, and $n \neq 0,1,-1$ to prevent
division by zero, then under function composition the {\bf third
iteration} $[ICS]^3=E$, since $[ICS]^2(n)\equiv -1/(n-1) ~\rightarrow
~[ICS]^3(n)\equiv n$ (repeat substituting $1-n^{-1}$ for $n$). Since $C$
and $I$ commute, $IC$=$CI$, the 3! = 6 permutations of \{$I,C,S$\}
yield only four distinct dual-folded-successor {\it "dfs"} functions:
\hspace*{.5cm} $ICS(n)=1-n^{-1},~SCI(n)=-(1+n)^{-1},
~CSI(n)=(1-n)^{-1}, ~ISC(n)=-(1+n^{-1})$.
By inspection each of these has $[dfs]^3=E$, referred to as
{\bf loop length} 3. For a cubic root pair {\it dfs=E}, and 2-loops
do not occur since there are no duplets (note 3.2 below). Hence
solutions of (2) have only {\it dfs} function loops of length
1 and 3: inverse pair and triplet.
\end{proof}
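The loop-length-3 property can also be checked exhaustively for small moduli. A Python sketch (our own illustration; it restricts to $n$ with $n$ and $n\pm1$ invertible, so every intermediate value of each iteration stays a unit):

```python
def dfs_period_is_three(p, k):
    """Verify [dfs]^3 = E mod p^k for all four dual-folded-successor
    compositions of I, C, S, over suitable units n."""
    m = p ** k
    inv = lambda x: pow(x, -1, m)
    maps = (
        lambda n: (1 - inv(n)) % m,      # ICS(n) = 1 - n^{-1}
        lambda n: (-inv(1 + n)) % m,     # SCI(n) = -(1+n)^{-1}
        lambda n: inv(1 - n),            # CSI(n) = (1-n)^{-1}
        lambda n: (-(1 + inv(n))) % m,   # ISC(n) = -(1+n^{-1})
    )
    for n in range(2, m):
        # n, n-1, n+1 invertible keeps each iterate inside the units
        if n % p and (n - 1) % p and (n + 1) % p:
            for f in maps:
                if f(f(f(n))) != n:
                    return False
    return True
```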
A special triplet occurs if one of $a,b,c$ equals 1, say $a \equiv 1$.
Then $bc \equiv 1$ since $abc \equiv 1$, while (3a) and (3c) yield
$b^{-1} \equiv c \equiv -2$, so $b \equiv c^{-1} \equiv -2^{-1}$. Although triplet
$(a,b,c) \equiv (1,-2,-2^{-1})$ satisfies conditions (3), 2 is not in
core $A_{k>2}$, and by symmetry $a,b,c \not\equiv 1$ for any triplet$^p$
of form (3).~ If $2^p \not\equiv 2$ mod $p^2$ then 2 is not a $p$-th power
residue, so triplet $(1,-2,-2^{-1})$ is not a triplet$^p$ for such
primes (all up to at least $10^9$ except 1093, 3511).
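The exceptional primes mentioned here are the Wieferich primes, i.e.\ those with $2^p \equiv 2$ mod $p^2$. A small-range check in Python (our own sketch; trial division suffices at this scale) confirms that below $10^4$ only 1093 and 3511 qualify:

```python
def is_prime(n):
    """Trial-division primality test (adequate for this small range)."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Primes p with 2^p = 2 (mod p^2): the Wieferich condition
wieferich = [p for p in range(3, 10_000)
             if is_prime(p) and pow(2, p, p * p) == 2]
```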
\begin{figure}
\begin{picture}(100,230)(-120,0)
\setlength{\unitlength}{1.5mm}
\put( 5,49){Core ~A ~~= ~~33 44 12 01 ~(mod $5^2$) }
\put(11,48){\framebox(3.5,3.5){}}
\put( 5,45){Extn ~B ~~= 11 21 31 41 01 ~~~~~~~~G = A.B}
\put(13,46){\circle{4}}
\put(14,21){\line(1,0){14}}
\put(21,16){\line(0,1){10}}
\put(45,30){\vector(-1,2){3}}
\put(40,20){\framebox(3.5,3.5){01}} \put(42,22){\circle{5}}
\put(39,27){03 = g}
\put(37,33){14}
\put(33,37){02}
\put(27,39){11} \put(28,40){\circle{5}}
\put(20,39){\framebox(3.5,3.5){33}} \put(38,40){$33= -33^{-1}$}
\put(13,39){04}
\put( 7,37){22}
\put( 3,33){21} \put( 4,34){\circle{5}}
\put( 1,27){13} \put(14,27){+1~v~~~~~~~~~~~~~+1}
\put( 0,20){\framebox(3.5,3.5){44}} \put(5,20){-1}
\put( 1,13){42} \put(4,14){- - - -$>$ - - - - - triplet - - - - - -}
\put( 3, 7){31} \put(4, 8){\circle{5}}
\put( 7, 3){43}
\put(13, 1){34} \put(15,3){- -$>$- - - -} \put(15,3){\line(1,6){6}}
\put(20,-1){\framebox(3.5,3.5){12}}
\put(50,5){\shortstack{Triplet :\\(33,~34)\\(41,~42)\\(32,~33)}}
\put(45,0){33.41.32 = 01}
\put(27, 1){41} \put(28, 2){\circle{5}}
\put(33, 3){23} \put(27, 5){\vector(-3,1){23}}
\put(37, 7){24}
\put(39,13){32} \put(39,16){\vector(-3,4){17}}
\end{picture}
\caption{{ ~~G = A.B = $g^*$ ~(mod $5^2$), ~~~~~Cycle in the plane}}
\label{fig.2}
\end{figure}
\subsection{ A triplet for each $n$ in $G_k$ }
Notice that the proof of thm3.2 does not require $p$-th power residues.
So {\bf any} $n \in G_k$ generates a triplet by iteration of one of
the four {\it dfs} functions (thm3.2), yielding the main triplet
structure of $G_k$ :
\begin{cor}
~{\rm Each} $n$ in $G_k ~(k>$0) generates a triplet of ~{\rm three}
inverse pairs,\\
\hspace*{1cm} except if ~$n^3 \equiv 1$ and $n \not\equiv 1$ mod $p^k
~(p \equiv 1$ mod 6), which involves {\rm one} inverse pair.
\end{cor}
Starting at $n_0 \in G_k$ six triplet residues are generated upon
iteration of e.g. $SCI(n)$: $n_{i+1}\equiv -(n_i+1)^{-1}$ (indices
mod 3), or another {\it dfs} function to prevent a non-invertible
residue. Fewer than 6 residues are involved if 3 or 4 divides $p-1$:
If $3|(p-1)$ then a cubic root of 1 ($a^3 \equiv 1, ~a \not\equiv 1$)
generates just 3 residues: ~$a+1\equiv -a^{-1}$; \\
--- together with its complement this yields a subgroup
$(a+1)^*\equiv C_6$ ~(fig.1, $p$=7)\\
If 4 divides $p-1$ then an $x$ on the vertical axis has $x^2 \equiv -1$
so $x \equiv -x^{-1}$,\\
--- so the 3 inverse pairs then involve only five residues
~(fig.2: $p$=5).
\begin{enumerate} {\small
\item
It is no coincidence that the period 3 of each {\it dfs} composition
[~of $-n,~n^{-1},~n$+1; \\ ~~~e.g: $CIS(n) \equiv 1-n^{-1}$~] exceeds the
number of symmetries of finite ring $Z_k(+,~.)$ by one.
\item
{\bf No duplet} occurs: multiply $a+b^{-1} \equiv -1,~b+a^{-1} \equiv -1$
by $b$ resp. $a$, then $ab+1 \equiv -b$ and $ab+1 \equiv -a$, ~so that ~$-b \equiv -a$
and $a \equiv b$.
\item
{\bf Basic triplet} mod $3^2: G_2 \equiv 2^* \equiv \{2,4,8,7,5,1\}$ is a
6-cycle of residues mod 9. ~Iteration: ~$SCI(1)^*: -(1+1)^{-1} \equiv 4,~
-(4+1)^{-1} \equiv 7,~ -(7+1)^{-1} \equiv 1$, and $abc \equiv 1.4.7 \equiv 1$ mod 9. }
\end{enumerate}
\subsection{ The $EDS$ argument extended to non-core triplets }
The $EDS$ argument for the cubic root solution $CR$ (lem2.1), with all
three terms in core, also holds for any triplet$^p$ mod $p^2$: since
$A_2 \equiv F_2$ mod $p^2$, all three terms are in core for some linear
transform (5). Then for each of the three equivalences (3a-c) holds
the $EDS$ property: $(x+y)^p \equiv x^p+y^p$, and thus no finite (equality
preserving) extension exists, yielding inequality for the corresponding
integers for all $k>$1, to be shown next. A cubic root solution is a
special triplet$^p$ for $p \equiv 1$ mod 6, with $a \equiv b \equiv c$ in (3a-c).
Denote the $p-1$ core elements as residues of integer function
~$A(n)=n^{|B|}, ~(0<n<p)$, then by freedom of $p$-th power extension
beyond mod $p^2$ (cor1.4) choose, for any $k>2$ :
{\bf (4)} ~Core increment form:~ $A(n+1)-A(n) \equiv (r_n)^p$ mod $p^k$,
~~~~~~~~ with $(r_n)^p \equiv r_n.p^2+1 ~(r_n>$0),
~hence $(r_n)^p \equiv 1$ mod $p^2$, ~but ~$\not\equiv 1$ mod $p^3$ ~in general.
This rootform of triplets, with two terms in core, is useful for the
additive analysis of subgroup $F_k$ of $p$-th power residues mod $p^k$
(re: the known Fermat's Last Theorem $FLT$ case1: residues coprime
to $p$ - to be detailed in the next section).
Any assumed $FLT~case_1$ solution (5) can be transformed into form (4) in
two steps that preserve the assumed $FLT$ equality for integers $<p^{kp}$
in full $p$-th power precision $kp$ where $x,y<p^k$, or $(k+1)p$
~in case $p^k<x+y<p^{k+1}$ (one carry).\\
Namely first $scaling$ by an integer $p$-th power factor $s^p$ that is
1 mod $p^2$ (so $s \equiv 1$ mod $p$), to yield as one lefthand term the core
residue $A(n+1)$ mod $p^k$. And secondly a $translation$ by an additive
integer term $t$ which is 0 mod $p^2$ applied to both sides, resulting
in the other lefthand term $-A(n)$ mod $p^k$, preserving the assumed
integer equality (unit $x^p$ has inverse $x^{-p}$ in $G_k$). Without
loss assume the normed form with $z^p \equiv 1$ mod $p^2$, then such
{\bf linear transformation} ($s,t$) yields:
{\bf (5)}~~~~~~~~~~~
$x^p+y^p=z^p ~~\longleftrightarrow~~ (sx)^p+(sy)^p+t=(sz)^p+t$ ~~[ integers ],
~~~~~~~~ with~ $s^p \equiv A(n+1)/x^p, ~~~(sy)^p+t \equiv -A(n)$ ~mod $p^k$, ~so:
{\bf (5')} \hspace{3cm} $A(n+1)-A(n) \equiv (sz)^p+t$ ~mod $p^k$.
With $s^p \equiv z^p \equiv 1$ and $t \equiv 0$ mod $p^2$ this yields an equivalence
which is 1 mod $p^2$, hence a $p$-th power residue, with two of the three
terms in core. Such core increment form (4),(5') will be shown to have no
(equality preserving) finite extension, of all residues involved, to $p$-th
power integers, so the assumed integer $FLT~case_1$ equality cannot exist.
\begin{lem}
$p$-th powers of a 0-extended triplet$^p$ equivalence (mod $p^{k>1}$)
yield integer inequality.
\end{lem}
\begin{proof}
In a triplet for some prime $p>2$ the core increment form (4) holds
for three distinct values of $n<p$, where scaling by respective factors
$-(r_n)^{-p}$ in $G_k$ mod $p^k$ returns 1-complement form (2).
Consider each triplet equivalence separately, and for a simple notation
let $r$ be any of the three $r_n$, with successive core residues
$A(n+1) \equiv x^p \equiv x, ~-A(n) \equiv y^p \equiv y$ mod $p^k$.
Then~ $x^p+y^p \equiv x+y \equiv r^p$ mod $p^k$, where $r^p \equiv 1$ mod $p^2$, has
both summands in core, but right hand side $r^p \not\equiv 1$ mod $p^{k>2}$ is
not in core, with deviation $d \equiv r-r^p \not\equiv 0$ mod $p^k$.\\
Hence~ $r \equiv r^p+d \equiv (x+y)+d$ mod $p^k$ ~(with $d \equiv 0$ mod $p^k$
in the cubic root case), and~ $x^p+y^p \equiv (x+y+d)^p$ mod $p^k$.
This equivalence has no finite (equality preserving) 0-extension to
integer $p$-th powers since $X^p+Y^p<(X+Y+D)^p$, ~so the assumed $FLT$
case$_1$ solution cannot exist.
\end{proof}
For $p$=7 the cubic roots are $\{42,24,01\}$ mod $7^2$ (base 7).
In full 14 digits: $42^7+24^7=01424062500666$ while $66^7=60262046400666$,
which are equivalent mod $7^5$ but differ mod $7^6$.
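In decimal, $42_7 = 30$, $24_7 = 18$ and $66_7 = 48$, so the 14-digit example can be checked directly (a sketch of ours, not part of the original text):

```python
# Cubic root pair mod 7^2: 42_7 = 30 and 24_7 = 18, with 30*18 = 1
# and 30 + 18 = -1 (mod 49); 66_7 = 48 is the 0-extension of z = -1.
x, y, z, p = 30, 18, 48, 7
assert (x * y) % 49 == 1 and pow(x, 3, 49) == 1   # inverse pair of cubic roots
assert (x + y) % 49 == 48                         # x + y = -1 mod p^2

lhs, rhs = x ** p + y ** p, z ** p
assert lhs % 7 ** 5 == rhs % 7 ** 5               # equivalent mod 7^5 ...
assert lhs % 7 ** 6 != rhs % 7 ** 6               # ... but they differ mod 7^6
```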
More specifically, linear transform (5) adjoins to a $FLT$ case$_1$
solution mod $p^2$ a solution with two adjacent core residues
mod $p^k$ ~(5') for any precision $k>1$, while preserving the
assumed integer $FLT$ case$_1$ equality. Without loss one can
assume scalefactor $s<p^k$ and shift term $t<p^{2k}$, yielding
double precision integer operands $\{sx,sy,sz\} < p^{2k}$, with
an (assumed) $p$-th power equality of terms $<p^{2kp}$.
Although equivalence mod $p^{2k+1}$ can hold by proper choice
of linear transform $(s,t)$, ~$inequivalence$ at base $p$
~triple precision~ $3k+1$ ~follows by:
\begin{lem} ~(~triple precision inequality ~): \\ \hspace*{5mm}
Any extension $(X,Y,Z)$ of $(x,y,z)$ in~ $x^p+y^p \equiv z^p$ mod $p^{k>1}$
($FLT_k$ case$_1$) yields an integer $p$-th power inequality
(of terms $<p^{pk}$), with in fact inequivalence $X^p+Y^p \not\equiv Z^p$
mod $p^{3k+1}$.
\end{lem}
\begin{proof}
Let $X=up^k+x,~ Y=vp^k+y,~ Z=wp^k+z$ extend the residues $x,y,z<p^k$,
~such that $X,Y$ mod $p^{k+1}$ are not both in core $A_{k+1}$.
So the extensions do not extend core precision $k$, and without
loss take $u,v,w<p^k$, due to a scalefactor $s<p^k$ in (5). Write
$h=(p-1)/2$, ~then binomial expansion upto quadratic terms yields:
~~~~~~~~ $X^p \equiv u^2x^{p-2}h~p^{2k+1}+ux^{p-1}p^{k+1}+x^p$ ~~mod $p^{3k+1}$,
~and similarly:
~~~~~~~~ $Y^p \equiv v^2y^{p-2}h~p^{2k+1}+vy^{p-1}p^{k+1}+y^p$ ~~mod $p^{3k+1}$,
~and:
~~~~~~~~ $Z^p \equiv w^2z^{p-2}h~p^{2k+1}+wz^{p-1}p^{k+1}+z^p$ ~~mod $p^{3k+1}$,
where: $x^p+y^p \equiv x+y \equiv z^p$ ~mod $p^k$,
~and~ $x^{p-1} \equiv y^{p-1} \equiv 1$ ~mod $p^k$, but not so mod $p^{k+1}$:
\\ ~$u,v$ ~are such that not both $X,Y$ are in core $A_{k+1}$, hence
core precision $k$ is not increased.
By lemma 3.1 the 0-extension of $x,y,z$ (so $u=v=w=0$) does not
yield the required equality $X^p+Y^p=Z^p$. To find for which
maximum precision equivalence $can$ hold, choose $u,v,w$ so that:
~~$(u+v)p^{k+1}+x^p+y^p \equiv wz^{p-1}p^{k+1}+z^p$ mod $p^{2k+1}$ ...[*]..
~yielding~ $X^p+Y^p \equiv Z^p$ mod $p^{2k+1}$.
A cubic root solution has also $z^p \equiv z$ in core $A_k$, so
$z^{p-1} \equiv 1$ mod $p^k$, then $w=u+v$ with $w^2>u^2+v^2$ would
require $x^p+y^p \equiv z^p$ mod $p^{2k+1}$, readily verified for $k$=2
and any prime $p>2$. \\[1ex] Such extension [*] implies inequivalence
$X^p+Y^p \not\equiv Z^p$ mod $p^{3k+1}$ for non-zero extensions $u,v,w$,
because $u+v=w$ together with $u^2+v^2=w^2=(u+v)^2$ yields $uv=0$.
So any (zero- or nonzero-) extension yields inequivalence mod $p^{3k+1}$.
\end{proof}
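The quadratic truncation of the binomial expansion used above can be verified numerically. The sketch below (ours; it takes $p \geq 5$ so that the cubic term $\binom{p}{3}(up^k)^3$ is divisible by $p^{3k+1}$) checks the stated congruence for $X^p$ over all units $x$ and all extension digits $u$:

```python
def expansion_holds(p, k):
    """Check X^p = u^2 x^(p-2) h p^(2k+1) + u x^(p-1) p^(k+1) + x^p
    (mod p^(3k+1)) for X = u*p^k + x and h = (p-1)/2."""
    h = (p - 1) // 2
    M = p ** (3 * k + 1)
    for x in range(1, p ** k):
        if x % p == 0:
            continue                  # x must be a unit
        for u in range(p):
            X = u * p ** k + x
            rhs = (u * u * pow(x, p - 2, M) * h * p ** (2 * k + 1)
                   + u * pow(x, p - 1, M) * p ** (k + 1)
                   + pow(x, p, M)) % M
            if pow(X, p, M) != rhs:
                return False
    return True
```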
\section{Residue triplets and Fermat's integer powersum inequality}
Core $A_k$ as $FST$ extension, the additive zero-sum property
of its subgroups (thm1.1), and the triplet structure of units
group $G_k$, allow a direct approach to Fermat's Last Theorem:
{\bf (6)}~~~~~
$x^p+y^p = z^p$ (prime $p>2$) ~has no solution for positive
integers $x,~y,~z$\\ \hspace*{2cm}
with ~~case$_1$ : ~$xyz \not\equiv 0$ mod $p$,
and ~~case$_2$ : ~$p$ divides one of $x,y,z$.
Usually (6) mentions exponent $n>2$, but it suffices to show
inequality for primes $p>2$ (plus the classical case $n=4$), because
for composite exponent $m=p.q$ holds $a^{pq}=(a^p)^q= (a^q)^p$. If $p$ divides two terms then it
also divides the third, and all terms can be divided by $p^p$.
So in $case$ 2:~ $p$ divides just one term.
A finite integer $FLT$ solution of (6) has three $p$-th powers $<p^k$
for some finite fixed $k$, so occurs in $Z_k$, yet with no $carry$
beyond $p^{k-1}$, and (6) is the 0-extension of this solution mod
$p^k$. Each residue $n$ mod $p^k$ is represented uniquely by $k$
digits, and is the product of a $j$-digit number as 'mantissa'
relatively prime to $p$, and $p^{k-j}$ represented by $k-j$ trailing
zeros (cor1.3).
Normation (2) to $rhs=-1$ simplifies the analysis, and maps residues
($k$ digits) to residues, keeping the problem finite. Inverse normation
back to (6) mod $p^k$ is in case 1 always possible, using an inverse
scale factor in group $F_k$. So normation does {\bf not} map to the
reals or rationals.
The present approach needs only a simple form of Hensel's lemma [5]
(in the general $p$-adic number theory), which is a direct consequence
of cor1.2 : ~~extend digit-wise the 1-complement form such that the
$i$-th digit of weight $p^i$ in $a^p$ and $b^p$ sum to $p-1$
~(all $i \geq 0$), with $p$ choices per extra digit. Thus to each
normed solution of (2) mod $p^2$ correspond $p^{k-2}$ solutions
mod $p^k$:
\begin{cor}
~(1-cmpl extension)~~~~
A normed $FLT_k$ root is an extended $FLT_2$ root.
\end{cor}
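The corollary can be illustrated numerically: counting the ordered pairs $(a,b)$ of $p$-th power residues with $a+b \equiv -1$ mod $p^k$ exhibits the factor $p^{k-2}$ directly (a brute-force sketch of ours):

```python
def normed_solution_count(p, k):
    """Count ordered pairs (a, b) of p-th power residues mod p^k
    with a + b = -1 (mod p^k): the normed solutions of (2)."""
    m = p ** k
    F = {pow(n, p, m) for n in range(1, m) if n % p}
    return sum(1 for a in F if (-1 - a) % m in F)
```

For $p=7$ the counts are 2, 14 and 98 for $k = 2, 3, 4$: each normed $FLT_2$ root lifts to $p^{k-2}$ normed $FLT_k$ roots.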
\subsection{Proof of the FLT inequality}
Regarding $FLT$ case$_1$, an inverse-pair and triplet$^p$ are the only
(normed) $FLT_k$ roots (thm3.2). As shown (lem3.1), any assumed integer
case$_1$ solution has a corresponding equivalent core increment form
(4) with two terms in core, having no integer extension, against the
assumption.
\begin{thm} ($FLT$ Case 1).
~For prime $p>$2 and integers $x,y,z>0$ coprime to $p$ :\\ \hspace*{1in}
~$x^p+y^p=z^p$ has no solution.
\end{thm}
\begin{proof}
~An $FLT_{k>1}$ solution is a linear transformed extension of an $FLT_2$
root in core $A_2=F_2$ (cor4.1). By lemmas 2.2c and 3.1 it has no finite
$p$-th power extension, yielding the theorem.
\end{proof}
In $FLT$ case$_2$ just one of $x,y,z$ ~is a multiple of $p$, hence
$p^p$ divides one of the three $p$-th powers in $x^p+y^p=z^p$. Again,
any assumed case$_2$ equality can be scaled and translated to yield
an equivalence mod $p^p$ with two terms in core $A_p$, having no
integer extension, contra the assumption.
\begin{thm} ($FLT$ case$_2$)
~~~For prime $p>$2 and positive integers $x,y,z$ :\\ \hspace*{1in}
if~ $p$ ~divides just one of~ $x,y,z$ ~then ~$x^p+y^p=z^p$
~has no solution.
\end{thm}
\begin{proof}
In a case$_2$ solution $p$ divides a lefthand term, $x=cp$ or
$y=cp~(c>$0), or the right hand side $z=cp$. Bring the multiple of
$p$ to the right hand side, for instance if $y=cp$ we have~
$z^p-x^p=(cp)^p$, while otherwise $x^p+y^p=(cp)^p$. So the sum or
difference of two $p$-th powers coprime to $p$ must be shown not to
yield a $p$-th power $(cp)^p$ for any $c>0$:
{\bf (7)}~~~~
$x^p \pm y^p = (cp)^p$ ~has no solution for integers $x,y,c>0$.
Notice that core increment form (4) does not apply here. However,
by $FST$ the two lefthand terms, coprime to $p$, are either
complementary or equivalent mod $p$, depending on their sum or
difference being $(cp)^p$. ~Scaling by $s^p$ for some $s \equiv 1$
mod $p$, ~so $s^p \equiv 1$ mod $p^2$, transforms one lefthand term into
a core residue $A(n)$ mod $p^p$, with $n \equiv x$ mod $p$.
And translation by adding $t \equiv 0$ mod $p^2$ yields the other term
$A(n)$ or $-A(n)$ mod $p^p$ respectively. The right hand side then
becomes $s^p(cp)^p+t$, ~equivalent to $t$ mod $p^p$. So an
assumed equality (7) yields, by two equality preserving transformations,
the next equivalence (8), where $A(n) \equiv u \equiv u^p$ mod $p^p$ ~($u$ in
core $A=A_p$ for $0<n<p$ with $n \equiv x \equiv u$ mod $p$) and $s \equiv 1,~ t \equiv 0$
mod $p^2$:
{\bf (8)}~~~~~ $u^p \pm u^p \equiv u \pm u \equiv t$ mod $p^p ~(u \in A_p)$,~
~where~ $u \equiv (sx)^p$ ,~ $\pm u \equiv \pm (sy)^p+t$ mod $p^p$.
Equivalence (8) does not extend to integers, because $U^p+U^p>U+U$,
~and $U^p-U^p=0 \neq T$, ~where $U,T$ are the 0-extensions of $u,t$
mod $p^p$ respectively. But this contradicts assumed equalities (7),
which consequently must be false. \end{proof}
{\bf Remark}: {\small
~~From a practical point of view the $FLT$ integer inequality of a
0-extended $FLT_k$ root (case$_1$) is caused by the {\it carries}
beyond $p^{k-1}$, amounting to a multiple of the modulus, produced in
the arithmetic (base $p$).
In the expansion of $(a+b)^p$, the mixed terms $can$ vanish mod $p^k$
for some $a,b,p$. Ignoring the carries yields $(a+b)^p \equiv a^p+b^p$
mod $p^k$, and the $EDS$ property is as it were the $syntactical$
expression of ignoring the carry ($overflow$) in residue arithmetic.
In other words, in terms of $p$-adic number theory, this means
'breaking the Hensel lift': the residue equivalence of an $FLT_k$
root mod $p^k$, although it holds for all $k>$0, $does$ imply
inequality for integers due to its special triplet structure,
where exponent $p$ distributes over a sum.}
\section*{ Conclusions }
\begin{enumerate}
\item
Symmetries $-n,~n^{-1}$ determine $FLT_k$ roots but do not exist
for positive integers.
\item
Another proof of $FLT$ case$_1$ might use product 1 mod $p^{k}$ of
$FLT_k$ root terms: $ab \equiv 1$ or $abc \equiv 1$, which is impossible for
integers $>1$. The product of $m$ (=2, 3, $p$) ~$k$-digit integers
has $mk$ digits. ~Arithmetic mod $p^k$ {\bf ignores carries} of
weight $p^k$ and beyond.
Removal of the mod $p^k$ condition from a particular $FLT_k$ root
equivalence 0-extends its terms, and the ignored carries imply
inequality for integers.
\item
{\bf Core} $A_k \subset G_k$ as extension of $FST$ to mod $p^{k>1}$,
and the zero-sum of its subgroups (thm1.1) yielding the cubic $FLT$
root (lem2.1), started this work. The triplets were found by analysing
a computer listing (tab.2) of the $FLT$ roots mod $p^2$ for $p<200$.
\item
Linear analysis (mod $p^2$) suffices for root existence (Hensel,
cor4.1), but {\bf quadratic} analysis (mod $p^3$) is necessary to
derive triplet$^p$ core-increment form {\bf (4) ~(5,5')} with
maximally two terms in core $A_3$.
\item
"$FLT$ eqn(1) has no finite solution" and "$[ICS]^3$ has no finite
fixed point" \\are equivalent (thm3.2), yet each $n \in G_k$ is a
fixed point of $[ICS]^3$ mod $p^k$ \\ (re: $FLT_2$ roots imply all
roots for $k>$2, yet no 0-extension to integers).
\item
Crucial in finding the arithmetic triplet structure, and the double
precision core-increment symmetry and inequivalence (lem2.2c) were
extensive computer experiments, and the application of {\it associative
function composition}, the essence of semi-groups, to the three
elementary functions (thm3.2): \\ \hspace*{1cm}
successor $S(n)=n$+1, complement $C(n)=-n$ and inverse $I(n)=n^{-1}$,
\\ with period 3 for $SCI(n)=-(n+1)^{-1}$ and the other three such
compositions. In this sense $FLT$ is not a purely arithmetic problem,
but essentially requires non-commutative and associative function
composition for its proof.
\end{enumerate}
\section*{ Acknowledgements }
The opportunity given to me by the program committee in Prague [2]
to present this simple application of finite semigroup structure to
arithmetic is remarkable and greatly appreciated. Also, the feedback
from several correspondents is gratefully acknowledged.
\section*{ References }
\begin{enumerate} {\small
\item T. Apostol: {\it Introduction to Analytic Number Theory}
(thm 10.4-6), Springer Verlag, 1976.
\item N. F. Benschop: "The semigroup of multiplication mod $p^k$, an
extension of Fermat's Small Theorem, and its additive structure",\\
International conference {\it Semigroups and their Applications}
(Digest p7), Prague, July 1996.
\item A. Clifford, G. Preston: {\it The Algebraic Theory of Semigroups},\\
Vol 1 (p130-135), AMS survey \#7, 1961.
\item S. Schwarz: "The Role of Semigroups in the Elementary Theory of
Numbers",\\ Math. Slovaca V31, N4, p369-395, 1981.
\item G. Hardy, E. Wright: {\it An Introduction to the Theory of Numbers}\\
(Chap 8.3, Thm 123), Oxford Univ. Press, 1979.
}
\end{enumerate}
\begin{center} -----///----- \end{center}
\begin{verbatim}
n. n F= n^7 F'= PDo PD1 PD2 p=7
0. 0000 000000000 000000001 010000000 000000000 7-ary code
1. 0001 000000001 000000241 023553100 050301000 9 digits
2. 0002 000000242 000006001 < 055440100 446621000
3. 0003 000006243 000056251 150660100 401161000 '<' :
4. 0004 000065524 000345001 < 324333100 302541000 Cubic roots
5. 0005 000443525 001500241 612621100 545561000 (n+1)^p - n^p
6. 0006 002244066 004422601 355655100 233411000 = 1 mod p^3
x xx ^^^sym
7. 0010 010000000 013553101 410000000 000000000
8. 0011 023553101 031554241 116312100 062461000
9. 0012 055440342 062226001 < 351003100 534051000
10. 0013 150666343 143432251 630552100 600521000
11. 0014 324431624 255633001 < 455101100 160521000
12. 0015 613364625 444534241 156135100 242641000
13. 0016 361232166 025434501 110316100 223621000
14. 0020 420000000 423165201 010000000 000000000
15. 0021 143165201 263245241 402261100 313151000
16. 0022 436443442 342105001 < 502606100 060611000
17. 0023 111551443 000651251 326354100 541031000
18. .024 112533024 ! 660000001 < 036146100 035011000 (n+1)^p - n^p
19. .025 102533025 ! 366015241 612322100 531201000 = 1 mod p^7
20. 0026 501551266 625115401 332500100 600441000
--------&c
Table 1: Periodic Difference of i-th digit: PDi(n) = F(n+p^i) - F(n)
\end{verbatim}
\label{lastpage}
\begin{verbatim}
Find a+b = -1 mod p^2 (in A=F < G): Core A={n^p=n}, F={n^p} =A if k=2.
G(p^2)=g*, log-code: log(a)=i, log(b)=j; a.b=1 --> i+j=0 (mod p-1)
TRIPLET^p: a+ 1/b= b+ 1/c= c+ 1/a=-1; a.b.c=1; (p= 59 79 83 179 193 ...
^^^^^^^
Root-Pair: a+ 1/a=-1; a^3=1 ('C3') <--> p=6m+1 (Cubic rootpair of 1)
^^^^^^^^^
p:6m+-1 g=generator; p < 2000: two triplets at p= 59, 701, 1811
5:- 2 three triplets at p= 1093
7:+ 3 C3 11:- 2
13:+ 2 C3 17:- 3
19:+ 2 C3 23:- 5 29:- 2
31:+ 3 C3
37:+ 2 C3 41:- 6
43:+ 3 C3 47:- 5
53:- 2 log lin mod p^2
59:- 2 ------ ------------
-2,-25( 40 15, 18 43) 25, 23( 35 11, 23 47) -23, 2( 53 54, 5 4)
-- -- -- -- -- --
27, 19( 18 44, 40 14) -19, 8( 13 38, 45 20) -8,-27( 5 3, 53 55)
61:+ 2 C3
67:+ 2 C3 71:- 7
73:+ 5 C3
79:+ 3 C3
30, 20( 40 46, 38 32) -20, 10( 36 42, 42 36) -10,-30( 77 11, 1 67)
83:- 2
21, 3( 9 74, 73 8) -3, 18( 54 52, 28 30) -18,-21( 13 36, 69 46)
89:- 3
97:+ 5 C3 101:- 2
103:+ 5 C3 107:- 2
109:+ 6 C3 113:- 3
127:+ 3 C3 131:- 2 137:- 3
139:+ 2 C3 149:- 2
151:+ 6 C3
157:+ 5 C3
163:+ 2 C3 167:- 5 173:- 2
179:- 2
19, 1( 78 176,100 2) -1, 18( 64 90,114 88) -18,-19( 88 59, 90 119)
181:+ 2 C3 191:- 19
193:+ 5 C3
-81, 58( 64 106,128 86) -58, 53( 4 101,188 91) -53, 81(188 70, 4 122)
197:- 2
199:+ 3 C3
------- -------------------------------------
Table 2: FLT_2 root: inv-pair (C3) & triplet^p (for p < 200)
\end{verbatim}
\end{document}
\begin{document}
\title{Generating and detecting bound entanglement in two-qutrits using a family of indecomposable positive maps}
\author{Bihalan Bhattacharya}
\email{bihalan@gmail.com}
\author{Suchetana Goswami}
\email{suchetana.goswami@gmail.com}
\affiliation{S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700 106, India}
\author{Rounak Mundra}
\affiliation{Center for Security Theory and Algorithmic Research,
International Institute of Information Technology, Gachibowli, Hyderabad, India}
\author{Nirman Ganguly}
\affiliation{Department of Mathematics, Birla Institute of Technology and Science Pilani, Hyderabad Campus, Telangana-500078, India.}
\author{Indranil Chakrabarty}
\affiliation{Center for Security Theory and Algorithmic Research,
International Institute of Information Technology, Gachibowli, Hyderabad, India}
\author{Samyadeb Bhattacharya}
\email{samyadeb.b@iiit.ac.in}
\affiliation{Center for Security Theory and Algorithmic Research,
International Institute of Information Technology, Gachibowli, Hyderabad, India}
\author{A. S. Majumdar}
\affiliation{S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700 106, India}
\begin{abstract}
The problem of bound entanglement detection is a challenging aspect of quantum information theory for higher dimensional systems. Here, we propose an indecomposable positive map for two-qutrit systems, which is shown to detect a class of positive partial transposed (PPT) states. A corresponding witness operator is constructed and shown to be weakly optimal and locally implementable. Further, we perform a structural physical approximation of the indecomposable map to make it a completely positive one, and find a new PPT-entangled state which is not detectable by certain other well-known entanglement detection criteria.
\end{abstract}
\maketitle
\section{Introduction}
The inseparable feature of quantum states \cite{EPR_35, S_35, R_89, B_64} plays the most crucial role in various information processing tasks \cite{BW_92, BBCJPW_93, BCWSW_12, AMP_12}. Entanglement is the central feature of quantum information science, and the detection of entanglement in an arbitrary quantum system is considered to be one of the most fundamental problems of the subject. The most effective way to detect entanglement theoretically is via the use of positive but not completely positive (NCP) maps, of which the most famous and heavily utilized example is the partial transposition (PT) map \cite{pt1}.
It is well known that PT gives a necessary and sufficient separability criterion to detect entanglement only for $2\times 2$ and $2\times 3$ dimensional states \cite{ppt1}. For these dimensions, all entangled states have negative partial transposition (NPT). There are different prescribed protocols for the detection of two-qubit entanglement based on this criterion \cite{ADH_08, GA_12, ZGZG_08, CHKST_16, GCGM_19}. On the other hand, entanglement detection in general is an NP-hard problem \cite{posit8}. In the case of higher dimensional systems, there exists a class of states which are entangled but have a positive partial transposition (PPT), and hence cannot be detected by the NPT criterion.
The entanglement of PPT entangled states is not distillable \cite{bound1}. The presence
of bound entanglement in such states has evoked much interest as to the possibilities
of using or unlocking the entanglement present in them \cite{bound2, bound3}.
Bipartite bound entangled channels can exhibit superadditivity of quantum channel
capacity \cite{bound4}. A further interesting and difficult task is to detect such
bound entanglement \cite{bound5, bound6}, and methods have recently been suggested
to prepare and certify bound entangled states that are robust for experimental
verification \cite{bound7}. The bound entanglement in PPT entangled states is inextricably linked to indecomposable positive maps.
The structure of positive maps has been an area of interest to mathematicians for a long time, since it is extremely hard to determine the positivity of a map even in low dimensions. Ever since the seminal works of Peres and Horodecki \citep{Peres96, pt1}, it has been clear that such maps play an instrumental role in the detection of quantum entanglement. Considerable effort from both mathematicians and physicists \citep{Stinespring55, arveson69, Stormer82, Worono76, Tomiyama85, Osaka91, Cho92, Ha03, Majewski01, Kossakowski03, Piani06, Majewski07, Sarbicki12, Zwolak13, Marciniak13, Miller15, Rutkowski15, Lewenstein16, Marciniak17} has shed light on the structural intricacies of positive maps and their applications in physics. Applications of positive maps in the study of entanglement theory have catalysed the development of both domains.
Indecomposable positive maps play a key role in generating entangled states in higher dimensions. A positive map which can be written as an algebraic sum of maps from two relatively simple convex subclasses, {\it viz}., the completely positive and the completely co-positive maps, is called decomposable. Since transposition maps are completely co-positive in nature, quantum states having PPT cannot be detected by them. As a consequence, indecomposable maps are important for detecting PPT entangled states. Therefore, constructing non completely positive maps for detecting PPT entangled states is of considerable importance in entanglement theory.
As the PPT criterion fails to detect bound entanglement in higher dimensions, certain other criteria have been proposed in the literature which can detect some PPT entangled states. These include the computable cross norm or realignment criterion (CCNR criterion) \cite{R_03, CW_02}, the range criterion \cite{BDMSST_99, BP_00}, the covariance matrix criterion (CMC) \cite{GHGE_07, GGHE_08}, and others. In the present work we further explore the connection between the theory of positive maps and entanglement. We introduce an indecomposable positive map on the algebra of $3 \times 3$ complex matrices to obtain a PPT entangled state of a two-qutrit system.
Our proposed non-completely positive map not only detects a class of two-qutrit bound entangled states, but also introduces a class of them which are not detected by several of the previously mentioned criteria.
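For concreteness, the realignment (CCNR) criterion mentioned above is straightforward to evaluate numerically. The following sketch (our own illustration with NumPy, using a two-qubit example rather than the two-qutrit states studied below) computes the trace norm of the realigned matrix, which exceeds 1 only for entangled states:

```python
import numpy as np

def realigned(rho, d):
    """Realignment R(rho)_{(ij),(kl)} = rho_{(ik),(jl)} on C^d x C^d."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

def ccnr_value(rho, d):
    """Sum of singular values of R(rho); a value > 1 certifies entanglement."""
    return np.linalg.svd(realigned(rho, d), compute_uv=False).sum()

# Maximally entangled two-qubit state: CCNR value 2 > 1 (detected)
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)

# Product state |00><00|: CCNR value 1 (not detected)
prod = np.zeros((4, 4)); prod[0, 0] = 1.0
```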
Since non-completely positive maps correspond to unphysical operations, it is impossible to implement them in the laboratory. However, it is indeed possible to construct a physically implementable completely positive map from a given unphysical map using the notion of structural physical approximation (SPA) \cite{spa1, spa2}, which we employ in this work.
The SPA technique has also been used for realization of the optimal singlet fraction \cite{satya}. On the other hand, PPT entangled states have been constructed earlier from indecomposable positive maps \cite{Ha03, HA04}. Constructions of such states were done by exploiting the facial structures and various duality relations of the cone of positive maps. Here we devise a
different method of constructing PPT entangled states via usage of the structural physical approximation (SPA) \cite{spa1, spa2}.
Three-level systems are of primary importance in laser physics, and possess features of interest from the quantum information perspective as well \cite{qut1, qut2, qut3, qut4}. In practical quantum information processing, detecting the entanglement of a given unknown system and its quantification is an important area of research. The theory of entanglement witnesses \cite{wit1, wit3, wit4, wit2, guehne, Ganguly09} provides a useful avenue to this end, and further helps to identify resources useful for various information processing tasks \citep{Ganguly11, Adhikari12, Ganguly14, patro, vempati}. Here we formulate a weakly optimal \cite{badzi} indecomposable entanglement witness from the positive map of our construction. This entanglement witness is shown to detect the proposed two-qutrit bound entangled state, and is further shown to be implementable through local operations.
The structure of the paper is the following. In Section II, we discuss some prerequisites of the theory applied in the later sections. In Section III, we define a new one-parameter family of indecomposable positive maps and show that it can detect a class of two-qutrit entangled states. In Section IV, we construct a weakly optimal witness which, for a particular choice of parameter, detects at least one class of bound entangled states. In Section V, we employ the structural physical approximation to construct a new class of PPT entangled states. We conclude in Section VI with a summary of our results.
\section{Preliminaries}
In this section we discuss some preliminary details of positive maps; detailed accounts can be found in \cite{book1, book2}. We consider Hilbert spaces of finite dimension, and deal with positive maps between algebras of matrices.
The seminal results of St{\o}rmer \citep{stormer63} and Woronowicz \citep{Worono} showed that if $\mathcal{H}_1$
and $\mathcal{H}_2$ are two Hilbert spaces, then all positive maps from the set of bounded operators on $\mathcal{H}_1$ into the set of bounded operators on $\mathcal{H}_2$ are decomposable whenever the product of the dimensions of $\mathcal{H}_1$ and $\mathcal{H}_2$ is at most 6. The first example of an indecomposable map, popularly known as the Choi map, was provided by M.~D.~Choi \citep{Choi75}. A new family of indecomposable maps was considered by Hall \citep{Hall2006} and Breuer \citep{Bruer2006}. Later this family was generalized to a class of positive maps by Chru\'sci\'nski and Kossakowski \citep{Chruchinski2008}, who discussed the indecomposability and atomicity of part of the class. On the other hand, as discussed earlier, the theory of positive maps has a deep connection with quantum inseparability, which provides new insight into the subject \citep{Peres96, pt1, PH1997, Terhal2002}.
Here, we concentrate on the bipartite scenario and recapitulate a few notions on separability and positive maps from the literature. As mentioned above, if a bipartite state $\rho_{AB}= \sum_{ijkl} p_{kl}^{ij} |i\rangle \langle j| \otimes |k \rangle \langle l| $ is separable, then it is PPT \cite{P_96, HHH_01}, where the partial transposition (with respect to the second subsystem) is given by $\rho^{T_{B}}=\sum_{ijkl} p_{lk}^{ij} |i \rangle \langle j| \otimes |k \rangle \langle l| $.
Moreover, a state $\rho$ is separable if and only if $(\openone \otimes \Lambda) \rho \geq 0$ for every positive map $\Lambda$. Though there are a few examples of positive maps \cite{choi,posit1,posit2,posit3,posit4,posit5,posit6,posit7} which can detect PPT entanglement, they are far from exhaustive.
Let $\mathbb{C}^{d}$ be the complex Hilbert spaces of dimension $d$.
Let $\mathcal{B} \left(\mathbb{C}^{d} \right)$ denote the space of all operators acting on $\mathbb{C}^{d}$. $\mathcal{B} \left( \mathbb{C}^{d} \right) $ is endowed with the Hilbert-Schmidt inner product $\langle X, Y\rangle= Tr \left[ X^{\dagger} Y \right] $ for any two members $X , Y \in \mathcal{B} \left( \mathbb{C}^{d} \right) $. The subcollection of $\mathcal{B} \left( \mathbb{C}^{d} \right) $ consisting of Hermitian, positive semidefinite operators having unit trace is the set of density operators acting on $\mathbb{C}^{d} $.
Recall that operators acting on finite dimensional spaces are bounded and can be represented as matrices with respect to some basis. Let $ \mathbb{M}_{d}$ and $ \mathbb{M}_{k}$ be the algebras of $d \times d$ and $k \times k$ matrices, respectively, over the field of complex numbers. A linear map $\Lambda : \mathbb{M}_{d} \rightarrow \mathbb{M}_{d} $ is said to be positive if $\Lambda \left( X \right) \geq \Theta $ for any positive semidefinite $X \in \mathbb{M}_{d} $, where $\Theta$ denotes the zero operator. For $k \in \mathbb{N}$, a linear map is said to be $k$-positive if the map $\mathbb{I}_k \otimes \Lambda : \mathbb{M}_{k} \otimes \mathbb{M}_{d} \rightarrow \mathbb{M}_{k} \otimes \mathbb{M}_{d} $ is positive, and completely positive if it is $k$-positive for all $k \in \mathbb{N}$. Similarly, a linear map $\Lambda$ is said to be $k$-copositive if $\mathbb{I}_k \otimes \left( \Lambda \circ T \right) $ is positive, and completely copositive if $\Lambda \circ T$ is completely positive, where $T$ stands for the transposition map.
Given any linear map $\Lambda : \mathbb{M}_{d} \rightarrow \mathbb{M}_{d}$, the celebrated Choi-Jamio{\l}kowski isomorphism associates with it a matrix $\mathcal{C}_{\Lambda}$, known as the Choi matrix, living in $\mathbb{M}_{d} \otimes \mathbb{M}_{d}$. The Choi matrix is obtained via the rule $\mathcal{C}_{\Lambda} = \mathbb{I} \otimes \Lambda \left( \vert \phi^{+} \rangle \langle \phi^{+} \vert \right) $, where $\vert \phi^{+} \rangle = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} \vert ii \rangle$ is the maximally entangled state in $\mathbb{C}^{d} \otimes \mathbb{C}^{d}$ and $\left\lbrace \vert i \rangle \right\rbrace_{i=0}^{d-1} $ stands for the standard computational basis of $\mathbb{C}^{d}$. A linear map $\Lambda$ is completely positive iff its Choi matrix $\mathcal{C}_{\Lambda}$ is positive semidefinite. It is to be noted that if a linear map is positive but not completely positive, then there exists some density operator $\rho$ whose image under $\mathbb{I} \otimes \Lambda$ is not positive. Such an operator $\rho$ cannot be separable. Hence, positive but not completely positive maps can be used to detect entangled density operators.
Another important notion for positive maps is decomposability. A positive map $\Lambda$ is decomposable if it can be expressed as $\Lambda = \Lambda_1 + \Lambda_2 \circ T$, where $\Lambda_1$ and $\Lambda_2$ are completely positive maps and $T$ denotes transposition. Otherwise, it is said to be indecomposable. It is to be noted that decomposable maps cannot detect PPT entangled density operators; recall that a density operator $\sigma$ is PPT if $ \left(\mathbb{I} \otimes T \right) \sigma \geq \Theta$. Therefore, an indecomposable map must detect at least one PPT entangled density operator. Moreover, a positive linear map is called atomic if it cannot be expressed as a sum of a 2-positive and a 2-copositive map. An atomic linear map is, in particular, indecomposable.\\
Additionally, a linear map is said to be trace preserving if $ Tr \left[ \Lambda \left( X\right) \right]= Tr\left[ X \right]$ for all $X \in \mathbb{M}_{d}$, and hermiticity preserving if $\Lambda \left( X \right)^{\dagger} = \Lambda \left( X^{\dagger} \right)$ for all $X \in \mathbb{M}_{d} $. Given a linear map $\Lambda : \mathbb{M}_{d} \rightarrow \mathbb{M}_{d}$, its dual map $\Lambda^{\dagger} : \mathbb{M}_{d} \rightarrow \mathbb{M}_{d}$ is defined by the relation $\langle\Lambda^{\dagger} \left( X \right), Y\rangle = \langle X, \Lambda \left( Y \right)\rangle $ for any operators $X, Y \in \mathbb{M}_{d}$. A map $\Lambda$ is positive iff its dual map $\Lambda^{\dagger}$ is positive.
Using the above properties of positive maps, in the next section we shall introduce a new class of indecomposable positive maps.
\section{One parameter family of indecomposable positive maps}
We now introduce a one-parameter family of positive maps containing a subfamily of indecomposable maps. For this purpose, we start with the following definition.
\textbf{Definition 1:} Let $\mathbb{M}_3$ denote the algebra of $3 \times 3$ matrices over the field of complex numbers $\mathbb{C}$. We define a one-parameter class of linear trace-preserving maps $\Lambda_{\alpha} : \mathbb{M}_3 \rightarrow \mathbb{M}_3 $ by
\begin{eqnarray}
\Lambda_{\alpha} \left( X \right) = \frac{1}{\alpha + \frac{1}{\alpha}} \begin{bmatrix}
\alpha (x_{11}+ x_{22})& -x_{12}&- \alpha x_{13}\\
-x_{21}&\frac{x_{22}+x_{33}}{\alpha}&-x_{32}\\
- \alpha x_{31}&-x_{23}&\alpha x_{33}+ \frac{x_{11}}{\alpha}
\end{bmatrix}
\end{eqnarray}
\begin{eqnarray}
\mbox{where}~~ X= \begin{bmatrix}
x_{11}&x_{12}&x_{13}\\
x_{21}&x_{22}&x_{23}\\
x_{31}&x_{32}&x_{33}
\end{bmatrix} \in \mathbb{M}_3 ~~\mbox{and}~~\alpha \in ( 0 , 1 ]. \nonumber
\end{eqnarray}
\noindent \textbf{Theorem 1:} ~~$\Lambda_{\alpha}$ is a positive map on $\mathbb{M}_3$ for all $0 < \alpha\leq 1$.\\\\
\noindent \textbf{Proof:} Since every positive semidefinite matrix is a convex combination of projectors onto pure states, to prove that the linear map is positive it is sufficient to show that the image of an arbitrary pure state $|\phi\rangle =(\phi_1,\phi_2,\phi_3)^T$ is positive semidefinite. Here $\phi_1, \phi_2,\phi_3$ are arbitrary complex numbers with the constraint $|\phi_1|^2+|\phi_2|^2+|\phi_3|^2=1$.
Here we have
\begin{eqnarray}
\begin{array}{ll}
\Lambda_{\alpha}\left(|\phi\rangle\langle\phi|\right)= \\ \frac{1}{\alpha + \frac{1}{\alpha}}\begin{bmatrix}
\alpha (|\phi_1|^2+ |\phi_2|^2) & -\phi_1\phi_2^{*} &- \alpha\phi_1\phi_3^{*}\\
-\phi_1^*\phi_2&\frac{|\phi_2|^2+|\phi_3|^2}{\alpha}&-\phi_2^*\phi_3\\
- \alpha\phi_1^*\phi_3&-\phi_2\phi_3^*&\alpha |\phi_3|^2+ \frac{ |\phi_1|^2}{\alpha}
\end{bmatrix}
\end{array}
\end{eqnarray}
To prove that the matrix $\Lambda_{\alpha}\left(|\phi\rangle\langle\phi|\right)$ is positive semidefinite, we need to show that all of its principal minors are non-negative. The first-order principal minors are the diagonal elements, which are non-negative for any $\alpha > 0$. The three second-order principal minors are
\begin{eqnarray}
\begin{array}{ll}
M_1= \frac{1}{\alpha + \frac{1}{\alpha}}\begin{vmatrix}
\alpha (|\phi_1|^2+ |\phi_2|^2) & -\phi_1\phi_2^{*}\\
-\phi_1^*\phi_2 & \frac{|\phi_2|^2+|\phi_3|^2}{\alpha}
\end{vmatrix} ,\\
\\
M_2= \frac{1}{\alpha + \frac{1}{\alpha}}\begin{vmatrix}
\alpha (|\phi_1|^2+ |\phi_2|^2) & - \alpha\phi_1\phi_3^{*}\\
- \alpha\phi_1^*\phi_3 & \alpha |\phi_3|^2+ \frac{ |\phi_1|^2}{\alpha}
\end{vmatrix},\\
\\
M_3= \frac{1}{\alpha + \frac{1}{\alpha}}\begin{vmatrix}
\frac{|\phi_2|^2+|\phi_3|^2}{\alpha} & -\phi_2^*\phi_3\\
-\phi_2\phi_3^* & \alpha |\phi_3|^2+ \frac{ |\phi_1|^2}{\alpha}
\end{vmatrix}.
\end{array}
\end{eqnarray}
We note that, up to overall positive prefactors, $M_1$ is proportional to $ |\phi_2|^4 +|\phi_1|^2|\phi_3|^2 + |\phi_2|^2 |\phi_3|^2 $, $M_2$ to $ |\phi_1|^4 + |\phi_1|^2|\phi_2|^2+ \alpha^{2}|\phi_2|^2|\phi_3|^2 $, and $M_3$ to $ |\phi_3|^4 + \frac{|\phi_1|^2 (|\phi_2|^2+|\phi_3|^2)}{\alpha^{2}} $. Each of these quantities is manifestly non-negative for all $\alpha \in ( 0 , 1 ]$.
The remaining principal minor is the determinant of the matrix $\Lambda_{\alpha}\left(|\phi\rangle\langle\phi|\right)$, which is given by
\[
\begin{array}{ll}
D= \frac{\alpha^2}{1+\alpha^2}\left[|\phi_2|^2|\phi_3|^4+\frac{|\phi_3|^2|\phi_1|^4}{\alpha^2}+\frac{|\phi_1|^2|\phi_2|^4}{\alpha^2}\right]\\
- \frac{\alpha^2}{1+\alpha^2}\left[2|\phi_1|^2|\phi_2|^2|\phi_3|^2+2|\phi_1|^2 Re\left[(\phi_2^*\phi_3)^2\right]-\frac{1}{\alpha^2}|\phi_1|^2|\phi_2|^2|\phi_3|^2\right],\\
\\
\mbox{Since}~~ Re\left[(\phi_2^*\phi_3)^2\right]\leq |\phi_2|^2|\phi_3|^2, \forall~ \phi_2~~ \mbox{and}~~ \phi_3,~~ \mbox{we have}\\
\\
D\geq \frac{\alpha^2}{1+\alpha^2}\left[|\phi_2|^2|\phi_3|^4+\frac{|\phi_3|^2|\phi_1|^4}{\alpha^2}+\frac{|\phi_1|^2|\phi_2|^4}{\alpha^2}-(4-\frac{1}{\alpha^2})|\phi_1|^2|\phi_2|^2|\phi_3|^2\right],\\
\geq \frac{\alpha^2}{1+\alpha^2}\left[|\phi_2|^2|\phi_3|^4+|\phi_3|^2|\phi_1|^4+|\phi_1|^2|\phi_2|^4-3|\phi_1|^2|\phi_2|^2|\phi_3|^2\right],
\end{array}
\]
for all $\alpha\leq 1$. Here $Re(\cdot)$ denotes the real part of a complex number. By the arithmetic--geometric mean inequality,
\[\left[|\phi_2|^2|\phi_3|^4+|\phi_3|^2|\phi_1|^4+|\phi_1|^2|\phi_2|^4-3|\phi_1|^2|\phi_2|^2|\phi_3|^2\right] \geq 0\]
for all $\phi_1, \phi_2,\phi_3$ with the constraint $|\phi_1|^2+|\phi_2|^2+|\phi_3|^2=1$. Therefore, the map $\Lambda_{\alpha}(\cdot)$ is positive for all $0 < \alpha\leq 1$. \qed
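As a cross-check of Theorem 1 (a numerical sketch, not part of the proof), the map can be applied to a large sample of random pure states using NumPy:

```python
import numpy as np

def lam(X, a):
    """The map Lambda_alpha of Definition 1, with prefactor N = 1/(alpha + 1/alpha)."""
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * (X[0, 0] + X[1, 1]), -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], (X[1, 1] + X[2, 2]) / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[0, 0] / a]])

rng = np.random.default_rng(7)
for a in (0.25, 0.5, 1.0):
    for _ in range(300):
        v = rng.normal(size=3) + 1j * rng.normal(size=3)
        v /= np.linalg.norm(v)                         # random pure state |phi>
        Y = lam(np.outer(v, v.conj()), a)
        assert np.min(np.linalg.eigvalsh(Y)) > -1e-12  # output is positive semidefinite
        assert abs(np.trace(Y) - 1.0) < 1e-12          # the map is trace preserving
print("positivity and trace preservation hold on sampled pure states")
```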
It is our aim to determine whether the map $\Lambda_{\alpha}$ can detect entangled states that are positive under partial transposition. For this purpose, we prove the following corollary. \\
\textbf{Corollary 1:} The family $\Lambda_{\alpha}$ contains a class of non-completely-positive indecomposable maps.
\proof To prove the corollary, we first show that the given positive map is not completely positive. By Choi's theorem, it is sufficient to show that $\mathbb{I}\otimes\Lambda_{\alpha}(|\Phi\rangle\langle\Phi|)$ is not positive, where $|\Phi\rangle$ is the maximally entangled two-qutrit state.
Let us consider the corresponding Choi matrix first. We take the maximally entangled state for two qutrit system as $ \vert \Phi \rangle = \frac{1}{\sqrt{3}} \left( \vert 00 \rangle + \vert 11 \rangle + \vert 22 \rangle \right) $ where,
\begin{eqnarray}
\vert 0 \rangle = \begin{bmatrix}
1\\
0\\
0
\end{bmatrix}, ~~ \vert 1 \rangle = \begin{bmatrix}
0\\
1\\
0
\end{bmatrix}, ~~ \vert 2 \rangle = \begin{bmatrix}
0\\
0\\
1
\end{bmatrix}
\end{eqnarray}
The one sided action of the map on the maximally entangled state gives rise to the Choi matrix,
\begin{eqnarray}
\begin{tiny}
\mathcal{C}_{\Lambda_{\alpha}}=\begin{bmatrix}
\frac{\alpha^2}{3 + 3 \alpha ^2}&0&0&0&- \frac{\alpha}{3+3 \alpha^2}&0&0&0&-\frac{\alpha^2}{3 + 3 \alpha ^2}\\
0&0&0&0&0&0&0&0&0\\
0&0&\frac{1}{3+3 \alpha^2}&0&0&0&0&0&0\\
0&0&0&\frac{\alpha^2}{3 + 3 \alpha ^2}&0&0&0&0&0\\
- \frac{\alpha}{3+3 \alpha^2}&0&0&0&\frac{1}{3+3 \alpha^2}&0&0&0&0\\
0&0&0&0&0&0&0&- \frac{\alpha}{3+3 \alpha^2}&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&- \frac{\alpha}{3+3 \alpha^2}&0&\frac{1}{3+3 \alpha^2}&0\\
-\frac{\alpha^2}{3 + 3 \alpha ^2}&0&0&0&0&0&0&0&\frac{\alpha^2}{3 + 3 \alpha ^2}
\end{bmatrix}
\end{tiny}
\end{eqnarray}
The least eigenvalue of $\mathcal{C}_{\Lambda_{\alpha}}$ is $\lambda^{'}= \frac{1-\sqrt{1+4 \alpha^2}}{6+6 \alpha^2}$. We see that it is a negative quantity within the above parameter range $\alpha \in ( 0 , 1 ]$. Hence, it is proven that the given map is not completely positive.
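This least eigenvalue can be checked numerically; the sketch below rebuilds the Choi matrix column by column (the map definition is repeated to keep the snippet self-contained):

```python
import numpy as np

def lam(X, a):
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * (X[0, 0] + X[1, 1]), -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], (X[1, 1] + X[2, 2]) / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[0, 0] / a]])

def choi(a):
    """C = (I (x) Lambda_alpha)(|Phi+><Phi+|), with |Phi+> = (|00>+|11>+|22>)/sqrt(3)."""
    C = np.zeros((9, 9), dtype=complex)
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3)); E[i, j] = 1.0
            C += np.kron(E, lam(E, a)) / 3.0
    return C

for a in (0.25, 0.5, 1.0):
    lo = np.min(np.linalg.eigvalsh(choi(a)))
    assert abs(lo - (1.0 - np.sqrt(1.0 + 4.0 * a**2)) / (6.0 + 6.0 * a**2)) < 1e-12
print("least eigenvalue matches (1 - sqrt(1 + 4 a^2)) / (6 + 6 a^2)")
```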
To prove that the map is indecomposable, it now suffices to show that it detects at least one entangled state which is positive under partial transposition. Such a class of two-qutrit entangled states \cite{StormerPPT} is the following
\begin{small}
\begin{eqnarray} \label{taux}
\tau_x=\frac{1}{3(1+x+x^{-1})}
\left(
\begin{array}{ccccccccc}
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\
0 & x & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{x} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{x} & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & x & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & x & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{x} & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\
\end{array}
\right)
\end{eqnarray}
\end{small}
with $x$ being any positive real number. Applying the proposed map, we have
\[
\begin{tiny}
\begin{array}{ll}
\mathbb{I} \otimes \Lambda_{\alpha}(\tau_x)=\\
\frac{N}{3(1+x+x^{-1})}\begin{array}{ll}
\left(
\begin{array}{ccccccccc}
\alpha (x+1) & 0 & 0 & 0 & -1 & 0 & 0 & 0 & -\alpha \\
0 & \frac{x+\frac{1}{x}}{\alpha } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{\alpha }{x}+\frac{1}{\alpha } & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \alpha \left(1+\frac{1}{x}\right) & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & \frac{x+1}{\alpha } & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & x \alpha +\frac{1}{\alpha x} & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \alpha \left(x+\frac{1}{x}\right) & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & \frac{1+\frac{1}{x}}{\alpha } & 0 \\
-\alpha & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{x}{\alpha }+\alpha \\
\end{array}
\right)
\end{array}
\end{array}
\end{tiny}
\]
\\
Here, $N=1/(\alpha+1/\alpha)$ is the normalization factor. One of the principal minors of the matrix $\mathbb{I} \otimes \Lambda_{\alpha}(\tau_x)$ is given by
\[
\begin{array}{ll}
D_{\tau_x}=N\begin{vmatrix}
\alpha(1+x) & -1 & -\alpha\\
-1 & \frac{1+x}{\alpha} & 0\\
-\alpha & 0 & \alpha+\frac{x}{\alpha}
\end{vmatrix}\\
~~~~~~=N\left[x(2+x)\left(\alpha+\frac{x}{\alpha}\right)-\alpha(1+x)\right].
\end{array}
\]
The negativity of $D_{\tau_x}$ over a large range of parameters can be seen from Fig.~\ref{Minors},
where we plot $D_{\tau_x}$ with respect to $x$ for some particular values of $\alpha$.
The following cases may be considered as examples.
Case 1: Considering $\alpha=\frac{1}{4}$, we see that $D_{\tau_x}<0$, if
$x< 0.154.$
\\
Case 2: Considering $\alpha=\frac{1}{2}$, we see that $D_{\tau_x}<0$, if
$x< 0.269.$
\\
Case 3: Let us now consider $\alpha=1$. In this case we can see that $D_{\tau_x}<0$, if
$ x < \sqrt{2}-1 .$
\\
Therefore, the one-parameter class of maps contains indecomposable positive maps. \qed \\
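The whole indecomposability argument can be replayed numerically: the sketch below confirms that $\tau_x$ stays PPT while $\mathbb{I}\otimes\Lambda_{\alpha}(\tau_x)$ develops a negative eigenvalue below the thresholds of the three cases (the particular $(\alpha, x)$ pairs are chosen here for illustration):

```python
import numpy as np

def lam(X, a):
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * (X[0, 0] + X[1, 1]), -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], (X[1, 1] + X[2, 2]) / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[0, 0] / a]])

def tau(x):
    """The PPT entangled two-qutrit family tau_x."""
    t = np.zeros((9, 9))
    for i in (0, 4, 8):
        for j in (0, 4, 8):
            t[i, j] = 1.0
    t[1, 1] = t[5, 5] = t[6, 6] = x
    t[2, 2] = t[3, 3] = t[7, 7] = 1.0 / x
    return t / (3.0 * (1.0 + x + 1.0 / x))

def id_otimes(f, rho):
    """Apply I (x) f blockwise to a two-qutrit operator."""
    out = np.zeros((9, 9), dtype=complex)
    for i in range(3):
        for j in range(3):
            out[3*i:3*i+3, 3*j:3*j+3] = f(rho[3*i:3*i+3, 3*j:3*j+3])
    return out

def ptB(rho):
    return id_otimes(lambda B: B.T, rho)      # partial transposition on the second qutrit

for a, x in ((0.25, 0.1), (0.5, 0.2), (1.0, 0.4)):
    assert np.min(np.linalg.eigvalsh(ptB(tau(x)))) > -1e-12   # tau_x is PPT
    neg = np.min(np.linalg.eigvalsh(id_otimes(lambda Z: lam(Z, a), tau(x))))
    assert neg < -1e-6                                        # yet Lambda_alpha detects it
print("PPT entangled tau_x detected: the maps are indecomposable")
```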
\begin{figure}
\caption{The principal minor $D_{\tau_x}$ as a function of $x$ for particular values of $\alpha$.}
\label{Minors}
\end{figure}
\textbf{Remark:} In \cite{Yang16}, the authors have shown that any indecomposable positive linear map $\Lambda : \mathbb{M}_3 \rightarrow \mathbb{M}_3 $ is atomic, and in view of this fact the indecomposable maps of \textbf{Corollary 1} are also atomic.
Let us now present the dual map corresponding to the positive map introduced in Definition 1.
\noindent \textbf{Corollary 2:} The following map
\begin{equation}\label{nmap}
\Lambda_{\alpha}^{\dagger} \left( X \right) = \frac{1}{\alpha + \frac{1}{\alpha}} \begin{bmatrix}
\alpha x_{11}+ \frac{x_{33}}{\alpha}& -x_{12}&- \alpha x_{13}\\
-x_{21}&\alpha x_{11}+ \frac{x_{22}}{\alpha}&-x_{32}\\
- \alpha x_{31}&-x_{23}&\alpha x_{33}+ \frac{x_{22}}{\alpha}
\end{bmatrix}
\end{equation}
is also positive and indecomposable in the range $0<\alpha\leq 1$.
\proof The proof of positivity follows similarly to that of \textbf{Theorem 1}. For the proof of indecomposability, we construct the matrix $\mathbb{I}\otimes\Lambda_{\alpha}^{\dagger}(\tau_x)$ and find that it has at least one negative eigenvalue for \[x>\frac{1}{\sqrt{2}-1}.\]
\qed
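A numerical sketch of Corollary 2, with $\Lambda_{\alpha}^{\dagger}$ written out from the Hilbert--Schmidt duality relation $\langle\Lambda^{\dagger}(X),Y\rangle=\langle X,\Lambda(Y)\rangle$ of the Preliminaries (the test value of $x$ is illustrative):

```python
import numpy as np

def lam(X, a):
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * (X[0, 0] + X[1, 1]), -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], (X[1, 1] + X[2, 2]) / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[0, 0] / a]])

def lam_dual(X, a):
    """Dual of Lambda_alpha with respect to <X, Y> = Tr[X^dagger Y]."""
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * X[0, 0] + X[2, 2] / a, -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], a * X[0, 0] + X[1, 1] / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[1, 1] / a]])

def tau(x):
    t = np.zeros((9, 9))
    for i in (0, 4, 8):
        for j in (0, 4, 8):
            t[i, j] = 1.0
    t[1, 1] = t[5, 5] = t[6, 6] = x
    t[2, 2] = t[3, 3] = t[7, 7] = 1.0 / x
    return t / (3.0 * (1.0 + x + 1.0 / x))

def id_otimes(f, rho):
    out = np.zeros((9, 9), dtype=complex)
    for i in range(3):
        for j in range(3):
            out[3*i:3*i+3, 3*j:3*j+3] = f(rho[3*i:3*i+3, 3*j:3*j+3])
    return out

def ptB(rho):
    return id_otimes(lambda B: B.T, rho)

rng = np.random.default_rng(1)
for a in (0.5, 1.0):
    X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    Y = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    assert abs(np.trace(lam_dual(X, a).conj().T @ Y)
               - np.trace(X.conj().T @ lam(Y, a))) < 1e-10    # duality relation

x = 3.0                                           # x > 1/(sqrt(2)-1) ~ 2.414
assert np.min(np.linalg.eigvalsh(ptB(tau(x)))) > -1e-12       # tau_3 is PPT
assert np.min(np.linalg.eigvalsh(id_otimes(lambda Z: lam_dual(Z, 1.0), tau(x)))) < -1e-6
print("dual map verified; it detects the PPT state tau_3 at alpha = 1")
```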
\section{Entanglement witness and Weak optimality}
Since positive maps are not physically realizable, our next goal is to construct a class of entanglement witnesses \cite{guehne}, in order to put the experimental viability of our findings on a firm footing. Moreover, we also prove that at least one of the constructed witnesses is weakly optimal.
Any positive but not completely positive map gives rise to an entanglement witness: for such a map $\Lambda_\Gamma$, its Choi matrix $\mathcal{C}_\Gamma$ serves as a witness for some entangled state. An entanglement witness $\mathcal{W}$ is said to be weakly optimal \cite{badzi} if there exists some pure product state $\vert \gamma \rangle \otimes \vert \delta \rangle$ such that
\[ \langle \gamma \vert \otimes \langle \delta \vert \mathcal{W} \vert \gamma \rangle \otimes \vert \delta \rangle = 0.\]
An entanglement witness can always be constructed from a positive map; we give one such example in the following.
For two positive semidefinite matrices $A$ and $B$, the inequality $Tr[AB]\geq 0$ always holds. Based on this fact, we have $Tr[|\Phi\rangle\langle\Phi|\,\mathbb{I}\otimes\Lambda_{\alpha}^{\dagger}(\sigma)]\geq 0$ for all separable states $\sigma$, while for at least one entangled state the trace acquires a negative value. Using the duality relation $\langle X, \Lambda_{\alpha}^{\dagger}(Y)\rangle = \langle \Lambda_{\alpha}(X), Y\rangle$, we get
\[Tr[|\Phi\rangle\langle\Phi|\,\mathbb{I}\otimes\Lambda_{\alpha}^{\dagger}(\rho)]=Tr[\mathbb{I}\otimes\Lambda_{\alpha}(|\Phi\rangle\langle\Phi|)\rho]\] for any state $\rho$ on $\mathbb{C}^{3}\otimes\mathbb{C}^{3}$, and the right-hand side is non-negative whenever $\rho$ is separable. This can of course be extended to arbitrary $d\times 3$ dimensional systems.
We can thus consider the one-parameter family of positive maps $\Lambda_{\alpha}$, with the corresponding Choi matrices $\mathcal{C}_{\Lambda_{\alpha}}=\mathbb{I}\otimes\Lambda_{\alpha}(|\Phi\rangle\langle\Phi|)$, as a one-parameter family of entanglement witnesses. Now, for $\alpha = 1$, we can choose $ \vert \gamma \rangle = \vert \delta \rangle = \frac{1}{\sqrt{3}} \begin{bmatrix}
1\\
1\\
1
\end{bmatrix} $ such that
\[\langle \gamma \vert \otimes \langle \gamma \vert \mathcal{C}_{\Lambda_{1}} \vert \gamma \rangle \otimes \vert \gamma \rangle = 0.\]
This weakly optimal witness can also be implemented locally, since it can be expressed as a linear combination of two-qutrit local observables. We consider the $3 \times 3$ identity matrix
$G_1= \begin{bmatrix}
1&0&0\\
0&1&0\\
0&0&1\\
\end{bmatrix}$,
along with the eight Gell-Mann matrices
$G_2= \begin{bmatrix}
0&1&0\\
1&0&0\\
0&0&0\\
\end{bmatrix}$,
$G_3= \begin{bmatrix}
0&-i&0\\
i&0&0\\
0&0&0\\
\end{bmatrix}$,
$G_4= \begin{bmatrix}
1&0&0\\
0&-1&0\\
0&0&0\\
\end{bmatrix}$,
$G_5= \begin{bmatrix}
0&0&1\\
0&0&0\\
1&0&0\\
\end{bmatrix}$,
$G_6= \begin{bmatrix}
0&0&-i\\
0&0&0\\
i&0&0\\
\end{bmatrix}$,
$G_7= \begin{bmatrix}
0&0&0\\
0&0&1\\
0&1&0\\
\end{bmatrix}$,
$G_8= \begin{bmatrix}
0&0&0\\
0&0&-i\\
0&i&0\\
\end{bmatrix}$,
$G_9= \frac{1}{\sqrt{3}}\begin{bmatrix}
1&0&0\\
0&1&0\\
0&0&-2\\
\end{bmatrix}$
These nine matrices are Hermitian and hence serve as local observables; we denote them by $G_i$, $i= 1, \ldots, 9$.
We note that for $\alpha=1$,
\begin{equation}\label{wit}
\begin{array}{ll}
\mathcal{C}_{\Lambda_{1}} = \frac{1}{9} G_1 \otimes G_1 - \frac{1}{12} G_2 \otimes G_2 + \frac{1}{12} G_3 \otimes G_3 + \frac{1}{24} G_4 \otimes G_4\\
~~~~~~~~~~ -\frac{1}{8 \sqrt{3}} G_4 \otimes G_9 -\frac{1}{12} G_5 \otimes G_5+ \frac{1}{12} G_6 \otimes G_6 -\frac{1}{12} G_7 \otimes G_7
-\frac{1}{12} G_8 \otimes G_8 \\
~~~~~~~~~~+\frac{1}{8 \sqrt{3}} G_9 \otimes G_4
+\frac{1}{24} G_9 \otimes G_9.
\end{array}
\nonumber
\end{equation}
This witness is of course indecomposable, because the corresponding positive map has been proven to be indecomposable. To further establish this fact, we apply the witness $ \mathcal{C}_{\Lambda_{1}}$ to $\tau_x$ of Eq.~(\ref{taux}) to find
\begin{equation}
Tr\left[ \mathcal{C}_{\Lambda_{1}}\tau_x\right]=\frac{3-x}{18\left(x^2+x+1\right)}.
\end{equation}
It is thus clear that the witness detects entanglement of the two-qutrit entangled state $\tau_x$ for $x>3$.
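Both the weak optimality of the witness and the detection condition $x>3$ admit a direct numerical check (a sketch, with the Choi matrix rebuilt as before):

```python
import numpy as np

def lam(X, a):
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * (X[0, 0] + X[1, 1]), -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], (X[1, 1] + X[2, 2]) / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[0, 0] / a]])

def choi(a):
    C = np.zeros((9, 9), dtype=complex)
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3)); E[i, j] = 1.0
            C += np.kron(E, lam(E, a)) / 3.0
    return C

def tau(x):
    t = np.zeros((9, 9))
    for i in (0, 4, 8):
        for j in (0, 4, 8):
            t[i, j] = 1.0
    t[1, 1] = t[5, 5] = t[6, 6] = x
    t[2, 2] = t[3, 3] = t[7, 7] = 1.0 / x
    return t / (3.0 * (1.0 + x + 1.0 / x))

W = choi(1.0)                                    # the witness C_{Lambda_1}
g = np.ones(3) / np.sqrt(3)
gg = np.kron(g, g)
assert abs(gg @ W @ gg) < 1e-12                  # weak optimality: <g g|W|g g> = 0

for x in (0.5, 2.0, 4.0, 10.0):
    val = np.trace(W @ tau(x)).real
    assert abs(val - (3.0 - x) / (18.0 * (x**2 + x + 1.0))) < 1e-12
    assert (val < 0) == (x > 3.0)                # detection exactly for x > 3
print("Tr[W tau_x] matches (3-x)/(18(x^2+x+1))")
```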
\section{Structural physical approximation and a new class of states with PPT entanglement}
The structural physical approximation (SPA) \citep{spa1,spa2} of a positive map is a convex mixture of a depolarizing map with the given map, chosen so that the resulting map is completely positive. A map $\Lambda_{dep} : \mathbb{M}_d \rightarrow \mathbb{M}_d$ is said to be depolarizing if $\Lambda_{dep} \left( X \right) = \frac{Tr\left( X \right) }{d} \mathbb{I}$ for $X \in \mathbb{M}_d $. Mathematically, the SPA map is the point at which the line joining the given map to the depolarizing map intersects the set of all completely positive maps. Operationally, the SPA of a positive map is obtained by adding some noise to the positive map.
An algorithm to find the optimal SPA map for a given positive map has been prescribed in \citep{spa1}. We now formulate the SPA corresponding to the one-parameter family of maps $\Lambda_{\alpha}$ and show that it gives rise to a class of PPT entangled states.
We have earlier considered the Choi matrix $\mathcal{C}_{\Lambda_{\alpha}}$ of the family of maps and found its least eigenvalue to be $\lambda^{'}= \frac{1-\sqrt{1+4 \alpha^2}}{6+6 \alpha^2}$ for $\alpha \in ( 0 , 1 ]$, which is negative within this parameter range. Defining $\lambda = \max \left[ 0 , -\lambda^{'} \right] $ and
following the prescription of \citep{spa1}, the optimal SPA map corresponding to $\Lambda_{\alpha}$ is given by
\begin{eqnarray}
\Lambda_{\alpha}^{opt} = p^* \Lambda_{dep} + (1-p^*) \Lambda_{\alpha}^{'} \nonumber
\end{eqnarray}
where $p^* = \frac{\lambda d d^{'} \beta_{\Lambda_{\alpha}}^{-1}}{\lambda d d^{'} \beta_{\Lambda_{\alpha}}^{-1}+ 1}$, $\Lambda_{dep} = \frac{Tr\left( \cdot \right) }{d^{'}} \mathbb{I}$, and $\Lambda_{\alpha}^{'}= \beta_{\Lambda_{\alpha}}^{-1} \Lambda_{\alpha}$ is the rescaled map. Here $d = d^{'} = 3$ are the input and output dimensions of the map $\Lambda_{\alpha}$, and, as a consequence of trace preservation, $ \beta_{\Lambda_{\alpha}} = 1$.
Therefore, the optimal SPA map $\Lambda_{\alpha}^{opt} : \mathbb{M}_3 \rightarrow \mathbb{M}_3 $ is given by
\\
\\
\begin{widetext}
\begin{eqnarray}
\Lambda_{\alpha}^{opt} \left( X \right) = \left[
\begin{small}
\begin{array}{ccc}
\frac{x_{33} \left(\sqrt{4 \alpha ^2+1}-1\right)+x_{11} \left(2 \alpha ^2+\sqrt{4 \alpha ^2+1}-1\right)+x_{22} \left(2 \alpha ^2+\sqrt{4 \alpha ^2+1}-1\right)}{2 \alpha ^2+3 \sqrt{4 \alpha ^2+1}-1} & \frac{2 x_{12} \alpha }{-2 \alpha ^2-3 \sqrt{4 \alpha ^2+1}+1} & \frac{2 x_{13} \alpha ^2}{-2 \alpha ^2-3 \sqrt{4 \alpha ^2+1}+1} \\
\frac{2 x_{21} \alpha }{-2 \alpha ^2-3 \sqrt{4 \alpha ^2+1}+1} & \frac{x_{11} \left(\sqrt{4 \alpha ^2+1}-1\right)+(x_{22}+x_{33}) \left(\sqrt{4 \alpha ^2+1}+1\right)}{2 \alpha ^2+3 \sqrt{4 \alpha ^2+1}-1} & \frac{2 x_{32} \alpha }{-2 \alpha ^2-3 \sqrt{4 \alpha ^2+1}+1} \\
\frac{2 x_{31} \alpha ^2}{-2 \alpha ^2-3 \sqrt{4 \alpha ^2+1}+1} & \frac{2 x_{23} \alpha }{-2 \alpha ^2-3 \sqrt{4 \alpha ^2+1}+1} & \frac{-x_{22}+x_{33} \left(2 \alpha ^2-1\right)+x_{11} \left(\sqrt{4 \alpha ^2+1}+1\right)+(x_{22}+x_{33}) \sqrt{4 \alpha ^2+1}}{2 \alpha ^2+3 \sqrt{4 \alpha ^2+1}-1} \\
\end{array}
\end{small}
\right ]
\end{eqnarray}
\end{widetext}
We note that the SPA map is also trace preserving. To check whether the SPA map is completely positive, we compute the corresponding Choi matrix, which is found to be
\begin{widetext}
\begin{eqnarray}
\mathcal{C}_{\Lambda_{\alpha}^{opt}} = \begin{tiny} \left[
\begin{array}{ccccccccc}
\frac{2 \alpha ^2+\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & \frac{2 \alpha }{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & 0 & 0 & \frac{2 \alpha ^2}{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} \\
0 & \frac{\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{\sqrt{4 \alpha ^2+1}+1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{2 \alpha ^2+\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & 0 & 0 \\
\frac{2 \alpha }{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & 0 & 0 & \frac{\sqrt{4 \alpha ^2+1}+1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & \frac{2 \alpha }{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{2 \alpha }{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & \frac{\sqrt{4 \alpha ^2+1}+1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 \\
\frac{2 \alpha ^2}{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{2 \alpha ^2+\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} \\
\end{array}
\right]
\end{tiny}
\end{eqnarray}
\end{widetext}
We next compute the eigenvalues of the $9 \times 9$ Choi matrix $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ and observe that it is positive semidefinite for the whole range of $\alpha$, as all of its eigenvalues are non-negative. This signifies that the SPA map is completely positive. Moreover, the Choi matrix is a valid density matrix, as $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ is Hermitian, positive semidefinite and of unit trace for $ \alpha \in (0, 1]$. Hence, we obtain a one-parameter family of two-qutrit states.
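The SPA construction can be sketched directly from the prescription above; with $d=d'=3$ and $\beta_{\Lambda_\alpha}=1$, the mixing weight reduces to $p^*=9\lambda/(9\lambda+1)$, and the resulting Choi matrix should come out positive semidefinite with unit trace:

```python
import numpy as np

def lam(X, a):
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * (X[0, 0] + X[1, 1]), -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], (X[1, 1] + X[2, 2]) / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[0, 0] / a]])

def lam_opt(X, a):
    """Optimal SPA: p* Lambda_dep + (1 - p*) Lambda_alpha, with p* = 9 lambda/(9 lambda + 1)."""
    lam_min = (1.0 - np.sqrt(1.0 + 4.0 * a**2)) / (6.0 + 6.0 * a**2)
    p = 9.0 * (-lam_min) / (9.0 * (-lam_min) + 1.0)
    return p * np.trace(X) / 3.0 * np.eye(3) + (1.0 - p) * lam(X, a)

def choi_map(f):
    C = np.zeros((9, 9), dtype=complex)
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3)); E[i, j] = 1.0
            C += np.kron(E, f(E)) / 3.0
    return C

for a in (0.25, 0.5, 1.0):
    C = choi_map(lambda X: lam_opt(X, a))
    assert np.min(np.linalg.eigvalsh(C)) > -1e-9   # positive semidefinite: the map is CP
    assert abs(np.trace(C) - 1.0) < 1e-12          # unit trace: a valid two-qutrit state
print("SPA Choi matrices are valid two-qutrit density matrices")
```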
The partial transposition of $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ is given by
\begin{widetext}
\begin{small}
$\mathcal{C}^{T_{B}}_{\Lambda_{\alpha}^{opt}}$ = $\left(
\begin{array}{ccccccccc}
\frac{2 \alpha ^2+\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & \frac{2 \alpha }{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{\sqrt{4 \alpha ^2+1}+1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & \frac{2 \alpha ^2}{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & 0 \\
0 & \frac{2 \alpha }{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & \frac{2 \alpha ^2+\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{\sqrt{4 \alpha ^2+1}+1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 & \frac{2 \alpha }{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} \\
0 & 0 & 0 & 0 & 0 & \frac{\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 & 0 \\
0 & 0 & \frac{2 \alpha ^2}{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & 0 & 0 & \frac{\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{\sqrt{4 \alpha ^2+1}+1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} & 0 \\
0 & 0 & 0 & 0 & \frac{2 \alpha }{-6 \alpha ^2-9 \sqrt{4 \alpha ^2+1}+3} & 0 & 0 & 0 & \frac{2 \alpha ^2+\sqrt{4 \alpha ^2+1}-1}{6 \alpha ^2+9 \sqrt{4 \alpha ^2+1}-3} \\
\end{array}
\right)$
\end{small}
\end{widetext}
We compute its eigenvalues and plot them with respect to $\alpha$ in Fig.~2. We note that among the nine eigenvalues, one is negative in the interval $\alpha \in (0, 1)$, while the other eight are positive throughout this interval.
Hence, the states in this parameter range are NPT and therefore entangled. Interestingly, for the parameter value $\alpha= 1$, all the eigenvalues of the partially transposed matrix are non-negative, and hence the state $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ is PPT for $\alpha = 1$.
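The NPT/PPT boundary at $\alpha=1$ can be confirmed by a blockwise partial transposition of the SPA Choi matrix (a numerical sketch):

```python
import numpy as np

def lam(X, a):
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * (X[0, 0] + X[1, 1]), -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], (X[1, 1] + X[2, 2]) / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[0, 0] / a]])

def lam_opt(X, a):
    lam_min = (1.0 - np.sqrt(1.0 + 4.0 * a**2)) / (6.0 + 6.0 * a**2)
    p = 9.0 * (-lam_min) / (9.0 * (-lam_min) + 1.0)
    return p * np.trace(X) / 3.0 * np.eye(3) + (1.0 - p) * lam(X, a)

def choi_map(f):
    C = np.zeros((9, 9), dtype=complex)
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3)); E[i, j] = 1.0
            C += np.kron(E, f(E)) / 3.0
    return C

def ptB(rho):
    """Partial transposition on the second qutrit, acting blockwise."""
    out = rho.copy()
    for i in range(3):
        for j in range(3):
            out[3*i:3*i+3, 3*j:3*j+3] = rho[3*i:3*i+3, 3*j:3*j+3].T
    return out

for a in (0.25, 0.5, 0.9):
    spec = np.linalg.eigvalsh(ptB(choi_map(lambda X: lam_opt(X, a))))
    assert spec.min() < -1e-4                  # NPT, hence entangled, for alpha < 1
spec1 = np.linalg.eigvalsh(ptB(choi_map(lambda X: lam_opt(X, 1.0))))
assert spec1.min() > -1e-9                     # PPT at alpha = 1
print("SPA states are NPT for alpha < 1 and PPT at alpha = 1")
```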
\begin{figure}
\caption{Eigenvalues of the partially transposed Choi matrix $\mathcal{C}^{T_{B}}_{\Lambda_{\alpha}^{opt}}$ as functions of $\alpha$.}
\label{fig2}
\end{figure}
We are interested in finding whether the state $\mathcal{C}_{\Lambda_{1}^{opt}}$ is entangled. It may be noted that, since we are dealing with a two-qutrit system, the partial transposition criterion is no longer sufficient for entanglement detection.
We hence adopt the covariance matrix criterion \cite{GHGE_07, GGHE_08}. As local orthogonal observables we can take the $3 \times 3$ identity matrix together with the eight Gell-Mann matrices, normalized so that $Tr\left[ H_{i} H_{j} \right] = \delta_{ij}$, since they are mutually orthogonal and Hermitian.
We take the Choi state $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ and obtain its two reduced density matrices $\mathcal{C}_{\Lambda_{\alpha}^{opt}}^{A}$ and $\mathcal{C}_{\Lambda_{\alpha}^{opt}}^{B}$. The covariance matrix criterion \cite{GHGE_07, GGHE_08} states that for separable states,
\begin{eqnarray}
\label{cmatrix}
\vert\vert C \vert\vert_{1} \leq \sqrt{\left( 1- Tr \left[ \left(\mathcal{C}_{\Lambda_{\alpha}^{opt}}^{A} \right) ^{2}\right] \right) \left( 1- Tr \left[ \left( \mathcal{C}_{\Lambda_{\alpha}^{opt}}^{B}\right) ^{2}\right] \right)}
\end{eqnarray}
where $\vert\vert \cdot \vert\vert_{1}$ stands for the trace norm and the components of the matrix $C$ are given by
\begin{eqnarray}
C_{ij}= \langle H_{i}^{A} \otimes H_{j}^{B} \rangle - \langle H_{i}^{A} \rangle \langle H_{j}^{B} \rangle,
\label{Cmat}
\end{eqnarray}
where $H_{i}^{A}$ and $H_{j}^{B}$ denote local orthogonal observables on the two sides. We
evaluate the matrix $C$ using the state $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ and its reduced density matrices, and find that the LHS of Eq.~(\ref{cmatrix}) is strictly greater than the RHS for all values of $\alpha$ in $ \left(0 , 1 \right] $. The result is illustrated in Fig.~3. This certifies the presence of entanglement in the state $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ for $\alpha \in \left(0 , 1 \right] $. Therefore, the state corresponding to the parameter value $\alpha = 1 $, \textit{i.e.}, $\mathcal{C}_{\Lambda_{1}^{opt}}$, is PPT entangled. Thus, the SPA map $\Lambda_{1}^{opt}$ generates a two-qutrit PPT entangled state $\mathcal{C}_{\Lambda_{1}^{opt}}$.
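The covariance-matrix computation can be sketched as follows, with the nine local orthogonal observables normalized so that $Tr[H_iH_j]=\delta_{ij}$ (the identity divided by $\sqrt{3}$, the Gell-Mann matrices divided by $\sqrt{2}$); the trace norm is the sum of singular values:

```python
import numpy as np

def lam(X, a):
    N = 1.0 / (a + 1.0 / a)
    return N * np.array([
        [a * (X[0, 0] + X[1, 1]), -X[0, 1], -a * X[0, 2]],
        [-X[1, 0], (X[1, 1] + X[2, 2]) / a, -X[2, 1]],
        [-a * X[2, 0], -X[1, 2], a * X[2, 2] + X[0, 0] / a]])

def lam_opt(X, a):
    lam_min = (1.0 - np.sqrt(1.0 + 4.0 * a**2)) / (6.0 + 6.0 * a**2)
    p = 9.0 * (-lam_min) / (9.0 * (-lam_min) + 1.0)
    return p * np.trace(X) / 3.0 * np.eye(3) + (1.0 - p) * lam(X, a)

def choi_map(f):
    C = np.zeros((9, 9), dtype=complex)
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3)); E[i, j] = 1.0
            C += np.kron(E, f(E)) / 3.0
    return C

def loos():
    """Nine local orthogonal observables with Tr[H_i H_j] = delta_ij."""
    H = [np.eye(3) / np.sqrt(3)]
    for i, j in ((0, 1), (0, 2), (1, 2)):
        S = np.zeros((3, 3), dtype=complex); S[i, j] = S[j, i] = 1.0
        A = np.zeros((3, 3), dtype=complex); A[i, j] = -1j; A[j, i] = 1j
        H += [S / np.sqrt(2), A / np.sqrt(2)]
    H += [np.diag([1.0, -1.0, 0.0]) / np.sqrt(2), np.diag([1.0, 1.0, -2.0]) / np.sqrt(6)]
    return H

def cmc_gap(rho):
    H = loos()
    ra = np.einsum('ikjk->ij', rho.reshape(3, 3, 3, 3))   # reduced state on A
    rb = np.einsum('kikj->ij', rho.reshape(3, 3, 3, 3))   # reduced state on B
    C = np.array([[(np.trace(rho @ np.kron(Hi, Hj))
                    - np.trace(ra @ Hi) * np.trace(rb @ Hj)).real
                   for Hj in H] for Hi in H])
    lhs = np.linalg.norm(C, 'nuc')                        # trace norm ||C||_1
    rhs = np.sqrt((1.0 - np.trace(ra @ ra).real) * (1.0 - np.trace(rb @ rb).real))
    return lhs - rhs

for a in (0.5, 1.0):
    assert cmc_gap(choi_map(lambda X: lam_opt(X, a))) > 0.01
print("covariance matrix criterion violated, including at the PPT point alpha = 1")
```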
\begin{figure}
\caption{Test of the covariance matrix criterion, Eq.~(\ref{cmatrix}), for the state $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ with $\alpha \in (0,1]$.}
\label{fig3}
\end{figure}
It may be further noted that the state $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ is NPT for parameter values $\alpha \in \left( 0 , 1 \right) $ and is therefore entangled.
For the parameter value $\alpha = 1$ the state is PPT, and its entanglement is detected via the covariance matrix criterion. Moreover, from Fig.~3 it is clear that the covariance matrix criterion also detects the entanglement in the range $\left( 0 , 1 \right) $, where the state is NPT.
Finally, let us check whether the PPT entangled state $\mathcal{C}_{\Lambda_{1}^{opt}}$ can be detected by some other existing positive maps.
PPT entangled states are considered a weak form of entanglement that is usually very hard to detect. As discussed earlier, indecomposable maps are necessary to detect PPT entangled states. The celebrated Choi map $\phi_{Choi} : \mathbb{M}_3 \rightarrow \mathbb{M}_3$ \citep{Choi75}, one of the first examples of an indecomposable map in the literature, is defined as
\begin{eqnarray}
\phi_{Choi} \left( X \right) = \begin{bmatrix}
x_{11}+ x_{33} & -x_{12}& - x_{13}\\
-x_{21}&x_{22}+x_{11}&-x_{23}\\
- x_{31}&-x_{32}& x_{33}+x_{22}
\end{bmatrix}
\end{eqnarray}
where
\begin{eqnarray}
X= \begin{bmatrix}
x_{11}&x_{12}&x_{13}\\
x_{21}&x_{22}&x_{23}\\
x_{31}&x_{32}&x_{33}
\end{bmatrix} \in \mathbb{M}_3. \nonumber
\end{eqnarray}
A positive map $\Lambda$ is said to detect an entangled state $\kappa$ if and only if $\mathbb{I} \otimes \Lambda \left( \kappa \right) \ngeqslant \Theta$, where $\Theta$ stands for the zero operator, \textit{i.e.}, if the output operator has at least one negative eigenvalue. It can be checked that $\mathbb{I} \otimes \phi_{Choi} \left( \mathcal{C}_{\Lambda_{\alpha}^{opt}} \right) \geqslant \Theta ~~ \forall \alpha \in ( 0 , 1 ]$.
Hence, the Choi map cannot detect the above PPT entangled state.
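This kind of positivity check is easy to reproduce numerically. The sketch below (illustrative, with hypothetical helper names) applies $\mathbb{I} \otimes \phi_{Choi}$ blockwise, using the standard definition of the Choi map, and inspects the minimum eigenvalue of the output; since the state $\mathcal{C}_{\Lambda_{\alpha}^{opt}}$ is not reproduced here, the maximally entangled and maximally mixed two-qutrit states are used as stand-ins.

```python
import numpy as np

def choi_map(X):
    """Standard Choi map on M_3: off-diagonal entries are negated and the
    diagonal becomes (x11 + x33, x22 + x11, x33 + x22)."""
    out = -np.asarray(X, dtype=complex)
    out[0, 0] = X[0, 0] + X[2, 2]
    out[1, 1] = X[1, 1] + X[0, 0]
    out[2, 2] = X[2, 2] + X[1, 1]
    return out

def apply_id_otimes(phi, rho, d=3):
    """Apply I (x) phi to a d^2 x d^2 bipartite state by acting with phi on
    the B-subsystem blocks B_kl of rho = sum_kl |k><l| (x) B_kl."""
    R = rho.reshape(d, d, d, d)             # indices (k, b, l, b')
    out = np.empty_like(R, dtype=complex)
    for k in range(d):
        for l in range(d):
            out[k, :, l, :] = phi(R[k, :, l, :])
    return out.reshape(d * d, d * d)
```

Since the Choi map is positive but not completely positive, $\mathbb{I} \otimes \phi_{Choi}$ produces a negative eigenvalue on the maximally entangled state, while any separable input (e.g.\ the maximally mixed state) stays positive semidefinite.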
Recently, Miller and Olkiewicz \citep{Miller15} introduced another indecomposable map $\phi_{MO}$ on $\mathbb{M}_3$, given by
\begin{eqnarray}
\phi_{MO} \left( X \right) = \begin{bmatrix}
\dfrac{1}{2} (x_{11}+ x_{22}) & 0& \frac{1}{\sqrt{2}} x_{13}\\
0&\dfrac{1}{2} (x_{11}+ x_{22})&\frac{1}{\sqrt{2}}x_{32}\\
\frac{1}{\sqrt{2}} x_{31}&\frac{1}{\sqrt{2}}x_{23}& x_{33}
\end{bmatrix}
\end{eqnarray}
where
\begin{eqnarray}
X= \begin{bmatrix}
x_{11}&x_{12}&x_{13}\\
x_{21}&x_{22}&x_{23}\\
x_{31}&x_{32}&x_{33}
\end{bmatrix} \in \mathbb{M}_3. \nonumber
\end{eqnarray}
It can be again checked that $\mathbb{I} \otimes \phi_{MO} \left( \mathcal{C}_{\Lambda_{\alpha}^{opt}} \right) \geqslant \Theta ~~ \forall \alpha \in ( 0 , 1 ]$.
Hence the above map also cannot detect the PPT entangled state $\mathcal{C}_{\Lambda_{\alpha}^{opt}} $.
\section{Conclusions}
Bound entangled states are hard to find and detect. In this work, we have constructed a one-parameter family of indecomposable positive maps on a three-dimensional Hilbert space. These maps are shown to detect entanglement in a certain class of two-qutrit PPT entangled states. Through our proposed non-completely positive indecomposable map we are additionally able to find a new class of entangled states, among which there exists a PPT entangled state.
We have further constructed a weak optimal entanglement witness from one of these maps and have given its representation in terms of local observables. This paves the way to physically implement this witness towards detection of the two-qutrit bound entangled state.
Moreover, we have also considered the structural physical approximation \cite{spa1, spa2} of the proposed positive map. This leads to a large class of NPT entangled states, but more interestingly, we have found a unique bound entangled state which cannot be detected by various other well-known non-completely positive maps \citep{Choi75,Miller15}. PPT entangled states have been constructed earlier from indecomposable positive maps using geometrical methods \cite{Ha03, HA04}. In the present analysis we have devised a new procedure of constructing PPT entangled states employing the structural physical approximation. To conclude, this work
motivates further investigations of positive maps and their applications in entanglement theory in higher dimensions.
{\it Acknowledgements:} NG would like to acknowledge support from the Research Initiation Grant BITS/GAU/RIG/2019/H0680
of BITS-Pilani, Hyderabad. ASM acknowledges support from the
project DST/ICPS/QuEST/Q98 from the Department of Science and Technology, India. BB acknowledges the support from DST INSPIRE programme.
\end{document}
\begin{document}
\title{Combining Real-World and Randomized Control Trial Data Using Data-Adaptive Weighting via the On-Trial Score}
\author{Joanna Harton$^1$, Brian Segal$^2$, Ronac Mamtani$^3$, \\ Nandita Mitra$^{1*}$, and Rebecca A. Hubbard$^{1*}$}
\date{}
\maketitle
\footnotetext[1]{Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA, USA}
\footnotetext[2]{Indigo Ag, Boston, MA, USA}
\footnotetext[3]{Penn Medicine, Philadelphia, PA, USA}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{Co-senior authors}
\footnotetext{\textbf{Corresponding author:} Joanna Harton, Department of Biostatistics, Epidemiology, and Informatics, 423 Guardian Drive, Philadelphia, PA 19104, USA. \textbf{Email:} jograce@pennmedicine.upenn.edu}
\section*{Abstract}
Clinical trials with a hybrid control arm (a control arm constructed from a combination of randomized patients and real-world data on patients receiving usual care in standard clinical practice) have the potential to decrease the cost of randomized trials while increasing the proportion of trial patients given access to novel therapeutics. However, due to stringent trial inclusion criteria and differences in care and data quality between trials and community practice, trial patients may have systematically different outcomes compared to their real-world counterparts. We propose a new method for analyses of trials with a hybrid control arm that efficiently controls bias and type I error. Under our proposed approach, selected real-world patients are weighted by a function of the ``on-trial score,'' which reflects their similarity to trial patients. In contrast to previously developed hybrid control designs that assign the same weight to all real-world patients, our approach upweights real-world patients who more closely resemble randomized control patients while dissimilar patients are discounted. Estimates of the treatment effect are obtained via Cox proportional hazards models. We compare our approach to existing approaches via simulations and apply these methods to a study using electronic health record data. Our proposed method is able to control type I error, minimize bias, and decrease variance when compared to using only trial data in nearly all scenarios examined. Therefore, our new approach can be used when conducting clinical trials by augmenting the standard-of-care arm with weighted patients from the EHR to increase power without inducing bias.
\section{Introduction}
Randomized clinical trials are the gold standard for testing a new treatment, although there are potential disadvantages to running a traditional clinical trial. Clinical trials for rare diseases can take a long time to accrue patients due to the rarity of the disease, which lengthens the clinical trial and drug approval process. Additionally, if a prior Phase II trial has shown superiority of the new treatment over the standard treatment, it may not be ethical to randomize patients in a 1:1 ratio; instead, a 2:1 or 3:1 ratio favoring the intervention over the standard-of-care may be preferred. This can result in lower power and the inability to detect an effect even if one truly exists.
It is therefore appealing to consider combining clinical trial data on patients receiving a novel treatment with data on patients receiving the control therapy derived from electronic health records (EHR). \cite{ventz_design_2019} The appeal of including external patients receiving the standard-of-care is that it reduces or even completely eliminates the need to randomize patients to a control arm in the current trial. \cite{schmidli_beyond_2020} In a trial with an external control arm, all data on the standard-of-care are derived from the EHR, while in a hybrid control arm, external patients receiving the standard-of-care in an EHR are combined with randomized trial patients in the control arm. \cite{burcu_real-world_2020}
Electronic health records contain a vast amount of data that can be relatively easily leveraged for research. These data are observational by nature, and, as such, there are a few key features of EHR data that are worth noting. First, EHRs were developed for clinical care and billing purposes. Therefore, some of the information that researchers may be interested in may not be collected or may be contained in narrative text notes, which are difficult to analyze. Furthermore, the healthcare system in the United States, as well as in many other countries, is fragmented such that a patient's medical records may be spread across the databases of multiple healthcare systems, which can result in an incomplete or inadequate picture of a patient's health when relying on data from an individual EHR database.
Though conducting research with EHR data has distinct challenges, approaches have been developed to make beneficial use of this data source.
In 1976, Pocock proposed a method to combine randomized patients receiving the standard-of-care from historical clinical trials with intervention arm patients from a new trial to address the fact that many studies of the day examining the efficacy of a new treatment did not contain a randomized control arm, which made it difficult to draw causal conclusions. \cite{pocock_combination_1976} The approach represents a hybrid control arm combining patients randomized to the trial control arm with patients from the control arm of a historical trial. Pocock proposed six criteria for evaluating what constitutes an acceptable historical control arm as well as how much weight to assign to historical control patients relative to randomized control patients. \cite{pocock_combination_1976}
Pocock's six criteria can also be applied to the use of EHR data to construct the hybrid control arm of a trial with varying success. These six proposed guidelines along with considerations for application to an EHR-derived hybrid control arm are presented below:
\begin{enumerate}
\item \emph{``Such a group must have received a precisely defined standard treatment which must be the same as the treatment for the randomized controls."} Patients derived from EHR data can be selected such that they are receiving the same primary treatment as the randomized trial control patients. However, supportive care and the care environment may differ between the trial and routine clinical practice.
\item \emph{``The group must have been part of a recent clinical study which contained the same requirements for patient eligibility."} By definition the EHR patients are not part of a recent clinical trial but efforts should be made to identify an EHR-derived cohort using eligibility criteria as similar to the trial as possible. Due to limitations of EHR data capture, it may be difficult to apply all clinical trial inclusion/exclusion criteria. \cite{ramsey_using_2020}
\item \emph{``The methods of treatment evaluation must be the same."} This criterion may or may not be met, depending on the outcome and method of outcome ascertainment used in the trial. While it is imperative for the outcome measure to be the same, outcome ascertainment may differ between the trial and EHR data capture. For example, even an outcome such as death can have sensitivity between $30\%$ and $90\%$, meaning that there are deaths that are not recorded in the EHR. \cite{carrigan_evaluation_2019}
\item \emph{``The distributions of important patient characteristics in the group should be comparable with those in the new trial."} Patients receiving the control treatment in routine care may differ in many ways from patients participating in a trial. The requirement of the same distribution of patient characteristics in the EHR patient pool as the trial patients is possible but highly unlikely to be exact; similarity, however, should be striven for.
\item \emph{``The previous study must have been performed in the same organization with largely the same clinical investigators."} This criterion is not able to be met by definition.
\item \emph{``There must be no other indications leading one to expect differing results between the randomized and historical controls."} This is unlikely to be met \textbf{completely} due to many differences between real-world and clinical trial care. However, if the EHR data is contemporaneous with the current trial it is reasonable to assume that care received by the EHR patients is similar to the care received by the trial control patients.
\end{enumerate}
Overall, if an EHR cohort can be constructed such that the patients are contemporaneous with the clinical trial, the same inclusion/exclusion criteria are applied to ensure comparable patient populations are under study, EHR control patients receive the same treatment as the clinical trial control arm patients, and the outcome and method of outcome ascertainment are as similar between EHR data capture and the trial as possible, it may be appropriate to use EHR data as part of a hybrid control arm.
There are several approaches to using external control patients when estimating treatment effects, including Pocock's approach which was among the first. Although these methods were developed in the context of historical controls, where control patients are drawn from a previously conducted clinical trial, they can also be applied to the case where control patients are drawn from an EHR database. This does not affect the implementation of the methods, though interpretation must be made carefully. We assume throughout this paper that only patients in the trial receive the intervention therapy. Pocock's method assumes that the parameter of interest is the difference between the mean outcome in the intervention arm patients and the mean outcome in the standard-of-care arm patients in the clinical trial. Pocock's method also assumes that the true mean value for the trial standard-of-care patients follows a normal distribution centered at a weighted sum of the sample mean of the trial standard-of-care patients and the sample mean of the external standard-of-care patients, where weights are selected based on the extent to which external standard-of-care patients are believed to be representative of trial standard-of-care patients and with a standard deviation also dependent upon these factors. \cite{pocock_combination_1976}
Chen, et al. introduced a Bayesian approach to estimating the effect of a treatment when using hybrid control arms that relies on power priors. \cite{chen_power_2000, ibrahim_power_2000} This approach incorporates external standard-of-care arm data with the current trial data by taking only a fraction of the information from each external standard-of-care patient. \cite{chen_power_2000, ibrahim_power_2000} The power prior can estimate many different estimands, including differences in means or proportions, hazard ratios, or odds ratios. In this method, the pool of external standard-of-care patients is weighted as a whole and the external standard-of-care patients are assigned anywhere from $0\%$ to $100\%$ of the weight that a current trial participant, whether intervention or standard-of-care, receives in the final model. \cite{chen_power_2000, ibrahim_power_2000} When $\alpha$, the weight assigned to external standard-of-care patients, is $0$, the power prior approach is equivalent to using no data from the external standard-of-care patients, and when $\alpha$ is $1$, the power prior approach is the same as fully pooling the external standard-of-care patients with the current trial data. \cite{chen_power_2000, ibrahim_power_2000} This method may be more interpretable than Pocock's method as the amount of information incorporated is quantified directly through $\alpha$ rather than through a variance parameter. \cite{chen_power_2000, ibrahim_power_2000} Similar to Pocock's method, the amount of information incorporated from the external standard-of-care patients must be prespecified by the researcher and sensitivity analyses are recommended to determine the robustness of results to choice of $\alpha$. \cite{chen_power_2000, ibrahim_power_2000, pocock_combination_1976}
Duan and Ye in 2008 \cite{duan_normalized_2008} and Neuenschwander, et al. in 2009 \cite{neuenschwander_note_2009} concurrently developed the normalized power prior approach, which extended the power prior model to estimate $\alpha$ from the data rather than using a prespecified $\alpha$. \cite{duan_normalized_2008, neuenschwander_note_2009} The normalized power prior approach allows for more weight to be allocated to the external standard-of-care patients when the external standard-of-care patients are similar in terms of outcome to the current trial patients and less weight when they are dissimilar. \cite{duan_normalized_2008, neuenschwander_note_2009}
These earlier methods focused on weighting external standard-of-care patients as a group, whether the weight is pre-specified or data-driven. More recent extensions have considered individual-weighting of external standard-of-care patients based on the similarity of each individual to patients in the trial standard-of-care arm. One such adaptation of the power prior approach proposes dividing the external standard-of-care patients into subgroups based on their similarity to the trial patients and assigning a weight to each subgroup. \cite{wang_propensity_2019} Another recently proposed method uses a modification of the propensity score, called the on-trial score, to create matches between the external standard-of-care patients and the trial standard-of-care patients in order to create a hybrid standard-of-care arm consisting of patients most similar to those in the clinical trial intervention arm. \cite{lin_propensity-score-based_2019}
In this paper, we propose a new data-adaptive weighting method that addresses the limitation of assigning a single weight to the entire group of external standard-of-care patients by assigning weights to each individual in the external standard-of-care arm based on similarity to trial patients using the on-trial score. The use of individualized weights helps to account for the fact that patients included in EHR databases may be more heterogeneous than patients included in clinical trials. The proposed approach incorporates more information from standard-of-care patients who are more similar to trial participants than those who are not.
The structure of the paper is as follows: in section 2 we define notation, outline the existing approaches used for hybrid standard-of-care arms, and introduce our proposed method for combining trial standard-of-care patients with external standard-of-care patients. In section 3, simulations are presented to assess the relative performance of our new method compared to existing methods. Section 4 applies all methods discussed to a clinical study for patients with metastatic castration-resistant prostate cancer comparing the standard-of-care treatment of prednisone with the new treatment of abiraterone acetate in conjunction with prednisone, augmented with external standard-of-care patients from a pseudo-EHR, and section 5 provides a summary and discussion.
\section{Methods}
\subsection{Notation}
We assume the existence of a trial of size $N_{Trial}$ and an external data source, e.g.\ an EHR database consisting of patients meeting comparable inclusion/exclusion criteria and receiving the same treatment as trial standard-of-care arm patients, of size $N_{External}$. Let $N = N_{Trial} + N_{External}$. Let each patient in each database have information on a set of $k$ covariates, $\mathbf{X}_{N \times k} = \{\mathbf{X}_1, \mathbf{X}_2, \hdots, \mathbf{X}_k \}$. Additionally, let $\mathbf{R}$ be an indicator such that $\mathbf{R}_i$ takes the value of 1 if the $i^{th}$ patient is in the external data source and 0 if the $i^{th}$ patient is enrolled in the trial. Similarly, let $\mathbf{T}$ be a treatment indicator such that $\mathbf{T}_i$ takes a value of 0 if the $i^{th}$ patient receives the standard-of-care and 1 if the $i^{th}$ patient receives the intervention. In our numerical experiments below, we have a time-to-event outcome variable, $\mathbf{Y} = \min(\mathbf{F},\mathbf{C})$, where $\mathbf{F}$ is the time of the event of interest (failure time) and $\mathbf{C}$ is the censoring time. Also, let $\mathbf{S}$ be a status indicator where $\mathbf{S}_i$ takes a value of 1 if $\mathbf{F}_i<\mathbf{C}_i$ and 0 otherwise. Extensions to outcomes of other variable types follow directly from the likelihood-based formulation below. We denote the data available for the set of external standard-of-care patients as $\mathbf{D_0} = \{\mathbf{X}, \mathbf{S}, \mathbf{Y}|\mathbf{R}_i=1 \}$, the set of randomized standard-of-care patients as $\mathbf{D_C} = \{\mathbf{X}, \mathbf{S}, \mathbf{Y}|\mathbf{R}_i=0, \mathbf{T}_i=0 \}$, the set of randomized intervention arm patients as $\mathbf{D_T} = \{\mathbf{X}, \mathbf{S}, \mathbf{Y}|\mathbf{R}_i=0, \mathbf{T}_i=1 \}$, and the set of all trial patients as $\mathbf{D} = (\mathbf{D_T}, \mathbf{D_C})$.
We also let $\theta$ denote a target parameter of interest that represents treatment efficacy which could be parameterized as a difference in mean event times or hazard ratios comparing intervention arm and standard-of-care patients.
Below we summarize existing and proposed approaches to incorporating data from external standard-of-care patients into an analysis of treatment efficacy.
\subsection{Existing Approaches to Hybrid Control Trials}
\subsubsection{Na\"{i}ve Approaches}
Ignoring the external data and using only the current trial data serves as a positive control with regards to the minimum bias that can be attained when estimating the parameter of interest. In this method, only the data from the patients enrolled in the trial are analyzed and the patients from the external data source are left out.
Fully pooling the external data with the trial data serves as a negative control with regards to the amount of bias that is likely to occur when estimating the treatment effect. In this case, the external patients are given the same weight as the trial patients and all external patients are included in the analysis.
\subsubsection{Bayesian Approaches}
The power prior (PP) approach combines the external patients with the trial patients such that each external patient has a weight less than 1. \cite{ibrahim_power_2000} The power prior approach assigns the same weight, $\alpha$, such that $0<\alpha < 1$, to all patients in the external data source and the value of $\alpha$ is prespecified by the researcher. The power prior approach proposes the following prior distribution for $\theta$: $\pi(\theta | {\mathbf{D_0}}, \alpha) \propto \mathscr{L}(\theta | {\mathbf{D_0}})^\alpha \pi(\theta)$ which yields the following posterior distribution for $\theta$: $\pi(\theta | {\mathbf{D_0}}, \mathbf{D}, \alpha) \propto \mathscr{L}(\theta | \mathbf{D})[\mathscr{L}(\theta | {\mathbf{D_0}})^\alpha \pi(\theta)]$.
The normalized power prior approach (NPP) is similar to the power prior approach except that $\alpha$ is estimated from the data rather than being pre-specified by the researcher. \cite{duan_normalized_2008} The normalized power prior approach specifies a conditional prior for $\theta$ given $\alpha$ and a marginal distribution for $\alpha$. The normalized power prior approach has the form:
\begin{equation*}
\pi(\theta, \alpha | {\mathbf{D_0}}) \propto \frac{\mathscr{L}(\theta | {\mathbf{D_0}})^\alpha \pi(\theta) }{\int_\Theta \mathscr{L}(\theta | {\mathbf{D_0}})^\alpha \pi(\theta) d\theta} \pi(\alpha),
\end{equation*}
which results in the following posterior:
\begin{equation*}
\pi(\theta, \alpha | {\mathbf{D_0}}, \mathbf{D}) \propto \mathscr{L}(\theta | \mathbf{D}) \pi(\theta, \alpha | {\mathbf{D_0}}) \propto \frac{\mathscr{L}(\theta | \mathbf{D})\mathscr{L}(\theta | {\mathbf{D_0}})^\alpha \pi(\theta) \pi(\alpha)}{\int_\Theta \mathscr{L}(\theta | {\mathbf{D_0}})^\alpha \pi(\theta) d\theta}.
\end{equation*}
If $\pi(\alpha)$ is proper then the normalized power prior will also be proper.
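To build intuition for how $\alpha$ discounts the external data, it helps to work out a toy conjugate case. Assuming normal outcomes with known variance and a flat prior $\pi(\theta)$ (simplifications not made in the paper), the power-prior posterior is available in closed form:

```python
import numpy as np

def power_prior_normal(y_trial, y_ext, alpha, sigma2=1.0):
    """Power-prior posterior for a normal mean theta with known variance
    sigma2 and flat prior: the external likelihood is raised to the power
    alpha, giving an effective sample size of n + alpha * n0.
    Returns (posterior mean, posterior variance)."""
    n, n0 = len(y_trial), len(y_ext)
    ess = n + alpha * n0                        # effective sample size
    mean = (np.sum(y_trial) + alpha * np.sum(y_ext)) / ess
    return mean, sigma2 / ess
```

At $\alpha=0$ this reduces to the trial-only estimate and at $\alpha=1$ to full pooling, mirroring the limits described above.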
\subsubsection{Lin's Method}
Lin's method uses an on-trial score, similar to a propensity score, where the outcome of interest is inclusion in the trial, to construct a matched set of external standard-of-care patients and weight their likelihood contribution. \cite{lin_propensity-score-based_2019} The on-trial score is estimated as the probability that a patient is in the clinical trial given their baseline covariates using a logistic regression model. Next, optimal pair matching is performed using the on-trial score so that each trial patient receiving the intervention is matched with an external standard-of-care patient. The selected external standard-of-care patients form a pool from which $N_{D_T}-N_{D_C}$ patients are randomly drawn so that the augmented trial has a 1:1 ratio of treated to standard-of-care patients. In the outcome model the external standard-of-care patients are weighted by their on-trial scores while the trial patients are given a weight of one. \cite{lin_propensity-score-based_2019}
\subsection{Proposed Approach: Data-Adaptive Weighting}
In our proposed approach, we let the on-trial score be defined as the probability that the $i^{th}$ patient is included in the trial given the observed baseline covariates, $P(R_i=0|\mathbf{X}_i) = e(\mathbf{X}_i)$. To maximize the similarity between external and randomized standard-of-care patients, we then limit the set of external standard-of-care patients to the subset with the highest on-trial scores, such that the number of external standard-of-care patients selected results in a hybrid control arm of the same size as the intervention arm. Let $D^*_0$ represent data for the subset of $D_0$ with the $N_{D_T}-N_{D_C}$ largest on-trial scores. The on-trial scores are then transformed to obtain values for $\hat{\gamma}_i$ such that $\hat{\gamma}_{i} = e(\mathbf{X}_i)/(1-e(\mathbf{X}_i))$ and standardized such that $\hat{\gamma}_i^* = \hat{\gamma}_i N_{\mathbf{D^*_0}}/\sum_{i=1}^{N_{\mathbf{D^*_0}}} \hat{\gamma}_i$. The inverse odds weight is used as we are interested in the average treatment effect on the treated, or in this case, the average treatment effect for those on-trial. This weighting method assigns all trial patients their full weight and only up- or down-weights the selected external standard-of-care patients.
Estimation for data-adaptive weighting then uses a prior for $\theta$ of the form:
\begin{equation}
\pi(\theta | \bm{\hat{\gamma}^*}, {\mathbf{D^*_0}}) \propto \prod_{i=1}^{N_{\mathbf{D^*_0}}} \big[ \mathscr{L}(\theta | {{\mathbf{D^*_0}}_i})^{\hat{\gamma}^*_i} \big] \pi(\theta),
\end{equation}
which gives the following posterior:
\begin{equation}
\pi(\theta | \bm{\hat{\gamma}^*}, {\mathbf{D^*_0}}, \mathbf{D}) \propto \prod_{j=1}^{N_{\mathbf{D}}} \big[ \mathscr{L}(\theta | \mathbf{D}_j) \big] \prod_{i=1}^{N_{\mathbf{D^*_0}}} \big[ \mathscr{L}(\theta | {{\mathbf{D^*_0}}_i})^{\hat{\gamma}^*_i} \big] \pi(\theta)
\label{daw_posterior}
\end{equation}
We note that all patients are used in the estimation of the on-trial score as, assuming trial patients are randomly assigned to intervention arm, the distribution of baseline covariates is the same for standard-of-care arm and intervention arm patients. Here the on-trial score is estimated via a logistic regression, rather than being jointly estimated with $\theta$. However, the on-trial score may be estimated using more flexible modeling such as a random forest or ensemble machine learning if desired.
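The weight construction above can be sketched as follows. This is an illustrative implementation with hypothetical helper names; the on-trial score is fit here with a plain Newton-Raphson logistic regression, but any standard routine could be substituted:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Newton-Raphson logistic regression (intercept added); returns coefficients."""
    Z = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Z @ beta))
        H = Z.T @ (Z * (p * (1 - p))[:, None]) + 1e-8 * np.eye(Z.shape[1])  # small ridge for stability
        beta += np.linalg.solve(H, Z.T @ (y - p))
    return beta

def daw_weights(X_trial, X_ext, n_select):
    """Estimate the on-trial score e(x) = P(trial | x), keep the n_select most
    trial-like external patients, and return their standardized inverse-odds
    weights gamma*_i (trial patients implicitly keep weight 1)."""
    X = np.vstack([X_trial, X_ext])
    in_trial = np.r_[np.ones(len(X_trial)), np.zeros(len(X_ext))]
    beta = logistic_fit(X, in_trial)
    Z_ext = np.column_stack([np.ones(len(X_ext)), X_ext])
    e_ext = 1 / (1 + np.exp(-Z_ext @ beta))
    keep = np.argsort(e_ext)[-n_select:]            # highest on-trial scores
    gamma = e_ext[keep] / (1 - e_ext[keep])         # inverse odds
    return keep, gamma * n_select / gamma.sum()     # standardized weights
```

By construction the standardized weights of the selected external patients sum to the number of patients selected, and the selected externals are those most similar to the trial population.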
\subsection{Estimation}
The Bayesian estimation approach for the models presented above can, under certain conditions, be approximated using a frequentist analog. For example, in the case of the DAW method, when a non-informative prior is used for $\theta$, the prior for DAW, $\pi(\bm{\theta} | \mathbf{D_0}, \bm{\hat{\gamma}^*}) \propto \prod_i \big[ \mathscr{L}(\bm{\theta} | {\mathbf{D_0}}_i)^{\hat{\gamma}_i^*} \big] \pi(\bm{\theta})$, is equivalent to $\pi(\bm{\theta} | \mathbf{D_0}, \bm{\hat{\gamma}^*}) \propto \prod_i \mathscr{L}(\bm{\theta} | {\mathbf{D_0}}_i)^{\hat{\gamma}_i^*}$, and the posterior mode corresponds to the maximum likelihood estimator for a weighted parametric survival model. Consequently, a weighted Cox proportional hazards model with weights of 1 for trial patients and $\bm{\hat{\gamma}^*}$ for selected external standard-of-care patients will provide estimates similar to the Bayesian estimates with flat priors. This approach may be preferable to a fully Bayesian estimation approach because of its relative computational efficiency and insensitivity to prior specification. In the numerical experiments below, we investigate the performance of estimation using this weighted Cox approach for all methods described above.
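A minimal sketch of such a weighted Cox fit is given below, assuming a single treatment covariate, Breslow-type risk sets, and distinct event times; in practice standard software (e.g.\ a weighted \texttt{coxph} fit) would be used instead:

```python
import numpy as np

def weighted_cox(time, event, x, w, iters=30):
    """One-covariate Cox proportional hazards fit with case weights, via
    Newton-Raphson on the weighted partial likelihood (assumes distinct
    event times). Returns the estimated log hazard ratio."""
    order = np.argsort(-time)               # decreasing time: risk set = prefix
    d, x, w = event[order], x[order], w[order]
    b = 0.0
    for _ in range(iters):
        ew = w * np.exp(b * x)
        s0 = np.cumsum(ew)                  # sum over risk set of w_j exp(b x_j)
        s1 = np.cumsum(ew * x)
        s2 = np.cumsum(ew * x * x)
        m = d == 1                          # event indicators
        score = np.sum(w[m] * (x[m] - s1[m] / s0[m]))
        info = np.sum(w[m] * (s2[m] / s0[m] - (s1[m] / s0[m]) ** 2))
        b += score / info                   # Newton step
    return b
```

With all weights equal to one this reproduces the usual unweighted fit; assigning the selected external patients weights $\hat{\gamma}^*_i$ yields the DAW estimate.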
Our proposed method, data-adaptive weighting (DAW), builds on the power prior approach and Lin's approach. While the power prior approach and normalized power prior approach both use the same $\alpha$ value for all patients, DAW uses individual weights for the external patients depending on their similarity to the trial patients. Lin's approach uses individual weights for each of the external patients. However, the selected external patients are based upon matching on the on-trial score and selected subjects are directly weighted by the on-trial score.
\section{Simulation Study}
We conducted a simulation study to investigate the bias, efficiency, effective sample size, and type I error of the existing methods outlined above and the data-adaptive weighting method. Data were simulated to resemble a real-world study combining trial data with external standard-of-care data from an EHR database.
Data were simulated for four covariates ($\bm{X}$), a real-world indicator ($\bm{R}$), a treatment indicator ($\bm{T}$), a failure time ($\bm{F}$), and a censoring time ($\bm{C}$). We simulated data for trials of two different sizes ($100, 1000$), each with two different randomization ratios of the number of patients in the intervention arm to the number of patients in the standard-of-care arm (2:1 and 3:1). The number of external standard-of-care patients available was equal to the number of trial patients. These values were selected to mirror real-world scenarios for unbalanced clinical trials and provide enough potential EHR patients to distinguish between the performance of the various methods. The hazard ratio for failure for patients in the intervention arm versus randomized standard-of-care patients (treatment effect) was allowed to take values of $0.5, 0.75, 0.875,$ and $1$. Two different strengths of confounding of the relationship between baseline covariates and failure were also explored: mild confounding and strong confounding. We note that these covariates confound the relationship between enrollment in the trial and the outcome because the baseline covariate distribution differs between trial and external standard-of-care patients. Analyses limited to the trial population are unconfounded because there is no relationship between trial arm and baseline covariates due to randomization. The censoring rate was held constant across all simulation scenarios. External standard-of-care patients had censoring times arising from an exponential distribution with rate 0.4 and trial patients had censoring times arising from an exponential distribution with rate 0.1.
Relationships among the simulated variables and the complete set of distributions and parameter values used to simulate data, along with examples of EHR-derived covariates used to motivate the simulation study are provided in Table \ref{sims}.
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Data generation scheme and parameter values used for simulation study.}
\centering
\footnotesize
\begin{tabular}{|c|cc|c|}
\hline
\textbf{Variable} & \multicolumn{1}{c|}{\textbf{Trial Distribution}} & \textbf{External Distribution} & \textbf{Analogous Variable} \\ \hline
$X_1$ & \multicolumn{1}{c|}{$\text{Bernoulli}(0.5)$} & $\text{Bernoulli}(0.55)$ & Gender \\ \hline
$X_2$ & \multicolumn{1}{c|}{$\text{Bernoulli}(0.6)$} & $\text{Bernoulli}(0.4)$ & College Degree \\ \hline
$X_3$ & \multicolumn{1}{c|}{$\text{Normal}(60, 5)-60$} & $\text{Normal}(60, 10)-60$ & HDL Cholesterol \\ \hline
$X_4$ & \multicolumn{1}{c|}{$\text{Normal}(21, 2)-21$} & $\text{Normal}(23, 2)-21$ & BMI \\ \hline
T & \multicolumn{1}{c|}{$\text{Bernoulli}(t)$, $t=\{0.67, 0.75\}$} & 0 & Treatment Indicator \\ \hline
\multirow{2}{*}{$Y_{failure}$} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}$\text{Exp}[\log(\eta)T+\log(\bm{\beta})\cdot\bm{X}]$\end{tabular}} & \multirow{2}{*}{Failure Time} \\
& \multicolumn{1}{r}{\begin{tabular}[c]{@{}r@{}}$\eta =$\\ $\bm{\beta} =$\\ \textcolor{white}{.} \end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}$\{0.5, 0.75, 0.875, 1 \}$\\ $\{ (1.25, 0.67, 0.98, 1.06), $\\ $ (2.25, 0.4, 0.93, 1.21) \}$\end{tabular}} & \\ \hline
$Y_{censor}$ & \multicolumn{1}{c|}{$\text{Exp}(0.1)$} & $\text{Exp}(0.4)$ & Censoring Time \\ \hline
\end{tabular}
\label{sims}
\end{table}
\renewcommand{\arraystretch}{1}
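For concreteness, the generation scheme in Table \ref{sims} for trial patients can be sketched as follows (a minimal illustration; the function name and structure are ours, and we assume the bracketed expression in the table is the log of the exponential failure rate):

```python
import math
import random

def simulate_trial(n_trial=100, p_intervention=2/3, eta=0.75,
                   beta=(1.25, 0.67, 0.98, 1.06), seed=1):
    """Sketch of the Table data-generation scheme for trial patients.

    Assumes failure times are exponential with rate
    exp(log(eta)*T + sum_j log(beta_j)*X_j); trial censoring is Exp(0.1).
    """
    rng = random.Random(seed)
    rows = []
    for _ in range(n_trial):
        x = (int(rng.random() < 0.5),           # X1 ~ Bernoulli(0.5)
             int(rng.random() < 0.6),           # X2 ~ Bernoulli(0.6)
             rng.gauss(60, 5) - 60,             # X3, centered
             rng.gauss(21, 2) - 21)             # X4, centered
        t = int(rng.random() < p_intervention)  # 2:1 randomization
        rate = math.exp(math.log(eta) * t
                        + sum(math.log(b) * xi for b, xi in zip(beta, x)))
        failure = rng.expovariate(rate)
        censor = rng.expovariate(0.1)
        rows.append({"x": x, "t": t,
                     "time": min(failure, censor),
                     "event": int(failure <= censor)})
    return rows
```

External standard-of-care patients would be generated analogously with $T=0$, the external covariate distributions, and censoring rate 0.4.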
In numerical examples, the on-trial score was estimated using logistic regression including all baseline covariates as predictors. Weighted Cox proportional hazards models with a treatment indicator as the sole covariate were fit as the outcome model to estimate the treatment effect.
Each simulation scenario was repeated 1,000 times. We estimated bias relative to the true marginal treatment effect, empirical variance, 95\% confidence interval coverage probabilities, power (for scenarios with non-null treatment effects), and type I error (for scenarios with a null treatment effect). Power and type I error are estimated for hypothesis testing using a significance threshold of 0.05.
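These operating characteristics can be computed from the replicate estimates as sketched below (an illustrative helper, assuming estimates and standard errors on the log hazard ratio scale; the rejection rate is power under a non-null truth and type I error under the null):

```python
def summarize(estimates, std_errors, truth):
    """Empirical bias, variance, 95% CI coverage, and rejection rate over
    simulation replicates of a log hazard ratio estimate."""
    z = 1.96                                    # normal 97.5% quantile
    n = len(estimates)
    mean = sum(estimates) / n
    pairs = list(zip(estimates, std_errors))
    return {
        "bias": mean - truth,
        "variance": sum((e - mean) ** 2 for e in estimates) / (n - 1),
        "coverage": sum(e - z * s <= truth <= e + z * s for e, s in pairs) / n,
        "rejection": sum(abs(e) > z * s for e, s in pairs) / n,
    }
```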
We first examined the performance of alternative methods as we varied the ratio of patients in the intervention arm to standard-of-care patients in the trial. In these analyses, neither the proportion of trial patients in the intervention arm nor the number of EHR patients available as a function of the number of trial patients affected the pattern of the results. Therefore, only results for a 2:1 intervention arm to standard-of-care arm ratio with the same number of EHR patients available as trial patients are presented here. Sample Kaplan-Meier curves for each treatment hazard ratio and confounding level combination show the difference in survival over time across the three groups of patients (Supplemental Figure \ref{fig:sim_km}).
\subsection{Simulation Results}
\begin{figure}
\caption{Bias for methods when applied to a trial with a 2:1 randomization ratio. PP = power prior, NPP = normalized power prior, DAW = data-adaptive weighting.}
\label{fig:bias}
\end{figure}
\begin{figure}
\caption{Variance for methods when applied to a trial with a 2:1 randomization ratio. PP = power prior, NPP = normalized power prior, DAW = data-adaptive weighting.}
\label{fig:variance}
\end{figure}
\begin{figure}
\caption{Power for methods when applied to a trial with a 2:1 randomization ratio. $80\%$ power is marked with a dotted line. PP = power prior, NPP = normalized power prior, DAW = data-adaptive weighting.}
\label{fig:power}
\end{figure}
\begin{figure}
\caption{Coverage of the true treatment effect hazard ratio when a 95$\%$ confidence interval is used for methods when applied to a trial with a 2:1 randomization ratio. 95$\%$ coverage is marked with a dotted line. PP = power prior, NPP = normalized power prior, DAW = data-adaptive weighting.}
\label{fig:coverage}
\end{figure}
\begin{figure}
\caption{Zoomed-in view of coverage of the true treatment effect hazard ratio when a 95$\%$ confidence interval is used for methods when applied to a trial with a 2:1 randomization ratio. 95$\%$ coverage is marked with a dotted line. PP = power prior, NPP = normalized power prior, DAW = data-adaptive weighting.}
\label{fig:coverageZOOM}
\end{figure}
The results of the simulation study show that fully pooling the data from the available EHR patients with the trial data results in large biases under all conditions examined (Figure \ref{fig:bias}). As expected, using only the trial patient data results in negligible bias. The power prior method was also substantially biased across all three $\alpha$ values explored, with $\alpha = 0.25$ having the least bias and $\alpha = 0.75$ having the most bias (Figure \ref{fig:bias}). The normalized power prior was biased when the trial size was 100 but displayed minimal bias when the trial size was 1,000. This is explained by the fact that $\hat{\alpha}$ was estimated to be approximately 0.22--0.36 for the trial size of 100 and 0.01--0.03 for the trial size of 1,000 (Table \ref{tab:alpha_npp}). Therefore, although the normalized power prior approach performed well in terms of bias, this is only because little information from the EHR patients was incorporated. Both Lin's method and DAW displayed low bias across all scenarios examined, with DAW consistently having lower bias than Lin's method (Figure \ref{fig:bias}).
The variances of the estimates reflect the extent to which data from external standard-of-care subjects were incorporated. The trial only approach had the largest variance and full pooling of all EHR patients with the trial patients had the smallest variance (Figure \ref{fig:variance}). The power prior approach had variance between the trial only and full pooling methods, with variance inversely proportional to $\alpha$. The normalized power prior had substantially smaller variance than the trial only approach when the trial size was 100 due to the larger $\hat{\alpha}$ values (Figure \ref{fig:variance}, Table \ref{tab:alpha_npp}). DAW had smaller variance than Lin's method under all conditions examined, owing to its larger effective sample size in every scenario (Figure \ref{fig:variance}, Table \ref{tab:ess}).
DAW was able to achieve the targeted 1:1 intervention to standard-of-care ratio, while Lin's method resulted in substantially smaller effective sample sizes. DAW includes about twice as many external patients as Lin's method does in the scenarios examined (Table \ref{tab:ess}).
The full pooling and power prior methods had high power due to their large effective sample sizes and treatment effect estimates that were biased away from the null; however, had the covariate effects been in the other direction, the bias would have been towards the null and these methods would have had lower power (Figures \ref{fig:power}, \ref{fig:bias}). The normalized power prior approach had higher power relative to the trial only method when the trial size was 100 but not when the trial size was 1,000. Lin's method and DAW had higher power than the trial only approach and were quite similar to one another when the trial size was 100; Lin's method had slightly higher power than DAW when the trial size was 1,000 (Figure \ref{fig:power}).
Clearly, unless the strength of confounding and the trial size are small, full pooling of all EHR patients or using the power prior with one of the $\alpha$ values examined provides very poor coverage of the true HR (Figure \ref{fig:coverage}). NPP only provides nominal coverage when the trial size is 1,000. Both Lin's method and DAW had nominal coverage in all scenarios except under strong confounding when the trial size was 1,000; in that case Lin's method had approximately $93$--$94\%$ coverage and DAW had $94\%$ coverage except when the marginal treatment hazard ratio was nearly 1, in which case the coverage dropped to $90\%$ (Figures \ref{fig:coverage}, \ref{fig:coverageZOOM}).
Type I error was controlled at the 5\% level when only the trial data were analyzed, and was poorly controlled under the full pooling and power prior methods (Table \ref{tab:type1error}). As expected, NPP controlled type I error when $\hat{\alpha}$ was small (i.e., sample size of 1,000) and when there was mild confounding (Tables \ref{tab:type1error}, \ref{tab:alpha_npp}). Both Lin's method and DAW controlled type I error under all scenarios examined except when there was strong confounding and a trial size of 1,000; type I error was slightly inflated in this case to around $6\%$ (Table \ref{tab:type1error}).
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\centering
\caption{Type I error for trials with a 2:1 randomization ratio. PP = power prior, NPP = normalized power prior, DAW = data-adaptive weighting.}
\label{tab:type1error}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{|r|c|c|c|c|}
\hline
& \multicolumn{2}{c|}{\textbf{Mild Confounding}} & \multicolumn{2}{c|}{\textbf{Strong Confounding}} \\ \hline
& \textbf{Trial = 100} & \textbf{Trial = 1,000} & \textbf{Trial = 100} & \textbf{Trial = 1,000} \\ \hline
\textbf{Trial Only} & 0.05 & 0.051 & 0.052 & 0.046 \\ \hline
\textbf{Full Pooling} & 0.126 & 0.716 & 0.356 & 0.999 \\ \hline
\textbf{PP,} $\bm{\alpha = 0.25}$ & 0.053 & 0.177 & 0.098 & 0.619 \\ \hline
\textbf{PP,} $\bm{\alpha = 0.5}$ & 0.069 & 0.403 & 0.192 & 0.953 \\ \hline
\textbf{PP,} $\bm{\alpha = 0.75}$ & 0.097 & 0.583 & 0.287 & 0.996 \\ \hline
\textbf{NPP} & 0.061 & 0.061 & 0.117 & 0.047 \\ \hline
\textbf{Lin} & 0.049 & 0.046 & 0.044 & 0.060 \\ \hline
\textbf{DAW} & 0.052 & 0.048 & 0.050 & 0.059 \\ \hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\centering
\caption{Effective sample size for trials with a 2:1 randomization ratio. PP = power prior, NPP = normalized power prior, DAW = data-adaptive weighting}
\label{tab:ess}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{|r|c|c|c|c|}
\hline
& \multicolumn{2}{c|}{\textbf{Mild Confounding}} & \multicolumn{2}{c|}{\textbf{Strong Confounding}} \\ \hline
& \textbf{Trial = 100} & \textbf{Trial = 1,000} & \textbf{Trial = 100} & \textbf{Trial = 1,000} \\ \hline
\textbf{Trial Only} & 100 & 1000 & 100 & 1000 \\ \hline
\textbf{Full Pooling} & 200 & 2000 & 200 & 2000 \\ \hline
\textbf{PP,} $\bm{\alpha = 0.25}$ & 125 & 1250 & 125 & 1250 \\ \hline
\textbf{PP,} $\bm{\alpha = 0.5}$ & 150 & 1500 & 150 & 1500 \\ \hline
\textbf{PP,} $\bm{\alpha = 0.75}$ & 175 & 1750 & 175 & 1750 \\ \hline
\textbf{NPP} & 135 & 1033 & 123 & 1013 \\ \hline
\textbf{Lin} & 116 & 1166 & 116 & 1166 \\ \hline
\textbf{DAW} & 134 & 1340 & 134 & 1340 \\ \hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\begin{center}
\caption{Mean $\alpha$ values for the normalized power prior approach for trials with a 2:1 randomization ratio. NPP = normalized power prior.}
\label{tab:alpha_npp}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{|r|c|c|c|c|}
\hline
& \multicolumn{2}{c|}{\textbf{Mild Confounding}} & \multicolumn{2}{c|}{\textbf{Strong Confounding}} \\ \hline
& \textbf{Trial = 100} & \textbf{Trial = 1,000} & \textbf{Trial = 100} & \textbf{Trial = 1,000} \\ \hline
\textbf{HR: 0.5} & 0.36 & 0.03 & 0.22 & 0.01 \\ \hline
\textbf{HR: 0.75} & 0.36 & 0.03 & 0.23 & 0.01 \\ \hline
\textbf{HR: 0.875} & 0.36 & 0.03 & 0.24 & 0.01 \\ \hline
\textbf{HR: 1} & 0.35 & 0.03 & 0.23 & 0.01 \\ \hline
\end{tabular}
}
\\
\end{center}
\footnotesize{Note: Hazard ratios listed are the conditional treatment hazard ratios as opposed to the marginal treatment hazard ratios.}
\end{table}
\renewcommand{\arraystretch}{1}
\section{Case Study: Metastatic Castration-Resistant Prostate Cancer}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht!]
\centering
\caption{Characteristics of patients in the Janssen MCRPC cohort by data source}
\label{tab:mcrpc_tab1}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{|r|c||c|c|}
\hline
\multicolumn{1}{|c|}{\textbf{}} & \multicolumn{1}{c||}{\textbf{}} & \multicolumn{2}{c|}{\textbf{Clinical Trial}} \\
& \multicolumn{1}{c||}{\textbf{\begin{tabular}[c]{@{}c@{}}Pseudo\\ EHR\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Standard-of-Care \\ Arm\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Intervention \\ Arm\end{tabular}}} \\
\textbf{} & N = 1000 & N = 394 & N = 791 \\ \hline
\textbf{Age, median (IQR)} & 69 (63 - 76) & 69 (63 - 75) & 69 (64 - 75) \\ \hline
\textbf{ECOG PS, N (\%)} & & & \\
\textbf{0} & 244 (24.4) & 140 (35.5) & 262 (33.1) \\
\textbf{1} & 572 (57.2) & 209 (53.0) & 447 (56.5) \\
\textbf{2} & 184 (18.4) & 45 (11.4) & 82 (10.4) \\ \hline
\textbf{Gleason Score, N (\%)} & & & \\
\textbf{1} & 66 (6.6) & 0 (0.0) & 1 (0.1) \\
\textbf{2} & 11 (1.1) & 15 (3.8) & 31 (3.9) \\
\textbf{3} & 12 (1.2) & 1 (0.3) & 2 (0.3) \\
\textbf{4} & 4 (0.4) & 1 (0.3) & 3 (0.4) \\
\textbf{5} & 19 (1.9) & 3 (0.8) & 4 (0.5) \\
\textbf{6} & 78 (7.8) & 2 (0.5) & 24 (3.0) \\
\textbf{7} & 304 (30.4) & 32 (8.1) & 76 (9.6) \\
\textbf{8} & 152 (15.2) & 151 (38.3) & 286 (36.2) \\
\textbf{9} & 354 (35.4) & 76 (19.3) & 159 (20.1) \\
\textbf{10} & 0 (0.0) & 113 (28.7) & 205 (25.9) \\ \hline
\textbf{PSA, median (IQR)} & 399 (98 - 914) & 139 (41 - 412) & 120 (37 - 354) \\
\textbf{LDH, median (IQR)} & 293 (225 - 397) & 235 (188 - 321) & 222 (187 - 308) \\
\textbf{ALP, median (IQR)} & 246 (109 - 447) & 126 (83 - 268) & 125 (79 - 254) \\
\textbf{Hb, median (IQR)} & 11.1 (10.1 - 12.5) & 12.0 (10.8 - 12.8) & 11.9 (0.9 - 12.9) \\
\textbf{Testosterone, median (IQR)} & 11.5 (5.6 - 20.0) & 12.0 (5.7 - 20.0) & 13.0 (5.8 - 20.2) \\ \hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1}
The objective of this analysis was to compare the performance of the alternative methods described above in a real-world context in which data were available from a clinical trial and a pseudo EHR dataset, which was constructed from the standard-of-care arm of the clinical trial. We compared the effect of abiraterone acetate (an androgen synthesis inhibitor) plus prednisone with that of prednisone alone on overall survival in patients with metastatic castration-resistant prostate cancer progressing after chemotherapy. Metastatic castration-resistant prostate cancer (MCRPC) is a type of advanced prostate cancer that no longer completely responds to treatments that lower testosterone. \cite{tucci_metastatic_2015} The study sample included patients with MCRPC from a phase 3 randomized double-blind clinical trial (NCT00638690) conducted by Janssen Research $\&$ Development, L.L.C. Complete details of trial eligibility and treatment protocols have been previously published. \cite{de_bono_abiraterone_2011}
The clinical trial population included 1,185 patients with MCRPC progressing after taxane chemotherapy. \cite{noauthor_yoda_nodate} Patients were randomized in a 2:1 ratio to treatment with abiraterone acetate plus prednisone or treatment with prednisone alone. Patients were enrolled from 2008 to 2009 and followed for five years, or until death. Treatment arm was classified based on the arm to which a patient was randomized, regardless of whether they crossed over to open-label abiraterone acetate at any point (Table \ref{tab:mcrpc_tab1}).
The pseudo EHR dataset was constructed by sampling patients from the standard-of-care arm in the clinical trial such that the baseline covariate distribution of the resultant sample differed between the EHR and clinical trial (Table \ref{tab:mcrpc_tab1}, Supplemental Figure \ref{fig:real_data_cov_dists}). We assume that all patients had the same set of covariates associated with poor performance for patients with MCRPC recorded at the baseline encounter. \cite{sorensen_performance_1993, egevad_prognostic_2002, kan_prognosis_2017, mori_prognostic_2019, sharma_alkaline_2014, nakasian_effects_2017} Specifically, all standard-of-care arm patients were sampled with replacement to create a population of size 10,000 from which we could draw our pseudo EHR sample. Next, each patient was assigned a probability of sampling according to a non-linear function of the baseline covariates. By constructing this sampling probability using a non-linear functional form, the estimated on-trial score in the DAW approach will be mis-specified. This reflects the real-world scenario where we are unlikely to be able to correctly specify this model. A pseudo EHR sample of size 1,000 was then drawn. The sampling probabilities were generated such that patients were more likely to be included in the pseudo EHR if they: were younger, had a higher ECOG score, had a higher Gleason score, had a higher lab value for PSA, LDH, Hb, or ALP, or had a lower testosterone value. The logistic sigmoid function was used to relate the ECOG and Gleason scores to the sampling probability in order to separate out those with high versus low scores rather than having a linear additive effect as the score increased. The square root of the testosterone lab value was used to shrink the effect of a high testosterone value on being included in the sample.
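A selection mechanism of the kind described can be sketched as follows. All coefficient values and the helper name are illustrative assumptions, not the actual function used to build the pseudo EHR; the sketch only mirrors the stated qualitative structure (sigmoids separating high versus low ECOG/Gleason, a square root damping testosterone, and signs matching the directions listed above):

```python
import math

def sampling_weight(age, ecog, gleason, psa, ldh, hb, alp, testosterone):
    """Hypothetical non-linear pseudo-EHR selection probability.

    Higher for younger patients, higher ECOG/Gleason (via sigmoids),
    higher PSA/LDH/Hb/ALP, and lower testosterone (via a square root)."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    score = (-0.02 * age
             + 1.0 * sigmoid(2.0 * (ecog - 1))      # separates ECOG 0-1 vs 2
             + 1.0 * sigmoid(1.5 * (gleason - 7))   # separates low vs high Gleason
             + 0.001 * psa + 0.001 * ldh + 0.001 * alp + 0.05 * hb
             - 0.1 * math.sqrt(max(testosterone, 0.0)))
    return sigmoid(score)
```

In the actual construction, these probabilities would be normalized over the resampled population of 10,000 before drawing the sample of 1,000.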
Due to missingness in some variables, multiple imputation via predictive mean matching was used with 5 imputations. The median of the imputed covariates was calculated across imputations and included in the on-trial score, which is valid in the case where the covariates do not inform treatment assignment such as in a clinical trial. \cite{mitra_comparison_2016} Post-imputation covariate distributions stratified by data source were similar to pre-imputation covariate distributions.
\subsection{Case Study Results}
In order to appropriately interpret the results of the case study, we first must evaluate each of Pocock's six criteria for the external standard-of-care data. \cite{pocock_combination_1976} In this case, meeting most of the criteria was trivial because the pseudo EHR was created from the standard-of-care arm of the clinical trial; the exception was criterion $\#6$, since the pseudo EHR patients were selected in a biased fashion such that their covariate distribution was slightly different from, and their outcomes somewhat worse than, those of the clinical trial.
The case study of MCRPC patients had a high rate of death. Of the 791 intervention arm patients in the trial there were 645 deaths, of the 394 patients receiving the standard-of-care in the trial there were 331 deaths, and of the 1,000 patients in the pseudo EHR there were 937 deaths. The median survival time was 15.6 months (95$\%$ CI: 14.7-16.8) for the intervention arm of the trial, 11.2 months (95$\%$ CI: 10.4-13.3) for the standard-of-care arm of the trial, and 8.0 months (95$\%$ CI: 7.9-8.6) for the pseudo EHR. It is clear that the patients in the pseudo EHR had inferior survival relative to both arms of the clinical trial. This is likely to be true in reality, as patients often receive more supportive care in a clinical trial than in regular clinical practice and tend to have different covariate distributions due to restrictive inclusion/exclusion criteria.
The hazard ratio for death for patients on abiraterone acetate plus prednisone as compared to patients on prednisone alone was 0.86 (95$\%$ CI: 0.75-0.98) using only patients enrolled in the clinical trial. When all pseudo EHR patients were added to the analysis population, the hazard ratio for death was 0.60 (95$\%$ CI: 0.55-0.66) (Figure \ref{fig:yoda_results}). The hazard ratios for death estimated by the power prior method with the three different $\alpha$ values fell between those of the trial-only and full pooling methods, as expected. The normalized power prior method estimated $\alpha = 0.004$ and therefore was virtually identical to the trial only method, as it effectively borrowed information from only four pseudo EHR patients. Lin's method returned results somewhat similar to the trial only method, with an estimated hazard ratio of 0.76 (95$\%$ CI: 0.68-0.86), adding just over 200 patients to the analysis (Figure \ref{fig:yoda_results}). The estimated hazard ratio using DAW was 0.85 (95$\%$ CI: 0.76-0.94), which is almost identical to that obtained with the trial only data, with a narrower confidence interval due to the fact that 397 patients were added so that the augmented trial had a 1:1 randomization ratio (Figure \ref{fig:yoda_results}). With the exception of NPP, which only added 4 patients, DAW returned results most similar to the trial-only result while achieving improved efficiency (Figure \ref{fig:yoda_results}).
\begin{figure}
\caption{Kaplan-Meier curves for overall survival for patients in the clinical trial and the pseudo EHR.}
\label{fig:yoda_km}
\end{figure}
\begin{figure}
\caption{Hazard ratio for death and 95$\%$ confidence intervals (CI) for patients on abiraterone acetate plus prednisone compared to patients on prednisone alone. FP = full pooling, PP = power prior, NPP = normalized power prior ($\hat{\alpha} = 0.004$), DAW = data-adaptive weighting.}
\label{fig:yoda_results}
\end{figure}
\section{Discussion}
Data-adaptive weighting allows for the external patients who are most similar to the clinical trial to be selected and weighted such that the augmented trial has a 1:1 randomization ratio, which results in minimal bias and tighter confidence intervals as compared to using only the trial data. We compared the performance of alternative methods for constructing and analyzing a hybrid control arm in terms of bias, variance, power, confidence interval coverage, and type I error across various scenarios for trial size, trial randomization ratio, strength of covariate effects, and treatment effect.
Based on the results from our simulation study, it is clear that fully pooling external patients with trial patients has the potential to produce highly biased results, poor coverage, and substantially inflated type I error rates. Similarly, the power prior at all three $\alpha$ values examined exhibited poor performance, although results were attenuated towards the trial only analyses. The normalized power prior either exhibited moderate bias, poor coverage, and an inflated type I error rate, or had little bias but failed to incorporate much information from the EHR database. The case study examined here shows how the methods perform on real data when Pocock's criteria are met in creating a hybrid control arm. While observed covariates can be accounted for using on-trial scores as in Lin's method and DAW, the other criteria are extremely important to avoid confounding by unobserved characteristics of patients or their care environment, and creation of a hybrid control arm is not recommended if they are not met.
Lin's method has reduced bias and variance compared to the more traditional methods, though DAW achieved lower bias and variance still. Both methods performed very similarly with regard to confidence interval coverage and type I error rates. While this may initially lead one to conclude that Lin's method and DAW are both good methods to use for hybrid control arms, there are several points to be made regarding Lin's method. First, Lin's method becomes more difficult to implement with larger trial and/or external data source sizes, as optimal matching can be cumbersome or even impossible with large samples. Second, while the number of EHR patients selected by Lin's method nominally produces a 1:1 ratio, weighting by the on-trial score results in an effective sample size that has fewer standard-of-care than intervention arm patients, reducing the efficiency of this approach. Finally, it is unclear what estimand Lin's method targets, as the external patients are weighted by their on-trial score while the trial patients are given a weight of 1. These issues are addressed by DAW, as the IOW are scaled to ensure that the 1:1 ratio is preserved and the use of IOW allows one to estimate the estimand of interest: the average treatment effect for those on-trial.
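The rescaling step can be sketched as follows (our illustration of the idea, not the authors' exact implementation; `daw_weights` is a hypothetical helper name):

```python
def daw_weights(scores, n_intervention, n_trial_soc):
    """Scale inverse-odds weights (IOW) of the selected external
    standard-of-care patients so that the weighted augmented trial
    has a 1:1 intervention to standard-of-care ratio.

    `scores` are the estimated on-trial scores of the external patients."""
    iow = [p / (1.0 - p) for p in scores]
    target = n_intervention - n_trial_soc   # external "sample size" needed for 1:1
    c = target / sum(iow)                   # common scaling constant
    return [c * w for w in iow]
```

For example, with 200 intervention and 100 trial standard-of-care patients, the scaled external weights sum to 100, so the weighted standard-of-care arm matches the intervention arm in size.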
While our simulation study evaluated a large combination of possible characteristics of a trial and real-world data source, there are additional factors that could have an effect on the results that were not examined, including differential covariate effects on the outcome between EHR and trial patients and potential differential error in outcome ascertainment between EHR and trial patients. Furthermore, due to the computational demands of Bayesian estimation for these methods, we have evaluated performance of a frequentist analogue, which targets a different estimand and may produce different results from a Bayesian implementation, particularly for small sample sizes where the Bayesian central limit theorem has little effect. One must also consider the direction of the bias induced by covariates that differ between the trial and real-world populations in order to determine the effect of the differential covariate distributions on power and type I error.
Based on the results of these simulations and the real-world data example, when working with hybrid-control arm data we recommend using the DAW method in order to minimize bias and variance while maximizing coverage and properly controlling the type I error rate. Additionally, DAW estimates the average treatment effect for those on-trial, which is the estimand of interest.
\section{Data Availability}
The data that support the findings of this study are available from JANSSEN RESEARCH $\&$ DEVELOPMENT, L.L.C. via the Yale University Open Data Access Project. Restrictions apply to the availability of these data, which were used under license for this study. Data are available at \url{https://yoda.yale.edu/} with the permission of the Yale University Open Data Access Project.
\section{Funding}
Research reported in this publication was supported in part by NIH grant R21CA227613. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
\section{Declaration of conflicting interests}
The author(s) declared no other potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
\section{Supplement}
\beginsupplement
\begin{figure}
\caption{Kaplan-Meier curves for sample simulated datasets for each of four conditional treatment hazard ratios and two strengths of confounding.}
\label{fig:sim_km}
\end{figure}
\begin{figure}
\caption{Covariate distributions in the clinical trial and pseudo EHR.}
\label{fig:real_data_cov_dists}
\end{figure}
\begin{figure}
\caption{On-trial score distributions for patients in the clinical trial and the pseudo EHR. Vertical lines denote mean on-trial score in each group.}
\label{fig:ps_dists}
\end{figure}
\end{document} |
\begin{document}
\begin{center}
{\Large Determinantal singularities and Newton
polyhedra\footnote{ For the most part, this is a significantly
updated exposition of material from \cite{E1}--\cite{E4}.}}
\end{center}
\begin{center}
{A. I. Esterov}\footnote{ Supported in part by
RFBR-JSPS-06-01-91063, RFBR-07-01-00593, INTAS-05-7805.}
\end{center}
There are two well researched tasks, related to Newton polyhedra:
\begin{quote}
A) to describe Newton polyhedra of resultants and discriminants;
B) to study invariants of singularities in terms of their Newton
polyhedra.
\end{quote}
We introduce so-called resultantal singularities (Definitions
\ref{torrezcikl} and \ref{defressing}), whose study in terms of
Newton polyhedra unifies the tasks A and B to a certain extent.
In particular, this provides new formulations and proofs for a
number of well known results (see, for example, Theorem
\ref{thsturmf} related to task A and Corollary \ref{oka1f}
related to task B).
As an application, we study basic topological invariants of
determinantal singularities (Subsection \ref{ssdet}) and certain
generalizations of the Poincare-Hopf index (Subsection
\ref{ssind}) in terms of Newton polyhedra. By generalizations of
the Poincare-Hopf index we mean invariants of collections of
(co)vector fields that participate in generalizations of the
classical Poincare-Hopf formula to singular varieties, arbitrary
characteristic numbers, etc.
In order to study resultantal singularities in terms of Newton
polyhedra, we introduce relative versions of the mixed volume
(Subsection \ref{ssmixvol}), Kouchnirenko--Bernstein formula
(Subsection \ref{sskb}), Kouchnirenko--Bernstein--Kho\-van\-skii
formula (Subsection \ref{sskbk}), construct a toric resolution for
a resultantal singularity (Subsection \ref{sstorrez}), and
introduce one more tool, which has no counterpart for complete
intersection singularities (Subsection \ref{sscay}).
Recall that the Kouchnirenko--Bernstein formula \cite{bernst}
expresses the number of solutions of a system of polynomial
equations $f_1(x_1,\ldots,x_n)=\ldots=f_n(x_1,\ldots,x_n)=0$ in
terms of the Newton polyhedra of the polynomials $f_1,\ldots,f_n$,
provided that the coefficients of $f_1,\ldots,f_n$ satisfy a
certain condition of general position (the \textit{Newton
polyhedron} of a polynomial is the convex hull of the exponents
of its monomials).
A local version of this formula is presented below as an
illustrative special case of our results. Let $P$ be the set of
all polyhedra $\Delta$ in the positive orthant
$\Bbb R^n_+\subset\Bbb R^n$, such that the difference
$\Bbb R^n_+\setminus\Delta$ is bounded. $P$ is a semigroup with
respect to the Minkowski addition $\Delta_1+\Delta_2=\{x+y\;|\;
x\in\Delta_1,\, y\in\Delta_2\}$. The symmetric multilinear
function $\mathop{\rm Vol}\nolimits:\underbrace{P\times\ldots\times P}_n\to\Bbb R$, such
that $\mathop{\rm Vol}\nolimits(\Delta,\ldots,\Delta)$ equals the volume of
$\Bbb R^n_+\setminus\Delta$ for every $\Delta\in P$, is called the
\textit{mixed volume} of $n$ polyhedra. The minimal polyhedron in
$P$ that contains the exponents of all monomials, participating in
a power series $f$ of $n$ variables, is called the \textit{Newton
polyhedron} of $f$ (provided that such a minimal polyhedron exists).
The coefficient of a monomial in the power series $f$ is called a
\textit{leading coefficient}, if the exponent of this monomial is
contained in a bounded face of the Newton polyhedron. For a
linear function $l:\Bbb Z^n\to\Bbb Z$ and a power series $f$ in $n$
variables, denote the lowest order non-zero $l$-quasihomogeneous
component of $f$ by $f^l$.
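As a small illustration of this notation (our example, not from the original text):

```latex
% For f(x,y) = x^2 + x y + y^3 and l(p,q) = p + q, the values of l on the
% exponents of the monomials are 2, 2, 3, so the lowest order
% l-quasihomogeneous component is
f^l(x,y) = x^2 + x y .
```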
\begin{theor0}[\cite{E4}] \label{th1} If $\Delta_1,\ldots,\Delta_n$ are the Newton
polyhedra of complex analytic germs $f_1,\ldots,f_n,\;
f_i:(\Bbb C^n,0)\to(\Bbb C,0)$, then the topological degree of the map
$(f_1,\ldots,f_n):(\Bbb C^n,0)\to(\Bbb C^n,0)$ is greater than or equal to
$n!\mathop{\rm Vol}\nolimits(\Delta_1,\ldots,\Delta_n)$.
If, in addition, the leading coefficients of $f_1,\ldots,f_n$
satisfy a certain condition of general position, then the
topological degree equals $n!\mathop{\rm Vol}\nolimits(\Delta_1,\ldots,\Delta_n)$.
The necessary and sufficient condition of general position is
that, for every linear function $l:\Bbb Z^n\to\Bbb Z$ with positive
coefficients, the polynomial equations $f^l_1=\ldots=f^l_n=0$ have
no common roots in $(\Bbb C\setminus 0)^n$.
\end{theor0}
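In two variables the quantities in the theorem are simple enough to compute directly. The sketch below (helper names are ours) computes $\mathop{\rm Vol}\nolimits(\Delta)$ as the area of $\Bbb R^2_+\setminus\Delta$ for $\Delta$ spanned by a finite set of exponents, and recovers the mixed volume by polarization; for $f_1=x^a+y^b$, $f_2=x^c+y^d$ with generic leading coefficients this reproduces the classical local intersection number $2!\mathop{\rm Vol}\nolimits(\Delta_1,\Delta_2)=\min(ad,bc)$.

```python
def complement_area(points):
    """Area of R^2_+ \\ Delta, where Delta = conv(points) + R^2_+.

    Assumes the complement is bounded, i.e. `points` contains a point
    on each coordinate axis."""
    pts = sorted(set(points))                  # sort by x, then by y
    hull = []                                  # lower convex chain (Andrew's scan)
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) > 0:
                break                          # strict left turn: keep hull[-1]
            hull.pop()
        hull.append(p)
    chain = [hull[0]]                          # keep only negative-slope edges;
    for p in hull[1:]:                         # upward edges are absorbed by R^2_+
        if p[1] < chain[-1][1]:
            chain.append(p)
    return sum((x2 - x1) * (y1 + y2) / 2.0     # trapezoids under the chain
               for (x1, y1), (x2, y2) in zip(chain, chain[1:]))

def mixed_vol_2d(p1, p2):
    """Vol(Delta_1, Delta_2) by polarization of the complement area."""
    psum = [(x1 + x2, y1 + y2) for (x1, y1) in p1 for (x2, y2) in p2]
    return (complement_area(psum)
            - complement_area(p1) - complement_area(p2)) / 2.0
```

For instance, for $f_1=x^2+y^3$ and $f_2=x+y$ the code gives $2!\mathop{\rm Vol}\nolimits(\Delta_1,\Delta_2)=2=\min(2\cdot 1,\,3\cdot 1)$, which agrees with the topological degree of $(x^2+y^3,\,x+y)$ at the origin.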
The proof is the same as for D.~Bernstein's classical formula
(although it seems that the right hand side of the local version
was never described in this form before, cf. \cite{oka} and
\cite{biviam}). We generalize it in the following directions.
\textbf{(1)} The topological degree of the map
$(f_1,\ldots,f_n):(\Bbb C^n,0)\to(\Bbb C^n,0)$ can be regarded as the
Poincare-Hopf index of the complex vector field $(f_1,\ldots,f_n)$
on $\Bbb C^n$. We extend Theorem \ref{th1} to certain generalizations
of the Poincare-Hopf index (\cite{smg}, \cite{smgfam},
\cite{smgdet}, \cite{suwa}, etc), see Subsection \ref{ssind}.
\textbf{(2)} The topological degree of the map $(f_1,\ldots,f_n)$
differs by 1 from the Milnor number of the 0-dimensional complete
intersection $f_1=\ldots=f_n=0$. We study Milnor numbers of
arbitrary-dimensional complete intersections and, more generally,
of determinantal singularities, see Subsection \ref{ssdet}.
\textbf{(3)} The topological degree of the map $(f_1,\ldots,f_n)$
can be regarded as the intersection number of the divisors
$\{f_1=0\},\ldots,\{f_n=0\}$. More generally, we may assume that
$f_1,\ldots,f_n$ are sections of line bundles on an arbitrary
toric variety, see Subsection \ref{sskb}.
Subsections \ref{ssmixvol}--\ref{ssdet}, \ref{sstor}--\ref{sskb}
and \ref{ssresvar} introduce necessary notation and recall some
basic facts related to convex geometry (Subsection
\ref{ssmixvol}), mixed volumes (Subsection \ref{ssmixvol}),
resultants (Subsections \ref{ssres} and \ref{ssresvar}),
invariants of singularities (Subsection \ref{sstopinv}), Newton
polyhedra (Subsections \ref{ssdet} and \ref{sstor}), toric
varieties (Subsection \ref{sstor}) and intersection theory
(Subsection \ref{sskb}).
This work is based on the author's thesis. I am grateful to
Professor S.~M.~Gusein-Zade for his ideas and guidance. I also
want to thank A.~G.~Khovanskii and S.~P.~Chulkov for many
valuable remarks.
{\scriptsize \tableofcontents}
\section{Applications}\label{ss1} Before discussing the main results of
the paper, we present some of their applications in this section.
In the first subsection, we introduce the mixed volume of pairs of
polytopes. In the second subsection, we present a formula for the
support function of the Newton polytope of the resultant in terms
of mixed volumes of pairs. In the third subsection, we recall the
definitions of invariants of singularities that we wish to count
in terms of Newton polyhedra. In the last two subsections, we
count these invariants in terms of mixed volumes of Newton
polyhedra of singularities, provided that the principal parts of
the singularities are in general position.
\subsection{Relative mixed volume and prisms}\label{ssmixvol}
There are two possible settings in this subsection:
I) A \textit{polyhedron} in $\Bbb R^n$ is an intersection of finitely
many closed half-spaces. The \textit{volume form} in $\Bbb R^n$ and in
each of its subspaces is induced by the standard metric in $\Bbb R^n$.
The set $S\subset(\Bbb R^n)^*$ consists of all unit covectors in the
sense of the standard metric in $\Bbb R^n$, and $\Bbb K$ stands for $\Bbb R$.
II) A \textit{polyhedron} in $\Bbb R^n$ is an intersection of
finitely many rational closed half-spaces, such that its vertices
are contained in $\Bbb Z^n$. The \textit{volume form} in $\Bbb R^n$ and in
each of its rational subspaces is chosen in such a way that the
minimal possible volume of a parallelepiped with integer vertices
equals 1. The set $S\subset(\Bbb R^n)^*$ consists of all primitive
covectors (an integer covector is said to be \textit{primitive} if
it is not equal to another integer covector multiplied by a
positive integer). $\Bbb K$ stands for $\frac{\Bbb Z}{n!}$.
Let $\mathcal{M}$ be the set of all convex bounded polyhedra in
$\Bbb R^n$. This set is a semigroup with respect to the Minkowski
addition $A+B=\{a+b\; |\; a\in A,\; b\in B\}$. The \textit{mixed
volume} is the unique symmetric multilinear function
$$\mathop{\rm Vol}\nolimits:\underbrace{\mathcal{M}\times\ldots\times
\mathcal{M}}_n\to\Bbb K,$$ such that $\mathop{\rm Vol}\nolimits(A,\ldots,A)$ equals the
volume of $A$ for every polyhedron $A\in \mathcal{M}$.
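For instance, let $n=2$ and let $A=[0,a]\times\{0\}$ and
$B=\{0\}\times[0,b]$ be two orthogonal segments. Then
$\mathop{\rm Vol}\nolimits(A,A)=\mathop{\rm Vol}\nolimits(B,B)=0$, while $A+B$ is the rectangle
$[0,a]\times[0,b]$ of area $ab$, so the expansion
$\mathop{\rm Vol}\nolimits(A+B,A+B)=\mathop{\rm Vol}\nolimits(A,A)+2\mathop{\rm Vol}\nolimits(A,B)+\mathop{\rm Vol}\nolimits(B,B)$ yields
$\mathop{\rm Vol}\nolimits(A,B)=ab/2$.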
We introduce a ``relative'' version of the mixed volume. Let
$N\subset\Bbb R^n$ be a convex polyhedron (not necessarily bounded).
Its \textit{support function} $N(\cdot)$ is defined as
$$N(\gamma)=\inf\limits_{x\in N} \gamma(x)$$ for every covector
$\gamma\in (\Bbb R^n)^*$. The set $$N^{\gamma}=\{x\in N\; |\;
\gamma(x)=N(\gamma)\}$$ is called the \textit{support face} of the
polyhedron $N$ with respect to the covector $\gamma\in (\Bbb R^n)^*$.
The set $\{\gamma\; |\; N(\gamma)>-\infty\}\subset (\Bbb R^n)^*$ is
called the \textit{support cone} of $N$.
Consider the set $\mathcal{M}_{\Gamma}$ of all ordered pairs of
polyhedra $(A,B)$ with a given support cone
$\Gamma\subset(\Bbb R^n)^*$, such that the symmetric difference
$A\vartriangle B$ is bounded. $\mathcal{M}_{\Gamma}$ is a
semigroup with respect to the Minkowski addition of pairs
$(A,B)+(C,D)=(A+C,B+D)$. An example of Minkowski addition:
\begin{center}
\noindent\includegraphics[width=12cm]{sum2.eps} \end{center}
\begin{defin}[\cite{E2}, \cite{E3}] \label{defrelmixvol}
\textit{The volume} $V(A,B)$ of the pair of polyhedra
$(A,B)\in\mathcal{M}_{\Gamma}$ is the difference of the volumes
of the sets $A\setminus B$ and $B\setminus A$. \textit{The mixed
volume} of pairs of polyhedra with the support cone
$\Gamma\subset(\Bbb R^n)^*$ is the symmetric multilinear function
$\mathop{\rm Vol}\nolimits_{\Gamma}:\underbrace{\mathcal{M}_{\Gamma}\times\ldots\times
\mathcal{M}_{\Gamma}}_n\to\Bbb K$, such that
$\mathop{\rm Vol}\nolimits_{\Gamma}\bigl((A,B),\ldots,(A,B)\bigr)=V(A,B)$ for every
pair $(A,B)\in \mathcal{M}_{\Gamma}$.
\end{defin}
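As a one-dimensional illustration, let $\Gamma\subset(\Bbb R^1)^*$ be
the ray of non-negative covectors, so that the elements of
$\mathcal{M}_{\Gamma}$ are pairs of rays
$\bigl([a,+\infty),[b,+\infty)\bigr)$. Then
$V\bigl([a,+\infty),[b,+\infty)\bigr)=b-a$, and, since $n=1$, the
mixed volume of such a pair coincides with its volume.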
For example, if $\Gamma=(\Bbb R^n)^*$, then $\mathcal{M}_{\Gamma}$ is
the set of pairs of convex bounded polyhedra, and the mixed
volume of pairs
$\mathop{\rm Vol}\nolimits_{\Gamma}\Bigl((A_1,B_1),\ldots,(A_n,B_n)\Bigr)$ equals the
difference of the classical mixed volumes of the collections
$A_1,\ldots,A_n$ and $B_1,\ldots,B_n$. The following theorem
provides existence, uniqueness and basic formulas for computation
of the mixed volume of pairs.
\begin{utver}[\cite{E3}] \label{voldef}
1) If a function
$F:\underbrace{\mathcal{M}_{\Gamma}\times\ldots\times
\mathcal{M}_{\Gamma}}_n\to\Bbb K$ is symmetric, multilinear, and
$F\bigl((A,B),\ldots,(A,B)\bigr)=V(A,B)$ for every pair $(A,B)\in
\mathcal{M}_{\Gamma}$, then
$$F\bigl((A_1,B_1),\ldots,(A_n,B_n)\bigr)=\frac{1}{n!}\sum\limits_{I\subset\{1,\ldots,n\}}
(-1)^{n-|I|} V\bigl(\sum\limits_{i\in I} (A_i,B_i)\bigr).$$
\newline 2) The function
$\mathop{\rm Vol}\nolimits_{\Gamma}:\underbrace{\mathcal{M}_{\Gamma}\times\ldots\times
\mathcal{M}_{\Gamma}}_n\to\Bbb K$, defined by the equality
$$\mathop{\rm Vol}\nolimits_{\Gamma}\bigl((A_1,B_1),\ldots,(A_n,B_n)\bigr)=\frac{1}{n!\cdot n}
\sum_{\sigma\in\mathcal{S}^n} \sum_{k=1}^n
\sum\limits_{\gamma\in\mathop{\rm Int}\nolimits\Gamma\cap S}
\bigl(B_{\sigma(k)}(\gamma)-A_{\sigma(k)}(\gamma)\bigr)\times$$
$$\times
\mathop{\rm Vol}\nolimits(A_{\sigma(1)}^{\gamma},\ldots,A_{\sigma(k-1)}^{\gamma},
B_{\sigma(k+1)}^{\gamma},\ldots,B_{\sigma(n)}^{\gamma}),$$ is
symmetric, multilinear, and
$\mathop{\rm Vol}\nolimits_{\Gamma}\bigl((A,B),\ldots,(A,B)\bigr)=V(A,B)$ for every
pair $(A,B)\in \mathcal{M}_{\Gamma}$ (here $\mathop{\rm Vol}\nolimits$ is the classical
mixed volume of bounded polyhedra, $\mathop{\rm Int}\nolimits\Gamma$ stands for the
interior of the cone $\Gamma$, and the sum in the right hand side
has only finitely many non-zero terms, because the covectors
$\gamma$ corresponding to the non-zero terms are the external
normal covectors of the bounded codimension 1 faces of the
polyhedron $\sum_{i=1}^n (A_i+B_i)$).
\end{utver}
\begin{exa} \label{exavol} The mixed volume of pairs $(A,B)$ and $(C,D)$ on the picture above equals $\min(p+s,\, q+r)/2$. \end{exa}
\textsc{Proof} of part 1: substitute every term of the form
$V(A,B)$ in the right hand side of the desired equality by
$F\bigl((A,B),\ldots,(A,B)\bigr)$, expand by the linearity of $F$,
and cancel like terms using the symmetry of $F$. Part 2 follows
from the linearity and the symmetry of the classical mixed volume
and the following fact (see \cite{khovdan}): the volume of the
``trapezoid'' $\mathop{\rm conv}\nolimits\bigl(A\times\{0\}\cup
B\times\{h\}\bigr)\subset\Bbb R^{n-1}\times\Bbb R$ equals
$$\frac{h}{n}\sum_{k=0}^{n-1}
\mathop{\rm Vol}\nolimits(\underbrace{A,\ldots,A}_k,\underbrace{B,\ldots,B}_{n-k-1})$$
for bounded polyhedra $A$ and $B$ in $\Bbb R^{n-1}.\;\Box$
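For instance, for $n=2$ and segments $A$ and $B$ of lengths $a$
and $b$ in $\Bbb R^1$, this fact reduces to the familiar formula
$\frac{h}{2}(a+b)$ for the area of a trapezoid with parallel sides
$a$ and $b$ at distance $h$.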
The symmetrization in the right hand side of Part 2 (i.e.\ the
summation over all $\sigma\in\mathcal{S}^n$) turns out to be redundant,
which significantly simplifies computations. In particular, the
mixed volume of pairs of polyhedra with integer vertices is
contained in $\frac{\Bbb Z}{n!}$.
\begin{utver}[\cite{E3}] \label{volsymm} We have
$\mathop{\rm Vol}\nolimits_{\Gamma}\bigl((A_1,B_1),\ldots,(A_n,B_n)\bigr)=$
$$=\frac{1}{n}
\sum_{k=1}^n \sum\limits_{\gamma\in\mathop{\rm Int}\nolimits\Gamma\cap S}
\bigl(B_k(\gamma)-A_k(\gamma)\bigr) \mathop{\rm Vol}\nolimits(A_1^{\gamma},\ldots,
A_{k-1}^{\gamma},B_{k+1}^{\gamma},\ldots,B_n^{\gamma}).$$
\end{utver}
The proof is based on relations between mixed volumes and
algebraic geometry, and will be given together with the proof of
Theorem \ref{relbernst} in Subsection \ref{ss2proof}. One can
easily verify that the right hand side of Assertion
\ref{volsymm} is not symmetric for pairs of polyhedra with
unbounded symmetric difference, and its symmetrization is not
contained in $\frac{\Bbb Z}{n!}$.
One more formula for the mixed volume of pairs $(A_1,B_1),$
$\ldots,$ $(A_n,B_n)$ will also be proved together with Theorem
\ref{relbernst} in Subsection \ref{ss2proof}:
\begin{utver}[\cite{E3}] \label{volexpr2}
If $\tilde A_i$ and $\tilde B_i$ are bounded polyhedra such that
$A_i\setminus B_i = \tilde A_i\setminus \tilde B_i$ and
$B_i\setminus A_i = \tilde B_i\setminus \tilde A_i$, then
$\mathop{\rm Vol}\nolimits_{\Gamma}\bigl((A_1,B_1),\ldots,(A_n,B_n)\bigr)=\mathop{\rm Vol}\nolimits(\tilde
A_1,\ldots, \tilde A_n) - \mathop{\rm Vol}\nolimits(\tilde B_1,\ldots, \tilde B_n)$.
\end{utver}
We also need the following well-known formula:
\begin{lemma} \label{comput1}
Let $A_1,\ldots,A_p$ be bounded polyhedra in a
$(p+q)$-dimensional space $P$, and $B_1,\ldots,B_q$ be bounded
polyhedra in its $q$-dimensional subspace $Q$. Then, denoting the
projection $P\to P/Q$ by $\pi$, we have
$$(p+q)!\mathop{\rm Vol}\nolimits(A_1,\ldots,A_p,B_1,\ldots,B_q)=p!q!\cdot\mathop{\rm Vol}\nolimits(\pi
A_1,\ldots,\pi A_p)\cdot\mathop{\rm Vol}\nolimits(B_1,\ldots,B_q).$$
\end{lemma}
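For instance, for $p=q=1$, $P=\Bbb R^2$ and $Q$ the vertical axis, if
$A_1$ is a segment whose projection $\pi A_1$ has length $a$ and
$B_1=\{0\}\times[0,b]$, then the lemma gives
$2!\mathop{\rm Vol}\nolimits(A_1,B_1)=1!\,1!\cdot a\cdot b$, i.e.\
$\mathop{\rm Vol}\nolimits(A_1,B_1)=ab/2$.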
When studying determinantal singularities in
terms of Newton polyhedra, we often deal with mixed volumes of
prisms over Newton polyhedra:
\begin{defin} \label{prismdef} \textit{The prism} $\Delta_1*\ldots*\Delta_n$ \textit{over polyhedra}
$\Delta_1,\ldots,\Delta_n\subset\Bbb R^m$ is the convex hull of the
union
$$\bigcup_i \{b_i\}\times\Delta_i\subset\Bbb R^{n-1}\oplus\Bbb R^m,$$
where $b_1,\ldots,b_n$ are the vertices of the standard simplex in
$\Bbb R^{n-1}$. \textit{The prism}
$(\Gamma_1,\Delta_1)*\ldots*(\Gamma_n,\Delta_n)$ \textit{over
pairs of polyhedra}
$(\Gamma_1,\Delta_1),\ldots,(\Gamma_n,\Delta_n)$ in $\Bbb R^m$ is the
pair $(\Gamma_1*\ldots*\Gamma_n,\Delta_1*\ldots*\Delta_n)$.
\end{defin} The following formula simplifies the computation of
the mixed volume of integer prisms. For a bounded set
$\Delta\subset\Bbb R^m$, denote the number of integer lattice points
in $\Delta$ by $I(\Delta)$. If the symmetric difference of
(closed) integer polyhedra $\Gamma$ and $\Delta$ in $\Bbb R^m$ is
bounded, denote the difference
$I(\Gamma\setminus\Delta)-I(\Delta\setminus\Gamma)$ by
$I(\Gamma,\Delta)$. Denote the convex hull of the union of
polyhedra $\Delta_i\subset\Bbb R^m$ by $\bigvee_i\Delta_i$. For pairs
of polyhedra $(\Gamma_i,\Delta_i)$ in $\Bbb R^m$, denote the pair
$(\bigvee_i\Gamma_i,\bigvee_i\Delta_i)$ by
$\bigvee_i(\Gamma_i,\Delta_i)$. \begin{theor} \label{thint} If
bounded integer polyhedra or pairs of integer polyhedra $B_{i,j}$,
$i=1,\ldots,n$, $j=1,\ldots,k$ in $\Bbb R^m$ have the same support
cone, and $m=k-n+1$, then the mixed volume of the prisms
$B_{1,j}*\ldots*B_{n,j}$, $j=1,\ldots,k$, equals
$$\frac{1}{k!}\sum_{J\subset\{1,\ldots,k\}\atop b_1+\ldots+b_{n}=|J|}(-1)^{k-|J|}I\Bigl(\bigvee_{J_1\sqcup\ldots\sqcup J_n=J\atop |J_1|=b_1,\ldots,|J_n|=b_n}
\sum_{i=1,\ldots,n\atop j\in J_i} B_{i,j}\Bigr).$$\end{theor} The
proof is given in Subsection \ref{ss4}. Note that some of
$B_{i,j}$ may be empty.
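As a simple illustration of Definition \ref{prismdef}, for $n=2$
and segments $\Delta_1=[0,a]$ and $\Delta_2=[0,b]$ in $\Bbb R^1$, the
prism $\Delta_1*\Delta_2$ is the trapezoid in $\Bbb R^1\oplus\Bbb R^1$
with vertices $(0,0)$, $(0,a)$, $(1,0)$ and $(1,b)$.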
\subsection{Newton polyhedra of resultants}\label{ssres}
We denote the monomial $t_1^{a_1}\ldots t_N^{a_N}$ by $t^a$. For
a subset $\Sigma\subset\Bbb R^N$, we denote the set of all Laurent
polynomials of the form $\sum\limits_{a\in\Sigma\cap\Bbb Z^N} c_a
t^a,\, c_a\in\Bbb C$, by $\Bbb C[\Sigma]$. We regard them as functions on
the complex torus $(\Bbb C\setminus\{0\})^N$.
\begin{defin}
\label{defgkzresultant} For finite sets $\Sigma_i\subset \Bbb Z^N,
i=0,\ldots,N$, the resultant $\mathop{\rm Res}\nolimits$ is defined as the equation of
the closure of the set
$$\bigl\{ (g_0,\ldots,g_N)\; |\; g_i\in\Bbb C[\Sigma_i],\;
g_0(t)=\ldots=g_N(t)=0 \mbox{ for some } t\in(\Bbb C\setminus\{0\})^N
\bigr\},$$ provided that this set is a hypersurface in
$\Bbb C[\Sigma_0]\oplus\ldots\oplus\Bbb C[\Sigma_N]$. Otherwise, we set
$\mathop{\rm Res}\nolimits=1$ by definition.
\end{defin}
\begin{exa} If $\Sigma_0$ and $\Sigma_1$ are segments in $\Bbb Z^1$,
then $\mathop{\rm Res}\nolimits$ is the classical resultant of univariate polynomials
$g_0$ and $g_1$.
If $\Sigma_0=\ldots=\Sigma_N$ is the set of vertices of the
standard $N$-dimensional simplex, then $\mathop{\rm Res}\nolimits$ is the determinant
of the coefficient matrix of an affine linear map
$(g_0,\ldots,g_N)$.
\end{exa}
Without loss of generality, we can assume that the collection
$\Sigma_0,\ldots,\Sigma_N\subset\Bbb Z^N$ is \textit{essential}, i.e.
the dimension of the convex hull of $\sum_{j\in J}\Sigma_j$ is not
smaller than $|J|$ for every $J\varsubsetneq\{0,\ldots,N\}$, and
equals $N$ for $J=\{0,\ldots,N\}$ (see \cite{sturmf} for
details). Under this assumption, we have the following formula for
the support function of the Newton polyhedron $\Delta_{\mathop{\rm Res}\nolimits}$ of
the resultant.
The resultant $\mathop{\rm Res}\nolimits$ is a polynomial in the coefficients
$c_{a,i}$ of indeterminate polynomials $g_i=\sum_{a\in\Sigma_i}
c_{a,i} t^a\in\Bbb C[\Sigma_i]$, thus $\Delta_{\mathop{\rm Res}\nolimits}$ is contained in
$\Bbb R\otimes\Lambda$, where $\Lambda$ is the lattice of monomials
$\prod c_{a,i}^{\lambda_{a,i}}$. The exponents $\lambda_{a,i}$
can be regarded as linear functions on $\Lambda$ and form a basis
of the dual lattice $\Lambda^*$. For a linear function
$\gamma=\sum\gamma_{a,i}\lambda_{a,i}\in\Lambda^*$ with
coordinates $\gamma_{a,i}$ in this basis, denote the ray
$\{(t,0)\, |\, t\leqslant 0\}\subset\Bbb R\oplus\Bbb R^N$ by $l$ and the
convex hull $\mathop{\rm conv}\nolimits\{(\gamma_{a,i},a)\, |\, a\in
\Sigma_i\}+l\subset\Bbb R\oplus\Bbb R^N$ by $A_{i,\gamma}$.
\begin{theor}[\cite{E2}, \cite{E3}] \label{thsturmf}
The value of the support function $\Delta_{\mathop{\rm Res}\nolimits}(\gamma)$ equals
the mixed volume of pairs
$(N+1)!\cdot\mathop{\rm Vol}\nolimits_l\Bigl((A_{0,0},A_{0,\gamma}),\ldots,(A_{N,0},A_{N,\gamma})\Bigr)$.
\end{theor}
The proof is given in Subsection \ref{ss3proof}.
\begin{rem}
When discussing Newton polyhedra of polynomials rather than
Newton polyhedra of analytic germs, the value of the support
function $\Delta(\cdot)$ at a covector $\gamma$ is often defined
as the maximal value of $\gamma$ on the polyhedron $\Delta$,
rather than the minimal one. In this notation, the formula is as
follows: $\Delta_{\mathop{\rm Res}\nolimits}(-\gamma)$ equals
$(N+1)!\cdot\mathop{\rm Vol}\nolimits_l\Bigl((A_{0,\gamma},A_{0,0}),\ldots,(A_{N,\gamma},A_{N,0})\Bigr)$.
\end{rem}
This theorem gives a new proof to a number of well known facts
about Newton polyhedra of resultants and discriminants, including
the description of the vertices of the Newton polyhedron
$\Delta_{\mathop{\rm Res}\nolimits}$ (\cite{sturmf}) and the formula for the support
function of the Newton polyhedron of the $A$-determinant
(\cite{gkz}).
\subsection{Topological invariants of singularities}\label{sstopinv}
We recall the definitions of the singularities and the topological
invariants that we wish to study in terms of Newton polyhedra.
\textbf{Milnor fiber and radial index.}
\begin{defin} \label{defmult} The \textit{multiplicity} of a positive-dimensional complex
analytic germ $V\subset\Bbb C^m$ is defined as its intersection
number with a generic vector subspace of complementary dimension
in $\Bbb C^m$.
\end{defin}
Suppose that the germ of a complex analytic set $V\subset\Bbb C^m$ is
smooth outside the origin. Let $f:(\Bbb C^m,0)\to(\Bbb C,0)$ be a germ of
a complex analytic function, such that the restriction $f|_{V}$
has no singular points in a punctured ball $B$, centered at the
origin.
\begin{defin}
For a small $\delta\ne 0$, the manifold $V\cap B\cap
f^{-1}(\delta)$ is called the \textit{Milnor fiber} of the germ
$f|_V$.
\end{defin}
Suppose that the germ of a real analytic set $V\subset\Bbb R^m$ is
smooth outside the origin. Suppose that $\omega$ is a germ of a
real continuous 1-form in $\Bbb R^m$ near the origin, and the
restriction $\omega|_V$ has no zeroes in a punctured neighborhood
$U$ of the origin. Let $\tilde \omega$ be a 1-form on
$V\setminus\{0\}$ such that
\newline 1) $\tilde\omega$ has isolated zeroes $p_1,\ldots, p_N$
in $U$, \newline 2) $\tilde\omega=\omega|_V$ outside $U$, and
\newline 3) $\tilde\omega(x)=d(\|x\|^2)|_V$ near the origin.
\begin{defin}[\cite{smg}] The \textit{radial index} of $\omega|_V$ is defined as $1+\sum_j \mathop{\rm ind}\nolimits_{p_j}\tilde\omega$,
where $\mathop{\rm ind}\nolimits_{p_j}$ is the Poincar\'e-Hopf index at $p_j$.
\end{defin}
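For instance, if $V=\Bbb R^m$ and $\omega=d(\|x\|^2)$, then one can
take $\tilde\omega=\omega$, which has no zeroes in the punctured
neighborhood, so the radial index equals $1$.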
If $V$ is smooth, then the radial index of $\omega|_V$ equals the
Poincar\'e-Hopf index $\mathop{\rm ind}\nolimits_0 \omega|_V$. If $V\subset\Bbb C^m$ and
$f:\Bbb C^m\to\Bbb C$ are complex analytic, then the sum of the radial
index of the 1-form $d\mathop{\rm Re}\nolimits f|_V$ and the Euler
characteristic of a Milnor fiber of $f|_V$ equals 1, see \cite{smgfam}.
\textbf{Determinantal singularities.} Suppose that $A=(a_{i,j}) :
\Bbb C^n\to\Bbb C^{I\times k}$ is a germ of a matrix with holomorphic
entries near the origin, and $I\leqslant k$ (we denote the space
of all $(I\times k)$-matrices by $\Bbb C^{I\times k}$).
\begin{defin} The set $[A]=\{ x\, |\, \mathop{\rm rk}\nolimits A(x)<I \}$ is called the
\textit{$(I\times k)$-determinantal set, defined by $A$,}
provided that its dimension equals $n-k+I-1$ (i.e.\ the minimal
possible one).
\end{defin}
\begin{exa} A $(1\times k)$-determinantal set is a complete intersection of codimension $k$. \end{exa}
The following topological invariants of determinantal
singularities and complete intersections can be regarded as
generalizations of the Poincar\'e-Hopf index. We denote the set of
all degenerate complex $(I\times k)$-matrices by
$\mathcal{D}^{I\times k}\subset\Bbb C^{I\times k}$.
\begin{defin}
\label{defegsect} Suppose that the entries of an $(I_j\times k_j)$-matrix
$W_j$ are germs of continuous functions on $(\Bbb C^n,0)$,
where $j$ ranges from $1$ to $J$, and $n=\sum_j (1+|k_j-I_j|)$. If
the origin is the only point of $\Bbb C^n$, where the matrices
$W_1,\ldots,W_J$ are all degenerate, then the intersection number
of the graph of the mapping
$W=(W_1,\ldots,W_J):\Bbb C^n\to\Bbb C^{I_1\times
k_1}\oplus\ldots\oplus\Bbb C^{I_J\times k_J}$ with the product
$\Bbb C^n\times\mathcal{D}^{I_1\times
k_1}\times\ldots\times\mathcal{D}^{I_J\times k_J}$ is called
\textit{the multiplicity of the collection} $(W_1,\ldots,W_J)$.
\end{defin}
In other words, if generic small perturbations of the matrices
$W_1,\ldots,W_J$ degenerate at finitely many points near the
origin, then the number of these points (counted with appropriate
signs) equals the multiplicity of $(W_1,\ldots,W_J)$. Since
determinantal singularities are Cohen-Macaulay, the multiplicity
of a holomorphic collection $(W_1,\ldots,W_J)$ equals
$$\dim_{\Bbb C}\mathcal{O}_{(\Bbb C^n,0)}/\langle\mbox{maximal minors
of}\; W_1,\ldots,W_J\rangle$$ \noindent(it is also referred to as
the Buchsbaum-Rim multiplicity in this case). Note that the
multiplicity of the collection $(W_1,\ldots,W_J)$ is not equal to
the intersection number of the image $M=W(\Bbb C^n)$ and the product
$\mathcal{D}^{I_1\times
k_1}\times\ldots\times\mathcal{D}^{I_J\times k_J}$ in general ---
their ratio is equal to the topological degree of the germ of the
mapping $W:\Bbb C^n\to M$.
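For instance, for $J=1$ and $I_1=1$, a $(1\times k_1)$-matrix
$W_1=(f_1,\ldots,f_n)$ (with $n=k_1$) is degenerate exactly at the
common zeroes of its entries, and the multiplicity of the
collection is the topological degree of the map
$(f_1,\ldots,f_n):(\Bbb C^n,0)\to(\Bbb C^n,0)$.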
Definition \ref{defegsect} can be regarded as a generalization of the
Poincar\'e-Hopf index due to the following observation. Let
$v_{i,j}$ be a smooth section of a vector bundle $\mathcal{I}_j$
of rank $k_j$ on a smooth $n$-dimensional complex manifold $M$ for
$i=1,\ldots,I_j,\; I_j\leqslant k_j,\; j=1,\ldots,J$, and
$n=\sum_j (1+k_j-I_j)$. Suppose that, for every point $x\in M$,
except for a finite set $X\subset M$, there exists $j$ such that
the vectors $v_{1,j}(x),\ldots,v_{I_j,j}(x)$ are linearly
independent. Choosing a local basis $s_{1,j},\ldots,s_{k_j,j}$ of
the bundle $\mathcal{I}_j$ near a point $x\in X$, one can
represent $v_{i,j}$ as a linear combination
$v_{i,j}=w_{i,1,j}s_{1,j}+\ldots+w_{i,k_j,j}s_{k_j,j}$, where
$w_{\cdot,\cdot,j}$ are the entries of a smooth $(I_j\times
k_j)$-matrix $W_j:M\to\Bbb C^{I_j\times k_j}$, defined near $x$.
Denote the multiplicity of the collection $(W_1,\ldots,W_J)$ by
$m_x$. Then the Chern number
$c_{1+k_1-I_1}(\mathcal{I}_1)\smile\ldots\smile
c_{1+k_J-I_J}(\mathcal{I}_J)\cdot[M]$ is equal to the sum of the
multiplicities $m_x$ over all points $x\in X$, which is the
classical Poincar\'e-Hopf formula for $J=I_1=1$ (see, for example,
\cite{gh}).
The special case of Definition \ref{defegsect} for $I_1=1$ and $J=2$
is called \textit{the Suwa residue} of the collection of sections
$(w^{1,2},\ldots,w^{I_2,2})^T$ of a $k_2$-dimensional vector
bundle on a germ of the complete intersection $w^{1,1}=0$ (see
\cite{suwa}). Another special case is the Gusein-Zade--Ebeling
index of a 1-form on a complete intersection.
\begin{defin}[\cite{smg}, \cite{smgfam}] \label{defegform}
Let $f_1,\ldots,f_k$ be germs of holomorphic functions on
$(\Bbb C^n,0)$. Suppose that $f_1=\ldots=f_k=0$ is an isolated
singularity of a complete intersection, i.e. the 1-forms
$df_1,\ldots,df_k$ are linearly independent at every point of the
set $\{f_1=\ldots=f_k=0\}\setminus\{0\}$. Let $\omega$ be a germ
of a smooth 1-form on $(\Bbb C^n,0)$, whose restriction to
$\{f_1=\ldots=f_k=0\}\setminus\{0\}$ has no zeros near the
origin. \textit{The Gusein-Zade--Ebeling index} of $\omega$ on the
complete intersection $f_1=\ldots=f_k=0$ is defined as the
multiplicity of the collection of the two matrices
$$\begin{pmatrix}\omega \\ df_1 \\ \vdots \\ df_k\end{pmatrix},
(f_1, \ldots, f_k).$$
\end{defin}
This can be regarded as a generalization of the Poincar\'e-Hopf
index, because the sum of the indices of a 1-form on a variety
with isolated complete intersection singularities equals the
Euler characteristic of the smoothing of the variety.
\subsection{Determinantal singularities and Newton polyhedra}\label{ssdet}
We study the aforementioned invariants of determinantal
singularities in terms of Newton polyhedra.
\textbf{Newton polyhedra.} The monomial
$x_1^{a_1}\ldots x_n^{a_n}$ is denoted by $x^a$, where
$a=(a_1,\ldots,a_n)\in\Bbb Z^n$. The positive orthant of $\Bbb R^n$ is
denoted by $\Bbb R^n_+$.
\begin{defin}[\cite{E1}]
\textit{The Newton polyhedron} $\Delta_f$ of a germ of a function
$f:(\Bbb C^n,0)\to (\Bbb C,0),\; f(x)=\sum\limits_{a\in A} c_a x^a$,
where $c_a\ne 0$ for all $a\in A$, is defined as the convex hull
of the Minkowski sum $A+\Bbb R^n_+$. The coefficient $c_a$ is said to
be a \textit{leading coefficient} of the power series $f$, if $a$
is contained in a bounded face of the Newton polyhedron
$\Delta_f$.
\textit{The Newton polyhedron} $\Delta_{\omega}$ of a germ of a
1-form $\omega=\sum_i\omega_idx_i$ on $(\Bbb C^n,0)$ is defined as the
convex hull of the union of the Newton polyhedra
$\Delta_{\omega_ix_i},\; i=1,\ldots,n$. The coefficient of the
monomial $x^adx_i/x_i$ in the power series expansion of $\omega$
is said to be leading, if $a$ is contained in a bounded face of
$\Delta_{\omega}$.
\end{defin}
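For instance, for $f(x,y)=x^2+xy^3+y^4$, the Newton polyhedron
$\Delta_f$ has the vertices $(2,0)$ and $(0,4)$, and its only
bounded face is the segment joining them; thus the coefficients of
$x^2$ and $y^4$ are leading, while the coefficient of $xy^3$ is
not, since $(1,3)$ lies in the interior of $\Delta_f$.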
Note that $\Delta_{df}=\Delta_f$ for every function $f$. Recall
that, for a collection of positive weights $l=(l_1,\ldots,l_n)$,
assigned to the variables $x_1,\ldots,x_n$ and their differentials
$dx_1,\ldots,dx_n$, the lowest order non-zero $l$-quasihomogeneous
components of the power series $f=\sum\limits_{a\in \Bbb Z^n} c_a x^a$
and $\omega=\sum\limits_{a\in \Bbb Z^n,\, i} c_{a,i} x^adx_i$ are
denoted by $f^l$ and $\omega^l$, respectively (in the latter case,
the weight of the differential $dx_i$ is equal to the weight of
$x_i$ by convention).
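For instance, for $f=x^2+xy^3+y^4$ and $l=(2,1)$, the monomials
$x^2$, $xy^3$ and $y^4$ have $l$-degrees $4$, $5$ and $4$
respectively, so $f^l=x^2+y^4$.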
\textbf{Determinantal singularities and Newton polyhedra.}
Consider a germ of a holomorphic matrix
$A=(a_{i,j}):\Bbb C^n\to\Bbb C^{I\times k},\; I<k$. Suppose for
simplicity that the Newton polyhedron of the entry $a_{i,j}$ does
not depend on $i$, being equal to a certain polyhedron
$\Delta_j$ for $i=1,\ldots,I$ and $j=1,\ldots,k$ (we refer to this
assumption as the \textit{unmixedness assumption}).
\begin{defin} The leading coefficients of $A$ are said to be \textit{in
general position}, if, for every collection $l$ of positive
weights and every subset $\mathcal{I}\subset\{1,\ldots,I\}$, the
set of all points $x\in(\Bbb C\setminus\{0\})^n$, such that the matrix
$\bigl(a^l_{i,j}(x)\bigr)_{i\in\mathcal{I},\; j\in\{1,\ldots,k\}}$
is degenerate, has the maximal possible
codimension $k-|\mathcal{I}|+1$.
\end{defin}
For a polyhedron $\Delta\subset\Bbb R^n_+$, denote the pair
$(\Bbb R^n_+,\Delta)$ by $\widetilde\Delta$. Denote the pair
$(\Bbb R^n_+,\; \Bbb R^n_+\setminus\mbox{the standard $n$-dimensional simplex})$
by $L$.
\begin{theor}[\cite{E4}] \label{matrmult} Denote the Newton polyhedron of the entry
$a_{i,j}$ of a holomorphic matrix $A=(a_{i,j}):\Bbb C^n\to\Bbb C^{I\times
k},\; I<k$ by $\Delta_j$. Suppose that the differences
$\Bbb R^n_+\setminus\Delta_j,\; j=1,\ldots,k$, are bounded and the
leading coefficients of $A$ are in general position. Then $A$
defines the germ of an $(I\times k)$-determinantal set $[A]$, whose
multiplicity equals
$$\sum_{0<j_0<\ldots<j_{k-I}\leqslant k} n!\mathop{\rm Vol}\nolimits_{\Bbb R^n_+}(\widetilde\Delta_{j_0},\ldots,\widetilde\Delta_{j_{k-I}},\underbrace{L,\ldots,L}_{n-k+I-1}).$$
If the leading coefficients of $A$ are not in general position,
then the multiplicity is greater than expected, or $A$ is not a
determinantal singularity.
\end{theor}
If $\dim[A]>0$, then the
multiplicity should be understood in the sense of Definition
\ref{defmult}, otherwise in the sense of Definition
\ref{defegsect}. In both cases, this theorem follows from Theorem
\ref{egvol}, which also allows one to drop the unmixedness
assumption. A similar formula for the Buchsbaum-Rim multiplicity
of the matrix $A$ is given in \cite{bivia} for $n\leqslant k-I+1$.
\begin{defin} The leading coefficients of $A$ are said to be \textit{in
strong general position}, if, for every collection $l$ of positive
weights, the polynomial matrix $(a^l_{i,j})$ defines a nonsingular
determinantal set in $(\Bbb C\setminus\{0\})^n$.
In this case, the leading coefficients of a germ of a 1-form
$\omega$ are said to be \textit{in general position with respect
to} $A$, if, for every collection $l$ of positive weights, the
restriction of $\omega^l$ to the determinantal set, defined by
the matrix $(a^l_{i,j})$ in $(\Bbb C\setminus\{0\})^n$, has no zeros.
\end{defin}
For a set $\mathcal{J}\subset\{1,\ldots,n\}$, let
$\Bbb R^\mathcal{J}\subset\Bbb R^n$ be a coordinate plane given by the
equations $x_i=0,\, i\notin \mathcal{J}$. For a polyhedron
$\Delta\subset\Bbb R^n_+$, denote the pair
$(\Bbb R^\mathcal{J}\cap\Bbb R^n_+,\, \Bbb R^\mathcal{J}\cap\Delta)$ by
$\widetilde\Delta^\mathcal{J}$.
\begin{theor}[\cite{E4}] \label{matrzeta} Denote the Newton polyhedron of the entry
$a_{i,j}$ of a holomorphic matrix $A=(a_{i,j}):\Bbb C^n\to\Bbb C^{I\times
k},\; I<k$ by $\Delta_j$. Suppose that the differences
$\Bbb R^n_+\setminus\Delta_j,\; j=1,\ldots,k$ are bounded, and
$n\leqslant 2(k-I+2)$.
\newline 1) If the leading coefficients of $A$ are in strong general position, then the
determinantal set $[A]$ is smooth outside the origin.
\newline 2) If the Newton polyhedron $\Delta_0\subset\Bbb R^n_+$ of a germ $f:\Bbb C^n\to\Bbb C$ intersects all coordinate axes,
and the leading coefficients of $df$ are in general position
w.r.t. $A$, then the Euler characteristic of a Milnor fiber of
$f|_{[A]}$ equals
$$\chi(\Delta_0,\ldots,\Delta_k)=\sum_{a_0\in\Bbb N,\;
\mathcal{J}\subset\{1,\ldots,n\},\atop\{j_1,\ldots,j_q\}\subset\{1,\ldots,k\}}(-1)^{|\mathcal{J}|+k-I}C_{|\mathcal{J}|+q-a_0-2}^{q-k+I-1}\times$$
$$\times\sum_{a_{j_1}\in\Bbb N,\, \ldots,\, a_{j_q}\in\Bbb N,\atop
a_{j_1}+\ldots+a_{j_q}=|\mathcal{J}|-a_0}|\mathcal{J}|!\mathop{\rm Vol}\nolimits_{\Bbb R^n_+}(\underbrace{\widetilde\Delta_0^\mathcal{J},\ldots,\widetilde\Delta_0^\mathcal{J}}_{a_0},
\underbrace{\widetilde\Delta_{j_1}^\mathcal{J},\ldots,\widetilde\Delta_{j_1}^\mathcal{J}}_{a_{j_1}},\ldots,
\underbrace{\widetilde\Delta^\mathcal{J}_{j_q},\ldots,\widetilde\Delta^\mathcal{J}_{j_q}}_{a_{j_q}}).$$
\newline 3) If the Newton polyhedron $\Delta_0\subset\Bbb R^n_+$ of a germ of a 1-form $\omega$ on $(\Bbb C^n,0)$ intersects all coordinate axes,
and the leading coefficients of $\omega$ are in general position
w.r.t. $A$, then the radial index of $\omega|_{[A]}$ makes sense
and equals $1-\chi(\Delta_0,\ldots,\Delta_k)$.
\end{theor}
In the formula above, $C_n^k=0$ for $k\notin\{0,\ldots,n\}$ by
convention. Thus, all the terms in this sum vanish, except for
those with $|\mathcal{J}|-a_0\geqslant q>k-I$.
The proof is given in Subsection \ref{ss3proof}. Actually, the
same argument allows one to drop the unmixedness assumption, and to
compute the $\zeta$-function of monodromy for the restriction
$f|_{[A]}$ (or, more generally, for the restriction of $f$ to an
isolated resultantal singularity), see \cite{E4} for details.
Parts 2 and 3 together provide a formula for the Poincar\'e-Hopf
index of a 1-form on a smoothable determinantal singularity (see
\cite{smgdet}).
The assumption of general position, that we impose on the leading
coefficients of the 1-form, implies that the Newton polyhedra of
its coordinates are almost equal to each other (see the definition
of interlaced polyhedra in \cite{E1}). When computing the radial
index of a 1-form on a complete intersection, one can drop this
assumption as follows: the radial index of a 1-form on a complete
intersection differs from its Gusein-Zade--Ebeling index by the
Milnor number of the complete intersection (see \cite{smg}); the
latter two invariants can be computed in terms of Newton
polyhedra by Theorem \ref{finoka1} and Corollary \ref{oka1f}
respectively. It would be interesting to relax the assumption of
general position in the same way for an arbitrary determinantal
singularity.
\subsection{(Co)vector fields and Newton polyhedra}\label{ssind}
\begin{defin} In the notation of Definition
\ref{defegsect}, the leading coefficients of the collection
$(W_1,\ldots,W_J),\; W_j=(w_k^{i,j})$, are said to be \textit{in
general position}, if, for every collection $l$ of positive
weights, assigned to the variables $x_1,\ldots,x_n$ and
$t_{i,j},\; i=1,\ldots,I_j,\; j=1,\ldots, J$, the lowest order
non-zero $l$-homogeneous components of the linear combinations
$$\sum_{i=1}^{I_j}w^{i,j}_k(x_1,\ldots,x_n)t_{i,j},\; k=1,\ldots,k_j,\;
j=1,\ldots,J,$$ which are polynomials of variables $x_{\cdot}$
and $t_{\cdot,\cdot}$, have no common zeros outside the coordinate
planes.
\end{defin}
\begin{theor}[\cite{E3}]
\label{egvol} Adopting the notation of Definition \ref{defegsect},
denote the Newton polyhedron of the $(i,k)$-entry of the matrix
$W_j$ by $\Delta^{i,j}_k$, and suppose that the difference
$\Bbb R^n_+\setminus\Delta^{i,j}_k$ is bounded. If the leading
coefficients of the entries of the matrices $W_1,\ldots,W_J$ are
in general position, then the multiplicity of this collection of
matrices makes sense and equals $(k_1+\ldots+k_J)!$ times the
mixed volume of the images of the prisms
$$(\Bbb R^n_+,\Delta^{1,j}_k)*\ldots*(\Bbb R^n_+,\Delta^{I_j,j}_k)\subset\Bbb R^{I_j-1}\oplus\Bbb R^n$$
under the natural inclusions
$$\Bbb R^{I_j-1}\oplus\Bbb R^n\hookrightarrow\Bbb R^{I_1-1}\oplus\ldots\oplus\Bbb R^{I_J-1}\oplus\Bbb R^n,$$
where $k=1,\ldots,k_j$ and $j=1,\ldots,J$. If the leading
coefficients are not in general position, then the multiplicity
is greater than the expected value or is not defined.
\end{theor}
The proof is given in Subsection \ref{ss3proof}.
\begin{exa} \label{exadet} Consider a holomorphic $(2\times 3)$-matrix valued function $A$ of
two variables $x$ and $y$, such that its rows are equal to
\begin{center} $px^2+qy^3+$(higher order terms) and
$rx^5+sy^4+$(higher order terms),\end{center} where $p,q,r,s$ are
constant row vectors. Then the multiplicity of $A$ at the origin
is at least $34$, which is $3!$ times the volume of the
non-convex polyhedron $\Sigma$ on the picture below, and the
equality is attained if and only if $\det\left(\substack{p\\ q\\
s}\right)\ne 0$ and $\det\left(\substack{p\\ r\\ s}\right)\ne 0$.
The two polygons on the picture are the Newton polygons of the
rows of the matrix $A$.
\begin{center} \noindent\includegraphics[width=14cm]{examult.eps}
\end{center}
\end{exa}
Let $N_1,\ldots,N_k,M_1,\ldots,M_m$ be polyhedra in $\Bbb R^m_+$, such
that the differences $\Bbb R^m_+\setminus N_i$ are bounded. Denote
the mixed volume of pairs $$\bigl(\{0\}\times\Bbb R^m_+
,\;\;\{0\}\times N_i\bigr),\;\;i=1,\ldots,k, \mbox{ and } $$ $$
\bigl(M_j*N_1*\ldots*N_k,\;\;M_j*N_1*\ldots*N_k\bigr),\;\;j=1,\ldots,m,$$
in $\Bbb R^k\oplus\Bbb R^m$ by $\mathop{\rm res}\nolimits_{eg}(N_1,\ldots,N_k;M_1,\ldots,M_m)$.
\begin{theor}[\cite{E2}, \cite{E3}]
\label{finoka1} Let $f_1,\ldots,f_k,w_1,\ldots,w_n$ be germs of
holomorphic functions on $(\Bbb C^n,0)$, and suppose that their Newton
polyhedra intersect all coordinate axes. If the leading
coefficients of these functions satisfy a certain condition of
general position, then the Gusein-Zade--Ebeling index of the
1-form $w_1 dx_1 + \ldots + w_n dx_n$ on the isolated singularity
of the complete intersection $f_1=\ldots=f_k=0$ makes sense and
equals
$$\sum_{\mathcal{J}=\{i_1,\ldots,i_m\}\subset\{1,\ldots,n\},\atop
\mathcal{J}\ne\emptyset} (-1)^{n-m} (m+k)!
\mathop{\rm res}\nolimits_{eg}(\Delta_{f_1}^\mathcal{J},\ldots,\Delta_{f_k}^\mathcal{J};\Delta_{x_{i_1}w_{i_1}}^\mathcal{J},\ldots,\Delta_{x_{i_m}w_{i_m}}^\mathcal{J}).$$
For arbitrary leading coefficients, the index is not smaller than
the expected number, or is not defined.
\end{theor}
\begin{rem} See Theorem \ref{matrzeta}(3) for a generalization to
determinantal singularities under certain additional assumptions.
We do not explicitly formulate the condition of general position
for the leading coefficients, because it is a cumbersome special
case of Definition \ref{defgenpos}. One can reconstruct this
condition, tracing back the reduction of this theorem to Corollary
\ref{volveryconv}.
\end{rem}
The proof is given in Subsection \ref{ss3proof}. In particular,
we can now simplify M.~Oka's formula for the Milnor number of a
complete intersection, formulating it in the language of mixed
volumes of pairs. Consider pairs of polyhedra $A_1,\ldots,A_m$ in
$\Bbb R^n$ that have the same support cone $\Gamma$.
\begin{defin}\label{polyseries} We define the value of the power
series \newline
$\sum\limits_{(a_1,\ldots,a_m)\in\Bbb Z^n_+}c_{a_1,\ldots,a_m}A_1^{a_1}\cdot\ldots\cdot
A_m^{a_m}$ as the sum $$\sum\limits_{a_1+\ldots+a_m=n}
c_{a_1,\ldots,a_m}\mathop{\rm Vol}\nolimits_{\Gamma}(\underbrace{A_1,\ldots,A_1}_{a_1},\ldots,\underbrace{A_m,\ldots,A_m}_{a_m}),$$
and define the \textit{value of a rational function of pairs of
polyhedra} as the value of its power series expansion at the
origin.
\end{defin} For polyhedra $N_0,\ldots,N_k\subset\Bbb R^m_+$, such that the
differences $\Bbb R^m_+\setminus N_j$ are bounded, we denote the
value of the rational function
$m!\prod\limits_{i=0}^k\frac{(\Bbb R^m_+,N_i)}{1+(\Bbb R^m_+,N_i)}$ by
$\mu_m(N_0,\ldots,N_k)$.
\begin{sledst}[\cite{oka}] \label{oka1f}
Let $f_0,\ldots,f_k$ be germs of holomorphic functions on
$(\Bbb C^n,0)$, such that the differences
$\Bbb R^n_+\setminus\Delta_{f_j}$ are bounded. If the leading
coefficients satisfy a certain condition of general position (see
\cite{oka}, or \cite{E1} for a somewhat milder condition), then
the equations $f_0=\ldots=f_k=0$ define an isolated singularity
of a complete intersection, and its Milnor number equals
$$(-1)^{n-k-1} \sum\limits_{\mathcal{J}\subset\{1,\ldots,n\},\;
\mathcal{J}\ne\emptyset}
\mu_{|\mathcal{J}|}(\Delta_{f_0}^\mathcal{J},\ldots,\Delta_{f_k}^\mathcal{J})+(-1)^{n-k}.$$
\end{sledst}
One way to prove this formula is to simplify the answer given in
\cite{oka} by means of Assertion \ref{volsymm}. Another
(independent) way is to compute the Gusein-Zade--Ebeling index of
the 1-form $df_0$ on the complete intersection $f_1=\ldots=f_k=0$
(Theorem \ref{finoka1}), which equals the sum of the Milnor
numbers of the complete intersections $f_0=\ldots=f_k=0$ and
$f_1=\ldots=f_k=0$. The Oka formula then follows by induction on
$k$.
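\begin{exa} As a minimal sanity check of Corollary \ref{oka1f}, take $n=1$,
$k=0$ and $f_0(x)=x^a$, so that $\Delta_{f_0}=[a,+\infty)\subset\Bbb R_+$ and
the only non-empty subset is $\mathcal{J}=\{1\}$. The expansion of
$1!\,\frac{(\Bbb R_+,\Delta_{f_0})}{1+(\Bbb R_+,\Delta_{f_0})}$ contains a
unique term of degree $1$, whose value is the length of
$\Bbb R_+\setminus\Delta_{f_0}$, so $\mu_1(\Delta_{f_0})=a$. The corollary then
gives $(-1)^{1-0-1}\,a+(-1)^{1-0}=a-1$, which is the classical Milnor number
of the germ $x^a$.
\end{exa}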
\section{Preliminaries from toric geometry}\label{ss2}
\subsection{Toric varieties} \label{sstor} We recall some basic facts about
smooth toric varieties and introduce the corresponding notation (see
details in \cite{danil} or, in a more elementary form for the smooth
case, in \cite{varch2}). A closed cone in $\Bbb R^N$ with the vertex at the
origin is said to be \textit{simple} if it is generated by a part
of a basis of the lattice $\Bbb Z^N$. A collection of simple cones is
called a \textit{simple fan} if it is closed with respect to
taking intersections and faces of its cones and satisfies the
following condition: the intersection of each two cones is a face
of both of them. \textit{The support set} $|\Gamma|$ of a fan
$\Gamma$ is defined as the union of its cones.
Simple fans in $\Bbb R^N$ are in one-to-one correspondence with
$N$-dimensional smooth toric varieties (i.e. $N$-dimensional
smooth algebraic varieties with an action of a complex torus
$(\Bbb C\setminus\{0\})^N$, such that one of the orbits of this
action is everywhere dense -- this orbit is called \textit{the
maximal torus}). We denote the toric variety corresponding to a
fan $\Gamma$ by $\Bbb T^{\Gamma}$. The $k$-dimensional orbits of the
variety $\Bbb T^{\Gamma}$ are in one-to-one correspondence with the
codimension $k$ cones of the fan $\Gamma$, and adjacent orbits
correspond to adjacent cones. We denote the primitive generator
of a 1-dimensional cone, corresponding to a codimension 1 orbit
$R$, by $\gamma_R$.
Let $\Gamma$ be a simple fan with a convex support set in
$(\Bbb R^N)^*$. The polyhedron $\Delta\subset\Bbb R^N$ is said to be
\textit{compatible} with $\Gamma$, if its support function
$\Delta(\cdot)$ is linear on every cone of $\Gamma$ and
is not defined outside of $|\Gamma|$. If a polyhedron $\Delta$ is
compatible with $\Gamma$, then there exists a unique ample line
bundle $\mathcal{B}_{\Delta}$ on $\Bbb T^{\Gamma}$ equipped with a
meromorphic section $s_{\Delta}$, such that the divisor of zeros
and poles of $s_{\Delta}$ equals $-\sum_{R} \Delta(\gamma_R) R$,
where $R$ runs over all codimension 1 orbits of $\Bbb T^{\Gamma}$.
This correspondence, which assigns the pair
$(\mathcal{B}_{\Delta},s_{\Delta})$ to the polyhedron $\Delta$,
is an isomorphism between the semigroup of convex integer
polyhedra compatible with $\Gamma$, and the semigroup of pairs
$(\mathcal{B},s)$, where $\mathcal{B}$ is a very ample line bundle
on $\Bbb T^{\Gamma}$, and $s$ is its meromorphic section with zeroes
and poles outside of the maximal torus.
\textit{The compact part} $\Bbb T^{\mathop{\rm Int}\nolimits\Gamma}$ of the variety
$\Bbb T^{\Gamma}$ is defined as the union of all precompact orbits
(i.e. the orbits, corresponding to the cones in the interior of
$|\Gamma|$). \textit{The dual cone} of the support cone
$|\Gamma|\subset(\Bbb R^N)^*$ (i.e. the set of all points in $\Bbb R^N$
at which the values of all covectors from $|\Gamma|$ are
non-negative) is denoted by $\Gamma^*\subset\Bbb R^N$. Here is a
simple example with $\Bbb T^{\Gamma}=\Bbb CP^1\times\Bbb C^1$ and
$\Bbb T^{\mathop{\rm Int}\nolimits\Gamma}=\Bbb CP^1\times\{0\}$ (we only draw the real part of
the toric variety, of course):
\begin{center} \noindent\includegraphics[width=14cm]{exatoric.eps}
\end{center}
\begin{defin}[\cite{E3}] \label{defnewtsect}
If a germ $s$ of a meromorphic section of the line bundle
$\mathcal{B}_{\Delta}$ on the pair
$(\Bbb T^{\Gamma},\Bbb T^{\mathop{\rm Int}\nolimits\Gamma})$ has no poles outside of
$\Bbb T^{\mathop{\rm Int}\nolimits\Gamma}$, then the quotient $s/s_{\Delta}$ defines a
function on $\Bbb T^{\Gamma}$ that can be represented as a Laurent
series $\sum_{a\in A} c_a t^a,\; c_a\ne 0,\; A\subset\Bbb Z^N$ on the
maximal torus $(\Bbb C\setminus\{0\})^N$. In this case:
The \textit{Newton polyhedron} $\Delta_s$ of the germ $s$ is
defined as the convex hull of the set $A+\Gamma^*$;
A coefficient $c_a$ is said to be \textit{leading}, if $a$ is
contained in a bounded face of $\Delta_s$;
For a covector $\gamma$ in the interior of $|\Gamma|$, the lowest
order non-zero $\gamma$-homogeneous component of the series
$\sum_{a\in A} c_a t^a$ is called the
$\gamma$-\textit{truncation} of the section $s$, and is denoted
by $s^{\gamma}$. It is a Laurent polynomial on the maximal torus
$(\Bbb C\setminus\{0\})^N$.
The Newton polyhedron of $s$ is said to be not defined, if $s$ has
poles outside of the compact part $\Bbb T^{\mathop{\rm Int}\nolimits\Gamma}$.
\end{defin}
\begin{rem} In the latter case, we can still represent $s$ as a
quotient of two holomorphic sections $s_1/s_2$, and define the
Newton polyhedron of $s$ as the Minkowski difference
$\Delta_{s_1}-\Delta_{s_2}$, which is a virtual polyhedron. In
this way most of the computations below extend to arbitrary
meromorphic sections. We do not pursue this here, because we do
not need it for our applications in Section \ref{ss1}.
\end{rem}
\begin{exa} \label{exatoric} If $\Delta$ is as shown on the picture
below, then the corresponding line bundle $\mathcal{B}_{\Delta}$
on the toric variety $\Bbb T^{\Gamma}=\Bbb CP^1\times\Bbb C^1$ (see the above
picture) is the pullback of the line bundle $\mathcal{O}(1)$ on
the first factor $\Bbb CP^1$. Denoting the standard coordinates on the
factors $\Bbb CP^1$ and $\Bbb C^1$ by $\lambda:\mu$ and $x$ respectively,
the section $s_{\Delta}$ of the bundle $\mathcal{B}_{\Delta}$ is
the pullback of the section $\mu$ of the bundle $\mathcal{O}(1)$.
The Newton polyhedron of a section $s=\lambda (ax^p+\ldots)+\mu
(bx^q+\ldots)$ of $\mathcal{B}_{\Delta}$ is shown on the picture
below (dots stand for higher order terms), and the leading
coefficients of $s$ are $a$ and $b$.
\end{exa}
\begin{center} \noindent\includegraphics[width=10cm]{exanewt.eps}
\end{center}
\subsection{Relative Kouchnirenko-Bernstein formula} The classical
Kouchnirenko-Bernstein formula computes the intersection number
of algebraic hypersurfaces in a complex torus. We need this
formula and the notion of intersection number in a somewhat more
general setting.
\textbf{Intersection numbers.}\label{sskb} We recall the
definition of varieties with multiplicities and their
intersection numbers in the generality that we need (see details
in \cite{fulton}). Let $M$ be a smooth oriented $N$-dimensional
manifold.
\begin{defin}[\cite{E3}]
\textit{A $k$-dimensional cycle on $M$ with the support set $K$}
is defined as a pair $(K,\alpha)$, where $K\subset M$ is a closed
subset, and $\alpha\in
H_k(K\cup\{\infty\},\{\infty\}\,;\;\Bbb Q)=H^{N-k}(M,M\setminus
K\,;\;\Bbb Q)$.
\end{defin}
Definitions and basic properties of the sum
$(K_1,\alpha_1)+(K_2,\alpha_2)=(K_1\cup K_2, \alpha_1+\alpha_2)$,
the intersection $(K_1,\alpha_1)\cap(K_2,\alpha_2)=(K_1\cap K_2,
\alpha_1\smile\alpha_2)$, the direct image
$f_*(K,\alpha)=(f(K),f_*\alpha)$ under a proper map $f$, and the
inverse image $f^*(K,\alpha)=(f^{-1}(K),f^*\alpha)$ under an
arbitrary continuous map $f$ are the same as for homology and
cohomology cycles. If $\alpha$ is the fundamental homology cycle
of a closed irreducible analytical set $K$, then the cycle
$(K,\alpha)$ is denoted by $[K]$ and is called \textit{the
fundamental cycle} of $K$. The divisor of zeroes and poles of a
meromorphic section $w$ of a line bundle $E\to M$ is denoted by
$[w]$. More generally, a continuous section $v$ of a vector
bundle $F\to M$ also corresponds to a certain cycle in $M$, which
is denoted by $[v]$ and is defined as the intersection of the
fundamental cycles of the graphs of the section $v$ and the zero
section in the total space $F$.
\begin{defin}
\textit{The index} $\mathop{\rm ind}\nolimits s$ of a 0-dimensional cycle
$s=(K,\alpha)$ with a compact support set $K$ is defined as its
image $\mathop{\rm pt}\nolimits_*(\alpha)\in H_0(*\,;\;\Bbb Z)=\Bbb Z$ under the proper map
$\mathop{\rm pt}\nolimits:K\to *$.
\end{defin}
Let $s_1,\ldots,s_k$ be cycles on $M$ such that the sum of their
codimensions equals $N$ and the intersection $K$ of their support
sets is compact (though not necessarily 0-dimensional). Then their
\textit{intersection number} $\mathop{\rm ind}\nolimits (s_1\cap\ldots\cap s_k)$ makes
sense.
Germs of cycles and indices of their intersections are defined in
the same way. In particular, the index $\mathop{\rm ind}\nolimits [A]\cap[B]$ for
germs of analytical sets $A$ and $B$ of complementary dimension
is the intersection number of these germs, and the index $\mathop{\rm ind}\nolimits [w]$
for the germ of a section $w$ of a rank $N$ vector bundle is the
Poincar\'e--Hopf index of this section.
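For example, for the germs of the curves $A=\{x=0\}$ and $B=\{x=y^2\}$ in
$(\Bbb C^2,0)$, we have
$$\mathop{\rm ind}\nolimits\, [A]\cap[B]=\dim_{\Bbb C}\mathcal{O}_{(\Bbb C^2,0)}/(x,\,x-y^2)=\dim_{\Bbb C}\Bbb C\{y\}/(y^2)=2,$$
the usual local intersection multiplicity of these curves at the origin.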
\textbf{Relative Kouchnirenko-Bernstein formula.} Let
$\Delta_1,\ldots,\Delta_I\subset\Bbb R^I$ be integer polyhedra
compatible with a simple fan $\Gamma$ in $(\Bbb R^I)^*$. Let $s_i,\,
i=1,\ldots,I$ be germs of meromorphic sections of the line bundles
$\mathcal{B}_{\Delta_i}$ on the pair $(\Bbb T^{\Gamma},
\Bbb T^{\mathop{\rm Int}\nolimits\Gamma})$, and suppose that their Newton polyhedra
$\Delta_{s_i}$ are defined (see Definition \ref{defnewtsect}). In
this subsection, we compute the intersection number $\mathop{\rm ind}\nolimits
\bigl([s_1]\cap\ldots\cap[s_I]\bigr)$ in terms of polyhedra
$\Delta_i$ and $\Delta_{s_i}$.
\begin{defin}\label{defgenpos0} The leading coefficients of
$s_1,\ldots,s_I$ are said to be \textit{in general position}, if,
for every covector $\gamma$ in the interior of $|\Gamma|$, the
polynomial equations $s_1^{\gamma}=\ldots=s_I^{\gamma}=0$ have no
common roots in $(\Bbb C\setminus\{0\})^I$.
\end{defin}
\begin{theor}[Relative Bernstein formula,\cite{E3}] \label{relbernst}
Suppose that, in the above notation, the differences
$\Delta_i\setminus\Delta_{s_i}$ and
$\Delta_{s_i}\setminus\Delta_i, i=1,\ldots,I$ are bounded. If the
leading coefficients of the sections $s_i$ are in general
position, then the intersection number $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])$ equals $I!$ times the mixed volume of
the pairs $(\Delta_i,\Delta_{s_i})$. If the leading coefficients
are not in general position, then the intersection number is
greater than this value or is not defined.
\end{theor}
\begin{exa}\label{exabernst2} Adopting the notation of Example \ref{exatoric} and taking
Example \ref{exavol} into account, the intersection number of the
divisors \begin{center}$\lambda (ax^p+\ldots)+\mu (bx^q+\ldots)=0$
and $\lambda (cx^r+\ldots)+\mu (dx^t+\ldots)=0$\end{center} on the
germ of the variety $\Bbb CP^1\times(\Bbb C^1,0)$ is greater than or equal to
$\min(p+t,q+r)$, and the equality is attained unless
$p+t=q+r$ and $\det\left(a\; b\atop c\; d\right)=0$. To verify
this answer, note that the desired intersection number
equals the multiplicity of zero of the determinant
$$\det\left(ax^p+\ldots\quad bx^q+\ldots\atop cx^r+\ldots\quad
dx^t+\ldots\right) \mbox{ at } x=0 . $$
\end{exa}
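\begin{exa} For instance, taking $p=q=r=t=1$, so that $p+t=q+r=2$, we get
$$\det\left(ax+\ldots\quad bx+\ldots\atop cx+\ldots\quad dx+\ldots\right)=(ad-bc)\,x^2+\ldots,$$
and the multiplicity of zero of this determinant at $x=0$ equals
$2=\min(p+t,q+r)$ if and only if $\det\left(a\; b\atop c\; d\right)\ne 0$,
in accordance with the exceptional case of Example \ref{exabernst2}.
\end{exa}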
Two proofs of Theorem \ref{relbernst} are given at the end of
this section. The first one (suggested by A.~G.~Khovanskii) is a
very transparent reduction to the global case and gives Assertion
\ref{volexpr2} as a byproduct. The second argument is close to the
original D.~Bernstein's proof and also leads to Assertion
\ref{volsymm} and Corollary \ref{volveryconv}, on which the proof
of Theorem \ref{finoka1} is based. Before proving this theorem,
we explain how to omit the redundant assumption of bounded
differences in Theorem \ref{relbernst}.
\subsection{Stable version of Kouchnirenko-Bernstein formula}\label{ssrelkb}
As above, let $\Delta_1,\ldots,\Delta_I\subset\Bbb R^I$ be integer
polyhedra compatible with a simple fan $\Gamma$ in $(\Bbb R^I)^*$.
Let $s_i,\, i=1,\ldots,I$ be germs of meromorphic sections of the
line bundles $\mathcal{B}_{\Delta_i}$ on the pair $(\Bbb T^{\Gamma},
\Bbb T^{\mathop{\rm Int}\nolimits\Gamma})$, and suppose that their Newton polyhedra
$\Delta_{s_i}$ are defined (see Definition \ref{defnewtsect}).
\begin{defin} \label{defgenpos}
The leading coefficients of the sections $s_1,\ldots,s_I$ are said to
be \textit{in strongly general position}, if, for every pair of
covectors $\gamma$ from the boundary of $|\Gamma|$ and $\delta$
from the interior of $|\Gamma|$, the system of polynomial
equations $(s_i^{\gamma})^{\delta}=0,\;
i\in\{i\;|\;\Delta_i(\gamma)=\Delta_{s_i}(\gamma)\}$, has no
solutions in $(\Bbb C\setminus\{0\})^I$.
\end{defin}
If the differences of $\Delta_i$ and $\Delta_{s_i}$ are bounded,
then strongly general position is equivalent to general
position in the sense of Definition \ref{defgenpos0}. It is
stronger in general, but still occurs for generic leading
coefficients under a certain mild restriction on the Newton
polyhedra (see Corollary \ref{convstab}).
Choose a covector $\gamma_0$ in the interior of $|\Gamma|$, and,
for every positive number $M$, denote the convex hull of the union
$\Delta_{s_i}\cup \bigl(\Delta_i\cap \{\gamma_0\geqslant
M\}\bigr)$ by $H_{i,M}$. If the value of the mixed volume of pairs
$(\Delta_1,H_{1,M}),\ldots,(\Delta_I,H_{I,M})$ does not depend on
$M$ for large $M$, then this value is called the \textit{stable
mixed volume} of pairs
$(\Delta_1,\Delta_{s_1}),\ldots,(\Delta_I,\Delta_{s_I})$.
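Note that if both differences $\Delta_i\setminus\Delta_{s_i}$ and
$\Delta_{s_i}\setminus\Delta_i$ are bounded (i.e. in the setting of Theorem
\ref{relbernst}), then $\Delta_i\cap\{\gamma_0\geqslant M\}\subset\Delta_{s_i}$
for every sufficiently large $M$, hence $H_{i,M}=\Delta_{s_i}$; in this case
the stable mixed volume exists and coincides with the mixed volume of the
pairs $(\Delta_1,\Delta_{s_1}),\ldots,(\Delta_I,\Delta_{s_I})$.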
\begin{exa}
If polygons $A$ and $B$ are as shown below, then the stable mixed
area of the pairs $(\Bbb R^2_+,A)$ and $(\Bbb R^2_+,B)$ equals
$\frac{ps+rq-pr}{2}$ plus the proper mixed area of pairs
$\bigl(\Bbb R^2_++(p,0),A\bigr)$ and $\bigl(\Bbb R^2_++(0,r),B\bigr)$.
\end{exa}
\begin{center} \noindent\includegraphics[width=8cm]{exastable.eps}
\end{center}
\begin{theor}\label{relbernstab} If, in the above notation, the leading
coefficients of the germs $s_1,\ldots,s_I$ are in strongly general
position, then the intersection number
$\mathop{\rm ind}\nolimits[s_1]\cap\ldots\cap[s_I]$ equals $I!$ times the stable mixed
volume of the pairs
$(\Delta_1,\Delta_{s_1}),\ldots,(\Delta_I,\Delta_{s_I})$ (in
particular, both of them exist). If the leading coefficients of the
germs $s_1,\ldots,s_I$ are not in strongly general position, then
the intersection number $\mathop{\rm ind}\nolimits[s_1]\cap\ldots\cap[s_I]$ is greater
than this value or is not defined.
\end{theor}
\textsc{Proof.} We can assume without loss of generality that the
polyhedron $\Delta_{i,M}=\Delta_i\cap \{\gamma_0\geqslant M\}$
has integer vertices for every integer $M$. For every
$i=1,\ldots,I$, pick $s_{i,M}$ such that $\Delta_{i,M}$ is its
Newton polyhedron, and consider germs $s_1+t s_{1,M},\ldots,s_I+t
s_{I,M}$ on the pair $\bigl(\Bbb T^{\Gamma}\times\Bbb C^1,\;
\Bbb T^{\mathop{\rm Int}\nolimits(\Gamma)}\times\{0\}\bigr)$, where $t$ is the coordinate
on $\Bbb C^1$.
If leading coefficients of the collection $s_1,\ldots,s_I$ are in
strongly general position, then
1) one can readily verify that the set $\{s_1+t
s_{1,M}=\ldots=s_I+t s_{I,M}=0\}\subset\Bbb T^{\Gamma}\times\Bbb C^1$ is
contained in $\Bbb T^{\mathop{\rm Int}\nolimits(\Gamma)}\times\Bbb C^1$ for a large $M$. Thus,
the desired $\mathop{\rm ind}\nolimits[s_1]\cap\ldots\cap[s_I]$ equals
$\mathop{\rm ind}\nolimits[s_1+\varepsilon s_{1,M}]\cap\ldots\cap[s_I+\varepsilon
s_{I,M}]$ for a small constant $\varepsilon\ne 0$.
2) leading coefficients of the collection $s_1+\varepsilon
s_{1,M},\ldots,s_I+\varepsilon s_{I,M}$ are in general position
for a small constant $\varepsilon\ne 0$. Thus, by Theorem
\ref{relbernst}, $\mathop{\rm ind}\nolimits[s_1+\varepsilon
s_{1,M}]\cap\ldots\cap[s_I+\varepsilon s_{I,M}]$ equals $I!$ times
the mixed volume of pairs
$(\Delta_1,H_{1,M}),\ldots,(\Delta_I,H_{I,M})$.
The first proposition of the theorem follows from (1) and (2).
If leading coefficients of the collection $s_1,\ldots,s_I$ are
not in strongly general position, then we can choose
$s_{1,M},\ldots,s_{I,M}$, such that leading coefficients of the
collection $s_1+\varepsilon s_{1,M},\ldots,s_I+\varepsilon
s_{I,M}$ are not in general position for a small constant
$\varepsilon\ne 0$. Then
$\mathop{\rm ind}\nolimits[s_1]\cap\ldots\cap[s_I]\geqslant\mathop{\rm ind}\nolimits[s_1+\varepsilon
s_{1,M}]\cap\ldots\cap[s_I+\varepsilon s_{I,M}]$, and the latter
index is greater than $I!$ times the mixed volume of pairs
$(\Delta_1,H_{1,M}),\ldots,(\Delta_I,H_{I,M})$ by
Theorem \ref{relbernst}, which implies the second proposition of
the theorem. $\Box$
\begin{sledst}\label{convstab} The following three conditions are equivalent:
1) The stable mixed volume of the pairs
$(\Delta_1,\Delta_{s_1}),\ldots,(\Delta_I,\Delta_{s_I})$ exists.
2) Generic leading coefficients of the sections $s_1,\ldots,s_I$
are in strongly general position.
3) The collection $\Delta_1,\ldots,\Delta_I,$
$\Delta_{s_1},\ldots,\Delta_{s_I}$ is convenient in the following
sense.
\end{sledst}
\begin{defin} \label{defconv}
In the above notation, the collection of polyhedra
$\Delta_1,\ldots,\Delta_I,$ $\Delta_{s_1},\ldots,\Delta_{s_I}$ is
said to be \textit{convenient}, if, for every pair of covectors
$\gamma$ from the boundary of $|\Gamma|$ and $\delta$ from the
interior of $|\Gamma|$, there exists a set
$I_{\gamma,\delta}\subset \{1,\ldots,I\}$ such that
\newline 1) $\dim \sum_{i\in I_{\gamma,\delta}} (\Delta_{s_i}^{\gamma})^{\delta} \leqslant |I_{\gamma,\delta}|$, \; 2)
$\Delta_{s_i}(\gamma)=\Delta_i(\gamma)$ for every $i\in
I_{\gamma,\delta}$.
A convenient collection of polyhedra is said to be
\textit{very convenient} if $\dim \sum_{i\in I_{\gamma,\delta}}
(\Delta_{s_i}^{\gamma})^{\delta} < |I_{\gamma,\delta}|$ for
$\gamma\ne 0$.
A very convenient collection of polyhedra is said
to be \textit{cone-convenient} if $\Delta_1$ is a cone and
$I_{\gamma,\delta}\subset \{2,\ldots,I\}$ for every $\gamma\ne 0$.
\end{defin}
\textsc{Proof of Corollary \ref{convstab}.} (2) and (3) are
obviously equivalent. If (2) is satisfied, then (1) is satisfied
by Theorem \ref{relbernstab}. If (3) is not satisfied, then, in
the notation of the proof of Theorem \ref{relbernstab}, we can
choose $s_1,\ldots,s_I$, $s_{1,M},\ldots,s_{I,M}$ and
$s_{1,M+1},\ldots,s_{I,M+1}$ in such a way that leading
coefficients of $s_1+\varepsilon
s_{1,M}+s_{1,M+1},\ldots,s_I+\varepsilon s_{I,M}+s_{I,M+1}$ are
not in general position for a small $\varepsilon\ne 0$, but are in
general position for $\varepsilon=0$. Denote $I!$ times the mixed
volume of the pairs
$(\Delta_1,H_{1,M}),\ldots,(\Delta_I,H_{I,M})$ by
$V_M$. Then $V_M<\mathop{\rm ind}\nolimits[s_1+\varepsilon
s_{1,M}+s_{1,M+1}]\cap\ldots\cap[s_I+\varepsilon
s_{I,M}+s_{I,M+1}]\leqslant\mathop{\rm ind}\nolimits[s_1+s_{1,M+1}]\cap\ldots\cap[s_I+s_{I,M+1}]=V_{M+1}$
by Theorem \ref{relbernst}, and (1) is not satisfied. $\Box$
\subsection{Relative Kouchnirenko-Bernstein-Kho\-van\-skii formula}\label{sskbk}
In the assumptions of Theorem \ref{relbernst}, suppose that the
sections $s_1,\ldots,s_k,\; k<I$, are holomorphic (i.e.
$\Delta_{s_i}\subset\Delta_i$ for $i\leqslant k$), and the first
line bundle $\mathcal{B}_{\Delta_1}$ is trivial, i.e. $\Delta_1$
is a cone with the vertex at the origin $0\in\Bbb R^I$. The relative
version of the Bernstein-Khovanskii formula computes the Euler
characteristic of the Milnor fiber of the function $s_1$ on the
complete intersection $s_2=\ldots=s_k=0$, in terms of the Newton
polyhedra of the sections $s_1,\ldots,s_k$. At the end of this
subsection, we also explain how to drop the assumption of
triviality of $\mathcal{B}_{\Delta_1}$.
To define the Milnor fiber of $s_1$, it is convenient to fix a
family of neighborhoods for the compact part of the toric variety
$\Bbb T^{\Gamma}$. For instance, choose an integer point $a_i$ in
every infinite edge of $\Delta_1$, and define the neighborhood
$B_{\varepsilon}$ of the compact part of $\Bbb T^{\Gamma}$ as the
closure of the set of all $x\in(\Bbb C\setminus\{0\})^I$ such that $\sum_i |x^{a_i}| <
\varepsilon$.
\begin{defin} The Milnor fiber of the function $s_1$ on the
complete intersection $s_2=\ldots=s_k=0$ is the manifold
$\{s_1-\delta=s_2=\ldots=s_k=0\}\cap B_{\varepsilon}$ for
$|\delta|\ll\varepsilon\ll 1$.
\end{defin}
\begin{defin} The leading coefficients of $s_1,\ldots,s_k,\; k<I$, are
said to be in general position, if, for every $\gamma$ in the
interior of $|\Gamma|$, the systems of equations
$s_1^{\gamma}=\ldots=s_k^{\gamma}=0$ and
$s_2^{\gamma}=\ldots=s_k^{\gamma}=0$ define nonsingular varieties
in $(\Bbb C\setminus\{0\})^I$.
\end{defin}
\begin{defin} Faces $A_1,\ldots,A_k$ of polyhedra
$\Delta_1,\ldots,\Delta_k$ in $\Bbb R^I$ are said to be compatible, if
$A_i=\Delta_i^{\gamma},\; i=1,\ldots,k$, for some linear function
$\gamma$ on $\Bbb R^I$.
\end{defin}
For compatible unbounded faces
$A_1\subset\Delta_1,\ldots,A_k\subset\Delta_k$, denote the
dimension of $A_1+\ldots+A_k$ by $q$. Then, up to a shift, the
pairs $(A_i,A_i\cap\Delta_{s_i})$ are contained in the same
rational $q$-dimensional subspace of $\Bbb R^I$; we denote the
corresponding shifted pairs by $\widetilde A_1,\ldots,\widetilde
A_k$. In the notation introduced prior to Corollary \ref{oka1f},
denote the number $q!\frac{\widetilde
A_1\cdot\ldots\cdot\widetilde A_k}{(1+\widetilde
A_1)\cdot\ldots\cdot(1+\widetilde A_k)}$ by
$\chi_{A_1,\ldots,A_k}$.
\begin{theor} \label{relbkh} In the above assumptions, the Euler characteristic of the Milnor fiber of $s_1$
on the complete intersection $\{s_2=\ldots=s_k=0\}$ equals the
sum of $\chi_{A_1,\ldots,A_k}$ over all collections
$A_1\subset\Delta_1,\ldots,A_k\subset\Delta_k$ of compatible
unbounded faces, provided that the leading coefficients of
$s_1,\ldots,s_k$ are in general position.
\end{theor}
This is proved in \cite{oka} for a regular affine toric variety
(based on the idea of \cite{varch}), and in \cite{takeuchi} for
an arbitrary affine toric variety (in a more up-to-date
language). Both arguments can be easily applied to an arbitrary
(not necessarily affine) toric variety, and also provide a formula
for the $\zeta$-function of monodromy of the function $s_1$.
However, since we restrict our consideration to the Milnor number
in this paper, we prefer to give a much simpler proof by
reduction to the global Bernstein-Khovanskii formula. It is based
on the same Khovanskii's idea, as the first proof of Theorem
\ref{relbernst}.
\textsc{Proof (\cite{E5}).} If the leading coefficients are in
general position, then the topology of the Milnor fiber depends only
on the Newton polyhedra of $s_1,\ldots,s_k$, and we can assume
without loss of generality that $s_i=s_{\Delta_i}\cdot\tilde
s_i$, where $\tilde s_1,\ldots,\tilde s_k$ are Laurent
polynomials on $(\Bbb C\setminus\{0\})^I$ and satisfy the condition of general
position of \cite{pkhovvol}.
Pick an orbit $T$ of $\Bbb T^{\Gamma}$, and consider the
corresponding faces $\widetilde D_i$ and $D_i$ of the polyhedron
$\Delta_i$ and the Newton polyhedron of $\tilde s_i$ (i.e. the
faces $\Delta^{\gamma}_i$ and $\Delta^{\gamma}_{\tilde s_i}$ for
a covector $\gamma$ in the relative interior of the cone,
corresponding to the orbit $T$ in the fan $\Gamma$). Up to a
shift, all $\widetilde D_i$ and $D_i,\; i=1,\ldots,k$, are
contained in the same $(\dim T)$-dimensional subspace of $\Bbb R^I$,
hence, their $(\dim T)$-dimensional mixed volumes make sense.
By the global Bernstein-Khovanskii formula, the Euler
characteristics of $T\cap\{\tilde s_1=\ldots=\tilde s_k=0\}$ and
$T\cap\{\tilde s_1-\delta=\tilde s_2=\ldots=\tilde s_k=0\}$ equal
$(\dim T)!\frac{D_1\cdot\ldots\cdot
D_k}{(1+D_1)\cdot\ldots\cdot(1+D_k)}$ and $(\dim
T)!\frac{\widetilde D_1\cdot D_2\cdot\ldots\cdot
D_k}{(1+\widetilde D_1)\cdot(1+D_2)\cdot\ldots\cdot(1+D_k)}$
respectively.
Since the boundary of $B_{\varepsilon}$ subdivides the set
T\cap\{\tilde s_1-\delta=\tilde s_2=\ldots=\tilde s_k=0\}$ into
two parts, homeomorphic to the set $T\cap\{\tilde
s_1=\ldots=\tilde s_k=0\}$ and the Milnor fiber of $s_1$ on
$T\cap\{s_2=\ldots=s_k=0\}$, the Euler characteristic of the
latter equals $$(\dim T)!\frac{\widetilde D_1\cdot
D_2\cdot\ldots\cdot D_k}{(1+\widetilde
D_1)\cdot(1+D_2)\cdot\ldots\cdot(1+D_k)}-(\dim
T)!\frac{D_1\cdot\ldots\cdot
D_k}{(1+D_1)\cdot\ldots\cdot(1+D_k)}$$ by additivity of Euler
characteristic.
Denote the minimal face of $\Delta_i$, containing $D_i$, by $A_i$.
Then the difference above equals $\chi_{A_1,\ldots,A_k}$ by
Assertion \ref{volexpr2}. Thus, the Euler characteristic of the
Milnor fiber of $s_1$ on $T\cap\{s_2=\ldots=s_k=0\}$ equals
$\chi_{A_1,\ldots,A_k}$.
Summing up these equalities over all orbits
$T\subset\Bbb T^{\Gamma}$, corresponding to the cones from the
boundary of $|\Gamma|$, the statement of the theorem follows by
additivity of Euler characteristic. $\Box$
Finally, we formulate a more general version of this theorem,
with no assumptions on the triviality of the first line bundle.
We omit the proof, since it is essentially the same, and we do
not need this statement for our applications to determinantal
singularities.
In the assumptions of Theorem \ref{relbernst}, suppose that the
sections $s_1,\ldots,s_k$, $k<I$, are holomorphic (i.e.
$\Delta_{s_i}\subset\Delta_i$ for $i\leqslant k$), and pick a
holomorphic section $t_i$ of the line bundle
$\mathcal{B}_{\Delta_i}$ in the neighborhood $B_{\varepsilon}$
for a small $\varepsilon$. Varieties
$B_{\varepsilon}\cap\{s_1-t_1=\ldots=s_k-t_k=0\}\setminus\Bbb T^{\mathop{\rm Int}\nolimits\Gamma}$
are diffeomorphic to each other for almost all collections of
small sections $(t_1,\ldots,t_k)$. Such a variety is called the
Milnor fiber of the complete intersection $\{s_1=\ldots=s_k=0\}$.
\begin{theor} In the above assumptions, the Euler characteristic of the Milnor fiber of
the complete intersection $\{s_1=\ldots=s_k=0\}$ equals the sum
of $\chi_{A_1,\ldots,A_k}$ over all collections
$A_1\subset\Delta_1,\ldots,A_k\subset\Delta_k$ of compatible
unbounded faces, provided that the leading coefficients of
$s_1,\ldots,s_k$ are in general position.
\end{theor}
\subsection{Proof of Theorem \ref{relbernst}}\label{ss2proof}
\textbf{Preliminary remarks.} The following lemma clarifies the
statement of Theorem \ref{relbernst} and reduces it to the case
of germs $s_1,\ldots,s_I$ with generic leading coefficients.
\begin{lemma}\label{l51}
Under the assumptions of Theorem \ref{relbernst},
\newline 1) the intersection number $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])$ does not depend on the choice of the
simple fan $\Gamma$;
\newline 2) the intersection number $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])$ makes sense if the leading coefficients of
$s_1,\ldots,s_I$ are in general position;
\newline 3)
Let $\tilde s_i$ be a section of the line bundle
$\mathcal{B}_{\Delta_i}$ such that $\Delta_{\tilde
s_i}=\Delta_{s_i}$, and suppose that leading coefficients of the
sections $s_1,\ldots,s_I$ are in general position. Then $\mathop{\rm ind}\nolimits
([\tilde s_1]\cap\ldots\cap[\tilde s_I])\; \geqslant\; \mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])$, and the equality takes place if and
only if leading coefficients of the sections $\tilde
s_1,\ldots,\tilde s_I$ are in general position.
\end{lemma}
Note that Theorem \ref{relbernstab} generalizes these statements
to the case of a convenient collection
$\Delta_1,\ldots,\Delta_I,$ $\Delta_{s_1},\ldots,\Delta_{s_I}$,
but none of them remains valid if the collection of Newton
polyhedra is not convenient.
\newline
\textsc{Proof.} 1) Suppose that simple fans $\Gamma_1$ and
$\Gamma_2$ are compatible with the polyhedra
$\Delta_1,\ldots,\Delta_I$. If one of them is a subdivision of
the other one, then there exists the natural mapping
$p:\Bbb T^{\Gamma_1}\to\Bbb T^{\Gamma_2}$ of topological degree 1, such
that $\Delta_{p^*s_i}=\Delta_{s_i}$. Thus, intersection numbers
$\mathop{\rm ind}\nolimits [s_1]\cap\ldots\cap[s_I]$ and $\mathop{\rm ind}\nolimits
[p^*s_1]\cap\ldots\cap[p^*s_I]$ on the varieties $\Bbb T^{\Gamma_1}$
and $\Bbb T^{\Gamma_2}$ are equal, and Part 1 follows. In general,
neither of $\Gamma_1$ and $\Gamma_2$ is a subdivision of the
other one, but they always admit a common simple subdivision.
\newline 2) Part 1 allows us to assume that the fan $\Gamma$ is compatible with all
polyhedra $\Delta_i$ and $\Delta_{s_i},\, i=1,\ldots,I$.
Represent every cycle $[s_i]$ as a sum $\alpha_i+\beta_i$, where
the support set $|\alpha_i|$ is contained in the compact part
$\Bbb T^{\mathop{\rm Int}\nolimits\Gamma}$, and the support set $|\beta_i|$ intersects
$\Bbb T^{\mathop{\rm Int}\nolimits\Gamma}$ properly. Then the condition of general
position for leading coefficients of $s_1,\ldots,s_I$ can be
reformulated as follows:
$$|\beta_1|\cap\ldots\cap|\beta_I|=\varnothing.$$
Hence, $[s_1]\cap\ldots\cap [s_I]$ equals the sum of the intersections
$\bigcap_{i\in\mathcal{I}}\alpha_i\cap\bigcap_{i\notin\mathcal{I}}\beta_i$
over all non-empty sets $\mathcal{I}\subset\{1,\ldots,I\}$. Since
the support set of each of these intersections is compact, their
indices make sense.
\newline 3) Represent every $s_i$ as $\alpha_i+\beta_i$ and $\tilde s_i$ as
$\tilde\alpha_i+\tilde\beta_i$ as above, and note that
$\alpha_i=\tilde\alpha_i$ because $\Delta_{s_i}=\Delta_{\tilde
s_i}$. Thus, taking into account that
$|\beta_1|\cap\ldots\cap|\beta_I|=\varnothing$, we have
$$\mathop{\rm ind}\nolimits([\tilde s_1]\cap\ldots\cap [\tilde s_I])=\mathop{\rm ind}\nolimits([s_1]\cap\ldots\cap [s_I])+\mathop{\rm ind}\nolimits(\tilde\beta_1\cap\ldots\cap\tilde\beta_I).$$
If leading coefficients of the sections $\tilde s_1,\ldots,\tilde
s_I$ are in general position, then
$|\tilde\beta_1|\cap\ldots\cap|\tilde\beta_I|=\varnothing$ and
$\mathop{\rm ind}\nolimits[\tilde s_1]\cap\ldots\cap [\tilde s_I]=\mathop{\rm ind}\nolimits
[s_1]\cap\ldots\cap [s_I]$, otherwise
$|\tilde\beta_1|\cap\ldots\cap|\tilde\beta_I|\ne\varnothing$. In
the latter case, if
$|\tilde\beta_1|\cap\ldots\cap|\tilde\beta_I|$ is not compact,
then $\mathop{\rm ind}\nolimits ([\tilde s_1]\cap\ldots\cap[\tilde s_I])$ is not
defined. Otherwise we have
$\mathop{\rm ind}\nolimits\tilde\beta_1\cap\ldots\cap\tilde\beta_I>0$, because, upon a
small perturbation of $\tilde s_1,\ldots,\tilde s_I$, the set
$|\tilde\beta_1|\cap\ldots\cap|\tilde\beta_I|$ is finite and
non-empty. $\Box$
\textbf{Proof of Theorem \ref{relbernst} and Assertion
\ref{volexpr2}.} By Lemma \ref{l51}, Part 3, it is enough to
prove Theorem \ref{relbernst} under the assumption that leading
coefficients of the sections $s_i,\, i=1,\ldots,I$ are in general
position, and the quotient $s_i/s_{\Delta_i}$ defines a Laurent
polynomial on the maximal torus (recall that $s_{\Delta_i}$ is
the distinguished section of the line bundle
$\mathcal{B}_{\Delta_i}$ with no zeros and poles on the maximal
torus, see Subsection \ref{sstor}). Since, under this assumption,
the intersection number $\mathop{\rm ind}\nolimits ([s_1]\cap\ldots\cap[s_I])$ is a
symmetric multilinear function of the pairs of polyhedra
$(\Delta_i,\Delta_{s_i})$, it is enough to prove the equality
$\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])=I!\mathop{\rm Vol}\nolimits(\Delta_1\setminus\Delta_{s_1})-I!\mathop{\rm Vol}\nolimits(\Delta_{s_1}\setminus\Delta_1)$
under the additional assumption that $\Delta_1=\ldots=\Delta_I$,
$\Delta_{s_1}=\ldots=\Delta_{s_I}$. The proof is based on the
following fact.
\begin{lemma}
Suppose that, under the assumptions of Theorem \ref{relbernst},
$s_i/s_{\Delta_i}=\tilde s_i$, where $\tilde s_1,\ldots,\tilde
s_I$ are Laurent polynomials with (bounded) Newton polyhedra
$\Delta_{\tilde s_i}$, and leading coefficients of $\tilde
s_1,\ldots,\tilde s_I$ are in general position. Then
$$\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])=I!\mathop{\rm Vol}\nolimits(\tilde\Delta_1,\ldots,\tilde\Delta_I)-I!\mathop{\rm Vol}\nolimits(\Delta_{\tilde
s_1},\ldots,\Delta_{\tilde s_I}),$$ where
$\tilde\Delta_i=\Delta_i\setminus(\Delta_{s_i}\setminus\Delta_{\tilde
s_i})$.
\end{lemma}
\textsc{Proof.} We extend $\Gamma$ to a complete fan
$\tilde\Gamma$, compatible with the polyhedra $\tilde\Delta_i$ and
$\Delta_{\tilde s_i}, i=1,\ldots,I$. The toric variety
$\Bbb T^{\tilde\Gamma}$ is a compactification of $\Bbb T^{\Gamma}$. The
line bundles $\mathcal{B}_{\Delta_i}$ and their sections $s_i$
extend to the line bundles $\mathcal{B}_{\tilde\Delta_i}$ and
their sections $q_i=s_{\tilde\Delta_i}\cdot\tilde s_i$ on this
compactification. The set of common zeroes of the sections
$q_1,\ldots,q_I$ consists of a compact set $Z\subset\Bbb T^{\mathop{\rm Int}\nolimits
\Gamma}$ and finitely many points in the maximal torus, which
splits the intersection $[q_1]\cap\ldots\cap[q_I]$ into the sum
$q_Z+q_T$, where $|q_Z|=Z$ and $|q_T|$ is contained in the
maximal torus: $$[q_1]\cap\ldots\cap[q_I]=q_Z+q_T.$$ By
D.~Bernstein's classical formula, stated in terms of
intersection numbers of divisors on toric varieties (see
\cite{pkhovvol}), we have \begin{center} $\mathop{\rm ind}\nolimits
([q_1]\cap\ldots\cap[q_I])=I!\mathop{\rm Vol}\nolimits(\tilde\Delta_1,\ldots,\tilde\Delta_I)$
and $\mathop{\rm ind}\nolimits q_T = I!\mathop{\rm Vol}\nolimits(\Delta_{\tilde s_1},\ldots,\Delta_{\tilde
s_I})$.\end{center} On the other hand, $\mathop{\rm ind}\nolimits q_Z$ equals the
desired intersection number $\mathop{\rm ind}\nolimits ([s_1]\cap\ldots\cap[s_I])$ by
the construction of $q_Z$. $\Box$
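For a concrete illustration of D.~Bernstein's formula used in the above proof (a worked example, not part of the original argument, using the standard normalization $2!\mathop{\rm Vol}\nolimits(A,B)=\mathop{\rm Vol}\nolimits(A+B)-\mathop{\rm Vol}\nolimits(A)-\mathop{\rm Vol}\nolimits(B)$ of the mixed volume in the plane): let $I=2$ and consider generic Laurent polynomials $q_1=a+bt_1+ct_2$ and $q_2=d+et_1t_2$ with Newton polygons $\tilde\Delta_1=\mathop{\rm conv}\nolimits\{(0,0),(1,0),(0,1)\}$ and $\tilde\Delta_2=\mathop{\rm conv}\nolimits\{(0,0),(1,1)\}$. Then
$$2!\mathop{\rm Vol}\nolimits(\tilde\Delta_1,\tilde\Delta_2)=\mathop{\rm Vol}\nolimits(\tilde\Delta_1+\tilde\Delta_2)-\mathop{\rm Vol}\nolimits(\tilde\Delta_1)-\mathop{\rm Vol}\nolimits(\tilde\Delta_2)=\frac52-\frac12-0=2,$$
in agreement with the direct count: substituting $t_2=-(a+bt_1)/c$ into $q_2=0$ gives the quadratic equation $cd-aet_1-bet_1^2=0$, which has two roots in $\Bbb C\setminus\{0\}$ for generic coefficients.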
If we have $\Delta_1=\ldots=\Delta_I$,
$\Delta_{s_1}=\ldots=\Delta_{s_I}$, leading coefficients of the
sections $s_i,\, i=1,\ldots,I$ are in general position, and
$s_i/s_{\Delta_i},\, i=1,\ldots,I$ are Laurent polynomials, then
this lemma implies that $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])=I!\mathop{\rm Vol}\nolimits(\Delta_1\setminus\Delta_{s_1})-I!\mathop{\rm Vol}\nolimits(\Delta_{s_1}\setminus\Delta_1)$,
and Theorem \ref{relbernst} follows. Together with Theorem
\ref{relbernst}, this lemma proves Assertion \ref{volexpr2}.
$\square$
\textbf{Proof of Theorem \ref{relbernst} and Assertion
\ref{volsymm} (\cite{E3}).} It is enough to prove Theorem
\ref{relbernst} under the assumption that
$\Delta_{s_i}\subset\Delta_i$ (which means that $s_1,\ldots,s_I$
are holomorphic sections). Indeed, we can represent the
meromorphic section $s_i$ as the quotient of the sections
$s_iq_i$ and $q_i$ of the line bundles
$\mathcal{B}_{\Delta_i+\tilde\Delta_i}$ and
$\mathcal{B}_{\tilde\Delta_i}$ respectively, such that
$\Delta_{s_iq_i}=\Delta_{s_i}+\Delta_{q_i}\subset\Delta_i+\tilde\Delta_i$
and $\Delta_{q_i}\subset\tilde\Delta_i$ (which means that
$s_iq_i$ and $q_i$ are holomorphic). The intersection number of
the divisors of sections is additive with respect to tensor
multiplication of sections, and the mixed volume of pairs is
additive with respect to Minkowski summation of pairs; hence the
statement of the theorem for the
collection of meromorphic sections $s_1,\ldots,s_I$ follows from
the same statement for all collections of holomorphic sections of
the form $q_1(s_1)^{\alpha_1},\ldots,q_I(s_I)^{\alpha_I}$, where
$(\alpha_1,\ldots,\alpha_I)$ ranges over $\{0,1\}^I$.
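To spell out this reduction (a routine multilinearity computation, included here for convenience): since $[s_i]=[s_iq_i]-[q_i]$ as cycles, multilinearity of the intersection number yields the inclusion-exclusion identity
$$\mathop{\rm ind}\nolimits([s_1]\cap\ldots\cap[s_I])=\sum_{(\alpha_1,\ldots,\alpha_I)\in\{0,1\}^I}(-1)^{I-\alpha_1-\ldots-\alpha_I}\,\mathop{\rm ind}\nolimits\bigl([q_1(s_1)^{\alpha_1}]\cap\ldots\cap[q_I(s_I)^{\alpha_I}]\bigr),$$
and the same identity holds for the mixed volumes of the corresponding pairs, because the pair $(\Delta_i+\tilde\Delta_i,\Delta_{s_i}+\Delta_{q_i})$ is the Minkowski sum of the pairs $(\Delta_i,\Delta_{s_i})$ and $(\tilde\Delta_i,\Delta_{q_i})$.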
First, we compute the desired intersection number in the
following special case (see Subsection \ref{ssmixvol} for notation
related to support functions and faces of convex polyhedra).
\begin{lemma} \label{indvoltriv}
Let $\Delta_1,\ldots,\Delta_I,$
$\Delta_{s_1},\ldots,\Delta_{s_I}$ be a cone-convenient collection
of polyhedra (see Definition \ref{defconv}). If leading
coefficients of the sections $s_i$ are in strongly general
position, then the intersection number $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])$ equals
$$
(I-1)! \sum\limits_{\gamma\in S\cap\mathop{\rm Int}\nolimits|\Gamma|}
(\Delta_{s_1}(\gamma) - \Delta_1(\gamma))\cdot \mathop{\rm Vol}\nolimits
(\Delta_{s_2}^{\gamma},\ldots,\Delta_{s_I}^{\gamma}).
$$
(recall that $\mathop{\rm Vol}\nolimits$ is the classical mixed volume of bounded
polyhedra, $S$ stands for the set of all primitive covectors in
$(\Bbb Z^I)^*$ and $\mathop{\rm Int}\nolimits|\Gamma|$ stands for the interior of the cone
$|\Gamma|$; one can readily verify that the right hand side
contains finitely many non-zero terms).
\end{lemma}
\textsc{Proof.} We choose a simple fan $\Gamma$ compatible with
the polyhedra $\Delta_{s_i},\Delta_i,\, i=1,\ldots,I$. The
divisor $[s_i]$ can be represented as
$\alpha_i+\beta_i+\varphi_i$, where $\alpha_i$ is a linear
combination of the closures of precompact codimension 1 orbits in
$\Bbb T^{\Gamma}$, $\beta_i$ is a linear combination of the closures
of non-precompact codimension 1 orbits, and the hypersurfaces
$|\varphi_i|, i=1,\ldots,I$ intersect codimension 1 orbits of
$\Bbb T^{\Gamma}$ properly.
We note that the desired intersection number $\mathop{\rm ind}\nolimits
(\alpha_1+\beta_1+\varphi_1)\cap\ldots\cap(\alpha_I+\beta_I+\varphi_I)$
equals $$\mathop{\rm ind}\nolimits\alpha_1\cap\varphi_2\cap\ldots\cap\varphi_I.$$
Indeed, since the line bundle $\mathcal{B}_{\Delta_1}$ is
trivial, the support set of the cycle $[s_1-\varepsilon]\cap
\alpha_j$ is empty, and, in particular, $\mathop{\rm ind}\nolimits
[s_1]\cap\ldots\cap[s_I]=\mathop{\rm ind}\nolimits
[s_1-\varepsilon]\cap[s_2]\cap\ldots\cap[s_I]=\mathop{\rm ind}\nolimits
[s_1-\varepsilon]\cap(\beta_2+\varphi_2)\cap\ldots\cap(\beta_I+\varphi_I)=\mathop{\rm ind}\nolimits
[s_1]\cap(\beta_2+\varphi_2)\cap\ldots\cap(\beta_I+\varphi_I)$.
Since the polyhedra under consideration are cone-convenient and
leading coefficients of $s_1,\ldots,s_I$ are in strongly general
position, the support sets of the intersections
$(\beta_1+\varphi_1)\cap\ldots\cap (\beta_I+\varphi_I)$ and
$\alpha_1\cap\bigcap_{j\in
\mathcal{J}}\varphi_j\cap\bigcap_{j\notin \mathcal{J}}\beta_j,\;
\mathcal{J}\subsetneq\{1,\ldots,I\}$ are empty, and hence $\mathop{\rm ind}\nolimits
(\alpha_1+\beta_1+\varphi_1)\cap(\beta_2+\varphi_2)\cap\ldots\cap(\beta_I+\varphi_I)=\mathop{\rm ind}\nolimits\alpha_1\cap\varphi_2\cap\ldots\cap\varphi_I$.
We can compute the intersection number
$$\mathop{\rm ind}\nolimits\alpha_1\cap\varphi_2\cap\ldots\cap\varphi_I,$$ using the following two equalities.
By D.~Bernstein's theorem, if $R$ is a compact closure of a
codimension 1 orbit of $\Bbb T^{\Gamma}$, and $\gamma_R$ is the
primitive generator of the corresponding 1-dimensional cone of
$\Gamma$, then the intersection number $\mathop{\rm ind}\nolimits
[R]\cap\varphi_2\cap\ldots\cap\varphi_I$ equals $(I-1)!\mathop{\rm Vol}\nolimits
(\Delta_{s_2}^{\gamma_R},\ldots,\Delta_{s_I}^{\gamma_R})$. On the
other hand, $\alpha_1=\sum_{R}
\Bigl(\Delta_{s_1}(\gamma_R)-\Delta_1(\gamma_R)\Bigr)\cdot[R]$,
where $R$ runs over all compact closures of codimension 1 orbits
of $\Bbb T^{\Gamma}$.
The second statement of the lemma follows from Lemma \ref{l51},
Part 3. $\Box$
We can reduce the general problem of computation of the
intersection number $\mathop{\rm ind}\nolimits ([s_1]\cap\ldots\cap[s_I])$ to the
special case of Lemma \ref{indvoltriv} as follows (the perturbation
of divisors used in this argument is similar to the
original idea of D.~Bernstein, see \cite{bernst}).
\begin{lemma} \label{indvolgen}
Suppose that the collection of polyhedra
$\Delta_{s_i}\subset\Delta_i\subset\Bbb R^I,\; i=1,\ldots,I$ is very
convenient. For arbitrary numbers $a_i\in\Bbb N, i=1,\ldots,I$, denote
by $\Sigma_i$ the convex hull of
$((\Bbb R^1_++a_i)\times\Delta_i)\cup(\{0\}\times\Delta_{s_i})\subset\Bbb R^1\oplus\Bbb R^I$,
and denote the point $(1,0,\ldots,0)\in\Bbb R^1\oplus\Bbb R^I$ by $E$. If
leading coefficients of the sections $s_i$ are in strongly general
position, then the intersection number $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])$ equals $ I! \sum\limits_{\gamma\in
S\cap\mathop{\rm Int}\nolimits(\Bbb R^1_+\times|\Gamma|)} (\gamma, E) \mathop{\rm Vol}\nolimits
(\Sigma_1^{\gamma},\ldots,\Sigma_I^{\gamma})$ (recall that $\mathop{\rm Vol}\nolimits$
is the classical mixed volume of bounded polyhedra, $S$ stands
for the set of all primitive covectors in $(\Bbb Z^1\oplus\Bbb Z^I)^*$
and $\mathop{\rm Int}\nolimits$ stands for the interior; one can readily verify that
the right hand side contains only finitely many non-zero terms).
\end{lemma}
\textsc{Proof.} We pick sections $\tilde s_i$ of the line bundles
$\mathcal{B}_{\Delta_i}$ with the Newton polyhedra
$\Delta_{\tilde s_i}=\Delta_i$ and generic leading coefficients.
Denote by $\varepsilon$ the standard coordinate on the first
factor of the product
$\Bbb C^1\times\Bbb T^{\Gamma}=\Bbb T^{\Bbb R^1_+\times\Gamma}$. The desired
intersection number $\mathop{\rm ind}\nolimits ([s_1]\cap\ldots\cap[s_I])$ equals the
intersection number
$\mathop{\rm ind}\nolimits[\varepsilon]\cap[s_1+\varepsilon^{a_1}\tilde
s_1]\cap\ldots\cap[s_I+\varepsilon^{a_I}\tilde s_I]$ on the germ
of the pair $(\Bbb C^1\times\Bbb T^{\Gamma},\{0\}\times\Bbb T^{\mathop{\rm Int}\nolimits\Gamma})$.
This intersection number can be computed by Lemma
\ref{indvoltriv}, because $\varepsilon$ can be considered as a
section of the trivial line bundle on $\Bbb C^1\times\Bbb T^{\Gamma}$.
$\Box$
In the notation of Lemma \ref{indvolgen}, a bounded face of the
polyhedron $\Sigma_i\subset\Bbb R^1\oplus\Bbb R^I$ is said to be
\textit{long}, if its projection to the first summand of the
space $\Bbb R^1\oplus\Bbb R^I$ is not a point. Denote by $L_i$ the set of
all covectors $\gamma$ such that the support face of $\Sigma_i$
with respect to $\gamma$ is long. If the sequence $a_1\ll
a_2\ll\ldots\ll a_I$ increases fast enough, then the sets
$L_i$ do not intersect each other. The answer that Lemma
\ref{indvolgen} gives for $a_1\ll a_2\ll\ldots\ll a_I$ can be
simplified by Lemma \ref{comput1}, and turns out to be equal to
$\sum_{k=1}^I \sum\limits_{\gamma\in S\cap\mathop{\rm Int}\nolimits|\Gamma|}
(\Delta_{s_k}(\gamma)-\Delta_k(\gamma))
\mathop{\rm Vol}\nolimits(\Delta_1^{\gamma},\ldots,
\Delta_{k-1}^{\gamma},\Delta_{s_{k+1}}^{\gamma},\ldots,\Delta_{s_I}^{\gamma}).$
This expression is a symmetric function of pairs
$(\Delta_i,\Delta_{s_i})$, because the intersection number $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])$ is a symmetric function of
$s_1,\ldots,s_I$. By Theorem \ref{voldef}, Part 2, the
symmetrization of this expression equals the mixed volume of the
pairs $(\Delta_i,\Delta_{s_i})$, hence this expression itself
equals the mixed volume, which proves Theorem \ref{relbernst} and
also implies Assertion \ref{volsymm}. $\square$
To prove Theorem \ref{finoka1} we need the following version of
Theorem \ref{relbernst}:
\begin{sledst} \label{volveryconv}
Suppose that integer polyhedra
$\Delta_1,\ldots,\Delta_I\subset\Bbb R^I$ are compatible with a
simple fan $\Gamma$ in $(\Bbb R^I)^*$. Let $s_i, i=1,\ldots,I,$ be
germs of holomorphic sections of the line bundles
$\mathcal{B}_{\Delta_i}$ on the pair $(\Bbb T^{\Gamma},
\Bbb T^{\mathop{\rm Int}\nolimits\Gamma})$, suppose that the collection of polyhedra
$\Delta_1,\ldots,\Delta_I,$ $\Delta_{s_1},\ldots,\Delta_{s_I}$ is
cone-convenient, and the difference
$\Delta_1\setminus\Delta_{s_1}$ is bounded. If leading
coefficients of the sections $s_i$ are in strongly general
position, then the intersection number $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap[s_I])$ equals
$$
I!\mathop{\rm Vol}\nolimits_{|\Gamma|}\Bigl((\Delta_1,\Delta_{s_1}),(\Delta_{s_2},\Delta_{s_2}),\ldots,(\Delta_{s_I},\Delta_{s_I})\Bigr).
$$
\end{sledst}
\textsc{Proof.} The intersection number can be computed by Lemma
\ref{indvoltriv}. We prove that the answer equals
$I!\mathop{\rm Vol}\nolimits_{|\Gamma|}\Bigl((\Delta_1,\Delta_{s_1}),(\Delta_{s_2},\Delta_{s_2}),\ldots,(\Delta_{s_I},\Delta_{s_I})\Bigr)$,
rewriting the latter mixed volume of pairs by Assertion
\ref{volsymm}. $\Box$
\section{Resultantal varieties}\label{ss3}
We introduce resultantal singularities and study their invariants
in terms of Newton polyhedra, which relies upon toric geometry of
Section \ref{ss2} and includes results of Section \ref{ss1} as a
special case.
\subsection{Resultantal varieties}\label{ssresvar}
We denote the monomial $t_1^{a_1}\ldots t_N^{a_N}$ by $t^a$. For
a subset $\Sigma\subset\Bbb R^N$, we denote the set of all Laurent
polynomials of the form $\sum\limits_{a\in\Sigma\cap\Bbb Z^N} c_a
t^a,\, c_a\in\Bbb C$, by $\Bbb C[\Sigma]$. We regard them as functions on
the complex torus $(\Bbb C\setminus\{0\})^N$.
\begin{defin}[\cite{E2}, \cite{E3}]
\label{torrezcikl} For finite sets $\Sigma_i\subset \Bbb Z^N,
i=1,\ldots,I$, the \textit{resultantal variety}
$R(\Sigma_1,\ldots,\Sigma_I)$ is defined as the closure of the set
$$\bigl\{ (g_1,\ldots,g_I)\; |\; g_i\in\Bbb C[\Sigma_i],\; \exists
t\in(\Bbb C\setminus\{0\})^N \; :$$ $$
g_1(t)=\ldots=g_I(t)=0\bigr\}\subset
\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I].$$
\end{defin}
\begin{defin}[\cite{E2}, \cite{E3}] \label{defressing} Consider a germ of a holomorphic mapping
$f:(\Bbb C^n,0)\to(\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I],0)$. The
preimage $f^{(-1)}\bigl(R(\Sigma_1,\ldots,\Sigma_I)\bigr)$ is
called a \textit{resultantal singularity}, if its codimension
equals the codimension of $R(\Sigma_1,\ldots,\Sigma_I)$.
\end{defin}
For instance, a determinantal singularity is a special case of a
resultantal singularity by the following lemma.
\begin{lemma} \label{detviarez} Identify an $I\times k$ matrix
$(w_{i,\, l})$ with the collection of linear functions
$$w_{1,\, l}t_1+\ldots+w_{I-1,\, l}t_{I-1}+w_{I,\, l},\; l=1,\ldots,k.$$ Then the
space of all collections of matrices $\Bbb C^{I_1\times
k_1}\oplus\ldots\oplus\Bbb C^{I_J\times k_J}$ is identified with the
space
$$\underbrace{\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_1]}_{k_1}\oplus\ldots\oplus
\underbrace{\Bbb C[\Sigma_J]\oplus\ldots\oplus\Bbb C[\Sigma_J]}_{k_J},$$
where $\Sigma_j$ is the set of vertices of the standard
$(I_j-1)$-dimensional simplex in the $j$-th summand of the direct
sum $\Bbb Z^{I_1-1}\oplus\ldots\oplus\Bbb Z^{I_J-1}$,
and the set of all collections of degenerate matrices is
identified with the resultantal variety
$$R(\underbrace{\Sigma_1,\ldots,\Sigma_1}_{k_1},\ldots,
\underbrace{\Sigma_J,\ldots,\Sigma_J}_{k_J}).$$
\end{lemma}
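As a minimal check of this identification, take $J=1$, $I_1=2$, $k_1=2$: a $2\times 2$ matrix $(w_{i,\,l})$ corresponds to the pair of polynomials $g_l=w_{1,\,l}t_1+w_{2,\,l},\; l=1,2$, in $\Bbb C[\Sigma_1]$ with $\Sigma_1=\{0,1\}\subset\Bbb Z$. For a matrix with $w_{1,1}w_{2,1}\ne 0$, the polynomials $g_1$ and $g_2$ have a common root $t_1\in\Bbb C\setminus\{0\}$ if and only if $w_{1,1}w_{2,2}-w_{1,2}w_{2,1}=0$; since such matrices are dense in the determinantal quadric, the closure in Definition \ref{torrezcikl} yields
$$R(\Sigma_1,\Sigma_1)=\{\det(w_{i,\,l})=0\},$$
the variety of degenerate $2\times 2$ matrices.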
\begin{rem} It would be useful to extend well-known relations
between determinantal varieties and determinants to resultantal
varieties and resultants: to prove, for instance, that the ideal
of the resultantal variety $R(\Sigma_1,\ldots,\Sigma_I)$ is
generated by $(\Sigma_{i_1},\ldots,\Sigma_{i_p})$-resultants,
where $(\Sigma_{i_1},\ldots,\Sigma_{i_p})$ runs over all essential
subcollections of codimension 1.
\end{rem}
When studying resultantal singularities, we can impose some
helpful restrictions on the sets $\Sigma_1,\ldots,\Sigma_I$
without loss of generality. We denote the dimension of the convex
hull of $\Sigma_1+\ldots+\Sigma_I$ by
$\mathop{\rm dim}\nolimits(\Sigma_1,\ldots,\Sigma_I)$, and the difference
$I-\mathop{\rm dim}\nolimits(\Sigma_1,\ldots,\Sigma_I)$ by
$\mathop{\rm codim}\nolimits(\Sigma_1,\ldots,\Sigma_I)$.
\begin{lemma}[\cite{sturmf}] \label{sturm1} We have
$$\mathop{\rm codim}\nolimits
R(\Sigma_1,\ldots,\Sigma_I)=\max\limits_{\{j_1,\ldots,j_p\}\subset\{1,\ldots,I\}}\mathop{\rm codim}\nolimits(\Sigma_{j_1},\ldots,\Sigma_{j_p}).$$
\end{lemma}
See \cite{pkhovvol} and \cite{sturmf} for the special cases of
$\mathop{\rm codim}\nolimits(\Sigma_1,\ldots,\Sigma_I)=0$ and $1$ respectively. The
generalization for arbitrary $\mathop{\rm codim}\nolimits(\Sigma_1,\ldots,\Sigma_I)$
is quite straightforward, see e.g. \cite{E4}, Theorem 2.12, for
details.
\begin{defin}[\cite{sturmf}]
A collection of sets $\Sigma_i\subset \Bbb Z^N, i=1,\ldots,I$, is said
to be \textit{essential}, if
$\mathop{\rm codim}\nolimits(\Sigma_{i_1},\ldots,\Sigma_{i_J})<\mathop{\rm codim}\nolimits(\Sigma_1,\ldots,\Sigma_I)$
for every subset $\{i_1,\ldots,i_J\}\subsetneq\{1,\ldots,I\}$.
\end{defin}
The following lemma implies that, without loss of generality, we
can restrict our consideration to resultantal varieties that
correspond to essential collections. Hence, in what follows, we
only consider essential collections $\Sigma_1,\ldots,\Sigma_I$,
such that $\Sigma_1+\ldots+\Sigma_I$ is not contained in a
hyperplane.
\begin{lemma} \label{essent1} Let $\Sigma_i\subset \Bbb Z^N,
i=1,\ldots,I$, be finite sets.
\newline 1) There exists the minimal subset
$\{i_1,\ldots,i_J\}\subset\{1,\ldots,I\}$, such that
$$\mathop{\rm codim}\nolimits(\Sigma_{i_1},\ldots,\Sigma_{i_J})=\max\limits_{\{j_1,\ldots,j_p\}\subset\{1,\ldots,I\}}\mathop{\rm codim}\nolimits(\Sigma_{j_1},\ldots,\Sigma_{j_p}).$$
In particular, the collection $\Sigma_{i_1},\ldots,\Sigma_{i_J}$
is essential.
\newline 2) $R(\Sigma_1,\ldots,\Sigma_I)=
p^{(-1)}\bigl(R(\Sigma_{i_1},\ldots,\Sigma_{i_J})\bigr)$, where
$p:\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I]
\to\Bbb C[\Sigma_{i_1}]\oplus\ldots\oplus\Bbb C[\Sigma_{i_J}]$ is the
natural projection.
\end{lemma}
\textsc{Proof} of Part 1 for $I=N+1$ is given in \cite{sturmf},
Section 1, and can be extended to the general case. Part 2
follows from Part 1 and Lemma \ref{sturm1}. See e.g. \cite{E4} for
details.
\begin{exa}If $\Sigma_1=\Sigma_2$ is a segment in $\Bbb R^2$, and $\Sigma_3$ is a polygon,
then $(\Sigma_1,\Sigma_2)$ is the minimal essential subcollection
in the collection $(\Sigma_1,\Sigma_2,\Sigma_3)$, and $\mathop{\rm codim}\nolimits
R(\Sigma_1,\Sigma_2,\Sigma_3)=\mathop{\rm codim}\nolimits(\Sigma_1,\Sigma_2)=1$.
\end{exa}
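The computation behind this example, via Lemma \ref{sturm1}, reads as follows (a routine check):
$$\mathop{\rm codim}\nolimits(\Sigma_1,\Sigma_2)=2-\mathop{\rm dim}\nolimits(\Sigma_1+\Sigma_2)=2-1=1,$$
while, for instance, $\mathop{\rm codim}\nolimits(\Sigma_1)=1-1=0$ and $\mathop{\rm codim}\nolimits(\Sigma_1,\Sigma_2,\Sigma_3)=3-2=1$; thus the maximum in Lemma \ref{sturm1} equals $1$ and is already attained on the subcollection $(\Sigma_1,\Sigma_2)$, which is therefore the minimal essential one.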
\subsection{Cayley trick for intersection numbers}\label{sscay}
In this subsection we study the multiplicity of a 0-dimensional
resultantal singularity in terms of Newton polyhedra. Let
$\Sigma_1,\ldots,\Sigma_I$ be an essential collection of finite
sets in $\Bbb Z^N$, such that their sum
$(\Sigma_1+\ldots+\Sigma_I)\times\{1\}$ generates the lattice
$\Bbb Z^N\oplus\Bbb Z$ (recall that we can impose both of these
restrictions without loss of generality). A point in the space
$\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I]$ is a collection of
Laurent polynomials of the form $(\sum\limits_{a\in\Sigma_1}
y_{(a,1)} t^a, \ldots, \sum\limits_{a\in\Sigma_I} y_{(a,I)}
t^a)$, where $y_m,\; m\in M=\{ (a,i)\;|\;
a\in\Sigma_i,\;i=1,\ldots,I\}$, is the natural system of
coordinates in $\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I]$. A
germ of a holomorphic mapping
$f:(\Bbb C^n,0)\to(\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I],0)$ is
given by its components $y_m=f_m,\;m\in M$, in this coordinate
system. We discuss the following problem: how to compute the
intersection number
$$(\mbox{graph of}\; f) \circ \bigl(R(\Sigma_1,\ldots,\Sigma_I)\times\Bbb C^n\bigr)\eqno (*)$$ in terms of
the Newton polyhedra $\Delta_{f_m},\;m\in M$, under the assumption
that the leading coefficients of $f_m$ are in general position?
In what follows, we use notation introduced in Subsection
\ref{sstor}. We choose a simple fan $\Gamma$ compatible with the
convex hulls $\mathop{\rm conv}\nolimits(\Sigma_1),\ldots,$ $\mathop{\rm conv}\nolimits(\Sigma_I)$. The
toric variety $\Bbb T^{\Gamma}$ carries the line bundle
$\mathcal{B}_{\mathop{\rm conv}\nolimits(\Sigma_i)}$ with the distinguished section
$s_{\mathop{\rm conv}\nolimits(\Sigma_i)}$. We pull it back to the product
$\Bbb T^{\Gamma}\times\Bbb C^n$, and denote the germ of its section
$s_{\mathop{\rm conv}\nolimits(\Sigma_i)}\cdot\sum\limits_{a\in\Sigma_i} f_{a,i} t^a$
on the pair $(\Bbb T^{\Gamma}\times\Bbb C^n, \Bbb T^{\Gamma}\times\{0\})$ by
$s_i$. The following theorem allows us to discuss Newton
polyhedra, leading coefficients and topology of the sections
$s_1,\ldots,s_I$ instead of those of the mapping $f$.
\begin{theor}[\cite{E3}] \label{indind}
1) Under the above assumptions, the intersection number $(*)$
equals $\mathop{\rm ind}\nolimits ([s_1]\cap\ldots\cap [s_I])$. In particular, these
intersection numbers make sense simultaneously.
\newline 2) The Newton polyhedron of the section ${s_i}$ equals $$\Delta_i=\mathop{\rm conv}\nolimits\left(\bigcup\limits_{a\in\Sigma_i}
\{a\}\times\Delta_{f_{(a,i)}}\right)\subset\Bbb R^N\times\Bbb R^n_+.$$ 3)
Every leading coefficient of $s_i$ is a leading coefficient of
one of the components $f_{(a,i)}$.
\end{theor}
Actually, Part 1 is valid for an arbitrary continuous map $f$;
the assumption of holomorphicity is redundant. Note that Part 1
only makes sense for $n=I-N$ (otherwise, the intersection numbers
are not defined).
\begin{exa}The matrix $A$ from Example \ref{exadet} can be considered as a germ of a mapping
$f:\Bbb C^2\to\Bbb C[S]\oplus\Bbb C[S]\oplus\Bbb C[S]$, where $S=\{(0,0),\,
(0,1),\, (1,0)\}$, see Lemma \ref{detviarez}. In this case we have
$\Delta_1=\Delta_2=\Delta_3=\bigl([0,1]\times\Bbb R^2_+\bigr)\setminus\Sigma$,
where $\Sigma$ is shown on the picture of Example \ref{exadet}.
Part 1 states that the multiplicity of degeneration of $A$ at the
origin is equal to the intersection number of the divisors
$\lambda a_{1,i}+\mu a_{2,i}=0$ in $\Bbb CP^1\times\Bbb C^2$, where
$\lambda:\mu$ are the standard coordinates on $\Bbb CP^1$, and $i$
runs over $1,2,3$.
\end{exa}
In particular, computing the intersection number $\mathop{\rm ind}\nolimits
([s_1]\cap\ldots\cap [s_I])$ on the toric variety
$\Bbb T^{\Gamma}\times\Bbb C^n$ by Theorem \ref{relbernst}, we have the
following corollary for the mapping $f$.
\begin{sledst}[\cite{E2}, \cite{E3}]\label{corolres1} The intersection number $(*)$ equals $I!$ times the mixed
volume of pairs
$\Bigl(\mathop{\rm conv}\nolimits(\Sigma_i)\times\Bbb R^n_+,\Delta_i\Bigr)$, provided that
the leading coefficients of the components $f_{(a,i)}$ are in
general position, and the difference of polyhedra in each of
these pairs is bounded.
\end{sledst}
\textsc{Proof of Theorem \ref{indind}.} Parts 2 and 3 follow from
definitions.
Proof of Part 1 (we use notation introduced in Subsection
\ref{sskb}): the line bundle $\mathcal{B}_{\mathop{\rm conv}\nolimits(\Sigma_i)}$
lifted from the toric variety $\Bbb T^{\Gamma}$ to the product
$M=\Bbb T^{\Gamma}\times(\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I])$
admits \textit{the tautological section} $\tilde s_i$, such that
$\tilde
s_i|_{\Bbb T^{\Gamma}\times\{(l_1,\ldots,l_I)\}}=s_{\mathop{\rm conv}\nolimits(\Sigma_i)}\cdot
l_i$ for every point
$(l_1,\ldots,l_I)\in\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I]$.
The cycle $[R(\Sigma_1,\ldots,\Sigma_I)]$ is the image of the
intersection $[\tilde s_1]\cap\ldots\cap[\tilde s_I]$ under the
projection $\pi$ of the product $M$ to the second factor, the
section $s_i$ is the inverse image of $\tilde s_i$ under the map
$(\mathop{\rm id}\nolimits,f):\Bbb T^{\Gamma}\times\Bbb C^n\to M$, and the diagram
$$\begin{matrix}
\Bbb T^{\Gamma}\times\Bbb C^n & \stackrel{(\mathop{\rm id}\nolimits,f)}{\longrightarrow} & M \\
\downarrow & & \phantom{\pi}\downarrow\pi \\
\Bbb C^n & \stackrel{f}{\longrightarrow} & \Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I]
\end{matrix}$$
shows that $\mathop{\rm ind}\nolimits (\mathop{\rm id}\nolimits,f)^*\bigl([\tilde s_1]\cap\ldots\cap[\tilde
s_I]\bigr)=\mathop{\rm ind}\nolimits\pi_*\bigl([\tilde s_1]\cap\ldots\cap[\tilde
s_I]\bigr)\cap f_*[\Bbb C^n].$ Indeed,
$$
\pi_* ([\tilde s_1]\cap \ldots \cap [\tilde s_{I}]) \cap
f_*[\Bbb C^n] =\pi_* \Bigl([\tilde s_1]\cap \ldots \cap [\tilde s_{I}]
\cap \pi^* f_* [\Bbb C^n]\Bigr) =
$$
$$
=\pi_* \Bigl([\tilde s_1]\cap \ldots \cap [\tilde s_{I}] \cap
(\mathop{\rm id}\nolimits, f)_* [\Bbb T^{\Gamma} \times \Bbb C^n]\Bigr) = \pi_*(\mathop{\rm id}\nolimits, f)_*(\mathop{\rm id}\nolimits,
f)^* ([\tilde s_1]\cap \ldots \cap [\tilde s_{I}]).\; \Box
$$
\subsection{Toric resolutions of resultantal singularities}\label{sstorrez}
We construct a toric resolution for a
positive-dimensional resultantal singularity
$f^{(-1)}\bigl(R(\Sigma_1,\ldots,\Sigma_I)\bigr)$, assuming that the
leading coefficients of the components $f_{a,i}$ of the mapping
$f$ are in general position.
The faces of the positive orthant $\Bbb R^n_+$ form a fan that we
denote by $P$. Adopting notation of Theorem \ref{indind}, we
choose a simple subdivision $\Gamma'$ of the fan $\Gamma\times P$,
compatible with the Newton polyhedra $\Delta_{i},\,
i=1,\ldots,I$. The function $\sum\limits_{a\in\Sigma_i} f_{a,i}
t^a$ on the torus $(\Bbb C\setminus\{0\})^N\times(\Bbb C\setminus\{0\})^n$, which can be regarded as the
maximal torus of the variety $\Bbb T^{\Gamma'}$, defines a holomorphic
section $r_i=s_{\Delta_{i}}\cdot\sum\limits_{a\in\Sigma_i}
f_{a,i} t^a$ of the line bundle $\mathcal{B}_{\Delta_{i}}$ on
this variety.
\begin{defin}[\cite{E4}] The set $\{r_1=\ldots=r_I=0\}\subset\Bbb T^{\Gamma'}$, together with its projection $\pi$
to $\Bbb C^n$, is a \textit{toric resolution} of the singularity
$f^{(-1)}R(\Sigma_1,\ldots,\Sigma_I)$.\end{defin} This definition
is motivated by the following lemma.
\begin{lemma}[\cite{E4}] \label{lres1} Let $\Sigma_1,\ldots,\Sigma_I$ be an
essential collection of finite sets in $\Bbb Z^N$, such that their sum
$(\Sigma_1+\ldots+\Sigma_I)\times\{1\}$ generates the lattice
$\Bbb Z^N\oplus\Bbb Z$. \newline 1) We have
$\pi\Bigl(\{r_1=\ldots=r_I=0\}\Bigr)=f^{(-1)}R(\Sigma_1,\ldots,\Sigma_I)$.
\newline 2) If the Newton polyhedra of the components $f_{a,i}$ of the map $f$
intersect all coordinate axes in $\Bbb R^n$, and the leading
coefficients of the components are in general position, then
$\{r_1=\ldots=r_I=0\}$ is smooth, and the topological degree of
the map $\pi:\{r_1=\ldots=r_I=0\}\to
f^{(-1)}R(\Sigma_1,\ldots,\Sigma_I)$ equals 1. In particular,
$f^{(-1)}R(\Sigma_1,\ldots,\Sigma_I)$ is a resultantal
singularity.
\end{lemma}
The proof is the same as for toric resolutions of complete
intersections. A sufficient condition of general position is
that, for every collection of weights, assigned to the variables
$x_1,\ldots,x_n$ and $t_1,\ldots,t_N$, such that the weights of
$x_1,\ldots,x_n$ are positive, the polynomial equations
$(\sum\limits_{a\in\Sigma_i} f_{a,i} t^a)^l=0,\; i=1,\ldots,I$,
define a non-degenerate subvariety in the torus
$(\Bbb C\setminus\{0\})^N\times(\Bbb C\setminus\{0\})^n$. Let $\mathcal{Q}$ equal 1 if the sets
$\Sigma_1,\ldots,\Sigma_I$ are contained in the standard simplex,
and let it be 0 otherwise.
\begin{theor}[\cite{E4}] \label{smoothres} If $n\leqslant 2(I-N)+\mathcal{Q}$, the convex hulls of $\Sigma_1,\ldots,\Sigma_I$
have the same dual fan, and this fan is simple, then, under the
assumptions of Lemma \ref{lres1}(2), the map
$\pi:\{r_1=\ldots=r_I=0\}\to f^{(-1)}R(\Sigma_1,\ldots,\Sigma_I)$
is a diffeomorphism outside the origin $0\in\Bbb C^n$.
\end{theor}
In particular, $f^{(-1)}R(\Sigma_1,\ldots,\Sigma_I)$ is an
isolated resultantal singularity in this case. See \cite{E4} for a
more general statement with no assumptions on the dual fans of
convex hulls of $\Sigma_i$.
\textsc{Proof.} Let $S$ be the set of all collections
$(\varphi_1,\ldots,\varphi_I),\; \varphi_i\in\Bbb C[\Sigma_i]$, such
that the sections $s_{\mathop{\rm conv}\nolimits(\Sigma_i)}\cdot\varphi_i$ of the line
bundles $\mathcal{B}_{\mathop{\rm conv}\nolimits(\Sigma_i)},\; i=1,\ldots,I$, have at
least two different common zeros or a multiple common zero in
$\Bbb T^{\Gamma}$ (recall that ``$\mathop{\rm conv}\nolimits$'' stands for the convex hull).
If $f^{(-1)}(S)=\{0\}$ and the set $\{r_1=\ldots=r_I=0\}$ is
smooth, then the map $\pi:\{r_1=\ldots=r_I=0\}\to
f^{(-1)}R(\Sigma_1,\ldots,\Sigma_I)$ is a diffeomorphism outside
the origin $0\in\Bbb C^n$. Hence, Theorem \ref{smoothres} is a
corollary of the following two lemmas. $\Box$
\begin{lemma}[\cite{E4}] \label{smoothres1} Under the assumptions of Theorem
\ref{smoothres}, we have $$\mathop{\rm codim}\nolimits S\geqslant 2(I-N)+\mathcal{Q}.$$
\end{lemma}
Since a common root of the sections
$s_{\mathop{\rm conv}\nolimits(\Sigma_i)}\cdot\varphi_i,\, i=1,\ldots,I$, appears in
codimension $I-N$, it is natural to expect two common roots in
codimension at least $2(I-N)$. If, in addition, the sets
$\Sigma_1,\ldots,\Sigma_I$ are contained in the standard simplex,
then the sections $s_{\mathop{\rm conv}\nolimits(\Sigma_i)}\cdot\varphi_i,\,
i=1,\ldots,I$, vanish at every point of the line through the two
common roots, which increases codimension by 1. See \cite{E4} for
a formal proof.
\begin{lemma}[\cite{E4}, \cite{E5}] If the Newton polyhedra of the components of the map $f$
intersect all coordinate axes in $\Bbb R^n$, and the leading
coefficients of the components of $f$ satisfy a certain condition
of general position, then the image of $f$ intersects the variety
$S$ properly outside of the origin.
\end{lemma}
In particular, if $n\leqslant\mathop{\rm codim}\nolimits S$, then $f^{(-1)}(S)=\{0\}$.
See \cite{E4} or \cite{E5} for the proof. Note that Theorem
\ref{smoothres} and Lemma \ref{smoothres1} provide a sharp
estimate for the codimension of the singular locus of resultantal
singularities and varieties, provided that the following conjecture holds.
\begin{conjec} If the mixed volume of an essential collection of
integer polyhedra $\Sigma_1,\ldots,\Sigma_m$ in $\Bbb R^m$ is equal to
$1/m!$ (i.e. the minimal possible one), then, for a certain
automorphism $A:\Bbb Z^m\to\Bbb Z^m$ and vectors $a_1,\ldots,a_m$ in
$\Bbb Z^m$, the polyhedra $A(\Sigma_1)+a_1,\ldots,A(\Sigma_m)+a_m$
are contained in the standard simplex.
\end{conjec}
To the best of my knowledge, this fact has so far been proved (by G. Gusev)
only under the additional assumption $\dim\Sigma_1=m$.
\subsection{Proofs of results of Section 1}\label{ss3proof}
\textsc{Proof of Theorem \ref{thsturmf}.} If $I=N+1$ and the
collection $\Sigma_1,\ldots,\Sigma_I\subset\Bbb Z^N$ is essential,
then we have $R(\Sigma_1,\ldots,\Sigma_I)=\{\mathop{\rm Res}\nolimits=0\}$. Let
$f:\Bbb C\to\Bbb C[\Sigma_1]\oplus\ldots\oplus\Bbb C[\Sigma_I]$ be a germ of
a holomorphic mapping with components $f_m(t)=c_m
t^{\gamma_m}+\ldots$, where $c_m$ are generic non-zero complex
numbers, $m\in M,\; M=\{(a,i)|a\in \Sigma_i\}$. Then the
intersection number $(*)$ of Section \ref{sscay} equals
$\Delta_{\mathop{\rm Res}\nolimits}(\gamma),$ where $\Delta_{\mathop{\rm Res}\nolimits}(\cdot)$ is the
support function of the Newton polyhedron $\Delta_{\mathop{\rm Res}\nolimits}$, and
$\gamma\in(\Bbb R^M)^*$ is the covector with coordinates $\gamma_m,\;
m\in M$. We can compute it by Corollary \ref{corolres1}. $\Box$
\textsc{Proof of Theorem \ref{matrzeta}.} By Lemma
\ref{detviarez}, determinantal singularities can be regarded as
resultantal singularities. The construction of the toric
resolution for resultantal singularities (Theorem
\ref{smoothres}) implies Part 1, and reduces Part 2 to Theorem
\ref{relbkh}. Part 3 can be reduced to Part 2 by means of this
construction, in the same way as it is done in \cite{E1} for
complete intersections by means of their toric resolutions (the
idea is to deform the differential form $\omega$ to the
differential of a function, preserving its radial index and
Newton polyhedron). $\Box$
\textsc{Proof of Theorem \ref{egvol}.} Lemma \ref{detviarez}
implies that the multiplicity of the collection of matrices is a
special case of the multiplicity of a 0-dimensional resultantal
singularity, treated in Corollary \ref{corolres1}. $\Box$
\textsc{Proof of Theorem \ref{finoka1}.} The proof is not exactly
the same as for Theorem \ref{egvol}, because the straightforward
application of Corollary \ref{corolres1} would give an answer
involving the Newton polyhedra of the partial derivatives of the
germs $f_j$.
We apply induction on $n$. The desired index equals the
multiplicity of the collection
of matrices $$\begin{pmatrix}x_1w_1 & \ldots & x_nw_n \\
x_1\frac{\partial f_1}{\partial x_1} & \ldots & x_n\frac{\partial
f_1}{\partial x_n} \\ \hdotsfor{3} \\ x_1\frac{\partial
f_k}{\partial x_1} & \ldots & x_n\frac{\partial f_k}{\partial x_n}
\end{pmatrix},\bigl(f_1,\ldots,f_k\bigr)$$
minus the sum of the indices of the 1-form $w_1 dx_1 + \ldots +
w_n dx_n$ restricted to the complete intersections
$f_1=\ldots=f_k=x_{i_1}=\ldots=x_{i_m}=0$, over all non-empty
subsets $\{i_1,\ldots,i_m\}\subset\{1,\ldots,n\}$. The latter
indices can be computed in terms of Newton polyhedra by
induction, and the multiplicity of the collection of matrices
equals
the multiplicity of the collection $$\begin{pmatrix} x_1w_1 & \ldots & x_nw_n \\
x_1\frac{\partial f_1}{\partial x_1}+a_{1,1}f_1 & \ldots &
x_n\frac{\partial f_1}{\partial x_n}+a_{1,n}f_1
\\ \hdotsfor{3} \\ x_1\frac{\partial f_k}{\partial
x_1}+a_{k,1}f_k & \ldots & x_n\frac{\partial f_k}{\partial
x_n}+a_{k,n}f_k\end{pmatrix},\bigl(f_1,\ldots,f_k\bigr),$$ where
$a_{i,j}$ are generic complex coefficients. Lemma \ref{detviarez}
implies that this multiplicity is a special case of the
multiplicity of a 0-dimensional resultantal singularity. Theorem
\ref{indind} represents it as the intersection number of certain
divisors on a toric variety, which can be computed/estimated in
terms of Newton polyhedra by Corollary \ref{volveryconv}. The
resulting answer does not involve the Newton polyhedra of the
partial derivatives of $f_i$ because the Newton polyhedron of
$x_j\frac{\partial f_i}{\partial x_j}+a_{i,j}f_i$ equals
$\Delta_{f_i}$ (in contrast to $\Delta_{x_j\frac{\partial
f_i}{\partial x_j}}$). $\Box$
\subsection{Proof of Theorem \ref{thint}}\label{ss4}
This proof is purely combinatorial; it does not rely on the
material of Sections \ref{ss2} and \ref{ss3}.
\textbf{Lattice points of sums of polyhedra.} We prove the
equality
$$(A\cap\Bbb Z^q)+(B\cap\Bbb Z^q)=(A+B)\cap\Bbb Z^q$$ for a certain class of
bounded integer polyhedra $A,B$ in $\Bbb R^q$ (cf. \cite{oda1}).
\begin{defin}
A collection of rational cones $C_1,\ldots,C_p$ in $\Bbb R^q$ is said
to be $\Bbb Z$-\textit{transversal}, if $\sum\dim C_i=q$, and the set
$\Bbb Z^q\cap\bigcup_i {C_i}$ generates the lattice $\Bbb Z^q$.
\end{defin}
\begin{defin}
A collection of fans $\Phi_1,\ldots,\Phi_p$ in $\Bbb R^q$ is said to
be $\Bbb Z$-\textit{transversal with respect to shifts} $c_1\in\Bbb R^q,
\ldots, c_p\in\Bbb R^q$, if every collection of cones $C_1\in\Phi_1,
\ldots, C_p\in\Phi_p$, such that the intersection
$(C_1+c_1)\cap\ldots\cap (C_p+c_p)$ consists of one point, is
$\Bbb Z$-transversal.
\end{defin}
\begin{theor} \label{transvpoly}
If the dual fans of bounded integer polyhedra $A_1,\ldots,A_p$ in
$\Bbb R^q$ are $\Bbb Z$-transversal with respect to certain shifts
$c_1\in(\Bbb R^q)^*, \ldots, c_p\in(\Bbb R^q)^*$, and
$\dim(A_1+\ldots+A_p)=q$, then
$$(A_1\cap\Bbb Z^q)+\ldots+(A_p\cap\Bbb Z^q)=(A_1+\ldots+A_p)\cap\Bbb Z^q.$$
\end{theor}
\textsc{Proof.} Consider covectors $c_1\in(\Bbb R^q)^*, \ldots,
c_p\in(\Bbb R^q)^*$ as linear functions on the polyhedra
$A_1\subset\Bbb R^q,\ldots,A_p\subset\Bbb R^q$ respectively, and denote
their graphs in $\Bbb R^q\oplus\Bbb R^1$ by $\Gamma_1,\ldots,\Gamma_p$.
Denote the projection $\Bbb R^q\oplus\Bbb R^1\to\Bbb R^q$ by $\pi$, and
denote the ray $\{ (0,\ldots,0,t)\; |\;
t<0\}\subset\Bbb R^q\oplus\Bbb R^1$ by $L_-$.
Each bounded $q$-dimensional face $B$ of the sum
$\Gamma_1+\ldots+\Gamma_p+L_-$ is the sum of certain faces
$B_1,\ldots,B_p$ of polyhedra $\Gamma_1+L_-,\ldots,\Gamma_p+L_-$,
and $\Bbb Z$-transversality with respect to shifts $c_1\in(\Bbb R^q)^*,
\ldots, c_p\in(\Bbb R^q)^*$ implies that
$$\bigl(\pi(B_1)\cap\Bbb Z^q\bigr)+\ldots+\bigl(\pi(B_p)\cap\Bbb Z^q\bigr)=\pi(B_1+\ldots+B_p)\cap\Bbb Z^q.$$
Since the projections of bounded $q$-dimensional faces of the sum
$\Gamma_1+\ldots+\Gamma_p+L_-$ cover the sum $A_1+\ldots+A_p$, it
satisfies the same equality:
$$(A_1\cap\Bbb Z^q)+\ldots+(A_p\cap\Bbb Z^q)=(A_1+\ldots+A_p)\cap\Bbb Z^q.\;\Box$$
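As a quick numerical sanity check of Theorem \ref{transvpoly} (not part of the proof), one can verify the equality for two copies of the standard simplex $S\subset\Bbb R^2$, whose dual fans are $\Bbb Z$-transversal with respect to generic shifts:

```python
from itertools import product

def dilated_simplex_points(p, q=2):
    """Integer points of p*S, where S is the standard q-simplex {x >= 0, sum x <= 1}."""
    return {pt for pt in product(range(p + 1), repeat=q) if sum(pt) <= p}

def sumset(A, B):
    """Minkowski sum of two finite point sets."""
    return {tuple(x + y for x, y in zip(a, b)) for a in A for b in B}

S = dilated_simplex_points(1)   # lattice points of the simplex itself
# (S cap Z^2) + (S cap Z^2) = (S + S) cap Z^2, as the theorem predicts
assert sumset(S, S) == dilated_simplex_points(2)
```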
\begin{sledst} \label{minimax1}
Let $S\subset\Bbb R^q$ be the standard $q$-dimensional simplex, let
$l_1,\ldots,l_p$ be linear functions on $S$ with graphs
$\Gamma_1,\ldots,\Gamma_p$, and let $l$ be the maximal
piecewise-linear function on $pS$, such that its graph $\Gamma$
is contained in the sum $\Gamma_1+\ldots+\Gamma_p$. Then, for
each integer lattice point $a\in pS$, the value $l(a)$ equals the
maximum of sums $l_1(c_1)+\ldots+l_p(c_p)$ over all $p$-tuples
$(c_1,\ldots,c_p)$ of vertices of $S$, such that
$c_1+\ldots+c_p=a$.
\end{sledst}
\textsc{Proof.} Denote the projection $\Bbb R^q\oplus\Bbb R^1\to\Bbb R^q$ by
$\pi$. A $q$-dimensional face $B$ of $\Gamma$, which contains the
point $\bigl(a,l(a)\bigr)\in\Bbb R^q\oplus\Bbb R^1$, can be represented
as a sum of faces $B_i$ of simplices $\Gamma_i$. Since
$\pi(B_1),\ldots,\pi(B_p)$ are faces of the standard simplex,
their dual fans are $\Bbb Z$-transversal with respect to a generic
collection of shifts, and, by Theorem \ref{transvpoly},
$$\bigl(\pi(B_1)\cap\Bbb Z^q\bigr)+\ldots+\bigl(\pi(B_p)\cap\Bbb Z^q\bigr)=\pi(B)\cap\Bbb Z^q.$$
In particular, $a=c_1+\ldots+c_p$ for some integer lattice points
$c_i\in\pi(B_i)$, which implies $l(a)=l_1(c_1)+\ldots+l_p(c_p)$.
$\Box$
\begin{rem}
In particular, if the functions $l_1,\ldots,l_p$ are in general
position, then all $C_{p+q}^{q}$ integer lattice points in the
simplex $pS$ are projections of vertices of $\Gamma$. In the
tropical language, this is the well-known fact that $p$ generic
tropical hyperplanes in the space $\Bbb R^q$ subdivide it into
$C_{p+q}^{q}$ pieces.
\end{rem}
\begin{exa}
If $S$ in the formulation of Corollary \ref{minimax1} is not the
standard simplex, then the statement is not always true. For
example, consider $$S=\bigl\{|x|+|y|\leqslant 1\bigr\},\;
l_1(x,y)=x+y,\; l_2(x,y)=x-y,\; a=(1,0).$$
More generally, if $S_1,\ldots,S_p$ is an essential collection of
polyhedra in $\Bbb R^q,\, q>1$, that cannot be represented as a
collection of shifted faces of an elementary integer simplex, then
there exist concave piecewise linear functions $l_j:S_j\to\Bbb R$
with integer domains of linearity, such that the value of the
corresponding function $l(a)=\max\limits_{a_j\in S_j,\;
a_1+\ldots+a_p=a}\sum l_j(a_j)$ on $S_1+\ldots+S_p$ at some
integer point $a$ is strictly greater than $\max\limits_{a_j\in
S_j\cap\Bbb Z^q\atop a_1+\ldots+a_p=a}\sum l_j(a_j)$. That is why we
cannot extend Theorem \ref{thint} to mixed volumes of Newton
polyhedra, related to resultantal singularities, other than
determinantal ones.
\end{exa}
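The failure in this example can be checked directly by brute force: for $a=(1,0)$, every decomposition of $a$ into lattice points of $S$ gives $l_1+l_2$ at most $1$, while the real decomposition $c_1=(1/2,1/2)$, $c_2=(1/2,-1/2)$ gives $2$:

```python
lattice_S = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # integer points of {|x|+|y| <= 1}
l1 = lambda x, y: x + y
l2 = lambda x, y: x - y
a = (1, 0)

# best value over decompositions a = c1 + c2 into lattice points of S
best_lattice = max(
    l1(*c1) + l2(*c2)
    for c1 in lattice_S for c2 in lattice_S
    if (c1[0] + c2[0], c1[1] + c2[1]) == a)

# a real decomposition doing strictly better: a = (1/2,1/2) + (1/2,-1/2)
real_value = l1(0.5, 0.5) + l2(0.5, -0.5)
assert best_lattice == 1 and real_value == 2
```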
\textbf{Proof of Theorem \ref{thint}.} The desired statement
follows from Lemmas \ref{volint} and \ref{intsum} below.
\begin{lemma} \label{volint}
For pairs of integer polyhedra $A_i\in\mathcal{M}_{\Gamma}$, we
have
$$n!\mathop{\rm Vol}\nolimits_{\Gamma}(A_1,\ldots,A_n)=\sum_{1\leqslant i_1<\ldots<i_p\leqslant n}
(-1)^{n-p} I(A_{i_1}+\ldots+A_{i_p})+(-1)^n.$$
\end{lemma}
The proof is the same as for the classical mixed volume (see e.g.
\cite{pkhovvol}).
\begin{lemma} \label{intsum}
For pairs of polyhedra $B_{i,j}\in\mathcal{M}_{\Gamma}$,
$i=1,\ldots,n$, $j=1,\ldots,p$, we have
$$I\bigl(B_{1,1}*\ldots*B_{n,1}+\ldots+B_{1,p}*\ldots*B_{n,p}\bigr)=$$
$$=\sum_{a_1+\ldots+a_n=p\atop a_1\geqslant 0,\ldots, a_n\geqslant 0}I\Bigl(
\bigvee_{J_1\sqcup\ldots\sqcup J_n=\{1,\ldots,p\}\atop
|J_1|=a_1,\ldots,|J_n|=a_n} \sum_{i=1,\ldots,n\atop j\in J_i}
B_{i,j} \Bigr).$$
\end{lemma}
\textsc{Proof.} Every integer lattice point that participates in
the left hand side is contained in the plane
$\{(a_1,\ldots,a_{n-1})\}\times\Bbb R^m\subset\Bbb R^{n-1}\oplus\Bbb R^m$ for
certain non-negative integer numbers $a_1,\ldots,a_n$, which sum
up to $p$. Thus, it is enough to describe the intersection of the
pair
$\bigl(B_{1,1}*\ldots*B_{n,1}+\ldots+B_{1,p}*\ldots*B_{n,p}\bigr)$
with each of these planes, using the following fact. $\Box$
\begin{lemma} Suppose that polyhedra $\Delta_{i,j}\subset\Bbb R^m$ have the same support cone for $i=1,\ldots,n$,
$j=1,\ldots,p$. Then, for each $n$-tuple of non-negative integer
numbers $a_1,\ldots,a_n$ which sum up to $p$,
$$\Bigl(\{(a_1,\ldots,a_{n-1})\}\times\Bbb R^m\Bigr)\, \cap\,
\bigl(
\Delta_{1,1}*\ldots*\Delta_{n,1}+\ldots+\Delta_{1,p}*\ldots*\Delta_{n,p}\bigr)=$$
$$=\{(a_1,\ldots,a_{n-1})\}\times\Bigl(\bigvee_{J_1\sqcup\ldots\sqcup J_n=\{1,\ldots,p\}\atop |J_1|=a_1,\ldots,|J_n|=a_n}
\sum_{i=1,\ldots,n\atop j\in J_i}
\Delta_{i,j}\Bigr)\subset\Bbb R^{n-1}\oplus\Bbb R^m.$$
\end{lemma}
\textsc{Proof.} For every hyperplane $L\subset\Bbb R^m$, denote the
projection $\Bbb R^{n-1}\oplus\Bbb R^m\to\Bbb R^{n-1}\oplus\Bbb R\, $ along $\,
\{0\}\oplus L$ by $\pi_L$. It is enough to prove that the images
of the left hand side and the right hand side under $\pi_L$
coincide for every $L$. To prove it, apply Corollary
\ref{minimax1}, assuming that $q=n-1,\; a=(a_1,\ldots,a_{n-1})$,
and $\Gamma_j$ is the maximal bounded face of the projection
$\pi_L\bigl(\Delta_{1,j}*\ldots*\Delta_{n,j}\bigr)$ for every
$j=1,\ldots,p$. $\Box$
\end{document} |
\begin{document}
\title{{\bf An FPT Algorithm Beating 2-Approximation for $k$-Cut}}
\author{ Anupam Gupta\thanks{Supported in part by NSF awards
CCF-1536002, CCF-1540541, and CCF-1617790. This work was done in
part when visiting the Simons Institute for the Theory of
Computing. } \and Euiwoong Lee\thanks{Supported by NSF award CCF-1115525, Samsung scholarship, and Simons award for graduate students in TCS.}
\and Jason Li\thanks{{\tt jmli@andrew.cmu.edu} }}
\date{Computer Science Department \\ Carnegie Mellon University \\ Pittsburgh, PA 15213.}
\thispagestyle{empty}
\maketitle
\begin{abstract}
In the $k$\textsc{-Cut}\xspace problem, we are given an edge-weighted graph $G$ and an
integer $k$, and have to remove a set of edges with minimum total
weight so that $G$ has at least $k$ connected components. Prior work
on this problem gives, for all $h \in [2,k]$, a $(2-h/k)$-approximation
algorithm for $k$-cut that runs in time $n^{O(h)}$. Hence to get a
$(2 - \varepsilon)$-approximation algorithm for some absolute constant
$\varepsilon$, the best runtime using prior techniques is
$n^{O(k\varepsilon)}$. Moreover, it was recently shown that getting a
$(2 - \varepsilon)$-approximation for general $k$ is NP-hard, assuming the
Small Set Expansion Hypothesis.
If we use the size of the cut as the parameter, an FPT algorithm to
find the exact $k$\textsc{-Cut}\xspace is known, but solving the $k$\textsc{-Cut}\xspace problem exactly
is $W[1]$-hard if we parameterize only by the natural parameter
$k$. An immediate question is: \emph{can we approximate $k$\textsc{-Cut}\xspace better
in FPT-time, using $k$ as the parameter?}
We answer this question positively. We show that for some absolute
constant $\varepsilon > 0$, there exists a $(2 - \varepsilon)$-approximation
algorithm that runs in time $2^{O(k^6)} \cdot \widetilde{O} (n^4) $. This is the first FPT
algorithm that is parameterized only by $k$ and strictly improves the
$2$-approximation.
\end{abstract}
\setcounter{page}{1}
\section{Introduction}
\label{sec:introduction}
We consider the $k$\textsc{-Cut}\xspace problem: given an edge-weighted graph $G =
(V,E,w)$ and an integer $k$, delete a minimum-weight set of edges so
that $G$ has at least $k$ connected components. This problem is a
natural generalization of the global min-cut problem, where the goal is
to break the graph into $k=2$ pieces.
Somewhat surprisingly, the problem has poly-time algorithms for any
constant $k$: the current best result gives an $\tilde{O}(n^{2k})$-time
deterministic algorithm~\cite{Thorup08}. On the approximation algorithms
front, several $2$-approximation algorithms are
known~\cite{SV95, NR01, RS02}. Even a trade-off result is known: for any
$h \in [1,k]$, we can essentially get a $(2-\frac{h}{k})$-approximation in
$n^{O(h)}$ time~\cite{XCY11}. Note that to get $(2-\varepsilon)$ for some
absolute constant $\varepsilon > 0$, this algorithm takes time $n^{O(\varepsilon k)}$,
which may be undesirable for large $k$. On the other hand, achieving a
$(2-\varepsilon)$-approximation is NP-hard for general $k$, assuming the
Small Set Expansion Hypothesis (SSEH)~\cite{Manurangsi17}.
What about a better \emph{fine-grained result} when $k$ is small?
Ideally we would like a runtime of $f(k) \mathrm{poly}(n)$ so it scales better
as $k$ grows --- i.e., an FPT algorithm with parameter $k$. Sadly, the
problem is $W[1]$-hard with this parameterization~\cite{DEFPR03}. (As an
aside, we know how to compute the optimal $k$\textsc{-Cut}\xspace in time $f(|\ensuremath{\mathsf{Opt}}\xspace|) \cdot
n^{2}$~\cite{KT11, Chitnis}, where $|\ensuremath{\mathsf{Opt}}\xspace|$ denotes the cardinality of
the optimal $k$\textsc{-Cut}\xspace.)
The natural question suggests itself: can we give a better approximation
algorithm that is FPT in the parameter $k$?
Concretely, the question we consider in this paper is: \emph{If we
parameterize $k$\textsc{-Cut}\xspace by $k$, can we get a $(2-\varepsilon)$-approximation for
some absolute constant $\varepsilon > 0$ in FPT time---i.e., in time
$f(k) \mathrm{poly}(n)$?} (The hard instances which show $(2-\varepsilon)$-hardness
assuming SSEH~\cite{Manurangsi17} have $k = \Omega(n)$, so such an FPT
result is not ruled out.) We answer the question positively.
\begin{theorem}[Main Theorem]
\label{thm:kcut-main}
There is an absolute constant $\varepsilon > 0$ and a
$(2-\varepsilon)$-approximation algorithm for the $k$\textsc{-Cut}\xspace problem on general
weighted graphs that runs in time $2^{O(k^6)} \cdot \tilde{O}(n^4)$.
\end{theorem}
Our current $\varepsilon$ satisfies $\varepsilon \geq 0.0003$ (see the calculations in
\S\ref{sec:conclusion}). We hope that our result will serve as a
proof-of-concept that we can do better than the factor of~2 in FPT$(k)$
time, and eventually lead to a deeper understanding of the trade-offs
between approximation ratios and fixed-parameter tractability for the
$k$\textsc{-Cut}\xspace problem. Indeed, our result combines ideas from approximation
algorithms and FPT, and shows that considering both settings
simultaneously can help bypass lower bounds in each individual setting,
namely the $W[1]$-hardness of an exact FPT algorithm and the
SSE-hardness of a polynomial-time $(2-\varepsilon)$-approximation.
To prove the theorem, we introduce two variants of $k$\textsc{-Cut}\xspace.
\text{Laminar}kcut{k} is a special case of $k$\textsc{-Cut}\xspace where both the graph and the
optimal solution are promised to have special properties, and \textsc{Partial Vertex Cover}
(\textsc{Partial VC}\xspace) is a variant of $k$\textsc{-Cut}\xspace where $k - 1$ components are required to be
singletons, which served as a hard instance for both the exact
$W[1]$-hardness and the $(2 - \varepsilon)$-approximation SSE-hardness. Our
algorithm consists of three main steps, each modular and building on
the previous one: an FPT-AS for \textsc{Partial VC}\xspace, an algorithm for
\text{Laminar}kcut{k}, and a reduction from $k$\textsc{-Cut}\xspace to \text{Laminar}kcut{k}. In the
following section, we give more intuition for our three steps.
\subsection{Our Techniques}
\label{sec:techniques}
For this section, fix an optimal $k$-cut
${\cal S}^* = \{ S^*_1, \dots, S^*_k\}$, such that
$w(\partial{S^*_1}) \leq \dots \leq w(\partial{S^*_k})$. Let the
optimal cut value be
$\ensuremath{\mathsf{Opt}}\xspace := w(E(S^*_1, \dots, S^*_k)) = \sum_{i=1}^k w(\partial{S^*_i}) /
2$; here $E(A_1,\cdots, A_k)$ denotes the edges that go between
different sets in this partition. The $(2 - 2/k)$-approximation iterative
greedy algorithm by Saran and Vazirani~\cite{SV95} repeatedly computes
the minimum cut in each connected component and takes the cheapest one
to increase the number of connected components by $1$. Its
generalization by Xiao et al.~\cite{XCY11} takes the minimum $h$-cut
instead of the minimum $2$-cut to achieve a $(2 - h/k)$-approximation in
time $n^{O(h)}$.
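For illustration, the iterative greedy algorithm of Saran and Vazirani can be sketched as follows. This is a toy version using an exponential brute-force min cut (the actual algorithm uses polynomial-time min-cut computations), so it only runs on tiny graphs:

```python
from itertools import combinations

def min_cut(component, w):
    """Brute-force global min cut of the induced subgraph on `component`
    (exponential enumeration; tiny illustrative graphs only)."""
    vs = sorted(component)
    best = None
    for r in range(1, len(vs)):
        for side in combinations(vs, r):
            s = set(side)
            cost = sum(wt for (u, v), wt in w.items()
                       if u in component and v in component and (u in s) != (v in s))
            if best is None or cost < best[0]:
                best = (cost, s)
    return best

def greedy_k_cut(vertices, w, k):
    """Saran--Vazirani style greedy: repeatedly split the component with the
    cheapest min cut until there are k components; returns (total cost, parts)."""
    comps, total = [set(vertices)], 0
    while len(comps) < k:
        i, (cost, side) = min(
            ((i, min_cut(c, w)) for i, c in enumerate(comps) if len(c) > 1),
            key=lambda t: t[1][0])
        c = comps.pop(i)
        comps += [side, c - side]
        total += cost
    return total, comps

# two triangles (edge weight 2) joined by a single light bridge edge
w = {(1, 2): 2, (1, 3): 2, (2, 3): 2, (4, 5): 2, (4, 6): 2, (5, 6): 2, (3, 4): 1}
total, parts = greedy_k_cut({1, 2, 3, 4, 5, 6}, w, 2)
assert total == 1  # the greedy first removes the bridge edge of weight 1
```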
\subsubsection{Step I: \textsc{Partial Vertex Cover}}
\label{sec:overview-pvc}
The starting point for our algorithm is the $W[1]$-hardness result of
Downey et al.~\cite{DEFPR03}: the reduction from $k$-clique results in a
$k$\textsc{-Cut}\xspace instance where the optimal solution consists of $k-1$ singletons
separated from the rest of the graph. Can we approximate such instances
well? Formally, the \textsc{Partial VC}\xspace problem asks: given an edge-weighted graph, find
a set of $k-1$ vertices such that the total weight of edges hitting
these vertices is as small as possible. Extending the result of
Marx~\cite{Marx07} for the maximization version, our first conceptual
step is an FPT-AS for this problem, i.e., an algorithm that given a
$\delta >0$, runs in time $f(k,\delta)\cdot \mathrm{poly}(n)$ and gives a
$(1+\delta)$-approximation to this problem.
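The objective of \textsc{Partial VC}\xspace (in the minimization form used here) can be made concrete with a brute-force exact solver; of course this enumeration takes $n^{O(k)}$ time rather than FPT, which is exactly what the FPT-AS avoids:

```python
from itertools import combinations

def partial_vc(w, k):
    """Exact Partial VC by enumeration (illustration of the objective only):
    choose k-1 vertices minimising the total weight of edges with at least
    one endpoint among them.  `w` maps edges (u, v) to weights."""
    vertices = sorted({u for e in w for u in e})
    best = None
    for S in combinations(vertices, k - 1):
        s = set(S)
        cost = sum(wt for (u, v), wt in w.items() if u in s or v in s)
        if best is None or cost < best[0]:
            best = (cost, s)
    return best

# on a triangle with one heavy edge, the best single vertex avoids the heavy edge
w = {('a', 'b'): 1, ('a', 'c'): 1, ('b', 'c'): 5}
assert partial_vc(w, 2) == (2, {'a'})
```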
\subsubsection{Step II: Laminar $k$-cut}
\label{sec:overview-lam}
The instances which inspire our second idea are closely related to the
hard instances above. One instance on which the greedy algorithm of
Saran and Vazirani gives an approximation no better than $2$ for large
$k$ is this: take two cliques, one with $k$ vertices and unit edge
weights, the other with $k^2$ vertices and edge weights $1/(k+1)$, so
that the weighted degree of all vertices is the same. (Pick one vertex
from each clique and identify them to get a connected graph.) The
optimal solution is to delete all edges of the small clique, at cost
$\binom{k}{2}$. But if the greedy algorithm breaks ties poorly, it will
cut out $k-1$ vertices one-by-one from the larger clique, thereby
getting a cut cost of $\approx k^2$, which is twice as large. Again we
could use \textsc{Partial VC}\xspace to approximate this instance well. But if we replace each
vertex of the above instance itself by a clique of high weight edges,
then picking out single vertices obviously does not work. Moreover, one
can construct recursive and ``robust'' versions of such instances where
we need to search for the ``right'' (near-)$k$-clique to break
up. Indeed, these instances suggest the use of dynamic programming (DP),
but what structure should we use DP on?
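The gap on the two-clique instance above can be verified by direct arithmetic: the optimum pays $\binom{k}{2}$ for the small clique, while the poorly tie-breaking greedy (which, as described above, keeps peeling singletons off the big clique) pays $(m-1)/(k+1)$ for each current big-clique size $m$:

```python
def two_clique_costs(k):
    """Costs on the hard instance: K_k with unit weights glued to K_{k^2}
    with edge weight 1/(k+1), so that all weighted degrees equal k-1."""
    opt = k * (k - 1) / 2                   # delete the small clique entirely
    greedy = sum((m - 1) / (k + 1)          # peel k-1 singletons off K_{k^2}
                 for m in range(k * k, k * k - (k - 1), -1))
    return opt, greedy

opt, greedy = two_clique_costs(50)
assert greedy / opt > 1.9                   # the ratio tends to 2 as k grows
```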
One feature of such ``hard'' instances is that the optimal $k$\textsc{-Cut}\xspace
${\cal S}^* = \{S_1^*, \ldots, S_k^*\}$ is composed of near-min-cuts in the
graph. Moreover, no two of these near-min-cuts cross each other. We now
define the \text{Laminar}kcut{k} problem: find a $k$\textsc{-Cut}\xspace on an instance where
none of the $(1+\varepsilon)$-min-cuts of the graph cross each other, and where
each of the cut values $w(\partial{S_i^*})$ for $i = 1,\ldots, k-1$ is
at most $(1+\varepsilon)$ times the min-cut. Because of this laminarity (i.e.,
non-crossing nature) of the near-min-cuts, we can represent the
near-min-cuts of the graph using a tree $\mathcal T$, where the nodes of $G$
sit on nodes of the tree, and edges of $\mathcal T$ represent the near-min-cuts
of $G$. Rooting the tree appropriately, the problem reduces to
``marking'' $k-1$ incomparable tree nodes and taking the near-min-cuts
given by their parent edges, so that the fewest edges in $G$ are
cut. Since all the cuts represented by $\mathcal T$ are near-min-cuts and
almost of the same size, it suffices to mark $k-1$ incomparable nodes to
maximize the number of edges in $G$ both of whose endpoints lie below a
marked node. We call such edges \emph{saved} edges. In order to get a
$(2-\varepsilon)$-approximation for \text{Laminar}kcut{k},
it suffices to save $\approx \varepsilon k \mathsf{Mincut}$ weight of edges.
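The marking step can be made concrete with a toy brute force. Here the tree is a hypothetical parent map, and $\mathrm{saved}(v)$ is assumed precomputed as the weight of $G$-edges with both endpoints below $v$; the actual algorithm avoids this exponential enumeration over antichains:

```python
from itertools import combinations

def marked_savings(parent, saved, k):
    """Toy brute force for the marking step: choose k-1 pairwise-incomparable
    tree nodes maximising total saved weight.  `parent` maps each node to its
    parent (root to None); `saved[v]` is assumed precomputed."""
    def ancestors(v):
        out = set()
        while parent[v] is not None:
            v = parent[v]
            out.add(v)
        return out
    anc = {v: ancestors(v) for v in parent}
    best = 0
    for S in combinations(parent, k - 1):
        # keep only antichains: no marked node is an ancestor of another
        if all(u not in anc[v] for u in S for v in S if u != v):
            best = max(best, sum(saved[v] for v in S))
    return best

parent = {'r': None, 'a': 'r', 'b': 'r', 'a1': 'a', 'a2': 'a'}
saved = {'r': 100, 'a': 5, 'b': 2, 'a1': 3, 'a2': 4}
assert marked_savings(parent, saved, 3) == 7  # best antichain: {a, b} or {a1, a2}
```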
Note that if $\mathcal T$ is a star with $n$ leaves and each vertex in $G$ maps
to a distinct leaf, this is precisely the \textsc{Partial VC}\xspace problem, so we do not
hope to find the optimal solution (using dynamic programming, say).
Moreover, extending the FPT-AS for \textsc{Partial VC}\xspace to this more general setting
does not seem directly possible, so we take a different approach. We
call a node an \emph{anchor} if it has some $s$ children which, when marked,
would save $\approx \varepsilon s \mathsf{Mincut}$ weight. We take the following
``win-win'' approach: if there were $\Omega(k)$ anchors that were
incomparable, we could choose a suitable subset of $k$ of their children
to save $\approx \varepsilon k \mathsf{Mincut}$ weight. And if there were not, then all
these anchors must lie within a subtree of $\mathcal T$ with at most $k$
leaves. We can then break this subtree into $2k$ paths and guess which
paths contain anchors which are parents of the optimal solution. For
each such guess we show how to use \textsc{Partial VC}\xspace to solve the problem and save a
large weight of edges. Finally, how do we identify these anchors? Indeed,
since all the near-min-cuts are almost of the same weight, finding an anchor again
involves solving the \textsc{Partial VC}\xspace problem!
\subsubsection{Step III: Reducing $k$\textsc{-Cut}\xspace to \text{Laminar}kcut{k}}
\label{sec:overview-redn}
\begin{wrapfigure}{L}{0.38\textwidth}
\centering
\includegraphics[width=0.35\textwidth]{conform}
\caption{\label{fig:conform} The blue set on the right,
formed by $S_5^* \cup S_7^* \cup S_{11}^*$, conforms to the
algorithm's partition ${\cal S}$ on the left.}
\end{wrapfigure}
We now reduce the general $k$\textsc{-Cut}\xspace problem to \text{Laminar}kcut{k}. This
reduction is again based on observations about the
graph structure in cases where the iterative greedy algorithms do not
get a $(2 - \varepsilon)$-approximation. Let ${\cal S} = \{ S_1, \dots,
S_{k'} \}$ be the connected components of $G$ at some point of an
iterative algorithm ($k' \leq k$). For a subset $\varepsilonmptyset \neq U
\subsetneq V$, we say that $U$ {\em conforms} to partition ${\cal S}$ if
there exists a subset $J \subsetneq [k']$ of parts such that $U =
\cup_{j \in J} S_j$. One simple but crucial observation is the
following: if there exists a subset $\varepsilonmptyset \neq I \subsetneq [k]$ of
indices such that $\cup_{i\in I} S^*_i$ conforms to ${\cal S}$
(i.e., $\cup_{i\in I} S^*_i = \cup_{j \in J} S_j$), we can
``guess'' $J$ to partition $V$ into the two parts $\cup_{i \in I} S^*_i$
and $\cup_{i \notin I} S^*_i$. Since the edges between these two parts
belong to the optimal cut and each of them is strictly smaller than $V$,
we can recursively work on each part without any loss.
Moreover, the number of choices for $J$ is at most
$2^{k'}$ and each guess produces one more connected component, so the
total running time can be bounded by $f(k)$ times the running time of
the rest of the algorithm, for some function $f(\cdot)$. Therefore, we
can focus on the case where none of $\cup_{i \in I} S^*_i$ conforms to
the algorithm's partition ${\cal S}$ at any point during the algorithm's
execution.
\begin{wrapfigure}{R}{0.38\textwidth}
\centering
\includegraphics[width=0.35\textwidth]{histogram}
\caption{\label{fig:histo} The blue curve shows cut sizes
for algorithm's cuts, red curve shows $w(\partial{S^*_i})$
values. The blue area (and in fact all the area below
$w(\partial{S_1^*})$ and above the algorithm's curve) makes the first
inequality loose. The grey area (and in fact all the area above
$w(\partial{S_1^*})$ and below OPT's curve) makes the second
inequality loose.}
\end{wrapfigure}
Now consider the iterative min-cut algorithm of Saran and Vazirani, and
let $c_i$ be the cost of the min cut in the $i^{th}$ iteration ($1 \leq
i \leq k - 1$). By our above assumption about non-conformity, none of
$\cup_{i \in I} S^*_i$, and in particular the subset $S^*_1$, conform to
the current components. This implies that deleting the remaining edges
in $\partial S^*_1$ is a valid cut that increases the number of
connected components by at least $1$, so $c_i \leq
w(\partial{S^*_1})$. Then we have the following chain of inequalities:
\[
\sum_{i=1}^{k-1} c_i \leq k \cdot w(\partial{S^*_1}) \leq \sum_{i=1}^k w(\partial{S^*_i}) = 2\varepsilonnsuremath{\mathsf{Opt}}\xspace.
\]
If the iterative min-cut algorithm could not get a $(2 -
\varepsilon)$-approximation, the two inequalities above must be essentially
tight. Hence almost all our costs $c_i$ must be close to
$w(\partial{S^*_1})$ and almost all $w(\partial{S^*_i})$ must be close
to $w(\partial{S^*_1})$.
Slightly more formally, let $\mathfrak{a} \in [k]$ be the smallest integer such
that
$c_{\mathfrak{a}} \gtrsim w(\partial{S^*_1})$
---so that the first $\mathfrak{a} - 1$ cuts are ones where
we pay ``much'' less than $\partial{S^*_1}$ and make the first
inequality loose. And let $\mathfrak{b} \in [k]$ be the smallest number such that
$w(\partial{S^*_{\mathfrak{b}}}) \gtrsim w(\partial{S^*_1})$
--- so that the last
$k - \mathfrak{b}$ cuts in OPT are much larger than $\partial{S^*_1}$ and make
the second inequality loose. Then if the iterative min-cut algorithm is
no better than a $2$-approximation, we can imagine that $\mathfrak{a} = o(k)$ and
$\mathfrak{b} \geq k - o(k)$.
For simplicity, let us assume that $\mathfrak{a} = 1$ and $\mathfrak{b} = k$ here.
Indeed, instead of just considering min-cuts, suppose we also consider
min-4-cuts, and take the one with better edges cut per number of new
components. The arguments of the previous paragraph still hold, so
$\mathfrak{a} = 1$ implies that
the best min-cuts and best min-4-way cuts (divided by 3) are roughly
at least $w(\partial{S^*_1})$ in the original $G$.
Since the min-cut is also at most $w(\partial{S^*_1})$,
the weight of the min-cut is
roughly $w(\partial{S^*_1})$ and none of the near-min-cuts
cross (else we would get a good 4-way cut). I.e., the
near-min-cuts in the graph form a laminar family.
Together with the fact that $\partial{S^*_1}, \dots, \partial{S^*_{k - 1}}$ are near-min-cuts (we assumed $\mathfrak{b} = k$),
this is precisely an instance of \text{Laminar}kcut{k}, which completes
the proof!
\paragraph{Roadmap.} After some related work and preliminaries, we first
present the details of the reduction from $k$\textsc{-Cut}\xspace to \textsc{Laminar $k$-Cut} in
Section~\ref{sec:reduction}. Then in Section~\ref{sec:laminar} we give
the algorithm for \textsc{Laminar $k$-Cut} assuming an algorithm for \textsc{Partial VC}\xspace. Finally,
we give our FPT-AS for \textsc{Partial VC}\xspace in Section~\ref{sec:partial-vc}.
\subsection{Other Related Work}
\label{sec:related}
The $k$\textsc{-Cut}\xspace problem has been widely studied. Goldschmidt and Hochbaum gave
an $O(n^{(1/2- o(1))k^2})$-time algorithm~\cite{GH94}; they also showed
that the problem is NP-hard when $k$ is part of the input. Karger and
Stein improved this to an $O(n^{(2-o(1))k})$-time randomized Monte-Carlo
algorithm using the idea of random edge-contractions~\cite{KS96}.
After Kamidoi et al.~\cite{KYN06} gave an $O(n^{4k + o(1)})$-time
deterministic algorithm based on divide-and-conquer,
Thorup gave an $\tilde{O}(n^{2k})$-time deterministic algorithm based on
tree packings~\cite{Thorup08}.
Small values of $k \in [2, 6]$ have also been studied separately~\cite{NI92, HO92, BG97, Karger00, NI00, NKI00, Levine00}.
On the approximation algorithms front, a $2(1-1/k)$-approximation was
given by Saran and Vazirani~\cite{SV95}. Naor and Rabani~\cite{NR01},
and Ravi and Sinha~\cite{RS02} later gave $2$-approximation algorithms
using tree packings and network strength, respectively. Xiao et
al.~\cite{XCY11} completed the work of Kapoor~\cite{Kapoor96} and Zhao
et al.~\cite{ZNI01} to generalize Saran and Vazirani's approach, essentially giving
a $(2 - h/k)$-approximation in time $n^{O(h)}$. Very recently,
Manurangsi~\cite{Manurangsi17} showed that for any $\varepsilon > 0$, it is
NP-hard to achieve a $(2 - \varepsilon)$-approximation in time
$\mathrm{poly}(n,k)$, assuming the Small Set Expansion Hypothesis.
\emph{FPT algorithms:} Kawarabayashi and Thorup give an
$f(\ensuremath{\mathsf{Opt}}\xspace) \cdot n^{2}$-time algorithm~\cite{KT11} for unweighted
graphs. Chitnis et al.~\cite{Chitnis} used a randomized color-coding
idea to give a better runtime, and to extend the algorithm to weighted
graphs. In both cases, the FPT algorithm is parameterized by the
number of edges in the optimal $k$\textsc{-Cut}\xspace, not by $k$. For a
comprehensive treatment of FPT algorithms, see the excellent
book~\cite{FPT-book}, and for a survey on approximation and FPT
algorithms, see~\cite{Marx07}.
\emph{Multiway Cut:} A problem very similar to $k$\textsc{-Cut}\xspace is the
\textsc{Multiway Cut} problem, where we are given $k$ terminals and want
to disconnect the graph into at least $k$ pieces such that all terminals
lie in distinct components. However, this problem behaves quite
differently: it is NP-hard even for $k=3$ (and hence an $n^{f(k)}$-time
algorithm is ruled out); on the other hand, several algorithms are
known to approximate it to factors much smaller than~$2$ (see,
e.g.,~\cite{BuchbinderSW17} and references therein). FPT algorithms
parameterized by the size of $\ensuremath{\mathsf{Opt}}\xspace$ are also known; see~\cite{CaoCF14}
for the best result currently known.
\section{Notation and Preliminaries}
\label{sec:prelims}
For a graph $G = (V,E)$, and a subset $S \subseteq V$, we use $G[S]$ to
denote the subgraph induced by the vertex set $S$. For a collection of
disjoint sets $S_1, S_2, \ldots, S_t$, let $E(S_1, \ldots, S_t)$ be the
set of edges with endpoints in some $S_i, S_j$ for $i \neq j$. Let
$\partial S = E(S, V \setminus S)$. We say two cuts $(A, V\setminus A)$
and $(B,V\setminus B)$ \emph{cross} if none of the four sets $A
\setminus B$, $B \setminus A$, $A \cap B$, and $V \setminus (A \cup B)$ is
empty.
$\mathsf{Mincut}$ and $\text{\sf{Min-4-cut}}$ denote the weight of the min-2-cut
and the min-4-cut, respectively.
A cut $(A, V \setminus A)$ is called a $(1 + \varepsilon)$-mincut if
$w(\partial A) \leq (1 + \varepsilon) \mathsf{Mincut}$.
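The crossing condition is easy to operationalize. The following sketch (the helper name \texttt{cross} is ours, purely for illustration) tests it directly from the four-set definition above.

```python
def cross(A, B, V):
    """Cuts (A, V \\ A) and (B, V \\ B) cross iff all four sets
    A \\ B, B \\ A, A & B and V \\ (A | B) are nonempty."""
    A, B, V = set(A), set(B), set(V)
    return all((A - B, B - A, A & B, V - (A | B)))
```

For instance, on $V = \{0,1,2,3\}$ the cuts with sides $\{0,1\}$ and $\{1,2\}$ cross, while nested sides such as $\{0,1\}$ and $\{0,1,2\}$ do not.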
\begin{restatable}[\textsc{Laminar $k$-Cut}$(\varepsilon_1)$]{definition}{LamDef}
\label{def:laminarcut}
The input is a graph $G = (V,E)$ with edge weights, and two parameters
$k$ and $\varepsilon_1$, satisfying two promises: (i)~no two $(1+\varepsilon_1)$-mincuts
cross each other, and (ii)~there exists a $k$-cut ${\cal S}' = \{S_1',
\ldots, S_k'\}$ in $G$ with $w(\partial(S_i')) \le (1+\varepsilon_1)\mathsf{Mincut}(G)$
for all $i \in [1,k-1]$. Find a $k$-cut minimizing the total weight.
The approximation ratio is defined as the ratio of
the weight of the returned cut to the weight of the $k$-cut ${\cal S}'$
(which can possibly be less than $1$).
\end{restatable}
\begin{definition}[\textsc{Partial Vertex Cover}]
\label{def:pvc}
Given a graph $G = (V,E)$ with edge and vertex weights, and an integer
$k$, find a vertex set $S \subseteq V$ with $|S| = k$ nodes, minimizing the
weight of the edges hitting the set $S$ plus the weight of all
vertices in $S$.
\end{definition}
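For intuition, the objective can be evaluated by brute force on tiny instances; the sketch below (the function name \texttt{partial\_vc\_brute\_force} is ours, and this is emphatically not the paper's algorithm, which is the FPT-AS of Section~\ref{sec:partial-vc}) simply enumerates all $\binom{n}{k}$ candidate sets.

```python
from itertools import combinations

def partial_vc_brute_force(n, edge_w, vert_w, k):
    """Brute-force Partial Vertex Cover: choose |S| = k vertices minimizing
    (weight of edges with an endpoint in S) + (weight of vertices in S).
    edge_w maps frozenset({u, v}) -> weight; vert_w[v] is the vertex weight."""
    best_cost, best_set = float("inf"), None
    for cand in combinations(range(n), k):
        S = set(cand)
        hit = sum(w for e, w in edge_w.items() if e & S)  # edges hitting S
        cost = hit + sum(vert_w[v] for v in S)
        if cost < best_cost:
            best_cost, best_set = cost, S
    return best_cost, best_set
```

On a weighted triangle, for example, the minimizer for $k=1$ is the vertex whose incident edges are lightest.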
\section{Reduction to \textsc{Laminar $k$-Cut}$(\varepsilon_1)$}
\label{sec:reduction}
In this section we give our reduction from $k$\textsc{-Cut}\xspace to
\textsc{Laminar $k$-Cut}$(\varepsilon_1)$, showing that if we can get a better-than-$2$
approximation for the latter, we can beat the factor of two for the
general $k$\textsc{-Cut}\xspace problem too. We assume the reader is familiar with the
overview in Section~\ref{sec:overview-redn}. Formally, the main theorem
is the following.
\begin{theorem}
\label{thm:reduction1}
Suppose there exists a $(2 - \varepsilon_2)$-approximation algorithm for
\textsc{Laminar $k$-Cut}$(\varepsilon_1)$ for some $\varepsilon_1 \in (0, 1/4)$ and
$\varepsilon_2 \in (0, 1)$ that runs in time $f(k) \cdot g(n)$. Then
there exists a $(2 - \varepsilon_3)$-approximation algorithm for $k$\textsc{-Cut}\xspace
that runs in time $2^{O(k^2 \log k)} \cdot f(k) \cdot (n^4 \log^3 n + g(n))$ for some constant
$\varepsilon_3 > 0$.
\end{theorem}
\begin{algorithm}
\caption{$\text{Main}(G = (V, E, w), k)$}
\label{alg:main}
\begin{algorithmic}[1]
\State $k' = 1$, $S_1 \gets V$
\While {$k' < k$ }
\For {$\boldsymbol{\mathrm{r}} \in [k]^{k'}$ } \label{line:start-of-check} \Comment {Further partition each $S_i$ into $r_i$ components by Laminar}
\State $|\boldsymbol{\mathrm{r}}| \gets \sum_{j=1}^{k'} r_j$; $\{ C_1, \dots,
C_{|\boldsymbol{\mathrm{r}}|} \} \gets \cup_{i \in [k']}
\text{Laminar}(\inducedG{S_i}, r_i)$.
\If{$|\boldsymbol{\mathrm{r}}| \geq k$} $C_k \gets C_k \cup \dots \cup C_{|\boldsymbol{\mathrm{r}}|}$
\Else \State $\{C_1, \dots, C_{k}\} \gets \text{Complete}(G, k, C_1, \dots, C_{|\boldsymbol{\mathrm{r}}|})$ \label{line:complete}
\EndIf
\State \text{Record}($\text{Guess}(G, k, \{C_1, \dots, C_k \})$)
\EndFor \label{line:end-of-check}
\State {} \Comment {Split some $S_i$ by a mincut or a min-4-cut}
\If{$k' > k - 3$ or $\min_{i \in [k']} \mathsf{Mincut} (\inducedG{S_{i}}) \leq
\min_{i \in [k']} \text{\sf{Min-4-cut}} (\inducedG{S_{i}}) / 3$} \label{line:start-extend}
\State $i \gets \arg\min_{i} \mathsf{Mincut} (\inducedG{S_{i}})$; $c_{k'} \gets \mathsf{Mincut} (\inducedG{S_{i}})$; $\{ T_1, T_2 \} \gets \text{Mincut}(\inducedG{S_{i}})$
\State $S_i \gets T_1$; $S_{k' + 1} \gets T_2$; $k' \gets k' + 1$
\Else
\State $i \gets \arg\min_{i} \text{\sf{Min-4-cut}} (\inducedG{S_{i}})$;
$c_{k'}, c_{k' + 1}, c_{k' + 2} \gets \text{\sf{Min-4-cut}} (\inducedG{S_{i}}) / 3$;
$\{ T_1, \dots, T_4 \} \gets \text{Min-4-cut}(\inducedG{S_{i}})$
\State $S_{i} \gets T_1$; $S_{k' + 1}, S_{k' + 2}, S_{k' + 3} \gets T_2, T_3, T_4$;
$k' \gets k' + 3$
\EndIf \label{line:end-extend}
\EndWhile
\State let ${\cal S} = \{S_1, \ldots, S_k\}$ be the final reference $k$-partition.
\State \text{Record}($\text{Guess}(G, k, {\cal S})$) \label{line:lastupdate}
\State Return the best recorded $k$-partition.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{$\text{Complete}(G = (V, E, w), k, {\cal C} = \{C_1, \ldots, C_\ell\})$}
\label{alg:complete}
\begin{algorithmic}[1]
\While {$\ell < k$}
\State $i \gets \arg\min_{i \in [\ell]} \mathsf{Mincut}(\inducedG{C_i})$; $T_1, T_2 \gets \text{Mincut}(\inducedG{C_i})$
\State $C_i \gets T_1$; $C_{\ell + 1} \gets T_2$; $\ell \gets \ell + 1$
\EndWhile
\State Return ${\cal C} := \{C_1, \dots, C_k\}$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{$\text{Guess}(G = (V, E, w), k, {\cal C} = \{C_1, \dots, C_k\})$}
\label{alg:guess}
\begin{algorithmic}[1]
\State \text{Record}($C_1, \dots, C_k$) \Comment{Returned
partition no worse than starting partition}
\For {$\emptyset \neq J \subsetneq [k]$ }
\For {$k' = 1, 2, \dots, k - 1$ }
\State $L \gets \cup_{j \in J} C_j$; $R \gets V
\setminus L$ \Comment {Divide the $C_j$'s into two groups,
take the union of each group}
\State $D_1, \dots, D_{k'} \gets \text{Main}(\inducedG{L},
k')$ \Comment{and recurse}
\State $D_{k'+1}, \dots, D_k \gets \text{Main}(\inducedG{R}, k - k')$
\State \text{Record}($D_1, \dots, D_k$)
\EndFor
\EndFor
\State Return the best recorded $k$-partition among all these guesses.
\end{algorithmic}
\end{algorithm}
The main algorithm is shown in Algorithm~\ref{alg:main} (``\text{Main}'').
It maintains a ``reference'' partition ${\cal S}$, which is initially the
trivial partition where all vertices are in the same part. At each
point, it guesses how many pieces each part $S_i$ of this reference partition
${\cal S}$ should be split into using the ``Laminar'' procedure, and then
extends this to a $k$-cut using greedy cuts if necessary
(Lines~\ref{line:start-of-check}--\ref{line:end-of-check}).
It then
extends the reference partition by either taking the best min-cut or the
best min-4-cut among all the parts
(Lines~\ref{line:start-extend}--\ref{line:end-extend}).
Every time it
has a $k$-partition, it guesses (using ``Guess'') whether the union of some
of the parts equals the union of some parts of the optimal partition, and uses that to
try to get a better partition.
If one of the guesses is right, we strictly increase the number of
connected components by deleting edges in the optimal $k$-cut, so we can
recursively solve the two smaller parts. If none of our guesses was
right during the algorithm, our analysis in
Section~\ref{subsec:approx_factor} shows that there exist values of $k',
\boldsymbol{\mathrm{r}}$ such that ${\cal C} = \{C_1, \dots, C_k\}$ in
Line~\ref{line:complete}, obtained from the reference partition ${\cal S} =
\{ S_1, \dots, S_{k'} \}$ by running Laminar($G[S_{i}], r_i$) for each $i \in [k']$
and using Complete if necessary to get $k$ components, beats the $2$-approximation. Finally, a couple of words about
each of the subroutines.
\begin{itemize}
\item Mincut$(G = (V, E, w))$ (resp.\ Min-4-cut$(G)$)
returns the minimum $2$-cut (resp.\ $4$-cut) as a partition of $V$
into $2$ (resp. $4$) subsets.
\item The subroutine ``Laminar'' returns a $(2-\varepsilon_2)$-approximation
for \textsc{Laminar $k$-Cut}$(\varepsilon_1)$, using the algorithm from
Theorem~\ref{thm:laminar}. Recall the definition of the problem in Definition~\ref{def:laminarcut}.
\item The operation ``\text{Record}(${\cal P}$)'' in \text{Guess}\ and \text{Main}\ takes a
$k$-partition ${\cal P}$ and compares the weight of the edges crossing this
partition to that of the least-weight $k$-partition recorded thus far (within
the current recursive call). If the current partition has less weight,
it updates the best partition accordingly.
\item Algorithm~\ref{alg:complete} (``\text{Complete}'') is a simple algorithm
that, given an $\ell$-partition ${\cal P}$ for some $\ell \leq k$, outputs a
$k$-partition by iteratively taking the mincut in the current graph.
\item Algorithm~\ref{alg:guess} (``\text{Guess}''), when given an
$\ell$-partition ${\cal P}$, ``guesses'' whether the vertices belonging to some
parts of this partition $\{ S_j \}_{j \in J}$ coincide with the union
of some $k'$ parts of the optimal partition. If so, we have made
tangible progress: it recursively finds a small $k'$-cut in the graph
induced by $\cup_{j \in J} S_j$, and a small $k-k'$ cut in the
remaining graph. It returns the best of all these guesses.
\end{itemize}
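To make the Complete subroutine concrete, here is a naive Python rendering (our sketch, not the paper's code): it repeatedly splits the part with the cheapest induced min $2$-cut. The min-cut here is computed by brute-force enumeration of bipartitions, purely for illustration; in practice one would substitute any global min-cut routine, e.g.\ Karger--Stein.

```python
from itertools import combinations

def min_2cut(vertices, edge_w):
    """Brute-force global min 2-cut of the graph induced on `vertices`.
    Returns (cut_weight, side_A, side_B). Exponential; illustration only."""
    vs = sorted(vertices)
    best = (float("inf"), None, None)
    # enumerate proper subsets containing vs[0] (avoids mirror duplicates)
    rest = vs[1:]
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            A = {vs[0], *extra}
            if A == set(vs):
                continue
            B = set(vs) - A
            w = sum(wt for e, wt in edge_w.items()
                    if e <= set(vs) and len(e & A) == 1)
            if w < best[0]:
                best = (w, A, B)
    return best

def complete(parts, edge_w, k):
    """Mirror of the Complete subroutine: while fewer than k parts,
    split the part whose induced min 2-cut is cheapest."""
    parts = [set(p) for p in parts]
    while len(parts) < k:
        cuts = [min_2cut(p, edge_w) for p in parts]
        i = min(range(len(parts)), key=lambda j: cuts[j][0])
        _, A, B = cuts[i]
        parts[i] = A
        parts.append(B)
    return parts
```

On a weighted path $0\!-\!1\!-\!2\!-\!3$ with a heavy middle edge, the routine peels off the two cheap end-cuts first, as expected.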
\subsection{The Approximation Factor}
\label{subsec:approx_factor}
\begin{lemma}[Approximation Factor]
\label{lem:apx-main}
$\text{Main}(G, k)$ achieves a $(2 - \varepsilon_3)$ approximation for some
$\varepsilon_3 > 0$ that depends on $\varepsilon_1, \varepsilon_2$ in
Theorem~\ref{thm:reduction1}.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $k$. The value of $\varepsilon_3$ will be
determined later. The base case $k = 1$ is trivial. Fix some value
of $k$, and a graph $G$. Let ${\cal S} = \{S_1, \dots, S_k\}$ be the
final reference partition generated by the execution of $\text{Main}(G, k)$,
and let $c_1, \dots, c_{k - 1}$ be the values associated with it. From the
definition of the $c_i$'s in Procedure~\text{Main}, $\sum_{i=1}^{k - 1} c_i =
w(E(S_1, \dots, S_k))$. The $k$-partition returned by $\text{Main}(G, k)$ is
no worse than this partition ${\cal S}$ (because of the update on
line~\ref{line:lastupdate}), and hence has cost at most $\sum_{i=1}^{k-1}
c_i = w(E(S_1, \dots, S_k))$. Let us fix an optimal $k$-cut ${\cal S}^*
= \{ S^*_1, \dots, S^*_k\}$, and let $w(\partial{S^*_1}) \leq \dots
\leq w(\partial{S^*_k})$. Let $\ensuremath{\mathsf{Opt}}\xspace := w(E(S^*_1, \dots, S^*_k)) =
\sum_{i=1}^k w(\partial{S^*_i}) / 2$.
\begin{definition}[Conformity]
\label{def:conform}
For a subset $\emptyset \neq U \subsetneq V$, we say that $U$ {\em
conforms} to partition ${\cal S}$ if there exists a subset $J
\subsetneq [k]$ of parts such that $U = \cup_{j \in J} S_j$. (See Figure~\ref{fig:conform}.)
\end{definition}
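Conformity is straightforward to test in code: $U$ conforms to ${\cal S}$ exactly when $U$ is a nonempty proper subset of $V$ and no part of ${\cal S}$ straddles $U$ (since the parts partition $V$, this forces $U$ to be a union of full parts). A sketch, with the helper name \texttt{conforms} ours:

```python
def conforms(U, parts, V):
    """U conforms to the partition `parts` of V iff U is the union of a
    nonempty proper subfamily of parts; equivalently, no part straddles U."""
    U = set(U)
    if not U or U == set(V):
        return False
    return all(p <= U or not (p & U) for p in (set(p) for p in parts))
```

For example, with parts $\{0,1\}, \{2\}, \{3,4\}$, the set $\{0,1,2\}$ conforms but $\{0,2\}$ does not, since the part $\{0,1\}$ straddles it.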
The following claim shows that if there exists a subset $\emptyset
\neq I \subsetneq [k]$ of indices such that $\cup_{i\in I} S^*_i$
conforms to ${\cal S}$, the induction hypothesis guarantees a $(2 -
\varepsilon_3)$-approximation.
\begin{claim}
Suppose there exists a subset $\emptyset \neq I \subsetneq [k]$ such
that $\cup_{i \in I} S_i^*$ conforms to ${\cal S}$. Then $\text{Main}(G, k)$
achieves a $(2 - \varepsilon_3)$-approximation.
\label{claim:good}
\end{claim}
\begin{proof}
Since $S^*_I := \cup_{i \in I} S^*_i$ conforms to ${\cal S}$, during
the run of $\text{Guess}(G, k, {\cal S})$ it will record the $k$-partition
$(\text{Main}(\inducedG{S^*_I}, |I|), \text{Main}(\inducedG{V \setminus S^*_I},
k - |I|) )$, and hence finally output a $k$-partition which cuts no
more edges than this starting partition. By the induction
hypothesis, $\text{Main}(\inducedG{S^*_I}, |I|)$ gives an $|I|$-cut of
$\inducedG{S^*_I}$ whose cost is at most $(2 - \varepsilon_3)$ times
$w(E(S^*_i)_{i \in I})$, and $\text{Main}(\inducedG{V \setminus S^*_I}, k
- |I|)$ outputs a $(k - |I|)$-cut of $\inducedG{V\setminus S^*_I}$
of cost at most $(2 - \varepsilon_3)$ times $w(E(S^*_i)_{i \notin
I})$. Thus, the value of the best $k$-partition returned by
$\text{Main}(G, k)$ is at most
\begin{align*}
& w(E(S^*_I, V \setminus S^*_I)) + (2 - \varepsilon_3) \left(
w(E(S^*_i)_{i \in I}) + w(E(S^*_i)_{i \notin I}) \right) \\
\leq & \ (2 - \varepsilon_3) w(E(S^*_1, \dots, S^*_k)) = (2 -
\varepsilon_3) \mathsf{Opt}. \qedhere
\end{align*}
\end{proof}
Therefore, to prove Lemma~\ref{lem:apx-main}, it suffices to assume
that no collection of parts in \ensuremath{\mathsf{Opt}}\xspace conforms to our partition at any
point in the algorithm. That is,
\begin{leftbar}
\As{1}: for every subset $\emptyset \neq I \subsetneq [k]$, $\cup_{i
\in I} S^*_i$ does not conform to ${\cal S} = \{ S_1, \dots, S_k\}$.
\end{leftbar}
Next, we study how $\ensuremath{\mathsf{Opt}}\xspace$ is related to $w(\partial{S_1^*})$. Note
that $\ensuremath{\mathsf{Opt}}\xspace \geq (k/2) \cdot w(\partial{S_1^*})$. The next claim shows
that we can strictly improve on the $2$-approximation if $\ensuremath{\mathsf{Opt}}\xspace$ is even
slightly bigger than that.
\begin{claim}
For every $i = 1, \dots, k-1$, $c_i \leq w(\partial{S^*_1})$. Moreover,
if $\ensuremath{\mathsf{Opt}}\xspace \geq (k - 1)w(\partial{S_1^*}) / (2 - \varepsilon_3)$, $\text{Main}(G,
k)$ achieves a $(2 - \varepsilon_3)$-approximation.
\label{claim:notgood}
\end{claim}
\begin{proof}
Consider the beginning of an arbitrary iteration of the while loop
of $\text{Main}(G, k)$. Let $k'$ and ${\cal S}' = \{ S_1, \dots, S_{k'} \}$
be the values at that iteration. By \As{1}, set $S_1^*$ does not
conform to ${\cal S}'$ (because ${\cal S}'$ only gets subdivided as the
algorithm proceeds, and $S_1^*$ does not conform to the final partition
${\cal S}$). So there exists some $i \in [k']$ such that $S_i$
intersects both $S^*_1$ and $V \setminus S^*_1$. If we consider
$\inducedG{S_i}$ and its mincut,
\[
\mathsf{Mincut}(\inducedG{S_i}) \leq
w(E(S_i \cap S^*_1, S_i \setminus S^*_1))
\leq w(\partial{S^*_1}).
\]
Now the new $c_j$ values created in this iteration of the while loop
are at most the smallest mincut value, so we have that each $c_j
\leq w(\partial{S_1^*})$. Therefore,
\[
w(E(S_1, \dots, S_k)) = \sum_{i=1}^{k - 1} c_i \leq (k - 1)\cdot
w(\partial{S_1^*}),
\]
and $\text{Main}(G, k)$ achieves a $(2 - \varepsilon_3)$-approximation if $(k
- 1) w(\partial{S^*_1}) \leq (2 - \varepsilon_3) \ensuremath{\mathsf{Opt}}\xspace$.
\end{proof}
Consequently, it suffices to additionally assume that $\ensuremath{\mathsf{Opt}}\xspace$ is
close to $(\nicefrac{k}{2}) \, w(\partial{S^*_1})$. Formally,
\begin{leftbar}
\As{2}: $ \ensuremath{\mathsf{Opt}}\xspace < w(\partial{S^*_1}) \cdot \frac{k - 1}{2 - \varepsilon_3} $.
\end{leftbar}
Recall that $\varepsilon_1, \varepsilon_2 > 0$ are the parameters such that
there is a $(2 - \varepsilon_2)$-approximation algorithm for
\textsc{Laminar $k$-Cut}$(\varepsilon_1)$. Let $\mathfrak{a} \in [k]$ be the smallest
integer such that $c_{\mathfrak{a}} > w(\partial{S^*_1}) (1 -
\nicefrac{\varepsilon_1}{3})$ (set $\mathfrak{a} = k$ if there is no such integer).
(See Figure~\ref{fig:histo}.)
In other words, $\mathfrak{a}$ is the value of $k'$ in the while loop of
$\text{Main}(G, k)$ when both $\min_i \mathsf{Mincut}(\inducedG{S_i})$ and $\min_i
\text{\sf{Min-4-cut}}(\inducedG{S_i}) / 3$ are bigger than $w(\partial{S^*_1}) (1 -
\nicefrac{\varepsilon_1}{3})$ for the first time. Let $\varepsilon_4 > 0$ be a constant
satisfying
\begin{equation}
\label{eq:para_1}
(2/3) \cdot \varepsilon_1 \varepsilon_4 \geq \varepsilon_3.
\end{equation}
The next claim shows that we are done if $\mathfrak{a}$ is large.
\begin{claim}
If $\mathfrak{a} \geq \varepsilon_4 k$, $\text{Main}(G, k)$ achieves a $(2 -
\varepsilon_3)$-approximation.
\label{clm:left_tail}
\end{claim}
\begin{proof}
If $\mathfrak{a} \geq \varepsilon_4 k$, we have
\begin{align*}
\sum_{i=1}^{k-1} c_i &\leq
(\mathfrak{a} - 1) (1 - \nicefrac{\varepsilon_1}{3})\cdot w(\partial{S^*_1}) +
(k - \mathfrak{a})\cdot w(\partial{S^*_1}) \\
& \leq k \cdot w(\partial{S^*_1})\cdot (1 - \nicefrac{\varepsilon_1 \varepsilon_4}{3})
\leq (2 - (\nicefrac23) \varepsilon_1 \varepsilon_4) \ensuremath{\mathsf{Opt}}\xspace \leq (2 - \varepsilon_3)
\ensuremath{\mathsf{Opt}}\xspace. \qedhere
\end{align*}
\end{proof}
Thus, we can assume that our algorithm finds very few cuts appreciably
smaller than $w(\partial{S^*_1})$.
\begin{leftbar}
\As{3}: $\mathfrak{a} < \varepsilon_4 k$.
\end{leftbar}
Let $\mathfrak{b} \in [k]$ be the smallest number such that
$w(\partial{S^*_{\mathfrak{b}}}) > w(\partial{S^*_1}) (1 +
\nicefrac{\varepsilon_1}{3})$; let it be $k$ if there is no such
number. (Again, see Figure~\ref{fig:histo}.) Observe that $\mathfrak{a}$ is
defined based on our algorithm, whereas $\mathfrak{b}$ is defined based on the
optimal solution. Let $\varepsilon_5 > 0$ be a constant satisfying:
\begin{equation}
\label{eq:para_2}
\frac{1}{2 - \varepsilon_3} \leq \frac{1 + \nf{\varepsilon_1 \varepsilon_5}{3}}{2}
~~~\Leftrightarrow~~~ (1 + \nf{\varepsilon_1 \varepsilon_5}{3})(2 -
\varepsilon_3) \geq 2.
\end{equation}
The next claim shows that $\mathfrak{b}$ should be close to $k$.
\begin{claim}
$\mathfrak{b} \geq (1 - \varepsilon_5)k$.
\end{claim}
\begin{proof}
Suppose that $\mathfrak{b} < (1 - \varepsilon_5) k$. We have
\begin{align*}
& \ \frac{ k \cdot w(\partial{S^*_1}) }{2 - \varepsilon_3} \stackrel{\As{2}}{>} \ensuremath{\mathsf{Opt}}\xspace = \frac{1}{2} \sum_{i = 1}^k w(\partial{S^*_i}) \\
\geq& \ \frac{w(\partial{S^*_1})}{2} \left( (1 - \varepsilon_5)k +
\varepsilon_5 k (1 + \varepsilon_1 / 3) \right)
=
\frac{k\cdot w( \partial{S^*_1} )}{2} \left( 1 + \nf{\varepsilon_1 \varepsilon_5}{3} \right),
\end{align*}
which contradicts~\eqref{eq:para_2}.
\end{proof}
Therefore, we can also assume that very few cuts in \ensuremath{\mathsf{Opt}}\xspace
are appreciably larger than $w( \partial{S^*_1})$.
\begin{leftbar}
\As{4}: $\mathfrak{b} \geq (1 - \varepsilon_5) k$.
\end{leftbar}
\textbf{Constructing an Instance of Laminar Cut:} In order to
construct the instance for the problem, let $S^*_{\geq \mathfrak{b}} = \cup_{i = \mathfrak{b}}^{k}
S^*_i$ be the union of these last few components from ${\cal S}^*$ which
have ``large'' boundary. Consider the iteration of the while loop
when $k' = \mathfrak{a}$ and consider $S_1, \dots, S_{\mathfrak{a}}$ in that
iteration. By its definition, $c_{\mathfrak{a}} > w(\partial{S_1^*}) (1 -
\nicefrac{\varepsilon_1}{3})$. Hence
\begin{gather}
\min_i \mathsf{Mincut}(\inducedG{S_i}) > w(\partial{S_1^*}) (1 -
\nicefrac{\varepsilon_1}{3}),
\label{eq:stop1} \\
\min_i \text{\sf{Min-4-cut}}(G[S_i]) > 3 w(\partial{S_1^*}) (1 -
\nicefrac{\varepsilon_1}{3}).
\label{eq:stop2}
\end{gather}
In particular, \eqref{eq:stop2} implies that no two near-mincuts
cross, since two crossing near-mincuts would result in a $4$-cut of
weight roughly at most $2 w(\partial{S_1^*})$.
However, we are not yet done, since we need to factor out the effects
of the $\mathfrak{a} - 1$ ``small'' cuts found by our algorithm. For this, we need
one further idea.
Let $\boldsymbol{\mathrm{r}} = (r_1, r_2, \ldots, r_{\mathfrak{a}}) \in [k]^{\mathfrak{a}}$ be such that
$r_i$ is the number of sets
$S^*_1, \dots, S^*_{\mathfrak{b} - 1}, S^*_{\geq \mathfrak{b}}$ that intersect with
$S_i$, and let $|\boldsymbol{\mathrm{r}}| := \sum_{i = 1}^{\mathfrak{a}} r_i$. If we consider the
bipartite graph where the left vertices are the algorithm's
components $S_1, \dots, S_{\mathfrak{a}}$, the right vertices are
$S^*_1, \dots, S^*_{\mathfrak{b} - 1}, S^*_{\geq \mathfrak{b}}$, and two sets have an
edge if they intersect, then $|\boldsymbol{\mathrm{r}}|$ is the number of edges. Since
there is no isolated vertex and the graph is connected (otherwise
there would exist $\emptyset \neq I \subsetneq [k']$ and
$\emptyset \neq J \subsetneq [k]$ with
$\cup_{i \in I}S_i = \cup_{j \in J} S^*_j$, contradicting~\As{1}),
the number of edges is $|\boldsymbol{\mathrm{r}}| \geq \mathfrak{a} + \mathfrak{b} - 1$.
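The quantity $|\boldsymbol{\mathrm{r}}|$ is simply the edge count of this intersection bipartite graph, which the following sketch counts directly (a toy illustration with our own helper name, not part of the algorithm):

```python
def intersection_edges(our_parts, opt_parts):
    """|r| = number of pairs (S_i, S*_j) with nonempty intersection,
    i.e. the edge count of the intersection bipartite graph."""
    return sum(1 for S in our_parts for T in opt_parts if set(S) & set(T))
```

For instance, with $\mathfrak{a} = 2$ left parts $\{0,1,2\}, \{3\}$ and three right parts $\{0\}, \{1,3\}, \{2\}$, the count is $4 = \mathfrak{a} + 3 - 1$, matching the connectivity bound.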
\begin{claim}
\label{clm:promises}
For each $i$ with $r_i \geq 2$,
the graph $\inducedG{S_i}$ satisfies the two promises of the problem
\textsc{Laminar $r_i$-Cut}$(\varepsilon_1)$.
\end{claim}
\begin{proof}
Fix $i$ with $r_i \geq 2$. Let
$J := \{ j \in [\mathfrak{b} - 1] \mid S_i \cap S^*_j \neq \emptyset \}$ be
the indices of the sets $S_j^*$ among the first $\mathfrak{b} - 1$ sets in the optimal
partition that intersect $S_i$. Since $|J| \geq r_i - 1$ and $r_i \geq 2$, $|J| \geq 1$.
Note that
$(1 - \nf{\varepsilon_1}3)\cdot w(\partial{S^*_1}) <
\mathsf{Mincut}(\inducedG{S_i})$ by~\eqref{eq:stop1}.
For every $j \in J$,
\[
\mathsf{Mincut}(\inducedG{S_i}) \leq
w(E(S_i \cap S^*_j, S_i \setminus S^*_j)) \leq w(\partial{S^*_j})
\leq (1 + \varepsilon_1 / 3)\; w(\partial{S^*_1}) \leq (1 + \varepsilon_1)
\;\mathsf{Mincut}(\inducedG{S_i}).
\]
The first and second inequality hold since both parts
$S_i \cap S^*_j$ and $S_i \setminus S^*_j$ are nonempty, and hence
deleting all the edges in $\partial{S_j^*}$ would separate
$G[S_i]$.
The third inequality is by the choice of $\mathfrak{b}$, and the last
inequality uses~\eqref{eq:stop1} and the fact that
$(1 + \nicefrac{\varepsilon_1}{3}) \leq (1 + \varepsilon_1)(1 -
\nicefrac{\varepsilon_1}{3})$ when $\varepsilon_1 < 1/4$.
This implies that in $\inducedG{S_i}$, for every $j \in J$,
$(S_i \cap S^*_j, S_i \setminus S^*_j)$ is a $(1 + \varepsilon_1)$-mincut.
Furthermore, in $\inducedG{S_i}$, no two $(1+\varepsilon_1)$-mincuts cross,
because crossing would result in a $4$-cut of cost at most
\[
2(1+\varepsilon_1)\; \mathsf{Mincut}(\inducedG{S_i}) \leq 2(1+\varepsilon_1) (1 + \varepsilon_1 / 3)
\; w(\partial{S_1^*}),
\]
contradicting~\eqref{eq:stop2}. (Note that $2(1 +
\varepsilon_1) (1 + \nicefrac{\varepsilon_1}{3}) \leq 3(1 - \nicefrac{\varepsilon_1}{3})$ when
$\varepsilon_1 < 1/4$.) Hence, in $\inducedG{S_i}$, the two promises of
\textsc{Laminar $r_i$-Cut}$(\varepsilon_1)$ are satisfied.
\end{proof}
Our algorithm $\text{Main}(G, k)$ runs $\text{Laminar}(\inducedG{S_i}, r_i)$ for
each $i \in [\mathfrak{a}]$ when it sets $k' = \mathfrak{a}$ and the vector $\boldsymbol{\mathrm{r}}$ as
defined above. As in the algorithm, let
${\cal C} = \{C_1, \dots, C_{k}\}$ be the partition obtained in
Line~\ref{line:complete}. In other words, to obtain the $k$ sets
$C_1, \dots, C_k$ from the set $V$, we take the reference partition
$S_1, \dots, S_{\mathfrak{a}}$ and further partition these sets using Laminar
to get $|\boldsymbol{\mathrm{r}}|$ parts $C_1, \dots, C_{|\boldsymbol{\mathrm{r}}|}$. If $|\boldsymbol{\mathrm{r}}| \geq k$, we
can merge the last $|\boldsymbol{\mathrm{r}}| - k + 1$ parts to get exactly $k$ parts if
we want (but we will not take any edge savings into account in this
calculation). If $|\boldsymbol{\mathrm{r}}| < k$, we get $k - |\boldsymbol{\mathrm{r}}|$ more parts using
the Complete procedure.
The total cost of this solution ${\cal C}$ is $w(E(C_1, \dots, C_k))$,
which is $\sum_{j=1}^{\mathfrak{a} - 1} c_j \leq (\mathfrak{a} - 1) w(\partial{S^*_1})$
plus the cost of $\text{Laminar}(\inducedG{S_i}, r_i)$ for all
$i \in [\mathfrak{a}]$ and the cost of $\text{Complete}$. Since
Claim~\ref{clm:promises} considers the partition of each
$\inducedG{S_i}$ obtained by cutting edges belonging to the optimal
$k$-partition,
the sum of the costs of the $r_i$-partitions we compare against in
each \textsc{Laminar $r_i$-Cut} instance is exactly $\ensuremath{\mathsf{Opt}}\xspace$.
Hence the cost of the
solution given by $\text{Laminar}(\inducedG{S_i}, r_i)$, summed over
$i \in [\mathfrak{a}]$, is bounded by $(2 - \varepsilon_2) \ensuremath{\mathsf{Opt}}\xspace$, by the approximation
assumption in Theorem~\ref{thm:reduction1}.
If $\cup_{i \in I} S^*_i$ for some $\emptyset \neq I \subsetneq [k]$
conforms to ${\cal C}$, then since \text{Main}\ also records $\text{Guess}({\cal C})$,
the proof of Claim~\ref{claim:good} guarantees that $\text{Main}(G, k)$
gives a $(2 - \varepsilon_3)$-approximation using the induction hypothesis.
Otherwise, $S^*_1$ does not conform to ${\cal C}$, so the arguments used
in the proof of Claim~\ref{claim:notgood} show that the cost of
$\text{Complete}$ is at most $(k - |\boldsymbol{\mathrm{r}}|)\, w(\partial{S^*_1})$ if
$|\boldsymbol{\mathrm{r}}| \leq k$, and $0$ otherwise. Since $|\boldsymbol{\mathrm{r}}| \geq \mathfrak{a} + \mathfrak{b} - 1$,
the total cost $w(E(C_1, \dots, C_k))$ is then bounded
by
\begin{align*}
& (\mathfrak{a} - 1 ) w(\partial{S^*_1}) + (2 - \varepsilon_2) \ensuremath{\mathsf{Opt}}\xspace +
(k - \mathfrak{a} - \mathfrak{b} + 1) w(\partial{S^*_1}) \\
= & \ (2 - \varepsilon_2) \ensuremath{\mathsf{Opt}}\xspace + (k - \mathfrak{b}) w(\partial{S^*_1}) \\
\leq & \ (2 - \varepsilon_2) \ensuremath{\mathsf{Opt}}\xspace + \varepsilon_5 k \cdot
w(\partial{S^*_1}) \tag{by \As{4}}\\
\leq & \ (2 - \varepsilon_2 + 2 \varepsilon_5) \ensuremath{\mathsf{Opt}}\xspace.
\end{align*}
Therefore, if
\begin{equation}
\varepsilon_3 \leq \varepsilon_2 - 2 \varepsilon_5,
\label{eq:para_4}
\end{equation}
then $\text{Main}(G, k)$ gives a $(2 - \varepsilon_3)$-approximation in every
possible case. We set $\varepsilon_3, \varepsilon_4, \varepsilon_5 > 0$ so that they
satisfy the three conditions~\eqref{eq:para_1}, \eqref{eq:para_2},
and~\eqref{eq:para_4}, namely,
\[
(2/3) \cdot \varepsilon_1 \varepsilon_4 \geq \varepsilon_3, \quad (1 +
\varepsilon_1 \varepsilon_5 / 3)(2 - \varepsilon_3) \geq 2,
\quad \varepsilon_3 \leq \varepsilon_2 - 2 \varepsilon_5.
\]
(For instance, setting $\varepsilon_4 = \varepsilon_5 = \min(\varepsilon_1,
\varepsilon_2) / 3$ and $\varepsilon_3 = \varepsilon_4^2$ works.)
\end{proof}
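As a numerical sanity check of the closing parameter choice, one can verify the three conditions directly (a throwaway script with our own names, not part of the proof):

```python
def check_parameters(eps1, eps2):
    """Check that eps4 = eps5 = min(eps1, eps2)/3 and eps3 = eps4**2
    satisfy conditions (eq:para_1), (eq:para_2), and (eq:para_4)."""
    eps4 = eps5 = min(eps1, eps2) / 3
    eps3 = eps4 ** 2
    return ((2 / 3) * eps1 * eps4 >= eps3                # eq:para_1
            and (1 + eps1 * eps5 / 3) * (2 - eps3) >= 2  # eq:para_2
            and eps3 <= eps2 - 2 * eps5)                 # eq:para_4
```

For example, the check passes for $(\varepsilon_1, \varepsilon_2) = (0.2, 0.5)$, and more generally for any admissible pair in the ranges of Theorem~\ref{thm:reduction1}.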
\subsection{Running Time}
We prove that this algorithm also runs in FPT time, finishing the proof
of Theorem~\ref{thm:reduction1}.
\begin{lemma}
Suppose that $\text{Laminar}(G, k)$ runs in time $f(k) \cdot g(n)$.
Then Main$(G, k)$ runs in time $2^{O(k^2 \log k)} \cdot f(k) \cdot (g(n) + n^4 \log^3 n)$.
\end{lemma}
\begin{proof}
Let $\mathsf{Time}(\text{P})$ denote the running time of a procedure \text{P}.
Here each procedure is only parameterized by the number of sets it outputs (e.g., $\text{Main}(k), \text{Guess}(k), \text{Complete}(k), \text{Laminar}(k)$).
We use the fact that the global mincut can be computed in time $O(n^2 \log^3 n)$~\cite{KS96} and the min-$4$-cut in time $O(n^4 \log^3 n)$~\cite{Levine00}.
First, $\mathsf{Time}(\text{Complete}(k)) = O(kn^2 \log^3 n)$. For $\text{Guess}$ and $\text{Main}$,
\[
\mathsf{Time}(\text{Guess}(k)) \leq k \cdot 2^{k + 1} \cdot ( \mathsf{Time}(\text{Main}(k - 1)) + O(n) ),
\]
and
\begin{align*}
\mathsf{Time}(\text{Main}(k)) & \leq k^k \cdot (\mathsf{Time}(\text{Laminar}(k)) +
\mathsf{Time}(\text{Guess}(k)) + \mathsf{Time}(\text{Complete}(k))) + O(k n^4 \log^3 n) \\
& \leq 2^{O(k \log k)} \cdot f(k) \cdot (g(n) + O(n^4 \log^3 n)) +
2^{O(k \log k)} \cdot \mathsf{Time}(\text{Main}(k - 1)).
\end{align*}
We can conclude $\mathsf{Time}(\text{Main}(k)) \leq 2^{O(k^2 \log k)} \cdot f(k) \cdot (g(n) + n^4 \log^3 n)$.
\end{proof}
\section{An Algorithm for \textsc{Laminar $k$-Cut}}
\label{sec:laminar}
Recall the definition of the \textsc{Laminar $k$-Cut} problem:
\LamDef*
Let $\mathcal O_{\varepsilon_1}$ contain all partitions $S_1,\ldots,S_k$ of $V$ with
the restriction that the boundaries of the first $k-1$ parts are
small---i.e., $w(\partial{S_i}) \le (1+\varepsilon_1)\mathsf{Mincut}(G)$ for all $i \in
[k-1]$. We emphasize that the weight of the last cut, i.e.,
$w(\partial{S_k})$, is unconstrained. In this section, we give an
algorithm to find a $k$-partition (possibly not in $\mathcal O_{\varepsilon_1}$) with
total weight
\[ w(E(S_1,\ldots,S_k)) \le (2 - \varepsilon_2) \min\limits_{\{S_i'\}\in
\mathcal O_{\varepsilon_1}} w(E(S_1',\ldots,S_k')). \]
Formally, the main theorem of this section is the following:
\begin{theorem}[Laminar Cut Algorithm]
\langlebel{thm:laminar}
Suppose there exists a $(1+\delta)$-approximation algorithm for
$\PartialVC{k}$ for some $\delta \in (0, 1/24)$ that runs in time
$f(k) \cdot g(n)$. Then, for any $\varepsilon_1\in(0,1/6-4\delta)$, there
exists a $(2 - \varepsilon_2)$-approximation algorithm for
\textsc{Laminar $k$-Cut} with parameter $\varepsilon_1$ that runs in time $2^{O(k)}f(k)(\tilde O(n^4) + g(n))$
for some constant $\varepsilon_2 > 0$.
\end{theorem}
In the rest of this section we present the algorithm and the analysis.
For a formal description, see the pseudocode in
Appendix~\ref{sec:pseudocode-laminar}.
\subsection{Mincut Tree}
The first idea in the algorithm is to consider the structure of a laminar family of cuts. Below, we introduce the concept of a \textit{mincut tree}. The vertices of the mincut tree are called \textit{nodes}, to distinguish them from the vertices of the original graph.
\begin{definition}[Mincut Tree]
A tree $\mathcal T = (V_{\mathcal T}, E_{\mathcal T}, w_{\mathcal T})$ is a \textbf{$(1+\varepsilon_1)$-mincut tree} on a graph $G=(V,E,w)$ with mapping $\phi : V \to V_{\mathcal T}$ if the following two sets coincide:
\begin{enumerate}
\item The set of all $(1+\varepsilon_1)$-mincuts of $G$.
\item The set of cuts induced by single tree edges: cutting an edge $e
\in E_{\mathcal T}$ of the tree leaves a set $A_e \subset V_{\mathcal T}$ of nodes
on one side of the cut; define $S_e := \phi^{-1}(A_e) = \{v \mid
\phi(v) \in A_e\}$ for each $e \in E_{\mathcal T}$, and take the set of cuts
$\{ (S_e, V \setminus S_e) : e \in E_{\mathcal T} \}$.
\end{enumerate}
Moreover, for every corresponding pair of $(1+\varepsilon_1)$-mincut $(S_e, V
\setminus S_e)$ and edge $e \in E_{\mathcal T}$, we have $w_{\mathcal T}(e) = w(E(S_e,
V\setminus S_e))$.
\end{definition}
We use the term \textit{mincut tree} without the $(1+\varepsilon_1)$ when the
value of $\varepsilon_1$ is either implicit or irrelevant.
For the rest of this section, let
\[ {\mu} := \mathsf{Mincut}(G) \] for brevity. Observe that the last condition
implies that ${\mu}\le w_{\mathcal T}(e)\le(1+\varepsilon_1){\mu}$ for all $e\in
E_{\mathcal T}$. The existence of a mincut tree (and an algorithm to
construct it) under the laminarity assumption is standard, going back
at least to Edmonds and Giles~\cite{EG75}.
\begin{theorem}[Mincut Tree Existence/Construction]
\label{thm:mincutTreeExistence}
If the set of $(1+\varepsilon_1)$-mincuts of a graph is laminar, then an
$O(n)$-sized $(1+\varepsilon_1)$-mincut tree always exists, and can
be found in $O(n^3)$ time.
\end{theorem}
\begin{proof}
We refer the reader to~\cite[Section~2.2]{KV12}. Fix a vertex $v \in
V$, and for each $(1+\varepsilon_1)$-mincut $(S,V\setminus S)$, pick the side
that contains $v$; this family of subsets of $V$ satisfies the laminar
condition in Proposition~2.12 of that book. Corollary~2.15 proves that
this family has size $O(n)$, and the construction of $T$ in
Proposition~2.14 gives the desired mincut tree. Furthermore, we can
compute the mincut tree in $O(n^3)$ time as follows: first precompute
whether $X \subset Y$ for every two sets $X$ and $Y$ in the family,
and then compute $T$ following the construction in the proof of
Proposition~2.14.
\end{proof}
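The core of the construction above---assigning each set its smallest strict superset as parent---can be sketched as follows. This is an illustrative sketch under our own representation (the family given explicitly as a list of \texttt{frozenset}s), not the exact procedure of~\cite{KV12}:

```python
def laminar_forest(sets):
    """Containment forest of a laminar family: each set's parent is its
    smallest strict superset in the family (None for maximal sets)."""
    order = sorted(sets, key=len)        # process small sets first
    parent = {}
    for i, s in enumerate(order):
        parent[s] = None
        for t in order[i + 1:]:          # larger sets, smallest first
            if s < t:                    # strict containment
                parent[s] = t
                break
    return parent
```

Testing each candidate containment takes $O(n)$ time over $O(n^2)$ pairs; since a laminar family on $n$ vertices has $O(n)$ sets, this matches the $O(n^3)$ bound in the proof.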
\begin{definition}[Mincut Tree Terminology] Let $\mathcal T$ be a rooted mincut
tree. For $a \in V_{\mathcal T}$, define the following terms:
\begin{OneLiners}
\item[1.] $\ensuremath{\mathrm{children}}(a)$: the set of children of node $a$ in the rooted tree.
\item[2.] $\ensuremath{\mathrm{desc}}(a)$: the set of descendants of $a$, i.e., nodes $b \in V_{\mathcal T} \setminus \{a\}$ whose path to the root includes $a$.
\item[3.] $\ensuremath{\mathrm{anc}}(a)$: the set of ancestors of $a$, i.e., nodes $b \in V_{\mathcal T} \setminus \{a\}$ on the path from $a$ to the root.
\item[4.] $\ensuremath{\mathrm{subtree}}(a)$: vertices in the subtree rooted at $a$, i.e., $\{a\} \cup \ensuremath{\mathrm{desc}}(a)$.
\end{OneLiners}
\end{definition}
For the set of partitions $\mathcal O_{\varepsilon_1}$ (as defined at the beginning
of this section), we observe the following.
\begin{claim}[Representing Laminar Cuts in $\mathcal T$]
Let $\mathcal T = (V_{\mathcal T}, E_{\mathcal T}, w_{\mathcal T})$ be a $(1+\varepsilon_1)$-mincut tree of
$G=(V,E,w)$, and consider a partition $\{S_1, \ldots, S_k\} \in
\mathcal O_{\varepsilon_1}$. Then, there exists a root $r \in V_{\mathcal T}$ and nodes
$a_1, \ldots, a_{k-1} \in V_{\mathcal T} \setminus \{r\}$ such that if we root
the tree $\mathcal T$ at $r$,
\begin{enumerate}
\item For any two nodes in $\{a_1, \ldots, a_{k-1}\}$, neither is an
ancestor of the other. (We call two such nodes
\textbf{incomparable}).
\item For each $a_i$, let $A_i := \ensuremath{\mathrm{subtree}}(a_i)$, and let $A_k =
V_{\mathcal T} \setminus \bigcup_{i=1}^{k-1} A_i$ (so that $r \in A_k$). We
have the two equivalences $\{\phi^{-1}(A_i) \mid i \in [k-1]\} = \{S_1,
\ldots, S_{k-1}\}$ and $\phi^{-1}(A_k) = S_k$. In other words, the
components $A_i \subset V_{\mathcal T}$, when mapped back by $\phi^{-1}$,
correspond exactly to the sets $S_i \subset V$, with the additional
guarantee that $A_k$ and $S_k$ match.
\end{enumerate}
\end{claim}
\begin{proof}
Since $S_i$ is a $(1+\varepsilon_1)$-mincut for each $i \in [k-1]$, there
exists an edge $e_i \in E_{\mathcal T}$ such that the set $A_i'$ of nodes on
one side of $e_i$ satisfies $\phi^{-1}(A_i') = S_i $. The sets $A_i'$
for $i\in[k-1]$ are necessarily disjoint, and they cannot span all
nodes in $V_{\mathcal T}$, since $S_k$ is still unaccounted for. If we root
$\mathcal T$ at a node $r$ not in any $A_i'$, then each $A_i'$ is a subtree
of the rooted $\mathcal T$. Altogether, the roots of the subtrees $A_i'$
satisfy condition~(1) of the claim, and the $A_i'$ themselves satisfy
condition~(2).
\end{proof}
For a graph $G=(V,E,w)$ and mincut tree $\mathcal T=(V_{\mathcal T},E_{\mathcal T},w_{\mathcal T})$
with mapping $\phi:V\to V_{\mathcal T}$, define $E_G(A,B)$ for $A,B \subset
V_{\mathcal T}$ as $E\left(\phi^{-1}(A), \phi^{-1}(B)\right)$, i.e., the set of
edges of $G$ crossing the vertex sets corresponding to $A$ and $B$.
\begin{observation}
Given a root $r\in V_{\mathcal T}$ and incomparable nodes $a_1, \ldots,
a_{k-1} \in V_{\mathcal T} \setminus r$, we can bound the corresponding
partition $S_1,\ldots,S_k$ as follows:
\begin{align*}
w(E(S_1, \ldots, S_k)) &= \textstyle \sum_{i=1}^{k-1}
w(\partial(S_i)) - \sum_{i<j \le k-1} w(E(S_i, S_j)) \\ &=
\textstyle \sum_{i=1}^{k-1} w_{\mathcal T}(e_i) - \sum_{i<j\le k-1}
w(E_G(\ensuremath{\mathrm{subtree}}(a_i),\ensuremath{\mathrm{subtree}}(a_j))) ,
\end{align*}
where $e_i$ is the parent edge
of $a_i$ in the rooted tree.
\end{observation}
Note that ${\mu} \le w_{\mathcal T}(e) \le (1+\varepsilon_1){\mu}$ for all $e\in
E_{\mathcal T}$, so to approximately minimize the above expression for a fixed
root $r$, it suffices to approximately maximize
\begin{align*}
\textstyle \mathsf{Saved}(a_1,\ldots,a_{k-1}) := \sum\limits_{i<j\le k-1}
w(E_G(\ensuremath{\mathrm{subtree}}(a_i),\ensuremath{\mathrm{subtree}}(a_j))), \label{eq:saved}
\end{align*}
which we think of as the edges \textit{saved} in the double counting of
$\sum_{i=1}^{k-1}w_{\mathcal T}(e_i)$. The actual approximation factor is made
precise in the proof of Theorem~\ref{thm:laminar}.
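As a concrete reference point, $\mathsf{Saved}$ can be computed directly from its definition. The sketch below is ours and assumes the vertex sets $\phi^{-1}(\ensuremath{\mathrm{subtree}}(a_i))$ have already been extracted from the mincut tree:

```python
from itertools import combinations

def saved(edges, subtree_sets):
    """Sum, over pairs of chosen nodes a_i, a_j, of the weight of edges
    of G running between phi^{-1}(subtree(a_i)) and phi^{-1}(subtree(a_j)).

    edges: iterable of (u, v, w) triples; subtree_sets: disjoint vertex sets.
    """
    total = 0
    for A, B in combinations(subtree_sets, 2):
        total += sum(w for u, v, w in edges
                     if (u in A and v in B) or (u in B and v in A))
    return total
```

On a toy graph with edges $(1,2)$, $(2,3)$, $(3,4)$ of weights $3,5,7$ and chosen subtrees $\{1,2\}$, $\{3\}$, $\{4\}$, only the edges $(2,3)$ and $(3,4)$ cross, giving $\mathsf{Saved}=12$.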
To maximize the number of saved edges over all partitions in
$\mathcal O_{\varepsilon_1}$, it suffices to try all possible roots $r$ and take the
best partition. Therefore, for the rest of this section, we focus on
maximizing $\mathsf{Saved}(a_1,\ldots,a_{k-1})$ for a fixed root
$r$.
Let $\ell^*(r)$ be that maximum value for root $r$, and let $\mathsf{Opt}(r) =
\{a_1^*, \ldots, a_{k-1}^*\} \subset V_{\mathcal T}$ be the solution that
attains it.
\subsection{Anchors}
Root the mincut tree $\mathcal T$ at $r$, and let $a_1^*,\ldots,a_{k-1}^*$ be incomparable nodes in the solution $\mathsf{Opt}(r)$.
First, observe that we can assume w.l.o.g.\ that for each node $a_i^*$,
its parent node is an ancestor of some $a_j^* \neq a_i^*$: if not, we can replace $a_i^*$ with its parent, which can only increase $\mathsf{Saved}(a_1^*,\ldots,a_{k-1}^*)$.
\begin{observation}
Consider nodes $a_1^*,\ldots,a_s^* \in \mathsf{Opt}(r)$ which share the same parent $a \notin \mathsf{Opt}(r)$, and assume that $a$ has no other descendants. If we replace $a_1^*,\ldots,a_s^*$ in $\mathsf{Opt}(r)$ with $a$, then we lose at most $\mathsf{Saved}(a_1^*,\ldots,a_s^*)$ in our solution.\footnote{The new solution may no longer have $k-1$ nodes, but we will fix this problem in the proof of Theorem~\ref{thm:laminar}. For now, assume that we are allowed to choose any number up to $k-1$ nodes.}
\end{observation}
If $\mathsf{Saved}(a_1^*,\ldots,a_s^*)$ is small compared to $(s-1){\mu}$, then we do not lose too much. This motivates the notion of anchors.
\begin{definition}[Anchors]
Let $\mathcal T=(V_{\mathcal T},E_{\mathcal T},w_{\mathcal T})$ be a rooted tree.
For a fixed constant $\varepsilon_3 > 0$, define an \textbf{$\varepsilon_3$-anchor} to be a node $a\in V_{\mathcal T}$ for which there exist an integer $s \in [2,k-1]$ and $s$ children $a_1,\ldots,a_s$ such that $\mathsf{Saved}(a_1,\ldots,a_s) \ge \varepsilon_3(s-1){\mu}$. When the value of $\varepsilon_3$ is implicit, we use the term anchor, without the $\varepsilon_3$.
\end{definition}
We now claim that we can transform any solution to another
well-structured solution, with only a minimal loss.
\begin{lemma}[Shifting Lemma]
\label{lem:anchor}
Let $a_1,\ldots,a_{k-1}$ be a set of incomparable nodes of a
$(1+\varepsilon_1)$-mincut tree $\mathcal T$. Then, there exists a set
$b_1,\ldots,b_s$ of incomparable nodes, for $1\le s\le k-1$, such that
\begin{enumerate}
\item The parent of every node $b_i$ is either an $\varepsilon_3$-anchor, or is
an ancestor of some node $b_j \neq b_i$ whose parent is an anchor.
\item $\mathsf{Saved}(b_1,\ldots,b_s) \ge \mathsf{Saved}(a_1,\ldots,a_{k-1}) -
\varepsilon_3(k-s){\mu}$.
\end{enumerate}
In particular, if $\{a_1, \ldots, a_{k-1}\} = \mathsf{Opt}(r)$, condition~(2)
implies $\mathsf{Saved}(b_1,\ldots,b_s) \ge \ell^*(r) - \varepsilon_3(k-1){\mu}$.
\end{lemma}
\begin{proof}
We begin with the solution $b_i=a_i$ for all $i$, and iteratively
shift non-anchors in the solution while maintaining the potential
function $\Phi:=\mathsf{Saved}(b_1,\ldots,b_s) - \mathsf{Saved}(a_1,\ldots,a_{k-1}) +
\varepsilon_3(k-s){\mu}$ nonnegative.
At the beginning, $\Phi=0$. Suppose there is a
node $b_i$ not satisfying condition (1). Choose one such $b_i$ of
maximum depth in the tree, and let $b'$ be its non-anchor parent. Then
the only descendants of $b'$ in the current solution are siblings of
$b_i$. Replace $b_i$ and its siblings in the solution (say $s'$ nodes
in total) by $b'$. Since $b'$ is not an anchor, $\mathsf{Saved}(b_1,\ldots,b_s)$
drops by at most $\varepsilon_3(s'-1){\mu}$. This drop is compensated by the
decrease of the solution size from $s$ to $s-(s'-1)$, which increases
the $\varepsilon_3(k-s){\mu}$ term of $\Phi$ by exactly $\varepsilon_3(s'-1){\mu}$.
\end{proof}
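The proof's procedure can be sketched directly. Everything below is a hypothetical representation (parent/depth dictionaries and an \texttt{is\_anchor} predicate); it mirrors the replacement step, not an optimized implementation:

```python
def ancestors(x, parent):
    """All strict ancestors of x (parent maps child -> parent; root absent)."""
    out = set()
    while x in parent:
        x = parent[x]
        out.add(x)
    return out

def shift(solution, parent, depth, is_anchor):
    """Iteratively enforce condition (1): replace a deepest violating node
    and its chosen siblings by their common (non-anchor) parent."""
    sol = set(solution)

    def ok(b):
        if is_anchor(parent[b]):
            return True
        return any(is_anchor(parent[c]) and parent[b] in ancestors(c, parent)
                   for c in sol if c != b)

    while True:
        viol = [b for b in sol if not ok(b)]
        if not viol:
            return sol
        b = max(viol, key=lambda x: depth[x])       # deepest violator
        bp = parent[b]
        sibs = {c for c in sol if parent[c] == bp}  # chosen siblings of b
        sol = (sol - sibs) | {bp}
```

Choosing a deepest violator guarantees, as in the proof, that the only solution nodes below the parent $b'$ are siblings of $b_i$, so the single replacement step above is sound.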
Hence, at a loss of $\varepsilon_3(k-1){\mu}$, it suffices to focus on a
solution $\mathsf{Opt}'(r)$ which fulfills condition (1) of
Lemma~\ref{lem:anchor} and has $\mathsf{Saved}$ value $\ell'(r) \geq \ell^*(r) -
\varepsilon_3(k-1){\mu}$.
The rest of the algorithm splits into two cases. At a high level, if
there are enough anchors in a mincut tree $\mathcal T$ that are incomparable
with each other, then we can take such a set and be done. Otherwise, the
set of anchors can be grouped into a small number of paths in $\mathcal T$, and
we can afford to try all possible arrangements of anchors. But first we
show how to find all the anchors in $\mathcal T$.
\subsection{Finding Near-Anchors}
\newcommand{\anchors}{\mathcal{A}}
\begin{lemma}[Finding (Near-)Anchors]
\label{lem:compute-anchors}
Assume access to a $(1+\delta)$-approximation algorithm for $\PartialVC k$
running in time $f(k) \cdot g(n)$. Then, there is an algorithm running
in time $O(n \cdot (n^2 + k \cdot f(k) \cdot g(n)))$ that computes a
set $\mathcal{A}$ of ``near''-anchors in $\mathcal T$, i.e., vertices $a \in
V_{\mathcal T}$ for which there exists an integer $s \in [2,k-1]$ and $s$
children $b_1, \ldots, b_s$ such that $\mathsf{Saved}(b_1,\ldots,b_s) \geq
\varepsilon_3(s-1){\mu} - \delta(1+\varepsilon_1)s{\mu}$.
\end{lemma}
\begin{proof}
To determine if a node $a$ is an anchor or not, for each integer $s
\in [2, k-1]$ we wish to compute the maximum value of
$\mathsf{Saved}(b_1,\ldots,b_s)$ for $b_1,\ldots,b_s \in
\ensuremath{\mathrm{children}}(a)$. Consider the following weighted, complete graph with
vertex and edge weights: for each $b \in \ensuremath{\mathrm{children}}(a)$ create a vertex
$x_b$, and the edge $(x_{b_1}, x_{b_2})$ has weight
$\mathsf{Saved}(b_1,b_2)$. Each vertex $x_b$ also has weight $(1+\varepsilon_1){\mu}
- w(\partial x_b)$, where $w(\partial x_b)$ is the sum of the weights
of edges incident to $x_b$. Note that this graph is
$(1+\varepsilon_1){\mu}$-regular, if we include vertex weights in the
definition of vertex degree.
Observe that $w(\partial x_b)
\le w\left(\partial\left(\phi^{-1}(\ensuremath{\mathrm{subtree}}(b))\right)\right) \le (1+\varepsilon_1){\mu}$,
since every edge in $G$ that contributes to $\mathsf{Saved}(b,b')$ for another
child $b'$ also contributes to the cut
$\partial\left(\phi^{-1}(\ensuremath{\mathrm{subtree}}(b))\right)$, which we know is $\le
(1+\varepsilon_1){\mu}$.
Therefore, each vertex has a nonnegative weight.
Also, a partial vertex cover on this graph with vertices $x_{b_1},
\ldots, x_{b_s}$ has weight exactly $(1+\varepsilon_1)s{\mu} - \mathsf{Saved} (b_1,
\ldots, b_s)$.
Let $b_1^*,\ldots,b_s^* \in \ensuremath{\mathrm{children}}(a)$ be the solution with maximum
$\mathsf{Saved}(b_1^*,\ldots,b_s^*)$. To compute this maximum, we can build
the above graph and run the $(1+\delta)$-approximate partial vertex
cover algorithm from Theorem~\ref{thm:pvc}. The solution
$b_1,\ldots,b_s$ satisfies
\[ (1+\varepsilon_1)s{\mu} - \mathsf{Saved}(b_1,\ldots,b_s) \le
(1+\delta)\left((1+\varepsilon_1)s{\mu} - \mathsf{Saved}(b_1^*,\ldots,b_s^*)\right),
\]
so that
\begin{align*}
\mathsf{Saved}(b_1,\ldots,b_s) & \ge (1+\delta)\,\mathsf{Saved}(b_1^*,\ldots,b_s^*)
- \delta(1+\varepsilon_1)s {\mu} \\
& \ge \mathsf{Saved}(b_1^*,\ldots,b_s^*) - \delta(1+\varepsilon_1)s{\mu}.
\end{align*}
We run this subprocedure for the vertex $a$ for each integer $2 \leq s
\le \min\{|\ensuremath{\mathrm{children}}(a)|, k-1\}$, and mark vertex $a$ if there exists
an integer $s$ such that the weight of saved edges is at least
$\varepsilon_3(s-1){\mu} - \delta(1+\varepsilon_1)s{\mu}$. The set $\ensuremath{\mathrm{anc}}hors$ of
near-anchors is exactly the set of marked vertices.
As for running time, for each node $a$, it takes $O(n^2)$ time to construct the $\textsc{Partial VC}\xspace$ graph and $O(k) \cdot f(k) \cdot g(n)$ time to solve $\PartialVC s$ for each $s \in [2,k-1]$. Repeating the above for each of the $O(n)$ nodes achieves the promised running time.
\end{proof}
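The reduction graph in the proof can be assembled mechanically. Below is a sketch under our own interface (a \texttt{saved\_pair} callback standing in for the precomputed pairwise $\mathsf{Saved}$ values); note the regularity property: each vertex weight plus its incident edge weight sums to $(1+\varepsilon_1)\mu$:

```python
def build_pvc_instance(children, saved_pair, mu, eps1):
    """Weighted complete graph for the partial-vertex-cover reduction:
    edge (b1, b2) gets weight Saved(b1, b2); vertex b gets weight
    (1 + eps1) * mu minus its total incident edge weight."""
    edge_w = {}
    incident = {b: 0 for b in children}
    for i, b1 in enumerate(children):
        for b2 in children[i + 1:]:
            w = saved_pair(b1, b2)
            edge_w[(b1, b2)] = w
            incident[b1] += w
            incident[b2] += w
    vertex_w = {b: (1 + eps1) * mu - incident[b] for b in children}
    return vertex_w, edge_w
```

A partial vertex cover picking $x_{b_1},\ldots,x_{b_s}$ in this instance then has weight exactly $(1+\varepsilon_1)s\mu - \mathsf{Saved}(b_1,\ldots,b_s)$, as used above.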
\subsection{Many Incomparable Near-Anchors}
\begin{lemma}[Many Anchors]
\label{lem:incomparable}
Suppose we have access to a $(1+\delta)$-approximation algorithm for
$\PartialVC k$ running in time $f(k) \cdot g(n)$. Suppose the set
$\mathcal{A}$ of near-anchors contains $k-1$ incomparable nodes from the
mincut tree $\mathcal T$. Then, there is an algorithm computing a solution
with $\mathsf{Saved}$ value
$\ge \frac14\varepsilon_3(k-1){\mu} - \delta(1+\varepsilon_1)(k-1){\mu}$ for any
$\delta>0$, running in time
$O(n \cdot (n^2 + k \cdot f(k) \cdot g(n)))$.
\end{lemma}
\begin{proof}
First, we compute the set $\mathcal{A}$ in $O(n \cdot (n^2 + k \cdot f(k)
\cdot g(n)))$ time, according to Lemma~\ref{lem:compute-anchors}. If
$\mathcal{A}$ contains $k-1$ incomparable nodes, we can \textit{find}
them in $O(n^2)$ time by greedily choosing nodes in a topological,
bottom-first order (see lines 4--11 in
Algorithm~\ref{alg:laminarRooted}). Each of these $k-1$ marked nodes
$a_1, \ldots, a_{k-1}$ has an associated value $s_i$, indicating that
$a_i$ has some $s_i$ children whose $\mathsf{Saved}$ value is at least
$\varepsilon_3(s_i-1){\mu} - \delta(1+\varepsilon_1)s_i{\mu}$. If we consider a
subset $A \subset [k-1]$ and choose the $s_i$ children for each $a_i$
with $i\in A$, then we get a set with $\sum_{i\in A} s_i$ nodes, whose
total $\mathsf{Saved}$ value is at least \[\varepsilon_3\left( \sum_{i\in
A}(s_i-1)\right){\mu} - \delta(1+\varepsilon_1)\left( \sum_{i\in A}s_i
\right){\mu}.\] Assuming that $\sum_{i\in A}s_i \le k-1$, i.e., we
choose at most $k-1$ children, the second
$\delta(1+\varepsilon_1)\left(\sum_{i\in A}s_i\right){\mu}$ term is at most
$\delta(1+\varepsilon_1) (k-1){\mu}$. To optimize the $\varepsilon_3\left( \sum_{i\in
A}(s_i-1)\right){\mu}$ term, we reduce to the following knapsack
problem: we have $k-1$ items $i \in [k-1]$ where item $i$ has size
$s_i \in [2,k-1]$ and value $s_i-1$, and our bag size is $k-1$. A
knapsack solution of value $Z:=\sum_{i\in A}(s_i-1)$ translates to a
solution with $\mathsf{Saved}$ value $\ge \varepsilon_3{\mu} \cdot Z -
\delta(1+\varepsilon_1)(k-1){\mu}$. By Lemma~\ref{lemma:knapsack}, when $k
\ge 5$, we can compute a solution $A \subset [k-1]$ of value $\ge
(k-1)/4$ in $O(k)$ time. (If $k\le4$, we can use the exact $\tilde O(n^4)$
$k$\textsc{-Cut}\xspace algorithm from~\cite{Levine00}.) Selecting the children of each
$a_i$ with $i\in A$ gives a total $\mathsf{Saved}$ value of at least
$\frac14\varepsilon_3(k-1){\mu} - \delta(1+\varepsilon_1)(k-1){\mu}$.
\end{proof}
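The knapsack step admits a textbook dynamic program. The sketch below is slower than the $O(k)$ routine promised by Lemma~\ref{lemma:knapsack}, but makes the reduction concrete (item $i$ has size $s_i$ and value $s_i-1$; capacity $k-1$):

```python
def value(A, sizes):
    """Total value of item index set A, where item i has value sizes[i]-1."""
    return sum(sizes[i] - 1 for i in A)

def knapsack_children(sizes, capacity):
    """0/1 knapsack keyed by total size: best[t] holds the most valuable
    item set of total size exactly t. Returns an index set maximizing
    total value subject to the capacity."""
    best = {0: []}
    for i, s in enumerate(sizes):
        for used in sorted(best, reverse=True):    # snapshot of sizes so far
            if used + s <= capacity:
                cand = best[used] + [i]
                if value(cand, sizes) > value(best.get(used + s, []), sizes):
                    best[used + s] = cand
    return max(best.values(), key=lambda A: value(A, sizes))
```

Iterating over a descending snapshot of the filled sizes ensures each item is used at most once, the standard 0/1-knapsack invariant.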
\subsection{Few Incomparable Near-Anchors}
\begin{figure}
\centering
\begin{tikzpicture} [xscale=.5, yscale=.3]
\tikzstyle{every node}=[circle, fill, scale=.3];
\node (v1) at (-1,4) {};
\node (v2) at (-2,2) {};
\node (v12) at (0.5,2) {};
\node (v3) at (-3,0) {};
\node (v10) at (-1.5,0) {};
\node (v11) at (0,0) {};
\node (v13) at (1,0) {};
\node (v14) at (2,0) {};
\node (v4) at (-4,-2) {};
\node (v7) at (-2,-2) {};
\node (v5) at (-5,-4) {};
\node (v6) at (-3.5,-4) {};
\node (v8) at (-2.5,-4) {};
\node (v9) at (-1,-4) {};
\node (v15) at (1,-2) {};
\node (v16) at (3,-2) {};
\draw [line width=.5pt] (v1) edge (v2);
\draw [line width=.5pt] (v2) edge (v3);
\draw [line width=.5pt] (v3) edge (v4);
\draw [line width=.5pt] (v4) edge (v5);
\draw [line width=.5pt] (v4) edge (v6);
\draw [line width=.5pt] (v7) edge (v3);
\draw [line width=.5pt] (v8) edge (v7);
\draw [line width=.5pt] (v9) edge (v7);
\draw [line width=.5pt] (v10) edge (v2);
\draw [line width=.5pt] (v11) edge (v12);
\draw [line width=.5pt] (v12) edge (v1);
\draw [line width=.5pt] (v13) edge (v12);
\draw [line width=.5pt] (v12) edge (v14);
\draw [line width=.5pt] (v15) edge (v14);
\draw [line width=.5pt] (v14) edge (v16);
\draw (v3) circle (.3);
\draw (v14) circle (.3);
\draw (v16) circle (.3);
\draw (v5) circle (.3);
\draw (v8) circle (.3);
\draw (v9) circle (.3);
\draw [line width=.5pt] (v5) edge (-5.5,-5);
\draw [line width=.5pt] (v5) edge (-4.5,-5);
\draw [line width=.5pt] (v8) edge (-3,-5);
\draw [line width=.5pt] (v8) edge (-2.25,-5);
\draw [line width=.5pt] (v9) edge (-1.25,-5);
\draw [line width=.5pt] (v9) edge (-0.5,-5);
\end{tikzpicture}
\qquad
\begin{tikzpicture} [xscale=.5, yscale=.3]
\tikzstyle{every node}=[circle, fill, scale=.5];
\node (v1) at (-1,4) {};
\node (v3) at (-3,0) {};
\node (v7) at (-2,-2) {};
\node (v5) at (-5,-4) {};
\node (v8) at (-2.5,-4) {};
\node (v9) at (-1,-4) {};
\node (v16) at (3,-2) {};
\draw [white,line width=.5pt] (v5) edge (-5.5,-5);
\draw [white,line width=.5pt] (v5) edge (-4.5,-5);
\draw [white,line width=.5pt] (v8) edge (-3,-5);
\draw [white,line width=.5pt] (v8) edge (-2.25,-5);
\draw [white,line width=.5pt] (v9) edge (-1.25,-5);
\draw [white,line width=.5pt] (v9) edge (-0.5,-5);
\draw [line width=1pt] (v1) edge (v3);
\draw [line width=1pt] (v3) edge (v5);
\draw [line width=1pt] (v3) edge (v7);
\draw [line width=1pt] (v7) edge (v8);
\draw [line width=1pt] (v9) edge (v7);
\draw [line width=1pt] (v1) edge (v16);
\end{tikzpicture}
\qquad
\begin{tikzpicture} [xscale=.5, yscale=.3]
\tikzstyle{every node}=[circle, fill, scale=.3];
\node[fill=brown] (v1) at (-1,4) {};
\node[fill=blue] (v2) at (-2,2) {};
\node[fill=green] (v12) at (0.5,2) {};
\node[fill=blue] (v3) at (-3,0) {};
\node[fill=] (v10) at (-1.5,0) {};
\node[fill=] (v11) at (0,0) {};
\node[fill=] (v13) at (1,0) {};
\node[fill=green] (v14) at (2,0) {};
\node[fill=red] (v4) at (-4,-2) {};
\node[fill=purple] (v7) at (-2,-2) {};
\node[fill=red] (v5) at (-5,-4) {};
\node[fill=] (v6) at (-3.5,-4) {};
\node[fill=orange] (v8) at (-2.5,-4) {};
\node[fill=yellow] (v9) at (-1,-4) {};
\node[fill=] (v15) at (1,-2) {};
\node[fill=green] (v16) at (3,-2) {};
\draw [blue,line width=1pt] (v1) edge (v2);
\draw [blue,line width=1pt] (v2) edge (v3);
\draw [red,line width=1pt] (v3) edge (v4);
\draw [red,line width=1pt] (v4) edge (v5);
\draw [line width=1pt] (v4) edge (v6);
\draw [purple,line width=1pt] (v7) edge (v3);
\draw [orange,line width=1pt] (v8) edge (v7);
\draw [yellow,line width=1pt] (v9) edge (v7);
\draw [line width=1pt] (v10) edge (v2);
\draw [line width=1pt] (v11) edge (v12);
\draw [green,line width=1pt] (v12) edge (v1);
\draw [line width=1pt] (v13) edge (v12);
\draw [green,line width=1pt] (v12) edge (v14);
\draw [line width=1pt] (v15) edge (v14);
\draw [green,line width=1pt] (v14) edge (v16);
\draw [white,line width=.5pt] (v5) edge (-5.5,-5);
\draw [white,line width=.5pt] (v5) edge (-4.5,-5);
\draw [white,line width=.5pt] (v8) edge (-3,-5);
\draw [white,line width=.5pt] (v8) edge (-2.25,-5);
\draw [white,line width=.5pt] (v9) edge (-1.25,-5);
\draw [white,line width=.5pt] (v9) edge (-0.5,-5);
\end{tikzpicture}
\caption{\label{figure:branches}
Establishing the set of branches $\mathcal B$. The circled nodes on the left are the near-anchors. The middle graph is the tree $\mathcal T'$. On the right, each non-black color is an individual branch; strictly speaking, a branch consists only of nodes, but we connect them for visibility. Also, note that the root is its own branch. The red, orange, yellow, and green branches form an incomparable set.}
\end{figure}
If the condition in Lemma~\ref{lem:incomparable} does not hold, then
there exist $\le k-2$ paths from the root in $\mathcal T$ such that every node
in the near-anchor set $\ensuremath{\mathrm{anc}}hors$ lies on one of these paths. If we view
the union of these paths as a tree $\mathcal T'$ with $\le k-2$ leaves, then we
can partition the nodes in tree $\mathcal T'$ into a collection $\mathcal B$ of at
most $2k-3$ \textit{branches}. Each branch $B$ is a collection of
vertices obtained by taking either a leaf of $\mathcal T'$ or a vertex of
degree more than two, and all its immediate degree-2 ancestors; see
Figure~\ref{figure:branches}. Note that it is possible that the root
node is its own branch. Hence, given two branches $B_1, B_2 \in \mathcal B$,
either every node from $B_1$ is an ancestor of every node from $B_2$ (or
vice versa), or else every node from $B_1$ is incomparable with every
node from $B_2$.
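Partitioning the nodes of $\mathcal T'$ into branches is a simple bottom-up pass. A sketch, with our own tree representation (a child-to-parent dictionary plus the root): the branch ``heads'' are the leaves and branching nodes, each absorbing its chain of one-child ancestors:

```python
def branches(parent, root):
    """Partition the nodes of a rooted tree into branches: each leaf or
    node with >= 2 children, plus its maximal chain of one-child strict
    ancestors; the root always forms its own branch."""
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)
    heads = [v for v in list(parent) + [root]
             if v == root or len(children.get(v, [])) != 1]
    out = []
    for v in heads:
        if v == root:
            out.append([v])
            continue
        branch = [v]
        u = parent[v]
        while u != root and len(children[u]) == 1:  # absorb one-child chain
            branch.append(u)
            u = parent[u]
        out.append(branch)
    return out
```

Every one-child node lies above exactly one head, so the branches form a partition; a tree with $\le k-2$ leaves yields $\le 2k-3$ of them, matching the bound in the text.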
Let $A' \subseteq \mathcal{A}$ be the set of anchors with at least one child in
$\mathsf{Opt}'(r)=\{a_1^*,\ldots,a_s^*\}$; recall that $\mathsf{Opt}'(r)$ was produced
by the shifting procedure in Lemma~\ref{lem:anchor}. Let $A^* \subseteq A'$
be
the \textit{minimal} anchors in $A'$, i.e., those anchors in $A'$ that are
not ancestors of any other anchor in $A'$. We know that every anchor in
$A^*$ falls inside our set of branches, although the algorithm does not
know where. Moreover, by condition~(1) of Lemma~\ref{lem:anchor}, the
parent of every $a_i^* \in \mathsf{Opt}'(r)$ either lies in $A^*$, or is an
ancestor of an anchor in $A^*$.
As a warm-up, consider the case where all the anchors in $A'$ are
contained within a single branch.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.25]
\node [circle, fill=red, scale=.3] (v3) at (0,5) {};
\node [circle, fill=red, scale=.3] (v2) at (2.5,7.5) {};
\node [circle, fill=red, scale=.3] (v1) at (5,10) {};
\node [circle, fill=red, scale=.3] (v4) at (-2.5,2.5) {};
\node [circle, fill=red, scale=.3] (v5) at (-5,0) {};
\node [circle, fill=red, scale=.3, label=left:$a^*$] (v6) at (-7.5,-2.5) {};
\draw (v1) edge (v2);
\draw (v2) edge (v3);
\draw (v3) edge (v4);
\draw (v4) edge (v5);
\draw (v5) edge (v6);
\node [circle, fill=blue, scale=.3] (v7) at (5,8) {};
\node [circle, fill=blue, scale=.3] (v8) at (6.5,8) {};
\node [circle, fill=blue, scale=.3] (v9) at (2.5,5.5) {};
\node [circle, fill=blue, scale=.3] (v10) at (4,5.5) {};
\node [circle, fill=blue, scale=.3] (v11) at (0,3) {};
\node [circle, fill=blue, scale=.3] (v12) at (1.5,3) {};
\node [circle, fill=blue, scale=.3] (v13) at (-2.5,0.5) {};
\node [circle, fill=blue, scale=.3] (v14) at (-1,0.5) {};
\node [circle, fill=blue, scale=.3] (v15) at (-5,-2) {};
\node [circle, fill=blue, scale=.3] (v16) at (-3.5,-2) {};
\node [circle, fill=blue, scale=.3] (v17) at (-9,-4.5) {};
\node [circle, fill=blue, scale=.3] (v18) at (-7.5,-4.5) {};
\node [circle, fill=blue, scale=.3] (v19) at (-6,-4.5) {};
\draw (v1) -- (v7) -- (4.5,6.5) -- (5.5,6.5) -- (v7) (v1) -- (v8) -- (6,6.5) -- (7,6.5) -- (v8) (v2) -- (v9) -- (2,4) -- (3,4) -- (v9) (v2) -- (v2) -- (v10) -- (3.5,4) -- (4.5,4) -- (v10) (v3) -- (v11) -- (-0.5,1.5) -- (0.5,1.5) -- (v11) (v3) -- (v12) -- (1,1.5) -- (2,1.5) -- (v12) (v4) -- (v13) -- (-3,-1) -- (-2,-1) -- (v13) (v4) -- (v14) -- (-1.5,-1) -- (-0.5,-1) -- (v14) (v5) -- (v15) -- (-5.5,-3.5) -- (-4.5,-3.5) -- (v15) (v5) -- (v16) -- (-4,-3.5) -- (-3,-3.5) -- (v16) (v6) -- (v17) -- (-9.5,-6) -- (-8.5,-6) -- (v17) (v6) -- (v18) -- (-8,-6) -- (-7,-6) -- (v18) (v6) -- (v19) -- (-6.5,-6) -- (-5.5,-6) -- (v19);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.25]
\node [circle, fill=black, scale=.3] (v3) at (0,5) {};
\node [circle, fill=black, scale=.3] (v2) at (2.5,7.5) {};
\node [circle, fill=black, scale=.3] (v1) at (5,10) {};
\node [circle, fill=red, scale=.3] (v4) at (-2.5,2.5) {};
\node [circle, fill=red, scale=.3] (v5) at (-5,0) {};
\node [circle, fill=red, scale=.3] (v6) at (-7.5,-2.5) {};
\draw (v1) edge (v2);
\draw (v2) edge (v3);
\draw (v3) edge (v4);
\draw[red, line width=2pt] (v4) edge (v5);
\draw[red, line width=2pt] (v5) edge (v6);
\node [circle, fill=black, scale=.3] (v7) at (5,8) {};
\node [circle, fill=black, scale=.3] (v8) at (6.5,8) {};
\node [circle, fill=black, scale=.3] (v9) at (2.5,5.5) {};
\node [circle, fill=black, scale=.3] (v10) at (4,5.5) {};
\node [circle, fill=black, scale=.3] (v13) at (-2.5,0.5) {};
\node [circle, fill=black, scale=.3](v14) at (-1,0.5) {};
\node [circle, fill=black, scale=.3](v15) at (-5,-2) {};
\node [circle, fill=black, scale=.3] (v16) at (-3.5,-2) {};
\node [circle, fill=black, scale=.3] (v17) at (-9,-4.5) {};
\node [circle, fill=black, scale=.3](v18) at (-7.5,-4.5) {};
\node [circle, fill=black, scale=.3](v19) at (-6,-4.5) {};
\draw (v1) -- (v7) -- (4.5,6.5) -- (5.5,6.5) -- (v7);
\draw (v1) -- (v8) -- (6,6.5) -- (7,6.5) -- (v8);
\draw (v2) -- (v9) -- (2,4) -- (3,4) -- (v9);
\draw (v2) -- (v10) -- (3.5,4) -- (4.5,4) -- (v10);
\draw (v4) -- (v13) -- (-3,-1) -- (-2,-1) -- (v13) (v4) -- (v14) -- (-1.5,-1) -- (-0.5,-1) -- (v14) (v5) -- (v15) -- (-5.5,-3.5) -- (-4.5,-3.5) -- (v15) (v5) -- (v16) -- (-4,-3.5) -- (-3,-3.5) -- (v16) (v6) -- (v17) -- (-9.5,-6) -- (-8.5,-6) -- (v17) (v6) -- (v18) -- (-8,-6) -- (-7,-6) -- (v18) (v6) -- (v19) -- (-6.5,-6) -- (-5.5,-6) -- (v19);
\node [circle, fill=red, scale=.3] (v11) at (2,2.5) {};
\node [circle, fill=red, scale=.3] (v21) at (4,0) {};
\node [circle, fill=red, scale=.3] (v28) at (6,-2.5) {};
\node [circle, fill=black, scale=.3](v12) at (0.5,0.5) {};
\node [circle, fill=black, scale=.3](v20) at (2,0.5) {};
\node [circle, fill=black, scale=.3](v22) at (2.5,-2) {};
\node[circle, fill=black, scale=.3] (v23) at (4,-2) {};
\node[circle, fill=black, scale=.3] (v26) at (6,-4.5) {};
\node [circle, fill=black, scale=.3](v24) at (4.5,-4.5) {};
\node [circle, fill=black, scale=.3](v27) at (7.5,-4.5) {};
\draw [red, line width=2pt] (v11) edge (v21);
\draw [red, line width=2pt] (v21) edge (v28);
\draw (v3) edge (v11);
\draw (v11) -- (v12) -- (0,-1) -- (1,-1) -- (0.5,0.5);
\draw (v11) -- (v20) -- (1.5,-1) -- (2.5,-1) -- (v20);
\draw (v21) -- (v22) -- (2,-3.5) -- (3,-3.5) -- (v22) (v21) -- (v23) -- (3.5,-3.5) -- (4.5,-3.5) -- (v23);
\draw (v28) -- (v24) -- (4,-6) -- (5,-6) -- (v24) (v28) -- (v26) -- (5.5,-6) -- (6.5,-6) -- (6,-4.5) (6,-2.5) -- (v27) -- (7,-6) -- (8,-6) -- (v27);
\draw [green, line width=1pt] plot[smooth, tension=.7] coordinates {(-9,-5.5) (-8.5,-5) (-7.5,-5.5)};
\draw [green, line width=1pt] plot[smooth, tension=.7] coordinates {(-6,-5.5) (-4.5,-5.5) (-3.5,-3)};
\draw [green, line width=1pt] plot[smooth, tension=.7] coordinates {(-5,-3) (-4,-0.5) (-1,-0.5)};
\draw [green, line width=1pt] plot[smooth, tension=.7] coordinates {(0.5,-0.5) (1.5,-4.5) (4.5,-5.5)};
\draw [green, line width=1pt] plot[smooth, tension=.7] coordinates {(2.5,-3) (3.5,-2.5) (4,-3)};
\draw [blue, line width=1pt] plot[smooth, tension=1] coordinates {(-2.5,-0.5) (-0.5,-2) (2,-0.5)};
\draw [blue, line width=1pt] plot[smooth, tension=.7] coordinates {(-6,-5) (-1.5,-5) (2.5,-6.5) (6,-5.5)};
\draw [blue, line width=1pt] plot[smooth, tension=.7] coordinates {(2.5,4.5) (3.5,4) (4,4.5)};
\draw [blue, line width=1pt] plot[smooth, tension=.7] coordinates {(6.5,7) (6,2.5) (7.5,-2) (7.5,-5.5)};
\draw [blue, line width=1pt] plot[smooth, tension=.7] coordinates {(4,5) (5,5.5) (5,7)};
\end{tikzpicture}
\caption{\label{figure:single-branch}Left (Claim~\ref{claim:singleBranch}): The red nodes form our branch $B$, and the blue nodes form the set $\ensuremath{\mathrm{children}}((\{a^*\}
\cup \ensuremath{\mathrm{anc}}(a^*)) \cap B)$. The triangles are the subtrees participating in the $\textsc{Partial VC}\xspace$ instance.
Right (Lemma~\ref{lem:chains}): The red nodes form our two incomparable branches. The green edges are internal edges, while the blue edges are external.}
\end{figure}
\begin{claim}[Warm-up]
\label{claim:singleBranch}
Assume there exists a $(1+\delta)$-approximation algorithm for $\PartialVC k$ running
in time $f(k) \cdot g(n)$. Suppose the set of anchors $A'$ with at least one child in
$\mathsf{Opt}'(r)$ is contained within a single branch $B$. Then there is an
algorithm computing a solution with $\mathsf{Saved}$ value at least $\ell'(r)
- \delta(1+\varepsilon_1)(k-1){\mu}$, running in time $O(n \cdot (n^2 + f(k) \cdot g(n)))$.
\end{claim}
\begin{proof}
If all of $A'$ lies on $B$, the minimal anchor $a^* \in A^*$ must also
be in $B$. Moreover, for every $a_i^*\in\mathsf{Opt}'(r)$, its parent is
either $a^*$ or an ancestor of $a^*$, which means that $\mathsf{Opt}'(r) \subseteq
\ensuremath{\mathrm{children}}((\{a^*\} \cup \ensuremath{\mathrm{anc}}(a^*)) \cap B)$. Since the nodes in $\ensuremath{\mathrm{children}}((\{a^*\}
\cup \ensuremath{\mathrm{anc}}(a^*)) \cap B)$ are incomparable (see Figure~\ref{figure:single-branch}), we can construct the same
graph as the one in Lemma~\ref{lem:compute-anchors} on all these nodes
in $\ensuremath{\mathrm{children}}((\{a^*\} \cup \ensuremath{\mathrm{anc}}(a^*)) \cap B)$ and run the \textsc{Partial VC}\xspace-based
algorithm to get the same $\mathsf{Saved}$ guarantees (see
Algorithm~\ref{alg:subtreePVC}).
Therefore, the algorithm guesses the location of $a^*$ inside $B$ by
trying all possible $|B|=O(n)$ nodes, and for each choice of $a^*$,
runs the $(1+\delta)$-approximate \textsc{Partial VC}\xspace-based algorithm from
Lemma~\ref{lem:compute-anchors} on the corresponding graph (see
Algorithm~\ref{alg:singleBranch}).
\end{proof}
Now for the general case. Consider $\mathsf{Opt}'(r)$ and the set of all
branches $\mathcal B$. Let $\mathcal B^* \subseteq \mathcal B$ be the incomparable branches that
contain the minimal anchors, i.e., those in $A^*$. We classify the
$\ell'(r)$ saved edges in $\mathsf{Opt}'(r)$ into two groups (see Figure~\ref{figure:single-branch}): if an edge is
saved between the subtrees below $a_i^*,a_j^* \in \mathsf{Opt}'(r)$ whose
parent(s) belong to the same branch in $\mathcal B^*$, then call this
\textit{an internal edge}. Otherwise, it is an \textit{external edge}:
these are saved edges in $\mathsf{Opt}'(r)$ that either go between two subtrees
in different branches, or between subtrees in the same branch in $\mathcal B
\setminus \mathcal B^*$. One of the two sets has $\ge \frac12\ell'(r)$ saved
edges, and we provide two separate algorithms, one to approximate each
group.
\begin{lemma}\label{lem:chains}
Assume there exists a $(1+\delta)$-approximation algorithm for $\PartialVC k$ running
in time $f(k) \cdot g(n)$. Suppose that all anchors of $\mathsf{Opt}'(r)$ are contained in a set $\mathcal B$ of
$\le 2k-3$ branches. Then there is an algorithm that computes a
solution with $\mathsf{Saved}$ value $\ge \frac12\ell'(r) -
\delta(1+\varepsilon_1)(k-1){\mu}$, running in time $ 2^{O(k)} \cdot (n^2+f(k)\cdot g(n)) $.
\end{lemma}
\begin{proof}
\emph{Case I: internal edges $\ge\frac12\ell'$.} For each branch
$B\in\mathcal B$ and each $s\in [k-1]$, compute a solution of $s$ nodes that
maximizes the number of internal edges \textit{within branch
$\mathcal B$}, in the same manner as in
Claim~\ref{claim:singleBranch}; this takes time $O(k^2n \cdot (n^2 + f(k) \cdot g(n)))$. Finally, guess all possible $\le
2^{2k-3}$ subsets of incomparable branches; for each subset
$\mathcal B'\subseteq\mathcal B$, try all vectors $\mathbf i \in [k-1]^{\mathcal B'}$ with
$\sum_{B\in\mathcal B'} i_B \le k-1$, look up the solution using $i_B$
vertices in branch $B$, and sum up the total number of internal
edges. Actually, trying all vectors $\mathbf i \in [k-1]^{\mathcal B'}$ takes $k^{O(k)}$ time, but we can speed up this step to $\mathrm{poly}(k)$ time using dynamic programming. Since one
of the guesses $\mathcal B'$ will be $\mathcal B^*$, the best solution will save at
least $\frac12\ell'(r)-\delta(1+\varepsilon_1)(k-1){\mu}$ edges. The total running time for this case is $O(k^2 \cdot f(k) \cdot g(n) + 2^{2k}\cdot\mathrm{poly}(k))$.
\emph{Case II: external edges $\ge\frac12\ell'$.} Again, we guess the
set $\mathcal B^*\subset\mathcal B$ of incomparable branches containing minimal
anchors $A^*$. For a branch $B \in \mathcal B^*$, let $a_B:=(a \in B : B
\setminus a \subseteq \ensuremath{\mathrm{desc}}(a))$ be the ``highest'' node in $B$, that is an
ancestor of every other node in $B$. For each branch, we can replace
all nodes in $\mathsf{Opt}'(r)$ that are descendants of $a_B$ with just $a_B$;
doing so can only increase the number of external edges. The new solution
has all nodes contained in the set
\[ \ensuremath{\mathrm{children}}\bigg(\ensuremath{\mathrm{anc}}\bigg(\bigcup_{B\in\mathcal B^*}\{a_B\}\bigg)\bigg), \]
which is a set of incomparable nodes. Therefore, we can construct the
graph of Lemma~\ref{lem:compute-anchors} and use the \textsc{Partial VC}\xspace-based
algorithm with this node set instead. This gives a solution with $\ge
\frac12\ell'(r)-\delta(1+\varepsilon_1)(k-1){\mu}$ saved edges. The total running time for this case is $O(2^{2k}\cdot (n^2 + f(k) \cdot g(n)))$.
\end{proof}
\subsection{Combining Things Together}
Putting things together, we conclude with Theorem~\ref{thm:laminar}. We
refer the reader to Algorithm~\ref{alg:laminar} for the pseudocode of
the entire algorithm.
\begin{proof}[Proof (Theorem~\ref{thm:laminar}).]
Let the original graph be $G=(V,E,w).$ We compute a $(1+\varepsilon_1)$-mincut
tree $\mathcal T=(V_{\mathcal T},E_{\mathcal T},w_{\mathcal T})$ with mapping $\phi:V\to V_{\mathcal T}$ in time $O(n^3)$, following Theorem~\ref{thm:mincutTreeExistence}.
Then, by running the two algorithms in Lemma~\ref{lem:incomparable} and
Lemma~\ref{lem:chains}, we compute a solution with $s \le k-1$
vertices with $\mathsf{Saved}$ value at least
\begin{align*}
& \max\left\{
\frac14\varepsilon_3(k-1){\mu} - \delta(1+\varepsilon_1)(k-1){\mu},\
\frac12\ell'(r) - \delta(1+\varepsilon_1)(k-1){\mu} \right\} \\
= & \max\left\{ \frac14\varepsilon_3(k-1){\mu}, \ \frac12\ell'(r) \right\} - \delta(1+\varepsilon_1)(k-1){\mu}
\end{align*}
for each root $r \in V_{\mathcal T}$ (see
Algorithm~\ref{alg:laminarRooted}). Using $\max\{p, q\} \geq
(4p+2q)/6$ and
$\ell'(r) \ge \ell^*(r) - \varepsilon_3(k-1){\mu}$
we get a solution with $\mathsf{Saved}$ value at least
\begin{align*}
& \frac16 \left( 4 \cdot \frac14\varepsilon_3(k-1){\mu} + 2 \cdot \frac12 \left[ \ell^*(r) - \varepsilon_3(k-1){\mu} \right] \right) - \delta(1+\varepsilon_1)(k-1){\mu} \\
\ge & \frac16\ell^*(r) - 2\delta(k-1){\mu},
\end{align*}
using that $\varepsilon_1 \leq 1$. In particular, the best solution
$v_1,\ldots,v_s \in V_{\mathcal T}$ over all $r$ satisfies
\[ \mathsf{Saved}(v_1,\ldots,v_s) \ge \frac16 \ell^* - 2\delta(k-1){\mu} ,\]
where $\ell^*(r)$ was replaced by $\ell^* := \max_{r \in V_{\mathcal T}} \ell^*(r)$.
Let $v_1,\ldots,v_s \in V_{\mathcal T}$ be our solution with
$\mathsf{Saved}(v_1,\ldots,v_s) \ge \frac16 \ell^* - 2\delta(k-1){\mu}$. Let
$S_1,\ldots,S_s \subset V$ be the corresponding subsets in $V$, i.e.,
$S_i := \phi^{-1}(\ensuremath{\mathrm{subtree}}(v_i))$. Then, add the complement set
$S_{s+1}:=V \setminus \bigcup_{i\in[s]}S_i$ to the solution, so that
the sets $S_i$ partition $V$, and
\[w(E(S_1,\ldots,S_{s+1})) \le s(1+\varepsilon_1){\mu} - \left(\frac16 \ell^* -
2\delta(k-1){\mu}\right). \] Then, extend the solution to a
$k$-partition using Algorithm~\ref{alg:complete}. We now claim that every additional
cut that Algorithm~\ref{alg:complete} makes is a $(1+\varepsilon_1)$-mincut.
To see this, observe that $S_1^*, \ldots, S_{k-1}^*$ are all $(1+\varepsilon_1)$-mincuts and one
of them, say $S_j^*$, has to intersect some $S_i$. Then, the cut $(S_i \cap S_j^*, S_i \setminus S_j^*)$
is a $(1+\varepsilon_1)$-mincut in $S_i$. We can repeat this argument as long as we have $< k$ components $S_i$.
At the end, we have a
solution $S_1', \ldots, S_k'$ satisfying
\begin{align*}
w(E(S_1', \ldots, S_k')) & \le w(E(S_1,\ldots,S_s)) +
(k-1-s)(1+\varepsilon_1){\mu}
\\
& \le (k-1)(1+\varepsilon_1){\mu}- \left(\frac16 \ell^* -
2\delta(k-1){\mu}\right).
\end{align*}
Let $S^*_1,\ldots,S^*_k$ be the optimal partition in $\mathcal O_{\varepsilon_1}$
satisfying $\phi(r) \in S^*_k$, and let $\ell^*$ be the maximum of
$\mathsf{Saved}(v_1^*,\ldots,v_{k-1}^*)$ over incomparable
$v_1^*,\ldots,v_{k-1}^*$. Our solution has approximation ratio
\begin{align*}
\frac{w(E(S_1,\ldots,S_k))}{w(E(S_1^*,\ldots,S_k^*))} & \le
\frac{(k-1)(1+\varepsilon_1){\mu} - \frac16\ell^* +
2\delta(k-1){\mu}}{(k-1){\mu} - \ell^*} \\
& = \frac{(k-1)(1+\varepsilon_1){\mu} - \frac16\ell^* }{(k-1){\mu} -
\ell^*} + \frac{ 2\delta(k-1){\mu}}{(k-1){\mu} - \ell^*} \\
& \le 2(1+\varepsilon_1) - \frac16 + 4\delta,
\end{align*}
with the worst case achieved at $\ell^*=\frac12(k-1){\mu}$, which is
the highest $\ell^*$ can be. Setting $\varepsilon_2:=1/6 - 2\varepsilon_1-4\delta$
concludes the proof.
As for running time, we run the algorithms in Lemma~\ref{lem:incomparable} and Lemma~\ref{lem:chains} sequentially, and the final running time is $2^{O(k)}f(k)(\tilde O(n^4) + g(n))$. (The $\tilde O(n^4)$ comes from the case when $k\le 4$, in which we solve the problem exactly in $\tilde O(n^4)$ time.)
\end{proof}
\newcommand{\mathsf{Wdeg}}{\mathsf{Wdeg}}
\section{An FPT-AS for \textsc{Partial Vertex Cover}}
\label{sec:partial-vc}
Recall the \textsc{Partial Vertex Cover} (\textsc{Partial VC}\xspace) problem: the input is a graph $G = (V,E)$
with edge and vertex weights, and an integer $k$. For a set $S$, define
$E_S$ to be the set of edges with at least one endpoint in $S$. The goal
of the problem is to find a set $S$ with size $|S| = k$, minimizing the
weight $w(E_S) + w(S)$, i.e., the weight of all edges hitting $S$ plus
the weight of all vertices in $S$. Our main theorem is the following.
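On small instances the objective $w(E_S) + w(S)$ can be checked by direct enumeration. The following Python sketch (our own helper, not part of the paper) serves as a brute-force reference implementation:

```python
from itertools import combinations

def partial_vc_brute_force(vertices, edge_weights, vertex_weights, k):
    """Minimize w(E_S) + w(S) over all S with |S| = k, by enumeration.

    edge_weights: dict mapping frozenset({u, v}) -> weight
    vertex_weights: dict mapping v -> weight
    """
    best_set, best_cost = None, float("inf")
    for S in combinations(vertices, k):
        S = set(S)
        # w(E_S): total weight of edges with at least one endpoint in S
        hit = sum(w for e, w in edge_weights.items() if e & S)
        cost = hit + sum(vertex_weights[v] for v in S)
        if cost < best_cost:
            best_set, best_cost = S, cost
    return best_set, best_cost
```

Such a reference is useful for testing the FPT-AS of this section on toy inputs, even though its $\binom{n}{k}$ running time is of course not fixed-parameter tractable.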
\begin{theorem}[\textsc{Partial Vertex Cover}]
\label{thm:pvc}
There is a randomized algorithm for \textsc{Partial VC}\xspace on weighted graphs that, for
any $\delta \in (0,1)$, runs in $O(2^{k^6/\delta^3} (m +
k^8/\delta^3)\,n \log n)$ time and outputs a
$(1+\delta)$-approximation to \textsc{Partial VC}\xspace with probability $1 - 1/\mathrm{poly}(n)$.
\end{theorem}
We first extend a result of Marx~\cite{Marx07} to give a
$(1+\delta)$-approximation algorithm for the case where $G$ has edge
weights being integers in $\{1, \ldots, M\}$ and no vertex weights, and
then show how to reduce the general case to this special case, losing
only another $(1+\delta)$-factor.
\subsection{Graphs with Bounded Weights}
\begin{lemma}
\label{lem:pvc-simple}
Let $\delta \leq 1$. There is a randomized algorithm for the \textsc{Partial VC}\xspace
problem on simple graphs with edge weights in $\{1, \ldots, M\}$ (and
no vertex weights) that runs in $O(m+Mk^4/\delta)$ time, and outputs
a $(1+\delta)$-approximation with probability at least
$2^{-(Mk^2/\delta)}$.
\end{lemma}
\begin{proof}
This is a simple extension of a result for the
maximization case given by Marx~\cite{Marx07}. We give two algorithms: one for the case when the
optimal value is smaller than $\tau := Mk^2/\delta$ (which returns the
correct solution within the stated time bound, but only with probability $2^{-(Mk^2/\delta)}$),
and another for the case of the optimal value being at least $\tau$
(which deterministically returns a $(1+\delta)$-approximation in
linear time). We run both and return the better of the two solutions.
First, the case when the optimal value is at least $\tau$. Let the
\emph{weighted degree} of a node $v$, denoted $w(\partial v)$, be
defined as $\sum_{e: v \in e} w(e)$. Observe that for
any set $S$ with $|S| \leq k$,
\[ 0 \leq \sum_{v \in S} w(\partial v) - w(E_S) \leq M\cdot
\binom{k}{2}. \] Hence, if $S^*$ is the optimal solution and
$w(E_{S^*}) \geq \tau$, then picking the set of $k$ vertices with the
least weighted degrees is a $(1+\delta)$-approximation.
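The slack in the displayed bound is exactly the weight of the edges with both endpoints inside $S$ (each is counted twice in the degree sum but once in $w(E_S)$). As an illustrative sanity check, not part of the formal argument, one can verify the inequality on a toy graph:

```python
from itertools import combinations
from math import comb

# toy weighted graph on 5 vertices, edge weights in {1, ..., M}
M = 3
edges = {frozenset({0, 1}): 3, frozenset({0, 2}): 1, frozenset({1, 2}): 2,
         frozenset({2, 3}): 3, frozenset({3, 4}): 1, frozenset({1, 4}): 2}

def wdeg(v):
    """Weighted degree w(del v) = total weight of edges containing v."""
    return sum(w for e, w in edges.items() if v in e)

k = 3
for S in combinations(range(5), k):
    S = set(S)
    w_ES = sum(w for e, w in edges.items() if e & S)
    slack = sum(wdeg(v) for v in S) - w_ES
    # slack = weight of edges internal to S, so 0 <= slack <= M * C(k, 2)
    assert 0 <= slack <= M * comb(k, 2)
```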
Now for the case when the optimal value is at most $\tau$. In this case,
the optimal set $S^*$ can have at most $\tau$ edges incident to it, since
each edge must have weight at least $1$. Consider the color-coding
scheme where we independently and uniformly color the vertices of $G$
with two colors (red and blue). With probability $2^{-(\tau+k)}$, all the
vertices in $S^*$ are colored red, and all the vertices in $N(S^*)
\setminus S^*$ are colored blue. Consider the ``red components'' in
the graph obtained by deleting the blue vertices. Then $S^*$ is the
union of one or more of these red components. To find it, define the
``size'' of a red component $C$ as the number of vertices in it, and
the ``cost'' as the total weight of edges in $G$ that are incident to
it (i.e., cost $= \sum_{e \in E: e \cap C \neq \emptyset} w(e)$.)
Now we can use dynamic programming to find a collection of red
components with total size equal to $k$ and minimum total cost: this
gives us $S^*$ (or some other solution of equal cost). Indeed, define the
``type'' of each component to be the tuple $(s,c)$ where $s
\in [1\ldots k]$ is the size (we can drop components of size greater
than $k$) and $c \in [1\ldots \tau]$ is the cost (we can drop all
components of greater cost). Let $T(s,c)$ be the number of copies of
type $(s,c)$, capped at $k$. Assume the types are numbered $\tau_1,
\tau_2, \ldots, \tau_{k\tau}$. Now if $C(i,j)$ is the minimum cost we can
have with components of type $\leq \tau_i = (s,c)$ whose total size is
$j$, then
\[ C(i,j) = \min_{0 \leq \ell \leq T(s,c)} C(i-1, j - \ell s) + \ell
c. \] Finally, we return the collection achieving $C(k\tau,k)$. This can
all be done in $O(m + k^2\tau)$ time.
\end{proof}
Repeating the algorithm $O(2^{\tau + k} \log n) = O(2^{Mk^2/\delta + k} \log n)$ times and outputting
the best set found in these repetitions gives an algorithm that finds a
$(1+\delta)$-approximation with probability $1 - 1/\mathrm{poly}(n)$.
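The color-coding step and the dynamic program over red components can be illustrated in a few lines of Python. For verifiability, this toy sketch (our own code, with unit edge weights and our own helper names) derandomizes the scheme by enumerating all $2^{|V|}$ colorings instead of sampling them, which is feasible only on tiny instances:

```python
from itertools import product

def components(vertices, edges):
    """Connected components of the graph induced on `vertices`."""
    verts, comps = set(vertices), []
    while verts:
        stack, comp = [verts.pop()], set()
        while stack:
            v = stack.pop()
            comp.add(v)
            for e in edges:
                if v in e:
                    u = next(iter(e - {v}))
                    if u in verts:
                        verts.remove(u)
                        stack.append(u)
        comps.append(comp)
    return comps

def pvc_color_coding(n, edge_weights, k):
    """Min w(E_S) over all S with |S| = k, derandomized by trying
    every red/blue coloring (feasible only on toy instances)."""
    best = float("inf")
    edges = list(edge_weights)
    for coloring in product([False, True], repeat=n):  # True = red
        red = {v for v in range(n) if coloring[v]}
        red_edges = [e for e in edges if e <= red]
        comps = components(red, red_edges)
        sizes = [len(c) for c in comps]
        costs = [sum(w for e, w in edge_weights.items() if e & c)
                 for c in comps]
        # knapsack-style DP: total size exactly k, minimum total cost
        dp = [0.0] + [float("inf")] * k
        for s, c in zip(sizes, costs):
            for j in range(k, s - 1, -1):
                dp[j] = min(dp[j], dp[j - s] + c)
        best = min(best, dp[k])
    return best
```

Since the coloring that makes $S^*$ red and $N(S^*) \setminus S^*$ blue is among those enumerated, the toy version returns the exact optimum; the algorithm in the proof instead samples colorings and repeats.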
\subsection{Solving The General Case}
We now reduce the general \textsc{Partial VC}\xspace problem, where we have no bounds on the
edge weights (and we have vertex weights), to the special case from the
previous section.
The idea is simple: given a graph $G = (V,E)$ with edge and vertex
weights, we construct a collection of $|V|$ simple graphs $\{ H_v \}_{v
\in V}$, each defined on the vertex set $V$ plus a couple of new nodes,
and having $O(|V|+|E|)$ edges, with each edge-weight $w'(e)$ being an
integer in $\{1, \ldots, M\}$ and $M = O(k^2/\delta)^2$, and with no vertex
weights. We find a $(1+\delta/2)$-approximate \textsc{Partial VC}\xspace solution on each $H_v$,
and then output the set $S$ which has the smallest weight (in $G$) among
these. We show how to ensure that $S \subseteq V$ and that it is a
$(1+\delta)$-approximation of the optimal solution in $G$.
\begin{proof}[Proof of Theorem~\ref{thm:pvc}]
Let $S^*$ be an optimal solution on $G$. Define the \emph{extended
weighted degree} of a vertex $v$, denoted by $\mathsf{Wdeg}(v)$, to be its
vertex weight plus the weight of all edges adjacent to it. I.e.,
$\mathsf{Wdeg}(v) := w(v) + w(\partial v)$.
Firstly, assume we know a vertex $v^* \in S^*$ with the largest
$\mathsf{Wdeg}(v^*)$; we just enumerate over all vertices to find this vertex.
We now proceed to construct the graph $H_{v^*}$. Let $L =
\mathsf{Wdeg}(v^*)$, and delete all vertices $u$ with $\mathsf{Wdeg}(u) > L$. Note
that (a)~any solution containing $v^*$ has total weight at least $L$,
and (b)~each remaining edge and vertex has weight $\leq L$.
Assume that $G$ is simple, since we can combine parallel edges
together by summing their weights. Create two new vertices $p, q$,
and add an edge of weight $L k^2$ between them; this ensures that
neither of these vertices is ever chosen in any near-optimal
solution.
Let $\delta' > 0$ be a parameter to be fixed later; think of $\delta'
\approx \delta$. For each edge $e = (u,v)$ in the edge set $E$ that
has weight $w(e) < L\delta'/k^2$, remove this edge and add its weight
$w(e)$ to the weight of both its endpoints $u,v$. Finally, when there
are no more edges with $w(e) < L\delta'/k^2$, for each vertex $u$ in
$V$, create a new edge $\{u,p\}$ with weight being equal to the
current vertex weight $w(u)$, and zero out the vertex weight. Let the
new edge set be denoted by $E'$. We claim that for any set $S
\subseteq V$ of size $\le k$,
\[ \left( \sum_{e \in E': e \cap S \neq \emptyset} w(e)\right) - \left( \sum_{e \in
E: e \cap S \neq \emptyset} w(e) + \sum_{v \in S} w(v) \right) \leq \delta' L. \]
Indeed, the only change comes because of edges with weight $w(e) <
L\delta'/k^2$ and with both endpoints within $S$---these edges
contributed once earlier, but replacing them by the two edges means we
now count them twice. Since there are at most $\binom{k}{2}$ such
edges, they can add at most $\delta' L$.
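The bookkeeping in this step is easy to check numerically. The following sketch (toy weights and helper names of our own choosing) applies the transformation and verifies that the change in the objective equals exactly the total weight of the removed light edges with both endpoints in $S$:

```python
def split_light_edges(edge_weights, vertex_weights, threshold):
    """Remove edges lighter than `threshold`, crediting their weight to
    both endpoints; then turn each vertex weight into an edge to `p`."""
    ew = dict(edge_weights)
    vw = dict(vertex_weights)
    for e, w in edge_weights.items():
        if w < threshold:
            del ew[e]
            for v in e:
                vw[v] += w
    # edges to the new vertex p, one per original vertex
    p_edges = {frozenset({v, "p"}): w for v, w in vw.items()}
    return ew, p_edges

edge_weights = {frozenset("ab"): 0.1, frozenset("bc"): 5.0,
                frozenset("ac"): 0.2}
vertex_weights = {"a": 1.0, "b": 0.0, "c": 2.0}
ew, p_edges = split_light_edges(edge_weights, vertex_weights, 0.5)

S = {"a", "b"}
old = (sum(w for e, w in edge_weights.items() if e & S)
       + sum(vertex_weights[v] for v in S))
new = (sum(w for e, w in ew.items() if e & S)
       + sum(w for e, w in p_edges.items() if e & S))
# the gap is exactly the light edges inside S (here: edge {a,b}, weight 0.1)
assert abs((new - old) - 0.1) < 1e-9
```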
At this point, all edges in the original edge set $E$ have weights in
$[L \delta'/k^2, Lk^2]$; the only edges potentially having weights $<
L \delta'/k^2$ are those between vertices and the new vertex $p$. For
any such edge with weight $< L \delta'/k$, we delete the edge. This
again changes the optimal solution by at most an additive $L \delta'$,
and ensures that all edges in the new graph have weights in $[L \delta'/k^2,
Lk^2]$. Note that since the optimal solution has value at least $L$
by our guess, these additive changes of $L \delta'$ to the optimal
solution mean a multiplicative change of only $(1+\delta')$.
Finally, discretize the edge weights by rounding each edge weight to
the closest integer multiple of $L \delta'^2/k^2$. Since each edge
weight $\geq L \delta'/k^2$, each edge weight incurs a further
multiplicative error at most $1+\delta'$. Note that $M =
k^4/\delta'^2$. Now use Lemma~\ref{lem:pvc-simple} to get a
$(1+\delta')$-approximation for \textsc{Partial VC}\xspace on this instance with high
probability. Setting $\delta' = O(\delta)$ ensures that this solution
is within a factor $(1+\delta)$ of that in $G$.
\end{proof}
\section{Conclusion and Open Problems}
\label{sec:conclusion}
Putting the sections together, we conclude with a proof of our main theorem.
\begin{proof}[Proof of Theorem~\ref{thm:kcut-main}]
Fix some $\delta \in (0,1/24)$. By Theorem~\ref{thm:pvc}, there is a
$(1+\delta)$-approximation algorithm for $\PartialVC k$ running in
time $O(2^{k^6/\delta^3} (m + k^8/\delta^3)\,n \log n) = 2^{O(k^6)}
n^4$ time. Plugging in $f(k) := 2^{O(k^6)}$ and $g(n) := n^4$ into
Theorem~\ref{thm:laminar}, we get a $(2-\varepsilon_2)$-approximation algorithm
to \textsc{Laminar} $k$\textsc{-Cut} (with parameter $\varepsilon_1$) in time $2^{O(k)}f(k)(n^3 + g(n)) =
2^{O(k^6)} n^4$, for a fixed $\varepsilon_1 \in (0,1/6-4\delta)$. Plugging in
$f(k):=2^{O(k^6)}$ and $g(n):=n^4$ into Theorem~\ref{thm:reduction1}
gives a $(2-\varepsilon_3)$-approximation for $k$\textsc{-Cut}\xspace in time $2^{O(k^2 \log k)} \cdot
f(k) \cdot (n^4 \log^3 n + g(n)) = 2^{O(k^6)} n^4 \log^3 n$.
Finally, for our approximation factor. Theorem~\ref{thm:laminar} sets
$\varepsilon_2:=1/6 - 2\varepsilon_1 - 4\delta$ for any small enough $\delta$. We can
take $\varepsilon_1$ and $\varepsilon_2$ to be equal, so that $\varepsilon_1 = \varepsilon_2 = 1/18 -
\nicefrac{4}{3}\cdot\delta$. Finally, setting
$\varepsilon_4=\varepsilon_5=\min(\varepsilon_1,\varepsilon_2)/3$ and $\varepsilon_3:=\varepsilon_4^2$ in
Theorem~\ref{thm:reduction1} gives $\varepsilon_3 = 1/54^2 - \delta'$ for some
arbitrarily small $\delta'>0$. In other words, our approximation
factor is $2 - 1/54^2 + \delta'$, or $1.9997$ for an appropriately
small $\delta'$.
\end{proof}
Our result
combines ideas from approximation algorithms and FPT algorithms and
shows that considering both settings simultaneously can help bypass
lower bounds in each individual setting, namely the $W[1]$-hardness of
an exact FPT algorithm and the SSE-hardness of a polynomial-time
$(2-\varepsilon)$-approximation. While our improvement is quantitatively modest,
we hope it will prove qualitatively significant. Indeed, we hope these
and other ideas will help resolve whether a $(1+\varepsilon)$-approximation
algorithm exists in FPT time, and to show matching lower and upper
bounds.
\paragraph{Acknowledgments.} We thank Marek Cygan for generously giving
his time to many valuable discussions.
\appendix
\section{Pseudocode for \textsc{Laminar} $k$\textsc{-Cut} with Parameter $\varepsilon_1$}
\label{sec:pseudocode-laminar}
\begin{algorithm}
\caption{SubtreePartialVC$(G,\mathcal T,A,s,\delta)$}
\label{alg:subtreePVC}
\begin{algorithmic}
\If {$|A| < s$}
\State \Return None
\EndIf
\For{$a \in A$}
\State $C_a \gets V(a) \cup \displaystyle\bigcup\limits_{a' \in \ensuremath{\mathrm{desc}}(a)}V(a')$
\EndFor \Comment{\textbf{Assert}: $C_a$ are all disjoint}
\
\State $\mathcal C \gets \{ C_a : a \in A\}$
\State $H \gets \text{Contract}(G, \mathcal C)$ \Comment{For each $C_a \in \mathcal C$, contract all vertices in $C_a$ into a single vertex in $H$}
\
\For{$i \in [k-1]$}
\State $P_{i} \gets \text{PartialVC}(H,i)$ \Comment{$P_{i} \in V(H)^i$}
\State $\mathcal S_i \gets \text{Expand}(H,P_i)$ \Comment { \parbox[t]{.5\linewidth}{ Map each $v \in P_i$ to the set of vertices in $V$ which contract to $v$ in $H$, and call the result $\mathcal S_i \in \left(2^V\right)^i$ } }
\EndFor
\State \Return $\{\mathcal S_{i} : i \in [s]\}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{SingleBranch$(G,\mathcal T,B,k,\delta)$}
\label{alg:singleBranch}
\begin{algorithmic}
\For{$a \in B$}
\State $\text{Record}(\text{SubtreePartialVC}(G, \mathcal T, \ensuremath{\mathrm{children}} \left((\{a\} \cup \ensuremath{\mathrm{anc}}(a)) \cap B \right), k-1, \delta))$
\EndFor
\State Return the best recorded solution $\{v_1, \ldots, v_{k-1}\} \in V_{\mathcal T}$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Laminar$(G=(V,E,w),\mathcal T,k,\varepsilon_1,\delta)$}
\label{alg:laminar}
\begin{algorithmic}
\State $\mathcal T=(V_{\mathcal T},E_{\mathcal T},w_{\mathcal T}) \gets \text{MincutTree}(G)$.
\For{$r \in V_{\mathcal T}$}
\State Root $\mathcal T$ at $r$.
\State $\text{Record}(\text{LaminarRooted}(G,\mathcal T,r,k,\varepsilon_1,\delta))$
\EndFor
\State Return the best recorded $k$-partition.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{LaminarRooted$(G=(V,E,w),\mathcal T,r,k,\varepsilon_1,\delta)$}
\label{alg:laminarRooted}
\begin{algorithmic}
\For{$a \in V(\mathcal T)$}
\State $\{S_{a,i} : i \in [k-1]\} \gets \text{SubtreePartialVC}(G, \mathcal T, \ensuremath{\mathrm{children}}(a), k-1, \delta)$ \Comment{$S_{a,i} \in \left(2^V\right)^i$}
\EndFor
\
\State $A \gets \emptyset$ \Comment{$A \subset V(\mathcal T) \times [k]$ is the set of \textit{anchors}}
\For{$a \in V(\mathcal T)$ in topological order from leaf to root}
\State $\varepsilon_3 \gets \frac{1-\delta}4-2\varepsilon_1$ \Comment {The optimal value of $\varepsilon_3$}
\State $I_a \gets \{i \in [k-1] : \text{Value}(P_{a,i}) \ge \varepsilon_3(1-\delta)(i-1){\mu} \}$
\If {$I_a \ne \emptyset$ \textbf{and} $\nexists (a',i) \in A : a' \in \ensuremath{\mathrm{desc}}(a)$} \Comment{Only take \textit{minimal} anchors}
\State $A \gets A \cup \{(a, \max I_a)\}$
\EndIf
\EndFor
\
\If {$|A| \ge k-1$} \Comment{\textbf{Case (K)}: Knapsack}
\State $A' \gets \text{Knapsack}(A)$ \Comment{The Knapsack algorithm as described in Lemma~\ref{lem:incomparable}}
\State $\mathcal S \gets \displaystyle\bigcup\limits_{(a,i) \in A'} \{S_{a,i}\}$ \Comment{The partition for Case (K), to be computed. \textbf{Assert}: $|\mathcal S| \le k-1$}
\State $\text{Record}(\text{Complete}(G,k,\mathcal S))$
\Else
\State $\mathcal B \gets \text{Branches}(A)$ \Comment{$\mathcal B \subset \left( 2^{V(\mathcal T)} \right)^r$ for some $k-1 \le r \le 2k-3$}
\
\For{$B \in \mathcal B$} \Comment {\textbf{Case (B1)}: Compute branches independently}
\State $\{P_{B,i} : i \in [k-1]\} \gets \text{SingleBranch}(G, \mathcal T, B, k-1, \delta)$ \Comment{$P_{B,i} \in V^i$}
\EndFor
\State $(\mathcal B^*,\mathbf i^*) \gets \operatornamewithlimits{argmin}\limits_{ \substack {\mathcal B' \subset \mathcal B \text{ incomparable},\\ \mathbf i \in [k-1]^{\mathcal B'} : \ \sum_{B} i_B\ =\ k-1} } \ \displaystyle\sum\limits_{B \in \mathcal B'} w(E(P_{B,i_B}))$ \Comment{Computed by brute force}
\State $\mathcal S_1 \gets \bigcup_{B \in \mathcal B^*} \{P_{B, i_B}\}$ \Comment{The partition in Case (B1)}
\State $\text{Record}(\text{Complete}(G,k,\mathcal S_1))$
\
\For{$B \in \mathcal B$} \Comment{\textbf{Case (B2)}: Guess the branches with the anchors}
\State $a_B \gets (a \in B : B \setminus a \subset \ensuremath{\mathrm{desc}}(a))$ \Comment{$a_B$ is the common ancestor of branch $B$}
\EndFor
\For{$\mathcal B' \subset \mathcal B$ s.t.\ $\nexists B_1,B_2\in\mathcal B' : B_1 \subset \ensuremath{\mathrm{desc}}(B_2)$} \Comment{Subsets whose branches are incomparable}
\State $A_{\mathcal B'} \gets \ensuremath{\mathrm{children}}\left(\bigcup_{B \in \mathcal B'} \left( \{a_B\} \cup \ensuremath{\mathrm{anc}}(a_B) \right) \right)$
\State $\mathcal S_{2,\mathcal B'} \gets \text{SubtreePartialVC}(G,\mathcal T,A_{\mathcal B'},k-1,\delta)$ \Comment{The partition for $\mathcal B'$ in Case (B2)}
\State $\text{Record}(\text{Complete}(G,k,\mathcal S_{2,\mathcal B'}))$
\EndFor
\EndIf
\
\State Return the best recorded $k$-partition.
\end{algorithmic}
\end{algorithm}
\section{Missing Proofs}
\label{sec:missing-proofs}
\begin{lemma} \label{lemma:knapsack}
Consider the knapsack instance with capacity $k-1$ and items $i \in [k-1]$, where item $i$ has size $s_i \in [2,k-1]$ and value $s_i-1$. There is an algorithm achieving value $\ge (k-1)/4$ for $k \ge 5$, running in $O(k)$ time.
\end{lemma}
\begin{proof}
Consider the greedy
knapsack solution where we always choose the heaviest item, if still
possible. Let $A \subseteq [k-1]$ be our solution. If our total size
$\sum_{i\in A}s_i$ is at least $k - 1 - \sqrt k$, then our value is
at least $\sum_{i\in A}(s_i-1) \ge \sum_{i\in A}s_i/2 \ge (k-1-\sqrt
k)/2$. Otherwise, since we could not fit the next item, which has size at
least $\sqrt k$, into our solution, all of our items have size at
least $\sqrt k$. Furthermore, our total solution size is at least
$(k-1)/2$, so $\sum_{i \in A}(s_i-1) \ge \sum_{i\in A}(1-1/\sqrt
k)s_i \ge (1-1/\sqrt k)(k-1)/2$. When $k \ge 5$, the
value is $\ge (1-1/\sqrt 5)(k-1)/2 \ge (k-1)/4$.
\end{proof}
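The greedy procedure from the proof can be sketched as follows (a toy illustration of ours, not the paper's code; trying smaller items after the first misfit can only help the bound):

```python
def greedy_knapsack(sizes, capacity):
    """Greedily pick the heaviest item that still fits.
    The item of size s has value s - 1, as in the lemma."""
    total_size = value = 0
    for s in sorted(sizes, reverse=True):
        if total_size + s <= capacity:
            total_size += s
            value += s - 1
    return value
```

For instance, with $k = 9$, eight items of size $3$, and capacity $k-1 = 8$, the greedy packs two items for value $4 \ge (k-1)/4 = 2$.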
\end{document}
\begin{document}
\title{An overview of nonparametric tests of extreme-value dependence and of some related statistical procedures}
\begin{abstract}
An overview of existing nonparametric tests of extreme-value dependence is presented. Given an i.i.d.\ sample of random vectors from a continuous distribution, such tests aim at assessing whether the underlying unknown copula is of the {\em extreme-value} \index{extreme-value copula} type or not. The existing approaches available in the literature are summarized according to how departure from extreme-value dependence is assessed. Related statistical procedures useful when modeling data with this type of dependence are briefly described next. Two illustrations on real data sets are then carried out using some of the statistical procedures under consideration implemented in the \textsf{R} package {\tt copula}. Finally, the related problem of testing the {\em maximum domain of attraction} condition is discussed.
\end{abstract}
\section{Introduction}
By definition, the class of extreme-value copulas consists of all possible limit copulas of affinely normalized, componentwise maxima of a multivariate i.i.d.\ sample, or, more generally, of a multivariate stationary time series. As a consequence, extreme-value copulas can be seen as appropriately capturing the dependence between extreme or rare events. The famous extremal types theorem of multivariate extreme value theory leads to a rather simple characterization of extreme-value copulas:
the class of extreme-value copulas merely coincides with the class of max-stable copulas (see Section~\ref{bk:sec:found} below for a precise definition). Other characterizations are possible, most of which are based on a parametrization by a lower-dimensional function or measure \citep[see e.g.][for an overview]{bk:GudSeg10}. Serial dependence of the underlying time series is explicitly allowed provided certain mixing conditions hold \citep{bk:Hsi89, bk:Hus90}.
The theory underlying extreme-value copulas motivates their use in combination with the famous {\em block maxima}\index{multivariate block maxima} method popularized in the univariate case in the monograph of \cite{bk:Gum58}: from a given time series, calculate (componentwise) monthly or annual or, more generally, block maxima, and consider the class of extreme-value copulas (or parametric subclasses thereof) as an appropriate model for the multivariate sample of block maxima. If the block size is sufficiently large, it is unlikely that the respective maxima within a block occur at the beginning or the end of the block, whence, even under weak serial dependence of the underlying time series, block maxima could be considered as approximately independent. In statistical practice, independence has usually been postulated hitherto. Applications of the block maxima method can also be found in contexts in which the underlying time series is not necessarily stationary, as is the case when seasonalities are present (for instance in some hydrological problems).
The use of extreme-value copulas is not restricted to the framework of multivariate extreme-value theory. These dependence structures can actually be a convenient choice to model any data sets with positive association. Moreover, many parametric submodels are available in the literature \cite[see e.g.][for an overview]{bk:GudSeg10}.
Extreme-value copulas have been successfully applied in empirical finance and insurance \citep[see e.g.][]{bk:LonSol01, bk:CebDenLam03, bk:McNFreEmb05}, and environmental sciences \citep[see e.g.][]{bk:Taw88,bk:SalDeMKotRos07}. They also arise in spatial statistics in connection with max-stable processes in which they determine the underlying spatial dependence \citep[see e.g.][]{bk:DavPadRib12,bk:Rib13,bk:RibSed13}.
From a statistical point of view, it is important to test the hypothesis that the copula of a given sample is an extreme-value copula. When applied within the context of the block maxima method, a rejection of this hypothesis would indicate that the size of the blocks is too small and should be enlarged, or that the (broad) conditions of the extremal types theorem are not satisfied. When applied outside of the extremal types theoretical framework, tests of extreme-value dependence merely indicate whether the class of max-stable copulas is a plausible choice for modeling the cross-sectional dependence in the data at hand. If there is no evidence against this class, additional statistical procedures tailored for extreme-value copulas can be used to carry out the data analysis.
This chapter is organized as follows. A brief overview of the theory underlying extreme-value copulas is given in the second section. The third section provides a summary of the procedures available in the literature for testing whether the copula of a random sample from a continuous distribution can be considered of the extreme-value type or not. Rather detailed analyses of bivariate financial data and bivariate insurance data are presented next. They are accompanied by code for the \textsf{R} statistical system \citep{bk:Rsystem} from the {\tt copula} package \citep{bk:copula}. Finally, in the last section, the related issue of testing the maximum domain of attraction condition is discussed.
\section{Mathematical foundations} \label{bk:sec:found}
Consider a $d$-dimensional random vector $\mathbf{X} = (X_1,\dots,X_d)$, $d\ge 2$, whose marginal cumulative distribution functions (c.d.f.s) $F_1,\dots,F_d$ are assumed to be continuous. Then, by \cite{bk:Skl59}'s representation theorem, the c.d.f.\ $F$ of $\mathbf{X}$ can be written in a unique way as
$$
F(\mathbf{x}) = C \{ F_1(x_1),\dots,F_d(x_d) \}, \qquad \mathbf{x}=(x_1, \dots, x_d) \in \mathbb{R}^d,
$$
where the function $C:[0,1]^d \to [0,1]$ is a copula, i.e., the restriction of a multivariate c.d.f.\ with standard uniform margins to the unit hypercube. The above display is usually interpreted in the way that the copula $C$ completely characterizes the stochastic dependence among the components of $\mathbf X$.
A $d$-dimensional copula $C$ is an {\em extreme-value} copula \index{extreme-value copula} if and only if there exists a copula $C^*$ such that, for any $\mathbf{u} \in [0,1]^d$,
\begin{align}
\label{bk:eq:maxdom}
\lim_{n \to \infty} \{ C^*(u_1^{1/n},\dots,u_d^{1/n}) \}^n = C(\mathbf{u}).
\end{align}
The copula $C^*$ is then said to be in the {\em maximum domain of attraction} of $C$, which shall be denoted as $C^* \in D(C)$ in what follows.
Some algebra reveals that $\{ C^*(u_1^{\scriptscriptstyle 1/n},\dots,u_d^{\scriptscriptstyle 1/n}) \}^n$ is the copula, evaluated at $\mathbf{u} \in [0,1]^d$, of the vector of componentwise maxima computed from an i.i.d.\ sample $\mathbf Y_1, \dots, \mathbf Y_n$ with continuous marginal c.d.f.s and copula $C^*$. The latter fact motivates the terminology \textit{extreme-value copula}.
It is additionally very useful to note that $C$ is an extreme-value copula if and only if it is {\em max-stable}, that is, if and only if, for any $\mathbf{u} \in [0,1]^d$ and $r \in \mathbb N$, $r > 0$,
\begin{equation}
\label{bk:eq:maxstability}
\{ C(u_1^{1/r},\dots,u_d^{1/r}) \}^r = C(\mathbf{u}).
\end{equation}
The sufficiency follows by using, in combination with~\eqref{bk:eq:maxdom}, the fact that, for any $\mathbf{u} \in [0,1]^d$ and $r \in\mathbb N$, $r > 0$,
\[
\Big[ \big\{ C^*\big( (u_1^{1/r})^{1/n}, \dots, (u_d^{1/r})^{1/n} \big) \big\}^{n} \Big]^r
=
\big \{ C^*(u_1^{1/(nr)}, \dots, u_d^{1/(nr)}) \big\}^{nr}.
\]
Letting $n \to \infty$, the left-hand side converges to $\{ C(u_1^{1/r},\dots,u_d^{1/r}) \}^r$ by~\eqref{bk:eq:maxdom}, while the right-hand side converges to $C(\mathbf{u})$.
The necessity is an immediate consequence of the fact that $C \in D(C)$ for any max-stable copula $C$. Interestingly enough, it can be shown that a max-stable copula actually satisfies~\eqref{bk:eq:maxstability} for any real $r>0$ \citep[see e.g.][Lemma 5.4.1]{bk:Gal78}. \index{max-stable copula}
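As a concrete check of max-stability, consider the Gumbel--Hougaard copula with parameter $\theta \ge 1$,
\[
C_\theta(\mathbf u) = \exp \left[ - \left\{ \sum_{j=1}^d (-\log u_j)^\theta \right\}^{1/\theta} \right], \qquad \mathbf{u} \in (0,1]^d.
\]
Since $(-\log u_j^{1/r})^\theta = r^{-\theta} (-\log u_j)^\theta$, one finds
\[
\{ C_\theta(u_1^{1/r},\dots,u_d^{1/r}) \}^r = \exp \left[ - r \, \frac{1}{r} \left\{ \sum_{j=1}^d (-\log u_j)^\theta \right\}^{1/\theta} \right] = C_\theta(\mathbf u),
\]
so that $C_\theta$ is max-stable, hence an extreme-value copula.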
An alternative, more complex characterization, essentially due to \cite{bk:Pic81}, is as follows: a copula $C$ is of the extreme-value type if and only if there exists a function $A$ such that, for any $\mathbf{u} \in (0,1]^d \setminus \{(1,\dots,1)\}$,
\begin{equation}
\label{bk:eq:Pickands_charact}
C(\mathbf{u})
=
\exp \left\{ \left( \sum_{j=1}^d \log u_j \right) A \left(\frac{\log u_2}{\sum_{j=1}^d \log u_j}, \dots, \frac{\log u_{d}}{\sum_{j=1}^d \log u_j} \right) \right\},
\end{equation}
where $A:\Delta_{d-1} \to [1/d,1]$ is the {\em Pickands dependence function} \index{Pickands dependence function} and $\Delta_{d-1} = \{(w_1,\dots,w_{d-1}) \in [0,1]^{d-1} : w_1 + \dots + w_{d-1} \le 1 \}$ is the unit simplex \citep[see e.g.][for more details]{bk:GudSeg12}. If relation~\eqref{bk:eq:Pickands_charact} is met, then $A$ is necessarily convex and satisfies the boundary condition $\max\{1- \sum_{j=1}^{d-1} w_j, w_1, \dots, w_{d-1}\} \le A(\mathbf w) \le 1$ for all $\mathbf w = (w_1,\dots,w_{d-1}) \in \Delta_{d-1}$. The latter two conditions are, however, not sufficient to characterize the class of Pickands dependence functions unless $d=2$ \citep[see e.g.][for a counterexample]{bk:BeiGoeSegTeu04}.
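In the bivariate case, representation~\eqref{bk:eq:Pickands_charact} reduces to
\[
C(u_1,u_2) = \exp \left[ \log(u_1 u_2) \, A \left\{ \frac{\log u_2}{\log (u_1 u_2)} \right\} \right], \qquad (u_1,u_2) \in (0,1]^2 \setminus \{(1,1)\},
\]
with $A:[0,1] \to [1/2,1]$ convex and such that $\max(w,1-w) \le A(w) \le 1$. The two boundary cases correspond to independence ($A \equiv 1$, giving $C(u_1,u_2) = u_1 u_2$) and comonotonicity ($A(w) = \max(w,1-w)$, giving $C(u_1,u_2) = \min(u_1,u_2)$), while, for instance, the Gumbel--Hougaard copula corresponds to $A_\theta(w) = \{ w^\theta + (1-w)^\theta \}^{1/\theta}$, $\theta \ge 1$.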
Several other characterizations of extreme-value copulas are possible, for instance using the {\em spectral measure of $C$} \citep[see e.g.][for details]{bk:GudSeg12} or the {\em stable tail dependence function} \citep{bk:res13, bk:ChaFouGenNes14}.
\section{Existing tests of extreme-value dependence}
\label{bk:sec:tests}
Let $\mathcal{EV}$ denote the class of extreme-value copulas. Given a random sample $\mathbf{X}_1,\dots,\mathbf{X}_n$ from a c.d.f.\ $C\{F_1(x_1),\dots,F_d(x_d)\}$ with $F_1,\dots,F_d$ continuous and $C,F_1,\dots,F_d$ unknown, tests of extreme-value dependence aim at testing
\begin{equation}
\label{bk:eq:H0}
H_0 : C \in \mathcal{EV} \qquad \mbox{against} \qquad H_1 : C \not \in \mathcal{EV}.
\end{equation}
The existing tests for $H_0$ available in the literature are all rank-based and therefore margin-free. They can be classified into three groups according to how departure from extreme-value dependence is assessed.
\subsection{Approaches based on Kendall's distribution} \label{bk:subsec:kendall}
The first class of approaches, which is also the oldest, finds its origin in the seminal work of \cite{bk:GhoKhoRiv98} and is restricted to the case $d=2$. Given a bivariate random vector $\mathbf{X} = (X_1,X_2)$ with c.d.f.\ $F$, continuous marginal c.d.f.s $F_1$ and $F_2$ and copula $C$, the tests in this class are based on the random variable
$$
W = F(X_1,X_2) = C \{ F_1(X_1), F_2(X_2) \}.
$$
The c.d.f.\ of $W$ is frequently referred to as {\em Kendall's distribution} and will be denoted by $K$ subsequently. When $C \in \mathcal{EV}$, \cite{bk:GhoKhoRiv98} showed that
\begin{equation}
\label{bk:eq:Kendall_dist}
K(w) = \Pr(W \leq w) = w - (1 - \tau) w \log w, \qquad w \in (0,1],
\end{equation}
where $\tau$ denotes {\em Kendall's tau}. Whether or not $C$ is of the extreme-value type, it has been known since \cite{bk:SchWol81} that
$$
\tau = 4 \int_{[0,1]^2} C(u_1,u_2) \mathrm{d} C(u_1,u_2) - 1 = 4 \mathrm{E}(W) - 1.
$$
When $C \in \mathcal{EV}$, \cite{bk:GhoKhoRiv98} also obtained from~\eqref{bk:eq:Kendall_dist} that, for $k\in \mathbb N$,
$
\mu_k := \mathrm{E}(W^k) = (k \tau + 1)/(k+1)^2,
$
which for instance implies that
\begin{equation}
\label{bk:eq:test1}
-1 + 8 \mu_1 - 9 \mu_2 = 0.
\end{equation}
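The moment formula can be verified directly. Differentiating~\eqref{bk:eq:Kendall_dist} gives the density $K'(w) = \tau - (1-\tau) \log w$ on $(0,1]$, whence, using $\int_0^1 w^k \log w \, \mathrm{d} w = -1/(k+1)^2$,
\[
\mu_k = \int_0^1 w^k \{ \tau - (1-\tau) \log w \} \, \mathrm{d} w = \frac{\tau}{k+1} + \frac{1-\tau}{(k+1)^2} = \frac{k\tau+1}{(k+1)^2}.
\]
In particular, $\mu_1 = (\tau+1)/4$ and $\mu_2 = (2\tau+1)/9$, so that $-1 + 8\mu_1 - 9\mu_2 = -1 + 2(\tau+1) - (2\tau+1) = 0$, as claimed in~\eqref{bk:eq:test1}.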
In order to test $H_0$ from a bivariate random sample $\mathbf{X}_1,\dots,\mathbf{X}_n$ with c.d.f.\ $C\{F_1(x_1),F_2(x_2)\}$ where $C,F_1,F_2$ are unknown, \cite{bk:GhoKhoRiv98} suggested to assess whether a sample version of the left-hand side of~\eqref{bk:eq:test1} is significantly different from zero or not. Specifically, they considered the statistic
\begin{equation}
\label{bk:eq:S2n}
S_{2n} = - 1 + \frac{8}{n(n-1)} \sum_{i \neq j} I_{ij} - \frac{9}{n(n-1)(n-2)} \sum_{i \neq j \neq k} I_{ij} I_{kj},
\end{equation}
where $I_{ij} = \mathbf{1}(X_{i1} \leq X_{j1},X_{i2} \leq X_{j2})$. As shown by \cite{bk:GhoKhoRiv98}, $S_{2n}$ is a $U$-statistic that is centered under the null hypothesis and such that $\sqrt{n} \, S_{2n}$ converges weakly to a centered normal random variable. To carry out the test, \cite{bk:GhoKhoRiv98} proposed to estimate the variance of $S_{2n}$ using a jackknife estimator. The test based on $S_{2n}$ was revisited by \cite{bk:BenGenNes09} who proposed two alternative strategies to compute approximate p-values for $S_{2n}$. The three versions of the test are implemented in the function \texttt{evTestK} of the \textsf{R} package {\tt copula}.
The above approach was recently furthered by \cite{bk:DuNes13} who used the first three moments of Kendall's distribution and the theoretical relationship
\begin{equation}
\label{bk:eq:test2}
-1+4\mu_1+9\mu_2-16\mu_3 = 0
\end{equation}
under the null instead of~\eqref{bk:eq:test1}. The corresponding test statistic will subsequently be denoted by $S_{3n}$.
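Relationship~\eqref{bk:eq:test2} follows from the same moment formula: substituting $\mu_1 = (\tau+1)/4$, $\mu_2 = (2\tau+1)/9$ and $\mu_3 = (3\tau+1)/16$ gives
\[
-1 + 4\mu_1 + 9\mu_2 - 16\mu_3 = -1 + (\tau+1) + (2\tau+1) - (3\tau+1) = 0
\]
whenever $C \in \mathcal{EV}$.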
An additional contribution of the latter authors was to find a counterexample to \cite{bk:GhoKhoRiv98}'s conjecture that $K$ has the form in~\eqref{bk:eq:Kendall_dist} if and only if $C \in \mathcal{EV}$. The latter implies that tests in this class are not consistent. Despite that fact, the Monte Carlo experiments reported in \cite{bk:KojYan10c} and in \cite{bk:DuNes13} suggest that tests based on $S_{2n}$ and its extension studied in \cite{bk:DuNes13} are among the most powerful procedures for testing bivariate extreme-value dependence.
Notice finally that additional extensions of the approach of \cite{bk:GhoKhoRiv98} were studied in \cite{bk:Que12} along with tests based on Cram\'er--von Mises-like statistics derived from the empirical process $\sqrt{n} (K_n - K_{\tau_n})$, where $K_n$ is the empirical c.d.f.\ of $\hat W_1,\dots,\hat W_n$ with $\hat W_i = F_n(X_{i1},X_{i2})$ and $F_n$ the empirical c.d.f.\ of $\mathbf{X}_1,\dots,\mathbf{X}_n$, and $K_{\tau_n}$ is defined as in~\eqref{bk:eq:Kendall_dist} with $\tau$ replaced by its classical estimator denoted $\tau_n$.
\subsection{Approaches based on max-stability} \label{bk:subsec:max}
The second class of tests proposed in the literature consists of assessing empirically whether~\eqref{bk:eq:maxstability} holds or not. It was investigated in \cite{bk:KojSegYan11} for $d \geq 2$. The key ingredient is a natural nonparametric estimator of the unknown copula $C$ known as the {\em empirical copula} \citep[see e.g.][]{bk:Rus76,bk:Deh79,bk:Deh81}.
Given a sample $\mathbf{X}_1,\dots,\mathbf{X}_n$ from a c.d.f.\ $C\{F_1(x_1),\dots,F_d(x_d)\}$ with $F_1,\dots,F_d$ continuous and $C,F_1,\dots,F_d$ unknown, let $\hat U_{ij} = R_{ij}/(n+1)$ for all $i \in \{1,\dots,n\}$ and $j \in \{1,\dots,d\}$, where $R_{ij}$ is the rank of $X_{ij}$ among $X_{1j},\dots,X_{nj}$, and set $\mathbf{\hat U}_i = (\hat U_{i1},\dots, \hat U_{id})$. It is worth noticing that the scaled ranks $\hat U_{ij}$ can equivalently be rewritten as $\hat U_{ij} = n F_{nj}(X_{ij}) / (n+1)$, where $F_{nj}$ is the empirical c.d.f.\ computed from $X_{1j},\dots,X_{nj}$, the scaling factor $n/(n+1)$ being classically introduced to avoid problems at the boundary of~$[0,1]^d$. The empirical copula of $\mathbf{X}_1,\dots,\mathbf{X}_n$ is then frequently defined as the empirical c.d.f.\ computed from the {\em pseudo-observations} \index{pseudo-observations} $\mathbf{\hat U}_1,\dots,\mathbf{\hat U}_n$, i.e.,
\begin{equation}
\label{bk:eq:empcop}
C_n(\mathbf{u}) = \frac{1}{n} \sum_{i=1}^n \mathbf{1} ( \mathbf{\hat U}_i \leq \mathbf{u} ), \qquad \mathbf{u} \in [0,1]^d.
\end{equation}
The inequalities between vectors in the above definition are to be understood componentwise.
To test~\eqref{bk:eq:maxstability} empirically, \cite{bk:KojSegYan11} considered test statistics constructed from the empirical process
\begin{equation}
\label{bk:test_process}
\mathbb{D}_{r,n}(\mathbf{u}) = \sqrt{n} \left[ \{ C_n(u_1^{1/r},\dots,u_d^{1/r}) \}^r - C_n(\mathbf{u}) \right], \qquad \mathbf{u} \in [0,1]^d,
\end{equation}
for some strictly positive fixed values of $r$. The recommended test statistic is
\begin{equation}
\label{bk:eq:T345n}
T_{3,4,5,n} = T_{3,n} + T_{4,n} + T_{5,n},
\end{equation}
where $T_{r,n} = \int_{[0,1]^d} \{\mathbb{D}_{r,n}(\mathbf{u})\}^2 \mathrm{d} C_n(\mathbf{u})$. Approximate p-values for the latter were computed using a {\em multiplier bootstrap}. The test based on $T_{3,4,5,n}$ is implemented in the function \texttt{evTestC} of the \textsf{R} package {\tt copula}. It is not a consistent test either, because the validity of~\eqref{bk:eq:maxstability} is assessed only for a small number of $r$ values.
\subsection{Approaches based on the estimation of the Pickands dependence function}
\label{bk:subsec:pick}
Recall that $\mathbf{X}_1,\dots,\mathbf{X}_n$ is a random sample from a c.d.f.\ $C\{F_1(x_1),\dots,F_d(x_d)\}$ with $F_1,\dots,F_d$ continuous and $C,F_1,\dots,F_d$ unknown. If $C \in \mathcal{EV}$, it can be expressed as in~\eqref{bk:eq:Pickands_charact}. The third class of tests exploits variations of the following idea: given a nonparametric estimator $A_n$ of $A$ and using the empirical copula $C_n$ defined in~\eqref{bk:eq:empcop}, relationship~\eqref{bk:eq:Pickands_charact} can be tested empirically.
The first test in this class is due to \cite{bk:KojYan10c} who, for $d=2$ only, constructed test statistics from the empirical process
$$
\mathbb{E}_n(u_1,u_2) = \sqrt{n} \left( C_n(u_1,u_2) - \exp \left[ \log(u_1 u_2) A_n \left\{ \frac{\log(u_2)}{\log(u_1 u_2)} \right\} \right] \right),
$$
for $(u_1,u_2) \in (0,1]^2 \setminus \{(1,1)\}$. The recommended statistic is
\begin{equation}
\label{bk:eq:TnA}
T_n^A = \int_{[0,1]^2} \{ \mathbb{E}_n(u_1,u_2) \}^2 \mathrm{d} C_n(u_1,u_2),
\end{equation}
when $A_n$ is the rank-based version of the Cap\'era\`a--Foug\`eres--Genest (CFG) estimator of $A$ studied in \cite{bk:GenSeg09}. The resulting test relies on a {\em multiplier bootstrap} and is implemented in the function \texttt{evTestA} of the \textsf{R} package {\tt copula}. A multivariate version of this test was studied in \cite{bk:Gud12} using the multivariate extension of the rank-based CFG estimator of~$A$ investigated in \cite{bk:GudSeg12}.
An alternative class of nonparametric multivariate rank-based estimators of $A$ was proposed in \cite{bk:BucDetVol11} and \cite{bk:BerBucDet13}. These are based on the minimization of a weighted $L^2$-distance between the logarithms of the empirical and the unknown extreme-value copula. To derive multivariate tests of extreme-value dependence, the latter authors reused the aforementioned $L^2$-distance to measure the difference between the empirical copula in~\eqref{bk:eq:empcop} and a plug-in nonparametric estimator of $C$ under extreme-value dependence based on~\eqref{bk:eq:Pickands_charact}. The corresponding test statistic is subsequently denoted by $T_{L^2,n}$.
We end this subsection by briefly summarizing a recent graphical approach due to \cite{bk:CorGenNes14}. Their idea, hitherto restricted to the bivariate case, is as follows: given a copula $C$, consider the transformation $T_C:(0,1)^2 \to (0,1) \times (0,\infty]$, defined by
\[
T_C(u_1,u_2) = \left( \frac{\log (u_2)}{\log (u_1u_2)}, \frac{\log \{ C(u_1,u_2) \} }{ \log(u_1u_2) } \right), \qquad (u_1,u_2) \in (0,1)^2.
\]
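For instance, for the independence copula $\Pi(u_1,u_2) = u_1 u_2$, which is of the extreme-value type with $A \equiv 1$, one has $\log\{\Pi(u_1,u_2)\}/\log(u_1u_2) = 1$, so that $T_\Pi(u_1,u_2) = \big( \log(u_2)/\log(u_1u_2), 1 \big)$ and the image of $(0,1)^2$ under $T_\Pi$ is the horizontal segment at height one, i.e., the graph of the constant Pickands dependence function.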
If $C \in \mathcal{EV}$, representation~\eqref{bk:eq:Pickands_charact} holds and we have $\log \{ C(u_1,u_2) \} = \log(u_1u_2) A \{ \log(u_2)/\log(u_1u_2) \}$ for all $(u_1,u_2) \in (0,1)^2$, whence $\mathcal S_C = \{ T_C(u,v): (u,v) \in (0,1)^2\}$ coincides with the graph of $A$, i.e., with the set $\{(t,A(t)) : t \in (0,1) \}$. More generally, some thought reveals that $H_0$ is valid if and only if $\mathcal S_C$ is a convex curve. The latter observation suggests to test $H_0$ in~\eqref{bk:eq:H0} by estimating the set $\mathcal S_C$ and visually assessing the departure of that estimated set from a convex curve. The estimator defined in \cite{bk:CorGenNes14}, called the {\em A-plot}, is given by
\[
\hat {\mathcal S}_n =
\left\{
(\hat T_i, \hat Z_i) : \hat T_i = \frac{\log(\hat U_{i2}) }{ \log( \hat U_{i1} \hat U_{i2})}, \hat Z_i = \frac{\log \{ C_n(\hat U_{i1}, \hat U_{i2}) \} }{ \log(\hat U_{i1} \hat U_{i2})}, i \in \{1, \dots, n\}
\right\}.
\]
Examples of A-plots when $C \in \mathcal{EV}$ and when $C \not \in \mathcal{EV}$ can be found in Figure~1 of \cite{bk:CorGenNes14}. When $C$ is of the extreme-value type, the previous authors proposed a B-spline smoothing estimator for the Pickands dependence function $A$ based on $\hat {\mathcal S}_n$. The latter is subsequently denoted by $A_n$ for simplicity (even though the estimator depends on several smoothing parameters). In addition to a purely graphical check, the authors propose
\begin{equation}
\label{bk:eq:resid}
T_n = \frac{1}{n} \sum_{i=1}^n \{ \hat Z_i - A_n (\hat T_i) \}^2,
\end{equation}
a residual sum of squares, as a formal test statistic for $H_0$. The hypothesis is rejected for large values of $T_n$. Specifically, an approximate p-value for $T_n$ is computed by means of a {\em parametric bootstrap} procedure based on simulating from a copula with Pickands dependence function $A_n$.
\subsection{Finite-sample performance of some of the tests}
The finite-sample performance of the tests reviewed in the preceding sections was investigated by various authors. Table~\ref{bk:tab:evc} below, taken from \cite{bk:CorGenNes14}, gathers those results from \cite{bk:CorGenNes14}, \cite{bk:KojYan10c}, \cite{bk:DuNes13}, and \cite{bk:BucDetVol11} that were obtained under the same experimental settings (notice that the Gumbel--Hougaard copula is the only extreme-value copula among those considered in the table). As noted by \cite{bk:CorGenNes14}, no test is uniformly better than the others: each test, except the one based on $T_{L^2,n}$ from \cite{bk:BucDetVol11}, is favored for at least one of the considered scenarios under $H_1$. For high levels of dependence (as measured by Kendall's tau), the tests based on $S_{2n}$ and $S_{3n}$ described in Section~\ref{bk:subsec:kendall} seem to yield the most accurate approximation of the nominal level (here 5\%). The tests whose approximate p-values are computed by means of a multiplier bootstrap, i.e., the tests based on $T_{3,4,5,n}$ defined in~\eqref{bk:eq:T345n} and on $T_n^A$ and $T_{L^2,n}$ introduced in Section~\ref{bk:subsec:pick}, are quite conservative for such scenarios. From a computational perspective, the test based on $S_{2n}$ seems to be the fastest, while the one based on $T_n^A$ defined in~\eqref{bk:eq:TnA} is the most computationally intensive. Additional comparison of the tests based on $S_{2n}$, $T_{3,4,5,n}$ and $T_n^A$ (resp.\ $S_{2n}$ and $S_{3n}$) can be found in \citet[Tables 1--3]{bk:KojYan10c} \citep[resp.][Table 5]{bk:DuNes13}.
\begin{table}[t!]
\begin{center}
\footnotesize{
\begin{tabular}{l l r r r r r r}
$\tau$ & $C$ & $T_n$ & $S_{2n}$ & $S_{3n}$ & $T_n^A$ & $T_{3,4,5,n}$ & $T_{L^2,n}$ \\ \hline
0.25 & Gumbel--Hougaard & 4.7 & 5.4 & 5.3 & 3.8 & 5.0 & 4.5 \\
& Clayton & 97.7 & 98.0 & 96.6 & {\bf 98.4} & 94.6 & 87.4 \\
& Frank & 18.7 & 38.4 & 57.0 & 58.3 & {\bf 66.1} & 29.1 \\
& Gaussian & 25.5 & 37.3 & {\bf 40.3} & 36.5 & 38.7 & 16.8 \\
& Student $t$ with 4 d.f.\ & {\bf 37.7} & 26.2 & 19.6 & 23.9 & 26.6 & 10.5 \\ \hline
0.5 & Gumbel--Hougaard & 5.4 & 5.1 & 5.0 & 3.9 & 4.0 & 2.9 \\
& Clayton & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
& Frank & 87.8 & 59.4 & 84.4 & 95.7 & {\bf 96.5} & 73.0 \\
& Gaussian & 59.4 & {\bf 62.6} & 61.7 & 61.8 & 51.0 & 23.7 \\
& Student $t$ with 4 d.f.\ & {\bf 58.6} & 56.0 & 45.3 & 50.1 & 52.7 & 15.8 \\ \hline
0.75 & Gumbel--Hougaard & 6.2 & 4.9 & 5.3 & 3.2 & 2.3 & 2.5 \\
& Clayton & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
& Frank & 98.3 & 58.5 & 92.9 & {\bf 99.9} & 99.0 & 78.3 \\
& Gaussian & 56.5 & {\bf 75.2} & 71.1 & 66.5 & 46.7 & 8.4 \\
& Student $t$ with 4 d.f.\ & 45.8 & 67.8 & 55.8 & 50.6 & {\bf 69.2} & 4.6 \\
\end{tabular}
}
\caption{\small Rejection rates of $H_0$ estimated from random samples of size $n=200$ generated from a c.d.f.\ with copula $C$ whose Kendall's tau is~$\tau$. All the tests were carried out at the 5\% significance level. The table is taken from \cite{bk:CorGenNes14}.
}\label{bk:tab:evc}
\end{center}
\end{table}
\cite{bk:KojSegYan11} and \cite{bk:BerBucDet13} also present simulation results for $d > 2$, which are in favor of the test based on $T_{3,4,5,n}$ defined in~\eqref{bk:eq:T345n}. Preliminary results obtained in \cite{bk:Gud12} indicate that the multivariate extension of the test based on $T_n^A$ defined in~\eqref{bk:eq:TnA} is likely to outperform the test based on $T_{3,4,5,n}$ for several scenarios under $H_1$.
\section{Some related statistical inference procedures}
\label{bk:sec:related}
Once it has been decided to use an extreme-value copula to model dependence in a set of multivariate continuous i.i.d.\ observations, a typical next step is to choose a parametric family $\mathcal{C}$ in $\mathcal{EV}$ and estimate its unknown parameter(s) from the data. As many parametric families of extreme-value copulas are available \citep[see e.g.][]{bk:GudSeg10,bk:RibSed13}, it is of strong practical interest to be able to test whether a given family $\mathcal{C}$ is a plausible model or not for the data at hand. In other words, tests for
$
H_0 : C \in \mathcal{C}
$
against
$
H_1 : C \not \in \mathcal{C}
$
would be needed. Such goodness-of-fit procedures were investigated in the bivariate case by \cite{bk:GenKojNesYan11} who considered Cram\'er--von Mises test statistics based on the difference between a nonparametric and a parametric estimator of the Pickands dependence function. The Monte Carlo experiments reported in the latter work highlighted the fact that, unless the amount of data is very large, there is hardly any practical difference among the existing bivariate symmetric parametric families of extreme-value copulas, and that an issue of more importance from a modeling perspective is whether a symmetric or asymmetric family should be used. For that purpose, the specific test of symmetry for bivariate extreme-value copulas investigated in \cite{bk:KojYan12} can be used as a complement to the goodness-of-fit test studied in \cite{bk:GenKojNesYan11}. Both tests are available in the {\tt copula} \textsf{R} package.
When $d > 2$ but $d$ remains reasonably small (say $d \leq 10$), generic goodness-of-fit tests (that is, developed for any parametric copula family, not necessarily of the extreme-value type) could be used \citep[see e.g.][and the references therein]{bk:GenRemBea09,bk:KojYan11}. In a higher dimensional context, one possibility consists of using the specific approach for extreme-value copulas proposed by \cite{bk:Smi90} in his seminal work on max-stable processes. It consists of comparing nonparametric and parametric estimators of the underlying {\em extremal coefficients} (which are functionals of the Pickands dependence function). The latter approach was recently revisited in \cite{bk:KojShaYan14}.
\section{Illustrations and \textsf{R} code from the {\tt copula} package}
\label{bk:sec:illus}
We provide two illustrations below. The first one concerns bivariate financial logreturns and exemplifies the key theoretical connection between multivariate block maxima and extreme-value copulas briefly mentioned in the introduction and Section~\ref{bk:sec:found}. The second illustration consists of a detailed analysis of the well-known LOSS/ALAE insurance data with particular emphasis on the effect and handling of ties.
\subsection{Bivariate financial logreturns}
As a first illustration, we considered daily logreturns computed from the closing values of the Dow Jones and the S\&P 500 stock indexes for the period 1990--2004. The closing values are available in the {\tt QRM} \textsf{R} package \citep{bk:QRM} and can be loaded by entering the following commands into the \textsf{R} terminal:
{\small
\begin{verbatim}
> library(QRM)
> data(dji)
> data(sp500)
\end{verbatim}
} \noindent
Daily logreturns for the period under consideration were computed using the {\tt timeSeries} \textsf{R} package \citep{bk:timeSeries}:
{\small
\begin{verbatim}
> d <- na.omit(cbind(dji,sp500))
> rd <- returns(d)
\end{verbatim}
}\noindent
The statistical procedures mentioned in the previous sections should not, however, be applied directly to the resulting bivariate daily logreturns as the latter are strongly serially dependent. To obtain observations that might exhibit extreme-value dependence and could be considered approximately i.i.d., we first formed the bivariate series of componentwise monthly maxima. This was done using functions from the {\tt timeSeries} and {\tt timeDate} \textsf{R} packages \citep{bk:timeDate}:
{\small
\begin{verbatim}
> by <- timeSequence(from=start(rd), to=end(rd), by="month")
> mrd <- aggregate(rd, by, max)
\end{verbatim}}
\noindent The resulting component series do not contain ties, which is compatible with the implicit assumption of continuous margins:
{\small
\begin{verbatim}
> x <- series(mrd)
> nrow(x)
[1] 171
> apply(x, 2, function(x) length(unique(x)))
DJI SP500
171 171
\end{verbatim}
} \noindent
After loading the {\tt copula} package with the command {\tt library(copula)} and setting the random seed by typing {\tt set.seed(123)}, the test of extreme-value dependence based on $S_{2n}$ (resp.\ $T_{3,4,5,n}$, $T_n^A$) defined in~\eqref{bk:eq:S2n} (resp.~\eqref{bk:eq:T345n}, \eqref{bk:eq:TnA}) was applied using the command {\tt evTestK(x)} (resp.\ {\tt evTestC(x)}, {\tt evTestA(x, derivatives="Cn")}) and returned an approximate p-value of 0.5737 (resp.\ 0.4191, 0.2423). In other words, none of the tests detected any evidence against extreme-value dependence, thereby suggesting that the copula of componentwise block maxima, for blocks of length corresponding to a month, is sufficiently close to an extreme-value copula. Note that, as the tests are rank-based, they could have equivalently been called on the pseudo-observations computed from the monthly block maxima. The random seed was set (to ensure exact reproducibility) because the second and third tests involve random number generation as their p-values are computed using resampling.
For illustration purposes, we next formed monthly logreturns as follows:
{\small
\begin{verbatim}
> srd <- aggregate(rd, by, sum)
> x <- series(srd)
\end{verbatim}
} \noindent
Proceeding as previously, it can be verified that the resulting component series do not contain ties, which is compatible with the implicit assumption of continuous margins. Monthly logreturns being merely sums of daily logreturns, the underlying unknown bivariate distribution should be far from exhibiting extreme-value dependence. The tests of extreme-value dependence based on $S_{2n}$, $T_{3,4,5,n}$ and $T_n^A$ returned approximate p-values of 0.0003, 0.02 and 0.0005, respectively, confirming that there is strong evidence in the data against extreme-value dependence.
\subsection{LOSS/ALAE insurance data}
The well-known LOSS/ALAE insurance data are very frequently used for illustration purposes in copula modeling \citep[see e.g.][]{bk:FreVal98,bk:BenGenNes09,bk:KojYan10}. The two variables of interest are LOSS, an indemnity payment, and ALAE, the corresponding allocated loss adjustment expense. They were observed for 1500 claims of an insurance company. Following \cite{bk:BenGenNes09}, the following study is restricted to the 1466 uncensored claims.
The data are available in the {\tt copula} package, and can be loaded by typing {\tt library(copula)} followed by {\tt data(loss)}. The uncensored claims described in terms of LOSS and ALAE were obtained as follows:
{\small
\begin{verbatim}
> myLoss <- subset(loss, censored==0, select=c("loss", "alae"))
\end{verbatim}
}\noindent
These data, consisting of 1466 bivariate observations, contain a non-negligible amount of ties, the variable LOSS being particularly affected:
{\small
\begin{verbatim}
> sapply(myLoss, function(x) length(unique(x)))
loss alae
541 1401
\end{verbatim}
}
The presence of ties is incompatible with the implicit assumption of continuous margins. Indeed, combined with the assumption that the data are i.i.d.\ observations, continuity of the margins implies that ties should not occur. Yet, ties are present here as in many other real data sets. This could be due either to the fact that the observed phenomena are truly discontinuous, or to precision/rounding issues. As far as the LOSS/ALAE data are concerned, the latter explanation applies.
Among the tests briefly described in Section~\ref{bk:sec:tests}, only that of \cite{bk:CorGenNes14} explicitly considers the case of discontinuous margins (see Section~6 in that reference). The remaining tests were all implemented under the assumption of continuous margins. For the test based on $S_{2n}$ defined in~\eqref{bk:eq:S2n}, \cite{bk:GenNesRup11} provide a heuristic explanation of the fact that, for discontinuous margins, $S_{2n}$ is not necessarily centered anymore under the null.
Given the situation, there are roughly four possible courses of action: (i)~stop the analysis, (ii)~delete tied observations, (iii)~use average ranks for ties or (iv)~break ties at random, sometimes referred to as {\em jittering} (which amounts to adding a small, continuous white noise term to all observations). Arguments for not considering solution~(ii) are given in~\citet[Section~2]{bk:GenNesRup11}. To empirically study solutions~(iii) and~(iv), the latter authors carried out an experiment consisting of applying the test based on $S_{2n}$ defined in~\eqref{bk:eq:S2n} on binned observations from a bivariate Gumbel--Hougaard copula. More specifically, tied observations were obtained by dividing the unit square uniformly into bins of dimension 0.1 by 0.1 (resp.\ 0.2 by 0.2) resulting in at most 100 (resp.\ 25) different bivariate observations whatever the sample size. In such a setting, \cite{bk:GenNesRup11} observed that both solutions~(iii) and~(iv) led to strongly inflated empirical levels for the test based on $S_{2n}$.
The situation in terms of ties in the LOSS/ALAE data is however far from being as extreme as in the experiment of \cite{bk:GenNesRup11}. In addition, ties mostly affect the LOSS variable. This prompted us first to consider solution~(iv) as implemented in \cite{bk:KojYan10}.
\paragraph{Random ranks for ties} The idea consists of carrying out the analysis for many different randomizations (with the hope that this will result in many different configurations for the parts of the data affected by ties) and then looking at the empirical distributions (and not the averages) of the results (here the p-values of various tests).
For illustration purposes, we first detail the analysis for one randomization:
{\small
\begin{verbatim}
> set.seed(123)
> pseudoLoss <- sapply(myLoss, rank, ties.method="random") /
+   (nrow(myLoss) + 1)
\end{verbatim}
} \noindent
As a next step, the tests of extreme-value dependence based on $S_{2n}$, $T_{3,4,5,n}$ and $T_n^A$ defined in~\eqref{bk:eq:S2n},~\eqref{bk:eq:T345n} and~\eqref{bk:eq:TnA}, respectively, were applied by successively typing \texttt{evTestK(pseudoLoss)}, \texttt{evTestC(pseudoLoss)} and \texttt{evTestA(pseudoLoss, derivatives="Cn")}, resulting in approximate p-values of 0.8845, 0.468 and 0.4231, respectively.
Hence, none of the three tests detected any evidence against extreme-value dependence. The following step consisted of fitting a parametric family of bivariate extreme-value copulas to the data. As discussed in Section~\ref{bk:sec:related}, given the very strong similarities among the existing families of bivariate symmetric extreme-value copulas, the only issue of practical importance is to assess whether a symmetric or asymmetric family should be used. To do so, we applied the test developed in \cite{bk:KojYan12} by calling \texttt{exchEVTest(pseudoLoss)}, with a resulting p-value of 0.1653.
The previous result suggested to focus on an exchangeable family such as the Gumbel--Hougaard. We then ran the goodness-of-fit test proposed in \cite{bk:GenKojNesYan11}
by calling
{\small
\begin{verbatim}
> gofEVCopula(gumbelCopula(), pseudoLoss, method="itau", verbose=FALSE)
\end{verbatim}}
\noindent
The resulting p-value of 0.2592 suggested to fit the Gumbel--Hougaard family:
{\small
\begin{verbatim}
> fitCopula(gumbelCopula(), pseudoLoss, method="itau")
fitCopula() estimation based on 'inversion of Kendall's tau'
and a sample of size 1466.
Estimate Std. Error z value Pr(>|z|)
param 1.44040 0.03327 43.29 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
\end{verbatim}}
To assess how different randomizations of the ties affect the results, the above analysis was repeated 100 times using the following code:
{\small
\begin{verbatim}
> randomize <- function()
+ {
+ pseudoLoss <- sapply(myLoss, rank, ties.method="random") /
+   (nrow(myLoss) + 1)
+ evtK <- evTestK(pseudoLoss)$p.value
+ evtC <- evTestC(pseudoLoss)$p.value
+ evtA <- evTestA(pseudoLoss, derivatives="Cn")$p.value
+ exevt <- exchEVTest(pseudoLoss)$p.value
+ gofevGH <- gofEVCopula(gumbelCopula(), pseudoLoss, method="itau",
+   verbose=FALSE)$p.value
+ fitGH <- fitCopula(gumbelCopula(), pseudoLoss, method="itau")
+ c(evtK=evtK, evtC=evtC, evtA=evtA, exevt=exevt, gofevGH=gofevGH,
+ est=fitGH@estimate, se=sqrt(fitGH@var.est))
+ }
> reps <- t(replicate(100, randomize()))
> round(apply(reps, 2, summary), 3)
evtK evtC evtA exevt gofevGH est se
Min. 0.868 0.430 0.353 0.092 0.191 1.441 0.033
1st Qu. 0.898 0.462 0.396 0.112 0.223 1.442 0.033
Median 0.914 0.475 0.411 0.120 0.235 1.442 0.033
Mean 0.913 0.474 0.411 0.122 0.236 1.442 0.033
3rd Qu. 0.928 0.489 0.425 0.129 0.248 1.443 0.033
Max. 0.955 0.525 0.464 0.162 0.292 1.444 0.033
\end{verbatim}}
The empirical distributions of the results show that the different randomizations did not qualitatively affect the conclusions.
\paragraph{Average ranks for ties}
We also considered solution~(iii), that is, average ranks for ties. The p-values of the three tests of extreme-value dependence (applied in the same order as previously) were 0.6, 0.02 and 0, respectively. The p-values of the tests of exchangeability and goodness of fit were 0.12 and 0.18, respectively. The estimate of the parameter of the Gumbel--Hougaard copula was 1.446.
\paragraph{Random or average ranks for ties?}
The previous computations illustrate that solutions~(iii) and~(iv) for dealing with ties can result in significantly different conclusions. To gain insight into which solution should be preferred, if any, we designed an experiment tailored to the LOSS/ALAE data. Specifically, we simulated a large number of samples of size $n=1466$ from a Gumbel--Hougaard copula with parameter value 1.446, as suggested by the aforementioned parametric fit. We then modified each sample so that its marginal empirical c.d.f.s evaluated at the respective observations coincide with those of the LOSS/ALAE data. For instance, in the original data, the 27th to the 49th smallest values of LOSS are equal. Each simulated sample was modified so that the 27th to the 49th smallest values of the first variable were all replaced by the 49th smallest observation. The same approach was used for the second variable of the generated samples. Solutions~(iii) and~(iv) were applied next to each modified sample prior to running the tests of extreme-value dependence, and the resulting p-values were compared with those obtained by applying the tests on the corresponding unmodified sample (that is, with no ties). The code used to carry out the experiment for the test based on $S_{2n}$ defined in~\eqref{bk:eq:S2n} is given below:
{\small \begin{verbatim}
> mr.loss <- rank(sort(myLoss[,1]), ties.method="max")
> mr.alae <- rank(sort(myLoss[,2]), ties.method="max")
> test.func <- function(x) evTestK(x)$p.value
> do1 <- function()
+ {
+ x <- rCopula(1466, gumbelCopula(1.446))
+ y <- x[order(x[,1]),]
+ y[,1] <- y[mr.loss,1]
+ y <- y[order(y[,2]),]
+ y[,2] <- y[mr.alae,2]
+ z <- apply(y, 2, rank, ties.method="random")
+ c(test.func(x), test.func(y), test.func(z))
+ }
> res <- t(replicate(1000, do1()))
> summary(round(res[,1] - res[,3],3))
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.093000 -0.014000 0.001000 0.000033 0.014000 0.105000
> summary(round(res[,1] - res[,2],3))
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.54600 -0.25520 0.14000 0.06697 0.34750 0.52300
> apply(res, 2, function(x) mean(x <= 0.05))
0.046 0.107 0.047
\end{verbatim}}
For the test based on $S_{2n}$, the p-values computed from a continuous sample and the corresponding randomized sample are very close on average, the maximal deviation being relatively small. On the contrary, the p-values computed from a continuous sample are larger on average than the p-values computed from the corresponding sample involving average ranks, and the maximal deviation is very large. We also see that when solution~(iii) is considered, the test based on $S_{2n}$ is way too liberal, confirming the findings of \cite{bk:GenNesRup11}, while, when solution~(iv) is used, the test holds its level well. A similar experiment was performed for the test based on $T_{3,4,5,n}$ (with 100 replications only) and the conclusions are of the same nature but more pronounced:
{\small
\begin{verbatim}
> summary(round(res[,1] - res[,3],3))
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.10400 -0.01425 -0.00150 -0.00050 0.01650 0.08700
> summary(round(res[,1] - res[,2],3))
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.0190 0.2700 0.4820 0.4536 0.6592 0.8730
> apply(res, 2, function(x) mean(x <= 0.05))
0.05 0.45 0.05
\end{verbatim}}
The previous experiment can be adapted to any data set containing ties and suggests that, in the case of the LOSS/ALAE data, solution~(iv) is meaningful while solution~(iii) should be avoided.
\section{Testing the maximum domain of attraction condition}
The statistical framework considered in the three previous sections can be regarded as the ``classical'' setting of dependence modeling by copulas. As mentioned in the introduction, modeling a copula by an extreme-value copula, or testing extreme-value dependence within such a framework, is particularly sensible if there are reasons to assume that the data at hand are generated by some maxima-forming process. If this is not the case, or if the hypothesis of extreme-value dependence is rejected, it might still be reasonable to make the (mild) assumption that the copula of interest lies in the domain of attraction of some extreme-value copula. It is the aim of the present section to briefly discuss how the latter assumption could be tested.
A precise formulation of the problem is as follows: we observe a sample of $d$-dimensional i.i.d.\ vectors
$\mathbf Y_1, \dots, \mathbf Y_n$ with c.d.f.\ $C^*\{G_1(y_1), \dots, G_d(y_d)\}$, where $G_1, \dots, G_d$ are assumed continuous and $C^*,G_1,\dots,G_d$ are unknown. We are interested in tests of
\begin{align} \label{bk:eq:evcondition}
H_0: C^* \in D(C) \text{ for some } C \in \mathcal{EV}
\quad \text{against}\quad
H_1: C^* \notin D(C) \text{ for any } C \in \mathcal{EV},
\end{align}
where the notation $C^* \in D(C)$ is defined below~\eqref{bk:eq:maxdom}. Notice that the analogous univariate problem (i.e., testing the null hypothesis that the underlying distribution of a given univariate i.i.d.\ sample lies in the maximum domain of attraction of some extreme-value distribution) was tackled in \cite{bk:DieDehHus02}, \cite{bk:DreDehLi06} and \cite{bk:HusLi06}, while, in the multivariate case, only very few (validated) methods seem available.
A rejection of the null hypothesis in~\eqref{bk:eq:evcondition} gives an indication that the stochastic dependence between componentwise block maxima formed from the $\mathbf Y_i$, no matter how large the blocks are, cannot be adequately described by an extreme-value copula. On the other hand, if the hypothesis is not rejected, it is promising to consider an extreme-value copula as a model, provided the block size is sufficiently large. Also, in the latter case, one could make use of~\eqref{bk:eq:maxdom} to obtain the approximation that, for a sufficiently large~$r$, $C^*(\mathbf v) \approx \{ C(v_1^r, \dots, v_d^r) \}^{1/r}= C(\mathbf v)$ for all $\mathbf v \in [\mathbf t, \mathbf 1]$, with $\mathbf t =(t_1, \dots, t_d)$ close to $\mathbf 1$. This would imply that, at least in the upper tail, the copula $C^*$ can be well-approximated by an extreme-value copula $C$. A threshold model of that form was for instance considered in \cite{bk:LedTaw96} in a bivariate setting with generalized Pareto marginals.
A first promising approach to test $H_0$ in~\eqref{bk:eq:evcondition} consists of comparing two estimators of $C$ (or its characterizing objects) with different backgrounds. Under the null hypothesis, these estimators should not differ too much. Based on the peak-over-threshold method and in the bivariate case, \cite{bk:EinDehLi06} developed a test based on an Anderson--Darling-type statistic between two estimators of the stable-tail dependence function $\ell:[0,1]^2 \to \mathbb R$ defined by $\ell(x,y) = |x+y| A(x/|x+y|)$, where $A$ denotes the Pickands dependence function of $C$. Critical values for the test were obtained by approximately simulating from the limiting random variable. To the best of our knowledge, this testing procedure is the only validated method for testing the (bivariate) maximum domain of attraction condition.
A heuristic approach to test the null hypothesis in~\eqref{bk:eq:evcondition} in the bivariate case was described in \cite{bk:CorGenNes14}. Their method consists of considering a trimmed A-plot (see also Section~\ref{bk:subsec:pick}) defined by only including those points $(\hat T_i, \hat Z_i)$ in the set $\hat{\mathcal S}_n=\hat{\mathcal S}_n(\mathbf t)$ for which $\hat{\mathbf U}_i \in [\mathbf t, \mathbf 1]$, with some suitable thres\-hold parameter $\mathbf t =(t_1, t_2) \in [0,1]^2$ close to $\mathbf 1$. Based on the trimmed A-plot, the approach briefly described in Section~\ref{bk:subsec:pick} can be followed to obtain a B-spline smoothing estimator of the Pickands dependence function corresponding to the limiting extreme-value copula $C$. Plotting the residual sum of squared errors defined in~\eqref{bk:eq:resid} against the threshold $\mathbf t$ serves as a data-driven method for the choice of the threshold. For that particular choice, the A-plot as well as the testing procedure described in Section~\ref{bk:subsec:pick} can be used to assess heuristically whether the maximum domain of attraction condition holds or not.
Finally, the tests described in Section~\ref{bk:sec:tests} can be adapted to obtain simple heuristic procedures for testing $H_0$ in~\eqref{bk:eq:evcondition}. Under the null hypothesis, given $\mathbf Y_1, \dots, \mathbf Y_n$, if we form $k$ (componentwise) block maxima from blocks of length $m$,
\[
\mathbf X_{i} = (X_{i1}, \dots, X_{id}), \quad X_{ij}= \max \{Y_{m(i-1)+1,j}, \dots, Y_{mi,j}\},
\]
$i \in \{1, \dots, k\}$, $j \in \{1, \dots, d\}$, where $km = n$ and $m$ is sufficiently large (if $n$ is not an integer multiple of $m$, then a negligible remainder block of length strictly smaller than $m$ occurs), then the copula of the block maxima $\mathbf X_i$ should (approximately) be an extreme-value copula. The tests described in Section~\ref{bk:sec:tests} could next be applied to $\mathbf X_1, \dots, \mathbf X_k$ to obtain an indication of whether the maximum domain of attraction condition holds or not. Another promising possibility consists of adapting the approach in Section~\ref{bk:subsec:max} by only testing max-stability in the upper tail $[\mathbf t, \mathbf 1]$, with some suitable thres\-hold parameter $\mathbf t =(t_1, t_2) \in [0,1]^2$ close to $\mathbf 1$. This could be done by integrating the square of the process in~\eqref{bk:test_process} over the restricted set $[\mathbf t, \mathbf 1]$.
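As an illustrative sketch of the block-maxima heuristic (the Gumbel--Hougaard parameter 1.446 is the value fitted above, while the sample size $n=5000$ and block length $m=50$ are arbitrary choices made for illustration only), one can form componentwise block maxima in R and feed them to one of the tests of Section~\ref{bk:sec:tests}. Since the Gumbel--Hougaard copula is max-stable, the copula of the block maxima is again Gumbel--Hougaard, so a large p-value is to be expected:
{\small
\begin{verbatim}
> library(copula)
> set.seed(123)
> n <- 5000; m <- 50; k <- n %/% m
> y <- rCopula(n, gumbelCopula(1.446))
> ## componentwise maxima over consecutive blocks of length m
> x <- t(sapply(seq_len(k), function(i)
+     apply(y[((i - 1) * m + 1):(i * m), ], 2, max)))
> evTestK(x)$p.value
\end{verbatim}}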
Precise asymptotic validations of these methods are, however, not available. A treatment of occurring bias terms from an undersized choice of the block length or the threshold parameter would be necessary, as for instance carried out in \cite{bk:BucSeg14} in an estimation framework for time series based on block maxima. Also, data-driven methods to choose the block length $m$ or the threshold parameter $\mathbf t$ would need to be developed.
\section{Open questions and ignored difficulties}
Several issues dealt with in this chapter would need to be thoroughly investigated in future research. For instance, the suggested approach for handling ties in data sets for which it is reasonable to assume that the apparent discontinuities are only due to precision or rounding issues would need to be studied in more depth. While for the LOSS/ALAE data set it seemed reasonable to break ties at random a large number of times, this may not be the case for other data sets in which the proportion of ties is significantly larger \citep[see e.g.][Section~4]{bk:GenNesRup11}. Even more difficult is the problem of testing extreme-value dependence from truly discontinuous observations such as count data. A promising starting point for adapting some of the statistical procedures described in this work to such a context is the recent work of \cite{bk:GenNesRem14} on the {\em multilinear empirical copula}.
With financial applications in mind in particular, tests of extreme-value dependence would also need to be adapted to multivariate stationary time series. The methods briefly described in this chapter all rely on the assumption that the observations at hand are serially independent, which is rarely satisfied for data sets of interest. Applying the discussed statistical procedures to (almost i.i.d.)\ standardized residuals from common time series models might be an option, but it is unclear whether the necessary additional estimation step affects the limiting null distribution of the test statistics or not. For that purpose, a starting point might be the work of \cite{bk:Rem10}, where the asymptotics of the empirical copula process of standardized residuals are investigated. If the tests are to be applied to stationary raw time series data, then their empirical levels will most likely be affected by the serial dependence present in the observations. In such a situation, the dependent multiplier bootstrap studied in \cite{bk:BucKoj14} could be used to adapt some of the reviewed tests of extreme-value dependence.
\textbf{Acknowledgments} The authors are grateful to Christian Genest, Johanna Ne\v slehov\'a and an anonymous referee for their constructive comments on an earlier version of this chapter. This work has been supported in part by the Collaborative Research Center \textit{Statistical Modeling of Nonlinear Dynamic Processes} (SFB 823, project A7) of the German Research Foundation (DFG).
\end{document}
\begin{document}
\title[]{Entanglement-enhanced Neyman-Pearson target detection using quantum illumination}
\author{Quntao Zhuang$^{1,2}$, Zheshen Zhang$^1$ and Jeffrey H Shapiro$^1$}
\address{$^1$Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA\\
$^2$Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA}
\ead{quntao@mit.edu}
\begin{indented}
\item March 7, 2017
\end{indented}
\begin{abstract}
Quantum illumination (QI) provides entanglement-based target detection---in an entanglement-breaking environment---whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture---based on sum-frequency generation (SFG) and feedforward (FF) processing---for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic---detection probability versus false-alarm probability---for optimum QI target detection under the Neyman-Pearson criterion.
\end{abstract}
\section{Introduction}
Entanglement is arguably the premier quantum-mechanical resource for obtaining sensing performance that exceeds limits set by classical physics. Entanglement, however, is vulnerable to loss and noise arising from environmental interactions. As a result, the performance advantages of many entanglement-enabled sensing schemes---such as those that rely on frequency-entangled states (see, e.g.,~\cite{Giovannetti2001}), or N00N states (see, e.g.,~\cite{Dowling2008})---vanish as loss and noise increase. Quantum illumination (QI)~\cite{Sacchi_2005_1,Sacchi_2005_2,Lloyd2008,Tan2008,Lopaeva_2013,Guha2009,Zheshen_15,Barzanjeh_2015}, in contrast, is highly robust against environmental loss and noise. QI utilizes entanglement to beat the performance of the optimum classical-illumination (CI) scheme for detecting the presence of a weakly reflecting target that is embedded in a very noisy environment, despite QI's initial entanglement being destroyed \emph{before} the target-detection quantum measurement is made. In particular, for equally-likely target absence or presence, Tan \emph{et al}~\cite{Tan2008} showed that QI's error-probability exponent is 6\,dB higher than that of the optimum CI scheme of the same transmitted power. Tan \emph{et al} obtained their result from the quantum Chernoff bound (QCB)~\cite{Audenaert2007}, which gives no inkling as to what receiver hardware could be used to realize that performance advantage. Indeed, finding a structured optimum receiver for QI has been a longstanding problem. Guha and Erkmen~\cite{Guha2009} introduced and analyzed the optical parametric amplifier (OPA) receiver, showing that its error-probability exponent for equally-likely target absence or presence is 3\,dB greater than that of optimum CI. A subsequent experiment~\cite{Zheshen_15}, which implemented the OPA receiver, verified that QI could outperform CI in an entanglement-breaking scenario.
In recent theoretical work~\cite{Zhuang2017}, we showed that sum-frequency generation (SFG) combined with a feedforward (FF) mechanism can achieve QI's full 6\,dB advantage in error-probability exponent for equally-likely target absence or presence.
Tan \emph{et al}'s~\cite{Tan2008} assumption of equally-likely target absence or presence and use of error probability as a performance metric makes their analysis Bayesian, but Bayesian analysis is \emph{not} the preferred approach for target detection, owing to the difficulty of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm (Type-I) and miss (Type-II) errors. Instead, radar theory opts for the Neyman-Pearson performance criterion, in which optimum target detection maximizes the detection probability, $P_D \equiv \Pr(\mbox{decide present}\mid\mbox{present})$, subject to a constraint on the false-alarm probability, $P_F \equiv \Pr(\mbox{decide present}\mid \mbox{absent})$. (The detection probability satisfies $P_D = 1-P_M$, where $P_M \equiv \Pr(\mbox{decide absent}\mid \mbox{present})$ is the miss probability.) Spedalieri and Braunstein~\cite{Spedalieri2014} derived the optimum trade-off between the false-alarm and miss-probability error exponents in the asymptotic ($M\rightarrow \infty$) limit of $M$-copy quantum-state discrimination. More recently, Wilde \emph{et al}~\cite{Wilde2016} showed that for fixed false-alarm probability, QI's miss-probability exponent greatly exceeds that of the optimum CI scheme. For weakly-reflecting targets embedded in high-brightness noise, however, Wilde \emph{et al}'s result only holds when $P_M$ is extremely low, e.g., $P_M \sim10^{-30}$ or lower. In this paper we use results from our FF-SFG analysis~\cite{Zhuang2017} to obtain the receiver operating characteristic (ROC)---i.e., the trade-off between $P_D$ and $P_F$---for optimum QI target detection, and compare it to the ROCs of QI target detection with OPA reception and optimum CI target detection.
The rest of the paper is organized as follows. In Sec.~\ref{QIdetection} we describe QI target detection, as introduced by Tan \emph{et al}~\cite{Tan2008}, and contrast two general approaches to multiple-copy, quantum-state discrimination that will help later in understanding why QI target detection with OPA reception is inferior to QI target detection using FF-SFG reception. Sections~\ref{OPArcvr} and \ref{SFGrcvr} are devoted, respectively, to descriptions of the OPA and FF-SFG receivers for QI target detection, including how they use the returned-signal and stored-idler mode pairs that QI provides to make their decisions as to target absence or presence. Section~\ref{ROCsec} concludes the paper with a comparison between the ROCs of QI target detection with FF-SFG reception, QI target detection with OPA reception, CI target detection with homodyne detection, and a coherent-state discrimination problem whose performance is the ultimate limit for QI target detection in the $N_S \ll 1$ regime.
\section{Target detection using quantum illumination}
\label{QIdetection}
Figure~\ref{FIG_schematic} is a schematic representation of QI target detection~\cite{Tan2008}. An entanglement source generates $M \gg 1$ independent, identically distributed (iid) signal-idler mode pairs, with photon annihilation operators $\{\hat{c}_{S_{0_m}},\hat{c}_{I_{0_m}} : 1\le m\le M\}$. Each mode pair is in a two-mode squeezed-vacuum state of mean photon number $2N_S \ll 1$. For simplicity, Fig.~\ref{FIG_schematic} shows only a single signal-idler pair $\left(S_0,I_0\right)$. The bold green dashed line denotes the maximum entanglement between $\left(S_0,I_0\right)$. The signal modes (solid green circles) interrogate the region in which the weakly-reflecting target would be located were it present. The idler modes (solid blue circles) are retained for subsequent joint measurement with noisy signal modes that are returned from the interrogated region. Assuming ideal idler storage, the annihilation operators for the idler modes at the joint measurement are $\{\hat{c}_{I_m} = \hat{c}_{I_{0_m}}: 1\le m \le M\}$.
\begin{figure}[hbt]
\centering
\subfigure[]{
\includegraphics[width=0.45\textwidth]{SFG_ROC_fig1a}
\label{FIG_schematic}
}
\subfigure[]{
\includegraphics[width=0.45\textwidth]{SFG_ROC_fig1b}
\label{FIG_LOCC}
}
\caption{(a) Schematic of QI target detection. Upper panel: target present ($h=1$). Lower panel: target absent ($h=0$). The green dashed lines show the correlation between the signals (green balls) and the idlers (blue balls), with thickness indicating correlation strength.
(b) Measurement schemes. Upper panel: local operations plus classical communication (LOCC) using individual unitary transformations followed by positive operator-valued measurements (POVMs) for each mode pair that may be connected by classical (feedforward or feedback) communication and whose outputs are pooled to reach a final target absence or presence decision ($\tilde{h} = 0$ or 1). Lower panel: collective operation using a unitary transformation $\hat{U}$ operating on all the mode pairs followed by a single POVM to reach a target absence or presence decision.
}
\end{figure}
Figure~\ref{FIG_schematic}'s upper panel shows that in the presence of a target, i.e., $h=1$, the returned signal modes contain a weak reflection from the target (the small solid green circle) embedded in a bright noise background (the red cloud). The residual signal photons from the transmitter have a weak phase-sensitive cross correlation with the stored idler modes, indicated by the green dashed lines. Thus, when $h=1$ the returned signal modes are described by the annihilation operators $\{\hat{c}_{S_m} = \sqrt{\kappa}\,\hat{c}_{S_{0_m}}+\sqrt{1-\kappa}\,\hat{c}_{N_m}: 1\le m \le M\}$, where $\kappa\ll1$ is the transmitter-to-target-to-receiver transmissivity and the $\{\hat{c}_{N_m}\}$ are annihilation operators for noise modes, each of which is in a thermal state containing $N_B /(1-\kappa) \gg 1$ photons on average.
Figure~\ref{FIG_schematic}'s lower panel shows that in the absence of a target, i.e., $h=0$, the returned signal modes are due solely to the bright noise background (the red cloud). As such, there is no phase-sensitive cross correlation between the returned signal modes and the stored idler modes, as illustrated by the absence of green dashed lines. At the receiver, the annihilation operators for the signal modes are then $\{\hat{c}_{S_m} = \hat{c}_{N_m} : 1\le m \le M\}$, where the noise modes are now each in thermal states containing $N_B$ photons on average, so that there is no passive signature of target presence~\cite{Tan2008}.
Conditioned on the true hypothesis $h$, the returned signal and stored idler mode pairs, $\{\hat{c}_{S_m},\hat{c}_{I_m}: 1 \le m \le M\}$, are in iid zero-mean Gaussian states that are completely determined by their Wigner covariance matrices, viz.,
\begin{eqnarray}
&
{\mathbf{\Lambda}}_h =
\frac{1}{4}
\left(
\begin{array}{cc}
(2N_B+1) {\mathbf I}&2\sqrt{\kappa N_S(N_S+1)}\,{\mathbf Z}\delta_{1h}\\
2\sqrt{\kappa N_S(N_S+1)}\, {\mathbf Z}\delta_{1h}&(2N_S+1){\mathbf I}
\end{array}
\right),
&
\label{CovReturnIdler_main}
\end{eqnarray}
for $h=0,1$. In this expression: ${\mathbf I} = {\rm diag}(1,1)$; ${\mathbf Z}={\rm diag}(1,-1)$; $\delta_{ih}$ is the Kronecker delta function; we have used $(2N_B + 1)$ in lieu of $(2\kappa N_S + 2N_B + 1)$ because $\kappa \ll 1$, $N_S \ll 1$, and $N_B \gg 1$; and $\sqrt{\kappa N_S(N_S+1)}$ is the residual phase-sensitive cross correlation between the returned signal and stored idler modes that heralds target presence. It follows that the task of QI target detection is identifying the presence of that phase-sensitive cross correlation.
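As a consistency check, the off-diagonal entry in Eq.~(\ref{CovReturnIdler_main}) follows directly from the source and channel models stated above: a two-mode squeezed-vacuum state with $N_S$ photons per mode has phase-sensitive cross correlation $\langle\hat{c}_{S_{0_m}}\hat{c}_{I_{0_m}}\rangle = \sqrt{N_S(N_S+1)}$, so that under $h=1$, with ideal idler storage,
\begin{eqnarray}
\langle\hat{c}_{S_m}\hat{c}_{I_m}\rangle &=& \sqrt{\kappa}\,\langle\hat{c}_{S_{0_m}}\hat{c}_{I_{0_m}}\rangle + \sqrt{1-\kappa}\,\langle\hat{c}_{N_m}\hat{c}_{I_{0_m}}\rangle \nonumber \\
&=& \sqrt{\kappa N_S(N_S+1)},
\end{eqnarray}
because the noise modes are in thermal states that are independent of the idler modes. Under $h=0$ the returned signal modes consist solely of noise, so the cross correlation vanishes, in agreement with the $\delta_{1h}$ factor.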
At this juncture it is useful to present two generic approaches to sensing whether the returned-signal, stored-idler mode pairs possess a phase-sensitive cross correlation: local operations plus classical communication (LOCC), and collective operations. These approaches---shown schematically in Fig.~\ref{FIG_LOCC}---will appear later in the guises of the OPA receiver and the FF-SFG receiver. For now we merely note the following points. The LOCC scheme performs unitary transformations followed by positive operator-valued measurements (POVMs) on each mode pair that may be connected by classical (feedforward or feedback) communication and whose outcomes are pooled to determine its decision, $\tilde{h} = 0$ or 1, as to whether the target is absent ($\tilde{h} = 0$) or present ($\tilde{h} = 1$). The collective approach, in contrast, applies a unitary transformation to \emph{all} of the mode pairs and then performs a single POVM to generate its $\tilde{h}$.
\section{The OPA Receiver}
\label{OPArcvr}
Helstrom~\cite{Helstrom1969} showed that Neyman-Pearson optimum hypothesis testing---for discriminating between the density operators $\hat{\rho}_h^{\otimes M}$ for $h=0,1$---is realized by taking $\tilde{h}$ to be the outcome of the POVM $u(\hat{\rho}_1^{\otimes M} -\zeta\hat{\rho}_0^{\otimes M})$, where $u(x) = 1$ for $x \ge 0$ and 0 otherwise. Here, because $P_F = {\rm Tr}[\hat{\rho}_0^{\otimes M}u(\hat{\rho}_1^{\otimes M} -\zeta\hat{\rho}_0^{\otimes M})]$, the constant $\zeta$ is chosen to saturate the Neyman-Pearson criterion's constraint on that quantity. Unfortunately, analytical expressions for $P_F$ and $P_D= {\rm Tr}[\hat{\rho}_1^{\otimes M}u(\hat{\rho}_1^{\otimes M} -\zeta\hat{\rho}_0^{\otimes M})]$ are unavailable for QI target detection's density operators. It is worth noting, in this regard, that Helstrom's optimum POVM for minimum error-probability QI target detection takes the form given above with $\zeta = \pi_0/\pi_1$, where $\pi_h$ is the prior probability of hypothesis $h$.
Helstrom's minimum error-probability POVM leads to an error-probability exponent---$\mathcal{E} \equiv -\lim_{M\to \infty}[\ln(\Pr(e))/M]$, where $\Pr(e) = \pi_0P_F + \pi_1P_M$---that is given by the quantum Chernoff bound (QCB)~\cite{Audenaert2007},
\begin{equation}
\mathcal{E}_{\rm QCB} = -\ln\!\left[\min_{0\le s\le 1} {\rm Tr}\!\left(\hat{\rho}_0^s \hat{\rho}_1^{1-s}\right)\right],
\end{equation}
for all nondegenerate priors ($\pi_0\pi_1\neq 0$). Because $\hat{\rho}_0$ and $\hat{\rho}_1$ for QI target detection are both Gaussian states, $\mathcal{E}_{\rm QCB}$ can be obtained analytically~\cite{Pirandola2008,Spedalieri2014}. It then turns out that $\mathcal{E}_{\rm QCB} \rightarrow \kappa N_S/N_B$ in the $N_S\ll1, N_B\gg1$ limit~\cite{Tan2008}. Figure~\ref{FIG_qcb} plots $\mathcal{E}_{\rm QCB}/(\kappa N_S/N_B)$ versus $N_S$ for a variety of $N_B$ values. We see that the asymptotic formula works well when $N_B>20$ and $N_S\le 10^{-3}$.
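For later comparison, the asymptotic error-probability exponents quoted in the Introduction can be collected in one place: in the $\kappa \ll 1$, $N_S \ll 1$, $N_B \gg 1$ regime~\cite{Tan2008,Guha2009},
\begin{equation}
\mathcal{E}_{\rm QCB}^{\rm QI} \rightarrow \frac{\kappa N_S}{N_B}, \qquad
\mathcal{E}^{\rm OPA} \rightarrow \frac{\kappa N_S}{2N_B}, \qquad
\mathcal{E}_{\rm QCB}^{\rm CI} \rightarrow \frac{\kappa N_S}{4N_B},
\end{equation}
so that QI's 6\,dB advantage over optimum CI, and the OPA receiver's 3\,dB advantage, correspond to factors of 4 and 2 in the exponent.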
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{SFG_ROC_fig2}
\caption{
Plots of QI target detection's QCB error-probability exponent normalized by its $N_S \ll 1$, $N_B \gg 1$ asymptote versus $N_S$ for various $N_B$ values. \label{FIG_qcb}
}
\end{figure}
When Ref.~\cite{Tan2008} was published, it was known that none of the three conventional optical receivers---heterodyne, homodyne, or direct detection---yielded any advantage in error-probability exponent in QI target detection. It was not until the work of Guha and Erkmen~\cite{Guha2009}, and the subsequent experiment by Zhang \emph{et al}~\cite{Zheshen_15}, that an architecture---the OPA receiver---which afforded QI target detection a performance advantage over CI target detection was proposed, analyzed, and experimentally demonstrated.
Figure~\ref{OPA_schematic} shows a schematic of the OPA receiver. Each returned-signal and stored-idler mode pair undergoes a two-mode squeezing (TMS) operation governed by the gain-$G$ Bogoliubov transformation
\begin{eqnarray}
\hat{d}_{S_m}&=&\sqrt{G}\hat{c}_{S_m}+\sqrt{G-1} \hat{c}_{I_m}^\dagger
\\
\hat{d}_{I_m}&=&\sqrt{G}\hat{c}_{I_m}+\sqrt{G-1} \hat{c}_{S_m}^\dagger,
\end{eqnarray}
with $0 < G-1 \ll 1$. This TMS operation converts the absence or presence of a phase-sensitive cross correlation between the $\hat{c}_{S_m}$ and $\hat{c}_{I_m}$ modes into a difference in the average photon number in the $\hat{d}_{I_m}$ mode. The OPA receiver's $\tilde{h} = 0$ or 1 decision is made by measuring the total photon number in the $\{\hat{d}_{I_m}\}$ modes---i.e., measuring $\hat{N}_d \equiv \sum_{m=1}^M\hat{d}_{I_m}^\dagger\hat{d}_{I_m}$---and comparing its outcome $n_d$ with a threshold that maximizes the posterior probability, viz.,
\begin{equation}
\tilde{h}_{\rm OPA} = \arg\max_j\!\left[\pi_jP_{\hat{N}_d}^{(j)}(n_d)\right],
\end{equation}
where $P_{\hat{N}_d}^{(j)}(n_d)$ is the conditional probability of getting $n_d$ given that $h=j$. Note that although we have described the OPA receiver on a mode-pair basis, its $M$ TMS operations can be performed simultaneously using a low-gain optical parametric amplifier, and its total photon-number measurement $\hat{N}_d$ can be accomplished by direct detection, thus enabling a convenient experimental realization~\cite{Zheshen_15}.
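To make the photon-number signature explicit, the conditional mean photon number of each $\hat{d}_{I_m}$ mode follows from the Bogoliubov transformation and the covariance matrix in Eq.~(\ref{CovReturnIdler_main}):
\begin{equation}
N^{(h)} \equiv \langle\hat{d}_{I_m}^\dagger\hat{d}_{I_m}\rangle
= GN_S + (G-1)(N_B+1) + 2\sqrt{G(G-1)\kappa N_S(N_S+1)}\,\delta_{1h},
\end{equation}
where, consistent with Eq.~(\ref{CovReturnIdler_main}), we have neglected $\kappa N_S$ relative to $N_B$ under $h=1$. Target presence thus raises the mean of the total count $\hat{N}_d$ by $2M\sqrt{G(G-1)\kappa N_S(N_S+1)}$, which is the signature the threshold test exploits.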
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{SFG_ROC_fig3}
\caption{Schematic of the OPA receiver~\cite{Guha2009}.
\label{OPA_schematic}
}
\end{figure}
The iid nature of the returned-signal and stored-idler mode pairs, conditioned on target absence or presence, implies that the $\hat{d}_{I_m}$ modes will also be iid given $h$. Furthermore, $M\gg 1$ then provides a central-limit-theorem justification for a Gaussian approximation to the $P_{\hat{N}_d}^{(j)}(n_d)$ distribution. Using this approximation we can calculate the error probability for equally-likely target absence or presence, minimize it over the OPA gain, and show that the resulting error-probability exponent is 3\,dB inferior to $\mathcal{E}_{\rm QCB}$ in the asymptotic ($\kappa \ll1, N_S\ll 1, N_B \gg 1$) regime. We have also used the Gaussian approximation and OPA gain optimization to obtain OPA reception's ROC that we will present and discuss in Sec.~\ref{ROCsec}.
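Under that Gaussian approximation, a simple sketch of the resulting ROC is available: writing $N^{(h)}$ for the conditional mean photon number of each $\hat{d}_{I_m}$ mode and approximating those modes as thermal, so that the per-mode photon-number variance is $\sigma_h^2 = N^{(h)}(N^{(h)}+1)$, a threshold test that declares $\tilde{h}=1$ when $n_d > n_{\rm th}$ has
\begin{equation}
P_F \approx \frac{1}{2}\,{\rm erfc}\!\left(\frac{n_{\rm th}-MN^{(0)}}{\sqrt{2M\sigma_0^2}}\right), \qquad
P_D \approx \frac{1}{2}\,{\rm erfc}\!\left(\frac{n_{\rm th}-MN^{(1)}}{\sqrt{2M\sigma_1^2}}\right),
\end{equation}
and the ROC is traced out by sweeping the threshold $n_{\rm th}$, with the OPA gain optimized at each $P_F$ value.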
The OPA receiver's suboptimality stems from its being an LOCC system~\cite{Acin_2005}. The LOCC approach is capable of minimum error-probability quantum reception for multiple-copy, pure-state discrimination, but QI target detection in the $\kappa \ll 1, N_S\ll 1, N_B \gg 1$ operating regime is a multiple-copy, mixed-state discrimination problem for which it is known that a collective measurement is needed to achieve that performance~\cite{Calsamiglia_2010}. Indeed, it has been recently shown that QI target detection using LOCC reception can achieve at most a 3\,dB advantage in error-probability exponent over CI~\cite{Sanz_2017}.
\section{The FF-SFG receiver}
\label{SFGrcvr}
We have just seen that the Helstrom POVM for optimum QI target detection cannot be realized with the LOCC approach. Instead, a collective measurement is required. In principle, that collective measurement can be implemented by a quantum Schur transform~\cite{Bacon_2006} on a quantum computer. We have recently introduced the FF-SFG receiver~\cite{Zhuang2017}, and showed it to be the first architecture---short of a quantum computer---whose error-probability exponent for equally-likely target absence or presence achieves QI target detection's 6\,dB advantage over CI in the $N_S \ll 1$ low-signal-brightness regime. The FF-SFG receiver builds on two guiding principles: (1) that SFG is the inverse of the down-conversion process that generates $M$ modes of two-mode squeezed states from a single-mode coherent-state pump; and (2) that the Dolinar receiver~\cite{Dolinar1973} achieves minimum error-probability discrimination between arbitrary coherent-state hypotheses. As noted in~\cite{Zhuang2017}, the FF-SFG receiver can be adapted to realize Helstrom's Neyman-Pearson POVM $u(\hat{\rho}_1^{\otimes M} -\zeta\hat{\rho}_0^{\otimes M})$ for QI target detection merely by modifying the FF-SFG receiver's Bayesian update rule (see below) to use $\pi_1 =1/(1+\zeta)$ for the prior probability of target presence.
The FF-SFG receiver entails $K$ cycles, as shown in Fig.~\ref{SFG_schematic}. Each cycle employs: three TMS operations whose squeeze parameters are determined from measurement information fed forward from the preceding cycle; an SFG process that, assuming the previous cycle's tentative decision as to target absence or presence is correct, almost fully converts any phase-sensitive cross correlation in its input modes into auxiliary-mode photons at its output; and photon-number measurements on the auxiliary modes. The photon-number measurement outcomes are fed into a Bayesian-update rule that dictates the next tentative target absence or presence decision based on the information available up to that point in the reception process. The Bayesian-update rule also produces feedforward information that controls the TMS operations in the next cycle. The total number of cycles is chosen to ensure receiver performance that is close to the quantum optimum.
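Although the cycle-by-cycle details are given in~\cite{Zhuang2017}, the update rule has the generic Bayesian structure: writing $n_j$ for the auxiliary-mode photon count recorded in cycle $j$ and $P_j(n_j\mid h)$ for its conditional distribution (which depends on the feedforward settings, a dependence we suppress here), the posterior probability of target presence after cycle $k$ is
\begin{equation}
\Pr(h=1\mid n_0,\ldots,n_k) = \frac{\pi_1\prod_{j=0}^{k}P_j(n_j\mid 1)}{\sum_{h'=0,1}\pi_{h'}\prod_{j=0}^{k}P_j(n_j\mid h')},
\end{equation}
and the next tentative decision $\tilde{h}_{k+1}$ is the hypothesis with the larger posterior probability.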
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{SFG_ROC_fig4}
\caption{Schematic of the FF-SFG receiver~\cite{Zhuang2017}. The top panel shows how two of its $K$ cycles are connected. The bottom panel shows the details of one of those cycles. SFG denotes sum-frequency generation. $S(\cdot)$ denotes a TMS operation. FF denotes feedforward.
\label{SFG_schematic}
}
\end{figure}
Now let us explain how the FF-SFG receiver achieves minimum error-probability performance for an arbitrary but given set of priors, $\{\pi_0,\pi_1\}$; for full details see~\cite{Zhuang2017}. (Setting $\zeta = \pi_0/\pi_1$ leads to this receiver's realizing the maximum $P_D$ value consistent with $P_F = {\rm Tr}[\hat{\rho}_0^{\otimes M}u(\hat{\rho}_1^{\otimes M} -\zeta\hat{\rho}_0^{\otimes M})]$.) Akin to the OPA receiver, the FF-SFG receiver converts phase-sensitive cross correlation into photon-number information that can be measured by direct detection. Unlike the OPA receiver, which uses LOCC operations on {\em individual} mode pairs, the FF-SFG receiver applies a joint operation to {\em all} mode pairs. In particular, each of its cycles uses an SFG process that operates on a collection of weak signal-idler mode pairs and a vacuum auxiliary mode~\cite{Zhuang2017}. If the tentative decision from the previous cycle is correct, this SFG process will convert almost all of any phase-sensitive cross correlation in each signal-idler mode pair into a coherent state of the auxiliary mode embedded in a weak thermal background. Critically, the coherent states that SFG creates from the $M$ mode pairs at its input are in phase. Thus their coherent-state contributions to the auxiliary-mode output add constructively. As such, the SFG operation is {\em not} LOCC, opening a path for optimum QI target detection.
The inputs to the FF-SFG receiver's first cycle ($k=0$) are the returned-signal and the stored-idler mode pairs, represented by annihilation operators $\hat{c}_{S_m}^{(0)} = \hat{c}_{S_m}$ and $\hat{c}_{I_m}^{(0)} = \hat{c}_{I_m}$. A beam splitter with transmissivity $\eta\ll 1$ taps a small portion of each $\hat{c}_{S_m}^{(k)}$ mode, yielding a weak transmitted mode $\hat{c}_{S_m,1}^{(k)}$ to undergo the TMS operation $S(r_k)$ with the $\hat{c}_{I_m}^{(k)}$ mode, and a strong $\hat{c}_{S_m,2}^{(k)}$ mode that is retained. The TMS operation's squeezing parameter, $r_{k}$, is computed from $\tilde{h}_k$, which is the tentative absence or presence decision made prior to the present cycle. (For the $k=0$ cycle, that tentative decision is derived solely from the prior probabilities.) The $r_k$ value is chosen to almost purge any phase-sensitive cross correlation between the $\{\hat{c}_{S_m,1}^{(k)},\hat{c}_{I_m}^{(k)}\}$ mode pairs from the $S(r_k)$ operation's output mode pairs when $\tilde{h}_k$ is a correct decision~\cite{Zhuang2017}. $S(r_k)$'s output mode pairs undergo an SFG process that converts any residual phase-sensitive cross correlation to photons in the auxiliary sum-frequency $\hat{b}^{(k)}$ mode. Thus, the subsequent detection of photons in the $\hat{b}^{(k)}$ mode is an indication that the tentative decision $\tilde{h}_k$ was incorrect. Following the $k$th cycle's SFG operation, we apply the TMS operation $S(-r_k)$ to each signal-idler mode pair, which ensures that, when its signal-mode outputs are combined with the retained $\{\hat{c}_{S_m,2}^{(k)}\}$ modes on a second transmissivity-$\eta$ beam splitter, the $\{\hat{c}_{E_m}^{(k)}\}$ output modes contain the same number of photons as the $\hat{b}^{(k)}$ mode. 
The photon-number measurements $\hat{b}^{(k)\dagger}\hat{b}^{(k)}$ and $\sum_{m=1}^M\hat{c}_{E_m}^{(k)\dagger}\hat{c}_{E_m}^{(k)}$ then provide outcomes $N_b^{(k)}$ and $N_E^{(k)}$ that are substantial when $\tilde{h}_k$ is incorrect, but negligible when $\tilde{h}_k$ is correct.
The $k$th cycle is completed by a TMS operation $S(\varepsilon_k)$, with $\varepsilon_k=\sqrt{\eta}\,r_k$, that makes the phase-sensitive cross correlation of the signal and idler inputs to the $(k+1)$th cycle independent of $r_k$.
The Bayesian update rule that generates $\{\tilde{h}_k: 0 \le k \le K-1\}$ from the FF-SFG receiver's photon-number measurements works as follows. For $k=0$, we initialize the process using the given priors, taking $\tilde{h}_0 = \arg\max_j(\pi_j)$. The probabilities of target absence and presence conditioned on all measurement outcomes through the $(k-1)$th cycle, which serve as priors for the $k$th cycle, are given by the Bayesian update rule~\cite{Acin_2005,Assalini2011},
\begin{equation}
P_{h=j}^{(k)}=\frac{P_{h=j}^{(k-1)}P_{BE}(N_b^{(k-1)},N_E^{(k-1)};j,r_{\tilde{h}_{k-1}}^{(k-1)})}{\sum_{j=0}^1 P_{h=j}^{(k-1)}P_{BE}(N_b^{(k-1)},N_E^{(k-1)};j,r_{\tilde{h}_{k-1}}^{(k-1)})},
\label{update}
\end{equation}
for $1\le k \le K-1$, where $P_{BE}(N_b^{(k-1)},N_E^{(k-1)};j,r_{\tilde{h}_{k-1}}^{(k-1)})$ is the conditional joint probability of getting counts $N_b^{(k-1)}$ and $N_E^{(k-1)}$ given that the true hypothesis is $j$, $r_{k-1} = r_{\tilde{h}_{k-1}}^{(k-1)}$ is the decision-dependent TMS squeezing parameter for cycle $k-1$, and $P_{h=j}^{(0)} = \pi_j$. The tentative decision that determines the TMS squeezing parameter for the $k$th cycle is then $\tilde{h}_k = \arg\max_j(P_{h=j}^{(k)})$. After the last cycle ($k=K-1$), the final decision on target absence or presence is $\tilde{h}_K = \arg\max_j(P_{h=j}^{(K)})$, where the $\{P_{h=j}^{(K)}\}$ are obtained from Eq.~(\ref{update}) with $k = K$. This decision accounts for \emph{all} the information obtained from the $K$ measurement cycles. As we have shown in Ref.~\cite{Zhuang2017}, the total number of cycles needed to approach optimum FF-SFG performance is determined by the beam splitter's transmissivity $\eta$.
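The update rule (\ref{update}) is a standard two-hypothesis Bayesian recursion. The following minimal sketch (our own illustration, not the receiver implementation of Ref.~\cite{Zhuang2017}) shows that recursion with a stand-in single-count Poisson likelihood in place of the true joint distribution $P_{BE}$; the mean-count values are hypothetical.

```python
import math

def bayes_update(p_prev, likelihood):
    # p_prev: [P(h=0), P(h=1)] after the previous cycle;
    # likelihood: [P(counts | h=0), P(counts | h=1)] for this cycle's outcome.
    joint = [p * L for p, L in zip(p_prev, likelihood)]
    norm = sum(joint)
    return [x / norm for x in joint]

def poisson_pmf(n, mean):
    # Stand-in single-count likelihood; the true joint distribution P_BE
    # of (N_b, N_E) is derived in the FF-SFG analysis.
    return math.exp(-mean) * mean**n / math.factorial(n)

# Toy run: equal priors; hypothetical mean counts 0.01 (target absent)
# versus 0.5 (target present).
probs = [0.5, 0.5]
for counts in [1, 0, 1]:          # a simulated photon-count record
    probs = bayes_update(probs, [poisson_pmf(counts, 0.01),
                                 poisson_pmf(counts, 0.5)])
decision = max(range(2), key=lambda j: probs[j])
```

Each tentative decision $\tilde{h}_k=\arg\max_j P^{(k)}_{h=j}$ then selects the squeezing parameter fed forward to the next cycle.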
\section{Receiver operating characteristic comparison}
\label{ROCsec}
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{SFG_ROC_fig5}
\caption{
ROCs for QI target detection with FF-SFG reception (red dots), QI target detection with OPA reception (black solid curve), and CI target detection with coherent-state (CS) light and homodyne reception (black dashed curve). Also included is the ROC of the coherent-state Neyman-Pearson (Coherent NP) test for discriminating between the coherent state $|\sqrt{M\kappa N_S/N_B}\,\rangle$ and the vacuum state (green solid curve), which QI target detection with FF-SFG reception is known to realize when $N_S \ll 1$. All four ROCs assume $M = 10^{7.5}$, $N_S=10^{-4}$, $\kappa = 0.01$, and $N_B=20$.
\label{ROC}
}
\end{figure}
The culmination of this paper is the ROC comparison we present in this section for the $P_D$ versus $P_F$ trade-offs of QI target detection with FF-SFG reception, QI target detection with OPA reception, and CI target detection with coherent-state illumination and homodyne detection. Also included is the ROC for discriminating between the coherent state $|\sqrt{M\kappa N_S/N_B}\,\rangle$ and the vacuum state, which we have shown in Ref.~\cite{Zhuang2017} to be the FF-SFG's performance when $N_S \ll 1$. These four ROCs---which are plotted in Fig.~\ref{ROC}---all assume the same operating parameters: $M = 10^{7.5}$ transmitted modes, $N_S = 10^{-4}$ average transmitted photon number per mode, $\kappa = 0.01$ roundtrip channel transmissivity when the target is present, and $N_B = 20$ average received background photon number per mode. The FF-SFG receiver's ROC was obtained from Monte Carlo simulations done in the manner described in Ref.~\cite{Zhuang2017}. In particular, to get a point on the FF-SFG receiver's ROC, we first choose a $\zeta$ value, then initialize the Bayesian update procedure from Eq.~(\ref{update}) using the priors $\pi_0 = \zeta/(1+\zeta)$ and $\pi_1 = 1/(1+\zeta)$, and run the simulation to obtain $P_D$ and $P_F$. The OPA receiver's ROC was obtained from the Gaussian approximation to its photon-counting statistics conditioned on the true hypothesis, similar to what we have previously done for the use of QI with OPA reception to realize classical communication that is immune to passive eavesdropping~\cite{Zhang2013}. The coherent-state homodyne setup's ROC was obtained analytically from the conditional statistics of its homodyne receiver's output. Note that because $N_B \gg 1$, a homodyne receiver is essentially the optimum quantum receiver for CI target detection. 
Furthermore, the target-detection problem for the coherent-state homodyne setup reduces to distinguishing between known signals embedded in additive Gaussian noise, whose ROC is well known~\cite{VanTrees1968}. The ROC for discriminating the coherent state $|\sqrt{M\kappa N_S/N_B}\,\rangle$ from the vacuum state can be obtained analytically, as shown by Helstrom~\cite{Helstrom1976}.
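For reference, the known-signal-in-additive-Gaussian-noise ROC takes the closed form $P_D = Q\big(Q^{-1}(P_F)-d\big)$, with $Q$ the standard Gaussian tail probability and $d$ a detection index set by the signal-to-noise ratio. The sketch below (our own illustration) evaluates that formula with $d$ left as a free parameter, rather than the specific value implied by $M$, $\kappa$, $N_S$, and $N_B$.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Qinv(p, lo=-10.0, hi=10.0):
    """Inverse of Q by bisection, for 0 < p < 1 (Q is decreasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def roc_gaussian(p_f, d):
    """ROC for known-signal-in-Gaussian-noise detection:
    P_D = Q(Q^{-1}(P_F) - d), with detection index d >= 0."""
    return Q(Qinv(p_f) - d)
```

At $d=0$ the curve collapses to the chance line $P_D=P_F$, and $P_D$ increases monotonically with $d$ at fixed $P_F$.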
Figure~\ref{ROC} shows the superiority of FF-SFG reception to OPA reception in QI target detection, and the improvements that both offer over CI target detection. More importantly, Fig.~\ref{ROC} shows that the FF-SFG's ROC in the $N_S\ll 1$ limit matches that of the optimum discrimination between the coherent state $|\sqrt{M\kappa N_S/N_B}\,\rangle$ and the vacuum state, as expected from what was previously found for minimum error-probability QI target detection with equally-likely target absence or presence~\cite{Zhuang2017}. Thus we conclude that FF-SFG reception provides a structured-receiver alternative to a Schur-transform implementation on a quantum computer for achieving QI's full performance advantage for detecting the presence of a weakly-reflecting target that is embedded in a bright noise background.
\ack
QZ acknowledges support from the Claude E. Shannon Research Assistantship.
ZZ and JHS acknowledge support from AFOSR Grant No.~FA9550-14-1-0052.
QZ thanks M. M. Wilde for helpful discussions.
\section*{References}
\begin{thebibliography}{99}
\bibitem{Giovannetti2001}Giovannetti V, Lloyd S, and Maccone L 2001 \emph{Nature} {\bf 412} 417
\bibitem{Dowling2008}Dowling J P 2008 \emph{Contemp. Phys.} {\bf 49} 125
\bibitem{Sacchi_2005_1}Sacchi M F 2005 \emph{Phys. Rev. A} {\bf 72} 014305
\bibitem{Sacchi_2005_2}Sacchi M F 2005 \emph{Phys. Rev. A} {\bf 71} 062340
\bibitem{Lloyd2008}Lloyd S 2008 \emph{Science} {\bf 321} 1463
\bibitem{Tan2008}Tan S H, Erkmen B I, Giovannetti V, Guha S,
Lloyd S, Maccone L, Pirandola S, and Shapiro J H 2008
\emph{Phys. Rev. Lett.} {\bf 101} 253601
\bibitem{Lopaeva_2013}Lopaeva E D, Ruo Berchera I, Degiovanni I P, Olivares S,
Brida G, and Genovese M 2013 \emph{Phys. Rev. Lett.} {\bf 110}
153603
\bibitem{Guha2009}Guha S and Erkmen B I 2009 \emph{Phys. Rev. A} {\bf 80} 052310
\bibitem{Zheshen_15}Zhang Z, Mouradian S, Wong F N C, and
Shapiro J H 2015 \emph{Phys. Rev. Lett.} {\bf 114} 110506
\bibitem{Barzanjeh_2015}Barzanjeh S, Guha S, Weedbrook C, Vitali D,
Shapiro J H, and Pirandola S 2015 \emph{Phys. Rev. Lett.} {\bf 114} 080503
\bibitem{Audenaert2007}Audenaert K M R, Calsamiglia J, Mu\~noz-Tapia R, Bagan E, Masanes Ll, Acin A and Verstraete F 2007 \emph{Phys. Rev. Lett.} {\bf 98} 160501
\bibitem{Zhuang2017}Zhuang Q, Zhang Z and Shapiro J H 2017 \emph{Phys. Rev. Lett.} {\bf 118} 040801
\bibitem{Spedalieri2014} Spedalieri G and Braunstein S L 2014 \emph{Phys. Rev. A} {\bf 90} 052307
\bibitem{Wilde2016} Wilde M M, Tomamichel M, Lloyd S and Berta M 2016 arXiv:1608.06991
\bibitem{Helstrom1969}Helstrom C W 1969 \emph{J. Stat. Phys.} {\bf 1} 231
\bibitem{Pirandola2008} Pirandola S and Lloyd S 2008 \emph{Phys. Rev. A} {\bf 78} 012331
\bibitem{Acin_2005}Ac\'{i}n A, Bagan E, Baig M, Masanes Ll and Mu\~{n}oz-Tapia R 2005 \emph{Phys. Rev. A} {\bf 71} 032338
\bibitem{Calsamiglia_2010}Calsamiglia J, de Vicente J I, Mu\~noz-Tapia R and Bagan E 2010 \emph{Phys. Rev. Lett.} {\bf 105} 080504
\bibitem{Sanz_2017} Sanz M, Las Heras U, Garc\'{i}a-Ripoll J J, Solano E and Di Candia R 2017 \emph{Phys. Rev. Lett.} {\bf 118} 070803
\bibitem{Bacon_2006}Bacon D, Chuang I L and Harrow A W 2006 \emph{Phys. Rev. Lett.} {\bf 97} 170502
\bibitem{Dolinar1973}Dolinar S J 1973 Research Laboratory of Electronics MIT Quarterly Progress Report No.~111 115
\bibitem{Assalini2011}Assalini A, Dalla Pozza N, and Pierobon G 2011 \emph{Phys. Rev. A} {\bf 84} 022342
\bibitem{Zhang2013}Zhang Z, Tengner M, Zhong T, Wong F N C, and Shapiro J H 2013 \emph{Phys. Rev. Lett.} {\bf 111} 010501
\bibitem{VanTrees1968}Van Trees H L 1968 \emph{Detection, Estimation, and Modulation Theory, Part~I} (New York: Wiley) Chap 2
\bibitem{Helstrom1976}Helstrom C W 1976 \emph{Quantum Detection and Estimation Theory} (New York: Academic Press) Chap~IV
\end{thebibliography}
\end{document}
\begin{document}
\title{\large\bf Kramers-Fokker-Planck operators with homogeneous potentials}
\begin{abstract}
In this article we establish a global subelliptic estimate for Kramers-Fokker-Planck operators with homogeneous potentials $V(q)$ under some conditions, involving in particular the control of the eigenvalues of the Hessian matrix of the potential. This work presents a different approach from the one in \cite{Ben}, in which the case $V(q_1,q_2)=-q_1^2(q_1^2+q_2^2)^n$ was treated only for $n=1$. With this article, after the former one dealing with non-homogeneous polynomial potentials, we complete the analysis of all the examples of degenerate ellipticity at infinity presented in the framework of the Witten Laplacian by Helffer and Nier in \cite{HeNi}. As in \cite{Ben}, our subelliptic lower bounds are optimal up to a logarithmic correction.
\end{abstract}
\noindent\textbf{Key words:} subelliptic estimates, compact resolvent, Kramers-Fokker-Planck operator.\\
\noindent\textbf{MSC-2010:} 35Q84, 35H20, 35P05, 47A10, 14P10
\tableofcontents
\section{Introduction and main results}
In this work we study the Kramers-Fokker-Planck operator
\begin{align}
K_V=p.\partial_q-\partial_qV(q).\partial_p+\frac{1}{2}(-\Delta_p+p^2)~,\;\;\;\;\;(q,p)\in
\mathbb{R}^{2d}
\,,
\label{a.3eq1}
\end{align}
where $q$ denotes the space variable, $p$ denotes the velocity
variable and the potential $V(q)$
is a real-valued function defined in the
whole space $\mathbb{R}^d_q.$
Setting
\[
O_p=\frac{1}{2}(D^2_p+p^2)
\;,\quad\quad \text{and}\quad\quad
X_V=p.\partial_q-\partial_qV(q).\partial_p~,
\]
the Kramers-Fokker-Planck operator $K_V$ defined in (\ref{a.3eq1}) reads
$K_V=X_V+O_p.$ \\
We firstly list some notations used throughout the paper. We denote for an arbitrary function $V(q)$ in $\mathcal{C}^{\infty}(\mathbb{R}^d)$
\[
\begin{aligned}
\mathrm{Tr}_{+,V}(q) & = \sum\limits_{\substack{\nu\in \mathrm{Spec}(\mathrm{Hess}\; V)\\ \nu>0}} \nu(q)\,,
\\ \mathrm{Tr}_{-,V}(q) &=-\sum\limits_{\substack{\nu\in \mathrm{Spec}(\mathrm{Hess}\; V)\\ \nu\le 0}}\nu(q)\;.
\end{aligned}
\]
In particular for a polynomial $V$ of degree less than 3, $\mathrm{Tr}_{+,V}$ and $\mathrm{Tr}_{-,V}$ are two constants. In this case we define the constants $A_V$ and $B_V$ by
\begin{align*}
A_V& = \max \{(1+\mathrm{Tr}_{+,V})^{2/3}, 1+\mathrm{Tr}_{-,V}\}\;,\\
B_V &= \max\{\min\limits_{q\in\mathbb{R}^d}\left|\nabla\;V(q)\right|^{4/3}, \frac{1+\mathrm{Tr}_{-,V}}{\log(2+\mathrm{Tr}_{-,V})^2}\}\;. \end{align*}
This work is principally based on the publication by Ben Said,
Nier, and Viola \cite{BNV}, which concerns the study of
Kramers-Fokker-Planck operators with polynomials of degree less than
three. In \cite{BNV} we proved the existence of a constant $c>0$,
independent of $V$, such that the following global subelliptic estimate with remainder
\begin{align}
\|K_Vu\|^2_{L^2(\mathbb{R}^{2d})}+A_V\|u\|^2_{L^2(\mathbb{R}^{2d})}\ge
{c} \Big(\|O_pu\|^2_{L^2(\mathbb{R}^{2d})}&+\|X_Vu\|^2_{L^2(\mathbb{R}^{2d})}\nonumber\\&+\|\langle\partial_q V(q)\rangle^{2/3}u\|^2_{L^2(\mathbb{R}^{2d})}+\|\langle D_q\rangle^{2/3}u\|^2_{L^2(\mathbb{R}^{2d})}\Big)\label{eq44}
\end{align}
holds for all $u\in \mathcal{C}_0^{\infty}(\mathbb{R}^{2d}).$
Furthermore, supposing
$\mathrm{Tr}_{-,V}+\min\limits_{q\in\mathbb{R}^d}\left|\nabla\;V(q)\right|\not=0$,
there exists a constant $c>0$, independent of $V$, such that
\begin{align}
\|K_Vu\|^2_{L^2(\mathbb{R}^{2d})}\ge c\,B_V\|u\|^2_{L^2(\mathbb{R}^{2d})}~,\label{1.5mm}
\end{align}
is valid for all $u\in \mathcal{C}_0^{\infty}(\mathbb{R}^{2d}).$
As a consequence, collecting (\ref{1.5mm}) and (\ref{eq44}) together,
there is a constant $c>0$, independent of $V$, so that the global subelliptic estimate without remainder
\begin{align}
\|K_Vu\|^2_{L^2(\mathbb{R}^{2d})}\ge \frac{c}{1+\frac{A_V}{B_V}}\Big(\|O_pu\|^2_{L^2(\mathbb{R}^{2d})}&+\|X_Vu\|^2_{L^2(\mathbb{R}^{2d})}\nonumber\\&+\|\langle\partial_q V(q)\rangle^{2/3}u\|^2_{L^2(\mathbb{R}^{2d})}+\|\langle D_q\rangle^{2/3}u\|^2_{L^2(\mathbb{R}^{2d})}\Big)
\label{eq55}
\end{align}
holds for all $u\in \mathcal{C}_0^{\infty}(\mathbb{R}^{2d}).$
Here and throughout the paper we use the notation
\begin{align*}
\langle \cdot\rangle=\sqrt{1+|\cdot|^2}\;.
\end{align*}
Moreover we recall that for an arbitrary potential $V\in\mathcal{C}^{\infty}(\mathbb{R}^d)$, the Kramers-Fokker-Planck operator $K_V$ is essentially maximally accretive when endowed with the domain $\mathcal{C}_0^{\infty}(\mathbb{R}^{2d})$ (see Proposition 5.5, page 44 in \cite{HeNi}). Thanks to this property we deduce that the domain of the closure of $K_V$ is given by
\begin{align*}
D(K_V)=\left\{u\in L^2(\mathbb{R}^{2d}),\; K_Vu\in L^2(\mathbb{R}^{2d})\right\}~.
\end{align*}
Consequently, by density of $\mathcal{C}_0^{\infty}(\mathbb{R}^{2d})$
in the domain $D(K_V)$, all estimates written in this article, which
are verified for $C^\infty_0(\mathbb{R}^{2d})$ functions, can be
extended to $D(K_V).$ By a relatively bounded perturbation argument,
with relative bound less than $1$,
this result holds as well when $V\in
\mathcal{C}^{\infty}(\mathbb{R}^d\setminus\left\{0\right\})$ is a
homogeneous function of degree $r>1$.
Our results will require the following assumption after setting \begin{align}
\mathcal{S}=\left\lbrace q\in\mathbb{R}^{d},\;\;|q|=1 \right\rbrace\;.\end{align}
\begin{assumption}\label{a.31.4}
The potential $V(q)$ is a homogeneous function of degree $r> 2$ in\\ $\mathcal{C}^{\infty}(\mathbb{R}^d\setminus\left\{0\right\})$ and satisfies:
\begin{align}
\forall\;\; q\in\mathcal{S}\;,\;\;\;\;\;\;\;\;\;\partial_qV(q)=0\Rightarrow \mathrm{Tr}_{-,V}(q)>0\;.\label{a31.4.}
\end{align}
\end{assumption}
Our main result is the following.
\begin{thm}\label{a.3thm1.1}
If the potential $V(q)$ verifies Assumption \ref{a.31.4}, then
there exists a constant $C_{V}>1$ (depending on $V$) such that
\begin{align}
\|K_{V}u\|^2_{L^2}+C_{V}\|u\|^2_{L^2}\ge \frac{1}{C_V}\Big(\|L(O_p)u\|^2_{L^2}&+\|L(\langle\nabla V(q)\rangle^{\frac{2}{3}}) u\|^2_{L^2}\nonumber\\&+\|L(\langle\mathrm{Hess}\; V(q)\rangle^{\frac{1}{2}} ) u\|^2_{L^2}+\|L(\langle D_q\rangle^{\frac{2}{3}} ) u\|^2_{L^2}\Big)~,\label{a.31.6}
\end{align}
holds for all $u\in D(K_{V})$, where $L(s)=\frac{s+1}{\log(s+1)}$ for any $s\ge1.$
\end{thm}
\begin{cor}
\label{cor}
The Kramers-Fokker-Planck operator $K_V$ with a potential $V(q)$ satisfying Assumption \ref{a.31.4} has a compact resolvent.
\end{cor}
\begin{proof}
Let $0<\delta<1.$ Define the functions $f_\delta:\mathbb{R}^d\to\mathbb{R}$ by $$f_\delta(q)= |\nabla V(q)|^{\frac{4}{3}(1-\delta)}+|\mathrm{Hess}\,V(q)|^{1-\delta}~.$$
As a result of (\ref{a.31.6}) in Theorem~\ref{a.3thm1.1} there is a constant $C_V>1$ such that
\begin{align*}
\|K_Vu\|^2_{L^2}+C_V\|u\|^2_{L^2}\ge\frac{1}{C_V}\Big(\langle u,f_\delta u\rangle+ \|L(O_p)u\|^2_{L^2}+\|L(\langle D_q\rangle^{\frac{2}{3}} ) u\|^2_{L^2}\Big)~,
\end{align*}
holds for all $u\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d})$ and all $\delta\in (0,1).$
In order to show that the operator $K_V$ has a compact resolvent, it suffices to prove that $\lim\limits_{|q|\to +\infty}f_\delta(q)=+\infty.$
The argument rests on how the different derivatives scale. Consider the unit sphere $\mathcal{S}=\{q\in\mathbb{R}^{d}:|q|=1\}$. By Assumption \ref{a.31.4}, at every point of $\mathcal{S}$ either $\nabla V\neq 0$ or $|\mathrm{Hess}\; V|\neq 0$, so the function $f_\delta$ is positive on $\mathcal{S}$. Since $f_\delta$ is continuous on $\mathcal{S}$, it attains a positive minimum there, which we denote $m_\delta>0$.
For any $y$ with $|y|>1$ there exist $\lambda>1$ and $q\in\mathcal{S}$ such that $y=\lambda q$. By homogeneity,
$$
V(y)=\lambda^rV\left(\frac{y}{\lambda}\right)=\lambda^rV(q)
$$
and therefore, by the chain rule
$$
|\nabla V(y)|=\lambda^{r-1}|\nabla V(q)|
$$
and
$$
|\mathrm{Hess}\;V(y)|=\lambda^{d(r-2)}|\mathrm{Hess}\;V(q)|.
$$
Adding these up,
$$
|\nabla V(y)|^{\frac{4}{3}(1-\delta)}+|\mathrm{Hess}\,V(y)|^{1-\delta}\ge \lambda^{(1-\delta)\min\{\frac{4}{3}(r-1),d(r-2)\}}f_\delta(q)\ge m_\delta\lambda^{(1-\delta)\min\{\frac{4}{3}(r-1),d(r-2)\}}
$$
which goes to infinity as $|y|=\lambda\to\infty$, since by assumption $r>2$.
\end{proof}
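The two scaling identities used in the proof can be checked numerically on a sample potential. The sketch below (our own illustration) takes the homogeneous polynomial $V(x,y)=x^4+x^2y^2$, so $r=4$ and $d=2$, and interprets $|\mathrm{Hess}\,V|$ as the determinant, which is what produces the exponent $d(r-2)$.

```python
import math

# Sample homogeneous potential V(x, y) = x^4 + x^2 y^2 (degree r = 4, d = 2),
# with its gradient and Hessian determinant written out by hand.
r, d = 4, 2

def grad(x, y):
    return (4*x**3 + 2*x*y**2, 2*x**2*y)

def hess_det(x, y):
    vxx = 12*x**2 + 2*y**2
    vxy = 4*x*y
    vyy = 2*x**2
    return vxx*vyy - vxy**2

q = (0.6, 0.8)          # a point on the unit sphere
lam = 3.0
y = (lam*q[0], lam*q[1])

# |grad V(lam q)| = lam^{r-1} |grad V(q)|
ratio_grad = math.hypot(*grad(*y)) / math.hypot(*grad(*q))
# det Hess V(lam q) = lam^{d(r-2)} det Hess V(q)
ratio_hess = hess_det(*y) / hess_det(*q)
```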
\begin{RQ}
The result of Corollary \ref{cor} does not hold for a homogeneous polynomial of degree 2 with degenerate Hessian. Indeed, in this case the resolvent of the Kramers-Fokker-Planck operator $K_V$ is not compact, since neither is that of the corresponding Witten Laplacian (cf. Proposition 5.19 and Theorem 10.16 in \cite{HeNi}).
\end{RQ}
\begin{RQ}
Our results are in agreement with those of Wei-Xi Li \cite{Li,Li2} and with those of Helffer and Nier on the Witten Laplacian with homogeneous potential \cite{HeNi1}.
\end{RQ}
\section{Observations and first inequalities}
\subsection{Dyadic partition of unity}
In this paper, we make use of a locally finite dyadic partition of unity with respect to the position variable $q \in \mathbb{R}^d.$ Such a partition is described in the following proposition; for a detailed proof, we refer to \cite{BCD} (page 59).
\begin{prop}
Let $\mathcal{C}$ be the shell $\left\lbrace x\in\mathbb{R}^{d},\;\;
\frac{3}{4}< |x|<\frac{8}{3} \right\rbrace.$ There
exist radial functions $\chi$ and $\phi$ valued in the interval $[0,
1],$ belonging respectively to $\mathcal{C}^{\infty}_{0}(B(0,
\frac{4}{3}))$ and to $\mathcal{C}^{\infty}_{0}(\mathcal{C})$ such that
\begin{align*}
\forall x\in \mathbb{R}^d,\quad\quad\chi(x) +\sum_{j\ge0}\phi(2^{-j}
x) = 1\;,
\end{align*}
\begin{align*}
\forall x\in \mathbb{R}^d\setminus\left\{0\right\},\quad\quad\sum_{j\in\mathbb{Z}}\phi(2^{-j}
x) = 1\;.
\end{align*}
\end{prop}
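One concrete construction of such functions (a minimal sketch of our own, not the construction of \cite{BCD}) starts from the standard smooth step, takes $\chi$ radial with $\chi=1$ on $B(0,\frac{3}{4})$ and $\mathrm{supp}\,\chi\subset B(0,\frac{4}{3})$, and sets $\phi(x)=\chi(x/2)-\chi(x)$, so that $\phi$ is supported in the shell $\mathcal{C}$ and the sum over dyadic scales telescopes to $1$:

```python
import math

def smooth_step(t):
    """C^infinity function equal to 0 for t <= 0 and 1 for t >= 1."""
    def f(s):
        return math.exp(-1.0/s) if s > 0 else 0.0
    return f(t) / (f(t) + f(1.0 - t))

def chi(x):
    """Radial cutoff on R: 1 for |x| <= 3/4, 0 for |x| >= 4/3."""
    return 1.0 - smooth_step((abs(x) - 0.75) / (4.0/3.0 - 0.75))

def phi(x):
    """phi(x) = chi(x/2) - chi(x), supported in 3/4 < |x| < 8/3."""
    return chi(x/2.0) - chi(x)

def partition_sum(x, J=40):
    # chi(x) + sum_{j=0}^{J} phi(2^{-j} x) telescopes to chi(2^{-J-1} x),
    # which equals 1 once 2^{-J-1}|x| < 3/4.
    return chi(x) + sum(phi(2.0**(-j) * x) for j in range(J + 1))
```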
Setting for all $q\in\mathbb{R}^d,$
\begin{align*}&\chi_{-1}(q)=\frac{\chi(2q)}{\Big(\chi^2(2q)+\sum\limits_{j'\ge0}\phi^2(2^{-j'}
q)\Big)^{\frac{1}{2}}}=\frac{\chi(2q)}{\Big(\chi^2(2q)+\phi^2(q)\Big)^{\frac{1}{2}}}\;,\\&
\chi_{j}(q)=\frac{\phi(2^{-j}
q)}{\Big(\chi^2(2q)+\sum\limits_{j'\ge0}\phi^2(2^{-j'}
q)\Big)^{\frac{1}{2}}}\stackrel{\text{if}~j\geq 2}{=}
\frac{\phi(2^{-j}
q)}{\Big(\sum\limits_{j-1\leq j'\leq j+1}\phi^2(2^{-j'}
q)\Big)^{\frac{1}{2}}}\;,
\end{align*}
we get a locally finite dyadic partition of unity
\begin{align}
\sum_{j\geq-1}\chi_j^2(q)=
\widetilde{\chi}_{-1}^{2}(2|q|) +\widetilde{\chi}_{0}^{2}(2|q|)+\sum_{j\geq 0}\widetilde{\chi}^2(2^{-j}|q|)=1\label{a.32.1}
\end{align}
where the cutoff functions $\widetilde{\chi}_{0}$, $\widetilde{\chi}$ and $\widetilde{\chi}_{-1}$ belong
respectively to $\mathcal{C}_0^{\infty}(\left]
\frac{3}{4},\frac{8}{3}\right[)$, $\mathcal{C}_0^{\infty}(\left] \frac{3}{4},\frac{8}{3}\right[)$ and $\mathcal{C}_0^{\infty}(\left] 0,{\frac{4}{3}}\right[).$
\begin{lem}\label{a.3lem2.2}
Let $V$ be in $\mathcal{C}^{\infty}(\mathbb{R}^{d}\setminus\left\{0\right\}).$ Consider the Kramers-Fokker-Planck operator $K_{V}$ defined as in (\ref{a.3eq1}). For a locally finite partition of unity $\sum\limits_{j\ge-1}\chi^2_j(q)=1$ one has
\begin{align}
\|K_{V}u\|^2_{L^2(\mathbb{R}^{2d})}=\sum\limits_{j\ge-1}\|K_{V}(\chi_ju)\|^2_{L^2(\mathbb{R}^{2d})}-\|(p\partial_q\chi_j)u\|^2_{L^2(\mathbb{R}^{2d})}\label{a.32.4}~,
\end{align}
for all $u\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d}).$
In particular when the cutoff functions $\chi_j$ have the form (\ref{a.32.1}), there exists a uniform constant $c>0$ so that
\begin{align}
(1+4c)\|K_{V}u\|^2_{L^2(\mathbb{R}^{2d})}+c\|u\|^2_{L^2(\mathbb{R}^{2d})}\ge\sum\limits_{j\ge-1}\|K_{V}(\chi_ju)\|^2_{L^2(\mathbb{R}^{2d})},\label{a.32.5}
\end{align}
holds for all $u\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d}).$
\end{lem}
\begin{proof}
The proof of the equality (\ref{a.32.4}) is detailed in \cite{Ben}. It remains to show the inequality (\ref{a.32.5}). Consider a locally finite dyadic partition of unity
\begin{align}
\sum_{j\geq-1}\chi_j^2(q)=1\;,
\end{align}
where for all $j\in\mathbb{N},$ the cutoff functions $ \chi_{j}$ and $\chi_{-1}$ are respectively supported in the shell \\$\left\lbrace q\in\mathbb{R}^{d},\;\; 2^j\frac{3}{4}\le |q|\le 2^j\frac{8}{3} \right\rbrace$ and in the ball $B( 0,\frac{3}{4}).$
Since the partition is locally finite, for each index $j\ge-1$ there are finitely many $j'$ such that $(\partial_q\chi_j)\chi_{j'}$ is nonzero.
Along these lines, there exists a uniform constant $c>0$ so that
\begin{align}
\sum\limits_{j\geq-1}\|(p\partial_q\chi_j)u\|^2_{L^2}&=\sum\limits_{j\geq-1}\sum\limits_{j'\geq-1}\|(p\partial_q\chi_j)\chi_{j'}u\|^2_{L^2}\nonumber\\&\le c\sum\limits_{j\geq-1}\frac{1}{(2^j)^2}\|p\chi_ju\|^2_{L^2}~,\label{a.32.6}
\end{align}
holds for all $u\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d}).$
On the other hand, for every $u\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d}),$
\begin{align}
c\sum\limits_{j\geq-1}\frac{1}{(2^j)^2}\|p\chi_ju\|^2_{L^2}\le 4c\,\|pu\|^2_{L^2}\le8c\,\mathrm{Re}\;\langle u,K_Vu\rangle\le 4c\,(\|u\|^2_{L^2}+\|K_Vu\|^2_{L^2})\;.\label{a.32.7}
\end{align}
Collecting the estimates (\ref{a.32.4}), (\ref{a.32.6}) and (\ref{a.32.7}), we establish the desired inequality (\ref{a.32.5}).
\end{proof}
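The bound $\|pu\|^2_{L^2}\le 2\,\mathrm{Re}\,\langle u,K_Vu\rangle$ invoked in (\ref{a.32.7}) deserves a one-line justification: the transport part $X_V$ is a divergence-free vector field, hence skew-adjoint on $L^2(\mathbb{R}^{2d})$, so

```latex
\begin{align*}
\mathrm{Re}\,\langle u , K_V u\rangle
  = \mathrm{Re}\,\langle u , X_V u\rangle + \langle u , O_p u\rangle
  = \langle u , O_p u\rangle
  = \tfrac{1}{2}\big(\|\partial_p u\|_{L^2}^2 + \|p u\|_{L^2}^2\big)
  \ge \tfrac{1}{2}\,\|p u\|_{L^2}^2\,.
\end{align*}
```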
\subsection{Localisation in a fixed shell}
\begin{lem}\label{a.3lem2.3}
Let $V(q)$ be a homogeneous function in $\mathcal{C}^{\infty}(\mathbb{R}^{d}\setminus\left\{0\right\})$ of degree $r$ and let $j\in\mathbb{Z}.$ Given $u_j\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d}),$ one has
\begin{align*}
\|K_{V}u_j\|_{L^2(\mathbb{R}^{2d})}=\|K_{j,V}v_j\|_{L^2(\mathbb{R}^{2d})}\;,
\end{align*}
where the operator $K_{j,V}$ is defined by \begin{align}
K_{j,V}=\frac{1}{2^j}p\partial_q-(2^j)^{r-1}\partial_qV(q)\partial_p+O_p\;,\label{a.3222}\end{align}
and $\;v_j(q,p)=2^{\frac{jd}{2}}u_j(2^jq,p).$
In particular when $u_j$ is supported in $\left\lbrace q\in\mathbb{R}^{d},\; 2^j\frac{3}{4}\le |q|\le 2^j\frac{8}{3}\right\rbrace,$ the support of $v_j$ is a fixed shell $\overline{\mathcal{C}}=\left\lbrace q\in\mathbb{R}^{d},\; \frac{3}{4}\le |q|\le \frac{8}{3}\right\rbrace\;.$
\end{lem}
\begin{proof}
Let $j\in\mathbb{Z}$ be an index. Given $u_j\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d})$, set \begin{align}
v_j(q,p)=2^{\frac{jd}{2}}u_j(2^jq,p)\;.\label{a.3211}
\end{align}
Since the function $V$ is homogeneous of degree $r$, its gradient $\partial_qV(q)$ is homogeneous of degree $r-1.$ It follows that we can write
\begin{align*}
K_Vu_j(q,p)&=K_V\Big(2^{\frac{-jd}{2}}v_j(2^{-j}q,p)\Big)\\&=2^{\frac{-jd}{2}}\Big((2^{-j}p\partial_q-(2^j)^{r-1}\partial_qV(q)\partial_p+O_p)v_j\Big)(2^{-j}q,p)\;.
\end{align*}
Notice that if
\begin{align*}
\mathrm{supp}\;u_j\subset\left\lbrace q\in\mathbb{R}^{d},\; 2^j\frac{3}{4}\le |q|\le 2^j\frac{8}{3}\right\rbrace\;,
\end{align*}
the functions $v_j,$ defined in (\ref{a.3211}), are all supported in the fixed shell \begin{align*}
\overline{\mathcal{C}}=\left\lbrace q\in\mathbb{R}^{d},\; \frac{3}{4}\le |q|\le \frac{8}{3}\right\rbrace\;.
\end{align*}
\end{proof}
\begin{RQ}
Assume $j\in\mathbb{N}.$ If we introduce a small parameter $h=2^{-2(r-1)j}$ then the operator $K_{j,V},$ defined in (\ref{a.3222}), can be rewritten as
\begin{align*}
K_{j,V}=\frac{1}{h}\Big(\sqrt{h}p(h^{\frac{1}{2}+\frac{1}{2(r-1)}}\partial_q)-\sqrt{h}\partial_qV(q)\partial_p+\frac{h}{2}(-\Delta_p+p^2)\Big)\;.
\end{align*}
Now, owing to a dilation with respect to the velocity variable $p$, which maps $(\sqrt{h}p,\sqrt{h}\partial_p)$ to $(p,h\partial_p)$, we deduce that the operator $K_{j,V}$ is unitarily equivalent to
\begin{align*}
\widehat{K}_{j,V}=\frac{1}{h}\Big(p(h^{\frac{1}{2}+\frac{1}{2(r-1)}}\partial_q)-\partial_qV(q)h\partial_p+\frac{1}{2}(-h^2\Delta_p+p^2)\Big)\;.
\end{align*}
In particular, taking $r=2,$
\begin{align*}
\widehat{K}_{j,V}=\frac{1}{h}\Big(p(h\partial_q)-\partial_qV(q)h\partial_p+\frac{1}{2}(-h^2\Delta_p+p^2)\Big)\;,
\end{align*}
is clearly a semiclassical operator with respect to the variables $q$ and $p$.
However if $r>2$, the operator $\widehat{K}_{j,V}$ is semiclassical only with respect to the velocity variable $p$ (since $h^{\frac{1}{2}+\frac{1}{2(r-1)}}>h$).
For a polynomial $V(q),$ the case $r=2$ corresponds to the quadratic situation, to which extensive work has been devoted (see \cite{Hor,HiPr,Vio,Vio1,AlVi,BNV}).
\end{RQ}
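The exponent bookkeeping behind the substitution $h=2^{-2(r-1)j}$ used in the remark can be verified directly: $2^{-j}=h^{\frac{1}{2(r-1)}}$ and $2^{j(r-1)}=h^{-\frac{1}{2}}$. A quick numerical check (our own sketch):

```python
import math

def check_scaling(r, j):
    """With h = 2^{-2(r-1)j}, verify 2^{-j} = h^{1/(2(r-1))}
    and 2^{j(r-1)} = h^{-1/2}, the identities used to rewrite K_{j,V}."""
    h = 2.0**(-2*(r - 1)*j)
    return (math.isclose(2.0**(-j), h**(1.0/(2*(r - 1)))),
            math.isclose(2.0**(j*(r - 1)), h**(-0.5)))
```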
\section{Proof of the main result}
In this section we present the proof of Theorem~\ref{a.3thm1.1}.
\begin{proof}
In the whole proof we denote \begin{align*}
\overline{\mathcal{C}}=\left\lbrace q\in\mathbb{R}^{d},\; \frac{3}{4}\le |q|\le \frac{8}{3}\right\rbrace\;.
\end{align*}
Assume $u\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d})$ and consider a locally finite dyadic partition of unity defined as in (\ref{a.32.1}). By Lemma \ref{a.3lem2.2} (see (\ref{a.32.5})), there is a uniform constant $c$ such that
\begin{align}
(1+4c)\|K_{V}u\|^2_{L^2(\mathbb{R}^{2d})}+c\|u\|^2_{L^2(\mathbb{R}^{2d})}\ge\sum\limits_{j\ge-1}\|K_{V}u_j\|^2_{L^2(\mathbb{R}^{2d})}\;,\label{a.33111}
\end{align}
where $u_j=\chi_ju.$ By Lemma \ref{a.3lem2.3} and the estimate (\ref{a.33111}), we obtain
\begin{align}
(1+4c)\|K_{V}u\|^2_{L^2(\mathbb{R}^{2d})}+c\|u\|^2_{L^2(\mathbb{R}^{2d})}\ge\sum\limits_{j\ge-1}\|K_{j,V}v_j\|^2_{L^2(\mathbb{R}^{2d})}\;,
\end{align}
where the operator
\begin{align*}
K_{j,V}=\frac{1}{2^j}p\partial_q-(2^j)^{r-1}\partial_qV(q)\partial_p+O_p\;,\end{align*}
and $ v_j(q,p)=2^{\frac{jd}{2}}u_j(2^jq,p)\;.$ Setting $h=2^{-2(r-1)j},$ one has
\begin{align*}
K_{j,V}=p(h^{\frac{1}{2(r-1)}}\partial_q)-h^{-\frac{1}{2}}\partial_qV(q)\partial_p+\frac{1}{2}(-\Delta_p+p^2)\;.\end{align*}
Now, fix $\nu>0$ such that
\begin{align}
\max(\frac{1}{6},\frac{1}{8}+\frac{3}{8(r-1)})<\nu<\frac{1}{4}+\frac{1}{4(r-1)}~. \label{a.32..9}
\end{align}
Such a choice is always possible:
\begin{itemize}
\item In the case $r\ge 10,$
$\max(\frac{1}{6},\frac{1}{8}+\frac{3}{8(r-1)})$ equals $\frac{1}{6}$
while $\frac{1}{4}+\frac{1}{4(r-1)}$ is always greater than
$\frac{1}{4}.$ So we can choose a value $\nu$ independent of $r$
between $\frac{1}{6}$ and $\frac{1}{4}.$
\item In the case $2< r<10,$ $\max(\frac{1}{6},\frac{1}{8}+\frac{3}{8(r-1)})$ equals $\frac{1}{8}+\frac{3}{8(r-1)}<\frac{1}{4}+\frac{1}{4(r-1)}$\,. Hence, we can choose for example $\nu=\frac{3}{16}+\frac{5}{16(r-1)}.$
\end{itemize}
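The suggested value $\nu=\frac{3}{16}+\frac{5}{16(r-1)}$ is in fact the midpoint of $\frac{1}{8}+\frac{3}{8(r-1)}$ and $\frac{1}{4}+\frac{1}{4(r-1)}$, and a quick numerical sweep (our own check) confirms that it lies strictly inside the admissible interval (\ref{a.32..9}) for every sampled $r>2$:

```python
def nu_bounds(r):
    # Admissible open interval for nu from the constraint (a.32..9).
    lower = max(1.0/6.0, 1.0/8.0 + 3.0/(8.0*(r - 1)))
    upper = 0.25 + 1.0/(4.0*(r - 1))
    return lower, upper

def nu_choice(r):
    # Midpoint of 1/8 + 3/(8(r-1)) and 1/4 + 1/(4(r-1)).
    return 3.0/16.0 + 5.0/(16.0*(r - 1))

samples = [2.1, 3.0, 5.0, 9.9, 10.0, 50.0]
ok = all(nu_bounds(r)[0] < nu_choice(r) < nu_bounds(r)[1] for r in samples)
```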
Taking $\nu>0$ satisfying (\ref{a.32..9}), we consider a locally finite partition of unity with respect to $q\in\mathbb{R}^d$ given by
\begin{align*}
\sum\limits_{k\ge-1}(\theta_{k,h}(q))^2&=\sum\limits_{k\ge-1}\Big(\theta\big(\tfrac{1}{|\ln(h)|h^{\nu}}q-q_k\big)\Big)^2\nonumber\\&=\sum\limits_{k\ge-1}\Big(\theta\big(\tfrac{1}{|\ln(h)|h^{\nu}}(q-q_{k,h})\big)\Big)^2=1\;,
\end{align*}
where for any index $k$ \begin{align*}
q_{k,h}=|\ln(h)|h^{\nu}q_{k}\;,\;\;\;\;\;\mathrm{supp}\;\theta_{k,h}\subset B(q_{k,h},|\ln(h)|h^{\nu})\;,\;\;\;\;\;\theta_{k,h}\equiv1\;\;\text{in}\;\;B(q_{k,h},\frac{1}{2}|\ln(h)|h^{\nu})\;.
\end{align*}
Using this partition we get through Lemma \ref{a.3lem2.2} (see (\ref{a.32.4})),
\begin{align}
\|K_{j,V}v_j\|^2_{L^2}\ge \sum\limits_{k\ge-1}\Big(\|K_{j,V}\theta_{k,h}v_j\|^2_{L^2}-|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\|p\theta_{k,h}v_j\|^2_{L^2}\Big)\;.\label{a.3322}
\end{align}
To shorten the expressions, we denote throughout the proof \begin{align*}w_{k,j}=\theta_{k,h}v_j\;.\end{align*}
Taking into account (\ref{a.3322}),
\begin{align}
\|K_{j,V}v_j\|^2_{L^2}&\ge \sum\limits_{k\ge-1}\Big(\|K_{j,V}w_{k,j}\|^2_{L^2}-|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\|w_{k,j}\|_{L^2}\|K_{j,V}w_{k,j}\|_{L^2}\Big)\nonumber\\&\ge\sum\limits_{k\ge-1}\Big({\frac{3}{4}}\|K_{j,V}w_{k,j}\|^2_{L^2}-2|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\|w_{k,j}\|^2_{L^2}\Big)\;.\label{a.32.13}
\end{align}
Notice that in the last inequality we successively used the fact that
\begin{align*}
\|pw_{k,j}\|^2_{L^2}\leq 2\mathrm{Re}\langle w_{k,j},K_{j,V}w_{k,j}\rangle\le \|w_{k,j}\|_{L^2}\|K_{j,V}w_{k,j}\|_{L^2}\;,
\end{align*}
and the Cauchy inequality with epsilon ( $ab\le \epsilon a^2+\frac{1}{4\epsilon}b^2$).
From now on, set \begin{align*}
K_0=\left\{q\in \overline{\mathcal{C}}\;,\;\;\;\partial_qV(q)=0\right\}\;.
\end{align*}
Clearly, by continuity of the map $q\mapsto\partial_qV(q)$ on the
shell $\overline{\mathcal{C}}$ (which is a compact subset of $\mathbb{R}^d$), we
deduce that $K_0$ is compact.
Since $q\mapsto\frac{\mathrm{Tr}_{-,V}(q)}{1+\mathrm{Tr}_{+,V}(q)}$ is
uniformly continuous on any compact neighborhood of $K_{0}$\,, there
exists $\epsilon_{1}>0$ such that
\begin{align}
d(q,K_0)\le \epsilon_1\Rightarrow \frac{\mathrm{Tr}_{-,V}(q)}{1+\mathrm{Tr}_{+,V}(q)}\ge \frac{\epsilon_0}{2}\;,\label{a.33.6.}
\end{align}
where $\epsilon_0:=\min\limits_{q\in K_0}\frac{\mathrm{Tr}_{-,V}(q)}{1+\mathrm{Tr}_{+,V}(q)}.$
On the other hand, in view of the definition of $K_0$ and by continuity of $q\mapsto \partial_qV(q)$ on $\overline{\mathcal{C}},$ there is a constant $\epsilon_2>0$ (depending on $\epsilon_1$) such that
\begin{align}
\forall\; q\in \overline{\mathcal{C}}\;,\;\;d(q,K_0)\ge \epsilon_1\Rightarrow |\partial_qV(q)|\ge \epsilon_2\;.\label{a.337}
\end{align}
Now let us introduce
\begin{align*}
\Sigma(\epsilon_1)=\left\{q\in \mathcal{C}\;,\;\;d(q,K_0)\ge\epsilon_1\right\}\;,
\end{align*}
\begin{align*}
I(\epsilon_1)=\left\{k\in \mathbb{Z}\;,\;\;\mathrm{supp}\;\theta_{k,h}\subset\Sigma(\epsilon_1)\right\}\;.
\end{align*}
In order to establish a subelliptic estimate for $K_{j,V},$ we distinguish the following two cases.
\begin{outerdesc}
\item[Case \textbf{1}] $k\not\in I(\epsilon_1).$ In this case the support of the cutoff function $\theta_{k,h}$ might intersect the set of zeros of the gradient of $V.$
\item[Case \textbf{2}] $k\in I(\epsilon_1).$ Here the gradient of $V$ does not vanish at any $q$ in the support of $\theta_{k,h}.$
\end{outerdesc}
The idea is to use, in each situation, either a quadratic
or a linear approximating polynomial $\widetilde{V}$ near some $q'_{k,h}\in\mathrm{supp}\;\theta_{k,h}$ in order to write
\begin{align*}
\sum\limits_{k\ge-1}\|K_{j,V}w_{k,j}\|^2_{L^2}\ge \frac{1}{2}\sum\limits_{k\ge-1}\|K_{j,\widetilde{V}}w_{k,j}\|^2_{L^2}-\|(K_{j,V}-K_{j,\widetilde{V}})w_{k,j}\|^2_{L^2}\;,
\end{align*}
or equivalently
\begin{align}
\sum\limits_{k\ge-1}\|K_{j,V}w_{k,j}\|^2_{L^2}\ge \frac{1}{2}\sum\limits_{k\ge-1}\|K_{j,\widetilde{V}}w_{k,j}\|^2_{L^2}-\|\frac{1}{\sqrt{h}}(\partial_qV(q)- \partial_q\widetilde{V}(q))\partial_pw_{k,j}\|^2_{L^2}\;.\label{a.33.444}
\end{align}
Then, based on the estimates established in \cite{BNV}, which are valid for the operator $K_{\widetilde{V}},$ we deduce a subelliptic estimate for $K_{V},$ after a careful control of the errors which appear in (\ref{a.32.13}) and (\ref{a.33.444}).
\noindent\textbf{Case 1.} In this situation, we use the quadratic approximation near some element \\$q'_{k,h}\in\mathrm{supp}\;\theta_{k,h}\cap(\mathbb{R}^d\setminus \Sigma(\epsilon_1)),$
\begin{align*}
V^2_{k,h}(q)&=\sum\limits_{|\alpha|\le2}\frac{\partial_q^{\alpha}V(q'_{k,h})}{\alpha!}(q-q'_{k,h})^{\alpha}\;.
\end{align*}
Notice that one has for all $q\in\mathbb{R}^d,$
\begin{align}
|V(q)- V^2_{k,h}(q)|=\mathcal{O}(|q-q'_{k,h}|^3)\;.
\end{align}
Accordingly, for every $q$ in the support of $w_{k,j},$
\begin{align}
|\partial_qV(q)- \partial_qV^2_{k,h}(q)|&=\mathcal{O}(|q-q'_{k,h}|^2)\nonumber\\&=\mathcal{O}(|\ln(h)|^2h^{2\nu})\;.\label{a.33.66}
\end{align}
Combining (\ref{a.33.444}) and (\ref{a.33.66}), there is a constant $c > 0$ such that
\begin{align}
\sum\limits_{k\ge-1}\|K_{j,V}w_{k,j}\|^2&\ge \sum\limits_{k\ge-1}\Big(\frac{1}{2}\|K_{j,V^2_{k,h}}w_{k,j}\|^2-c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}\|\partial_pw_{k,j}\|^2\Big)\nonumber\\&\ge \sum\limits_{k\ge-1}\Big(\frac{1}{2}\|K_{j,V^2_{k,h}}w_{k,j}\|^2-c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}\|w_{k,j}\|_{L^2}\|K_{j,V^2_{k,h}}w_{k,j}\|_{L^2}\Big)\nonumber\\&\ge \sum\limits_{k\ge-1}\Big(\frac{3}{16}\|K_{j,V^2_{k,h}}w_{k,j}\|^2-2c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}\|w_{k,j}\|^2\Big)\;.\label{a.32.77}
\end{align}
Putting (\ref{a.32.13}) and (\ref{a.32.77}) together,
\begin{align}
\|K_{j,V}v_j\|^2\ge \sum\limits_{k\ge-1}\Big(\frac{9}{64}\|K_{j,V^2_{k,h}}w_{k,j}\|^2-\frac{3}{2}\,c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}\|w_{k,j}\|^2-2|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\|w_{k,j}\|^2\Big)\;.\label{a.3312}
\end{align}
On the other hand, owing to the change of variables $q''=q\,h^{\frac{1}{2(r-1)}},$ one can write
\begin{align}
\|K_{j,V^2_{k,h}}w_{k,j}\|_{L^2}=\|\widetilde{K}_{j,V^2_{k,h}}\widetilde{w}_{k,j}\|_{L^2}\label{a.33.88}\;,
\end{align}
where the operator $\widetilde{K}_{j,V^2_{k,h}}$ reads
\begin{align*}
\widetilde{K}_{j,V^2_{k,h}}&=p\partial_q-h^{-\frac{1}{2}}\partial_qV^2_{k,h}(h^{\frac{1}{2(r-1)}}q)\partial_p+\frac{1}{2}(-\Delta_p+p^2)
\\
&
=
p\partial_q-\underbrace{h^{-\frac{1}{2}+\frac{1}{2(r-1)}}}_{=:H}\partial_qV^2_{k,h}(q)\partial_p+\frac{1}{2}(-\Delta_p+p^2)
\;,
\end{align*}
and
\begin{align*}
w_{k,j}(q,p)=\frac{1}{h^{\frac{d}{4(r-1)}}}\widetilde{w}_{k,j}(\frac{q}{h^{\frac{1}{2(r-1)}}},p)\;.
\end{align*}
In the rest of the proof we keep the notation \begin{align*}
H=h^{-\frac{1}{2}}h^{\frac{1}{2(r-1)}}=h^{-\frac{1}{2}+\frac{1}{2(r-1)}}\;.
\end{align*}
From now on assume $j\in\mathbb{N}.$ In view of (\ref{a.33.6.}), $\mathrm{Tr}_{-,V^2_{k,h}}=\mathrm{Tr}_{-,V}(q'_{k,h})\not=0.$ Hence by (\ref{1.5mm}),
\begin{align}
\|\widetilde{K}_{j,V^2_{k,h}}\widetilde{w}_{k,j}\|^2_{L^2}\ge c\,\frac{1+H\mathrm{Tr}_{-,V^2_{k,h}}}{\log(2+H\mathrm{Tr}_{-,V^2_{k,h}})^2}\| \widetilde{w}_{k,j}\|^2_{L^2}\;.
\end{align}
Equivalently,
\begin{align}
\|\widetilde{K}_{j,V^2_{k,h}}\widetilde{w}_{k,j}\|^2_{L^2}\ge c\,\frac{1+H\mathrm{Tr}_{-,V}(q'_{k,h})}{\log(2+H\mathrm{Tr}_{-,V}(q'_{k,h}))^2}\| \widetilde{w}_{k,j}\|^2_{L^2}\;.\label{a.3316}
\end{align}
Using once more (\ref{a.33.6.}),
\begin{align}
\mathrm{Tr}_{-,V}(q'_{k,h})\ge \frac{\epsilon_0}{2}(1+\mathrm{Tr}_{+,V}(q'_{k,h}))\;,
\end{align}
where we recall that $\epsilon_0=\min\limits_{q\in K_0}\frac{\mathrm{Tr}_{-,V}(q)}{1+\mathrm{Tr}_{+,V}(q)}.$ Consequently,
\begin{align}
|\mathrm{Hess}\;V(q'_{k,h})|\ge \mathrm{Tr}_{-,V}(q'_{k,h})\ge \frac{\epsilon_0}{2}\;,
\end{align}
and
\begin{align}
\mathrm{Tr}_{-,V}(q'_{k,h})&\ge \frac{1}{2}\mathrm{Tr}_{-,V}(q'_{k,h})+\frac{\epsilon_0}{4}(1+\mathrm{Tr}_{+,V}(q'_{k,h}))\nonumber\\&\ge \frac{1}{2}\min(1,\frac{\epsilon_0}{2})(\mathrm{Tr}_{-,V}(q'_{k,h})+\mathrm{Tr}_{+,V}(q'_{k,h}))\nonumber\\&\ge \frac{1}{2}\min(1,\frac{\epsilon_0}{2})|\mathrm{Hess}\;V(q'_{k,h})|\;.\label{a.3318}
\end{align}
Furthermore by continuity of the map $q\mapsto \mathrm{Tr}_{-,V}(q)$ on the compact set $
\overline{\mathcal{C}},$ there exists a constant $\epsilon_3 > 0$ such that
$\mathrm{Tr}_{-,V}(q)\le \epsilon_3$ for all $q\in\overline{\mathcal{C}}.$ Hence \begin{align}\frac{\epsilon_0}{2}\le \mathrm{Tr}_{-,V}(q'_{k,h})\le \epsilon_3\;.\label{a.3317}
\end{align}
From (\ref{a.3316}), (\ref{a.3318}) and (\ref{a.3317}),
\begin{align*}
\|\widetilde{K}_{j,V^2_{k,h}}\widetilde{w}_{k,j}\|^2_{L^2}\ge c\,\frac{H}{\log(H)^2}\| \widetilde{w}_{k,j}\|^2_{L^2}\;.
\end{align*}
It follows from the above inequality and (\ref{a.33.88}) that
\begin{align}
\|K_{j,V^2_{k,h}}w_{k,j}\|^2_{L^2}\ge c\,\frac{H}{\log(H)^2}\| w_{k,j}\|^2_{L^2}\;.\label{a.33.1333}
\end{align}
Now, using the estimate (\ref{a.33.1333}), it remains to control the errors
coming from the partition of unity and the quadratic
approximation. To this end, notice that our choice of the
exponent $\nu$ in \eqref{a.32..9} implies
\begin{align*}
\left\{
\begin{array}{l}
\frac{(|\ln(h)|^2h^{2\nu})^2}{h}\ll \frac{H}{\log(H)^2}\\
|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\ll \frac{H}{\log(H)^2}\;.
\end{array}
\right.
\end{align*}
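Let us make this comparison explicit (an elementary check we include for the reader's convenience; since the exponent inequalities below are strict, the logarithmic factors play no role). Recalling $H=h^{-\frac{1}{2}+\frac{1}{2(r-1)}}$ and comparing powers of $h$ as $h\to0$, the two conditions reduce to
\begin{align*}
\frac{(|\ln(h)|^2h^{2\nu})^2}{h}\ll \frac{H}{\log(H)^2}
&\iff 4\nu-1>-\frac{1}{2}+\frac{1}{2(r-1)}
\iff \nu>\frac{1}{8}+\frac{1}{8(r-1)}\;,\\
|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\ll \frac{H}{\log(H)^2}
&\iff \frac{1}{r-1}-2\nu>-\frac{1}{2}+\frac{1}{2(r-1)}
\iff \nu<\frac{1}{4}+\frac{1}{4(r-1)}\;,
\end{align*}
and both conditions hold for any $\nu$ satisfying \eqref{a.32..9}.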
As a result, collecting the estimates (\ref{a.3312}) and (\ref{a.33.1333}), we deduce the existence of a constant $c>0$ such that
\begin{align}
\|K_{j,V}v_j\|^2_{L^2}\ge c\sum\limits_{k\ge-1}\|K_{j,V^2_{k,h}}w_{k,j}\|^2_{L^2}\;.\label{a.33...15}
\end{align}
Via (\ref{eq44}), there is a constant $c>0$ so that
\begin{align}
\|\widetilde{K} _{j,V^2_{k,h}}\widetilde{w}_{k,j}\|^2+(1+10c)H|\mathrm{Hess}\;V(q'_{k,h})|\|\widetilde{w}_{k,j}\|^2\ge c\Big(\|O_p\widetilde{w}_{k,j}\|^2&+\|\langle D_q \rangle^{\frac{2}{3}} \widetilde{w}_{k,j}\|^2\nonumber\\&+H|\mathrm{Hess}\;V(q'_{k,h})|\|\widetilde{w}_{k,j}\|^2\Big)\;.
\end{align}
Hence, using the reverse change of variables $q''=\frac{q}{h^{\frac{1}{2(r-1)}}}\;,$ we obtain, in view of the above estimate and (\ref{a.33.88}),
\begin{align}
\|K_{j,V^2_{k,h}}w_{k,j}\|^2+(1+10c)H|\mathrm{Hess}\;V(q'_{k,h})|\|w_{k,j}\|^2\ge c\Big(\|O_pw_{k,j}\|^2&+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}} w_{k,j}\|^2\nonumber\\&+H|\mathrm{Hess}\;V(q'_{k,h})|\|w_{k,j}\|^2\Big)\;.\label{a.33.16}
\end{align}
Or by (\ref{a.3318}) and (\ref{a.3317}),
\begin{align}
\frac{\epsilon_0}{2}\le|\mathrm{Hess}\;V(q'_{k,h})|\le \frac{2\epsilon_3}{\min(1,\frac{\epsilon_0}{2})}\;,\label{a.33.19}\end{align}
Putting (\ref{a.33.16}) and (\ref{a.33.19}) together, there is a constant $c>0$ so that
\begin{align}
\|K_{j,V^2_{k,h}}w_{k,j}\|^2+H\|w_{k,j}\|^2\ge c\Big(\|O_pw_{k,j}\|^2&+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}}w_{k,j}\|^2\nonumber\\&+H\|w_{k,j}\|^2+\|\langle H|\mathrm{Hess}\;V(q'_{k,h})|\rangle^{\frac{1}{2}}w_{k,j}\|^2\Big)\;.\label{a.33.1888}
\end{align}
On the other hand, for all $q\in\mathrm{supp}\;w_{k,j},$ \begin{align}|\mathrm{Hess}\;V(q)-\mathrm{Hess}\;V(q'_{k,h})|=\mathcal{O}(|q-q'_{k,h}|)=\mathcal{O}(|\ln(h)|h^{\nu})\;.\label{a.33266}\end{align}
Therefore, by (\ref{a.33.19}) and (\ref{a.33266}), we obtain for every $q\in\mathrm{supp}\;w_{k,j}$ and all $j$ sufficiently large,
\begin{align}
\frac{1}{2}|\mathrm{Hess}\;V(q'_{k,h})|\le|\mathrm{Hess}\;V(q)|\le \frac{3}{2}|\mathrm{Hess}\;V(q'_{k,h})|\;.\label{a.3326}
\end{align}
From (\ref{a.33.1888}) and (\ref{a.3326}), there exists a constant $c>0$ so that
\begin{align}
\|K_{j,V^2_{k,h}}w_{k,j}\|^2+H\|w_{k,j}\|^2\ge c\Big(\|O_pw_{k,j}\|^2&+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}} w_{k,j}\|^2\nonumber\\&+H\|w_{k,j}\|^2+\|\langle H|\mathrm{Hess}\;V(q)|\rangle^{\frac{1}{2}}w_{k,j}\|^2\Big)\;,\label{a.3328}
\end{align}
is valid for all $j$ large enough.
Furthermore, by continuity of the map $q\mapsto|\partial_qV(q)|^{\frac{4}{3}}$ on the fixed shell $\overline{\mathcal{C}}$, for all $q\in\mathrm{supp}\; w_{k,j}$
\begin{align}
\frac{1}{4}H\ge c\,|h^{-\frac{1}{2}}\partial_qV(q)|^{\frac{4}{3}}\;,\label{a.33.199}
\end{align}
holds for all $j$ sufficiently large.
In this way, considering (\ref{a.3328}) and (\ref{a.33.199}),
\begin{align}
\|K_{j,V^2_{k,h}}w_{k,j}\|^2+(2+H)\|w_{k,j}\|^2\ge c\Big(\|&O_pw_{k,j}\|^2+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}} w_{k,j}\|^2+(2+H)\|w_{k,j}\|^2\nonumber\\&+\|(H|\mathrm{Hess}\;V(q)|)^{\frac{1}{2}}w_{k,j}\|^2+\|\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle^{\frac{2}{3}}w_{k,j}\|^2\Big)\;.\label{a.33.20000}
\end{align}
Putting (\ref{a.33.1333}) and (\ref{a.33.20000}) together,
\begin{align}
\|K_{j,V^2_{k,h}}w_{k,j}\|^2\ge c\Big(\|\frac{O_p}{\log(2+H)}&w_{k,j}\|^2+\|\frac{\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}}}{\log(2+H)}w_{k,j}\|^2+\|\frac{(2+H)^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^2\nonumber\\&+\|\frac{\langle H|\mathrm{Hess}\;V(q)|\rangle^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^2+\|\frac{\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle^{\frac{2}{3}}}{\log(2+H)}w_{k,j}\|^2\Big)\;,\label{a.33.201}
\end{align}
holds for all $j\ge j_0,$ for some $j_0\ge 1$ large enough.
Now let us collect the finitely many remaining terms for $-1\le j\le j_0.$
After recalling $h=2^{-j}$ and $H=h^{-\frac{1}{2}+\frac{1}{2(r-1)}},$
we define
\begin{multline*}
c_V^{(1)}=\max_{-1\le j\le j_0}
\left[A_{V_{k,h}^{2}}+\sup_{q\in \mathrm{supp}\,(\chi_{j}\theta_{k,h})}\left(\langle
H|\mathrm{Hess}~V(q)|\rangle+\langle
h^{-\frac{1}{2}}\left|\partial_{q}V(q)\right|\rangle^{4/3}\right)
\right.
\\
\left.
+\frac{(2+H)}{\log(2+H)^2}+\frac{3}{2}\,c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}+2|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\right]\,.
\end{multline*}
From the lower bound \eqref{eq44}, we deduce the existence of a constant $c>0$ so that
\begin{align}
\frac{9}{64}\|K_{j,V_{k,h}^{2}}w_{k,j}\|^2&+(c_V^{(1)}-\frac{3}{2}\,c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}-2|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu})\|w_{k,j}\|^2\nonumber\\&\geq c\Big(\|O_pw_{k,j}\|^{2}+\|\langle
h^{\frac{1}{2(r-1)}} D_{q}\rangle^{2/3}w_{k,j}\|^{2}
+\|\langle h^{-\frac{1}{2}}|\partial_{q}V(q)|\rangle^{2/3}w_{k,j}\|^{2}
\nonumber\\&\hspace{4cm}+\|\langle
H|\mathrm{Hess}~V(q)|\rangle^{1/2}w_{k,j}\|^{2}
+\|\frac{(2+H)^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^{2}\Big)\;,\label{a.33334}
\end{align}
holds for all $-1\le j\le j_0.$
Finally, collecting (\ref{a.33...15}), (\ref{a.33.201}) and (\ref{a.33334}),
\begin{align}
\|K_{j,V}v_j\|^2+c_V^{(2)}\|v_j\|^2\ge c\sum\limits_{k\not\in I(\epsilon_1)}\Big(\|&\frac{O_p}{\log(2+H)}w_{k,j}\|^2+\|\frac{\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}}}{\log(2+H)}w_{k,j}\|^2+\|\frac{(2+H)^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^2\nonumber\\&+\|\frac{\langle H|\mathrm{Hess}\;V(q)|\rangle^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^2+\|\frac{\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle^{\frac{2}{3}}}{\log(2+H)}w_{k,j}\|^2\Big)\;,\label{a.33.2000}
\end{align}
is valid for every $j\ge -1.$
\noindent\textbf{Case 2.} We consider in this case the linear approximating polynomial
\begin{align*}
V^1_{k,h}(q)&=\sum\limits_{|\alpha|\le1}\frac{\partial_q^{\alpha}V(q_{k,h})}{\alpha!}(q-q_{k,h})^{\alpha}\;.
\end{align*}
Note that for any $q\in\mathbb{R}^d,$
\begin{align}
|V(q)- V^1_{k,h}(q)|=\mathcal{O}(|q-q_{k,h}|^2)\;,
\end{align}
and for every $q\in\mathrm{supp}\;w_{k,j},$
\begin{align}
|\partial_qV(q)- \partial_qV^1_{k,h}(q)|&=\mathcal{O}(|q-q_{k,h}|)\nonumber\\&=\mathcal{O}(|\ln(h)|h^{\nu})\;.\label{a.32.24}
\end{align}
Due to (\ref{a.33.444}) and (\ref{a.32.24}),
\begin{align}
\sum\limits_{k\ge-1}\|K_{j,V}w_{k,j}\|^2&
\ge \sum\limits_{k\ge-1}\Big(\frac{1}{2}\|K_{j,V^1_{k,h}}w_{k,j}\|^2-c\,\frac{(|\ln(h)|h^{\nu})^2}{h}\|\partial_pw_{k,j}\|^2\Big)\nonumber\\&\ge \sum\limits_{k\ge-1}\Big(\frac{1}{2}\|K_{j,V^1_{k,h}}w_{k,j}\|^2-c\,\frac{(|\ln(h)|h^{\nu})^2}{h}\|w_{k,j}\|\|K_{j,V^1_{k,h}}w_{k,j}\|\Big)\nonumber\\&\ge \sum\limits_{k\ge-1}\Big(\frac{3}{16}\|K_{j,V^1_{k,h}}w_{k,j}\|^2-2c\,\frac{(|\ln(h)|h^{\nu})^2}{h}\|w_{k,j}\|^2\Big)\;.\label{a.32.255}
\end{align}
Assembling (\ref{a.32.13}) and (\ref{a.32.255}),
\begin{align}
\|K_{j,V}v_j\|^2\ge \sum\limits_{k\ge-1}\Big(\frac{9}{64}\|K_{j,V^1_{k,h}}w_{k,j}\|^2-\frac{3}{2}\,c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}\|w_{k,j}\|^2-2|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\|w_{k,j}\|^2\Big)\;.\label{a.3336}
\end{align}
Additionally, one has
\begin{align}
\|K_{j,V^1_{k,h}}w_{k,j}\|_{L^2}=\|\widetilde{K}_{j,V^1_{k,h}}\widetilde{w}_{k,j}\|_{L^2}\;,\label{a.32.666}
\end{align}
where the operator $\widetilde{K}_{j,V^1_{k,h}}$ is given by
\begin{align}
\widetilde{K}_{j,V^1_{k,h}}&=p\partial_q-h^{-\frac{1}{2}}\partial_qV^1_{k,h}(h^{\frac{1}{2(r-1)}}q)\partial_p+\frac{1}{2}(-\Delta_p+p^2)\nonumber\\&=p\partial_q-h^{-\frac{1}{2}}\partial_qV(q_{k,h})\partial_p+\frac{1}{2}(-\Delta_p+p^2)\;,
\end{align}
and
\begin{align}
w_{k,j}(q,p)=\frac{1}{h^{\frac{d}{4(r-1)}}}\widetilde{w}_{k,j}(\frac{q}{h^{\frac{1}{2(r-1)}}},p)\;.
\end{align}
Now, in order to absorb the errors in (\ref{a.3336}), we need the following estimate established in \cite{BNV} (see (\ref{1.5mm})):
\begin{align}
\|\widetilde{K} _{j,V^1_{k,h}}\widetilde{w}_{k,j}\|^2_{L^2}\ge c\| (h^{-\frac{1}{2}}|\partial_qV(q_{k,h})|)^{\frac{2}{3}}\widetilde{w}_{k,j}\|^2_{L^2}\;.\label{a.32.30}
\end{align}
From now on assume $j\in\mathbb{N}.$ Taking into account (\ref{a.337}) and (\ref{a.32.30}),
\begin{align}
\|\widetilde{K}_{j,V^1_{k,h}}\widetilde{w}_{k,j}\|^2_{L^2}\ge c\| (h^{-\frac{1}{2}})^{\frac{2}{3}}\widetilde{w}_{k,j}\|^2_{L^2}\;.\label{a.3341}
\end{align}
Owing to (\ref{a.32.666}) and (\ref{a.3341}),
\begin{align}
\|K_{j,V^1_{k,h}}w_{k,j}\|^2\ge c\| (h^{-\frac{1}{2}})^{\frac{2}{3}}w_{k,j}\|^2\;.\label{a.33.311}
\end{align}
Therefore, combining (\ref{a.3336}) and (\ref{a.33.311}), there is a constant $c>0$ so that
\begin{align}
\|K_{j,V}v_j\|^2\ge c\sum\limits_{k\ge-1}\|K_{j,V^1_{k,h}}w_{k,j}\|^2\;.\label{a.33.3222}
\end{align}
Using once more \cite{BNV} (see (\ref{eq44})), there is a constant $c>0$ such that
\begin{align}
\|\widetilde{K}_{j,V^1_{k,h}}\widetilde{w}_{k,j}\|^2\ge c\Big(\|O_p\widetilde{w}_{k,j}\|^2+\|\langle D_q \rangle^{\frac{2}{3}} \widetilde{w}_{k,j}\|^2+\|\langle h^{-\frac{1}{2}}|\partial_qV(q_{k,h})|\rangle ^{\frac{2}{3}}\widetilde{w}_{k,j}\|^2\Big)\;.\label{a.33.344}
\end{align}
As a consequence of (\ref{a.32.666}) and (\ref{a.33.344}),
\begin{align}
\|K_{j,V^1_{k,h}}w_{k,j}\|^2\ge c\Big(\|O_pw_{k,j}\|^2+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}} w_{k,j}\|^2+\|\langle h^{-\frac{1}{2}}|\partial_qV(q_{k,h})|\rangle ^{\frac{2}{3}}w_{k,j}\|^2\Big)\;.\label{a.3346}
\end{align}
By (\ref{a.337}) and (\ref{a.32.24}),
\begin{align}
\frac{1}{2}|\partial_qV(q)|\le|\partial_qV(q_{k,h})|\le \frac{3}{2}|\partial_qV(q)|\;,\label{a.3343}
\end{align}
holds for all $q\in\mathrm{supp}\;w_{k,j}$ and all $j$ large enough.
Then, it follows from (\ref{a.3343}) and (\ref{a.3346}),
\begin{align}
\|K_{j,V^1_{k,h}}w_{k,j}\|^2\ge c\Big(\|O_pw_{k,j}\|^2+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}} w_{k,j}\|^2+\|\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle ^{\frac{2}{3}}w_{k,j}\|^2\Big)\;.
\end{align}
Moreover, in this case, in view of (\ref{a.337}), one has $|\partial_qV(q)|\ge \epsilon_2$ for all $q\in \mathrm{supp}\; w_{k,j}.$ Hence, the above inequality yields
\begin{align}
\|K_{j,V^1_{k,h}}w_{k,j}\|^2\ge c\Big(\|O_pw_{k,j}\|^2&+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}} w_{k,j}\|^2+\|(h^{-\frac{1}{2}})^{\frac{2}{3}}w_{k,j}\|^2+\|\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle ^{\frac{2}{3}}w_{k,j}\|^2\Big)\;.\label{a.3335}
\end{align}
Furthermore, by continuity of $q\mapsto|\mathrm{Hess}\;V(q)|$ on the compact set $\overline{\mathcal{C}},$ one has for all $q\in\mathrm{supp}\; w_{k,j}$ and all $j$ large enough
\begin{align}
\frac{1}{4}(h^{-\frac{1}{2}})^{\frac{4}{3 }}\ge c\, H|\mathrm{Hess}\;V(q)|\;.
\end{align}
Then by the above inequality and (\ref{a.3335}), we get
\begin{align}
\|K_{j,V^1_{k,h}}w_{k,j}\|^2\ge c\Big(\|O_pw_{k,j}\|^2&+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}} w_{k,j}\|^2+\|(2+H)^{\frac{1}{2}}w_{k,j}\|^2\nonumber\\&+\|\langle H|\mathrm{Hess}\;V(q)|\rangle^{\frac{1}{2}}w_{k,j}\|^2+\|\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle^{\frac{2}{3}}w_{k,j}\|^2\Big)\;,\label{a.33.7.7}
\end{align}
for every $j\ge j_1,$ for some $j_1\ge 1$ large enough. Now set
\begin{multline*}
c_V^{(3)}=\max_{-1\le j\le j_1}
\left[\sup_{q\in \mathrm{supp}\,(\chi_{j}\theta_{k,h})}\left(\langle
H|\mathrm{Hess}~V(q)|\rangle+\langle
h^{-\frac{1}{2}}\left|\partial_{q}V(q)\right|\rangle^{4/3}\right)
\right.
\\
\left.
+\frac{(2+H)}{\log(2+H)^2}+\frac{3}{2}\,c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}+2|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu}\right]\,.
\end{multline*}
In view of \eqref{eq44}, we deduce the existence of a constant $c>0$ so that
\begin{align}
\frac{9}{64}\|K_{j,V_{k,h}^{1}}w_{k,j}\|^2&+(c_V^{(3)}-\frac{3}{2}\,c\,\frac{(|\ln(h)|^2h^{2\nu})^2}{h}-2|\ln(h)|^{-2}h^{\frac{1}{r-1}-2\nu})\|w_{k,j}\|^2\nonumber\\&\geq c(\|O_pw_{k,j}\|^{2}+\|\langle
h^{\frac{1}{2(r-1)}} D_{q}\rangle^{2/3}w_{k,j}\|^{2}
+\|\langle h^{-\frac{1}{2}}|\partial_{q}V(q)|\rangle^{2/3}w_{k,j}\|^{2}
\nonumber\\&\hspace{4cm}+\|\langle
H|\mathrm{Hess}~V(q)|\rangle^{1/2}w_{k,j}\|^{2}
+\|\frac{(2+H)^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^{2})\;,\label{a.333334}
\end{align}
holds for all $-1\le j\le j_1.$
Thus, combining the estimates (\ref{a.33.3222}), (\ref{a.33.7.7}) and (\ref{a.333334}),
\begin{align}
\|K_{j,V}v_j\|^2+c_V^{(4)}\|v_j\|^2\ge c\sum\limits_{k\in I(\epsilon_1)}\Big(\|&O_pw_{k,j}\|^2+\|\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}}w_{k,j}\|^2+\|\frac{(2+H)^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^{2}\nonumber\\&+\|\langle H|\mathrm{Hess}\;V(q)|\rangle ^{\frac{1}{2}}w_{k,j}\|^2+\|\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle ^{\frac{2}{3}}w_{k,j}\|^2\Big)\;,\label{a.33.200}
\end{align}
holds for all $j\ge-1.$
In conclusion, in view of (\ref{a.33.2000}) and (\ref{a.33.200}), there is a constant $c>0$ such that
\begin{align}
\|K_{j,V}v_j\|^2+c_V^{(5)}\|v_j\|^2\ge c\sum\limits_{k\ge-1}\Big(\|&\frac{O_p}{\log(2+H)}w_{k,j}\|^2+\|\frac{\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}}}{\log(2+H)}w_{k,j}\|^2+\|\frac{(2+H)^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^2\nonumber\\&+\|\frac{\langle H|\mathrm{Hess}\;V(q)|\rangle ^{\frac{1}{2}}}{\log(2+H)}w_{k,j}\|^2+\|\frac{\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle ^{\frac{2}{3}}}{\log(2+H)}w_{k,j}\|^2\Big)\;,\label{a.3333555}
\end{align}
holds for all $j\ge -1.$
Finally, setting $L(s) =\frac{ s+1}{\log(s+1)}$ for all $s \ge 1,$ notice that there is a constant $c > 0$ such that for all $x \ge 1,$
\begin{align}
\inf_{t\ge2}\Big(\frac{x}{\log(t)}+ t\Big) \ge \frac{ 1}{c}L(x)\;.\label{a.33555}
\end{align}
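For the reader's convenience, here is one short way to verify \eqref{a.33555} (this elementary argument is our addition): if $t\ge L(x)$ the bound is immediate, while for $2\le t<L(x)$ and $x\ge e-1$ (so that $L(x)\le x+1$),
\begin{align*}
\frac{x}{\log(t)}+t\ge \frac{x}{\log(t)}> \frac{x}{\log(x+1)}\ge \frac{1}{2}\,\frac{x+1}{\log(x+1)}=\frac{1}{2}\,L(x)\;,
\end{align*}
since $t<L(x)\le x+1$ implies $\log(t)<\log(x+1)$, and $x\ge\frac{x+1}{2}$ for $x\ge1$. On the remaining compact range $1\le x\le e-1$, both sides of \eqref{a.33555} are bounded above and below by positive constants, so the estimate holds after enlarging $c$.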
After setting the quantities
\begin{align*}
\Lambda_{1,j}=\frac{O_p}{\log(2+H)}~,&\quad\Lambda_{2,j}=\frac{\langle H|\mathrm{Hess}\;V(q)|\rangle^{1/2}}{\log(2+H)}~,\quad\Lambda_{3,j}=\frac{\langle h^{-\frac{1}{2}}|\partial_q V(q)|\rangle^{\frac23}}{\log(2+H)}~,\\&\quad\Lambda_{4,j}=\frac{(2+H)^{\frac{1}{2}}}{\log(2+H)}\;,\quad\Lambda_{5,j}=\frac{\langle h^{\frac{1}{2(r-1)}}D_q \rangle^{\frac{2}{3}}}{\log(2+H)}~,
\end{align*}
we get through the estimate (\ref{a.33555}), for every $j,k\ge -1$
\begin{align*}
\|\Lambda_{1,j}w_{k,j}\|^2_{L^2}+\frac{1}{4}\|\Lambda_{4,j}w_{k,j}\|^2_{L^2}\ge c_1\|L(O_p)w_{k,j}\|^2_{L^2}\;,
\end{align*}
\begin{align*}
\|\Lambda_{5,j}w_{k,j}\|^2_{L^2}+\frac{1}{4}\|\Lambda_{4,j}w_{k,j}\|^2_{L^2}\ge c_2\|L(\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}})w_{k,j}\|^2_{L^2}\;,
\end{align*}
\begin{align*}
\|\Lambda_{2,j}w_{k,j}\|^2_{L^2}+\frac{1}{4}\|\Lambda_{4,j}w_{k,j}\|^2_{L^2}\ge c_3\|L(\langle H|\mathrm{Hess}\;V(q)|\rangle^{\frac{1}{2}})w_{k,j}\|^2_{L^2}\;,
\end{align*}
\begin{align*}
\|\Lambda_{3,j}w_{k,j}\|^2+\frac{1}{4}\|\Lambda_{4,j}w_{k,j}\|^2_{L^2}\ge c_4\|L(\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle^{\frac{2}{3}})w_{k,j}\|^2_{L^2}\;.
\end{align*}
From the above estimates and (\ref{a.3333555}),
\begin{align}
\|K_{j,V}v_j\|^2+c_V^{(6)}\|v_j\|^2\ge\; c\sum\limits_{k\ge-1}&\Big(\|L(O_p)w_{k,j}\|^2+\|L(\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}})w_{k,j}\|^2\nonumber\\&+\|L(\langle H|\mathrm{Hess}\;V(q)|\rangle^{\frac{1}{2}})w_{k,j}\|^2+\|L(\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle^{\frac{2}{3}})w_{k,j}\|^2\Big)\;.
\end{align}
Therefore, in view of Lemma 2.5 in \cite{Ben}, conjugated by the unitary transformation implementing the change of scale,
\begin{align}
\|K_{j,V}v_j\|^2+c_V^{(7)}\|v_j\|^2\ge\; c\Big(\|L(O_p&)v_j\|^2+\|L(\langle h^{\frac{1}{2(r-1)}} D_q \rangle^{\frac{2}{3}})v_j\|^2\nonumber\\&+\|L(\langle H|\mathrm{Hess}\;V(q)|\rangle^{\frac{1}{2}} )v_j\|^2+\|L(\langle h^{-\frac{1}{2}}|\partial_qV(q)|\rangle^{\frac{2}{3}} )v_j\|^2\Big)\;,
\end{align}
or equivalently
\begin{align}
\|K_{V}u_j\|^2+c_V^{(7)}\|u_j\|^2\ge c\Big(\|L(O_p)&u_j\|^2+\|L(\langle D_q \rangle^{\frac{2}{3}})u_j\|^2\nonumber\\&+\|L(\langle \mathrm{Hess}\;V(q)\rangle^{\frac{1}{2}})u_j\|^2+\|L(\langle \partial_qV(q)\rangle^{\frac{2}{3}})u_j\|^2\Big)\;,
\end{align}
for every $j\ge -1.$
Therefore, combining the last estimate and (\ref{a.33111}), there is a constant $C_V>1$ so that
\begin{align}
\|K_{V}u\|^2_{L^2(\mathbb{R}^{2d})}+C_V\|u\|^2_{L^2(\mathbb{R}^{2d})}\ge \frac{1}{C_V}\Big(\|&L(O_p)u\|^2+\|L(\langle D_q \rangle^{\frac{2}{3}})u\|^2\nonumber\\&+\|L(\langle\mathrm{Hess}\;V(q)\rangle^{\frac{1}{2}})u\|^2+\|L(\langle \partial_qV(q)\rangle^{\frac{2}{3}})u\|^2\Big)
\end{align}
holds for all $u\in\mathcal{C}_0^{\infty}(\mathbb{R}^{2d}).$
\end{proof}
\noindent\textbf{Acknowledgement.} I would like to thank my supervisor Francis Nier for his support and guidance throughout this work.
\end{document}
\begin{document}
\title{Constants for Artin-like problems in Kummer and division fields}
\author{Amir Akbary}
\address{Department of Mathematics and Computer Science, University of Lethbridge, Lethbridge, Alberta T1K 3M4, Canada}
\email{amir.akbary@uleth.ca}
\author{Milad Fakhari}
\address{Department of Mathematics and Computer Science, University of Lethbridge, Lethbridge, Alberta T1K 3M4, Canada}
\email{milad.fakhari@uleth.ca}
\subjclass[2020]{11N37, 11A07}
\keywords{Generalized Artin problem, character sums, Titchmarsh divisor problems in the family of number fields}
\date{\today}
\begin{abstract}
We apply the character sums method of Lenstra, Moree, and Stevenhagen to explicitly compute the constants in
the Titchmarsh divisor problem for Kummer fields and division fields of Serre curves. We derive our results as special cases of a general result on the product expressions for the sums of the form $$\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}$$ in which $g(n)$ is a multiplicative arithmetic function and $\{G(n)\}$ is a certain family of Galois groups. Our results extend the application of the character sums method to the evaluation of constants, such as the Titchmarsh divisor constants, that are not density constants.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
Throughout this paper, let $a$ be a non-zero integer that is not $\pm1$. Let $h$ be the largest integer for which $a$ is a perfect $h$-th power.
In 1927, Emil Artin proposed a conjecture for the density of primes $q$ for which a given integer $a$ is a primitive root modulo $q$. More precisely, Artin conjectured that the density is
\begin{equation}
\label{ArtinConstant}
A_a=\displaystyle\prod_{p \: \primen}\left(1-\frac{1}{\#G(p)}\right)=\displaystyle\prod_{\substack{p \: \primen \\ p\mid h}}\left(1-\frac{1}{p-1}\right)\displaystyle\prod_{\substack{p \: \primen \\ p\nmid h}}\left(1-\frac{1}{p(p-1)}\right),
\end{equation}
where $G(p)$ is the Galois group of $\mathbb{Q}(\zeta_p,a^{1/p})$ over $\mathbb{Q}$. Here $\zeta_p$ is a primitive $p$-th root of unity. Note that $G(p)$ depends on $a$, but we suppress the dependence on $a$ in our notation for simplicity. Also, observe that $A_a=0$ if $a$ is a perfect square as $G(2)=\{1\}$ for such $a$.
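As a concrete illustration: for $a=2$ one has $h=1$, so no prime divides $h$ and \eqref{ArtinConstant} becomes
\begin{equation*}
A_2=\prod_{p \: \primen}\left(1-\frac{1}{p(p-1)}\right)\approx 0.3739558\;,
\end{equation*}
the classical Artin constant.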
In 1957, computer calculations of the density for various values of $a$ by D.~H.~Lehmer and E.~Lehmer revealed discrepancies from the conjectured value $A_a$. The reason for these inconsistencies is the dependence among the splitting conditions in the \emph{Kummer fields} $\mathbb{Q}(\zeta_p,a^{1/p})$.
To deal with these dependencies, Artin suggested an \textit{entanglement correction factor} that appears when $a_{sf}\equiv1~({\rm mod}~4)$, where $a_{sf}$, the square-free part of $a$, is the largest square-free factor of $a$ (see preface to Artin's collected works \cite{artin}). More precisely,
the corrected conjectured density $\delta_a$ is
\begin{equation}
\label{artin-corrected}
\delta_a=
\begin{cases}
A_a &\quad\quad\text{if }a_{sf}\not\equiv1~({\rm mod}~{4}),\\
E_a\cdot A_a&\quad\quad\text{if }a_{sf}\equiv1~({\rm mod}~{4}),
\end{cases}
\end{equation}
where
\begin{equation}
\label{Hooley}
E_a=1-\mu(|a_{sf}|)\displaystyle\prod_{\substack{p \mid h\\ p\mid a_{sf}} }\frac{1}{p-2}\displaystyle\prod_{\substack{p \nmid h\\ p\mid a_{sf}} }\frac{1}{p^2-p-1}.
\end{equation}
Here, $\mu(\cdot)$ is the M\"obius function.
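For instance, for $a=5$ one has $h=1$ and $a_{sf}=5\equiv1~({\rm mod}~4)$, so \eqref{Hooley} reduces to
\begin{equation*}
E_5=1-\mu(5)\cdot\frac{1}{5^2-5-1}=1+\frac{1}{19}=\frac{20}{19}\;,
\end{equation*}
and the corrected conjectured density \eqref{artin-corrected} is $\delta_5=\frac{20}{19}A_5$.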
In 1967, Hooley \cite{Hooley} proved the modified conjecture under the assumption of the \emph{Generalized Riemann Hypothesis} (GRH) for the \emph{Kummer fields} $K_n=\mathbb{Q}(\zeta_n, a^{1/n})$ with square-free $n$. For any $n$, let $G(n)$ be the Galois group of $K_n/\mathbb{Q}$. Hooley proved, under the GRH, that the primitive root density is
\begin{equation}
\label{hooley-sum}
\sum_{n=1}^{\infty}\frac{\mu(n)}{\#G(n)},
\end{equation}
and then showed that the above sum equals the corrected conjectured density $\delta_a$ in \eqref{artin-corrected}.
In \cite{lenstra-stevenhagen-moree}, Lenstra, Moree, and Stevenhagen introduced their character sums method in finding product expressions for densities in Artin-like problems. Their method directly studies the primes that do not split completely in a Kummer family attached to $a$, without considering the summation expressions such as \eqref{hooley-sum} for the constants. In \cite[Theorem 4.2]{lenstra-stevenhagen-moree}, they express the correction factor \eqref{Hooley}, when $a$ is non-square and the discriminant $d$ of $K_2=\mathbb{Q}(a^{1/2})$ is odd, as
\begin{equation}
\label{artin-lenstra-product}
E_a=1+\displaystyle\prod_{p \mid 2d }\frac{-1}{\#G(p)-1}.
\end{equation}
The authors of \cite{lenstra-stevenhagen-moree} achieve this by constructing a quadratic character $\chi=\prod_{p} \chi_p$ of a certain profinite group $A=\prod_{p} A_p$ such that $\ker \chi=\Gal(K_\infty/\mathbb{Q})$, where $K_{\infty}=\bigcup_{n\geq1}K_n$ (see Section 2 for details). They derive \eqref{artin-corrected} as a special case of the following general theorem (\cite[Theorem 3.3]{lenstra-stevenhagen-moree}) in the context of profinite groups.
\begin{theorem}[{Lenstra-Moree-Stevenhagen}]
\label{LMS}
Let $A=\prod_{p} A_p$, with Haar measure $\nu=\prod_{p} \nu_p$, and the quadratic character $\chi=\prod_{p} \chi_p$ be as above. Then for $G=\ker \chi$ and $S=\prod_p S_p$, a product of $\nu_p$-measurable subsets $S_p\subset A_p$ with $\nu_p(S_p)>0$, we have
$$\delta(S)=\frac{\nu(G\cap S)}{\nu(G)} =\left(1+\prod_p \frac{1}{\nu_p(S_p)} \int_{S_p} \chi_p \,d\nu_p \right)\cdot \frac{\nu(S)}{\nu(A)}.$$
\end{theorem}
The above theorem shows that if $\frac{\nu(G\cap S)}{\nu(G)}\neq \frac{\nu(S)}{\nu(A)},$ then the density of $S$ in $A$ can be corrected to give the density of the elements of $S$ in $G$. Moreover, the correction factor can be written explicitly in terms of the averages of the local characters $\chi_p$ over the sets $S_p$.
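A quick sanity check (our remark, not taken from \cite{lenstra-stevenhagen-moree}): take $S_p=A_p$ for every $p$, so that $\nu(S)/\nu(A)=1$. Since $\chi$ is a nontrivial character, $\chi_p$ is nontrivial for at least one prime $p$, and for such $p$ the orthogonality of characters gives
\begin{equation*}
\int_{A_p}\chi_p\, d\nu_p=0\;,
\end{equation*}
so the product in Theorem \ref{LMS} vanishes and $\delta(S)=(1+0)\cdot1=1$, in agreement with $\nu(G\cap A)/\nu(G)=1$.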
Our goals in this paper are two-fold. In one direction, in Theorem \ref{main1} and Corollary \ref{cor-after-main1}, we will show how the character sums method of \cite{lenstra-stevenhagen-moree} can be adapted to deal directly with general sums similar to \eqref{hooley-sum}. This approach differs from the one given in Theorem \ref{LMS}, in which a density given as a product, i.e., $\nu(S)/\nu(A)$, is corrected to another density, i.e., $\nu(G\cap S)/\nu(G)$, which is not explicitly given as an infinite sum. In another direction, we describe how the method of \cite{lenstra-stevenhagen-moree} can be adapted to derive product expressions for general sums similar to \eqref{hooley-sum} in which $\mu(n)$ is replaced by a multiplicative arithmetic function that may be supported on non-square-free integers (all the examples given in \cite{lenstra-stevenhagen-moree} deal with arithmetic functions supported on square-free integers). Such arithmetic sums appear naturally in many Artin-like problems. In addition, some of them, such as the Titchmarsh divisor problems for families of number fields, are not problems related to the natural density of subsets of integers. In this direction, our Theorem \ref{product-kummer-family} provides a product formula for the constant appearing in the Generalized Artin Problem for multiplicative functions $f$ (see Problem \ref{generalized}) in full generality.
We continue with our general setup. Let $a=\pm a_0^e$, where
$e$ is the largest positive integer such that $\lvert a\rvert$ is a perfect $e$-th power,
and $\sgn(a_0)=\sgn(a)$. In our arguments, the integer $a$ is fixed, so we suppress the dependency on $a$ in most of our notations. We fix a solution of the equation $x^2-a_0=0$ and denote it by $a_0^{1/2}$. The quadratic field $K=\mathbb{Q}(a_0^{1/2})$, the so-called \emph{entanglement field}, plays an important role in our arguments. We denote the discriminant of $K$ by $D$.
Observe that
for an integer $a\,(\neq 0, \pm 1)$, we have three different cases based on the parity of the exponent $e$ and the sign of $a$: (i) the \emph{odd exponent case}, in which $e$ is odd; (ii) the \emph{square case}, in which $e$ is even and $a>0$; (iii) the \emph{twisted case}, in which $e$ is even and $a<0$. We refer to cases (i) and (ii) as the \emph{untwisted} cases. Note that in the odd exponent case $K=K_2$; in the square case $K_2=\mathbb{Q}$ and $K\neq K_2$; and in the twisted case $K_2=\mathbb{Q}(i)$ and $K\neq K_2$.
For a Kummer family $\{K_n\}$, the Galois elements in $G(n)=\Gal(K_n/\mathbb{Q})$ are determined by their actions on the $n$-th roots of $a$ and the $n$-th roots of unity. Thus, any Galois automorphism can be realized as a group automorphism of the multiplicative group $$R_n=\{\alpha\in\overline{\mathbb{Q}}^{\times};\;\alpha^n\in\langle a\rangle\},$$ the group of \emph{$n$-radicals} of $a$. This yields the injective homomorphisms
\begin{equation}
\label{embed}
r_n:G(n)\to A(n):=\mathrm{Aut}_{\mathbb{Q}^{\times}\cap R_n}(R_n),
\end{equation}
where $A(n)$ is the group of automorphisms of $R_n$ fixing elements of $\mathbb{Q}^\times$. For $n=\prod_{p^k\Vert n} p^k$ we have $A(n)\cong \prod_{p^k\Vert n} A(p^k)$. Let $\nu_p(e)$ denote the multiplicity of $p$ in $e$. Let $\varPhi(n)$ be the Euler totient function.
For odd $p$, $$\#A(p^k)= p^{k-\min\{k, \nu_p(e)\}}\varPhi(p^k)$$ and for $p=2$,
$$\#A(2^k)=\begin{cases}
2^{k-\min\{k, s-1\}}\varPhi(2^k)&\text{if}~e~\text{is~odd~or}~a>0,\\
2^{k-\min\{k, s-1\}}\varPhi(2^{k+1})&\text{if}~e~\text{is~even~and}~a<0,
\end{cases}$$
where
\begin{equation}
\label{s-def}
s=\begin{cases}
\nu_2(e)+1&\text{if}~e~\text{is~odd~or}~ a>0,\\
\nu_2(e)+2& \text{if}~e~\text{is}~\text{even~and}~a<0.
\end{cases}
\end{equation}
(see Proposition \ref{new-prop} for a proof).
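For instance, for $a=2$ we have $e=1$, $\nu_2(e)=0$, and $s=1$, so that $\#A(2^k)=2^{k}\varPhi(2^k)=2^{2k-1}$ for all $k\geq1$. For $a=-4=-(-2)^2$ we are in the twisted case, with $\nu_2(e)=1$ and $s=3$, so that $\#A(2^k)=2^{k-\min\{k,2\}}\varPhi(2^{k+1})$; for example, $\#A(2)=2$, $\#A(4)=4$, and $\#A(8)=16$.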
The following theorem, related to the family of Kummer fields $K_n$, gives product expressions for a large family of sums involving the orders of the Galois groups of $K_n/\mathbb{Q}$.
\begin{theorem}
\label{product-kummer-family}
Let $a=\pm a_0^e$, where $a_0$ and $e$ are defined as above, $K=\mathbb{Q}(a_0^{1/2})$, and let $D$ be the discriminant of $K$.
Let $g$ be a multiplicative arithmetic function such that
\begin{equation*}
\sum_{n=1}^{\infty}\frac{\lvert g(n)\rvert}{\#G(n)}<\infty,
\end{equation*}
where $\{G(n)\}$ is the family of Galois groups of the Kummer family $\{\mathbb{Q}(\zeta_n, a^{1/n}) \}$.
Let
$A(n)$ be as defined above.
Then,
\begin{equation}
\label{product-kummer}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}=\prod_p\sum_{k\geq 0}\frac{g(p^k)}{\#A(p^k)}+
\prod_{p}\sum_{k\geq\ell(p)}\frac{g(p^k)}{\#A(p^k)},
\end{equation}
where
$$\ell(p)=
\left\{\begin{array}{ll}
0&\text{if~} p~ \text{is~ odd~and~}p\nmid D,\\1&\text{if~} p~ \text{is~ odd~and~}p\mid D,\\
s&\text{if}~ p=2~ \text{and}~ D ~\text{is odd},\\
\max\{2,s\}&\text{if}~ p=2~ \text{and}~ 4\Vert D,\\
2&\text{if}~ p=2,~8\Vert D,~\text{and}~(\nu_2(e)=1~\text{and}~a<0),\\
\max\{3,s\}&\text{if}~ p=2,~ 8\Vert D,~\text{and}~(\nu_2(e)\neq 1~\text{or}~a>0).\end{array}
\right.
$$
\end{theorem}
\begin{Remark}
(i) In the summation \eqref{hooley-sum} appearing in Artin's primitive root conjecture, we have $g(n)=\mu(n)$. In this case, formula \eqref{product-kummer} for $g(n)=\mu(n)$ provides a unified way of expressing the constant in Artin's primitive root conjecture.
Note that if $e$ is even and $a>0$, i.e., $a$ is a perfect square, we have $$\sum_{k\geq 0}\frac{\mu(2^k)}{\#A(2^k)}= 0\quad\text{and}\quad\sum_{k\geq\ell(2)}\frac{\mu(2^k)}{\#A(2^k)}= 0.$$
(The first sum is zero since $\#A(2)=1$ and the second sum is zero since $\ell(2)\geq 2$.) Hence, \eqref{product-kummer} vanishes. Also, if $e$ is even and $a<0$, then $\ell(2)\geq 2$. Thus, \eqref{product-kummer} reduces to \eqref{ArtinConstant}. If $e$ is odd and $D$ is even, then again $\ell(2)\geq 2$ and \eqref{product-kummer} reduces to \eqref{ArtinConstant}. The only remaining case is when $e$ is odd and $D$ is odd (equivalently $e$ odd and $a_{sf}\equiv 1$ (mod $4$)), where \eqref{product-kummer} reduces to $E_a \cdot A_a$ given in \eqref{artin-corrected}.
{(ii) As a consequence of Theorem 1.1, we can derive necessary and sufficient conditions for the vanishing of
\begin{equation}
\label{constant}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}.
\end{equation}
More precisely, \eqref{constant} vanishes if and only if one of the following holds:
\noindent (a) For some prime $p\nmid 2D$, we have $\displaystyle{\sum_{k\geq 0}\frac{g(p^k)}{\#A(p^k)}=0}$.
\noindent (b) We have $$\prod_{p\mid 2D} \sum_{k\geq 0}\frac{g(p^k)}{\#A(p^k)}+ \prod_{p\mid 2D} \sum_{k\geq \ell(p)}\frac{g(p^k)}{\#A(p^k)}=0.$$
In the case of Artin's conjecture, (a) is never satisfied and (b) holds if and only if $a$ is a perfect square.
}
{(iii) If $\#G(n)$ were a multiplicative function, then the sum in \eqref{product-kummer} would equal the {product} $\prod_p\sum_{k\geq 0}\frac{g(p^k)}{\#G(p^k)}$. However, this is not the case for the Kummer family, and thus the sum in \eqref{product-kummer} may differ from the above naive product. If the sum and the product are not equal, then
a complex number $E_{a, g}$ is called a \emph{correction factor} if
\begin{equation*}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}=E_{a, g} \prod_p\sum_{k\geq 0}\frac{g(p^k)}{\#G(p^k)}.
\end{equation*}
The expression \eqref{product-kummer} provides precise information on the correction factor $E_{a, g}$. In fact,
if $\sum_{k\geq0}\frac{g(p^k)}{\#G(p^k)}\neq0$ for all primes $p\mid 2D$, we have
\begin{equation*}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}=\left(\frac{\displaystyle{\prod_{p\mid 2D}\sum_{k\geq 0}\frac{g(p^k)}{\#A(p^k)}+ \prod_{p\mid 2D}\sum_{k\geq\ell(p)}\frac{g(p^k)}{\#A(p^k)}}}{\displaystyle{\prod_{p\mid 2D}\sum_{k\geq 0} \frac{g(p^k)}{\#G(p^k)}}}\right)\prod_p \sum_{k\geq 0}\frac{g(p^k)}{\#G(p^k)}.
\end{equation*}
Also, if $\sum_{k\geq0}\frac{g(p^k)}{\#G(p^k)}=0$ for some prime $p\mid 2D$, and $\sum_{n\geq1}\frac{g(n)}{\#G(n)}\neq 0$, then
the {product} $\prod_p\sum_{k\geq 0}\frac{g(p^k)}{\#G(p^k)}$ cannot be corrected.
It should be noted that for $K=\mathbb{Q}(\sqrt{\pm 2})$ the above correction factor is slightly different from the one given in Theorem \ref{LMS} for the density problems since in these cases $\#G(2^k)\neq \#A(2^k)$ for some positive integers $k$.}
(iv) For an integer $a\;(\neq 0, \pm 1)$, let $n_a=\prod_{p\mid 2D} p^{\ell(p)}$, where $D$ and $\ell(p)$ are as in Theorem \ref{product-kummer-family}. Then, by taking $g(n)=1/n^s$, for $\mathrm{Re}(s)>0$, in Theorem \ref{product-kummer-family} and comparing the coefficients of $1/n^s$ on both sides of \eqref{product-kummer}, we get
$$
[\mathbb{Q}(\zeta_n, a^{1/n}):\mathbb{Q}]=\begin{cases}
\#A(n)&\text{if } n_a\nmid n,\\
\frac{1}{2}\#A(n)&\text{if } n_a\mid n.
\end{cases}
$$
\end{Remark}
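To illustrate part (iv) of the above remark, take $a=2$, so that $a_0=2$, $e=1$, $s=1$, $K=\mathbb{Q}(\sqrt{2})$, and $D=8$. Then $\ell(2)=\max\{3,s\}=3$ and $n_a=2^3=8$, while $\#A(n)=n\varPhi(n)$; hence $[\mathbb{Q}(\zeta_n,2^{1/n}):\mathbb{Q}]$ equals $n\varPhi(n)$ when $8\nmid n$ and $n\varPhi(n)/2$ when $8\mid n$, in agreement with the entanglement $\sqrt{2}\in\mathbb{Q}(\zeta_8)$. Similarly, for $a=5$ we have $D=5$, $\ell(2)=s=1$, $\ell(5)=1$, and $n_a=10$, reflecting $\sqrt{5}\in\mathbb{Q}(\zeta_5)$.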
\par
The formula \eqref{product-kummer} can be used to study the constants in many Artin-like problems. We next apply this formula in the computation of the average value of a specific arithmetic function attached to a Kummer family.
More precisely, for $\{K_n:=\mathbb{Q}(\zeta_n,a^{1/n})\}_{n\geq 1}$, we define
\begin{equation*}
\tau_{a}(p)=\#\left\{n\in\mathbb{N} ;\; p\text{ splits completely in }K_n/\mathbb{Q}\right\}.
\end{equation*}
The Titchmarsh divisor problem attached to a Kummer family concerns the behaviour of $\sum_{p\leq x}\tau_a(p)$ as $x\to\infty$ (see \cite{AG} for the motivation behind this problem and its relation with the classical Titchmarsh divisor problem on the average value of the number of divisors of shifted primes). Under the assumption of the GRH for the Dedekind zeta function of $K_n/\mathbb{Q}$ for $n\geq1$, Felix and Murty \cite[Theorem 1.6]{felix-murty} proved that
\begin{equation}
\label{felix-murty-tdp}
\sum_{p\leq x}\tau_a(p)\sim \left(\sum_{n\geq1}\frac{1}{[K_n:\mathbb{Q}]}\right)\cdot \li(x),
\end{equation}
as $x\to\infty$, where $\li(x)=\int_2^x \frac{1}{\log t}dt$. They do not provide an Euler product expression for the constant appearing in the main term of \eqref{felix-murty-tdp}. As a direct consequence of
Theorem \ref{product-kummer-family} with $g(n)=1$, we readily find an explicit product formula for the constant appearing in \eqref{felix-murty-tdp}.
\begin{proposition}
\label{TDPK-formula}
Let $a=\pm a_{0}^e$ with $e=\prod_pp^{\nu_p(e)}$, and let $D$ be the discriminant of $K=\mathbb{Q}(a_0^{1/2})$. Then, if $e$ is odd or $a>0$,
\begin{equation}
\label{kummer-tdp-lastproduct}
\begin{split}
\sum_{n\geq1}\frac{1}{[K_n:\mathbb{Q}]}=& \left(1+\frac{c_0}{3\cdot2^{\nu_2(e)}-2}\prod_{p\mid2D}\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)+3}+p^{\nu_p(e)}-p^2}\right)\\
&~~~~~~~~~~~\times\prod_p\left(1+\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)}(p-1)(p^2-1)}\right),
\end{split}
\end{equation}
where
\begin{equation*}
c_0=
\begin{cases}
1/4 & \text{ if } 4\Vert D\;\text{and}\;\nu_2(e)=0,\text{ or if } 8\Vert D ~\text{and}~ \nu_2(e)=1, \\
1/16 & \text{ if } 8\Vert D\;\text{and}\; \nu_2(e)=0,\\
1 & \text{ otherwise.}
\end{cases}
\end{equation*}
If $e$ is even and $a<0$ (i.e., in the twisted case), then
\begin{equation}
\label{kummer-tdp-lastproduct-2}
\begin{split}
\sum_{n\geq1}\frac{1}{[K_n:\mathbb{Q}]}=& \left(1+\frac{c_0}{3\cdot2^{\nu_2(e)+2}-2}\prod_{\substack{p\mid D\\ p\neq 2}}\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)+3}+p^{\nu_p(e)}-p^2}\right)\\
&~~~~~~~~~~~\times \left(1+\frac{2^{\nu_2(e)+2}-2^{\nu_2(e)}-1}{3\cdot 2^{\nu_2(e)}}\right)\prod_{\substack{p\\p\neq 2}}\left(1+\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)}(p-1)(p^2-1)}\right),
\end{split}
\end{equation}
where
\begin{equation*}
c_0=
\begin{cases}
4 & \text{ if } 8\Vert D\;\text{and}\; \nu_2(e)=1,\\
1 & \text{ otherwise.}
\end{cases}
\end{equation*}
\end{proposition}
Let $c_{a}$ denote the constant given in \eqref{kummer-tdp-lastproduct} or \eqref{kummer-tdp-lastproduct-2}, as appropriate.
It is evident that $c_a=q_a\cdot u$, where $q_a$ is a rational number depending on $a$, and $u$ is the \emph{universal constant}
\begin{equation}
\label{universal}
\sum_{n=1}^{\infty} \frac{1}{n \Phi(n)}=\prod_{p}\left( 1+\frac{p}{(p-1)(p^2-1)} \right)= 2.203856\cdots,
\end{equation}
where $\Phi(n)$ is the Euler totient function.
Observe that if $\nu_p(e)=0$ for all $p$, the naive products (the products over all primes $p$) in \eqref{kummer-tdp-lastproduct} and \eqref{kummer-tdp-lastproduct-2}
reduce to \eqref{universal}. This is in accordance with \cite[Theorem 1.4]{AF}, in which \eqref{universal} appears as the average constant as $a$ varies. Thus, on average over $a$, it is a smoothed version of \eqref{kummer-tdp-lastproduct} and \eqref{kummer-tdp-lastproduct-2}, namely the universal constant, that appears.
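The identity \eqref{universal} is easy to verify numerically. The following Python sketch (purely illustrative) compares a truncation of the series with a truncation of the Euler product, computing the totient by a standard sieve.

```python
# Numerical check of the universal constant u = sum_{n>=1} 1/(n*Phi(n))
# against its Euler product prod_p (1 + p/((p-1)(p^2-1))) ~ 2.203856.
N = 10**5

# Sieve of Euler totient values phi[0..N].
phi = list(range(N + 1))
for p in range(2, N + 1):
    if phi[p] == p:  # p is prime: phi[p] is untouched so far
        for m in range(p, N + 1, p):
            phi[m] -= phi[m] // p

series = sum(1.0 / (n * phi[n]) for n in range(1, N + 1))

product = 1.0
for p in range(2, N + 1):
    if phi[p] == p - 1:  # phi(p) = p - 1 characterizes primes
        product *= 1 + p / ((p - 1) * (p * p - 1))

print(round(series, 6), round(product, 6))
```

Both truncations agree with $2.203856\cdots$ to about four decimal places (the series tail decays like $1/N$).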
The product expressions of Proposition \ref{TDPK-formula} provide a convenient way of computing the numerical value of $c_{a}$ for a given value of $a$. We record a sample of such values in the following table.
\par
\begin{tabular}{c|cccccccccccc}
$a$&-13&-10&-8&-5&-3&-2&2&3&5&8&10&13\\
\hline
$c_{a}$
&2.205&2.206&2.972&2.214&2.343&2.258&2.258&2.238&2.247&2.972&2.206&2.209
\end{tabular}
\par
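For squarefree $a>1$ we have $e=1$ and $a_0=a$, every $\nu_p(e)$ vanishes, the main product in \eqref{kummer-tdp-lastproduct} is the universal constant $u$, and $c_a$ is $u$ times the prefactor. The following Python sketch (the helper names are ours, and only this special case is implemented) reproduces the corresponding truncated table entries.

```python
import math

def fundamental_disc(a0):
    # Discriminant of Q(sqrt(a0)) for squarefree a0 > 1.
    return a0 if a0 % 4 == 1 else 4 * a0

def prime_divisors(m):
    ps, d = [], 2
    while d * d <= m:
        if m % d == 0:
            ps.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        ps.append(m)
    return ps

def c_a(a, P=200_000):
    # c_a from the displayed product formula, specialized to e = 1
    # (a squarefree, a > 1), so nu_p(e) = 0 for every prime p.
    D = fundamental_disc(a)
    if D % 2 == 1:          # D odd
        c0 = 1.0
    elif D % 8 == 4:        # 4 || D
        c0 = 0.25
    else:                   # 8 || D
        c0 = 1.0 / 16
    pref = 1 + c0 * math.prod(p / (p**3 - p**2 + 1) for p in prime_divisors(2 * D))
    # Truncated universal product over primes p <= P.
    sieve = bytearray([1]) * (P + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(P**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, P + 1, i)))
    u = 1.0
    for p in range(2, P + 1):
        if sieve[p]:
            u *= 1 + p / ((p - 1) * (p * p - 1))
    return pref * u
```

Truncating $c_2, c_3, c_5, c_{10}, c_{13}$ to three decimals recovers the table values $2.258$, $2.238$, $2.247$, $2.206$, $2.209$.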
The classical Artin conjecture and the Titchmarsh divisor problem for a Kummer family are instances of a more general problem that we now describe. For an integer $a\;(\neq 0, \pm 1)$ and a prime $p\nmid a$, the \emph{residual index} of $a$ mod $p$, denoted by $i_a(p)$, is the index of the subgroup $\langle a \rangle$ in the multiplicative group $(\mathbb{Z}/p\mathbb{Z})^\times$.
There is a vast amount of literature on the study of asymptotics of functions of $i_a(p)$ as $p$ varies over primes. In \cite[p. 377]{Papa}, the following problem is proposed.
\begin{problem}[Generalized Artin Problem]
\label{generalized}
For certain integers $a$ and arithmetic functions $f(n)$, establish the asymptotic formula
$$\sum_{p\leq x} f(i_a(p)) \sim c_{f, a} \li(x),$$
as $x\rightarrow \infty$, where
\begin{equation}
\label{series}
c_{f, a}:= \sum_{n\geq 1} \frac{g(n)}{[K_n:\mathbb{Q}]}.
\end{equation}
Here $g(n)=\sum_{d\mid n} \mu(d) f(n/d)$ is the M\"{o}bius inverse of $f(n)$, where $\mu(n)$ is the M\"{o}bius function.
\end{problem}
Note that by setting $f(n)$ as the characteristic function of the set $S=\{1\}$ and $g(n)=\mu(n)$ in Problem \ref{generalized}, we recover the Artin conjecture, while $f(n)=d(n)$ (the divisor function) and $g(n)=1$ give the Titchmarsh divisor problem for a Kummer family; this holds since $\tau_a(p)=d(i_a(p))$ (see \cite[Lemma 2.1]{felix-murty} for details). Also, a conjecture of Laxton from 1969 \cite{L} predicts that, for $f(n)=1/n$, the generalized Artin problem determines the density of primes in the sequence given by the recurrence $w_{n+2}=(a+1) w_{n+1}-aw_n$, where $a>1$ is a fixed integer. Another instance of Problem \ref{generalized} appears in a conjecture of Bach, Lukes, Shallit, and Williams \cite{BLSW}, in which the constant $c_{f, 2}$ for $f(n)=\log{n}$ appears in the main term of the asymptotic formula for $\log{P_2(x)}$, where $P_2(x)$ is the smallest \emph{$x$-pseudopower} to base $2$.
A notable result on the Generalized Artin Problem, due to Felix and Murty \cite[Theorem 1.7]{felix-murty}, establishes, under the assumption of the GRH, the asymptotic
\begin{equation}
\label{FM}
\sum_{p\leq x} f(i_a(p))= c_{f, a} \li(x)+O_a\left( \frac{x}{(\log{x})^{2-\epsilon-\alpha}} \right),
\end{equation}
for $\epsilon>0$. Here $f(n)$ is an arithmetic function whose M\"obius inverse $g(n)$ satisfies
$$|g(n)| \ll d_k(n)^r (\log{n})^{\alpha}, $$
with $k, r \in \mathbb{N}$ and $0\leq \alpha <1$ all fixed, where $d_k(n)$ denotes the number of representations of $n$ as a product of $k$ positive integers.
Observe that the identity \eqref{product-kummer} in Theorem \ref{product-kummer-family} conveniently furnishes a product formula, in full generality, for the constant $c_{f, a}$ in \eqref{FM} when $f$ (equivalently $g$) is a multiplicative function. This product formula is valuable for studying vanishing criteria for $c_{f, a}$ and for its numerical evaluation for different $f$.
We now comment on the proof of Theorem \ref{product-kummer-family}. Observe that corresponding to the Kummer family $\{K_n\}$, we can consider the inverse systems $((G(n))_{n\in \mathbb{N}}, (i_{n_1,n_2})_{n_1\mid n_2})$ and $((A(n))_{n\in \mathbb{N}},(j_{n_1,n_2})_{n_1\mid n_2})$ ordered by divisibility relation on $\mathbb{N}$, where $G(n)$ and $A(n)$ are as defined before and $i_{n_1,n_2}$ and $j_{n_1, n_2}$, for $n_1\mid n_2$, are restriction maps. By taking the inverse limits on both sides of \eqref{embed}
we have the injective continuous homomorphism
$$r:G=\varprojlim G(n)\to A=\varprojlim A(n)$$
of profinite groups, where $G=\Gal(K_{\infty}/\mathbb{Q})$ and $A=\mathrm{Aut}_{\mathbb{Q}^{\times}\cap R_{\infty}}(R_{\infty})$ with $K_{\infty}=\bigcup_{n\geq1}K_n$ and $R_{\infty}=\bigcup_{n\geq 1}R_n$.
As profinite groups, both $G$ and $A$ are endowed with compact topologies and thus can be equipped with Haar measures.
We will show that Theorem \ref{product-kummer-family} is a corollary of the following theorem attached to a general setting of profinite groups $G$ and $A$.
\begin{theorem}
\label{main1}
Let $((G(n))_{n\in \mathbb{N}}, (i_{n_1,n_2})_{n_1\mid n_2})$ and $((A(n))_{n\in \mathbb{N}},(j_{n_1,n_2})_{n_1\mid n_2})$ be inverse systems of finite groups ordered by divisibility relation on $\mathbb{N}$.
Moreover, for $n\geq 1$, assume that there are injective maps $r_n:G(n)\to A(n)$ compatible by $i_{n_1,n_2}$ and $j_{n_1,n_2}$, i.e., for $n_1\mid n_2$, the diagram
\begin{equation*}
\begin{tikzcd}
G(n_2)\arrow[r,"r_{n_2}"]\arrow[d,"i_{n_1,n_2}"] & A(n_2)\arrow[d,"j_{n_1,n_2}"]\\
G(n_1)\arrow[r,"r_{n_1}"] & A(n_1)
\end{tikzcd}
\end{equation*}
commutes. Let $r: G=\varprojlim G(n)\to A=\varprojlim A(n)$ be the resulting injective continuous homomorphism of profinite groups.
Let $\mu_m$ be the multiplicative group of $m$-th roots of unity for a fixed $m$.
Suppose there exists an exact sequence
\begin{equation}
\label{ev1}
1\to G\stackrel{r}{\longrightarrow} A\stackrel{\chi}{\longrightarrow} \mu_m\to 1,
\end{equation}
where $\chi$ is a continuous homomorphism.
Let $g$ be an arithmetic function such that
\begin{equation*}
\sum_{n\geq1}\frac{\lvert g(n)\rvert}{\#G(n)}<\infty.
\end{equation*}
Consider the natural projections
$\pi_{A,n}:A\to A(n)$
and let
\begin{equation}
\label{g-tilde}
\tilde{g}=\sum_{n\geq1}g(n)1_{\ker\pi_{A,n}},
\end{equation}
where $1_{\ker\pi_{A, n}}$ denotes the characteristic function of $\ker\pi_{A, n}$.
Let $\nu_A$ be the normalized Haar measure attached to $A$.
Then, $\tilde{g}\in L^1(\nu_A)$ (the space of $\nu_A$-integrable functions) and
\begin{equation*}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\sum_{i=0}^{m-1}\int_A \tilde{g}\,\chi^i\,d\nu_A.
\end{equation*}
\end{theorem}
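To indicate how Theorem \ref{main1} is applied, note that for $m=2$ the conclusion reads
$$\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\int_A\tilde{g}\,d\nu_A+\int_A\tilde{g}\,\chi\, d\nu_A.$$
Since, in the cases of interest, the projections $\pi_{A,n}$ are surjective, $\nu_A(\ker\pi_{A,n})=1/\#A(n)$, and so the first integral equals $\sum_{n\geq1}g(n)/\#A(n)$, which for multiplicative $g$ factors as an Euler product; the second integral is then evaluated by factoring $\chi$ into local characters.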
Observe that Theorem \ref{main1} is quite general and can be applied in the evaluation of sums of the form $\sum_{n\geq 1} g(n)/\#G(n)$ for any family $\{G(n)\}$ of finite groups satisfying the assumptions of the theorem.
The Kummer family is an instance of such families. Another example is the family of division fields attached to a \emph{Serre elliptic curve} $E$ (see Section \ref{Serre section} for the definition). Following \cite[Section 8]{lenstra-stevenhagen-moree}, in Section \ref{Serre section} we show that the family of division fields $\{\mathbb{Q}(E[n])\}$ attached to a Serre curve $E$ satisfies the conditions of Theorem \ref{main1}, and as a consequence the following holds.
\begin{proposition}
\label{prop-Serre}
Let $\mathbb{Q}(E[n])$ denote the $n$-division field of a Serre elliptic curve defined over $\mathbb{Q}$ by a Weierstrass equation $y^2=x^3+ax+b$. Let $\Delta$ be the discriminant of the cubic equation $x^3+ax+b=0$ and let $D$ be the discriminant of the quadratic field $K=\mathbb{Q}({\Delta}^{1/2})$. Let $g(n)$ be a multiplicative arithmetic function such that
\begin{equation*}
\sum_{n=1}^{\infty}\frac{\lvert g(n)\rvert}{[\mathbb{Q}(E[n]) : \mathbb{Q}]}<\infty.
\end{equation*}
Then,
\begin{equation}
\label{serre-sum}
\sum_{n=1}^{\infty}\frac{g(n)}{[\mathbb{Q}(E[n]) : \mathbb{Q}]} =\prod_p\sum_{k\geq 0}\frac{g(p^k)}{p^{4k-3}(p^2-1)(p-1)}+
\prod_{p}\sum_{k\geq\ell(p)}\frac{g(p^k)}{p^{4k-3}(p^2-1)(p-1)},
\end{equation}
where
$$\ell(p)=\left\{\begin{array}{ll}
0&\text{if~} p~ \text{is~ odd~and~}p\nmid D,\\
1&\text{if~} p~ \text{is~ odd~and~}p\mid D,\\
1&\text{if}~ p=2~ \text{and}~ D ~\text{is odd},\\
2&\text{if}~ p=2~ \text{and}~ 4\Vert D,\\
3&\text{if}~ p=2~ \text{and}~ 8\Vert D.\\
\end{array}
\right.
$$
\end{proposition}
Observe that the above proposition for $g(n)=1$ reduces to the product expression of the Titchmarsh divisor problem for the family of division fields attached to a Serre curve $E$. We note that the product expressions for this constant, and for two other constants corresponding to different $g(n)$'s for such families, are given in \cite[Theorem 5]{cojocaru:tdp} by determining the value of $[\mathbb{Q}(E[n]):\mathbb{Q}]$ for a Serre curve $E$ (see \cite[Proposition 17 (iv)]{cojocaru:tdp}) and employing \cite[Lemma 3.12]{kowalski}. It is worth mentioning that a similar approach to finding the expression \eqref{kummer-tdp-lastproduct}, using the exact formulas for $[K_n: \mathbb{Q}]$ as given in \cite[Proposition 4.1]{wagstaff}, would result in tedious case-by-case computations that do not appear to be straightforward; especially when $a<0$, this approach seems intractable. The method of \cite{lenstra-stevenhagen-moree} as described above provides an elegant approach to establishing identities similar to \eqref{kummer-tdp-lastproduct} and \eqref{kummer-tdp-lastproduct-2}.
The structure of the paper is as follows. We describe our adaptation of the character sums method of \cite{lenstra-stevenhagen-moree} in Sections 2 and 3 and prove Proposition \ref{character} that plays a crucial role in our explicit computation of the constants in the Kummer case. Section 4 is dedicated to a proof of Theorem \ref{main1}. The proofs of Theorem \ref{product-kummer-family} and its consequence, Proposition \ref{TDPK-formula}, are given respectively in Sections 5 and 6. Finally, a brief discussion on Serre curves and the proof of Proposition \ref{prop-Serre} are provided in Section 7.
\begin{notation}
The following notations are used throughout the paper. The letter $p$ denotes a prime number, $k$ denotes a non-negative integer, the letter $n$ denotes a positive integer,
the multiplicity of the prime $p$ in the prime factorization of $n$ is denoted by $\nu_p(n)$, the cardinality of a finite set $S$ is denoted by $\#S$, $1_S$ is the characteristic function of a set $S$, $\overline{\mathbb{Q}}$ is an algebraic closure of $\mathbb{Q}$, $\zeta_n$ denotes a primitive $n$-th root of unity, and $\Phi(n)$ is the Euler totient function. In Sections \ref{section:character}, \ref{Sec-3}, \ref{S1}, and \ref{Section 5}, $a=\pm a_0^e$ is a non-zero integer other than $\pm 1$, the collection $\{K_n=\mathbb{Q}(\zeta_n, a^{1/n}) \}_{n\in \mathbb{N}}$ is the family of Kummer fields and $K=\mathbb{Q}(a_0^{1/2})$ is the entanglement field attached to this family, $D$ is the discriminant of $K$, the Galois group of $K_n$ over $\mathbb{Q}$ is denoted by $G(n)$, the inverse limit of the directed family $\{G(n)\}$ is denoted by $G$, $\mu_\infty$ denotes the group of all roots of unity, and $\mathbb{Q}_{ab}=\mathbb{Q}(\mu_\infty)$ is the maximal abelian extension of $\mathbb{Q}$. The group of $n$-radicals of the integer $a=\pm a_0^e$ is denoted by $R_n$ and $R_{\infty}=\bigcup_{n\geq 1}R_n$.
The group of automorphisms of $R_n$ (respectively $R_\infty$) that fix $\mathbb{Q}^\times$ is denoted by $A(n)$ (respectively $A$). The inverse limit of the system
$\{A(p^k)\}_{k\geq 1}$ is denoted by $A_p$. The map $\pi_{A, n}$ (respectively $\pi_{G, n}$ and $\varphi_{p^k}$) is the projection map from $A$ (respectively $G$ and $A_p$) to $A(n)$ (respectively $G(n)$ and $A(p^k)$). The profinite completion of $\mathbb{Z}$ is denoted by $\widehat{\mathbb{Z}}$ and $\mathbb{Z}_p$ is the ring of $p$-adic integers. The normalized Haar measures on $G$, $A$, and $A_p$ are denoted respectively by $\nu_G$, $\nu_A$, and $\nu_{A_p}$. The space of $\nu$-integrable functions is denoted by $L^1(\nu)$.
In Section \ref{Section 3}, $G(n)$, $A(n)$, $A(p^k)$, $G$, $A$, $A_p$, $\pi_{A, n}$, $\varphi_{p^k}$, $\nu_G$, $\nu_A$, and $\nu_{A_p}$ are used in the general setting of profinite groups.
Finally, in Section \ref{Serre section}, $E[n]$ denotes the group of $n$-division points of an elliptic curve $E$ defined over $\mathbb{Q}$ given by a Weierstrass equation with discriminant $\Delta$, and $K=\mathbb{Q}(\Delta^{1/2})$ of discriminant $D$ is the entanglement field attached to the family of division fields of a Serre elliptic curve. \end{notation}
\section{The associated character to a Kummer family}\label{section:character}
Recall that for an integer $a (\neq0,\pm1)$, we set $a=\pm a_0^e$, where $\sgn(a)=\sgn(a_0)$ and $e$ is the largest such integer. We fix a solution of the equation $x^2-a_0=0$, denote it by $a_0^{1/2}$, and set $K=\mathbb{Q}(a_0^{1/2})$.
We next
define a quadratic character which describes the entanglements inside a given Kummer family $\{K_n\}$. Let $\mu_{\infty}=\bigcup_{n\geq1}\mu_n(\overline{\mathbb{Q}})$ be the group of all roots of unity in $\overline{\mathbb{Q}}$. Then, $\mu_{\infty}$ is contained in $K_{\infty}=\bigcup_{n\geq1}K_n$. In addition, the infinite extension $K_{\infty}/\mathbb{Q}$ is the compositum of $\mathbb{Q}(a_0^{\mathbb{Q}})$ and $\mathbb{Q}_{ab}$ (the maximal abelian extension of $\mathbb{Q}$), where
\begin{equation}
\label{intersection}
\mathbb{Q}(a_0^{\mathbb{Q}})\cap\mathbb{Q}_{ab}=\mathbb{Q}(a_0^{1/2})
\end{equation}
(see \cite{lenstra-stevenhagen-moree}*{Lemma 2.5}). Note that $a_0^{\mathbb{Q}}=\{a_0^b;\;b\in\mathbb{Q}\}$. In \cite{lenstra-stevenhagen-moree}*{Page 494} it is proved that
\begin{equation}
\label{semidirect}
A=\mathrm{Aut}_{\mathbb{Q}^{\times}\cap R_{\infty}}(R_{\infty})\cong\Hom(a_0^{\mathbb{Q}}/a_0^{\mathbb{Z}},\mu_{\infty})\rtimes \mathrm{Aut}(\mu_{\infty}),
\end{equation}
where
$a_0^{\mathbb{Z}}=\{a_0^b;\;b\in\mathbb{Z}\}$,
and for $(\phi_1, \sigma_1), (\phi_2, \sigma_2)\in A$ we have
$$(\phi_1, \sigma_1)(\phi_2, \sigma_2)= (\phi_1\cdot (\sigma_1 \circ \phi_2), \sigma_1\circ \sigma_2).$$
Note that $G=\Gal(K_{\infty}/\mathbb{Q})$ can be embedded in $A$. Thus, if $(\phi,\sigma)\in \Hom(a_0^{\mathbb{Q}}/a_0^{\mathbb{Z}},\mu_{\infty})\rtimes \mathrm{Aut}(\mu_{\infty})\cong A$ is an element of $G$, then, by \eqref{intersection}, the actions of $\phi$ and $\sigma$ on $\mathbb{Q}(a_0^{1/2})$ must be the same. One can show that $(\phi,\sigma)\in A$ is in $G$ if and only if $\phi$ and $\sigma$ act in a compatible way on $a_0^{1/2}$, i.e.,
\begin{equation}
\label{compatibility}
\phi(a_0^{1/2})=\frac{\sigma(a_0^{1/2})}{a_0^{1/2}}\in \mu_2
\end{equation}
(see \cite{lenstra-stevenhagen-moree}*{Page 494}). (For simplicity, we used $\phi(a_0^{1/2})$ instead of $\phi(a_0^{1/2}a_0^{\mathbb{Z}})$.) We elaborate on \eqref{compatibility} by considering two distinct quadratic characters $\psi_K$ and $\chi_D$ on $A$ which are related to the entanglement field $K=\mathbb{Q}(a_0^{1/2})$ of discriminant $D$. The quadratic character $\psi_K:A\to\mu_2$ corresponds to the action of the $\phi$-component of $(\phi,\sigma)\in A$ on $a_0^{1/2}$, i.e.,
\begin{equation*}
\psi_K(\phi,\sigma)=\phi(a_0^{1/2})\in\mu_2.
\end{equation*}
This is a \emph{non-cyclotomic character}, i.e., $\psi_K$ does not factor via the natural map $A\to\mathrm{Aut}(\mu_{\infty})$ (see \cite{lenstra-stevenhagen-moree}*{Page 495}).
The other quadratic character,
\begin{equation*}
\chi_D:A\to\mathrm{Aut}(\mu_{\infty})\cong\widehat{\mathbb{Z}}^{\times}\to\mu_2,
\end{equation*}
corresponds to the action of the cyclotomic component $\mathrm{Aut}(\mu_{\infty})$ of $A$ on $K=\mathbb{Q}(a_0^{1/2})$ of discriminant $D$, i.e.,
\begin{equation*}
\chi_D(\phi,\sigma)=\frac{\sigma(a_0^{1/2})}{a_0^{1/2}}\in \mu_2.
\end{equation*}
Hence, by \cite{cox}*{Proposition 5.16 and Corollary 5.17}, $\chi_D$ is the lift of the Kronecker symbol $\left(\frac{D}{.}\right)$ to $\mathrm{Aut}(\mu_{\infty})\cong\widehat{\mathbb{Z}}^{\times}$.
The characters $\chi_D$ and $\psi_K$ are not the same on $A$ since one is cyclotomic, and the other is not. Moreover, by \eqref{compatibility}, an element $x\in A$ is in $G$ if and only if $\psi_K(x)=\chi_D(x)$. Thus, the image of the homomorphism $G\to A$ is the kernel of the non-trivial quadratic character $\chi=\psi_K\cdot\chi_D: A\to\mu_2$. In other words, the sequence
\begin{equation*}
1\longrightarrow G\stackrel{r}{\longrightarrow} A\xrightarrow{\chi=\psi_K\cdot\chi_D}\mu_2\longrightarrow 1
\end{equation*}
is an exact sequence (see \cite{lenstra-stevenhagen-moree}*{Theorem 2.9} for more details).
Let $A(p^k)=\mathrm{Aut}_{\mathbb{Q}^\times \cap R_{p^k} }(R_{p^k})$ and $A_p=\varprojlim A(p^k)$. Since an element of $A$ is determined by its action on prime power radicals, we have $A\cong \prod_{p} A_p$ (see \cite[formula (2.10), p. 495]{lenstra-stevenhagen-moree} and \cite[p. 20]{MS}).
The character $\chi_D$ is the lift of the Kronecker symbol $\left(\frac{D}{.}\right)$ to $A$ via the maps
\begin{equation*}
A\cong\left(\prod_pA_p\right)\stackrel{\text{proj}}{\longrightarrow}\mathrm{Aut}(\mu_{\infty})\left(\cong\prod_p\mathbb{Z}^{\times}_p\right)\stackrel{\text{proj}}{\longrightarrow}(\mathbb{Z}/|D|\mathbb{Z})^{\times},
\end{equation*}
where the first projection comes via \eqref{semidirect}.
Since $D$ is a fundamental discriminant, $\chi_D=\prod_{p\mid D}\chi_{D,p}$, where $\chi_{D,p}$ is the lift of the Legendre symbol modulo $p$ to $A_p$ for odd $p$, and $\chi_{D,2}$ is the lift of one of the Dirichlet characters mod $8$ to $A_2$ (see \cite{davenport}*{Chapter 5}). More precisely, if $D$ is odd, then $\chi_{D,2}=1$; if $4~\Vert~D$, then $\chi_{D,2}$ is the lift to $A_2$ of $\left(\frac{-4}{.}\right)$, the unique Dirichlet character mod $8$ of conductor $4$; and if $8~\Vert~D$, then $\chi_{D,2}$ is the lift to $A_2$ of $\left(\frac{\pm8}{.}\right)$, one of the two Dirichlet characters mod $8$ of conductor $8$. For the case $8~\Vert~D$, write $\lvert D\rvert=8\prod_{i=1}^kp_i$ with distinct odd primes $p_i$; if $D>0$ and the number of $1\leq i\leq k$ with $p_i\equiv3\;(\modd\;4)$ is even, or $D<0$ and this number is odd, then $\chi_{D,2}$ is the lift to $A_2$ of $\left(\frac{8}{.}\right)$. Otherwise, $\chi_{D,2}$ is the lift to $A_2$ of $\left(\frac{-8}{.}\right)$.
Next, we show that $\chi$ can be written as a product of local characters $\chi_p: A_p\to\mu_2$. Note that $\psi_K$ factors via $A_2$. Let $\psi_{K,2}:A_2\to\mu_2$ be the corresponding homomorphism obtained from factorization of $\psi_K$ via $A_2$.
For odd primes $p\nmid D$, set $\chi_p=1$. Let $\chi_p=\chi_{D,p}$ for odd primes $p\mid D$ and for prime $2$ let $\chi_2=\chi_{D,2}\cdot\psi_{K,2}$. Therefore, by the above construction of $\chi$, we have the decomposition $\chi=\prod_p\chi_p$.
\section{The local characters $\chi_p$}
\label{Sec-3}
In this section, we find the smallest values of $k$, as a function of $p$ and $a$, for which the local character $\chi_p$ factors via $A(p^k)$. In other words, we determine the values of $k$ for which $\chi_p$ is trivial on $\ker\varphi_{p^{k}}$ and nontrivial on $\ker\varphi_{p^{k-1}}$, where $\varphi_{p^i}$ is the projection map from $A_p$ to $A(p^i)$. The values are recorded in the statements of Theorem \ref{product-kummer-family} and Proposition \ref{character}. We start
by giving a concrete description of the groups $A(2^k)$, for positive integers $k$, as subgroups of matrices
$\begin{psmallmatrix}1 & 0\\b & d\end{psmallmatrix}$, where $b\in \mathbb{Z}/2^k\mathbb{Z}$ and $d\in \left(\mathbb{Z}/2^k\mathbb{Z}\right)^\times$. We achieve this by choosing a certain compatible system of generators for the groups $R_{2^k}$, where $k\geq 1$.
\begin{proposition}
\label{new-prop}
(i) Let $\varPhi(n)$ be the Euler totient function and $s$ be as defined in \eqref{s-def}. For odd $p$, $$\#A(p^k)= p^{k-\min\{k, \nu_p(e)\}}\varPhi(p^k)$$ and for $p=2$,
$$\#A(2^k)=\begin{cases}
2^{k-\min\{k, s-1\}}\varPhi(2^k)&\text{if}~e~\text{is~odd~or}~a>0,\\
2^{k-\min\{k, s-1\}}\varPhi(2^{k+1})&\text{if}~e~\text{is~even~and}~a<0.
\end{cases}$$
(ii) If $e$ is even and $a<0$, we have
\begin{equation*}
A(2^k)\cong \left\{\left(\begin{array}{cc} 1&0\\b&d \end{array} \right);~ b\in \mathbb{Z}/2^k\mathbb{Z}, d\in \left(\mathbb{Z}/2^k\mathbb{Z}\right)^\times, {\rm and}~2b+1\equiv d~({\rm mod}~2^{\min\{k,s-1\}})
\right\}.
\end{equation*}
(iii) If $e$ is odd or $a>0$, we have
\begin{equation*}
A(2^k)\cong \left\{\left(\begin{array}{cc} 1&0\\b&d \end{array} \right);~ b\in \mathbb{Z}/2^k\mathbb{Z}, d\in \left(\mathbb{Z}/2^k\mathbb{Z}\right)^\times, {\rm and}~b+1\equiv d~({\rm mod}~2^{\min\{k,s-1\}})
\right\}.
\end{equation*}
\end{proposition}
\begin{proof}
(i) For odd primes $p$, it is known that $A(p^k)\cong G(p^k)={\rm Gal}(K_{p^k}/\mathbb{Q})$ (see \cite[Remarks 2.12. (b), p. 496]{lenstra-stevenhagen-moree}). Also if
$K\neq \mathbb{Q}(\sqrt{\pm 2})$, then $A(2^k)\cong G(2^k)$. Since the size of $A(2^k)$ is independent of $K$, we get the formulas for the size of $A(p^k)$ from the ones for $G(p^k)$ as given in \cite[Proposition 4.1]{wagstaff}.
(ii) Let $a=-a_0^e$ as before and $e=2^{\nu_2(e)}e_1$, where $\nu_2(e)\geq 1$ and $e_1$ is odd. We denote a primitive $m$-th root of unity by $\zeta_m$. Recall that $R_{2^k}$ is the group of $2^k$-radicals. We have $$R_{2^k} =\langle\zeta_{2^{k+1}} \left(a_0^{e_1}\right)^{1/2^{k-\nu_2(e)}}, \zeta_{2^k} \rangle=\langle \beta, \zeta_{2^k} \rangle.$$
An automorphism $\tau\in A(2^k)$ is determined by its action on these generators of $R_{2^k}$, i.e., $\beta$ and $\zeta_{2^k}$. We have
$\tau(\beta)=\beta \zeta_{2^{k}}^{b(\tau)}$ and $\tau(\zeta_{2^k})=\zeta_{2^k}^{d(\tau)}$, where $b(\tau)\in \mathbb{Z}/2^k\mathbb{Z}$ and $d(\tau)\in \left(\mathbb{Z}/2^k\mathbb{Z}\right)^{\times}$.
We consider two cases.
Case 1: $k\geq s-1=\nu_2(e)+1$. We have
$$a_0^{e_1} \tau(\zeta_{2^{k+1}}^{2^{k-\nu_2(e)}})=\tau(\beta^{2^{k-\nu_2(e)}})=(\beta \zeta_{2^{k}}^{b(\tau)})^{2^{k-\nu_2(e)}}=a_0^{e_1} \zeta_{2^{k+1}}^{2^{k-\nu_2(e)}} \zeta_{2^{k}}^{b(\tau)2^{k-\nu_2(e)}}. $$
From here we get
$$\zeta_{2^k}^{d(\tau)2^{k-\nu_2(e)-1}}=\zeta_{2^k}^{2^{k-\nu_2(e)-1}+b(\tau) 2^{k-\nu_2(e)}}.$$
This implies $2b(\tau)+1\equiv d(\tau)~({\rm mod}~2^{s-1})$.
Case 2: $k< s-1=\nu_2(e)+1$. We have
$$(a_0^{e_1})^{2/2^{k-\nu_2(e)}} \tau(\zeta_{2^{k+1}}^{2})=\tau(\beta^{2})=(\beta \zeta_{2^{k}}^{b(\tau)})^{2}=(a_0^{e_1})^{2/2^{k-\nu_2(e)}} \zeta_{2^{k+1}}^{2} \zeta_{2^{k}}^{2b(\tau)}. $$
From here we get
$$\zeta_{2^k}^{d(\tau)}=\zeta_{2^k}^{1+2b(\tau) }.$$
This implies $2b(\tau)+1\equiv d(\tau)~({\rm mod}~2^{k})$.
So any $\tau\in A(2^k)$ corresponds to a matrix $\begin{psmallmatrix}1 & 0\\b(\tau) & d(\tau)\end{psmallmatrix}$ in the affine group of matrices given in part (ii) of the proposition. Since, in each case, the number of such matrices equals the cardinality of $A(2^k)$ given in part (i), the claimed isomorphism in part (ii) holds.
(iii) The proof is analogous to the proof of (ii), considering $R_{2^k} =\langle \zeta_{2^k}\left(a_0^{e_1}\right)^{1/2^{k-\nu_2(e)}}, \zeta_{2^k} \rangle,$
where $e_1$ is odd.
\end{proof}
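For illustration (this example is ours, not part of the original statement), assume the hypotheses of part (ii) and take $a=-4$, so that $a_0=2$, $e=2$, $e_1=1$, $\nu_2(e)=1$, and $s=\nu_2(e)+2=3$. For $k=2$ the congruence in part (ii) reads $2b+1\equiv d~({\rm mod}~4)$, so that
\begin{equation*}
A(4)\cong \left\{\left(\begin{array}{cc} 1&0\\0&1 \end{array} \right), \left(\begin{array}{cc} 1&0\\2&1 \end{array} \right), \left(\begin{array}{cc} 1&0\\1&3 \end{array} \right), \left(\begin{array}{cc} 1&0\\3&3 \end{array} \right)\right\},
\end{equation*}
a group of order $4$.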
The following proposition indicates the significance of the integer $s$ defined in \eqref{s-def}.
\begin{proposition}
\label{factoring}
The number $s$ defined in \eqref{s-def} is the smallest integer $k$ for which $\psi_{K, 2}$ factors via $A(2^k)$.
\end{proposition}
\begin{proof}
For integers $k\geq 0$, let $\varphi_{p^k}:A_p\to A(p^k)$ be the projection map. It is enough to show that $\psi_{K, 2}$ is non-trivial on
$\ker \varphi_{2^{s-1}}$ and is trivial on $\ker \varphi_{2^{s}}$.
We write the proof for the twisted case, where $s=\nu_2(e)+2$. The proof for the untwisted case is similar.
Assume that $a=-(a_0^{e_1})^{2^{\nu_2(e)}}$ as in part (ii) of Proposition \ref{new-prop}. Let $\alpha\in A_2$ be such that
$$\tau_2=\varphi_{2^{\nu_2(e)+2}}(\alpha)= \left(\begin{array}{cc} 1&0\\0&1+2^{\nu_2(e)+1} \end{array}\right)\in A(2^{\nu_2(e)+2}).$$
Observe that $\alpha\in \ker \varphi_{2^{\nu_2(e)+1}}$ and since
$R_{2^{\nu_2(e)+2}}=\langle\zeta_{2^{\nu_2(e)+3}} \left(a_0^{e_1}\right)^{1/4}, \zeta_{2^{\nu_2(e)+2}} \rangle$, we have
\begin{equation}
\label{T1}
\tau_2(\zeta_{2^{\nu_2(e)+3}} \left(a_0^{e_1}\right)^{1/4})=\zeta_{2^{\nu_2(e)+3}} \left(a_0^{e_1}\right)^{1/4}.
\end{equation}
Raising both sides of \eqref{T1} to the power $2$ and observing that
$\zeta_{2^{\nu_2(e)+2}}$ and $\left(a_0^{e_1}\right)^{1/2}$ belong to $R_{2^{\nu_2(e)+2}}$, we get
\begin{equation}
\label{T2}
\tau_2(\zeta_{2^{\nu_2(e)+2}}) \tau_2( \left(a_0^{e_1}\right)^{1/2})=\zeta_{2^{\nu_2(e)+2}} \left(a_0^{e_1}\right)^{1/2}.
\end{equation}
Now since
$\tau_2(\zeta_{2^{\nu_2(e)+2}})= \zeta_{2^{\nu_2(e)+2}}^{1+2^{\nu_2(e)+1}}$, the equation \eqref{T2} implies that $$\tau_2( a_0^{e_1/2})=- a_0^{e_1/2}.$$
We have $e_1=2m+1$ for some integer $m$. Hence,
$$\tau_2( a_0^{1/2}a_0^m)=- a_0^{1/2}a_0^m.$$
Thus, for this $\alpha\in \ker \varphi_{2^{\nu_2(e)+1}}$, we have $\psi_{K, 2}(\alpha)=-1$. Hence, $\psi_{K, 2}$ is non-trivial on $\ker \varphi_{2^{\nu_2(e)+1}}$.
Next, let $\alpha\in \ker \varphi_{2^{\nu_2(e)+2}}$. Hence,
\begin{equation*}
\label{T3}
\tau_3=\varphi_{2^{\nu_2(e)+3}}(\alpha)= \left(\begin{array}{cc} 1&0\\ b&d \end{array}\right)\in A(2^{\nu_2(e)+3})
\end{equation*}
and
\begin{equation}
\label{T4}
\left(\begin{array}{cc} 1&0\\ b&d \end{array}\right)
\equiv \left(\begin{array}{cc} 1&0\\ 0&1 \end{array}\right)~({\rm mod}~2^{\nu_2(e)+2} ).
\end{equation}
Hence, $b=2^{\nu_2(e)+2}b_1$ for some integer $b_1$. We have
\begin{equation*}
\tau_3(\zeta_{2^{\nu_2(e)+4}} \left(a_0^{e_1}\right)^{1/8})=\zeta_{2^{\nu_2(e)+4}} \left(a_0^{e_1}\right)^{1/8} \zeta_{2^{\nu_2(e)+3}}^{2^{\nu_2(e)+2}b_1}.
\end{equation*}
Squaring both sides of this identity yields
\begin{equation*}
\tau_3(\zeta_{2^{\nu_2(e)+3}}) \tau_3(\left(a_0^{e_1}\right)^{1/4})=\zeta_{2^{\nu_2(e)+3}} \left(a_0^{e_1}\right)^{1/4}.
\end{equation*}
This implies
\begin{equation}
\label{T5}
\tau_3(\left(a_0^{e_1}\right)^{1/4})=\frac{\zeta_{2^{\nu_2(e)+3}}}{\zeta_{2^{\nu_2(e)+3}}^d} \left(a_0^{e_1}\right)^{1/4}.
\end{equation}
Now observe,
from \eqref{T4}, that
\begin{equation}
\label{T6}
d=1+2^{\nu_2(e)+2}d_1
\end{equation}
for some integer $d_1$. Raising both sides of \eqref{T5} to the power $2$ and employing \eqref{T6} yields
$$\tau_3(\left(a_0^{e_1}\right)^{1/2})= \left(a_0^{e_1}\right)^{1/2}.
$$
Hence,
$$\tau_3( a_0^{1/2}a_0^m)= a_0^{1/2}a_0^m,$$
where $e_1=2m+1$.
Thus, $\psi_{K, 2}$ is trivial on $\ker \varphi_{2^{\nu_2(e)+2}}$.
\end{proof}
The following proposition is essential in proving Theorem \ref{product-kummer-family}.
\begin{proposition}
\label{character}
Let $\ell(p)$ be the smallest integer $k$ for which $\chi_p$ factors via $A(p^{k})$. Then
$$\ell(p)=\left\{\begin{array}{ll}
0&\text{if~} p~ \text{is~ odd~and~}p\nmid D,\\1&\text{if~} p~ \text{is~ odd~and~}p\mid D,\\
s&\text{if}~ p=2~ \text{and}~ D ~\text{is odd},\\
\max\{2,s\}&\text{if}~ p=2~ \text{and}~ 4\Vert D,\\
2&\text{if}~ p=2,~8\Vert D,~\text{and}~(\nu_2(e)=1~\text{and}~a<0),\\
\max\{3,s\}&\text{if}~ p=2,~ 8\Vert D,~\text{and}~(\nu_2(e)\neq 1~\text{or}~a>0).\end{array}
\right.
$$
\end{proposition}
\begin{proof}
If $p\nmid 2D$, by the definition of $\chi_p$, we have that $\chi_p$ is constantly equal to $1$. Thus, the assertion holds.
If $p$ is an odd prime dividing $D$, then $\chi_p$ is the Legendre symbol mod $p$, so the result follows.
If $p=2$ and $D$ is odd, then $\chi_2=\psi_{K, 2}$. Thus, the result follows from Proposition \ref{factoring}.
If $p=2$ and $4\Vert D$, then $\chi_2=\psi_{K, 2} \chi_{D, 2}$, where $\chi_{D, 2}$ is the Dirichlet character mod $8$ of conductor $4$.
We are looking for a positive integer $k$ such that $\psi_{K,2} (\alpha)\neq \chi_{D, 2}(\alpha)$ for an element $\alpha\in \ker\varphi_{2^{k-1}}$, and $\psi_{K,2} (\alpha)= \chi_{D, 2}(\alpha)$ for all $\alpha\in \ker\varphi_{2^k}$.
Note that $2$ is the smallest value of $k$ for which $\chi_{D, 2}$ factors via $A(2^k)$, and, by Proposition \ref{factoring}, $s$ is the smallest value of $k$ for which $\psi_{K, 2}$ factors via $A(2^k)$. Thus, $\chi_{D, 2}$ is trivial on $\ker\varphi_{2^k}$ for $k\geq 2$ and is nontrivial on $\ker\varphi_{2^{k}}$ for $0\leq k\leq 1$. Also, $\psi_{K, 2}$ is trivial on $\ker\varphi_{2^k}$ for $k\geq s$ and is nontrivial on $\ker\varphi_{2^{k}}$ for $0\leq k\leq s-1$. Using these facts and a case-by-case analysis in terms of the values of $\nu_2(e)$, for both the untwisted and twisted cases,
we can see that the claimed assertion holds in this case. More precisely, if $\nu_2(e)=0$, then $\chi_2$ factors via $A(2^2)$. Otherwise, $\chi_2$ factors via $A(2^s)$.
The only case that needs special attention is when $a$ is an exact perfect square (i.e., $a>0$ and $\nu_2(e)=1$). In this case, $\max\{2, s\}=2$ and both $\psi_{K, 2}$ and $\chi_{D, 2}$ are trivial on $\ker\varphi_{2^2}$, hence $\chi_2$ is trivial on $\ker\varphi_{2^2}$. Let $\alpha\in \ker\varphi_{2}$ be such that $\varphi_{2^2}(\alpha)= \begin{psmallmatrix}1 & 0\\0 & 3\end{psmallmatrix}\in A(2^2)$. Note that $0+1\equiv 3$ (mod $2$), so by Proposition \ref{new-prop}(iii) such $\alpha$ exists. We have $\chi_2(\alpha)=\psi_{K, 2}(\alpha)\chi_{D, 2}(\alpha)=(1)(-1)=-1$. Thus, $\chi_2$ is non-trivial on $\ker\varphi_{2}$. Hence, $\chi_2$ factors via $A(2^2)=A(2^s)=A(2^{\max\{2, s\}})$.
If $p=2$ and $8\Vert D$, then, similar to the previous case, a case-by-case analysis in terms of the values of $\nu_2(e)$, for both the untwisted and twisted cases, verifies the result. (Note that in this case $3$ is the smallest value of $k$ for which $\chi_{D, 2}$ factors via $A(2^k)$.) More precisely, if $\nu_2(e)=0$ or $1$, and $a$ is not the negative of a perfect square, then $\chi_2$ factors via $A(2^3)$. Also, if $\nu_2(e)=1$ and $a<0$, then $\chi_2$ factors via $A(2^2)$. Otherwise, $\chi_2$ factors via $A(2^s)$. Two cases need special attention.
Case 1: The number $a$ is the negative of an exact perfect square (i.e., $a<0$ and $\nu_2(e)=1$). In this case, $\max\{3, s\}=3$ and both $\psi_{K, 2}$ and $\chi_{D, 2}$ are trivial on $\ker\varphi_{2^3}$, hence $\chi_2$ is trivial on $\ker\varphi_{2^3}$. Thus, $\chi_2$ factors via $A(2^3)$. Let $\alpha\in \ker\varphi_{2^2}$. Then $\varphi_{2^3}(\alpha)= \begin{psmallmatrix}1 & 0\\b & d\end{psmallmatrix}\in A(2^3)$ such that $\begin{psmallmatrix}1 & 0\\b & d\end{psmallmatrix}\equiv \begin{psmallmatrix}1 & 0\\0 & 1\end{psmallmatrix}$ (mod $2^2$).
Hence,
$$ \left(\begin{array}{cc}1 & 0\\b & d\end{array}\right )\in \left\{ \left(\begin{array}{cc}1 & 0\\0 & 1\end{array}\right ), \left(\begin{array}{cc}1 & 0\\0 & 5\end{array}\right ), \left(\begin{array}{cc}1 & 0\\4 & 1\end{array}\right ), \left(\begin{array}{cc}1 & 0\\4 & 5\end{array}\right ) \right\}\subset A(2^3).$$
Since for each $\alpha$ corresponding to the above matrices we have $\chi_2(\alpha)=\psi_{K, 2}(\alpha)\chi_{D, 2}(\alpha)=1$, we conclude that $\chi_2$ is trivial on $\ker \varphi_{2^2}$. Now let $\alpha\in \ker\varphi_{2}$ be such that $\varphi_{2^2}(\alpha)= \begin{psmallmatrix}1 & 0\\6 & 1\end{psmallmatrix}\in A(2^2)$. Note that $(2)(6)+1\equiv 1$ (mod $2^2$), so by Proposition \ref{new-prop}(ii) such $\alpha$ exists. We have $\chi_2(\alpha)=\psi_{K, 2}(\alpha)\chi_{D, 2}(\alpha)=(-1)(1)=-1$. Thus, $\chi_2$ is non-trivial on $\ker\varphi_{2}$. Hence, $\chi_2$ factors via $A(2^2)$ as claimed.
Case 2: The number $a$ is an exact perfect fourth power (i.e., $a>0$ and $\nu_2(e)=2$). In this case, $\max\{3, s\}=3$ and both $\psi_{K, 2}$ and $\chi_{D, 2}$ are trivial on $\ker\varphi_{2^3}$, hence $\chi_2$ is trivial on $\ker\varphi_{2^3}$. Let $\alpha\in \ker\varphi_{2^2}$ be such that $\varphi_{2^3}(\alpha)= \begin{psmallmatrix}1 & 0\\0 & 9\end{psmallmatrix}\in A(2^3)$. Note that $0+1\equiv 9$ (mod $2^2$), so by Proposition \ref{new-prop}(iii) such $\alpha$ exists. We have $\chi_2(\alpha)=\psi_{K, 2}(\alpha)\chi_{D, 2}(\alpha)=(1)(-1)=-1$. Thus, $\chi_2$ is non-trivial on $\ker\varphi_{2^2}$. Hence, $\chi_2$ factors via $A(2^3)=A(2^s)=A(2^{\max\{3, s\}})$.
\end{proof}
\section{Proof of Theorem \ref{main1}}
\label{sec-main1}
\begin{proof}[Proof of Theorem \ref{main1}]
\label{Section 3}
Let $\nu_G$ be the normalized Haar measure on the profinite group $G$, and $\nu_A$ be the normalized Haar measure on the profinite group $A$.
We start by writing the summation
\begin{equation*}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}
\end{equation*}
in terms of measures of certain measurable subgroups of $G$. For this purpose, let $\pi_{G,n}: G\to G(n)$ be the projection map for each $n\geq1$. Then, $G/\ker\pi_{G,n}\cong G(n)$ and $[G:\ker\pi_{G,n}]=\#G(n)$. Hence, since $\ker\pi_{G, n}$ is a closed subgroup of $G$, we have $\nu_G(\ker\pi_{G,n})=1/\#G(n)$.
Thus,
\begin{equation}
\label{ev2}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\sum_{n\geq1}g(n)\nu_G(\ker\pi_{G,n}).
\end{equation}
Observe that the number of cosets in the coset space $A/r(\ker\pi_{G,n})$ divided by the number of cosets in $r(G)/r(\ker\pi_{G,n})\cong G/\ker\pi_{G,n}$ is equal to $\#\left( A/r(G)\right)$. Hence, we have
\begin{equation}
\label{ev3}
\nu_G(\ker\pi_{G,n})=\frac{\nu_A(r(\ker\pi_{G,n}))}{\nu_A(r(G))}.
\end{equation}
Now, since
\begin{equation}
\label{star}
1\to G\stackrel{r}{\longrightarrow} A\stackrel{\chi}{\longrightarrow} \mu_m\to 1
\end{equation}
is an exact sequence, by (\ref{ev3}), we have
\begin{equation}
\label{ev4}
\begin{split}
\sum_{n\geq1}g(n)\nu_G(\ker\pi_{G,n}) & =\sum_{n\geq1}g(n)\frac{\nu_A(r(\ker\pi_{G,n}))}{\nu_A(\ker\chi)}\\
& =\frac{1}{\nu_A(\ker\chi)}\sum_{n\geq1}g(n)\nu_A(r(\ker\pi_{G,n})).
\end{split}
\end{equation}
Next, we show that $r(\ker\pi_{G,n})=\ker(\pi_{A,n})\cap\ker\chi$, where $\pi_{A,n}:A\to A(n)$ is the projection map for each $n\geq1$. To prove this claim, we note that the diagram
\begin{equation}
\label{commutative-G-to-A-main1}
\begin{tikzcd}
G\arrow[d,"r"]\arrow[r,"\pi_{G,n}"]&G(n)\arrow[d,"r_n"]\\
A\arrow[r,"\pi_{A,n}"]&A(n)
\end{tikzcd}
\end{equation}
commutes. For a group $H$, let $e_H$ denote its identity element. Note that if $\sigma\in\ker\pi_{G,n}$, then $r_n(\pi_{G,n}(\sigma))=r_n(e_{G(n)})=e_{A(n)}$. Hence, by the commutative diagram \eqref{commutative-G-to-A-main1}, we have $r(\sigma)\in\ker(\pi_{A,n})$. Moreover, by the exact sequence \eqref{star}, we have $r(\sigma)\in r(G)=\ker\chi$. Therefore,
\begin{equation}
\label{ev5-1}
r(\ker\pi_{G,n})\subset \ker(\pi_{A,n})\cap\ker\chi.
\end{equation}
On the other hand, if $\alpha\in\ker(\pi_{A,n})\cap\ker\chi\subset\ker\chi=r(G)$, then there exists a $\sigma\in G$ such that $r(\sigma)=\alpha$. Moreover, $r(\sigma)\in\ker(\pi_{A,n})$ means $\pi_{A,n}(r(\sigma))=e_{A(n)}$. Hence, $r_n(\pi_{G,n}(\sigma))=e_{A(n)}$ as \eqref{commutative-G-to-A-main1} is commutative. Thus, $\sigma\in\ker\pi_{G,n}$, since $r_n$ is injective. This shows that
\begin{equation}
\label{ev5-2}
\ker(\pi_{A,n})\cap\ker\chi\subset r(\ker\pi_{G,n}).
\end{equation}
Therefore, from \eqref{ev5-1} and \eqref{ev5-2}, we have
\begin{equation}
\label{ev5}
r(\ker\pi_{G,n})=\ker(\pi_{A,n})\cap\ker\chi.
\end{equation}
From (\ref{ev5}), we have
\begin{equation}
\label{ev6}
\begin{split}
\sum_{n\geq1}g(n)\nu_A(r(\ker\pi_{G,n}))
& =\sum_{n\geq1}g(n)\nu_A(\ker\pi_{A,n}\cap\ker\chi)\\
& =\sum_{n\geq1}g(n)\int_A1_{\ker\pi_{A,n}\cap\ker\chi}d\nu_A\\
& =\int_A\left(\sum_{n\geq1}g(n)1_{\ker\pi_{A,n}}\right)1_{\ker\chi}d\nu_A.
\end{split}
\end{equation}
To justify the interchange of the summation and the integral in the last equality, observe that
\begin{equation*}
\left\lvert\sum_{n=1}^mg(n)1_{\ker\pi_{A,n}\cap\ker\chi}\right\rvert\leq\sum_{n\geq1}\lvert g(n)\rvert1_{\ker\pi_{A,n}\cap\ker\chi}.
\end{equation*}
Since, by assumption, $\sum_{n\geq1}\lvert g(n)\rvert/\#G(n)$ converges,
\cite[Theorem 1.27]{rudin} implies that $\sum_{n\geq1}\lvert g(n)\rvert1_{\ker\pi_{A,n}\cap\ker\chi}$ is integrable. Thus, by Lebesgue's dominated convergence theorem (see \cite[Theorem 1.34]{rudin}), the interchange of the summation and the integral in \eqref{ev6} is justified. Also, since $\#A(n)\geq \#G(n)$, we have $\sum_{n\geq1}\lvert g(n)\rvert/\#A(n)<\infty$. Hence, by \cite[Theorem 1.38]{rudin},
$$\tilde{g}=\sum_{n\geq1}g(n)1_{\ker\pi_{A,n}}\in L^1(\nu_A).$$
Now from (\ref{ev2}), (\ref{ev4}), and (\ref{ev6}), we have
\begin{equation}
\label{ev7}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}
=\frac{\int_A \tilde{g}1_{\ker\chi}d\nu_A}{\int_A1_{\ker\chi}d\nu_A}.
\end{equation}
Note that the character $\chi:A\to\mu_m$ in (\ref{star}) induces the character $\chi':A/r(G)\stackrel{\sim}{\longrightarrow}\mu_m$ by $\chi'(\bar{\alpha})=\chi(\alpha)$, where $\alpha\in A$ and $\bar{\alpha}$ is the coset associated to $\alpha$ in $A/r(G)$. More precisely, $\chi$ is the lift of $\chi'$ to $A$. Since $\chi'$ sends a generator of $A/r(G)$ to a generator of $\mu_m$, it is a generator of the group of characters of $A/r(G)$, denoted by $\widehat{A/r(G)}$. Thus, for $\bar{\alpha}\in A/r(G)$, by \cite{course-in-arithmetic}*{Chapter \textrm{VI}, Proposition 4}, we have
\begin{equation*}
\sum_{\epsilon\in\widehat{A/r(G)}}\epsilon(\bar{\alpha})=\sum_{i=0}^{m-1}(\chi')^i(\bar{\alpha})=
\begin{cases}
m &\quad \text{if }\;\bar{\alpha}=1,\\
0 &\quad \text{if }\;\bar{\alpha}\neq 1.
\end{cases}
\end{equation*}
Therefore, since $\bar{\alpha}=1$ means $\alpha\in\ker\chi$, we have
\begin{equation*}
\sum_{i=0}^{m-1}\chi^i(\alpha)=
\begin{cases}
m &\quad \text{if }\;\alpha\in\ker\chi,\\
0 &\quad \text{if }\;\alpha\notin\ker\chi.
\end{cases}
\end{equation*}
This implies $\sum_{i=0}^{m-1}\chi^i(\alpha)=m\cdot1_{\ker\chi}(\alpha)$. Thus,
\begin{equation}
\label{ev8}
\frac{\int_A\mathbb{T}ilde{g}\1_{\ker\chi}d\nu_A}{\int_A\1_{\ker\chi}d\nu_A}=\frac{\int_A\mathbb{T}ilde{g}\sum_{i=0}^{m-1}\chi^id\nu_A}{m\int_A 1_{\ker\chi}d\nu_A}.
\end{equation}
Furthermore, by \eqref{star}, we have $[A:\ker\chi]=[A:r(G)]=m$. Hence, $\nu_A(\ker\chi)=1/m$. Thus, the desired result follows from \eqref{ev7} and \eqref{ev8}.
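Explicitly, since $\nu_A(\ker\chi)=1/m$, combining \eqref{ev7} and \eqref{ev8} yields
\begin{equation*}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\frac{\int_A\tilde{g}\sum_{i=0}^{m-1}\chi^i\,d\nu_A}{m\cdot(1/m)}=\sum_{i=0}^{m-1}\int_A\tilde{g}\,\chi^i\,d\nu_A,
\end{equation*}
which is the asserted identity.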
\end{proof}
The following corollary considers a special case of Theorem \ref{main1}.
\begin{corollary}
\label{cor-after-main1}
In Theorem \ref{main1}, suppose that $g$ is a multiplicative arithmetic function. In addition, assume that $A\cong\prod_pA_p$, where $A_p=\varprojlim A(p^i)$, and $\chi=\prod_p\chi_p$, where $\chi_p:A_p\to\mu_m$ is a character of $A_p$. Then \begin{equation}
\label{tilde-g-p}
\tilde{g}_p=\sum_{k\geq0}g(p^k)1_{\ker\varphi_{p^k}}\in L^1(\nu_{A_p}).
\end{equation}
Moreover,
\begin{equation}
\label{new-cor-12}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\sum_{i=0}^{m-1}\prod_p\int_{A_p}\tilde{g}_p\chi^i_pd\nu_{A_p},
\end{equation}
where $\nu_{A_p}$ is the normalized Haar measure on $A_p$.
\end{corollary}
\begin{proof}
By Theorem \ref{main1}, we have
\begin{equation}
\label{new-cor-1}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\sum_{i=0}^{m-1}\int_A\tilde{g}\chi^id\nu_A.
\end{equation}
Since $g(n)$ is multiplicative, $A\cong\prod_pA_p$, $\nu_A=\prod_p\nu_{A_p}$, $\chi=\prod_p\chi_p$, and $\tilde{g}=\prod_p\tilde{g}_p$, equation \eqref{new-cor-1} yields \eqref{new-cor-12}. Note that, since $\#A(p^k)\geq \#G(p^k)$, we have $\sum_{k\geq 0}\lvert g(p^k)\rvert/\#A(p^k)<\infty$. Hence, by \cite[Theorem 1.38]{rudin},
$\tilde{g}_p\in L^1(\nu_{A_p}).$ Thus, the integrals in \eqref{new-cor-12} are finite.
\end{proof}
\section{Proof of Theorem \ref{product-kummer-family}}
\label{S1}
\begin{proof}[Proof of Theorem \ref{product-kummer-family}]
We employ Corollary \ref{cor-after-main1} and compute
$\int_{A_p}\tilde{g}_pd\nu_{A_p}$ and
$\int_{A_p}\tilde{g}_p\chi_pd\nu_{A_p}$ for primes $p$.
Since $\ker\varphi_{p^k}$ is a closed subgroup of $A_p$, we have $\nu_{A_p}(\ker\varphi_{p^k})=1/[A_{p}:\ker\varphi_{p^k}]=1/\#A(p^k)$. Observe that
\begin{equation}
\label{sum-in-last-form-1}
\begin{split}
\int_{A_p} \tilde{g}_pd\nu_{A_p}
& =\int_{A_p}\sum_{k\geq0} g(p^k)1_{\ker\varphi_{p^k}}d\nu_{A_p}\\
& =\sum_{k\geq0}g(p^k)\nu_{A_p}(\ker\varphi_{p^k})\\
& =\sum_{k\geq0}\frac{g(p^k)}{\#A(p^k)}.
\end{split}
\end{equation}
In addition, by Proposition \ref{character},
\begin{equation}
\label{sum-in-last-form-3}
\begin{split}
\int_{A_p}\tilde{g}_p\chi_pd\nu_{A_p}
& =\int_{A_p}\left(1_{A_p}\chi_p+g(p)1_{\ker\varphi_p}\chi_p+\dots+g(p^k)1_{\ker\varphi_{p^k}}\chi_p+\dots\right)d\nu_{A_p}\\
& =0+\sum_{k\geq \ell(p)}g(p^k)\nu_{A_p}(\ker\varphi_{p^k})\\
& =\sum_{k\geq \ell(p)}\frac{g(p^k)}{\#A(p^k)}.
\end{split}
\end{equation}
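The vanishing of the terms with $k<\ell(p)$ in the second equality holds since, for such $k$, the restriction of $\chi_p$ to the compact group $\ker\varphi_{p^k}$ is a non-trivial character, and a non-trivial character of a compact group integrates to zero against its Haar measure:
\begin{equation*}
\int_{A_p}1_{\ker\varphi_{p^k}}\chi_p\,d\nu_{A_p}=\nu_{A_p}(\ker\varphi_{p^k})\int_{\ker\varphi_{p^k}}\chi_p\,d\mu_{p^k}=0,
\end{equation*}
where $\mu_{p^k}$ denotes the normalized Haar measure on $\ker\varphi_{p^k}$. For $k\geq \ell(p)$, the restriction is trivial and the integral equals $\nu_{A_p}(\ker\varphi_{p^k})$.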
Thus, by Corollary \ref{cor-after-main1} with $m=2$, \eqref{sum-in-last-form-1}, and \eqref{sum-in-last-form-3}, we get \eqref{product-kummer}.
\end{proof}
\section{Proof of Proposition \ref{TDPK-formula}}
\label{Section 5}
\begin{proof}[Proof of Proposition \ref{TDPK-formula}]
For an integer $k\geq 1$ and an odd prime $p$, let
\begin{equation}
\label{k'}
k'=
\begin{cases}
0 &\text{if }k\leq \nu_p({e}),\\
k-\nu_p(e)&\text{if }k>\nu_p(e),\\
\end{cases}
\end{equation}
and for $k\geq 1$ and $p=2$, let
\begin{equation}
\label{k-prime}
k'=
\begin{cases}
0 &\text{if }k\leq \nu_2({e})~\text{and}~ (a>0~ \text{or}~ e~\text{is~odd}),\\
1 &\text{if }k\leq \nu_2({e})~\text{and}~(a<0~ \text{and}~ e~\text{is~even}),\\
k-\nu_2(e)&\text{if }k>\nu_2(e).
\end{cases}
\end{equation}
Then, from Proposition \ref{new-prop} (i), we have
\begin{equation}
\label{Apk}
\#A(p^k)=\begin{cases}
p^{k+k'-1}(p-1)&\text{if }k\geq 1,\\
1&\text{if }k=0.
\end{cases}
\end{equation}
Now by employing \eqref{Apk} in \eqref{product-kummer} we get
\begin{equation}
\label{kummer-sum}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}=\prod_p\sum_{k\geq 0}\frac{g(p^k)}{p^{k+k'-1}(p-1)}+\prod_{p}\sum_{k\geq\ell(p)}\frac{g(p^k)}{p^{k+k'-1}(p-1)}.
\end{equation}
We set $g=1$ in \eqref{kummer-sum} to
get the product expression for the constant in the conjectured asymptotic formula in the Titchmarsh Divisor Problem for a given Kummer family. Therefore, by \eqref{kummer-sum},
\begin{equation}
\label{expression}
\sum_{n\geq1}\frac{1}{[K_n:\mathbb{Q}]}=\left(1+\prod_{p\mid 2D}\frac{C_p}{1+B_p}\right)\prod_p\left(1+B_p\right),
\end{equation}
for the following values of $B_p$ and $C_p$.
If $p$ is odd, we have
\begin{equation}
\label{bp}
B_p= \sum_{k\geq1}\frac{1}{p^{k+k'-1}(p-1)}=\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)}(p-1)(p^2-1)},
\end{equation}
and $C_p=B_p$, where $k^\prime$ is given by \eqref{k'}.
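For completeness (this routine computation is implicit in the text): writing $\nu=\nu_p(e)$ and splitting the sum according to \eqref{k'},
\begin{equation*}
B_p=\sum_{k=1}^{\nu}\frac{1}{p^{k-1}(p-1)}+\sum_{k\geq \nu+1}\frac{1}{p^{2k-\nu-1}(p-1)}
=\frac{p(1-p^{-\nu})}{(p-1)^2}+\frac{p^{1-\nu}}{(p-1)(p^2-1)},
\end{equation*}
and bringing both terms over the common denominator $p^{\nu}(p-1)(p^2-1)$ gives the closed form in \eqref{bp}; for instance, for $p=3$ and $\nu=1$ both sides equal $\tfrac12+\tfrac1{16}=\tfrac{9}{16}$.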
For $p=2$, we have the following cases for $B_2$ and $C_2$ with $k^\prime$ as given by \eqref{k-prime}.
\noindent {\it Case (i).} Let $e$ be odd or $a>0$. Hence, $s=\nu_2(e)+1$. Then $B_2$ is the same as \eqref{bp} with $p=2$. Now, if $D$ is odd; or $4 \Vert D$ and $s\geq2$; or $8 \Vert D$ and $s\geq3$, then
\begin{equation*}
C_2= \sum_{k\geq \ell(2)}\frac{1}{2^{k+k'-1}}=\frac{2}{2^{\nu_2(e)}(2^2-1)}.
\end{equation*}
Otherwise,
\begin{equation*}
C_2=\sum_{k\geq \ell(2)}\frac{1}{2^{k+k'-1}}=\frac{2^{\nu_2(e)+1}}{2^{\beta}(2^2-1)},
\end{equation*}
where $\beta=2$ if $4 \Vert D$ and $s=1$; and $\beta=4$ if $8 \Vert D$ and $s\in \{1, 2\}$.
\noindent {\it Case (ii).} Let $e$ be even and $a<0$. Then
\begin{equation*}
B_2= \sum_{k\geq 1}\frac{1}{2^{k+k'-1}}=\frac{2^{\nu_2(e)+2}-2^{\nu_2(e)}-1}{2^{\nu_2(e)}(2^2-1)}.
\end{equation*}
If $8\Vert D$ and $\nu_2(e)= 1$, we have $\ell(2)=2$. Hence,
$$C_2= \sum_{k\geq \ell(2)}\frac{1}{2^{k+k'-1}}= \frac{1}{2^2-1}.$$
Otherwise, we have $\ell(2)=s=\nu_2(e)+2$ and thus
\begin{equation*}
C_2= \sum_{k\geq \ell(2)}\frac{1}{2^{k+k'-1}}=\frac{1}{2^{\nu_2(e)+1}(2^2-1)}.
\end{equation*}
By applying the above expressions in \eqref{expression} and simplifying case by case, we get \eqref{kummer-tdp-lastproduct}.
\end{proof}
\section{Serre Curves}
\label{Serre section}
Let $E$ be an elliptic curve defined over $\mathbb{Q}$ given by a Weierstrass equation
\begin{equation*}
y^2=x^3+ax+b,
\end{equation*}
where $a,b\in\mathbb{Q}$. Let $\mathbb{Q}(E[n])$ be the $n$-division field of $E$. By taking the inverse limit of the natural injective maps
\begin{equation*}
r_n:\Gal(\mathbb{Q}(E[n])/\mathbb{Q})\to{\rm Aut}(E[n])\cong \GL_2(\mathbb{Z}/n\mathbb{Z}),
\end{equation*}
over all $n\geq1$, we have an injective profinite homomorphism
\begin{equation*}
r\colon \Gal(\mathbb{Q}(E[\infty])/\mathbb{Q})\to{\rm Aut}(E[\infty])\cong\GL_2(\widehat{\mathbb{Z}}).
\end{equation*}
Let $\Delta$ be the discriminant of the cubic equation $x^3+ax+b=0$. Set $K=\mathbb{Q}({\Delta}^{1/2})$ and let
$D$ be the discriminant of $K$. In anticipation of applying Theorem \ref{main1}, let $\det$ be the determinant map $\det:\GL_2(\widehat{\mathbb{Z}})\to\widehat{\mathbb{Z}}^{\times}$ and
\begin{equation*}
\chi_D:\GL_2(\widehat{\mathbb{Z}})\stackrel{\det}{\longrightarrow}\widehat{\mathbb{Z}}^{\times}\stackrel{\left(\frac{D}{\cdot}\right)}{\longrightarrow}\mu_2
\end{equation*}
be the composition of $\det$ with the lift to $\widehat{\mathbb{Z}}^{\times}$ of the Kronecker symbol attached to $D$. We note that $\GL_2(\mathbb{Z}/2\mathbb{Z})\cong S_3$, where $S_3$ is the symmetric group on three letters. Let
\begin{equation*}
\psi:\GL_2(\widehat{\mathbb{Z}})\to \GL_2(\mathbb{Z}/2\mathbb{Z})\cong S_3\stackrel{\sgn}{\longrightarrow}\mu_2
\end{equation*}
be the composition of the projection map from $\GL_2(\widehat{\mathbb{Z}})$ to $\GL_2(\mathbb{Z}/2\mathbb{Z})$ with the signature character on $S_3$. Let $G=\Gal(\mathbb{Q}(E[\infty])/\mathbb{Q})$. For $\eta \in G$, one can show that the image of $r(\eta)$ under $\psi$ coincides with $\chi_D(r(\eta))=\eta({\Delta}^{1/2})/{\Delta}^{1/2}$. We now set $\chi=\chi_D\cdot\psi$.
The above construction of the character $\chi$ is described by J.-P. Serre in \cite{serre}, where he also shows that
$[\GL_2(\widehat{\mathbb{Z}}):r(G)]\geq2$ always holds. We call $E$ a \emph{Serre curve} if
$[\GL_2(\widehat{\mathbb{Z}}):r(G)]=2$. This is equivalent to saying that $r(G)=\ker\chi$. Thus, letting $A=\GL_2(\widehat{\mathbb{Z}})$, for a Serre curve $E$, the sequence
\begin{equation}
\label{exactec}
1 \longrightarrow G \stackrel{r}{\longrightarrow} A \stackrel{\chi}{\longrightarrow} \mu_{2} \longrightarrow 1
\end{equation}
is an exact sequence. In addition, for a Serre curve, $K=\mathbb{Q}(\Delta^{1/2})$ is a quadratic field.
The quadratic character $\chi:\GL_2(\widehat{\mathbb{Z}})(\cong\prod_p\GL_2(\mathbb{Z}_p))\to\mu_2$ can be written as a product of local characters $\chi_p:\GL_2(\mathbb{Z}_p)\to\mu_2$. Observe that since $\psi$ factors via $\GL_2(\mathbb{Z}/2\mathbb{Z})$, it factors via $\GL_2(\mathbb{Z}_2)$. Let $\psi_{2}:\GL_2(\mathbb{Z}_2)\to\mu_2$ be the corresponding homomorphism obtained from the factorization of $\psi$ via $\GL_2(\mathbb{Z}_2)$. For primes $p\nmid 2D$, let $\chi_p$ be constantly equal to $1$. For an odd prime $p\mid D$, let $\chi_p=\chi_{D,p}$ be the lift of the Legendre symbol mod $p$ to $\mathbb{Z}_p^{\times}$,
i.e.,
\begin{equation*}
\chi_p:\GL_2(\mathbb{Z}_p)\stackrel{\det}{\longrightarrow}\mathbb{Z}_p^{\times}\longrightarrow\mu_2
\end{equation*}
where the last map is the composition of the projection map to $(\mathbb{Z}/p\mathbb{Z})^{\times}$ and the Legendre symbol mod $p$. For the prime $2$, let $\chi_2=\chi_{D,2}\cdot\psi_{2}$, where $\chi_{D,2}$, as in the Kummer case, is the lift of one of the Dirichlet characters mod $8$ to $\mathbb{Z}^{\times}_2$ (if $D$ is odd, then $\chi_{D, 2}$ is trivial).
Therefore, by the above construction of $\chi$, we have the decomposition $\chi=\prod_p\chi_p$.
Let $A_p=\GL_2(\mathbb{Z}_p)$ and $A(p^k)=\GL_2(\mathbb{Z}/p^k\mathbb{Z})$. The following is an analogue of Proposition \ref{character} for Serre curves.
\begin{proposition}
\label{character-serre}
For a Serre curve $E$, assume the above notations.
Let $\ell(p)$ be the smallest integer $k$ for which $\chi_p$ factors via $A(p^k)$. Then
$$\ell(p)=\left\{\begin{array}{ll}
0&\text{if~} p~ \text{is~ odd~and~}p\nmid D,\\
1&\text{if~} p~ \text{is~ odd~and~}p\mid D,\\
1&\text{if}~ p=2~ \text{and}~ D ~\text{is odd},\\
2&\text{if}~ p=2~ \text{and}~ 4\Vert D,\\
3&\text{if}~ p=2~ \text{and}~ 8\Vert D.\\
\end{array}
\right.
$$
\end{proposition}
\begin{proof}
If $p\nmid 2D$, then $\chi_p$ is constantly equal to $1$. Hence, it factors via $A(1)$. If $p$ is odd and $p\mid D$, then $\chi_p$ is the Legendre symbol mod $p$; it therefore factors via $A(p)$ and, since it is non-trivial, it does not factor via $A(1)$. The result for $p=2$ follows from the construction of $\chi_2$ described above, noting that the smallest integer $k$ for which
$\psi_2$ factors via $A(2^k)$ is $k=1$, while the smallest such $k$ for $\chi_{D,2}$ is $k=2$ when $4\Vert D$ and $k=3$ when $8\Vert D$.
\end{proof}
We are now ready to prove our last remaining assertion.
\begin{proof}[Proof of Proposition \ref{prop-Serre}]
Following steps similar to the proof of Theorem \ref{product-kummer-family} and by employing Corollary \ref{cor-after-main1} with $m=2$, Proposition \ref{character-serre}, and
\begin{equation*}
\label{|GL|}
\left|\GL_2(\mathbb{Z}/n\mathbb{Z})\right|=\prod_{p^e\:\Vert\:n}p^{4e-3}(p^2-1)(p-1)
\end{equation*}
(see \cite[page 231]{K}), we obtain the stated product expression.
\end{proof}
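As an independent numerical sanity check of the displayed order formula (the following snippet and its function names are ours, purely illustrative and not part of the paper), one can count the invertible $2\times 2$ matrices over $\mathbb{Z}/n\mathbb{Z}$ by brute force for small $n$ and compare with the product over prime powers $p^e\Vert n$ of $p^{4e-3}(p^2-1)(p-1)$:

```python
from math import gcd
from itertools import product


def gl2_order_bruteforce(n):
    """Count 2x2 matrices over Z/nZ whose determinant is a unit mod n."""
    return sum(1 for a, b, c, d in product(range(n), repeat=4)
               if gcd((a * d - b * c) % n, n) == 1)


def gl2_order_formula(n):
    """Evaluate prod over p^e || n of p^(4e-3) (p^2-1)(p-1)."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            result *= p ** (4 * e - 3) * (p * p - 1) * (p - 1)
        p += 1
    if m > 1:  # leftover prime factor with exponent 1
        result *= m * (m * m - 1) * (m - 1)
    return result
```

For example, both functions return $6$ for $n=2$ and $48$ for $n=3$, the orders of $\GL_2(\mathbb{F}_2)$ and $\GL_2(\mathbb{F}_3)$.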
\par
\noindent{\bf Acknowledgements.}
The authors thank the reviewers for their valuable comments and suggestions.
The authors thank David Basil and Solaleh Bolvardizadeh for help computing the explicit constants $ c_a$ given in the table after Proposition \ref{TDPK-formula}.
\begin{rezabib}
\bib{AG}{article}{
author={Akbary, Amir},
author={Ghioca, Dragos},
title={A geometric variant of Titchmarsh divisor problem},
journal={Int. J. Number Theory},
volume={8},
date={2012},
number={1},
pages={53--69},
issn={1793-0421},
review={\MR{2887882}},
doi={10.1142/S1793042112500030},
}
\bib{AF}{article}{
author={Akbary, Amir},
author={Felix, Adam Tyler},
title={On the average value of a function of the residual index},
journal={Springer Proc. Math. Stat.},
volume={251},
date={2018},
pages={19--37},
review={\MR{3880381}},
}
\bib{artin}{book}{
author={Artin, Emil},
title={The collected papers of Emil Artin},
note={Edited by Serge Lang and John T. Tate},
publisher={Addison-Wesley Publishing Co., Inc., Reading, Mass.-London},
date={1965},
pages={xvi+560 pp. (2 plates)},
review={\MR{0176888}},
}
\bib{BLSW}{article}{
author={Bach, Eric},
author={Lukes, Richard},
author={Shallit, Jeffrey},
author={Williams, H. C.},
title={Results and estimates on pseudopowers},
journal={Math. Comp.},
volume={65},
date={1996},
number={216},
pages={1737--1747},
issn={0025-5718},
review={\MR{1355005}},
}
\bib{cojocaru:tdp}{article}{
author={Bell, Renee},
author={Blakestad, Clifford},
author={Cojocaru, Alina Carmen},
author={Cowan, Alexander},
author={Jones, Nathan},
author={Matei, Vlad},
author={Smith, Geoffrey},
author={Vogt, Isabel},
title={Constants in Titchmarsh divisor problems for elliptic curves},
journal={Res. Number Theory},
volume={6},
date={2020},
number={1},
pages={Paper No. 1, 24},
issn={2522-0160},
review={\MR{4041152}},
doi={10.1007/s40993-019-0175-9},
}
\bib{cox}{book}{
author={Cox, David A.},
title={Primes of the form $x^2 + ny^2$},
series={Pure and Applied Mathematics (Hoboken)},
edition={2},
note={Fermat, class field theory, and complex multiplication},
publisher={John Wiley \& Sons, Inc., Hoboken, NJ},
date={2013},
pages={xviii+356},
isbn={978-1-118-39018-4},
review={\MR{3236783}},
doi={10.1002/9781118400722},
}
\bib{davenport}{book}{
author={Davenport, Harold},
title={Multiplicative number theory},
series={Graduate Texts in Mathematics},
volume={74},
edition={3},
note={Revised and with a preface by Hugh L. Montgomery},
publisher={Springer-Verlag, New York},
date={2000},
pages={xiv+177},
isbn={0-387-95097-4},
review={\MR{1790423}},
}
\bib{felix-murty}{article}{
author={Felix, Adam Tyler},
author={Murty, M. Ram},
title={A problem of Fomenko's related to Artin's conjecture},
journal={Int. J. Number Theory},
volume={8},
date={2012},
number={7},
pages={1687--1723},
issn={1793-0421},
review={\MR{2968946}},
doi={10.1142/S1793042112500984},
}
\bib{Hooley}{article}{
author={Hooley, Christopher},
title={On Artin's conjecture},
journal={J. Reine Angew. Math.},
volume={225},
date={1967},
pages={209--220},
issn={0075-4102},
review={\MR{207630}},
doi={10.1515/crll.1967.225.209},
}
\bib{K}{book}{
author={Koblitz, Neal},
title={Introduction to elliptic curves and modular forms},
series={Graduate Texts in Mathematics},
volume={97},
edition={2},
publisher={Springer-Verlag, New York},
date={1993},
pages={x+248},
isbn={0-387-97966-2},
review={\MR{1216136}},
doi={10.1007/978-1-4612-0909-6},
}
\end{rezabib}
\end{document}
\begin{document}
\begin{center}
{\huge{\bf On the conjugacy class of the Fibonacci \\[.3cm]dynamical system}}
{\large{Michel Dekking (Delft University of Technology)\\ and\\ Mike Keane (Delft University of Technology and University of Leiden)}}
{\large{{Version: August 16, 2016}}}
\end{center}
\section{Introduction}\label{sec:intro}
We study the Fibonacci substitution $\varphi$ given by
$$\varphi:\quad 0\rightarrow\,01,\;1\rightarrow 0.$$
The infinite Fibonacci word $w_{\rm F}$ is the unique one-sided sequence (to the right) which is a fixed point of $\varphi$:
$$w_{\rm F}=0100101001\dots.$$
We also consider one of the two two-sided fixed points $x_{\rm F}$ of $\varphi^2$:
$$x_{\rm F}=\dots01001001\!\cdot\!0100101001\dots.$$
The dynamical system generated by taking the orbit closure of $x_{\rm F}$ under the shift map $\sigma$ is denoted by $(X_\varphi,\sigma)$.
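Readers who wish to experiment can generate $w_{\rm F}$ by iterating $\varphi$; the short Python sketch below (the helper names are ours, not part of the paper) produces a long prefix and illustrates that each iterate is a prefix of the next, so the limit is well defined:

```python
def substitute(rules, word):
    """Apply a substitution (dict: letter -> word) to a word (string)."""
    return "".join(rules[c] for c in word)

phi = {"0": "01", "1": "0"}

# Iterate phi on "0"; each iterate is a prefix of the next,
# so the limit is the infinite Fibonacci word w_F.
w = "0"
for _ in range(10):
    w = substitute(phi, w)

print(w[:10])  # 0100101001, the first ten letters of w_F
```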
The question we will be concerned with is: which substitutions $\eta$ generate a symbolic dynamical system topologically isomorphic to the Fibonacci dynamical system? Here topologically isomorphic means that there exists a homeomorphism $\psi: X_\varphi\rightarrow X_\eta$ such that $\psi\sigma=\sigma\psi$, where we denote the shift on $X_\eta$ also by $\sigma$. In this case $(X_\eta, \sigma)$ is said to be conjugate to
$(X_\varphi,\sigma)$.
This question has been completely answered for the case of constant length substitutions in the paper \cite{CDK}. It is remarkable that there are only finitely many injective primitive substitutions of length $L$ which generate a system conjugate to a given substitution of length $L$. Here a substitution $\alpha$ is called \emph{injective} if $\alpha(a)\ne \alpha(b)$ for all letters $a$ and $b$ from the alphabet with $a\ne b$. When we extend to the class of all substitutions, replacing $L$ by the Perron-Frobenius eigenvalue of the incidence matrix of the substitution, then the conjugacy class can be infinite in general. See \cite{Dekking-TCS} for the case of the Thue-Morse substitution. In the present paper we will prove that there are infinitely many injective primitive substitutions with Perron-Frobenius eigenvalue $\Phi=(1+\sqrt{5})/2$ which generate a system conjugate to the Fibonacci system---see Theorem~\ref{th:inf}.
In the non-constant length case some new phenomena appear. If one has an injective substitution $\alpha$ of constant length $L$, then all its powers $\alpha^n$ will also be injective. This is no longer true in the general case. For example, consider the injective substitution $\zeta$ on the alphabet $\{1,2,3,4,5\}$ given by
$$\zeta: \qquad 1\rightarrow 12,\;
2\rightarrow 3,\;
3\rightarrow 45,\;
4\rightarrow 1,\;
5\rightarrow 23.$$
An application of Theorem~\ref{th:Nblock} followed by a partition reshaping (see Section~\ref{sec:reshaping}) shows that the system $(X_\zeta,\sigma)$ is conjugate to the Fibonacci system.
However, the square of $\zeta$ is given by
$$\zeta^2: \qquad 1\rightarrow 123,\;
2\rightarrow 45,\;
3\rightarrow 123,\;
4\rightarrow 12,\;
5\rightarrow 345, $$
which is \emph{not} injective. To deal with this undesirable phenomenon we introduce the following notion. A substitution $\alpha$ is called a \emph{full rank} substitution if its incidence matrix has full rank (non-zero determinant). This is a strengthening of injectivity, because obviously a substitution which is not injective cannot have full rank. Moreover, if the substitution $\alpha$ has full rank, then all its powers $\alpha^n$ will also have full rank, and thus will be injective.
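A minimal Python check (our own code, not from the paper) confirms this picture for $\zeta$: its images are pairwise distinct, the square identifies the letters 1 and 3, and the incidence matrix is singular:

```python
zeta = {1: [1, 2], 2: [3], 3: [4, 5], 4: [1], 5: [2, 3]}

def substitute(rules, word):
    return [c for a in word for c in rules[a]]

# zeta is injective: all five images are distinct words
images = [tuple(zeta[a]) for a in zeta]
assert len(set(images)) == len(images)

# but zeta^2 is not injective: letters 1 and 3 get the same image
zeta2 = {a: substitute(zeta, zeta[a]) for a in zeta}
print(zeta2[1], zeta2[3])  # [1, 2, 3] [1, 2, 3]

# the incidence matrix of zeta is singular, so zeta does not have full rank
M = [[zeta[a].count(b) for b in range(1, 6)] for a in range(1, 6)]

def det(m):
    """Integer determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

print(det(M))  # 0
```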
Another phenomenon, which does not exist in the constant length case, is that non-primitive substitutions $\zeta$ may generate uniquely defined minimal systems
conjugate to a given system. For example, consider the injective substitution $\zeta$ on the alphabet $\{1,2,3,4\}$ given by
$$\zeta:\qquad 1\rightarrow 12,\quad
2\rightarrow 31,\quad
3\rightarrow 4,\quad
4\rightarrow 3. $$
With the partition reshaping technique from Section~\ref{sec:reshaping} one can show that the system $(X_\zeta,\sigma)$ is conjugate to the Fibonacci system (ignoring the system on two points generated by $\zeta$). In the remainder of this paper we concentrate on primitive substitutions.
The structure of the paper is as follows. In Section~\ref{sec:Nblock} we show that all systems in the conjugacy class of the Fibonacci substitution can be obtained by letter-to-letter projections of the systems generated by so-called $N$-block substitutions. In Section~\ref{sec:C3} we give a very general characterization of symbolic dynamical systems in the Fibonacci conjugacy class, in the spirit of a similar result on the Toeplitz dynamical system in \cite{CKL08}. In Section~\ref{sec:reshaping} we introduce a tool which makes it possible to turn non-injective substitutions into injective substitutions. This is used in Section~\ref{sec:C1} to show that the Fibonacci class has infinitely many primitive injective substitutions as members. In Section~\ref{sec:two} we quickly analyse the case of a 2-symbol alphabet. Sections \ref{sec:equi} and \ref{sec:mat} give properties of maximal equicontinuous factors and incidence matrices, which are used to analyse the 3-symbol case in Section \ref{sec:C2}. In the final Section \ref{sec:L2L} we show that the system obtained by doubling the 0's in the infinite Fibonacci word is conjugate to the Fibonacci dynamical system, but cannot be generated by a substitution.
\section{$N$-block systems and $N$-block substitutions}\label{sec:Nblock}
For any $N$ the $N$-block substitution $\hat{\theta}_N$ of a substitution $\theta$ is defined on an alphabet of $p_\theta(N)$ symbols, where $p_\theta(\cdot)$ is the complexity function of the language ${\cal L}_\theta$ of $\theta$ (cf.\ \cite[p.~95]{Queff}). What is \emph{not} in \cite{Queff} is that this $N$-block substitution generates the $N$-block presentation of the system $(X_\theta,\sigma)$.
We denote the letters of the alphabet of the $N$-block presentation by $[a_1a_2\dots a_N]$, where $a_1a_2\dots a_N$ is an element from ${\cal L}_\theta^N$, the set of words of length $N$ in the language of $\theta$. The $N$-block presentation $(X^{[N]}_\theta,\sigma)$ emerges by applying a sliding block code $\Psi$ to the sequences of $X_\theta$, so $\Psi$ is the map \\[-.3cm]
$$\Psi(a_1a_2\dots a_N)=[a_1a_2\dots a_N].$$
We denote by $\psi$ the induced map from $X_\theta$ to $X^{[N]}_\theta$:
$$\psi(x)=\dots\Psi(x_{-N},\dots,x_{-1})\Psi(x_{-N+1},\dots,x_{0})\dots.$$
It is easy to see that $\psi$ is a conjugacy, whose inverse is the map induced by the 1-block map (also denoted $\pi_0$) given by $\pi_0([a_1a_2\dots a_N])=a_1$.
The $N$-block substitution $\hat{\theta}_N$ is defined by requiring that for each word $a_1a_2\dots a_N$ the length of $\hat{\theta}_N([a_1a_2\dots a_N])$ is equal to the length $L_1$ of $\theta(a_1)$, and the letters of $\hat{\theta}_N([a_1a_2\dots a_N])$ are the $\Psi$-codings of the first $L_1$ consecutive $N$-blocks in $\theta(a_1a_2\dots a_N)$.
\begin{theorem}\label{th:Nblock} Let $\hat{\theta}_N$ be the $N$-block substitution of a primitive substitution $\theta$. Let $(X^{[N]}_\theta,\sigma)$ be the $N$-block presentation of the system $(X_\theta,\sigma)$. Then the set $X^{[N]}_\theta$ equals $X_{\hat{\theta}_N}$.
\end{theorem}
{\em Proof:} Let $x$ be a fixed point of $\theta$, and let $y=\psi(x)$, where $\psi$ is the $N$-block conjugacy, with inverse $\pi_0$. The key equation is $\pi_0\,\hat{\theta}_N=\theta\,\pi_0$.
This implies\\[-.3cm]
$$\pi_0\,\hat{\theta}_N(y)=\theta\,\pi_0(y)=\theta\,\pi_0(\psi(x))=\theta(x)=x.$$
Applying $\psi$ on both sides gives $\hat{\theta}_N(y)=\psi(x)=y$, i.e., $y$ is a fixed point of $\hat{\theta}_N$. But then $X^{[N]}_\theta=X_{\hat{\theta}_N}$, by minimality of $X^{[N]}_\theta$. $\Box$
It is well known (see, e.g., \cite[p.~105]{Queff}) that $p_\varphi(N)=N+1$, so for the Fibonacci substitution $\varphi$ the $N$-block substitution $\hat{\varphi}_N$ is a substitution on an alphabet of $N+1$ symbols.
We describe how one obtains $\hat{\varphi}_2$.
We have ${\cal L}_\varphi^2=\{00, 01, 10\}$. Since 00 and 01 start with 0, and 10 with 1, we obtain
$$\hat{\varphi}_2:\quad [00]\mapsto [01][10],\;[01]\mapsto [01][10],\; [10]\mapsto [00],$$
reading off the consecutive 2-blocks from $\varphi(00)=0101,\, \varphi(01)=010$ and $\varphi(10)=001$.
It is useful to recode the alphabet $\{[00],[01],[10]\}$ to the standard alphabet $\{1,2,3\}$.
We do this in the order in which they appear for the first time in the infinite Fibonacci word $w_{\rm F}$---we call this the \emph{canonical coding}, and will use the same principle for all $N$. For $N=2$ this gives
$[01]\rightarrow 1,\; [10]\rightarrow 2,\; [00]\rightarrow 3$. Still using the notation $\hat{\varphi}_2$ for the
substitution on this new alphabet, we obtain
$$\hat{\varphi}_2(1)=12, \quad \hat{\varphi}_2(2)=3, \quad \hat{\varphi}_2(3)=12.$$
In this way the substitution is in standard form (cf.~\cite{CDK} and \cite{Dekking-2016}).
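The construction of $\hat{\varphi}_N$ with the canonical coding is easy to automate. The sketch below (our own code, not from the paper) reproduces $\hat{\varphi}_2$ above and, for $N=4$, the substitution $\hat{\varphi}_4$ used in Section 4:

```python
def substitute(rules, word):
    return "".join(rules[c] for c in word)

phi = {"0": "01", "1": "0"}

def fib_word(iterations=15):
    """A long prefix of the infinite Fibonacci word w_F."""
    w = "0"
    for _ in range(iterations):
        w = substitute(phi, w)
    return w

def nblock_substitution(N):
    """Canonically coded N-block substitution of the Fibonacci substitution."""
    w = fib_word()
    # canonical coding: N-blocks in order of first appearance in w_F
    code, blocks = {}, []
    for i in range(len(w) - N + 1):
        b = w[i:i + N]
        if b not in code:
            code[b] = len(blocks) + 1
            blocks.append(b)
    rules = {}
    for b in blocks:
        img = substitute(phi, b)
        L1 = len(phi[b[0]])  # image length = |phi(first letter of b)|
        rules[code[b]] = [code[img[i:i + N]] for i in range(L1)]
    return rules

print(nblock_substitution(2))  # {1: [1, 2], 2: [3], 3: [1, 2]}
print(nblock_substitution(4))  # {1: [1, 2], 2: [3], 3: [4, 5], 4: [1, 2], 5: [3]}
```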
\section{The Fibonacci conjugacy class}\label{sec:C3}
Let $F_n$ for $n=1,2,\dots$ be the Fibonacci numbers $$F_1=1,\, F_2=1,\, F_3=2,\, F_4=3,\, F_5=5, \dots.$$
\begin{theorem} Let $(Y,\sigma)$ be any subshift. Then $(Y,\sigma)$ is topologically conjugate to the Fibonacci system $(X_\varphi,\sigma)$ if and only if there exist $n\ge 3 $ and two words $B_0$ and $B_1$ of length $F_n$ and $F_{n-1}$, such that any $y$ from $Y$ is a concatenation of $B_0$ and $B_1$, and moreover, if\, $\cdots B_{x_{-1}} B_{x_0} B_{x_1}\cdots B_{x_k}\cdots$ is such a concatenation, then $x=(x_k)$ is a sequence from the Fibonacci system.
\end{theorem}
\noindent \emph{Proof:} First let us suppose that $(Y,\sigma)$ is topologically isomorphic to the Fibonacci system. By the Curtis-Hedlund-Lyndon theorem, there exists an integer $N$ such that $Y$ is obtained by a letter-to-letter projection $\pi$ from the $N$-block presentation $(X^{[N]}_\varphi, \sigma)$ of the Fibonacci system.
Now if $B_0$ and $B_1$ are two decomposition blocks of sequences from
$X^{[N]}_\varphi$ of length $F_n$ and $F_{n-1}$, then $\pi(B_0)$ and $\pi(B_1)$ are decomposition blocks of sequences from $Y$ with lengths $F_n$ and $F_{n-1}$, again satisfying the concatenation property.
So it suffices to prove the result for $X^{[N]}_\varphi$. Note that we may let the integers $N$ pass through an infinite subsequence; we will use $N=F_n$, where $n=3,4,\dots$. Useful to us are the \emph{singular words} $w_n$ introduced in \cite{WenWen}. The $w_n$ are the unique words of length $F_{n+1}$ having a different Parikh vector from all the other words of length $F_{n+1}$ from the language of $\varphi$. Here $w_1=1, w_2=00$, $w_3=101$, and for $n\ge4$
$$w_n=w_{n-2}w_{n-3}w_{n-2}.$$
The set of return words of $w_n$ has only two elements which are $u_n=w_nw_{n+1}$ and $v_n=w_nw_{n-1}$ (see page 108 in \cite{HuangWen}).
The lengths of these words are $|u_n|=F_{n+3}$ and $|v_n|=F_{n+2}$. Let $w_n^-$ be $w_n$ with the last letter deleted.
Define for $n\ge5$
$$B_0=\Psi(u_{n-3}w_{n-3}^-), \quad B_1=\Psi(v_{n-3}w_{n-3}^-),$$
where $\Psi$ is the $N$-block code from ${\cal L}_\varphi^N$ to ${\cal L}_{\varphi^{[N]}}$, with $N=F_{n-2}$.
Then these blocks have the right lengths, and by Theorem 2.11 in \cite{HuangWen}, the two return words partition the infinite Fibonacci word $w_{\rm F}$ according to the infinite Fibonacci word---except for a prefix $r_{n,0}$:
$$w_{\rm F}=r_{n,0}u_nv_nu_nu_nv_nu_n\dots.$$
By minimality this property carries over to all two-sided sequences in the Fibonacci dynamical system.
For the converse, let $Y$ be a Fibonacci concatenation system as above. Let $C_0=\varphi^{n-2}(0)$ and $C_1=\varphi^{n-2}(1)$. We define a map $g$ from $(Y,\sigma)$ to a subshift of $\{0,1\}^{\mathbb{Z}}$ by
$$g:\quad \cdots B_{x_{-1}} B_{x_0} B_{x_1}\cdots B_{x_k}\cdots\; \mapsto \; \cdots C_{x_{-1}} C_{x_0} C_{x_1}\cdots C_{x_k}\cdots,$$
respecting the position of the $0^{\rm th}$ coordinate. Since $|C_0|=|B_0|$ and $|C_1|=|B_1|$, $g$ commutes with the shift. Also, $g$ is obviously continuous.
Moreover, since for any sequence $x$ in the Fibonacci system $\varphi^{n-2}(x)$ is again a sequence in the Fibonacci system, $g(Y)\subseteq X_\varphi$.
So, by minimality, $(X_\varphi,\sigma)$ is a factor of $(Y,\sigma)$. Since $g$ is invertible, with continuous inverse, $(Y,\sigma)$ is in the conjugacy class of the Fibonacci system.
$\Box$
\noindent {\bf Example}\; The case $(F_n,F_{n-1})=(13,8)$. Then $n=7$, so we have to consider the singular word $w_4=00100$ of length 5.
\noindent The set of $5$-blocks is
$\{01001,\,10010,\,00101,\,01010,\,10100,\,00100\}.$\\
These will be coded by the canonical coding $\Psi$ to the standard alphabet $\{1,2,3,4,5,6\}$.
Note that $\Psi(w_4)=6$.
Further, $w_3=101$ and $w_5=10100101$. So $u_n=0010010100101$ and $v_n=00100101$. Applying $\Psi$ gives the two decomposition
blocks $B_0 = 6123451234512$ and $B_1 = 61234512$.
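The blocks of this example can be recomputed mechanically from the recursion $w_n=w_{n-2}w_{n-3}w_{n-2}$ and the canonical coding of the 5-blocks (our own code, not part of the paper; here $n=7$, so the return words are $u_4$ and $v_4$):

```python
def substitute(rules, word):
    return "".join(rules[c] for c in word)

phi = {"0": "01", "1": "0"}
w = "0"
for _ in range(15):
    w = substitute(phi, w)

# singular words: w1 = 1, w2 = 00, w3 = 101, and w_n = w_{n-2} w_{n-3} w_{n-2}
sing = {1: "1", 2: "00", 3: "101"}
for n in range(4, 8):
    sing[n] = sing[n - 2] + sing[n - 3] + sing[n - 2]

u4 = sing[4] + sing[5]   # return word of length F_7 = 13
v4 = sing[4] + sing[3]   # return word of length F_6 = 8

# canonical coding of the 5-blocks by order of first appearance in w_F
N = 5
code = {}
for i in range(len(w) - N + 1):
    b = w[i:i + N]
    if b not in code:
        code[b] = len(code) + 1

def Psi(word):
    """N-block coding of a word, written as a string of digits."""
    return "".join(str(code[word[i:i + N]]) for i in range(len(word) - N + 1))

B0 = Psi(u4 + sing[4][:-1])
B1 = Psi(v4 + sing[4][:-1])
print(B0, B1)  # 6123451234512 61234512
```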
\section{Reshaping substitutions}\label{sec:reshaping}
We call a language-preserving transformation of a substitution a reshaping. An example is the prefix-suffix change used in \cite{Dekking-TCS}.
Here we consider a variation which we call a \emph{partition reshaping}.
We give an example of this technique.
Take the $N$-block representation of the Fibonacci system for $N=4$. All five 4-blocks occur consecutively at the beginning of the Fibonacci word $w_{\rm F}$ as
$\{0100,\,1001,\,0010,\, 0101,\, 1010\}.$
The canonical coding to $\{1,2,3,4,5\}$ gives the 4-block substitution $\hat{\varphi}_4$:
$$\hat{\varphi}_4:\qquad 1\rightarrow 12,\;
2\rightarrow 3,\;
3\rightarrow 45, \;
4\rightarrow 12,\;
5\rightarrow 3.$$
\noindent Its square is equal to
$$\hat{\varphi}_4^2: \qquad 1\rightarrow 123,\;
2\rightarrow 45,\;
3\rightarrow 123,\;
4\rightarrow 123,\;
5\rightarrow 45. $$
Since the two blocks $B_0=123$ and $B_1=45$ have no letters in common,
this permits a partition reshaping. Symbolically this can be represented by
\begin{table}[h!]
\centering
\caption{\small Partition reshaping.}
\label{tab:table1}
\begin{tabular}{ccccccccc}\\[.005cm]
1 & \; & 2 & 3 & & \qquad\qquad 4 & \; & 5 & \\
$\downarrow$ & \; & $\downarrow$ & $\downarrow$& & \qquad\qquad $\downarrow$ & \; & $\downarrow$ & \\
1 & 2 & 3 & 4 & 5 & \qquad\qquad 1 & 2 & 3 & \\
1 & \,\; 2 $\|$& 3 &\,\; 4 $\|$ & 5 & \qquad\qquad \,\; 1 $\|$ & 2 & \,\; 3 $\|$ &
\end{tabular}
\end{table}
Here the third line gives the images $\hat{\varphi}_4(B_0)=\hat{\varphi}_4(123)=12345$ and $\hat{\varphi}_4(B_1)=\hat{\varphi}_4(45)=123$; the fourth line gives \emph{another} partition of these two words in three, respectively two subwords, from which the new substitution $\eta$ can be read off:
$$\eta: \qquad 1\rightarrow 12,\;
2\rightarrow 34,\;
3\rightarrow 5 ,\;
4\rightarrow 1,\;
5\rightarrow 23. $$
What we gain is that the partition-reshaped substitution $\eta$ generates the same language as $\hat{\varphi}_4$, while $\eta$ is injective---it is even of full rank.
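Both fixed points can be compared numerically. The following sketch (our own code) checks that long prefixes of the fixed points of $\eta$ and $\hat{\varphi}_4$ agree and have the same factor sets of length 6; this is of course a finite sanity check, not a proof:

```python
def substitute(rules, word):
    return [c for a in word for c in rules[a]]

eta  = {1: [1, 2], 2: [3, 4], 3: [5], 4: [1], 5: [2, 3]}
phi4 = {1: [1, 2], 2: [3],   3: [4, 5], 4: [1, 2], 5: [3]}  # \hat{\varphi}_4

def prefix(rules, length):
    """Prefix of the fixed point starting with 1 (both rules map 1 -> 1...)."""
    w = [1]
    while len(w) < length:
        w = substitute(rules, w)
    return w[:length]

def factors(word, m):
    return {tuple(word[i:i + m]) for i in range(len(word) - m + 1)}

a, b = prefix(eta, 3000), prefix(phi4, 3000)
print(a[:13] == b[:13], factors(a, 6) == factors(b, 6))  # True True
```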
\section{The Fibonacci class has infinite cardinality}\label{sec:C1}
\begin{theorem}\label{th:inf} There are infinitely many primitive injective substitutions with Perron-Frobenius eigenvalue the golden mean that generate dynamical systems topologically isomorphic to the Fibonacci system.
\end{theorem}
We will explicitly construct infinitely many primitive injective substitutions whose systems are topologically conjugate to the Fibonacci system. The topological conjugacy will follow from the fact that the systems are $N$-block codings of the Fibonacci system, where $N$ will run through the numbers $F_n-1$.
As an introduction we look at $n=5$, i.e., we consider the blocks of length $N=F_5-1=4$.
With the canonical coding of the $N$-blocks we obtain the 4-block substitution $\hat{\varphi}_4$---see Section~\ref{sec:reshaping}:
$$\hat{\varphi}_4:\qquad 1\rightarrow 12,\,
2\rightarrow 3,\,
3\rightarrow 45, \,
4\rightarrow 12,\,
5\rightarrow 3.$$
\noindent An \emph{interval} $I$ starting with $a\in A$ is a word of length $L$ of the form $$I=a,a+1,\dots,a+L-1.$$
\noindent Note that $\hat{\varphi}_4(123)=12345$, and $\hat{\varphi}_4(45)=123$, and these four words are intervals.
This is a property that holds in general. First we need the fact that the first $F_n$ words of length $F_n-1$ in the
fixed point of $\varphi$ are all different. This result is given by Theorem 2.8 in \cite{Chuan-Ho}. We code these $N+1$ words by the canonical coding
to the letters $1,2,\dots,F_n$. We then have
\begin{equation}\label{eq:Fib}\hat{\varphi}_N(12\dots F_{n-1})=12\dots F_{n}, \qquad \hat{\varphi}_N(F_{n-1}\!+1,\dots, F_n)=12\dots F_{n-1}.\end{equation}
This can be seen by noting that $\pi_0\, \hat{\varphi}_N^k=\varphi^k\, \pi_0$ for all $k$, and that the fixed point of $\varphi$ starts with $\varphi^{n-2}(0)\varphi^{n-3}(0)$.
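Equation \eqref{eq:Fib} can be verified computationally for small $n$, e.g.\ for $n=6$ (so $N=F_6-1=7$ and the alphabet is $1,\dots,8$), using the same construction of the canonically coded $N$-block substitution as in Section 2 (our own code):

```python
def substitute(rules, word):
    return "".join(rules[c] for c in word)

phi = {"0": "01", "1": "0"}

def fib_word(iterations=15):
    w = "0"
    for _ in range(iterations):
        w = substitute(phi, w)
    return w

def nblock_substitution(N):
    """Canonically coded N-block substitution of the Fibonacci substitution."""
    w = fib_word()
    code, blocks = {}, []
    for i in range(len(w) - N + 1):
        b = w[i:i + N]
        if b not in code:
            code[b] = len(blocks) + 1
            blocks.append(b)
    rules = {}
    for b in blocks:
        img = substitute(phi, b)
        L1 = len(phi[b[0]])
        rules[code[b]] = [code[img[i:i + N]] for i in range(L1)]
    return rules

rules7 = nblock_substitution(7)                        # n = 6, F_6 = 8 letters
left  = [c for a in range(1, 6) for c in rules7[a]]    # image of 1 2 ... F_5
right = [c for a in range(6, 9) for c in rules7[a]]    # image of F_5+1 ... F_6
print(left, right)  # [1, 2, 3, 4, 5, 6, 7, 8] [1, 2, 3, 4, 5]
```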
We continue for $n \ge 5$ with the construction of a substitution $\eta=\eta_n$ which is a partition reshaping of $\hat{\varphi}_N$.
The $F_n$ letters in the alphabet $A^{[N]}$ are divided in three species, S, M and L (for Small, Medium and Large).
$${\rm S}:=\{1,\dots,F_{n-3}\}, \quad {\rm M}:=\{F_{n-3}+1,\dots,F_{n-1}\},\quad {\rm L}:=\{F_{n-1}\!+1,\dots,F_n\}.$$
Note that ${\rm Card}\, {\rm M}=F_{n-1}-F_{n-3}=F_{n-2}=F_{n}-F_{n-1}={\rm Card}\, {\rm L}.$
An important role is played by $a_{ \rm M}:=F_{n-3}+1$, the smallest letter in M, and $a_{ \rm L}:=F_{n-1}+1$, the smallest letter in L.
For the letters in M (except for $a_{\rm M}$) the rules are very simple:
$$\eta(a)= a+F_{n-2}$$ (i.e., a single letter obtained by addition of the two integers).
The first letter in M has the rule $$\eta(a_{ \rm M})=\eta(F_{n-3}\!+1)= F_{n-1}, F_{n-1}\!+1= F_{n-1},a_{ \rm L} .$$
The images of the letters in L are intervals of length 1 or 2, obtained from a partition of the word $12\dots F_{n-1}$.
Their lengths come from $\varphi^{n-4}(0)$, rotated once (the 0 in front is moved to the back). This word is denoted $\rho(\varphi^{n-4}(0))$.
The choice of this word is somewhat arbitrary, other choices would work. The properties of $v:=\rho(\varphi^{n-4}(0))$ which matter to us are
(V1) $\ell:=|v|=F_{n-2}$.
(V2) $v_1=1$, $v_\ell=0$.
(V3) $v$ does not contain any 11.
\noindent Now the images of the letters in L are determined by $v$ according to the following rule: $|\eta(a_{ \rm L}+k-1)|=2-v_k$, for all $k=1,\dots,F_{n-2}$. Note that this implies in particular that for all $n\ge 5$ one has by property (V2)
$$\eta(a_{\rm L})=\eta(F_{n-1}\!+1)= 1, \;\qquad \eta(F_n)= F_{n-1}-1,F_{n-1}.$$
The images of the letters in S are then obtained by choosing the lengths of the $\eta(a)$ in such a way that the largest common refinement of the induced partitions of the images of S and L is the singleton partition.
\noindent{\bf Example} The case $n=7$, so $ F_n=13$, $ F_{n-1}=8$, and $ F_{n-2}=5$.
\noindent Then ${\rm S}=\{1,2,3\},\, {\rm M}=\{4,5,6,7,8\},\, {\rm L}=\{9,10,11,12,13\}.$
\noindent Rules for M: \quad $4\rightarrow 89,\;5\rightarrow 10, \;6\rightarrow11, \;7\rightarrow 12, \;8\rightarrow13.$
Now $$\varphi^3(0)=01001\; \Rightarrow\; v= 10010\; \Rightarrow\; {\rm the\, partition\, is}\; 1|23|45|6|78.$$ This partition gives the following rules for L: $$9\rightarrow1,\; 10\rightarrow23,\; 11\rightarrow45,\; 12\rightarrow6,\; 13\rightarrow 78.$$
The induced partition for the images of the letters in S is $|12|34|567|8$, yielding rules
$$1\rightarrow12,\; 2\rightarrow34,\; 3\rightarrow567.$$
\noindent In summary we obtain the substitution $\eta=\eta_7$ given by: \vspace*{-0.1cm}
\begin{align*}
{\rm S}:
\begin{cases}
1& \rightarrow 1,2\\
2& \rightarrow 3,4\\
3& \rightarrow 5,6,7
\end{cases}
\qquad {\rm M}:
\begin{cases}
4& \rightarrow 8,9\\
5& \rightarrow 10\\
6& \rightarrow 11\\
7& \rightarrow 12\\
8& \rightarrow 13
\end{cases}
\qquad {\rm L}:
\begin{cases}
\,\,9\!\!& \rightarrow 1\\
10& \rightarrow 2,3\\
11& \rightarrow 4,5\\
12& \rightarrow 6\\
13& \rightarrow 7,8.
\end{cases}
\end{align*}
The substitution $\eta$ is primitive because you `can go' from the letter 1 to any letter and from any letter to the letter 1.
This gives irreducibility; primitivity then follows because periodicity is impossible by the first rule $1\rightarrow 1,2$.
\noindent The substitution $\eta$ has full rank because any unit vector $$e_a=(0,\dots,0,1,0,\dots,0)$$ is a linear combination of rows of the incidence matrix $M_\eta$ of $\eta$. For $a\in {\rm L}\setminus\{9\}$ this combination is trivial, and for the other letters this is exactly forced by the choice of lengths in such a way that the largest common refinement of the induced partitions of the images of S and L is the singleton partition.
In more detail: denote the $a^{\rm th}$ row of $M_\eta$ by $R_a$. Then $e_1=R_9$, and thus $e_2=R_1-R_9$, $e_3=R_{10}-e_2=R_{10}-R_1+R_9$, etc.
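For $n=7$ the full-rank claim can be confirmed directly; the sketch below (our own code) builds the incidence matrix of $\eta_7$ and computes its determinant by exact rational Gaussian elimination:

```python
from fractions import Fraction

eta7 = {1: [1, 2], 2: [3, 4], 3: [5, 6, 7], 4: [8, 9], 5: [10], 6: [11],
        7: [12], 8: [13], 9: [1], 10: [2, 3], 11: [4, 5], 12: [6], 13: [7, 8]}

# incidence matrix: row a counts the letters occurring in eta7(a)
n = 13
M = [[eta7[a].count(b) for b in range(1, n + 1)] for a in range(1, n + 1)]

def det(m):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    size, sign, d = len(m), 1, Fraction(1)
    for col in range(size):
        piv = next((r for r in range(col, size) if m[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            sign = -sign
        d *= m[col][col]
        for r in range(col + 1, size):
            f = m[r][col] / m[col][col]
            for c in range(col, size):
                m[r][c] -= f * m[col][c]
    return sign * d

print(det(M) != 0)  # True: the incidence matrix of eta_7 has full rank
```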
The argument yielding the property of full rank will hold in general for all $n\ge5$. To prove primitivity for all $n$ we need some more details.
\begin{proposition} The substitution $\eta=\eta_n$ is primitive for all $n \ge 5$.\end{proposition}
\noindent \emph{Proof:} The proposition will be proved if we show that for all $a\in A$ the letter $a$ will occur in some iteration $\eta^k(1)$, and conversely,
that for all $a\in A$ the letter $1$ will occur in some iteration $\eta^k(a)$. The first part is easy to see from the fact that $\eta(1)=1,2$ and that
$\eta^2(1,\dots,F_{n-2})=1,\dots,F_n-1$, plus $\eta^2(a_{\rm M})=F_n,1$. For the second part, we show that A) for any $a\in{\rm M}\cup{\rm L}$ a letter from S will occur in $\eta^k(a)$ in $k\le {\rm Card}({\rm M}\cup{\rm L})$ steps (see Lemma~\ref{lem:dec}), and B) that for any $a\in{\rm S}$ the letter 1 will occur in $\eta^k(a)$ in $k\le 2\,{\rm Card}\,A$ steps (see Lemma~\ref{lem:occ1}).
$\Box$
\begin{lemma}\label{lem:dec} Let $f:A\rightarrow A$ be the map that assigns the first letter of $\eta^2(a)$ to $a$. Then $f$ is strictly decreasing on L $\cup$ M$\backslash \{a_{\rm M}\}$.\end{lemma}
\noindent \emph{Proof:} First we consider $f$ on ${\rm L}$. We have $$\eta^2(a_{\rm L}\dots F_n)=\eta(1,\dots, F_{n-1}-1, F_{n-1})=1\dots F_n.$$
Since $$\eta^2(F_n)=\eta( F_{n-1}-1,F_{n-1})=F_{n-1}-1+F_{n-2},F_{n-1}+F_{n-2}=F_n-1, F_n,$$
we obtain $f(F_n)=F_n-1<F_n$. This implies that also the previous letters in ${\rm L}$ are mapped by $f$ to a smaller letter.
\noindent Next we consider $f$ on M$\backslash \{a_{\rm M}\}$. Here $$\eta^2(a_{\rm M}+1,\dots, F_{n-1})=\eta(a_{\rm L}+1,\dots, F_{n})=2,3,\dots, F_{n-1}.$$
Now $$\eta^2(F_{n-1})=\eta( F_{n})=F_{n-1}-1,F_{n-1}.$$
So we obtain $f(F_{n-1})=F_{n-1}-1<F_{n-1}$. This implies that also the previous letters in ${\rm M}$ are mapped by $f$ to a smaller letter.
$\Box$
\begin{lemma}\label{lem:occ1} For all $a\in S$ there exists $k \le 2\,{\rm Card}\, A$ such that the letter 1 occurs in $\eta^{k}(a)$.
\end{lemma}
\noindent \emph{Proof:} The substitution $\eta^2$ maps intervals $I$ to intervals $\eta^2(I)$, provided $I$ does not contain $a_{\rm M}$ or $a_{\rm L}$.
By construction, since the $\eta(b)$ for $b\in {\rm L}$ have length 1 or 2, the length of $\eta(a)$ for $a\in {\rm S}$ is 2 or 3, and so $\eta(a)$ contains a word $c, c+1$ for some $c\in A$. Since $\rho(\varphi^{n-4}(0))$ does not contain two consecutive 1's (property (V3)), the image $\eta^2(c,c+1)$ has length at least 3. Since\footnote{This follows from the fact that any word in the language of $\eta$ occurs in some concatenation of the two words $12\dots F_{n}$ and $12\dots F_{n-1}$.} any word of length at least 3 in the language of $\eta$ contains an interval of length 2, the length increases by at least 1 if you apply $\eta^2$. It follows that for all $n\ge 5$ and all $a\in {\mathrm S}$ one has $|\eta^{2n+1}(a)| \ge n+2$.
But then after less than ${\rm Card}\, A$ steps the letter $a_{\rm M}$ or the letter $a_{\rm L}$ must occur in $\eta^{2n+1}(a)$. This implies that the letter 1 occurs in $\eta^{2n+3}(a)$, since both $\eta^2(a_{\rm M})$ and $\eta^2(a_{\rm L})$ contain a 1.
$\Box$
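The primitivity of $\eta_7$ can also be verified mechanically by checking that some power of its incidence matrix is strictly positive (our own code; the loop bound 200 exceeds Wielandt's bound $(13-1)^2+1=145$ for primitive $13\times 13$ matrices):

```python
eta7 = {1: [1, 2], 2: [3, 4], 3: [5, 6, 7], 4: [8, 9], 5: [10], 6: [11],
        7: [12], 8: [13], 9: [1], 10: [2, 3], 11: [4, 5], 12: [6], 13: [7, 8]}
n = 13
M = [[eta7[a].count(b) for b in range(1, n + 1)] for a in range(1, n + 1)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# find the first power M^k with all entries positive
P, k = M, 1
while k < 200 and not all(x > 0 for row in P for x in row):
    P, k = matmul(P, M), k + 1
print(k, all(x > 0 for row in P for x in row))
```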
\section{The 2-symbol case}\label{sec:two}
The maximal equicontinuous factor of the Fibonacci system is the rotation by the small golden mean $\gamma=(\sqrt{5}-1)/2$ on the unit circle, and any system topologically isomorphic to the Fibonacci system must have an incidence matrix with Perron-Frobenius eigenvalue the golden mean or a power of the golden mean
(cf.~\cite[Section 7.3.2]{Pytheas}). Thus, modulo a permutation of the symbols, on an alphabet of two symbols the incidence matrix
with Perron-Frobenius eigenvalue the golden mean has to be $\left( \begin{smallmatrix} 1 \, 1\\ 1\, 0 \end{smallmatrix} \right).$ There are two substitutions with this incidence matrix: Fibonacci $\varphi$, and reverse Fibonacci ${\varphi_{\textsc{\tiny R}}}$, defined by
$${\varphi_{\textsc{\tiny R}}}: \qquad 0\rightarrow\,10,\;1\rightarrow 0.$$
These two substitutions are essentially different, as they have different standard forms (see \cite{Dekking-2016} for the definition of standard form).
However, it follows directly from Tan Bo's criterion in his paper \cite{Tan}
that ${\varphi_{\textsc{\tiny R}}}$ and $\varphi$ have the same language\footnote{This follows also directly from the well-known formula
${\varphi_{\textsc{\tiny R}}}^{\!2n}(0)\,10=01\,\varphi^{2n}(0)$ for all $n\ge1$ (see \cite[p.17]{Berstel}).},
but then they also generate the same system. Conclusion: the conjugacy class of Fibonacci with Perron-Frobenius eigenvalue the golden mean
restricted to two symbols consists of Fibonacci and reverse Fibonacci.
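The formula in the footnote is easy to confirm by direct computation (our own code, checking the identity for the first few even powers):

```python
def substitute(rules, word):
    return "".join(rules[c] for c in word)

phi  = {"0": "01", "1": "0"}   # Fibonacci
phiR = {"0": "10", "1": "0"}   # reverse Fibonacci

def iterate(rules, k):
    w = "0"
    for _ in range(k):
        w = substitute(rules, w)
    return w

# check phiR^{2n}(0) 10 == 01 phi^{2n}(0) for n = 1, ..., 8
ok = all(iterate(phiR, 2 * n) + "10" == "01" + iterate(phi, 2 * n)
         for n in range(1, 9))
print(ok)  # True
```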
\section{Maximal equicontinuous factors}\label{sec:equi}
Let $T$ be the mapping from the unit circle $Z$ to itself defined by $Tz=z+\gamma \mod 1$,
where $\gamma$ is the small golden mean.
This, being an irrational
rotation, is indeed an equicontinuous dynamical system -- the usual
distance metric is an invariant metric under the mapping.
The factor map from the Fibonacci dynamical system $(X_\varphi,\sigma)$ to $(Z,T)$ is given by
requiring that the cylinder sets $\{x:x_0=0\}$ and $\{x:x_0=1\}$ are mapped to
the intervals $[0,\gamma]$ and $[\gamma,1]$ respectively, and requiring equivariance.
If we take any point of $Z$ not of the form $n\gamma \mod 1$ ($n$ any integer), then the
corresponding sequence is unique. If, however, we use an element in the
orbit of $\gamma$, then for this point there will be two codes, a ``left" one
and a ``right" one.
We want to understand more generally why two or more points map to a single point. Suppose $x$ and $y$ are two
points of a system $(X,\sigma)$ that map to two points $x'$ and $y'$ in
an equicontinuous factor. Then for any power of $T$ (the
map of the factor system) the distance between $T^n(x')$ and $T^n(y')$ is
just equal to the distance between $x'$ and $y'$. So $x$ and
$y$ map to the same point $x'$ if either all $x_n$ and $y_n$ are equal for
sufficiently large $n$, or all $x_n$ and $y_n$ are equal for sufficiently
large $-n$. We say that $x$ and $y$ are respectively \emph{right asymptotic} or \emph{left asymptotic}.
A pair of letters $(b,a)$ is called a \emph{cyclic pair} of a substitution $\alpha$ if $ba$ is an element of the language of $\alpha$, and for some integer $m$
$$\alpha^m(b)=\dots b \quad{\rm and}\quad \alpha^m(a)=a\dots. $$
Such a pair gives an infinite sequence of words $\alpha^{mk}(ba)$ in the language of $\alpha$, which---if properly centered---converge to an infinite word which is a fixed point of $\alpha^m$. With a slight abuse of notation we denote this word by $\alpha^{\infty}(b)\cdot \alpha^{\infty}(a)$.
For the Fibonacci substitution $\varphi$, $(0,0)$ and $(1,0)$ are cyclic pairs, and the two synchronized points $\varphi^\infty(0)\cdot\varphi^\infty(0)$ and
$\varphi^\infty(1)\cdot\varphi^\infty(0)$ are right asymptotic, so they map to the same point in the equicontinuous factor.
Because of these considerations we now define $Z$-triples. Let $\eta$ be a primitive substitution. Call three points $x$, $y$, and $z$ in $X_\eta$ a $Z$-\emph{triple} if they are generated by three cyclic pairs
of the form $(b,a),\, (b,d)$ and $(c,d)$, where $a,b,c,d \in A$. Then $x$, $y$, and $z$ are mapped to the same point in the maximal equicontinuous factor.
\begin{theorem}\label{th:Zth} \; Let $(X_\eta,\sigma)$ be any substitution dynamical system topologically isomorphic to the Fibonacci dynamical system. Then there do not exist $Z$-triples in $X_\eta$.
\end{theorem}
\noindent \emph{Proof:} Since $(X_\eta,\sigma)$ is topologically isomorphic to $(X_\varphi,\sigma)$, its maximal equicontinuous factor is $(Z,T)$, and the factor map is at most 2-to-1. Suppose $(b,a),\, (b,d)$ and $(c,d)$ give a $Z$-triple $x,y,z$ in $X_\eta$. Noting that
$$x=\eta^\infty(b)\cdot\eta^\infty(a), \quad y=\eta^\infty(b)\cdot \eta^\infty(d)$$
are left asymptotic, and $y=\eta^\infty(b)\cdot \eta^\infty(d)$ and $z=\eta^\infty(c)\cdot\eta^\infty(d)$ are right asymptotic, this would give a contradiction. $\Box$
\noindent {\bf Example} Let $\eta$ be the substitution given by
$$\eta:\qquad 1\rightarrow 12,\,
2\rightarrow 34,\,
3\rightarrow 5 ,\,
4\rightarrow 1,\,
5\rightarrow 23. $$
Then $\eta$ generates a system that is topologically isomorphic to the Fibonacci system ($\eta$ is the substitution at the end of Section~\ref{sec:reshaping}). Quite remarkably, $\eta^6$ admits 5 fixed points generated by the cyclic pairs $(1, 2),\, (2, 3),\, (3, 1),\, (4, 5)$ and $(5, 1)$.
Note however, that no three of these form a $Z$-triple.
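The five cyclic pairs can be found mechanically: $\eta^m(b)$ ends in $b$ exactly when the last-letter map iterated $m$ times fixes $b$, and similarly for first letters. A sketch for $m=6$ (our own code):

```python
eta = {1: [1, 2], 2: [3, 4], 3: [5], 4: [1], 5: [2, 3]}

def substitute(rules, word):
    return [c for a in word for c in rules[a]]

# long prefix of the fixed point of eta (eta(1) starts with 1)
w = [1]
for _ in range(12):
    w = substitute(eta, w)
two_blocks = {(w[i], w[i + 1]) for i in range(len(w) - 1)}

def first_after(a, m):
    for _ in range(m):
        a = eta[a][0]    # first letter of eta(a)
    return a

def last_after(b, m):
    for _ in range(m):
        b = eta[b][-1]   # last letter of eta(b)
    return b

cyclic = sorted((b, a) for (b, a) in two_blocks
                if first_after(a, 6) == a and last_after(b, 6) == b)
print(cyclic)  # [(1, 2), (2, 3), (3, 1), (4, 5), (5, 1)]
```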
\section{Fibonacci matrices}\label{sec:mat}
Let $\mathcal{F}_r$ be the set of all non-negative primitive $r\times r$ integer matrices, with Perron-Frobenius eigenvalue the golden mean $\Phi = (1+\sqrt{5})/2$.\\
We have seen already that $\mathcal{F}_2$ consists of the single matrix $\left( \begin{smallmatrix} 1 \, 1\\ 1\, 0 \end{smallmatrix} \right).$
\begin{theorem}\label{th:F3} The class $\mathcal{F}_3$ essentially consists of the three matrices
\qquad $ \left( \begin{smallmatrix} 0\, 1\, 0\\ 1\, 0\, 1\\ 1\, 1\, 0 \end{smallmatrix} \right),\;
\left( \begin{smallmatrix} 0\, 1\, 0\\ 0\, 0 \,1 \\ 1\, 2\,0 \end{smallmatrix} \right),\;
\left( \begin{smallmatrix} 0\, 1\, 0\\ 1 \,0\, 1\\ 1\, 0\, 1 \end{smallmatrix} \right).$
\end{theorem}
Here essentially means that in each class of 6 matrices corresponding to the permutations of the $r=3$ symbols, one representative member has been chosen (actually corresponding to the smallest standard form of the substitutions having that matrix).
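As a sanity check (our own code, not part of the paper), one can verify for each of the three matrices that $u^2-u-1$ divides the characteristic polynomial, i.e.\ that $F=T-2$ and $D=1-T$ in the notation of the proof below, and that the third eigenvalue $T-1$ lies in $\{-1,0,1\}$:

```python
M1 = [[0, 1, 0], [1, 0, 1], [1, 1, 0]]
M2 = [[0, 1, 0], [0, 0, 1], [1, 2, 0]]
M3 = [[0, 1, 0], [1, 0, 1], [1, 0, 1]]

def char_poly_check(M):
    # chi_M(u) = u^3 - T u^2 + F u - D must equal (u - (T-1))(u^2 - u - 1),
    # i.e. F = T - 2 and D = 1 - T, with third eigenvalue T - 1 in {-1, 0, 1}
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    T = a + e + i
    F = a * e + a * i + e * i - b * d - c * g - f * h
    D = (a * e * i + b * f * g + c * d * h) - (a * f * h + b * d * i + c * e * g)
    return F == T - 2 and D == 1 - T and abs(T - 1) <= 1

print([char_poly_check(M) for M in (M1, M2, M3)])  # [True, True, True]
```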
\emph{Proof:} Let $M$ be a non-negative primitive $3\times 3$ integer matrix, with Perron-Frobenius eigenvalue the golden mean $\Phi = (1+\sqrt{5})/2$. We write\\ [-0.7cm]
$$ M= \left( \begin{matrix} a\; b\; c\\ d\; e\; f\\ g\; h\; i \end{matrix} \right).$$
The characteristic polynomial of $M$ is $\chi_M(u)=u^3-Tu^2+Fu-D,$
where $T=a+e+i$ is the trace of $M$, and
\begin{equation}\label{eq:FandD}
F=ae+ai+ei-bd-cg-fh,\quad
D=aei+bfg+cdh-afh-bdi-ceg.\quad
\end{equation}
Of course $D$ is the determinant of $M$.
Since $\Phi$ is an eigenvalue of $M$, and we consider matrices over the integers, $u^2-u-1$ has to be a factor of $\chi_M$.
Performing the division we obtain
$$\chi_M(u)=\big(u-(T-1)\big)\big(u^2-u-1\big),$$
and requiring that the remainder vanishes, yields
\begin{equation}\label{eq:DF}
F=T-2,\quad D=1-T.
\end{equation}
Note that the third eigenvalue equals $\lambda_3=T-1$. From the Perron-Frobenius theorem it follows that this has to be smaller than $\Phi$ in absolute value, and since it is an integer, only $\lambda_3=-1, 0, 1$ are possible. Thus there are only 3 possible values for the trace of $M$: $T=0,\, T=1$ and $T=2$.
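The divisibility argument can be checked mechanically. The following sketch (plain Python, not part of the formal development; the function names are ours) computes $T$, $F$ and $D$ for a $3\times 3$ integer matrix and the remainder of $\chi_M$ upon division by $u^2-u-1$:

```python
# Sketch: requiring u^2 - u - 1 | chi_M forces F = T - 2 and D = 1 - T.
# For a 3x3 integer matrix, chi_M(u) = u^3 - T u^2 + F u - D, where T is the
# trace, F the sum of principal 2x2 minors and D the determinant.

def char_poly_coeffs(M):
    """Return (T, F, D) for a 3x3 matrix M given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = M
    T = a + e + i
    F = a*e + a*i + e*i - b*d - c*g - f*h
    D = a*e*i + b*f*g + c*d*h - a*f*h - b*d*i - c*e*g
    return T, F, D

def remainder_mod_golden(T, F, D):
    """Remainder of u^3 - T u^2 + F u - D after division by u^2 - u - 1,
    as (linear coefficient, constant coefficient)."""
    # (u - (T-1)) * (u^2 - u - 1) = u^3 - T u^2 + (T-2) u + (T-1)
    return F - (T - 2), -D - (T - 1)

# The three matrices of the theorem all satisfy F = T - 2 and D = 1 - T:
matrices = [
    [(0, 1, 0), (1, 0, 1), (1, 1, 0)],
    [(0, 1, 0), (0, 0, 1), (1, 2, 0)],
    [(0, 1, 0), (1, 0, 1), (1, 0, 1)],
]
for M in matrices:
    T, F, D = char_poly_coeffs(M)
    assert remainder_mod_golden(T, F, D) == (0, 0)
    assert F == T - 2 and D == 1 - T
```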
The smallest row sum of $M$ has to be smaller than the PF-eigenvalue $\Phi$ (a well known property of primitive non-negative matrices). Therefore $M$ has to have one of the rows $(1,0,0)$, $(0,1,0)$ or $(0,0,1)$. Also, because of primitivity of $M$, the 1 in this row cannot be on the diagonal. By performing permutation conjugacies of the matrix we may then assume that $M$ has the form
$$ M= \left( \begin{matrix} 0\;\; 1\;\; 0\\ d\;\; e\;\; f\\ g\;\; h\;\; i \end{matrix} \right).$$
The equation \eqref{eq:FandD} combined with \eqref{eq:DF} then simplifies to
\begin{equation}\label{eq:DF2}
T-2=F=ei-d-fh, \quad
1-T=D=fg-di.
\end{equation}
\noindent{\bf Case $\mathbf{{\emph T}=0}$}
\noindent In this case $e=i=0$, so \eqref{eq:DF2} simplifies to
\begin{equation}\label{eq:T0F}
-2=F=-d-fh, \quad 1=D=fg.
\end{equation}
Then $f=g=1$, and so $d+h=2$. This gives three possibilities leading to the matrices
$ \left( \begin{smallmatrix} 0\, 1\, 0\\ 1\, 0\, 1\\ 1\, 1\, 0 \end{smallmatrix} \right),\;
\left( \begin{smallmatrix} 0\, 1\, 0\\ 0\, 0 \,1 \\ 1\, 2\,0 \end{smallmatrix} \right),\;
\left( \begin{smallmatrix} 0\, 1\, 0\\ 2 \,0\, 1\\ 1\, 0\, 0 \end{smallmatrix} \right).$
\noindent Here the third matrix is permutation conjugate to the second one.
\noindent{\bf Case $\mathbf{{\emph T}=1}$}
\noindent In this case $e=1, i=0$, or $e=0, i=1$.
\noindent First case: $e=1, i=0$. Now \eqref{eq:DF2} simplifies to
\begin{equation}\label{eq:T1F}
-1=F=-d-fh, \quad 0=D=fg.
\end{equation}
Then $g=0$, since $f=0$ is not possible because of primitivity.
But $g=0$ also contradicts primitivity: $d+fh=1$ then gives either $d=0$, so that the first column of $M$ vanishes, or $h=0$, so that the third row vanishes.
\noindent Second case: $e=0, i=1$.
Now \eqref{eq:DF2} simplifies to
\begin{equation}\label{eq:T1F2}
-1=F=-d-fh, \quad 0=D=fg-d.
\end{equation}
Then $d=0$ would imply that $f=h=1$.
But, as $g>0$ because of primitivity, we get a contradiction with $fg=d=0$.
On the other hand, if $d>0$, then $d=1$ and $f=0$ or $h=0$. But $fg=d=1$ gives $f=g=1$, so $h=0$, and we obtain the matrix
$ \left( \begin{smallmatrix} 0\, 1\, 0\\ 1\, 0\, 1\\ 1\, 0\, 1 \end{smallmatrix} \right).$
\noindent{\bf Case $\mathbf{{\emph T}=2}$}
\noindent In this case \eqref{eq:DF2} becomes
\begin{equation}\label{eq:T2F}
0=F=ei-d-fh, \quad -1=D=fg-di.
\end{equation}
If $ei=0$, then \eqref{eq:DF2} gives $d=fh=0$, and hence $D=fg\ge 0$, contradicting $D=-1$. What remains is $e=i=1$. Then, substituting $d=fg+1$ in the first equation gives
$0=f(g+h)$. But both $f=0$ and $g=h=0$ contradict primitivity.
Final conclusion: there are three matrices in $\mathcal{F}_3$, modulo permutation conjugacies.
$\Box$
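The case analysis can be cross-checked by brute force. The sketch below enumerates all non-negative $3\times 3$ matrices with entries up to $3$ (a generous bound: the proof shows all solutions have entries at most $2$), keeps those that are primitive with PF-eigenvalue $\Phi$, and counts permutation-conjugacy classes:

```python
from itertools import product, permutations

def matmul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def primitive(M):
    # Wielandt bound: a 3x3 matrix is primitive iff M^5 is entrywise positive.
    P = M
    for _ in range(4):
        P = matmul(P, M)
    return all(x > 0 for row in P for x in row)

def golden_pf(M):
    # chi_M is divisible by u^2 - u - 1 iff F = T - 2 and D = 1 - T;
    # the third eigenvalue is then T - 1, which must have |T - 1| < Phi,
    # i.e. T in {0, 1, 2}.
    (a, b, c), (d, e, f), (g, h, i) = M
    T = a + e + i
    F = a*e + a*i + e*i - b*d - c*g - f*h
    D = a*e*i + b*f*g + c*d*h - a*f*h - b*d*i - c*e*g
    return F == T - 2 and D == 1 - T and T in (0, 1, 2)

def canon(M):
    # Representative of the permutation-conjugacy class of M.
    return min(tuple(M[p[i]][p[j]] for i in range(3) for j in range(3))
               for p in permutations(range(3)))

classes = set()
for v in product(range(4), repeat=9):
    M = (v[0:3], v[3:6], v[6:9])
    if golden_pf(M) and primitive(M):
        classes.add(canon(M))
assert len(classes) == 3   # matches the theorem
```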
\noindent {\bf Remark} It is well-known that the PF-eigenvalue lies between the smallest and the largest row sum of the matrix. One might wonder whether this largest row sum is bounded for the class $\mathcal{F}=\cup_r\mathcal{F}_r$.
Actually the class $\mathcal{F}_r$ contains matrices with some row sum equal to $r-1$ for all $r\ge 3$:
take the matrix $M$ with $M_{1,j}=1$ for $j=2,\dots,r$, $M_{2,2}=1$ and
$M_{i,{i+1}}=1$, for $i=2,\dots,r-1$, $M_{r,1}=1$ and all other entries 0.
Now note that $(1, \Phi,...,\Phi)$ is a left eigenvector of $M$ with eigenvalue $\Phi$ (since $\Phi^2=1+\Phi$).
Since the eigenvector has all entries positive, it must be a PF-eigenvector (well known property of
primitive, non-negative matrices), and hence $M$ is in $\mathcal{F}_r$.
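A quick numerical check of this construction (variable names are ours):

```python
PHI = (1 + 5 ** 0.5) / 2

def remark_matrix(r):
    """The matrix from the remark (1-based description): row 1 is
    (0,1,...,1), M[2][2] = 1, a shifted diagonal M[i][i+1] = 1 for
    i = 2..r-1, and M[r][1] = 1; all other entries 0."""
    M = [[0]*r for _ in range(r)]
    for j in range(1, r):
        M[0][j] = 1
    M[1][1] = 1
    for i in range(1, r - 1):
        M[i][i + 1] = 1
    M[r - 1][0] = 1
    return M

def check_left_eigenvector(r, tol=1e-9):
    """Verify that (1, Phi, ..., Phi) is a left eigenvector for Phi."""
    M = remark_matrix(r)
    v = [1.0] + [PHI] * (r - 1)
    vM = [sum(v[i]*M[i][j] for i in range(r)) for j in range(r)]
    return all(abs(vM[j] - PHI*v[j]) < tol for j in range(r))

for r in range(3, 10):
    M = remark_matrix(r)
    assert max(sum(row) for row in M) == r - 1   # largest row sum is r - 1
    assert check_left_eigenvector(r)
```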
\section{The 3-symbol case}\label{sec:C2}
\begin{theorem} There are two primitive injective substitutions
$\eta$ and $\zeta$ on a three letter alphabet $\{a,b,c\}$ that generate dynamical systems topologically isomorphic to the Fibonacci system. These are given\footnote{Standard forms: replace $a,b,c$ by $1,2,3$.} by\\[-.4cm]
$$\eta(a)=b,\,\eta(b)=ca,\, \eta(c)=ba,\quad \zeta(a)=b,\,\zeta(b)=ac,\, \zeta(c)=ab.$$
\end{theorem}
\noindent\emph{Proof:} The possible matrices for candidate substitutions are given in Theorem~\ref{th:F3}. Let us consider the first matrix
$ \left( \begin{smallmatrix} 0\, 1\, 0\\ 1\, 0\, 1\\ 1\, 1\, 0 \end{smallmatrix} \right)$.
\noindent There are four substitutions with this matrix as incidence matrix:
\begin{align*}
\eta_1: \; a& \rightarrow b,\, b\rightarrow ca,\,c\rightarrow ba,& \eta_2: \; a \rightarrow b,\, b\rightarrow ca,\,c\rightarrow ab,\\
\eta_3: \; a& \rightarrow b,\, b\rightarrow ac,\,c\rightarrow ba,& \eta_4: \; a \rightarrow b,\, b\rightarrow ac,\,c\rightarrow ab.
\end{align*}
Here $\eta_1=\eta$. To prove that the system of $\eta$ is conjugate to the Fibonacci system consider the letter-to-letter map $\pi$ given by
$$\pi(a)=1,\quad \pi(b)=\pi(c)=0.$$ Then $\pi$ maps $X_\eta$ onto $X_\varphi$, because $\pi\eta=\varphi\pi$. Moreover, $\pi$ is a conjugacy, since if $x\ne y$ and $\pi(x)=\pi(y)$, then there is a $k$ such that $x_k=b$ and $y_k=c$. But the words of length 2 in the language of $\eta$ are $ab, ba, bc$ and $ca$, implying that $x_{k-1}=a$ and $y_{k-1}=b$, contradicting $\pi(x)=\pi(y)$.
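The intertwining relation $\pi\eta=\varphi\pi$ and the set of length-2 words can be verified mechanically; in the sketch below we take $\varphi$ to be the Fibonacci substitution $0\mapsto 01$, $1\mapsto 0$:

```python
# Mechanical check of the conjugacy argument; phi is assumed to be the
# Fibonacci substitution 0 -> 01, 1 -> 0.

eta = {'a': 'b', 'b': 'ca', 'c': 'ba'}
phi = {'0': '01', '1': '0'}
pi  = {'a': '1', 'b': '0', 'c': '0'}

def apply(sub, word):
    return ''.join(sub[x] for x in word)

def project(word):
    return ''.join(pi[x] for x in word)

# pi . eta = phi . pi on letters ...
for x in 'abc':
    assert project(eta[x]) == apply(phi, project(x))

# ... and hence on all iterates:
w, v = 'a', project('a')
words2 = set()
for _ in range(10):
    w, v = apply(eta, w), apply(phi, v)
    assert project(w) == v
    words2 |= {w[i:i + 2] for i in range(len(w) - 1)}

# the length-2 words in the language of eta, as used in the proof:
assert words2 == {'ab', 'ba', 'bc', 'ca'}
```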
Since $\zeta$ is the time reversal of $\eta$, and we know already that the system of ${\varphi_{\textsc{\tiny R}}}$ is conjugate to the Fibonacci system, the system generated by $\eta_4=\zeta={\eta_{\textsc{\tiny R}}}$ is conjugate to the Fibonacci system.
It remains to prove that $\eta_2$ and $\eta_3$ generate systems that are \emph{not} conjugate to the Fibonacci system. Again, since $\eta_3$ is the time reversal of $\eta_2$, it suffices to do this for $\eta_2$.
The language of $\eta_2$ contains the words $ab, bb$ and $bc$. These words generate fixed points of $\eta_2^6$ in the usual way. But these three fixed points form a $Z$-triple, implying that the system of $\eta_2$ cannot be topologically isomorphic to the Fibonacci system
(see Theorem~\ref{th:Zth}).
The next matrix we have to consider is $\left( \begin{smallmatrix} 0\, 1\, 0\\ 0\, 0 \,1 \\ 1\, 2\,0 \end{smallmatrix} \right).$
There are three substitutions with this matrix as incidence matrix:
\begin{align*}
\eta_1: \; a& \rightarrow b,\, b\rightarrow c,\,c\rightarrow abb,& \eta_2: \; a \rightarrow b,\, b\rightarrow c,\,c\rightarrow bab,\\
\eta_3: \; a& \rightarrow b,\, b\rightarrow c,\,c\rightarrow bba.
\end{align*}
Again, the system of $\eta_1$ contains a $Z$-triple generated by $ab, bb$ and $bc$. So this system is not conjugate to the Fibonacci system, and neither is the one generated by $\eta_3$ (the time reversal of $\eta_1$). The system generated by $\eta_2$ behaves similarly to the Fibonacci system, \emph{but} it has an eigenvalue $-1$ (it has a two-point factor via the projection $a,c\rightarrow 0$, $b\rightarrow 1$).
Finally, we have to consider the matrix $\left( \begin{smallmatrix} 0\, 1\, 0\\ 1 \,0\, 1\\ 1\, 0\, 1 \end{smallmatrix} \right).$
There are four substitutions with this matrix as incidence matrix:
\begin{align*}
\eta_1: \; a& \rightarrow b,\, b\rightarrow ac,\,c\rightarrow ac,& \eta_2: \; a \rightarrow b,\, b\rightarrow ac,\,c\rightarrow ca,\\
\eta_3: \; a& \rightarrow b,\, b\rightarrow ca,\,c\rightarrow ac,& \eta_4: \; a \rightarrow b,\, b\rightarrow ca,\,c\rightarrow ca.
\end{align*}
Here $\eta_1$ and $\eta_4$ generate systems conjugate to the Fibonacci system, but the substitutions are not injective.
The substitution $\eta_2$ has all 9 words of length 2 in its language, and all of these generate fixed points of $\eta_2^6$. So the system of $\eta_2$ is certainly not topologically isomorphic to the Fibonacci system. The proof is finished, since $\eta_3$ is the time reversal of $\eta_2$. $\Box$
\section{Letter-to-letter maps}\label{sec:L2L}
By the Curtis-Hedlund-Lyndon theorem all members in the conjugacy class of the Fibonacci system can be obtained by applying letter-to-letter maps $\pi$ to $N$-block presentations $(X^{[N]},\sigma)$. Here we analyse the case $N=2$. The 2-block presentation of the Fibonacci system is generated by (see Section~\ref{sec:Nblock}) the 2-block substitution
$$\hat{\varphi}_2(1)=12, \quad \hat{\varphi}_2(2)=3, \quad \hat{\varphi}_2(3)=12.$$
There are (modulo permutations of the symbols) three letter-to-letter maps from $\{1,2,3\}$ to $\{0,1\}$. Two of these project onto the Fibonacci system, as they are projections on the first respectively the second letter of the 2-blocks. The third is $\pi$ given by
$$\pi(1)=0,\quad \pi(2)=0, \quad \pi(3)=1.$$
What is the system $(Y,\sigma)$ with $Y=\pi\big(X^{[2]}\big)$?
First note that $(Y,\sigma)$ is conjugate to the Fibonacci system since $\pi$ is clearly invertible.
Secondly, we note that the points in $Y$ can be obtained by doubling the 0's in the points of the Fibonacci system.
This holds because $\pi(12)=00,\, \pi(3)=1$, but also $$\pi(\hat{\varphi}_2(12))=\pi(123)=001,\;\pi(\hat{\varphi}_2(3))=\pi(12)=00.$$
Thirdly, we claim that the system $(Y,\sigma)$ cannot be generated by a substitution. This follows from the fact that the sequences in $Y$ contain the word 0000, but no other fourth powers. This is implied by the $4^{\rm th}$-power-freeness of the Fibonacci word, proved in \cite{Karhumaki}.
A fourth property is that the sequence $y^+$, obtained by doubling the 0's in the infinite Fibonacci word $w_{\rm F}$, is given by
$$y^+_n=[(n+2)\Phi]-[n\Phi]-[2\Phi], \qquad {\rm for\;} n\ge 1,$$
according to \cite{Wolfdieter} and \cite{OEIS-Fib} (here $[\cdot]$ denotes the floor function).
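The closed form can be checked numerically against the doubling construction; the sketch below assumes $w_{\rm F}$ is the fixed point of $0\mapsto 01$, $1\mapsto 0$:

```python
import math

PHI = (1 + 5 ** 0.5) / 2

# A long prefix of the Fibonacci word w_F as an iterate of 0 -> 01, 1 -> 0:
w = '0'
for _ in range(20):
    w = ''.join('01' if x == '0' else '0' for x in w)

y = w.replace('0', '00')      # double every 0

def y_plus(n):
    # the closed form stated above, with [.] the floor function (1-based n)
    return (math.floor((n + 2) * PHI) - math.floor(n * PHI)
            - math.floor(2 * PHI))

assert all(int(y[n - 1]) == y_plus(n) for n in range(1, 2000))
```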
Finally we remark that Durand shows in the paper \cite{Durand} that the Fibonacci system is prime \emph{modulo topological isomorphism}, and ignoring finite factors and rotation factors. This implies that all the projections are automatically invertible, if the projected system is not finite.
\end{document} |
\begin{document}
\title{Empowering the Configuration-IP -- New PTAS Results for Scheduling with Setup Times}
\begin{abstract}
Integer linear programs of configurations, or configuration IPs, are a classical tool in the design of algorithms for scheduling and packing problems, where a set of items has to be placed in multiple target locations.
Herein a configuration describes a possible placement on one of the target locations, and the IP is used to choose suitable configurations covering the items.
We give an augmented IP formulation, which we call the module configuration IP.
It can be described within the framework of n-fold integer programming and therefore be solved efficiently.
As an application, we consider scheduling problems with setup times, in which a set of jobs has to be scheduled on a set of identical machines, with the objective of minimizing the makespan.
For instance, we investigate the case that jobs can be split and scheduled on multiple machines.
However, before a part of a job can be processed, an uninterrupted setup depending on the job has to be paid.
For both of the variants that jobs can be executed in parallel or not, we obtain an efficient polynomial time approximation scheme (EPTAS) of running time $f(1/\varepsilon)\times \mathrm{poly}(|I|)$ with a single exponential term in $f$ for the first and a double exponential one for the second case.
Previously, only constant factor approximations of $5/3$ and $4/3 + \varepsilon$ respectively were known.
Furthermore, we present an EPTAS for a problem where classes of (non-splittable) jobs are given, and a setup has to be paid for each class of jobs being executed on one machine.
\end{abstract}
\section{Introduction}
In this paper, we present an augmented formulation of the classical integer linear program of configurations (configuration IP) and demonstrate its use in the design of efficient polynomial time approximation schemes for scheduling problems with setup times.
Configuration IPs are widely used in the context of scheduling or packing problems, in which items have to be distributed to multiple target locations.
The configurations describe possible placements on a single location, and the integer linear program (IP) is used to choose a proper selection covering all items.
Two fundamental problems, for which configuration IPs have prominently been used, are bin packing and minimum makespan scheduling on identical parallel machines, or machine scheduling for short.
For bin packing, the configuration IP was introduced as early as 1961 by Gilmore and Gomory \cite{gilmore1961linear}, and the recent results for both problems typically use configuration IPs as a core technique, see, e.g., \cite{goemans2014polynomiality,JKV16ICALP}.
In the present work, we consider scheduling problems and therefore introduce the configuration IP in more detail using the example of machine scheduling.
\subparagraph*{Configuration IP for Machine Scheduling.}
In the problem of machine scheduling, a set $\mathcal{J}$ of $n$ jobs is given together with processing times $p_j$ for each job $j$ and a number $m$ of identical machines.
The objective is to find a schedule $\sigma: \mathcal{J} \rightarrow [m]$, such that the makespan is minimized, that is, the latest finishing time of any job $C_{\max}(\sigma)=\max_{i\in[m]}\sum_{j\in\sigma^{-1}(i)}p_j$.
For a given makespan bound, the configurations may be defined as multiplicity vectors indexed by the occurring processing times, where the overall length of the chosen processing times does not violate the bound.
The configuration IP is then given by variables $x_C$ for each configuration $C$; constraints ensuring that there is a machine for each configuration, i.e., $\sum_{C}x_C = m$; and further constraints due to which the jobs are covered, i.e., $\sum_{C}C_p x_C = |\sett{j\in\mathcal{J}}{p_j=p}|$ for each processing time $p$.
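For intuition, here is a toy stand-in for the configuration IP (illustrative brute force only; the actual algorithms discussed below use the Lenstra-Kannan and $n$-fold solvers). It enumerates the configurations for a makespan bound $T$ and searches for $m$ of them covering all jobs:

```python
# Illustrative brute-force stand-in for solving the configuration IP on a
# toy instance; function and variable names are ours.

def configurations(times, T):
    """All multiplicity vectors C (indexed like `times`) with load <= T."""
    confs = [()]
    for p in times:
        confs = [c + (k,) for c in confs for k in range(T // p + 1)
                 if sum(t * x for t, x in zip(times, c + (k,))) <= T]
    return confs

def feasible(times, mult, m, T):
    """Can m configurations of load <= T cover mult[p] jobs per size p?"""
    confs = configurations(times, T)

    def dfs(remaining, machines):
        if not any(remaining):
            return True
        if machines == 0:
            return False
        return any(all(x <= r for x, r in zip(c, remaining))
                   and any(c)
                   and dfs(tuple(r - x for r, x in zip(remaining, c)),
                           machines - 1)
                   for c in confs)

    return dfs(tuple(mult), m)

# two jobs of size 3 and three of size 2 on two machines:
assert feasible((3, 2), (2, 3), m=2, T=6)        # e.g. {3,3} and {2,2,2}
assert not feasible((3, 2), (2, 3), m=2, T=5)    # total load 12 > 2*5
```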
In combination with certain simplification techniques, this type of IP is often used in the design of \emph{polynomial time approximation schemes} (PTAS).
A PTAS is a procedure that, for any fixed accuracy parameter $\varepsilon > 0$, returns a solution with approximation guarantee $(1+\varepsilon)$, that is, a solution whose objective value lies within a factor of $(1 + \varepsilon)$ of the optimum.
In the context of machine scheduling, the aforementioned simplification techniques can be used to guess the target makespan $T$ of the given instance; to upper bound the cardinality of the set of processing times $P$ by a constant (depending on $1/\varepsilon$); and to lower bound the processing times in size, such that they are within a constant factor of the makespan $T$ (see, e.g., \cite{alon1998approximation,JKV16ICALP}).
Hence, only a constant number of configurations is needed, yielding an integer program with a constant number of variables.
Integer programs of that kind can be efficiently solved using the classical algorithm by Lenstra and Kannan \cite{lenstra1983integer,kannan1987minkowski}, yielding a PTAS for machine scheduling.
Here, the error of $(1+\varepsilon)$ in the quality of the solution is due to the simplification steps, and the scheme has a running time of the form $f(1/\varepsilon)\times\mathrm{poly}(|I|)$, where $|I|$ denotes the input size, and $f$ some computable function.
A PTAS with this property is called \emph{efficient} (EPTAS).
Note that for a regular PTAS a running time of the form $|I|^{f(1/\varepsilon)}$ is allowed.
It is well known that machine scheduling is strongly NP-hard, and therefore there is no optimal polynomial time algorithm, unless P$=$NP, and also a so-called \emph{fully polynomial} PTAS (FPTAS)---which is an EPTAS with a polynomial function~$f$---cannot be hoped for.
\subparagraph*{Machine Scheduling with Classes.}
The configuration IP is used in a wide variety of approximation schemes for machine scheduling problems \cite{alon1998approximation, JKV16ICALP}.
However, the approach often ceases to work for scheduling problems in which the jobs have to fulfill some additional requirements, like, for instance, class dependencies.
A problem emerging in this case is that the additional requirements have to be represented in the configurations, resulting in a super-constant number of variables in the IP.
We elaborate on this using a concrete example:
Consider the variant of machine scheduling in which the jobs are partitioned into $K$ setup classes.
For each job $j$ a class $k_j$ is given and for each class $k$ a setup time $s_k$ has to be paid on a machine, if a job belonging to that class is scheduled on it, i.e., $C_{\max}(\sigma) = \max_{i\in[m]}\big( \sum_{j\in\sigma^{-1}(i)} p_j + \sum_{k\in\sett{k_j}{j\in\sigma^{-1}(i)}} s_k \big)$.
With some effort, simplification steps similar to the ones for machine scheduling can be applied.
In the course of this, the setup times as well can be bounded in number and guaranteed to be sufficiently big \cite{SetupPTAS2016}.
However, it is not hard to see that the configuration IP still cannot be trivially extended, while preserving its solvability.
For instance, extending the configurations with multiplicities of setup times will not work, because then we have to make sure that a configuration is used for a fitting subset of classes, creating the need to encode class information into the configurations or introduce other class dependent variables.
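For reference, evaluating the setup class objective defined above is straightforward (variable names are ours):

```python
# Makespan of an assignment in the setup class model: each class with at
# least one job on a machine contributes its setup time once on that machine.

def makespan_setup_classes(p, cls, s, sigma, m):
    """p[j], cls[j]: processing time and class of job j; s[k]: setup time
    of class k; sigma[j]: machine of job j (0-based)."""
    load = [0.0] * m
    classes_on = [set() for _ in range(m)]
    for j, i in enumerate(sigma):
        load[i] += p[j]
        classes_on[i].add(cls[j])
    for i in range(m):
        load[i] += sum(s[k] for k in classes_on[i])
    return max(load)

# two classes with setup times 2 and 1:
p     = [4, 3, 3, 2]
cls   = [0, 0, 1, 1]
s     = [2, 1]
sigma = [0, 1, 1, 0]   # machine 0: jobs 0, 3; machine 1: jobs 1, 2
assert makespan_setup_classes(p, cls, s, sigma, m=2) == 9.0
```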
\subparagraph*{Module Configuration IP.}
Our approach to deal with the class dependencies of the jobs is to cover the job classes with so-called modules and cover the modules in turn with configurations in an augmented IP called the module configuration IP ($\mathrm{MCIP}$).
In the setup class model, for instance, the modules may be defined as combinations of setup times and configurations of processing times, and the actual configurations as multiplicity vectors of module sizes.
The number of both the modules and the configurations will typically be bounded by a constant.
To cover the classes by modules each class is provided with its own set of modules, that is, there are variables for each pair of class and module.
Since the number of classes is part of the input, the number of variables in the resulting $\mathrm{MCIP}$ is super-constant, and therefore the algorithm by Lenstra and Kannan \cite{lenstra1983integer,kannan1987minkowski} is not the proper tool for the solving of the $\mathrm{MCIP}$.
However, the $\mathrm{MCIP}$ has a certain simple structure:
The mentioned variables are partitioned into uniform classes each corresponding to the set of modules, and for each class, the modules have to do essentially the same---cover the jobs of the class.
Utilizing these properties, we can formulate the $\mathrm{MCIP}$ in the framework of $n$-fold integer programs---a class of IPs whose variables and constraints fulfill certain uniformity requirements.
In 2013 Hemmecke, Onn, and Romanchuk \cite{nfoldcubic} showed that $n$-fold IPs can be efficiently solved, and very recently both Eisenbrand, Hunkenschröder and Klein \cite{Kim-n-fold2018} and independently Koutecký, Levin and Onn \cite{KLS-n-fold-2018} developed algorithms with greatly improved running times for the problem.
For a detailed description of the $\mathrm{MCIP}$, the reader is referred to Section \ref{sec:ILP}.
In Figure \ref{fig:MCIP} the basic idea of the $\mathrm{MCIP}$ is visualized.
Using the $\mathrm{MCIP}$, we are able to formulate an EPTAS for machine scheduling in the setup class model described above.
Before, only a regular PTAS with running time $nm^{\mathcal{O}(1/\varepsilon^5)}$ was known \cite{SetupPTAS2016}.
To the best of our knowledge, this is the first use of $n$-fold integer programming in the context of approximation algorithms.
\begin{figure}
\caption{On the left, there is a schematic representation of the configuration IP.
There is a constant number of different sizes, each occurring a super-constant number of times.
The sizes are directly mapped to configurations.
On the right, there is a schematic representation of the MCIP.
There is a super-constant number of classes, each containing a constant number of sizes which have super-constant multiplicities.
The elements from the class are mapped to a constant number of different modules, which have a constant number of sizes.
These module sizes are mapped to configurations.}
\label{fig:MCIP}
\end{figure}
\subparagraph*{Results and Methodology.}
To show the conceptual power of the $\mathrm{MCIP}$, we utilize it for two more problems:
The \emph{splittable} and the \emph{preemptive} setup model of machine scheduling.
In both variants for each job $j$, a setup time $s_j$ is given.
Each job may be partitioned into multiple parts that can be assigned to different machines, but before any part of the job can be processed the setup time has to be paid.
In the splittable model, job parts belonging to the same job can be processed in parallel, and therefore beside the partition of the jobs, it suffices to find an assignment of the job parts to machines.
This is not the case for the preemptive model, in which additionally a starting time for each job part has to be found, and two parts of the same job may not be processed in parallel.
In 1999 Schuurman and Woeginger \cite{schuurman1999preemptive} presented a polynomial time algorithm for the preemptive model with approximation guarantee $4/3 + \varepsilon$, and for the splittable case a guarantee of $5/3$ was achieved by Chen, Ye and Zhang \cite{chen2006lot}.
These are the best known approximation guarantees for the problems at hand.
We show that solutions arbitrarily close to the optimum can be found in polynomial time:
\begin{theorem}\label{thm:main_result}
There is an efficient PTAS with running time $2^{f(1/\varepsilon)}\mathrm{poly}(|I|)$ for minimum makespan scheduling on identical parallel machines in the setup-class model, as well as in the preemptive and splittable setup models.
\end{theorem}
More precisely, we get a running time of $2^{\mathcal{O}(\nicefrac{1}{\varepsilon^{3}}\log^4\nicefrac{1}{\varepsilon})}K^2 n m \log (Km)$ in the setup class model, $2^{\mathcal{O}(\nicefrac{1}{\varepsilon^2}\log^3\nicefrac{1}{\varepsilon})}n^2\log^3 (nm)$ in the splittable, and $2^{2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}}n^2 m\log m\log (nm)$ in the preemptive model.
Note that all three problems are strongly NP-hard, due to trivial reductions from machine scheduling, and our results are therefore in some sense best possible.
Summing up, the main achievement of this work is the development of the module configuration IP and its application in the development of approximation schemes.
Up to now, EPTAS or even PTAS results seemed out of reach for the considered problems, and for the preemptive model we provide the first improvement in 20 years.
The simplification techniques developed for the splittable and preemptive model in order to employ the $\mathrm{MCIP}$ are original and, in the latter case, quite elaborate, and therefore interesting in themselves.
Furthermore, we expect the $\mathrm{MCIP}$ to be applicable to other packing and scheduling problems as well, in particular for variants of machine scheduling and bin packing with additional class-dependent constraints.
On a more conceptual level, we give a first demonstration of the potential of $n$-fold integer programming in the theory of approximation algorithms, and hope to inspire further studies in this direction.
We conclude this paragraph with a more detailed overview of our results and their presentation.
For all three EPTAS results we employ the classical dual approximation framework by Hochbaum and Shmoys \cite{dualapprox} to get a guess of the makespan $T$.
This approach is introduced in Section \ref{sec:prelim} together with $n$-fold IPs and formal definitions of the problems.
In the following section, we develop the module configuration IP, in its basic form and argue that it is indeed an $n$-fold IP.
The EPTAS results follow the same basic approach described above for machine scheduling:
We find a schedule for a simplified instance via the $\mathrm{MCIP}$ and transform it into a schedule for the original one.
The simplification steps typically include rounding of the processing and setup times using standard techniques, as well as the removal of certain jobs, which can later be reinserted via carefully selected greedy procedures.
For the splittable and preemptive model, we additionally have to prove that schedules with a certain simple structure exist, and in the preemptive model, the $\mathrm{MCIP}$ has to be extended.
In Section \ref{sec:EPTAS} the basic versions of the EPTAS are presented and in Section \ref{sec:better_running_time}, some improvements of the running time for the splittable and the setup class model are discussed.
\subparagraph*{Related work.}
For an overview on $n$-fold IPs and their applications, we refer to the book by Onn \cite{onn2010nonlinear}.
There have been recent applications of $n$-fold integer programming to scheduling problems in the context of parameterized algorithms:
Knop and Kouteck{\`y} \cite{knop2016scheduling} showed, among other things, that the problem of makespan minimization on unrelated parallel machines, where the processing times are dependent on both jobs and machines, is fixed-parameter tractable with respect to the maximum processing time and the number of distinct machine types.
This was generalized to the parameters maximum processing time and rank of the processing time matrix by Chen et al. \cite{chen2017parameterized}.
Furthermore, Knop, Kouteck{\`y} and Mnich \cite{KKM17} provided an improved algorithm for a special type of $n$-fold IPs yielding improved running times for several applications of $n$-fold IPs including results for scheduling problems.
There is extensive literature concerning scheduling problems with setup times.
We highlight a few closely related results and otherwise refer to the surveys \cite{allahverdi1999survey,allahverdi2008survey}.
In the following, we use the term $\alpha$-approximation as an abbreviation for polynomial time algorithms with approximation guarantee $\alpha$.
The setup class model was first considered by Mäcker et al. \cite{macker2015non} in the special case that all classes have the same setup time.
They designed a $2$-approximation and additionally a $3/2 + \varepsilon$-approximation for the case that the overall length of the jobs from each class is bounded.
Jansen and Land \cite{SetupPTAS2016} presented a simple $3$-approximation with linear running time, a $2+\varepsilon$-approximation, and the aforementioned PTAS for the general setup class model.
As indicated before, Chen et al. \cite{chen2006lot} developed a $5/3$-approximation for the splittable model.
A generalization of this, in which both setup and processing times are job and machine dependent, has been considered by Correa et al. \cite{correa2015strong}.
They achieve a $(1+\phi)$-approximation, where $\phi$ denotes the golden ratio, using a newly designed linear programming formulation.
Moreover, there are recent results concerning machine scheduling in the splittable model considering the sum of the (weighted) completion times as the objective function, e.g. \cite{schalekamp2015split,correa2016splitting}.
For the preemptive model, a PTAS for the special case that all jobs have the same setup time has been developed by Schuurman and Woeginger \cite{schuurman1999preemptive}.
The mentioned $(4/3 +\varepsilon)$-approximation for the general case \cite{schuurman1999preemptive} follows the same approach.
Furthermore, a combination of the setup class and the preemptive model has been considered, in which the jobs are scheduled preemptively, but the setup times are class dependent.
Monma and Potts \cite{monma1993analysis} presented, among other things, a $(2-1/(\floor{m/2}+1))$-approximation for this model, and later Chen \cite{chen1993better} achieved improvements for some special cases.
\section{Preliminaries}\label{sec:prelim}
In the following, we establish some concepts and notations, formally define the considered problems, and outline the dual approximation approach by Hochbaum and Shmoys \cite{dualapprox}, as well as $n$-fold integer programs.
For any integer $n$, we denote the set $\set{1,\dots, n}$ by $[n]$; we write $\log(\cdot)$ for the logarithm with base $2$; and we will usually assume that some instance $I$ of the problem considered in the respective context is given together with an accuracy parameter $\varepsilon \in (0,1)$ such that $1/\varepsilon$ is an integer.
Furthermore for any two sets $X,Y$ we write $Y^X$ for the set of functions $f:X\rightarrow Y$.
If $X$ is finite, we say that $Y$ is indexed by $X$ and sometimes denote the function value of $f$ for the argument $x\in X$ by $f_x$.
\subparagraph*{Problems.}
For all three of the considered models, a set $\mathcal{J}$ of $n$ jobs with processing times $p_j\in\mathbb{Q}_{>0}$ for each job $j\in\mathcal{J}$ and a number of machines $m$ is given.
In the preemptive and the splittable model, the input additionally includes a setup time $s_j\in\mathbb{Q}_{>0}$ for each job $j\in\mathcal{J}$; while in the setup class model, it includes a number $K$ of setup classes, a setup class $k_j\in[K]$ for each job $j\in\mathcal{J}$, as well as setup times $s_k\in\mathbb{Q}_{>0}$ for each $k\in[K]$.
We take a closer look at the definition of a schedule in the preemptive model.
The jobs may be split.
Therefore, partition sizes $\kappa: \mathcal{J} \rightarrow \mathbb{Z}_{>0}$, together with processing time fractions $\lambda_j:[\kappa(j)]\rightarrow(0,1]$, such that $\sum_{k\in[\kappa(j)]}\lambda_j(k)=1$, have to be found, meaning that job $j$ is split into $\kappa(j)$ many parts and the $k$-th part for $k\in[\kappa(j)]$ has processing time $\lambda_j(k)p_j$.
This given, we define $Js = \sett{(j,k)}{j\in\mathcal{J},k\in[\kappa(j)]}$ to be the set of job parts.
Now, an assignment $\sigma:Js\rightarrow[m]$ along with starting times $\xi:Js\rightarrow\mathbb{Q}_{>0}$ has to be determined, such that any two job parts assigned to the same machine or belonging to the same job do not overlap.
More precisely, we have to assure that for each two job parts $(j,k),(j',k')\in Js$ with $\sigma(j,k)=\sigma(j',k')$ or $j=j'$, we have $\xi(j,k)+ s_j + \lambda_j(k)p_j \leq \xi(j',k')$ or $\xi(j',k')+ s_{j'} + \lambda_{j'}(k')p_{j'} \leq \xi(j,k)$.
A schedule is given by $(\kappa,\lambda,\sigma,\xi)$ and the makespan can be defined as $C_{\max} = \max_{(j,k)\in Js} (\xi(j,k)+ s_j + \lambda_j(k)p_j)$.
Note that the variant of the problem in which overlap between a job part and setup of the same job is allowed is equivalent to the one presented above.
This was pointed out by Schuurman and Woeginger \cite{schuurman1999preemptive} and can be seen with a simple swapping argument.
In the splittable model, it is not necessary to determine starting times for the job parts, because, given the assignment $\sigma$, the job parts assigned to each machine can be scheduled as soon as possible in arbitrary order without gaps.
Hence, in this case, the output is of the form $(\kappa,\lambda,\sigma)$ and the makespan can be defined as $C_{\max} = \max_{i\in[m]} \sum_{(j,k)\in\sigma^{-1}(i)} s_j + \lambda_j(k)p_j$.
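Evaluating the splittable-model makespan for a given output $(\kappa,\lambda,\sigma)$ can be sketched as follows (variable names are ours):

```python
# Makespan in the splittable model: each job part (j, k) on machine
# sigma[(j, k)] contributes its setup time s[j] plus its share lam[(j,k)]*p[j].

def makespan_splittable(p, s, lam, sigma, m):
    load = [0.0] * m
    for (j, k), i in sigma.items():
        load[i] += s[j] + lam[(j, k)] * p[j]
    return max(load)

p = {0: 6.0, 1: 2.0}           # processing times
s = {0: 1.0, 1: 1.0}           # setup times, paid per part
lam = {(0, 1): 0.5, (0, 2): 0.5, (1, 1): 1.0}   # job 0 split in half
sigma = {(0, 1): 0, (0, 2): 1, (1, 1): 1}       # part -> machine
assert sum(lam[(0, k)] for k in (1, 2)) == 1.0  # fractions of job 0 sum to 1
assert makespan_splittable(p, s, lam, sigma, m=2) == 7.0
```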
Lastly, in the setup class model the jobs are not split and given an assignment, the jobs assigned to each machine can be scheduled in batches comprised of the jobs of the same class assigned to the machine without overlaps and gaps.
The output is therefore just an assignment $\sigma:\mathcal{J}\rightarrow[m]$ and the makespan is given by $C_{\max} = \max_{i\in[m]} \sum_{j\in\sigma^{-1}(i)} p_j + \sum_{k\in\sett{k_j}{j\in\sigma^{-1}(i)}} s_k $.
Note that in the preemptive and the setup class model, we can assume that the number of machines is bounded by the number of jobs:
If there are more machines than jobs, placing each job on a private machine yields an optimal schedule in both models and the remaining machines can be ignored.
This, however, is not the case in the splittable model, which causes a minor problem in the following.
\subparagraph*{Dual Approximation.}
All of the presented algorithms follow the dual approximation framework introduced by Hochbaum and Shmoys \cite{dualapprox}:
Instead of solving the minimization version of a problem directly, it suffices to find a procedure that for a given bound $T$ on the objective value either correctly reports that there is no solution with value $T$ or returns a solution with value at most $(1+ a \varepsilon)T$ for some constant $a$.
If we have some initial upper bound $B$ for the optimal makespan $\mathrm{OPT}$ with $B\leq b\mathrm{OPT}$ for some $b$, we can define a PTAS by trying different values $T$ from the interval $[B/b,B]$ in a binary search fashion, finding a value $T^*\leq (1+\mathcal{O}(\varepsilon))\mathrm{OPT}$ after $\mathcal{O}(\log (b/\varepsilon))$ iterations.
Note that for all of the considered problems constant approximation algorithms are known, and the sum of all processing and setup times is a trivial $m$-approximation.
Hence, we always assume that a target makespan $T$ is given.
Furthermore, we assume that the setup times (and, in the preemptive and setup class cases, also the processing times) are bounded by $T$, because otherwise we can reject $T$ immediately.
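The binary search wrapper described above can be sketched as follows; this is our own illustration, assuming a black-box procedure `decide(T)` that either reports infeasibility for makespan $T$ or returns a schedule of makespan at most $(1 + a\varepsilon)T$:

```python
# Hedged sketch of the dual approximation framework (Hochbaum & Shmoys):
# binary search over candidate makespans T in [B/b, B].

import math

def dual_approximation(decide, B, b, eps):
    """B <= b * OPT is an initial upper bound; decide(T) -> schedule or None."""
    lo, hi = B / b, B
    best = None
    # O(log(b/eps)) iterations suffice to locate T* <= (1 + O(eps)) * OPT.
    for _ in range(math.ceil(math.log2(b / eps)) + 1):
        T = (lo + hi) / 2
        schedule = decide(T)
        if schedule is None:   # no schedule with makespan T exists
            lo = T
        else:
            best, hi = schedule, T
    return best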
\subparagraph*{$n$-fold Integer Programs.}
We briefly define $n$-fold integer programs (IP) following the notation of \cite{nfoldcubic} and \cite{knop2016scheduling} and state the main algorithmic result needed in the following.
Let $n,r,s,t\in\mathbb{Z}_{>0}$ be integers and $A$ be an integer $((r + ns) \times nt)$-matrix of the following form:
\[A=
\begin{pmatrix}
A_1 & A_1 & \cdots & A_1 \\
A_2 & 0 & \cdots & 0 \\
0 & A_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_2
\end{pmatrix}
\]
The matrix $A$ is the so-called $n$-fold product of the bimatrix $\binom{A_1}{A_2}$, with $A_1$ an $r\times t$ and $A_2$ an $s\times t$ matrix.
Furthermore, let $w,\ell,u\in\mathbb{Z}^{nt}$ and $b\in\mathbb{Z}^{r+ns}$.
Then the $n$-fold integer programming problem is given by:
\[\min\sett{wx}{Ax = b, \ell \leq x \leq u, x\in\mathbb{Z}^{nt}}\]
We set $\Delta$ to be the maximum absolute value occurring in $A$.
Until recently, the best known algorithm for solving $n$-fold IPs was due to Hemmecke, Onn and Romanchuk \cite{nfoldcubic}:
\begin{theorem}\label{thm:solving_n-fold_old}
Let $\varphi$ be the encoding length of $w$, $b$, $\ell$, $u$ and $\Delta$.
The $n$-fold integer programming problem can be solved in time $\mathcal{O}(\Delta^{3t(rs+st+r+s)}n^3\varphi)$, when $r$, $s$ and $t$ are fixed.
\end{theorem}
However, in 2018 both Eisenbrand, Hunkenschröder and Klein \cite{KLS-n-fold-2018} and independently Koutecký, Levin and Onn \cite{KLS-n-fold-2018} developed algorithms with improved and very similar running times.
We state a variant due to Eisenbrand et al. that is adapted to our needs:
\begin{theorem}\label{thm:solving_n-fold}
Let $\varphi$ be the encoding length of the largest number occurring in the input, and $\Phi = \max_i(u_i-\ell_i)$.
The $n$-fold integer programming problem can be solved in time $(rs\Delta)^{\mathcal{O}(r^2s+rs^2)}t^2n^2\varphi\log(\Phi)\log(nt\Phi)$.
\end{theorem}
The variables $x$ can naturally be partitioned into \emph{bricks} $x^{(q)}$ of dimension $t$ for each $q\in [n]$, such that $x=(x^{(1)},\dots, x^{(n)})$.
Furthermore, we denote the constraints corresponding to $A_1$ as \emph{globally uniform} and the ones corresponding to $A_2$ as \emph{locally uniform}.
Hence, $r$ is the number of globally uniform and $s$ the number of locally uniform constraints (ignoring their $n$-fold duplication), while $t$ is the \emph{brick size} and $n$ the \emph{brick number}.
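The block structure of the $n$-fold product is easy to misread from the displayed matrix alone, so the following sketch (ours, using plain nested lists) constructs it explicitly: the $A_1$ rows repeated across all bricks, followed by a block-diagonal arrangement of $A_2$:

```python
# Illustrative construction of the n-fold product of the bimatrix (A1 over A2).

def n_fold(A1, A2, n):
    r, t = len(A1), len(A1[0])
    rows = [row * n for row in A1]                       # A1 repeated n times
    for q in range(n):                                   # block-diagonal A2
        for row in A2:
            rows.append([0] * (q * t) + row + [0] * ((n - q - 1) * t))
    return rows
```

For $A_1 = (1\ 1)$, $A_2 = I_2$ and $n = 2$, this yields the expected $(1 + 2\cdot 2)\times(2\cdot 2)$ matrix.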
\section{Module Configuration IP}\label{sec:ILP}
In this section, we state the configuration IP for machine scheduling; introduce a basic version of the module configuration IP (MCIP) that is already sufficiently general to work for both the splittable and setup class model; and lastly show that the configuration IP can be expressed by the $\mathrm{MCIP}$ in multiple ways.
Before that, however, we formally introduce the concept of \emph{configurations}.
Given a set of objects $A$, a configuration $C$ of these objects is a vector of multiplicities indexed by the objects, i.e., $C\in\mathbb{Z}_{\geq 0}^A$.
For given sizes $\Lambda(a)$ of the objects $a\in A$, the size $\Lambda(C)$ of a configuration $C$ is defined as $\sum_{a\in A}C_a\Lambda(a)$.
Moreover, for a given bound $B$, we define $\mathcal{C}_A(B)$ to be the set of configurations of $A$ that are bounded in size by $B$, that is, $\mathcal{C}_A(B) = \sett{C\in\mathbb{Z}_{\geq 0}^A}{\Lambda(C)\leq B}$.
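As a concrete illustration (ours, not from the paper), the bounded configuration set $\mathcal{C}_A(B)$ can be enumerated recursively, choosing a multiplicity for one object at a time while tracking the remaining size budget:

```python
# Hedged sketch: enumerate C_A(B), the configurations C of objects A with
# size Lambda(C) = sum_a C_a * Lambda(a) bounded by B.

def bounded_configurations(sizes, B):
    """sizes: dict object -> Lambda(object); yields dicts object -> multiplicity."""
    objects = list(sizes)

    def rec(idx, remaining, current):
        if idx == len(objects):
            yield dict(current)
            return
        a = objects[idx]
        mult = 0
        while mult * sizes[a] <= remaining:
            current[a] = mult
            yield from rec(idx + 1, remaining - mult * sizes[a], current)
            mult += 1
        del current[a]

    yield from rec(0, B, {})
```

For two objects of sizes $1$ and $2$ and bound $B=2$, this produces the four configurations $(0,0),(1,0),(2,0),(0,1)$.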
\subparagraph*{Configuration IP.}
We give a recollection of the configuration IP for scheduling on identical parallel machines.
Let $P$ be the set of distinct processing times for some instance $I$ with multiplicities $n_p$ for each $p\in P$, meaning, $I$ includes exactly $n_p$ jobs with processing time $p$.
The size $\Lambda(p)$ of a processing time $p$ is given by itself.
Furthermore, let $T$ be a guess of the optimal makespan.
The configuration IP for $I$ and $T$ is given by variables $x_C\geq 0$ for each $C\in \mathcal{C}_P(T)$ and the following constraints:
\begin{align}
\sum_{C\in\mathcal{C}_P(T)}x_{C} & = m & \label{eq:CILP_machs} \\
\sum_{C\in\mathcal{C}_P(T)} C_p x_{C} & = n_p & \forall p\in P\label{eq:CILP_jobs}
\end{align}
Due to constraint (\ref{eq:CILP_machs}), exactly one configuration is chosen for each machine, while (\ref{eq:CILP_jobs}) ensures that the correct number of jobs or job sizes is covered.
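The two constraints can be made tangible with a toy brute-force check (our sketch; the actual algorithms of course solve the IP, not this exponential search): it asks whether some multiset of $m$ configurations from $\mathcal{C}_P(T)$ covers exactly $n_p$ jobs of each processing time $p$.

```python
# Hedged sketch: brute-force feasibility of the configuration IP for
# identical machines (constraints (1) and (2) above).

from itertools import combinations_with_replacement

def bounded_configs(P, T):
    """All multiplicity tuples C (indexed like P) with sum_p C_p * p <= T."""
    result = [()]
    for p in P:
        new = []
        for c in result:
            used = sum(q * k for q, k in zip(P, c))
            new.extend(c + (k,) for k in range(int((T - used) // p) + 1))
        result = new
    return result

def config_ip_feasible(multiplicity, m, T):
    """multiplicity: dict p -> n_p; m machines; makespan guess T."""
    P = sorted(multiplicity)
    C = bounded_configs(P, T)
    target = tuple(multiplicity[p] for p in P)
    for choice in combinations_with_replacement(C, m):  # x_C summing to m
        covered = tuple(sum(c[i] for c in choice) for i in range(len(P)))
        if covered == target:                           # job coverage
            return True
    return False
```

For two jobs of length $3$ and one of length $2$ on $m=2$ machines, the guess $T=5$ is feasible (configurations $(1,1)$ and $(1,0)$), while $T=4$ is not.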
\subparagraph*{Module Configuration IP.}
Let $\mathcal{B}$ be a set of basic objects (e.g. jobs or setup classes) and let there be $D$ integer values $B_1,\dots,B_D$ for each basic object $B\in\mathcal{B}$ (e.g. processing time or numbers of different kinds of jobs).
Our approach is to cover the basic objects with so-called \emph{modules} and in turn cover the modules with configurations.
Depending on the context, modules correspond to batches of jobs or job piece sizes together with a setup time and can also encompass additional information like a starting time.
Let $\mathcal{M}$ be a set of such modules.
In order to cover the basic objects, each module $M\in\mathcal{M}$ also has $D$ integer values $M_1,\dots,M_D$.
Furthermore, each module $M$ has a size $\Lambda(M)$ and a set of eligible basic objects $ \mathcal{B}(M)$.
The latter is needed because not all modules are compatible with all basic objects, e.g., because they do not have the right setup times.
The configurations are used to cover the modules; however, it typically does not matter which module exactly is covered, but rather which size the module has.
Let $H$ be the set of distinct module sizes, i.e., $H = \sett{\Lambda(M)}{M\in\mathcal{M}}$, and for each module size $h\in H$ let $\mathcal{M}(h)$ be the set of modules with size $h$.
We consider the set $\mathcal{C}$ of configurations of module sizes which are bounded in size by a guess of the makespan $T$, i.e., $\mathcal{C}=\mathcal{C}_H(T)$.
In the preemptive case configurations need to additionally encompass information about starting times of modules, and therefore the definition of configurations will be slightly more complicated in that case.
Since we want to choose configurations for each machine, we have variables $x_C$ for each $C\in\mathcal{C}$ and constraints corresponding to (\ref{eq:CILP_machs}).
Furthermore, we choose modules with variables $y_M$ for each $M\in\mathcal{M}$, and because we want to cover the chosen modules with configurations, we have an analogue of constraint (\ref{eq:CILP_jobs}), namely $\sum_{C\in\mathcal{C}} C_{h} x_{C} = \sum_{M\in \mathcal{M}(h)}y_M $ for each module size $h \in H$.
It turns out, however, that to properly cover the basic objects with modules, we need the variables $y_M$ for each basic object, and this is where $n$-fold IPs come into play:
The variables stated so far form a brick of the variables of the $n$-fold IP and there is one brick for each basic object, that is, we have, for each $B\in\mathcal{B}$, variables $x^{(B)}_C$ for each $C\in\mathcal{C}$, and $y^{(B)}_M$ for each $M\in\mathcal{M}$.
Using the upper bounds of the $n$-fold model, the variables $y^{(B)}_M$ are set to zero if $B$ is not eligible for $M$, and we set the lower bounds of all variables to zero.
Sensible upper bounds for the remaining variables will typically be clear from context.
Besides that, the module configuration integer program $\mathrm{MCIP}$ (for $\mathcal{B}$, $\mathcal{M}$ and $\mathcal{C}$) is given by:
\begin{align}
\sum_{B\in\mathcal{B}}\sum_{C\in\mathcal{C}}x^{(B)}_{C} & = m & \label{eq:MCIP_machs} \\
\sum_{B\in\mathcal{B}}\sum_{C\in\mathcal{C}} C_h x^{(B)}_{C} & = \sum_{B\in\mathcal{B}}\sum_{M\in \mathcal{M}(h)}y^{(B)}_M & \forall h \in H \label{eq:MCIP_mods}\\
\sum_{M\in\mathcal{M}}M_d y^{(B)}_M & = B_d & \forall B\in\mathcal{B}, d\in[D] \label{eq:MCIP_bobs}
\end{align}
It is easy to see that the constraints (\ref{eq:MCIP_machs}) and (\ref{eq:MCIP_mods}) are globally uniform.
They are the mentioned adaptations of (\ref{eq:CILP_machs}) and (\ref{eq:CILP_jobs}).
The constraint (\ref{eq:MCIP_bobs}), on the other hand, is locally uniform and ensures that the basic objects are covered.
Note that, while the duplication of the configuration variables does not carry meaning, it also does not upset the model:
Consider the modified $\mathrm{MCIP}$ that is given by not duplicating the configuration variables.
A solution $(\tilde{x}, \tilde{y})$ for this IP gives a solution $(x,y)$ for the $\mathrm{MCIP}$ by fixing some basic object $B^*$, setting $x^{(B^*)}_C = \tilde{x}_C$ for each configuration $C$, setting the remaining configuration variables to $0$, and copying the remaining variables.
A solution $(x,y)$ for the $\mathrm{MCIP}$, on the other hand, gives a solution $(\tilde{x}, \tilde{y})$ for the modified version by setting $\tilde{x}_C = \sum_{B\in\mathcal{B}}x^{(B)}_C$ for each configuration $C$.
Summarizing, we get:
\begin{observation}\label{rem:MCIP}
The $\mathrm{MCIP}$ is an $n$-fold IP with brick-size $t = |\mathcal{M}| + |\mathcal{C}|$, brick number $n = |\mathcal{B}|$, $r = |H|+1$ globally uniform and $s = D$ locally uniform constraints.
\end{observation}
Moreover, in all the considered applications we will minimize the overall size of the configurations, i.e., $\sum_{B\in\mathcal{B}}\sum_{C\in\mathcal{C}}\Lambda(C)x^{(B)}_{C}$.
This will be required, because in the simplification steps of our algorithms some jobs are removed and have to be reinserted later, and we therefore have to make sure that no space is wasted.
\subparagraph*{First Example.}
We conclude the section by pointing out several different ways to replace the classical configuration IP for scheduling on identical machines with the $\mathrm{MCIP}$, thereby giving some intuition for the model.
The first possibility is to consider the jobs as the basic objects and their processing times as their single value ($\mathcal{B}= \mathcal{J}$, $D=1$); the modules are the processing times ($\mathcal{M} = P$), and a job is eligible for a module if its processing time matches; and the configurations are all the configurations bounded in size by $T$.
Another option is to choose the processing times as basic objects, keeping all the other definitions essentially as before.
Lastly, we could consider the whole set of jobs or the whole set of processing times as a single basic object with $D=|P|$ different values.
In this case, we can define the set of modules as the set of configurations of processing times bounded by $T$.
\section{EPTAS results}\label{sec:EPTAS}
In this section, we present approximation schemes for each of the three considered problems.
Each of the results follows the same approach:
The instance is carefully simplified, a schedule for the simplified instance is found using the $\mathrm{MCIP}$, and this schedule is transformed into a schedule for the original instance.
The presentation of the result is also similar for each problem:
We first discuss how the instance can be sensibly simplified, and how a schedule for the simplified instance can be transformed into a schedule for the original one.
Next, we discuss how a schedule for the simplified instance can be found using the $\mathrm{MCIP}$, and lastly, we summarize and analyze the taken steps.
For the sake of clarity, we have given rather formal definitions for the problems at hand in Section \ref{sec:prelim}.
In the following, however, we will use the terms in a more intuitive fashion for the most part, and we will, for instance, often take a geometric rather than a temporal view on schedules and talk about the \emph{length} or the \emph{space} taken up by jobs and setups on machines rather than time.
In particular, given a schedule for an instance of any one of the three problems together with an upper bound for the makespan $T$, the \emph{free space} with respect to $T$ on a machine is defined as the summed up lengths of time intervals between $0$ and $T$ in which the machine is idle.
The free space (with respect to $T$) of a schedule is the summed up free space of all the machines.
For bounds $T$ and $L$ for the makespan and the free space, we say that a schedule is a $(T,L)$-schedule if its makespan is at most $T$ and the free space with respect to $T$ is at least $L$.
When transforming the instance we will increase or decrease processing and setup times and fill in or remove extra jobs.
Consider a $(T',L')$-schedule, where $T'$ and $L'$ denote some arbitrary makespan or free space bounds.
If we fill in extra jobs or increase processing or setup times, but can bound the increase on each machine by some bound $b$, we end up with a $(T' + b,L')$-schedule for the transformed instance.
In particular we have the same bound for the free space, because we properly increased the makespan bound.
If, on the other hand, jobs are removed or setup times decreased, we obviously still have a $(T',L')$-schedule for the transformed instance.
This will be used frequently in the following.
\subsection{Setup Class Model}
We start with the setup class model.
In this case, we can essentially reuse the simplification steps that were developed by Jansen and Land \cite{SetupPTAS2016} for their PTAS.
The main difference between the two procedures is that we solve the simplified instance via the $\mathrm{MCIP}$, while they used a dynamic program.
For the sake of self-containment, we include our own simplification steps, but remark that they are strongly inspired by those from \cite{SetupPTAS2016}.
In Section \ref{sec:better_running_time} we give a more elaborate rounding procedure resulting in an improved running time.
\subparagraph*{Simplification of the Instance.}
In the following, we distinguish \emph{big setup} jobs, i.e., jobs $j$ belonging to classes $k$ with setup times $s_k \geq \varepsilon^3 T$, and \emph{small setup} jobs, for which $s_k < \varepsilon^3 T$.
We denote the corresponding subsets of jobs by $\jobs^{\mathrm{bst}}$ and $\jobs^{\mathrm{sst}}$ respectively.
Furthermore, we call a job \emph{tiny} or \emph{small}, if its processing time is smaller than $\varepsilon^4 T$ or $\varepsilon T$ respectively, and \emph{big} or \emph{large} otherwise.
For any given set of jobs $J$, we denote the subset of tiny jobs from $J$ with $J_\mathrm{tiny}$ and the small, big and large jobs analogously.
We simplify the instance in four steps, aiming for an instance that exclusively includes big jobs with big setup times and additionally only a constant number of distinct processing and setup times.
For technical reasons, we assume $\varepsilon \leq 1/2$.
We proceed with the first simplification step.
Let $I_1$ be the instance given by the job set $\mathcal{J}\setminus\jobs^{\mathrm{sst}}_\mathrm{small}$ and $Q$ the set of setup classes completely contained in $\jobs^{\mathrm{sst}}_\mathrm{small}$, i.e., $Q = \sett{k}{\forall j\in\mathcal{J}: k_j = k \Rightarrow j\in\jobs^{\mathrm{sst}}_\mathrm{small} }$.
An obvious lower bound on the space taken up by the jobs from $\jobs^{\mathrm{sst}}_\mathrm{small}$ in any schedule is given by $L=\sum_{j\in\jobs^{\mathrm{sst}}_\mathrm{small}} p_j + \sum_{k\in Q} s_k$.
Note that the instance $I_1$ may include a reduced number $K'$ of setup classes.
\begin{lemma}\label{lem:sclass_rounding1}
A schedule for $I$ with makespan $T$ induces a $(T,L)$-schedule for $I_{1}$, that is, a schedule with makespan $T$ and free space at least $L$; and any $(T',L)$-schedule for $I_1$ can be transformed into a schedule for $I$ with makespan at most $(1+\varepsilon)T'+ 2\varepsilon^3 T$.
\end{lemma}
\begin{proof}
The first claim is obvious and we therefore assume that we have a $(T',L)$-schedule for $I_1$.
We group the jobs from $\jobs^{\mathrm{sst}}_\mathrm{small}$ by setup classes and first consider the groups with summed up processing time at most $\varepsilon^2 T$.
For each of these groups we check whether the respective setup class contains a large job.
If this is the case, we schedule the complete group on a machine on which such a large job is already scheduled, using up free space if possible.
Since the large jobs have a length of at least $\varepsilon T$, there are at most $T'/(\varepsilon T)$ many large jobs on each machine and therefore the schedule on the respective machine has length at most $(1+\varepsilon)T'$ or there is free space with respect to $T'$ left.
If, on the other hand, the respective class does not contain a large job and is therefore fully contained in $\jobs^{\mathrm{sst}}_\mathrm{small}$, we create a container including the whole class and its setup time.
Note that the overall length of the container is at most $(\varepsilon^2+\varepsilon^3)T\leq \varepsilon T$ (using $\varepsilon \leq 1/2$).
Next, we create a sequence containing the containers and the remaining jobs ordered by setup class.
We insert the items from this sequence greedily into the remaining free space in a next-fit fashion, exceeding $T'$ on each machine by at most one item from the sequence.
This can be done because we had a free space of at least $L$ and the inserted objects had an overall length of at most $L$.
To make the resulting schedule feasible, we have to insert some setup times.
However, because the overall length of the jobs from each class in need of a setup is at least $\varepsilon^2 T$ and the sequence was ordered by classes, there are at most $T'/(\varepsilon^2 T) + 2$ distinct classes without a setup time on each machine.
Inserting the missing setup times will therefore increase the makespan by at most $(T'/(\varepsilon^2 T) + 2)\varepsilon^3 T = \varepsilon T' + 2\varepsilon^3 T$.
\end{proof}
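The greedy next-fit insertion at the heart of this proof can be sketched as follows (our illustration; items stand for the containers and remaining jobs of the sequence, and each machine may be exceeded by at most one item):

```python
# Hedged sketch of next-fit insertion into the free space of the machines,
# exceeding the bound T' on each machine by at most one item.

def next_fit_into_free_space(items, free_space):
    """free_space: per-machine free space w.r.t. T'; requires sum >= sum(items)."""
    assert sum(free_space) >= sum(items)
    placement = [[] for _ in free_space]
    i = 0
    for item in items:
        while free_space[i] <= 0:   # machine already full: move on
            i += 1
        placement[i].append(item)
        free_space[i] -= item       # may dip below 0: the one overflow item
    return placement
```

The precondition that the total free space is at least the total item length is exactly the role played by the lower bound $L$ in the lemma.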
Next, we deal with the remaining (large) jobs with small setup times $j\in\jobs^{\mathrm{sst}}_\mathrm{large}$.
Let $I_2$ be the instance we get by increasing the setup times of the classes with small setup times to $\varepsilon^3 T$.
We denote the setup time of class $k\in[K']$ for $I_2$ by $s'_k$.
Note that there are no small setup jobs in $I_2$.
\begin{lemma}\label{lem:sclass_rounding2}
A $(T',L')$-schedule for $I_1$ induces a $((1 + \varepsilon^2)T',L')$-schedule for $I_2$, and a $(T',L')$-schedule for $I_2$ is also a $(T',L')$-schedule for $I_1$.
\end{lemma}
\begin{proof}
The first claim is true because in a schedule with makespan at most $T'$ there can be at most $T'/(\varepsilon T)$ many large jobs on any machine, and the second claim is obvious.
\end{proof}
Let $I_3$ be the instance we get by replacing the jobs from $\jobs^{\mathrm{bst}}_\mathrm{tiny}$ with placeholders of size $\varepsilon^4 T$.
More precisely, for each class $k\in[K]$ we introduce $\ceil{(\sum_{j\in\jobs^{\mathrm{bst}}_\mathrm{tiny},k_j=k}p_j)/(\varepsilon^4 T)}$ many jobs with processing time $\varepsilon^4 T$ and class $k$.
We denote the job set of $I_3$ by $\mathcal{J}'$ and the processing time of a job $j\in\mathcal{J}'$ by $p'_j$.
Note that $I_3$ exclusively contains big jobs with big setup times.
\begin{lemma}\label{lem:sclass_rounding3}
If there is a $(T',L')$-schedule for $I_2$, there is also a $((1 + \varepsilon)T',L')$-schedule; and if there is a $(T',L')$-schedule for $I_3$, there is also a $((1 + \varepsilon)T',L')$-schedule for $I_2$.
\end{lemma}
\begin{proof}
Note that for any $(T',L')$-schedule for $I_2$ or $I_3$ there are at most $T'/ (\varepsilon^3T) $ many distinct big setup classes scheduled on any machine.
Hence, when considering such a schedule for $I_2$, we can remove the tiny jobs belonging to $\jobs^{\mathrm{bst}}_\mathrm{tiny}$ from the machines and instead fill in the placeholders, such that each machine for each class receives at most as much length from that class, as was removed, rounded up to the next multiple of $\varepsilon^4 T$.
All placeholders can be placed like this and the makespan is increased by at most $(T'/ (\varepsilon^3T))\varepsilon^4 T = \varepsilon T'$.
If, on the other hand, we consider such a schedule for $I_3$, we can remove the placeholders and instead fill in the respective tiny jobs, again overfilling by at most one job.
This yields a $((1 + \varepsilon)T',L')$-schedule for $I_2$ with the same argument.
\end{proof}
Lastly, we perform both a geometric and an arithmetic rounding step for the processing and setup times.
The geometric rounding is needed to suitably bound the number of distinct processing and setup times and due to the arithmetic rounding we will be able to guarantee integral coefficients in the IP.
More precisely, we set $\tilde{p}_j = (1+\varepsilon)^{\ceil{\log_{1+\varepsilon}(p'_j/(\varepsilon^4 T))}}\varepsilon^4 T$ and $\bar{p}_j = \ceil{\tilde{p}_j/(\varepsilon^5 T)}\varepsilon^5 T$ for each $j\in\mathcal{J}'$, as well as $\tilde{s}_k = (1+\varepsilon)^{\ceil{\log_{1+\varepsilon}(s'_k/(\varepsilon^3 T))}}\varepsilon^3 T$ and $\bar{s}_k = \ceil{\tilde{s}_k/(\varepsilon^5 T)}\varepsilon^5 T$ for each setup class $k\in[K']$.
The resulting instance is called $I_4$.
\begin{lemma}\label{lem:sclass_rounding4}
A $(T',L')$-schedule for $I_3$ induces a $((1 + 3\varepsilon)T',L')$-schedule for $I_4$, and any $(T',L')$-schedule for $I_4$ can be turned into a $(T',L')$-schedule for $I_3$.
\end{lemma}
\begin{proof}
For the first claim, we first stretch a given schedule by $(1+\varepsilon)$.
This enables us to use the processing and setup times due to the geometric rounding step.
Now, using the ones due to the second step increases the makespan by at most $2\varepsilon T'$, because there were at most $T'/(\varepsilon^4 T)$ many big jobs on any machine to begin with.
The second claim is obvious.
\end{proof}
Based on the rounding steps, we define two makespan bounds $\bar{T}$ and $\breve{T}$:
Let $\bar{T}$ be the makespan bound that is obtained from $T$ by the application of the Lemmata \ref{lem:sclass_rounding1}-\ref{lem:sclass_rounding4} in sequence, i.e., $\bar{T} = (1 + \varepsilon^2)(1 + \varepsilon)(1 + 3\varepsilon) T = (1+\mathcal{O}(\varepsilon))T$.
We will find a $(\bar{T},L)$-schedule for $I_4$ utilizing the $\mathrm{MCIP}$ and afterward apply the Lemmata \ref{lem:sclass_rounding1}-\ref{lem:sclass_rounding4} backwards, to get a schedule with makespan $\breve{T} = (1+\varepsilon)^2\bar{T} + 2\varepsilon^3 T= (1+\mathcal{O}(\varepsilon))T$.
Let $P$ and $S$ be the sets of distinct occurring processing and setup times for instance $I_4$.
Because of the rounding, the minimum and maximum lengths of the setup and processing times, and $\varepsilon < 1$, we can bound $|P|$ and $|S|$ by $\mathcal{O}(\log_{1+\varepsilon} 1/\varepsilon)=\mathcal{O}(1/\varepsilon\log 1/\varepsilon)$.
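The two-stage rounding for a single processing time can be written out directly; the following sketch (ours) applies the geometric step followed by the arithmetic step from the definition of $I_4$:

```python
# Hedged sketch of the rounding for a processing time p' in I_3:
# geometric rounding to a power of (1+eps) times eps^4 * T, then
# arithmetic rounding up to the next multiple of eps^5 * T.

import math

def round_processing_time(p, eps, T):
    k = math.ceil(math.log(p / (eps**4 * T), 1 + eps))
    p_tilde = (1 + eps)**k * eps**4 * T                      # geometric step
    p_bar = math.ceil(p_tilde / (eps**5 * T)) * eps**5 * T   # arithmetic step
    return p_bar
```

The geometric step bounds the number of distinct values; the arithmetic step makes all values multiples of $\varepsilon^5 T$, which later yields integral IP coefficients after scaling.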
\subparagraph*{Utilization of the MCIP.}
At this point, we can employ the module configuration IP.
The basic objects in this context are the setup classes, i.e., $\mathcal{B}=[K']$, and the different values are the numbers of jobs with a certain processing time, i.e., $D = |P|$.
We set $n_{k,p}$ to be the number of jobs from setup class $k\in[K']$ with processing time $p\in P$.
The modules correspond to batches of jobs together with a setup time.
Batches of jobs can be modeled as configurations of processing times, that is, multiplicity vectors indexed by the processing times.
Hence, we define the set of modules $\mathcal{M}$ to be the set of pairs of configurations of processing times and setup times with a summed up size bounded by $\bar{T}$, i.e., $\mathcal{M} = \sett{(C,s)}{C\in\mathcal{C}_P(\bar{T}),s\in S, s + \Lambda(C)\leq \bar{T}}$, and write $M_p = C_p$ and $s_M = s$ for each module $M=(C,s)\in\mathcal{M}$.
The values of a module $M$ are given by the numbers $M_p$ and its size $\Lambda(M)$ by $s_M + \sum_{p\in P}M_p p$.
Remember that the configurations $\mathcal{C}$ are the configurations of module sizes $H$ that are bounded in size by $\bar{T}$, i.e., $\mathcal{C}=\mathcal{C}_H(\bar{T})$.
A setup class is eligible for a module if the setup times fit, i.e., $\mathcal{B}(M) = \sett{k\in[K']}{\bar{s}_k=s_M}$.
Lastly, we establish $\varepsilon^5 T=1$ by scaling.
For the sake of readability, we state the resulting constraints of the $\mathrm{MCIP}$ with adapted notation and without duplication of the configuration variables:
\begin{align}
\sum_{C\in\mathcal{C}}x_{C} & = m & \label{eq:MCIP_machs_class} \\
\sum_{C\in\mathcal{C}} C_h x_{C} & = \sum_{k\in[K']}\sum_{M\in \mathcal{M}(h)}y^{(k)}_M & \forall h\inH \label{eq:MCIP_mods_class}\\
\sum_{M\in\mathcal{M}}M_p y^{(k)}_M & = n_{k,p} & \forall k\in[K'],p\in P \label{eq:MCIP_jobs_class}
\end{align}
Note that the coefficients are all integral and this includes those of the objective function, i.e., $\sum_C \Lambda(C)x_C $, because of the scaling step.
\begin{lemma}\label{lem:class_IP_to_sched}
With the above definitions, there is a $(\bar{T},L)$-schedule for $I_4$, iff the $\mathrm{MCIP}$ has a solution with objective value at most $m\bar{T} - L$.
\end{lemma}
\begin{proof}
Let there be a $(\bar{T},L)$-schedule for $I_4$.
Then the schedule on a given machine corresponds to a distinct configuration $C$, which can be determined by counting, for each possible module size $h$, the batches of jobs from the same class whose length together with the setup time adds up to an overall length of $h$.
Note that the length of this configuration is equal to the used up space on that machine.
We fix an arbitrary setup class $k$ and set the variables $x^{(k)}_C$ accordingly (and $x^{(k')}_C=0$ for $k'\neq k$ and $C\in\mathcal{C}$).
By this setting we get an objective value of at most $m\bar{T} - L$ because there was $L$ free space in the schedule.
For each class $k$ and module $M$, we count the number of machines on which there are exactly $M_p$ jobs with processing time $p$ from class $k$ for each $p\in P$, and set $y^{(k)}_M$ accordingly.
It is easy to see that the constraints are satisfied by these definitions.
Given a solution $(x,y)$ of the $\mathrm{MCIP}$, we define a corresponding schedule:
Because of (\ref{eq:MCIP_machs_class}) we can match the machines to configurations such that each machine is matched to exactly one configuration.
If machine $i$ is matched to $C$, we create, for each module size $h$, $C_h$ slots of length $h$ on $i$.
Next, we divide the setup classes into batches.
For each class $k$ and module $M$, we create $y^{(k)}_M$ batches of jobs from class $k$ with $M_p$ jobs with processing time $p$ for each $p\in P$ and place the batch together with the corresponding setup time into a fitting slot on some machine.
Because of (\ref{eq:MCIP_jobs_class}) and (\ref{eq:MCIP_mods_class}) all jobs can be placed by this process.
Note that the used space equals the overall size of the configurations and we therefore have free space of at least $L$.
\end{proof}
\subparagraph*{Result.}
Using the above results, we can formulate and analyze the following procedure:
\begin{algorithm}\
\begin{enumerate}
\item Generate the modified instance $I_4$:
\begin{itemize}
\item Remove the small jobs with small setup times.
\item Increase the setup times of the remaining classes with small setup times.
\item Replace the tiny jobs with big setup times.
\item Round up the resulting processing and setup times.
\end{itemize}
\item Build and solve the $\mathrm{MCIP}$ for $I_4$.
\item If the $\mathrm{MCIP}$ is infeasible, or the objective value greater than $m \bar{T} - L$, report that $I$ has no solution with makespan $T$.
\item Otherwise build the schedule with makespan $\bar{T}$ and free space at least $L$ for $I_4$.
\item Transform the schedule into a schedule for $I$ with makespan at most $\breve{T}$:
\begin{itemize}
\item Use the prerounding processing and setup times.
\item Replace the placeholders by the tiny jobs with big setup times.
\item Use the original setup times of the classes with small setup times.
\item Insert the small jobs with small setup times into the free space.
\end{itemize}
\end{enumerate}
\end{algorithm}
The procedure is correct due to the above results.
To analyze its running time, we first bound the parameters of the $\mathrm{MCIP}$.
We have $|\mathcal{B}| = K' \leq K$ and $D = |P|$ by definition,
and $|\mathcal{M}| = \mathcal{O}(|S|(1/\varepsilon^3)^{|P|}) = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log^2\nicefrac{1}{\varepsilon})}$, because $|S|,|P|\in\mathcal{O}(1/\varepsilon\log 1/\varepsilon)$.
This is true due to the last rounding step, which also implies $|H|\in\mathcal{O}(1/\varepsilon^5)$, yielding $|\mathcal{C}| = |H|^{\mathcal{O}(1/\varepsilon^3)} = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon^3}\log\nicefrac{1}{\varepsilon})}$.
According to Observation \ref{rem:MCIP}, this yields a brick size of $t = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon^3}\log\nicefrac{1}{\varepsilon})}$, a brick number of $K$, $\mathcal{O}(1/\varepsilon^5)$ globally, and $\mathcal{O}(1/\varepsilon\log 1/\varepsilon)$ locally uniform constraints for the $\mathrm{MCIP}$.
We have $\Delta =\mathcal{O}(1/\varepsilon^5)$, because all occurring values in the constraint matrix are bounded by $\bar{T}$, and we have $\bar{T} =\mathcal{O}(1/\varepsilon^5)$ due to the scaling.
Furthermore, the values of the objective function, the right hand side, and the upper and lower bounds on the variables are bounded by $\mathcal{O}(n/\varepsilon^5)$, yielding a bound of $\mathcal{O}(\log (n/\varepsilon^5))$ for the encoding length $\varphi$ of the biggest number in the input.
Lastly, all variables can be bounded by $0$ from below and $\mathcal{O}(m/\varepsilon^3)$ from above, yielding $\Phi = \mathcal{O}(m/\varepsilon^3)$.
By Theorem \ref{thm:solving_n-fold} and some arithmetic, the $\mathrm{MCIP}$ can be solved in time:
\[(rs\Delta)^{\mathcal{O}(r^2s+rs^2)}t^2n^2\varphi\log(\Phi)\log(nt\Phi) = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon^{11}}\log^2\nicefrac{1}{\varepsilon})}K^2\log n\log m \log Km\]
When building the actual schedule, we iterate through the jobs and machines like indicated in the proof of Lemma \ref{lem:class_IP_to_sched}, yielding the following:
\begin{theorem}
The algorithm for the setup class model finds a schedule with makespan $(1+\mathcal{O}(\varepsilon))T$ or correctly determines that there is no schedule with makespan $T$ in time
$2^{\mathcal{O}(\nicefrac{1}{\varepsilon^{11}}\log^2\nicefrac{1}{\varepsilon})}K^2nm\log Km$.
\end{theorem}
\subsection{Splittable Model}\label{sec:EPTAS_slittable}
The approximation scheme for the splittable model presented in this section is probably the easiest one discussed in this work.
There is, however, one problem concerning this procedure:
Its running time is polynomial in the number of machines, which might be exponential in the input size.
In Section \ref{sec:better_running_time} we show how this problem can be overcome and further improve the running time.
\subparagraph*{Simplification of the Instance.}
In this context the set of big setup jobs $\jobs^{\mathrm{bst}}$ is given by the jobs with setup times at least $\varepsilon T$ and the small setup jobs $\jobs^{\mathrm{sst}}$ are all the others.
Let $L = \sum_{j\in \jobs^{\mathrm{sst}}} (s_j + p_j)$.
Because every job has to be scheduled and every setup has to be paid at least once, $L$ is a lower bound on the summed up space due to small jobs in any schedule.
Let $I_1$ be the instance that we get by removing all the small setup jobs from the given instance $I$.
\begin{lemma}\label{lem:split_rounding1}
A schedule with makespan $T$ for $I$ induces a $(T,L)$-schedule for $I_1$; and any $(T',L)$-schedule for $I_1$ can be transformed into a schedule for $I$ with makespan at most $T'+\varepsilon T$.
\end{lemma}
\begin{proof}
The first claim is obvious.
For the second claim, consider a sequence consisting of the jobs from $\jobs^{\mathrm{sst}}$ together with their setup times, where the setup time of a job is the direct predecessor of the job.
We insert the setup times and jobs from this sequence greedily into the schedule in a next-fit fashion:
Given a machine we keep inserting the items from the sequence on the machine at the end of the schedule until the taken up space on the machine reaches $T'$.
If the current item does not fit exactly, we cut it, such that the used space on the machine is exactly $T'$.
Then we continue with the next machine.
We can place the whole sequence like this without exceeding the makespan $T'$, because we have free space of at least $L$, which is the summed up length of the items in the sequence.
Next, we remove each setup time that was placed only partly on a machine together with those that were placed at the end of the schedule, and insert a fitting setup time for the jobs that were scheduled without one, which can happen only once for each machine.
This yields a feasible schedule, whose makespan is increased by at most $\varepsilon T$.
\end{proof}
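The greedy next-fit insertion used in this proof can be sketched as follows. This is a minimal illustration with invented names; the final repair step (removing partial setups and inserting fitting ones) is omitted, and the items are assumed to fit because their total length is at most the free space $L$.

```python
def next_fit_insert(machine_loads, T_prime, small_jobs):
    """machine_loads: used-up space per machine in the (T', L)-schedule.
    small_jobs: list of (setup, processing) pairs.
    Returns, per machine, the list of placed pieces as (kind, length)."""
    # the sequence: each job directly preceded by its setup time
    seq = [item for s, p in small_jobs for item in (("setup", s), ("job", p))]
    placement = [[] for _ in machine_loads]
    used = list(machine_loads)  # space already taken on each machine
    i = 0
    for kind, length in seq:
        while length > 0:
            if used[i] >= T_prime:  # machine full: continue on the next one
                i += 1
                continue
            cut = min(length, T_prime - used[i])  # cut items that do not fit
            placement[i].append((kind, cut))
            used[i] += cut
            length -= cut
    return placement
```

For instance, with loads $(2,0)$, bound $T'=5$ and jobs $(1,3)$ and $(1,2)$, the second job piece is cut across the two machines, exactly as in the proof.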
Next, we round up the processing and setup times of $I_1$ to the next multiple of $\varepsilon^2 T$, that is, for each job $j\in\mathcal{J}$ we set $\bar{p}_j = \ceil{p_j/(\varepsilon^2 T)} \varepsilon^2 T$ and $\bar{s}_j = \ceil{s_j/(\varepsilon^2 T)} \varepsilon^2 T$.
We call the resulting instance $I_2$, and denote its job set by $\mathcal{J}'$.
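The rounding step is elementary; a small sketch with concrete (illustrative) numbers, where the grid $\varepsilon^2 T$ happens to equal $1$:

```python
import math

EPS = 0.25
T = 16.0
GRID = EPS ** 2 * T  # the rounding grid eps^2 * T (= 1.0 here)

def round_up(x, grid=GRID):
    # round x up to the next multiple of the grid
    return math.ceil(x / grid) * grid

# a job (s_j, p_j) of I_1 is rounded componentwise to obtain I_2
s_bar, p_bar = round_up(2.3), round_up(5.7)  # -> (3.0, 6.0)
```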
\begin{lemma}\label{lem:split_rounding2}
If there is a $(T,L')$-schedule for $I_1$, there is also a $((1 + 2\varepsilon)T,L')$-schedule for $I_2$ in which the length of each job part is a multiple of $\varepsilon^2 T$, and any $(T',L')$-schedule for $I_2$ yields a $(T',L')$-schedule for $I_1$.
\end{lemma}
\begin{proof}
Consider a $(T,L')$-schedule for $I_1$.
There are at most $1/\varepsilon$ jobs scheduled on each machine, since each setup time has a length of at least $\varepsilon T$.
On each machine, we extend each occurring setup time and the processing time of each occurring job part by at most $\varepsilon^2T$ to round it to a multiple of $\varepsilon^2T$.
This step extends the makespan by at most $2\varepsilon T$.
Since now each job part is a multiple of $\varepsilon^2 T$, the total processing time of the job is a multiple of $\varepsilon^2 T$ too.
In the last step, we check for each job $j \in \jobs^{\mathrm{bst}}$ whether its total processing time now exceeds its rounded processing time $\bar{p}_j$, i.e., the smallest multiple of $\varepsilon^2T$ that is at least its original processing time.
If this is the case, we discard the spare processing time.
Lastly, there is at least as much free space in the resulting schedule as in the original one, because we properly increased the makespan bound.
The second claim is obvious.
\end{proof}
Based on the two Lemmata, we define two makespan bounds $\bar{T} = (1 + 2\varepsilon)T$ and $\breve{T} = \bar{T} + \varepsilon T = (1+3\varepsilon)T$.
We will use the $\mathrm{MCIP}$ to find a $(\bar{T},L)$-schedule for $I_2$ in which the length of each job part is a multiple of $\varepsilon^2 T$.
Using the two Lemmata, this will yield a schedule with makespan at most $\breve{T}$ for the original instance $I$.
\subparagraph*{Utilization of the MCIP.}
The basic objects in this context are the (big setup) jobs, i.e., $\mathcal{B} = \jobs^{\mathrm{bst}} = \mathcal{J}'$, and they have only one value ($D = 1$), namely, their processing time.
Moreover, the modules are defined as the set of pairs of job piece sizes and setup times, i.e., $\mathcal{M} = \sett[\big]{(q,s)}{s,q\in\sett{x\varepsilon^2 T}{x\in\mathbb{Z}, 0<x\leq 1/\varepsilon^2}, s\geq \varepsilon T}$, and we write $s_M = s$ and $q_M = q$ for each module $M = (q,s)\in \mathcal{M}$.
Corresponding to the value of the basic objects the value of a module $M$ is $q_M$, and its size $\Lambda(M)$ is given by $q_M+s_M$.
A job is eligible for a module, if the setup times fit, i.e., $\mathcal{B}_M = \sett{j\in\mathcal{J}'}{s_j=s_M}$.
In order to ensure integral values, we establish $\varepsilon^2 T=1$ via a simple scaling step.
The set of configurations $\mathcal{C}$ is comprised of all configurations of module sizes $H$ that are bounded in size by $\bar{T}$, i.e., $\mathcal{C}=\mathcal{C}_\mathcal{M}(\bar{T})$.
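For intuition, the modules and configurations of this section can be enumerated explicitly for small parameters. The following toy sketch (our own, not part of the algorithm) uses $\varepsilon = 1/2$ after the scaling step $\varepsilon^2 T = 1$, so $T = 4$, $\varepsilon T = 2$ and $\bar{T} = 8$:

```python
from itertools import product

EPS = 0.5
T = int(1 / EPS ** 2)            # after scaling eps^2*T = 1, so T = 1/eps^2
T_BAR = round((1 + 2 * EPS) * T)

# modules: pairs (q, s) with integral piece size q and setup s >= eps*T
modules = [(q, s) for q, s in product(range(1, T + 1), repeat=2)
           if s >= EPS * T]
sizes = sorted({q + s for q, s in modules})  # the module sizes H

def configs(sizes, budget, start=0):
    # all multisets of module sizes with total size <= budget,
    # enumerated as non-decreasing tuples via a simple DFS
    yield ()
    for i in range(start, len(sizes)):
        if sizes[i] <= budget:
            for rest in configs(sizes, budget - sizes[i], i):
                yield (sizes[i],) + rest
```

Since each module has size at least $\varepsilon T$, every configuration here contains at most $1/\varepsilon + 2 = 4$ modules, which keeps the enumeration small.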
We state the constraints of the $\mathrm{MCIP}$ for the above definitions with adapted notation and without duplication of the configuration variables:
\begin{align}
\sum_{C\in\mathcal{C}}x_{C} & = m & \label{eq:MCIP_machs_split} \\
\sum_{C\in\mathcal{C}} C_h x_{C} & = \sum_{j\in\mathcal{J}'}\sum_{M\in \mathcal{M}(h)}y^{(j)}_M & \forall h\in H \label{eq:MCIP_mods_split}\\
\sum_{M\in\mathcal{M}}q_M y^{(j)}_M & = p_j & \forall j\in\mathcal{J}' \label{eq:MCIP_jobs_split}
\end{align}
Note that we additionally minimize the summed up size of the configurations, via the objective function $\sum_C \Lambda(C)x_C$.
\begin{lemma}\label{lem:split_IP_to_sched}
With the above definitions, there is a $(\bar{T}, L)$-schedule for $I_2$ in which the length of each job piece is a multiple of $ \varepsilon^2 T$, iff the $\mathrm{MCIP}$ has a solution with objective value at most $m\bar{T} - L$.
\end{lemma}
\begin{proof}
Given such a schedule for $I_2$, the schedule on each machine corresponds to exactly one configuration $C$, which can be derived by counting, for each module size $h\in H$, the blocks (job piece plus setup time) with summed up length $h$ and setting $C_h$ accordingly.
The size of the configuration $C$ is equal to the used space on the respective machine.
Therefore, we can fix some arbitrary job $j$ and set the variables $x^{(j)}_C$ to the number of machines whose schedule corresponds to $C$ (and $x^{(j')}_C=0$ for $j'\neq j$ and $C\in\mathcal{C}$).
Since there is at least a free space of $L$ for the schedule, the objective value is bounded by $m\bar{T} - L$.
Furthermore, for each job $j$ and job part length $q$, we count the number of times a piece of $j$ with length $q$ is scheduled and set $y^{(j)}_{(q,s_j)}$ accordingly.
It is easy to see that the constraints are satisfied.
Now, let $(x,y)$ be a solution to the $\mathrm{MCIP}$ with objective value at most $m\bar{T} - L$.
We use the solution to construct a schedule:
For job $j$ and configuration $C$ we reserve $x^{(j)}_{C}$ machines.
On each of these machines we create $C_h$ slots of length $h$, for each module size $h\in H$.
Note that because of (\ref{eq:MCIP_machs_split}) there is the exact right number of machines for this.
Next, consider each job $j$ and possible job part length $q$ and create $y^{(j)}_{(q,s_j)}$ split pieces of length $q$ and place them together with a setup of $s_j$ into a slot of length $s_j+q$ on any machine.
Because of (\ref{eq:MCIP_jobs_split}) the entire job is split up by this, and because of (\ref{eq:MCIP_mods_split}) there are enough slots for all the job pieces.
Note that the used space in the created schedule is equal to the objective value of $(x,y)$ and therefore there is at least $L$ free space.
\end{proof}
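A minimal sketch of the construction in the second half of this proof, under the assumption that the solution $(x,y)$ satisfies the constraints (the data layout and names are ours, purely for illustration):

```python
# x maps a configuration (tuple of module sizes) to a machine count;
# y maps (job, piece length q, setup s) to the number of such pieces.

def build_schedule(x, y):
    machines = []
    for C, count in x.items():
        for _ in range(count):
            # one slot of length h per entry of the configuration
            machines.append([{"len": h, "content": None} for h in C])
    # place each piece, together with its setup, into a fitting empty slot;
    # constraints (2) and (3) guarantee such a slot exists
    for (job, q, s), count in y.items():
        for _ in range(count):
            slot = next(sl for m in machines for sl in m
                        if sl["content"] is None and sl["len"] == q + s)
            slot["content"] = (job, q, s)
    return machines
```

Realized this way, the construction takes time linear in the number of slots, matching the $\mathcal{O}((n+m)/\varepsilon^2)$ bound discussed below.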
\subparagraph*{Result.}
Summing up, we can find a schedule of length at most $(1 + 3\varepsilon) T$ or correctly determine that there is no schedule of length $T$ with the following procedure:
\begin{algorithm}
\
\begin{enumerate}
\item Generate the modified instance $I_2$:
\begin{itemize}
\item Remove the small setup jobs.
\item Round the setup and processing times of the remaining jobs.
\end{itemize}
\item Build and solve the $\mathrm{MCIP}$ for this case.
\item If the IP is infeasible, or the objective value is greater than $m\bar{T} - L$, report that $I$ has no solution with makespan $T$.
\item Otherwise build the schedule with makespan $\bar{T}$ and free space at least $L$ for $I_2$.
\item Transform the schedule into a schedule for $I$ with makespan at most $\breve{T}$:
\begin{itemize}
\item Use the original processing and setup times.
\item Greedily insert the small setup jobs.
\end{itemize}
\end{enumerate}
\end{algorithm}
To assess the running time of the procedure, we mainly need to bound the parameters of the $\mathrm{MCIP}$, namely $|\mathcal{B}|$, $|H|$, $|\mathcal{M}|$, $|\mathcal{C}|$ and $D$.
By definition, we have $|\mathcal{B}| = |\mathcal{J}'| \leq n$ and $D=1$.
Since all setup times and job piece lengths are multiples of $\varepsilon^2 T$ and bounded by $T$, we have $|\mathcal{M}| = \mathcal{O}(1/\varepsilon^4)$ and $|H| = \mathcal{O}(1/\varepsilon^2)$.
This yields $|\mathcal{C}| \leq (|H|+1)^{1/\varepsilon + 2} = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log \nicefrac{1}{\varepsilon})}$, because the size of each module is at least $\varepsilon T$ while the size of each configuration is bounded by $(1+2\varepsilon)T$, so a configuration contains at most $1/\varepsilon + 2$ modules.
According to Observation \ref{rem:MCIP}, we now have brick-size $t = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log \nicefrac{1}{\varepsilon})}$, brick number $|\mathcal{B}| = n$, $r = |\Gamma|+1 = \mathcal{O}(1/\varepsilon^2)$ globally uniform and $s = D = 1$ locally uniform constraints.
Because of the scaling step, all occurring numbers in the constraint matrix of the $\mathrm{MCIP}$ are bounded by $1/\varepsilon^2$ and therefore $\Delta \leq 1/\varepsilon^2$.
Furthermore, each occurring number can be bounded by $\mathcal{O}(m/\varepsilon^2)$ and this is an upper bound for each variable as well, yielding $\varphi = \mathcal{O}(\log (m/\varepsilon^2))$ and $\Phi = \mathcal{O}(m/\varepsilon^2)$.
Hence the $\mathrm{MCIP}$ can be solved in time: \[(rs\Delta)^{\mathcal{O}(r^2s+rs^2)}t^2n^2\varphi\log(\Phi)\log(nt\Phi) = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon^4}\log\nicefrac{1}{\varepsilon})}n^2\log^2 m\log nm\]
While the first step of the procedure is obviously dominated by the above, this is not the case for the remaining ones.
In particular, building the schedule from the IP solution costs $\mathcal{O}((n+m)/\varepsilon^2)$, if the procedure described in the proof of Lemma \ref{lem:split_IP_to_sched} is realized in a straightforward fashion.
The last step of the algorithm is dominated by this, yielding the running time stated in the theorem below.
Note that the number of machines $m$ could be exponential in the number of jobs, and therefore the described procedure is a PTAS only for the special case of $m = \mathrm{poly}(n)$.
However, this limitation can be overcome with a little extra effort, as we discuss in Section \ref{sec:better_running_time}.
\begin{theorem}
The algorithm for the splittable model finds a schedule with makespan at most $(1 + 3\varepsilon)T$ or correctly determines that there is no schedule with makespan $T$ in time $2^{\mathcal{O}(\nicefrac{1}{\varepsilon^4}\log\nicefrac{1}{\varepsilon})}n^2m\log m\log nm$.
\end{theorem}
\subsection{Preemptive Model}\label{sec:EPTAS_preemptive}
In the preemptive model we have to actually consider the time-line of the schedule on each machine instead of just the assignment of the jobs or job pieces, and this causes some difficulties.
For instance, we will have to argue that it suffices to look for a schedule with few possible starting points, and we will have to introduce additional constraints in the IP in order to ensure that pieces of the same job do not overlap.
Our first step in dealing with these extra difficulties is to introduce some concepts and notation:
For a given schedule with a makespan bound $T$, we call a job piece together with its setup a \emph{block}, and we call the schedule $X$-layered, for some value $X$, if each block starts at a multiple of $X$.
Corresponding to this, we call the time in the schedule between two directly succeeding multiples of $X$ a \emph{layer} and the corresponding time on a single machine a \emph{slot}.
We number the layers bottom to top and identify them with their number, that is, the set of layers $\Xi$ is given by $\sett{\ell\in\mathbb{Z}_{>0}}{(\ell-1) X\leq T}$.
Note that in an $X$-layered schedule, there is at most one block in each slot and for each layer there can be at most one block of each job present.
Furthermore, for $X$-layered schedules, we slightly alter the definition of free space:
We solely count the space from slots that are completely free.
If in such a schedule, for each job there is at most one slot occupied by this job but not fully filled, we additionally call the schedule \emph{layer-compliant}.
\subsubsection*{Simplification of the Instance}
In the preemptive model we distinguish \emph{big}, \emph{medium} and \emph{small} setup jobs, using two parameters $\delta$ and $\mu$:
The big setup jobs $\jobs^{\mathrm{bst}}$ are those with setup time at least $\delta T$, the small $\jobs^{\mathrm{sst}}$ have a setup time smaller than $\mu T$, and the medium $\jobs^{\mathrm{mst}}$ are the ones in between.
We set $\mu = \varepsilon^2\delta$ and we choose $\delta\in\set{\varepsilon^1,\dots,\varepsilon^{\nicefrac{2}{\varepsilon^2}}}$ such that the summed up processing time together with the summed up setup time of the medium setup jobs is upper bounded by $m \varepsilon T$, i.e., $\sum_{j\in\jobs^{\mathrm{mst}}}(s_j + p_j)\leq m \varepsilon T$.
If there is a schedule with makespan $T$, such a choice is possible because of the pigeonhole principle, and because the setup time of each job has to occur at least once in any schedule.
Similar arguments are widely used, e.g. in the context of geometrical packing algorithms.
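The pigeonhole choice of $\delta$ can be sketched as follows. This is our own illustration: we step the exponent by two so that the candidate classes $[\varepsilon^{k+2}T, \varepsilon^k T)$ are disjoint, whereas the exact candidate set used above may differ.

```python
def choose_delta(jobs, m, T, eps):
    """jobs: list of (setup, processing). Returns delta = eps^k such that
    the medium setup jobs (eps^2*delta*T <= s_j < delta*T) have summed up
    setup plus processing time at most eps*m*T, or None."""
    k_max = int(2 / eps ** 2)
    for k in range(1, k_max + 1, 2):
        delta = eps ** k
        mu = eps ** 2 * delta
        load = sum(s + p for s, p in jobs if mu * T <= s < delta * T)
        if load <= eps * m * T:
            return delta
    return None  # no light class: no schedule with makespan T can exist
```

Since the classes are disjoint and the total load of any feasible instance is at most $mT$, at least one of the candidates succeeds.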
Furthermore we distinguish the jobs by processing times, calling those with processing time at least $\varepsilon T$ \emph{big} and the others \emph{small}.
For a given set of jobs $J$, we call the subsets of big or small jobs $J_\mathrm{big}$ or $J_\mathrm{small}$ respectively.
We perform three simplification steps, aiming for an instance in which the small and medium setup jobs are big; small setup jobs have setup time $0$; and for which an $\varepsilon\delta T$-layered, layer-compliant schedule exists.
Let $I_1$ be the instance we get by removing the small jobs with medium setup times $\jobs^{\mathrm{mst}}_\mathrm{small}$ from the given instance $I$.
\begin{lemma}\label{lem_prmt_simplification_1}
If there is a schedule with makespan at most $T$ for $I$, there is also such a schedule for $I_1$, and if there is a schedule with makespan at most $T'$ for $I_1$ there is a schedule with makespan at most $T' + (\varepsilon + \delta)T$ for $I$.
\end{lemma}
\begin{proof}
The first claim is obvious.
For the second, we create a sequence containing the jobs from $\jobs^{\mathrm{mst}}_\mathrm{small}$ each directly preceded by its setup time.
Recall that the overall length of the objects in this sequence is at most $m\varepsilon T$, and the length of each job is bounded by $\varepsilon T$.
We greedily insert the objects from the sequence, considering each machine in turn.
On the current machine we start at time $T' + \delta T$ and keep inserting until $T' + \delta T + \varepsilon T$ is reached.
If the current object is a setup time, we discard it and continue with the next machine and object.
If, on the other hand, it is a job, we split it, such that the remaining space on the current machine can be perfectly filled.
We can place all objects like this; however, the first job part placed on a machine might be missing a setup.
We can insert the missing setups because they have length at most $\delta T$ and between time $T'$ and $T'+ \delta T$ there is free space.
\end{proof}
Next, we consider the jobs with small setup times:
Let $I_2$ be the instance we get by removing the small jobs with small setup times $\jobs^{\mathrm{sst}}_\mathrm{small}$ and setting the setup time of the big jobs with small setup times to zero, i.e., $\bar{s}_j=0$ for each $j\in \jobs^{\mathrm{sst}}_\mathrm{big}$.
Note that in the resulting instance each small job has a big setup time.
Furthermore, let $L := \sum_{j\in \jobs^{\mathrm{sst}}_\mathrm{small}} (p_j + s_j)$.
Then $L$ is an obvious lower bound for the space taken up by the jobs from $\jobs^{\mathrm{sst}}_\mathrm{small}$ in any schedule.
\begin{lemma}\label{lem_prmt_simplification_2}
If there is a schedule with makespan at most $T$ for $I_1$, there is also a $(T,L)$-schedule for $I_2$; and if there is a $\gamma T$-layered $(T',L)$-schedule for $I_2$, with $T'$ a multiple of $\gamma T$, there is also a schedule with makespan at most $(1 + \gamma^{-1}\mu)T' + (\mu + \varepsilon) T$ for $I_1$.
\end{lemma}
\begin{proof}
The first claim is obvious, and for the second consider a $\gamma T$-layered $(T',L)$-schedule for $I_2$.
We create a sequence that contains the jobs of $\jobs^{\mathrm{sst}}_\mathrm{small}$ and their setups, such that each job is directly preceded by its setup.
Remember that the remaining space in partly filled slots is not counted as free space.
Hence, since the overall length of the objects in the sequence is $L$, there is enough space in the free slots of the schedule to place them.
We do so in a greedy fashion guaranteeing that each job is placed on exactly one machine:
We insert the objects from the sequence into the free slots, considering each machine in turn and starting on the current machine from the beginning of the schedule and moving on towards its end.
If an object cannot be fully placed into the current slot there are two cases:
It could be a job or a setup.
In the former case, we cut it and continue placing it in the next slot, or, if the current slot was the last one, we place the rest at the end of the schedule.
In the latter case, we discard the setup and continue with the next slot and object.
The resulting schedule is increased by at most $\varepsilon T$, which is caused by the last job placed on a machine.
To get a proper schedule for $I_1$ we have to insert some setup times:
For the large jobs with small setup times and for the jobs that were cut in the greedy procedure.
We do so by inserting a time window of length $\mu T$ at each multiple of $\gamma T$ and at the end of the original schedule on each machine.
By this, the schedule is increased by at most $\gamma^{-1}\mu T' + \mu T$.
Since all the job parts in need of a setup are small and started at multiples of $\gamma T$ or at the end, we can insert the missing setups.
Note that blocks that span over multiple layers are cut by the inserted time windows.
This, however, can easily be repaired by moving the cut pieces properly down.
\end{proof}
We continue by rounding the medium and big setup and all the processing times.
In particular, we round the processing times and the big setup times up to the next multiple of $\varepsilon\delta T$ and the medium setup times to the next multiple of $\varepsilon\mu T$, i.e., $\bar{p}_j= \ceil{p_j/(\varepsilon\delta T)} \varepsilon\delta T$ for each job $j$, $\bar{s}_j= \ceil{s_j/(\varepsilon\delta T)} \varepsilon\delta T$ for each big setup job $j\in\jobs^{\mathrm{bst}}$, and $\bar{s}_j= \ceil{s_j/(\varepsilon\mu T)} \varepsilon\mu T$ for each medium setup job $j\in\jobs^{\mathrm{mst}}_\mathrm{big}$.
\begin{lemma}\label{lem_prmt_simplification_3}
If there is a $(T,L)$-schedule for $I_2$, there is also an $\varepsilon\delta T$-layered, layer-compliant $((1+3\varepsilon)T,L)$-schedule for $I_3$; and if there is a $\gamma T$-layered $(T',L)$-schedule for $I_3$, there is also such a schedule for $I_2$.
\end{lemma}
While the second claim is easy to see, the proof of the first is rather elaborate and unfortunately a bit tedious.
Hence, since we believe Lemma \ref{lem_prmt_simplification_3} to be fairly plausible by itself, we postpone its proof to the end of the section and proceed by discussing its use.
For the big and small setup jobs both processing and setup times are multiples of $\varepsilon\delta T$.
Therefore, the length of each of their blocks in an $\varepsilon\delta T$-layered, layer-compliant schedule is a multiple of $\varepsilon\delta T$.
For a medium setup job, on the other hand, we know that the overall length of its blocks has the form $x\varepsilon\delta T + y \varepsilon\mu T$, with non-negative integers $x$ and $y$.
In particular it is a multiple of $\varepsilon\mu T$, because $\varepsilon\delta T = (1/\varepsilon^2) \varepsilon\mu T$.
In an $\varepsilon\delta T$-layered, layer-compliant schedule, for each medium setup job the length of all but at most one block is a multiple of $\varepsilon\delta T$ and therefore a multiple of $\varepsilon\mu T$.
If both the overall length and the lengths of all but one block are multiples of $\varepsilon\mu T$, this is also true for the one remaining block.
Hence, we will use the $\mathrm{MCIP}$ not to find an $\varepsilon\delta T$-layered, layer-compliant schedule in particular, but an $\varepsilon\delta T$-layered one with block sizes as described above and maximum free space.
Based on the simplification steps, we define two makespan bounds $\bar{T}$ and $\breve{T}$:
Let $\bar{T}$ be the makespan bound we get by the application of the Lemmata \ref{lem_prmt_simplification_1}-\ref{lem_prmt_simplification_3}, i.e., $\bar{T} = (1+3\varepsilon)T$.
We will use the $\mathrm{MCIP}$ to find an $\varepsilon\delta T$-layered $(\bar{T},L)$-schedule for $I_3$, and apply the Lemmata \ref{lem_prmt_simplification_1}-\ref{lem_prmt_simplification_3} backwards to get a schedule for $I$ with makespan at most $\breve{T} = (1 + (\varepsilon\delta)^{-1}\mu)\bar{T} + (\mu + \varepsilon) T + (\varepsilon + \delta)T \leq (1 + 9\varepsilon)T$, using $\varepsilon\leq 1/2$.
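For concreteness, the bound $\breve{T}\leq(1+9\varepsilon)T$ follows from $\mu = \varepsilon^2\delta$ (so $(\varepsilon\delta)^{-1}\mu = \varepsilon$), $\delta\leq\varepsilon$ and $\varepsilon\leq 1/2$:
\begin{align*}
\breve{T} &= \bigl(1 + (\varepsilon\delta)^{-1}\mu\bigr)\bar{T} + (\mu + \varepsilon)T + (\varepsilon + \delta)T\\
          &= (1+\varepsilon)(1+3\varepsilon)T + (\varepsilon^2\delta + \varepsilon)T + (\varepsilon + \delta)T\\
          &\leq (1 + 4\varepsilon + 3\varepsilon^2)T + (\varepsilon + \varepsilon^3)T + 2\varepsilon T\\
          &= (1 + 7\varepsilon + 3\varepsilon^2 + \varepsilon^3)T
          \;\leq\; \bigl(1 + 7\varepsilon + \tfrac{3}{2}\varepsilon + \tfrac{1}{4}\varepsilon\bigr)T
          \;\leq\; (1+9\varepsilon)T.
\end{align*}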
\subsubsection*{Utilization of the MCIP}
Similar to the splittable case, the basic objects are the (big) jobs, i.e., $\mathcal{B} = \mathcal{J}_{\mathrm{big}}$, and their single value is their processing time ($D = 1$).
The modules, on the other hand, are more complicated, because they additionally need to encode which layers are exactly used and, in case of the medium jobs, to which degree the last layer is filled.
For the latter we introduce buffers, representing the unused space in the last layer, and define modules as tuples $(\ell,q,s,b)$ of starting layer, job piece size, setup time and buffer size.
For a module $M = (\ell,q,s,b)$, we write $\ell_M = \ell$, $q_M=q$, $s_M=s$ and $b_M = b$, and we define the size $\Lambda(M)$ of $M$ as $ s + q + b$.
The overall set of modules $\mathcal{M}$ is the union of the modules for big, medium and small setup jobs $\mods^{\mathrm{bst}}$, $\mods^{\mathrm{mst}}$ and $\mods^{\mathrm{sst}}$ that are defined in the following.
For this let
$Q^{\mathrm{bst}} = \sett{q}{q = x\varepsilon\delta T, x\in\mathbb{Z}_{> 0},q \leq \bar{T}}$ and
$Q^{\mathrm{mst}} = \sett{q}{q = x\varepsilon\mu T, x\in\mathbb{Z}_{> 0},q \leq \bar{T}}$ be the sets of possible job piece sizes of big and medium setup jobs;
$S^{\mathrm{bst}} = \sett{s}{s = x\varepsilon\delta T, x\in\mathbb{Z}_{\geq 1/\varepsilon},s \leq \bar{T}}$ and
$S^{\mathrm{mst}} = \sett{s}{s = x\varepsilon\mu T, x\in\mathbb{Z}_{\geq 1/\varepsilon},s \leq \delta T }$ be the sets of possible big and medium setup times;
$B = \sett{b}{b = x\varepsilon\mu T, x\in\mathbb{Z}_{\geq 0}, b < \varepsilon\delta T}$ the set of possible buffer sizes;
and $\Xi = \set{1,\dots,1/(\varepsilon\delta) + 3/\delta}$ the set of layers.
We set:
\begin{align*}
\mods^{\mathrm{bst}} &= \sett{(\ell,q,s,0)}{\ell\in \Xi,q\in Q^{\mathrm{bst}},s\in S^{\mathrm{bst}}, (\ell - 1)\varepsilon\delta T + s + q\leq \bar{T} }\\
\mods^{\mathrm{mst}} &= \sett{(\ell,q,s,b)\in \Xi\times Q^{\mathrm{mst}}\!\!\times S^{\mathrm{mst}}\!\!\times B}{x = s + q + b\in\varepsilon\delta T\mathbb{Z}_{>0}, (\ell - 1)\varepsilon\delta T + x\leq \bar{T} }\\
\mods^{\mathrm{sst}} &= \sett{(\ell,\varepsilon\delta T,0,0)}{\ell\in \Xi}
\end{align*}
Concerning the small setup modules, note that the small setup jobs have a setup time of $0$ and therefore may be covered slot by slot.
We establish $\varepsilon\mu T=1$ via scaling, to ensure integral values.
A big, medium or small job is eligible for a module, if it is also big, medium or small respectively and the setup times fit.
We have to avoid that two modules $M_1,M_2$ whose corresponding time intervals overlap are used to cover the same job or occur in the same configuration.
Such an overlap is given, if there is some layer $\ell$ used by both of them, that is, $(\ell_M - 1)\varepsilon\delta T \leq (\ell - 1)\varepsilon\delta T < (\ell_M - 1)\varepsilon\delta T + \Lambda(M)$ for both $M \in \set{M_1,M_2}$.
Hence, for each layer $\ell\in\Xi$, we set $\mathcal{M}_\ell\subseteq \mathcal{M}$ to be the set of modules that use layer $\ell$.
Furthermore, we partition the modules into groups $\Gamma$ by size and starting layer, i.e., $\Gamma = \sett{G\subseteq \mathcal{M}}{M,M'\in G\Rightarrow \Lambda(M)=\Lambda(M') \wedge \ell_M = \ell_{M'}}$.
The size of a group $G\in\Gamma$ is the size of a module from $G$, i.e. $\Lambda(G) = \Lambda(M)$ for $M\in G$.
Unlike before we consider \emph{configurations of module groups} rather than module sizes.
More precisely, the set of configurations $\mathcal{C}$ is given by the configurations of groups, such that for each layer at most one group using this layer is chosen, i.e., $\mathcal{C} = \sett{C\in\mathbb{Z}_{\geq 0}^\Gamma}{\forall\ell\in\Xi:\sum_{G\subseteq\mathcal{M}_\ell}C_G\leq 1}$.
With this definition we prevent overlap conflicts on the machines.
Note that unlike in the cases considered so far, the size of a configuration does not correspond to a makespan in the schedule, but to used space, and the makespan bound is realized in the definition of the modules instead of in the definition of the configurations.
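The per-layer restriction in the definition of $\mathcal{C}$ can be checked mechanically; a minimal sketch (names and units are ours), with module sizes measured in units of $\varepsilon\delta T$, so a module of size $h$ starting in layer $\ell$ occupies layers $\ell,\dots,\ell+\ceil{h}-1$:

```python
import math

def layers_used(start, size):
    # size may be fractional (medium setup modules without their buffer)
    return range(start, start + math.ceil(size))

def is_valid_configuration(chosen):
    """chosen: list of (start_layer, size) pairs, one per selected group.
    Valid iff no layer is used by two of the chosen groups."""
    seen = set()
    for start, size in chosen:
        for layer in layers_used(start, size):
            if layer in seen:
                return False
            seen.add(layer)
    return True
```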
To also avoid conflicts for the jobs, we extend the basic $\mathrm{MCIP}$ with additional locally uniform constraints.
In particular, the constraints of the extended $\mathrm{MCIP}$ for the above definitions with adapted notation and without duplication of the configuration variables are given by:
\begin{align}
\sum_{C\in\mathcal{C}}x_{C} & = m & \label{eq:MCIP_machs_preempt} \\
\sum_{C\in\mathcal{C}} C_G x_{C} & = \sum_{j\in\mathcal{J}}\sum_{M\in G}y^{(j)}_M & \forall G \in\Gamma \label{eq:MCIP_mods_preempt}\\
\sum_{M\in\mathcal{M}}q_M y^{(j)}_M & = p_j & \forall j\in\mathcal{J} \label{eq:MCIP_jobs_preempt}\\
\sum_{M\in\mathcal{M}_\ell} y^{(j)}_M & \leq 1 & \forall j\in\mathcal{J}, \ell\in\Xi \label{eq:MCIP_colis_preempt}
\end{align}
Like in the first two cases we minimize the summed up size of the configurations, via the objective function $\sum_C \Lambda(C)x_C$.
Note that in this case the size of a configuration does not have to equal its height.
It is easy to see that the last constraint is indeed locally uniform.
However, since we have an inequality instead of an equality, we have to introduce $|\Xi|$ slack variables in each brick, yielding:
\begin{observation}\label{rem:MCIP_prmt}
The $\mathrm{MCIP}$ extended like above is an $n$-fold IP with brick-size $t = |\mathcal{M}| + |\mathcal{C}| + |\Xi|$, brick number $n = |\mathcal{J}|$, $r = |\Gamma|+1$ globally uniform and $s = D + |\Xi|$ locally uniform constraints.
\end{observation}
\begin{lemma}\label{lem:prmt_IP_to_sched}
With the above definitions, there is an $\varepsilon\delta T$-layered ($\bar{T}, L$)-schedule for $I_3$ in which the length of a block is a multiple of $\varepsilon\delta T$, if it belongs to a small or big setup job, or a multiple of $\varepsilon\mu T$ otherwise, iff the extended $\mathrm{MCIP}$ has a solution with objective value at most $m\bar{T} - L$.
\end{lemma}
\begin{proof}
We first consider such a schedule for $I_3$.
For each machine, we can derive a configuration that is given by the starting layers of the blocks together with the summed up length of the slots the respective block is scheduled in.
The size of the configuration $C$ is equal to the used space on the respective machine.
Hence, we can fix some arbitrary job $j$ and set $x^{(j)}_C$ to the number of machines whose schedule corresponds to $C$ (and $x^{(j')}_C=0$ for $j'\neq j$).
Keeping in mind that in an $\varepsilon\delta T$-layered schedule the free space is given by the free slots, the above definition yields an objective value bounded by $m\bar{T} - L$, because there was free space of at least $L$.
Next, we consider the module variables for each job $j$ in turn:
If $j$ is a small setup job, we set $y^{(j)}_{(\ell,\varepsilon\delta T,0,0)}$ to $1$, if $j$ occurs in layer $\ell$, and to $0$ otherwise.
Now, let $j$ be a big setup job.
For each of its blocks, we set $y^{(j)}_{(\ell,z-s_j,s_j,0)} = 1$, where $\ell$ is the starting layer and $z$ the length of the block.
The remaining variables are set to $0$.
Lastly, let $j$ be a medium setup job.
For each of its blocks, we set $y^{(j)}_{(\ell,z-s_j,s_j,b)} = 1$, where $\ell$ is the starting layer of the block, $z$ its length and $b = \ceil{z/(\varepsilon\delta T)}\varepsilon\delta T - z$.
Again, the remaining variables are set to $0$.
It is easy to verify that all constraints are satisfied by this solution.
If, on the other hand, we have a solution $(x,y)$ to the $\mathrm{MCIP}$ with objective value at most $m\bar{T} - L$, we reserve $\sum_{j}x^{(j)}_{C}$ machines for each configuration $C$.
There are enough machines to do this, because of (\ref{eq:MCIP_machs_preempt}).
On each of these machines we reserve space:
For each $G\in\Gamma$, we create an allocated space of length $\Lambda(G)$ starting from the starting layer of $G$, if $C_G = 1$.
Let $j$ be a job and $\ell$ be a layer.
If $j$ has a small setup time, we create $y^{(j)}_{(\ell,\varepsilon\delta T,0,0)} $ pieces of length $\varepsilon\delta T$ and place these pieces into allocated spaces of length $\varepsilon\delta T$ in layer $\ell$.
If, on the other hand, $j$ is a big or medium setup job, we consider each possible job part length $q\in Q^{\mathrm{bst}}$ or $q\in Q^{\mathrm{mst}}$, create $y^{(j)}_{(\ell,q,s_j,0)}$ or $y^{(j)}_{(\ell,q,s_j,b)}$, with $b = \ceil{(s_j+q)/(\varepsilon\delta T)}\varepsilon\delta T - (s_j+q)$, pieces of length $q$, and place them together with their setup time into allocated spaces of the corresponding module size starting in layer $\ell$.
Because of (\ref{eq:MCIP_jobs_preempt}) the entire job is split up by this, and because of (\ref{eq:MCIP_mods_preempt}) there are enough allocated spaces for all the job pieces.
The makespan bound is ensured by the definition of the modules, and overlaps are avoided, due to the definition of the configurations and (\ref{eq:MCIP_colis_preempt}).
Furthermore, the used slots have an overall length equal to the objective value of $(x,y)$ and therefore there is at least $L$ free space.
\end{proof}
\subsubsection*{Result}
Summing up the above considerations, we get:
\begin{algorithm}\
\begin{enumerate}
\item If there is no suitable class of medium setup jobs, report that there is no schedule with makespan $T$ and terminate the procedure.
\item Generate the modified instance $I_3$:
\begin{itemize}
\item Remove the small jobs with medium setup times.
\item Remove the small jobs with small setup times, and decrease the setup time of big jobs with small setup time to $0$.
\item Round the big processing times, as well as the medium, and the big setup times.
\end{itemize}
\item Build and solve the $\mathrm{MCIP}$ for $I_3$.
\item If the $\mathrm{MCIP}$ is infeasible, or the objective value is greater than $m \bar{T} - L$, report that $I$ has no solution with makespan $T$.
\item Otherwise build the $\varepsilon\delta T$-layered schedule with makespan $\bar{T}$ and free space at least $L$ for $I_3$.
\item Transform the schedule into a schedule for $I$ with makespan at most $\breve{T}$:
\begin{itemize}
\item Use the original (pre-rounding) processing and setup times.
\item Insert the small jobs with small setup times into the free slots and insert the setup times of the big jobs with small setup times.
\item Insert the small jobs with medium setup times.
\end{itemize}
\end{enumerate}
\end{algorithm}
We analyze the running time of the procedure, and start by bounding the parameters of the extended $\mathrm{MCIP}$.
We have $|\mathcal{B}| = n$ and $D=1$ by definition, and the number of layers $|\Xi|$ is obviously $\mathcal{O}(1/(\varepsilon\delta)) = \mathcal{O}(1/\varepsilon^{2/\varepsilon + 1}) = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}$.
Furthermore, it is easy to see that $|Q^{\mathrm{bst}}| = \mathcal{O}(1/(\varepsilon\delta))$, $|Q^{\mathrm{mst}}| = \mathcal{O}(1/(\varepsilon^3\delta))$, $|S^{\mathrm{bst}}| = \mathcal{O}(1/(\varepsilon\delta))$, $|S^{\mathrm{mst}}| = \mathcal{O}(1/(\varepsilon^3))$, and $|B|=\mathcal{O}(1/\varepsilon^2)$.
This gives us $\mods^{\mathrm{bst}} \leq |\Xi||Q^{\mathrm{bst}}||S^{\mathrm{bst}}|$, $\mods^{\mathrm{mst}} \leq |\Xi||Q^{\mathrm{mst}}||S^{\mathrm{mst}}||B| $ and $\mods^{\mathrm{sst}} = |\Xi|$, and therefore $|\mathcal{M}| = \mods^{\mathrm{bst}} + \mods^{\mathrm{mst}} + \mods^{\mathrm{sst}} = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}$.
Since there are $\mathcal{O}(1/(\delta\varepsilon)) $ distinct module sizes, the number of groups $|\Gamma|$ can be bounded by $\mathcal{O}(|\Xi|/(\varepsilon\delta)) = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}$.
Hence, for the number of configurations we get $|\mathcal{C}| = \mathcal{O}((1/(\varepsilon\delta))^{|\Gamma|}) = 2^{2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}}$.
By Observation \ref{rem:MCIP_prmt}, the modified $\mathrm{MCIP}$ has $r = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}$ many globally and $s = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}$ many locally uniform constraints; its brick number is $n$, and its brick size is $ t = 2^{2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}}$.
All occurring values in the matrix are bounded by $\bar{T}$, yielding $\Delta \leq \bar{T} = 1/(\varepsilon\mu) + 1/\mu = 2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}$, due to the scaling step.
Furthermore, the numbers in the input can be bounded by $m2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}$ and all variables can be upper bounded by $\mathcal{O}(m)$.
Hence, we have $\varphi = \mathcal{O}(\log m + 1/\varepsilon \log 1/\varepsilon)$ and $\Phi = \mathcal{O}(m)$, and due to Theorem \ref{thm:solving_n-fold} we can solve the $\mathrm{MCIP}$ in time:
\[(rs\Delta)^{\mathcal{O}(r^2s+rs^2)}t^2n^2\varphi\log(\Phi)\log(nt\Phi) = 2^{2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}}n^2\log^2 m\log nm\]
A straightforward realization of the procedure for the creation of the $\varepsilon\delta T$-layered $(\bar{T},L)$-schedule for $I_3$ (the fifth step), which is described in the proof of Lemma \ref{lem:prmt_IP_to_sched}, will take $nm2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}$ time, yielding:
\begin{theorem}
The algorithm for the preemptive model finds a schedule with makespan at most $(1+9\varepsilon)T$ or correctly determines that there is no schedule with makespan $T$ in time $2^{2^{\mathcal{O}(\nicefrac{1}{\varepsilon}\log\nicefrac{1}{\varepsilon})}}n^2 m\log m\log nm$.
\end{theorem}
\subsubsection*{Proof of Lemma \ref{lem_prmt_simplification_3}}
We divide the proof into three steps, which can be summarized as follows:
\begin{enumerate}
\item We transform a $(T,L)$-schedule for $I_2$ into a $((1+3\varepsilon)T,L)$-schedule for $I_3$ in which the big setup jobs are already properly placed inside the layers.
\item We construct a flow network with integer capacities and a maximum flow, based on the placement of the remaining jobs in the layers.
\item Using flow integrality and careful repacking, we transform the schedule into an $\varepsilon\delta T$-layered, layer-compliant schedule.
\end{enumerate}
More precisely, the above transformation steps produce an $\varepsilon\delta T$-layered, layer-compliant $((1+3\varepsilon)T,L)$-schedule with the caveat that for some jobs too much processing time may be inserted, or setup times may be produced that are not followed by the corresponding job pieces.
Note that this does not cause any problems:
We can simply remove the extra setups and processing time pieces.
For the medium jobs this results in a placement with at most one used slot that is not fully filled, as required in a layer-compliant schedule.
\subparagraph*{Step 1.}
Remember that a block is a job piece together with its setup time placed in a given schedule.
Consider a $(T,L)$-schedule for $I_2$ and suppose that for each block in the schedule there is a container perfectly encompassing it.
Now, we stretch the entire schedule by a factor of $(1+3\varepsilon)$ and in this process we stretch and move the containers correspondingly.
The blocks are not stretched but moved in order to stay in their container, and we assume that they are positioned at the bottom, that is, at the beginning of the container.
Note that we could move each block inside its respective container without creating conflicts with other blocks belonging to the same job.
In the following, we use the extra space to modify the schedule.
Similar techniques are widely used in the context of geometric packing algorithms.
Let $j$ be a big setup job.
In each container containing a block belonging to $j$, there is a free space of at least $3\varepsilon\delta T$, because the setup time of $j$ is at least $\delta T$ and therefore the container had at least that length before the stretching.
Hence, we have enough space to perform the following two steps.
We move the block up by at most $\varepsilon\delta T$, such that it starts at a multiple of $\varepsilon\delta T$.
Next, we enlarge the setup time and the processing time by at most $\varepsilon\delta T$, such that both are multiples of $\varepsilon\delta T$.
Now the setup time is equal to the rounded setup time, while the processing time might be bigger, because we performed this step for each piece of the job.
We outline the procedure in Figure \ref{fig:container}.
\begin{figure}
\caption{The stretching and rounding steps, for a small job part with big setup time starting in the first layer of the schedule, depicted from left to right: The schedule and the containers are stretched; the block is moved up; and the processing and the setup time are increased. The hatched part represents the setup time, the thick rectangle the container, and the dashed lines the layers, with $\varepsilon = \delta = 1/8$.}
\label{fig:container}
\end{figure}
We continue with the small setup jobs.
These jobs are big and therefore for each of them there is a summed up free space of at least $3\varepsilon^2 T$ in the containers belonging to the respective job---more than enough to enlarge some of the pieces such that their overall length matches the rounded processing time.
Lastly, we consider the medium setup jobs.
These jobs are big as well and we could apply the same argument as above; however, we need to be a little more careful in order to additionally realize the rounding of the setup times, as well as a technical step that we need in the following.
Fix a medium setup job $j$ and a container filled with a block belonging to $j$.
Since the setup time has a length of at least $\mu T$, the part of the container filled with it was increased by at least $3\varepsilon\mu T$.
Hence, we can enlarge the setup time to the rounded setup time without using up space in the container that was created due to the processing time part.
We do this for all blocks belonging to medium setup jobs.
The extra space in the containers of a medium setup job due to the processing time parts is still at least $3\varepsilon^2 T\geq 3\varepsilon\delta T$.
For each medium setup job $j$ we spend at most $\varepsilon\delta T$ of this space to enlarge its processing time to its rounded size and again at most $\varepsilon\delta T$ to create a little bit of extra processing time in the containers belonging to $j$.
The size of this extra processing time is bounded by $\varepsilon\delta T$ and chosen in such a way that the overall length of all blocks belonging to $j$ in the schedule is also a multiple of $\varepsilon\delta T$.
Because of the rounding, the length of the added extra processing time for each $j$ is a multiple of $\varepsilon\mu T$.
The purpose of the extra processing time is to ensure integrality in the flow network, which is constructed in the next step.
Note that the free space that was available in the original schedule was not used in the above steps, in fact it was even increased by the stretching.
Hence, we have created a $((1+3\varepsilon)T,L)$-schedule for $I_3$---or a slightly modified version thereof---and the big setup jobs are already well behaved with respect to the $\varepsilon\delta T$-layers, that is, they start at multiples of $\varepsilon\delta T$, and fully fill the slots they are scheduled in.
\subparagraph*{Step 2.}
Note that for each job $j$ and layer $\ell\in\Xi$, the overall length $q_{j,\ell}$ of job and setup pieces belonging to $j$ and placed in $\ell$ is bounded by $\varepsilon\delta T$.
We say that $j$ is \emph{fully}, \emph{partially}, or \emph{not} scheduled in layer $\ell$, if $q_{j,\ell} = \varepsilon\delta T$, $q_{j,\ell} \in (0,\varepsilon\delta T)$, or $q_{j,\ell} = 0$, respectively.
Let $X_j$ be the set of layers in which $j$ is scheduled partially and $Y_\ell$ the set of (medium or small setup) jobs partially scheduled in~$\ell$.
Then $a_j = \sum_{\ell\in X_j} q_{j,\ell}$ is a multiple of $\varepsilon\delta T$ and we set $n_j = a_j/(\varepsilon\delta T)$.
Furthermore, let $b_\ell = \sum_{j\in Y_\ell} q_{j,\ell} $ and $k_\ell = \ceil{b_\ell/(\varepsilon\delta T)}$.
Our flow network has the following structure:
There is a node $v_j$ for each medium or small setup job, and a node $u_\ell$ for each layer $\ell$, as well as a source $\alpha$ and a sink $\omega$.
The source node is connected to the job nodes via edges $(\alpha,v_j)$ with capacity $n_j$; and the layer nodes are connected to the sink via edges $(u_\ell,\omega)$ with capacity $k_\ell$.
Lastly, there are edges $(v_j,u_\ell)$ between job and layer nodes with capacity $1$, if $j$ is partially scheduled in layer $\ell$, or $0$ otherwise.
In Figure \ref{fig:Network} a sketch of the network is given.
The schedule can be used to define a flow $f$ with value $\sum_{j}n_j$ in the network, by setting $f(\alpha, v_j) = n_j$, $f(u_\ell, \omega) = b_\ell / (\varepsilon\delta T)$, and $f(v_j,u_\ell) = q_{j,\ell}/(\varepsilon\delta T)$.
It is easy to verify that $f$ is a maximum flow, and because all capacities in the flow network are integral, we can find another maximum flow $f'$ with integral values.
\begin{figure}
\caption{Flow network for layers and partially scheduled jobs.}
\label{fig:Network}
\end{figure}
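As a concrete illustration of Step 2, the network and an integral maximum flow can be computed with any standard augmenting-path algorithm. The following self-contained sketch (our own illustrative encoding, not part of the formal construction) assumes the lengths $q_{j,\ell}$ are given as exact rationals in units of $\varepsilon\delta T$, so that each value lies in $(0,1)$ and the $a_j$ sum to integers; it builds the network described above and finds an integral maximum flow via Edmonds-Karp:

```python
import math
from collections import deque
from fractions import Fraction

def build_network(q):
    # q[(j, l)] = q_{j,l} in units of eps*delta*T, i.e. a Fraction in (0, 1)
    cap = {"s": {}, "t": {}}
    for node in {("j", j) for j, _ in q} | {("l", l) for _, l in q}:
        cap[node] = {}
    for j in {j for j, _ in q}:
        a_j = sum(v for (jj, _), v in q.items() if jj == j)
        assert a_j.denominator == 1      # a_j is a multiple of eps*delta*T
        cap["s"][("j", j)] = int(a_j)    # capacity n_j on (source, job)
        cap[("j", j)]["s"] = 0
    for l in {l for _, l in q}:
        b_l = sum(v for (_, ll), v in q.items() if ll == l)
        cap[("l", l)]["t"] = math.ceil(b_l)  # capacity k_l on (layer, sink)
        cap["t"][("l", l)] = 0
    for (j, l) in q:                     # unit edges job -> layer
        cap[("j", j)][("l", l)] = 1
        cap[("l", l)][("j", j)] = 0
    return cap

def edmonds_karp(cap, s="s", t="t"):
    # shortest augmenting paths; integral capacities give an integral max flow
    flow = {u: {v: 0 for v in cap[u]} for u in cap}
    value = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow, value
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug
        value += aug
```

On the toy instance with two jobs each half-scheduled in two layers, the maximum flow has value $n_A + n_B = 2$, and all flow values are integral, as flow integrality guarantees.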
\subparagraph*{Step 3.}
We start by introducing some notation and a basic operation for the transformation of the schedule:
Given two machines $i$ and $i'$ and a time $t$, a \emph{machine swap} between $i$ and $i'$ at moment $t$ produces a schedule, in which everything that was scheduled on $i$ from $t$ on is now scheduled on $i'$ and vice versa.
If on both machines there is either nothing scheduled at $t$, or blocks are starting or ending at $t$, the resulting schedule is still feasible.
Moreover, if there is a block starting at $t$ on one of the machines and another one belonging to the same job ending on the other we can merge the two blocks and transform the setup time of the first into processing time.
We assume in the following that we always merge if this is possible, when performing a machine swap.
Remember that by definition blocks belonging to the same job cannot overlap.
However, if there was overlap, it could be eliminated using machine swaps \cite{schuurman1999preemptive}.
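A machine swap is a purely combinatorial operation; the following sketch implements it on a block-list representation of a schedule (the `(job, start, end)` triples are our own illustrative encoding, and the merging of adjacent blocks of the same job is omitted):

```python
def machine_swap(schedule, i1, i2, t):
    """Swap everything scheduled from time t onward between machines i1 and i2.

    schedule maps each machine to a list of blocks (job, start, end).  The swap
    is feasible when no block on either machine strictly crosses t, which the
    asserts check.
    """
    for m in (i1, i2):
        assert all(end <= t or start >= t for _, start, end in schedule[m])
    before = {m: [b for b in schedule[m] if b[2] <= t] for m in (i1, i2)}
    after = {m: [b for b in schedule[m] if b[1] >= t] for m in (i1, i2)}
    schedule[i1] = before[i1] + after[i2]
    schedule[i2] = before[i2] + after[i1]
    return schedule
```

If a block of some job ends at $t$ on one machine and another block of the same job starts at $t$ on the other, the two blocks would additionally be merged in the procedure described above.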
If a given slot only contains pieces of jobs that are partially scheduled in the layer, we call the slot \emph{usable}.
Furthermore, we say that a job $j$ is \emph{flow assigned} to layer $\ell$, if $f'(v_j,u_\ell) = 1$.
In the following, we will iterate through the layers, create as many usable slots as possible, reserve them for flow assigned jobs, and fill them with processing and setup time of the corresponding job later on.
To do so, we have to distinguish different types of blocks belonging to jobs that are partially placed in a given layer:
Inner blocks, which lie completely inside the layer and touch at most one of its borders; upper cross-over blocks, which start inside the layer and end above it; and lower cross-over blocks, which start below the layer and end inside it.
When manipulating the schedule layer by layer, the cross-over jobs obviously can cause problems.
To deal with this, we will need additional concepts:
A \emph{repair piece} for a given block is a piece of setup time of length less than $\varepsilon\delta T$, with the property that the block and the repair piece together make up exactly one setup of the respective job.
Hence, if a repair piece is given for a block, the block is comprised entirely of setup time.
Moreover, we say that a slot reserved for a job $j$ has a \emph{dedicated setup}, if there is a block of $j$ including a full setup \emph{starting or ending} inside the slot.
In the following, we give a detailed description of the transformation procedure, followed by a high-level overview.
The procedure runs through two phases.
In the first phase the layers are transformed one after another from bottom to top.
After a layer is transformed the following invariants will always hold:
\begin{enumerate}
\item A scheduled block either includes a full setup, or has a repair piece, and in the latter case it was an upper cross-over block in a previous iteration.
\item Reserved slots that are not full have a dedicated setup.
\end{enumerate}
Note that the invariants are trivially fulfilled in the beginning.
During the first phase, we remove some job and setup parts from the schedule that are reinserted into the reserved slots in the second phase.
Let $\ell\in\Xi$ denote the current layer.
In the first step, our goal is to ensure that jobs that are fully scheduled in $\ell$ occupy exactly one slot, thereby creating as many usable slots as possible.
Let $j$ be a job that is fully scheduled in layer $\ell$.
If there is a block belonging to $j$ and ending inside the layer at time $t$, there is another block belonging to $j$ and starting at $t$, because $j$ is fully scheduled in $\ell$ and there are no overlaps.
Hence, we can perform a machine swap at time $t$ between the two machines the blocks are scheduled on.
We do so, for each job fully scheduled in the layer and each corresponding pair of blocks.
After this step, there are at least $k_\ell$ usable slots and at most $k_\ell$ flow assigned jobs in layer $\ell$.
\begin{figure}
\caption{The rectangles represent blocks, the hatched parts the setup times, and the dashed lines layer borders. The push and cut step is performed on two blocks. For one of the two a repair piece is created.}
\label{fig:pushandcut}
\end{figure}
Next, we consider upper cross-over blocks of jobs that are partially scheduled in the layer $\ell$ but are not flow assigned to it.
These are the blocks that cause the most problems, and we perform a so-called \emph{push and cut step} (see Figure \ref{fig:pushandcut}) for each of them:
If $q$ is the length of the part of the block lying in $\ell$, we cut away the upper part of the block of length $q$ and move the remainder up by $q$.
If the piece we cut away contains some setup time, we create a repair piece for the block out of this setup time.
The processing time part of the piece, on the other hand, is removed.
Note that this step preserves the first invariant.
The repair piece is needed in the case that the job corresponding to the respective block is flow assigned to the layer in which the block ends.
We now remove all inner blocks from the layer, as well as the parts of the upper and lower cross-over blocks that lie in the layer.
After this, all usable slots are completely free.
Furthermore, note that the first invariant might be breached by this.
Next, we arbitrarily reserve usable slots for jobs flow assigned to the layer.
For this, note that due to the definition of the flow network, there are at most $k_\ell$ jobs flow assigned to the layer and there are at least as many usable slots, as noted above.
Using machine swaps at the upper and lower border of the layer, we then ensure that the upper and lower cross-over blocks of the jobs flow assigned to the layer lie on the same machine as the reserved slot.
This step might breach the second invariant as well.
However, for each job $j$ flow assigned to the layer, we perform the repair steps in order to restore the invariants:
If there is an upper cross-over block for $j$, we reinsert the removed part of the block at the end of the slot, thereby providing a dedicated setup for the remaining free space in the slot.
If there is a lower, but no upper cross-over block for $j$, there are two cases:
Either there was a repair piece for the block or not.
In both cases we reinsert the removed part of the block in the beginning of the slot and in the first we additionally insert as much setup of the repair piece as possible.
The possible remainder of the repair piece is removed.
Now the slot is either full, or a full setup is provided.
If there is neither an upper nor a lower cross-over block for $j$, there is an inner block belonging to $j$.
This has to be the case, because otherwise the capacity in the flow network between $j$ and $\ell$ is $0$ and $j$ could not have been flow assigned to $\ell$.
Moreover, this inner block contains a full setup and we can place it in the beginning of the slot, thus providing the dedicated setup.
The invariants are both restored.
After the first phase is finished, we have to deal with the removed pieces in the second one.
The overall length of the reserved slots for a job $j$ equals the overall length $a_j$ of its setup and job pieces from layers in which $j$ was partially scheduled.
Since we did not create or destroy any job piece, we can place the removed pieces corresponding to job $j$ perfectly into the remaining free space of the slots reserved for $j$, and we do so after transforming them completely into processing time.
Because of the second invariant, there is a dedicated setup in each slot, however, it may be positioned directly above the newly inserted processing time.
This can be fixed by switching the processing time with the top part of the respective setup time.
Lastly, all remaining usable slots are completely free at the end of this procedure, and since the others are full they have an overall size of at least $L$.
We conclude the proof of Lemma \ref{lem_prmt_simplification_3} with an overview of the transformation procedure.
\begin{algorithm}
\
\noindent \emph{Phase 1:} For each layer $\ell\in\Xi$, considered bottom to top, perform the following steps:
\begin{enumerate}
\item Use machine swaps to ensure that jobs fully scheduled in $\ell$ occupy exactly one slot.
\item For each upper cross-over block of a job partially scheduled but not flow assigned to $\ell$ perform a push and cut step.
\item Remove inner blocks and parts of cross-over blocks that lie in $\ell$.
\item Reserve usable slots for jobs flow assigned to the layer.
\item Use machine swaps to ensure, that cross-over blocks of flow assigned jobs lie on the same machine as the reserved slot.
\item For each job $j$ flow assigned to the layer, perform exactly one of the repair steps.
\end{enumerate}
\noindent \emph{Phase 2:}
\begin{enumerate}
\item Transform all removed pieces into processing time and insert the removed pieces into the reserved slots.
\item If processing time has been inserted ahead of the dedicated setup of the slot, reschedule properly.
\end{enumerate}
\end{algorithm}
\section{Improvements of the running time}\label{sec:better_running_time}
In this section, we revisit the splittable and the setup time model.
For the former, we address the problem of the running time dependence in the number of machines $m$, and for both we present an improved rounding procedure, yielding a better running time.
\subsection{Splittable Model -- Machine Dependence}
In the splittable model, the number of machines $m$ may be super-polynomial in the input size, because it is not bounded by the number of jobs $n$.
Hence, we need to be careful already when defining the schedule in order to get a polynomially bounded output.
We say a machine is \textit{composite} if it contains more than one job, and we say it is \textit{plain} if it contains at most one job.
For a schedule with makespan $T$, we call each machine \textit{trivial} if it is plain and has load $T$ or if it is empty, and \textit{nontrivial} otherwise.
We say a schedule with makespan $T$ is \textit{simple}, if the number of nontrivial machines is bounded by $\binom{n}{2}$.
\begin{lemma}
If there is a schedule with makespan $T$ for $I$ there is also a simple schedule with makespan $T$.
\end{lemma}
\begin{proof}
Let there be a schedule $S$ with makespan $T$ for $I$.
For the first step, let us assume there are more than $\binom{n}{2}$ composite machines.
In this case, there exist two machines $M_1$ and $M_2$ and two jobs $a,b \in \mathcal{J}$, $a \neq b$, such that both machines contain parts of both jobs, since there are at most $\binom{n}{2}$ different pairs of jobs.
Let $t_{M_x}(y)$ be the processing time combined with the setup time of job $y \in \{a,b\}$ on machine $M_{x}$, $x \in \{1,2\}$.
W.l.o.g. let $t_{M_1}(a)$ be the smallest value of the four.
We swap this job part and its setup time with some of the processing time of the job $b$ on machine $M_2$.
If the processing time of $b$ on $M_2$ is smaller than $t_{M_1}(a)$, there is no processing time of $b$ on $M_2$ left and we can discard the setup time from $b$ on this machine.
We can repeat this step iteratively until there are at most $\binom{n}{2}$ machines containing more than one job.
In the second step, we shift processing time from the composite machines to the plain ones.
We do this for each job until it is either not contained on a composite machine or each plain machine containing this job has load $T$.
If the job is no longer contained on a composite machine, we shift the processing time of the job such that all machines containing this job, except at most one, have load $T$.
Since this job does not appear on any composite machines, their number can be bounded by $\binom{n-1}{2}$, by repeating the first step.
Therefore, the number of nontrivial machines is bounded by $\binom{n-i}{2} + i \leq \binom{n}{2}$ for some $i\in\set{0,\dots,n}$.
\end{proof}
For a simple schedule a polynomial representation of the solution is possible:
For each job, we give the number of trivial machines containing this job, or fix a first and last trivial machine belonging to this job.
This enables a polynomial encoding length of the output, given that the remaining parts of the jobs are not fragmented into too many parts, which can be guaranteed using the results of Section \ref{sec:EPTAS}.
To guarantee that the MCIP finds a simple solution, we need to modify it a little.
We have to ensure that nontrivial configurations are not used too often.
We can do this by summing up the number of those configurations and bound them by $\binom{n}{2}$.
Let $\mathcal{C}' \subseteq \mathcal{C}$ be the set of nontrivial configurations, i.e., the set of configurations containing more than one module or one module with size smaller than $T$.
We add the following globally uniform constraint to the MCIP:
\begin{align}
\sum_{C \in \mathcal{C}'} x_C \leq \binom{|\jobs^{\mathrm{bst}}|}{2}
\end{align}
Since this is an inequality, we have to introduce a slack variable increasing the brick size by one.
Furthermore, the bound on the biggest number occurring in the input as well as the range of the variables has to be increased by a factor of $\mathcal{O}(n^2)$, yielding a slightly altered running time for the $\mathrm{MCIP}$ of:
\[2^{\mathcal{O}(\nicefrac{1}{\varepsilon^4}\log\nicefrac{1}{\varepsilon})}n^2\log^3 nm\]
The number of modules with maximum size denotes for each job in $\jobs^{\mathrm{bst}}$ how many trivial machines it uses.
The other modules can be mapped to the nontrivial configurations and the jobs can be mapped to the modules.
We still have to schedule the jobs in $\jobs^{\mathrm{sst}}$.
We do this as described in the proof of Lemma \ref{lem:split_rounding1}.
We fill the nontrivial machines greedily step by step starting with the jobs having the smallest processing time.
When these machines are filled, there are some completely empty machines left.
Now, we estimate how many machines can be completely filled with the current job $j$.
This can be done by dividing the remaining processing time by $T-s_j$ in time $\mathcal{O}(1)$.
The remaining part is scheduled on the next free machine.
This machine is filled up with the next job and again the number of machines which can be filled completely with the rest of this new job is determined.
These steps are iterated until all jobs in $\jobs^{\mathrm{sst}}$ are scheduled.
This greedy procedure needs at most $\mathcal{O}(|\jobs^{\mathrm{bst}}|(|\jobs^{\mathrm{bst}}|-1) + |\jobs^{\mathrm{sst}}|) = \mathcal{O}(n^2)$ operations.
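The counting argument behind this greedy step can be sketched as follows. The sketch below is a simplified illustration (the initial gap-filling of the nontrivial machines is left out, and the data layout is our own): for each job, the number of completely filled machines is obtained by a single division, so the whole placement is described by $\mathcal{O}(n)$ numbers.

```python
def schedule_small_setup(jobs, T):
    """Greedy placement of small-setup jobs on fresh machines.

    jobs: list of (s_j, p_j) pairs; processed in order of increasing p_j.
    Returns the number of machines used.  Each full machine holds one setup
    s_j plus T - s_j units of processing, so the count of machines filled
    completely with the current job is p_remaining // (T - s_j), an O(1) step.
    """
    used, open_space = 0, 0.0  # open_space: free room on the current machine
    for s, p in sorted(jobs, key=lambda sp: sp[1]):
        if open_space > s:                 # fill the open machine first
            placed = min(p, open_space - s)
            p -= placed
            open_space -= s + placed
        if p > 0:
            full = int(p // (T - s))       # machines filled completely: O(1)
            p -= full * (T - s)
            used += full
            if p > 0:                      # remainder opens the next machine
                used += 1
                open_space = T - s - p
    return used
```

For instance, with $T=10$ and a single job with setup $2$ and processing time $20$, two machines are filled completely ($2 + 8$ each) and a third receives the remainder.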
Therefore we can avoid the dependence in the number of machines and the overall running time is dominated by the time it takes to solve the $\mathrm{MCIP}$.
\subsection{Improved Rounding Procedures}
To improve the running time in the splittable and setup class model, we reduce the number of module sizes via a geometric and an arithmetic rounding step.
In both cases, the additional steps are performed following all the other simplification steps.
The basic idea is to include setup times together with their corresponding job pieces or batches of jobs respectively into containers with suitably rounded sizes and to model these containers using the modules.
The containers have to be bigger in size than the objects they contain and the load on a machine is given by the summed up sizes of the containers on the machine.
Let $H^*$ be a set of container sizes.
Then an $H^*$-structured schedule is a schedule in which each setup time together with its corresponding job piece or batch of jobs is packed in a container with the smallest size $h\in H^*$ such that the summed up size of the setup time and the job piece or batch of jobs is upper bounded by $h$.
\subparagraph*{Splittable Model.}
Consider the instance $I_2$ for the splittable model described in Section~\ref{sec:EPTAS_slittable}.
In this instance, each setup and processing time is a multiple of $\varepsilon^2 T$ and we are interested in a schedule of length $(1 + 2\varepsilon)T$.
For each multiple $h$ of $\varepsilon^2 T$, let $\tilde{h}=(1+\varepsilon)^{\ceil{\log_{1+\varepsilon}h/(\varepsilon^2 T)}}\varepsilon^2 T$ and $\bar{h} = \ceil{\tilde{h}/\varepsilon^2 T}\varepsilon^2 T$, and $\bar{H} = \sett{\bar{h}}{h\in\varepsilon^2T\mathbb{Z}_{\geq 1},h\leq (1 + 2\varepsilon)^2 T}$.
Note that $|\bar{H}|\in\mathcal{O}(1/\varepsilon\log 1/\varepsilon)$.
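The two-stage rounding $h \mapsto \tilde{h} \mapsto \bar{h}$ can be made concrete. The sketch below works with floating-point numbers and a small tolerance, which suffices for illustration (an exact implementation would use rational arithmetic):

```python
import math

def bar_h(h, eps, T):
    # h -> bar{h}: round the container size h (a positive multiple of eps^2*T)
    # up geometrically to tilde{h} = (1+eps)^ceil(log_{1+eps}(h/(eps^2 T)))
    # * eps^2*T, then up arithmetically to the next multiple of eps^2*T.
    unit = eps * eps * T
    k = math.ceil(math.log(h / unit, 1 + eps) - 1e-9)
    tilde = (1 + eps) ** k * unit
    return math.ceil(tilde / unit - 1e-9) * unit

def bar_H(eps, T):
    # all rounded sizes for multiples of eps^2*T up to (1+2*eps)^2 * T
    unit = eps * eps * T
    top = (1 + 2 * eps) ** 2 * T
    return sorted({round(bar_h(i * unit, eps, T), 9)
                   for i in range(1, int(round(top / unit)) + 1)})
```

The rounded size dominates the original one, and the number of distinct rounded sizes is much smaller than the number of multiples of $\varepsilon^2 T$, reflecting the $\mathcal{O}(1/\varepsilon\log 1/\varepsilon)$ bound.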
\begin{lemma}
If there is a $((1 + 2\varepsilon)T,L')$-schedule for $I_2$ in which the length of each job part is a multiple of $\varepsilon^2 T$, there is also an $\bar{H}$-structured $((1 + 2\varepsilon)^2T,L')$-schedule for $I_2$ with the same property.
\end{lemma}
\begin{proof}
Consider such a schedule for $I_2$ and a pair of setup time $s$ and job piece $q$ scheduled on some machine.
Let $h=s+q$.
Stretching the schedule by $(1 + 2\varepsilon)$ creates enough space to place the pair into a container of size $\bar{h}$, because $(1+\varepsilon)h\leq \tilde{h}$, and $\varepsilon h \leq \varepsilon^2 T$, since $s\geq \varepsilon T$.
\end{proof}
To implement this lemma in the procedure, the processing time bounds $\bar{T}$ and $\breve{T}$ both have to be increased appropriately.
Modeling an $\bar{H}$-structured schedule can be done quite naturally:
We simply redefine the size $\Lambda(M)$ of a module $M=(s,q)\in\mathcal{M}$ to be $\overline{s+q}$.
With this definition, we have $|H|=|\bar{H}|=\mathcal{O}(1/\varepsilon\log 1/\varepsilon)$, yielding an improved running time for solving the $\mathrm{MCIP}$ of:
\[2^{\mathcal{O}(\nicefrac{1}{\varepsilon^2}\log^3\nicefrac{1}{\varepsilon})}n^2\log^2 m\log nm\]
Combining this with the results above and the considerations in Section \ref{sec:EPTAS_slittable} yields the running time claimed below Theorem \ref{thm:main_result}.
\subparagraph*{Setup Class Model.}
In the setup class model, an analogous approach also yields a reduced set of module sizes, that is, $|H|=\mathcal{O}(1/\varepsilon\log 1/\varepsilon)$.
Therefore, the $\mathrm{MCIP}$ can be solved in time:
\[2^{\mathcal{O}(\nicefrac{1}{\varepsilon^{3}}\log^4\nicefrac{1}{\varepsilon})}K^2\log n\log m \log Km\]
Hence, we get the running time claimed beneath Theorem \ref{thm:main_result}.
\section{Conclusion}
We presented a more advanced version of the classical configuration IP, showed that it can be solved efficiently using algorithms for $n$-fold IPs, and developed techniques to employ the new IP for the formulation of efficient polynomial time approximation schemes for three scheduling problems with setup times, for which no such algorithms were known before.
For further research the immediate questions are whether improved running times for the considered problems, in particular for the preemptive model, can be achieved; whether the MCIP can be solved more efficiently; and to which other problems it can be reasonably employed.
From a broader perspective, it would be interesting to further study the potential of new algorithmic approaches in integer programming for approximation and, on the other hand, to further study the respective techniques themselves.
\end{document}
\begin{document}
\title{Extremal Betti numbers of some Cohen-Macaulay binomial edge ideals}
\begin{abstract}
We provide the regularity and the Cohen-Macaulay type of binomial edge ideals of Cohen-Macaulay cones, and we show the extremal Betti numbers of some classes of Cohen-Macaulay binomial edge ideals: Cohen-Macaulay bipartite and fan graphs. In addition, we compute the Hilbert-Poincaré series of the binomial edge ideals of some Cohen-Macaulay bipartite graphs.
\end{abstract}
\section*{Introduction}
Binomial edge ideals were introduced in 2010 independently by Herzog et al. in \cite{HHHKR} and by Ohtani in \cite{MO}. They are a natural generalization
of the ideals of 2-minors of a $2\times n$-generic matrix: their generators are those 2-minors whose column indices correspond to the edges of a graph. More precisely, given a simple graph $G$ on $[n]$ and the polynomial ring $S=K[x_1,\dots,x_n,y_1, \dots,y_n]$ with $2n$ variables over a field $K$, the \textit{binomial edge ideal} associated to $G$ is the ideal $J_G$ in $S$ generated by all the binomials $\{x_iy_j -x_jy_i | i,j \in V(G) \text{ and } \{i,j\} \in E(G)\}$, where $V(G)$ denotes the vertex set and $E(G)$ the edge set of $G$.
Many algebraic and homological properties of these ideals have been investigated, such as the Castelnuovo-Mumford regularity and the projective dimension, see for instance \cite{HHHKR}, \cite{EHH}, \cite{MK}, \cite{KM}, and \cite{RR}.
Important invariants which are provided by the graded finite free resolution are the extremal Betti numbers of $J_G$.
Let $M$ be a finitely generated graded $S$-module. Recall that a Betti number $\beta_{i,i+j}(M) \neq 0$ is called \textit{extremal} if $\beta_{k,k+\ell}(M) =0$ for all pairs $(k,\ell) \neq (i,j)$ with $k \geq i$, $\ell \geq j$. A nice property of the extremal Betti numbers is that $M$ has a unique extremal Betti number if and only if $\beta_{p,p+r}(M) \neq 0$, where $p = \projdim M $ and $r = \reg M$. In recent years, extremal Betti numbers were studied by different authors, also motivated by Ene, Hibi, and Herzog's conjecture (\cite{EHH}, \cite{HR}) on the equality of the extremal Betti numbers of $J_G$ and $\mathrm{in}_<(J_G)$. Some works in this direction are \cite{B}, \cite{DH}, and \cite{D}, but the question has been completely and positively solved by Conca and Varbaro in \cite{CV}. The extremal Betti numbers of $J_G$ are explicitly provided by Dokuyucu, in \cite{D}, when $G$ is a cycle or a complete bipartite graph, by Hoang, in \cite{H}, for some closed graphs, and by Herzog and Rinaldo, in \cite{HR}, and Mascia and Rinaldo, in \cite{MR}, when $G$ is a block graph. In this paper, we show the extremal Betti numbers for binomial edge ideals of some classes of Cohen-Macaulay graphs: cones, bipartite and fan graphs. The former were introduced and investigated by Rauf and the second author in \cite{RR}. They construct Cohen-Macaulay graphs by means of the formation of cones: connecting all the vertices of two disjoint Cohen-Macaulay graphs to a new vertex, the resulting graph is Cohen-Macaulay. For these graphs, we give the regularity and also the Cohen-Macaulay type (see Section \ref{Sec: cones}). The latter two are studied by Bolognini, Macchia and Strazzanti in \cite{BMS}. They classify the bipartite graphs whose binomial edge ideal is Cohen-Macaulay. 
In particular, they present a family of bipartite graphs $F_m$ whose binomial edge ideal is Cohen-Macaulay, and they prove that, if $G$ is connected and bipartite, then $J_G$ is Cohen-Macaulay if and only if $G$ can be obtained recursively by gluing a finite number of graphs of the form $F_m$ via two operations. In the same article, they describe a new family of Cohen-Macaulay binomial edge ideals associated with non-bipartite graphs, the fan graphs. For both these families, Jayanthan and Kumar compute a precise expression for the regularity in \cite{JK}, whereas in this work we provide the unique extremal Betti number of the binomial edge ideals of these graphs (see Section \ref{Sec: bipartite and fan graphs} and Section \ref{Sec: Cohen-Macaulay bipartite graphs}). In addition, we exploit the unique extremal Betti number of $J_{F_m}$ to completely describe its Hilbert-Poincaré series (see Section \ref{Sec: bipartite and fan graphs}).
\section{Betti numbers of binomial edge ideals of disjoint graphs}\label{Sec: preliminaries}
In this section we recall some concepts and notation on graphs that we will use in the article.
Let $G$ be a simple graph with vertex set $V(G)$ and edge set $E(G)$. A subset $C$ of $V(G)$ is called a \textit{clique} of $G$ if $\{i, j\} \in E(G)$ for all $i, j \in C$ with $i \neq j$. The \textit{clique complex} $\Delta(G)$ of $G$ is the simplicial complex of all its cliques. A clique $C$ of $G$ is called a \textit{face} of $\Delta(G)$, and its \textit{dimension} is $|C| -1$. A vertex of $G$ is called a {\em free vertex} of $G$ if it belongs to exactly one maximal clique of $G$. A vertex of $G$ of degree 1 is called a \textit{leaf} of $G$. A vertex of $G$ is called a \textit{cutpoint} if its removal increases the number of connected components. A graph $G$ is {\em decomposable} if there exist two subgraphs $G_1$ and $G_2$ of $G$ such that $G=G_1\cup G_2$ and $\{v\}=V(G_1)\cap V(G_2)$, where $v$ is a free vertex of both $G_1$ and $G_2$.
\begin{setup}\label{setup} Let $G$ be a graph on $[n]$ and $u \in V(G)$ a cutpoint of $G$. We denote by
\begin{align*}
& G' \text{ the graph obtained from } G \text{ by connecting all the vertices adjacent to } u, \\
& G'' \text{ the graph obtained from } G \text{ by removing } u, \\
& H \text{ the graph obtained from } G' \text{ by removing } u.
\end{align*}
\end{setup}
Using the notation introduced in Set-up \ref{setup}, we consider the following short exact sequence
\begin{equation}\label{Exact}
0\To S/J_G \To S/J_{G'}\oplus S/((x_u, y_u)+J_{G''})\To S/((x_u,y_u)+J_{H}) \To 0
\end{equation}
For more details, see Proposition 1.4, Corollary 1.5 and Example 1.6 of \cite{R}.
From (\ref{Exact}), we get the following long exact sequence of Tor modules
\begin{equation}\label{longexact}
\begin{aligned}
&\cdots\rightarrow T_{i+1,i+1+(j-1)}(S/((x_u,y_u)+J_H)) \rightarrow T_{i,i+j}(S/J_G) \rightarrow \\
& \hspace{-0.4cm}T_{i,i+j}(S/J_{G'}) \oplus T_{i,i+j}(S/((x_u, y_u)+J_{G''})) \rightarrow T_{i,i+j}(S/((x_u,y_u)+J_H)) \rightarrow
\end{aligned}
\end{equation}
where $T_{i,i+j}^S(M)$ stands for $\mathrm{Tor}_{i,i+j}^S(M,K)$ for any $S$-module $M$, and $S$ is omitted if it is clear from the context.
\begin{lemma}\label{Lemma: beta_p,p+1 e beta_p,p+2}
Let $G$ be a graph on $[n]$ such that $J_G$ is Cohen-Macaulay, and let $p=\projdim S/J_G$. Then
\begin{enumerate}
\item[(i)] If $G$ is connected, then $\beta_{p,p+1} (S/J_G) \neq 0$ if and only if $G$ is the complete graph on $[n]$.
\item[(ii)] If $G= H_1 \sqcup H_2$, where $H_1$ and $H_2$ are connected graphs on disjoint vertex sets, then $\beta_{p,p+2} (S/J_{G}) \neq 0$ if and only if $H_1$ and $H_2$ are complete graphs.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) In \cite{HMK}, the authors prove that, for any simple graph $G$ on $[n]$,
\begin{equation}\label{eq: linear strand}
\beta_{i,i+1} (S/J_G) = i f_{i}(\Delta(G)),
\end{equation}
where $\Delta(G)$ is the clique complex of $G$ and $f_{i}(\Delta(G))$ is the number of faces of $\Delta(G)$ of dimension $i$. Since $G$ is connected and $J_G$ is Cohen-Macaulay, we have $p=n-1$, and the statement is an immediate consequence of Equation (\ref{eq: linear strand}) with $i=p$.\\
\noindent (ii) Since $J_G$ is generated by homogeneous binomials of degree 2, $\beta_{1,1}(S/J_G) = 0$. This implies that $\beta_{i,i}(S/J_G) = 0$ for all $i \geq 1$. For all $j\geq 1$, we have
\begin{equation*}
\beta_{p,p+j} (S/J_G) = \sum_{\substack{1 \leq j_1, j_2 \leq r \\ j_1+j_2 = j}} \beta_{p_1, p_1+j_1}(S_1/J_{H_1})\beta_{p_2, p_2+j_2}(S_2/J_{H_2})
\end{equation*}
For $j=2$, we get
\begin{equation}\label{eq on beta_p,p+2 disjoint graphs}
\beta_{p,p+2} (S/J_G) = \beta_{p_1, p_1+1}(S_1/J_{H_1})\beta_{p_2, p_2+1}(S_2/J_{H_2}).
\end{equation}
By part (i), both Betti numbers on the right-hand side are non-zero if and only if $H_1$ and $H_2$ are complete graphs, and the claim follows.
\end{proof}
Let $M$ be a finitely generated graded $S$-module. Recall that the Cohen-Macaulay type of $M$, denoted by $\text{CM-type}(M)$, is $\beta_p(M)$, that is, the sum of all $\beta_{p,p+i}(M)$ for $i=0,\dots,r$, where $p= \projdim M$ and $r=\reg M$. When $S/J_G$ has a unique extremal Betti number, we denote it by $\widehat{\beta}(S/J_G)$.
\begin{lemma}\label{Lemma: Betti numbers of disjoint graphs}
Let $H_1$ and $H_2$ be connected graphs on disjoint vertex sets and $G=H_1\sqcup H_2$. Suppose that $J_{H_1}$ and $J_{H_2}$ are Cohen-Macaulay binomial edge ideals. Let $S_i = K[\{x_j,y_j\}_{j \in V(H_i)}]$ for $i=1,2$. Then
\begin{enumerate}
\item[(i)] $\text{CM-type} (S/J_{G}) = \text{CM-type}(S_1/J_{H_1}) \text{CM-type}(S_2/J_{H_2})$.
\item[(ii)] $\widehat{\beta}(S/J_G) = \widehat{\beta}(S_1/J_{H_1})\widehat{\beta}(S_2/J_{H_2})$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) The equality $J_{G} = J_{H_1} + J_{H_2}$ implies that the minimal graded free resolution of $S/J_G$ is the tensor product of the minimal graded free resolutions of $S_1/J_{H_1}$ and $S_2/J_{H_2}$. Then
\[
\beta_t(S/J_G) = \sum_{k=0}^t \beta_k(S_1/J_{H_1})\beta_{t-k}(S_2/J_{H_2}).
\]
Let $p = \projdim S/J_G$, that is $p=p_1+p_2$, where $p_i = \projdim S_i/J_{H_i}$ for $i=1,2$. Since $\beta_k (S_1/J_{H_1}) = 0$ for all $k > p_1$ and $\beta_{p-k}(S_2/J_{H_2})=0$ for all $k < p_1$, it follows
\[
\beta_p(S/J_G) = \beta_{p_1}(S_1/J_{H_1})\beta_{p_2}(S_2/J_{H_2}).
\]
\noindent (ii) Let $r= \reg S/J_G$. Consider
\begin{equation*}
\beta_{p,p+r} (S/J_G) = \sum_{\substack{1 \leq j_1, j_2 \leq r \\ j_1+j_2 = r}} \beta_{p_1, p_1+j_1}(S_1/J_{H_1})\beta_{p_2, p_2+j_2}(S_2/J_{H_2}).
\end{equation*}
Since $\beta_{p_i, p_i+j_i}(S_i/J_{H_i}) =0$ for all $j_i > r_i$, where $r_i = \reg S_i/J_{H_i}$ for $i=1,2$, and $r = r_1 + r_2$, it follows
\begin{equation*}
\beta_{p,p+r}(S/J_G) = \beta_{p_1, p_1+r_1}(S_1/J_{H_1})\beta_{p_2, p_2+r_2}(S_2/J_{H_2}).
\end{equation*}
\end{proof}
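To illustrate Lemma \ref{Lemma: Betti numbers of disjoint graphs} on a small case, recall the well-known fact that $J_{K_m}$ is resolved by the Eagon-Northcott complex, which yields $\text{CM-type}(S/J_{K_m}) = \widehat{\beta}(S/J_{K_m}) = m-1$. Consider $G = K_2 \sqcup K_3$. Then $p = p_1+p_2 = 1+2 = 3$ and
\[
\text{CM-type}(S/J_G) = \widehat{\beta}(S/J_G) = \beta_{3,5}(S/J_G) = 1 \cdot 2 = 2.
\]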
Let $G$ be a simple connected graph on $[n]$. We recall that if $J_G$ is Cohen-Macaulay, then $p=\projdim S/J_G = n-1$, and $S/J_G$ admits a unique extremal Betti number, namely $\widehat{\beta}(S/J_G) = \beta_{p,p+r} (S/J_G)$, where $r = \reg S/J_G$.
\section{Regularity and Cohen-Macaulay type of cones}\label{Sec: cones}
Let $H$ be a graph and let $v \notin V(H)$. The \textit{cone} of $v$ on $H$, denoted $\mathrm{cone}(v,H)$, is the graph with vertex set $V(H) \cup \{v\}$ and edge set $E(H) \cup \{\{v,w\} \mid w \in V(H)\}$.
\begin{lemma}\label{sum reg}
Let $G=\mathrm{cone}(v, H_1 \sqcup \dots \sqcup H_s)$, with $s\geq 2$. Then
\[
\reg S/J_G = \max \left\lbrace \sum_{i=1}^s \reg S/J_{H_i}, 2\right\rbrace.
\]
\end{lemma}
\begin{proof}
Consider the short exact sequence (\ref{Exact}) with $G=\mathrm{cone}(v, H_1 \sqcup \dots \sqcup H_s)$ and $u=v$; then $G' = K_n$, the complete graph on $[n]$, $G'' = H_1 \sqcup \dots \sqcup H_s$, and $H=K_{n-1}$, where $n = |V(G)|$. Since $G'$ and $H$ are complete graphs, the regularity of $S/J_{G'}$ and of $S/((x_u,y_u)+J_{H})$ is 1, whereas the regularity of $S/((x_u,y_u)+J_{G''})$ is $\reg S/J_{H_1} + \cdots +\reg S/J_{H_s}$. We get the following bound on the regularity of $S/J_G$:
\begin{eqnarray*}
\reg S/J_G\hspace{-0.2cm}&\leq &\hspace{-0.2cm}\max\left\lbrace\reg \frac{S}{J_{G'}},\reg \frac{S}{((x_u, y_u)+J_{G''})}, \reg \frac{S}{((x_u,y_u)+J_{H})} +1\right\rbrace\\
&= &\hspace{-0.2cm}\max\left\lbrace 1, \sum_{i=1}^s \reg S/J_{H_i}, 2\right\rbrace.
\end{eqnarray*}
Suppose first that $\sum_{i=1}^s \reg S/J_{H_i} \geq 2$; then $\reg S/J_G \leq \sum_{i=1}^s \reg S/J_{H_i}$. Since $H_1 \sqcup \dots \sqcup H_s$ is an induced subgraph of $G$, by \cite[Corollary 2.2]{MM} of Matsuda and Murai we have
\[
\reg S/J_{G} \geq \reg S/J_{H_1\sqcup \cdots \sqcup H_s} = \sum_{i=1}^s \reg S/J_{H_i}.
\]
Suppose now that $\sum_{i=1}^s \reg S/J_{H_i} < 2$; then $\reg S/J_G \leq 2$. Since $G$ is not a complete graph, $\reg S/J_G \geq 2$, and the statement follows.
\end{proof}
Observe that, for $G=\mathrm{cone}(v, H_1 \sqcup \dots \sqcup H_s)$ with $s \geq 2$, one has $\reg S/J_G = 2$ if and only if $\sum_{i=1}^s \reg S/J_{H_i} \leq 2$; this happens, for instance, when all the $H_i$ are isolated vertices except for at most two, which are complete graphs.
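For instance, Lemma \ref{sum reg} gives $\reg S/J_G = \max\{1+1+1,2\}=3$ for $G = \mathrm{cone}(v, K_2 \sqcup K_2 \sqcup K_2)$, whereas $\reg S/J_G = \max\{1+1+0,2\}=2$ for $G = \mathrm{cone}(v, K_2 \sqcup K_2 \sqcup \{w\})$, since an isolated vertex contributes $0$ to the sum.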
We now describe the Cohen-Macaulay type and some Betti numbers of $S/J_G$ when $S/J_G$ is Cohen-Macaulay and $G$ is a cone, namely $G=\mathrm{cone}(v,H)$. By \cite[Lemma 3.4]{RR}, for Cohen-Macaulayness it is necessary that $H$ has exactly two connected components and that both are Cohen-Macaulay (see also Corollaries 3.6 and 3.7 and Theorem 3.8 in \cite{RR}).
\begin{proposition}\label{prop: cm-type cone}
Let $G=\mathrm{cone}(v, H_1 \sqcup H_2)$ on $[n]$, with $J_{H_1}$ and $J_{H_2}$ Cohen-Macaulay binomial edge ideals. Then
\[
\text{CM-type}(S/J_G) = n-2 + \text{CM-type}(S/J_{H_1})\text{CM-type}(S/J_{H_2}).
\]
In particular, the unique extremal Betti number of $S/J_G$ is given by
\begin{equation*}
\widehat{\beta}(S/J_G) =
\begin{cases}
\widehat{\beta}(S_1/J_{H_1}) \widehat{\beta} (S_2/J_{H_2}) & \text{ if } r >2 \\
n-2 +\widehat{\beta}(S_1/J_{H_1}) \widehat{\beta}(S_2/J_{H_2}) & \text{ if } r =2
\end{cases}
\end{equation*}
where $r= \reg S/J_G$. In addition, if $r >2$, it holds
\begin{equation*}
\beta_{p,p+2} (S/J_G) = n-2.
\end{equation*}
\end{proposition}
\begin{proof}
Consider the short exact sequence (\ref{Exact}) with $u=v$; then we have $G' = K_{n}$, $G'' = H_1 \sqcup H_2$, and $H= K_{n-1}$. It holds that
\begin{align}
r=\reg S/J_G & \ = \ \max \{\reg S/J_{H_1} + \reg S/J_{H_2}, 2\}, \notag\\
\reg S/((x_u,y_u)+J_{G''}) & \ = \ \reg S/J_{H_1} + \reg S/J_{H_2},\notag \\
\reg S/J_{G'} & \ = \ \reg S/((x_u,y_u)+J_{H}) = 1, \label{reg1}
\end{align}
and
\[
p=\projdim S/J_G = \projdim S/J_{G'} = \projdim S/((x_u,y_u)+J_{G''}) = n-1,
\]
\[
\projdim S/((x_u,y_u)+J_{H}) = n.
\]
Consider the long exact sequence (\ref{longexact}) with $i=p$. By \eqref{reg1}, we have
\[
\beta_{p,p+j}(S/J_{G'}) = \beta_{p,p+j}(S/((x_u, y_u)+J_{H})) = 0 \text{ for all } j \geq 2
\]
and
\[\beta_{p+1,p+1+(j-1)}(S/((x_u,y_u)+J_H)) \neq 0 \text{ only for } j=2.\]
\noindent
By Lemma \ref{Lemma: beta_p,p+1 e beta_p,p+2} and Lemma \ref{Lemma: Betti numbers of disjoint graphs} (i), it follows that
\begin{eqnarray*}
\text{CM-type}(S/J_G) &=& \sum_{j=0}^r \beta_{p,p+j} (S/J_G) = \sum_{j=2}^r \beta_{p,p+j} (S/J_G) \\
&=& \beta_{p-1,p-2+2}(S/J_H) + \sum_{j=2}^r \beta_{p-2,p-2+j} (S/J_{G''}) \\
&=& n-2 + \text{CM-type} (S/J_{G''}) \\
&=& n-2 + \text{CM-type}(S/J_{H_1})\text{CM-type}(S/J_{H_2}).
\end{eqnarray*}
If $r=2$,
\begin{eqnarray*}
\text{CM-type}(S/J_G) &=& \beta_{p,p+2} (S/J_G) \\
&=& \beta_{p-1,p-2+2}(S/J_H) + \beta_{p-2,p-2+2} (S/J_{G''}) \\
&=& n-2 + \widehat{\beta} (S_1/J_{H_1}) \widehat{\beta} (S_2/J_{H_2}),
\end{eqnarray*}
where the last equality follows from Equation (\ref{eq on beta_p,p+2 disjoint graphs}).
If $r >2$, then $H_1$ and $H_2$ are not both complete graphs; hence, by Lemma \ref{Lemma: beta_p,p+1 e beta_p,p+2} (ii), $\beta_{p-2,p-2+2} (S/J_{G''}) =0$, so that $\beta_{p,p+2} (S/J_G) =n-2$, and by Lemma \ref{Lemma: Betti numbers of disjoint graphs} (ii), $\widehat{\beta}(S/J_G) = \widehat{\beta}(S_1/J_{H_1}) \widehat{\beta}(S_2/J_{H_2})$.
\end{proof}
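As a consistency check of Proposition \ref{prop: cm-type cone}, take $G=\mathrm{cone}(v, K_2 \sqcup K_2)$ on $n=5$ vertices, that is, two triangles glued at the vertex $v$. Here $r=2$, so
\[
\text{CM-type}(S/J_G) = \widehat{\beta}(S/J_G) = n-2 + 1 \cdot 1 = 4,
\]
in agreement with the fact that $G$ is decomposable into two copies of $K_3$, whence $\widehat{\beta}(S/J_G) = \widehat{\beta}(S/J_{K_3})^2 = 4$ (see \cite[Corollary 1.4]{HR}).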
\section{Extremal Betti numbers of some classes of Cohen-Macaulay binomial edge ideals}\label{Sec: bipartite and fan graphs}
We first fix the notation for the family of fan graphs, introduced in \cite{BMS}.
Let $K_m$ be the complete graph on $[m]$ and $W=\{v_1,\dots,v_s\} \subseteq [m]$. Let $F_{m}^W$ be the graph obtained from $K_m$ by attaching, for every $i=1, \dots, s$, a complete graph $K_{h_i}$ to $K_m$ in such a way that $V(K_m) \cap V(K_{h_i}) = \{v_1, \dots, v_i\}$, for some $h_i >i$. We say that the graph $F_m^W$ is obtained by adding a \textit{fan} to $K_m$ on the set $W$. If $h_i = i+1$ for all $i=1, \dots, s$, we say that $F_m^W$ is obtained by adding a \textit{pure fan} to $K_m$ on the set $W$.
Let $W \subseteq [m]$ and let $W = W_1 \sqcup \cdots \sqcup W_k$ be a non-trivial partition of $W$. Let $F_m^{W,k}$ be the graph obtained from $K_m$ by adding a fan to $K_m$ on each set $W_i$, for $i=1, \dots, k$. The graph $F_m^{W,k}$ is called a \textit{$k$-fan} of $K_m$ on the set $W$. If all the fans are pure, we call it a \textit{$k$-pure fan graph} of $K_m$ on $W$.
When $k=1$, we write $F_m^W$ instead of $F_m^{W,1}$. Consider the pure fan graph $F_m^W$ on $W=\{v_1, \dots, v_s\}$. We observe that $F_m^W = \mathrm{cone}(v_1, F_{m-1}^{W'} \sqcup \{w\})$, where $W' = W \setminus \{v_1\}$, $w$ is the leaf of $F_m^W$ with $\{w,v_1\} \in E(F_m^W)$, and $F_{m-1}^{W'}$ is the pure fan graph of $K_{m-1}$ on $W'$. \\
Now, we recall the notation used in \cite{BMS} for a family of bipartite graphs.
For every $m \geq 1$, let $F_m$ be the graph on the vertex set $[2m]$ and with edge set $E(F_m)= \{\{2i, 2j-1\} \mid i=1, \dots, m, j=i, \dots, m\}$. \\
In \cite{BMS}, the authors prove that if either $G = F_m$ or $G= F_m^{W,k}$, with $m \geq 2$, then $J_{G}$ is Cohen-Macaulay. The regularity of $S/J_G$ has been studied in \cite{JK}, where the following results are proved.
\begin{proposition}[\cite{JK}]\label{prop: reg pure fan graph}
Let $G = F_m^{W,k}$ be the $k$-pure fan graph of $K_m$ on $W$, with $m \geq 2$. Then
\[
\reg S/J_G = k+1.
\]
\end{proposition}
\begin{proposition}[\cite{JK}]\label{prop: reg bip}
For every $m\geq 2$, $\reg S/J_{F_m} = 3$.
\end{proposition}
Observe that if $G = F_m^W$ is a pure fan graph, then the regularity of $J_G$ is equal to 3 for any $m$ and any $W\subseteq [m]$; hence all of these graphs belong to the class of graphs studied by Madani and Kiani in \cite{MK1}.
Exploiting Proposition \ref{prop: cm-type cone}, we obtain a formula for the CM-type of any pure fan graph $G = F_m^W$.
\begin{proposition}\label{Prop: CM-type pure fan graph}
Let $m \geq 2$ and let $G = F_m^W$ be a pure fan graph with $|W| \geq 1$. Then
\begin{equation}\label{eq: CM-type pure fan graph}
\mathrm{CM-type}(S/J_G) = \widehat{\beta}(S/J_G)=(m-1)|W|.
\end{equation}
\end{proposition}
\begin{proof}
We use induction on $m$. If $m=2$, then $G$ is decomposable into complete graphs, and it is straightforward to check that (\ref{eq: CM-type pure fan graph}) holds. If $m >2$, suppose that the thesis holds for all the pure fan graphs of $K_{m-1}$. We have $G = \mathrm{cone}(v_1, H_1 \sqcup H_2)$, where $W=\{v_1, \dots, v_s\}$, $H_1=F_{m-1}^{W'}$ is the pure fan graph of $K_{m-1}$ on $W' = W \setminus \{v_1\}$, and $H_2=\{w\}$, with $w$ the leaf of $G$ such that $\{w,v_1\} \in E(G)$. By the induction hypothesis, $\text{CM-type}(S/J_{H_1}) = (m-2)(|W|-1)$ and $\text{CM-type}(S/J_{H_2})=1$; then, using Proposition \ref{prop: cm-type cone}, it follows that
\begin{equation*}
\begin{aligned}
\text{CM-type}(S/J_G) &= |V(G)|-2 + \text{CM-type}(S/J_{H_1})\text{CM-type}(S/J_{H_2}) \\
&= (m+ |W|-2) + (m-2)(|W|-1) = (m-1)|W|.
\end{aligned}
\end{equation*}
Since $|W| \geq 1$, the graph $F_m^W$ is not a complete graph, so $\beta_{p,p+1}(S/J_G) =0$, where $p = \projdim S/J_{G}$. Since $\reg S/J_G=2$, $\text{CM-type}(S/J_G)$ coincides with the unique extremal Betti number of $S/J_{G}$, that is, $\beta_{p,p+2}(S/J_G)$.
\end{proof}
In the following result we provide the unique extremal Betti number of any $k$-pure fan graph.
\begin{proposition}\label{CM-type Fan}
Let $G = F_m^{W,k}$ be a $k$-pure fan graph, where $m \geq 2$ and $W_1 \sqcup \cdots \sqcup W_k$ is a non-trivial partition of $W \subseteq [m]$. Then
\begin{equation} \label{eq: CM-type k-pure fan graph}
\widehat{\beta}(S/J_G) = (m-1) \prod_{i=1}^k |W_i|.
\end{equation}
\end{proposition}
\begin{proof}
Let $|W_i| = \ell_i$ for $i=1, \dots, k$. First of all, we observe that if $\ell_i = 1$ for all $i=1,\dots,k$, that is, $W_i = \{v_i\}$, then $G$ is decomposable into $G_1 \cup \cdots \cup G_{k+1}$, where $G_1=K_m$ and, for all $j=2, \dots, k+1$, $G_j=K_2$ with $G_1 \cap G_j = \{v_{j-1}\}$. This implies
\[
\widehat{\beta}(S/J_G) = \prod_{j=1}^{k+1} \widehat{\beta}(S/J_{G_j}) = m-1,
\]
where the last equality follows from the fact that $\widehat{\beta}(S/J_{K_m}) = m-1$ for any complete graph $K_m$ with $m \geq 2$. Hence we may assume that $\ell_i \geq 2$ for some $i$; without loss of generality, $\ell_1 \geq 2$. \\
We are ready to prove the statement by induction on $n$, the number of vertices of $G=F_m^{W,k}$, that is, $n=m+\sum_{i=1}^k \ell_i$. If $n=4$, then $G$ is a pure fan graph $F_2^W$ with $|W|=2$, which satisfies Proposition \ref{Prop: CM-type pure fan graph}, so \eqref{eq: CM-type pure fan graph} holds. Let $n>4$. Pick $v \in W_1$ such that $\{v,w\} \in E(G)$ for some leaf $w$ of $G$. Consider the short exact sequence (\ref{Exact}) with $u=v$, where $G' = F_{m+\ell_1}^{W',k-1}$ is the $(k-1)$-pure fan graph of $K_{m+\ell_1}$ on $W' = W_2 \sqcup \cdots \sqcup W_k$, $G'' = F_{m-1}^{W'',k} \sqcup \{w\}$ is the disjoint union of the isolated vertex $w$ and the $k$-pure fan graph of $K_{m-1}$ on $W'' = W \setminus \{v\}$, and $H= F_{m+\ell_1-1}^{W',k-1}$. For the quotient rings involved in (\ref{Exact}), Proposition \ref{prop: reg pure fan graph} yields
\begin{align*}
r = &\reg S/J_G \ = \reg S/((x_u,y_u)+J_{G''}) = 1 + k, \\
&\reg S/J_{G'} = \reg S/((x_u,y_u)+J_{H}) = k.
\end{align*}
As regards the projective dimensions, we have
\begin{align*}
p &= \projdim S/J_G = \projdim S/J_{G'} = \projdim S/((x_u,y_u)+J_{G''}) \\
&= \projdim S/((x_u,y_u)+J_{H})-1 = m + \sum_{i=1}^k \ell_i -1.
\end{align*}
Fix $i=p$ and $j=r$ in the long exact sequence (\ref{longexact}). The Tor modules $T_{p+1,p+1+(r-1)}(S/((x_u,y_u)+J_H))$ and $T_{p,p+r}(S/((x_u, y_u)+J_{G''}))$ are the only non-zero ones, and it follows that
\begin{align*}
\beta_{p,p+r}(S/J_G) &= \beta_{p-1,p+r-2}(S/J_{H}) + \beta_{p-2,p+r-2}(S/J_{G''})\\
&= \widehat{\beta}(S/J_H) + \widehat{\beta}(S/J_{F_{m-1}^{W'',k}}).
\end{align*}
Both $F_{m-1}^{W'',k}$ and $H$ fulfill the hypotheses of the proposition and have fewer than $n$ vertices; hence, by the induction hypothesis,
\begin{align*}
\widehat{\beta}(S/J_{H}) &= (m+\ell_1-2) \prod_{s=2}^k \ell_s, \\
\widehat{\beta}(S/J_{F_{m-1}^{W'',k}}) &= (m-2) (\ell_1-1) \prod_{s=2}^k \ell_s.
\end{align*}
Adding these extremal Betti numbers, we obtain
\[
\widehat{\beta}(S/J_G) = \big[(m+\ell_1-2) + (m-2)(\ell_1-1)\big]\prod_{s=2}^k \ell_s = (m-1)\,\ell_1 \prod_{s=2}^k \ell_s = (m-1)\prod_{i=1}^k \ell_i,
\]
and the thesis is proved.
\end{proof}
\begin{proposition}\label{Prop: CM-type bip}
Let $m \geq 2$. The unique extremal Betti number of $S/J_{F_m}$ is given by
\[
\widehat{\beta}(S/J_{F_m}) = \sum_{k=1}^{m-1} k^2.
\]
\end{proposition}
\begin{proof}
We use induction on $m$. If $m=2$, then $F_2$ is a path on four vertices, which is decomposable into three copies of $K_2$, and therefore $\widehat{\beta}(S/J_{F_2})=1$.
Suppose $m >2$. Consider the short exact sequence (\ref{Exact}) with $G=F_m$ and $u= 2m-1$, with respect to the labelling introduced at the beginning of this section. The graphs involved in (\ref{Exact}) are: $G' = F_{m+1}^W$, the pure fan graph of $K_{m+1}$, with $V(K_{m+1}) = \{u\} \cup \{2i \mid i=1,\dots,m\}$, on $W=\{2i-1 \mid i= 1, \dots, m-1\}$; $G'' = F_{m-1} \sqcup \{2m\}$; and the pure fan graph $H=F_m^W$. By Proposition \ref{prop: reg pure fan graph} and Proposition \ref{prop: reg bip}, we have
\begin{align*}
r = &\reg S/J_G = \reg S/((x_u,y_u)+J_{G''}) = 3 \\
&\reg S/J_{G'} = \reg S/((x_u,y_u)+J_{H}) = 2.
\end{align*}
As regards the projective dimensions of the quotient rings involved in (\ref{Exact}), they are all equal to $p = 2m-1$, except for $S/((x_u,y_u)+J_{H})$, whose projective dimension is $2m$.
Consider the long exact sequence (\ref{longexact}), with $i=p$ and $j=r$.
In view of the above, $T_{p,p+r}(S/J_{G'})$, $T_{p,p+r}(S/((x_u,y_u)+J_H))$, and all the Tor modules to the left of $T_{p+1,p+1+(r-1)}(S/((x_u,y_u)+J_H))$ in (\ref{longexact}) are zero. It follows that
\[
T_{p,p+r}(S/J_G) \cong T_{p+1,p+1+(r-1)}(S/((x_u,y_u)+J_H)) \oplus T_{p,p+r}(S/((x_u, y_u)+J_{G''})).
\]
Then, using Proposition \ref{CM-type Fan} and the induction hypothesis, we obtain
\begin{eqnarray*}
\beta_{p,p+r}(S/J_G) &=& \beta_{p-1,p+r-2}(S/J_H) + \beta_{p-2,p+r-2}(S/J_{G''})\\ &=& \widehat{\beta}(S/J_H) + \widehat{\beta}(S/J_{G''})\\
&=&(m-1)^2 + \sum_{k=1}^{m-2} k^2 = \sum_{k=1}^{m-1} k^2.
\end{eqnarray*}
\end{proof}
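Note that the sum in Proposition \ref{Prop: CM-type bip} can be written in closed form as
\[
\widehat{\beta}(S/J_{F_m}) = \sum_{k=1}^{m-1} k^2 = \frac{(m-1)m(2m-1)}{6}.
\]
For instance, for $m=3$ the recursion in the proof gives $\widehat{\beta}(S/J_{F_3}) = (3-1)^2 + \widehat{\beta}(S/J_{F_2}) = 4+1=5$.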
\begin{question}\label{Rem: extremal betti = CM-type}
Based on explicit computations, we believe that for all bipartite graphs $F_m$ and all $k$-pure fan graphs $F_m^{W,k}$ the unique extremal Betti number coincides with the CM-type, that is, $\beta_{p,p+j}(S/J_G) = 0$ for all $j=0, \dots, r-1$, when either $G= F_m$ or $G= F_m^{W,k}$, with $m \geq 2$, where $p= \projdim S/J_G$ and $r= \reg S/J_G$.
\end{question}
In the last part of this section, we completely describe the Hilbert-Poincaré series $\mathrm{HS}$ of $S/J_G$ when $G$ is one of the bipartite graphs $F_m$. In particular, we are interested in computing the $h$-vector of $S/J_G$.
For any graph $G$ on $[n]$, it is well known that
\[
\mathrm{HS}_{S/J_G} (t) = \frac{p(t)}{(1-t)^{2n}} = \frac{h(t)}{(1-t)^d}
\]
where $p(t), h(t) \in \mathbb{Z}[t]$ and $d$ is the Krull dimension of $S/J_G$. The polynomial $p(t)$ is related to the graded Betti numbers of $S/J_G$ as follows:
\begin{equation}\label{eq: p(t) with Betti number}
p(t) = \sum_{i,j}(-1)^i\beta_{i,j}(S/J_G)t^j.
\end{equation}
\begin{lemma}\label{lemma: the last non negative entry}
Let $G$ be a graph on $[n]$ and suppose that $S/J_G$ has a unique extremal Betti number. Then the last non-zero entry of the $h$-vector is $(-1)^{p+d}\beta_{p,p+r}(S/J_G)$, where $p= \projdim S/J_G$, $r= \reg S/J_G$, and $d = \dim S/J_G$.
\end{lemma}
\begin{proof}
If $S/J_G$ has a unique extremal Betti number, then it is equal to $\beta_{p,p+r}(S/J_G)$. Since $p(t)=h(t)(1-t)^{2n-d}$, we have $\mathrm{lc} (p(t)) = (-1)^d \mathrm{lc}(h(t))$, where $\mathrm{lc}$ denotes the leading coefficient of a polynomial. By Equation (\ref{eq: p(t) with Betti number}), the leading coefficient of $p(t)$ is the coefficient of $t^{p+r}$. Since $\beta_{i,p+r}(S/J_G) = 0$ for all $i < p$, we get $\mathrm{lc} (p(t)) = (-1)^p \beta_{p,p+r}(S/J_G)$, and the claim follows.
\end{proof}
The degree of $\mathrm{HS}_{S/J_G}(t)$ as a rational function is called the \textit{$a$-invariant}, denoted by $a(S/J_G)$, and it holds that
\[
a(S/J_G) \leq \reg S/J_G - \depth S/J_G.
\]
Equality holds if $S/J_G$ is Cohen-Macaulay. In this case, $\dim S/J_G = \depth S/J_G$, and thus $\deg h(t) = \reg S/J_G$.
\begin{proposition}
Let $G=F_m$, with $m \geq 2$. Then the Hilbert-Poincaré series of $S/J_G$ is given by
\[
\mathrm{HS}_{S/J_G}(t) = \frac{h_0+h_1t+h_2t^2+ h_3t^3}{(1-t)^{2m+1}}
\]
where
\[
h_0 = 1, \qquad h_1= 2m-1, \qquad h_2 = \frac{3m^2-3m}{2}, \; \text{ and } \; h_3 = \sum_{k=1}^{m-1}k^2.
\]
\end{proposition}
\begin{proof}
By Proposition \ref{prop: reg bip}, $\deg h(t) = \reg S/J_G = 3$. Write $\mathrm{in}(J_G) = I_{\Delta}$ for some simplicial complex $\Delta$, where $I_\Delta$ denotes the Stanley-Reisner ideal of $\Delta$; since $S/J_G$ and $S/\mathrm{in}(J_G)$ have the same Hilbert series, we may compute the $h$-vector from $\Delta$. Let $f_i$ be the number of faces of $\Delta$ of dimension $i$, with the convention that $f_{-1} = 1$. Then
\begin{equation}\label{eq: h_k}
h_k = \sum_{i=0}^k (-1)^{k-i} \binom{d-i}{k-i} f_{i-1}.
\end{equation}
Exploiting Equation (\ref{eq: h_k}), we get
\[
h_1 = f_0 -d = 4m - (2m+1) = 2m-1.
\]
To obtain $h_2$, we first need to compute $f_1$, the number of edges of $\Delta$: these are all the possible edges except those appearing in $(I_{\Delta})_2$, whose number equals the number of edges of $G$. So
\[
f_1 = \binom{4m}{2} - \frac{m(m+1)}{2} = \frac{15m^2-5m}{2}.
\]
Then we have
\begin{eqnarray*}
h_2 = \binom{2m+1}{2} f_{-1} - \binom{2m}{1} f_{0} + \binom{2m-1}{0} f_{1} = \frac{3m^2-3m}{2}.
\end{eqnarray*}
By Lemma \ref{lemma: the last non negative entry}, and since $p=2m-1$ and $d=2m+1$,
\[
h_3 = (-1)^{4m} \beta_{p,p+r}(S/J_G) = \sum_{k=1}^{m-1}k^2
\]
where the last equality follows from Proposition \ref{Prop: CM-type bip}.
\end{proof}
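For instance, for $m=2$ the proposition gives
\[
\mathrm{HS}_{S/J_{F_2}}(t) = \frac{1+3t+3t^2+t^3}{(1-t)^{5}},
\]
since $h_1 = 2 \cdot 2 - 1 = 3$, $h_2 = (3 \cdot 4 - 3 \cdot 2)/2 = 3$, and $h_3 = \sum_{k=1}^{1}k^2 = 1$.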
\section{Extremal Betti numbers of Cohen-Macaulay bipartite graphs}\label{Sec: Cohen-Macaulay bipartite graphs}
In \cite{BMS}, the authors prove that, if $G$ is connected and bipartite, then $J_G$ is Cohen-Macaulay if and only if $G$ can be obtained recursively by gluing a finite number of graphs of the form $F_m$ via two operations. Here, we recall the notation introduced in \cite{BMS} for the sake of completeness. \\
\noindent Operation $*$: For $i = 1, 2$, let $G_i$ be a graph with at least one leaf $f_i$. We denote by $G = (G_1, f_1) * (G_2, f_2)$ the graph obtained by identifying $f_1$ and $f_2$. \\
\noindent Operation $\circ$: For $i = 1,2$, let $G_i$ be a graph with at least one leaf $f_i$, let $v_i$ be its neighbour, and assume $\deg_{G_i}(v_i) \geq 3$. We denote by $G = (G_1, f_1) \circ (G_2, f_2)$ the graph obtained by removing the leaves $f_1, f_2$ from $G_1$ and $G_2$ and by identifying $v_1$ and $v_2$. \\
In $G = (G_1, f_1) \circ (G_2, f_2)$, to refer to the vertex $v$ resulting from the identification of $v_1$ and $v_2$ we write $\{v\} = V(G_1) \cap V(G_2)$.
For both operations, if it is not important to specify the vertices $f_i$ or it is clear from the context, we simply write $G_1 * G_2$ or $G_1 \circ G_2$.
\begin{theorem}[\cite{BMS}]
Let $G=F_{m_1} \circ \cdots \circ F_{m_t} \circ F$, where $F$ denotes either $F_{m}$ or a $k$-pure fan graph $F_{m}^{W,k}$, with $t \geq 0$, $m \geq 3$, and $m_i \geq 3$ for all $i=1, \dots, t$. Then $J_G$ is Cohen-Macaulay.
\end{theorem}
\begin{theorem}[\cite{BMS}, \cite{RR}]\label{Theo: bipartite CM}
Let $G$ be a connected bipartite graph. The following properties are equivalent:
\begin{enumerate}
\item[(i)] $J_G$ is Cohen-Macaulay;
\item[(ii)] $G = A_1 *A_2 * \cdots * A_k$, where, for $i=1, \dots, k$, either $A_i = F_m$ or $A_i = F_{m_1} \circ \cdots \circ F_{m_t}$, for some $m \geq 1$ and $m_j \geq 3$.
\end{enumerate}
\end{theorem}
Let $G=G_1* \cdots * G_t$, with $t \geq 1$. Observe that $G$ is decomposable into $G_1 \cup \cdots \cup G_t$, with $G_i \cap G_{i+1} = \{f_i\}$ for $i=1, \dots, t-1$, where $f_i$ is the leaf of $G_i$ and of $G_{i+1}$ identified in $G_i *G_{i+1}$, and $G_i \cap G_j = \emptyset$ for $j > i+1$. If $G$ is a Cohen-Macaulay bipartite graph, then it admits a unique extremal Betti number, and by \cite[Corollary 1.4]{HR}, it holds that
\[
\widehat{\beta}(S/J_G) = \prod_{i=1}^t \widehat{\beta}(S/J_{G_i}).
\]
In light of the above, we will focus on graphs of the form $G= F_{m_1} \circ \cdots \circ F_{m_t}$, with $m_i \geq 3$ for $i=1, \dots, t$. Before stating the unique extremal Betti number of $S/J_G$, we recall the results on regularity shown in \cite{JK}.
\begin{proposition}[{\cite{JK}}]
For $m_1, m_2 \geq 3$, let $G = F_{m_1} \circ F$, where either $F = F_{m_2}$ or $F$ is a $k$-pure fan graph $F_{m_2}^{W,k}$, with $W = W_1 \sqcup \cdots \sqcup W_k$ and $\{v\}= V(F_{m_1}) \cap V(F)$. Then
\begin{equation*}
\reg S/J_G =
\begin{cases}
6 & \text{ if } F = F_{m_2},\\
k+3 & \text{ if } F= F_{m_2}^{W,k} \text{ and } |W_i| = 1 \text{ for all } i,\\
k+4 & \text{ if } F= F_{m_2}^{W,k},\ |W_i| \geq 2 \text{ for some } i, \text{ and } v \in W_i.\\
\end{cases}
\end{equation*}
\end{proposition}
\begin{proposition}[\cite{JK}]
Let $m_1, \dots, m_t , m\geq 3$ and $t \geq 2$. Consider $G= F_{m_1} \circ \cdots \circ F_{m_t} \circ F$, where $F$ denotes either $F_{m}$ or the $k$-pure fan graph $F_m^{W,k}$ with $W = W_1 \sqcup \cdots \sqcup W_k$ and $|W_i| \geq 2$ for some $i$. Then
\[
\reg S/J_G = \reg S/J_{F_{m_1-1}} + \reg S/J_{F_{m_2-2}} + \cdots + \reg S/J_{F_{m_t -2}} + \reg S/J_{F \setminus \{v,f\}}
\]
where $\{v\} = V( F_{m_1} \circ \cdots \circ F_{m_t}) \cap V(F)$, $v \in W_i$ and $f$ is a leaf such that $\{v,f\} \in E(F)$.
\end{proposition}
\begin{lemma}\label{Lemma: CM-type 2 pallini}
Let $m_1, m_2 \geq 3$ and $G = F_{m_1} \circ F$, where $F$ is either $F_{m_2}$ or a $k$-pure fan graph $F_{m_2}^{W,k}$, with $W = W_1 \sqcup \cdots \sqcup W_k$ and $|W_i| \geq 2$ for some $i$. Let $\{v\} = V(F_{m_1}) \cap V(F)$ and suppose $v \in W_i$. Let $G''$ be as in Set-up \ref{setup}, with $u=v$. Then the unique extremal Betti number of $S/J_G$ is given by
\[
\widehat{\beta} (S/J_G) = \widehat{\beta} (S/J_{G''}).
\]
In particular,
\[
\widehat{\beta} (S/J_G) =
\begin{cases}
\widehat{\beta}(S/J_{F_{m_1-1}})\widehat{\beta}(S/J_{F_{m_2-1}}) \qquad \text{ if } F = F_{m_2} \\
\widehat{\beta}(S/J_{F_{m_1-1}})\widehat{\beta}(S/J_{F_{m_2-1}^{W',k}}) \qquad \text{ if } F = F_{m_2}^{W,k}
\end{cases}
\]
where $W' = W \setminus \{v\}$.
\end{lemma}
\begin{proof}
Consider the short exact sequence (\ref{Exact}), with $G = F_{m_1} \circ F$ and $u=v$.\\
If $F = F_{m_2}$, then the graphs involved in (\ref{Exact}) are: $G' = F_{m}^{W,2}$, $G'' = F_{m_1 -1} \sqcup F_{m_2-1}$, and $H=F_{m-1}^{W,2}$, where $m=m_1+m_2-1$, $W = W_1 \sqcup W_2$ with $|W_i| = m_i -1$ for $i=1,2$, and $G'$ and $H$ are $2$-pure fan graphs. By Proposition \ref{prop: reg pure fan graph} and Proposition \ref{prop: reg bip}, we have the following values for the regularity:
\begin{align*}
r = &\reg S/J_G = \reg S/((x_u,y_u)+J_{G''}) = 6\\
&\reg S/J_{G'} = \reg S/((x_u,y_u)+J_{H}) = 3.
\end{align*}
As for the projective dimension, it is equal to $p=n-1$ for all the quotient rings involved in (\ref{Exact}), except for $S/((x_u,y_u)+J_{H})$, for which it is $n$.
Considering the long exact sequence (\ref{longexact}) with $i=p$ and $j=r$, it holds that
\[
\beta_{p,p+r} (S/J_G) = \beta_{p,p+r} (S/((x_u,y_u)+J_{G''}))
\]
and by Lemma \ref{Lemma: Betti numbers of disjoint graphs} (ii) the second part of the thesis follows. \\
The case $F = F_{m_2}^{W,k}$ follows by similar arguments. Indeed, suppose $|W_1| \geq 2$ and $v \in W_1$. The graphs involved in (\ref{Exact}) are: $G' = F_{m}^{W',k}$, $G'' = F_{m_1 -1} \sqcup F_{m_2-1}^{W'',k}$, and $H=F_{m-1}^{W',k}$, where $m=m_1+m_2+|W_1| -2$, all the fan graphs are $k$-pure, $W' = W'_1 \sqcup W_2 \sqcup \cdots \sqcup W_k$ with $|W'_1| = m_1 -1$, and $W'' = W \setminus \{v\}$. Since $r = \reg S/J_G = \reg S/((x_u,y_u)+J_{G''}) = k+4$ and $\reg S/J_{G'} = \reg S/((x_u,y_u)+J_{H}) = k+1$, and since the projective dimension of all the quotient rings involved in (\ref{Exact}) is $p=n-1$, except for $S/((x_u,y_u)+J_{H})$, for which it is $n$, it follows that
\[
\beta_{p,p+r} (S/J_G) = \beta_{p,p+r} (S/((x_u,y_u)+J_{G''}))
\]
and by Lemma \ref{Lemma: Betti numbers of disjoint graphs} (ii) the second part of the thesis follows.
\end{proof}
\begin{theorem}\label{Theo: betti number t pallini}
Let $t \geq 2$, $m \geq 3$, and $m_i \geq 3$ for all $i=1, \dots, t$. Let $G=F_{m_1} \circ \cdots \circ F_{m_t} \circ F$, where $F$ denotes either $F_{m}$ or a $k$-pure fan graph $F_{m}^{W,k}$ with $W = W_1 \sqcup \cdots \sqcup W_k$. Let $\{v\} = V(F_{m_1} \circ \cdots \circ F_{m_t} ) \cap V(F)$ and, if $F=F_m^{W,k}$, assume $|W_1| \geq 2$ and $v \in W_1$. Let $G''$ and $H$ be as in Set-up \ref{setup}, with $u=v$. Then the unique extremal Betti number of $S/J_G$ is given by
\[
\widehat{\beta}(S/J_G)=\widehat{\beta}(S/J_{G''}) +
\begin{cases}
\widehat{\beta}(S/J_H) &\text{if } m_t=3\\
0 &\text{if } m_t>3
\end{cases}
\]
In particular, if $F = F_{m}$, it is given by
\[
\widehat{\beta} ( S/J_G) =
\widehat{\beta}(S/J_{F_{m_1} \circ \cdots \circ F_{m_t-1}}) \widehat{\beta}(S/J_{F_{m-1}}) +
\begin{cases}
\widehat{\beta}(S/J_{H}) &\text{if } m_t=3 \\
0 &\text{if } m_t>3
\end{cases}
\]
where $H = F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ F_{m+m_t-2}^{W',2}$, and $F_{m+m_t-2}^{W',2}$ is a $2$-pure fan graph of $K_{m+m_t-2}$ on $W'=W_1' \sqcup W_2'$, with $|W_1'|=m_t-1$ and $|W_2'|=m-1$.\\
If $F = F_{m}^{W,k}$, it is given by
\[
\widehat{\beta} ( S/J_G) =
\widehat{\beta}(S/J_{F_{m_1} \circ \cdots \circ F_{m_t-1}}) \widehat{\beta}(S/J_{F_{m-1}^{W'',k}}) +
\begin{cases}
\widehat{\beta}(S/J_H ) &\text{if } m_t=3 \\
0 &\text{if } m_t>3
\end{cases}
\]
where $W'' = W \setminus \{v\}$, $H=F_{m_1}\circ \cdots \circ F_{m_{t-1}} \circ F_{m'}^{W''',k}$, with $m'=m+m_t+|W_1|-2$, $W''' = W_1'' \sqcup W_2 \sqcup \cdots \sqcup W_k$, and $|W_1''| = m_t -1$.
\end{theorem}
\begin{proof}
If $F=F_m$, we have $G'= F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ F_{m+m_t-1}^{W',2}$, $G'' = F_{m_1} \circ \cdots \circ F_{m_t-1} \sqcup F_{m-1}$, and $H=F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ F_{m+m_t-2}^{W',2}$, where $W'=W_1'\sqcup W_2'$, with $|W_1'|=m_t-1$ and $|W_2'|=m-1$. As regards the regularity of these quotient rings, we have
\begin{align*}
r &= \reg S/J_G = \reg S/((x_u,y_u)+J_{G''})\\
&= \reg S/J_{F_{m_1-1}} + \reg S/J_{F_{m_2-2}} + \cdots + \reg S/J_{F_{m_t -2}} + \reg S/J_{F_{m-1}}
\end{align*}
and both $\reg S/J_{G'}$ and $\reg S/((x_u,y_u)+J_H)$ are equal to
\[
\reg S/J_{F_{m_1-1}} + \reg S/J_{F_{m_2-2}} + \cdots + \reg S/J_{F_{m_{t-1} -2}} + \reg S/J_{F_{m+m_t-1}^{W',2}}.
\]
Since $\reg S/J_{F_{m-1}} = \reg S/J_{F_{m+m_t-1}^{W',2}}=3$, whereas if $m_t=3$, $\reg S/J_{F_{m_t -2}}=1$, otherwise $\reg S/J_{F_{m_t -2}}=3$, it follows that
\[
\reg S/J_{G'} = \reg S/((x_u,y_u)+J_H) =
\begin{cases}
r-1 &\text{ if } m_t=3\\
r-3 &\text{ if } m_t>3\\
\end{cases}
\]
For the projective dimensions, we have
\begin{eqnarray*}
p &=& \projdim S/J_G = \projdim S/((x_u,y_u)+J_{G''}) \\
&=& \projdim S/J_{G'} = \projdim S/((x_u,y_u)+J_{H}) -1= n-1.
\end{eqnarray*}
Passing through the long exact sequence (\ref{longexact}) of Tor modules, we obtain, if $m_t =3$
\[
\beta_{p,p+r} (S/J_G) = \beta_{p,p+r} (S/((x_u,y_u)+J_{G''})) + \beta_{p+1,(p+1)+(r-1)}(S/((x_u,y_u)+J_H))
\]
and, if $m_t>3$
\[
\beta_{p,p+r} (S/J_G) = \beta_{p,p+r} (S/((x_u,y_u)+J_{G''})).
\]
The case $F=F_{m}^{W,k}$ follows by similar arguments. Indeed, the involved graphs are: $G'=F_{m_1}\circ \cdots \circ F_{m_{t-1}} \circ F_{m'}^{W''',k}$, $G''= F_{m_1}\circ \cdots \circ F_{m_t-1} \sqcup F_{m-1}^{W'',k}$, and $H=F_{m_1}\circ \cdots \circ F_{m_{t-1}} \circ F_{m'-1}^{W''',k}$, where all the fan graphs are $k$-pure, $W'' = W \setminus \{v\}$, $m'=m+m_t+|W_1|-1$, $W''' = W_1'' \sqcup W_2 \sqcup \cdots \sqcup W_k$, and $|W_1''| = m_t -1$. Fixing $r= \reg S/J_G$, we get $\reg S/((x_u,y_u)+J_{G''}) =r$, whereas
\[
\reg S/J_{G'} = \reg S/((x_u,y_u)+J_H) =
\begin{cases}
r-1 &\text{ if } m_t=3\\
r-3 &\text{ if } m_t>3\\
\end{cases}
\]
The projective dimension of all the quotient rings involved is $p=n-1$, except for $S/((x_u,y_u)+J_H)$, for which it is $n$. Passing through the long exact sequence (\ref{longexact}) of Tor modules, the statement follows.
\end{proof}
\begin{corollary}
Let $t \geq 2$, $m,m_1 \geq 3$, and $m_i \geq 4$ for all $i=2, \dots, t$. Let $G=F_{m_1} \circ \cdots \circ F_{m_t} \circ F$, where $F$ denotes either $F_{m}$ or a $k$-pure fan graph $F_{m}^{W,k}$ with $W = W_1 \sqcup \cdots \sqcup W_k$. Let $\{v\} = V(F_{m_1} \circ \cdots \circ F_{m_t} ) \cap V(F)$ and, when $F=F_m^{W,k}$, assume $|W_1| \geq 2$ and $v \in W_1$. Then the unique extremal Betti number of $S/J_G$ is given by
\[
\widehat{\beta}(S/J_G)=
\begin{cases}
\widehat{\beta}(S/J_{F_{m_1-1}})\prod_{i=2}^t \widehat{\beta}(S/J_{F_{m_i-2}})\widehat{\beta}(S/J_{F_{m-1}})&\text{ if } F=F_m\\
\widehat{\beta}(S/J_{F_{m_1-1}})\prod_{i=2}^t \widehat{\beta}(S/J_{F_{m_i-2}})\widehat{\beta}(S/J_{F_{m-1}^{W',k}}) &\text{ if } F=F_m^{W,k}
\end{cases}
\]
where $W' = W\setminus \{v\}$.
\end{corollary}
\begin{proof}
By Theorem \ref{Theo: betti number t pallini} and by hypothesis on the $m_i$'s, we get
\[
\widehat{\beta}(S/J_G) = \widehat{\beta}(S/J_{F_{m_1} \circ \cdots \circ F_{m_t-1}}) \widehat{\beta}(S/J_{F_{m-1}}).
\]
Repeating the same argument to compute the extremal Betti number of $S/J_{F_{m_1} \circ \cdots \circ F_{m_t-1} }$, and applying Lemma \ref{Lemma: CM-type 2 pallini}, we are done.
\end{proof}
\begin{remark}
Contrary to what we believe for bipartite graphs $F_m$ and $k$-pure fan graphs $F_m^{W,k}$ (see Question \ref{Rem: extremal betti = CM-type}), in general for a Cohen-Macaulay bipartite graph $G=F_{m_1} \circ \cdots \circ F_{m_t}$, with $t \geq 2$, the unique extremal Betti number of $S/J_G$ does not coincide with the Cohen-Macaulay type of $S/J_G$. For example, for $G=F_4 \circ F_3$, we have $5 = \widehat{\beta}(S/J_G) \neq \text{CM-type}(S/J_G) =29$.
\end{remark}
\end{document} |
\begin{document}
\title[Mazur-Ulam theorem in non-Archimedean $n$-normed spaces]
{A Mazur-Ulam theorem for Mappings of conservative distance in non-Archimedean $n$-normed spaces}
\author[Hahng-Yun Chu and Se-Hyun Ku]{Hahng-Yun Chu$^\dagger$ and Se-Hyun Ku$^\ast$}
\address{Hahng-Yun Chu, Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology,
335, Gwahak-ro, Yuseong-gu, Daejeon 305-701, Republic of Korea }
\email{\rm hychu@@kaist.ac.kr}
\address{Se-Hyun Ku, Department of Mathematics, Chungnam National University, 79, Daehangno, Yuseong-Gu, Daejeon 305-764, Republic of Korea} \email{\rm shku@@cnu.ac.kr}
\thanks{
\newline\ \noindent 2010 {\it Mathematics Subject Classification}. primary 46B20, secondary 51M25, 46S10
\newline {\it Key words and Phrases}. Mazur-Ulam theorem, $n$-isometry, non-Archimedean $n$-normed space
\newline{\it $\ast$ Corresponding author}
\newline{\it $\dagger$ The author was supported by the second stage of the Brain Korea 21 Project, The Development Project of Human Resources in Mathematics, KAIST in 2008. \\}}
\maketitle
\begin{abstract}
In this article, we study the notions of $n$-isometries in non-Archimedean $n$-normed spaces over linear ordered non-Archimedean fields, and prove the Mazur-Ulam theorem in the spaces.
Furthermore, we obtain some properties for $n$-isometries in non-Archimedean $n$-normed spaces.
\end{abstract}
\baselineskip=18pt
\theoremstyle{definition}
\newtheorem{df}{Definition}[section]
\newtheorem{rk}[df]{Remark}
\newtheorem{ma}[df]{Main Theorem}
\newtheorem{cj}[df]{Conjecture}
\newtheorem{pb}[df]{Problem}
\theoremstyle{plain}
\newtheorem{lm}[df]{Lemma}
\newtheorem{eq}[df]{equation}
\newtheorem{thm}[df]{Theorem}
\newtheorem{cor}[df]{Corollary}
\newtheorem{pp}[df]{Proposition}
\setcounter{section}{0}
\section{Introduction}
Let $X$ and $Y$ be metric spaces with metrics $d_X$ and $d_Y,$ respectively.
A map $f:X\rightarrow Y$ is called an isometry if $d_Y(f(x),f(y))=d_X(x,y)$ for every $x,y\in X.$
Mazur and Ulam~\cite{mu32} were the first to treat the theory of isometries.
They proved the following theorem:
\textbf{Mazur-Ulam Theorem}\,\,\,\textit{Let $f$ be an isometric transformation from a real normed vector space
$X$ onto a real normed vector space $Y$ with $f(0)=0$. Then $f$ is linear.}
It is natural to ask whether the result holds without the surjectivity assumption.
Concerning this question, Baker~\cite{b71} showed that every isometry of a real normed linear space
into a strictly convex real normed linear space is affine.
The Mazur-Ulam theorem has been widely studied; see \cite{j01,ms08,r01,rs93,rw03,v03,x01}.
Chu et al.\cite{cpp04} have defined the notion of a $2$-isometry which is suitable to represent the concept of an area preserving mapping in linear $2$-normed spaces.
In \cite{c07}, Chu proved that the Mazur-Ulam theorem holds in linear $2$-normed spaces
under the condition that a $2$-isometry preserves collinearity.
Chu et al.\cite{ckk08} discussed characteristics of $2$-isometries.
In \cite{as}, Amyari and Sadeghi proved the Mazur-Ulam theorem in non-Archimedean $2$-normed spaces under the condition of strict convexity.
Recently, Choy et al.~\cite{cck} proved the theorem on non-Archimedean $2$-normed spaces over linear ordered non-Archimedean fields without the strict convexity assumption.
Misiak\cite{m89-1, m89-2} defined the concept of an $n$-normed space and investigated the space.
Chu et al.\cite{clp04}, in linear $n$-normed spaces, defined the concept of an $n$-isometry that is suitable to represent the notion of a volume preserving mapping.
In~\cite{cck09}, Chu et al. generalized the Mazur-Ulam theorem to $n$-normed spaces.
In recent years, Chen and Song\cite{cs} characterized $n$-isometries in linear $n$-normed spaces.
In this paper, without the strict convexity condition, we prove the (additive) Mazur-Ulam theorem on non-Archimedean $n$-normed spaces.
Firstly, we show that an $n$-isometry $f$ between non-Archimedean spaces
preserves the midpoint of a segment, under a condition on the set of all elements of the valued field
whose valuations are $1$.
Using this result, we prove the Mazur-Ulam theorem on non-Archimedean $n$-normed spaces over linear ordered non-Archimedean fields.
In addition, we prove that the barycenter of a triangle in non-Archimedean $n$-normed spaces is $f$-invariant under conditions different from those referred to in the previous statements.
We then also prove a (second type of) Mazur-Ulam theorem in non-Archimedean $n$-normed spaces under these different conditions.
\section{The Mazur-Ulam theorem on non-Archimedean $n$-normed spaces}
In this section, we introduce a non-Archimedean $n$-normed space which is a kind of a generalization of a non-Archimedean $2$-normed space and we show the (additive) Mazur-Ulam theorem for an $n$-isometry $f$ defined on a non-Archimedean $n$-normed space, that is, $f(x)-f(0)$ is additive.
Firstly, we consider some definitions and lemmas which are needed to prove the theorem.
Recall that a {\emph{non-Archimedean}} (or {\emph{ultrametric}}) {\emph{valuation}} is given by a map $|\cdot|$ from a field ${\mathcal{K}}$ into $[0,\infty)$ such that for all $r,s\in{\mathcal{K}}$:
(i) $|r|=0$ if and only if $r=0$;
(ii) $|rs|=|r||s|$;
(iii) $|r+s|\leq \max\{|r|, |s|\}.$
A field ${\mathcal{K}}$ equipped with such a valuation is called a {\emph{valued field}}; for convenience, we
simply call it a field. It is obvious that $|1|=|-1|=1$ and $|n| \leq 1$ for
all $n \in {\mathbb N}$. A trivial example of a non-Archimedean valuation
is the map $|\cdot|$ taking everything but $0$ into $1$ and
$|0|=0$ (see~\cite{nb81}).
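A standard nontrivial example, recalled here only for illustration (it is not used in the sequel), is the $p$-adic valuation on $\mathbb{Q}$: writing a nonzero rational number as $r=p^{k}\frac{a}{b}$ with $k\in\mathbb{Z}$ and $p\nmid ab$, one sets
\[
|r|_{p}:=p^{-k},\qquad |0|_{p}:=0.
\]
Properties (i) and (ii) follow from unique factorization, and the strong triangle inequality (iii) holds since the exponent of $p$ in $r+s$ is at least the minimum of the exponents of $p$ in $r$ and in $s$.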
A {\emph{non-Archimedean norm}} on a vector space ${\mathcal X}$ over ${\mathcal K}$ is a function $\| \cdot \| :{\mathcal X} \to [0, \infty)$ such that for all $r \in {\mathcal K}$ and $x,y \in {\mathcal X}$:
(i) $\|x\| = 0$ if and only if $x=0$;
(ii) $\|rx\| = |r| \|x\|$;
(iii) the strong triangle inequality
$$\| x+ y\| \leq \max \{\|x\|, \|y\|\}.$$
Then we say $({\mathcal X}, \|\cdot\|)$ is a {\emph{non-Archimedean space}}.
\begin{df}\label{df31}
Let ${\mathcal X}$ be a vector space with the dimension greater than $n-1$ over a valued field $\mathcal{K}$
with a non-Archimedean valuation $|\cdot|.$
A function $\| \cdot, \cdots, \cdot \|:{\mathcal{X}}\times\cdots\times{\mathcal{X}}\rightarrow[0,\infty)$
is said to be a {\emph{non-Archimedean $n$-norm}} if
$(\textrm{i}) \ \ \|x_1, \cdots, x_n \|=0 \Leftrightarrow x_1,
\cdots, x_n
\textrm{ are linearly dependent} ;$
$(\textrm{ii}) \ \ \|x_1, \cdots, x_n \| = \| x_{j_1}, \cdots,
x_{j_n} \| $ for every permutation $(j_1, \cdots, j_n)$ of $(1,
\cdots, n) ;$
$(\textrm{iii}) \ \ \| \alpha x_1, \cdots, x_n \| =| \alpha | \
\| x_1, \cdots, x_n \| ;$
$(\textrm{iv}) \ \ \|x+y, x_2, \cdots, x_n \| \le \max\{\|x, x_2,
\cdots, x_n\|, \|y, x_2, \cdots, x_n\|\} $
\noindent for all $\alpha \in \mathcal{K}$ and all $x, y, x_1,
\cdots, x_n \in \mathcal{X}$.
Then $(\mathcal{X},\| \cdot, \cdots, \cdot \|)$ is called a {\it non-Archimedean
$n$-normed space}.
\end{df}
From now on, let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces
over a linear ordered non-Archimedean field $\mathcal{K}.$
\begin{df} \cite{clp04}
Let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces and $f : \mathcal{X} \rightarrow \mathcal{Y}$ a mapping. We call $f$ an {\it
$n$-isometry} if
$$\|x_1 - x_0, \cdots, x_n - x_0\|=\|f(x_1) - f(x_0), \cdots, f(x_n)- f(x_0)\|$$
for all $x_0, x_1, \cdots, x_n \in \mathcal{X}$.
\end{df}
\begin{df}~\cite{clp04}
The points $x_{0}, x_{1}, \cdots,
x_{n}$ of a non-Archimedean $n$-normed space $\mathcal{X}$ are said to be {\it n-collinear} if for every $i$,
$\{x_{j} - x_{i} \mid 0 \le j \neq i \le n \}$ is linearly
dependent.
\end{df}
The points $x_0,x_1$ and $x_2$ of a non-Archimedean $n$-normed space $\mathcal{X}$ are said to be $2$-collinear if and only if $x_2-x_0=t(x_1-x_0)$ for some element $t$ of $\mathcal{K}$.
We denote the set of all elements of $\mathcal{K}$ whose valuations are $1$ by $\mathcal{C},$
that is, ${\mathcal{C}}=\{\gamma\in{\mathcal{K}}:|\gamma|=1\}.$
\begin{lm}{\label{lm31}}
Let $x_{i}$ be an element of a non-Archimedean $n$-normed space $\mathcal{X}$ for
every $i \in \{1, \cdots , n\}$ and $\gamma\in\mathcal{K}.$
Then $$
\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \| =
\|x_{1}, \cdots , x_{i}, \cdots , x_{j} + \gamma x_{i}, \cdots ,
x_{n} \|.
$$
for all $1 \leq i \ne j \leq n.$
\end{lm}
\begin{pf}
By the condition $(\textrm{iv})$ of Definition~\ref{df31}, we have
\begin{eqnarray*}
&&\|x_{1}, \cdots , x_{i}, \cdots , x_{j} + \gamma x_{i}, \cdots ,x_{n} \|\\
&\leq&\max\{\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \|,|\gamma|\,\|x_{1}, \cdots , x_{i}, \cdots , x_{i}, \cdots ,x_{n} \|\}\\
&=&\max\{\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \|,0\}\\
&=&\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \|.
\end{eqnarray*}
The reverse inequality can be proved by a similar argument.
This completes the proof.
\end{pf}
\begin{rk}\label{rk33}
Let $\mathcal{X,Y}$ be non-Archimedean $n$-normed spaces over a linear ordered non-Archimedean field $\mathcal{K}$
and let $f:\mathcal{X}\rightarrow\mathcal{Y}$ be an $n$-isometry.
One can show that the $n$-isometry $f$ from $\mathcal{X}$ to $\mathcal{Y}$ preserves the $2$-collinearity
using the similar method in ~\cite[Lemma 3.2]{cck09}.
\end{rk}
The {\emph{midpoint}} of a segment with endpoints $x$ and $y$ in the non-Archimedean $n$-normed space $\mathcal{X}$ is defined by the point $\frac{x+y}{2}.$
Now, we prove the Mazur-Ulam theorem on non-Archimedean $n$-normed spaces.
In the first step, we prove the following lemma. And then, using the lemma, we show that an $n$-isometry $f$ from a non-Archimedean $n$-normed space $\mathcal{X}$ to a non-Archimedean $n$-normed space $\mathcal{Y}$ preserves the midpoint of a segment.
That is, the $f$-image of the midpoint of a segment in $\mathcal{X}$ is the midpoint of the corresponding segment in $\mathcal{Y}.$
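The computations below repeatedly use the following elementary observation, recorded here for the reader's convenience. If ${\mathcal{C}}=\{2^n\,|\,n\in{\mathbb{Z}}\}$, as assumed in the next results, then $2\in\mathcal{C}$, so $|2|=1$, and by the multiplicativity of the valuation
\[
\Big|\frac{1}{2}\Big|=|2|^{-1}=1.
\]
Hence scaling a vector by $\frac{1}{2}$ does not change the value of any non-Archimedean $n$-norm in which it appears.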
\begin{lm}\label{lm39}
Let $\mathcal{X}$ be a non-Archimedean $n$-normed space
over a linear ordered non-Archimedean field $\mathcal{K}$ with ${\mathcal{C}}=\{2^n|\,n\in{\mathbb{Z}}\}$
and let $x_0,x_1\in\mathcal{X}$ with $x_0\neq x_1.$
Then $u:=\frac{x_0+x_1}{2}$ is the unique member of $\mathcal{X}$ satisfying
\begin{eqnarray*}
\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_0-u,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_1-u,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\|
\end{eqnarray*}
for some $x_2,\cdots,x_n\in \mathcal{X}$ with $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0$
and $u,x_0,x_1$ are $2$-collinear.
\end{lm}
\begin{pf}
Let $u:=\frac{x_0+x_1}{2}.$
From the assumption for the dimension of $\mathcal{X},$ there exist $n-1$ elements $x_2,\cdots,x_n$ in $\mathcal{X}$ such that $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0.$
One can easily prove that $u$ satisfies the above equations and conditions.
It suffices to show the uniqueness for $u.$
Assume that there is another $v$ satisfying
\begin{eqnarray*}
\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_0-v,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_1-v,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\|
\end{eqnarray*}
for some elements $x_2,\cdots,x_n$ of $\mathcal{X}$ with $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0$
and $v,x_0,x_1$ are $2$-collinear.
Since $v,x_0,x_1$ are $2$-collinear, $v=tx_0+(1-t)x_1$ for some $t\in\mathcal{K}.$
Then we have
\begin{eqnarray*}
&&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-v,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-tx_0-(1-t)x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&|1-t|\,\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|,
\end{eqnarray*}
\begin{eqnarray*}
&&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_1-v,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\|\\
&=&\|x_1-tx_0-(1-t)x_1,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\|\\
&=&|t|\,\|x_0-x_1,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\|\\
&=&|t|\,\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|.
\end{eqnarray*}
Since $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0$, we have two equations $|1-t|=1$ and $|t|=1.$
So there are two integers $k_1,k_2$ such that $1-t=2^{k_1},\,t=2^{k_2}.$
Since $2^{k_1}+2^{k_2}=1,$ $k_i<0$ for all $i=1,2.$
Thus we may assume without loss of generality that $1-t=2^{-n_1},\,t=2^{-n_2}$ with $n_1\geq n_2$ and $n_1,n_2\in\mathbb{N}.$
If $n_1> n_2,$ then $1=2^{-n_1}+2^{-n_2}=2^{-n_1}(1+2^{n_1-n_2}),$ that is, $2^{n_1}=1+2^{n_1-n_2}.$
This is a contradiction because the left side of the equation is a multiple of $2$ but the right side of the equation is not.
Thus $n_1=n_2=1$ and hence $v=\frac{1}{2}x_0+\frac{1}{2}x_1=u.$
\end{pf}
\begin{thm}\label{thm38}
Let $\mathcal{X},\mathcal{Y}$ be non-Archimedean $n$-normed spaces
over a linear ordered non-Archimedean field $\mathcal{K}$ with ${\mathcal{C}}=\{2^n|\,n\in{\mathbb{Z}}\}$
and $f:\mathcal{X}\rightarrow \mathcal{Y}$ an $n$-isometry.
Then the midpoint of a segment is $f$-invariant, i.e., for every $x_0,x_1\in\mathcal{X}$ with $x_0\neq x_1,$ $f(\frac{x_0+x_1}{2})$ is also the midpoint of a segment with endpoints $f(x_0)$ and $f(x_1)$ in $\mathcal{Y}.$
\end{thm}
\begin{pf}
Let $x_0,x_1\in\mathcal{X}$ with $x_0\neq x_1.$
Since the dimension of $\mathcal{X}$ is greater than $n-1,$
there exist $n-1$ elements $x_2,\cdots,x_n$ of $\mathcal{X}$ satisfying $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0.$
Since $x_0,x_1$ and their midpoint $\frac{x_0+x_1}{2}$ are $2$-collinear in $\mathcal{X}$,
$f(x_0),f(x_1),f(\frac{x_0+x_1}{2})$ are also $2$-collinear in $\mathcal{Y}$ by Remark~\ref{rk33}.
Since $f$ is an $n$-isometry, we have the following:
\begin{eqnarray*}
&&\|f(x_0)-f(\frac{x_0+x_1}{2}),f(x_0)-f(x_2),\cdots,f(x_0)-f(x_n)\|\\
&=&\|x_0-\frac{x_0+x_1}{2},x_0-x_2,\cdots,x_0-x_n\|\\
&=&|\frac{1}{2}|\,\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\\
&=&\|f(x_0)-f(x_1),f(x_0)-f(x_2),\cdots,f(x_0)-f(x_n)\|\,,\\
\\
&&\|f(x_1)-f(\frac{x_0+x_1}{2}),f(x_1)-f(x_2),\cdots,f(x_1)-f(x_n)\|\\
&=&\|x_1-\frac{x_0+x_1}{2},x_1-x_2,\cdots,x_1-x_n\|\\
&=&|\frac{1}{2}|\,\|x_1-x_0,x_1-x_2,\cdots,x_1-x_n\|\\
&=&\|f(x_0)-f(x_1),f(x_0)-f(x_2),\cdots,f(x_0)-f(x_n)\|.
\end{eqnarray*}
By Lemma ~\ref{lm39}, we obtain that $f(\frac{x_0+x_1}{2})=\frac{f(x_0)+f(x_1)}{2}$ for all $x_0,x_1\in\mathcal{X}$ with $x_0\neq x_1.$
This completes the proof.
\end{pf}
\begin{lm}\label{lm36}
Let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces and $f:\mathcal{X} \rightarrow \mathcal{Y}$ an $n$-isometry. Then the following conditions are equivalent.
$(\textrm{i})$ The $n$-isometry $f$ is additive, i.e., $f(x_0+x_1)=f(x_0)+f(x_1)$ for all $x_0,x_1\in \mathcal{X};$
$(\textrm{ii})$ the $n$-isometry $f$ preserves the midpoint of a segment in $\mathcal{X},$
i.e., $f(\frac{x_0+x_1}{2})=\frac{f(x_0)+f(x_1)}{2}$ for all $x_0,x_1\in \mathcal{X}$ with $x_0 \neq x_1;$
$(\textrm{iii})$ the $n$-isometry $f$ preserves the barycenter of a triangle in $\mathcal{X}$, i.e., $f(\frac{x_0+x_1+x_2}{3})=\frac{f(x_0)+f(x_1)+f(x_2)}{3}$ for all $x_0,x_1,x_2\in \mathcal{X}$ satisfying that $x_0,x_1,x_2$ are not $2$-collinear.
\end{lm}
\begin{pf}
Firstly, the equivalence between $(\textrm{i})$ and $(\textrm{ii})$ is obvious.
Thus it suffices to show that $(\textrm{ii})$ is equivalent to $(\textrm{iii}).$
Assume that the $n$-isometry $f$ preserves the barycenter of a triangle in $\mathcal{X}.$
Let $x_0,x_1$ be in $\mathcal{X}$ with $x_0 \neq x_1.$
Since the $n$-isometry $f$ preserves the $2$-collinearity, $f(x_0),f(\frac{x_0+x_1}{2}),f(x_1)$ are $2$-collinear.
So
\begin{equation}\label{eq31}
f(\frac{x_0+x_1}{2})-f(x_0)=s\Big(f(x_1)-f(x_0)\Big)
\end{equation}
for some element $s$ of $\mathcal{K}$.
By the hypothesis on the dimension of $\mathcal{X}$, we can choose an element $x_2$ of $\mathcal{X}$ such that $x_0,x_1$ and $x_2$ are not $2$-collinear.
Since $x_2,\frac{x_0+x_1+x_2}{3},\frac{x_0+x_1}{2}$ are $2$-collinear, we have that $f(x_2),f(\frac{x_0+x_1+x_2}{3}),f(\frac{x_0+x_1}{2})$ are also $2$-collinear by Remark~\ref{rk33}.
So we obtain that
\begin{equation}\label{eq32}
f(\frac{x_0+x_1+x_2}{3})-f(x_2)=t\Big(f(\frac{x_0+x_1}{2})-f(x_2)\Big)
\end{equation}
for some element $t$ of $\mathcal{K}$.
By the equations (\ref{eq31}), (\ref{eq32}) and the barycenter preserving property for the $n$-isometry $f$, we have $$\frac{f(x_0)+f(x_1)+f(x_2)}{3}-f(x_2)=t\Big(f(x_0)+sf(x_1)-sf(x_0)-f(x_2)\Big).$$
Thus we get $$\frac{f(x_0)+f(x_1)-2f(x_2)}{3}=t(1-s)f(x_0)+tsf(x_1)-tf(x_2).$$
So we have the following equation $$\frac{2}{3}\Big(f(x_0)-f(x_2)\Big)-\frac{1}{3}\Big(f(x_0)-f(x_1)\Big)=t\Big(f(x_0)-f(x_2)\Big)-ts\Big(f(x_0)-f(x_1)\Big).$$
By a calculation, we obtain
\begin{equation}\label{eq33}
(\frac{2}{3}-t)\Big(f(x_0)-f(x_2)\Big)+(-\frac{1}{3}+ts)\Big(f(x_0)-f(x_1)\Big)=0.
\end{equation}
Since $x_0,x_1,x_2$ are not $2$-collinear, $x_0-x_1,x_0-x_2$ are linearly independent.
Since $\dim{\mathcal{X}}\geq n,$ there are $x_3,\cdots,x_n\in \mathcal{X}$ such that $\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.$
Since $f$ is an $n$-isometry,
\begin{eqnarray*}
&&\|f(x_0)-f(x_1),f(x_0)-f(x_2),f(x_0)-f(x_3),\cdots,f(x_0)-f(x_n)\|\\
&&=\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.
\end{eqnarray*}
So $f(x_0)-f(x_1)$ and $f(x_0)-f(x_2)$ are linearly independent.
Hence, from the equation (\ref{eq33}), we have $\frac{2}{3}-t=0$ and $-\frac{1}{3}+ts=0.$
That is, we obtain $t=\frac{2}{3},\, s=\frac{1}{2},$
which imply the equation $$f(\frac{x_0+x_1}{2})=\frac{f(x_0)+f(x_1)}{2}$$ for all $x_0,x_1\in \mathcal{X}$ with $x_0\neq x_1.$
Conversely, $(\textrm{ii})$ trivially implies $(\textrm{iii})$.
This completes the proof of this lemma.
\end{pf}
\begin{rk}{\label{rk3-10}}
One can prove that the above lemma also holds in the case of linear $n$-normed spaces.
\end{rk}
\begin{thm}{\label{rk37}}
Let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces
over a linear ordered non-Archimedean field $\mathcal{K}$ with ${\mathcal{C}}=\{2^n|\,n\in{\mathbb{Z}}\}.$
If $f:\mathcal{X}\rightarrow \mathcal{Y}$ is an $n$-isometry, then $f(x)-f(0)$ is additive.
\end{thm}
\begin{pf}
Let $g(x):=f(x)-f(0).$
Then it is clear that $g(0)=0$ and $g$ is also an $n$-isometry.
From Theorem~\ref{thm38}, for $x_0,x_1\in{\mathcal{X}}(x_0\neq x_1),$ we have $$g\Big(\frac{x_0+x_1}{2}\Big)=\frac{g(x_0)+g(x_1)}{2}.$$
Hence, by Lemma~\ref{lm36}, we obtain that $g$ is additive, which completes the proof.
\end{pf}
In the remainder of this section, under conditions different from those of Theorem~\ref{rk37},
we also prove the Mazur-Ulam theorem on non-Archimedean $n$-normed spaces.
Firstly, we show that an $n$-isometry $f$ from a non-Archimedean $n$-normed space $\mathcal{X}$ to a non-Archimedean $n$-normed space $\mathcal{Y}$ preserves the barycenter of a triangle. That is, the $f$-image of the barycenter of a triangle is the barycenter of the corresponding triangle.
Then, using Lemma~\ref{lm36}, we also prove the Mazur-Ulam theorem (in its non-Archimedean $n$-normed space version) under these different conditions.
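The computations in the remainder of this section repeatedly use the following elementary observation. If ${\mathcal{C}}=\{3^n\,|\,n\in{\mathbb{Z}}\}$, as assumed below, then $3\in\mathcal{C}$, so $|3|=1$, and by the multiplicativity of the valuation
\[
\Big|\frac{1}{3}\Big|=|3|^{-1}=1.
\]
Hence scaling a vector by $\frac{1}{3}$ leaves every non-Archimedean $n$-norm unchanged.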
\begin{lm}\label{lm35}
Let $\mathcal{X}$ be a non-Archimedean $n$-normed space over a linear ordered non-Archimedean field $\mathcal{K}$
with ${\mathcal{C}}=\{3^n|\,n\in{\mathbb{Z}}\}$ and let $x_0,x_1,x_2$ be elements of $\mathcal{X}$ such that $x_0,x_1,x_2$ are not $2$-collinear.
Then $u:=\frac{x_0+x_1+x_2}{3}$ is the unique member of $\mathcal{X}$ satisfying
\begin{eqnarray*}
\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_0-x_1,x_0-u,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_1-x_2,x_1-u,x_1-x_3,\cdots,x_1-x_n\|\\
=\|x_2-x_0,x_2-u,x_2-x_3,\cdots,x_2-x_n\|
\end{eqnarray*}
for some $x_3,\cdots,x_n\in \mathcal{X}$ with $\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0$
and $u$ is an interior point of $\triangle_{x_0x_1x_2}.$
\end{lm}
\begin{pf}
Let $u:=\frac{x_0+x_1+x_2}{3}.$
Thus $u$ is an interior point of $\triangle_{x_0x_1x_2}.$
Since $\dim{\mathcal{X}}>n-1,$ there are $n-2$ elements $x_3,\cdots,x_n$ of $\mathcal{X}$ such that
$\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.$
Applying Lemma~\ref{lm31}, we have that
\begin{eqnarray*}
&&\|x_0-x_1,x_0-u,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-\frac{x_0+x_1+x_2}{3},x_0-x_3,\cdots,x_0-x_n\|\\
&=&|\frac{1}{3} |\,\|x_0-x_1,x_0-x_1+x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|.
\end{eqnarray*}
And we can also obtain that
\begin{eqnarray*}
&&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_1-x_2,x_1-u,x_1-x_3,\cdots,x_1-x_n\|\\
&=&\|x_2-x_0,x_2-u,x_2-x_3,\cdots,x_2-x_n\|.
\end{eqnarray*}
For the proof of uniqueness, let $v$ be another interior point of $\triangle_{x_0x_1x_2}$ satisfying
\begin{eqnarray*}
\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_0-x_1,x_0-v,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_1-x_2,x_1-v,x_1-x_3,\cdots,x_1-x_n\|\\
=\|x_2-x_0,x_2-v,x_2-x_3,\cdots,x_2-x_n\|
\end{eqnarray*}
with $\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.$
Since $v$ is an element of the set $\{t_0x_0+t_1x_1+t_2x_2 |\,t_0+t_1+t_2=1,\,t_i\in{\mathcal{K}},\,t_i>0\,\,{\text{for all}}\,\,i\},$
there are elements $s_0,s_1,s_2$ of $\mathcal{K}$ with $s_0+s_1+s_2=1,\,s_i>0$ such that $v=s_0x_0+s_1x_1+s_2x_2.$
Then we have
\begin{eqnarray*}
&&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-v,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-s_0x_0-s_1x_1-s_2x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,(s_0-1)x_0+s_1x_1+(1-s_0-s_1)x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,(s_0+s_1-1)x_0+(1-s_0-s_1)x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&|s_0+s_1-1|\,\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&|s_2|\,\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|
\end{eqnarray*}
and hence $|s_2|=1$ since $\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.$
Similarly, we obtain $|s_0|=|s_1|=1.$
By the hypothesis on $\mathcal{C},$ there are integers $k_0,k_1,k_2$ such that $s_0=3^{k_0},s_1=3^{k_1},s_2=3^{k_2}.$
Since $s_0+s_1+s_2=1,$ every $k_i$ is less than $0.$
So, one may let $s_0=3^{-n_0},s_1=3^{-n_1},s_2=3^{-n_2}$ with $n_0\geq n_1\geq n_2$ and $n_0,n_1,n_2\in\mathbb{N}.$
Assume that one of the above inequalities is strict.
Then $1=s_0+s_1+s_2=3^{-n_0}(1+3^{n_0-n_1}+3^{n_0-n_2}),$ i.e.,\,$3^{n_0}=1+3^{n_0-n_1}+3^{n_0-n_2}.$
This is a contradiction, because the left side is a multiple of $3$ whereas the right side is not.
Thus $n_0=n_1=n_2.$
Consequently, $s_0=s_1=s_2=\frac{1}{3}.$
This means that $u$ is unique.
\end{pf}
\begin{thm}\label{thm36}
Let $\mathcal{X},\mathcal{Y}$ be non-Archimedean $n$-normed spaces over a linear ordered non-Archimedean field $\mathcal{K}$
with ${\mathcal{C}}=\{3^n|\,n\in{\mathbb{Z}}\}$ and $f:\mathcal{X}\rightarrow\mathcal{Y}$ an interior preserving $n$-isometry.
Then the barycenter of a triangle is $f$-invariant.
\end{thm}
\begin{pf}
Let $x_0,x_1$ and $x_2$ be elements of $\mathcal{X}$ that are not $2$-collinear.
It is obvious that the barycenter $\frac{x_0+x_1+x_2}{3}$ of the triangle $\triangle_{x_0x_1x_2}$ is an interior point of the triangle.
By the assumption, $f(\frac{x_0+x_1+x_2}{3})$ is an interior point of the triangle $\triangle_{f(x_0)f(x_1)f(x_2)}.$
Since $\dim{\mathcal{X}}>n-1,$ there exist $n-2$ elements $x_3,\cdots,x_n$ in $\mathcal{X}$ such that
$\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|$ is not zero.
Since $f$ is an $n$-isometry, we have
\begin{eqnarray*}
&&\|f(x_0)-f(x_1),f(x_0)-f\Big(\frac{x_0+x_1+x_2}{3}\Big),f(x_0)-f(x_3),\cdots,f(x_0)-f(x_n)\|\\
&=&\|x_0-x_1,x_0-\frac{x_0+x_1+x_2}{3},x_0-x_3,\cdots,x_0-x_n\|\\
&=&|\frac{1}{3}|\,\|x_0-x_1,x_0-x_1+x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|f(x_0)-f(x_1),f(x_0)-f(x_2),f(x_0)-f(x_3),\cdots,f(x_0)-f(x_n)\|.
\end{eqnarray*}
Similarly, we obtain
\begin{eqnarray*}
&&\|f(x_1)-f(x_2),f(x_1)-f\Big(\frac{x_0+x_1+x_2}{3}\Big),f(x_1)-f(x_3),\cdots,f(x_1)-f(x_n)\|\\
&=&\|f(x_2)-f(x_1),f(x_2)-f\Big(\frac{x_0+x_1+x_2}{3}\Big),f(x_2)-f(x_3),\cdots,f(x_2)-f(x_n)\|\\
&=&\|f(x_0)-f(x_1),f(x_0)-f(x_2),f(x_0)-f(x_3),\cdots,f(x_0)-f(x_n)\|.
\end{eqnarray*}
From Lemma~\ref{lm35}, we get $$f\Big(\frac{x_0+x_1+x_2}{3}\Big)=\frac{f(x_0)+f(x_1)+f(x_2)}{3}$$
for all $x_0,x_1,x_2\in\mathcal{X}$ satisfying that $x_0,x_1,x_2$ are not $2$-collinear.
\end{pf}
\begin{thm} {\label{thm3.11}}
Let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces
over a linear ordered non-Archimedean field $\mathcal{K}$ with ${\mathcal{C}}=\{3^n|\,n\in{\mathbb{Z}}\}.$
If $f:{\mathcal{X}}\rightarrow{\mathcal{Y}}$ is an interior preserving $n$-isometry, then $f(x)-f(0)$ is additive.
\end{thm}
\begin{pf}
Let $g(x):=f(x)-f(0).$
One can easily check that $g(0)=0$ and $g$ is also an $n$-isometry.
Using a method similar to that of \cite[Theorem 2.4]{cck},
one can easily show that $g$ is also an interior preserving mapping.
Now, let $x_0,x_1,x_2$ be elements of ${\mathcal{X}}$ satisfying that $x_0,x_1,x_2$ are not $2$-collinear.
Since $g$ is an interior preserving $n$-isometry, by Theorem~\ref{thm36}, $$g\Big(\frac{x_0+x_1+x_2}{3}\Big)=\frac{g(x_0)+g(x_1)+g(x_2)}{3}$$
for any $x_0,x_1,x_2\in {\mathcal{X}}$ satisfying that $x_0,x_1,x_2$ are not $2$-collinear.
From Lemma~\ref{lm36},
we obtain that the interior preserving $n$-isometry $g$ is additive, which completes the proof.
\end{pf}
\end{document} |
\begin{document}
\title{ Prime polynomial values of \\ quadratic functions in short intervals}
\author{Sushma Palimar}
\address{
Department of Mathematics,\\
Indian Institute of Science,\\
Bangalore, Karnataka, India. }
\email{psushma@iisc.ac.in, sushmapalimar@gmail.com.}
\subjclass[2010]{11T55(primary),11P55,11N37.}
\begin{abstract}{In this paper we establish a function field analogue
of the Bateman-Horn conjecture in short intervals in the limit of a large finite field. To this end,
we start by counting prime polynomials generated by primitive
quadratic functions in short intervals.
We further
work out function field analogues of the cancellation of M\"obius sums and their correlations (Chowla-type sums),
and confirm that square-root cancellation in M\"obius sums is equivalent to square-root cancellation in Chowla-type sums.
}
\end{abstract}
\maketitle
\section{Introduction}
The well-known conjectures of Hardy-Littlewood and Bateman-Horn predict how often polynomials
take prime values. For example, let $f_{1}(T),\dots, f_{r}(T)$ be non-associate irreducible polynomials in $\mbox{$\mathbb{Z}$}[T]$,
with the leading coefficient of each
$f_{i}$ positive, and suppose that for each prime $p$ there exists $ n\in \mbox{$\mathbb{Z}$}$ such that
$p\nmid f_{1}(n)\cdots f_{r}(n)$.
Let $\pi_{f_1,f_2,\dots,f_r}(x)$ denote the number of positive integers $n\leq x$ such that
$ f_{1}(n),\dots,f_{r}(n)$ are all prime: \begin{equation}\label{eqNo.1}
\pi_{f_1,f_2,\dots,f_r}(x):= \#\{1\leq n\leq x: f_{1}(n),\dots,f_{r}(n) \text{ are all prime} \}.\end{equation}
The conjecture predicts that
\[ \pi_{f_1,\dots,f_r}(x)\sim\frac{C (f_{1},f_{2},\dots,f_{r})}{ \prod\limits_{i=1}^{r} \deg f_i}\frac{x}{(\log x)^{r} }, \]
where\[ C (f_{1},f_{2},\dots,f_{r}):=\underset{p \text{ prime}}{\bf{\prod}}\frac{1-\nu(p)/p}{({1-1/p})^{r}}, \]
$\nu(p)$ being the number of solutions to $f_{1}(T)\cdots f_{r}(T)\equiv {0}\pmod{p}$ in $\mbox{$\mathbb{Z}$}/p.$
The product $C (f_{1},f_{2},\dots,f_{r}) $ is called the Hardy-Littlewood constant associated to $f_{1},\dots,f_{r}$ \cite{kc}.
The only proved case of the Bateman--Horn conjecture is that of a single linear polynomial, which is Dirichlet's theorem on
primes in arithmetic progressions \cite{sb}. For $r=2$, $f_1(T)=T$, $f_2(T)=T+2$ in (\ref{eqNo.1}), the Bateman--Horn conjecture reduces to the
Hardy--Littlewood twin prime conjecture on the density of twin primes,
according to which the number of twin prime pairs less than $x$ is
\[\pi_{2}(x)\sim 2\prod_{p\geq 3} \frac{p(p-2)}{(p-1)^{2}}\frac{x}{(\log x)^{2}}.
\]
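As a numerical illustration (not part of the argument), the following Python sketch, with an assumed cutoff $X=10^5$, compares the twin prime count $\pi_{2}(x)$ with the Hardy--Littlewood prediction; the product over odd primes up to $X$ approximates the twin prime constant.

```python
import math

def sieve(n):
    """Boolean primality table up to n (sieve of Eratosthenes)."""
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, int(n**0.5) + 1):
        if is_p[i]:
            for j in range(i * i, n + 1, i):
                is_p[j] = False
    return is_p

X = 100_000
is_p = sieve(X)
twin_count = sum(1 for p in range(3, X - 1) if is_p[p] and is_p[p + 2])

# Truncated Hardy-Littlewood twin prime constant (product over odd primes)
C2 = 1.0
for p in range(3, X):
    if is_p[p]:
        C2 *= p * (p - 2) / (p - 1) ** 2

prediction = 2 * C2 * X / math.log(X) ** 2
print(twin_count, round(prediction))
```

At this range the crude $x/(\log x)^2$ approximation undershoots by roughly $20\%$; replacing it with the logarithmic integral $\int_2^x dt/(\log t)^2$ brings the prediction much closer.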
We derive the function field analogue of the Bateman--Horn conjecture in short intervals, in the limit of a large finite field.
\subsection*{Polynomial Ring and Prime Polynomials}
Let $\mbox{$\mathbb{F}$}_{q}[t]$ be the ring of polynomials over the finite field $\mbox{$\mathbb{F}$}_{q}$ with $q$ elements, where $q=p^{\nu}$ and $p$ is prime.
Let $\mathcal{P}_{n}=\{f\in \mbox{$\mathbb{F}$}_{q}[t]| \mathrm{deg} f=n \}$ be the set of all polynomials of degree $n$ and
$\mathcal{M}_{n}\subset\mathcal{P}_{n}$ be the subset of monic polynomials of degree $n$ over $\mbox{$\mathbb{F}$}_{q}$.
The polynomial ring $\mbox{$\mathbb{F}$}_q[t]$ over a finite field $\mbox{$\mathbb{F}$}_q$ shares several properties with the ring of integers, and the analogies between
number fields and function fields are fundamental in number theory. For instance, as a quantitative aspect of this analogy,
we have the Prime Polynomial Theorem, which states that the number $\pi_{q}(n)$ of monic irreducible polynomials of degree $n$ is \[
\pi_{q}(n)=\frac{q^{n}}{n}+ O\big(\frac{q^{n/2}}{n}\big), \quad q^{n}\rightarrow \infty.\]
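For small parameters this count can be checked directly. The following self-contained Python sketch (with the assumed toy choice $q=3$, $n=4$) compares a brute-force count of monic irreducible quartics over $\mathbb{F}_3$ with Gauss's exact formula $\pi_q(n)=\frac{1}{n}\sum_{d\mid n}\mu(d)q^{n/d}$, of which the main term $q^n/n$ above is the $d=1$ contribution.

```python
from itertools import product

q, n = 3, 4  # assumed small example: monic quartics over F_3

def polymul(a, b):
    """Multiply coefficient tuples (low-to-high degree) over F_q."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % q
    return tuple(out)

def monics(d):
    """All monic polynomials of degree d, as coefficient tuples."""
    return [(*c, 1) for c in product(range(q), repeat=d)]

# A monic degree-n polynomial is reducible iff it is a product of monic
# polynomials of degrees d and n-d for some 1 <= d <= n//2.
reducible = set()
for d in range(1, n // 2 + 1):
    for a in monics(d):
        for b in monics(n - d):
            reducible.add(polymul(a, b))
count = q**n - len(reducible)

def mu(d):
    """Mobius function of an integer, by trial factorization."""
    k, p = 0, 2
    while p * p <= d:
        if d % p == 0:
            d //= p
            if d % p == 0:
                return 0
            k += 1
        else:
            p += 1
    return (-1) ** (k + (d > 1))

gauss = sum(mu(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n
print(count, gauss)  # both equal 18 = (q^4 - q^2)/4
```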
The prime polynomial theorem for arithmetic progressions asserts that,
given a polynomial modulus $Q\in \mbox{$\mathbb{F}$}_{q}[t]$ of positive degree and a polynomial $A$ coprime to $Q$, the number
$ \pi_{q}(n; Q, A)$ of primes $P\equiv A\pmod Q$, $P\in \mathcal{M}_{n}$, satisfies
\[ \pi_{q}(n; Q, A)=\frac{\pi_{q}(n)}{\Phi(Q)}+O(\mathrm{deg}\, Q\cdot q^{n/2}),\]
where $\Phi(Q)$ is the number of coprime residues modulo $Q$. For $q\rightarrow \infty,$ the
main term is dominant as long as $\mathrm{deg}\, Q<n/2.$
In \cite{lbs},
Bary-Soroker considered the function field analogue of the Hardy--Littlewood prime tuple conjecture,
in the limit of a large finite field, for the functions $F_i=f+h_i$,
$h_i\in \mathbb{F}_q[t]$, $\deg(f)>\deg(h_i), \text{ for } i=1,2,\dots,n$. This result was established previously by
Bender and Pollack \cite{AP} for the case of two polynomials.
\subsection{Prime polynomials in short interval}
Some of the salient problems of prime number theory deal with the distribution of primes
in short intervals and arithmetic progressions.
To set up an equivalent problem for the polynomial ring $\mbox{$\mathbb{F}$}_{q}[t]$, we define short intervals in function fields, following the notation of \cite{KZ}.
For a nonzero polynomial $f\in \mbox{$\mathbb{F}$}_{q}[t]$, we define its norm by \(||f||:= \#\big(\mbox{$\mathbb{F}$}_q[t]/{(f)}\big)=q^{\mathrm{deg}\, f}\).
Given a monic polynomial $f\in \mathcal{M}_{n}$ of degree $n$, and $h<n$, the ``short interval'' around $f\in \mathcal{M}_{n}$
of diameter $q^{h}$
is the set
\begin{equation}I(f,h):=\{g\in\mbox{$\mathbb{F}$}_{q}[t]:\mathrm{deg}(f-g)\leq h\}=\{g\in \mbox{$\mathbb{F}$}_{q}[t]:||f-g||\leq q^{h}\} =f+ \mathcal{P}_{\leq h}.\end{equation}
Thus, every element of $I(f,h)$ is of the form $f+\sum_{i=0}^{h}a_it^i$, where ${\mathbf {a}}=(a_0,a_1,\dots,a_h)$ are
algebraically independent variables over $\mbox{$\mathbb{F}$}_{q}$.
The number of polynomials in this interval is \[H:=\#I(f;h)=q^{h+1}.\]
For $h=n-1$, $I(f,n-1)=\mathcal{M}_{n}$ is the set of all monic polynomials of degree $n$.
For $h<n$, if $||f-g||\leq q^{h}$, then $f$ is monic if and only if $g$ is monic.
Bank, Bary-Soroker and Rosenzweig \cite{Baroli}
obtained a result counting prime polynomials in the short interval $I(A,h)$
for the primitive linear function $f(t)+g(t)x$.
In \cite{Baro}, the function field analogue of the Hardy--Littlewood prime tuple conjecture for these primitive linear functions is resolved
in the short interval case.
\subsection*{Counting Prime polynomials and HIT}
To establish the function field analogue of counting prime polynomials in \textit{short intervals}, we start with the irreducible quadratic function
$F(x,t)=f(t)+x^{2}\cdot g(t) \in \mbox{$\mathbb{F}$}_{q}[t][x]$ with the following properties. Let
$f, g\in \mbox{$\mathbb{F}$}_{q}[t]$ be nonzero, relatively prime polynomials, with
$g(t)$ monic, the product
$f\cdot g $ not a square polynomial, and $\mathrm{deg}\,f<\mathrm{deg}\,g$. By this choice of $f\text{ and }g$, it is clear that
the function $F(x,t)= f(t)+x^2\cdot g(t)$ is irreducible in $x$. Its derivative with respect to $x$ is $2xg(t)\neq0$, which implies that
the function $ f(t)+g(t)x^2$, as a polynomial in $x$, is separable over $\mbox{$\mathbb{F}$}_{q}[t]$.
An element $\mathrm{h}$ of the short interval $I(p,m)=p+\mathcal{P}_{\leq m}$, with $\mathrm{deg}\,p>m$,
is of the form \begin{equation}\mathrm{h}(t)=p(t)+\sum_{i=0}^{m}a_it^i,\end{equation} where ${\mathbf{a}}=(a_0,a_1,\dots,a_m)$ are
algebraically independent variables over $\mbox{$\mathbb{F}$}_{q}$.
Technically, the problem of finding prime polynomials in a short interval is
to find the number of tuples $\mathbf{A}=(A_0,\dots,A_m)\in \mbox{$\mathbb{F}$}_{q}^{m+1}$
for which $F(\mathbf{A},t)$ is irreducible in $\mbox{$\mathbb{F}$}_{q}[t]$.
The key tool used is the Hilbert Irreducibility Theorem, which addresses the question: does
the specialization $\mathbf{a}\mapsto\mathbf{A}\in\mbox{$\mathbb{F}$}_{q}^{m+1}$ preserve irreducibility?
We have,\begin{equation}\label{maineq}
F(x,t)=f(t)+x^2g(t) \text{ for } f(t),g(t)\in \mbox{$\mathbb{F}$}_{q}[t]
\end{equation}
Then
\[F(\mathrm{h},t)=f(t)+g(t)\mathrm{h}^2 = f(t)+g(t)\Big\{p(t)+\sum\limits_{i=0}^{m}a_it^i\Big\}^{2},\]
therefore\begin{equation} \label{maineq1}F(\mathbf{a},t)= \tilde f(t)+g(t)\Big\{\big(\sum\limits_{j=0}^{m}a_jt^j\big)^2+2p(t)\sum\limits_{j=0}^{m}a_jt^j\Big\}\end{equation}
where \[\tilde f(t)=f(t)+g(t)p(t)^2 \]and\[ n=\mathrm{deg}\,F=\mathrm{deg}\,\tilde f=s+2k>\mathrm{deg}\,g,\]
with $s=\mathrm{deg}\,g$ and $k=\mathrm{deg}\,p$.
Under the above setup, we get an asymptotic for
\begin{equation} \pi_{q}(I(p,m))=\#\Big\{ h:=p(t)+\sum\limits_{j=0}^{m}A_jt^j\;\Big|\; F(\mathbf{A},t) \text{ is irreducible in } \mbox{$\mathbb{F}$}_{q}[t]\Big\},\end{equation}
and we have the following theorem.
\begin{theorem}\label{th1}
Let $n$ be a fixed positive integer and $q$ an odd prime power.
Then we have $\pi_{q}(I(p,m))=\frac{q^{m+1}}{n}+O_{n}(q^{m+\frac{1}{2}}).$
\end{theorem}
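Theorem \ref{th1} can be probed numerically for small parameters. The following Python sketch uses the assumed toy data $q=11$, $f=t+1$, $g=t^2+2$, $p=t^2$, $m=1$ (so $n=6$) and counts, by trial division, the specializations $(A_0,A_1)\in\mathbb{F}_{11}^2$ for which $f+g\mathrm{h}^2$ is irreducible; the count should be of the size of the main term $q^{m+1}/n\approx 20$, although at such small $q$ the error term is far from negligible.

```python
from itertools import product

q = 11  # assumed toy parameters: f = t+1, g = t^2+2, p = t^2, m = 1

def pmul(a, b):
    """Multiply coefficient lists (low-to-high degree) over F_q."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % q
    return out

def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, x in enumerate(a):
        out[i] = x % q
    for i, y in enumerate(b):
        out[i] = (out[i] + y) % q
    return out

def prem(a, b):
    """Remainder of a modulo monic b (coefficient lists, low-to-high)."""
    a = list(a)
    while len(a) >= len(b):
        c = a[-1]
        if c:
            shift = len(a) - len(b)
            for i, y in enumerate(b):
                a[shift + i] = (a[shift + i] - c * y) % q
        a.pop()
    return a

def monics(d):
    return [list(c) + [1] for c in product(range(q), repeat=d)]

def is_irreducible(F):
    """Monic F is irreducible iff it has no monic factor of degree <= deg(F)//2."""
    deg = len(F) - 1
    for d in range(1, deg // 2 + 1):
        for b in monics(d):
            if not any(prem(F, b)):
                return False
    return deg >= 1

f, g, p = [1, 1], [2, 0, 1], [0, 0, 1]
hits = 0
for A0, A1 in product(range(q), repeat=2):
    h = padd(p, [A0, A1])                     # h = p + A1*t + A0
    if is_irreducible(padd(f, pmul(g, pmul(h, h)))):
        hits += 1
print(hits, q**2 / 6)  # irreducible specializations vs main term q^{m+1}/n
```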
One of the basic forms of HIT states
that if $f(x_1,x_2,\dots,x_r,T_1,T_2,\dots,T_s)\in \mbox{$\mathbb{Q}$}[x_1,\dots,x_r,T_1,\dots,T_s]$ is irreducible, then there exists a specialization
$\mathbf{t}=(t_1,\dots,t_s)$
such that $f(x_1,\dots,x_r)=f(x_1,\dots,x_r,t_1,\dots,t_s)$ is irreducible in $\mbox{$\mathbb{Q}$}[x_1,\dots,x_r]$. If $r=1$,
consider $f$ as a polynomial in $x$ over the rational function field
$L=\mbox{$\mathbb{Q}$}(T_1,\dots,T_s)$, having roots $\alpha_1,\dots, \alpha_n$ in the algebraic closure $\bar L.$
If $f$ is irreducible and separable, then these roots are distinct, and we can consider the Galois group $G$ of $f$
over $L$ as a subgroup of the symmetric group $S_n$. Then there exists a specialization
$ \mathbf{t}\in \mbox{$\mathbb{Q}$}^{s}$ such that the resulting rational polynomial in $x$ is still irreducible and has Galois group
$G$ over $\mbox{$\mathbb{Q}$}$. In fact, if $\mathbf{t}$ is chosen in such a way that the specialized polynomial in $x$
is still of degree $n$ and separable, then its Galois group $G_\mathbf{t}$ over $\mbox{$\mathbb{Q}$}$ is a subgroup of $G$
(well-defined up to conjugation),
and it turns out that `almost all' specializations $\mathbf{t}$ preserve the Galois group, i.e., $G_\mathbf{t}=G$. Hence, we start by computing the
Galois group of $F(\mathbf{a},t)$ over $\bar \mbox{$\mathbb{F}$}_{q}(\mathbf{a})$.
In the sequel let $k=\bar \mbox{$\mathbb{F}$}_q$; we prove $\mathrm{Gal}(F({\mathbf{a}},t),k({\mathbf{a}})) = S_{n}$ in Section \ref{gal}.
\section{$\mathrm{Gal}(F({\mathbf{a}},t),k({\mathbf{a}})) = S_{n}$}
\label{gal}
\begin{theorem}\label{th2}
Let $k=\bar \mbox{$\mathbb{F}$}_{q}$, $q$ an odd prime power, and let ${\mathbf{ a}}=(a_0,a_1,\dots,a_m)$ be an $(m+1)$-tuple of variables,
$m\geq 2$.
Then, $\mathrm{Gal}(F({\mathbf{a}},t),k({\mathbf{a}}))$ = $S_{n}$.
\end{theorem}
To prove Theorem \ref{th2}, as a first step we show the following:
\subsection{$\mathrm{Gal}(F({\mathbf{a}},t),k({\mathbf{a}}))$ is doubly transitive}
\begin{proposition}\label{prop1}
The polynomial function $F(\mathbf{a},t)=\tilde f(t)+g(t)\{(\sum\limits_{j=0}^{m}a_jt^j)^2+2p(t)\sum\limits_{j=0}^{m}a_jt^j\}$
is separable in $t$ and irreducible in the ring $k({\mathbf{a}})[t]$.
\end{proposition}
\begin{proof}
To prove irreducibility of $F(\textbf{a},t)$ in $k({\textbf{a}})[t]$,
we consider $F({\textbf{a}},t)$ as a quadratic equation in the variable $a_0$
and show that its discriminant is not a square. We have from equation (\ref{maineq1}),
\[F({\textbf{a}},t)=\tilde f(t)+g(t)\{(\sum\limits_{j=0}^{m}a_jt^j)^2+2p(t)\sum\limits_{j=0}^{m}a_jt^j\}\]
\[F({\textbf{a}},t)=\tilde f(t)+g(t)\big\{l(t)^2+2l(t)p(t)\big\}\text{ where } l(t)=\sum\limits_{i=0}^{m}a_it^i.\]
Writing
$ F(\mathbf{a},t)$ as a quadratic equation in $a_0$, we have
\[ g(t)a_0^2+\{2g(t)(l_1(t)+p(t))\}a_0+\big\{\tilde f(t)+g(t)\{l_1(t)^2+2p(t)l_1(t)\}\big \}=0,\]
\[\text{ where }l_1(t)=\sum_{i=1}^{m}a_it^i.\]
The discriminant of the above equation is
\begin{equation}\label{eqdisc}
\Delta(F({\mathbf{a}},t))=
4g(t)^2\{l_1(t)+p(t)\}^2-4g(t)\{\tilde f(t)+g(t)\{l_1(t)^2+2p(t)l_1(t)\}\}\end{equation}
Substituting $\tilde f(t)=f(t)+g(t)p(t)^2$ in the second term of equation (\ref{eqdisc}), we have
\[4g(t)^2\{l_1(t)+p(t)\}^2 -4g(t)f(t)-4g(t)^2\{p(t)^2+l_1(t)^2+2p(t)l_1(t)\}.\] Hence,
\[\Delta(F({\mathbf{a}},t))=-4g(t)f(t)\neq 0.\]
Clearly, $\Delta(F({\mathbf{a}},t))=-4f(t)g(t)$ is not a square, by our choice of $f$ and $g$. Therefore, $F(\mathbf{a},t)$ is irreducible
in $k[a_1,\dots,a_m,t][a_0]=k[a_0,\dots,a_m,t]$. Hence, by Gauss's lemma, $F({\mathbf{a}},t)$ is irreducible in $k(a_0,\dots,a_m)[t]$.
As for the separability of $F(\mathbf{a},t)$ in $t$, note that
the irreducible polynomial $F(x,t)$ is separable in $x$, since the derivative of $F(x,t)$ with respect to $x$
is not the zero polynomial by the choice of $f$ and $g$.
Hence, a result of Rudnick \cite{zr} confirms that the polynomial $F(\mathbf{a},t)$ is separable in $t$.
\end{proof}
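The discriminant computation above can be double-checked numerically: the identity $B^2-4AC=-4fg$ for the quadratic $g\,a_0^2 + 2g(l_1+p)\,a_0 + \{\tilde f + g(l_1^2+2pl_1)\}$ is polynomial in $f$, $g$, $p$ and $l_1$, so it suffices to verify it at random values modulo an odd prime. A minimal Python sketch, with an assumed prime $q=101$ and scalars standing in for polynomial values at a point:

```python
import random

q = 101  # an assumed odd prime; f, g, p, l1 are random scalar stand-ins
random.seed(0)

# Check that the discriminant of F = f + g*h^2, viewed as a quadratic in a_0
# (where h = p + l1 + a_0), equals -4*f*g regardless of p and l1.
for _ in range(1000):
    f, g, p, l1 = (random.randrange(q) for _ in range(4))
    A = g                                  # coefficient of a_0^2
    B = (2 * g * (l1 + p)) % q             # coefficient of a_0
    C = (f + g * (l1 + p) ** 2) % q        # constant term
    disc = (B * B - 4 * A * C) % q
    assert disc == (-4 * f * g) % q
print("discriminant identity verified at 1000 random specializations")
```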
In the next proposition we prove that the Galois group of $F({\mathbf{a}},t)$ over $k(a_0,\dots,a_m)$ is doubly transitive with respect
to the action on the roots of $F.$
We quickly go through the definition of double transitivity as given in \cite[p.~119]{ssa}. Let $K$ be a field.
Consider a polynomial $f(y)=y^n+a_1y^{n-1}+\dots+a_n$ with $a_i\in K.$ We can factor $f$ as $f(y)=(y-\alpha_1)(y-\alpha_2)\cdots(y-\alpha_n)$,
where the roots $\alpha_i$ lie in some extension field of $K.$ Let $L=K(\alpha_1,\dots,\alpha_n)$; then $L$ is called the splitting field of $f$ over $K$.
The Galois group
of $L$ over $K$, denoted by $\mathrm{Gal}(L/K)$, is the group of all $K$-automorphisms of $L$,
i.e., those field automorphisms of $L$ which leave $K$ elementwise fixed. Assuming $L$ to be separable over $K$, and $f$ to have no multiple factors
in $K[y]$, every member of $\mathrm{Gal}(L/K)$ permutes $\alpha_1,\dots,\alpha_n$, and this gives
an injective homomorphism of $\mathrm{Gal}(L/K)$ into $S_{n}$, whose image is called the Galois group of $f$ over $K$, denoted $\mathrm{Gal}(f,K)$.
$\mathrm{Gal}(f,K)$ is transitive if and only if $f$ is irreducible in $K[y]$, and
$\mathrm{Gal}(f,K)$, a subgroup of the symmetric group $S_n$,
is $2$-transitive if and only if it is transitive and its one-point stabilizer $G_{\alpha_1}$ is transitive as a subgroup of $S_{n-1}$,
where, by definition, $G_{\alpha_1}=\{g\in G\mid g(\alpha_1)=\alpha_1\}$ is thought of as a
subgroup of the group of all permutations of the roots $\{\alpha_2,\dots,\alpha_n\}$ of $f.$ Note that if $G$ is
transitive, then all the one-point stabilizers $G_{\alpha_i}, i=1,2,\dots,n$, are isomorphic to each other.
To see the equational analogue of this, consider an irreducible polynomial $f(y)$ in $K[y]$. We throw away a root of $f$, say $\alpha_1$,
to get \[ f_1(y)=\frac{f(y)}{(y-\alpha_1)}=y^{n-1}+b_1y^{n-2}+\dots+b_{n-1}\in K(\alpha_1)[y].\] Then $f$ and $f_1$ are irreducible in $K[y]$ and
$K(\alpha_1)[y]$ respectively if and only if $\mathrm{Gal}(f,K)$ is doubly transitive \cite{ssa1}.
\begin{proposition}\label{prop2}
For, $F(\mathbf{a},t)$ defined above, the Galois group $\mathrm{G}$ of $F({\mathbf{a}},t)$
over $k({\mathbf{a}})$ is doubly transitive with respect to the
action on the roots of $F({\mathbf{a}},t)$.
\end{proposition}
\begin{proof}
Proposition \ref{prop1} implies that the Galois group $\mathrm{G}=\mathrm{Gal}(F({\mathbf{a}},t),k({\mathbf{a}}))$
is transitive.
We show that the Galois group $\mathrm{Gal}(F({\mathbf{a}},t),k({\mathbf{a}}))$
is doubly transitive by specializing $a_0=0.$
Under this specialization we have
\[\tilde F(a_1,...,a_m,t)=\tilde f(t)+g(t)\{(\sum\limits_{j=1}^{m}a_jt^j)^2+2p(t)\sum\limits_{j=1}^{m}a_jt^j\}\]
Let $\alpha \in k$ be a root of $\tilde f(t)$; by replacing $t$ with $t+\alpha$, we may assume that $\tilde f(0)=0.$ Hence,
$ f_{0}(t)=\tilde f(t)/t$ is a polynomial.
\[\tilde F(a_1,...,a_m,t)= t\big\{f_0(t)+g(t)\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{m}a_ia_jt^{i+j-1}+g(t)2p(t)\sum\limits_{j=1}^{m}a_jt^{j-1}\big\}\]
We first show that,\begin{equation}\label{irr-dou}
f_0(t)+g(t)\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{m}a_ia_jt^{i+j-1}+g(t)2p(t)\sum\limits_{j=1}^{m}a_jt^{j-1}
\end{equation}
is irreducible in $k(a_1,...,a_m)[t]$ and separable in $t$.
Separability of the polynomial in equation (\ref{irr-dou}) follows by applying the result in \cite{zr}.
Now, we show that equation (\ref{irr-dou}) is irreducible in $t$.
We prove this by writing
equation (\ref{irr-dou}) as a quadratic equation in $a_1$ and showing that the discriminant of this quadratic equation is $-4f(t)g(t)$,
which is not a square polynomial. Writing equation (\ref{irr-dou})
as a quadratic equation in $a_1$,
we have
\begin{equation}\label{disc-dou}
\begin{split}
&t\cdot g(t)a_1^2+ 2g(t)\Big\{\sum\limits_{j=2}^{m}a_jt^j+p(t)\Big\}a_1 +\\
&f_0(t)+g(t)\Big\{\sum\limits_{i=2}^{m}\sum\limits_{j=2}^{m}a_ia_jt^{i+j-1}+2p(t)
\sum\limits_{i=2}^{m}a_it^{i-1}\Big\}=0.
\end{split}
\end{equation}
The discriminant of the above quadratic equation (\ref{disc-dou}) in $a_1$ is
\[4g(t)^{2}\Big\{\sum\limits_{j=2}^{m}a_jt^j+p(t)\Big\}^2-4tg(t)\Big[f_0(t)+g(t)\Big\{\sum\limits_{i=2}^{m}\sum\limits_{j=2}^{m}a_ia_jt^{i+j-1}+2p(t)\sum\limits_{i=2}^{m}a_it^{i-1}\Big\}\Big].\]
Substituting $f_0(t)=\frac{1}{t}(f(t)+g(t)p(t)^2)$, this becomes
\[4g(t)^{2}\Big\{\sum\limits_{j=2}^{m}a_jt^j+p(t)\Big\}^2-4f(t)g(t)-4g(t)^2\Big\{\sum\limits_{j=2}^{m}a_jt^j+p(t)\Big\}^2=-4f(t)g(t).\]
Clearly, equation (\ref{irr-dou}) is irreducible in $k(a_1,...,a_m)[t]$ and separable in $t$.
Let $\mathrm{G}_t$ be the Galois group of $\tilde F(a_1,\dots,a_m,t)$ over $k(\alpha,a_1,\dots,a_m)$.
From the discussion above Proposition \ref{prop2}, it follows that $\mathrm{G}_t$ is a doubly transitive subgroup of the symmetric group
$S_{\mathrm{deg}\, \tilde f}$.
Since $\tilde F(a_1,\dots,a_m,t)$ is separable, the specialization induces an inclusion $\mathrm{G}_t\subset \mathrm{G}$, uniquely determined up to conjugation.
Hence, the stabilizer of a root of $F$ in $\mathrm{G}$ is transitive. Thus $\mathrm{G}$ is doubly transitive.
\end{proof}
\subsection*{ Proof of Theorem \ref{th2}}
\begin{proof}
We have already seen that the Galois group of $F({\mathbf{a}},t)$ over $k({\mathbf{a}})$ is doubly transitive. Hence, it only remains to show that the
Galois group of $F({\mathbf{a}},t)$ over $k({\mathbf{a}})$ contains a transposition.
To achieve this, we first show that under the specialization $a_m=0$, the polynomial $\tilde F(a_0,\dots,a_{m-1},t)$ has one double zero and $(n-2)$ simple zeros.
Let $$\tilde F(\mathbf{a},t)=F({\mathbf{a}},t)|_{a_m=0}.$$
\begin{definition}\label{Morse}
A polynomial $f$ is called a {\it Morse function} \cite{jp} if
\begin{enumerate}
\item $f(\beta_i)\neq f(\beta_j)$ for $i\neq j$, where $\beta_1,\beta_2,\dots,\beta_{n-1}$ are the zeros of the derivative $f^{\prime}$ of $f$; i.e., the critical values of $f$ are distinct.
\item The zeros $\beta_1,\beta_2,\dots,\beta_{n-1}$ of $f^{\prime}$ are simple; i.e., the
critical points of $f$ are non-degenerate.
\end{enumerate}
\end{definition}
It is well known that the discriminant of a monic separable polynomial
is given by \begin{equation}\label{disdef}\mathrm{disc}(F)=\pm \mathrm{Res}(F,F^{\prime}).\end{equation}
Proposition \ref{prop1} implies that the specialized polynomial $F({\mathbf{a}},t)|_{a_m=0}$ (equation (\ref{speceq}))
is separable in $t$ and irreducible
in $k(a_0,a_1,\dots,a_{m-1})[t]$. We have
\begin{equation}\label{speceq}\tilde F(a_0,...,a_{m-1},t)=\tilde f(t)+g(t)\{(\sum\limits_{j=0}^{m-1}a_jt^j)^2+2p(t)\sum\limits_{j=0}^{m-1}a_jt^j\}\end{equation}
Separability of $\tilde F$ implies that for
$(A_0,A_1,\dots,A_{m-1}) \in \bar k^{m}$, the system of equations below does not have a solution in the algebraic closure of $k$:
\begin{equation}\label{eq2}
\begin{cases}
\tilde F^{\prime}(\rho_i)=0\\
\tilde F^{\prime}(\rho_j)=0\\
\tilde F(\rho_i)=\tilde F(\rho_j) \text{ for some } \rho_i,\rho_j \text{ in the algebraic closure of } {k}.
\end{cases}
\end{equation}
This further implies that the critical values of $\tilde F(a_0,\dots,a_{m-1},t)$ are distinct, proving condition $(1)$ of Definition \ref{Morse}.
A detailed explanation is given in \cite[Sections 3 and 4]{CR}.
It remains to prove condition $(2)$ of Definition \ref{Morse}, i.e., that the critical points of $\tilde F$ are non-degenerate.
A small calculation shows that $\tilde F^{\prime}(t)$ and $\tilde F^{\prime\prime}(t)$ have no common root.
Thus, the critical points of $\tilde F$ are non-degenerate.
Hence, the function $\tilde F(a_0,\dots,a_{m-1},t)$ is Morse, and the polynomial $\tilde F(a_0,\dots,a_{m-1},t)$ has one double zero and $(n-2)$ simple zeros.
Hence a transposition in $G=\mathrm{Gal}(F({\mathbf{a}},t),k({\mathbf{a}}))$ is implied by \cite[Lemma~1]{Ho}, which is stated below.
\begin{lemma}\label{lem2}
Let $p$ be a prime number and $\mathfrak{p}$ a prime ideal of a number field $K$ satisfying $\mathfrak{p}\mid p.$
If $f(x)\equiv (x-c)^{2}\bar h(x) \pmod{p}$ for some $c\in \mbox{$\mathbb{Z}$}$ and a polynomial $\bar h(x)$ that is separable modulo $p$
and satisfies $\bar h(c)\not\equiv 0{\pmod {p}}$,
then the inertia group of $\mathfrak{p}$ over $\mbox{$\mathbb{Q}$}$ is either trivial or a group generated by a transposition.
\end{lemma}
By Proposition \ref{prop2}, the Galois group
$G$ of $F({\mathbf{a}},t)$ over $k({\mathbf{a}})$ is doubly transitive.
Any finite doubly transitive permutation group containing a transposition is the full symmetric group \cite[Lemma~4.4.3]{jp}.
Thus, $\mathrm{Gal}(F({\mathbf{a}},t), k({\mathbf{a}}))$ is isomorphic to the full symmetric group $S_{n}$.
This completes the proof of Theorem \ref{th2}.
\end{proof}
\section{Irreducibility Criteria}
Having shown that $\mathrm{Gal}(F({\mathbf{a}},t),k({\mathbf{a}}))= S_{n}$, we now obtain an asymptotic for the number of irreducibles in
$I(p,m)$, i.e., for the number of $\mathbf{A}=(A_0,A_1,\dots,A_m)\in \mbox{$\mathbb{F}$}_q^{m+1}$ for which the specialized polynomial $F({\mathbf{A}},t)$ is irreducible in $\mbox{$\mathbb{F}$}_{q}[t]$.
To attain this, we invoke an irreducibility criterion, as in \cite[Lemma~2.8]{lbmj}, which reduces the above problem of finding
irreducibles $h\in I(p,m)$ to counting rational points of an absolutely irreducible variety over the finite field $\mbox{$\mathbb{F}$}_{q}$.
The required asymptotic then follows by applying the Lang--Weil estimate.
We now have the following proposition.
\begin{proposition}\label{Theorem2}
Let ${\mathbf{a}}=(a_0,a_1,\dots,a_m)$ be an $(m+1)$-tuple of variables. Let $F({\mathbf{a}},t)\in \mbox{$\mathbb{F}$}_{q}[a_0,a_1,\dots,a_m,t]$
be a polynomial that is separable in $t$ and irreducible in the ring $k(\mathbf{a})[t]$ with $\mathrm{deg}_{t}F=n$. Let $L$ be the splitting field
of $F({\mathbf{a}},t)$ over $\mbox{$\mathbb{F}$}_{q}(\mathbf{a})$. Let $k$ be an algebraic closure of $\mbox{$\mathbb{F}$}_{q}$. Assume that, $\mathrm{Gal}(F,k(a_0,...,a_m))=S_n.$
Then the number of $\mathbf{A}=(A_0,\dots,A_m) \in \mbox{$\mathbb{F}$}_{q}^{m+1} $
for which the specialized polynomial $F(\mathbf{A},t)$ is irreducible is
$\frac{q^{m+1}}{n}\big(1+O_{n}(q^{-1/2})\big)$ as $q\rightarrow\infty$, with $n$ fixed.
\end{proposition}
\begin{proof}
This is proved in \cite[Lemma~2.1]{lbs}.
\end{proof}
\subsection*{Proof of Theorem \ref{th1}}
\begin{proof}
Let $n$ be a fixed positive integer, $q$ an odd prime power, and
\[F(\mathbf{a},t)= \tilde f(t)+g(t)(\sum\limits_{j=0}^{m}a_jt^j)^2+2p(t)g(t)\sum\limits_{j=0}^{m}a_jt^j.\]
We have seen, $\mathrm{Gal}(F(\mathbf{a},t),k(\mathbf{a}))=S_n$
and $F(\mathbf{a},t)$ satisfies all assumptions of Proposition \ref{Theorem2}.
Thus the number of $(A_0,\dots,A_m)\in \mbox{$\mathbb{F}$}_{q}^{m+1}$ for which $ F(\mathbf{A},t)$ is
irreducible in $\mbox{$\mathbb{F}$}_q[t]$ is \[ \frac{q^{m+1}}{n}+O_{n}(q^{m+\frac{1}{2}}).\]
This finishes the proof since this number equals $\pi_q(I(p,m))$. Hence, \[\pi_q(I(p,m))=\frac{\#I(p,m)}{n}+O_{n}(q^{m+\frac{1}{2}}).\]
\end{proof}
\section{Cycle structure, factorization type, Galois groups and conjugacy classes}
In previous sections, we obtained an asymptotic for the number of prime polynomials in the interval $I(p,m)$
for the function $F(x,t)=f(t)+x^2g(t)\in \mbox{$\mathbb{F}$}_{q}[t][x]$.
Here, we derive an equidistribution result (Theorem \ref{frob}) by the function field version of Chebotarev Density theorem.
We know that factorizations over $\mbox{$\mathbb{F}$}_{q}[t]$ resemble cycles of permutations; below we state some known results, mainly from \cite{BR} and \cite{anlz}.
By definition, $\mathcal{M}_{n}$,
the collection of monic polynomials
of degree $n$, consists of $q^{n}$ elements. A partition $\tau$ of a positive integer $n$ is a non-increasing sequence of positive integers
$(c_1,\dots,c_k)$ such that $|\tau|:=c_1+\cdots+c_k=n$.
\begin{definition}
Every monic polynomial $f\in \mbox{$\mathbb{F}$}_{q}[t]$ of degree $n$ has a factorization
$f=P_1\cdots P_k$ into
monic irreducible polynomials $P_1,\dots,P_k \in \mbox{$\mathbb{F}$}_{q}[t]$, which is unique up to rearrangement. Taking degrees,
we obtain a partition $\mathrm{deg}\, P_1 + \cdots +\mathrm{deg}\, P_k$ of $n=\mathrm{deg}\,f$;
the factorization type of $f$ is $$\tau_{f}=(\mathrm{deg}\,P_1,\dots,\mathrm{deg}\,P_k).$$ \end{definition}
\begin{definition}
Every permutation $\sigma \in S_n$ has a decomposition $\sigma=\sigma_1\cdots\sigma_k$ into
disjoint cycles $\sigma_1,\dots,\sigma_k$, unique up to rearrangement, where each fixed point of $\sigma$ counts as a cycle of length 1. If $|\sigma_i|$
is the length of the cycle $\sigma_i$, we obtain a partition of $n$, the cycle type of $\sigma$, given by
$$\tau_{\sigma}=(|\sigma_1|, \dots ,|\sigma_k|).$$
\end{definition}
For each partition $\tau\vdash n$,
the probability that a random permutation on $n$ letters has cycle type $\tau$ is given by Cauchy's formula:
\begin{equation}
\mathbb{P}(\tau_{\sigma}=\tau)=\frac{\#\{\sigma \in S_n:\tau_{\sigma}=\tau\}}{\# S_n}=\prod\limits_{j=1}^{n}\frac{1}{j^{c_j}\cdot c_j!},\end{equation}
where $c_j$ denotes the number of parts of $\tau$ equal to $j$.
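Cauchy's formula is easy to verify exhaustively for small $n$. The following Python sketch (with the assumed value $n=5$) compares the empirical distribution of cycle types over all of $S_5$ with the formula.

```python
from itertools import permutations
from collections import Counter
from math import factorial

n = 5  # small enough to enumerate S_n exhaustively

def cycle_type(perm):
    """Cycle type of a permutation of {0,...,n-1}, as a sorted partition."""
    seen, lengths = set(), []
    for s in range(len(perm)):
        if s in seen:
            continue
        length, j = 0, s
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

counts = Counter(cycle_type(p) for p in permutations(range(n)))

def cauchy(tau):
    """Cauchy's formula: prod_j 1/(j^{c_j} c_j!), c_j = #parts equal to j."""
    prob = 1.0
    for j in set(tau):
        c = tau.count(j)
        prob /= j**c * factorial(c)
    return prob

for tau, cnt in counts.items():
    assert abs(cnt / factorial(n) - cauchy(tau)) < 1e-12
print(len(counts), "cycle types, all matching Cauchy's formula")
```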
As $q\rightarrow \infty$, the distribution of factorization types over $\mathcal{M}_{n}$ tends to the distribution of cycle types in $S_{n}$ \cite{anlz}.
\begin{proposition}
For a partition $\tau\vdash n$,
\[ \lim\limits_{q\rightarrow \infty} \mathbb{P}_{f\in \mathcal{M}_{n}}(\tau_{f}=\tau)=\mathbb{P}_{\sigma\in S_n}(\tau_{\sigma}=\tau),\]
where $\mathbb{P}_{f\in \mathcal{M}_{n}}(\tau_{f}=\tau):=\frac{1}{q^{n}}\#\{f\in \mathcal{M}_{n}:\tau_{f}=\tau\}$.
\end{proposition}
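The convergence in the proposition can be made explicit in degree $3$: the number of monic irreducibles of degree $d$ over $\mathbb{F}_q$ is $N_1=q$, $N_2=(q^2-q)/2$, $N_3=(q^3-q)/3$, and unique factorization turns these into exact counts of the factorization types of monic cubics. A Python sketch (the sample values of $q$ are arbitrary):

```python
from math import comb

# Exact counts of factorization types of monic cubics over F_q, assembled
# from the counts of monic irreducibles of degrees 1, 2, 3
for q in (5, 25, 625):
    N1, N2, N3 = q, (q * q - q) // 2, (q**3 - q) // 3
    counts = {
        (3,):      N3,               # irreducible cubic  <-> 3-cycle  (-> 1/3)
        (2, 1):    N2 * N1,          # quadratic * linear <-> 2-cycle  (-> 1/2)
        (1, 1, 1): comb(N1 + 2, 3),  # multiset of 3 linears <-> id    (-> 1/6)
    }
    assert sum(counts.values()) == q**3  # every monic cubic counted once
    print(q, {tau: round(c / q**3, 3) for tau, c in counts.items()})
```

As $q$ grows, the three ratios approach $1/3$, $1/2$, $1/6$, the cycle type probabilities in $S_3$.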
We consider specializations $F(\mathbf{A},t)$ as described in previous sections, where $\mathbf{A}=(A_0,A_1,\dots,A_m)\in \mbox{$\mathbb{F}$}_{q}^{m+1}$.
For such an $\mathbf{A}$, denote by $\Theta(F(\mathbf{A},t))$ the conjugacy class in $S_n$ of permutations with
cycle structure $(d_1,d_2,\dots,d_r)$; this is the factorization class of $F(\mathbf{A},t)$.
For a separable polynomial $f\in \mbox{$\mathbb{F}$}_{q}[t]$ of degree $n$, the Frobenius map $\mathrm{Fr}_{q}: y\mapsto y^{q}$ defines a permutation of the roots of $f$,
which gives a well-defined conjugacy class $\Theta(f)$ of the symmetric group $S_n.$ The degrees of the prime factors of $f$ correspond to
the cycle lengths of $\Theta (f)$. In particular, $f$ is irreducible if and only if $\Theta(f)$ is the conjugacy class of a full cycle.
It is known that for any fixed conjugacy class $C$ of $S_n$, the probability that $\Theta(F(\mathbf{A},t))=C$ as $\mathbf{A}$ ranges over $\mbox{$\mathbb{F}$}_{q}^{m+1}$
is determined by the Galois group $G$ of the polynomial $F(\mathbf{a},t)$ over the field $\mbox{$\mathbb{F}$}_{q}(\mathbf{a})$ together with its standard action on the roots,
up to an error term of $O_{m, \mathrm{deg}F}(q^{-\frac{1}{2}})$. Hence, we have
\begin{theorem}\label{frob}
Let ${\mathbf{a}}=(a_0,a_1,\dots,a_m)$ be an $(m+1)$-tuple of variables over $\mbox{$\mathbb{F}$}_{q}$.
Let $k$ be an algebraic closure of $\mbox{$\mathbb{F}$}_{q}$. Let $F({\mathbf{a}},t)\in \mbox{$\mathbb{F}$}_{q}[a_0,a_1,...,a_m,t]$
be a polynomial that is separable in $t$ and irreducible in the ring $k(\mathbf{a})[t]$ with $\mathrm{deg}_{t}F=n$. Let $L$ be the splitting field
of $F({\mathbf{a}},t)$ over $\mbox{$\mathbb{F}$}_{q}(\mathbf{a})$. Assume that, $G=\mathrm{Gal}(F,k(a_0,...,a_m))=S_n.$
Then for every conjugacy class $C$ in $S_n$
\[\#\{h\in I(p,m)\mid \Theta(F({\mathbf{A}},t))=C\}= \frac{|C|}{|G|}q^{m+1}(1+O_{n}(q^{-\frac{1}{2}})).\]
\end{theorem}
\begin{proof}
The proof follows from Theorem 3.1 of \cite{anlz}.
\end{proof}
A variant application of Theorem 3.1 in \cite{anlz} is given in \cite[Theorem~2.2]{AE}.
\section{Bateman-Horn conjecture}
The classical Bateman-Horn problem is described in the introduction.
In \cite{AE}, Entin established an analogue of the Bateman--Horn conjecture under the following setup.
Let $F_1,\dots,F_m\in \mbox{$\mathbb{F}$}_{q}[t][x]$, $\mathrm{deg}_{x}F_i>0$, be
non-associate, irreducible and separable over $\mbox{$\mathbb{F}$}_{q}(t)$, and let $n$ be a natural number.
Let $a_0,a_1,\dots,a_n$ be free variables,
$\mathrm{f}=a_nt^n+\dots+a_0\in\mbox{$\mathbb{F}$}_{q}[\mathbf{a},t]$ and $N_i=\mathrm{deg}_{t}F_i(t,\mathrm{f})$.
Under the above assumptions, the following theorem is established.
\begin{theorem}\label{entbat}
Let $F_1,\dots,F_m\in \mbox{$\mathbb{F}$}_{q}[t][x]$, $\mathrm{deg}_{x}F_{i}=r_i>0$, be non-associate irreducible polynomials which are separable over
$\mbox{$\mathbb{F}$}_{q}(t)$ (i.e., $F_i\not\in \mbox{$\mathbb{F}$}_q[t][x^p]$) and monic in $x$. Let $n$ be a natural number satisfying $n\geq3$ and $n\geq \mathrm{sl}\, F_i$
for $1\leq i\leq m$. Denote $N_i=r_in$. Denote by $\mu_i$ the number of irreducible factors into which $F_i(t,x)$ splits over $\bar \mbox{$\mathbb{F}$}_q$.
Then \[\#\{f\in \mbox{$\mathbb{F}$}_q[t], \mathrm{deg}\, f=n\mid F_i(t,f)\in \mbox{$\mathbb{F}$}_{q}[t]\text{ is irreducible for $i=1,2,\dots,m$}\}\]
\[=\big(\prod\limits_{i=1}^{m}\frac{\mu_i}{N_i}\big)q^{n+1}(1+O_{m,\mathrm{deg}F_i,n}(q^{\frac{-1}{2}}))\]
\end{theorem}
We study the function field version of the Bateman--Horn conjecture for the polynomial functions defined in equation (\ref{maineq}), namely
$F_i=f_i+g_ix^2 \in \mbox{$\mathbb{F}$}_{q}[t][x]$.
We obtain the following result.
\begin{theorem}\label{bat}
Let $F_1,\dots,F_r\in \mbox{$\mathbb{F}$}_{q}[t][x]$ be distinct primitive quadratic functions, each satisfying all conditions of Proposition \ref{Theorem2}.
Then \[\#\{h:=p(t)+\sum\limits_{j=0}^{m}A_jt^j|F_i(\mathbf{A},t)\in \mbox{$\mathbb{F}$}_{q}[t]\text{ is irreducible for $i=1,2,...,r$} \}\]\[=
\big(\prod\limits_{i=1}^{r}\frac{1}{n_i}\big)q^{m+1}(1+O_{n,r}(q^{\frac{-1}{2}}))\]
\[=\big(\prod\limits_{i=1}^{r}\frac{1}{n_i}\big)\#I(p,m)+O_{n,r}(q^{m+{\frac{1}{2}}})\]
\end{theorem}
\begin{proof}
The proof of this theorem is complete once we show that the Galois group $G$ is the full permutation group $S_{n_{1}}\times\dots\times S_{n_{r}}$
acting on the roots of the $F_{i}(\mathbf{a},t)$ over $k(\mathbf{a})$.
From Theorem \ref{th2}, we see that $\mathrm{Gal}(F_i({\mathbf{a}},t), k({\mathbf{a}}))\cong S_{n_i}$. To show that
the Galois group $G$ is the full permutation group $S_{n_{1}}\times\dots\times S_{n_{r}}$,
we need to show the multiplicative independence of the $\mathrm{disc}_{t}F_i(\mathbf{a},t)$ modulo squares, i.e., that the
$\mathrm{disc}_{t}F_{i}(\mathbf{a},t)$ are linearly independent as elements of $k(\mathbf{a})^{\times}/k(\mathbf{a})^{\times{2}}$; that is,
each $d_i$ is a non-square and $d_i, d_j$ for $i\neq j$ are relatively
prime in the ring $k(\mathbf{a},t)$.
Denote by $d_i=\mathrm{disc}_{t}F_{i}(\mathbf{a},t)$ the discriminant of $F_{i}(\mathbf{a},t)$ considered as a polynomial in $t$.
The discriminant of a
monic separable polynomial $f(t)$ is defined by the resultant of $f \text{ and } f^{\prime}$:
\[\mathrm{disc}(f)=\pm \mathrm{Res}(f,f^{\prime})=\pm\prod_{j=1}^{\nu}f(\tau_j), \text{ where }f^{\prime}=c\prod_{j=1}^{\nu}(t-\tau_j). \]
Since $\mathrm{Gal}(F_i({\mathbf{a}},t), k({\mathbf{a}}))$ is the full symmetric group, $d_i$ is not a square in $k(\mathbf{a})$ for any $i$.
If $d_i,d_j$ are not relatively prime in $k({\mathbf{a}})$, then they have a
common root, which gives the following system of equations:
\begin{equation}
\begin{cases}
F^{\prime}(\rho_i)=0\\
F^{\prime}(\rho_j)=0\\
F(\rho_i)=F(\rho_j) \text{ for some } \rho_i,\rho_j \in \bar{k}.
\end{cases}
\end{equation}
But we have seen that this system does not have a solution for any $\rho_i,\rho_j$ in the algebraic closure of $k$ \cite[p.~3]{CR}.
Hence, the Galois group $\mathrm{G}=S_{n_{1}}\times\dots\times S_{n_{r}}$.
The rest of the proof follows from \cite[Theorem~3.1]{Baro}.
Thus \[\#\{h:=p(t)+\sum\limits_{j=0}^{m}A_jt^j |F_i(\mathbf{A},t)\in \mbox{$\mathbb{F}$}_{q}[t]\text{ is irreducible for $i=1,2,...,r$} \}\]
\[=
\big(\prod\limits_{i=1}^{r}\frac{1}{n_i}\big)q^{m+1}(1+O_{n,r}(q^{\frac{-1}{2}}))\]
\end{proof}
\section{M\"obius sums and Chowla's conjecture}
The Mertens function, given by the partial sums of the M\"obius function, $M(n):=\sum\limits_{k=1}^{n}\mu(k)$, is of great
importance in number theory. For example, the Prime Number Theorem is logically
equivalent to \begin{equation}\sum\limits_{k=1}^{n}\mu(k)=o(n)\end{equation} and to
\begin{equation}\sum\limits_{k=1}^{n}\frac{\mu(k)}{k}=o(1),\end{equation}
while the Riemann Hypothesis is equivalent to
\begin{equation}M(n)=O(n^{\frac{1}{2}+\epsilon})\text{ for all }\epsilon>0.\end{equation}
Thus $M(n)$ is expected to exhibit square root cancellation.
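The following Python sketch (an illustration only, with the assumed cutoff $N=10^5$) computes $M(n)$ with a divisor-sum sieve based on the identity $\sum_{d\mid n}\mu(d)=[n=1]$ and records the largest ratio $|M(n)|/\sqrt{n}$ for $2\le n\le N$; in this range the ratio stays below $1$, consistent with square root cancellation.

```python
N = 100_000

# Sieve mu via sum_{d | n} mu(d) = [n == 1]: once all proper divisors d of m
# have been processed, mu[m] = -sum_{d | m, d < m} mu[d].
mu = [0] * (N + 1)
mu[1] = 1
for n in range(1, N + 1):
    for m in range(2 * n, N + 1, n):
        mu[m] -= mu[n]

M, worst = mu[1], 0.0
for n in range(2, N + 1):
    M += mu[n]
    worst = max(worst, abs(M) / n**0.5)
print(round(worst, 3))
```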
Keating and Rudnick established the function field version of square root cancellation of M\"obius sums in short intervals \cite{KZ}.
Carmon and Rudnick resolved the function field version of Chowla's conjecture on the auto-correlation of the M\"obius function \cite{CR}
over a large finite field, proving the following result:
for $r,n\geq 2$, distinct polynomials $\alpha_1,\dots,\alpha_r\in\mbox{$\mathbb{F}$}_{q}[X]$ of degree
smaller than $n$, $q$ odd, and $(\epsilon_1,\dots,\epsilon_r)\in\{1,2\}^{r}$ not all even,
\begin{equation}
\sum_{\mathrm{deg}\, F=n}\mu(F+\alpha_1)^{\epsilon_1}\cdots\mu(F+\alpha_r)^{\epsilon_r}=O(rnq^{n-\frac{1}{2}}).
\end{equation}
We show that there is square root cancellation in M\"obius sums, as well as in
the auto-correlation type sums appearing in Chowla's conjecture, for the function $F(x,t)=f(t)+g(t)x^2$ of equation (\ref{maineq}) in short intervals of the form $I(p,m)$,
in the large finite field limit.
\newline
For polynomials over a finite field $\mbox{$\mathbb{F}$}_q$, the M\"obius function of a nonzero polynomial $F\in \mbox{$\mathbb{F}$}_{q}[x]$ is defined
by $\mu(F)=(-1)^{r}$ if $F=cP_1\cdots P_r$
with $0\neq c\in \mbox{$\mathbb{F}$}_{q}$ and $P_1,\dots,P_r$ distinct monic irreducible polynomials, and $\mu(F)= 0$ otherwise.
The analogue of the full sum $M(n)$ is the sum over all monic polynomials $\mathcal{M}_n$ of given degree $n$, for which
we have \[
\sum_{f\in \mathcal{M}_n}\mu(f) =
\begin{cases}
1, & n=0\\
-q, &n=1\\
0, &n\geq 2.
\end{cases}
\]
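These values can be reproduced by brute force over a small field. The following Python sketch (with the assumed choice $q=3$ and degrees $n\le 3$) computes $\mu$ by trial division and sums it over $\mathcal{M}_n$.

```python
from itertools import product

q = 3  # assumed small field; degrees up to 3

def pdivmod(a, b):
    """Quotient and remainder of a by monic b (coefficient lists, low-to-high)."""
    a = list(a)
    quot = [0] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b):
        c = a[-1] % q
        if c:
            shift = len(a) - len(b)
            quot[shift] = c
            for i, y in enumerate(b):
                a[shift + i] = (a[shift + i] - c * y) % q
        a.pop()
    return quot, a

# monic linears and monic irreducible quadratics: enough to factor deg <= 3
SMALL = [[c, 1] for c in range(q)]
SMALL += [[c0, c1, 1] for c0, c1 in product(range(q), repeat=2)
          if all((c0 + c1 * x + x * x) % q for x in range(q))]

def mobius_poly(F):
    """mu(F) over F_q[t] for monic F of degree <= 3, by trial division."""
    F, r = list(F), 0
    for b in SMALL:
        Q, R = pdivmod(F, b)
        if not any(R):
            Q2, R2 = pdivmod(Q, b)
            if not any(R2):
                return 0        # repeated factor: not squarefree
            F, r = Q, r + 1
    if len(F) > 1:              # leftover irreducible factor (a cubic)
        r += 1
    return (-1) ** r

totals = [sum(mobius_poly(list(c) + [1]) for c in product(range(q), repeat=n))
          for n in range(4)]
print(totals)  # [1, -3, 0, 0]
```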
We set
\begin{equation}
S_{\mu}(p;m):=\sum\limits_{h\in I(p,m)}\mu(f+gh^2).
\end{equation}
In the next theorem, we demonstrate that
square root cancellation in Mobius sums is equivalent to square root cancellation in the auto-correlation of Mobius sums
in the short interval $I(p,m)$ in the large finite field
limit $q\rightarrow \infty$ with $\mathrm{deg}(p)$ fixed.
\begin{theorem}\label{th-mobi-sum}
\begin{enumerate}
\item
Let $F(\mathbf{a},t)$ satisfy the conditions of Proposition \ref{prop1} and $\mathrm{deg}F(\mathbf{a},t)=n$. Then for $m\geq 1$,
\begin{equation}\label{mobsum}
\big|S_{\mu}(p;m)\big| \ll_{n}\frac{\#I(p,m)}{\sqrt{q}},\end{equation} where the implied constant depends only on $n=\mathrm{deg}(F)$.
\item Let each $F_i$, of degree at most $n_i$, satisfy all the conditions of
Proposition \ref{prop1}. Then for $\epsilon_{i}\in\{1,2\}$,
not all even,
\begin{equation}\label{autocho}
\Big|\sum\limits_{h\in I(p,m)} \mu(F_1(\mathbf{a},t))^{\epsilon_1}\cdots\mu(F_r(\mathbf{a},t))^{\epsilon_r}\Big|\ll_{r,\mathrm{deg}F_i}\frac{\#I(p,m)}{\sqrt{q}}.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
The Mobius function $\mu(F)$ can be computed in terms of the discriminant $\mathrm{disc}(F)$ of $F(x)$ as (see \cite{kc1})
\(\mu(F)=(-1)^{\mathrm{deg} F}\chi_{2}(\mathrm{disc}(F)), \text{ where } \chi_{2}\)
is the quadratic character on $\mbox{$\mathbb{F}$}_{q}$.
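This discriminant identity can be spot-checked numerically. The sketch below (added for illustration) verifies it for every monic cubic over $\mathbb{F}_7$; the discriminant is computed over $\mathbb{Z}$ and reduced mod $p$, which is legitimate for monic polynomials:

```python
from itertools import product
from sympy import symbols, Poly
from sympy.ntheory import legendre_symbol

x = symbols('x')
p = 7  # an odd prime

def mu_poly(f):
    """Mobius function of a polynomial over F_p."""
    _, factors = f.factor_list()
    if any(mult > 1 for _, mult in factors):
        return 0
    return (-1) ** len(factors)

def chi2(a):
    """Quadratic character on F_p (equal to 0 at 0)."""
    a %= p
    return 0 if a == 0 else legendre_symbol(a, p)

mismatches = 0
for c in product(range(p), repeat=3):              # all monic cubics x^3 + ...
    f_mod = Poly([1] + list(c), x, modulus=p)      # reduction mod p, for factoring
    disc = Poly([1] + list(c), x).discriminant()   # integer discriminant
    if mu_poly(f_mod) != (-1) ** 3 * chi2(disc):
        mismatches += 1

print(mismatches)  # 0: the identity holds for all 343 monic cubics
```

Note that a repeated factor forces $\mathrm{disc}(F)\equiv 0$, so both sides vanish together.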
Then the sum in equation (\ref{mobsum}) becomes
\begin{equation}\label{disc-char}
S_{\mu}(p;m)=(-1)^{n}\sum\limits_{h\in I(p,m)}\chi_{2}\big(\mathrm{disc}(f(t)+g(t)h^2)\big).\end{equation}
To estimate the sum in equation (\ref{disc-char}), we follow the method of \cite{CR}.
Since $\mathrm{disc}(F)$ is a polynomial in the coefficients of $F$, equation (\ref{disc-char}) is an $(m+1)$-dimensional character sum;
it is evaluated by bounding the one-dimensional sum over the constant term and summing trivially over the remaining variables.
Writing
\begin{equation}
\begin{split}
&F(\mathbf{a},t)=\tilde F(\mathbf{a},t)+b,\\
&D_{F}(b):=\mathrm{disc}(\tilde F(\mathbf{a},t)+b),
\end{split}
\end{equation}
where $b:=F(0)$ is the constant term of $F(\mathbf{a},t)=f(t)+g(t)h^{2}$, so that $D_{F}(b)$
is a polynomial of degree $n-1$ in $b$. Therefore we have
\begin{equation}
\big|S_{\mu}(p;m)\big| \leq \sum\limits_{\mathbf{A}\in \mbox{$\mathbb{F}$}_{q}^{m}}\Big|\sum\limits_{b\in \mbox{$\mathbb{F}$}_q}\chi_{2}(D_{F}(b))\Big|.
\end{equation}
By Weil's theorem (the Riemann Hypothesis for curves over a finite field), for a polynomial $P(t)\in \mbox{$\mathbb{F}$}_{q}[t]$ of positive degree
which is not proportional to the square of another polynomial (see \cite{CR}),
\begin{equation}\label{cal-mobi}
\Big|\sum\limits_{t\in \mbox{$\mathbb{F}$}_{q}}\chi_{2}(P(t))\Big| \leq(\mathrm{deg}P-1)\sqrt{q},\quad P(t)\neq cH^{2}(t).
\end{equation}
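For concreteness, Weil's bound is easy to check numerically. The sketch below (added for illustration, using an arbitrary squarefree cubic over $\mathbb{F}_{101}$) computes the complete character sum and compares it with $(\deg P-1)\sqrt{q}$:

```python
from sympy.ntheory import legendre_symbol

q = 101  # an odd prime

def P(t):
    # an illustrative squarefree cubic: disc(t^3 + 2t + 1) = -59, nonzero mod 101
    return (t**3 + 2*t + 1) % q

S = 0
for t in range(q):
    v = P(t)
    if v:                              # chi2(0) = 0 contributes nothing
        S += legendre_symbol(v, q)

bound = (3 - 1) * q ** 0.5             # Weil: |S| <= (deg P - 1) sqrt(q)
print(abs(S), "<=", round(bound, 2))
```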
Proposition \ref{prop1} implies that $D_{F}(b)$ is not proportional to a square.
That $\chi_{2}(D_{F}(b))$ does not vanish identically follows from the fact that $F(x,t)$ is separable in $x$.
Hence we have
\begin{equation}
\big|S_{\mu}(p;m)\big| \leq (n-2)\,q^{m+\frac{1}{2}}.
\end{equation}
The implied constant depends only on $n=\mathrm{deg}F(\mathbf{a},t)$.
The proof of equation (\ref{autocho}) uses the same techniques as the proof of equation (\ref{mobsum}) of Theorem \ref{th-mobi-sum}.
From equation (\ref{cal-mobi}) we have
$$
\sum\limits_{{\mathbf{A}\in\mbox{$\mathbb{F}$}_{q}^{m}}}\Big|\sum\limits_{b\in{\mbox{$\mathbb{F}$}_q }}\chi_{2}\big(D_{F_1}(b)^{\epsilon_1}\cdots D_{F_r}(b)^{\epsilon_r}\big)\Big|\leq
\Big(2\sum\limits_{i=1}^{r}(n_i-1)-1\Big)q^{m+\frac{1}{2}},
$$
which gives
$$\sum\limits_{{\mathbf{A}\in\mbox{$\mathbb{F}$}_{q}^{m}}}\Big|\sum\limits_{b\in{\mbox{$\mathbb{F}$}_q }}\chi_{2}\big(D_{F_1}(b)^{\epsilon_1}\cdots D_{F_r}(b)^{\epsilon_r}\big)\Big|
\ll_{\mathrm{deg}F_i,\,r}\frac{\#I(p,m)}{\sqrt{q}}.$$
Hence we conclude that square root cancellation in Mobius sums is equivalent to square root cancellation in Chowla-type sums.
\end{proof}
\end{document}
\begin{document}
\title{Studying Free-Space Transmission Statistics and Improving Free-Space QKD in the Turbulent Atmosphere}
\author{C. Erven,$^1$ B. Heim,$^{1,2,3}$ E. Meyer-Scott,$^1$ J.P. Bourgoin,$^1$ R. Laflamme,$^{1,4}$ G. Weihs$^{1,5}$ and T. Jennewein$^{1}$}
\address{$^1$ Institute for Quantum Computing, Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada}
\address{$^2$ Max Planck Institute for the Science of Light, G\"unther-Scharowsky-Str. 1, Building 24, 91058 Erlangen, Germany}
\address{$^3$ Erlangen Graduate School in Advanced Optical Technologies (SAOT), University of Erlangen-Nuremberg, Paul-Gordan-Str. 6, 91052 Erlangen, Germany}
\address{$^4$ Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada}
\address{$^5$ Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstrasse 25, 6020 Innsbruck, Austria}
\eads{\mailto{cerven@iqc.ca}, \mailto{bettina.heim@mpl.mpg.de}, \mailto{thomas.jennewein@uwaterloo.ca}}
\date{\today}
\begin{abstract}
The statistical fluctuations in free-space links in the turbulent atmosphere are important for the distribution of quantum signals. To that end, we first study statistics generated by the turbulent atmosphere in an entanglement based free-space quantum key distribution (QKD) system. Using the insights gained from this analysis, we study the effect of link fluctuations on the security and key generation rate of decoy state QKD concluding that it has minimal effect in the typical operating regimes. We then investigate the novel idea of using these turbulent fluctuations to our advantage in QKD experiments. We implement a signal-to-noise ratio filter (SNRF) in our QKD system which rejects measurements during periods of low transmission efficiency, where the measured quantum bit error rate (QBER) is temporarily elevated. Using this, we increase the total secret key generated by the system from 78,009\unit{bits} to 97,678\unit{bits}, representing an increase of 25.2\% in the final secure key rate, generated from the \emph{same} raw signals. Lastly, we present simulations of a QKD exchange with an orbiting LEO satellite and show that an SNRF will be extremely useful in such a situation, allowing many more passes to extract a secret key than would otherwise be possible.
\end{abstract}
\maketitle
\section{Introduction}
Quantum key distribution (QKD), one of the first experimentally realizable technologies from the field of quantum information, has by now seen a number of robust implementations both in fibre \cite{SFIK11,PPAB09,SBCDLP09,GRTZ02} and free-space \cite{JRYYZXWYHJYYCPP10,EBHWSL09,ECLW08,PFHLK08,UTSWSLBJPTOFMRSBWZ07,NHMPW02}. Indeed, it has already reached the level of maturity so as to be offered as a commercial product from a number of companies \cite{MagiQ,idQuantique,QuintessenceLabs,SeQureNet}. While the fastest systems to date are based on fibre transmission media \cite{DYDSS08}, they will remain limited to a transmission distance of about 200\unit{km} until reliable quantum repeaters are realized. Even taking into account expected future advances in fibre, source, and detector technology, secure key distribution will still be limited to about 400\unit{km} using fibres.
QKD with orbiting satellites has long been proposed as a solution for global key distribution, as evidenced by the growing number of feasibility studies that have been conducted \cite{NHMPW02,RTGK02,AJPLZ03,AFMMCPAJUSBRLBWZ07,BMHHEHKHDGLLJ12}. QKD with low earth orbit (LEO) satellites likely represents the most feasible solution since they will have the shortest free-space transmission distance with the lowest losses. However, LEO satellites travel quickly with short orbital periods limiting the time available to perform QKD during a single pass to the order of 300\unit{sec} \cite{BMHHEHKHDGLLJ12,NHMPW02}. Thus, it is important to have a thorough understanding of the transmission properties of the free-space channel which the photons will travel through in order to properly evaluate the feasibility of such a system. As well, with such a short time to exchange a key, it is important to extract the most secure key bits from the relatively small number of signals sent and received during a pass.
To these ends, this article first reviews recent theoretical work on the transfer of quantum light and entanglement through the turbulent atmosphere. We then analyze experimentally measured free-space transmission efficiency curves obtained with an entanglement based free-space QKD system, discuss the implications of link fluctuations for decoy state QKD, and put forward a method for improving free-space QKD key rates in the turbulent atmosphere through the use of a signal-to-noise ratio filter (SNRF). Finally, we present the experimental results of implementing such a filter and discuss their implications for the security of the system.
\section{Free-Space Optical Link Statistics}
The propagation of classical light through turbulent atmosphere has long been of interest in theoretical investigations, including such diverse phenomena as diffraction, scintillation, and the absorption of light by molecules in the atmosphere which produce beam wander and broadening \cite{Kol41,Tat71,Fan75,Fan80,AP05,APY95}. Satellite based communication has also been investigated in the context of a turbulent atmosphere \cite{Fri67,Sha11,AP05,APY95}. From these studies it has been shown that the intensity fluctuations due to the turbulent atmosphere can be assumed to be log-normally distributed in the regime of weak fluctuations and strong losses. This has also been confirmed in various experiments (see e.g. \cite{MCPH04}).
Recently, Vasylyev, Semenov and Vogel \cite{VSV12,SV10,SV09} have provided a theoretical foundation for studying the influence of fluctuating loss channels on the transmission of quantum and entangled states of light. Like others \cite{Smi93,MCPH04}, in Refs. \cite{SV10,SV09} they approximate the probability distribution of the (fluctuating) atmospheric transmission coefficient (PDTC) in the case of entanglement distribution according to the log-normal distribution:
\begin{equation}\label{eq.PDTC}
\mathcal{P}(\eta_{atm}) = \frac{1}{\sqrt{2 \pi} \sigma \eta_{atm}} \exp \bigg[- \frac{1}{2} \bigg(\frac{\ln \eta_{atm} + \bar{\theta}}{\sigma} \bigg)^{2} \bigg]
\end{equation}
where $\eta_{atm}$ is the atmospheric transmittance, $\bar{\theta} = -\ln\langle\eta_{atm}\rangle$ is the negative logarithm of the mean atmospheric transmittance, and $\sigma$ is the variance of $\theta = -\ln \eta_{atm}$, characterizing the atmospheric turbulence.
Equation \ref{eq.PDTC} describes the transmission properties of an atmospheric channel only in a simplified way and ignores any phase-front fluctuations. This is sufficient for our analysis because our experiments rely on the direct detection of single photons, making the phase of the transmitted light irrelevant.
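As a quick sanity check of Eq. \ref{eq.PDTC} (an illustration added here, with assumed turbulence parameters $\bar{\theta}=2.0$ and $\sigma=0.3$ rather than measured values), one can sample $\theta=-\ln\eta_{atm}$ from a normal distribution, so that $\eta_{atm}$ is log-normal, and confirm that the analytic PDTC is properly normalized:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_bar, sigma = 2.0, 0.3    # assumed, illustrative turbulence parameters

# theta = -ln(eta_atm) is normal, so eta_atm is log-normally distributed
theta = rng.normal(theta_bar, sigma, size=100_000)
eta = np.exp(-theta)

def pdtc(eta_atm):
    """Log-normal probability distribution of the transmission coefficient."""
    return (np.exp(-0.5 * ((np.log(eta_atm) + theta_bar) / sigma) ** 2)
            / (np.sqrt(2 * np.pi) * sigma * eta_atm))

# Riemann sum of the density over its effective support: should be ~1
grid = np.linspace(1e-4, 1.0, 200_001)
norm = pdtc(grid).sum() * (grid[1] - grid[0])
print(round(norm, 4))
```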
\subsection{Measuring Free-Space Link Statistics with Entangled Photons}
To begin, we measured the free-space transmission efficiency statistics in our entanglement based QKD system. The system is comprised of a compact Sagnac interferometric entangled photon source \cite{EHRLW10,KFW06,FHPJZ07} operating at 808\unit{nm}, a 1,305\unit{m} free-space optical link where the outgoing/incoming beam is expanded/contracted by the use of appropriate telescopes (the telescopes have a 75\unit{mm} collection lens and a 25:1 magnification), two compact passive polarization analysis modules, quad module silicon avalanche photodiode single photon detectors (PerkinElmer SPCM-AQ4C, $\sim$50\% detection efficiency, $\sim$400 dark counts/sec), time-tagging units, GPS time receivers, two laptop computers, and custom written software \cite{ECLW08}. We choose to work at a wavelength of 808\unit{nm} to take advantage of a peak in the typical atmospheric transmission \cite{BMHHEHKHDGLLJ12} as well as high detection efficiency at that wavelength in our detectors. Usually there is a 10\unit{nm} interference filter at the entrance of the polarization detector box used to reject background light; however, we remove it for all experiments in this paper in order to simulate a scenario (such as a satellite link) with a higher background noise level in order to test the usefulness of our signal-to-noise ratio filter, described later.
Brida \emph{et al.} \cite{BGN00} were the first to suggest using two photon entangled states for the absolute quantum efficiency calibration of photodetectors. We adapt their method here to measure the PDTC of the free-space channel by first performing a local experiment with the same equipment (source, polarization analyzers, photon detectors) so that we can measure the various other efficiencies of the system. Then through comparison of the experiments performed locally and over the free-space we can extract the PDTC of the link.
In a local experiment we expect the number of counts per second seen by Alice ($N_{A}$) and Bob ($N_{B}$) to be given by
\begin{equation}\label{eq.AliceLocalSingles}
N_{A} = N \eta_{A} = N \eta_{A_{source}} \eta_{A_{pol}} \eta_{A_{det}}
\end{equation}
\begin{equation}\label{eq.BobLocalSingles}
N_{B} = N \eta_{B} = N \eta_{B_{source}} \eta_{B_{pol}} \eta_{B_{det}}
\end{equation}
where $N$ is the total number of pairs produced at the source per second, $\eta_{A}$ is Alice's total transmission efficiency (comprised of the source coupling efficiency, $\eta_{A_{source}}$, polarization analyzer efficiency, $\eta_{A_{pol}}$, and detector efficiency, $\eta_{A_{det}}$), and similarly for Bob. Additionally, the expected number of observed coincidences per second ($N_{coin}$) between Alice and Bob, found using a coincidence window ($\Delta t_{coin}$) to identify entangled photon pairs, is given by
\begin{equation}\label{eq.LocalCoins}
N_{coin} = N \eta_{A} \eta_{B}.
\end{equation}
Dividing the measured coincidence count rate ($N_{coin}$) by the observed singles rate at Alice ($N_{A}$) yields an estimate for the total loss caused by Bob's optics ($\eta_{B}$) including the source coupling, polarization analyzer, and photon detectors. Double pair emissions, where two photon pairs are created in the source crystal at once, could lead to corrections in Eqs. \ref{eq.LocalCoins}-\ref{eq.LinkAccidentals} at sufficiently high pump powers. However, for the experiments detailed here, the pumping strength was sufficiently low that double pair emissions were negligible and thus safely ignored.
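The efficiency extraction reduces to simple arithmetic. The toy numbers below (assumed values for illustration only, not measured quantities) show how $\eta_{B}$ drops out of the ratio $N_{coin}/N_{A}$ without any knowledge of the pair rate $N$:

```python
# toy values (assumed for illustration; not measured quantities)
N = 1_000_000               # entangled pairs produced per second at the source
eta_A, eta_B = 0.20, 0.15   # total transmission efficiencies of Alice and Bob

N_A = N * eta_A             # Alice's singles rate
N_coin = N * eta_A * eta_B  # coincidence rate in the local experiment

eta_B_est = N_coin / N_A    # recovers Bob's total efficiency, independent of N
print(eta_B_est)
```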
For experiments performed over the free-space link, the equation for Bob's singles rate gets modified to
\begin{eqnarray}\label{eq.BobLinkSingles}
N_{B} & = & N \eta_{B} + N_{background} \nonumber \\
& = & N \eta_{B_{source}} \eta_{B_{atm}} \eta_{B_{pol}} \eta_{B_{det}} + N_{background}
\end{eqnarray}
where his total transmission efficiency, $\eta_{B}$, now includes a term for the link transmission efficiency, $\eta_{B_{atm}}$, and an additional term, $N_{background}$, is added representing background photons which are collected and measured by Bob's receiver. The equation for the coincidence rate is similarly modified to
\begin{equation}\label{eq.LinkCoins}
N_{coin} = N \eta_{A} \eta_{B} + N_{accidental}
\end{equation}
where $N_{accidental}$ represents accidental coincidences of Alice's measurements with the background photons measured by Bob. Fortunately, the accidental rate, given to good approximation by
\begin{equation}\label{eq.LinkAccidentals}
N_{accidental} \approx N_{A} N_{B} \Delta t_{coin}
\end{equation}
can easily be estimated by counting the coincidences between Alice's measurements and Bob's measurements shifted by a few coincidence windows, and then subtracted from the results.
To find the free-space link PDTC we divide the coincidence rate ($N_{coin}$) observed during a link experiment by Alice's local single photon count rates ($N_{A}$) which gives the PDTC for Bob's total loss, $\eta_{B}$, including all of the losses in his equipment. Then, using the estimate from the local experiment, we divide out the losses from Bob's equipment leaving only the atmospheric transmission, $\eta_{B_{atm}}$, allowing us to construct the PDTC for the free-space channel. There is an alternative method for estimating the free-space link PDTC using only the singles rates from an experiment over a free-space link. However, the method just described using coincidences is more accurate than using just the singles rates since the only source of error is the accidental coincidence rate ($N_{accidental}$) which we can estimate and remove.
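The full extraction chain, from the link equations to $\eta_{B_{atm}}$, can be sketched with toy numbers (assumed for illustration only; the background rate loosely echoes the $\sim$2,700 counts/sec mentioned later in the text):

```python
# toy link-experiment values (assumed for illustration)
N = 1_000_000                 # pairs/s at the source
eta_A = 0.20                  # Alice's total efficiency
eta_B_equip = 0.15            # Bob's equipment efficiency, from the local run
eta_atm = 0.05                # true atmospheric transmittance, to be recovered
N_bg = 2_700                  # Bob's background count rate (counts/s)
dt_coin = 5e-9                # coincidence window (5 ns)

# simulated measured rates over the link
N_A = N * eta_A
N_B = N * eta_B_equip * eta_atm + N_bg
N_coin = N * eta_A * eta_B_equip * eta_atm + N_A * N_B * dt_coin

# accidentals estimated from time-shifted coincidences, then subtracted
N_acc = N_A * N_B * dt_coin
eta_atm_est = (N_coin - N_acc) / (N_A * eta_B_equip)
print(eta_atm_est)
```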
We studied three different scenarios with our system for the distribution of entangled photons over free-space channels corresponding to the following conditions: a maximum free-space transmission with optimized pointing and focusing parameters (Fig. \ref{fig.IQCPDTC} (a)), a transmission with artificially increased turbulence using a heat gun to heat the air immediately in front of the sending telescope (Fig. \ref{fig.IQCPDTC} (b)), and a defocused transmission as a way to simulate larger losses (Fig. \ref{fig.IQCPDTC} (c)). For each of these experiments, the data was broken up into blocks of a certain duration which we call the block duration and then the efficiency was estimated for each block using the method described above. These results are then summed up into a histogram, normalized, and displayed as the PDTC for that link. In all cases, the distributions are shown with a block duration of 10\unit{ms} since it has been shown that this is the typical timescale for atmospheric turbulence\cite{MCPH04}. All measurements were performed on August 24, 2011 between the hours of 12 and 1am since the system requires the reduced background experienced at night to operate, with a total data acquisition time for each experiment of 3 minutes. The temperature was approximately 18$^{\circ}$C with clear visibility in an urban environment with typical city illumination levels.
\begin{figure}
\caption{Probability distribution of the transmission coefficient (PDTC) for the case of (a) an optimized free-space channel, (b) a free-space channel with artificially increased turbulence using a heat gun placed in front of the sending telescope, and (c) a free-space channel where the beam is defocused in order to simulate larger losses. The detection (sampling) time was 10\unit{ms}.}
\label{fig.IQCPDTC}
\end{figure}
Fig. \ref{fig.IQCPDTC} (a) shows that we experienced extremely good atmospheric conditions during the experiments, since the observed transmission coefficient distribution for the well aligned link was very close to Poissonian. The term Poissonian here refers to the original graphs of integer photon counts versus the frequency with which they were observed. We would expect the transmission coefficient for a local system without a free-space link to be Poissonian in nature owing to the pair creation process and detection. The fact that we still observe a Poissonian distribution with a free-space link implies that our atmospheric conditions were very good, since the presence of the link did not alter the nature of the statistics.
The defocused transmission case, Fig. \ref{fig.IQCPDTC} (c), is also very close to a Poissonian distribution, only narrowed, with a decreased overall transmittance compared to Fig. \ref{fig.IQCPDTC} (a). This is expected since defocusing enlarged the beam beyond the size of the receiver telescope, smoothing out the fluctuations in the transmission efficiency experienced over the free-space link (i.e. bringing it even closer to a Poissonian distribution) while at the same time lowering the overall transmittance, since many more photons missed the receiver telescope and consequently were not collected and measured. For the experiment where turbulence was artificially added by letting the beam pass over hot air produced by a heat gun, Fig. \ref{fig.IQCPDTC} (b), the distribution indeed changes towards a log-normal distribution as predicted.
\section{Effect of Link Fluctuations on Decoy State QKD}
Having investigated the PDTC for a number of different free-space channels in the previous section, we now turn our attention to the question of what effect atmospheric turbulence might have on weak coherent pulse QKD with decoy states. Attenuated lasers, while convenient for QKD, do not emit true single photons but rather a mixture of photon number states following a Poissonian distribution. This limits the distance over which QKD can be performed, as Eve can perform a photon number splitting attack to gain full information on multi-photon pulses \cite{GLLP04}. This attack relies on Eve's ability to block single-photon pulses and thus modifies the channel transmission nonlinearly depending on the photon number. However, this attack can be detected through the use of decoy states of various pulse strengths \cite{Hwa03,LMC05}, and an additional step in the security phase which verifies that the channel transmission does not depend on the mean photon number. Thus, it is crucial for free-space QKD systems using decoy states to consider atmospheric fluctuations, since the security of the protocol depends strongly on the relative transmission of the various pulse strengths.
Here we investigate whether the assumption of a static channel for determining secure key length is valid when the channel is, in reality, fluctuating. We consider a one-decoy protocol from Ma \emph{et al.} \cite{MQZL05}, including the ``tighter bound'' from section E.2, along with the PDTC generated from the photon statistics in atmosphere taken from \cite{MCPH04}, and with a realistic error correction efficiency of $f(e) = 1.22$ used. Figures \ref{fig.WCPvsSigma} and \ref{fig.WCPvsLoss} compare the results from a simulation of secure key rates based on a simple static channel versus a channel fluctuating with a log-normal distribution.
\begin{figure}
\caption{Secure key rate versus the turbulence parameter, $\sigma$, comparing static (solid line) and fluctuating channel (dashed line) with same mean loss. Average channel losses are indicated for the four curves. Deviation is only apparent at very strong turbulence, meaning the static channel approximation is sufficient for most cases.}
\label{fig.WCPvsSigma}
\end{figure}
\begin{figure}
\caption{Secure key rate versus loss, comparing a static quantum channel (solid line) to a fluctuating free-space quantum channel (dashed line) with same mean loss. Figure a) considers ``good'' atmosphere with $\sigma=0.18$, resulting in no deviation between the static and fluctuating channel. Figure b) considers ``bad'' atmosphere with $\sigma=1.8$, resulting in less secure key for the fluctuating channel at low loss.}
\label{fig.WCPvsLoss}
\end{figure}
Fig. \ref{fig.WCPvsSigma} shows that approximating a fluctuating channel as a static channel with the same mean loss is sufficient so long as the atmosphere is not extremely turbulent. It should be noted that the turbulence model of Milonni \emph{et al.} \cite{MCPH04} begins to fail at extremely high turbulence levels possibly leading to inaccuracies in the extreme right portion of Fig. \ref{fig.WCPvsSigma}. Unfortunately, we are unaware of a correct high turbulence free-space link model to replace it with in this regime. Nevertheless, we can still draw important conclusions in the usual operating scenarios. Further, at moderate turbulence strengths Fig. \ref{fig.WCPvsLoss} shows that the static channel approximation is valid as long as the channel loss is above $\sim$15\unit{dB}, a typical condition in long distance free-space QKD (see Ref.~\cite{BMHHEHKHDGLLJ12} for example scenarios). Therefore, the security of weak coherent pulse QKD with decoy states is not significantly affected by a fluctuating free-space quantum channel as compared to the usual assumed static channel since the differences only arise in a situation where the high turbulence would likely make a successful transmission impossible or with a link with such low losses that the transmission distance is likely uninteresting. This also paves the way for checking whether the key rates could possibly be improved with a signal-to-noise ratio filter.
\section{Improving QKD with a Signal-to-Noise Ratio Filter}
Using the link statistics analysis and the data from the experiments above, we now investigate the use of a signal-to-noise ratio filter (SNRF) in order to increase the final key rate in QKD systems with a turbulent quantum transmission channel. The idea of the SNRF is to throw away data blocks where the signal-to-noise ratio (SNR) was low based on a directly measurable quantity, the signal strength, under the assumption that the noise caused by background events remains constant. While this has the consequence of decreasing the overall raw key rate, it is possible to actually improve the final secret key rate since we omit the blocks where the SNR was lower and correspondingly the QBER was inflated by the larger relative contribution from the background. We mention that similar filtering techniques have been explored in the continuous variable regime, though the effect of fading on CV quantum states is very different. Fading and excess noise quickly destroy Gaussian quantum features such as entanglement and purity of states; however, different possibilities have been offered to recover these \cite{HMDFLLA06,DLHMFLA08}.
We define the SNRF algorithm as follows. One begins by measuring the background contribution of the quantum free-space channel (in terms of a count rate) with the entangled source switched off. Then one defines the singles contribution from the source divided by the background contributions as the dimensionless SNR. One then throws away low SNR blocks where the background contribution is proportionally higher according to a preset SNR threshold. The idea can also be mapped to real coincidences from the source divided by background coincidences where these numbers now implicitly depend on the coincidence window used.
In the following, we implement the equivalent algorithm where, rather than using the dimensionless SNR, we instead use the singles rates to define our threshold. The SNR threshold is implicitly used in this protocol since the background noise is assumed to remain constant. Thus, examining the optimum singles rate threshold effectively amounts to finding the optimum SNR threshold, since one could calculate this number by first measuring the background, subtracting it from the total measured singles, and then dividing the remainder by the measured background to arrive at the SNR. In the remainder of this paper we will refer to all such equivalent protocols as SNRF algorithms.
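A minimal sketch of this thresholding (with synthetic, assumed numbers rather than the measured data) looks as follows; blocks whose singles count falls below the threshold are discarded, which raises the effective signal-to-background ratio of the retained data:

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic per-block counts (assumed): log-normal signal + constant-rate background
n_blocks = 2_000
bg_per_block = 2_700 * 0.030            # mean background counts in a 30 ms block
signal = rng.lognormal(mean=6.0, sigma=0.8, size=n_blocks)
singles = signal + rng.poisson(bg_per_block, size=n_blocks)

threshold = 500.0                       # SNRF threshold on the block singles count
keep = singles >= threshold             # discard low-transmission blocks

# blocks that survive have, on average, a higher signal-to-background ratio
snr_all = signal.mean() / bg_per_block
snr_kept = signal[keep].mean() / bg_per_block
print(f"kept {keep.mean():.0%} of blocks, SNR {snr_all:.1f} -> {snr_kept:.1f}")
```

The threshold value is illustrative; in the experiment it is optimized jointly with the block duration, as discussed in the next section.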
Fig. \ref{fig.SingleCoinQBER} shows (a) Alice's local rates (red curve) and Bob's singles count rates measured over the link (blue curve) along with the coincidence count rate (green curve) and (b) the corresponding QBERs measured in the Z (blue curve) and X (green curve)\footnote{The Z basis here refers to the basis of the Pauli $\sigma_{Z}$ operator (i.e. horizontal and vertical polarization), while the X basis refers to the basis of the Pauli $\sigma_{X}$ operator (i.e. +45$^{\circ}$ and -45$^{\circ}$ polarization).} bases when no SNRF is used, for the artificially increased turbulence experiment of Fig. \ref{fig.IQCPDTC} (b). Fig. \ref{fig.SingleCoinQBER} (c) shows Alice's and Bob's singles and coincidence rates when the optimum SNRF threshold of 95,000\unit{counts/sec} (discussed below) is applied, and (d) shows the corresponding QBER. The data points are grouped according to the optimum block duration of 30\unit{ms} (thus, each data point represents 30\unit{ms} worth of data) and a coincidence window of 5\unit{ns} is used. Here one can clearly see the high background detection rate experienced by Bob (a situation that will be typical of a QKD link to an orbiting satellite) as the flat bottom of his singles rate graph (Fig. \ref{fig.SingleCoinQBER} (a), blue curve), as well as the wildly varying coincidence rates (Fig. \ref{fig.SingleCoinQBER} (a), green curve), where the points close to the x-axis largely consist of accidental coincidences. From the blue curve of Fig. \ref{fig.SingleCoinQBER} (a), we can estimate Bob's mean background count rate at roughly 2,700\unit{counts/sec}. Each background count which is registered as a valid coincidence is uncorrelated with Alice's measurements and contributes a 50\% error rate to the QBER.
The SNRF idea is neatly illustrated here by the many high QBER values in Fig. \ref{fig.SingleCoinQBER} (b), corresponding to the low signal phases associated with Bob's low singles and coincidence rates in Fig. \ref{fig.SingleCoinQBER} (a). We know from the experiment corresponding to Fig. \ref{fig.IQCPDTC} (a) for the well aligned link that the intrinsic QBER is $\sim$2.34\%; however, the QBER observed for the turbulent link corresponding to Fig. \ref{fig.IQCPDTC} (b) and Fig. \ref{fig.SingleCoinQBER} (a) and (b) was instead $\sim$5.51\%. This increase in the measured QBER over the actual QBER of the system lowers the final secret key rate. However, one can see that when the low SNR regions are removed from the singles and coincidence graph (Fig. \ref{fig.SingleCoinQBER} (c)) using the optimum SNRF, many of the corresponding high QBER blocks (Fig. \ref{fig.SingleCoinQBER} (d)) are also removed. Thus, we are able to lower the measured QBER from $\sim$5.51\% to $\sim$4.30\%, a value closer to the intrinsic error rate of the system, allowing the system to generate many more secret key bits than would otherwise be possible.
\begin{figure}
\caption{Alice's local single count rate (red curves, left axis), Bob's single count rate measured over the link (blue curves, left axis), the coincidence rate (green curves, right axis), and QBER in the Z (blue curve) and X (green curve) bases for the high free-space turbulence experiment of Fig. \ref{fig.IQCPDTC} (b).}
\label{fig.SingleCoinQBER}
\end{figure}
The secret key rate formula for our system expressed in secret key bits per raw key bit is given by \cite{MFL07b}
\begin{equation}\label{eq.InfiniteKeyRate}
R = \frac{1}{2}(1 - f(e)h_{2}(e) - h_{2}(e))
\end{equation}
where $f(e)$ is the error correction inefficiency as a function of the error rate, normally $f(e) \geq 1$ with $f(e) = 1$ at the Shannon limit, and $h_{2}(e) = -e \log_{2} e - (1-e) \log_{2} (1-e)$ is the binary entropy function. For clarity of the argument we have used the infinite key limit formula; however, the insights gained should transfer to the finite key limit. Looking at Eq. \ref{eq.InfiniteKeyRate}, we can see that a higher QBER is detrimental to the final key rate for two reasons: (a)~increased error correction inefficiency and (b)~increased privacy amplification. The Cascade algorithm \cite{BS94,SY00} and low density parity check (LDPC) codes \cite{Gal62,MN97,Pea04,MCMHT09} are the two most commonly employed error correction algorithms in QKD systems. As the QBER climbs, the number of parities revealed (and correspondingly the information about the key which has to be accounted for in privacy amplification) increases. This applies even in the ideal case of error correction algorithms operating at the Shannon limit. Privacy amplification is then used after error correction to squeeze out any potential eavesdropper and ensure that the probability that anyone besides Alice and Bob knows the final key is exponentially small, at the cost of shrinking the size of the final key. Privacy amplification is commonly accomplished by applying a two-universal hash function \cite{CW79,Kra94} to the error corrected key and then using Eq. \ref{eq.InfiniteKeyRate} to determine how many bits from this operation may be kept for the final secret key. Both the number of bits exposed during error correction and the measured QBER are used to determine the final size of the key. Additionally, the secure key rate formula is a nonlinear function of the QBER, so that decreasing the QBER yields a better than linear improvement in the final key rate \cite{CCHGKMTONMSWFRGGHTB11}.
Thus, the fewer parities revealed during error correction and the lower we can make the measured QBER, the larger the final key will be.
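Plugging the QBER values reported above into Eq. \ref{eq.InfiniteKeyRate} (a quick illustration added here, with $f(e)=1.22$ as assumed in the text) makes this nonlinearity concrete:

```python
import math

def h2(e):
    """Binary entropy in bits."""
    return -e * math.log2(e) - (1 - e) * math.log2(1 - e)

def secret_fraction(e, f_ec=1.22):
    """Infinite-key secret key bits per raw key bit, R = (1/2)(1 - f(e)h2(e) - h2(e))."""
    return 0.5 * (1 - f_ec * h2(e) - h2(e))

r_no_filter = secret_fraction(0.0551)   # measured QBER without the SNRF
r_filtered = secret_fraction(0.0430)    # measured QBER with the optimum SNRF
print(f"{r_no_filter:.3f} -> {r_filtered:.3f} secret bits per raw bit")
```

The secret fraction per raw bit grows substantially for the modest QBER reduction, which is why the SNRF can increase the total key even though it discards some raw bits.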
The use of a SNRF could potentially open a loophole in the security proofs for QKD since we are now discarding data (which is typically not allowed by the proofs) depending on Bob's measured singles rates. However, we implement the SNRF on Bob's singles rate, which is a sum over all of his detectors during a block of data, so the SNRF is detector independent. Additionally, discarding data should be equivalent to a decrease in the channel transmission efficiency (which could happen anyway due to atmospheric effects) and thus should not affect the security proof. One might initially think that measuring the SNRF in real-time could thwart a potential eavesdropper; however, any real-time measurement would require a probe beam either of a different wavelength or propagating in a different spatial mode so that it could be easily separated from the signal photons. This difference would easily allow an eavesdropper to fake whatever background they wished on the probe beam while leaving the signal beam untouched. Therefore, for this paper we assume that using a SNRF does not compromise the security of our system; however, we stress that it remains an open question whether security can be proven for this scenario. We also point out that for an entangled QKD protocol security does not depend on the transmission of the quantum channel. By contrast, if one wanted to use a SNRF in a decoy state protocol, which works by measuring the channel gain for each photon number component, the issue of security would be delicate and require careful analysis so as not to open up any security loopholes. We hope that by showing the utility of the SNRF methods we can stimulate work on proving its security. Finally, we also mention recent work by Usenko \emph{et al.} \cite{UHPWMLF12} which theoretically studied the influence of fading on Gaussian states in the framework of the continuous variable QKD filtering ideas mentioned earlier.
\section{Experimental Results and Discussion}
After performing some initial simulations which showed the promise of the SNRF idea, we proceeded to implement the algorithm using the data gathered during the artificially increased turbulence experiment of Fig. \ref{fig.IQCPDTC} (b). There are three main parameters which affect the total secret key rate using the SNRF idea: the block duration, the SNRF threshold, and the coincidence window. The block duration refers to the time-scale on which the SNRF algorithm is applied and its optimum should be related to the time-scale of the atmospheric turbulence. The optimum SNRF threshold should be related to the mean background count rates observed during the experiment. Fig. \ref{fig.SecretKeyRates} shows the results of this analysis, with the total secret key generated from the 3\unit{min} block of data from Fig. \ref{fig.IQCPDTC} (b) plotted against the block duration and the SNRF threshold, for a coincidence window of 5\unit{ns}.
\begin{figure}
\caption{The total secret key generated from the high turbulence free-space experiment of Fig. \ref{fig.IQCPDTC} (b), plotted against the block duration and the SNRF threshold, for a coincidence window of 5\unit{ns}.}
\label{fig.SecretKeyRates}
\end{figure}
The key rates for the lower SNRF thresholds (closest to the front) in Fig. \ref{fig.SecretKeyRates} essentially show the secret key rate one would expect without implementing the SNRF algorithm (since little if any raw key is thrown away). As the SNRF threshold increases (moving towards the top in Fig. \ref{fig.SecretKeyRates}), one can clearly see that the total secret key rate also increases until reaching a maximum, at which point it quickly falls off since the SNRF cuts out too much raw key. Less obvious from the figure, but still important, is a gradual improvement in the secret key rate as the block duration shrinks, until a maximum is reached, after which the secret key rate gradually decreases once again. The optimum parameters for this data set were a block duration of 30\unit{ms} and a SNRF threshold of 95,000\unit{counts/sec}, applied on the timescale of the optimum block duration, which increased the total secret key generated to 97,678\unit{bits} from the 78,009\unit{bits} generated when no SNRF was used. This represents an increase of 25.2\% in the total secret key generated from the \emph{same} raw key dataset.
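The post-processing just described can be sketched as follows (entirely hypothetical toy data, not the experimental records; the infinite-key formula is assumed in its standard form): the run is divided into blocks, blocks whose singles rate falls below the SNRF threshold are discarded as background-dominated low-transmission periods, and the surviving sifted key is evaluated for each candidate threshold.

```python
import math

def h2(e):
    """Binary entropy function."""
    if e <= 0.0 or e >= 1.0:
        return 0.0
    return -e * math.log2(e) - (1.0 - e) * math.log2(1.0 - e)

def snrf_key(blocks, threshold, f=1.22):
    """blocks: list of (singles_rate, sifted_bits, error_bits) per block.
    Keep only blocks whose singles rate is ABOVE the threshold (periods of
    good transmission); low-singles blocks are background dominated."""
    kept = [(s, e) for rate, s, e in blocks if rate >= threshold]
    sifted = sum(s for s, _ in kept)
    if sifted == 0:
        return 0.0
    qber = sum(e for _, e in kept) / sifted
    return max(0.0, sifted * (1.0 - f * h2(qber) - h2(qber)))

# Toy run: nine good blocks (high singles, 4% QBER) and one fade
# (low signal, background dominated, 25% QBER).
good = [(100_000, 1000, 40)] * 9
fade = [(30_000, 200, 50)]
blocks = good + fade

key_all      = snrf_key(blocks, threshold=0)       # keep everything
key_filtered = snrf_key(blocks, threshold=50_000)  # reject the fade
```

Rejecting the fade shrinks the sifted key slightly but lowers the QBER enough that the final secret key grows, mirroring the behavior seen in Fig. \ref{fig.SecretKeyRates}.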
As mentioned earlier, the secret key rate given by Eq. \ref{eq.InfiniteKeyRate} is improved due to two effects. First, the intrinsic error rate in the data is smaller, improving the efficiency of the Cascade error correction algorithm \cite{BS94,SY00} used here from 1.2631 without the SNRF to 1.2202 with it. This increased efficiency translates into fewer bits revealed during error correction and thus fewer bits sacrificed during privacy amplification. Secondly, the QBER measured during error correction is smaller: 4.30\% with a SNRF versus 5.51\% without. This translates into less privacy amplification needed to ensure that the final secret key is secure against an eavesdropper.
\begin{table}[htbp]
\centering
\begin{tabular}{cccccc}
Scenario & Raw key (bits) & Sifted key (bits) & Secret key (bits) & $f$ & QBER\\
\hline \hline
No SNRF & 535,530 & 259,855 & 78,009 & 1.2697 & 5.51\% \\
\hline
Above SNRF & 466,441 & 226,279 & 97,678 & 1.2202 & 4.30\% \\
\hline
Below SNRF & 69,089 & 33,576 & - & - & 13.77\% \\
\end{tabular}
\caption{Measured values for: directly generating key, using the SNRF to generate key, and data discarded by the SNRF, for the high free-space turbulence experiment of Fig. \ref{fig.IQCPDTC} (b).}
\label{tab.KeyGenValues}
\end{table}
In order to aid the potential security analysis of our SNRF idea, we also include a few other measured values pertinent to its implementation which are summarized in Tab. \ref{tab.KeyGenValues}. For the data set shown in Fig. \ref{fig.SecretKeyRates}, we kept 466,441 coincidences which made up our raw key while rejecting 69,089 coincidences generated from data blocks that were below the SNRF threshold. The size of the sifted key, where both Alice and Bob measured in the same basis, was 226,279 bits while 33,576 bits were rejected by the SNRF. As mentioned before, the QBER in this sifted key was 4.30\% while the QBER in the rejected data was 13.77\%. Here we can clearly see how utilizing the SNRF was able to increase our overall secret key rate by rejecting this small subset which turns out to have a much higher QBER.
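As a consistency check (our own arithmetic, assuming Eq. \ref{eq.InfiniteKeyRate} takes the standard form $r = 1 - f\,h_{2}(e) - h_{2}(e)$), the secret key sizes in Tab. \ref{tab.KeyGenValues} can be approximately reproduced from the sifted key sizes, QBERs, and Cascade inefficiencies listed in the table:

```python
import math

def h2(e):
    """Binary entropy function."""
    return -e * math.log2(e) - (1.0 - e) * math.log2(1.0 - e)

def secret_bits(sifted, qber, f):
    """Approximate secret key length from the infinite-key formula."""
    return sifted * (1.0 - f * h2(qber) - h2(qber))

# Values from Tab. \ref{tab.KeyGenValues}
no_snrf = secret_bits(259_855, 0.0551, 1.2697)   # table: 78,009 bits
snrf    = secret_bits(226_279, 0.0430, 1.2202)   # table: 97,678 bits
```

Both estimates land within one percent of the tabulated secret key sizes; the small residual is expected since the real post-processing operates on finite blocks rather than in the infinite-key limit.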
While the preceding discussion illustrated the usefulness of the SNRF idea for producing a larger final key from the same raw key, we now mention two ideas for how one would actually implement the SNRF algorithm in practice. The first, suited to a static scenario (fixed position free-space links), would be to use the first few minutes of an experiment to find the optimum SNRF threshold (since finding the optimum requires full knowledge of the measurement results), which would then be used during the rest of the key exchange. For the case of a long distance key exchange with a quickly changing background, one could instead periodically re-calculate the ideal SNRF threshold in order to continually operate with the optimum threshold; in an exchange with an orbiting satellite, for example, the optimum could be re-calculated every few seconds as atmospheric conditions change over the course of the key exchange. The beauty of the SNRF idea is that it can be applied completely offline, as the raw data from the key exchange can be saved and processed with the optimum threshold determined after the satellite has already passed overhead. Additionally, the user can apply as fine or coarse grained an analysis as they wish. A second possibility would be to have a catalogue of free-space parameter regimes and the corresponding optimum SNRF thresholds stored in a look-up table. One could then continually monitor the free-space link statistics over the course of a key exchange (which requires only the coincidence events to calculate) and pick the optimum threshold based on the measured free-space PDTC parameters.
Besides these implementation ideas, there are at least two other possibilities for future work to augment the protocol. The first is to use an adaptive block duration which expands and contracts depending on the observed single photon rates. The optimum block duration of 30\unit{ms} found in this experiment was in effect a compromise between larger blocks, over which fluctuations are averaged away, and smaller blocks, which contain too few events for reliable statistics. With an adaptive algorithm it would be possible to match the block duration more closely to the actual physical SNR variations during an experiment and thus retain an even larger proportion of the good transmission periods.
The second idea would be to examine how the signal (singles rate) is correlated with the QBER, for instance by plotting a 3D frequency (z-axis) histogram of signal (x-axis) versus QBER (y-axis). With this correlation plot, one could predict the most likely QBER for a given signal level. Then one could apply a finer filtering scheme, for instance grouping data blocks into three classes: low QBER, medium QBER ($<11\%$), and high QBER ($>11\%$). Certainly the high QBER blocks should be discarded because they actually cost key. The medium QBER blocks, while having a QBER higher than the intrinsic system value due to background light, would still contribute positively to the key. Processing them separately from the low QBER blocks, however, would allow one to optimize the algorithms used for each subset to make them as efficient as possible.
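The proposed three-class grouping could look like the following sketch (entirely hypothetical: the signal-to-QBER predictor and the low/medium boundary would have to be calibrated from the measured correlation histogram; only the 11\% hard cut comes from the text):

```python
def classify_blocks(blocks, predict_qber, medium_cut=0.06, hard_cut=0.11):
    """Sort data blocks into low/medium/high predicted-QBER classes.
    blocks: iterable of singles rates; predict_qber: calibrated map from
    singles rate to most-likely QBER (from the correlation histogram)."""
    classes = {"low": [], "medium": [], "high": []}
    for rate in blocks:
        q = predict_qber(rate)
        if q > hard_cut:
            classes["high"].append(rate)      # discard: costs key
        elif q > medium_cut:
            classes["medium"].append(rate)    # keep, process separately
        else:
            classes["low"].append(rate)
    return classes

# Hypothetical calibration: QBER falls as the singles (signal) rate rises.
toy_predictor = lambda rate: min(0.5, 5_000.0 / max(rate, 1.0))
groups = classify_blocks([100_000, 60_000, 20_000], toy_predictor)
```

Each class could then be error corrected and privacy amplified with parameters tuned to its own error rate.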
\begin{figure}
\caption{The simulated secret key rates for a LEO QKD satellite at various elevation angles, using the expected number of background counts and free-space PDTC of Ref.~\cite{BMHHEHKHDGLLJ12}.}
\label{fig.KeyRateVsBackgroundSim}
\end{figure}
The full power of the SNRF idea is realized in cases with a high background where the accidental coincidence rate approaches the same order of magnitude as the QKD signal. Recent work for the case of performing QKD with an orbiting satellite \cite{BMHHEHKHDGLLJ12} has shown that one will indeed be operating in a high background regime where the SNRF idea will prove very useful. Further to this point, depending on the level of the background noise, the simulations in Fig. \ref{fig.KeyRateVsBackgroundSim} show that the SNRF idea can be used to produce a secret key when the background would otherwise have prevented it. Fig. \ref{fig.KeyRateVsBackgroundSim} plots the secret key rates for exchanging a key with an orbiting LEO QKD satellite carrying an entangled photon source with a pair production rate of 100\unit{MHz} and an intrinsic QBER of 2.5\%, for various elevation angles and the expected number of background counts and free-space PDTC \cite{BMHHEHKHDGLLJ12}. These initial results show that the SNRF idea would allow us to generate secret key from many more satellite passes occurring at elevation angles of $70^{\circ}$ or less; though a more detailed analysis would have to take into account the statistics of satellite passes over a year, integrating over the various elevation angles. Nonetheless, the SNRF idea should prove particularly useful since the most probable passes for a LEO satellite occur at elevation angles much less than $90^{\circ}$, which would otherwise be useless due to the high free-space link fluctuations, high background, and low SNR experienced. Thus, we are very confident that the SNRF idea will prove extremely useful in high background situations such as satellite QKD, long distance terrestrial free-space links, or daylight QKD experiments.
\section{Conclusions}
In conclusion, we have used an entanglement based free-space QKD system to study the link statistics generated during the fluctuating free-space transmission of entangled photon pairs. Simulating a free-space channel with a high amount of turbulence allowed us to recover the theoretical prediction of a log-normal distribution for the statistics of the transmission coefficient. Using insights from this analysis, we studied the effect of link fluctuations on free-space decoy state QKD and found that the static channel approximation typically assumed is valid for the regimes where such systems are typically operated. Lastly, we studied the implementation of a signal-to-noise ratio filter in order to increase the overall secret key rate by rejecting measurements during periods of low transmission efficiency, which tend to have a larger QBER due to a higher proportion of background events relative to actual entangled pair detections in the raw key. Using this SNRF, we were able to increase the final secret key rate by 25.2\% using the \emph{same} raw signals for a particular experimental run. Further, we presented simulations indicating that the SNRF idea will be extremely useful in terrestrial long distance free-space experiments and in experiments exchanging a key with a LEO satellite, allowing one to generate a secret key from many passes that would otherwise have been useless.
\ack
Support for this work by NSERC, CSA, CIFAR, CFI, ORF, OCE, ERA, and the Bell family fund is gratefully acknowledged. One of us (BH) would like to thank the EU-Canada Programme for Cooperation in Higher Education, Training, and Youth which sponsored her exchange with the IQC during this project. The authors would like to thank M. Toyoshima, B. Higgins, and N. L\"utkenhaus for helpful discussions about free-space satellite communication, turbulence induced link fluctuations, and the quantum security of such schemes.
During the final preparation of this manuscript the authors became aware of similar work \cite{CTDGUVV12} which examined the impact of turbulence on long range quantum and classical channels, and put forward a similar SNR proposal based on measuring a secondary beacon laser. While the work measuring the PDTC of the free-space links is similar, we stress that we have already implemented such an SNRF \emph{directly} on the detected signal and illustrated the improvements possible.
Lastly, the authors would like to thank the anonymous referees for their many comments, which were very useful in improving the quality of this paper.
\section*{References}
\end{document} |
\begin{document}
\preprint{APS/Quant_Metro}
\title{Achieving Heisenberg-limited metrology with spin cat states via interaction-based readout}
\author{Jiahao Huang$^{1}$}
\author{Min Zhuang$^{1,2}$}
\author{Bo Lu$^{1}$}
\author{Yongguan Ke$^{1,2}$}
\author{Chaohong Lee$^{1,2,3}$}
\altaffiliation{Corresponding author.\\ Email: lichaoh2@mail.sysu.edu.cn, chleecn@gmail.com}
\affiliation{$^{1}$Laboratory of Quantum Engineering and Quantum Metrology, School of Physics and Astronomy, Sun Yat-Sen University (Zhuhai Campus), Zhuhai 519082, China}
\affiliation{$^{2}$State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-Sen University (Guangzhou Campus), Guangzhou 510275, China}
\affiliation{$^{3}$Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China}
\date{\today}
\begin{abstract}
Spin cat states are promising candidates for quantum-enhanced measurement.
Here, we analytically show that the ultimate measurement precision of spin cat states approaches the Heisenberg limit, where the uncertainty is inversely proportional to the total particle number.
In order to fully exploit their metrological ability, we propose to use the interaction-based readout for implementing phase estimation.
It is demonstrated that the interaction-based readout enables spin cat states to saturate their ultimate precision bounds.
The interaction-based readout comprises a one-axis twisting, two $\frac{\pi}{2}$ pulses, and a population measurement, which can be realized via current experimental techniques.
Compared with the twisting echo scheme on spin squeezed states, our scheme with spin cat states is more robust against detection noise.
Our scheme may pave an experimentally feasible way to achieve Heisenberg-limited metrology with non-Gaussian entangled states.
\end{abstract}
\maketitle
\section{Introduction}\label{Sec1}
Quantum metrology aims to enhance the measurement precision and develop optimal schemes for estimating an unknown parameter by means of quantum strategies~\cite{Giovannetti2004,Giovannetti2006,Giovannetti2011}.
In comparison to classical strategies, quantum-enhanced measurement offers significant advantages, where a dramatic improvement for the achievable precision can be obtained due to the use of quantum entanglement~\cite{Pezze2009, Hyllus2010, Huang2014}.
Preparing and detecting entangled quantum states are two main challenges to achieve the Heisenberg-limited metrology.
Many endeavors have been made to create various kinds of entangled input states, such as spin squeezed states~\cite{ Leroux2010, Gross2010, Riedel2010, Muessel2015}, twin Fock states~\cite{Lucke2011, Zhang2013, Luo2017}, maximally entangled states~\cite{Bollinger1996, Monz2011}, and so on.
However, detecting the output states is usually assumed to require single-particle resolved detection, which has so far been a bottleneck in practical implementations.
Moreover, imperfect detection is one of the key obstacles that hamper the improvement of measurement precision via many-body entangled states.
Recently, an echo protocol was proposed to perform phase estimation near the Heisenberg limit~\cite{Davis2016, Frowis2016}, which does not require single-particle resolved detection.
The input state is generated by the time-evolution under a one-axis twisting Hamiltonian $\hat U=e^{-iH_{\textrm{OAT}}t}$~\cite{Kitagawa1993}, and a reversal evolution $\hat U_R=\hat U^{\dagger}$ is performed on the output state prior to the final population measurement.
The nonlinear dynamics $\hat U_R$ enables Heisenberg-limited precision scaling ($\propto N^{-1}$) under detection noise $\sigma\lesssim\sqrt{N}$, where $N$ is the total particle number.
This kind of nonlinear detection with spin squeezed states~\cite{Hosten2016} and two-mode squeezed vacuum states~\cite{Linnemann2016} has been respectively realized in experiments.
More recently, schemes on interaction-based readout that relax the time-reversal condition ($\hat U_R \neq \hat U^{\dagger}$) have been proposed~\cite{Nolan2017, Mirkhalaf2018}.
These works pointed out a new direction for utilizing entangled states in quantum metrology~\cite{Dunningham2002, Macri2016, Szigeti2017, Fang2017, Anders2017, Huang2018, Choi2018, Haine2018}.
Spin cat states, a kind of non-Gaussian entangled states as a superposition of distinct spin coherent states (SCSs), are considered as promising candidates for quantum-enhanced measurement~\cite{Agarwal1997, Gerry1998, Sanders2014, Lau2014, Signoles2014, Huang2015}.
It has been shown that spin cat states with modest entanglement can perform high-precision phase measurement beyond the standard quantum limit (SQL) even under dissipation~\cite{Huang2015}.
However, performing interferometry with spin cat states in practice has required parity measurement, which demands single-particle resolution~\cite{Gerry2010, Zhang2012, Hume2013, Huang2015, LuoC2017}.
The requirement of single-particle detection limits the experimental feasibility of quantum metrology with spin cat states.
Thus, to overcome this barrier, is it possible to replace the parity measurement with interaction-based readout?
Compared with the interaction-based readout scheme with spin squeezed states, will the spin cat states offer better robustness against detection noise?
In this article, we propose to perform the phase estimation with spin cat states via interaction-based readout.
We find that spin cat states have the ability to perform Heisenberg-limited phase measurement and that interaction-based readout is an optimal method to fully exploit this potential ability.
In Sec.~\ref{Sec2}, we give a general framework of many-body quantum interferometry and the phase estimation.
In Sec.~\ref{Sec3}, we analytically obtain the ultimate precision bound for spin cat states.
The ultimate bound is inversely proportional to the total particle number, with a prefactor depending on the separation of the two superposed SCSs, and thus approaches the Heisenberg limit.
In Sec.~\ref{Sec4}, we describe the procedure of quantum interferometry via interaction-based readout with spin cat states.
Then, the estimated phase precisions via interaction-based readout are numerically calculated and the optimal conditions are given.
In particular, when the estimated phase lies around $\phi \sim \pi/2$, all the spin cat states can saturate their ultimate bounds with suitable interaction-based readout.
Finally, we show analytically in detail how the interaction-based readout saturates the ultimate precision bounds of spin cat states.
In Sec.~\ref{Sec5}, we analyze the robustness against detection noise within our scheme.
It is demonstrated that the spin cat states under interaction-based readout are immune to detection noise up to $\sigma \lesssim \tilde{c}_D(\theta) N$, with $\tilde{c}_D(\theta)$ a constant depending on the form of the spin cat state.
Compared with the twisting echo schemes, our proposal with spin cat states is approximately $\sqrt{N}$ times more robust against excess detection noise.
In addition, the influence of dephasing during the nonlinear evolution of the interaction-based readout is discussed.
In Sec.~\ref{Sec6}, we briefly summarize our results.
\section{Phase Estimation via Many-body Quantum Interferometry}\label{Sec2}
The most widely used interferometry can be described within a two-mode bosonic system of $N$ particles, such as Ramsey interferometry with ultracold atoms~\cite{Pezze2016}, trapped ions~\cite{Wineland1994, Blatt2008}, and Mach-Zehnder interferometry in optical systems~\cite{Dowling2003}.
In these systems, the system state can be well characterized by the collective spin operators, $\hat J_x = {1\over2} \left(\hat a \hat b^{\dagger} + \hat a^{\dagger} \hat b\right)$, $\hat J_y = {1\over2i} \left(\hat a \hat b^{\dagger} - \hat a^{\dagger} \hat b\right)$, $\hat J_z = {1\over2}\left(\hat b^{\dagger}\hat b - \hat a^{\dagger} \hat a\right)$ with $\hat a $ and $\hat b$ the annihilation operators for particles in mode $|a\rangle$ and mode $|b\rangle$, respectively.
A common quantum interferometry can be divided into three steps.
First, a desired input state $|\psi\rangle_{in}$ is prepared.
Then, the input state evolves under the action of an unknown quantity and accumulates a phase $\phi$ to be measured, i.e., $|\psi(\phi)\rangle_{out}=\hat U(\phi) |\psi\rangle_{in}$.
Finally, a proper sequence of measurement onto the output state $|\psi(\phi)\rangle_{out}$ is implemented to extract the accumulated phase.
Theoretically, for a given phase accumulation process $\hat U(\phi)=e^{-i\phi\hat G}$, the measurement precision of the accumulated phase is constrained by a fundamental limit, the quantum Cram\'{e}r-Rao bound (QCRB)~\cite{Braunstein1994, Huang2014, Pezze2009, Hyllus2010}, which only depends on the specific property of the chosen input state,
\begin{equation}\label{QCRB}
\Delta \phi \ge \Delta \phi^{Q} \equiv \frac{1}{\sqrt{\mu F^Q}},
\end{equation}
\begin{equation}\label{FQ}
F^Q = 4\left( \langle \psi'|\psi'\rangle - |\langle \psi'|\psi(\phi)\rangle_{out}|^2\right)=4\Delta^2 \hat G,
\end{equation}
where $\Delta \phi = \sqrt{\langle \phi^2 \rangle - \langle\phi\rangle^2}$ is the standard deviation of the estimated phase, $\mu$ is the number of trials, $|\psi'\rangle=\text{d}|\psi(\phi)\rangle_{out} / \text{d}\phi$ denotes the derivative with respect to $\phi$, and $\Delta^2 \hat G = {}_{in}\langle\psi |\hat G^2| \psi\rangle_{in} - {}_{in}\langle\psi |\hat G| \psi\rangle_{in}^2$ is the variance of $\hat G$ for the input state.
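As a concrete illustration of Eq.~\eqref{FQ} (our own sketch, not part of the derivation): for $\hat G=\hat J_z$, building $\hat J_z$ in the Dicke basis and evaluating $4\Delta^2\hat J_z$ for a GHZ state, an equal superposition of $|J,J\rangle$ and $|J,-J\rangle$, reproduces $F^Q=N^2$.

```python
import numpy as np

N = 10                        # total particle number, J = N/2
J = N / 2
m = np.arange(J, -J - 1, -1)  # Dicke labels m = J, J-1, ..., -J
Jz = np.diag(m)

# GHZ state (|J, J> + |J, -J>)/sqrt(2) in the Dicke basis
psi = np.zeros(N + 1)
psi[0] = psi[-1] = 1.0 / np.sqrt(2.0)

mean = psi @ Jz @ psi
var = psi @ Jz @ Jz @ psi - mean**2
FQ = 4.0 * var                # quantum Fisher information for G = Jz
```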
In realistic scenarios, the frequency shift $\omega$ between the two modes is one of the most interesting parameters to be estimated, owing to its importance in frequency standards~\cite{Margolis2009}.
Therefore, the generator can be chosen as $\hat G=\hat J_z$, the estimated phase $\phi=\omega t$ and the quantum Fisher information (QFI) becomes $F^Q=4\Delta^2 \hat J_z$.
It is well known that, using an input GHZ state can maximize $F^Q$ to $N^2$, and the corresponding phase measurement precision scales inversely proportional to the total particle number, $\Delta\phi \propto N^{-1}$, attaining the Heisenberg limit.
In the following, we will show that, apart from GHZ state, other spin cat states also have the ability to perform Heisenberg-limited phase estimation.
Further, we will give an experimentally feasible scheme to realize the Heisenberg-limited measurement with spin cat states by means of interaction-based readout.
\section{Ultimate precision bound of spin cat states}\label{Sec3}
Spin cat states are typical kinds of macroscopic superposition of spin coherent states (MSSCS).
Generally, a MSSCS is a superposition of multiple spin coherent states (SCSs)~\cite{Ferrini2008, Ferrini2010, Pawlowski2013}.
Here, we discuss the MSSCS in the form of
\begin{equation}\label{MSSCS}
|\Psi(\theta, \varphi)\rangle_{\textrm{M}}=\mathcal{N}_{C}(|\theta,\varphi\rangle + |\pi-\theta,\varphi\rangle),
\end{equation}
where $\mathcal{N}_{C}$ is the normalization factor and $\left|\theta,\varphi\right\rangle$ denotes the $N$-particle SCS with
\begin{equation}\label{SCS_Dicke}
\left|\theta,\varphi\right\rangle=\sum^{J}_{m=-J} c_m(\theta)e^{-i(J+m)\varphi}\left|J,m\right\rangle.
\end{equation}
Here, $c_m(\theta) =\sqrt{\frac{(2J)!}{(J+m)!(J-m)!}}\cos^{J+m}\left({\theta \over 2}\right)\sin^{J-m}\left({\theta \over 2}\right)$, and $\{\left|J,m\right\rangle\}$ represents the Dicke basis with $J={N / 2}$ and $m=-J, -J+1, ..., J-1, J$.
Without loss of generality, we assume $\varphi=0$,
\begin{eqnarray}\label{MSSCS_Dicke}
|\Psi(\theta)\rangle_{\textrm{M}}&=&\mathcal{N}_{C}(|\theta,0\rangle + |\pi-\theta,0\rangle) \nonumber\\
&=& \mathcal{N}_{C}\left(\sum^{J}_{m=-J} \left[c_m(\theta)+c_m(\pi-\theta)\right]\left|J,m\right\rangle\right) \nonumber\\
&=& \mathcal{N}_{C} \left[\sum^{J}_{m=-J} c_m(\theta)\left(\left|J,m\right\rangle+\left|J,-m\right\rangle\right)\right],
\end{eqnarray}
where the two SCSs have the same azimuthal angle $\varphi=0$ and the polar angles are symmetric about $\theta=\pi/2$.
Since $c_m(\theta)=c_{-m}(\pi-\theta)$, the coefficients of the MSSCS are symmetric about $m=0$.
It is shown that~\cite{Huang2015}, when the two superposition SCSs are orthogonal or quasi-orthogonal, the corresponding MSSCS can be regarded as a spin cat state.
Mathematically, the sufficient condition of spin cat states can be expressed as
\begin{equation}\label{c0}
\theta \lesssim \theta_c\equiv\sin^{-1}\left\{2\left[\frac{\left((J-1)!\right)^2}{2 (2J)!}\right]^{1\over{2J}}\right\},
\end{equation}
which is derived from the assumption $2 |c_0(\theta)|^2 = \frac{(2J)!}{J! J!} \cos^{2J} \left(\theta\over2\right) \sin^{2J} \left(\theta\over2\right) \lesssim \frac{1}{J^2}$ when $J>1$.
According to Eq.~\eqref{c0}, $\theta_c$ increases as the total particle number $N=2J$ grows.
For total particle number $N \ge 40$, one finds $\theta_c \gtrsim 7\pi/20$.
Under this condition, the normalization factor $\mathcal{N}_{C}\approx \frac{1}{\sqrt{2}}$.
Throughout this paper, we will focus on the spin cat states under the conditions of $N \ge 40$ and $\theta \le 7\pi/20$.
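Equation~\eqref{c0} can be checked directly; the short sketch below (our own verification) evaluates $\theta_c$ using log-factorials to avoid overflow and confirms both the $N=40$ value and the growth with $N$.

```python
import math

def theta_c(N):
    """Critical polar angle of Eq. (c0), with J = N/2.
    lgamma(J) = ln((J-1)!) and lgamma(2J+1) = ln((2J)!)."""
    J = N // 2
    log_bracket = 2 * math.lgamma(J) - math.lgamma(2 * J + 1) - math.log(2)
    return math.asin(2.0 * math.exp(log_bracket / (2 * J)))
```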
We abbreviate the spin cat states $\left|\Psi(\theta)\right\rangle_{\textrm{M}}$ ($N \ge 40$ and $\theta \le 7\pi/20$) to $\left|\Psi(\theta)\right\rangle_{\textrm{CAT}}$ below, i.e.,
\begin{equation}\label{CAT}
\left|\Psi(\theta)\right\rangle_{\textrm{CAT}}\approx\frac{1}{\sqrt{2}} \left[\sum^{J}_{m=-J} c_m(\theta)\left(\left|J,m\right\rangle+\left|J,-m\right\rangle\right)\right].
\end{equation}
Note that spin cat states can be understood as a superposition of GHZ states with different spin length.
Writing $|\Psi(\theta)\rangle_\textrm{CAT}=\sum_{m=-J}^{J} a_m \left|J,m\right\rangle$ with $a_m = \left[c_m(\theta)+c_{-m}(\theta)\right]/\sqrt{2}$, the symmetry $a_m=a_{-m}$ implies that the average half-population difference ${}_\textrm{CAT}\langle\Psi(\theta) |\hat J_z| \Psi(\theta)\rangle_\textrm{CAT}=\sum_{m=-J}^{J} m |a_m|^2$ vanishes for all spin cat states.
Owing to ${}_\textrm{CAT}\langle\Psi(\theta) |\hat J_z| \Psi(\theta)\rangle_\textrm{CAT}=0$, the variance becomes $\Delta^2 \hat J_z={}_\textrm{CAT}\langle\Psi(\theta) |\hat J_z^2| \Psi(\theta)\rangle_\textrm{CAT} = \sum_{m=-J}^{J} m^2 |a_m|^2 \approx \sum_{m=-J}^{J} m^2 |c_m(\theta)|^2$, the cross terms $c_m(\theta)c_{-m}(\theta)$ being negligible for well-separated superposition components.
Since the two superposition SCSs are well separated and the coefficients $|c_m(\theta)|^2$ follow a binomial distribution, the variance can be approximately calculated as
\begin{equation}\label{variance}
\Delta^2 \hat J_z = \sum_{m=-J}^{J} m^2 |c_m(\theta)|^2 \approx \overline M^2,
\end{equation}
where $-\overline M$ and $\overline M$ can be regarded as the center locations of the two peaks.
This assumption is valid for calculating the QFI and perfectly matches the numerical results especially when the total particle number $N$ is large.
Given that $\Delta^2 J_z \approx \overline M^2$, the QFI of a spin cat state can be obtained, i.e., $F^Q_\textrm{CAT} \approx 4\overline M^2$.
\begin{figure}
\caption{(Color online) The ultimate precision scalings of different $|\Psi(\theta)\rangle_{\textrm{M}}$ versus the total particle number $N$.}
\label{Fig1}
\end{figure}
Mathematically, $\overline M$ is a continuous variable, and it can be determined by the equation (see Appendix A),
\begin{equation}\label{Det_M}
\sqrt{\frac{J+\overline M}{J-\overline M+1}}\tan\left(\frac{\theta}{2}\right)=1.
\end{equation}
Solving Eq.~\eqref{Det_M}, we get $\overline M=J \left( \frac{1-\tan^2\left(\theta/2\right)}{1+\tan^2\left(\theta/2\right)} \right) + \frac{1}{1+\tan^2(\theta/2)}$.
When the total particle number $N=2J$ is large, it can be approximated as
\begin{equation}\label{MM}
\overline M \approx \left( \frac{1-\tan^2\left(\theta/2\right)}{1+\tan^2\left(\theta/2\right)} \right){N\over2},
\end{equation}
and the QFI of a spin cat state can be written as
\begin{equation}\label{QFI_M}
F^Q_\textrm{CAT} \approx \left( 1- \frac{2\tan^2\left(\theta/2\right)}{1+\tan^2\left(\theta/2\right)} \right)^2{N^2}.
\end{equation}
Thus, according to the QCRB~\eqref{QCRB}, the ultimate phase precision by a spin cat state is obtained,
\begin{eqnarray}\label{QCRB_M}
\Delta\phi \ge \Delta\phi^Q_\textrm{CAT} &=&{C(\theta)} N^{-1} \\\nonumber
&=& \left( 1+ \frac{2\tan^2\left(\theta/2\right)}{1-\tan^2\left(\theta/2\right)} \right) N^{-1}.
\end{eqnarray}
The above analytic result~\eqref{QCRB_M} is verified by numerical calculations, as shown in Fig.~\ref{Fig1}.
From the expression of QCRB~\eqref{QCRB_M}, the achievable precision of the spin cat state $|\Psi(\theta)\rangle_\textrm{CAT}$ follows the Heisenberg scaling multiplied by a coefficient $C(\theta)$ only dependent on $\theta$.
This coefficient $C(\theta)$ grows monotonically as $\theta$ gets larger.
When $\theta=0$, $|\Psi(0)\rangle_\textrm{CAT}$ becomes the GHZ state (an extreme type of spin cat states), its ultimate bound returns to $1/N$.
When $\theta>0$, the ultimate bound becomes $C(\theta)/N$, a constant factor larger than that of the GHZ state.
For example, $\Delta\phi^{Q}=2/N$ for $|\Psi(\pi/3)\rangle_\textrm{CAT}$: for a given $N$, the achievable precision decreases only by a factor of two compared to $|\Psi(0)\rangle_\textrm{CAT}$.
This indicates that, the spin cat states with modest $\theta$ may be more experimentally feasible.
They preserve the Heisenberg scaling of precision, and meanwhile are more easily prepared than the maximally entangled states in experiments~\cite{Lee2006, Lee2009, Huang2015, Xing2016, Huang2018}.
It is worth mentioning that, the ultimate bound~\eqref{QCRB_M} is only valid for spin cat states which satisfy the condition~\eqref{c0}.
For other MSSCS $|\Psi(\theta)\rangle_{\textrm{M}}$ in which the overlap between the two SCSs is more significant, the precision scaling no longer remains Heisenberg-limited, but approaches the SQL as $\theta$ gradually increases towards $\pi/2$.
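As a numerical cross-check of Eq.~\eqref{QFI_M} (our own sketch), one can build the cat-state coefficients of Eq.~\eqref{CAT}, evaluate $F^Q=4\Delta^2\hat J_z$ exactly, and compare with the analytic approximation:

```python
import math

def cat_qfi(N, theta):
    """Exact QFI 4*Var(Jz) for the spin cat state of Eq. (CAT)."""
    J = N // 2
    # c_m(theta), indexed by k = J + m
    c = [math.sqrt(math.comb(N, J + m))
         * math.cos(theta / 2) ** (J + m)
         * math.sin(theta / 2) ** (J - m)
         for m in range(-J, J + 1)]
    a = [c[k] + c[N - k] for k in range(N + 1)]   # c_m(theta) + c_m(pi-theta)
    norm = sum(x * x for x in a)
    # <Jz> = 0 by the m -> -m symmetry, so Var(Jz) = <Jz^2>
    return 4.0 * sum((k - J) ** 2 * a[k] ** 2 for k in range(N + 1)) / norm

N, theta = 100, math.pi / 4
exact = cat_qfi(N, theta)
t2 = math.tan(theta / 2) ** 2
approx = (1 - 2 * t2 / (1 + t2)) ** 2 * N ** 2    # Eq. (QFI_M)
```

For $N=100$ and $\theta=\pi/4$ the exact and approximate QFI agree to about one percent, and at $\theta=0$ the exact expression reduces to the GHZ value $N^2$.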
\section{Interaction-based readout with spin cat states}\label{Sec4}
Although we have demonstrated that spin cat states can in principle perform Heisenberg-limited parameter estimation, how to saturate the ultimate precision bound and exploit their full potential in practice is an equally important problem.
Here, we propose a practical scheme to implement Heisenberg-limited quantum metrology with spin cat states by adding a nonlinear dynamics before the population difference measurement.
We will show that this detection scheme is optimal and can saturate the ultimate precision bound of the spin cat states.
\subsection{Interaction-based readout}
The procedure of our scheme, based on interaction-based readout, is illustrated in Fig.~\ref{Fig2}.
First, a suitable input spin cat state $|\Psi(\theta)\rangle_{\textrm{CAT}}$ is prepared.
The spin cat states can be created by several kinds of methods in various quantum systems~\cite{Agarwal1997, Gerry1998, Sanders2014, Lau2014, Signoles2014, Huang2015}.
Particularly in Bose condensed atomic systems, the spin cat states can be generated via nonlinear dynamical evolution~\cite{Ferrini2008, You2003} or deterministically prepared by adiabatic ground state preparation~\cite{Lee2006, Huang2015, Huang2018}.
The parameter $\theta$ for a specific spin cat state is determined by the control of atom-atom interaction~\cite{Gross2010, Riedel2010}.
Then, the input state evolves under the Hamiltonian $H_{0}=\omega \hat J_z$, yielding the output state $|\Psi(\phi)\rangle_{out}=\hat U(\phi) |\Psi(\theta)\rangle_{\textrm{CAT}}$, where $\hat U(\phi)=e^{-i H_{0} t}=e^{-i\hat J_z\phi}$ with accumulated phase $\phi=\omega t$.
Finally, an interaction-based readout sequence is performed on $|\Psi(\phi)\rangle_{out}$ to extract $\phi$.
\begin{figure}
\caption{(Color online) Schematic of the interaction-based readout scheme with an input spin cat state. First, an input spin cat state is prepared. Then, the phase to be estimated is accumulated during interrogation. To extract the phase, an interaction-based readout protocol is applied, in which a nonlinear evolution is sandwiched between two $\frac{\pi}{2}$ pulses prior to the half-population difference measurement.}
\label{Fig2}
\end{figure}
Here, the sequence comprises a nonlinear evolution sandwiched between two $\frac{\pi}{2}$ pulses, applied prior to the half-population difference measurement.
The final state after the sequence can be written as
\begin{equation}\label{NonDy}
|\Psi(\phi)\rangle_f = \hat R_x^{\dagger}({\pi\over2}) \hat U_{non}(\chi t) \hat R_x({\pi\over2}) |\Psi(\phi)\rangle_{out},
\end{equation}
where $\hat U_{non}(\chi t)=e^{-iH_{\textrm{OAT}}t}=e^{i\chi\hat J_z^2 t}$ describes the nonlinear evolution with nonlinearity $\chi$, and $\hat R_x({\pi\over2})=e^{i{\pi\over2}\hat J_x}$ is a $\pi\over2$ pulse, i.e., a rotation about the $x$ axis.
Applying the half-population difference measurement $\hat J_z$ to the final state, one obtains the expectation value and standard deviation of $\hat J_z$,
\begin{equation}\label{Avg_Jz}
\langle\hat J_z\rangle_f = _{f}\!\langle\Psi(\phi)| \hat J_z |\Psi(\phi)\rangle_{f},
\end{equation}
\begin{equation}\label{Delta_Jz}
\left(\Delta \hat J_z\right)_f = \sqrt{_{f}\langle\Psi(\phi)| \hat J_z^2 |\Psi(\phi)\rangle_{f} - \left(_{f}\langle\Psi(\phi)| \hat J_z |\Psi(\phi)\rangle_{f}\right)^2}.
\end{equation}
Therefore, the phase estimation precision is given by the error propagation formula,
\begin{equation}\label{Delta_phi}
\Delta \phi = \frac{\left(\Delta \hat J_z\right)_f}{\left|\partial \langle\hat J_z\rangle_f / \partial \phi \right|}.
\end{equation}
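The full readout sequence of Eqs.~\eqref{NonDy}--\eqref{Delta_phi} is straightforward to simulate for small $N$. The following Python sketch (hypothetical function and variable names; the derivative in Eq.~\eqref{Delta_phi} is taken numerically) reproduces the known value $\Delta\phi=1/N$ for a GHZ input at $\chi t=\pi/2$, $\phi=\pi/2$:

```python
import numpy as np
from scipy.linalg import expm

def readout_precision(N, a_in, chi_t, phi, dphi=1e-4):
    """Delta phi from the error-propagation formula for the sequence
    Rx(pi/2)^dag U_non(chi t) Rx(pi/2) exp(-i phi Jz) (sketch)."""
    J = N / 2
    m = np.arange(-J, J + 1)
    Jz_diag = m
    # off-diagonal ladder elements sqrt(J(J+1) - m(m+1)) coupling m <-> m+1
    v = np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] + 1))
    Jx = (np.diag(v, 1) + np.diag(v, -1)) / 2
    Rx = expm(1j * (np.pi / 2) * Jx)           # R_x(pi/2) = exp(i pi/2 Jx)
    Unon = np.diag(np.exp(1j * chi_t * m ** 2))  # exp(i chi t Jz^2)

    def mean_Jz(p):
        psi = Rx.conj().T @ Unon @ Rx @ (np.exp(-1j * p * Jz_diag) * a_in)
        return float(np.real(psi.conj() @ (Jz_diag * psi)))

    psi = Rx.conj().T @ Unon @ Rx @ (np.exp(-1j * phi * Jz_diag) * a_in)
    var = np.real(psi.conj() @ (Jz_diag ** 2 * psi)) - mean_Jz(phi) ** 2
    slope = (mean_Jz(phi + dphi) - mean_Jz(phi - dphi)) / (2 * dphi)
    return np.sqrt(var) / abs(slope)

N = 8
a = np.zeros(N + 1)
a[0] = a[-1] = 1 / np.sqrt(2)  # GHZ: equal weight on m = -J and m = +J
print(readout_precision(N, a, np.pi / 2, np.pi / 2))  # ~ 1/N = 0.125
```

Any symmetric input $a_k=a_{-k}$ can be substituted for `a` to explore the cat states discussed below.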
\subsection{Numerical results}
The measurement precision achieved via interaction-based readout with spin cat states is shown in Fig.~\ref{Fig3}.
We first consider the accumulated phase around $\phi=0$, and plot the precision dependence on the nonlinear evolution, see Fig.~\ref{Fig3}~(a).
The optimal nonlinear evolution $\chi t$ changes with different input spin cat states $|\Psi(\theta)\rangle_{\textrm{CAT}}$.
For spin cat states with larger $\theta$, although the optimal precision $\Delta \phi_{\textrm{min}}$ becomes slightly worse, the required optimal nonlinear evolution $\chi t$ is smaller.
Given a fixed nonlinearity $\chi$, the optimal nonlinear evolution time $t_{opt}$ decreases with $\theta$, see the inset of Fig.~\ref{Fig3}~(a).
We choose four typical spin cat states $|\Psi(0)\rangle_{\textrm{CAT}}$, $|\Psi(\pi/8)\rangle_{\textrm{CAT}}$, $|\Psi(\pi/4)\rangle_{\textrm{CAT}}$ and $|\Psi(7\pi/20)\rangle_{\textrm{CAT}}$ and evaluate their precision scaling versus total particle number, see Fig.~\ref{Fig3}~(c).
It is shown that spin cat states with interaction-based readout still preserve the Heisenberg-limited scaling when $\phi \sim 0$.
Although there is a shift from the QCRB for spin cat states with large $\theta$, the Heisenberg scaling $\Delta\phi_{\textrm{min}} \propto 1/N$ still enables high-precision measurement when the total particle number is large.
Further, we also consider the accumulated phase around $\phi=\pi/2$, and plot the precision dependence on the nonlinear evolution, see Fig.~\ref{Fig3}~(b).
In contrast to the case of $\phi \sim 0$, the optimal nonlinear evolution is $\chi t = \pi/2$ for all input spin cat states $|\Psi(\theta)\rangle_{\textrm{CAT}}$ when $\phi \sim \pi/2$; see Fig.~\ref{Fig3}~(b).
The precision scaling versus total particle number saturates the QCRB~\eqref{QCRB_M}, which indicates that the interaction-based readout is an optimal scheme to attain the ultimate bound of the spin cat states; see Fig.~\ref{Fig3}~(c).
The measurement precision can also be estimated via classical Fisher information (CFI).
We numerically find that the minimum standard deviations in the above scenarios coincide with the results calculated from the CFI.
Both scenarios are useful in practical parameter estimation.
When the parameter is very small, so that the accumulated phase lies around $\phi=0$, spin cat states with modest $\theta$ are beneficial.
For example, the optimal nonlinear evolution of $|\Psi(\pi/4)\rangle_{\textrm{CAT}}$ is $\chi t =\pi/4$, which is only half of the one for the GHZ state.
Meanwhile, the corresponding precision scaling is still $\Delta\phi \propto 1/N$.
On the other hand, when the parameter is relatively large and the interrogation time can be varied so that the accumulated phase lies around $\phi=\pi/2$, the interaction-based readout can saturate the ultimate bound provided the nonlinear evolution can be tuned to $\chi t =\pi/2$.
\begin{figure*}
\caption{(Color online) The phase precision obtained with spin cat states via interaction-based readout for the estimated phase in the vicinity of (a) $\phi\sim0$ and (b) $\phi\sim\pi/2$. The standard deviations of the phase $\Delta \phi$ versus the nonlinear evolution $\chi t$ for spin cat states $|\Psi(0)\rangle_{\textrm{CAT}}$, $|\Psi(\pi/8)\rangle_{\textrm{CAT}}$, $|\Psi(\pi/4)\rangle_{\textrm{CAT}}$ and $|\Psi(7\pi/20)\rangle_{\textrm{CAT}}$. (c) The precision scaling of $\Delta\phi_{\textrm{min}}$ versus the total particle number $N$.}
\label{Fig3}
\end{figure*}
\begin{figure*}
\caption{(Color online) (a) The optimal phase uncertainty under optimal control against detection noise $\sigma$ with spin cat states $|\Psi(0)\rangle_{\textrm{CAT}}$, $|\Psi(\pi/8)\rangle_{\textrm{CAT}}$, $|\Psi(\pi/4)\rangle_{\textrm{CAT}}$ and $|\Psi(7\pi/20)\rangle_{\textrm{CAT}}$ under the conditions of $\phi\sim0$ and $\phi\sim\pi/2$. (b) The scaled phase uncertainty $\Delta\phi/\Delta\phi_{\textrm{CAT}}^Q$ versus the scaled detection noise $\sigma/(\Delta \hat J_z)$ for $\phi\sim\pi/2$.}
\label{Fig4}
\end{figure*}
\subsection{Analytical analysis}
For spin cat states read out via interaction-based readout, the measurement precision can be analyzed analytically in some specific cases.
We will show how the interaction-based readout with $\chi t=\pi/2$ saturates the ultimate precision bound for spin cat states when the estimated phase is around $\phi=\pi/2$.
We will also explain why the interaction-based readout with spin cat states behaves differently for $\phi=0$ and $\phi=\pi/2$.
Consider an input state,
\begin{equation}\label{Psi_in_gen}
|\Psi_{in}\rangle=\sum_{k=-J}^{J} a_k |J,k\rangle,
\end{equation}
which is symmetric with respect to the exchange of the two modes, i.e., $a_{k}=a_{-k}$, where $J=N/2$ is an even integer.
According to Eq.~\eqref{NonDy}, the final state before observable measurement can be expressed as,
\begin{eqnarray}\label{Psi_f_gen}
|\Psi(\phi)\rangle_f &=& e^{-i\frac{\pi}{2}\hat J_x} e^{i\chi t\hat J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}|\Psi_{in}\rangle, \nonumber\\
&=& \sum_{k=-J}^{J} a_k \left(e^{-i\frac{\pi}{2}\hat J_x} e^{i\chi t\hat J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\right)|J,k\rangle. \nonumber\\
&&
\end{eqnarray}
When $\chi t=\pi/2$, Eq.~\eqref{Psi_f_gen} becomes,
\begin{equation}\label{Psi_f_gen2}
|\Psi(\phi)\rangle_f=\sum_{k=-J}^{J} a_k \left(e^{-i\frac{\pi}{2}\hat J_x} e^{i\frac{\pi}{2} J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\right)|J,k\rangle.
\end{equation}
Since $a_{k}=a_{-k}$,
\begin{eqnarray}\label{Psi_f_gen3}
&& |\Psi(\phi)\rangle_f= a_0\left(e^{-i\frac{\pi}{2}\hat J_x} e^{i\frac{\pi}{2} J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\right)|J,0\rangle \nonumber\\
&+& \sum_{k=1}^{J} a_k \left(e^{-i\frac{\pi}{2}\hat J_x} e^{i\frac{\pi}{2} J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\right)\left(|J,k\rangle + |J,-k\rangle\right). \nonumber\\
&&
\end{eqnarray}
One can prove that (see Appendix B),
\begin{eqnarray}\label{Psi_nonDy}
&& e^{-i\frac{\pi}{2}\hat J_x} e^{i\frac{\pi}{2} J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J,k\rangle + |J,-k\rangle\right) \nonumber\\
&=&\left[\cos(k\phi)+\sin(k\phi)\right]\left(i\right)^{(J-k)^2}|J,k\rangle \nonumber\\
&+&\left[\cos(k\phi)-\sin(k\phi)\right]\left(i\right)^{(J-k)^2}|J,-k\rangle.
\end{eqnarray}
Using Eq.~\eqref{Psi_nonDy}, we can get
\begin{eqnarray}\label{Psi_f_gen4}
|\Psi(\phi)\rangle_f=\sum_{k=-J}^{J} a_k &&\left[\cos(k\phi)+ (-1)^{J-k}\sin(k\phi)\right] \nonumber\\
&& \left(i\right)^{(J-k)^2}|J,k\rangle.
\end{eqnarray}
Then, the conditional probability of obtaining the measurement result $k$ can be obtained,
\begin{eqnarray}\label{ConPro}
P(k|\phi)&=&|a_k|^2 \left[\cos(k\phi)+(-1)^{J-k}\sin(k\phi)\right]^2 \nonumber\\
&=&|a_k|^2 \left[1+(-1)^{J-k}\sin(2k\phi)\right].
\end{eqnarray}
Thus, the CFI can be calculated,
\begin{eqnarray}\label{CFI}
F^C(\phi)&=&\sum_{k}\frac{1}{P(k|\phi)}\left(\frac{\partial P(k|\phi)}{\partial \phi}\right)^2 \nonumber\\
&=& \sum_{k}\frac{4|a_k|^4 k^2\cos^2(2k\phi)}{|a_k|^2[1+(-1)^{J-k}\sin(2k\phi)]},
\end{eqnarray}
and the corresponding Cram\'{e}r-Rao bound is $\Delta\phi^C=1/{\sqrt{F^C(\phi)}}$.
One can easily find that when $\phi=\pi/2$, $F^C=\sum_k 4 |a_k|^2 k^2$.
For spin cat states, $F^C=4\sum_{k=-J}^{J} k^2 |c_k(\theta)|^2 =F^Q\approx 4\overline M^2$, so that $\Delta\phi^C=\Delta\phi^Q$.
This indicates that the ultimate precision bound (i.e., the QCRB) can be saturated via the CFI~\eqref{CFI}.
To obtain the CFI, one needs to measure the full probability distribution of the final state~\cite{Szigeti2017, Haine2018, Mirkhalaf2018}.
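The identity $F^C(\pi/2)=4\sum_k k^2 |a_k|^2=F^Q$ can be verified directly from the conditional probability~\eqref{ConPro}. A Python sketch (hypothetical names; the cat amplitudes are rebuilt from the $c_m(\theta)$ of Appendix~A):

```python
import math

def cat_probs(N, theta):
    """Normalized |a_k|^2 of the spin cat state in the Dicke basis (sketch)."""
    J = N // 2
    c = [math.sqrt(math.comb(N, J + m)) * math.cos(theta / 2) ** (J + m)
         * math.sin(theta / 2) ** (J - m) for m in range(-J, J + 1)]
    a = [c[i] + c[len(c) - 1 - i] for i in range(len(c))]  # c_m + c_{-m}
    s = sum(v * v for v in a)
    return {m: (a[m + J] ** 2) / s for m in range(-J, J + 1)}

def cfi(N, theta, phi):
    """Classical Fisher information from P(k|phi) of Eq. (CFI)."""
    J = N // 2
    F = 0.0
    for k, ak2 in cat_probs(N, theta).items():
        P = ak2 * (1 + (-1) ** (J - k) * math.sin(2 * k * phi))
        dP = ak2 * (-1) ** (J - k) * 2 * k * math.cos(2 * k * phi)
        if P > 1e-300:  # skip numerically empty outcomes
            F += dP * dP / P
    return F

N, theta = 40, math.pi / 4
# F^Q = 4 Var(Jz) = 4 sum k^2 |a_k|^2 (mean vanishes for symmetric a_k)
FQ = 4 * sum(k * k * p for k, p in cat_probs(N, theta).items())
print(cfi(N, theta, math.pi / 2), FQ)  # equal: the readout saturates the QCRB at phi = pi/2
```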
In our scheme, one can also saturate the ultimate precision bound by measuring the expectation of half-population difference and using the error propagation formula.
The half-population difference of the final state can be written explicitly,
\begin{eqnarray}\label{Avg_Jz_gen}
\langle\hat J_z\rangle_f &=& _{f}\langle\Psi(\phi)| \hat J_z |\Psi(\phi)\rangle_{f} \nonumber\\
&=& \sum_{k=-J}^{J} k |a_k|^2 \left[\cos(k\phi)+ (-1)^{J-k}\sin(k\phi)\right]^2 \nonumber\\
&=& \sum_{k=-J}^{J} k |a_k|^2 \left[1+ (-1)^{J-k}\sin(2k\phi)\right] \nonumber\\
&=& \sum_{k=-J}^{J} k |a_k|^2 (-1)^{J-k}\sin(2k\phi),
\end{eqnarray}
and its derivative with respect to $\phi$ reads as,
\begin{equation}\label{Derivative_gen}
\frac{d\langle\hat J_z\rangle_f}{d\phi} = \sum_{k=-J}^{J} 2k^2 |a_k|^2 (-1)^{J-k}\cos(2k\phi).
\end{equation}
Correspondingly, the standard deviation of half-population difference is
\begin{eqnarray}\label{Delta_Jz_gen}
\left(\Delta \hat J_z\right)_f &=& \sqrt{_{f}\langle\Psi(\phi)| \hat J_z^2 |\Psi(\phi)\rangle_{f} - \left(_{f}\langle\Psi(\phi)| \hat J_z |\Psi(\phi)\rangle_{f}\right)^2} \nonumber\\
&=& \{\sum_{k=-J}^{J} k^2 |a_k|^2 \left[1+ (-1)^{J-k}\sin(2k\phi)\right] \nonumber\\
&-& \left[\sum_{k=-J}^{J} k |a_k|^2 (-1)^{J-k}\sin(2k\phi)\right]^2\}^{1/2} \nonumber\\
&=& \!\sqrt{\sum_{k=-J}^{J} \!k^2 |a_k|^2 \!-\! \left[\sum_{k=-J}^{J} k |a_k|^2 (\!-\!1)^{J-k}\sin(2k\phi)\!\right]^2}. \nonumber\\
&&
\end{eqnarray}
Finally, we can obtain the phase measurement precision via Eqs.~\eqref{Derivative_gen} and~\eqref{Delta_Jz_gen},
\begin{equation}\label{Delta_phi_gen}
\Delta \phi = \!\frac{\sqrt{\sum_{k=-J}^{J} k^2 |a_k|^2 \!-\! \left[\!\sum_{k=-J}^{J} k |a_k|^2 (-1)^{J-k}\sin(2k\phi)\!\right]^2}}{\left|\sum_{k=-J}^{J} 2k^2 |a_k|^2 (-1)^{J-k}\cos(2k\phi)\right|}.
\end{equation}
When $\phi=0$, we have $\sin(2k\phi)=0$ and $\cos(2k\phi)=1$, so the phase measurement precision becomes
\begin{equation}\label{Delta_phi_gen_0}
\Delta \phi|_{\phi=0} = \frac{\sqrt{\sum_{k=-J}^{J} k^2 |a_k|^2}}{\left|\sum_{k=-J}^{J} 2k^2 |a_k|^2 (-1)^{J-k}\right|}.
\end{equation}
When $\phi=\pi/2$, we have $\sin(2k\phi)=0$ and $(-1)^{J-k}\cos(2k\phi)=(-1)^{J-k}(-1)^{k}=(-1)^{J}=1$ (recall that $J$ is even), so the phase measurement precision simplifies to
\begin{equation}\label{Delta_phi_gen_halfpi}
\Delta \phi|_{\phi=\pi/2} = \frac{\sqrt{\sum_{k=-J}^{J} k^2 |a_k|^2}}{\left|\sum_{k=-J}^{J} 2k^2 |a_k|^2 \right|}.
\end{equation}
For a spin cat state~\eqref{CAT}, according to Eqs.~\eqref{variance},~\eqref{MM},~\eqref{QCRB_M},~\eqref{Delta_phi_gen_0} and~\eqref{Delta_phi_gen_halfpi}, we get the phase measurement precision
\begin{equation}\label{Delta_phi_CAT_0}
\Delta \phi |_{\phi=0}= \frac{\overline M}{\left|\sum_{m=-N/2}^{N/2} 2m^2 |c_m(\theta)|^2 (-1)^{N/2-m}\right|},
\end{equation}
and
\begin{eqnarray}\label{Delta_phi_CAT_halfpi}
\Delta \phi |_{\phi=\pi/2} &=& \frac{\overline M}{\left|\sum_{m=-N/2}^{N/2} 2m^2 |c_m(\theta)|^2\right|} \nonumber\\
&=& \frac{\overline M}{2\overline M^2} \nonumber\\
&=& C(\theta) N^{-1}.
\end{eqnarray}
From Eq.~\eqref{Delta_phi_CAT_halfpi}, it is obvious that the interaction-based readout with $\chi t=\pi/2$ attains the ultimate precision bound $\Delta\phi_{\textrm{CAT}}^Q$ of spin cat states when $\phi=\pi/2$.
Comparing Eq.~\eqref{Delta_phi_CAT_0} with Eq.~\eqref{Delta_phi_CAT_halfpi}, we find that, for interaction-based readout with $\chi t=\pi/2$, $\Delta \phi |_{\phi=0} \ge \Delta \phi |_{\phi=\pi/2}$, since the factor $(-1)^{N/2-m}$ in Eq.~\eqref{Delta_phi_CAT_0} decreases the sensitivity $d\langle\hat J_z\rangle_f/{d\phi}$ when even and odd $m$ coexist.
This also indicates that interaction-based readout with $\chi t=\pi/2$ may not be the optimal choice for $\phi=0$ with spin cat states.
The case $\phi=0$ is hard to treat analytically, so we can only obtain the optimal conditions for different spin cat states numerically, as shown in subsection B above.
\section{Robustness against Imperfections}\label{Sec5}
Finally, we investigate the robustness of the interaction-based readout scheme.
In realistic experiments, there are many imperfections that limit the final estimation precision.
Here, we discuss two main sources: the detection noise of the measurement and the dephasing during the nonlinear evolution of the interaction-based readout.
\subsection{Influences of detection noise}
Ideally, the half-population difference measurement on the final state can be rewritten as $\langle\hat J_z\rangle_f = \sum_{m=-N/2}^{N/2} P_m(\phi) m$, where $P_m(\phi)$ is the probability of projecting the final state onto the basis state $|J,m\rangle$.
For an inefficient detector with Gaussian detection noise~\cite{Nolan2017,Mirkhalaf2018,Haine2018}, the half-population difference measurement becomes
\begin{equation}\label{GuassianNoise}
\langle\hat J_z\rangle_f^{\sigma} = \sum_{m=-N/2}^{N/2} P_m(\phi|\sigma) m,
\end{equation}
with
\begin{equation}\label{GuassianPro}
P_m(\phi|\sigma)=\sum_{n=-N/2}^{N/2} A_n e^{-(m-n)^2/2\sigma^2} P_n(\phi),
\end{equation}
the conditional probability in the presence of detection noise $\sigma$. Here, $A_n$ is a normalization factor.
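Equations~\eqref{GuassianNoise} and~\eqref{GuassianPro} amount to a normalized Gaussian blur of the outcome probabilities. A minimal Python sketch (hypothetical function name; each kernel is normalized over the finite outcome grid, playing the role of $A_n$):

```python
import math

def noisy_distribution(P, sigma):
    """Apply Gaussian detection noise to outcome probabilities P[m], m = -J..J,
    following P_m(phi|sigma) = sum_n A_n exp(-(m-n)^2 / 2 sigma^2) P_n(phi)."""
    ms = sorted(P)
    if sigma == 0:
        return dict(P)  # ideal detector: distribution unchanged
    out = {m: 0.0 for m in ms}
    for n in ms:
        kern = {m: math.exp(-(m - n) ** 2 / (2 * sigma ** 2)) for m in ms}
        A = 1.0 / sum(kern.values())  # normalization factor A_n
        for m in ms:
            out[m] += A * kern[m] * P[n]
    return out

P = {-2: 0.1, -1: 0.2, 0: 0.4, 1: 0.2, 2: 0.1}
blurred = noisy_distribution(P, 1.0)
print(sum(blurred.values()))  # total probability is preserved
```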
In Fig.~\ref{Fig4}~(a), we plot the optimal standard deviation $\Delta\phi$ versus the detection noise $\sigma$ with different input spin cat states under the conditions of $\phi\sim0$ and $\phi\sim\pi/2$.
First, we note that the results obtained via the CFI for $\phi\sim0$ and $\phi\sim\pi/2$ are the same as those obtained using the error propagation formula.
The standard deviation $\Delta\phi$ stays unchanged for $\sigma\le\sqrt{N}$ and starts to blow up once $\sigma$ becomes sufficiently large.
It is obvious that spin cat states with smaller $\theta$ are more robust against detection noise.
For a clearer analysis at $\phi\sim\pi/2$, we show $\Delta\phi/\Delta\phi_{\textrm{CAT}}^Q$ versus $\sigma/(\Delta \hat J_z)$, where $\Delta\phi_{\textrm{CAT}}^Q \approx 1/(2\overline M)\approx C(\theta)/N$ and $\Delta \hat J_z\approx \overline M$ are the ultimate precision bound and the standard deviation of the spin cat state $|\Psi(\theta)\rangle_{\textrm{CAT}}$, respectively.
Interestingly, all the spin cat states have similar scaling when $\sigma/(\Delta \hat J_z)\lesssim0.5$, as shown in Fig.~\ref{Fig4}~(b).
When $\sigma/(\Delta \hat J_z)>0.5$, the phase uncertainties start to increase rapidly.
The critical point of the detection noise can be expressed as
\begin{equation}\label{DetNoise}
\sigma_c\approx0.5\,\Delta \hat J_z\approx 0.5\,\overline M\approx \frac{N}{4C(\theta)}\equiv \tilde{c}_D(\theta)N.
\end{equation}
Thus, the interaction-based readout with spin cat states is robust against detection noise up to $\sigma_c\propto N$, in agreement with the results of Ref.~\cite{Fang2017}.
Compared with the echo twisting schemes~\cite{Davis2016, Frowis2016}, our proposal is much more robust against detection imperfections when $N$ is relatively large.
In addition, for a spin cat state $|\Psi(\theta)\rangle_{\textrm{CAT}}$ with smaller $\theta$, $\tilde{c}_D(\theta)=1/[4C(\theta)]$ is larger.
This explains why $|\Psi(\theta)\rangle_{\textrm{CAT}}$ with smaller $\theta$ is more robust in our scheme.
\subsection{Influences of dephasing during interaction-based readout}
Another imperfection may come from environmental effects during the interaction-based readout.
Here, we consider $\phi\sim0$, where the interrogation duration is shorter than the duration of the interaction-based readout.
The interaction-based readout may therefore suffer from correlated dephasing.
The process can be described by a Lindblad master equation~\cite{Dorner2012},
\begin{equation}\label{Lindblad}
\frac{d\rho}{dt}=i\left[\chi\hat J_z^2,\rho\right]+\gamma\left(\hat J_z\rho\hat J_z-\frac{1}{2}\hat J_z^2\rho-\frac{1}{2}\rho\hat J_z^2\right),
\end{equation}
where $\gamma$ denotes the dephasing rate and $\rho$ is the density matrix of the evolved state.
The initial density matrix is $\rho(0)=|\tilde{\Psi}\rangle\langle\tilde{\Psi}|$ with $|\tilde{\Psi}\rangle=\hat R_x({\pi\over2}) \hat U(\phi) |\Psi(\theta)\rangle_{\textrm{CAT}}$.
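In the $\hat J_z$ eigenbasis the master equation~\eqref{Lindblad} decouples element by element and can be solved in closed form: $\rho_{mn}(t)=e^{i\chi(m^2-n^2)t}\,e^{-\gamma(m-n)^2 t/2}\,\rho_{mn}(0)$. A Python sketch of this exact solution (hypothetical function name), useful for reproducing the dephased readout without numerical integration:

```python
import numpy as np

def dephased_oat(rho0, chi, gamma, t):
    """Exact solution of d rho/dt = i[chi Jz^2, rho]
    + gamma (Jz rho Jz - {Jz^2, rho}/2) in the Jz eigenbasis (sketch).
    rho0 is indexed by m = -J..J."""
    d = rho0.shape[0]
    J = (d - 1) / 2
    m = np.arange(-J, J + 1)
    M, Nn = np.meshgrid(m, m, indexing="ij")
    phase = np.exp(1j * chi * t * (M ** 2 - Nn ** 2))   # unitary OAT phases
    decay = np.exp(-gamma * t * (M - Nn) ** 2 / 2)      # coherence decay
    return phase * decay * rho0

# fully coherent spin-1 example: off-diagonals decay, populations survive
rho0 = np.full((3, 3), 1 / 3, dtype=complex)
rho_t = dephased_oat(rho0, chi=0.0, gamma=2.0, t=1.0)
print(np.trace(rho_t).real)  # ~ 1.0: the evolution is trace preserving
```

Note the decay rate grows with $(m-n)^2$, so the widely separated coherences of a cat state are the most fragile ones.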
In Fig.~\ref{Fig5}, the effects of dephasing on the estimated phase precision for spin cat states are shown.
First, spin cat states are robust against dephasing during the interaction-based readout.
The measurement precision can still surpass the SQL even when $\gamma$ is large.
Second, the precision of spin cat states with larger $\theta$ degrades more slowly as $\gamma$ increases, since the corresponding optimal evolution time is shorter.
Therefore, it is more feasible to use spin cat states with modest $\theta$ via interaction-based readout when the estimated phase is near 0.
\begin{figure}
\caption{(Color online) The phase precision $\Delta \phi$ versus the nonlinear evolution $\chi t$ for spin cat states $|\Psi(0)\rangle_{\textrm{CAT}}$, $|\Psi(\pi/8)\rangle_{\textrm{CAT}}$, $|\Psi(\pi/4)\rangle_{\textrm{CAT}}$ and $|\Psi(7\pi/20)\rangle_{\textrm{CAT}}$ in the presence of dephasing with different dephasing rates $\gamma$.}
\label{Fig5}
\end{figure}
\section{Summary}\label{Sec6}
In summary, we have investigated the metrological performances of spin cat states and proposed to implement interaction-based readout to make full use of spin cat states for quantum phase estimation.
We analytically show that spin cat states can perform Heisenberg-limited measurements, with standard deviations of the estimated phase that are always inversely proportional to the total particle number.
We find that interaction-based readout is one of the optimal methods for spin cat states to perform Heisenberg-limited measurement.
When the estimated phase $\phi$ is around 0, spin cat states with modest entanglement are beneficial, since their optimal nonlinear evolution $\chi t$ of the interaction-based readout is much smaller than $\pi/2$.
However, when the estimated phase $\phi$ lies near $\pi/2$, the interaction-based readout with spin cat states can always saturate the ultimate precision bound provided the nonlinear evolution can be tuned to $\chi t =\pi/2$.
A detailed derivation of how the interaction-based readout saturates the ultimate precision bounds of spin cat states is given analytically.
Moreover, the interaction-based readout with spin cat states is robust against detection noise and does not require single-particle resolution detectors.
Compared with the twisting echo schemes, our proposal is immune to detection noise up to $\sigma \sim \tilde{c}_D(\theta) N$ and is therefore much more robust.
Besides, the influence of other imperfections, such as dephasing during the interaction-based readout, is also discussed.
Our study on quantum phase estimation with spin cat states via the interaction-based readout may open up a feasible way to achieve Heisenberg-limited quantum metrology with non-Gaussian entangled states.
\section*{APPENDIX A: DETERMINATION OF $\mathbf{\overline M}$ FOR SPIN CAT STATES}
For a spin cat state, since $-\overline M$ and $\overline M$ can be interpreted as the center locations of the two peaks of the superposed spin coherent states, the value of $\overline M$ can be determined from the maximum of the coefficients $c_m(\theta)$.
Without loss of generality, we assume $\overline M>0$.
The difference between the nearest two coefficients can be calculated as
\begin{eqnarray}\label{Dif_Coef}
c_m(\theta)\!&-&\!c_{m\!-\!1}(\theta)=\!\sqrt{\!\frac{(2J)!}{(\!J\!+\!m)!(\!J\!-\!m)!}}\cos^{\!J+\!m}\!\left({\theta \over 2}\right)\sin^{\!J-\!m}\!\left({\theta \over 2}\right) \nonumber\\
&-&\!\sqrt{\!\frac{(2J)!}{(\!J\!+\!m\!-\!1)!(\!J\!-\!m\!+\!1)!}}\cos^{\!J+\!m-\!1}\!\left({\theta \over 2}\right)\sin^{\!J\!-\!m\!+\!1}\!\left({\theta \over 2}\right)\nonumber\\
&=& c_m(\theta)\left[1-\sqrt{\frac{J+m}{J-m+1}}\tan\left(\frac{\theta}{2}\right)\right].
\end{eqnarray}
For $m\le\overline M$ we have $c_m(\theta)\!-\!c_{m\!-\!1}(\theta)\ge0$, while for $m\ge\overline M$ we have $c_m(\theta)\!-\!c_{m\!-\!1}(\theta)\le0$; therefore, at $m=\overline M$ the difference must vanish, $c_m(\theta)\!-\!c_{m\!-\!1}(\theta)=0$.
Thus,
\begin{equation}\label{A2}
c_{\overline M}(\theta)\!-\!c_{\overline M\!-\!1}(\theta)=c_{\overline M}(\theta)\left[1\!-\!\sqrt{\frac{J+\overline M}{\!J\!-\!\overline M\!+\!1}}\tan\left(\frac{\theta}{2}\right)\right]=0,
\end{equation}
and since the coefficients $c_m(\theta)$ are all real and positive, we deduce that
\begin{equation}\label{A3}
\sqrt{\frac{J+\overline M}{J-\overline M+1}}\tan\left(\frac{\theta}{2}\right)=1.
\end{equation}
Solving Eq.~\eqref{A3}, we can find that,
\begin{equation}\label{A4}
\overline M=J \left( \frac{1-\tan^2\left(\theta/2\right)}{1+\tan^2\left(\theta/2\right)} \right) + \frac{1}{1+\tan^2(\theta/2)}.
\end{equation}
Since $J\gg1$, the term $1/[1+\tan^2\left(\theta/2\right)]<1$ can be neglected, giving
\begin{equation}\label{A5}
\overline M \approx J\left( \frac{1-\tan^2\left(\theta/2\right)}{1+\tan^2\left(\theta/2\right)} \right).
\end{equation}
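The peak condition can be checked by brute force: the argmax of $c_m(\theta)$ over integer $m$ is the integer part of $\overline M$ from Eq.~\eqref{A4}. A short Python sketch (hypothetical function name):

```python
import math

def peak_location(N, theta):
    """Compare the argmax of c_m(theta) with Mbar from Eq. (A4) (sketch)."""
    J = N // 2
    best_m, best_c = None, -1.0
    for m in range(-J, J + 1):
        c = math.sqrt(math.comb(N, J + m)) \
            * math.cos(theta / 2) ** (J + m) \
            * math.sin(theta / 2) ** (J - m)
        if c > best_c:
            best_m, best_c = m, c
    t2 = math.tan(theta / 2) ** 2
    Mbar = J * (1 - t2) / (1 + t2) + 1 / (1 + t2)  # Eq. (A4)
    return best_m, Mbar

best_m, Mbar = peak_location(60, math.pi / 4)
print(best_m, Mbar)  # argmax of c_m equals floor(Mbar)
```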
\section*{APPENDIX B: THE EFFECT OF INTERACTION-BASED READOUT ($\chi t=\pi/2$)}
Here, we give the proof of Eq.~\eqref{Psi_nonDy} of the main text.
The Dicke basis states are
\begin{equation}\label{B1}
|J,-J\rangle \equiv \prod_{l=1}^{2J}|\downarrow\rangle_{l},
\end{equation}
\begin{equation}\label{B2}
|J, J\rangle \equiv \prod_{l=1}^{2J}|\uparrow\rangle_{l},
\end{equation}
and
\begin{equation}\label{B3}
|J,k\rangle \equiv \prod_{l=1}^{J+k}|\uparrow\rangle_{l}\prod_{l=J+k+1}^{2J}|\downarrow\rangle_{l},
\end{equation}
for $-J<k<J$.
First,
\begin{eqnarray}\label{B4}
&& e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J, J\rangle +|J, -J\rangle\right)\nonumber\\
&=& e^{i\frac{\pi}{4}\left(\sum_{l=1}^{2J}\hat \sigma_x^{(l)}\right)} e^{-i\frac{\phi}{2}\left(\sum_{l=1}^{2J}\hat \sigma_z^{(l)}\right)} \left(\prod_{l=1}^{2J}|\downarrow\rangle_{l} + \prod_{l=1}^{2J}|\uparrow\rangle_{l}\right)\nonumber\\
&=& e^{i\frac{\pi}{4}\left(\sum_{l=1}^{2J}\hat \sigma_x^{(l)}\right)} \left(\prod_{l=1}^{2J} e^{i\frac{\phi}{2}}|\downarrow\rangle_{l} + \prod_{l=1}^{2J}e^{-i\frac{\phi}{2}}|\uparrow\rangle_{l}\right)\nonumber\\
&=& e^{i\phi J}\prod_{l=1}^{2J}\left(\frac{1}{\sqrt{2}}|\downarrow\rangle_{l}+\frac{i}{\sqrt{2}}|\uparrow\rangle_{l}\right) \nonumber\\
&+& e^{-i\phi J} \prod_{l=1}^{2J}\left(\frac{i}{\sqrt{2}}|\downarrow\rangle_{l}+\frac{1}{\sqrt{2}}|\uparrow\rangle_{l}\right)\nonumber\\
&=& \sum_{m=-J}^{J} \left[\frac{\sqrt{C_{2J}^{J-m}}}{2^J}(i)^{J+m}e^{i\phi J} |J,m\rangle\right] \nonumber\\
&+& \sum_{m=-J}^{J} \left[\frac{\sqrt{C_{2J}^{J+m}}}{2^J}(i)^{J-m}e^{-i\phi J}|J,m\rangle\right] \nonumber\\
&=& \sum_{m=-J}^{J} \frac{\sqrt{C_{2J}^{J-m}}}{2^J}\left[(i)^{J+m}e^{i\phi J}+(i)^{J-m}e^{-i\phi J}\right]|J,m\rangle \nonumber\\
&&
\end{eqnarray}
Here, $C_{y}^{x}=\frac{y!}{x!(y-x)!}$ is the binomial coefficient. Then,
\begin{eqnarray}\label{B5}
&& e^{i \frac{\pi}{2} \hat J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J, J\rangle +|J, -J\rangle\right)\nonumber\\
&=& \sum_{m=-J}^{J} \frac{\sqrt{C_{2J}^{J-m}}}{2^J}(i)^{m^2} \left[(i)^{J+m}e^{i\phi J}+(i)^{J-m}e^{-i\phi J}\right]|J,m\rangle.\nonumber\\
&&
\end{eqnarray}
Considering the cases of even and odd $m$ separately, we find that
\begin{eqnarray}\label{B6}
&& (i)^{m^2} \left[(i)^{J+m}e^{i\phi J}+(i)^{J-m}e^{-i\phi J}\right]\nonumber\\
&=& \cos(J\phi)\left[(i)^{J+m}+(i)^{J-m}\right]+\sin(J\phi)\left[(i)^{J-m}-(i)^{J+m}\right],\nonumber\\
&&
\end{eqnarray}
Substituting Eq.~\eqref{B6} into Eq.~\eqref{B5},
\begin{eqnarray}\label{B7}
&& e^{i \frac{\pi}{2} \hat J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J, J\rangle +|J, -J\rangle\right)\nonumber\\
&=& \left[\cos(J\phi)\!-\!\sin(J\phi)\right]\prod_{l=1}^{2J}\left(\frac{1}{\sqrt{2}}|\downarrow\rangle_{l}+\frac{i}{\sqrt{2}}|\uparrow\rangle_{l}
\right)\nonumber\\
&+& \left[\cos(J\phi)\!+\!\sin(J\phi)\right]\prod_{l=1}^{2J}\left(\frac{i}{\sqrt{2}}|\downarrow\rangle_{l}+\frac{1}{\sqrt{2}}|\uparrow\rangle_{l}
\right).\nonumber\\
&&
\end{eqnarray}
So, we have
\begin{eqnarray}\label{B8}
&& e^{-i\frac{\pi}{2}\hat J_x} e^{i \frac{\pi}{2} \hat J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J, J\rangle +|J, -J\rangle\right)\nonumber\\
&=& \left[\cos(J\phi)\!-\!\sin(J\phi)\right]\prod_{l=1}^{2J} |\downarrow\rangle_{l}+ \left[\cos(J\phi)\!+\!\sin(J\phi)\right]\prod_{l=1}^{2J} |\uparrow\rangle_{l}\nonumber\\
&=& \left[\cos(J\phi)\!-\!\sin(J\phi)\right]|J,-J\rangle + \left[\cos(J\phi)\!+\!\sin(J\phi)\right]|J,J\rangle.\nonumber\\
&&
\end{eqnarray}
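Equation~\eqref{B8} can also be verified numerically by applying the pulse sequence as matrices; the sketch below (Python, hypothetical variable names) does so for $J=2$:

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of Eq. (B8) for J = 2 (N = 4), a sketch.
J = 2
m = np.arange(-J, J + 1)
v = np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] + 1.0))  # ladder elements
Jx = (np.diag(v, 1) + np.diag(v, -1)) / 2
phi = 0.3
seq = expm(-1j * np.pi / 2 * Jx) \
    @ np.diag(np.exp(1j * np.pi / 2 * m ** 2)) \
    @ expm(1j * np.pi / 2 * Jx) \
    @ np.diag(np.exp(-1j * phi * m))
inp = np.zeros(2 * J + 1, dtype=complex)
inp[0] = inp[-1] = 1.0  # |J,-J> + |J,J>
out = seq @ inp
expect = np.zeros(2 * J + 1, dtype=complex)
expect[0] = np.cos(J * phi) - np.sin(J * phi)   # coefficient of |J,-J>
expect[-1] = np.cos(J * phi) + np.sin(J * phi)  # coefficient of |J,+J>
print(np.allclose(out, expect))
```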
Next, when $-J<k<J$, we assume $k>0$ without loss of generality,
\begin{eqnarray}\label{B9}
&& e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J, k\rangle +|J, -k\rangle\right)\nonumber\\
&=& e^{i\frac{\pi}{4}\left(\sum_{l=1}^{2J}\hat \sigma_x^{(l)}\right)} \left(\prod_{l=1}^{J+k} e^{-i\frac{\phi}{2}}|\uparrow\rangle_{l}\prod_{l=J+k+1}^{2J}e^{i\frac{\phi}{2}}|\downarrow\rangle_{l}\right)\nonumber\\
&+& e^{i\frac{\pi}{4}\left(\sum_{l=1}^{2J}\hat \sigma_x^{(l)}\right)} \left(\prod_{l=1}^{J-k} e^{-i\frac{\phi}{2}}|\uparrow\rangle_{l}\prod_{l=J-k+1}^{2J}e^{i\frac{\phi}{2}}|\downarrow\rangle_{l}\right)\nonumber\\
&=& \sum_{m=-k}^{k} \frac{\sqrt{C_{2k}^{k-m}}}{2^k}\left[(i)^{k+m}e^{i\phi k}+(i)^{k-m}e^{-i\phi k}\right]|J,m\rangle. \nonumber \\
\end{eqnarray}
Then,
\begin{eqnarray}\label{B10}
&& e^{i \frac{\pi}{2} \hat J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J, k\rangle +|J, -k\rangle\right)\nonumber\\
&=& \sum_{m=-k}^{k} \frac{\sqrt{C_{2k}^{k-m}}}{2^k}(i)^{m^2} \left[(i)^{k+m}e^{i\phi k}+(i)^{k-m}e^{-i\phi k}\right]|J,m\rangle.\nonumber\\
&&
\end{eqnarray}
Similar to Eq.~\eqref{B6}, we have
\begin{eqnarray}\label{B11}
&& (i)^{m^2} \left[(i)^{k+m}e^{i\phi k}+(i)^{k-m}e^{-i\phi k}\right]\nonumber\\
&=& \cos(k\phi)\left[(i)^{k+m}+(i)^{k-m}\right]+\sin(k\phi)\left[(i)^{k-m}-(i)^{k+m}\right],\nonumber\\
&&
\end{eqnarray}
Substituting Eq.~\eqref{B11} into Eq.~\eqref{B10},
\begin{eqnarray}\label{B12}
&& e^{i \frac{\pi}{2} \hat J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J, k\rangle +|J, -k\rangle\right)\nonumber\\
&=& \left[\cos(k\phi)\!-\!\sin(k\phi)\right]\prod_{l=1}^{2k}\left(\frac{1}{\sqrt{2}}|\downarrow\rangle_{l}+\frac{i}{\sqrt{2}}|\uparrow\rangle_{l}
\right)\nonumber\\
&&\prod_{l=2k+1}^{2J}
\!\left(\frac{1}{\sqrt{2}}|\downarrow\rangle_{l}\!+\!\frac{1}{\sqrt{2}}|\uparrow\rangle_{l}\right)+\nonumber\\
&& \left[\cos(k\phi)\!+\!\sin(k\phi)\right]\prod_{l=1}^{2k}\left(\frac{i}{\sqrt{2}}|\downarrow\rangle_{l}+\frac{1}{\sqrt{2}}|\uparrow\rangle_{l}
\right)\nonumber\\
&&\prod_{l=2k+1}^{2J}
\!\left(\frac{1}{\sqrt{2}}|\downarrow\rangle_{l}\!+\!\frac{1}{\sqrt{2}}|\uparrow\rangle_{l}\right).\nonumber\\
&&
\end{eqnarray}
Finally, we obtain that
\begin{eqnarray}\label{B13}
&& e^{-i\frac{\pi}{2}\hat J_x} e^{i \frac{\pi}{2} \hat J_z^2} e^{i\frac{\pi}{2}\hat J_x} e^{-i \phi \hat J_z}\left(|J, k\rangle +|J, -k\rangle\right)\nonumber\\
&=& \left[\cos(k\phi)\!-\!\sin(k\phi)\right](i)^{(J-k)^2}|J,-k\rangle \nonumber\\
&+& \left[\cos(k\phi)\!+\!\sin(k\phi)\right](i)^{(J-k)^2}|J,k\rangle. \nonumber\\
&&
\end{eqnarray}
Combining Eqs.~\eqref{B8} and~\eqref{B13}, we obtain Eq.~\eqref{Psi_nonDy} of the main text.
\end{document} |
\begin{document}
\title{Common Knowledge in Interaction Structures}
\sloppy
\begin{abstract}
We consider two simple variants of a framework
for reasoning about knowledge amongst communicating groups of players.
Our goal is to clarify the resulting epistemic issues.
In particular, we investigate the impact of common knowledge of the
underlying hypergraph connecting the players, and the conditions under which
common knowledge distributes over disjunction.
We also obtain two versions of the classic result that common knowledge cannot be
achieved in the absence of a simultaneous event
(here a message sent to the whole group).
\end{abstract}
\section{Introduction}
\label{sec:intro}
We introduce a framework for reasoning about communication amongst groups of players.
We assume that each player is a member of a certain number of groups, and that he can synchronously broadcast
information to each of those groups.
Thus there is what we call an \dfn{interaction structure}, a hypergraph of the players, that determines the
communication protocol.
We are interested in studying what players can learn in certain restricted communication settings,
what impact common knowledge of the underlying hypergraph can have,
and in properties of the resulting knowledge that can simplify reasoning about it.
\begin{figure}
\caption{Two interaction structures. Hyperarcs are shown in gray.}
\label{fig:interaction-structure-examples:a}
\label{fig:interaction-structure-examples:b}
\label{fig:interaction-structure-examples}
\end{figure}
For example, consider \cref{fig:interaction-structure-examples}.
If player~$i$ knows that he is in interaction structure~\subref{fig:interaction-structure-examples:a},
and he learns a fact from player~$j$ that initially only player~$n$ knew,
then $i$~can deduce that both~$l$ and~$k$ also must have learned that fact.
In interaction structure~\subref{fig:interaction-structure-examples:b},
he can only deduce that either of them has learned it, but not which one.
If~$i$ does not know the interaction structure,
he cannot draw such conclusions, since player~$n$ might as well have communicated with player~$j$ directly.
One particular focus of our discussion concerns conditions under which knowledge of a disjunction
does allow us to deduce knowledge of one particular disjunct,
thus simplifying reasoning in such situations.
Another focus is to analyze the conditions for attaining common knowledge.
In the following \cref{sec:defs}, we first set up a more restricted framework
where players can only communicate those facts that they initially know,
and we examine this framework in detail in \cref{sec:telling}.
In \cref{sec:forwarding} we then lift this restriction
and examine how the properties of knowledge are affected
when players are allowed to send information which they learned from other players.
In \cref{sec:conclusions} we discuss related work,
in particular two closely related frameworks from the literature,
and draw some conclusions.
We look at some possible extensions in \cref{sec:extensions}.
\section{Preliminaries}
\label{sec:defs}
We assume the following setup to be common knowledge among the players.
There is a set of players $N$. Each player $i \in N$ has
a private set $\ensuremath{At}\xspace_i$ of facts (atomic propositions), of which only player $i$ initially knows whether they are true.
The truth values of these facts are represented by a \dfn{valuation},
which can be written as a set $V \subseteq \ensuremath{At}\xspace$ containing those facts that are true,
where $\ensuremath{At}\xspace=\bigcup_{i\in N}\ensuremath{At}\xspace_i$.
By $V_i$, we denote $V\cap \ensuremath{At}\xspace_i$, the restriction of $V$ to $i$'s facts.
Throughout, we assume communication to be \dfn{truthful} in the sense that it only contains information the sender knows to be true.
An \dfn{interaction structure} for players $N$ is a tuple $(H,
(\ensuremath{At}\xspace_i)_{i \in N})$, where $H$ is a \dfn{hypergraph on $N$}, i.e., a
set of non-empty subsets of $N$, called \dfn{hyperarcs}, and the $At_i$ are pairwise disjoint sets.
In the present section we place two related restrictions.
Firstly, we use unordered \emph{sets} of messages,
i.e.~without any temporal structure, since it only matters
whether a given message has been broadcast or not,
and not \emph{when} it was broadcast. Secondly, we only allow
messages of the form $\msg{i}{A}{p}$ with $i\in A\in H$ and $p\in\ensuremath{At}\xspace_i$.
That is, players only broadcast basic facts that `belong' to them.
In \cref{sec:forwarding} we partially lift these restrictions,
allowing more general forms of broadcast. This in turn
means introducing some temporal ordering since if the
message $\msg{i}{A}{p}$ occurs, with $p\not\in\ensuremath{At}\xspace_i$, then everybody in
$A$ knows that \emph{before} that broadcast there was another
broadcast of the form $\msg{\cdot}{B}{p}$ with $i \in B$,
since otherwise~$i$ could not have known~$p$.
Given these restrictions, we consider two different situations:
one in which the underlying hypergraph is commonly known amongst the players;
and one in which it is not,
in the sense that a player knows only the hyperarcs to which he belongs.
In each case an interaction structure defines a
communication protocol: each player~$i$ can at any point broadcast
any true fact $p \in \ensuremath{At}\xspace_i$ to any hyperarc $A \in H$ with $i \in A$.
Thus a \dfn{message} is a tuple $\msg{i}{A}{p}$
with $i \in A$ and $p\in\ensuremath{At}\xspace_i$; $\msg{i}{A}{p}$ is the message in which $i$
communicates among the group $A$ his fact~$p$.
\dfn{$H$-compliant messages} are those in which $A \in H$.
If the players consider only $H$-compliant messages possible,
then they know the underlying hypergraph $H$.
So if the model allows only $H$-compliant messages, the
underlying hypergraph $H$ is common knowledge among the players;
if it uses all messages, $H$ is unknown.
We next define our model formally in order to reason about the
knowledge of the players and how it changes as messages are broadcast.
This is roughly along the lines of \emph{history based models}
(see, e.g., \citet{pacuit_reasoning_2007,FHMV_RAK}).
We start by defining a \dfn{state} (which we might also have called a `possible world') \state
to consist of a valuation~$V \subseteq \ensuremath{At}\xspace$
and a set~$M$ of messages $\msg{\cdot}{\cdot}{p}$ such that~$p \in V$.
An \dfn{$H$-compliant} state is one where $M$ only contains $H$-compliant messages.
A \dfn{word} over a set $A\subseteq N$ is a finite sequence $w=i_1\ldots i_k$ where each $i_l \in A$.
By $A^*$ we denote the set of all words over $A$,
and we write $\setof w$ for the \emph{set} of players occurring in $w$.
Now given a set of messages $M$ and a word $w$, we introduce the following notation:
\begin{align*}
M_w &:= \C{(\cdot,A,\cdot) \in M \mid \setof w\subseteq A}\\
Facts(M) &:= \C{p \mid (\cdot, \cdot,p) \in M}.
\end{align*}
So $M_i$ (respectively, $M_w$) is the subset of the set of messages~$M$ that player $i$ received
(respectively, that were broadcast to all the players in~$w$; note that the order in $w$ does not matter),
and $\mbox{{\bf false}}acts(M)$ is the set of facts that were communicated in the messages in~$M$.
In particular, $\mbox{{\bf false}}acts(M_i)$ is the set of facts that were communicated
in the messages in~$M$ that player~$i$ received.
Note that \state is a state if $\mbox{{\bf false}}acts(M)\subseteq V$.
Further, we define all set operations to act component-wise on states, e.g.
$\state \subseteq \state[']$ iff $V \subseteq V'$ and $M \subseteq M'$.
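For concreteness, the notation just introduced can be sketched in code. The encoding below (players and facts as strings, a message as a `(sender, group, fact)` triple, a word as a string of one-letter player names) is our own illustration, not fixed by the paper:

```python
# Illustrative encoding (our own, not the paper's): players and facts are
# strings, and a message is a triple (sender, group, fact) whose group is
# a frozenset of players.

def M_w(M, w):
    """M_w: the messages of M broadcast to all players occurring in the
    word w (the order of w is irrelevant, as noted above)."""
    return {m for m in M if set(w) <= m[1]}

def facts(M):
    """Facts(M): the facts communicated by some message in M."""
    return {p for (_, _, p) in M}

def is_state(V, M):
    """(V, M) is a state iff every communicated fact is true."""
    return facts(M) <= set(V)

M = {('k', frozenset({'j', 'k'}), 'p'),
     ('k', frozenset({'i', 'j', 'k'}), 'q')}
assert M_w(M, 'ij') == {('k', frozenset({'i', 'j', 'k'}), 'q')}
assert facts(M) == {'p', 'q'}
assert is_state({'p', 'q'}, M) and not is_state({'p'}, M)
```

Here `M_w(M, 'ij')` keeps only the second message, since its group contains both $i$ and $j$.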
In order to represent the knowledge of the players we define
an \dfn{indistinguishability relation} between states:
$
\mbox{$\state \sim_i \state[']$ iff $(V_i, M_i) = (V'_i, M'_i)$.}
$
In the semantics we present below, a player $i$ is said to `know' a fact
just if that fact is true in every state that is indistinguishable for $i$
from the actual state. Of particular interest to us is the knowledge of
\emph{groups} $G\subseteq N$ (always assumed to be non-empty).
Specifically we consider the so-called
`common knowledge' among a group (cf.~\cite[p.~23]{FHMV_RAK}).
These are facts that everybody in the group knows,
that they all know that they know, and so on.
To define this formally we extend the individual
indistinguishability relation to groups:
for $G \subseteq N$ the relation $\sim_G$ is the transitive
closure of $\bigcup_{i \in G}\sim_i$.
We are interested in properties definable by the following \dfn{epistemic language} $\mathcal{L}$:
\[
\varphi ::= p \mid \neg \varphi \mid \varphi \wedge \varphi \mid \varphi \lor \varphi \mid \ck G \varphi,
\]
where the atoms $p$ denote the facts in~$\ensuremath{At}\xspace$,
$\neg$, $\wedge$ and $\lor$ are the standard connectives;
and $\ck G$ is a knowledge operator, with $\ck G \varphi$ meaning $\varphi$
is common knowledge among $G$.
We write $\knows i$ for $\ck{\{i\}}$;
$\knows i \varphi$ can be read `$i$ knows that $\varphi$'.
The \dfn{positive language} $\mathcal{L}^+$ is the sublanguage of $\mathcal{L}$
in which negation ($\neg$) does not occur.
The semantics for $\mathcal{L}$ is as follows:
\begin{align*}
\state &\vDash_H p && \text{ iff } p \in V, \\
\state &\vDash_H \neg \varphi && \text{ iff } \state \nvDash_H \varphi, \\
\state &\vDash_H \varphi \lor \psi && \text{ iff } \state \vDash_H \varphi\text{ or }\state \vDash_H \psi, \\
\state &\vDash_H \varphi \wedge \psi && \text{ iff } \state \vDash_H \varphi\text{ and }\state \vDash_H \psi, \\
\state &\vDash_H \ck G \varphi && \text{ iff }
\begin{aligned}[t]
&\state['] \vDash_H \varphi\\
&\text{for each $H$-compliant \state[']}\\
&\text{with } \state \sim_{G} \state[']\enspace.
\end{aligned}
\end{align*}
By allowing only $H$-compliant states in the last clause of the semantics,
the underlying hypergraph~$H$ is assumed to be \emph{common knowledge}.
Assuming that the hypergraph $H$ is \emph{unknown} turns out to be equivalent to
the case where it is common knowledge that the hypergraph $H$ is complete,
i.e., $H=\mathcal{P}(N) \setminus \{\emptyset\}$.
This might seem counter-intuitive, but it reflects the fact that if the
hypergraph is unknown then every player must consider it possible that
every set $A \subseteq N$ might be a hyperarc in $H$.
To denote the corresponding semantics,
we use $\vDash$ as abbreviation for $\vDash_H$ with $H$ being the complete hypergraph.
For a word $w=i_1\ldots i_k$, we write $\knows w$ to abbreviate $\knows{i_1}\knows{i_2} \ldots \knows{i_k}$,
and write $\sim_w$ to denote the concatenation $\sim_{i_1} \circ \ldots \circ \sim_{i_k}$.
Notice that $\state \sim_G \state[']$ iff there is $w\in G^*$ with $\state \sim_w \state[']$.
So an equivalent way of specifying the semantics for $\ck G$ with non-singleton $G$ is as follows:
\begin{equation}
\label{equ:K}
\state \vDash \ck G \varphi\text{ iff }\state \vDash \knows w \varphi\text{ for all }w\in G^*\enspace.
\tag{$\star$}
\end{equation}
We now study the consequences of two choices in the analysis of players' knowledge:
\begin{itemize}
\item The type of messages; we already assumed that players send only atomic information, but there still remains a choice whether, as assumed above, players only send information they know initially, or can send information that they have learned from other players. The former scenario is explored in \cref{sec:telling}, the latter in \cref{sec:forwarding}.
\item The issue whether the underlying hypergraph is commonly known among the players.
We consider this distinction in both of the following sections.
\end{itemize}
We shall see that both choices have bearing on players' knowledge.
\section{Telling}
\label{sec:telling}
In this section we study the case, under the assumption mentioned above,
that players' messages refer only to the facts they initially know:
players can send only information they know at the outset.
We call this contingency `telling'.
For the relevance of common knowledge of~$H$, consider the following example.
\begin{example}
\label{ex:ck-of-h-matters-for-negative}
For players $G=\{i,j,k\}$, $H=\{G\}$, $p\in \ensuremath{At}\xspace_k$ and $\state=(\emptyset,\emptyset)$,
we have
\[
\state\vDash_H\knows i\neg \knows j p\enspace.
\]
Indeed, the only hyperarc in $H$ through which player~$j$ could learn anything from $k$
is the one which also contains player~$i$.
So there is no way for $k$ to tell $j$ anything `secretly'.
Hence, with $M_i=\emptyset$,
$i$ also knows that $M_{jk}=\emptyset$.
That is, in all states which $i$ considers possible at \state,
$k$ has not told $j$ that $p$,
therefore in all these states $j$ does not know $p$.
On the other hand, we have
$
\state\nvDash \knows i\neg \knows j p\enspace,
$
since $\state\sim_i(\{p\},\{\msg{k}{\{j,k\}}{p}\})$.
So for some formulas, common knowledge of $H$ matters.
Note also that in this example, we even have
\[
\state\vDash_H\ck G\neg \knows j p\enspace,
\]
which shows that common knowledge can be attained without any communication taking place.
\end{example}
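The reasoning in this example can be checked mechanically. The following sketch (our own encoding; the names `sat`, `states`, `K` are hypothetical) enumerates all states over the toy instance of the example and evaluates formulas by the semantics above, treating $K_i$ as $C_{\{i\}}$:

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Toy instance matching the example: N = {i,j,k}, At_k = {p}, H = {{i,j,k}}.
N = frozenset({'i', 'j', 'k'})
AT = {'i': set(), 'j': set(), 'k': {'p'}}
H = {N}

# All `telling' messages (sender, group, fact), the fact owned by the sender.
MESSAGES = [(s, A, p) for s in N for p in AT[s]
            for A in powerset(N) if s in A]

def facts(M):
    return {p for (_, _, p) in M}

def states(h_compliant):
    """All states (V, M) with Facts(M) <= V; optionally only H-compliant."""
    msgs = [m for m in MESSAGES if not h_compliant or m[1] in H]
    at = set().union(*AT.values())
    return [(V, M) for V in powerset(at) for M in powerset(msgs)
            if facts(M) <= V]

def indist(s, t, pl):
    """s ~_pl t: same own facts and same received messages."""
    return (s[0] & AT[pl] == t[0] & AT[pl] and
            frozenset(m for m in s[1] if pl in m[1]) ==
            frozenset(m for m in t[1] if pl in m[1]))

def sat(s, f, univ):
    """Evaluate formula f (nested tuples) at state s over the universe univ."""
    op = f[0]
    if op == 'atom':
        return f[1] in s[0]
    if op == 'not':
        return not sat(s, f[1], univ)
    if op == 'or':
        return sat(s, f[1], univ) or sat(s, f[2], univ)
    if op == 'and':
        return sat(s, f[1], univ) and sat(s, f[2], univ)
    # op == 'C': common knowledge among group f[1], via reachability
    G, phi = f[1], f[2]
    seen, todo = {s}, [s]
    while todo:
        cur = todo.pop()
        for t in univ:
            if t not in seen and any(indist(cur, t, pl) for pl in G):
                seen.add(t)
                todo.append(t)
    return all(sat(t, phi, univ) for t in seen)

def K(pl, phi):                      # K_pl as C_{{pl}}
    return ('C', frozenset({pl}), phi)

s0 = (frozenset(), frozenset())      # the empty state of the example
no_Kj_p = ('not', K('j', ('atom', 'p')))
assert sat(s0, K('i', no_Kj_p), states(True))        # holds under |=_H
assert not sat(s0, K('i', no_Kj_p), states(False))   # fails under |=
assert sat(s0, ('C', N, no_Kj_p), states(True))      # even common knowledge
```

The three assertions mirror the three claims of the example: under $\vDash_H$ player $i$ knows that $j$ does not know $p$, under $\vDash$ he does not, and under $\vDash_H$ this is even common knowledge without any communication.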
However, common knowledge of formulas from the positive language $\mathcal{L}^+$
can only be attained through messages received by the whole group,
and for these formulas, common knowledge of~$H$ does not matter.
In order to establish this,
we first show the following \cref{result:positive-keep-holding}, which intuitively says that
if a positive formula is true in some state,
then it remains true in any state where more facts are true
or more communication has taken place.
Remember that $\vDash$ corresponds to $\vDash_H$ with $H$ being the complete hypergraph,
so the following carries over to general states and $\vDash$.
\begin{lemma}
\label{result:positive-keep-holding}
For any $\varphi\in\mathcal{L}^+$ and $H$-compliant states $\state$ and \state[']
with $\state\subseteq\state[']$,
\[
\textup{if } \state\vDash_H\varphi, \textup{ then } \state['] \vDash_H\varphi.
\]
\end{lemma}
\begin{proof}
We proceed by structural induction on $\varphi$.
The only not completely obvious case is when
$\varphi=\ck G\psi$ with $\psi\in\mathcal{L}^+$.
We show the claim for $G=\{i\}$; the non-singleton case then follows by induction and~\eqref{equ:K}.
Take an $H$-compliant state \state[''] such that $\state[']\sim_i\state['']$.
Let
\[
\state['''] := (V_i\cup\textstyle\bigcup_{j\neq i}V_j'',M_i)\enspace.
\]
We have $Facts(M_i)\subseteq Facts(M)\subseteq V$,
since \state is a state.
Also, $M_i\subseteq M_i'=M_i''\subseteq M''$,
so $Facts(M_i)\subseteq Facts(M'')\subseteq V''$,
since \state[''] is a state.
Hence,
$
Facts(M''')=Facts(M_i)\subseteq V\cap V''=\textstyle\bigcup_{i\in N}(V_i\cap V_i'')\subseteq V'''\enspace.
$
$
This shows that \state['''] is a state.
Moreover, $\state\sim_i\state[''']$.
Assume now $\state\vDash_H \knows i\psi$.
Then we obtain $\state[''']\vDash_H\psi$.
Further, we have $\state[''']\subseteq\state['']$
since $V_i\subseteq V_i'=V_i''$ and $M_i\subseteq M_i'=M_i''$ due to $\state[']\sim_i\state['']$.
Thus, by induction hypothesis we obtain $\state['']\vDash_H\psi$.
\end{proof}
\begin{theorem}
\label{result:ck-of-h-doesnt-matter}
For any $H$-compliant state $\state$ and $\varphi\in\mathcal{L}^+$,
\[
\mbox{$\state\vDash\varphi$ iff $\state\vDash_H\varphi$.}
\]
\end{theorem}
\begin{proof}
We proceed by structural induction.
The only non-trivial step is when $\varphi=\ck G\psi$ with $\psi\in\mathcal{L}^+$.
\noindent ($\Rightarrow$) By induction hypothesis, $\state\vDash\ck G\psi$ implies $\state\vDash_H\ck G\psi$,
since each $H$-compliant state is also a state.
\noindent ($\Leftarrow$) Assume to the contrary that
$\state\nvDash\ck G\psi$.
So there is
a state $\state[']$ with $\state\sim_G \state[']$ and $ \state[']\nvDash\psi$.
Now let
\[
M'\restriction_H:=\{\msg{\cdot}{A}{\cdot} \in M' \mid A\in H\}\enspace.
\]
So $M'\restriction_H$ consists of all $H$-compliant messages of $M'$.
Now note that $( V', M'\restriction_H)\subseteq \state[']$,
so from $ \state[']\nvDash\psi$
we obtain that $(V', M'\restriction_H)\nvDash\psi$
using \cref{result:positive-keep-holding} (which, as noted, also holds for general states and $\vDash$).
Since $( V', M'\restriction_H)$ is $H$-compliant,
the induction hypothesis yields $(V', M'\restriction_H)\nvDash_H\psi$.
Moreover, we also have $\state\sim_G(V', M'\restriction_H)$,
since \state is $H$-compliant and $\state\sim_G \state[']$.
Thus, $\state\nvDash_H\ck G\psi$.
\end{proof}
In the remainder of this section, we are concerned with formulas from~$\mathcal{L}^+$,
so in view of the above results we restrict attention to $\vDash$.
We now establish that $\ck G$ distributes over disjunctions of positive formulas,
starting with singleton~$G$.
\begin{lemma}
\label{result:k-disjunction-distributes}
For any $\varphi_1,\varphi_2\in\mathcal{L}^+$, $i\in N$, and state \state,
\[
\mbox{$\state\vDash \knows i(\varphi_1\vee\varphi_2)$ iff $\state\vDash \knows i\varphi_1\vee \knows i\varphi_2$.}
\]
\end{lemma}
\begin{proof}
To deal with the ($\Rightarrow$) implication assume that
$\state\nvDash \knows i\varphi_1\vee \knows i\varphi_2$. Then
$\state\nvDash \knows i\varphi_1$ and $\state\nvDash \knows i\varphi_2$,
i.e., there are $ \state[']$ and $\state['']$ such that
\begin{align*}
\state\sim_i \state[']&\text{ and }\state[']\nvDash\varphi_1\enspace\text{, as well as}\\
\state\sim_i\state['']&\text{ and }\state['']\nvDash\varphi_2\enspace.
\end{align*}
Let now
\[
\state[''']:=( V_i\cup\textstyle\bigcup_{j\neq i}(V_j'\cap V_j''),M_i)\enspace.
\]
Then $Facts(M)\subseteq V$, $Facts(M')\subseteq V'$, $Facts(M'')\subseteq V''$,
since \state, \state['] and \state[''] are states.
Moreover, $M_i=M_i'$ and $M_i=M_i''$.
So,
\begin{align*}
Facts(M''')&\subseteq Facts(M)\cap Facts(M')\cap Facts(M'')\\
&\subseteq V\cap V'\cap V''\\
&=\textstyle\bigcup_{i\in N}(V_i\cap V_i'\cap V_i'')\\
&\subseteq V'''\enspace.
\end{align*}
This shows that \state['''] is a state,
and since $M'''=M_i\subseteq M$ it is $H$-compliant.
Now since $V_i=V_i'=V_i''$ and $M_i=M_i'=M_i''$,
we have $\state[''']\subseteq\state[']$ and $\state[''']\subseteq\state['']$.
By \cref{result:positive-keep-holding}, we obtain
$\state[''']\nvDash\varphi_1$
and
$\state[''']\nvDash\varphi_2$,
thus
$\state[''']\nvDash\varphi_1\vee\varphi_2$.
Furthermore $\state\sim_i\state[''']$,
so $\state\nvDash \knows i(\varphi_1\vee\varphi_2)$.
Further, the ($\Leftarrow$) implication immediately holds by the semantics.
\end{proof}
\begin{theorem}
\label{result:ck-disjunction-distributes}
For any $\varphi_1,\varphi_2\in\mathcal{L}^+$, state \state, and $G\subseteq N$,
\[
\mbox{$\state\vDash \ck G(\varphi_1\vee\varphi_2)$ iff $\state\vDash \ck G\varphi_1\vee \ck G\varphi_2$.}
\]
\end{theorem}
\begin{proof}
The claim follows directly from \cref{result:k-disjunction-distributes} and~\eqref{equ:K}.
\end{proof}
To see that this result does not hold if we allow negation,
consider three players $i,j,k\in N$, $p\in V_k$,
$\state=(\emptyset,\emptyset)$, and
$\varphi=\knows i( \knows j p\vee\neg \knows j p)$.
Then $\state\vDash\varphi$, since the disjunction is a tautology,
but there is no way for $i$ to know which disjunct is true.
Even with non-tautological disjunctions, the result does not hold.
\begin{example}
\label{ex:k-disjunction-doesnt-distribute-with-negation}
With $p\in V_k$ and
\begin{align*}
\state&=(\{p\},\{\msg{k}{\{i,k\}}{p}\})\\
\varphi&=\knows i(\knows j p\vee\neg(\knows j p\vee \knows j\neg p))\enspace,
\end{align*}
we have $\state\vDash\varphi$, but again, $i$ knows neither disjunct in $\state$.
Intuitively, having privately learned that $p$ is true,
$i$ knows that $j$ either also learned it
or that $j$ does not know whether $p$ is true,
but $i$ does not know which of these two statements is true.
\end{example}
Another observation is that
mutual knowledge of any \emph{fact} $p\in\ensuremath{At}\xspace$
can only be obtained through a corresponding message,
and is thus inseparably tied to common knowledge.
\begin{lemma}
\label{result:knowledge-chain-fact-equiv-msg-equiv-ck}
For any $w\in N^*$ with $\abs{\setof w}\geq2$,
$p\in\ensuremath{At}\xspace$,
and state \state,
the following are equivalent:
\begin{enumerate}[(i)]
\item\label{result:knowledge-chain-fact-equiv-msg-equiv-ck:chain}
$\state\vDash \knows wp$,
\item\label{result:knowledge-chain-fact-equiv-msg-equiv-ck:msgs}
there is $\msg{\cdot}{\cdot}{p}\in M_w$,
\item\label{result:knowledge-chain-fact-equiv-msg-equiv-ck:ck}
$\state\vDash \ck Gp$
with $G=\setof w$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:chain})\Rightarrow
(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:msgs})$:
Assume that $(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:msgs})$ does not hold.
Let $V':=V\setminus\{p\}$.
Then $(V',M_w)$ is a state and $(V',M_w)\nvDash p$.
Now let $i\in N$ be such that $p\in\ensuremath{At}\xspace_i$.
Since $\abs{\setof w}\geq2$, there is $j\in\setof w$ with $p\not\in\ensuremath{At}\xspace_j$.
By construction, $(V',M_w)\nvDash p$ and $(V,M)\sim_j(V',M_w)$.
Since $j\in\setof w$ and both $\state\sim_k\state$ and $(V',M_w)\sim_k(V',M_w)$ for all $k\in N$,
we obtain $\state\sim_w(V',M_w)$ and thus $\state\nvDash \knows wp$.
\noindent $(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:msgs})\Rightarrow
(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:ck})$:
Suppose that $G=\setof w$ and take $m \in M_w$.
Consider $\state[']$ such that $\state \sim_{G} \state[']$.
This means that for a sequence
$i_1, \ldots, i_k$ of players from $G$
and some states $\state[^1], \ldots, \state[^k]$
we have $\state \sim_{i_1} \state[^1] \sim_{i_2} \ldots \sim_{i_k} \state[^k]$,
where $\state['] = \state[^k]$.
But $i_1 \in G$, so $m \in M_{i_1}$, and consequently $m \in
M^{1}_{i_1}$. Also $i_2 \in G$, so $m \in M^{1}_{i_2}$,
and consequently $m \in M^{2}_{i_2}$. Continuing this way we conclude
that $m \in M^{k}_{i_k}$, that is $m \in M'_{i_k}$.
Hence, $m \in M'$. This shows that $M_w \subseteq M'$. So $Facts(M_w) \subseteq
V'$, since $(V', M')$ is a state. But by the assumption $p \in
Facts(M_w)$, so $\state['] \vDash p$. This proves $\state \vDash \ck G
p$.
\noindent $(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:ck})\Rightarrow
(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:chain})$: By~\eqref{equ:K}.
\end{proof}
We can extend this connection between mutual and common knowledge to arbitrary positive formulas.
\begin{theorem}
\label{result:knowledge-chain-phi-equiv-ck}
\label{thm:permutation}
For any $G\subseteq N$, $\varphi\in\mathcal{L}^+$, and state \state,
\begin{center}
$\state\vDash \ck G\varphi$ iff $\state\vDash \knows w\varphi$ for some $w\in G^*$ with $\setof w=G$.
\end{center}
\end{theorem}
\begin{proof}
The direction $(\Rightarrow)$ is by~\eqref{equ:K}.
For $(\Leftarrow)$, we proceed by structural induction.
The base case
is obtained from \cref{result:knowledge-chain-fact-equiv-msg-equiv-ck}.
The induction step for disjunction follows by
\cref{result:ck-disjunction-distributes},
and for conjunction it follows directly by definition of the semantics.
For $\varphi=\knows i\psi$,
the assumption $\state\vDash \knows w\knows i\psi$ yields, by induction hypothesis,
that $\state\vDash \ck{G'}\psi$ for $G'=\setof w\cup\{i\}=G\cup\{i\}$,
which by definition of the semantics implies that $\state\vDash \ck G\knows i\psi$.
\end{proof}
Note that this result provides for positive formulas
a simplified characterization of the common knowledge operator,
as compared with~\eqref{equ:K}.
Finally, we establish a result intuitively saying that a group's
common knowledge of a
positive formula can only be achieved
when some message (or messages) has been broadcast to at least all
members of this group. So common knowledge of a positive formula cannot be
achieved among a group by means of more limited communications, for example
point-to-point messages.
Given a formula $\varphi$ we denote by $Facts(\varphi)$ the set of facts that occur in it.
\begin{theorem}
\label{result:knowledge-chain-phi-only-through-msg}
\label{thm:pos}
For any $G\subseteq N$ with $\abs G\geq2$,
${\it Var}phi\in\mathcal{L}^+$, and state \state,
\begin{center}
if $\state\vDash \ck G{\it Var}phi$, then there is $\msg{\cdot}{A}{p}\in M$ with $G\subseteq A$ and $p \in Facts({\it Var}phi)$.
\end{center}
\end{theorem}
\begin{proof}
By \cref{result:ck-disjunction-distributes} and the definition of semantics,
we can transform $\ck G\varphi$ into an equivalent formula
consisting only of disjunctions and conjunctions over formulas of the form $\ck G\ck{G_1}\dots \ck{G_l}p$
with $G_1,\dots,G_l\subseteq N$ and $p\in Facts(\varphi)$.
Since $\state\vDash \ck G\varphi$
there is at least one of these formulas for which
$\state\vDash \ck G\ck{G_1}\dots \ck{G_l}p$.
Take now $w$ such that $\setof w = G$.
By~\eqref{equ:K} we obtain $\state\vDash \knows w \ck{G_1}\dots \ck{G_l} p$, so
by the definition of semantics $\state\vDash \knows w p$.
The claim now follows by \cref{result:knowledge-chain-fact-equiv-msg-equiv-ck}.
\end{proof}
\section{Forwarding}
\label{sec:forwarding}
We now consider a more complex situation in which players are allowed
to send facts that they learned from other players.
We call this contingency `forwarding'.
It is achieved by
relaxing in the definition of a message $(i,A,p)$ the assumption
$p \in \ensuremath{At}\xspace_i$ to $p \in \ensuremath{At}\xspace$.
We still insist that a player can send a message only to a group to which he belongs,
that is, that $i \in A$ holds.
We also assume that only information \emph{known to be true} is sent,
so we now need to examine how a player learned the information he is sending.
This brings us to consider the following relation on the set of messages:
\[
\mbox{$(j,B,p) \leadsto (i,A,p)$ iff $p \not\in \ensuremath{At}\xspace_i$ and $i \in B$.}
\]
Intuitively, $(j,B,p) \leadsto (i,A,p)$ means that the fact $p$ is initially not known to player~$i$
and that he has learned it from a message sent by player~$j$ to a group to which $i$ belongs.
So $(j,B,p) \leadsto (i,A,p)$ means that $(j,B,p)$ is a possible (partial) \emph{explanation} of $(i,A,p)$.
By a \oldbfe{state} we now mean a pair $(V, M)$ such that for
each message $(i,A,p) \in M$ a sequence of messages $(j_1, B_1, p),
\ldots, (j_k, B_k, p)$ exists (i.e., each of these messages about $p$ is in $M$) such
that
\begin{compactitem}
\item these messages form an explanatory chain:
for $l \in \{1, \ldots, k-1\}$ we have $(j_l, B_l, p) \leadsto (j_{l+1}, B_{l+1}, p)$;
\item they are not circular: players $j_1, \ldots, j_k$ are all different;
\item $p$ is initially known to player~$j_1$: $p \in V_{j_1}$; and
\item the fact $p$ reaches player~$i$: $(j_k, B_k, p) = (i,A,p)$.
\end{compactitem}
We call such a sequence of messages an \oldbfe{explanation} for $(i,A,p)$ in
$(V, M)$. So a pair $\state$ is a state if for each of its messages it has an explanation.
Note that given a state, its messages contain only true facts. That
is, if $(V, M)$ is a state, then $Facts(M) \subseteq V$.
Moreover, if $(i,A,p) \in M$, then player $i$ knows that $p$ is true,
i.e.~$(V,M) \vDash \knows i p$.
(A more general statement is established in \cref{lem:equiv_i}.)
Note also that when each message in $M$ is of the form $(i,\cdot,p)$,
where $p \in V_i$, then $(V, M)$ is a state, since each message then forms
its own explanation. So states considered in this section generalize the
states considered in the previous section.
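The definition of an explanation lends itself to a direct search procedure. Below is a minimal sketch under the same assumed encoding as before (messages are `(sender, group, fact)` triples, `AT` maps each player to her private facts; the function names are ours):

```python
# Backward search for explanatory chains, following the definition above:
# distinct senders, each step respecting the leadsto relation, starting
# with a player who initially knows the fact to be true.

def explained(msg, V, M, AT, used=frozenset()):
    """Is there an explanation in M for msg, avoiding senders in `used`?"""
    sender, _, p = msg
    if p in AT[sender]:          # sender can only be the start of a chain
        return p in V
    taken = used | {sender}
    return any(explained(pred, V, M, AT, taken)
               for pred in M
               if pred[2] == p and sender in pred[1]
               and pred[0] not in taken)

def is_forwarding_state(V, M, AT):
    """(V, M) is a state iff every message in M has an explanation."""
    return all(explained(m, V, M, AT) for m in M)

AT = {'i': {'p'}, 'j': set(), 'k': set()}
ij, jk = frozenset({'i', 'j'}), frozenset({'j', 'k'})
# j may forward p to k, but only after having learned it from i:
assert is_forwarding_state({'p'}, {('i', ij, 'p'), ('j', jk, 'p')}, AT)
assert not is_forwarding_state({'p'}, {('j', jk, 'p')}, AT)
```

The `used` set makes the chains acyclic, so the search always terminates.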
Each state can be alternatively viewed as a partial ordering
$\leadsto^*$ (where $\leadsto^*$ is the reflexive, transitive
closure of the $\leadsto$ relation) on a set of messages such that each message
has an explanation.
The only restriction on the order of the actions comes from the
relation $\leadsto$ that needs to be respected: a player sends a
message that contains information that either he initially knows to be true (the
message is $(i,A,p)$ where $p \in V_i$) or he has learned
(the message is $(i,A,p)$ and some earlier message is of the form
$(j,B,p)$, where $i \in B$). So the computation begins with some
players who send information they know is true.
We now consider the semantics introduced in \cref{sec:defs} in this
extended setting. It is important to realize that these two semantics
differ in the sense that for a state
and a formula $\varphi \in \mathcal{L}$ it can happen that
$\state\vDash_H \varphi$ holds in the sense of \cref{sec:defs} but not
in the sense considered now.
\begin{example}
\label{ex:semantics-different}
Let $N = \C{i,j,k}$, $H =
\{\C{i,j}, \C{j,k}\}$, $V = \{p\}$, where $p \in At_i$, and $M =
\{(i,\{i,j\},p)\}$. Then we have
\[
\state \vDash_H \knows i \neg \knows k p
\]
in the sense of \cref{sec:defs}. The intuitive reason is that the fact
$p$ `belongs' to $i$, so it cannot be used in any message sent by $j$,
and this information is known to $i$. However, in the present setting
$p$ can be used in a message sent by $j$ and we have
\[
\state \nvDash_H \knows i \neg \knows k p\enspace.
\]
Indeed, consider $(V',M')$ with $V' = \{p\}$ and $M' = \{(i,\{i,j\},p),(j,\{j,k\},p)\}$.
Then $(V, M) \sim_i (V', M')$ and $(V', M') \vDash \knows k p$.
\end{example}
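The witness pair used in this example can be checked mechanically. The sketch below (our own encoding again) verifies that the two states are $i$-indistinguishable, that the witness is $H$-compliant, and that in it $k$ has received a $p$-message and hence knows $p$:

```python
# Mechanical check of the witness (V', M') from the example above.
AT = {'i': {'p'}, 'j': set(), 'k': set()}
ij, jk = frozenset({'i', 'j'}), frozenset({'j', 'k'})
H = {ij, jk}

s  = ({'p'}, {('i', ij, 'p')})                     # the actual state (V, M)
s2 = ({'p'}, {('i', ij, 'p'), ('j', jk, 'p')})     # the witness (V', M')

def view(state, pl):
    """What player pl observes: own facts and received messages."""
    V, M = state
    return (V & AT[pl], frozenset(m for m in M if pl in m[1]))

assert all(m[1] in H for m in s2[1])      # the witness is H-compliant
assert view(s, 'i') == view(s2, 'i')      # i cannot tell s and s2 apart
# ... and in s2 player k received a p-message, hence knows p there
# (cf. the remark after the definition of forwarding states):
assert any(m[2] == 'p' and 'k' in m[1] for m in s2[1])
```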
In general, only non-epistemic formulas have
the same meaning w.r.t.~both semantics.
We now show that some, though not all, properties
established in the previous section also hold in this new setting.
In particular, as in \cref{ex:ck-of-h-matters-for-negative}, we have
$(\emptyset, \emptyset) \vDash_H\ck G\neg \knows j p$, so
also now common knowledge can exist without any communication
taking place.
However, as we shall see, \cref{result:ck-of-h-doesnt-matter} does not
hold in general any more.
So in the following we usually consider $\vDash_H$, which, as
mentioned earlier,
includes $\vDash$ as a special case with $H$ being the complete hypergraph.
Recall that for a formula $\varphi$ we denoted the set of facts that occur in it by $Facts(\varphi)$.
\begin{lemma}\label{lem:f}
For any $\varphi \in \mathcal{L}^+$ and $H$-compliant state $\state$, if $(V,M) \vDash_H \varphi$, then $Facts(\varphi) \cap V \neq \emptyset$.
\end{lemma}
\begin{proof}
We proceed by structural induction on $\varphi$.
The only not completely obvious case is when
$\varphi=\ck G\psi$. But $(V,M) \vDash_H \ck G\psi$ implies $(V,M) \vDash_H \psi$, so in this case the induction hypothesis
readily applies, as well.
\end{proof}
The following result is then a counterpart of \cref{result:knowledge-chain-phi-only-through-msg}.
\begin{theorem}\label{thm:group1}
For any $G \subseteq N$ with $|G| \geq 2$, $\varphi\in\mathcal{L}^+$, and $H$-compliant state $\state$,
\begin{mainclaim}
if $\state\vDash_H \ck G \varphi$, then there is $\msg{\cdot}{A}{p}\in M$ with $G \subseteq A$ and $p \in Facts(\varphi)$.
\end{mainclaim}
\end{theorem}
\begin{proof}
Note that the conclusion of the implication can be written in a more succinct way as
$Facts(\varphi) \cap Facts(\bigcap_{i \in G} M_i) \neq \emptyset$.
Suppose that $\state\vDash_H \ck G \varphi$ and $Facts(\varphi) \cap Facts(\bigcap_{i \in G} M_i) = \emptyset$.
Call a message a \oldbfe{$p$-message} if it is of the form $(\cdot, \cdot, p)$.
Abbreviate $\bigcup_{i \in G} M_i$ to $M_G$.
(Note that this is different from $M_w$, where $w$ is a word,
which corresponds to an intersection.)
Three cases arise.
\smallskip
\noindent
\emph{Case 1}. For all $p \in Facts(\varphi)$ there is no $p$-message in $M_G$ and $Facts(\varphi) \cap V = \emptyset$.
Then by \cref{lem:f} $(V,M) \nvDash_H \varphi$.
\smallskip
\noindent
\emph{Case 2}. For all $p \in Facts(\varphi)$ there is no $p$-message in $M_G$ and $Facts(\varphi) \cap V \neq \emptyset$.
Take some $p \in Facts(\varphi) \cap V$.
Since $|G| \geq 2$, there is $i \in G$ such that $p \not\in V_{i}$.
Remove from $V$ the fact $p$ and from $M$ all $p$-messages. Denote
the outcome by $(V', M')$. By construction $(V',M')$ is an $H$-compliant state and by
the assumption there is no $p$-message in $M_{i}$, so
$(V,M) \sim_i (V',M')$.
\smallskip
\noindent
\emph{Case 3}. For some $p \in Facts(\varphi)$ there is a $p$-message in $M_G$.
Given a set of messages $O \subseteq
M$, we denote by $top(O)$ the set of $p$-messages $m \in O$, where $p \in Facts(\varphi)$, such that
for no $m' \in O$ we have $m \neq m'$ and $m \leadsto^* m'$. Further,
we define ($cl$ stands for the closure)
\[
cl(m) := \C{m' \in M \mid m \leadsto^{*} m'}.
\]
We assumed that the set of $p$-messages in $M_G$, where $p \in Facts(\varphi)$, is non-empty,
so the set $top(M_G)$ is non-empty and hence for some $i_1 \in G$ the set
$top(M_G) \cap M_{i_1}$ is non-empty. Choose some $m \in top(M_G) \cap M_{i_1}$.
Let $M' := M \setminus cl(m)$ and $V' := V$. By the assumption
$Facts(\varphi) \cap Facts(\bigcap_{i \in G} M_i) = \emptyset$, there is some $i \in G$ such that
$i \not\in A$, where $m$ is of the form $(\cdot,A,p)$ for some $p \in Facts(\varphi)$.
By the construction $(V', M')$ is an $H$-compliant state
and $(V,M) \sim_{i} (V', M')$.
\smallskip
We now repeat the above case analysis with $(V',M')$ instead of $(V,M)$.
Iterating in this way we eventually end up in Case 1, since in Cases 2 and 3
some fact or message is always removed. This way we obtain a word $w \in
G^*$ and an $H$-compliant state $(V'', M'')$ such that $(V,M) \sim_{w} (V'', M'')$
and $(V'', M'') \nvDash_H \varphi$. By~\eqref{equ:K} this contradicts the assumption that
$\state\vDash_H \ck G \varphi$.
\end{proof}
\begin{corollary} \label{cor:iff}
For any $G \subseteq N$ with $|G| \geq 2$, $p\in\ensuremath{At}\xspace$, and $H$-compliant state $\state$,
\[
\mbox{$\state\vDash_H \ck G p$ iff there is $\msg{\cdot}{A}{p}\in M$ with $G \subseteq A$.}
\]
\end{corollary}
\begin{proof}
$(\Rightarrow)$ is a direct consequence of \cref{thm:group1}.\\
\noindent $(\Leftarrow)$
The proof is analogous to the one of the implication
$(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:msgs})\Rightarrow
(\ref{result:knowledge-chain-fact-equiv-msg-equiv-ck:ck})$ of
\cref{result:knowledge-chain-fact-equiv-msg-equiv-ck} and is omitted.
\end{proof}
Here is the counterpart of the above result for the case of one
player. It states that in any state a player knows a fact iff either
he knows it at the outset or he has learned it through a message he
received.
\begin{lemma}
\label{lem:equiv_i}
For any $i\in N$, $p\in\ensuremath{At}\xspace$, and $H$-compliant state $\state$,
\[
\mbox{$\state\vDash_H \knows i p$ iff $p \in V_i \cup Facts(M_i)$.}
\]
\end{lemma}
\begin{proof}
$(\Rightarrow)$
Suppose that $\state\vDash_H \knows i p$ and $p \not\in V_i \cup Facts(M_i)$.
Remove from $V$ the fact $p$ and from $M$ all messages of the form $(\cdot, \cdot, p)$.
Denote the outcome by $(V',M')$. By construction $(V',M')$ is an $H$-compliant state and by the assumption
$(V,M) \sim_i (V',M')$. So $(V',M') \vDash_H p$, which is a contradiction.
\smallskip
\noindent
$(\Leftarrow)$
Consider an $H$-compliant state $\state[']$ such that $\state \sim_i \state[']$.
Then $V_i = V'_i$ and $M_i = M'_i$. So $V_i \subseteq V'$ and
$Facts(M_i) \subseteq Facts(M') \subseteq V'$, where the final inclusion follows by
the fact that $\state[']$ is a state. So $p \in V'$ and consequently $\state['] \vDash_H p$, as desired.
\end{proof}
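As a concrete illustration, the following Python sketch (our own encoding; the helper names \texttt{knows} and \texttt{common\_knowledge} are not the paper's notation) implements the purely syntactic characterizations of the last two results on a chain of messages along a line of players, as in the line-graph example later in this section. It encodes only the characterizations, not the underlying indistinguishability semantics.

```python
# Illustration of the two characterizations proved above; helper names are
# our own.  A message is a triple (sender, group, fact); M_i collects the
# messages whose group contains player i.

def knows(i, p, V_i, M):
    """Lemma: K_i p holds iff p is in V_i or in Facts(M_i)."""
    facts_i = {q for (_, A, q) in M if i in A}
    return p in V_i or p in facts_i

def common_knowledge(G, p, M):
    """Corollary (|G| >= 2): CK_G p holds iff some p-message was sent to a
    group containing all of G."""
    return any(q == p and G <= A for (_, A, q) in M)

# Chain state: p is known initially only to l, then passed l -> k -> j -> i.
M = {('l', frozenset('lk'), 'p'),
     ('k', frozenset('kj'), 'p'),
     ('j', frozenset('ji'), 'p')}

print(knows('i', 'p', set(), M))             # True: i received a p-message
print(common_knowledge({'j', 'i'}, 'p', M))  # True: the message (j,{j,i},p)
print(common_knowledge({'i', 'k'}, 'p', M))  # False: no such joint message
```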
\begin{corollary}
\label{result:ck-of-h-doesnt-matter-for-facts}
For any $i\in N$, $p\in\ensuremath{At}\xspace$, and $H$-compliant state $\state$,
\[
\mbox{$\state\vDash_H \knows i p$ iff $\state\vDash \knows i p$.}
\]
\end{corollary}
\begin{proof}
This follows from \cref{lem:equiv_i} and the fact that $\vDash$ is a special case of $\vDash_H$.
\end{proof}
For further analysis we need an auxiliary concept. Suppose that $(V,M) \subseteq (V',
M')$, where $(V',M')$ is a state. In general, $(V,M)$
does not need to be a state, but we
can complete it to a state $L(V,M)$ such that $L(V,M)
\subseteq (V',M')$. Indeed, it suffices for each message $m$ in $M$ to add
to $M$ messages forming an explanation of $m$ in $M'$ and then add
to $V$ the facts used in these added messages. More precisely, let
$M''$ be a smallest set such that $M \subseteq M'' \subseteq M'$ and $(V \cup
Facts(M''), M'')$ is a state. In general, this does not define
a unique state, since each message in $M'$ can have
multiple explanations. However, the states are finite, so we can
always choose $(V \cup Facts(M''), M'')$ in a unique way, for example,
by associating with each state a unique natural number.
From now on we assume that given an inclusion $(V,M) \subseteq (V', M')$
the state $L(V,M)$ is uniquely defined. Note that if
\state['] is $H$-compliant, then so are \state and $L\state$.
The following observation will be useful.
\begin{fact}
\label{fact:L}
For any $i\in N$ and $H$-compliant states $\state ,\state[']$ with $(V_i, M_i) \subseteq \state[']$,
\[
\state \sim_i L(V_i, M_i).
\]
\end{fact}
\begin{proof}
All messages in $(V_i, M_i)$ involve player~$i$, so
the $H$-compliant state $\state[''] = L(V_i, M_i)$ is realized by adding to $(V_i,
M_i)$ only some messages that do not involve player~$i$ and some
facts from outside of $\ensuremath{At}\xspace_i$. Consequently $M_i = M''_i$ and
$V_i = V''_i$, that is $\state \sim_i \state['']$.
\end{proof}
Next, the following property of the semantics will be needed.
\begin{lemma}
\label{lem:properties}
For any $H$-compliant state $\state$, $G \subseteq N$,
and facts $p_1, \ldots, p_k$,
\[
\mbox{$\state \vDash_H \ck G(\bigvee_{j=1}^{k} p_j)$ iff $\state \vDash_H \bigvee_{j=1}^{k} \ck G p_j$.}
\]
\end{lemma}
\begin{proof}
To deal with ($\Rightarrow$) we consider two cases.
\smallskip
\noindent
\emph{Case 1}. $|G| = 1$, say $G = \C{i}$.
Suppose that $\state\nvDash_H \bigvee_{j=1}^{k} \knows i p_j$.
Then for $j \in \C{1, \ldots, k}$ we have $\state\nvDash_H \knows i p_j$
and thus by
\cref{lem:equiv_i} $p_j \not\in V_i \cup Facts(M_i)$.
Let
\[
\state[''] := L(V_i, M_i)
\]
be the $H$-compliant state defined w.r.t.~the inclusion $(V_i, M_i) \subseteq \state$.
This state is realized by adding to $M_i$ some messages from $M$ and to $V_i$ some
facts from $Facts(M_i)$. So $V'' \subseteq V_i \cup Facts(M_i)$ and consequently
for $j \in \C{1, \ldots, k}$ we have $p_j \not\in V''$. Hence,
\[
\state['']\nvDash_H \textstyle\bigvee_{j=1}^{k} p_j.
\]
Moreover, by \cref{fact:L} we have $\state\sim_i \state['']$, so
$\state\nvDash_H \knows i(\bigvee_{j=1}^{k} p_j)$.
\smallskip
\noindent
\emph{Case 2}. $|G| \geq 2$.
By \cref{lem:f} for some $j \in \C{1, \ldots, k}$ there is $\msg{\cdot}{A}{p_j}\in M$ with $G \subseteq A$.
So, by \cref{cor:iff}, $\state\vDash_H \ck G p_j$, and thus $\state \vDash_H \bigvee_{j=1}^{k} \ck G p_j$.
\smallskip
The ($\Leftarrow$) implication holds directly by the definition of the semantics.
\end{proof}
We can now resume our comparison with the results of the previous section. To start with,
the following result is a counterpart of \cref{result:positive-keep-holding}.
\begin{lemma}
\label{lem:mono}
For any $\varphi\in\mathcal{L}^+$ and $H$-compliant states $(V,M)$ and $(V',M')$
with $(V',M')\subseteq\state $,
\[
\textup{if } (V', M')\vDash_H\varphi, \textup{ then } \state \vDash_H\varphi.
\]
\end{lemma}
\begin{proof} By structural induction on $\varphi$.
\end{proof}
In \cref{sec:telling} we used this result to
establish \cref{result:ck-of-h-doesnt-matter}. However, in the
current setting the counterpart of \cref{result:ck-of-h-doesnt-matter} does not hold.
\begin{example}
\label{ex:ck-of-h-does-matter}
Consider players $N = \C{i,j,k,l}$ and a graph $H$
with the edges $\{l,k\}, \{k,j\}, \{j,i\}$,
see \cref{fig:line}. Suppose that
$V = \C{p}$, where $p \in \ensuremath{At}\xspace_l$, and $M = \C{(l,\C{l,k},p), (k,\C{k,j},p), (j,\C{j,i},p)}$.
\begin{figure}
\caption{Knowledge of~$H$ matters even for positive formulas when forwarding is allowed.}
\label{fig:line}
\end{figure}
Then
\[
\state \nvDash \knows i \knows k p,
\]
since player~$i$ does not know through which source player~$j$ learned~$p$. However,
\[
\state \vDash_H \knows i \knows k p,
\]
since when the underlying graph is commonly known,
player~$i$ knows that player~$j$ learned $p$ from player~$k$.
\end{example}
Still, a limited counterpart of \cref{result:ck-of-h-doesnt-matter} does hold.
Let $\mathcal{L}^{+}_{K}$ be the sublanguage of $\mathcal{L}^{+}$ in which
the knowledge operators $\ck G$ are not allowed to be nested. So if $\ck G
\varphi \in \mathcal{L}^{+}_{K}$, then $\varphi$ is a propositional
formula that does not use negation.
\begin{theorem}
\label{thm:+K}
For any $H$-compliant state $\state$ and $\varphi\in\mathcal{L}^+_{K}$,
\[
\mbox{$\state\vDash\varphi$ iff $\state\vDash_H\varphi$.}
\]
\end{theorem}
\begin{proof}
We proceed by structural induction on $\varphi$. The only
non-trivial case is when $\varphi=\ck G\psi$ for some $G \subseteq N$ and
$\psi$ a propositional formula that does not use negation.
Let $\bigwedge_{j=1}^{k} \bigvee_{l=1}^{m_j} p_{j,l}$
be the conjunctive normal form of $\psi$. So each $p_{j,l}$ is a fact.
By \cref{lem:properties} and the definition of semantics
we have both
\begin{align*}
\state\vDash \ck G \psi&\text{ iff }\state\vDash\textstyle\bigwedge_{j=1}^{k} \bigvee_{l=1}^{m_j} \ck G p_{j,l}\\
\intertext{and}
\state\vDash_H \ck G \psi&\text{ iff }\state\vDash_H\textstyle\bigwedge_{j=1}^{k} \bigvee_{l=1}^{m_j} \ck G p_{j,l}\enspace.\\
\intertext{But by \cref{result:ck-of-h-doesnt-matter-for-facts}, for all $j \in \C{1, \ldots, k}$ and $l \in \C{1, \ldots, m_j}$ we have}
\state\vDash \ck G p_{j,l}&\text{ iff }\state\vDash_H \ck G p_{j,l}\enspace.
\end{align*}
This implies the claim for $\ck G\psi$.
\end{proof}
We now analyze to what extent \cref{result:ck-disjunction-distributes} holds in the current setting.
We first prove that the $\ck G$ operator distributes over
disjunctions of formulas from the non-epistemic sublanguage
$\mathcal{L}_{\wedge,\vee}$ of $\mathcal{L}$ in which only
conjunction and disjunction are allowed.
\begin{theorem}
\label{thm:disj-p}
For any $\varphi_1,\varphi_2\in\mathcal{L}_{\wedge,\vee}$, $G \subseteq N$ and $H$-compliant state $\state$,
\[
\mbox{$\state \vDash_H \ck G(\varphi_1\vee\varphi_2)$ iff $\state \vDash_H \ck G\varphi_1\vee \ck G\varphi_2$.}
\]
\end{theorem}
\begin{proof}
Passing to the conjunctive normal forms of $\varphi_1$ and $\varphi_2$,
the result follows from the definition of the semantics by applying \cref{lem:properties} twice.
\end{proof}
However, the $\ck G$ operator does not distribute over the knowledge operators,
so the counterpart of \cref{result:ck-disjunction-distributes} does not hold.
\begin{example}
\label{ex:ck-doesnt-distribute-over-k}
Consider the set of players $N = \C{i,j,k,l,n}$
and the hypergraph $H$ being the graph with the edges
$(n,k), (n,l), (k,j), (l,j), (j,i)$,
see \cref{fig:interaction-structure-examples}\subref{fig:interaction-structure-examples:b}.
Take $V = \C{p}$, where $p \in \ensuremath{At}\xspace_n$, and
\[
M =
\C{(n,\C{n,k},p), (k,\C{k,j},p), (j,\C{j,i},p)}\enspace.
\]
Then
\[
\state \vDash_H \knows i(\knows k p \vee \knows l p),
\]
but neither
$
\state \vDash_H \knows i \knows k p,
$
nor
$
\state \vDash_H \knows i \knows l p
$
holds. Informally, player~$i$ knows that either player~$k$ or player~$l$ knows~$p$
but he does not know which one of them knows~$p$.
\end{example}
As noticed already after the proof of \cref{result:ck-disjunction-distributes},
the $\knows i$ operator does not distribute over negation either;
the same example applies here.
Finally, reconsider \cref{thm:permutation}. It is straightforward to see that it does not hold in
the present setting, even for two players. Indeed, reconsider \cref{ex:ck-of-h-does-matter}.
We showed there that $\state \vDash_H \knows i \knows k p$. However, it is easy to see that
$\state \nvDash_H \ck{\{i, k\}} p$ since $\state \nvDash_H \knows k \knows i p$.
\section{Conclusions and related work}
\label{sec:conclusions}
In this paper we studied various aspects of common knowledge in two
simple frameworks concerned with synchronous communication. It is
useful to clarify that our two impossibility results concerning
the attainment of common knowledge amongst players
(\cref{result:knowledge-chain-phi-only-through-msg,thm:group1})
differ from the customary impossibility results.
For example, \citet{HM90} formalize the epistemic aspects of the
celebrated Coordinated Attack Problem that consists in
achieving common knowledge (a `common plan of action').
They show (in Section~8) that in a
distributed system in which communication is not guaranteed, common
knowledge is not attainable. When communication is guaranteed, they
show the same result when there is no bound on message delivery
times. In both situations the proof assumes the existence of clocks and
point-to-point communication.
The close correspondence between simultaneous events (in our system a
broadcast to the whole group) and common knowledge is pointed out by
\citet{FHMV99}. Their model of a distributed system consists of a
set of linear `runs' (histories), while we only assume a partial
ordering ($\leadsto$) between messages broadcast to groups, which are
the only possible actions.
We have shown that in our framework, common knowledge of a positive formula
is indeed inseparably related to group communication, which corresponds to simultaneous events.
However, as we have seen, this does not hold for negative formulas,
so the relationship is not as obvious as it may seem.
The results of \citet{FHMV99} may be seen to correspond to our \cref{cor:iff},
though we allow broadcasts instead of just point-to-point communication.
\citet{chandy_processes_1986} consider the flow of information
in distributed systems with asynchronous communication.
They study how processes `learn' about states of other processes and how knowledge evolves.
The main difference is that with asynchronous communication,
hypergraphs are equivalent to mere point-to-point graphs.
Without guarantees on the delivery time, and without temporal reasoning,
from the knowledge point of view
sending an asynchronous group message has the same effect as
sending a separate message to each group member.
Our study concerning the consequences of the assumption whether the underlying
hypergraph is commonly known among the players brings our paper
somewhat closer to the area of social networks
(see, e.g., \citet{Jac08}).
Within logic, the relevance of epistemic issues in communication networks
has been recognized by a number of authors, e.g.~\citet{van_benthem_one_2006_}.
However, to our knowledge the only works that address these issues
are \citet{pacuit_reasoning_2007} and, to some extent,~\citet{roelofsen_exploring_2005}.
We now briefly discuss these frameworks and relate them to our own.
\Citet{pacuit_reasoning_2007} use a history-based model
to study diffusion of information in a communication graph,
starting from facts initially known to individual players.
Communicative acts are assumed to consist in
a player~$j$ `reading' an arbitrary propositional formula from another player~$i$,
with the precondition that~$i$ \emph{knows} that the formula holds.
Communicative acts are restricted to a commonly known, static, directed graph,
and, unlike in our case, are assumed to go \emph{unnoticed by~$i$}.
The paper formalizes what conclusions,
beyond the mere factual content of messages,
can be drawn using knowledge of the communication graph and, consequently,
knowledge of the possible routes along which certain information can have flowed.
\Citet{roelofsen_exploring_2005} uses a model based on Dynamic
Epistemic Logic (DEL) to describe how some initial epistemic
model evolves in a communication situation. Communication is
among subgroups and can contain arbitrary epistemic formulas.
Further, communication is assumed to be truthful and is restricted to
occur along a hypergraph.
However, the hypergraph is explicitly encoded in the model, and thus
knowledge of it is subject to change.
While under certain circumstances history-based modeling and DEL are
equivalent~\cite{van_benthem_merging_2007}, our approach is more
in the spirit of~\citet{pacuit_reasoning_2007}.
Indeed, we also study how specific information may have spread.
Also, all possible communications are included in the model
and suspicions about them are not explicitly formed.
Finally, the underlying graph (in our case hypergraph) is static
and not included in the model.
On a technical level, our approach differs from~\citet{pacuit_reasoning_2007}
in that we use sets of messages instead of sequences
and, when dealing with forwarding, employ a more general structure than histories
by considering messages partially ordered by the relation $\leadsto$.
On the other hand, our messages are simpler:
\citet{pacuit_reasoning_2007} allow disjunctions of facts, while we allow only facts.
What distinguishes our approach on a more conceptual level
is that our focus lies on identifying natural conditions that allow us to
prove stronger results about knowledge, such as distributivity over
disjunctions, or irrelevance of (common) knowledge of the underlying
hypergraph.
\section{Extensions}
\label{sec:extensions}
We conclude by listing a number of natural extensions of the considered framework
that are worthy of further study:
\begin{itemize}
\item We could equip the players with theories that their parts of the
  valuation, $V_i$, have to satisfy. In this extension we would
  assume that each player $i$ has a propositional theory $T_i$
  built from facts in $At_i$ that he adheres to. The theories $T_i$,
  where $i \in N$, would then be common knowledge among the players, so
  each player $j$ could assume that the valuation $V_i$ of any player $i$
  is a model of $T_i$.
\item We could consider more complex messages than simple atomic
facts, for example propositional formulas, or even
epistemic formulas.
Also, we could study asynchronous communication,
messages from unknown senders or to an unknown group of recipients,
and a counterpart of the blind copy feature familiar from e-mails.
\item In \cref{sec:forwarding} we relaxed the assumption that in a
message $(i,A,p)$ it has to be the case that $p \in \ensuremath{At}\xspace_i$, but we
did still insist on the \emph{truthfulness} of messages, requiring
that $p \in V$. We could further relax this assumption, by
insisting only that $p \in At$. This way we would model messages
that consist of possibly false (but credible) information. This
would lead to a study of beliefs (which can be false) rather than
knowledge (which cannot) and common beliefs rather than common
knowledge.
\item We could consider in this framework belief revision, by assuming
that the theory $T_i$ of player $i$ consists of his beliefs, which
would then be revised in view of received information.
Alternatively, $T_i$ could be the certain knowledge of player $i$
against which received information would be revised.
\item We could assume that the players have different knowledge of the
underlying hypergraph, by assuming that for all $i$ we have $H \mbox{$\:\subseteq\:$}
H_i$, where $H$ is the underlying hypergraph and $H_i$ is its
approximation known to $i$, and that players learn $H$ by exchanging
messages. The messages would contain information about which
hyperarcs do \emph{not} belong to $H$.
\item Alternatively, we could study a setup in which each player has
an indistinguishability relation over hypergraphs. This
would allow us to model players' partial knowledge of the
underlying hypergraph.
\end{itemize}
We use the setting of the first item in~\cite{apt_strategy_2009} to reason
about iterated elimination of strategies in \oldbfe{strategic games with interaction structures}.
These are strategic games in which there is a hypergraph over the set
of players (an interaction structure) and
the players can communicate about their preferences, initially only known to themselves,
so that within each hyperarc players can obtain common knowledge of each other's
preferences.
\end{document} |
\begin{document}
\baselineskip17pt
\title[Competition]{Does a population with the highest turnover coefficient win competition?}
\author[R. Rudnicki]{Ryszard Rudnicki}
\address{R. Rudnicki, Institute of Mathematics,
Polish Academy of Sciences, Bankowa 14, 40-007 Katowice, Poland.}
\email{rudnicki@us.edu.pl}
\keywords{nonlinear Leslie model, competitive exclusion, periodic cycle, population dynamics}
\subjclass[2010]{Primary: 92D25; Secondary: 37N25, 92D40}
\begin{abstract}
We consider a discrete time competition model.
Populations compete for common limited resources
but they have different fertility and mortality rates.
We compare dynamical properties of this model with its continuous counterpart.
We give sufficient conditions for competitive exclusion and the existence of periodic solutions
related to the classical logistic, Beverton-Holt and Ricker models.
\end{abstract}
\maketitle
\section{Introduction}
\label{s:int}
It is well known that if species compete for common limited resources, then usually they cannot coexist in the long term.
This law was introduced by Gause \cite{Gause} and is called \textit{the principle of competitive exclusion}.
There are many papers in which the problem of competitive exclusion or coexistence is discussed. Most of them concern continuous time models, but
a number of discrete time models are also devoted to this subject (see \cite{AS,CLCH} and the references given there).
If we have continuous and discrete time versions of a similar competition model,
it is interesting to compare the properties of
both versions, especially to check whether
they are dynamically consistent, i.e., whether they possess the same dynamical properties, such as stability or chaos.
In this paper we consider a discrete time competition model with overlapping generations. We prove a sufficient condition for competitive exclusion
and compare it with its continuous counterpart.
The model considered here is the following. A population consists of individuals using $k$ different strategies.
The state of the population at time $t$ is given by the
vector $\mathbf x(t)=[x_{1}(t),\dots ,x_{k}(t)]$, where $x_{i}(t)$ is the
size of subpopulation with the phenotype $i$.
Individuals with different phenotypes do not mate and
each phenotype $i$ is characterized by per capita reproduction $b_i$ and mortality $d_i$.
We assume that the juvenile survival rate depends on the state $\mathbf x$
and it is given by a function $f\colon \mathbb R^k_+ \to [0,1]$.
Therefore, $f$ describes the suppression of growth driven, for example, by competition for food or
free nest sites for newborns.
The time evolution of the state of population
is given by the system
\begin{equation}
\label{d-m}
x_i(t+1)=x_i(t)-d_ix_i(t)+ b_i x_i(t)f(\mathbf x(t)).
\end{equation}
We assume that $0<d_i\le 1$, $d_i<b_i$ for $i=1,\dots,k$ and $f(\mathbf x)>0$ for $\mathbf x\ne 0$.
The model is similar in spirit to that presented in \cite{AB} (a continuous version) and in \cite{AR} (a discrete version) but
in those papers $f$ has a special form strictly connected with competition for free nest sites for newborns.
A simplified Leslie/Gower model \cite{AlSR} is also of the form~(\ref{d-m}).
The suppression function $f$ can be quite arbitrary.
Usually, it is of the form $f(\mathbf x)=\varphi (w_1x_1+\dots+w_kx_k)$,
where $\varphi$ is a decreasing function and $w_1,\dots,w_k$ are positive numbers,
but e.g. in \cite{AB} it is of the form
\[
f(\mathbf x)=
\begin{cases}
1,&\textrm{if $\,(1+b_1)x_1+\dots+ (1+b_k)x_k\le K$},\\
\dfrac{K-x_1-\dots-x_k}{b_1x_1+\dots+b_kx_k},
&\textrm{if $\,(1+b_1)x_1+\dots +(1+b_k)x_k> K$}.
\end{cases}
\]
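To illustrate system (\ref{d-m}) numerically, the following Python sketch iterates it for $k=2$ strategies. The Beverton--Holt-type suppression function and all parameter values are our own illustrative choices, not taken from the paper; they satisfy $L_1>L_2$ and $b_1(1-d_2)\le b_2(1-d_1)$, so, by the results below, strategy~2 is expected to die out.

```python
# Numerical sketch of system (d-m) for k = 2 strategies.  The suppression
# function f and all parameter values are our own illustrative choices.
# Here L1 = b1/d1 = 4 > L2 = b2/d2 = 3 and b1(1 - d2) <= b2(1 - d1).

def step(x, b, d, f):
    """One step of x_i(t+1) = x_i - d_i x_i + b_i x_i f(x)."""
    fx = f(x)
    return [xi - di * xi + bi * xi * fx for xi, bi, di in zip(x, b, d)]

b, d = [2.0, 3.0], [0.5, 1.0]
f = lambda x: 1.0 / (1.0 + sum(x))   # decreasing in the total population size

x = [0.1, 0.1]
for _ in range(5000):
    x = step(x, b, d, f)

print(x)  # first coordinate near its equilibrium 3, second near 0
```

With these values the reduced one-dimensional map has the globally attracting fixed point $y^*=3$ (where $d_1=b_1f(y^*,0)$), and the first coordinate settles there while the second decays to zero.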
Now we present some motivation for studying model (\ref{d-m}).
We begin with a continuous time version of it.
The time evolution of the state of population
is described by the system
\begin{equation}
\label{c-m}
x_i'(t)=-d_ix_i(t)+ b_i x_i(t)f(\mathbf x(t)),
\quad i=1,\dots,k.
\end{equation}
We assume that $0<d_i< b_i$, $f$ has values in the interval $[0,1]$,
and
\begin{equation}
\label{b:f}
f(\mathbf x) \le \min\bigg\{\frac{d_i}{b_i}
\colon \,\, i=1,\dots,k\bigg\}
\quad\textrm{if $|\mathbf x|\ge M$},
\end{equation}
where $|\mathbf x|=x_1+\dots+x_k$.
From the last condition it follows that the total size $|\mathbf x(t)|$
of the population is bounded by $\max(M,|\mathbf x(0)|)$.
We also assume that $f$ is sufficiently smooth
to guarantee the existence and uniqueness
of the solutions of (\ref{c-m}); for example, it is enough to assume that $f$ satisfies a local Lipschitz condition.
We denote by $L_i=b_i/d_i$ the turnover coefficient for the strategy $i$.
We assume that
\begin{equation}
\label{ineq}
L_1>L_2\ge\dots \ge L_k.
\end{equation}
It is well known that
\begin{equation}
\label{goto0}
\lim_{t\to\infty} x_i(t)=0 \textrm{ \ for $i\ge 2$}.
\end{equation}
Indeed, from (\ref{c-m}) it follows that
\begin{equation}
\label{goto0-2}
(b_i^{-1}\ln x_i(t))'= -L_i^{-1}+f(\mathbf x(t)).
\end{equation}
Thus
\begin{equation}
\label{goto0-3}
(b_1^{-1}\ln x_1(t)-b_i^{-1}\ln x_i(t))'= L_i^{-1}-L_1^{-1}.
\end{equation}
Therefore
\begin{equation}
\label{goto0-4}
\frac{d}{dt} \ln \bigg( \frac{x_1(t)^{b_i}}{x_i(t)^{b_1}}\bigg) = b_1b_i(L_i^{-1}-L_1^{-1})>0
\end{equation}
and, consequently,
\begin{equation}
\label{goto0-5}
\lim_{t\to\infty}\frac{x_1(t)^{b_i}}{x_i(t)^{b_1}}=\infty.
\end{equation}
Since $x_1(t)$ is a bounded function, from (\ref{goto0-5}) it follows that (\ref{goto0}) holds.
Now we return to a discrete time version of model (\ref{c-m}).
From (\ref{b:f}) it follows immediately that $x_i(t+1)\le x_i(t)$ if $|\mathbf x(t)|\ge M$ and, therefore,
the sequence $(x_i(t))$ is bounded. Moreover, since $d_i \le 1$ we have $x_i(t)>0$ if $x_i(0)>0$.
It is of interest to know whether (\ref{goto0}) holds also
for discrete model (\ref{d-m}).
Observe that (\ref{d-m}) can be written as
\begin{equation}
\label{d-m1}
\frac{x_i(t+1)-x_i(t)}{b_ix_i(t)}= -L_i^{-1}+ f(\mathbf x(t)).
\end{equation}
Now (\ref{goto0-3}) takes the form
\begin{equation}
\label{d-m2}
\frac{1}{b_1}D_l\, x_1(t)-\frac{1}{b_i}D_l\, x_i(t) = L_i^{-1}-L_1^{-1}.
\end{equation}
In the last formula the \textit{logarithmic derivative} $x_i'/x_i$ was replaced by its discrete version
\[
D_l\, x_i(t) :=\dfrac{x_i(t+1)-x_i(t)}{x_i(t)}.
\]
Let $\alpha=b_1/b_i$ and $\beta=b_1(L_i^{-1}-L_1^{-1})=\alpha d_i-d_1$. Then $0<\beta<\alpha$ and
(\ref{d-m2}) can be written in the following way
\begin{equation}
\label{d-m3}
D_l\, x_1(t)= \alpha D_l\, x_i(t) +\beta.
\end{equation}
We want to find a sufficient condition for (\ref{goto0}).
In order to do it we formulate the following general question,
which can be investigated independently of the above biological models.
\begin{problem}
\label{prob1}
Find parameters $\alpha$ and $\beta$, $0<\beta<\alpha$, such that the following condition
holds:\\
(C) \ if $(u_n)$ and $(v_n)$ are arbitrary bounded sequences of positive numbers
satisfying
\begin{equation}
\label{p1}
\frac{u_{n+1}-u_n}{u_n}= \alpha \frac{v_{n+1}-v_n}{v_n} +\beta,
\end{equation}
for $n\in\mathbb{N}$, then $\lim\limits_{n\to\infty} v_n=0$.
\end{problem}
In the case when the model has the property of competitive exclusion (\ref{goto0}), one can ask whether the dynamics of the $k$-dimensional model
is the same as that of its restriction to the one-dimensional model.
The answer to this question is positive for the continuous version,
because the one-dimensional model has very simple dynamics. In Section~\ref{ss:one} we also show that both dynamics are similar if
the one-dimensional model has the shadowing property. A more interesting question is what can happen when condition (C) does not hold.
One can expect that then subpopulations with different strategies can coexist even if condition (\ref{ineq}) holds.
But then there is no coexistence equilibrium
(i.e.\ a positive stationary solution of (\ref{d-m})), which makes the problem
more difficult. In Section~\ref{ss:periodic} we check that two-dimensional systems with $f$ related to the classical logistic,
Beverton--Holt and Ricker models can have periodic solutions even in the case
when the one-dimensional versions of these models have
stationary globally stable solutions and the two-dimensional model has a locally stable boundary equilibrium $(x_1^*,0)$.
\section{Competitive exclusion}
\label{s:mr}
The solution of Problem~\ref{prob1} is formulated in the following theorem.
\begin{theorem}
\label{th1}
If \ $\alpha\le 1+\beta$ then condition (C) is fulfilled.
If $\alpha> 1+\beta$
then we can find periodic sequences $(u_n)$ and $(v_n)$ of period two with positive elements
which satisfy $(\ref{p1})$.
\end{theorem}
\begin{lemma}
\label{l:c}
Consider the function
\begin{equation}
\label{l-1}
g_n (x_1,\dots,x_n)=(\alpha x_1+\gamma)(\alpha x_2+\gamma)\cdots(\alpha x_n+\gamma)
\end{equation}
defined on the set
$S_{n,m}=\{\mathbf x\in \mathbb R^n_+\colon \,x_1x_2\cdots x_n=m\}$,
where $\alpha>0$, $\gamma\ge 0$, $m>0$ and $n$ is a positive integer.
Then
\begin{equation}
\label{l-2}
g_n (x_1,\dots,x_n)\ge \left(\alpha m^{1/n}+\gamma \right)^n.
\end{equation}
\end{lemma}
\begin{proof}
The case $\gamma=0$ is obvious, so we assume that $\gamma>0$.
We use the standard technique of Lagrange multipliers for constrained extrema.
Let
\[
L(x_1,\dots,x_n)=g_n (x_1,\dots,x_n)+\lambda(m-x_1x_2\cdots x_n).
\]
Then
\[
\frac{\partial L(x_1,\dots,x_n)}{\partial x_i}
=\frac{\alpha g_n (x_1,\dots,x_n)}{\alpha x_i+\gamma}
-\frac{\lambda x_1x_2\cdots x_n}{x_i}.
\]
Observe that if
$\dfrac{\partial L(x_1,\dots,x_n)}{\partial x_i}=0$ for $i=1,\dots,n$,
then $x_1=\dots=x_n=m^{1/n}$. It means that the function $g_n$
has only one conditional critical point,
and this point is the global minimum because $g_n(\mathbf x)$ tends to infinity as $\|\mathbf x\|\to \infty$ on $S_{n,m}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th1}]
Equation (\ref{p1}) can be written in the following form
\begin{equation}
\label{p2}
\frac{u_{n+1}}{u_n}= \alpha \frac{v_{n+1}}{v_n} +\gamma,
\end{equation}
where $\gamma=\beta+1-\alpha$.
Consider the case $\alpha\le 1+\beta$. Then $\gamma\ge 0$ and $\alpha+\gamma=\beta+1>1$.
We show that if $(v_n)$ is a sequence of positive numbers such that $\limsup\limits_{n\to\infty} v_n>0$ and $(u_n)$
is a sequence of positive numbers such that (\ref{p2}) holds,
then the sequence $(u_n)$ is unbounded.
Indeed, then we can find an $\overline{m}>0$ and a subsequence
$(v_{n_i})$ of $(v_n)$ such that $v_{n_i}\ge \overline m$ for $i\in\mathbb N$.
We set $v_0=1$ and $x_i=v_{i}/v_{i-1}$ for $i\in\mathbb N$. Then
$v_{n}=x_1\cdots x_n$ and $u_n=u_0g_n(x_1,\dots,x_n)$, where
$u_0=u_1/(\alpha v_1+\gamma)$.
If $m=x_1\cdots x_{n_i}$, then $m\ge \overline m$ and from
Lemma~\ref{l:c} it follows that
\[
u_{n_i}=u_0g_{n_i}(x_1,\dots,x_{n_i})\ge
u_0\left(\alpha m^{1/n_i}+\gamma \right)^{n_i}\ge
u_0\left(\alpha \overline m^{1/n_i}+\gamma \right)^{n_i}.
\]
Since $\lim\limits_{i\to\infty}\overline m^{1/n_i}=1$ and $\alpha+\gamma>1$
we obtain
$\lim\limits_{i\to\infty} u_{n_i}=\infty$, which proves the first part of the theorem.
Now we assume that $\alpha> 1+\beta$. Then $\gamma<0$.
First we check that there exists $\theta>1$ such that
\begin{equation}
\label{per1}
(\alpha \theta+\gamma)(\alpha \theta^{-1}+\gamma)=1.
\end{equation}
Equation (\ref{per1}) can be written in the following form
\begin{equation}
\label{per3}
\theta+\theta^{-1}=L, \textrm{ \, where $L=\frac{\alpha^2+\gamma^2-1}{\alpha|\gamma|}$}.
\end{equation}
Since $\alpha+\gamma=\beta+1>1$ we have
$\alpha^2+\gamma^2-1> 2\alpha|\gamma|$,
which gives $L>2$ and implies that there exists $\theta>1$ such that
(\ref{per1}) holds.
Now we put
$u_{2n-1}=c_1$, $u_{2n}=c_1(\alpha \theta+\gamma)$,
$v_{2n-1}=c_2$, $v_{2n}=c_2\theta$
for $n\in\mathbb N$, where $c_1$ and $c_2$ are any positive constants. Then
\[
\frac{u_{2n}}{u_{2n-1}}= \alpha \theta+\gamma=\alpha\frac{v_{2n}}{v_{2n-1}}+\gamma,
\]
and using (\ref{per1}) we obtain
\[
\frac{u_{2n+1}}{u_{2n}}= \frac1{\alpha \theta+\gamma}
=\alpha \theta^{-1}+\gamma=\alpha\frac{v_{2n+1}}{v_{2n}}+\gamma.
\qedhere
\]
\end{proof}
\begin{remark}
\label{r:1}
We have proved
a slightly stronger condition than (C) in the case $\alpha\le 1+\beta$. Namely,
if $(u_n)$ is a bounded sequence of positive numbers,
$(v_n)$ is a sequence of positive numbers and they satisfy
(\ref{p1}), then $\lim\limits_{n\to\infty} v_n=0$.
In the proof of condition (C) we have not used the preliminary assumption that $\beta<\alpha$.
\end{remark}
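The period-two construction in the second part of the proof can also be checked numerically. The following Python sketch uses the illustrative values $\alpha=3$ and $\beta=1/2$ of our own choosing (so that $0<\beta<\alpha$ and $\alpha>1+\beta$), builds the sequences from the proof, and verifies relation (\ref{p1}) at every step.

```python
import math

# Numerical check of the period-two construction used in the proof of
# Theorem 1; alpha and beta are illustrative values of our choosing with
# 0 < beta < alpha and alpha > 1 + beta.
alpha, beta = 3.0, 0.5
gamma = beta + 1.0 - alpha                     # here gamma = -1.5 < 0

# The root theta > 1 of theta + 1/theta = L, cf. (per3).
L = (alpha**2 + gamma**2 - 1.0) / (alpha * abs(gamma))
theta = (L + math.sqrt(L * L - 4.0)) / 2.0
assert theta > 1.0

# Period-two sequences from the proof, with c1 = c2 = 1.
u = [1.0, alpha * theta + gamma] * 4
v = [1.0, theta] * 4

# Relation (p1) holds at every step, so (u_n) and (v_n) are bounded
# positive solutions of (p1) with v_n not tending to 0.
for n in range(len(u) - 1):
    lhs = (u[n + 1] - u[n]) / u[n]
    rhs = alpha * (v[n + 1] - v[n]) / v[n] + beta
    assert abs(lhs - rhs) < 1e-9
print("period-two solution of (p1) verified")
```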
\section{Applications}
\label{s:appl}
Now we return to the model given by (\ref{d-m}).
We assume that $f\colon \mathbb R^k_+\to [0,1]$ is a continuous function
which satisfies (\ref{b:f}). From (\ref{b:f}) it follows that there exists $\overline{M}>0$
such that the set
\[
X=\{\mathbf x\in\mathbb R_+^k\colon x_1+\dots+x_k\le \overline M\}
\]
is invariant under (\ref{d-m}), i.e., if
$\mathbf x(0)\in X$ then $\mathbf x(t)\in X$ for $t>0$. We restrict the domain of the model
to the set $X$. Let $T\colon X\to X$ be the transformation given by
$T_i(\mathbf x)=(1-d_i)x_i+b_if(\mathbf x)x_i$, for $i=1,\dots,k$.
\subsection{Persistence}
\label{ss:persistence}
First we check that if $f(\mathbf 0)=1$ then the population is \textit{persistent}, i.e.,
$\liminf_{n \to\infty}\|T^n(\mathbf x)\|\ge \varepsilon_1 >0$ for all $\mathbf x\ne \mathbf 0$.
This is a standard result from persistence theory but we check it to make the paper self-contained.
Since $b_i>d_i$ for $i=1,\dots,k$ we find $\varepsilon>0$ and $\delta>0$
such that
\begin{equation}
\label{ej-f-p}
T_i(\mathbf x)\ge (1+\delta)x_i \quad\textrm{for $i=1,\dots,k$ and $\mathbf x \in B(\mathbf 0,\varepsilon)$},
\end{equation}
where $B(\mathbf 0,\varepsilon)$ denotes the open ball in $X$ with center $\mathbf 0$ and radius $\varepsilon$.
Moreover, since $T(\mathbf x)\ne \mathbf 0$ for $\mathbf x\ne \mathbf 0$, the closed set $T(X\setminus B(\mathbf 0,\varepsilon))$ is disjoint from some
neighbourhood of $\mathbf 0$. Using (\ref{ej-f-p}), we find $\varepsilon_1\in (0,\varepsilon)$ such that
for each $\mathbf x \ne \mathbf 0$ there exists an integer $n_0(\mathbf x)$ with
$T^n(\mathbf x)\notin B(\mathbf 0,\varepsilon_1)$ for $n\ge n_0(\mathbf x)$.
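As a quick numerical illustration of persistence (a sketch, not part of the proof), one can iterate $T$ for $k=2$ with a logistic-type suppression $f(\mathbf x)=\max(0,1-x_1-x_2)$; the coefficients below are illustrative choices with $f(\mathbf 0)=1$ and $b_i>d_i$:

```python
# Illustrative coefficients with f(0) = 1 and b_i > d_i, as this subsection assumes.
b = (1.5, 1.0)
d = (0.5, 0.5)

def f(x):
    return max(0.0, 1.0 - x[0] - x[1])

def T(x):
    return tuple((1.0 - d[i]) * x[i] + b[i] * f(x) * x[i] for i in range(2))

# Start from a tiny positive population and iterate.
x = (1e-6, 1e-6)
traj = [x]
for _ in range(300):
    x = T(x)
    traj.append(x)

# After a transient the total population stays bounded away from 0,
# illustrating liminf ||T^n(x)|| >= eps_1 > 0.
tail = [sum(p) for p in traj[100:]]
print(min(tail))   # bounded away from zero
```

With these coefficients the first strategy settles near its one-dimensional equilibrium $1-d_1/b_1=2/3$ while the second dies out, so the population as a whole persists.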
\subsection{Convergence to one-dimensional dynamics}
\label{ss:one}
Now we present some corollaries of Theorem~\ref{th1} concerning the long-time behaviour of the population.
The inequality $0<\alpha\le 1+\beta$ can be written in terms of birth and death coefficients as
\begin{equation}
\label{a:1}
b_1(1-d_i)\le b_i(1-d_1)\quad\textrm{for $i=2,\dots,k$.}
\end{equation}
This means that if (\ref{ineq}) and (\ref{a:1}) hold, then
all strategies except the first one become extinct.
It suggests that, as $t\to\infty$, the model should behave like a one-dimensional model corresponding
to a population consisting only of the first strategy. This reduced model is given by the
recurrence formula
\begin{equation}
\label{d-m-a}
y(t+1)=S(y(t)),
\end{equation}
where $S(y)=y-d_1y+ b_1yf(y,0,\dots,0)$.
In order to check that the model given by (\ref{d-m}) has the same asymptotic behaviour
as the transformation $S$, we need some auxiliary definitions.
A sequence $(y_k)$ is called an $\eta$-\textit{pseudo-orbit} of a transformation $S$ if
$|S(y_k)- y_{k+1}|<\eta$
for all $k\ge 1$. The transformation $S$ is called \textit{shadowing}, if for
every $\delta>0$ there exists
$\eta>0$ such that for each
$\eta$-pseudo-orbit $(y_k)$ of $S$
there is a point $y$ such that
$|y_k -S^k(y)|<\delta$.
\begin{theorem}
\label{th-as}
Assume that $f(\mathbf 0)=1$ and that conditions $(\ref{ineq})$, $(\ref{a:1})$ hold.
Then
\[
\lim_{t\to\infty} x_i(t)=0 \ \textrm{ for $\,i=2,\dots,k$}.
\]
If $S$ is shadowing then for each $\delta>0$
and for each initial point $\mathbf x(0)=(x_1,\dots,x_k)$
with $x_1>0$ there exists $t_0\ge 0$ such that
$y(t_0)>0$ and
\begin{equation}
\label{wn-sh}
\big|
x_1(t)
- y(t)
\big|
<\delta
\textrm{ \ for $t\ge t_0$}.
\end{equation}
\end{theorem}
\begin{proof}
Let us fix a $\delta>0$ and let $\eta>0$ be a constant from the shadowing property of $S$.
From the uniform continuity of the function $f$
there is an $\varepsilon>0$ such that
\begin{equation}
\label{a:t1}
\overline M b_1\big|f(x_1,\dots,x_k)-f(x_1,0,\dots,0)
\big|
<\eta
\textrm{ if $\,\mathbf x\in X$, $x_2+\dots+x_k<\varepsilon$.}
\end{equation}
Since
all strategies except the first one become extinct,
there exists $t_0\ge 0$ such that $x_2(t)+\dots+x_k(t)<\varepsilon$ for $t\ge t_0$.
From (\ref{a:t1}) it follows that
\[
|x_1(t+1)-S(x_1(t))|<\eta \textrm{ \ for $t\ge t_0$}
\]
and, consequently,
the sequence $x_1(t_0),x_1(t_0+1),\dots$ is
an $\eta$-pseudo-orbit.
Since $S$ is shadowing we have
(\ref{wn-sh}).
\end{proof}
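A minimal numerical sketch of Theorem~\ref{th-as}: the coefficients below are illustrative choices satisfying (\ref{ineq}) and (\ref{a:1}) with logistic suppression, so the second strategy dies out and $x_1(t)$ tracks an orbit of the reduced map $S$:

```python
# Illustrative coefficients: L1 = b1/d1 = 5 > L2 = 2.4, and (a:1) holds
# since b1(1 - d2) = 0.5 <= b2(1 - d1) = 0.96.
b = (1.0, 1.2)
d = (0.2, 0.5)

def f(x):
    return max(0.0, 1.0 - x[0] - x[1])

def T(x):
    return tuple((1.0 - d[i]) * x[i] + b[i] * f(x) * x[i] for i in range(2))

def S(y):   # reduced one-dimensional map, S(y) = y - d1*y + b1*y*f(y, 0)
    return (1.0 - d[0]) * y + b[0] * y * max(0.0, 1.0 - y)

x = (0.1, 0.1)
for _ in range(400):
    x = T(x)
print(x)   # the second strategy dies out; x1 approaches the fixed point of S

# Shadowing-style comparison: restart the reduced map from y = x1 and iterate both.
y = x[0]
devs = []
for _ in range(100):
    x = T(x)
    y = S(y)
    devs.append(abs(x[0] - y))
print(max(devs))   # small: x1(t) shadows an orbit of S
```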
The shadowing property has been intensively studied over the last thirty years
and there are many results concerning shadowing for
one-dimensional maps (cf.\ the survey paper by Ombach and Mazur \cite{OmbachMazur}).
It is obvious that if $S$ has an asymptotically stable periodic orbit then
$S$ is shadowing on the basin of attraction of this orbit.
Moreover, for a continuous one-dimensional transformation
the convergence of all iterates to a unique fixed point $x$ implies its global stability
\cite{Sedeghat}.
Thus, as a simple consequence of Theorem~\ref{th-as} we obtain
\begin{corollary}
\label{cor-as}
Assume that $f(\mathbf 0)=1$ and that conditions $(\ref{ineq})$, $(\ref{a:1})$ hold.
If $S$ has a fixed point $x_*>0$ and $\lim\limits_{n\to\infty}S^n(x)=x_*$ for all $x>0$,
then for each initial point $\mathbf x(0)=(x_1,\dots,x_k)$
with $x_1>0$, we have
$\lim\limits_{t\to\infty}\mathbf x(t)=(x_*,0,\dots,0)$.
\end{corollary}
Some applications of shadowing to semelparous population
similar to Theorem~\ref{th-as} and Corollary~\ref{cor-as}
can be found in \cite{RudnickiWieczorek2010}.
An interested reader will find there also some observations concerning
chaotic behaviour of such models. In particular,
the model given by (\ref{d-m}) can exhibit chaotic behaviour if
the suppression function is of the form
$f(\mathbf x)=1-x_1-\dots-x_k$, i.e., it is a generalization of the logistic model.
\begin{remark}[Dynamical consistence]
\label{r:d-c}
If we replace $x'_i(t)$ with $(x_i(t+h)-x_i(t))/h$ in (\ref{c-m}) then we get
\begin{equation}
\label{d-mh}
x_i(t+h)=x_i(t)-d_ihx_i(t)+ b_ih x_i(t)f(\mathbf x(t)),\quad i=1,\dots,k.
\end{equation}
One can ask if this scheme is dynamically consistent with (\ref{c-m}).
Observe that inequalities (\ref{ineq}) also hold if we replace $b_i$ with $b_ih$ and $d_i$ with $d_ih$.
The difference equation (\ref{d-mh}) is said to be \textit{dynamically consistent} with
(\ref{c-m}) if they possess the same dynamical
behavior, such as local stability, bifurcations, and chaos \cite{LE},
or, more specifically \cite{Mickens,RG}, if they share a given property,
e.g.\ if competitive exclusion takes place in both the discrete and continuous models.
The model (\ref{d-mh}) is biologically meaningful only if the death coefficients are $\le 1$, i.e., if
\begin{equation}
\label{w:h}
0<h\le h_{\max{}}=\min \{d_1^{-1},\dots,d_k^{-1}\}.
\end{equation}
We assume that $b_i$ and $d_i$ satisfy (\ref{ineq}), i.e., $b_1d_i>b_id_1$ for $i=2,\dots,k$.
Let $b_{i,h}=b_ih$, $d_{i,h}=d_ih$.
Now, (\ref{a:1}) applied to $b_{i,h}$ and $d_{i,h}$ gives
\begin{equation}
\label{a:1-r}
b_1-b_i\le (b_1d_i-b_id_1)h\quad\textrm{for $i=2,\dots,k$}.
\end{equation}
In particular if (\ref{ineq}) holds and $b_i\ge b_1$ for $i=2,\dots,k$ then for all $h$ satisfying (\ref{w:h})
all strategies except the first one become extinct, i.e., the difference equation (\ref{d-mh}) is dynamically consistent with
(\ref{c-m}) with respect to this property. We cannot expect ``full'' dynamical consistency if the above conditions hold, because in the case of the logistic map,
i.e., if $f(\mathbf x)=1-x_1-x_2$, the stationary point $\mathbf x_1^*=((b_1-d_1)b_1^{-1},0)$ of (\ref{c-m}) is globally stable but in the numerical scheme
(\ref{d-mh}) this point loses stability when $b_1h>2+d_1h$.
\end{remark}
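The stability threshold in the remark is easy to observe numerically. The sketch below uses illustrative values $b_1=3$, $d_1=0.5$ (so $h_{\max{}}=2$ and the threshold is $h=2/(b_1-d_1)=0.8$) for the discretized one-dimensional logistic model:

```python
# Discretized logistic model y(t+h) = y - d1*h*y + b1*h*y*(1 - y), K = 1.
# The fixed point y* = 1 - d1/b1 has multiplier 1 - h(b1 - d1): it loses
# stability when b1*h > 2 + d1*h, i.e. h > 2/(b1 - d1) = 0.8 here.
b1, d1 = 3.0, 0.5
ystar = 1.0 - d1 / b1

def step(y, h):
    return y - d1 * h * y + b1 * h * y * max(0.0, 1.0 - y)

def orbit(h, n=1000, y0=0.3):
    y = y0
    for _ in range(n):
        y = step(y, h)
    return y

# h = 0.5 < 0.8: iterates settle on y*.
print(abs(orbit(0.5) - ystar))   # ~0

# h = 0.9 > 0.8 (still <= h_max = 1/d1): y* is unstable and a
# stable period-2 cycle appears instead.
y = orbit(0.9)
y1, y2 = step(y, 0.9), step(step(y, 0.9), 0.9)
print(abs(y - y2), abs(y - y1))   # ~0, and a clearly positive oscillation amplitude
```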
\subsection{Periodic solutions}
\label{ss:periodic}
Theorem~\ref{th1} can be also useful if we look for periodic oscillation in
the model given by (\ref{d-m}).
We restrict our investigation to the two-dimensional model.
We recall that
if $\alpha> 1+\beta$, then
the periodic sequences given by
$u_{2n-1}=c_1$,
$u_{2n}=c_1(\alpha \theta+\gamma)$,
$v_{2n-1}=c_2$, $v_{2n}=c_2 \theta$
for $n\in\mathbb N$, satisfy $(\ref{p1})$.
Here $c_1$ and $c_2$ are any positive constants,
$\theta>1$ is a solution of the equation
\[
(\alpha \theta+\gamma)(\alpha \theta^{-1}+\gamma)=1,
\]
$\alpha=b_1/b_2$, $\beta=\alpha d_2-d_1>0$, and $\gamma=1+\beta-\alpha=\alpha(d_2-1)+(1-d_1)<0$.
Under these assumptions we are looking for $c_1, c_2>0$ such that
\begin{equation}
\left\{
\label{ukl-2'}
\begin{aligned}
&\theta=1-d_2+b_2f(c_1,c_2),
\\
&1=\theta(1-d_2)+b_2 \theta f(c_1(\alpha \theta+\gamma),c_2\theta).
\end{aligned}
\right.
\end{equation}
This system is equivalent to
\begin{equation}
\label{ukl-3'}
\left\{
\begin{aligned}
&f(c_1,c_2)=\left(\theta+d_2-1\right)b_2^{-1}
\\
&f(c_1(\alpha \theta+\gamma),c_2\theta)
=\left(\theta^{-1}+d_2-1\right)b_2^{-1}.
\end{aligned}
\right.
\end{equation}
Since $f(\mathbf x)\in (0,1)$ for $\mathbf x\in X\setminus\{\mathbf 0\}$,
we have the following necessary condition for the existence of a positive solution
of system (\ref{ukl-3'}):
\begin{equation}
\label{e:wko}
\theta<1+b_2-d_2\quad\textrm{and}\quad \theta<(1-d_2)^{-1}.
\end{equation}
Let $f(\mathbf x)=\varphi(x_1+x_2)$, where
$\varphi$ is a strictly decreasing function defined on the interval $[0,K)$, $0<K\le \infty$,
such that $\varphi(0)=1$ and $\lim_{x\to K}\varphi(x)=0$.
Define
$m_1=\left( \theta+d_2-1\right)b_2^{-1}$,
$m_2=\left(\theta^{-1}+d_2-1\right)b_2^{-1}$, $p_1=\varphi^{-1}(m_1)$,
and $p_2=\varphi^{-1}(m_2)$. If (\ref{e:wko}) holds then the constants $p_1$, $p_2$
are well defined and $0<p_1<p_2$.
Thus, we find a positive solution of system (\ref{ukl-3'}) if and only if
(\ref{e:wko}) holds and
\begin{equation}
\label{e:wkw}
c_1+c_2=p_1 \quad\textrm{and}\quad
c_1(\alpha \theta+\gamma)+c_2\theta=p_2.
\end{equation}
System (\ref{e:wkw}) has a unique solution
\begin{equation}
\label{e:wkw2}
c_1=
\frac{p_2-p_1\theta}
{\alpha \theta+\gamma-\theta},
\quad
c_2=
\frac{p_1(\alpha \theta+\gamma)-p_2}
{\alpha \theta+\gamma-\theta}.
\end{equation}
Since $\alpha>1$, $\theta>1$, and $\beta>0$ we have
\[
\alpha \theta+\gamma-\theta=\alpha \theta+1+\beta-\alpha-\theta=(\alpha-1)(\theta-1)+\beta>0.
\]
Thus
system (\ref{ukl-3'}) has a positive solution if and only if
(\ref{e:wko}) holds and
\begin{equation}
\label{e:wkw3}
p_1\theta< p_2<p_1(\alpha \theta+\gamma).
\end{equation}
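The explicit solution (\ref{e:wkw2}) can be verified directly. The sketch below uses the Ricker suppression $f(\mathbf x)=e^{-(x_1+x_2)}$ and illustrative coefficients $b_1=12$, $b_2=3$, $d_1=0.95$, $d_2=0.25$ (chosen so that (\ref{ineq}), (\ref{e:wko}) and (\ref{e:wkw3}) hold), and checks that $(c_1,c_2)$ is a genuine period-2 point:

```python
import math

# Illustrative coefficients: alpha = 4, beta = 0.05 > 0, alpha > 1 + beta.
b1, b2, d1, d2 = 12.0, 3.0, 0.95, 0.25
alpha = b1 / b2
beta = alpha * d2 - d1
gamma = 1.0 + beta - alpha            # < 0

L = (alpha**2 + gamma**2 - 1.0) / (alpha * abs(gamma))
theta = (L + math.sqrt(L * L - 4.0)) / 2.0

m1 = (theta + d2 - 1.0) / b2
m2 = (1.0 / theta + d2 - 1.0) / b2
p1, p2 = -math.log(m1), -math.log(m2)     # phi^{-1}(m) for phi(x) = e^{-x}, c = 1

den = alpha * theta + gamma - theta
c1 = (p2 - p1 * theta) / den
c2 = (p1 * (alpha * theta + gamma) - p2) / den
assert c1 > 0 and c2 > 0                  # condition (e:wkw3) holds

def T(x):
    f = math.exp(-(x[0] + x[1]))
    return ((1 - d1) * x[0] + b1 * f * x[0], (1 - d2) * x[1] + b2 * f * x[1])

x = (c1, c2)
x2_ = T(T(x))
err = abs(x2_[0] - c1) + abs(x2_[1] - c2)
print(err)   # ~0: (c1, c2) is a period-2 point of T, and T(x) differs from x
```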
Now we show how to find parameters $b_1,b_2,d_1,d_2$ such that
(\ref{e:wko}) and (\ref{e:wkw3}) hold.
Assume that $\beta$ is sufficiently small.
Since $\gamma=\beta+1-\alpha$, from (\ref{per3}) it follows
\[
\theta+\theta^{-1}=\frac{2\alpha(\alpha-1)-2(\alpha-1)\beta+\beta^2}{\alpha(\alpha-1)-\alpha\beta}=
2+\frac{2\beta}{\alpha(\alpha-1)}+O(\beta^2).
\]
Let $\theta=1+\varepsilon$. Then
$\varepsilon=\sqrt{2/(\alpha^2-\alpha)}\,\sqrt{\beta}+O(\beta) $
and we can assume that
$\varepsilon$ is also sufficiently small. Hence $\theta^{-1}=1-\varepsilon+O(\varepsilon^2)$, $\alpha\theta+\gamma=1+\alpha\varepsilon+O(\varepsilon^2)$,
$m_1=(d_2+\varepsilon)b_2^{-1}$, and
$m_2=(d_2-\varepsilon)b_2^{-1}+O(\varepsilon^2)$.
Assume that $\varphi^{-1}$ is a $C^2$-function in a neighbourhood of the point $\bar x=d_2b_2^{-1}$. Then
\begin{equation}
\label{e:wkw4}
p_1= A+Bb_2^{-1}\varepsilon+O(\varepsilon^2), \quad
p_2= A-Bb_2^{-1}\varepsilon+O(\varepsilon^2),
\end{equation}
where $A=\varphi^{-1}(\bar x)$ and $B=(\varphi^{-1})'(\bar x)=1/\varphi'(A)$.
Substituting (\ref{e:wkw4}) to (\ref{e:wkw3}) we rewrite (\ref{e:wkw3}) as
\begin{equation}
\label{e:wkw5}
A+O(\varepsilon)<-2 Bb_2^{-1}<\alpha A+O(\varepsilon).
\end{equation}
Taking sufficiently small $\beta$, we are also able to check
condition (\ref{e:wko}). Thus, if $A<-2 Bb_2^{-1}$, $\beta$ is sufficiently small and $\alpha$ is sufficiently large,
both conditions (\ref{e:wko}) and (\ref{e:wkw3}) are fulfilled
and a non-trivial periodic solution exists.
\begin{example}
\label{ex1}
We consider the two-dimensional model (\ref{d-m}) related to the logistic map, i.e., with
$f(\mathbf x)=\varphi(x_1+x_2)$ and $\varphi(x)=1-x/K$ for $x\in [0,K]$ and $\varphi(x)=0$ for $x>K$.
Then $\varphi^{-1}(x)=K(1-x)$, and so $A=K(1-d_2/b_2)$, $B=-K$.
From (\ref{e:wkw5}) it follows that if
$2b_2/b_1<b_2-d_2<2$, $d_1=b_1d_2/b_2-\beta$ and $\beta$ is sufficiently small, then there exists a periodic solution.
Let us consider a special example with the following coefficients
$b_1=2.02$, $b_2=0.505$, $d_1=0.0399$, $d_2=0.01$. Then $b_1/d_1>b_2/d_2$, $\alpha= b_1/b_2=4$, $\beta=\alpha d_2-d_1=0.0001$, $\gamma=-2.9999$,
and $\theta\approx 1.00408$.
Then all conditions hold and a positive periodic solution exists.
If $K=1$ then the periodic sequence is given by $x_1(2n-1)\approx 0.8482$,
$x_1(2n)\approx 0.8622$,
$x_2(2n-1)\approx 0.1099$, $x_2(2n)\approx 0.1103$
for $n\in\mathbb N$.
It is interesting that in this case, one-dimensional models
(i.e., with the birth and death coefficients $b_1,d_1$ or $b_2,d_2$) have positive and globally stable fixed points
because $b_i<2+d_i$ for $i=1,2$ (see Section~\ref{ss:loc-stab}).
Hence the two-dimensional model has a locally stable fixed point $(1-d_1/b_1,0)$ but this point is not globally stable.
\end{example}
\begin{example}
\label{ex2}
We consider now the two-dimensional Beverton-Holt model with harvesting, i.e., a model of type (\ref{d-m}) with
$f(\mathbf x)=\varphi(x_1+x_2)$ and $\varphi(x)=c/(c+x)$ for $x\in [0,\infty)$, $c>0$.
A one-dimensional version of this model always has one positive fixed point and this point is globally asymptotically stable
(see Section~\ref{ss:loc-stab}).
We have
$\varphi^{-1}(x)=c/x-c$, and so $A=c(b_2/d_2-1)$, $B=-cb_2^2/d_2^2$.
Inequality (\ref{e:wkw5}) takes the form
\[
(b_2-d_2)+O(\varepsilon)<2b_2/d_2<\alpha (b_2-d_2)+O(\varepsilon).
\]
The first inequality is automatically fulfilled for sufficiently small $\beta$.
The second inequality holds if
$b_1>\dfrac{2b_2^2}{(b_2-d_2)d_2}$
and $\beta$ is sufficiently small
and then a positive periodic solution exists.
\end{example}
\begin{example}
\label{ex3}
We consider now the two-dimensional model (\ref{d-m}) related to the Ricker map, i.e., with
$f(\mathbf x)=\varphi(x_1+x_2)$ and $\varphi(x)=e^{-cx}$ for $x\in [0,\infty)$.
We have
$\varphi^{-1}(x)=- c^{-1}\ln x$, $(\varphi^{-1})'(x)=- (cx)^{-1}$ and
so $A=c^{-1}\ln(b_2/d_2)$, $B=-b_2/(cd_2)$.
Inequality (\ref{e:wkw5}) takes the form
\[
\ln(b_2/d_2) +O(\varepsilon)<2/d_2<\alpha \ln(b_2/d_2)+O(\varepsilon).
\]
Thus if $d_2e^{2/(\alpha d_2)}<b_2<d_2e^{2/d_2}$
and $\beta$ is sufficiently small, then a positive periodic solution exists.
Now we give an example
in which $T$ has a positive periodic point
and
both one-dimensional models have globally stable fixed points,
i.e., $b_r<d_re^{2/d_r}$ holds (see Section~\ref{ss:loc-stab}).
Let $b_1=1.0001e^2$, $b_2=b_1/4$, $d_1=0.9999$, $d_2=0.25$.
The coefficients $\alpha= b_1/b_2=4$, $\beta=\alpha d_2-d_1=0.0001$, $\gamma=-2.9999$
are the same as in Example~\ref{ex1}. Thus $\theta\approx 1.00408$, $\theta^{-1}\approx 0.99594$,
and $\alpha\theta+\gamma=1.01642$.
For $c=1$ we have $p_i=-\ln m_i$ for $i=1,2$ and we can check that
the periodic sequence is given by $x_1(2n-1)\approx 1.49009$,
$x_1(2n)\approx 1.51455$,
$x_2(2n-1)\approx 0.000868$, $x_2(2n)\approx 0.000871$
for $n\in\mathbb N$.
\end{example}
\begin{remark}
\label{r:per}
We have restricted the examples to $f$ of the form $f(\mathbf x)=\varphi(x_1+x_2)$ with the typical $\varphi$ used in the classic
competition models, in order to show that these models can lack a coexistence equilibrium and yet have a positive periodic solution.
Formula (\ref{ukl-3'}) can be used to find periodic solutions of models with other $f$'s.
\end{remark}
\subsection{Stability of fixed points}
\label{ss:loc-stab}
In the previous sections we used some results concerning the local and global stability of the transformation $T$;
to make our exposition self-contained we add a section on this subject.
First we look for fixed points of the transformation $T$ and check their local stability. We assume that $f(\mathbf 0)=1$ and
\begin{equation}
\label{strict}
L_1>L_2>\dots>L_k.
\end{equation}
Let $\mathbf x^*$ be a fixed point of $T$, i.e.,
$T({\mathbf x}^*)={\mathbf x}^*$.
Then
$x^*_i=0$ or $f(\mathbf x^*)=d_i/b_i=L_i^{-1}$ for $i=1,\dots,k$.
From (\ref{strict}) it follows that $\mathbf x^*$ is a fixed point of $T$ if
$\mathbf x^*=\mathbf x^*_0=\mathbf 0$ or $\mathbf x^*=\mathbf x^*_r=(0,\dots,x^*_r,\dots,0)$,
where $r\in\{1,\dots,k\}$ and $f(\mathbf x^*_r)=L_r^{-1}$.
We assume that the functions $x\mapsto f(0,\dots,x,\dots,0)$
are strictly decreasing.
Then $T$ has exactly $k+1$ fixed points $ \mathbf x^*_r$, $r=0,\dots,k$.
Let $A_r$ be the matrix with
$a_{ij}^r=\dfrac{\partial T_i}{\partial x_j}({\mathbf x_r^*})$. We have
\begin{equation}
\label{wzpoch}
\dfrac{\partial T_i}{\partial x_j}({\mathbf x})
=\delta_{ij}(1-d_i+b_if(\mathbf x))+b_ix_i\dfrac{\partial f}{\partial x_j}(\mathbf x),
\end{equation}
where $\delta_{ii}=1$ and $\delta_{ij}=0$ if $i\ne j$. Since
$f(\mathbf 0)=1$ we obtain $a_{ii}^0=1-d_i+b_i>1$ and $a_{ij}^0=0$ if $i\ne j$,
and therefore $\mathbf x_0^*=\mathbf 0$ is a repulsive fixed point.
Now we consider a point $\mathbf x_r$ with $r>0$.
Then $f(\mathbf x_r)=d_r/b_r$ and from (\ref{wzpoch}) we obtain
\[
a_{ij}^r=\dfrac{\partial T_i}{\partial x_j}(\mathbf x_r^*)=
\begin{cases}
\delta_{ij}(1-d_i+b_id_r/b_r), \quad \textrm{if $i\ne r$},\\
\delta_{ij}+b_rx_r\dfrac{\partial f}{\partial x_j}(\mathbf x_r^*),
\quad \textrm{if $i= r$}.
\end{cases}
\]
Since the matrix $A_r$ is diagonal apart from its $r$-th row, it has the $k$ eigenvalues $\lambda_i=a^r_{ii}$, $i=1,\dots,k$.
We have
\[
\begin{aligned}
\lambda_i&=1-d_i+b_id_r/b_r=1 +b_i(L_r^{-1}-L_i^{-1}),\textrm{ if $i\ne r$},\\
\lambda_r&=1+b_rx_r\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*).
\end{aligned}
\]
Observe that if $r=1$ then $\lambda_i\in (0,1)$ for $i>1$. If
we assume that $-2<b_1x_1\dfrac{\partial f}{\partial x_1}(\mathbf x_1^*)<0$, then also $\lambda_1\in (0,1)$ and the fixed point $\mathbf x_1^*$ is locally asymptotically stable.
If $r>1$ then $\lambda_i>1$ for $i<r$ and, consequently,
the fixed point $\mathbf x_r^*$ is not asymptotically stable.
But if
$-2<b_rx_r\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*)<0$,
the point $\mathbf x_r^*$ is locally \textit{semi-stable}, i.e., is stable for the transformation $T$ restricted to the set
\[
X_r=\{{\mathbf x}\in X \colon \,x_1=\dots=x_{r-1}=0\}.
\]
In the case of the logistic map
$f(\mathbf x)=1-(x_1+\dots +x_k)/K$
we have $x_r=K(1-d_r/b_r)$, $\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*)=-1/K$,
and conditions for stability (or semi-stability) of $\mathbf x_r^*$ reduce to $b_r<2+d_r$.
If the positive fixed point $x^*$ of a one-dimensional logistic map is locally asymptotically stable
then this point is globally stable, i.e., $T^n(x)\to x^*$, for $x\in (0,K)$.
Thus, Example~\ref{ex1} shows that the behaviour of
a $k$-dimensional logistic map
and its one-dimensional restrictions
can be different. It can have a locally stable
fixed point $\mathbf x_1^*$ that is not globally asymptotically stable.
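The eigenvalue computation for the logistic case can be checked numerically with a finite-difference Jacobian; the coefficients below are illustrative choices with $L_1>L_2$ and $b_1<2+d_1$, so $\mathbf x_1^*$ should be locally stable:

```python
# Illustrative k = 2 logistic case: b = (1.5, 1.0), d = (0.5, 0.5), K = 1,
# so L1 = 3 > L2 = 2 and b1 < 2 + d1 (stability of x_1^*).
b = (1.5, 1.0)
d = (0.5, 0.5)

def T(x):
    f = max(0.0, 1.0 - x[0] - x[1])
    return [(1.0 - d[i]) * x[i] + b[i] * f * x[i] for i in range(2)]

xstar = [1.0 - d[0] / b[0], 0.0]       # x_1^* = K(1 - d1/b1)

# Central finite-difference Jacobian of T at x_1^*.
h = 1e-7
J = [[0.0, 0.0], [0.0, 0.0]]
for j in range(2):
    xp = list(xstar); xp[j] += h
    xm = list(xstar); xm[j] -= h
    Tp, Tm = T(xp), T(xm)
    for i in range(2):
        J[i][j] = (Tp[i] - Tm[i]) / (2 * h)

# J is triangular at x_1^*, so its eigenvalues are the diagonal entries:
# lambda_1 = 1 - b1*x_1^*/K = 0 and lambda_2 = 1 - d2 + b2*d1/b1 = 5/6.
print(J[0][0], J[1][1])
```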
Consider a model with the Beverton-Holt birth rate
\[
f(\mathbf x)=\dfrac{c}{c+x_1+\dots +x_k}.
\]
Then we have $x_r=c\left(\dfrac{b_r}{d_r}-1\right)$, $\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*)=-\dfrac{c}{(c+x_r)^2}$.
Conditions for stability (or semi-stability) of $x_r$ reduce to the inequality $b_rx_rc< 2(c+x_r)^2$
or, equivalently, to $2b_r^2/d_r^2>b_r^2/d_r-b_r$, which always holds because $0\le d_r\le 1$.
The positive fixed point $x^*$ of the one-dimensional map $T$ is globally stable because $x<T(x)<x^*$ for $x\in (0,x^*)$ and $T(x)<x$ for $x>x^*$.
Example~\ref{ex2} shows that a two-dimensional map can have a locally stable
fixed point $\mathbf x_1^*$ that is not globally asymptotically stable.
In the case of the Ricker map $f(\mathbf x)=e^{-c(x_1+\dots +x_k)}$
we have $x_r=\dfrac 1c\ln\dfrac{b_r}{d_r}$, $\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*)=-c\dfrac{d_r}{b_r}$,
and conditions for stability (or semi-stability) of $x_r$ reduce to $cd_rx_r<2$, which takes place when
$b_r<d_re^{2/d_r}$. The last inequality is also sufficient for global stability of the fixed point (see e.g. \cite[Th.\ 9.16]{Thieme}).
\section{Conclusion}
In this paper we consider a discrete-time strong competition model. While in its continuous counterpart
the population with the maximal turnover coefficient drives the others to extinction, the discrete-time model need not have this property.
We give sufficient conditions for competitive exclusion in the discrete model.
Although this model does not have a coexistence equilibrium, it can have a positive periodic solution.
It is interesting that this periodic solution can exist even when the restrictions of the model to the one-dimensional cases
have globally stable stationary solutions.
Theorem~\ref{th1} can also be applied to models in which the suppression function $f$ depends on other factors;
for example, $f$ can include resource density.
It would be interesting to generalize Theorem~\ref{th1} to models with weaker
competition, i.e., when the suppression function is not identical for all subpopulations,
or to discrete-continuous hybrid models \cite{GHL,ML} or to equations on time scales \cite{BP}.
\section*{Acknowledgments}
The author is grateful to Dr. Magdalena Nockowska for several helpful discussions while this work was in progress.
This research was partially supported by
the National Science Centre (Poland)
Grant No. 2014/13/B/ST1/00224.
\end{document} |
\begin{document}
\title{Phase control of localization in the nonlinear two-mode system from harmonic mixing driving: Perturbative analysis and symmetry consideration}
\author{Xianchao Le$^{1}$}
\author{Zhao-Yun Zeng$^{2}$}
\author{Baiyuan Yang$^{2}$}
\author{Yunrong Luo$^{3}$}
\author{Jinpeng Xiao$^{2}$}
\author{Lei Li$^{2}$}
\author{Lisheng Wang$^{2}$}
\author{Yajiang Chen$^{2}$}
\author{Ai-Xi Chen$^{1}$}
\author{Xiaobing Luo$^{1,2}$}
\altaffiliation{Corresponding author: xiaobingluo2013@aliyun.com}
\affiliation{$^{1}$Department of Physics, Zhejiang Sci-Tech University, Hangzhou, 310018, China}
\affiliation{$^{2}$School of Mathematics and Physics, Jinggangshan University, Ji'an 343009, China}
\affiliation{$^{3}$ Department of Physics and Key Laboratory for Matter Microstructure and Function of Hunan Province, and Key Laboratory of
Low-dimensional Quantum Structures and Quantum Control of Ministry
of Education, Hunan Normal University, Changsha 410081, China}
\date{\today}
\begin{abstract}
In this paper, we present a rigorous analysis of symmetry and underlying physics of the nonlinear two-mode system
driven by a harmonic mixing field, by means of multiple scale asymptotic analysis method. The effective description in the framework of the second-order perturbative theory provides an accurate picture for understanding the Floquet eigenspectrum and dynamical features of the nonlinear two-mode system, showing full agreement with the prediction of symmetry considerations. We find that two types of symmetries play significant role in the dynamical features of this model, the mechanism behind which can be interpreted in terms of the effective description. The results are of relevance for the phase control of the atomic localization in Bose-Einstein condensates or switch of the optical signals in nonlinear mediums.\\
{Keywords:} Symmetry, Multiple time scales, Nonlinear Floquet states, Localization
\end{abstract}
\maketitle
\section{Introduction}
The nonlinear two-mode model is a prototypical example for investigating fundamental
quantum effects and nonlinear tunneling dynamics. Nonlinearity is ubiquitous in diverse branches of science; its physical origins
include a mean-field treatment of the interactions between coherent atoms\cite{Pethick2022}, nonlinear Kerr effects in optical fibers\cite{Agrawal}, and possible modifications of quantum mechanics
on the fundamental level\cite{Weinberg1989}. In reality, the nonlinear two-mode model can be applied to describe a wide variety of physical systems, such as two coupled optical waveguides with Kerr nonlinearity\cite{Jensen1982}, Bose-Einstein
condensates (BECs) in a double-well potential\cite{Smerzi1997}, among others. As is well known, the presence of nonlinearity gives rise to a number of new quantum features such as macroscopic quantum self-trapping
(MQST)\cite{Smerzi1997, Milburn1997, Albiez2005} and breakdown of quantum
adiabaticity\cite{Wu2000, Liu2002, Liu2003}, which presents both challenges and opportunities for existing theories
in linear systems.
In recent years, Floquet engineering, i.e., coherent control via periodic driving, has offered a versatile method for the realization of new phases not accessible in equilibrium systems\cite{Bukov2015, Eckardt2017, Silveri2017, An2021}, because it adds time-periodicity as a novel control dimension to quantum systems. The nonlinear two-mode model under periodic driving is a paradigm of this active topic and is attracting increasing interest: insight into the combined effects
of periodic driving and nonlinearity on quantum tunneling through a barrier may open the possibility of using two-state systems as the basic building blocks of
quantum-based devices. In
periodically driven quantum systems, it is convenient to analyze the dynamics
in terms of the so-called Floquet states and quasi-energies. When nonlinearity is introduced, it is necessary
to extend the conventional Floquet states to nonlinear
Floquet states\cite{Holthaus12001, Holthaus22001, Luo2007, Luo2008, Molina2008}. Nowadays considerable efforts have been devoted to studying periodically driven nonlinear two-state systems in different ways, e.g., by
employing an effective Hamiltonian
description\cite{Wang2006, Zhang2008}, numerically computing the nonlinear
Floquet quasienergy spectrum\cite{Luo2007, Luo2008, Molina2008, Molina2008R, Lyu2020}, and constructing the exact analytical nonlinear Floquet solutions\cite{Xie12007, Xie22007, Yang2016} as well. Rich dynamical behaviors have been uncovered, such as the emergence of Hamiltonian
chaos\cite{Abdullaev2000, Lee2001, Hai2002, Weiss2008, Jiang2014}, photon-assisted
tunneling\cite{Eckardt2005, Watanabe2010}, coherent control of self-trapping\cite{Holthaus12001, Holthaus22001, Wang2006, Xie12007}, and so
on.
On the other side, previous works have clearly identified that the symmetries of the time-periodic
Hamiltonian play a crucial role in the current rectification
phenomenon, or ratchet effect, in driven periodic potentials\cite{Reimann2002, Hanggi2009}. It has been shown that in order to
achieve directed
(ratchet) transport,
relevant symmetries have to be broken\cite{Flach2000, Denisov2007}.
According to Curie's principle,
a certain phenomenon always occurs unless it is ruled out by
symmetries\cite{Reimann2002}.
The discussion of space-time symmetries was also exemplified nicely by the
two-state dynamics\cite{Kierig2008}. It has been noted that a broken space-time symmetry leads in general to driving-induced unbalanced Floquet states with unequal populations of the two modes. In addition, a prominent quantum effect called coherent destruction of tunneling (CDT)\cite{Grossmann1991}, upon
the occurrence of which the quantum tunneling effects can be completely suppressed, has been shown to be connected to the degeneracy
of the quasienergies and the so-called generalized parity symmetry (that is, the Hamiltonian
is invariant under a spatial parity transformation plus a time
shift by half a driving period)\cite{Kierig2008}. An important question now arises as to how these symmetry pictures are modified when
the two-state system is subject to nonlinearity. Despite a few available numerical
studies of this problem, the connection between symmetry and the dynamical properties of the driven nonlinear two-mode model is still not very clear and awaits more rigorous and
elaborate analytical results.
In the present work, we explore the symmetry and the underlying physics of the nonlinear two-state system
exposed to a harmonic mixing field in a
rigorous way, by means of the multiple-scale asymptotic analysis method.
By pushing the multiple-time-scale asymptotic analysis up to the second order, it is shown that apart from renormalization of the tunneling parameter,
the harmonic mixing driving may also induce an effective static dc-bias between two modes whose amplitude and sign depend on the phase shift between two harmonics.
The effective time-independent Hamiltonian obtained in the framework of the second-order perturbative theory is found to be successful in capturing the fine structure of the
quasienergy spectrum with different time-space symmetries, which confirms the
predictions of the symmetry considerations.
The analytical results give a clear explanation of why the phase shift between the two harmonic components of the driving field can serve as a control
parameter for the amplitude and sign of the population imbalance of nonlinear Floquet states.
\section{Model and symmetry}
We consider a simple, yet non-trivial periodically driven nonlinear two-mode system consisting of two basis states
$|1\rangle$ and $|2\rangle$, whose dynamics is described by
\begin{align}\label{Dnls1}
i\frac{dc_1}{dt}= &-\frac{v}{2}c_2+\frac{S(t)}{2}c_1-\chi|c_1|^2c_1\nonumber \\
i\frac{dc_2}{dt}= &-\frac{v}{2}c_1-\frac{S(t)}{2}c_2-\chi|c_2|^2c_2,
\end{align}
where $v$ denotes the tunneling rate constant, $\chi$ is the nonlinearity strength, $c_{1,2}$ are the quantum
probability amplitudes on the two basis states $|1\rangle$ and $|2\rangle$, and $S(t)$ is an external periodic field of zero mean,
$S(t+T)=S(t)$.
Before proceeding to the analysis of dynamics of the system \eqref{Dnls1}, it is instructive to identify the symmetry property of the model
equation. Like its linear counterpart, the driven nonlinear two-mode system also admits solutions in the form of Floquet
states $\mathbf{c}(t)=\tilde{\mathbf{c}}(t)e^{-i\varepsilon t}$, where $\mathbf{c}(t)=[c_1(t),c_2(t)]^T$ (hereafter the superscript $T$ stands for the
transpose), $\varepsilon$ is the quasienergy,
and $\mathbf{\tilde{c}}(t)=[\tilde{c}_1(t),\tilde{c}_2(t)]^T$ inherits
the period of the driving and is called the Floquet eigenstate. Substituting the Floquet solution into Eq.~\eqref{Dnls1}, we obtain
the following eigenvalue equation
\begin{align}\label{eigen eq}
\mathcal{H}\mathbf{\tilde{c}}(t)=\varepsilon\mathbf{\tilde{c}}(t),~~\mathcal{H}:=H(t)-i\partial_t,
\end{align}
with time-periodic Hamiltonian
corresponding to the system \eqref{Dnls1}, i.e.,
\begin{equation}\label{Ht}
H(t)=\left(
\begin{array}{cc}
\frac{S(t)}{2}-\chi |\tilde{c}_1(t)|^2 & -\frac{v}{2} \\
-\frac{v}{2} & -\frac{S(t)}{2}-\chi |\tilde{c}_2(t)|^2 \\
\end{array}
\right),
\end{equation}
where the operator $\mathcal{H}$ is the so-called Floquet Hamiltonian defined in the extended Hilbert space.
For the linear ($\chi=0$) case, we review below three relevant symmetries
of Eq.~\eqref{eigen eq}. If $S(t)$ is shift symmetric,
$S(t)=-S(t+T/2)$, then $\mathcal{H}$ is invariant under the generalized parity symmetry
\begin{equation}\label{SA}
S_{\rm{GP}}: |1\rangle\leftrightarrow |2\rangle,~~t\rightarrow t+\frac{T}{2},
\end{equation}
which consists of a spatial parity transformation plus a time shift by half a driving
period.
If $S(t)$ is antisymmetric under the inversion $t\rightarrow-t+2t_0$, i.e., $S(t+t_0)=-S(-t+t_0)$ for some appropriate $t_0$,
the Floquet Hamiltonian is invariant under the following symmetry
\begin{equation}\label{SB}
S_{\rm{PT}}: |1\rangle\leftrightarrow |2\rangle,~~t\rightarrow -t+2t_0, i\rightarrow-i,
\end{equation}
which is equivalent to parity-time symmetry. The transformation $S_{\rm{PT}}$ represents the combined parity
and time reversal operations.
Furthermore, if $S(t)$ possesses the symmetry $S(t+t_0) =S(-t+t_0)$, then the Floquet Hamiltonian $\mathcal{H}$ is
time-reversal invariant under
\begin{equation}\label{ST}
S_{\rm{T}}: ~~t\rightarrow -t+2t_0, i\rightarrow-i.
\end{equation}
To measure the localization
properties of a Floquet mode, we use the time-averaged expectation value of the
Pauli matrix $\sigma_z$:
\begin{equation}\label{expectation}
\langle\langle\sigma_z\rangle\rangle=\frac{1}{T}\int_0^{T}dt\mathbf{\tilde{c}}^{\dag}(t)\sigma_z\mathbf{\tilde{c}}(t).
\end{equation}
This time-averaged expectation value $\langle\langle\sigma_z\rangle\rangle$ vanishes for a perfectly delocalized (balanced) Floquet state.
In the linear limit ($\chi=0$), it can be readily verified that the system \eqref{Dnls1} has two Floquet states
with zero time-averaged population imbalances, namely $\langle\langle\sigma_z\rangle\rangle=0$, whenever the symmetries $S_{\rm{GP}}$ and/or $S_{\rm{PT}}$
are realized.
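In the linear limit this balancing can be verified directly from the one-period propagator. The following Python sketch (parameter values are illustrative assumptions; the driving anticipates the harmonic-mixing form introduced below) diagonalizes the monodromy matrix of $H(t)$ with $\chi=0$ and evaluates $\langle\langle\sigma_z\rangle\rangle$ for each Floquet state:

```python
import numpy as np

# Illustrative parameters (assumed values); phi = 0 makes S(t) antisymmetric.
omega, v, A, f, phi = 10.0, 1.0, 10.0, 0.25, 0.0
T = 2*np.pi/omega

def S(t):
    return -A*(np.sin(omega*t) + f*np.sin(2*omega*t + phi))

def H(t):  # linear (chi = 0) two-mode Hamiltonian
    return np.array([[S(t)/2, -v/2], [-v/2, -S(t)/2]], dtype=complex)

def rk4_step(c, t, dt):  # one RK4 step of i dc/dt = H(t) c
    k1 = -1j*H(t) @ c
    k2 = -1j*H(t + dt/2) @ (c + dt/2*k1)
    k3 = -1j*H(t + dt/2) @ (c + dt/2*k2)
    k4 = -1j*H(t + dt) @ (c + dt*k3)
    return c + dt/6*(k1 + 2*k2 + 2*k3 + k4)

n = 4000
dt = T/n
U = np.eye(2, dtype=complex)          # one-period propagator (monodromy matrix)
for k in range(n):
    U = rk4_step(U, k*dt, dt)
lam, vec = np.linalg.eig(U)
eps = (1j*np.log(lam)/T).real         # quasienergies, defined mod omega

imbalances = []                       # <<sigma_z>> for each Floquet state
for j in range(2):
    c, sz = vec[:, j].copy(), 0.0
    for k in range(n):
        sz += (abs(c[0])**2 - abs(c[1])**2)*dt/T
        c = rk4_step(c, k*dt, dt)
    imbalances.append(sz)
    print(f"eps = {eps[j]:+.4f}  <<sigma_z>> = {sz:+.2e}")
```

For the antisymmetric driving chosen here ($\phi=0$) both states come out balanced to numerical precision, in line with the symmetry argument above.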
Next we examine the consequences of the above symmetries when the nonlinearity is switched on. Consider a harmonic mixing (two-frequency) driving
\begin{equation}\label{driving}
S(t)=-A[\sin\omega t+f\sin(2\omega t+\phi)],
\end{equation}
which has two components of frequencies $\omega$
and $2\omega$ with phase shift $\phi$. Obviously, in the presence of both harmonics ($A\neq 0,~f\neq0 $), the generalized
parity symmetry $S_{\rm{GP}}$ is always violated, independently of the value of the
phase shift $\phi$. Note that the antisymmetry $S(t)=-S(-t)$ is preserved for
$\phi=n\pi$ with $n$ integer, and the time-reversal symmetry $S(t_0+t)=S(t_0-t)$ is preserved for $\phi=n\pi+\pi/2$.
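These symmetry conditions on the harmonic mixing signal are easy to confirm numerically; a minimal sketch (illustrative amplitude values assumed, with $t_0=T/4$ for the symmetric case):

```python
import numpy as np

A, f, omega = 24.0, 0.25, 10.0        # illustrative values
T = 2*np.pi/omega
t = np.linspace(-T, T, 4001)

def S(t, phi):
    return -A*(np.sin(omega*t) + f*np.sin(2*omega*t + phi))

# phi = 0: antisymmetric driving, S(t) = -S(-t)
r_anti = np.max(np.abs(S(t, 0.0) + S(-t, 0.0)))

# phi = pi/2: time-reversal symmetric about t0 = T/4, S(t0+t) = S(t0-t)
t0 = T/4
r_sym = np.max(np.abs(S(t0 + t, np.pi/2) - S(t0 - t, np.pi/2)))

# phi = pi/4: neither symmetry holds; the antisymmetry residual is finite
r_none = np.max(np.abs(S(t, np.pi/4) + S(-t, np.pi/4)))

print(r_anti, r_sym, r_none)
```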
The relationship between the antisymmetry $S(t)=-S(-t)$ and the dynamics can be analyzed by a symmetry argument. We apply $S_{\rm{PT}}$, the combined parity and time-reversal operation, to the nonlinear model Eq.~\eqref{eigen eq}.
As the first step, the time-reversal transformation (which replaces every complex number by its
complex conjugate and turns $t$ into $-t$) converts Eq.~\eqref{eigen eq} into
\begin{align}\label{PT equ}
[\widetilde{H}-i\partial_t]\left(
\begin{array}{c}
\tilde{c}_1^*(-t) \\
\tilde{c}_2^*(-t) \\
\end{array}
\right)=\varepsilon\left(
\begin{array}{c}
\tilde{c}_1^*(-t) \\
\tilde{c}_2^*(-t) \\
\end{array}
\right),
\end{align}
with
\begin{align}\label{PT equ H}
\widetilde{H}=\left(
\begin{array}{cc}
\frac{S(-t)}{2}-\chi |\tilde{c}_1^*(-t)|^2 & -\frac{v}{2} \\
-\frac{v}{2} & -\frac{S(-t)}{2}-\chi |\tilde{c}_2^*(-t)|^2 \\
\end{array}
\right).
\end{align}
The second step is to act with the parity transformation (permutation of the two indices 1 and 2) on Eq.~\eqref{PT equ}, which yields
\begin{equation}\label{PT equ2}
\mathcal{H'}\left(
\begin{array}{c}
\tilde{c}_2^*(-t) \\
\tilde{c}_1^*(-t) \\
\end{array}
\right)=\varepsilon\left(
\begin{array}{c}
\tilde{c}_2^*(-t) \\
\tilde{c}_1^*(-t) \\
\end{array}
\right)
\end{equation}
with
\begin{equation}\label{PT H2}
\mathcal{H'}=\left(
\begin{array}{cc}
-\frac{S(-t)}{2}-\chi |\tilde{c}_2^*(-t)|^2 & -\frac{v}{2} \\
-\frac{v}{2} & \frac{S(-t)}{2}-\chi |\tilde{c}_1^*(-t)|^2 \\
\end{array}
\right)-i\partial_t.
\end{equation}
By comparing Eq.~\eqref{PT H2} with the original equation \eqref{eigen eq}, we find that for a nonlinear system under antisymmetric driving [$S(t)=-S(-t)$], the
$S_{\rm{PT}}$ operation leaves the Floquet Hamiltonian invariant, namely $\mathcal{H'}=\mathcal{H}$, provided that
the following constraint,
\begin{equation}\label{constraints}
\chi|\tilde{c}_2(-t)|^2=\chi|\tilde{c}_1(t)|^2,
\end{equation}
is satisfied.
If the Floquet Hamiltonian is invariant under $S_{\rm{PT}}$, we have
\begin{equation}\label{Relat twosolu}
S_{\rm{PT}}\left(
\begin{array}{c}
\tilde{c}_1(t) \\
\tilde{c}_2(t) \\
\end{array}
\right)= \left(
\begin{array}{c}
\tilde{c}_2^*(-t) \\
\tilde{c}_1^*(-t) \\
\end{array}
\right)=\left(
\begin{array}{c}
\tilde{c}_1(t) \\
\tilde{c}_2(t) \\
\end{array}
\right)e^{i\varphi}.
\end{equation}
Here the Floquet states are defined up to an arbitrary phase $\varphi$. Physically, Eq.~\eqref{Relat twosolu} implies that the Floquet Hamiltonian operator $\mathcal{H}$
and the $S_{\rm{PT}}$ operator share the same eigenmode. Once Eq.~\eqref{Relat twosolu} is satisfied, the constraint \eqref{constraints} is satisfied automatically, and vice versa.
From \eqref{Relat twosolu}, it is easy to prove
\begin{align}
\frac{1}{T}\int_0^T|\tilde{c}_2(t)|^2dt=\frac{1}{T}\int_{0}^{T}|\tilde{c}_1(t)|^2dt,
\end{align}
which means $\langle\langle\sigma_z\rangle\rangle=\frac{1}{T}\int_0^T[|c_1(t)|^2-|c_2(t)|^2]dt=0$; such balanced Floquet states carry zero time-averaged population imbalance. Hence, whether or not the nonlinearity is present, balanced Floquet states necessarily exist for antisymmetric driving. Conversely, if the antisymmetry $S(t)=-S(-t)$ is violated, the Floquet Hamiltonian changes under the operation $S_{\rm{PT}}$, the balanced Floquet states disappear, and all Floquet states acquire nonzero population imbalances.
There are, however, exceptions in the nonlinear case.
If the antisymmetry $S(t)=-S(-t)$ holds but $|\tilde{c}_1(t)|^2=|\tilde{c}_2(-t)|^2$ does not, the Floquet Hamiltonian in \eqref{eigen eq} is not invariant under the $S_{\rm{PT}}$ operation. This situation leads to the surprising result that two doubly-degenerate unbalanced nonlinear Floquet
states emerge, as follows. Due to $S(t)=-S(-t)$, it follows from \eqref{eigen eq} and \eqref{PT H2} that
the system admits two independent Floquet solutions
$|\psi_1\rangle=[\tilde{c}_1(t),\tilde{c}_2(t)]^T$ and $|\psi_2\rangle=[\tilde{c}_2^*(-t),\tilde{c}_1^*(-t)]^T$ corresponding to the same quasienergy $\varepsilon$.
In this case, because of $|\tilde{c}_1(t)|^2\neq|\tilde{c}_2(-t)|^2$, it is easy to see that
\begin{align}
\langle\langle\sigma_z\rangle\rangle=\frac{1}{T}\int_0^T[|c_1(t)|^2-|c_2(t)|^2]dt\neq 0,\\
\frac{1}{T}\int_0^T\langle\psi_1|\sigma_z|\psi_1\rangle dt=-\frac{1}{T}\int_0^T\langle\psi_2|\sigma_z|\psi_2\rangle dt,
\end{align}
which signals the emergence of two doubly-degenerate unbalanced (localized) nonlinear Floquet
states with exactly opposite time-averaged population imbalances.
Clearly, these two degenerate Floquet states exist only when the nonlinear term does not vanish, so
that they have no linear counterparts. Note also that their degeneracy can be lifted by breaking the antisymmetry $S(t)=-S(-t)$.
\section{Quasienergies and Floquet states}
In this section, we shall numerically compute the nonlinear Floquet
states and corresponding quasienergies by following the strategy developed in Refs.~\cite{Luo2007, Luo2008}.
In this strategy, we expand $\tilde{\mathbf{c}}(t)$ as well as
the time-periodic modulation $S(t)$ into Fourier series with $2N +1$ modes: $\tilde{c}_1\left( t \right) =\sum_{n=-N}^N{a_n}e^{in\omega t}$, $\tilde{c}_2\left( t \right) =\sum_{n=-N}^N{b_n}e^{in\omega t}$, and
$S\left( t \right) =\sum_{m=-N}^N{p_m}e^{im\omega t},~ p_m=\frac{1}{T}\int_0^T{S\left( t \right)}e^{-im\omega t}dt$. Substituting these series into Eq.~\eqref{eigen eq} yields
\begin{align}\label{egienI}
&\frac{1}{2}\sum_{n,m}{a_n}p_me^{i\left( m+n \right) \omega t}-\chi \sum_{n,m,m'}{a_m{a^*}_{m'}a_{n}}e^{i\left( m+n-m' \right) \omega t}-\frac{v}{2}\sum_n{b_n}e^{in\omega t}\nonumber\\
&+\sum_n{n\omega a_n}e^{in\omega t}=\varepsilon \sum_n{a_n}e^{in\omega t},\nonumber\\
&-\frac{1}{2}\sum_{n,m}{b_n}p_me^{i\left( m+n \right) \omega t}-\chi \sum_{n,m,m'}{b_m{b^*}_{m'}b_{n}}e^{i\left( m+n-m' \right) \omega t}-\frac{v}{2}\sum_n{a_n}e^{in\omega t}\nonumber\\
&+\sum_n{n\omega b_n}e^{in\omega t}=\varepsilon \sum_n{b_n}e^{in\omega t}.
\end{align}
Multiplying the above equations by $e^{-ij\omega t}$, and integrating them over one driving period, one gets the following eigenvalue equation,
\begin{align}\label{egienII}
&-\frac{A}{4i}\left( a_{j-1}-a_{j+1}+fe^{i\phi}a_{j-2}-fe^{-i\phi}a_{j+2} \right) -\chi \sum_{m,m'}{a_m{a^*}_{m'}a_{j+m'-m}}\nonumber\\&-\frac{v}{2}b_j+j\omega a_j=\varepsilon a_j,\nonumber\\
&\frac{A}{4i}\left( b_{j-1}-b_{j+1}+fe^{i\phi}b_{j-2}-fe^{-i\phi}b_{j+2} \right) -\chi \sum_{m,m'}{b_m{b^*}_{m'}b_{j+m'-m}}\nonumber\\&-\frac{v}{2}a_j+j\omega b_j=\varepsilon b_j.
\end{align}
Finally, the nonlinear Floquet
states and corresponding quasienergies can be found by solving numerically the eigenvalue equation \eqref{egienII} in a
self-consistent manner. In the numerical calculations, Fourier terms of order
higher than a cutoff $N$ are neglected, with $N$ chosen large enough to ensure convergence. The above procedure enforces conservation of the
norm, i.e.,
\begin{align}\label{norm}
\sum_n |a_n|^2+\sum_n |b_n|^2=1.
\end{align}
In analogy to quasimomenta
in a spatially periodic crystal, the quasienergy spectrum repeats itself
periodically along the energy axis and thus possesses a Brillouin-zone-like
structure, the width
of one zone being $\hbar\omega$ ($\hbar=1$). In the following, we restrict ourselves to states with quasienergies in the first Brillouin zone $(-\omega/2, \omega/2]$.
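In the linear limit ($\chi=0$), the eigenvalue problem \eqref{egienII} is not self-consistent and can be solved in one shot. The following Python sketch (cutoff and parameter values are illustrative assumptions) assembles the truncated Fourier-space matrix for the harmonic-mixing driving and folds the resulting eigenvalues into one Brillouin zone of width $\omega$:

```python
import numpy as np

# Illustrative parameters; chi = 0, so the Fourier eigenproblem is linear.
omega, v, f, phi = 10.0, 1.0, 0.25, 0.0
N = 20                       # Fourier cutoff: modes j = -N..N
dim = 2*N + 1

def quasienergies(A):
    """Eigenvalues of the truncated Fourier-space matrix, folded to width omega."""
    M = np.zeros((2*dim, 2*dim), dtype=complex)
    g1 = 1j*A/4                          # from p_{+1}/2 of S(t)
    g2 = 1j*A*f*np.exp(1j*phi)/4         # from p_{+2}/2 of S(t)
    for r in range(dim):
        j = r - N
        for blk, sgn in ((0, +1), (dim, -1)):   # a-block (+), b-block (-)
            M[blk+r, blk+r] = j*omega
            if r >= 1:    M[blk+r, blk+r-1] = sgn*g1
            if r+1 < dim: M[blk+r, blk+r+1] = sgn*np.conj(g1)
            if r >= 2:    M[blk+r, blk+r-2] = sgn*g2
            if r+2 < dim: M[blk+r, blk+r+2] = sgn*np.conj(g2)
        M[r, dim+r] = M[dim+r, r] = -v/2
    w = np.linalg.eigvalsh(M)            # the matrix is Hermitian
    return np.sort(((w + omega/2) % omega) - omega/2)

eps = quasienergies(24.0)                # A/omega = 2.4
print(eps[np.abs(eps) < omega/4][:4])    # copies of the two physical levels
```

Each physical quasienergy appears $2N+1$ times in the folded spectrum (shifted copies by multiples of $\omega$), with only the copies near the truncation edges degraded.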
\begin{figure}
\caption{(color online) The Floquet state quasienergies versus the driving parameter $A/\omega$. [(a), (b)] the case of time-reversal antisymmetric driving [see the profile of $S(t)$ at $\phi=0$, in the bottom-right inset of panel (a)]. [(c), (d)] the case of time-reversal symmetric driving [see the shape of $S(t)$ at $\phi=\pi/2$ in
the inset of panel (c)]. The left column [(a), (c)] is for the linear case $\chi=0$, while the right column [(b), (d)] is for the nonlinear case $\chi=0.4$. The inset of panel (b) shows the time-averaged population of mode 1 for every Floquet state in the
lowest quasienergy level. The inset of panel (d) is an enlarged view of the closest approach of the two normal Floquet states at $A/\omega=2.4$. Other parameters: $f=1/4, \omega=10, v=1$. }
\label{fig1}
\end{figure}
Our numerical results for the quasienergies are plotted in Fig.~\ref{fig1}. Figures~\ref{fig1} (a)-(b) illustrate the quasienergies for an antisymmetric (sawtooth-like) driving [namely, $S(t)=-S(-t)$ with $\phi=0$, see the bottom-right inset in Fig.~\ref{fig1} (a)], and Figs.~\ref{fig1} (c)-(d) illustrate those for a symmetric driving [namely, $S(t+t_0)=S(-t+t_0)$ with $\phi=\pi/2$, see the bottom-right inset in Fig.~\ref{fig1} (c)].
It is clear from Fig.~\ref{fig1} that, for the linear case, there are two
quasienergies at any given value of $A/\omega$ (left column), while in the presence of nonlinearity several new states emerge through bifurcations within a certain range of $A/\omega$ (right column). As the nonlinearity sets in, the antisymmetric driving case exhibits a pitchfork bifurcation with the appearance of an additional quasienergy level, absent in the linear case, lying in the lowest branch. As seen in the inset of Fig.~\ref{fig1} (b), this additional quasienergy is in fact doubly degenerate and corresponds
to two different Floquet states with exactly opposite
nonzero population imbalances, as witnessed by the cycle-averaged population $\langle |c_1|^2\rangle=\frac{1}{T}\int_0^T|c_1|^2 dt$ of each Floquet state. By contrast, for the symmetric driving [see Fig.~\ref{fig1} (d)], the twofold degeneracy of the lowest quasienergy is lifted and the level splits into two quasi-degenerate (nearly coincident) quasienergy levels. Of these two quasi-degenerate levels, which have no linear equivalent, the lower one continues smoothly from the undriven limit, while the other emerges
through a saddle-node bifurcation upon smooth variation of the driving parameter $A/\omega$.
It should be noted that, for the two-mode system under symmetric driving, there is no threshold value of the nonlinearity for level bifurcation (forming the well-known triangular structure) to appear, just as in the purely sinusoidal driving case~\cite{Luo2007, Luo2008}. For the antisymmetric driving case, however, our numerical results (not shown here) reveal that level bifurcation occurs only above a certain critical value of the nonlinearity, for a reason that will be explained later. In addition to the new quasienergies emerging from bifurcations, there also exist two normal Floquet states which survive for vanishing nonlinearity and thus have linear counterparts. The two normal Floquet states make their closest approach at $A/\omega=2.4$. A significant difference between the antisymmetric and symmetric cases is that for the former there is a large gap between the two normal Floquet levels [characterized by the minimal level spacing $\Delta\varepsilon$ between the two Floquet states, as labeled in Fig.~\ref{fig1} (b)], while for the latter the gap nearly vanishes at the closest approach at $A/\omega=2.4$. An enlarged view of the closest approach for the symmetric driving case reveals that there is no true level crossing between the Floquet states.
In Fig.~\ref{fig2}, we numerically examine the dependence of the level spacing $\Delta\varepsilon$ on the phase shift [Fig.~\ref{fig2} (a)] and on the nonlinearity strength [Fig.~\ref{fig2} (b)]. As shown in Fig.~\ref{fig2} (a), for $\phi=\pm \pi/2$, i.e., in the presence of a time-reversal symmetry, the energy gap $\Delta\varepsilon$ nearly vanishes, and the maximum values of $\Delta\varepsilon$ are reached for $\phi=n\pi$ with $n$ integer (maximally broken time-reversal symmetry, but in the presence of a time-reversal antisymmetry). The dependence of $\Delta\varepsilon$ on the nonlinearity strength is exemplified in Fig.~\ref{fig2} (b) for the specific case $\phi=\pi/4$: $\Delta\varepsilon$ remains unchanged as the nonlinearity strength varies, and other choices of the phase shift produce the same result. This means that the spectral structure of the two normal Floquet states having linear
counterparts is not affected by the presence of nonlinearity.
Most strikingly, switching the sign of the phase shift $\phi$ creates lowest Floquet
states with opposite population imbalances, as shown in Fig.~\ref{fig3}. Comparing Fig.~\ref{fig3} (a)
and Fig.~\ref{fig3} (b), we clearly see that the two cases $\phi=\pm \pi/4$ have exactly the same quasienergy spectrum.
For $\phi=\pm \pi/4$ (neither time-reversal symmetric nor time-reversal antisymmetric), there exists a level gap (no level crossing) between the two normal
Floquet states (see the two upper black lines) which have linear analogues, and when the nonlinearity is strong enough in this system [e.g., $\chi=0.4$ in Fig.~\ref{fig3}], there are two nearly coincident (not degenerate) quasienergy levels within a finite interval of
parameter values around $A/\omega=2.4$ (see the bottom insets), which stem from the level bifurcations caused by nonlinearity. In the insets of Fig.~\ref{fig3}, we have also plotted the cycle-averaged population $\langle |c_1|^2\rangle=\frac{1}{T}\int_0^T|c_1|^2 dt$ for the Floquet state corresponding to the lowest level.
We may expect that the lowest Floquet state with nearly symmetric population
distribution evolves continuously into one with a strong population imbalance. As $A/\omega$ is increased from zero to $2.4$, we observe that $\langle |c_1|^2\rangle$
drops to 0 for $\phi=-\pi/4$, which implies complete localization in state $|2\rangle$, whereas $\langle |c_1|^2\rangle$ rises to 1 for $\phi=\pi/4$,
corresponding to complete localization in state $|1\rangle$. Thus, by tuning the phase shift, we can switch between two strongly localized states with opposite population imbalances.
\begin{figure}
\caption{(color online) (a) Dependence of $\Delta\varepsilon$ on the phase shift $\phi$. $f=1/4, A/\omega=2.4,\omega=10, v=1,\chi=0.4$. (b) Dependence of $\Delta\varepsilon$ on the nonlinearity parameter $\chi$. $f=1/4, \phi=\pi/4, A/\omega=2.4,\omega=10, v=1$. The quantity $\Delta\varepsilon$ denotes the minimal level spacing between the two normal Floquet states at $A/\omega=2.4$, as indicated in Fig.~\ref{fig1}.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{(color online) The Floquet state quasienergies versus the driving parameter $A/\omega$ at (a) $\phi=-\pi/4$ and (b) $\phi=\pi/4$. In the two plots, the
top-right insets are the time-averaged population $\langle |c_1|^2\rangle$ for the Floquet state in the lowest quasienergy level, and the bottom-right insets are the enlarged view of the two quasi-degenerate (nearly coincident) quasienergy levels originating from nonlinear bifurcation. The other parameters are $f=1/4,\omega=10, v=1,\chi=0.4$.}
\label{fig3}
\end{figure}
\section{Physical consequences}
In this section, we investigate the physical implications of the above-mentioned symmetries for the dynamics of the nonlinear two-mode system.
In Fig.~\ref{fig4}, we initialize the system in state $|1\rangle$ and numerically plot the average of the population $|c_1|^2$ over a long-enough time interval at $A/\omega=2.4$ (a) and $A/\omega=1$ (b), for $\phi=0$ and $\phi=\pi/2$. By comparison, we find that at $A/\omega=1$ the averaged population $|c_1|^2$ for $\phi=0$ and $\phi=\pi/2$ exhibits the same (overlapping) dynamics, with the same transition to localization for nonlinearity strengths above a critical value, while at $A/\omega=2.4$
the averages show qualitatively different dynamical features for $\phi=0$ and $\phi=\pi/2$. This can be explained by noting that the Floquet eigenspectra for $\phi=0$ and $\phi=\pi/2$ are the same in the regions away from $A/\omega=2.4$, but differ in the region around $A/\omega=2.4$.
In Fig.~\ref{fig4} (a), for $\phi=\pi/2$ (time-reversal symmetric driving), we find that there is no threshold value of the nonlinearity for
localization to occur, which corresponds to the almost perfect (but not true) level crossing between the two normal Floquet states at $A/\omega=2.4$.
In this case, the periodic driving plays the essential role in the localization phenomenon, and in the linear limit this localization connects to the
well-known CDT phenomenon. In contrast, if the time-reversal symmetry is broken [for example, $\phi=0$ in Fig.~\ref{fig4} (a)], localization occurs
only above a certain critical value of the nonlinearity, suggesting that the localization is a purely nonlinear phenomenon. This effect is related to the existence of a relatively large level gap $\Delta \varepsilon$ between the two normal Floquet states at their closest approach at $A/\omega=2.4$.
\begin{figure}
\caption{(color online) Dependence of the time-averaged population $\langle |c_1|^2\rangle$ on nonlinearity parameter $\chi$ at (a) $A/\omega=2.4$ and (b) $A/\omega=1$. Here the system is initially prepared at state $|1\rangle$, and the average $\langle ...\rangle$ is numerically realized over a long-enough time interval. Two prototypical examples, namely $\phi=0$ (not time-reversal symmetric but time-reversal antisymmetric) and $\phi=\pi/2$ (time-reversal symmetric) drivings, are compared. Other parameters are the same as in Fig.~\ref{fig1}.}
\label{fig4}
\end{figure}
\begin{figure}
\caption{(color online) Localization induced by the modulation $S(t)$
with a linearly ramped amplitude $A=\alpha t$, where the dimensionless ramping
rate $\alpha = 0.01$, for the two-mode system \eqref{Dnls1}.}
\label{fig5}
\end{figure}
\begin{figure}
\caption{(color online) The final localization according to the adiabatic
process outlined in Fig.~\ref{fig5}.}
\label{fig6}
\end{figure}
As noted in the previous section, the harmonic mixing driving field permits a sensitive control of the population distribution as a function of the phase shift.
For demonstration, we simply ramp the driving amplitude $A$ linearly in time, viz., $A=\alpha t$, where $\alpha$ is the ramping rate.
The system is initialized in its ground state, $c_1(0)=c_2(0)=1/\sqrt{2}$. The driving amplitude is ramped up from zero to $A/\omega=2.4$ for a given $\omega=10$ and then held
constant. For a low ramping rate, $\alpha=0.01$, we expect the system
to adiabatically follow the lowest Floquet state. Two different scenarios of localization dynamics are identified in Fig.~\ref{fig5}, for two different values of the
phase shift $\phi$. For $\phi=-\pi/4$, the time-evolving state is finally localized at $|2\rangle$ and stays there subsequently.
When the phase shift is changed to a positive value, $\phi=\pi/4$,
the time-evolving state becomes concentrated in the other basis state $|1\rangle$. Thus, states with opposite population
imbalances can be selectively targeted, which may help to control the localization process. To provide a more
complete analysis of this phase control of the population imbalance, we evaluate the final
time-averaged populations of the two modes as $\langle |c_n|^2\rangle_f=\frac{1}{\Delta t}\int_{t_f}^{t_f+\Delta t}|c_n|^2dt$, ($n=1, 2$), where
$t_f$ represents the time instant when the linearly-ramping
modulation amplitude reaches the value $A/\omega=2.4$ (giving the maximum localization),
and $\Delta t$ represents the long-enough averaging time interval
(during which the modulation amplitude stays constant). As expected, the degree of final localization is symmetric with respect to $\phi=0$ [see Fig.~\ref{fig6}]; which of the two basis states is highly occupied depends on the sign of the phase shift.
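The ramp protocol can be sketched numerically as follows (a Python sketch with illustrative parameters; a faster ramp rate than the $\alpha=0.01$ of the text is assumed to keep the runtime short, so the evolution is only approximately adiabatic):

```python
from math import sin, sqrt, pi

# Illustrative values; alpha = 0.05 is faster than in the text (assumption),
# so the following run is only approximately adiabatic.
omega, v, chi, f, phi = 10.0, 1.0, 0.4, 0.25, pi/4
alpha, A_max = 0.05, 2.4*omega
t_ramp = A_max/alpha

def rhs(t, c1, c2):
    A = A_max if t > t_ramp else alpha*t      # linear ramp, then hold
    s = -A*(sin(omega*t) + f*sin(2*omega*t + phi))
    d1 = -1j*( s/2*c1 - chi*abs(c1)**2*c1 - v/2*c2)
    d2 = -1j*(-s/2*c2 - chi*abs(c2)**2*c2 - v/2*c1)
    return d1, d2

c1 = c2 = 1/sqrt(2)                           # ground state of the undriven system
t, dt = 0.0, 0.005
p1_sum, n_samp = 0.0, 0
while t < t_ramp + 100.0:                     # fixed-step RK4 integration
    a1, a2 = rhs(t, c1, c2)
    b1, b2 = rhs(t + dt/2, c1 + dt/2*a1, c2 + dt/2*a2)
    g1, g2 = rhs(t + dt/2, c1 + dt/2*b1, c2 + dt/2*b2)
    e1, e2 = rhs(t + dt, c1 + dt*g1, c2 + dt*g2)
    c1 += dt/6*(a1 + 2*b1 + 2*g1 + e1)
    c2 += dt/6*(a2 + 2*b2 + 2*g2 + e2)
    t += dt
    if t > t_ramp:                            # accumulate <|c_1|^2> after the ramp
        p1_sum += abs(c1)**2
        n_samp += 1

norm = abs(c1)**2 + abs(c2)**2
print(norm, p1_sum/n_samp)                    # norm ~ 1; final mode-1 population
```

Flipping the sign of `phi` in this sketch swaps which mode ends up predominantly occupied, in the sense discussed above.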
It is now natural to ask a simple question: why does changing the phase shift invert the population imbalance? An analytical answer to
this question is still missing in the literature, and thus the problem calls for analytical insight.
\section{Perturbative analysis}
To gain analytical insight, in this section we perform a multiple-scale asymptotic analysis of the model \eqref{Dnls1} in the high-frequency
limit $\omega\gg \{v, \chi\}$ (see, for instance, Refs.~\cite{Longhi2012, Zhou2013, Luo2014, Luo2021}). For this purpose, we first introduce the slowly varying amplitudes $a_1$ and $a_2$ through the transformation
\begin{equation}\label{Forma1}
c_1=a_1e^{-i\int\frac{S(t)}{2}dt},~
c_2=a_2e^{i\int\frac{S(t)}{2}dt}.
\end{equation}
Substituting the transformation \eqref{Forma1} into Eq.~\eqref{Dnls1}, we obtain the coupled equations
\begin{align}\label{Dnls2}
i\frac{da_1}{dt}= &-\frac{v'}{2}a_2-\chi|a_1|^2a_1,\nonumber\\
i\frac{da_2}{dt}= &-\frac{v'^*}{2}a_1-\chi|a_2|^2a_2,
\end{align}
where $v'=ve^{i\int{S(t)}dt}=v\exp{[i\frac{A}{\omega}\cos\omega t+i\frac{Af}{2\omega}\cos(2\omega t+\phi)]}$.
Denoting
\begin{equation}
F(t)=\exp{[i\frac{A}{\omega}\cos\omega t+i\frac{Af}{2\omega}\cos(2\omega t+\phi)]},
\end{equation}
and introducing the new variables
\begin{equation}\label{new var}
\tau=\omega t,~\epsilon=\frac{v}{\omega},
\end{equation}
then Eq.~\eqref{Dnls2} reads as
\begin{align}\label{Dnls3}
i\frac{da_1}{d\tau}= &-\frac{\epsilon}{2}F(\tau)a_2-\epsilon\frac{\chi}{v}|a_1|^2a_1,\nonumber\\
i\frac{da_2}{d\tau}= &-\frac{\epsilon}{2}F^*(\tau)a_1-\epsilon\frac{\chi}{v}|a_2|^2a_2.
\end{align}
Let us look for a solution to Eq.~\eqref{Dnls3} as a power-series expansion in the smallness parameter $\epsilon$:
\begin{equation}\label{expand a}
a_j=a_j^{(0)}+\epsilon a_j^{(1)}+\epsilon^2 a_j^{(2)}+\cdots,~j=1,2,
\end{equation}
and introduce multiple time scales $T_0=\tau,~T_1=\epsilon\tau,~T_2=\epsilon^2\tau,\dots.$
By using the derivative rule
$\frac{d}{d\tau}=\frac{\partial}{\partial T_0}+\epsilon\frac{\partial}{\partial T_1}+
\epsilon^2\frac{\partial}{\partial T_2}+\cdots,$ and the fact
\begin{equation*}
|a_j|^2a_j=|a_j^{(0)}|^2a_j^{(0)}+\epsilon[2|a_j^{(0)}|^2a_j^{(1)}+(a_j^{(0)})^2a_j^{*(1)}]+\cdots,~j=1,2,
\end{equation*}
and substituting Eq.~\eqref{expand a} into Eq.~\eqref{Dnls3}, we obtain a hierarchy of equations for successive corrections to
$a_{1,2}$ at the various orders in $\epsilon$.
At the leading order $\epsilon^0$, we find
\begin{equation}\label{0order}
i\partial_{T_0} a_j^{(0)}=0, ~~~a_j^{(0)}=A_j(T_1,T_2,\cdots),~j=1,2,
\end{equation}
where the amplitudes $A_{1,2}(T_1,T_2,\cdots)$ are functions of the slow time variables
$T_1,T_2,\cdots$, but independent of the fast time variable $T_0$. At order $\epsilon^1$ one has
\begin{align}\label{1order}
i\partial_{T_1} a_1^{(0)}+i\partial_{T_0} a_1^{(1)}=&-\frac{F(\tau)}{2}a_2^{(0)}-\frac{\chi}{v}|a_1^{(0)}|^2a_1^{(0)}\nonumber\\
i\partial_{T_1} a_2^{(0)}+i\partial_{T_0} a_2^{(1)}=&-\frac{F^*(\tau)}{2}a_1^{(0)}-\frac{\chi}{v}|a_2^{(0)}|^2a_2^{(0)}.
\end{align}
Using Eq.~\eqref{0order}, we rewrite Eq.~\eqref{1order} as
\begin{align}\label{1orders}
i\partial_{T_1} A_1+i\partial_{T_0} a_1^{(1)}=&-\frac{F(\tau)}{2}A_2-\frac{\chi}{v}|A_1|^2A_1\nonumber\\
i\partial_{T_1} A_2+i\partial_{T_0} a_2^{(1)}=&-\frac{F^*(\tau)}{2}A_1-\frac{\chi}{v}|A_2|^2A_2.
\end{align}
To avoid the occurrence of secularly growing terms in the solutions $a_1^{(1)}$ and $a_2^{(1)}$, the solvability conditions
\begin{equation}\label{solvcondi1}
i\partial_{T_1} A_1=-\frac{\overline{F(\tau)}}{2}A_2-\frac{\chi}{v}|A_1|^2A_1,
~~i\partial_{T_1} A_2=-\frac{\overline{F^*(\tau)}}{2}A_1-\frac{\chi}{v}|A_2|^2A_2
\end{equation}
must be satisfied. Throughout our paper, the overline denotes the time average with respect to the fast time variable $T_0$.
The function $F(\tau)$ can be expanded using Bessel functions of the first kind $J_{\alpha}(x)$ of integer order $\alpha$,
\begin{equation*}
F(\tau)=\sum_{m,n}J_n(\frac{A}{\omega})J_m(\frac{Af}{2\omega})i^{m+n}e^{i(2m+n)\tau}e^{im\phi},
\end{equation*}
which gives
\begin{equation}\label{F aver}
\overline{F(\tau)}=\sum_{m}J_{-2m}(\frac{A}{\omega})J_m(\frac{Af}{2\omega})i^{-m}e^{im\phi}.
\end{equation}
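The average \eqref{F aver} can be cross-checked against a direct quadrature of $F(\tau)$ over one period. A minimal Python sketch (illustrative parameter values; SciPy's \texttt{scipy.special.jv} provides $J_\alpha$):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

omega, A, f, phi = 10.0, 24.0, 0.25, 0.7   # illustrative values
x, y = A/omega, A*f/(2*omega)

# Bessel-sum expression for the cycle average of F(tau)
Fbar_sum = sum(jv(-2*m, x)*jv(m, y)*1j**(-m)*np.exp(1j*m*phi)
               for m in range(-30, 31))

# Direct average over one period (uniform grid: spectral accuracy)
tau = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
F = np.exp(1j*x*np.cos(tau) + 1j*y*np.cos(2*tau + phi))
Fbar_num = F.mean()

print(abs(Fbar_sum - Fbar_num))            # agreement to numerical precision
```

Repeating the sum with $\phi=\pi/2$ yields a purely real $\overline{F(\tau)}$, consistent with property (i) proven below.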
It follows from Eqs.~\eqref{1orders}, \eqref{solvcondi1} and \eqref{F aver}, that the amplitudes $a_{1,2}^{(1)}$
at order $\epsilon$ are given by
\begin{align}\label{a11 a21}
a_1^{(1)}&=-i\int(-\frac{F(\tau)}{2}A_2+\frac{\overline{F(\tau)}}{2}A_2)d\tau=A_2\Phi(\tau),\nonumber\\
a_2^{(1)}&=-i\int(-\frac{F^*(\tau)}{2}A_1+\frac{\overline{F^*(\tau)}}{2}A_1)d\tau=-A_1\Phi^*(\tau),
\end{align}
where $\Phi(\tau)=\sum_{n\neq-2m}\frac{1}{2m+n}J_{n}(\frac{A}{\omega})J_m(\frac{Af}{2\omega})i^{m+n}e^{im\phi}e^{i(2m+n)\tau}$.
At the next order $\epsilon^2$, we have
\begin{align}\label{2order}
i\left(\partial_{T_2} a_1^{(0)}+\partial_{T_1} a_1^{(1)}+\partial_{T_0} a_1^{(2)}\right)
&=-\frac{F(\tau)}{2}a_2^{(1)}-\frac{\chi}{v}[2|a_1^{(0)}|^2a_1^{(1)}\nonumber\\
&+(a_1^{(0)})^2a_1^{*(1)}],\nonumber\\
i\left(\partial_{T_2} a_2^{(0)}+\partial_{T_1} a_2^{(1)}+\partial_{T_0} a_2^{(2)}\right)
&=-\frac{F^*(\tau)}{2}a_1^{(1)}-\frac{\chi}{v}[2|a_2^{(0)}|^2a_2^{(1)}\nonumber\\
&+(a_2^{(0)})^2a_2^{*(1)}].
\end{align}
In order to avoid the occurrence of secularly growing terms in the solutions $a_1^{(2)}$ and $a_2^{(2)}$,
the following solvability conditions must be satisfied:
\begin{align}\label{solvcondi3}
i\partial_{T_2} A_1=&-\overline{\frac{F(\tau)}{2}a_2^{(1)}}=A_1\overline{\frac{F(\tau)\Phi^*(\tau)}{2}}=\frac{\delta}{2}A_1,\nonumber\\
i\partial_{T_2} A_2=&-\overline{\frac{F^*(\tau)}{2}a_1^{(1)}}=-A_2\overline{\frac{F^*(\tau)\Phi(\tau)}{2}}=-\frac{\delta^*}{2}A_2,
\end{align}
where
\begin{align}\label{delta}
\delta =\overline{F(\tau)\Phi^*(\tau)}&=\sum_{m,M,l,M\neq0}\frac{(-1)^{M-l}}{M}i^{2M-m-l}e^{i(m-l)\phi}J_{M-2m}(\frac{A}{\omega})\nonumber\\
&\times J_{M-2l}(\frac{A}{\omega})J_{m}(\frac{Af}{2\omega})J_{l}(\frac{Af}{2\omega}).
\end{align}
Thus the evolution of the amplitudes $A_{1,2}$ up to the second-order long time scale is given by
\begin{equation}\label{AjSecond}
i\frac{dA_j}{d\tau}=i\epsilon\partial_{T_1} A_j+i\epsilon^2\partial_{T_2} A_j,~~j=1,2.
\end{equation}
Substituting equations \eqref{0order}, \eqref{solvcondi1} and \eqref{solvcondi3} into equation \eqref{AjSecond}, we obtain
\begin{align}\label{scale Aj equ}
i\frac{dA_1}{d\tau}&=-\epsilon\frac{\overline{F(\tau)}}{2}A_2-\epsilon\frac{\chi}{v}|A_1|^2A_1+\epsilon^2\frac{\delta}{2}A_1,
\nonumber\\
i\frac{dA_2}{d\tau}&=-\epsilon\frac{\overline{F^*(\tau)}}{2}A_1-\epsilon\frac{\chi}{v}|A_2|^2A_2-\epsilon^2\frac{\delta^*}{2}A_2.
\end{align}
Substituting \eqref{new var} into \eqref{scale Aj equ}, we have
\begin{align}\label{Ajdiff}
i\frac{dA_1}{dt}&=-v\frac{\overline{F(\tau)}}{2}A_2-\chi|A_1|^2A_1+\frac{v^2}{\omega}\frac{\delta}{2}A_1,\nonumber\\
i\frac{dA_2}{dt}&=-v\frac{\overline{F^*(\tau)}}{2}A_1-\chi|A_2|^2A_2-\frac{v^2}{\omega}\frac{\delta^*}{2}A_2.
\end{align}
Let $v'=v\overline{F(\tau)}$ and $\delta'=\frac{v^2}{\omega}\delta$, then Eq.~\eqref{Ajdiff} reads
\begin{align}\label{Ajdiff2}
i\frac{dA_1}{dt}&=\frac{\delta'}{2}A_1-\chi|A_1|^2A_1-\frac{v'}{2}A_2,\nonumber\\
i\frac{dA_2}{dt}&=-\frac{\delta'^*}{2}A_2-\chi|A_2|^2A_2-\frac{v'^{\ast}}{2}A_1.
\end{align}
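As a concrete illustration, the following Python sketch (truncation orders and parameter values are illustrative assumptions) evaluates $v'$ and $\delta'$ from the Bessel sums \eqref{F aver} and \eqref{delta} and integrates the effective equation \eqref{Ajdiff2} with a fixed-step RK4 scheme; since the effective flow conserves $|A_1|^2+|A_2|^2$, the norm serves as a sanity check:

```python
import numpy as np
from scipy.special import jv

omega, v, chi, f, phi, A = 10.0, 1.0, 0.4, 0.25, np.pi/4, 24.0
x, y = A/omega, A*f/(2*omega)

# Effective coupling v' = v * mean(F), Eq. (F aver)
Fbar = sum(jv(-2*m, x)*jv(m, y)*1j**(-m)*np.exp(1j*m*phi)
           for m in range(-25, 26))
vp = v*Fbar

# Second-order detuning delta, Eq. (delta), truncated Bessel sums
Nb, d = 10, 0.0
for M in range(-Nb, Nb + 1):
    if M == 0:
        continue
    for m in range(-Nb, Nb + 1):
        for l in range(-Nb, Nb + 1):
            d += ((-1)**(M - l)/M * 1j**(2*M - m - l) * np.exp(1j*(m - l)*phi)
                  * jv(M - 2*m, x)*jv(M - 2*l, x)*jv(m, y)*jv(l, y))
dp = (v**2/omega)*d.real     # delta is real (property (ii) below), so delta'* = delta'

def rhs(Avec):               # right-hand side of Eq. (Ajdiff2)
    A1, A2 = Avec
    return np.array([-1j*( dp/2*A1 - chi*abs(A1)**2*A1 - vp/2*A2),
                     -1j*(-dp/2*A2 - chi*abs(A2)**2*A2 - np.conj(vp)/2*A1)])

Avec, dt = np.array([1.0, 1.0], dtype=complex)/np.sqrt(2), 0.01
for _ in range(5000):        # RK4 up to t = 50
    k1 = rhs(Avec); k2 = rhs(Avec + dt/2*k1)
    k3 = rhs(Avec + dt/2*k2); k4 = rhs(Avec + dt*k3)
    Avec += dt/6*(k1 + 2*k2 + 2*k3 + k4)

norm = abs(Avec[0])**2 + abs(Avec[1])**2
print(vp, dp, norm)          # norm stays ~ 1
```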
Numerical investigations (not shown here) reveal that Eq.~\eqref{Ajdiff2}, which is accurate up to the time scale $\sim 1/\epsilon^2$, recovers all the quasienergy spectra known from the original model \eqref{Dnls1}. Thus, the effective equation \eqref{Ajdiff2} constitutes a solid analytical basis for understanding the dynamical features of the original system with its different time-space symmetries. It
is noteworthy that, in the effective equation \eqref{Ajdiff2}, a second-order static detuning (dc bias) $\delta'$ appears, somewhat surprisingly, in addition to the
coupling strength $v$ being replaced by $v'$. In the following, we rigorously prove
that the effective tunneling rate $v'$ and the second-order detuning $\delta'$ enjoy some very
interesting properties.
(i) $v'$ is a real number when $\phi=\pm\pi/2$ [$\phi\in [-\pi,\pi)$], and is a complex number otherwise.
We write down
\begin{equation}\label{F aver1}
\overline{F(\tau)}=\sum_{m}J_{-2m}(\frac{A}{\omega})J_m(\frac{Af}{2\omega})i^{-m}e^{im\phi},
\end{equation}
and its complex conjugate
\begin{equation}\label{F aver2}
\overline{F(\tau)}^*=\sum_{m}J_{-2m}(\frac{A}{\omega})J_m(\frac{Af}{2\omega})({-i})^{-m}e^{-im\phi}.
\end{equation}
Subtracting \eqref{F aver2} from \eqref{F aver1} yields
\begin{align}\label{F aver3}
\overline{F(\tau)}-\overline{F(\tau)}^*=&\sum_{m}J_{-2m}(\frac{A}{\omega})J_m(\frac{Af}{2\omega})[e^{im(\phi-\frac{\pi}{2})}-e^{-im(\phi-\frac{\pi}{2})}]
\nonumber\\
=&2i\sum_{m}J_{-2m}(\frac{A}{\omega})J_m(\frac{Af}{2\omega})\sin\big[m(\phi-\frac{\pi}{2})\big ].
\end{align}
Evidently, when $\phi=\pm\pi/2$ [$\phi\in [-\pi, \pi)$], $\overline{F(\tau)}=\overline{F(\tau)}^*$, so $v'=v\overline{F(\tau)}$ is a real number.
Otherwise, i.e., when $\phi\neq\pm\pi/2$ [$\phi\in [-\pi, \pi)$], $\overline{F(\tau)}$ and $v'=v\overline{F(\tau)}$ are, in general, complex numbers.
(ii) $\delta'$ is always a real number.
By using the expression \eqref{delta} of $\delta$, one can obtain that
\begin{align}\label{delta conj}
\delta^*=\sum_{m,l,M,M\neq0} &\frac{(-1)^{M-l}}{M}(-i)^{2M-m-l}e^{-i(m-l)\phi}J_{M-2m}(\frac{A}{\omega})
J_{M-2l}(\frac{A}{\omega})\nonumber\\
&\times J_{m}(\frac{Af}{2\omega})J_{l}(\frac{Af}{2\omega})\nonumber\\
=\sum_{m,l,M,M\neq0} &\frac{(-1)^{M-m}}{M}(-i)^{2M-m-l}e^{-i(l-m)\phi}J_{M-2m}(\frac{A}{\omega})
J_{M-2l}(\frac{A}{\omega})\nonumber\\
&\times J_{m}(\frac{Af}{2\omega})J_{l}(\frac{Af}{2\omega}).
\end{align}
Here we have made the exchange $m\leftrightarrow l$. Since $(-1)^{M-m}(-1)^{2M-m-l}=(-1)^{M-l}$, we get from Eq.~\eqref{delta conj}
that $\delta=\delta^*$, thus $\delta$ is a real number.
(iii) $\delta(-\phi)=-\delta(\phi)$, and $\delta$ must be zero when $\phi=0$ [i.e., when $S(t)$ possesses the antisymmetry $S(t)=-S(-t)$].
Separating the $M>0$ and $M<0$ parts of the expression \eqref{delta} for $\delta$ and making the transformation $M\rightarrow-M$ in the $M<0$ part,
we obtain
\begin{align}\label{delta other1}
\delta=\sum_{m,M,l,M>0} &\frac{(-1)^{M-l}}{M}i^{2M-m-l}e^{i(m-l)\phi}J_{M-2m}(\frac{A}{\omega})
J_{M-2l}(\frac{A}{\omega})\nonumber\\
&\times J_{m}(\frac{Af}{2\omega})J_{l}(\frac{Af}{2\omega}) \nonumber\\
-\sum_{m,M,l,M>0} &\frac{(-1)^{-M-l}}{M}i^{-2M-m-l}e^{i(m-l)\phi}J_{-M-2m}(\frac{A}{\omega})\nonumber\\
&\times J_{-M-2l}(\frac{A}{\omega})J_{m}(\frac{Af}{2\omega})J_{l}(\frac{Af}{2\omega}).
\end{align}
Changing the summation indices $m,l$ into $-m,-l$ in the second summation in \eqref{delta other1}, we have
\begin{align}\label{delta other2}
\delta=\sum_{m,M,l,M>0} &\frac{(-1)^{M-l}}{M}i^{2M-m-l}e^{i(m-l)\phi}J_{M-2m}(\frac{A}{\omega})
J_{M-2l}(\frac{A}{\omega}) \nonumber\\
&\times J_{m}(\frac{Af}{2\omega})J_{l}(\frac{Af}{2\omega}) \nonumber\\
-\sum_{m,M,l,M>0} &\frac{(-1)^{-M+l}}{M}i^{-2M+m+l}e^{-i(m-l)\phi}J_{-M+2m}(\frac{A}{\omega}) \nonumber\\
&\times J_{-M+2l}(\frac{A}{\omega})J_{-m}(\frac{Af}{2\omega})J_{-l}(\frac{Af}{2\omega}).
\end{align}
By using the Bessel-function relation $J_{-\alpha}=(-1)^{\alpha}J_{\alpha}$ (valid for integer $\alpha$) and the fact that $(-1)^{-m-l}=i^{-2m-2l}$,
we obtain from Eq.~\eqref{delta other2} the alternative form of $\delta$,
\begin{align}\label{delta other}
\delta=\sum_{m,M,l,M>0} &\frac{(-1)^{M-l}}{M}i^{2M-m-l}(e^{i(m-l)\phi}-e^{-i(m-l)\phi})J_{M-2m}(\frac{A}{\omega}) \nonumber\\
&\times J_{M-2l}(\frac{A}{\omega})J_{m}(\frac{Af}{2\omega})J_{l}(\frac{Af}{2\omega}).
\end{align}
When $\phi=0$, Eq.~\eqref{delta other} immediately gives $\delta=0$. The antisymmetry $\delta(-\phi)=-\delta(\phi)$ also follows directly
from Eq.~\eqref{delta other}, since $e^{i(m-l)\phi}-e^{-i(m-l)\phi}=2i\sin[(m-l)\phi]$ is odd in $\phi$.
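These symmetry properties can be checked numerically by evaluating a truncated version of Eq.~\eqref{delta other}. The following sketch is illustrative only: the truncation orders, the parameter values $A/\omega=2.4$ and $Af/2\omega=1.2$, and the quadrature helper `bessel_j` are our own choices, not part of the paper.

```python
import numpy as np

def bessel_j(n, x, n_theta=400):
    # Integer-order Bessel function via the integral representation
    # J_n(x) = (1/pi) * int_0^pi cos(n*theta - x*sin(theta)) d(theta)
    dt = np.pi / n_theta
    theta = (np.arange(n_theta) + 0.5) * dt          # midpoint rule
    return np.cos(n * theta - x * np.sin(theta)).sum() * dt / np.pi

def delta(phi, x1, x2, n=6, m_max=12):
    # Truncated sum of Eq. (delta other); x1 = A/omega, x2 = A*f/(2*omega),
    # with |m|, |l| <= n and 0 < M <= m_max (illustrative truncation orders)
    d = 0.0 + 0.0j
    for M in range(1, m_max + 1):
        for m in range(-n, n + 1):
            for l in range(-n, n + 1):
                coeff = (-1.0) ** (M - l) / M * (1j) ** (2 * M - m - l)
                osc = np.exp(1j * (m - l) * phi) - np.exp(-1j * (m - l) * phi)
                d += (coeff * osc
                      * bessel_j(M - 2 * m, x1) * bessel_j(M - 2 * l, x1)
                      * bessel_j(m, x2) * bessel_j(l, x2))
    return d
```

The truncated sum vanishes at $\phi=0$, is odd in $\phi$, and is real up to rounding, in agreement with properties (ii) and (iii).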
To corroborate these analytical results, we numerically calculate the two quantities $\overline{F(\tau)}$ and $\delta$, as shown in Fig.~\ref{fig7}. The dependence of $\overline{F(\tau)}$ on the driving parameter $A/\omega$ is illustrated in Figs.~\ref{fig7}(a) and (b) for $\phi=0$ and $\phi=\pi/2$, respectively, confirming that $\overline{F(\tau)}$ [and hence the effective coupling strength $v'=v\overline{F(\tau)}$] is a real number when $\phi=\pi/2$ and is generally a complex number when $\phi=0$. Note that, if the time-reversal symmetry ($\phi=\pi/2$) is preserved, $\overline{F(\tau)}$ (and consequently the effective coupling strength) vanishes identically at certain driving parameter values, $A/\omega=2.4, 5.4, 8.4$. In contrast, if the time-reversal symmetry is violated [see, e.g., $\phi=0$ in Fig.~\ref{fig7}(a)], $\overline{F(\tau)}$ does not vanish for any value of $A/\omega$. As shown in Fig.~\ref{fig7}(c), the effective detuning $\delta$ depends on $\phi$ with sign changes, as expected. We clearly observe that $\delta$ takes its maximum (minimum) values at $\phi=\pm \pi/2$ (where the harmonic mixing signal is symmetric but not antisymmetric) and vanishes at $\phi=0$ (where the harmonic mixing signal is antisymmetric).
The properties of $\overline{F(\tau)}$ and $\delta$ have the following physical implications. First, when $\phi=\pi/2$, such that the temporal symmetry $S(t+t_0)=S(-t+t_0)$ is preserved, $\overline{F(\tau)}$ vanishes at $A/\omega=2.4$, while $\delta$ takes a nonzero value. The eigenvalues of Eq.~\eqref{Ajdiff2} with an effectively undriven
(time-averaged) Hamiltonian
are the quasienergies of the original time-dependent
quantum system \eqref{Dnls1}.
In the linear limit, from \eqref{Ajdiff2} we obtain the eigenvalues as $E_{\pm}=\pm\frac{1}{2}\sqrt{\delta'^2+|v\overline{F(\tau)}|^2}$. When the driving parameter is chosen as $A/\omega=2.4$ such that $\overline{F(\tau)}$ vanishes, there will be a minimum of the level spacing,
which is fixed by the extremely small value $\Delta\varepsilon=\delta'$; this confirms that the quasienergies of the two normal Floquet states (those having linear counterparts) form an anticrossing rather than a crossing. Thus, the common
explanation of CDT in terms of a quasienergy degeneracy fails: for the time-reversal-symmetric system, CDT originates from the vanishing of the effective coupling strength. Second, when $\phi\neq\pi/2$, such that the time-reversal symmetry $S(t+t_0)=S(-t+t_0)$ is broken, $\overline{F(\tau)}$ (hence the effective coupling strength) does not vanish for any value of $A/\omega$. As illustrated in Fig.~\ref{fig2}(b), the energy gap $\Delta\varepsilon$ (the minimum level spacing between the two normal Floquet states) is roughly independent of the nonlinearity, and its value is well approximated by $\Delta\varepsilon=\sqrt{\delta'^2+|v\overline{F(\tau)}|^2}$ at $A/\omega=2.4$, where $|\overline{F(\tau)}|$ takes its minimum (nonzero) value. This approximation is made for $\delta'\neq 0$ (which slightly unbalances the two normal Floquet states), neglecting the negligibly small nonlinear energy offset, i.e., the term $\chi(|A_1|^2-|A_2|^2)$. Note that here the minimum value of $|\overline{F(\tau)}|$ is nonzero and much larger than the absolute value of the second-order bias $|\delta'|$. Thus, the broken time-reversal symmetry leads to a relatively large energy gap $\Delta\varepsilon$ between the two normal Floquet states having linear counterparts. In this case, level bifurcation (resulting in the emergence of new localized nonlinear Floquet states) and suppression of tunneling occur only for nonlinearity beyond a certain threshold value. Third, changing the phase shift can create lowest Floquet states with opposite population imbalances. This can be reasoned as follows. When $\phi\neq0$ [$S(t)\neq-S(-t)$], a nonzero second-order bias $\delta'=\frac{v^2}{\omega}\delta$ is generated by the broken time-reversal antisymmetry. When $\delta'\neq0$, under the inversion $\delta'\rightarrow-\delta'$, we conclude that for any eigenstate $(A_1', A_2')^T$ of \eqref{Ajdiff2},
there is a partner eigenstate $(A_2', A_1')^T$ of \eqref{Ajdiff2} with $\delta'$ replaced by $-\delta'$ at the same energy.
This implies that the inversion $\delta'\rightarrow-\delta'$ does not change the energy spectrum, but flips the sign of the population imbalance
corresponding to each energy. Since $\delta(-\phi)=-\delta(\phi)$, the population imbalance can therefore be inverted by changing the phase shift $\phi$.
\begin{figure}
\caption{(color online) $\overline{F(\tau)}$ as a function of the driving parameter $A/\omega$ for (a) $\phi=0$ and (b) $\phi=\pi/2$, and (c) the effective detuning $\delta$ as a function of the phase shift $\phi$.}
\label{fig7}
\end{figure}
\section{Conclusion}
In summary, the symmetry and underlying physics of the nonlinear two-state system
driven by a harmonic mixing field have
been studied analytically and numerically. A multiple-scale
asymptotic analysis is used to understand the essential physics with different time-space symmetries.
Using the effective description,
valid up to second order in $1/\omega$, we have clarified the origin of CDT in the time-reversal-symmetric
two-state system and explained why the broken time-reversal antisymmetry can induce an inversion of the population imbalance between the two modes. These analytical results establish an intimate connection between
symmetry breaking and the dynamical properties of the nonlinear two-mode system. Although various aspects of the nonlinear two-mode system have been explored previously,
analytical results on the connection between symmetry and the dynamical features of the system have not been addressed before, and the present paper
fills this gap in the literature.
\acknowledgments
The work was supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY21A050002), the National
Natural Science Foundation of China (Grant No. 11975110), the Scientific and Technological Research Fund of
Jiangxi Provincial Education Department (Grant No. GJJ211026), the Zhejiang Sci-Tech University Scientific Research
Start-up Fund (Grant No. 20062318-Y), the Scientific Research Foundation of Hunan Provincial Education Department (Grant No. 21B0063), and the Hunan Provincial Natural Science Foundation of China (Grant No. 2021JJ30435).
Xianchao Le and Zhao-Yun Zeng contributed equally.
\end{document} |
\begin{document}
\doublespacing
\parindent 0cm
\makeatletter
\def\@seccntformat#1{\csname the#1\endcsname. }
\def\section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{1.3ex plus .2ex}{\center\large\sc}}
\def\subsection{\@startsection{subsection}{2}{\z@}{3.25ex plus 1ex minus .2ex}{-1em}{\normalsize\bf}}
\def\subsubsection{\@startsection{subsubsection}{3}{\z@}{3.25ex plus 1ex minus .2ex}{-1em}{\normalsize\it}}
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{definition}{Definition}
\newtheorem{lemma}{Lemma}
\newtheorem{assumption}{Assumption}
\newtheorem{cor}{Corollary}
\theoremstyle{definition}
\newtheorem{example}{Example}
\newtheorem{remark}{Remark}
\title{\textsc{A nonparametric test for a constant correlation matrix}}
\begin{abstract}
We propose a nonparametric procedure to test for changes in correlation matrices at an unknown point in time.
The new test requires constant expectations and variances, but only mild assumptions on the serial dependence structure and has considerable power in finite samples.
We derive the asymptotic distribution under the null hypothesis of no change as well as local power results and apply the test to stock returns.
\end{abstract}
\textbf{Keywords:} Fluctuation Test, Functional Delta Method, Gaussian Process, Local Power.
\textbf{JEL codes:} C12, C14, C32, C58
\section{Introduction} \label{sec:introduction}
The Bravais-Pearson correlation coefficient is arguably the most widely used measure of dependence between random variables.
For financial time series, correlations among returns are for instance widely used in risk management.
However, there is compelling empirical evidence that
the correlation structure of financial returns cannot be assumed to be constant over time,
see e.g. \citet{krishan:2009}.
In particular, in periods of financial crisis, correlations often increase, a phenomenon which is sometimes referred to as ``Diversification Meltdown''.
As most often potential change points are not known a priori, practitioners are interested in testing correlation constancy in financial time series at an unknown point in time.
\citet{wied:2012} propose a nonparametric retrospective kernel-based correlation constancy test (referred to as KB-test in what follows)
and \citet{wiedgaleano:2012} propose a sequential monitoring procedure. These papers complement other approaches for related measures of dependence, e.g.\ for the whole covariance matrix (\citealp{aue:2009b}, Galeano and Pe\~na, 2007\nocite{galeano:2007}),
the copula (\citealp{na:2012}, \citealp{kramervk:2010}), Spearman's rho (\citealp{gaissler:2010}), Kendall's tau (\citealp{dehling:2012}), autocovariances in a linear process (\citealp{lee:2003}) and covariance operators in the context of functional data analysis (\citealp{fremdt:2012}).
In what follows, we stick to correlation.\ \citet{wied:2012} show that a correlation test can be more powerful than a covariance test when there is more than one change point in the covariance structure.
However, the KB-test only considers bivariate correlations, whereas in portfolio management, where we typically have more than two assets, constancy of the whole correlation matrix is of interest. In this context, it would be possible to perform several pairwise tests and to use a level correction like Bonferroni-Holm. However, in this paper, we consider the correlation matrix. In a simulation study, we see that the matrix-based test outperforms the Bonferroni-Holm approach in some (although not in all) situations. We extend the methodology from the KB-test to higher dimensions, but on the other hand keep its nonparametric and model-free approach.
We consider the $\frac{p(p-1)}{2}$-vector of
successively calculated pairwise correlation coefficients and derive its limiting distribution with the functional delta method approach and some proof ideas from \citet{wied:2012}. We use a bootstrap approximation for a normalizing constant in order to approximate the asymptotic limit distribution of the test statistic.
This may be an alternative for the bivariate case as well.
In an application of this test to Value-at-Risk forecasts (\citealp{berens:2013}), it is seen that the proposed test might indeed be useful in practical situations. In particular, it might be a promising approach to combine the well-known CCC (constant conditional correlation) and DCC (dynamic conditional correlation) models with a test for structural breaks in correlations.
The paper is organized as follows: In Section 2, we present the test statistic and derive the asymptotic null distribution, Section 3 deals with local power, Section 4 presents simulation evidence, Section 5 provides an empirical illustration and Section 6 a conclusion. All proofs are deferred to the appendix.
\section{The Fluctuation Test} \label{sec:retrotest}
Let ${\bf X}_t = \left( X_{1,t},X_{2,t},\ldots,X_{p,t}\right) $, $t\in \mathbb{Z}$, be a sequence of $p$-variate random vectors on a probability space $(\Omega,\mathfrak{A},\mathsf{P})$ with finite $4$-th moments and
(unconditional) correlation matrix $R_t = (\rho^{ij}_t)_{1 \leq i,j \leq p}$, where
\begin{equation*}
\rho^{ij}_{t}=\frac{\mathsf{Cov}(X_{i,t},X_{j,t})}{\sqrt{\mathsf{Var}(X_{i,t})\mathsf{
Var}(X_{j,t})}}.
\end{equation*}
Furthermore, we call $||\cdot||_r$ the $L_r$-norm, $r > 0,$ and $D(I,\mathbb{R}^d), d \in \mathbb{N},$ the space of $d$-dimensional càdlàg functions on an interval $I \subseteq [0,1]$ (compare \citealp{billingsley:1968} and related literature for details).
We write $A \sim (m,n)$ for a matrix $A$ with $m$ rows and $n$ columns. Throughout the paper, we denote by $\rightarrow_d$ and $\rightarrow_p$ convergence in distribution and probability, respectively, of random variables or vectors. By $\Rightarrow_d$, we denote convergence in distribution of stochastic processes on a function space, which will be specified depending upon the situation, and with respect to the corresponding supremum norm.
For $T \in \mathbb{N}$, the hypothesis pair is given by $H_0: R_1 = \ldots = R_T$ vs. $H_1: \neg\ H_0$. Under $H_0$, we denote $\rho^{ij}_{t} =: \rho^{ij}$.
The ``preliminary version'' of the test statistic is given by
\begin{equation*}
Q_T := \max_{2 \leq k \leq T} \sum_{1 \leq i < j \leq p} \frac{k}{\sqrt{T}} \left| \hat \rho^{ij}_k - \hat \rho^{ij}_T \right| =:
\max_{2 \leq k \leq T} \frac{k}{\sqrt{T}} \left| \left| P_{k,T} \right| \right|_1,
\end{equation*}
where
\begin{align*}
\hat \rho^{ij}_k = \frac{\sum_{t=1}^k (X_{i,t} - \bar X_{i,k})(X_{j,t} - \bar X_{j,k})}{\sqrt{\sum_{t=1}^k (X_{i,t} - \bar X_{i,k})^2}\sqrt{\sum_{t=1}^k
(X_{j,t} - \bar X_{j,k})^2}}, \label{rhohat}
\end{align*}
$\bar X_{i,k} = \frac{1}{k} \sum_{t=1}^k X_{i,t},\ \bar X_{j,k} = \frac{1}{k} \sum_{t=1}^k X_{j,t}$ and $P_{k,T} = \left(\hat \rho^{ij}_k - \hat \rho^{ij}_T \right)_{1 \leq i < j \leq p}
\in \mathbb{R}^{\frac{p(p-1)}{2}}$.\footnote{Here and analogously in the following, the expression $1 \leq i < j \leq p$ for a vector means that the first entry or entries consist of the expressions for $i=1$, followed by the one(s) for $i=2$ and so on.}
The value $\hat \rho^{ij}_k$ is the empirical pairwise correlation coefficient of the random variables $X_i$ and $X_j$, calculated from the first $k$ observations. Thus, the test statistic compares the pairwise successively calculated correlation coefficients with the corresponding correlation coefficients calculated from the whole sample. The null hypothesis is rejected whenever $Q_T$ becomes too large, i.e., whenever at least one of these differences becomes too large over time or, equivalently, whenever the successively calculated correlation coefficients of at least one pair fluctuate too much over time. The weighting factor $\frac{k}{\sqrt{T}}$ compensates for the fact that the correlations are typically estimated more precisely in the middle or at the end of the sample than at the beginning. We will see later on, in the context of the bootstrap approximation, that it might be more convenient to use a slightly modified version of $Q_T$.
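The statistic $Q_T$ can be computed directly from its definition by evaluating the successively calculated correlation matrices. The following sketch is our own transcription (the function name and the use of `numpy.corrcoef` are our choices, not part of the paper):

```python
import numpy as np

def q_statistic(X):
    # Q_T = max_{2<=k<=T} (k/sqrt(T)) * || (rho^ij_k - rho^ij_T)_{i<j} ||_1
    # for a (T, p) data matrix X.
    T, p = X.shape
    iu = np.triu_indices(p, k=1)                      # pairs with i < j
    rho_T = np.corrcoef(X, rowvar=False)[iu]          # full-sample correlations
    q = 0.0
    for k in range(2, T + 1):
        rho_k = np.corrcoef(X[:k], rowvar=False)[iu]  # correlations from first k obs.
        q = max(q, k / np.sqrt(T) * np.abs(rho_k - rho_T).sum())
    return q
```

If the pairwise correlations are constant along the sample (e.g., exactly collinear columns), every difference vanishes and $Q_T=0$.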
For deriving the limiting null distribution and local power results, some additional assumptions
are necessary. The following assumptions concern moments and serial dependencies of the random variables and correspond to (A1), (A2) and (A3)
in \citet{wied:2012}, adjusted for the multivariate case.
\begin{assumption}\label{limit}
For
\begin{equation*}
U_t := \begin{pmatrix} X^2_{1,t} & - & \mathsf{E}(X^2_{1,t}) \\
\vdots & & \vdots \\
X^2_{p,t} & - & \mathsf{E}(X^2_{p,t}) \\
X_{1,t} & - & \mathsf{E}(X_{1,t}) \\
\vdots & & \vdots \\
X_{p,t} & - & \mathsf{E}(X_{p,t}) \\
X_{1,t} X_{2,t} & - & \mathsf{E}(X_{1,t} X_{2,t}) \\
X_{1,t} X_{3,t} & - & \mathsf{E}(X_{1,t} X_{3,t}) \\
\vdots & & \vdots \\
X_{p-1,t} X_{p,t} & - & \mathsf{E}(X_{p-1,t} X_{p,t}) \end{pmatrix}
\end{equation*}
and $S_j:= \sum_{t=1}^j U_t$, we have
\begin{align*}
\lim_{m \rightarrow \infty} \mathsf{E}\left(\frac{1}{m} S_m
S_m'\right) =: D_1,
\end{align*}
where $D_1$ is a finite and positive definite matrix with $2p+\frac{p(p-1)}{2}$ rows and $2p+\frac{p(p-1)}{2}$ columns.
\end{assumption}
\begin{assumption}\label{moments}
For some $r > 2$, the $r$-th absolute moments of the components of $U_t$ are
uniformly bounded; that is, $\sup_{t \in \mathbb{Z}} \mathsf{E} ||U_t||_{r} < \infty$.
\end{assumption}
\begin{assumption}\label{ned}
For $r$ from Assumption \ref{moments}, the vector $(X_{1,t},\ldots,X_{p,t})$ is $L_2$-NED (near-epoch dependent) with
size $-\frac{r-1}{r-2}$ and constants $(c_{t}),t \in \mathbb{Z}$, on a sequence
$(V_{t}),t \in \mathbb{Z}$, which is $\alpha$-mixing of size $\phi^*
:= -\frac{r}{r-2}$, i.e.,
\begin{align*}
\left|\left|(X_{1,t},\ldots,X_{p,t}) - \mathsf{E}\left((X_{1,t},\ldots,X_{p,t}) | \sigma(V_{t-l},\ldots,V_{t+l}) \right)\right|\right|_2 \leq c_t v_l
\end{align*}
with $\lim_{l \rightarrow \infty} v_l = 0$. The constants $(c_{t}),t \in \mathbb{Z}$ fulfill $c_{t} \leq 2 ||U_t||_2$ with $U_t$ from Assumption \ref{limit}.
\end{assumption}
Assumption \ref{limit} is a regularity condition which rules out trending random variables. As we have financial returns in mind, this is not an issue.
Assumption \ref{moments} is more critical because it requires finite $(4 + \gamma)$-th moments of ${\bf X}_t$ for some arbitrary $\gamma > 0$ (note that the components of ${\bf X}_t$ enter $U_t$ quadratically).
In fact, there is evidence that even variances might not exist for some financial series, cf. \citet{mandelbrot:1963}.
However, simulation evidence below shows that the test still works under the $t_3$-distribution.
Assumption \ref{ned} is a very general serial dependence assumption which holds in relevant econometric models, e.g.\ in GARCH-models under certain conditions (cf. \citealp{carrascochen:2002}).
It guarantees that the vector
\begin{equation*}
(X^2_{1,t},\ldots,X^2_{p,t},X_{1,t},\ldots,X_{p,t},X_{1,t} X_{2,t},X_{1,t} X_{3,t},\ldots,X_{p-1,t} X_{p,t})
\end{equation*}
is $L_2$-NED (near-epoch dependent) with size $-\frac{1}{2}$, cf. \citet{davidson:1994}, p. 273. This allows for applying a functional central limit theorem later on.
Next, we impose a stationarity condition which is in line with \citet{aue:2009b}.
\begin{assumption}\label{stationarity}
$(X_{1,t},\ldots,X_{p,t}), t \in \mathbb{Z},$ has constant expectations and variances; that is, $\mathsf{E}(X_{i,t}), i = 1,\ldots,p,$ and
$0 < \mathsf{E}(X_{i,t}^2), 1 \leq i \leq p,$ do not depend on $t$.
\end{assumption}
This condition might be
slightly relaxed to allow for some fluctuations in the first and second
moments (see (A4) and (A5) in \citealp{wied:2012}), but for ease of exposition
and because the procedure would remain exactly the same, we stick to this assumption. Note that most financial time series processes as for example GARCH are (unconditionally) stationary under certain conditions.
Clearly, the original test problem is invariant under heteroscedasticity. But we believe that it is at least extremely difficult if not impossible to design a fluctuation test for correlations in which
arbitrary variance changes are allowed under the null hypothesis.
Our main result is:
\begin{theorem}\label{theorem1}
Under $H_0$ and Assumptions \ref{limit},\ref{moments},\ref{ned},\ref{stationarity}, for $T \rightarrow \infty$,
\begin{equation*}
\frac{\tau(s)}{\sqrt{T}} (\hat \rho^{ij}_{\tau(s)} - \hat \rho^{ij}_T)_{1 \leq i < j \leq p} \Rightarrow_d E^{1/2} B^{\frac{p(p-1)}{2}}(s),
\end{equation*}
on $D\left([0,1],\mathbb{R}^{\frac{p(p-1)}{2}}\right)$, where $\tau(s) = \lfloor 2 + s(T-2) \rfloor$, $$E = \lim_{T \rightarrow \infty} \mathsf{Cov} \left( \sqrt{T} \left(\hat \rho^{ij}_T \right)_{1 \leq i < j \leq p} \right) \sim \left(\frac{p(p-1)}{2} \times \frac{p(p-1)}{2}\right)$$ and
$B^{\frac{p(p-1)}{2}}(s)$ is a vector of $\frac{p(p-1)}{2}$ independent standard Brownian Bridges.
\end{theorem}
The proof of the theorem can be found in the appendix. It relies on the application of an adapted functional delta method.
We want to stress that simply applying a functional central limit theorem is not enough here due to the cumbersome, non-linear structure of the correlation coefficient.
From the previous theorem, we directly obtain with the Continuous Mapping Theorem
\begin{cor}\label{corollary1}
Under $H_0$ and Assumptions \ref{limit},\ref{moments},\ref{ned},\ref{stationarity}, for $T \rightarrow \infty$,
\begin{equation*}
Q_T \rightarrow_d \sup_{0 \leq s \leq 1} \left|\left| E^{1/2} B^{\frac{p(p-1)}{2}}(s) \right|\right|_1.
\end{equation*}
\end{cor}
In order to obtain critical values, we need information about $E$.
There are several possibilities for estimating $E$; one possibility is the estimator $\hat E$, given by a bootstrap approximation. For this estimation, one can for example use the moving block bootstrap from \citet{kuensch:1989} and \citet{liusingh:1992},
cf.\ also \citet{lahiri:1999}, \citet{concalves:2002}, \citet{concalves:2003}, \citet{calhoun:2013}, \citet{radulovic:2012} and \citet{sharipov:2012}.
Defining a block length $l_T$, we divide the time series into $T-l_T+1$ overlapping blocks $B_i, i = 1,\ldots,T-l_T+1,$ of length $l_T$ such that
$B_1 = ({\bf X}_1,\ldots,{\bf X}_{l_T})$, $B_2 = ({\bf X}_2,\ldots,{\bf X}_{l_T+1}),\ldots$.
Then, in each bootstrap repetition $b, b=1,\ldots,B$ for some large $B$, we sample $\left\lfloor\frac{T}{l_T}\right\rfloor$ of the $T-l_T+1$ blocks with replacement and concatenate them. We thus obtain $B$ $p$-dimensional time series of length $\left\lfloor\frac{T}{l_T}\right\rfloor \cdot l_T$. For each bootstrapped time series we calculate the vector $v_b := \sqrt{T} \left(\hat \rho^{ij}_{b,T} \right)_{1 \leq i < j \leq p}$.
The estimator $\hat E$ is then the empirical covariance matrix of these $B$ vectors, i.e., $$\hat E = \frac{1}{B} \sum_{b=1}^B (v_b - \bar v)(v_b - \bar v)'$$ with $\bar v = \frac{1}{B} \sum_{b=1}^B v_b$. The bootstrap estimator ``replaces'' the rather complicated kernel estimator $\tilde E$ from the KB-test (Appendix A.1 in \citealp{wied:2012}).
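The moving-block-bootstrap estimator $\hat E$ can be sketched as follows. This is an illustration under our own naming and interface choices (`bootstrap_E` is hypothetical), not the authors' implementation:

```python
import numpy as np

def bootstrap_E(X, l, B, rng):
    # Moving-block-bootstrap estimate of E for a (T, p) data matrix X:
    # draw floor(T/l) of the T-l+1 overlapping blocks of length l with
    # replacement, concatenate them, compute v_b = sqrt(T)*(rho^ij_{b,T})_{i<j},
    # and return the empirical covariance matrix of the B vectors v_b.
    T, p = X.shape
    iu = np.triu_indices(p, k=1)
    n_blocks = T - l + 1                  # overlapping blocks B_1, ..., B_{T-l+1}
    k = T // l                            # blocks per bootstrap series
    V = np.empty((B, p * (p - 1) // 2))
    for b in range(B):
        starts = rng.integers(0, n_blocks, size=k)
        Xb = np.concatenate([X[s:s + l] for s in starts])
        V[b] = np.sqrt(T) * np.corrcoef(Xb, rowvar=False)[iu]
    return np.cov(V, rowvar=False)
```

By construction, the resulting $\hat E$ is a symmetric, positive semidefinite $\frac{p(p-1)}{2} \times \frac{p(p-1)}{2}$ matrix.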
The advantage of the bootstrap estimator is the fact that it can be derived easily even in higher dimensions.
It would be possible to obtain a kernel estimator also in higher $(> 2)$ dimensions. However, its structure would then depend on the structure of derivatives of certain non-linear, higher-dimensional functions which transform a high-dimensional vector of moments to the vector of correlation coefficients. (More information is given in the proof of Theorem \ref{theorem1}). The arguably complicated transformation makes the calculation of a kernel estimator very cumbersome and much harder to implement.
Moreover, a kernel estimator depends on the choice of the bandwidth and the kernel. The disadvantage of the bootstrap is that it is computationally more intensive. In addition, the choice of the block length is required.
The matrix $\hat E$ is an estimator for $\mathsf{Cov^*}(v_b)$ which is the (theoretical) covariance matrix of $v_b$ with respect to the bootstrap sample conditionally on the original data ${\bf X}_1,\ldots,{\bf X}_T$. In order to validate the bootstrap, the key point is the proof that, for $T \rightarrow \infty$, $\mathsf{Cov^*}(v_b)$ converges in probability to $E$. In order to obtain such an asymptotic result, we need an assumption on the block length.
\begin{assumption}\label{bootstrap}
For $T \rightarrow \infty$, $l_T \rightarrow \infty$ and $l_T \sim T^{\alpha}$ for $\alpha \in (0,1)$.
\end{assumption}
The assumption is similar to the one for the moving block bootstrap in Theorem 1 (Condition 4) of \citet{calhoun:2013}. It guarantees that the block length becomes large but not too large compared to $T$.
Moreover, we need an assumption which ensures that the bootstrap correlation coefficients are sufficiently close to the correlation coefficients obtained from the data.
\begin{assumption}\label{uniformintboot}
For $1 \leq i < j \leq p$, some $\delta > 0$ and $b=1,\ldots,B$, the random variable
$$C_T := \mathsf{E}\left(\left|\sqrt{T} (\hat \rho^{ij}_{b,T} - \hat \rho^{ij}_T)\right|^{2+\delta} \left| {\bf X}_1,\ldots,{\bf X}_T \right. \right)$$ is stochastically bounded ($C_T = O_{\mathsf{P}}(1)$).
\end{assumption}
The next theorem gives the theoretical validation for the bootstrap.
\begin{theorem}\label{theorem2}
Under $H_0$ and Assumptions \ref{limit},\ref{moments},\ref{ned},\ref{stationarity},\ref{bootstrap},\ref{uniformintboot}, for $T \rightarrow \infty$, $$\mathsf{Cov^*}(\sqrt{T} \left(\hat \rho^{ij}_{b,T} \right)_{1 \leq i < j \leq p}) \rightarrow_p E.$$
\end{theorem}
Given the theoretical results, it is reasonable to consider the ``test statistic''
\begin{equation*}
A_T := \max_{2 \leq k \leq T} \frac{k}{\sqrt{T}} \left| \left| \hat E^{-1/2} P_{k,T} \right| \right|_1
\end{equation*}
in applications. Then, the null hypothesis is rejected whenever $A_T$ is larger than the $(1-\alpha)$-quantile of the random variable $A := \sup_{0 \leq s \leq 1} \left|\left| B^{\frac{p(p-1)}{2}}(s) \right|\right|_1$. The quantiles of $A$, which serve as an approximation for the quantiles of the finite sample distribution, can easily be obtained by Monte Carlo simulations, i.e., by approximating the paths of the Brownian Bridge on fine grids.
There might be situations in practice in which $\hat E^{1/2}$ is not positive definite so that $\hat E^{-1/2}$ would not be defined. However, due to Assumption \ref{limit}, at least for larger $T$ and $B$, we can virtually assume positive definiteness.\footnote{To circumvent the problem of impossible or numerically unstable inversion of $\hat E^{1/2}$, one could calculate the statistic $Q_T$ and simulate critical values from the limit random variable in Corollary \ref{corollary1} in which $E$ is replaced by $\hat E$.}
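The quantiles of $A$ can be simulated exactly as described, by approximating the Brownian bridge paths on a fine grid. The sketch below uses our own illustrative choices for the function name, grid size, and number of paths; for $\frac{p(p-1)}{2}=6$ its $0.95$-quantile should land near the simulated critical value of $4.47$ reported in Section \ref{sec:simulations}:

```python
import numpy as np

def quantile_of_A(d, level=0.95, n_paths=3000, n_grid=1000, seed=42):
    # Monte Carlo approximation of the `level`-quantile of
    # A = sup_{0<=s<=1} || B^d(s) ||_1 for d independent standard
    # Brownian bridges, simulated on a grid of n_grid points.
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_grid
    s = np.linspace(dt, 1.0, n_grid)
    sups = np.empty(n_paths)
    for i in range(n_paths):
        W = np.cumsum(rng.standard_normal((d, n_grid)) * np.sqrt(dt), axis=1)
        bridge = W - s * W[:, -1:]            # B(s) = W(s) - s * W(1)
        sups[i] = np.abs(bridge).sum(axis=0).max()
    return np.quantile(sups, level)
```

For $d=1$ the same routine recovers the familiar Kolmogorov-type critical value of $\sup_s |B(s)|$, roughly $1.36$ at the $5\%$ level.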
\section{Local Power} \label{sec:localpower}
Econometricians are often not only interested in the behavior of a test under the null hypothesis, but would like to get information about the behavior under some local alternatives.
For simplicity, we consider a setting in which the expectations and variances remain constant such that a covariance change is equal to a change in correlations.
To be more precise, under $H_1$, in at least one of the components of ${\bf X}_t$, there is a correlation change of order $\frac{M}{\sqrt{T}}$ ($M > 0$ arbitrary) with constant expectations and variances and
\begin{equation*}
(\mathsf{E}(X_{i,t} X_{j,t}))_{1 \leq i < j \leq p} = v + \frac{M}{\sqrt{T}} g\left( \frac{t}{T} \right).
\end{equation*}
Here, $v \in \mathbb{R}^{\frac{p(p-1)}{2}}$ is a constant vector and $g(s) = (g_1(s),\ldots,g_{p(p-1)/2}(s))$ is a bounded $\frac{p(p-1)}{2}$-dimensional function that is not constant and that can be approximated by step functions such that the function
\begin{equation*}
\int_0^s g(u) du - s \int_0^1 g(u) du
\end{equation*}
is different from $0 \in \mathbb{R}^{\frac{p(p-1)}{2}}$ for at least one $s \in [0,1]$. The integral is defined component by component.
Note that we now deal with triangular arrays because the distribution of the ${\bf X}_t$ changes with $T$, but, for simplicity, we do not change our notation.
A typical example for the function $g$ would be a step function with a jump from $0$ to $g_0$ in a given point $z_0$ in one of the components. This implies that the correlation of one pair jumps at time $\lfloor T \cdot z_0 \rfloor$. A step function with several jumps would correspond to multiple change points. With a continuous function $g$, one would obtain continuously changing correlations.
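For the single-jump example $g(u) = g_0 \cdot 1\{u > z_0\}$, the bridge-type expression $\int_0^s g(u)du - s \int_0^1 g(u)du$ entering $C(s)$ has a simple closed form: a tent-shaped function vanishing at $s=0$ and $s=1$ with extremum $-g_0 z_0(1-z_0)$ at the change point. A minimal sketch (the helper name `bridge_drift` is ours):

```python
import numpy as np

def bridge_drift(s, z0, g0):
    # int_0^s g(u) du - s * int_0^1 g(u) du for g(u) = g0 * 1{u > z0}:
    # equals -s*g0*(1-z0) for s <= z0 and g0*z0*(s-1) for s >= z0,
    # i.e. a tent vanishing at s = 0 and s = 1 with extremum at s = z0.
    s = np.asarray(s, dtype=float)
    return g0 * np.maximum(0.0, s - z0) - s * g0 * (1.0 - z0)
```

Larger $|g_0|$, or a change point $z_0$ closer to the middle of the sample, thus produces a larger drift and hence higher local power.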
The following Theorem \ref{theorem3} is an analogue to Theorem \ref{theorem1} and yields the distribution under the sequence of local alternatives.
\begin{theorem}\label{theorem3}
Under the sequence of local alternatives and Assumptions \ref{limit},\ref{moments},\ref{ned},\ref{stationarity}, for $T \rightarrow \infty$,
\begin{equation*}
\frac{\tau(s)}{\sqrt{T}} (\hat \rho^{ij}_{\tau(s)} - \hat \rho^{ij}_T)_{1 \leq i < j \leq p} \Rightarrow_d E^{1/2} B^{\frac{p(p-1)}{2}}(s) + E^{1/2} C(s),
\end{equation*}
on $D\left([0,1],\mathbb{R}^{\frac{p(p-1)}{2}}\right)$, where
\begin{equation*}
C(s) = M \begin{pmatrix} \frac{1}{\sqrt{\mathsf{Var}(X_{1})\mathsf{Var}(X_{2})}} \left( \int_0^s g_1(u)du - s \int_0^1 g_1(u) du \right) \\ \frac{1}{\sqrt{\mathsf{Var}(X_{1})\mathsf{Var}(X_{3})}} \left( \int_0^s g_2(u)du - s \int_0^1 g_2(u) du \right) \\ \vdots \\ \frac{1}{\sqrt{\mathsf{Var}(X_{p-1})\mathsf{Var}(X_{p})}} \left( \int_0^s g_{\frac{p(p-1)}{2}}(u)du - s \int_0^1 g_{\frac{p(p-1)}{2}}(u)du \right) \end{pmatrix}
\end{equation*}
is a deterministic function that depends on the specific form of the local alternative under consideration, characterized by $g$.
\end{theorem}
In the limit process of Theorem \ref{theorem3}, the Brownian bridge is shifted by the
deterministic function $C(s)$, which depends on the specific local alternative under consideration.
The main characteristic of the function $C(s)$ is the factor $M$ times the expression $\int_0^s g_i(u)du - s \int_0^1 g_i(u)du$ in each component $i=1,\ldots, \frac{p(p-1)}{2}$; this bridge-type structure mirrors that of a Brownian Bridge.
The previous theorem directly yields with the Continuous Mapping Theorem
\begin{cor}
Under the sequence of local alternatives and Assumptions \ref{limit},\ref{moments},\ref{ned},\ref{stationarity}, for $T \rightarrow \infty$,
\begin{equation*}
Q_T \rightarrow_d \sup_{0 \leq s \leq 1} \left| \left| E^{1/2} B^{\frac{p(p-1)}{2}}(s) + E^{1/2} C(s) \right| \right|_1.
\end{equation*}
\end{cor}
Also under local alternatives we want to estimate $E$ with the bootstrap. It turns out that the estimator presented in Section \ref{sec:retrotest} has the same limit distribution as under the null hypothesis.
Thus, the bootstrap approach is valid both under the null and under the alternative.
\begin{theorem}\label{theorem4}
Under the sequence of local alternatives and Assumptions \ref{limit},\ref{moments},\ref{ned},\ref{stationarity},\ref{bootstrap},\ref{uniformintboot}, for $T \rightarrow \infty$, $$\mathsf{Cov^*}(\sqrt{T} \left(\hat \rho^{ij}_{b,T} \right)_{1 \leq i < j \leq p}) \rightarrow_p E.$$
\end{theorem}
The theoretical results in this section imply that the quantity $A_T$ is close to $A_L := \sup_{0 \leq s \leq 1} \left| \left| B^{\frac{p(p-1)}{2}}(s) + C(s) \right| \right|_1$ for large $T$ and $B$. Moreover, for every $B \geq 1$, the test statistic becomes arbitrarily large for large $M$ and $T$.
\section{Finite Sample Evidence} \label{sec:simulations}
We illustrate the finite sample properties of our multivariate test with Monte Carlo simulations in different settings:
We consider a series of four-variate random vectors which are, on the one hand, serially independent and, on the other hand, fulfill a four-variate MA(1)-structure with MA-parameters $0.5$. This means that, for $t=0,\ldots,T$, there are serially independent vectors ${\bf u}_t := (u_{t,1},u_{t,2},u_{t,3},u_{t,4})$ such that the data generating process is defined by $${\bf X}_t = {\bf u}_t + A {\bf u}_{t-1}, t=1,\ldots,T$$ with ${\bf X}_t = (X_{t,1},X_{t,2},X_{t,3},X_{t,4}), A=\text{diag}(\theta,\theta,\theta,\theta)$ and $\theta \in \{0,0.5\}$.
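The data-generating process above, with a correlation change in the middle of the sample, can be sketched as follows. The function name and the equicorrelation construction of the Gaussian innovations are our own illustrative choices:

```python
import numpy as np

def simulate_ma1(T, theta, rho0, rho1, rng, p=4):
    # X_t = u_t + theta * u_{t-1}, t = 1, ..., T, with serially independent
    # Gaussian innovations u_0, ..., u_T that are equicorrelated with
    # coefficient rho0 in the first half of the sample and rho1 afterwards.
    def draw(n, rho):
        C = rho * np.ones((p, p)) + (1.0 - rho) * np.eye(p)   # equicorrelation
        return rng.multivariate_normal(np.zeros(p), C, size=n)
    h = T // 2
    u = np.vstack([draw(h + 1, rho0), draw(T - h, rho1)])     # u_0, ..., u_T
    return u[1:] + theta * u[:-1]
```

Since the same scalar MA filter acts on every component, the cross-correlations of ${\bf X}_t$ coincide with those of the innovations, so the correlation shift of the $u_t$ carries over to the observed series.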
The lengths of the series are chosen as $T \in \{200,500\}$, the block lengths are $l_T = \lfloor T^{1/4} \rfloor$, respectively\footnote{For, $T=200$, $\left\lfloor\frac{T}{l_T}\right\rfloor \neq \frac{T}{l_T}$, so that the length of the bootstrapped time series is not exactly equal to $T$.
However, we consider the difference as negligible.}, the number of bootstrap replications is $999$ and the number of Monte Carlo replications is
$10000$. We consider, on the one hand, a four-variate normal distribution (ND) and, on the other hand, a four-variate $t_{3}$-distribution.
The $t_3$-distribution is not covered by our assumptions, but we analyze it to get a picture of the behavior of the test in settings which are realistic in financial applications.
For simulating the behavior under the null, we set the variances of the $u_{i,t}, i=1,2,3,4,$ to $1$ and the correlations of the $u_{i,t}$ to $\rho_{12} = \ldots = \rho_{34} =: \rho_0 \in \{0,0.5\}$. Under the alternative, the $u_{i,t}$ have correlation $\rho_0$ in the first half of the sample. Moreover, we have a change in all six pairwise correlations of the $u_{i,t}$ with shifts $\Delta \rho = -0.2,-0.4,0.2,0.4$ in the middle of the sample.
The results (empirical rejection probabilities, not-size-adjusted, nominal level $0.05$ which corresponds to a simulated critical value of $4.47$) are given in Table \ref{table1}.
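The data generating process just described can be sketched as follows (a minimal illustration; the function name is ours, and both equicorrelation matrices must stay positive definite for the Cholesky factorization to exist):

```python
import numpy as np

def simulate_dgp(T, theta, rho0, delta_rho, rng):
    """Four-variate MA(1) series X_t = u_t + theta * u_{t-1} whose
    innovation correlations shift from rho0 to rho0 + delta_rho at T/2."""
    p = 4
    R1 = np.full((p, p), rho0); np.fill_diagonal(R1, 1.0)
    R2 = np.full((p, p), rho0 + delta_rho); np.fill_diagonal(R2, 1.0)
    # equicorrelation matrices must remain positive definite
    L1, L2 = np.linalg.cholesky(R1), np.linalg.cholesky(R2)
    Z = rng.standard_normal((T + 1, p))
    U = np.empty_like(Z)
    half = (T + 1) // 2
    U[:half] = Z[:half] @ L1.T   # innovations before the break
    U[half:] = Z[half:] @ L2.T   # innovations after the break
    return U[1:] + theta * U[:-1]

X = simulate_dgp(500, 0.5, 0.0, 0.4, np.random.default_rng(1))
```

Since the MA(1) filter scales variances and covariances by the same factor $1+\theta^2$, the cross-sectional correlations of ${\bf X}_t$ equal those of the innovations within each regime.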
\begin{center}
- Table \ref{table1} here -
\end{center}
It is seen that there are some size distortions for the heavy-tailed distribution and/or serial dependence although the level seems to converge to $0.05$ for higher $T$ in all cases.
The power of the test increases in $T$ and in absolute values of the correlation changes.
For the $t_3$-distribution, the power is in general considerably lower. Further simulations show that the size properties become worse for even higher $\rho_0$ and even higher serial dependence.
Moreover, we compare the test for a constant correlation matrix with a multivariate procedure based on the pairwise correlation test from \citet{wied:2012} (with bandwidth $\lfloor \log(T) \rfloor$) and the Bonferroni-Holm correction. That means that we perform $m=6$ pairwise tests and denote by $p_{(1)},\ldots,p_{(m)}$ the corresponding p-values in increasing order. We reject the null hypothesis of a constant correlation matrix if there is at least one $j=1,\ldots,m$ such that $$p_{(j)} < \frac{0.05}{m+1-j}.$$ The results are also presented in Table \ref{table1}. Neither procedure dominates: depending on the setting, sometimes the one and sometimes the other performs better. While the Bonferroni-Holm procedure has in general slightly better size and power properties for $\rho_0=0.5$, the matrix-based test performs better for $\rho_0=0$, especially with the normal distribution and decreasing correlation. There is even one case in which the Bonferroni-Holm procedure is not unbiased (the power is smaller than the size), which does not occur with the matrix-based test.
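The decision rule just described can be sketched as follows (a minimal illustration; the function name is ours, and we implement the rule exactly as stated, i.e.\ rejection as soon as any ordered p-value undercuts its threshold):

```python
def holm_reject(pvalues, alpha=0.05):
    """Reject the global null of a constant correlation matrix if there is
    at least one j with p_(j) < alpha / (m + 1 - j), where p_(1) <= ... <= p_(m)
    are the ordered p-values of the m pairwise tests."""
    m = len(pvalues)
    return any(p < alpha / (m + 1 - j)
               for j, p in enumerate(sorted(pvalues), start=1))
```

For example, with $m=6$ pairwise tests, a single p-value of $0.001$ triggers rejection (since $0.001 < 0.05/6$), while six p-values of $0.2$ do not.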
In another setting, we have compared the bivariate bootstrap test with the bivariate kernel-based test and found that both tests behave more or less similarly.
\section{Application to Stock Returns}
Next, we show how the proposed test can be applied to financial time series. For this, we consider the correlations of four stocks.
In order to avoid issues due to market trading in different time zones, we restrict attention to the European market.
We look at the four companies of Euro Stoxx 50 with the highest weights in the index at the end of May 2012, namely Total, Sanofi, Siemens and BASF,
and consider the time span 01.01.2007 - 01.06.2012 such that $T=1414$.
The data was obtained from the database Datastream. Figures \ref{fig:correlations1}, \ref{fig:correlations2}, \ref{fig:correlations3} plot rolling windows of the six pairwise correlations
of the continuous daily returns of each asset with window length $120$. This corresponds to about half a year of trading days.
The dates on the x-axis indicate the first day of each window.
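The rolling-window correlations can be computed as in the following sketch (our own implementation, with window length $120$ as in the text):

```python
import numpy as np

def rolling_correlations(returns, window=120):
    """Pairwise correlations over a moving window.
    Returns an array of shape (T - window + 1, p*(p-1)//2),
    pairs ordered (1,2), (1,3), ..., (p-1,p)."""
    T, p = returns.shape
    pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]
    out = np.empty((T - window + 1, len(pairs)))
    for k in range(T - window + 1):
        C = np.corrcoef(returns[k:k + window].T)
        out[k] = [C[i, j] for i, j in pairs]
    return out
```

Plotting each column of the result against the first date of the respective window reproduces figures of the kind referred to above.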
\begin{center}
- Figure \ref{fig:correlations1} here -
- Figure \ref{fig:correlations2} here -
- Figure \ref{fig:correlations3} here -
\end{center}
We identify time-varying correlations. It is for example interesting to see that the correlation between Total and Sanofi is close to
$0$ at the beginning of February 2008 and considerably higher afterwards. The correlation between Sanofi and BASF is remarkably low in the middle of 2009.
The test statistic $Q_T$ applied to the four-variate return vector is equal to $10.49$. From this value alone we cannot yet decide whether the null hypothesis of constant correlation is rejected, so we calculate the bootstrap statistic $A_T$ with $B=10999$ bootstrap replications and obtain $A_T = 6.55$. The $0.95$-quantile of $\sup_{0 \leq s \leq 1} \left| \left| B^{6}(s) \right| \right|_1$ is equal to $4.47$; since $A_T$ exceeds this critical value, the null hypothesis is rejected at the
significance level $\alpha=0.05$. The approximate p-value is smaller than $0.001$.
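Generically, a bootstrap critical value of this kind is obtained by resampling overlapping blocks of the original series and recomputing the statistic on each resample (a sketch of ours; `statistic` stands in for the concrete functional, which is defined earlier in the paper):

```python
import numpy as np

def block_bootstrap(X, statistic, block_len, B, rng):
    """Overlapping moving-block bootstrap: draw blocks of length block_len
    uniformly from the series, concatenate them to (roughly) the original
    length, and return the B bootstrap replications of `statistic`."""
    T = len(X)
    n_blocks = T // block_len
    reps = np.empty(B)
    for b in range(B):
        starts = rng.integers(0, T - block_len + 1, size=n_blocks)
        Xb = np.concatenate([X[s:s + block_len] for s in starts])
        reps[b] = statistic(Xb)
    return reps

# e.g. reps = block_bootstrap(X, stat, int(T**0.25), 999, rng);
# the 0.95-quantile np.quantile(reps, 0.95) serves as critical value.
rng = np.random.default_rng(0)
reps = block_bootstrap(np.arange(100.0), np.mean, 3, 50, rng)
```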
Figure \ref{fig:evolution} shows the process
\begin{equation*}
\left( \sum_{1 \leq i < j \leq p} \frac{k}{\sqrt{T}} \left| \hat \rho^{ij}_k - \hat \rho^{ij}_T \right| \right)_{2 \leq k \leq T},
\end{equation*}
that is, the evolution of the aggregated distance between the successively calculated correlations and the full-sample correlations over time.
In the context of CUSUM tests, the point of the maximum is often considered as a reasonable estimator for the (most important) change point if the test decides that such a point actually exists,
see \citet{vostrikova:1981} and the related literature. In this case, the maximum is obtained at the 11th of September 2008 which corresponds quite well to the insolvency of Lehman Brothers. A discussion on dating multiple change points in the correlation matrix can be found in \citet{galeano:2014}.
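The argmax estimator just described can be sketched as follows (our own minimal implementation of the process displayed above):

```python
import numpy as np

def change_point_estimate(X):
    """Argmax over k of sum_{i<j} (k/sqrt(T)) |rho_k^{ij} - rho_T^{ij}|,
    the CUSUM-type process plotted in the text."""
    T, p = X.shape
    pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]
    rho_T = np.corrcoef(X.T)          # full-sample correlations
    D = np.zeros(T)
    for k in range(2, T):
        C = np.corrcoef(X[:k].T)      # correlations of the first k points
        D[k] = k / np.sqrt(T) * sum(abs(C[i, j] - rho_T[i, j])
                                    for i, j in pairs)
    return int(np.argmax(D))
```

On a series whose correlation structure changes sharply in the middle, the argmax lands near the true break.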
\begin{center}
- Figure \ref{fig:evolution} here -
\end{center}
\section{Conclusion}
We have presented a new fluctuation test for constant correlations in the multivariate setting for which the location of potential change points need not be specified a priori. The new test is based on a bootstrap approximation, works under mild assumptions on the dependence structure, has appealing properties in simulations and proves useful in empirical applications. Potential drawbacks of the test are the requirement of finite fourth moments and the assumption of constant expectations and variances. It might be an interesting question for future research to investigate to which extent these drawbacks could be overcome by some kind of prefiltering and/or other transformations. Moreover, it could be worthwhile to extend the present approach to the problem of monitoring correlation changes or to other, perhaps more robust, measures of dependence.
\appendix
\section{Appendix}
{\it Proof of Theorem \ref{theorem1}} \\
Note that the null hypothesis and Assumption \ref{stationarity} imply that $\mathsf{E}(X_{i,t} X_{j,t}), 1 \leq i,j \leq p,$ do not depend on $t$.
At first, we need an invariance principle for the vector
\begin{equation*}
V_T(s) := \frac{1}{\sqrt{T}} \sum_{t=1}^{\tau(s)} \begin{pmatrix} X^2_{1,t} & - & \mathsf{E}(X^2_{1,t}) \\
\vdots & & \vdots \\
X^2_{p,t} & - & \mathsf{E}(X^2_{p,t}) \\
X_{1,t} & - & \mathsf{E}(X_{1,t}) \\
\vdots & & \vdots \\
X_{p,t} & - & \mathsf{E}(X_{p,t}) \\
X_{1,t} X_{2,t} & - & \mathsf{E}(X_{1,t} X_{2,t}) \\
X_{1,t} X_{3,t} & - & \mathsf{E}(X_{1,t} X_{3,t}) \\
\vdots & & \vdots \\
X_{p-1,t} X_{p,t} & - & \mathsf{E}(X_{p-1,t} X_{p,t}) \end{pmatrix},
\end{equation*}
which is provided by \citet{davidson:1994}, p. 492. Thus, it holds, for $T \rightarrow \infty$, $V_T(s) \Rightarrow_d D_1^{1/2} W^{2p+\frac{p(p-1)}{2}}(s)$ on $D\left([0,1],\mathbb{R}^{2p+\frac{p(p-1)}{2}}\right)$, where $W^{2p+\frac{p(p-1)}{2}}(s)$ is a $\left(\frac{p(p-1)}{2}+2p\right)$-dimensional Brownian Motion with independent components and $D_1$ is given in Assumption \ref{limit}.
Now, one makes the observation that
\begin{equation*}
V_T(s) = \frac{\tau(s)}{\sqrt{T}} \begin{pmatrix} \overline{X^2_{1}}(s) & - & \mathsf{E}(X^2_{1,t}) \\
\vdots & & \vdots \\
\overline{X^2_{p}}(s) & - & \mathsf{E}(X^2_{p,t}) \\
\overline{X_{1}}(s) & - & \mathsf{E}(X_{1,t}) \\
\vdots & & \vdots \\
\overline{X_{p}}(s) & - & \mathsf{E}(X_{p,t}) \\
\overline{X_{1} X_{2}}(s) & - & \mathsf{E}(X_{1,t} X_{2,t}) \\
\overline{X_{1} X_{3}}(s) & - & \mathsf{E}(X_{1,t} X_{3,t}) \\
\vdots & & \vdots \\
\overline{X_{p-1} X_{p}}(s) & - & \mathsf{E}(X_{p-1,t} X_{p,t}) \end{pmatrix},
\end{equation*}
where, for $i=1,\ldots,p$, $\overline{X_{i}}(s) = \frac{1}{\tau(s)} \sum_{t=1}^{\tau(s)} X_{i,t}$, $\overline{X^2_{i}}(s) = \frac{1}{\tau(s)} \sum_{t=1}^{\tau(s)} X^2_{i,t}$ and, for $1 \leq i < j \leq p$, $\overline{X_{i} X_{j}}(s) = \frac{1}{\tau(s)} \sum_{t=1}^{\tau(s)} X_{i,t} X_{j,t}$. The goal is to transform this vector of simple first and second order moments into the vector with the successively calculated correlation coefficients and then to apply the adapted functional delta method, Theorem A.1 in \citet{wied:2012}.
The transforming functions are
\begin{equation*}
\begin{split}
f_1: \mathbb{R}^{2p+\frac{p(p-1)}{2}} &\rightarrow \mathbb{R}^{p+\frac{p(p-1)}{2}} \\
(x_1,\ldots,x_{\left(2p+\frac{p(p-1)}{2}\right)}) &\rightarrow \begin{pmatrix} x_1 & - & (x_{p+1}^2) \\ \vdots & & \vdots \\ x_p & - & (x_{2p}^2) \\ x_{2p+1} & - & x_{p+1} x_{p+2} \\ x_{2p+2} & - & x_{p+1} x_{p+3} \\ \vdots & & \vdots \\ x_{\left(2p + \frac{p(p-1)}{2}\right)} & - & x_{2p-1} x_{2p} \end{pmatrix}
\end{split}
\end{equation*}
for the transformation on the vector of variances and covariances and
\begin{equation*}
\begin{split}
f_2: \mathbb{R}^{p+\frac{p(p-1)}{2}} &\rightarrow \mathbb{R}^{\frac{p(p-1)}{2}} \\
(x_1,\ldots,x_{\left(p+\frac{p(p-1)}{2}\right)}) &\rightarrow \begin{pmatrix} \frac{x_{p+1}}{\sqrt{x_1 x_2}} \\ \frac{x_{p+2}}{\sqrt{x_1 x_3}} \\ \vdots \\ \frac{x_{\left(p+\frac{p(p-1)}{2}\right)}}{\sqrt{x_{p-1} x_p}} \end{pmatrix}
\end{split}
\end{equation*}
for the transformation on the vector of correlations.
We obtain, for $T \rightarrow \infty$ and for arbitrary $\epsilon > 0$,
\begin{equation}\label{analogueLemma3}
W_T(s) := \frac{\tau(s)}{\sqrt{T}} (\hat \rho^{ij}_{\tau(s)} - \rho^{ij})_{1 \leq i < j \leq p} \Rightarrow_d D_3 D_2 D_1^{1/2} W^{2p+\frac{p(p-1)}{2}}(s)
\end{equation}
on $D\left([\epsilon,1],\mathbb{R}^{\frac{p(p-1)}{2}}\right)$ for matrices $D_2 \sim \left( \left(p+\frac{p(p-1)}{2}\right) \times \left(2p+\frac{p(p-1)}{2}\right)\right)$ and $D_3 \sim \left( \frac{p(p-1)}{2} \times \left(p+\frac{p(p-1)}{2}\right)\right)$. Here, $D_2$ is the Jacobian matrix of $f_1$ and $D_3$ is the Jacobian matrix of $f_2$, evaluated at certain moments.
We are not interested in the exact (and cumbersome) structure of these matrices. But we observe that $D_2$ contains all $\left(p+\frac{p(p-1)}{2}\right)$-dimensional unit vectors and $D_3$ contains all $\frac{p(p-1)}{2}$-dimensional unit vectors (weighted with some constants) among its columns. Thus, $D_2$ and $D_3$ have full row rank. Together with Assumption \ref{limit}, this implies that $D_3 D_2 D_1^{1/2}$ has full row rank. Consequently, $D_3 D_2 D_1 D_2 ' D_3 '$ is invertible and positive definite.
Now, with an application of Theorem 4.2 in \citet{billingsley:1968}, we obtain, for $T \rightarrow \infty$, $W_T(s) \Rightarrow_d D_3 D_2 D_1^{1/2} W^{2p+\frac{p(p-1)}{2}}(s)$ on $D\left([0,1],\mathbb{R}^{\frac{p(p-1)}{2}}\right)$. Moreover, it holds
\begin{equation*}
D_3 D_2 D_1^{1/2} W^{2p+\frac{p(p-1)}{2}}(s) \stackrel{d}{=} (D_3 D_2 D_1 D_2 ' D_3 ')^{1/2} W^{\frac{p(p-1)}{2}}(s)
\end{equation*}
and from \eqref{analogueLemma3} it is easy to see (with $s=1$) that the asymptotic covariance matrix of $\sqrt{T} \left(\hat \rho^{ij}_T \right)_{1 \leq i < j \leq p}$ is equal to
$D_3 D_2 D_1 D_2 ' D_3 ' =: E$.
$\blacksquare$ \\
{\it Proof of Theorem \ref{theorem2}} \\
We use a bootstrap theorem for near epoch dependent data for $V_T(1)$. Note that a univariate bootstrap central limit theorem conditionally on the original data
(see for example \citealp{pauly:2009}, Lemma and Definition 2.7, for a precise definition of this type of convergence) is obtained by \citet{calhoun:2013}, Corollary 2.
For the multivariate generalization, we use an argument based on the Cramér-Wold device.
Since we consider convergence of conditional distributions which are random variables and since an uncountable union of null sets is not necessarily a null set again,
we cannot directly apply the Cramér-Wold device. However, we can use an argument based on the Cramér-Wold device and Assumption \ref{limit} for the multivariate
generalization (see \citealp{pauly:2009}, Theorem 3.19, Theorem 3.20 and the related material in this reference; the main argument is that we just consider rational $\lambda$ when applying the Cramér-Wold device).
Then, Condition 1 of \citet{calhoun:2013}, Corollary 2, is fulfilled with our Assumption \ref{ned},
Condition 2 as well as the condition ``$\sum_{t=1}^n (\mu_{nt} - \bar \mu)^2 = o(n^{1/2})$'' with our Assumption \ref{limit} and our Assumption \ref{stationarity}, Condition 3 with our Assumption \ref{moments} and Condition 4 with our Assumption \ref{bootstrap}.
Summing up the previous discussion, the block bootstrap consistently estimates the distribution law of $V_T(1)$.
But then, with the standard (functional) delta method for the bootstrap (\citealp{vandervaart:1996}, Theorem 3.9.11.) transforming $V_T(1)$ to $W_T(1)$,
also the law of $W_T(1)$ is consistently estimated. That means that, for $T \rightarrow \infty$, $$d\left(\mathcal{L}\left(\sqrt{T} (\hat \rho^{ij}_{b,T} - \hat \rho^{ij}_T)_{1 \leq i < j \leq p} | {\bf X}_1,\ldots,{\bf X}_T\right),\mathcal{L} \left(D_3 D_2 D_1^{1/2} W^{2p+\frac{p(p-1)}{2}}(1)\right)\right) \rightarrow_p 0,$$ where $d$ is a metric of weak convergence (see \citealp{pauly:2009}, p. 36) and $\mathcal{L}(\cdot)$ denotes the distribution of a random vector.
Now consider, for $1 \leq i < j \leq p$ and the $\delta$ from Assumption \ref{uniformintboot}, the conditional expectation $$\mathsf{E}\left(\left|\sqrt{T} (\hat \rho^{ij}_{b,T} - \hat \rho^{ij}_T)\right|^{2+\delta} \left| {\bf X}_1,\ldots,{\bf X}_T \right. \right) =: C_T.$$ By Assumption \ref{uniformintboot}, $C_T$ is stochastically bounded. Then, with Lemma 1 in \citealp{cheng:2011}, we can consistently estimate the asymptotic covariance matrix of $W_T(1)$.
$\blacksquare$ \\
{\it Proof of Theorem \ref{theorem3}} \\
Transferring the proof of Theorem \ref{theorem1}, we obtain, under $H_1$, for $T \rightarrow \infty$, $V_T(s) \Rightarrow_d D_1^{1/2} W^{2p+\frac{p(p-1)}{2}}(s) + A(s)$ on $D\left([0,1],\mathbb{R}^{2p+\frac{p(p-1)}{2}}\right)$. Here,
\begin{equation*}
A(s) = \left( 0 , \ldots , 0 , \int_0^s g(u)' \, du \right)'
\end{equation*}
(note that $g(u)'$ is the transpose of the function $g$).
So,
\begin{equation}\label{analogueLemma3LocalPower}
W_T(s) := \frac{\tau(s)}{\sqrt{T}} (\hat \rho^{ij}_{\tau(s)} - \rho^{ij})_{1 \leq i < j \leq p} \Rightarrow_d E^{1/2} W^{\frac{p(p-1)}{2}}(s) + D_3 D_2 A(s),
\end{equation}
where $D_3$ and $D_2$ are the matrices mentioned in the proof of Theorem \ref{theorem1}. Due to the structure of $D_3$ and $D_2$, we have
\begin{equation*}
D_3 D_2 A(s) = M \begin{pmatrix} \frac{1}{\sqrt{\mathsf{Var}(X_{1})\mathsf{Var}(X_{2})}} \int_0^s g_1(u)du \\ \frac{1}{\sqrt{\mathsf{Var}(X_{1})\mathsf{Var}(X_{3})}} \int_0^s g_2(u)du \\ \vdots \\ \frac{1}{\sqrt{\mathsf{Var}(X_{p-1})\mathsf{Var}(X_{p})}} \int_0^s g_{\frac{p(p-1)}{2}}(u)du \end{pmatrix}.
\end{equation*}
This completes the proof.
$\blacksquare$ \\
{\it Proof of Theorem \ref{theorem4}} \\
Under local alternatives, for $i=1,\ldots,p$, $\mathsf{E}(X_{i,t})$ and $\mathsf{E}(X^2_{i,t})$ are constant, respectively. Moreover, for $1 \leq i < j \leq p$, it holds
\begin{equation*}
\sum_{t=1}^T \left(\mathsf{E}(X_{i,t} X_{j,t}) - \frac{1}{T}\sum_{s=1}^T \mathsf{E}(X_{i,s} X_{j,s})\right)^2 = o(T^{1/2}).
\end{equation*}
Therefore, the condition ``$\sum_{t=1}^n (\mu_{nt} - \bar \mu)^2 = o(n^{1/2})$'' of Corollary 2 in \citet{calhoun:2013} is fulfilled. The other conditions are fulfilled with the same arguments as in Theorem \ref{theorem2}. Then, by this corollary and the Cramér-Wold Theorem, we estimate $E$ consistently with the bootstrap estimator as described in the proof of Theorem \ref{theorem2}.
$\blacksquare$ \\
\begin{table}
\centering
\caption{Empirical size and empirical power (times $100$, respectively) of the multivariate correlation test; columns 4 and 5 give empirical rejection probabilities for the matrix-based test, columns 6 and 7 give rejection probabilities for the Bonferroni-Holm procedure}
\begin{tabular}{|c|c|c|cc|cc|}\hline
$MA$ & distr. & $\rho_0$ & \multicolumn{2}{c|}{matrix-based} & \multicolumn{2}{c|}{Bonferroni-Holm} \\ \hline
& & & $T=200$ & $T=500$ & $T=200$ & $T=500$ \\ \hline \hline
\multicolumn{7}{|c|}{$\Delta \rho=0$} \\ \hline
0 & N & 0 & 2.8 & 3.8 & 2.9 & 3.5 \\
0 & N & 0.5 & 4.4 & 4.4 & 5.5 & 4.3 \\
0 & t & 0 & 8.7 & 6.7 & 7.8 & 4.9 \\
0 & t & 0.5 & 11.5 & 8.1 & 10.1 & 6.3 \\ \hline
0.5 & N & 0 & 4.8 & 5.3 & 4.0 & 4.0 \\
0.5 & N & 0.5 & 7.4 & 6.2 & 7.5 & 5.1 \\
0.5 & t & 0 & 13.1 & 9.4 & 9.7 & 5.5 \\
0.5 & t & 0.5 & 17.1 & 12.3 & 13.2 & 8.1 \\ \hline \hline
\multicolumn{7}{|c|}{$\Delta \rho=0.2$} \\ \hline
0 & N & 0 & 32.2 & 89.1 & 26.3 & 73.5 \\
0 & N & 0.5 & 43.6 & 90.5 & 55.6 & 93.7 \\
0 & t & 0 & 19.0 & 32.3 & 15.8 & 20.2 \\
0 & t & 0.5 & 30.1 & 41.3 & 35.4 & 44.9 \\ \hline
0.5 & N & 0 & 30.3 & 80.8 & 24.7 & 63.9 \\
0.5 & N & 0.5 & 42.3 & 82.9 & 52.3 & 87.9 \\
0.5 & t & 0 & 24.5 & 34.7 & 18.4 & 22.2 \\
0.5 & t & 0.5 & 36.6 & 44.8 & 38.9 & 46.7 \\ \hline \hline
\multicolumn{7}{|c|}{$\Delta \rho=-0.2$} \\ \hline
0 & N & 0 & 74.4 & 100.0 & 29.2 & 80.7 \\
0 & N & 0.5 & 16.8 & 79.5 & 20.3 & 80.7 \\
0 & t & 0 & 36.4 & 64.0 & 16.4 & 21.2 \\
0 & t & 0.5 & 13.6 & 22.5 & 10.1 & 15.6 \\ \hline
0.5 & N & 0 & 65.8 & 99.6 & 25.9 & 70.1 \\
0.5 & N & 0.5 & 15.4 & 66.6 & 15.9 & 67.9 \\
0.5 & t & 0 & 39.9 & 64.9 & 20.1 & 23.4 \\
0.5 & t & 0.5 & 18.1 & 24.7 & 12.4 & 16.8 \\ \hline \hline
\end{tabular}
\label{table1}
\end{table}
\begin{figure}
\caption{Rolling correlations}
\label{fig:correlations1}
\end{figure}
\begin{figure}
\caption{Rolling correlations}
\label{fig:correlations2}
\end{figure}
\begin{figure}
\caption{Rolling correlations}
\label{fig:correlations3}
\end{figure}
\begin{figure}
\caption{Evolution of successively calculated correlations}
\label{fig:evolution}
\end{figure}
\end{document}
\begin{document}
\draft
\preprint{\hbox to \hsize{\hfil\vtop{\hbox{IASSNS-HEP-99/36}
\hbox{April, 1999}}}}
\title{State Vector Collapse Probabilities and Separability\\
of Independent Systems in Hughston's\\
Stochastic Extension of the Schr\"odinger Equation \\
}
\author{Stephen L. Adler\\}
\address{Institute for Advanced Study\\
Princeton, NJ 08540\\
}
\author{Lawrence P. Horwitz
\footnote{On leave from School of Physics and Astronomy,
Raymond and Beverly Sackler Faculty
of Exact Sciences, Tel Aviv University, Ramat Aviv, Israel, and
Department of Physics, Bar Ilan University, Ramat Gan, Israel.}}
\address{Institute for Advanced Study\\
Princeton, NJ 08540\\
}
\maketitle
\leftline{Send correspondence to:}
{\leftline{Stephen L. Adler}
\leftline{Institute for Advanced Study}
\leftline{Olden Lane, Princeton, NJ 08540}
\leftline{Phone 609-734-8051; FAX 609-924-8399; email adler@ias.edu}}
\begin{abstract}
We give a general proof that Hughston's stochastic extension of the
Schr\"odinger equation leads to state vector collapse to energy eigenstates,
with collapse probabilities given by the quantum mechanical probabilities
computed from the initial state. We also show that for a system
composed of independent subsystems, Hughston's equation separates into
similar independent equations for each of the subsystems, correlated
only through the common Wiener process that drives the state reduction.
\end{abstract}
A substantial body of work [1] has addressed the problem of state vector
collapse by proposing that the Schr\"odinger equation be modified to
include a stochastic process, presumably arising from physics at a deeper
level, that drives the collapse process. Although interesting models
have been constructed, there so far has been no demonstration that for a
generic Hamiltonian, one can construct a stochastic dynamics that collapses
the state vector with the correct quantum mechanical probabilities.
Part of the problem has been that most earlier work has used stochastic
equations that do not preserve state vector normalization, requiring
additional ad hoc assumptions to give a consistent
physical interpretation.
Various authors [2] have proposed rewriting the Schr\"odinger
equation as an equivalent dynamics on projective Hilbert space, i.e., on
the space of rays, a formulation in which the imposition of a state vector
normalization condition is not needed. Within this framework, Hughston [3]
has proposed a simple stochastic extension of the Schr\"odinger equation,
constructed solely from the Hamiltonian function, and has shown that his
equation leads to state vector reduction to an energy eigenstate, with
energy conservation in the mean throughout the reduction process.
In the simplest spin-1/2 case, Hughston exhibits an explicit solution
that shows that his equation leads to collapse with the correct quantum
mechanical probabilities, but the issue of collapse probabilities in the
general case has remained open. In this Letter, we shall give a general
proof that Hughston's equation leads to state vector collapse to energy
eigenstates with the
correct quantum mechanical probabilities, using the martingale or ``gambler's
ruin'' argument pioneered by Pearle [4]. We shall also show that Hughston's
equation separates into independent equations of similar structure for a
wave function constructed as the product of independent subsystem wave
functions.
We begin by explaining the basic elements needed to understand Hughston's
equation, working in an $n+1$ dimensional Hilbert space. We denote the
general state vector in this space by $| z \rangle$, with $z$ a shorthand
for the complex projections $z^1,z^2,...,z^{n+1}$ of the state vector on an
arbitrary fixed basis. Letting $F$ be an arbitrary Hermitian operator, and
using the summation convention that repeated indices are summed over their
range, we define
\begin{mathletters}
\label{allequations}
\begin{equation}
(F) \equiv { \langle z | F | z \rangle \over \langle z |z \rangle }
= { \overline z^{\alpha} F_{\alpha \beta} z^{\beta} \over
\overline z^{\gamma} z^{\gamma} }~~~,
\label{equationa}
\end{equation}
so that $(F)$ is the expectation of the operator $F$
in the state $|z\rangle$,
independent of the ray representative and normalization
chosen for this state.
Note that in this notation $(F^2)$ and $(F)^2$ are not the same; their
difference is in fact the variance $[\Delta F]^2$,
\begin{equation}
[\Delta F]^2 = (F^2)-(F)^2~~~.
\label{equationb}
\end{equation}
\end{mathletters}
We shall use two other parameterizations for the state $|z\rangle$ in what
follows. Since $(F)$ is homogeneous of degree zero in both
$z^{\alpha}$ and $\overline z^{\alpha}$, let us define new
complex coordinates $t^j$ by
\begin{equation}t^j=z^j/z^0,~~ \overline t^j=\overline z^j
/ \overline z^0~,~~~j=1,...,n. ~~~
\end{equation}
Next, it is convenient to
split each of the complex numbers $t^j$ into its real and imaginary
part $t^j_R,~t^j_I$, and to introduce a $2n$ component real vector
$x^a,~a=1,...,2n$ defined by $x^1=t^1_R,~x^2=t^1_I,~x^3=t^2_R,~
x^4=t^2_I,...,x^{2n-1}=t^n_R,~x^{2n}=t^n_I$. Clearly, specifying
the projective coordinates $t^j$ or $x^a$ uniquely determines the
unit ray containing the unnormalized state $|z\rangle$, while leaving
the normalization and ray representative of the state $|z\rangle$
unspecified.
As discussed in Refs. [2], projective Hilbert space is also a Riemannian
space with respect to the Fubini-Study metric $g_{\alpha \beta}$, defined
by the line element
\begin{mathletters}
\label{allequations}
\begin{equation}
ds^2= g_{\alpha \beta} d\overline z^{\alpha} dz^{\beta}
\equiv 4\left( 1- { | \langle z | z+dz \rangle |^2 \over \langle z |z \rangle
\langle z+dz | z+dz \rangle } \right) ~~~.
\label{equationa}
\end{equation}
Abbreviating $\overline z^{\gamma} z^{\gamma} \equiv \overline z \cdot z$,
a simple calculation gives
\begin{equation}
g_{\alpha \beta}=4(\delta_{\alpha \beta} \overline z \cdot z
-z^{\alpha} \overline z^{\beta})/(\overline z \cdot z)^2
=4 {\partial \over \partial \overline z^{\alpha} }
{\partial \over \partial z^{\beta} } \log \overline z \cdot z~~~.
\label{equationb}
\end{equation}
\end{mathletters}
Because of the homogeneity conditions $\overline z^{\alpha} g_{\alpha \beta}
=z^{\beta} g_{\alpha \beta}=0$, the metric $g_{\alpha \beta}$ is not
invertible, but if we hold the coordinates $\overline z^0,~z^0$ fixed in
the variation of Eq.~(3a) and go over to the
projective coordinates $t^j$, we can rewrite the line element of Eq.~(3a)
as
\begin{mathletters}
\label{allequations}
\begin{equation}
ds^2=g_{jk}d\overline t^j dt^k~~~,
\label{equationa}
\end{equation}
with the invertible metric [5]
\begin{equation}
g_{jk}={4[(1+\overline t^{\ell} t^{\ell}) \delta_{jk} - t^j \overline t^k ]
\over (1+\overline t^m t^m)^2 }~~~,
\label{equationb}
\end{equation}
with inverse
\begin{equation}
g^{jk}={1 \over 4} (1+\overline t^m t^m) (\delta_{jk} + t^j \overline t^k)
~~~.
\label{equationc}
\end{equation}
Reexpressing the complex projective coordinates $t^j$ in terms of the
real coordinates $x^a$, the line element can be rewritten as
\begin{eqnarray}
ds^2=&&g_{ab}dx^adx^b~~~,\nonumber\\
g_{ab}=&&{4[(1+x^dx^d)\delta_{ab}-(x^ax^b+\omega_{ac}x^c\omega_{bd}x^d)]
\over (1+x^e x^e)^2}~~~,\nonumber\\
g^{ab}=&&{1\over 4} (1+x^e x^e)(\delta_{ab}+
x^ax^b+\omega_{ac}x^c\omega_{bd}x^d)~~~.
\label{equationd}
\end{eqnarray}
\end{mathletters}
Here $\omega_{ab}$ is a numerical tensor whose only nonvanishing elements are
$\omega_{a=2j-1 ~b=2j}=1$ and $\omega_{a=2j~b=2j-1}=-1$
for $j=1,...,n$. As discussed
by Hughston, one can define a complex structure $J_a^{~b}$ over the entire
projective Hilbert space for which $J_a^{~c}J_b^{~d}g_{cd}=g_{ab},$
$J_a^{~b}J_b^{~c}=-\delta_a^c$,
such that $\Omega_{ab}=g_{bc} J_a^{~c}$ and
$\Omega^{ab}=g^{ac}J_c^{~b}$ are antisymmetric tensors. At $x=0$, the metric
and complex structure take the values
\begin{eqnarray}
g_{ab}=&&4 \delta_{ab}~,~~g^{ab}={1 \over 4} \delta_{ab}~~~,\nonumber\\
J_a^{~b}=&&\omega_{ab}~,~~\Omega_{ab}=4\omega_{ab}~,
~~\Omega^{ab}={1\over 4}\omega_{ab}~~~.
\end{eqnarray}
Returning to Eq.~(1a), we shall now derive some identities that are
central to what follows. Differentiating Eq.~(1a) with respect to
$\overline z^{\alpha}$, with respect to $z^{\beta}$, and with respect to both
$\overline z^{\alpha}$ and $z^{\beta}$, we get
\begin{mathletters}
\label{allequations}
\begin{eqnarray}
\langle z | z \rangle {\partial (F) \over \partial \overline z^{\alpha}}
=&&F_{\alpha \beta} z^{\beta} - (F) z^{\alpha}~~~,\nonumber\\
\langle z | z \rangle {\partial (F) \over \partial z^{\beta}}=&&
\overline z^{\alpha} F_{\alpha \beta} - (F) \overline z^{\beta}~~~,\nonumber\\
\langle z | z \rangle^2 {\partial^2 (F) \over \partial \overline z^{\alpha}
\partial z^{\beta} }=&&
\langle z |z \rangle [F_{\alpha \beta}-\delta_{\alpha \beta} (F) ]
+2z^{\alpha} \overline z^{\beta} (F) - \overline z^{\gamma} F_{\gamma \beta}
z^{\alpha}-\overline z^{\beta} F_{\alpha \gamma} z^{\gamma}~~~.
\label{equationa}
\end{eqnarray}
Writing similar expressions for a second operator expectation $(G)$,
contracting in various combinations with the relations of Eq.~(6a), and
using the homogeneity conditions
\begin{equation}
\overline z^{\alpha} {\partial (F) \over \partial \overline z^{\alpha} }
=z^{\beta} {\partial (F) \over \partial z^{\beta} }
=\overline z^{\alpha} {\partial^2 (F) \over \partial \overline z^{\alpha}
\partial z^{\beta}}
=z^{\beta} {\partial^2 (F) \over \partial \overline z^{\alpha}
\partial z^{\beta} } =0
\label{equationb}
\end{equation}
\end{mathletters}
to eliminate derivatives with respect to $\overline z^0,~z^0$, we get
the following identities,
\begin{mathletters}
\label{allequations}
\begin{eqnarray}
-i(FG-GF)&&=-i \langle z| z \rangle \left(
{\partial (F) \over \partial z^{\alpha}}
{\partial (G) \over \partial \overline z^{\alpha}} -
{\partial (G) \over \partial z^{\alpha}}
{\partial (F) \over \partial \overline z^{\alpha}} \right)
=2\Omega^{ab} \nabla_a (F) \nabla_b (G)~~~,\nonumber\\
(FG+GF)-2(F)(G)&&= \langle z| z \rangle \left(
{\partial (F) \over \partial z^{\alpha}}
{\partial (G) \over \partial \overline z^{\alpha}} +
{\partial (G) \over \partial z^{\alpha}}
{\partial (F) \over \partial \overline z^{\alpha}} \right)
=2g^{ab} \nabla_a (F) \nabla_b (G)~~~,\nonumber\\
(FGF)-(F^2)(G)&&-(F)(FG+GF)+2(F)^2(G)\nonumber\\
&&=\langle z | z \rangle ^2
{\partial (F) \over \partial z^{\alpha}}
{\partial^2 (G) \over \partial \overline z^{\alpha} \partial z^{\beta}}
{\partial (F) \over \partial \overline z^{\beta}}
=2\nabla^a (F) \nabla^b (F) \nabla_a \nabla_b (G),
\label{equationa}
\end{eqnarray}
with $\nabla_a$ the covariant derivative with respect to the Fubini-Study
metric. It is not necessary to use the detailed form of the
affine connection
to verify the right hand equalities in these identities, because since $(G)$
is a Riemannian scalar, $\nabla_a \nabla_b (G)$$ =\nabla_a \partial_b (G)$,
and since projective Hilbert space is a homogeneous manifold, it suffices
to verify the identities at the single point $x=0$, where the affine
connection vanishes and thus
$\nabla_a \nabla_b (G)=\partial_a \partial_b (G)$.
Using Eqs.~(7a) and the chain rule we also find
\begin{equation}
-\nabla^a [(F^2)-(F)^2] \nabla_a (G)=
-{1\over 2} (F^2 G+G F^2) +(F^2)(G) + (F) (FG+GF) -2(F)^2(G)~~~,
\label{equationb}
\end{equation}
which when combined with the final identity in Eq.~(7a) gives
\begin{equation}
\nabla^a(F) \nabla^b(F) \nabla_a \nabla_b (G)
-{1\over 2} \nabla^a [(F^2)-(F)^2] \nabla_a (G)=
-{1\over 4}([F,[F,G]])~~~,
\label{equationc}
\end{equation}
\end{mathletters}
the right hand side of which vanishes when the operators $F$ and $G$
commute [6].
Let us now turn to Hughston's stochastic differential equation, which
in our notation is
\begin{mathletters}
\label{allequations}
\begin{equation}
dx^a=[2 \Omega^{ab}\nabla_b(H)-{1\over 4}\sigma^2 \nabla^aV]dt
+\sigma\nabla^a(H) dW_t~~~,
\label{equationa}
\end{equation}
with $W_t$ a Brownian motion or Wiener process, with $\sigma$ a parameter
governing the strength of the stochastic terms, with $H$ the Hamiltonian
operator and $(H)$ its expectation, and with $V$ the
variance of the Hamiltonian,
\begin{equation}
V=[\Delta H]^2=(H^2)-(H)^2~~~.
\label{equationb}
\end{equation}
\end{mathletters}
When the parameter $\sigma$ is zero, Eq.~(8a) is just the transcription
of the Schr\"odinger equation to projective
Hilbert space. For the time evolution of a
general function $G[x]$, we get by
Taylor expanding $G[x+dx]$ and using the It\^o stochastic calculus rules
\begin{mathletters}
\label{allequations}
\begin{equation}
[dW_t]^2=dt~,~~[dt]^2=dtdW_t=0~~~,
\label{equationa}
\end{equation}
the corresponding stochastic differential equation
\begin{equation}
dG[x]=\mu dt + \sigma \nabla_aG[x]\nabla^a(H) dW_t~~~,
\label{equationb}
\end{equation}
with the drift term $\mu$ given by
\begin{equation}
\mu=2 \Omega^{ab} \nabla_aG[x]\nabla_b(H)-{1\over 4}
\sigma^2\nabla^aV\nabla_a
G[x]+{1\over 2}\sigma^2 \nabla^a(H)\nabla^b(H)\nabla_a\nabla_bG[x]~~~.
\label{equationc}
\end{equation}
\end{mathletters}
Hughston shows that with the $\sigma^2$ part of the drift term
chosen as in Eq.~(8a), the drift term $\mu$ in Eq.~(9c) vanishes for the
special case $G[x]=(H)$, guaranteeing conservation
of the expectation of the energy with respect to the stochastic evolution
of Eq.~(8a). But referring to Eq. (7c) and the first identity in Eq.~(7a),
we see that in fact
a much stronger result is also true, namely that $\mu$ vanishes [and thus
the stochastic process of Eq.~(9b) is a martingale] whenever
$G[x]=(G)$, with $G$ any operator that commutes with the Hamiltonian $H$.
Let us now make two applications of this fact. First, taking $G[x]=V=
(H^2)-(H)^2$, we see that the contribution from $(H^2)$ to $\mu$
vanishes, so the drift term comes entirely from $-(H)^2$.
Substituting this into $\mu$ gives $-2(H)$ times the drift term produced by
$(H)$, which is again zero, plus an extra term
\begin{mathletters}
\label{allequations}
\begin{equation}
-\sigma^2 \nabla^a(H)\nabla ^b(H)\nabla_a(H)\nabla_b(H)
=-\sigma^2V^2~~~,
\label{equationa}
\end{equation}
where we have used the relation $V=\nabla_a(H)\nabla^a(H)$ which follows
from the $F=G=H$ case of the middle identity of Eq.~(7a). Thus the
variance $V$ of the Hamiltonian satisfies the stochastic differential
equation, derived by Hughston by a more complicated method,
\begin{equation}
dV=-\sigma^2 V^2 dt + \sigma \nabla_aV\nabla^a(H) dW_t~~~.
\label{equationb}
\end{equation}
This implies that the expectation $E[V]$ with respect to the stochastic
process obeys
\begin{equation}
E[V_t]=E[V_0]-\sigma^2 \int_0^t ds E[V_s^2]~~~,
\label{equationc}
\end{equation}
which using the inequality $0\leq E[\{V-E[V]\}^2]=E[V^2]-E[V]^2$
gives the inequality
\begin{equation}
E[V_t] \leq E[V_0] -\sigma^2 \int_0^t ds E[V_s]^2~~~.
\label{equationd}
\end{equation}
\end{mathletters}
Since $V$ is necessarily nonnegative, Eq.~(10d) implies that $E[V_{\infty}]=0$,
and again using the nonnegativity of $V$ this implies that $V_s$
vanishes as $s \to \infty$, apart from a set of outcomes
of probability measure zero. Thus, as concluded by
Hughston, the stochastic term in his equation drives the system, as $t \to
\infty$, to an energy eigenstate.
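For a two-level system this variance decay can be checked numerically. Writing $p$ for the population of the upper level and $\Delta$ for the level spacing, the Hilbert-space (energy-driven) form of the stochastic equation reduces, up to a convention-dependent rescaling of $\sigma$, to the scalar It\^o equation $dp=\sigma\Delta\,p(1-p)\,dW_t$, with $V=\Delta^2 p(1-p)$; the drift of $V$ is then $-\sigma^2V^2$, matching Eq.~(10b). The sketch below (an illustration under these stated assumptions, not part of the paper) checks that the ensemble mean of $V$ decays and respects the comparison bound $E[V_t]\le V_0/(1+\sigma^2 V_0 t)$ implied by Eq.~(10d).

```python
import numpy as np

rng = np.random.default_rng(0)

sigma, Delta = 1.0, 1.0          # noise strength and level spacing (illustrative units)
p0, dt, steps, M = 0.3, 1e-3, 4000, 20000

# Euler-Maruyama for dp = sigma*Delta*p*(1-p)*dW on M independent trajectories
p = np.full(M, p0)
for _ in range(steps):
    p += sigma * Delta * p * (1.0 - p) * rng.normal(0.0, np.sqrt(dt), M)
    np.clip(p, 0.0, 1.0, out=p)  # endpoints are absorbing: p*(1-p) = 0 there

t = steps * dt
V0 = Delta**2 * p0 * (1.0 - p0)
mean_V = Delta**2 * float(np.mean(p * (1.0 - p)))
bound = V0 / (1.0 + sigma**2 * V0 * t)  # comparison bound implied by Eq. (10d)

print(mean_V, bound)
```

Boundary clipping is harmless here because $p(1-p)$ vanishes at the endpoints, so once a trajectory reaches $0$ or $1$ it stays there.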
As our second application of the vanishing of the drift term $\mu$ for
expectations of operators that commute with $H$, let us consider the
projectors $\Pi_e\equiv |e\rangle \langle e| $ on a complete set of
energy eigenstates $|e \rangle$. By definition, these projectors all
commute with $H$, and so the drift term $\mu$ vanishes in the stochastic
differential equation for $G[x]=(\Pi_e)$, and consequently the expectations
$E[(\Pi_e)]$ are time independent; additionally, by completeness of
the states $|e\rangle$, we have $\sum_e (\Pi_e)=1$.
But these are just the conditions for
Pearle's [4] gambler's ruin argument to apply. At time zero,
$E[(\Pi_e)]=(\Pi_e)\equiv p_e$
is the absolute value squared of the quantum mechanical amplitude
to find the initial state in energy eigenstate $|e \rangle$. At $t=\infty$,
the system always evolves to an energy eigenstate, with the eigenstate
$|f\rangle $ occurring with some probability $P_f$. The expectation
$E[(\Pi_e)]$, evaluated at infinite time, is then
\begin{equation}
E[(\Pi_e)]=1 \times P_e + \sum_{f \neq e} 0 \times P_f = P_e~~~;
\end{equation}
hence $p_e=P_e$ for each $e$ and the state collapses into energy eigenstates
at $t=\infty$ with probabilities given by the usual quantum mechanical
rule applied to the initial wave function [7].
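Pearle's gambler's-ruin conclusion can be illustrated with the same two-level reduction (again an assumption about conventions, not a statement from the paper): $p_t=(\Pi_1)$ is a bounded martingale, so it is absorbed at $0$ or $1$, and the absorption frequency at $1$ should reproduce the initial Born weight $p_0$. A hedged sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

sigma, Delta = 2.0, 1.0
p0, dt, steps, M = 0.3, 1e-3, 8000, 20000

# same scalar SDE as before: dp = sigma*Delta*p*(1-p)*dW, run until near-collapse
p = np.full(M, p0)
for _ in range(steps):
    p += sigma * Delta * p * (1.0 - p) * rng.normal(0.0, np.sqrt(dt), M)
    np.clip(p, 0.0, 1.0, out=p)

freq_upper = float(np.mean(p > 0.5))               # fraction collapsed toward the upper eigenstate
residual = float(np.mean(np.minimum(p, 1.0 - p)))  # mean distance from the eigenstates

print(freq_upper, residual)
```

The martingale property is the stochastic counterpart of the time-independence of $E[(\Pi_e)]$ noted above; the collapse frequency should agree with $p_0$ to within Monte Carlo error.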
Let us now examine the structure of Hughston's
equation for a Hilbert space constructed as the direct product of
independent subsystem Hilbert spaces, so that
\begin{mathletters}
\label{allequations}
\begin{eqnarray}
|z\rangle =&& \prod_{\ell} |z_{\ell} \rangle~~~,\nonumber\\
H=&&\sum_{\ell} H_{\ell}~~~,
\label{equationa}
\end{eqnarray}
with $H_{\ell}$ acting as the unit operator on the states $|z_{k}\rangle ~,~~
k \neq \ell$. Then a simple calculation shows that the expectation
of the Hamiltonian $(H)$ and its variance $V$ are both
additive over the subsystem Hilbert spaces,
\begin{eqnarray}
(H)=&&\sum_{\ell} (H_{\ell})_{\ell}~~~,\nonumber\\
V=\sum_{\ell} V_{\ell} =&&\sum_{\ell}[ (H_{\ell}^2)_{\ell}
-(H_{\ell})_{\ell}^2]~~~,
\label{equationb}
\end{eqnarray}
\end{mathletters}
with $(F_{\ell})_{\ell}$ the expectation of the operator $F_{\ell}$
formed according to Eq.~(1a) with respect
to the subsystem wave function $|z_{\ell}\rangle$. In addition,
the Fubini-Study line element is also additive over the subsystem Hilbert
spaces, since [8]
\begin{eqnarray}
1-ds^2/4=&& {| \langle z | z+dz \rangle |^2 \over \langle z |z \rangle
\langle z+dz | z+dz \rangle } =\prod_{\ell}
{ | \langle z_{\ell} | z_{\ell}+dz_{\ell} \rangle |^2 \over
\langle z_{\ell} |z_{\ell} \rangle \langle z_{\ell}+dz_{\ell}
| z_{\ell}+dz_{\ell} \rangle }\nonumber\\
=&&\prod_{\ell}[1-ds_{\ell}^2/4]=1-[\sum_{\ell} ds_{\ell}^2]/4 +{\rm O}(ds^4)
~~~.
\end{eqnarray}
As a result of Eq.~(13), the metric $g^{ab}$ and complex
structure $\Omega^{ab}$ block diagonalize over the independent subsystem
subspaces. Equation (12b) then implies that Hughston's stochastic extension
of the Schr\"odinger equation given in Eq.~(8a)
separates into similar equations for the subsystems, that do not refer
to one another's $x^a$ coordinates, but are correlated only through the
common Wiener process $dW_t$ that appears in all of them. Under the
assumption [9] that $\sigma \sim M_{\rm Planck}^{-1/2}$ in microscopic
units with $\hbar =c=1$, these correlations will be very small; it will be
important to analyze whether they can have observable physical
consequences on laboratory or cosmological scales [10].
To summarize, we have shown that Hughston's stochastic extension of the
Schr\"odinger equations has properties that make it a viable physical model
for state vector reduction. This opens the challenge of seeing whether
it can be derived as a phenomenological approximation to a fundamental
pre-quantum dynamics. Specifically, we suggest that since
Adler and Millard [11] have argued that quantum mechanics can emerge as
the thermodynamics of an underlying non-commutative operator dynamics,
it may be possible to show that Hughston's stochastic process is the
leading statistical fluctuation correction to this thermodynamics.
\acknowledgments
This work was supported in part by the Department of Energy under
Grant \#DE--FG02--90ER40542. One of us (S.L.A.) wishes to thank J.
Anandan for conversations introducing him to the Fubini-Study metric.
The other (L.P.H.) wishes to thank P. Leifer for many discussions on
the properties of the complex projective space.
\begin{references}
\bibitem{[1]} For a representative, but not exhaustive, survey of the earlier
literature, see the papers of Di\'osi, Ghirardi et al., Gisin,
Pearle, and Percival cited by Hughston, Ref. [3] below.
\bibitem{[2]} T.W.B. Kibble, Commun. Math. Phys. {\bf 65}, 189 (1979);
D. N. Page, Phys. Rev. A {\bf 36}, 3479 (1987); Y. Aharonov and
J. Anandan, Phys. Rev. Lett. {\bf 58}, 1593 (1987); J. Anandan and Y.
Aharonov, Phys. Rev. D {\bf 38}, 1863 (1988)
and Phys. Rev. Lett. {\bf 65}, 1697 (1990); G. W. Gibbons,
J. Geom. Phys. {\bf 8}, 147 (1992); L. P. Hughston, ``Geometric aspects
of quantum mechanics'', in S. A. Huggett, ed., {\it Twistor theory},
Marcel Dekker, New York, 1995; A. Ashtekar and T. A. Schilling, preprint
gr-qc/9706069. For related work, see
A. Heslot, Phys. Rev. D {\bf 31}, 1341 (1985)
and S. Weinberg, Phys. Rev. Lett. {\bf 62}, 485 (1989) and
Ann. Phys. (NY) {\bf 194}, 336 (1989).
\break
\bibitem{[3]} L. P. Hughston, Proc. Roy. Soc. Lond. A {\bf 452}, 953 (1996).
\bibitem{[4]} P. Pearle, Phys. Rev. D {\bf 13}, 857 (1976);
Phys. Rev. D {\bf 29},
235 (1984); Phys. Rev. A {\bf 39}, 2277 (1989).
\break
\bibitem{[5]} What we have called $z^0$ could be any $z^{\alpha}\neq
0$. There is
therefore a set of holomorphically overlapping patches, so that the
metric of Eq.~(4b) is globally defined. See, for example, S. Kobayashi
and K. Nomizu, {\it Foundations of Differential Geometry}, Vol. II, p. 159,
Wiley Interscience, New York, 1969.
\break
\bibitem{[6]} An alternative demonstration of this result uses the
fact, noted by
Hughston [3], that $\xi^a_F\equiv \Omega^{ab} \nabla_b(F)$ is a
Killing vector obeying $\nabla_c\xi_F^a+\nabla^a \xi_{Fc}=0$. First rewrite
$\nabla^a(F)\nabla^b(F)\nabla_a\nabla_b(G)$ as $\Omega^{ca} \Omega^{eb}
\xi_{Fc}\xi_{Fe} \nabla_a\nabla_b(G)$ $=\xi_{Fc}\xi_{Fe}\Omega^{ca}
\nabla_a \xi_G^e$. By the Killing vector property, this becomes
$-\xi_{Fc}\xi_{Fe}\Omega^{ca}\nabla^e \xi_{Ga}$, which can be rewritten
as $-\nabla^e[\xi_{Fe}\xi_{Fc}\Omega^{ca}\xi_{Ga}]$
$+\nabla^e\xi_{Fe}\, \xi_{Fc}\Omega^{ca}\xi_{Ga}$
$+\xi_{Fe}\nabla^e\xi_{Fc}\,\Omega^{ca}\xi_{Ga}$.
When $F$ and $G$ commute,
the first two terms vanish by the first identity in Eq.~(7a), while using
the Killing vector property for $\xi_F$ in the third term gives
$-\nabla_c\xi_F^e\, \xi_{Fe} \Omega^{ca} \xi_{Ga}$ =
$-(1/2)\nabla_c[\xi_F^e \xi_{Fe}]\Omega^{ca}J_a^{~b} \nabla_b(G)$,
which using $\Delta F=\xi_F^e\xi_{Fe}$ reduces to $(1/2)\nabla_c[\Delta F]
\nabla^c (G)$.
\break
\bibitem{[7]} This conclusion readily generalizes to the stochastic equation
$dx^a=[2 \Omega^{ab}\nabla_b(H)-{1\over 4}\sigma^2 \sum_j \nabla^aV_j]dt
+\sigma\sum_j \nabla^a(H_j) dW_t^j~~~,$ with the $H_j$ a set of
mutually commuting operators that commute with $H$, with
$V_j =(H_j^2)-(H_j)^2$, and with the $dW_t^j$ independent Wiener processes
obeying $dW_t^j dW_t^k=\delta^{jk}dt$~.
\bibitem{[8]} An alternative way to see this is to use the identity
$\log \overline z \cdot z =\log \prod_{\ell} \overline z_{\ell}
\cdot z_{\ell} =$ $ \sum_{\ell} \log \overline z_{\ell}\cdot z_{\ell}$
in Eq.~(3b), along with a change of
variable from $z$ to the $z_{\ell}$'s.
\break
\bibitem{[9]} See L. P. Hughston, Ref. [3], Sec. 11 and earlier
work of Di\'osi, Ghirardi
et al., and Penrose cited there; also D. I. Fivel, preprint
quant-ph/9710042.
\break
\bibitem{[10]}
Atomic physics tests for nonlinearities in quantum mechanics have
been surveyed by J. J. Bollinger, D. J. Heinzen, W. M. Itano, S. L. Gilbert,
and D. J. Wineland, in J. C. Zorn and R. R. Lewis, eds., Proceedings of the
12th International Conference on Atomic Physics, Amer. Inst. of Phys. Press,
New York, 1991, p. 461. In Hughston's equation, the parameter $\epsilon$
characterizing the nonlinearities is of order $\epsilon \sim \sigma^2
[\Delta H]^2$. For a two level system with ``clock'' transition energy
$E_c$, one has $[\Delta H]^2 \sim E_c^2$, so for
$\sigma^2 \sim M_{\rm Planck}^{-1}$,
one estimates $\epsilon \sim E_c^2/M_{\rm Planck}$. For the
$^{9}$Be transition studied by Bollinger et al., this gives a predicted
$\epsilon \sim 10^{-46}$ MeV, as compared with the measured bound $|\epsilon|
< 2.4 \times 10^{-26}$ MeV. Transitions with smaller $E_c$ values,
such as $^{201}$Hg and $^{21}$Ne, have correspondingly
suppressed predictions for $\epsilon$.
\bibitem{[11]}
S. L. Adler and A. C. Millard, Nucl. Phys. B {\bf 473},
199 (1996);
see also S. L. Adler and A. Kempf, J. Math. Phys. {\bf 39}, 5083 (1998).
\end{references}
\end{document} |
\begin{document}
\title{Chv\'{a}tal-type results for degree sequence Ramsey numbers}
\footnotetext[1]{Department of Mathematics, Iowa State University, Ames, IA 50011; {\tt $\{$cocox,rymartin$\}$@iastate.edu}}
\footnotetext[2]{Department of Mathematical and Statistical Sciences, University of Colorado Denver, Denver, CO 80217; {\tt michael.ferrara@ucdenver.edu}.}
\footnotetext[3]{Research supported in part by NSF grant DMS-1427526, ``The Rocky Mountain - Great Plains Graduate Research Workshop in Combinatorics".}
\footnotetext[4]{Research supported in part by Simons Foundation Collaboration Grant \#206692 (to Michael Ferrara).}
\footnotetext[5]{Department of Mathematics, University of Illinois Urbana-Champaign, Urbana, IL 60801; {\tt reinige1@illinois.edu}.}
\footnotetext[6]{This author's research was partially supported by the National Security Agency (NSA) via grant H98230-13-1-0226. This author's contribution was completed in part while he was a long-term visitor at the Institute for Mathematics and its Applications. He is grateful to the IMA for its support and for fostering such a vibrant research community.}
\begin{abstract}
A sequence of nonnegative integers $\pi =(d_1,d_2,\dotsc,d_n)$ is {\it graphic} if there is a (simple) graph $G$ of order $n$ having degree sequence $\pi$. In this case, $G$ is said to {\it realize} or be a {\it realization of} $\pi$. Given a graph $H$, a graphic sequence $\pi$ is \textit{potentially $H$-graphic} if there is some realization of $\pi$ that contains $H$ as a subgraph.
In this paper, we consider a degree sequence analogue to classical graph Ramsey numbers. For graphs $H_1$ and $H_2$, the \textit{potential-Ramsey number} $r_{pot}(H_1,H_2)$ is the minimum integer $N$ such that for any $N$-term graphic sequence $\pi$, either $\pi$ is potentially $H_1$-graphic or the complementary sequence $\overline{\pi}=(N-1-d_N,\dots, N-1-d_1)$
is potentially $H_2$-graphic.
We prove that if $s\ge 2$ is an integer and $T_t$ is a tree of order $t> 7(s-2)$, then $$r_{pot}(K_s, T_t) = t+s-2.$$ This result, which is best possible up to the bound on $t$, is a degree sequence analogue to a classical 1977 result of Chv\'{a}tal on the graph Ramsey number of trees vs.\ cliques. To obtain this theorem, we prove a sharp condition that ensures an arbitrary graph packs with a forest, which is likely to be of independent interest.
\end{abstract}
\section{Introduction}
A sequence of nonnegative integers $\pi =(d_1,d_2,\dotsc,d_n)$ is {\it graphic} if there is a (simple) graph $G$ of order $n$ having degree sequence $\pi$. In this case, $G$ is said to {\it realize} or be a {\it realization of} $\pi$, and we will write $\pi=\pi(G)$. Unless otherwise stated, all sequences in this paper are assumed to be nonincreasing.
There are a number of theorems characterizing graphic sequences, including classical results by Havel~\cite{hav} and Hakimi~\cite{hak} and an independent characterization by Erd\H{o}s and Gallai~\cite{EG}. However, a given graphic sequence may have a diverse family of nonisomorphic realizations; as such it has become of recent interest to determine when realizations of a given graphic sequence have various properties. As suggested by A.R.~Rao in~\cite{Ra79}, such problems can be broadly classified into two types, the first described as ``forcible'' problems and the second as ``potential'' problems. In a forcible degree sequence problem, a specified graph property must exist in every realization of the degree sequence $\pi$, while in a potential degree sequence problem, the desired property must be found in at least one realization of $\pi$.
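As an aside, the Erd\H{o}s--Gallai characterization is easy to state algorithmically: a nonincreasing sequence $(d_1,\dots,d_n)$ of nonnegative integers with even sum is graphic if and only if $\sum_{i=1}^{k} d_i \le k(k-1)+\sum_{i=k+1}^{n}\min(d_i,k)$ for every $1\le k\le n$. A minimal reference implementation (illustrative code, not part of the paper):

```python
def is_graphic(seq):
    """Erdos-Gallai test: is seq the degree sequence of some simple graph?"""
    d = sorted(seq, reverse=True)
    if not d:
        return True          # the empty sequence is (vacuously) graphic
    if d[-1] < 0 or sum(d) % 2 == 1:
        return False         # negative entries or odd degree sum: not graphic
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphic([3, 3, 3, 3]), is_graphic([3, 3, 1, 1]))
```

For example, $(3,3,3,3)$ is realized by $K_4$, while $(3,3,1,1)$ fails the inequality at $k=2$.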
Results on forcible degree sequences are often stated as traditional problems in structural or extremal graph theory, where a necessary and/or sufficient condition is given in terms of the degrees of the vertices (or equivalently the number of edges) of a given graph (e.g.~Dirac's theorem on hamiltonian graphs).
A number of degree sequence analogues to classical problems in extremal graph theory appear throughout the literature, including potentially graphic sequence variants of Hadwiger's Conjecture~\cite{CO,DM,RoSo10}, extremal graph packing theorems~\cite{BFHJKW, DFJS}, the Erd\H{o}s-S\'{o}s Conjecture~\cite{LiYin09}, and the Tur\'{a}n Problem~\cite{EJL, FLMW}.
\subsection{Potential-Ramsey Numbers}
Given graphs $H_1$ and $H_2$, the \textit{Ramsey number} $r(H_1,H_2)$ is the minimum positive integer $N$ such that every red/blue coloring of the edges of the complete graph $K_N$ yields either a copy of $H_1$ in red or a copy of $H_2$ in blue. In~\cite{BFHJ}, the authors introduced the following potential degree sequence version of the graph Ramsey numbers.
Given a graph $H$, a graphic sequence $\pi=(d_1,\dots,d_n)$ is \textit{potentially $H$-graphic} if there is some realization of $\pi$ that contains $H$ as a subgraph. For graphs $H_1$ and $H_2$, the \textit{potential-Ramsey number} $r_{pot}(H_1,H_2)$ is the minimum integer $N$ such that for any $N$-term graphic sequence $\pi$, either $\pi$ is potentially $H_1$-graphic or the complementary sequence $$\overline{\pi}=(\overline{d}_1,\dots,\overline{d}_N)=(N-1-d_N,\dots, N-1-d_1)$$
is potentially $H_2$-graphic.
In traditional Ramsey theory, the enemy gives us a coloring of the edges of $K_N$, and we must be sure that there is some copy of $H_1$ or $H_2$ in the appropriate color, regardless of the enemy's choice. When considering the potential-Ramsey problem, we are afforded the additional flexibility of being able to replace the enemy's red subgraph with any graph having the same degree sequence. Consequently, it is immediate that for any $H_1$ and $H_2$, $$r_{pot}(H_1,H_2)\le r(H_1,H_2).$$
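Computationally, the complementary sequence is just $N-1-d_i$ re-sorted; and since every realization of $\overline{\pi}$ is the complement of a realization of $\pi$, the complementary sequence is automatically graphic whenever $\pi$ is. A one-function sketch (illustrative, with a hypothetical helper name):

```python
def complement_sequence(pi):
    """Degree sequence of the complement of any realization of the N-term sequence pi."""
    n = len(pi)
    return sorted((n - 1 - d for d in pi), reverse=True)

print(complement_sequence([4, 2, 2, 1, 1]))  # [3, 3, 2, 2, 0]
```

Note that regular sequences such as $(2,2,2,2,2)$ are fixed points of this map.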
While this bound is sharp for certain pairs of graphs, for example $H_1=P_n$ and $H_2=P_m$ (see~\cite{BFHJ}), in general that is far from the case. For instance, determining $r(K_k,K_k)$ is generally considered to be one of the most difficult open problems in combinatorics, if not all of mathematics. Indeed, the best known asymptotic lower bound, due to Spencer \cite{S}, states that $$r(K_k,K_k)\ge (1+o(1))\frac{\sqrt{2}}{e}k2^{\frac{k}{2}}.$$ By contrast, the following theorem appears in \cite{BFHJ}.
\begin{theorem}[Busch, Ferrara, Hartke and Jacobson~\cite{BFHJ}]\label{theorem:K_nK_t}\label{theorem:ramsey_clique}
For $n\ge t\ge 3$, $$r_{pot}(K_n,K_t) = 2n+t-4$$ except when $n=t=3$, in which case $r_{pot}(K_3,K_3)=6$.
\end{theorem}
Despite this theorem, which implies that the potential-Ramsey numbers are at worst a ``small'' linear function of the order of the target graphs, it remains a challenging problem to determine a meaningful bound on $r_{pot}(H_1,H_2)$ for arbitrary $H_1$ and $H_2$. In addition to its connections to classical Ramsey theory, the problem of determining $r_{pot}(H_1,H_2)$ also contributes to the robust body of work on subgraph inclusion problems in the context of degree sequences (cf.~\cite{ChenLiYin,DM,EJL,FLMW,HS,LS}).
\section{Potential-Ramsey Numbers for Trees vs.\ Cliques} In this paper, we are motivated by the following classical result of Chv\'{a}tal~\cite{Chv}, which is one of the relatively few exact results for graph Ramsey numbers that applies to a broad collection of target graphs.
\begin{theorem}
For any positive integers $s$ and $t$ and any tree $T_t$ of order $t$, $$r(K_s,T_t)=(s-1)(t-1)+1.$$
\end{theorem}
This elegant result states that the Ramsey number $r(K_s,T_t)$ depends only on $s$ and $t$ and not $T_t$ itself. Our goal in this paper is to investigate the existence of a similarly general result for $r_{pot}(K_s, T_t)$. For $s\ge 2$, let $G=K_{s-2}\vee \overline{K_{t-1}}$ and note that (a) $G$ is the unique realization of its degree sequence, and (b) $G$ does not contain $K_s$ and $\overline{G}$ does not contain any graph $H$ of order $t$ without isolated vertices. This implies the following general lower bound on the potential Ramsey number.
\begin{prop}\label{prop:LBtreevsnoisol}
If $H$ is any graph of order $t$ without isolated vertices, then $r_{pot}(K_s,H)\geq t+s-2$. In particular, for any tree $T_t$ of order $t$, $r_{pot}(K_s,T_t)\ge t+s-2$.
\end{prop}
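The construction behind this lower bound is concrete enough to verify by brute force for small parameters. The sketch below (illustrative code with hypothetical helper names) builds $G=K_{s-2}\vee \overline{K_{t-1}}$ on $s+t-3$ vertices and checks that $G$ contains $K_{s-1}$ but not $K_s$, and that the $s-2$ join vertices are isolated in $\overline{G}$, so no order-$t$ graph without isolated vertices fits in $\overline{G}$.

```python
from itertools import combinations

def join_construction(s, t):
    """Adjacency sets of K_{s-2} joined with an independent set of t-1 vertices."""
    n = (s - 2) + (t - 1)
    clique = set(range(s - 2))
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if u in clique or v in clique:   # every edge touches the K_{s-2} side
            adj[u].add(v)
            adj[v].add(u)
    return adj

def has_clique(adj, k):
    return any(all(v in adj[u] for u, v in combinations(S, 2))
               for S in combinations(adj, k))

s, t = 4, 6
adj = join_construction(s, t)
n = len(adj)
# vertices of full degree in G are exactly the isolated vertices of the complement
isolated_in_complement = sum(len(adj[v]) == n - 1 for v in adj)
print(has_clique(adj, s - 1), has_clique(adj, s), isolated_in_complement)
```

The brute-force clique search is only feasible for small $s$ and $t$, which is all the check requires.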
The following theorems
show that for fixed $t$, different choices of $T_t$ may have different potential-Ramsey numbers.
\begin{theorem}[Busch, Ferrara, Hartke and Jacobson~\cite{BFHJ}]\label{theorem:pathclique}
For $t\ge 6$ and $s\ge 3$,
$$r_{pot}(K_s, P_t)=\left\{\begin{array}{ll}
2s - 2 + \left\lfloor\frac{t}{3}\right\rfloor, & \text{ if $s > \left\lfloor \frac{2t}{3} \right\rfloor$},
\\
t+s-2, & \text{ otherwise}.\\
\end{array}\right.$$
\end{theorem}
\begin{theorem}\label{theorem:potram_clique_star}
For $s,t\geq4$,
$$r_{pot}(K_s, K_{1,t-1}) = \left\{\begin{array}{ll}
2s, & \text{ if } t< s+2,
\\
t+s-2, & \text{ otherwise}.\\
\end{array}\right.$$
\end{theorem}
We will prove Theorem~\ref{theorem:potram_clique_star} in Section~\ref{section:proofs}.
There is one notable common feature between these results; namely that the potential-Ramsey number matches the bound given in Proposition~\ref{prop:LBtreevsnoisol} when $t$ is large enough relative to $s$. This suggests the question: does
there exist a function $f(s)$ such that if $t\ge f(s)$, then for any tree $T_t$ of order $t$, $$r_{pot}(K_s,T_t)=t+s-2?$$
Our next result answers this question in the affirmative. Let $\ell(T)$ denote the number of leaves of a tree $T$.
\begin{theorem}\label{theorem:tree_clique}
Let $T_t$ be a tree of order $t$ and let $s\ge 2$ be an integer. If either
\begin{itemize}
\item[(i)] $\ell(T_t)\geq s+1$ or
\item[(ii)] $t> 7(s-2)$,
\end{itemize}
\noindent then $r_{pot}(K_s, T_t)=t+s-2$.
\end{theorem}
We believe that the coefficient of $7$ in part (ii) of the theorem could be improved, and it is plausible that $r_{pot}(K_s,T_t)=t+s-2$ whenever $t\gtrsim \frac{3}{2}s$, as suggested by Theorem~\ref{theorem:pathclique}, although we pose no formal conjecture here.
To obtain Theorem~\ref{theorem:tree_clique}, we give a sharp condition that ensures an arbitrary graph and a forest pack. This result, which we prove in Section~\ref{section:packing}, is likely of independent interest. In Section~\ref{section:lemmas} we present some technical lemmas, and in Section~\ref{section:proofs} we prove both Theorem~\ref{theorem:potram_clique_star} and Theorem~\ref{theorem:tree_clique}.
\section{Packing Forests and Arbitrary Graphs}\label{section:packing}
For two graphs $G$ and $H$ where $|V(G)|\geq |V(H)|$, we say that $G$ and $H$ \emph{pack} if there is an injective function $f:V(H)\to V(G)$ such that for any $xy\in E(H)$, $f(x)f(y)\notin E(G)$. In other words, $G$ and $H$ pack if we can embed both of them into $K_{|V(G)|}$ without any edges overlapping. One of the most notable results about packing is the following theorem due to Sauer and Spencer.
\begin{theorem}[Sauer and Spencer~\cite{SS}]\label{thm:sauerspencer}
Let $G$ and $H$ be graphs of order $n$. If $2\Delta(G)\Delta(H)<n$, then $G$ and $H$ pack.
\end{theorem}
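For small orders, packing can be decided exhaustively, which makes the Sauer--Spencer condition easy to sanity-check. In the sketch below (illustrative code, not from the paper), $C_7$ packs with a maximum matching of $K_7$, as guaranteed by $2\Delta(G)\Delta(H)=4<7$; the second pair shows the condition is sufficient but not necessary, since $C_7$ packs with itself even though $2\cdot 2\cdot 2=8\ge 7$ (indeed $K_7$ decomposes into three edge-disjoint Hamiltonian cycles).

```python
from itertools import permutations

def packs(G_edges, H_edges, n):
    """Brute force: can H be injected into V(G) = range(n) avoiding every edge of G?"""
    G = {frozenset(e) for e in G_edges}
    return any(all(frozenset((f[u], f[v])) not in G for u, v in H_edges)
               for f in permutations(range(n)))

n = 7
cycle = [(i, (i + 1) % n) for i in range(n)]   # C_7, maximum degree 2
matching = [(0, 1), (2, 3), (4, 5)]            # 3K_2 plus an isolated vertex, max degree 1

print(packs(cycle, matching, n))  # Sauer-Spencer: 2*2*1 = 4 < 7, so this must pack
print(packs(cycle, cycle, n))     # condition fails (8 >= 7), yet the pair still packs
```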
While Theorem \ref{thm:sauerspencer} is tight (see~\cite{KK} for the characterization of the extremal graphs), the condition is not necessary to ensure that $G$ and $H$ pack. In particular, even if $H$ has large maximum degree, if $H$ is sparse enough, then it may still pack with $G$. In light of this, we provide the following, which is in many cases an improvement to the Sauer-Spencer theorem provided one graph happens to be a forest.
\begin{theorem}\label{thm:sauerspencerforest}
Let $F$ be a forest and $G$ be a graph both on $n$ vertices, and let $\comp(F)$ be the number of components of $F$ that contain at least one edge and $\ell(F)$ the number of vertices of $F$ with degree 1. If
\[
3\Delta(G)+\ell(F)-2\comp(F)< n,
\]
then $F$ and $G$ pack.
\end{theorem}
\begin{proof}
Let $e\in E(F)$ and let $F'=F\setminus\{e\}$. Notice that $\ell(F')\leq\ell(F)+2$, but if $\ell(F')>\ell(F)$, then $\comp(F')>\comp(F)$. Further, if $\comp(F')<\comp(F)$, then $e$ was an isolated edge, so $\ell(F')\leq\ell(F)-2$. Therefore, $\ell(F')-2\comp(F')\leq\ell(F)-2\comp(F)$.
Consequently, if $3\Delta(G)+\ell(F)-2\comp(F)< n$, then any subgraph of $F$ attained by deleting edges also satisfies this inequality. Therefore we may suppose, for the sake of contradiction, that $F$ is minimal in the sense that for any edge $e\in E(F)$, $F\setminus\{e\}$ packs with $G$. Going forward, we follow the proof of the Sauer-Spencer theorem from \cite{KK} and improve the bounds at certain steps with the knowledge that $F$ is a forest.
For an embedding $f:V(G)\to V(F)$, we call an edge $uv\in E(F)$ a \emph{conflicting edge} if there is some edge $xy\in E(G)$ such that $f(x)=u$ and $f(y)=v$. As $G$ and $F$ do not pack, any embedding must have at least one conflicting edge. Notice that by the minimality of $F$, there is a packing $f$ of $G$ with $F\setminus\{uv\}$, and since $G$ does not pack with $F$, inserting the edge $uv$ must create a conflicting edge. Therefore, for any $uv\in E(F)$, there is an embedding in which $uv$ is the only conflicting edge.
For an embedding $f:V(G)\to V(F)$ and $u,v\in V(F)$, a \emph{$(u,v;G,F)_f$-link} is a triple $(u,w,v)$ such that $uw$ is an edge in the embedding of $G$ and $wv$ is an edge of $F$.
Similarly, a \emph{$(u,v;F,G)_f$-link} is a triple $(u,w,v)$ such that $uw$ is an edge in $F$ and $wv$ is an edge in the embedding of $G$.
Let $f$ be an embedding such that $ux$ is the only conflicting edge, and let $v\in V(F)\setminus\{x\}$.
We claim that there is either a $(u,v;G,F)_f$-link or a $(u,v;F,G)_f$-link.
As $ux$ is both an edge of $F$ and an edge in the embedding of $G$, $xv$ is not an edge of $F$ and also not an edge in the embedding of $G$.
Supposing that $f(u')=u$ and $f(v')=v$, let $f':V(G)\to V(F)$ be defined as $f'(u')=v$, $f'(v')=u$, and $f'(y)=f(y)$ for all $y\notin\{u',v'\}$.
As $F$ and $G$ do not pack, there must be a conflicting edge under $f'$.
This conflicting edge cannot be incident to $x$, but must be incident to either $u$ or $v$.
If the conflicting edge is $uw$, then $(u,w,v)$ is a $(u,v;F,G)_f$-link; if the conflicting edge is instead $vw$, then $(u,w,v)$ is a $(u,v;G,F)_f$-link.
Let $u\in V(F)$ be a non-isolated vertex and let $f:V(G)\to V(F)$ be an embedding such that $ux$ is the only conflicting edge for some $x$.
Define
\begin{align*}
V_1 &=\{v\in V(F)\colon\,\text{there is a $(u,v;F,G)_f$-link}\},\\
V_2 &=\{v\in V(F)\colon\,\text{there is a $(u,v;G,F)_f$-link}\}.
\end{align*}
Since $u$ is incident to $\deg_F(u)$ edges of $F$ and each $w\in N_F(u)$ is incident to at most $\Delta(G)$ edges of the embedding of $G$, we have
\[
|V_1|\leq\deg_F(u)\Delta(G).
\]
Similarly, with $u'\in V(G)$ such that $f(u')=u$,
\begin{align*}
|V_2| &\leq \sum_{w'\in N_G(u')} \deg_F(f(w')) \\
&= 2\deg_G(u') + \sum_{w'\in N_G(u')} (\deg_F(f(w'))-2) \\
&\leq 2\Delta(G) + \sum_{\substack{w\in V(F):\\ \deg_F(w)\geq2}} (\deg_F(w)-2) \\
&= 2\Delta(G) + \ell(F)-2\comp(F),
\end{align*}
where the last equality is justified by the fact that for any tree $T$ with at least two vertices,
\[
\ell(T)=2+\sum_{\substack{w\in V(T):\\ \deg_T(w)\geq 2}}(\deg_T(w)-2).
\]
Since every $v\in V(F)\setminus\{x\}$ is an element of either $V_1$ or $V_2$ and $u\in V_1\cap V_2$,
\[
n\leq |V_1| + |V_2| \leq \deg_F(u)\Delta(G) + 2\Delta(G)+\ell(F)-2\comp(F).
\]
Since this holds for every nonisolated vertex $u$ of $F$, it holds for a leaf of $F$. Therefore, by choosing $u$ to be a leaf,
\[
n\leq 3\Delta(G)+\ell(F)-2\comp(F),
\]
contradicting the hypothesis of the theorem.
\end{proof}
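Already on small instances the forest version can apply where Sauer--Spencer does not. For $F=P_8$ and $G=C_8$ we have $2\Delta(G)\Delta(F)=8\not<8$, so Theorem~\ref{thm:sauerspencer} is silent, while $3\Delta(G)+\ell(F)-2\comp(F)=6+2-2=6<8$, so the theorem just proved guarantees a packing. A brute-force confirmation (illustrative code):

```python
from itertools import permutations

def packs(G_edges, H_edges, n):
    """Brute force: can H be injected into V(G) = range(n) avoiding every edge of G?"""
    G = {frozenset(e) for e in G_edges}
    return any(all(frozenset((f[u], f[v])) not in G for u, v in H_edges)
               for f in permutations(range(n)))

n = 8
cycle = [(i, (i + 1) % n) for i in range(n)]   # G = C_8: Delta = 2
path = [(i, i + 1) for i in range(n - 1)]      # F = P_8: 2 leaves, 1 component

print(packs(cycle, path, n))  # guaranteed by 3*2 + 2 - 2 = 6 < 8
```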
As we are interested in attaining Chv\'{a}tal-type results for the potential-Ramsey number, we will make use of Theorem \ref{thm:sauerspencerforest} when $F$ is a tree.
\begin{cor}\label{cor:sauerspencertree}
Let $G$ and $T$ be graphs of order $n$ where $T$ is a tree. If $3\Delta(G)+\ell(T)-2< n$, then $G$ and $T$ pack.
\end{cor}
The tightness of Corollary~\ref{cor:sauerspencertree}, and hence of Theorem~\ref{thm:sauerspencerforest}, is seen by letting $n$ be even, $G={n\over 2}K_2$ and $T=K_{1,n-1}$. In this case, $\Delta(G)=1$ and $\ell(T)=n-1$, so $3\Delta(G)+\ell(T)-2=n$; however, $G$ and $T$ do not pack. The following proposition shows the asymptotic tightness of Corollary~\ref{cor:sauerspencertree} for any $\Delta(G)$.
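The same brute-force check confirms the tightness example: for $n=6$, $G=3K_2$ and $T=K_{1,5}$ give $3\Delta(G)+\ell(T)-2=3+5-2=6=n$, and no packing exists, since the image of the star's center is matched in $G$ to a vertex that must also receive a leaf. Illustrative code:

```python
from itertools import permutations

def packs(G_edges, H_edges, n):
    """Brute force: can H be injected into V(G) = range(n) avoiding every edge of G?"""
    G = {frozenset(e) for e in G_edges}
    return any(all(frozenset((f[u], f[v])) not in G for u, v in H_edges)
               for f in permutations(range(n)))

n = 6
matching = [(0, 1), (2, 3), (4, 5)]        # G = 3K_2
star = [(0, i) for i in range(1, n)]       # T = K_{1,5} centered at vertex 0

print(packs(matching, star, n))  # boundary case 3*1 + 5 - 2 = 6 = n: no packing
```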
\begin{prop}
For any $\epsilon>0$ and integer $r\geq 2$, there is a graph $G$ with $\Delta(G)=r-1$ and a tree $T$, each with $n$ vertices, such that $3\Delta(G)+\ell(T)-2< (1+\epsilon)n$, but $G$ and $T$ do not pack.
\end{prop}
\begin{proof}
For $m,r\geq 2$ and $n=mr$, let $G=mK_r$ and let $T$ be any tree of order $n$ that is a subdivision of $K_{1,(m-1)r+1}$. It is readily seen that $G$ and $T$ do not pack for any $m$ and $r$. Given $\epsilon>0$, choose $m$ such that $m> 2/\epsilon$. Thus,
\begin{align*}
3\Delta(G)+\ell(T)-2 &= 3(r-1)+(m-1)r+1-2 \\
&= 2r+rm-4=\left(1+{2\over m}\right)n-4\\
&< (1+\epsilon)n.\qedhere
\end{align*}
\end{proof}
\section{Technical Lemmas}\label{section:lemmas}
We begin with the following useful result of Yin and Li.
\begin{theorem}[Yin and Li~\cite{YiLi05}]\label{thm:cliquegraph}
Let $\pi=(d_1,\dots,d_n)$ be a graphic sequence and $s\geq 1$ be an integer.
\begin{itemize}
\item[(i)] If $d_s\geq s-1$ and $d_{2s}\geq s-2$, then $\pi$ is potentially $K_s$-graphic.
\item[(ii)] If $d_s\geq s-1$ and $d_i\geq 2s-2-i$ for $1\leq i\leq s-1$, then $\pi$ is potentially $K_s$-graphic.
\end{itemize}
\end{theorem}
The following lemma from~\cite{GJL} is an extension of a corresponding result of Rao~\cite{Ra79} for cliques.
\begin{lemma}[Gould, Jacobson and Lehel~\cite{GJL}]\label{lemma:gould}
If $\pi$ is potentially $H$-graphic, then there is a realization $G$ of $\pi$
containing $H$ as a subgraph on the $|V(H)|$ vertices of largest degree in $G$.
\end{lemma}
We next give a simple sufficient condition for a graphic sequence to have a realization containing any tree of order $t$.
\begin{lemma}\label{lemma:tree_pot}
Let $T$ be a tree of order $t$ and let $\pi=(d_1,\dots,d_n)$ be a graphic sequence with $n\geq t$. If $d_{t-1}\geq t-1$, then $\pi$ is potentially $T$-graphic.
\end{lemma}
\begin{proof}
We proceed by induction on $t$. If $t=2$, the claim follows immediately, so suppose that $t>2$ and let $\pi=(d_1,\dots,d_n)$ be a graphic sequence with $d_{t-1}\geq t-1$.
Let $T$ be any tree of order $t$, $v$ be a leaf of $T$, and $T'=T\setminus\{v\}$. Thus, $T'$ is a tree of order $t-1$.
As $\pi$ is a non-increasing sequence, $d_{t-2}\geq t-1>t-2$, so by the induction hypothesis, $\pi$ is potentially $T'$-graphic. By Lemma~\ref{lemma:gould}, there is a realization $G$ of $\pi$ such that $T'$ is a subgraph of $G[v_1,\dots,v_{t-1}]$.
As $\deg_G(v_i)\geq t-1$ for all $i\leq t-1$, $|N_G(v_i)\cap\{v_t,\dots,v_n\}|\geq 1$ for all $i\leq t-1$.
Hence we may attach a leaf to any vertex of $T'$ to attain a copy of $T$, so $\pi$ is potentially $T$-graphic.
\end{proof}
Finally, we combine Lemma~\ref{lemma:tree_pot} and Theorem~\ref{thm:cliquegraph} to obtain the following.
\begin{prop}\label{prop:degrees}
Let $T$ be a tree on $t$ vertices, let $s\leq t-2$, and set $n=t+s-2$.
If $\pi=(d_1,\dots,d_n)$ is a graphic sequence such that
$\pi$ is not potentially $T$-graphic and
$\overline{\pi}$ is not potentially $K_s$-graphic,
then $d_{t-s-1}\geq t$ and $d_t\geq t-s+1$.
\end{prop}
\begin{proof}
If $\pi$ is not potentially $T$-graphic, then $d_{t-1}\leq t-2$. Therefore, $\overline{d}_s=n-1-d_{t-1}\geq s-1$. By Theorem \ref{thm:cliquegraph}, if $\overline{d}_{2s}\geq s-2$, then $\overline{\pi}$ is potentially $K_s$-graphic. As this is not the case, $\overline{d}_{2s}\leq s-3$. Hence, $d_{t-s-1}=n-1-\overline{d}_{2s}\geq t$.
Furthermore, since $\overline{d}_s\geq s-1$, we see that $\overline{d}_i\leq 2s-3-i$ for some $1\leq i\leq s-2$ (else $\overline{\pi}$ is potentially $K_s$-graphic by Theorem~\ref{thm:cliquegraph}). Therefore
\[d_t=n-1-\overline{d}_{s-1}\geq n-1-\overline{d}_i \geq n-1-(2s-4)=t-s+1.\qedhere\]
\end{proof}
\section{Proof of Theorems \ref{theorem:potram_clique_star} and \ref{theorem:tree_clique}}\label{section:proofs}
We begin by proving Theorem \ref{theorem:potram_clique_star}.
\begin{proof}[Proof of Theorem~\ref{theorem:potram_clique_star}]
When $s\leq t-2$, the lower bound is established by Proposition~\ref{prop:LBtreevsnoisol}. When $s\geq t-2$, the lower bound is established by considering the graphic sequence $\pi=(d_1,\dots,d_{2s-1})$ where $d_i=2$ for all $i$.
As $t\geq 4$, $\pi$ is not potentially $K_{1,t-1}$-graphic.
Further, in any realization of $\pi$, the largest independent set has size at most $s-1$, so $\overline{\pi}$ is not potentially $K_s$-graphic.
For the upper bound, consider any graphic sequence $\pi=(d_1,\dotsc,d_n)$ with $n=s+\max\{s,t-2\}$. If $d_1\geq t-1$, then $\pi$ is potentially (in fact forcibly) $K_{1,t-1}$-graphic. So suppose $d_1\leq t-2$. Thus, we have $\overline{d}_n=n-1-d_1\geq n-1-(t-2)=n-t+1\geq s-1$; since also $n\geq2s$, Theorem~\ref{thm:cliquegraph} implies that $\overline{\pi}$ is potentially $K_s$-graphic.
\end{proof}
We are now ready to prove Theorem~\ref{theorem:tree_clique}, the main result of this paper.
\begin{proof}[Proof of Theorem~\ref{theorem:tree_clique}.]
Let $\pi=(d_1,\dots,d_n)$ be a graphic sequence with $n=t+s-2$ such that $\pi$ is not potentially $T$-graphic and $\overline{\pi}$ is not potentially $K_s$-graphic.
(i) Suppose that $\ell(T)\geq s+1$ and let $S$ be any set of $s+1$ leaves of $T$. Let $T'=T\setminus S$, so $T'$ is a tree of order $t-s-1$. By Proposition~\ref{prop:degrees}, $d_{t-s-2}\geq t>t-s-2$, so $\pi$ is potentially $T'$-graphic. Let $G$ be a realization of $\pi$ with $T'$ as a subgraph of $G[v_1,\dots,v_{t-s-1}]$. Because $\deg_G(v_i)\geq t$ for all $i\leq t-s-1$, $|N_G(v_i)\cap\{v_{t-s},\dots,v_{n}\}|\geq s+2$ for all $i\leq t-s-1$. Hence, we can reattach the vertices in $S$ to attain a copy of $T$, contradicting the fact that $\pi$ is not potentially $T$-graphic.
(ii) Suppose instead that $\ell(T)\leq s$. Let $G$ be any realization of $\pi$ and define $G_t=G[v_1,\dots,v_t]$. As $d_t\geq t-s+1$ and $|V(G)|=t+s-2$, $\delta(G_t)\geq d_t-(s-2)=t-2s+3$, so $\Delta(\overline{G_t})=t-1-\delta(G_t)\leq 2s-4$. Because $t>7(s-2)$,
\[
3\Delta(\overline{G_t})+\ell(T)-2\leq 3(2s-4)+s-2=7s-14< t.
\]
By applying Corollary~\ref{cor:sauerspencertree}, we see that $T$ and $\overline{G_t}$ pack, or in other words, $T$ is a subgraph of $G_t$, again contradicting the fact that $\pi$ is not potentially $T$-graphic.
\end{proof}
\end{document} |
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}
{#1}\small\normalsize} \spacingset{1}
\title{\bf Selection of Regression Models under Linear Restrictions for Fixed and Random Designs}
\author{Sen Tian\footnote{E-mail: st1864@stern.nyu.edu} \quad Clifford M. Hurvich \quad Jeffrey S. Simonoff \\\\
Department of Technology, Operations, and Statistics, \\Stern School of Business, New York University.}
\date{}
\maketitle
\begin{abstract}
Many important modeling tasks in linear regression, including variable selection (in which slopes of some predictors are set equal to zero) and simplified models based on sums or differences of predictors (in which slopes of those predictors are set equal to each other, or the negative of each other, respectively), can be viewed as being based on imposing linear restrictions on regression parameters. In this paper, we discuss how such models can be compared using information criteria designed to estimate predictive measures like squared error and Kullback-Leibler (KL) discrepancy, in the presence of either deterministic predictors (fixed-X) or random predictors (random-X). We extend the justifications for existing fixed-X criteria C$_p$, FPE and AICc, and random-X criteria S$_p$ and RC$_p$, to general linear restrictions. We further propose and justify a KL-based criterion, RAICc, under random-X for variable selection and general linear restrictions. We show in simulations that the use of the KL-based criteria AICc and RAICc results in better predictive performance and sparser solutions than the use of squared error-based criteria, including cross-validation. Supplemental material containing the technical details of the theorems is attached at the end of the main document. The computer code to reproduce the results for this article, and the complete set of simulation results, are available online\footnote{https://github.com/sentian/RAICc.}.
\end{abstract}
\noindent
{\it Keywords:} AICc; C$_p$; Information criteria; Optimism; Random-X
\section{Introduction}
\subsection{Model selection under linear restrictions}
Consider a linear regression problem with an $n\times 1$ response vector $y$ and an $n\times p$ design matrix $X$. The true model is generated from
\begin{equation}
y=X \beta_0 + \epsilon,
\label{eq:truemodel}
\end{equation}
where $\beta_0$ is a $p \times 1$ true coefficient vector, and the $n \times 1$ vector $\epsilon$ is independent of $X$, with $\{\epsilon_i\}_{i=1}^n \stackrel {iid} {\sim} N(0,\sigma_0^2)$. Note that the subscript $0$ in $\beta_0$ denotes the true parameter vector; it does not refer to an intercept term.
We consider an approximating model
\begin{equation*}
y=X \beta + u,
\end{equation*}
where $\beta$ is $p \times 1$ and the $n \times 1$ vector $u$ is independent of $X$, with $\{u_i\}_{i=1}^n \stackrel {iid} {\sim} N(0,\sigma^2)$. For this approximating model, we further impose $m$ linear restrictions on the coefficient vectors $\beta$ that are given by
\begin{equation}
R \beta = r,
\label{eq:restriction}
\end{equation}
where $R$ is an $m \times p$ matrix with linearly independent rows ($\text{rank}(R)=m$) and $r$ is an $m \times 1$ vector. Both $R$ and $r$ are nonrandom. Examples of such restrictions include setting some slopes equal to 0 (which corresponds to variable selection), setting slopes equal to each other (which corresponds to using the sum of predictors in a model), and setting sums of slopes to 0 (which for pairs of predictors corresponds to using the difference of the predictors in a model).
Suppose first that $X$ is deterministic; we refer to this as the fixed-X design. Denote by $f(y_i|x_i,\beta,\sigma^2)$ the density of $y_i$ conditional on $x_i$, the $i$-th row of $X$. We have the log-likelihood function (multiplied by $-2$)
\begin{equation}
-2 \log f(y|X,\beta,\sigma^2) = -2 \sum_{i=1}^n \log f(y_i|x_i,\beta,\sigma^2) = n \log (2\pi \sigma^2) + \frac{1}{\sigma^2} || y-X\beta||_2^2.
\label{eq:loglike_fixedx}
\end{equation}
By minimizing \eqref{eq:loglike_fixedx} subject to \eqref{eq:restriction}, we obtain the restricted maximum likelihood estimator (MLE)
\begin{equation}
\begin{aligned}
\hat{\beta} &= \hat{\beta}^f + (X^T X)^{-1} R^T ( R(X^T X)^{-1} R^T)^{-1} (r-R \hat{\beta}^f),\\
\hat \sigma^2 &= \frac{1}{n} ||y-X \hat{\beta}||^2,
\end{aligned}
\label{eq:betahat_sigmahatsq}
\end{equation}
where $\hat{\beta}^f = (X^T X)^{-1} X^T y$ is the unrestricted least squares estimator. Since the errors are assumed to be Gaussian, $\hat\beta$ is also the restricted least squares estimator.
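As an illustration, the restricted estimator \eqref{eq:betahat_sigmahatsq} can be computed directly with numpy. This is a sketch for a toy restriction $\beta_1=\beta_2$; the function and variable names are ours, not part of the paper:

```python
import numpy as np

def restricted_ls(X, y, R, r):
    """Restricted MLE / least squares under R beta = r, eq. (betahat_sigmahatsq)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_f = XtX_inv @ X.T @ y                 # unrestricted OLS estimator
    adj = XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, r - R @ beta_f)
    beta_hat = beta_f + adj                    # restricted estimator
    sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / len(y)
    return beta_hat, sigma2_hat

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = rng.standard_normal(50)
R = np.array([[1.0, -1.0, 0.0, 0.0]])          # restriction: beta_1 = beta_2
r = np.array([0.0])
beta_hat, s2 = restricted_ls(X, y, R, r)
```

By construction the adjustment term forces $R\hat\beta=r$ exactly, whatever the data.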
In practice, a sequence of estimators $\hat\beta(R_i,r_i|X,y)$, each based on a different set of restrictions, is often generated, and the goal is to choose the one with the best predictive performance. This can be done on the basis of information criteria, which are designed to estimate the predictive accuracy for each considered model. Note that the notion of predictive accuracy can be as simple as distance of a predicted value from a future value, as is the case in squared-error prediction measures, but also can encompass the more general idea that the log-likelihood is a measure of the accuracy of a fitted distribution as a prediction for the distribution of a future observation. This idea can be traced back to \citet{Akaike1973}, as noted in an interview with Akaike \citep{findley1995conversation}; see also \citet{efron1986biased}.
\subsection{Variable selection under fixed-X}
\label{sec:intro_subsetselection}
An important example of comparing models with different linear restrictions on $\beta$ is variable selection. We consider fitting the ordinary least squares (OLS) estimator on a predetermined subset of predictors with size $k$, and without loss of generality, the subset includes the first $k$ predictors of $X$, i.e. $\hat\beta^f(X_1,\cdots,X_k,y)$. By letting $R_k= \irow{0 & I_{p-k}}_{(p-k) \times p} $ and $r_k = \irow{0}_{(p-k) \times 1}$, it is easy to verify that $\hat{\beta}(R_k,r_k|X,y)=\hat\beta^f(X_1,\cdots,X_k,y)$. Therefore, comparing OLS fits on different subsets of predictors falls into the framework of comparing estimators with different linear restrictions on $\beta$.
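The claim that the restriction $R_k\beta=r_k$ reproduces the OLS fit on the first $k$ predictors can be checked numerically (an illustrative sketch; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 40, 6, 3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Restriction R_k beta = 0 zeroes out the last p-k coefficients.
R = np.hstack([np.zeros((p - k, k)), np.eye(p - k)])
XtX_inv = np.linalg.inv(X.T @ X)
beta_f = XtX_inv @ X.T @ y
beta_res = beta_f - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ beta_f)

# OLS on the first k predictors only.
beta_sub = np.linalg.lstsq(X[:, :k], y, rcond=None)[0]
```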
Information criteria are designed to provide an unbiased estimate of the test error. We simplify the notation by denoting $\hat\beta(k) = \hat{\beta}(R_k,r_k|X,y)$. We also denote errF as the in-sample training error and ErrF as the out-of-sample test error. errF measures how well the estimated model fits on the training data $(X,y)$, while ErrF measures how well the estimated model predicts the new test data $(X,\tilde{y})$, where $\tilde{y}$ is an independent copy of the original response $y$, i.e. $\tilde{y}$ is drawn from the conditional distribution of $y|X$. The notations of errF and ErrF are based on those in \citet{efron2004estimation}, and the notation F here indicates that we have a fixed-$X$ design. \citet{efron1986biased} defined the optimism of a fitting procedure as the difference between the test error and the training error, i.e.
\begin{equation*}
\text{optF} = \text{ErrF} - \text{errF},
\end{equation*}
and introduced the optimism theorem,
\begin{equation*}
E_y(\text{optF}) = E_y(\text{ErrF}) - E_y(\text{errF}),
\end{equation*}
where $E_y$ represents the expectation taken under the true model with respect to the random variable $y$. The optimism theorem provides an elegant framework to obtain an unbiased estimator of $E(\text{ErrF})$,
that is
\begin{equation*}
\widehat{\text{ErrF}} = \text{errF} + E_y(\text{optF}),
\end{equation*}
where the notation $\widehat{\text{ErrF}}$ follows from \citet{efron2004estimation}. It turns out that many existing information criteria can be derived using the concept of optimism.
A typical measure of the discrepancy between the true model and an approximating model is the squared error (SE), i.e.
\begin{equation*}
\text{ErrF}_\text{SE} = E_{\tilde{y}}\left( \lVert \tilde{y}-X\hat{\beta} \rVert_2^2 \right).
\end{equation*}
The training error is $\text{errF}_\text{SE} = \displaystyle \lVert y-X\hat{\beta} \rVert_2^2$. \citet{ye1998measuring} and \citet{efron2004estimation} showed that for any general fitting procedure $\hat\mu$ and any model distribution (not necessarily Gaussian)
\begin{equation}
E_y(\text{optF}_\text{SE}) = 2\sum_{i=1}^n \text{Cov}_y(\hat\mu_i,y_i),
\label{eq:EoptF_SE}
\end{equation}
which is often referred to as the covariance penalty. For the OLS estimator $\hat\mu(k) = X\hat\beta(k)$ it is easy to verify that $E_y(\text{optF}_\text{SE}(k)) = 2 \sigma_0^2 k$. We denote RSS$(k)$ as the residual sum of squares for the OLS estimator, i.e. $\text{RSS}(k)=\lVert y- X\hat\beta(k) \rVert_2^2$. Hence,
\begin{equation*}
\widehat{\text{ErrF}}_\text{SE}(k) = \text{RSS}(k) + 2 \sigma_0^2 k
\end{equation*}
is an unbiased estimator of $E_y(\text{ErrF}_\text{SE})$. As suggested by \citet{mallows1973some}, typically $\sigma_0^2$ is estimated using the OLS fit on all the predictors, i.e. $\hat\sigma_0^2=\text{RSS}(p)/(n-p)$. We then obtain the Mallows' C$_p$ criterion \citep{mallows1973some}
\begin{equation}
\text{C}_p(k) = \text{RSS}(k) + \frac{\text{RSS}(p)}{n-p} 2k.
\label{eq:cp_subsetselection}
\end{equation}
An alternative is to use the OLS fit based on the $k$ predictors in the subset to estimate $\sigma_0^2$, i.e. $\hat\sigma_0^2 = \text{RSS}(k)/(n-k)$, which yields the final prediction error \citep{akaike1969fitting,akaike1970statistical}
\begin{equation}
\text{FPE}(k) = \text{RSS}(k)\frac{n+k}{n-k}.
\label{eq:cptilde_subsetselection}
\end{equation}
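For concreteness, C$_p$ and FPE are straightforward to compute from the residual sums of squares of nested OLS fits (a numpy sketch with simulated data; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 6
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)

def rss(k):
    """RSS of the OLS fit on the first k predictors."""
    beta = np.linalg.lstsq(X[:, :k], y, rcond=None)[0]
    return np.sum((y - X[:, :k] @ beta) ** 2)

def mallows_cp(k):
    # Mallows' Cp with sigma0^2 estimated from the full OLS fit.
    return rss(k) + rss(p) / (n - p) * 2 * k

def fpe(k):
    # Final prediction error: sigma0^2 estimated from the subset fit.
    return rss(k) * (n + k) / (n - k)
```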
Another commonly-used error measure is (twice) the Kullback-Leibler (KL) divergence \citep[see, e.g.,][Section 3]{konishi2008information}
\begin{equation}
\text{KLF} = E_{\tilde{y}}\left[ 2\log f(\tilde{y} | X,\beta_0,\sigma_0^2 ) - 2\log f(\tilde{y} | X,\hat\beta,\hat\sigma^2 ) \right].
\label{eq:KLF}
\end{equation}
The right-hand side of \eqref{eq:KLF} evaluates the predictive accuracy of the fitted model, by measuring the closeness of the distribution of $\tilde{y}$ based on the fitted model and the distribution of $\tilde{y}$ based on the true model. The term $E_{\tilde{y}}\left[ 2\log f(\tilde{y} | X,\beta_0,\sigma_0^2 ) \right]$ is the same for every fitted model. Therefore, an equivalent error measure is the expected likelihood
\begin{equation*}
\text{ErrF}_\text{KL} = E_{\tilde{y}}\left[ -2\log f(\tilde{y} | X,\hat\beta,\hat\sigma^2 ) \right].
\end{equation*}
The training error is
\begin{equation*}
\text{errF}_\text{KL} = -2\log f(y|X,\hat\beta,\hat\sigma^2).
\end{equation*}
For the OLS estimator $\hat\beta(k)$, \citet{sugiura1978further} and \citet{hurvich1989regression} showed that under the Gaussian error \eqref{eq:truemodel}
\begin{equation*}
E_y(\text{optF}_\text{KL}(k)) = n\frac{n+k}{n-k-2}-n,
\end{equation*}
and hence
\begin{equation*}
\widehat{\text{ErrF}}_\text{KL}(k) = n\log\left(\frac{\text{RSS}(k)}{n}\right) + n\frac{n+k}{n-k-2} + n\log(2\pi)
\end{equation*}
is an unbiased estimator of $E_y(\text{ErrF}_\text{KL})$. Since the term $n\log(2\pi)$ appears in all of the models being compared, and thus is irrelevant when comparing criteria for the models, the authors dropped it and introduced the corrected AIC
\begin{equation}
\text{AICc}(k) = n \log\left( \frac{\text{RSS}(k)}{n}\right) + n\frac{n+k}{n-k-2}.
\label{eq:aicc_subsetselection}
\end{equation}
\citet{Hurvich1991} showed that AICc has superior finite-sample predictive performance compared to AIC \citep{Akaike1973}
\begin{equation*}
\text{AIC}(k) = n \log\left( \frac{\text{RSS}(k)}{n}\right) + n + 2(k+1),
\end{equation*}
which does not require a Gaussian error assumption but relies on asymptotic results. The derivations of AICc and AIC require the assumption that the true model is included in the approximating models. Neither AICc nor AIC involves $\sigma_0^2$, a clear advantage over C$_p$. Note that the second term of AICc can be rewritten as $n[1 + (2k+2)/(n-k-2)]$, which approximately equals the sum of the second and third terms of AIC when $n$ is large relative to $k$, demonstrating their asymptotic equivalence when $n\rightarrow\infty$ and $p$ is fixed.
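The exact rewriting of the AICc penalty term, and its agreement with AIC for large $n$, can be verified numerically (a sketch; the function names are ours):

```python
import numpy as np

def aic(rss, n, k):
    return n * np.log(rss / n) + n + 2 * (k + 1)

def aicc(rss, n, k):
    return n * np.log(rss / n) + n * (n + k) / (n - k - 2)

# Exact algebraic rewriting of the AICc penalty term:
n, k = 50, 4
penalty = n * (n + k) / (n - k - 2)
rewritten = n * (1 + (2 * k + 2) / (n - k - 2))

# For large n the AICc and AIC values nearly coincide.
gap_large_n = aicc(1.0, 10**6, 4) - aic(1.0, 10**6, 4)
```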
\subsection{From fixed-X to random-X}
The assumption that $X$ is fixed holds in many applications, for example in a designed experiment where categorical predictors are represented using indicator variables or effect codings. However, in many other cases where the data are observational and the experiment is conducted in an uncontrolled manner, fixed-X is not valid and it is more appropriate to treat $(x_i,y_i)_{i=1}^n$ as $iid$ random draws from the joint distribution of $X$ and $y$. We refer to this as the random-X design.
As noted by \citet{breiman1992submodel}, the choice between fixed-X and random-X is conceptual, and is normally determined based on the nature of the study. The extra source of randomness from $X$ results in larger test error compared to the fixed-X situation, and therefore the information criteria designed under fixed-X can be biased estimates of the random-X test error. Furthermore, when applied as selection rules, the authors showed in simulations that C$_p$ leads to significant overfitting under the random-X design. This motivates the derivation of information criteria for the random-X situation.
For the random-X design, we assume that the row vectors of $X$, $\{x_i\}_{i=1}^n$, are $iid$ multivariate normal with mean $E(x_i)=0$ and covariance matrix $E(x_i x_i^T)=\Sigma_0$. Let $f(y_i,x_i|\beta,\sigma^2,\Sigma)$ denote the joint multivariate normal density for $y_i$ and $x_i$. Let $g(x_i|\Sigma)$ denote the multivariate normal density for $x_i$. By partitioning the joint density of $(y,X)$ into the product of the conditional and marginal densities, and by separating the parameters of interest, we have the log-likelihood function (multiplied by $-2$)
\begin{equation}
\begin{aligned}
-2 \log f(y, X|\beta,\sigma^2,\Sigma) &= \sum_{i=1}^n -2 \log f(y_i, x_i|\beta,\sigma^2,\Sigma) = -2 \sum_{i=1}^n [\log f(y_i|x_i,\beta,\sigma^2) + \log g(x_i|\Sigma)] \\
&= \left [ n \log (2\pi \sigma^2) + \frac{1}{\sigma^2} || y-X\beta||_2^2 \right ] + \left [np \log(2\pi) + n \log |\Sigma| + \sum_{i=1}^n x_i^T \Sigma^{-1} x_i \right ].
\end{aligned}
\label{eq:loglike_randomx}
\end{equation}
Minimizing \eqref{eq:loglike_randomx} subject to \eqref{eq:restriction}, we find that the MLE $(\hat{\beta}, \hat{\sigma}^2)$ of $(\beta, \sigma^2)$ remains the same as in the fixed-X design, i.e. \eqref{eq:betahat_sigmahatsq}. The MLE of $\Sigma$ is given by
\begin{equation*}
\hat \Sigma = \frac{1}{n} \sum_{i=1}^n x_i \, x_i^T = \frac{1}{n} X^T X.
\end{equation*}
Since $\hat\beta$ is unchanged when we move from fixed-X to random-X, variable selection as an example of linear restrictions on $\beta$ is based on the same parameter estimates as in Section \ref{sec:intro_subsetselection}. Denote errR, ErrR and optR as the training error, test error and the optimism under random-X, respectively. We generate $X^{(n)}$ as an independent copy of $X$, where the rows $\{x_i^{(n)}\}_{i=1}^n$ are $iid$ multivariate normal $\mathcal{N}(0,\Sigma_0)$. The new copy of the response $y^{(n)}$ is generated from the conditional distribution $y|X^{(n)}$. The optimism for random-X can be defined in the same way as for fixed-X, i.e. $\text{optR}=\text{ErrR}-\text{errR}$. \citet{rosset2020fixed} discussed the optimism for general fitting procedures, when the discrepancy between the true and approximating models is measured by the squared error (SE), i.e.
\begin{equation*}
\text{ErrR}_\text{SE} = E_{X^{(n)},y^{(n)}} \left( \lVert y^{(n)} - X^{(n)} \hat\beta \rVert_2^2 \right).
\end{equation*}
The training error is $\text{errR}_\text{SE} = \lVert y - X \hat\beta \rVert_2^2$. For the OLS estimator, the authors showed that
\begin{equation*}
E_{X,y}(\text{optR}_\text{SE}(k)) = \sigma_0^2 k \left(2 + \frac{k+1}{n-k-1} \right),
\end{equation*}
and hence
\begin{equation*}
\widehat{\text{ErrR}}_\text{SE}(k) = \text{RSS}(k) + \sigma_0^2 k \left(2 + \frac{k+1}{n-k-1} \right)
\end{equation*}
is an unbiased estimator of $E_{X,y}(\text{ErrR}_\text{SE})$. The result holds for arbitrary joint distributions of $(x_i,y_i)$, and it requires only that $x_i$ is marginally normal. As in the fixed-X case, if we use the unbiased estimate of $\sigma_0^2$ based on the full OLS fit, we have the analog of the C$_p$ rule for random-X,
\begin{equation}
\text{RC}_p(k) = \text{RSS}(k) + \frac{\text{RSS}(p)}{n-p} k\left(2 + \frac{k+1}{n-k-1}\right).
\label{eq:rcp_subsetselection}
\end{equation}
If we use the alternative estimate of $\sigma_0^2$ based on the OLS fit on the $k$ predictors in the subset, i.e. $\hat\sigma_0^2=\text{RSS}(k)/(n-k)$, we have the analog of the FPE rule for random-X,
\begin{equation}
\text{S}_p(k) = \text{RSS}(k)\frac{n(n-1)}{(n-k)(n-k-1)}.
\label{eq:sp_subsetselection}
\end{equation}
\citet{hocking1976biometrics} refers to \eqref{eq:sp_subsetselection} as the S$_p$ criterion of \citet{sclove1969criteria}; see also \citet{thompson1978a,thompson1978b}. Note that the notation used here is slightly different from that in \citet{rosset2020fixed}, where the authors used RC$_p$ to denote the infeasible criterion involving $\sigma_0^2$ and used $\widehat{\text{RC}}_p$ to denote the feasible criterion S$_p$. The RC$_p$ criterion in our notation was not studied in their paper.
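The S$_p$ formula \eqref{eq:sp_subsetselection} can be checked against the plug-in derivation described above, i.e. substituting $\hat\sigma_0^2=\text{RSS}(k)/(n-k)$ into the unbiased random-X error estimate (a numeric sketch; names are ours):

```python
import numpy as np

def sp(rss_k, n, k):
    return rss_k * n * (n - 1) / ((n - k) * (n - k - 1))

# Plug sigma0^2_hat = RSS(k)/(n-k) into RSS(k) + sigma0^2 k (2 + (k+1)/(n-k-1)).
rss_k, n, k = 7.5, 40, 6
plug_in = rss_k + rss_k / (n - k) * k * (2 + (k + 1) / (n - k - 1))
```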
Another class of selection rules is cross-validation (CV), which does not impose parametric assumptions on the model. A commonly used type of CV is the so-called K-fold CV. The data are randomly split into K equal folds. For each fold, the model is fitted using data in the remaining folds and is evaluated on the current fold. The process is repeated for all K folds, and an average squared error is obtained. In particular, the n-fold CV or leave-one-out (LOO) CV provides an approximately unbiased estimator of the test error under the random-X design, i.e. $E_{X,y}(\text{ErrR}_\text{SE})$. \citet{burman1989comparative} showed that for OLS, LOOCV has the smallest bias and variance in estimating the squared error-based test error, among all K-fold CV estimators. LOOCV is generally not preferred due to its large computational cost, but for OLS, the LOOCV error estimate has an analytical expression: the predicted residual sum of squares (PRESS) statistic \citep{allen1974relationship}
\begin{equation*}
\text{PRESS}(k) = \sum_{i=1}^n \left( \frac{y_i-x_i^T\hat\beta(k)}{1-H_{ii}(k)} \right)^2,
\end{equation*}
where $H(k) = X(k)(X(k)^T X(k))^{-1}X(k)^T$ and $X(k)$ contains the first $k$ columns of $X$.
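The identity between PRESS and explicit leave-one-out squared error for OLS can be confirmed numerically (an illustrative sketch; names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 3
X = rng.standard_normal((n, k))
y = rng.standard_normal(n)

# PRESS from a single full-data fit, via the hat matrix.
H = X @ np.linalg.inv(X.T @ X) @ X.T
resid = y - H @ y
press = np.sum((resid / (1 - np.diag(H))) ** 2)

# Explicit leave-one-out cross-validation for comparison.
loo = 0.0
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo += (y[i] - X[i] @ b) ** 2
```

The two quantities agree exactly (up to floating-point error), so PRESS delivers LOOCV at the cost of one fit.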
\subsection{General linear restrictions}
Variable selection is a special case of linear restrictions on $\beta$, in which certain entries of $\beta$ are restricted to be zero. In practice, we may restrict predictors to have the same coefficient (e.g. $\beta_1=\beta_2=\beta_3$), or we may restrict the sum of their effects (e.g. $\beta_1+\beta_2+\beta_3=1$). Using the structure in \eqref{eq:restriction}, we formulate a sequence of models, each of which imposes a set of general restrictions on $\beta$, and the goal is to select the model with the best predictive performance. The information criteria and PRESS statistic defined previously cannot be applied directly to this problem, although \citet{tarpey2000note} derived the PRESS statistic for the estimator under general restrictions as
\begin{equation*}
\text{PRESS}(R,r) = \sum_{i=1}^n \left( \frac{y_i-x_i^T\hat\beta}{1-H_{ii}+{H_Q}_{ii}} \right)^2,
\end{equation*}
where $H=X(X^T X)^{-1} X^T$ and $H_Q = X (X^T X)^{-1} R^T \left[ R (X^T X)^{-1} R^T \right]^{-1} R (X^T X)^{-1} X^T$.
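For the homogeneous case $r=0$, the restricted estimator is a linear smoother with hat matrix $H-H_Q$, and the restricted PRESS statistic can be checked against explicit leave-one-out restricted fits (a sketch under the assumption $r=0$; names are ours):

```python
import numpy as np

def restricted_fit(X, y, R):
    """Restricted least squares under the homogeneous restriction R beta = 0."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_f = XtX_inv @ X.T @ y
    return beta_f - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ beta_f)

rng = np.random.default_rng(3)
n, p = 25, 4
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
R = np.array([[1.0, -1.0, 0.0, 0.0]])          # beta_1 = beta_2, r = 0

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T
HQ = X @ XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ XtX_inv @ X.T)
beta_hat = restricted_fit(X, y, R)
resid = y - X @ beta_hat
press = np.sum((resid / (1 - np.diag(H) + np.diag(HQ))) ** 2)

# Cross-check with explicit leave-one-out restricted fits.
loo = 0.0
for i in range(n):
    mask = np.arange(n) != i
    b = restricted_fit(X[mask], y[mask], R)
    loo += (y[i] - X[i] @ b) ** 2
```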
\subsection{The contribution of this paper}
The information criteria introduced in Section \ref{sec:intro_subsetselection} have been studied primarily in the context of variable selection problems under fixed-X. In this paper we discuss how such criteria can be generalized to model comparison under general linear restrictions with either a fixed-X or a random-X (in both cases including the special case of variable selection). Note that a selection rule is preferred if it chooses the models that lead to the best predictive performance. This is related to, but not the same as, providing the best estimate of the test error. These two goals are fundamentally different \citep[see, e.g.,][Section 7]{hastie2009elements}, and we focus on the predictive performance of the selected model.
In Section \ref{sec:ic_fixedx}, we consider the fixed-X situation and derive general versions of AICc, C$_p$ and FPE for arbitrary linear restrictions on $\beta$. Random-X is assumed in Section \ref{sec:ic_randomx} and a version of RC$_p$ and S$_p$ for general linear restrictions is obtained. Furthermore, we propose and justify a novel criterion, RAICc, for general linear restrictions and discuss its connections with AICc. We further show that expressions of the information criteria for variable selection problems can be recovered as special cases of their expressions derived under general restrictions. In Section \ref{sec:simulation}, we show via simulations that AICc and RAICc provide consistently strong predictive performance for both variable selection and general restriction problems. Lastly, in Section \ref{sec:conclusion}, we provide conclusions and discussions of potential future work.
\section{ Information criteria for fixed-X }
\label{sec:ic_fixedx}
\subsection{KL-based information criterion}
Using the likelihood function \eqref{eq:loglike_fixedx} and the MLE \eqref{eq:betahat_sigmahatsq}, the expected log-likelihood can be derived as
\begin{equation*}
\begin{aligned}
\text{ErrF}_\text{KL} &= E_{\tilde{y}} [-2 \log f( \tilde{y} | X,\hat\beta,\hat\sigma^2 )] = n \log (2\pi \hat\sigma^2) + \frac{1}{\hat\sigma^2} E_{\tilde{y}} || \tilde{y}-X\hat\beta||_2^2 \\
&= n \log (2\pi \hat\sigma^2) + \frac{1}{\hat\sigma^2} (\hat\beta-\beta_0)^T X^T X (\hat\beta-\beta_0) + \frac{n\sigma_0^2}{\hat\sigma^2},
\end{aligned}
\end{equation*}
and the training error is
\begin{equation*}
\text{errF}_\text{KL} = -2\log f(y|X,\hat\beta,\hat\sigma^2) = n\log(2\pi\hat\sigma^2) + n.
\end{equation*}
In the context of variable selection, the assumption that the approximating model includes the true model is used in the derivations of AIC \citep{linhart1986model} and AICc \citep{hurvich1989regression}. This assumption can be generalized to the context of general restrictions.
\begin{assumption}
If the approximating model satisfies the restrictions $R\beta = r$, then the true model satisfies the analogous restrictions $R\beta_0 = r$; that is, the true model is at least as restrictive as the approximating model.
\label{assumption}
\end{assumption}
Under this assumption, we have the following lemma. The proofs for all of the lemmas and theorems in this paper are given in the Supplemental Material.
\begin{lemma}
Under Assumption \ref{assumption}, $\hat\sigma^2$ and the quadratic form $(\hat \beta-\beta_0)^T X^T X (\hat \beta-\beta_0)$ are independent, and
\begin{equation*}
\begin{aligned}
n \sigma_0^2 E_y\left[ \frac{1}{\hat{\sigma}^2} \right] &= n\frac{n}{n-p+m-2},\\
E_y \left [ (\hat \beta-\beta_0)^T X^T X (\hat \beta-\beta_0) \right ] &= \sigma_0^2 (p-m).
\end{aligned}
\end{equation*}
\label{thm:components_ekl_lr_fixedx}
\end{lemma}
Lemma \ref{thm:components_ekl_lr_fixedx} provides the fundamentals for calculating the expected optimism.
\begin{theorem}
Under Assumption \ref{assumption},
\begin{equation*}
E_y(\text{optF}_\text{KL}) = n \frac{n+p-m}{n-p+m-2} - n.
\end{equation*}
\label{thm:EoptF_KL}
\end{theorem}
Consequently,
\begin{equation*}
\widehat{\text{ErrF}}_\text{KL} = \text{errF}_\text{KL} + E_y(\text{optF}_\text{KL}) = n\log(\hat\sigma^2) + n \frac{n+p-m}{n-p+m-2} + n\log(2\pi)
\end{equation*}
is an unbiased estimator of the test error $E_y \left[ \text{ErrF}_\text{KL} \right]$. As in the derivations of AIC and AICc, the term $n\log(2\pi)$ appears in $\widehat{\text{ErrF}}_\text{KL}$ for every model being compared and is therefore irrelevant for model selection. We ignore this term and define
\begin{equation*}
\text{AICc}(R,r) = n\log \left( \frac{\text{RSS}(R,r)}{n} \right) + n \frac{n+p-m}{n-p+m-2},
\end{equation*}
where RSS$(R,r)= \lVert y -X\hat\beta \rVert_2^2$. For the variable selection problem, e.g. regressing on a subset of predictors with size $k$, we are restricting $p-k$ slope coefficients to be zero. By plugging $\hat\beta = \hat\beta(k)$ and $m=p-k$ into the expressions of AICc$(R,r)$, we obtain AICc$(k)$ given in \eqref{eq:aicc_subsetselection}.
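The reduction of AICc$(R,r)$ to AICc$(k)$ when $m=p-k$ can be verified numerically (a sketch; the function names are ours):

```python
import numpy as np

def aicc_general(rss, n, p, m):
    # AICc under m general linear restrictions.
    return n * np.log(rss / n) + n * (n + p - m) / (n - p + m - 2)

def aicc_subset(rss, n, k):
    # AICc for OLS on a subset of k predictors.
    return n * np.log(rss / n) + n * (n + k) / (n - k - 2)

n, p, k, rss = 60, 10, 4, 3.2
```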
\subsection{Squared error-based information criterion}
The covariance penalty \eqref{eq:EoptF_SE} is defined for any general fitting procedure. By explicitly calculating the covariance term for $\hat\mu=X\hat\beta$, we can obtain the expected optimism.
\begin{theorem}
\begin{equation*}
E_y (\text{optF}_\text{SE}) = 2 \sigma_0^2 (p-m).
\end{equation*}
\label{thm:EoptF_SE}
\end{theorem}
An immediate consequence of this is that
\begin{equation*}
\widehat{\text{ErrF}}_\text{SE} = \text{errF}_\text{SE} + E_y(\text{optF}_\text{SE}) = \text{RSS}(R,r) + 2 \sigma_0^2 (p-m)
\end{equation*}
is an unbiased estimator of $E_y(\text{ErrF}_\text{SE})$. Using the unbiased estimator of $\sigma_0^2$ given by the OLS fit based on all of the predictors, i.e. $\hat\sigma_0^2=\text{RSS}(p)/(n-p)$, we define
\begin{equation*}
\text{C}_p(R,r) = \text{RSS}(R,r) + \frac{\text{RSS}(p)}{n-p} 2(p-m).
\end{equation*}
An alternative estimate of $\sigma_0^2$ is $\text{RSS}(R,r)/(n-p+m)$, which yields
\begin{equation*}
\text{FPE}(R,r) = \text{RSS}(R,r)\frac{n+p-m}{n-p+m}.
\end{equation*}
For the variable selection problem, by substituting $m=p-k$ into the expressions of C$_p$ and FPE, we obtain the previously-noted definitions of them, i.e. C$_p$(k) and FPE(k) given in \eqref{eq:cp_subsetselection} and \eqref{eq:cptilde_subsetselection}, respectively.
\section{ Information criteria for random-X }
\label{sec:ic_randomx}
\subsection{KL-based information criterion, RAICc}
We replace the unknown parameters by their MLEs, yielding the fitted model $f(\cdot|\hat\beta,\hat\sigma^2,\hat\Sigma)$. The KL information measures how well the fitted model predicts the new set of data $(X^{(n)},y^{(n)})$, in terms of the closeness of the distributions of $(X^{(n)},y^{(n)})$ under the fitted model and under the true model, i.e.
\begin{equation}
\text{KLR} = E_{X^{(n)},y^{(n)}} \left[ 2\log f(X^{(n)},y^{(n)}|\beta_0,\sigma_0^2,\Sigma_0) -2 \log f(X^{(n)},y^{(n)}|\hat\beta,\hat\sigma^2,\hat\Sigma) \right].
\label{eq:KLR}
\end{equation}
An equivalent form for model comparisons is the expected log-likelihood
\begin{equation*}
\begin{aligned}
&\text{ErrR}_\text{KL} = E_{X^{(n)},y^{(n)}} \left[ -2 \log f(X^{(n)},y^{(n)}|\hat\beta,\hat\sigma^2,\hat\Sigma) \right] \\
&= \left[ n \log (2\pi \hat\sigma^2) + \frac{1}{\hat\sigma^2} E_{X^{(n)},y^{(n)}} || y^{(n)}-X^{(n)}\hat\beta||_2^2 \right ] + \left [np \log(2\pi) + n \log |\hat\Sigma| + E_{X^{(n)}} \left(\sum_{i=1}^n {x_{i}^{(n)}}^T \hat\Sigma^{-1} x_{i}^{(n)} \right) \right]\\
&= \left[ n \log (2\pi \hat\sigma^2) + \frac{n}{\hat\sigma^2} (\hat\beta-\beta_0)^T \Sigma_0 (\hat\beta-\beta_0) + \frac{n\sigma_0^2}{\hat\sigma^2} \right ] + \left [np \log(2\pi) + n \log |\hat\Sigma| + n \text{Tr}(\hat\Sigma^{-1}\Sigma_{0})\right],
\end{aligned}
\end{equation*}
and the training error is
\begin{equation*}
\text{errR}_\text{KL} = -2\log f(X,y|\hat\beta,\hat\sigma^2,\hat\Sigma) = \left [ n \log (2\pi \hat\sigma^2) + n \right ] + \left [np \log(2\pi) + n \log |\hat\Sigma| + np \right ].
\end{equation*}
As in the fixed-X case, we assume that the true model satisfies the restrictions, i.e. $R\beta_0=r$, and we obtain the following lemma.
\begin{lemma}
Under Assumption \ref{assumption}, $\hat\sigma^2$ and $(\hat{\beta}-\beta_0)^T \Sigma_0 (\hat{\beta}-\beta_0)$ are independent conditionally on $X$, and
\begin{equation*}
\begin{aligned}
E \left[ \text{Tr}(\hat \Sigma^{-1}\Sigma_0) \right] &= \frac{np}{n-p-1},\\
E_{X,y} \left [ (\hat \beta-\beta_0)^T \Sigma_0 (\hat \beta-\beta_0) \right ] &= \sigma_0^2 \frac{p-m}{n-p+m-1}.
\end{aligned}
\end{equation*}
\label{thm:components_ekl_lr_randomx}
\end{lemma}
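As a sanity check, and not as part of the proof, the first identity in Lemma \ref{thm:components_ekl_lr_randomx} can be verified by Monte Carlo simulation in the special case $\Sigma_0=I_p$ (a sketch; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, reps = 30, 3, 20000

# With Sigma_0 = I_p, Tr(Sigma_hat^{-1} Sigma_0) = n * Tr((X^T X)^{-1}).
vals = np.empty(reps)
for j in range(reps):
    X = rng.standard_normal((n, p))
    vals[j] = n * np.trace(np.linalg.inv(X.T @ X))

estimate = vals.mean()            # should approximate n*p/(n-p-1)
target = n * p / (n - p - 1)
```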
Lemma \ref{thm:components_ekl_lr_randomx} provides the components for calculating the expected optimism.
\begin{theorem}
Under Assumption \ref{assumption},
\begin{equation*}
E_{X,y}(\text{optR}_\text{KL}) = n \frac{n(n-1)}{(n-p+m-2)(n-p+m-1)} + n \frac{np}{n-p-1} - n(p+1).
\end{equation*}
\label{thm:EoptR_KL}
\end{theorem}
Consequently,
\begin{equation*}
\begin{aligned}
\widehat{\text{ErrR}}_\text{KL} &= \text{errR}_\text{KL} + E_{X,y} (\text{optR}_\text{KL}) \\
&=n\log \left(\hat\sigma^2\right) + n \frac{n(n-1)}{(n-p+m-2)(n-p+m-1)} + n\log(2\pi)(p+1) + n\frac{np}{n-p-1} + n\log|\hat\Sigma|
\end{aligned}
\end{equation*}
is an unbiased estimator of the test error $E_{X,y}(\text{ErrR}_\text{KL})$. Note that the last three terms are free of the restrictions and only depend on $n$, $p$ and $X$. They are the same when we compare two models with different restrictions on $\beta$, and are thus irrelevant when comparing criteria for any two such models. Therefore, for the purpose of model selection, we define
\begin{equation*}
\text{RAICc}(R,r) = n\log \left(\frac{\text{RSS}(R,r)}{n}\right) + n \frac{n(n-1)}{(n-p+m-2)(n-p+m-1)}.
\end{equation*}
An equivalent form is
\begin{equation*}
\text{RAICc}(R,r) = \text{AICc}(R,r) + \frac{n(p-m)(p-m+1)}{(n-p+m-1)(n-p+m-2)}.
\end{equation*}
For linear regression on a subset of predictors with size $k$, we are restricting $p-k$ coefficients to be zero. By substituting $m=p-k$ and $\hat\beta = \hat\beta(k)$ into the expression of $\text{RAICc}(R,r)$, we obtain the RAICc criterion for the variable selection problem, i.e.
\begin{equation*}
\text{RAICc}(k) = n\log \left(\frac{\text{RSS}(k)}{n}\right) + n \frac{n(n-1)}{(n-k-2)(n-k-1)}.
\end{equation*}
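The equivalent form relating RAICc to AICc can be checked numerically (a sketch; the function names are ours):

```python
import numpy as np

def raicc(rss, n, p, m):
    return n * np.log(rss / n) + n * n * (n - 1) / ((n - p + m - 2) * (n - p + m - 1))

def aicc_general(rss, n, p, m):
    return n * np.log(rss / n) + n * (n + p - m) / (n - p + m - 2)

n, p, m, rss = 50, 8, 3, 4.2
gap = n * (p - m) * (p - m + 1) / ((n - p + m - 1) * (n - p + m - 2))
```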
\subsection{Squared error-based information criteria}
According to \citet[formula 6 and proposition 1]{rosset2020fixed}, $E_{X,y}(\text{optR}_\text{SE})$ can be decomposed into $E_{X,y}(\text{optF}_\text{SE})$ plus an excess bias term and an excess variance term. We calculate both terms for our estimator $\hat\beta$ and obtain the following theorem.
\begin{theorem}
Under Assumption \ref{assumption},
\begin{equation*}
E_{X,y}(\text{optR}_\text{SE}) = \sigma_0^2(p-m) \left( 2+ \frac{p-m+1}{n-p+m-1} \right).
\end{equation*}
\label{thm:EoptR_SE}
\end{theorem}
An immediate consequence is that
\begin{equation*}
\widehat{\text{ErrR}}_\text{SE} = \text{errR}_\text{SE} + E_{X,y} (\text{optR}_\text{SE}) = \text{RSS}(R,r) + \sigma_0^2(p-m) \left( 2+ \frac{p-m+1}{n-p+m-1} \right)
\end{equation*}
is an unbiased estimator of $E_{X,y}(\text{ErrR}_\text{SE})$. Using the OLS fit on all of the predictors to estimate $\sigma_0^2$, we have
\begin{equation*}
\text{RC}_p(R,r) = \text{RSS}(R,r) + \frac{\text{RSS}(p)}{n-p}(p-m) \left(2+\frac{p-m+1}{n-p+m-1}\right).
\end{equation*}
An alternative estimate of $\sigma_0^2$ is $\text{RSS}(R,r)/(n-p+m)$, which yields
\begin{equation*}
\text{S}_p(R,r) = \text{RSS}(R,r)\frac{n(n-1)}{(n-p+m)(n-p+m-1)}.
\end{equation*}
For the variable selection problem, substituting $m=p-k$ into the expressions of RC$_p$ and S$_p$ recovers their previously-noted definitions, RC$_p(k)$ and S$_p(k)$, given in \eqref{eq:rcp_subsetselection} and \eqref{eq:sp_subsetselection}, respectively.
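A matching sketch of the squared error-based criteria, in their variable-selection form where $k=p-m$ predictors are retained (illustrative code, not the paper's simulation implementation):

```python
def rcp(rss, rss_full, n, p, k):
    """RC_p(k): estimates sigma_0^2 from the OLS fit on all p predictors."""
    sigma2_hat = rss_full / (n - p)
    return rss + sigma2_hat * k * (2 + (k + 1) / (n - k - 1))

def sp(rss, n, k):
    """S_p(k): estimates sigma_0^2 from the candidate model itself."""
    return rss * n * (n - 1) / ((n - k) * (n - k - 1))
```

Note that RC$_p$ needs the residual sum of squares of the full model in addition to that of the candidate model, while S$_p$ needs only the candidate model's fit.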
\section{Performance of the selectors}
\label{sec:simulation}
\subsection{Some other selectors}
In this section we use computer simulations to explore the behavior of different criteria when used for model selection under linear restrictions (variable selection and general linear restrictions). In addition to the criteria already discussed, we also consider two other well-known criteria:
\begin{equation*}
\text{BIC}(k) = n \log\left( \frac{\text{RSS}(k)}{n} \right) +\log(n) k
\end{equation*}
\citep{schwarz1978estimating}, and generalized cross-validation (GCV)
\begin{equation}
\text{GCV}(k) = \text{RSS}(k)\frac{n^2}{(n-k)^2}.
\label{eq:gcv_subsetselection}
\end{equation}
BIC is a consistent criterion, in the sense that under some conditions, if the true model is among the candidate models, the probability of selecting the true model approaches one as the sample size becomes infinite. GCV, derived by \citet{craven1978smoothing} in the context of smoothing, is equivalent to the mean square over degrees of freedom criterion proposed by \citet{tukey1967discussion}. Comparing \eqref{eq:gcv_subsetselection} and \eqref{eq:sp_subsetselection} shows that GCV and S$_p$ only differ by a multiplicative factor: $\text{S}_p(k) = \text{GCV}(k)\left[1 + k/\left(n(n-k-1)\right)\right]$.
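The two additional selectors can be sketched alongside a numeric check of the close relationship between GCV and S$_p$ (the values of $n$, $k$ and RSS below are illustrative):

```python
import math

def bic(rss, n, k):
    """BIC(k) = n*log(RSS/n) + log(n)*k."""
    return n * math.log(rss / n) + math.log(n) * k

def gcv(rss, n, k):
    """GCV(k) = RSS * n^2 / (n-k)^2."""
    return rss * n ** 2 / (n - k) ** 2

def sp(rss, n, k):
    """S_p(k) = RSS * n(n-1) / ((n-k)(n-k-1))."""
    return rss * n * (n - 1) / ((n - k) * (n - k - 1))

n, k, rss = 100, 10, 42.0                # illustrative values
ratio = sp(rss, n, k) / gcv(rss, n, k)   # = (n-1)(n-k) / (n(n-k-1)), close to 1
```

The ratio tends to one as $n$ grows with $k$ fixed, which is why the two criteria behave similarly except in small samples.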
By analogy to the criteria discussed in Sections \ref{sec:ic_fixedx} and \ref{sec:ic_randomx}, substituting $k=p-m$ into the expressions of BIC$(k)$ and GCV$(k)$ yields their corresponding expressions for general linear restrictions, BIC$(R,r)$ and GCV$(R,r)$, respectively. We also consider two types of cross-validation (CV): 10-fold CV (denoted 10FCV) and leave-one-out CV (LOOCV). LOOCV is based on the PRESS$(k)$ statistic for the variable selection problem and on PRESS$(R,r)$ for the general restriction problem.
\subsection{Random-X}
We first consider the variable selection problem. The candidate models include the predictors of $X$ in a nested fashion, i.e.\ the candidate model of size $k$ includes the first $k$ columns of $X$ ($X_1,\cdots,X_k$). We describe the simulation settings reported here; descriptions of and results from all other settings (243 configurations in total) can be found in the Online Supplemental Material\footnote{\url{https://github.com/sentian/RAICc}}, where we also provide the code to reproduce all of the simulation results in this paper. The sample sizes considered are $n\in\{40, 1000\}$, with the number of predictors $p=n-1$ close to the sample size. The predictors exhibit moderate AR(1)-type correlation with each other (see the Online Supplemental Material for further details), and the strength of the overall regression is characterized as either low (average $R^2$ on the set of true predictors roughly $20\%$) or high (average $R^2$ on the set of true predictors roughly $90\%$). The true model is either sparse (with six nonzero slopes) or dense (with $p$ nonzero slopes exhibiting diminishing strengths of coefficients; \citealp{Taddy2017}). The design matrix $X$ is random. In each replication, we generate a matrix $X$ such that the rows $x_i$ ($i=1,\cdots,n$) are drawn from a $p$-dimensional multivariate normal distribution with mean zero and covariance matrix $\Sigma_0$, and we draw the response $y$ from the conditional distribution of $y|X$ based on \eqref{eq:truemodel}. The entire process is repeated $1000$ times.
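One replication of this random-X generating process can be sketched as follows; the specific values of $\rho$, $\sigma_0$ and the nonzero coefficients below are illustrative assumptions, not the paper's exact configurations (those are in the Online Supplemental Material):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rho, sigma0 = 40, 39, 0.5, 1.0     # illustrative values

# AR(1)-type covariance: Sigma0[i, j] = rho^|i - j|
idx = np.arange(p)
Sigma0 = rho ** np.abs(idx[:, None] - idx[None, :])

# A sparse truth: six nonzero slopes (coefficient values assumed for illustration)
beta0 = np.zeros(p)
beta0[:6] = 1.0

# One replication: rows of X drawn N(0, Sigma0), then y | X from the true model
X = rng.multivariate_normal(np.zeros(p), Sigma0, size=n)
y = X @ beta0 + sigma0 * rng.standard_normal(n)
```

Repeating this draw of $(X, y)$ and re-running the selection procedure on each replication gives the distribution of each criterion's performance.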
We consider the following metrics to evaluate the fit. The values of each criterion over all of the simulation runs are plotted using side-by-side boxplots, with the average value over the simulation runs given below the boxplot for the corresponding criterion.
\begin{itemize}
\item Root mean squared error for random-X:
\begin{equation*}
\text{RMSER} = \sqrt{ E_{X^n} \lVert X^n\hat\beta-X^n\beta_0 \rVert_2^2 } = \sqrt{ (\hat{\beta}-\beta_0)^T \Sigma_0 (\hat{\beta}-\beta_0) }.
\end{equation*}
\item KL discrepancy for random-X \eqref{eq:KLR} in the log scale (denoted as logKLR).
\item Size of the selected subset for the variable selection problem, and number of restrictions in the selected model for the general restriction problem.
\end{itemize}
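The first metric reduces to a quadratic form in the coefficient error and requires no new test data; a minimal sketch:

```python
import numpy as np

def rmser(beta_hat, beta0, Sigma0):
    """RMSER = sqrt((beta_hat - beta0)' Sigma0 (beta_hat - beta0))."""
    d = np.asarray(beta_hat) - np.asarray(beta0)
    return float(np.sqrt(d @ Sigma0 @ d))
```

With $\Sigma_0 = I$ this is simply the Euclidean distance between the estimated and true coefficient vectors.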
The results are presented in Figures \ref{fig:subsetselection_randomx_hsnr_largep} and \ref{fig:subsetselection_randomx_lsnr_largep}. We find that RAICc provides the best predictive performance and the sparsest subsets while rarely underfitting, compared to the other information criteria designed for random-X, RC$_p$ and S$_p$. The underperformance of RC$_p$ and S$_p$ is due to overfitting. S$_p$, as an estimate of the squared prediction error, is the product of an unlogged residual sum of squares and a penalty term that is increasing in $k$. This results in higher variability in S$_p$ for models that overfit, thereby potentially increasing the chance of spurious minima of the criterion at models that drastically overfit. In RAICc, on the other hand, the residual sum of squares is logged, which stabilizes the variance and avoids the problem. RC$_p$ drastically overfits in all scenarios, reflecting the price of estimating $\sigma_0^2$ using the full model, especially when $p$ is close to $n$. S$_p$, by contrast, estimates $\sigma_0^2$ using the candidate model, which mitigates the problem. Nevertheless, S$_p$ can also sometimes strongly overfit, but only when $n$ is small. Even for large $n$, S$_p$ selects slightly larger subsets on average than does RAICc.
We also note that information criteria designed for random-X generally perform better than their counterparts for the fixed-X case. Both C$_p$ and FPE are largely outperformed by RC$_p$ and S$_p$, respectively. The advantage of RAICc over AICc is statistically significant in most scenarios, based on the Wilcoxon signed-rank test (the $p$-value for the test comparing RAICc and AICc is given above the first two boxes in the first two columns of each figure), but is not obvious in a practical sense. The only setting in which we see an advantage of AICc is \{Dense model, $n=40$ and high signal\}. In this scenario, a model with many predictors with nonzero slopes can predict well, but that advantage disappears when there is a relatively weak signal, as in that situation the added noise from including predictors with small slopes cannot be overcome by a small error variance.
We further note that choosing the appropriate family of information criteria (the KL-based AICc and RAICc) is more important than choosing the information criteria designed for the underlying design of $X$. AICc, despite being designed for fixed-X, outperforms RC$_p$ and S$_p$, which are designed for random-X, in all of the scenarios, in terms of both predictive performance and sparsity of the selected model. The KL-based criteria have a clear advantage compared to the squared error-based criteria.
Finally, we note some other findings that have been discussed previously in the literature. Despite its apparently strong penalty, BIC often chooses the model using all predictors when $n$ is close to $p$, as discussed in \citet{hurvich1989regression} and \citet{baraud2009gaussian}. We also see that even though GCV has a penalty term similar to that of S$_p$, it is more likely to suffer from overfitting; unlike S$_p$, GCV can sometimes drastically overfit even when $n$ is large. The overfitting problem of GCV was also observed in the context of smoothing by \citet{hurvich1998smoothing}. We further find that 10-fold CV performs better than LOOCV, with the latter sometimes drastically overfitting. The tendency of LOOCV to strongly overfit was noted by \citet{scott1987biased} and \citet{hall1991local} in the context of smoothing. \citet{zhang2015cross} showed that when applied as selection rules, the larger validation set used by 10-fold CV can better distinguish the candidate models than can LOOCV, and this results in a model with smaller predictive error. RAICc performs better than 10-fold CV for small $n$, and performs similarly for large $n$. Computationally, 10-fold CV is ten times more expensive than RAICc, and since the split into validation samples is random, 10-fold CV can select different subsets if applied multiple times to the same dataset; that is, the result of 10-fold CV is not reproducible. The fact that LOOCV provides a better estimate of the test error while being outperformed by 10-fold CV further emphasizes the difference between the goal of providing the best estimate of the test error and the goal of selecting the model with the best predictive performance. Clearly, the KL-based criteria (AICc and RAICc) bridge the gap between the two goals more effectively than the squared error-based criteria (including cross-validation).
In a related study by \citet{leeb2008evaluation}, the author found that S$_p$ and GCV outperform AICc under random-X, but those results are not directly comparable to ours. That paper did not consider the case where $p$ is extremely close to $n$, which is the scenario that most separates the performances of the different criteria.
\begin{figure}
\caption{Results of simulations for variable selection. Random-X, high signal and $\rho=0.5$. The Sparse and Dense models correspond to the VS-Ex2 and VS-Ex3 configurations (details are given in the Online Supplemental Material). The first column refers to RMSE, the second column corresponds to KL discrepancy (in log scale), and the third column gives the number of variables in the selected model with nonzero slopes, jittered horizontally and vertically, so the number of models with that number of nonzero slopes can be ascertained more easily. The mean values of the evaluation metrics for each criterion are presented at the bottom of each graph. The p-values of the Wilcoxon signed-rank test (paired and two-sided) for comparing RAICc and AICc are also presented.}
\label{fig:subsetselection_randomx_hsnr_largep}
\end{figure}
\begin{figure}
\caption{Results of simulations for variable selection. Random-X, low signal and $\rho=0.5$. Other details are the same as in Figure \ref{fig:subsetselection_randomx_hsnr_largep}.}
\label{fig:subsetselection_randomx_lsnr_largep}
\end{figure}
We next consider the general restriction problem. We take $\beta_0 = [2,2,2,1,1,1]^T$, $n\in\{10,40\}$, moderate correlations between the predictors, and either high or low signal levels. The candidate models are constructed in the following way. We consider a set of restrictions: $\beta_1=\beta_4$, $\beta_1=2\beta_2$, $\beta_1=\beta_2$, $\beta_2=\beta_3$, $\beta_4=\beta_5$, $\beta_5=\beta_6$, where the last four restrictions hold for our choice of $\beta_0$. We then consider all of the possible subsets of the six restrictions, resulting in $64$ candidate models in total. The detailed configurations and complete results for this and other examples of the general restriction problem ($54$ scenarios in total) are given in the Online Supplemental Material.
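The restricted estimator underlying these candidate models is the OLS fit adjusted to satisfy $R\beta = r$ exactly. A sketch with the six restrictions of this example follows; the design and error draws are illustrative, not the paper's exact configuration:

```python
import numpy as np

def restricted_ls(X, y, R, r):
    """OLS adjusted to satisfy R beta = r exactly
    (the restricted MLE under Gaussian errors)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_f = XtX_inv @ X.T @ y                         # unrestricted OLS
    G = XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T)
    return beta_f - G @ (R @ beta_f - r)

# The six restrictions of this example as rows of R, each with r = 0:
# b1=b4, b1=2*b2, b1=b2, b2=b3, b4=b5, b5=b6
R_all = np.array([
    [1,  0,  0, -1,  0,  0],
    [1, -2,  0,  0,  0,  0],
    [1, -1,  0,  0,  0,  0],
    [0,  1, -1,  0,  0,  0],
    [0,  0,  0,  1, -1,  0],
    [0,  0,  0,  0,  1, -1],
], dtype=float)

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 6))
y = X @ np.array([2., 2., 2., 1., 1., 1.]) + rng.standard_normal(40)

# Each subset of the six rows defines one of the 2^6 = 64 candidate models;
# e.g. the model imposing the last four restrictions (the ones that hold):
R = R_all[2:]
beta_hat = restricted_ls(X, y, R, np.zeros(len(R)))
```

Fitting all $64$ candidates then amounts to looping `restricted_ls` over every subset of the rows of `R_all` and scoring each fit with the chosen criterion.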
We see from Figure \ref{fig:generalrestriction_randomx} that differences in performance between the criteria are less dramatic. This is not surprising, since for these models the number of parameters never approaches the sample size. Still, RAICc is consistently the best selection rule for small sample size $n$, and it is second-best for large $n$, where it is outperformed by BIC (note that BIC has a strong tendency to select too few restrictions when the sample is small, which corresponds to overfitting in the variable selection context). We also note an advantage of RAICc over AICc, with AICc having a stronger tendency to select too few restrictions.
\begin{figure}
\caption{Results of simulations for general restrictions. Random-X, $\rho=0.5$. The configuration of the model is GR-Ex1 (details can be found in the Online Supplemental Material). Third column gives the number of restrictions in the selected models, jittered horizontally and vertically. }
\label{fig:generalrestriction_randomx}
\end{figure}
Finally, we extend the general restriction example by including restrictions that force additional predictors to have zero coefficients (as in the variable selection problem). In addition to the six restrictions specified above, we also consider $\beta_i=0$ for $i=7,\cdots,p$, resulting in $p$ possible restrictions in total. The candidate models are formulated by excluding the restrictions in a nested fashion: we start from the model including all $p$ restrictions (corresponding to the null model); the next model includes $p-1$ restrictions, excluding the first one ($\beta_1=\beta_4$); and the process is repeated until all restrictions are excluded (the full model including all predictors with arbitrary slopes), resulting in $p+1$ candidate models in total. The true coefficient vector is the same as that used in Figure \ref{fig:generalrestriction_randomx}, implying that the correct number of restrictions is $p-2$. We present the detailed configurations and complete results for this and other examples ($243$ scenarios in total) in the Online Supplemental Material.
We see from Figure \ref{fig:subsetgeneral_randomx} that our findings for the variable selection problem also hold in this case. This is not surprising, since variable selection is just a special example of general restrictions, and in this scenario the set of candidate models includes ones where the number of parameters is close to the sample size. Thus, overall, RAICc and AICc are the best performers among all of the selectors. RAICc tends to provide the sparsest subset (or select more restrictions), while rarely underfitting, having a slight advantage over AICc in terms of predictive performance.
\begin{figure}
\caption{Results of simulations for general restrictions. Random-X, $\rho=0.5$. The configuration of the model is GR-Ex4 (details can be found in the Online Supplemental Material).}
\label{fig:subsetgeneral_randomx}
\end{figure}
\subsection{Fixed-X}
The simulation structure for random-X can also be applied to fixed-X. We only generate the design matrix $X$ once and draw $1000$ replications of the response vector $y$ from the conditional distribution of $y|X$ based on \eqref{eq:truemodel}. The evaluation metrics for fixed-X are as follows. The complete simulation results are given in the Online Supplemental Material.
\begin{itemize}
\item Root mean squared error for fixed-X:
\begin{equation*}
\text{RMSEF} = \sqrt{ \frac{1}{n}\lVert X\hat\beta-X\beta_0 \rVert_2^2 }.
\end{equation*}
\item KL discrepancy for fixed-X \eqref{eq:KLF} in the log scale (denoted as logKLF).
\item Size of the selected subset for the variable selection problem, and number of restrictions in the selected model for the general restriction problem.
\end{itemize}
The patterns for the fixed-X scenario are similar to those for random-X, as can be seen in Figures \ref{fig:subsetselection_fixedx_hsnr_largep}, \ref{fig:subsetselection_fixedx_lsnr_largep}, \ref{fig:generalrestriction_fixedx} and \ref{fig:subsetgeneral_fixedx}. In some ways this is surprising, in that the random-X versions of the criteria still seem to outperform the fixed-X versions, even though that is not the scenario for which they are designed. This seems to be related to the tendency for the fixed-X versions to overfit (or choose too few restrictions) compared to their random-X counterparts, which apparently works against the goal of selecting the candidate with best predictive performance. Otherwise, the KL-based criteria (RAICc and AICc) noticeably outperform the other criteria in general, especially $\mbox{C}_p$ and FPE, particularly for small samples.
\begin{figure}
\caption{Results of simulations for variable selection. Fixed-X, high signal. The configurations are the same as in Figure \ref{fig:subsetselection_randomx_hsnr_largep}.}
\label{fig:subsetselection_fixedx_hsnr_largep}
\end{figure}
\begin{figure}
\caption{Results of simulations for variable selection. Fixed-X, low signal. The configurations are the same as in Figure \ref{fig:subsetselection_randomx_lsnr_largep}.}
\label{fig:subsetselection_fixedx_lsnr_largep}
\end{figure}
\begin{figure}
\caption{Results of simulations for general restrictions. Fixed-X, GR-Ex1, $\rho=0.5$. The configurations are the same as in Figure \ref{fig:generalrestriction_randomx}.}
\label{fig:generalrestriction_fixedx}
\end{figure}
\begin{figure}
\caption{Results of simulations for general restrictions. Fixed-X, GR-Ex4, $\rho=0.5$. The configurations are the same as in Figure \ref{fig:subsetgeneral_randomx}.}
\label{fig:subsetgeneral_fixedx}
\end{figure}
\section{Conclusion and future work}
\label{sec:conclusion}
In this paper, the use of information criteria to compare regression models under general linear restrictions is discussed for both fixed and random predictors. It is shown that general versions of the KL-based criteria (AICc for fixed-X and RAICc for random-X) and of the squared error-based criteria (C$_p$ and FPE for fixed-X, RC$_p$ and S$_p$ for random-X) can be formulated as effectively unbiased estimators of the test error (up to terms that are free of the linear restrictions and hence irrelevant when comparing criteria for different models). Model comparison based on the KL discrepancy is shown via simulations to be better-behaved than that based on the squared error discrepancy (including cross-validation) in selecting models with low predictive error and sparse subsets.
The study of RAICc for variable selection in this paper focuses on OLS fits on pre-fixed predictors (e.g.\ nested subsets based on the physical order of the columns of $X$). The discussion can be extended to other fitting procedures where the predictors in each subset are chosen in a data-dependent way. For instance, \citet{tian2019use} discussed using AICc for least-squares based subset selection methods, and extending those results to the random-X scenario is a topic for future work.
Note also that only restrictions on the regression coefficients are considered here, corresponding to restrictions on the regression portion of the model. It is also possible that the data analyst could be interested in restrictions on the distributional parameters of the predictors (restricting the variances of some predictors to be equal to each other, for example, or restricting covariances to follow a specified pattern such as autoregressive of order $1$ or compound symmetry), and it would be interesting to try to generalize the criteria discussed here to that situation.
\beginsupplement
\appendix
\pagenumbering{arabic}
\begin{center}
\textbf{\large Supplemental Material \\
Selection of Regression Models under Linear Restrictions \\ for Fixed and Random Designs}
Sen Tian, Clifford M. Hurvich, Jeffrey S. Simonoff
\end{center}
This document provides theoretical details of the theorems and lemmas in the paper. The complete simulation results and the computer code to reproduce the results can be viewed online\footnote{\url{https://github.com/sentian/RAICc}}.
\section{Proof of Lemma \ref{thm:components_ekl_lr_fixedx}}
\begin{proof}
As is well known \citeponline[see, e.g.,][p.~122]{greene2003econometric},
\begin{equation}
n\hat\sigma^2 = \lVert y-X\hat{\beta} \rVert_2^2 \sim \sigma_0^2 \chi^2(n-p+m),
\label{eq:dist_rss}
\end{equation}
and by using Assumption \ref{assumption}
\begin{equation}
\begin{aligned}
E_y\left(\hat{\beta} - \beta_0 \right) &= 0,\\
\text{Cov}_y\left(\hat{\beta} - \beta_0 \right) &= E\left[\left(\hat{\beta} - \beta_0 \right)\left(\hat{\beta} - \beta_0 \right)^T \right]\\
&= \sigma_0^2 \left\{ (X^T X)^{-1} - (X^T X)^{-1}R^T\left[ R(X^T X)^{-1}R^T \right]^{-1} R(X^T X)^{-1} \right\}.
\end{aligned}
\label{eq:dist_betahat}
\end{equation}
From \eqref{eq:dist_rss}, $n\hat{\sigma}^2/\sigma_0^2 \sim \chi^2(n-p+m)$, so $1/\hat{\sigma}^2$ follows a scaled inverse $\chi^2$ distribution, and since $E[1/\chi^2(\nu)] = 1/(\nu-2)$ we have
\begin{equation*}
n \sigma_0^2 E_y\left[ \frac{1}{\hat{\sigma}^2} \right] = n\frac{n}{n-p+m-2}.
\end{equation*}
From \eqref{eq:dist_betahat}, we have
\begin{equation*}
E_y \left [ (\hat \beta-\beta_0)^T X^T X (\hat \beta-\beta_0) \right ]
= \text{Tr} \left[ X^T X \cdot \text{Cov}_y \left( \hat{\beta} - \beta_0 \right) \right]
= \sigma_0^2 (p-m).
\end{equation*}
We next show that $\hat{\sigma}^2$ and $(\hat \beta-\beta_0)^T X^T X (\hat \beta-\beta_0)$ are independent. Define the idempotent matrix $H_R=(X^T X)^{-1} R^T \left[ R (X^T X)^{-1} R^T \right]^{-1} R$. Recall that two other idempotent matrices are defined as $H=X(X^T X)^{-1} X^T$ and $H_Q = X H_R (X^T X)^{-1} X^T$, respectively. We have
\begin{equation*}
\begin{aligned}
y-X\hat{\beta}^f &= (I-H)\epsilon,\\
X\hat{\beta}^f - X\hat{\beta} &= XH_R(\hat{\beta}^f - \beta_0) =H_Q \epsilon,\\
X\hat{\beta} - X\beta_0 &= X(I-H_R)(\hat{\beta}^f - \beta_0) = (H-H_Q) \epsilon,
\end{aligned}
\end{equation*}
where we use the fact that $\hat{\beta}^f - \beta_0 = (X^T X)^{-1}X^T \epsilon$. Also since $HH_Q=H_QH=H_Q$, any two of the three idempotent symmetric matrices $I-H$, $H_Q$ and $H-H_Q$ have product zero. Then by Craig's Theorem \citeponline{craig1943note} on the independence of two quadratic forms in a normal vector,
\begin{equation*}
n\hat{\sigma}^2 = \lVert y-X\hat{\beta} \rVert_2^2 = \lVert y-X\hat{\beta}^f \rVert_2^2 + \lVert X\hat{\beta}^f - X\hat{\beta} \rVert_2^2 = \epsilon^T (I-H) \epsilon + \epsilon^T H_Q \epsilon
\end{equation*}
and
\begin{equation*}
\lVert X\hat{\beta} - X\beta_0 \rVert_2^2 = \epsilon^T (H-H_Q) \epsilon
\end{equation*}
are independent.
\end{proof}
\section{Proof of Theorem \ref{thm:EoptF_KL}}
\begin{proof}
By using Lemma \ref{thm:components_ekl_lr_fixedx}, the expected KL discrepancy can be derived as
\begin{equation*}
\begin{aligned}
E_y [\text{ErrF}_\text{KL} ]
&= E_y \left\{ n \log (2\pi \hat\sigma^2) + \frac{1}{\hat\sigma^2} (\hat\beta-\beta_0)^T X^T X (\hat\beta-\beta_0) + \frac{n\sigma_0^2}{\hat\sigma^2} \right\} \\
&= E_y \left [ n\log (2\pi \hat \sigma^2)\right ] + (p-m) \frac{n}{n-p+m-2} + n \frac{n}{n-p+m-2}\\
&= E_y \left [ n\log (2\pi \hat \sigma^2) \right ] + n \frac{n+p-m}{n-p+m-2}.
\end{aligned}
\end{equation*}
Recall that
\begin{equation*}
\text{errF}_\text{KL} = n\log(2\pi \hat\sigma^2) + n.
\end{equation*}
The expected optimism is then
\begin{equation*}
E_y(\text{optF}_\text{KL}) = E_y[\text{ErrF}_\text{KL} ] - E_y[\text{errF}_\text{KL} ] = n \frac{n+p-m}{n-p+m-2} - n.
\end{equation*}
\end{proof}
\section{Proof of Theorem \ref{thm:EoptF_SE}}
\begin{proof}
Using the expression of $\hat\beta$ \eqref{eq:betahat_sigmahatsq} and the definitions of $H$ and $H_Q$, we have
\begin{equation*}
\hat{\mu} = X\hat{\beta} = (H-H_Q)y + X(X^T X)^{-1} R^T \left[ R(X^TX)^{-1}R^T \right]^{-1}r,
\end{equation*}
where the second term on the right-hand side is deterministic. Denote $h_i$ and ${h_Q}_i$ as the $i$-th rows of $H$ and $H_Q$, respectively. We then have
\begin{equation*}
\text{Cov}_y \left(\hat{\mu}_i, y_i \right) = \text{Cov}_y \left[ (h_i-{h_Q}_i)y, y_i \right] = \text{Cov}_y \left[ (H_{ii}-{H_Q}_{ii})y_i, y_i \right] = \sigma_0^2 (H_{ii}-{H_Q}_{ii}).
\end{equation*}
Therefore, the covariance penalty \eqref{eq:EoptF_SE} can be derived as
\begin{equation*}
E_y (\text{optF}_\text{SE}) = 2 \sum_{i=1}^n \text{Cov}_y (\hat\mu_i,y_i) = 2 \sigma_0^2 \text{Tr}(H-H_Q) = 2 \sigma_0^2 (p-m).
\end{equation*}
\end{proof}
\section{Proof of Lemma \ref{thm:components_ekl_lr_randomx}}
\begin{proof}
Since the $x_i$ are iid $\mathcal{N}(0,\Sigma_0)$, $X^T X \sim \mathcal{W}(\Sigma_0, n)$ and $(X^T X)^{-1} \sim \mathcal{W}^{-1} (\Sigma_0^{-1}, n)$, where $\mathcal{W}$ and $\mathcal{W}^{-1}$ denote a Wishart and an inverse Wishart distribution with $n$ degrees of freedom, respectively. We have $E(X^T X) = n\Sigma_0$ and $E\left((X^T X)^{-1}\right) = \Sigma_0^{-1} / (n-p-1)$. Hence,
\begin{equation*}
E \left[ \text{Tr}(\hat \Sigma^{-1}\Sigma_0) \right] = E \left[ \text{Tr} (n (X^T X)^{-1} \Sigma_0) \right] = n \text{Tr}\left[ E\left( (X^T X)^{-1} \right) \Sigma_0\right] = \frac{np}{n-p-1}.
\end{equation*}
Define $H_S = X(X^T X)^{-1}(I-H_R)^T \Sigma_0 (I-H_R) (X^T X)^{-1} X^T$. Conditionally on $X$, the random variable $\hat{\sigma}^2$ and
\begin{equation*}
(\hat{\beta}-\beta_0)^T \Sigma_0 (\hat{\beta}-\beta_0) = \epsilon^T H_S \epsilon
\end{equation*}
are independent by Craig's Theorem, since $H_S$ is symmetric and $H_S(I-H+H_Q)=0$.
\iffalse
We also have
\begin{equation*}
n\sigma_0^2 E_{X,y} \left[ \frac{1}{\hat\sigma^2} \right] = n\sigma_0^2 E_X\left[ E_y \left(\frac{1}{\hat\sigma^2}\big| X \right) \right] = n \frac{n}{n-p+m-2},
\end{equation*}
where the last equality we use the result in Lemma \ref{thm:components_ekl_lr_fixedx}.
\fi
In order to calculate $\displaystyle E_{X,y} \left [ (\hat \beta-\beta_0)^T \Sigma_0 (\hat \beta-\beta_0) \right ]$, we transform the original basis of the problem. Denote $\tilde{R} = \big(\begin{smallmatrix}
R\\
R^c
\end{smallmatrix}\big)$, a ($p \times p$) matrix, where the rows of $R^c$ span the orthogonal complement of the row space of $R$. Hence $\tilde{R}$ has full rank. The true model now becomes
\begin{equation*}
y = X \beta_0 + \epsilon = \tilde{X} \tilde{\beta}_0 + \epsilon,
\end{equation*}
where $\tilde{X} = X \tilde{R}^T$, $\tilde{\beta}_0 = \tilde{R}^{T^{-1}} \beta_0$. Denote $\tilde{M} = R \tilde{R}^T$. Assumption \ref{assumption} indicates that the true model in the new basis satisfies $\tilde{M} \tilde{\beta}_0 = r$. The approximating model is
\begin{equation*}
y = X \beta + u = \tilde{X} \tilde{\beta} + u,
\end{equation*}
with restrictions $\tilde{M} \tilde{\beta} = r$ where $\tilde{\beta} = \tilde{R}^{T^{-1}} \beta$. Denote $\hat{\tilde{\beta}}^f = (\tilde{X}^T \tilde{X})^{-1}\tilde{X}^Ty$ as the OLS estimator in the regression of $y$ on $\tilde{X}$. The restricted MLE is then
\begin{equation*}
\hat{\tilde{\beta}} = \hat{\tilde{\beta}}^f - \left( \tilde{X}^T \tilde{X} \right)^{-1} \tilde{M}^T \left[ \tilde{M} (\tilde{X}^T \tilde{X})^{-1} \tilde{M}^T \right]^{-1} \left( \tilde{M} \hat{\tilde{\beta}}^f - r \right),
\end{equation*}
and it can be easily verified that $\hat{\tilde{\beta}} = \tilde{R}^{T^{-1}} \hat{\beta}$. Denote $\tilde{X}_m$ and $\tilde{X}_{p-m}$ as the matrices containing the first $m$ and last $p-m$ columns of $\tilde{X}$, respectively. Let $\hat{\tilde{\beta}}_{m}$ and $\hat{\tilde{\beta}}_{p-m}$ be column vectors consisting of the first $m$ and last $p-m$ entries in $\hat{\tilde{\beta}}$, respectively. Also let $\tilde{\beta}_{0,m}$ and $\tilde{\beta}_{0,p-m}$ be column vectors consisting of the first $m$ and last $p-m$ entries in $\tilde{\beta}_0$, respectively. By using the formula for the inverse of partitioned matrices and some algebra, it can be shown that (details are given in Supplemental Material Section \ref{sec:derivation_betahattilde})
\begin{equation}
\begin{aligned}
\hat{\tilde{\beta}}_m &= \tilde{r},\\
\hat{\tilde{\beta}}_{p-m} &= \left( \tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \left( \tilde{X}_{p-m}^Ty - \tilde{X}_{p-m}^T\tilde{X}_{m}\tilde{r} \right),
\end{aligned}
\label{eq:betahat_tilde_partition}
\end{equation}
where $\tilde{r} = (R R^T)^{-1}r$. The restrictions $\tilde{M}\tilde{\beta}_0=r$ imply $\tilde{\beta}_{0,m} = \tilde{r}$. We then have
\begin{equation*}
\hat{\tilde{\beta}}_{p-m} - \tilde{\beta}_{0,p-m} = \left( \tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \tilde{X}_{p-m} ^T \epsilon,
\end{equation*}
and therefore
\begin{equation*}
\hat{\tilde{\beta}}_{p-m} - \tilde{\beta}_{0,p-m} \big | \tilde{X} \sim \mathcal{N} \left(0, \sigma_0^2 \left( \tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \right).
\end{equation*}
We also note that $\tilde{X}_{p-m} = X {R^c}^T$, and hence the rows $\tilde{x}_{p-m, i}$ of $\tilde{X}_{p-m}$ are independent and satisfy $\tilde{x}_{p-m, i} \sim \mathcal{N} \left( 0, R^c \Sigma_0 {R^c}^T \right)$. It follows that $\left( \tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1}$ follows the inverse Wishart distribution $\mathcal{W}^{-1}\left( (R^c \Sigma_0 {R^c}^T)^{-1}, n \right)$. The expectation of the quadratic form can be derived as
\begin{equation*}
\begin{aligned}
E_{X,y} \left [ (\hat \beta-\beta_0)^T \Sigma_0 (\hat \beta-\beta_0) \right ]
&= E_{\tilde{X},y} \left [ \left( \hat{\tilde{\beta}} - \tilde{\beta}_0 \right)^T \tilde{R} \Sigma_0 \tilde{R}^T \left( \hat{\tilde{\beta}} - \tilde{\beta}_0 \right) \right ]\\
&= E_{\tilde{X}} \left\{ E \left [ \left( \hat{\tilde{\beta}}_{p-m} - \tilde{\beta}_{0,p-m} \right)^T R^c \Sigma_0 {R^c}^T \left(\hat{\tilde{\beta}}_{p-m} - \tilde{\beta}_{0,p-m} \right) \Big| \tilde{X} \right ] \right\}\\
&= \sigma_0^2 \text{Tr} \left\{ R^c \Sigma_0 {R^c}^T E \left[ \left( \tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \right] \right\} \\
&= \sigma_0^2 \frac{p-m}{n-p+m-1}.
\end{aligned}
\end{equation*}
\end{proof}
\section{Proof of Theorem \ref{thm:EoptR_KL}}
\begin{proof}
The expected KL can be derived as
\begin{equation*}
\begin{aligned}
&E_{X,y} (\text{ErrR}_\text{KL}) \\
&= E_{X,y} \left [ n \log (2\pi \hat\sigma^2) + \frac{n}{\hat\sigma^2} (\hat\beta-\beta_0)^T \Sigma_0 (\hat\beta-\beta_0) + \frac{n\sigma_0^2}{\hat\sigma^2} \right ] + E \left [np \log(2\pi) + n \log |\hat\Sigma| + n \text{Tr}(\hat\Sigma^{-1}\Sigma_{0}) \right] \\
&= E_{X,y} \left [ n \log (2\pi \hat\sigma^2) \right ] + E_X \left[E\left(\frac{n}{\hat\sigma^2} \big | X \right) E\left((\hat\beta-\beta_0)^T \Sigma_0 (\hat\beta-\beta_0) \big| X \right) + E\left(\frac{n\sigma_0^2}{\hat\sigma^2} \big| X \right) \right] + \\
& \qquad E \left [np \log(2\pi) + n \log |\hat\Sigma| + n \text{Tr}(\hat\Sigma^{-1}\Sigma_{0}) \right] \\
&= E_{X,y} \left [ n\log (2\pi \hat \sigma^2) \right ] + n \frac{n(n-1)}{(n-p+m-2)(n-p+m-1)} + E \left [ n\log |\hat \Sigma| \right ] + np\log(2\pi) + n \frac{np}{n-p-1},
\end{aligned}
\end{equation*}
where the second equality is based on Lemma \ref{thm:components_ekl_lr_randomx} for the independence of $\hat\sigma^2$ and $(\hat\beta-\beta_0)^T \Sigma_0 (\hat\beta-\beta_0)$ conditionally on $X$, and in the last equality we use results from Lemmas \ref{thm:components_ekl_lr_fixedx} and \ref{thm:components_ekl_lr_randomx}. Since the training error is
\begin{equation*}
\text{errR}_\text{KL} = -2\log L(\hat\beta,\hat\sigma^2,\hat\Sigma|X,y) = \left [ n \log (2\pi \hat\sigma^2) + n \right ] + \left [np \log(2\pi) + n \log |\hat\Sigma| + np \right ],
\end{equation*}
the expected optimism can be obtained as
\begin{equation*}
\begin{aligned}
E_{X,y}(\text{optR}_\text{KL})
&= E_{X,y}[\text{ErrR}_\text{KL} ] - E_{X,y}[\text{errR}_\text{KL} ] \\
&= n \frac{n(n-1)}{(n-p+m-2)(n-p+m-1)} + n \frac{np}{n-p-1} - n(p+1).
\end{aligned}
\end{equation*}
\end{proof}
\section{Proof of Theorem \ref{thm:EoptR_SE}}
\begin{proof}
We first note from Theorem \ref{thm:EoptF_SE} that
\begin{equation*}
E_{X,y}(\text{optF}_\text{SE}) = E \left[ E(\text{optF}_\text{SE} | X ) \right] = 2 \sigma_0^2(p-m).
\end{equation*}
Based on formula (6) and Proposition 1 in \citetonline{rosset2020fixed}, the expected optimism can be decomposed as
\begin{equation*}
E_{X,y}(\text{optR}_\text{SE}) = E_{X,y}(\text{optF}_\text{SE}) + B^+ + V^+ = 2 \sigma_0^2(p-m) + B^+ + V^+,
\end{equation*}
where $B^+$ and $V^+$ are the excess bias and excess variance of the fit. In particular, the excess bias is defined as
\begin{equation*}
B^+ = E_{X,X^{(n)}} \big\lVert E( X^{(n)} \hat{\beta} \big | X, X^{(n)} ) - X^{(n)} \beta_0 \big\rVert_2^2 - E_X \big\lVert E( X \hat{\beta} \big | X ) - X \beta_0 \big\rVert_2^2.
\end{equation*}
Because of Assumption \ref{assumption} that the true model satisfies the restrictions, $\hat{\beta}$ is unbiased, and hence $B^+=0$. Next, $V^+$ is defined as
\begin{equation*}
V^+ = E_{X,X^{(n)}} \left\{ \text{Tr} \left[ \text{Cov}\left( X^{(n)}\hat{\beta} \big | X, X^{(n)} \right) \right] \right\} - E_X \left\{ \text{Tr} \left[ \text{Cov}\left( X\hat{\beta} \big | X \right) \right] \right\}.
\end{equation*}
The second term on the right-hand side is
\begin{equation*}
E_X \left\{ \text{Tr} \left[ \text{Cov} \left( X\hat{\beta} \big | X \right) \right] \right\} = E_X \left\{ \text{Tr} \left[ \text{Cov} \left( (H-H_Q)y \big | X \right) \right] \right\} = E\left\{ \sigma_0^2 \text{Tr} \left( H-H_Q \right) \right\}= \sigma_0^2 (p-m).
\end{equation*}
The first term on the right-hand side is
\begin{equation*}
\begin{aligned}
& E_{X,X^{(n)}} \text{Tr} \left[ \text{Cov}\left( X^{(n)}\hat{\beta} \big | X, X^{(n)} \right) \right] \\
&= E_{X,X^{(n)}} \text{Tr} \left[ \text{Cov} \left( X^{(n)} (\hat{\beta} -\beta_0) \big | X, X^{(n)} \right) \right] \\
&= \text{Tr} \left\{ E_X\left[ \text{Cov} \left( \hat{\beta} -\beta_0 \big | X \right) \right] E \left[ {X^{(n)}}^T X^{(n)} \right] \right\}\\
&= n E_X \left\{ \text{Tr} \left[ \Sigma_0 \text{Cov} \left( \hat{\beta} - \beta_0 \big | X \right) \right] \right\}\\
&= n E_X \left\{ E\left[ \left(\hat{\beta} - \beta_0 \right)^T \Sigma_0 \left(\hat{\beta} - \beta_0 \right) \big| X \right] \right\} \\
&= n E_{X,y} \left[ \left(\hat{\beta} - \beta_0 \right)^T \Sigma_0 \left(\hat{\beta} - \beta_0 \right) \right]\\
&= n \sigma_0^2 \frac{p-m}{n-p+m-1},
\end{aligned}
\end{equation*}
where in the second and third equalities we use that $X^{(n)}$ is independent of $X$ and identically distributed, with $E \left( {X^{(n)}}^T X^{(n)} \right) = n\Sigma_0$, while in the last equality we use the result in Lemma \ref{thm:components_ekl_lr_randomx}. Combining these results, we have
\begin{equation*}
V^+ = n\sigma_0^2 \frac{p-m}{n-p+m-1} - \sigma_0^2 (p-m) = \sigma_0^2\frac{(p-m)(p-m+1)}{n-p+m-1},
\end{equation*}
and
\begin{equation*}
E_{X,y}(\text{optR}_\text{SE}) = 2\sigma_0^2 (p-m) + \sigma_0^2\frac{(p-m)(p-m+1)}{n-p+m-1}.
\end{equation*}
\end{proof}
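As with the previous result, the closed-form expected optimism can be checked by simulation (this is an illustration, not part of the proof). The sketch below again assumes $\Sigma_0 = I_p$ and the simple restriction $R = [I_m \ 0]$, $r = R\beta_0$; to reduce Monte Carlo noise, the out-of-sample error over an independent copy $(X^{(n)}, y^{(n)})$ is replaced by its conditional expectation $n\sigma_0^2 + n\, (\hat\beta-\beta_0)^T \Sigma_0 (\hat\beta-\beta_0)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m, sigma0 = 50, 10, 4, 1.0            # illustrative dimensions; Sigma_0 = I_p
beta0 = rng.standard_normal(p)
reps, opt = 20000, 0.0
for _ in range(reps):
    X = rng.standard_normal((n, p))
    y = X @ beta0 + sigma0 * rng.standard_normal(n)
    bhat = beta0.copy()                      # restriction pins the first m coordinates
    bhat[m:] = np.linalg.lstsq(X[:, m:], y - X[:, :m] @ beta0[:m], rcond=None)[0]
    d = bhat - beta0
    err = np.sum((y - X @ bhat) ** 2)        # training (in-sample) squared error
    # expectation over an independent copy: n*sigma0^2 + n * d^T Sigma_0 d
    Err = n * sigma0**2 + n * (d @ d)
    opt += Err - err
mc = opt / reps
theory = 2 * sigma0**2 * (p - m) \
    + sigma0**2 * (p - m) * (p - m + 1) / (n - p + m - 1)
print(mc, theory)
```

For $n=50$, $p=10$, $m=4$ the theoretical value is $12 + 42/43 \approx 12.98$, which the simulated average reproduces.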
\section{Derivation of the expression of \texorpdfstring{$\hat{\tilde{\beta}}$}{} in \eqref{eq:betahat_tilde_partition} }
\label{sec:derivation_betahattilde}
Denote $\tilde{H}_m = \tilde{X}_m \left(\tilde{X}_m^T \tilde{X}_m \right)^{-1} \tilde{X}_m^T$ and $\tilde{H}_{p-m} = \tilde{X}_{p-m} \left(\tilde{X}_{p-m}^T \tilde{X}_{p-m}\right)^{-1} \tilde{X}_{p-m}^T$. Then the partitioned form of the inverse $\left(\tilde{X}^T \tilde{X}\right)^{-1}$ is given by
\begin{equation*}
\begin{aligned}
& \left(\tilde{X}^T \tilde{X}\right)^{-1} =
\begin{bmatrix}
\tilde{X}_m^T \tilde{X}_m & \tilde{X}_m^T \tilde{X}_{p-m} \\
\tilde{X}_{p-m}^T \tilde{X}_m & \tilde{X}_{p-m}^T \tilde{X}_{p-m}
\end{bmatrix}^{-1} \\
&=
\begin{bmatrix}
\left[\tilde{X}_m^T (I-\tilde{H}_{p-m}) \tilde{X}_m \right]^{-1} & - \left[\tilde{X}_m^T (I-\tilde{H}_{p-m}) \tilde{X}_m \right]^{-1} \tilde{X}_m^T \tilde{X}_{p-m} \left(\tilde{X}_{p-m}^T \tilde{X}_{p-m}\right)^{-1}\\
- \left[\tilde{X}_{p-m}^T \left(I-\tilde{H}_m\right) \tilde{X}_{p-m} \right]^{-1} \tilde{X}_{p-m}^T \tilde{X}_m \left(\tilde{X}_m^T \tilde{X}_m \right)^{-1} & \left[\tilde{X}_{p-m}^T \left(I-\tilde{H}_m\right) \tilde{X}_{p-m} \right]^{-1}
\end{bmatrix},
\end{aligned}
\end{equation*}
and the partitioned form of $\hat{\tilde{\beta}}^f$ is given by
\begin{equation*}
\begin{aligned}
\hat{\tilde{\beta}}^f &= \left(\tilde{X}^T \tilde{X}\right)^{-1} \tilde{X}^T y \\
&=
\begin{bmatrix}
\left[\tilde{X}_m^T \left(I-\tilde{H}_{p-m}\right) \tilde{X}_m \right]^{-1} \left( \tilde{X}_m^T y - \tilde{X}_m^T \tilde{H}_{p-m} y \right) \\
\left[\tilde{X}_{p-m}^T \left(I-\tilde{H}_m\right) \tilde{X}_{p-m} \right]^{-1} \left( \tilde{X}_{p-m}^T y - \tilde{X}_{p-m}^T \tilde{H}_m y \right)
\end{bmatrix}.
\end{aligned}
\end{equation*}
We also have
\begin{equation*}
\begin{aligned}
& \left[\tilde{X}_{p-m}^T \left(I-\tilde{H}_m\right) \tilde{X}_{p-m} \right]^{-1} \tilde{X}_{p-m}^T \tilde{H}_m \left(I_n - \tilde{H}_{p-m}\right)\tilde{X}_m \\
&= \left[\tilde{X}_{p-m}^T \left(I-\tilde{H}_m\right) \tilde{X}_{p-m} \right]^{-1} \left( \tilde{X}_{p-m}^T \tilde{X}_m - \tilde{X}_{p-m}^T \tilde{H}_m\tilde{H}_{p-m}\tilde{X}_m \right) \\
&= \left[\tilde{X}_{p-m}^T \left(I-\tilde{H}_m\right) \tilde{X}_{p-m} \right]^{-1} \tilde{X}_{p-m}^T \left(I-\tilde{H}_m\right) \tilde{X}_{p-m} \left(\tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \tilde{X}_{p-m}^T \tilde{X}_m \\
&= \left(\tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \tilde{X}_{p-m}^T \tilde{X}_m.
\end{aligned}
\end{equation*}
Using this property and $\tilde{M} = R \tilde{R}^T = \big(\begin{smallmatrix}
RR^T & 0
\end{smallmatrix}\big)$, we have
\begin{equation*}
\begin{aligned}
\left(\tilde{X}^T \tilde{X}\right)^{-1} \tilde{M}^T \left[ \tilde{M} \left( \tilde{X}^T \tilde{X} \right)^{-1} \tilde{M}^T \right]^{-1} &=
\begin{bmatrix}
(RR^T)^{-1} \\
-\left(\tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \tilde{X}_{p-m}^T \tilde{X}_m (RR^T)^{-1}
\end{bmatrix},
\\
I_p - \left(\tilde{X}^T \tilde{X}\right)^{-1} \tilde{M}^T \left[ \tilde{M} \left( \tilde{X}^T \tilde{X} \right)^{-1} \tilde{M}^T \right]^{-1} \tilde{M} &=
\begin{bmatrix}
0 & 0\\
\left(\tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \tilde{X}_{p-m}^T \tilde{X}_m & I_{p-m}
\end{bmatrix}.
\end{aligned}
\end{equation*}
Therefore, \eqref{eq:betahat_tilde_partition} can be derived as
\begin{equation*}
\begin{aligned}
\hat{\tilde{\beta}} &= \left\{I_p - \left(\tilde{X}^T \tilde{X}\right)^{-1} \tilde{M}^T \left[ \tilde{M} \left( \tilde{X}^T \tilde{X} \right)^{-1} \tilde{M}^T \right]^{-1} \tilde{M} \right\}\hat{\tilde{\beta}}^f + \left\{\left(\tilde{X}^T \tilde{X}\right)^{-1} \tilde{M}^T \left[ \tilde{M} \left( \tilde{X}^T \tilde{X} \right)^{-1} \tilde{M}^T \right]^{-1} \right\}r\\
&=
\begin{bmatrix}
\tilde{r}\\
\left(\tilde{X}_{p-m}^T \tilde{X}_{p-m} \right)^{-1} \tilde{X}_{p-m}^T \left(y - \tilde{X}_m \tilde{r}\right)
\end{bmatrix}.
\end{aligned}
\end{equation*}
\iffalse
\section{\texorpdfstring{$\hat{\beta}$}{} for \texorpdfstring{$R=[0, I_{p-k}]$}{} }
We show that for $R=[0, I_{p-k}]$ and $r=0$, $\hat{\beta}$ in \eqref{eq:betahat} is the least squares coefficient vector from regressing $y$ on $X_k$, where $X_k$ consists of the first $k$ columns of $X$.
We simplify the notation. Denote $R=[0, I_2]$, where $0$ is a $p_2 \times p_1$ matrix and $I_2$ is a $p_2 \times p_2$ identity matrix. Also denote $X=[X_1, X_2]$, where $X_1$ is an $n \times p_1$ matrix and $X_2$ is an $n \times p_2$ matrix. The corresponding hat matrices are $H_1 = X_1 (X_1^T X_1)^{-1} X_1^T$ and $H_2 = X_2 (X_2^T X_2)^{-1} X_2^T$.
First note that the partitioned form of the matrix $X^T X$ is given by
\begin{equation}
\begin{aligned}
(X^T X)^{-1} &=
\begin{bmatrix}
X_1^T X_1 & X_1^T X_2 \\
X_2^T X_1 & X_2^T X_2
\end{bmatrix}^{-1} \\
&=
\begin{bmatrix}
[X_1^T (I-H_2) X_1 ]^{-1} & - [X_1^T (I-H_2) X_1 ]^{-1} X_1^T X_2 (X_2^T X_2)^{-1}\\
- [X_2^T (I-H_1) X_2 ]^{-1} X_2^T X_1 (X_1^T X_1)^{-1} & [X_2^T (I-H_1) X_2 ]^{-1}
\end{bmatrix}.
\end{aligned}
\end{equation}
Then we have
\begin{equation*}
I - H_R=
I - (X^T X)^{-1}(R^T X_2^T (I-H_1) X_2 R) =
\begin{bmatrix}
I_1 & [X_1^T (I-H_2) X_1 ]^{-1} X_1^T H_2 (I-H_1) X_2 \\
0 & 0
\end{bmatrix},
\end{equation*}
and
\begin{equation*}
z = (X^T X)^{-1} X^T y =
\begin{bmatrix}
[X_1^T (I-H_2) X_1 ]^{-1} (X_1^T y - X_1^T H_2 y)\\
[X_2^T (I-H_1) X_2 ]^{-1} (X_2^T y - X_2^T H_1 y)
\end{bmatrix}.
\end{equation*}
Also it can be easily verified that $HH_1=H_1H=H_1$, $HH_2=H_2H=H_2$, and
\begin{equation*}
(I-H_1)X_2 [X_2^T (I-H_1) X_2 ]^{-1} X_2^T (I-H_1) = H - H_1.
\end{equation*}
Therefore, we have
\begin{equation*}
\begin{aligned}
\hat{\beta} &= z + (X^T X)^{-1} R^T ( R(X^T X)^{-1} R^T)^{-1} (r-R z) \\
&= \left[ I
- (X^T X)^{-1}(R^T X_2^T (I-H_1) X_2 R) \right] z\\
&= [X_1^T (I-H_2) X_1 ]^{-1} X_1^T \left\{ I-H_2+H_2(I-H_1)X_2 [X_2^T (I-H_1) X_2 ]^{-1} X_2^T (I-H_1) \right\}y \\
&= [X_1^T (I-H_2) X_1 ]^{-1} X_1^T \left\{ I-H_2H_1 \right\}y\\
&= (X_1^T X_1)^{-1} X_1^T y,
\end{aligned}
\end{equation*}
which is the least squares estimates on $X_1$.
\section{Derivation of \texorpdfstring{\eqref{eq:firstterm_ekl_lr}}{} for \texorpdfstring{$R=[0, I_{p-k}]$}{}}
First note that
\begin{equation*}
\begin{aligned}
&[X_1^T (I-H_2) X_1 ]^{-1} - [X_1^T (I-H_2) X_1 ]^{-1} X_1^T H_2 (I-H_1) X_2 [X_2^T (I-H_1) X_2 ]^{-1} X_2^T X_1 (X_1^T X_1)^{-1}\\
&= [X_1^T (I-H_2) X_1 ]^{-1} \left( I - X_1^T H_2 X_1 (X_1^T X_1)^{-1} \right)\\
&= (X_1^T X_1)^{-1}.
\end{aligned}
\end{equation*}
Then from the partition of $(X^T X)^{-1}$ and the expression of $I-H_R$ shown above, it is easy to see that
\begin{equation*}
(I-H_R) (X^T X)^{-1} =
\begin{bmatrix}
(X_1^T X_1)^{-1} & 0\\
0 & 0
\end{bmatrix}.
\end{equation*}
Therefore,
\begin{equation*}
\begin{aligned}
E_0 \left [ (\hat \beta-\beta_0)^T \Sigma_0 (\hat \beta-\beta_0) \right ] &= E_0 \left\{ E_0 \left [ (\hat \beta-\beta_0)^T \Sigma_0 (\hat \beta-\beta_0) \Big| X \right ] \right\},\\
&= \sigma_0^2 \text{Tr} \left[ \Sigma_0 E_0 \left\{ (I-H_R) (X^T X)^{-1} \right\} \right], \\
&= \sigma_0^2 \frac{p_1}{n-p_1-1}.
\end{aligned}
\end{equation*}
\section{Cochran's Theorem}
Let $\eta \sim \mathcal{N}(0, I)$ and let $H=\sum H_i$ be a sum of $k$ symmetric matrices with $\text{rank}(H)=r$ and $\text{rank}(H_i)=r_i$ such that $H_i=H_i^2$ and $H_iH_j=0$ when $i \ne j$. Then $\eta^T H_i \eta \sim \chi^2(r_i)$, $i=1,\cdots,k$, are independent chi-square variates such that $\sum \eta^T H_i \eta = \eta^T H \eta \sim \chi^2(r)$ with $r=\sum r_i$.
\section{Using Adkins's result to derive \texorpdfstring{\eqref{eq:thirdterm_ekl_lr_randomx}}{} for any \texorpdfstring{$R$}{}}
\textcolor{red}{Unfinished}
The problem we have is
\begin{equation}
y = X\beta_0 + \epsilon, \quad \text{s.t. } R\beta_0=r,
\end{equation}
where $R$ is a $m$ by $p$ matrix. From the eigen-decomposition of $X^T X$, we have $X^T X = P \Lambda P^T$. Denote $V = P \Lambda^{1/2} P^T$ and $V^{-1} = P \Lambda^{-1/2} P^T$. Also denote $H_1 = R V^{-1}$, and an orthogonal matrix $Q$ that contains the characteristic vectors of $H_1 ^T (H_1 H_1^T)^{-1} H_1$. Hence, $Z = XV^{-1}Q$ has orthonormal columns. Adkins shows that the problem can be transformed to
\begin{equation}
y = Z\theta_0 + \epsilon, \quad \text{s.t. } [I_m \quad 0]\theta_0=h,
\end{equation}
where $h$ is some constant.
Denote $\hat{\theta}^f = Z^T y$ as the OLS estimator,
\begin{equation}
\begin{aligned}
( \hat{\beta} - \beta_0 )^T \Sigma_0 ( \hat{\beta} - \beta_0 )
&= ( \hat{\theta} - \theta_0 )^T Q^T V^{-1} \Sigma_0 V^{-1} Q ( \hat{\theta} - \theta_0 ) \\
&= ( \hat{\theta}^f_{p-m} - \theta_{0,p-m} )^T [0 \quad I_{p-m}] Q^T V^{-1} \Sigma_0 V^{-1} Q [0 \quad I_{p-m}]^T ( \hat{\theta}^f_{p-m} - \theta_{0,p-m} ),
\end{aligned}
\end{equation}
where $\hat{\theta}^f_{p-m}$ is the last $p-m$ elements of $\hat{\theta}^f$ and $\theta_{0,p-m}$ is the last $p-m$ elements of $\theta_0$. Since $\hat{\theta}^f_{p-m} - \theta_{0,p-m} \sim \mathcal{N}(0, \sigma_0^2 I_{p-m})$, we have
\begin{equation}
\begin{aligned}
E_0 \left\{ ( \hat{\beta} - \beta_0 )^T \Sigma_0 ( \hat{\beta} - \beta_0 ) \right\}
&= E_0 \left\{ E_0 \left[ ( \hat{\beta} - \beta_0 )^T \Sigma_0 ( \hat{\beta} - \beta_0 ) | X \right] \right\},\\
&= \sigma_0^2 E_0\left\{ \text{Tr}\left( [0 \quad I_{p-m}] Q^T V^{-1} \Sigma_0 V^{-1} Q [0 \quad I_{p-m}]^T \right) \right\}
\end{aligned}
\end{equation}
Here $Q$ and $V$ are random. I verify in simulations that the matrix $E_0 ([0 \quad I_{p-m}] Q^T V^{-1} \Sigma_0 V^{-1} Q [0 \quad I_{p-m}]^T)$ is a diagonal matrix.
\fi
\bibliographystyleonline{chicago}
\bibliographyonline{raicc.bib}
\iffalse
\input{tables/supplement/fixedx/subset_selection/Sparse-Ex1_n40.tex}
\input{tables/supplement/fixedx/subset_selection/Sparse-Ex1_n200.tex}
\input{tables/supplement/fixedx/subset_selection/Sparse-Ex1_n1000.tex}
\input{tables/supplement/fixedx/subset_selection/Sparse-Ex2_n40.tex}
\input{tables/supplement/fixedx/subset_selection/Sparse-Ex2_n200.tex}
\input{tables/supplement/fixedx/subset_selection/Sparse-Ex2_n1000.tex}
\input{tables/supplement/fixedx/subset_selection/Dense-Ex1_n40.tex}
\input{tables/supplement/fixedx/subset_selection/Dense-Ex1_n200.tex}
\input{tables/supplement/fixedx/subset_selection/Dense-Ex1_n1000.tex}
\input{tables/supplement/randomx/subset_selection/Sparse-Ex1_n40.tex}
\input{tables/supplement/randomx/subset_selection/Sparse-Ex1_n200.tex}
\input{tables/supplement/randomx/subset_selection/Sparse-Ex1_n1000.tex}
\input{tables/supplement/randomx/subset_selection/Sparse-Ex2_n40.tex}
\input{tables/supplement/randomx/subset_selection/Sparse-Ex2_n200.tex}
\input{tables/supplement/randomx/subset_selection/Sparse-Ex2_n1000.tex}
\input{tables/supplement/randomx/subset_selection/Dense-Ex1_n40.tex}
\input{tables/supplement/randomx/subset_selection/Dense-Ex1_n200.tex}
\input{tables/supplement/randomx/subset_selection/Dense-Ex1_n1000.tex}
\input{tables/supplement/fixedx/general_restriction/Ex1.tex}
\input{tables/supplement/fixedx/general_restriction/Ex2.tex}
\input{tables/supplement/fixedx/general_restriction/Ex3.tex}
\input{tables/supplement/randomx/general_restriction/Ex1.tex}
\input{tables/supplement/randomx/general_restriction/Ex2.tex}
\input{tables/supplement/randomx/general_restriction/Ex3.tex}
\fi
\end{document} |
\begin{document}
\begin{center}{\bf
Non-collision and collision properties of \\
Dyson's model in infinite dimension and other stochastic dynamics
whose equilibrium states are determinantal random point fields
}
\end{center}
\begin{center}{\sf
Hirofumi Osada}\\
Dedicated to Professor Tokuzo Shiga on his 60th birthday
\end{center}
\begin{center}Published in \\
Advanced Studies in Pure Mathematics 39, 2004\\
Stochastic Analysis on Large Scale Interacting Systems\\
pp.~325--343
\end{center}
\begin{abstract}
Dyson's model on interacting Brownian particles is a stochastic
dynamics consisting of
infinitely many particles moving in $ \R $
with a logarithmic pair interaction potential.
For this model we will prove that no pair of particles ever collides.
The equilibrium state of this dynamics is a determinantal
random point field with the sine kernel.
We prove that, for stochastic dynamics given
by Dirichlet forms whose equilibrium states
are determinantal random point fields, the particles never collide
if the kernels of the determinantal random point fields
are locally Lipschitz continuous, and we give examples of collision
when the kernels are merely H\"{o}lder continuous.
In addition we construct infinite volume dynamics
(a kind of infinite-dimensional diffusion) whose equilibrium states
are determinantal random point fields.
The last result is partial in the sense that we only construct
a diffusion associated with the {\em maximal closable part} of
the {\em canonical} pre-Dirichlet forms
for given determinantal random point fields as equilibrium states.
Proving the closability of the canonical pre-Dirichlet forms
for given determinantal random point fields remains an open problem.
We prove that these dynamics are strong resolvent limits of
finite volume dynamics.
\end{abstract}
\numberwithin{equation}{section}
\newcounter{Const} \setcounter{Const}{0}
\def\refstepcounter{Const}c_{\theConst}{\refstepcounter{Const}c_{\theConst}}
\def\cref#1{c_{\ref{#1}}}
\theoremstyle{theorem}
\newtheorem{thm}{Theorem}[section]
\theoremstyle{plain}
\newtheorem{lem}{Lemma}[section]
\newtheorem{cor}[lem]{Corollary}
\newtheorem{prop}[lem]{Proposition}
\theoremstyle{definition}
\newtheorem{dfn}[lem]{Definition}
\newtheorem{ntt}[lem]{Notation}
\newtheorem{cdn}[lem]{Condition}
\newtheorem{ex}[lem]{Example}
\newtheorem{prob}[lem]{Problem}
\theoremstyle{remark}
\newtheorem{rem}[lem]{Remark}
\section{Introduction} \label{s:1}
Dyson's model on interacting Brownian particles in infinite dimension
is an infinite-dimensional diffusion process $ \{ (X_t ^{i} )_{i \in \N}\} $
formally given by the following stochastic differential equation (SDE):
\begin{align}\label{:11}&
d X_t^i = dB_t^i + \sum_{j=1,\ j\not= i} ^{\infty}\frac{1}{X_t^i - X_t^j}
\, dt \quad (i = 1,2,3,\ldots)
, \end{align}
where $ \{ B_t ^{i} \} $ is a collection of infinitely many independent one-dimensional Brownian motions.
The corresponding unlabeled dynamics is
\begin{align}\label{:12}&
\mathbb{X}_t = \sum _{i=1}^{\infty } \delta _{X_t^i}
. \end{align}
Here $ \delta _{\cdot } $ denotes the point mass at $ \cdot $.
By definition $ \mathbb{X}_t $ is a $ \Theta $-valued diffusion, where
$ \Theta $ is the set consisting of configurations on $ \R $; that is,
\begin{align}&\label{:13}
\Theta = \{ \theta = \sum _{i} \delta _{x_i}\, ;\, x_i \in \R,\, \theta (\{ | x | \le r\}) < \infty
\text{ for all } r \in \R\}
.\end{align}
We regard $ \Theta $ as a complete, separable metric space with the vague topology.
In \cite{sp.2} Spohn constructed an unlabeled dynamics \eqref{:12} in the sense of
a Markovian semigroup on $ \Lm $.
Here $ \mu $ is a probability measure on $ (\Theta , \mathfrak{B}(\Theta ) ) $
whose correlation functions are generated by the sine kernel
\begin{align}\label{:14}&
\mathsf{K} _{\sin} (x)= \frac{\sin (\pi \bar{\rho} x )}{\pi x }.
\end{align}
(See \sref{s:2}).
Here $ 0 < \bar{\rho} \le 1 $ is a constant related to the {\em density} of the particles. Spohn indeed proved the closability of the non-negative bilinear form
$ (\mathcal{E}, \di ) $ on $ \Lm $
\begin{align} \label{:15} &
\mathcal{E}(\mathfrak{f},\mathfrak{g}) = \int_\Theta \mathbb{D}[\mathfrak{f},\mathfrak{g}](\theta )d\mu,
\\ & \notag
\di=\{\ff \in \dii \cap \Lm ; \mathcal{E}(\mathfrak{f},\mathfrak{f})
<\infty \}.
\end{align}
Here $ \mathbb{D} $ is the square field given by \eqref{:25} and
$ \dii $ is the set of local smooth functions on $ \Theta $
(see \sref{s:3} for the definition). The Markovian semigroup is given by the Dirichlet form $ (\mathcal{E},\dom ) $ obtained as the closure
of this closable form on $ \Lm $.
The measure $ \mu $ is an equilibrium state of
\eqref{:12}, whose formal Hamiltonian
$ \mathcal{H} = \mathcal{H} (\theta ) $
is given by $ ( \theta = \sum _{i} \delta _{x_i}) $
\begin{align}& \label{:16}
\mathcal{H}(\theta ) = \sum_{i\not=j} - 2 \log |x_i - x_j|
,\end{align}
which is the reason we regard Spohn's Markovian semigroup as
corresponding to the dynamics formally given by
the SDE \eqref{:11} and \eqref{:12}.
We remark that the existence of an $ L^2 $-Markovian semigroup does not imply the existence of
the associated {\em diffusion} in general.
Here a diffusion means (a family of distributions of) a strong Markov process with
continuous sample paths starting from each $ \theta \in \Theta $.
In \cite{o.dfa} it was proved that there exists a diffusion
$ \diffusion $
with state space $ \Theta $
associated with the Markovian semigroup above.
This construction allows us to investigate the {\em trajectory-wise}
properties of the dynamics.
In the present paper we concentrate on the collision property of the diffusion.
The problem we are interested in is the following:
{\sf Does there exist a pair of particles $ (X_t^i,X_t^j) $ that collide
with each other at some time $ 0 < t < \infty $?}
We say that for a diffusion on $ \Theta $ {\em non-collision occurs}
if the above property does {\em not} hold, and that {\em collision occurs} otherwise.
If the number of particles is finite, then non-collision should occur, at least at an intuitive level.
This is because the drifts $ \frac{1}{x_i-x_j} $ have a strong repulsive effect.
When the number of particles is infinite, the non-collision property is
non-trivial because the interaction potential is
long range and non-integrable.
We will prove the non-collision property holds
for Dyson's model in infinite dimension.
Since the sine kernel measure is the prototype of
determinantal random point fields,
it is natural to ask whether such a non-collision property
is universal for stochastic dynamics given by Dirichlet forms \eqref{:15}
with the measure $ \mu $ replaced by a general determinantal random point field. We will prove that, if the kernel of the determinantal random point field
(see \eqref{:23}) is
locally Lipschitz continuous, then non-collision always occurs.
In addition, we give an example of a determinantal random point field with
H\"{o}lder continuous kernel for which collision occurs.
The second problem we address in this paper is the following:
\textsf{Does there exist a $ \Theta $-valued diffusion associated with the Dirichlet form $ (\mathcal{E},\dom ) $ on $ \Lm $ when
$ \mu $ is a determinantal random point field?}
We give a partial answer for this in \tref{l:28}.
The organization of the paper is as follows:
In \sref{s:2} we state main theorems.
In \sref{s:3} we prepare some notion on configuration spaces.
In \sref{s:4} we prove \tref{l:23} and \tref{l:25}.
In \sref{s:5} we prove \pref{l:27} and \tref{l:26}.
In \sref{s:6} we prove \tref{l:28}.
Our method of proving \tref{l:20} can also be applied to Gibbs measures,
so we prove the non-collision property for Gibbs measures in \sref{s:7}.
\section{Set up and the main result} \label{s:2}
Let $ \mathsf{E} \subset \Rd $ be a closed set which is the closure
of a connected open set in $ \Rd $ with smooth boundary.
Although we will mainly treat the case $ \mathsf{E}= \R $,
we give a general framework here by following the line of \cite{so-}.
Let $ \Theta $ denote the set of configurations on $ \mathsf{E} $,
which is defined similarly as \eqref{:13} by replacing
$ \R $ with $ \mathsf{E} $.
A probability measure on $ (\Theta ,\mathcal{B}(\Theta ) ) $ is called
a random point field on $ \mathsf{E} $.
Let $ \mu $ be a random point field on $ \mathsf{E} $.
A non-negative, permutation invariant function
$ \map{\rho _n}{\mathsf{E}^n }{\R} $ is called an $ n $-correlation function
of $ \mu $
if for any disjoint measurable sets $ \{ A_1,\ldots,A_m \} $
and natural numbers $ \{ k_1,\ldots,k_m \} $ such that
$ k_1+\cdots + k_m = n $ the following holds:
\begin{align*}&
\int _{A_1^{k_1}\ts \cdots \ts A_{m}^{k_m}}
\rho _n (x_1,\ldots,x_n) dx_1\cdots dx_n =
\int _{\Theta } \prod _{i=1}^m
\frac{\theta (A_i) ! }{(\theta (A_i) - k_i ) ! }
d\mu
.\end{align*}
It is known (\cite{so-}, \cite{le-1}, \cite{le-2}) that, if a family of
non-negative, permutation invariant functions $ \{ \rho _n \} $ satisfies
\begin{align}\label{:21}&
\sum_{k=1}^{\infty}
\left\{ \frac{1}{(k+j)!} \int _{A^{k+j}} \rho _{k+j} \
dx_1\cdots dx_{k+j}
\right\} ^{-1/k} = \infty
, \end{align}
then there exists a unique probability measure (random point field) $ \mu $
on $ \mathsf{E} $ whose correlation functions equal $\{ \rho _n \} $.
Let $ \map{K}{L^2(\mathsf{E}, dx )}{L^2(\mathsf{E}, dx )} $ be
a non-negative definite operator which is locally trace class; namely
\begin{align}\label{:22}&
0 \le (K f , f ) _{L^2(\mathsf{E}, dx )},\quad
\\ \notag &
\mathrm{Tr} (1_B K 1_B ) < \infty \quad \text{ for every bounded Borel set }
B
.\end{align}
We assume $ K $ has a continuous kernel denoted by
$ \mathsf{K} = \mathsf{K} (x,y) $.
Without this assumption one can develop a theory of determinantal random point fields
(see \cite{so-}, \cite{shirai-takahashi}); we assume this
for the sake of simplicity.
\begin{dfn}\label{dfn:1}
A probability measure $ \mu $ on $ \Theta $ is said to be a determinantal
(or fermion) random point field with kernel $ \mathsf{K} $
if its correlation functions $ \rho _n $ are given by
\begin{align}\label{:23}&
\rho _n (x_1,\ldots,x_n) = \det \big( \mathsf{K} (x_i,x_j) \big) _{1\le i,j \le n }.
\end{align}
\end{dfn}
We quote:
\begin{lem}[Theorem 3 in \cite{so-}] \label{l:21}
Assume $ \mathsf{K}(x,y) = \overline{\mathsf{K}(y,x) } $ and $ 0 \le K \le 1 $.
Then $ K $ determines a unique determinantal random point field $ \mu $.
\end{lem}
We give examples of determinantal random point fields.
The first example is the stationary measure of Dyson's model in infinite dimension.
The first three examples are related to the semicircle law for the empirical distribution of eigenvalues of random matrices. We refer to \cite{so-} for details.
\begin{ex}[sine kernel]\label{d:21}
Let $ \mathsf{K} _{\sin} $ and $ \bar{\rho} $ be as in \eqref{:14}.
Then
\begin{align}\label{:2m1}&
\mathsf{K} _{\sin} (t) = \frac{1}{2\pi } \int _{|k| \le \pi \bar{\rho} } e ^{\sqrt{-1} k t} \, dk
. \end{align}
Hence $ \mathsf{K} _{\sin} $ is a function of positive type and
satisfies the assumptions of \lref{l:21}.
Let $ \hat{\mu }^{ N } $ denote the probability measure on $ \R^{ N } $ defined by
\begin{align}\label{:2m4}&
\hat{\mu }^N = \frac{1}{Z^{N}}
e ^{ - \sum _{1 \le i \neq j \le N } ( - 2 \log |x_i-x_j| ) }
e^{ - \lambda ^2 _{N}\sum_{i=1}^{ N } x _i ^2 }
\, dx_1 \cdots dx_{ N }
, \end{align}
where $ \lambda _{N} = 2 (\pi \bar{\rho })^3 / 3 N ^2 $ and $ Z^{N} $ is the normalization.
Set $ \mu ^{ N } = \hat{\mu }^{ N } \circ (\xi ^{ N })^{-1}$, where
$ \map{\xi ^{ N }}{\R^{ N }}{\Theta } $ such that
$ \xi ^{ N }(x_1,\ldots,x_N) = \sum _{i=1}^{ N } \delta _{x_i}$.
Let $ \rho _{ n } ^{ N } $ denote the $ n $-correlation function of $ \mu ^{ N } $.
Let $ \rho _n $ denote the $ n $-correlation function of $ \mu $.
Then it is known (\cite[Proposition 1]{sp.2}, \cite{so-}) that
for all $ n = 1,2,\ldots $
\begin{align}\label{:2m5}&
\lim_{ N \to\infty} \rho _{ n } ^{ N } (x_1,\ldots,x_n) = \rho _n (x_1,\ldots,x_n)
\quad \text{ for all }(x_1,\ldots,x_n )
. \end{align}
In this sense the measure $ \mu $ is associated
with the Hamiltonian $ \mathcal{H} $
in \eqref{:16} coming from the log potential $ -2\log |x| $.
\end{ex}
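As a numerical illustration (not in the original text), one can check the Fourier representation \eqref{:2m1} and the behavior of the correlation functions \eqref{:23} for the sine kernel: $ \rho _1 = \bar{\rho} $ and $ \rho _2 (x,y) = \bar{\rho}^2 - \mathsf{K}_{\sin}(x-y)^2 $ vanishes quadratically as $ y \to x $, reflecting the repulsion underlying the non-collision results below. The Python sketch below uses the arbitrary choice $ \bar{\rho} = 0.7 $.

```python
import numpy as np

rho_bar = 0.7          # illustrative density, 0 < rho_bar <= 1

def K(t):
    # sine kernel; K(0) = rho_bar by continuity (never evaluated at 0 here)
    return np.sin(np.pi * rho_bar * t) / (np.pi * t)

# Fourier representation: K(t) = (1/2pi) * int_{|k| <= pi*rho_bar} e^{ikt} dk
t = 1.3
k = np.linspace(-np.pi * rho_bar, np.pi * rho_bar, 200001)
vals = np.exp(1j * k * t)
integral = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * (k[1] - k[0])  # trapezoid rule
fourier = integral.real / (2 * np.pi)
print(abs(fourier - K(t)))          # ~ 0, confirming the representation

# 2-correlation: rho_2(x,y) = rho_bar^2 - K(x-y)^2 -> 0 quadratically as y -> x
def rho2(s):
    return rho_bar**2 - K(s)**2

eps = 1e-4
print(rho2(eps) / eps**2, rho_bar**4 * np.pi**2 / 3)   # ratio approaches this limit
```

The small-separation limit $ \rho_2(x, x+s)/s^2 \to \pi^2 \bar{\rho}^4/3 $ follows from the Taylor expansion $ \mathsf{K}_{\sin}(s) = \bar{\rho}\,(1 - (\pi\bar{\rho}s)^2/6 + \cdots) $.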
\begin{ex}[Airy kernel]\label{d:22}
$ \mathsf{E}=\R $ and
$$ \mathsf{K}(x,y)= \frac{\operatorname{Ai}(x) \operatorname{Ai}'(y)
- \operatorname{Ai}(y) \operatorname{Ai}'(x)
}{x-y}.
$$
Here $ \operatorname{Ai} $ is the Airy function.
\end{ex}
\begin{ex}[Bessel kernel]\label{d:23}
Let $ \mathsf{E} = [0, \infty) $ and
\begin{align*}&
\mathsf{K} (x,y) = \frac{
J_{\alpha}(\sqrt{x})\cdot \sqrt{y}\cdot J_{\alpha}'(\sqrt{y})
-
J_{\alpha}(\sqrt{y})\cdot \sqrt{x}\cdot J_{\alpha}'(\sqrt{x})
}
{2(x-y)}
.\end{align*}
Here $ J_{\alpha } $ is the Bessel function of order $ \alpha $.
\end{ex}
\begin{ex}\label{d:24}
Let $ \mathsf{E} = \R $ and
$ \mathsf{K}(x,y) = \mathsf{m}(x) \mathsf{k}(x-y) \mathsf{m}(y) $, where
$ \map{\mathsf{k}}{\R }{\R } $ is a non-negative, continuous {\em even} function that is convex on $ [0, \infty) $ with $ \mathsf{k} (0) \le 1 $,
and $ \map{\mathsf{m}}{\R }{\R } $ is non-negative and continuous with $ \int _{\R } \mathsf{m}(t) dt < \infty $, $ \mathsf{m} (x) \le 1 $ for all $ x $, and
$ \mathsf{m} (x) > 0 $ for some $ x $.
Then $ \mathsf{K} $ satisfies the assumptions in \lref{l:21}.
Indeed, it is well known that $ \mathsf{k} $ is a function of positive type
(see p.~187 of \cite{don}, for example), and hence the Fourier transform of a finite positive measure. By assumption $ 0 \le \mathsf{K}(x,y) \le 1 $,
which implies $ 0 \le K \le 1 $. Since $ \int \mathsf{K} (x,x) dx < \infty $,
$ K $ is of trace class.
\end{ex}
Let $ \mathsf{A}$ denote the subset of $ \Theta $ defined by
\begin{align}\label{:24}&
\mathsf{A}= \{ \theta \in \Theta ;\
\theta (\{ x \} ) \ge 2
\quad \text{ for some } x \in \mathsf{E} \}
. \end{align}
Note that $ \mathsf{A}$ denotes the set consisting of the configurations with collisions.
We are interested in how large the set $ \mathsf{A} $ is.
Of course $ \mu ( \mathsf{A}) = 0 $ because the 2-correlation function is locally
integrable. We study $ \mathsf{A}$ more closely from the point of view of stochastic dynamics;
namely, we measure $ \mathsf{A}$ by using a capacity.
To introduce the capacity we next consider a bilinear form related to
the given probability measure $ \mu $.
Let $\dii $ be the set of all local, smooth functions on $\Theta $
defined in \sref{s:3}.
For $\ff ,\mathfrak{g} \in\dii $ we set
$\map{\mathbb{D}[\mathfrak{f},\mathfrak{g}]}{\Theta}{\R}$ by
\begin{align} \label{:25}
\mathbb{D}[\mathfrak{f},\mathfrak{g}](\theta )&= \frac{1}{2}\sum_{i}
\frac{\partial f (\mathbf{x} )}{\partial x_{i}}\frac{\partial g (\mathbf{x} )}{\partial x_{i}}
.\end{align}
Here $ \theta = \sum _{i} \delta _{x_i}$, $ \mathbf{x} = (x_1,\ldots ) $ and
$ f (\mathbf{x} )= f(x_1,\ldots )$ is the permutation invariant function such that
$\ff (\theta )=f (x_1,x_2,\ldots )$ for all $\theta \in \Theta $.
We set $ g $ similarly. Note that the right-hand side of \eqref{:25} is again
permutation invariant; hence it can be regarded as a function of $ \theta = \sum _{i} \delta _{x_i}$.
Such $ f $ and $ g $ are unique; so the function $\map{\mathbb{D}[\mathfrak{f},\mathfrak{g}]}{\Theta }{\R }$ is well defined.
For a probability measure $ \mu $ on $ \Theta $ we set as before
\begin{align} \notag &
\mathcal{E}(\mathfrak{f},\mathfrak{g}) = \int_\Theta \mathbb{D}[\mathfrak{f},\mathfrak{g}](\theta )d\mu,
\\ & \notag
\di=\{\ff \in \dii \cap \Lm ; \mathcal{E}(\mathfrak{f},\mathfrak{f})
<\infty \}.
\end{align}
When $ (\mathcal{E}, \di ) $ is closable on $ \Lm $, we denote its closure by
$ (\mathcal{E}, \dom ) $.
We are now ready to introduce a notion of capacity for a {\em pre}-Dirichlet space
$ (\mathcal{E}, \di , \Lm ) $.
Let $ \mathcal{O} $ denote the set consisting of all open sets in $ \Theta $.
For $ O \in \mathcal{O} $ we set
$ \mathcal{L}_{O} =\{ \mathfrak{f} \in \di \, ;\, \mathfrak{f} \ge 1 \
\mu \text{-a.e.\ on } {O} \} $ and
\begin{align*}&
\0 ({O}) =
\begin{cases}
\inf_{\mathfrak{f} \in \mathcal{L}_{O} } \left\{ \mathcal{E}(\mathfrak{f}, \mathfrak{f} )
+ (\mathfrak{f}, \mathfrak{f} )_{\Lm } \right\} &\mathcal{L}_{O}\not= \emptyset
\\
\infty
&\mathcal{L}_{O}=\emptyset
\end{cases}
.
\end{align*}
For an arbitrary subset $ {A} \subset \Theta $ we set
$ \0 ({A}) = \inf _{{A} \subset {O} \in \mathcal{O} } \0 ({O})$.
This quantity $ \0 $ is called the 1-capacity of the pre-Dirichlet space
$ (\mathcal{E}, \di , \Lm ) $.
We state the main theorem:
\begin{thm}\label{l:20}
Let $ \mu $ be a determinantal random point field with kernel $ \mathsf{K} $.
Assume $ \mathsf{K} $ is locally Lipschitz continuous.
Then
\begin{align}\label{:26}&
\0 ( \mathsf{A})=0
, \end{align}
where $ \mathsf{A} $ is given by \eqref{:24}. \end{thm}
In \cite{o.dfa} the following was proved:
\begin{lem}[Corollary 1 in \cite{o.dfa}]\label{l:22}
Let $ \mu $ be a probability measure on $ \Theta $.
Assume $ \mu $ has locally bounded correlation functions.
Assume $ (\mathcal{E}, \di ) $ is closable on $ \Lm $.
Then there exists a diffusion $ \diffusion $
associated with the Dirichlet space $ (\mathcal{E}, \dom , \Lm ) $.
\end{lem}
Combining this with \tref{l:20} we have
\begin{thm} \label{l:23}
Assume $ \mu $ satisfies the assumption in \tref{l:20}.
Assume $ (\mathcal{E}, \di ) $ is closable on $ \Lm $.
Then a diffusion $ \diffusion $ associated with the Dirichlet space
$ (\mathcal{E}, \dom , \Lm ) $ exists and satisfies
\begin{align}\label{:27}&
\mathsf{P}_{\theta}( \sigma _{\mathsf{A}} = \infty ) = 1 \quad \text{ for q.e.\ }\theta
, \end{align}
where $ \sigma _{\mathsf{A}} = \inf \{ t > 0\, ;\, \X _t \in \mathsf{A}\} $.
\end{thm}
We refer to \cite{fot} for q.e.\ (quasi everywhere) and related notions of Dirichlet form theory. We remark that, by definition, the capacity of a pre-Dirichlet form is greater than or
equal to that of its closure. So
\eqref{:27} is an immediate consequence of \tref{l:20} and
the general theory of Dirichlet forms
once $ (\mathcal{E}, \di ) $ is closable on $ \Lm $ and the resulting
(quasi-)regular Dirichlet space $ (\mathcal{E}, \dom , \Lm ) $ exists.
To apply \tref{l:23} to Dyson's model we recall a result of Spohn.
\begin{lem}[Proposition 4 in \cite{sp.2}]\label{l:24}
Let $ \mu $ be the determinantal random point field
with the sine kernel in \eref{d:21}.
Then $ \ed $ is closable on $ \Lm $.
\end{lem}
We say a diffusion $ \diffusion $ is Dyson's model in infinite dimension
if it is associated with
the Dirichlet space $ (\mathcal{E}, \dom , \Lm ) $ in \lref{l:24}.
Collecting these results, we conclude:
\begin{thm} \label{l:25}
No collision \eqref{:27} occurs in Dyson's model in infinite dimension.
\end{thm}
The assumption of the local Lipschitz continuity of the kernel $ \mathsf{K} $
is crucial;
we next give a collision example when $ \mathsf{K} $ is
merely H\"{o}lder continuous.
We prepare:
\begin{prop} \label{l:27}
Assume $ K $ is of trace class.
Then $ (\mathcal{E}, \di ) $ is closable on $ \Lm $.
\end{prop}
\begin{thm} \label{l:26}
Let $ \mathsf{K} (x,y) = \mathsf{m}(x) \mathsf{k}(x-y) \mathsf{m}(y) $
be as in \eref{d:24}.
Let $ \alpha $ be a constant such that
\begin{align}\label{:27a}&
0 < \alpha < 1
.\end{align}
Assume $ \mathsf{m} $ and $ \mathsf{k} $ are continuous and
there exist positive constants
$ \refstepcounter{Const}c_{\theConst} \label{;21} $ and $ \refstepcounter{Const}c_{\theConst} \label{;22} $ such that
\begin{align}\label{:27b}&
\cref{;21}t ^{\alpha}
\le \mathsf{k} (0) - \mathsf{k}(t) \le
\cref{;22}t ^{\alpha}
\quad \text{ for } 0 \le t \le 1.
\end{align}
Then $ (\mathcal{E}, \di , \Lm ) $ is closable and the associated diffusion
satisfies
\begin{align}\label{:27c}&
\mathsf{P}_{\theta}(\sigma _{\mathsf{A}} < \infty ) = 1 \quad \text{ for q.e.\ } \theta
.\end{align}
\end{thm}
Unfortunately, the closability of the pre-Dirichlet form
$ (\mathcal{E} , \di ) $ on $ \Lm $ has not yet been proved for
determinantal random point fields of locally trace class, except for the sine kernel.
So we propose the following problem:
\begin{prob}\label{p:22}
\thetag{1} Are the pre-Dirichlet forms $ (\mathcal{E} , \di ) $ on $ \Lm $
closable when $ \mu $ is a determinantal random point field with a continuous kernel?
\\
\thetag{2}
Can one construct stochastic dynamics (diffusion processes) associated with the
pre-Dirichlet forms $ (\mathcal{E} , \di ) $ on $ \Lm $?
\end{prob}
We remark that the second problem can be deduced from the first (see \cite[Theorem 1]{o.dfa}).
We conjecture that $ (\mathcal{E} , \di , \Lm ) $ is always closable.
As we saw above,
in the case of a trace class kernel this problem is solved by \pref{l:27}.
But it is important to prove closability for determinantal random point fields of
{\em locally} trace class.
This class contains the Airy kernel, the Bessel kernel, and other rich examples.
We also remark that for interacting Brownian motions with Gibbsian equilibria
this problem was settled successfully (\cite{o.dfa}).
In the next theorem we give a partial answer to \thetag{2} of Problem~\ref{p:22}.
We will show that one can construct a stochastic dynamics in infinite volume
which is canonical in the sense that \thetag{1} it is the strong resolvent limit of
a sequence of finite volume dynamics and \thetag{2} it coincides with
$ (\mathcal{E}, \dom ) $ whenever $ (\mathcal{E}, \di ) $ is closable
on $ \Lm $.
For two symmetric, nonnegative forms $ (\mathcal{E}_1,\dom _1 ) $ and
$ (\mathcal{E}_2 ,\dom _2 ) $, we write
$ (\mathcal{E}_1,\dom _1 ) \le (\mathcal{E}_2,\dom _2 ) $ if
$ \dom _1 \supset \dom _2 $ and
$
\mathcal{E}_1 (\mathfrak{f}, \mathfrak{f} ) \le
\mathcal{E}_2 (\mathfrak{f},\mathfrak{f} )$
for all $ \mathfrak{f} \in \dom _2 $.
Let $ (\mathcal{E}^{\text{reg}} , \dom ^{\text{reg}}) $ denote the regular part of $ (\mathcal{E} , \di ) $ on $ \Lm $, that is,
$ (\Ereg , \Dreg ) $ is closable on $ \Lm $ and in addition satisfies the following:
\begin{align*}&
(\mathcal{E}^{\text{reg}} , \dom ^{\text{reg}}) \le (\mathcal{E} ,\di )
,\\
\intertext{and for all closable forms such that
$ (\mathcal{E}', \mathcal{D}' ) \le (\mathcal{E} ,\di ) $}
&
(\mathcal{E}', \mathcal{D}' ) \le (\Ereg , \Dreg )
.\end{align*}
It is well known that such an $ (\Ereg , \Dreg ) $ exists uniquely and is called
the maximal regular part of $ (\mathcal{E}, \di ) $.
We denote its closure by the same
symbol $ (\Ereg , \Dreg ) $.
Let $ \map{\pi _r}{\Theta }{\Theta } $ be such that
$ \pi _r (\theta ) = \theta (\cdot \cap \{ x \in \mathsf{E} ; |x| < r \} )$.
We set
\begin{align*}&
\dom _{\infty , r} = \{ \mathfrak{f} \in \di \, ;\, \mathfrak{f} \text{ is }
\sigma[\pi _r] \text{-measurable}\}
.\end{align*}
We will prove that $ (\mathcal{E}, \dom _{\infty , r} ) $ is closable on
$ \Lm $ for each $ r $. These are the finite volume dynamics we are considering.
Let $ \mathbb{G}_{\alpha } $
(resp.\ $ \mathbb{G}_{r , \alpha } $) $ (\alpha >0) $ denote the
$ \alpha $-resolvent of the semi-group associated with the closure of
$ (\Ereg ,\Dreg ) $ (resp.\ $ (\mathcal{E}, \dom _{\infty , r} $)) on
$ \Lm $.
\begin{thm} \label{l:28}
\thetag{1}
$ (\Ereg , \Dreg ) $ on $ \Lm $ is a quasi-regular Dirichlet form.
So the associated diffusion exists.
\\
\thetag{2} $ \mathbb{G}_{r , \alpha } $ converge to $ \mathbb{G}_{\alpha} $
strongly in $ \Lm $ for all $ \alpha >0 $.
\end{thm}
\begin{rem}\label{r:222}
We think the diffusion constructed in \tref{l:28} is a reasonable one
for the following reasons.
\thetag{1} By definition the closure of $ (\Ereg , \Dreg ) $ equals
$ (\mathcal{E}, \dom ) $ when $ (\mathcal{E}, \di ) $ is closable.
\thetag{2} One naturally associates Markov processes on
$ \Theta _r $ with the forms $ (\mathcal{E}, \dom _{\infty , r} ) $, where $ \Theta _r $ is the set of configurations on
$ \mathsf{E} \cap \{ |x| < r \} $.
So \thetag{2} of \tref{l:28} implies that the diffusion is the strong resolvent limit of these finite volume dynamics.
\end{rem}
\begin{rem}\label{r:223}
If one replaces $ \mu $ by the Poisson random measure $ \lambda $
whose intensity measure is the Lebesgue measure and considers
the Dirichlet space $ (\mathcal{E}^{\lambda},\dom ) $
on $ L^2(\Theta , \lambda ) $, then the associated $ \Theta $-valued
diffusion is the $ \Theta $-valued Brownian motion $ \mathbb{B} $,
that is, it is given by
\begin{align*}&
\mathbb{B}_t = \sum _{i=1}^{\infty} \delta _{B^i_t}
,\end{align*}
where $ \{ B^i_t \} $ ($ i \in \N $) are infinitely many independent
Brownian motions.
In this sense we say in the Abstract that the Dirichlet form given by \eqref{:15}
for Radon measures on $ \Theta $ is {\em canonical}.
We also remark that diffusions associated with such local Dirichlet forms are often called
distorted Brownian motions.
\end{rem}
\section{Preliminary} \label{s:3}
Let $ \1 = (-\7,\7)^d \cap \mathsf{E} $ and
$ \ya =\{\theta \in\Theta ;\theta(\1 )= \6 \} $.
We note $\Theta =\sum _{\6 = 0} ^{\infty}\ya $.
Let $\yaQ $ be the $ \6 $-fold product of $ \1 $.
We define $ \map{\pi_r}{\Theta }{\Theta } $ by $\pi _{\7}(\theta )=\theta(\cdot\cap \1 )$.
A function $\map{\mathbf{x}}{\ya }{\yaQ } $ is called a $\yaQ $-coordinate of $\theta $ if
\begin{align}\label{:31}&
\pi _{\7}(\theta )=\sum _{k=1} ^{\6 } \delta_{x_{k}(\theta )},
\qquad \mathbf{x}(\theta )=(x_{1}(\theta ),\ldots,x_{\6 }(\theta )).
\end{align}
Suppose $ \map{\ff }{\Theta }{\R }$ is $ \sigma [\pi _{\7}]$-measurable.
Then for each $ n = 1,2,\ldots $ there exists a unique permutation invariant function
$ \map{\frn }{\yaQ }{\R } $ such that
\begin{align}\label{:32}&
\ff (\theta ) = \frn (\mathbf{x}(\theta ) ) \quad \text{ for all } \theta \in \ya
. \end{align}
We next introduce a mollifier.
Let $\map{j}{\R^d }{\R } $ be a non-negative, smooth function such that
$ j (x)= j (|x|)$,
$\int_{\R^d} j \, dx=1$ and $ j (x)=0$ for $ |x| \geq \frac{1}{2}$.
Let $ j _\e =\e ^{-d} j (\cdot/\e )$ and
$ j ^{\6}_{\e } (x_1,\ldots,x_{\6})=\prod _{i=1}^{\6} j _\e(x_i)$.
For a $ \sigma [\pi _{\7}]$-measurable function $\ff $ we set
$\map{\mathfrak{J} _{\7,\e }\ff }{\Theta }{\R }$ by
\begin{align} \label{:33}
\mathfrak{J} _{\7,\e }\ff (\theta ) &=
\begin{cases}
j _{\e }^{\6} * \hat{f}^{\6}_r (\mathbf{x}(\theta )) & \text{ for } \theta \in \ya , \6 \ge 1
\\
\ff (\theta ) & \text{ for } \theta \in \Theta ^0_r,
\end{cases}
\end{align}
where $ \frn $ is given by \eqref{:32} for $\ff $, and
$\map{\hat{f}^{\6}_{\7}}{\R^{d\6}}{\R}$ is the function defined by
$\hat{f}^{\6} _{\7}(x)=f^{\6} _{\7}(x)$ for $ x \in \yaQ $ and
$\hat{f}^{\6} _{\7}(x)=0$ for $ x \not\in \yaQ $.
Moreover $\mathbf{x}(\theta ) $ is an $ \1^{\6} $-coordinate of $\theta \in \ya $,
and $\ast $ denotes the convolution in $ \R ^{d\6} $.
It is clear that $ \mathfrak{J} _{\7,\e }\ff $ is $ \sigma [\pi _{\7}]$-measurable.
We say a function $ \map{\ff }{\Theta }{\R } $ is local if $ \ff $ is
$ \sigma [\pi _{\7}]$-measurable for some $ \7 < \infty $.
For $ \map{\ff }{\Theta }{\R } $ and $ \6 \in \N \cup \{ \infty \} $
there exists a unique permutation invariant function
$ f^{\6} $ such that $ \ff (\theta ) = f^{\6}(x_1,\ldots) $
for all $ \theta \in \Theta ^{\6 } $.
Here
$ \Theta ^{\6} = \{ \theta \in \Theta \, ;\, \theta (\mathsf{E} ) = \6 \} $,
and $ \theta = \sum _{i} \delta _{x_i}$.
A function $ \ff $ is called smooth if $ f^{\6 } $ is smooth for all $ \6 \in \N \cup \{ \infty \} $. Note that a $ \sigma [\pi _{\7}]$-measurable function $ \ff $ is smooth if and only if
$ \frn $ is smooth for all $ n \in \N $.
\section{Proof of \tref{l:23}} \label{s:4}
We give a sequence of reductions of \eqref{:26}. Let $ \mathbf{A} $ denote the set
consisting of the sequences $ \mathbf{a} = (a_r)_{r \in \N} $ satisfying the following:
\begin{align}\label{:41a}&
a_r \in \Q \quad \text{ for all }r \in \N ,
\\ & \label{:41b}
a_r = 2r + r_0 \quad \text{ for all sufficiently large }r \in \N
,\\ & \label{:41c}
2 \le a_1 ,\ 1 \le a_{r+1} - a_r \le 2 \quad \text{ for all }r \in \N
. \end{align}
Note that the cardinality of $ \mathbf{A} $ is countable by \eqref{:41a} and \eqref{:41b}.
Let $ \mathbb{I}= \{ 2,3,\ldots \}^3 $.
For $ (\7,\6,\8) \in \mathbb{I}$ and $ \mathbf{a} = (a_{\7}) \in \mathbf{A} $ we set
\begin{align*}&
\za = \{ \theta \in \Theta \, ;\, \theta ( \Ia ) = \6 \}
\\&
\zb = \{ \theta \in \Theta \, ;\, \theta ( \Ia ) = \6 ,\
\theta ( \bar {\2} _{a_{\7} + \frac{1}{\8} } \backslash \Ia ) = 0 \}
.\end{align*}
Here
$ \bar {\2} _{a_{\7} + \frac{1}{\8} } $
is the closure of $ {\2} _{a_{\7} + \frac{1}{\8} } $,
where $ \1 = (-\7,\7)^d \cap \mathsf{E} $ as before.
We remark that $ \zb $ is an open set in $ \Theta $.
We set
\begin{align}& \label{:42}
\Ae = \{ \theta = \sum _{i} \delta _{x_i} \, ;\, \theta \in \zb \text{ and $ \theta $ satisfies }
\\ &\quad \quad \notag
| x_i - x_j | < \e \text{ and } x_i, x_j \in \2 _{a_{\7} - 1} \text{ for some }i \not= j \}
.\end{align}
It is clear that $ \Ae $ is an open set in $ \Theta $.
\begin{lem} \label{l:43}
Assume that for all
$ \mathbf{a} \in \mathbf{A} $ and $ (\7,\6,\8) \in \mathbb{I}$
\begin{align}\label{:43}&
\inf _{ 0 < \e < 1/2m} \0 ( \Ae ) = 0
. \end{align}
Then \eqref{:26} holds.
\end{lem}
\begin{proof}
Let
\begin{align}\notag
\mathsf{A}^{\mathbf{a} } (\7,\6,\8)= &\{ \theta = \sum _{i} \delta _{x_i} \, ;\, \theta \in \zb \text{ and $ \theta $ satisfies }
\\ \quad \quad &\notag
\ x_i = x_j \text{ and } x_i, x_j \in \2 _{a_{\7} - 1} \text{ for some }i \not= j \}
.\end{align}
Then
$ \mathsf{A}= \bigcup _{\mathbf{a}\in \mathbf{A} }\bigcup _{ (\7,\6,\8) \in \mathbb{I}} \, \mathsf{A}^{\mathbf{a} } (\7,\6,\8)
$.
Since $ \mathbf{A} $ and $ \mathbb{I}$ are countable sets and the capacity is
subadditive, \eqref{:26} follows from
\begin{align}\label{:44}&
\0 (\mathsf{A}^{\mathbf{a} } (\7,\6,\8) )= 0 \quad \text{ for all }
\mathbf{a} \in \mathbf{A}, \ (\7,\6,\8) \in \mathbb{I}
. \end{align}
Note that
$ \mathsf{A}^{\mathbf{a} } (\7,\6,\8) \subset \Ae $.
So \eqref{:43} implies \eqref{:44} by the monotonicity of the capacity,
which yields \eqref{:26}.
\end{proof}
Now fix $ \mathbf{a} \in \mathbf{A} $ and $ (\7,\6,\8) \in \mathbb{I}$
and suppress them from the notation. Set
\begin{align}\label{:45}&
\Aa = \mathsf{A}_{\e /2 }^{\mathbf{a} } (\7,\6,\8)
,\quad
\A = \Ae
,\quad
\Ab = \mathsf{A}_{1 + \e }^{\mathbf{a} } (\7,\6,\8)
.\end{align}
Let $ \map{\4 }{\R}{\R} $ ($ 0 < \e < 1/m < 1 $) be the function defined by
\begin{align}\label{:46}&
\4 (t) =
\begin{cases}
2 & ( |t| \le \e )
\\ 2 \log |t| / \log \e &( \e \le |t| \le 1 )
\\0 & ( 1 \le |t| )
.\end{cases}
\end{align}
We define $ \map{\5}{\Theta }{\R } $ by $ \5 (\theta )= 0 $ for $ \theta \not\in \zb $ and
\begin{align*}&
\5 (\theta ) =
\sum _{ x _i , \, x _j \in \2 _{a_{\7} - 1}, \ j\ne i}
\4 (x _i - x _j )
\quad \text{ for }\theta \in \zb
.\end{align*}
Here we set $ \5 (\theta ) = 0 $ if the sum is empty.
Let $ \mathfrak{g}_{\e } = \mathfrak{J}_{a_{\7} + \frac{1}{m}, \e / 4} \5 $.
Here $ \mathfrak{J}_{a_{\7} + \frac{1}{m}, \e / 4} $ is
the mollifier introduced in \eqref{:33}.
\begin{lem} \label{l:44}
For $ 0 < \e < 1/2m $, the function $ \mathfrak{g}_{\e } $ satisfies the following:
\begin{align}
\label{:47}&
\mathfrak{g}_{\e } \in \di
&&
\\ & \label{:48b}
\mathfrak{g}_{\e } (\theta ) \ge 1
&&
\text{ for all } \theta \in \A
\\ & \label{:48a}
0 \le \mathfrak{g}_{\e } (\theta ) \le n(n+1)
&&
\text{ for all } \theta \in \Theta
\\ & \label{:48c}
\mathfrak{g}_{\e } (\theta ) =0
&& \text{ for all } \theta \not\in \Ab
\\ & \label{:49b}
\mathbb{D}[\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ] (\theta ) = 0
&&
\text{ for all } \theta \not\in \Aee
\\ \label{:49a}&
\mathbb{D}[\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ] (\theta ) \le
\frac{ \cref{;41} }{( \log \e \ \min |x_i - x_j | ) ^2 }
&& \text{ for all } \theta \in \Aee
. \end{align}
Here $ \theta = \sum \delta _{x_k} $ and the minimum in \eqref{:49a} is taken over
$ x_i , x_j $ such that
$$ x _i , \, x _j \in \2 _{a_{\7} - 1}, \ \e / 2 \le |x_i - x_j | \le 1 + \e
,$$
and
$\refstepcounter{Const}c_{\theConst} \label{;41} \ge 0 $ is a constant independent of $ \e $
($ \cref{;41} $ depends on $ (\7,\6,\8) $ ).
\end{lem}
\begin{proof}
\eqref{:47} follows from \cite[Lemma 2.4 (1)]{o.dfa}.
The other statements follow from a direct calculation.
\end{proof}
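For the reader's convenience, the direct calculation behind \eqref{:49a} can be sketched as follows (our own heuristic, ignoring the mollification $ \mathfrak{J} $, which only affects the constants): on the region $ \e \le t \le 1 $ we have $ \4 (t) = 2 \log t / \log \e $, so

```latex
\begin{align*}
  \4 '(t) = \frac{2}{t \log \e } ,
  \qquad
  \bigl| \partial_{x_i} \4 ( |x_i - x_j| ) \bigr|^2
  \le \frac{4}{( \log \e )^2 \, |x_i - x_j|^2 } .
\end{align*}
```

Summing over the finitely many pairs with $ x_i , x_j \in \2 _{a_{\7} - 1} $ and $ \e /2 \le |x_i - x_j| \le 1 + \e $ then gives a bound of the form \eqref{:49a}, with a constant depending only on $ (\7,\6,\8) $.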
Permutation invariant functions $\map{\sigma^{\6 }_{\7 }}{\yaQ }{\R^+}$ are called
density functions of $\mu $ if, for all bounded $\sigma [\pi_r]$-measurable functions $ \ff $,
\begin{align}\label{:3.5}&
\int_{\ya } \ff \, d\mu =
\frac{1}{\6 !} \int_{\yaQ }f^{\6 }_{\7 }\sigma^{\6 }_{\7 }dx
. \end{align}
Here $\map{f^{\6 }_{\7 }}{\yaQ }{\R}$ is the permutation
invariant function such that $ f^{\6 }_{\7 }(\mathbf{x}(\theta ))=\ff (\theta ) $
for $\theta\in\ya $, where $\mathbf{x}$ is an $\yaQ $-coordinate.
We recall the relations between correlation functions and density functions
(\cite{so-}):
\begin{align}\label{:4t}&
\rho _{\6} = \sum_{k=0}^{\infty} \frac{1}{k !}
\int _{I_r^{k }} \sigma ^{\6 + k}_{\7} (x_1,\ldots , x_{\6+k})
dx_{\6+1}\cdots dx_{\6+k}
\\ \label{:4u} &
\sigma ^{\6 }_{\7} = \sum_{k=0}^{\infty}
\frac{(-1)^k}{k !}
\int _{I_r^{k }} \rho _{\6 + k} (x_1,\ldots , x_{\6+k})
dx_{\6+1}\cdots dx_{\6+k}
\end{align}
The $ k = 0 $ term on the right hand side of \eqref{:4t} is understood to be
$ \sigma ^{\6 }_{\7 } $. It is clear that
\begin{align}\label{:4v}&
0 \le \sigma ^{\6 }_{\7 } ( x_1,\ldots,x_n )
\le \rho _{\6 } ( x_1,\ldots,x_n )
\end{align}
\begin{lem} \label{l:42}
There exists a constant $ \refstepcounter{Const}c_{\theConst} \label{;4x} $ depending on $ \7, \6 $ such that
\begin{align}\label{:4x}&
\sigma^{\6 }_{\7 } (x_1,\ldots,x_n) \le
\cref{;4x}
\min _{ i \not= j} |x_i - x_j |
\quad \text{ for all } (x_1,\ldots,x_n) \in \yaQ
\end{align}
\end{lem}
\begin{proof}
Since \eqref{:23} holds and the kernel $ \mathsf{K} $ is locally Lipschitz continuous,
we see that $ \rho _{\6} $ is bounded and Lipschitz continuous on $ \yaQ $.
In addition, by using \eqref{:23} we see that $ \rho _{\6} = 0 $ if
$ x_i=x_j $ for some $ i \not= j $.
Hence, by using \eqref{:23} again,
there exists a constant $ \refstepcounter{Const}c_{\theConst} \label{;4y} $ depending on $ \6 , \7 $ such that
\begin{align}\label{:4y}&
\rho _{\6} (x_1,\ldots,x_n) \le \cref{;4y} \min _{ i \not= j} |x_i - x_j |
\quad \text{ for all } (x_1,\ldots,x_n) \in \yaQ
.\end{align}
\eqref{:4x} follows from this and \eqref{:4v} immediately.
\end{proof}
\begin{lem} \label{l:45}
\eqref{:43} holds true.
\end{lem}
\begin{proof}
By the definition of the capacity together with
\eqref{:47} and \eqref{:48b}, we obtain
\begin{align}\label{:4a}&
\0 (\A ) \le \mathcal{E} (\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ) + (\mathfrak{g}_{\e } ,\mathfrak{g}_{\e } )_{\Lm }
. \end{align}
We will estimate the right hand side. By \eqref{:49b} we see that
\begin{align}
\label{:4b}
\mathcal{E}(\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ) &
= \int _{ \Aee } \mathbb{D}[\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ] d \mu
\\& \notag
= \frac{1}{n !} \int _{ B _{\e }}
\{ \frac{1}{2}\sum_{i=1} ^{\6 }
\frac{\partial g _{\e }^{\6}}{\partial x_{i}}
\frac{\partial g _{\e }^{\6}}{\partial x_{i}}
\} \sigma _{\3}^{\6} dx_1 \cdots dx_n
\\ \notag &
= : \mathbf{I}_{\e }
.\end{align}
Here $ g _{\e }^{\6} $ is defined by \eqref{:32} for $ \mathfrak{g}_{\e } $,
and
$ B _{\e }= \varpi _{\3} ^{-1} ( \pi _{\3} (\Aee ) ) $,
where $ \map{\varpi _{\3} }{ I_{\3}^{\6} }{ \Theta }$ is the map
such that $ \varpi _{\3} ((x_1,\ldots,x_n)) = \sum \delta _{x_i}$.
By using \eqref{:49a} and \lref{l:42} for $ \3 $,
it is not difficult to see that there exists a constant
$ \refstepcounter{Const}c_{\theConst} \label{;44} $ independent of $ \e $ such that
\begin{align*}&
\mathbf{I}_{\e } \le \frac{ \cref{;44} }{|\log \e |}
.\end{align*}
This implies
$ \lim_{\e \to 0} \mathcal{E}(\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ) = 0 $.
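This estimate can be made plausible by the following one-dimensional heuristic (our own sketch; the proof itself keeps track of the constants): by \eqref{:49a} and \lref{l:42}, writing $ s = \min |x_i - x_j| $, the integrand is at most of order $ ((\log \e )^2 s^2)^{-1} $ while the density contributes a factor of order $ s $, so

```latex
\begin{align*}
  \mathbf{I}_{\e }
  \;\lesssim\;
  \int_{\e /2}^{\,1+\e } \frac{1}{(\log \e )^2 \, s^{2}}\; s \, ds
  \;=\;
  \frac{1}{(\log \e )^2} \log \frac{1+\e }{\e /2}
  \;\le\;
  \frac{c}{|\log \e |} .
\end{align*}
```

It is exactly this logarithmic gain that makes the Lipschitz bound of \lref{l:42} sufficient.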
By \eqref{:48a} and \eqref{:48c} we have
\begin{align}\notag &
(\mathfrak{g}_{\e } ,\mathfrak{g}_{\e } )_{\Lm } = \int _{\Ab } \mathfrak{g}_{\e } ^2 d \mu \le n^2(n+1)^2 \, \mu (\Ab ) \to 0 \quad \text{ as }\e \downarrow 0
. \end{align}
Combining these with \eqref{:4a} we complete the proof of \lref{l:45}.
\end{proof}
\noindent
{\em Proof of \tref{l:20}}.
\tref{l:20} follows from \lref{l:43} and \lref{l:45} immediately.
\qed
\section{Proof of \pref{l:27}}\label{s:5}
\begin{lem} \label{l:51}
Let $ \mu $ be a probability measure on $ (\Theta , \mathcal{B}(\Theta ) ) $
such that $ \mu (\{ \theta (\mathsf{E}) < \infty \} ) = 1 $ and that
density functions $ \{ \sigma ^n _{\mathsf{E} } \} $ on $ \mathsf{E} $ of
$ \mu $ are continuous.
Then $ (\mathcal{E}, \di ) $ is closable on $ \Lm $.
\end{lem}
\begin{proof}
Let $ \Theta ^n = \{ \theta \in \Theta\, ;\, \theta (\mathsf{E} ) = n \} $
and set
$$ \mathcal{E}^n (\mathfrak{f}, \mathfrak{g} ) =
\sum _{k=1}^n \int _{\Theta ^k } \mathbb{D} [\mathfrak{f},\mathfrak{g}] d\mu .$$
By assumption $ \sum_{n=0}^{\infty } \mu (\Theta ^n) = 1$, from which we deduce that
$ (\mathcal{E}, \di ) $ is the increasing limit of $ \{ (\mathcal{E}^n, \di ) \} $.
Since density functions are continuous, each $ (\mathcal{E}^n, \di ) $
is closable on $ \Lm $. So its increasing limit $ (\mathcal{E}, \di ) $ is also closable on $ \Lm $.
\end{proof}
\begin{lem} \label{l:52}
Let $ \mu $ be a determinantal random point field on $ \mathsf{E} $
with continuous kernel $ \mathsf{K} $.
Assume $ \mathsf{K} $ is of trace class.
Then the density functions $ \sigma ^n $ of $ \mu $ on $ \mathsf{E} $ are continuous.
\end{lem}
\begin{proof}
For the sake of simplicity we only prove the case $ K < 1 $,
where $ K $ is the operator generated by the integral kernel $ \mathsf{K} $.
The general case is proved similarly by using a device in \cite[p.~935]{so-}.
Let $ \lambda _i $ denote the $ i $-th eigenvalue of $ K $ and $ \varphi _i $
its normalized eigenfunction. Then since $ K $ is of trace class
we have
\begin{align} \label{:51}&
\mathsf{K} (x,y) = \sum _{i=1}^{\infty}
\lambda _i \varphi _i (x) \overline{\varphi _i (y)}
.\end{align}
It is known that (see \cite[p.~934]{so-})
\begin{align}\label{:53}&
\sigma ^n (x_1,\ldots,x_n) =
\det (\text{Id} - K ) \cdot \det (L (x_i, x_j))_{1\le i,j \le n}
,\end{align}
where $ \det (\text{Id} - K ) = \prod_{i=1}^{\infty} (1 - \lambda _i ) $
and
\begin{align} \label{:54}&
L (x,y) = \sum_{i=1}^{\infty} \frac{\lambda _i}{1- \lambda _i}
\varphi _i (x) \overline{\varphi _i (y)}
.
\end{align}
Since $ \mathsf{K} (x,y) $ is continuous, the eigenfunctions
$ \varphi _i (x) $ are also continuous.
It is well known that the right hand side of \eqref{:51} converges uniformly.
Since $ 0 \le K < 1 $, we have $ 0 \le \lambda _i \le \lambda _1 <1 $.
Collecting these facts, we see that the right hand side of \eqref{:54} converges uniformly.
Hence $ L(x,y) $ is continuous in $ (x,y) $.
This combined with \eqref{:53} completes the proof.
\end{proof}
{\em Proof of \pref{l:27}}.
Since $ K $ is of trace class, the associated determinantal random point field
$ \mu $ satisfies $ \mu (\{ \theta (\mathsf{E}) < \infty \} ) = 1 $.
By \lref{l:52},
the density functions $ \sigma _{\mathsf{E} }^n $ are continuous.
So \pref{l:27} follows from \lref{l:51}.
\qed
We now turn to the proof of \tref{l:26}.
As in the statement of \tref{l:26}, let
$ \mathsf{E} = \R $ and
$ \mathsf{K}(x,y) = \mathsf{m}(x) \mathsf{k}(x-y) \mathsf{m}(y) $, where
$ \map{\mathsf{k}}{\R }{\R } $ is a non-negative, continuous {\em even} function that is convex on $ [0, \infty) $ with $ \mathsf{k} (0) \le 1 $,
and $ \map{\mathsf{m}}{\R }{\R } $ is non-negative and continuous with $ \int _{\R } \mathsf{m}(t)\, dt < \infty $, $ \mathsf{m} (x) \le 1 $ for all $ x $, and
$ \mathsf{m} (x) > 0 $ for some $ x $.
We assume
$ \mathsf{k} $ satisfies \eqref{:27b}.
\begin{lem} \label{l:53}
There exists an interval $ I $ in $ \mathsf{E} $ such that
\begin{align}\label{:55} &
\sigma _{I }^2 (x,x+t) \ge \cref{;51} |t| ^{\alpha }
\quad \text{ for all } |t| \le 1 \text{ and } x, x+t \in I
,\end{align}
where $ \refstepcounter{Const}c_{\theConst} \label{;51} $ is a positive constant and $ \sigma _{I }^2 $
is the 2-density function of $ \mu $ on $ I $.
\end{lem}
\begin{proof}
By assumption we see $ \inf _{ x \in I} \mathsf{m}(x) > 0 $
for some open bounded, nonempty interval $ I $ in $ \mathsf{E} $.
By \eqref{:4u} we have
\begin{align}\label{:56}&
\sigma _{I }^2 (x,x+t) \ge \rho _2 (x, x+t) -
\int _I \rho_3 (x, x+t, z) dz
\end{align}
By \eqref{:23} and \eqref{:27b} there exist positive constants
$ \refstepcounter{Const}c_{\theConst} \label{;52} $ and $ \refstepcounter{Const}c_{\theConst} \label{;53} $ such that
\begin{align}\label{:57}&
\cref{;52} |t| ^{\alpha } \le \rho _2 (x,x+t)
&&\quad \text{ for all } |t| \le 1 \text{ and } x, x+t \in I
\\ &\notag
\rho _3 (x,x+t,z ) \le \cref{;53} |t| ^{\alpha }
&&\quad \text{ for all } |t| \le 1 \text{ and } x, x+t , z \in I
.\end{align}
Hence, by taking $ I $ sufficiently small, we deduce \eqref{:55}
from \eqref{:56} and \eqref{:57}.
\end{proof}
{\em Proof of \tref{l:26}}.
The closability follows from \pref{l:27}.
So it only remains to prove \eqref{:27c}.
Let $ (\mathcal{E}^2 , \dom ^2 ) $ and $ (\mathcal{E}, \dom ) $
denote closures of
$ (\mathcal{E} ^2, \di ) $ and
$ (\mathcal{E} , \di ) $
on $ \Lm $, respectively. Then
\begin{align}\label{:5a}&
(\mathcal{E} ^2 , \dom ^2 ) \le (\mathcal{E} , \dom )
\end{align}
Let $ I $ be as in \lref{l:53}.
Let $ \{ I_r \} _{r=1,\ldots}$ be an increasing sequence of
open intervals in $ \mathsf{E} $
such that $ I_1 = I $ and $ \cup_r I _r = \mathsf{E} $.
Let
\begin{align}\label{:58}&
\mathcal{E}_r^2 (\mathfrak{f},\mathfrak{g} )= \int _{\Theta^2 }
\sum _{x_i \in I_r} \frac{1}{2}
\frac{\partial f (\mathbf{x} )}{\partial x_i } \cdot
\frac{\partial g (\mathbf{x} )}{\partial x_i }
d \mu
. \end{align}
Here $ \mathbf{x}=(x_1,\ldots ) $, and $ f $ and $ g $ are related to $ \mathfrak{f} $ and $ \mathfrak{g} $
as in \eqref{:25}. Then, since the density functions on $ I_r $
are continuous, we see $ (\mathcal{E}_r^2 , \di ) $ are closable on $ \Lm $.
So we denote its closure by $ (\mathcal{E}_r^2 , \dom _r^2 ) $.
It is clear that $ \{ (\mathcal{E}_r^2 , \dom _r^2 ) \} $ is increasing
in the sense that $ \dom _r^2 \supset \dom _{r+1}^2 $ and
$ \mathcal{E}_r^2 (\mathfrak{f}, \mathfrak{f} ) \le
\mathcal{E}_{r+1}^2 (\mathfrak{f}, \mathfrak{f} ) $ for all
$ \mathfrak{f} \in \dom _{r+1}^2 $.
So we denote its limit by $ (\check{\mathcal{E} }^2 , \check{\dom}^2 ) $.
It is known (\cite[Remark \thetag{3} after Theorem 3]{o.dfa}) that
\begin{align}\label{:59}&
(\check{\mathcal{E}}^2 , \check{\dom }^2 ) \le
(\mathcal{E}^2, \dom ^2)
.\end{align}
By \eqref{:5a}, \eqref{:59} and the definition of
$ \{ (\mathcal{E}_r^2 , \dom _r^2 ) \} $ we conclude
$ (\mathcal{E}_1^2 , \dom _1^2 ) \le (\mathcal{E} , \dom ) $,
which implies
\begin{align}\label{:5b}&
\0 _1^2 \le \0
,\end{align}
where $ \0 _1^2 $ and $ \0 $ denote capacities of
$ (\mathcal{E}_1^2 , \dom _1^2 ) $ and $ (\mathcal{E} , \dom ) $, respectively.
Let $ \mathsf{B}=\Theta ^2 \cap
\{ \theta (\{ x \} ) =2 \text{ for some } x \in I \} $.
Then by \eqref{:27a} and \eqref{:55} together with
a standard argument (see \cite[Example 2.2.4]{fot} for example)
we obtain
\begin{align}\label{:5c}&
0 < \0 _1^2 (\mathsf{B} ) .
\end{align}
Since $ \mathsf{B} \subset \mathsf{A} $,
we deduce $ 0 < \0 (\mathsf{A} ) $ from \eqref{:5b} and \eqref{:5c},
which implies \eqref{:27c}.
\qed
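Heuristically, the positivity \eqref{:5c} reflects the classical one-dimensional criterion of \cite[Example 2.2.4]{fot} (this paragraph is our own illustration, not part of the proof): in the relative coordinate $ t = x_2 - x_1 $ of the two particles, the density behaves like $ |t|^{\alpha} $ near the diagonal $ t = 0 $ by \eqref{:55}, and the point $ \{ t = 0 \} $ has positive capacity for such a one-dimensional form precisely when the scale integral converges:

```latex
\begin{align*}
  \int_0^{1} \frac{dt}{t^{\alpha}} = \frac{1}{1-\alpha} < \infty .
\end{align*}
```

This holds exactly under the assumption $ 0 < \alpha < 1 $ in \eqref{:27a}; for exponent $ 1 $ (the locally Lipschitz case of \tref{l:20}) the analogous integral diverges, and collisions are excluded.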
\section{A construction of infinite volume dynamics}\label{s:6}
In this section we prove \tref{l:28}.
We first prove the closability of pre-Dirichlet forms in finite volume.
\begin{lem} \label{l:61}
Let $ I _r = (-r, r) \cap \mathsf{E} $ and $ \sigma _r^n $ denote the
$ n $-density function on $ I _r $.
Then $ \sigma _r^n $ is continuous.
\end{lem}
\begin{proof}
Let $ M = \sup _{x,y \in I _r} | \mathsf{K}(x,y)| $.
Then $ M < \infty $ because $ \mathsf{K} $ is continuous.
Let $ \mathbf{x}_i =
(\mathsf{K} (x_i, x_1), \mathsf{K} (x_i, x_2),\ldots,
\mathsf{K} (x_i, x_n)) $ and
$ \| \mathbf{x}_i \| $ denote its Euclidean norm. Then by \eqref{:23} we see
\begin{align}\label{:61}&
|\rho _n| \le \prod _{i=1}^n \| \mathbf{x}_i \| \le \{ \sqrt{n} M \}^n
.\end{align}
By using Stirling's formula and \eqref{:61},
there exists a positive constant $ \refstepcounter{Const}c_{\theConst} \label{;61} $
independent of $ k $ and $ M $ such that
\begin{align}\label{:62}&
| \frac{(-1)^k}{k !}
\int _{I_r^{k }} \rho _{\6 + k} (x_1,\ldots , x_{\6+k})
dx_{\6+1}\cdots dx_{\6+k} |
\\\notag
& \quad \quad \quad \quad \quad \le
\cref{;61}^k k^{- k + 1/2} (n+k)^{(n+k)/2} M ^{n+k}
.\end{align}
This implies for each $ n $ the series in the right hand side of \eqref{:4u}
converges uniformly in $ (x_1,\ldots,x_n) $.
So $ \sigma _r^n $ is the limit of continuous functions in the uniform norm, which completes the proof.
\end{proof}
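To see why the bound \eqref{:62} suffices, note (our own check) that for $ k \ge n $ one has $ (n+k)^{(n+k)/2} \le (2k)^{(n+k)/2} $, so the $ k $-th term of the series is dominated by

```latex
\begin{align*}
  \cref{;61}^k \, k^{-k+1/2} (n+k)^{(n+k)/2} M^{n+k}
  \;\le\;
  \bigl( \sqrt{2}\, \cref{;61} M \bigr)^{k} \,
  \bigl( 2^{n/2} M^{n} \bigr) \,
  k^{(n+1)/2} \, k^{-k/2} ,
\end{align*}
```

and $ k^{-k/2} $ decays faster than any geometric factor grows; hence the series on the right hand side of \eqref{:4u} indeed converges uniformly.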
\begin{lem} \label{l:62}
$ (\mathcal{E}, \dom _{\infty , r} ) $ are closable on $ \Lm $.
\end{lem}
\begin{proof}
Let $ I_r = \{ x \in \mathsf{E} ; |x| < r \} $ and
$ \Theta _r^n = \{ \theta (I_r) = n \} $.
Let
$ \mathcal{E} _r^n (\mathfrak{f},\mathfrak{g})
= \int _{\Theta _r^n } \mathbb{D} [\mathfrak{f}, \mathfrak{g} ] d \mu $.
Then it is enough to show that
$ (\mathcal{E} _r^n , \dom _{\infty , r}) $
are closable on $ \Lm $ for all $ n $.
Since $ \mathfrak{f} $ and $ \mathfrak{g} $ are $ \sigma[\pi _r] $-measurable, we have
(with $ \mathbf{x}= (x_1,\ldots,x_n) $)
\begin{align*}&
\mathcal{E} _r^n (\mathfrak{f},\mathfrak{g}) = \frac{1}{n!}
\int _{I_r^n }
\sum _{i=1}^n \frac{1}{2}
\frac{\partial f _r^n (\mathbf{x} )}{\partial x_i } \cdot
\frac{\partial g _r^n (\mathbf{x} )}{\partial x_i }
\sigma _r^n (\mathbf{x} )d \mathbf{x}
,\end{align*}
where $ f _r^n $ and $ g _r^n $ are defined similarly as after \eqref{:3.5}.
Then since $ \sigma _r^n $ is continuous,
we see $ (\mathcal{E} _r^n , \dom _{\infty , r} ) $ is closable.
\end{proof}
\begin{proof}[Proof of \tref{l:28}]
By \lref{l:62} we see the assumption \thetag{A.1$ ^* $} in \cite{o.dfa}
is satisfied. \thetag{A.2} in \cite{o.dfa} is also satisfied by the construction of determinantal random point fields. So one can apply results
in \cite{o.dfa} (Theorem 1, Corollary 1, Lemma 2.1 \thetag{3} in \cite{o.dfa}) to the present situation.
Although in Theorem 1 in \cite{o.dfa} we treat $ (\mathcal{E}, \dom ) $,
it is not difficult to see that the same conclusion also holds for
$ (\Ereg , \Dreg ) $, which completes the proof.
\end{proof}
\section{Gibbsian case}\label{s:7}
In this section we consider the case where $ \mu $ is a canonical Gibbs measure
with interaction potential $ \Phi $, whose $ n $-density functions for
bounded sets are bounded, and whose 1-correlation function is locally integrable.
If $ \Phi $ is superstable and regular in the sense of Ruelle, then
probability measures satisfying these conditions exist.
In addition, it is known from \cite{o.dfa} that, if $ \Phi $ is upper semi-continuous
(or, more generally, a measurable function dominated from above by
an upper semi-continuous potential satisfying certain integrability conditions
(see \cite{o.m})),
then the form $ (\mathcal{E}, \dom ) $ on $ \Lm $ is closable.
We remark that these assumptions are quite mild.
In \cite{o.dfa} and \cite{o.m} only {\em grand}
canonical Gibbs measures with {\em pair} interaction potentials are treated; however, it is easy to generalize the results in \cite{o.dfa} and \cite{o.m}
to the present situation.
\begin{prop}\label{l:71}
Let $ \mu $ be as above. Assume $ d \ge 2 $. Then
$ \0 (\mathsf{A} ) = 0 $ and no collision \eqref{:27} occurs.
\end{prop}
\begin{proof}
The proof is quite similar to the one of \tref{l:20}.
Let $ \mathbf{I}_{\e } $ be as in \eqref{:4b}.
It only remains to show $ \lim_{\e \to 0} \mathbf{I}_{\e } = 0 $.
We divide the proof into two cases: \thetag{1} $ d = 2 $ and \thetag{2} $ 3 \le d $. In case \thetag{1} we can prove $ \lim \mathbf{I}_{\e } =0 $ similarly as
before.
In case \thetag{2} the proof is simpler.
Indeed, we change the definitions of
$ \Ab $
in \eqref{:45} and $ \4 $ in \eqref{:46} as follows:
$ \Ab = \mathsf{A}_{4 \e }^{\mathbf{a} } (\7,\6,\8) $ and
\begin{align}\label{:71}&
\4 (t) =
\begin{cases}
2 & ( |t| \le \e )
\\ -(2/\e) |t| + 4 &( \e \le |t| \le 2\e )
\\0 & ( 2\e \le |t| )
.\end{cases}
\end{align}
Then we can easily see $ \lim \mathbf{I}_{\e } =0 $.
\end{proof}
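The reason the case $ d \ge 3 $ is simpler can be sketched as follows (our own heuristic, using that the density functions are bounded on bounded sets): the cutoff \eqref{:71} satisfies $ |\4 '| \le 2/\e $ on $ \{ \e \le |t| \le 2\e \} $, while the region $ \{ |x_i - x_j| \le 2\e \} $ contributing to $ \mathbf{I}_{\e } $ has volume of order $ \e ^d $ in the relative variable, so

```latex
\begin{align*}
  \mathbf{I}_{\e } \;\lesssim\; \Bigl( \frac{2}{\e } \Bigr)^{2} \cdot \e ^{d}
  \;=\; 4\, \e ^{d-2} \longrightarrow 0 \qquad (d \ge 3) .
\end{align*}
```

No logarithmic cutoff is needed: for $ d \ge 3 $ single points are polar for Brownian-type forms even without any decay of the density near the diagonal.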
\begin{rem}\label{r:71} \thetag{1}
This result was announced and used in \cite[Lemma 1.4]{o.inv2}.
Since this result was rather different in nature from the other parts of the paper \cite{o.inv2},
we did not give a detailed proof there.
\\
\thetag{2} In \cite{r-s} a related result was obtained.
In their framework the choice of the domain of the Dirichlet forms
may not be the same as ours. Indeed, their domains are
smaller than or equal to ours
(we do not know whether they are the same or not).
So one may be able to deduce \pref{l:71} from their result.
\end{rem}
\noindent
Address: \\
Graduate School of Mathematics\\
Nagoya University \\
Chikusa-ku, Nagoya, 464-8602\\
\noindent
Current Address (2015) \\
Hirofumi Osada\\
Tel 0081-92-802-4489 (voice)\\
Graduate School of Mathematics,\\
Kyushu University\\
Fukuoka 819-0395, JAPAN\\
{\em \texttt{osada@math.kyushu-u.ac.jp}}
\noindent
Submitted: February 9, 2003; revised: March 31, 2003
\noindent
{\em Partially supported by Grant-in-Aid for Scientific Research (B) 11440029}
\end{document} |
\begin{document}
\title[Hybrid Euler-Hadamard product]{Hybrid Euler-Hadamard product for quadratic Dirichlet L-functions in function fields}
\author{H. M. Bui and Alexandra Florea}
\address{School of Mathematics, University of Manchester, Manchester M13 9PL, UK}
\email{hung.bui@manchester.ac.uk}
\address{Department of Mathematics, Stanford University, Stanford CA 94305, USA}
\email{amusat@stanford.edu}
\allowdisplaybreaks
\begin{abstract}
We develop a hybrid Euler-Hadamard product model for quadratic Dirichlet $L$--functions over function fields (following the model introduced by Gonek, Hughes and Keating for the Riemann-zeta function). After computing the first three twisted moments in this family of $L$--functions, we provide further evidence for the conjectural asymptotic formulas for the moments of the family.
\end{abstract}
\maketitle
\section{Introduction}
An important and fascinating theme in number theory is the study of moments of the Riemann zeta-function and of families of $L$-functions. In this paper, we consider the moments of quadratic Dirichlet $L$-functions in the function field setting. Denote by $\mathcal{H}_{2g+1}$ the space of monic, square-free polynomials of degree $2g+1$ in $\mathbb{F}_q[x]$. We are interested in the asymptotic formula for the $k$-th moment,
\[
I_k(g)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{D\in\mathcal{H}_{2g+1}}L(\tfrac{1}{2},\chi_D)^k,
\]
as $g\rightarrow\infty$.
In the corresponding problem over number fields, the first and second moments have been evaluated by Jutila [\textbf{\ref{J}}], with subsequent improvements on the error terms by Goldfeld and Hoffstein [\textbf{\ref{GH}}], Soundararajan [\textbf{\ref{S}}] and Young [\textbf{\ref{Y}}], and the third moment has been computed by Soundararajan [\textbf{\ref{S}}]. Conjectural asymptotic formulas for higher moments have also been given, based on either random matrix theory [\textbf{\ref{KS}}] or the ``recipe'' [\textbf{\ref{CFKRS}}].
Using the idea of Jutila [\textbf{\ref{J}}], Andrade and Keating [\textbf{\ref{AK}}] obtained the asymptotic formula for $I_1(g)$ when $q$ is fixed and $q\equiv1(\textrm{mod}\ 4)$. They explicitly computed the main term, which is of size $g$, and bounded the error term by $O\big(q^{(-1/4+\log_q2)(2g+1)}\big)$. This result was recently improved by Florea [\textbf{\ref{F1}}] with a secondary main term and an error term of size $O_\varepsilon\big(q^{-3g/2+\varepsilon g}\big)$. Florea's approach is similar to Young's [\textbf{\ref{Y}}], but in the function field setting, it is striking that one can surpass the square-root cancellation. Florea [\textbf{\ref{F23}}, \textbf{\ref{F4}}] later also provided the asymptotic formulas for $I_k(g)$ when $k=2,3,4$.
For other values of $k$, by extending the Ratios Conjecture to the function field setting, Andrade and Keating [\textbf{\ref{AK2}}] proposed a general formula for the integral moments of quadratic Dirichlet $L$-functions over function fields. Concerning the leading terms, their conjecture reads
\begin{conjecture}
For any $k\in\mathbb{N}$ we have
\[
I_k(g)\sim 2^{-k/2}\mathcal{A}_k\frac{G(k+1)\sqrt{\Gamma(k+1)}}{\sqrt{G(2k+1)\Gamma(2k+1)}}(2g)^{k(k+1)/2}
\]
as $g\rightarrow\infty$, where
\[
\mathcal{A}_k=\prod_{P\in\mathcal{P}}\bigg[\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\bigg(1+\frac{1}{|P|}\bigg)^{-1}\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg)\bigg]
\]
with $\tau_k(f)$ being the $k$-th divisor function, and $G(k)$ is the Barnes $G$-function.
\label{ak}
\end{conjecture}
\begin{remark}\emph{An equivalent form of $\mathcal{A}_k$ is}
\begin{equation*}
\mathcal{A}_{k}=\prod_{P\in\mathcal{P}}\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\bigg(\frac12\bigg(1-\frac{1}{|P|^{1/2}}\bigg)^{-k}+\frac12\bigg(1+\frac{1}{|P|^{1/2}}\bigg)^{-k}+\frac{1}{|P|}\bigg).
\end{equation*}
\end{remark}
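The equivalence of the two expressions for $\mathcal{A}_k$ can be checked factor by factor: since $\sum_{m\geq0}\tau_k(P^m)y^m=(1-y)^{-k}$, its even part is $\tfrac12\big((1-y)^{-k}+(1+y)^{-k}\big)$ with $y=|P|^{-1/2}$. The following numerical sketch (illustrative only; it uses $\tau_k(P^m)=\binom{m+k-1}{k-1}$ and writes $p$ for $|P|$) confirms that the per-prime factors agree.

```python
from math import comb

def tau(k, m):
    # tau_k(P^m): number of ordered factorizations of P^m into k monic factors
    return comb(m + k - 1, k - 1)

def factor_conjecture(p, k, J=200):
    # per-prime factor from Conjecture 1.1, without (1 - 1/p)^{k(k+1)/2}
    return 1 + sum(tau(k, 2 * j) / p**j for j in range(1, J)) / (1 + 1 / p)

def factor_remark(p, k):
    # the same factor in the closed form of the Remark
    x = p ** -0.5
    return (0.5 * (1 - x)**-k + 0.5 * (1 + x)**-k + 1 / p) / (1 + 1 / p)

for p in (4, 9, 25, 49):   # sample values of |P| = q^{d(P)}
    for k in (1, 2, 3, 4):
        assert abs(factor_conjecture(p, k) - factor_remark(p, k)) < 1e-9
```

The series converges geometrically in $j$, so truncating at $J=200$ terms is more than enough for the tolerance used.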
Beside random matrix theory and the recipe, another method to predict asymptotic formulas for moments comes from the hybrid Euler-Hadamard product for the Riemann zeta-function developed by Gonek, Hughes and Keating [\textbf{\ref{GHK}}]. Using a smoothed form of the explicit
formula of Bombieri and Hejhal [\textbf{\ref{BH}}], the value of the Riemann zeta-function at a height $t$ on the critical line can be approximated as a partial Euler product multiplied by a partial Hadamard product over the nontrivial zeros close to $1/2+it$. The partial Hadamard product is expected to be modelled by the characteristic polynomial of a large random unitary matrix as it involves only local information about the zeros. Calculating the moments of the
partial Euler product rigorously and making an assumption (which can be proved in certain cases) about the independence of the two products, Gonek, Hughes and Keating then reproduced the conjecture for the moments of the Riemann zeta-function first put forward by Keating and Snaith [\textbf{\ref{KS2}}]. The hybrid Euler-Hadamard product model has been extended to various cases [\textbf{\ref{BK}}, \textbf{\ref{BK2}}, \textbf{\ref{D}}, \textbf{\ref{H}}, \textbf{\ref{BGM}}].
In this paper, we give further support for Conjecture \ref{ak} using the idea of Gonek, Hughes and Keating. Along the way, we also derive the first three twisted moments of quadratic Dirichlet $L$-functions over function fields.
\section{Statements of results}
Throughout the paper we assume $q$ is fixed and $q\equiv 1(\textrm{mod}\ 4)$. All theorems still hold for all odd $q$ by using the modified auxiliary lemmas in function fields as in [\textbf{\ref{BF}}], but we shall keep the assumption for simplicity. Let $\mathcal{M}$ be the set of monic polynomials in $\mathbb{F}_q[x]$, and let $\mathcal{M}_n$ and $\mathcal{M}_{\leq n}$ be the sets of those of degree $n$ and degree at most $n$, respectively. The letter $P$ will always denote a monic irreducible polynomial in $\mathbb{F}_q[x]$. The set of monic irreducible polynomials is denoted by $\mathcal{P}$. For a polynomial $f\in \mathbb{F}_q[x]$, we denote its degree by $d(f)$, its norm $|f|$ is defined to be $q^{d(f)}$, and the von Mangoldt function is defined by
$$ \Lambda(f) =
\begin{cases}
d(P) & \mbox{ if } f=cP^j \text{ for some }c \in \mathbb{F}_q^{\times}\ \text{and}\ j\geq 1, \\
0 & \mbox{ otherwise. }
\end{cases}
$$
Note that
$$|\mathcal{H}_d| =
\begin{cases}
q & \mbox{ if } d=1, \\
q^{d-1}(q-1) & \mbox{ if } d \geq 2.
\end{cases}
$$
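The count $|\mathcal{H}_d|=q^{d-1}(q-1)$ for $d\geq2$ can be confirmed by brute force for small parameters. The sketch below (an illustration, not part of the argument) tests square-freeness via $\gcd(f,f')$ over $\mathbb{F}_3$, with polynomials stored as coefficient lists, lowest degree first.

```python
from itertools import product

q, d = 3, 4   # small illustrative parameters

def trim(a):
    a = [c % q for c in a]
    while a and a[-1] == 0:
        a.pop()
    return a

def pmod(a, b):
    # remainder of a modulo b over F_q; coefficients listed low degree first
    a, b = trim(a), trim(b)
    inv = pow(b[-1], -1, q)
    while len(a) >= len(b):
        c, s = a[-1] * inv % q, len(a) - len(b)
        for i, bc in enumerate(b):
            a[s + i] = (a[s + i] - c * bc) % q
        a = trim(a)
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

count = 0
for tail in product(range(q), repeat=d):
    f = list(tail) + [1]                              # monic of degree d
    df = trim([i * c for i, c in enumerate(f)][1:])   # formal derivative f'
    if len(pgcd(f, df)) == 1:                         # gcd is a nonzero constant
        count += 1
assert count == q ** (d - 1) * (q - 1)                # 54 square-free monic quartics
```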
For any function $F$ on $\mathcal{H}_{2g+1}$, the expected value of $F$ is defined by
$$ \Big\langle F \Big\rangle_{\mathcal{H}_{2g+1}} := \frac{1}{| \mathcal{H}_{2g+1} |} \sum_{D \in \mathcal{H}_{2g+1}} F(D).$$
The Euler-Hadamard product we use, which is proved in Section 4, takes the following form.
\begin{theorem}\label{HEH}
Let $u(x)$ be a real, non-negative, $C^\infty$-function with mass $1$ and compactly supported on $[q,q^{1+1/X}]$. Let
\begin{equation*}
U(z)=\int_{0}^{\infty}u(x)E_{1}(z\log x)dx,
\end{equation*}
where $E_{1}(z)$ is the exponential integral, $E_{1}(z)=\int_{z}^{\infty}e^{-x}/x\,dx$. Then for $\textrm{Re}(s)\geq0$ we have
\begin{equation*}
L(s,\chi_D)=P_{X}(s,\chi_D)Z_{X}(s,\chi_D),
\end{equation*}
where
\begin{equation*}
P_{X}(s,\chi_D)=\exp\bigg( \sum_{\substack{f\in\mathcal{M}\\d(f)\leq X}}\frac{\Lambda(f)\chi_D(f)}{|f|^{s}d(f)}\bigg)
\end{equation*} and
\begin{equation*}
Z_{X}(s,\chi_D)=\exp\Big(-\sum_{\rho}U\big((s-\rho)\ X\big)\Big),
\end{equation*}
where the sum is over all the zeros $\rho$ of $L(s,\chi_D)$.
\end{theorem}
As remarked in [\textbf{\ref{GHK}}], $P_X(s,\chi_D)$ can be thought of as the Euler product for $L(s,\chi_D)$ truncated to include polynomials of degree $\leq X$, and $Z_X(s,\chi_D)$ can be thought of as the Hadamard product for $L(s,\chi_D)$ truncated to include zeros within a distance $\lesssim 1/X$ from the point $s$. The parameter $X$ thus controls the relative contributions of the Euler and Hadamard products. Note that a similar hybrid product formula was developed independently by Andrade, Keating and Gonek in [\textbf{\ref{AKG}}].
In Section 5 we evaluate the moments of $P_X(\chi_D):=P_X(1/2,\chi_D)$ rigorously and prove the following theorem.
\begin{theorem}\label{theoremP}
Let $0<c<2$. Suppose that $X\leq (2-c)\log g/\log q$. Then for any $k\in\mathbb{R}$ we have
\[
\Big\langle P_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}=2^{-k/2}\mathcal{A}_k\big(e^\gamma X\big)^{k(k+1)/2}+O\big( X^{k(k+1)/2-1}\big).
\]
\end{theorem}
For the partial Hadamard product, $Z_X(\chi_D):=Z_X(1/2,\chi_D)$, we make the following conjecture.
\begin{conjecture}\label{conjectureZ}
Let $0<c<2$. Suppose that $X\leq (2-c)\log g/\log q$ and $X,g\rightarrow\infty$. Then for any $k\geq0$ we have
\[
\Big\langle Z_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}\sim \frac{G(k+1)\sqrt{\Gamma(k+1)}}{\sqrt{G(2k+1)\Gamma(2k+1)}}\Big(\frac{2g}{e^\gamma X}\Big)^{k(k+1)/2}.
\]
\end{conjecture}
In Section 7 we shall provide some support for Conjecture \ref{conjectureZ} using the random matrix theory model as follows. The zeros of quadratic Dirichlet $L$-functions are believed to have the same statistical distribution as the eigenangles $\theta_n$ of $2N\times 2N$ random symplectic unitary matrices with respect to the Haar measure for some $N$. Equating the density of the zeros and the density of the eigenangles suggests that $N=g$. Hence the $k$-th moment of $Z_X(\chi_D)$ is expected to be asymptotic to the average of $Z_X(\chi_D)^k$ in which the zeros $\rho$ are replaced by the eigenangles $\theta_n$, the average being taken over all $2g\times 2g$ symplectic unitary matrices. This random matrix calculation is carried out in Section 7.
We also manage to verify Conjecture \ref{conjectureZ} in the cases $k=1,2,3$. Since, by Theorem \ref{HEH}, $Z_X(\chi_D)=L(\tfrac{1}{2},\chi_D)P_X(\chi_D)^{-1}$, this amounts to establishing the following theorem.
\begin{theorem}\label{k123}
Let $0<c<2$. Suppose that $X\leq (2-c)\log g/\log q$. Then we have
\begin{align*}
&\Big\langle L(\tfrac{1}{2},\chi_D)P_X(\chi_D)^{-1} \Big\rangle_{\mathcal{H}_{2g+1}}= \frac{1}{\sqrt{2}}\frac{2g}{e^\gamma X} + O\big(gX^{-2}\big),\\
&\Big\langle L(\tfrac{1}{2},\chi_D)^2P_X(\chi_D)^{-2} \Big\rangle_{\mathcal{H}_{2g+1}}= \frac{1}{12}\Big(\frac{2g}{e^\gamma X}\Big)^3+O\big(g^3X^{-4}\big)
\end{align*}
and
\[
\Big\langle L(\tfrac{1}{2},\chi_D)^3P_X(\chi_D)^{-3} \Big\rangle_{\mathcal{H}_{2g+1}}= \frac{1}{720\sqrt{2}}\Big(\frac{2g}{e^\gamma X}\Big)^6+O\big(g^6X^{-7}\big).
\]
\end{theorem}
Our Theorem \ref{theoremP} and Theorem \ref{k123} suggest that, at least when $X$ is not too large in terms of $g$, the $k$-th moment of $L(1/2,\chi_D)$ is asymptotic to the product of the moments of $P_X(\chi_D)$ and $Z_X(\chi_D)$ for $k=1,2,3$. We believe that this is true in general and we make the following conjecture.
\begin{conjecture}[Splitting Conjecture]
Let $0<c<2$. Suppose that $X\leq (2-c)\log g/\log q$ and $X,g\rightarrow\infty$. Then for any $k\geq0$ we have
\[
\Big\langle L(\tfrac12,\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}\sim \Big\langle P_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}\Big\langle Z_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}.
\]
\end{conjecture}
Theorem \ref{theoremP}, Conjecture \ref{conjectureZ} and the Splitting Conjecture imply Conjecture \ref{ak}.
Proving Theorem \ref{k123} requires information about the twisted moments of quadratic Dirichlet $L$-functions over function fields,
\[
I_k(\ell;g)=\Big\langle L(\tfrac12,\chi_D)^k\chi_D(\ell) \Big\rangle_{\mathcal{H}_{2g+1}}.
\]
For that we shall compute the first three twisted moments in Section 6 and show that the following theorems hold.
\begin{theorem}[Twisted first moment]\label{tfm}
Let $\ell=\ell_1\ell_2^2$ with $\ell_1$ square-free. Then we have
\begin{align*}
I_1(\ell;g)=&\,\frac{\eta_1(\ell;1)}{|\ell_1|^{1/2}}\bigg(g-d(\ell_1)+1-\frac{\partial_u\eta_1}{\eta_1}(\ell;1)\bigg)+|\ell_1|^{1/6} q^{-4g/3} P\big(g+d(\ell_1)\big)\\
&\qquad\qquad+O_\varepsilon\big(|\ell|^{1/2}q^{-3g/2+ \varepsilon g}\big),
\end{align*}
where the function $\eta_1(\ell;u)$ is defined in \eqref{eta} and $P(x)$ is a linear polynomial whose coefficients can be written down explicitly.
\end{theorem}
\begin{theorem}[Twisted second moment]\label{tsm}
Let $\ell=\ell_1\ell_2^2$ with $\ell_1$ square-free. Then we have
\begin{align*}
I_2(\ell;g)=&\,\frac{\eta_2(\ell;1)}{24|\ell_1|^{1/2}}\bigg(\sum_{j=0}^{3}\frac{\partial_u^j\eta_2}{\eta_2}(\ell;1)P_{2,j}\big(2g-d(\ell_1)\big)- 6g\sum_{j=0}^{2}\frac{\partial_u^j\kappa_2}{\kappa_2}(\ell;1,1)Q_{2,j}\big(d(\ell_1)\big)\\
&\qquad\qquad\qquad+2\sum_{i=0}^{1}\sum_{j=0}^{3-i}\frac{\partial_u^j\partial_w^i\kappa_2}{\kappa_2}(\ell;1,1)R_{2,i,j}\big(d(\ell_1)\big)\bigg)+O_\varepsilon\big(|\ell|^{1/2}q^{-g+ \varepsilon g}\big),
\end{align*}
where the functions $\eta_2(\ell;u)$ and $\kappa_2(\ell;u,v)$ are defined in \eqref{eta} and \eqref{kappa2}. Here $P_{2,j}(x)$'s are some explicit polynomials of degrees $3-j$ for all $0\leq j\leq 3$. Also, $Q_{2,j}(x)$'s and $R_{2,i,j}(x)$'s are some explicit polynomials of degrees $2-j$ and $3-i-j$, respectively.
As for the leading term we have
\begin{align*}
I_2(\ell;g)=&\,\frac{\eta_2(\ell;1)}{24|\ell_1|^{1/2}}\Big(8g^3-12g^2d(\ell_1)+d(\ell_1)^3\Big)+O_\varepsilon\big(g^2d(\ell)^\varepsilon\big)+O_\varepsilon\big(|\ell|^{1/2}q^{-g+ \varepsilon g}\big).
\end{align*}
\end{theorem}
\begin{theorem}[Twisted third moment]\label{ttm}
Let $\ell=\ell_1\ell_2^2$ with $\ell_1$ square-free. Then we have
\begin{align*}
I_3(\ell;g)=&\,\frac{\eta_3(\ell;1)}{2^56!|\ell_1|^{1/2}}\sum_{j=0}^{6}\frac{\partial_u^j\eta_3}{\eta_3}(\ell;1)P_{3,j}\big(3g-d(\ell_1)\big)\\
&\qquad+\frac{\kappa_3(\ell;1,1)q^4}{(q-1) |\ell_1|^{1/2}}\sum_{N=3g-1}^{3g}\sum_{i_1=0}^{2}\sum_{i_2=0}^{2-i_1}\sum_{j=0}^{6-i_1-i_2}\frac{\partial_u^j\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;1,1)R_{3,i_1,i_2,j}(\mathfrak{a},g+d)N^{i_2}\\
&\qquad\qquad+O_\varepsilon(|\ell_1|^{-3/4}q^{-g/4+\varepsilon g})+ O_\varepsilon\big(|\ell|^{1/2}q^{-g/2+\varepsilon g}\big),
\end{align*}
where the functions $\eta_3(\ell;u)$ and $\kappa_3(\ell;u,v)$ are defined in \eqref{eta} and \eqref{kappa3}. Here $P_{3,j}(x)$'s are some explicit polynomials of degrees $6-j$ for all $0\leq j\leq 6$. Also, $\mathfrak{a} \in \{0,1\}$ according to whether $N-d(\ell)$ is even or odd, and $R_{3,i_1,i_2,j}(\mathfrak{a},x)$ are some explicit polynomials in $x$ with degree $6-i_1-i_2-j$.
As for the leading term we have
\begin{align*}
I_3(\ell;g)=\frac{\eta_3(\ell;1)}{2^56! |\ell_1|^{\frac{1}{2}}}&\Big(\big(3g-d(\ell_1)\big)^6-73\big(g+d(\ell_1)\big)^6+396g\big(g+d(\ell_1)\big)^5\\
&\qquad\qquad-540g^2\big(g+d(\ell_1)\big)^4\Big)+O_\varepsilon\big(g^5d(\ell)^\varepsilon\big)+ O_\varepsilon\big(|\ell|^{1/2}q^{-g/4+\varepsilon g}\big).
\end{align*}
\end{theorem}
\section{Background in function fields}
We first give some background information on $L$-functions over function fields and their connection to zeta functions of curves.
Let $\pi_q(n)$ denote the number of monic, irreducible polynomials of degree $n$ over $\mathbb{F}_q[x]$. The following Prime Polynomial Theorem holds
\begin{equation*}
\pi_q(n) = \frac{1}{n} \sum_{d|n} \mu(d) q^{n/d}.
\label{pnt}
\end{equation*}
We can rewrite the Prime Polynomial Theorem in the form
\begin{equation*}
\sum_{f \in \mathcal{M}_n} \Lambda(f) = q^n.
\end{equation*}
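Both forms of the Prime Polynomial Theorem can be checked exactly for small $q$: each irreducible $P$ with $d(P)=d\mid n$ contributes $\Lambda(P^{n/d})=d(P)$ to the sum over $\mathcal{M}_n$, so the identity reads $\sum_{d\mid n}d\,\pi_q(d)=q^n$. A quick exact verification (illustrative only, with $q=3$):

```python
def mobius(n):
    # Moebius function by trial division (adequate for small n)
    res, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            res = -res
        p += 1
    return -res if m > 1 else res

def pi_q(q, n):
    # number of monic irreducible polynomials of degree n over F_q
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

q = 3
for n in range(1, 13):
    assert sum(d * pi_q(q, d) for d in range(1, n + 1) if n % d == 0) == q ** n
```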
\subsection{Quadratic Dirichlet $L$-functions over function fields}
For $\textrm{Re}(s)>1$, the zeta function of $\mathbb{F}_q[x]$ is defined by
\[
\zeta_q(s):=\sum_{f\in\mathcal{M}}\frac{1}{|f|^s}=\prod_{P\in \mathcal{P}}\bigg(1-\frac{1}{|P|^s}\bigg)^{-1}.
\]
Since there are $q^n$ monic polynomials of degree $n$, we see that
\[
\zeta_q(s)=\frac{1}{1-q^{1-s}}.
\]
It is sometimes convenient to make the change of variable $u=q^{-s}$, and then write $\mathcal{Z}(u)=\zeta_q(s)$, so that $$\mathcal{Z}(u)=\frac{1}{1-qu}.$$
For $P$ a monic irreducible polynomial, the quadratic residue symbol $\big(\frac{f}{P}\big)\in\{0,\pm1\}$ is defined by
\[
\Big(\frac{f}{P}\Big)\equiv f^{(|P|-1)/2}(\textrm{mod}\ P).
\]
If $Q=P_{1}^{\alpha_1}P_{2}^{\alpha_2}\ldots P_{r}^{\alpha_r}$, then the Jacobi symbol is defined by
\[
\Big(\frac{f}{Q}\Big)=\prod_{j=1}^{r}\Big(\frac{f}{P_j}\Big)^{\alpha_j}.
\]
The Jacobi symbol satisfies the quadratic reciprocity law. That is to say if $A,B\in \mathbb{F}_q[x]$ are relatively prime, monic polynomials, then
\[
\Big(\frac{A}{B}\Big)=(-1)^{(q-1)d(A)d(B)/2}\Big(\frac{B}{A}\Big).
\]
As we are assuming $q\equiv 1(\textrm{mod}\ 4)$, the quadratic reciprocity law gives $\big(\frac{A}{B}\big)=\big(\frac{B}{A}\big)$, a fact we will use throughout the paper.
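For illustration, the residue symbol and the reciprocity law are easy to probe numerically via Euler's criterion $\big(\frac{f}{P}\big)\equiv f^{(|P|-1)/2}\ (\textrm{mod}\ P)$. The sketch below is a brute-force check over $\mathbb{F}_5$ (so $q\equiv1\ (\textrm{mod}\ 4)$); the test polynomials are illustrative choices, with coefficients listed lowest degree first.

```python
q = 5   # a prime with q = 1 (mod 4)

def trim(a):
    a = [c % q for c in a]
    while a and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b):
    out = [0] * max(len(a) + len(b) - 1, 0)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return trim(out)

def pmod(a, b):
    a, b = trim(a), trim(b)
    inv = pow(b[-1], -1, q)
    while len(a) >= len(b):
        c, s = a[-1] * inv % q, len(a) - len(b)
        for i, bc in enumerate(b):
            a[s + i] = (a[s + i] - c * bc) % q
        a = trim(a)
    return a

def symbol(f, P):
    # (f/P) via Euler's criterion, for P monic irreducible; |P| = q^{deg P}
    e, r, base = (q ** (len(P) - 1) - 1) // 2, [1], pmod(f, P)
    while e:
        if e & 1:
            r = pmod(pmul(r, base), P)
        base = pmod(pmul(base, base), P)
        e >>= 1
    return {(): 0, (1,): 1, (q - 1,): -1}[tuple(r)]

# reciprocity (A/B) = (B/A) for coprime monic irreducible A, B when q = 1 (mod 4)
A, B = [4, 1], [3, 1]          # x - 1 and x - 2
assert symbol(A, B) == symbol(B, A) == 1
C, D = [2, 0, 1], [4, 1]       # x^2 + 2 (irreducible over F_5) and x - 1
assert symbol(C, D) == symbol(D, C) == -1
```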
For $D$ monic, we define the character
\[
\chi_D(g)=\Big(\frac{D}{g}\Big),
\]
and consider the $L$-function attached to $\chi_D$,
\[
L(s,\chi_D):=\sum_{f\in\mathcal{M}}\frac{\chi_D(f)}{|f|^s}.
\]
With the change of variable $u=q^{-s}$ we have
\begin{equation*}
\mathcal{L}(u,\chi_D):=L(s,\chi_D)=\sum_{f\in\mathcal{M}}\chi_D(f)u^{d(f)}=\prod_{P\in \mathcal{P}}\big(1-\chi_D(P)u^{d(P)}\big)^{-1}.
\end{equation*}
For $D\in\mathcal{H}_{2g+1}$, $\mathcal{L}(u,\chi_D)$ is a polynomial in $u$ of degree $2g$ and it satisfies a functional equation
\begin{equation*}
\mathcal{L}(u,\chi_D)=(qu^2)^g\mathcal{L}\Big(\frac{1}{qu},\chi_D\Big).
\end{equation*}
There is a connection between $\mathcal{L}(u,\chi_D)$ and the zeta functions of curves. For $D\in\mathcal{H}_{2g+1}$, the affine equation $y^2=D(x)$ defines a projective and connected hyperelliptic curve $C_D$ of genus $g$ over $\mathbb{F}_q$. The zeta function of the curve $C_D$ is defined by
\[
Z_{C_D}(u)=\exp\bigg(\sum_{j=1}^{\infty}N_j(C_D)\frac{u^j}{j}\bigg),
\]
where $N_j(C_D)$ is the number of points on $C_D$ over $\mathbb{F}_{q^j}$, including the point at infinity. Weil [\textbf{\ref{W}}] showed that
\[
Z_{C_D}(u)=\frac{P_{C_D}(u)}{(1-u)(1-qu)},
\]
where $P_{C_D}(u)$ is a polynomial of degree $2g$. It is known that $P_{C_D}(u)=\mathcal{L}(u,\chi_D)$ (this was proved in Artin's thesis). The Riemann Hypothesis for curves over finite fields was proven by Weil [\textbf{\ref{W}}], so all the zeros of $\mathcal{L}(u,\chi_D)$ are on the circle $|u|=q^{-1/2}$.
\subsection{Preliminary lemmas}
The first three lemmas are in [\textbf{\ref{F1}}; Lemma 2.2, Proposition 3.1 and Lemma 3.2].
\begin{lemma}\label{L1}
For $f\in\mathcal{M}$ we have
\[
\sum_{D\in\mathcal{H}_{2g+1}}\chi_D(f)=\sum_{C|f^\infty}\sum_{h\in\mathcal{M}_{2g+1-2d(C)}}\chi_f(h)-q\sum_{C|f^\infty}\sum_{h\in\mathcal{M}_{2g-1-2d(C)}}\chi_f(h),
\]
where the summations over $C$ are over monic polynomials $C$ whose prime factors are among the prime factors of $f$.
\end{lemma}
We define the generalized Gauss sum as
\[
G(V,\chi):= \sum_{u (\textrm{mod}\ f)}\chi(u)e\Big(\frac{uV}{f}\Big),
\]
where the exponential was defined in [\textbf{\ref{Hayes}}] as follows. For $a \in \mathbb{F}_q\big((\frac 1x)\big) $,
$$ e(a) = e^{ 2 \pi i \text{Tr}_{\mathbb{F}_q / \mathbb{F}_p} (a_1)/p},$$ where $a_1$ is the coefficient of $1/x$ in the Laurent expansion of $a$.
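For $f=P=x$ over the prime field $\mathbb{F}_5$, the residues $u\ (\textrm{mod}\ P)$ are the constants, the $1/x$-coefficient of the Laurent expansion of $uV/x$ is simply $uV$, and $G(V,\chi_P)$ reduces to a classical quadratic Gauss sum. A small numerical sketch (illustrative only; the assertions reflect the classical evaluation $G(V,\chi_P)=\chi_P(V)|P|^{1/2}$, real-valued since $q\equiv1\ (\textrm{mod}\ 4)$):

```python
from cmath import exp, pi

q = 5
# quadratic character on F_q via Euler's criterion
chi = {u: {0: 0, 1: 1, q - 1: -1}[pow(u, (q - 1) // 2, q)] for u in range(q)}

def e_a(c):
    # e(a) for a = c/x: the 1/x Laurent coefficient is c, and Tr is trivial on F_5
    return exp(2j * pi * (c % q) / q)

def gauss(V):
    # G(V, chi_P) for P = x: sum over the residues u mod x, i.e. the constants
    return sum(chi[u] * e_a(u * V) for u in range(q))

assert abs(gauss(1) - q ** 0.5) < 1e-9                       # = |P|^{1/2}
assert all(abs(gauss(v) - chi[v] * gauss(1)) < 1e-9 for v in range(q))
```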
\begin{lemma}\label{L3}
Let $f\in\mathcal{M}_n$. If $n$ is even then
\[
\sum_{h\in\mathcal{M}_m}\chi_f(h)=\frac{q^m}{|f|}\bigg(G(0,\chi_f)+q\sum_{V\in\mathcal{M}_{\leq n-m-2}}G(V,\chi_f)-\sum_{V\in\mathcal{M}_{\leq n-m-1}}G(V,\chi_f)\bigg),
\]
otherwise
\[
\sum_{h\in\mathcal{M}_m}\chi_f(h)= \frac{q^{m+1/2}} {|f|}\sum_{V\in\mathcal{M}_{n-m-1}}G(V,\chi_f).
\]
\end{lemma}
\begin{lemma}\label{L2}
\begin{enumerate}
\item If $(f,h)=1$, then $G(V, \chi_{fh})= G(V, \chi_f) G(V,\chi_h)$.
\item Write $V= V_1 P^{\alpha}$ where $P \nmid V_1$.
Then
$$G(V , \chi_{P^j})=
\begin{cases}
0 & \mbox{if } j \leq \alpha \text{ and } j \text{ odd,} \\
\varphi(P^j) & \mbox{if } j \leq \alpha \text{ and } j \text{ even,} \\
-|P|^{j-1} & \mbox{if } j= \alpha+1 \text{ and } j \text{ even,} \\
\chi_P(V_1) |P|^{j-1/2} & \mbox{if } j = \alpha+1 \text{ and } j \text{ odd, } \\
0 & \mbox{if } j \geq 2+ \alpha .
\end{cases}$$
\end{enumerate}
\end{lemma}
\begin{lemma}\label{L5}
For $\ell\in\mathcal{M}$ a square polynomial we have
\[
\frac{1}{| \mathcal{H}_{2g+1}|}\sum_{D \in \mathcal{H}_{2g+1}} \chi_D(\ell)=\prod_{\substack{P\in\mathcal{P}\\P|\ell}}\bigg(1+\frac{1}{|P|}\bigg)^{-1}+O(q^{-2g}).
\]
\end{lemma}
\begin{proof}
See [\textbf{\ref{BF}}; Lemma 3.7].
\end{proof}
We also have the following estimate.
\begin{lemma}[P\'olya--Vinogradov inequality]
\label{nonsq}
For $\ell\in\mathcal{M}$ not a square polynomial, let $\ell= \ell_1 \ell_2^2$ with $\ell_1$ square-free. Then we have
$$\bigg| \sum_{D \in \mathcal{H}_{2g+1}} \chi_D(\ell) \bigg| \ll_\varepsilon q^{g} | \ell_1|^{\varepsilon}.$$
\end{lemma}
\begin{proof}
As in the proof of Lemma $2.2$ in [\textbf{\ref{F1}}], using Perron's formula we have
$$ \sum_{D \in \mathcal{H}_{2g+1}} \chi_D( \ell) = \frac{1}{2 \pi i} \oint_{|u|=r}\mathcal{L} (u,\chi_{\ell})\prod_{P | \ell} \Big(1-u^{2d(P)}\Big)^{-1} \frac{(1-qu^2) du}{ u^{2g+2}},$$
where we pick $r = q^{-1/2}$. If we write $\ell= \ell_1 \ell_2^2$ with $\ell_1$ square-free, then
$$ \mathcal{L}(u,\chi_{\ell})= \mathcal{L}(u,\chi_{\ell_1}) \prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \Big(1-u^{d(P)}\chi_{\ell_1}(P) \Big).$$
Now we use the Lindel\"{o}f bound for $\mathcal{L}(u,\chi_{\ell_1})$ (see Theorem $3.4$ in [\textbf{\ref{altug}}]),
$$ \mathcal{L}(u,\chi_{\ell_1}) \ll |\ell_1|^{\varepsilon},$$ in the integral above and the conclusion follows.
\end{proof}
\begin{lemma}[Mertens' theorem]
\label{mertens}
We have
$$ \prod_{d(P) \leq X} \bigg( 1-\frac{1}{|P|} \bigg)^{-1} = e^{\gamma} X + O(1),$$ where $\gamma$ is the Euler constant.
\end{lemma}
\begin{proof}
A more general version of Mertens' estimate was proved in [\textbf{\ref{R}}; Theorem 3]. Here we give a simpler proof in the above form for completeness.
Using the Prime Polynomial Theorem,
$$ \sum_{d(P) \leq X} \frac{d(P)}{|P|} = X+O(1),$$ and hence by partial summation, we get that
$$ \sum_{d(P) \leq X} \frac{1}{|P|} = \log X + c + O \Big( \frac{1}{X} \Big)$$ for some constant $c$. Then
\begin{align*}
\sum_{d(P) \leq X} \log &\bigg( 1- \frac{1}{|P|} \bigg)^{-1} = \sum_{d(P) \leq X} \frac{1}{|P|} + \sum_{d(P) \leq X} \sum_{j=2}^{\infty} \frac{1}{j|P|^j}\\
&= \log X +c + \sum_{P\in\mathcal{P}} \sum_{j=2}^{\infty} \frac{1}{j|P|^j} - \sum_{d(P) >X} \sum_{j=2}^{\infty} \frac{1}{j|P|^j} + O \Big( \frac{1}{X} \Big) \\
&= \log X + C +O \Big( \frac{1}{X} \Big),
\end{align*} where $C = c + \sum_{P\in\mathcal{P}} \sum_{j=2}^{\infty} \frac{1}{j|P|^j}$. Exponentiating and using the fact that for $x<1$, $e^x=1+O(x)$, we get that
$$ \prod_{d(P) \leq X} \bigg( 1-\frac{1}{|P|} \bigg)^{-1} = e^{C} X + O(1),$$ and it remains to show that $C=\gamma$.
Now by the Prime Polynomial Theorem,
$$ \sum_{\substack{f\in\mathcal{M}\\ d(f) \leq X}} \frac{\Lambda(f)}{|f| d(f)} = \sum_{n \leq X} \frac{1}{n} = \log X + \gamma + O \Big( \frac{1}{X} \Big).$$
Combining the formulas above, we also have that
\begin{align*}
\sum_{\substack{f\in\mathcal{M}\\ d(f) \leq X}} \frac{\Lambda(f)}{|f| d(f)} &= \sum_{d(P) \leq X} \frac{1}{|P|} + \sum_{P\in\mathcal{P}}\sum_{2\leq j\leq X/d(P)} \frac{1}{j |P|^j} \\
&= \log X + c+ \sum_{P\in\mathcal{P}} \sum_{j=2}^{\infty} \frac{1}{j|P|^j} - \sum_{j=2}^{\infty} \sum_{d(P) > X/j} \frac{1}{j|P|^j} + O \Big( \frac{1}{X} \Big) \\
&= \log X + C + O \Big( \frac{1}{X}\Big).
\end{align*}
Using the previous two identities, it follows that $C = \gamma$, which finishes the proof.
\end{proof}
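Mertens' estimate is easy to probe numerically: grouping the primes by degree and using $\pi_q(n)$ from the Prime Polynomial Theorem, the product equals $\prod_{n\le X}(1-q^{-n})^{-\pi_q(n)}$. The sketch below (illustrative only; the tolerance is chosen generously to absorb the $O(1)$ term) compares this with $e^\gamma X$ for $q=3$:

```python
from math import exp

def mobius(n):
    res, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            res = -res
        p += 1
    return -res if m > 1 else res

def pi_q(q, n):
    # number of monic irreducible polynomials of degree n over F_q
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

q, X, gamma = 3, 30, 0.5772156649015329
prod = 1.0
for n in range(1, X + 1):
    prod *= (1.0 - q ** -n) ** -pi_q(q, n)
assert abs(prod / (exp(gamma) * X) - 1) < 0.05
```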
\section{Hybrid Euler-Hadamard product}
We start with an explicit formula.
\begin{lemma}\label{explicit}
Let $u(x)$ be a real, non-negative, $C^{\infty}$ function with mass $1$ and compactly supported on $[q,q^{1+1/X}]$. Let $v(t)=\int_{t}^{\infty}u(x)dx$ and let $\widetilde{u}$ be the Mellin transform of $u$. Then for $s$ not a zero of $L(s,\chi_D)$ we have
\begin{eqnarray*}
&&-\frac{L'}{L}(s,\chi_D)=\sum_{f\in\mathcal{M}}\frac{(\log q)\Lambda(f)\chi_D(f)}{|f|^s}v\big(q^{d(f)/ X}\big)-\sum_{\rho}\frac{\widetilde{u}\big(1-(s-\rho) X\big)}{s-\rho},
\end{eqnarray*}
where the sum over $\rho$ runs over all the zeros of $L(s,\chi_D)$.
\end{lemma}
This lemma can be proved in a familiar way [\textbf{\ref{BH}}], beginning with the integral
\begin{equation*}
-\frac{1}{2\pi i}\int_{(c)}\frac{L'}{L}(s+z,\chi_D)\widetilde{u}(1+zX)\frac{dz}{z},
\end{equation*}
where $c=\max\{2,2-\textrm{Re}(s)\}$.
Following the arguments in [\textbf{\ref{GHK}}], we can integrate the formula in Lemma \ref{explicit} to give a formula
for $L(s,\chi_D)$: for $s$ not equal to one of the zeros and $\textrm{Re}(s)\geq0$ we have
\begin{eqnarray}\label{explicitintegrate}
L(s,\chi_D)=\exp\bigg(\sum_{f\in\mathcal{M}}\frac{\Lambda(f)\chi_D(f)}{|f|^sd(f)}v\big(q^{d(f)/ X}\big)\bigg)Z_{X}(s,\chi_D).
\end{eqnarray}
To remove the restriction that $s$ is not a zero, we note that we may interpret $\exp\big(-U(z)\big)$ to be asymptotic to $Cz$ for some constant $C$ as $z\rightarrow0$, so both sides of \eqref{explicitintegrate} vanish at the zeros. Thus \eqref{explicitintegrate} holds for all $\textrm{Re}(s)\geq0$. Furthermore, since $v(q^{d(f)/X})=1$ for $d(f)\leq X$ and $v(q^{d(f)/X})=0$ for $d(f)\geq X+1$, the first factor in \eqref{explicitintegrate} is precisely $P_{X}(s,\chi_D)$, and that completes the proof of Theorem \ref{HEH}.
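The statement that $\exp(-U(z))\sim Cz$ as $z\to0$ can be checked numerically. The sketch below is illustrative only: it replaces the $C^\infty$ weight $u$ by the uniform density on $[q,q^{1+1/X}]$ (which is not smooth, but that is irrelevant for this limit) and evaluates $E_1$ by its standard series $E_1(z)=-\gamma-\log z-\sum_{n\geq1}(-z)^n/(n\cdot n!)$.

```python
from math import exp, log, factorial

gamma = 0.5772156649015329

def E1(z):
    # series expansion of the exponential integral, accurate for moderate z > 0
    return -gamma - log(z) - sum((-z) ** n / (n * factorial(n)) for n in range(1, 40))

q, X = 3.0, 10.0
a, b = q, q ** (1 + 1 / X)

def U(z, m=400):
    # U(z) = int u(x) E1(z log x) dx, with u the uniform density on [a, b] (mass 1)
    h = (b - a) / m
    return sum(E1(z * log(a + (i + 0.5) * h)) for i in range(m)) * h / (b - a)

# exp(-U(z))/z should approach a constant as z -> 0
r1 = exp(-U(1e-3)) / 1e-3
r2 = exp(-U(1e-4)) / 1e-4
assert abs(r1 / r2 - 1) < 0.01
```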
\section{Moments of the partial Euler product}\label{PXchi}
Recall that
\begin{equation*}
P_{X}(s,\chi_D)=\exp\bigg( \sum_{\substack{f\in\mathcal{M}\\d(f)\leq X}}\frac{\Lambda(f)\chi_D(f)}{|f|^{s}d(f)}\bigg).
\end{equation*}
We first show that we can approximate $P_X(s,\chi_D)^k$ by
\begin{equation*}
P_{k,X}^{*}(s,\chi_D)=\prod_{d(P)\leq X/2}\bigg( 1-\frac{\chi_D(P)}{|P|^{s}}\bigg) ^{-k}\prod_{X/2<d(P)\leq X}\bigg( 1+\frac{k\chi_{D}(P)}{|P|^s}+\frac{k^2\chi_{D}(P)^2}{2|P|^{2s}}\bigg)
\end{equation*}
for any $k\in\mathbb{R}$.
\begin{lemma}\label{PP*}
For any $k\in\mathbb{R}$ we have
\begin{equation*}
P_{X}(s,\chi_D)^{k}=\Big(1+O_{k}\big(q^{-X/6}/X\big)\Big)P_{k,X}^{*}(s,\chi_D)
\end{equation*}
uniformly for $\textrm{Re}(s)=\sigma\geq1/2$.
\end{lemma}
\begin{proof}
For any $P\in\mathcal{P}$ we let $N_{P}=\lfloor X/d(P)\rfloor$, the integer part of $X/d(P)$. Then we have
\begin{equation*}
P_{X}(s,\chi_D)^{k}=\exp\bigg(k\sum_{d(P)\leq X}\sum_{1\leq j\leq N_{P}}\frac{\chi_D(P^j)}{j|P|^{js}}\bigg)
\end{equation*}
and
\begin{align*}
P_{k,X}^{*}(s,\chi_D)=&\exp\bigg(k\sum_{d(P)\leq X/2}\sum_{j=1}^{\infty}\frac{\chi_D(P)^j}{j|P|^{js}}+\sum_{X/2<d(P)\leq X}\frac{k\chi_D(P)}{|P|^{s}}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+O_k\bigg(\sum_{X/2<d(P)\leq X}\frac{1}{|P|^{3\sigma}}\bigg)\bigg).
\end{align*}
We note that $N_{P}=1$ for $X/2<d(P)\leq X$, so
\begin{equation*}
P_{X}(s,\chi_D)^{k}P_{k,X}^{*}(s,\chi_D)^{-1}=\exp\bigg(-k\sum_{d(P)\leq X/2}\sum_{j>N_{P}}\frac{\chi_D(P)^j}{j|P|^{js}}+O_k\bigg(\sum_{X/2<d(P)\leq X}\frac{1}{|P|^{3\sigma}}\bigg)\bigg).
\end{equation*}
The expression in the exponent is
\begin{eqnarray*}
&\ll_k&\sum_{d(P)\leq X/2}\frac{1}{|P|^{\sigma(N_{P}+1)}}+\sum_{X/2<d(P)\leq X}\frac{1}{|P|^{3\sigma}}\nonumber\\
&\ll_k&\sum_{j=2}^{X}\sum_{X/(j+1)<d(P)\leq X/j}\frac{1}{|P|^{(j+1)/2}}+\sum_{X/2<d(P)\leq X}\frac{1}{|P|^{3/2}}\\
&\ll_k&\sum_{j=2}^{X}\frac{jq^{-(j-1)X/2(j+1)}}{X}+\frac{q^{-X/4}}{X}\ll_k\frac{q^{-X/6}}{X}.
\end{eqnarray*}
Hence $P_{X}(s,\chi_D)^{k}P_{k,X}^{*}(s,\chi_D)^{-1}=1+O_{k}\big(q^{-X/6}/X\big)$ as claimed.
\end{proof}
Next we write $P_{k,X}^{*}(s,\chi_{D})$ as a Dirichlet series
\begin{equation*}
\sum_{\ell\in\mathcal{M}}\frac{\alpha_{k}(\ell)\chi_{D}(\ell)}{|\ell|^s}=\prod_{d(P)\leq X/2}\bigg( 1-\frac{\chi_D(P)}{|P|^s}\bigg) ^{-k}\prod_{X/2<d(P)\leq X}\bigg(1+\frac{k\chi_{D}(P)}{|P|^s}+\frac{k^2\chi_{D}(P)^2}{2|P|^{2s}}\bigg) .
\end{equation*}
We note that $\alpha_{k}(\ell)\in\mathbb{R}$, and if we denote by $S(X)$ the set of $X$-smooth polynomials, i.e.
\begin{displaymath}
S(X)=\{\ell\in\mathcal{M}:P|\ell\rightarrow d(P)\leq X\},
\end{displaymath}
then $\alpha_{k}(\ell)$ is multiplicative, and $\alpha_{k}(\ell)=0$ if $\ell\notin S(X)$. We also have $0\leq\alpha_{k}(\ell)\leq\tau_{|k|}(\ell)$ for all $\ell\in\mathcal{M}$. Moreover, $\alpha_{k}(\ell)=\tau_{k}(\ell)$ if $\ell\in S(X/2)$, and $\alpha_{k}(P)=k$ and $\alpha_{k}(P^2)=k^2/2$ for all $P\in\mathcal{P}$ with $X/2<d(P)\leq X$.
We now truncate the series, for $s=1/2$, at $d(\ell)\leq \vartheta g$. From the Prime Polynomial Theorem we have
\begin{align}\label{truncation}
\sum_{\substack{\ell\in S(X)\\ d(\ell)> \vartheta g}}\frac{\alpha_{k}(\ell)\chi_{D}(\ell)}{|\ell|^{1/2}}&\leq \sum_{\ell\in S(X)}\frac{\tau_{|k|}(\ell)}{|\ell|^{1/2}}\Big(\frac{|\ell|}{q^{\vartheta g}}\Big)^{c/4}=q^{-c\vartheta g/4}\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|^{(2-c)/4}}\bigg)^{-|k|}\nonumber\\
&\ll q^{-c\vartheta g/4}\exp\bigg(O_k\Big(\sum_{d(P)\leq X}\frac{1}{|P|^{(2-c)/4}}\Big)\bigg)\nonumber\\
&\ll q^{-c\vartheta g/4}\exp\bigg(O_k\Big(\frac{q^{(2+c)X/4}}{X}\Big)\bigg)\ll_\varepsilon q^{-c\vartheta g/4+\varepsilon g},
\end{align}
as $X\leq (2-c)\log g/\log q$. Hence
\begin{equation}\label{P*}
P_{k,X}^{*}(\chi_{D}):=P_{k,X}^{*}(\tfrac12,\chi_{D})=\sum_{\substack{\ell\in S(X)\\ d(\ell)\leq\vartheta g}}\frac{\alpha_{k}(\ell)\chi_{D}(\ell)}{|\ell|^{1/2}}+O_\varepsilon(q^{-c\vartheta g/4+\varepsilon g})
\end{equation}
for all $k\in\mathbb{R}$ and $\vartheta>0$, and it follows that
\begin{equation*}
\Big\langle P_{k,X}^{*}(\chi_{D}) \Big\rangle_{\mathcal{H}_{2g+1}}=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{\substack{\ell\in S(X)\\ d(\ell)\leq\vartheta g}}\frac{\alpha_{k}(\ell)}{|\ell|^{1/2}}\sum_{D\in\mathcal{H}_{2g+1}} \chi_{D}(\ell)+O_\varepsilon\big(q^{-c\vartheta g/4+\varepsilon g}\big).
\end{equation*}
We first consider the contribution of the terms with $\ell=\square$. Denote this by $I(\ell=\square)$. By Lemma \ref{L5},
\begin{displaymath}
I\big(\ell=\square\big)=\sum_{\substack{\ell\in S(X)\\ d(\ell)\leq\vartheta g/2}}\frac{\alpha_{k}(\ell^2)}{|\ell|}\prod_{\substack{P\in\mathcal{P}\\P|\ell}}\bigg(1+\frac{1}{|P|}\bigg)^{-1}+O_\varepsilon\big(q^{-2g+\varepsilon g}\big).
\end{displaymath}
The sum can be extended to all $\ell\in S(X)$ as, like in \eqref{truncation},
\begin{displaymath}
\sum_{\substack{\ell\in S(X)\\ d(\ell)>\vartheta g/2}}\frac{\alpha_{k}(\ell^2)}{|\ell|}\prod_{\substack{P\in\mathcal{P}\\P|\ell}}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\ll_\varepsilon q^{-\vartheta g/4+\varepsilon g}.
\end{displaymath}
So, using the multiplicativity of $\alpha_k(\ell)$ and Lemma \ref{mertens},
\begin{align*}
I\big(&\ell=\square\big)=\prod_{d(P)\leq X}\bigg(1+\sum_{j=1}^{\infty}\frac{\alpha_{k}(P^{2j})}{|P|^{j-1}(1+|P|)}\bigg)+O_\varepsilon\big(q^{-\vartheta g/4+\varepsilon g}\big)\\
&=\prod_{d(P)\leq X/2}\bigg(1+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^{j-1}(1+|P|)}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1+\frac{k^2}{2(1+|P|)}+O_\varepsilon\big(|P|^{-2+\varepsilon}\big)\bigg)\\
&\qquad\qquad+O_\varepsilon\big(q^{-\vartheta g/4+\varepsilon g}\big)\\
&=\Big(1+O\big(q^{-X/2}/X\big)\Big)\prod_{d(P)\leq X/2}\bigg[\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^{j-1}(1+|P|)}\bigg)\bigg]\\
&\qquad\qquad\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}\bigg)^{-k(k+1)/2}\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{-k^2/2}+O_\varepsilon\big(q^{-\vartheta g/4+\varepsilon g}\big)\\
&=\bigg(1+O\Big(\frac 1X\Big)\bigg)2^{-k/2}\mathcal{A}_k\big(e^\gamma X\big)^{k(k+1)/2}+O_\varepsilon\big(q^{-\vartheta g/4+\varepsilon g}\big).
\end{align*}
Now we consider the contribution from $\ell \neq \square$, which we denote by $I ( \ell \neq \square)$. Using Lemma \ref{nonsq} we have that
$$I(\ell \neq \square) \ll q^{-g} \sum_{\ell \in S(X)} \frac{ \tau_{|k|}(\ell)}{|\ell|^{1/2-\varepsilon}}.$$ As in \eqref{truncation},
\begin{align*}
\sum_{\ell \in S(X)} \frac{ \tau_{|k|}(\ell)}{|\ell|^{1/2-\varepsilon}}& \ll \prod_{d(P) \leq X} \bigg( 1- \frac{1}{|P|^{1/2-\varepsilon}} \bigg)^{-|k|}\ll \exp \bigg( O_k\Big( \sum_{d(P) \leq X} \frac{1}{|P|^{1/2 -\varepsilon}}\Big) \bigg)\\
&\ll \exp \bigg( O_k\Big(\frac{q^{(1/2+\varepsilon)X}}{X}\Big) \bigg)\ll_\varepsilon q^{\varepsilon g}.
\end{align*}
Hence
$$I( \ell \neq \square) \ll_\varepsilon q^{-g+\varepsilon g},$$
and we obtain the theorem.
\section{Twisted moments of $L(\frac12,\chi_D)$}
\label{twist}
In this section, we are interested in the $k$-th twisted moment
\[
I_k(\ell;g)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{D\in\mathcal{H}_{2g+1}}L(\tfrac12,\chi_D)^k\chi_D(\ell).
\]
We first recall the approximate functional equation,
\[
L(\tfrac12,\chi_D)^k=\sum_{f\in\mathcal{M}_{\leq kg}}\frac{\tau_k(f)\chi_D(f)}{|f|^{1/2}}+\sum_{f\in\mathcal{M}_{\leq kg-1}}\frac{\tau_k(f)\chi_D(f)}{|f|^{1/2}}
\]
for any $k\in\mathbb{N}$. So
\[
I_k(\ell;g)=S_{k}(\ell;kg)+S_{k}(\ell;kg-1),
\]
where
\[
S_k(\ell;N)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{f\in\mathcal{M}_{\leq N}}\frac{\tau_k(f)}{|f|^{1/2}}\sum_{D\in\mathcal{H}_{2g+1}}\chi_D(f\ell)
\]
for $N\in\{kg,kg-1\}$. Define
\begin{align*}
S_{k,1}(\ell;N)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{f\in\mathcal{M}_{\leq N}}\frac{\tau_k(f)}{|f|^{1/2}}\sum_{C|(f\ell)^\infty}\sum_{h\in\mathcal{M}_{2g+1-2d(C)}}\chi_{f\ell}(h)
\end{align*}
and
\[
S_{k,2}(\ell;N)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{f\in\mathcal{M}_{\leq N}}\frac{\tau_k(f)}{|f|^{1/2}}\sum_{C|(f\ell)^\infty}\sum_{h\in\mathcal{M}_{2g-1-2d(C)}}\chi_{f\ell}(h)
\]
so that, in view of Lemma \ref{L1},
\[
S_k(\ell;N)=S_{k,1}(\ell;N)-qS_{k,2}(\ell;N).
\]
We further write
\[
S_{k,1}(\ell;N)=S_{k,1}^{\textrm{o}}(\ell;N)+S_{k,1}^{\textrm{e}}(\ell;N)
\]
according to whether the degree of the product $f\ell$ is odd or even, respectively. Lemma \ref{L3} and Lemma \ref{L2} then lead to
\begin{align}
S_{k,1}^{\textrm{o}}(\ell;N)=\frac{q^{3/2}}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\d(f\ell)\ \textrm{odd}}}\frac{\tau_k(f)}{|f|^{3/2}}\sum_{\substack{C|(f\ell)^\infty\\d(C)\leq g}}\frac{1}{|C|^2}\sum_{V\in\mathcal{M}_{d(f\ell)-2g-2+2d(C)}}G(V,\chi_{f\ell})
\label{s1odd}
\end{align}
and
\[
S_{k,1}^{\textrm{e}}(\ell;N)=M_{k,1}(\ell;N)+S_{k,1}^{\textrm{e}}(\ell;N;V\ne0),
\]
where
\begin{align}\label{M}
M_{k,1}(\ell;N)=\frac{q}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\f\ell=\square}}\frac{\tau_k(f)\varphi(f\ell)}{|f|^{3/2}}\sum_{\substack{C|(f\ell)^\infty\\d(C)\leq g}}\frac{1}{|C|^2}
\end{align}
and
\begin{align}\label{Ske}
S_{k,1}^{\textrm{e}}(\ell;N;V\ne0)&=\frac{q}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\d(f\ell)\ \textrm{even}}}\frac{\tau_k(f)}{|f|^{3/2}}\sum_{\substack{C|(f\ell)^\infty\\d(C)\leq g}}\frac{1}{|C|^2}\\
&\bigg(q\sum_{V\in\mathcal{M}_{\leq d(f\ell)-2g-3+2d(C)}}G(V,\chi_{f\ell})-\sum_{V\in\mathcal{M}_{\leq d(f\ell)-2g-2+2d(C)}}G(V,\chi_{f\ell})\bigg).\nonumber
\end{align}
We also decompose
\[
S_{k,1}^{\textrm{e}}(\ell;N;V\ne0)=S_{k,1}^{\textrm{e}}(\ell;N;V=\square)+S_{k,1}^{\textrm{e}}(\ell;N;V\ne\square)
\]
according to whether $V$ is a square or not.
We treat $S_{k,2}(\ell;N)$ similarly and define the functions $S_{k,2}^{\textrm{o}}(\ell;N)$, $M_{k,2}(\ell;N)$, $S_{k,2}^{\textrm{e}}(\ell;N;V=\square)$ and $S_{k,2}^{\textrm{e}}(\ell;N;V\ne\square)$ in the same way. Further denote
\[
M_{k}(\ell;N)=M_{k,1}(\ell;N)-qM_{k,2}(\ell;N),\qquad M_{k}(\ell)=M_{k}(\ell;kg)+M_{k}(\ell;kg-1)
\]
and
$$S_{k}^{\textrm{e}}(\ell;V=\square)=S_{k}^{\textrm{e}}(\ell;kg;V=\square)+S_{k}^{\textrm{e}}(\ell;kg-1;V=\square),$$
where $$S_{k}^{\textrm{e}}(\ell;N;V=\square)=S_{k,1}^{\textrm{e}}(\ell;N;V=\square)-qS_{k,2}^{\textrm{e}}(\ell;N;V=\square).$$
We shall next consider $M_{k}(\ell)$. The term $S_{k}^{\textrm{e}}(\ell;V=\square)$ also contributes to the main term and will be evaluated in Section \ref{Vsquare}; we will see that it combines nicely with the contribution from $M_{k}(\ell)$ for $k=1$. For the terms $S_{k,1}^{\textrm{o}}(\ell;N)$ and $S_{k,2}^{\textrm{o}}(\ell;N)$, we note that the summations over $V$ are over odd degree polynomials, so $V\ne\square$ in these cases. Let $S_{k}^{\textrm{o}}(\ell;N) = S_{k,1}^{\textrm{o}}(\ell;N)-qS_{k,2}^{\textrm{o}}(\ell;N)$, $S_{k}^{\textrm{e}}(\ell;N;V\ne\square)=S_{k,1}^{\textrm{e}}(\ell;N;V\ne\square)-qS_{k,2}^{\textrm{e}}(\ell;N;V\ne\square)$ and
\begin{equation}\label{Sknonsquare}
S_k(\ell;N;V \neq \square) = S_{k}^{\textrm{e}}(\ell;N;V\ne\square) +S_{k}^{\textrm{o}}(\ell;N)
\end{equation} be the total contribution from $V \neq \square$.
We will bound $S_k(\ell;N;V \neq \square)$ in Section \ref{Vnonsquare}.
\subsection{Evaluation of $M_{k}(\ell)$}
\label{main}
We first note that the sum over $C$ in \eqref{M} can be extended to all $C|(f\ell)^\infty$ with the cost of an error of size $O_\varepsilon(q^{N/2-2g+\varepsilon g})=O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big)$, as
\begin{equation}\label{Cextend}
\sum_{\substack{C|(f\ell)^\infty\\d(C)> g}}\frac{1}{|C|^2}\ll \sum_{C|(f\ell)^\infty}\frac{1}{|C|^2}\Big(\frac{|C|}{q^{g}}\Big)^{2-\varepsilon}=q^{-2g+\varepsilon g}\prod_{P|f\ell}\bigg(1-\frac{1}{|P|^\varepsilon}\bigg)^{-1}.
\end{equation}
So
\begin{align*}
M_{k,1}(\ell;N)&=\frac{q}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\f\ell=\square}}\frac{\tau_k(f)\varphi(f\ell)}{|f|^{3/2}}\prod_{P|f\ell}\bigg(1-\frac{1}{|P|^2}\bigg)^{-1}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big)\\
&=\frac{q}{(q-1)}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\f\ell=\square}}\frac{\tau_k(f)}{|f|^{1/2}}\prod_{P|f\ell}\bigg(1+\frac{1}{|P|}\bigg)^{-1}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big).
\end{align*}
The condition $f\ell=\square$ implies that $f=f_1^2\ell_1$ for some $f_1\in\mathcal{M}$. Hence
\begin{align*}
M_{k,1}(\ell;N)=&\frac{q}{(q-1)|\ell_1|^{1/2}}\sum_{2d(f)\leq N-d(\ell_1)}\frac{\tau_k(f^2\ell_1)}{|f|}\prod_{P|f\ell_1\ell_2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big).
\end{align*}
We are going to use an analogue of the Perron formula in the following form
\[
\sum_{2n\leq N}a(n)=\frac{1}{2\pi i}\int_{|u|=r}\Big(\sum_{n=0}^{\infty}a(n)u^{2n}\Big)\frac{du}{u^{N+1}(1-u)},
\]
provided that the power series $\sum_{n=0}^{\infty}a(n)u^n$ is absolutely convergent in $|u|\leq r<1$; the identity follows upon noting that $\frac{1}{2\pi i}\oint_{|u|=r}\frac{u^{2n}\,du}{u^{N+1}(1-u)}$ equals $1$ if $2n\leq N$ and $0$ otherwise. Hence
\begin{align*}
M_{k,1}(\ell;N)=\frac{q}{(q-1)|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\frac{\mathcal{F}_k(u)du}{u^{N-d(\ell_1)+1}(1-u)}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big)
\end{align*}
for any $r<1$, where
\begin{align*}
\mathcal{F}_k(u)&=\sum_{f\in\mathcal{M}}\frac{\tau_k(f^2\ell_1)}{|f|}\prod_{P|f\ell_1\ell_2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}u^{2d(f)}.
\end{align*}
Now by multiplicativity we have
\[
\mathcal{F}_k(u)=\eta_k(\ell;u)\mathcal{Z}\Big(\frac{u^2}{q}\Big)^{k(k+1)/2},
\]
where
\begin{equation}\label{eta}
\eta_k(\ell;u)=\prod_{P\in\mathcal{P}}\mathcal{A}_{k,P}(u)\prod_{P|\ell_1}\mathcal{B}_{k,P}(u)\prod_{\substack{P\nmid\ell_1\\P|\ell_2}}\mathcal{C}_{k,P}(u)
\end{equation}
with
\begin{align*}
&\mathcal{A}_{k,P}(u)=\bigg(1-\frac{u^{2d(P)}}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\bigg(1+\frac{1}{|P|}\bigg)^{-1}\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}u^{2jd(P)}\bigg),\\
&\mathcal{B}_{k,P}(u)=\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j+1})}{|P|^j}u^{2jd(P)}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}u^{2jd(P)}\bigg)^{-1}
\end{align*}
and
\begin{align*}
\mathcal{C}_{k,P}(u)=\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j})}{|P|^j}u^{2jd(P)}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}u^{2jd(P)}\bigg)^{-1}.
\end{align*}
Thus,
\begin{align*}
M_{k,1}(\ell;N)=&\frac{q}{(q-1)|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\\
&\qquad\qquad\frac{\eta_k(\ell;u)du}{u^{N-d(\ell_1)+1}(1-u)^{k(k+1)/2+1}(1+u)^{k(k+1)/2}}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big).
\end{align*}
Similarly,
\begin{align*}
M_{k,2}(\ell;N)=&\frac{1}{q(q-1)|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\\
&\qquad\qquad\frac{\eta_k(\ell;u)du}{u^{N-d(\ell_1)+1}(1-u)^{k(k+1)/2+1}(1+u)^{k(k+1)/2}}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big),
\end{align*}
and hence we obtain that
\begin{align*}
M_{k}(\ell)=&\frac{1}{|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\frac{\eta_k(\ell;u)du}{u^{kg-d(\ell_1)+1}(1-u)^{k(k+1)/2+1}(1+u)^{k(k+1)/2-1}}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big).
\end{align*}
As discussed in [\textbf{\ref{F23}}, \textbf{\ref{F1}}], $\eta_k(\ell;u)$ has an analytic continuation to the region $|u|\leq R_k=q^{\vartheta_k}$ for $1\leq k\leq 3$, where $\vartheta_1=1-\varepsilon$, $\vartheta_2=1/2-\varepsilon$ and $\vartheta_3=1/3-\varepsilon$. We then move the contour of integration to $|u|=R_k$, encountering a pole of order $k(k+1)/2+1$ at $u=1$ and a pole of order $k(k+1)/2-1$ at $u=-1$. In doing so we get
\begin{align*}
M_{k}(\ell)=&\frac{1}{|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=R_k}\frac{\eta_k(\ell;u)du}{u^{kg-d(\ell_1)+1}(1-u)^{k(k+1)/2+1}(1+u)^{k(k+1)/2-1}}\\
&\qquad\qquad-\frac{1}{|\ell_1|^{1/2}}\textrm{Res}(u=1) -\frac{1}{|\ell_1|^{1/2}}\textrm{Res}(u=-1)+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big).
\end{align*}
We now compute the residues at $u=1$ and $u=-1$. We have
\begin{align*}
&\eta_k(\ell;u)=\eta_k(\ell;1)\sum_{j\geq0}\frac{1}{j!}\frac{\partial_u^j\eta_k}{\eta_k}(\ell;1)(u-1)^j,\\
&u^{-(kg-d(\ell_1)+1)}=1-\big(kg-d(\ell_1)+1\big)(u-1)+\ldots,\\
&\frac{1}{(1+u)^{k(k+1)/2-1}}=\frac{1}{2^{k(k+1)/2-1}}-\frac{k(k+1)/2-1}{2^{k(k+1)/2}}(u-1)+\ldots.
\end{align*}
Similar expressions hold for the Taylor expansions around $u=-1$. So, using the fact that $\eta_k(\ell;u)$ is even,
\begin{align*}
\textrm{Res}(u=1)+\textrm{Res}&(u=-1)\\
&=\,- \frac{\eta_k(\ell;1)}{2^{k(k+1)/2-1}\big(k(k+1)/2\big)!}\sum_{j=0}^{k(k+1)/2}\frac{\partial_u^j\eta_k}{\eta_k}(\ell;1)P_{k,j}\big(kg-d(\ell_1)\big),
\end{align*}
where the $P_{k,j}$'s are explicit polynomials of degree $k(k+1)/2-j$ for all $0\leq j\leq k(k+1)/2$, and the leading coefficient of $P_{k,0}$ is $1$.
We note that
\begin{align*}
&\eta_1(\ell;\pm1)=\mathcal{A}_1\prod_{P|\ell}\bigg(1+\frac{1}{|P|}-\frac{1}{|P|^2}\bigg)^{-1},\\
&\eta_2(\ell;\pm1)=\frac{\mathcal{A}_2\tau(\ell_1)|\ell_1|}{\sigma(\ell_1)}\prod_{P|\ell}\bigg(1+\frac{1}{|P|}\bigg)\bigg(1+\frac{2}{|P|}-\frac{2}{|P|^2}+\frac{1}{|P|^3}\bigg)^{-1},\\
&\eta_3(\ell;\pm1)=\mathcal{A}_3\prod_{P|\ell}\bigg(1+\frac{3}{|P|}\bigg)\bigg(1+\frac{4}{|P|}-\frac{3}{|P|^2}+\frac{3}{|P|^3}-\frac{1}{|P|^4}\bigg)^{-1}\prod_{P|\ell_1}\frac{1+3|P|}{3+|P|},
\end{align*}
where $\mathcal{A}_k$'s are as in Conjecture \ref{ak}, and $\sigma(\ell_1)= \sum_{d | \ell_1} |d| = \prod_{P | \ell_1} (1+|P|)$ is the sum of divisors function.
Moreover, differentiating \eqref{eta} $j$ times we see that
\[
\frac{\partial_u^j\eta_k}{\eta_k}(\ell;1)=\sum_{j_1,j_2=0}^{j}c_{j,j_1,j_2}\bigg(\sum_{P|\ell_1}\frac{D_{1,j,j_1}(P)d(P)^{j_1}}{|P|}\bigg)\bigg(\sum_{\substack{P\nmid\ell_1\\P|\ell_2}}\frac{D_{2,j,j_2}(P)d(P)^{j_2}}{|P|}\bigg)
\]
for some absolute constants $c_{j,j_1,j_2}$ and $D_{1,j,j_1}(P)\ll_{j,j_1}1$, $D_{2,j,j_2}(P)\ll_{j,j_2}1$. Hence,
\[
\frac{\partial_u^j\eta_k}{\eta_k}(\ell;1)\ll_{j,\varepsilon} d(\ell)^\varepsilon,
\]
and in particular we have
\[
M_{k}(\ell)=\frac{\eta_k(\ell;1)}{2^{k(k+1)/2-1}\big(k(k+1)/2\big)!|\ell_1|^{1/2}}\big(kg-d(\ell_1)\big)^{k(k+1)/2} +O_\varepsilon\big(g^{k(k+1)/2-1}d(\ell)^\varepsilon\big).
\]
For future purposes (see Section \ref{Vsquare}), we explicitly write down the main term for $k=1$:
\begin{equation}
\label{mainfirst}
M_{1}(\ell)=\frac{1}{|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\frac{\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1-u)^{2}}+O_\varepsilon\big(q^{-3g/2+\varepsilon g}\big)
\end{equation}
with
$$ \eta_1(\ell;u) = \prod_{P \in \mathcal{P}} \bigg(1- \frac{u^{2d(P)}}{|P|(1+|P|)} \bigg) \prod_{P | \ell}\bigg(1+\frac{1}{|P|}-\frac{u^{2d(P)}}{|P|^2}\bigg)^{-1}.$$
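As a consistency check, setting $u=\pm1$ in this Euler product gives
\[
\eta_1(\ell;\pm1)=\prod_{P\in\mathcal{P}}\bigg(1-\frac{1}{|P|(1+|P|)}\bigg)\prod_{P|\ell}\bigg(1+\frac{1}{|P|}-\frac{1}{|P|^2}\bigg)^{-1},
\]
which is consistent with the evaluation of $\eta_1(\ell;\pm1)$ recorded earlier, with $\mathcal{A}_1=\prod_{P\in\mathcal{P}}\big(1-\frac{1}{|P|(1+|P|)}\big)$.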
\subsection{Evaluation of $S_{k}^{\textrm{e}}(\ell;V=\square)$}\label{Vsquare}
We proceed as in [\textbf{\ref{F1}}] and [\textbf{\ref{F23}}]. First we note that, as in \eqref{Cextend}, we can extend the sum over $C$ in \eqref{Ske} to infinity, at the expense of an error of size $O_\varepsilon(q^{(k-4)g/2+\varepsilon g})$. So
\begin{align*}
&S_{k}^{\textrm{e}}(\ell;N;V=\square)=\frac{q}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\d(f\ell)\ \textrm{even}}}\frac{\tau_k(f)}{|f|^{3/2}}\sum_{C|(f\ell)^\infty}\frac{1}{|C|^2}\\
&\qquad\qquad\qquad\bigg(q\sum_{V\in\mathcal{M}_{\leq d(f\ell)/2-g-2+d(C)}}G(V^2,\chi_{f\ell})-2\sum_{V\in\mathcal{M}_{\leq d(f\ell)/2-g-1+d(C)}}G(V^2,\chi_{f\ell})\\
&\qquad\qquad\qquad\qquad\qquad\qquad+\frac1q\sum_{V\in\mathcal{M}_{\leq d(f\ell)/2-g+d(C)}}G(V^2,\chi_{f\ell})\bigg)+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big).
\end{align*}
Applying the Perron formula in the form
\[
\sum_{n\leq N}a(n)=\frac{1}{2\pi i}\int_{|u|=r}\Big(\sum_{n=0}^{\infty}a(n)u^{n}\Big)\frac{du}{u^{N+1}(1-u)}
\] for the sums over $V$ we get
\begin{align*}
S_{k}^{\textrm{e}}(\ell;N;V=\square)& = \frac{1}{(q-1)|\ell|} \sum_{\substack{f \in \mathcal{M}_{\leq N} \\ d(f \ell) \text{ even}}} \frac{\tau_k(f) }{|f|^{3/2}} \sum_{C | (f\ell)^{\infty}}\frac{1}{|C|^2} \frac{1}{2 \pi i} \oint_{|u|=r_1}u^{-d(f)/2-d(C)}\\
& \qquad\qquad \Big( \sum_{V \in \mathcal{M}} G(V^2,\chi_{f \ell})\, u^{d(V)}\Big) \frac{(1-qu)^2du}{ u^{d(\ell)/2-g+1}(1-u) }+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big),
\end{align*}
where $r_1=q^{-1-\varepsilon}$. Another application of the Perron formula, this time in the form
\[
\sum_{\substack{n\leq N\\n+l\ \textrm{even}}}a(n)=\frac{1}{2\pi i}\int_{|w|=r}\Big(\sum_{n=0}^{\infty}a(n)w^{n}\Big)\delta(l,N;w)\frac{dw}{w^{N+1}},
\]
where
\[
\delta(l,N;w)=\frac12\bigg(\frac{1}{1-w}+\frac{(-1)^{N-l}}{1+w}\bigg)=\begin{cases}
\frac{1}{1-w^2}&N-l\ \textrm{even},\\
\frac{w}{1-w^2}&N-l\ \textrm{odd},
\end{cases}
\]for the sum over $f$ yields
\begin{align*}
&S_{k}^{\textrm{e}}(\ell;N;V=\square) = \frac{1}{(q-1)|\ell|} \\
&\qquad\quad \frac{1}{(2 \pi i)^2} \oint_{|u|=r_1} \oint_{|w|=r_2}\frac{\mathcal{N}_k (\ell;u,w) (1-qu)^2 dwdu }{u^{(N+d(\ell)-\mathfrak{a})/2-g+1}w^{N+1-\mathfrak{a}}(1-u) (1-uw^2)}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big),
\end{align*} where $r_2<1$,
$$ \mathcal{N}_k (\ell;u,w) = \sum_{f,V \in \mathcal{M}} \frac{\tau_k(f) G(V^2,\chi_{f \ell})}{|f|^{3/2}} \prod_{P | f \ell} \bigg(1- \frac{1}{|P|^2u^{d(P)}} \bigg)^{-1} u^{d(V)} w^{d(f)} $$
and $\mathfrak{a} \in \{0,1\}$ according to whether $N-d(\ell)$ is even or odd.
We next write $ \mathcal{N}_k (\ell;u,w)$ as an Euler product. From Lemma \ref{L2} we have
\begin{align*}
&\sum_{f \in \mathcal{M}} \frac{\tau_k(f) G(V^2,\chi_{f\ell})}{|f|^{3/2}} \prod_{P|f \ell} \bigg(1-\frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1}w^{d(f)} \\
&\quad = \prod_{P |\ell} \bigg(1-\frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1} \prod_{P \nmid \ell V} \bigg( 1+\frac{k w^{d(P)}}{|P|}\bigg(1-\frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1}\bigg) \\
& \qquad\ \ \prod_{\substack{P\nmid \ell \\P|V}} \bigg( 1+ \sum_{j=1}^{\infty} \frac{ \tau_k(P^j) G(V^2,\chi_{P^j}) w^{j d(P)}}{|P|^{3j/2} }\bigg(1-\frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1} \bigg) \\
&\qquad\quad\ \prod_{\substack{ P | \ell\\P \nmid V }} G(V^2,\chi_{P^{\text{ord}_P(\ell)}})\prod_{\substack{ P | \ell\\P | V }} \bigg( G(V^2,\chi_{P^{\text{ord}_P(\ell)}}) + \sum_{j=1}^{\infty} \frac{\tau_k(P^j) G(V^2,\chi_{P^{j+\text{ord}_P(\ell)}}) w^{jd(P)}}{|P|^{3j/2}} \bigg) .
\end{align*} Note that if $P| \ell_2$ and $P \nmid V$, then the above expression is $0$. Hence we must have that $\text{rad}(\ell_2) | V$. Moreover, from the last Euler factor above, note that we must have $\ell_2 | V$, so write $V= \ell_2 V_1$.
Using Lemma \ref{L2}, we rewrite
$$ \prod_{\substack{P | \ell \\ P \nmid V}} G(V^2,\chi_{P^{\text{ord}_P(\ell)}}) =\prod_{\substack{P | \ell_1 \\ P \nmid \ell_2}} |P|^{1/2} \prod_{\substack{P | \ell_1 \\ P \nmid \ell_2 \\ P |V_1}} |P|^{-1/2}.$$
By multiplicativity we then obtain
\begin{align*}
& \mathcal{N}_k (\ell;u,w) = u^{d(\ell_2)}\prod_{P\in\mathcal{P}} \bigg( 1- \frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1} \\
&\ \prod_{P \nmid \ell} \bigg(1-\frac{1}{|P|^2u^{d(P)}}+\frac{k w^{d(P)}}{|P|}\\
&\qquad\qquad\qquad\qquad\qquad+ \sum_{i=1}^{\infty} u^{i d(P)} \bigg( 1-\frac{1}{|P|^2u^{d(P)}}+ \sum_{j=1}^{\infty} \frac{\tau_k(P^j) G(P^{2i},\chi_{P^j}) w^{j d(P)}}{|P|^{3j/2}}\bigg) \bigg) \\
&\ \ \prod_{P | \ell_1} \Bigg(|P|^{1/2+2 \text{ord}_P(\ell_2)}+\sum_{i=1}^{\infty}u^{i d(P)} \sum_{j=0}^{\infty} \frac{\tau_k(P^j) G( P^{2i+ 2 \text{ord}_P(\ell_2)} , \chi_{P^{j+1+2 \text{ord}_P(\ell_2)}}) w^{jd(P)}}{|P|^{3j/2}} \Bigg) \\
&\ \ \ \prod_{\substack{P \nmid \ell_1 \\ P | \ell_2 }} \bigg( \varphi\big(P^{2 \text{ord}_P(\ell_2)}\big) + \frac{k |P|^{2 \text{ord}_P(\ell_2)} w^{d(P)}}{|P|}\\
&\qquad\qquad\qquad\qquad\qquad\qquad+ \sum_{i=1}^{\infty} u^{i d(P)} \sum_{j=0}^{\infty} \frac{ \tau_k(P^j) G( P^{2i+ 2 \text{ord}_P(\ell_2)} , \chi_{P^{j+2 \text{ord}_P(\ell_2)}}) w^{jd(P)}}{|P|^{3j/2}} \bigg).
\end{align*}
\subsubsection{The case $k=1$}
We have
\begin{align*}
\mathcal{N}_1(\ell;u,w) = \frac{|\ell|u^{d(\ell_2)}}{| \ell_1 |^{1/2}}\kappa_{1}(\ell;u,w) \mathcal{Z}(u) \mathcal{Z}\Big(\frac wq\Big) \mathcal{Z}\Big(\frac {uw^2}{q}\Big),
\end{align*} where
\begin{equation*}
\kappa_1(\ell;u,w)=\prod_{P\in\mathcal{P}} \mathcal{D}_{1,P}(u,w)\prod_{P | \ell_1} \mathcal{H}_{1,P}(u,w)\prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{1,P}(u,w)
\end{equation*}
with
\begin{align*}
\mathcal{D}_{1,P}(u,w)=& \bigg(u^{d(P)}-\frac{1}{|P|^2}\bigg)^{-1}\bigg(1-\frac{w^{d(P)}}{|P|}\bigg)\\
&\qquad\qquad \bigg(u^{d(P)}+\frac{(uw)^{d(P)}(1-u^{d(P)})}{|P|}-\frac{1+(uw)^{2d(P)}}{|P|^2}+\frac{(uw^2)^{d(P)}}{|P|^3}\bigg),
\end{align*}
\begin{align*}
\mathcal{H}_{1,P}(u,w)=& u^{d(P)}\bigg(1-u^{d(P)}+(uw)^{d(P)}-\frac{(uw)^{d(P)}}{|P|}\bigg)\\
&\qquad\qquad \bigg(u^{d(P)}+\frac{(uw)^{d(P)}(1-u^{d(P)})}{|P|}-\frac{1+(uw)^{2d(P)}}{|P|^2}+\frac{(uw^2)^{d(P)}}{|P|^3}\bigg)^{-1}
\end{align*} and
\begin{align*}
\mathcal{J}_{1,P}(u,w)=& u^{d(P)}\bigg(1-\frac{1-w^{d(P)}+(uw)^{d(P)}}{|P|}\bigg)\\
&\qquad\qquad \bigg(u^{d(P)}+\frac{(uw)^{d(P)}(1-u^{d(P)})}{|P|}-\frac{1+(uw)^{2d(P)}}{|P|^2}+\frac{(uw^2)^{d(P)}}{|P|^3}\bigg)^{-1}.
\end{align*}
So
\begin{align*}
S_{1}^{\textrm{e}}(\ell;N;V&=\square) =\ \frac{1}{(q-1)|\ell_1|^{1/2}} \frac{1}{(2 \pi i)^2} \oint_{|u|=r_1} \oint_{|w|=r_2}\\
&\ \frac{ \kappa_1(\ell;u,w)(1-qu) dwdu }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g+1}w^{N+1-\mathfrak{a}}(1-u)(1-w) (1-uw^2)^2}+O_\varepsilon\big(q^{-3g/2+\varepsilon g}\big).
\end{align*}
Note that $ \kappa_1(1;u,w)$ is the same as $\prod_{P}\mathcal{B}_P(u,w/q)$ in [\textbf{\ref{F1}}].
As in [\textbf{\ref{F1}}], we take $r_1 = q^{-3/2}$ and $r_2<1$ in the double integral above. Recall from Lemma 6.3 in [\textbf{\ref{F1}}] that
$$ \kappa_1(1;u,w) = \mathcal{Z} \Big(
\frac{w}{q^3u} \Big) \mathcal{Z} \Big( \frac{w^2}{q^2} \Big)^{-1}\prod_{P\in\mathcal{P}} \mathcal{R}_P(u,w),$$ where
\begin{align*}
\mathcal{R}_{P}(u,w)=& 1-\bigg(u^{d(P)}-\frac{1}{|P|^2}\bigg)^{-1}\bigg(1+\frac{w^{d(P)}}{|P|}\bigg)^{-1}\frac{w^{d(P)}}{|P|u^{d(P)}}\bigg(u^{3d(P)} + \frac{(u^3w)^{d(P)}}{|P|}\\
&\qquad\qquad - \frac{ (u^2w)^{d(P)}}{|P|^2} +\frac{(uw)^{d(P)}(1-u^{d(P)})}{|P|^3} - \frac{1+(uw)^{2d(P)}}{|P|^4} + \frac{(uw^2)^{d(P)}}{|P|^5} \bigg),
\end{align*}
and $\prod_{P\in\mathcal{P}} \mathcal{R}_P(u,w)$ converges absolutely for $|w|^2<q^3 |u|, |w| < q^4 |u|^2 , |w|<q$ and $|wu| <1$.
In the double integral, we enlarge the contour of integration over $w$ to $|w| = q^{3/4-\varepsilon}$ and encounter two poles at $w=1$ and $w=q^2u$. Let $A(\ell;N)$ be the residue of the pole at $w=1$ and $B(\ell;N)$ be the residue of the pole at $w=q^2u$. By bounding the integral on the new contour, we can write $$S_{1}^{\textrm{e}}(\ell;N;V=\square) = A(\ell;N)+B(\ell;N)+O_\varepsilon\big(|\ell_1|^{1/4}q^{-3g/2+\varepsilon g }\big).$$
For the residue at $w=1$ we have
$$A(\ell;N) = \frac{1}{(q-1)|\ell_1|^{1/2}} \frac{1}{2 \pi i} \oint_{|u|=r_1} \frac{\kappa_1(\ell;u,1) (1-qu)du}{u^{(N+d(\ell_1)-\mathfrak{a})/2-g+1} (1-u)^3} .$$
We make the change of variables $u \mapsto 1/u^2$ and use the fact that
$$ 1- \frac{q}{u^2} = - \frac{q}{u^2 \mathcal{Z} ( \frac{u^2}{q^2} )}\qquad \text{ and }\qquad \Big(1-\frac{1}{u^2} \Big)^{-1} = \mathcal{Z}\Big( \frac{1}{qu^2} \Big).$$
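Recalling the closed form $\mathcal{Z}(v)=\sum_{f\in\mathcal{M}}v^{d(f)}=(1-qv)^{-1}$, valid for $|v|<1/q$, both identities are immediate:
\[
-\frac{q}{u^2}\,\mathcal{Z}\Big(\frac{u^2}{q^2}\Big)^{-1}=-\frac{q}{u^2}\Big(1-\frac{u^2}{q}\Big)=1-\frac{q}{u^2},\qquad \mathcal{Z}\Big(\frac{1}{qu^2}\Big)=\Big(1-q\cdot\frac{1}{qu^2}\Big)^{-1}=\Big(1-\frac{1}{u^2}\Big)^{-1}.
\]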
A direct computation with the Euler product shows that
\begin{align*} \zeta_q(2) \kappa_1\Big( \ell;\frac{1}{u^2},1\Big) \mathcal{Z} \Big( \frac{1}{qu^2} \Big) \mathcal{Z} \Big( \frac{u^2}{q^2} \Big)^{-1} &= \prod_{P \in \mathcal{P}} \bigg(1- \frac{u^{2d(P)}}{|P|(1+|P|)} \bigg) \prod_{P | \ell}\bigg(1+\frac{1}{|P|}-\frac{u^{2d(P)}}{|P|^2}\bigg)^{-1}\\
& = \eta_1(\ell;u).\end{align*}
So after the change of variables, we have
$$A(\ell;N) = -\frac{1}{|\ell_1|^{1/2}} \frac{1}{2 \pi i} \oint_{|u| = r_1^{-1/2}} \frac{ \eta_1(\ell;u) du}{u^{2g-N-d(\ell_1)+\mathfrak{a}-1} (1-u^2)^2},$$
and hence
$$A(\ell):=A(\ell;g-1)+A(\ell;g) = -\frac{1}{|\ell_1|^{1/2}} \frac{1}{2 \pi i} \oint_{|u| = r_1^{-1/2}} \frac{ \eta_1(\ell;u)\big(u^{\mathfrak{a}(g)}+ u^{2-\mathfrak{a}(g)}\big)du}{u^{g-d(\ell_1)+1} (1-u^2)^2}.$$
Consider the integral
\[
\frac{1}{2\pi i}\oint_{|u| = r_1^{-1/2}}\frac{\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1-u)^{2}}.
\]
Making the change of variables $u \mapsto -u$ and using the facts that $\eta_1(\ell;u)$ is an even function and that $(-1)^{g-d(\ell_1)}=(-1)^{\mathfrak{a}(g)}$ (which follows from the definition of $\mathfrak{a}(g)$ together with $d(\ell)=d(\ell_1)+2d(\ell_2)\equiv d(\ell_1)\pmod 2$), this is equal to
\[
\frac{1}{2\pi i}\oint_{|u| = r_1^{-1/2}}\frac{(-1)^{\mathfrak{a}(g)}\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1+u)^{2}}.
\]
Hence we get
\begin{align*}
\frac{2}{2\pi i}\oint_{|u| = r_1^{-1/2}}\frac{\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1-u)^{2}}=\frac{1}{2\pi i}\oint_{|u| = r_1^{-1/2}}\frac{\eta_1(\ell;u)\big((1+u)^2+(-1)^{\mathfrak{a}(g)}(1-u)^2\big)du}{u^{g-d(\ell_1)+1}(1-u^2)^{2}}.
\end{align*}
Note that $(1+u)^2+(-1)^{\mathfrak{a}(g)}(1-u)^2=2\big(u^{\mathfrak{a}(g)}+ u^{2-\mathfrak{a}(g)}\big)$, and so
\[
A(\ell)=-\frac{1}{|\ell_1|^{1/2}} \frac{1}{2 \pi i}\oint_{|u| = r_1^{-1/2}}\frac{\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1-u)^{2}}.
\]
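For completeness, the parity identity $(1+u)^2+(-1)^{\mathfrak{a}(g)}(1-u)^2=2\big(u^{\mathfrak{a}(g)}+u^{2-\mathfrak{a}(g)}\big)$ used above can be checked in both cases: when $\mathfrak{a}(g)=0$,
\[
(1+u)^2+(1-u)^2=2(1+u^2)=2\big(u^0+u^2\big),
\]
while when $\mathfrak{a}(g)=1$,
\[
(1+u)^2-(1-u)^2=4u=2(u+u).
\]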
Now recall the expression \eqref{mainfirst} for the main term $M_1(\ell)$. Since the integrand above has no poles other than at $u=1$ between the circles of radius $r$ and $r_1^{-1/2}$ (recall that $r<1$ and $r_1^{-1/2}=q^{3/4}$), it follows that
$$A(\ell)+M_1(\ell) = - \frac{1}{| \ell_1|^{1/2}} \text{Res} (u=1)+ O_{\varepsilon}(q^{-3g/2+\varepsilon g}).$$
Note that the residue computation was done in Section \ref{main}.
Next we compute the residue at $w=q^2u$. We have
\begin{align*}
B(\ell;N) = \frac{1}{(q-1)| \ell_1|^{1/2}} \frac{1}{2 \pi i} \oint_{|u|=r_1} & \frac{(1-qu) (1-q^3u^2) }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g+1}(q^2u)^{N-\mathfrak{a}}(1-u) (1-q^2u) (1-q^4u^3)^2}\\
& \prod_{P\in\mathcal{P}} \mathcal{R}_P(u,q^2u) \prod_{P | \ell_1} \mathcal{H}_{1,P}(u,q^2u) \prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{1,P}(u,q^2u) \, du.
\end{align*}
We shift the contour of integration to $|u| =q^{-1-\varepsilon}$ and encounter a double pole at $u=q^{-4/3}$. The integral over the new contour is bounded by $q^{-3g/2+\varepsilon g}$, and after computing the residue at $u=q^{-4/3}$, it follows that
$$B(\ell;g)+B(\ell;g-1) =|\ell_1|^{1/6} q^{-4g/3} P\big(g+d(\ell_1)\big) + O \big( q^{-3g/2 + \varepsilon g} \big),$$ where $P(x)$ is a linear polynomial whose coefficients can be written down explicitly.
\subsubsection{The case $k=2$}
We have
\begin{align*}
\mathcal{N}_2(\ell;u,w) &= \frac{|\ell|u^{d(\ell_2)}}{| \ell_1 |^{1/2}}\kappa_{2}(\ell;u,w) \mathcal{Z}(u)\mathcal{Z}\Big(\frac wq\Big)^2 \mathcal{Z}\Big(\frac{uw^2}{q}\Big) \mathcal{Z} \Big( \frac{1}{q^2u} \Big) ,
\end{align*}
where
\begin{equation}\label{kappa2}
\kappa_2(\ell;u,w)=\prod_{P\in\mathcal{P}} \mathcal{D}_{2,P}(u,w)\prod_{P | \ell_1} \mathcal{H}_{2,P}(u,w)\prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{2,P}(u,w)
\end{equation}
with
\begin{align*}
\mathcal{D}_{2,P}(u,w) =&\bigg( 1-\frac{ w^{d(P)}}{|P|}\bigg)^2\bigg(1-\frac{(uw^2)^{d(P)}}{|P|}\bigg)^{-1}\bigg(1 +\frac{w^{d(P)}\big(2-2u^{d(P)}+(uw)^{d(P)}\big)}{|P|}\\
&\qquad\quad -\frac{\big(u^{-d(P)}+3(uw^2)^{d(P)}\big)}{|P|^2}+\frac{w^{ 2d(P)}\big(2+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}}{|P|^4}\bigg),
\end{align*}
\begin{align*}
& \mathcal{H}_{2,P}(u,w) = \bigg(1-u^{d(P)}+2(uw)^{d(P)}-\frac{(uw)^{d(P)}\big(2-w^{d(P)}+(uw)^{d(P)}\big)}{|P|}\bigg)\\
&\qquad\qquad \bigg(1 +\frac{w^{d(P)}\big(2-2u^{d(P)}+(uw)^{d(P)}\big)}{|P|}\\
&\qquad\qquad\qquad\qquad-\frac{\big(u^{-d(P)}+3(uw^2)^{d(P)}\big)}{|P|^2} +\frac{w^{ 2d(P)}\big(2+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}}{|P|^4}\bigg)^{-1}
\end{align*}
and
\begin{align*}
& \mathcal{J}_{2,P}(u,w) = \Big(1- \frac{1-2w^{d(P)}+2(uw)^{d(P)}- (uw^2)^{d(P)}}{|P|}-\frac{(uw^2)^{d(P)}}{|P|^2}\Big)\\
&\qquad\qquad \bigg(1 +\frac{w^{d(P)}\big(2-2u^{d(P)}+(uw)^{d(P)}\big)}{|P|}\\
&\qquad\qquad\qquad\qquad -\frac{\big(u^{-d(P)}+3(uw^2)^{d(P)}\big)}{|P|^2}+\frac{w^{ 2d(P)}\big(2+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}}{|P|^4}\bigg)^{-1}.\end{align*}
Hence
\begin{align}\label{S2integral}
S_{2}^{\textrm{e}}(\ell;&N;V=\square) =\ - \frac{q}{(q-1)|\ell_1|^{1/2}} \frac{1}{(2 \pi i)^2} \oint_{|u|=r_1} \oint_{|w|=r_2}\nonumber \\
&\qquad \frac{\kappa_2(\ell;u,w) dwdu }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g}w^{N+1-\mathfrak{a}}(1-u)(1-w)^2 (1-uw^2)^2}+O_\varepsilon\big(q^{-g+\varepsilon g}\big).
\end{align}
Note that $\kappa_2(1;u,w)$ is the same as $\mathcal{F}(u,w/q)$ in Lemma 4.3 of [\textbf{\ref{F23}}], and hence the Euler product defining $\kappa_2(\ell;u,w)$ converges absolutely for $|u| > 1/q$, $|w| < q^{1/2}$, $|uw| < 1$ and $|uw^2| < 1$.
We first shift the contour $|u|=r_1$ to $|u|=r_1'=q^{-1+\varepsilon}$, and then the contour $|w|=r_2$ to $|w|=r_2'=q^{1/2-\varepsilon}$ in the expression \eqref{S2integral}. In doing so, we encounter a double pole at $w=1$. Moreover, the new integral is bounded by $O_\varepsilon(q^{-g+\varepsilon g})$. Hence
\begin{align*}
S_{2}^{\textrm{e}}(\ell;N;V=\square) =\ & \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = r_1'}\bigg( \frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) + \frac{5u-1}{1-u}-N+\mathfrak{a} \bigg)\\
&\qquad\qquad \frac{\kappa_2(\ell;u,1) du }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g}(1-u)^3} + O_\varepsilon(q^{-g+\varepsilon g}),
\end{align*}
and so letting $N=2g$ and $N=2g-1$ we obtain
\begin{align}\label{S2small}
S_{2}^{\textrm{e}}(\ell;V=\square) =\ & \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = r_1'}\bigg( \frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) -2g+ \frac{5u-1}{1-u}+\frac{2u}{u+u^{\mathfrak{a}(\ell)}} \bigg)\nonumber\\
&\qquad\qquad \frac{\kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)}) du }{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2}(1-u)^3} + O_\varepsilon(q^{-g+\varepsilon g}),
\end{align}
where $\mathfrak{a}(\ell) \in \{0,1\}$ according to whether $d(\ell)$ is even or odd.
It is a straightforward exercise to verify that
\begin{equation}\label{D2u}
\mathcal{D}_{2,P} (u,1)=\bigg(1-\frac{1}{|P|}\bigg)^2\bigg(1+\frac{2}{|P|}-\frac{u^{-d(P)}+u^{d(P)}}{|P|^2}+\frac{1}{|P|^3}\bigg)=\mathcal{D}_{2,P}\Big (\frac 1u,1\Big),
\end{equation}
$$ \mathcal{H}_{2,P} \Big( u, 1 \Big) =\big(1+u^{d(P)}\big)\bigg(1+\frac{2}{|P|}-\frac{u^{-d(P)}+u^{d(P)}}{|P|^2}+\frac{1}{|P|^3}\bigg)^{-1} = u^{d(P)}\mathcal{H}_{2,P} \Big( \frac{1}{u}, 1 \Big),$$
$$ \mathcal{J}_{2,P} \Big(u, 1 \Big) =\bigg(1+\frac{1}{|P|}\bigg)\bigg(1+\frac{2}{|P|}-\frac{u^{-d(P)}+u^{d(P)}}{|P|^2}+\frac{1}{|P|^3}\bigg)^{-1}= \mathcal{J}_{2,P} \Big( \frac{1}{u},1\Big) , $$
and hence
\begin{equation}
\kappa_2(\ell;u,1)=u^{d(\ell_1)}\kappa_2\Big(\ell;\frac 1u,1\Big). \label{simf2}
\end{equation}
Let
\[
\alpha_P(u)=\frac{\partial_ w \mathcal{D}_{2,P}}{\mathcal{D}_{2,P}}(u,1),\qquad\beta_P(u) = \frac{ \partial_ w \mathcal{H}_{2,P}}{ \mathcal{H}_{2,P}}(u,1)\qquad\textrm{and}\qquad
\gamma_P(u) = \frac{\partial_ w \mathcal{J}_{2,P}}{ \mathcal{J}_{2,P}}(u,1).\]
By direct computation we obtain
\begin{align*}
&\alpha_P(u)=\ 2 d(P)\bigg(1-\frac{1}{|P|}\bigg)^{-1}\bigg(1-\frac{u^{d(P)}}{|P|}\bigg)^{-1}\bigg(1+\frac{2}{|P|}-\frac{u^{-d(P)}+u^{d(P)}}{|P|^2}+\frac{1}{|P|^3}\bigg)^{-1}\\
&\qquad\bigg(\frac{u^{d(P)}}{|P|}-\frac{3+u^{d(P)}}{|P|^2}+\frac{u^{-d(P)}+1+4u^{d(P)}+u^{2d(P)}}{|P|^3}-\frac{3+u^{d(P)}+2u^{2d(P)}}{|P|^4}+\frac{2u^{d(P)}}{|P|^5}\bigg),
\end{align*}
\begin{align*}
\beta_P(u)=\ &2d(P)\big(1+u^{d(P)}\big)^{-2} \bigg(u^{d(P)}-\frac{1-u^{d(P)}}{|P|}+\frac{u^{2 d(P)}+2 u^{d(P)}-1}{|P|^2}-\frac{u^{d(P)}+2}{|P|^3}\bigg)
\end{align*}
and
\[
\gamma_P(u)=2d(P)\bigg(1+\frac{1}{|P|}\bigg)^{-2}\bigg(u^{d(P)}-\frac{1}{|P|}\bigg)\bigg(\frac{u^{-d(P)}+2}{|P|^2}+\frac{1}{|P|^3}\bigg) .
\]
From Lemma 4.4 of [\textbf{\ref{F23}}] we have
\begin{equation*}
\sum_{P\in\mathcal{P}}\alpha_P(u)=\sum_{P\in\mathcal{P}}\alpha_P\Big(\frac 1u\Big)+4u\sum_{P\in\mathcal{P}}\frac{\partial_u \mathcal{D}_{2,P}}{\mathcal{D}_{2,P}}(u,1)+\frac{2(1+u)}{1-u}.
\end{equation*}
Also note that
$$\beta_P(u)-\beta_P\Big(\frac 1u\Big) = 4 u \frac{\partial_u \mathcal{H}_{2,P} }{\mathcal{H}_{2,P}}(u, 1)-2d(P)$$
and
$$\gamma_P(u)-\gamma_P\Big(\frac1u\Big) = 4 u \frac{\partial_u \mathcal{J}_{2,P}}{\mathcal{J}_{2,P}}(u, 1).$$
Combining the above equations we get that
\begin{equation}
\frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) = \frac{ \partial_ w \kappa_2}{\kappa_2}\Big(\ell;\frac1u,1\Big) + 4u \frac{ \partial_u \kappa_2}{\kappa_2} (\ell;u,1)+\frac{2(1+u)}{1-u}- 2 d(\ell_1).
\label{simbeta}
\end{equation}
We remark from \eqref{D2u} that $\kappa_2 (\ell;u,1)$ is analytic for $1/q<|u|<q$. Making a change of variables $u \mapsto 1/u$ in the integral \eqref{S2small} and using equations \eqref{simf2} and \eqref{simbeta} we get
\begin{align*}
&S_{2}^{\textrm{e}}(\ell;V=\square)= - \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = q^{1-\varepsilon}}\frac{\kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)}) }{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2} (1-u)^3}\\
&\quad \bigg(\frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) -4u \frac{ \partial_u \kappa_2}{\kappa_2} (\ell;u,1)+ 2 d( \ell_1)-2g- \frac{7+u}{1-u}+\frac{2u^{\mathfrak{a}(\ell)}}{u+u^{\mathfrak{a}(\ell)}} \bigg) du+O_\varepsilon(q^{-g+ \varepsilon g}).
\end{align*}
We further use the fact that
\begin{align*}
&\frac{1}{2 \pi i} \oint_{|u| = q^{1-\varepsilon}}\frac{ \partial_u \kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)})du }{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2-1} (1-u)^3}\\
&\quad= - \frac{1}{2 \pi i} \oint_{|u| = q^{1-\varepsilon}} \frac{\kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)})}{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2-1}(1-u)^3}\bigg(\frac{u}{u+u^{\mathfrak{a}(\ell)}} -\frac{d(\ell_1)}{2}+\frac{1+2u}{1-u}\bigg)\, du,
\end{align*}
as $\mathfrak{a}(\ell)(u^{\mathfrak{a}(\ell)}-u)=0$. Combining the two equations above, it follows that
\begin{align}\label{S2large}
S_{2}^{\textrm{e}}(\ell;V=\square)= & - \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = q^{1-\varepsilon}}\frac{\kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)}) }{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2} (1-u)^3}\\
&\qquad\qquad \bigg( \frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) -2g+ \frac{5u-1}{1-u}+\frac{2u}{u+u^{\mathfrak{a}(\ell)}} \bigg) du+O_\varepsilon(q^{-g+ \varepsilon g}).\nonumber
\end{align}
As there is only one pole of the integrand at $u=1$ in the annulus between $|u|=q^{-1+\varepsilon}$ and $|u|=q^{1-\varepsilon}$, in view of \eqref{S2small} and \eqref{S2large} we conclude that
\begin{equation*}
S_{2}^{\textrm{e}}(\ell;V=\square) = - \frac{q }{2(q-1)|\ell_1|^{\frac{1}{2}}}\text{Res}(u=1) +O_\varepsilon(q^{-g+ \varepsilon g}).
\end{equation*}
To compute the residue at $u=1$, we proceed as in calculating the residue of $M_k(\ell)$ in the previous subsection. In doing so we have
\begin{align*}
\textrm{Res}(u=1)=&\ \kappa_2(\ell;1,1)\bigg(\frac{g}{2}\sum_{j=0}^{2}\frac{\partial_u^j\kappa_2}{\kappa_2}(\ell;1,1)Q_{2,j}\big(d(\ell_1)\big)\\
&\qquad\qquad\qquad\qquad\qquad\qquad-\frac{1}{6}\sum_{i=0}^{1}\sum_{j=0}^{3-i}\frac{\partial_u^j\partial_w^i\kappa_2}{\kappa_2}(\ell;1,1)R_{2,i,j}\big(d(\ell_1)\big)\bigg),
\end{align*}
where $Q_{2,j}(x)$'s and $R_{2,i,j}(x)$'s are explicit polynomials of degrees $2-j$ and $3-i-j$, respectively, and the leading coefficients of $Q_{2,0}(x)$ and $R_{2,0,0}(x)$ are $1$. We also note that
\[
\kappa_2(\ell;1,1)=\frac{\eta_2(\ell;1)}{\zeta_q(2)}=\frac{(q-1)\eta_2(\ell;1)}{q},
\]
and as before we have the estimates
\[
\frac{\partial_u^j\kappa_2}{\kappa_2}(\ell;1,1),\, \frac{\partial_u^j\partial_w\kappa_2}{\kappa_2}(\ell;1,1)\ll_{j,\varepsilon}d(\ell)^\varepsilon.
\]
Hence, in particular, we get
\[
S_{2}^{\textrm{e}}(\ell;V=\square)=-\frac{\eta_2(\ell;1)}{12|\ell_1|^{1/2}}\Big(3gd(\ell_1)^2-d(\ell_1)^3\Big)+O\big(gd(\ell_1)d(\ell)^\varepsilon\big).
\]
\subsubsection{The case $k=3$}
We have
\begin{align*}
\mathcal{N}_3(\ell;u,w) =& \frac{|\ell| u^{d(\ell_2)}}{| \ell_1 |^{1/2}}\kappa_3(\ell;u,w)\mathcal{Z}(u) \mathcal{Z}\Big(\frac wq\Big)^3 \mathcal{Z}\Big(\frac{uw^2}{q}\Big)^6 \mathcal{Z} \Big( \frac{1}{q^2u} \Big) \mathcal{Z}\Big(\frac{uw}{q}\Big)^{-3} ,
\end{align*}
where
\begin{equation}\label{kappa3}
\kappa_3(\ell;u,w)=\prod_{P\in\mathcal{P}} \mathcal{D}_{3,P}(u,w)\prod_{P | \ell_1} \mathcal{H}_{3,P}(u,w)\prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{3,P}(u,w)
\end{equation}
with
\begin{align*}
&\mathcal{D}_{3,P}(u,w) =\bigg(1-\frac{w^{d(P)}}{|P|}\bigg)^3 \bigg(1-\frac{(uw)^{d(P)}}{|P|}\bigg)^{-3}\bigg(1-\frac{(uw^2)^{d(P)}}{|P|}\bigg)^3 \\
&\qquad\bigg(1+\frac{3w^{d(P)}\big(1-u^{d(P)}+(uw)^{d(P)}\big)}{|P|}-\frac{u^{-d(P)}+(uw^2)^{d(P)}\big(6-w^{d(P)}+(uw)^{d(P)}\big)}{|P|^2}\\
&\qquad\qquad+\frac{3w^{2d(P)}\big(1+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}\big(3+(uw)^{2d(P)}\big)}{|P|^4}+\frac{(uw^3)^{2d(P)}}{|P|^5}\bigg),
\end{align*}
$\mathcal{H}_{3,P} (u,w) =$
\begin{align*}
& \bigg(1-u^{d(P)}+3(uw)^{d(P)}-\frac{(uw)^{d(P)}\big(3-3w^{d(P)}+3(uw)^{d(P)}-(uw^2)^{d(P)}\big)}{|P|}-\frac{(u^2w^3)^{d(P)}}{|P|^2}\bigg)\\
&\qquad\bigg(1+\frac{3w^{d(P)}\big(1-u^{d(P)}+(uw)^{d(P)}\big)}{|P|}-\frac{u^{-d(P)}+(uw^2)^{d(P)}\big(6-w^{d(P)}+(uw)^{d(P)}\big)}{|P|^2}\\
&\qquad\qquad+\frac{3w^{2d(P)}\big(1+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}\big(3+(uw)^{2d(P)}\big)}{|P|^4}+\frac{(uw^3)^{2d(P)}}{|P|^5}\bigg)^{-1}
\end{align*}
and $\mathcal{J}_{3,P} (u,w) =$
\begin{align*}
& \bigg(1-\frac{\big(1-3w^{d(P)}+3(uw)^{d(P)}-3(uw^2)^{d(P)}\big)}{|P|}-\frac{(uw^2)^{d(P)}\big(3-w^{d(P)}+(uw)^{d(P)}\big)}{|P|^2}\bigg)\\
&\qquad\bigg(1+\frac{3w^{d(P)}\big(1-u^{d(P)}+(uw)^{d(P)}\big)}{|P|}-\frac{u^{-d(P)}+(uw^2)^{d(P)}\big(6-w^{d(P)}+(uw)^{d(P)}\big)}{|P|^2}\\
&\qquad\qquad+\frac{3w^{2d(P)}\big(1+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}\big(3+(uw)^{2d(P)}\big)}{|P|^4}+\frac{(uw^3)^{2d(P)}}{|P|^5}\bigg)^{-1}.
\end{align*}
Using the above, we obtain
\begin{align}\label{S3integral}
&S_{3}^{\textrm{e}}(\ell;N;V=\square) =- \frac{q}{(q-1)|\ell_1|^{1/2}} \frac{1}{(2 \pi i)^2} \oint_{|u|=r_1} \oint_{|w|=r_2} \\
&\qquad\qquad \frac{(1-uw)^3\kappa_3(\ell;u,w) dwdu }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g}w^{N+1-\mathfrak{a}}(1-u)(1-w)^3 (1-uw^2)^7}+O\big(q^{-g/2+\varepsilon g}\big).\nonumber
\end{align}
Note that $\kappa_3(1;u,w)$ is the same as $\mathcal{T}(u,w/q)$ in Lemma 7.4 of [\textbf{\ref{F23}}]. Hence, as in [\textbf{\ref{F23}}], $\kappa_3(\ell;u,w)$ is absolutely convergent for $|u| > 1/q$, $|w| < q^{1/2}$, $|uw| <q^{1/2}$ and $|uw^2| <q^{1/2}$. Moreover, $\kappa_3(\ell;u,1)$ has an analytic continuation when $1/q<|u|<q$.
We proceed as in the case $k=2$. First we move the contour $|u|=r_1$ to $|u|=r_1'=q^{-1+\varepsilon}$, and then the contour $|w|=r_2$ to $|w|=r_2'=q^{1/2-\varepsilon}$ in \eqref{S3integral}. In doing so, we cross a triple pole at $w=1$. On the new contours, the integral is bounded by $O_\varepsilon(q^{-g+\varepsilon g})$. Hence, by expanding the terms in their Laurent series,
\begin{align*}
S_{3}^{\textrm{e}}(\ell;N;V=\square) &=\ \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = r_1'} \frac{\kappa_3(\ell;u,1)}{u^{(N+d(\ell_1)-\mathfrak{a})/2-g}(1-u)^7}\nonumber \\
&\quad\sum_{i_1=0}^{2}\sum_{i_2=0}^{2-i_1}\frac{\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;u,1)Q_{3,i_1,i_2}(\mathfrak{a},u)(1-u)^{i_1+i_2}N^{i_2} du + O_\varepsilon(q^{-g/2+\varepsilon g}),
\end{align*}
where the $Q_{3,i,j}(\mathfrak{a},u)$'s are explicit functions which are analytic in $u$.
Next we move the $u$-contour to $|u|=q^{1-\varepsilon}$. We encounter a pole at $u=1$ and we bound the new integral by $O_\varepsilon(|\ell_1|^{-1}q^{-g/2+\varepsilon g})$. For the residue at $u=1$, we calculate the Taylor series of the terms in the integrand and get
\begin{align*}
\textrm{Res}(u=1)=&\kappa_3(\ell;1,1)\sum_{i_1=0}^{2}\sum_{i_2=0}^{2-i_1}\sum_{j=0}^{6-i_1-i_2}\frac{\partial_u^j\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;1,1)R_{3,i_1,i_2,j}(\mathfrak{a},g+d(\ell_1))N^{i_2},
\end{align*}
where $R_{3,i_1,i_2,j}(\mathfrak{a},x)$'s are explicit polynomials in $x$ with degree $6-i_1-i_2-j$. Thus,
\begin{align*}
S_{3}^{\textrm{e}}(\ell;V=\square)=&\frac{\kappa_3(\ell;1,1)q}{(q-1) |\ell_1|^{1/2}}\sum_{N=3g-1}^{3g}\sum_{i_1=0}^{2}\sum_{i_2=0}^{2-i_1}\sum_{j=0}^{6-i_1-i_2}\frac{\partial_u^j\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;1,1)R_{3,i_1,i_2,j}(\mathfrak{a},g+d(\ell_1))N^{i_2}\\
&\qquad\qquad+ O_\varepsilon(q^{-g/2+\varepsilon g})+O_\varepsilon(|\ell_1|^{-3/4}q^{-g/4+\varepsilon g}).
\end{align*}
As for the leading term, as before we can show that
\[
\kappa_3(\ell;1,1)=\frac{\eta_3(\ell;1)}{\zeta_q(2)} ,\qquad\frac{\partial_u^j\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;1,1)\ll_{i_1,j,\varepsilon}d(\ell)^\varepsilon,
\]
and so
\begin{align*}
S_{3}^{\textrm{e}}(\ell;V=\square)&=\frac{\eta_3(\ell;1)}{ |\ell_1|^{1/2}}\sum_{N=3g-1}^{3g}\sum_{i_2=0}^{2}R_{3,0,i_2,0}(\mathfrak{a},g+d)N^{i_2}+O_\varepsilon\big(g^5d(\ell)^\varepsilon\big)\\
&= \frac{\eta_3(\ell;1)}{ |\ell_1|^{\frac{1}{2}}}\sum_{N=3g-1}^{3g}\sum_{i_2=0}^{2} \frac{1}{2 \pi i} \oint_{|u| = r_1'} \frac{Q_{3,0,i_2}(\mathfrak{a},1)(1-u)^{i_2}N^{i_2} du}{u^{(g+d(\ell_1))/2}(1-u)^7}\nonumber \\
&\qquad\qquad +O_\varepsilon\big(g^5d(\ell)^\varepsilon\big).
\end{align*}
Expanding the terms in \eqref{S3integral} in their Laurent series around $w=1$,
\begin{align*}
&(1-uw)^3=(1-u)^3-3u(1-u)^2(w-1)+3u^2(1-u)(w-1)^2+\cdots,\\
&w^{-N}=1-N(w-1)+\frac{N(N+1)}{2}(w-1)^2+\cdots,\\
&(1-uw^2)^{-7}=(1-u)^{-7}+14u(1-u)^{-8}(w-1)+7u(1-u)^{-9}(1+15u)(w-1)^2+\cdots,
\end{align*}
we see that
\begin{align*}
S_{3}^{\textrm{e}}(\ell;V=\square)&= \frac{\eta_3(\ell;1)}{ |\ell_1|^{\frac{1}{2}}}\sum_{N=3g-1}^{3g} \frac{1}{2 \pi i} \oint_{|u| = r_1'} \frac{1}{u^{(g+d(\ell_1))/2}(1-u)^7}\nonumber \\
&\qquad\qquad \bigg(73-11(1-u)N+\frac{(1-u)^2N^2}{2}\bigg)du+O_\varepsilon\big(g^5d(\ell)^\varepsilon\big)\\
&=-\frac{\eta_3(\ell;1)}{2^56! |\ell_1|^{\frac{1}{2}}}(g+d)^4\Big(73(g+d)^2-396g(g+d)+540g^2\Big)+O_\varepsilon\big(g^5d(\ell)^\varepsilon\big).
\end{align*}
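The constant $73$ in the last display can be spot-checked symbolically: the coefficient of $(w-1)^2$ in $(1-uw)^3w^{-N}/\big((1-u)(1-uw^2)^7\big)$, multiplied by $(1-u)^7$, tends to $73$ as $u\to1$, since the $N$-dependent terms carry factors of $1-u$. A short sympy sketch (ours, not part of the paper):

```python
# Verify the constant 73 in the k = 3 residue computation: take the
# coefficient of (w-1)^2 in (1-uw)^3 w^(-N) / ((1-u)(1-uw^2)^7),
# multiply by (1-u)^7 and let u -> 1; the N-dependent terms vanish.
import sympy as sp

u, x = sp.symbols('u x')   # x stands for w - 1
N = 5                      # any fixed positive integer works for this check

f = (1 - u*(1 + x))**3 * (1 + x)**(-N) / ((1 - u) * (1 - u*(1 + x)**2)**7)
c2 = f.series(x, 0, 3).removeO().coeff(x, 2)      # coefficient of (w-1)^2
val = sp.limit(sp.cancel(c2 * (1 - u)**7), u, 1)
print(val)   # -> 73
```

The same computation with symbolic $N$ reproduces the full bracket $73-11(1-u)N+(1-u)^2N^2/2$ up to terms vanishing at $u=1$.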
\subsection{Bounding $S_k(\ell;N;V \neq \square)$}\label{Vnonsquare}
Recall from \eqref{Sknonsquare} that
$$ S_k(\ell;N;V \neq \square) = S_{k}^{\textrm{e}}(\ell;N;V\ne\square) +S_{k}^{\textrm{o}}(\ell;N) , $$ with $S_{k}^{\textrm{e}}(\ell;N;V\ne\square) = S_{k,1}^{\textrm{e}}(\ell;N;V\ne\square)-qS_{k,2}^{\textrm{e}}(\ell;N;V\ne\square)$ and $S_{k}^{\textrm{o}}(\ell;N) = S_{k,1}^{\textrm{o}}(\ell;N)-qS_{k,2}^{\textrm{o}}(\ell;N)$, and $S_{k,1}^{\textrm{o}}(\ell;N)$ is given by equation \eqref{s1odd}. We will focus on bounding $S_{k,1}^{\textrm{o}}(\ell;N)$, since the remaining terms can be bounded similarly.
Using the fact that for $r_1<1$,
$$\sum_{\substack{C \in \mathcal{M}_j \\ C| (f \ell)^{\infty}}} \frac{1}{|C|^2} = \frac{1}{2 \pi i} \oint_{|u|=r_1} q^{-2j} \prod_{P | f \ell} \big(1-u^{d(P)}\big)^{-1}\, \frac{du}{u^{j+1}},$$ and writing $V=V_1V_2^2$ with $V_1$ a square-free polynomial, we have
\begin{align*}
S_{k,1}^{\textrm{o}}(\ell;N) &= \frac{q^{3/2}}{(q-1)|\ell|} \frac{1}{2 \pi i} \oint_{|u|=r_1} \sum_{\substack{n\leq N \\ n +d(\ell) \text{ odd}}}\sum_{j=0}^g \sum_{\substack{r\leq n+d(\ell)-2g+2j-2 \\ r \text{ odd}}}q^{-2j} \\
& \qquad \sum_{V_1 \in \mathcal{H}_r} \sum_{V_2 \in \mathcal{M}_{(n+d(\ell)-r)/2-g+j-1}} \sum_{f \in \mathcal{M}_n} \frac{\tau_k(f) G(V_1V_2^2, \chi_{f \ell})}{|f|^{3/2} }\prod_{P | f \ell} \big(1-u^{d(P)}\big)^{-1} \, \frac{du}{u^{j+1}}.
\end{align*}
Now
\begin{align*}
\sum_{f \in \mathcal{M}} \frac{\tau_k(f) G(V_1V_2^2, \chi_{f \ell})}{|f|^{3/2}} \prod_{P | f \ell} \big(1-u^{d(P)}\big)^{-1}w^{d(f)}= \mathcal{H}(V,\ell;u,w) \mathcal{J}(V,\ell;u,w)\mathcal{K}(V_1 ; u,w),
\end{align*} where
$$ \mathcal{H}(V,\ell;u,w) = \prod_{P | \ell} \bigg( \sum_{j=0}^{\infty} \frac{\tau_k(P^j) G(V,\chi_{P^{j+\text{ord}_P(\ell)}}) w^{j d(P)}}{|P|^{3j/2}} \bigg) \big(1-u^{d(P)}\big)^{-1},$$
\begin{align*}
\mathcal{J}(V,\ell;u,w) &= \prod_{\substack{P \nmid \ell\\P |V }} \bigg( 1+ \sum_{j=1}^{\infty} \frac{\tau_k(P^j) G(V, \chi_{P^j}) w^{j d(P)}}{|P|^{3j/2}} \big(1-u^{d(P)}\big)^{-1} \bigg) \\
& \qquad\qquad \prod_{\substack{P \nmid V_1\\P | \ell V_2 }}\Big( 1+ \frac{k \chi_{V_1}(P) w^{d(P)}}{|P|} \big(1-u^{d(P)}\big)^{-1}\Big)^{-1}
\end{align*}
and
$$ \mathcal{K}(V_1;u,w) = \prod_{P \nmid V_1} \bigg( 1+ \frac{k \chi_{V_1}(P) w^{d(P)}}{|P|}\big(1-u^{d(P)}\big)^{-1} \bigg). $$
We use the Perron formula for the sum over $f$ and obtain
\begin{align*}
S_{k,1}^{\textrm{o}}(\ell;N) &= \frac{q^{3/2}}{(q-1)|\ell|} \frac{1}{(2 \pi i)^2} \oint_{|u|=q^{-\varepsilon}} \oint_{|w| = q^{1/2-\varepsilon}} \sum_{\substack{n\leq N \\ n +d(\ell) \text{ odd}}} \sum_{j=0}^g \sum_{\substack{r\leq n+d(\ell)-2g+2j-2 \\ r \text{ odd}}} q^{-2j}\\
& \qquad \sum_{V_1 \in \mathcal{H}_r} \sum_{V_2 \in \mathcal{M}_{(n+d(\ell)-r)/2-g+j-1}} \mathcal{H}(V,\ell;u,w) \mathcal{J}(V,\ell;u,w)\mathcal{K}(V_1;u,w) \, \frac{du}{u^{j+1}} \, \frac{dw}{w^{n+1}}.
\end{align*}
Let $j_0$ be minimal such that $|wu^{j_0}| < 1$. Then we write
\begin{equation}
\mathcal{K}(V_1;u,w) = \mathcal{L}\Big(\frac wq,\chi_{V_1}\Big)^k \mathcal{L}\Big(\frac{uw}{q},\chi_{V_1}\Big)^k \cdot \ldots \cdot \mathcal{L}\Big(\frac{u^{j_0-1}w}{q}, \chi_{V_1}\Big)^k \mathcal{T}(V_1;u,w), \label{expr}
\end{equation} where $\mathcal{T}(V_1;u,w)$ is absolutely convergent in the selected region. We also have $$ \mathcal{J}(V,\ell;u,w) \ll 1$$ and, similarly to the proof of Lemma $5.3$ in [\textbf{\ref{S}}],
$$ \mathcal{H} (V,\ell;u,w) \ll_\varepsilon |\ell|^{1/2+\varepsilon} \big| (\ell, V_2^2)\big|^{1/2}|V|^{\varepsilon}.$$
We trivially bound the sum over $V_2$. Then we use \eqref{expr} and upper bounds for moments of $L$--functions (see Theorem $2.7$ in [\textbf{\ref{F4}}]) to get that
$$ \sum_{V_1 \in \mathcal{H}_r} \bigg| \mathcal{L}\Big(\frac wq,\chi_{V_1}\Big) \mathcal{L}\Big(\frac{uw}{q},\chi_{V_1}\Big) \cdot \ldots \cdot \mathcal{L}\Big(\frac{u^{j_0-1}w}{q}, \chi_{V_1}\Big) \bigg|^k \ll_\varepsilon q^{r} r^{k(k+1)/2+\varepsilon} .$$
Alternatively, one can use a Lindel\"{o}f type bound for each $L$--function to get the weaker upper bound of $q^{r+\varepsilon r}$ for the expression above.
Trivially bounding the rest of the expression, we obtain that
$$ S_{k,1}^{\textrm{o}}(\ell;N) \ll_\varepsilon |\ell|^{1/2} q^{N/2 -2g+ \varepsilon g}.$$
Hence $$S_k(\ell;N;V \neq \square) \ll_\varepsilon |\ell|^{1/2}q^{(k-4)g/2+ \varepsilon g}.$$
\section{Moments of the partial Hadamard product}\label{momentsZ}
\subsection{Random matrix theory model}
Recall that
\begin{equation*}
Z_{X}(s,\chi_D)=\exp\Big(-\sum_{\rho}U\big((s-\rho)\ X\big)\Big),
\end{equation*}
where
\begin{equation*}
U(z)=\int_{0}^{\infty}u(x)E_{1}(z\log x)dx.
\end{equation*}
Denote the zeros by $\rho=1/2+i\gamma$. Since $E_1(-ix)+E_1(ix)=-2\textrm{Ci}(|x|)$ for $x\in\mathbb{R}$, where $\textrm{Ci}(z)$ is the cosine integral,
\[
\textrm{Ci}(z)=-\int_{z}^{\infty}\frac{\cos(x)}{x}dx,
\]
we have
\begin{equation}\label{rmt}
\Big\langle Z_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}=\bigg\langle \prod_{\gamma>0}\exp\Big(2k\int_{0}^{\infty}u(x)\textrm{Ci}\big(\gamma X(\log x) \big)dx\Big) \bigg\rangle_{\mathcal{H}_{2g+1}}.
\end{equation}
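The identity $E_1(-ix)+E_1(ix)=-2\,\textrm{Ci}(|x|)$ used above can be verified numerically; a quick sketch using Python's mpmath library (our choice of tool, not part of the paper):

```python
# Numerical spot check of E_1(-ix) + E_1(ix) = -2 Ci(x) for real x > 0,
# the identity used above to pass from U(z) to the cosine integral.
from mpmath import mp, e1, ci

mp.dps = 30   # working precision (decimal digits)
max_err = max(abs(e1(1j * x) + e1(-1j * x) + 2 * ci(x)) for x in (0.5, 1.0, 3.0))
print(max_err < mp.mpf('1e-20'))   # -> True
```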
We model the right-hand side of \eqref{rmt} by replacing the ordinates $\gamma$ by the eigenangles of a $2g\times 2g$ symplectic unitary matrix and averaging over all such matrices with respect to the Haar measure. The $k$-th moment of $Z_X(\chi_D)$ is thus expected to be asymptotic to
\[
\mathbb{E}_{2g}\bigg[\prod_{n=1}^{g}\exp\Big(2k\int_{0}^{\infty}u(x)\textrm{Ci}\big(\theta_nX(\log x) \big)dx\Big)\bigg],
\]
where $\pm\,\theta_n$ with $0\leq\theta_1\leq\ldots\leq\theta_{g}\leq\pi$ are the $2g$ eigenangles of the random matrix and $\mathbb{E}_{2g}[\cdot]$ denotes the expectation with respect to the Haar measure. It is convenient to have our function periodic, so we instead consider
\begin{equation}\label{rmt1}
\mathbb{E}_{2g}\bigg[\prod_{n=1}^{g}\phi(\theta_n)\bigg],
\end{equation}
where
\begin{align*}
\phi(\theta)&=\exp\bigg(2k\int_{0}^{\infty}u(x)\Big(\sum_{j=-\infty}^{\infty}\textrm{Ci}\big(|\theta+2\pi j|X(\log x) \big)\Big)dx\bigg)\\
&=\Big|2\sin\frac \theta 2\Big|^{2k}\exp\bigg(2k\int_{0}^{\infty}u(x)\Big(\sum_{j=-\infty}^{\infty}\textrm{Ci}\big(|\theta+2\pi j|X(\log x) \big)\Big)dx-2k\log\Big|2\sin\frac \theta 2\Big| \bigg).
\end{align*}
The average \eqref{rmt1} over the symplectic group has been asymptotically evaluated in [\textbf{\ref{DIK}}] and we have
\[
\mathbb{E}_{2g}\bigg[\prod_{n=1}^{g}\phi(\theta_n)\bigg]\sim \frac{G(k+1)\sqrt{\Gamma(k+1)}}{\sqrt{G(2k+1)\Gamma(2k+1)}}\Big(\frac{2g}{e^\gamma X}\Big)^{k(k+1)/2}.
\]
\subsection{Proof of Theorem \ref{k123}}
What is most important to us in evaluating the moments of $Z_X(\chi_D)$ is the leading term coming from the twisted moments. Theorem \ref{tfm}, Theorem \ref{tsm}, Theorem \ref{ttm} and \eqref{eta} show that
\begin{align*}
\Big\langle L(\tfrac12,\chi_D)^k\chi_D(\ell) \Big\rangle_{\mathcal{H}_{2g+1}}=&\ \frac{c_k\mathcal{A}_k\mathcal{B}_k(\ell_1)\mathcal{C}_k(\ell_1,\ell_2)}{2^{k(k+1)/2-1}(k(k+1)/2)!|\ell_1|^{1/2}}\big(kg\big)^{k(k+1)/2}\\
&\qquad\qquad+O\Big(\frac{\tau_k(\ell_1)}{|\ell_1|^{1/2}}g^{k(k+1)/2-1}d(\ell)\Big)+O_\varepsilon\big(|\ell|^{1/2}q^{(k-4)g/2+\varepsilon g}\big)
\end{align*}
with $c_1=c_2=1$ and $c_3=2^9/3^6$, where
\begin{align*}
&\mathcal{A}_{k}=\prod_P\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\bigg(1+\frac{1}{|P|}\bigg)^{-1}\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg),\\
&\mathcal{B}_{k}(\ell_1)=\prod_{P|\ell_1}\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j+1})}{|P|^j}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg)^{-1},\\
&\mathcal{C}_{k}(\ell_1,\ell_2)=\prod_{\substack{P\nmid \ell_1\\P|\ell_2}}\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j})}{|P|^j}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg)^{-1}.
\end{align*}
Combining with \eqref{P*} we get
\begin{align*}
\Big\langle L(\tfrac12,\chi_D)^kP_{-k,X}^{*}(\chi_{D}) \Big\rangle_{\mathcal{H}_{2g+1}}&=\, J_{k,1}+J_{k,2}+O_\varepsilon\big(q^{\vartheta g+(k-4)g/2+\varepsilon g}\big)+O_\varepsilon(q^{-c\vartheta g/4+\varepsilon g})
\end{align*}
for any $\vartheta>0$, where
\begin{equation}\label{Jk1}
J_{k,1}=\frac{2c_k\mathcal{A}_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}\sum_{\substack{\ell_1,\ell_2\in S(X)\\ \ell_1\ \textrm{square-free}\\ d(\ell_1)+2d(\ell_2)\leq\vartheta g}}\frac{\mathcal{B}_k(\ell_1)\mathcal{C}_k(\ell_1,\ell_2)\alpha_{-k}(\ell_1\ell_2^2)}{|\ell_1||\ell_2|}
\end{equation}
and
\begin{align}\label{Jk2}
J_{k,2}&\ll g^{k(k+1)/2-1}\sum_{\ell_1,\ell_2\in S(X)}\frac{\tau_k(\ell_1)\tau_{k}(\ell_1\ell_2^2)\big(d(\ell_1)+2d(\ell_2)\big)}{|\ell_1||\ell_2|}\nonumber\\
&\ll g^{k(k+1)/2-1}\sum_{\ell_2\in S(X)}\frac{\tau_{k}(\ell_2^2)d(\ell_2)}{|\ell_2|}\sum_{\ell_1\in S(X)}\frac{\tau_k(\ell_1)^2d(\ell_1)}{|\ell_1|},
\end{align}
as $\alpha_{-k}(\ell_1\ell_2^2)\ll\tau_k(\ell_1\ell_2^2)\ll\tau_k(\ell_1)\tau_{k}(\ell_2^2)$.
We first consider the error term $J_{k,2}$. Let
\begin{displaymath}
F(\sigma)=\sum_{\ell\in S(X)}\frac{\tau_k(\ell)^2}{|\ell|^\sigma}=\prod_{d(P)\leq X}\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^j)^2}{|P|^{j\sigma}}\bigg).
\end{displaymath}
Then $F(1)\asymp\prod_{d(P)\leq X}(1-1/|P|)^{-k^2}\asymp X^{k^2}$. We note that the sum over $\ell_1$ in \eqref{Jk2} is
\[
-\frac{F'(1)}{\log q}=F(1)\sum_{d(P)\leq X}\frac{\sum_{j=0}^{\infty}j\tau_k(P^j)^2d(P)/|P|^{j}}{\sum_{j=0}^{\infty}\tau_k(P^j)^2/|P|^{j}},
\]
and hence it is
\[
\ll X^{k^2}\sum_{d(P)\leq X}\frac{d(P)}{|P|}\ll X^{k^2+1}.
\]
Similarly we have $$\sum_{\ell_2\in S(X)}\frac{\tau_{k}(\ell_2^2)d(\ell_2)}{|\ell_2|}\ll X^{k(k+1)/2+1},$$ and hence $$J_{k,2}\ll g^{k(k+1)/2-1}X^{k(3k+1)/2+2}.$$
For the main term $J_{k,1}$, recall that the function $\alpha_{-k}(\ell)$ is given by
\begin{equation}\label{alpha}
\sum_{\ell\in\mathcal{M}}\frac{\alpha_{-k}(\ell)\chi_{D}(\ell)}{|\ell|^s}=\prod_{d(P)\leq X/2}\bigg( 1-\frac{\chi_D(P)}{|P|^s}\bigg) ^{k}\prod_{X/2<d(P)\leq X}\bigg(1-\frac{k\chi_{D}(P)}{|P|^s}+\frac{k^2\chi_{D}(P)^2}{2|P|^{2s}}\bigg) .
\end{equation}
So for $1\leq k\leq 3$, the function $\alpha_{-k}(\ell)$ is supported on quad-free polynomials. As a result, if we let
\[
\mathcal{P}_X=\prod_{d(P)\leq X}P,
\]
then for the sum over $\ell_1,\ell_2$ in \eqref{Jk1} we can write $\ell_1=\ell_1'\ell_3$ and $\ell_2=\ell_2'\ell_3$, where $\ell_1',\ell_2',\ell_3$ are all square-free, i.e. $\ell_1',\ell_2',\ell_3|\mathcal{P}_X$, and $\ell_1',\ell_2',\ell_3$ are pairwise co-prime. Hence
\begin{align*}
J_{k,1}=&\frac{2c_k\mathcal{A}_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}\sum_{\substack{\ell_3|\mathcal{P}_X\\ d(\ell_3)\leq\vartheta g/3}}\frac{\mathcal{B}_k(\ell_3)\alpha_{-k}(\ell_3^3)}{|\ell_3|^2}\\
&\qquad\qquad\sum_{\substack{\ell_2|(\mathcal{P}_X/\ell_3)\\ d(\ell_2)\leq(\vartheta g-3d(\ell_3))/2}}\frac{\mathcal{C}_k(\ell_2)\alpha_{-k}(\ell_2^2)}{|\ell_2|}\sum_{\substack{\ell_1|(\mathcal{P}_X/\ell_2\ell_3)\\ d(\ell_1)\leq\vartheta g-2d(\ell_2)-3d(\ell_3)}}\frac{\mathcal{B}_k(\ell_1)\alpha_{-k}(\ell_1)}{|\ell_1|},
\end{align*}
where
\[
\mathcal{C}_k(\ell_2)=\mathcal{C}_k(1,\ell_2)=\prod_{P|\ell_2}\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j})}{|P|^j}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg)^{-1}.
\]
As in \eqref{truncation}, we can remove the condition $d(\ell_1)+2d(\ell_2)+3d(\ell_3)\leq\vartheta g$ at the cost of an error of size $O_\varepsilon\big(q^{-\vartheta g/2+\varepsilon g}\big)$. Define the following multiplicative functions
\begin{align*}
T_1(f)=\sum_{\ell|f}\frac{\mathcal{B}_k(\ell)\alpha_{-k}(\ell)}{|\ell|},\qquad T_2(f)=\sum_{\ell|f}\frac{\mathcal{C}_k(\ell)\alpha_{-k}(\ell^2)}{|\ell|T_1(\ell)}
\end{align*}
and
\[
T_3(f)=\sum_{\ell|f}\frac{\mathcal{B}_k(\ell)\alpha_{-k}(\ell^3)}{|\ell|^2T_1(\ell)T_2(\ell)}.
\]
Then
\begin{align*}
J_{k,1}=&\ \frac{2c_k\mathcal{A}_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}T_1\big(\mathcal{P}_X\big)T_2\big(\mathcal{P}_X\big)T_3\big(\mathcal{P}_X\big)\\
=&\ \frac{2c_k\mathcal{A}_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}\\
&\qquad\prod_{d(P)\leq X}\bigg(1+\frac{\mathcal{B}_k(P)\alpha_{-k}(P)}{|P|}+\frac{\mathcal{C}_k(P)\alpha_{-k}(P^2)}{|P|}+\frac{\mathcal{B}_k(P)\alpha_{-k}(P^3)}{|P|^2}\bigg).
\end{align*}
We remark that
\[
\mathcal{A}_k=\Big(1+O\big(q^{-X}/X\big)\Big)\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\mathcal{A}_{k}(P),
\]
where
\[
\mathcal{A}_{k}(P)=1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}.
\]
So
\begin{align*}
&J_{k,1}=\Big(1+O\big(q^{-X}/X\big)\Big) \frac{2c_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\\
&\ \prod_{d(P)\leq X}\bigg(\mathcal{A}_{k}(P)+\frac{\mathcal{A}_{k}(P)\mathcal{B}_k(P)\alpha_{-k}(P)}{|P|}+\frac{\mathcal{A}_{k}(P)\mathcal{C}_k(P)\alpha_{-k}(P^2)}{|P|}+\frac{\mathcal{A}_{k}(P)\mathcal{B}_k(P)\alpha_{-k}(P^3)}{|P|^2}\bigg).
\end{align*}
We note from \eqref{alpha} that $\alpha_{-k}(P)=-k$. Also if $P\in\mathcal{P}$ with $d(P)\leq X/2$, then
$$\alpha_{-k}(P^2)=\frac{k(k-1)}{2}\qquad \textrm{and}\qquad\alpha_{-k}(P^3)=-\frac{k(k-1)(k-2)}{6},$$ and if $P\in\mathcal{P}$ with $X/2<d(P)\leq X$, then
$$\alpha_{-k}(P^2)=\frac{k^2}{2}\qquad \textrm{and}\qquad\alpha_{-k}(P^3)=0.$$ Standard calculations also give
\[
\mathcal{A}_{k}(P)=\begin{cases}
\big(1-\frac{1}{|P|}\big)^{-1}\big(1+\frac{1}{|P|}-\frac{1}{|P|^2}\big)& k=1,\\
\big(1-\frac{1}{|P|}\big)^{-2}\big(1+\frac{2}{|P|}-\frac{2}{|P|^2}+\frac{1}{|P|^3}\big) & k=2,\\
\big(1-\frac{1}{|P|}\big)^{-3}\big(1+\frac{4}{|P|}-\frac{3}{|P|^2}+\frac{3}{|P|^3}-\frac{1}{|P|^4}\big) & k=3,
\end{cases}
\]
\[
\mathcal{A}_{k}(P)\mathcal{B}_k(P)=\begin{cases}
\big(1-\frac{1}{|P|}\big)^{-1} & k=1,\\
2\big(1-\frac{1}{|P|}\big)^{-2} & k=2,\\
\big(1-\frac{1}{|P|}\big)^{-3}\big(3+\frac{1}{|P|}\big) & k=3
\end{cases}
\]
and
\[
\mathcal{A}_{k}(P)\mathcal{C}_k(P)=\begin{cases}
\big(1-\frac{1}{|P|}\big)^{-1} & k=1,\\
\big(1-\frac{1}{|P|}\big)^{-2}\big(1+\frac{1}{|P|}\big) & k=2,\\
\big(1-\frac{1}{|P|}\big)^{-3}\big(1+\frac{3}{|P|}\big) & k=3.
\end{cases}
\]
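The values of $\alpha_{-k}(P^2)$ and $\alpha_{-k}(P^3)$ quoted above can be read off mechanically from the two local factors in \eqref{alpha}; a small sympy spot check (ours, not part of the paper):

```python
# Read off alpha_{-k}(P^2) and alpha_{-k}(P^3) from the two local factors
# of the Euler product defining alpha_{-k}: (1-y)^k for d(P) <= X/2 and
# 1 - k*y + k^2*y^2/2 for X/2 < d(P) <= X.
import sympy as sp

y = sp.symbols('y')
alpha = {}
for k in (1, 2, 3):
    small = sp.Poly((1 - y)**k, y)
    large = sp.Poly(1 - k*y + sp.Rational(k**2, 2)*y**2, y)
    alpha[k] = (small.coeff_monomial(y**2), small.coeff_monomial(y**3),
                large.coeff_monomial(y**2), large.coeff_monomial(y**3))

print(alpha[3])   # -> (3, -1, 9/2, 0)
```

The tuples match $\big(k(k-1)/2,\,-k(k-1)(k-2)/6,\,k^2/2,\,0\big)$ for $k=1,2,3$.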
Hence, using Lemma \ref{mertens} we get
\begin{align*}
J_{1,1}&=\Big(1+O\big(q^{-X}/X\big)\Big)g\prod_{d(P)\leq X}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\\
&\qquad\qquad\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|^2}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1+\frac{1}{2|P|}-\frac{1}{|P|^2}\bigg)\\
&=\Big(1+O\big(q^{-X/2}/X\big)\Big) g\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{1/2}\\
&= \frac{1}{\sqrt{2}}\frac{g}{e^\gamma X/2}+O\big(gX^{-2}\big)
\end{align*}
and
\begin{align*}
J_{2,1}&=\Big(1+O\big(q^{-X}/X\big)\Big)\frac{g^3}{3}\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)\bigg(1+\frac{1}{|P|}\bigg)^{-1}\\
&\qquad\qquad\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}-\frac{1}{|P|^2}+\frac{1}{|P|^3}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1+\frac{1}{|P|^3}\bigg)\\
&=\Big(1+O\big(q^{-X/2}/X\big)\Big) \frac{g^3}{3}\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}\bigg)^3\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{2}\\
&=\frac{1}{12} \Big(\frac{g}{e^\gamma X/2}\Big)^3+O\big(g^3X^{-4}\big).
\end{align*}
Lastly,
\begin{align*}
J_{3,1}&=\Big(1+O\big(q^{-X}/X\big)\Big)\frac{g^6}{45}\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^3\bigg(1+\frac{1}{|P|}\bigg)^{-1}\\
&\qquad\qquad\prod_{d(P)\leq X/2}\bigg(1-\frac{2}{|P|}+\frac{2}{|P|^3}-\frac{1}{|P|^4}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{2|P|}+O\bigg(\frac{1}{|P|^2}\bigg)\bigg)\\
&=\Big(1+O\big(q^{-X/2}/X\big)\Big) \frac{g^6}{45}\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}\bigg)^6\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{9/2}\\
&=\frac{1}{720\sqrt{2}} \Big(\frac{g}{e^\gamma X/2}\Big)^6+O\big(g^6X^{-7}\big).
\end{align*}
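The Mertens-type asymptotic $\prod_{d(P)\leq X}(1-1/|P|)\sim e^{-\gamma}/X$ underlying the last steps (via Lemma \ref{mertens}) can be illustrated numerically, counting monic irreducibles by the M\"obius formula $\pi_q(d)=\tfrac1d\sum_{e\mid d}\mu(e)q^{d/e}$. A Python sketch (the parameters $q=3$, $X=14$ and all helper names are our choices):

```python
# Illustrate prod_{d(P) <= X} (1 - 1/|P|) ~ e^{-gamma} / X over F_q[t],
# where pi_q(d) = (1/d) sum_{e | d} mu(e) q^(d/e) counts the monic
# irreducible polynomials of degree d.
import math

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:        # p^2 divides the original n
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def num_irreducibles(q, d):
    return sum(mobius(e) * q ** (d // e) for e in range(1, d + 1) if d % e == 0) // d

q, X = 3, 14
log_prod = sum(num_irreducibles(q, d) * math.log(1 - q ** (-d)) for d in range(1, X + 1))
ratio = math.exp(log_prod) * X * math.exp(0.5772156649015329)   # should be near 1
print(0.9 < ratio < 1.0)   # -> True
```

For these parameters the ratio is already within a few percent of $1$.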
The theorem follows by choosing any $0<\vartheta <(4-k)/2$.
\end{document} |
\begin{document}
\title{\TITEL\thanks{This work is supported by the DFG Research Training
Group 1763 (QuantLA) and the DFG
research project GELO.}}
\institute{Institut f\"ur Informatik, Universit\"at Leipzig, Germany \and
Department f\"ur Elektrotechnik und Informatik, Universit\"at
Siegen, Germany}
\author{\AUTHORS}
\maketitle
\begin{abstract}
Constraint automata are an adaptation of B{\"u}chi\xspace-automata that process
data words where the data comes from some relational structure
$\Struc{S}$. Every transition of such an automaton comes with
constraints in terms of the relations of $\Struc{S}$. A transition
can only
be fired if the current and the next data values satisfy all
constraints of this transition. These automata have been used in the
setting where $\Struc{S}$ is a linear order for
deciding constraint $\ensuremath{\mathrm{LTL}}\xspace$ with constraints over $\Struc{S}$.
In this paper, $\Struc{S}$ is the infinitely
branching infinite order tree $\Struc{T}$. We
provide a $\mathrm{PSPACE}$ algorithm for emptiness of $\Struc{T}$-constraint
automata. This result implies
$\mathrm{PSPACE}$-completeness of the satisfiability and the model checking
problem for constraint $\ensuremath{\mathrm{LTL}}\xspace$ with constraints over $\Struc{T}$.
\end{abstract}
\section{Introduction}
Temporal logics like $\ensuremath{\mathrm{LTL}}\xspace$ or
$\mathrm{CTL}^*$ are nowadays
standard languages for specifying system properties in
verification. These logics are interpreted over node labelled graphs,
where the node labels (also called atomic
propositions) represent abstract properties of a system (for
instance, a computer program). Clearly,
such an abstracted system state does not in general contain all the
information of the original system state. This may lead to incorrect
results in model checking.
In order to overcome this weakness, extensions of temporal logics by
atomic (local) constraints over some structure $\Struc{A}$ have been
proposed (cf.~\cite{Cerans94,DemriG08}).
For instance, \emph{\ensuremath{\mathrm{LTL}}\xspace with local constraints}
is evaluated over infinite words where the letters are tuples over
$\Struc{A}$ of a fixed size. For example, for
$\Struc{A}= (\mathbb{Z}, <)$, this logic is standard $\ensuremath{\mathrm{LTL}}\xspace$ where atomic
propositions are replaced by atomic constraints of the
form $\mathsf{X}^i x_j < \mathsf{X}^l x_k$. This
constraint is satisfied by
a path $\pi$ if
the $j$-th element of the $i$-th letter
of $\pi$ is less than the $k$-th element of the $l$-th letter of $\pi$.
While temporal logics with integer constraints are suitable to
reason about programs manipulating counters,
reasoning about systems manipulating pushdowns requires constraints
over words over a fixed alphabet and the
prefix relation (which is equivalent to constraints over an infinite
$k$-ary tree with descendant/ancestor relations).
There are numerous investigations on satisfiability and model
checking for temporal logics with constraints over the integers
(cf.~\cite{Cerans94,BozzelliG06,DemriG08,Gascon09,BozzelliP14,CKL14}). In contrast,
temporal logics with constraints over trees have not yet been
investigated much, although questions concerning decidability of the
satisfiability problem for $\ensuremath{\mathrm{LTL}}\xspace$ or $\mathrm{CTL}^*$ with
such constraints have been asked for instance in
\cite{DemriG08,CKL13}. A first (negative) result by Carapelle et
al.~\cite{CFKL14} shows that a technique developed in
\cite{CKL13,CKL14} for
satisfiability results of
branching-time logics (like $\mathrm{CTL}^*$ or $\mathrm{ECTL}^*$) with
integer constraints cannot be used
to resolve the satisfiability status of temporal logics with
constraints over trees.
Our goal is to show that satisfiability of $\ensuremath{\mathrm{LTL}}\xspace$ with
constraints over the tree is decidable. As a first step,
we analyse the emptiness problem of
$\Struc{T}$-constraint automata (cf.~\cite{Gascon09,DemriD07}) where
$\Struc{T}$ is the infinitely branching infinite tree with prefix
relation. These
automata are B{\"u}chi\xspace-automata that process (multi-)data words where the
data values are elements of $\Struc{T}$
and where the applicability of transitions
depends on the order of the data values at the current and the next
position.
Our main technical result shows that emptiness for these automata is $\mathrm{PSPACE}$-complete.
Having obtained an algorithm for the emptiness problem, we can easily
provide algorithms for the
satisfiability and model checking problems for
$\ensuremath{\mathrm{LTL}}\xspace$ with constraints over $\Struc{T}$. We exactly mimic the
automata based algorithms for standard $\ensuremath{\mathrm{LTL}}\xspace$ of Vardi and Wolper
\cite{VardiW94} noting that the constraints in the transitions are
exactly what is needed to deal with the atomic constraints in the
local constraint version of
$\ensuremath{\mathrm{LTL}}\xspace$. It follows directly that satisfiability of
$\ensuremath{\mathrm{LTL}}\xspace$ with constraints over $\Struc{T}$ and
model checking models defined by constraint automata against
$\ensuremath{\mathrm{LTL}}\xspace$ with constraints over $\Struc{T}$ are $\mathrm{PSPACE}$-complete.
Finally, we extend our results to the case of constraints over the
infinite $k$-ary tree for every $k\in\mathbb{N}$ by providing
a reduction to
$\ensuremath{\mathrm{LTL}}\xspace$ with constraints over $\Struc{T}$.
Thus, satisfiability and model checking for $\ensuremath{\mathrm{LTL}}\xspace$ with
constraints over the infinite $k$-ary tree is also in $\mathrm{PSPACE}$.
Upon finishing our paper, we became aware that Demri and Deters
(abbreviated DD in the following)
have submitted a paper \cite{DemriD14} that establishes the above-mentioned
results on satisfiability using a reduction of constraints over trees
to constraints over the integers.
Even though the
main results of both papers coincide, there are major differences.
\begin{enumerate}
\item DD's result extends to satisfiability of the
corresponding version of $\mathrm{CTL}^*$, but DD do not consider
the model checking problem.
\item DD's result holds
even if the logic is enriched by length constraints that compare the
lengths of the interpretations of variables. Since our approach
abstracts away the concrete length of words, we cannot reprove this
result. On the other hand,
we can
enrich the logic with constraints using the
lexicographic order on the tree as well. DD's approach cannot deal
with this order. Thus, the logic in each paper is incomparable to
the logic of the other.
\item DD conjecture that the (branching-degree) uniform satisfiability
problem is in $\mathrm{PSPACE}$. This problem asks, given a formula and a
$k\in\mathbb{N}\cup\set{\infty}$ whether there is a model with values in the
$k$-ary infinite tree that satisfies the formula.
We confirm DD's conjecture.
\item Finally, our proof is self-contained.
In contrast, DD's proof seems to be more elegant
and less technical, but this comes at the cost of relying on the
decidability result for satisfiability of $\ensuremath{\mathrm{LTL}}\xspace$ with constraints
over the integers \cite{BozzelliP14}, which is again quite technical to
prove.\footnote{In fact, our proof can be easily adapted to reprove
this result.}
\end{enumerate}
Our result leaves open several further research directions.
Firstly, DD's result on $\mathrm{CTL}^*$ with constraints over trees does
not yield any reasonable complexity bound because the complexity of
their algorithm
relies on the results of Boja\'nczyk\xspace and
Toru\'nczyk\xspace \cite{BojanczykT12} on weak monadic second order logic with
the unbounding quantifier. Thus, without any progress concerning the
complexity of this logic, DD's approach cannot be used to obtain
better bounds. In contrast, the concept of $\Struc{T}$-constraint
automata can be
easily lifted to a $\Struc{T}$-constraint tree-automaton
model. Complexity bounds on the
emptiness problem for this model would directly imply bounds on the
satisfiability for $\mathrm{CTL}^*$ with constraints over $\Struc{T}$.
Thus, investigating whether our approach transfers to a result on the
emptiness problem of $\Struc{T}$-constraint tree-automata might be a
fruitful approach.
Secondly, it may be possible to lift our results to the global model
checking problem, similarly to the work of Bozzelli and Pinchinat
\cite{BozzelliP14} on $\ensuremath{\mathrm{LTL}}\xspace$ with constraints over the integers.
Finally, it is a very challenging task to decide whether DD's result
and our result can be unified to a result on $\ensuremath{\mathrm{LTL}}\xspace$ with constraints
over the tree with prefix order, lexicographic order and
length-comparisons (of maximal common prefixes).
\section{Model Checking LTL with Constraints over Trees}
We first introduce $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$, a variant of $\ensuremath{\mathrm{LTL}}\xspace$ with local
constraints. A model of a formula of this logic is a (multi-) data
word where the data comes from some $\set{\preceq,\KBleq,
S}$-structure. We are particularly interested in the case where this
structure is an order tree with lexicographic order $\KBleq$. We want
to adjust the automata-based model checking methods for $\ensuremath{\mathrm{LTL}}\xspace$ to this
setting. For this purpose we then recall the definition of
tree-constraint automata. The technical core of this paper shows that
emptiness of tree-constraint automata is $\mathrm{PSPACE}$-complete. Before we
delve into this technical part, we prove that satisfiability and
model checking for $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$ formulas with constraints over the full
infinitely branching tree are in $\mathrm{PSPACE}$ due to a reduction to the
emptiness problem of tree-constraint automata. We conclude this
section by providing a reduction of satisfiability and model checking
for $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$ with constraints over the full tree of branching degree $k$
to the corresponding problem over the full infinitely branching tree.
\subsection{LTL with Constraints}
Constraint $\ensuremath{\mathrm{LTL}}\xspace$ over signature $\set{{=}, {\preceq}, {\KBleq}, s_1, s_2,\dots, s_m}$
where $S = \Set{s_1, \dots, s_m}$ is a set of constant symbols,
abbreviated $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$, is given by the grammar
\begin{equation*}
\phi ::= \mathsf{X}^i x_1 \mathrel{*} s \mid s \mathrel{*} \mathsf{X}^i x_1 \mid
\mathsf{X}^{i} x_1 \mathrel{*} \mathsf{X}^{j} x_2 \mid
\neg \phi \mid (\phi \land \phi) \mid \mathsf{X}\phi \mid \phi \mathsf{U} \phi
\mid \mathsf{G} \phi
\end{equation*}
where $* \in \Set{=, \preceq, \KBleq}$, $i,j$ are natural
numbers, $x_1, x_2$ are
variables from some countable fixed set $\mathcal{V}$ and
$s \in S$ is a constant symbol.
Given a structure $\Struc{A} = (A,
\preceq^{\Struc{A}}, \KBleq^{\Struc{A}}, s_1^{\Struc{A}}, s_2^{\Struc{A}}, \dots,
s_m^{\Struc{A}})$,
an \define{$n$-dimensional data word over
$\Struc{A}$} is a sequence
$(\bar a_i)_{i\in \mathbb{N}}$ with $\bar a_i\in A^n$.
We evaluate
a formula $\phi$ (where $x_1, \dots,
x_n\in \mathcal{V}$ are the variables occurring in $\phi$)
on $n$-dimensional data words $(\bar a_i)_{i\in \mathbb{N}}$.
We write $a_i^j$ for the $j$-th
component of $\bar a_i$.
We say \define{$(\bar a_i)_{i\in\mathbb{N}}$ is a model of $\phi$},
denoted as $(\bar a_i)_{i\in\mathbb{N}} \models \phi$,
if the usual conditions for $\ensuremath{\mathrm{LTL}}\xspace$ hold, and
the following additional rules apply for
$*\in \Set{=, \preceq, \KBleq}$:
\begin{itemize}
\item $(\bar a_i)_{i\in\mathbb{N}} \models (\mathsf{X}^{i} x_k) \mathrel{*}
(\mathsf{X}^{j} x_l)$ if and only if $\Struc{A} \models a_{i}^k \mathrel{*} a_{j}^l$,
\item $(\bar a_i)_{i\in\mathbb{N}} \models (\mathsf{X}^i x_l) \mathrel{*} s_j$
(or $s_j \mathrel{*} (\mathsf{X}^i x_l)$, resp.)
if and only if
$\Struc{A} \models a_{i}^l \mathrel{*} s_j$ (or
$\Struc{A} \models s_j \mathrel{*} a_{i}^l$, respectively).
\end{itemize}
Note that our constraint LTL does not use
atomic propositions. On nontrivial structures,
a proposition $p$ can be simulated
by constraints of the form $x_{p_1} = x_{p_2}$.
As for usual $\ensuremath{\mathrm{LTL}}\xspace$ one defines dual operators. Then every formula
has an equivalent negation normal form where negation only appears in
front of atomic constraints
($(\mathsf{X}^{i} x_1) \preceq (\mathsf{X}^{j} x_2)$,
$s \preceq \mathsf{X}^i x$
or $\mathsf{X}^i x\preceq s $).
Using that $\mathsf{X}^n(\mathsf{X}^i x_k \ast \mathsf{X}^j x_{\ell}) \equiv \mathsf{X}^{i+n} x_k \ast \mathsf{X}^{j+n} x_{\ell}$ and by introducing auxiliary variables,
it is also easy to eliminate
exponents in terms:
\begin{proposition} \label{prop:Depth2ConstraitnsSuffice}
There is a polynomial time algorithm that computes, on input an
$\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$-formula $\phi$, an equivalent $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$-formula $\psi$ such
that $\psi$ does not contain terms of the form $\mathsf{X}^i x$ with
$i\geq 2$.
\end{proposition}
We want to investigate $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$ in the cases where the structure
$\Struc{A}$ is one of the following order trees.
For each $k\in \set{2,3,4,\dots }$, let
\begin{equation*}
\Tree^C = (\mathbb{Q}^*, \preceq, \KBleq, c_1, c_2,\dots, c_m)\text{ and }
\Tree[k]^C = (\{1, 2,\dots, k\}^*, \preceq,\KBleq, c_1, \dots, c_m)
\end{equation*}
where $\preceq$ is the prefix order, $\KBleq$ is the lexicographic
order defined by $w \KBleq v$ if either $w \preceq v$ or there are
$q_1,q_2\in \mathbb{Q}$ (respectively $\set{1,2,\dots,k}$) such that $(w\sqcap v)q_1 \preceq w$, $(w\sqcap v)q_2
\preceq v$ and $q_1 < q_2$, where $<$ is the natural order and $\sqcap$ denotes the (binary) \define{greatest common prefix
operator}, and
$C=(c_1,c_2, \dots c_m)$ is a tuple of constants in $\mathbb{Q}^*$ or
$\set{1,2,\dots, k}^*$, respectively.
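To fix intuitions, both orders are easy to compute. The following sketch is purely illustrative (the helper names are ours, and tuples of \texttt{Fraction}s stand in for words over $\mathbb{Q}$); it implements the prefix order, the greatest common prefix and the lexicographic order exactly as defined above.

```python
from fractions import Fraction

def is_prefix(w, v):
    """Prefix order: w is below v iff w is an initial segment of v."""
    return len(w) <= len(v) and v[:len(w)] == w

def gcp(w, v):
    """Greatest common prefix of w and v."""
    i = 0
    while i < min(len(w), len(v)) and w[i] == v[i]:
        i += 1
    return w[:i]

def lex_leq(w, v):
    """Lexicographic order: w is a prefix of v, or the two branches
    leaving gcp(w, v) are ordered by the natural order on Q."""
    if is_prefix(w, v):
        return True
    p = gcp(w, v)
    if len(v) == len(p):        # v is a proper prefix of w
        return False
    return w[len(p)] < v[len(p)]

q = Fraction  # words are tuples of rationals
assert gcp((q(1), q(2)), (q(1), q(3))) == (q(1),)
assert lex_leq((q(1), q(7)), (q(2),))       # branches 1 < 2 at the root
assert not lex_leq((q(1), q(2)), (q(1),))   # a proper extension is lex-larger
```

Note that `lex_leq` is a (non-strict) total order on $\mathbb{Q}^*$: two words are either prefix-comparable or split at their greatest common prefix.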
\subsection{Constraint Automata}
\label{sec:constAut}
In the following, we investigate the satisfiability and model checking
problems for $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$ over models with data values in one of the trees
$\Tree[k]^C$ for $k\in\set{\infty, 2,3,4,\dots}$. We follow closely
the automata theoretic approach of Vardi and Wolper \cite{VardiW94}
which provides a reduction of model checking for $\ensuremath{\mathrm{LTL}}\xspace$ to the
emptiness problem of B{\"u}chi\xspace automata. In order to deal with the
constraints, we use \emph{$\Tree[k]^C$-constraint automata}
(cf.~\cite{Gascon09}) instead of B{\"u}chi\xspace automata. Next we recall the
definition of constraint automata and state our main result concerning
emptiness of constraint automata. We then derive analogues of
Vardi and Wolper's decidability results on \ensuremath{\mathrm{LTL}}\xspace for \ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace} with
constraints over $\Tree[k]^C$. A
\emph{$\Tree[k]^C$-constraint automaton} is defined as a usual
B{\"u}chi\xspace automaton but instead of labelling transitions by some letter
from a finite alphabet we label them by Boolean combinations of
constraints which the current and the next data values have to satisfy
in order to apply the transition.
\begin{definition}
\begin{itemize}
\item An \define{$n$-dimensional
$\Tree[k]^C$-constraint automaton}
is a quadruple
\mbox{$\Aut{A} = (Q, I, F, \delta)$}
where $Q$ is a finite set of states, $I\subseteq Q$ the initial
states, $F\subseteq Q$ the set of accepting states and
$\delta \subseteq Q \times B^C_n \times Q$ the \define{transition
relation} where $B^C_n$ is the set of all quantifier-free
formulas over signature $\set{\preceq, \KBleq}\cup C$ with
variables $x_1, \dots, x_n, y_1, \dots, y_n$, i.e., propositional
logic formulas with atomic formulas $v \ast v'$, with
${\ast} \in \{{=}, {\preceq}, {\KBleq}\}$ and $v$, $v'$ are
variables or constants.
\item A
\define{configuration} of the automaton $\Aut{A}$ is a tuple
in $Q \times (\set{1,2,\dots,k}^*)^n$ (or $Q \times (\mathbb{Q}^*)^n$ if $k = \infty$).
\item
We define $(q, \bar w) \to (p, \bar v)$
iff there is a transition $(q, \beta(x_1, \dots, x_n, y_1, \dots,
y_n), p)$ such that $\Tree[k]^C \models \beta(\bar w, \bar v)$.
\item A \define{run} of $\Aut{A}$ is a finite or infinite sequence of
configurations $r = (c_j)_{j\in J}$ ($J\subseteq \mathbb{N}$ an interval)
such that $c_j \to c_{j+1}$ for all $j,j+1 \in J$. For a finite
run $r = (c_i)_{ i_1 \leq i \leq i_2}$ with $i_1 \leq i_2 \in \mathbb{N}$
we say $r$ is a \emph{run from $c_{i_1}$ to $c_{i_2}$}.
\item A run $r=(c_i)_{i\in\mathbb{N}}$ is \emph{accepting} if
$c_0 = (q,d_1, \dots, d_n)$ for some initial state $q\in I$, and some final
state $f\in F$ appears in infinitely many configurations
of $r$.
\item The \define{set
of all words accepted by $\Aut{A}$}
comprises all $\bar{w}_1 \bar{w}_2 \dots \in ((\mathbb{Q}^*)^n)^{\omega}$ (or $((\set{1,\dotsc,k}^*)^n)^{\omega}$ if $k \ne \infty$)
such that there
is an accepting infinite run $(c_i)_{i \in \mathbb{N}}$ with
$c_i = (q_i, \bar{w}_i)$.
\end{itemize}
\end{definition}
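As a concrete (and purely illustrative) encoding, the transition guards of a one-dimensional constraint automaton can be represented by Python predicates over the current and next data value, standing in for the quantifier-free formulas in $B^C_n$; all names below are our own.

```python
from fractions import Fraction

def is_prefix(w, v):
    """Prefix order on tuples standing in for words."""
    return len(w) <= len(v) and v[:len(w)] == w

# Transitions (p, guard, q): guard is a predicate beta(cur, nxt) over the
# current and the next data value; here it demands a strict descent in the tree.
delta = [
    ("q0", lambda cur, nxt: is_prefix(cur, nxt) and cur != nxt, "q0"),
]

def step(conf, nxt_val):
    """One move (q, w) -> (p, v), applicable if some guard is satisfied."""
    state, cur = conf
    for p, beta, q in delta:
        if p == state and beta(cur, nxt_val):
            return (q, nxt_val)
    return None  # no applicable transition

conf = ("q0", ())
for depth in range(1, 4):                    # walk down the branch 1, 11, 111
    conf = step(conf, (Fraction(1),) * depth)
assert conf == ("q0", (Fraction(1), Fraction(1), Fraction(1)))
assert step(("q0", (Fraction(2),)), (Fraction(3),)) is None   # not a descent
```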
In the following sections (see Theorem \ref{thm:EmptinessPSPACE}) we prove
that emptiness of $n$-dimensional
$\Tree^C$-constraint automata is $\mathrm{PSPACE}$-complete in terms of
$\abs{Q} + \abs{C} + n + m$ where $m$ is the length of the longest
constant occurring in $C$. We next
apply this result in order to obtain
$\mathrm{PSPACE}$-completeness of satisfiability and model checking.
\subsection{Satisfiability and Model Checking of Constraint LTL}
\begin{definition}
Let $k\in \set{\infty,2,3,4,\dots}$.
$\SAT{\Tree[k]^C}$ denotes the \define{satisfiability problem} for
$\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$ over
$\Tree[k]^C$: given a set of constants $C$ and an $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$-formula $\varphi$, is
there a data word $(\bar w_i)_{i\in\mathbb{N}}$ over $\Tree[k]^C$ such
that $(\bar w_i)_{i\in\mathbb{N}} \models \varphi$?
$\MC{\Tree[k]^C}$ denotes the \define{model checking problem for
$\Tree[k]^C$-constraint automata against $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$}:
given a set of constants $C$, a $\Tree[k]^C$-constraint automaton $\Aut{A}$ and an
$\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$-formula $\varphi$, is
there a data word $(\bar w_i)_{i\in\mathbb{N}}$ over $\Tree[k]^C$ accepted
by $\Aut{A}$ such that $(\bar w_i)_{i\in\mathbb{N}} \models \varphi$?
\end{definition}
\begin{theorem} \label{thm:SatAndMCInPSPACE}
Let $k\in \set{\infty,2,3,4,\dots}$ and let $C$ be a set of constants.
$\SAT{\Tree[k]^C}$ and $\MC{\Tree[k]^C}$ are $\mathrm{PSPACE}$-complete.
\end{theorem}
\begin{proof}
Since there is an automaton accepting all data words, the
satisfiability problem reduces to the model checking problem whence
it suffices to prove the claim on model checking. Hardness follows
directly from the known results for $\ensuremath{\mathrm{LTL}}\xspace$.
We first prove $\MC{\Tree^C}\in\mathrm{PSPACE}$ and then we provide a
reduction of $\MC{\Tree[k]^C}$ to $\MC{\Tree^C}$ for all other
$k$.
\noindent \textbf{Case $k = \infty$.}
Let $C\subseteq \mathbb{Q}^*$ be a finite set of constants, $\Aut{A}$ a
$\Tree^C$-constraint automaton and $\varphi\in \ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$.
Due to Proposition \ref{prop:Depth2ConstraitnsSuffice} we can assume
that all atomic constraints occurring in $\varphi$ only concern the current
and the next data values.
Recall that Vardi and Wolper~\cite{VardiW94} provided a translation
from $\ensuremath{\mathrm{LTL}}\xspace$ to B{\"u}chi\xspace automata such that the resulting automaton
accepts some word if and only if it is a model of the formula.
This
translation directly lifts to a translation of $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$ over $\Tree$
to $\Tree$-constraint automata. As in the
standard construction, each state of the automaton is
a subset of (the negation closure of) the set of subformulas of the
$\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$-formula. Intuitively, an accepting run of the automaton on
$(\bar w_i)_{i\in\mathbb{N}}$ is at position
$i_0$ in a state containing some subformula $\psi$ if and only if
$(\bar w_i)_{i\geq i_0} \models \psi$. Obviously the dependence of
the transitions of a
constraint automaton on the order of the current and next data
values is exactly what is needed to allow the automaton to switch from
one state to another only if the (possibly negated) atomic constraints
contained in the current state are satisfied by the current and the
next data values.
Thus, we obtain a constraint automaton $\Aut{B}$ such that $\Aut{B}$
accepts $(\bar w_i)_{i\in\mathbb{N}}$ if and only if
$(\bar w_i)_{i\in\mathbb{N}}\models \varphi$. Since the usual product
construction for B{\"u}chi\xspace automata lifts also to constraint automata,
we easily construct in polynomial space an automaton $\Aut{C}$ such
that $\Aut{C}$ accepts a word if and only if both $\Aut{A}$ and
$\Aut{B}$ accept this word. Thus, the set of all words accepted by
$\Aut{C}$ is non-empty if and only if there is a data word
$(\bar w_i)_{i\in\mathbb{N}}$ such that $\Aut{A}$ accepts
$(\bar w_i)_{i\in\mathbb{N}}$ and $(\bar w_i)_{i\in\mathbb{N}}\models\varphi$.
Since emptiness is in $\mathrm{PSPACE}$ the claim follows.
\noindent \textbf{Case $k \neq \infty$.}
Now we turn to the case $\Tree[k]^C$ where $k\neq \infty$.
Let $C_l$ be the set of
$\preceq$-maximal elements of $C$, and let $\varphi$ and $\Aut{A}$
be as before.
Without loss of generality we can assume that $C_l$ intersects every
infinite branch in $\set{1,2,\dots, k}^\omega$
(if not, add $ci$ as a new constant for
every $c$ in the prefix-closure of $C$ and every $i\in\set{1,2,\dots, k}$,
which causes only polynomial growth of the input).
We claim that $(C, \Aut{A}, \varphi)$ is a positive instance of
$\MC{\Tree[k]^C}$ if and only if
$(C, \Aut{A}, \psi)$ is a positive instance of
$\MC{\Tree^C}$ where $\Aut{A}$ is seen as a $\Tree^C$-automaton and
$\psi =\varphi \land
\mathsf{G} \bigwedge_{i=1}^n \bigvee_{c\in C_l} (x_i \preceq c \lor
c\preceq x_i)$ where $x_1,x_2, \dots, x_n$ is the set of variables
occurring in the constraints of $\varphi$. Basically, $\psi$ is
$\varphi$ with the additional condition that every data value
occurring in a model is comparable (w.r.t.~$\preceq$) to one of the
maximal constants in $C_l$.
It is clear that every witness $(\bar w_i)_{i\in\mathbb{N}}$ for the former
model checking problem is a witness for the latter.
For the converse
assume that $(\bar w_i)_{i\in\mathbb{N}}$ is a data word over $\Tree$
accepted by $\Aut{A}$ satisfying $\psi$.
Note that there is an injective map $g:\mathbb{Q}^*\to \set{1,2}^*$ preserving
$\preceq$ and $\KBleq$ in both directions (cf. Appendix
\ref{sec:Orders}). Moreover, by definition of $\psi$ we
conclude that every value occurring in $(\bar w_i)_{i\in\mathbb{N}}$ is
either a prefix of one of the constants or of the form
$c q_1q_2\dots q_n$ for some maximal constant $c\in C_l$.
Thus, we can define $\bar v_i = (v_i^1, v_i^2, \dots, v_i^n)$ where
$v_i^j= w_i^j$ if $w_i^j \preceq c$ for some $c\in C_l$ and
$v_i^j = c g(u)$ if
$w_i^j = c u$ for some $c\in C_l$ and $u\neq \varepsilon$.
Clearly $(\bar v_i)_{i\in\mathbb{N}}$ is a data word over $\Tree[k]$.
Since $g$ preserves $\preceq$, $\KBleq$ and all constants, it is a
model of $\psi$ accepted by $\Aut{A}$ whence it is also a model of
$\varphi$.
\qed
\end{proof}
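The product construction invoked in the proof lifts from B{\"u}chi automata to constraint automata by pairing states and conjoining guards. Below is a minimal dict-based sketch (our own encoding, not part of the formal development), using the usual $0/1$ flag for the B{\"u}chi acceptance condition.

```python
def buchi_product(A, B):
    """Product of two constraint automata, given as dicts with keys
    'init', 'final' and 'delta' (lists of (p, guard, q) triples whose
    guards are predicates over the current and next data value).
    Synchronous transitions conjoin their guards; the 0/1 flag tracks
    visits to A-final and B-final states in alternation."""
    delta = []
    for p1, g1, q1 in A['delta']:
        for p2, g2, q2 in B['delta']:
            for f in (0, 1):
                if f == 0 and p1 in A['final']:
                    nf = 1            # saw an A-final state, now wait for B
                elif f == 1 and p2 in B['final']:
                    nf = 0            # saw a B-final state at flag 1: reset
                else:
                    nf = f
                guard = lambda cur, nxt, g1=g1, g2=g2: g1(cur, nxt) and g2(cur, nxt)
                delta.append((((p1, p2), f), guard, ((q1, q2), nf)))
    states_a = {p for p, _, _ in A['delta']} | {q for _, _, q in A['delta']}
    return {
        'init': {((i1, i2), 0) for i1 in A['init'] for i2 in B['init']},
        'final': {((p1, p2), 1) for p1 in states_a for p2 in B['final']},
        'delta': delta,
    }

A = {'init': {'a'}, 'final': {'a'}, 'delta': [('a', lambda c, n: True, 'a')]}
B = {'init': {'b'}, 'final': {'b'}, 'delta': [('b', lambda c, n: True, 'b')]}
P = buchi_product(A, B)
assert (('a', 'b'), 0) in P['init'] and (('a', 'b'), 1) in P['final']
```

A run of the product visits a final configuration infinitely often exactly when both components visit their final states infinitely often, as in the classical construction.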
\begin{remark}
Demri and Deters \cite{DemriD14} conjectured that if the arity $k$ of
the tree is part of the input to the satisfiability problem, it is
still in $\mathrm{PSPACE}$. Our proof confirms that this
branching-degree-uniform satisfiability problem is
$\mathrm{PSPACE}$-complete.
\end{remark}
\section{Emptiness of Tree Constraint Automata}
Recall that every nonempty B{\"u}chi\xspace automaton has an accepting
run which is ultimately periodic. We first prove that a nonempty
constraint automaton has an accepting run which ultimately consists of
loops that never contract the distances of data values and keep the
order type of
the data values constant. We then define the notion of the type of a
run. It turns out that such a non-contracting loop exists if and only
if the automaton has a run realising a type among a certain
set. Finally, we provide a $\mathrm{PSPACE}$-algorithm that checks whether an
automaton realises a given type. Putting all these together yields our
main technical result.
\begin{theorem}\label{thm:EmptinessPSPACE}
Emptiness of $\Tree^C$-constraint automata is in $\mathrm{PSPACE}$.
\end{theorem}
\subsection{Emptiness and Stretching Loops}
We first introduce some notation before defining our notion of
stretching loop and characterising emptiness in terms of stretching
loops.
From now on a \define{word} is always an element of $\mathbb{Q}^*$,
$\bigsqcap$ ($\sqcap$) denotes the (binary) \define{greatest common prefix
operator}, and
we fix a finite tuple of words $C=(c_1,c_2,\dots, c_m)$ called
constants. \textbf{We assume that $C$ is closed under prefixes.} Note that closing $C$ under prefixes results only in polynomial growth.
\begin{definition}
Let $s_1, \dots, s_n$ be
constant symbols and $\sigma = \Set{\preceq, \KBleq, s_1, s_2, \dots,s_n}$.
Given a tuple $\bar w = (w_1, w_2, \dots, w_n)$ of words,
the \define{maximal common ancestor tree} of $\bar w$ is the
$\sigma$-structure
\begin{equation*}
\MCAT(\bar w)= (M,{\preceq}\restriction{M^2},
{\KBleq}\restriction{M^2}, w_1, w_2, \dots, w_n) ,
\end{equation*}
where $w_i$ is the interpretation of constant symbol $s_i$ and
\begin{equation*}
M =
\Set{\varepsilon} \cup \Set{ \bigsqcap_{i\in I} w_i | \emptyset
\neq I\subseteq
\set{1, 2, \dots, n}}.
\end{equation*}
The \define{(order) type $\typ(\bar w)$ of $\bar w$}
is the $\sigma$-isomorphism type of $\MCAT(\bar w)$.
We set $\MCAT_C(\bar w) \coloneqq \MCAT(\bar w, C)$ and
$\typ_C(\bar w) \coloneqq \typ(\bar w, C)$.
\end{definition}
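Computationally, the node set of a maximal common ancestor tree is just the closure of the given words (plus the root) under binary greatest common prefixes, since $\sqcap$ is associative and commutative. A small illustrative sketch, with integer letters standing in for rationals:

```python
from itertools import combinations

def gcp(w, v):
    """Greatest common prefix of two tuples."""
    i = 0
    while i < min(len(w), len(v)) and w[i] == v[i]:
        i += 1
    return w[:i]

def mcat_nodes(words):
    """Node set M of MCAT(words): the root plus the greatest common
    prefixes of all nonempty subsets of words (equivalently, the
    closure of the words under binary gcp)."""
    nodes = {()} | {tuple(w) for w in words}
    changed = True
    while changed:
        changed = False
        for a, b in combinations(sorted(nodes), 2):
            g = gcp(a, b)
            if g not in nodes:
                nodes.add(g)
                changed = True
    return nodes

assert mcat_nodes([(1, 2, 3), (1, 2, 5), (1, 7)]) == \
    {(), (1,), (1, 2), (1, 2, 3), (1, 2, 5), (1, 7)}
```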
Labelling the words from $\bar w$ by constant symbols has the following
consequence: if $\typ_C(\bar w) = \typ_C(\bar v)$ for
$\bar w= (w_1, w_2, \dots, w_n)$ and $\bar v = (v_1, v_2, \dots, v_n)$,
then there is a unique
isomorphism $h$ from $\MCAT_C(\bar w)$ to $\MCAT_C(\bar v)$
which maps $c\mapsto c$ for every $c\in C$ and $w_i \mapsto v_i$ for
every $i$.
\begin{definition}
For $n\in\mathbb{N}$ we define a relation $\mathrel{\leq_C}$
on configurations from $Q\times (\mathbb{Q}^*)^n$
by $(q,\bar w) \mathrel{\leq_C} (p,\bar v)$ if
$q=p$,
$\typ_C(\bar w) = \typ_C(\bar v)$ and the
induced isomorphism $h:\MCAT_C(\bar w) \to
\MCAT_C(\bar v)$ satisfies for all $d,e\in \MCAT_C(\bar w)$
if $d\prec e$ then $\lvert h(e) \rvert - \lvert h(d) \rvert
\geq \lvert e \rvert - \lvert d\rvert$.
\end{definition}
Intuitively, $(q,\bar{w})\mathrel{\leq_C} (q,\bar{v})$ holds if both data
tuples have the same order type and the
lengths of intervals in $\MCAT_C(\bar v)$, seen as a subtree of
$\mathbb{Q}^*$, are at least the lengths of the corresponding intervals in
$\MCAT_C(\bar w)$. In the following sections, we make extensive use of the following
properties of $\mathrel{\leq_C}$.
\begin{lemma}\label{lem:stretchleqIsWQO} \label{lem:StrongUpwardsCompat}
\label{lem:CommonStretchLeqBoundExists}
\begin{enumerate}
\item \label{lem:stretchleqIsWQOPart1} $\mathrel{\leq_C}$ is a well-quasi order.
\item \label{lem:stretchleqIsWQOPart2} The (inverse) transition
relation
$\to$ ($\to^{-1}$) is strongly upwards compatible with respect to
$\mathrel{\leq_C}$ in the sense of \cite{FinkelS01}, i.e., if $u \to v$ ($u \to^{-1} v$) and $u \mathrel{\leq_C} u'$, then there is a $v'$ such that $v \mathrel{\leq_C} v'$ and $u' \to v'$ ($u' \to^{-1} v'$).
\item \label{lem:stretchleqIsWQOPart3}
Given two configurations $(q,\bar w)$ and $(q, \bar v)$ such
that $\typ_C(\bar w) = \typ_C(\bar v)$ then there is a
configuration $(q, \bar u)$ such that $(q,\bar w) \mathrel{\leq_C}
(q, \bar u)$ and $(q, \bar v) \mathrel{\leq_C} (q, \bar u)$.
\end{enumerate}
\end{lemma}
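The length condition in the definition of $\mathrel{\leq_C}$ is straightforward to test once the induced isomorphism is at hand. In this sketch $h$ is assumed to be given as a dictionary on MCAT nodes (computing $h$ itself is not shown, and the encoding is ours):

```python
def strictly_below(d, e):
    """d is a strict prefix of e."""
    return len(d) < len(e) and e[:len(d)] == d

def stretch_condition(h):
    """For all d strictly below e in the domain of h:
    |h(e)| - |h(d)| >= |e| - |d|."""
    return all(
        len(h[e]) - len(h[d]) >= len(e) - len(d)
        for d in h for e in h if strictly_below(d, e)
    )

# stretching: the interval between the root and the leaf grows from 1 to 3
assert stretch_condition({(): (), (1,): (1, 0, 0)})
# contracting an interval violates the condition
assert not stretch_condition({(): (), (1, 2): (9,)})
```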
\begin{definition}
A \define{loop} is a finite run $r = (c_i)_{i\leq n}$
with $c_0 = (q,\bar{w})$, $c_n = (q,\bar{v})$ and $\typ_C(\bar w) = \typ_C(\bar v)$.
We say that a loop $r = (c_i)_{i\leq n}$ is
\define{stretching} if $c_0 \mathrel{\leq_C} c_n$.
\end{definition}
\begin{lemma}\label{lem:characteriseAcceptingRuns}
Let $\Aut{A}$ be a constraint automaton.
$\Aut{A}$ has an accepting run if and only if there are partial runs
$r_1$, $r_2$ where $r_1$ starts in an initial configuration and ends
in some configuration $c$ whose state is a final state, and where
$r_2$ is a stretching loop starting in $c$.
\end{lemma}
\begin{proof}
$(\Rightarrow)$. Let $r=(c_i)_{i\in\mathbb{N}}$ be an accepting run.
Since $r$ contains infinitely many configurations with a final state
and $\mathrel{\leq_C}$ is a wqo, we can find
numbers $n_1 < n_2$ such that $c_{n_1}
\mathrel{\leq_C} c_{n_2}$
whence $(c_n)_{n\leq n_1}$, $(c_n)_{n_1 \leq n \leq n_2}$ are the
desired runs.
\noindent $(\Leftarrow)$. Assume $r_1$ is a run from some initial
configuration to $c_1$ whose state is a final state $f\in F$ and
$r_2$ is a stretching loop starting in $c_1$ and ending in $c_2$.
Since $c_1\mathrel{\leq_C} c_2$,
iterated use of strong upwards compatibility
(Lemma~\ref{lem:StrongUpwardsCompat}) yields runs $r_i$
from $c_{i-1}$ to $c_i$ such that $c_{i-1}\mathrel{\leq_C} c_i$ for all
$i\geq 3$.
Clearly, the composition of $r_1, r_2, r_3, r_4, \dots$ is an accepting
run.
\qed
\end{proof}
\subsection{Stretching Loops and Types of Runs}
\begin{definition}
Let $r=(c_i)_{0\leq i \leq n}$ be a finite run,
with $c_0 = (q,\bar{w})$ and
$c_n = (p, \bar{v})$. Setting
$\pi = \typ_C(\bar{w},\bar{v})$, we say $r$ has \define{type} $\typ(r) = (q, \pi, p)$.
\end{definition}
\begin{definition}
Let $\bar w, \bar v$ be $k$-tuples of words such that $\typ_C(\bar
w) = \typ_C(\bar v)$ and
let $h$ be the induced isomorphism from $\MCAT_C(\bar w)$
to $\MCAT_C(\bar v)$. $(\bar w, \bar v)$ is called
\define{contracting} if one of the following holds.
\begin{enumerate}
\item There is some $d \in \MCAT_C(\bar w)$ such that $h(d) \prec
d$.
\item There are $d,e \in \MCAT_C(\bar w)$ such that $d \prec e$,
$h(e) = e$ and $d \prec h(d)$.
\end{enumerate}
We call a loop $r$ from $(q, \bar w)$ to $(q, \bar v)$
contracting if $(\bar w, \bar v)$ is contracting. Otherwise,
we call it (and its type) noncontracting.
\end{definition}
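Given the induced isomorphism $h$ (again assumed to be supplied as a dictionary on MCAT nodes; an illustrative sketch, not the paper's formalism), both conditions of the definition can be checked directly:

```python
def strictly_below(d, e):
    """d is a strict prefix of e."""
    return len(d) < len(e) and e[:len(d)] == d

def is_contracting(h):
    """Contracting test: h moves some node strictly towards the root
    (condition 1), or fixes a node e while pushing some node below e
    strictly away from the root (condition 2)."""
    nodes = list(h)
    if any(strictly_below(h[d], d) for d in nodes):
        return True
    return any(
        strictly_below(d, e) and h[e] == e and strictly_below(d, h[d])
        for d in nodes for e in nodes
    )

assert is_contracting({(1, 2): (1,)})                        # condition 1
assert is_contracting({(1,): (1, 5), (1, 5, 5): (1, 5, 5)})  # condition 2
assert not is_contracting({(): (), (1,): (1, 0)})
```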
\begin{remark}
The type of a loop determines whether it is noncontracting. Let us explain
the term `contracting'. Fix a loop from $(q, \bar w)$ to $(q,
\bar v)$. The isomorphism $h:\MCAT_C(\bar w) \to
\MCAT_C(\bar v)$ relates for every pair $\bigsqcap_{k\in K} w_k
\prec \bigsqcap_{l\in L} w_l$ the interval $(\bigsqcap_{k\in K} w_k
, \bigsqcap_{l\in L} w_l)$ with the interval $(\bigsqcap_{k\in K}
v_k , \bigsqcap_{l\in L} v_l)$. By definition, for
every contracting loop there is a pair $(K,L)$ such that (setting
$\bigsqcap_{k \in \emptyset} w_k=\varepsilon$)
\begin{equation*}
\lvert \bigsqcap_{l\in L} w_l \rvert -
\lvert \bigsqcap_{k\in K} w_k \rvert >
\lvert \bigsqcap_{l\in L} v_l \rvert -
\lvert \bigsqcap_{k\in K} v_k \rvert.
\end{equation*}
\end{remark}
The technical core of this section shows that if an automaton admits a
noncontracting loop then it admits a stretching loop with the same
initial and final state. This allows us to rephrase the conditions
from Lemma \ref{lem:characteriseAcceptingRuns} in terms of types.
The proof of this claim requires some definitions and
preparatory lemmas.
\begin{definition}
Let $u$ be a word and $m\in \mathbb{N}$. We define
the \define{insertion of an $m$-gap at $u$} to be
$\iota_u^m: \mathbb{Q}^* \to \mathbb{Q}^*$ given by
$\iota_u^m(w) =
\begin{cases}
w &\text{if }u\not\preceq w, \\
u0^mv &\text{if } w = uv.
\end{cases}$
Given a finite run $r$, the
sequence $\iota_u^m(r)$ obtained by applying $\iota_u^m$ to each
data value of $r$ is the
\define{run obtained by insertion of an $m$-gap at $u$ in $r$}.
\end{definition}
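The map $\iota_u^m$ is essentially a one-liner; in this sketch (our own encoding on tuples) the letter $0$ is the inserted symbol, as in the definition:

```python
def insert_gap(u, m, w):
    """iota_u^m(w): leave w unchanged unless u is a prefix of w,
    otherwise splice m copies of the letter 0 in right after u."""
    if len(w) < len(u) or w[:len(u)] != u:
        return w
    return u + (0,) * m + w[len(u):]

assert insert_gap((1,), 2, (1, 5)) == (1, 0, 0, 5)
assert insert_gap((1,), 2, (2, 5)) == (2, 5)       # u not a prefix: unchanged
assert insert_gap((1,), 3, (1,)) == (1, 0, 0, 0)
```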
For $r=(c_i)_{i\in I}$ and $r'=(d_i)_{i\in I}$ we write
$r\mathrel{\leq_C} r'$ if $c_i \mathrel{\leq_C} d_i$ for all $i\in I$.
Note that the insertion of a gap preserves $\preceq, \KBleq$ and
$\sqcap$ in both directions.
\begin{lemma}\label{lem:insertionIsStretchleqCompatible}
Let $r$ be a run and $u$ a word that is not
a prefix of any
constant. Then the sequence $\iota_u^m(r)$ is
indeed a run $r'$ of the same
type and $r \mathrel{\leq_C} r'$.
\end{lemma}
Let $w, v\in \mathbb{Q}^*$. We say $w$ is \define{incomparable left} of $v$ if
$w\KBleq v$ and $w \not\preceq v$. In the same situation we
call $v$ \define{incomparable right} of $w$.
\begin{lemma}\label{lem:RightaboveShiftsRight}
Let $\bar w, \bar v$ be $k$-tuples with
$\typ(\bar w) = \typ(\bar v)$. If $w_i$ is incomparable left
(right) of $v_i$ and $v_i\preceq w_j$, then $w_j$ is incomparable
left (right) of $v_j$ and incomparable right (left)
of $w_i$.
\end{lemma}
\begin{proof}
By type equality, we have that $v_i$ is incomparable left of
$v_j$, whence the same holds for its descendant $w_j$. From
$w_i \KBleq v_i \preceq w_j$ follows $w_i \KBleq w_j$, and $w_i \not\preceq
w_j$ as $w_i \not\preceq v_i$.
\qed
\end{proof}
\begin{proposition}\label{prop:noncontracting-to-stretching}
Let $r$ be a noncontracting loop. There is a stretching
loop $r'$ such that $r\mathrel{\leq_C} r'$.
\end{proposition}
\begin{proof}
Let $r$ from $(q, \bar w)$ to $(q, \bar v)$ be a noncontracting loop and
$h:\MCAT_C(\bar w) \to \MCAT_C(\bar v)$ the induced
isomorphism.
We iteratively define a sequence $r = r_0 \mathrel{\leq_C} r_1
\mathrel{\leq_C} \dots \mathrel{\leq_C} r_n $ of runs until $r_n$ is stretching.
We call a pair $(u_1, u_2)\in \MCAT_C(\bar w)^2$ \emph{problematic} (with
respect to $r$) if $u_1 \preceq u_2$ and
$\lvert u_2 \rvert - \lvert u_1 \rvert > \lvert h(u_2) \rvert -
\lvert h(u_1) \rvert$.
Recall that in this case $u_2$ and $h(u_2)$ are not prefixes of any
constant $c$ from $C$ because $h$ fixes all such elements. Let
$P_r$ be the set of all problematic
pairs. We
split the set of all problematic pairs into three parts, which we
handle separately (cf.~Figure \ref{fig:Step1} for an example). Let
\allowdisplaybreaks
\begin{align*}
L_r &= \Set{ (u_1, u_2) \in P_r |
u_2 \text{ incomparable left of } h(u_2)}, \\
R_r &= \Set{ (u_1, u_2) \in P_r |
u_2 \text{ incomparable right of } h(u_2)}, \text{ and} \\
D_r &= \Set{ (u_1, u_2) \in P_r |
u_2 \text{ comparable to } h(u_2)}.
\end{align*}
\noindent\textbf{L-Step:}
If $L_r$ is nonempty, choose the $\KBleq$-minimal $u_2$ such that
there is $u_1$ with $(u_1, u_2)\in L_r$. Now fix $u_1$ such that
$(u_1,u_2)\in L_r$ and
$d \coloneqq (\lvert u_2 \rvert - \lvert u_1 \rvert) - (\lvert
h(u_2) \rvert - \lvert h(u_1) \rvert)$
is maximal. Let $\iota = \iota_{h(u_2)}^d$ be the insertion of a
$d$-gap at $h(u_2)$ and $r' = \iota(r)$.
Denote by $\iota(\bar{w})$ ($\iota(\bar{v})$) the data
values of the first (last, respectively) configuration of $r'$. Let
$h':\MCAT_C(\iota(\bar{w})) \to \MCAT_C(\iota(\bar{v}))$ be the corresponding
isomorphism.
\setlength{\textfloatsep}{0.5\textfloatsep}
\begin{figure}
\caption{Example for Proposition~\ref{prop:noncontracting-to-stretching}.}
\label{fig:Step1}
\end{figure}
By definition the set
$L_{r'} = \Set{ (x_1, x_2) \in P_{r'} |
x_2 \text{ incomparable left of } h'(x_2)}$
does not contain a pair $(u,\iota(u_2))$ for any $u
\in\MCAT_C(\iota(\bar{w}))$. Nevertheless, $r'$ may admit
problematic pairs that are not problematic with respect to $r$.
This can happen if there are $x_1,x_2\in \MCAT_C(\bar w)$ such that $x_1
\prec h(u_2) \preceq x_2$ holds, but $h(x_1) \prec h(u_2) \preceq
h(x_2)$ does not. Then, the distance
between $\iota(x_1)$ and $\iota(x_2)$ is
greater than the distance between $x_1$ and
$x_2$ (by $d$). On the other hand, either both or none of
$h'(\iota(x_1))$ and $h'(\iota(x_2))$ are shifted by
the insertion of the gap whence their distance
is equal to the distance of $h(x_1)$ and $h(x_2)$.
In this case, possibly $(\iota(x_1), \iota(x_2))$ is problematic
w.r.t.~$r'$
while $(x_1, x_2)$ is not problematic w.r.t~$r$. Application of
Lemma~\ref{lem:RightaboveShiftsRight} shows that then $x_2$ is
incomparable left of $h(x_2)$ and $u_2$ is
incomparable left of $x_2$ whence
the same holds for $\iota(x_2), h'(\iota(x_2)) = \iota(h(x_2))$ and
$\iota(u_2)$.
Thus, if $(\iota(x_1),\iota(x_2))$ is problematic, then
$(\iota(x_1),\iota(x_2))\in L_{r'}$ and $\iota(u_2)$ is
incomparable left of $\iota(x_2)$.
Thus,
iteration of this step only creates problematic pairs whose endpoints
lie further and further to the right in the fixed type $\typ_C(\bar w) =
\typ_C(\iota(\bar w))$.
Since this type is finite, we eventually
do not introduce new
problematic pairs and obtain a run $r_{i}$ such that $L_{r_i} =
\emptyset$ and $r \mathrel{\leq_C} r_i$ because
$r_i$ results from insertion of several gaps in $r$.
\noindent\textbf{R-Step}:
If $R_r\neq\emptyset$, proceed as in the L-Step, swapping
all ``left'' and ``right''.
\noindent\textbf{D-Step}:
If $L_r = R_r = \emptyset$ and $r$ is not stretching, then
$D_r \ne \emptyset$. Choose $u_2$ $\KBleq$-minimal in $\MCAT(\bar w)$
such that there is some $u_1$ with $(u_1,u_2)\in D_r$ and choose
$u_1\prec u_2$ in $\MCAT_C(\bar w)$ such that
$d \coloneqq (\lvert u_2 \rvert - \lvert u_1 \rvert) - (\lvert h(u_2) \rvert
- \lvert h(u_1) \rvert)$
is maximal. Since $r$ is not contracting we have
$u_2 \preceq h(u_2)$ and $u_1\preceq h(u_1)$. Assume $u_2 = h(u_2)$,
then $u_1 \prec h(u_1)$ as $(u_1,u_2) \in D_r$. This contradicts that
$r$ is not contracting. Thus $u_2 \prec h(u_2)$. Again, let
$\iota = \iota_{h(u_2)}^d$ and $r' = \iota(r)$.
Define $\iota(\bar{w}), \iota(\bar{v})$ and $h'$ as
in the $L$-step. Again there may be a pair $(x_1,x_2)$ which is
not problematic with respect to $r$ while $(\iota(x_1), \iota(x_2))$
is problematic with respect to $r'$. If $R_{r'}$ or $L_{r'}$ are
nonempty, we can deal with those problematic intervals using R- or
L-steps.
This finally leads to a run $r_{j}$ with
$R_{r_j} = L_{r_j} = \emptyset$. Moreover, for every pair $(x_1,x_2)$
such that this pair is not problematic with respect to $r$ but
$(\iota(x_1), \iota(x_2))$ is problematic with respect to $r'$, we
conclude that $x_2$ is strictly below $u_2$ whence
$\iota(x_2)$ is strictly below $\iota(u_2)$
w.r.t.~$\preceq$. Thus, the endpoints of problematic pairs move
downwards
(in $\typ_C(\bar{w}, \bar{v}) = \typ_C(\bar w', \bar
v')$) and eventually all problematic pairs are removed. Once $r_{j}$
is a loop without problematic pair, it is stretching. \qed
\end{proof}
\begin{corollary}\label{cor:EmptinessTypeCharacerisation}
The set of words accepted by an automaton $\Aut{A}$ is nonempty if and only if there are runs
$r_1$ and $r_2$ such that $r_2$ is a noncontracting loop starting in
configuration $(f, \bar w)$ where $f$ is a final state
and $r_1$ is a run from an initial configuration to some
configuration $(f, \bar v)$ such that $\typ_C(\bar w) = \typ_C(\bar
v)$.
\end{corollary}
\begin{proof}
Due to Lemma \ref{lem:characteriseAcceptingRuns}, only
$(\Leftarrow)$ requires a proof.
Assume that there are runs $r_1, r_2$ as stated above.
By Lemma \ref{lem:CommonStretchLeqBoundExists}, there is a run
$r_2'$ with $r_2 \mathrel{\leq_C} r_2'$ such that $(f,\bar v) \mathrel{\leq_C} c_0$
for $c_0$ the initial configuration of $r_2'$.
Note that $r_2'$ is also noncontracting whence
by Proposition \ref{prop:noncontracting-to-stretching} there is
a stretching loop $r_2''$ such
that $r_2'\mathrel{\leq_C} r_2''$. Hence this loop starts in some
configuration $c_1$ such that $(f,\bar v)\mathrel{\leq_C}
c_1$. Applying Lemma \ref{lem:StrongUpwardsCompat} to $r_1$ and
$c_1$ we obtain a run $r_1'$ from an initial configuration to
$c_1$. Thus, $r_1'$ and $r_2''$ match the conditions of Lemma
\ref{lem:characteriseAcceptingRuns} which completes the proof.
\qed
\end{proof}
\subsection{Emptiness and Computation of Types}
In order to turn this characterisation of emptiness in terms of types
into an effective algorithm for the emptiness problem, the last missing
step is to compute whether a given type is realised by
some run of a given automaton.
For this purpose, we equip the set of all sets of types with a product
operation. Let $S,T$ be sets of types of runs; a type $(q,\pi,p)$ is
in $S \cdot T$ if there are $(q,\pi_1,r)\in S$, $(r,\pi_2,p)\in T$ and
tuples $\bar u,\bar v, \bar w$ such that $\typ_C(\bar u,\bar v) =
\pi_1$, $\typ_C(\bar v, \bar w) = \pi_2$ and $\typ_C(\bar u, \bar w) =
\pi$. Let $T_1$ denote the set of all types of runs of length $1$ (of
some fixed automaton $\Aut{A}$) and
$T_1^+ = \bigcup_{n\geq 1} (T_1)^n$. By induction on the length, one
easily shows that every finite run $r$ of $\Aut{A}$ satisfies $\typ(r) \in
(T_1)^+$. Conversely,
for every type $t\in (T_1)^+$ there is also a run of $\Aut{A}$ of type
$t$. This is due to the fact that gap-insertion preserves types
(Lemma~\ref{lem:insertionIsStretchleqCompatible}),
$\to$ is upwards compatible (Lemma~\ref{lem:StrongUpwardsCompat}) and
that trees of a given type $t_1$ with large gaps have, for all order
types $t, t_2$ with $t \in \set{t_1}\cdot \set{t_2}$, an extension to
a tree witnessing this product. The necessary proofs are not very
difficult but tedious and lengthy.
We conclude that a
type $t$ is in $(T_1)^+$ if and only if $t$ is the type of some run of
$\Aut{A}$. Moreover, types of runs can be
represented in polynomial space (in terms of the constants and the
dimension of a given automaton) and the product of types can be
computed in $\mathrm{PSPACE}$. Thus, we can determine whether an
automaton $\Aut{A}$ realises a type $t$ by guessing
types in $T_1$ and computing an element of their product until it matches $t$.
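Abstracting types to opaque hashable values and the product to a function returning the set of possible compositions, membership in $(T_1)^+$ amounts to a closure computation. The fixpoint sketch below (our own abstraction) trades the on-the-fly guessing of the $\mathrm{PSPACE}$ algorithm for an explicit, worst-case exponential, reachability set:

```python
def closure(T1, product):
    """All types realisable as products of nonempty sequences over T1,
    i.e. the closure of T1 under right-composition with length-1 types.
    `product(s, t)` maps a pair of types to the set of their possible
    compositions (a stand-in for the type product of the paper)."""
    reach = set(T1)
    frontier = list(T1)
    while frontier:
        s = frontier.pop()
        for t in T1:
            for st in product(s, t):
                if st not in reach:
                    reach.add(st)
                    frontier.append(st)
    return reach

# toy example: "types" are run lengths capped at 3
assert closure({1}, lambda s, t: {min(s + t, 3)}) == {1, 2, 3}
```

Only right-extension by length-1 types is needed here, since the type of a run of length $n+1$ is a product of a length-$n$ type with a length-$1$ type.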
This proves the following proposition.
\begin{proposition}\label{prop:TypesComputable}
There is a
$\mathrm{PSPACE}$-algorithm that, given a $\Tree^C$-constraint automaton
$\Aut{A}$ and a type
$t$, determines whether there is a run of $\Aut{A}$ of type $t$.
\end{proposition}
\noindent
Together with Corollary \ref{cor:EmptinessTypeCharacerisation} we
obtain an algorithm proving Theorem~\ref{thm:EmptinessPSPACE}.
\begin{proof}[of Theorem \ref{thm:EmptinessPSPACE}]
By Corollary \ref{cor:EmptinessTypeCharacerisation} it suffices that
the algorithm guesses a type $(i,\pi, f)$ and a noncontracting type
$(f, \pi', f)$ such that $i$ is an initial state, $f$ is a final
state, and the order type of the last elements of $\pi$ coincides
with the order type of the first elements of $\pi'$, and then checks
whether these types are realised by actual runs using the previous
proposition.
\qed
\end{proof}
\appendix
\section{Proof of Proposition \ref{prop:Depth2ConstraitnsSuffice}}
First we recall the proposition.
\begin{proposition}
There is a polynomial time algorithm that computes, on input a
$\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$-formula $\phi$, an equivalent $\ensuremath{\mathrm{LTL}(\set{\preceq, \KBleq, S})\xspace}$-formula $\psi$ such
that $\psi$ does not contain terms of the form $\mathsf{X}^i x$ with
$i\geq 2$.
\qed
\end{proposition}
\begin{proof}
First, we replace any occurrence of $\mathsf{X}^i x \mathrel* \mathsf{X}^j
y$ by $\mathsf{X}^{\min(i,j)} \bigl((\mathsf{X}^{i-\min(i,j)} x) \mathrel* (\mathsf{X}^{j-\min(i,j)}
y)\bigr)$. Now assume that there is a subformula of the form $\mathsf{X}^i x \mathrel*
y$ with $i\geq 2$ (the case $x \mathrel* \mathsf{X}^j y$ is symmetric). Introducing fresh
variables $x_0,x_1, \dots, x_{i}$, we replace this subformula by the
formula $x_i \mathrel* y$ and add the conjunct
$\mathsf{G}(x_0 = x \land \bigwedge_{j=1}^{i} x_j = \mathsf{X} x_{j-1})$,
which is of size polynomial in $i$ and whose constraints all have depth at
most $1$. Since $x_j$ is forced to carry the value of $x$ shifted by $j$
positions, this replacement yields an
equivalent formula. Iterating this process for all constraints, we
obtain the desired formula $\psi$.
\qed
\end{proof}
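\noindent
One way to carry out this elimination on a concrete constraint is the
following (here the fresh variable $x_j$ carries the value of $x$
shifted by $j$ positions). Consider $\mathsf{X}^3 x \preceq \mathsf{X} y$. The
first step extracts the common prefix,
\begin{equation*}
\mathsf{X}^3 x \preceq \mathsf{X} y
\quad\rightsquigarrow\quad
\mathsf{X}\bigl((\mathsf{X}^2 x) \preceq y\bigr),
\end{equation*}
and in the remaining constraint $\mathsf{X}^2 x \preceq y$ the term
$\mathsf{X}^2 x$ is replaced by a fresh variable $x_2$, yielding
$\mathsf{X}(x_2 \preceq y)$ together with the global conjunct
$\mathsf{G}(x_0 = x \land x_1 = \mathsf{X} x_0 \land x_2 = \mathsf{X} x_1)$;
every term now has $\mathsf{X}$-depth at most $1$.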
\section{Missing part of Theorem \ref{thm:SatAndMCInPSPACE}}
\label{sec:Orders}
Let $\Struc{O} = ( \set{11,22}^*12, \KBleq)$ where $\KBleq$ denotes the
lexicographical order.
\begin{lemma} \label{lem:QAsLexOrder}
$\Struc{O}$ and $(\mathbb{Q}, <)$ are isomorphic.
\end{lemma}
\begin{proof}
$\Struc{O}$ is countable and does not have endpoints because
$((11)^n12)_{n\in\mathbb{N}}$ forms a strictly descending sequence such that
any element of $\Struc{O}$ is minorised by some element of the
chain. Analogously, $((22)^n12)_{n\in\mathbb{N}}$ is a strictly increasing
sequence majorising every element.
Thus, it remains to show that $\KBleq$ is a dense order. Let
$w,v\in \Struc{O}$ with $w \KBless v$. Write $w=w_1w_2\dots w_k$
with $w_i\in\set{11,12,22}$ and
$v=v_1v_2\dots v_l$ with $v_i\in\set{11,12,22}$.
Since the blocks preceding the final $12$ lie in $\set{11,22}$, no
element of $\Struc{O}$ is a proper prefix of another, so we may
let $i$ be minimal such that $w_i\neq v_i$. If $v_i = 12$ then
$w_i=11$ and $w_1w_2\dots w_i (22)^{\abs{w}}12$ lies strictly between $w$
and $v$.
If $v_i = 22$ and $w_i = 11 $ or $w_i = 12$ then
$w \prec w_1w_2\dots
w_{i-1}22(11)^{\abs{v}}12\prec v$.
\qed
\end{proof}
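\noindent
For illustration, take $w = 1112$ and $v = 2212$, with block
decompositions $11\cdot12$ and $22\cdot12$. Then $i=1$, $w_1 = 11$
and $v_1 = 22$, and the construction of the proof produces the
element
\begin{equation*}
22\,(11)^{\abs{v}}\,12 = 221111111112 \in \set{11,22}^*12,
\end{equation*}
which indeed satisfies $w \KBless 221111111112 \KBless v$.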
\begin{definition}
For $\sigma$ some signature and $\sigma$-structures
$\Struc{A}$ and $\Struc{B}$ we say a homomorphism $h:\Struc{A} \to
\Struc{B}$ is a \define{$\sigma$-injection} if it is injective and preserves
the relations, functions and constants under preimages.
\end{definition}
\begin{lemma}\label{lem:PreserverKBandPreceqToKAryTree}
Let $h: (\mathbb{Q}, <) \to \Struc{O}$ be an isomorphism.
The extension $g: \mathbb{Q}^* \to (\set{11,22}^*12)^*$,
given by $g(q_1q_2\dots q_n) = h(q_1)h(q_2)\dots h(q_n)$
is an
$\set{\preceq, \KBleq}$-injection of $\Tree = (\mathbb{Q}^*, \preceq, \KBleq)$ into
$\Tree[2]=(\set{1,2}^*, \preceq, \KBleq)$.
\end{lemma}
\begin{proof}
Note that $g$ is injective: if $w$ is in $\image(g)$, then the
number of occurrences of $12$ where $1$ occurs at an odd position
determines the length of every preimage $v$ such that $g(v) =
w$. It is then a routine check to prove uniqueness of $v$.
We next show that $g$ preserves $\preceq$ (in both directions).
It is obvious from the definition that $w \preceq v$ implies $g(w) \preceq
g(v)$. Now assume that $g(w) \preceq g(v)$. Due to the same argument
as in the injectivity proof, this implies that $w = w_1w_2\dots w_k$,
$v = v_1v_2\dots v_l$, $k\leq l$ and $h(w_i) = h(v_i)$ for every
$1\leq i \leq k$. Since $h$ is injective, it follows that $w_i =
v_i$ for all $i\leq k$ which implies $w \preceq v$.
Finally, we have to prove preservation of $\KBleq$.
For rational numbers $q_1, q_2$ we have $q_1 < q_2$ iff $h(q_1)
\KBleq h(q_2)$.
From this it easily follows that
for words $w,w'\in \mathbb{Q}^*$, $w\KBleq w'$ if and only if
$w\preceq w'$ or $w = viw_1$ and $w' = vjw_2$ for some $v\in\mathbb{Q}^*$
and some $i<j$
if and only if
$g(w) \preceq g(w')$ or $g(w) = g(v)h(i)g(w_1)$ and
$g(w') = g(v)h(j)g(w_2)$ with $h(i) \KBless h(j)$
if and only if $g(w) \KBleq g(w')$.
\qed
\end{proof}
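\noindent
To see the decoding used in the injectivity argument at work in a small
case, suppose $h(q) = 1112$ and $h(q') = 12$, so $g(qq') = 111212$.
The factor $12$ occurs at the odd starting positions $3$ and $5$, so
every preimage of $111212$ has length $2$; splitting the word after
these occurrences recovers the factors $1112$ and $12$, and
injectivity of $h$ then yields $qq'$ as the unique preimage.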
\section{Missing Proofs Concerning $\mathrel{\leq_C}$}
In this section we prove Lemma \ref{lem:stretchleqIsWQO}.
Part \ref{lem:stretchleqIsWQOPart1} is proved in Lemma
\ref{lem:App:stretchleqIsWQO},
Part \ref{lem:stretchleqIsWQOPart2} in Proposition \ref{prop:StrongUpwardsCompat} and
Part \ref{lem:stretchleqIsWQOPart3} in Lemma \ref{lem:stretchleqUpperBoundsExist}.
\subsection{Proof of Part 1}
\begin{lemma}\label{lem:App:stretchleqIsWQO}
$\mathrel{\leq_C}$ is a well-quasi order.
\end{lemma}
\begin{proof}
Obviously, $\mathrel{\leq_C}$ is a quasi order.
Any infinite sequence $(\bar w^i)_{i\in\mathbb{N}}$ of $n$-tuples of words
induces an infinite sequence $(\bar w^i, C)_{i\in\mathbb{N}}$. The latter
has an infinite subsequence $(\bar w^i, C)_{i\in I}$ such that
for all $i,j\in I$, $\typ_C(\bar w^{i}) = \typ_C(\bar w^{j})$. This
implies that $\MCAT_C(\bar w^i)$ and $\MCAT_C(\bar w^j)$ are
isomorphic for all $i,j\in I$ via an isomorphism
$\phi_{i,j}$.
For every $i\in I$ we define a map $f_i: \MCAT_C(\bar w^i)^2 \to \mathbb{N}$
by $(u,v) \mapsto \lvert u\rvert - \lvert u\sqcap v \rvert$. Fix an
$i_0 \in I$ and an enumeration of the domain of $f_{i_0}$. This
induces an enumeration of the domain of $f_i$ for every $i\in I$ by
letting $(u,v)\in\dom(f_i)$ be the $k$-th element if
$(\phi_{i,i_0}(u), \phi_{i,i_0}(v))$ is the $k$-th element of
$\dom(f_{i_0})$.
By Dickson's Lemma we find tuples $\bar w^j$, $\bar w^k$ ($j<k$)
such that for all $(u,v)\in\dom(f_j)$, $f_k( \phi_{j,k}(u),
\phi_{j,k}(v)) \geq f_j(u,v)$. From this we immediately conclude that
$\bar w^j \mathrel{\leq_C} \bar w^k$.
\qed
\end{proof}
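\noindent
Here Dickson's Lemma is used in the following form: any infinite
sequence of vectors in $\mathbb{N}^m$ (ordered componentwise) contains an
earlier vector $\bar a$ and a later vector $\bar b$ with $\bar a \leq
\bar b$ componentwise. For instance, in the sequence $(2,5), (3,1),
(4,6), \dots$ the first and third vectors already satisfy
$(2,5)\leq(4,6)$. The lemma applies above because, via the fixed
enumerations, each $f_i$ can be read as a vector in $\mathbb{N}^m$ of the
common length $m = \lvert \dom(f_{i_0}) \rvert$.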
\subsection{Proof of Part 2}
We prepare the proof of strong upwards compatibility of the transition
relation by formally proving the following intuition: if $\MCAT_C(\bar
w')$ has larger gaps than $\MCAT_C(\bar w)$ (seen as subtrees of
$\mathbb{Q}^*$), every extension of $\MCAT_C(\bar w)$ to a bigger tree induces
a corresponding extension of $\MCAT_C(\bar w')$ to a bigger tree of
the same order type.
\begin{definition}
For $D, E, F$ sets with $D \subseteq E$, we say $h:E \to F$ extends
$g:D\to F$ if $h\restriction{D} = g$.
\end{definition}
\begin{lemma}\label{lem:AuxiliaryUpwardsCompatibiltyOfTypes}
Let $\sigma=\set{\preceq, \KBleq, \sqcap}$ and
$\bar w, \bar w'\in \mathbb{Q}^*$ be tuples such that $\bar w
\mathrel{\leq_C} \bar w'$. The isomorphism $h: \MCAT_C(\bar
w) \to \MCAT_C(\bar w')$ extends to a $\sigma$-injection
$f: (\mathbb{Q}^*, \preceq, \KBleq) \to
(\mathbb{Q}^*, \preceq, \KBleq)$.
\end{lemma}
\begin{proof}
In order to simplify the notation, we assume without loss of
generality that $C\subseteq \bar w$. We
define a family of $\sigma$-injections $f_j: \mathbb{Q}^{\leq j} \to
\Tree$ such that $f_j$ extends $h\restriction{M_j}$ where
$M_j = \Set{w \in \MCAT_C(\bar w) | \abs{w} \leq j}$.
Let $f_0:\set{ \varepsilon} \to \set{ \varepsilon}$.
Assume that
$f_j$ has been defined and satisfies that for all $\bar v \subseteq
\bar w$ and all $u\in \mathbb{Q}^j$
\begin{enumerate}
\item $u \preceq \bigsqcap \bar v$ iff $f_j(u) \preceq h(\bigsqcap
\bar v)$ and
\item if $u\preceq \bigsqcap \bar v$ then $ \lvert \bigsqcap \bar v
\rvert - \lvert u \rvert \leq \lvert h(\bigsqcap \bar v) \rvert -
\lvert f_j(u) \rvert$.
\end{enumerate}
For each word $u\in \mathbb{Q}^{j}$, we define the values of $f_{j+1}$ on $u\mathbb{Q}$
according to the following rule.
Let $\bar v_1, \bar v_2, \dots, \bar v_m \subseteq \bar w$ be those
subsets such that for each $i$ there is some $q_i\in \mathbb{Q}$ with
$\bigsqcap \bar v_i = u q_i$. We can assume that $q_1 \leq q_2 \leq \dots
\leq q_m$.
Note that the second condition on $f_j$ implies that
$f_j(u)$ and $h(\bigsqcap \bar v_i)$ have distance at least $1$
whence there is some $q_i'\in \mathbb{Q}$ such that $f_j(u)q_i'\preceq
h(\bigsqcap \bar v_i)$.
We claim that for all $k,l \leq m$ we have $q_k \leq q_l$ if and
only if $q_k' \leq q_l'$.
\begin{itemize}
\item If $q_k = q_l$ then $u q_k \preceq \bigsqcap \bar v_k \sqcap
\bigsqcap \bar v_l = \bigsqcap (\bar v_k \cup \bar v_l)$.
Thus, there is some $i$ such that
$\bigsqcap \bar v_i = \bigsqcap (\bar v_k \cup \bar v_l)$ and
$q_i = q_k = q_l$. Then $f_j(u)q'_i \preceq h(\bigsqcap \bar v_i)
\preceq h(\bigsqcap \bar v_k)$ and analogously for $\bar v_l$
whence $q'_i = q'_k = q'_l$.
\item If $q_k < q_l$ then $\bigsqcap \bar v_k \sqcap \bigsqcap \bar
v_l = u$. Thus, $u\in\MCAT_C(\bar w)$ and $f_j(u) = h(u) =
h(\bigsqcap \bar v_k) \sqcap h(\bigsqcap \bar v_l)$.
Moreover, $\bigsqcap \bar v_k \KBless \bigsqcap \bar v_l$ whence
$h(\bigsqcap \bar v_k) \KBless h(\bigsqcap \bar v_l)$.
The only possibility to match both requirements is that
$q'_k < q'_l$.
\end{itemize}
Fixing isomorphisms
$g_i:\Set{ q\in \mathbb{Q} | q_i <q < q_{i+1}} \to \Set{ q\in \mathbb{Q} | q_i' <q <
q_{i+1}'}$ (with $q_0 = q_0' = -\infty$ and $q_{m+1}= q'_{m+1} =
\infty$), we define for every $q\in \mathbb{Q}$
\begin{equation*}
f_{j+1}(u q) =
\begin{cases}
h(\bigsqcap \bar v_i) &\text{if }q = q_i, \\
f_j(u)g_{i-1}(q) &\text{otherwise, where } q_i\in\set{q_1, \dots, q_m,
q_{m+1}} \text{\ is minimal with } q < q_i.
\end{cases}
\end{equation*}
Assuming that $f_j$ preserves $\preceq, \KBleq$, and $\sqcap$ in
both directions, it is not difficult to prove the same result for
$f_{j+1}$. Thus, the limit of $(f_j)_{j\in\mathbb{N}}$ is the desired
$\sigma$-injection $f$.
\qed
\end{proof}
\begin{proposition}
\label{prop:StrongUpwardsCompat}
$\to$ and $\to^{-1}$ are strongly upwards compatible with respect to
$\mathrel{\leq_C}$.
\end{proposition}
\begin{proof}
Given $k$-tuples $\bar w, \bar v, \bar w'$ and states $q,p$ such
that there is a transition $(q, \bar w) \to (p, \bar v)$ and such
that $\bar w \mathrel{\leq_C} \bar w'$, we have to show that there is
some $\bar v'$ with $\bar v \mathrel{\leq_C} \bar v'$ and a transition $(q, \bar w')
\to (p, \bar v')$.
Since $\bar w \mathrel{\leq_C} \bar w'$,
the isomorphism $h: \MCAT_C(\bar w) \to \MCAT_C(\bar w')$ extends
(by Lemma~\ref{lem:AuxiliaryUpwardsCompatibiltyOfTypes}) to
a $\set{\preceq, \KBleq, \sqcap}$-injection
$\hat h: \mathbb{Q}^* \to \mathbb{Q}^*$.
Setting $v'_i = \hat h(v_i)$ for each $v_i\in \bar v$ we obtain with
$\bar v' = (v'_1, \dots, v'_k)$
that $(p, \bar v ) \mathrel{\leq_C} (p, \bar v')$ and $(q,\bar w') \to
(p, \bar v')$ as desired.
The argument for
$\to^{-1}$ is completely analogous.
\qed
\end{proof}
\subsection{Proof of Part 3}
Recall from Lemma \ref{lem:insertionIsStretchleqCompatible} that
insertion of an $n$-gap at some $u$ which is not a prefix of a
constant from $C$ preserves the type and leads to a $\mathrel{\leq_C}$
larger tuple. Iterated use of this lemma proves Part
\ref{lem:stretchleqIsWQOPart3} of Lemma \ref{lem:stretchleqIsWQO},
which we restate in the following lemma.
\begin{lemma} \label{lem:stretchleqUpperBoundsExist}
Given two configurations $(q,\bar w)$ and $(q, \bar v)$ such
that $\typ_C(\bar w) = \typ_C(\bar v)$, there is a
configuration $(q, \bar u)$ such that $(q,\bar w) \mathrel{\leq_C}
(q, \bar u)$ and $(q, \bar v) \mathrel{\leq_C} (q, \bar u)$.
\end{lemma}
\begin{proof}
Let $d\in \mathbb{N}$ be maximal such that there are $x_1,x_2\in
\MCAT_C(\bar w)$ with $x_1\preceq x_2$ and $\abs{x_2} - \abs{x_1} =
d$. Inductively, from the $\preceq$-maximal elements to
$\varepsilon$, we insert a gap of size $d$ at each $y\in\MCAT_C(\bar
v)$ that is not a prefix of a constant from $C$. These
iterated insertions finally result in a tuple $\bar u$ such that
$(q, \bar v) \mathrel{\leq_C} (q, \bar u)$ and such that for all
$z_1,z_2\in \MCAT_C(\bar u)$ with $z_1\preceq z_2$, where $z_2$ is
not a prefix of any constant from $C$, we have $\abs{z_2}-\abs{z_1}\geq
d$. Thus, by definition of $d$, also $(q, \bar w) \mathrel{\leq_C} (q,
\bar u)$ holds as desired.
\qed
\end{proof}
\section{Computation of Types}
The goal of this section is to prove Proposition
\ref{prop:TypesComputable}, i.e., to provide an algorithm that checks
whether a given type is realised by one of the runs of a given
$\Tree^C$-automaton. For this purpose we first
fix an $n$-dimensional $\Tree^C$-constraint automaton $\Aut{A}$ with state set
$Q$. We equip the power set of all types with a product operation as follows.
\begin{definition}
\begin{itemize}
\item Let $\RunTypes$ denote the \define{set of all types} $(q,\pi,p)$ where
$q,p\in Q$ and $\pi= \typ_C(\bar w, \bar v)$ where $\bar w$
and $\bar v$ are $n$-tuples of words.
\item We equip the power set $2^\RunTypes$ with a \define{product}
$\cdot$ as
follows. For $t = (q_1,\pi_1,p_1) ,u = (q_2, \pi_2,p_2)$,
$v = (q_3,\pi_3,p_3) \in \RunTypes$ let $t\in \set{u} \cdot \set{v}$
if
\begin{enumerate}
\item $q_1 = q_2$, $p_1 = p_3$, $p_2 = q_3$, and
\item there are $n$-tuples $\bar x, \bar y, \bar
z$ such that $\typ_C(\bar x, \bar y) = \pi_2$,
$\typ_C(\bar y, \bar z) = \pi_3$ and
$\typ_C(\bar x, \bar z) = \pi_1$.
\end{enumerate}
Generally, for $A,B\subseteq \RunTypes$ we define
$A\cdot B = \bigcup\Set{\set{u}\cdot\set{v} | u\in A, v\in B}$.
\item The \define{set of types of
one-step runs} $T_1\subseteq \RunTypes$ is given by
$t=(q,\pi,p)\in T_1$ if there is a transition $(q,\beta,p)$ of
$\Aut{A}$ such that $\pi$ satisfies $\beta$.
\item Let $T_1^1=T_1$, $T_1^{n+1} = T_1^n \cdot T_1$, and $T_1^+ =
\bigcup_{n \geq 1} T_1^n$.
\end{itemize}
\end{definition}
\begin{remark}
One easily checks that $t\in T_1$ holds if and only if there is a
run of length $1$ with type $t$.
\end{remark}
The product operation resembles the composition of types. As a
consequence one can connect the runs of $\Aut{A}$ and $T_1^+$ as
follows.
\begin{lemma}\label{lem:TypesAndRunsConnection}
There is a run of $\Aut{A}$ of type $t$ if and only if $t \in
T_1^+$.
\end{lemma}
Before we provide a proof, we show how this lemma can be used to prove
Proposition \ref{prop:TypesComputable}, which we restate here:
\begin{proposition}
There is a
$\mathrm{PSPACE}$-algorithm that, given an $n$-dimensional
$\Tree^C$-constraint automaton
$\Aut{A}$ and a type
$t$, determines whether there is a run of $\Aut{A}$ of type $t$.
\end{proposition}
\begin{proof}
Writing $m = \max\Set{\abs{c} | c\in C}$, the algorithm uses polynomial
space in terms of $m + n + \abs{\Aut{A}}$.\footnote{Assuming any
reasonable notion of size of an automaton.}
Given $n$-tuples $\bar w$ and $\bar v$, note that $\MCAT_C(\bar w, \bar v)$
contains at most $4n$ elements that are not constants. Thus, we can
represent any type by $2$ states and $2n$ words of length at most
$m+ 4n$.
Moreover, it takes logarithmic space in $n$ and $\abs{\Aut{A}}$
to check whether a given type satisfies a specific transition.
Finally, it only needs $O( 2n (m+(4n\cdot 2n)))$ space to decide
whether a given type $t$ is in the product of two types $t_1,t_2$
(cf.~the upcoming Lemma \ref{lem:BigGapsAllowAllTypes}).
Thus, an $\mathrm{NPSPACE}$ ($= \mathrm{PSPACE}$ by Savitch's theorem) algorithm can guess a
first type $t_1\in T_1$ and, having stored a type $t_i\in T_1^i$,
it can guess another type $t\in T_1$ and a type $t_{i+1}$ and
verify that $t_{i+1} \in \set{t_i} \cdot \set{t}$. This procedure is
iterated until $t_{i}$ is the desired type and the algorithm reports
that $t_i$ can be realised by some run.
\qed
\end{proof}
\subsection{Proof of Lemma \ref{lem:TypesAndRunsConnection}}
We finally have to prove the connection between composition of runs
and products of their types. One direction is easily shown and
contained in the following lemma.
\begin{lemma}
For $r=(c_i)_{1\leq i \leq n }$ a run (with $n\geq 2$),
$\typ(r)\in T_1^{n-1}$.
\end{lemma}
\begin{proof}
For $n = 2$ the claim follows by definition of $T_1^{2-1} = T_1$.
We proceed by induction.
Write $c_i = (q_i, w^i_1, \dots, w^i_k)$. Let
$r'=(c_i)_{1\leq i \leq n-1}$ and $r_{n-1} = (c_i)_{n-1\leq i \leq n}$.
By induction hypothesis $\typ(r') = (q_1, \pi, q_{n-1}) \in T_1^{n-2}$ with
\begin{equation*}
\pi = \typ_C(w^1_1, w^1_2, \dots, w^1_k, w^{n-1}_1,
\dots, w^{n-1}_k),
\end{equation*}
and $\typ(r_{n-1}) = (q_{n-1}, \pi_{n-1}, q_n) \in T_1$ with
\begin{equation*}
\pi_{n-1} =
\typ_C(w^{n-1}_1, \dots, w^{n-1}_k, w^n_1, \dots, w^n_k).
\end{equation*}
Thus, the tuples $w^1_1, \dots, w^1_k$, $w^{n-1}_1, \dots,
w^{n-1}_k$, $w^n_1, \dots, w^n_k$ witness that
\begin{equation*}
(q_1, \pi', q_n) := \typ(r) \in \typ(r') \cdot \typ(r_{n-1})
\subseteq T_1^{n-2} \cdot T_1 = T_1^{n-1},
\end{equation*}
which completes the proof.
\qed
\end{proof}
The other direction of Lemma \ref{lem:TypesAndRunsConnection} relies
on the following intuition.
\begin{enumerate}
\item By upwards-compatibility and gap-insertion, every type realised
by some run is realised by one with large gaps between all pairs of
elements except the constants.
\item If two $n$-tuples $\bar w, \bar v$ have $2n$-gaps between all
pairs of elements from $\MCAT_C(\bar w, \bar v)$ except the
constants,
then for every type $t \in \typ_C(\bar w, \bar v) \cdot T_1$ there
is a tuple $\bar u$ such that $\bar w, \bar v, \bar u$ witness this
inclusion.
\item Thus, assuming that all types from $T_1^{n-1}$ are realised by
runs, for all $t\in T_1^{n-1} \cdot T_1$ we can realise the
appropriate type from $T_1^{n-1}$ with a run $r$ that has large gaps at
its last configuration and find a witness for $t$ by realising the
appropriate type from $T_1$ using the values of the last
configuration of $r$.
\end{enumerate}
Proving these intuitions is rather tedious and we give the details in
the following. Recall that we assume that the set of constants $C$ is
closed under prefixes. Let us first make precise what a gap is.
\begin{definition}
We say that a tree $T\subseteq \mathbb{Q}^*$ has \define{$n$-gaps above C} if for
all $d,e\in T$ with $d\prec e$ such that $e \not\preceq c$ for all
$c\in C$ we have $\lvert e \rvert - \lvert d \rvert > n$.
\end{definition}
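\noindent
For example, with letters $a,b,c,d\in\mathbb{Q}$, let $C = \set{\varepsilon,
a, ab}$ (a prefix-closed set of constants) and $T = \set{\varepsilon,
a, abcd}$. The pair $\varepsilon \prec a$ is exempt because $a
\preceq ab \in C$, while the remaining pairs $\varepsilon \prec abcd$
and $a \prec abcd$ have length differences $4$ and $3$; hence $T$ has
$2$-gaps above $C$ (but not $3$-gaps).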
We can now give a precise version of the first claim.
\begin{lemma}\label{lem:allRunsRealisedWithGaps}
Given a finite run $r$ from a configuration $(q,\bar w)$ to $(p,\bar v)$,
there is a run $r'$ from $(q,\bar w')$ to
$(p, \bar v')$ of the same type such that
$\MCAT_C(\bar w', \bar v')$ has $2n$-gaps above $C$.
\end{lemma}
\begin{proof}
Let $r$ be a run from $(q,\bar w)$ to $(p,\bar v)$. For each $u \in
\MCAT_C(\bar w, \bar v)$ (starting with the $\preceq$-maximal ones) that
is not a prefix of a constant from $C$, we insert a gap of size $2n$ at $u$ in
$r$. Since gap insertion preserves types (Lemma
\ref{lem:insertionIsStretchleqCompatible}), the resulting run $r'$
from $(q, \bar w')$ to $(p, \bar v')$ is of the same type as $r$ and
$\MCAT_C(\bar w', \bar v')$ has $2n$-gaps above $C$. \qed
\end{proof}
For the second claim we need a technical lemma first and then prove
the second intuition to be correct.
\begin{lemma} \label{lem:Auxiliary:BigGapsAllowAllTypes} Let $\sigma =
\set{\preceq, \KBleq, \sqcap}$, $n\in\mathbb{N}$. Let $A\subseteq \mathbb{Q}^*$ be
some finite set closed under maximal common prefixes such that
$\varepsilon\in A$. Let $B\subseteq A$ and $h:A\to \Tree$ a
$\sigma$-injection such that $h(A)$ has $n$-gaps above $h(B)$.
Given $D\subseteq \mathbb{Q}^*$ such that
\begin{enumerate}
\item $\lvert D \setminus A \rvert \leq n$,
\item $D\cup A$ is closed under maximal common prefixes, and
\item there is no $d\in D$ and $b\in B$ such that $d\preceq b$,
\end{enumerate}
then $h$ extends to a $\sigma$-injection $h_D:A\cup D \to \Tree$.
\end{lemma}
\begin{proof}
The base case $n = 0$ is trivial. Assume that the lemma has been
proven for some $n\in\mathbb{N}$. If $\lvert D\setminus A \rvert = n+1$,
let $d\in D\setminus A$ be $\KBleq$-minimal. By induction
hypothesis it suffices to extend $h$ to a $\sigma$-injection $h':
A\cup \set{d} \to \Tree$ such that $h'(A\cup\set{d})$ has $n$-gaps above $h'(B\cup\set{d})$.
We first define the image of $d$ by a case distinction and prove
that the resulting map $h'$ has the desired properties. We
distinguish two cases.
\begin{enumerate}
\item Assume that there is some $a\in A$ such that $d\preceq a$.
Since $\varepsilon\in A$ we find a maximal $w\in A$ such that
$w\prec d$. Moreover, $\bar a = \bigsqcap \Set{a\in A |d\preceq
a}$ is well defined and satisfies $w\prec \bar a$. Thus, $h(w)
\prec h(\bar a)$ and there is a $q\in Q$ such that $h(w) q \preceq
h(\bar a)$. Let $h'$ be the extension of $h$ to $A\cup\{d\}$
mapping $h'(d) = h(w)q$ and $h'(a) = h(a)$ for all $a\in A$.
\item Otherwise, there is no $a\in A$ with $d\preceq a$. Let again
$w\in A$ be maximal with $w\preceq d$ and let $q_d\in \mathbb{Q}$ such
that $w q_d\preceq d$. For later use we first establish that
\begin{equation}
\label{eq:wqdisAfree}
\text{there is no $a\in A$ with $w q_d\preceq a$.}
\end{equation}
Assuming the contrary, let $w q_d\preceq a$. Since $A\cup D$ is
closed under maximal common prefixes, we conclude that $w q_d
\preceq (a\sqcap d)\in A\cup D$. The case $(a\sqcap d)\in A$ contradicts
the maximality of $w$. Due to the $\KBleq$-minimality of $d$,
the case $(a\sqcap d) \in D\setminus A$ is only possible if $d=a\sqcap d$,
which implies $d\preceq a$ and thus contradicts our assumption on
$d$.
We define a partition of $\Set{a\in A | w\prec a}$ by setting
\begin{align*}
A^- &= \Set{a\in A | w\prec a \text{ and }a\KBleq d}\text{ and}\\
A^+ &= \Set{a\in A | w\prec a \text{ and } d\KBleq a}.
\end{align*}
If $A^-\neq \emptyset$ let $a^-$ be its $\KBleq$-maximal element.
Since $h$ preserves $\prec$, there is some $q^-\in \mathbb{Q}$ such
that $h(w)q^- \preceq h(a^-)$. If $A^- = \emptyset$ set
$q^-=-\infty$. Analogously, if $A^+\neq \emptyset$ let $a^+$ be
its $\KBleq$-minimal element. Since $h$ preserves $\prec$,
there is some $q^+\in \mathbb{Q}$ such that $h(w)q^+ \preceq h(a^+)$. If
$A^+ = \emptyset$ set $q^+ = \infty$.
If $a^-$ and $a^+$ are both defined, we conclude with
\eqref{eq:wqdisAfree} that there are $q_1 < q_d < q_2$ such that
$wq_1\preceq a^-$ and $wq_2 \preceq a^+$. Since $h$ is a
$\sigma$-injection, we directly conclude that $q^- < q^+$.
Choose $q\in (q^-, q^+)$ arbitrarily and define the map $h': A\cup
\set{d} \to \Tree$ by $h'(a) = h(a)$ for all $a\in A$ and $h'(d) =
h(w)q$.
We prepare the proof that $h'$ is a $\sigma$-injection by
establishing that
\begin{equation}
\label{eq:wqdisAfreeHom}
\text{for all $p\in (q^-,q^+)$ there is no $a\in A$ such that $h(w)p\preceq h(a)$.}
\end{equation}
Heading for a contradiction assume that there was such $a$ and
note that $h(a^-) \KBless h(a) \KBless h(a^+)$ and $h(w)\prec
h(a)$. This would imply $a^-\KBless a \KBless a^+$ and $w\prec
a$. But this clearly contradicts the definitions of $a^-$ and
$a^+$ as maximal below $d$ (minimal above $d$, respectively).
\end{enumerate}
We claim that the resulting map $h'$ is a $\sigma$-injection.
\noindent\textbf{Injectivity:}
Heading for a contradiction, assume that there is an $a\in A$ with
$h(a)=h(w)q$. Then $h(w) \prec h(a)$, which implies $w\prec a$. But
then either $a\preceq d$, which together with $w\prec a$ violates the choice of $w$, or
$d\preceq a$. In the latter case the third condition on $D$ implies
that there is no $b\in B$ with $a\preceq b$. But then $h(w)$ and
$h(a)$ need to have an $(n+1)$-gap which is not the case. Thus, we
have arrived at a contradiction and conclude that there is no $a\in
A$ with $h(a) = h(w)q$ whence $h'$ is injective.
\noindent\textbf{Preservation of $\preceq$:}
We show that $h'$ preserves $\preceq$ in both directions. Choose
some $a\in A$.
\begin{itemize}
\item If $a \preceq d$ then by choice of $w$ we have $a\preceq w$
whence $h'(a) = h(a) \preceq h(w) \prec h'(d)$.
\item If $h'(a) = h(a) \preceq h'(d) = h(w)q$, then $h(a) \preceq
h(w)$ because $h'$ is injective. Thus, $a\preceq w \prec d$ as
desired.
\item If $d\preceq a$ we are in case one of the definition of
$h'$. Thus, $\bar a \preceq a$ whence by definition $h'(d) \preceq
h(\bar a) \preceq h(a) = h'(a)$.
\item If $h'(d)=h(w)q \preceq h(a)$, we conclude with
\eqref{eq:wqdisAfreeHom} that we are in case one of the definition
of $h'$. Thus, $h(w)q\preceq h(\bar a) \preceq h(a)$ implies that
$h(w)q \preceq h(a)\sqcap h(\bar a) = h(a\sqcap \bar a)$. Since
$h$ is a $\sigma$-injection, it follows that $w\prec a\sqcap \bar
a \preceq \bar a$. Since $d\preceq \bar a$, we obtain that
$a\sqcap \bar a$ and $d$ are comparable. By maximality of $w$, we
conclude $d\preceq (a \sqcap \bar a) \preceq a$.
\end{itemize}
\noindent\textbf{Preservation of $\KBleq$:}
Due to the $\preceq$
preservation, it suffices to prove preservation of $\KBless \cap
\not\preceq$. Again choose some $a\in A$.
\begin{itemize}
\item Assume that $a\KBleq d$ and $a\not\preceq d$.
If $a\KBleq w$ we immediately conclude that $h'(a) = h(a) \KBleq
h(w) \KBleq h(w)q = h'(d)$.
Otherwise, one immediately concludes that $d\sqcap a = w$.
\begin{enumerate}
\item If $h'$ has been defined in case one, we immediately
conclude $a \sqcap \bar a = w$ and $a \KBless \bar a$ whence $
h(a) \sqcap h(\bar a) = h(a \sqcap \bar a) = h(w)$ and $h(a)
\KBless h(\bar a)$. Since $h(w) \prec h'(d) \preceq h(\bar a)$,
it follows that $h'(a) = h(a) \KBless h'(d)$.
\item Otherwise, $h'$ has been defined in the second case and we
conclude that $a\in A^-$ whence $a \KBleq a^-$. This implies
that
$h'(a) = h(a) \KBleq h(a^-) \KBleq h(w)q = h'(d)$.
\end{enumerate}
\item Assume that $d\KBleq a$ and $d\not\preceq a$.
First assume that $w\not\preceq a$. Then $d\sqcap a = w \sqcap a
\prec w$ whence $w\KBleq a$. Since $h$ is a $\sigma$-injection, we
obtain
$h(w) \KBleq h(a)$, and $h(w) \sqcap h(a) = h(w\sqcap a) \prec
h(w)$. Thus, $h(w) \preceq h'(d)$ directly implies
$h'(d) \KBleq h(a) = h'(a)$.
Otherwise, we have $w\preceq a$. Since $d\KBleq a$ we conclude
that $w\prec a$.
\begin{enumerate}
\item If $h'$ has been defined in case one,
$d\not\preceq a$, $w\prec a$ and maximality of $w$ imply that
$w = d \sqcap a = \bar a \sqcap a$.
Since $\bar a$ and $d$ are on a common path,
we also have $\bar a \KBleq a$.
Thus, $h(w) = h(\bar a \sqcap a) = h(\bar a) \sqcap h(a)$ and
$h(\bar a) \KBleq h(a)$. Since $h'(d)$ and $h(\bar a)$ are on a
common path, we obtain $h'(d) \KBleq h(a) = h'(a)$.
\item Otherwise, $h'$ has been defined in case two. Then $w\prec
a$ and $d \KBleq a$ imply $a^+ \KBleq a$. We conclude by choice
of $q$ that
$h'(d) = h(w)q \KBleq h'(a^+) \KBleq h(a)$.
\end{enumerate}
Since $\KBleq$ is a total order, the
backwards preservation of $\KBleq$ follows directly from the
forward preservation: assume $h'(x) \KBleq h'(y)$; then forward
preservation and injectivity rule out the case $y \KBless x$, whence
$x \KBleq y$ because $\KBleq$ is total.
\end{itemize}
\noindent\textbf{Preservation of $\sqcap$:}
Finally, note that $h'$ preserves $\sqcap$ in both directions. Let
$a\in A$. If $a$ and $d$ are comparable, the claim follows from the
preservation of $\preceq$. Otherwise, if $a$ and $d$ are
incomparable (with respect to $\preceq$), then we conclude $a\sqcap
d \in A$ whence $a\sqcap d = a \sqcap w$. But then also $h'(a)$ and
$h'(d)$ are incomparable whence $h'(a) \sqcap h'(d) \preceq h'(w)$
whence by definition of $h'(d)$ we have $h'(a) \sqcap h'(d) = h'(a)
\sqcap h'(w) = h(a) \sqcap h(w) = h( a \sqcap w) = h'(a\sqcap w) =
h'(a\sqcap d)$.
\qed
\end{proof}
\begin{lemma} \label{lem:BigGapsAllowAllTypes} Let $\bar w, \bar v$ be
$n$-tuples and let $t=(q,\pi,r), t_1 = (q,\pi_1,p), t_2 = (p, \pi_2, r)
\in \RunTypes$ be such that $t \in \set{t_1}\cdot\set{t_2}$,
$\typ_C(\bar w, \bar v) = \pi_1$, and
$\MCAT_C(\bar w, \bar v)$ has $2n$-gaps above
$C$. Then there is an $n$-tuple $\bar u$ such that
$\typ_C(\bar v, \bar u) = \pi_2$
and $\typ_C(\bar w, \bar u) = \pi$.
\end{lemma}
\begin{proof}
By definition of the product, there are $n$-tuples $\bar x, \bar y, \bar
z$ such that $\typ_C(\bar x, \bar y) = \pi_1$, $\typ_C(\bar y, \bar
z) = \pi_2$ and $\typ_C(\bar x, \bar z) = \pi$. Fix the
isomorphism
$h: \MCAT_C(\bar x, \bar y) \to \MCAT_C(\bar w, \bar v)$.
One shows by induction on $n$ that if $\MCAT_C(\bar x, \bar y)$ has $n_1\in\mathbb{N}$
many leaves and $n_2\in \mathbb{N}$ many inner nodes, then
$\MCAT_C(\bar x, \bar y, \bar z)$ has at most $n_1+n$ leaves and $n_2+n$
inner nodes, whence
$\lvert \MCAT_C(\bar x, \bar y, \bar z) \setminus
\MCAT_C(\bar x, \bar y) \rvert \leq 2n$.
Thus,
$h$ extends by
Lemma~\ref{lem:Auxiliary:BigGapsAllowAllTypes}
(setting $A = \MCAT_C(\bar x, \bar y)$, $B = C$, $D=\MCAT_C(\bar x,
\bar y, \bar z) \setminus \MCAT_C(\bar x, \bar y)$, and seeing $h$ as
an injection $A\to \Tree$)
to a
$\Set{\preceq, \KBleq, \sqcap}$-injection
$\hat h: \MCAT_C(\bar x, \bar y, \bar z) \to \Tree$ (which is the
identity on all constants from $C$) such that for
$\bar u = \hat h(\bar z)$, $\typ_C(\bar w, \bar v, \bar u) =
\typ_C(\bar x, \bar y, \bar z)$.
In particular, $\typ_C(\bar v, \bar u) = \pi_2$ and
$\typ_C(\bar w, \bar u) = \pi$ as desired.
\qed
\end{proof}
Now we are prepared to prove the last direction of Lemma
\ref{lem:TypesAndRunsConnection}.
\begin{lemma}
For every $t\in T_1^+$ there is a run $r$ of $\Aut{A}$ with
$\typ(r)=t$.
\end{lemma}
\begin{proof}
As remarked before, for $t\in T_1^1 = T_1$ there is nothing to
show. Let $t\in T_1^{n+1}$ and assume the claim is true for all
types in $T_1^n$.
Let $t\in \set{t_1} \cdot \set{t_2}$ with $t_1\in T_1^n$ and $t_2\in T_1$ and
let $r'$ be a run of type $t_1$.
Let $c_0=(q, \bar w)$ be the first and $c_1 = (p, \bar v)$ the last
configuration of $r'$.
By Lemma~\ref{lem:allRunsRealisedWithGaps}, we can assume that
$\MCAT_C(\bar w, \bar v)$ has $2n$-gaps above $C$.
Thus, by Lemma~\ref{lem:BigGapsAllowAllTypes}, there is a tuple $\bar
u$ and a state $q'$ such that
$(p, \typ_C(\bar v, \bar u), q') = t_2$ and $(q, \typ_C(\bar w, \bar
u), q') = t$. Thus, extending $r'$ by configuration $(q', \bar u)$
results in the desired run $r$.
\qed
\end{proof}
\end{document} |
\begin{document}
\title{{\LARGE\bf
Open and other kinds of extensions over zero-dimensional local compactifications}}
\begin{abstract}
{\footnotesize
\noindent Generalizing a theorem of Ph. Dwinger \cite{Dw}, we
describe the partially ordered set of all (up to equivalence)
zero-dimensional locally compact Hausdorff extensions of a
zero-dimensional Hausdorff space. Using this description, we find
the necessary and sufficient conditions which a map
between two zero-dimensional Hausdorff spaces has to satisfy in order to have
some kind of extension over arbitrary Hausdorff
zero-dimensional local compactifications of these spaces, given in advance; we
regard the following kinds of extensions: continuous, open,
quasi-open, skeletal, perfect, injective, surjective. In this way
we generalize some classical results of B. Banaschewski \cite{Ba}
about the maximal zero-dimensional Hausdorff compactification.
Extending a recent theorem of G. Bezhanishvili \cite{B}, we
describe the local proximities corresponding to the
zero-dimensional Hausdorff local compactifications.}
\end{abstract}
{\footnotesize {\em MSC:} primary 54C20, 54D35; secondary 54C10,
54D45, 54E05.
{\em Keywords:} Locally compact (compact) Hausdorff
zero-dimensional extensions; Banaschewski compactification;
Zero-dimensional local proximities; Local Boolean algebra;
Admissible ZLB-algebra; (Quasi-)Open extensions; Perfect
extensions; Skeletal extensions.}
\footnotetext[1]{{\footnotesize {\em E-mail address:}
gdimov@fmi.uni-sofia.bg}}
\baselineskip = \normalbaselineskip
\section*{Introduction}
In \cite{Ba}, B. Banaschewski proved that every zero-dimensional
Hausdorff space $X$ has a zero-dimensional Hausdorff
compactification $\beta_0X$ with the following remarkable property:
every continuous map $f:X\lra Y$, where $Y$ is a zero-dimensional
Hausdorff compact space, can be extended to a continuous map
$\beta_0f:\beta_0X\lra Y$; in particular, $\beta_0X$ is the maximal
zero-dimensional Hausdorff compactification of $X$. As far as I
know, there are no descriptions of the maps $f$ for which the
extension $\beta_0f$ is open or quasi-open. In this paper we solve
the following more general problem: let $f:X\lra Y$ be a map
between two zero-dimensional Hausdorff spaces and $(lX,l_X)$,
$(lY,l_Y)$ be Hausdorff zero-dimensional locally compact
extensions of $X$ and $Y$, respectively; find the necessary and
sufficient conditions which the map $f$ has to satisfy in order to
have an $``$extension" $g:lX\lra lY$ (i.e. $g\circ l_X=l_Y\circ
f$) which is a map with some special properties (we regard the
following properties: continuous, open, perfect, quasi-open,
skeletal, injective, surjective). In \cite{LE2}, S. Leader solved
such a problem for continuous extensions over Hausdorff local
compactifications (= locally compact extensions)
using the language of {\em local proximities} (the latter,
as he showed, are in a bijective correspondence (preserving the
order) with the Hausdorff local compactifications regarded up to
equivalence). Hence, if one can describe the local proximities
which correspond to zero-dimensional Hausdorff local
compactifications then the above problem will be solved for
continuous extensions. Recently, G. Bezhanishvili \cite{B},
solving an old problem of L. Esakia, described the {\em
Efremovi\v{c} proximities} which correspond (in the sense of the
famous Smirnov Compactification Theorem \cite{Sm2}) to the
zero-dimensional Hausdorff compactifications
(and called them {\em zero-dimensional Efremovi\v{c} proximities}).
We extend here his result to Leader's local proximities,
i.e. we describe the local proximities which correspond to the
Hausdorff zero-dimensional local compactifications and call them
{\em zero-dimensional local proximities} (see Theorem
\ref{zdlpth}). We do not use, however, these zero-dimensional
local proximities for solving our problem. We introduce a simpler
notion (namely, the {\em admissible ZLB-algebra}) for this purpose.
Ph. Dwinger \cite{Dw} proved, using the Stone Duality Theorem
\cite{ST}, that the ordered set of all, up to equivalence,
zero-dimensional Hausdorff compactifications of a zero-dimensional
Hausdorff space $X$ is isomorphic to the set, ordered by inclusion, of
all {\em Boolean bases} of $X$ (i.e. of those bases of $X$ which
are Boolean subalgebras of the Boolean algebra $CO(X)$ of all
clopen (= closed and open) subsets of $X$). This description is
much simpler than that by Efremovi\v{c} proximities. It was
rediscovered by K. D. Magill Jr. and J. A. Glasenapp \cite{MG} and
applied very successfully to the study of the poset of all, up to
equivalence, zero-dimensional Hausdorff compactifications of a
zero-dimensional Hausdorff space. We extend the above-cited
Dwinger Theorem \cite{Dw} to the zero-dimensional Hausdorff {\em
local compactifications} (see Theorem \ref{dwingerlc} below) with
the help of our generalization of the Stone Duality Theorem proved
in \cite{Di4} and the notion of $``$admissible ZLB-algebra" which
we introduce here. We obtain the solution of the problem
formulated above in the language of the admissible ZLB-algebras
(see Theorem \ref{zdextcmain}). As a corollary, we characterize
the maps $f:X\longrightarrow Y$ between two Hausdorff zero-dimensional spaces
$X$ and $Y$ for which the extension $\beta_0f:\beta_0X\longrightarrow\beta_0Y$ is open
or quasi-open (see Corollary \ref{zdextcmaincb}). Of course, one
can pass from admissible ZLB-algebras to zero-dimensional local
proximities and conversely (see Theorem \ref{ailp} below; it
generalizes
an analogous result about the connection between Boolean bases and
zero-dimensional Efremovi\v{c} proximities
obtained in \cite{B}).
We now fix the notations.
If ${\cal C}$ denotes a category, we write
$X\in \card{\cal C}$ if $X$ is
an object of ${\cal C}$, and $f\in {\cal C}(X,Y)$ if $f$ is a morphism of
${\cal C}$ with domain $X$ and codomain $Y$. By $Id_{{\cal C}}$ we denote the identity functor on the category ${\cal C}$.
All lattices are with top (= unit) and bottom (= zero) elements,
denoted respectively by 1 and 0. We do not require the elements
$0$ and $1$ to be distinct. Since we follow Johnstone's
terminology from \cite{J}, we will use the term {\em
pseudolattice} for a poset having all finite non-empty meets and
joins; the pseudolattices with a bottom will be called {\em
$\{0\}$-pseudolattices}. If $B$ is a Boolean algebra then we
denote by $Ult(B)$ the set of all ultrafilters in $B$.
If $X$ is a set then we denote the power set of $X$ by $P(X)$; the
identity function on $X$ is denoted by $id_X$.
If
$(X,\tau)$ is a topological space and $M$ is a subset of $X$, we
denote by $\mbox{{\rm cl}}_{(X,\tau)}(M)$ (or simply by $\mbox{{\rm cl}}(M)$ or
$\mbox{{\rm cl}}_X(M)$) the closure of $M$ in $(X,\tau)$ and by
$\mbox{{\rm int}}_{(X,\tau)}(M)$ (or briefly by $\mbox{{\rm int}}(M)$ or $\mbox{{\rm int}}_X(M)$) the
interior of $M$ in $(X,\tau)$.
The closed maps and the open maps between topological spaces are
assumed to be continuous but are not assumed to be onto. Recall
that a map is {\em perfect}\/ if it is closed and compact (i.e.
point inverses are compact sets).
For all notions and notations not defined here see \cite{Dw, E2, J, NW}.
\section{Preliminaries}
We will need some of our results from \cite{Di4} concerning the
extension of the Stone Duality Theorem to the category ${\bf ZLC}$ of
all locally compact zero-dimensional Hausdorff spaces and all
continuous maps between them.
Recall that if $(A,\le)$ is a poset and $B\subseteq A$ then $B$ is said
to be a {\em dense subset of} $A$ if for any $a\in A\setminus\{0\}$
there exists $b\in B\setminus\{0\}$ such that $b\le a$.
\begin{defi}\label{deflba}{\rm (\cite{Di4})}
\rm A pair $(A,I)$, where $A$ is a Boolean algebra and $I$ is an
ideal of $A$ (possibly non-proper) which is dense in $A$, is called a {\em local Boolean algebra} (abbreviated
as LBA). Two LBAs $(A,I)$ and $(B,J)$ are said to be {\em
LBA-isomorphic} (or, simply, {\em isomorphic}) if there exists a
Boolean isomorphism $\varphi:A\longrightarrow B$ such that $\varphi(I)=J$.
\end{defi}
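A simple example: if $X$ is an infinite set, then the pair
$$(P(X),\,\{M\subseteq X\mid M \mbox{ is finite}\})$$
is an LBA, since the ideal of finite subsets is dense in the Boolean
algebra $P(X)$ (every non-empty subset of $X$ contains a non-empty
finite subset).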
Let $A$ be a distributive $\{0\}$-pseudolattice and $Idl(A)$ be
the frame of all ideals of $A$. If $J\in Idl(A)$ then we will
write $\neg_A J$ (or simply $\neg J$) for the pseudocomplement of
$J$ in $Idl(A)$ (i.e. $\neg J=\bigvee\{I\in Idl(A)\mid I\wedge
J=\{0\}\}$). Recall that an ideal $J$ of $A$ is called {\em
simple} (Stone \cite{ST}) if $J\vee\neg J= A$ (i.e. $J$ has a
complement in $Idl(A)$). As proved in \cite{ST}, the set
$Si(A)$ of all simple ideals of $A$ is a Boolean algebra with
respect to the lattice operations in $Idl(A)$.
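For example, in a Boolean algebra $A$ every principal ideal
$\downarrow a=\{x\in A\mid x\le a\}$ is simple: if $a^*$ denotes the
complement of $a$ in $A$, then $\neg(\downarrow a)=\,\downarrow(a^*)$
and $\downarrow a\vee\downarrow(a^*)=A$, since every $x\in A$ can be
written as $(x\wedge a)\vee(x\wedge a^*)$.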
\begin{defi}\label{defzlba}{\rm (\cite{Di4})}
\rm An LBA $(B, I)$ is called a {\em ZLB-algebra} (briefly, {\em
ZLBA}) if, for every $J\in Si(I)$, the join $\bigvee_B J$ ($=\bigvee_B
\{a\mid a\in J\}$) exists.
Let ${\bf ZLBA}$ be the category whose objects are all ZLBAs and whose
morphisms are all functions $\varphi:(B, I)\longrightarrow(B_1, I_1)$ between the
objects of ${\bf ZLBA}$ such that $\varphi:B\longrightarrow B_1$ is a Boolean
homomorphism satisfying the following condition:
\smallskip
\noindent(ZLBA) For every $b\in I_1$ there exists $a\in I$ such
that $b\le \varphi(a)$;
\smallskip
\noindent let the composition between the morphisms of ${\bf ZLBA}$ be
the usual composition between functions, and the
${\bf ZLBA}$-identities be the identity functions.
\end{defi}
\begin{exa}\label{zlbaexa}{\rm (\cite{Di4})}
\rm Let $B$ be a Boolean algebra. Then the pair $(B,B)$ is a ZLBA.
\end{exa}
\begin{notas}\label{kxckx}
\rm Let $X$ be a topological space. We will denote by $CO(X)$ the
set of all clopen (= closed and open) subsets of $X$,
and by $CK(X)$ the set of all clopen compact subsets of $X$. For
every $x\in X$, we set
$u_x^{CO(X)}=\{F\in CO(X)\mid x\in F\}.$
When there is no ambiguity, we will write $``u_x^C$" instead of
$``u_x^{CO(X)}$".
\end{notas}
The next assertion follows from the results obtained in
\cite{R2,Di4}.
\begin{pro}\label{psiult}
Let $(A,I)$ be a ZLBA. Set $X=\{u\in Ult(A)\mid u\cap I\neq\emptyset\}$.
Set, for every $a\in A$, $\lambda_A^C(a)=\{u\in X\mid a\in u\}$. Let
$\tau$ be the topology on $X$ having as an open base the family
$\{\lambda_A^C(a)\mid a\in I\}$. Then $(X,\tau)$ is a zero-dimensional
locally compact Hausdorff space, $\lambda_A^C(A)= CO(X)$,
$\lambda_A^C(I)=CK(X)$ and $\lambda_A^C:A\longrightarrow CO(X)$ is a Boolean
isomorphism; hence, $\lambda_A^C:(A,I)\longrightarrow (CO(X),CK(X))$ is a
${\bf ZLBA}$-isomorphism. We set $\Theta^a(A,I)=(X,\tau)$.
\end{pro}
\begin{theorem}\label{genstonec}{\rm (\cite{Di4})}
The category\/ ${\bf ZLC}$ is dually equivalent to the category\/
${\bf ZLBA}$.
In more detail, let $\Theta^a:{\bf ZLBA}\longrightarrow{\bf ZLC}$ and
$\Theta^t:{\bf ZLC}\longrightarrow{\bf ZLBA}$ be two contravariant functors defined as
follows: for every $X\in\card{{\bf ZLC}}$, we set $\Theta^t(X)=(CO(X), CK(X))$,
and for every $f\in{\bf ZLC}(X,Y)$, $\Theta^t(f):\Theta^t(Y)\longrightarrow\Theta^t(X)$ is
defined by the formula $\Theta^t(f)(G)=f^{-1}(G)$, where $G\in CO(Y)$;
for the definition of $\Theta^a(B, I)$, where $(B, I)$ is a ZLBA, see
\ref{psiult};
for every $\varphi\in{\bf ZLBA}((B, I),(B_1, J))$,
$\Theta^a(\varphi):\Theta^a(B_1, J)\longrightarrow\Theta^a(B, I)$ is given by the formula
$\Theta^a(\varphi)(u')=\varphi^{-1}(u')$, $\forall u'\in\Theta^a(B_1,J)$; then
$t^{C}:Id_{{\bf ZLC}}\longrightarrow \Theta^a\circ \Theta^t$, where $t^{C}(X)=t_X^C$,
$\forall X\in\card{\bf ZLC}$, and $t_X^C(x)=u_x^C$ for every $x\in X$, is a
natural isomorphism (hence, in particular,
$t_X^C:X\longrightarrow\Theta^a(\Theta^t(X))$ is a homeomorphism for every
$X\in\card{{\bf ZLC}}$); also, $\lambda^C: Id_{{\bf ZLBA}}\longrightarrow \Theta^t\circ \Theta^a$,
where $\lambda^C(B, I)=\lambda_B^C$, $\forall (B, I)\in\card{\bf ZLBA}$, is a
natural isomorphism.
\end{theorem}
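To illustrate the duality on a simple object, let
$(B,I)=(P(\mathbb{N}),\{M\subseteq\mathbb{N}\mid M \mbox{ is finite}\})$.
An ultrafilter $u$ in $P(\mathbb{N})$ meets $I$ iff $u$ contains a
finite set, i.e. iff $u$ is the principal ultrafilter determined by
some $n\in\mathbb{N}$; hence $\Theta^a(B,I)$ is (homeomorphic to) the
discrete space $\mathbb{N}$, while $\Theta^a(B,B)$ is the Stone space of
$P(\mathbb{N})$, i.e. $\beta\mathbb{N}$.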
Finally, we will recall some definitions and facts from the theory
of extensions of topological spaces, as well as Leader's fundamental
Local Compactification Theorem \cite{LE2}.
Let $X$ be a Tychonoff space. We will denote by ${\cal L}(X)$ the
set of all, up to equivalence, locally compact Hausdorff
extensions of $X$ (recall that two (locally compact Hausdorff)
extensions $(Y_1,f_1)$ and $(Y_2,f_2)$ of $X$ are said to be {\em
equivalent}\/ iff there exists a homeomorphism $h:Y_1\longrightarrow Y_2$
such that $h\circ f_1=f_2$). Let $[(Y_i,f_i)]\in{\cal L}(X)$, where
$i=1,2$. We set $[(Y_1,f_1)]\le [(Y_2,f_2)]$ if there exists a
continuous mapping $h:Y_2\longrightarrow Y_1$ such that $f_1=h\circ f_2$.
Then $({\cal L}(X),\le)$ is a poset (= partially ordered set).
Let $X$ be a Tychonoff space. We will denote by ${\cal K}(X)$ the
set of all, up to equivalence, Hausdorff compactifications of
$X$.
\begin{nist}\label{nilea}
\rm Recall that if $X$ is a set and $P(X)$ is the power set of $X$
ordered by inclusion, then a triple $(X,\delta,{\cal B})$ is called
a {\em local proximity space} (see \cite{LE2}) if ${\cal B}$ is an ideal (possibly non-proper) of $P(X)$ and
$\delta$ is
a symmetric binary relation
on $P(X)$ satisfying the following conditions:
\smallskip
\noindent(P1) $\emptyset(-\delta) A$ for every $A\subseteq X$ ($``-\delta$" means $``$not $\delta$");
\smallskip
\noindent(P2) $A\delta A$ for every $A\not =\emptyset$;
\smallskip
\noindent(P3) $A\delta(B\cup C)$ iff $A\delta B$ or $A\delta C$;
\smallskip
\noindent(BC1) If $A\in {\cal B}$, $C\subseteq X$ and $A\ll C$ (where, for $D,E\subseteq X$, $D\ll E$
iff $D(-\delta) (X\setminus E)$)
then there exists a $B\in{\cal B}$
such that $A\ll B\ll C$;
\smallskip
\noindent(BC2) If $A\delta C$, then there is a $B\in{\cal B}$ such that $B\subseteq C$ and $A\delta B$.
\smallskip
\noindent A local proximity space $(X,\delta,{\cal B})$ is said to be {\em
separated} if $\delta$ is the identity relation on singletons. Recall
that every separated local proximity space $(X,\delta,{\cal B})$ induces a
Tychonoff topology $\tau_{(X,\delta,{\cal B})}$ on $X$ by defining
$\mbox{{\rm cl}}(M)=\{x\in X\mid x\delta M\}$ for every $M\subseteq X$ (\cite{LE2}). If
$(X,\tau)$ is a topological space then we say that $(X,\delta,{\cal B})$ is
a {\em local proximity space on} $(X,\tau)$ if
$\tau_{(X,\delta,{\cal B})}=\tau$.
The set of all separated local proximity spaces on a Tychonoff
space $(X,\tau)$ will be denoted by ${\cal L}{\cal P}(X,\tau)$. An order on
${\cal L}{\cal P}(X,\tau)$ is defined by $(X,\beta_1,{\cal B}_1)\preceq
(X,\beta_2,{\cal B}_2)$ if $\beta_2\subseteq\beta_1$ and ${\cal B}_2\subseteq{\cal B}_1$ (see
\cite{LE2}).
A function $f:X_1\longrightarrow X_2$ between two local proximity spaces
$(X_1,\beta_1,{\cal B}_1)$
and
$(X_2,\beta_2,{\cal B}_2)$
is said to be an {\em equicontinuous mapping\/} (see \cite{LE2}) if
the following two conditions are fulfilled:
\smallskip
\noindent(EQ1) $A\beta_1 B$ implies $f(A)\beta_2 f(B)$, for $A,B\subseteq X_1$, and
\smallskip
\noindent(EQ2) $B\in{\cal B}_1$ implies $f(B)\in{\cal B}_2$.
\end{nist}
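The basic example is the following one: if $(Y,\tau)$ is a locally
compact Hausdorff space, $A\delta B\iff\mbox{{\rm cl}}_Y(A)\cap\mbox{{\rm cl}}_Y(B)\neq\emptyset$
and ${\cal B}=\{B\subseteq Y\mid \mbox{{\rm cl}}_Y(B)$ is compact$\}$, then
$(Y,\delta,{\cal B})$ is a separated local proximity space on $(Y,\tau)$;
it is the one which corresponds, in the sense of Theorem \ref{Leader},
to the trivial locally compact extension $(Y,id_Y)$ of $Y$.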
\begin{theorem}\label{Leader} {\rm (S. Leader \cite{LE2})}
Let $(X,\tau)$ be a Tychonoff space. Then there exists an
isomorphism $\Lambda_X$ between the ordered sets $({\cal L}(X,\tau),\le)$
and $({\cal L}{\cal P}(X,\tau),\preceq)$. In more detail, for every $(X,
\rho, {\cal B})\in{\cal L}{\cal P}(X,\tau)$ there exists a locally compact
Hausdorff extension $(Y,f)$ of $X$ satisfying the following two
conditions:
\smallskip
\noindent(a) $A \rho B$ iff $\mbox{{\rm cl}}_Y(f(A))\cap \mbox{{\rm cl}}_Y(f(B))\neq\emptyset$;
\smallskip
\noindent(b) $B\in{\cal B}$ iff $\mbox{{\rm cl}}_Y(f(B))$ is compact.
\smallskip
\noindent Such a local compactification is unique up to
equivalence; we set $(Y,f)=L(X,\rho,{\cal B})$ and
$(\Lambda_X)^{-1}(X,\rho,{\cal B})=[(Y,f)]$. The space $Y$ is compact iff
$X\in{\cal B}$. Conversely, if $(Y,f)$ is a locally compact Hausdorff
extension of $X$ and $\rho$ and ${\cal B}$ are defined by (a) and (b),
then $(X, \rho, {\cal B})$ is a separated local proximity space, and we
set $\Lambda_X([(Y,f)])=(X,\rho,{\cal B})$.
Let $(X_i,\beta_i,{\cal B}_i)$, $i=1,2$, be two separated local proximity spaces and
$f:X_1\longrightarrow X_2$ be a function. Let $(Y_i,f_i)=L(X_i,\beta_i,{\cal B}_i)$, where $i=1,2$.
Then there exists a continuous map
$L(f):Y_1\longrightarrow Y_2$
such that $f_2\circ f= L(f)\circ f_1$ iff $f$ is an
equicontinuous map between
$(X_1,\beta_1,{\cal B}_1)$ and $(X_2,\beta_2,{\cal B}_2)$.
\end{theorem}
Recall that a subset $F$ of a topological space $(X,\tau)$ is
called {\em regular closed}\/ if $F=\mbox{{\rm cl}}(\mbox{{\rm int}} (F))$. Clearly, $F$
is regular closed iff it is the closure of an open set. For any
topological space $(X,\tau)$, the collection $RC(X,\tau)$ (we will
often write simply $RC(X)$) of all regular closed subsets of
$(X,\tau)$ becomes a complete Boolean algebra
$(RC(X,\tau),0,1,\wedge,\vee,{}^*)$ under the following operations: $
1 = X, 0 = \emptyset, F^* = \mbox{{\rm cl}}(X\setminus F), F\vee G=F\cup G, F\wedge G
=\mbox{{\rm cl}}(\mbox{{\rm int}}(F\cap G)). $ The infinite operations are given by the
following formulas: $\bigvee\{F_\gamma\mid
\gamma\in\Gamma\}=\mbox{{\rm cl}}(\bigcup\{F_\gamma\mid \gamma\in\Gamma\})$ and
$\bigwedge\{F_\gamma\mid \gamma\in\Gamma\}=\mbox{{\rm cl}}(\mbox{{\rm int}}(\bigcap\{F_\gamma\mid
\gamma\in\Gamma\})).$ We denote by $CR(X,\tau)$ the family of all compact
regular closed subsets of $(X,\tau)$. We will often write $CR(X)$
instead of $CR(X,\tau)$.
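For instance, in the real line the sets $F=[0,1]$ and $G=[1,2]$ are
regular closed, and
$$F\vee G=[0,2],\qquad
F\wedge G=\mbox{{\rm cl}}(\mbox{{\rm int}}([0,1]\cap[1,2]))=\mbox{{\rm cl}}(\mbox{{\rm int}}(\{1\}))=\emptyset,\qquad
F^*=\mbox{{\rm cl}}(\mathbb{R}\setminus[0,1])=(-\infty,0]\cup[1,+\infty),$$
which shows that the meet in $RC(X)$ differs, in general, from the
set-theoretic intersection.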
We will need a lemma from \cite{CNG}:
\begin{lm}\label{isombool}
Let $X$ be a dense subspace of a topological space $Y$. Then the
functions $r:RC(Y)\longrightarrow RC(X)$, $F\mapsto F\cap X$, and
$e:RC(X)\longrightarrow RC(Y)$, $G\mapsto \mbox{{\rm cl}}_Y(G)$, are Boolean isomorphisms
between the Boolean algebras $RC(X)$ and $RC(Y)$, and $e\circ
r=id_{RC(Y)}$, $r\circ e=id_{RC(X)}$.
\end{lm}
\section{A Generalization of the Dwinger Theorem}
\begin{defi}\label{admzlba}
\rm Let $X$ be a zero-dimensional Hausdorff space. Then:
\smallskip
\noindent(a) A ZLBA $(A,I)$ is called {\em admissible for} $X$ if
$A$ is a Boolean subalgebra of the Boolean algebra $CO(X)$ and $I$
is an open base of $X$.
\smallskip
\noindent(b) The set of all ZLBAs admissible for $X$ is denoted by
${\cal Z}{\cal A}(X)$.
\smallskip
\noindent(c) If $(A_1,I_1),(A_2,I_2)\in {\cal Z}{\cal A}(X)$ then we set
$(A_1,I_1)\preceq_0(A_2,I_2)$ if $A_1$ is a Boolean subalgebra of
$A_2$ and for every $V\in I_2$ there exists $U\in I_1$ such that
$V\subseteq U$.
\end{defi}
\begin{nota}\label{lz}
\rm The set of all (up to equivalence) zero-dimensional locally
compact Hausdorff extensions of a zero-dimensional Hausdorff space
$X$ will be denoted by ${\cal L}_0(X)$.
\end{nota}
\begin{theorem}\label{dwingerlc}
Let $X$ be a zero-dimensional Hausdorff space. Then the ordered
sets $({\cal L}_0(X),\le)$ and $({\cal Z}{\cal A}(X),\preceq_0)$ are isomorphic;
moreover, the zero-dimensional compact Hausdorff extensions of $X$
correspond to the ZLBAs of the form $(A,A)$.
\end{theorem}
\noindent{\em Proof.}~Let $(Y,f)$ be a locally compact Hausdorff zero-dimensional
extension of $X$. Set
\begin{equation}\label{01}
A_{(Y,f)}=f^{-1}(CO(Y)) \mbox{ and }
I_{(Y,f)}=f^{-1}(CK(Y)).
\end{equation}
Note that $A_{(Y,f)}=\{F\in CO(X)\mid
\mbox{{\rm cl}}_Y(f(F))$ is open in $Y\}$ and $I_{(Y,f)}=\{F\in A_{(Y,f)}\mid
\mbox{{\rm cl}}_Y(f(F))$ is compact$\}$. We will show that
$(A_{(Y,f)},I_{(Y,f)})\in{\cal Z}{\cal A}(X)$. Obviously, the map
$r_{(Y,f)}^0:(CO(Y),CK(Y))\longrightarrow(A_{(Y,f)},I_{(Y,f)}), \ G\mapsto
f^{-1}(G),$
is a Boolean isomorphism such that
$r_{(Y,f)}^0(CK(Y))=I_{(Y,f)}$. Hence $(A_{(Y,f)},I_{(Y,f)})$ is a
ZLBA and $r_{(Y,f)}^0$ is an LBA-isomorphism. It is easy to see
that $I_{(Y,f)}$ is a base of $X$ (because $Y$ is locally
compact). Hence $(A_{(Y,f)},I_{(Y,f)})\in{\cal Z}{\cal A}(X)$.
It is
clear that if $(Y_1,f_1)$ is a locally compact Hausdorff
zero-dimensional extension of $X$ equivalent to the extension
$(Y,f)$, then
$(A_{(Y,f)},I_{(Y,f)})=(A_{(Y_1,f_1)},I_{(Y_1,f_1)})$. Therefore,
a map
\begin{equation}\label{dw1}
\alpha_X^0:{\cal L}_0(X)\longrightarrow{\cal Z}{\cal A}(X), \
[(Y,f)]\mapsto(A_{(Y,f)},I_{(Y,f)}),
\end{equation}
is well-defined.
Let $(A,I)\in{\cal Z}{\cal A}(X)$ and
$Y=\Theta^a(A,I)$. Then $Y$ is a locally compact Hausdorff
zero-dimensional space. For every $x\in X$, set
\begin{equation}\label{02}
u_{x,A}=\{F\in A\mid x\in F\}.
\end{equation}
Since $I$ is a base of $X$, we get that $u_{x,A}$
is an ultrafilter in $A$ and $u_{x,A}\cap I\neq\emptyset$, i.e. $u_{x,A}\in
Y$. Define
\begin{equation}\label{0f}
f_{(A,I)}:X\longrightarrow Y, \ x\mapsto u_{x,A}.
\end{equation}
Set, for
short, $f=f_{(A,I)}$. Obviously, $\mbox{{\rm cl}}_Y(f(X))=Y$. It is easy to
see that $f$ is a homeomorphic embedding. Hence $(Y,f)$ is a
locally compact Hausdorff zero-dimensional extension of $X$. We
now set:
\begin{equation}\label{dw2}
\beta_X^0:{\cal Z}{\cal A}(X)\longrightarrow {\cal L}_0(X), \ (A,I)\mapsto
[(\Theta^a(A,I),f_{(A,I)})].
\end{equation}
We will show that $\alpha_X^0\circ\beta_X^0=id_{{\cal Z}{\cal A}(X)}$ and
$\beta_X^0\circ\alpha_X^0=id_{{\cal L}_0(X)}$.
Let $[(Y,f)]\in{\cal L}_0(X)$. Set, for short, $A=A_{(Y,f)}$,
$I=I_{(Y,f)}$, $g=f_{(A,I)}$, $Z=\Theta^a(A,I)$ and $\varphi=r_{(Y,f)}^0$.
Then $\beta_X^0(\alpha_X^0([(Y,f)]))=\beta_X^0(A,I)= [(Z,g)]$. We have to show
that $[(Y,f)]=[(Z,g)]$. Since $\varphi$ is an LBA-isomorphism, we get
that $h=\Theta^a(\varphi):Z\longrightarrow\Theta^a(\Theta^t(Y))$ is a homeomorphism. Set
$Y'=\Theta^a(\Theta^t(Y))$. By Theorem \ref{genstonec}, the map
$t_Y^C:Y\longrightarrow Y', \ y\mapsto u_y^{CO(Y)}$, is a homeomorphism. Let
$h'=(t_Y^C)^{-1}\circ h$. Then $h':Z\longrightarrow Y$ is a homeomorphism.
We will prove that $h'\circ g=f$ and this will imply that
$[(Y,f)]=[(Z,g)]$. Let $x\in X$. Then
$h'(g(x))=h'(u_{x,A})=(t_Y^C)^{-1}(h(u_{x,A}))=(t_Y^C)^{-1}(\varphi^{-1}(u_{x,A}))$.
We have that $u_{x,A}=\{f^{-1}(F)\mid F\in CO(Y), x\in
f^{-1}(F)\}=\{\varphi(F)\mid F\in CO(Y), x\in f^{-1}(F)\}$. Thus
$\varphi^{-1}(u_{x,A})=\{F\in CO(Y)\mid f(x)\in F\}=u_{f(x)}^{CO(Y)}$. Hence
$(t_Y^C)^{-1}(\varphi^{-1}(u_{x,A}))=f(x)$. So, $h'\circ g=f$. Therefore,
$\beta_X^0\circ\alpha_X^0=id_{{\cal L}_0(X)}$.
Let $(A,I)\in{\cal Z}{\cal A}(X)$ and $Y=\Theta^a(A,I)$. Set $f=f_{(A,I)}$,
$B=A_{(Y,f)}$ and $J=I_{(Y,f)}$. Then
$\alpha_X^0(\beta_X^0(A,I))=(B,J)$.
By Theorem \ref{genstonec}, we have that
$\lambda_A^C:(A,I)\longrightarrow(CO(Y),CK(Y))$ is an LBA-isomorphism. Hence
$\lambda_A^C(A)=CO(Y)$ and $\lambda_A^C(I)=CK(Y)$. We will show that
$f^{-1}(\lambda_A^C(F))=F$, for every $F\in A$. Recall that
$\lambda_A^C(F)=\{u\in Y\mid F\in u\}$. Now we have that if $F\in A$
then $f^{-1}(\lambda_A^C(F))=\{x\in X\mid f(x)\in\lambda_A^C(F)\}=\{x\in X\mid
u_{x,A}\in\lambda_A^C(F)\}=\{x\in X\mid F\in u_{x,A}\}=\{x\in X\mid x\in
F\}=F$. Thus
\begin{equation}\label{05}
B=f^{-1}(CO(Y))=A \mbox{ and } J=f^{-1}(CK(Y))=I.
\end{equation}
Therefore,
$\alpha_X^0\circ\beta_X^0=id_{{\cal Z}{\cal A}(X)}$.
We will now prove that $\alpha_X^0$ and $\beta_X^0$ are monotone maps.
Let $[(Y_i,f_i)]\in{\cal L}_0(X)$, where $i=1,2$, and
$[(Y_1,f_1)]\le[(Y_2,f_2)]$. Then there exists a continuous map
$g:Y_2\longrightarrow Y_1$ such that $g\circ f_2=f_1$. Set
$A_i=A_{(Y_i,f_i)}$ and $I_i=I_{(Y_i,f_i)}$, $i=1,2$. Then
$\alpha_X^0([(Y_i,f_i)])=(A_i,I_i)$, where $i=1,2$. We have to show
that $A_1\subseteq A_2$ and that for every $V\in I_2$ there exists $U\in
I_1$ such that $V\subseteq U$. Let $F\in A_1$. Then
$F'=\mbox{{\rm cl}}_{Y_1}(f_1(F))\in CO(Y_1)$ and, hence, $G'=g^{-1}(F')\in
CO(Y_2)$. Thus $(f_2)^{-1}(G')\in A_2$. Since
$(f_2)^{-1}(G')=(f_2)^{-1}(g^{-1}(F'))=(f_2)^{-1}(g^{-1}(\mbox{{\rm cl}}_{Y_1}(f_1(F))))=(f_1)^{-1}(\mbox{{\rm cl}}_{Y_1}(f_1(F)))=F$,
we get that $F\in A_2$. Therefore, $A_1\subseteq A_2$. Further, let
$V\in I_2$. Then $V'=\mbox{{\rm cl}}_{Y_2}(f_2(V))\in CK(Y_2)$. Thus $g(V')$ is
a compact subset of $Y_1$. Hence there exists $U\in I_1$ such that
$g(V')\subseteq \mbox{{\rm cl}}_{Y_1}(f_1(U))$. Then $V\subseteq
(f_2)^{-1}(g^{-1}(g(\mbox{{\rm cl}}_{Y_2}(f_2(V)))))=(f_1)^{-1}(g(V'))\subseteq
(f_1)^{-1}(\mbox{{\rm cl}}_{Y_1}(f_1(U)))=U$. So,
$\alpha_X^0([(Y_1,f_1)])\preceq_0\alpha_X^0([(Y_2,f_2)])$. Hence, $\alpha_X^0$
is a monotone function.
Let now $(A_i,I_i)\in{\cal Z}{\cal A}(X)$, where $i=1,2$, and
$(A_1,I_1)\preceq_0(A_2,I_2)$. Set, for short,
$Y_i=\Theta^a(A_i,I_i)$ and $f_i=f_{(A_i,I_i)}$, $i=1,2$. Then
$\beta_X^0(A_i,I_i)=[(Y_i,f_i)]$, $i=1,2$. We will show that
$[(Y_1,f_1)]\le[(Y_2,f_2)]$. We have that, for $i=1,2$, $f_i:X\longrightarrow
Y_i$ is defined by $f_i(x)=u_{x,A_i}$, for every $x\in X$. We also
have that $A_1\subseteq A_2$ and that for every $V\in I_2$ there exists
$U\in I_1$ such that $V\subseteq U$. Let us regard the function
$\varphi:(A_1,I_1)\longrightarrow(A_2,I_2), \ F\mapsto F.$
Obviously, $\varphi$ is a ${\bf ZLBA}$-morphism. Then $g=\Theta^a(\varphi):Y_2\longrightarrow
Y_1$ is a continuous map. We will prove that $g\circ f_2=f_1$,
i.e. that $g(u_{x,A_2})=u_{x,A_1}$ for every $x\in X$. So, let
$x\in X$. We have that $u_{x,A_2}=\{F\in A_2\mid x\in F\}$ and
$g(u_{x,A_2})=\varphi^{-1}(u_{x,A_2})$. Clearly,
$\varphi^{-1}(u_{x,A_2})=\{F\in A_1\cap A_2\mid x\in F\}$. Since $A_1\subseteq
A_2$, we get that $\varphi^{-1}(u_{x,A_2})=\{F\in A_1\mid x\in
F\}=u_{x,A_1}$. So, $g\circ f_2=f_1$. Thus
$[(Y_1,f_1)]\le[(Y_2,f_2)]$. Therefore, $\beta_X^0$ is also a
monotone function. Since $\beta_X^0=(\alpha_X^0)^{-1}$, we get that
$\alpha_X^0$ (as well as $\beta_X^0$) is an isomorphism. \hfill$\square$
\begin{defi}\label{admba}
\rm Let $X$ be a zero-dimensional Hausdorff space.
A Boolean algebra $A$ is called {\em admissible for}
$X$ (or, a {\em Boolean base of} $X$) if $A$ is a Boolean
subalgebra of the Boolean algebra $CO(X)$ and $A$ is an open base
of $X$.
The set of all admissible Boolean algebras for $X$ is
denoted by ${\cal B}{\cal A}(X)$.
\end{defi}
\begin{nota}\label{cz}
\rm The set of all (up to equivalence) zero-dimensional compact
Hausdorff
extensions of a zero-dimensional Hausdorff space $X$
will be denoted by ${\cal K}_0(X)$.
\end{nota}
\begin{cor}\label{dwinger}{\rm (Ph. Dwinger \cite{Dw})}
Let $X$ be a zero-dimensional Hausdorff space. Then the ordered
sets $({\cal K}_0(X),\le)$ and $({\cal B}{\cal A}(X),\subseteq)$ are isomorphic.
\end{cor}
\noindent{\em Proof.}~Clearly, a Boolean algebra $A$ is admissible for
$X$ iff the ZLBA $(A,A)$ is admissible for $X$. Also, if
$A_1,A_2$ are two Boolean algebras admissible for $X$ then
$A_1\subseteq A_2$ iff $(A_1,A_1)\preceq_0(A_2,A_2)$. Since the
admissible ZLBAs of the form $(A,A)$, and only they, correspond to
the zero-dimensional compact Hausdorff extensions of $X$, our
assertion follows from Theorem \ref{dwingerlc}. \hfill$\square$
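For example, for the discrete space $\mathbb{N}$ every Boolean base
contains all singletons and hence contains the finite--cofinite
algebra, which is therefore the smallest Boolean base of $\mathbb{N}$;
it corresponds to the one-point compactification of $\mathbb{N}$,
while the largest Boolean base $P(\mathbb{N})$ corresponds to
$\beta_0\mathbb{N}=\beta\mathbb{N}$.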
\section{Zero-dimensional Local Proximities}
\begin{defi}\label{zdlpdef}
\rm A local proximity space $(X,\delta,{\cal B})$ is called {\em zero-dimensional}
if for every $A,B\in{\cal B}$ with $A\ll B$ there exists $C\subseteq X$
such that $A\subseteq C\subseteq B$ and $C\ll C$.
The set of all separated zero-dimensional local proximity spaces on a Tychonoff
space $(X,\tau)$ will be denoted by ${\cal L}{\cal P}_0(X,\tau)$. The restriction of the
order relation $\preceq$ on ${\cal L}{\cal P}(X,\tau)$ (see \ref{nilea}) to the set
${\cal L}{\cal P}_0(X,\tau)$ will be denoted again by $\preceq$.
\end{defi}
\begin{theorem}\label{zdlpth}
Let $(X,\tau)$ be a zero-dimensional Hausdorff space. Then the ordered
sets $({\cal L}_0(X),\le)$ and $({\cal L}{\cal P}_0(X,\tau),\preceq)$ are isomorphic
(see \ref{zdlpdef} and \ref{dwingerlc} for the notations).
\end{theorem}
\noindent{\em Proof.}~Having in mind Leader's Theorem \ref{Leader}, we need only to show that if
$[(Y,f)]\in {\cal L}(X)$ and $\Lambda_X([(Y,f)])=(X,\delta,{\cal B})$ then $Y$ is a
zero-dimensional space iff $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X)$.
So, let $Y$ be a zero-dimensional space. Then, by Theorem
\ref{Leader}, ${\cal B}=\{B\subseteq X\mid \mbox{{\rm cl}}_Y(f(B))$ is compact$\}$, and
for every $A,B\subseteq X$, $A\delta B$ iff
$\mbox{{\rm cl}}_Y(f(A))\cap\mbox{{\rm cl}}_Y(f(B))\neq\emptyset$. Let $A,B\in{\cal B}$ and $A\ll B$.
Then $\mbox{{\rm cl}}_Y(f(A))\cap\mbox{{\rm cl}}_Y(f(X\setminus B))=\emptyset$. Since $\mbox{{\rm cl}}_Y(f(A))$
is compact and $Y$ is zero-dimensional, there exists $U\in CO(Y)$
such that $\mbox{{\rm cl}}_Y(f(A))\subseteq U\subseteq Y\setminus \mbox{{\rm cl}}_Y(f(X\setminus B))$. Set
$V=f^{-1}(U)$. Then $A\subseteq V\subseteq\mbox{{\rm int}}_X(B)$, $\mbox{{\rm cl}}_Y(f(V))=U$ and
$\mbox{{\rm cl}}_Y(f(X\setminus V))=Y\setminus U$. Thus $V\ll V$ and $A\subseteq V\subseteq B$.
Therefore, $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X)$.
Conversely, let $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X)$ and $(Y,f)=L(X,\delta,{\cal B})$
(see \ref{Leader} for the notations). We will prove that $Y$ is a
zero-dimensional space. By Theorem \ref{Leader}, the formulas
written in the preceding paragraph for ${\cal B}$ and $\delta$ hold again.
Let $y\in Y$ and $U$ be an open neighborhood of $y$.
Since $Y$ is locally compact and Hausdorff, there exist $F_1, F_2\in CR(Y)$
such that $y\in F_1\subseteq\mbox{{\rm int}}_Y(F_2)\subseteq F_2\subseteq U$. Let $A_i=f^{-1}(F_i)$, $i=1,2$.
Then $\mbox{{\rm cl}}_Y(f(A_i))=F_i$, and hence $A_i\in{\cal B}$, for $i=1,2$. Also, $A_1\ll A_2$.
Thus there exists $C\in{\cal B}$ such that $A_1\subseteq C\subseteq A_2$ and $C\ll C$.
It is easy to see that $F_1\subseteq\mbox{{\rm cl}}_Y(f(C))\subseteq F_2$ and that $\mbox{{\rm cl}}_Y(f(C))\in CO(Y)$.
Therefore, $Y$ is a zero-dimensional space. \hfill$\square$
By Theorem \ref{Leader}, for every Tychonoff space $(X,\tau)$, the
local proximities of the form $(X,\delta,P(X))$ on $(X,\tau)$, and only
they, correspond to the Hausdorff compactifications of $(X,\tau)$.
The pairs $(X,\delta)$ for which the triple $(X,\delta,P(X))$ is a local
proximity space are called {\em Efremovi\v{c} proximities}. Hence,
Leader's Theorem \ref{Leader} implies the famous Smirnov
Compactification Theorem \cite{Sm2}. The notion of a
zero-dimensional proximity was introduced recently by G.
Bezhanishvili \cite{B}. Our notion of a zero-dimensional local
proximity is a generalization of it. We will denote by ${\cal P}_0(X)$
the set of all zero-dimensional proximities on a zero-dimensional
Hausdorff space $X$. Now it becomes clear that our Theorem
\ref{zdlpth} immediately implies the following theorem of G.
Bezhanishvili \cite{B}:
\begin{cor}\label{zdlpcor}{\rm (G. Bezhanishvili \cite{B})}
Let $(X,\tau)$ be a zero-dimensional Hausdorff space.
Then there exists an isomorphism between the ordered
sets $({\cal K}_0(X),\le)$ and $({\cal P}_0(X,\tau),\preceq)$
(see \ref{zdlpdef} and \ref{dwingerlc} for the notations).
\end{cor}
The connection between the zero-dimensional local proximities on a
zero-dimensional Hausdorff space $X$ and
the ZLBAs admissible for $X$ is clarified in the next result:
\begin{theorem}\label{ailp}
Let $(X,\tau)$ be a zero-dimensional Hausdorff space. Then:
\smallskip
\noindent(a) Let $(A,I)\in{\cal Z}{\cal A}(X,\tau)$. Set ${\cal B}=\{M\subseteq X\mid \exists B\in I$ such that $M\subseteq B\}$,
and for every $M,N\in{\cal B}$, let $M\delta N\iff (\forall F\in I)[(M\subseteq F)\rightarrow(F\cap N\neq\emptyset)]$; further, for every
$K,L\subseteq X$, let $K\delta L\iff[\exists M,N\in{\cal B}$ such that $M\subseteq K$, $N\subseteq L$ and $M\delta N]$. Then $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X,\tau)$.
Set $(X,\delta,{\cal B})=L_X(A,I)$.
\smallskip
\noindent(b) Let $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X,\tau)$. Set $A=\{F\subseteq X\mid F\ll F\}$ and $I=A\cap{\cal B}$. Then $(A,I)\in{\cal Z}{\cal A}(X,\tau)$.
Set $(A,I)=l_X(X,\delta,{\cal B})$.
\smallskip
\noindent(c) $\beta_X^0=(\Lambda_X)^{-1}\circ L_X$ and, for every
$(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X,\tau)$, $(\beta_X^0\circ
l_X)(X,\delta,{\cal B})=(\Lambda_X)^{-1}(X,\delta,{\cal B})$ (see \ref{Leader},
(\ref{dw2}), as well as (a) and (b) here, for the notations);
\smallskip
\noindent(d) The correspondence
$L_X:({\cal Z}{\cal A}(X,\tau),\preceq_0)\longrightarrow({\cal L}{\cal P}_0(X,\tau),\preceq)$ is
an isomorphism (between posets) and $L_X^{-1}=l_X$.
\end{theorem}
\noindent{\em Proof.}~It follows from Theorems \ref{dwingerlc}, \ref{zdlpth} and \ref{Leader}. \hfill$\square$
The above assertion is a generalization of the analogous result of G. Bezhanishvili \cite{B} concerning the
connection between the zero-dimensional proximities on a zero-dimensional Hausdorff space $X$ and
the Boolean algebras admissible for $X$.
\section{Extensions over Zero-dimensional Local Compactifications}
\begin{theorem}\label{zdextc}
Let $(X_i,\tau_i)$, where $i=1,2$, be zero-dimensional Hausdorff
spaces, $(Y_i,f_i)$ be a zero-dimensional Hausdorff local
compactification of $(X_i,\tau_i)$, $(A_i,I_i)=\alpha_{X_i}^0([(Y_i,f_i)])$
(see (\ref{dw1}) and (\ref{01}) for
$\alpha_{X_i}^0$), where $i=1,2$,
and $f:X_1\longrightarrow X_2$ be a function. Then there exists a continuous function
$g=L_0(f):Y_1\longrightarrow Y_2$ such that $g\circ f_1=f_2\circ f$ iff $f$ satisfies the
following conditions:
\smallskip
\noindent{\rm (ZEQ1)} For every $G\in A_2$, $f^{-1}(G)\in A_1$ holds;
\smallskip
\noindent{\rm (ZEQ2)} For every $F\in I_1$ there exists $G\in I_2$ such that $f(F)\subseteq G$.
\end{theorem}
\smallskip\noindent{\em Proof.}~ ($\Rightarrow$) Suppose that there exists a continuous function $g:Y_1\longrightarrow Y_2$
such that $g\circ f_1=f_2\circ f$. By Lemma \ref{isombool} and (\ref{05}),
we have that the maps
\begin{equation}\label{0i}
r_i^c:CO(Y_i)\longrightarrow A_i, \ G\mapsto (f_i)^{-1}(G), \ e_i^c:A_i\longrightarrow CO(Y_i), \ F\mapsto\mbox{{\rm cl}}_{Y_i}(f_i(F)),
\end{equation}
where $i=1,2$, are Boolean isomorphisms; moreover, since $r_i^c(CK(Y_i))=I_i$
and $e_i^c(I_i)=CK(Y_i)$, we get that
\begin{equation}\label{0il}
r_i^c:(CO(Y_i),CK(Y_i))\longrightarrow (A_i,I_i) \mbox{ and }
e_i^c:(A_i,I_i)\longrightarrow (CO(Y_i),CK(Y_i)),
\end{equation}
where $i=1,2$, are LBA-isomorphisms.
Set
\begin{equation}\label{0ic}
\psi_g:CO(Y_2)\longrightarrow CO(Y_1), \ G\mapsto g^{-1}(G), \mbox{ and }
\psi_f=r_1^c\circ\psi_g\circ e_2^c.
\end{equation}
Then $\psi_f:A_2\longrightarrow A_1$. We will prove that
\begin{equation}\label{psif}
\psi_f(G)=f^{-1}(G), \mbox{ for every } G\in A_2.
\end{equation}
Indeed, let $G\in A_2$.
Then $\psi_f(G)=(r_1^c\circ\psi_g\circ e_2^c)(G)=(f_1)^{-1}(g^{-1}(\mbox{{\rm cl}}_{Y_2}(f_2(G))))=
\{x\in X_1\mid (g\circ f_1)(x)\in\mbox{{\rm cl}}_{Y_2}(f_2(G))\}=\{x\in X_1\mid f_2(f(x))\in\mbox{{\rm cl}}_{Y_2}(f_2(G))\}
=\{x\in X_1\mid f(x)\in (f_2)^{-1}(\mbox{{\rm cl}}_{Y_2}(f_2(G)))\}=
\{x\in X_1\mid f(x)\in G\}=f^{-1}(G)$. This shows that condition (ZEQ1) is fulfilled.
Since, by Theorem \ref{genstonec}, $\psi_g=\Theta^t(g)$, we get that $\psi_g$ is
a ${\bf ZLBA}$-morphism. Thus $\psi_f$ is a ${\bf ZLBA}$-morphism. Therefore, for every $F\in I_1$
there exists $G\in I_2$ such that $f^{-1}(G)\supseteq F$. Hence, condition (ZEQ2)
is also verified.
\smallskip
\noindent($\Leftarrow$) Let $f$ be a function satisfying
conditions (ZEQ1) and (ZEQ2). Set $\psi_f:A_2\longrightarrow A_1$, $G\mapsto
f^{-1}(G)$. Then $\psi_f:(A_2,I_2)\longrightarrow (A_1,I_1)$ is a
${\bf ZLBA}$-morphism. Put $g=\Theta^a(\psi_f)$. Then
$g:\Theta^a(A_1,I_1)\longrightarrow\Theta^a(A_2,I_2)$, i.e. $g:Y_1\longrightarrow Y_2$ and $g$
is a continuous function (see Theorem \ref{genstonec} and
(\ref{dw2})). We will show that $g\circ f_1=f_2\circ f$. Let $x\in
X_1$. Then, by (\ref{0f}) and Theorem \ref{genstonec}, $g(f_1(x))=
g(u_{x,A_1})=(\psi_f)^{-1}(u_{x,A_1})=\{G\in A_2\mid\psi_f(G)\in
u_{x,A_1}\}=\{G\in A_2\mid x\in f^{-1}(G)\}=\{G\in A_2\mid f(x)\in
G\}=u_{f(x),A_2}=f_2(f(x))$. Thus, $g\circ f_1=f_2\circ f$. \hfill$\Box$\par
It is natural to write $f:(X_1,A_1,I_1)\longrightarrow (X_2,A_2,I_2)$ when we
have a situation like that described in Theorem
\ref{zdextc}. Then, in analogy with Leader's equicontinuous
functions (see Leader's Theorem \ref{Leader}), the functions
$f:(X_1,A_1,I_1)\longrightarrow (X_2,A_2,I_2)$ which satisfy conditions
(ZEQ1) and (ZEQ2) will be called {\em 0-equicontinuous functions}.
Since $I_2$ is a base of $X_2$,
we obtain that every 0-equicontinuous function is a continuous function.
\begin{cor}\label{zdextcc}
Let $(X_i,\tau_i)$, $i=1,2$, be two zero-dimensional Hausdorff spaces,
$A_i\in{\cal B}{\cal A}(X_i)$, $(Y_i,f_i)=\beta_{X_i}^0(A_i,A_i)$ (see (\ref{dw2}) for $\beta_{X_i}^0$),
where $i=1,2$,
and $f:X_1\longrightarrow X_2$ be a function. Then there exists a continuous function
$g=L_0(f):Y_1\longrightarrow Y_2$ such that $g\circ f_1=f_2\circ f$ iff $f$ satisfies condition
{\rm (ZEQ1)}.
\end{cor}
\smallskip\noindent{\em Proof.}~ It follows from Theorem \ref{zdextc} because for ZLBAs of the form $(A_i,A_i)$,
where $i=1,2$, condition (ZEQ2) is always fulfilled. \hfill$\Box$\par
Clearly, Theorem \ref{dwinger} implies (and this is noted in
\cite{Dw}) that every zero-dimensional Hausdorff space $X$ has
a greatest zero-dimensional Hausdorff compactification, which
corresponds to the Boolean algebra $CO(X)$ admissible for $X$.
This compactification was discovered by B. Banaschewski \cite{Ba};
it is denoted by $(\beta_0X,\beta_0)$ and is called the {\em Banaschewski
compactification} of $X$. Its main
property follows immediately from our Corollary \ref{zdextcc}:
\begin{cor}\label{zdextcb}{\rm (B. Banaschewski \cite{Ba})}
Let $(X_i,\tau_i)$, $i=1,2$, be two zero-dimensional Hausdorff spaces and
$(cX_2,c)$ be a zero-dimensional Hausdorff compactification of $X_2$. Then for every continuous
function $f:X_1\longrightarrow X_2$ there exists a continuous function
$g:\beta_0X_1\longrightarrow cX_2$ such that $g\circ\beta_0=c\circ f$.
\end{cor}
\smallskip\noindent{\em Proof.}~ Since $\beta_0X_1$ corresponds to the Boolean algebra $CO(X_1)$ admissible for $X_1$,
condition (ZEQ1) is clearly fulfilled when $f$ is a continuous function. Now apply
Corollary \ref{zdextcc}. \hfill$\Box$\par
If in Corollary \ref{zdextcb} above $cX_2=\beta_0X_2$, then the map $g$ will be denoted by $\beta_0f$.
Recall
that a function $f:X\longrightarrow Y$ is called {\em skeletal}\/ (\cite{MR})
if
\begin{equation}\label{ske}
\mbox{{\rm int}}(f^{-1}(\mbox{{\rm cl}} (V)))\subseteq\mbox{{\rm cl}}(f^{-1}(V))
\end{equation}
for every open subset $V$ of $Y$. Recall also the following result:
\begin{lm}\label{skel}{\rm (\cite{D1})}
A function $f:X\longrightarrow Y$ is skeletal iff\/ $\mbox{{\rm int}}(\mbox{{\rm cl}}(f(U)))\neq\emptyset$
for every non-empty open subset $U$ of $X$.
\end{lm}
\begin{lm}\label{skelnew}
A continuous map $f:X\longrightarrow Y$, where $X$ and $Y$ are topological spaces, is skeletal iff
for every open subset $V$ of $Y$ such that $\mbox{{\rm cl}}_Y(V)$ is open,
$\mbox{{\rm cl}}_X(f^{-1}(V))= f^{-1}(\mbox{{\rm cl}}_Y(V))$ holds.
\end{lm}
\smallskip\noindent{\em Proof.}~ ($\Rightarrow$) Let $f$ be a skeletal continuous map and $V$ be
an open subset of $Y$ such that $\mbox{{\rm cl}}_Y(V)$ is open. Let $x\in
f^{-1}(\mbox{{\rm cl}}_Y(V))$. Then $f(x)\in \mbox{{\rm cl}}_Y(V)$. Since $f$ is
continuous, there exists an open neighborhood $U$ of $x$ in $X$
such that $f(U)\subseteq \mbox{{\rm cl}}_Y(V)$. Suppose that
$x\not\in\mbox{{\rm cl}}_X(f^{-1}(V))$. Then there exists an open neighborhood $W$
of $x$ in $X$ such that $W\subseteq U$ and $W\cap f^{-1}(V)=\emptyset$. We
obtain that
$\mbox{{\rm cl}}_Y(f(W))\cap V=\emptyset$ and $\mbox{{\rm cl}}_Y(f(W))\subseteq \mbox{{\rm cl}}_Y(f(U))\subseteq
\mbox{{\rm cl}}_Y(V)$. Since, by Lemma \ref{skel}, $\mbox{{\rm int}}_Y(\mbox{{\rm cl}}_Y(f(W)))\neq\emptyset$, we get a contradiction.
Thus $f^{-1}(\mbox{{\rm cl}}_Y(V))\subseteq\mbox{{\rm cl}}_X(f^{-1}(V))$.
The converse inclusion follows from the continuity of $f$.
Hence $f^{-1}(\mbox{{\rm cl}}_Y(V))=\mbox{{\rm cl}}_X(f^{-1}(V))$.
\smallskip
\noindent($\Leftarrow$) Suppose that there exists a non-empty open subset
$U$ of $X$ such that $\mbox{{\rm int}}_Y(\mbox{{\rm cl}}_Y(f(U)))=\emptyset$. Then,
clearly, $V=Y\setminus\mbox{{\rm cl}}_Y(f(U))$ is an open dense subset of $Y$.
Hence $\mbox{{\rm cl}}_Y(V)$ is open in $Y$. Thus $\mbox{{\rm cl}}_X(f^{-1}(V))=
f^{-1}(\mbox{{\rm cl}}_Y(V))=f^{-1}(Y)=X$ holds. Therefore
$X=\mbox{{\rm cl}}_X(f^{-1}(V))=\mbox{{\rm cl}}_X(f^{-1}(Y\setminus \mbox{{\rm cl}}_Y(f(U))))=\mbox{{\rm cl}}_X(X\setminus
f^{-1}(\mbox{{\rm cl}}_Y(f(U))))$. Since $U\subseteq f^{-1}(\mbox{{\rm cl}}_Y(f(U)))$, we get
that $X\setminus U\supseteq\mbox{{\rm cl}}_X(X\setminus f^{-1}(\mbox{{\rm cl}}_Y(f(U))))=X$, a
contradiction. Hence, $f$ is a skeletal map. \hfill$\Box$\par
Note that the proof of Lemma \ref{skelnew} shows that the following assertion is also true:
\begin{lm}\label{skelnewcor}
A continuous map $f:X\longrightarrow Y$, where $X$ and $Y$ are topological spaces, is skeletal iff
for every open dense subset $V$ of $Y$, $\mbox{{\rm cl}}_X(f^{-1}(V))= X$ holds.
\end{lm}
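As a quick illustration of Lemma \ref{skelnewcor} (our example, not part of the original argument), every dense embedding is skeletal:

```latex
% Example (ours): a dense embedding f : X -> Y is skeletal.
% If V is an open dense subset of Y, then f^{-1}(V) is dense in X:
% every non-empty open subset of X has the form f^{-1}(W) with W open
% in Y and meeting f(X); then W \cap V is a non-empty open set, so it
% also meets the dense set f(X).  Hence
\[
\mbox{{\rm cl}}_X(f^{-1}(V))=X \quad\mbox{for every open dense } V\subseteq Y,
\]
% and Lemma skelnewcor gives that f is skeletal.
```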
\begin{lm}\label{skelnewnew}
Let $(X_i,\tau_i)$, $i=1,2$, be two topological spaces,
$(Y_i,f_i)$ be extensions of $(X_i,\tau_i)$, $i=1,2$, and
$f:X_1\longrightarrow X_2$ and $g:Y_1\longrightarrow Y_2$ be two continuous functions
such that $g\circ f_1=f_2\circ f$. Then $g$ is skeletal iff $f$ is
skeletal.
\end{lm}
\smallskip\noindent{\em Proof.}~ ($\Rightarrow$) Let $g$ be skeletal and $V$ be an open dense
subset of $X_2$. Set $U=Ex_{Y_2}(V)$, i.e. $U=Y_2\setminus
\mbox{{\rm cl}}_{Y_2}(f_2(X_2\setminus V))$. Then $U$ is an open dense subset of
$Y_2$ and $f_2^{-1}(U)=V$. Hence, by Lemma \ref{skelnewcor},
$g^{-1}(U)$ is a dense open subset of $Y_1$. We will prove that
$f_1^{-1}(g^{-1}(U))\subseteq f^{-1}(V)$. Indeed, let $x\in
f_1^{-1}(g^{-1}(U))$. Then $g(f_1(x))\in U$, i.e. $f_2(f(x))\in U$.
Thus $f(x)\in f_2^{-1}(U)=V$. So, $f_1^{-1}(g^{-1}(U))\subseteq f^{-1}(V)$.
This shows that $f^{-1}(V)$ is dense in $X_1$. Therefore, by Lemma
\ref{skelnewcor}, $f$ is a skeletal map.
\smallskip
\noindent($\Leftarrow$) Let $f$ be a skeletal map and $U$ be a
dense open subset of $Y_2$. Set $V=f_2^{-1}(U)$. Then $V$ is an
open dense subset of $X_2$. Thus, by Lemma \ref{skelnewcor},
$f^{-1}(V)$ is a dense subset of $X_1$. We will prove that
$f^{-1}(V)\subseteq f_1^{-1}(g^{-1}(U))$. Indeed, let $x\in f^{-1}(V)$.
Then $f(x)\in V=f_2^{-1}(U)$. Thus $f_2(f(x))\in U$, i.e.
$g(f_1(x))\in U$. So, $f^{-1}(V)\subseteq f_1^{-1}(g^{-1}(U))$. This
implies that $g^{-1}(U)$ is dense in $Y_1$. Now, Lemma
\ref{skelnewcor} shows that $g$ is a skeletal map. \hfill$\Box$\par
We are now ready to prove the following result:
\begin{theorem}\label{zdextcmain}
Let $(X_i,\tau_i)$, where $i=1,2$, be zero-dimensional Hausdorff
spaces. Let, for $i=1,2$, $(Y_i,f_i)$ be a zero-dimensional
Hausdorff local compactification of $(X_i,\tau_i)$,
$(A_i,I_i)=\alpha_{X_i}^0(Y_i,f_i)$
(see (\ref{dw1}) and (\ref{01}) for
$\alpha_{X_i}^0$),
$f:(X_1,A_1,I_1)\longrightarrow (X_2,A_2,I_2)$
be a 0-equicontinuous function and $g=L_0(f):Y_1\longrightarrow Y_2$ be the continuous
function such that $g\circ f_1=f_2\circ f$ (its existence is guaranteed by
Theorem \ref{zdextc}). Then:
\smallskip
\noindent(a) $g$ is skeletal iff $f$ is skeletal;
\smallskip
\noindent(b) $g$ is an open map iff $f$ satisfies the following condition:
\smallskip
\noindent{\rm(ZO)} For every $F\in I_1$, $\mbox{{\rm cl}}_{X_2}(f(F))\in I_2$ holds;
\smallskip
\noindent(c) $g$ is a perfect map iff $f$ satisfies the following condition:
\smallskip
\noindent{\rm(ZP)} For every $G\in I_2$, $f^{-1}(G)\in I_1$ holds (i.e., briefly, $f^{-1}(I_2)\subseteq I_1$);
\smallskip
\noindent(d) $\mbox{{\rm cl}}_{Y_2}(g(Y_1))=Y_2$ iff\/ $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$;
\smallskip
\noindent(e) $g$ is an injection iff $f$ satisfies the following condition:
\smallskip
\noindent{\rm(ZI)} For every $F_1,F_2\in I_1$ such that $F_1\cap F_2=\emptyset$
there exist $G_1,G_2\in I_2$ with $G_1\cap G_2=\emptyset$ and $f(F_i)\subseteq G_i$, $i=1,2$;
\smallskip
\noindent(f) $g$ is an open injection iff $I_1\subseteq f^{-1}(I_2)$ and $f$ satisfies condition {\rm (ZO)};
\smallskip
\noindent(g) $g$ is a closed injection iff $f^{-1}(I_2)=I_1$;
\smallskip
\noindent(h) $g$ is a perfect surjection iff $f$ satisfies condition
{\rm (ZP)} and $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$;
\smallskip
\noindent(i) $g$ is a dense embedding iff\/ $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$ and $I_1\subseteq f^{-1}(I_2)$.
\end{theorem}
\smallskip\noindent{\em Proof.}~ Set $\psi_g=\Theta^t(g)$ (see Theorem \ref{genstonec}). Then
$\psi_g:CO(Y_2)\longrightarrow CO(Y_1)$, $G\mapsto g^{-1}(G)$. Set also
$\psi_f: A_2\longrightarrow A_1$, $G\mapsto f^{-1}(G)$. Then, (\ref{0ic}),
(\ref{0i}) and (\ref{psif}) imply that
$\psi_f=r_1^c\circ\psi_g\circ e_2^c$.
\smallskip
\noindent(a) It follows from Lemma \ref{skelnewnew}.
\smallskip
\noindent(b) {\em First Proof.}~ Using \cite[Theorem 2.8(a)]{Di5}
and (\ref{0il}), we get that the map $g$ is open iff there exists
a map $\psi^f:I_1\longrightarrow I_2$ satisfying the following conditions:
\smallskip
\noindent(OZL1) For every $F\in I_1$ and every $G\in I_2$,
$(F\cap f^{-1}(G)=\emptyset)\rightarrow (\psi^f(F)\cap G=\emptyset)$;
\smallskip
\noindent(OZL2) For every $F\in I_1$, $f^{-1}(\psi^f(F))\supseteq F$.
Obviously, condition (OZL2) is equivalent to the following one:
for every $F\in I_1$, $f(F)\subseteq\psi^f(F)$. We will show that for
every $F\in I_1$, $\psi^f(F)\subseteq\mbox{{\rm cl}}_{X_2}(f(F))$. Indeed, let
$y\in\psi^f(F)$ and suppose that $y\not\in \mbox{{\rm cl}}_{X_2}(f(F))$. Since
$I_2$ is a base of $X_2$, there exists a $G\in I_2$ such that
$y\in G$ and $G\cap f(F)=\emptyset$. Then $F\cap f^{-1}(G)=\emptyset$ and
condition (OZL1) implies that $\psi^f(F)\cap G=\emptyset$. We get that
$y\not\in\psi^f(F)$, a contradiction. Thus
$f(F)\subseteq\psi^f(F)\subseteq\mbox{{\rm cl}}_{X_2}(f(F))$. Since $\psi^f(F)$ is a
closed set, we obtain that $\psi^f(F)=\mbox{{\rm cl}}_{X_2}(f(F))$. Obviously,
conditions (OZL1) and (OZL2) are satisfied when
$\psi^f(F)=\mbox{{\rm cl}}_{X_2}(f(F))$. This implies that $g$ is an open map
iff for every $F\in I_1$, $\mbox{{\rm cl}}_{X_2}(f(F))\in I_2$.
\smallskip
\noindent{\em Second Proof.}~ We have, by (\ref{01}), that
$I_i=(f_i)^{-1}(CK(Y_i))$, for $i=1,2$. Thus, for every $F\in I_i$,
where $i\in\{1,2\}$, we have that $\mbox{{\rm cl}}_{Y_i}(f_i(F))\in CK(Y_i)$.
Let $g$ be an open map and $F\in I_1$. Then
$G=\mbox{{\rm cl}}_{Y_1}(f_1(F))\in CK(Y_1)$. Thus $g(G)\in CK(Y_2)$. Since
$G$ is compact, we have that
$g(G)=\mbox{{\rm cl}}_{Y_2}(g(f_1(F)))=\mbox{{\rm cl}}_{Y_2}(f_2(f(F)))=\mbox{{\rm cl}}_{Y_2}(f_2(\mbox{{\rm cl}}_{X_2}(f(F))))$.
Therefore, $\mbox{{\rm cl}}_{X_2}(f(F))=(f_2)^{-1}(g(G))$, i.e.
$\mbox{{\rm cl}}_{X_2}(f(F))\in I_2$.
Conversely, let $f$ satisfy condition (ZO). Since $CK(Y_1)$ is
an open base of $Y_1$, to show that $g$ is an open map it is
enough to prove that for every $G\in CK(Y_1)$,
$g(G)=\mbox{{\rm cl}}_{Y_2}(f_2(\mbox{{\rm cl}}_{X_2}(f(F))))$ holds, where
$F=(f_1)^{-1}(G)$ and thus $F\in I_1$. Obviously,
$G=\mbox{{\rm cl}}_{Y_1}(f_1(F))$. Using again the fact that $G$ is compact,
we get that $g(G)=g(\mbox{{\rm cl}}_{Y_1}(f_1(F)))=\mbox{{\rm cl}}_{Y_2}(g(f_1(F)))=
\mbox{{\rm cl}}_{Y_2}(f_2(f(F)))=\mbox{{\rm cl}}_{Y_2}(f_2(\mbox{{\rm cl}}_{X_2}(f(F))))$. So, $g$ is
an open map.
\smallskip
\noindent(c) Since $Y_2$ is a locally compact Hausdorff space and
$CK(Y_2)$ is a base of $Y_2$, we get, using the well-known
\cite[Theorem 3.7.18]{E2}, that $g$ is a perfect map iff
$g^{-1}(G)\in CK(Y_1)$ for every $G\in CK(Y_2)$. Thus $g$ is a
perfect map iff $\psi_g(G)\in CK(Y_1)$ for every $G\in CK(Y_2)$.
Now, (\ref{0il}) and (\ref{0ic}) imply that $g$ is a perfect map
$\iff$ $\psi_f(G)\in I_1$ for every $G\in I_2$ $\iff$ $f$
satisfies condition (ZP).
\smallskip
\noindent(d) This is obvious.
\smallskip
\noindent(e) Having in mind (\ref{0il}) and (\ref{0ic}),
our assertion follows from \cite[Theorem 3.5]{Di5}.
\smallskip
\noindent(f) It follows from (b), (\ref{0il}), (\ref{0ic}), and
\cite[Theorem 3.12]{Di5}.
\smallskip
\noindent(g) It follows from (c), (\ref{0il}), (\ref{0ic}), and
\cite[Theorem 3.14]{Di5}.
\smallskip
\noindent(h) It follows from (c) and (d).
\smallskip
\noindent(i) It follows from (d) and \cite[Theorem 3.28 and
Proposition 3.3]{Di5}. We will also give a {\em second proof}\/
of this fact. Obviously, if $g$ is a dense embedding then $g(Y_1)$
is an open subset of $Y_2$ (because $Y_1$ is locally compact);
thus $g$ is an open mapping and we can apply (f) and (d).
Conversely, if $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$ and $I_1\subseteq f^{-1}(I_2)$,
then, by (d), $g(Y_1)$ is a dense subset of $Y_2$. We will show
that $f$ satisfies condition (ZO). Let $F_1\in I_1$. Then there
exists $F_2\in I_2$ such that $F_1=f^{-1}(F_2)$. Then, obviously,
$\mbox{{\rm cl}}_{X_2}(f(F_1))\subseteq F_2$. Suppose that
$G_2=F_2\setminus\mbox{{\rm cl}}_{X_2}(f(F_1))\neq\emptyset$. Since $G_2$ is open, there
exists $x_2\in G_2\cap f(X_1)$. Then there exists $x_1\in X_1$
such that $f(x_1)=x_2\in F_2$. Thus $x_1\in F_1$, a contradiction.
Therefore, $\mbox{{\rm cl}}_{X_2}(f(F_1))= F_2$. Thus, $\mbox{{\rm cl}}_{X_2}(f(F_1))\in
I_2$. So, condition (ZO) is fulfilled. Hence, by (b), $g$ is an
open map. Now, using (f), we get that $g$ is also an injection.
All this shows that $g$ is a dense embedding. \hfill$\Box$\par
Recall that a continuous map $f:X\longrightarrow Y$ is called {\em
quasi-open\/} (\cite{MP}) if for every non-empty open subset $U$
of $X$, $\mbox{{\rm int}}(f(U))\neq\emptyset$ holds. As shown in \cite{D1}, if
$X$ is regular and Hausdorff, and $f:X\longrightarrow Y$ is a closed map,
then $f$ is quasi-open iff $f$ is skeletal. This fact and Theorem
\ref{zdextcmain} imply the following two corollaries:
\begin{cor}\label{zdextcmaincb}
Let $X_1$, $X_2$ be two zero-dimensional Hausdorff spaces and
$f:X_1\longrightarrow X_2$ be a continuous function. Then:
\smallskip
\noindent(a) $\beta_0f$ is quasi-open iff $f$ is skeletal;
\smallskip
\noindent(b) $\beta_0f$ is an open map iff $f$ satisfies the following condition:
\smallskip
\noindent{\rm(ZOB)} For every $F\in CO(X_1)$, $\mbox{{\rm cl}}_{X_2}(f(F))\in CO(X_2)$ holds;
\smallskip
\noindent(c) $\beta_0f$ is a surjection iff\/ $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$;
\smallskip
\noindent(d) $\beta_0f$ is an injection iff $f^{-1}(CO(X_2))=CO(X_1)$.
\end{cor}
\begin{cor}\label{zdextcmaincc}
Let $X_1$, $X_2$ be two zero-dimensional Hausdorff spaces,
$f:X_1\longrightarrow X_2$ be a continuous function, ${\cal B}$ be a Boolean algebra admissible for $X_2$, $(cX_2,c)$ be the Hausdorff zero-dimensional
compactification of $X_2$ corresponding to ${\cal B}$ (see Theorems \ref{dwingerlc} and \ref{dwinger}) and $g:\beta_0X_1\longrightarrow cX_2$ be the continuous function such
that $g\circ \beta_0=c\circ f$ (its existence is guaranteed by
Corollary \ref{zdextcb}). Then:
\smallskip
\noindent(a) $g$ is quasi-open iff $f$ is skeletal;
\smallskip
\noindent(b) $g$ is an open map iff $f$ satisfies the following condition:
\smallskip
\noindent{\rm(ZOC)} For every $F\in CO(X_1)$, $\mbox{{\rm cl}}_{X_2}(f(F))\in {\cal B}$ holds;
\smallskip
\noindent(c) $g$ is a surjection iff\/ $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$;
\smallskip
\noindent(d) $g$ is an injection iff $f^{-1}({\cal B})=CO(X_1)$.
\end{cor}
\begin{thebibliography}{99}
\baselineskip = 0.75\normalbaselineskip
{\small
\bibitem{Ba}
B. Banaschewski,
\newblock \"{U}ber nulldimensionale R\"{a}ume,
\newblock Math. Nachr. 13 (1955) 129--140.
\bibitem{B}
G. Bezhanishvili,
\newblock Zero-dimensional proximities and zero-dimensional compactifications,
\newblock Topology Appl. 156 (2009) 1496--1504.
\bibitem{CNG}
W. Comfort, S. Negrepontis,
\newblock Chain Conditions in Topology,
\newblock Cambridge Univ. Press, Cambridge, 1982.
\bibitem{D1}
G. Dimov,
\newblock Some generalizations of Fedorchuk Duality Theorem -- I,
\newblock Topology Appl. 156 (2009) 728--746.
\bibitem{Di4}
G. Dimov,
\newblock A De Vries-type Duality Theorem for Locally Compact Spaces -- II,
\newblock arXiv:0903.2593v4, 1--37.
\bibitem{Di5}
G. Dimov,
\newblock A De Vries-type Duality Theorem for Locally Compact Spaces -- III,
\newblock arXiv:0907.2025v1, 1--31.
\bibitem{Dw}
Ph. Dwinger,
\newblock Introduction to Boolean Algebras,
\newblock Physica Verlag, W\"{u}rzburg, 1961.
\bibitem{E2}
R. Engelking,
\newblock General Topology,
\newblock PWN, Warszawa, 1977.
\bibitem{J}
P. T. Johnstone,
\newblock Stone Spaces,
\newblock Cambridge Univ. Press, Cambridge, 1982.
\bibitem{LE2}
S. Leader,
\newblock Local proximity spaces,
\newblock Math. Annalen 169 (1967) 275--281.
\bibitem{MG}
K. D. Magill Jr., J. A. Glasenapp,
\newblock 0-dimensional compactifications and Boolean rings,
\newblock J. Aust. Math. Soc. 8 (1968) 755--765.
\bibitem{MP}
S. Marde\v{s}i\'{c}, P. Papi\'{c},
\newblock Continuous images of ordered compacta, the Suslin property and dyadic compacta,
\newblock Glasnik Mat.-Fiz. Astronom. 17 (1--2) (1962) 3--25.
\bibitem{MR}
J. Mioduszewski, L. Rudolf,
\newblock H-closed and extremally disconnected Hausdorff spaces,
\newblock Dissert. Math. (Rozpr. Mat.) 66 (1969) 1--52.
\bibitem{NW}
S. A. Naimpally, B. D. Warrack,
\newblock Proximity Spaces,
\newblock Cambridge University Press, Cambridge, 1970.
\bibitem{R2}
P. Roeper,
\newblock Region-based topology,
\newblock Journal of Philosophical Logic 26 (1997) 251--309.
\bibitem{Sm2}
J. M. Smirnov,
\newblock On proximity spaces,
\newblock Mat. Sb. 31 (1952) 543--574.
\bibitem{ST}
M. H. Stone,
\newblock Applications of the theory of Boolean rings to general
topology,
\newblock Trans. Amer. Math. Soc. 41 (1937) 375--481.
}
\end{thebibliography}
\end{document}
\begin{document}
\title{Invertible spectra of finite type}
\begin{abstract}
We give a necessary and sufficient numerical condition for an element $X$ of the Picard group of the $K(2)$-local category at a prime $p \geqslant 5$ to be of finite type, i.e., for $\pi_kX$ to be finitely generated as a $\mathbb{Z}_p$-module for all $k \in \mathbb{Z}$.
\end{abstract}
\section{Introduction}
A long-standing open question in chromatic homotopy theory is whether the homotopy groups of the $K(n)$-local sphere are finitely generated over $\mathbb{Z}_p$ in each degree (\cite{MR2458150}; Conjecture 6.5 in \cite{handbook}, Chapter 5). This is the first problem in the \href{https://www-users.cse.umn.edu/~tlawson/hovey/morava.html}{Morava $K$- and $E$-theory} section of Mark Hovey's algebraic topology problem list\footnote{https://www-users.cse.umn.edu/~tlawson/hovey/morava.html}. A positive answer would follow from the chromatic splitting conjecture \cite{BBP,handbook}. One could ask the same question for any invertible spectrum $X$ in the $K(n)$-local category. At height 2 and prime $p \geqslant 5$, Hovey and Strickland showed that this fails for most elements of the Picard group \cite[Proposition~15.7]{MR1601906}.
From now on, $p \geqslant 5$ is a fixed prime and all spectra are $K(2)$-local. Let $Pic_{K(2)}$ be the Picard group of the $K(2)$-local category.
We say a $p$-local spectrum $X$ is of finite type if $\pi_k X$ is finitely generated as a $\mathbb{Z}_p$-module for all $k \in \mathbb{Z}$. A natural question is:
\begin{que} \label{question}
Which elements of $Pic_{K(2)}$ are of finite type?
\end{que}
Hovey and Strickland's argument is not constructive and does not give a criterion for answering this question. We answer Question \ref{question} by the following necessary and sufficient condition. It is a condition on coefficients in a ``$p$-adic''-like expansion of a number associated to $X$.
Let $Pic_{K(2)}^0$ be the index $2$ subgroup of invertible elements whose $(E_2)_*$ homology is zero in odd degrees. Since $X$ is of finite type if and only if $\Sigma X$ is, it suffices to analyze which elements of $Pic_{K(2)}^0$ are of finite type. In Corollary \ref{cor:e}, we give a concrete description of the map \cite[Page 11]{Goerss}
\[
e \colon Pic_{K(2)}^0 \rightarrow H^1(\mathbb{G}_2,(E_2/p)^\times_0) \cong \lim_k \mathbb{Z}/p^k(p^2-1).
\]
We denote the $\mathbb{Z}_p$-module $\displaystyle\lim_k \mathbb{Z}/p^k(p^2-1) \cong \mathbb{Z}_p \oplus \mathbb{Z}/(p^2-1)$ by ${\mathfrak{Z}}$. In Section \ref{subsec:alpha}, we will see that ${\mathfrak{Z}}$ is a direct summand in $Pic^0_{K(2)}.$
\begin{defn}{\label{padicexpansion}}
For an element $\alpha \in {\mathfrak{Z}}$, we define the ${\mathfrak{Z}}$-expansion of $\alpha$ to be the unique expansion of the following form
\[
\alpha = a_0 + (p^2-1) \sum^{\infty}_{i=1} a_ip^{i-1} \,\,\, \mbox{for $a_i \in \mathbb{Z}$,
$0 \leqslant a_0 < p^2-1$, $0 \leqslant a_i < p$}.
\]
We call $a_i$ the $i$\textsuperscript{th} ${\mathfrak{Z}}$-expansion coefficient.
\end{defn}
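To fix ideas, here is a small worked ${\mathfrak{Z}}$-expansion (our illustration, with $p=5$, so that $p^2-1=24$); note that any nonnegative integer has only finitely many nonzero coefficients.

```latex
% Worked example (ours), p = 5, p^2 - 1 = 24, alpha = 150:
%   a_0 = 150 mod 24 = 6, and (150 - 6)/24 = 6 = 1 + 1*5,
% so a_1 = 1, a_2 = 1, and a_i = 0 for i >= 3.
\[
150 \;=\; 6 + (5^2-1)\,(1 + 1\cdot 5),
\qquad
(a_0,a_1,a_2,a_3,\dots)=(6,1,1,0,0,\dots).
\]
```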
\begin{thm*}[\Cref{result}]{\label{thm:mainresult}} Let $X \in Pic_{K(2)}^0$. Then $X$ is of finite type if and only if the ${\mathfrak{Z}}$-expansion coefficients $\{a_i\}$ of $e(X)$ contain either only finitely many zero entries or only finitely many nonzero entries.
\end{thm*}
\begin{rem}
Note that the ${\mathfrak{Z}}$-expansion of $e(X)$ contains only finitely many nonzero coefficients if and only if $e(X)$ is a nonnegative integer.
\end{rem}
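To illustrate the criterion (our computation, anticipating the example $S^{2\gamma}$ treated in Section \ref{examplesection}), consider the torsion generator $\gamma=\displaystyle\lim_k p^{2k}\in{\mathfrak{Z}}$. Since $1+(p^2-1)\sum_{i=0}^{m}p^{2i}=p^{2m+2}$, passing to the limit gives its ${\mathfrak{Z}}$-expansion:

```latex
% Z-expansion of gamma = lim_k p^{2k} (our computation):
\[
\gamma \;=\; 1 + (p^2-1)\sum_{i\geqslant 0} p^{2i},
\qquad\mbox{so}\qquad
a_0=1,\quad a_i=
\begin{cases}
1, & i\ \mbox{odd},\\
0, & i\ \mbox{even},\ i\geqslant 2.
\end{cases}
\]
% These coefficients have infinitely many zero and infinitely many
% nonzero entries, so gamma fails the finiteness property; since
% e(S^{2 gamma}) = gamma, the theorem says that S^{2 gamma} is not
% of finite type.
```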
Theorem \ref{thm:mainresult} gives a refinement to the following theorem due to Hovey and Strickland.
\begin{thm}{\cite[Theorem~15.1]{MR1601906}}
Let $\mu$ be the unique translation-invariant measure on ${\mathfrak{Z}}$ with $\mu({\mathfrak{Z}})=1$. We call $\mu$ the Haar measure on $Pic_{K(2)}^0$. Then the set
$$\{X \in {\mathfrak{Z}} \subset Pic_{K(2)}^0 \mid \text{$X$ is of finite type}\}$$
has Haar measure $0$ in $Pic_{K(2)}^0$.
\end{thm}
We will say $\alpha \in {\mathfrak{Z}}$ has the finiteness property if $\alpha$ satisfies the condition in Theorem \ref{thm:mainresult}; that is, there are either finitely many zero entries or finitely many nonzero entries in $\alpha$'s ${\mathfrak{Z}}$-expansion coefficients.
The ${\mathfrak{Z}}$-index $e(X)$ has the following useful properties. Let $X$ be a spectrum. We denote the homotopy cofiber of $X\xrightarrow{p}X$ by $X/p$.
\begin{thm*}[Theorem \ref{thm:modulep}]
Let $X$, $Y$ be elements in $Pic_{K(2)}^0$. Then $X/p \simeq Y /p$ if and only if $e(X)=e(Y)$.
\end{thm*}
\begin{thm*}[Theorem \ref{thm:ghdual}]\label{thm:dual}
Let $X$ be an element in $Pic_{K(2)}^0$, $I_2X$ be the Gross--Hopkins dual of $X$ (see Definition \ref{defn:GrossHopkins}), and $\lambda=\displaystyle\lim_k p^{2k}(p+1)\in {\mathfrak{Z}}$. Then $e(I_2X)=1+\lambda-e(X)$. In particular, $X$ is of finite type if and only if $I_2X$ is of finite type.
\end{thm*}
\begin{rem}
If $e(X)$ is an integer, then $e(I_2X)$ will not be an integer and its ${\mathfrak{Z}}$-expansion coefficients contain only finitely many zero entries.
\end{rem}
The paper is organized as follows: in section \ref{picardgroup}, we first review some facts about $Pic_{K(2)}^0$ and then give concrete constructions of some elements in it. In section \ref{reducesection}, we reduce the problem of finite type to those elements with concrete constructions. In section \ref{computationsection}, we prove our result by computations based on the constructions and the known computation of $\pi_*L_{K(2)}S^0$. In section \ref{examplesection}, we apply our result to three interesting examples $S^0$, $I_2$, and $S^{2\gamma}$ (to be defined later).
Throughout this paper, $p \geqslant 5$ is a fixed prime, all spectra are $K(2)$-local, and group cohomology of $\mathbb{G}_n$ means continuous cohomology. We may omit $L_{K(2)}$; that is, when we write $S^0$, we mean $L_{K(2)} S^0$.
\subsection*{Acknowledgments}
I would like to heartily thank Paul Goerss for many helpful conversations and the feedback on early drafts of this paper. I would like to thank Tobias Barthel for explaining how the chromatic splitting conjecture implies that the $K(n)$-local sphere is of finite type.
\section{The Picard groups of $K(2)$-local categories at prime $p \geqslant 5$}
\label{picardgroup}
The Picard groups of $K(n)$-local categories were introduced by Hopkins (\cite{MR1232198}; see also \cite{MR1263713}).
\begin{defn}{\cite[Definition 1.2]{MR1263713}}
A $K(n)$-local spectrum $Z$ is \emph{invertible in the $K(n)$-local category} if and only if there is a spectrum $W$ such that
$$L_{K(n)}(Z \wedge W)= L_{K(n)} S^0.$$
The Picard group $Pic_{K(n)}$ of the $K(n)$-local category is the group of isomorphism classes of such spectra, with multiplication given by
$$(X, Y) \mapsto L_{K(n)}(X \wedge Y).$$
\end{defn}
The Picard group of the $K(2)$-local category at primes $p \geqslant 5$ was computed by Olivier Lader in his thesis; he attributes the result to Hopkins and Karamanov.
\begin{thm}{\cite[Theorem ~ 5.3]{Lader}}
At height $2$ and prime $p \geqslant 5$, the Picard group $Pic_{K(2)}$ of the $K(2)$-local category is isomorphic to $\mathbb{Z}_p \oplus \mathbb{Z}_p \oplus \mathbb{Z}/ 2(p^2-1)$.
\end{thm}
We explore the algebraic structure of $Pic^0_{K(2)}$ (see \cite{Goerss} and \cite{MR1601906}). Most parts work for general heights $n$ but we focus on the height $2$ case here. Recall that $\mathfrak{Z}$ denotes $\displaystyle\lim_k \mathbb{Z}/p^k(p^2-1).$
The group $Pic^0_{K(2)}$ is a continuous ${\mathfrak{Z}}$-module. In particular, given $X \in Pic^0_{K(2)}$ and $\alpha \in {\mathfrak{Z}}$, there is an element $X^{\alpha} \in Pic^0_{K(2)}$.
The power series $\exp(px)=\displaystyle\sum^{\infty}_{n=0}p^nx^n/n!$ converges for $p>2$. We have a short exact sequence
$$0\longrightarrow E_0 \xrightarrow{\exp(p\,\cdot\,)} E_0^\times \xrightarrow{\text{quotient}} (E_0/p)^\times \longrightarrow 1.$$
This gives a long exact sequence
\begin{equation}{\label{equation:les}}
\cdots \rightarrow H^1(\mathbb{G}_2, E_0) \xrightarrow{\exp(p\,\cdot\,)} H^1(\mathbb{G}_2, E_0^\times) \xrightarrow{e} H^1(\mathbb{G}_2, (E_0/p)^\times) \longrightarrow \cdots.
\end{equation}
At $p \geqslant 5$, height $n=2$, we have
$$Pic^0_{K(2)}=H^1(\mathbb{G}_2, (E_2)_0^\times).$$
The quotient map
$$(E_2)_0^\times \rightarrow (E_2/p)_0^\times$$
induces the map
$$e \colon Pic_{K(2)}^0 \cong H^1(\mathbb{G}_2, (E_2)_0^\times) \rightarrow H^1(\mathbb{G}_2, (E_2/p)_0^\times) \cong \mathfrak{Z}.$$
It turns out that the relevant part of (\ref{equation:les}) becomes a short exact sequence, which we can rewrite as
\begin{equation}\label{equation:ses}
0 \rightarrow \mathbb{Z}_p \rightarrow Pic^0_{K(2)} \xrightarrow{e} \mathfrak{Z} \rightarrow 0.
\end{equation}
This is an exact sequence of continuous ${\mathfrak{Z}}$-modules. The first term $\mathbb{Z}_p$ is generated by an element $\zeta \in H^1(\mathbb{G}_2, E_0)$ defined as follows.
\begin{defn}{\cite[Section 1.3]{MR2183282}}\label{definition:zeta}
The homomorphism $\zeta$ is the composition
$$\mathbb{G}_2 \cong \mathbb{S}_2 \rtimes \mathrm{Gal}(\mathbb{F}_{p^2}/\mathbb{F}_p) \xrightarrow{(\det, 0)} \mathbb{Z}_p^\times \cong \mathbb{Z}_p.$$
\end{defn}
\begin{rem}
We actually define an element $\zeta$ in $H^1(\mathbb{G}_2, \mathbb{Z}_p)$. We will denote its image in $H^1(\mathbb{G}_2, E_0)$ also by $\zeta$. Note that $g'(0)$ originally lives in $\mathbb{F}_{p}^\times$ and we denote its Teichm\"{u}ller lift to $\mathbb{Z}_p^\times$ also by $g'(0)$. Then
by choosing the isomorphism
\begin{align*}
\mathbb{Z}_p^\times &\cong \mathbb{Z}_p \\
1+px &\mapsto \frac{1}{p}\log(1+px),
\end{align*}
a concrete formula of $\zeta \in H^{1,0}(\mathbb{G}_2,(E_2)_0)$ is
\begin{align}\label{equation:zeta}
\mathbb{G}_2 &\rightarrow \mathbb{Z}_p \subset E_0\\ \notag
g &\mapsto \frac{1}{p}\log(g'(0)^{-(p+1)}\det(g)).
\end{align}
\end{rem}
There is a splitting map
\begin{align*}
{\mathfrak{Z}} &\hookrightarrow Pic^0_{K(2)} \\
\alpha &\mapsto S^{2\alpha}.
\end{align*}
We shall see a concrete construction of $S^{2\alpha}$ in Section \ref{subsec:alpha}, due to \cite{MR1263713}. As a continuous ${\mathfrak{Z}}$-module, $Pic^0_{K(2)}$ is generated by two topological generators $S^2$ and $S[det]$ with one relation. For the definition of the generator $S[det]$, see \cite{BBGS}. The map $\det \colon \mathbb{G}_n \rightarrow \mathbb{Z}_p^\times$ is defined in \cite[Section~1.3]{MR2183282}; see also \cite[Section 3]{BBGS}. The generator $S^2$ can also be realized as a crossed homomorphism $t_0 \colon \mathbb{G}_2 \rightarrow E_0^\times$. We denote $(S[det])^{\beta}$ by $S^\beta[det]$ for $\beta \in {\mathfrak{Z}}$. Each element $X\in Pic^0_{K(2)}$ can then be written as $S^{2\alpha} \wedge S^\beta [det]$ with $\alpha,\,\beta \in {\mathfrak{Z}}$. To state the relation, we introduce $\gamma = \displaystyle\lim_k p^{2k} \in {\mathfrak{Z}}$, a generator of the torsion part $\mathbb{Z}/(p^2-1)$ of ${\mathfrak{Z}}$, and $\lambda = \displaystyle\lim_k p^{2k}(p+1) \in {\mathfrak{Z}}$. The relation is
\begin{equation}\label{eq:relation}
S^{2\gamma \lambda} = S^\gamma[det].
\end{equation}
The relation follows from knowing the image of $\zeta$ in the short exact sequence \ref{equation:ses}. This is explained in Proposition \ref{prop:alg}.
\begin{prop}{\cite[Proposition~3.9]{Goerss}}\label{prop:alg}
The image of $\zeta$ is
$$\exp(p\zeta)=t_0^{-\lambda}\det.$$
\end{prop}
We copy the proof from \cite{Goerss} for completeness.
\begin{proof}
We examine the diagram
\begin{center}
\begin{tikzcd}
H^1(\mathbb{G}_2,\mathbb{Z}_p) \arrow[r, "\exp(p\,\cdot\,)"] \arrow[d]
& H^1(\mathbb{G}_2,\mathbb{Z}_p^\times) \arrow[d] \\
H^1(\mathbb{G}_2,E_0) \arrow[r, "exp(p-)"]
&H^1(\mathbb{G}_2,E_0^\times) \end{tikzcd}
\end{center}
Plugging in (\ref{equation:zeta}), we have $\exp(p\zeta)$ as a crossed homomorphism
\begin{align*}
\mathbb{G}_2 &\rightarrow \mathbb{Z}_p^\times \subset E_0^\times \\
g &\mapsto g'(0)^{-(p+1)}det(g).
\end{align*}
Note that $t_0(g)=g'(0)$ mod $m$. Recall that $\gamma = \displaystyle\lim_k p^{2k}\in {\mathfrak{Z}}$. Then $t_0(g)^{p^{2k}}=g'(0)^{p^{2k}}\text{ mod }m^{p^{2k}}$ and we have $t_0(g)^\gamma=g'(0)^\gamma=g'(0)$. Also note that $(p+1)\gamma=\lambda$; hence the image of $\zeta$ is
$t_0^{-\lambda}det(g)$.
\end{proof}
\begin{cor}\label{cor:e}
Let $X \in Pic_{K(2)}^0$ be $S^{2\alpha} \wedge S^\beta[det]$ where $\alpha, \, \beta \in {\mathfrak{Z}}$. Then $e(X)=\alpha+\lambda \beta$.
\end{cor}
\begin{proof}
By Proposition \ref{prop:alg} and the exactness of \ref{equation:ses}, the kernel of the map $e$ is generated by $S^{-2\lambda}S[det]$.
Therefore, we have
$$e(S^{2\alpha}\wedge S^\beta[det])=e(S^{2\alpha})+e(S^{2\lambda\beta})=\alpha+\lambda\beta.$$
\end{proof}
\subsection*{Construction of $S^{2\alpha}$}\label{subsec:alpha}
This construction appeared in \cite{Goerss} and \cite{MR1263713} and works for all heights. Here we focus on the height $2$ case and give a construction of $S^{2\alpha} \in Pic_{K(2)}^0$ for a given $\alpha \in {\mathfrak{Z}}$.
We introduce some notation for the ${\mathfrak{Z}}$-expansion.
\begin{defn}{\label{padic}}
Let $\alpha \in {\mathfrak{Z}}$ with the expansion
\[
\alpha = a_0 + (p^2-1) \sum^{\infty}_{i=1} a_ip^{i-1} \,\,\, \mbox{for $a_i \in \mathbb{Z}$, \, $0 \leqslant a_0 < p^2-1$, \,$0 \leqslant a_i < p$}.
\]
Denote
\[
\sum^{\infty}_{i=1} a_ip^{i-1} \in \mathbb{Z}_p
\]
by $\bar{\alpha}$.
For $k \geqslant 0$, define $\alpha_k \in \mathbb{Z}$ to be
\[
\alpha_k = a_0 + (p^2-1)\sum^{k}_{i=1} a_i p^{i-1}
\]
and for $k \geqslant 1$, define $\bar{\alpha}_k \in \mathbb{Z}$ to be
\[
\bar{\alpha}_k = \sum^{k}_{i=1} a_i p^{i-1}.
\]
\end{defn}
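As a quick numerical sanity check (not part of the argument), the truncations of Definition \ref{padic} can be computed directly from a digit sequence; the Python sketch below uses a helper name of our own choosing.

```python
def truncations(p, digits, k):
    """Return (alpha_k, abar_k) for the expansion of Definition "padic":
    alpha = a_0 + (p^2-1) * sum_{i>=1} a_i p^{i-1},
    given digits = (a_0, a_1, a_2, ...) with 0 <= a_0 < p^2-1, 0 <= a_i < p."""
    # abar_k = a_1 + a_2 p + ... + a_k p^{k-1}
    abar_k = sum(a * p**i for i, a in enumerate(digits[1:k + 1]))
    alpha_k = digits[0] + (p**2 - 1) * abar_k
    return alpha_k, abar_k

# For p = 5 and digits (3, 2, 4): abar_2 = 2 + 4*5 = 22 and
# alpha_2 = 3 + 24*22 = 531.
```

Note that consecutive truncations are compatible: $\alpha_{k+1} \equiv \alpha_k$ mod $(p^2-1)p^k$.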
We also need $v_1$-self maps in the construction. A $v_1^{\ell}$-self map of $X$ is a map
$$f \colon\Sigma^{\ell|v_1|}X \rightarrow X$$
such that $K(1)_*(f)$ is given by multiplication by $v_1^{\ell}$ (see \cite{MR1192553}). We work at large primes, so $S^0/p$ has a unique $v_1$-self map. We will abuse notation and write $v_1^\ell$ for powers of this unique map.
We will construct $S^{2\alpha}$ as a homotopy limit of generalized Moore spectra. The generalized Moore spectra are constructed by Hopkins and Smith (\cite{MR1652975}), explained and discussed in \cite[Chapter~6]{MR1192553} and \cite[Section~4]{MR1601906}. Here we will follow the notation in \cite{MR1601906} and make specific choices for our purposes. Let $S^0/p$ denote the cofiber of $p \colon S^0 \rightarrow S^0$. In general, $S^0/p^k$ denotes the cofiber of $p^k \colon S^0 \rightarrow S^0$. If $S^0/p^k$ admits a $v_1^{\ell}$-self map, let $S^0/(p^k,v_1^{\ell})$ denote the cofiber of $v_1^{\ell} \colon \Sigma^{\ell|v_1|} S^0/p^k \rightarrow S^0/p^k$.
Recall from the computation of $K(2)$-local spheres at $p \geqslant 5$ that $S^0/p^{k+1}$ admits a $v_1^{p^k}$-self map and hence $S^0/(p^{k+1},v_1^{p^k})$ exists. Also $S^0/(p^{k+1},v_1^{p^k})$ admits a $v_2^{p^k}$-self map. These $v_2^{p^k}$-self maps are weak equivalences in the $K(2)$-local category, so we can make the following definition.
\begin{defn}\label{exp}
Given $\alpha \in {\mathfrak{Z}}$, $S^{2\alpha}$ is defined as $\holim \Sigma^{2\alpha_k} S^0/(p^{k+1},v_1^{p^k})$. The maps in the inverse system are
$$\Sigma^{2\alpha_{k+1}} S^0/(p^{k+2},v_1^{p^{k+1}}) \overset{q}{\rightarrow} \Sigma^{2\alpha_{k+1}} S^0/(p^{k+1},v_1^{p^k}) \overset{v_2^{a_{k+1}p^{k}}}{\rightarrow} \Sigma^{2\alpha_k} S^0/(p^{k+1},v_1^{p^k}),$$
where the first map $q$ is the quotient and the second map is the $v_2^{a_{k+1}p^{k}}$-self map.
\end{defn}
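Since $|v_2| = 2(p^2-1)$, the suspension shifts in Definition \ref{exp} match exactly the degrees of the $v_2^{a_{k+1}p^{k}}$-self maps used in the inverse system. This bookkeeping can be verified numerically; in the sketch below the helper name and the sample digits are ours.

```python
def alpha_k(p, digits, k):
    # Truncation alpha_k = a_0 + (p^2-1)(a_1 + a_2 p + ... + a_k p^{k-1}).
    return digits[0] + (p**2 - 1) * sum(digits[i] * p**(i - 1) for i in range(1, k + 1))

p, digits = 7, (5, 3, 0, 6, 2)
v2_degree = 2 * (p**2 - 1)  # internal degree of v_2
for k in range(len(digits) - 1):
    # Suspension shift between consecutive stages of the inverse system:
    shift = 2 * alpha_k(p, digits, k + 1) - 2 * alpha_k(p, digits, k)
    # must equal the degree of the v_2^{a_{k+1} p^k}-self map
    assert shift == digits[k + 1] * p**k * v2_degree
```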
The construction of $S^{2\alpha}$ gives the splitting map
\begin{align*}
g \colon {\mathfrak{Z}} &\rightarrow Pic_{K(2)}^0 \\
\alpha &\mapsto S^{2\alpha}
\end{align*}
such that $e \circ g = \mathrm{id}_{{\mathfrak{Z}}}$. In particular, the group morphism $g$ is injective.
\section{Reduction modulo $p$}\label{reducesection}
In this section, we show that $X \in Pic^0_{K(2)}$ is of finite type if and only if $\pi_k S^{2e (X)}/p$ is a finite-dimensional $\mathbb{F}_p$-vector space for all $k \in \mathbb{Z}$. The case $X=L_{K(2)}S^0$ is explained in \cite{MR2458150}.
\begin{prop}{\label{fg}}
Let $X \in Pic_{K(2)}$ and define $X/p$ as $X \wedge S^0/p$. Then $X$ is of finite type if and only if $\pi_k X/p$ is finite for all $k \in \mathbb{Z}$.
\end{prop}
\begin{proof}
The only if part follows from the long exact sequence
$$\cdots \rightarrow \pi_k X \xrightarrow{p} \pi_k X \rightarrow \pi_k X/p \rightarrow \pi_{k-1}X \rightarrow \cdots.$$
The if part follows from the fact that $\pi_*X$ is $p$-complete when $\pi_k X/p$ is finite for all $k \in \mathbb{Z}$. In this case, we can show $\pi_k X/p^i$ is also finite for all $k \in \mathbb{Z}$ inductively by the cofiber sequences
$$X/p \rightarrow X/p^{i+1} \rightarrow X/p^i.$$
Then the $\lim^1$ term vanishes and we have
\begin{equation*}
\pi_*X \cong \lim \pi_*X/p^i.
\end{equation*}
\end{proof}
Now we focus on $X/p$ for $X \in Pic_{K(2)}$. The following theorem tells us that after reduction modulo $p$, the two topological generators $S^2$ and $S[det]$ of $Pic^0_{K(2)}$ become the same up to a ${\mathfrak{Z}}$-suspension.
\begin{thm}{\cite{MR1217353}; see also \cite[Theorem~3.11]{Goerss}}\label{thm:relation}
Let $\lambda = \lim p^{2k}(p+1) \in {\mathfrak{Z}}$. Then there is an equivalence
$$S^{2\lambda}/p = \Sigma^{2\lambda} S^0/p \simeq S[det]/p.$$
\end{thm}
In particular, this implies that $X/p$ is determined by $e(X) \in {\mathfrak{Z}}$.
\begin{thm}\label{thm:modulep}
Let $X$, $Y$ be elements in $Pic_{K(2)}^0$. Then $X/p \simeq Y/p$ if and only if $e(X)=e(Y)$.
\end{thm}
\begin{proof}
In Section \ref{picardgroup}, we saw that an element $X \in Pic_{K(2)}^0$ can be presented as $S^{2\alpha} \wedge S^\beta[det]$ where $\alpha,\, \beta \in {\mathfrak{Z}}$. By Theorem \ref{thm:relation} and Corollary \ref{cor:e}, we have
$$X/p \simeq S^{2\alpha+2\beta\lambda}/p= S^{2e(X)}/p.$$
Therefore, we have $X/p \simeq Y/p$ if and only if $e(X)=e(Y)$.
\end{proof}
Now for a given element $X \in Pic_{K(2)}^0$, Proposition \ref{fg} tells us that $X$ is of finite type if and only if $X/p$ is, and Theorem \ref{thm:modulep} implies that $X/p \simeq S^{2e (X)}/p$. Hence, the question of whether $X$ is of finite type reduces to the question of whether $S^{2e(X)}/p$ is of finite type. We have the following corollary.
\begin{cor}{\label{reduce}}
Given $X \in Pic_{K(2)}^0$, $X$ is of finite type if and only if $S^{2e (X)}/p$ is of finite type.
\end{cor}
\section{Computation of $\pi_* S^{2\alpha} / p$}\label{computationsection}
In this section, we give a computation of $\pi_* S^{2\alpha} / p$ for any $\alpha \in {\mathfrak{Z}}$. Our computation is based on the homotopy limit construction of the $\alpha$-spheres and the known computation of $\pi_*L_{K(2)}S^0/p$. The latter was computed by Shimomura and Yabe \cite{MR1318877} and explained by Behrens \cite{MR2914955}. See also Lader's thesis \cite{Lader} for a more group-theoretic computation of $\pi_*L_{K(2)}S^0/p$. (We will often omit the $L_{K(2)}$ in the following discussion.)
We compute $\pi_n S^{2\alpha} / p=\pi_n\holim \Sigma^{2\alpha_k}S^0/(p,v_1^{p^k})$ via the short exact sequence
$$0\rightarrow \limone \pi_{n+1}\Sigma^{2\alpha_k}S^0/(p,v_1^{p^k}) \rightarrow \pi_n\holim \Sigma^{2\alpha_k}S^0/(p,v_1^{p^k}) \rightarrow \lim \pi_n\Sigma^{2\alpha_k}S^0/(p,v_1^{p^k})\rightarrow 0.$$
Because $\pi_n \Sigma^{2\alpha_k}S^0/(p,v_1^{p^k})$ is finite for all $n \in \mathbb{Z}$, the $\limone$ term vanishes and
$$\pi_n S^{2\alpha} / p \cong \lim \pi_n \Sigma^{2\alpha_k}S^0/(p,v_1^{p^k}).$$
We compute $\pi_* S^0/(p,v_1^{p^k})$ via the homotopy fixed point spectral sequence
$$E_2^{s,t}=H^s (\mathbb{G}_2, E_t (S^0/(p,v_1^{p^k}))) \Rightarrow \pi_{t-s}(S^0/(p,v_1^{p^k})).$$
There is no room for differentials or nontrivial extensions for degree reasons, because
$$E_2^{s,t}=0 \text{ when } 2(p-1) \nmid t \text{ or } s>4,$$
$$d_r \colon E_r^{s,t} \rightarrow E_r^{s+r,t+r-1}.$$
For every nontrivial group $\pi_m L_{K(2)}S^0/(p,v_1^k)$, there is a unique pair $(s,t)$ such that $m=t-s$ and
$$\pi_m L_{K(2)}S^0/(p,v_1^k) \cong H^s(\mathbb{G}_2, E_t/(p,v_1^k)).$$
Therefore, we will not distinguish between elements in the $E_2$-page and elements in the homotopy groups.
We list the result for $\pi_*S^0/p$ below as Theorem \ref{computation}. We need to compute the map in the inverse limit
$$f_{k+1} \colon \Sigma^{2 \alpha_{k+1}} S^0/(p,v_1^{p^{k+1}}) \overset{q}{\rightarrow} \Sigma^{2 \alpha_{k+1}} S^0/(p,v_1^{p^{k}}) \overset{v_2^{a_{k+1}p^{k}}}{\rightarrow} \Sigma^{2 \alpha_k} S^0/(p,v_1^{p^k}),$$
where $\alpha=a_0+a_1(p^2-1)+a_2 (p^2-1)p+\cdots$. Algebraically this is the quotient map composed with multiplication by $v_2^{a_{k+1}p^{k}}$
\begin{align*}
\pi_*f_{k+1} \colon H^s(\mathbb{G}_2,E_*\Sigma^{2 \alpha_{k+1}} S^0/(p,v_1^{p^{k+1}})) & \rightarrow H^s(\mathbb{G}_2,E_* \Sigma^{2 \alpha_k} S^0/(p,v_1^{p^k}))\\
x &\mapsto v_2^{a_{k+1}p^{k}}x.
\end{align*}
Therefore, the computation of $\pi_* S^{2\alpha} / p$ reduces to the computation of the limit, which we will explain in this section. We will need some computational results about $\pi_* S^0/(p, v_1^{p^k})$. We follow the notation in \cite{MR2914955}; for readers who are familiar with Shimomura's notations, there is a dictionary between the names in \cite{MR2914955}. The algebraic descriptions are convenient for limit computations, but pictures of the result are much easier to digest, so the author encourages readers to see the figures in \cite{MR2914955}. In this section, all the differentials are really $v_1$-Bockstein spectral sequence differentials. In particular, we only list the leading term with respect to the $v_1$-power in each formula. For example, the differential
$$d v_2^{sp^n} = v_1^{b_n}v_2^{sp^n-p^{n-1}} h_0$$
really means
$$d v_2^{sp^n} = a v_1^{b_n}v_2^{sp^n-p^{n-1}} h_0+\text{higher $v_1$ terms}$$
for some $a \in \mathbb{F}_p^\times$ in the $v_1$-Bockstein spectral sequence.
\begin{defn}
Let $A$ be the $\mathbb{F}_p$-algebra with $\mathbb{F}_p$-basis $B=\{1, h_0,h_1,g_0,g_1, h_0g_1=h_1g_0\}$. We assign bidegrees (homological degree, internal degree) to elements in $A$ as follows
\begin{align*}
|1|&=(0,0), & |h_0|&=(1, 2(p-1)), & |h_1|&=(1, -2(p-1)), \\
|g_0|&=(2, 2(p-1)), & |g_1|&=(2, -2(p-1)), & |h_0g_1|&=(3,0).
\end{align*}
\end{defn}
\begin{thm}{\cite[Theorem 3.2]{MR0431168}}
We have
$$H^s(\mathbb{G}_2, (E_2/(p, v_1))_t) = \mathbb{F}_p[v_2^{\pm 1}]\otimes A \otimes \Lambda[\zeta].$$
\end{thm}
Let $\mathbb{G}_2^1$ denote the kernel of the homomorphism $\zeta \colon \mathbb{G}_2 \rightarrow \mathbb{Z}_p$ in (\ref{definition:zeta}).
Then $\mathbb{G}_2=\mathbb{G}_2^1 \rtimes \mathbb{Z}_p$
and
$$H^s(\mathbb{G}_2, (E_2/p)_t) = H^s(\mathbb{G}_2^1, (E_2/p)_t) \otimes \Lambda[\zeta].$$
\begin{thm}[\cite{MR1318877},\cite{MR2914955}]{\label{computation}}
There exists a complex $(C_0,d)$ such that $H^*(C_0,d)=H^*(\mathbb{G}_2^1, (E_2/p)_*)$, where $C_0 := A \otimes \mathbb{F}_p[v_2^{\pm 1}] \otimes \mathbb{F}_p[v_1]$ and the differentials, given in Table \ref{diff0}, are $v_1$-linear. The $\mathbb{F}_p$-generators are listed in Table \ref{element0}.
Let $C$ be $C_0 \otimes \Lambda(\zeta)$. Then we have
$$H^*(C,d)=H^*(\mathbb{G}_2, (E_2)_*/p).$$
\end{thm}
\begin{table}[H]
\centering
\caption{Differentials in $C_0$ \cite[Section 4]{MR1318877}, \cite[Theorem 3.2]{MR2914955}}
\label{diff0}
\begin{tabular}{| l | l |}
\hline
$d v_2^{sp^n} = v_1^{b_n}v_2^{sp^n-p^{n-1}} h_0$ & $p \nmid s, n \geqslant 1$ \\
\hline
$d v_2^s = v_1 v_2^s h_1$ & $p \nmid s$ \\
\hline
$d v_2^{sp^n} h_0 = v_1^{A_n + 2} v_2^{sp^n - \frac{p^n-1}{p-1}} g_1$ & $ s \not\equiv 0, -1 ~ mod ~ p, ~ n \geqslant 0$ \\
\hline
$d v_2^{sp^n - p^{n-2}} h_0 = v_1^{p^n - p^{n-2} + A_{n-2} + 2} v_2^{sp^n - p^{n-1} - \frac{p^{n-2} - 1}{p-1}} g_1 $ & $ \forall s, ~ n \geqslant 2$ \\
\hline
$d v_2^{sp} h_1 = v_1^{p-1} v_2^{sp - 1} g_0$ & $ \forall s$ \\
\hline
$d v_2^{sp^n - \frac{p^{n-1}-1}{p-1} } g_1 = v_1^{b_n} v_2^{sp^n - \frac{p^n - 1}{p-1}} h_0 g_1$ & $ s \not\equiv -1 ~ mod ~ p, ~ n \geqslant 1$ \\
\hline
$d v_2^s g_0 = v_1 v_2^s h_0 g_1$ & $ s \not\equiv -1 ~ mod ~ p$\\
\hline
\end{tabular}
\end{table}
We follow the notation in \cite{MR2914955} to define $b_n$ and $A_n$ as follows
$$b_n=
\begin{cases}
p^{n-1}(p+1)-1, \,\,\, n\geqslant 1,\\
1, \,\,\, n=0;
\end{cases}
$$
$$
A_n=
\begin{cases}
(p^n-1)(p+1)/(p-1), \,\,\, n\geqslant 1,\\
0, \,\,\, n=0.
\end{cases}
$$
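For concreteness, the constants $b_n$ and $A_n$ are easy to tabulate; the Python sketch below implements the two formulas above (the function names are ours).

```python
def b(p, n):
    # b_n = p^{n-1}(p+1) - 1 for n >= 1, and b_0 = 1
    return p**(n - 1) * (p + 1) - 1 if n >= 1 else 1

def A(p, n):
    # A_n = (p^n - 1)(p+1)/(p-1) for n >= 1, and A_0 = 0;
    # (p-1) divides p^n - 1, so this is always an integer
    return (p**n - 1) * (p + 1) // (p - 1) if n >= 1 else 0

# For p = 5: b_1 = 5, b_2 = 29, A_1 = 6, A_2 = 36.
```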
\begin{table}[H]
\centering
\caption{Generators for $H^*(C_0, d)$}
\label{element0}
\begin{tabular}{| l |p{5cm} | p{5cm}|}
\hline
$v_1^j$ & & $j \geqslant 0 $ \\
\hline
$v_1^j h_0$ & & $j \geqslant 0 $ \\
\hline
$v_1^j v_2^{sp^n-p^{n-1}} h_0$ & $ p \nmid s, ~ n \geqslant 1$ & $ 0 \leqslant j \leqslant b_n - 1 $ \\
\hline
$v_2^s h_1$ & $p \nmid s$ & \\
\hline
$v_1^j v_2^{sp-1} g_0$ & $\forall s$ & $ 0 \leqslant j \leqslant p-2 $ \\
\hline
$v_1^j v_2^{sp^n - \frac{p^n - 1}{p - 1}} g_1 $ & $ s \not\equiv 0, -1 ~ mod ~ p, ~ n \geqslant 0$ & $ 0 \leqslant j \leqslant A_n + 1 $ \\
\hline
$v_1^j v_2^{sp^n - p^{n-1} - \frac{ p^{n-2}-1}{p-1}} g_1 $ & $\forall s, ~ n \geqslant 2 $ & $ 0 \leqslant j \leqslant p^n - p^{n-2} +A_{n-2} +1 $ \\
\hline
$v_1^j v_2^{sp^n - \frac{p^n - 1}{p-1}} h_0 g_1$ & $ s \not\equiv -1 ~ mod ~ p, ~ n \geqslant 0$ & $ 0 \leqslant j \leqslant b_n - 1 $ \\
\hline
\end{tabular}
\end{table}
For $S^0/(p, v_1^{p^k})$, we have $H^*(\mathbb{G}_2^1, E_* S^0/(p, v_1^{p^k}) )= H^*(C/v_1^{p^k}, d),$ which can be computed from $H^*(C,d)$ via the long exact sequence
$$\cdots \rightarrow H^s(C,d) \xrightarrow{\times v_1^{p^k}} H^s(C,d) \xrightarrow{q} H^s(C/v_1^{p^k},d) \rightarrow \cdots.$$
The $\mathbb{F}_p$-generators of $H^*(C_0/v_1^{p^k},d)$ are listed in Table~\ref{element1}, where the elements are divided into two subsets: the upper half $Coker$ part (the cokernel of $v_1^{p^k}$), and the lower half $Ker$ part (the kernel of $v_1^{p^k}$). While the elements in the $Coker$ part have straightforward names, we need to keep track of the boundary connecting morphism to name the elements in the $Ker$ part. For example, $v_1^jv_2^{sp^n-p^{n-1}}h_0$ with $p \nmid s$, $n \geqslant 1$, $\max\{0,\,b_n-p^k\} \leqslant j \leqslant b_n-1$ is in the kernel of
$$\times v_1^{p^k} \colon H^1(C_0,d) \rightarrow H^1(C_0,d)$$ when $k \geqslant 1$. Let $\partial_0$ be the boundary connecting morphism
$$\partial_0 \colon H^0(C_0/v_1^{p^k},d) \rightarrow H^1(C_0,d)$$
in the long exact sequence
$$H^0(C_0,d) \xrightarrow{v_1^{p^k}} H^0(C_0,d) \rightarrow H^0(C_0/v_1^{p^k},d) \xrightarrow{\partial_0} H^1(C_0,d) \xrightarrow{v_1^{p^k}} H^1(C_0,d) \rightarrow \cdots.$$
By the snake lemma and the following differential in $C_0$, for $p \nmid s$, $n \geqslant 1$
$$d_0(v_2^{sp^n})=v_1^{b_n}v_2^{sp^n-p^{n-1}}h_0,$$
we have
$$\partial_0 (v_1^{j-b_n+p^k}v_2^{sp^n})=v_1^jv_2^{sp^n-p^{n-1}}h_0.$$
After reindexing the power of $v_1$, we name the corresponding $\mathbb{F}_p-$generators in the $Ker$ part of $H^0(C_0/v_1^{p^k})$ as $v_1^jv_2^{sp^n}$ with $p \nmid s$, $n \geqslant 1$ and $\max\{0,\,p^k-b_n\} \leqslant j \leqslant p^k-1.$ Similarly, the first row in the $Ker$ part comes from the differential
$$dv_2^s=v_1v_2^sh_1.$$
Denote the result of $H^*(C_0/v_1^{p^k},d)$ as $X_1$; then $X_1 \otimes \Lambda (\zeta)$ gives the $E_2$-page, and in this case also the $E_{\infty}$-page, of the Adams--Novikov spectral sequence (ANSS) that converges to $\pi_* S^0/(p, v_1^{p^k})$.
\begin{table}[H]
\centering
\caption{Generators for $H^*(C_0/(v_1^{p^k}), d)$}
\label{element1}
\begin{tabular}{| p{3.5cm} | l |p{3.5cm} | p{3.2cm}|}
\hline
Names & $Coker$ part & & \\
\hline
\hline
$(1, k)$ & $v_1^j$ & & $0 \leqslant j \leqslant p^k-1 $ \\
\hline
$(h_0, k)$ & $v_1^j h_0$ & & $0 \leqslant j \leqslant p^k-1 $ \\
\hline
$(v_2^{sp^n-p^{n-1}}h_0,k)$ & $v_1^j v_2^{sp^n-p^{n-1}} h_0$ & $ p \nmid s, ~ n \geqslant 1$ & $ 0 \leqslant j \leqslant \min \{ b_n - 1, p^k - 1 \}$ \\
\hline
$(v_2^sh_1,k)$ & $v_2^s h_1$ & $p \nmid s$ & \\
\hline
$(v_2^{sp-1} g_0, k)$ & $v_1^j v_2^{sp-1} g_0$ & $\forall s$ & $ 0 \leqslant j \leqslant \min\{ p-2, p^k-1 \}$ \\
\hline
$(v_2^{sp^n - \frac{p^n - 1}{p - 1}} g_1,k)$ & $v_1^j v_2^{sp^n - \frac{p^n - 1}{p - 1}} g_1 $ & $ s \not\equiv 0, -1 ~ mod ~ p, ~ n \geqslant 0$ & $ 0 \leqslant j \leqslant \min\{A_n + 1, p^k - 1 \} $ \\
\hline
$(v_2^{sp^n - p^{n-1} - \frac{ p^{n-2}-1}{p-1}} g_1,
k)$ & $v_1^j v_2^{sp^n - p^{n-1} - \frac{ p^{n-2}-1}{p-1}} g_1$ & $\forall s, ~ n \geqslant 2 $ & $ 0 \leqslant j \leqslant \min \{ p^n - p^{n-2} +A_{n-2} +1, p^k - 1 \}$ \\
\hline
$(v_2^{sp^n - \frac{p^n - 1}{p-1}} h_0 g_1, k)$ & $v_1^j v_2^{sp^n - \frac{p^n - 1}{p-1}} h_0 g_1$ & $ s \not\equiv -1 ~ mod ~ p, ~ n \geqslant 0$ & $ 0 \leqslant j \leqslant \min \{ b_n - 1, p^k - 1 \}$ \\
\hline
\hline
&$Ker$ part & & \\
\hline
\hline
&$v_1^{p^k - 1} v_2^s$ & $ p \nmid s$ & \\
\hline
&$v_1^j v_2^{sp^n}$ & $ p \nmid s, ~ n \geqslant 1$ & $\max \{ 0, p^k - b_n \} \leqslant j \leqslant p^k - 1$ \\
\hline
&$v_1^j v_2^{sp^n} h_0 $ & $ s \not\equiv 0, -1 ~mod ~ p, ~ n \geqslant 0$ & $ \max \{0, p^k - A_n -2 \} \leqslant j \leqslant p^k -1$ \\
\hline
&$v_1^j v_2^{sp^n - p^{n-2}} h_0$ & $\forall s, ~ n \geqslant 2$ & $\max \{0, p^k - (p^n - p^{n-2} + A_{n-2} +2) \} \leqslant j \leqslant p^k - 1$ \\
\hline
&$v_1^j v_2^{sp} h_1$ & $\forall s$ & $\max \{0, p^k - p +1 \} \leqslant j \leqslant p^k - 1$\\
\hline
&$v_1^{p^k - 1} v_2^s g_0$ & $ s \not\equiv -1 ~mod ~ p$ & \\
\hline
&$v_1^j v_2^{sp^n - \frac{p^{n-1}-1}{p-1}} g_1$ & $ s \not\equiv -1~mod ~ p,~ n \geqslant 1$ & $\max \{ 0, p^k - b_n \} \leqslant j \leqslant p^k - 1$ \\
\hline
\end{tabular}
\end{table}
The essential computation is done by Shimomura and Yabe \cite{MR1318877}. The author learned it from Behrens's paper \cite{MR2914955}. The idea of organizing the result as a chain complex goes back to Henn, Karamanov and Mahowald.
We introduce some notation before going into the computation of $\displaystyle\lim_k\pi_*\Sigma^{2\alpha_k}S^0/(p,v_1^{p^k})$. In Definition \ref{padic}, for an element $\alpha \in {\mathfrak{Z}}$, we have defined a $p$-adic number $\bar{\alpha}$ and its truncations $\bar{\alpha}_k$. We use the notation $v_2^{\ell-2\alpha}$ to name elements in the limit $\displaystyle\lim_k\pi_*\Sigma^{2\alpha_k}S^0/(p,v_1^{p^k})$.
\begin{defn}\label{rem:name}
Let $x \in B$ and
\begin{equation}
\alpha = a_0 + a_1(p^2-1)+ a_2 p(p^2-1) + a_3 p^2(p^2-1) + \cdots.
\end{equation}
Then for $m,\ell \in \mathbb{Z}$, define an element
$$v_1^m v_2^{\ell-2\alpha} x = \lim_k y_k \in \displaystyle\lim_k H^{*,*}(S^{2\alpha_k}/ (p, v_1^{p^k}))$$
by setting
\begin{equation}
y_k = v_1^m v_2^{\ell-a_1-a_2p-\cdots-a_{k}p^{k-1}}\, x \in H^{*,*-2a_0}(S^0/(p, v_1^{p^k}))
\end{equation}
and the map $y_{k+1} \rightarrow y_k$ is given by multiplication by $v_2^{a_{k+1}p^k}$.
We will denote $\ell-a_1-a_2p-\cdots-a_{k}p^{k-1}$ by $\ell_k$.
\end{defn}
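The exponent $\ell_k$ of Definition \ref{rem:name} is again elementary arithmetic; the following sketch (helper name ours) computes it from a digit sequence.

```python
def ell_k(p, ell, digits, k):
    # ell_k = ell - (a_1 + a_2 p + ... + a_k p^{k-1}) = ell - abar_k
    return ell - sum(digits[i] * p**(i - 1) for i in range(1, k + 1))

# Consecutive truncations differ by one digit:
#   ell_{k} - ell_{k-1} = -a_k p^{k-1}.
```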
We use the following notation to describe the power of $v_1$.
\begin{defn}
For $k \in \mathbb{N}$, define the map $J_k$ from $\mathbb{Z} \times R \times B$ to subsets of the natural numbers $\mathbb{N}$ by
$$(\ell, \alpha, x) \mapsto \{j \in \mathbb{N} \mid v_1^j v_2^{\ell_k} x \neq 0 \in H^*(C_0/(v_1^{p^k}),d)\}.$$
\end{defn}
\begin{thm}
Given $\alpha \in {\mathfrak{Z}}$, we have an $\mathbb{F}_p$-basis of $\pi_*S^{2\alpha}/p$ as follows:
$$\{\zeta^\epsilon v_1^j v_2^{\ell-2\alpha} x \mid \epsilon=0,1,\ \ell \in \mathbb{Z},\ x\in B,\ j \in \liminf_{k\rightarrow \infty} J_k(\ell, \alpha, x)\}.$$
\end{thm}
\begin{proof}
The above gives an $\mathbb{F}_p$-basis of $\displaystyle\lim_k \pi_*\Sigma^{2\alpha_k} S^0/(p,v_1^{p^k})$ by the definition of $J_k$. The theorem follows from
$$\pi_*S^{2\alpha}/p=\lim_k \pi_*\Sigma^{2\alpha_k} S^0/(p,v_1^{p^k})$$
as we discussed in the beginning.
\end{proof}
Note that $\pi_*S^{2\alpha}/p$ is a finitely generated $\mathbb{Z}_p$-module if and only if $(\pi_*S^{2\alpha}/p)/\zeta$ is a finitely generated $\mathbb{Z}_p$-module. We will therefore drop $\zeta$ in the later analysis.
\begin{defn}\label{defn:stable1}
We say an element $(\ell, \alpha, x) \in \mathbb{Z}\times R \times B$ is \emph{stable} if there exists $K \in \mathbb{Z}^+$ such that for all $k>K$, we have $J_k(\ell, \alpha, x)=J_K(\ell, \alpha, x)$. Otherwise, we say $(\ell, \alpha, x)$ is \emph{unstable}.
Given an $\alpha \in {\mathfrak{Z}}$, we say an element $v_1^j v_2^{\ell-2\alpha} x \in \pi_*S^{2\alpha}/p$ is \emph{$\alpha$-stable} if $(\ell,\alpha, x)$ is stable. Otherwise, we will call the element \emph{$\alpha$-unstable}.
\end{defn}
In particular, if $\displaystyle\liminf_{k\rightarrow \infty} J_k(\ell, \alpha, x)=\emptyset$, we say the element $(\ell, \alpha, x)$ is stable to trivial.
\begin{ex}
When $\ell-\bar{\alpha}=0$, $\alpha$ is an integer and all elements $(\ell, \alpha, x)$ are stable. When $\ell-\bar{\alpha} = \displaystyle\sum^\infty_{k>0}(p-2)p^k+p-1$, we have interesting $\alpha$-unstable elements. For example, set $\ell=0$ and $\bar{\alpha}=\displaystyle\sum^\infty_{k\geqslant 0}p^k$. Then $v_1^jv_2^{-2\alpha}g_1$ and $v_1^jv_2^{-2\alpha}h_0g_1$ are two families of unstable elements. Indeed, when $k$ is large, we have $\ell_k=-\frac{p^k-1}{p-1}$. Then for $k>0$, we have $J_k(0, \alpha, g_1)=\{0 \leqslant j \leqslant p^k-1\}$ since $\max\{0,\,p^k-b_k\}=0$, and $\displaystyle\liminf_{k\rightarrow \infty}J_k(0, \alpha, g_1)=\mathbb{N}$. This gives an infinite $v_1$-tower on $v_2^{-2\alpha}g_1$. The element $v_1^jv_2^{-2\alpha}h_0g_1$ behaves similarly.
\end{ex}
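In the example above, $\bar{\alpha}_k$ is the geometric sum $1+p+\cdots+p^{k-1}$, so with $\ell=0$ the closed form $\ell_k=-\frac{p^k-1}{p-1}$ can be confirmed directly:

```python
# Sanity check of the example: abar = 1 + p + p^2 + ... and ell = 0, so
# ell_k = -abar_k = -(p^k - 1)/(p - 1).
p = 5
for k in range(1, 10):
    abar_k = sum(p**i for i in range(k))      # 1 + p + ... + p^{k-1}
    assert -abar_k == -(p**k - 1) // (p - 1)
```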
Because $S^{2\alpha}$ is of finite type if and only if $S^{2\alpha-2a_0}$ is of finite type, we can now assume that $a_0=0$, i.e., $\alpha=(p^2-1)(a_1+a_2p+\cdots)$.
We begin with a technical numerical lemma.
\begin{lem}\label{lemma: number}
Let $\alpha \in {\mathfrak{Z}}$ and $\ell \in \mathbb{Z}$. If $\ell-\bar{\alpha} \neq 0$, then for any $K>0$, there exists $k>K$ such that $\ell_k=sp^n$ where $p\nmid s$ and $n<k$.
\end{lem}
\begin{proof}
We prove this by contradiction. Note that if a nonzero integer $a$ is not divisible by $p^k$, then $a=sp^n$ where $p\nmid s$ and $n<k$. So assume the statement for $\ell_k$ is not true: then there exists $K$ such that for all $k>K$, $\ell_k$ is divisible by $p^k$. This implies that $\ell - \bar{\alpha}= 0$, a contradiction to the condition $\ell - \bar{\alpha} \neq 0$.
\end{proof}
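Lemma \ref{lemma: number} can be illustrated numerically: beyond any $K$ one finds $k$ with $v_p(\ell_k) < k$, i.e.\ $\ell_k = sp^n$ with $p \nmid s$ and $n < k$. In the sketch below all names and the chosen digit sequence are ours, for illustration only.

```python
def vp(m, p):
    # p-adic valuation of a nonzero integer m
    n = 0
    while m % p == 0:
        m //= p
        n += 1
    return n

p, ell = 3, 5
digits = [0] + [1, 2] * 20   # hypothetical digit sequence of abar
for K in range(1, 10):
    found = False
    for k in range(K + 1, K + 30):
        lk = ell - sum(digits[i] * p**(i - 1) for i in range(1, k + 1))
        if lk != 0 and vp(lk, p) < k:   # lk = s p^n with p not dividing s, n < k
            found = True
            break
    assert found
```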
The key observation is the following lemma.
\begin{lem}{\label{stable}}
The subgroup of $\alpha$-unstable elements in $\pi_* S^{2\alpha}/p$ is a finitely generated $\mathbb{F}_p[v_1]$-submodule.
\end{lem}
\begin{proof}
By Definition \ref{defn:stable1}, we need to show there are only finitely many pairs $(\ell,x) \in \mathbb{Z}\times B$ such that $(\ell, \alpha, x)$ is unstable. Note that there are only 6 elements in $B$. We argue case by case on $x \in B$, proving that for a fixed $x \in B$ and $\alpha \in {\mathfrak{Z}}$, there are only finitely many $\ell \in \mathbb{Z}$ such that $(\ell, \alpha, x)$ is unstable. From now on, we can assume that $\ell - \bar{\alpha} \neq 0$. By Lemma \ref{lemma: number}, there exists some $k_0$ such that $p^{k_0}>|\ell|$, and $\ell_{k_0}=sp^n$ where $p\nmid s$, $n<k_0$.
\begin{description}
\item[Case 1] $x=1$\\
Since $p^{k_0} \mid \ell_k-\ell_{k_0}$ for $k>k_0$, we have $\ell_k=s_kp^n$ where $p \nmid s_k$. When $k>k_0$, we have $k>n$ and $p^k>b_n$, so $\max \{ 0, p^k - b_n \}=p^k-b_n$. Then $J_k(\ell, \alpha, 1)=\{p^k-b_n \leqslant j \leqslant p^k - 1\}$ and $\displaystyle\liminf_{k\rightarrow \infty}J_k(\ell, \alpha, 1)=\emptyset$.
\item[Case 2] $x=h_0$\\
\begin{enumerate}
\item If $s\neq -1$ mod $p$, for $k>k_0$, we have $\ell_k=s_kp^n$ where $s_k \neq 0,\,-1$ mod $p$. In this case, we have $\max \{ 0,\, p^k - A_n -2\}=p^k - A_n -2$. Then $J_k(\ell, \alpha, h_0)=\{p^k - A_n -2 \leqslant j \leqslant p^k - 1\}$ and $\displaystyle\liminf_{k\rightarrow \infty}J_k(\ell, \alpha, h_0)=\emptyset$.
\item If $s=-1$ mod $p$, then we write $s=s'p-1$ and consider two subcases.
\begin{enumerate}
\item If $p \nmid s' + a_{k_0+1}$, then for $k>k_0$, we have $\ell_k=s'_kp^{n+1}-p^n$ where $p \nmid s'_k$ and hence $J_k(\ell, \alpha, h_0)=\{0 \leqslant j \leqslant \min\{b_{n+1}-1, p^k-1\}\}$. Because $k>k_0>n$, we have $\min\{b_{n+1}-1, p^k-1\}=b_{n+1}-1$ which is independent of $k$. Therefore, $(\ell, \alpha, h_0)$ is stable in this case.\\
\item If $p \mid s' + a_{k_0+1}$, then for $k>k_0$, we have $\ell_k=s'_kp^{n+2}-p^n$ and $\max \{ 0, p^k - (p^n-p^{n-2}+A_{n-2}+2)\}=p^k - (p^n-p^{n-2}+A_{n-2}+2)$. Then $J_k(\ell, \alpha, h_0)=\{p^k - (p^n-p^{n-2}+A_{n-2}+2) \leqslant j \leqslant p^k - 1\}$. In this case, we have $\displaystyle\liminf_{k\rightarrow \infty}J_k(\ell, \alpha, h_0)=\emptyset$.
\end{enumerate}
\end{enumerate}
\item[Case 3] $x=h_1$\\
\begin{enumerate}
\item If $n>0$, for $k>k_0$, we have $\max \{ 0, p^k-p+1\}=p^k-p+1$ and $J_k(\ell, \alpha, h_1)=\{p^k-p+1 \leqslant j \leqslant p^k - 1\}$. We have $\displaystyle\liminf_{k\rightarrow \infty}J_k(\ell, \alpha, h_1)=\emptyset$.
\item If $n=0$, for $k>k_0$, we have $J_k(\ell, \alpha, h_1)=\{0\}$ which is independent of $k$ and $(\ell, \alpha, h_1)$ is stable in this case.
\end{enumerate}
\item[Case 4] $x=g_0$\\
\begin{enumerate}
\item If $\ell_{k_0}=-1$ mod $p$, we have $\min\{p-2,\,p^k-1\}=p-2$, and then $J_k(\ell, \alpha, g_0)=\{0 \leqslant j \leqslant p-2\}$. Note that $J_k(\ell, \alpha, g_0)$ is independent of $k$ so $(\ell, \alpha, g_0)$ is stable in this case.
\item If $\ell_{k_0}\neq-1$ mod $p$, then $J_k(\ell, \alpha, g_0)=\{p^k-1\}$ and $\displaystyle\liminf_{k\rightarrow \infty}J_k(\ell, \alpha, g_0)=\emptyset$.
\end{enumerate}
In the following Case 5 and Case 6, we may assume that $\ell-\bar{\alpha} \neq \displaystyle\sum^\infty_{k>0}(p-2)p^k+(p-1)$, because at most one $\ell \in \mathbb{Z}$ fails this condition. With this condition, there exists a $k_0$ such that
$$\ell-\bar{\alpha}=sp^{k_0}-p^{k_0-1}-p^{k_0-2}-\cdots-1 \text{ mod } p^{k_0+1}$$
and $s\neq -1$ mod $p$.
\item[Case 5] $x=g_1$\\
\begin{enumerate}
\item If $s\neq0$ mod $p$, then for $k>k_0+1$, we have $\ell_k=s_kp^{k_0}-p^{k_0-1}-p^{k_0-2}-\cdots-1$ with $s_k \neq 0,\,-1$ mod $p$, and $\min\{A_{k_0}+1,p^k-1\}=A_{k_0}+1$. Then $J_k(\ell, \alpha, g_1)=\{0\leqslant j \leqslant A_{k_0}+1 \}$, which is independent of $k$. Therefore, $(\ell, \alpha, g_1)$ is stable in this case.
\item If $s=0$ mod $p$ and $s= -p$ mod $p^2$, then for $k>k_0+1$, we have $\ell_k=s_kp^{k_0+2}-p^{k_0+1}-p^{k_0-1}-p^{k_0-2}-\cdots-1$, and $\min\{p^{k_0+2}-p^{k_0}+A_{k_0}+1,p^k-1\}=p^{k_0+2}-p^{k_0}+A_{k_0}+1$. Then $J_k(\ell, \alpha, g_1)=\{0\leqslant j \leqslant p^{k_0+2}-p^{k_0}+A_{k_0}+1 \}$, which is independent of $k$. Therefore, $(\ell, \alpha, g_1)$ is stable in this case.
\item If $s=0$ mod $p$ and $s\neq -p$ mod $p^2$, then for $k>k_0+1$, we have $\ell_k=s_kp^{k_0+1}-p^{k_0-1}-p^{k_0-2}-\cdots-1$ with $s_k \neq -1$ mod $p$, and $J_k(\ell, \alpha, g_1)=\{\max\{0,\,p^k-b_{k_0+1}\} \leqslant j \leqslant p^k-1\}$. We have $\displaystyle\liminf_{k\rightarrow \infty}J_k(\ell, \alpha, g_1)=\emptyset$.
\end{enumerate}
\item[Case 6] $x=h_0g_1$\\
For $k>k_0+1$, we have $\ell_k=s_kp^{k_0}-p^{k_0-1}-p^{k_0-2}-\cdots-1$ with $s_k \neq -1$ mod $p$, and $\min\{b_{k_0}-1,p^k-1\}=b_{k_0}-1$. Then $J_k(\ell, \alpha, h_0g_1)=\{0\leqslant j \leqslant b_{k_0}-1 \}$, which is independent of $k$. Therefore, $(\ell, \alpha, h_0g_1)$ is stable in this case.
\end{description}
\end{proof}
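The mechanism behind Case 1 can also be seen numerically: the windows $\{p^k-b_n \leqslant j \leqslant p^k-1\}$ drift to infinity, so consecutive $J_k$'s are already disjoint and the liminf is empty. A sketch, with $p$ and $n$ chosen only for illustration:

```python
def b(p, n):
    # b_n = p^{n-1}(p+1) - 1 for n >= 1, and b_0 = 1
    return p**(n - 1) * (p + 1) - 1 if n >= 1 else 1

p, n = 5, 2
# J_k for x = 1 in Case 1: the window [p^k - b_n, p^k - 1]
windows = [set(range(p**k - b(p, n), p**k)) for k in range(3, 8)]
for w1, w2 in zip(windows, windows[1:]):
    assert w1 & w2 == set()   # consecutive windows are disjoint
```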
Lemma \ref{stable} reduces the question of whether $S^{2\alpha}$ is of finite type to the question of whether there are finitely many $\alpha$-stable $\mathbb{F}_p$-generators $v_1^jv_2^{\ell-2\alpha}x$ in a given bidegree $(s,t)$.
We now divide the elements of $H^*(C_0/v_1^{p^k},d)$ into subsets of $v_1$-towers. First we divide the elements into subsets by rows; that is, we denote the subset of elements in a row by the name in the first column of Table \ref{element1}. For example, the first row in the table is $v_1^j$, $0 \leqslant j \leqslant p^k-1$, so we define a subset $(1,k)$ to be $\{ v_1^j \mid 0 \leqslant j \leqslant p^k-1\}$. The name $1$ indicates that it consists of $v_1$-towers starting at $1$, and $k$ indicates that these elements come from $S^0/(p,v_1^{p^k})$. Similarly, the name for the second row is $(h_0,k)$. For the third row, the subset $(v_2^{sp^n-p^{n-1}}h_0,k)$ can be divided into smaller subsets with respect to $n$. If $n_0$ is an integer, we denote
$$\{ x \in (v_2^{sp^n-p^{n-1}}h_0,k) \mid x \mbox{ is of the form } v_1^jv_2^{sp^{n_0}-p^{n_0-1}}h_0 \}$$
by $(v_2^{sp^{n_0}-p^{n_0-1}}h_0, k)$. Then we have
$$(v_2^{sp^n-p^{n-1}}h_0, k) = \bigcup_{m = 1}^{\infty} (v_2^{sp^m-p^{m-1}}h_0, k).$$
\begin{table}[H]
\centering
\caption{Dividing the third row into subrows}
\label{Dividing}
\begin{tabular}{| p{3.6cm} | l |p{4cm} | p{5cm}|}
\hline
Names & $Coker$ part & & \\
\hline
\hline
$(v_2^{sp^n-p^{n-1}}h_0,k)$ & $v_1^j v_2^{sp^n-p^{n-1}} h_0$ & $ p \nmid s, ~ n \geqslant 1$ & $ 0 \leqslant j \leqslant \min \{ b_n - 1, p^k - 1 \}$ \\
\hline
\hline
$(v_2^{sp^1-1}h_0,k)$ & $v_1^j v_2^{sp-1} h_0$ & $ p \nmid s$ & $ 0 \leqslant j \leqslant \min \{ b_1 - 1, p^k - 1 \}$ \\
\hline
$(v_2^{sp^2-p}h_0,k)$ & $v_1^j v_2^{sp^2-p} h_0$ & $ p \nmid s$ & $ 0 \leqslant j \leqslant \min \{ b_2 - 1, p^k - 1 \}$ \\
\hline
$\cdots$ \\
\hline
$(v_2^{sp^{n_0}-p^{n_0-1}}h_0,k)$ & $v_1^j v_2^{sp^{n_0}-p^{n_0-1}} h_0$ & $ p \nmid s$ & $ 0 \leqslant j \leqslant \min \{ b_{n_0} - 1, p^k - 1 \}$ \\
\hline
$\cdots$ \\
\hline
\end{tabular}
\end{table}
\begin{ex}
The element $v_1^3v_2^{2p-1}h_0$ is in $(v_2^{sp^1-1}h_0,k) \subset (v_2^{sp^n-p^{n-1}}h_0,k)$. This is because $v_1^3v_2^{2p-1}h_0$ belongs to the third row in the $Coker$ part. The power of $v_2$ is $2p-1=2p^1-p^{1-1}$ so it is in $(v_2^{sp^1-1}h_0,k)$.
\end{ex}
We have divided the elements in $H^*(C_0/v_1^{p^k},d)$ into subsets by rows, and each row may divide into smaller subsets, which we will call subrows. The length of the $v_1$-tower is determined by which (sub)row the element lies in. Now we will group elements in $\pi_* S^{2\alpha}/p$ into (sub)rows in a similar way, which gives an alternative description of $\alpha$-stability.
\begin{defn}\label{defn:stable}
Given an $\alpha \in {\mathfrak{Z}}$, an element $v_1^m v_2^{\ell-2\alpha} x \in \pi_*S^{2\alpha}/p$ is \emph{$\alpha$-stable} if there exists $k_0 \in \mathbb{Z}^+$ such that for all $k>k_0$, the element $v_1^m v_2^{\ell-\bar{\alpha}_k}x \in \pi_*S^{2\alpha_k}/(p,v_1^{p^k})$ is in the subset $(y,k)$ where $y$ is fixed. Otherwise, we will call the element \emph{$\alpha$-unstable}.
\end{defn}
It is straightforward to check that Definition \ref{defn:stable} is equivalent to Definition \ref{defn:stable1}.
\begin{ex}
Let $\alpha$ be an integer $n \in {\mathfrak{Z}}$. Then all elements $v_1^m v_2^{\ell-2n} x$ are $n$-stable by definition.
\end{ex}
We are trying to decide whether $S^{2\alpha}/p$ is of finite type. By Lemma \ref{stable}, we need only focus on stable elements.
\begin{defn}\label{defn:form}
We define the subset $(y)$ of the $\alpha$-stable elements by
$$(y)=\{x=\lim_{k\rightarrow \infty} x_k \in \pi_*S^{2\alpha}/p \mid x \text{ is $\alpha$-stable, } x_k \in (y,k) \text{ when } k \text{ is large}\}. $$
\end{defn}
By the definition of $\alpha$-stable, every stable element lies in one of the subsets defined above.
\begin{ex}
In $\pi_ {-1}S^{2\lambda}$, the element $h_0v_2^{-\lambda}$ is in $(h_0v_2^{sp^1-1})$. The reason is as follows. When $k>1$, we have $h_0v_2^{-\lambda_k}=h_0v_2^{-\frac{p^{k-1}-1}{p-1}p-1}$ in $(h_0v_2^{sp^1-1},k)$. By Definition \ref{defn:stable}, the element $h_0v_2^{-\lambda}$ is $\lambda$-stable and by Definition \ref{defn:form}, it is in $(h_0v_2^{sp^1-1})$.
\end{ex}
Because of the following lemma, we will focus on the $\mathrm{Coker}$ part.
\begin{lem}\label{lem:kernel}
No stable element in the $\mathrm{Ker}$ part survives in the limit.
\end{lem}
\begin{proof}
We can check this row by row. The results in Table \ref{element1} show that for any element in the $\mathrm{Ker}$ part, the ranges of its $v_1$-tower for different $k$ eventually do not overlap as $k$ goes to infinity. For example, the nontrivial elements in the second row of the $\mathrm{Ker}$ part, of the form $v_1^jv_2^{sp^n}$, have $j$ in the range $\max \{ 0, p^k - b_n \} \leqslant j \leqslant p^k - 1$. When $k \gg n$, the range is $A(k) \coloneqq \{ j \in \mathbb{Z} \mid p^k - b_n \leqslant j \leqslant p^k - 1 \}$, and $A(k) \cap A(k+1) = \emptyset$, so no element survives.
\end{proof}
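The disjointness of the ranges $A(k)$ is elementary and can be checked numerically; in the sketch below the values of $p$ and $b_n$ are placeholders chosen only for illustration, not taken from the paper's tables.

```python
# Illustration of the disjointness argument in the lemma, with sample values.
p, b_n = 3, 7

def A(k):
    # range of j for nontrivial v1^j v2^(s p^n) in the Ker part, k >> n
    return set(range(max(0, p**k - b_n), p**k))

# consecutive ranges are disjoint once p**(k+1) - b_n > p**k - 1
assert all(not (A(k) & A(k + 1)) for k in range(2, 8))
```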
If $\pi_* S^{2\alpha}$ is not finite at some stem, then there exists a bidegree $(s,t)$ such that $H^{s,t}(S^{2\alpha})$ is not finite. In that case, because there are only finitely many subsets, there must be a subset such that infinitely many of its elements survive in the limit at this bidegree.
We check each row for the possibility that infinitely many elements survive in the limit at some bidegree. Some forms can easily be excluded.
\begin{lem}\label{lem:form}
There are only finitely many elements in the subsets $(1)$, $(h_0)$, $(v_2^sh_1)$, $(v_2^{sp-1}g_0)$ that survive at a fixed bidegree $(s_0,t_0)$.
\end{lem}
\begin{proof}
From Table \ref{element1}, if the length of the $v_1$-tower in a given form is bounded by a finite number, then there are only finitely many elements at a fixed bidegree. For example, for elements in $(v_2^sh_1)$, the length of the $v_1$-tower is bounded by $1$.
\end{proof}
Now we focus on the remaining rows, which may have $v_1$-towers of unbounded length. By Lemma \ref{lem:form} and Lemma \ref{lem:kernel}, only four rows are possible: $(v_2^{sp^n-p^{n-1}}h_0)$, $(v_2^{sp^n-\frac{p^{n-1}-1}{p-1}}g_1)$, $(v_2^{sp^n-p^{n-1}-\frac{p^{n-2}-1}{p-1}}g_1)$ and $(v_2^{sp^n-\frac{p^n-1}{p-1}}h_0g_1)$. We shall analyse, row by row, the conditions on $\alpha$ under which there is some $(s,t)$ at which infinitely many elements of one of these four rows survive in the limit. Since shifting by an integer degree does not change the finiteness property, we assume $a_0=0$ from now on.
In each case, the question of whether $\pi_k S^{2\alpha}/p$ has infinitely many elements in a certain subset at some stem $k$ reduces to an elementary numerical question.
We shall start with the row $(v_2^{sp^n-p^{n-1}}h_0)$.
\begin{thm}{\label{case1}}
If there are infinitely many elements of $(v_2^{sp^n-p^{n-1}}h_0)$ in $H^{1,2t(p-1)}(E_2 S^{2\alpha}/p)$, then in the expansion of $\alpha$, infinitely many $a_i$s are nonzero and infinitely many $a_i$s are zero. Moreover, the converse statement is also true.
\end{thm}
\begin{proof}
Let $\ell_0$ be the unique integer in $(\frac{t-1}{p+1}-1,\frac{t-1}{p+1}]$. Then $|v_2^{\ell_0}h_0| \leqslant 2t(p-1) < |v_2^{\ell_0+1}h_0|$. At the bidegree $(1,2t(p-1))$, the elements with base $h_0$ are $v_1^{(p+1)\ell+j_0} v_2^{\ell_0-\ell - \alpha} h_0$, where $\ell \in \mathbb{N}$ and $j_0=t-1-(p+1)\ell_0$. By the assumption, we can assign to infinitely many $m \in \mathbb{Z}^+$ a distinct nonzero element $v_1^{(p+1)\ell_m+j_0} v_2^{\ell_0-\ell_m - \alpha} h_0$ in $(v_2^{s_mp^{n_m}-p^{n_m-1}}h_0) \subset (v_2^{sp^n-p^{n-1}}h_0)$, where $n_m \in \mathbb{Z}^+$. We can assume that if $m>m'$, then $\ell_m>\ell_{m'}$. This gives the following two restrictions on $\ell_m$ and $\alpha$.
\begin{enumerate}
\item $\ell_0-\ell_m - \alpha \equiv s_m p^{n_m}-p^{n_m-1}$ mod $p^{n_m+1}$ with $p \nmid s_m$
\item $(p+1)\ell_m+j_0 \leqslant p^{n_m-1}(p+1)-2$
\end{enumerate}
The first restriction comes from the assumption that these elements lie in $(v_2^{s_mp^{n_m}-p^{n_m-1}}h_0)$, that is, the power of $v_2$ is of the form $s_mp^{n_m}-p^{n_m-1}$; the second comes from the bound on the length of the $v_1$-towers: the nontrivial elements in $(v_2^{s_mp^{n_m}-p^{n_m-1}}h_0)$ are $\{v_1^jv_2^{s_mp^{n_m}-p^{n_m-1}}h_0\}$ with $0 \leqslant j \leqslant p^{n_m-1}(p+1)-2$, so if $v_1^{(p+1)\ell_m+j_0} v_2^{\ell_0-\ell_m - \alpha} h_0 \neq 0$, the second restriction must hold. From the assumption, we have $\ell_m>0$; from the second restriction, we have $\ell_m \leqslant p^{n_m-1}$. The first restriction tells us
$$\ell_m + \alpha_{n_m} = p^{n_m-1}+\ell_0$$
Plugging $\ell_m = p^{n_m-1} + \ell_0 - \alpha_{n_m}$ into the restriction $0< \ell_m < p^{n_m-1}$, we get $\alpha_{n_m}<p^{n_m-1}+\ell_0$. Choose $M$ such that $n_M>\log_p\ell_0+1$; the condition $p \nmid s_M$ shows that $a_{n_M+1} \neq 0$. Recall that $\alpha_{n_m} = \displaystyle\sum^{n_m}_{i=1} a_i p^{i-1}$ and $0 \leqslant a_i < p$. When $m>M$, the inequality $p^{n_M}+a_{n_m}p^{n_m-1} \leqslant \alpha_{n_m}<p^{n_m-1}+\ell_0$ forces $a_{n_m} = 0$, and the condition $p \nmid s_m$ shows that $a_{n_m+1} \neq 0$. Hence, for each $m>M$, we have $a_{n_m} = 0$ and $a_{n_m+1} \neq 0$, so there are infinitely many nonzero coefficients and infinitely many zero coefficients in the ${\mathfrak{Z}}$-expansion of $\alpha$.
The above proof also shows that the converse statement is true. Indeed, if infinitely many $a_i$s are nonzero and infinitely many $a_i$s are zero, then we can assign to each $m \in \mathbb{Z}^+$ a distinct number $n_m \in \mathbb{Z}^+$ such that $a_{n_m}=0$ and $a_{n_m+1} \neq 0$. Let $\ell_m = p^{n_m-1} - \alpha_{n_m}+\ell_0$; checking against the table, the elements $v_1^{(p+1)\ell_m+j_0} v_2^{\ell_0-\ell_m - \alpha} h_0$ survive. Therefore, there are infinitely many elements of $(v_2^{sp^n-p^{n-1}}h_0)$ in $H^{1,2t(p-1)}(E_2 S^{2\alpha}/p)$.
\end{proof}
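The combinatorial pattern extracted in this proof ($a_{n_m}=0$ immediately followed by $a_{n_m+1}\neq 0$) can be illustrated on finite truncations of a coefficient sequence. The function name and the sample sequences below are ours, chosen only to contrast the two behaviors.

```python
def drop_sites(a):
    """Indices n (1-based, so a[n-1] plays the role of a_n) with
    a_n = 0 and a_{n+1} != 0 -- the pattern from the proof."""
    return [n for n in range(1, len(a)) if a[n - 1] == 0 and a[n] != 0]

# an alternating coefficient sequence: sites recur at every even index
alternating = [1, 0] * 10
print(drop_sites(alternating))      # many sites, and they keep appearing

# an eventually-constant nonzero sequence: only finitely many sites
eventually_one = [0, 0, 2] + [1] * 17
print(drop_sites(eventually_one))   # a single site
```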
Theorem \ref{case1} shows that if $S^{2\alpha}$ is not of finite type, then the homotopy groups are not finitely generated in all possible nontrivial degrees. We state this as Corollary \ref{cor:infinite}.
\begin{cor}\label{cor:infinite}
If there are infinitely many elements of $(v_2^{sp^n-p^{n-1}}h_0)$ in $H^{1,t_0}(E_2 S^{2\alpha}/p)$ at some bidegree $(1, t_0)$, then at every bidegree $(1, t_0+2k(p^2-1))$ with $k \in \mathbb{Z}$ there are infinitely many elements of the form $v_2^{sp^n-p^{n-1}}h_0$ in $H^{1,t_0+2k(p^2-1)}(E_2 S^{2\alpha}/p)$.
\end{cor}
The same approach applies to the elements of the other forms. We state the results as follows.
\begin{thm}
If there are infinitely many elements in
$$(v_2^{sp^n-\frac{p^{n-1}-1}{p-1}}g_1), \quad \text{or}$$
$$(v_2^{sp^n- p^{n-1} - \frac{p^{n-2}-1}{p-1}}g_1), \quad \text{or}$$
$$(v_2^{sp^n- \frac{p^n-1}{p-1}}h_0g_1)$$
at some bidegree $(s,t)$, then in the ${\mathfrak{Z}}$-expansion coefficients of $\alpha$, infinitely many $a_i$s are nonzero and infinitely many $a_i$s are zero. Moreover, the converse statement is also true.
\end{thm}
Summing up all the cases, we obtain the following main theorem.
\begin{thm}\label{condition}
If there are infinitely many elements in $H^{s,t_0}(E_2 S^{2\alpha}/p)$ at some bidegree $(s, t_0)$, then in the expansion of $\alpha$, infinitely many $a_i$s are nonzero and infinitely many $a_i$s are zero. Moreover, the converse statement is also true.
\end{thm}
Theorem \ref{reduce} and Theorem \ref{condition} answer Question \ref{question}.
\begin{thm}\label{result}
For any $X \in Pic^0_{K(2)}$, let $e_i(X)$ be the $i$\textsuperscript{th} coefficient in the ${\mathfrak{Z}}$-expansion of $e(X)$. Then $\pi_kX$ is finitely generated for all degrees $k \in \mathbb{Z}$ if and only if either only finitely many $e_i(X)$'s are zero, or only finitely many $e_i(X)$'s are nonzero.
\end{thm}
\begin{proof}
From Theorem \ref{reduce}, $\pi_kX$ is finitely generated for all degrees $k \in \mathbb{Z}$ if and only if $\pi_k S^{2e(X)}/p$ is. The result then follows from the contrapositive of Theorem \ref{condition}.
\end{proof}
At the end of this section, we state a corollary of Lemma \ref{stable}.
\begin{cor}
Let $X$ be an element in $Pic_{K(2)}$. Then the set of $v_1$-free elements in $\pi_*X$ is finitely generated as a $\mathbb{Z}_p[v_1]$-module.
\end{cor}
\begin{proof}
We first show the statement for $X/p \simeq S^{2e(X)}/p$. All $v_1$-free elements have infinite $v_1$-towers. The length of the $v_1$-tower on an element $x$ of $\pi_*S^{2e(X)}/p$ is determined by the form and level of $x$ if $x$ is $e(X)$-stable. By Lemma \ref{stable}, the set of unstable elements in $\pi_*S^{2e(X)}/p$ is finitely generated as an $\mathbb{F}_p[v_1]$-module. Except for the forms $1$ and $h_0$ (and $\zeta$ and $\zeta h_0$), all other forms have $v_1$-towers of finite length and are $v_1$-torsion. The length of the $v_1$-tower on a stable element is the same as that of the form it stabilizes to. Hence the $v_1$-towers are finite on all stable elements, with at most finitely many exceptions ($1$, $h_0$, $\zeta$, and $\zeta h_0$). Therefore, the $v_1$-free elements must belong to the finitely many $v_1$-towers in these exceptional cases or be unstable elements. So the set of $v_1$-free elements in $\pi_*X/p$ is finitely generated as an $\mathbb{F}_p[v_1]$-module. In particular, this is also true for $\pi_*X/(p\pi_*X)$. Since $\pi_* X$ is $p$-complete, this implies that the set of $v_1$-free elements in $\pi_*X$ is finitely generated as a $\mathbb{Z}_p[v_1]$-module.
\end{proof}
\section{Examples}\label{examplesection}
In this section, we examine Theorem \ref{result} with three examples: $L_{K(2)}S^0$, $I_2S^0$ and $L_{K(2)}S^{2\gamma}$ where $I_2S^0$ is the Gross-Hopkins dual of the $K(2)$-local sphere (see \cite{MR3436395} for the definition of $I_2$) and $\gamma = \displaystyle\lim_k p^{2k} \in {\mathfrak{Z}}$ as before is a generator of the torsion part in $Pic_{K(2)}$.
\begin{enumerate}
\item For $X=L_{K(2)}S^0$, $e(X)=0$, so $e_i(X)=0$ for all $i$ and there are only finitely many nonzero $e_i(X)$s. Theorem \ref{result} implies that $L_{K(2)}S^0$ satisfies the finitely generated property, which agrees with the known computation.
\item For $X=I_2S^0$ at large primes, $I_2S^0=S^{n^2-n}\wedge S[det]$ with $n=2$. Since integer shifts do not change the finitely generated property, it is enough to consider $S[det]$. From Theorem \ref{thm:relation}, we have $e(S[det])=\lambda=(p+1)+ \displaystyle\sum^\infty_{k=0}(p^2-1)p^k$ and $e_i(S[det])=1$ for all $i\geqslant 1$. Therefore, there are only finitely many zero $e_i$s, and Theorem \ref{result} implies that $I_2S^0$ satisfies the finitely generated property.
\item For $X=L_{K(2)}S^{2\gamma}$, $e(X)=1+ \displaystyle\sum^\infty_{k=0}(p^2-1)p^{2k}$. There are infinitely many zero $e_i(X)$s (when $i\geqslant 1$ is odd) and infinitely many nonzero $e_i(X)$s (when $i\geqslant 1$ is even). Theorem \ref{result} implies that $\pi_k L_{K(2)}S^{2\gamma}$ is not finitely generated for some stem $k$. In fact, in $\pi_{-2p^3+2p^2+4p-7}S^{2\gamma}$, we have linearly independent elements $v_1^{j_k}v_2^{m_k-\gamma}h_0$ for all $k \geqslant 0$ where $m_k = -p^{2k+1}+\frac{p^{2k+2}-1}{p^2-1}$, $j_k=(p+1)(m_k-m_0)$.
\end{enumerate}
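The claim in example (2) that every ${\mathfrak{Z}}$-expansion coefficient of $\lambda$ equals $1$ can be verified on finite truncations. In the sketch below (illustrative only; $p=5$ and the truncation length are our choices), stripping the integer part $a_0=p+1$ and the common factor $p^2-1$ leaves a base-$p$ repunit.

```python
# Truncation of lambda = (p+1) + sum_{k>=0} (p^2-1) p^k.
p, K = 5, 8
lam_K = (p + 1) + sum((p**2 - 1) * p**k for k in range(K))

# strip a_0 = p+1 and the (p^2-1) factor, then read off base-p digits
q = (lam_K - (p + 1)) // (p**2 - 1)
digits = []
while q:
    q, d = divmod(q, p)
    digits.append(d)

assert digits == [1] * K   # every coefficient of lambda is 1, as stated
```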
As an application, we have the following theorem about Gross--Hopkins duality at prime $p\geqslant 5$ and height $2$. The (non-local) Brown--Comenetz dual of the sphere $I_{\mathbb{Q}/\mathbb{Z}}$ is the spectrum representing the generalized cohomology theory
$$X \rightarrow \mathrm{Hom}(\pi_{-*}X, \mathbb{Q}/\mathbb{Z}).$$
The (non-local) Brown--Comenetz dual of a spectrum $X$ is defined to be $I_{\mathbb{Q}/\mathbb{Z}}(X) \coloneqq F(X,I_{\mathbb{Q}/\mathbb{Z}})$. However, if we start with a $K(n)$-local spectrum $X$, the Brown--Comenetz dual $I_{\mathbb{Q}/\mathbb{Z}}(X)$ may not be $K(n)$-local any more. The Gross--Hopkins dual is a $K(n)$-version Brown--Comenetz dual (see \cite{MR2946825} and \cite{bbs_gross}).
\begin{defn}\label{defn:GrossHopkins}
Let $L_nX$ be the localization of $X$ with respect to the $n$\textsuperscript{th} Morava $E$-theory. Let $M_nX$ be the $n$\textsuperscript{th} monochromatic layer of $X$; that is, the fiber of $L_n X \rightarrow L_{n-1}X$. The height $n$ Gross--Hopkins dual of $X$ is defined to be
$$I_nX \coloneqq F(M_nX, I_{\mathbb{Q}/\mathbb{Z}}).$$
Denote $I_n S^0$ by $I_n$.
\end{defn}
While $I_nX$ is automatically $K(n)$-local (\cite[Proposition~2.2]{MR2946825}), the trade-off is that it is usually very hard to compute $\pi_*I_nX$ from $\pi_*X$. If $X \in Pic_{K(n)}$, then $I_nX=X^{-1}\wedge I_n \in Pic_{K(n)}$: indeed, $I_n$ is dualizable in the $K(n)$-local category and $I_n X= D_nX \wedge I_n$, where $D_nX = F(X,L_{K(n)}S^0)$. As an application of Theorem \ref{thm:mainresult}, we show the following theorem.
\begin{thm}\label{thm:ghdual}
Let $I_2$ be the Gross--Hopkins dual at prime $p\geqslant 5$, height $2$, $X \in Pic_{K(2)}$, and $\lambda=\displaystyle\lim_k p^{2k}(p^2-1)\in {\mathfrak{Z}}$. Then $e(I_2X)=2+\lambda-e(X)$. In particular, $X$ is of finite type if and only if $I_2X$ is of finite type.
\end{thm}
\begin{proof}
The statement $e(I_2X)=2+\lambda-e(X)$ follows from the facts:
\begin{enumerate}
\item $e$ is a group homomorphism;
\item there is an equivalence
$$I_2 X \simeq D_2X \wedge I_2$$
where $D_2 X$ is the $K(2)$-local Spanier--Whitehead dual of $X$ and $I_2$ is the Gross--Hopkins dual of the $K(2)$-local sphere;
\item $e(D_2X) = -e(X)$;
\item $I_2 \simeq \Sigma^{2^2-2}S[det]$.
\end{enumerate}
We have
\begin{align*}
e(I_2X) &=e(D_2X \wedge \Sigma^{2^2-2}S[det])=e(D_2X)+2^2-2+e(S[det]) \\
& = -e(X) + 2+ \lambda.
\end{align*}
Next we show that $X$ is of finite type if and only if $I_2X$ is. By Theorem \ref{thm:mainresult}, this is equivalent to the statement that $e(X)$ has the finiteness property if and only if $e(I_2X)=2+\lambda-e(X)$ does. We can ignore the integer shift $2$ when considering the finiteness property. Because $I_2(I_2 X)=X$, we only need to show one direction. Assuming that $X$ is of finite type, we will prove that $I_2X$ is of finite type. The rest is an elementary numerical argument.
\begin{case}
$e(X)$ has finitely many nonzero entries in its ${\mathfrak{Z}}$-expansion coefficients. If $e(X)=0$, then $e(I_2X)=\lambda$, which has finitely many zero entries in its coefficients. If $e(X)\neq 0$, then the coefficients $e_k(D_2X)$ of the ${\mathfrak{Z}}$-expansion of $-e(X)$ are always $p-1$ when $k>K_0$, for some $K_0 \in \mathbb{Z}$. The coefficients of $\lambda$ are all $1$. So the $k$\textsuperscript{th} coefficient of the ${\mathfrak{Z}}$-expansion of $\lambda-e(X)$ is always $1$ when $k>K_0+1$. Then $e(I_2X)$ has finitely many zero entries.
\end{case}
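The carrying argument of this case can be checked on finite truncations. In the sketch below (the values are illustrative, and we work with the base-$p$ coefficient strings after dividing out the common factor $p^2-1$), subtracting a finitely supported $e(X)$ from the all-ones string of $\lambda$ disturbs only finitely many coefficients, after which they stabilize to $1$.

```python
p, K = 5, 12
lam_value = sum(p**k for k in range(K))   # lambda's coefficient string: all 1s
eX_value = 3 + 4 * p + 2 * p**2           # e(X) supported only in degrees <= 2

# base-p digits of lambda - e(X) (after removing the p^2-1 factor)
q = lam_value - eX_value
digits = []
while q:
    q, d = divmod(q, p)
    digits.append(d)

# the borrow from the subtraction dies out just past the support of e(X)
assert all(d == 1 for d in digits[4:])
```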
\begin{case}
$e(X)$ has finitely many zero entries in its ${\mathfrak{Z}}$-expansion coefficients. Note that $e(X)+e(I_2X)=\lambda$ and the coefficients of the ${\mathfrak{Z}}$-expansion of $\lambda$ are all $1$. We argue by contradiction. If there are infinitely many zero entries and infinitely many nonzero entries among the coefficients of the ${\mathfrak{Z}}$-expansion of $e(I_2X)$, then there are infinitely many places where a nonzero coefficient is followed by a zero one. Let the $m$\textsuperscript{th} coefficient of $e(I_2X)$ be one such place; i.e., $e_m(I_2X)\neq 0$ and $e_{m+1}(I_2X)=0$. Considering the sum $e(X)_m+e(I_2X)_m$, we have two cases. If $e(X)_m+e(I_2X)_m \geqslant 2p^m(p^2-1)$, then at the $(m+1)$\textsuperscript{th} coefficient of the equation
$$e(X)+e(I_2X)=\lambda,$$
we have
$$e_{m+1}(X)+e_{m+1}(I_2X)+ 1=1 ~or~ p+1.$$
In this case, $e_{m+1}(I_2X)=0$ implies that $e_{m+1}(X)=0$.
If instead $e(X)_m+e(I_2X)_m<2p^m(p^2-1)$, then we have
$$e_m(X)+e_m(I_2X)<p.$$
From the equation
$$e(X)+e(I_2X)=\lambda,$$
we have
$$e_m(X)+e_m(I_2X)+\epsilon=1 ~or~ p+1$$
where $\epsilon=0$ or $1$. So we have
$$e_m(X)+e_m(I_2X) \leqslant 1.$$
The condition $e_m(I_2X) \neq 0$ implies that $e_m(I_2X) = 1$ and $e_m(X) = 0$.
\end{case}
In both cases, each such place forces a zero entry in the coefficients of $e(X)$. This would imply that there are infinitely many zero entries in $e(X)$, contradicting the assumption.
\end{proof}
\begin{rem}
We know that $L_{K(2)}S^0$ is of finite type from the computation. We would like to identify features that may generalize to higher heights. One observation is that in the computation of $\pi_*L_{K(2)}S^0$, for all elements $x$ with $|x|=(t,s)$ and $t-s<-1$ in the $E_2$ page of the ANSS, the $v_1$-tower on $x$ cannot pass the line $t-s=1$; i.e., $v_1^kx=0$ whenever $k|v_1|+t-s \geqslant 0$. This phenomenon, together with the symmetry of the $E_2$ page coming from Gross-Hopkins duality, implies the finitely generated property of $\pi_*L_{K(2)}S^0$. There might be a conceptual algebraic argument for this phenomenon that works for higher heights.
\end{rem}
\end{document} |
\begin{document}
\title{A note on the joint measurability of POVMs and its implications for contextuality}
\author{Ravi Kunjwal}
\email{rkunj@imsc.res.in}
\affiliation{Optics \& Quantum Information Group, The Institute of Mathematical Sciences, C.I.T Campus, Tharamani, Chennai 600 113, India}
\date{\today}
\begin{abstract}
The purpose of this note is to clarify the logical relationship between joint measurability and contextuality for quantum observables in view of recent developments \cite{LSW,KG,KHF,FyuOh}.
\end{abstract}
\pacs{03.65.Ta, 03.65.Ud}
\maketitle
\section*{Introduction}
In a recent work \cite{KG}, a new proof of contextuality---in the generalized sense of Spekkens \cite{Spe05, LSW}---was provided using positive operator-valued measures (POVMs) and the
connection between joint measurability of POVMs and contextuality was explicated. It was later shown in \cite{KHF} that any joint measurability structure can be realized in quantum theory,
leaving open the question of whether contextuality can always be demonstrated in these joint measurability structures. Subsequent to these two developments, in Ref. \cite{FyuOh} a peculiar
feature of POVMs with respect to joint measurability was pointed out: that there exist three measurements which are pairwise jointly measurable and triplewise jointly measurable but for
which there exist pairwise joint measurements which do not admit a triplewise joint measurement. In this note, I will briefly put these results in context and point out the logical relationship
between joint measurability and the possibility of contextuality. Also, throughout this note, `sharp measurement' will be synonymous with projection-valued measures (PVMs) and `unsharp measurement'
will be synonymous with POVMs that are not PVMs.
\section*{Uniqueness of joint measurement for Projection-Valued Measures}
Since the peculiarity of positive operator-valued measures (POVMs) in the cases of interest here arises from the nonuniqueness of joint measurements, I will first prove the uniqueness of joint measurements for projection-valued measures (PVMs).
This will help clarify how the distinction between sharp and unsharp measurements comes to play a role in Specker's scenario \cite{KG}.
Consider a nonempty set $\Omega_i$ and a $\sigma$-algebra $\mathcal{F}_i$ of subsets of $\Omega_i$, for $i\in\{1,\dots,N\}$. The POVM $M_i$ is defined as the map $M_i: \mathcal{F}_i\rightarrow \mathcal{B}_+(\mathcal{H})$,
where $\sum_{X_i\in\mathcal{F}_i}M_i(X_i)=I$ and $\mathcal{B}_+(\mathcal{H})$ denotes the set of positive semidefinite operators on a Hilbert space $\mathcal{H}$. $I$ is the identity operator
on $\mathcal{H}$. Therefore: $M_i\equiv\{M_i(X_i)|X_i\in\mathcal{F}_i\}$, where $X_i$ labels the elements of POVM $M_i$. $M_i$ becomes a projection-valued measure (PVM) under the
additional constraint $M_i(X_i)^2=M_i(X_i)$ for all $X_i\in \mathcal{F}_i$.
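As a concrete illustration of the sharp/unsharp distinction (a sketch, not taken from the references; the noise parameter $\eta$ and the choice of $\sigma_z$ are ours), the unsharp qubit observable $\{\tfrac{1}{2}(I\pm\eta\sigma_z)\}$ is a two-outcome POVM that fails the PVM condition $M(X)^2=M(X)$ whenever $\eta<1$. Matrices are plain nested lists to keep the example dependency-free.

```python
eta = 0.5   # illustrative sharpness parameter, 0 < eta < 1
I2 = [[1.0, 0.0], [0.0, 1.0]]
# M(+/-) = (I +/- eta * sigma_z)/2, diagonal in the z basis
M_plus  = [[(1 + eta) / 2, 0.0], [0.0, (1 - eta) / 2]]
M_minus = [[(1 - eta) / 2, 0.0], [0.0, (1 + eta) / 2]]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert madd(M_plus, M_minus) == I2            # completeness: effects sum to I
assert min(M_plus[0][0], M_plus[1][1]) >= 0   # positivity (diagonal matrix)
assert mmul(M_plus, M_plus) != M_plus         # not projective: M^2 != M
```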
\begin{theorem}\label{uniqueness}
Given a set of POVMs, $\{M_1,\dots,M_N\}$, all of which except at most one---say $M_N$---are PVMs, so that for $i\in\{1,\dots,N-1\}$
$$M_i\equiv\{M_i(X_i)|X_i\in\mathcal{F}_i, M_i(X_i)^2=M_i(X_i)\}$$ and $$M_N\equiv\{M_N(X_N)|X_N\in\mathcal{F}_N\},$$
the set of POVMs, $\{M_1,\dots,M_N\}$, is jointly measurable if and only if they commute pairwise, i.e.,
$$M_j(X_j)M_k(X_k)=M_k(X_k)M_j(X_j),$$
for all $j,k\in\{1,\dots,N\}$ and $X_j\in\mathcal{F}_j, X_k\in\mathcal{F}_k$. In this case, there exists a unique joint POVM $M$,
defined as a map $$M:\mathcal{F}_1\times\mathcal{F}_2\times\dots\times\mathcal{F}_N \rightarrow \mathcal{B}_+(\mathcal{H}),$$ such that
$$M(X_1\times\dots\times X_N)=M_1(X_1)M_2(X_2)\dots M_N(X_N),$$
for all $X_1\times\dots\times X_N \in\mathcal{F}_1\times\dots\times \mathcal{F}_N.$
\end{theorem}
\emph{Proof.}---This proof is adapted from, and is a generalization of, the proof of Proposition 8 in the Appendix of Ref. \cite{heinosaari}.
The first part of the proof establishes the implication joint measurability $\Rightarrow$ pairwise commutativity. A joint POVM for $\{M_1,\dots,M_N\}$ is defined as a map
$M:\mathcal{F}_1\times\mathcal{F}_2\times\dots\times\mathcal{F}_N \rightarrow \mathcal{B}_+(\mathcal{H})$, such that
\begin{equation}
M_i(X_i)=\sum_{\{X_j\in\mathcal{F}_j|j\neq i\}}M(X_1\times\dots\times X_N)
\end{equation}
for all $X_i\in\mathcal{F}_i$, $i\in\{1\dots N\}$. Also, $M(X_1\times\dots\times X_N)\leq M_1(X_1)$, so the range of $M(X_1\times\dots\times X_N)$ is contained in the range of
$M_1(X_1)$, and therefore:
\begin{equation}
M_1(X_1)M(X_1\times\dots\times X_N)=M(X_1\times\dots\times X_N).
\end{equation}
Using this relation for the complement $\Omega_1\backslash X_1 \in \mathcal{F}_1$:
\begin{eqnarray}
&&M_1(X_1)M(\Omega_1\backslash X_1\times\dots\times X_N)\nonumber\\
&&=(I-M_1(\Omega_1\backslash X_1))M(\Omega_1\backslash X_1\times\dots\times X_N)\nonumber\\
&&=0.
\end{eqnarray}
Taking the adjoints, it follows that
\begin{equation}
M(X_1\times\dots\times X_N)M_1(X_1)=M(X_1\times\dots\times X_N),
\end{equation}
and
\begin{equation}
M(\Omega_1\backslash X_1\times\dots\times X_N)M_1(X_1)=0.
\end{equation}
Denoting
$$M^{(i)}(X_{i+1}\times\dots\times X_N)\equiv\sum_{\{X_j\in\mathcal{F}_j|j\leq i\}}M(X_1\times\dots\times X_N),$$
this implies:
\begin{eqnarray}
&&M_1(X_1)M^{(1)}(X_2\times\dots\times X_N)\nonumber\\
&=&M_1(X_1)M(X_1\times\dots\times X_N)\nonumber\\&&+M_1(X_1)M(\Omega_1\backslash X_1\times\dots\times X_N)\nonumber\\
&=&M_1(X_1)M(X_1\times\dots\times X_N)\nonumber\\
&=&M(X_1\times\dots\times X_N).
\end{eqnarray}
Taking the adjoint,
\begin{equation}
M^{(1)}(X_2\times\dots\times X_N)M_1(X_1)=M(X_1\times\dots\times X_N).
\end{equation}
Therefore:
\begin{eqnarray}
&&M_1(X_1)M^{(1)}(X_2\times\dots\times X_N)\nonumber\\
&=&M^{(1)}(X_2\times\dots\times X_N)M_1(X_1)\nonumber\\
&=&M(X_1\times\dots\times X_N).
\end{eqnarray}
Noting that $M^{(i-1)}(X_i\times\dots\times X_N)\leq M_i(X_i)$, one can repeat the above procedure for $M_i$, $i\in\{2,\dots,N-1\},$ to obtain:
\begin{eqnarray}
&&M^{(i-1)}(X_i\times\dots\times X_N)\nonumber\\
&=&M_i(X_i)M^{(i)}(X_{i+1}\times\dots\times X_N)\nonumber\\
&=&M^{(i)}(X_{i+1}\times\dots\times X_N)M_i(X_i).
\end{eqnarray}
Doing this recursively until $i=N-1$ and noting that $M^{(N-1)}(X_N)=M_N(X_N)$, it follows:
\begin{eqnarray}
&&M(X_1\times\dots\times X_N)\nonumber\\
&=&M_1(X_1)M^{(1)}(X_2\times\dots\times X_N)\nonumber\\
&=&M^{(1)}(X_2\times\dots\times X_N)M_1(X_1)\nonumber\\
&&\vdots\nonumber\\
&=&M_1(X_1)M_2(X_2)\dots M_{N-1}(X_{N-1})M_N(X_N)\nonumber\\
&=&M_N(X_N)M_{N-1}(X_{N-1})\dots M_2(X_2)M_1(X_1).\nonumber\\
\end{eqnarray}
For the last equality to hold, the POVM elements must commute pairwise, so that
\begin{equation}
M(X_1\times\dots\times X_N)=\prod_{i=1}^N M_i(X_i).
\end{equation}
This concludes the proof of the implication, joint measurability $\Rightarrow$ pairwise commutativity. The converse is easy to see since the joint POVM defined by taking the
product of commuting POVM elements, $$\{M(X_1\times\dots\times X_N)=\prod_{i=1}^N M_i(X_i)|X_i\in\mathcal{F}_i\},$$ is indeed a valid POVM which coarse-grains to the given POVMs,
$\{M_1,\dots,M_N\}$.
\hfill$\blacksquare$\\
Indeed, pairwise commutativity $\Rightarrow$ joint measurability for any arbitrary set of POVMs, $\{M_1,\dots,M_N\}$, and it is only when all but (at most) one of these POVMs are PVMs that
the converse---and the uniqueness of the joint POVM---holds.
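Theorem \ref{uniqueness} can be sanity-checked in the simplest commuting example: two PVMs on $\mathbb{C}^2\otimes\mathbb{C}^2$ measuring the first and second tensor factors. The sketch below (illustrative only; plain nested lists stand in for a linear-algebra library) forms the product joint PVM $M(X_1\times X_2)=M_1(X_1)M_2(X_2)$ and verifies that its marginals recover the original PVMs.

```python
def kron(A, B):
    # Kronecker product of square matrices given as nested lists
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

P = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]   # rank-1 projectors on C^2
I2 = [[1, 0], [0, 1]]
M1 = [kron(p, I2) for p in P]              # PVM on the first factor
M2 = [kron(I2, p) for p in P]              # PVM on the second factor

# the unique joint PVM: M(X1 x X2) = M1(X1) M2(X2)
M = {(a, b): mmul(M1[a], M2[b]) for a in (0, 1) for b in (0, 1)}
for a in (0, 1):
    assert madd(M[(a, 0)], M[(a, 1)]) == M1[a]   # marginal recovers M1
for b in (0, 1):
    assert madd(M[(0, b)], M[(1, b)]) == M2[b]   # marginal recovers M2
```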
\section*{Specker's scenario}
Specker's scenario requires a set of three POVMs, $\{M_1,M_2,M_3\}$, that are pairwise jointly measurable, i.e., $\exists$ POVMs $M_{12}$, $M_{23}$, and $M_{31}$ which measure the respective pairs jointly.
An immediate consequence of the requirement of pairwise joint measurability of $\{M_1,M_2,M_3\}$ is that in quantum theory these three measurements cannot be realized as projective measurements
(PVMs) and still be expected to show any contextuality. This is because for projective measurements or projection-valued measures (PVMs), a set of three measurements that are pairwise jointly measurable---and therefore admit \emph{unique} pairwise joint measurements---are
also triplewise jointly measurable in the sense that there exists a \emph{unique} triplewise joint measurement which coarse-grains to each pairwise implementation of the three measurements and therefore also to the single measurements.
From Theorem \ref{uniqueness}, it follows that if $M_i$, $i\in\{1,2,3\}$, are PVMs then they admit unique pairwise and triplewise joint PVMs:
\begin{eqnarray}
M_{ij}(X_i\times X_j)&=&M_i(X_i)M_j(X_j),\\
M(X_1\times X_2\times X_3)&=&M_1(X_1)M_2(X_2)M_3(X_3),
\end{eqnarray}
corresponding to the maps $M_{ij}:\mathcal{F}_i\times\mathcal{F}_j\rightarrow \mathcal{B}_+(\mathcal{H})$ and $M:\mathcal{F}_1\times\mathcal{F}_2\times\mathcal{F}_3\rightarrow \mathcal{B}_+(\mathcal{H})$,
respectively. Intuitively, this is easy to see since joint measurability is equivalent to pairwise commutativity for a set of projective measurements and the joint measurement for each pair is unique \cite{heinosaari}.
The existence of a unique joint measurement implies that there exists a joint probability distribution realizable via this joint measurement, thus explaining the pairwise statistics of the triple of measurements noncontextually in the traditional
Kochen-Specker sense.\footnote{KS-noncontextuality just means that there exists a joint probability distribution over the three measurement outcomes which marginalizes to the pairwise measurement statistics.
Violation of a KS inequality---obtained under the assumption that a global joint distribution exists---rules out KS-noncontextuality.}
Clearly, then, the three measurements $\{M_1, M_2, M_3\}$ must necessarily be unsharp for Specker's scenario to exhibit KS-contextuality. The uniqueness of joint measurements
(pairwise or triplewise) need not hold in this case. I will refer to pairwise joint measurements as ``2-joints'' and triplewise joint measurements as ``3-joints''. Also,
I will use the phrases `joint measurability' and `compatibility' interchangeably since they will refer to the same notion. Consider
the four propositions regarding the three measurements:
\begin{itemize}
\item $\exists$ 2-joint: $\{M_1,M_2,M_3\}$ admit 2-joints,
\item $\nexists$ 2-joint: $\{M_1,M_2,M_3\}$ do not admit 2-joints,
\item $\exists$ 3-joint: $\{M_1,M_2,M_3\}$ admit a 3-joint,
\item $\nexists$ 3-joint: $\{M_1,M_2,M_3\}$ do not admit a 3-joint.
\end{itemize}
The possible pairwise-triplewise propositions for the three measurements are:
\begin{itemize}
\item $(\exists \text{ 2-joint}, \exists \text{ 3-joint})$,
\item $(\exists \text{ 2-joint}, \nexists \text{ 3-joint})$,
\item $(\nexists \text{ 2-joint}, \nexists \text{ 3-joint})$.
\end{itemize}
Note that the proposition $(\nexists \text{ 2-joint}, \exists \text{ 3-joint})$ is trivially excluded because triplewise compatibility implies pairwise compatibility. Of the three remaining
propositions, the ones of interest for contextuality are $(\exists \text{ 2-joint}, \exists \text{ 3-joint})$ and $(\exists \text{ 2-joint}, \nexists \text{ 3-joint})$,
since the remaining one is simply about observables that do not admit any joint measurement at all and hence no nontrivial measurement contexts exist for this proposition.\footnote{
It is worth noting that, if $\{M_1,M_2,M_3\}$ were PVMs, then there are only two possibilities: $(\exists \text{ 2-joint}, \exists \text{ 3-joint})$ and $(\nexists \text{ 2-joint}, \nexists \text{ 3-joint})$,
since for three PVMs, $\exists \text{ 2-joint} \Leftrightarrow \exists \text{ 3-joint}$, because pairwise commutativity is equivalent to joint measurability and
the joint measurements are unique on account of Theorem \ref{uniqueness}. This is why KS-contextuality is impossible with PVMs in this scenario.}
It may seem that for the purposes of contextuality even the proposition $(\exists \text{ 2-joint}, \exists \text{ 3-joint})$ is of no interest, but there is a subtlety here: so far one has only
considered whether 2-joints or a 3-joint exist for the set $\{M_1, M_2, M_3\}$. Since the statistics of relevance for Specker's scenario are the pairwise statistics \cite{LSW, KG},
one also needs to consider whether a given choice of 2-joints, $\{M_{12}, M_{23}, M_{31}\}$, admits a 3-joint, i.e., the proposition $(\exists \text{ 3-joint}|\text{ a choice of 2-joints})$ or its negation $(\nexists \text{ 3-joint}|\text{ a choice of 2-joints})$.
The four possible conjunctions are:
\begin{itemize}
\item $(\exists \text{ 2-joint}, \exists \text{ 3-joint})\bigwedge(\exists \text{ 3-joint}|\text{ a choice of 2-joints}),$
\item $(\exists \text{ 2-joint}, \exists \text{ 3-joint})\bigwedge(\nexists \text{ 3-joint}|\text{ a choice of 2-joints}),$
\item $(\exists \text{ 2-joint}, \nexists \text{ 3-joint})\bigwedge(\exists \text{ 3-joint}|\text{ a choice of 2-joints}),$
\item $(\exists \text{ 2-joint}, \nexists \text{ 3-joint})\bigwedge(\nexists \text{ 3-joint}|\text{ a choice of 2-joints}).$
\end{itemize}
Of these, the first conjunction rules out the possibility of KS-contextuality, so it is not of interest for the present purpose. The third conjunction is false since the existence of a 3-joint for
a given choice of 2-joints would also imply the existence of a 3-joint for the three measurements, hence contradicting the fact that these admit no 3-joints. Thus the two remaining
conjunctions of interest are:
\begin{itemize}
\item \emph{Proposition 1}:\\$(\exists \text{ 2-joint}, \exists \text{ 3-joint})\bigwedge(\nexists \text{ 3-joint}|\text{ a choice of 2-joints})$,
\item \emph{Proposition 2}:\\$(\exists \text{ 2-joint}, \nexists \text{ 3-joint})\bigwedge(\nexists \text{ 3-joint}|\text{ a choice of 2-joints})$\\
$\Leftrightarrow (\exists \text{ 2-joint}, \nexists \text{ 3-joint})$.
\end{itemize}
These two possibilities lead to the following propositions:
\begin{itemize}
\item \emph{Weak}: $(\exists \text{ 2-joint})\bigwedge(\nexists \text{ 3-joint}|\text{ a choice of 2-joints})$,
\item \emph{Strong}:\\$(\exists \text{ 2-joint})\bigwedge(\nexists \text{ 3-joint}|\text{ for all choices of 2-joints})$\\
$\Leftrightarrow (\exists \text{ 2-joint}, \nexists \text{ 3-joint})$.
\end{itemize}
Here, \emph{Weak} $\Leftrightarrow$ \emph{Proposition 1} $\bigvee$ \emph{Proposition 2}, and \emph{Strong} $\Leftrightarrow$ \emph{Proposition 2}. The proposition \emph{Weak} relaxes the requirement of \emph{Strong} that the three measurements should themselves be incompatible to the weaker requirement that there exists a choice of 2-joints that does not admit a 3-joint.
Obviously, under \emph{Strong}, no 3-joint exists for any choice of 2-joints, so \emph{Strong} $\Rightarrow$ \emph{Weak}.\footnote{
Note that for the case of PVMs, only the conjunction $(\exists \text{ 2-joint}, \exists \text{ 3-joint})\bigwedge(\exists \text{ 3-joint}|\text{ a choice of 2-joints})$
makes sense and that it is, in fact, equivalent to the proposition $(\exists \text{ 2-joint}, \exists \text{ 3-joint})$ since there is no ``choice of 2-joints'' available: the 2-joints,
if they exist, are unique and admit a unique 3-joint (cf. Theorem \ref{uniqueness}). Consequently, the propositions \emph{Weak} and \emph{Strong} are not admissible for PVMs.}
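The logical relationship between \emph{Weak} and \emph{Strong} can be checked mechanically. The following sketch is an abstract toy model of our own (not part of the original argument): assuming 2-joints exist, it represents a scenario by one boolean per possible choice of 2-joints, recording whether that choice admits a 3-joint, and verifies by exhaustive enumeration that \emph{Strong} $\Rightarrow$ \emph{Weak} while the converse fails.

```python
from itertools import product

# Abstract toy model of Specker's scenario: 2-joints are assumed to
# exist, and a "world" is a tuple of booleans, one per possible choice
# of 2-joints, recording whether that choice admits a 3-joint.
def weak(world):    # some choice of 2-joints admits no 3-joint
    return any(not has3 for has3 in world)

def strong(world):  # no choice of 2-joints admits a 3-joint
    return all(not has3 for has3 in world)

# Enumerate every world with up to four choices of 2-joints.
worlds = [w for n in range(1, 5) for w in product([False, True], repeat=n)]

assert all(weak(w) for w in worlds if strong(w))       # Strong => Weak
assert any(weak(w) and not strong(w) for w in worlds)  # converse fails
print("Strong implies Weak; the converse does not hold")
```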
\subsection{Comment on Ref. \cite{FyuOh} vis-\`a-vis Ref. \cite{KG}}
In Ref. \cite{KG}, contextuality---in the generalized sense of Spekkens \cite{Spe05} and by implication in the Kochen-Specker sense---was shown keeping in mind the proposition \emph{Strong}, i.e., requiring that
the three measurements $\{M_1,M_2,M_3\}$ are pairwise jointly measurable but not triplewise jointly measurable. This was in keeping with the approach adopted in Ref. \cite{LSW}, where the construction
used did not violate the LSW inequality \cite{LSW, KG}. Indeed, as shown in Theorem 1 of Ref. \cite{KG}, the construction used in Ref. \cite{LSW} could not have produced a violation because
it sought a state-independent violation.
In Ref. \cite{FyuOh}, the authors---under \emph{Proposition 1}---use the construction first obtained in \cite{KG} to show a higher violation of the LSW inequality than reported
in \cite{KG}. It is easy to check that the construction in Ref. \cite{KG} recovers the violation reported in Ref. \cite{FyuOh} when
the proposition \emph{Strong} is relaxed to the proposition \emph{Weak}: the expression for the quantum probability of anticorrelation in Ref.~\cite{KG} is given by
\begin{equation}\label{anti}
R_3^Q=\frac{C}{6}+(1-\frac{\eta}{3})
\end{equation}
where $C>0$ for a state-dependent violation of the LSW inequality \cite{LSW,KG}. Given a coplanar choice of measurement directions $\{\hat{n}_1,\hat{n}_2,\hat{n}_3\}$, and $\eta$ satisfying $\eta_l<\eta\leq\eta_u$, the optimal value of $C$
---denoted as $C^{\{\hat{n}_i\},\eta}_{\max}$---is given by
\begin{eqnarray}\label{Cmax}\nonumber
&&C^{\{\hat{n}_i\},\eta}_{\max}=2\eta\\&+&\sum_{(ij)}\left(\sqrt{1+\eta^4(\hat{n}_i.\hat{n}_j)^2-2\eta^2}-(1+\eta^2 \hat{n}_i.\hat{n}_j)\right).
\end{eqnarray}
For trine measurements, $\hat{n}_i.\hat{n}_j=-\frac{1}{2}$ for each pair of measurement directions, $\{\hat{n}_i,\hat{n}_j\}$. Also, $\eta_l=\frac{2}{3}$
and $\eta_u=\sqrt{3}-1$. $\eta>\eta_l$ ensures that the three measurements corresponding to $\{\hat{n}_1,\hat{n}_2,\hat{n}_3\}$ do not admit a 3-joint while
$\eta\leq\eta_u$ is necessary and sufficient for 2-joints to exist: that is, $\eta_l<\eta\leq\eta_u$ corresponds to the proposition \emph{Strong}, $(\exists \text{ 2-joint}, \nexists \text{ 3-joint})$.
On relaxing the requirement $\eta_l<\eta$, we have $0\leq\eta\leq\eta_u$. This allows room for the proposition $(\exists \text{ 2-joint}, \exists \text{ 3-joint})$ when
$0\leq\eta\leq \eta_l$.
The quantity to be maximized is the quantum violation: $R_3^Q-(1-\frac{\eta}{3})=\frac{C}{6}$. Substituting the value $\hat{n}_i.\hat{n}_j=-\frac{1}{2}$ in Eq. (\ref{Cmax}), the quantum probability of anticorrelation from Eq. (\ref{anti}) for trine measurements is given by:
\begin{equation}
R_3^Q=\frac{1}{2}+\frac{\eta^2}{4}+\frac{1}{2}\sqrt{1-2\eta^2+\frac{\eta^4}{4}},
\end{equation}
which is the same as the bound in Eq. (11) in Theorem 3 of Ref. \cite{FyuOh}. The quantum violation is given by:
\begin{equation}
R_3^Q-(1-\frac{\eta}{3})=-\frac{1}{2}+\frac{\eta}{3}+\frac{\eta^2}{4}+\frac{1}{2}\sqrt{1-2\eta^2+\frac{\eta^4}{4}}.
\end{equation}
In Ref. \cite{KG}, this expression was maximized under the proposition \emph{Strong} ($\eta_l<\eta\leq \eta_u$)
and the quantum violation was seen to approach a maximum of $0.0336$ for $R_3^Q\rightarrow0.8114$ as $\eta\rightarrow \eta_l=\frac{2}{3}$. In Ref. \cite{FyuOh}, the same expression
was maximized while relaxing proposition \emph{Strong} to proposition \emph{Weak} (allowing $\eta\leq\eta_l$) and the maximum quantum violation was seen to be $0.0896$
for $R_3^Q=0.9374$ and $\eta\approx 0.4566$.
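The numbers quoted above are easy to reproduce numerically. The following sketch evaluates the trine-measurement violation expression just derived; the grid search is our own addition, not part of either reference.

```python
import math

def lsw_violation(eta):
    # Quantum violation R_3^Q - (1 - eta/3) for trine measurement
    # directions (n_i . n_j = -1/2), from the expression in the text.
    # max(0, .) guards against a tiny negative argument at eta = eta_u,
    # where the radicand vanishes exactly.
    return (-0.5 + eta / 3 + eta**2 / 4
            + 0.5 * math.sqrt(max(0.0, 1 - 2 * eta**2 + eta**4 / 4)))

eta_l = 2 / 3               # below eta_l, a 3-joint exists (Strong fails)
eta_u = math.sqrt(3) - 1    # above eta_u, no 2-joints exist

# Under Strong (eta_l < eta <= eta_u) the violation is maximal
# as eta -> eta_l:
print(round(lsw_violation(eta_l), 4))                  # 0.0336

# Under Weak (0 <= eta <= eta_u) a grid search recovers the larger
# optimum reported above:
etas = [k * eta_u / 100000 for k in range(100001)]
best = max(etas, key=lsw_violation)
print(round(best, 4), round(lsw_violation(best), 4))   # 0.4566 0.0896
```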
Another comment in Ref. \cite{FyuOh} is the following:
\emph{``Interestingly, there are three observables that are not triplewise jointly measurable but cannot violate LSW's inequality no matter
how each two observables are jointly measured.''}
That is, \emph{Strong} $\nRightarrow$ Violation of LSW inequality. Equally, it is also the case that \emph{Weak} $\nRightarrow$ Violation of LSW inequality.
Neither of these is surprising given the discussion in this note. In particular, note the following implications ($0\leq\eta\leq 1$):
\begin{enumerate}
\item Violation of LSW inequality, i.e., $R_3^Q>1-\frac{\eta}{3}$ $\Rightarrow$ Violation of KS inequality, i.e., $R_3^Q>\frac{2}{3}$,
\item Violation of KS inequality, i.e., $R_3^Q>\frac{2}{3}$ $\Rightarrow$ \emph{Weak}: $(\exists \text{ 2-joint})\bigwedge(\nexists \text{ 3-joint}|\text{ a choice of 2-joints})$,
\item \emph{Strong} $\Rightarrow$ \emph{Weak}.
\end{enumerate}
Therefore, \emph{Weak} is a necessary condition for a violation of the LSW inequality. It can be satisfied either under \emph{Proposition 1} (as done in \cite{FyuOh}) or under
\emph{Proposition 2} (or \emph{Strong}, as done in \cite{KG}).
\section*{Joint measurability structures}
I end this note with a comment on the result proven in Ref. \cite{KHF}, where it was shown constructively that any conceivable joint measurability structure for a set of $N$ observables is realizable via
binary POVMs. With regard to contextuality, this result proves the admissibility in quantum theory of contextuality scenarios that are not realizable with PVMs alone. This should be easy to see, specifically, from the
example of Specker's scenario, where PVMs do not suffice to demonstrate contextuality, primarily because they possess a very rigid joint measurability structure dictated by pairwise commutativity
and their joint measurements are unique (Theorem \ref{uniqueness}). If one can demonstrate contextuality in the scenarios obtained from more general joint measurability structures, then a relaxation similar to the one made for Specker's scenario (from \emph{Strong} to \emph{Weak}) will also lead to contextuality. In this sense, an implication of the result of Ref. \cite{KHF} is that it allows
one to consider the question of contextuality for joint measurability structures which admit no PVM realization in quantum theory on account of Theorem \ref{uniqueness}.
In particular, for PVMs, \emph{pairwise compatibility} $\Leftrightarrow$ \emph{global compatibility} because commutativity is a necessary and sufficient criterion for compatibility. On the other hand, POVMs allow for
a failure of the implication \emph{pairwise compatibility} $\Rightarrow$ \emph{global compatibility} because pairwise compatibility is not equivalent to pairwise commutativity for POVMs:
\emph{pairwise commutativity} $\Rightarrow$ \emph{pairwise compatibility}, but not conversely.
\section{Conclusion} I hope this note clarifies issues that may have escaped analysis in Refs. \cite{LSW,KG,KHF,FyuOh}. In particular, the logical relationship between admissible joint measurability structures
and the possibility of contextuality should be clear from the discussion here.
\section{Acknowledgment}
I would like to thank Sibasish Ghosh and Prabha Mandayam for comments on earlier drafts of this article.
\end{document} |
\begin{document}
\title{Low-latency Federated Learning and Blockchain for Edge Association in Digital Twin empowered 6G Networks}
\author{Yunlong~Lu,~\IEEEmembership{Student Member,~IEEE},
Xiaohong~Huang,~\IEEEmembership{Member,~IEEE},
Ke~Zhang,
Sabita Maharjan,~\IEEEmembership{Senior~Member,~IEEE},
and~Yan~Zhang,~\IEEEmembership{Fellow,~IEEE}
\thanks{This work was partially supported by the National Natural Science Foundation of China under Grant No. 61941102, and in part by the Opening Project of Shanghai Trusted Industrial Control Platform, under Grant No. TICPSH202003016-ZC.}
\thanks{Y. Lu and X. Huang are with the Institute of Network Technology, Beijing University of Posts and Telecommunications, Beijing, China (e-mail: yunlong.lu@ieee.org; huangxh@bupt.edu.cn).}
\thanks{K. Zhang is with the School of Information and Communication Engineering, University of Electronic Science and Technology of China (e-mail: zhangke@uestc.edu.cn).}
\thanks{Sabita Maharjan and Yan Zhang are with Department of Informatics, University of Oslo, Norway; and Simula Metropolitan Center for Digital Engineering, Norway (e-mail: sabita@ifi.uio.no; yanzhang@ieee.org).}
}
\maketitle
\begin{abstract}
Emerging technologies such as digital twins and 6th Generation mobile networks (6G) have accelerated the realization of edge intelligence in Industrial Internet of Things (IIoT). The integration of digital twin and 6G bridges the physical system with digital space and enables robust instant wireless connectivity. With increasing concerns on data privacy, federated learning has been regarded as a promising solution for deploying distributed data processing and learning in wireless networks. However, unreliable communication channels, limited resources, and lack of trust among users, hinder the effective application of federated learning in IIoT. In this paper, we introduce the Digital Twin Wireless Networks (DTWN) by incorporating digital twins into wireless networks, to migrate real-time data processing and computation to the edge plane. Then, we propose a blockchain empowered federated learning framework running in the DTWN for collaborative computing, which improves the reliability and security of the system, and enhances data privacy. Moreover, to balance the learning accuracy and time cost of the proposed scheme, we formulate an optimization problem for edge association by jointly considering digital twin association, training data batch size, and bandwidth allocation.
We exploit multi-agent reinforcement learning to find an optimal solution to the problem. Numerical results on a real-world dataset show that the proposed scheme yields improved efficiency and reduced cost compared to the benchmark learning method.
\end{abstract}
\begin{IEEEkeywords}
Communication efficiency, Blockchain, Federated learning, Digital twin, Wireless networks
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
Recent advancements in digital twins and 6th Generation mobile networks (6G) have paved the way for the Industrial Internet of Things (IIoT) \cite{8658105}. A series of edge intelligence use cases such as intelligent transportation \cite{9072340,zk3} and smart cities \cite{7945258} are envisioned to provide high-quality services to users in the IIoT network. Data processing based on Artificial Intelligence (AI) algorithms lays the foundation for enabling the provision of intelligent services in the IIoT framework. Despite such enabling technologies and recent developments, the gap between the outcomes of data analysis and the actual status of the physical systems remains a major hurdle in designing robust control algorithms for physical systems \cite{uhlemann2017digital}. In this regard, the digital twin paradigm \cite{8477101} emerges as one of the most promising technologies in 6G networks, enabling near-instant and extremely reliable wireless communication \cite{dang2020should}. Empowered by digital twins, 6G networks can bridge physical systems with the digital world in real time to realize robust edge intelligence in IIoT.
Digital twins represent real objects or subjects with their data, functions, and communication capabilities in the digital space \cite{8289327}. To reduce latency and enhance reliability for edge computing applications, we incorporate digital twins into wireless networks and propose the Digital Twin Wireless Networks (DTWN). By mapping IoT devices to digital twins in edge servers, DTWN improves the efficiency of AI algorithms and mitigates the impact of unreliable communication between users and edge servers. Moreover, DTWN interacts with the physical IIoT systems in real time through data collection and analytics to keep the digital plane synchronized with the physical system. Thus, the running-state analysis and optimization of user devices can be conducted directly in the DTWN, which reduces the system cost. We have noted a few works that discuss integrating digital twins with physical systems to improve their performance for different applications. For instance, in \cite{8289327}, the authors proposed the Experimentable Digital Twins (EDTs) that can be used to realize complex control algorithms or mental models.
In \cite{tao2018digital}, the authors constructed a general digital twin for complex equipment and proposed a new digital-twin-based method for health management. In this paper, we propose to integrate digital twins into edge networks to mitigate the resource gap between end users and Base Stations (BSs) for efficient edge computing.
Conventional centralized AI algorithms require data to be transmitted to the server, which incurs high data leakage risks. To enhance data security and protect user privacy, federated learning \cite{mcmahan2017communication,smith2017federated} has emerged as a new paradigm for distributed machine learning. Different from conventional AI algorithms, federated learning based models require transmitting only the parameters of locally trained models to the server, while the original data remains with the user to alleviate security and privacy risks \cite{8843900}. To this end, the distributed framework and the enhanced data privacy assurance offered by federated learning have attracted considerable attention and have recently been well studied for typical Mobile Edge Computing (MEC) applications \cite{9060868} such as content caching \cite{yu2018federated} and vehicular data sharing \cite{8998397}.
Nonetheless, the need to conduct the iterative training process on distributed entities, each with possibly limited computing resources, can adversely affect the efficiency of federated learning in edge networks. To address this issue, in \cite{wang2019adaptive}, the authors proposed a control algorithm to determine the trade-off between local updates and global aggregation under a given resource budget. In \cite{8737464}, the authors formulated resource consumption and learning performance as an optimization problem and provided an approximate optimal solution.
Moreover, the lack of mutual trust among users is another key factor that can hinder them from participating in the learning process to realize edge intelligence. To deal with this issue, blockchain has been widely explored to build a secure collaborative learning mechanism among untrusted users. For example, the authors in \cite{alcaraz2020blockchain} proposed a three-layer architecture to manage reliable and secure connections among entities, processes, and critical resources for the smart grid. Meanwhile, the efficiency and intelligence of blockchain need to be further improved. Several studies have explored the synergy between AI algorithms and blockchain to enhance data security and system reliability in edge intelligence enabled networks. In \cite{zk1}, the authors presented an edge intelligence and blockchain empowered framework to enhance the security of edge service management in an IoT scenario. Similarly, in \cite{8998330}, the authors integrated blockchain with Deep Reinforcement Learning (DRL) for content caching in vehicular networks. As conventional AI algorithms incur a heavy data transmission load and a higher risk of data leakage, we propose to integrate blockchain with federated learning to achieve secure and efficient edge intelligence. In our proposed framework, the federated learning parameters are recorded in the blockchain instead of on a parameter server, which considerably enhances the security of the parameters and improves the reliability of models from users.
In our proposed framework, DTWN mitigates the unreliability of learning parameter transmission caused by the long communication distance and limited communication resources between end users and edge servers (BSs). However, new challenges also arise in the maintenance of digital twins. The first is the association of digital twins to various servers. Maintaining digital twins consumes a considerable amount of resources for synchronizing real-time data and building the corresponding models. Due to the dynamic computing and communication resources available at edge servers, the association of digital twins to corresponding servers is a crucial factor in determining the performance of DTWN, and it is a largely unexplored dimension in the context of edge networks. Moreover, due to the multiple communication rounds in federated learning, the limited bandwidth available at the BSs must be optimally allocated to improve the communication efficiency of the associated edge servers. In this paper, we first design the new DTWN model by incorporating digital twins into wireless networks to mitigate the unreliable and long-distance communication between users and BSs, where the user data is synchronized to the BSs to construct the corresponding digital twins. Based on the DTWN, we propose a permissioned blockchain empowered federated learning framework for realizing robust edge intelligence.
Data analysis can be executed directly on the digital twins in the BS plane instead of on the resource-constrained end devices. The security and privacy of digital twin data are also enhanced by the permissioned blockchain. To improve the efficiency of the proposed architecture, we formulate an optimization problem for edge association and bandwidth allocation, and design a multi-agent reinforcement learning based algorithm to find the optimal solution to it. The main contributions of this paper are summarized as follows.
\begin{itemize}
\item We design a new digital twin wireless network model, DTWN, which uses digital twins to mitigate the unreliable and long-distance communication between end users and edge servers.
\item We define the edge association problem for DTWN, and propose a permissioned blockchain empowered federated learning framework for edge computing in DTWN.
\item To improve the efficiency of the proposed scheme, we formulate an optimization problem by jointly considering edge association and communication resource allocation to minimize the time cost, and develop a multi-agent reinforcement learning based algorithm to find the optimal solution to the problem.
\end{itemize}
The rest of the paper is organized as follows. In Section \ref{section:model}, the blockchain and federated learning model for DTWN is provided. In Section \ref{section:problem}, we formulate a new edge association problem in DTWN to minimize the total system latency. A multi-agent reinforcement learning based algorithm is developed to find the optimal solution to the problem in Section \ref{section:solution_DRL}. Numerical results are presented in Section \ref{section:evaluation}. Finally, Section \ref{section:conclusion} concludes the paper.
\section{Blockchain and Federated Learning Model for Digital Twin Networks}
\label{section:model}
In this section, we first present our DTWN model. We then describe the edge association problem in DTWN in Section \ref{section:model_ass}, followed by the federated learning model in Section \ref{section:model_FL}. Finally, we introduce our blockchain model in Section \ref{section:model_blockchain} and the wireless communication model in Section \ref{section:model_comm}.
We consider a blockchain and federated learning empowered digital twin network model as depicted in Fig. \ref{fig:DTWN}. The system consists of $N$ end users such as IoT devices and mobile devices, $M$ BSs, and a Macro Base Station (MBS). The BSs and the MBS are equipped with MEC servers. The end devices generate running data and synchronize their data with corresponding digital twins that run on the BSs. We use $\mathcal{D}_i = \{ (x_{i1},y_{i1}),...,(x_{iD_i},y_{iD_i})\}$ to denote the data of end user $i$, where $D_i$ is the data size, $x_{ij}$ is the data collected by end user $i$ and $y_{ij}$ is the label of $x_{ij}$. The digital twin of end user $i$ in the BSs is denoted as $DT_i$, which is composed of a behavior model $\mathcal{M}_i$, static running data $\mathcal{D}_i$, and the real-time dynamic state $s_t$, i.e., $DT_i = (\mathcal{M}_i,\mathcal{D}_i,s_t)$. $\mathcal{D}_i$ and $s_t$ are the essential data required to run digital twin applications. Instead of synchronizing all raw data to the digital twins, which would incur a heavy communication load and data leakage risk, we use federated learning to learn the model $\mathcal{M}$ from user data. In various application scenarios, the end users may communicate with other end users to exchange running information and share data, e.g., through Device to Device (D2D) communications. Thus, the digital twins also form a network based on the connections of end users. Based on the constructed DTWN, we can obtain the running states of the physical devices and make further decisions to optimize and drive their operation by analyzing the digital twins directly.
In our proposed digital twin network model, we use federated learning to execute the training and learning process collaboratively for edge intelligence. Moreover, since the end users lack mutual trust and the digital twins contain private data, we use a permissioned blockchain to enhance system security and data privacy. The permissioned blockchain records the data from the digital twins and manages the participating users through permission control. The blockchain is maintained by the BSs, which are also the clients of the federated learning model. The MBS runs as the server of the federated learning model. In each iteration of federated learning, the MBS distributes the machine learning model parameters to the BSs for training. The BSs train the model based on data from the digital twins and return the model parameters to the MBS.
\begin{figure}
\caption{The proposed Digital Twin Wireless Networks}
\label{fig:DTWN}
\end{figure}
\subsection{Edge Association in DTWN}
\label{section:model_ass}
In DTWN, the end devices or users are mapped to digital twins maintained in the BSs. The maintenance of digital twins consumes a large amount of computing and communication resources for synchronizing real-time data and building the corresponding models. However, the computation and communication resources in wireless networks are very limited and should be used optimally to improve resource utility. Thus, how to associate the various IoT devices with different BSs, according to their computation capabilities and the states of the communication channels, is a key problem in DTWN. As depicted in Fig. \ref{fig:DTWN}, the digital twins of IoT devices are constructed and maintained by their associated BSs. The training data and the computation tasks for training are distributed to the various BSs based on the association between digital twins and BSs. We provide the formal definition of edge association in Definition \ref{Definition:EA}.
\begin{Definition}[Edge Association]
\label{Definition:EA}
Consider a DTWN with $N$ IoT users and $M$ BSs. For any user $u_i$, $i \in \mathcal{N}$, the goal of edge association is to choose the target BS $j \in \mathcal{M}$ to construct the digital twin $DT_i$ of user $i$. The association $\langle DT_i, BS_j\rangle$ is denoted as $\Phi(i,j)$. If $DT_i$ is associated to BS $j$, then $\Phi(i,j)= D_i$, where $D_i$ is the size of data used to construct $DT_i$. Otherwise, $\Phi(i,j)= 0$.
\end{Definition}
A BS can associate with multiple digital twins, while a digital twin can be associated with at most one BS. That is, $\sum_{j=1}^{M}\Phi(i,j) = D_i$. We perform edge association according to the data sizes $D_i$ of the IoT users, the computation capabilities $f_j$ of the BSs, and the transmission rates $R_{i,j}$ between $u_i$ and $BS_j$, denoted as
\begin{equation}
\label{eq:ass}
\Phi(i,j) = f(D_i,f_j,R_{i,j}).
\end{equation}
The objective of the edge association problem is to improve the utility of resources and the efficiency of running digital twins in the DTWN.
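As a concrete illustration of the mapping $f(D_i,f_j,R_{i,j})$ in Eq. (\ref{eq:ass}), the sketch below implements a simple greedy heuristic of our own devising: each digital twin is placed on the BS with the smallest estimated service time, accounting for the load already assigned. This heuristic is purely illustrative and is not the multi-agent reinforcement learning solution developed in Section \ref{section:solution_DRL}.

```python
def edge_association(data_sizes, cpu_freqs, rates):
    """Toy greedy heuristic for the edge association of Definition 1.

    data_sizes[i] : D_i, data size of user i's digital twin
    cpu_freqs[j]  : f_j, CPU frequency of BS j
    rates[i][j]   : R_ij, transmission rate between user i and BS j

    Returns Phi as a dict: Phi[(i, j)] = D_i if DT_i is placed on BS j,
    so that each twin is placed on exactly one BS.
    """
    M = len(cpu_freqs)
    load = [0.0] * M            # data already assigned to each BS
    phi = {}
    for i, d in enumerate(data_sizes):
        # illustrative cost: computation (queued data / frequency)
        # plus transfer time of this twin's data
        cost = lambda j: (load[j] + d) / cpu_freqs[j] + d / rates[i][j]
        j_best = min(range(M), key=cost)
        phi[(i, j_best)] = d
        load[j_best] += d
    return phi

# three users, two BSs; all numbers are illustrative
phi = edge_association([10.0, 4.0, 8.0], [2.0, 3.0], [[1.0, 2.0]] * 3)
print(phi)  # {(0, 1): 10.0, (1, 0): 4.0, (2, 1): 8.0}
```

Note that the constraint $\sum_{j}\Phi(i,j)=D_i$ holds by construction, since each twin is assigned to exactly one BS.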
\subsection{Federated Learning Model}
\label{section:model_FL}
The goal of federated learning is to train a global machine learning model $\mathcal{M}$ based on the data from various digital twins $DT_i$ without transmitting the original training data. The learning process involves finding the parameter $w$ for global model $\mathcal{M}$ that minimizes the global loss function $f(w,x,y)$, denoted as
\begin{equation}
\label{eq:loss}
\min_w \frac{1}{N}\sum_{i=1}^{N}\frac{1}{D_i}\sum_{j=1}^{D_i}f(w,x_{ij},y_{ij}).
\end{equation}
The client BSs train their local models based on their own data. Each client minimizes the loss function on its local data through gradient-based methods such as gradient descent and stochastic gradient descent, with a predefined learning rate. The clients then transmit the local gradients or the local models to the server MBS to update the global model, through algorithms such as federated averaging (FedAvg) \cite{mcmahan2017communication} and federated stochastic gradient descent (FedSGD). We denote the aggregation of the global model as:
\begin{equation}
\label{eq:aggregation}
\mathcal{M} = \frac{1}{N}\sum_{i=1}^{N}D_i w_i.
\end{equation}
In the training process, the BSs train the local models based on the digital twins $DT_i$ that they maintain, according to Eq. (\ref{eq:loss}). A BS may contain several digital twins constructed from the end users under its coverage. Different from conventional federated learning, the BSs in our proposed scheme first aggregate the local models from their multiple digital twins instead of sending the original local models to the MBS, which can reduce the transmission load. Since BS $m$ has $K_m$ digital twins, the aggregation at BS $m$ is
\begin{equation}
\label{eq:BS_agg}
\mathcal{G}_m = \frac{1}{K_m}\sum_{j=1}^{K_m}D_{DT_j} w_{DT_j},
\end{equation}
where $D_{DT_j}$ is the training data of digital twin $DT_j$ and $w_{DT_j}$ is the trained model of $DT_j$. BS $m$ then sends $\mathcal{G}_m$ to the MBS, and the MBS updates the global model as
\begin{equation}
\label{eq:MBS_agg}
\mathcal{M} = \frac{1}{M}\sum_{m=1}^{M}\mathcal{G}_m.
\end{equation}
The local models of digital twins are transmitted to the MBS through wireless links. The MBS collects the model parameters from all participating BSs and updates the global model. In this process, since all BSs transmit their parameters in the same period and the wireless bandwidth is limited, the communication efficiency is of vital importance for the learning phase to converge.
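The two-level aggregation of Eqs. (\ref{eq:BS_agg}) and (\ref{eq:MBS_agg}) can be sketched as follows. Models are represented as flat parameter lists, the equations are transcribed as written (the BS step weights each twin's model by its data size), and the numeric values are illustrative only.

```python
def bs_aggregate(twin_models, twin_data_sizes):
    # Local aggregation at one BS: G_m = (1/K_m) * sum_j D_j * w_j,
    # where each w_j is a flat list of model parameters.
    k = len(twin_models)
    dim = len(twin_models[0])
    return [sum(d * w[p] for w, d in zip(twin_models, twin_data_sizes)) / k
            for p in range(dim)]

def mbs_aggregate(bs_models):
    # Global aggregation at the MBS: M = (1/M) * sum_m G_m.
    m = len(bs_models)
    dim = len(bs_models[0])
    return [sum(g[p] for g in bs_models) / m for p in range(dim)]

# two BSs: the first holds two digital twins, the second holds one;
# each local model has two parameters (illustrative values)
g1 = bs_aggregate([[1.0, 0.0], [0.0, 1.0]], [2.0, 2.0])
g2 = bs_aggregate([[1.0, 1.0]], [4.0])
print(mbs_aggregate([g1, g2]))  # [2.5, 2.5]
```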
\subsection{Blockchain Model}
\label{section:model_blockchain}
To enhance the security and reliability of digital twins from untrusted end users, the BSs act as the blockchain nodes and maintain the running of the permissioned blockchain. The digital twins are stored in the blockchain and their data is updated as the state of the corresponding user changes. Moreover, the local models of the BS are also stored in the blockchain, which can be verified by other BSs to ensure the quality of local models. Thus, there are three types of records, namely, digital twin model records, digital twin data records, and training model records.
The overall blockchain scheme for federated learning is shown in Fig. \ref{fig:blockchain}. The BSs first train the local training models on their own data, and then upload the trained models to the MBS. The trained models are also recorded as blockchain transactions, and broadcasted to other BSs for verification. Other BSs collect the transactions and pack them into blocks. The consensus process is executed to verify the transactions in blocks.
\begin{figure}
\caption{The blockchain scheme for federated learning}
\label{fig:blockchain}
\end{figure}
Our consensus process is based on the Delegated Proof of Stake (DPoS) protocol, where the stakes are the training coins. Initial training coins are allocated to BS $i$ according to its digital twin data, denoted as
\begin{equation}
\label{eq:stake}
S_{i} = \frac{\sum_{j=1}^{K_i} D_{DT_j}}{\sum_{k=1}^{M}D_{k}}S_{ini},
\end{equation}
where $S_{ini}$ is an initial value, $K_i$ is the number of digital twins associated with BS $i$, and $D_k$ is the total digital twin data held by BS $k$.
The coins of each BS are then adjusted according to its performance in the training process. If the trained model of a BS passes the verification of the other BSs and the MBS, coins are awarded to the BS. Otherwise, the BS receives no reward for its training work. A number of BSs are elected as block producers by all BSs. In the voting process, all the BSs vote for the candidate BSs using their own training coins. The elected BSs take turns packing the transactions in a time interval $T$ into a block $B$, and broadcast block $B$ to the other producers for verification.
In our federated learning scheme, we leverage blockchain to verify the local models before embedding them into the global model. Since block verification consumes considerable resources, the interval $T$ should be set to a multiple of the local training period; that is, the BSs execute multiple local training iterations before transmitting their local models to the MBS for global aggregation.
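The proportional coin allocation of Eq. (\ref{eq:stake}) can be sketched as follows; the per-BS digital twin data sizes and the initial value are illustrative.

```python
def initial_stakes(twin_data_per_bs, s_ini):
    # Initial training-coin allocation: each BS's stake is its share of
    # the total digital twin data, scaled by the initial value S_ini.
    # twin_data_per_bs[i] lists the data sizes of the twins held by BS i.
    per_bs = [sum(sizes) for sizes in twin_data_per_bs]
    total = sum(per_bs)
    return [s_ini * d / total for d in per_bs]

# three BSs holding 2, 1, and 2 digital twins (illustrative sizes)
stakes = initial_stakes([[3.0, 1.0], [2.0], [2.0, 2.0]], s_ini=100.0)
print(stakes)  # [40.0, 20.0, 40.0]
```

By construction the allocated coins sum to $S_{ini}$, which matches the normalization in Eq. (\ref{eq:stake}).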
\subsection{Communication Model}
\label{section:model_comm}
We use the Orthogonal Frequency Division Multiple Access (OFDMA) for wireless transmission in our system. To upload trained local models, all BSs share $C$ sub-channels to transmit their parameters. The achievable uplink data rate from BS $i$ to the MBS is:
\begin{equation}
\label{eq:uplink}
R_i^U = \sum_{c=1}^{C} \tau_{i,c} W^U \log_2 \left(1+\frac{P_{i,c}^U h_{i,c}^U r_{i,m}^{- \alpha}}{\sum_{j \in \mathcal{N}'} P_{j,c}^U h_{j,c}^U r_{j,m}^{- \alpha} + N_0}\right),
\end{equation}
where $C$ is the total number of subchannels, $\tau_{i,c}$ is the time fraction allocated to BS $i$ on subchannel $c$, and $W^U$ is the constant bandwidth of each subchannel. $P_{i,c}^U$ is the transmission power and $h_{i,c}^U$ is the uplink channel gain of BS $i$ on subchannel $c$. $r_{i,m}^{- \alpha}$ is the path loss fading of the channel between BS $i$ and the MBS, where $r_{i,m}$ is the distance between BS $i$ and the MBS and $\alpha$ is the path loss exponent. $N_0$ is the noise power, and $\sum_{j \in \mathcal{N}'} P_{j,c}^U h_{j,c}^U r_{j,m}^{- \alpha}$ is the interference caused by the other BSs $\mathcal{N}'$ transmitting on the same subchannel at distances $r_{j,m}$ from the MBS.
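A minimal numeric sketch of the uplink rate in Eq. (\ref{eq:uplink}); all parameter values below are illustrative placeholders, not taken from the evaluation.

```python
import math

def uplink_rate(tau, W, P, h, r, alpha, interference, N0):
    # Achievable OFDMA uplink rate of one BS: the sum over subchannels
    # of tau_c * W * log2(1 + SINR_c), with the interference term per
    # subchannel already summed over the other BSs.
    rate = 0.0
    for c in range(len(tau)):
        sinr = P[c] * h[c] * r ** (-alpha) / (interference[c] + N0)
        rate += tau[c] * W * math.log2(1 + sinr)
    return rate

# one BS on two subchannels (illustrative numbers)
r_bps = uplink_rate(tau=[0.5, 0.5], W=1e6, P=[0.1, 0.1], h=[1.0, 1.0],
                    r=100.0, alpha=2.0,
                    interference=[1e-7, 1e-7], N0=1e-9)
print(round(r_bps / 1e6, 3), "Mbps")  # 6.644 Mbps
```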
In the download phase, the MBS broadcasts the global model with the rate
\begin{equation}
\label{eq:downlink}
R_i^D = \sum_{c=1}^{C} W^D \log_2 \left(1+\frac{P_{i,c}^D h_{i,c}^D r_{i,m}^{- \alpha}}{\sum_{j \in \mathcal{N}''} P_{j,c}^D h_{j,c}^D r_{j,m}^{- \alpha} +N_0}\right),
\end{equation}
where $P_{i,c}^D$ is the downlink transmission power, $h_{i,c}^D$ is the downlink channel gain between the MBS and BS $i$, and $\sum_{j \in \mathcal{N}''} P_{j,c}^D h_{j,c}^D r_{j,m}^{- \alpha}$ is the downlink interference.
\section{Edge Association in DTWN: Problem Formulation}
\label{section:problem}
In this section, we first analyze the delay performance of the proposed model, and then formulate the optimization problem for edge association in DTWN to minimize the total system time cost under a given learning accuracy requirement.
We now derive the formulation of the edge association problem. We consider that the gradient $\nabla f(w)$ of $f(w)$ is $L$-Lipschitz smooth, that is
\begin{equation}
\label{eq:L}
||\nabla f(w_{t+1})-\nabla f(w_t)|| \leq L ||w_{t+1}-w_t||,
\end{equation}
where $L$ is a positive constant and $||w_{t+1}-w_t||$ is the norm of $w_{t+1}-w_t$. We consider that the loss function $f(w)$ is strongly convex, that is
\begin{equation}
\label{eq:c}
f(w_{t+1}) \geq f(w_t) + \langle \nabla f(w_t),w_{t+1}-w_t \rangle + \frac{1}{2}||w_{t+1}-w_t||^2.
\end{equation}
Many loss functions used in federated learning satisfy the above assumptions, for example, the logistic loss function. If (\ref{eq:L}) and (\ref{eq:c}) hold, an upper bound on the number of global iterations can be obtained as \cite{ma2017distributed}
\begin{equation}
\label{eq:bound}
\mathcal{T}(\theta_L,\theta_G) = \frac{\mathcal{O}(\log(1/\theta_L))}{1-\theta_G},
\end{equation}
where $\theta_L$ is the local accuracy, i.e., $\frac{||\nabla f(w_{t+1})||}{||\nabla f(w_t)||} \leq \theta_L$, $\theta_G$ is the global accuracy, and $0 \leq \theta_L, \theta_G\leq 1$. As in \cite{8737464}, we treat $\theta_L$ as a fixed value, so that the upper bound $\mathcal{T}(\theta_L,\theta_G)$ simplifies to $\mathcal{T}(\theta_G) = \frac{1}{1-\theta_G}$.
Denote the time of one local training iteration by $T_{cmp}$; then the computation time in one global iteration is $\log(1/\theta_L)T_{cmp}$, and, denoting the total time of one global iteration by $T_{glob}$, the upper bound on the total learning time is $\mathcal{T}(\theta_G)T_{glob}$.
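The iteration bound and the resulting learning-time bound can be sketched as follows (a minimal sketch, with the local-accuracy term folded into a constant as in the simplification above):

```python
def global_iteration_bound(theta_G):
    """Upper bound T(theta_G) = 1/(1 - theta_G) on the number of global
    rounds, for a fixed local accuracy theta_L."""
    assert 0 <= theta_G < 1
    return 1.0 / (1.0 - theta_G)

def total_learning_time(theta_G, T_glob):
    """Upper bound on the total learning time: number of global rounds
    times the time cost T_glob of one global iteration."""
    return global_iteration_bound(theta_G) * T_glob
```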
The time cost in our proposed scheme mainly consists of
\begin{enumerate}
\item \textit{Local training on digital twins:} The time cost for the local training of BS $i$ is determined by its computing capability and the data size of its digital twins:
\begin{equation}
\label{eq:local_training}
T_i^{cmp} = \frac{\sum_{j=1}^{K_i} b_j D_{DT_j}}{f_i^C}f^C,
\end{equation}
where $f^C$ is the number of CPU cycles required to train one sample of data, ${f_i^C}$ is the CPU frequency of BS $i$, and $b_j$ is the training batch size of digital twin $DT_j$.
\item \textit{Model aggregation on BSs:} Each BS aggregates the local models from its digital twins according to Eq. (\ref{eq:BS_agg}). The computing time for local aggregation is
\begin{equation}
\label{eq:local_agg}
T_i^{la} = \frac{\sum_{j=1}^{K_i} |w_j|}{f_i^C}f_b^C,
\end{equation}
where $|w_j|$ is the size of the local model of $DT_j$ and $f_b^C$ is the number of CPU cycles required to aggregate one unit of data. Since all the clients share the same global model, $|w_1|=|w_2|=\dots=|w_g|$. Thus the time cost for local aggregation is
\begin{equation}
\label{eq:local_agg2}
T_i^{la} = \frac{K_i |w_g|}{f_i^C}f_b^C.
\end{equation}
\item \textit{Transmission of model parameters:}
The local models aggregated by BS $i$ are then broadcast to the other BSs as transactions. The time cost is related to the number of blockchain nodes and the transmission efficiency. Since other BSs also help to relay the transaction in the broadcast process, the transmission time scales with $\log_2 M$, where $M$ is the size of the BS network. The required time cost is
\begin{equation}
\label{eq:bs_broadcast}
T_i^{pt} = \xi \log_2 M \frac{K_i |w_g|}{R_i^U},
\end{equation}
where $\xi$ is a factor of transmission time cost that can be obtained from historical running records of the transmission process.
\item \textit{Block Validation:}
The block producer BS collects the transactions and packs them into a block. The block is then broadcast to the other producer BSs and validated by them. Thus, the time cost is
\begin{equation}
\label{eq:block}
T_{bp}^{bv} = \xi \log_2 M_p \frac{S_B}{R_i^D} + \max_i \frac{S_B f^v}{f_i^C},
\end{equation}
where $M_p$ is the number of block producers, $S_B$ is the size of a block, and $f^v$ is the number of CPU cycles required to validate one unit of block data.
\end{enumerate}
Note that in the aggregation phase, the size of the model parameters $|w_g|$ is small and the computing capability $f_i^C$ is high. Thus, compared to the other phases, the aggregation time is so small that it can be neglected. Based on the above analysis, the time cost for one iteration is
\begin{equation}
\begin{aligned}
\label{eq:time}
T &= \max_i\left\{\frac{\sum_{j=1}^{K_i} b_j D_{DT_j}}{f_i^C}f^C \right \} + \max_i\left\{ \xi \log_2 M \frac{K_i |w_g|}{R_i^U}\right \} \\ & + \xi \log_2 M_p \frac{S_B}{R_i^D} + \max_i \frac{S_B f^v}{f_i^C}.
\end{aligned}
\end{equation}
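The four terms of the per-iteration time cost can be sketched numerically as follows (illustrative names only; the downlink-rate index in the block-broadcast term is left unspecified in the text, so the sketch conservatively takes the worst-case downlink rate, which is an assumption):

```python
import math

def iteration_time_cost(b, D, f_req, f_cpu, K, w_g, R_up, R_down, xi, Mp, S_B, f_v):
    """Per-iteration system time cost: local training, model transmission,
    block broadcast, and block validation. Arguments indexed by BS are
    lists (b and D are lists of per-digital-twin lists)."""
    M = len(f_cpu)
    t_train = max(f_req * sum(bj * Dj for bj, Dj in zip(b[i], D[i])) / f_cpu[i]
                  for i in range(M))
    t_tx = max(xi * math.log2(M) * K[i] * w_g / R_up[i] for i in range(M))
    # Block broadcast (worst-case downlink rate, an assumption) + validation.
    t_bv = (xi * math.log2(Mp) * S_B / min(R_down)
            + max(S_B * f_v / f_cpu[i] for i in range(M)))
    return t_train + t_tx + t_bv
```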
In the 6G network, the growing user scale, the ultra-low latency requirements of communication, and the dynamic network status make reducing the model training time an important issue in various applications. Since accuracy and latency are the two main metrics for evaluating the decision-making ability of digital twins in our scheme, we consider the edge association problem to find the trade-off between learning accuracy and the time cost of the learning process. Due to the dynamic computing and communication capabilities of the BSs, the edge association of digital twins, that is, how to allocate the digital twins of different end users to the BSs for training, is the key issue to be solved for minimizing the total time cost. Moreover, increasing the training batch size $b_n$ of each digital twin $DT_n$ can improve the learning accuracy, but it also increases the learning time by requiring more computation. In addition, how to allocate the bandwidth resources to improve communication efficiency should also be considered. These policies must be designed jointly to minimize the total time cost of the proposed scheme. Thus, we formulate the optimization problem as minimizing the time cost of federated learning under a given learning accuracy, where the association of digital twins, their training batch sizes, and the bandwidth allocation are jointly determined according to the computing capability $f_i^C$ and the channel state $h_{i,c}$. The optimization problem is formulated as
\begin{align}
& \mathop{\min} \limits_{\boldsymbol{K_i},\boldsymbol{b_n},\boldsymbol{\tau_{i,c}}} \frac{1}{1-\theta_G}T \label{eq:problem}\\
\mbox{s.t.} \quad & \theta_G \geq \theta_{th}, \theta_G, \theta_{th} \in (0,1), \tag{\ref{eq:problem}{a}} \label{eq:problem_a}\\
& \sum_{i=1}^{M} K_{i} = D, K_i \in \mathcal{N}, \tag{\ref{eq:problem}{b}} \label{eq:problem_b}\\
& \sum_{i=1}^{M} \tau_{i,c} \leq 1, c \in \mathcal{C}, \tag{\ref{eq:problem}{c}} \label{eq:problem_c}\\
& b^{min} \leq b_n \leq b_n^{max}, \quad \forall n \in \mathcal{N}. \tag{\ref{eq:problem}{d}} \label{eq:problem_d}
\end{align}
Constraint (\ref{eq:problem_b}) ensures that the numbers of digital twins associated with the BSs sum to the total number $D$. Constraint (\ref{eq:problem_c}) guarantees that each subchannel is allocated to at most one BS at a time. Constraint (\ref{eq:problem_d}) bounds the training batch size of each digital twin. Problem (\ref{eq:problem}) is a combinatorial problem. Since the objective function contains several products of variables, and the time cost of each BS is also affected by the resource states of the other BSs, problem (\ref{eq:problem}) is challenging to solve.
\section{Multi-agent DRL for Edge Association}
\label{section:solution_DRL}
Since the system state depends only on the network states in the current iteration and the allocation policies of the last iteration, we model the problem as a Markov Decision Process (MDP) and solve it with a multi-agent DRL based algorithm \cite{lowe2017multi}.
\subsection{Multi-agent DRL Framework}
The proposed multi-agent reinforcement learning framework is shown in Fig. \ref{fig:drl}. In our system, each BS is regarded as a DRL agent, and the environment consists of the BSs and the digital twins of end users. The framework comprises multiple agents, a common environment, the system state $\mathcal{S}$, the action $\mathcal{A}$, and the reward function $\mathcal{R}$, described below.
\begin{figure}
\caption{Multi-agent DRL for edge association}
\label{fig:drl}
\end{figure}
\begin{itemize}
\item \textit{State space:} The state of the environment is composed of the computing capabilities $f^C$ of the BSs, the number of digital twins $K_i$ on each BS $i$, the training data size $D_n$ of each digital twin, and the channel states $h_{i,c}$. The joint state is denoted $s(t)=(\boldsymbol{f}^C,\boldsymbol{K},\boldsymbol{D},\boldsymbol{h})$, where each component is a vector containing the states of all agents.
\item \textit{Action space:} The actions of BS $i$ in our system consist of the digital twin allocation $K_i$, the training data batchsizes for its digital twins $\boldsymbol{b}_i$, and the bandwidth allocation $\boldsymbol{\tau}_i$. Thus, the actions are denoted as $\boldsymbol{a}_i(t) = (K_i, \boldsymbol{b}_i, \boldsymbol{\tau}_i)$. BS agent $i$ makes new action decisions $\boldsymbol{a}_i(t)$ at the beginning of iteration $t$ based on system state $s(t)$. The system action is $\boldsymbol{a}(t) = (\boldsymbol{a}_1,...,\boldsymbol{a}_i,...,\boldsymbol{a}_m)$.
\item \textit{Reward:} We define the reward function of BS $i$ based on its time cost $T_i$ from Eq. (\ref{eq:time}) as
\begin{equation}
\label{eq:reward}
\mathcal{R}_i(s(t),\boldsymbol{a}_i(t))= - T_i(t).
\end{equation}
The reward vector of all agents is $\boldsymbol{R}= (\mathcal{R}_1,...,\mathcal{R}_m)$. According to Eq. (\ref{eq:time}), the total time cost $T$ is determined by the maximum time cost over the agents, $\max\{T_1,T_2,...,T_m\}$. Thus all DRL agents share the same reward function in our scheme. In the training process, the BS agents adjust their actions to maximize the reward, that is, to minimize the system time cost in each iteration.
\end{itemize}
The learning objective of BS $i$ is to find the best policy mapping states to actions, denoted $\boldsymbol{a}_i = \pi_i (s)$, where $\boldsymbol{a}_i$ is the action taken by BS $i$ in system state $s$. The objective is to maximize the expected discounted reward, that is,
\begin{equation}
\label{eq:total_reward}
\mathcal{R}= \sum_{t}\gamma^t \sum_{i} \mathcal{R}_i(s(t),\boldsymbol{a}_i(t)),
\end{equation}
where $\gamma$ is the discount rate, $0 \leq \gamma \leq 1$. In a conventional DRL framework, it is hard for an agent to obtain the states of the others. In our DTWN, however, the states of digital twins and BSs are recorded in the blockchain, so a BS can retrieve records from the blockchain to obtain the system states and the actions of other agents during training.
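The discounted objective above can be sketched as follows (a minimal sketch, assuming each entry of the input is the per-step reward already summed over all agents):

```python
def discounted_return(step_rewards, gamma):
    """Discounted cumulative reward sum_t gamma^t * R(t), where R(t) is
    the sum of the per-agent rewards at step t."""
    return sum((gamma ** t) * r for t, r in enumerate(step_rewards))
```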
\subsection{Multi-agent DRL algorithm for edge association}
Since network state variables such as $\boldsymbol{h}$ and $\boldsymbol{f}^C$ take continuous values, the state and action spaces are large. We adopt the Deep Deterministic Policy Gradient (DDPG) algorithm to address the formulated MDP. DDPG uses a policy-based actor-critic (AC) architecture to select and evaluate actions, and uses Deep Q-Networks (DQNs) to approximate the value functions in the actor and critic networks. The agents share the same reward function and train their policy networks cooperatively to minimize the time cost.
In the training phase, each BS agent generates training samples $\langle s,\boldsymbol{a}_i, R_i, s' \rangle$, where $s'$ is the state of the DTWN in the next time slot. We train the actor network $\pi(s_t|\theta_{\pi})$ and critic network $Q(s_t,\boldsymbol{a}_i|\theta_Q)$, with parameters $\theta_{\pi}$ and $\theta_{Q}$, on tuples randomly sampled from the replay memory. There are two sets of actor and critic networks: the primary networks and the target networks. The target networks have the same structure as the primary networks and are used to generate target values for training the primary networks.
Each agent $i$ has an actor DNN $\pi_i(s_t|\theta_{\pi_i})$ that determines its action $\boldsymbol{a}_i$ based on the current state $s_t$, as
\begin{equation}
\label{Eq:actor}
a_i(t) = \pi_i(s_t|\theta_{\pi_i}) + \mathfrak{N},
\end{equation}
where $\mathfrak{N}$ is random noise generated by the Ornstein--Uhlenbeck process \cite{barndorff2001non}, and $\theta_{\pi_i}$ parameterizes the explored edge association policy.
The parameters of the primary actor DNN are updated as
\begin{equation}
\label{Eq:actorUpdate}
\theta_\pi = \theta_\pi + \alpha_\pi \cdot \mathbb{E}\left[\nabla_{\boldsymbol{a}_i} Q(\boldsymbol{s}_t,\boldsymbol{a}_1,...,\boldsymbol{a}_i| \theta_Q)|_{\boldsymbol{a}_i = \pi(\boldsymbol{s}_t|\theta_\pi)}\cdot \nabla_{\theta_\pi}\pi(\boldsymbol{s}_t)\right],
\end{equation}
where $\alpha_\pi$ is the learning rate of the actor DNN.
We train the primary critic network of agent $i$ by gradient descent on the squared error between the target value and the Q-value, as
\begin{equation}
\label{Eq:criticUpdate}
\theta_{Q_i} = \theta_{Q_i} + \alpha_{Q_i} \cdot \mathbb{E}[2(y_t-Q(\boldsymbol{s}_t,\boldsymbol{a}_i|\theta_{Q_i}))\cdot \nabla_{\theta_{Q_i}} Q(\boldsymbol{s}_t,\boldsymbol{a}_1,...,\boldsymbol{a}_i)],
\end{equation}
where $\alpha_{Q_i}$ is the learning rate of the primary critic DNN, $y_t$ is the target value generated by the target networks, and $(\boldsymbol{a}_1,...,\boldsymbol{a}_i)$ are the actions of all DRL agents.
The target networks are a slowly updated copy of the primary networks. Agent $i$ updates the parameters of its target actor DNN $\theta_{\pi_i}^T$ and target critic DNN $\theta_{Q_i}^T$ as
\begin{align}
\label{Eq:tUpdate}
&\theta_{\pi_i}^T = \beta \theta_{\pi_i} + (1-\beta)\theta_{\pi_i}^T,\\
&\theta_{Q_i}^T = \beta \theta_{Q_i} + (1-\beta)\theta_{Q_i}^T,
\end{align}
where $\beta$ is the update rate.
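The soft target update in Eq. (\ref{Eq:tUpdate}) can be sketched as follows (parameters are plain lists of floats in this sketch; a framework implementation would apply the same rule tensor-wise):

```python
def soft_update(theta_target, theta_primary, beta):
    """Polyak (soft) update of target-network parameters:
    theta^T <- beta * theta + (1 - beta) * theta^T."""
    return [beta * p + (1.0 - beta) * t
            for t, p in zip(theta_target, theta_primary)]
```

With a small $\beta$, the target parameters track the primary parameters slowly, which stabilizes the target values used in the critic loss.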
The proposed multi-agent reinforcement learning algorithm for the edge association problem is summarized in Algorithm \ref{algorithm:blockchain}. Each agent first initializes its primary and target actor-critic DNNs and its edge association policy. Then, the primary actor DNN of agent $i$ generates action $\boldsymbol{a}_i$ from the current state and policy according to Eq. (\ref{Eq:actor}). The observed reward $R_i(t)$ and next state $s_{t+1}$ are computed, and the tuple $(s_t,\boldsymbol{a}_i(t),R_i(t),s_{t+1})$ is stored in the replay memory as a training sample. The primary actor and critic DNNs are then updated based on Eq. (\ref{Eq:actorUpdate}) and Eq. (\ref{Eq:criticUpdate}), and the target networks are updated according to Eq. (\ref{Eq:tUpdate}).
\renewcommand{\textbf{Input:}}{\textbf{Input:}}
\renewcommand{\textbf{Output:}}{\textbf{Output:}}
\begin{algorithm}[ht]
\caption{The multi-agent deep reinforcement learning based algorithm for edge association}
\label{algorithm:blockchain}
\begin{algorithmic}[1]
\FOR{each BS $i \leq M$}
\STATE Initialize primary and target actor-critic networks; Initialize replay memory;
\ENDFOR
\FOR{each BS $i \leq M$}
\FOR{episode $e \leq E$}
\STATE Initialize DTWN environment setup;
\FOR{each time slot $t$}
\STATE Observe current state $s$ and execute action $\boldsymbol{a}_i$ according to Eq. (\ref{Eq:actor});
\STATE Compute reward $R_i(t)$, and transit to the next state $s_{t+1}$ based on actions $(\boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_M)$;
\STATE Store $(s_t,\boldsymbol{a}_i(t),R_i(t),s_{t+1})$ into the replay memory;
\STATE Update the primary actor and critic networks according to Eq. (\ref{Eq:actorUpdate}) and Eq. (\ref{Eq:criticUpdate});
\STATE Update the target networks according to Eq. (\ref{Eq:tUpdate});
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
The computationally expensive training process of the DRL can be performed offline over a large number of episodes based on the dynamic system states. The trained DRL models are then deployed online to allocate resources so as to minimize the time cost of federated learning under a given learning accuracy. The actions in our multi-agent DRL for edge association are composed of the digital twin association policies $\boldsymbol{K}$, the training batch sizes $\boldsymbol{b}$, and the bandwidth allocation policies $\boldsymbol{\tau}$. Due to the large action and state spaces,
we adopt the actor DNN to generate actions. The complexity of the proposed learning algorithm mainly lies in the training of the actor DNN $\theta_{\pi}$ and the critic DNN $\theta_{Q}$. Suppose the DNNs in our algorithm have $L$ hidden layers with an average of $L_a$ neurons per layer; the time complexity of training a DNN is then
$O(L_a^2)$. The total complexity of training the multi-agent DRL model with $M$ agents is $O((M \cdot L_a^2) \cdot E)$. Since the training at each agent is executed in parallel, the time complexity reduces to $O(L_a^2 \cdot E)$, which is determined by the DNN size and the number of episodes.
\section{Numerical Results}
\label{section:evaluation}
We conduct our experiments on the real-world CIFAR-10 dataset \cite{krizhevsky2009learning}, which consists of 60,000 images in 10 classes: 50,000 training images and 10,000 test images. Learning on this image dataset simulates real edge computing scenarios such as traffic flow monitoring and image recognition by smart cameras. We consider a wireless network with 1 MBS, 5 BSs, and 100 end users. The users are randomly distributed in the coverage area of the BSs, as shown in Fig. \ref{fig:area}. The CIFAR-10 dataset is shuffled and assigned to the end users randomly, so the training data of federated learning is Independent and Identically Distributed (IID) in our experiment. A Convolutional Neural Network (CNN) \cite{cnn} is adopted as the machine learning model for federated learning. The CNN has two $5 \times 5$ convolution layers (with 32 and 64 channels, respectively), each followed by a $2 \times 2$ max-pooling layer, and a fully connected layer with 512 units.
The maximum CPU frequencies of the five BSs are 2.6 GHz, 1.8 GHz, 3.6 GHz, 2.4 GHz, and 2.4 GHz, respectively. The transmission powers of the BSs and the MBS are 34 dBm and 42 dBm, respectively. The bandwidth of each subchannel is set to 30 MHz, and the noise power $N_0$ is set to $-174$ dBm.
\begin{figure}
\caption{Illustration of the wireless network}
\label{fig:area}
\end{figure}
The latency performance versus training rounds is shown in Fig. \ref{fig:FL_time} for our proposed algorithm and for conventional learning with random edge association and with average edge association. Our algorithm significantly reduces the system time cost compared with the benchmarks: the multi-agent reinforcement learning algorithm optimizes the digital twin association and the allocation of communication resources, which improves the running efficiency and reduces the system latency. Fig. \ref{fig:FL_loss} depicts the learning performance of the proposed algorithm compared with learning on the full training data and learning with random edge association. The results show that our algorithm achieves convergence comparable to fully trained federated learning, in which all the training data of the users is used in each training round, while its learning loss improves on that of the conventional algorithm with random edge association thanks to the optimization process.
\begin{figure}
\caption{Total system time cost of different schemes in training process}
\label{fig:FL_time}
\end{figure}
\begin{figure}
\caption{The loss performance in the learning process}
\label{fig:FL_loss}
\end{figure}
The average cumulative reward is calculated as $\mathcal{R}_n = \frac{\sum_{t=1}^n \sum_{i=1}^M R_{i,t}}{n \cdot M}$, where $R_{i,t}$ is the total reward of agent $i$ in episode $t$ and $M$ is the number of agents. The cumulative average time cost of the proposed multi-agent deep reinforcement learning is depicted in Fig. \ref{fig:DRL_cm_reward} for different values of the discount factor $\gamma$. The cost converges for every $\gamma$ as the number of iterations increases, indicating that the training of the DRL model has maximized the cumulative reward and thereby minimized the system time cost. The curve with $\gamma=0.9$ achieves the best performance, with a higher convergence rate and a smaller system time cost. The time cost of running our reinforcement learning based optimization algorithm is shown in Fig. \ref{fig:DRL_time}. Training the proposed optimization algorithm costs much more time than the test phase, because training a deep reinforcement learning model is an iterative exploration process: the agents spend considerable time interacting with the environment to learn optimal policies. Although there are some fluctuations in the curves, the overall time cost per iteration for training or testing is comparable, and it is consumed mainly by training the deep neural networks. Fig. \ref{fig:DRL_time} also shows that a larger discount factor $\gamma$ leads to a slightly larger time cost per iteration, since a larger discount factor incurs more computation in the policy training process.
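The average cumulative reward metric above can be sketched as follows (a minimal sketch; the data layout is an assumption for illustration):

```python
def average_cumulative_reward(R):
    """R[t][i] is the total reward of agent i in episode t; returns the
    average cumulative reward over the n episodes and M agents in R."""
    n, M = len(R), len(R[0])
    return sum(sum(row) for row in R) / (n * M)
```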
\begin{figure}
\caption{The cumulative average system cost in training process of DRL}
\label{fig:DRL_cm_reward}
\end{figure}
\begin{figure}
\caption{Time cost of the proposed optimization algorithm}
\label{fig:DRL_time}
\end{figure}
\section{Conclusion}
\label{section:conclusion}
In this paper, we proposed a new DTWN model based on federated learning and blockchain and improved its running efficiency. We first presented the DTWN model for a wireless edge network consisting of end users and BSs. We then developed a blockchain empowered federated learning scheme in the DTWN for realizing edge intelligence. To improve the efficiency of the proposed scheme and to better allocate the limited resources, we formulated the edge association problem between digital twins and BSs as an optimization problem under a given learning accuracy, and derived an optimal solution by exploiting multi-agent reinforcement learning. Numerical results on a real-world dataset demonstrated that the proposed scheme effectively reduces the learning latency and achieves good learning convergence.
\ifCLASSOPTIONcaptionsoff
\fi
\end{document}
\begin{document}
\title{Market Equilibrium in Multi-tier Supply Chain Networks}
\author{Tao Jiang\footnote{Krannert School of Management, Purdue University, taujiang300@gmail.com} \hspace{5mm} Young-San Lin \footnote{Department of Computer Science, Purdue University, lin532@purdue.edu}\hspace{5mm} Th\`anh Nguyen\footnote{Krannert School of Management, Purdue University, nguye161@purdue.edu} }
\maketitle
\onehalfspacing
\begin{abstract}
We consider a sequential decision model over multi-tier supply chain networks and show that, in particular, for series parallel networks there is a unique equilibrium. We provide a linear time algorithm to compute the equilibrium and study the impact and invariants of the network structure on the total trade flow and social welfare.
\end{abstract}
\section{Introduction} \label{sec: intro}
Supply chain networks in practice are multi-tier and heterogeneous: a firm's decision influences not only other firms within the same tier but also firms across tiers. The literature on game theoretic models of supply chain networks, however, has largely focused on two extreme cases: heterogeneous 2-tier networks (bipartite graphs) \cite{kranton2001theory} and \cite{bimpikis2014cournot}, and a linear chain of $n$-tier firms \cite{wright2014buyers} and \cite{nguyen2017local}. One main reason for this is that most models of sequential decision making in multi-tier supply chain networks are intractable. Sequential decision making is a well-observed phenomenon in supply chains: firms at the top tier typically decide on the quantity and the price to sell to firms in the next tier; the buying firms then decide how much to buy from which suppliers, and continue to pass on the goods by determining the quantity and price for firms at the next level.
To study such models, one needs to analyze subgame perfect equilibria, in which each firm internalizes the decisions of all firms downstream and competes with all firms in the same tier. Another factor that further complicates models of general supply chain networks is that even the basic concept of tiers is ambiguous, because goods are often traded from the original producers to the consumers along multiple routes of different lengths. Our paper studies a sequential network game motivated by supply chain applications. Our main goal is to understand the existence of an equilibrium and the effect of network structure on the efficiency of the system.
The length and the number of trading routes are the two main factors that impact the efficiency of a supply chain network. On one hand, a large variety of options to trade indicates a high degree of competition. On the other hand, a long trading path causes double, triple and higher degree marginalization problems. To capture these ideas, we consider a sequential game theoretical model for a special class of networks, the {\em series parallel graphs}. We focus our analysis on these networks because they are rich enough for studying the trade-off described above and simple enough for characterizing the equilibrium outcomes. In particular, series parallel networks have two natural compositions. A parallel composition, which merges two different sub-networks at the source and the sink, can capture the increase in competition. A series composition, which attaches two sub-networks sequentially, corresponds to the increase in the length of trading.
\subsection{Our Contribution}
We consider a sequential decision model where each firm decides on the buying quantity from its sellers, and the selling quantity and price to its buyers, given that all its sellers have made their decisions. The single source producer of the network starts the decision making with a fixed material cost. The single ending market of the network accepts all the goods offered by the firms in the last tier, and the market price is an affine decreasing function of the total quantity of goods. Each firm strives to maximize its utility, assuming that its downstream buyers make rational decisions. An \emph{equilibrium} is a collection of firm decisions such that no firm has an incentive to change its decision, assuming that its downstream buyers play rationally and the decisions of the other firms remain unchanged. Our main results are as follows.
\begin{itemize}
\item We show a linear time algorithm that finds the unique equilibrium in a series parallel network. A crucial step is to derive a closed-form expression of the price at each firm in terms of quantity.
\item We show a rich set of equilibrium comparative statics for series parallel networks, including the firm location advantage of upstream firms and a network-component-based efficiency analysis.
\item We analyze the equilibrium for generalized series parallel networks.
\end{itemize}
\subsection{Proof Techniques}
\paragraph{Equilibrium in Series Parallel Networks.} We first observe that the flow conservation property is satisfied at equilibrium, i.e., inflow equals outflow for each intermediary firm. The main strategy is to express the price offered to each firm in terms of its inflow. Our first algorithm starts the inductive price computation from the ending market in reverse topological order. In the computation for each firm, we take the partial derivative of the firm's utility with respect to the buying quantity and obtain a closed-form expression for three cases. The most interesting one is the case where a single seller has multiple buyers. In this case, since the utility is a quadratic function of the buying quantity, we formulate a linear complementarity problem to compute the convex coefficients for each trade from this seller to its buyers. Once the price-quantity relation is obtained at the source, we compute the total flow needed for the network. Our second algorithm computes the flow in topological order starting from the source producer. When a single seller encounters multiple buyers, the problem of maximizing the seller's utility can be formulated as a linear complementarity problem, which has an equivalent convex quadratic program. In particular, by the structure of series parallel networks, the problem can be simplified to a linear system, and the flow is distributed proportionally to the convex coefficients computed by our first algorithm. It takes linear time to find an equilibrium, and the equilibrium is unique because the solution of each corresponding convex quadratic program is also unique.
\paragraph{Comparative Statics.} For the analysis of firm location advantage, we first obtain a closed-form expression of each firm's utility. By this expression, we conclude that an upstream firm which controls the flow of a downstream firm has at least twice the utility of the downstream firm. For the network-component-based efficiency analysis, we focus on analyzing the flow value and social welfare at equilibrium. We show that with the same production cost and ending market price, locally swapping the order of two components in a series composition does not change the total flow and social welfare.
\paragraph{Equilibrium Analysis for Generalized Series Parallel Networks.} We consider two extensions with multiple source producers or ending markets. When the generalized series parallel graph has a single source and multiple markets, we consider a simple network and observe that the price function of the intermediary firm can be piecewise linear and discontinuous. This forces the source to adopt either a high-price or a low-price strategy. There can be multiple pure strategy equilibria when both strategies give the source the same utility. When the generalized series parallel graph has multiple sources and a single ending market, an equilibrium may not exist.
\subsection{Related work}
In a series parallel network, intermediary firms can be considered as Cournot markets at equilibrium. Thus, the structure of the game is closely related to the literature on Cournot games in networks. \cite{bimpikis2014cournot} and \cite{pang2017efficiency}, for example, consider a Cournot game in two sided markets. \cite{nguyen2018welfare} studies a Cournot game in three-tier networks. However, the 2-tier structure of the networks in these papers, and the assumption in \cite{nguyen2018welfare} that only the middle tier makes decisions, assume away the complex sequential decision making considered in this work. \cite{nava2015efficiency} studies a Cournot game in general networks, but firms are assumed to make simultaneous decisions. Simultaneous games are easier to analyze, but they do not capture the essential element of sequential decision making by firms in supply chain networks.
\cite{carr2005competition} considers assembly networks where agents make sequential decisions, but assumes a tree network. The analysis for a tree network is substantially simpler, because each firm has a single downstream node to which it can sell its products. In our game, each firm must decide on the selling quantity and price for each of its buyers. As we show, the quantities on some of the links can be zero, and such ``inactive'' links make the analysis more complicated. Recently, \cite{bimpikis2019supply} also considered a sequential game; the network considered there, however, is symmetric and linear in structure. The focus of \cite{bimpikis2019supply} is on the uncertainty of yields, which differs from the motivation of our paper.
More broadly, this work belongs to the growing literature of network games and their applications in supply chains, including \cite{corbett2001competition,federgruen2016sequential,nguyen2016delay}, and \cite{nguyen2017local}. These papers, however, are different from ours in the main focus as well as the modeling approach.
\cite{corbett2001competition}, for example, assumes a linear structure of supply chains, \cite{federgruen2016sequential} considers price competition in two-tier networks, while \cite{nguyen2016delay} and \cite{nguyen2017local} analyze bargaining games in networks with simpler structures. The main contribution of our paper to this line of work is a tractable analysis of a sequential competition model in series parallel graphs, which allows for a richer comparative analysis and a deeper understanding of how basic network elements influence market outcomes.
\subsection{Organization}
This work is organized as follows. In section~\ref{sec: model}, we introduce the model and series parallel networks together with their defining compositions. In section~\ref{sec: clearance}, we provide the algorithm that computes the unique equilibrium. In section~\ref{sec: eqp}, we analyze how a firm's location affects its individual utility and how the network structure influences efficiency. In section~\ref{sec: ext}, we discuss extensions to other classes of networks and show that a pure strategy equilibrium might not exist in general networks.
\section{Preliminaries} \label{sec: model}
We introduce the sequential decision mechanism and the definition of {\em series parallel graph}.
\subsection{The Model}
\paragraph{The Sequential Decision Game} Let $G=(V \cup \{s, t\}, E)$ be a simple directed acyclic network that represents an economy where $s$ is the producer firm at the source, $t$ is the sink market, and $V$ represents intermediary firms. The arcs of $G$ represent the possibility of trade between two agents. The direction of an arc indicates the direction of trade. The outgoing end of the arc corresponds to the seller, and the incoming end is the buyer, while $s$ has only outgoing arcs, and $t$ has only incoming arcs.
The remaining vertices $i\in V$, representing intermediary firms, have both incoming and outgoing arcs.
For a vertex $i$, $B(i)$ and $S(i)$ are the sets of agents that can be buyers and sellers in a trade with $i$, respectively.
Agents make their decisions after the output of their upstream suppliers is determined. Each firm $i$ decides how much to buy from each of its sellers, how much to sell to each of its buyers, and the price at which to sell to each of its buyers. Formally, $i$'s decision includes:
\begin{itemize}
\item The buying quantity $x^{in}_{ki} \geqslant 0$ of arc $ki \in E$ for every $k \in S(i)$.
\item The selling quantity $x^{out}_{ij} \geqslant 0$ of arc $ij \in E$ to every $j \in B(i)$.
\item The selling price $p_{ij} \geqslant 0$ of arc $ij \in E$ to every $j \in B(i)$.
\end{itemize}
The source producer does not buy any goods, so the decision of $s$ is the selling quantity $x^{out}_{sj} \geqslant 0$ and the price $p_{sj}$ of arc $sj$ to every $j \in B(s)$.
The production cost $p_s$ of the source $s$ is given and assumed to be a constant:
\[
p_s = a_s \text{ where } a_s > 0.
\]
The sink node $t$ does not represent a firm; it corresponds to an end market. The price function $p_t$ at the sink node $t$ is given and assumed to be an affine decreasing function of the total amount of goods, $X_t$, sold to the market $t$.
$$
p_{t} = a_t - b_t X_t, \text{ where } X_t = \sum_{i \in S(t)}{x^{in}_{it}} = \sum_{i \in S(t)}{x^{out}_{it}}, a_t > a_s, \text{ and } b_t > 0.
$$
$a_t$ is the \emph{demand} of the market $t$. We note that the market accepts all the goods sold to it; that is, $x^{in}_{it} = x^{out}_{it}$ for each $i \in S(t)$. Generally, for a trade $ij \in E$, the buyer $j$ cannot obtain more than what the seller $i$ offers, thus $x^{in}_{ij} \leqslant x^{out}_{ij}$. We assume that each intermediary firm $i$ cannot obtain goods from any source other than its sellers, and that firms derive no value from retaining goods.
The payoff of the source firm $s$ is
\begin{equation} \label{eq:sp}
\Pi_s = \sum_{j \in B(s)}{p_{sj} x^{in}_{sj}} - p_s \sum_{j \in B(s)}{x^{out}_{sj}}.
\end{equation}
The utility of an intermediary firm $i \in V$ is
\begin{equation} \label{eq:fp}
\Pi_i = \sum_{j \in B(i)}{p_{ij} x^{in}_{ij}} - \sum_{k \in S(i)} {p_{ki} x^{in}_{ki}}.
\end{equation}
The formula decomposes the utility function into two terms: the total revenue from $j \in B(i)$ and the total cost of materials from $k \in S(i)$.
The timing of the game is as follows. The source producer makes its decision first. A firm makes its decision on the buying quantities from its upstream, and the selling quantities and prices to its downstream, once all of its sellers have made their decisions. When deciding the accepted quantities that maximize its profit, firm $i$ also needs to take into account the strategies of both the competing firms and the firms downstream. Firms make their decisions based on rational expectations of other firms' strategies.
\paragraph{Equilibrium Characteristics} Intuitively, an {\em equilibrium} of the sequential decision game is an assignment of good quantity and price such that no firm is willing to change its decision after knowing the decision of other non-downstream firms, and assuming that all the downstream firms also pick their best decisions.
We present two examples to illustrate the equilibrium concept. The first example is a line network, in which the source producer controls the quantity and price and thus affects the decision of the intermediary firm.
\begin{example}[The Equilibrium of a Line Network] \label{ex:eq-ln}
\mbox{}\\
Consider the following line network.
\begin{center}
\begin{tikzpicture}
\node[vertex] (s) at (0,0) {$s$};
\node at (-1, 0) {$p_{s} = 1$};
\node[vertex] (a) at (3,0) {$v$};
\node[vertex] (t) at (6,0) {$t$};
\node at (9.5, 0) {$p_{t} = 9 - X_t = 9 - x^{out}_{vt} = 9 - x^{in}_{vt}$};
\path[->]
(s) edge (a)
(a) edge (t);
\end{tikzpicture}
\end{center}
Suppose $s$ offers $v$ the quantity $x^{out}_{sv} = 2$ at price $p_{sv} = 7$. Then $v$ has to decide on $x^{in}_{sv}$ and $x^{out}_{vt}$; it cannot choose $p_{vt}$, which is already fixed at $9 - x^{out}_{vt}$. The sink market $t$ accepts all the goods, so the utility of $v$ is
\[\Pi_v = (9 - x^{out}_{vt}) x^{in}_{vt} - p_{sv} x^{in}_{sv} = (9 - x^{out}_{vt}) x^{out}_{vt} - 7 x^{in}_{sv}.\]
If $v$ is a rational player who maximizes $\Pi_v$, then $v$ will sell all the goods it bought, so $x^{in}_{sv} = x^{out}_{vt}$ and $\Pi_v = (9 - x^{in}_{sv}) x^{in}_{sv} - 7 x^{in}_{sv}$. By taking the derivative:
\[\frac{d \Pi_v}{d x^{in}_{sv}} = 9 - 2 x^{in}_{sv} - 7 = 0 \implies x^{in}_{sv} = 1,\]
so $v$ will accept $1$ out of the $2$ units of goods offered by $s$.
The utility of $s$ is
\[\Pi_s = p_{sv} x^{in}_{sv} - p_s x^{out}_{sv} = 7 \times 1 - 1 \times 2 = 5.\]
In fact, $s$ is over-selling the goods: it could have set $x^{out}_{sv} = 1$ instead, so that
\[\Pi_s = p_{sv} x^{in}_{sv} - p_s x^{out}_{sv} = 7 \times 1 - 1 \times 1 = 6 > 5.\]
However, even this is not the best strategy for $s$. If $s$ offers $v$ the quantity $x^{out}_{sv} = 2$ at price $p_{sv} = 5$, then by the same reasoning, the utility of $v$ is
\[\Pi_v = (9 - x^{in}_{sv}) x^{in}_{sv} - 5 x^{in}_{sv}.\]
$v$ tries to maximize $\Pi_v$:
\[\frac{d \Pi_v}{d x^{in}_{sv}} = 9 - 2 x^{in}_{sv} - 5 = 0 \implies x^{in}_{sv} = 2,\]
so $v$ will accept all the goods from $s$.
This time, the utility of $s$ is
\[\Pi_s = p_{sv} x^{in}_{sv} - p_s x^{out}_{sv} = 5 \times 2 - 1 \times 2 = 8,\]
and $s$ is better off. This is the optimal strategy for $s$. In summary, the equilibrium is
\begin{center}
\begin{tabular}{ | l | l | l | l |}
\hline
$p_{sv}$ & $x^{out}_{sv}$ & $x^{in}_{sv}$ & $x^{out}_{vt}$ \\ \hline
5 & 2 & 2 & 2 \\ \hline
\end{tabular}
\end{center}
\end{example}
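To make the backward reasoning of this example concrete, here is a small Python sketch (our own illustration; the helper names \texttt{v\_intake} and \texttt{s\_profit} are ours) that recovers the equilibrium price by backward induction: for an offered price $p_{sv}$, $v$'s optimal intake is $(9-p_{sv})/2$, and $s$ searches over prices while offering exactly what $v$ will accept.

```python
# Backward induction for the line network of Example 1 (our sketch):
# production cost p_s = 1, market price p_t = 9 - X_t.

def v_intake(p_sv, offered):
    """v maximizes (9 - x) * x - p_sv * x subject to 0 <= x <= offered."""
    x_star = max(0.0, (9.0 - p_sv) / 2.0)  # unconstrained optimum of v
    return min(x_star, offered)

def s_profit(p_sv, offered):
    """s earns p_sv per accepted unit but pays cost 1 per offered unit."""
    return p_sv * v_intake(p_sv, offered) - 1.0 * offered

# s offers exactly what v will accept and searches the price on a grid.
profit, price = max((s_profit(p, v_intake(p, float("inf"))), p)
                    for p in [i / 100.0 for i in range(100, 901)])
print(price, profit)  # -> 5.0 8.0, matching the equilibrium table above
```

The grid search confirms the closed-form reasoning: $s$'s profit $(p-1)(9-p)/2$ is maximized at $p=5$, yielding the payoff $8$ found in the example.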
The second example illustrates competition between two intermediary firms. The source producer controls the quantities and prices, and thereby affects the decisions of its two downstream firms.
\begin{example}[The Equilibrium with Two Intermediary Firms] \label{ex:eq-simple-spg}
\mbox{}\\
Consider the following network.
\begin{center}
\begin{tikzpicture}[baseline=0,scale=2]
\node at (-0.5, 0) {$p_{s} = 1$};
\node[vertex] (s) at (0,0) {$s$};
\node[vertex] (b) at (2,0.5) {$u$};
\node[vertex] (c) at (2,-0.5) {$v$};
\node[vertex] (d) at (4,0) {$t$};
\node at (5.5,0) {$p_t = 7 - X_t = 7 - x - y$};
\path[->]
(s) edge node[above] {$x^{out}_{su}$} (b)
(s) edge node[below] {$x^{out}_{sv}$} (c)
(b) edge node[above] {$x$} (d)
(c) edge node[below] {$y$} (d)
;
\end{tikzpicture}
\end{center}
Suppose the decision of $s$ is
\begin{center}
\begin{tabular}{ | l | l | l | l |}
\hline
$p_{su}$ & $p_{sv}$ & $x^{out}_{su}$ & $x^{out}_{sv}$ \\ \hline
3 & 4 & 1 & 1 \\ \hline
\end{tabular}
\end{center}
and $u$ and $v$ are rational firms, i.e. they will sell all the goods they bought to maximize their payoff. For simplicity, let $x=x^{in}_{su}$ and $y=x^{in}_{sv}$. The utilities of $u$ and $v$ are
\[\Pi_u = (7 - x - y)x - 3 x \quad \text{and} \quad \Pi_v = (7 - x - y)y - 4 y.\]
By taking the derivative, we have
\[\frac{d \Pi_u}{dx} = 4 - 2x - y \quad \text{and} \quad \frac{d \Pi_v}{dy} = 3 - x - 2y.\]
We claim that the best decision of $u$ and $v$ is $x=1$ and $y=1$. $\Pi_v$ is concave, so it is maximized when $\frac{d \Pi_v}{dy}=0$, which implies that the best response to $x=1$ is $y=1$. $\Pi_u$ is also concave and would be maximized when $\frac{d \Pi_u}{dx} = 0$; however, when $y=1$, $x$ cannot be $\frac{3}{2}$ since $x \leqslant x^{out}_{su} = 1$. For $y=1$ and $x\in[0,1]$, $\Pi_u$ is increasing in $x$, so the best response to $y=1$ is $x=1$.
The utility of $s$ for this decision is
\[\Pi_s = p_{su} x + p_{sv} y - p_s (x^{out}_{su} + x^{out}_{sv}) = 3 \times 1 + 4 \times 1 - 1 \times (1+1) = 5.\]
Given that $p_{su}=3$ and $p_{sv}=4$, there is a better quantity decision for $s$. We recall that $\Pi_u$ and $\Pi_v$ are both concave, so it suffices to make $\frac{d \Pi_u}{dx}$ and $\frac{d \Pi_v}{dy}$ both zero. This happens when $x=\frac{5}{3}$ and $y=\frac{2}{3}$. If $x^{out}_{su}=\frac{5}{3}$ and $x^{out}_{sv}=\frac{2}{3}$, then $u$ and $v$ will buy all the goods from $s$ and the utility of $s$ is
\[\Pi_s = p_{su} x + p_{sv} y - p_s (x^{out}_{su} + x^{out}_{sv}) = 3 \times \frac{5}{3} + 4 \times \frac{2}{3} - 1 \times (\frac{5}{3}+\frac{2}{3}) = \frac{16}{3}.\]
However, this is not the best decision of $s$. The equilibrium for this network is as follows.
\begin{center}
\begin{tabular}{ | l | l | l | l | l | l |}
\hline
$p_{su}$ & $p_{sv}$ & $x^{out}_{su}$ & $x^{out}_{sv}$ & $x$ & $y$ \\ \hline
4 & 4 & 1 & 1 & 1 & 1 \\ \hline
\end{tabular}
\end{center}
One can verify that $\Pi_u$ and $\Pi_v$ are concave and that their derivatives are zero at $x=y=1$. The payoff of $s$ is
\[\Pi_s = p_{su} x + p_{sv} y - p_s (x^{out}_{su} + x^{out}_{sv}) = 4 \times 1 + 4 \times 1 - 1 \times (1+1) = 6.\]
\end{example}
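The interior best responses in this example come from solving a $2\times 2$ linear system. A small sketch (our own illustration; \texttt{downstream\_cournot} is a hypothetical helper name) reproduces the numbers above:

```python
# Interior Cournot best responses of u and v (our sketch): facing
# p_t = a_t - x - y and input prices p_su, p_sv, the first-order
# conditions are 2x + y = a_t - p_su and x + 2y = a_t - p_sv.

def downstream_cournot(p_su, p_sv, a_t=7.0):
    ru, rv = a_t - p_su, a_t - p_sv
    return (2 * ru - rv) / 3.0, (2 * rv - ru) / 3.0

x, y = downstream_cournot(4.0, 4.0)        # the equilibrium prices of s
pi_s = 4.0 * x + 4.0 * y - 1.0 * (x + y)   # s pays unit production cost 1
print(x, y, pi_s)                          # -> 1.0 1.0 6.0

print(downstream_cournot(3.0, 4.0))        # the unconstrained x = 5/3, y = 2/3
```

At the equilibrium prices $p_{su}=p_{sv}=4$ the system gives $x=y=1$ and the source payoff $6$, as in the table.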
We observe that the best strategy for each firm $i \in V$ is always to sell as much as it buys, since it cannot benefit from paying for unsold goods. On the selling side, suppose firm $i$ offers the quantity ${x^{out}_{ij}}$ to firm $j$ but part of it is rejected, i.e. $x^{in}_{ij} < x^{out}_{ij}$. This can never happen at an equilibrium, because $i$ would be better off rejecting the amount $x^{out}_{ij} - x^{in}_{ij}$ from its own upstream before selling.
The next observation lists the properties of the supplied quantities at an equilibrium.
\begin{observation}[Equilibrium Flow Conservation] \label{obs: fc}
An equilibrium satisfies:
\begin{enumerate}
\item $x^{out}_{ij} = x^{in}_{ij}$ for each $ij \in E$.
\item $\sum_{k \in S(i)}{x^{in}_{ki}} = \sum_{j \in B(i)}{x^{out}_{ij}}$, i.e. inflow is equal to outflow for each firm $i \in V$.
\end{enumerate}
\end{observation}
Now we are ready to state the formal definition of an equilibrium.
\begin{definition}
An \emph{equilibrium} is a set of strategies including:
\begin{enumerate}
\item the strategy of the source producer $s$: $p_{sj}$ and $x^{out}_{sj}$ for $j \in B(s)$, and
\item the strategy of each intermediary firm $i \in V$: $p_{ij}$ (if $j \neq t$; otherwise $p_{ij} = p_{t} = a_t - b_t X_t$) and $x^{out}_{ij}$ for $j \in B(i)$, and $x^{in}_{ki}$ for $k \in S(i)$
\end{enumerate}
such that
\begin{enumerate}
\item $x^{in}_{ij} = x^{out}_{ij}$ for each $ij \in E$, i.e. $j$ accepts all the goods $i$ offers, and
\item for each firm $i \in \{s\} \cup V$, $i$ has no incentive to change its strategy for a better payoff, assuming that each descendant firm of $i$ plays the best strategy that maximizes its payoff and that the strategies of the non-descendant firms of $i$ remain the same.
\end{enumerate}
\end{definition}
From now on, at an equilibrium, we denote by $x_{ij}$ the \emph{flow} of arc $ij$, i.e. $x_{ij} = x^{out}_{ij} = x^{in}_{ij}$, and no longer use $x^{in}_{ij}$ and $x^{out}_{ij}$. Moreover, since each firm accepts all the offers and sells everything it buys, we denote the common sum of flows by $X_i := \sum_{k \in S(i)}{x_{ki}} = \sum_{j \in B(i)}{x_{ij}}$. The utility of firm $i$ in \eqref{eq:fp} becomes
\begin{equation} \label{eq:fp2}
\Pi_i = \sum_{j \in B(i)}{p_{ij} x_{ij}} - \sum_{k \in S(i)}{p_{ki}x_{ki}}
\end{equation}
and the utility of source firm $s$ in \eqref{eq:sp} becomes
\begin{equation} \label{eq:sp2}
\Pi_s = \sum_{j \in B(s)}{p_{sj} x_{sj}} - p_s \sum_{j \in B(s)}{x_{sj}}.
\end{equation}
Regarding flow activity, an arc $ij \in E$ is {\em active} if $x_{ij} > 0$ and {\em inactive} if $x_{ij} = 0$. For each firm $i \in V$ and active arcs $ki \in E$ and $ij \in E$, $p_{ki} \leqslant p_{ij}$; that is, the buying price should not exceed the selling price. Otherwise, $i$ would be better off rejecting some goods from $k$ and not offering the same amount to $j$.
\begin{observation} \label{obs:inc-p}
For every $ki \in E$ and $ij \in E$ that are active, the price at an equilibrium satisfies $p_{ki} \leqslant p_{ij}$.
\end{observation}
To define equilibrium uniqueness, we require the set of active arcs to be unique, as well as the flow and price of each active arc. The prices of inactive trades, on the other hand, can be arbitrary, since they do not contribute to the sellers' revenues.
\begin{definition}
An equilibrium of a network $G$ is \emph{unique} if the set of active arcs is unique, as well as the flow and price of each active arc.
\end{definition}
\subsection{Series Parallel Graphs}
\paragraph{General Series Parallel Graphs} We consider the case where $G$ is a {\em series parallel graph} (SPG). The networks in Examples~\ref{ex:eq-ln} and \ref{ex:eq-simple-spg} are both SPGs. Our main goal is to compute the equilibrium in networks belonging to this special graph family. This class of networks is well studied and has several applications in graph theory (see for example \cite{duffin1965topology}). For completeness, we provide a formal definition.
\begin{definition}[SPG] \label{def: 2.4}
A {\em single-source-and-sink SPG} is a graph that can be constructed by a sequence of series and parallel compositions starting from a set of copies of a single-arc graph, where:
\begin{enumerate}
\item {\em Series composition} of $X$ and $Y$: given two SPGs $X$ with source $s_X$ and sink $t_X$, and $Y$ with source $s_Y$ and sink $t_Y$, form a new graph $G=S(X,Y)$ by identifying $s=s_X$, $t_X=s_Y$, and $t=t_Y$.
\item {\em Parallel composition} of $X$ and $Y$: given two SPGs $X$ with source $s_X$ and sink $t_X$, and $Y$ with source $s_Y$ and sink $t_Y$, form a new graph $G=P(X,Y)$ by identifying $s=s_X=s_Y$ and $t=t_X=t_Y$.
\end{enumerate}
\end{definition}
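The two compositions can be sketched in a few lines of Python (our own illustration; the edge-list representation and the names \texttt{arc}, \texttt{series}, \texttt{parallel} are ours, not from the paper):

```python
import itertools

# A sketch of Definition 2.4: SPGs as (source, sink, edge list)
# tuples over fresh integer node ids.
_ids = itertools.count()

def arc():
    """Base case: a single-arc SPG."""
    s, t = next(_ids), next(_ids)
    return (s, t, [(s, t)])

def series(X, Y):
    """S(X, Y): identify X's sink with Y's source."""
    sx, tx, ex = X
    sy, ty, ey = Y
    r = lambda v: tx if v == sy else v
    return (sx, ty, ex + [(r(u), r(v)) for u, v in ey])

def parallel(X, Y):
    """P(X, Y): identify the two sources and the two sinks."""
    sx, tx, ex = X
    sy, ty, ey = Y
    r = lambda v: sx if v == sy else (tx if v == ty else v)
    return (sx, tx, ex + [(r(u), r(v)) for u, v in ey])

# The diamond of Example 2: two two-arc chains composed in parallel.
diamond = parallel(series(arc(), arc()), series(arc(), arc()))
print(len(diamond[2]))  # -> 4
```

The diamond of Example~\ref{ex:eq-simple-spg} is $P(S(e,e), S(e,e))$ for a single arc $e$, and the line of Example~\ref{ex:eq-ln} is simply $S(e,e)$.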
\paragraph{Shortcut-free Series Parallel Graphs}
We start with the definition of \emph{shortcuts}.
\begin{definition}
Given an SPG $G=(V,E)$, let $i,j \in V$, and consider a path $l_{ij}=(i, v_1, ..., v_k, j)$ from node $i$ to node $j$. If there is an arc $ij \in E$, then we say $ij$ is a {\em shortcut} of $l_{ij}$, or that $ij$ {\em dominates} the path $l_{ij}$.
\end{definition}
\begin{definition}
An SPG is \emph{shortcut-free} if it has no shortcuts, i.e. there is no path dominated by an arc.
\end{definition}
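Checking the shortcut-free property reduces to asking, for each arc $ij$, whether some path from $i$ to $j$ with at least one intermediary node exists. A brute-force sketch (ours, adequate for the small networks used here):

```python
from collections import defaultdict

def shortcuts(edges):
    """Arcs (i, j) dominating some path from i to j with an intermediary
    node (a brute-force sketch for small DAGs)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    def dominated_path_exists(i, j):
        # DFS from i's neighbors other than j, looking for an arc into j.
        stack = [w for w in adj[i] if w != j]
        seen = set(stack)
        while stack:
            u = stack.pop()
            if j in adj[u]:
                return True
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return False

    return [(i, j) for i, j in edges if dominated_path_exists(i, j)]

# The arc (0, 2) is a shortcut of the path 0 -> 1 -> 2.
print(shortcuts([(0, 1), (1, 2), (0, 2)]))  # -> [(0, 2)]
```

An SPG is shortcut-free exactly when this list is empty.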
Node $k$ is a {\em parent node} of $i$ if there is a directed path from $k$ to $i$. The set of parent nodes of $i$ is denoted by $P(i)$. Similarly, we define child nodes and the set of children $C(i)$. Given a shortcut-free SPG, for a directly adjacent parent $i$ and child $j$ with $ij \in E$, there are three possibilities\footnote{The multiple-sellers-and-multiple-buyers case does not exist in shortcut-free SPGs; see Appendix~\ref{app:no_mm} for the proof.}:
\iffalse
Base on the parent-child relation, the order of nodes is defined as below:
\begin{definition}[Node Order] \label{def: order}
$j$ has lower order than $i$ if and only if $j$ is a child of $i$. If neither $i$ is a child of $j$ nor $j$ is a child of $i$, then $i$ and $j$ have the same order.
\end{definition}
\fi
\begin{itemize}
\item Single seller and single buyer, $|S(j)| = |B(i)| = 1$. (SS)
\item Multiple sellers and single buyer, $|S(j)| \geqslant 2, |B(i)| = 1$. (MS)
\item Single seller and multiple buyers, $|S(j)| = 1, |B(i)| \geqslant 2$. (SM)
\end{itemize}
\begin{center}
\begin{tikzpicture}
\node[vertex] (i) at (-2,0) {$i$};
\node[vertex] (j) at (0,0) {$j$};
\path[->]
(i) edge (j);
\node at (-1, -2) {$SS$};
\node[vertex] (i1) at (2,1.5) {$i_1$};
\node[vertex] (i2) at (2,0.8) {$i_2$};
\node[vertex] (ik) at (2,-1.5) {$i_m$};
\node[vertex] (i) at (2,-0.2) {$i$};
\node[vertex] (j) at (4,0) {$j$};
\draw[dotted, very thick] (2,0.5) -- (2,0.1);
\draw[dotted, very thick] (2,-0.5) -- (2,-1.2);
\path[->]
(i) edge (j)
(i1) edge (j)
(i2) edge (j)
(ik) edge (j);
\node at (3, -2) {$MS$};
\node[vertex] (j1) at (8,1.5) {$j_1$};
\node[vertex] (j2) at (8,0.8) {$j_2$};
\node[vertex] (jk) at (8,-1.5) {$j_m$};
\node[vertex] (i) at (6,0) {$i$};
\node[vertex] (j) at (8,-0.2) {$j$};
\draw[dotted, very thick] (8,0.5) -- (8,0.1);
\draw[dotted, very thick] (8,-0.5) -- (8,-1.2);
\path[->]
(i) edge (j1)
(i) edge (j2)
(i) edge (j)
(i) edge (jk);
\node at (7, -2) {$SM$};
\end{tikzpicture}
\end{center}
Sometimes there are multiple paths from a parent node to one of its children. We call these paths {\em disjoint} if they have no common intermediary nodes, that is, all nodes except the starting and ending ones are different. Based on this definition, we define the merging nodes with respect to a node $i$.
\begin{definition}[Self-merging Child Node] \label{def: 3.4}
Node $j \in C(i)$ is a {\em self-merging child node} of $i$ if there are disjoint paths from $i$ to $j$. The set of such nodes $j$ is termed $C_S(i)$.
\end{definition}
\begin{definition}[Parent-merging Child Node] \label{def: 3.3}
Node $j \in C(i)$ is a {\em parent-merging child node} of $i$, if there exists a node $k \in P(i)$, such that there are disjoint paths from $k$ to $j$. The set of such nodes $j$ is denoted as $C_P(i)$.
\end{definition}
For $ij \in E$, we introduce the set of special self-merging child nodes of $i$ and its direct child $j$ as $C_T(i,j) := (C_S(i) \cap C(j)) \backslash C_P(i)$. This notation captures the ``internal'' merging nodes that are responsible for the selling price and quantity offered to $i$.
\begin{observation} \label{obs:mc}
An SPG has the following properties:
\begin{enumerate}
\item $C_P(s) = C_P(t) = \emptyset$.
\item In the SS case for $ij \in E$, $C_P(j) = C_P(i)$.
\item In the SM case for $ij \in E$, $C_P(j) = C_P(i) \sqcup C_T(i,j)$.
\item In the MS case for $ij \in E$, $C_P(i) = C_P(j) \sqcup \{j\}$.
\end{enumerate}
\end{observation}
We note that $\sqcup$ stands for disjoint set union. Example~\ref{app:ex:mc} in the appendix illustrates the merging child node sets.
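On small DAGs, the merging sets can be computed directly from the definitions by enumerating simple paths. The following sketch (ours, not part of the paper) rechecks the diamond network of Example~\ref{ex:eq-simple-spg}, where $t$ is a self-merging child of $s$ and a parent-merging child of $u$:

```python
from itertools import combinations

# Merging sets on the diamond of Example 2 (our sketch): s=0, u=1, v=2, t=3.
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
nodes = [0, 1, 2, 3]

def paths(i, j, prefix=()):
    """All simple directed paths from i to j in this small DAG."""
    prefix = prefix + (i,)
    if i == j:
        return [prefix]
    return [p for w in adj[i] for p in paths(w, j, prefix)]

def disjoint(i, j):
    """True if some two paths i -> j share no intermediary node."""
    return any(not (set(p[1:-1]) & set(q[1:-1]))
               for p, q in combinations(paths(i, j), 2))

children = {i: {j for j in nodes if j != i and paths(i, j)} for i in nodes}
parents = {i: {k for k in nodes if i in children[k]} for i in nodes}
C_S = {i: {j for j in children[i] if disjoint(i, j)} for i in nodes}
C_P = {i: {j for j in children[i]
           if any(disjoint(k, j) for k in parents[i])} for i in nodes}

print(C_S[0], C_P[1])  # -> {3} {3}: t self-merges for s, parent-merges for u
```

The output is consistent with Observation~\ref{obs:mc}: $C_P(s)=\emptyset$ while $C_P(u)=C_P(s)\sqcup C_T(s,u)=\{t\}$.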
\section{Equilibrium Computation} \label{sec: clearance}
We present two algorithms to compute the equilibrium quantity and price for each arc. We start with shortcut-free SPGs and show that all arcs are active at equilibrium. To do so, we first derive a closed-form relation between the quantity and price offered to the firms at equilibrium via a backward algorithm. Then, we show that the unique optimal quantity and price offered to each firm can be solved, using this closed-form relation, by following the decision sequence from source to sink. For SPGs with shortcuts, we show that trades on paths dominated by shortcuts are inactive. Thus, the equilibrium for an SPG with shortcuts can be found by the same algorithm after removing the dominated paths.
\subsection{Shortcut-free Series Parallel Graphs}
\paragraph{Equilibrium Price Computation.} A key characteristic of the equilibrium is that all arcs are active when there are no shortcuts. The equilibrium price has a closed-form expression in terms of the equilibrium quantity based on the structure of the SPG.
\begin{restatable}{theorem}{lemeqprice}\label{lem:pq-relation}
Given a shortcut-free SPG $G$, at an equilibrium, all arcs are active. For each firm $i \in V$, each seller $k \in S(i)$ offers $i$ the same price $p_i$, and the following relation holds
\[
p_i = a_t - b_i X_i - \sum_{l \in C_P(i)} b_l X_l
\]
where $b_i$ for each $i \in V \cup \{s\}$ is a positive constant that only depends on the structure of $G$.
\end{restatable}
Theorem~\ref{lem:pq-relation} gives a concise way to present the price and quantity relation at equilibrium. The high-level strategy for deriving the closed-form expression is backward induction starting from the sink market $t$. To compute the subgame perfect equilibrium, we use the fact that the upstream firms make their decisions based on the best decisions of the downstream firms. Since the price at the sink market $t$ is an affine decreasing function, we can inductively show that the utility of each intermediary firm is a concave function of the quantities. We observe that the derivative of the utility with respect to the quantity of a trade cannot be positive, since otherwise the upstream firm could be better off by raising the price. On the other hand, the derivative with respect to the quantity of an active trade must be zero: the upstream firms assume that the downstream firms make their decisions to maximize their utility, which implies that the derivative is zero. From this observation, we derive a linear complementarity problem and show that the best price offered to the downstream is an affine decreasing function of the quantity. The proof is given in Appendix~\ref{app:lem:pq-relation}. By adapting the main equations of that proof, we introduce Algorithm~\ref{alg: 1}, which computes the equilibrium price in terms of the quantity.
Algorithm~\ref{alg: 1} starts from the sink market $t$. In each iteration, given the downstream values $b_j$ for $j \in B(i)$, we compute $b_i$; this can be done in $O(\deg^+(i))$ time, where $\deg^+(i)$ is the out-degree of $i$. The calculation proceeds in reverse topological order. In particular, in the SM case, we store the {\em convex coefficients} of each downstream node $j \in B(i)$, which are used later in the equilibrium quantity computation. The number of $b_i$ computations is bounded by $O(|V|)$. Therefore, Algorithm~\ref{alg: 1} computes the price functions in terms of the quantities in linear time. When $s$ is reached, we have $p_s = a_t - b_s X_s = a_s$ since $C_P(s) = \emptyset$. Hence $X_s = \frac{a_t - a_s}{b_s}$, so the expected price of $s$ meets the given production cost. Example~\ref{app: ex: price_compute_alg1} in the appendix illustrates the price calculation of Algorithm~\ref{alg: 1}.
\begin{algorithm}[H]
\caption{: Price Function Computation (Backward)}
\label{alg: 1}
\renewcommand{\textbf{Input:}}{\textbf{Input:}}
\renewcommand{\textbf{Output:}}{\textbf{Output:}}
\textbf{Input:}{ A shortcut-free SPG $G = (V \cup \{s,t\}, E)$, the price function $p_t = a_t - b_t X_t$, and the source production cost $p_s = a_s$.}
\textbf{Output:}{ The equilibrium price function $p_i$ for each $i \in V$, the convex coefficients $\alpha_j$ for arc $ij$ where $j \in B(i)$ in the SM case, and the source flow $X_s$.}
\begin{algorithmic}[1]
\State Starting from $t$, given the downstream buyer(s) $j$'s price function $p_j$, compute the upstream seller(s) $i$'s price case by case in a reverse topological order:
\begin{itemize}
\item For the SS case,
\begin{align} \label{eq: SS} \tag{SS}
b_i = 2 b_j + \sum_{l \in C_P(j)} b_l.
\end{align}
\item For the MS case, for each seller $i$,
\begin{align} \label{eq: MS} \tag{MS}
b_i = b_j + \sum_{l \in C_P(j)} b_l.
\end{align}
\item For the simple SM case ($|C_S(i)| = 1$)\footnotemark, suppose $b_{j}$ was calculated for all $j \in B(i)$,
\begin{align*} \label{eq: SM} \tag{Simple SM}
b_i = \frac{2}{\sum_{j \in B(i)} \frac{1}{b_{j}}} + 2 \sum_{l \in C_S(i) \setminus C_P(i)} b_l + \sum_{l \in C_P(i)} b_l.
\end{align*}
For each $j \in B(i)$, assign the convex coefficient $\alpha_j = \frac{\frac{1}{b_j}}{\sum_{j' \in B(i)} \frac{1}{b_{j'}}}$ to arc $i j$.
\end{itemize}
\State Set the price function at seller $i$: $p_i = a_t - b_i X_i - \sum_{l \in C_P(i)} b_l X_l$.
\If {seller $i$ is the source $s$}
\State {Set $X_s = \frac{a_t - a_s}{b_s}$.}
\State \Return
\EndIf
\end{algorithmic}
\end{algorithm}
\footnotetext{If $|C_S(i)| \geqslant 2$, the computation of $b_i$ is more complicated, the detail is provided in Appendix~\ref{para:sm}. We also show Example~\ref{app: ex: price_compute} for this case in the appendix.}
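As a sanity check (our own, not part of the paper's algorithms), applying the backward recurrences of Algorithm~\ref{alg: 1} by hand to the two running examples reproduces their equilibrium source flows:

```python
# Hand application of Algorithm 1's backward recurrences to the two
# running examples (our sanity check; all C_P sets here are empty or {t}).

# Example 1, line s -> v -> t with p_t = 9 - X_t and a_s = 1:
b_t = 1.0
b_v = 2 * b_t              # (SS), C_P(t) empty
b_s_line = 2 * b_v         # (SS), C_P(v) empty
X_s_line = (9.0 - 1.0) / b_s_line
print(b_s_line, X_s_line)  # -> 4.0 2.0, matching x_{sv} = 2 in Example 1

# Example 2, diamond with p_t = 7 - X_t and a_s = 1:
b_t = 1.0
b_u = b_v = b_t            # (MS) into t, C_P(t) empty
b_s = 2 / (1 / b_u + 1 / b_v) + 2 * b_t   # (Simple SM), C_S(s) \ C_P(s) = {t}
X_s = (7.0 - 1.0) / b_s
print(b_s, X_s)            # -> 3.0 2.0, matching x + y = 2 in Example 2
```

In both cases $X_s = \frac{a_t - a_s}{b_s}$ coincides with the total equilibrium flow found in the examples.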
\paragraph{Equilibrium Quantity Computation.} \label{subsubsec:equi_quant_comp}
We consider shortcut-free SPGs. Having the closed-form relation between the equilibrium price and quantity, we present an algorithm that finds the unique equilibrium. Consider the quantity decision of firm $i$ toward its downstream buyers $j \in B(i)$. If firm $i$ has only one buyer, i.e., $|B(i)| = 1$, then by Observation~\ref{obs: fc}, inflow equals outflow at firm $i$ and firm $j$ accepts the entire offer from $i$; formally, $x_{ij} = X_i$. Hence, in the following analysis, we focus on the nontrivial SM case, where the firm has multiple downstream buyers, i.e., $|B(i)| \geqslant 2$. How should $i$ assign the goods to its different buyers so as to maximize its utility? Recall that by Theorem~\ref{lem:pq-relation}, all arcs are active and for each firm $j \in V$, $p_{ij}=p_j$ for each $i \in S(j)$. Therefore, the utility of $i$ in \eqref{eq:fp2} becomes
\begin{align} \label{eq:fp3}
\Pi_i = \sum_{j \in B(i)} p_j x_{ij} - p_i \sum_{j \in B(i)} x_{ij}.
\end{align}
Again by Theorem~\ref{lem:pq-relation}, the price function of seller $j$ is
\begin{align} \label{eq: q2}
p_j = a_t - b_j x_{ij} - \sum_{l \in C_P(j)} b_l X_l
\end{align}
since $X_j = x_{ij}$ in the SM case.
The utility of firm $i$ in \eqref{eq:fp3} is concave. At the equilibrium, if $x_{ij} > 0$, then $\frac{\partial \Pi_i}{\partial x_{ij}} = 0$; if $x_{ij} = 0$, then $\frac{\partial \Pi_i}{\partial x_{ij}} \leqslant 0$. Therefore, solving the optimal decision for firm $i$ is equivalent to solving the following linear complementarity problem (LCP) with variables $x_{ij}$ where $j \in B(i)$.
\begin{equation} \label{eq: lcp} \tag{LCP}
\begin{cases}
\sum_{j \in B(i)} \frac{\partial \Pi_i}{\partial x_{ij}} x_{ij} = 0,\\
\frac{\partial \Pi_i}{\partial x_{ij}} \leqslant 0 \quad \forall j \in B(i),\\
x_{ij} \geqslant 0 \quad \forall j \in B(i).
\end{cases}
\end{equation}
To solve the feasibility problem \ref{eq: lcp} and find the optimal allocation to the downstream firms, we take the derivative of $\Pi_i$ with respect to $x_{ij}$ and obtain
\begin{equation} \label{eq: q3}
\begin{aligned}
\frac{\partial \Pi_i}{\partial x_{ij}} &= p_j + \sum_{h \in B(i)} \frac{\partial p_h}{\partial x_{ij}} x_{ih} - p_i.
\end{aligned}
\end{equation}
The second term of \eqref{eq: q3} can be expanded as
\begin{equation} \label{eq: q4}
\begin{aligned}
\sum_{h \in B(i)} \frac{\partial p_h}{\partial x_{ij}} x_{ih}
&= - b_j x_{ij} - \sum_{h \in B(i)} (\frac{\partial \sum_{l \in C_P(h)} b_l X_l}{\partial x_{ij}}) x_{ih} \\
&= - b_j x_{ij} - \sum_{h \in B(i)} (\sum_{l \in C_P(h) \cap C(j)} b_l) x_{ih} \\
&= - b_j x_{ij} - \sum_{l \in C_T(i,j)} b_l X_l - \sum_{l \in C_P(i)} b_l X_i.
\end{aligned}
\end{equation}
The second equality holds because $X_l$ includes $x_{ij}$ only when $l$ is a child of $j$. The third equality holds by rearranging and summing the inflow values of the merging nodes. Plugging \eqref{eq: q2} and \eqref{eq: q4} back into \eqref{eq: q3}, we get
\begin{equation} \label{eq: q9}
\begin{aligned}
\frac{\partial \Pi_i}{\partial x_{ij}}
&= a_t - 2 b_j x_{ij} - \sum_{l \in C_P(j)} b_l X_l -
\sum_{l \in C_T(i,j)} b_l X_l - \sum_{l \in C_P(i)} b_l X_i - c_i X_i - p_i \\
&= a_t - 2 b_j x_{ij} -
2 \sum_{l \in C_T(i,j)} b_l X_l - const_i
\end{aligned}
\end{equation}
where
\[const_i := (\sum_{l \in C_P(i)} b_l + c_i) X_i + \sum_{l \in C_P(i)} b_l X_l + p_i\]
is defined in terms of $X_i$, $X_l$ for $l \in C_P(i)$, and $p_i$. These values are determined by the upstream firms and are therefore regarded as constants by $i$. The second equality of \eqref{eq: q9} holds by Observation~\ref{obs:mc}: $C_P(j) = C_P(i) \sqcup C_T(i,j)$.
We introduce a convex quadratic program (CQP) to solve \ref{eq: lcp}:
\begin{equation} \label{eq: cp} \tag{CQP}
\begin{aligned}
& \minimize_{x, X} & & \sum_{j \in B(i)} b_j x_{ij}^2 + \sum_{l \in C_S(i) \backslash C_P(i)} b_l X_l^2 \\
& \text{subject to}
& & a_t - 2 b_j x_{ij} - \sum_{l \in C_T(i,j)} 2 b_l X_l \leqslant const_i & \forall j \in B(i), \\
& & & x_{ij} \geqslant 0 & \forall j \in B(i).
\end{aligned}
\end{equation}
By examining the KKT conditions of the quadratic program, each variable $X_l$ satisfies $X_l = \sum_{j \in B(i) \mid l \in C(j)} x_{ij}$, which matches the definition of $X_l$; the feasibility of \ref{eq: lcp} then also follows. The proof of Lemma~\ref{lem: 3.2} is provided in Appendix~\ref{app: lem: 3.2}.
\begin{restatable}{lemma}{lemlcp}\label{lem: 3.2}
Problem~\ref{eq: lcp} is equivalent to the convex optimization problem~\ref{eq: cp}, and the solution is unique.
\end{restatable}
\iffalse
In fact, every edge is active according to lemma~\ref{lem: 3.1}, it suffices to solve a linear system which is a special case of \ref{eq: lcp}. Nevertheless, if we consider the extension case of MSPG in section~\ref{sec: msss} while the inactive edges may appear, then the convex quadratic programming in equation~\ref{eq: cp} is necessary to solve the sub-flows.
\fi
After the relation between equilibrium price and quantity is computed by Algorithm~\ref{alg: 1}, solving \ref{eq: cp} directly yields the optimal decision of each firm in polynomial time. By Theorem~\ref{lem:pq-relation}, $\frac{\partial \Pi_i}{\partial x_{ij}}=0$ since all arcs are active, so this reduces to solving a linear system. The algorithm can be sped up further by distributing the flow from $i$ to $j \in B(i)$ proportionally to the convex coefficients pre-computed in Algorithm~\ref{alg: 1}; moreover, all the prices $p_j$ coincide, so $i$ has no preference over whom to sell to. The proof of Lemma~\ref{lem: 3.3} is given in Appendix~\ref{para:fccp}\footnote{Lemma~\ref{lem: 3.3} is used to find the price and the convex coefficients for the SM case in the proof of Theorem~\ref{lem:pq-relation}.} (the SM case).
\begin{lemma} \label{lem: 3.3}
For the SM case, $\Pi_i$ is maximized by distributing the flow to $j \in B(i)$ proportionally to the convex coefficients pre-computed in Algorithm~\ref{alg: 1}. Moreover, all the prices $p_j$ coincide.
\end{lemma}
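A sketch (ours) of the proportional split of Lemma~\ref{lem: 3.3} on the diamond of Example~\ref{ex:eq-simple-spg}: with $b_u = b_v = 1$ from the backward pass, the convex coefficients are equal, both buyers receive half of the source flow, and both face the same price $4$.

```python
# Proportional split of Lemma 3.3 on the diamond (our sketch).

b = {"u": 1.0, "v": 1.0}                   # b_j from the backward pass
inv_sum = sum(1 / bj for bj in b.values())
alpha = {j: (1 / bj) / inv_sum for j, bj in b.items()}  # convex coefficients

X_s = 2.0                                  # source flow X_s = (a_t - a_s) / b_s
flow = {j: alpha[j] * X_s for j in b}
# Price relation from the text: p_j = a_t - b_j X_j - sum over C_P(j) of b_l X_l,
# here p_j = 7 - b_j x_j - 1 * X_t with C_P(j) = {t} and X_t = X_s.
price = {j: 7.0 - b[j] * flow[j] - 1.0 * X_s for j in b}
print(flow, price)  # both buyers get flow 1.0 and face the same price 4.0
```

The common price $4$ matches the equilibrium prices $p_{su}=p_{sv}=4$ found in Example~\ref{ex:eq-simple-spg}.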
By Lemma~\ref{lem: 3.3}, we introduce Algorithm~\ref{alg: 2}, which finds the equilibrium price and quantity in linear time. The algorithm starts from the source $s$ and distributes the flow to the downstream firms in topological order. Example~\ref{app: ex: flow_compute_alg2} in the appendix illustrates an equilibrium flow computation by Algorithm~\ref{alg: 2}.
\begin{algorithm}[H]
\caption{: SPG Flow and Price Computation (Forward)}
\label{alg: 2}
\renewcommand{\textbf{Input:}}{\textbf{Input:}}
\renewcommand{\textbf{Output:}}{\textbf{Output:}}
\textbf{Input:}{ A shortcut-free SPG $G = (V \cup \{s,t\}, E)$, the market price function $p_t = a_t - b_t X_t$, and the source production cost $p_s = a_s$.}
\textbf{Output:}{ The equilibrium flow assignment $x_{ij}$ for $ij \in E$ and price $p_i$ for $i \in V$.}
\begin{algorithmic}[1]
\State Get $X_s$, the price function $p_i$ for each $i \in V$, and the convex coefficients in the SM case by running Algorithm~\ref{alg: 1}.
\State Assign quantity and price according to a topological order as follows:
\begin{itemize}
\item For the single buyer case, the flow to the buyer is the sum of the upstream flow.
\item For the single seller and multiple buyers case, assign the downstream flow proportionally to the convex coefficients.
\item For each case, set the price according to the quantity via the price function.
\end{itemize}
\If {the buyer is sink $t$}
\State \Return
\EndIf
\end{algorithmic}
\end{algorithm}
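As an illustrative sanity check (not part of the formal development), the backward price computation and forward flow assignment can be sketched for a pure line network, where every arc is in the SS case and, by \ref{ss-price} with empty merging sets, each firm doubles the decreasing coefficient of the price function it receives. This is a minimal sketch under these line-network assumptions, not an implementation of Algorithms~\ref{alg: 1} and~\ref{alg: 2} for general SPGs.

```python
# Minimal sketch for a line network s -> v_1 -> ... -> v_n -> t (SS case only).
# Backward pass: a firm facing price p = a - b*X offers p' = a - 2b*X upstream.
# Forward pass: the equilibrium flow solves a_t - 2^(n+1) * b_t * X = a_s.

def line_equilibrium(a_t, b_t, a_s, n_intermediaries):
    # Backward pass: the coefficient doubles at every firm above the sink.
    b_s = b_t * 2 ** (n_intermediaries + 1)
    # Forward pass: the price offered to the source equals its unit cost.
    X = (a_t - a_s) / b_s
    # Prices along the chain, from the firm closest to t up to the source.
    prices = [a_t - b_t * 2 ** (k + 1) * X for k in range(n_intermediaries + 1)]
    return X, prices

# Line network with p_t = 1 - X_t, p_s = 0, and one intermediary:
X, prices = line_equilibrium(a_t=1.0, b_t=1.0, a_s=0.0, n_intermediaries=1)
# X = 0.25, intermediary price 0.5, price offered to the source 0.0.
```

The returned prices match the price functions $p_a = 1 - 2x$ and $p_s = 1 - 4x$ of the line network example in the next section, evaluated at the equilibrium flow.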
\subsection{General Series Parallel Graphs} Suppose the given SPG $G=(V,E)$ has a shortcut $ij \in E$ that dominates a path $l_{ij} = (i, v_1, ..., v_k, j)$. When the price $p_j$ is a decreasing function of $X_j$, firm $i$ always prefers selling to $j$ directly rather than through the intermediary firms along the path $l_{ij}$, as this yields a higher utility. We defer the proof details to Appendix~\ref{app:lem:shortcut}; Example~\ref{app:ex:shortcut} in the appendix illustrates an equilibrium with inactive trades in an SPG with a shortcut.
\begin{restatable}{lemma}{lemshortcut}\label{lem:shortcut}
Given an SPG, at an equilibrium, if $ij \in E$ is a shortcut of a path $l_{ij}$ and the price $p_j$ is a decreasing function of $X_j$, then there is no trade on $l_{ij}$, i.e. all the arcs on the path $l_{ij}$ are inactive.
\end{restatable}
In the price computation for shortcut-free SPGs, we show by induction that $p_j$ is an affine decreasing function of $X_j$. By Lemma~\ref{lem:shortcut}, given a general SPG, we can remove the dominated paths and obtain a shortcut-free SPG. The equilibrium quantity and price can then be found by Algorithms~\ref{alg: 1} and~\ref{alg: 2} in linear time. The uniqueness of the equilibrium follows by encoding the problem as \ref{eq: lcp}, whose corresponding \ref{eq: cp} has a unique solution. We conclude with the following theorem.
\begin{theorem} \label{thm: 3.1}
There exists a linear time algorithm that computes the equilibrium quantity and price for SPGs, and the equilibrium is unique.
\end{theorem}
\iffalse
\subsubsection{Equilibrium under Seller Price Model}
We end this section by comparing the results under the market price model and the seller price model for an SPG. Given the same input, it turns out in the seller price model, for any firm in $V \cup \{s\}$, the best decision is to sell the same amount of goods to their downstream firms with the same quantity as in the market price model, with the market clearance price.
From the upstream firms' point of view, the best decision is to sell all the goods offered, so the equilibrium is a flow that satisfies conservation. To maximize utility, the price and quantity should be calculated by taking the derivative inductively. The best price and quantity calculation is exactly the same as Algorithm~\ref{alg: 1} and Algorithm~\ref{alg: 2}.
\begin{theorem} \label{thm: same}
For SPG under the seller price model, the equilibrium is exactly the same as under the market price model.
\end{theorem}
\fi
\section{Structural Analysis} \label{sec: eqp}
We present some structural analyses, including the relation between firm location and utility, and the influence and invariants of different SPG component compositions on the equilibrium. We consider shortcut-free SPGs throughout this section.
\subsection{Firm Location and Individual Utility}
This section focuses on a firm's utility at equilibrium. Specifically, how does the position of a firm in the network influence its utility at equilibrium? The following example helps address this question.
\begin{example}[Firm Utility in a Line Network]
\mbox{}\\
\begin{center}
\begin{tikzpicture}
\node[vertex] (s) at (-2, 0) {$s$};
\node at (-3, 0) {$p_s = 0$};
\node[vertex] (a) at (0, 0) {$a$};
\node[vertex] (t) at (2, 0) {$t$};
\node at (3.5, 0) {$p_{t} = 1 - X_t$};
\path[->]
(s) edge node [above] {$x$} (a)
(a) edge node [above] {$x$} (t);
\end{tikzpicture}
\end{center}
The price function at firm $a$ is $p_a = 1 - 2x$ and at producer $s$ it is $p_s = 1 - 4x$, so the equilibrium flow is $x = 1/4$. The utility of firm $a$ is $\Pi_a = (p_t - p_a)x = x^2$ and, at the equilibrium flow, the utility of producer $s$ is $\Pi_s = (p_a - 0)x = (1-2x)x = 2x^2 = 2\Pi_a$.
\end{example}
The example above gives an intuition of the location advantage: a firm closer to the source may have a higher utility than its downstream buyers. However, this is not always true, especially when there is strong competition among the upstream firms (i.e.\ the MS case). An upstream firm that controls all the flow of its downstream firm has a relatively higher utility at equilibrium. Therefore, we introduce the following definition.
\begin{definition}[Dominating Parent]
$i$ is a \emph{dominating} parent of $j$ if every path from the source $s$ to $j$ goes through $i$.
\end{definition}
Before analyzing the utility relation between a dominating parent and a dominated child, let us first focus on individual utility. Using the coefficient relations between the seller $i$ and the buyers $j \in B(i)$ in equations~\ref{eq: SS}, \ref{eq: MS}, and \ref{eq: SM}, we show a closed-form expression of the utility in Lemma~\ref{lem: 4.3}. The proof is provided in Appendix~\ref{app: lem: 4.3}.
\begin{restatable}{lemma}{lemucf}\label{lem: 4.3}
\[\Pi_i = \frac{1}{2} (b_i + \sum_{l \in C_P(i)} b_l) X_i^2 \quad \forall i \in V \cup \{s\}.\]
\end{restatable}
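As a quick numerical sanity check of the closed form, consider the line network example above ($p_t = 1 - X_t$, zero production cost): there the merging child sets are empty and the price coefficients are $b_a = 2$ and $b_s = 4$ (from $p_a = 1 - 2x$ and $p_s = 1 - 4x$). The sketch below assumes the equilibrium flow $x = 1/4$ derived in that example.

```python
# Check the utility closed form Pi_i = (1/2) * b_i * X_i^2 on the line
# network s -> a -> t with p_t = 1 - X_t (merging child sets are empty).
x = 0.25                       # equilibrium flow: solves p_s(x) = 1 - 4x = 0
b = {"a": 2.0, "s": 4.0}       # price coefficients: p_a = 1 - 2x, p_s = 1 - 4x

# Direct utilities: selling revenue minus buying (production) cost.
p_t, p_a = 1 - x, 1 - 2 * x
pi_a_direct = (p_t - p_a) * x  # firm a buys at p_a and sells at p_t
pi_s_direct = (p_a - 0) * x    # source s produces at cost 0 and sells at p_a

# Both match the closed-form expression of the lemma.
assert abs(pi_a_direct - 0.5 * b["a"] * x**2) < 1e-12
assert abs(pi_s_direct - 0.5 * b["s"] * x**2) < 1e-12
```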
From Lemma~\ref{lem: 4.3}, we derive a closed-form expression of the price offered by the source, which is independent of the structure of the supply chain network. Then, we show the \emph{double utility rule} of a dominating parent.
The utility of the source is
\[\Pi_s = \frac{1}{2}b_s X^2_s = \frac{1}{2}b_sX_s \frac{a_t-a_s}{b_s} = \frac{a_t-a_s}{2}X_s.\]
By Lemma~\ref{lem: 3.3}, $s$ offers its buyers the same price at equilibrium. Letting $p = p_j$ for $j \in B(s)$, we have
\[\Pi_s = \frac{a_t-a_s}{2}X_s = (p-a_s)X_s \implies p=\frac{a_t+a_s}{2}.\]
\begin{proposition} \label{prop:same-p}
At equilibrium, the source offers the price $\frac{a_t+a_s}{2}$ to its buyers.
\end{proposition}
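As a numerical check of this price, consider again a line network $s \to a \to t$ with $p_t = a_t - b_t X_t$ but now a nonzero production cost $a_s$; the price function offered to the source is $a_t - 4 b_t X_s$, so the equilibrium flow and the price that $s$ offers to $a$ can be computed directly. This is a sketch under these line-network assumptions.

```python
# Price offered by the source on the line s -> a -> t with p_t = a_t - b_t*X.
def source_price(a_t, b_t, a_s):
    X = (a_t - a_s) / (4 * b_t)   # equilibrium flow: a_t - 4*b_t*X = a_s
    p_a = a_t - 2 * b_t * X       # price that s offers to its buyer a
    return X, p_a

X, p_a = source_price(a_t=1.0, b_t=1.0, a_s=0.2)
# The offered price equals (a_t + a_s)/2, independently of b_t.
assert abs(p_a - (1.0 + 0.2) / 2) < 1e-12
```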
We now prove two propositions showing the location advantage of a dominating parent. First, in the SS and the SM cases, the seller benefits substantially from the competition on the buyer side; the proof is provided in Appendix~\ref{app: cor: 4.1}.
\begin{restatable}{proposition}{proptwousssm}\label{cor: 4.1}
In the SS and the SM case, the utility of the seller is at least twice the buyers' total utility.
\end{restatable}
If a firm controls all the flow of another firm in the supply chain, then its utility is at least twice that of the latter. The proof is provided in Appendix~\ref{app: thm: 4.4}.
\begin{restatable}{proposition}{proptwou}\label{thm: 4.4}
If firm $i$ is a dominating parent of firm $j$, then firm $i$ has at least twice the utility of firm $j$.
\end{restatable}
To sum up, a dominating parent always has a higher utility and the double utility rule always holds, which demonstrates the great value of controlling the upstream trades.
\subsection{Network Efficiency and Component Composition}
To measure how firms benefit from the network, we care not only about the flow value but also about the {\em social welfare}, defined as the total utility of the source and the intermediary firms plus the consumer surplus:
\begin{equation} \label{eq: sw1}
\begin{aligned}
SW(G) &= \sum_{i \in V \cup \{s\}}{\Pi_i} + \frac{1}{2}b_t X_t^2\\
&= \frac{1}{2}\sum_{i \in V \cup \{s\}}{(b_i + \sum_{k \in C_P(i)} b_k) X_i^2} + \frac{1}{2}b_t X_t^2.
\end{aligned}
\end{equation}
The social welfare can also be interpreted as the product of the flow and the price difference between the sink and the source (the producer surplus), plus the consumer surplus:
\begin{equation} \label{eq: sw2}
\begin{aligned}
SW(G) &= (a_t - a_s - b_t X_s)X_s + \frac{1}{2}b_t X_t^2 \\
&= (a_t - a_s - \frac{b_t}{2} X_s)X_s.
\end{aligned}
\end{equation}
The criteria of interest are the welfare efficiency and flow efficiency defined as follows.
\begin{definition}[Welfare Efficiency]
A supply chain network is more welfare efficient if it provides a larger social welfare at equilibrium.
\end{definition}
\begin{definition}[Flow Efficiency]
A supply chain network is more flow efficient if it provides a larger flow at equilibrium.
\end{definition}
We examine the relation between flow efficiency, welfare efficiency, and the structure of an SPG. Suppose the given SPG $G$ is constructed by series and parallel compositions of SPGs $G_1$, $G_2$, ..., $G_n$; then the $G_i$, $i=1, ..., n$, are the \emph{components} of $G$. Since we assume that $G$ is shortcut-free, there are no shortcuts in the components of $G$ either. The flow and welfare efficiency of a supply chain are highly related to its components, and we define the {\em component factor} as follows.
\begin{definition}[Component Factor]
The \emph{component factor} of an SPG $Y$ is
\[\lambda(Y) := \frac{b_{s_Y}}{b_{t_Y}}.\]
\end{definition}
The component factor measures the enlargement of the decreasing linear coefficient of the equilibrium price function of an SPG $Y$: $Y$ has a higher flow value if its component factor $\lambda(Y)$ is small. The following lemma shows that the component factor is independent of $b_{t_Y}$. The proof is provided in Appendix~\ref{app: lem: 4.1}.
\begin{restatable}{lemma}{lemcp}\label{lem: 4.1}
$\lambda(Y) \geqslant 2$ is a constant that depends only on the graph structure of $Y$.
\end{restatable}
Now we can rewrite the social welfare in terms of $\lambda(G)$:
\begin{equation} \label{eq: sw3}
\begin{aligned}
SW(G) &= (a_t - a_s - \frac{b_t}{2} X_s)X_s \\
&= (a_t - a_s - \frac{b_t}{2} \frac{a_t - a_s}{\lambda(G)b_t})\frac{a_t - a_s}{\lambda(G)b_t} \\
&= (a_t - a_s)^2(1-\frac{1}{2\lambda(G)})\frac{1}{\lambda(G)b_t}.
\end{aligned}
\end{equation}
With a fixed sink price function $a_t - b_t X_t$ and source production cost $a_s$, $SW(G)$ is a function of $\lambda(G)$. By Lemma~\ref{lem: 4.1}, $\lambda(G) \geqslant 2$, so $SW(G)$ is maximized when $\lambda(G)=2$ and is a decreasing function of $\lambda(G)$ on $[2,\infty)$. The flow $X_s$ is also a decreasing function of $\lambda(G)$. Moreover, $G$ is the single edge network if and only if $\lambda(G)=2$. Therefore, the single edge network is the most flow and welfare efficient.
\begin{proposition}
The single edge network is the most flow and welfare efficient network.
\end{proposition}
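The monotonicity underlying this proposition can be checked numerically from \eqref{eq: sw3}: with $a_t$, $a_s$, and $b_t$ fixed, both the equilibrium flow and the social welfare decrease as $\lambda(G)$ grows on $[2,\infty)$. The values $a_t = 10$, $a_s = 2$, $b_t = 1$ below are illustrative choices, not taken from the paper.

```python
# Flow and social welfare as functions of the component factor, per eq. (sw3):
# X_s = (a_t - a_s)/(lambda*b_t),  SW = (a_t - a_s)^2 (1 - 1/(2 lambda))/(lambda*b_t).
def flow_and_welfare(lam, a_t=10.0, a_s=2.0, b_t=1.0):
    X = (a_t - a_s) / (lam * b_t)
    SW = (a_t - a_s) ** 2 * (1 - 1 / (2 * lam)) / (lam * b_t)
    return X, SW

values = [flow_and_welfare(lam) for lam in (2, 3, 4, 8, 16)]
# Both flow and welfare strictly decrease as lambda grows beyond 2.
assert all(x1 > x2 for (x1, _), (x2, _) in zip(values, values[1:]))
assert all(w1 > w2 for (_, w1), (_, w2) in zip(values, values[1:]))
```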
If the network is fixed, we have the following. The proof is provided in Appendix~\ref{app: prop: 4.1}.
\begin{restatable}{proposition}{propinc}\label{prop: 4.1}
With a fixed network structure, if the demand at the market increases or the cost at the source decreases, then the market is more flow and welfare efficient and the utility of each individual firm increases.
\end{restatable}
Consider the order of a series composition of two SPGs $Y$ and $Z$. By Lemma~\ref{lem: 4.1},
\[
\lambda(S(Y,Z)) = \lambda(Y)\lambda(Z) = \lambda(S(Z,Y)),
\]
which implies that swapping the order of two components in a series composition does not change the component factor.
\begin{lemma} \label{prop:SYZ}
Given two SPGs $Y$ and $Z$, $\lambda(S(Y,Z)) = \lambda(S(Z,Y))$.
\end{lemma}
We also present a closed-form expression of the component factor $\lambda(P(Y,Z))$ of a parallel composition in terms of $\lambda(Y)$ and $\lambda(Z)$, assuming that $P(Y,Z)$ is shortcut-free. The proof of Lemma~\ref{lem: 4.z} is deferred to Appendix~\ref{app: lem: 4.z}.
\begin{restatable}{lemma}{lempc}\label{lem: 4.z}
Given SPGs $Y$ and $Z$ such that $P(Y,Z)$ does not have any shortcuts,
\[
\lambda(P(Y,Z)) = \frac{(\lambda(Y)-2)(\lambda(Z)-2)}{\lambda(Y)+\lambda(Z)-4} + 2.
\]
\end{restatable}
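Algebraically, the formula of Lemma~\ref{lem: 4.z} says that the reciprocals $1/(\lambda - 2)$ add under a parallel composition (for $\lambda(Y), \lambda(Z) > 2$). The sketch below checks this identity and the composition rules on illustrative values.

```python
# Component factor under series and parallel compositions.
def series(ly, lz):
    # lambda(S(Y,Z)) = lambda(Y) * lambda(Z)
    return ly * lz

def parallel(ly, lz):
    # lambda(P(Y,Z)) = (ly-2)(lz-2)/(ly+lz-4) + 2  (P(Y,Z) shortcut-free)
    return (ly - 2) * (lz - 2) / (ly + lz - 4) + 2

# Two single edges in series (lambda = 2 each) give lambda = 4; composing
# two such two-edge paths in parallel gives lambda = 3.
assert abs(parallel(series(2, 2), series(2, 2)) - 3.0) < 1e-12

# Parallel composition adds the reciprocals 1/(lambda - 2):
ly, lz = 5.0, 9.0
lhs = 1 / (parallel(ly, lz) - 2)
rhs = 1 / (ly - 2) + 1 / (lz - 2)
assert abs(lhs - rhs) < 1e-12
```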
By Lemmas~\ref{prop:SYZ} and \ref{lem: 4.z}, during the construction of an SPG, a parallel composition of two SPGs whose component factors are unchanged yields an unchanged component factor, and switching the order of two local components in a series composition does not change the global component factor. Hence, with a fixed sink price function and production cost, by \eqref{eq: sw3}, the flow and welfare efficiency remain unchanged after the swap. Since the flow remains the same and, by Proposition~\ref{prop:same-p}, the source offers the same price to its buyers regardless of the network structure, the source utility is also unchanged. We conclude with the following proposition.
\begin{proposition}
Suppose the given SPG $G$ is constructed by series and parallel compositions on SPGs $G_1$, $G_2$, ..., and $G_n$, which includes a step $S(G_i,G_j)$ for $i \neq j$ and $i,j \in \{1, ..., n\}$. Let $G'$ be an SPG with the same construction as $G$, except that $S(G_i,G_j)$ is replaced by $S(G_j,G_i)$. Then, $\lambda(G)=\lambda(G')$. With the same sink price function and production cost, the flow efficiency, welfare efficiency, and source utility remain the same.
\end{proposition}
\section{Equilibrium in Generalized Series Parallel Graphs} \label{sec: ext}
We discuss the equilibrium properties in the extended settings where the series parallel graph has multiple sources or multiple sinks. In particular, we show the following:
\begin{itemize}
\item Single-source-and-multiple-sinks SPG: the price function of a firm may be piecewise linear and discontinuous even under simple settings, and there may exist multiple equilibria.
\item Multiple-sources-and-single-sink SPG: an equilibrium may not exist.
\end{itemize}
\subsection{Single Source and Multiple Sinks} \label{sec: ssms}
A series parallel graph with a single source and multiple sinks (SMSPG) is defined as follows.
\begin{definition}[SMSPG] \label{def: 5.2}
$G$ is a {\em single-source-and-multiple-sink SPG} if it can be constructed by deleting the sink node of an SPG and setting the adjacent nodes of the sink as the new sink nodes. The set of sinks is denoted as $T$.
\end{definition}
We first consider the special case where all the markets have the same demand; then all the markets are {\em active} at equilibrium, i.e.\ every market has a positive incoming flow. The proof is similar to that of Theorem~\ref{thm: 3.1}, and we provide a sketch in Appendix~\ref{app: prop: 4.z}.
\begin{restatable}{proposition}{propsmspg}\label{prop: 4.z}
Given an SMSPG, if all the markets have the same demand, then there exists a unique equilibrium that can be found in polynomial time.
\end{restatable}
With different demands $a_t$ for $t \in T$, some markets may be {\em inactive}, i.e.\ receive zero incoming quantity, as the following example shows.
\begin{example} [Markets Activities]
\mbox{}\\
\begin{center}
\begin{tikzpicture}
\node[vertex] (s) at (-2,0) {$s$};
\node[vertex] (t1) at (0,0.8) {$t_1$};
\node[vertex] (a) at (0,-0.8) {$v$};
\node[vertex] (t2) at (2,-.3) {$t_2$};
\node[vertex] (t3) at (2,-1.3) {$t_3$};
\node at (-3, 0) {$p_{s} = 1$};
\node at (1.7, 0.8) {$p_{t_1} = 2 - X_{t_1}$};
\node at (3.7, -.3) {$p_{t_2} = 3 - X_{t_2}$};
\node at (3.7, -1.3) {$p_{t_3} = 11 - X_{t_3}$};
\path[->]
(s) edge [blue] (t1)
(s) edge [blue] (a)
(a) edge [red] (t2)
(a) edge [blue] (t3);
\end{tikzpicture}
\end{center}
Since $a_{t_1} > p_s$, the market $t_1$ is active. Suppose markets $t_2$ and $t_3$ are both active at equilibrium, and $s$ offers $v$ $x+y$ units of goods at price $p_v$; then $v$ buys all the goods from $s$. Suppose $v$ sells $x$ to $t_2$ and $y$ to $t_3$; then the utility of $v$ is
\[\Pi_v = (3-x)x+(11-y)y-p_v(x+y).\]
By taking the derivative of $\Pi_v$ with respect to $x$ and $y$, we have
\[\frac{\partial \Pi_v}{\partial x}=3-2x-p_v \text{ and } \frac{\partial \Pi_v}{\partial y}=11-2y-p_v\]
so $\Pi_v$ is maximized when
\[p_v = 3-2x = 11-2y \implies p_v = 7-(x+y)=7-X_v.\]
When source $s$ makes its decision, the flows $x_{st_1}$ and $x_{sv}$ can be handled independently. The optimal decision maximizing the utility $(p_v - p_s)X_v = (6-X_v)X_v$ that $s$ obtains from $v$ is $X_v = 3$, so that $p_v = 4 > a_{t_2}=3$, which contradicts the assumption that market $t_2$ is active. Therefore, market $t_2$ is inactive, even though $a_{t_2} > a_{t_1}$.
\end{example}
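The derivation in the example can be verified numerically: the stationarity condition of $\Pi_v$ yields the induced inverse demand $p_v = 7 - X_v$, and maximizing the source's utility from $v$ (accounting for its unit cost $p_s = 1$) prices market $t_2$ out. This is a brute-force sketch of this specific example.

```python
# Numerical check of the markets-activity example (p_t2 = 3 - x, p_t3 = 11 - y).
def v_utility(x, y, p_v):
    # Utility of v when selling x to t2 and y to t3, buying x + y at price p_v.
    return (3 - x) * x + (11 - y) * y - p_v * (x + y)

# Interior optimum: 3 - 2x = 11 - 2y = p_v, hence p_v = 7 - (x + y) = 7 - X_v.
X_v = 5.0                       # an interior point where both markets are served
p_v = 7 - X_v
x, y = (3 - p_v) / 2, (11 - p_v) / 2
eps = 1e-4
assert v_utility(x, y, p_v) >= v_utility(x + eps, y - eps, p_v)
assert v_utility(x, y, p_v) >= v_utility(x - eps, y + eps, p_v)

# Source s (unit cost 1) maximizes (p_v - 1)*X_v = (6 - X_v)*X_v over X_v >= 0.
grid = [k / 1000 for k in range(8001)]
X_best = max(grid, key=lambda X: (6 - X) * X)
assert abs(X_best - 3.0) < 1e-9
assert 7 - X_best > 3                # p_v = 4 > a_t2: market t2 is priced out
assert (3 - (7 - X_best)) / 2 < 0    # t2's stationary quantity is negative
```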
The above example goes against the intuition that the market with higher demand is more likely to be active ($t_2$ is inactive while $t_1$ is active). In fact, not only the market demand but also the competitors and the network structure influence market activity: market $t_2$ is inactive because it has a longer supply chain than $t_1$ and faces strong competition from $t_3$, making it less favorable than $t_1$ and $t_3$.
The market behavior of an SMSPG is generally intractable. We therefore focus on supply chain networks of the shape shown in Figure~\ref{fig: 5.0}. Based on the activity status of the markets, we introduce two types of strategies for the upstream firm.
\begin{definition}[Low Price Strategy]
The firm processes a relatively large quantity of goods at a relatively low price, such that all the markets are active.
\end{definition}
\begin{definition}[High Price Strategy]
The firm processes a relatively small quantity of goods at a relatively high price, such that some markets are inactive.
\end{definition}
The firm plays its strategy to maximize its utility. Because of the various choices of strategies, the price function may be piecewise linear and discontinuous. Furthermore, counterintuitive results can occur: an increase in demand may result in a decrease of the total flow and social welfare (compare with Proposition~\ref{prop: 4.1}). To understand these differences, it is helpful to consider an example as in Figure~\ref{fig: 5.0}, where the two supply chain networks have identical structure but different market demands.
\begin{figure}
\caption{Multiple Sinks Supply Networks}
\label{fig: 5.0}
\end{figure}
Intuitively, supply chain 2, with its higher market demand, should have a larger flow and social welfare. However, supply chain 1 is more flow and welfare efficient. The equilibrium price functions at $s$ and $v$ are shown in Figure~\ref{fig: 5.1}. We note that the source firm $s$ has two strategies when $p_s = 7$: both the low and the high price strategies are feasible. Interestingly, when $a_{t_1} = 20$, the utility of $s$ is maximized by the high price strategy and only market $t_1$ is active. However, when the demand at market $t_1$ drops, $s$ prefers the low price strategy.
\begin{figure}
\caption{Piecewise Linear Price Functions of Supply Chain 2}
\label{fig: 5.1}
\end{figure}
Fixing the demand at market $t_2$ and adjusting the demand $a_{t_1}$ at market $t_1$, Figure~\ref{fig: 5.2} shows the source utility, consumer surplus, total flow, and social welfare. The intersection point at $a_{t_1} \approx 19.07$ shows that increasing the demand at market $t_1$ can hurt the supply chain efficiency. When $a_{t_1}$ equals the intersection point, there are multiple equilibria, since $s$ has no preference between the high and the low price strategy. Besides, $a_{t_1}$ is feasible only in the interval $(12,22]$. The calculation details are provided in Example~\ref{app: ex: me_smspg} in the appendix.
\iffalse
\begin{figure}
\caption{Price Strategies Simulation}
\label{fig: 5.3}
\end{figure}
\fi
\begin{figure}\label{fig: 5.2}
\end{figure}
\begin{proposition}
An SMSPG may have multiple equilibria.
\end{proposition}
\begin{remark} \label{rm1}
For supply chain networks of the shape in Figure~\ref{fig: 5.0},
\begin{itemize}
\item The low price strategy always gives a higher flow value than the high price strategy.
\item When the demand difference between two markets is small enough, the low price strategy gives better utility for the source. When the difference is large enough, the high price strategy gives better utility for the source.
\item The low price strategy always produces higher social welfare.
\end{itemize}
\end{remark}
In short, the low price strategy is preferred by $s$ if the demand difference is not large. Besides, with the low price strategy, everyone is usually better off. Further interpretation and detailed calculations of these results are provided in Appendix~\ref{app: rm1}.
\iffalse
When upstream chooses the optimal strategy and flow, there may exist multiple equilibria for downstream firms. Details are in Example~\ref{app: ex: s_muleq}.
\fi
\iffalse
\begin{proposition} \label{prop: 5.1}
Supply chain under low price strategy is always more efficient than under high price strategy.
\end{proposition}
The following proposition shows that as difference of demand between two markets increases, the sources preferred strategy switches from low price strategy to high price strategy.
\begin{proposition} \label{prop: 5.2}
When the demand difference between two markets is small enough, low price strategy gives better payoff for source firm $b$. If the difference is large enough, high price strategy gives better payoff for source firm $b$.
\end{proposition}
To show low price strategy always generate higher social welfare, we prove the following two lemmas first,
\begin{lemma} \label{lem: 5.3}
Low price strategy always produce higher total surplus of firms.
\end{lemma}
\begin{lemma} \label{lem: 5.4}
Low price strategy always produce higher surplus of consumers, higher total surplus of firms.
\end{lemma}
Since social welfare is the sum of consumers' surplus and firms' surplus, as a direct result of above two lemmas, we have the following comparison about social welfare.
\begin{proposition} \label{prop: 5.3}
Low price strategy always produce higher social welfare.
\end{proposition}
\fi
\subsection{Multiple Sources and Single Sink} \label{sec: msss}
The extension of an SPG with multiple sources is defined similarly to Definition~\ref{def: 5.2}.
\begin{definition}[MSSPG] \label{def: 5.1}
$G$ is a {\em multiple-source-and-single-sink SPG} if it can be constructed by deleting the source node of an SPG and setting the adjacent nodes of the source as the new source nodes. The set of sources is denoted as $S$.
\end{definition}
Assuming that the source producers make their decisions simultaneously, an equilibrium may not exist; see Example~\ref{ex: SMSPG} in the appendix.
\begin{proposition}
An equilibrium in an MSSPG may not exist.
\end{proposition}
\section{Conclusion} \label{sec:conclude}
We consider a model of sequential competition in supply chain networks. Our main contribution is to show that when the network is series parallel, the model is tractable and allows a rich set of comparative analyses. In particular, we provide a linear time algorithm to compute the equilibrium, and the algorithm helps us study the influence of the network structure on the total flow and social welfare at equilibrium.
Slightly extending the network structure to series parallel graphs with a single source and multiple sinks (SMSPGs) makes the model intractable. The first open problem is to design an efficient algorithm that verifies whether an SMSPG has an equilibrium and finds one if it exists. The main challenge is the piecewise linearity and the discontinuity of the price function for intermediary firms. This problem may be computationally intractable, but it is unclear what a reasonable proof strategy would be.
Another open problem is to efficiently find an equilibrium in general DAGs with a single source and a single sink. Trades can be inactive even in shortcut-free DAGs, as shown in Example~\ref{app: ex: gen_dag} in the appendix. We conjecture that an equilibrium always exists and that the active trades form a shortcut-free SPG. A natural approach is to compute the price functions inductively in a reverse topological order from the sink. However, this forces one to solve LCPs corresponding to the firms, where the LCPs of the upstream firms are derived from those of the downstream firms. Solving the LCP system requires determining the inactive trades, and the number of combinations of active and inactive trades is exponential. Therefore, a potential strategy to show computational intractability is a reduction from an LCP-based or a quadratic programming problem.
\setstretch{1}
\section*{Appendix}
\appendix
\section{Proofs in Section \ref{sec: clearance}}
\subsection{Proof of The Nonexistence of the MM Case} \label{app:no_mm}
\begin{proof}
The multiple-sellers-and-multiple-buyers (MM) case is $|B(i)| \geqslant 2$ and $|S(j)| \geqslant 2$ for $ij \in E$:
\begin{center}
\begin{tikzpicture}
\node[vertex] (i1) at (2,1.5) {$i_1$};
\node[vertex] (i2) at (2,0.8) {$i_2$};
\node[vertex] (i) at (2,-.5) {$i$};
\node[vertex] (j1) at (4,1.5) {$j_1$};
\node[vertex] (j2) at (4,0.8) {$j_2$};
\node[vertex] (j) at (4,-.5) {$j$};
\draw[dotted, very thick] (2,0.5) -- (2,-.2);
\draw[dotted, very thick] (4,0.5) -- (4,-0.2);
\path[->]
(i) edge (j)
(i1) edge (j)
(i2) edge (j)
(i) edge (j1)
(i) edge (j2);
\node at (3, -1.2) {$MM$};
\end{tikzpicture}
\end{center}
The MM case is impossible in a shortcut-free SPG, which can be proved by induction. By definition, any SPG can be constructed by series and parallel compositions:
\begin{itemize}
\item Series composition: a series composition does not create an MM case.
\item Parallel composition: by checking the merged source and sink, one sees that a parallel composition does not create an MM case either, unless there is a shortcut between the source and the sink.
\end{itemize}
Therefore, the MM case does not occur in a shortcut-free SPG. \QED
\end{proof}
\subsection{Proof of Theorem~\ref{lem:pq-relation}} \label{app:lem:pq-relation}
\lemeqprice*
\begin{proof}
Our strategy is to start from the sink $t$ and argue inductively, via a reverse topological traversal, that the price proposed to $i$ must be an affine decreasing function of $X_i$. We note that the computation is always carried out under the flow conservation condition of Observation~\ref{obs: fc}.
\paragraph{Starting from the Sink $t$.}
We start with the behavior of the direct upstream firms of sink $t$. For a firm $i \in S(t)$, if arc $it$ belongs to the SS case, then the utility of $i$ is
\begin{align*}
\Pi_i &= (a_t - b_t x_{it}) x_{it} - \sum_{k \in S(i)} p_{ki} x_{ki} \\
&= (a_t - b_t \sum_{k \in S(i)} x_{ki}) \sum_{k \in S(i)} x_{ki} - \sum_{k \in S(i)} p_{ki} x_{ki}.
\end{align*}
The prices $p_{ki}$ given by the selling firms are regarded as constants by $i$, and $\Pi_i$ is a concave function. Taking the derivative of $\Pi_i$ with respect to $x_{ki}$, we have
\[\frac{\partial \Pi_i}{\partial x_{ki}} = a_t - 2b_t \sum_{k \in S(i)} x_{ki} - p_{ki}.\]
The price and quantity at equilibrium are a solution of the following linear complementarity problem. Intuitively, $\frac{\partial \Pi_i}{\partial x_{ki}} > 0$ cannot happen, since otherwise $k$ could raise the price $p_{ki}$ until $\frac{\partial \Pi_i}{\partial x_{ki}} = 0$; firm $i$ would still accept all the goods from $k$, and $k$ would obtain a higher payoff, contradicting the equilibrium condition. If $\frac{\partial \Pi_i}{\partial x_{ki}} < 0$, then $k$ does not offer any goods to $i$, so $x_{ki}=0$, since the payoff of $i$ would decrease if $i$ accepted goods from $k$.
\begin{equation} \label{eq: r-lcp} \tag{Reverse LCP}
\begin{cases}
\sum_{k \in S(i)}\frac{\partial \Pi_i}{\partial x_{ki}} x_{ki} = 0,\\
\frac{\partial \Pi_i}{\partial x_{ki}} \leqslant 0 \quad \forall k \in S(i),\\
x_{ki} \geqslant 0 \quad \forall k \in S(i).
\end{cases}
\end{equation}
When $ki$ is active, $p_{ki}$ is such that $\frac{\partial \Pi_i}{\partial x_{ki}} = 0$. Therefore, for an active arc $ki$,
\[p_{ki} = p_i = a_t - 2b_t \sum_{k \in S(i)} x_{ki}.\]
If arc $it$ belongs to the MS case, then the utility of $i$ is
\[\Pi_i = (a_t - b_t \sum_{j \in S(t)} x_{jt}) \sum_{k \in S(i)} x_{ki} - \sum_{k \in S(i)} p_{ki} x_{ki}.\]
The prices $p_{ki}$ and the flows $x_{jt}$ for $j \neq i$ are regarded as constants by $i$: at equilibrium, given these fixed values, $i$ is not willing to change its decision. $\Pi_i$ is a concave function. Taking the derivative of $\Pi_i$ with respect to $x_{ki}$, we have
\[\frac{\partial \Pi_i}{\partial x_{ki}} = a_t - b_t \sum_{k \in S(i)} x_{ki} - b_t \sum_{j \in S(t)} x_{jt} - p_{ki}.\]
By a similar argument as before, the price and quantity at equilibrium is a solution of \ref{eq: r-lcp}. When $ki$ is active, $p_{ki}$ is such that $\frac{\partial \Pi_i}{\partial x_{ki}} = 0$. Therefore, for an active arc $ki$,
\[p_{ki} = p_i = a_t - b_t \sum_{k \in S(i)} x_{ki} - b_t \sum_{j \in S(t)} x_{jt} = a_t - b_i X_i - b_t X_t.\]
\paragraph{Before reaching the SM Case.}
The same procedure can be repeated inductively whenever we meet an MS or SS case during the reverse topological traversal from $t$: given that the downstream price is an affine decreasing function of the inflows of the parent merging child nodes, the derivative of the firm's utility with respect to its quantity decision variables must be zero whenever the quantity is positive.
Before reaching the SM case during the reverse topological traversal from $t$, consider a firm $i \in V$. By the inductive hypothesis, suppose that for each active arc $ij$ (i.e.\ $x_{ij} > 0$),
\[p_{ij} = p_j = a_t - b_j X_j - \sum_{l \in C_P(j)} b_l X_l.\]
We note that the flow on $ki$ merges to the nodes $l \in C_P(j)$ and $j$, so $\frac{\partial X_j}{\partial x_{ki}} = \frac{\partial X_l}{\partial x_{ki}} = 1$.
Consider arc $ki$. If $ki$ belongs to the SS case or the MS case, then the utility of $i$ is
\begin{align*}
\Pi_i &= p_j x_{ij} - \sum_{k \in S(i)} p_{ki} x_{ki} \\
&= p_j \sum_{k \in S(i)} x_{ki} - \sum_{k \in S(i)} p_{ki} x_{ki}.
\end{align*}
If $ki$ is the SS case and $x_{ki} > 0$, then by \ref{eq: r-lcp}, $\frac{\partial \Pi_i}{\partial x_{ki}} = 0$, thus
\begin{align}
p_{ki} &= p_j + \frac{\partial p_j}{\partial x_{ki}} \sum_{k \in S(i)} x_{ki} \nonumber \\
&= a_t - b_j X_j - \sum_{l \in C_P(j)} b_l X_l - (b_j + \sum_{l \in C_P(j)} b_l) \sum_{k \in S(i)} x_{ki} \nonumber \\
&= a_t - (2 b_j + \sum_{l \in C_P(j)} b_l) X_i - \sum_{l \in C_P(i)} b_l X_l \label{ss-price} \tag{SS-price}
\end{align}
where $X_i = X_j = \sum_{k \in S(i)} x_{ki}$ and $C_P(i) = C_P(j)$ by Observation~\ref{obs:mc}.
If $ki$ is the MS case and $x_{ki} > 0$, then by \ref{eq: r-lcp}, $\frac{\partial \Pi_i}{\partial x_{ki}} = 0$, thus
\begin{align}
p_{ki} &= p_j + \frac{\partial p_j}{\partial x_{ki}} \sum_{k \in S(i)} x_{ki} \nonumber \\
&= a_t - b_j X_j - \sum_{l \in C_P(j)} b_l X_l - (b_j + \sum_{l \in C_P(j)} b_l) \sum_{k \in S(i)} x_{ki} \nonumber \\
&= a_t - (b_j +\sum_{l \in C_P(j)} b_l) X_i - b_j X_j - \sum_{l \in C_P(j)} b_l X_l \nonumber \\
&= a_t - (b_j + \sum_{l \in C_P(j)} b_l) X_i - \sum_{l \in C_P(i)} b_l X_l \label{ms-price} \tag{MS-price}
\end{align}
where $X_i = \sum_{k \in S(i)} x_{ki}$ and $C_P(i) = C_P(j) \sqcup \{j\}$ by Observation~\ref{obs:mc} in this case.
\paragraph{Reaching the SM Case.} \label{para:sm}
Define the set of nodes $N_G$ such that for any node $i \in N_G$, $i$ itself and all the children of $i$ have an empty set of self-merging child nodes. Formally,
\[
N_G = \{i \mid i \in V \cup\{s\} \text{ such that } \forall j \in C(i) \cup \{i\}, C_s(j) = \emptyset\}.
\]
$N_G$ is the set of nodes reached from $t$ via the reverse topological traversal before meeting the sellers of the SM case. These sellers form the set of nodes $S_G$: for any node $i \in S_G$, there exists a buyer of $i$ that belongs to $N_G$. Formally,
\[S_G = \{i \mid i \in V \cup \{s\} \text{ such that } B(i) \cap N_G \neq \emptyset\}.\]
In the following figure, $N_G$ consists of the red nodes $j_1$, $v_1$, $v_2$, $l$, $k$, and $t$, while $S_G$ consists of the black nodes $s$ and $j_2$.
\begin{center}
\begin{tikzpicture}[baseline=0]
\node[vertex] (s) at (0,0) {$s$};
\node[vertex,red] (a) at (2,1.5) {$j_1$};
\node[vertex] (a1) at (1.5,0) {$j_2$};
\node[vertex,red] (b) at (3,0.5) {$v_1$};
\node[vertex,red] (c) at (3,-0.5) {$v_2$};
\node[vertex,red] (d) at (4.5,0) {$l$};
\node[vertex,red] (f) at (4,1.5) {$k$};
\node[vertex,red] (t) at (6,0) {$t$};
\path[->]
(s) edge (a)
(s) edge (a1)
(a1) edge (b)
(a1) edge (c)
(a) edge (f)
(b) edge (d)
(c) edge (d)
(f) edge (t)
(d) edge (t)
;
\end{tikzpicture}
\end{center}
There must exist a node $i \in S_G$ such that all of its buyers $j \in B(i)$ belong to $N_G$ and $i$ is the seller of an SM case. Given that the $p_j$, $j \in B(i)$, are affine decreasing functions, the utility of $i$ is
\begin{equation} \label{eq: pi_i}
\Pi_i = \sum_{j \in B(i)}{p_j x_{ij}} - \sum_{k \in S(i)} p_{ki} x_{ki}.
\end{equation}
Suppose the sellers of $i$ make their best decisions and offer $i$ a total inflow $X_i = C$ such that $i$ accepts everything. At equilibrium, $i$ has no control over the buying cost $\sum_{k \in S(i)} p_{ki} x_{ki}$ or the total inflow $\sum_{k \in S(i)} x_{ki}$; what $i$ can decide is how to distribute $C$ among its buyers in $N_G$. Here the $p_{ki}$ are regarded as constants given to $i$, so $\Pi_i$ is a concave function, subject to the constraint $\sum_{j \in B(i)}x_{ij} = \sum_{k \in S(i)}x_{ki}$. Therefore, we can rewrite the problem of maximizing $\Pi_i$ as the following convex quadratic program:
\begin{equation} \label{eq: dist} \tag{SM-CQP}
\begin{aligned}
& \maximize_{x} & & \sum_{j \in B(i)}{p_j x_{ij}} \\
& \text{subject to}
& & \sum_{j \in B(i)}{x_{ij}} = C.
\end{aligned}
\end{equation}
Consider the Lagrangian function:
\[
\begin{aligned}
L(x, \lambda) &= \sum_{j \in B(i)}{p_j x_{ij}} - \lambda(\sum_{j \in B(i)}{x_{ij}} - C).
\end{aligned}
\]
By taking the derivative of $L(x,\lambda)$ with respect to $x_{ij}$, we have
\begin{align}
\frac{\partial L(x, \lambda)}{\partial x_{ij}} &= p_j + \sum_{h \in B(i)} \frac{\partial p_h}{\partial x_{ij}} x_{ih} - \lambda \nonumber \\
&= a_t - b_j x_{ij} - \sum_{l \in C_P(j)} b_l X_l - b_j x_{ij} - \sum_{l \in C_T(i,j)} b_l X_l - \sum_{l \in C_P(i)} b_l X_i - \lambda \nonumber \\
&= a_t - 2 b_j x_{ij} -
2 \sum_{l \in C_T(i,j)} b_l X_l - \sum_{l \in C_P(i)} b_l (X_l + C) - \lambda \label{eq: dLdx}
\end{align}
where $\sum_{l \in C_P(i)} b_l (X_l + C)$ is fixed by the variables $X_l$ and $C$, which are decided by the upstream buyers of $i$, and is thus regarded as a constant by $i$. The first equality uses the inductive hypothesis on $p_j$; the second follows by splitting $C_P(j) = C_T(i,j) \sqcup C_P(i)$, collecting the inflow through the merging nodes, and substituting $X_i = C$.
$\sum_{j \in B(i)}{p_j x_{ij}}$ is maximized when $\frac{\partial L(x, \lambda)}{\partial x_{ij}} = 0$ for each $j \in B(i)$. This indicates
\[
a_t - 2 b_j x_{ij} -
2 \sum_{l \in C_T(i,j)} b_l X_l = \sum_{l \in C_P(i)} b_l (X_l + C) + \lambda.
\]
By rearranging, this can be formulated as a linear system
\begin{equation} \label{ls-sm} \tag{SM-LS}
\begin{cases}
2 b_j x_{ij} +
2 \sum_{l \in C_T(i,j)} b_l X_l = D \quad \forall j \in B(i), \\
\sum_{j \in B(i)}{x_{ij}} = C,
\end{cases}
\end{equation}
where $D:=a_t - \sum_{l \in C_P(i)} b_l (X_l + C) - \lambda$. Since the right-hand side $D$ is the same for each linear constraint $j \in B(i)$, the solution of \ref{eq: dist} has the form $x_{ij}=\alpha_{j} C$ with $\sum_{j \in B(i)} \alpha_{j}=1$. We focus on finding the \emph{convex coefficients} $\alpha_j$ and the price $p_i$.
\paragraph{Finding the Convex Coefficients and the Price.} \label{para:fccp} Consider the SM case where $ij \in E$. We start with the simple SM case as a warm-up and then continue with the general SM case. \\
\textbf{Simple SM:} $|B(i)| \geqslant 2$, $|S(j)| = 1$, and $|C_S(i)|=1$:
\begin{center}
\begin{tikzpicture}
\node[vertex] (j1) at (8,1.5) {$j_1$};
\node[vertex] (j2) at (8,0.8) {$j_2$};
\node[vertex] (jk) at (8,-1.5) {$j_m$};
\node[vertex] (i) at (6,0) {$i$};
\node[vertex] (j) at (8,-0.2) {$j$};
\node (d1) at (10,1.5) {$...$};
\node (d2) at (10,0.8) {$...$};
\node (d3) at (10,-0.2) {$...$};
\node (d4) at (10,-1.5) {$...$};
\node[vertex] (h) at (12,0) {$h$};
\draw[dotted, very thick] (8,0.5) -- (8,0.1);
\draw[dotted, very thick] (8,-0.5) -- (8,-1.2);
\draw[dotted, very thick] (10,0.5) -- (10,0.1);
\draw[dotted, very thick] (10,-0.5) -- (10,-1.2);
\path[->]
(i) edge (j1)
(i) edge (j2)
(i) edge (j)
(i) edge (jk)
(j1) edge (d1)
(j2) edge (d2)
(j) edge (d3)
(jk) edge (d4)
(d1) edge (h)
(d2) edge (h)
(d3) edge (h)
(d4) edge (h);
\end{tikzpicture}
\end{center}
In the simple SM case, $C_T(i,j) = C_S(i) \setminus C_P(i)$ is the same for each $j \in B(i)$, so $2 \sum_{l \in C_T(i,j)} b_l X_l$ is also the same. It thus suffices to find $\alpha_j$ such that $b_j x_{ij}$ is the same for each $j \in B(i)$. Let
\begin{equation} \label{app: eq: alpha_j} \tag{Simple-SM-$\alpha_j$}
\alpha_j = \frac{\frac{1}{b_j}}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}},
\end{equation}
then when $x_{ij}=\alpha_j C$, the quantities $2 b_j x_{ij} + 2 \sum_{l \in C_T(i,j)} b_l X_l$ are the same for all $j \in B(i)$ in \ref{ls-sm}. The best strategy for $i$ is therefore to assign $\alpha_j C$ to arc $ij$. For all $j' \in B(i)$, $b_{j'}$ is positive, so $\alpha_j$ is also positive. Therefore, if $C$ is positive, then all arcs $ij$ are active.
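As a quick numerical sanity check (not part of the formal argument), the following Python sketch verifies that the coefficients of \ref{app: eq: alpha_j} equalize $b_j x_{ij}$ across buyers and sum to one; the concrete values of $b_j$ and $C$ are arbitrary illustrative choices.

```python
from math import isclose

# Hypothetical slope coefficients b_j for the buyers j in B(i); values are arbitrary.
b = {"j1": 2.0, "j2": 1.0, "j3": 4.0}
C = 5.0  # total inflow offered to i

inv_sum = sum(1.0 / bj for bj in b.values())
alpha = {j: (1.0 / bj) / inv_sum for j, bj in b.items()}  # Simple-SM-alpha_j

x = {j: alpha[j] * C for j in b}             # flow assigned to each arc ij
products = [b[j] * x[j] for j in b]          # b_j * x_ij, should coincide for all j

assert isclose(sum(alpha.values()), 1.0)               # convex coefficients sum to one
assert all(isclose(p, products[0]) for p in products)  # equalized marginal terms
assert all(xj > 0 for xj in x.values())                # all arcs active when C > 0
```

Since $X_l = X_i$ for every $l \in C_T(i,j)$ in the simple SM case, equalizing $b_j x_{ij}$ is equivalent to equalizing the left-hand sides of \ref{ls-sm}.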
Price $p_j$ is the same for each $j \in B(i)$:
\begin{align}
p_j &= a_t - b_j x_{ij} - \sum_{l \in C_P(j)}{b_l X_l} \nonumber \\
&= a_t - b_j\frac{\frac{1}{b_j}}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}C - \sum_{l \in C_T(i,j)}{b_l X_l} - \sum_{l \in C_P(i)}{b_l X_l} \nonumber \\
&= a_t - \frac{1}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}X_i - \sum_{l \in C_S(i) \setminus C_P(i)} b_l X_i - \sum_{l \in C_P(i)}{b_l X_l} \nonumber
\end{align}
where the last equality holds since $X_l = X_i$ when $l \in C_S(i) \setminus C_P(i)$. The utility of $i$ is
\begin{equation*}
\Pi_i = (a_t - \frac{1}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}X_i - \sum_{l \in C_S(i) \setminus C_P(i)} b_l X_i - \sum_{l \in C_P(i)}{b_l X_l})X_i - \sum_{k \in S(i)} p_{ki} x_{ki}.
\end{equation*}
By taking the derivative of $\Pi_i$ with respect to $x_{ki}$, we have
\begin{align*}
\frac{\partial \Pi_i}{\partial x_{ki}} &= a_t - \frac{1}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}} X_i - \sum_{l \in C_S(i) \setminus C_P(i)} b_l X_i - \sum_{l \in C_P(i)}{b_l X_l}\\
& \quad - (\frac{1}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}+ \sum_{l \in C_S(i) \setminus C_P(i)} b_l + \sum_{l \in C_P(i)}{b_l})X_i - p_{ki} \\
&= a_t - (\frac{2}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}+ 2\sum_{l \in C_S(i) \setminus C_P(i)}{b_l}+\sum_{l \in C_P(i)}{b_l})X_i - \sum_{l \in C_P(i)}{b_l X_l} - p_{ki}.
\end{align*}
We note that $\frac{\partial X_i}{\partial x_{ki}} = 1$ and $\frac{\partial X_l}{\partial x_{ki}} = 1$ for $l \in C_P(i) \cup C_S(i)$. By \ref{eq: r-lcp}, if $x_{ki} > 0$, then $\frac{\partial \Pi_i}{\partial x_{ki}}=0$, so
\begin{equation} \label{simple-sm-price} \tag{Simple-SM-price}
p_{ki} = a_t - (\frac{2}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}+ 2\sum_{l \in C_S(i) \setminus C_P(i)}{b_l}+\sum_{l \in C_P(i)}{b_l})X_i - \sum_{l \in C_P(i)}{b_l X_l}.
\end{equation}
\textbf{General SM:} $|B(i)| \geqslant 3$, $|S(j)| = 1$, and $|C_S(i)| \geqslant 2$:
\begin{center}
\begin{tikzpicture}
\node[vertex] (j1) at (8,1.5) {$j_1$};
\node[vertex] (j2) at (8,0.8) {$j_2$};
\node[vertex] (jk) at (8,-1.5) {$j_m$};
\node[vertex] (i) at (6,0) {$i$};
\node[vertex] (j) at (8,-0.2) {$j$};
\node (d1) at (10,1.5) {$...$};
\node (d2) at (10,0.8) {$...$};
\node (d3) at (10,-0.2) {$...$};
\node (d4) at (10,-1.5) {$...$};
\node (dd) at (14,0.5) {$...$};
\node[vertex] (h) at (12,0.5) {$h_1$};
\node[vertex] (h2) at (16,0) {$h_n$};
\draw[dotted, very thick] (8,0.5) -- (8,0.1);
\draw[dotted, very thick] (8,-0.5) -- (8,-1.2);
\draw[dotted, very thick] (10,0.5) -- (10,0.1);
\draw[dotted, very thick] (10,-0.5) -- (10,-1.2);
\path[->]
(i) edge (j1)
(i) edge (j2)
(i) edge (j)
(i) edge (jk)
(j1) edge (d1)
(j2) edge (d2)
(j) edge (d3)
(jk) edge (d4)
(d1) edge (h)
(d2) edge (h)
(d3) edge (h)
(h) edge (dd)
(dd) edge (h2)
(d4) edge (h2);
\end{tikzpicture}
\end{center}
List the nodes in $C_S(i)$ as $h_1, h_2, \dots, h_n$ according to a topological order, where $h_1$ is the first merging node and $h_n$ is the last merging node in $C_S(i)$. For each $h_k \in C_S(i)$, let $B_k(i)$ be the largest subset of $B(i)$ such that for each $j \in B_k(i)$, $ij$ is the first arc of a corresponding disjoint path that eventually reaches $h_k$ without reaching any $h_r$ with $r < k$. Let $P_k(i)$ be the {\em direct merging parent set}, consisting of the nodes $h_r$ that can reach $h_k$ without passing through any $h_l$ with $r < l < k$.
In the following example, $C_S(i) = \{h_1, h_2\}$. The paths from $i$ via $j_2$ and $j_3$ merge at $h_1$. The path from $i$ via $j_1$ reaches $h_2$ without passing $h_1$. $B_2(i) = \{j_1\}$ and $B_1(i) = \{j_2, j_3\}$. $P_2(i) = \{h_1\}$ and $P_1(i) = \emptyset$.
\begin{center}
\begin{tikzpicture}[baseline=0]
\node[vertex] (s) at (0,0) {$i$};
\node[vertex] (a) at (2,1.5) {$j_1$};
\node[vertex] (b) at (2,0.5) {$j_2$};
\node[vertex] (c) at (2,-0.5) {$j_3$};
\node[vertex] (d) at (4,0) {$h_1$};
\node[vertex] (f) at (4,1.5) {$v$};
\node[vertex] (t) at (6,0) {$h_2$};
\path[->]
(s) edge (a)
(s) edge (b)
(s) edge (c)
(a) edge (f)
(b) edge (d)
(c) edge (d)
(f) edge (t)
(d) edge (t)
;
\end{tikzpicture}
\end{center}
The calculation of the $\alpha_j$ is done inductively: we start from $k=1$, then $k=2$, and so on until $k=n$. We define an {\em aggregate variable} $c_k(i)$ for the vertices $B_k(i) \sqcup \{h_k\}$ recursively as follows:
\begin{equation} \label{app: eq: agg}
c_k(i):= \frac{1}{\sum_{j \in B_k(i)} \frac{1}{b_{j}} + \sum_{l \mid h_l \in P_k(i)}\frac{1}{c_l(i)}} + b_{h_k}.
\end{equation}
When $P_k(i) = \emptyset$, we only consider the nodes in $B_k(i)$. When $h_n$ is reached, if $h_n \in C_P(i)$, then $b_{h_n}$ is not part of the aggregate variable $c_n(i)$. Therefore,
\begin{equation} \label{app: eq: agg-n}
c_n(i) := \frac{1}{\sum_{j \in B_n(i)} \frac{1}{b_{j}} + \sum_{l \mid h_l \in P_n(i)}\frac{1}{c_l(i)}} + \sum_{l \in \{h_n\} \setminus C_P(i)}{b_l}.
\end{equation}
Now we find the convex coefficient for each arc $ij$ with $j \in B(i)$, which allows us to rewrite $p_i$ in terms of $X_i$. The approach is a traversal of the merging nodes until $h_n$ is reached. At step $k$, for each $p$ such that $h_p \in P_k(i)$, the aggregate variable $c_p(i)$, merged at $h_k$ together with the nodes $j \in B_k(i)$, is {\em currently} weighted by
\begin{equation} \label{app: eq: aggcoeff}
\beta_p(i) := \frac{\frac{1}{c_p(i)}}{\sum_{j \in B_k(i)} \frac{1}{b_{j}} + \sum_{l \mid h_l \in P_k(i)}\frac{1}{c_l(i)}}
\end{equation}
while node $j \in B_k(i)$ is weighted by
\begin{equation} \label{app: eq: vercoeff}
\beta_{j} := \frac{\frac{1}{b_j}}{\sum_{j' \in B_k(i)} \frac{1}{b_{j'}} + \sum_{l \mid h_l \in P_k(i)}\frac{1}{c_l(i)}}.
\end{equation}
For $j \in B(i)$, the convex coefficient of $ij$ is
\begin{equation} \label{app: eq: prodcoeff} \tag{General-SM-$\alpha_j$}
\alpha_j = \beta_{j}\prod_{p \mid h_p \in C_T(i,j) \setminus \{h_n\}}{\beta_p(i)}.
\end{equation}
We note that $\alpha_j$ is positive since $b_{j'}$ and $b_l$, for $j' \in B(i)$ and $l \in C_T(i,j')$, are all positive. When $x_{ij}=\alpha_j C$, the quantities $2 b_j x_{ij} + 2 \sum_{l \in C_T(i,j)} b_l X_l$ are the same for all $j \in B(i)$ in \ref{ls-sm}; that is, $x_{ij}=\alpha_j C$ is the solution of \ref{ls-sm}. This implies that if $C > 0$, then all arcs $ij$ are active. In particular, when $x_{ij}=\alpha_j C$,
\begin{equation} \label{cnx}
b_j x_{ij} + \sum_{l \in C_T(i,j)}{b_l X_l} = c_n(i) C = c_n(i) X_i.
\end{equation}
Price $p_j$ is the same for each $j \in B(i)$:
\begin{align}
p_j &= a_t - b_j x_{ij} - \sum_{l \in C_P(j)}{b_l X_l} \nonumber \\
&= a_t - b_j x_{ij} - \sum_{l \in C_T(i,j)}{b_l X_l} - \sum_{l \in C_P(i)}{b_l X_l} \nonumber \\
&= a_t - c_n(i) X_i - \sum_{l \in C_P(i)}{b_l X_l}. \label{sm-pj}
\end{align}
The utility of $i$ is
\[
\Pi_i = (a_t - c_n(i)X_i - \sum_{l \in C_P(i)}{b_l X_l})X_i - \sum_{k \in S(i)} p_{ki} x_{ki}.
\]
By taking the derivative of $\Pi_i$ with respect to $x_{ki}$, we have
\begin{align}
\frac{\partial \Pi_i}{\partial x_{ki}} &= a_t - c_n(i)X_i - \sum_{l \in C_P(i)}{b_l X_l} - (c_n(i) + \sum_{l \in C_P(i)}{b_l})X_i - p_{ki} \nonumber \\
&= a_t - (2c_n(i)+\sum_{l \in C_P(i)}{b_l})X_i - \sum_{l \in C_P(i)}{b_l X_l} - p_{ki}. \label{sm-dev-0}
\end{align}
We note that $\frac{\partial X_i}{\partial x_{ki}} = 1$ and $\frac{\partial X_l}{\partial x_{ki}} = 1$ for $l \in C_P(i)$. By \ref{eq: r-lcp}, if $x_{ki} > 0$, then $\frac{\partial \Pi_i}{\partial x_{ki}}=0$, so
\begin{equation} \label{general-sm-price} \tag{General-SM-price}
p_{ki}=a_t - (2c_n(i)+\sum_{l \in C_P(i)}{b_l})X_i - \sum_{l \in C_P(i)}{b_l X_l}.
\end{equation}
When $n=1$, the general SM case is exactly the simple SM case where $P_1(i)=\emptyset$ and $B_1(i)=B(i)$. By \eqref{app: eq: agg-n} and \ref{app: eq: prodcoeff},
\[c_1(i)=\frac{1}{\sum_{j \in B(i)}{\frac{1}{b_j}}}+ \sum_{l \in C_S(i) \setminus C_P(i)}{b_l} \text{ and } \alpha_j=\beta_j=\frac{\frac{1}{b_j}}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}.\]
\ref{app: eq: alpha_j} exactly matches \ref{app: eq: prodcoeff} and \ref{simple-sm-price} exactly matches \ref{general-sm-price}.
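As a numerical sanity check of the general construction (not part of the formal argument), the following Python sketch instantiates the worked example above, where $i$ has buyers $j_1, j_2, j_3$, the paths via $j_2$ and $j_3$ merge at $h_1$, and all paths merge at $h_2$, so that $B_1(i)=\{j_2,j_3\}$, $B_2(i)=\{j_1\}$, and $P_2(i)=\{h_1\}$. All coefficient values are arbitrary, and we assume $h_2 \notin C_P(i)$ so that $b_{h_2}$ enters $c_2(i)$. The sketch checks that the $\alpha_j$ sum to one and that \eqref{cnx} holds for every buyer.

```python
from math import isclose

# Illustrative coefficients for the example above (arbitrary values).
b_j1, b_j2, b_j3 = 2.0, 1.0, 3.0
b_h1, b_h2 = 0.5, 1.0
C = 1.0  # total inflow accepted by i

# Aggregate variables, eqs. (app: eq: agg) and (app: eq: agg-n):
c1 = 1.0 / (1.0 / b_j2 + 1.0 / b_j3) + b_h1
c2 = 1.0 / (1.0 / b_j1 + 1.0 / c1) + b_h2   # h2 not in C_P(i), so b_h2 is included

# Weights at step k=2 (aggcoeff / vercoeff):
denom2 = 1.0 / b_j1 + 1.0 / c1
beta_j1 = (1.0 / b_j1) / denom2
beta_1 = (1.0 / c1) / denom2
# Weights at step k=1 (P_1(i) is empty):
denom1 = 1.0 / b_j2 + 1.0 / b_j3
beta_j2, beta_j3 = (1.0 / b_j2) / denom1, (1.0 / b_j3) / denom1

# Convex coefficients (General-SM-alpha_j):
a_j1 = beta_j1                # C_T(i, j1) \ {h2} is empty
a_j2 = beta_j2 * beta_1       # the path via j2 passes h1 before h2
a_j3 = beta_j3 * beta_1
assert isclose(a_j1 + a_j2 + a_j3, 1.0)

# Verify eq. (cnx): b_j x_ij + sum_{l in C_T(i,j)} b_l X_l = c_n(i) C for every j.
X_h1 = (a_j2 + a_j3) * C      # flow merging at h1
X_h2 = C                      # all of i's flow passes h2
lhs = [b_j1 * a_j1 * C + b_h2 * X_h2,
       b_j2 * a_j2 * C + b_h1 * X_h1 + b_h2 * X_h2,
       b_j3 * a_j3 * C + b_h1 * X_h1 + b_h2 * X_h2]
assert all(isclose(v, c2 * C) for v in lhs)
```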
\paragraph{Reverse Topological Traversal until Reaching the Source.} We have shown that when the price function at $t$ is an affine decreasing function of the inflow $X_t$, then by induction, at node $i$, whenever the SS, the MS, or the SM case is encountered, at equilibrium the price $p_{ki}$ of each active trade on arc $ki$ is the same. This price can be rewritten as $p_i$, which is an affine decreasing function of the inflow $X_i$. When the source $s$ is reached, the price at $s$ satisfies $p_s = a_s = a_t - b_s X_s$. Since $a_t > a_s$ and $b_s > 0$, we have $X_s > 0$. At equilibrium, by the flow conservation property, the fact that goods are distributed according to the positive convex coefficients in the SM case, and the assumption that there are no shortcuts, all arcs in $E$ are active, each seller $k \in S(i)$ offers $i$ the same price $p_i$, and $p_i = a_t - b_i X_i - \sum_{l \in C_P(i)} b_l X_l$.
For completeness, we list the closed forms of $b_i$ and the convex coefficients $\alpha_j$. These closed-form expressions are used in Algorithm~\ref{alg: 1}.
\textbf{SS:} By \ref{ss-price},
\[b_i=2 b_j + \sum_{l \in C_P(j)} b_l.\]
\textbf{MS:} By \ref{ms-price},
\[b_i=b_j + \sum_{l \in C_P(j)} b_l.\]
\textbf{Simple SM:} By \ref{simple-sm-price} and \ref{app: eq: alpha_j},
\[b_i=\frac{2}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}+ 2\sum_{l \in C_S(i) \setminus C_P(i)}{b_l}+\sum_{l \in C_P(i)}{b_l} \text{ and } \alpha_j = \frac{\frac{1}{b_j}}{\sum_{j' \in B(i)}{\frac{1}{b_{j'}}}}.\]
\textbf{General SM:} By \ref{general-sm-price} and \ref{app: eq: prodcoeff},
\begin{equation} \label{general-sm} \tag{General SM}
b_i=2c_n(i)+\sum_{l \in C_P(i)}{b_l} \text{ and }
\alpha_j = \beta_{j}\prod_{p \mid h_p \in C_T(i,j) \setminus \{h_n\}}{\beta_p(i)}
\end{equation}
where $c_n(i)$, $\beta_p(i)$, and $\beta_j$ are defined by \eqref{app: eq: agg}, \eqref{app: eq: agg-n}, \eqref{app: eq: aggcoeff}, and \eqref{app: eq: vercoeff}.
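As an illustration of how these closed forms compose (a sketch under the assumption that $C_P$ is empty at every step, which holds on a line graph with no merging nodes), the following Python snippet applies the SS closed form twice along the line $s \to v \to t$:

```python
# SS closed form with empty C_P: b_i = 2 b_j (no merging-node terms on a line graph).
b_t = 1.0  # illustrative coefficient at the sink market

def ss(b_j, sum_cp_j=0.0):
    """SS closed form: b_i = 2 b_j + sum of b_l over C_P(j)."""
    return 2 * b_j + sum_cp_j

b_v = ss(b_t)   # coefficient of the price offered to v
b_s = ss(b_v)   # coefficient of the price at the source
assert b_s == 4 * b_t
```

The resulting $b_s = 4 b_t$ matches the line-graph computation in the proof of Remark~\ref{rm1}, where $a_s = a_1 - 4 b_1 X^h_v$; it also illustrates why the ratio $\lambda = b_s/b_t$ is a constant independent of $b_t$.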
\QED
\end{proof}
\subsection{Proof of Lemma~\ref{lem: 3.2}} \label{app: lem: 3.2}
\lemlcp*
\begin{proof}
We recall the feasibility problem \ref{eq: lcp}
\[
\begin{cases}
\sum_{j \in B(i)} \frac{\partial \Pi_i}{\partial x_{ij}} x_{ij} = 0,\\
\frac{\partial \Pi_i}{\partial x_{ij}} \leqslant 0 \quad \forall j \in B(i),\\
x_{ij} \geqslant 0 \quad \forall j \in B(i).
\end{cases}
\]
and the optimization problem \ref{eq: cp}
\[
\begin{aligned}
& \minimize_{x, X} & & \sum_{j \in B(i)} b_j x_{ij}^2 + \sum_{l \in C_S(i) \backslash C_P(i)} b_l X_l^2 \\
& \text{subject to}
& & a_t - 2 b_j x_{ij} - \sum_{l \in C_T(i,j)} 2 b_l X_l \leqslant const_i & \forall j \in B(i), \\
& & & x_{ij} \geqslant 0 & \forall j \in B(i).
\end{aligned}
\]
Consider the Lagrangian function:
\[
\begin{aligned}
L(x, X, \lambda) &= \sum_{j \in B(i)} b_j x_{ij}^2 + \sum_{l \in C_S(i) \backslash C_P(i)} b_l X_l^2 \\
&\quad - \sum_{j \in B(i)} \lambda_{ij} (a_t - 2 b_j x_{ij} - \sum_{l \in C_T(i,j)} 2 b_l X_l - const_i).
\end{aligned}
\]
\textbf{Stationarity condition:}
\begin{itemize}
\item By taking the derivative of $L$ with respect to $x_{ij}$, we have
\[
\frac{\partial L(x, X, \lambda)}{\partial x_{ij}}
= 2 b_j x_{ij} - 2 b_j \lambda_{ij} = 0
\]
which implies $x_{ij} = \lambda_{ij}$.
\item By taking the derivative of $L$ with respect to $X_l$ where $l \in C_S(i) \setminus C_P(i)$, we have
\[
\frac{\partial L(x, X, \lambda)}{\partial X_l}
= 2b_l X_l - \sum_{j \mid l \in C_P(j)} 2 b_l \lambda_{ij} = 0
\]
which implies $X_l = \sum_{j \mid l \in C_P(j)} \lambda_{ij} = \sum_{j \mid l \in C_P(j)} x_{ij}$. This is exactly the definition of $X_l$ (the total flow through $l$).
\end{itemize}
\textbf{Complementarity condition:}\\
$\forall j \in B(i)$ (we recall that $x_{ij} = \lambda_{ij}$):
\begin{align*}
\lambda_{ij}(a_t - 2 b_j x_{ij} - \sum_{l \in C_T(i,j)} 2 b_l X_l - const_i)
= x_{ij} \frac{\partial \Pi_i}{\partial x_{ij}}
= 0.
\end{align*}
Combined with the primal feasibility conditions $\frac{\partial \Pi_i}{\partial x_{ij}} \leqslant 0$ and $x_{ij} \geqslant 0$, the KKT conditions of \ref{eq: cp} are equivalent to \ref{eq: lcp}. Since \ref{eq: cp} is strictly convex, its solution is unique.
\QED
\end{proof}
\subsection{Proof of Lemma~\ref{lem:shortcut}} \label{app:lem:shortcut}
\lemshortcut*
\begin{proof}
From the structure of SPG and the flow conservation property at equilibrium, if path $l_{ij}$ has an active arc, then there exists a path from $i$ to $j$ where all arcs are active. To derive a contradiction, suppose $ij$ is a shortcut of path $l_{ij} = (i, v_1, \dots, v_k, j)$; without loss of generality, we may assume that all arcs in the path $l_{ij}$ are active.
Since firms never sell goods at a lower price than the buying price, by Observation~\ref{obs:inc-p},
\begin{align*}
p_{i v_1} \leqslant p_{v_1 v_2} \leqslant \dots \leqslant p_{v_{k-1} v_k} \leqslant p_{v_k j} = p_j.
\end{align*}
Consider the case $p_{i v_1} < p_j$ at equilibrium. By the structure of SPG and the flow conservation property, all the flow from $i$ to $v_1$ eventually goes to firm $j$. If firm $i$ moves the amount $x_{i v_1}$ of flow from arc $i v_1$ to arc $ij$, the total flow through $j$ stays the same, so $p_j$ (a function of $X_j$) remains unchanged. Firm $i$ is then better off by the difference in selling revenue,
\begin{align*}
p_j (x_{ij} + x_{i v_1}) - p_j x_{ij} - p_{i v_1} x_{i v_1} > 0,
\end{align*}
which cannot happen at an equilibrium. Thus, $p_{i v_1} = p_j$ must hold, and
\begin{align*}
p_{iv_1} = p_{v_1 v_2} = \dots = p_{v_{k-1}v_k} = p_{v_k j} = p_j.
\end{align*}
Now consider the optimal decision for $v_k$: if she buys all the goods offered to her and sells them to $j$, her profit is $0$, because $p_{v_{k-1}v_k} = p_j$. However, she would make a positive profit by accepting and offering $j$ a smaller amount of goods, since this would decrease the flow to $j$ and raise the optimal price of $j$ from $p_j$ to $p_j'$ with
\begin{align*}
p_j' > p_j = p_{v_{k-1}v_k},
\end{align*}
which contradicts the flow conservation property at equilibrium. Hence, the path $l_{ij}$ is inactive. \QED
\end{proof}
\section{Proofs in Section \ref{sec: eqp}}
\subsection{Proof of Lemma~\ref{lem: 4.3}} \label{app: lem: 4.3}
\lemucf*
\begin{proof} The proof is done case by case. Suppose $i$ is the seller of a trade:
\begin{itemize}
\item For the SS case, $X_i = X_j = x_{ij}$ and $C_P(i) = C_P(j)$. Consider the utility of $i$, by equation~\ref{eq: SS}:
\begin{align*}
\Pi_i &= (p_j - p_i) x_{ij} \\
&= (b_{i} X_i - b_j X_j) x_{ij} \\
&= (b_{i} - \frac{b_i - \sum_{l \in C_P(i)} b_l}{2}) X_i^2 \\
&= \frac{1}{2}(b_{i} + \sum_{l \in C_P(i)} b_l) X_i^2.
\end{align*}
\item For the SM case, for each $j \in B(i)$, we recall that $ij$ is active and $p_{ki} = p_i$ at equilibrium so the derivative in \eqref{sm-dev-0} must be 0:
\[\frac{\partial \Pi_i}{\partial x_{ki}} = a_t - (2c_n(i)+\sum_{l \in C_P(i)}{b_l})X_i - \sum_{l \in C_P(i)}{b_l X_l} - p_i = 0.\]
We recall the price of $j$ in \eqref{sm-pj}:
\[p_j= a_t - c_n(i) X_i - \sum_{l \in C_P(i)}{b_l X_l}.\]
By rearranging and \ref{general-sm}, we have
\begin{align*}
p_j - p_i &= (c_n(i)+\sum_{l \in C_P(i)}b_l)X_i \label{sm-u-cn} \\
&= \frac{1}{2}(b_{i} + \sum_{l \in C_P(i)} b_l)X_i.
\end{align*}
Therefore,
\begin{align*}
\Pi_i &= \sum_{j \in B(i)} (p_j - p_i)x_{ij} \\
&= \sum_{j \in B(i)} \frac{1}{2}(b_{i} + \sum_{l \in C_P(i)} b_l)X_i x_{ij} \\
&= \frac{1}{2}(b_{i} + \sum_{l \in C_P(i)} b_l)X^2_i.
\end{align*}
\item For the MS case, $x_{ij} = X_i$ and $C_P(i) = C_P(j) \sqcup \{j\}$. Consider the utility of $i$:
\begin{align*}
\Pi_i &= (p_j - p_i)x_{ij} \\
&= (b_i X_i + \sum_{l \in C_P(j)} b_l X_l + b_j X_j - b_j X_j - \sum_{l \in C_P(j)} b_l X_l)X_i \\
&= b_i X^2_i.
\end{align*}
By equation~\ref{eq: MS}:
\begin{align*}
b_i &= b_j + \sum_{l \in C_P(j)} b_l \\
&= \sum_{l \in C_P(i)} b_l \\
&= \frac{b_i + \sum_{l \in C_P(i)} b_l}{2}.
\end{align*}
Therefore,
\[\Pi_i = b_i X^2_i = \frac{1}{2}(b_{i} + \sum_{l \in C_P(i)} b_l)X^2_i.\]
\end{itemize}
\QED
\end{proof}
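As a numerical sanity check of the SS and MS identities above (not part of the formal argument), the following Python sketch rebuilds the prices from their affine form and verifies $\Pi_i = \frac{1}{2}(b_i + \sum_{l \in C_P(i)} b_l)X_i^2$. For simplicity it sets $X_l = X_i$ for all $l \in C_P(j)$; this is only a convenient choice, since those terms cancel in $p_j - p_i$ regardless of the $X_l$. All numeric values are arbitrary.

```python
from math import isclose

a_t, X = 10.0, 0.8      # market intercept and common flow (illustrative values)
b_j = 1.5
sum_cp_j = 0.7          # sum of b_l over C_P(j)

# SS case: X_i = X_j = x_ij, C_P(i) = C_P(j), and b_i = 2 b_j + sum_{C_P(j)} b_l (eq. SS).
b_i = 2 * b_j + sum_cp_j
sum_cp_i = sum_cp_j
p_j = a_t - b_j * X - sum_cp_j * X   # simplifying choice X_l = X for l in C_P(j)
p_i = a_t - b_i * X - sum_cp_i * X
profit_ss = (p_j - p_i) * X
assert isclose(profit_ss, 0.5 * (b_i + sum_cp_i) * X ** 2)

# MS case: x_ij = X_i, C_P(i) = C_P(j) + {j}, and b_i = b_j + sum_{C_P(j)} b_l (eq. MS).
b_i = b_j + sum_cp_j
sum_cp_i = sum_cp_j + b_j
p_j = a_t - b_j * X - sum_cp_j * X
p_i = a_t - b_i * X - sum_cp_i * X
profit_ms = (p_j - p_i) * X
assert isclose(profit_ms, 0.5 * (b_i + sum_cp_i) * X ** 2)
```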
\subsection{Proof of Proposition~\ref{cor: 4.1}} \label{app: cor: 4.1}
\proptwousssm*
\begin{proof}
Consider the arc $ij \in E$. For the SS case, $X_i = X_j = x_{ij}$ and $C_P(i) = C_P(j)$. By Lemma~\ref{lem: 4.3} and \ref{eq: SS},
\begin{align*}
\Pi_i &= \frac{1}{2}(b_{i} + \sum_{l \in C_P(i)} b_l)X^2_i \\
&= \frac{1}{2}(2b_{j} + 2\sum_{l \in C_P(j)} b_l)X^2_i \\
&= 2\Pi_j.
\end{align*}
For the SM case, $X_j = x_{ij}$, by \eqref{cnx} and Lemma \ref{lem: 4.3},
\begin{align*}
c_n(i)X^2_i &= c_n(i)X_i \sum_{j \in B(i)}{x_{ij}} \\
&= \sum_{j \in B(i)}{x_{ij}(b_j x_{ij} + \sum_{l \in C_T(i,j)}{b_l X_l})} \\
&\geqslant \sum_{j \in B(i)}{(b_j x^2_{ij} + \sum_{l \in C_T(i,j)}{b_l} x^2_{ij})} \\
&=2\sum_{j \in B(i)}{\Pi_j}.
\end{align*}
By \ref{general-sm} and Lemma \ref{lem: 4.3},
\begin{align*}
\Pi_i &= \frac{1}{2}(2c_n(i)X_i+2\sum_{l \in C_P(i)}b_lX_i)X_i \\
&\geqslant c_n(i)X^2_i\\
&\geqslant 2\sum_{j \in B(i)}{\Pi_j}.
\end{align*}
\QED
\end{proof}
\subsection{Proof of Proposition~\ref{thm: 4.4}} \label{app: thm: 4.4}
\proptwou*
\begin{proof}
Without loss of generality, we consider the closest dominating parent $i$ of $j$. Suppose $j$ is the buyer of a trade.
In the SS or SM case, the closest dominating parent of $j$ is $i \in S(j)$ so $ij \in E$. The claim follows by Proposition~\ref{cor: 4.1}.
Suppose that along the path from $i$ to $j$, $j$ ends up being the single buyer in an MS case. Then by \ref{general-sm} and Lemma \ref{lem: 4.3},
\begin{align*}
\Pi_i &= \frac{1}{2}(b_i+\sum_{l \in C_P(i)}b_l)X^2_i \\
&= \frac{1}{2}(2c_n(i)+2\sum_{l \in C_P(i)}b_l)X^2_i \\
&\geqslant (b_j +\sum_{l \in C_P(j)}b_l)X^2_i \\
&\geqslant (b_j+\sum_{l \in C_P(j)}b_l)X^2_j \\
&= 2\Pi_j
\end{align*}
where the first inequality holds since $c_n(i) \geqslant b_j$, which can be proved by induction using equations \eqref{app: eq: agg}, \eqref{app: eq: agg-n}, \eqref{app: eq: aggcoeff}, and \eqref{app: eq: vercoeff}, together with the fact that $i$ is the closest dominating parent of $j$, which implies $C_P(j) \subseteq C_P(i)$; the second inequality holds since $X_j \leqslant X_i$. \QED
\end{proof}
\subsection{Proof of Lemma~\ref{lem: 4.1}} \label{app: lem: 4.1}
\lemcp*
\begin{proof}
By Theorem~\ref{lem:pq-relation}, $p_s = a_t - b_s X_s$. Calculating the price functions from the sink, one can show by induction, using \ref{eq: SS}, \ref{eq: MS}, and \ref{general-sm}, that every $b_i$ with $i \in V \cup \{s\}$ scales proportionally to $b_t$, so $\lambda(Y)$ is a constant.
It remains to show that $b_s \geqslant 2b_t$, which we do by induction. For $ij \in E$: in the SS case, by \ref{eq: SS}, $b_i \geqslant 2 b_j$; in the MS case, by \ref{eq: MS}, $b_i \geqslant 2 b_j$; in the SM case, $b_i \geqslant 2 c_n(i) \geqslant 2 b_{h_n}$ by induction and equations \eqref{app: eq: agg}, \eqref{app: eq: agg-n}, \eqref{app: eq: aggcoeff}, \eqref{app: eq: vercoeff}, and \ref{general-sm}, where $h_n$ is the farthest node from $i$ in $C_S(i)$. Combining these cases, $b_s \geqslant 2b_t$.
\QED
\end{proof}
\subsection{Proof of Proposition~\ref{prop: 4.1}} \label{app: prop: 4.1}
\propinc*
\begin{proof}
It follows that $X_s = \frac{a_t - a_s}{b_s}$, so increasing the demand at the market ($a_t$) or decreasing the cost at the source ($a_s$) makes the flow value larger. Since $b_t$ is unchanged, by \eqref{eq: sw1}, the coefficients of the quadratic terms ($X_i^2$ and $X_t^2$) do not change either. By Lemma~\ref{lem: 3.3}, the flow is distributed proportionally to the convex coefficients in the SM case (for the SS and MS cases, the flow is simply the sum of the flow from upstream), so the flow on every arc increases proportionally as well. Therefore, the flow and the welfare efficiency both increase, and by Lemma~\ref{lem: 4.3}, the utility of each individual firm increases.
\QED
\end{proof}
\subsection{Proof of Lemma~\ref{lem: 4.z}} \label{app: lem: 4.z}
\lempc*
\begin{proof}
The parallel composition $P(Y,Z)$ creates an SM case at the source. Therefore, it suffices to find the convex coefficients solving \ref{ls-sm}. Let $\alpha^Y_j$ (respectively $\alpha^Z_j$) be the convex coefficient for $s_Y j \in E(Y)$ (respectively $s_Z j \in E(Z)$) where $j \in B(s_Y)$ (respectively $B(s_Z)$). If $s_Y$ (respectively $s_Z$) is the buyer of an SS case, then $\alpha^Y_j=1$ (respectively $\alpha^Z_j=1$). The convex coefficients are $\frac{\lambda(Z)-2}{\lambda(Y)+\lambda(Z)-4}\alpha^Y_j$ for each $j \in B(s_Y)$ and $\frac{\lambda(Y)-2}{\lambda(Y)+\lambda(Z)-4}\alpha^Z_j$ for each $j \in B(s_Z)$, so that the coefficients over all of $B(s_Y) \cup B(s_Z)$ sum to one. Suppose $b_{t_{P(Y,Z)}}=b_{t_Y}=b_{t_Z}$, $b_{s_Y}=\lambda(Y)b_{t_Y}$, and $b_{s_Z}=\lambda(Z)b_{t_Z}$, then by \eqref{app: eq: agg-n} and \ref{general-sm},
\[b_{s_{P(Y,Z)}} = (\frac{(\lambda(Y)-2)(\lambda(Z)-2)}{\lambda(Y)+\lambda(Z)-4} + 2 )b_{t_{P(Y,Z)}}.\]
\QED
\end{proof}
\section{Proofs in Section \ref{sec: ext}}
\subsection{Proof Sketch of Proposition~\ref{prop: 4.z}} \label{app: prop: 4.z}
\propsmspg*
\noindent{\bf Proof Sketch.} Without loss of generality, we consider shortcut-free SMSPG. The derivation of the equilibrium price is similar to the proof of Theorem~\ref{thm: 3.1}. We inductively start from the sink markets in $T$, and the price function at each firm is affine decreasing until an SM case is reached. For the SM case, the buyer does not have control over the buying cost and inflow at equilibrium, so a convex quadratic program \ref{eq: dist} can be derived. By considering the Lagrangian function, the linear system \ref{ls-sm} can be formulated. The solution of \ref{ls-sm} distributes the flow proportionally to the convex coefficients, each of which is positive, so each trade is active. The flow distribution according to the convex coefficients gives a closed-form expression of the price offered to the buyer. This procedure goes on inductively until the source is reached. When the source is reached, we compute the total flow value of the network and use Algorithm~\ref{alg: 2} to compute the equilibrium flow. The uniqueness of the equilibrium follows by Lemma~\ref{lem: 3.2}: the problem of distributing the flow for an SM case buyer can be described as an \ref{eq: lcp}, which has an equivalent \ref{eq: cp} whose solution is unique.
We note that when all markets have the same demand, it suffices to focus on calculating $b_i$ for each $i \in V \cup \{s\}$; the equilibrium calculation is more involved otherwise.
\subsection{Proof of Remark~\ref{rm1}} \label{app: rm1}
We consider the following supply chain network:
\begin{center}
\begin{tikzpicture}
\node[vertex] (b) at (0,0) {$s$};
\node at (-1, 0) {$p_s=a_s$};
\node[vertex] (a) at (2,0) {$v$};
\node[vertex] (t1) at (4,.5) {$1$};
\node[vertex] (t2) at (4,-.5) {$2$};
\node at (5.7, .5) {$p_1 = a_1 - b_1 x_1$};
\node at (5.7, -.5) {$p_2 = a_2 - b_2 x_2$};
\path[->]
(b) edge node [above] {$X_v$} (a)
(a) edge node [above] {$x_1$} (t1)
(a) edge node [below] {$x_2$} (t2);
\end{tikzpicture}
\end{center}
For simplicity, we denote the first market price as $p_1$ and the second market price as $p_2$. The production cost is a constant $a_s$. Let the inflow of market 1 be $x_1$ and the inflow of market 2 be $x_2$. Suppose the two price functions at the markets are:
\begin{align*}
p_1 = a_1 - b_1 x_1, \\
p_2 = a_2 - b_2 x_2,
\end{align*}
where $a_1 \geqslant a_2 \geqslant a_s$.
Throughout the proof, we add a superscript $h$ for variables under the high price strategy and $l$ for the low price strategy.
\begin{itemize}
\item The low price strategy always gives a higher flow value than the high price strategy.
\begin{proof}
With the high price strategy, $x^h_2 = 0$, so it is equivalent to regard the entire supply chain as a line graph from $s$ to $v$ and then from $v$ to market $1$. Therefore, $p^h_v = a_1 - 2b_1 X^h_v$. The optimal flow $X^h_v$ under the high price strategy is
\begin{align}
a_s = a_1 - 4 b_1 X^h_v \implies X^h_v = \frac{a_1 - a_s}{4 b_1}. \label{Xhv}
\end{align}
Under the low price strategy, the utility of $v$ is
\[\Pi_v = (a_1 - b_1 x_1) x_1 + (a_2 - b_2 x_2)x_2 - p^l_v(x_1 + x_2).\]
Since $x_1 > 0$ and $x_2 > 0$, we have $\frac{\partial \Pi_v}{\partial x_1} = 0$ and $\frac{\partial \Pi_v}{\partial x_2} = 0$:
\begin{equation} \label{app: flow_rel}
\begin{cases}
a_1 - 2b_1x_1 - p^l_v = 0,\\
a_2 - 2b_2x_2 - p^l_v = 0,\\
\end{cases}
\implies
a_1 b_2 + a_2 b_1 - 2b_1 b_2 (x_1+x_2) - (b_1+b_2)p^l_v = 0.
\end{equation}
The optimal flow $X^l_v$ under the low price strategy is
\begin{align}
p^l_v &= (a_1/b_1 + a_2/b_2)B - 2B X^l_v, \nonumber \\
p_s &= a_s = (a_1/b_1 + a_2/b_2)B - 4B X^l_v, \nonumber \\
X^l_v &= \frac{(a_1/b_1 + a_2/b_2)B - a_s}{4 B}, \label{app: flow_val}
\end{align}
where $B = \frac{1}{\frac{1}{b_1} + \frac{1}{b_2}}$.
Then we have the difference of total flow between these two strategies:
\begin{align}
X^l_v - X^h_v &= \frac{(a_1/b_1 + a_2/b_2)B - a_s}{4 B} - \frac{a_1 - a_s}{4 b_1} \nonumber \\
&= \frac{a_2}{4 b_2} - \frac{a_s}{4 B} + \frac{a_s}{4 b_1} \nonumber \\
&= \frac{a_2}{4 b_2} - \frac{a_s}{4 b_1} - \frac{a_s}{4 b_2} + \frac{a_s}{4 b_1} \nonumber \\
&= \frac{a_2}{4 b_2} - \frac{a_s}{4 b_2} \nonumber \\
&\geqslant 0. \label{flow_diff}
\end{align}
\QED
\end{proof}
\item When the demand difference between two markets is small enough, the low price strategy gives better utility for the source. When the difference is large enough, the high price strategy gives better utility for the source.
\begin{proof}
Let $\Pi_v$ be the utility of firm $v$, and $\Pi_s$ be the utility of firm $s$. Under the high price strategy:
\begin{align}
\Pi^h_v &= b_1{X^h_v}^2, \label{Pihv}\\
\Pi^h_s &= 2b_1{X^h_v}^2 = \frac{(a_1 - a_s)^2}{8b_1}. \label{Pihs}
\end{align}
To get the social welfare under the low price strategy, from \eqref{app: flow_rel} and \eqref{app: flow_val}:
\begin{align*}
& p^l_v = a_1 - 2b_1 x_1 = a_2 - 2b_2 x_2, \\
& x_1 + x_2 = X^l_v,
\end{align*}
which gives
\begin{align*}
& x_1 = \frac{a_1 - a_2 + 2b_2 X^l_v}{2b_1 + 2b_2}, \\
& x_2 = \frac{ 2b_1 X^l_v - a_1 + a_2}{2b_1 + 2b_2},
\end{align*}
where
\[X^l_v = \frac{(a_1/b_1 + a_2/b_2)B - a_s}{4 B} \text{ and } B = \frac{1}{\frac{1}{b_1} + \frac{1}{b_2}}.\]
We have
\begin{align}
\Pi^l_s &= (p^l_v - a_s)X^l_v = 2B{X^l_v}^2 \label{Pils} \\
&= 2B [\frac{(a_1/b_1 + a_2/b_2)B - a_s}{4 B}]^2 = \frac{[(a_1/b_1 + a_2/b_2)B - a_s]^2}{8 B}. \nonumber
\end{align}
By taking the ratio between \eqref{Pihs} and \eqref{Pils},
\begin{align*}
\frac{\Pi^h_s}{\Pi^l_s} = \frac{b_1 {X^h_v}^2}{B{X^l_v}^2} = \frac{b_1 + b_2}{b_2}\frac{{X^h_v}^2}{(X^h_v+\Delta)^2}
\end{align*}
where $\Delta = \frac{a_2-a_s}{4 b_2}$ does not depend on $a_1$, by \eqref{flow_diff}.
$s$ has no preference between the high price and the low price strategy when
\begin{align*}
&\quad \quad \frac{X^h_v}{X^h_v+\Delta} = \sqrt{\frac{b_2}{b_1+b_2}} \\
&\implies \sqrt{b_1+b_2}\Big(\frac{a_1 - a_s}{4 b_1}\Big) = \sqrt{b_2}\Big(\frac{a_1 - a_s}{4 b_1}+\Delta\Big) \\
&\implies \frac{\sqrt{b_1+b_2} - \sqrt{b_2}}{4b_1}a_1 = \frac{\sqrt{b_1+b_2} - \sqrt{b_2}}{4b_1}a_s + \sqrt{b_2}\Delta \\
&\implies a_1 = a_s + \frac{4b_1\sqrt{b_2}\,\Delta}{\sqrt{b_1+b_2} - \sqrt{b_2}}. \\
\end{align*}
Since $X^h_v$ increases linearly in $a_1$, when $a_1$ is below this value the ratio is smaller than 1 and $s$ prefers the low price strategy, and when $a_1$ is above this value the ratio is greater than 1 and $s$ prefers the high price strategy.
\QED
\end{proof}
\item The low price strategy always produces weakly higher social welfare.
\begin{proof}
Let $CS$ be the consumer surplus, and $SW$ be the social welfare. Under the high price strategy, by \eqref{Xhv}, \eqref{Pihv}, and \eqref{Pihs}:
\begin{align*}
CS^h &= \frac{1}{2}b_1{X^h_v}^2, \\
SW^h &= CS^h + \Pi^h_v + \Pi^h_s = \frac{7}{2} b_1 {X^h_v}^2 = \frac{7}{2} b_1 (\frac{a_1 - a_s}{4b_1})^2 = \frac{7(a_1 - a_s)^2}{32b_1}.
\end{align*}
Under the low price strategy:
\begin{align*}
CS^l &= \frac{1}{2}b_1x_1^2 + \frac{1}{2}b_2x_2^2, \\
\Pi^l_v &= x_1(p^l_1-p^l_v) + x_2(p^l_2-p^l_v) = b_1x_1^2 + b_2x_2^2, \\
SW^l &= CS^l + \Pi^l_v + \Pi^l_s \\
&= \frac{3}{2}(b_1 x_1^2 + b_2 x_2^2) + \Pi^l_s\\
&= \frac{3[b_1(a_1 - a_2 + 2b_2 X^l_v)^2 + b_2(2b_1 X^l_v - a_1 + a_2)^2]}{8(b_1 + b_2)^2} + \Pi^l_s\\
&= \frac{3b_1[(a_1-a_2)^2 + 4b_2^2{X^l_v}^2 + 4a_1b_2X^l_v-4a_2b_2X^l_v]}{8(b_1 + b_2)^2} \\
&\quad +\frac{3b_2[(a_1-a_2)^2 + 4b_1^2{X^l_v}^2 - 4a_1b_1X^l_v+4a_2b_1X^l_v]}{8(b_1 + b_2)^2} + \Pi^l_s \\
&= \frac{3[(b_1+b_2)(a_1-a_2)^2+4b_1b_2(b_1+b_2){X^l_v}^2]}{8(b_1 + b_2)^2} + \Pi^l_s \\
&= \frac{3(a_1-a_2)^2}{8(b_1+b_2)} + \frac{3B{X^l_v}^2}{2} + 2B{X^l_v}^2 \\
&= \frac{3(a_1-a_2)^2}{8(b_1+b_2)} + \frac{7B{X^l_v}^2}{2}.
\end{align*}
By taking the difference and \eqref{Pils},
\begin{align*}
SW^l - SW^h &= \frac{3(a_1-a_2)^2}{8(b_1+b_2)} + \frac{7}{2}[\frac{b_1 b_2}{b_1 + b_2}(X^h_v+\Delta)^2 - b_1{X^h_v}^2] \\
&= \frac{3(a_1-a_2)^2}{8(b_1+b_2)} + \frac{7}{2}[\frac{-b_1^2}{b_1 + b_2}{X^h_v}^2+\frac{b_1(a_2-a_s)X^h_v}{2(b_1+b_2)}+\frac{b_1(a_2-a_s)^2}{16b_2(b_1+b_2)}] \\
&= \frac{12b_2(a_1-a_2)^2-7b_2(a_1-a_s)^2+14b_2(a_2-a_s)(a_1-a_s)+7b_1(a_2-a_s)^2}{32b_2(b_1+b_2)} \\
&= \frac{12b_2(a_1-a_2)^2-7b_2[(a_1-a_s)-(a_2-a_s)]^2+7(b_1+b_2)(a_2-a_s)^2}{32b_2(b_1+b_2)} \\
&= \frac{5b_2(a_1-a_2)^2+7(b_1+b_2)(a_2-a_s)^2}{32b_2(b_1+b_2)} \\
&\geqslant 0
\end{align*}
where $\Delta = \frac{a_2-a_s}{4 b_2}$. \QED
\end{proof}
\end{itemize}
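The sign of the welfare difference above can be spot-checked with exact rational arithmetic. The following sketch is ours; the parameter values $a_1=10$, $a_2=8$, $a_s=2$, $b_1=1$, $b_2=2$ are illustrative choices satisfying $a_1>a_2>a_s$, and we recompute $SW^l-SW^h$ both from the definitions and from the closed form derived above:

```python
from fractions import Fraction as F

# Illustrative parameters (our assumption) with a_1 > a_2 > a_s
a1, a2, a_s, b1, b2 = F(10), F(8), F(2), F(1), F(2)
B = 1 / (1 / b1 + 1 / b2)                        # harmonic aggregate of b_1, b_2
Xh = (a1 - a_s) / (4 * b1)                       # X_v^h
Xl = ((a1 / b1 + a2 / b2) * B - a_s) / (4 * B)   # X_v^l
SWh = F(7, 2) * b1 * Xh ** 2                     # SW under the high price strategy
SWl = 3 * (a1 - a2) ** 2 / (8 * (b1 + b2)) + F(7, 2) * B * Xl ** 2
# Closed form of SW^l - SW^h from the proof
diff_closed = (5 * b2 * (a1 - a2) ** 2 + 7 * (b1 + b2) * (a2 - a_s) ** 2) \
    / (32 * b2 * (b1 + b2))
print(SWl - SWh, diff_closed)  # 199/48 199/48
```

The two values agree and are positive, as the proposition predicts.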
\section{Examples}
\begin{example}[Merging Child Nodes] \label{app:ex:mc}
\mbox{}\\
\begin{center}
\begin{tikzpicture}
\node[vertex] (s) at (-2,0) {$s$};
\node[vertex] (a) at (0,0) {$a$};
\node[vertex] (b) at (2,1.5) {$b$};
\node[vertex] (c) at (2,0.5) {$c$};
\node[vertex] (d) at (1.3,-0.5) {$d$};
\node[vertex] (e) at (2,-1.5) {$e$};
\node[vertex] (f) at (2.7,-.5) {$f$};
\node[vertex] (g) at (4,0) {$g$};
\node[vertex] (h) at (6,0) {$h$};
\node[vertex] (i) at (7.2,.8) {$i$};
\node[vertex] (j) at (7.2,-.8) {$j$};
\node[vertex] (t) at (8.4,0) {$t$};
\path[->]
(s) edge (a)
(s) edge (e)
(a) edge (b)
(a) edge (c)
(a) edge (d)
(c) edge (g)
(d) edge (f)
(f) edge (g)
(b) edge (h)
(e) edge (h)
(g) edge (h)
(h) edge (i)
(h) edge (j)
(i) edge (t)
(j) edge (t);
\end{tikzpicture}
\end{center}
In this graph, for node $a$, $C_S(a) = \{g, h\}$, because $\{g, h\} \subseteq C(a)$ and there are multiple disjoint paths from $a$ to $g$ and $h$, while $t \notin C_S(a)$ because all the paths from $a$ to $t$ must go through the common node $h$; $C_P(a) = \{h\}$ because $h \in C(a)$, $s \in P(a)$, and there are multiple disjoint paths from $s$ to $h$; $C_T(a,b) = \emptyset$, while $C_T(a,c) = \{g\}$.
For node $c$, $C_P(c) = \{g, h\}$, while $C_S(c) = \emptyset$; for node $g$, $C_P(g) = \{h\}$, while $C_S(g) = \emptyset$.
By Observation~\ref{obs:mc}, since $a$ and $c$ satisfy the SM relation, $C_P(c) = \{g, h\} = C_P(a) \sqcup C_T(a,c)$. $c$ and $g$ satisfy the MS relation, so $C_P(c) = \{g, h\} = C_P(g) \sqcup \{g\}$.
\end{example}
\begin{example}[Price Function Computation by Algorithm~\ref{alg: 1}] \label{app: ex: price_compute_alg1}
\mbox{}\\
Consider the following network.
\begin{center}
\begin{tikzpicture}[baseline=0]
\node at (-1, 0) {$p_{s} = 1$};
\node[vertex] (s) at (0,0) {$s$};
\node[vertex] (a) at (2,1.5) {$j_1$};
\node[vertex] (a1) at (1.5,0) {$j_2$};
\node[vertex] (b) at (3,0.5) {$v_1$};
\node[vertex] (c) at (3,-0.5) {$v_2$};
\node[vertex] (d) at (4.5,0) {$l$};
\node[vertex] (f) at (4,1.5) {$k$};
\node[vertex] (t) at (6,0) {$t$};
\node at (7.5, 0) {$p_{t} = 2 - X_t$};
\path[->]
(s) edge (a)
(s) edge (a1)
(a1) edge (b)
(a1) edge (c)
(a) edge (f)
(b) edge (d)
(c) edge (d)
(f) edge (t)
(d) edge (t)
;
\end{tikzpicture}
\end{center}
We recall the equations in Algorithm~\ref{alg: 1}:\\
\textbf{\ref{eq: SS}:}
\begin{align*}
b_i = 2 b_j + \sum_{l \in C_P(j)} b_l.
\end{align*}
\textbf{\ref{eq: MS}:}
\begin{align*}
b_i = b_j + \sum_{l \in C_P(j)} b_l.
\end{align*}
\textbf{\ref{eq: SM}:}
\begin{align*}
&b_i = \frac{2}{\sum_{j \in B(i)} \frac{1}{b_{j}}} + 2 \sum_{l \in C_S(i) \setminus C_P(i)} b_l + \sum_{l \in C_P(i)} b_l, \\
&\alpha_j = \frac{\frac{1}{b_{j}}}{\sum_{j' \in B(i)} \frac{1}{b_{j'}}} \text{ for each $j \in B(i)$}.
\end{align*}
By Algorithm~\ref{alg: 1}, for the MS case from $k$ and $l$ to $t$:
\begin{align*}
p_k = 2 - X_k - X_t, \\
p_l = 2 - X_l - X_t.
\end{align*}
For the MS case from $v_1$ and $v_2$ to $l$:
\begin{align*}
p_{v_1} = 2 - 2X_{v_1} - X_l - X_t, \\
p_{v_2} = 2 - 2X_{v_2} - X_l - X_t.
\end{align*}
For the SS case from $j_1$ to $k$:
\begin{align*}
p_{j_1} = 2 - 3X_{j_1} - X_t.
\end{align*}
For the SM case from $j_2$ to $v_1$ and $v_2$:
\begin{align*}
p_{j_2} &= 2 - (\frac{2}{\frac{1}{2}+\frac{1}{2}} + 2 + 1)X_{j_2} - X_t \\
&= 2 - 5X_{j_2} - X_t, \\
\alpha_{v_1} &= \alpha_{v_2} = \frac{\frac{1}{2}}{\frac{1}{2}+\frac{1}{2}} = \frac{1}{2}.
\end{align*}
For the SM case from $s$ to $j_1$ and $j_2$:
\begin{align*}
p_{s} &= 2 - (\frac{2}{\frac{1}{3}+\frac{1}{5}} + 2)X_t \\
&= 2 - \frac{23}{4} X_t, \\
X_s &= \frac{4}{23}(2 - 1) = \frac{4}{23}, \\
\alpha_{j_1} &= \frac{\frac{1}{3}}{\frac{1}{3}+\frac{1}{5}} = \frac{5}{8}, \\
\alpha_{j_2} &= \frac{\frac{1}{5}}{\frac{1}{3}+\frac{1}{5}} = \frac{3}{8}.
\end{align*}
\end{example}
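The backward pass above can be replayed with exact rational arithmetic. In the sketch below (ours), the name $b_i$ denotes the slope coefficient of $X_i$ in the price function $p_i$, as in the example:

```python
from fractions import Fraction as F

# Slope coefficients b_i in p_i = 2 - b_i X_i - ... from the backward pass
b_t = F(1)
b_k = b_l = F(1)                                   # MS into t: p = 2 - X - X_t
b_v1 = b_v2 = F(2)                                 # MS into l: p = 2 - 2X - X_l - X_t
b_j1 = 2 * b_k + b_t                               # SS case: 2*1 + 1 = 3
b_j2 = 2 / (1 / b_v1 + 1 / b_v2) + 2 * b_l + b_t   # SM case: 2 + 2 + 1 = 5
b_s = 2 / (1 / b_j1 + 1 / b_j2) + 2 * b_t          # SM case at s: 15/4 + 2 = 23/4
X_s = (2 - 1) / b_s                                # solve p_s = 2 - b_s X_s = 1
a_j1 = (1 / b_j1) / (1 / b_j1 + 1 / b_j2)          # alpha_{j_1}
print(b_j1, b_j2, b_s, X_s, a_j1)                  # 3 5 23/4 4/23 5/8
```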
\begin{example}[Price Function Computation for General SM] \label{app: ex: price_compute}
\mbox{}\\
Consider the following network.
\begin{center}
\begin{tikzpicture}[baseline=0]
\node at (-1, 0) {$p_{s} = 1$};
\node[vertex] (s) at (0,0) {$s$};
\node[vertex] (a) at (2,1.5) {$j_1$};
\node[vertex] (b) at (2,0.5) {$j_2$};
\node[vertex] (c) at (2,-0.5) {$j_3$};
\node[vertex] (d) at (4,0) {$l$};
\node[vertex] (f) at (4,1.5) {$k$};
\node[vertex] (t) at (6,0) {$t$};
\node at (7.5, 0) {$p_{t} = 2 - X_t$};
\path[->]
(s) edge (a)
(s) edge (b)
(s) edge (c)
(a) edge (f)
(b) edge (d)
(c) edge (d)
(f) edge (t)
(d) edge (t)
;
\end{tikzpicture}
\end{center}
From the equations in \ref{app:lem:pq-relation}:\\
\textbf{SS:}
\[b_i=2 b_j + \sum_{l \in C_P(j)} b_l.\]
\textbf{MS:}
\[b_i=b_j + \sum_{l \in C_P(j)} b_l.\]
\textbf{General SM:}
\[b_i=2c_n(i)+\sum_{l \in C_P(i)}{b_l} \text{ and } \alpha_j = \beta_{j}\prod_{p \mid h_p \in C_T(i,j) \setminus \{h_n\}}{\beta_p(i)}\]
where $c_n(i)$, $\beta_p(i)$, and $\beta_j$ are defined by Equations~\ref{app: eq: agg}, \ref{app: eq: agg-n}, \ref{app: eq: aggcoeff}, and \ref{app: eq: vercoeff}.
By the backward Algorithm~\ref{alg: 1}, for the MS case from $k$ and $l$ to $t$:
\begin{align*}
p_k = 2 - X_k - X_t, \\
p_l = 2 - X_l - X_t.
\end{align*}
For the MS case from $j_2$ and $j_3$ to $l$:
\begin{align*}
p_{j_2} = 2 - 2X_{j_2} - X_l - X_t, \\
p_{j_3} = 2 - 2X_{j_3} - X_l - X_t.
\end{align*}
For the SS case from $j_1$ to $k$:
\begin{align*}
p_{j_1} = 2 - 3X_{j_1} - X_t.
\end{align*}
What remains is the general SM case from $s$ to $j_1$, $j_2$, and $j_3$. Following the notation in \ref{para:sm}, $C_S(s) = \{h_1, h_2\}$ where $h_1 = l$ and $h_2 = t$. $B_2(s) = \{j_1\}$, $B_1(s) = \{j_2, j_3\}$, $P_2(s) = \{h_1\}$, and $P_1(s) = \emptyset$. We start with the aggregate variable $c_1(s)$ since $P_1(s) = \emptyset$.
\[c_1(s) = \frac{1}{\frac{1}{b_{j_2}}+\frac{1}{b_{j_3}}} + b_l = \frac{1}{\frac{1}{2}+\frac{1}{2}} + 1 = 2,\]
\[b_s = \frac{2}{\frac{1}{b_{j_1}} + \frac{1}{c_1(s)}} + 2 b_t = \frac{2}{\frac{1}{3} + \frac{1}{2}} + 2 = \frac{22}{5}.\]
Therefore, $p_s = 1 - \frac{22}{5}X_s = 0$ and $X_s = \frac{5}{22}$. For the convex coefficients,
\[\beta_{j_2} = \beta_{j_3} = \frac{\frac{1}{2}}{\frac{1}{2}+\frac{1}{2}} = \frac{1}{2},\]
\[\alpha_{j_1} = \beta_{j_1} = \frac{\frac{1}{3}}{\frac{1}{3} + \frac{1}{2}} = \frac{2}{5},\]
\[\beta_1(s) = \frac{\frac{1}{2}}{\frac{1}{3} + \frac{1}{2}} = \frac{3}{5},\]
\[\alpha_{j_2} = \alpha_{j_3} = \frac{1}{2} \beta_1(s) = \frac{3}{10}.\]
\end{example}
\begin{example}[Price and Flow Computation by Algorithm~\ref{alg: 2}] \label{app: ex: flow_compute_alg2}
\mbox{}\\
We use the same SPG as in Example~\ref{app: ex: price_compute_alg1}.
\begin{center}
\begin{tikzpicture}[baseline=0]
\node at (-1, 0) {$p_{s} = 1$};
\node[vertex] (s) at (0,0) {$s$};
\node[vertex] (a) at (2,1.5) {$j_1$};
\node[vertex] (a1) at (1.5,0) {$j_2$};
\node[vertex] (b) at (3,0.5) {$v_1$};
\node[vertex] (c) at (3,-0.5) {$v_2$};
\node[vertex] (d) at (4.5,0) {$l$};
\node[vertex] (f) at (4,1.5) {$k$};
\node[vertex] (t) at (6,0) {$t$};
\node at (7.5, 0) {$p_{t} = 2 - X_t$};
\path[->]
(s) edge (a)
(s) edge (a1)
(a1) edge (b)
(a1) edge (c)
(a) edge (f)
(b) edge (d)
(c) edge (d)
(f) edge (t)
(d) edge (t)
;
\end{tikzpicture}
\end{center}
From Example~\ref{app: ex: price_compute_alg1}, $X_s = \frac{4}{23}$, $\alpha_{j_1} = \frac{5}{8}$, $\alpha_{j_2} = \frac{3}{8}$, and $\alpha_{v_1} = \alpha_{v_2} = \frac{1}{2}$. By Lemma~\ref{lem: 3.3},
\begin{align*}
x_{sj_1} &= \frac{4}{23} \times \frac{5}{8} = \frac{5}{46}, \\
x_{sj_2} &= \frac{4}{23} \times \frac{3}{8} = \frac{3}{46}, \\
\end{align*}
\begin{align*}
p_{j_1} &= 2 - 3X_{j_1} - X_t = 2 - 3 \times \frac{5}{46} - \frac{4}{23} = \frac{3}{2}, \\
p_{j_2} &= 2 - 5X_{j_2} - X_t = 2 - 5 \times \frac{3}{46} - \frac{4}{23} = \frac{3}{2},
\end{align*}
\[x_{j_2v_1} = x_{j_2v_2} = \frac{x_{sj_2}}{2} = \frac{3}{92},\]
\[p_{v_1} = p_{v_2} = 2 - 2X_{v_1} - X_l - X_t = 2 - 2 \times \frac{3}{92} - \frac{3}{46} - \frac{4}{23} = \frac{39}{23}.\]
We can continue the price calculation in topological order. Since no multiple-buyer case occurs further downstream, the flow to each downstream node is simply the sum of the inflows from upstream.
\end{example}
\begin{example}[SPG with a Shortcut] \label{app:ex:shortcut}
\mbox{}\\
Consider the following network where $st$ is a shortcut of path $(s,v,t)$.
\begin{center}
\begin{tikzpicture}
\node[vertex] (s) at (0,0) {$s$};
\node at (-1, 0) {$p_s=1$};
\node[vertex] (v) at (2,1) {$v$};
\node[vertex] (t) at (4,0) {$t$};
\node at (6.5, 0) {$p_{t} = 3 - X_t = 3 - x - y$};
\path[->]
(s) edge node [above] {$x$} (v)
(v) edge node [above] {$x$} (t)
(s) edge node [below] {$y$} (t);
\end{tikzpicture}
\end{center}
At equilibrium, suppose $s$ offers $v$ price $p_{sv}$, and let $x=x_{sv}=x_{vt}$ and $y=x_{st}$. The utility of $v$ is
\begin{align*}
\Pi_v = (3 - x - y) x - p_{sv} x.
\end{align*}
Take the derivative of $\Pi_v$ with respect to $x$, since $\Pi_v$ is concave, the best $p_{sv}$ satisfies:
\begin{align*}
\frac{\partial \Pi_v}{\partial x} = 3-2x-y - p_{sv} = 0 \Rightarrow p_{sv} = 3 - 2x - y.
\end{align*}
The utility of $s$ is
\begin{align*}
\Pi_s &= p_{sv} x + p_t y - p_s(x+y) \\
&= (3-2x-y)x + (3-x-y)y - (x+y) \\
&= 2x + 2y - 2x^2 - y^2 - 2xy.
\end{align*}
Take the derivative of $\Pi_s$ with respect to $x$ and $y$, since $\Pi_s$ is concave, we have:
\begin{align*}
\frac{\partial \Pi_s}{\partial x} &= 2-4x-2y = 0, \\
\frac{\partial \Pi_s}{\partial y} &= 2-2x-2y = 0.
\end{align*}
The equilibrium solution is $x=0$ and $y=1$, so the edges $sv$ and $vt$ are inactive.
\end{example}
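The first-order conditions above can be verified symbolically; a small sketch of ours:

```python
from sympy import symbols, diff, solve

x, y = symbols('x y')
# Utility of s after substituting the best-response price p_sv = 3 - 2x - y
Pi_s = (3 - 2*x - y)*x + (3 - x - y)*y - (x + y)
# Solve the first-order conditions of the concave utility
sol = solve([diff(Pi_s, x), diff(Pi_s, y)], [x, y], dict=True)[0]
print(sol)  # {x: 0, y: 1}
```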
\begin{example}[Multiple Equilibria in SMSPG] \label{app: ex: me_smspg}
\mbox{}\\
\begin{center}
\begin{tikzpicture}
\node[vertex] (b) at (0,0) {$s$};
\node at (-1, 0) {$p_s = a_s$};
\node[vertex] (a) at (2,0) {$a$};
\node[vertex] (t1) at (4,.5) {$t_1$};
\node[vertex] (t2) at (4,-.5) {$t_2$};
\node at (5.6, .5) {$p_{t_1} = a_{t_1} - x$};
\node at (5.6, -.5) {$p_{t_2} = a_{t_2} - y$};
\path[->]
(b) edge node [above] {$X_a$} (a)
(a) edge node [above] {$x$} (t1)
(a) edge node [below] {$y$} (t2);
\end{tikzpicture}
\end{center}
Suppose $a_{t_1} > a_{t_2} > a_s$. Under the high price strategy, $s$ sets its price so that $x>0$ and $y=0$; node $a$ maximizes its utility $\Pi_a=(a_{t_1}-x-p_a)x$, so
\[\frac{\partial \Pi_a}{\partial x} = a_{t_1} - 2x - p_a = 0 \implies p_a = a_{t_1} - 2x.\]
$s$ tries to maximize its utility $\Pi_s=(a_{t_1}-2x-a_s)x$ given the above $p_a$, so
\[\frac{\partial \Pi_s}{\partial x} = a_{t_1} - 4x - p_s = 0 \implies p_s = a_{t_1} - 4x = a_s \implies X_a = x = \frac{a_{t_1}-a_s}{4},\]
and the high price utility of $s$ is
\[\Pi_s = (p_a - p_s) X_a = (a_{t_1} - 2X_a - a_s)\frac{a_{t_1}-a_s}{4} = \frac{(a_{t_1}-a_s)^2}{8}.\]
Under the low price strategy, $s$ sets its price so that $x>0$ and $y>0$; node $a$ maximizes its utility $\Pi_a=(a_{t_1}-x-p_a)x + (a_{t_2}-y-p_a)y$, so
\[
\begin{cases}
\frac{\partial \Pi_a}{\partial x} = a_{t_1} - 2x - p_a = 0, \\
\frac{\partial \Pi_a}{\partial y} = a_{t_2} - 2y - p_a = 0,
\end{cases}
\implies
p_a = \frac{a_{t_1}+a_{t_2}}{2} - (x+y).
\]
$s$ tries to maximize its utility $\Pi_s=(\frac{a_{t_1}+a_{t_2}}{2} - (x+y)-a_s)(x+y)$ given the above $p_a$, so
\[
\frac{\partial \Pi_s}{\partial x+y} = \frac{a_{t_1}+a_{t_2}}{2} - 2(x+y) - p_s = 0
\]
which implies
\[p_s = \frac{a_{t_1}+a_{t_2}}{2} - 2(x+y) = a_s \implies X_a = x+y = \frac{a_{t_1}+a_{t_2}-2a_s}{4},\]
and the low price utility of $s$ is
\[\Pi_s = (p_a-a_s) X_a = (\frac{a_{t_1}+a_{t_2}}{2} - X_a - a_s)\frac{a_{t_1}+a_{t_2}-2a_s}{4} = \frac{(a_{t_1}+a_{t_2}-2a_s)^2}{16}.\]
The high price strategy and the low price strategy give the same utility to $s$ when
\[\frac{(a_{t_1}-a_s)^2}{8} = \frac{(a_{t_1}+a_{t_2}-2a_s)^2}{16} \implies a_{t_1} = (1+\sqrt{2})a_{t_2} - \sqrt{2}a_s.\]
For price feasibility for the low price strategy, we must have
\[a_{t_2} \geqslant p_a \implies a_{t_2} \geqslant \frac{a_{t_1}+a_{t_2}}{2} - (x+y) = \frac{a_{t_1}+a_{t_2}+2a_s}{4} \implies 3a_{t_2} \geqslant a_{t_1}+2a_s\]
which is feasible when the high price utility and the low price utility are the same for $s$ since
\begin{align*}
a_{t_1} + 2a_s &= (1+\sqrt{2})a_{t_2} - \sqrt{2}a_s + 2a_s \\
&= (1+\sqrt{2})a_{t_2} + (2-\sqrt{2})a_s \\
&\leqslant (1+\sqrt{2})a_{t_2} + (2-\sqrt{2})a_{t_2} \\
&= 3 a_{t_2}.
\end{align*}
There are multiple equilibria when $a_{t_1} = (1+\sqrt{2})a_{t_2} - \sqrt{2}a_s$, since $s$ can play either the high price or the low price strategy. We note that in the upper-left panel of Figure~\ref{fig: 5.2}, $s$ has no preference when $a_{t_1} = (1+\sqrt{2})a_{t_2} -\sqrt{2} a_s = (1+\sqrt{2})\times 12 - 7\sqrt{2}=12+5\sqrt{2}\approx 19.07$ and, fixing $a_{t_2}=12$, $(12, 3\times12 - 2\times7] = (12, 22]$ is the feasible interval of $a_{t_1}$ for the low price strategy.
\end{example}
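With the parameters of Figure~\ref{fig: 5.2} ($a_s=7$, $a_{t_2}=12$), the indifference point can be checked numerically; a small sketch of ours:

```python
from math import sqrt, isclose

a_s, a_t2 = 7, 12
# Indifference threshold derived above: a_t1 = (1 + sqrt(2)) a_t2 - sqrt(2) a_s
a_t1 = (1 + sqrt(2)) * a_t2 - sqrt(2) * a_s   # = 12 + 5*sqrt(2)
high = (a_t1 - a_s) ** 2 / 8                  # high price utility of s
low = (a_t1 + a_t2 - 2 * a_s) ** 2 / 16       # low price utility of s
print(round(a_t1, 2), isclose(high, low))     # 19.07 True
```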
\begin{example}[SMSPG without an Equilibrium] \label{ex: SMSPG}
\mbox{}\\
\begin{center}
\begin{tikzpicture}
\node[vertex] (s1) at (2,.7) {$s_1$};
\node[vertex] (s2) at (2,-.7) {$s_2$};
\node at (1, .7) {$p_{s_1} = 1$};
\node at (1, -.7) {$p_{s_2} = 1$};
\node[vertex] (c) at (4,0) {$c$};
\node[vertex] (t1) at (6,0) {$t$};
\node at (7.8, 0) {$p_t = 2-x-y$};
\path[->]
(s1) edge node [above] {$x$} (c)
(s2) edge node [below] {$y$} (c)
(c) edge node [above] {$x+y$} (t1);
\end{tikzpicture}
\end{center}
Suppose there is an equilibrium where $s_1$ offers the price $p_{s_1 c}$ to $c$ and $s_2$ offers the price $p_{s_2 c}$ to $c$, then $c$ tries to maximize its utility
\[\Pi_c = (2-x-y)(x+y) - p_{s_1 c} x - p_{s_2 c} y\]
where
\[\frac{\partial \Pi_c}{\partial x} = 2 - 2(x+y) - p_{s_1 c} \quad \text{ and } \quad \frac{\partial \Pi_c}{\partial y} = 2 - 2(x+y) - p_{s_2 c}.\]
The equilibrium is a solution of the following LCP:
\[
\begin{cases}
(2-2x-2y-p_{s_1 c}) x + (2-2x-2y-p_{s_2 c}) y = 0,\\
2-2x-2y-p_{s_1 c} \leqslant 0,\\
2-2x-2y-p_{s_2 c} \leqslant 0,\\
x \geqslant 0, \\
y \geqslant 0.
\end{cases}
\]
Now fix the price $p_{s_1 c} \in (1,2)$. To maximize its utility, the best response of $s_2$ is to set $p_{s_2 c} = p_{s_1 c} - \varepsilon$ so that $c$ buys nothing from $s_1$; $s_1$ has an analogous best response to the strategy of $s_2$. The two sellers thus undercut each other until their selling prices are arbitrarily close to $1$, so no equilibrium exists.
\end{example}
\begin{example}[Equilibrium in General DAGs]\label{app: ex: gen_dag}
\mbox{}\\
\begin{center}
\begin{tikzpicture}[scale=1.5]
\node[vertex] (s) at (0,.5) {$s$};
\node at (-0.8, 0.5) {$p_s = 1$};
\node[vertex] (a) at (2,1) {$a$};
\node[vertex] (b) at (2,0) {$b$};
\node[vertex] (c) at (4,.5) {$c$};
\node[vertex] (d) at (4,-.5) {$d$};
\node[vertex] (t) at (6,0) {$t$};
\node at (7.8, 0) {$p_t = 11 - (x+y+z)$};
\path[->]
(s) edge node [above] {$x$} (a)
(s) edge node [below] {$y+z$} (b)
(a) edge node [above] {$x$} (c)
(b) edge node [above] {$y$} (c)
(b) edge node [below] {$z$} (d)
(c) edge node [above] {$x+y$} (t)
(d) edge node [below] {$z$} (t);
\end{tikzpicture}
\end{center}
We compute the price function from $t$ in this network and consider the following cases:
\begin{itemize}
\item $x>0$ and $y+z=0$:
In this case, it is equivalent to consider the line network:
\begin{center}
\begin{tikzpicture}[scale=1.5]
\node[vertex] (s) at (0,0) {$s$};
\node at (-0.8, 0) {$p_s = 1$};
\node[vertex] (b) at (2,0) {$a$};
\node[vertex] (d) at (4,0) {$c$};
\node[vertex] (t) at (6,0) {$t$};
\node at (7.2, 0) {$p_t = 11 - x$};
\path[->]
(s) edge node [above] {$x$} (b)
(b) edge node [above] {$x$} (d)
(d) edge node [above] {$x$} (t);
\end{tikzpicture}
\end{center}
We have
\[p_c = 11-2x, \quad p_a = 11-4x, \text{ and}\]
\[p_s = 11-8x = 1 \implies x=\frac{5}{4}.\]
The utility of $s$ is
\[(p_a - p_s)x = 4x^2 = \frac{25}{4}.\]
\item $x=0$ and $y+z>0$:
In this case, the best strategy for $s$ is to make $y>0$ and $z>0$, so it is equivalent to consider the following network:
\begin{center}
\begin{tikzpicture}[scale=1.5]
\node[vertex] (s) at (0,0) {$s$};
\node at (-0.8, 0) {$p_s = 1$};
\node[vertex] (b) at (2,0) {$b$};
\node[vertex] (c) at (4,.5) {$c$};
\node[vertex] (d) at (4,-.5) {$d$};
\node[vertex] (t) at (6,0) {$t$};
\node at (7.5, 0) {$p_t = 11 - (y+z)$};
\path[->]
(s) edge node [below] {$y+z$} (b)
(b) edge node [above] {$y$} (c)
(b) edge node [below] {$z$} (d)
(c) edge node [above] {$y$} (t)
(d) edge node [below] {$z$} (t);
\end{tikzpicture}
\end{center}
By Algorithm~\ref{alg: 1} and \ref{alg: 2}, we have
\[p_c = 11-2y-z, \quad p_d = 11-y-2z, \quad p_b = 11-3(y+z),\]
\[\text{and } p_s = 11-6(y+z) = 1 \implies y+z=\frac{5}{3}.\]
The utility of $s$ is
\[(p_b - p_s)(y+z) = 3(y+z)^2 = \frac{25}{3}.\]
\item $x>0$ and $y+z>0$: In this case, if the equilibrium is equivalent to that of the following network:
\begin{center}
\begin{tikzpicture}[scale=1.5]
\node[vertex] (s) at (0,.5) {$s$};
\node at (-0.8, 0.5) {$p_s = 1$};
\node[vertex] (a) at (2,1) {$a$};
\node[vertex] (b) at (2,0) {$b$};
\node[vertex] (c) at (4,.5) {$c$};
\node[vertex] (d) at (4,-.5) {$d$};
\node[vertex] (t) at (6,0) {$t$};
\node at (7.5, 0) {$p_t = 11 - (x+y)$};
\path[->]
(s) edge node [above] {$x$} (a)
(s) edge node [below] {$z$} (b)
(a) edge node [above] {$x$} (c)
(b) edge node [below] {$z$} (d)
(c) edge node [above] {$x$} (t)
(d) edge node [below] {$z$} (t);
\end{tikzpicture}
\end{center}
then by Algorithm~\ref{alg: 1} and \ref{alg: 2}, we have
\[p_a = 11-4x-z, \quad p_b = 11-x-4z,\]
\[p_s = 11-5(x+z) = 1 \implies x+z=2, \text{ and } x=z=1.\]
The utility of $s$ is
\[(p_a - p_s)x + (p_b - p_s)z = 5 + 5 = 10.\]
Besides, $p_{ac} = 11-2x-z = p_{bd} = 11-x-2z = 8$.
This is indeed an equilibrium. Given that $p_{ac} = 8$, if $b$ wants to earn profit from $c$, then $b$ must set the price $p_{bc} \leqslant 8$. However, $b$ does not have the incentive to compete with $a$ because $b$ can sell all the goods with $z=1$ to $d$ at price $p_{bd}=8$.
\end{itemize}
The equilibrium is the last case when $x=z=1$, $y=0$, $p_{sa} = p_{sb} = 6$, $p_{ac} = 11 - 2x - z = 8 = 11 - x - 2z = p_{bd}$, and $p_{bc}$ can be any positive number.
\end{example}
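The three candidate utilities of $s$ can be recomputed exactly; the sketch below (ours) follows the case analysis above:

```python
from fractions import Fraction as F

# Case 1 (y + z = 0): p_s = 11 - 8x = 1
x1 = F(10, 8)
u1 = 4 * x1 ** 2                              # (p_a - p_s) x = 4 x^2
# Case 2 (x = 0): p_s = 11 - 6(y + z) = 1
s2 = F(10, 6)
u2 = 3 * s2 ** 2                              # (p_b - p_s)(y + z) = 3 (y + z)^2
# Case 3 (x > 0 and z > 0): p_s = 11 - 5(x + z) = 1, so x = z = 1
x3 = z3 = F(1)
u3 = (11 - 4*x3 - z3 - 1) * x3 + (11 - x3 - 4*z3 - 1) * z3
print(u1, u2, u3)                             # 25/4 25/3 10
```

The last case dominates, in agreement with the equilibrium found above.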
\end{document} |
\begin{document}
\begin{abstract}
We present a method to compute the Euler characteristic of an algebraic subset of $\bc^n$. This method relies on classical tools such as Gröbner bases and primary decomposition. The existence of this method allows us to define a new invariant for such varieties. This invariant is related to the problem of counting rational points over finite fields.
\end{abstract}
\title{A polynomial generalization of the Euler characteristic for algebraic sets.}
\section{Introduction}
One of the main invariants of a topological space is its Euler characteristic. It was initially defined for cell complexes, but it has been extended to more general classes of spaces. In the setting of complex algebraic varieties, the natural extension is the Euler characteristic with compact support. In~\cite{szafraniec-complex}, Szafraniec gives a method to compute the Euler characteristic of a complex algebraic set using methods from real geometry. In this paper, we present another method, which only makes use of basic properties of the Euler characteristic and classical results on algebraic sets. This way of computing the Euler characteristic naturally yields a stronger invariant, which we define.
The method works as follows. Consider an irreducible algebraic set $V\subseteq \bc^n$ of dimension $d$ and degree $g$, and take a generic linear projection $\pi:\bc^n\to \bc^d$. Restricted to $V$, the map $\pi$ is a $g:1$ branched cover. The branching locus $\Delta$ and its preimage $\pi\mid_V^{-1}(\Delta)$ can be computed. From the additivity of the Euler characteristic and its multiplicativity for covers, we obtain the following formula:
\[
\chi(V)=g\cdot \chi(\bc^d)-g\cdot \chi(\Delta)+\chi(\pi\mid_V^{-1}(\Delta)).
\]
So the computation of $\chi(V)$ is reduced to computing the Euler characteristic of algebraic sets of lower dimension, which allows us to set up a recursion.
In the previous method, we use the fact that $\chi(\bc^d)=1$. If, instead of making this substitution, we keep track of $\chi(\bc^d)$ as a formal symbol, we obtain a stronger invariant $F(V)$. This invariant is a polynomial in $\bz[L]$ and has some interesting properties: the dimension, degree, and Euler characteristic of an algebraic set can all be computed from it. It also gives information on the number of points of some varieties over finite fields. This relation with varieties over finite fields could be used to compute the invariant by counting points.
In Sections~\ref{sec-justificacion} and \ref{sec-descripcion} we show the preliminary results that prove the correctness of the method to compute the Euler characteristic, and describe the algorithm. Section~\ref{sec-invariante} is devoted to the generalization of this method to the new invariant, which is also defined and some of its properties are shown. The extension of this invariant to projective varieties
is discussed in Section~\ref{sec-proyectivo}. In Sections~\ref{sec-ejemplos} and \ref{sec-codigo} we include an implementation of the two algorithms in Sage, together with some examples and timings. As an important example, we show that in the case of hyperplane arrangements this invariant coincides with the characteristic polynomial. Finally, the relationship of the invariant with the number of points over finite fields is shown in Section~\ref{sec-cuerposfinitos}.
\section{Theoretical justification}
\label{sec-justificacion}
Let $V=V(I)\subseteq \bc^n$ be the algebraic set determined by a radical ideal $I$. Without loss of generality, we can assume that it is in general position (in a sense that we will make precise later). By computing the associated primes of $I$ we obtain the decomposition into irreducible components $V=V_1 \cup \cdots \cup V_c$. The Euler characteristic $\chi(V)$ can be expressed as $\chi(V_1)+\chi(V_2\cup\cdots\cup V_c)-\chi(V_1\cap(V_2\cup\cdots\cup V_c))$. The variety $V_1\cap(V_2\cup\cdots\cup V_c)$ is an algebraic set of lower dimension. So, by a double induction argument (over the dimension and over the number of irreducible components), we may reduce the problem of computing $\chi(V)$ to the case where $V$ is either zero-dimensional or irreducible.
If $V$ is zero-dimensional, it consists of finitely many isolated points, and its Euler characteristic equals the number of points. This number can be computed as the degree of the homogenization of the radical of $I$ (which in turn can be computed via the Hilbert polynomial; see~\cite[Chapter 5]{singular-book} for example).
For the case of an irreducible variety $V=V(I)\subseteq\bc^n$, where $I\trianglelefteq\bc[x_1,\ldots,x_n]$ is a radical ideal of Krull dimension $d$, we distinguish the homogeneous case from the non-homogeneous one.
If $I$ is a homogeneous ideal, the variety $V$ has a conic structure (it is a union of lines through the origin). This means that $V$ is contractible and hence its Euler characteristic is $1$.
For the non homogeneous case, consider the projection
\[
\begin{array}{rcl}
\pi:\bc^n & \to & \bc^d \\
(x_1,\ldots,x_n) & \mapsto & (x_1,\ldots,x_d)\end{array}
\]
We may assume (applying a generic linear change of coordinates if necessary) that the following condition is satisfied:
\begin{dfn}
Consider $I_h\trianglelefteq\bc[x_0,x_1,\ldots,x_n]$ the homogenization of $I$. We will say that $I$ is in \textbf{general position} if $\sqrt{I_h+(x_0,x_1,\ldots,x_d)}\supseteq(x_0,x_1,\ldots,x_n)$.
\end{dfn}
\begin{thm}
The previous condition is satisfied by any ideal $I$ after a generic linear change of variables. Moreover, when this condition is satisfied, the map $\pi$ restricted to $V$ is surjective and has no vertical asymptotes.
\end{thm}
\begin{proof}
If we consider the projectivization $\bar{V}\subseteq \bc\bp^n$, the projection $\pi$ is the projection with center the $(n-d-1)$-dimensional subspace $S=\{[x_0:x_1:\cdots:x_n]\mid x_0=x_1=\cdots=x_d=0\}$. Since the dimension of $\bar{V}$ is $d$, the intersection $\bar{V}\cap S$ is generically empty. We may hence assume that, after a generic linear change of variables, $S\cap \bar{V}=\emptyset$. This intersection is given precisely by the ideal $\sqrt{I_h+(x_0,x_1,\ldots,x_d)}$, which is homogeneous. The condition that $\bar{V}\cap S$ is empty is equivalent to the ideal $I$ being in general position.
In this situation, the preimage under $\pi\mid_V$ of a point $[1:x_1:\cdots:x_d]$ is given by the intersection of the subspace $\{[y_0:y_1:\cdots :y_n] \mid y_1=y_0x_1,\ldots,y_d=y_0 x_d\}$ with $\bar{V}$. By the genericity assumption, this intersection has no points at infinity. By a dimension argument, the intersection cannot be empty, and must therefore be contained in the affine part of $\bar{V}$. We have thus proved that $\pi$ restricted to $V$ is surjective.
\end{proof}
The intersection of a generic linear subspace of dimension $n-d$ with $\bar{V}$ is a union of $g$ distinct points, where $g$ is the degree of $I_h$. This degree can be computed through the Hilbert polynomial. Since $I$ is in general position, all intersections of $\bar{V}$ with the fibres of $\pi$ happen in the affine part. This means that $\pi$ restricted to $V$ is a branched cover of degree $g$. We will now see that the branching locus of this cover is contained in a subvariety of $\bc^d$ that can be computed.
Assume $I=(f_1,\ldots,f_s)$ is in general position. Consider the matrix
\[
M:=\left(\begin{matrix} \frac{\partial f_1}{\partial x_{d+1}} & \cdots & \frac{\partial f_1}{\partial x_{n}} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_s}{\partial x_{d+1}} & \cdots & \frac{\partial f_s}{\partial x_{n}}
\end{matrix}\right)
\]
and the ideal $J$ generated by its $(n-d)\times(n-d)$ minors.
\begin{thm}
The branching locus of $\pi\mid_V$ is contained in the zero set of the elimination ideal $(I+J)\cap \bc[x_1,\ldots,x_d]$.
\end{thm}
\begin{proof}
Consider a point $p=(x_1,\ldots,x_n)\in V$. If the linear space $\pi^{-1}(\pi(p))$ intersects $V$ transversally at $p$, then there is no ramification at $p$, since in a neighbourhood of $p$ the map $\pi\mid_V$ is a diffeomorphism. This transversality condition can be expressed as follows: the normal space of $V$ at $p$ and the normal space of $\pi^{-1}(\pi(p))$ together generate the tangent space of $\bc^n$ at $p$.
The normal space of $V$ at $p$ is generated by the rows of the matrix
\[
\left( \begin{matrix}
\frac{\partial f_1}{\partial x_1}(p) & \cdots & \frac{\partial f_1}{\partial x_n}(p)\\
\vdots & \ddots & \vdots\\
\frac{\partial f_s}{\partial x_1}(p) & \cdots & \frac{\partial f_s}{\partial x_n}(p)\\
\end{matrix}
\right).
\]
The normal space of $\pi^{-1}(\pi(p))$ is generated by the first $d$ vectors of the canonical basis. A Gaussian elimination argument tells us that these two spaces generate the whole space if and only if the matrix $M$ has rank $n-d$ at $p$. So the set of points of $V$ where $\pi\mid_V$ ramifies is contained in the set $S$ of zeros of $I+J$.
The elimination ideal $(I+J)\cap\bc[x_1,\ldots,x_d]$ defines the Zariski closure of $\pi(S)$.
\end{proof}
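As a small illustration of this computation (a SymPy sketch of ours; the paper's implementation in Section~\ref{sec-codigo} uses Sage), take the hyperbola $f=u^2-v^2-1$ in $\bc^2$ with $d=1$: the matrix $M$ reduces to the single entry $\partial f/\partial v$, and eliminating $v$ recovers the branch points $u=\pm 1$:

```python
from sympy import symbols, groebner

u, v = symbols('u v')
f = u**2 - v**2 - 1                        # a hyperbola in general position
J = [f.diff(v)]                            # the single (n-d)x(n-d) minor of M
G = groebner([f] + J, v, u, order='lex')   # lex order with v > u eliminates v
elim = [p for p in G.exprs if v not in p.free_symbols]
print(elim)                                # generated by u**2 - 1, i.e. u = ±1
```

Here the elimination ideal is $(u^2-1)$, so the branching locus is $\{\pm 1\}\subset\bc$.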
Since $V\setminus \pi^{-1}(\pi(S))$ is a cover over $\bc^d\setminus \pi(S)$ of degree $g$, we have that
\begin{align*}
\chi(V)&=\chi(V\setminus \pi^{-1}(\pi(S)))+\chi(V\cap \pi^{-1}(\pi(S)))\\
&=g\cdot \chi(\bc^d\setminus \pi(S))+\chi(V\cap \pi^{-1}(\pi(S)))\\
&=g\cdot(\chi(\bc^d)-\chi(\pi(S)))+\chi(V\cap \pi^{-1}(\pi(S)))\\
&=g-g\cdot \chi(\pi(S))+\chi(V\cap \pi^{-1}(\pi(S))).
\end{align*}
Both $\pi(S)$ and $V\cap \pi^{-1}(\pi(S))$ are varieties of dimension smaller than that of $V$, so, by the induction hypothesis, we can compute their Euler characteristics in the same way.
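As a quick sanity check of this formula (an illustrative example we add, not needed for the recursion), consider $V=V(xy-1)\subset\bc^2$, which is homeomorphic to $\bc^{*}$, so $\chi(V)=0$. After the change of coordinates $x=u+v$, $y=u-v$, the ideal becomes $(u^2-v^2-1)$, which is in general position with $g=2$, the ramification set is $S=\{(\pm 1,0)\}$, and hence $\pi(S)=\{\pm 1\}$ and $V\cap\pi^{-1}(\pi(S))=\{(\pm 1,0)\}$. The formula gives
\[
\chi(V)=g-g\cdot \chi(\pi(S))+\chi(V\cap \pi^{-1}(\pi(S)))=2-2\cdot 2+2=0,
\]
as expected.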
\section{Description of the algorithm}
\label{sec-descripcion}
Now we will describe, step by step, an algorithm to compute the Euler characteristic of the zero set of an ideal $I=(f_1,\ldots,f_s)$.
\begin{alg}
\label{algoritmoEuler}
(Compute the Euler characteristic of the algebraic set defined by the ideal $I$):
\begin{enumerate}
\item Check if $I$ is homogeneous. If it is, return $1$.
\item Compute the associated primes $(I_1,\ldots,I_m)$ of $I$. This can be achieved by primary decomposition (see \cite[Chapter 4]{singular-book}).
\item If there is more than one associated prime, we have that
\[
\chi(V(I))=\chi(V(I_1))+\chi(V(I_2\cap\cdots\cap I_m))-\chi(V(I_1+(I_2\cap\cdots \cap I_m))).
\]
by recursion, each summand can be computed with this algorithm. The remaining steps of the algorithm treat only the irreducible case; since we have already computed the associated primes, we will assume that $I=I_1$ is prime.
\item Compute the dimension $d$ and the degree $g$ of $V(I)$. If $d$ is zero, return $g$.
\item Check that $I$ is in general position. This can be done by computing a Gröbner basis of $\sqrt{I_h+(x_0,x_1,\ldots,x_d)}$ (where $I_h$ is the homogenization of $I$) and using it to check that $x_{d+1},\ldots,x_n$ belong to this radical. If $I$ is not in general position, apply a generic linear change of variables and restart the algorithm.
\item Construct the ideal $J$ generated by the $(n-d)\times (n-d)$ minors of the matrix
\[
M:=\left(\begin{matrix} \frac{\partial f_1}{\partial x_{d+1}} & \cdots & \frac{\partial f_1}{\partial x_{n}} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_s}{\partial x_{d+1}} & \cdots & \frac{\partial f_s}{\partial x_{n}}
\end{matrix}\right).
\]
\item Compute the elimination ideal $K=(I+J)\cap\bc[x_1,\ldots,x_d]$.
\item Compute by recursion $\chi(V(K))$ and $\chi(V(I+K))$. Return the number $g-g\cdot \chi(V(K))+\chi(V(I+K))$.
\end{enumerate}
\end{alg}
\section{A finer invariant}
\label{sec-invariante}
The previous method essentially consists in decomposing our variety $V$ into pieces, each of which is compared to some $\bc^i$ through linear maps that are unbranched covers. At the end of the day, it expresses $\chi(V)$ as a linear combination (with integer coefficients) of the Euler characteristics of the spaces $\bc^i$.
Now we will show that we can actually keep the information contained in this linear combination, defining a slightly different invariant. This information is stored in a polynomial $F_\pi(V)\in\bz[L]$, where $L^i$ plays the role of $\chi(\bc^i)$.
We follow the same method as before but with two differences:
\begin{itemize}
\item If the ideal $I$ is homogeneous, we do not stop and return $1$. Instead, we continue the algorithm, taking as $I_h$ the ideal generated by $I$ inside $\bc[x_0,\ldots,x_n]$.
\item In the final step, we return $g\cdot L^d -g\cdot F_\pi(V(K))+F_\pi(V(I+K))$ instead of $g-g\cdot \chi(V(K))+\chi(V(I+K))$.
\end{itemize}
The resulting algorithm is as follows:
\begin{alg}
\label{algoritmopoly}
(Compute the polynomial $F_\pi(V)$ associated to an algebraic set $V(I)$ in general position.)
\begin{enumerate}
\item Compute the associated primes $(I_1,\ldots,I_m)$ of $I$.
\item If there is more than one associated prime, we have that
\[
F_\pi(V(I))=F_\pi(V(I_1))+F_\pi(V(I_2\cap\cdots\cap I_m))-F_\pi(V(I_1+(I_2\cap\cdots \cap I_m))).
\]
By recursion, each summand can be computed with this algorithm. The remaining steps of the algorithm deal only with the irreducible case: since we have already computed the associated primes, we may assume that $I_1$ is prime.
\item Compute the dimension $d$ and the degree $g$ of $V(I)$. If $d$ is zero, return $g$.
\item Construct the ideal $J$ generated by the $(n-d)\times (n-d)$ minors of the matrix
\[
M:=\left(\begin{matrix} \frac{\partial f_1}{\partial x_{d+1}} & \cdots & \frac{\partial f_1}{\partial x_{n}} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_s}{\partial x_{d+1}} & \cdots & \frac{\partial f_s}{\partial x_{n}}
\end{matrix}\right).
\]
\item Compute the elimination ideal $K=(I+J)\cap\bc[x_1,\ldots,x_d]$.
\item Compute by recursion $F_\pi(V(K))$ and $F_\pi(V(I+K))$. Return the polynomial $g\cdot L^d-g\cdot F_\pi(V(K))+F_\pi(V(I+K))$.
\end{enumerate}
\end{alg}
Note that both algorithms~\ref{algoritmoEuler} and \ref{algoritmopoly} can run differently if we apply a linear change of coordinates to $I$ (which would change the projection $\pi$). The topological properties of the Euler characteristic tell us that the final result of algorithm~\ref{algoritmoEuler} will coincide with the Euler characteristic regardless of this linear change of coordinates. But in the case of $F_\pi(V)$ we cannot ensure such a result. Nevertheless, for two sufficiently generic projections, algorithm~\ref{algoritmopoly} will follow exactly the same steps, so we can define $F(V)$ as the polynomial obtained by algorithm~\ref{algoritmopoly} for generic projections.
More precisely, there must exist a Zariski open set $T\subseteq GL(n,\bc)$ such that the polynomial $F_\pi(\sigma(V(I)))$ is the same for every linear change of coordinates $\sigma\in T$.
\begin{dfn}
Given an ideal $I\trianglelefteq \bc[x_1,\ldots,x_n]$, we define the polynomial $F(V(I))$ as the polynomial $F_\pi(\sigma(V(I)))$ for any $\sigma\in T$.
We will say that $I$ or $V(I)$ are in \textbf{generic position}, or that we are in \textbf{generic coordinates} if $F_\pi(V(I))=F(V(I))$.
\end{dfn}
Since so far we have no algorithmic criterion to determine whether a projection is generic enough, the generic case can be computed by introducing the parameters of the projection and computing the Gröbner basis with those parameters.
In any case, experimental evidence suggests the following conjecture:
\begin{conj}\label{conjetura}
If an ideal is in general position, it is also in generic position.
\end{conj}
Some partial results in this direction are easy to show.
\begin{lema}
If $V$ is in general position, the leading term of $F_\pi(V)$ coincides with the leading term of $F(V)$.
\end{lema}
\begin{proof}
It is immediate to check that the degree of $F_\pi(V)$ coincides with the dimension of $V$, and that the leading coefficient of $F_\pi(V)$ coincides with the degree of $V$, regardless of the projection used to compute it.
\end{proof}
\begin{obs}
The value of $F_\pi(V)$ at $L=1$ equals $\chi(V)$, independently of the choices of projections made for its computation, as long as we are in general position.
\end{obs}
These two results actually show that $F(V)$ is independent of the projection in the case of curves (since in that case it is a degree $1$ polynomial whose leading term and value at $1$ are fixed).
We will now show that the invariant $F_\pi$ behaves well with respect to the product of varieties:
\begin{prop}
Let $I_1\trianglelefteq \bc[x_1,\ldots,x_n]$ and $I_2\trianglelefteq \bc[y_1,\ldots,y_m]$ be two ideals on polynomial rings with separated variables, and let $V_1\subseteq \bc^n$ and $V_2\subseteq \bc^m$ be their corresponding algebraic sets, of dimensions $d_1$ and $d_2$ respectively. Consider the ideal \[I:=I_1+I_2\trianglelefteq\bc[x_1,\ldots,x_{d_1},y_1,\ldots,y_{d_2},x_{d_1+1},\ldots,x_n,y_{d_2+1},\ldots,y_m].\] Its corresponding algebraic set is $V=V_1\times V_2\subseteq\bc^n\times \bc^m=\bc^{n+m}$. Then $F_\pi(V)=F_\pi(V_1)\cdot F_\pi(V_2)$.
\end{prop}
\begin{proof}
Without loss of generality, we can assume that we are in the irreducible case. We will work by induction on the dimension. If $V_1$ or $V_2$ is zero dimensional, the result is immediate.
Let $g_1,g_2$ be the degrees of $V_1$ and $V_2$, and $g$ the degree of $V$. It is easy to check that $g=g_1\cdot g_2$. It is also immediate to check that, if $V_2=\bc^m$, the statement holds (that is, $F_\pi(V_1\times \bc^m)=F_\pi(V_1)\cdot L^m$). Consider also $\Delta_1,\Delta_2$ and $\Delta$ defined analogously.
Now we will show that $\Delta=(\Delta_1\times\bc^{d_2})\cup (\bc^{d_1}\times \Delta_2)$. Let $p=(x_1,\ldots,x_{d_1},y_1,\ldots,y_{d_2})\in\bc^{d_1}\times \bc^{d_2}$. The set of points in $V$ that project to $p$ is the product of the set of points in $V_1$ that project to $(x_1,\ldots,x_{d_1})$ and the set of points in $V_2$ that project to $(y_1,\ldots,y_{d_2})$. This set has fewer than $g_1\cdot g_2$ points if and only if $(x_1,\ldots,x_{d_1})\in \Delta_1$ or $(y_1,\ldots,y_{d_2})\in \Delta_2$. It is also immediate that $(\Delta_1\times\bc^{d_2})\cap (\bc^{d_1}\times \Delta_2)=\Delta_1\times \Delta_2$. By the induction hypothesis, we have that
\[
F_\pi(\Delta)=L^{d_2}\cdot F_\pi(\Delta_1)+L^{d_1}\cdot F_\pi(\Delta_2)-F_\pi(\Delta_1)\cdot F_\pi(\Delta_2).
\]
Reasoning analogously, we can conclude that
\[
F_\pi(\pi^{-1}(\Delta))=F_\pi(\pi^{-1}(\Delta_1))\cdot F_\pi(V_2)+F_\pi(\pi^{-1}(\Delta_2))\cdot F_\pi(V_1)-F_\pi(\pi^{-1}(\Delta_1))\cdot F_\pi(\pi^{-1}(\Delta_2)).
\]
So, summarizing, we have that
\[
\begin{array}{rcl} F_\pi(V)&=&g_1 g_2(L^{d_1+d_2}-F_\pi(\Delta))+F_\pi(\pi^{-1}(\Delta))\\
F_\pi(V_1)&=&g_1(L^{d_1}-F_\pi(\Delta_1))+F_\pi(\pi^{-1}(\Delta_1)) \\
F_\pi(V_2)&=&g_2(L^{d_2}-F_\pi(\Delta_2))+F_\pi(\pi^{-1}(\Delta_2)).
\end{array}
\]
Using all the previous formulas one can easily check that $F_\pi(V)=F_\pi(V_1)\cdot F_\pi(V_2)$.
\end{proof}
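The final bookkeeping of this proof can be verified symbolically. The sketch below (plain Python with SymPy, not part of the paper's Sage code) abbreviates $A=L^{d_1}$ and $B=L^{d_2}$, so that $L^{d_1+d_2}=A\cdot B$, and checks that the three displayed formulas, together with the expressions for $F_\pi(\Delta)$ and $F_\pi(\pi^{-1}(\Delta))$, force $F_\pi(V)=F_\pi(V_1)\cdot F_\pi(V_2)$.

```python
import sympy as sp

# Abbreviate A = L^{d1}, B = L^{d2}, so L^{d1+d2} = A*B.
A, B, g1, g2, FD1, FD2, FP1, FP2 = sp.symbols('A B g1 g2 FD1 FD2 FP1 FP2')

F1 = g1 * (A - FD1) + FP1             # F_pi(V1)
F2 = g2 * (B - FD2) + FP2             # F_pi(V2)
FD = B * FD1 + A * FD2 - FD1 * FD2    # F_pi(Delta), by inclusion-exclusion
FP = FP1 * F2 + FP2 * F1 - FP1 * FP2  # F_pi(pi^{-1}(Delta))
F = g1 * g2 * (A * B - FD) + FP       # F_pi(V)

assert sp.expand(F - F1 * F2) == 0    # multiplicativity
```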
If Conjecture~\ref{conjetura} is true, the same result holds for $F(V)$. In fact, a weaker condition would be enough: if the product of two generic projections is generic, then the invariant $F$ is multiplicative. This could be useful, for example, to give a criterion to check whether an algebraic set can be the product of two nontrivial algebraic sets: if $F(V)$ is irreducible in $\bz[L]$, then $V$ cannot be a product.
\section{The projective case}
\label{sec-proyectivo}
To compute the Euler characteristic of the projective variety $\bar{V}$ defined by a homogeneous ideal $I_h\trianglelefteq\bc[x_0,\ldots,x_n]$ we can also use algorithm~\ref{algoritmoEuler}. In order to do so, we will consider the hyperplane $H$ ``at infinity'' given by the equation $x_0=0$. This allows us to decompose $\bar{V}$ as its affine part $V:=\bar{V}\setminus H$ and its part at infinity $\bar{V}^\infty :=\bar{V}\cap H$. It is clear that $\chi(\bar{V})=\chi(V)+\chi(\bar{V}^\infty)$.
The affine part $V$ is an affine variety defined by the ideal obtained by substituting $x_0=1$ in the generators of $I_h$; its Euler characteristic can be computed as before.
The part at infinity $\bar{V}^\infty$ is a projective variety embedded in a projective space of smaller dimension. The homogeneous ideal that defines it is obtained by substituting $x_0=0$ in the generators of $I_h$. Its Euler characteristic can be computed by recursion. If we are in the case of $\bc\bp^1$, $\bar{V}$ will consist of a finite number of points, whose number can be computed as the degree of $\sqrt{I_h}$.
Now we will show a different way to compute the Euler characteristic of a projective variety using the polynomial $F(V)$.
\begin{thm}
Let $I=\bc[x_0,\ldots,x_n]\cdot(f_1,\ldots,f_s)$ be a homogeneous ideal in generic position. Assume that the generators $f_1,\ldots,f_s$ are homogeneous. Define $I_0:=(I+\bc[x_0,\ldots,x_n]\cdot(x_0))\cap \bc[x_1,\ldots,x_n]$ and $I_1:=(I+\bc[x_0,\ldots,x_n]\cdot(x_0-1))\cap \bc[x_1,\ldots,x_n]$; that is, the ideals that represent the intersection of $V(I)$ with the hyperplanes $\{x_0=0\}$ and $\{x_0=1\}$ respectively, viewing the two hyperplanes as ambient spaces. Then the following formula holds:
\[
F(V(I))=(L-1)\cdot F(V(I_1))+F(V(I_0))
\]
\end{thm}
\begin{proof}
By induction on the dimension of $V(I)$.
If the dimension is zero, $V(I)$ must consist only of the origin, since $I$ is homogeneous. In this case, $V(I_1)$ is empty, and $V(I_0)$ is also the origin. We have that
\[
1=F(V(I))=(L-1)\cdot 0+1=(L-1)\cdot F(V(I_1))+F(V(I_0)).
\]
If the dimension $d$ of $V(I)$ is positive, consider the ideals $J,K$ and $H=I+K$ as before. Construct also $H_0,H_1,K_0$ and $K_1$ in the same way as $I_0$ and $I_1$. Note that, since we are in generic position, the ideals $H_0'$ and $K_0'$ needed to compute $F(V(I_0))$ are precisely $H_0$ and $K_0$ (that is, specializing $x_0=0$ and then computing the minors of the matrix $M$ and the elimination ideal is the same as computing the minors of the matrix and the elimination ideal and then specializing). The same happens with $H_1$ and $K_1$.
By induction hypothesis, we have that
\[
F(V(H))=(L-1)\cdot F(V(H_1))+F(V(H_0))\]
and
\[
F(V(K))=(L-1)\cdot F(V(K_1))+F(V(K_0)).
\]
Now we have that
\[
\begin{array}{rcl}
F(V(I)) & = & g\cdot L^d-g\cdot F(V(K))+F(V(H)) \\
& = & g\cdot L^d-g\cdot((L-1)\cdot F(V(K_1))+F(V(K_0)))+(L-1)\cdot F(V(H_1))+F(V(H_0))\\
& = & (L-1)\cdot \left(g\cdot L^{d-1}-g\cdot F(V(K_1))+F(V(H_1))\right)+g\cdot L^{d-1}-g\cdot F(V(K_0))+F(V(H_0))\\
& = & (L-1)\cdot F(V(I_1))+F(V(I_0))
\end{array}
\]
and this proves the result.
\end{proof}
This theorem allows us to relate the invariant $F$ of the affine algebraic set defined by a homogeneous ideal to the invariant $F$ of the projective variety defined by the same ideal, as follows:
\begin{cor}
Let $I$ be a homogeneous ideal in $\bc[x_1,\ldots,x_n]$ in generic position. Let $V$ be the algebraic set defined by $I$, and $V'$ the projective variety defined by the same ideal. Then $F(V)=(L-1)\cdot F(V')+1$.
\end{cor}
\begin{proof}
Applying the previous result recursively, we have that $F(V(I))$ equals
\[
(L-1)\cdot F(V(I_1))+(L-1)\cdot F(V((I_0)_1))+\cdots+(L-1)\cdot F(V((\cdots(I_0)\cdots)_0)_1))+F(V((\cdots (I_0)\cdots)_0)).
\]
Note that if we compute $F(V')$ piecewise we obtain precisely $F(V(I_1))+F(V((I_0)_1))+\cdots+F(V((\cdots(I_0)\cdots)_0)_1))$. Since $V((\cdots (I_0)\cdots)_0))$ consists only of the origin, we have the result.
This last result can be interpreted as the fact that $V$ is the complex cone over $V'$; that is, $V\setminus\{0\}$ is the product $V'\times \bc^*$.
\end{proof}
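As a concrete instance (our own example): a smooth projective conic $V'$ is isomorphic to $\bc\bp^1$, so $F(V')=L+1$, and the corollary gives $F(V)=(L-1)(L+1)+1=L^2$ for the affine cone $V$ over it; evaluating at $L=1$ recovers $\chi(V)=1$, as expected for a contractible cone. The trivial check:

```python
import sympy as sp

L = sp.symbols('L')
F_Vprime = L + 1                            # F of a smooth projective conic
F_cone = sp.expand((L - 1) * F_Vprime + 1)  # corollary: F(V) = (L-1)F(V') + 1
print(F_cone)  # L**2
```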
\section{Examples}
\label{sec-ejemplos}
Both the polynomial $F(V)$ and the Hilbert polynomial $P_I$ have the same degree, and their leading terms are determined by the degree of $V$. One might therefore suspect that they contain the same information. The following example shows this is not the case:
\begin{ejm}
Consider the conics $C_1,C_2\subseteq\bc^2$ given by $C_1:=V(x^2+y^2-1)$ and $C_2:=V(x^2+y^2)$. If we compute the Hilbert polynomials of the corresponding homogeneous ideals in $\bc[x,y,z]$ we get that $P_{(x^2+y^2-z^2)}=P_{(x^2+y^2)}=2\cdot t+1$.
Let's now compute the polynomial $F(C_1)$ using the canonical projection $\pi:\bc^2\to\bc$ onto the first component. This projection is a $2:1$ cover of $\bc$ branched over the points $\pm1$. Over each of these points, there is only one preimage. So finally we have that $F(C_1)=2(L-2)+1+1=2L-2$.
On the other hand, the curve $C_2$ also projects $2:1$ over $\bc$, but now there is only one branching point ($x=0$). So the result is $F(C_2)=2(L-1)+1=2L-1$.
This example shows that the polynomial $F(V)$ contains information that is not contained in the Hilbert polynomial.
\end{ejm}
An important class of algebraic sets is that of hyperplane arrangements. We now recall some related notions (see~\cite[Chapter II]{orlik-terao}).
\begin{dfn}
Let $\mathcal A$ be a hyperplane arrangement in $\bc^n$. Its \textbf{intersection lattice} $L(\mathcal A)$ is the set of all its intersections ordered by reverse inclusion, with the convention that the intersection of the empty set is $\bc^n$ itself.
The Möbius function is the only function $\mu:L(\mathcal{A})\rightarrow \bz$ satisfying:
\[\begin{array}{cc}
\mu(\bc^n)=1 & \\
\sum_{Y\leq X}\mu(Y)=0 & \forall X\in L(\mathcal{A})\setminus\{\bc^n\}.
\end{array}
\]
\end{dfn}
\begin{dfn}
The \textbf{characteristic polynomial} of $\mathcal A$ is defined as
\[
\chi(\mathcal{A},L):=\sum_{X\in L(\mathcal{A})}\mu(X)\cdot L^{\dim(X)}.
\]
\end{dfn}
\begin{thm}[Deletion-Restriction]
Let $\mathcal A$ be a hyperplane arrangement in $\bc^n$, and $H$ a hyperplane of $\mathcal{A}$. Let $\mathcal{A}'$ be the arrangement obtained by removing $H$ from $\mathcal{A}$, and let $\mathcal{A}''$ be the hyperplane arrangement inside $H$ induced by intersection with $\mathcal{A}'$. Then the following formula holds:
\[
\chi(\mathcal{A},L)=\chi(\mathcal{A}',L)-\chi(\mathcal{A}'',L).
\]
\end{thm}
Now we will see how this characteristic polynomial relates to the polynomial $F(V)$.
\begin{thm}
Let $\mathcal A$ be a hyperplane arrangement in $\bc^n$. The following holds:
\[
F(\mathcal{A})=L^n-\chi(\mathcal{A},L).
\]
\end{thm}
\begin{proof}
By induction on the number of hyperplanes. The case of only one hyperplane is immediate.
If there is more than one hyperplane, take a hyperplane $H$ of $\mathcal{A}$. Since $\mathcal{A}=\mathcal{A}'\cup H$, we have, by additivity, that
\[
F(\mathcal{A})=F(\mathcal{A}')+F(H)-F(\mathcal{A}'\cap H)=F(\mathcal{A}')+L^{n-1}-F(\mathcal{A}'').
\]
Both $\mathcal{A}'$ and $\mathcal{A}''$ are hyperplane arrangements with fewer hyperplanes than $\mathcal{A}$, so, by the induction hypothesis, the following formulas hold:
\[
\begin{array}{c}
F(\mathcal{A}')=L^n-\chi(\mathcal{A}',L) \\
F(\mathcal{A}'')=L^{n-1}-\chi(\mathcal{A}'',L).
\end{array}
\]
Substituting these formulas in the previous one, and using the deletion-restriction theorem, we get the result.
\end{proof}
This result tells us that the characteristic polynomial of $\mathcal A$ can be thought of as the polynomial $F$ of its complement.
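To make these notions concrete, here is a plain-Python computation (ours, independent of the Sage code below) of the Möbius function and $\chi(\mathcal A,L)$ for the arrangement of the two coordinate axes in $\bc^2$. The theorem then gives $F(\mathcal A)=L^2-\chi(\mathcal A,L)=2L-1$, which indeed matches the union of two lines meeting at one point ($L+L-1$).

```python
# Intersection lattice of the arrangement {x=0, y=0} in C^2, ordered by
# reverse inclusion; each flat is stored with its dimension and the list
# of flats strictly below it (i.e., strictly containing it).
flats = {
    'C2':     {'dim': 2, 'below': []},
    'x=0':    {'dim': 1, 'below': ['C2']},
    'y=0':    {'dim': 1, 'below': ['C2']},
    'origin': {'dim': 0, 'below': ['C2', 'x=0', 'y=0']},
}

# Moebius function: mu(C^n) = 1 and sum_{Y <= X} mu(Y) = 0 otherwise.
mu = {}
for X in flats:  # insertion order is a linear extension of the lattice
    mu[X] = 1 if X == 'C2' else -sum(mu[Y] for Y in flats[X]['below'])

# Characteristic polynomial stored as {exponent: coefficient}.
chi = {}
for X, data in flats.items():
    chi[data['dim']] = chi.get(data['dim'], 0) + mu[X]

print(chi)  # {2: 1, 1: -2, 0: 1}, i.e. chi(A, L) = L^2 - 2L + 1
```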
\section{Code and timings}
\label{sec-codigo}
Here we show an implementation in Sage (\cite{sage}) of the two algorithms.
\subsection{Implementation of Algorithm~\ref{algoritmoEuler}}
\begin{verbatim}
def Euler_characteristic(I):
    R=I.ring()
    if I.is_one():
        return 0
    if R.ngens()==1:
        return sum([j[0].degree() for j in I.gen().factor()])
    J1=I.radical()
    if J1.is_homogeneous():
        return 1
    primdec=J1.associated_primes()
    J1=primdec[0]
    m=len(primdec)
    if m>1:
        J2=R.ideal(1)
        for j in [1..m-1]:
            J2=J2.intersection(primdec[j])
        return Euler_characteristic(J1)+Euler_characteristic(J2)-Euler_characteristic(J1+J2)
    P=J1.homogenize().hilbert_polynomial()
    if P.is_zero():
        deg=0
    else:
        deg=P.leading_coefficient()*P.degree().factorial()
    if deg==1:
        return 1
    dim=J1.dimension()
    n=R.ngens()
    vars1=R.gens()[0:n-dim]
    vars2=R.gens()[n-dim:n]
    varpiv=vars1[-1]
    IH=J1.homogenize()
    S=IH.ring()
    JH=IH+S.ideal(S.gens()[n-dim:])
    if JH.dimension()>0:
        det=0
        while det==0:
            MH=random_matrix(R.base_ring(),n)
            det=MH.determinant()
        L=list(MH*vector(list(R.gens())))
        return Euler_characteristic(R.hom(L)(J1))
    if dim==0:
        return deg
    M=matrix([[f.derivative(v) for v in vars1] for f in J1.gens()])
    J=R.ideal(M.minors(n-dim))
    K=(J+J1).elimination_ideal(vars1)
    S=PolynomialRing(R.base_ring(),vars2)
    H=R.hom([S(0) for j in vars1]+[S(j) for j in vars2])
    C=deg-deg*Euler_characteristic(H(K))+Euler_characteristic(K+J1)
    return C
\end{verbatim}
This algorithm may be very slow (since it involves several Gröbner basis computations), but in several interesting cases it gives a useful answer in reasonable time. Let's show some examples.
In the case of curves and surfaces, the result is often obtained reasonably fast, but the timing may vary a lot if a random change of variables has to be applied. Here we show a few examples. These tests have been run on a Dual-Core AMD Opteron 8220.
Three examples of plane curves:
\begin{verbatim}
sage: R.<x,y>=QQ[]
sage: time Euler_characteristic(R.ideal(x^5+1))
5
Time: CPU 0.16 s, Wall: 0.17 s
sage: time Euler_characteristic(R.ideal(y^4+x^3-1))
-5
Time: CPU 0.19 s, Wall: 0.20 s
sage: time Euler_characteristic(R.ideal(x^2+y^2-5*x^2*y^4+x*y-1))
-8
Time: CPU 17.82 s, Wall: 17.82 s
\end{verbatim}
A curve and a surface in $\bc^3$:
\begin{verbatim}
sage: S.<x,y,z>=QQ[]
sage: time Euler_characteristic(S.ideal(x^5+y^2+2*x*y+1,3*x-5*y*x+y^2+1))
10
Time: CPU 0.17 s, Wall: 0.18 s
sage: time Euler_characteristic(S.ideal(x^5+y^2+2*x*y+1))
-3
Time: CPU 0.49 s, Wall: 0.49 s
\end{verbatim}
\subsection{Implementation of Algorithm~\ref{algoritmopoly}}
\begin{verbatim}
@parallel(7)
@cached_function
def FV(I,var='L'):
    FS=PolynomialRing(ZZ,var)
    L=FS.gen()
    R=I.ring()
    if R.ngens()==0:
        return 0
    if I.is_zero():
        return L^R.ngens()
    if I.is_one():
        return 0
    if R.ngens()==1:
        return FS(sum([j[0].degree() for j in I.gen().factor()]))
    J1=I.radical()
    if J1.is_homogeneous():
        S1=PolynomialRing(R.base_ring(),R.gens()[0:-1])
        H1=R.hom(list(S1.gens())+[S1(1)])
        H2=R.hom(list(S1.gens())+[S1(0)])
        resulp=FV([H1(J1),H2(J1)])
        d=dict([[a[0][0][0],a[1]] for a in resulp])
        [i1,i2]=[d[H1(J1)],d[H2(J1)]]
        return (L-1)*(i1)+i2
    primdec=J1.associated_primes()
    J1=primdec[0]
    m=len(primdec)
    if m>1:
        J2=R.ideal(1)
        for j in [1..m-1]:
            J2=J2.intersection(primdec[j])
        resulp=FV([J1,J2,J1+J2])
        d=dict([[a[0][0][0],a[1]] for a in resulp])
        [i1,i2,i3]=[d[J1],d[J2],d[J1+J2]]
        return i1+i2-i3
    P=J1.homogenize().hilbert_polynomial()
    if P.is_zero():
        deg=0
    else:
        deg=P.leading_coefficient()*P.degree().factorial()
    dim=J1.dimension()
    if deg==1:
        return FS(L^dim)
    n=R.ngens()
    vars1=R.gens()[0:n-dim]
    vars2=R.gens()[n-dim:n]
    varpiv=vars1[-1]
    IH=J1.homogenize()
    S=IH.ring()
    JH=IH+S.ideal(S.gens()[n-dim:])
    if JH.dimension()>0:
        det=0
        while det==0:
            MH=random_matrix(R.base_ring(),n)
            det=MH.determinant()
        L=list(MH*vector(list(R.gens())))
        return FV(R.hom(L)(J1))
    if dim==0:
        return FS(deg)
    M=matrix([[f.derivative(v) for v in vars1] for f in J1.gens()])
    J=R.ideal(M.minors(n-dim))
    K=(J+J1).elimination_ideal(vars1)
    S=PolynomialRing(R.base_ring(),vars2)
    H=R.hom([S(0) for j in vars1]+[S(j) for j in vars2])
    d=dict([[a[0][0][0],a[1]] for a in FV([H(K),K+J1])])
    [i1,i2]=[d[H(K)],d[K+J1]]
    C=deg*FS(L^dim-i1)+i2
    return C
\end{verbatim}
This implementation makes use of the Sage framework for parallel computations, allowing it to use several processor cores at the same time to compute the intermediate steps. It also caches the already computed results in case they are needed later.
Note that, if we assume Conjecture~\ref{conjetura} to be true, this implementation works correctly when given the ideal whose algebraic set we want to compute (assuming it is in general position, which can be easily checked). If we want to be safe from the possibility that the conjecture is false, we have to introduce the ideal over a ring that contains the parameters of the possible linear transformations. But, unluckily, the methods to compute the primary decomposition do not work over rings with parameters. One way to proceed is to compute the primary decomposition over the original ring without parameters, then compute the discriminant with parameters, and then choose a value for the parameters in the open part of the Gröbner cover (see~\cite{montes-groebnercover} for a definition and algorithm).
Again, this method can be very slow, but in some cases it is fast enough to be useful. Here we present some of those examples:
The complex $2$-sphere:
\begin{verbatim}
sage: R.<x,y,z>=QQ[]
sage: time FV(R.ideal(x^2+y^2+z^2-1))
2*L^2 - 2*L + 2
Time: CPU 0.07 s, Wall: 0.32 s
\end{verbatim}
Another surface:
\begin{verbatim}
sage: time FV(R.ideal(x^3+y^3+z^3-1))
3*L^2 - 6*L + 12
Time: CPU 0.07 s, Wall: 0.29 s
\end{verbatim}
The intersection of the two:
\begin{verbatim}
sage: time FV(R.ideal(x^3+y^3+z^3-1,x^2+y^2+z^2-1))
6*L - 15
Time: CPU 0.08 s, Wall: 43.01 s
\end{verbatim}
and their union (the timing is done after cleaning the cache of the function):
\begin{verbatim}
sage: time FV(R.ideal((x^3+y^3+z^3-1)*(x^2+y^2+z^2-1)))
5*L^2 - 14*L + 29
Time: CPU 0.08 s, Wall: 43.02 s
\end{verbatim}
Notice the additivity of the polynomial.
The Whitney umbrella:
\begin{verbatim}
sage: time FV(R.ideal(x*y^2-z^2))
3*L^2 - 4*L + 2
Time: CPU 0.13 s, Wall: 3.44 s
\end{verbatim}
The $3$-sphere:
\begin{verbatim}
sage: S.<x,y,z,t>=QQ[]
sage: time FV(S.ideal(x^2+y^2+z^2+t^2-1))
2*L^3 - 2*L^2 + 2*L - 2
Time: CPU 0.09 s, Wall: 0.43 s
\end{verbatim}
\section{Counting points over finite fields}
\label{sec-cuerposfinitos}
In this section we will see how the polynomial $F(V)$ can be related to the number of points of the variety considered over a finite field. Let's illustrate this fact with an example.
\begin{ejm}
Consider the conic given by the equation
\[
x_1^2+x_2^2-1
\]
in the affine plane over the field of $5$ elements $\bfff_5$.
The set of rational points is the following:
\[
(0,1),(0,4),(1,0),(4,0)
\]
If we project it to the $x_1$ axis, we see that over the point $(0)$ we have two preimages, as expected from the degree. Over the points $(1)$ and $(4)$ we have just one point of the curve, since the cover ramifies there. But over the points $(2)$ and $(3)$ we have no points of the curve. The reason for this is that the equations $2^2+x_2^2-1$ and $3^2+x_2^2-1$ have no roots over $\bfff_5$. However, they do have all their solutions over the quadratic extension $\bfff_{25}$ of $\bfff_5$. In particular, if we look at the points of $\bfff_5\times\bfff_{25}$ that satisfy the equation, we obtain the set
\[
\{(0, 1), (0, 4), (1, 0), (2, a + 2), (2, 4a + 3), (3, a + 2), (3, 4a + 3), (4, 0)\},
\]
where $a$ is an element of $\bfff_{25}$ with minimal polynomial $x^2+4x+2$ over $\bfff_5$.
It might seem strange to consider the set of points in $\bfff_{5}\times \bfff_{25}$ that satisfy a given equation. But this set is in fact an algebraic set. Indeed, it can be expressed as the set of points of the affine plane over $\bfff_{25}$ that satisfy the equations
\[\begin{array}{c}
x_1^2+x_2^2-1\\
x_1^5-x_1
\end{array}
\]
Recall that the polynomial $F(V)$ for such a conic was $f=2L-2$. In this case, we have obtained $8$ points in total, which is precisely the value of $f$ for $L=5$. A quick look at the points shows why this happens: there are two points over each value of $\bfff_5$, with the exception of the two branching points, where there is only one.
\end{ejm}
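The point count in this example can be reproduced by brute force in plain Python (our own check), modelling $\bfff_{25}$ as $\bfff_5[a]/(a^2+4a+2)$, which is consistent with the listed points:

```python
p = 5
# F_25 = F_5[a]/(a^2 + 4a + 2), so a^2 = a + 3 (mod 5);
# an element u + v*a is stored as the pair (u, v).
def mul(s, t):
    (u1, v1), (u2, v2) = s, t
    c = v1 * v2  # coefficient of a^2, rewritten via a^2 = a + 3
    return ((u1 * u2 + 3 * c) % p, (u1 * v2 + u2 * v1 + c) % p)

F25 = [(u, v) for u in range(p) for v in range(p)]

# Count points (x1, x2) in F_5 x F_25 with x1^2 + x2^2 = 1.
count = 0
for x1 in range(p):
    for x2 in F25:
        sq = mul(x2, x2)
        if ((x1 * x1 + sq[0]) % p, sq[1]) == (1, 0):
            count += 1
print(count)  # 8, the value of f = 2L - 2 at L = 5
```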
Note that both algorithms~\ref{algoritmoEuler} and~\ref{algoritmopoly} can be run over finite fields in the same way as they run over the rationals. With a small exception, though: to ensure the existence of a change of coordinates that puts the ideal in general position, we might need to work in a finite field extension. Once that is done, both algorithms would mimic the steps given by the algorithm run over $\bq$, except if some leading coefficient becomes zero. But that would happen only for a finite number of prime numbers $p$.
\begin{ntc}
We can then define the polynomial $F_p(V)$ as the result of running algorithm~\ref{algoritmopoly} in a field of characteristic $p$.
\end{ntc}
As we have seen before, $F_p(V)=F(V)$ for almost every prime number $p$.
\begin{ntc}
Given a polynomial \[f=a_0+a_1L+a_2L^2+\cdots+a_nL^n\in\bz[L]\] and a list of numbers $(d_1,\ldots,d_s)$ with $s\geq n$, we will denote by $f(d_1,\ldots,d_s)$ the number
\[
f(d_1,\ldots,d_s)=a_0+a_1d_1+a_2d_1d_2+\cdots+a_nd_1d_2\cdots d_n
\]
\end{ntc}
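This evaluation rule is straightforward to implement; the helper below (plain Python, with a name of our choosing) multiplies in one $d_i$ per coefficient:

```python
def eval_f(coeffs, ds):
    """Evaluate f(d_1,...,d_s), where coeffs = [a_0, a_1, ..., a_n] and
    f(d_1,...,d_s) = a_0 + a_1*d_1 + a_2*d_1*d_2 + ... (with s >= n)."""
    total, prefix = coeffs[0], 1
    for a, d in zip(coeffs[1:], ds):
        prefix *= d
        total += a * prefix
    return total

# f = 2L - 2 (the conic of the previous example): f(5) = 8.
print(eval_f([-2, 2], [5]))        # 8
# f = L^2 - 2L + 1 with the sequence (2, 3): 1 - 2*2 + 1*2*3 = 3.
print(eval_f([1, -2, 1], [2, 3]))  # 3
```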
Another difference between the way the algorithms would run over $\bq$ and over finite fields lies in the primary decomposition. But then again, this will only happen for a finite number of primes.
\begin{thm}
Given an ideal $I\trianglelefteq \bfff_p[x_1,\ldots,x_n]$, and the corresponding $f:=F_p(V)$ of degree $s$, there exists a list of numbers $(d_1\leq\cdots\leq d_n)$ such that the number of points in
\[\left(\bfff_{p^{d_1}}\times\cdots\times\bfff_{p^{d_n}}\right)\cap V(I)
\]
equals the number $f(p^{d_1},\ldots,p^{d_n})$.
Moreover, for any such list $(d_1\leq \cdots \leq d_n)$ and any multiple $d_i'$ of $d_i$, there exists another list $(d_1\leq\cdots\leq d_{i-1}\leq d_i'\leq d_{i+1}'\leq \cdots \leq d_n')$ satisfying the same property.
\end{thm}
\begin{proof}
By induction on the dimension. If $I$ is zero dimensional, $V(I)$ consists of $F_p(V)=\deg(I)$ distinct points, whose coordinates lie in a sufficiently big extension $\bfff_{p^{d_1}}$. Any sequence starting with $d_1$ satisfies the theorem.
If $I$ has dimension $d>0$ and degree $g$, we have that $F_p(V(I))=g(L^d-F_p(V(K)))+F_p(V(I+K))$. By induction, we can assume that both $K$ and $I+K$ satisfy the result. Let $(d_1^1\leq\cdots\leq d_d^1)$ and $(d_1^2\leq\cdots\leq d_n^2)$ be the corresponding sequences. Take $d_1=\operatorname{lcm}(d_1^1,d_1^2)$. There exist two sequences $(d_1\leq {d_2^1}'\leq\cdots\leq {d_d^1}')$ and $(d_1\leq {d_2^2}'\leq\cdots\leq {d_n^2}')$ that are valid for $K$ and $I+K$ respectively and coincide in the first term. Repeating this reasoning we can obtain two sequences $(d_1\leq\cdots \leq d_d)$ and $(d_1\leq\cdots\leq d_d\leq d_{d+1}\leq \cdots \leq d_n)$ that are valid for $K$ and $I+K$ respectively.
Note that, since both $F_p(V(K))$ and $F_p(V(I+K))$ are of dimension at most $d-1$, the terms $d_d,\ldots, d_n$ can be changed arbitrarily and the sequence would still be valid for $K$ and $I+K$.
Now take any point $q:=(q_1,\ldots,q_d)\in(\bfff_{p^{d_1}}\times\cdots\times \bfff_{p^{d_{d}}})\setminus V(K)$. Taking an appropriate field $\bfff_{p^{n_q}}$, we can ensure that there are exactly $g$ points in $\{(x_1,\ldots,x_n)\in (\bfff_{p^{n_q}})^n\mid x_1=q_1,\ldots,x_d=q_d\}\cap V$. We can do the same for every point $q$, and take a common field extension $\bfff_{p^s}$ of all the different $\bfff_{p^{n_q}}$. This way, there are exactly $g$ points of $V\cap (\bfff_{p^{d_1}}\times\cdots\times \bfff_{p^{d_{d}}}\times \bfff_{p^{s}}\times\cdots \times \bfff_{p^{s}})$ over each point of $(\bfff_{p^{d_1}}\times\cdots\times \bfff_{p^{d_{d}}})\setminus V(K)$.
By definition, we have that
\[
F_p(V)=g(L^d-F_p(V(K)))+F_p(V(I+K)).
\]
Now if we restrict ourselves to the points in $\bfff_{p^{d_1}}\times\cdots\times \bfff_{p^{d_{d}}}\times \bfff_{p^{s}}\times\cdots \times \bfff_{p^{s}}$, we have that
\[
\#V=g\cdot(p^{d_1}p^{d_2}\cdots p^{d_d}-\#V(K))+\#V(I+K).
\]
Making use of the induction hypothesis, and the fact that $F_p(V(K))$ and $F_p(V(I+K))$ are of dimension less than $d$, the result follows easily.
\end{proof}
\begin{cor}
Let $V$ be an algebraic set in $\bc^n$ defined by an ideal $I\trianglelefteq\bk[x_1,\ldots,x_n]$, where $\bk$ is an algebraic extension of $\bq$. Then for almost every prime $p$ there exists a list of positive integers $(d_1,\ldots,d_n)$, depending on $p$, such that the number of points in
\[\left(\bfff_{p^{d_1}}\times\cdots\times\bfff_{p^{d_n}}\right)
\] that satisfy the equations of $I$
equals the number $F(V(I))(p^{d_1},\ldots,p^{d_n})$.
\end{cor}
This corollary suggests a different way to compute the polynomial $F(V(I))$: by counting points over finite fields. If we know the value of $F(V(I))(S)$ for a sufficient number of such sequences $S$, recovering the coefficients of the polynomial is a simple linear algebra problem.
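That linear-algebra step can be sketched as follows (plain Python with SymPy; the sequences and counts are toy data generated from $f=L^2-2L+1$): each sequence contributes a row $(1,\; e_1,\; e_1e_2,\ldots)$ with $e_i=p^{d_i}$, and the counted values form the right-hand side.

```python
import sympy as sp

# Suppose deg f = 2 and we have counted points for three sequences,
# giving values f(e1, e2) with e_i = p^{d_i}.  (Toy data: the counts
# below were generated from f = L^2 - 2L + 1.)
data = [((2, 3), 3), ((3, 3), 4), ((5, 5), 16)]

# Row for a sequence: (1, e1, e1*e2); solve the resulting linear system.
A = sp.Matrix([[1, e1, e1 * e2] for (e1, e2), _ in data])
b = sp.Matrix([v for _, v in data])
coeffs = A.solve(b)
print(list(coeffs))  # [1, -2, 1], i.e. f = 1 - 2L + L^2
```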
\end{document} |
\begin{document}
\title[An IVP with a time-measurable pseudo-differential operator in a weighted $L_p$-space]
{A regularity theory for an initial value problem with a time-measurable pseudo-differential operator in a weighted $L_p$-space }
\author[J.-H. Choi]{Jae-Hwan Choi}
\address[J.-H. Choi]{Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, 291, Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea}
\email{jaehwanchoi@kaist.ac.kr}
\author[I. Kim]{Ildoo Kim}
\address[I. Kim]{Department of Mathematics, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea}
\email{waldoo@korea.ac.kr}
\author[J.B. Lee]{Jin Bong Lee}
\address[J.B. Lee]{Research Institute of Mathematics, Seoul National University, Seoul 08826, Republic of Korea}
\email{jinblee@snu.ac.kr}
\thanks{The second author has been supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No.2020R1A2C1A01003959). The third author has been supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No.2021R1C1C2008252)}
\subjclass[2020]{35B30, 35S05, 35B65, 47G30}
\keywords{Initial value problem, (time-measurable) Pseudo-differential operator, Muckenhoupt's weight, Variable smoothness}
\maketitle
\begin{abstract}
We study initial value problems with (time-measurable) pseudo-differential operators in weighted $L_p$-spaces.
Initial data are given in generalized Besov spaces and regularity assumptions on symbols of our pseudo-differential operators depend on the dimension of the space-variable and weights.
These regularity assumptions are characterized based on some properties of weights.
However, no regularity condition is imposed with respect to the time variable.
We show the uniqueness, existence, and maximal regularity estimates of a solution $u$ in weighted Sobolev spaces with variable smoothness.
We emphasize that the weight appearing in our estimates with respect to the time variable is allowed to go beyond the scope of Muckenhoupt's class.
\end{abstract}
\mysection{Introduction}
Pseudo-differential operators naturally arise in theories of partial differential equations when one considers equations with fractional smoothness depending on variables.
In this paper, we study the following homogeneous initial value problem:
\begin{equation}\label{eqn:model eqn}
\begin{cases}
\partial_t u(t,x)=\psi(t,-i\nabla)u(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d,
\end{cases}
\end{equation}
where $\psi(t,-i\nabla)$ is a (time-measurable) pseudo-differential operator whose (complex-valued) symbol is $\psi(t, \xi)$, \textit{i.e.}
\begin{align*}
\psi(t,-i\nabla)u(t,x)= \cF^{-1}\left[ \psi(t,\xi) \cF[u(t,\cdot)](\xi) \right] (x).
\end{align*}
Here $\cF$ and $\cF^{-1}$ denote the $d$-dimensional Fourier transform and Fourier inverse transform on $\bR^d$, respectively, \textit{i.e.}
$$
\mathcal{F}[f](\xi) := \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} e^{-i \xi \cdot x} f(x) dx, \,\,\,\,
\mathcal{F}^{-1}[f](x) := \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} e^{ix\cdot \xi} f(\xi) d\xi.
$$
Our goal is to establish a well-posedness result and a maximal regularity theory for equation \eqref{eqn:model eqn} in weighted $L_p$-spaces
with appropriate assumptions on the symbols $\psi(t,\xi)$ depending on weights.
To elaborate these conditions on $\psi(t,\xi)$, we introduce two definitions: a so-called ellipticity condition and a regular upper bound condition.
We say that a symbol $\psi(t,\xi)$ or the operator $\psi(t,-i\nabla)$ satisfies {\bf an ellipticity condition} (with ($\gamma$, $\kappa$))
if there exist $\gamma\in(0,\infty)$ and $\kappa\in(0,1]$ such that
\begin{align}\label{condi:ellipticity}
\frR [-\psi(t,\xi)]\geq \kappa |\xi|^{\gamma},\quad \forall (t,\xi)\in (0,\infty)\times\bR^d,
\end{align}
where $\frR[z]$ denotes the real part of the complex number $z$.
On the other hand, for $n \in \bN$, we say that a symbol $\psi(t,\xi)$ or the operator $\psi(t,-i\nabla)$ has {\bf an $n$-times regular upper bound} (with ($\gamma$, $M$)) if there exist positive constants $\gamma$ and $M$ such that
\begin{align}\label{condi:reg ubound}
|D^{\alpha}_{\xi}\psi(t,\xi)|\leq M|\xi|^{\gamma-|\alpha|},\quad \forall (t,\xi)\in (0,\infty) \times(\bR^{d}\setminus\{0\}),
\end{align}
for any ($d$-dimensional) multi-index $\alpha$ with $|\alpha| \leq n$. The positive number $\gamma$ is called the order of the operator $\psi(t,-i\nabla)$, and we fix $\gamma \in (0,\infty)$ throughout the whole paper.
We want to find a minimal $n$ in \eqref{condi:reg ubound} and a maximal $r$ depending on weights $\mu(dt)$ and $w(dx)$ so that
\begin{align*}
\int_0^T \|u(t,\cdot)\|^q_{H_p^r(\bR^d, w\,dx)}\mu(dt)
\leq N \left(\int_{\bR^d} |u_0(x)|^p w(dx)\right)^{q/p},
\end{align*}
where $N$ is independent of $u_0$ and $H_p^r(\bR^d, w\,dx)$ denotes the classical weighted Bessel potential space (weighted fractional Sobolev space) whose norm is given by
\begin{align*}
\| f \|_{H_p^r(\bR^d,w\,dx)}
:= \| (1 - \Delta)^{r/2} f \|_{L_p(\bR^d,w\,dx)}
= \left(\int_{\bR^d} |(1 - \Delta)^{r/2} f(x)|^p w(dx)\right)^{1/p}
\end{align*}
and
\begin{align*}
(1 - \Delta)^{s/2} f(x)= \cF^{-1}\left[ (1+|\xi|^2)^{s/2} \cF[f](\xi) \right] (x).
\end{align*}
In particular, if $ w(dx) = |x|^{b}dx$ with $ b \in \left(-d,d(p-1)\right)$ and $\mu(dt) = t^adt$ with $a \in (-1,\infty)$, then for $p=q \in [2,\infty)$, $n= \left\lfloor \frac{b+d}{p} \right\rfloor+2$, and $r= \frac{\gamma (a+1)}{p}$, we have
\begin{align*}
\int_0^T \left(\int_{\bR^d} | (1-\Delta)^{r/2}u(t,\cdot)|^p |x|^b dx \right)^{q/p} t^a dt
\leq N \int_{\bR^d} |u_0(x)|^p |x|^b dx,
\end{align*}
which follows easily from our main result; here $\left\lfloor \frac{b+d}{p} \right\rfloor$ denotes the greatest integer less than or equal to $\frac{b+d}{p}$. More generally, we consider $w(dx)= w(x)dx$ with a function $w$ in Muckenhoupt's $A_p$-class and measures $\mu(dt)$ characterized through the Laplace transform of $\mu$.
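As a quick sanity check of this special case, take the heat symbol $\psi(t,\xi)=-|\xi|^2$ (so $\gamma=2$) together with $a=b=0$ and $p=q=2$; then $r=\frac{\gamma(a+1)}{p}=1$ and the estimate above reduces to the classical energy smoothing estimate
\begin{align*}
\int_0^T \|u(t,\cdot)\|_{H_2^1(\bR^d)}^2\, dt
\leq N \int_{\bR^d} |u_0(x)|^2 dx,
\end{align*}
which can be verified directly by Plancherel's theorem, since $\cF[u(t,\cdot)](\xi)=e^{-t|\xi|^2}\cF[u_0](\xi)$ and $\int_0^T (1+|\xi|^2)e^{-2t|\xi|^2}dt \leq T+\frac{1}{2}$.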
To present these results with general weights, however, we first note that the classical weighted fractional Sobolev space is not suited to contain a solution $u$ and initial data $u_0$ in an optimal way. We therefore need generalized weighted Besov and Sobolev spaces with variable smoothness, which restrict solutions and data to an optimal class depending on the given weights.
These generalizations are possible thanks to the Littlewood-Paley theory.
Before introducing these general weighted spaces associated with Littlewood-Paley projections, we first recall the most important weight class in $L_p$-spaces, the so-called Muckenhoupt class.
\begin{defn}
For $p\in(1,\infty)$, let $A_p(\bR^d)$ be the class of all nonnegative and locally integrable functions $w$ satisfying
\begin{align}
\notag
[w]_{A_p(\bR^d)}
&:=\sup_{x_0\in\bR^d,r>0}\left(-\hspace{-0.40cm}\int_{B_r(x_0)}w(x)dx\right)\left(-\hspace{-0.40cm}\int_{B_r(x_0)}w(x)^{-1/(p-1)}dx\right)^{p-1}\\
\label{def ap}
&=\sup_{x_0\in\bR^d,r>0}\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}w(x)dx\right)\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}w(x)^{-1/(p-1)}dx\right)^{p-1}
<\infty,
\end{align}
where $|B_r(x_0)|$ denotes the Lebesgue measure of $B_r(x_0) := \{ x \in \bR^d : |x-x_0| < r\}$.
The class $A_{\infty}(\bR^d)$ is defined as the union of $A_p(\bR^d)$ over all $p\in(1,\infty)$, \textit{i.e.}
$$
A_\infty(\bR^d)=\bigcup_{p\in(1,\infty)}A_p(\bR^d).
$$
\end{defn}
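A standard example is the power weight $w(x)=|x|^{b}$, which belongs to $A_p(\bR^d)$ if and only if $-d<b<d(p-1)$; this explains the range of $b$ in the special case above. Indeed, for balls centered at the origin (the extremal case) a direct computation in polar coordinates gives
\begin{align*}
\left(\frac{1}{|B_r(0)|}\int_{B_r(0)}|x|^{b}dx\right)\left(\frac{1}{|B_r(0)|}\int_{B_r(0)}|x|^{-\frac{b}{p-1}}dx\right)^{p-1}
=\frac{d}{b+d}\left(\frac{d(p-1)}{d(p-1)-b}\right)^{p-1}
\end{align*}
for all $r>0$, which is finite exactly when $-d<b<d(p-1)$.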
To each $w \in A_p(\bR^d)$ we associate a constant that quantifies the smoothness required of our symbols.
For each $w \in A_p(\bR^d)$, we define
\begin{align}\label{2021-01-19-01}
R_{p,d}^{w} := \sup \left\{ p_0 \in (1,2] : w \in A_{p/p_0}(\bR^d) \right\}
\end{align}
and say that $R_{p,d}^{w}$ is \textbf{the regularity constant of the weight $w\in A_p(\bR^d)$}.
Due to the reverse H\"older property of Muckenhoupt's class, $R_{p,d}^{w}$ is well-defined, \textit{i.e.} $R_{p,d}^{w} \in (1,2]$.
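For example, in the unweighted case $w\equiv1$ one has $w\in A_{p/p_0}(\bR^d)$ exactly when $p_0<p$, so that
\begin{align*}
R_{p,d}^{1}=\min\{p,2\};
\end{align*}
in particular, for $p\geq2$ the regularity assumption below requires $\left\lfloor \frac{d}{2}\right\rfloor+2$ derivatives of the symbol, which is comparable to the classical Mikhlin-H\"ormander condition.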
We give a regularity assumption on a symbol $\psi(t,\xi)$ based on the regularity constant of the given weight $w\in A_p(\bR^d)$.
More precisely, we assume that $\psi$ has a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Next we introduce generalized weighted Besov spaces and Sobolev spaces mentioned above.
\begin{defn}\label{def:bessel besov}
We choose a function $\Psi$ in the Schwartz class $\mathcal{S}(\mathbb{R}^d)$ whose Fourier transform $\mathcal{F}[\Psi]$ is nonnegative, supported in the annulus $\{\xi \in \bR^d : \frac{1}{2}\leq |\xi| \leq 2\}$, and $\sum_{j\in\mathbb{Z}} \mathcal{F}[\Psi](2^{-j}\xi) = 1$ for all $\xi \not=0$.
Then we define the Littlewood-Paley projection operators $\Delta_j$ and $S_0$ as $\mathcal{F}[\Delta_j f](\xi) = \mathcal{F}[\Psi](2^{-j}\xi) \mathcal{F}[f](\xi)$ and $S_0f = \sum_{j\leq 0} \Delta_j f$, respectively.
Using the notations $\Delta_j$ and $S_0$, we introduce weighted Bessel potential and Besov spaces. Let $p\in(1,\infty)$, $q\in(0,\infty)$, and $w\in A_p(\bR^d)$.
For sequences $\boldsymbol{r}: \bN \to \bR$ and $\tilde{\boldsymbol{r}}: \bZ \to \bR$, we can define the following Besov and Sobolev spaces with variable smoothness.
\begin{enumerate}[(i)]
\item
((Inhomogeneous) Weighted Bessel potential space)
We denote by $H_p^{\boldsymbol{r}}(\bR^d,w\,dx)$ the space of all $f \in \cS'(\bR^d)$ satisfying
$$
\|f\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}:=\|S_0f\|_{L_p(\bR^d,w\,dx)}+\left\|\left(\sum_{j=1}^{\infty}|2^{\boldsymbol{r}(j)}\Delta_jf|^{2}\right)^{1/2}\right\|_{L_p(\bR^d,w\,dx)}<\infty,
$$
where $\cS'(\bR^d)$ denotes the space of tempered distributions on $\bR^d$.
\item
((Inhomogeneous) Weighted Besov space)
$B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)$ denotes the space of all $f \in \cS'(\bR^d)$ satisfying
$$
\|f\|_{B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)}:=\|S_0f\|_{L_p(\bR^d,w\,dx)}+\left(\sum_{j=1}^{\infty}2^{q\boldsymbol{r}(j)}\|\Delta_jf\|_{L_p(\bR^d,w\,dx)}^q\right)^{1/q}<\infty.
$$
\item
((Homogeneous) Weighted Bessel potential space)
We use $\dot{H}_{p,q}^{\tilde{\boldsymbol{r}}}(\bR^d,w\,dx)$ to denote the space of all $f \in \cS'(\bR^d)$ satisfying
$$
\|f\|_{\dot{H}_{p,q}^{\tilde{\boldsymbol{r}}}(\bR^d,w\,dx)}:=\left\|\left(\sum_{j\in\bZ}|2^{\tilde{\boldsymbol{r}}(j)}\Delta_jf|^{2}\right)^{1/2}\right\|_{L_p(\bR^d,w\,dx)}<\infty.
$$
\item
((Homogeneous) Weighted Besov space)
$\dot{B}_{p,q}^{\tilde{\boldsymbol{r}}}(\bR^d,w\,dx)$ denotes the space of all $f \in \cS'(\bR^d)$ satisfying
$$
\|f\|_{\dot{B}_{p,q}^{\tilde{\boldsymbol{r}}}(\bR^d,w\,dx)}:=\left(\sum_{j\in\bZ}2^{q\tilde{\boldsymbol{r}}(j)}\|\Delta_jf\|_{L_p(\bR^d,w\,dx)}^q\right)^{1/q}<\infty.
$$
\end{enumerate}
\end{defn}
\begin{rem}
Note that if $\boldsymbol{r}(j) = sj$ for $s\in\bR$, then the space $H_p^{\boldsymbol{r}}(\bR^d, w\,dx)$ is equivalent to
the classical weighted Bessel potential space $H_p^s(\bR^d, w\,dx)$ whose norm is given by
$$
\| f \|_{H_p^s(\bR^d,w\,dx)} := \| (1 - \Delta)^{s/2} f \|_{L_p(\bR^d,w\,dx)}
$$
(see Corollary \ref{classical sobo}).
\end{rem}
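\begin{rem}
We also note that, for instance for $f\in L_p(\bR^d,w\,dx)$ with $w\in A_p(\bR^d)$, the weighted Littlewood-Paley theory yields the decomposition
\begin{align*}
f=S_0f+\sum_{j=1}^{\infty}\Delta_jf,
\end{align*}
with convergence in $L_p(\bR^d,w\,dx)$, so the quantities in Definition \ref{def:bessel besov} indeed control $f$ itself.
\end{rem}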
Finally, we are ready to state our main result, Theorem \ref{22.12.27.16.53}.
We first give a rough version of the main result rather than presenting the rigorous conditions and inequalities, which might obscure the whole picture.
Assume that the Laplace transform of a measure $\mu$ is controlled in a $\gamma$-dyadic way with a parameter $a \in (0,\infty)$ as follows:
$$
\cL_{\mu}(2^{\gamma j})
:= \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{\gamma j a} 2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ.
$$
Then for any sequence $\boldsymbol{r}:\bN \to \bR$, we can find a unique solution $u$ to \eqref{eqn:model eqn} so that
\begin{align*}
\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^q
t^a\mu\left(dt\right)
\leq N\|u_0\|^q_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align*}
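For example, for the Lebesgue measure $\mu(dt)=dt$ one has $\cL_{\mu}(\lambda)=\lambda^{-1}$, so the above control holds with $N_{\cL_\mu}=1$ and
\begin{align*}
\boldsymbol{\mu}(j)=\gamma j(a+1),\quad j\in\bZ,
\end{align*}
and the solution gains $\frac{\gamma(a+1)}{q}$ spatial derivatives over the initial data, in accordance with the power-weight case discussed above.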
In particular, our class of measures $\mu(dt)$ includes $w(t)dt$ for any $w \in A_\infty(\bR)$, which may come as a surprise since the weight $w$ can be chosen independently of the exponent $q$; this is believed to be impossible for inhomogeneous problems.
Indeed, Theorem~\ref{22.12.27.16.53} contains precise statements on weights and regularity of symbols.
Regarding previous works related to our results,
we start with the zero-initial-value inhomogeneous problems, \textit{i.e.}
\begin{equation}
\label{20230126 01}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x) + f(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=0,\quad &x\in\bR^d.
\end{cases}
\end{equation}
There is a vast literature on these equations in $L_p$-spaces.
We first mention results handling time-measurable pseudo-differential operators.
We refer readers to \cite{kim2015parabolic,kim2016lplq,kim2016lp,kim2018lp} treating data and solutions in various $L_p$-spaces without weights.
For a weighted theory, see recent result of the first and second authors \cite{Choi_Kim2022}.
Moreover, if the order $\gamma$ of the operator $\psi(t,-i\nabla)$ is less than or equal to $2$, then these operators become generators of so-called additive processes, which are time-inhomogeneous generalizations of L\'evy processes.
In particular, for $\gamma<2$, these operators can be represented in the form of non-local operators.
We refer to \cite{dong2021sobolev,gyongy2021lp,kang2021lp,kim2019lp,kim2012lp,kim2021lq,kim2021sobolev,mikulevivcius1992,mikulevivcius2014,mikulevivcius2017p,mikulevivcius2019cauchy,zhang2013maximal,zhang2013lp} for $L_p$-theories with generators of stochastic processes and non-local operators.
We also introduce papers \cite{neerven2012maximal,neerven2020,portal2019stochastic} handling general operators with smooth symbols in $L_p$-spaces
based on a $H^\infty$-calculus approach.
On the other hand, there are not many results treating non-zero initial value problems with general operators such as \eqref{eqn:model eqn} in $L_p$-spaces. We mention \cite{Choi_Kim2023,Dong_Kim2021,Dong_Liu2022,Gallarati_Veraar2017} as recent results on non-zero initial value problems with general operators.
In \cite{Choi_Kim2023}, the first and second authors considered initial value problems with generators of general additive processes.
In \cite{Gallarati_Veraar2017}, Gallarati and Veraar studied an $L_p$-maximal regularity theory of evolution equations with general operators by means of $H^\infty$-calculus.
In particular, in \cite[Section 4.4]{Gallarati_Veraar2017}, they obtained solvability for an equation of the type of \eqref{eqn:model eqn} in $L_p((0,T),v\, dt; X_1)\cap W_{p}^{1}((0,T),v\, dt; X_0)$.
Here $X_0$ and $X_1$ are Banach spaces which have a finite cotype and the initial data space $X_{v,p}$ is given by a trace of a certain interpolated space determined by $X_0$ and $X_1$, \textit{i.e.}
$$
X_{v,p}:=\{x\in X_0:\|x\|_{X_{v,p}}<\infty\},\quad \|x\|_{X_{v,p}}:=\inf\{\|u\|_{L_p((0,T),v\, dt; X_1)\cap W_{p}^{1}((0,T),v\, dt; X_0)}:u(0)=x\}.
$$
It is not easy to identify $X_{v,p}$ as a certain interpolation space in general.
However, this abstract space can be characterized by a real interpolation space if $v(t)$ is given by a power-type weight $v(t)=t^\gamma$.
Indeed, due to \cite[Theorem 1.8.2]{triebel1978interpolation},
$$
X_{v,p}=(X_0,X_1)_{1-\frac{(1+\gamma)}{p},p},\quad \gamma\in(-1,p-1),
$$
where $(X_0,X_1)_{\theta,p}$ stands for the real interpolation space between $X_0$ and $X_1$.
We refer to \cite{Hemmel_Lindemulder2022,Kohne2010,Kohne2014,Lindemulder2020,Meyries2012}, which provide such an identification for initial data spaces with the power type weights in various situations such as smooth domains, quasi-linear equations, and systems of equations.
For another generalization, time-fractional non-zero initial value problems were considered in \cite{Dong_Kim2021,Dong_Liu2022}.
The authors in \cite{Dong_Kim2021,Dong_Liu2022} considered the following second-order problems with general coefficients:
\begin{equation}\label{eqn:time frac}
\begin{cases}
\partial_t^\alpha u(t,x) = a^{ij}(t,x)D_{x_i x_j}u(t,x) + f(t,x), \quad &(t,x) \in (0,T)\times \mathbb{R}^d,\\
u(0, x) = u_0(x), \quad &x\in \bR^d.
\end{cases}
\end{equation}
In \cite{Dong_Kim2021}, a weighted Slobodeckij space is used as the initial data space to obtain the solvability of \eqref{eqn:time frac} in $L_q((0,T),t^\gamma\, dt; W_p^2(\bR^d,w\,dx))$ for $\alpha \in (0,1]$.
In \cite{Dong_Liu2022}, the range of $\alpha$ is extended to $(0,2)$.
Next we describe the methods used to obtain our result. To compare the difficulties of the initial value problem with those of inhomogeneous problems such as \eqref{20230126 01}, recall that, at least formally, a solution $u$ to \eqref{20230126 01} is given by
\begin{align}\label{inhom soln}
u(t,x) = \int_0^t \int_{\mathbb{R}^d} K(t,s, x-y) f(s,y) dyds,
\end{align}
where
\begin{align*}
K(t,s, x-y) = \cF^{-1}\left[ \exp\left( \int_s^t \psi(r,\xi)dr \right) \right](x-y).
\end{align*}
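For instance, for the heat symbol $\psi(t,\xi)=-|\xi|^2$ one computes
\begin{align*}
K(t,s,x)=\cF^{-1}\left[e^{-(t-s)|\xi|^2}\right](x)=\left(2(t-s)\right)^{-d/2}\exp\left(-\frac{|x|^2}{4(t-s)}\right),\quad 0\leq s<t,
\end{align*}
which is the classical Gauss-Weierstrass kernel $(4\pi(t-s))^{-d/2}e^{-|x|^2/(4(t-s))}$ up to the factor $(2\pi)^{d/2}$ coming from our normalization of the Fourier transform.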
Since the operator $f \mapsto \int_0^t \int_{\mathbb{R}^d} 1_{s<t} (-\Delta)_x^{\gamma/2}K(t,s, x-y) f(s,y) dyds$ is a singular integral operator on $L_p(\bR^{d+1})$,
one can apply the well-known weighted $L_p$-boundedness of singular integral operators with Muckenhoupt $A_p$-weights to obtain the maximal regularity of a solution $u$ in weighted $L_p$-spaces. However, a solution $u$ to \eqref{eqn:model eqn} is given by
\begin{align*}
u(t,x) = \int_{\mathbb{R}^d} K(t,0, x-y) u_0(y) dy
\end{align*}
and the integral operator $u_0 \mapsto \int_{\mathbb{R}^d} K(t,0, x-y) u_0(y) dy$ is an extension operator from $L_p(\bR^d)$ to $L_p(\bR^{d+1})$.
Thus we cannot apply the weighted $L_p(\bR^{d+1})$-boundedness of singular integral operators, which plays a very important role in inhomogeneous problems.
Moreover, for the inhomogeneous case, an $L_q(L_p)$-extension of $L_p$-estimates is easily obtained from Rubio de Francia's extrapolation theorem. However, our weight class with respect to the time variable is larger than Muckenhoupt's $A_p(\bR)$-class, which prevents us from adopting Rubio de Francia's powerful theory for $L_q(L_p)$-extensions.
To overcome all these difficulties, we use the Littlewood-Paley theory and the Laplace transform.
Roughly speaking, we estimate each Littlewood-Paley projection of a solution with the help of the Hardy-Littlewood maximal function and the Fefferman-Stein sharp function, at optimal scales given by the Laplace transform of the weighted measure with respect to the time variable.
It is also remarkable that the generality of the weight $\mu$ makes our result on initial value problems more valuable, since such weights cannot be covered through inhomogeneous problems. Commonly, the generality of the free data $f$ in \eqref{20230126 01} leads one to reduce a non-zero initial value problem to one for a simple model operator. In more detail, consider a solution $u_1$ to
\begin{equation*}
\begin{cases}
\partial_tu_1(t,x)= \Delta^{\gamma/2}u_1(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u_1(0,x)=u_0(x),\quad &x\in\bR^d,
\end{cases}
\end{equation*}
and then, by taking $f=\psi(t,-i\nabla)u_1(t,x)-\Delta^{\gamma/2}u_1(t,x)$ in \eqref{20230126 01}, also consider $u_2$ such that
\begin{equation*}
\begin{cases}
\partial_tu_2(t,x)=\psi(t,-i\nabla)u_2(t,x) +\psi(t,-i\nabla)u_1(t,x)-\Delta^{\gamma/2}u_1(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u_2(0,x)=0,\quad &x\in\bR^d.
\end{cases}
\end{equation*}
Then, by the linearity of the equations, the function $u:=u_1+u_2$ is a solution to
\begin{equation*}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d.
\end{cases}
\end{equation*}
A weighted estimate of a solution $u$ to \eqref{eqn:model eqn} could then be recovered from
weighted estimates of both $u_1$ and $u_2$.
However, this scheme cannot be carried out in full since, as mentioned above, the admissible weight class for $u_2$ is smaller than the one for $u_1$; this makes our direct estimate for the initial value problem \eqref{eqn:model eqn} all the more novel.
Apart from this novelty, even for the model operator $\Delta^{\gamma/2}$, our weighted estimate is new thanks to the generality of the measure $\mu$.
We finish the introduction by presenting special cases of the main result for easy application.
\begin{thm}
\label{22.02.03.13.35}
Let $T \in (0,\infty)$, $p\in(1,\infty)$, $q\in(0,\infty)$, $a \in (0,\infty)$, $w\in A_p(\bR^d)$, and $\mu$ be a nonnegative Borel measure on $[0,\infty)$ whose Laplace transform is well-defined, \textit{i.e.}
\begin{align*}
\cL_{\mu}(\lambda):=\int_0^{\infty}e^{-\lambda t}\mu(dt)<\infty,\quad \forall\lambda\in(0,\infty).
\end{align*}
Assume that for any $k \in (1,\infty)$,
\begin{align}
\label{upper mu scaling}
N_k:=\sup \frac{\mu(kI)}{\mu(I)} < \infty,
\end{align}
where the supremum is taken over all open intervals $I \subset (0,\infty)$.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$ $($see \eqref{condi:ellipticity}, \eqref{condi:reg ubound}, and \eqref{2021-01-19-01}$)$.
Then for any $u_0\in B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}_a}{q}}(\bR^d,w\,dx)$, there exists a unique solution $u\in L_q((0,T),t^a\mu;H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx))$ to \eqref{eqn:model eqn} so that
\begin{align}
\label{int a priori 1}
\left(\int_{0}^T \|u(t,\cdot)\|_{H^{\boldsymbol{\gamma}}_p(\bR^d,w\,dx)}^q t^a\mu(dt)\right)^{1/q}
\leq N\left(1+\mu_{a,T}^{1/q}\right)\|u_0\|_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}_a}{q}}(\bR^d,w\,dx)},
\end{align}
and
\begin{align}
\label{int a priori 2}
\left(\int_{0}^T\left\||\psi(t,-i\nabla)u(t,\cdot)| +|\Delta^{\gamma/2}u(t,\cdot)|\right\|_{L_p(\bR^d,w\,dx)}^qt^a\mu(dt)\right)^{1/q}\leq N\|u_0\|_{\dot{B}_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}_a}{q}}(\bR^d,w\,dx)},
\end{align}
where $[w]_{A_p(\bR^d)}\leq K$, $N=N(a,d,\gamma,\kappa,N_{16^{\gamma}\kappa^{-1}/(1\wedge q)},K,M,p,q)$, $\boldsymbol{\gamma}(j)=\gamma j$,
$\mu_{a,T}=\int_0^T t^a\mu\left(dt\right)$, and
$$
\boldsymbol{\mu}_a(j):=\gamma ja-\log_2(\cL_{\mu}(2^{j\gamma})),\quad \forall j\in\bZ.
$$
\end{thm}
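As an example of an admissible measure in Theorem \ref{22.02.03.13.35}, take $\mu(dt)=t^{c}dt$ with $c>-1$; a change of variables gives $\mu(kI)=k^{c+1}\mu(I)$ for every open interval $I\subset(0,\infty)$, so \eqref{upper mu scaling} holds with
\begin{align*}
N_k=k^{c+1},
\end{align*}
and the Laplace transform $\cL_{\mu}(\lambda)=\Gamma(c+1)\lambda^{-(c+1)}$ is finite for all $\lambda\in(0,\infty)$.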
As the most important case, we state a second-order case with a power-type weight in time.
To the best of our knowledge, even this second-order case is new, since $a\in (-1,\infty)$ below.
\begin{thm}
\label{second-order case}
Let $T \in (0,\infty)$, $p\in (1,\infty)$, $a \in (-1,\infty)$, and $w\in A_p(\bR^d)$.
Assume that $a^{ij}(t)$ is a measurable function on $(0,T)$ for all $i,j \in \{1,\ldots,d\}$ and there exist positive constants $\kappa$ and $M$ such that
\begin{align}
\label{ellip coefficient}
\kappa |\xi|^2 \leq a^{ij}(t) \xi^i \xi^j \leq M |\xi|^2 \qquad \forall (t,\xi) \in (0,T) \times \bR^d.
\end{align}
Then for any $u_0\in L_p(\bR^d,w\,dx)$, there exists a unique solution $u\in L_{p \vee 2}\left((0,T),t^adt;H^{2(a+1)/(p\vee 2)}_p(\bR^d,w\,dx)\right)$ to
\begin{align}
\notag
&\partial_tu(t,x)=a^{ij}(t)u_{x^ix^j}(t,x),\quad (t,x)\in(0,T)\times\bR^d,\\
\label{second eqn}
&u(0,x)=u_0(x),\quad x\in\bR^d,
\end{align}
where $p \vee 2:= \max\{p,2\}$.
Moreover, $u$ satisfies the following estimation:
\begin{align}
\label{second a priori}
\int_{0}^T \|u(t,\cdot)\|_{ H^{2(a+1)/(p \vee 2)}_p(\bR^d,w\,dx)}^{ (p \vee 2)} t^a dt
\leq N \left(\int_{\bR^d} |u_0(x)|^p w(x)dx\right)^{(p\vee 2)/p},
\end{align}
where $[w]_{A_p(\bR^d)}\leq K$ and $N=N(a,d,\kappa,M,K,T,p)$.
\end{thm}
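We note that the symbol corresponding to \eqref{second eqn} is $\psi(t,\xi)=-a^{ij}(t)\xi^i\xi^j$ (with summation over repeated indices), which satisfies the ellipticity condition \eqref{condi:ellipticity} with order $\gamma=2$ by \eqref{ellip coefficient}; moreover, since
\begin{align*}
D^{\alpha}_{\xi}\psi(t,\xi)=0\quad \text{whenever } |\alpha|\geq3,
\end{align*}
$\psi$ has an $n$-times regular upper bound \eqref{condi:reg ubound} for every $n\in\bN$ with a constant $N(d)M$ in place of $M$.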
In Section \ref{22.12.27.14.47}, we derive Theorems \ref{22.02.03.13.35} and \ref{second-order case} from Theorem \ref{22.12.27.16.53}, in which the case of a more general Borel measure $\mu$ is considered, and provide useful examples and corresponding inhomogeneous results in Section \ref{22.12.28.13.34}. The proof of the main result (Theorem \ref{22.12.27.16.53}) is given in Section \ref{pf main thm}, and the proof of a key estimate used to obtain the main theorem is given in Section \ref{sec:prop}. Finally, in the appendix, we present the weighted multiplier and Littlewood-Paley theories used to prove our main theorem.
\subsection*{Notations}
\begin{itemize}
\item
For $p \in [1,\infty)$, a normed space $F$, and a measure space $(X,\mathcal{M},\mu)$, we denote by $L_{p}(X,\cM,\mu;F)$ the space of all $\mathcal{M}^{\mu}$-measurable functions $u : X \to F$ with the norm
\[
\left\Vert u\right\Vert _{L_{p}(X,\cM,\mu;F)}:=\left(\int_{X}\left\Vert u(x)\right\Vert _{F}^{p}\mu(dx)\right)^{1/p}<\infty
\]
where $\mathcal{M}^{\mu}$ denotes the completion of $\cM$ with respect to the measure $\mu$. We also denote by $L_{\infty}(X,\cM,\mu;F)$ the space of all $\mathcal{M}^{\mu}$-measurable functions $u : X \to F$ with the norm
$$
\|u\|_{L_{\infty}(X,\cM,\mu;F)}:=\inf\left\{r\geq0 : \mu(\{x\in X:\|u(x)\|_F\geq r\})=0\right\}<\infty.
$$
If there is no confusion for the given measure and $\sigma$-algebra, we usually omit them.
\item
For $\cO\subseteq \bR^d$, we denote by $\cB(\cO)$ the set of all Borel sets contained in $\cO$.
\item
For $\cO\subset \fR^d$ and a normed space $F$, we denote by $C(\cO;F)$ the space of all $F$-valued continuous functions $u : \cO \to F$ with the norm
\[
|u|_{C}:=\sup_{x\in O}|u(x)|_F<\infty.
\]
\item
We write $a \lesssim b$ if there is a positive constant $N$ such that $ a\leq N b$.
We use $a \approx b$ if $a \lesssim b$ and $b \lesssim a$.
If we write $N=N(a,b,\cdots)$, this means that the
constant $N$ depends only on $a,b,\cdots$.
A generic constant $N$ may change from a location to a location, even within a line.
The dependence of a generic constant $N$ is usually specified in each statement of theorems, propositions, lemmas, and corollaries.
\item
For $a,b\in \bR$,
$$
a \wedge b := \min\{a,b\},\quad a \vee b := \max\{a,b\},\quad \lfloor a \rfloor:=\max\{n\in\bZ: n\leq a\}.
$$
\item
For $r>0$,
$$
B_r(x):=\{y\in\bR^d:|x-y|< r\},\quad
\overline{B_r(x)}:=\{y\in\bR^d:|x-y|\leq r\}.
$$
\item Let $\mu$ be a nonnegative Borel measure on $(0,\infty)$ and $c$ be a positive constant.
We use the notation $\mu(c\,dt)$ to denote the scaled measure defined by
$$
\int_{\bR^d}f(t)\mu(c\,dt):=\int_{\bR^d}f(t/c)\mu(dt).
$$
\end{itemize}
\mysection{Main results and proofs of Theorems \ref{22.02.03.13.35} and \ref{second-order case}}
\label{22.12.27.14.47}
\subsection{Main results}
To state the main result, we need the following definitions concerning sequences and functions.
\begin{defn}
\label{doubling sequence}
We say that a sequence $\boldsymbol{r} : \bZ \to \bR$ has {\bf a controlled difference} if
$$
\|\boldsymbol{r}\|_d:=\sup_{j\in\bZ}|\boldsymbol{r}(j+1)-\boldsymbol{r}(j)|<\infty.
$$
\end{defn}
\begin{defn}
\label{weight}
Let $ a \in \bR$. We say that a function $\phi : (0,\infty) \to \bR$ is {\bf controlled by a sequence $\boldsymbol{\mu} : \bZ \to \bR$ in a $\gamma$-dyadic way with a parameter $a$} if there exists a positive constant $N_\phi$ such that
$$
\phi(2^{\gamma j}) \leq N_\phi 2^{j\gamma a}2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ.
$$
$$
\end{defn}
Since we handle general weights, it may be impossible to make sense of a solution to \eqref{eqn:model eqn} even in a weak formulation.
However, it is still possible to approximate a solution $u$ by nice functions each of which satisfies the equation in a strong sense.
Thus, to give the exact meaning of a solution to \eqref{eqn:model eqn}, we need to consider a space of smooth functions approximating a solution.
Due to the lack of regularity of the symbol $\psi$, the classical Schwartz class is not suited to this purpose, which leads us to consider a larger class consisting of locally integrable smooth functions. We use the notation $C_{p}^{1,\infty}([0,T]\times\bR^d)$ to denote this space.
Here is a rigorous mathematical definition.
\begin{enumerate}[(i)]
\item The space $C_p^{\infty}([0,T]\times\bR^d)$ denotes the set of all $\cB([0,T]\times\bR^d)$-measurable functions $f$ on $[0,T]\times\bR^d$ such that for any multi-index $\alpha$ with respect to the space variable $x$,
\begin{equation*}
D^{\alpha}_{x}f\in L_{\infty}([0,T];L_2(\bR^d)\cap L_p(\bR^d)).
\end{equation*}
\item The space $C_p^{1,\infty}([0,T]\times\bR^d)$ denotes the set of all $f\in C_p^{\infty}([0,T]\times\bR^d)$ such that
$\partial_tf\in C_p^{\infty}([0,T]\times\bR^d)$ and for any multi-index $\alpha$ with respect to the space variable $x$,
\begin{equation*}
D^{\alpha}_{x}f\in C([0,T]\times \bR^d),
\end{equation*}
where $\partial_tf(t,\cdot)$ denotes the right derivative and the left derivative at $t=0$ and $t=T$, respectively.
\end{enumerate}
Our main result is the following sufficient condition for the existence and uniqueness of a solution in weighted $L_p$-spaces with variable smoothness.
\begin{thm}
\label{22.12.27.16.53}
Let $T \in (0,\infty)$, $p\in(1,\infty)$, $q\in(0,\infty)$, $a \in (0,\infty)$, $w\in A_p(\bR^d)$, $\mu$ be a nonnegative Borel measure on $(0,\infty)$, and
$\boldsymbol{r}:\bZ\to(-\infty,\infty)$, $\boldsymbol{\mu}:\bZ\to(-\infty,\infty)$ be sequences having a controlled difference.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$ $($see \eqref{condi:ellipticity}, \eqref{condi:reg ubound}, and \eqref{2021-01-19-01}$)$.
Additionally, assume that the Laplace transform of $\mu$ is controlled by a sequence $\boldsymbol{\mu}$ in a $\gamma$-dyadic way with parameter $a$, \textit{i.e.}
\begin{align}
\label{20230210 01}
\cL_{\mu}(2^{\gamma j})
:= \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{j\gamma a}2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ.
\end{align}
Then for any $u_0\in B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)$, there exists a unique solution
$$
u\in L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)
$$
to \eqref{eqn:model eqn}.
Moreover, if $u \in C_{p}^{1,\infty}([0,T]\times\bR^d) $ and $u_0 \in C_c^\infty(\bR^d)$, then the following \textit{a priori} estimates hold:
\begin{align}
\label{main a priori est 0}
\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^q
t^a\mu\left(\frac{(1\wedge q) \kappa }{16^\gamma}dt\right)
\leq N(1+\mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
and
\begin{align}
\notag
&\int_{0}^T\left\||\psi(t,-i\nabla)u(t,\cdot)|+|\Delta^{\gamma/2}u(t,\cdot)|\right\|_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\\
\label{main a priori est}
&\leq N\|u_0\|^q_{\dot B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
where $[w]_{A_p(\bR^d)}\leq K$, $N=N(a,\|\boldsymbol{r}\|_d, \|\boldsymbol{\mu}\|_d, N_{\cL_\mu},d,\gamma,\kappa,K,M,p,q)$, and
\begin{align*}
\mu_{a,T,\kappa,\gamma,q}=\int_0^T t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right).
\end{align*}
\end{thm}
The proof of this main theorem will be given in Section \ref{pf main thm}.
\begin{rem}
Let $\mu$ be a nonnegative Borel measure on $(0,\infty)$ and
assume that
\begin{align}
\label{20230210 03}
\inf_{j \in \bZ}\frac{\cL_{\mu}(2^{\gamma (j+1)})}{\cL_{\mu}(2^{\gamma j})} >0,
\end{align}
where
\begin{align*}
\cL_{\mu}(2^{\gamma j})
:= \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt).
\end{align*}
Then there is an easy way to obtain the sequence $\boldsymbol{\mu}$ describing the optimal regularity improvement in Theorem \ref{22.12.27.16.53}.
Indeed, since
\begin{align*}
\cL_{\mu}(2^{\gamma j})
=2^{\log_2\left( \cL_{\mu}(2^{\gamma j}) \right)}
=2^{j\gamma a} 2^{-j\gamma a + \log_2\left( \cL_{\mu}(2^{\gamma j}) \right)},
\end{align*}
we can take
\begin{align*}
\boldsymbol{\mu}(j) = j\gamma a -\log_2\left( \cL_{\mu}(2^{\gamma j}) \right)
\end{align*}
so that \eqref{20230210 01} holds with $N_{\cL_\mu}=1$.
In addition, it is obvious that $\boldsymbol{\mu}$ has a controlled difference due to \eqref{20230210 03}
and the fact that $\cL_{\mu}(2^{\gamma j})$ is non-increasing as $j$ increases.
\end{rem}
\begin{rem}
The most interesting example of $\mu$ in Theorem \ref{22.12.27.16.53} is $\mu(dt) = w(t)dt$ with $w \in A_\infty(\bR)$.
In the next section, we give the details showing that the class of measures $\mu$ in Theorem \ref{22.12.27.16.53} includes $\mu(dt) = w(t)dt$ for all $w \in A_\infty(\bR)$.
However, we emphasize that our measure $\mu(dt)$ does not need to have a density.
In other words, the class of our measures $\mu$ is larger than Muckenhoupt's class.
Indeed, for any $t_0 \in (0,\infty)$, consider the Dirac delta measure centered at $t_0$, \textit{i.e.} $\mu = \delta_{t_0}$.
Then \eqref{20230210 03} holds since
\begin{align*}
\inf_{j \in \bZ}\frac{\cL_{\mu}(2^{\gamma (j+1)})}{\cL_{\mu}(2^{\gamma j})}
=2^{-\gamma t_0}.
\end{align*}
Therefore, taking
\begin{align*}
\boldsymbol{\mu}(j) = j\gamma a -\log_2\left(\exp\left( - 2^{\gamma j} t_0 \right) \right),
\end{align*}
we could get a regularity improvement of a solution $u$ as much as $\frac{\boldsymbol{\mu}}{q}$ with respect to the space variable $x$ from Theorem
\ref{22.12.27.16.53}.
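In this Dirac example the sequence above admits a closed form; the following simplification, which merely expands the display above, makes the size of the gain explicit:
\begin{align*}
\boldsymbol{\mu}(j) = j\gamma a -\log_2\left(e^{- 2^{\gamma j} t_0} \right) = j\gamma a + 2^{\gamma j}\, t_0 \log_2 e \quad \forall j \in \bZ,
\end{align*}
so for large $j$ the improvement $\boldsymbol{\mu}(j)/q$ grows like $2^{\gamma j} t_0/(q \ln 2)$.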
\end{rem}
We finally present the mathematical meaning of our solution.
\begin{defn}[Solution]
\label{def sol}
Assume that all parameters are given as in Theorem \ref{22.12.27.16.53}.
We say that $u \in L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa }{16^\gamma}dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)$ is a solution to \eqref{eqn:model eqn} if
there exists a sequence $u_n\in C_p^{1,\infty}([0,T]\times\bR^d)$ such that $u_n(0,\cdot)\in C_c^{\infty}(\bR^d)$ and
\begin{equation}
\label{202301014 01}
\begin{gathered}
\partial_tu_n(t,x)=\psi(t,-i\nabla)u_n(t,x),\quad \forall (t,x)\in(0,T)\times\bR^d,\\
u_n(0,\cdot)\to u_0 \quad\text{in}\quad B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx),\\
u_n\to u\quad \text{in}\quad L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa }{16^\gamma}dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)
\end{gathered}
\end{equation}
as $n\to\infty$.
Due to this definition, we have \eqref{main a priori est 0} for any solution
$$
u\in L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)
$$
to \eqref{eqn:model eqn} with the corresponding $u_0 \in B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)$.
\end{defn}
\begin{rem}
Since our equation is linear, by using the \textit{a priori} estimate we may regard our solution as a strong solution in $L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma} dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)$.
Indeed, due to \eqref{main a priori est}, for all $n, m \in \bN$, we have
\begin{align*}
&\int_{0}^T\left\|\partial_t(u_n - u_m)(t,\cdot)\right\|_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\\
&=\int_{0}^T\left\|\psi(t,-i\nabla)(u_n - u_m)(t,\cdot)\right\|_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\\
&\leq N\|u_n(0,\cdot) - u_m(0,\cdot)\|^q_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align*}
Thus defining $\partial_tu$ and $\psi(t,-i\nabla)u$ as the limits of $\partial_tu_n$ and $\psi(t,-i\nabla)u_n$ in
$$
L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)\right),
$$
respectively, we understand that our solution $u$ is a strong solution.
However, the limits $\partial_tu$ and $\psi(t,-i\nabla)u$ are not well-defined in general; that is, they could depend on the choice of the sequence $u_n$ approximating $u$.
To make the limits $\partial_tu$ and $\psi(t,-i\nabla)u$ uniquely determined, we need the condition $\boldsymbol{r}-\boldsymbol{\gamma} \geq 0$ and an additional condition on the measure $\mu$, which will be specified in the next remark.
\end{rem}
\begin{rem}
To show that our solution becomes the classical weak solution, we need an extra condition on the measure $\mu$.
For simplicity, we may assume that the scaling constant $\frac{(1\wedge q) \kappa}{16^\gamma}$ is 1.
Additionally, assume that the measure $\mu(dt)$ has a density $\mu(t)$ and $t^{-a} (1/\mu)(t)$ is locally in $L_{q/(q-1)}$, \textit{i.e.} $\mu(dt)=\mu(t)dt$ and
$\int_0^{T_0} t^{-\frac{aq}{q-1}} (\mu(t))^{-\frac{q}{q-1}} dt < \infty$ for all $T_0 \in (0,T)$.
Let $u$ be a solution to \eqref{eqn:model eqn}. Then there exists a sequence $u_n\in C_p^{1,\infty}([0,T]\times\bR^d)$ such that \eqref{202301014 01} holds.
Then for any $n \in \bN$ and $\varphi \in C_c^\infty ( (0,T) \times \bR^d)$, we have
\begin{align}
\label{20230114 02}
-\int_0^T\int_{\bR^d} u_n(t,x) \partial_t\varphi(t,x) dt dx = \int_0^T \int_{\bR^d} u_n(t,x) \overline{\psi}(t,-i\nabla) \varphi(t,x) dt dx,
\end{align}
where
\begin{align*}
\overline{\psi}(t,-i\nabla)u(t,x)= \cF^{-1}\left[ \overline{\psi(t,\xi)} \cF[u(t,\cdot)](\xi) \right] (x)
\end{align*}
and $\overline{\psi(t,\xi)}$ denotes the complex conjugate of $\psi(t,\xi)$.
Moreover, applying H\"older's inequality, we have
\begin{align*}
&\left|\int_0^T\int_{\bR^d} u_n(t,x) \partial_t\varphi(t,x) dt dx -\int_0^T\int_{\bR^d} u(t,x) \partial_t\varphi(t,x) dt dx \right| \\
&\leq N\| u_n -u\|_{L_q((0,T),t^a \mu;L_p(\bR^d,w\,dx))} \left(\int_0^{T_0} t^{-\frac{aq}{q-1}} (\mu(t))^{-\frac{q}{q-1}} dt\right)^{\frac{q-1}{q}},
\end{align*}
where $T_0$ is a positive number such that $\partial_t \varphi(t,x) = 0$ for all $t \in [T_0,T)$.
Thus
\begin{align*}
\lim_{n \to \infty}\int_0^T\int_{\bR^d} u_n(t,x) \partial_t\varphi(t,x) dt dx
=\int_0^T\int_{\bR^d} u(t,x) \partial_t\varphi(t,x) dt dx.
\end{align*}
Similarly,
\begin{align*}
\lim_{n\to \infty}\int_0^T \int_{\bR^d} u_n(t,x) \overline{\psi}(t,-i\nabla) \varphi(t,x) dt dx
=\int_0^T \int_{\bR^d} u(t,x) \overline{\psi}(t,-i\nabla) \varphi(t,x) dt dx.
\end{align*}
Finally, taking the limit in \eqref{20230114 02}, we show that our solution becomes a classical weak solution, \textit{i.e.}
\begin{align}
\label{20230114 03}
-\int_0^T\int_{\bR^d} u(t,x) \partial_t\varphi(t,x) dt dx = \int_0^T \int_{\bR^d} u(t,x) \overline{\psi}(t,-i\nabla) \varphi(t,x) dt dx
\end{align}
for all $\varphi \in C_c^\infty ( (0,T) \times \bR^d)$.
\end{rem}
\begin{rem}
We want to emphasize that \eqref{main a priori est 0} and \eqref{main a priori est} can be distinguished.
In other words, there could be two different measures $\mu_1$ and $\mu_2$ such that $\boldsymbol{\mu_1}(j) = \boldsymbol{\mu_2}(j)$ for all $j \in \bN$ but $\boldsymbol{\mu_1}(k) \neq \boldsymbol{\mu_2}(k)$ for some $k \in \bZ$.
A concrete example taking $\mu_1(dt) = t^{b_1}dt$ and $\mu_2(dt) = \left(t^{b_1} + t^{b_2}\right)dt$ will be given in Example \ref{conc example}.
\end{rem}
We finish the section by proving Theorem \ref{22.02.03.13.35} and Theorem \ref{second-order case}.
\begin{proof}[Proof of Theorem \ref{22.02.03.13.35}]
It is easy to check that \eqref{upper mu scaling} implies that
\begin{equation}
\label{22.10.03.15.43}
\cL_{\mu}(\lambda)\leq N_k\cL_{\mu}(k\lambda),\quad \forall \lambda\in(0,\infty).
\end{equation}
Recall that a sequence $\boldsymbol{\mu}_a$ is defined by
$$
\boldsymbol{\mu}_a(j):=\gamma ja-\log_2(\cL_{\mu}(2^{j\gamma})).
$$
We first show that $\boldsymbol{\mu}_a$ has a controlled difference.
By the definition of the Laplace transform, $\cL_\mu(\lambda)$ is non-increasing with respect to $\lambda$.
Moreover, by \eqref{22.10.03.15.43},
$$
1\leq \frac{\cL_{\mu}(2^{\gamma j})}{\cL_{\mu}(2^{\gamma(j+1)})}\leq N_{2^{\gamma}}.
$$
Thus
$$
\sup_{j\in\bZ}|\boldsymbol{\mu}_a(j+1)-\boldsymbol{\mu}_a(j)|\leq \gamma a+\log_2(N_{2^{\gamma}})<\infty.
$$
Next we prove that the Laplace transform of $\mu$ is controlled by a sequence $\boldsymbol{\mu}_a$ in a $\gamma$-dyadic way with a parameter $a$, \textit{i.e.}
$$
\cL_{\mu}(2^{\gamma j})
:= \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{j\gamma a}2^{-\boldsymbol{\mu}_a(j)},\quad \forall j\in\bZ.
$$
Indeed, this is a direct consequence of the definition of $\boldsymbol{\mu}_a$, with $N_{\cL_\mu}=1$.
Therefore, we obtain the existence and uniqueness result of Theorem \ref{22.02.03.13.35} and the following estimates from Theorem \ref{22.12.27.16.53} with $\boldsymbol{r}=\boldsymbol{\gamma}$:
\begin{align*}
\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q
t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq N(1+\mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align*}
and
\begin{align*}
\int_{0}^T\left\||\partialsi(t,-i\nabla)u(t,\cdot)|+|\Delta^{\gamma/2}u(t,\cdot)|\right\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\leq N\|u_0\|^q_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align*}
Note that all above estimates hold even for the measure $\mu(c\,dt)$ with $c>0$ since we have
\begin{align}
N_k:=\sup \frac{\mu(ckI)}{\mu(cI)} = \sup \frac{\mu(kI)}{\mu(I)},
\end{align}
where the supremum is taken over all open intervals $I \subset (0,\infty)$.
Therefore considering the measure
$$
\mu\left( \frac{16^\gamma}{(1\wedge q) \kappa}dt\right)
$$
instead of $\mu(dt)$, we finally have \eqref{int a priori 1} and \eqref{int a priori 2}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{second-order case}]
For $f \in \cS(\bR^d)$, it follows that
\begin{align*}
a^{ij}(t)f_{x^ix^j}
=-\cF^{-1}\left[ a^{ij}(t) \xi^i \xi^j \cF[f] (\xi) \right].
\end{align*}
Then due to \eqref{ellip coefficient}, the symbol $\psi(t,\xi):=-a^{ij}(t) \xi^i \xi^j$ satisfies an ellipticity condition with $(2,\kappa)$ and has an $n$-times regular upper bound with $(2,M)$ for any $n \in \bN$, by using the trivial extension $a^{ij}(t)=a^{ij}(T)$ for all $ t \geq T$.
Take $a_0 \in (0,1)$ such that $a-a_0 > -1$, and choose the exponent $q$ depending on the range of $p$ as follows:
\begin{align*}
\begin{cases}
&q=2 \quad \text{if $p \in (1,2)$} \\
&q=p \quad \text{if $p \in [2,\infty)$}.
\end{cases}
\end{align*}
Put $\boldsymbol{r}(j) = 2j(a+1)/q$ for all $j \in \bZ$, and $\mu(dt)=t^{a-a_0} dt$.
By a simple change of variable,
\begin{align*}
\cL_{\mu}(2^{2 j})
\leq 2^{-2 j(a-a_0 +1)}\cL_{\mu}(1)
= 2^{ 2j a_0} 2^{-2 j (a +1)}\cL_{\mu}(1).
\end{align*}
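This change of variable can be made explicit; we spell it out for the reader's convenience. Substituting $s=2^{2j}t$,
\begin{align*}
\cL_{\mu}(2^{2 j})=\int_0^\infty e^{-2^{2j}t}\,t^{a-a_0}\,dt
= 2^{-2 j(a-a_0 +1)}\int_0^\infty e^{-s}\,s^{a-a_0}\,ds
= 2^{-2 j(a-a_0 +1)}\,\cL_{\mu}(1),
\end{align*}
where the integral converges since $a-a_0>-1$; in particular, the inequality above in fact holds with equality.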
Set
$$
\boldsymbol{\mu}(j) = 2 j (a+1) \quad \forall j \in \bZ.
$$
Then the Laplace transform of $\mu$ is controlled by a sequence $\boldsymbol{\mu}$ in a $2$-dyadic way with parameter $a_0$.
Therefore by Theorem \ref{22.12.27.16.53},
for any $u_0\in B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)$, there exists a unique solution
$$
u\in L_q\left((0,T),t^{a_0} \mu\left(\frac{(1\wedge q) \kappa}{16^2} dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)
$$
to \eqref{second eqn} and $u$ satisfies
\begin{align}
\notag
&\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{2(a+1)/q}(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^2}dt\right)\\
&=\left(\frac{(1\wedge q) \kappa}{16^2}\right)^{a_0-a} \int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{2(a+1)/q}(\bR^d,w\,dx)}^q
t^{a_0} \mu\left(\frac{(1\wedge q) \kappa}{16^2}dt\right)\notag\\
\label{2023012320}
&\leq N(1+\mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{2(a+1)/q-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align}
We apply a well-known embedding theorem for Besov spaces (cf. \cite[Theorem 6.4.4]{BL1976}). Then
\begin{align}
\label{2023012321}
\begin{cases}
&\|u_0\|^p_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}
\leq \|u_0\|^p_{H_{p}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}
=\|u_0\|^p_{L_p(\bR^d,w\,dx)} \quad \text{if $p \in (1,2)$ and $q=2$} \\
&\|u_0\|^p_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}
\leq \|u_0\|^p_{H_{p}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{p}}(\bR^d,w\,dx)}
=\|u_0\|^p_{L_p(\bR^d,w\,dx)} \quad \text{if $p=q \in [2,\infty)$}.
\end{cases}
\end{align}
Finally, putting \eqref{2023012321} in \eqref{2023012320}, we have \eqref{second a priori}.
\end{proof}
\mysection{Examples and inhomogeneous problems}
\label{22.12.28.13.34}
We proved that the regularity improvement is given by $\boldsymbol{\mu}/q$ in Theorem \ref{22.12.27.16.53}.
Here $\boldsymbol{\mu}$ satisfies
$$
\cL_{\mu}(2^{\gamma j})
:= \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{j\gamma a}2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ.
$$
It is in general not easy to compute the exact value of $\boldsymbol{\mu}$, since this quantity is defined through the Laplace transform of the measure $\mu$.
In this section, we present a concrete example showing the regularity improvement, which can be read off directly from the measures of open intervals starting at the origin. Briefly speaking, this is possible if the measures of all open intervals starting at the origin satisfy a weak scaling property. Here is the precise statement. Suppose that for any $k\in(1,\infty)$, there exist positive constants $b_k$ and $B_k$ such that
\begin{equation}
\label{22.04.22.16.39}
0<b_k\leq \frac{\mu((0,\theta))}{\mu((0,k\theta))}\leq B_k<1,\quad \forall \theta\in(0,\infty).
\end{equation}
Choose
\begin{align}
\label{20230128 100}
a_0 \in \left[0, -\log_2B_2 \right)
\end{align}
and denote
$$
\mu^{-a_0}(dt) = t^{-a_0} \mu(dt).
$$
We claim that
\begin{equation}
\label{22.10.31.16.02}
\cL_{\mu^{-a_0}}(\lambda) \approx \lambda^{a_0} \mu((0,1/\lambda)) \quad \forall \lambda \in (0,\infty).
\end{equation}
Indeed, by \eqref{22.04.22.16.39} and the ratio test,
\begin{align}
\notag
\int_{1}^{\infty}e^{-t}\mu^{-a_0}(dt)
=\int_{1}^{\infty}e^{-t}t^{-a_0}\mu(dt)
\leq \int_{1}^{\infty}e^{-t}\mu(dt)
&=\sum_{n=1}^{\infty}\int_{2^{n-1}}^{2^n}e^{- t}\mu(dt)\\
\notag
&\leq \sum_{n=1}^{\infty}e^{-2^{n-1}}\mu((0,2^n)) \\
\label{2023012401}
&\leq \mu((0,1))\sum_{n=0}^{\infty}e^{- 2^{n-1}}b^{-n}_2<\infty.
\end{align}
Using \eqref{22.04.22.16.39} again, we have
\begin{align}
\notag
\int_{0}^{1}e^{-t}\mu^{-a_0}(dt)
=\int_0^1e^{- t}t^{-a_0}\mu(dt)
&\leq \sum_{n=0}^{\infty}\int_{2^{-n-1}}^{2^{-n}}t^{-a_0}\mu(dt)\\
\label{22.04.23.19.49}
&\leq \sum_{n=0}^{\infty}2^{(n+1)a_0}\mu((0,2^{-n}))\leq \mu((0,1))\sum_{n=0}^{\infty}2^{(n+1)a_0} B^n_2.
\end{align}
Due to the choice of $a_0$ in \eqref{20230128 100}, it is obvious that $2^{a_0}B_2 < 1$ and the last term in \eqref{22.04.23.19.49} converges.
Therefore combining \eqref{2023012401} and \eqref{22.04.23.19.49}, we obtain
\begin{align}
\label{20230124 100}
\cL_{\mu^{-a_0}}(1) \approx \mu((0,1)).
\end{align}
Next we use the scaling property.
For $\lambda \in (0,\infty)$, define the measures as
$$
\mu_{1/\lambda} (dt) := \mu \left(\frac{1}{\lambda} \,dt\right)
$$
and
$$
\mu^{-a_0}_{1/\lambda} (dt) := \mu^{-a_0} \left(\frac{1}{\lambda}\, dt\right).
$$
Then for any $k \in (1,\infty)$,
\begin{equation*}
0<b_k\leq \frac{\mu_{1/\lambda}((0, \theta))}{\mu_{1/\lambda}((0,k \theta))}\leq B_k<1,\quad \forall \theta\in(0,\infty).
\end{equation*}
Thus applying \eqref{20230124 100} with $\mu_{1/\lambda}$ instead of $\mu(dt)$, we prove the claim that
$$
\cL_{\mu^{-a_0}}(\lambda)=\lambda^{a_0} \cL_{\mu^{-a_0}_{1/\lambda}}(1) \approx \lambda^{a_0} \mu_{1/\lambda}((0,1))= \lambda^{a_0} \mu((0,1/\lambda)).
$$
Therefore, for all $C_\mu \in (0,\infty)$, $a \in [0,\infty)$, and $j \in \bZ$,
\begin{align*}
\cL_{\mu^{-a_0}}(2^{j \gamma}) \approx C_\mu 2^{j\gamma a_0}\mu((0,2^{-j\gamma})) = 2^{j \gamma a} \cdot 2^{-\left( j \gamma (a-a_0)-\log_2(C_\mu\mu((0,2^{-j\gamma}))) \right)}.
\end{align*}
In other words,
the Laplace transform of $\mu^{-a_0}$ with $a_0 \in \left[0, -\log_2B_2 \right)$ is controlled by a sequence
\begin{align}
\label{20230124 05}
\boldsymbol{\mu}(j)=\left( j \gamma (a-a_0)-\log_2(C_\mu\mu((0,2^{-j\gamma}))) \right)
\end{align}
in a $\gamma$-dyadic way with a parameter $a$.
In particular,
the Laplace transform of $\mu^{-a}$ is controlled by a sequence
\begin{align*}
\boldsymbol{\mu}(j)=\left(-\log_2(C_\mu\mu((0,2^{-j\gamma}))) \right)
\end{align*}
in a $\gamma$-dyadic way with a parameter $a$ if $a \in \left[0, -\log_2B_2 \right)$.
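As a quick sanity check of \eqref{22.10.31.16.02} (this concrete computation is only illustrative), let $\mu(dt)=t^{b}dt$ with $b>-1$. Then $\mu((0,\theta)) = \theta^{b+1}/(b+1)$, so \eqref{22.04.22.16.39} holds with $b_k=B_k=k^{-(b+1)}$ and $-\log_2B_2 = b+1$. For $a_0 \in [0,b+1)$,
\begin{align*}
\cL_{\mu^{-a_0}}(\lambda)=\int_0^\infty e^{-\lambda t}t^{b-a_0}\,dt = \Gamma(b-a_0+1)\,\lambda^{a_0-b-1}
\quad \text{and} \quad
\lambda^{a_0}\mu((0,1/\lambda)) = \frac{\lambda^{a_0-b-1}}{b+1},
\end{align*}
so both sides of \eqref{22.10.31.16.02} are indeed comparable, with constants depending only on $b$ and $a_0$.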
\begin{example}
\label{conc example}
One of the most important examples satisfying \eqref{22.04.22.16.39} is the measure induced by a function in $A_\infty(\bR)$.
We show that the measure $\mu(dt):=w'(t)dt$ with $w' \in A_{\infty}(\bR)$ satisfies the assumptions of Theorem \ref{22.12.27.16.53}.
We emphasize that $w'$ does not have to be in $A_{q}(\bR)$ even for $q \in (1,\infty)$, which cannot be expected for inhomogeneous problems.
Let $w'\in A_{\infty}(\bR)$ and $\mu(dt):=w'(t)dt$. Then there exists $\nu\in(1,\infty)$ such that $w'\in A_{\nu}(\bR)$. By \cite[Proposition 7.1.5 and Lemma 7.2.1]{grafakos2014classical},
$$
b_k:=\frac{1}{k^{\nu}[w']_{A_{\nu}(\bR)}}\leq \frac{\mu((0,\theta))}{\mu((0,k\theta))}\leq \left(1-\frac{(1-k^{-1})^{\nu}}{[w']_{A_{\nu}(\bR)}}\right)=:B_k,\quad \forall k\in(1,\infty)~\text{and}~ \theta \in (0,\infty).
$$
Thus $\mu$ also satisfies
\begin{align*}
\frac{ \mu(k I)}{\mu(I)} \leq k^{\nu}[w']_{A_\nu} \quad \forall k \in (1,\infty),
\end{align*}
which implies
$$
N_k := \sup \frac{ \mu(k I)}{\mu(I)} \leq k^{\nu}[w']_{A_\nu} <\infty \quad \forall k \in (1,\infty),
$$
where the sup is taken over all the open intervals $I$ in $(0,\infty)$.
Moreover, it is well-known that $|t|^{b_1}$ is in $A_\infty(\bR)$ if $ b_1 >-1$ (\textit{e.g.}, \cite[Example 7.1.7]{grafakos2014classical}). Then by a simple calculation, $\mu_1((0,2^{-j\gamma})) := \int_0^{2^{-j \gamma}} t^{b_1} dt \approx 2^{-j \gamma (b_1+1)}$.
Thus due to \eqref{20230124 05}, choosing an appropriate scaling constant $C_{\mu_1}$, we have
\begin{align*}
\boldsymbol{\mu_1}(j)= j \gamma (a+b_1+1) \quad \forall j \in \bZ.
\end{align*}
Next for $-1<b_1 < b_2 <\infty$, we consider
\begin{align*}
\mu_2((0,2^{-j\gamma}))
:= \int_0^{2^{-j \gamma}} \left(t^{b_1} +t^{b_2}\right)dt
\approx \left( 2^{-j \gamma (b_1+1)} + 2^{-j \gamma (b_2+1)} \right).
\end{align*}
Therefore, it follows that
\begin{align*}
\mu_2((0,2^{-j\gamma})) \lesssim
\begin{cases}
&2^{-j \gamma (b_1+1)} \quad \text{if}~ j \in \bN \\
&2^{-j \gamma (b_2+1)} \quad \text{if $j$ is a non-positive integer}
\end{cases}
\end{align*}
and the optimal regularity improvement becomes
\begin{align*}
\boldsymbol{\mu_2}(j)\lesssim
\begin{cases}
&j \gamma (a+b_1+1) \quad \text{if}~ j \in \bN \\
&j \gamma (a+b_2+1) \quad \text{if $j$ is a non-positive integer}.
\end{cases}
\end{align*}
\end{example}
Finally, restricting the weight $w'$ to $A_q(\bR)$ and combining \cite[Theorem 2.14]{Choi_Kim2022}, we can handle the following inhomogeneous problems with non-zero initial conditions.
\begin{corollary}
Let $T \in (0,\infty)$, $p\in(1,\infty)$, $q\in(1,\infty)$, $w'\in A_{q}(\bR)$, $w\in A_p(\bR^d)$, $C_{w'} \in (0,\infty)$, and
$\boldsymbol{r}:\bZ\to(-\infty,\infty)$ be a sequence having a controlled difference.
Suppose that $[w]_{A_{p}(\bR^d)}\leq K$ and $[w']_{A_{q}(\bR)}\leq K_0$.
Additionally, assume that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor \vee \left\lfloor \frac{d}{R_{q,1}^{w'}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then for any
$$
u_0\in B_{p,q}^{\boldsymbol{r}-\boldsymbol{w}'/q}(\bR^d,w\,dx)~\text{and}~ f\in L_q((0,T),w'\,dt;H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)),
$$
there exists a unique solution
$$
u\in L_q\left((0,T),w'\,dt;H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)
$$ to the equation
\begin{equation}
\label{inhomo problem}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x)+f(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d,
\end{cases}
\end{equation}
where $\boldsymbol{\gamma}(j) = \gamma j$ and
\begin{align*}
\boldsymbol{w'}(j)=-\log_2\left(C_{w'} \int_0^{2^{-j\gamma}} w'(t)dt \right).
\end{align*}
Moreover, $u$ satisfies
\begin{align}
\label{20230124 08}
\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^qw'(t)dt
\leq N(1+T)^q\left(\|u_0\|^q_{B_{p,q}^{\boldsymbol{r}-\boldsymbol{w}'/q}(\bR^d,w\,dx)}+\int_0^T\|f(t,\cdot)\|^q_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)} w'(t)dt\right)
\end{align}
where $N=N(\|\boldsymbol{r}\|_d,C_{w'},d,\gamma,\kappa,K,K_0,M,p,q)$.
\end{corollary}
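To illustrate the corollary (this concrete instance is our illustration, not part of the statement), take the power weight $w'(t)=|t|^{b}$ with $-1<b<q-1$, so that $w'\in A_q(\bR)$ (\textit{cf.} \cite[Example 7.1.7]{grafakos2014classical}). Then
\begin{align*}
\int_0^{2^{-j\gamma}} w'(t)\,dt = \frac{2^{-j\gamma(b+1)}}{b+1},
\end{align*}
so choosing $C_{w'}=b+1$ gives $\boldsymbol{w'}(j) = j\gamma(b+1)$; the initial data is then taken in $B_{p,q}^{\boldsymbol{r}-\boldsymbol{w'}/q}(\bR^d,w\,dx)$ with $\boldsymbol{w'}(j)/q = j\gamma(b+1)/q$, in accordance with Example \ref{conc example}.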
\begin{proof}
Since the uniqueness comes from the homogeneous case (Theorem \ref{22.12.27.16.53}), we only prove the existence of a solution $u$ satisfying \eqref{20230124 08}. Moreover, due to Proposition \ref{22.05.03.11.34}, we may assume that $\boldsymbol{r}=\boldsymbol{\gamma}$.
We set
$$
\mu(dt)= w'(t) dt.
$$
Let $a_0 \in \left(0, -\log_2B_2 \right)$
and recall
$$
\mu^{-a_0}(dt)= t^{-a_0}w'\left( t\right)dt
$$
where
\begin{align*}
B_2=\left(1-\frac{(1-2^{-1})^{q}}{[w']_{A_{q}(\bR)}}\right).
\end{align*}
Then due to \eqref{20230124 05}, the Laplace transform of $\mu^{-a_0}$ is controlled by a sequence
\begin{align*}
\boldsymbol{w'}(j):=\boldsymbol{\mu^{-a_0}}(j)
=-\log_2(C_{w'}\mu((0,2^{-j\gamma})))
\end{align*}
in a $\gamma$-dyadic way with parameter $a_0$.
Besides, it is well-known (\textit{cf}. \cite[Proposition 7.1.5]{grafakos2014classical}) that
\begin{align*}
w'(t)dt \leq [w']_{A_q(\bR)}\left(\frac{16^\gamma}{(1\wedge q) \kappa}\right)^{q-1} w'\left(\frac{(1\wedge q) \kappa}{16^\gamma}t\right)dt.
\end{align*}
Thus by Theorem \ref{22.12.27.16.53}, there exists a solution $u^1$ to the equation
\begin{equation*}
\begin{cases}
\partial_tu^1(t,x)=\psi(t,-i\nabla)u^1(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u^1(0,x)=u_0(x),\quad &x\in\bR^d
\end{cases}
\end{equation*}
such that
\begin{align}
\notag
\int_{0}^T\left\|u^1(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^q
w'(t)dt
&\lesssim \int_{0}^T\left\|u^1(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^q
t^{a_0}\mu^{-a_0}\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right) \\
\label{20230124 10}
&\leq N(1+\mu^{-a_0}_{a_0,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{w'}}{q}}(\bR^d,w\,dx)},
\end{align}
where
\begin{align}
\notag
\mu^{-a_0}_{a_0,T,\kappa,\gamma,q}
=\int_0^T t^{a_0}\mu^{-a_0}\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)&=\left(\frac{(1\wedge q) \kappa}{16^\gamma}\right)^{1-a_0} \int_0^T w'\left(\frac{(1\wedge q) \kappa}{16^\gamma}t\right)dt \\
\notag
&= \left(\frac{(1\wedge q) \kappa}{16^\gamma}\right)^{-a_0}\int_0^{\frac{(1\wedge q) \kappa}{16^\gamma}T} w'\left(t\right)dt \\
\label{20230124 10-2}
&= \left(\frac{(1\wedge q) \kappa}{16^\gamma}\right)^{-a_0} \mu\left(\left(0,\frac{(1\wedge q) \kappa}{16^\gamma}T\right)\right).
\end{align}
Moreover, again by the doubling property of $A_q$ weights,
\begin{align}
\label{20230124 11}
\mu\left(\left(0,\frac{(1\wedge q) \kappa}{16^\gamma}T\right)\right) \leq \mu(\lambda(0,1))\leq \lambda^{q}[w']_{A_q(\bR)}\mu((0,1)),\quad \lambda:=1+T.
\end{align}
On the other hand, by \cite[Theorem 2.14]{Choi_Kim2022}, there exists a solution $u^2$ to
\begin{equation*}
\begin{cases}
\partial_tu^2(t,x)=\psi(t,-i\nabla)u^2(t,x)+f(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u^2(0,x)=0,\quad &x\in\bR^d
\end{cases}
\end{equation*}
such that
\begin{align}
\label{20230124 12}
\int_{0}^T\left\|u^2(t,\cdot)\right\|_{H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q w'(t)dt
\leq N(1+T)^q\int_0^T\|f(t,\cdot)\|^q_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)} w'(t)dt.
\end{align}
Due to the linearity, $u:=u^1+u^2$ becomes a solution to \eqref{inhomo problem}.
Combining \eqref{20230124 10}, \eqref{20230124 10-2}, \eqref{20230124 11}, and \eqref{20230124 12}, we finally obtain \eqref{20230124 08}.
The corollary is proved.
\end{proof}
\mysection{Proof of Theorem \ref{22.12.27.16.53}}
\label{pf main thm}
We first define kernels related to the symbol $\psi(t,\xi)$.
For $(t,s,x) \in \bR \times \bR \times \bR^d$ and $\varepsilon \in [0,1]$, we set
\begin{align*}
p(t,s,x):=1_{0 \leq s < t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} \exp\left(\int_{s}^t\psi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi,
\end{align*}
\begin{align*}
P_{\varepsilon}(t,s,x)
:=\Delta^{\frac{\varepsilon\gamma}{2}}p(t,s,x)
:=-(-\Delta)^{\frac{\varepsilon\gamma}{2}}p(t,s,x)
:=1_{0 \leq s <t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} |\xi|^{\varepsilon \gamma}\exp\left(\int_{s}^t\psi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi,
\end{align*}
and
\begin{align*}
\psi(t,-i\nabla)p(t,s,x):=(\psi(t,-i\nabla)p)(t,s,x)
:=1_{0 \leq s < t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} \psi(t,\xi)\exp\left(\int_{s}^t\psi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi.
\end{align*}
For these kernels, we introduce integral operators as follows:
\begin{align*}
\cT_{t,s} f(x)
:= \int_{\bR^d} p(t,s,x-y)f(y)dy,\quad
\cT_{t,s}^{\varepsilon} f(x)
:=\Delta^{\frac{\varepsilon\gamma}{2}}\cT_{t,s} f(x)
:=\int_{\bR^d} P_{\varepsilon}(t,s,x-y)f(y)dy,
\end{align*}
and
\begin{align*}
\psi(t,-i\nabla)\cT_{t,s} f(x) := \int_{\bR^d} \psi(t,-i\nabla) p(t,s,x-y)f(y)dy.
\end{align*}
These operators are closely related to solutions of our initial value problems.
Formally, it is easy to check that
\begin{align*}
\partial_t\cT_{t,0} f(x) = \psi(t,-i\nabla)\cT_{t,0} f(x)
\end{align*}
and
\begin{align*}
\lim_{t \to 0} \cT_{t,0} f(x) = f(x).
\end{align*}
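These formal identities can be checked on the Fourier side; we sketch the computation, taking differentiation under the integral sign for granted. Up to the normalization convention for the Fourier transform, one has
\begin{align*}
\cF[\cT_{t,0} f](\xi) = \exp\left(\int_{0}^t\psi(r,\xi)dr\right)\cF[f](\xi),
\end{align*}
so differentiating in $t$ brings down the factor $\psi(t,\xi)$, which is exactly the symbol of $\psi(t,-i\nabla)$; moreover, as $t \to 0$ the exponential factor tends to $1$, which yields $\cT_{t,0} f \to f$.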
Thus if the symbol $\psi$ and the initial data $u_0$ are nice enough, for instance if $\psi$ satisfies the ellipticity condition and $u_0\in C_c^{\infty}(\bR^d)$, then the function
$u(t,x):=\cT_{t,0}u_0(x)$ becomes a classical strong solution to the Cauchy problem
\begin{equation*}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d.
\end{cases}
\end{equation*}
Therefore, roughly speaking, to obtain our \textit{a priori} estimates it suffices to show the boundedness of $\cT_{t,0}^\varepsilon u_0$ in appropriate spaces.
More precisely, due to the definitions of the Besov and Sobolev spaces, we have to estimate their Littlewood-Paley projections.
We recall that $\Psi$ is a function in the Schwartz class $\mathcal{S}(\mathbb{R}^d)$ whose Fourier transform $\mathcal{F}[\Psi]$ is nonnegative, supported in an annulus $\{\frac{1}{2}\leq |\xi| \leq 2\}$, and $\sum_{j\in\mathbb{Z}} \mathcal{F}[\Psi](2^{-j}\xi) = 1$ for $\xi \not=0$.
Then we define the Littlewood-Paley projection operators $\Delta_j$ and $S_0$ as $\mathcal{F}[\Delta_j f](\xi) = \mathcal{F}[\Psi](2^{-j}\xi) \mathcal{F}[f](\xi)$, $S_0f = \sum_{j\leq 0} \Delta_j f$, respectively.
We denote
\begin{align}
\label{def T ep j}
\cT_{t,s}^{\varepsilon,j} f(x):=\int_{\bR^d}\Delta_jP_{\varepsilon}(t,s,x-y)f(y)dy~ \text{and}~\cT_{t,s}^{\varepsilon,\leq0}f(x):=\int_{\bR^d}S_0P_{\varepsilon}(t,s,x-y)f(y)dy,
\end{align}
where
\begin{align*}
\Delta_jP_{\varepsilon}(t,s,x-y)
:=(\Delta_jP_{\varepsilon})(t,s,x-y)
:=\Delta_j\left[P_{\varepsilon}(t,s,\cdot)\right](x-y).
\end{align*}
Similarly,
\begin{align*}
\psi(t,-i\nabla)\cT_{t,s}^{j} f(x):=\int_{\bR^d}\Delta_j \psi(t,-i\nabla) p(t,s,x-y)f(y)dy
\end{align*}
and
\begin{align*}
\psi(t,-i\nabla)\cT_{t,s}^{\leq 0}f(x):=\int_{\bR^d}S_0\psi(t,-i\nabla)p(t,s,x-y)f(y)dy.
\end{align*}
Next we recall the Hardy-Littlewood maximal function and the Fefferman-Stein sharp (maximal) function.
For a locally integrable function $f$ on $\bR^d$, we define
\begin{align*}
\mathbb{M} f(x)
&:=\sup_{x \in B_r(x_0)} -\hspace{-0.40cm}\int_{B_r(x_0)}|f(y)|dy := \sup_{x \in B_r(x_0)} \frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}|f(y)|dy
\end{align*}
and
\begin{align*}
f^\sharp(x)
:=\mathbb{M}^\sharp f(x)
&:=\sup_{x \in B_r(x_0)} -\hspace{-0.40cm}\int_{B_r(x_0)}-\hspace{-0.40cm}\int_{B_r(x_0)}|f(y_0)-f(y_1)|dy_0dy_1 \\
&:= \sup_{x \in B_r(x_0)} \frac{1}{|B_r(x_0)|^2}\int_{B_r(x_0)}\int_{B_r(x_0)}|f(y_0)-f(y_1)|dy_0dy_1,
\end{align*}
where the supremum is taken over all balls $B_r(x_0)$ containing $x$ with $r \in (0,\infty)$ and $x_0 \in \bR^d$.
Moreover, for a function $f(t,x)$ defined on $(0,\infty) \times \bR^d$, we use the notation
$\mathbb{M}_x^{\sharp}\big( f(t,x) \big)$ or $\mathbb{M}_x^{\sharp}\big( f(t,\cdot) \big)(x)$ to denote the sharp function with respect to the variable $x$ after fixing $t$.
We recall weighted versions of the Hardy-Littlewood theorem and the Fefferman-Stein theorem, which play an important role in our main estimate.
\begin{prop}
\label{lem:FS ineq}
Let $p \in (1,\infty)$ and $w \in A_p(\bR^d)$. Assume that $[w]_{A_p(\bR^d)}\leq K$ for a positive constant $K$.
Then there exists a positive constant $N=N(d,K,p)$ such that for any $f\in L_p(\bR^d)$,
\begin{align*}
\big\| \mathbb{M} f \big\|_{L_p(\bR^d, w\,dx)}
\leq
N\big\|f \big\|_{L_p(\bR^d, w\,dx)}
\end{align*}
and
\begin{align*}
\big\|f \big\|_{L_p(\bR^d, w\,dx)}
\leq
N \big\| \mathbb{M}_x^{\sharp} f \big\|_{L_p(\bR^d, w\,dx)}.
\end{align*}
\end{prop}
This weighted version of the Hardy-Littlewood Theorem and the Fefferman-Stein Theorem is very well-known. For instance, see \cite[Theorems 2.2 and 2.3]{Dong_Kim2018}.
\begin{prop}\label{prop:maximal esti}
Let $p \in (1,\infty)$, $\varepsilon \in [0,1]$, $w \in A_p(\bR^d)$, and $p_0\in(1,2]$ be a constant so that $p_0 \leq R_{p,d}^w$ and
$$
\left\lfloor\frac{d}{p_0}\right\rfloor = \left\lfloor\frac{d}{R_{p,d}^w}\right\rfloor.
$$
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then there exist positive constants $N=N(d,\delta,\varepsilon,\gamma,\kappa,M)$, $N'=N'(d,p_0,\varepsilon,\gamma,\kappa,M)$, $N''=N''(d,p_0, \delta,\gamma,\kappa,M)$, and $N'''=N'''(d,p_0,\gamma,\kappa,M)$ such that for all $t\in (0,\infty)$, $f\in \cS(\bR^{d})$, and $j \in \bZ$,
\begin{equation*}
\begin{gathered}
\bM^{\sharp}_x\left(\cT_{t,0}^{\varepsilon,j}f\right)(x)
\leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\quad \forall\delta\in(0,1),\\
\bM^{\sharp}_x\left(\cT_{t,0}^{\varepsilon,\leq 0}f\right)(x)\leq N'\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\\
\bM^{\sharp}_x\left(\psi(t,-i\nabla)\cT_{t,0}^{j}f\right)(x)
\leq N''2^{j\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\quad \forall\delta\in(0,1),\\
\bM^{\sharp}_x\left(\psi(t,-i\nabla)\cT_{t,0}^{\leq0}f\right)(x)\leq N'''\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0}.
\end{gathered}
\end{equation*}
\end{prop}
The proof of Proposition \ref{prop:maximal esti} is given in Section \ref{sec:prop}.
\begin{corollary}
\label{main ingra}
Let $p \in (1,\infty)$ and $w \in A_p(\bR^d)$.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then there exist positive constants $N$, $N'$, $N''$, and $N'''$ such that for all $t\in (0,\infty)$, $f\in \cS(\bR^{d})$, and $j \in \bZ$,
\begin{align*}
\big\|\cT_{t,0}^{\varepsilon,j}f\big\|_{L_p(\bR^d, w\,dx)} \leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}} \big\|f \big\|_{L_p(\bR^d, w\,dx)} \quad \forall \delta \in (0,1),
\end{align*}
\begin{align*}
\big\|\cT_{t,0}^{\varepsilon,\leq 0}f\big\|_{L_p(\bR^d, w\,dx)} \leq N' \big\|f \big\|_{L_p(\bR^d, w\,dx)},
\end{align*}
\begin{align*}
\big\|\psi(t,-i\nabla)\cT_{t,0}^{j}f \big\|_{L_p(\bR^d, w\,dx)} \leq N''2^{j\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}} \big\|f \big\|_{L_p(\bR^d, w\,dx)} \quad \forall \delta \in (0,1),
\end{align*}
\begin{align*}
\big\|\psi(t,-i\nabla)\cT_{t,0}^{\leq0}f \big\|_{L_p(\bR^d, w\,dx)} \leq N''' \big\|f \big\|_{L_p(\bR^d, w\,dx)},
\end{align*}
where the constants $N$, $N'$, $N''$, and $N'''$ depend on the same parameters as in Proposition \ref{prop:maximal esti}, with additional dependence on $p$ and an upper bound of the $A_p$ semi-norm of $w$.
\end{corollary}
\begin{proof}
First, we claim that there exists a $p_0 \in (1,2]$ such that
$p_0 \leq R_{p,d}^w \wedge p$, $w \in A_{p/p_0}(\bR^d)$, and $\left\lfloor\frac{d}{p_0}\right\rfloor = \left\lfloor\frac{d}{R_{p,d}^w}\right\rfloor$.
Recall that
\begin{align*}
R_{p,d}^{w} := \sup \left\{ p_0 \in (1,2] : w \in A_{p/p_0}(\bR^d) \right\}.
\end{align*}
It is well-known that $ R_{p,d}^{w} >1$ and
$$
w \in A_{p/p_0}(\bR^d) \quad \forall p_0 \in \left(1,R_{p,d}^{w} \right)
$$
due to the reverse H\"older inequality (\textit{e.g.} see \cite[Remark 2.2]{Choi_Kim2022}).
Note that $\left\lfloor\frac{d}{p}\right\rfloor$ is left-continuous and piecewise-constant with respect to $p$.
Therefore, there exists a $p_0 \in \left(1,R_{p,d}^{w} \right)$ such that $\left\lfloor\frac{d}{p_0}\right\rfloor = \left\lfloor\frac{d}{R_{p,d}^w}\right\rfloor$
and $w \in A_{p/p_0}(\bR^d)$.
It only remains to show that the $p_0 \in \left(1,R_{p,d}^{w} \right)$ above is less than or equal to $p$.
If $p \geq 2$, then this is obvious since $p_0< 2 \leq p$. Thus we only consider the case $p \in (1,2)$.
Recall that $A_p(\bR^d)$ is defined only for $p>1$; the $A_1(\bR^d)$-class is not introduced in this paper (see \eqref{def ap}).
Thus any $p_0 \in (1,2)$ such that $w \in A_{p/p_0}(\bR^d)$ is necessarily less than $p$.
Next we apply the weighted versions of the Hardy-Littlewood maximal function theorem and the Fefferman-Stein theorem.
Let $f \in \cS(\bR^d)$ and choose $p_0$ as in the claim, \textit{i.e.} $p_0 \in \left(1, R_{p,d}^w \wedge p\right)$, $w \in A_{p/p_0}(\bR^d)$, and
$\left\lfloor\frac{d}{p_0}\right\rfloor = \left\lfloor\frac{d}{R_{p,d}^w}\right\rfloor$.
Then by Proposition \ref{prop:maximal esti}, we have
\begin{equation*}
\begin{gathered}
\bM^{\sharp}_x\left(\cT_{t,0}^{\varepsilon,j}f\right)(x)
\leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\quad \forall\delta\in(0,1),\\
\bM^{\sharp}_x\left(\cT_{t,0}^{\varepsilon,\leq 0}f\right)(x)\leq N'\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\\
\bM^{\sharp}_x\left(\partialsi(t,-i\nabla)\cT_{t,0}^{j}f\right)(x)
\leq N''2^{j\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\quad \forall\delta\in(0,1),\\
\bM^{\sharp}_x\left(\partialsi(t,-i\nabla)\cT_{t,0}^{\leq0}f\right)(x)\leq N'''\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0}.
\end{gathered}
\end{equation*}
Moreover, recalling $\frac{p}{p_0} \in (1,\infty)$ and applying Proposition \ref{lem:FS ineq}, we obtain
\begin{align*}
\big\|\cT_{t,0}^{\varepsilon,j}f\big\|_{L_p(\bR^d, w\,dx)} \leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \big\|f \big\|_{L_p(\bR^d, w\,dx)} \quad \forall \delta \in (0,1),
\end{align*}
\begin{align*}
\big\|\cT_{t,0}^{\varepsilon,\leq 0}f\big\|_{L_p(\bR^d, w\,dx)} \leq N' \big\|f \big\|_{L_p(\bR^d, w\,dx)},
\end{align*}
\begin{align*}
\big\|\partialsi(t,-i\nabla)\cT_{t,0}^{j}f \big\|_{L_p(\bR^d, w\,dx)} \leq N''2^{j\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \big\|f \big\|_{L_p(\bR^d, w\,dx)} \quad \forall \delta \in (0,1),
\end{align*}
\begin{align*}
\big\|\partialsi(t,-i\nabla)\cT_{t,0}^{\leq0}f \big\|_{L_p(\bR^d, w\,dx)} \leq N''' \big\|f \big\|_{L_p(\bR^d, w\,dx)}.
\end{align*}
Since $p_0 \in (1,2)$, we have $e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\leq e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}$, and the corollary is proved.
\end{proof}
Based on Corollary \ref{main ingra}, we prove the following main estimate which yields Theorem \ref{22.12.27.16.53}.
\begin{thm}\label{thm:ep gamma esti}
Let $p\in(1,\infty)$, $q\in(0,\infty)$, $\varepsilon \in [0,1]$, $a \in (0,\infty)$, $w\in A_p(\bR^d)$, $\mu$ be a nonnegative Borel measure on $(0,\infty)$, and $\boldsymbol{\mu}:\bZ\to(-\infty,\infty)$ be a sequence having a controlled difference. Suppose that $\partialsi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Additionally, assume that the Laplace transform of $\mu$ is controlled by a sequence $\boldsymbol{\mu}$ in a $\gamma$-dyadic way with parameter $a$, \textit{i.e.}
\begin{align}
\label{laplace cond}
\cL_{\mu}(2^{\gamma j})
:= \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{j\gamma a}2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ.
\end{align}
Then there exists a positive constant $N$ such that for any $u_0\in C_c^{\infty}(\bR^d)$,
\begin{align}
\label{20230118 01}
\int_{0}^T\left\|\Delta^{\frac{\varepsilon\gamma}{2}}\cT_{t,0} u_0 \right\|_{L_p(\bR^d,w\,dx)}^q
t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq N(1+ \mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\varepsilon\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
\begin{align}
\label{20230118 02}
\int_{0}^T\left\|\Delta^{\frac{\varepsilon\gamma}{2}} \cT_{t,0} u_0\right\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\leq N\|u_0\|^q_{\dot{B}_{p,q}^{\varepsilon \boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
\begin{align}
\label{20230118 03}
\int_{0}^T\left\|\partialsi(t,-i\nabla)\cT_{t,0} u_0 \right\|_{L_p(\bR^d,w\,dx)}^q
t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq N'(1+\mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
and
\begin{align}
\label{20230118 04}
\int_{0}^T\left\|\partialsi(t,-i\nabla)\cT_{t,0} u_0\right\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\leq N'\|u_0\|^q_{\dot{B}_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
where $[w]_{A_p(\bR^d)}\leq K$, $N=N(a, N_{\cL_\mu},d,\varepsilon, \gamma,\kappa,K,M,p,q)$,
$N'=N'(a, N_{\cL_\mu},d, \gamma,\kappa,K,M,p,q,R_{p,d}^w)$, $\varepsilon \boldsymbol{\gamma}(j)=\varepsilon \gamma j$ for all $j \in \bZ$, and
\begin{align*}
\mu_{a,T,\kappa,\gamma,q}=\int_0^T t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right).
\end{align*}
\end{thm}
\begin{proof}
Due to the upper bounds of $L_p$-norms in Corollary \ref{main ingra},
the proofs of \eqref{20230118 03} and \eqref{20230118 04} are very similar to those of \eqref{20230118 01} and \eqref{20230118 02} when $\varepsilon=1$.
Thus we only prove \eqref{20230118 01} and \eqref{20230118 02}.
We make use of the Littlewood-Paley operators $\Delta_j$.
By the almost orthogonality of the Littlewood-Paley operators, we have (at least in the sense of distributions)
\begin{align}
\label{22.02.15.16.48}
&\Delta^{\frac{\varepsilon\gamma}{2}} \cT_{t,0} u_0
=\sum_{j\in\bZ}\Delta_j(-\Delta)^{\varepsilon\gamma/2}\cT_{t,0}u_0=\sum_{j\in\bZ}\sum_{i \in \bZ}\Delta_j(-\Delta)^{\varepsilon\gamma/2}\cT_{t,0}\Delta_i u_0
=\sum_{j\in\bZ}\sum_{i=-1}^1\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\\
&\label{22.02.15.16.49}
=\cT_{t,0}^{\varepsilon,\leq0}(S_0u_0)+\cT_{t,0}^{\varepsilon,1}(\Delta_0u_0)+\sum_{j=1}^{\infty}\sum_{i=-1}^1\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0),
\end{align}
where $\cT_{t,0}^{\varepsilon,\leq0}$ and $\cT_{t,0}^{\varepsilon, j}$ are defined in \eqref{def T ep j}.
Thus by Minkowski's inequality,
\begin{align}
&\int_0^T \|(-\Delta)^{\varepsilon\gamma/2}\cT_{t,0} u_0\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\nonumber\\
&\leq \int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right),\label{ineq:22.02.22.12.37}
\end{align}
and
\begin{align}
&\int_0^T \|(-\Delta)^{\varepsilon\gamma/2}\cT_{t,0} u_0\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\nonumber\\
&\leq \int_0^T \bigg( \big\|\cT_{t,0}^{\varepsilon,\leq0}(S_0u_0)+\cT_{t,0}^{\varepsilon,1}(\Delta_0u_0)\big\|_{L_p(\bR^d, w\,dx)}+\sum_{j=1}^{\infty}\sum_{i=-1}^1 \big\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\big\|_{L_p(\bR^d, w\,dx)} \bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right).
\label{ineq:22.02.22.12.38}
\end{align}
Moreover, by Corollary \ref{main ingra}, we have
\begin{align}
\sum_{i=-1}^1 \|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)} &\leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{4^{\gamma}}}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)},\label{claim:hom}
\end{align}
and
\begin{align}
\|\cT_{t,0}^{\varepsilon,\leq 0}(S_0u_0) \|_{L_p(\bR^d, w\,dx)} &\leq N\| S_0 u_0\|_{L_p(\bR^d, w\,dx)}.
\label{claim:inhom}
\end{align}
Due to \eqref{ineq:22.02.22.12.38} and \eqref{claim:inhom}, to show \eqref{20230118 01}, it is sufficient to show that
\begin{align}
\label{20230120 01}
\int_0^T \bigg(\sum_{j\in\bN}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq
N \|u_0\|_{B_{p,q}^{\varepsilon\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)}^q.
\end{align}
Similarly, to show \eqref{20230118 02}, it is sufficient to show
\begin{align}
\int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq
N \|u_0\|_{\dot{B}_{p,q}^{\varepsilon\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)}^q. \label{ineq:hom}
\end{align}
Since the proofs of \eqref{20230120 01} and \eqref{ineq:hom} are very similar, we only focus on proving the more difficult case, \eqref{ineq:hom}.
To verify \eqref{ineq:hom}, we apply \eqref{claim:hom} to \eqref{ineq:22.02.22.12.37} with $\delta= (1-2^{-\gamma})$ and obtain
\begin{align}
\notag
&\int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)(t,\cdot)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right) \\
\label{20230120 11}
&\leq N\int_0^T\left(\sum_{j\in\bZ}2^{j\varepsilon\gamma}e^{-\kappa t2^{j\gamma} 2^{-3\gamma}}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^qt^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right).
\end{align}
We estimate it depending on the range of $q$.
First, if $q\in(0,1]$, then we simply have
\begin{align*}
&\int_0^T\left(\sum_{j\in\bZ}2^{j\varepsilon\gamma}e^{-\kappa t2^{j\gamma}2^{-3\gamma}}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^qt^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right) \\
&\lesssim \sum_{j\in\bZ}2^{qj\varepsilon\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
\int_0^\infty \exp\left(-2^{j\gamma}2^\gamma t\right) t^a \mu(dt) \\
&\lesssim \sum_{j\in\bZ}2^{qj\varepsilon\gamma - j\gamma a}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
\cL_\mu(2^{j \gamma}),
\end{align*}
where the elementary inequality $t^ae^{-2^{j\gamma}2^{\gamma}t} \leq N(a,\gamma)2^{-j\gamma a}e^{-2^{j\gamma}t}$ is used in the last step of the computation above.
Finally applying \eqref{laplace cond}, we have \eqref{ineq:hom}.
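For completeness, the elementary inequality $t^ae^{-2^{j\gamma}2^{\gamma}t} \leq N(a,\gamma)2^{-j\gamma a}e^{-2^{j\gamma}t}$ used above can be verified as follows; here $C(a,\gamma)$ is shorthand introduced only for this computation.

```latex
% Since \gamma>0, we have 2^{\gamma}-1>0, and hence
% C(a,\gamma):=\sup_{s>0}s^{a}e^{-(2^{\gamma}-1)s}<\infty. Therefore,
\begin{align*}
t^{a}e^{-2^{j\gamma}2^{\gamma}t}
=2^{-j\gamma a}\,(2^{j\gamma}t)^{a}e^{-(2^{\gamma}-1)2^{j\gamma}t}\,e^{-2^{j\gamma}t}
\leq C(a,\gamma)\,2^{-j\gamma a}\,e^{-2^{j\gamma}t}.
\end{align*}
```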
Next we consider the case $q\in(1,\infty)$. Divide $\bZ$ into two parts as follows:
$$
\bZ=\{j\in\bZ:2^{j\gamma}t\leq1\}\cup\{j\in\bZ:2^{j\gamma}t>1\}=:\cI_1(t)\cup\cI_2(t).
$$
Thus, we have
\begin{align}
&\int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^qt^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\nonumber\\
&\leq N\int_0^T\left(\sum_{j\in\bZ}2^{j\varepsilon\gamma}e^{-\kappa t2^{j\gamma}2^{-3\gamma}}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^qt^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\nonumber\\
&\leq N\int_0^T\left(\sum_{j\in\bZ}2^{j\varepsilon\gamma}e^{- t2^{j\gamma}2^\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^qt^a\mu\left(dt\right)\nonumber\\
&= N\int_0^T\left(\sum_{j\in\cI_1(t)}\cdots\right)^qt^a\mu(dt) +N \int_0^T\left(\sum_{j\in\cI_2(t)}\cdots\right)^qt^a\mu\left(dt\right)
=: N(I_1 + I_2).\label{cI1 cI2}
\end{align}
Let $b\in (0,a]$ whose exact value will be chosen later.
Then we put $2^{j\gamma b/q} 2^{-j \gamma b/q}$ in the summation with respect to $\cI_1(t)$ and make use of H\"older's inequality to obtain
\begin{align}
I_1=
&\int_0^T\left(\sum_{j\in\cI_1(t)}2^{j\varepsilon\gamma}e^{- t2^{j\gamma}2^\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^q t^a \mu(dt)\nonumber\\
&\leq \int_0^T\left(\sum_{j\in\cI_1(t)}2^{\frac{j\gamma b}{q-1}}\right)^{q-1}\left(\sum_{j\in\cI_1(t)}2^{jq\varepsilon\gamma-j\gamma b}e^{-q t2^{j\gamma} 2^\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q\right)t^a\mu(dt),\label{cI1}
\end{align}
and similarly,
\begin{align}
I_2=
&\int_0^T\left(\sum_{j\in\cI_2(t)}2^{j\varepsilon\gamma} e^{- t2^{j\gamma}2^\gamma} \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^q t^a\mu(dt)\nonumber\\
&\leq \int_0^T\left(\sum_{j\in\cI_2(t)}2^{\frac{j\gamma b}{q-1}}e^{- t2^{j\gamma}2^\gamma}\right)^{q-1}\left(\sum_{j\in\cI_2(t)}2^{jq\varepsilon\gamma-j\gamma b}e^{-t2^{j\gamma}2^\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q\right) t^a \mu(dt).\label{cI2}
\end{align}
We estimate \eqref{cI1} first. For each $t>0$, we can choose $j_1(t)\in\bZ$ such that $2^{j_1(t)\gamma}t\leq 1$ and $2^{(j_1(t)+1)\gamma}t>1$.
Roughly speaking, $j_1(t) \approx -\frac{1}{\gamma}\log_2t$.
Thus, we have
\begin{align}
\sum_{j\in\cI_1(t)}2^{\frac{j\gamma b}{q-1}}=\frac{2^{\frac{j_1(t)\gamma b}{q-1}}}{1-2^{-\frac{\gamma b}{q-1}}}\leq N(b,\gamma ,q)t^{-\frac{b}{q-1}}.\label{ineq:22.02.22.14.06}
\end{align}
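The bound above uses only the geometric series and the defining property $2^{j_1(t)\gamma}t\leq1$ of $j_1(t)$; written out:

```latex
% \cI_1(t)=\{j\in\bZ: j\leq j_1(t)\}, so with the substitution j=j_1(t)-i,
\begin{align*}
\sum_{j\in\cI_1(t)}2^{\frac{j\gamma b}{q-1}}
=2^{\frac{j_1(t)\gamma b}{q-1}}\sum_{i=0}^{\infty}2^{-\frac{i\gamma b}{q-1}}
=\frac{2^{\frac{j_1(t)\gamma b}{q-1}}}{1-2^{-\frac{\gamma b}{q-1}}},
\qquad
2^{\frac{j_1(t)\gamma b}{q-1}}=\left(2^{j_1(t)\gamma}\right)^{\frac{b}{q-1}}\leq t^{-\frac{b}{q-1}}.
\end{align*}
```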
Putting \eqref{ineq:22.02.22.14.06} in \eqref{cI1}, we obtain
\begin{align}
I_1
&\leq
N
\int_0^T
t^{-b}
\sum_{j\in\cI_1(t)}
2^{jq\varepsilon\gamma-j\gamma b}
e^{-q t2^{j\gamma}2^\gamma}
\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
t^a \mu(dt)\nonumber\\
&\leq
N
\sum_{j\in\bZ}
2^{jq\varepsilon\gamma-j\gamma b} \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
\int_0^T e^{-qt 2^{j\gamma} 2^\gamma} t^{a-b}\mu(dt)\nonumber\\
&\leq
N
\sum_{j\in\bZ}
2^{jq\varepsilon\gamma-j\gamma b} \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
2^{-j\gamma(a-b)}\int_0^\infty e^{-t 2^{j\gamma}} \mu(dt)\nonumber\\
&\leq
N
\sum_{j\in\bZ} 2^{jq\varepsilon\gamma-j\gamma a}\cL_{\mu}(2^{j\gamma}) \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q,\nonumber
\end{align}
where $N=N(b,\gamma,\kappa)$.
Therefore, by \eqref{laplace cond},
\begin{align}
\label{ineq:cI1 final}
I_1\leq N(b,N_{\cL_\mu},\gamma,\kappa)\sum_{j\in\bZ}2^{q\varepsilon\boldsymbol{\gamma}(j)-\boldsymbol{\mu}(j)}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q.
\end{align}
For $I_2$, we choose a sufficiently small $b\in (0,a]$ satisfying
$$
b \leq (q-1) 2^\gamma.
$$
Then we can check that
$$
f(\lambda):=\lambda^{\frac{b}{q-1}}e^{-2^\gamma\lambda}
$$
is a decreasing function on $(1,\infty)$; indeed, $f'(\lambda)=\lambda^{\frac{b}{q-1}-1}e^{-2^{\gamma}\lambda}\left(\frac{b}{q-1}-2^{\gamma}\lambda\right)\leq 0$ for $\lambda>1$.
Using the monotonicity of $f$ with $\lambda = 2^{j\gamma}t$, we have
\begin{align}
\label{20230120 30}
\sum_{j\in\cI_2(t)}2^{\frac{j\gamma b}{q-1}}e^{-t2^{j\gamma}2^\gamma}&=t^{-\frac{b}{q-1}}\sum_{j\in\cI_2(t)}(2^{j\gamma}t)^{\frac{b}{q-1}}e^{-t2^{j\gamma}2^\gamma}\leq Nt^{-\frac{b}{q-1}}\int_{-\frac{1}{\gamma}\log_{2}t}^{\infty}(2^{\lambda\gamma}t)^{\frac{b}{q-1}}e^{- t2^{\lambda\gamma}2^\gamma}d\lambda.
\end{align}
Moreover, applying the change of variables $t 2^{\lambda \gamma} \to \lambda$, the term above is less than or equal to
\begin{align}
Nt^{-\frac{b}{q-1}}\int_1^{\infty} \frac{f(\lambda)}{\lambda}d\lambda=N(q,b,\kappa,\gamma)t^{-\frac{b}{q-1}}.\label{ineq:22.02.22.14.35}
\end{align}
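For the reader's convenience, the change of variables behind \eqref{ineq:22.02.22.14.35} is $\lambda'=t2^{\lambda\gamma}$, so that $d\lambda=\frac{d\lambda'}{(\gamma\ln2)\lambda'}$ and the lower limit $\lambda=-\frac{1}{\gamma}\log_2 t$ becomes $\lambda'=1$:

```latex
\begin{align*}
\int_{-\frac{1}{\gamma}\log_{2}t}^{\infty}(2^{\lambda\gamma}t)^{\frac{b}{q-1}}e^{-t2^{\lambda\gamma}2^{\gamma}}d\lambda
=\frac{1}{\gamma\ln2}\int_{1}^{\infty}(\lambda')^{\frac{b}{q-1}}e^{-2^{\gamma}\lambda'}\frac{d\lambda'}{\lambda'}
=\frac{1}{\gamma\ln2}\int_{1}^{\infty}\frac{f(\lambda')}{\lambda'}d\lambda'<\infty.
\end{align*}
```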
Putting \eqref{20230120 30} and \eqref{ineq:22.02.22.14.35} in \eqref{cI2}, we have
\begin{align}
I_2
&\leq N
\int_0^{\infty}
\sum_{j\in\cI_2(t)}
2^{jq\varepsilon\gamma-j\gamma b}
e^{- t2^{j\gamma}2^\gamma}
\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
t^{a-b}\mu(dt)\nonumber\\
&\leq
N
\sum_{j\in\bZ}
2^{jq\varepsilon\gamma-j\gamma a}
\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
\cL_{\mu}(2^{j\gamma}),\nonumber
\end{align}
where $N=N(b,\gamma,\kappa,q)$.
Therefore, by \eqref{laplace cond} again,
\begin{align}
\label{ineq:cI2 final}
I_2\leq N(b,N_{\cL_\mu},\gamma,\kappa)\sum_{j\in\bZ}2^{q\varepsilon\boldsymbol{\gamma}(j)-\boldsymbol{\mu}(j)}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q.
\end{align}
Finally combining \eqref{cI1 cI2}, \eqref{ineq:cI1 final}, and \eqref{ineq:cI2 final}, we obtain
\begin{align*}
\int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
&\leq N\sum_{j\in\bZ} 2^{q\varepsilon\boldsymbol{\gamma}(j)-\boldsymbol{\mu}(j)} \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q\\
&= N\|u_0\|_{\dot{B}_{p,q}^{\varepsilon\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)}^q,
\end{align*}
which proves \eqref{ineq:hom}, since $b$ can be chosen depending only on $a$, $\gamma$, and $q$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{22.12.27.16.53}]
Due to Proposition \ref{22.05.03.11.34}, without loss of generality, we assume that $\boldsymbol{r}(j)=\boldsymbol{\gamma}(j)=j\gamma$.
First, we prove \textit{a priori} estimates \eqref{main a priori est 0} and \eqref{main a priori est}.
By \cite[Theorem 2.1.5]{choi_thesis}, for any $u_0 \in C_c^\infty(\bR^d)$,
there is a unique classical solution $u \in C_p^{1,\infty}([0,T] \times \bR^d)$ to the Cauchy problem
\begin{equation*}
\begin{cases}
\partial_tu(t,x)=\partialsi(t,-i\nabla)u(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad & x\in\bR^d.
\end{cases}
\end{equation*}
and the solution $u$ is given by
$$
u(t,x):=\int_{\bR^d}p(t,0,x-y)u_0(y)dy=\cT_{t,0}u_0(x).
$$
Thus due to Theorem \ref{thm:ep gamma esti}, we have \eqref{main a priori est 0} and \eqref{main a priori est}.
Next we prove the existence of a solution.
By Proposition \ref{22.04.24.20.57}-($ii$), for $u_0\in B_{p,q}^{\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)$, there exists a $\{u_0^n\}_{n=1}^{\infty}\subseteq C_c^{\infty}(\bR^d)$ such that $u_0^n\to u_0$ in $B_{p,q}^{\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)$.
By \cite[Theorem 2.1.5]{choi_thesis} again,
$$
u_n(t,x):=\int_{\bR^d}p(t,0,x-y)u_0^n(y)dy=\cT_{t,0}u_0^n(x)
\in C_p^{1,\infty}([0,T]\times\bR^d)
$$
becomes a unique classical solution to the Cauchy problem
\begin{equation*}
\begin{cases}
\partial_tu_n(t,x)=\partialsi(t,-i\nabla)u_n(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u_n(0,x)=u_0^n(x),\quad & x\in\bR^d.
\end{cases}
\end{equation*}
Moreover, due to the linearity of the equation, applying Theorem \ref{thm:ep gamma esti} again, for all $n,m \in \bN$, we have
\begin{align*}
&\int_{0}^T\left(\left\|u_n-u_m \right\|_{H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx))}^q +\left\|\partialsi(t,-i\nabla)(u_n - u_m) \right\|_{L_p(\bR^d,w\,dx)}^q \right)
t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right) \\
&\leq N'(1+\mu_{a,T,\kappa,\gamma,q})\|u^n_0-u^m_0\|^q_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align*}
In particular, $\{u_n\}$ is a Cauchy sequence in
$$
L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)\right).
$$
Since the above space is a quasi-Banach space, $u_n$ converges to some limit $u$ in this space.
Therefore, by Definition \ref{def sol}, this $u$ is a solution.
At last, we prove the uniqueness of a solution.
We assume that there exist two solutions $u$ and $v$.
Then by Definition \ref{def sol}, there exist
$u_n,v_n\in C_p^{1,\infty}([0,T]\times\bR^d)$ such that $u_n(0,\cdot),v_n(0,\cdot)\in C_c^{\infty}(\bR^d)$,
\begin{equation*}
\begin{gathered}
\partial_tu_n(t,x)=\partialsi(t,-i\nabla)u_n(t,x),\quad \partial_tv_n(t,x)=\partialsi(t,-i\nabla)v_n(t,x)\quad \forall (t,x)\in(0,T)\times\bR^d,\\
u_n(0,\cdot), v_n(0,\cdot)\to u_0 \quad\text{in}\quad B_{p,q}^{\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx),
\end{gathered}
\end{equation*}
and $u_n\to u$, $v_n\to v$ in
$$
L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)\right).
$$
as $n\to\infty$. Then $w_n:=u_n-v_n$ satisfies
\begin{equation*}
\begin{cases}
\partial_tw_n(t,x)=\partialsi(t,-i\nabla)w_n(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
w_n(0,x)=u_n(0,x)-v_n(0,x),\quad & x\in\bR^d.
\end{cases}
\end{equation*}
Due to \cite[Theorem 2.1.5]{choi_thesis} and Theorem \ref{thm:ep gamma esti}, we conclude that $w_n\to0$ in
$$
L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma} dt\right);H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)\right).
$$ as $n\to\infty$.
Since the limit is unique,
$$
0=\lim_{n \to \infty} w_n = \lim_{n \to \infty}u_n - \lim_{n \to \infty}v_n = u -v.
$$
The theorem is proved.
\end{proof}
\mysection{Proof of Proposition \ref{prop:maximal esti}}\label{sec:prop}
Recall
\begin{align}
\label{def fund}
p(t,s,x):=1_{0 < s< t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} \exp\left(\int_{s}^t\partialsi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi
\end{align}
and
\begin{align}
\label{def kernel}
P_{\varepsilon}(t,s,x):=(-\Delta)^{\varepsilon\gamma/2}p(t,s,x),\quad \varepsilon\in[0,1].
\end{align}
We fix $\varepsilon \in [0,1]$ throughout this section.
The proof of Proposition \ref{prop:maximal esti} is twofold.
In the first subsection, we obtain quantitative estimates for the kernel $ P_{\varepsilon}$.
In the special case $\varepsilon =0$, we obtain an estimate for the fundamental solution $P_{0}(t,s,x):=p(t,s,x)$ to \eqref{eqn:model eqn}, \textit{i.e.}
\begin{equation*}
\begin{cases}
\partial_tp(t,s,x)=\partialsi(t,-i\nabla)p(t,s,x),\quad &(t,x)\in(s,\infty)\times\bR^d,\\
\lim_{t \downarrow s}p(t,s,x)=\delta_0(x),\quad &x\in\bR^d.
\end{cases}
\end{equation*}
Here, $\delta_0$ is the Dirac measure centered at the origin.
We establish these quantitative estimates in the first subsection and then, using them, we prove in the second subsection an important lemma which controls the mean oscillations of the operators $\cT_{t,0}^{\varepsilon, j}$, $\cT_{t,0}^{\varepsilon, \leq 0}$,
$\partialsi(t,-i\nabla)\cT_{t,0}^{ j}$, and $\partialsi(t,-i\nabla) \cT_{t,0}^{\leq 0}$.
Finally, we prove Proposition \ref{prop:maximal esti} based on the mean oscillation estimates.
\subsection{Estimates on fundamental solutions}
Recall
\begin{align*}
\partialsi(t,-i \nabla)p(t,s,x):=1_{0 < s< t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} \partialsi(t,\xi)\exp\left(\int_{s}^t\partialsi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi.
\end{align*}
Here is our main kernel estimate.
\begin{thm}\label{22.02.15.11.27}
Let $k$ be an integer such that $k>\lfloor d/2\rfloor$. Assume that $\partialsi(t,\xi)$ satisfies an ellipticity condition with $(\gamma,\kappa)$ and has a $k$-times regular upper bound with $(\gamma,M)$.
\begin{enumerate}[(i)]
\item
Let $p\in[2,\infty]$, $(n,m,|\alpha|)\in[0,k]\times\{0,1\}\times \{0,1,2\}$, and $\delta\in(0,1)$. Then there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m,n)$ such that for all $t>s>0$,
\begin{equation}
\label{22.01.27.13.46}
\left\||\cdot|^{n}\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/p'\right)},
\end{equation}
where $p'$ is the H\"older conjugate of $p$, \textit{i.e.} $1/p+1/p'=1$ $(p'=1$ if $p=\infty)$.
\item
Let $p\in[1,2]$, $(m,|\alpha|)\in\{0,1\}\times \{0,1,2\}$, and $\delta\in(0,1)$.
Then there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m)$ such that for all $t>s>0$,
\begin{equation}\label{22.01.27.13.58}
\left\|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|+d/p'\right)}.
\end{equation}
\item
Let $p\in[2,\infty]$, $(n,m,|\alpha|)\in[0,k]\times\{0,1\}\times \{0,1,2\}$, and $\delta\in(0,1)$. Then there exists a positive constant $N=N(|\alpha|,d,\delta,\gamma,\kappa,M,m,n)$ such that for all $t>s>0$,
\begin{equation}
\label{2023012301}
\left\||\cdot|^{n}\partial_t^mD_x^{\alpha}\Delta_j \partialsi(t,-i\nabla) p(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+1)\gamma+|\alpha|-n+d/p'\right)}.
\end{equation}
\item
Let $p\in[1,2]$, $(m,|\alpha|)\in\{0,1\}\times \{0,1,2\}$, and $\delta\in(0,1)$.
Then there exists a positive constant $N=N(|\alpha|,d,\delta,\gamma,\kappa,M,m)$ such that for all $t>s>0$,
\begin{equation}
\label{2023012302}
\left\|\partial_t^mD_x^{\alpha}\Delta_j\partialsi(t,-i\nabla)p(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+1)\gamma+|\alpha|+d/p'\right)}.
\end{equation}
\end{enumerate}
\end{thm}
\begin{proof}
The proofs of \eqref{2023012301} and \eqref{2023012302} are very similar to those of \eqref{22.01.27.13.46} and \eqref{22.01.27.13.58} with $\varepsilon =1$ due to \eqref{condi:reg ubound}, \textit{i.e.}
$$
|\partialsi(t,\xi)| \lesssim |\xi|^\gamma.
$$
Thus we only focus on proving \eqref{22.01.27.13.46} and \eqref{22.01.27.13.58}.
The proofs highly rely on the following lemma whose proof is given in the last part of this subsection.
\begin{lem}\label{lem:kernel esti}
Let $k\in \bN$, $\alpha$ be a ($d$-dimensional) multi-index and $m\in\{0,1\}$.
Assume that $\partialsi(t,\xi)$ satisfies an ellipticity condition with $(\gamma,\kappa)$ and has a $k$-times regular upper bound with $(\gamma,M)$.
Then for each $n\in\{0,1,\cdots,k\}$ and $\delta\in(0,1)$, there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,n)$ such that for all $t>s>0$ and $j\in\bZ$,
\begin{equation*}
\begin{gathered}
\sup_{x\in\bR^d}|x|^{n} |\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)},\\
\left(\int_{\bR^d}|x|^{2n}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^2dx\right)^{1/2}\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/2\right)}.
\end{gathered}
\end{equation*}
\end{lem}
Taking Lemma \ref{lem:kernel esti} for granted for the moment, we complete the proof of Theorem \ref{22.02.15.11.27}.
We prove \eqref{22.01.27.13.46} first. We divide the proof into two cases: the integer case $n\in\{0,1,2,\cdots,k\}$ and the non-integer case $n\in[0,k]\setminus\{0,1,2,\cdots,k\}$.
\begin{enumerate}
\item[Case 1.]
Assume $n$ is an integer, \textit{i.e.} $n=0,1,2,\cdots,k$.
If $p=2$ or $p=\infty$, then \eqref{22.01.27.13.46} directly holds due to Lemma \ref{lem:kernel esti}.
For $p\in(2,\infty)$, we apply Lemma \ref{lem:kernel esti} again and obtain
\begin{align*}
&\int_{\bR^d}|x|^{pn}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,x)|^{p}dx\\
&=\int_{\bR^d}|x|^{2n}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,x)|^{2}|x|^{(p-2)n}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,x)|^{p-2}dx\\
&\leq N\left(e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}\right)^{p-2}\int_{\bR^d}|x|^{2n}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,x)|^{2}dx\\
&\leq N\left(e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}\right)^{p-2}\times\left(e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/2\right)}\right)^{2}\\
&=N\left(e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/p'\right)}\right)^{p}.
\end{align*}
\item[Case 2.]
Assume $n$ is a non-integer, \textit{i.e.} $n\in[0,k]\setminus\{0,1,2,\cdots,k\}$.
Observe that for any $q\in[1,\infty)$,
\begin{equation}\label{22.01.27.13.59}
\begin{aligned}
&|x|^{qn}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^q\\
&=\left(|x|^{q\lfloor n\rfloor}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^q\right)^{1-(n-\lfloor n\rfloor)}\times\left(|x|^{q(\lfloor n\rfloor+1)}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^q\right)^{n-\lfloor n\rfloor}.
\end{aligned}
\end{equation}
We use the result of Case 1 together with \eqref{22.01.27.13.59}, \textit{i.e.} after applying \eqref{22.01.27.13.59} we use \eqref{22.01.27.13.46} with $\lfloor n \rfloor$ and $\lfloor n \rfloor+1$.
Using \eqref{22.01.27.13.59} with $q=1$ and the result of Case 1, \eqref{22.01.27.13.46} holds if $p=\infty$.
For other $p$, \textit{i.e.} $p \in [2,\infty)$, we use \eqref{22.01.27.13.59} with $q=p$ and apply H\"older's inequality with $\frac{1}{n-\lfloor n \rfloor}$. Then finally due to the result of Case 1, we have
\begin{align*}
&\left(\int_{\bR^d}|x|^{pn}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^pdx\right)^{1/p}\\
&\leq \left(\int_{\bR^d}|x|^{p(\lfloor n\rfloor+1)}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^pdx\right)^{(n-\lfloor n\rfloor)/p}\times\left(\int_{\bR^d}|x|^{p\lfloor n\rfloor}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^pdx\right)^{(1-(n-\lfloor n\rfloor))/p}\\
&\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/p'\right)}.
\end{align*}
\end{enumerate}
Next we prove \eqref{22.01.27.13.58}. The case $p=2$ holds due to \eqref{22.01.27.13.46} with $n=0$.
Moreover, we claim that it is sufficient to show that \eqref{22.01.27.13.58} holds for $p=1$.
Indeed, for $p \in (1,2)$ there exists a $\lambda \in (0,1)$ such that $p=\lambda+2(1-\lambda)$ and
\begin{equation}\label{ineq:22 02 28 13 27}
\begin{aligned}
|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^p=|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^{\lambda}\times|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^{2(1-\lambda)}.
\end{aligned}
\end{equation}
Applying H\"older's inequality with \eqref{ineq:22 02 28 13 27} and $\frac{1}{\lambda}$, we obtain \eqref{22.01.27.13.58}.
Thus we focus on showing \eqref{22.01.27.13.58} with $p=1$.
Let $j \in \bZ$. We consider $|x| \leq 2^{-j}$ and $|x|>2^{-j}$, separately.
\begin{align*}
\int_{\bR^d}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx&=\int_{|x|\leq 2^{-j}}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx + \int_{|x|> 2^{-j}}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx.
\end{align*}
For $|x| \leq 2^{-j}$ we make use of $(i)$ with $p=\infty$ and $n=0$. Then
\begin{align*}
\int_{|x|\leq 2^{-j}}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx&\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|)}.
\end{align*}
For $|x|>2^{-j}$ we put $d_2:=\lfloor d/2\rfloor+1$ and note that $d_2\leq k$.
Then by H\"older's inequality and $(i)$ with $p=2$,
\begin{align*}
\int_{|x|> 2^{-j}}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx&\leq \left(\int_{|x|> 2^{-j}}|x|^{-2d_2}dx\right)^{1/2} \left(\int_{|x|> 2^{-j}}|x|^{2d_2}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^2dx\right)^{1/2}\\
&\leq N2^{j(d_2-d/2)}e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-d_2+d/2\right)}\\
&= Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|\right)}.
\end{align*}
The theorem is proved.
\end{proof}
\begin{corollary}\label{22.02.15.14.36}
Let $k$ be an integer such that $k>\lfloor d/2\rfloor$. Assume that $\partialsi(t,\xi)$ satisfies an ellipticity condition with $(\gamma,\kappa)$ and has a $k$-times regular upper bound with $(\gamma,M)$. Then, for all $p\in[1,\infty]$ and $(m,|\alpha|)\in\{0,1\}\times\{0,1,2\}$,
there exist positive constants $N$ and $N'$ such that for all $t>s>0$,
\begin{align}
\label{20230130 01}
\left\|\partial_t^mD_x^{\alpha}S_0P_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq N,
\end{align}
and
$$
\left\|\partial_t^mD_x^{\alpha}S_0 \partialsi(t,-i\nabla)p(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq N',
$$
where $N=N(|\alpha|,d,\varepsilon,\gamma,\kappa,M,m,p)$ and $N'=N'(|\alpha|,d,\gamma,\kappa,M,m,p)$.
\end{corollary}
\begin{proof}
Due to similarity, we only prove \eqref{20230130 01}.
Since $S_0:=\sum_{j\leq 0}\Delta_j$, we make use of Minkowski's inequality and Theorem \ref{22.02.15.11.27} to obtain
\begin{align*}
\left\|\partial_t^mD_x^{\alpha}S_0P_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}&\leq \sum_{j\leq 0}\left\|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq N\sum_{j\leq 0}2^{j((m+\varepsilon)\gamma+|\alpha|+d/p')}.
\end{align*}
Note that the summation is finite if $(m+\varepsilon)\gamma+|\alpha|+d/p'>0$.
The corollary is proved.
\end{proof}
Now we prove Lemma \ref{lem:kernel esti}.
\begin{proof}[Proof of Lemma \ref{lem:kernel esti}]
We apply some elementary properties of the inverse Fourier transform to obtain an upper bound of $|x|^n|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|$.
Indeed, recalling \eqref{def fund} and \eqref{def kernel}, we have
\begin{equation}\label{ineq:22 02 28 14 29}
\begin{aligned}
|x|^n|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|&\leq N(d,n)\sum_{i=1}^d|x^i|^n|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|\\
&\leq N(d,n)\sum_{|\beta|=n}\int_{\bR^d}\left|D^{\beta}_{\xi}\left(\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\partial_t^m\exp\left(\int_s^t\psi(r,\xi)dr\right)\right)\right|d\xi.
\end{aligned}
\end{equation}
For the integrand, we make use of Leibniz's product rule. Then
\begin{equation}\label{ineq:22 02 28 14 30}
\begin{aligned}
&D_{\xi}^{\beta}\left(\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\partial_t^m\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\\
&=D_{\xi}^{\beta}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\\
&=\sum_{\beta=\beta_0+\beta_1}c_{\beta_0,\beta_1}D^{\beta_0}_{\xi}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)2^{-j|\beta_1|}D^{\beta_1}_{\xi}\cF[\Psi](2^{-j}\xi).
\end{aligned}
\end{equation}
To estimate $D^{\beta_0}_{\xi}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)$, we borrow a lemma from \cite{Choi_Kim2022}, stated below.
\begin{lem}[\cite{Choi_Kim2022} Lemma 4.1]\label{lem:multiplier esti}
Let $n\in\bN$ and assume that $\psi(t, \xi)$ satisfies an ellipticity condition with $(\gamma, \kappa)$ and has an $n$-times regular upper bound with $(\gamma, M)$.
Then there exists a positive constant $N = N(M,n)$ such that for all $t>s>0$ and $\xi\in\bR^d \setminus \{0\}$,
\begin{align*}
\bigg| D_\xi^n\biggl(\exp\bigg(\int_s^t \psi(r, \xi)dr\bigg)\biggr) \bigg|
\leq N |\xi|^{-n} \exp(-\kappa(t-s)|\xi|^\gamma) \sum_{k=1}^n |t-s|^k |\xi|^{k\gamma}.
\end{align*}
\end{lem}
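To illustrate the mechanism behind Lemma \ref{lem:multiplier esti}, consider the first-order case $n=1$, assuming the regular upper bound gives $|D_{\xi}\psi(r,\xi)|\leq M|\xi|^{\gamma-1}$ and the ellipticity condition gives $\Re\psi(r,\xi)\leq-\kappa|\xi|^{\gamma}$. Then by the chain rule,
\begin{align*}
\bigg|D_{\xi}\exp\bigg(\int_s^t\psi(r,\xi)dr\bigg)\bigg|
\leq\bigg(\int_s^t|D_{\xi}\psi(r,\xi)|dr\bigg)\exp\bigg(\int_s^t\Re\psi(r,\xi)dr\bigg)
\leq M(t-s)|\xi|^{\gamma-1}e^{-\kappa(t-s)|\xi|^{\gamma}},
\end{align*}
which is the case $n=1$ of the lemma; higher orders follow by iterating this argument with Leibniz's rule.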
We now resume the proof of Lemma \ref{lem:kernel esti}.
By Lemma \ref{lem:multiplier esti}, it follows that
\begin{equation}\label{ineq:22 02 28 15 13}
\begin{aligned}
&\bigg| D^{\beta_0}_{\xi}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\bigg| \\
&\leq \sum_{\gamma_1 + \gamma_2 + \gamma_3 = \beta_0} c_{\gamma_1,\gamma_2,\gamma_3}
\big|D_\xi^{\gamma_1} \big(\psi(t,\xi)^m\big) \big|
\times \big|D_\xi^{\gamma_2} \big(\xi^\alpha |\xi|^{\varepsilon\gamma}\big) \big|
\times \bigg|D_\xi^{\gamma_3} \bigg(\exp\biggl(\int_s^t \psi(r,\xi)dr\biggr)\bigg) \bigg|\\
&\leq N(|\beta_0|,M) |\xi|^{m\gamma + \varepsilon\gamma +|\alpha| - |\beta_0|} \exp(-\kappa(t-s)|\xi|^{\gamma}) \sum_{k=1}^n(t-s)^k |\xi|^{k\gamma}.
\end{aligned}
\end{equation}
By \eqref{ineq:22 02 28 14 30} and \eqref{ineq:22 02 28 15 13},
\begin{equation}\label{21.08.31.13.49}
\begin{aligned}
&\left|D_{\xi}^{\beta}\left(\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\partial_t^m\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\right|\\
&\leq Ne^{-\kappa(t-s)|\xi|^{\gamma}}\left(\sum_{k=1}^{n}(t-s)^k|\xi|^{k\gamma}\right)\times\left(\sum_{\beta=\beta_0+\beta_1}|\xi|^{(m+\varepsilon)\gamma+|\alpha|-|\beta_0|}2^{-j|\beta_1|}|D^{\beta_1}_{\xi}\cF[\Psi](2^{-j}\xi)|\right)\\
&\leq N|\xi|^{(m+\varepsilon)\gamma+|\alpha|-|\beta|}e^{-\kappa(t-s)|\xi|^{\gamma}}\left(\sum_{k=1}^{n}(t-s)^k|\xi|^{k\gamma}\right)1_{2^{j-1}\leq|\xi|\leq2^{j+1}}\\
&\leq N|\xi|^{(m+\varepsilon)\gamma+|\alpha|-|\beta|}e^{-\kappa(1-\delta)(t-s)|\xi|^{\gamma}}1_{2^{j-1}\leq|\xi|\leq2^{j+1}}=N|\xi|^{(m+\varepsilon)\gamma+|\alpha|-n}e^{-\kappa(1-\delta)(t-s)|\xi|^{\gamma}}1_{2^{j-1}\leq|\xi|\leq2^{j+1}},
\end{aligned}
\end{equation}
where $N=N(\delta,M,n)$.
Moreover, one can check that
\begin{align}
\notag
&\int_{2^{j-1}\leq |\xi|\leq 2^{j+1}}|\xi|^{(m+\varepsilon)\gamma+|\alpha|-n}e^{-\kappa(1-\delta)(t-s)|\xi|^{\gamma}}d\xi\\
\notag
&=N(d)\int_{2^{j-1}}^{2^{j+1}}l^{(m+\varepsilon)\gamma+|\alpha|-n+d-1}e^{-\kappa(1-\delta)(t-s)l^{\gamma}}dl\\
\notag
&\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}\int_{2^{j-1}}^{2^{j+1}}l^{(m+\varepsilon)\gamma+|\alpha|-n+d-1}dl\\
\label{ineq:22 02 28 14 31}
&=N(|\alpha|,d,\varepsilon,m,n)e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}.
\end{align}
Combining \eqref{ineq:22 02 28 14 29}, \eqref{21.08.31.13.49}, and \eqref{ineq:22 02 28 14 31}, we have
\begin{align*}
|x|^n|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|
\leq
Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}.
\end{align*}
Similarly, by \eqref{21.08.31.13.49} and Plancherel's theorem,
\begin{align*}
&\int_{\bR^d}|x|^{2n}|\partial_t^mD_x^{\alpha} \Delta_jP_{\varepsilon}(t,s,x)|^2dx\leq N\sum_{i=1}^d\int_{\bR^d}|x^i|^{2n}|\psi(t,-i\nabla)^mD_x^{\alpha}\Delta_j P_{\varepsilon}(t,s,x)|^2dx\\
&\leq N\sum_{|\beta|=n}\int_{\bR^d}\left|D^{\beta}_{\xi}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\right|^2d\xi\\
&\leq N\int_{2^{j-1}\leq|\xi|\leq2^{j+1}}|\xi|^{2(m+\varepsilon)\gamma+2|\alpha|-2n}e^{-2\kappa(1-\delta)(t-s)|\xi|^{\gamma}}d\xi\\
&\leq Ne^{-2\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j(2(m+\varepsilon)\gamma+2|\alpha|-2n+d)},
\end{align*}
where $N=N(|\alpha|,d,\delta,\varepsilon,m,n)$.
The lemma is proved.
\end{proof}
\subsection{Proof of Proposition \ref{prop:maximal esti}}
Recall
$$
\cT_{t,0}^{\varepsilon,j} f(x):=\int_{\bR^d}\Delta_jP_{\varepsilon}(t,0,x-y)f(y)dy,\quad \cT_{t,0}^{\varepsilon,\leq0}f(x):=\int_{\bR^d}S_0P_{\varepsilon}(t,0,x-y)f(y)dy,
$$
and
$$
\psi(t,-i \nabla)\cT_{t,0}^{j} f(x):=\int_{\bR^d}\psi(t,-i\nabla)\Delta_jp(t,0,x-y)f(y)dy,\quad \psi(t,-i \nabla)\cT_{t,0}^{\leq0}f(x):=\int_{\bR^d}\psi(t,-i\nabla)S_0p(t,0,x-y)f(y)dy.
$$
In this subsection, we estimate the mean oscillations of $\cT_{t,0}^{\varepsilon, j}f$ and $\cT_{t,0}^{\varepsilon,\leq0}f$.
\begin{lem}\label{20.12.21.16.26}
Let $t>0$, $j\in\bZ$, $\delta \in (0,1)$, $p_0\in(1,2]$, $b>0$, and $f\in \cS(\bR^d)$. Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{p_0}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
\begin{enumerate}[(i)]
\item
Then for any $x\in B_{2^{-j}b}(0)$,
\begin{equation*}
\begin{gathered}
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j}f(y_0)-\cT_{t,0}^{\varepsilon,j} f(y_1)|^{p_0}dy_0dy_1\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \bM\left(|f|^{p_0}\right)(x),\\
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}| \psi(t,-i\nabla)\cT_{t,0}^{j}f(y_0)-\psi(t,-i\nabla)\cT_{t,0}^{j} f(y_1)|^{p_0}dy_0dy_1
\leq N'2^{j\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \bM\left(|f|^{p_0}\right)(x),
\end{gathered}
\end{equation*}
where $N=N(d, \delta,\varepsilon,\gamma,\kappa,M,m,p_0)$ and $N'=N'(d,\delta, \gamma,\kappa,M,m,p_0)$.
\item
Then for any $x\in B_{b}(0)$,
\begin{equation*}
\begin{gathered}
-\hspace{-0.40cm}\int_{B_{b}(0)}-\hspace{-0.40cm}\int_{B_{b}(0)}|\cT_{t,0}^{\varepsilon,\leq0} f(y_0)-\cT_{t,0}^{\varepsilon,\leq0} f(y_1)|^{p_0}dy_0dy_1\leq N\bM\left(|f|^{p_0}\right)(x),\\
-\hspace{-0.40cm}\int_{B_{b}(0)}-\hspace{-0.40cm}\int_{B_{b}(0)}|\psi(t,-i\nabla)\cT_{t,0}^{\leq0} f(y_0)-\psi(t,-i\nabla)\cT_{t,0}^{\leq0} f(y_1)|^{p_0}dy_0dy_1\leq N'\bM\left(|f|^{p_0}\right)(x),
\end{gathered}
\end{equation*}
where $N=N(d,\varepsilon,\gamma,\kappa,M,m,p_0)$ and $N'=N'(d,\gamma,\kappa,M,m,p_0)$.
\end{enumerate}
\end{lem}
Granting Lemma \ref{20.12.21.16.26}, Proposition \ref{prop:maximal esti} follows.
\begin{proof}[Proof of Proposition \ref{prop:maximal esti}]
Due to similarity, we only prove it for $\cT_{t,0}^{\varepsilon,j}$. Let $b>0$, $t>0$, and $x\in B_{2^{-j}b}(0)$. By Lemma \ref{20.12.21.16.26},
\begin{align*}
&-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j} f(y_0)-\cT_{t,0}^{\varepsilon,j} f(
y_1)|^{p_0}dy_0dy_1 \leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x).
\end{align*}
For $x_0\in\bR^{d}$, denote
$$
\tau_{x_0}f(x):=f(x_0+x).
$$
Since $\cT_{t,0}^{\varepsilon,j}$ and $\tau_{x_0}$ commute,
\begin{align*}
&-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}|\cT_{t,0}^{\varepsilon,j} f(y_0)-\cT_{t,0}^{\varepsilon,j} f(y_1)|^{p_0}dy_0dy_1\\
&=-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j} (\tau_{x_0}f)(y_0)-\cT_{t,0}^{\varepsilon,j} (\tau_{x_0}f)(y_1)|^{p_0}dy_0dy_1\\
&\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|\tau_{x_0}f|^{p_0}\right)(x)=N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x_0+x).
\end{align*}
Therefore, by Jensen's inequality, for $x\in B_{2^{-j}b}(0)$ and $x_0\in\bR^{d}$
\begin{align*}
&\left(-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}|\cT_{t,0}^{\varepsilon,j}f(y_0)-\cT_{t,0}^{\varepsilon,j} f(y_1)|dy_0dy_1\right)^{p_0}\\
&\leq -\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}|\cT_{t,0}^{\varepsilon,j}f(y_0)-\cT_{t,0}^{\varepsilon,j} f(y_1)|^{p_0}dy_0dy_1\\
&\leq N2^{j\varepsilon\gamma
p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x_0+x).
\end{align*}
Taking the supremum of both sides over all balls $B_{2^{-j}b}$ containing $x_0+x$, we obtain the desired result. The proposition is proved.
\end{proof}
Therefore, it suffices to prove Lemma \ref{20.12.21.16.26} to complete the proof of Proposition \ref{prop:maximal esti}.
In doing so, we begin with two lemmas which reduce our computational effort.
For readers' convenience, we also present the following scheme, which explains relations among Lemmas \ref{20.12.17.20.21}, \ref{22.02.15.16.27}, \ref{22.01.28.16.57}, \ref{22.01.28.17.18}, \ref{20.12.21.16.26}, and Proposition \ref{prop:maximal esti}.
\begin{equation*}
\begin{rcases}
\text{Lemma \ref{22.01.28.16.57}} \to &\text{Lemma \ref{22.01.28.17.18}} \\
&\text{Lemma \ref{20.12.17.20.21}} \\
&\text{Lemma \ref{22.02.15.16.27}}
\end{rcases}
\to \text{Lemma \ref{20.12.21.16.26}} \to \text{Proposition \ref{prop:maximal esti}},
\end{equation*}
where $A\to B$ implies that $A$ is used in the proof of $B$.
Note that Lemmas \ref{20.12.17.20.21}, \ref{22.02.15.16.27}, and \ref{22.01.28.16.57} are simple consequences of Theorem \ref{22.02.15.11.27} and Corollary \ref{22.02.15.14.36}.
\begin{lem}\label{20.12.17.20.21}
Let $p_0\in(1,2]$ and $f\in \cS(\bR^d)$.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{p_0}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then for all $t>s>0$ and $(m,|\alpha|)\in\{0,1\}\times\{0,1,2\}$, there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m,p_0)$ such that
for all $a_0,a_1\in(0,\infty)$,
\begin{equation*}
\begin{aligned}
&\int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}\left(\lambda^d \int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha}\Delta_j P_{\varepsilon}(t,s,\lambda \omega)|^{p_0'}\sigma(d \omega) \right)^{1/p_0'}d\lambda\\
&\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-d(p_0)-1+d/p_0\right)}\left(\int_{a_1}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0},
\end{aligned}
\end{equation*}
where $\bS^{d-1}$ denotes the $(d-1)$-dimensional unit sphere, $\sigma(d\omega)$ denotes the surface measure on $\bS^{d-1}$,
$$
d(p_0) :=\left\lfloor\frac{d}{p_0}\right\rfloor+1,
$$
and $p_0'$ is the H\"older conjugate of $p_0$, \textit{i.e.} $1/p_0+1/p_0'=1$.
\end{lem}
\begin{proof}
For notational convenience, we define
\begin{equation*}
\begin{gathered}
\mu:=d(p_0)+\frac{1}{p_0}>\frac{d+1}{p_0}
\end{gathered}
\end{equation*}
and
$$
K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda):=\int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha} \Delta_jP_{\varepsilon}(t,s,\lambda w)|^{p_0'}\sigma(dw).
$$
By H\"older's inequality and Theorem \ref{22.02.15.11.27},
\begin{align*}
&\int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}(\lambda^dK_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda))^{1/p_0'}d\lambda\\
&\leq\left(\int_{a_1}^{\infty}\lambda^{-p_0\mu}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0}\left(\int_{a_1}^{\infty}\lambda^{d+p_0'\mu}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda)d\lambda \right)^{1/p_0'}\\
&\leq\left(\int_{a_1}^{\infty}\lambda^{-p_0\mu}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0}\left(\int_{\bR^d}|z|^{1+p_0'\mu}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,z)|^{p_0'} dz\right)^{1/p_0'}\\
&\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-d(p_0)-1+d/p_0\right)}\left(\int_{a_1}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0}.
\end{align*}
The lemma is proved.
\end{proof}
\begin{lem}\label{22.02.15.16.27}
Let $p_0\in(1,2]$ and $f\in \cS(\bR^d)$.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{p_0}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then for all $t>s>0$ and $(m,|\alpha|)\in\{0,1\}\times\{0,1,2\}$ satisfying
$$
(m+\varepsilon)\gamma+|\alpha|+d/p_0-\lfloor d/p_0\rfloor>0,
$$
there exists a positive constant $N=N(|\alpha|,d,\varepsilon,\gamma,\kappa,M,m,p_0)$ such that
for all $a_0,a_1\in(0,\infty)$,
\begin{equation*}
\begin{aligned}
&\int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}\left(\lambda^d \int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha}S_0 P_{\varepsilon}(t,s,\lambda \omega)|^{p_0'}\sigma(d \omega) \right)^{1/p_0'}d\lambda\\
&\leq N\left(\int_{a_1}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0},
\end{aligned}
\end{equation*}
where
$$
d(p_0) :=\left\lfloor\frac{d}{p_0}\right\rfloor+1
$$
and $p_0'$ is the H\"older conjugate of $p_0$, \textit{i.e.} $1/p_0+1/p_0'=1$.
\end{lem}
\begin{proof}
First, we choose a $c=c(|\alpha|,d,\varepsilon,\gamma,m,p_0)>1$ so that
$$
(m+\varepsilon)\gamma+|\alpha|+d/p_0-\lfloor d/p_0\rfloor>\log_2(c)>0.
$$
By H\"older's inequality,
\begin{equation}\label{22.02.15.16.06}
\begin{aligned}
&\int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha} S_0P_{\varepsilon}(t,s,\lambda w)|^{p_0'}\sigma(dw)\\
&\leq \left(\sum_{j\leq 0}c^{j/(p_0-1)}\right)^{p_0-1}\int_{\bS^{d-1}}\sum_{j\leq 0}c^{-j}|\partial_t^mD_x^{\alpha} \Delta_jP_{\varepsilon}(t,s,\lambda w)|^{p_0'}\sigma(dw)\\
&\leq N(c,p_0)\sum_{j\leq0}c^{-j}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda).
\end{aligned}
\end{equation}
Putting
$$
\mu = d(p_0) +\frac{1}{p_0}
$$
and using \eqref{22.02.15.16.06} and H\"older's inequality, we have
\begin{align*}
&\int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}\left(\lambda^d \int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha}S_0 P_{\varepsilon}(t,s,\lambda \omega)|^{p_0'}\sigma(d \omega) \right)^{1/p_0'}d\lambda\\
\leq& \int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}\left(\lambda^d\sum_{j\leq0}c^{-j}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda) \right)^{1/p_0'}d\lambda\\
\leq &\left(\int_{a_1}^{\infty}\lambda^{-p_0\mu}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0}\left(\sum_{j\leq0}c^{-j}\int_{a_1}^{\infty}\lambda^{d+p_0'\mu}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda)d\lambda \right)^{1/p_0'}.
\end{align*}
By Theorem \ref{22.02.15.11.27},
\begin{align*}
\sum_{j\leq0}c^{-j}\int_{a_1}^{\infty}\lambda^{d+p_0'\mu}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda)d\lambda&\leq N\sum_{j\leq0}c^{-j}\int_{\bR^d}|z|^{1+p_0'\mu}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,z)|^{p_0'} dz\\
&\leq N \sum_{j\leq0}c^{-j}2^{j\left((m+\varepsilon)\gamma+|\alpha|-d(p_0)-1+d/p_0\right)}\\
&=N\sum_{j\leq0}2^{j\left((m+\varepsilon)\gamma+|\alpha|+d/p_0-\lfloor d/p_0\rfloor-\log_2(c)\right)}=N.
\end{align*}
The lemma is proved.
\end{proof}
Making use of Lemmas \ref{20.12.17.20.21} and \ref{22.02.15.16.27}, we now estimate the mean oscillations of $\cT_{t,0}^{\varepsilon,j} f$ and $\cT_{t,0}^{\varepsilon,\leq0}f$.
To do so, we first bound the $L_p$-norms of $\cT_{t,0}^{\varepsilon,j}f$ and $\cT_{t,0}^{\varepsilon,\leq0}f$ with respect to the space variable $x$.
\begin{lem}\label{22.01.28.16.57}
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{2}\right\rfloor+1\right)$-times regular upper bound with $(\gamma,M)$.
Then for all $t>0$, $p\in[1,\infty]$, and $\delta \in (0,1)$, there exist positive constants $N=N(d,\varepsilon,\gamma,\kappa,M)$, $N(\delta)=N(d,\delta, \varepsilon,\gamma,\kappa,M)$, $N'(\delta)=N'(d,\delta,\gamma,\kappa,M)$, and $N'=N'(d,\gamma,\kappa,M)$ such that for all $f \in \cS(\bR^{d})$,
\begin{equation*}
\begin{gathered}
\|\cT_{t,0}^{\varepsilon,j}f\|_{L_p(\bR^{d})}\leq N(\delta)e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\varepsilon\gamma}\|f\|_{L_p(\bR^{d})}, \quad \|\cT_{t,0}^{\varepsilon,\leq0}f\|_{L_p(\bR^{d})}\leq N\|f\|_{L_p(\bR^{d})},\\
\| \psi(t, -i\nabla)\cT_{t,0}^{j}f\|_{L_p(\bR^d)}\leq N'(\delta)e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\gamma}\|f\|_{L_p(\bR^{d})},\quad
\|\psi(t, -i\nabla) \cT_{t,0}^{\leq0}f\|_{L_p(\bR^d)}\leq N'\|f\|_{L_p(\bR^{d})}.
\end{gathered}
\end{equation*}
\end{lem}
\begin{proof}
Let $ t > 0$. By Minkowski's inequality and Theorem \ref{22.02.15.11.27},
\begin{equation*}
\begin{gathered}
\|\cT_{t,0}^{\varepsilon,j}f\|_{L_p(\bR^{d})}\leq \|\Delta_{j}P_{\varepsilon}(t,0,\cdot)\|_{L_1(\bR^d)}\|f\|_{L_p(\bR^d)}\leq Ne^{-\kappa t2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\varepsilon\gamma}\|f\|_{L_p(\bR^d)},\\
\|\psi(t, -i\nabla)\cT_{t,0}^{j}f\|_{L_p(\bR^{d})}\leq \|\psi(t, -i\nabla) \Delta_{j} p(t,0,\cdot)\|_{L_1(\bR^d)}\|f\|_{L_p(\bR^d)}\leq Ne^{-\kappa t2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\gamma}\|f\|_{L_p(\bR^d)}.
\end{gathered}
\end{equation*}
Similarly, using Minkowski's inequality and Corollary \ref{22.02.15.14.36}, we also obtain the other estimates. The lemma is proved.
\end{proof}
\begin{lem}\label{22.01.28.17.18}
Let $t>0$ and $b>0$.
\begin{enumerate}[(i)]
\item
Assume that $f\in \cS(\bR^{d})$ has a support in $B_{3\times2^{-j}b}(0)$.
Then for any $x\in B_{2^{-j}b}(0)$,
\begin{equation*}
\begin{gathered}
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j} f(y)|^{p_0}dy\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \bM\left(|f|^{p_0}\right)(x),\\
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\psi(t,-i\nabla)\cT_{t,0}^{j} f(y)|^{p_0}dy\leq N'2^{j\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \bM\left(|f|^{p_0}\right)(x),
\end{gathered}
\end{equation*}
where $N=N(d,\delta,\varepsilon,\gamma,\kappa,M)$ and $N'=N'(d,\delta,\gamma,\kappa,M)$.
\item
Assume that $f\in \cS(\bR^{d})$ has a support in $B_{3b}(0)$.
Then for any $x\in B_{b}(0)$,
\begin{equation*}
\begin{gathered}
-\hspace{-0.40cm}\int_{B_{b}(0)}|\cT_{t,0}^{\varepsilon,\leq0} f(y)|^{p_0}dy\leq N\bM\left(|f|^{p_0}\right)(x),\\
-\hspace{-0.40cm}\int_{B_{b}(0)}|\psi(t,-i\nabla)\cT_{t,0}^{\leq 0} f(y)|^{p_0}dy\leq N'\bM\left(|f|^{p_0}\right)(x),
\end{gathered}
\end{equation*}
where $N=N(d,\varepsilon,\gamma,\kappa,M)$ and $N'=N'(d,\gamma,\kappa,M)$.
\end{enumerate}
\end{lem}
\begin{proof}
By Lemma \ref{22.01.28.16.57},
\begin{align*}
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j} f(y)|^{p_0}dy&\leq N\frac{2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}}{|B_{2^{-j}b}(0)|}\int_{\bR^d}|f(y)|^{p_0}dy\\
&= N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(-\hspace{-0.40cm}\int_{B_{3\times 2^{-j}b}(0)}|f(y)|^{p_0}dy\right)\\
&\leq N2^{j\varepsilon \gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x).
\end{align*}
Similarly, using Lemma \ref{22.01.28.16.57}, we can easily obtain the other results. The lemma is proved.
\end{proof}
With the help of Lemmas \ref{20.12.17.20.21}, \ref{22.02.15.16.27}, and \ref{22.01.28.17.18}, we can prove Lemma \ref{20.12.21.16.26}.
\begin{proof}[Proof of Lemma \ref{20.12.21.16.26}]
First, we prove $(i)$. Due to similarity, we only prove it for $\cT_{t,0}^{\varepsilon,j}$. Choose a $\zeta\in C^{\infty}(\bR^d)$ satisfying
\begin{itemize}
\item $\zeta(y)\in[0,1]$ for all $y\in\bR^d$;
\item $\zeta(y)=1$ for all $y\in B_{2\times 2^{-j}b}(0)$;
\item $\zeta(y)=0$ for all $y\in \bR^d\setminus B_{5\times2^{-j}b/2}(0)$.
\end{itemize}
Note that $\cT_{t,0}^{\varepsilon,j} f=\cT_{t,0}^{\varepsilon,j} (f\zeta)+\cT_{t,0}^{\varepsilon,j} (f(1-\zeta))$ and $\cT_{t,0}^{\varepsilon,j}(f\zeta)$ can be estimated by Lemma \ref{22.01.28.17.18}.
Thus it suffices to estimate $\cT_{t,0}^{\varepsilon,j} (f(1-\zeta))$ and we may assume that $f(y)=0$ if $|y|<2\times 2^{-j}b$. Hence if $y\in B_{2^{-j}b}(0)$ and $|z|<2^{-j}b$, then $|y-z|\leq 2\times 2^{-j}b$ and $f(y-z)=0$. By \cite[Lemma 6.6]{Choi_Kim2022} and H\"older's inequality,
\begin{align}
&\left|\int_{\bR^d}(\Delta_{j}P_{\varepsilon}(t,0,y_0-z)-\Delta_{j}P_{\varepsilon}(t,0,y_1-z))f(z)dz\right|\nonumber\\
&\leq N|y_0-y_1| \int_{2^{-j}b}^{\infty}(\lambda^dK_{\varepsilon,0,\alpha,j}^{p_0'}(t,0,\lambda))^{1/p_0'}\left(\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}d\lambda,\label{22.10.24.11.37}
\end{align}
where $N=N(d,p_0)$, $|\alpha|=2$, and
$$
K_{\varepsilon,0,\alpha,j}^{p_0'}(t,0,\lambda):=\int_{\bS^{d-1}}|D_x^{\alpha}\Delta_j P_{\varepsilon}(t,0,\lambda w)|^{p_0'}\sigma(dw).
$$
If $b\leq1$, then by Lemma \ref{20.12.17.20.21} and \eqref{22.10.24.11.37},
\begin{align*}
&|\cT_{t,0}^{\varepsilon,j}f(y_0)-\cT_{t,0}^{\varepsilon,j}f(y_1)|^{p_0}\\
&\leq N|y_0-y_1|^{p_0}\left(\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}2^{jp_0\left(\varepsilon\gamma-d(p_0)+1+d/p_0\right)}\\
&\leq N2^{j(\varepsilon\gamma p_0-p_0d(p_0)+d)}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}b^{p_0}\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)-1}\left(\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dz\right)d\lambda\\
&\leq N2^{j(\varepsilon\gamma p_0-p_0d(p_0)+d)}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x)b^{p_0}\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)+d-1}d\lambda\\
&\leq N2^{j\varepsilon\gamma p_0}b^{-p_0d(p_0)+d+p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x)\\
&\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x).
\end{align*}
If $b>1$ and $y\in B_{2^{-j}b}(0)$, then by \cite[Lemma 6.6]{Choi_Kim2022} with $|\alpha|=1$ and Lemma \ref{20.12.17.20.21},
\begin{align*}
&|\cT_{t,0}^{\varepsilon,j}f(y)|^{p_0}\\
&\leq N\left(\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}2^{jp_0\left(\varepsilon\gamma-d(p_0)+d/p_0\right)}\\
&\leq N2^{j(\varepsilon\gamma p_0-p_0d(p_0)+d)}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)-1}\left(\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dz\right)d\lambda\\
&\leq N2^{j(\varepsilon\gamma p_0-p_0d(p_0)+d)}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x)\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)+d-1}d\lambda\\
&\leq N2^{j\varepsilon\gamma p_0}b^{-p_0d(p_0)+d}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x)\\
&\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x).
\end{align*}
For $(ii)$, note that if $|\alpha|\geq1$, then
$$
|\alpha|+d/p_0-\lfloor d/p_0\rfloor\geq |\alpha|>0.
$$
Thus, applying arguments similar to those in $(i)$, with Lemma \ref{22.02.15.16.27} in place of Lemma \ref{20.12.17.20.21}, we also obtain $(ii)$.
The lemma is proved.
\end{proof}
\appendix
\mysection{Weighted multiplier and Littlewood-Paley theorem}
\begin{prop}[Weighted Mikhlin multiplier theorem]
\label{21.02.24.16.49}
Let $p\in(1,\infty)$, $w\in A_p(\bR^{d})$, and $f \in \cS(\bR^{d})$.
Suppose that \begin{equation}
\label{22.05.12.13.14}
\sup_{R>0}\left(R^{2|\alpha|-d}\int_{R<|\xi|<2R}|D^{\alpha}_{\xi}\pi(\xi)|^{2}d\xi\right)^{1/2}\leq N^*,\quad\forall |\alpha|\leq d.
\end{equation}
Then there exists a constant $N=N(d,p,K,N^*)$ such that
$$
\|\bT_{\pi}f\|_{L_p(\bR^d,w\,dx)}\leq N\|f\|_{L_p(\bR^d,w\,dx)},
$$
where $[w]_{A_p(\bR^d)}\leq K$ and
$$
\bT_{\pi}f(x):=\cF^{-1}[\pi\cF[f]](x).
$$
\end{prop}
\begin{proof}
By \cite[Theorem 6.2.7]{grafakos2014classical}, the operator $\bT_{\pi}:L_q(\bR^d)\to L_{q}(\bR^d)$ is bounded for all $q\in(1,\infty)$. It is well-known that there exists $r\in(1,p)$ such that $w\in A_{p/r}(\bR^d)$.
Applying \cite[Corollaries 6.10, 6.11, and Remark 6.14]{fackler2020weighted}, we have
\begin{align*}
\|\bT_{\pi} f\|_{L_p(\bR^d,w\,dx)}&\leq N\|f\|_{L_p(\bR^d,w\,dx)},
\end{align*}
where $N=N(d,p,K,N^*)$. The proposition is proved.
\end{proof}
\begin{prop}[Weighted Littlewood-Paley theorem]
\label{prop:WLP}
Let $p\in (1, \infty)$ and $w\in A_p(\bR^d)$. Then we have
\begin{align}
\label{20230128 01}
\|Sf\|_{L_p(\bR^d,w\,dx)} &\leq C(d,p) [w]_{A_p(\bR^d)}^{\max(\frac{1}{2}, \frac{1}{p-1})} \|f\|_{L_p(\bR^d,w\,dx)}
\end{align}
and
\begin{align}
\label{20230128 03}
\|f\|_{L_p(\bR^d,w\,dx)} &\leq C(d,p) [w]_{A_p(\bR^d)}^{\frac{\max(1/2, 1/(p'-1))}{p-1}} \|Sf \|_{L_p(\bR^d,w\,dx)},
\end{align}
where
\begin{align}
\label{20230128 02}
Sf(x):=\left(\sum_{j\in\bZ}|\Delta_jf(x)|^2\right)^{1/2}.
\end{align}
\end{prop}
\begin{proof}
This result has already been proved in the literature \cite{kurtz1980little,rychkov2001weights}. These works, however, do not reveal how the implicit constants grow with the $A_p(\bR^d)$-seminorm.
The optimal implicit constants can be obtained from recent general theories.
The first inequality is proved in \cite{Ler2011weighted,Wil2007square} for various types of Littlewood-Paley operators.
We show that $Sf$ is dominated by such Littlewood-Paley operators.
The second inequality follows from the first inequality and the duality of $L_p(\bR^d,w\,dx)$, as shown in the last part of the proof.
For $\alpha \in (0,1]$, let $\mathcal{C}_\alpha$ be a family of functions $\phi:\bR^d \to \bR$ supported in $\{x\in\bR^d : |x|\leq 1\}$ such that
\begin{align*}
\int_{\bR^d} \phi(x) dx=0,\quad |\phi(x) - \phi(x')| \leq |x-x'|^\alpha,\quad \forall x, x' \in \bR^d.
\end{align*}
Then we define a maximal operator over the family $\mathcal{C}_\alpha$ as
\begin{align}\label{ineq:22 05 06 17 32}
A_\alpha(f)(x,t) := \sup_{\phi \in \mathcal{C}_\alpha} |\phi_t \ast f(x)|,\quad \phi_t(y) = t^{-d}\phi(t^{-1}y).
\end{align}
Using \eqref{ineq:22 05 06 17 32}, we construct intrinsic square functions as follows:
\begin{align*}
G_\alpha(f)(x) &:= \biggl(\int_{\Gamma(x)}\big|A_\alpha(f)(y,t)\big|^2 \frac{dydt}{t^{d+1}}\biggr)^{1/2},\\
g_\alpha(f)(x) &:= \biggl(\int_0^\infty\big|A_\alpha(f)(x,t)\big|^2\frac{dt}{t}\biggr)^{1/2},\\
\sigma_\alpha(f)(x) &:= \biggl(\sum_{j\in\bZ} \big| A_\alpha(f)(x, 2^j)\big|^2\biggr)^{1/2},
\end{align*}
where $\Gamma(x)$ denotes the conic area $\{(y,t) \in \bR^d\times\bR_+ : |x-y|\leq t\}$.
In \cite{Wil2007square}, Wilson showed pointwise equivalences among $G_\alpha$, $g_\alpha$ and $\sigma_\alpha$, \textit{i.e.}
\begin{align}\label{ineq:22 05 06 18 12}
G_\alpha(f)(x) \approx g_\alpha(f)(x) \approx \sigma_\alpha(f)(x),
\end{align}
where the implicit constants depend only on $\alpha$ and $d$.
Moreover, Lerner \cite[Theorem 1.1]{Ler2011weighted} proved
\begin{align}\label{ineq:WLP}
\|G_\alpha\|_{L_p(\bR^d,w\,dx)\to L_p(\bR^d,w\,dx)} \leq C(\alpha, d, p) [w]_{A_p(\bR^d)}^{\max(\frac{1}{2}, \frac{1}{p-1})}.
\end{align}
It should be remarked that \cite[Theorem 1.1]{Ler2011weighted} covers a broad class of operators of Littlewood-Paley type.
However, this result does not yield \eqref{20230128 01} directly, since the Littlewood-Paley operator considered in this paper is not of integral form (see \eqref{20230128 02}).
Our Littlewood-Paley operator is given as a summation of $\Delta_j f = \Psi_j \ast f$ over $j\in\bZ$, and the symbol of $\Delta_j$ is supported in $\{\xi\in\bR^d:2^{j-1}\leq|\xi| \leq2^{j+1} \}$.
Note that $\Psi_j$ cannot be compactly supported, due to the uncertainty principle, whereas elements of $\mathcal{C}_\alpha$ are supported in $\{x\in\bR^d:|x|\leq 1\}$.
To bridge this gap, we need a new family $\mathcal{C}_{\alpha,\varepsilon}$ introduced in \cite{Wil2007square}.
For $\alpha \in (0,1]$ and $\varepsilon>0$, let $\mathcal{C}_{\alpha, \varepsilon}$ be a family of functions $\phi:\bR^d \to \bR$ satisfying
\begin{equation*}
\begin{gathered}
\int_{\bR^d} \phi(x) dx=0,\quad |\phi(x)| \leq (1+|x|)^{-d-\varepsilon},\\
|\phi(x) - \phi(x')| \leq |x-x'|^\alpha \bigl((1+|x|)^{-d-\varepsilon} + (1+|x'|)^{-d-\varepsilon}\bigr),\quad \forall x, x' \in \bR^d.
\end{gathered}
\end{equation*}
Then we can define $\widetilde{A}_{\alpha, \varepsilon}$ on the basis of all functions in the above class as in \eqref{ineq:22 05 06 17 32}, \textit{i.e.}
\begin{align}
\widetilde{A}_{\alpha,\varepsilon}(f)(x,t) := \sup_{\phi \in \mathcal{C}_{\alpha,\varepsilon}} |\phi_t \ast f(x)|,\quad \phi_t(y) = t^{-d}\phi(t^{-1}y).
\end{align}
We can also define $\widetilde{G}_{\alpha, \varepsilon}$, $\widetilde{g}_{\alpha, \varepsilon}$, and $\widetilde{\sigma}_{\alpha, \varepsilon}$ in analogy with $G_\alpha$, $g_\alpha$, and $\sigma_\alpha$. Likewise, they are pointwise equivalent, \textit{i.e.}
\begin{align}\label{ineq:22 05 06 18 13}
\widetilde{G}_{\alpha,\varepsilon}(f)(x) \approx \widetilde{g}_{\alpha,\varepsilon}(f)(x) \approx \widetilde{\sigma}_{\alpha,\varepsilon}(f)(x).
\end{align}
Here, the implicit constants depend only on $\alpha$, $\varepsilon$, and $d$. Since every mean-zero Schwartz function belongs to $\mathcal{C}_{1,1}$ up to a constant multiple, a constant multiple of $\Psi$ lies in $\mathcal{C}_{1,1}$.
Thus we have
\begin{align*}
|\Delta_jf(x)| \leq C(d)\widetilde{A}_{1,1}(f)(x, 2^j),
\end{align*}
which yields
\begin{align}\label{ineq:22 05 06 18 27}
Sf(x) \leq C(d)\widetilde{\sigma}_{1,1}(f)(x)\approx C(d) \widetilde{G}_{1,1}(f)(x).
\end{align}
It is also known \cite[Theorem 2]{Wil2007square} that for all $\alpha \in (0,1]$, $\varepsilon>0$, and $0<\alpha' \leq \alpha$, there is a constant $C = C(\alpha, \alpha', \varepsilon, d)$ such that
\begin{align}\label{ineq:22 05 06 18 28}
\widetilde{G}_{\alpha, \varepsilon}(f)(x) \leq C G_{\alpha'}(f)(x).
\end{align}
Finally, by \eqref{ineq:WLP}, \eqref{ineq:22 05 06 18 13}, \eqref{ineq:22 05 06 18 27}, and \eqref{ineq:22 05 06 18 28}, we have
\begin{equation}
\label{22.05.12.16.08}
\begin{aligned}
\big\|Sf \big\|_{L_p(\bR^d,w\,dx)}
&\leq C(d) \big\|\widetilde{G}_{1,1}(f) \big\|_{L_p(\bR^d,w\,dx)}\\
&\leq C(d) \big\|G_{1}(f) \big\|_{L_p(\bR^d,w\,dx)} \leq C(d, p) [w]_{A_p(\bR^d)}^{\max(\frac{1}{2}, \frac{1}{p-1})} \| f\|_{L_p(\bR^d,w\,dx)}.
\end{aligned}
\end{equation}
For the converse inequality \eqref{20230128 03}, we recall that the topological dual space of $L_p(\bR^d,w\,dx)$ is $L_{p'}(\bR^d,\bar{w}\,dx)$ where
$$
\frac{1}{p}+\frac{1}{p'}=1,\quad \bar{w}(x):=(w(x))^{-\frac{1}{p-1}}.
$$
For more details, see \cite[Theorem A.1]{Choi_Kim2022}.
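For later use, we also record the standard identity relating the characteristics of $w$ and its dual weight $\bar{w}$ (a direct computation from the definition of the $A_p$-characteristic; it is this identity that converts the $[\bar{w}]_{A_{p'}}$-dependence below into a $[w]_{A_p}$-dependence):
\begin{align*}
[\bar{w}]_{A_{p'}(\bR^d)} = [w]_{A_p(\bR^d)}^{\frac{1}{p-1}},
\end{align*}
which follows since $\bar{w}^{-1/(p'-1)}=w$ and $p'-1=\frac{1}{p-1}$, so that each average in the definition of $[\bar{w}]_{A_{p'}}$ is the corresponding average for $[w]_{A_p}$ raised to the power $\frac{1}{p-1}$.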
Then by the almost orthogonality of $\Delta_j$, the Cauchy--Schwarz inequality, H\"older's inequality, and \eqref{22.05.12.16.08}, for all $f\in L_p(\bR^d,w\,dx)$ and $g\in C_c^{\infty}(\bR^d)$,
\begin{equation}
\label{22.05.12.16.15}
\begin{aligned}
\int_{\bR^d}f(x)g(x)dx&=\int_{\bR^d}\sum_{j\in\bZ}\Delta_jf(x)(\Delta_{j-1}+\Delta_j+\Delta_{j+1})g(x)dx\\
&\leq 3\int_{\bR^d}Sf(x)Sg(x)dx\leq 3\|Sf\|_{L_p(\bR^d,w\,dx)}\|Sg\|_{L_{p'}(\bR^d,\bar{w}\,dx)}\\
&\leq N(d,p)[\bar{w}]_{A_{p'}(\bR^d)}^{\max(\frac{1}{2},\frac{1}{p'-1})}\|Sf\|_{L_p(\bR^d,w\,dx)}\|g\|_{L_{p'}(\bR^d,\bar{w}\,dx)}.
\end{aligned}
\end{equation}
Combining \eqref{22.05.12.16.08} and \eqref{22.05.12.16.15}, we have
$$
\|f\|_{L_p(\bR^d,w\,dx)}=\sup_{\|g\|_{L_{p'}(\bR^d,\bar{w}\,dx)}\leq 1}\left|\int_{\bR^d}f(x)g(x)dx\right|\leq N(d,p)[w]_{A_{p}(\bR^d)}^{\frac{\max(1/2,1/(p'-1))}{p-1}}\|Sf\|_{L_p(\bR^d,w\,dx)}.
$$
The proposition is proved.
\end{proof}
\mysection{Properties of function spaces}
\begin{lem}
\label{wbound}
Let $p\in(1,\infty)$ and $w\in A_p(\bR^d)$. Suppose that
$$
[w]_{A_p(\bR^d)}\leq K.
$$
Then there exists a positive constant $N=N(d,p,K)$ such that
$$
\|S_0f\|_{L_p(\bR^d,w\,dx)}+\sup_{j\in\bZ}\|\Delta_jf\|_{L_p(\bR^d,w\,dx)}\leq N\|f\|_{L_p(\bR^d,w\,dx)},\quad \forall f\in L_p(\bR^d,w\,dx).
$$
\end{lem}
\begin{proof}
By the definition of $S_0$, there exists a $\Phi\in\cS(\bR^d)$ such that
$$
S_0f(x)=\Phi\ast f(x)=\int_{B_1(0)}f(x-y)\Phi(y)dy+\sum_{k=1}^{\infty}\int_{B_{2^k}(0)\setminus\overline{B_{2^{k-1}}(0)}}f(x-y)\Phi(y)dy.
$$
Since $\Phi\in\cS(\bR^d)$, there exists $N=N(d,\Phi)$ such that
$$
|\Phi(y)|+|y|^{d+1}|\Phi(y)|\leq N,\quad \forall y\in\bR^d.
$$
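The pointwise bound used next follows from the decay of $\Phi$ by summing over the dyadic annuli above (a routine computation, included for completeness): for every $k\geq 1$,
\begin{align*}
\int_{B_{2^k}(0)\setminus\overline{B_{2^{k-1}}(0)}}|f(x-y)||\Phi(y)|\,dy
\leq N 2^{-k(d+1)}\int_{B_{2^k}(0)}|f(x-y)|\,dy
\leq N 2^{-k}\bM f(x),
\end{align*}
since $|\Phi(y)|\leq N|y|^{-d-1}\leq N2^{-(k-1)(d+1)}$ on the $k$-th annulus, while the integral over $B_1(0)$ is bounded by $N\bM f(x)$ directly; summing the geometric series in $k$ then controls $S_0f$ pointwise by the maximal function.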
Therefore $| S_0 f (x) | \leq N(d,\Phi)\bM f(x)$, which leads to the first part of the inequality, \textit{i.e.}
$$
\|S_0f\|_{L_p(\bR^d,w\,dx)}\leq N\|f\|_{L_p(\bR^d,w\,dx)},\quad \forall f\in L_p(\bR^d,w\,dx)
$$
due to the weighted Hardy--Littlewood maximal function theorem.
Next recall
$$
\Delta_jf=f\ast\Psi_j=f \ast 2^{jd}\Psi(2^{j}\cdot),
$$
and we show that $\Delta_j$ is a Calder\'on-Zygmund operator to obtain
\begin{equation}
\label{22.04.11.14.12}
\sup_{j\in\bZ}\|\Delta_jf\|_{L_p(\bR^d,w\,dx)}\leq N\|f\|_{L_p(\bR^d,w\,dx)}.
\end{equation}
In other words, we have to show that $f \mapsto \Delta_j f$ is bounded on $L_2(\bR^d)$ and that $\Psi_j(x-y)$ is a standard kernel.
It is obvious that
$$
\sup_{\xi \in \bR^d} |\cF[\Psi_j](\xi)|=\sup_{\xi \in \bR^d}|\cF[\Psi](2^{-j} \xi)| = \sup_{\xi \in \bR^d}|\cF[\Psi](\xi)|,
$$
which implies that $f \mapsto \Delta_jf$ is a bounded operator on $L_2(\bR^d)$ due to the Plancherel theorem.
More precisely, we have
\begin{align}
\label{20230128 30}
\|\Delta_j f\|_{L_2(\bR^d)} \leq \frac{1}{ (2\pi)^{d/2} } \sup_{\xi \in \bR^d}|\cF[\Psi](\xi)| \|f\|_{L_2(\bR^d)}
\end{align}
where $\Psi_j=2^{jd}\Psi(2^{j}\cdot)$ and $\Psi\in\cS(\bR^d)$.
Next we show that $\Psi_j(x-y)$ is a standard kernel.
Observe that there exists a $N=N(d,\Psi)$ such that
\begin{align}
\label{20230128 10}
|x|^d|\Psi_j(x)|+|x|^{d+1}|\nabla\Psi_j(x)|=|2^jx|^d|\Psi(2^{j}x)|+|2^jx|^{d+1}|\nabla\Psi(2^jx)|\leq N,\quad \forall x\in\bR^d.
\end{align}
Moreover, by the fundamental theorem of calculus,
$$
|\Psi_j(x)-\Psi_j(y)|\leq |x-y|\int_0^1|\nabla \Psi_j(\theta x+(1-\theta)y)|d\theta.
$$
If $2|x-y|\leq\max(|x|,|y|)$, then for $\theta\in[0,1]$,
$$
|\theta x+(1-\theta)y|\geq \max(|x|,|y|)-|x-y|\geq \frac{\max(|x|,|y|)}{2}\geq \frac{|x|+|y|}{4}.
$$
Hence,
\begin{align}
\label{20230128 11}
|\Psi_j(x)-\Psi_j(y)| \leq |x-y|\int_0^1|\nabla \Psi_j(\theta x+(1-\theta)y)|d\theta\leq \frac{N4^{d+1}|x-y|}{(|x|+|y|)^{d+1}}.
\end{align}
Due to \eqref{20230128 10} and \eqref{20230128 11}, $\Psi_j(x-y)$ is a standard kernel with parameters $1$ and $N$.
In particular, $N$ can be chosen uniformly in $j \in \bZ$.
In summary, the $L_2$-bound \eqref{20230128 30} holds uniformly in $j\in\bZ$, and $\Psi_j(x-y)$ is a standard kernel with the same parameters for all $j \in \bZ$.
Therefore applying \cite[Theorem 7.4.6]{grafakos2014classical}, we have \eqref{22.04.11.14.12}. The lemma is proved.
\end{proof}
\begin{rem}\label{rem RdF vv}
An operator $T:L_p(\bR^d)\to L_p(\bR^d)$ is linearizable if there exist a Banach space $B$ and a $B$-valued linear operator $U:L_p(\bR^d)\to L_p(\bR^d;B)$ such that
$$
|Tf(x)| = \| Uf(x)\|_B,\quad f\in L_p(\bR^d).
$$
Hence any linear operator is linearizable.
Let $\{T_j\}_{j\in\bZ}$ be a sequence of linearizable operators and $K \in (0,\infty)$.
Assume that there exist $r \in (1,\infty)$ and $C(K) \in (0,\infty)$ such that
\begin{equation}
\label{22.09.13.16.42}
\sup_{j\in\bZ}\int_{\bR^d} | T_j f(x)|^r w(x) dx \leq C(K) \int_{\bR^d} |f(x)|^r w(x)dx
\end{equation}
for all $[w]_{A_r} \leq K$ and $f\in L_r(\bR^d,w\,dx)$.
Then for all $1<p,q<\infty$, there exist $K' \in (0,\infty)$ and $C$ such that
\begin{align}
\label{20230128 40}
\Big\| \Big( \sum_{j\in\bZ} |T_j f_j|^p\Big)^{1/p} \Big\|_{L_q(\bR^d,w'\,dx)} \leq C\Big\| \Big( \sum_{j\in\bZ} |f_j|^p\Big)^{1/p} \Big\|_{L_q(\bR^d,w'\,dx)}
\end{align}
for all $[w']_{A_q(\bR^d)}\leq K'$ and $f_j \in L_q(\bR^d,w'\,dx)$. The above statement can be found, for instance, in \cite[p.~521, Remarks 6.5]{RdF1985weighted}, with a modification concerning the dependence on the $A_p$-characteristics.
In particular, since $\Delta_j$ is linear and satisfies the weighted inequality for any $r>1$ due to Lemma \ref{wbound}, we have \eqref{20230128 40}
with $T_j=\Delta_j$ and it will be used in the proof of the next proposition.
\end{rem}
For $\boldsymbol{r}:\bZ\to(-\infty,\infty)$, we denote
$$
\pi_{\boldsymbol{r}}:=\sum_{j\in\bZ}2^{\boldsymbol{r}(j)}\Delta_j\quad\text{and}\quad \pi^{\boldsymbol{r}}:=S_0+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j.
$$
\begin{prop}
\label{22.05.03.11.34}
Let $p\in(1,\infty)$, $q\in(0,\infty)$, and $w\in A_p(\bR^d)$. Suppose that two sequences $\boldsymbol{r},\boldsymbol{r}':\bZ\to(-\infty,\infty)$ satisfy
\begin{align}
\label{uniform r assumption}
\sup_{j\in\bZ}\left(|\boldsymbol{r}'(j+1)-\boldsymbol{r}'(j)|+|\boldsymbol{r}(j+1)-\boldsymbol{r}(j)|\right)=:C_0<\infty.
\end{align}
Then for any $f\in \cS(\bR^d)$,
\begin{equation*}
\begin{gathered}
\|\pi^{\boldsymbol{r}}f\|_{H_p^{\boldsymbol{r}'}(\bR^d,w\,dx)}\approx \|f\|_{H_p^{\boldsymbol{r}+\boldsymbol{r}'}(\bR^d,w\,dx)},\quad \|\pi^{\boldsymbol{r}}f\|_{B_{p,q}^{\boldsymbol{r}'}(\bR^d,w\,dx)}\approx\|f\|_{B_{p,q}^{\boldsymbol{r}+\boldsymbol{r}'}(\bR^d,w\,dx)},\\
\|\pi_{\boldsymbol{r}}f\|_{\dot{H}_p^{\boldsymbol{r}'}(\bR^d,w\,dx)}\approx \|f\|_{\dot{H}_p^{\boldsymbol{r}+\boldsymbol{r}'}(\bR^d,w\,dx)},\quad
\|\pi_{\boldsymbol{r}}f\|_{\dot{B}_{p,q}^{\boldsymbol{r}'}(\bR^d,w\,dx)}\approx\|f\|_{\dot{B}_{p,q}^{\boldsymbol{r}+\boldsymbol{r}'}(\bR^d,w\,dx)},
\end{gathered}
\end{equation*}
where the equivalences depend only on $p$, $q$, $d$, $C_0$ and $[w]_{A_p(\bR^d)}$.
In particular, we have
$$
\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}\approx \|f\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}.
$$
\end{prop}
\begin{proof}
Due to the presence of $S_0$, the proof of the inhomogeneous case is more involved, while the case $\boldsymbol{r}'\neq\boldsymbol{0}$ is quite similar to the case $\boldsymbol{r}'=\boldsymbol{0}$, where
$\boldsymbol{0}(j)=0$ for all $j\in\bZ$.
Thus we only prove the inhomogeneous case under the assumption $\boldsymbol{r}'=\boldsymbol{0}$.
First, by the definition of $\partiali^{\boldsymbol{r}}$ and the triangle inequality, it is obvious that
\begin{align}
\label{20230128 50}
\|\pi^{\boldsymbol{r}} f\|_{L_p(\bR^d,w\,dx)}
\leq \| S_0 f\|_{L_p(\bR^d,w\,dx)} + \Big\| \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}.
\end{align}
For the converse inequality of \eqref{20230128 50}, we use Proposition \ref{21.02.24.16.49}. One can easily check that
$$
\Pi_0(\xi):=\frac{\cF[\Phi](\xi)}{\cF[\Phi](\xi)+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\xi)}
$$
is infinitely differentiable, $\Pi_0(\xi)=1$ if $|\xi|\leq 1$ and $\Pi_0(\xi)=0$ if $|\xi|\geq2$, where
$$
\cF[\Phi](\xi) = \sum_{j=-\infty}^{0}\cF[\Psi](2^{-j}\xi).
$$
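To see the stated properties of $\Pi_0$, recall that in the standard Littlewood--Paley construction one may assume
\begin{align*}
\supp \cF[\Psi] \subseteq \{\xi : 2^{-1} \leq |\xi| \leq 2\},\qquad \cF[\Psi]\geq 0,\qquad \sum_{j\in\bZ}\cF[\Psi](2^{-j}\xi)=1 \quad \text{for } \xi\neq 0
\end{align*}
(we take this normalization for granted here). Then for $|\xi|\leq 1$ every term $\cF[\Psi](2^{-j}\xi)$ with $j\geq 1$ vanishes, so the numerator and denominator of $\Pi_0$ coincide; for $|\xi|\geq 2$ we have $\cF[\Phi](\xi)=0$, so $\Pi_0(\xi)=0$. Moreover, at each $\xi\neq0$ at most two consecutive terms of the partition of unity are nonzero, so the denominator is locally a finite sum and is bounded away from zero on compact sets, which gives the smoothness of $\Pi_0$.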
This certainly implies that $\Pi_0$ satisfies \eqref{22.05.12.13.14}. Hence by Proposition \ref{21.02.24.16.49},
$$
\|S_0f\|_{L_p(\bR^d,w\,dx)}=\|\bT_{\Pi_0}\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}\leq N\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)},
$$
where
$$
\bT_{\Pi_0}f(x):=\cF^{-1}[\Pi_0\cF[f]](x).
$$
This also yields
$$
\Big\| \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}=\|\pi^{\boldsymbol{r}}f-S_0f\|_{L_p(\bR^d,w\,dx)}\leq N\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}
$$
and thus
\begin{align}
\label{20230128 51}
\|\pi^{\boldsymbol{r}} f\|_{L_p(\bR^d,w\,dx)}
\approx \| S_0 f\|_{L_p(\bR^d,w\,dx)} + \Big\| \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}.
\end{align}
Moreover, Proposition \ref{prop:WLP} implies
\begin{align}
\label{20230128 52}
\Big\| S\Big( \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big) \Big\|_{L_p(\bR^d,w\,dx)}
\approx
\Big\| \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)},
\end{align}
where the implicit constant depends only on $p$, $d$ and $[w]_{A_p(\bR^d)}$.
Therefore by \eqref{20230128 51} and \eqref{20230128 52},
\begin{align*}
\|\pi^{\boldsymbol{r}} f\|_{L_p(\bR^d,w\,dx)} \approx \| S_0 f\|_{L_p(\bR^d,w\,dx)} + \Big\| S\Big( \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big) \Big\|_{L_p(\bR^d,w\,dx)}.
\end{align*}
Next we claim
\begin{equation}\label{ineq 220718 1747}
\begin{aligned}
\Big\| S\Big( \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big) \Big\|_{L_p(\bR^d,w\,dx)}
\leq N\Big\| \Big( \sum_{j=1}^{\infty} | 2^{\boldsymbol{r}(j)}\Delta_j f |^2 \Big)^{1/2} \Big\|_{L_p(\bR^d,w\,dx)}.
\end{aligned}
\end{equation}
Put $T_j:=\Delta_j$ for all $j\in\bZ$ and
\begin{equation*}
\begin{gathered} f_0:=2^{\boldsymbol{r}(1)}\Delta_1f,\quad f_1:=2^{\boldsymbol{r}(1)}\Delta_1f+2^{\boldsymbol{r}(2)}\Delta_2f\\ f_j:=(2^{\boldsymbol{r}(j-1)}\Delta_{j-1}+2^{\boldsymbol{r}(j)}\Delta_{j}+2^{\boldsymbol{r}(j+1)}\Delta_{j+1})f\quad \text{if } j\geq2,\quad f_j:=0 \quad \text{if } j<0.
\end{gathered}
\end{equation*}
Considering the almost orthogonal property of $\Delta_j$ and Remark \ref{rem RdF vv}, we have
$$
\Big\| S\Big( \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big) \Big\|_{L_p(\bR^d,w\,dx)}
=\Big\| \Big( \sum_{j\in\bZ}|T_jf_j|^2 \Big)^{1/2} \Big\|_{L_p(\bR^d,w\,dx)}\leq N\Big\| \Big( \sum_{j=1}^{\infty} | 2^{\boldsymbol{r}(j)}\Delta_j f |^2 \Big)^{1/2} \Big\|_{L_p(\bR^d,w\,dx)}.
$$
For the converse of \eqref{ineq 220718 1747}, we make use of the Khintchine inequality (\textit{e.g.} see \cite{Haagerup1981}),
\begin{align}\label{ineq 220718 1734}
\Big( \sum_{j=1}^{\infty} \big| 2^{\boldsymbol{r}(j)}\Delta_j f \big|^2 \Big)^{1/2}
\approx \Big( \mathbb{E} \Big|\sum_{j=1}^{\infty} X_j 2^{\boldsymbol{r}(j)}\Delta_j f \Big|^p \Big)^{1/p},\quad p \in (0, \infty)
\end{align}
where the implicit constants depend only on $p$ and $\{X_j\}_{j\in\bN}$ is a sequence of independent and identically distributed random variables with the Rademacher distribution. Let $\Pi_1$ be a Fourier multiplier defined by
$$
\Pi_1(\xi) :=
\frac{ \sum_{j=1}^\infty X_j 2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\xi)}{\cF[\Phi](\xi)+\sum_{j=1}^{\infty} 2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\xi)}.
$$
Recall the notation
$$
\bT_{\Pi_1}f(x):=\cF^{-1}[\Pi_1\cF[f]](x)
$$
and
\begin{align*}
\pi^{\boldsymbol{r}}:=S_0+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j.
\end{align*}
Then the right-hand side of \eqref{ineq 220718 1734} equals
\begin{align*}
\left( \bE \left| \bT_{\Pi_1}(\pi^{\boldsymbol{r}}f) \right|^p \right)^{1/p}.
\end{align*}
It is easy to check that $\Pi_1$ satisfies \eqref{22.05.12.13.14}, that is, $\Pi_1$ is a weighted Mikhlin multiplier.
Thus it follows that
\begin{equation}\label{ineq 220718 1744}
\begin{aligned}
\Big\| \Big( \mathbb{E} \Big|\sum_{j=1}^{\infty} X_j 2^{\boldsymbol{r}(j)}\Delta_j f \Big|^p \Big)^{1/p} \Big\|_{L_p(\bR^d,w\,dx)}
&\leq N\Big( \mathbb{E} \Big\|\sum_{j=1}^{\infty} X_j 2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}^p \Big)^{1/p}\\
&= N\Big( \mathbb{E} \Big\| \bT_{\Pi_1}(\pi^{\boldsymbol{r}}f) \Big\|_{L_p(\bR^d,w\,dx)}^p \Big)^{1/p}
\\
&\leq N [w]_{A_p(\bR^d)}^{\max(1,(p-1)^{-1})}\|\pi^{\boldsymbol{r}} f\|_{L_p(\bR^d,w\,dx)}.
\end{aligned}
\end{equation}
By \eqref{ineq 220718 1747}, \eqref{ineq 220718 1734} and \eqref{ineq 220718 1744}, we conclude that
$$
\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}\approx \|f\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}.
$$
Similarly, we compute $\|\pi^{\boldsymbol{r}}f\|_{B_{p,q}^{\boldsymbol{0}}(\bR^d,w\,dx)}$.
Recall the definition first:
\begin{align*}
\|\pi^{\boldsymbol{r}}f\|_{B_{p,q}^{\boldsymbol{0}}(\bR^d,w\,dx)}
&= \| S_0 \pi^{\boldsymbol{r}}f \|_{L_p(\bR^d,w\,dx)} + \Big( \sum_{j=1}^{\infty} \Big\| \Delta_j \pi^{\boldsymbol{r}}f \Big\|_{L_p(\bR^d,w\,dx)}^q \Big)^{1/q}
\end{align*}
and
\begin{align*}
\|f\|_{B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)}
&= \| S_0 f \|_{L_p(\bR^d,w\,dx)} + \Big( \sum_{j=1}^{\infty} \Big\| 2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}^q \Big)^{1/q}.
\end{align*}
Due to the almost orthogonal property again, it is obvious that
\begin{equation*}
\begin{gathered}
S_0\pi^{\boldsymbol{r}}f=S_0(S_0+2^{\boldsymbol{r}(1)}\Delta_1)f, \\
\Delta_1\pi^{\boldsymbol{r}}f=\Delta_1(S_0+2^{\boldsymbol{r}(1)}\Delta_1+2^{\boldsymbol{r}(2)}\Delta_2)f, \\
\Delta_j \pi^{\boldsymbol{r}}f = \Delta_{j}(2^{\boldsymbol{r}(j-1)} \Delta_{j-1} +2^{\boldsymbol{r}(j)} \Delta_{j} +2^{\boldsymbol{r}(j+1)} \Delta_{j+1}) f,\quad j\geq 2.
\end{gathered}
\end{equation*}
Moreover, setting
\begin{equation*}
\begin{gathered}
\Pi_2:=\frac{\cF[\Phi]+\cF[\Psi](2^{-1}\cdot)}{\cF[\Phi]+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\cdot)},\\
\Pi_3:=\frac{2^{\boldsymbol{r}(1)}(\cF[\Phi]+\cF[\Psi](2^{-1}\cdot)+\cF[\Psi](2^{-2}\cdot))}{\cF[\Phi]+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\cdot)},\\
\Pi_{j+2}:=\frac{2^{\boldsymbol{r}(j)}(\cF[\Psi](2^{-j+1}\cdot)+\cF[\Psi](2^{-j}\cdot)+\cF[\Psi](2^{-j-1}\cdot))}{\cF[\Phi]+\sum_{i=1}^{\infty}2^{\boldsymbol{r}(i)}\cF[\Psi](2^{-i}\cdot)},\quad j\geq 2,
\end{gathered}
\end{equation*}
we have
\begin{equation*}
\begin{gathered}
S_0f=\bT_{\Pi_2}S_0\pi^{\boldsymbol{r}}f,
\quad 2^{\boldsymbol{r}(1)}\Delta_1f=\bT_{\Pi_3}\Delta_1\pi^{\boldsymbol{r}}f,\\
2^{\boldsymbol{r}(j)}\Delta_jf=\bT_{\Pi_{j+2}}\Delta_j \pi^{\boldsymbol{r}}f, \quad j\geq 2.
\end{gathered}
\end{equation*}
Therefore, by Lemma \ref{wbound}, and Proposition \ref{21.02.24.16.49}, we obtain
$$
\|\pi^{\boldsymbol{r}}f\|_{B_{p,q}^{\boldsymbol{0}}(\bR^d,w\,dx)}\approx\|f\|_{B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)}
$$
since all $L_p$-bounds can be chosen uniformly for all $j$ due to the assumption on $\boldsymbol{r}$ \eqref{uniform r assumption}.
The proposition is proved.
\end{proof}
\begin{corollary}
\label{classical sobo}
For $s\in\bR$, put $\boldsymbol{r}(j) = sj$.
Then the space $H_p^{\boldsymbol{r}}(\bR^d, w\,dx)$ is equivalent to $H_p^s(\bR^d, w\,dx)$ whose norm is given by
$$
\| f \|_{H_p^s(\bR^d,w\,dx)} := \| (1 - \Delta)^{s/2} f \|_{L_p(\bR^d,w\,dx)}.
$$
\end{corollary}
\begin{proof}
Recall the notation
$$
\pi^{\boldsymbol{r}}:=S_0+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j.
$$
By Proposition~\ref{22.05.03.11.34}, it follows that $\|f\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)} \approx \|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}$.
Thus it is sufficient to show
\begin{align*}
\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)} \approx \| (1 - \Delta)^{s/2} f \|_{L_p(\bR^d,w\,dx)}.
\end{align*}
Let $\Psi$ be the Schwartz function appearing in the definition of $\Delta_j$.
Define
$$
\cF[\Phi](\xi) = \sum_{j=-\infty}^{0}\cF[\Psi](2^{-j}\xi)
$$
and
$$
\Pi(\xi) := \frac{\cF[\Phi](\xi)+\sum_{j=1}^\infty 2^{sj} \cF[\Psi](2^{-j}\xi)}{(1+|\xi|^2)^{s/2}}.
$$
Then $\Pi$ is a weighted Mikhlin multiplier in the sense of Proposition~\ref{21.02.24.16.49}.
Thus we have
$$
\left\| \pi^{\boldsymbol{r}} f\right\|_{L_p(\bR^d,w\,dx)} = \Big\| \bT_{\Pi} \Bigl( (1-\Delta)^{s/2} f \Bigr)\Big\|_{L_p(\bR^d,w\,dx)} \leq N\| (1-\Delta)^{s/2} f\|_{L_p(\bR^d,w\,dx)}.
$$
For the converse direction, it suffices to observe that
$$
\widetilde{\Pi}(\xi) = \frac{(1+|\xi|^2)^{s/2} }{\cF[\Phi](\xi)+\sum_{j=1}^\infty 2^{sj}\cF[\Psi](2^{-j}\xi)}
$$
becomes a weighted Mikhlin multiplier as well. The corollary is proved.
\end{proof}
\begin{defn}
Let $q\in(0,\infty)$, $\boldsymbol{r}:\bZ\to(-\infty,\infty)$, and $B$ be a Banach lattice.
\begin{enumerate}[(i)]
\item We denote by $B(\ell_2^{\boldsymbol{r}})$ the set of all $B$-valued sequences $x=(x_0,x_1,\cdots)$ such that
$$
\|x\|_{B(\ell_2^{\boldsymbol{r}})}:=\|x_0\|_B+\left\|\Big(\sum_{j=1}^{\infty}2^{2\boldsymbol{r}(j)}|x_j|^2\Big)^{1/2}\right\|_B<\infty,\quad |x|:=x\vee(-x).
$$
\item We denote by $\ell_q^{\boldsymbol{r}}(B)$ the set of all $B$-valued sequences $x=(x_0,x_1,\cdots)$ such that
$$
\|x\|_{\ell_q^{\boldsymbol{r}}(B)}:=\|x_0\|_B+\Big(\sum_{j=1}^{\infty}2^{q\boldsymbol{r}(j)}\|x_j\|_{B}^q\Big)^{1/q}<\infty.
$$
In particular, we use the simpler notation $\ell_q^{\boldsymbol{r}}=\ell_q^{\boldsymbol{r}}(\bR)$.
\end{enumerate}
\end{defn}
\begin{defn}
Let $X$ and $Y$ be quasi-Banach spaces. We say that $X$ is a retract of $Y$ if there exist linear transformations $\frI:X\to Y$ and $\frP:Y\to X$ such that $\frP\frI$ is the identity operator on $X$.
\end{defn}
\begin{lem}
\label{22.04.19.16.55}
Let $p\in(1,\infty)$, $q\in(0,\infty)$, and $w\in A_p(\bR^d)$. Suppose that
\begin{equation}
\label{locsim}
\sup_{j\in\bZ}|\boldsymbol{r}(j+1)-\boldsymbol{r}(j)|=:C_0<\infty.
\end{equation}
\begin{enumerate}[(i)]
\item Then $H_p^{\boldsymbol{r}}(\bR^d,w\,dx)$ is a retract of $L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})$.
\item Then $B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)$ is a retract of $\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))$.
\end{enumerate}
\end{lem}
\begin{proof}
We give a unified proof of (i) and (ii).
Let
$$
(X,Y)=\left(H_p^{\boldsymbol{r}}(\bR^d,w\,dx),L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})\right)
$$
or
$$
(X,Y)=\left(B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx),\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))\right).
$$
Consider two mappings:
\begin{equation*}
f \mapsto \frI(f):=(S_0f,\Delta_1f,\Delta_2f,\cdots)
\end{equation*}
and
\begin{equation*}
\boldsymbol{f}=(f_0,f_1,\cdots)\mapsto \frP(\boldsymbol{f}):=S_0(f_0+f_1)+\sum_{j=1}^{\infty}\sum_{l=-1}^1\Delta_{j}f_{j+l}.
\end{equation*}
By using the fact that $f= S_0 f + \sum_{j=1}^\infty \Delta_j f$,
it is easy to check that $\frP\frI$ is the identity operator on $X$. Moreover,
due to the definitions of the function spaces,
\begin{align}
\label{J norm equivalence}
\|f\|_{X}=\|\frI(f)\|_{Y}.
\end{align}
In other words, the target space of the mapping $f \mapsto \frI(f)$ is $Y$
and this implies that $\frI$ is a linear transformation from $X$ to $Y$ since the linearity of the mapping is obvious.
Now, it only remains to show that $\frP$ is a linear transformation from $Y$ to $X$.
Since the linearity is trivial as before, it is sufficient to show that the target space of the mapping
$\boldsymbol{f}=(f_0,f_1,\cdots)\mapsto \frP(\boldsymbol{f})$ is $X$.
By the almost orthogonality of the Littlewood-Paley projections,
\begin{equation}
\label{22.09.13.16.17}
\begin{gathered}
S_0\frP(\boldsymbol{f})=S_0f_0+S_0f_1+S_0\Delta_1f_{2},\\
\Delta_1\frP(\boldsymbol{f})=\Delta_1(S_0+\Delta_1)f_0+\Delta_1f_1+\Delta_1(\Delta_1+\Delta_2)f_2+\Delta_1\Delta_2f_3,
\end{gathered}
\end{equation}
and
\begin{equation}
\begin{aligned}
\Delta_i\frP(\boldsymbol{f})&=\Delta_i\Delta_{i-1}f_{i-2}+\Delta_i(\Delta_{i-1}+\Delta_i)f_{i-1}+\Delta_if_i+\Delta_i(\Delta_i+\Delta_{i+1})f_{i+1}+\Delta_i\Delta_{i+1}f_{i+2}\\
&=:T_i^1f_{i-2}+T_i^2f_{i-1}+T_i^3f_i+T_i^4f_{i+1}+T_i^5f_{i+2},\quad i\geq2.
\end{aligned}
\end{equation}
The remaining part of the proof differs slightly depending on whether $Y=L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})$ or $Y=\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))$.
\begin{enumerate}[(i)]
\item Due to \eqref{22.09.13.16.17},
$$
\sum_{i=2}^{\infty}2^{2\boldsymbol{r}(i)}|\Delta_i\frP(\boldsymbol{f})|^2\leq N\sum_{k=1}^{5}\cI_k,
$$
where
$$
\cI_k=\sum_{i=2}^{\infty}|T_i^k(2^{\boldsymbol{r}(i)}f_{i+k-3})|^2,\quad k=1,2,3,4,5.
$$
By Lemma \ref{wbound}, the sequence of linear operators $\{T_j^k\}_{j=2}^{\infty}$ satisfies \eqref{22.09.13.16.42} for all $k=1,2,3,4,5$. Using Remark \ref{rem RdF vv} and \eqref{locsim}, we have
$$
\Big\|\Big(\sum_{i=2}^{\infty}2^{2\boldsymbol{r}(i)}|\Delta_i\frP(\boldsymbol{f})|^2\Big)^{1/2}\Big\|_{L_p(\bR^d,w\,dx)}\leq N\|\boldsymbol{f}\|_{L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})}.
$$
This implies that $\frP(\boldsymbol{f})\in H_p^{\boldsymbol{r}}(\bR^d,w\,dx)$.
\item By Lemma \ref{wbound},
\begin{align}
\label{2023012880}
\|S_0\frP(\boldsymbol{f})\|_{L_p(\bR^d,w\,dx)}\leq N \sum_{j=0}^{2}\|f_{j}\|_{L_p(\bR^d,w\,dx)}
\end{align}
and
\begin{align}
\label{2023012881}
\|\Delta_i\frP(\boldsymbol{f})\|_{L_p(\bR^d,w\,dx)}\leq N \sum_{j=-2}^{2}\|f_{i+j}\|_{L_p(\bR^d,w\,dx)},\quad \forall i\in\bN,
\end{align}
where $N$ is independent of $i$ and $f_i=0$ if $i<0$.
Moreover, due to \eqref{locsim},
\begin{equation}
\label{22.04.11.15.09}
2^{\boldsymbol{r}(i)}\|\Delta_i\frP(\boldsymbol{f})\|_{L_p(\bR^d,w\,dx)}\leq N\sum_{j=-2}^{2}2^{\boldsymbol{r}(i+j)}\|f_{i+j}\|_{L_p(\bR^d,w\,dx)}
\end{equation}
and the constant $N$ is independent of $i$.
Finally, recalling the definition of $B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)$ and combining all \eqref{2023012880}, \eqref{2023012881}, and \eqref{22.04.11.15.09}, we have
\begin{equation*}
\begin{aligned}
\|\frP(\boldsymbol{f})\|_{B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)} \leq N\|\boldsymbol{f}\|_{\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))},\quad q\in(0,\infty),
\end{aligned}
\end{equation*}
where $N$ is independent of $\boldsymbol{f}$. Therefore, $\frP$ is a linear transformation from $Y$ to $X$.
\end{enumerate}
The lemma is proved.
\end{proof}
\begin{prop}
\label{22.04.24.20.57}
Let $p\in(1,\infty)$, $q\in(0,\infty)$, $w\in A_p(\bR^d)$, and $\boldsymbol{r}:\bZ\to(-\infty,\infty)$ be a sequence satisfying
$$
\sup_{j\in\bZ}|\boldsymbol{r}(j+1)-\boldsymbol{r}(j)|<\infty.
$$
\begin{enumerate}[(i)]
\item The space $X$ is a quasi-Banach space
\item The closure of $C_c^{\infty}(\bR^d)$ under the quasi-norm $\|\cdot\|_{X}$ is $X$,
\end{enumerate}
where the space $X$ indicates $H_p^{\boldsymbol{r}}(\bR^d,w\,dx)$ or $B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)$.
\end{prop}
\begin{proof}
We put
$$
(X,Y)=\left(H_p^{\boldsymbol{r}}(\bR^d,w\,dx),L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})\right)
$$
or
$$
(X,Y)=\left(B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx),\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))\right)
$$
as in the previous lemma.
\begin{enumerate}[(i)]
\item All properties of a quasi-Banach space are obvious except completeness.
Thus we only prove the completeness. Let $\{f_n\}_{n=1}^{\infty}$ be a Cauchy sequence in $X$.
It is obvious that $Y$ is a Banach space due to the completeness of $L_p$-spaces with general measures.
Therefore, using the equivalence of the norms in \eqref{J norm equivalence}, one can find $\boldsymbol{f}\in Y$ such that
\begin{align*}
\frI(f_n)=(S_0f_n,\Delta_1f_n,\cdots)\to\boldsymbol{f}\quad \text{ in } Y.
\end{align*}
By Lemma \ref{22.04.19.16.55}, $\frP(\boldsymbol{f})\in X$ and
$$
\|f_n-\frP(\boldsymbol{f})\|_{X}\leq N\|\frI(f_n)-\boldsymbol{f}\|_{Y}\to0
$$
as $n\to\infty$. Therefore, $X$ is complete.
\item With the help of Proposition \ref{22.05.03.11.34}, it is sufficient to consider the case that the order is $\boldsymbol{0}$.
Then $\cS(\bR^d)$ is dense in $X$ due to \cite[Theorem 2.4]{qui1982weighted}.
Finally, the result follows from standard truncation and diagonalization arguments.
\end{enumerate}
The proposition is proved.
\end{proof}
\end{document} |
\begin{document}
\title[Higher-order linearization and regularity in homogenization]{Higher-order linearization and regularity in nonlinear homogenization}
\begin{abstract}
We prove large-scale $C^\infty$ regularity for solutions of nonlinear elliptic equations with random coefficients, thereby obtaining a version of the statement of Hilbert's 19th problem in the context of homogenization. The analysis proceeds by iteratively improving three statements together: (i) the regularity of the homogenized Lagrangian~$\overline{L}$, (ii) the commutation of higher-order linearization and homogenization, and (iii) large-scale $C^{0,1}$-type regularity for higher-order linearization errors. We consequently obtain a quantitative estimate on the scaling of linearization errors, a Liouville-type theorem describing the polynomially-growing solutions of the system of higher-order linearized equations, and an explicit (heterogeneous analogue of the) Taylor series for an arbitrary solution of the nonlinear equations---with the remainder term optimally controlled. These results give a complete generalization to the nonlinear setting of the large-scale regularity theory in homogenization for linear elliptic equations.
\end{abstract}
\author[S. Armstrong]{Scott Armstrong}
\address[S. Armstrong]{Courant Institute of Mathematical Sciences, New York University, 251 Mercer St., New York, NY 10012}
\email{scotta@cims.nyu.edu}
\author[S. J. Ferguson]{Samuel J. Ferguson}
\address[S. J. Ferguson]{Courant Institute of Mathematical Sciences, New York University, 251 Mercer St., New York, NY 10012}
\email{sjf370@nyu.edu}
\author[T. Kuusi]{Tuomo Kuusi}
\address[T. Kuusi]{Department of Mathematics and Statistics, P.O. Box 68 (Gustaf H\"allstr\"omin katu 2), FI-00014 University of Helsinki, Finland}
\email{tuomo.kuusi@helsinki.fi}
\keywords{stochastic homogenization, large-scale regularity, nonlinear elliptic equation, linearized equation, commutativity of homogenization and linearization}
\subjclass[2010]{35B27, 35B45, 60K37, 60F05}
\date{\today}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
\subsection{Motivation: quantitative homogenization for nonlinear equations}
This article is concerned with nonlinear, divergence-form, uniformly elliptic equations of the form
\begin{equation}
\label{e.pde}
- \nabla \cdot \left( D_p L(\nabla u(x),x ) \right) = 0 \quad \mbox{in} \ U \subseteq \mathbb{R}^d, \quad d\geq 2.
\end{equation}
The Lagrangian~$L(p,x)$ is assumed to be uniformly convex and regular in~$p$ and possess some mild regularity in~$x$. Furthermore,~$L$ is a stochastic object: it is sampled by a probability measure~$\mathbb{P}$ which is statistically stationary and satisfies a unit range of dependence. This essentially means that $x\mapsto L(\cdot,x)$ is a random field, valued in the space of uniformly convex functions, the law of which is independent of~$x$ (or, to be precise, periodic in~$x$; see Subsection~\ref{ss.assumptions} for the assumptions).
\smallskip
The objective is to describe the statistical behavior of the solutions of~\eqref{e.pde}, with respect to the probability measure~$\mathbb{P}$, on \emph{large length scales}. In other words, we want to understand what the solution~$u$ looks like in the case that the ``macroscopic'' domain $U$ is very large relative to ``microscopic'' scale, which is the correlation length scale of the coefficients (taken to the unit scale).
\smallskip
At a qualitative level, a satisfactory characterization of the solutions, in the regime in which the ratio of these two length scales is large, is provided by the principle of homogenization. First proved in this context by Dal Maso and Modica~\cite{DM1,DM2}, it asserts roughly that a solution of~\eqref{e.pde} is, with probability approaching one, close in~$L^2$ (relative to its size in~$L^2$) to a solution of a deterministic equation of the form
\begin{equation}
\label{e.pde.homog}
-\nabla \cdot \left( D_p\overline{L} \left( \nabla {u_{\mathrm{hom}}} \right) \right)
= 0 \quad \mbox{in} \ U,
\end{equation}
for an \emph{effective Lagrangian}~$\overline{L}$ which is also uniformly convex and~$C^{1,1}$.
\smallskip
This result is of great importance, from both the theoretical and computational points of view, since the complexity of the homogenized equation~\eqref{e.pde.homog} is significantly less than that of~\eqref{e.pde}: it is both deterministic and spatially homogeneous. It suggests that solutions of~\eqref{e.pde} should, on large domains and with high probability, inherit some of the structure of a constant-coefficient equation, and thus be more amenable to analysis than those of a worst-case heterogeneous equation of the form~\eqref{e.pde}. In other words, since the Lagrangian~$L$ is sampled by a probability measure~$\mathbb{P}$ with nice ergodic properties, rather than given to us by the devil, can we expect its solutions to have a nicer structure? In order to answer this kind of question, we need to build a \emph{quantitative} theory of homogenization.
\smallskip
To be of practical use, the principle of homogenization needs to be made quantitative. We need to have answers to questions such as these:
\begin{itemize}
\item How large does the ratio of scale separation need to be before we can be reasonably sure that solutions of~\eqref{e.pde} are close to those of~\eqref{e.pde.homog}? In other words, what is the size of a typical error in the homogenization approximation in terms of the size of $U$?
\item Can we estimate the probability of the unlikely event that the error is large?
\item What is~$D_p\overline{L}$ and how can we efficiently compute it? How regular can we expect it to be? Can we efficiently compute its derivatives?
\item Can we describe the fluctuations of the solutions?
\end{itemize}
In this paper we show that~\eqref{e.pde} has a $C^\infty$ structure. In particular, we will essentially answer the third question posed above by demonstrating that the effective Lagrangian~$\overline{L}$ is as regular as~$L(\cdot,x)$, with estimates for its derivatives. We will identify the higher derivatives of $\overline{L}$ as the homogenized coefficients of certain \emph{linearized} equations and give quantitative homogenization estimates for these, implicitly indicating a computational method for approximating them and thus a Taylor approximation for~$\overline{L}$. Finally, we will prove large-scale~$C^{k,1}$-type estimates for solutions of~\eqref{e.pde}, for~$k\in\mathbb{N}$ as large as can be expected from the regularity assumptions on~$L$, a result analogous to Hilbert's 19th problem, famously resolved for spatially homogeneous Lagrangians by De Giorgi and Nash. Our analysis reveals the interplay between these three seemingly different kinds of results: (i) the regularity of $\overline{L}$, (ii) the homogenization of linearized equations, and (iii) the large-scale regularity of the solutions.
In analogy to the way that the Schauder estimates are iterated in the resolution of Hilbert's 19th problem, these three statements must be proved together, iteratively in the parameter~$k\in\mathbb{N}$, which represents the degree of regularity of~$\overline{L}$, the order of the linearized equation, and the order of the $C^{k,1}$ estimate.
\mathbf{m}athbf{s}ubsection{Background: large-scale regularity theory and its crucial role in quantitative homogenization}
In the last decade, beginning with the work of Gloria and Otto~\cite{GO1,GNO1},
a quantitative theory of homogenization has been developed to give precise answers to questions like the ones stated in the previous subsection. Until now, most of the progress has come in the case of \emph{linear} equations
\begin{equation}
\label{e.pde.lin}
-\nabla \cdot \mathbf{a}(x) \nabla u = 0,
\end{equation}
which corresponds to the special case~$L(p,x) = \frac12 p\cdot \mathbf{a}(x)p$ of~\eqref{e.pde}, where~$\mathbf{a}(x)$ is a symmetric matrix. By now there is an essentially complete quantitative theory for linear equations, and we refer to the monograph~\cite{AKMbook} and the references therein for a comprehensive presentation of this theory. Quantitative homogenization for the nonlinear equation~\eqref{e.pde} has a comparatively sparse literature; in fact, the only such results of which we are aware are those of~\cite{AS,AM} (see also~\cite[Chapter 11]{AKMbook}), our previous paper~\cite{AFK} and a new paper of Fischer and Neukamm~\cite{FN} which was posted to the arXiv as we were finishing the present article.
\smallskip
Quantitative homogenization is inextricably linked to regularity estimates on the solutions of the heterogeneous equation. This is not surprising when we consider that the homogenized flux~$D_p\overline{L}(\nabla u_{\mathrm{hom}})$ should be related to the spatial average (say, on some mesoscopic scale) of the heterogeneous flux $D_pL(\nabla u(x),x)$. In order for spatial averages of the latter to converge nicely, we need to have bounds. It could be unfortunate, and lead to a very slow rate of homogenization, if, for instance, the flux were concentrated on sets of very small measure which percolate only on very large scales. To rule this out we need good estimates: ideally, we would like to know that the size of the flux on small scales is the same as on large scales, which amounts to a~$W^{1,\infty}$ estimate on solutions.
\smallskip
Unfortunately, solutions of equations with highly oscillating coefficients do not possess very strong regularity, in general. Indeed, the best deterministic elliptic regularity estimate for solutions of~\eqref{e.pde}, which does not degenerate as the size of the domain~$U$ becomes large, is~$C^{0,\delta}$ in terms of H\"older regularity (the De Giorgi-Nash estimate) and~$W^{1,2+\delta}$ in terms of Sobolev regularity (the Meyers estimate). The tiny exponent $\delta>0$ in each estimate becomes small as the ellipticity ratio becomes large (see~\cite[Example 3.1]{AKMbook}) and thus both estimates are far short of the desired regularity class~$W^{1,\infty} = C^{0,1}$.
\smallskip
One of the main insights in the quantitative theory of homogenization
is that, compared to a generic (``worst-case'')~$L$, solutions of the equation~\eqref{e.pde} have much better regularity if~$L$ is sampled by an ergodic probability measure~$\mathbb{P}$. This is an effect of homogenization itself: on large scales,~\eqref{e.pde} should be a ``small perturbation'' of~\eqref{e.pde.homog}, and therefore better regularity estimates for the former can be inherited from the latter. This is the same idea used to prove the classical Schauder estimates. In the context of homogenization, the result states that there exists a random variable~$\mathcal{X}$ (sometimes called the minimal scale), finite almost surely, such that for every $\mathcal{X} < r < \frac12 R$ and every solution $u\in H^1(B_R)$ of~\eqref{e.pde} with $U=B_R$, we have the estimate
\begin{equation}
\label{e.C01estintro}
\fint_{B_r} \left| \nabla u \right|^2
\leq
C \left( 1+ \fint_{B_R} \left| \nabla u \right|^2 \right).
\end{equation}
Here~$C$ depends only on the dimension and the ellipticity, and~$\fint_U := \frac{1}{|U|}\int_U$ denotes the mean of a function on~$U$. If we could send $r\to 0$ in~\eqref{e.C01estintro}, it would imply that
\begin{equation*}
\left| \nabla u(0) \right|^2
\leq
C \left( 1+ \fint_{B_R} \left| \nabla u \right|^2 \right),
\end{equation*}
which is a true Lipschitz estimate, in fact the same estimate as holds for the homogenized equation~\eqref{e.pde.homog}. As~\eqref{e.C01estintro} is valid only for $r>\mathcal{X}$, it is sometimes called a ``large-scale $C^{0,1}$ estimate'' or a ``Lipschitz estimate down to the microscopic scale.'' This estimate, first demonstrated in~\cite{AS} in the stochastic setting, is a generalization of the celebrated result in the case of (non-random) periodic coefficients due to Avellaneda and Lin~\cite{AL1}. Of course, it then becomes very important to quantify the size of~$\mathcal{X}$. The estimate proved in~\cite{AS}, which is essentially optimal, states that $\mathcal{X}$ is bounded up to ``almost volume-order large deviations'': for every $s<d$ and $r\geq 1$,
\begin{equation}
\label{e.SI.X}
\mathbb{P} \left[ \mathcal{X} > r \right] \leq C\exp\left( -cr^s \right).
\end{equation}
Here the constant~$C$ depends only on~$s$,~$d$, and the ellipticity. A proof of this large-scale regularity estimate together with~\eqref{e.SI.X} can be found in~\cite[Chapter 3]{AKMbook} in the linear case and in~\cite[Chapter 11]{AKMbook} for the nonlinear case. The right side of~\eqref{e.SI.X} represents the probability of the unlikely event that the~$L$ sampled by~$\mathbb{P}$ will be a ``worst-case''~$L$ in the ball of radius~$r$. A proof of the optimality of~\eqref{e.SI.X} can be found in~\cite[Section 3.6]{AKMbook}.
\smallskip
This large-scale regularity theory introduced in~\cite{AS} was further developed in the case of~\eqref{e.pde.lin} in~\cite{GNO2,AM,FO,AKM1,AKM} and now plays an essential role in the quantitative theory of stochastic homogenization. Whether one employs functional inequalities~\cite{GNO2,DGO} or renormalization arguments~\cite{AKM1,AKM,GO6}, it is a crucial ingredient in the proof of the optimal error estimates in homogenization for~\eqref{e.pde.lin}: see the monograph~\cite{AKMbook} and the references therein for a complete presentation of these developments.
\smallskip
The large-scale~$C^{0,1}$ estimate is, from one point of view, the best regularity one can expect solutions of~\eqref{e.pde} or~\eqref{e.pde.lin} to satisfy: since the coefficients are rapidly oscillating, there is no hope for the gradient to exhibit continuity on the macroscopic scale. However, as previously shown in the periodic case in~\cite{AL1,AL2}, the solutions of the linear equation~\eqref{e.pde.lin} still have a $C^\infty$ structure. To explain what we mean, let us first think of an (interior) $C^{k,1}$ estimate not as a pointwise bound on the $(k+1)$th derivatives of a function, but as an estimate of how well a function may be approximated on small balls by a $k$th-order polynomial. By Taylor's theorem, these are of course equivalent in the following sense:
\begin{equation*}
\left| \nabla^{k+1} u(0) \right|
\simeq
\limsup_{r\to 0}
\frac1{r^{k+1}}
\inf_{p\in\mathcal{P}_k}
\left\| u - p \right\|_{\underline{L}^2(B_r)},
\end{equation*}
where $\mathcal{P}_k$ denotes the set of polynomials of degree at most~$k$ and we use the notation $\| w \|_{\underline{L}^2(U)}:= \left( \fint_U |w|^2 \right)^{\frac12}$ for the volume-normalized~$L^2(U)$ norm. Thus the interior $C^{k,1}$ estimate for a harmonic function can be stated in the form: for any harmonic function $u$ in $B_R$ and any $r\in \left(0,\tfrac12R\right]$,
\begin{equation}
\label{e.Ck1harm}
\inf_{p\in \mathcal{P}_k}
\left\| u - p\right\|_{\underline{L}^2(B_r)}
\leq
C \left( \frac{r}{R} \right)^{k+1}
\inf_{p\in \mathcal{P}_k}
\left\| u - p\right\|_{\underline{L}^2(B_R)}.
\end{equation}
Moreover, the infimum on the left side may be replaced by the set of \emph{harmonic} polynomials of degree at most~$k$.
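As a sanity check, \eqref{e.Ck1harm} follows from Taylor's theorem combined with the interior derivative estimates for harmonic functions. The following sketch is our own illustration (with $T_k u$ the $k$th-order Taylor polynomial of $u$ at the origin and $C$ changing from line to line): for $r\leq \tfrac14 R$,

```latex
\begin{align*}
\inf_{p\in\mathcal{P}_k} \left\| u - p \right\|_{\underline{L}^2(B_r)}
&\leq \left\| u - T_k u \right\|_{\underline{L}^2(B_r)}
\leq C r^{k+1} \sup_{B_r} \left| \nabla^{k+1} u \right| \\
&\leq C r^{k+1} \cdot \frac{1}{R^{k+1}} \left\| u \right\|_{\underline{L}^2(B_R)}
= C \left( \frac{r}{R} \right)^{k+1} \left\| u \right\|_{\underline{L}^2(B_R)},
\end{align*}
```

where the second line uses the interior estimate $\sup_{B_{R/4}} |\nabla^{k+1} u| \leq C R^{-(k+1)} \| u \|_{\underline{L}^2(B_R)}$. Applying this chain of inequalities to $u-q$, where $q$ is the minimizing polynomial on $B_R$ (note that $u-q$ is again harmonic), yields \eqref{e.Ck1harm}.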
\smallskip
As we cannot expect a solution of~\eqref{e.pde} to have regularity beyond~$C^{0,1}$ in a classical (pointwise) sense, in order to make sense of a~$C^{k,1}$ estimate for the heterogeneous equation~\eqref{e.pde.lin} we need to replace the set of polynomials by a heterogeneous analogue. The classical Liouville theorem says that the set of harmonic functions which grow like $o(|x|^{k+1})$ is just the set of harmonic polynomials of degree at most~$k$. This suggests that we should use the (random) vector space
\begin{equation*}
\mathcal{A}_k:=
\left\{
u\in H^1_{\mathrm{loc}}(\mathbb{R}^d)\,:\, -\nabla \cdot \mathbf{a}\nabla u = 0, \ \limsup_{r\to \infty} r^{-(k+1)} \left\| u \right\|_{\underline{L}^2(B_r)} = 0
\right\}.
\end{equation*}
We think of these as~``$\mathbf{a}(x)$-harmonic polynomials.''
It turns out that,~$\mathbb{P}$--almost surely, this set is finite-dimensional and has the same dimension as the set of harmonic polynomials of degree at most~$k$. In fact, one can match any~$\mathbf{a}(x)$-harmonic polynomial to an ${\overbracket[1pt][-1pt]{\mathbf{a}}}$-harmonic polynomial in the highest degree, and vice versa. In close analogy to~\eqref{e.Ck1harm}, the statement of large-scale $C^{k,1}$ regularity is then as follows: there exists a minimal scale~$\mathcal{X}$ satisfying~\eqref{e.SI.X} such that, for any $R > 2\mathcal{X}$ and any solution $u\in H^1(B_R)$ of $-\nabla \cdot \mathbf{a}\nabla u =0$, we can find $\phi \in \mathcal{A}_k$ such that, for every $r\in \left[\mathcal{X},\tfrac12 R \right]$,
\begin{equation}
\label{e.Ck1introd.lin}
\left\| u - \phi\right\|_{\underline{L}^2(B_r)}
\leq
C \left( \frac{r}{R} \right)^{k+1}
\inf_{\phi \in \mathcal{A}_k}
\left\| u - \phi \right\|_{\underline{L}^2(B_R)}.
\end{equation}
See~\cite[Theorem 3.8]{AKMbook} for the full statement, which was first proved in the periodic setting by Avellaneda and Lin~\cite{AL4}. Subsequent versions of this result, which are based on the ideas of~\cite{AL1,AL4} in their more quantitative formulation given in~\cite{AS}, were proved in various works~\cite{GNO2,FO,AKM1}, with the full statement here given in~\cite{AKM,BGO}.
\smallskip
In all of its various forms, higher regularity in stochastic homogenization is based on the simple idea that solutions of the heterogeneous equation should be close to those of the homogenized equation, which have much better regularity. In the case of the linear equation~\eqref{e.pde.lin}, it is not a large leap, technically or philosophically, to go from~\eqref{e.C01estintro} to~\eqref{e.Ck1introd.lin}.
Indeed, to gain control over higher derivatives, one just needs to differentiate the equation (not in the microscopic parameters, of course, but in macroscopic ones) and, luckily, since the equation is linear, this does not change the equation. Roughly speaking, the idea is analogous to bootstrapping the regularity of a constant-coefficient, linear equation by differentiating it. Therefore the estimate~\eqref{e.Ck1introd.lin} is perhaps not too surprising once the large-scale~$C^{0,1}$ estimate is in hand.
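To make this remark concrete, here is a minimal sketch in our notation (the smooth one-parameter family $\{u_t\}$ of solutions is a hypothetical device, not an object from the text): differentiating the linear equation in a macroscopic parameter $t$ returns the same equation, whereas differentiating the nonlinear equation does not.

```latex
% Linear case: \partial_t u_t solves the same equation as u_t:
-\nabla\cdot \mathbf{a}(x)\nabla u_t = 0
\quad\Longrightarrow\quad
-\nabla\cdot \mathbf{a}(x)\nabla \left( \partial_t u_t \right) = 0.
% Nonlinear case: \partial_t u_t solves a *different* (linearized) equation:
-\nabla\cdot D_pL(\nabla u_t,x) = 0
\quad\Longrightarrow\quad
-\nabla\cdot \left( D^2_pL(\nabla u_t,x) \nabla \left( \partial_t u_t \right) \right) = 0.
```

It is precisely this second implication that generates the hierarchy of linearized equations studied in the next subsection.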
\subsection{Summary of the results proved in this paper}
\label{ss.naive}
The situation is very different in the nonlinear case. When one differentiates the equation (again, in a macroscopic parameter), we get a new equation, namely the first-order linearized equation. If we want to apply a large-scale regularity result to this equation, we must first (quantitatively) homogenize it! Achieving higher-order regularity estimates requires repeatedly differentiating the equation, which leads to a hierarchy of linearized equations requiring homogenization estimates.
\smallskip
Let us be a bit more explicit. The gradient of the homogenized Lagrangian~$\overline{L}$ is given by the well-known formula
\begin{align}
\label{e.DpLbar.form}
D_p\overline{L}(p)
&
= \mathbb{E} \left[ \int_{[0,1]^d} D_pL\left( p+ \nabla \phi_p(x),x\right) \,dx \right]
\\ & \notag
= \lim_{r\to \infty} \fint_{B_r} D_pL\left( p+ \nabla \phi_p(x),x\right) \,dx,
\end{align}
where~$\phi_p$ is the first-order corrector with slope~$p\in\mathbb{R}^d$, that is, it satisfies
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot D_pL(p+\nabla \phi_p(x),x) = 0 \quad \mbox{in} \ \mathbb{R}^d, \\
&
\nabla \phi_p \quad \mbox{is $\mathbb{Z}^d$--stationary,} \quad \mathbb{E} \left[ \int_{[0,1]^d} \nabla \phi_p(x) \,dx \right]=0.
\end{aligned}
\right.
\end{equation*}
The limit in the second line of~\eqref{e.DpLbar.form} is to be understood in a~$\mathbb{P}$--almost sure sense, and it is a consequence of the ergodic theorem, which states that macroscopic averages of stationary fields must converge to their expectations.
The formula~\eqref{e.DpLbar.form} says that~$D_p\overline{L}(p)$ is the flux per unit volume of the first-order corrector with slope~$p\in\mathbb{R}^d$. It arises naturally when we homogenize the nonlinear equation. We can try to show that $\overline{L}\in C^2$ by formally differentiating~\eqref{e.DpLbar.form}, which leads to the expression
\begin{equation*}
D_p \partial_{p_i} \overline{L}(p)
=
\mathbb{E} \left[ \int_{[0,1]^d} D_p^2L\left( p+ \nabla \phi_p(x),x\right) \left(e_i + \nabla \left(\partial_{p_i}\phi_p(x) \right) \right) \,dx \right].
\end{equation*}
If we define the linearized coefficients around~$\ell_p + \phi_p$ by
\begin{equation*}
\mathbf{a}_p (x) :=D_p^2L\left( p+ \nabla \phi_p(x),x\right)
\end{equation*}
and put $\psi^{(1)}_{p,e_i}:= \partial_{p_i}\phi_p$, then we see that $\psi^{(1)}_{p,e_i}$ is the first-order corrector with slope $e_i$ of the linear equation with coefficients $\mathbf{a}_p$:
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \mathbf{a}_p\left( e_i + \nabla \psi^{(1)}_{p,e_i} \right) = 0 \quad \mbox{in} \ \mathbb{R}^d, \\
&
\nabla \psi^{(1)}_{p,e_i} \quad \mbox{is $\mathbb{Z}^d$--stationary,} \quad \mathbb{E} \left[ \int_{[0,1]^d} \nabla \psi^{(1)}_{p,e_i}(x) \,dx \right]=0.
\end{aligned}
\right.
\end{equation*}
We call $\psi^{(1)}_{p,e}$ a \emph{first-order linearized corrector}.
Moreover, we have the formula
\begin{equation*}
D_p \partial_{p_i} \overline{L}(p)
=
\mathbb{E} \left[ \int_{[0,1]^d} \mathbf{a}_p(x) \left(e_i + \nabla \psi^{(1)}_{p,e_i} (x) \right) \,dx \right] = {\overbracket[1pt][-1pt]{\mathbf{a}}}_p e_i.
\end{equation*}
That is, ``linearization and homogenization commute'': the Hessian of $\overline{L}$ at~$p$ should be equal to the homogenized coefficients~${\overbracket[1pt][-1pt]{\mathbf{a}}}_p$ corresponding to the linear equation with coefficient field~$\mathbf{a}_p=D^2_pL(p+\nabla \phi_p(\cdot),\cdot)$. This reasoning is only formal, but since the right side of the above formula for $D^2_p\overline{L}$ is well-defined (and requires only qualitative homogenization), we should expect it to be rather easy to confirm rigorously. Moreover, while quantitative homogenization of the original nonlinear equation gives us a $C^{0,1}$ estimate, we should expect quantitative homogenization of the linearized equation to give us a $C^{0,1}$ estimate for \emph{differences of solutions} and a large-scale $C^{1,1}$ estimate for solutions. This is indeed the case and was proved in our previous paper~\cite{AFK}, where we were motivated by the goal of obtaining this regularity estimate for differences of solutions in anticipation of its important role in the proof of optimal quantitative homogenization estimates. Indeed,
in the very recent preprint~\cite{FN}, Fischer and Neukamm showed that this estimate can be combined with spectral gap-type assumptions on the probability measure to obtain quantitative bounds on the first-order correctors which are optimal in the scaling of the error.
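A classical one-dimensional computation, included here purely for illustration (it is not taken from the text above), makes the formula~\eqref{e.DpLbar.form} and the smoothness of~$\overline{L}$ completely explicit. For $d=1$ and $L(p,x)=\frac12 a(x)p^2$ with $a$ stationary and $1\leq a\leq \Lambda$, the corrector equation can be integrated by hand: the flux is constant,

```latex
a(x)\left( p + \phi_p'(x) \right) = c(p),
\qquad\mbox{so}\qquad
D_p\overline{L}(p) = c(p)
= \left( \mathbb{E}\left[ \int_{[0,1]} \frac{dx}{a(x)} \right] \right)^{-1} p,
```

where the formula for $c(p)$ follows by dividing by $a$ and using $\mathbb{E}\left[\int_{[0,1]} \phi_p'\right]=0$. Thus $\overline{L}(p) = \frac12 \bar a\, p^2$ with $\bar a$ the harmonic mean of $a$: here $\overline{L}$ is a smooth (quadratic) function of $p$, and $D^2\overline{L}=\bar a = {\overbracket[1pt][-1pt]{\mathbf{a}}}_p$, consistent with the commutation of linearization and homogenization.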
\smallskip
We may attempt to differentiate the formula for the homogenized Lagrangian a second time, with the ambition of obtaining a $C^3$ estimate for $\overline{L}$, a $C^{2,1}$ estimate for solutions and a higher-order improvement of our $C^{0,1}$ estimate for differences (which will be a~$C^{0,1}$ estimate for \emph{linearization errors}): we get
\begin{align*}
D_p \partial_{p_i} \partial_{p_j} \overline{L}(p)
&
=
\mathbb{E} \left[ \int_{[0,1]^d} \mathbf{a}_p (x)\nabla \psi^{(2)}_{p,e_i,e_j} (x) \,dx \right]
\\ & \quad
+ \mathbb{E} \left[ \int_{[0,1]^d} D^3_pL(p+\nabla \phi_p(x),x) \left( e_i + \nabla \psi^{(1)}_{p,e_i} \right) \left( e_j + \nabla \psi^{(1)}_{p,e_j} \right)
\,dx \right].
\end{align*}
If we define the vector field
\begin{equation*}
\mathbf{F}_{2,p,e_i,e_j} (x)
:=
D^3_pL(p+\nabla \phi_p(x),x) \left( e_i + \nabla \psi^{(1)}_{p,e_i} \right) \left( e_j + \nabla \psi^{(1)}_{p,e_j} \right),
\end{equation*}
then we see that $\psi^{(2)}_{p,e_i,e_j}$ is the first-order corrector with slope zero of the linear equation
\begin{equation}
\label{e.secondorderlin}
-\nabla \cdot \mathbf{a}_p \nabla \psi^{(2)}_{p,e_i,e_j} = \nabla \cdot \mathbf{F}_{2,p,e_i,e_j} \quad \mbox{in} \ \mathbb{R}^d,
\end{equation}
and the formula for the tensor $D^3_p\overline{L}$ becomes
\begin{equation}
\label{e.thirdorderdervLbar}
D_p \partial_{p_i} \partial_{p_j} \overline{L}(p)
=
\mathbb{E} \left[ \int_{[0,1]^d} \mathbf{a}_p (x)\nabla \psi^{(2)}_{p,e_i,e_j} (x) +\mathbf{F}_{2,p,e_i,e_j} (x) \,dx \right] = \overline{\mathbf{F}}_{2,p,e_i,e_j},
\end{equation}
the corresponding homogenized coefficient. Unlike the case of the Hessian of $\overline{L}$, we should \emph{not} expect this formula to be valid under qualitative ergodic assumptions! Indeed, the qualitative homogenization of~\eqref{e.secondorderlin}, and thus the validity of~\eqref{e.thirdorderdervLbar}, requires that $\mathbf{F}_{2,p,e_i,e_j}$ belong to~$L^2$, in the sense that $\mathbb{E} \left[ \int_{[0,1]^d} \left| \mathbf{F}_{2,p,e_i,e_j} (x) \right|^2\,dx \right] < \infty$, and due to the product of two first-order correctors we only have $L^1$--type integrability\footnote{To be pedantic, we actually have $L^{1+\delta}$-type integrability for a tiny $\delta>0$ by the Meyers estimate, but this does not help.} for $\mathbf{F}_{2,p,e_i,e_j}$. This is a serious problem which can only be fixed using the large-scale regularity theory for the first-order linearized equation, together with a suitable bound on the minimal scale~$\mathcal{X}$, thereby obtaining a bound in $L^\infty(\cu_0)$, and hence in $L^4(\cu_0)$, for~$\nabla \psi^{(1)}_{p,e_i}$, with at least a fourth moment in expectation. Note that this also requires some regularity of the Lagrangian $L$ on the smallest scale.
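The role of the fourth moment can be seen from a Cauchy-Schwarz computation (our sketch; it assumes the fourth-moment bound just described and uses that $|D^3_pL|\leq \mathsf{K}_0$, which follows from (L1)):

```latex
\mathbb{E}\left[ \int_{[0,1]^d} \left| \mathbf{F}_{2,p,e_i,e_j}(x) \right|^2 dx \right]
\leq \mathsf{K}_0^2 \,
\mathbb{E}\left[ \int_{[0,1]^d}
\left| e_i + \nabla\psi^{(1)}_{p,e_i} \right|^2
\left| e_j + \nabla\psi^{(1)}_{p,e_j} \right|^2 dx \right]
\leq \mathsf{K}_0^2 \prod_{l\in\{i,j\}}
\mathbb{E}\left[ \int_{[0,1]^d} \left| e_l + \nabla\psi^{(1)}_{p,e_l} \right|^4 dx \right]^{\frac12}.
```

Thus fourth moments of $\nabla\psi^{(1)}$ are exactly what is needed to place $\mathbf{F}_{2,p,e_i,e_j}$ in $L^2$, while the a priori (Meyers) integrability falls just short of even the second moments appearing in the middle expression.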
\smallskip
If we differentiate the equation once more in an effort to prove that $\overline{L}\in C^4$, we will be faced with similar difficulties, this time with the more complicated vector field
\begin{align*}
\mathbf{F}_{3,p,e_i,e_j,e_k}
(x) &
:= D^3_pL(p+\nabla \phi_p(x),x) \left( e_i + \nabla \psi^{(1)}_{p,e_i} \right) \nabla \psi^{(2)}_{p,e_j,e_k} \\
& \qquad
+ D^3_pL(p+\nabla \phi_p(x),x) \left( e_j + \nabla \psi^{(1)}_{p,e_j} \right) \nabla \psi^{(2)}_{p,e_i,e_k} \\
& \qquad
+D^3_pL(p+\nabla \phi_p(x),x) \left( e_k + \nabla \psi^{(1)}_{p,e_k} \right) \nabla \psi^{(2)}_{p,e_i,e_j} \\
& \qquad
+ D^4_pL(p+\nabla \phi_p(x),x)
\left( e_i+ \nabla \psi^{(1)}_{p,e_i} \right)
\left( e_j + \nabla \psi^{(1)}_{p,e_j} \right)
\left( e_k + \nabla \psi^{(1)}_{p,e_k} \right).
\end{align*}
Notice that the last term has three factors of first-order linearized correctors instead of two, and is thus ``even further'' from being obviously $L^2$ than was~$\mathbf{F}_{2,p,e_i,e_j}$. Homogenizing the third-order linearized equation will therefore require large-scale regularity estimates for both the first-order and second-order linearized equations, and one can see that the situation will only get worse as the order increases beyond three. Moreover, proving quantitative homogenization for these equations will also require some smoothness of the homogenized coefficients associated to the lower-order equations, since quantitative two-scale expansion arguments require the homogenized solutions to be smooth.
\smallskip
This suggests a bootstrap argument for progressively and simultaneously obtaining (i) the smoothness of $\overline{L}$; (ii) the higher-order large-scale regularity of solutions (and of solutions of the linearized equations); and (iii) the homogenization of the higher-order linearized equations and the commutation of homogenization and higher-order linearization. The point of this paper is to formalize this idea and thereby give a proof of ``Hilbert's 19th problem for homogenization.'' Here is a rough schematic of the argument, which comes in three distinct steps, discussed in more detail below:
\begin{itemize}
\item Homogenization \& large-scale $C^{0,1}$ regularity for the linearized equations up to order $\mathsf{N}$ $\implies$ $\overline{L} \in C^{2+\mathsf{N}}$.
\item $\overline{L} \in C^{2+\mathsf{N}}$ and large-scale $C^{0,1}$ regularity for the linearized equations up to order $\mathsf{N}$
$\implies$ homogenization for the linearized equations up to order $\mathsf{N}+1$.
\item $\overline{L} \in C^{2+\mathsf{N}}$, large-scale $C^{0,1}$ regularity for the linearized equations up to order $\mathsf{N}$ and homogenization for the linearized equations up to order $\mathsf{N}+1$
$\implies$ large-scale $C^{0,1}$ regularity for the linearized equations up to order $\mathsf{N}+1$.
\end{itemize}
The three implications above are the focus of most of the paper and their proofs are given in Sections~\ref{s.regLbar}--\ref{s.reglinerrors}.
\smallskip
Once this induction argument is completed, we consequently obtain a full $C^{k,1}$--type large-scale regularity estimate for solutions of the original nonlinear equation, generalizing~\eqref{e.Ck1introd.lin}. The main question becomes what the replacement for $\mathcal{A}_k$ should be, that is, what the ``polynomials'' should be. We show that these are certain solutions of the system of linearized equations (linearized around a first-order corrector) exhibiting polynomial-type growth, which we classify by providing a Liouville-type result that is part of the statement of the theorem (see the discussion between the statements of Theorems~\ref{t.C11estimate} and~\ref{t.regularityhigher}, below, for a definition of these spaces, which are denoted by~$\mathsf{W}_n^p$). The resulting theorem, which is a version of the statement of Hilbert's 19th problem in the context of homogenization, provides a very precise description of the solutions of~\eqref{e.pde} in terms of the first-order correctors and the correctors of a hierarchy of linear equations.
\smallskip
We also obtain, as a corollary, an improvement of the scaling of linearization errors, which is closely related to the regularity of solutions. To motivate this result, suppose we are given two solutions $u,v\in H^1(B_R)$ of~\eqref{e.pde} in a large ball ($R\gg 1$) which are close to each other in the sense that
\begin{equation}
\left\| \nabla u - \nabla v \right\|_{L^2(B_R)}
\ll \left\| \nabla u \right\|_{L^2(B_R)}.
\end{equation}
Suppose that we attempt to approximate the difference $u-v$ by the solution~$w\in H^1(B_R)$ of the linearized problem
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D^2_pL(\nabla u,x) \nabla w \right) = 0 & \mbox{in} & \ B_R, \\
& w = v-u & \mbox{on} & \ \partial B_R.
\end{aligned}
\right.
\end{equation*}
Then we may ask how small we should expect the first-order linearization error to be. The best answer available from deterministic elliptic regularity estimates is that there exists a small exponent $\delta(d,\Lambda)>0$ such that
\begin{equation}
\label{e.linerrorintro}
\frac{\left\| u - v - w\right\|_{L^2(B_{R})}}{\left\| u \right\|_{L^2(B_R)}}
\leq
C\left(
\frac{\left\| u - v \right\|_{L^2(B_{R})}}{\left\| u \right\|_{L^2(B_R)}}
\right)^{1+\delta}.
\end{equation}
This can be proved easily using, for instance, the Meyers gradient~$L^{2+\delta}$ estimate, and it is sharp in the sense that it is not possible to do better than the very small exponent~$\delta$. We can say, roughly, that the space of solutions of~\eqref{e.pde} is a $C^{1,\delta}$ manifold, but no better. Of course, if~$L$ does not depend on~$x$, or if~$R$ is of order one and~$L$ is smooth in both variables~$(p,x)$, then one expects the estimate above to hold with $\delta=1$ and to be able to prove more precise estimates using higher-order linearized equations. In fact, this is essentially a reformulation of the statement of Hilbert's 19th problem (see Appendix~\ref{a.constantcoeff}, where we give a proof of Hilbert's 19th problem in its classical formulation by following this line of reasoning).
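To see where the exponent comes from, here is a heuristic sketch of the standard argument (our own summary, not a claim from the text): subtracting the equations for $u$ and $v$ and Taylor-expanding $D_pL$ around $\nabla u$ shows that $\zeta := v-u-w$ solves an equation whose right side is quadratic in $\nabla(u-v)$,

```latex
-\nabla\cdot\left( D^2_pL(\nabla u,x)\nabla \zeta \right) = \nabla\cdot \mathbf{E}
\quad \mbox{in} \ B_R,
\qquad
\left| \mathbf{E} \right| \leq C \left| \nabla u - \nabla v \right|^2,
\qquad
\zeta = 0 \ \mbox{on} \ \partial B_R,
```

so the basic energy estimate gives $\|\nabla \zeta\|_{L^2(B_R)} \leq C \| \nabla u - \nabla v \|^2_{L^4(B_R)}$. It is the passage from $L^2$ to $L^4$ control of $\nabla(u-v)$, available deterministically only through the Meyers estimate, that produces the small exponent $\delta$ in \eqref{e.linerrorintro}; an $L^\infty$-type gradient bound, of the kind provided on large scales by the regularity theory discussed above, is what upgrades this to $\delta=1$.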
\smallskip
In this paper we also prove a large-scale version of the quadratic response to first-order linearization in the context of homogenization, which states that~\eqref{e.linerrorintro} holds with $\deltalta=1$ whenever~$R$ is larger than a random minimal scale. Moreover, we prove a full slate of higher-order versions of this result: see Corollary~\ref{c.linerrors}. These results roughly assert that, with probability one, the large-scale structure of solutions of~\eqref{e.pde} resembles that of a smooth manifold.
\smallskip
In the following two subsections, we state our assumptions and give the precise statements of the results discussed above.
\subsection{Assumptions and notation}
\label{ss.assumptions}
In this subsection, we state the standing assumptions in force throughout the paper.
\smallskip
We fix the following global parameters: the dimension~$d\in\mathbb{N}$ with $d\geq 2$, a constant~$\Lambda \in [1,\infty)$ measuring the ellipticity, an integer $\mathsf{N}\in\mathbb{N}$ with $\mathsf{N}\geq 1$ measuring the smoothness of the Lagrangian, and constants $\mathsf{M}_0, \mathsf{K}_0 \in [1,\infty)$. For short, we denote
\begin{equation*}
\mathrm{data}:= (d,\Lambda,\mathsf{N},\mathsf{M}_0,\mathsf{K}_0).
\end{equation*}
This allows us, for instance, to denote constants~$C$ which depend on $(d,\Lambda,\mathsf{N},\mathsf{M}_0,\mathsf{K}_0)$ simply by $C(\mathrm{data})$ instead of $C(d,\Lambda,\mathsf{N},\mathsf{M}_0,\mathsf{K}_0)$.
\smallskip
The probability space is the set~$\Omega$ of all Lagrangians~$L:\mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$, written as a function of $(p,x)\in\mathbb{R}^d\times\mathbb{R}^d$, satisfying the following conditions:
\begin{enumerate}
\item[(L1)] $L$ is $2+\mathsf{N}$ times differentiable in the variable~$p$ and, for every $k\in \{2,\ldots,2+\mathsf{N}\}$, the function $D^{k}_pL$ is uniformly Lipschitz in both variables and satisfies
\begin{equation} \label{e.gradLbndk}
\left[ D^{k}_pL \right]_{C^{0,1}(\mathbb{R}^d \times\mathbb{R}^d)}
\leq
\mathsf{K}_0.
\end{equation}
For $k=1$, we assume that, for $z \in \mathbb{R}^d$,
\begin{equation} \label{e.gradLbnd}
\left[ D_pL(z,\cdot) \right]_{C^{0,1}(\mathbb{R}^d)}
\leq
\mathsf{K}_0 (1+|z|).
\end{equation}
\smallskip
\item[(L2)] $L$ is uniformly convex in the variable~$p$: for every $p\in\mathbb{R}^d$ and~$x\in\mathbb{R}^d$,
\begin{equation*}
I_d \leq D^2_p L(p,x) \leq \Lambda I_d.
\end{equation*}
\smallskip
\item[(L3)] $D_pL(0,\cdot)$ is uniformly bounded:
\begin{equation*}
\left\| D_pL(0,\cdot) \right\|_{L^\infty(\mathbb{R}^d)} \leq \mathsf{M}_0.
\end{equation*}
\end{enumerate}
We define~$\Omega$ to be the set of all such Lagrangians~$L$:
\begin{equation*}
\Omega := \left\{ L \,:\, \mbox{$L$ satisfies~(L1), (L2) and~(L3)} \right\}.
\end{equation*}
Note that $\Omega$ depends on the fixed parameters~$(d,\Lambda,\mathsf{N},\mathsf{M}_0,\mathsf{K}_0)$. It is endowed with the following family of~$\sigma$--algebras: for each Borel subset~$U \subseteq \mathbb{R}^d$, define
\begin{multline*}
\mathcal{F}(U):= \mbox{the $\sigma$--algebra generated by the family of random variables}\\
L \mapsto L(p,x), \quad (p,x) \in\mathbb{R}^d \times U.
\end{multline*}
The largest of these is denoted by $\mathcal{F}:= \mathcal{F}(\mathbb{R}^d)$.
\smallskip
We assume that the law of the ``canonical Lagrangian''~$L$ is a probability measure~$\mathbb{P}$ on $(\Omega,\mathcal{F})$ satisfying the following two assumptions:
\begin{enumerate}
\item[(P1)] $\mathbb{P}$ has a unit range of dependence: for all Borel subsets $U,V\subseteq \mathbb{R}^d$ such that $\dist(U,V) \geq 1$,
\begin{equation*}
\mbox{$\mathcal{F}(U)$ and $\mathcal{F}(V)$ are $\mathbb{P}$--independent.}
\end{equation*}
\item[(P2)] $\mathbb{P}$ is stationary with respect to $\mathbb{Z}^d$--translations: for every $z\in \mathbb{Z}^d$ and $E\in \mathcal{F}$,
\begin{equation*}
\mathbb{P} \left[ E \right] = \mathbb{P} \left[ T_z E \right],
\end{equation*}
where the translation group $\{T_z\}_{z\in\mathbb{Z}^d}$ acts on $\Omega$ by $(T_zL)(p,x) = L(p,x+z)$.
\end{enumerate}
The expectation with respect to~$\mathbb{P}$ is denoted by~$\mathbb{E}$.
\smallskip
Since we will often be concerned with measuring stretched exponential moments of the random variables we encounter, the following notation is convenient: for every $\sigma \in(0,\infty)$, $\theta>0$, and random variable~$X$ on $\Omega$, we write
\begin{equation*}
X \leq \mathcal{O}_\sigma\left( \theta \right)
\iff
\mathbb{E} \left[ \exp\left( \left( \frac{X_+}{\theta} \right)^\sigma \right) \right] \leq 2.
\end{equation*}
This is essentially notation for an Orlicz norm on $(\Omega,\mathbb{P})$. Some basic properties of this notation are given in~\cite[Appendix A]{AKMbook}.
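For instance, one standard consequence of this notation, which we record here as an illustration, is a stretched exponential tail bound obtained from Markov's inequality: for every $t>0$,

```latex
X \leq \mathcal{O}_\sigma(\theta)
\quad\Longrightarrow\quad
\mathbb{P}\left[ X > t\theta \right]
\leq \exp\left( -t^\sigma \right)
\mathbb{E}\left[ \exp\left( \left( \frac{X_+}{\theta} \right)^\sigma \right) \right]
\leq 2\exp\left( -t^\sigma \right).
```

In particular, $X \leq \mathcal{O}_\sigma(\theta)$ encodes tails of exactly the form appearing in the bound~\eqref{e.SI.X} on the minimal scale.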
\subsection{Statement of the main results}
\label{ss.assump}
We begin by introducing the higher-order linearized equations. These can be computed by hand, as we did for the second and third linearized equations in Section~\ref{ss.naive}, but it is convenient to work with more compact formulas. Observe that, by Taylor's formula with remainder, we have, for every $n\in\{ 0,\ldots,\mathsf{N}+1 \}$,
\begin{equation*}
\left| D_p L(p_0 + h,x) - \sum_{k=0}^{n} \frac1{k!} D_p^{k+1} L(p_0,x) h^{\otimes k} \right| \leq \frac{C\mathsf{K}}{(n+1)!} \left| h \right|^{n+1}.
\end{equation*}
Define, for $p,x,h_1,\ldots,h_{\mathsf{N}} \in\mathbb{R}^d$ and $t \in\mathbb{R}$,
\begin{equation*}
\mathbf{G}(t,p,h_1,\ldots,h_{\mathsf{N}},x) := \sum_{k=2}^{\mathsf{N}+1} \frac1{k!} D_p^{k+1} L(p,x) \left( \sum_{j=1}^{\mathsf{N}} \frac{t^j}{j!} h_j \right)^{\otimes k}.
\end{equation*}
Also define, for each $m\in\{ 1,\ldots,\mathsf{N}+1\}$ and $p,x,h_1,\ldots,h_{m-1} \in \mathbb{R}^d$,
\begin{equation} \label{e.defFm}
\mathbf{F}_m (p,h_1,\ldots,h_{m-1},x)
:=
\left( \partial_t^m \mathbf{G}\right)\left(0,p,h_1,\ldots,h_{m-1},0,\ldots,0,x \right).
\end{equation}
Observe that $\mathbf{F}_1\equiv 0$ by definition (that is, the right side of the first linearized equation is zero, as we have already seen).
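To illustrate the definition~\eqref{e.defFm}, a direct computation of the Taylor coefficients of $\mathbf{G}$ in $t$ gives, for every $p,x,h_1,h_2 \in \mathbb{R}^d$,
\begin{equation*}
\mathbf{F}_2(p,h_1,x) = D_p^3 L(p,x)\, h_1^{\otimes 2}
\quad \mbox{and} \quad
\mathbf{F}_3(p,h_1,h_2,x) = D_p^4 L(p,x)\, h_1^{\otimes 3} + \tfrac32\, D_p^3 L(p,x) \left( h_1 \otimes h_2 + h_2 \otimes h_1 \right);
\end{equation*}
these are the fluxes appearing on the right side of the second and third linearized equations.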
\smallskip

Our first main result concerns the regularity of the effective Lagrangian~$\overline{L}$ and states that it has essentially the same regularity in~$p$ as we assumed for~$L$.
\begin{theorem}
[{Regularity of $\overline{L}$}]
\label{t.regularity.Lbar}
For every~$\beta\in (0,1)$, the effective Lagrangian $\overline{L}$ belongs to $C^{2+\mathsf{N},\beta}_{\mathrm{loc}}(\mathbb{R}^d)$ and, for every~$\mathsf{M}\in [1,\infty)$, there exists~$C(\beta,\mathsf{M},\mathrm{data})<\infty$ such that
\begin{equation*}
\left\| D^2 \overline{L} \right\|_{C^{\mathsf{N},\beta}(B_{\mathsf{M}})}
\leq C.
\end{equation*}
\end{theorem}
In view of Theorem~\ref{t.regularity.Lbar}, we may introduce homogenized versions of the above functions. We define, for every~$p\in\mathbb{R}^d$, $\{h_i\}_{i=1}^{\mathsf{N}} \subseteq \mathbb{R}^d$ and $t \in\mathbb{R}$,
\begin{equation*}
\overline{\mathbf{G}}(t,p,h_1,\ldots,h_{\mathsf{N}}) := \sum_{k=2}^{\mathsf{N}+1} \frac1{k!} D_p^{k+1} \overline{L}(p) \left( \sum_{j=1}^{\mathsf{N}} \frac{t^j}{j!} h_j \right)^{\otimes k}
\end{equation*}
and then, for every $m\in\{ 1,\ldots,\mathsf{N}+1\}$ and $\{h_i\}_{i=1}^{m-1} \subseteq \mathbb{R}^d$,
\begin{equation}
\label{e.defbarFm}
\overline{\mathbf{F}}_m (p,h_1,\ldots,h_{m-1})
:=
\left( \partial_t^m \overline{\mathbf{G}}\right)\left(0,p,h_1,\ldots,h_{m-1},0,\ldots,0 \right).
\end{equation}
As above, we have that $\overline{\mathbf{F}}_1\equiv 0$ by definition.
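For instance, differentiating $\overline{\mathbf{G}}$ twice in $t$ at $t=0$ gives $\overline{\mathbf{F}}_2(p,h_1) = D_p^3 \overline{L}(p)\, h_1^{\otimes 2}$, so the second homogenized linearized equation below takes the explicit form
\begin{equation*}
-\nabla \cdot \left( D^2\overline{L}\left( \nabla \bar{u} \right) \nabla \bar{w}_2 \right)
=
\nabla \cdot \left( D^3 \overline{L}\left( \nabla \bar{u} \right) \left( \nabla \bar{w}_1 \right)^{\otimes 2} \right).
\end{equation*}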
\smallskip

In the next theorem, we present a statement concerning the commutation of homogenization and higher-order linearization. It generalizes~\cite[Theorem 1.1]{AFK}, which proved the result in the case $\mathsf{N}=1$.
\begin{theorem}[Homogenization of higher-order linearized equations]
\label{t.linearizehigher}
\emph{}\\
Let $n\in\{0,\ldots,\mathsf{N}\}$, $\delta\in \left(0,\tfrac12\right]$, $\mathsf{M}\in [1,\infty)$, and $U_1,\ldots,U_{n+1} \subseteq\mathbb{R}^d$ be a sequence of bounded Lipschitz domains satisfying
\begin{equation}
\label{e.Um.inclus}
\overline{U}_{m+1} \subseteq U_{m}, \quad \forall m\in\left\{ 1,\ldots, n \right\}.
\end{equation}
There exist~$\sigma(\mathrm{data})>0$,~$\alpha\left(\{U_m\},\delta,\mathrm{data}\right)>0$,~$C\left(\{U_m\},\mathsf{M},\delta,\mathrm{data}\right)<\infty$, and a random variable $\mathcal{X}$ satisfying
\begin{equation}
\label{e.X2}
\mathcal{X} = \mathcal{O}_\sigma \left( C \right)
\end{equation}
such that the following statement is valid. Let $\varepsilon\in (0,1]$ and let $f \in W^{1,2+\delta}(U_1)$ be such that
\begin{equation*}
\left\| \nabla f \right\|_{L^{2+\delta}(U_1)} \leq \mathsf{M},
\end{equation*}
and, for each $m\in\{1,\ldots, n+1\}$, fix $g_m\in W^{1,2+\delta}(U_m)$ and
let $u^\varepsilon \in H^1(U_1)$ and the functions $w_1^\varepsilon\in H^1(U_1),w_2^\varepsilon\in H^1(U_2),\ldots,w_{n+1}^\varepsilon \in H^1(U_{n+1})$ satisfy, for every $m\in\{1,\ldots,n+1\}$, the Dirichlet problems
\begin{equation}
\label{e.linearized}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_pL\left( \nabla u^\varepsilon,\tfrac x\varepsilon \right) \right) = 0 & \mbox{in} & \ U_1, \\
& u^\varepsilon = f & \mbox{on} & \ \partial U_1, \\
& -\nabla \cdot \left( D^2_pL\left( \nabla u^\varepsilon,\tfrac x\varepsilon \right) \nabla w^\varepsilon_m \right) = \nabla \cdot \left( \mathbf{F}_m(\nabla u^\varepsilon,\nabla w^\varepsilon_1,\ldots,\nabla w^\varepsilon_{m-1},\tfrac x\varepsilon)\right) & \mbox{in} & \ U_m, \\
& w_m^\varepsilon = g_m & \mbox{on} & \ \partial U_m.
\end{aligned}
\right.
\end{equation}
Finally, let $\bar{u} \in H^1(U_1)$ and, for every $m\in \{1,\ldots,n+1\}$, the function $\bar{w}_{m}\in H^1(U_m)$ satisfy the homogenized problems
\begin{equation}
\label{e.homogenizedlinearized}
\left\{
\begin{aligned}
& -\nabla \cdot D\overline{L}\left( \nabla \bar{u} \right) = 0 & \mbox{in} & \ U_1, \\
& \bar{u} = f & \mbox{on} & \ \partial U_1,\\
& -\nabla \cdot \left( D^2\overline{L}\left( \nabla \bar{u} \right) \nabla \bar{w}_m \right) = \nabla\cdot \left( \overline{\mathbf{F}}_m\left(\nabla\bar{u},\nabla \bar{w}_1,\ldots,\nabla \bar{w}_{m-1}\right) \right)& \mbox{in} & \ U_m, \\
& \bar{w}_m = g_m & \mbox{on} & \ \partial U_m.
\end{aligned}
\right.
\end{equation}
Then, for every $m\in \{1,\ldots,n+1\}$, we have the estimate
\begin{equation}
\label{e.homogenization.estimates}
\left\| \nabla w_m^\varepsilon - \nabla \bar{w}_m \right\|_{H^{-1}(U_m)}
\leq
\mathcal{X} \varepsilon^{\alpha}
\sum_{j=1}^m \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{equation}
\end{theorem}
Observe that, due to the assumed regularity of $L$ in the spatial variable and the Schauder estimates, the vector fields $\mathbf{F}_m(\nabla u^\varepsilon,\nabla w^\varepsilon_1,\ldots,\nabla w^\varepsilon_{m-1},\tfrac x\varepsilon)$ on the right side of the equations for $w_m^\varepsilon$ in~\eqref{e.linearized} belong to $L^\infty(U_m)$. In particular, they belong to $L^2(U_m)$ and therefore the Dirichlet problems in~\eqref{e.linearized} are well-posed in the sense that the solutions~$w_m^\varepsilon$ belong to $H^1(U_m)$. Of course, the regularity given by this application of the Schauder estimates depends on~$\varepsilon$ (indeed, the constants blow up like a large power of $\varepsilon^{-1}$), and therefore this remark is not very useful as a quantitative statement. To prove the homogenization result for the $m$th linearized equation, we will need much better bounds on these vector fields, which amounts to better regularity for the solutions~$\nabla u^\varepsilon, \nabla w^\varepsilon_1,\ldots,\nabla w^\varepsilon_{m-1}$.
\smallskip

This is the reason we must prove Theorems~\ref{t.regularity.Lbar} and~\ref{t.linearizehigher} at the same time (in an induction on the order $m$ of the linearized equation and the regularity of $D^2\overline{L}$) as the following result on the large-scale regularity of solutions of the linearized equations and of the linearization errors. Its statement is a generalization of the large-scale~$C^{0,1}$ estimates for linearized equations and \emph{differences} of solutions proved in~\cite{AFK}.
\smallskip

As mentioned above, throughout the paper we use the following notation for volume-normalized $L^p$ norms: for each $p\in [1,\infty)$, $U \subseteq\mathbb{R}^d$ with $|U|<\infty$ and $f\in L^p(U)$,
\begin{equation*}
\left\| f \right\|_{\underline{L}^p(U)}:=
\left( \fint_U |f|^p\,dx \right)^{\frac1p}
=
\left| U \right|^{-\frac1p} \left\| f \right\|_{L^p(U)}.
\end{equation*}
\begin{theorem}[Large-scale $C^{0,1}$ estimates for the linearized equations]
\label{t.regularity.linerrors}
\emph{}\\
Let $n\in\{0,\ldots,\mathsf{N}\}$, $q\in [2,\infty)$, and $\mathsf{M}\in [1,\infty)$. Then there exist~$\sigma(q,\mathrm{data})>0$, a constant~$C(q,\mathsf{M},\mathrm{data}) <\infty$ and a random variable~$\mathcal{X}$ satisfying~$\mathcal{X} \leq \mathcal{O}_{\sigma}\left( C \right)$ such that the following statement is valid.
For $R \in \left[ 2\mathcal{X},\infty\right)$ and $u,v,w_1,\ldots,w_{n+1}\in H^1(B_R)$ satisfying, for every $m\in\{1,\ldots,n+1\}$,
\begin{equation*}
\left\{
\begin{aligned}
&
\left\| \nabla u \right\|_{\underline{L}^2(B_R)} \vee
\left\| \nabla v \right\|_{\underline{L}^2(B_R)} \leq \mathsf{M},
\\ &
-\nabla \cdot \left( D_pL(\nabla u,x) \right) = 0
\quad \mbox{and} \quad -\nabla \cdot \left( D_pL(\nabla v,x) \right) = 0
\quad \mbox{in} \ B_R,\\
&
-\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1},x)\right) \quad \mbox{in} \ B_R,
\end{aligned}
\right.
\end{equation*}
and defining, for~$m\in \{0,\ldots,n\}$, the $m$th-order linearization error $\xi_m \in H^1(B_R)$ by
\begin{equation*}
\xi_m:= v - u - \sum_{k=1}^m \frac1{k!} w_k,
\end{equation*}
we have, for every $r \in \left[ \mathcal{X} , \tfrac 12 R \right]$ and $m\in\{0,\ldots,n\}$, the estimates
\begin{align}
\label{e.C01linsols}
\left\| \nabla w_{m+1} \right\|_{\underline{L}^q(B_r)}
&
\leq
C\sum_{i=1}^{m+1}
\left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{m+1}i}
\end{align}
and
\begin{multline}
\label{e.C01linerror}
\left\| \nabla \xi_{m} \right\|_{\underline{L}^{q} (B_r)}
\\
\leq
C \sum_{i=0}^{m} \left( \frac{1}{R}\left\| \xi_{i} - \left( \xi_{i} \right)_{B_{R}} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{m+1}{i+1}}
+ C \sum_{i=1}^{m} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac {m+1}{i}}
.
\end{multline}
\end{theorem}
The main interest in the above theorem is the case of the exponent~$q=2$. However, we must consider arbitrarily large exponents $q\in [2,\infty)$ in order for the induction argument to work. In particular, in order to show that Theorem~\ref{t.regularity.linerrors} for some $n$ implies Theorem~\ref{t.linearizehigher} for $n+1$, we need to consider potentially very large~$q$ (depending on $n$).
As mentioned above, Theorems~\ref{t.regularity.Lbar},~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} are proved together in an induction argument. Each of the theorems has already been proved in the case~$\mathsf{N}=0$ and~$q=2$ in our previous paper~\cite{AFK}. The integrability in Theorem~\ref{t.regularity.linerrors} is upgraded to $q \in (2,\infty)$ in Propositions~\ref{p.w1higher} and~\ref{p.Lipxi0} below for $w_1$ and $\xi_0$, respectively. These serve as the base case of the induction. The main induction step consists of the following three implications:
\begin{itemize}
\item \emph{Regularity of~$\overline{L}$} (Section~\ref{s.regLbar}). We show that if Theorems~\ref{t.regularity.Lbar},~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} are valid for some $n\in \{0,\ldots,\mathsf{N}-1\}$, then Theorem~\ref{t.regularity.Lbar} is valid for $n+1$. The argument essentially consists of differentiating the corrector equation for the~$n$th linearized equation with respect to the parameter~$p$. However, the reader should not be misled into expecting a simple argument based on the implicit function theorem. Due to the lack of sufficient spatial integrability of the vector fields~$\mathbf{F}_{m}$, it is necessary to use the large-scale regularity theory (i.e., the assumed validity of Theorem~\ref{t.regularity.linerrors} for~$n$) to complete the argument.
\smallskip

\item \emph{Homogenization of higher-order linearized equations} (Section~\ref{s.homogenization}). We argue, for $n\in \{0,\ldots,\mathsf{N}-1\}$, that if Theorem~\ref{t.regularity.Lbar} is valid for~$n+1$ and Theorems~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} are valid for~$n$, then Theorem~\ref{t.linearizehigher} is valid for~$n+1$. The regularity of~$\overline{L}$ allows us to write down the homogenized equation, while the homogenization and regularity estimates for the previous linearized equations allow us to \emph{localize} the heterogeneous equation; that is, to approximate it with another equation which has a finite range of dependence and bounded coefficients. This allows us to apply homogenization estimates from~\cite{AKMbook}.
\smallskip

\item \emph{Large-scale~$C^{0,1}$ regularity of linearized solutions and linearization errors}
(Sections~\ref{s.reglineqs} and~\ref{s.reglinerrors}). We argue, for $n\in \{0,\ldots,\mathsf{N}-1\}$, that if Theorems~\ref{t.regularity.Lbar} and~\ref{t.linearizehigher} are valid for~$n+1$ and Theorem~\ref{t.regularity.linerrors} is valid for~$n$, then we may conclude that Theorem~\ref{t.regularity.linerrors} is also valid for~$n+1$. Here we use the method introduced in~\cite{AS} of applying a quantitative excess decay iteration, based on the ``harmonic'' approximation provided by the quantitative homogenization statement. This estimate controls the regularity of the $w_m$'s on ``large'' scales (i.e., larger than a multiple of the microscopic scale). To obtain $L^q$--type integrability for $\nabla w_m$, it is also necessary to control the small scales, and for this we apply deterministic Calder\'on--Zygmund-type estimates (this is our reason for assuming that~$L$ possesses some small-scale spatial regularity). The estimates for the linearization errors~$\xi_{m-1}$ are then obtained as a consequence, by comparing them to~$w_m$.
\end{itemize}
From a high-level point of view, the induction argument summarized above resembles the resolution of Hilbert's 19th problem on the regularity of minimizers of integral functionals with uniformly convex and smooth integrands. The previous three theorems allow us to prove the next two results, which can be considered as resolutions of Hilbert's 19th problem in the context of homogenization.
\smallskip

The first is the following result on the precision of the higher-order linearization approximations, which matches the one we have in the constant-coefficient case, as discussed near the end of Subsection~\ref{ss.naive}.
\begin{corollary}
[Large-scale estimates of linearization errors]
\label{c.linerrors}
Fix $n\in\{0,\ldots,\mathsf{N} \}$, $\mathsf{M}\in [1,\infty)$ and let $U_0,U_1,\ldots,U_{n} \subseteq\mathbb{R}^d$ be a sequence of bounded Lipschitz domains satisfying
\begin{equation}
\label{e.Um.inclus2}
\overline{U}_{m+1} \subseteq U_{m}, \quad \forall m\in\left\{ 1,\ldots, n-1 \right\}.
\end{equation}
There exist constants~$\sigma(\mathrm{data}) \in \left(0,\tfrac12 \right]$,~$C(\{U_m\},\mathsf{M},\mathrm{data})<\infty$ and a random variable~$\mathcal{X}$ satisfying
\begin{equation*}
\mathcal{X} = \mathcal{O}_{\sigma} \left( C \right)
\end{equation*}
such that the following statement is valid.
Let~$r \in \left[ \mathcal{X},\infty \right)$ and $u,v\in H^1(rU_0)$ satisfy
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_pL(\nabla u,x) \right) = 0
\quad \mbox{and} \quad -\nabla \cdot \left( D_pL(\nabla v,x) \right) = 0
\quad \mbox{in} \ rU_0,\\
&
\left\| \nabla u \right\|_{\underline{L}^{2}(rU_0)} \vee
\left\| \nabla v \right\|_{\underline{L}^{2}(rU_0)} \leq \mathsf{M},
\end{aligned}
\right.
\end{equation*}
and recursively define $w_m \in H^1(rU_m)$, for every~$m\in\{1,\ldots,n\}$, to be the solution of the Dirichlet problem
\begin{equation}
\label{e.linearized.cor}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1},x)\right) & \mbox{in} & \ rU_m, \\
& w_m =
v - u - \sum_{k=1}^{m-1} \frac1{k!} w_k
& \mbox{on} & \ r\partial U_m.
\end{aligned}
\right.
\end{equation}
Then, for every $m\in\{1,\ldots,n\}$,
\begin{equation}
\label{e.homogenization.estimates.cor}
\left\|
\nabla v - \nabla \left( u + \sum_{k=1}^m \frac1{k!} w_k \right)
\right\|_{\underline{L}^2(rU_{m})}
\leq
C \left( \left\| \nabla u - \nabla v \right\|_{\underline{L}^{2}(rU_0)} \right)^{m+1}.
\end{equation}
\end{corollary}
Corollary~\ref{c.linerrors} is an easy consequence of the theorems stated above. Its proof is presented in Section~\ref{s.linerrorsproof}.
\smallskip

The analysis of the linearized equations presented in the theorems above allows us to develop a higher regularity theory for solutions of the nonlinear equation on large scales, in analogy to the role of the Schauder theory in the resolution of Hilbert's~19th problem on the regularity of solutions of nonlinear equations with smooth (or constant) coefficients.
This result generalizes the large-scale $C^{1,1}$-type estimate proved in our previous paper~\cite{AFK} to higher-order regularity, as well as the result in the linear case~\cite[Theorem 3.6]{AKMbook}.
\smallskip

Before giving the statement of this result, we introduce some additional notation and provide some motivational discussion. Given a domain $U\subseteq \mathbb{R}^d$, we define
\begin{equation*}
\mathcal{L}(U):=
\left\{
u\in H^1_{\mathrm{loc}}(U) \,:\,
-\nabla \cdot D_pL(\nabla u,x) = 0 \ \mbox{in} \ U
\right\}.
\end{equation*}
This is the set of solutions of the nonlinear equation in the domain~$U$, which we note is a stochastic object. We next define
$\mathcal{L}_1$ to be the set of global solutions of the nonlinear equation which exhibit at most linear growth at infinity:
\begin{equation*}
\mathcal{L}_1 :=
\left\{
u\in \mathcal{L}(\mathbb{R}^d) \,:\,
\limsup_{r\to \infty} r^{-1} \left\| u \right\|_{\underline{L}^2(B_r)}
< \infty
\right\}.
\end{equation*}
For each $p\in\mathbb{R}^d$, we define the affine function~$\ell_p$ by~$\ell_p(x):=p\cdot x$. Observe that if the difference of two elements of $\mathcal{L}_1$ has strictly sublinear growth at infinity, then it must be constant, by the $C^{0,1}$-type estimate for differences (the estimate~\eqref{e.C01linerror} with $m=0$). Therefore the following theorem, which was proved in~\cite{AFK}, gives a complete classification of~$\mathcal{L}_1$.
\begin{theorem}
[{Large-scale $C^{1,1}$-type estimate~\cite[Theorem 1.3]{AFK}}]
\label{t.C11estimate}
\emph{}\\
Fix $\sigma \in \left(0,d\right)$ and $\mathsf{M} \in [1,\infty)$. There exist~$\delta(\sigma,d,\Lambda)\in \left( 0, \frac12 \right]$, $C(\mathsf{M},\sigma,\mathrm{data})<\infty$ and a random variable $\mathcal{X}_\sigma$ which satisfies the estimate
\begin{equation}
\label{e.X}
\mathcal{X}_\sigma \leq \mathcal{O}_{\sigma}\left(C\right)
\end{equation}
such that the following statements are valid.
\begin{enumerate}
\item[{$\mathrm{(i)}$}]
For every $u \in \mathcal{L}_1$ satisfying $\limsup_{r \to \infty} \frac1r \left\| u - (u)_{B_r} \right\|_{\underline{L}^2(B_r)} \leq \mathsf{M}$,
there exists an affine function $\ell$ such that, for every $R\geq \mathcal{X}_\sigma$,
\begin{equation*}
\label{e.liouvillec0}
\left\| u - \ell \right\|_{\underline{L}^2(B_R)} \leq C R^{1-\delta} .
\end{equation*}
\item[{$\mathrm{(ii)}$}]
For every $p\in B_\mathsf{M}$, there exists $u\in \mathcal{L}_1$ satisfying, for every $R\geq \mathcal{X}_\sigma$,
\begin{equation*}
\label{e.liouvillec1}
\left\| u - \ell_p \right\|_{\underline{L}^2(B_R)} \leq C R^{1-\delta} .
\end{equation*}
\smallskip
\item[{$\mathrm{(iii)}$}]
For every $R\geq \mathcal{X}_\sigma$ and $u\in \mathcal{L}(B_R)$ satisfying
$\frac1R \left\| u - (u)_{B_R} \right\|_{\underline{L}^2 \left( B_{R} \right)} \leq \mathsf{M}$, there exists $\phi \in \mathcal{L}_1$ such that, for every $r \in \left[ \mathcal{X}_\sigma, R \right]$,
\begin{equation}
\label{e.C11}
\left\| u - \phi \right\|_{\underline{L}^2(B_r)} \leq C \left( \frac r R \right)^{2} \inf_{\psi\in\mathcal{L}_1}
\left\| u - \psi \right\|_{\underline{L}^2(B_R)}.
\end{equation}
\end{enumerate}
\end{theorem}
Statements~(i) and~(ii) of the above theorem, which give the characterization of~$\mathcal{L}_1$, can be considered as a first-order Liouville-type theorem. In the case of deterministic, periodic coefficient fields, this result was proved by Moser and Struwe~\cite{MS}, who generalized the result of Avellaneda and Lin~\cite{AL4} in the linear case. Part~(iii) of the theorem is a quantitative version of this Liouville-type result, which we call a ``large-scale~$C^{1,1}$ estimate'' since it states that, on large scales, an arbitrary solution of the nonlinear equation can be approximated by an element of~$\mathcal{L}_1$ with the same precision as harmonic functions can be approximated by affine functions. It can be compared to similar statements in the linear case (see for instance~\cite{AKMbook,GNO2}).
\smallskip

In this paper we prove a higher-order version of Theorem~\ref{t.C11estimate}. We will show that, just as a harmonic function can be approximated locally by harmonic polynomials, an arbitrary element of $\mathcal{L}(B_R)$ can be approximated by elements of a random set of functions which are the natural analogue of harmonic polynomials. In order to state this result, we must first define this space of functions.
\smallskip

Let us first discuss the constant-coefficient case.
If $\overline{L}$ is a smooth Lagrangian, we know from the resolution of Hilbert's 19th problem that solutions of $-\nabla \cdot D_p \overline{L}(\nabla \overline{u}) = 0$ are smooth and thus may be approximated by a Taylor expansion at each point. One may then ask: can we characterize the possible Taylor polynomials? In Appendix~\ref{a.constantcoeff} we provide such a characterization in terms of the linearized equations. The quadratic part is an ${\overbracket[1pt][-1pt]{\mathbf{a}}}_p:=D^2\overline{L}(p)$-harmonic polynomial and the higher-order polynomials satisfy the linearized equations, with the $\overline{\mathbf{F}}_m$'s as right-hand sides. More precisely, for each $p\in\mathbb{R}^d$ and $n\in\mathbb{N}$, we set
\begin{align} \notag
\overline{\mathsf{W}}_n^{p,\textrm{hom}} := \bigg\{ & (\overline{w}_1,\ldots,\overline{w}_n) \in H_{\textrm{loc}}^1(\mathbb{R}^d;\mathbb{R}^n) \, : \, \mbox{for }
m \in \{1,\ldots,n\} \mbox{ we have }
\\ \notag & \qquad
\lim_{r \to 0} r^{-m} \left\| \overline{w}_m \right\|_{\underline{L}^2 \left( B_{r} \right)} = 0 , \quad
\lim_{R\to \infty} R^{-1-m} \left\| \nabla \overline{w}_m \right\|_{\underline{L}^2 \left( B_{R} \right)} = 0 ,
\\ \notag & \qquad
- \nabla \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \nabla \overline{w}_m \right) = \nabla \cdot \overline{\mathbf{F}}_m\left(p , \nabla \overline{w}_1 , \ldots, \nabla \overline{w}_{m-1} \right)
\bigg\} .
\end{align}
It is not too hard to show that $\overline{\mathsf{W}}_n^{p,\textrm{hom}} \subset \mathcal{P}^{\textrm{hom}}_2 \times \ldots \times \mathcal{P}^{\textrm{hom}}_{n+1}$, where $\mathcal{P}^{\textrm{hom}}_j$ stands for the set of homogeneous polynomials of degree $j$. Indeed, we see, by Liouville's theorem, that~$\overline{w}_1$ is an~${\overbracket[1pt][-1pt]{\mathbf{a}}}_p$-harmonic polynomial of degree two. More importantly, according to Appendix~\ref{a.constantcoeff}, if~$\overline{u}$ solves $-\nabla \cdot D_p \overline{L}(\nabla \overline{u}) = 0$ in a neighborhood of the origin, and we set $p = \nabla \overline{u}(0)$ and $\overline{w}_m(x) := \frac{1}{m+1} \nabla^{m+1} \overline{u}(0) \, x^{\otimes (m+1)}$, then
\begin{equation*}
(\overline{w}_1,\ldots,\overline{w}_n) \in \overline{\mathsf{W}}_n^{p,\textrm{hom}} .
\end{equation*}
In particular, $\overline{w}_m$ is the sum of a special solution of
\begin{equation*}
\nabla \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \nabla \overline{w}_m + \overline{\mathbf{F}}_m\left(p , \nabla \overline{w}_1 , \ldots, \nabla \overline{w}_{m-1} \right) \right) = 0
\end{equation*}
in $\mathcal{P}^{\textrm{hom}}_{m+1}$ and an ${\overbracket[1pt][-1pt]{\mathbf{a}}}_p$-harmonic polynomial in $\mathcal{P}^{\textrm{hom}}_{m+1}$.
For our purposes it is convenient to relax the growth condition at the origin and define simply
\begin{align} \notag
\overline{\mathsf{W}}_n^p := \bigg\{ & (\overline{w}_1,\ldots,\overline{w}_n) \in \mathcal{P}_2 \times \ldots \times \mathcal{P}_{n+1} \, : \, \mbox{for }
m \in \{1,\ldots,n\} \mbox{ we have }
\\ \notag & \qquad
\lim_{R\to \infty} R^{-1-m} \left\| \nabla \overline{w}_m \right\|_{\underline{L}^2 \left( B_{R} \right)} = 0 ,
\\ \notag & \qquad
- \nabla \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \nabla \overline{w}_m \right) = \nabla \cdot \overline{\mathbf{F}}_m\left(p , \nabla \overline{w}_1 , \ldots, \nabla \overline{w}_{m-1} \right)
\bigg\} .
\end{align}
With this definition, we lose the homogeneity of the polynomials. Following the approach in~\cite[Chapter 3]{AKMbook}, it is natural to define heterogeneous versions of these spaces by
\begin{align} \notag
\mathsf{W}_n^p := \bigg\{ & (w_1,\ldots,w_n) \in H_{\textrm{loc}}^1(\mathbb{R}^d;\mathbb{R}^n) \, : \, \mbox{for }
m \in \{1,\ldots,n\} \mbox{ we have }
\\ \notag & \qquad
\limsup_{R\to \infty} R^{-1-m} \left\| \nabla w_m \right\|_{\underline{L}^2 \left( B_{R} \right)} = 0 ,
\\ \notag & \qquad
- \nabla \cdot \left( D_p^2L( p + \nabla \phi_p,\cdot) \nabla w_m \right) = \nabla \cdot \mathbf{F}_m\left( p+ \nabla \phi_p , \nabla w_1 , \ldots, \nabla w_{m-1} ,\cdot \right)
\bigg\} ,
\end{align}
that is, the tuples of heterogeneous solutions with prescribed growth. Here $\ell_p + \phi_p$ is the unique element of~$\mathcal{L}_1$ (up to additive constants) satisfying
\begin{equation*}
\lim_{r\to \infty} \frac1r \left\| \phi_p \right\|_{\underline{L}^2 \left( B_{r} \right)} = 0.
\end{equation*}
In other words, $\phi_p$ is the first-order corrector with slope $p$: see Lemma~\ref{l.corr.sublinearity}.
\smallskip

The next theorem gives a higher-order Liouville-type result which classifies the spaces~$\mathsf{W}_n^p$ and states that they may be used to approximate any solution of the nonlinear equation with the precision of a~$C^{n,1}$ estimate.
\begin{theorem}[Large-scale regularity]
\label{t.regularityhigher}
Fix $n \in \{1,\ldots,\mathsf{N}\}$ and $\mathsf{M} \in [1, \infty)$. There exist constants $\sigma(n,\mathsf{M},\mathrm{data}),\delta(n,\mathrm{data})\in \left( 0, \frac12 \right]$ and a random variable $\mathcal{X}$ satisfying the estimate
\begin{equation}
\label{e.higherX}
\mathcal{X} \leq \mathcal{O}_\sigma \left(C(n,\mathsf{M}, d,\Lambda) \right)
\end{equation}
such that the following statements hold:
\begin{enumerate}
\item[{$\mathrm{(i)}_n$}] There exists a constant $C(n,\mathsf{M},\mathrm{data}) <\infty$ such that, for every $p \in B_{\mathsf{M}}$ and $(w_1,\ldots,w_n) \in \mathsf{W}_n^p$, there exists $(\overline{w}_1,\ldots,\overline{w}_n) \in \overline{\mathsf{W}}_n^p $ such that, for every $R\geq \mathcal{X}$ and $m \in \{1,\ldots,n\}$,
\begin{equation} \label{e.liouvillec}
\left\| w_m - \overline{w}_m \right\|_{\underline{L}^2(B_R)} \leq C R^{1-\delta} \left( \frac{R}{\mathcal{X}} \right)^{m} \sum_{i=1}^m \left( \frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_{\mathcal{X}})}\right)^{\frac{m}{i}} .
\end{equation}
\item[{$\mathrm{(ii)}_n$}] For every $p \in B_{\mathsf{M}}$ and $(\overline{w}_1,\ldots,\overline{w}_n) \in \overline{\mathsf{W}}_n^{p}$, there exists $(w_1,\ldots,w_n) \in \mathsf{W}_n^p$ satisfying~\eqref{e.liouvillec} for every $R\geq \mathcal{X}$ and $m \in \{1,\ldots,n\}$.
\item[{$\mathrm{(iii)}_n$}]
There exists $C(n,\mathsf{M},\mathrm{data})<\infty$ such that, for every $R\geq \mathcal{X}$ and $v \in \mathcal{L}(B_R)$ satisfying
$
\left\| \nabla v \right\|_{\underline{L}^2 \left( B_{R} \right)} \leq \mathsf{M},
$
there exist $p \in B_{C}$ and $(w_1,\ldots,w_n) \in \mathsf{W}_n^p$ such that, defining
\begin{equation*}
\xi_k(x) := v(x) - p\cdot x - \phi_p(x) - \sum_{i=1}^{k} \frac{1}{i!} w_i(x),
\end{equation*}
we have, for every $r \in \left[ \mathcal{X}, \frac12 R \right]$ and $k \in \{0,1,\ldots,n\}$, the following estimates:
\begin{equation}
\label{e.intrinsicreg}
\sum_{i=0}^{k}\left( \left\| \nabla \xi_i \right\|_{\underline{L}^{2}(B_r)} \right)^{\frac{k+1}{i+1}}
\leq
C \left( \frac r R \right)^{k+1} \sum_{i=0}^{k}\left( \frac1R \left\| \xi_i - (\xi_i)_{B_R} \right\|_{\underline{L}^{2}(B_R)} \right)^{\frac{k+1}{i+1}}
\end{equation}
and
\begin{equation}
\label{e.intrinsicreg2}
\left\| \nabla \xi_k \right\|_{\underline{L}^{2}(B_r)}
\leq
C \left( \frac r R \right)^{k+1} \frac1R \inf_{\phi \in \mathcal{L}_1}\left\| v - \phi \right\|_{\underline{L}^{2}(B_R)}.
\end{equation}
\end{enumerate}
\end{theorem}
The proof of Theorem~\ref{t.regularityhigher} is given in Section~\ref{s.regularityhigher}.
\mathbf{m}athbf{s}mallskip
Appendices~\ref{app.linerrors}--\ref{a.constantcoeff} of this paper contain estimates for constant-coefficient equations which are essentially known but not to our knowledge written anywhere. We also collect some auxiliary estimates and computations in Appendices~\ref{app.CZ} and~\ref{s.AppendixFm}.
\mathbf{m}athbf{s}ection{Regularity estimates for the effective Lagrangian}
\label{s.regLbar}
In this section, we suppose that~$n\in \{0,\ldots,\mathbf{m}athsf{N}-1 \}$ is such that
\mathbf{b}etagin{equation}
\label{e.assumption.section3}
\mathbf{m}box{the statements of Theorems~\ref{t.regularity.Lbar},~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} are valid for $n$.}
\end{equation}
The goal is to prove Theorem~\ref{t.regularity.Lbar} for $n+1$.
\mathbf{m}athbf{s}mallskip
We proceed by constructing the linearized correctors $\psi^{(m)}_{p,h}$ up to $m=n+2$ and relate the correctors of different orders to each other via differentiation in the parameter~$p$. We show that these results allow us to improve the regularity of~$D^2\overlineerline{L}$ up to $C^{n+1,\mathbf{b}etata}$ and obtain the statement of Theorem~\ref{t.regularity.Lbar} for $n+1$. In particular, this allows us to define the effective coefficient~$\overlineerline{\mathbf{m}athbf{F}}_{n+1}$. We also give formulas for the derivatives of~$\overlineerline{L}$ and for~$\overlineerline{\mathbf{m}athbf{F}}_{m}$ in terms of the correctors, which allow us to relate them to each other and show that~\eqref{e.defbarFm} holds.
\mathbf{m}athbf{s}mallskip
\mathbf{m}athbf{s}ubsection{The first-order correctors and linearized correctors}
In this subsection we construct the linearized correctors up to order~$n+2$.
\mathbf{m}athbf{s}mallskip
For each $p\in\mathbf{m}athbb{R}d$, we define~$\phi_p$ to be the \emph{first-order corrector}
of the nonlinear equation, that is, the unique solution of
\mathbf{b}etagin{equation} \label{e.nonlinearcorrector}
\left\{
\mathbf{b}etagin{aligned}
& -\nabla \cdot \left( D_pL\left( p + \nabla\phi_p(x),x\right) \right)=0 \quad \mathbf{m}box{in} \ \mathbf{m}athbb{R}d,\\
& \nabla\phi_p \ \ \mathbf{m}box{is $\mathbf{m}athbb{Z}d$--stationary, \quad and}\quad \mathbf{m}athbb{E} \left[ \int_{\cu_0} \nabla \phi_p(x)\,dx \right] = 0.
\end{aligned}
\right.
\end{equation}
The existence and uniqueness (up to additive constants) of the first-order corrector $\phi_p$ is classical: it can be obtained from a variational argument applied to an appropriate function space of stationary functions. Alternatively,
it can be shown (following the proof given in~\cite[Section 3.4]{AKMbook}) that the elements of~$\mathcal{L}_1$, which was characterized in Theorem~\ref{t.C11estimate} above (and proved already in~\cite{AFK}),
have stationary gradients.
\smallskip
We define the coefficient field $\mathbf{a}_p(x)$ to be the coefficients of the linearized equation around the solution $x\mapsto p\cdot x+\phi_p(x)$:
\begin{equation*}
\mathbf{a}_p(x):= D_p^2L\left( p + \nabla\phi_p(x),x\right).
\end{equation*}
Given $p,h\in\mathbb{R}^d$, we define the \emph{first linearized corrector} $\psi^{(1)}_{p,h}$ to satisfy
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \mathbf{a}_p(x) \left( h+ \nabla \psi^{(1)}_{p,h} \right)\right)=0 \quad \mbox{in} \ \mathbb{R}^d,\\
&\nabla\psi^{(1)}_{p,h} \ \ \mbox{is $\mathbb{Z}^d$--stationary, \quad and} \quad
\mathbb{E} \left[ \int_{\cu_0} \nabla \psi^{(1)}_{p,h}(x)\,dx \right] = 0.
\end{aligned}
\right.
\end{equation*}
In other words,~$\psi^{(1)}_{p,h}$ is the first-order corrector with slope~$h$ for the equation obtained by linearizing around the solution $x\mapsto p\cdot x + \phi_p(x)$.
For $m \in \{ 2,\ldots,\mathsf{N}+2 \}$, we define the \emph{$m$th linearized corrector} to be the unique (modulo additive constants) random field $\psi^{(m)}_{p,h}$ satisfying
\begin{equation}
\label{e.mthlinearized.corr}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \mathbf{a}_p(x) \nabla \psi^{(m)}_{p,h} \right)
\\
& \quad
=\nabla \cdot
\mathbf{F}_m\left( p+\nabla\phi_p(x),h+\nabla\psi^{(1)}_{p,h}(x),\nabla \psi^{(2)}_{p,h}(x), \ldots, \nabla \psi^{(m-1)}_{p,h}(x),x \right)
& \mbox{in} & \ \mathbb{R}^d,\\
& \nabla\psi^{(m)}_{p,h} \quad \mbox{is $\mathbb{Z}^d$--stationary, \quad and} \quad
\mathbb{E} \left[ \int_{\cu_0} \nabla \psi^{(m)}_{p,h}(x)\,dx \right] = 0.
\end{aligned}
\right.
\end{equation}
In other words, $\psi^{(m)}_{p,h}$ is the corrector with slope zero for the $m$th linearized equation around $x\mapsto p\cdot x + \phi_p(x)$ and $x\mapsto h\cdot x + \psi^{(1)}_{p,h}(x)$, $x\mapsto \psi^{(2)}_{p,h}(x),\ldots, x\mapsto \psi^{(m-1)}_{p,h}(x)$. Notice that this gives us the complete collection of correctors for the latter equation, since by linearity we observe that $\psi^{(m)}_{p,h} + \psi^{(1)}_{p,h'}$ is the corrector with slope $h'$. Furthermore, by the linearity of the map $h \mapsto h + \nabla \psi^{(1)}_{p,h}(x)$, it is easy to see from the structure of the equations of $\psi^{(m)}_{p,h}$ that, for $p,h \in \mathbb{R}^d$ and $t \in \mathbb{R}$,
\begin{equation} \label{e.psimhomogen}
\nabla \psi^{(m)}_{p,t h} = t^m \nabla \psi^{(m)}_{p,h}.
\end{equation}
For $p,h \in \mathbb{R}^d$, we define
\begin{equation}
\label{e.corrcoeff}
\mathbf{f}^{(k)}_{p,h} := \mathbf{F}_{k} \left(p + \nabla \phi_p , h+ \nabla \psi^{(1)}_{p,h}, \nabla \psi^{(2)}_{p,h}, \ldots , \nabla \psi^{(k-1)}_{p,h},\cdot \right) .
\end{equation}
By~\eqref{e.psimhomogen}, we have that
\begin{equation*}
\mathbf{f}^{(k)}_{p,h}= |h|^k \mathbf{f}^{(k)}_{p,h/|h|} .
\end{equation*}
\smallskip
We first show that the problem~\eqref{e.mthlinearized.corr} for the $m$th linearized corrector is well-posed for $m\in \{2,\ldots,n+1\}$. This is accomplished by checking inductively, using our hypothesis~\eqref{e.assumption.section3} (and in particular the validity of Theorem~\ref{t.regularity.linerrors} for $m\leq n$), that we have appropriate estimates on the vector fields~$\mathbf{F}_m(\cdots)$ on the right-hand side. We have to make this argument at the same time as we obtain estimates on the smoothness of the correctors $\psi^{(m)}_{p,h}$ as functions of~$p$. In fact, we prove that $\nabla \psi^{(m+1)}_{p,h} = h\cdot D_p\nabla \psi^{(m)}_{p,h}$, which expressed in coordinates is
\begin{equation*}
\nabla \psi^{(m+1)}_{p,h} = \sum_{i=1}^d h_i\partial_{p_i} \nabla \psi^{(m)}_{p,h}.
\end{equation*}
We will also obtain $C^{0,1}$-type bounds on the linearized correctors, which together with the previous display yield good quantitative control on the smoothness of the correctors in~$p$.
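The following purely formal computation (a heuristic sketch only, not part of the proof) indicates why this identity should hold: if one accepts that $\psi^{(k)}_{p,h}$ behaves like the $k$th directional derivative of $\phi_p$ in the parameter~$p$, then

```latex
% Heuristic only: pretend \nabla\psi^{(k)}_{p,h} = D_p^k (\nabla\phi_p)\, h^{\otimes k}.
h \cdot D_p \nabla \psi^{(m)}_{p,h}
= D_p \left( D_p^{m} (\nabla\phi_p)\, h^{\otimes m} \right) \cdot h
= D_p^{m+1} (\nabla\phi_p)\, h^{\otimes (m+1)}
= \nabla \psi^{(m+1)}_{p,h}.
```

The estimate~\eqref{e.linearization.corr} below is the rigorous substitute for this Taylor-expansion picture.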
\smallskip
Before the main statements, let us collect a few elementary preliminary results needed in the proofs. The following lemma is well known and can be proved by the Lax--Milgram lemma (see for instance~\cite[Chapter 7]{JKO}).
\begin{lemma}
\label{l.abstractnonsense}
Let $\mathbf{a}(\cdot)$ be a $\mathbb{Z}^d$--stationary random field valued in the symmetric matrices satisfying the ellipticity bound $I_d \leq \mathbf{a} \leq \Lambda I_d$. Suppose that~$\mathbf{f}$ is a $\mathbb{Z}^d$--stationary, $\mathbb{R}^d$--valued random field satisfying
\begin{equation*}
\mathbb{E} \left[ \left\| \mathbf{f} \right\|_{L^2(\cu_0)}^2 \right]
< \infty.
\end{equation*}
Then there exists a unique random potential field $\nabla z$ satisfying
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \mathbf{a} \nabla z = \nabla \cdot \mathbf{f} \quad \mbox{in} \ \mathbb{R}^d,
\\ &
\nabla z \ \mbox{is $\mathbb{Z}^d$--stationary, \quad and} \quad
\mathbb{E} \left[ \int_{\cu_0} \nabla z(x)\,dx \right] = 0,
\end{aligned}
\right.
\end{equation*}
and, for a constant~$C(d,\Lambda)<\infty$, the estimate
\begin{equation*}
\mathbb{E} \left[ \left\| \nabla z \right\|_{L^2(\cu_0)}^2 \right]
\leq C\, \mathbb{E} \left[ \left\| \mathbf{f} \right\|_{L^2(\cu_0)}^2 \right].
\end{equation*}
\end{lemma}
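To make the lemma concrete, the following minimal sketch instantiates it in the simplest stationary setting: a one-dimensional periodic medium, where $-(a z')' = f'$ integrates to $a z' + f = c$ and the constant $c$ is determined by requiring $z'$ to have zero mean over the period. The coefficient $a$ and forcing $f$ below are arbitrary illustrative choices, not taken from the text.

```python
import math

# One period [0,1) of a 1d periodic medium, discretized at N points.
N = 1000
xs = [i / N for i in range(N)]
a = [2.0 + math.sin(2 * math.pi * x) for x in xs]   # ellipticity: 1 <= a <= 3
f = [math.cos(2 * math.pi * x) for x in xs]         # periodic forcing

# In d = 1 the equation -(a z')' = f' integrates to a z' + f = c.
# Periodicity of z forces the mean of z' to vanish, which determines c:
#   0 = mean(z') = mean((c - f)/a)  =>  c = mean(f/a) / mean(1/a).
c = (sum(fi / ai for fi, ai in zip(f, a)) / N) / (sum(1.0 / ai for ai in a) / N)
dz = [(c - fi) / ai for fi, ai in zip(f, a)]        # the gradient z'

mean_dz = sum(dz) / N                 # should vanish (up to rounding)
energy = sum(v * v for v in dz) / N   # discrete analogue of ||grad z||_{L^2}^2
rhs = sum(v * v for v in f) / N       # discrete analogue of ||f||_{L^2}^2
```

The energy estimate of the lemma holds in this toy example with, say, $C = 4$: since $a \ge 1$ and $c$ is a weighted average of $f$, one has $|z'| \le |c - f| \le 2\max|f|$.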
Next, a central object in our analysis is the quantity $\mathbf{F}_m$ defined in~\eqref{e.defFm}. Fix $m\in \{2,\ldots, \mathsf{N}+1\}$ and $x,p,h_1,\ldots,h_{m-1} \in \mathbb{R}^d$. One can easily read from the definition that we have
\begin{equation} \label{e.Fmalt}
\mathbf{F}_m(p,h_1,\ldots,h_{m-1},x)
= m! \sum_{2 \leq j \leq m}\frac{1}{j!}D_{p}^{j+1}L(p,x)\sum_{\stackrel{ i_{1}+\cdots+ i_{j}= m}{ i_{1},\dots, i_{j}\geq 1} } \prod_{k=1}^j \frac{h_{i_k}^{\otimes 1}}{i_{k}!} .
\end{equation}
It then follows, by Young's inequality, that there exists $C(m,\mathrm{data})<\infty$ such that
\begin{equation}
\label{e.Fmbasic}
\left| \mathbf{F}_m (p,h_1,\ldots,h_{m-1},x) \right| \leq C \sum_{i=1}^{m-1} |h_i|^{\frac{m}{i}} .
\end{equation}
We have, similarly, that
\begin{equation} \label{e.Fmbasicgradp}
\left[ \mathbf{F}_m (\cdot,h_1,\ldots,h_{m-1},x) \right]_{C^{0,1}(\mathbb{R}^d)} \leq C \sum_{i=1}^{m-1} |h_i|^{\frac{m}{i}}
\end{equation}
and, for $k \in \{1,\ldots,m-1\}$,
\begin{equation} \label{e.Fmbasicgradh}
\left| D_{h_k} \mathbf{F}_m (p ,h_1,\ldots,h_{m-1},x) \right| \leq C \sum_{i=1}^{m-k} |h_i|^{\frac{m-k}{i}}.
\end{equation}
Using these we get, for $x,p,\widetilde p, h_1,\widetilde h_1,\ldots, h_{m-1},\widetilde h_{m-1} \in \mathbb{R}^d$,
\begin{align} \label{e.Fmbasic2}
\lefteqn{\left| \mathbf{F}_m (p,h_1,\ldots,h_{m-1},x) - \mathbf{F}_m (\widetilde p,\widetilde h_1,\ldots,\widetilde h_{m-1},x)\right| } \quad &
\\ \notag &
\leq
C |p - \widetilde p| \sum_{i=1}^{m-1} \left( \left| h_i \right| + \left| \widetilde h_i \right| \right) ^{\frac{m}{i}}
+ C \sum_{i=1}^{m-1} \left| h_i - \widetilde h_i \right| \sum_{j=1}^{m-i} \left( \left| h_j \right| + \left| \widetilde h_j \right| \right)^{\frac{m-i}{j}} .
\end{align}
Therefore, by Young's inequality, we get, for all $\delta>0$,
\begin{align} \label{e.Fmbasic3}
\lefteqn{\left| \mathbf{F}_m (p,h_1,\ldots,h_{m-1},x) - \mathbf{F}_m (\widetilde p,\widetilde h_1,\ldots,\widetilde h_{m-1},x)\right| } \quad &
\\ \notag &
\leq
C \left( |p - \widetilde p| + \delta \right) \sum_{i=1}^{m-1} \left( \left| h_i \right| + \left| \widetilde h_i \right| \right)^{\frac{m}{i}}
+ C \delta \sum_{i=1}^{m-1} \left( \delta^{-1} \left| h_i - \widetilde h_i \right| \right)^{\frac mi} .
\end{align}
Furthermore, as we will be employing an induction argument in $m$, it is useful to notice that the leading term in $\mathbf{F}_m$
has a simple form, and we have
\begin{equation} \label{e.Fmdecomposed}
\mathbf{F}_m (p,h_1,\ldots,h_{m-1},x) = m D_p^{3} L(p,x) h_{m-1}^{\otimes 1} h_1^{\otimes 1} + \widetilde{\mathbf{F}}_m (p,h_1,\ldots,h_{m-2},x) ,
\end{equation}
with $\widetilde{\mathbf{F}}_2 = 0$.
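The combinatorial structure of~\eqref{e.Fmalt} can be sanity-checked numerically. The scalar ($d=1$) sketch below implements the right-hand side of~\eqref{e.Fmalt} for a polynomial $L$ and verifies two of the properties used in this section: the weighted homogeneity behind~\eqref{e.psimhomogen} (replacing $h_i$ by $t^i h_i$ scales $\mathbf{F}_m$ by $t^m$), and the fact that, for $m \geq 3$, the variable $h_{m-1}$ enters only through the leading term of~\eqref{e.Fmdecomposed}. All numerical values are arbitrary test inputs.

```python
import math
from itertools import product

def poly_deriv(coeffs, n):
    """n-th derivative of the polynomial sum(coeffs[k] * x**k)."""
    for _ in range(n):
        coeffs = [k * c for k, c in enumerate(coeffs)][1:]
    return coeffs

def poly_eval(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

def compositions(m, j):
    """Ordered tuples (i_1, ..., i_j) of positive integers with i_1 + ... + i_j = m."""
    return [c for c in product(range(1, m + 1), repeat=j) if sum(c) == m]

def F(m, L, p, h):
    """Scalar version of the right-hand side of (e.Fmalt); h = [h_1, ..., h_{m-1}]."""
    total = 0.0
    for j in range(2, m + 1):
        deriv = poly_eval(poly_deriv(list(L), j + 1), p)
        inner = sum(math.prod(h[i - 1] / math.factorial(i) for i in c)
                    for c in compositions(m, j))
        total += deriv * inner / math.factorial(j)
    return math.factorial(m) * total

L = [0.0] * 8 + [1.0]         # L(x) = x^8, so all derivatives used below are nonzero
p, t = 0.3, 1.7
h = [0.5, -1.2, 0.7]          # h_1, h_2, h_3 for m = 4

# Weighted homogeneity: F_4(p, t h_1, t^2 h_2, t^3 h_3) = t^4 F_4(p, h_1, h_2, h_3).
scaled = F(4, L, p, [t ** i * hi for i, hi in enumerate(h, start=1)])
homog_ok = math.isclose(scaled, t ** 4 * F(4, L, p, h), rel_tol=1e-9)

# Decomposition (e.Fmdecomposed): F_4 - 4 L'''(p) h_3 h_1 does not depend on h_3.
L3 = poly_eval(poly_deriv(list(L), 3), p)
def remainder(h3):
    return F(4, L, p, [h[0], h[1], h3]) - 4 * L3 * h3 * h[0]
decomp_ok = math.isclose(remainder(0.7), remainder(-2.9), rel_tol=1e-9)
```

The decomposition check reflects the observation that, in~\eqref{e.Fmalt}, a part equal to $m-1$ can only appear in a composition with $j=2$ parts, which produces exactly the leading term of~\eqref{e.Fmdecomposed}.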
Moreover, as shown in Appendix~\ref{s.AppendixFm}, if, for $p,h \in \mathbb{R}^d$, the map $t \mapsto \mathbf{g}(p+th)$ is~$m$ times differentiable at $t=0$ (note that $p\mapsto L(p,x)$ belongs to $C^{m+2}$ since $m \leq \mathsf{N}+1$), then
\begin{align} \label{e.Fmrelation}
\lefteqn{
\mathbf{F}_{m+1} (\mathbf{g}(p) , D_p \mathbf{g}(p) h^{\otimes 1} ,\ldots,D_p^{m} \mathbf{g}(p) h^{\otimes m},x)
} \quad &
\\ \notag &
= D_p \left( \mathbf{F}_{m} (\mathbf{g}(p),D_p \mathbf{g}(p) h^{\otimes 1},\ldots,D_p^{m-1} \mathbf{g}(p) h^{\otimes (m-1)},x) \right) \cdot h
\\ \notag & \quad
+ D_p \left( D_p^{2} L(\mathbf{g}(p),x) \right) h^{\otimes 1} \left( D_p^{m} \mathbf{g}(p) h^{\otimes m} \right)^{\otimes 1}.
\end{align}
\smallskip
Our first result in this section gives direct consequences of~\eqref{e.assumption.section3} for estimates on the first-order correctors and linearized correctors.
\begin{theorem}[Quantitative estimates on linearized correctors]
\label{t.correctorestimates}
Assume~\eqref{e.assumption.section3} is valid. Fix $\mathsf{M}\in [1,\infty)$. For every $m\in \{ 2,\ldots, n+1 \}$ and $p,h\in\mathbb{R}^d$, there exists a function $\psi^{(m)}_{p,h}$ satisfying~\eqref{e.mthlinearized.corr}.
Moreover, there exist constants $C(\mathsf{M},\mathrm{data})<\infty$ and $\sigma(n,d,\Lambda) \in \left(0,\tfrac12\right]$ and a random variable $\mathcal{X}$ satisfying $\mathcal{X} \leq \mathcal{O}_\sigma(C)$ such that the following statement is valid.
For every~$p\in B_{\mathsf{M}}$, $h \in \overline{B}_1$, $m\in \{1,\ldots,n\}$ and $r\geq \mathcal{X}$,
\begin{equation}
\label{e.linearization.corr}
\left\|
\nabla \phi_{p+h}
- \left( \nabla \phi_p + h + \sum_{k=1}^m \frac1{k!} \nabla \psi^{(k)}_{p,h} \right)
\right\|_{\underline{L}^2(B_r)}
\leq C |h|^{m+1}
\end{equation}
and, for every~$p\in B_{\mathsf{M}}$, $h \in \overline{B}_1$, $m\in \{1,\ldots,n+1\}$ and $r\geq \mathcal{X}$,
\begin{equation}
\label{e.sec3corrbnd0}
\left\|
\nabla \psi^{(m)}_{p,h}
\right\|_{\underline{L}^2(B_r)}
\leq C |h|^{m}.
\end{equation}
Finally, for $q \in [2,\infty)$ and $m\in \{1,\ldots,n+1\}$, there exist constants~$\delta(m,d,\Lambda)\in \left(0,\tfrac12\right]$ and~$C(q,m,\mathsf{M},\mathrm{data})<\infty$ such that, for every~$p\in B_{\mathsf{M}}$ and $h \in \overline{B}_1$,
\begin{equation}
\label{e.sec3corrbnd}
\left\|
\nabla \psi^{(m)}_{p,h}
\right\|_{L^q(\cu_0)}
\leq \mathcal{O}_{\delta}
\left(
C |h|^m
\right).
\end{equation}
\end{theorem}
\begin{proof}
Set, for $m \in \{1,\ldots,n+1\}$,
\begin{equation*}
\xi_0 := (p+h) \cdot x + \phi_{p+h}(x) - (p \cdot x + \phi_{p}(x)) \quad \mbox{and} \quad
\xi_m := \xi_0 - \sum_{k=1}^m \frac1{k!} \psi^{(k)}_{p,h} .
\end{equation*}
We first collect two consequences of Theorem~\ref{t.regularity.linerrors}, assumed for $n$. Fix $q \in [2,\infty)$. Theorem~\ref{t.regularity.linerrors} implies that there is a minimal scale $\mathcal{X}$ such that~\eqref{e.C01linsols} and~\eqref{e.C01linerror} are valid with $q (n+1)$ instead of $q$ and for every $r \in \left[ \mathcal{X} , \tfrac 12 R \right]$. Hence,
for every $r \in \left[ \mathcal{X} , \tfrac 12 R \right]$ and $m\in\{0,\ldots,n\}$, we get the estimates
\begin{align}
\label{e.C01linsolsapplied1}
\left\| \nabla \psi^{(m+1)}_{p,h} \right\|_{\underline{L}^{ q (n+1)}(B_r)}
&
\leq
C\sum_{i=1}^{m+1}
\left( \left\| \nabla \psi^{(i)}_{p,h} \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{m+1}i}
\end{align}
and
\begin{align}
\label{e.C01linerrorapplied1}
\left\| \nabla \xi_{m} \right\|_{\underline{L}^{q (n+1) }(B_r)}
\leq
C \sum_{i=0}^{m-1}
\left( \left\| \nabla \xi_{i} \right\|_{\underline{L}^2(B_R)} + \left\| \nabla \psi^{(i+1)}_{p,h} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{m+1}{i+1}} .
\end{align}
On the other hand, using~\eqref{e.Fmbasic} we obtain
\begin{equation*}
\left| \mathbf{f}^{(m)}_{p,h} \right| \leq C \sum_{i=1}^{m-1} \left| \nabla \psi^{(i)}_{p,h} \right|^{\frac{m}{i}}.
\end{equation*}
In particular, it follows by~\eqref{e.C01linsolsapplied1} that, for $m \in \{1,\ldots,n+2\}$, $R>2\mathcal{X}$ and $r \in [\mathcal{X},\tfrac12 R)$,
\begin{equation*}
\left\| \mathbf{f}^{(m)}_{p,h} \right\|_{\underline{L}^q(B_r)} \leq C \sum_{i=1}^{m-1} \left\| \nabla \psi^{(i)}_{p,h} \right\|_{\underline{L}^2(B_R)}^{\frac{m}{i}}.
\end{equation*}
Since $\nabla \psi^{(i)}_{p,h} $ is a $\mathbb{Z}^d$-stationary random field, we have by the ergodic theorem, after sending $R\to \infty$, that a.s.
\begin{equation*}
\left\| \mathbf{f}^{(m)}_{p,h} \right\|_{\underline{L}^q(\mathcal{X} \cu_0)} \leq C \sum_{i=1}^{m-1} \mathbb{E} \left[ \left\| \nabla \psi^{(i)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^{\frac{m}{2i}}.
\end{equation*}
Furthermore, by Lemma~\ref{l.abstractnonsense} and the previous display we get, for $m \in \{1,\ldots,n+2\}$, that
\begin{align} \notag
\mathbb{E} \left[ \left\| \nabla \psi^{(m)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]
&
\leq
C \mathbb{E} \left[ \left\| \mathbf{f}^{(m)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]
\\ \notag &
\leq
C \mathbb{E} \left[ \mathcal{X}^d \left\| \mathbf{f}^{(m)}_{p,h} \right\|_{\underline{L}^2(\mathcal{X} \cu_0)}^2 \right]
\leq
C \sum_{i=1}^{m-1} \mathbb{E} \left[ \left\| \nabla \psi^{(i)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^{\frac{m}{i}} .
\end{align}
Observe that the limiting behavior of $\xi_0$ and $\psi^{(1)}_{p,h}$ can be identified via their equations
\begin{equation*}
-\nabla \cdot \left(\widetilde{\mathbf{a}}_p \left(h + \nabla (\phi_{p+h} - \phi_{p}) \right) \right) = 0
\quad \mbox{and} \quad
- \nabla \cdot \left( \mathbf{a}_p \left(h + \nabla \psi^{(1)}_{p,h} \right) \right) = 0,
\end{equation*}
respectively, where
\begin{equation*}
\widetilde{\mathbf{a}}_p := \int_0^1 D_p^2L\left( p + th + t \nabla \phi_{p+h} + (1-t)\nabla \phi_{p} ,\cdot \right) \, dt .
\end{equation*}
By the $\mathbb{Z}^d$-stationarity of $\nabla \phi_p$ and $\nabla \phi_{p+h}$, which implies the $\mathbb{Z}^d$-stationarity of $\mathbf{a}_p$ and $\widetilde{\mathbf{a}}_p$, we may apply Lemma~\ref{l.abstractnonsense} to obtain
\begin{equation*}
\mathbb{E} \left[ \left\| \nabla \xi_{0} \right\|_{L^2(\cu_0)}^2 + \left\| \nabla \psi^{(1)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^\frac12 \leq C |h| .
\end{equation*}
It then follows inductively that, for $m \in \{1,\ldots,n+2\}$,
\begin{equation*}
\mathbb{E} \left[ \left\| \nabla \psi^{(m)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^\frac12 \leq C |h|^m .
\end{equation*}
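Indeed, the induction is elementary: assuming the bound for every order $i < m$ (the case $m=1$ being the previous display), the recursion above gives

```latex
\mathbb{E} \left[ \left\| \nabla \psi^{(m)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]
\leq C \sum_{i=1}^{m-1} \mathbb{E} \left[ \left\| \nabla \psi^{(i)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^{\frac{m}{i}}
\leq C \sum_{i=1}^{m-1} \left( C |h|^{2i} \right)^{\frac{m}{i}}
\leq C |h|^{2m},
```

since every summand scales as $\left(|h|^{2i}\right)^{m/i} = |h|^{2m}$.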
Using this together with the ergodic theorem and~\eqref{e.C01linsolsapplied1},~\eqref{e.C01linerrorapplied1}, we obtain inductively, for $q\in [2,\infty)$,
$m \in \{1,\ldots,n+1\}$ and $r \geq \mathcal{X}$,
\begin{equation*}
\left\| \nabla \psi^{(m)}_{p,h} \right\|_{\underline{L}^q(B_r)} \leq C |h|^{m}
\quad \mbox{and} \quad
\left\| \nabla \xi_{m-1} \right\|_{\underline{L}^{q}(B_r)} \leq C |h|^{m} .
\end{equation*}
Now~\eqref{e.sec3corrbnd} follows by giving up a volume factor. The proof is complete.
\end{proof}
We next show, again using~\eqref{e.assumption.section3}, that the corrector $\psi^{(n+2)}_{p,h}$ satisfies an~$L^2$--type gradient estimate.
\begin{lemma} \label{l.psinplus2}
Assume~\eqref{e.assumption.section3} is valid. Let $\mathsf{M} \in [1,\infty)$. Suppose that~$p \in B_{\mathsf{M}}$ and~$h \in \overline{B}_1$.
There exists~$\psi^{(n+2)}_{p,h}$ satisfying~\eqref{e.mthlinearized.corr} for $m=n+2$ and a constant $C(n,\mathsf{M},\mathrm{data})<\infty$ such that
\begin{equation*}
\mathbb{E} \left[ \left\| \nabla \psi^{(n+2)}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^{\frac12} \leq C |h|^{n+2}.
\end{equation*}
\end{lemma}
\begin{proof}
The result follows directly by Lemma~\ref{l.abstractnonsense} and~\eqref{e.sec3corrbnd}, using~\eqref{e.Fmbasic}.
\end{proof}
\begin{lemma} \label{l.stupidpsi}
Assume~\eqref{e.assumption.section3} is valid. Suppose that $p \in B_{\mathsf{M}}$ and $h \in \overline{B}_1$. Then, for $m\in \{1,\ldots,n+1\}$, we have, a.s.\ and a.e.,
\begin{equation} \label{e.gradpsiformula}
\nabla \psi^{(m+1)}_{p,h} = \sum_{i=1}^d h_i\partial_{p_i} \nabla \psi^{(m)}_{p,h} .
\end{equation}
Moreover, for $\beta \in (0,1)$ and $m \in \{1,\ldots,n+1\}$, there exists~$C(\beta,m,\mathsf{M},\mathrm{data})<\infty$ such that, for $t \in (-1,1)$,
\begin{equation} \label{e.sec3contest}
\mathbb{E} \left[ \left\|
\nabla \psi_{p + th,h}^{(m)} - \nabla \psi_{p,h}^{(m)} - t \nabla \psi_{p,h}^{(m+1)}
\right\|_{L^2(\cu_0)}^2 \right]^\frac12 \leq C |h|^{m+1} |t|^{1+\beta}.
\end{equation}
For $m \in \{1,\ldots,n\}$, we can take $\beta =1$ in~\eqref{e.sec3contest}. Finally, we have, for $m \in \{1,\ldots,n+2\}$, that, a.s.\ and a.e.,
\begin{equation} \label{e.Fmrelationapplied}
\mathbf{f}^{(m+1)}_{p,h} = \left( D_p \mathbf{a}_p \cdot h \right) \nabla \psi_{p,h}^{(m)} + D_p \mathbf{f}^{(m)}_{p,h} \cdot h.
\end{equation}
\end{lemma}
\begin{proof} \
Fix $p \in \mathbb{R}^d$ and, without loss of generality, $h \in \partial B_1$. By~\eqref{e.linearization.corr} we have that $p \mapsto p + \nabla \phi_p(x)$ is $C^n$ and $D_p^j \nabla \phi_p(x) h^{\otimes j} = \nabla \psi_{p,h}^{(j)}(x)$ for every $j \in \{2,\ldots,n\}$, almost surely and for almost every $x$. Thus~\eqref{e.sec3contest} is valid with $\beta=1$ for $m\in \{1,\ldots,n-1\}$
and, by~\eqref{e.Fmrelation},
\begin{equation} \label{e.e.Fmrelationapplied1}
\mathbf{f}_{p,h}^{(n+1)} = \left( D_p \mathbf{f}_{p,h}^{(n)} + D_p \mathbf{a}_p \left( \nabla \psi_{p,h}^{(n)} \right)^{\otimes 1} \right)\cdot h.
\end{equation}
We denote, in short, for $t \neq 0$,
\begin{equation*}
\zeta_{p,h,t}^{(n)} := \frac1t \left( \psi_{p + th,h}^{(n)} - \psi_{p,h}^{(n)} - t \psi_{p,h}^{(n+1)} \right).
\end{equation*}
Observe that, by~\eqref{e.sec3corrbnd}, $\mathbb{E} \left[ \left\|
\nabla \zeta_{p,h,t}^{(n)}
\right\|_{L^2(\cu_0)}^2 \right] < \infty$ for $t\neq 0$.
\smallskip
\emph{Step 1}. We prove that, for $t\in (-1,1)$, $t\neq 0$,
\begin{align} \label{e.sec3contest1}
\mathbb{E} \left[ \left\|
\nabla \zeta_{p,h,t}^{(n)}
\right\|_{L^2(\cu_0)}^2 \right]
&
\leq
C \mathbb{E} \left[ \left\| D_p \mathbf{a}_{p} \cdot h \left( \nabla \psi_{p,h}^{(n)} - \nabla \psi_{p+th,h}^{(n)} \right) \right\|_{L^2(\cu_0)}^2 \right]
\\ \notag & \quad
+
C \mathbb{E} \left[ \left\| \frac1t \left( \mathbf{a}_{p+th} - \mathbf{a}_p - t D_p \mathbf{a}_{p} \cdot h \right) \nabla \psi_{p+th,h}^{(n)} \right\|_{L^2(\cu_0)}^2 \right]
\\ \notag & \quad
+
C \mathbb{E} \left[ \left\| \frac1t \left( \mathbf{f}_{p+th,h}^{(n)} - \mathbf{f}_{p,h}^{(n)} - t D_p \mathbf{f}_{p,h}^{(n)} \cdot h \right) \right\|_{L^2(\cu_0)}^2 \right] .
\end{align}
To show~\eqref{e.sec3contest1}, we first claim that the difference quotient solves the equation
\begin{multline} \label{e.sec3conteqn}
\nabla \cdot \left( \mathbf{a}_p \nabla \zeta_{p,h,t}^{(n)} \right)
=
\nabla \cdot \left( D_p \mathbf{a}_{p} \cdot h \left( \nabla \psi_{p,h}^{(n)} - \nabla \psi_{p+th,h}^{(n)} \right) \right)
\\ - \frac1t \nabla \cdot \left( \left( \mathbf{a}_{p+th} - \mathbf{a}_p - t D_p \mathbf{a}_{p} \cdot h \right) \nabla \psi_{p+th,h}^{(n)} + \left( \mathbf{f}_{p+th,h}^{(n)} - \mathbf{f}_{p,h}^{(n)} - t D_p \mathbf{f}_{p,h}^{(n)} \cdot h \right) \right).
\end{multline}
Indeed, once~\eqref{e.sec3conteqn} is established,~\eqref{e.sec3contest1} follows by Lemma~\ref{l.abstractnonsense}. Rewriting
\begin{align} \notag
t\, \mathbf{a}_p \nabla \zeta_{p,h,t}^{(n)}&
= \left( \mathbf{a}_{p+th} \nabla \psi_{p+th,h}^{(n)} + \mathbf{f}_{p+th,h}^{(n)} \right) - \left( \mathbf{a}_{p} \nabla \psi_{p,h}^{(n)} + \mathbf{f}_{p,h}^{(n)} \right) - t
\left( \mathbf{a}_{p} \nabla \psi_{p,h}^{(n+1)} + \mathbf{f}_{p,h}^{(n+1)} \right)
\\ \notag & \quad
+ t \left( \mathbf{f}_{p,h}^{(n+1)} - D_p \mathbf{a}_{p} \cdot h \, \nabla \psi_{p,h}^{(n)} - D_p \mathbf{f}_{p,h}^{(n)} \cdot h \right)
\\ \notag & \quad
+ t D_p \mathbf{a}_{p} \cdot h \left( \nabla \psi_{p,h}^{(n)} - \nabla \psi_{p+th,h}^{(n)} \right)
\\ \notag & \quad
- \left( \mathbf{a}_{p+th} - \mathbf{a}_p - t D_p \mathbf{a}_{p} \cdot h \right) \nabla \psi_{p+th,h}^{(n)} - \left( \mathbf{f}_{p+th,h}^{(n)} - \mathbf{f}_{p,h}^{(n)} - t D_p \mathbf{f}_{p,h}^{(n)} \cdot h \right),
\end{align}
we observe that the first three terms on the right are solenoidal by the equations of~$\psi_{p+th,h}^{(n)}$,~$\psi_{p,h}^{(n)}$ and~$\psi_{p,h}^{(n+1)}$, respectively, and the fourth term on the right is zero by~\eqref{e.e.Fmrelationapplied1}. We thus obtain~\eqref{e.sec3conteqn}.
\smallskip
\emph{Step 2}. We will estimate the terms on the right in~\eqref{e.sec3contest1} separately, and in this step we first show that, for $t\in (-1,1)$, $t\neq 0$,
\begin{equation} \label{e.sec3contest2}
\mathbb{E} \left[ \left\| D_p \mathbf{a}_{p} \cdot h \left( \nabla \psi_{p,h}^{(n)} - \nabla \psi_{p+th,h}^{(n)} \right) \right\|_{L^2(\cu_0)}^2 \right] \leq
C |t|^{2\beta}\left( 1 + \mathbb{E} \left[ \left\| \nabla \zeta_{p,h,t}^{(n)} \right\|^2_{L^2(\cu_0)} \right] \right)^{\beta} .
\end{equation}
By the triangle inequality we have
\begin{equation*}
\left| \nabla \psi_{p + th,h}^{(n)} - \nabla \psi_{p,h}^{(n)} \right|
\leq |t|^{\beta}
\left( \left| \nabla \psi_{p + th,h}^{(n)}\right| + \left|\nabla \psi_{p,h}^{(n)} \right| \right)^{1-\beta}
\left( \left| \nabla \psi_{p,h}^{(n+1)} \right| + \left| \nabla \zeta_{p,h,t}^{(n)} \right| \right)^\beta.
\end{equation*}
Therefore, by H\"older's inequality and~\eqref{e.sec3corrbnd},
\begin{align} \notag
\lefteqn{
\mathbb{E} \left[
\left\| D_p \mathbf{a}_{p} h^{\otimes 1} \left( \nabla \psi_{p + th,h}^{(n)} - \nabla \psi_{p,h}^{(n)} \right) \right\|^2_{L^2(\cu_0)}
\right]
} \quad &
\\ \notag &
\leq C |t|^{2\beta} \mathbb{E} \left[
\left\| D_p \mathbf{a}_{p} \right\|_{L^{\frac{4}{1-\beta}} (\cu_0) }^2 \left( \left\| \nabla \psi_{p + th,h}^{(n)}\right\|_{L^4(\cu_0)} + \left\| \nabla \psi_{p,h}^{(n)}\right\|_{L^4(\cu_0)}\right)^2
\right]^{1-\beta}
\\ \notag &
\qquad \times
\left( \mathbb{E} \left[ \left\| \nabla \psi_{p,h}^{(n+1)} \right\|^2_{L^2(\cu_0)} \right] +
\mathbb{E} \left[ \left\| \nabla \zeta_{p,h,t}^{(n)} \right\|^2_{L^2(\cu_0)} \right] \right)^{\beta}
\\ \notag &
\leq
C |t|^{2\beta} \left( 1 +
\mathbb{E} \left[ \left\| \nabla \zeta_{p,h,t}^{(n)} \right\|^2_{L^2(\cu_0)} \right] \right)^{\beta},
\end{align}
which is~\eqref{e.sec3contest2}.
\smallskip
\emph{Step 3}.
We show that
\begin{align} \label{e.sec3contest3}
\mathbb{E} \left[ \left\| \left( \mathbf{a}_{p+th} - \mathbf{a}_p - t D_p \mathbf{a}_{p} \cdot h \right) \nabla \psi_{p+th,h}^{(n)} \right\|_{L^2(\cu_0)}^2 \right] \leq C t^4 .
\end{align}
We have that
\begin{equation*}
\mathbf{a}_{p+th} - \mathbf{a}_p - t D_p \mathbf{a}_{p} \cdot h = t^2 \int_0^1 s_1 \int_0^1 D_p^2 \mathbf{a}_{p+s_1s_2 th} h^{\otimes 2} \, ds_2 \,ds_1 ,
\end{equation*}
and since
\begin{equation*}
D_p^2 \mathbf{a}_{p}\bigg|_{p=z} h^{\otimes2}
= D_p^3 L(z+ \nabla \phi_z,\cdot) \left( \nabla \psi_{z,h}^{(2)} \right)^{\otimes 1} + D_p^4 L(z+ \nabla \phi_z,\cdot) \left( h + \nabla \psi_{z,h}^{(1)} \right)^{\otimes 2},
\end{equation*}
we obtain~\eqref{e.sec3contest3} by~\eqref{e.sec3corrbnd}.
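The integral form of the Taylor remainder used in Step 3 is elementary but easy to misremember. The sketch below (with the arbitrary stand-in $g(p) = e^p$ playing the role of a scalar component of $p \mapsto \mathbf{a}_p$) checks numerically the identity $g(p+th) - g(p) - t h\, g'(p) = t^2 h^2 \int_0^1 s_1 \int_0^1 g''(p + s_1 s_2 t h)\, ds_2\, ds_1$, which is the source of the factor $t^2$.

```python
import math

def g(x):    # arbitrary smooth stand-in; its first and second derivatives follow
    return math.exp(x)

def dg(x):
    return math.exp(x)

def d2g(x):
    return math.exp(x)

p, t, h = 0.4, 0.25, 0.8

# Left-hand side: second-order Taylor remainder of g at p with increment t*h.
lhs = g(p + t * h) - g(p) - t * h * dg(p)

# Right-hand side: t^2 h^2 * int_0^1 s1 int_0^1 g''(p + s1*s2*t*h) ds2 ds1,
# evaluated by composite midpoint quadrature on an M x M grid.
M = 1000
step = 1.0 / M
rhs = 0.0
for i in range(M):
    s1 = (i + 0.5) * step
    inner = sum(d2g(p + s1 * ((j + 0.5) * step) * t * h) for j in range(M)) * step
    rhs += s1 * inner * step
rhs *= t * t * h * h
```

The identity follows by writing $G(s) = g(p + s t h)$, so that the remainder equals $\int_0^1 (G'(s_1) - G'(0))\, ds_1$, and substituting $u = s_1 s_2$ in the inner integral of $G''$.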
\smallskip
\emph{Step 4}. We then prove that
\begin{align} \label{e.sec3contest4}
\mathbb{E} \left[ \left\| \mathbf{f}_{p+th,h}^{(n)} - \mathbf{f}_{p,h}^{(n)} - t D_p \mathbf{f}_{p,h}^{(n)} \cdot h \right\|_{L^2(\cu_0)}^2 \right] \leq C |t|^{2(1+\beta)}\left( 1 + \mathbb{E} \left[ \left\| \nabla \zeta_{p,h,t}^{(n)} \right\|^2_{L^2(\cu_0)} \right] \right)^{\beta} .
\end{align}
Using the decomposition~\eqref{e.Fmdecomposed}, we have that
\begin{align} \notag
D_p \mathbf{f}_{p,h}^{(n)} \cdot h
&
= D_p \left( n D_p \mathbf{a}_p \cdot h \, \nabla \psi_{p,h}^{(n-1)} + \widetilde{\mathbf{f}}_{p,h}^{(n)} \right) \cdot h
\\ & \notag
= n D_p \mathbf{a}_p \cdot h \, \nabla \psi_{p,h}^{(n)} + n D_p^2 \mathbf{a}_p h^{\otimes 2} \nabla \psi_{p,h}^{(n-1)} + D_p \widetilde{\mathbf{f}}_{p,h}^{(n)} \cdot h
\\ & \notag
= n D_p \mathbf{a}_p \cdot h \, \nabla \psi_{p,h}^{(n)} + \mathbf{g}_{p,h}^{(n-1)},
\end{align}
where
\begin{equation*}
\mathbf{g}_{p,h}^{(n-1)} := n D_p^2 \mathbf{a}_p h^{\otimes 2} \nabla \psi_{p,h}^{(n-1)} + D_p \widetilde{\mathbf{f}}_{p,h}^{(n)} \cdot h
\end{equation*}
is a function of $\nabla \phi_p,\nabla \psi_{p,h}^{(1)}, \ldots,\nabla \psi_{p,h}^{(n-1)}$ for $n \geq 2$, and thus differentiable in~$p$. In particular, by~\eqref{e.sec3corrbnd},
for every $q \in [2,\infty)$ there are $\delta(q,n,\mathrm{data})>0$ and $C(q,n,\mathsf{M},\mathrm{data})<\infty$ such that
\begin{equation*}
\left\| \mathbf{g}_{p,h}^{(n-1)} \right\|_{L^q(\cu_0)} + \left\| D_p \mathbf{g}_{p,h}^{(n-1)} \right\|_{L^q(\cu_0)}
\leq \mathcal{O}_\delta(C).
\end{equation*}
Using the above decomposition for $D_p \mathbf{f}_{p,h}^{(n)} \cdot h$, we compute
\begin{align} \notag
\lefteqn{
\mathbf{f}_{p+th,h}^{(n)} - \mathbf{f}_{p,h}^{(n)} - t D_p \mathbf{f}_{p,h}^{(n)} \cdot h
} \quad &
\\ \notag &
= t \int_0^1 \left( D_p \mathbf{f}_{p+s_1 t h,h}^{(n)} - D_p \mathbf{f}_{p,h}^{(n)} \right) \cdot h \, ds_1
\\ \notag &
=
n t \int_0^1 D_p \mathbf{a}_{p} \cdot h \left( \nabla \psi_{p+s_1 t h,h}^{(n)} - \nabla \psi_{p,h}^{(n)} \right)
\, ds_1
\\ \notag & \quad
+ t^2 \int_0^1 s_1 \int_{0}^1 \left( n D_p^2 \mathbf{a}_{p+s_1s_2 t h} h^{\otimes 2} \nabla \psi_{p+s_1 t h,h}^{(n)} + \left( D_p \mathbf{g}_{p + s_1s_2 th ,h}^{(n-1)} \right) \cdot h \right) \, ds_2 \, ds_1 .
\end{align}
We have by~\eqref{e.sec3corrbnd} that
\begin{equation*}
\left\| n D_p^2 \mathbf{a}_{p+s_1s_2 t h} h^{\otimes 2} \nabla \psi_{p+s_1 t h,h}^{(n)} + \left( D_p \mathbf{g}_{p + s_1s_2 th ,h}^{(n-1)} \right) \cdot h \right\|_{L^q(\cu_0)} \leq \mathcal{O}_\delta(C),
\end{equation*}
and therefore
\begin{equation*}
\mathbf{f}_{p+th,h}^{(n)} - \mathbf{f}_{p,h}^{(n)} - t D_p \mathbf{f}_{p,h}^{(n)} \cdot h = n t \int_0^1 D_p \mathbf{a}_{p} \cdot h \left( \nabla \psi_{p+s t h,h}^{(n)} - \nabla \psi_{p,h}^{(n)} \right) ds + \mathcal{O}_\delta(Ct^2).
\end{equation*}
The right-hand side can be estimated with the aid of~\eqref{e.sec3contest2} to obtain~\eqref{e.sec3contest4}.
\smallskip
\emph{Step 5}. Conclusion. Combining~\eqref{e.sec3contest1} with~\eqref{e.sec3contest2},~\eqref{e.sec3contest3} and~\eqref{e.sec3contest4} yields~\eqref{e.sec3contest} by Young's inequality. Now,~\eqref{e.sec3contest} implies~\eqref{e.gradpsiformula} for $m=n$. Therefore, we may replace $n$ by $n+1$ in Steps 1--4 above, and conclude that~\eqref{e.sec3contest} is valid for $m=n+1$ as well, which then gives~\eqref{e.gradpsiformula} for $m=n+1$. Using the formula thus obtained, it is straightforward to show that~\eqref{e.sec3contest} is valid for $m=n$ with $\beta=1$. Indeed, we notice that
\begin{equation*}
\mathbb{E} \left[ \left\|
\nabla \psi_{p + th,h}^{(m)} - \nabla \psi_{p,h}^{(m)} - t \nabla \psi_{p,h}^{(m+1)} - \frac{t^2}2 \nabla \psi_{p,h}^{(m+2)}
\right\|_{L^2(\cu_0)}^2 \right]^\frac12 \leq C |t|^{2+\beta},
\end{equation*}
from which we get~\eqref{e.sec3contest} for $m=n$ with $\beta=1$. Finally,~\eqref{e.sec3contest} implies that $t \mapsto \nabla \phi_{p+th}$ is $C^{n+2,\beta}$ in a neighborhood of $t=0$, and thus~\eqref{e.Fmrelationapplied} is valid by~\eqref{e.Fmrelation}. The proof is complete.
\end{proof}
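For orientation, the second-order expansion established in the preceding proof is the exact analogue, measured in $\mathbb{E} \left[ \| \cdot \|_{L^2(\cu_0)}^2 \right]^{\frac12}$, of the elementary Taylor estimate
\begin{equation*}
\left| g(t) - g(0) - t g'(0) - \frac{t^2}{2} g''(0) \right| \leq C |t|^{3}
\end{equation*}
for a scalar function $g \in C^{2,1}$: the linearized correctors $\nabla \psi^{(m+1)}_{p,h}$ and $\nabla \psi^{(m+2)}_{p,h}$ play the roles of the first and second derivatives of $t \mapsto \nabla \psi^{(m)}_{p+th,h}$ at $t=0$.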
\begin{lemma} \label{l.polarizationD_pphi}
Assume~\eqref{e.assumption.section3} is valid. Let $\mathsf{M} \in [1,\infty)$ and $p \in B_{\mathsf{M}}$. Then $p \mapsto p + \nabla \phi_p$ is $(n+2)$ times differentiable with respect to $p$ and, for $q \in [2,\infty)$, there are constants $\delta(q,n,\mathrm{data}) \in \left(0, \tfrac12 \right]$ and $C(q,n,\mathsf{M},\mathrm{data})<\infty$ such that, for $m \in \{1,\ldots,n+1\}$,
\begin{equation*}
\left\| D_p^{m} \phi_p \right\|_{L^q(\cu_0)} \leq \mathcal{O}_\delta(C)
\quad \mbox{and} \quad
\mathbb{E} \left[ \left\| D_p^{n+2} \phi_p \right\|_{L^2(\cu_0)}^2 \right] \leq C.
\end{equation*}
\end{lemma}
The proof of Lemma~\ref{l.polarizationD_pphi} relies on a general polarization principle for multilinear forms, which is formalized in the following lemma.
\begin{lemma}
\label{l.polarization}
Let $V$ be a real, finite-dimensional vector space, and let $\Phi:V^{n}\to\mathbb{R}$ be a multilinear, symmetric form, that is,
for all $v_1,\ldots,v_n \in V$ and any permutation $\sigma$ of $\{1,\ldots,n\}$, we have
\begin{equation*}
\Phi(v_1,\ldots,v_{n}) = \Phi(v_{\sigma(1)},\ldots,v_{\sigma(n)}) .
\end{equation*}
For $v\in V$, define $\phi(v):=\Phi(v, v, \dots, v)$.
Then, for $v_{1}, \dots, v_{n}\in V$, the polarization formula
\[
\Phi(v_{1}, \dots, v_{n})=\frac{1}{n!}\sum_{A\subseteq\{1,\dots, n\}}(-1)^{n-|A|}\phi\left(\sum_{j\in A}v_{j}\right)
\]
holds, where the summation is over all non-empty subsets $A\subseteq \{1,2,\dots, n\}$ and $|A|$ denotes the number of elements of $A$.
\end{lemma}
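For orientation, in the simplest nontrivial case $n=2$ the non-empty subsets of $\{1,2\}$ are $\{1\}$, $\{2\}$ and $\{1,2\}$, and the formula reads
\begin{equation*}
\Phi(v_1,v_2) = \frac{1}{2} \left( \phi(v_1+v_2) - \phi(v_1) - \phi(v_2) \right),
\end{equation*}
which is the classical polarization identity for symmetric bilinear forms: for instance, taking $\Phi(v,w) = v \cdot w$, so that $\phi(v) = |v|^2$, it reduces to $v_1 \cdot v_2 = \frac12 \left( |v_1+v_2|^2 - |v_1|^2 - |v_2|^2 \right)$.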
\begin{proof}
We show the equivalent statement that
\[
\sum_{A\subseteq\{1,\dots, n\}}(-1)^{n-|A|}\phi\left(\sum_{j\in A}v_{j}\right)=n!\,\Phi(v_{1}, \dots, v_{n}),
\]
where the sum on the left is over all non-empty subsets $A$ of $\{1,2,\dots, n\}$.
For this, we begin by expanding each summand $\phi\left(\sum_{j\in A}v_{j}\right)=\Phi\left(\sum_{j\in A}v_{j},\dots, \sum_{j\in A}v_{j}\right)$ fully, as a sum of terms of the form $\Phi(v_{j_{1}}, \dots, v_{j_{n}})$ with $j_1,\dots, j_n\in A$.
Using the symmetry of $\Phi$, each such term can be written as $\Phi(v_{f(1)}, \dots, v_{f(n)})$ with non-decreasing indices $f(1)\leq f(2)\leq \cdots \leq f(n)$ in $A$.
Denote
\begin{equation*}
\mathcal{M} := \{ f: \{1,\ldots,n\} \to \{1,\ldots,n\} \, : \, f(1) \leq f(2) \leq \ldots \leq f(n) \}
\end{equation*}
and, for $f \in \mathcal{M}$,
\begin{equation*}
\text{im}\, f := \bigcup_{j \in \{1,\ldots,n\}}\{ f(j) \} .
\end{equation*}
Letting $c_{A}(f)$ denote the number of ordered $n$-tuples $(j_{1}, \dots, j_{n})$ of elements of $A$ which can be reordered to form $(f(1),\dots, f(n))$, it follows that the expression $\sum_{A\subseteq\{1,\dots, n\}}(-1)^{n-|A|}\phi\left(\sum_{j\in A}v_{j}\right)$ can be expanded to give
\[
\sum_{A\subseteq\{1,\dots, n\}}(-1)^{n-|A|}\sum_{f\in \mathcal{M} , \; \text{im} \, f \subseteq A}c_{A}(f)\Phi(v_{f(1)}, \dots, v_{f(n)}).
\]
Changing the order of summation gives
\[
\sum_{ f\in \mathcal{M} }c_{\{1,\dots, n\}}(f)\Phi(v_{f(1)},\dots, v_{f(n)})\sum_{A\supseteq \text{im}\,f}(-1)^{n-|A|},
\]
where the inner sum is over all subsets $A$ of $\{1,2,\dots, n\}$ which contain $\text{im}\,f$.
Each such subset $A$ can be written as $\{f(1), \dots, f(n)\}\cup B$ for some set $B$, possibly empty, satisfying $B\subseteq \{1,2,\dots, n\}\setminus \text{im}\,f $.
Hence, as $|A|=|\text{im}\, f|+|B|$ for $B$ defined in this way, we can write our expression as
\[
\sum_{ f\in \mathcal{M} } c_{\{1,\dots, n\}}(f)\Phi(v_{f(1)},\dots, v_{f(n)})\,s_{f},
\]
where
\[
s_{f}=(-1)^{n-|\text{im}\, f| }\sum_{B\subseteq ( \{1,\dots, n\}\setminus \text{im}\, f)}(-1)^{|B|}.
\]
Finally, it is a well-known combinatorial fact that, for every nonempty finite set $S$,
\begin{equation*}
\sum_{B\subseteq S}(-1)^{|B|}= \sum_{j=0}^{|S|} (-1)^j \binom{|S|}{j} = 0,
\end{equation*}
where the sum on the left is over all subsets $B$ of $S$. Thus, above, we have $s_{f}=0$ unless $\{1,\dots, n\}\setminus \text{im}\, f$ is empty, that is, unless $\text{im}\, f = \{1,\ldots,n\}$ and hence $f(1)< \cdots < f(n)$. In that case $f(1)=1, f(2)=2, \dots, f(n)=n$, so that $s_{f}=1$ and $c_{\{1,\dots, n\}}(f)=n!$. It follows that
\[
\sum_{A\subseteq\{1,\dots, n\}}(-1)^{n-|A|}\phi\left(\sum_{j\in A}v_{j}\right)=n!\,\Phi(v_{1},\dots, v_{n}),
\]
as was to be shown.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{l.polarizationD_pphi}]
The lemma follows from Lemmas~\ref{l.psinplus2},~\ref{l.stupidpsi} and~\ref{l.polarization}, together with~\eqref{e.sec3corrbnd}.
\end{proof}
We next prove H\"older continuity of $p\mapsto D_p^{n+2} \phi_p$ and $p \mapsto \mathbf{f}^{n+2}_{p,h}$.
\begin{lemma} \label{l.continuityinp}
Assume~\eqref{e.assumption.section3} is valid. Let $\mathsf{M} \in [1,\infty)$ and $\beta \in (0,1)$. Then there is a constant $C(\beta,n,\mathsf{M},\mathrm{data})<\infty$ such that, for all $p,p' \in B_{\mathsf{M}}$ and $h \in \overline{B}_1\setminus\{0\}$,
\begin{equation} \label{e.continuityinp}
\mathbb{E} \left[ \left\| \nabla D_p^{n+2} \phi_p - \nabla D_p^{n+2} \phi_{p'} \right\|_{L^2(\cu_0)}^2 \right]^\frac12 + |h|^{-n-2} \mathbb{E} \left[ \left\| \mathbf{f}^{n+2}_{p',h} - \mathbf{f}^{n+2}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^\frac12 \leq C \left| p- p' \right|^\beta.
\end{equation}
\end{lemma}
\begin{proof}
Fix $\mathsf{M} \in [1,\infty)$ and $\beta \in (0,1)$, and fix $p,p' \in B_{\mathsf{M}}$ and $h \in \overline{B}_1\setminus\{0\}$.
By Lemma~\ref{l.abstractnonsense} and the equations for $\psi_{p,h}^{(n+2)}$ and $\psi_{p',h}^{(n+2)}$, we have that
\begin{equation*}
\mathbb{E} \left[ \left\| \nabla \psi_{p,h}^{(n+2)} - \nabla \psi_{p',h}^{(n+2)}\right\|_{L^2(\cu_0)}^2 \right] \leq
C \mathbb{E} \left[ \left\| \mathbf{f}^{n+2}_{p',h} - \mathbf{f}^{n+2}_{p,h} \right\|_{L^2(\cu_0)}^2 \right] .
\end{equation*}
Therefore, in view of Lemma~\ref{l.polarization}, it is enough to show that
\begin{equation} \label{e.continuityinppre}
\mathbb{E} \left[ \left\| \mathbf{f}^{n+2}_{p',h} - \mathbf{f}^{n+2}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^\frac12 \leq C|h|^{n+2} |p-p'|^\beta .
\end{equation}
By~\eqref{e.Fmdecomposed} we may decompose $\mathbf{f}^{n+2}_{p,h}$ as
\begin{align} \label{e.fnplus2decomposed}
\mathbf{f}^{n+2}_{p,h} & = (n+2) D_p^{3} L(p + \nabla \phi_p, \cdot)\left(h + \nabla (D_p \phi_{p} h^{\otimes 1})\right)^{\otimes 1} \left( \nabla \psi_{p,h}^{(n+1)} \right)^{\otimes 1}
\\ \notag &
\quad
+ \widetilde{\mathbf{F}}_{n+2} \left(p + \nabla \phi_p ,h + \nabla \left(D_p \phi_{p} h^{\otimes 1}\right) ,\ldots, \nabla \left( D_p^n \phi_{p} h^{\otimes n}\right) ,\cdot\right) .
\end{align}
Since~$\widetilde{\mathbf{F}}_{n+2}$ is $C^{0,1}$ in its first argument and polynomial in its last $n$ arguments, we obtain by homogeneity, the chain rule and Lemma~\ref{l.polarizationD_pphi} that
\begin{align} \notag
& \mathbb{E} \bigg[
\bigg\| \widetilde{\mathbf{F}}_{n+2} \left(p {+}\nabla \phi_p ,h {+} \nabla \left(D_p \phi_{p} h^{\otimes 1}\right) ,{\ldots},\nabla \left( D_p^n \phi_{p} h^{\otimes n}\right) ,\cdot \right)
\\ \notag & \ \
- \widetilde{\mathbf{F}}_{n+2} \left(p' {+}\nabla \phi_{p'} ,h {+} \nabla \left(D_p \phi_{p'} h^{\otimes 1}\right),{\ldots},\nabla \left( D_p^n \phi_{p'} h^{\otimes n}\right) ,\cdot \right)
\bigg\|_{L^2(\cu_0)}^2 \bigg]^\frac12
\leq
C|h|^{n+2} |p-p'| .
\end{align}
Therefore, the leading term is the first one on the right in~\eqref{e.fnplus2decomposed}. Indeed, we observe by the above display that
\begin{multline*}
\mathbb{E} \left[ \left\| \mathbf{f}^{n+2}_{p',h} - \mathbf{f}^{n+2}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^\frac12
\\
\leq C \mathbb{E} \left[ \left\| \left| h + \nabla (D_p \phi_{p} h^{\otimes 1})\right| \left| \nabla \psi_{p',h}^{(n+1)} - \nabla \psi_{p,h}^{(n+1)} \right| \right\|_{L^2(\cu_0)}^2 \right]^\frac12
+ C |h|^{n+2} |p-p'| .
\end{multline*}
To conclude, we need to estimate the first term on the right. To this end, since $\left| \nabla \psi_{p',h}^{(n+1)} - \nabla \psi_{p,h}^{(n+1)} \right| \leq \left| \nabla \psi_{p',h}^{(n+1)}\right| + \left| \nabla \psi_{p,h}^{(n+1)} \right|$ by the triangle inequality, we have, for any $\beta \in [0,1]$,
\begin{equation*}
\left| \nabla \psi_{p',h}^{(n+1)} - \nabla \psi_{p,h}^{(n+1)} \right|
\leq
\left(
\left| \nabla \psi_{p',h}^{(n+1)}\right| + \left| \nabla \psi_{p,h}^{(n+1)} \right|
\right)^{1-\beta}
\left(
\left| \nabla \psi_{p',h}^{(n+1)} - \nabla \psi_{p,h}^{(n+1)} \right|
\right)^{\beta}.
\end{equation*}
It follows, by H\"older's inequality, that
\begin{align} \notag
\lefteqn{
\mathbb{E} \left[ \left\| \mathbf{f}^{n+2}_{p',h} - \mathbf{f}^{n+2}_{p,h} \right\|_{L^2(\cu_0)}^2 \right]^\frac12
} \quad &
\\ \notag &
\leq
C \mathbb{E} \left[ \left\| h + \nabla (D_p \phi_{p} h^{\otimes 1}) \right\|_{L^{\frac{4}{1-\beta}}(\cu_0)}^{\frac{4}{1-\beta}} \right]^{\frac {1-\beta}{4} }
\mathbb{E} \left[ \left\| \left| \nabla \psi_{p',h}^{(n+1)} \right| + \left| \nabla \psi_{p,h}^{(n+1)}\right| \right\|_{L^4(\cu_0)}^4 \right]^{\frac {1-\beta}{4} }
\\ \notag & \qquad \times \mathbb{E} \left[ \left\| \nabla \psi_{p',h}^{(n+1)} - \nabla \psi_{p,h}^{(n+1)} \right\|_{L^2(\cu_0)}^2 \right]^{\frac{\beta}{2}} .
\end{align}
By Lemma~\ref{l.polarizationD_pphi} and H\"older's inequality, we get
\begin{align} \notag
\lefteqn{
\mathbb{E} \left[ \left\| \nabla \psi_{p',h}^{(n+1)} - \nabla \psi_{p,h}^{(n+1)} \right\|_{L^2(\cu_0)}^2 \right]^{\frac{\beta}{2}}
} \quad &
\\ \notag &
\leq
C \mathbb{E} \left[ \left\| \nabla D_p^{n+1} \phi_{p'} - \nabla D_p^{n+1} \phi_{p} \right\|_{L^2(\cu_0)}^2 \right]^{\frac{\beta}{2}} |h|^{\beta(n+1)}
\\ \notag &
\leq
C \mathbb{E} \left[ \left\| \nabla \int_0^1 D_p^{n+2} \phi_{t p' + (1-t)p} \,dt \right\|_{L^2(\cu_0)}^2 \right]^{\frac{\beta}{2}} |h|^{\beta(n+1)}|p-p'|^\beta
\\ \notag &
\leq
C \left( \int_0^1 \mathbb{E} \left[ \left\| \nabla D_p^{n+2} \phi_{t p' + (1-t)p} \right\|_{L^2(\cu_0)}^2 \right]^{\frac{1}{2}} \,dt \right)^\beta |h|^{\beta(n+1)}|p-p'|^\beta
\\ \notag &
\leq
C |h|^{\beta(n+1)}|p-p'|^\beta,
\end{align}
together with
\begin{equation*}
\mathbb{E} \left[ \left\| h + \nabla (D_p \phi_{p} h^{\otimes 1}) \right\|_{L^{\frac{4}{1-\beta}}(\cu_0)}^{\frac{4}{1-\beta}} \right]^{\frac {1-\beta}{4} } \leq C|h|
\end{equation*}
and
\begin{equation*}
\mathbb{E} \left[ \left\| \left| \nabla \psi_{p',h}^{(n+1)} \right| + \left| \nabla \psi_{p,h}^{(n+1)}\right| \right\|_{L^4(\cu_0)}^4 \right]^{\frac {1-\beta}{4} } \leq C|h|^{(1-\beta)(n+1)}.
\end{equation*}
Consequently, combining the above displays yields~\eqref{e.continuityinppre}, finishing the proof.
\end{proof}
\subsection{Smoothness of~\texorpdfstring{$\overline{L}$}{L-bar}}
It is a well-known and easy consequence of qualitative homogenization that $D\overline{L}$ is given by the formula
\begin{equation*}
D\overline{L}(p)
=
\mathbb{E} \left[
\int_{\cu_0}
D_pL \left( p + \nabla \phi_p(x),x\right)\,dx
\right].
\end{equation*}
In~\cite{AFK}, we proved that $\overline{L}\in C^2$ and that $D^2\overline{L}$ is given by the formula
\begin{equation} \label{e.D2pbarL}
D^2\overline{L}(p) h
=
\mathbb{E} \left[
\int_{\cu_0}
\mathbf{a}_p
\left( h + \nabla \psi_{p,h}^{(1)}\right)
\,dx
\right].
\end{equation}
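As a quick consistency check of~\eqref{e.D2pbarL}, included only for orientation, consider the classical one-dimensional linear example $d=1$ and $L(q,x) = \frac12 a(x) q^2$. Then $\mathbf{a}_p = a$, and the equation for the linearized corrector states that the flux $a \left( h + \partial_x \psi^{(1)}_{p,h} \right)$ is constant; since $\partial_x \psi^{(1)}_{p,h}$ is stationary with zero mean, this constant equals $\bar a h$, where $\bar a := \mathbb{E} \left[ \int_{\cu_0} a^{-1} \right]^{-1}$ is the harmonic mean of $a$. The formula~\eqref{e.D2pbarL} then gives $D^2\overline{L}(p) h = \bar a h$, in agreement with the classical fact that $\overline{L}(p) = \frac12 \bar a p^2$ in this case.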
We next generalize this result to higher derivatives of $\overline{L}$ by, in essence, differentiating the previous formula repeatedly and applying the results of the previous subsection. In particular, we validate the statement of Theorem~\ref{t.regularity.Lbar} for $n+1$.
\begin{proposition}
\label{e.Lbar.reg.qualitative}
Assume that~\eqref{e.assumption.section3} is valid.
Then Theorem~\ref{t.regularity.Lbar} is valid for $n+1$. Moreover, for every $m\in\{ 1,\ldots,n+1\}$ and $h\in\overline{B}_1$, we have the formula
\begin{equation}
\label{e.formulaDervsL}
D_p^{m+2} \overline{L}(p) h^{\otimes (m+1)} = \mathbb{E}\left[ \int_{\cu_0} \left( \mathbf{a}_p \nabla \psi^{(m+1)}_{p,h} + \mathbf{f}^{(m+1)}_{p,h} \right) \right].
\end{equation}
\end{proposition}
\begin{proof}
Fix $\mathsf{M} \in [1,\infty)$ and $p \in B_{\mathsf{M}}$. We begin by showing~\eqref{e.formulaDervsL}. Starting from~\eqref{e.D2pbarL} and observing that, since
\begin{equation*}
\mathbf{f}^{(2)}_{p,h} = D_p^3L(p+\nabla \phi_p,\cdot) \left(h + \nabla\psi_{p,h }^{(1)} \right)^{\otimes 2} ,
\end{equation*}
we have
\begin{equation*}
D_p \left(\mathbf{a}_p \left( h + \nabla \psi_{p,h}^{(1)}\right) \right) \cdot h = \mathbf{a}_p \nabla \psi_{p,h}^{(2)} + \mathbf{f}^{(2)}_{p,h},
\end{equation*}
we deduce that
\begin{equation*}
D_p^3 \overline{L}(p) h^{\otimes 2} = \mathbb{E}\left[ \int_{\cu_0} \left( \mathbf{a}_p \nabla \psi_{p,h}^{(2)} + \mathbf{f}^{(2)}_{p,h}\right) \right] .
\end{equation*}
Assume then, inductively, that for some $m \in \{3,\ldots,n+1\}$ we have, for all $ k \in \{2,\ldots,m\}$,
\begin{equation}
\label{e.diffL}
D_p^{k+1} \overline{L}(p) h^{\otimes k} = \mathbb{E}\left[ \int_{\cu_0} \left( \mathbf{a}_p \nabla \psi^{(k)}_{p,h} + \mathbf{f}^{(k)}_{p,h} \right) \right].
\end{equation}
We prove that~\eqref{e.diffL} is valid for $k=m+1$ as well. Differentiating with respect to $p$ and using~\eqref{e.gradpsiformula} and~\eqref{e.Fmrelationapplied}, we obtain
\begin{align} \notag
D_p^{m+2} \overline{L}(p) h^{\otimes (m+1)} & = \mathbb{E}\left[ \int_{\cu_0} \left( D_p \left( \mathbf{a}_p \nabla \psi^{(m)}_{p,h} + \mathbf{f}^{(m)}_{p,h} \right) \cdot h\right) \right]
\\ \notag &
=
\mathbb{E}\left[ \int_{\cu_0} \left( \mathbf{a}_p \nabla \psi^{(m+1)}_{p,h} + \left( D_p \mathbf{a}_p \cdot h \, \nabla \psi^{(m)}_{p,h} + D_p \mathbf{f}^{(m)}_{p,h} \cdot h \right) \right) \right]
\\ \notag &
=
\mathbb{E}\left[ \int_{\cu_0} \left( \mathbf{a}_p \nabla \psi^{(m+1)}_{p,h} + \mathbf{f}^{(m+1)}_{p,h} \right) \right],
\end{align}
proving the induction step. This validates~\eqref{e.formulaDervsL}.
\smallskip
To show the regularity of $\overline{L}$, we first observe that Theorem~\ref{t.correctorestimates} and Lemma~\ref{l.psinplus2}, together with~\eqref{e.diffL} and Lemma~\ref{l.polarization}, yield
\begin{equation} \label{e.DpLreg1}
\max_{k \in \{2,\ldots,n+3\} } \left\| D_p^{k} \overline{L}(p) \right\|_{L^\infty(B_{\mathsf{M}})} \leq C.
\end{equation}
Fix then $p' \in B_{\mathsf{M}}$. Using that
\begin{equation*}
\nabla \psi^{(n+2)}_{p,h} = \nabla \left(D_p^{n+2} \phi_p h^{\otimes (n+2)} \right),
\end{equation*}
we decompose
\begin{align} \notag
\lefteqn{
\mathbf{a}_p \nabla \psi^{(m+1)}_{p,h} + \mathbf{f}^{(m+1)}_{p,h} - (\mathbf{a}_{p'} \nabla \psi^{(m+1)}_{p',h} + \mathbf{f}^{(m+1)}_{p',h} )
} \quad &
\\ \notag &
=
\mathbf{a}_p \nabla \left(( D_p^{n+2} \phi_p - D_p^{n+2} \phi_{p'}) h^{\otimes (n+2)} \right)
\\ \notag & \quad
+ (\mathbf{a}_p - \mathbf{a}_{p'}) \nabla \left(D_p^{n+2} \phi_{p'} h^{\otimes (n+2)} \right) + \left( \mathbf{f}^{(m+1)}_{p,h} - \mathbf{f}^{(m+1)}_{p',h} \right).
\end{align}
Noticing that
\begin{align} \notag
| \mathbf{a}_p(x) - \mathbf{a}_{p'}(x)| & \leq \left[D_p^2 L(\cdot,x) \right]_{C^{0,1}} \left(|p-p'| + \left| \nabla \phi_p(x) - \nabla \phi_{p'}(x)\right| \right)
\\ \notag &
\leq C\int_0^1 \left(1+ \left| \nabla D_p \phi_{t p + (1-t)p'}(x) \right|\right) \,dt \, |p-p'|,
\end{align}
we obtain by Lemmas~\ref{l.polarizationD_pphi} and~\ref{l.continuityinp}, together with H\"older's inequality, that
\begin{equation*}
\left| D_p^{n+3} \overline{L}(p) h^{\otimes (n+3)} - D_p^{n+3} \overline{L}(p') h^{\otimes (n+3)} \right| \leq C|h|^{n+3}|p-p'|^\beta.
\end{equation*}
In view of Lemma~\ref{l.polarization}, this yields
\begin{equation*}
\left| D_p^{n+3} \overline{L}(p) - D_p^{n+3} \overline{L}(p') \right| \leq C |p-p'|^\beta,
\end{equation*}
proving that $[D_p^{n+3} \overline{L}]_{C^{0,\beta}(B_{\mathsf{M}})} \leq C$. Together with~\eqref{e.DpLreg1}, we thus get
\begin{equation*}
\left\| D_p^2 \overline{L} \right\|_{C^{n+1,\beta}(B_{\mathsf{M}})} \leq C,
\end{equation*}
which is the statement of Theorem~\ref{t.regularity.Lbar} for $n+1$. The proof is complete.
\end{proof}
\begin{remark} \label{r.barFregularity}
Since Theorem~\ref{t.regularity.Lbar} is now valid for $n+1$, we also have that $p \mapsto \overline{F}_{n+2}(p,\cdot,\ldots,\cdot)$ belongs to $C^{0,\beta}$ for all~$\beta \in (0,1)$. In particular, for~$\beta \in (0,1)$ and $\mathsf{M}\in [1,\infty)$, there exists $C(n,\beta,\mathsf{M},\mathrm{data})<\infty$ such that, for every tuple $(h_1,\ldots,h_{n+1}) \in \mathbb{R}^d \times \cdots \times \mathbb{R}^d$, we have
\begin{equation*}
\left[ \overline{F}_{n+2}(\cdot ,h_1,\ldots,h_{n+1}) \right]_{C^{0,\beta}(B_{\mathsf{M}} )} \leq C \sum_{i=1}^{n+1} |h_i|^{\frac{n+2}{i}}.
\end{equation*}
\end{remark}
\subsection{Sublinearity of correctors}
By the ergodic theorem, for every $p,h\in\mathbb{R}^d$ and $m\in\{1,\ldots,n+1\}$, the correctors and linearized correctors are (qualitatively) sublinear at infinity:
\begin{equation*}
\left\{
\begin{aligned}
&
\limsup_{r\to \infty}
\frac1r \left\| \phi_{p} - \left( \phi_p \right)_{B_r} \right\|_{\underline{L}^2(B_r)} = 0,
\\ &
\limsup_{r\to \infty}
\frac1r \left\| \psi_{p,h}^{(m)} - \left( \psi_{p,h}^{(m)} \right)_{B_r} \right\|_{\underline{L}^2(B_r)} = 0,
\end{aligned}
\right.
\qquad \mbox{$\mathbb{P}$--a.s.}
\end{equation*}
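For orientation, in the one-dimensional linear example $L(q,x) = \frac12 a(x) q^2$, one has $\partial_x \phi_p(x) = p \left( \bar a / a(x) - 1 \right)$, with $\bar a$ the harmonic mean of $a$, so that $\phi_p(x) = p \int_0^x \left( \bar a / a(y) - 1 \right) dy$ is a mean-zero additive functional of the coefficient field. If, for instance, $a$ has a finite range of dependence, the central limit theorem shows that $\phi_p$ fluctuates at scale $|x|^{1/2}$, and the sublinearity above then holds at the rate $r^{1/2}$.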
The assumption~\eqref{e.assumption.section3} allows us to quantify this sublinearity.
\begin{lemma}[Sublinearity of correctors]
\label{l.corr.sublinearity}
Assume~\eqref{e.assumption.section3} is valid. Let $\mathsf{M} \in [1,\infty)$. There exist~$\alpha(\mathrm{data}),\delta(\mathrm{data}) >0$, $C(\mathsf{M},\mathrm{data})<\infty$ and a random variable $\mathcal{X}$ satisfying $\mathcal{X} \leq \mathcal{O}_\delta(C)$ such that, for every $r\geq \mathcal{X}$,~$p \in B_{\mathsf{M}}$, $h\in \overline{B}_1$ and $m\in\{1,\ldots,n+1\}$,
\begin{equation}
\label{e.correctorsublin}
\left\| \phi_{p} - \left( \phi_p \right)_{B_r} \right\|_{\underline{L}^2(B_r)}
+
\left\| \psi_{p,h}^{(m)} - \left( \psi_{p,h}^{(m)} \right)_{B_r} \right\|_{\underline{L}^2(B_r)}
\leq
Cr^{1-\alpha}.
\end{equation}
\end{lemma}
\begin{proof}
Fix $p \in B_{\mathsf{M}}$ and $h \in \overline{B}_1$. Clearly $x \mapsto p\cdot x + \phi_p(x)$ belongs to $\mathcal{L}_1$, and thus the result for $\phi_p$ follows from~\cite[Theorem 1.3]{AFK} as in~\cite[Section 3.4]{AKMbook}. We are hence left to show that $\psi_{p,h}^{(m)}$ satisfies the estimate in the statement. For $m=1$ the result follows from~\cite[Theorem 5.2]{AFK} and~\cite[Section 3.4]{AKMbook}. We thus proceed inductively: assume that~\eqref{e.correctorsublin} is valid for $m \in \{1,\ldots,k\}$ for some $k \in \{1,\ldots, n\}$; we then show that it continues to hold for $m=k+1$. Since~\eqref{e.assumption.section3} is valid, taking $\alpha$ and $\mathcal{X}$ as in Theorem~\ref{t.linearizehigher} and setting $\mathcal{Y} := \mathcal{X}^{2/\alpha}$, we have that if $R:=\varepsilon^{-1}\geq \mathcal{Y}$, then $\varepsilon^\alpha \mathcal{X} \leq R^{-\frac \alpha2}$. We relabel $\mathcal{Y}$ as $\mathcal{X}$ and $\frac \alpha2$ as $\alpha$, and take $\mathcal{X}$ larger, if necessary, so that~\cite[Proposition 4.3]{AFK} is at our disposal. Suppressing both $p$ and $h$ from the notation, we let $\phi_{r}$, $\psi_{r}^{(m)}$ and $\widetilde \psi_{r}^{(m)}$, for $m \in \{1,\ldots,n+1\}$, solve
\begin{equation}
\label{e.corr.hom000}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_pL(p + \nabla \phi_r ,\cdot )\right) = 0 & \mbox{in} & \ B_{2^{n+1}r},\\
&-\nabla \cdot \left( D_p^2L(p + \nabla \phi,\cdot ) \nabla \psi^{(m)}_{r} \right) = \nabla \cdot \mathbf{f}^{(m)} & \mbox{in} & \ B_{2^{n+1-m}r},\\
&-\nabla \cdot \left( D_p^2L(p + \nabla \phi_r,\cdot ) \nabla \widetilde \psi^{(m)}_{r} \right) = \nabla \cdot \mathbf{f}^{(m)}_r & \mbox{in} & \ B_{2^{n+1-m}r},\\
& \phi_r = 0 & \mbox{on} & \ \partial B_{2^{n+1} r}, \\
& \psi^{(m)}_{r} = \widetilde \psi^{(m)}_{r} = 0 & \mbox{on} & \ \partial B_{2^{n+1-m}r},
\end{aligned}
\right.
\end{equation}
where
\begin{align} \notag
\mathbf{f}^{(m)} & := \mathbf{F}_m\left( p+\nabla\phi,h+\nabla\psi^{(1)}, \nabla\psi^{(2)}, \ldots, \nabla\psi^{(m-1)},\cdot \right) ,
\\ \notag
\mathbf{f}^{(m)}_r & := \mathbf{F}_m\left( p+\nabla\phi,h+\nabla\psi^{(1)}_{r}, \nabla\psi^{(2)}_{r}, \ldots, \nabla\psi^{(m-1)}_{r},\cdot \right).
\end{align}
Now, $\phi_r, \psi_r^{(1)},\ldots, \psi_r^{(m-1)}$ all homogenize to zero and we get, by Theorem~\ref{t.linearizehigher}, that, for~$r \geq \mathcal{X}$,
\begin{equation*}
\left\| \phi_{r} \right\|_{\underline{L}^2(B_r)}
+
\left\| \widetilde \psi_{r}^{(m)} \right\|_{\underline{L}^2(B_{r})}
\leq
Cr^{1-\alpha}
.
\end{equation*}
This and the induction assumption, namely that~\eqref{e.correctorsublin} is valid for $m \in \{1,\ldots,k\}$, together with Lemma~\ref{l.diff.linearizedsystem} below, imply that
\begin{equation*}
\frac1r \left\| \psi_r^{(k+1)} - \widetilde \psi_r^{(k+1)} \right\|_{\underline{L}^2 \left( B_{r} \right)} + \left\| \mathbf{f}^{(k+1)} - \mathbf{f}_r^{(k+1)} \right\|_{\underline{L}^2 \left( B_{r} \right)} \leq C r^{-\alpha}.
\end{equation*}
Combining the previous two displays yields
\begin{equation*}
\left\| \phi_{r} \right\|_{\underline{L}^2(B_r)}
+
\left\| \psi_{r}^{(k+1)} \right\|_{\underline{L}^2(B_{r})}
\leq
Cr^{1-\alpha}
.
\end{equation*}
Now, since $\psi_{2r}^{(k+1)}- \psi_{r}^{(k+1)}$ is $\mathbf{a}_p$-harmonic in $B_r$, we have by the Lipschitz estimate~\cite[Proposition 4.3]{AFK} that, for $r \geq \mathcal{X}$ and $t \in [\mathcal{X},r]$,
\begin{equation*}
\left\|\nabla \psi_{2r}^{(k+1)} - \nabla \psi_{r}^{(k+1)} \right\|_{\underline{L}^2(B_{t})}
\leq
\frac{C}{r} \left\| \psi_{2r}^{(k+1)} - \psi_{r}^{(k+1)} \right\|_{\underline{L}^2(B_{r})}
\leq
Cr^{-\alpha}
.
\end{equation*}
Therefore, by compactness, there exists $\hat \psi^{(k+1)}$ such that, for $t \in [\mathcal{X},r]$,
\begin{equation*}
\frac1r \left\| \hat \psi^{(k+1)} \right\|_{\underline{L}^2(B_{r})} + \left\|\nabla \psi_{r}^{(k+1)} - \nabla \hat \psi^{(k+1)} \right\|_{\underline{L}^2(B_{t})} \leq C r^{-\alpha}
.
\end{equation*}
Proceeding now as in~\cite[Section 3.4]{AKMbook} shows that $\nabla \hat \psi^{(k+1)}$ is $\mathbb{Z}^d$-stationary. Finally, by integration by parts we also obtain that $\mathbb{E} \left[ \int_{\cu_0} \nabla \hat \psi^{(k+1)} \right] = 0$ and, therefore, since $\hat \psi^{(k+1)}$ solves the same equation as $\psi^{(k+1)}$, uniqueness yields that
$\hat \psi^{(k+1)} = \psi^{(k+1)}$ up to a constant. In view of the previous display, the proof is complete.
\end{proof}
In the proof above we made use of a lemma which, roughly speaking, states that if two solutions of the system of linearized equations are close in $L^2$, then their gradients are close as well. This lemma will also be applied repeatedly in the following sections.
\begin{lemma}
\label{l.diff.linearizedsystem}
Suppose that~\eqref{e.assumption.section4} holds. Let~$q\in [2,\infty)$ and $\mathsf{M}\in [1,\infty)$. There exist~$\delta(q,\mathrm{data})>0$,~$C(q,\mathsf{M},\mathrm{data}) <\infty$ and a random variable~$\mathcal{X} \leq \mathcal{O}_{\delta}\left( C \right)$ such that the following holds. Let $R\geq \mathcal{X}$, $N\leq n+1$ and $\left(u,w_1,\ldots,w_{N}\right),\left(\widetilde{u},\widetilde{w}_1,\ldots,\widetilde{w}_{N} \right) \in (H^1(B_R))^{N+1}$ each be a solution of the system of linearized equations, that is, for every
$m\in\{1,\ldots,N\}$, we have
\begin{equation*}
\left\{
\begin{aligned}
&
\left\| \nabla u \right\|_{\underline{L}^2(B_R)} \vee
\left\| \nabla \widetilde{u} \right\|_{\underline{L}^2(B_R)} \leq \mathsf{M},
\\ &
-\nabla \cdot \left( D_pL(\nabla u,x) \right) = 0
\quad \mbox{and} \quad -\nabla \cdot \left( D_pL(\nabla \widetilde{u},x) \right) = 0
\quad \mbox{in} \ B_R,\\
&
-\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1},x)\right) \quad \mbox{in} \ B_R,
\end{aligned}
\right.
\end{equation*}
where the last equation also holds with $\left(\widetilde{u},\widetilde{w}_1,\ldots,\widetilde{w}_{N} \right)$ in place of~$\left(u,w_1,\ldots,w_{N}\right)$. Then
\begin{equation}
\label{e.diff.utildeu}
\left\| \nabla u- \nabla\widetilde{u} \right\|_{\underline{L}^q(B_{R/2})}
\leq
\frac CR\left\| u- \widetilde{u} \right\|_{\underline{L}^2(B_R)}
\end{equation}
and, denoting
\begin{equation*}
\mathsf{h}_i
:=
\frac 1R\left\| w_i - (w_i)_{B_R} \right\|_{\underline{L}^2(B_R)} + \frac 1R\left\| \widetilde{w}_i - (\widetilde{w}_i)_{B_R} \right\|_{\underline{L}^2(B_R)},
\end{equation*}
we have, for every $m\in\{1,\ldots,N\}$,
\begin{align}
\label{e.diff.linearizedsystem.wm}
\lefteqn{
\left\| \nabla w_m - \nabla\widetilde{w}_m \right\|_{\underline{L}^2(B_{R/2})}
}
\\ & \qquad \notag
\leq
C \left( \frac 1R\left\| u- \widetilde{u} \right\|_{\underline{L}^2(B_R)} \right)
\sum_{i=1}^{m-1} \mathsf{h}_i^{\frac mi}
+
C\sum_{i=1}^{m}
\frac1R \left\| w_i - \widetilde{w}_i \right\|_{\underline{L}^{2}(B_{R})}
\sum_{j=1}^{m-i}
\mathsf{h}_j^{\frac{m-i}{j}}
\end{align}
and
\begin{align}
\label{e.diff.linearizedsystem.Fm}
\lefteqn{
\left\| \mathbf{F}_m\left(\nabla u, \nabla w_{1} ,\ldots,\nabla w_{m-1},\cdot\right) - \mathbf{F}_m\left(\nabla \widetilde{u}, \nabla \widetilde{w}_{1} ,\ldots,\nabla \widetilde{w}_{m-1},\cdot\right) \right\|_{\underline{L}^2(B_{R/2})}
}
\\ & \qquad \notag
\leq
C \left( \frac 1R\left\| u- \widetilde{u} \right\|_{\underline{L}^2(B_R)} \right)
\sum_{i=1}^{m-1} \mathsf{h}_i^{\frac mi}
+
C\sum_{i=1}^{m-1}
\frac1R \left\| w_i - \widetilde{w}_i \right\|_{\underline{L}^{2}(B_{R})}
\sum_{j=1}^{m-i}
\mathsf{h}_j^{\frac{m-i}{j}}.
\end{align}
\end{lemma}
\begin{proof}
Define $\mathbf{a}(x):= D^2_pL(\nabla u,x)$ and $\mathbf{f}_m(x):= \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1},x)$, and define $\widetilde{\mathbf{a}}$ and~$\widetilde{\mathbf{f}}_m$ analogously. We assume that $R \geq 2^{m+2}\mathcal{X}$, where $\mathcal{X}$ is as in Theorem~\ref{t.regularity.linerrors} for $n$, which is valid by assumption~\eqref{e.assumption.section4}.
\smallskip
The estimate~\eqref{e.diff.utildeu} is simply the estimate for $\xi_0$ in Theorem~\ref{t.regularity.linerrors}. It also implies
\begin{equation}
\label{e.estaatilde}
\left\| \mathbf{a} - \widetilde{\mathbf{a}} \right\|_{\underline{L}^q(B_{R/2})} \leq
\frac CR\left\| u- \widetilde{u} \right\|_{\underline{L}^2(B_R)}.
\end{equation}
By~\eqref{e.Fmbasic2}, we have that
\begin{align*}
&
\left| \mathbf{f}_m - \widetilde{\mathbf{f}}_m \right|
\\ & \quad
\leq
C \left| \nabla u - \nabla \widetilde{u} \right|
\sum_{i=1}^{m-1}
\left( \left| \nabla w_i \right| \vee\left| \nabla \widetilde{w}_i \right| \right)^{\frac mi}
+
C \sum_{i=1}^{m-1}
\left| \nabla w_i - \nabla \widetilde{w}_i \right|
\sum_{j=1}^{m-i}
\left( \left| \nabla w_j \right| \vee\left| \nabla \widetilde{w}_j \right| \right)^{\frac{m-i}{j}}.
\end{align*}
Using H\"older's inequality and applying Theorem~\ref{t.regularity.linerrors} for~$n$, we obtain, for any $p\in [2,\infty)$ and $\delta>0$,
\begin{align}
\label{w.fmdfifs}
\lefteqn{
\left\| \mathbf{f}_m - \widetilde{\mathbf{f}}_m \right\|_{L^{p}(B_{2^{-m}R})}
} \quad &
\\ &\notag
\leq
C \left( \frac 1R\left\| u- \widetilde{u} \right\|_{\underline{L}^2(B_R)} \right)
\sum_{i=1}^{m-1} \mathsf{h}_i^{\frac mi}
+
C\sum_{i=1}^{m-1}
\left\| \nabla w_i - \nabla \widetilde{w}_i \right\|_{\underline{L}^{p+\delta}(B_{2^{-m}R})}
\sum_{j=1}^{m-i}
\mathsf{h}_j^{\frac{m-i}{j}}.
\end{align}
We observe that $\zeta:= w_m-\widetilde{w}_m$ satisfies the equation
\begin{equation}
\label{e.differe.wm}
-\nabla \cdot \mathbf{a}\nabla \zeta =
\nabla \cdot \left( \mathbf{f}_m - \widetilde{\mathbf{f}}_m \right) + \nabla \cdot \left( \left( \mathbf{a}-\widetilde{\mathbf{a}} \right) \nabla \widetilde{w}_m\right) \quad \mbox{in} \ B_R.
\end{equation}
By the Meyers estimate, if $\delta >0$ is small enough, then
\begin{align*}
\left\| \nabla w_m - \nabla \widetilde{w}_m \right\|_{\underline{L}^{2+\delta}(B_{2^{-m-1}R})}
&
\leq
C\left\| \mathbf{f}_m - \widetilde{\mathbf{f}}_m \right\|_{L^{2+\delta}(B_{2^{-m}R})}
+
\frac CR \left\| w_m - \widetilde{w}_m \right\|_{\underline{L}^{2}(B_{R})}
\\ & \qquad
+ \left\| \mathbf{a}-\widetilde{\mathbf{a}} \right\|_{\underline{L}^5(B_{R/2})}
\left\| \nabla \widetilde{w}_m \right\|_{\underline{L}^5(B_{R/2})}.
\end{align*}
Combining these and using~\eqref{e.estaatilde} and the validity of Theorem~\ref{t.regularity.linerrors}, we get
\begin{align*}
\left\| \nabla w_m - \nabla \widetilde{w}_m \right\|_{\underline{L}^{2+\delta}(B_{2^{-m-1}R})}
&
\leq
\frac CR \left\| w_m - \widetilde{w}_m \right\|_{\underline{L}^{2}(B_{R})}
+
C \left( \frac 1R\left\| u- \widetilde{u} \right\|_{\underline{L}^2(B_R)} \right)
\sum_{i=1}^{m-1} \mathsf{h}_i^{\frac mi}
\\
&\qquad
+
C\sum_{i=1}^{m-1}
\left\| \nabla w_i - \nabla \widetilde{w}_i \right\|_{\underline{L}^{2+2\delta}(B_{2^{-m}R})}
\sum_{j=1}^{m-i}
\mathsf{h}_j^{\frac{m-i}{j}}.
\end{align*}
Taking $\delta_0$ sufficiently small and putting $\delta_m:=2^{-m}\delta_0$, we get by induction (using Young's inequality and rearranging several sums) that, for every $m\in\{1,\ldots,N\}$,
\begin{align*}
& \left\| \nabla w_m - \nabla \widetilde{w}_m \right\|_{\underline{L}^{2+\delta_m}(B_{2^{-m-1}R})}
\\ & \qquad
\leq
C \left( \frac 1R\left\| u- \widetilde{u} \right\|_{\underline{L}^2(B_R)} \right)
\sum_{i=1}^{m-1} \mathsf{h}_i^{\frac mi}
+
C\sum_{i=1}^{m}
\frac1R \left\| w_i - \widetilde{w}_i \right\|_{\underline{L}^{2}(B_{R})}
\sum_{j=1}^{m-i}
\mathsf{h}_j^{\frac{m-i}{j}}.
\end{align*}
Combining this with~\eqref{w.fmdfifs}, we get
\begin{align*}
& \left\| \mathbf{f}_m - \widetilde{\mathbf{f}}_m \right\|_{L^{2}(B_{2^{-m-1}R})}
\\ & \qquad
\leq
C \left( \frac 1R\left\| u- \widetilde{u} \right\|_{\underline{L}^2(B_R)} \right)
\sum_{i=1}^{m-1} \mathsf{h}_i^{\frac mi}
+
C\sum_{i=1}^{m-1}
\frac1R \left\| w_i - \widetilde{w}_i \right\|_{\underline{L}^{2}(B_{R})}
\sum_{j=1}^{m-i}
\mathsf{h}_j^{\frac{m-i}{j}}.
\end{align*}
These imply~\eqref{e.diff.linearizedsystem.wm} and~\eqref{e.diff.linearizedsystem.Fm} after a covering argument.
\end{proof}
\section{Quantitative homogenization of the linearized equations}
\label{s.homogenization}
In this section, we suppose that~$n\in \{0,\ldots,\mathsf{N}-1 \}$ is such that
\begin{equation}
\label{e.assumption.section4}
\left\{
\begin{aligned}
& \ \mbox{Theorem~\ref{t.regularity.Lbar} is valid with $n+1$ in place of $\mathsf{N}$,} \\
& \ \mbox{Theorems~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} are valid for $n$.}
\end{aligned}\right.
\end{equation}
The goal is to prove that Theorem~\ref{t.linearizehigher} is also valid with~$n+1$ in place of~$n$. That is, we need to homogenize the $(n+2)$th linearized equation.
\smallskip
In order to prove homogenization for the $(n+2)$th linearized equation, we follow the procedure used in~\cite{AFK} for homogenizing the first linearized equation. We first show, using the induction hypothesis~\eqref{e.assumption.section4}, that the coefficients $D^2_pL\left(\nabla u^\varepsilon,\tfrac x\varepsilon \right)$ and $\mathbf{F}_{n+2}\left( \nabla u^\varepsilon, \nabla w_1^\varepsilon, \ldots,\nabla w_{n+1}^\varepsilon, \tfrac x\varepsilon \right)$ can be approximated by random fields which are \emph{local} (they satisfy a finite range of dependence condition) and \emph{locally stationary} (they are stationary up to dependence on a slowly varying macroscopic variable). This then allows us to apply known quantitative homogenization results for linear elliptic equations, which can be found for instance in~\cite{AKMbook}.
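To fix ideas, the $(n+2)$th linearized equation to be homogenized can be written schematically as follows (a sketch, with the forcing in the same variable order as in~\eqref{e.Fepbars} below; the precise boundary-value problem appears in~\eqref{e.wn+2}):
% The coefficient and the forcing are evaluated along the solutions of the
% previous linearized equations, which is what necessitates the local and
% locally stationary approximations constructed in this section.
\begin{equation*}
-\nabla \cdot \Bigl( D^2_pL\bigl(\nabla u^\varepsilon, \tfrac x\varepsilon\bigr)
\nabla w^\varepsilon_{n+2} \Bigr)
=
\nabla \cdot
\mathbf{F}_{n+2}\bigl(\nabla u^\varepsilon, \nabla w^\varepsilon_{1}, \ldots,
\nabla w^\varepsilon_{n+1}, \tfrac x\varepsilon\bigr).
\end{equation*}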
\smallskip
\subsection{Stationary, finite range coefficient fields}
\label{ss.stat}
We proceed in a similar fashion as in~\cite[Section 3.4]{AFK} by introducing approximating equations which are stationary and localized.
We fix an integer $k\in\mathbb{N}$ which represents the scale on which we localize.
Let~$v_{p,z}^{(k)}$ denote the solution of the Dirichlet problem
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_{p}L(\nabla v_{p,z}^{(k)}, x)\right)=0 & \mbox{in} & \ z+\cu_{k+1}, \\
& v_{p,z}^{(k)} = \ell_p & \mbox{on} & \ \partial (z+\cu_{k+1}),
\end{aligned}
\right.
\end{equation*}
where~$\ell_p$ is the affine function $\ell_p(x):=p\cdot x$.
We then define, for each~$z\in 3^k\mathbb{Z}^d$, a coefficient field $\widetilde{\mathbf{a}}_{p,z}^{(k)}$ in $z+\cu_{k+1}$ by
\begin{equation*}
\widetilde{\mathbf{a}}_{p,z}^{(k)}(x) := D_{p}^{2}L(\nabla v_{p,z}^{(k)}(x),x), \quad
x\in z+\cu_{k+1},
\end{equation*}
and then recursively define, for each $\theta = (p, h_{1}, \dots, h_{n+1})\in\left(\mathbb{R}^{d}\right)^{n+2}$, $m\in \{1,\ldots,n+1\}$ and $z\in3^k\mathbb{Z}^d$, the functions $w_{m,\theta,z}^{(k)} \in H^1(z+(1+2^{-m})\cu_{k})$ to be the solutions of the sequence of Dirichlet problems
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla\cdot\left( \widetilde{\mathbf{a}}_{p,z}^{(k)} \nabla w_{m,\theta,z}^{(k)} \right)= \nabla \cdot \widetilde{\mathbf{F}}_{m,\theta,z}^{(k)} & \mbox{in} & \ z+(1+2^{-m})\cu_k, \\
& w_{m,\theta,z}^{(k)} =\ell_{h_m} & \mbox{on} & \ \partial (z+(1+2^{-m})\cu_k),
\end{aligned}
\right.
\end{equation*}
where $\widetilde{\mathbf{F}}_{m,\theta,z}^{(k)} \in L^2(z+(1+2^{-(m-1)})\cu_k)$ is defined for $m\in \{1,\ldots,n+2\}$ by
\begin{equation}
\widetilde{\mathbf{F}}_{m,\theta,z}^{(k)}(x)
:=
\mathbf{F}_{m}(\nabla v^{(k)}_{p,z}(x), \nabla w^{(k)}_{1,\theta,z}(x), \ldots, \nabla w^{(k)}_{m-1,\theta,z}(x),x).
\end{equation}
Finally, we create $3^k\mathbb{Z}^d$--stationary fields by gluing the above functions together: for each $x\in\mathbb{R}^d$, we define
\begin{equation*}
\left\{
\begin{aligned}
&
v^{(k)}_p(x):= v^{(k)}_{p,z}(x),
\\ &
w^{(k)}_{m,\theta}(x)
:=
w^{(k)}_{m,\theta,z}(x),
\\ &
\mathbf{a}^{(k)}_p(x):= \widetilde{\mathbf{a}}_{p,z}^{(k)}(x),
\\ &
\mathbf{F}^{(k)}_{m,\theta}(x)
:=
\widetilde{\mathbf{F}}^{(k)}_{m,\theta,z}(x),
\end{aligned}
\right.
\qquad \mbox{where $z\in 3^k\mathbb{Z}^d$ is such that $x\in z+\cu_k$.}
\end{equation*}
Notice that $v^{(k)}_p$ and $w^{(k)}_{m,\theta}$ might not be $H^1$ functions globally, but we can nevertheless define their gradients locally in $z+\cu_k$. The $\mathbb{R}^{d(n+3)}$--valued random field $\left( \nabla v^{(k)}_p, \nabla w^{(k)}_{1,\theta},\ldots,\nabla w^{(k)}_{n+2,\theta} \right)$ is $3^{k}\mathbb{Z}^{d}$-stationary and has a range of dependence of at most $3^k\sqrt{15+d}$, by construction. The same is also true of the corresponding coefficient fields, since these are local functions of this random field:
\begin{equation}
\label{e.klocalize}
\left\{
\begin{aligned}
& \ \left(\mathbf{a}^{(k)}_p, \mathbf{F}^{(k)}_{2,\theta},\ldots,\mathbf{F}^{(k)}_{n+2,\theta} \right)
\ \ \mbox{is $3^{k}\mathbb{Z}^{d}$-stationary and }
\\ & \ \ \mbox{has a range of dependence of at most $3^k\sqrt{15+d}$}.
\end{aligned}
\right.
\end{equation}
In the next subsection, we will apply some known quantitative homogenization estimates to the linear equation
\begin{equation}
\label{e.statpde}
-\nabla\cdot\left( \mathbf{a}^{(k)}_p \nabla v \right) = \nabla \cdot \mathbf{F}^{(k)}_{n+2,\theta}.
\end{equation}
In order to anchor these estimates, we require some deterministic bounds on the vector field~$\mathbf{F}^{(k)}_{n+2,\theta}$ and thus on the gradients of the functions $w^{(k)}_{m,\theta}$ defined above. These bounds blow up like a large power of~$3^k$, but we will eventually choose $3^k$ to be relatively small compared with the macroscopic scale, so these large powers of $3^k$ will be absorbed.
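The mechanism by which these exploding factors are absorbed can be sketched as follows (a heuristic only; the exponent $\gamma$ here is illustrative and is not fixed until the mesoscopic scales are chosen below):
% If the localization scale satisfies 3^k ~ eps^{-gamma} for a small gamma > 0,
% then any fixed power 3^{Qk} is beaten by any fixed positive power of eps.
\begin{equation*}
3^k \simeq \varepsilon^{-\gamma}
\quad \Longrightarrow \quad
3^{Qk}\, \varepsilon^{\alpha} \simeq \varepsilon^{\alpha - Q\gamma}
\longrightarrow 0
\quad \mbox{as} \ \varepsilon \to 0,
\qquad \mbox{provided} \ 0<\gamma < \frac{\alpha}{Q}.
\end{equation*}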
\begin{lemma}
\label{l.detbounds.wm}
Fix $\mathsf{M}\in [1,\infty)$.
There exist exponents~$\beta(\mathrm{data})>0$,~$Q(\mathrm{data})<\infty$ and a constant~$C(\mathsf{M},\mathrm{data})<\infty$ such that, for every~$k\in\mathbb{N}$,~$m\in\{ 1,\ldots,n+2\}$ and $\theta=(p,h_1,\ldots,h_{n+1})\in \mathbb{R}^{d(n+2)}$ with $|p|\leq \mathsf{M}$, we have
\begin{equation}
\label{e.detbounds.Fm}
\left\| \mathbf{F}^{(k)}_{m,\theta} \right\|_{C^{0,\beta}(\cu_k)}
\leq
C 3^{Qk} \sum_{j=1}^{m-1} \left| h_j \right|^{\frac mj}.
\end{equation}
\end{lemma}
\begin{proof}
By~\cite[Proposition A.3]{AFK}, there exist~$\beta(d,\Lambda)\in (0,1)$ and $C(\mathsf{M},\mathrm{data})<\infty$ such that, for every $z\in 3^k\mathbb{Z}^d$ and $p\in B_{\mathsf{M}}$,
\begin{align*}
\left[ \nabla v_{p,z}^{(k)} \right]_{C^{0,\beta}(z+2\cu_{k})}
\leq
C + C \sup_{x\in z+2\cu_{k}} \left\| \nabla v_{p,z}^{(k)} \right\|_{L^2(B_1(x))}
&
\leq
C + C \left\| \nabla v_{p,z}^{(k)} \right\|_{L^2(z+\cu_{k+1})}
\\ &
\leq
C + C (1+|p| ) 3^{\frac{kd}2}.
\end{align*}
We deduce the existence of~$C(\mathsf{M},\mathrm{data})<\infty$ such that
\begin{equation}
\label{e.detbounds.ak}
\left[ \widetilde{\mathbf{a}}^{(k)}_{p,z} \right]_{C^{0,\beta}(z+2\cu_{k})} \leq C 3^{\frac{kd}2}.
\end{equation}
We will argue by induction in~$m\in \{1,\ldots,n+2\}$ that there exist~$Q(\mathrm{data})<\infty$ and $C(\mathsf{M},\mathrm{data})<\infty$ such that
\begin{equation}
\label{e.detbounds.wm.indy}
\left\| \nabla w_{m,\theta,z}^{(k)} \right\|_{C^{0,\beta}(z+(1+\frac23 2^{-m})\cu_k)}
\leq
C 3^{Qk} \sum_{j=1}^{m} \left| h_j \right|^{\frac mj}.
\end{equation}
For $m=1$ the claim follows from Proposition~\ref{p.schauder} and~\eqref{e.detbounds.ak}. Suppose now that there exists $M\in \{2,\ldots,n+2\}$ such that the bound~\eqref{e.detbounds.wm.indy} holds for each $m \in \{1,\ldots,M-1\}$. Then we obtain that, for some $Q(M,\mathrm{data})<\infty$,
\begin{equation*}
\left[ \widetilde{\mathbf{F}}^{(k)}_{M,\theta,z} \right]_{C^{0,\beta}(z+(1+2^{-M})\cu_k)}
\leq
C 3^{Qk} \sum_{j=1}^{M} \left| h_j \right|^{\frac Mj}.
\end{equation*}
By the Caccioppoli inequality, we get
\begin{equation*}
\left\| \nabla w_{M,\theta,z}^{(k)} \right\|_{\underline{L}^2(z+(1+\frac34 \cdot 2^{-M})\cu_k)}
\leq
C 3^{Qk} \sum_{j=1}^{M} \left| h_j \right|^{\frac Mj}.
\end{equation*}
In view of~\eqref{e.detbounds.ak} and the previous two displays,
another application of Proposition~\ref{p.schauder} yields, after enlarging~$Q$ and~$C$, the bound~\eqref{e.detbounds.wm.indy} for~$m=M$. This completes the induction argument and the proof of~\eqref{e.detbounds.wm.indy}. The bound~\eqref{e.detbounds.Fm} immediately follows.
\end{proof}
By the assumed validity of Theorem~\ref{t.regularity.linerrors} for $n$, we also have that, for each $\mathsf{M},q \in[2,\infty)$ there exist $\delta(q,\mathrm{data})>0$ and $C(\mathsf{M},q,\mathrm{data})<\infty$ and a random variable~$\mathcal{X} = \mathcal{O}_\delta(C)$ such that, for every $k\in \mathbb{N}$ with $3^k\geq \mathcal{X}$ and every $m\in\{1,\ldots,n+1\}$ and $\theta = (p,h_1,\ldots,h_{n+1})\in\mathbb{R}^{d(n+2)}$ with $|p| \leq \mathsf{M}$, we have
\begin{equation}
\label{e.wmTheta0bounds.X}
\left\| \nabla w^{(k)}_{m,\theta,0}
\right\|_{\underline{L}^q((1+2^{-m-1})\cu_{k})}
\leq
C \sum_{j=1}^{m-1} \left| h_j \right|^{\frac mj}
\end{equation}
and hence, for such~$k$ and $\theta$ and $m\in\{1,\ldots,n+2\}$,
\begin{equation}
\label{e.FmTheta0bounds.X}
\left\| \widetilde{\mathbf{F}}_{m,\theta,0}^{(k)}
\right\|_{\underline{L}^q((1+2^{-m-1})\cu_{k})}
\leq
C \sum_{j=1}^{m-1} \left| h_j \right|^{\frac mj}.
\end{equation}
Observe that~\eqref{e.detbounds.Fm} and~\eqref{e.FmTheta0bounds.X} together imply that, for $\delta$ and $C$ as above and every $m\in\{ 1,\ldots,n+2\}$,
\begin{equation}
\label{e.FmTheta0bounds.X2}
\left\| \widetilde{\mathbf{F}}_{m,\theta,0}^{(k)}
\right\|_{\underline{L}^q( \cu_{k})}
\leq
\mathcal{O}_\delta\left( C \sum_{j=1}^{m-1} \left| h_j \right|^{\frac mj} \right).
\end{equation}
We next study the continuity of $\mathbf{a}_p^{(k)}$ and~$\mathbf{F}^{(k)}_{m,\theta}$ in the parameter~$\theta$.
\begin{lemma}[{Continuity of $\mathbf{a}_p^{(k)}$ and~$\mathbf{F}^{(k)}_{m,\theta}$ in $\theta$}]
\label{l.contdep.coeffs}
Fix $q\in [2,\infty)$ and $\mathsf{M}\in [1,\infty)$.
There exist constants~$\delta(q,\mathrm{data})>0$ and~$Q(q,\mathrm{data}),C(q,\mathsf{M},\mathrm{data})<\infty$ and a random variable $\mathcal{X} = \mathcal{O}_\delta(C)$ such that, for every~$k\in\mathbb{N}$ with $3^k\geq \mathcal{X}$, $\theta=(p,h_1,\ldots,h_{n+1})\in \mathbb{R}^{d(n+2)}$ and $\theta'=(p',h_{1}',\ldots,h_{n+1}')\in \mathbb{R}^{d(n+2)}$
with $|p|,|p'|\leq \mathsf{M}$,
\begin{equation}
\label{e.contdep.ap}
\left\| \mathbf{a}_p^{(k)} - \mathbf{a}_{p'}^{(k)} \right\|_{\underline{L}^q(\cu_k)}
\leq
C \left|p-p'\right|
\end{equation}
and, for every~$m\in\{ 1,\ldots,n+2\}$,
\begin{align}
\label{e.contdep.Fm}
\lefteqn{
\left\| \mathbf{F}^{(k)}_{m,\theta} - \mathbf{F}^{(k)}_{m,\theta'} \right\|_{\underline{L}^2(\cu_k)}
} \qquad &
\\ & \notag
\leq
C |p-p'| \sum_{i=1}^{m-1} \left( \left| h_i \right| \vee \left| h_i' \right| \right)^{\frac mi}
+C \sum_{i=1}^{m-1} |h_i-h_i'| \sum_{j=1}^{m-i}
\left( \left| h_j \right| \vee \left| h_j' \right| \right)^{\frac {m-i}j}.
\end{align}
\end{lemma}
\begin{proof}
We take $\mathcal{X}$ to be larger than the random variables in the statements of Lemma~\ref{l.diff.linearizedsystem} and Theorems~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} for~$n$. The bound~\eqref{e.contdep.ap} is then an immediate consequence of~\eqref{e.diff.utildeu} and the obvious fact that
\begin{equation*}
\left\| \nabla v^{(k)}_{p,0} - \nabla v^{(k)}_{p',0} \right\|_{\underline{L}^2(\cu_{k+1})}
\leq C |p-p'|.
\end{equation*}
We then use the equation for the difference~$w^{(k)}_{m,\theta} - w^{(k)}_{m,\theta'}$ (see~\eqref{e.differe.wm}) and then apply the result of Lemma~\ref{l.diff.linearizedsystem} to obtain, for every $m\in\{1,\ldots,n+1\}$,
\begin{align*}
\lefteqn{
\left\| \nabla w^{(k)}_{m,\theta,0} -\nabla w^{(k)}_{m,\theta',0} \right\|_{\underline{L}^2((1+2^{-m})\cu_k)}
} \quad &
\\ &
\leq
C \left| h_m - h_m' \right|
+
C
\left\| \widetilde{\mathbf{F}}^{(k)}_{m,\theta,0} - \widetilde{\mathbf{F}}^{(k)}_{m,\theta',0} \right\|_{\underline{L}^2((1+2^{-m})\cu_k)}
+
C|p-p'| \sum_{i=1}^m \left| h_i \right|^{\frac mi}
\\ &
\leq
C \left| h_m - h_m' \right|
+
C |p-p'|
\sum_{i=1}^{m-1} \left( \left| h_i \right| \vee \left| h_i' \right| \right)^{\frac mi}
\\ & \quad
+
C\sum_{i=1}^{m-1}
3^{-k} \left\| w^{(k)}_{i,\theta,0} - w^{(k)}_{i,\theta',0} \right\|_{\underline{L}^{2}((1+2^{-i})\cu_k)}
\sum_{j=1}^{m-i}
\left( \left| h_j \right| \vee \left| h_j' \right| \right)^{\frac{m-i}{j}}.
\end{align*}
By induction we now obtain, for every $m\in\{1,\ldots,n+1\}$,
\begin{align*}
\lefteqn{
\left\| \nabla w^{(k)}_{m,\theta,0} -\nabla w^{(k)}_{m,\theta',0} \right\|_{\underline{L}^2((1+2^{-m})\cu_k)}
} \quad &
\\ &
\leq
C |p-p'| \sum_{i=1}^{m-1} \left( \left| h_i \right| \vee \left| h_i' \right| \right)^{\frac mi}
+C \sum_{i=1}^{m-1} |h_i-h_i'| \sum_{j=1}^{m-i}
\left( \left| h_j \right| \vee \left| h_j' \right| \right)^{\frac {m-i}j}.
\end{align*}
This implies~\eqref{e.contdep.Fm}.
\end{proof}
By combining~\eqref{e.detbounds.Fm} and~\eqref{e.contdep.Fm} and using interpolation, we obtain the existence of~$\alpha(\mathrm{data})>0$,~$Q(\mathrm{data})<\infty$ and $C(\mathsf{M},\mathrm{data})<\infty$ such that, with~$\mathcal{X}$ as in the statement of Lemma~\ref{l.contdep.coeffs}, for every~$k\in\mathbb{N}$ with $3^k\geq \mathcal{X}$, $m\in\{1,\ldots,n+2\}$, and $\theta=(p,h_1,\ldots,h_{n+1})\in \mathbb{R}^{d(n+2)}$ and $\theta'=(p',h_{1}',\ldots,h_{n+1}')\in \mathbb{R}^{d(n+2)}$
with $|p|,|p'|\leq \mathsf{M}$, we have
\begin{align}
\label{e.contdep.Fm.Linfty}
\lefteqn{
\left\| \mathbf{F}^{(k)}_{m,\theta} - \mathbf{F}^{(k)}_{m,\theta'} \right\|_{L^\infty(\cu_k)}
} \quad &
\\ & \notag
\leq
C3^{Qk} |p-p'|^\alpha \sum_{i=1}^{m-1} \left( \left| h_i \right| \vee \left| h_i' \right| \right)^{\frac mi}
+C3^{Qk} \sum_{i=1}^{m-1} |h_i-h_i'|^\alpha \sum_{j=1}^{m-i}
\left( \left| h_j \right| \vee \left| h_j' \right| \right)^{\frac {m-i}j}.
\end{align}
Likewise, we can use~\eqref{e.detbounds.ak} and~\eqref{e.contdep.ap} to obtain
\begin{equation}
\label{e.contdep.ap.Linfty}
\left\| \mathbf{a}_p^{(k)} - \mathbf{a}_{p'}^{(k)} \right\|_{L^\infty(\cu_k)}
\leq
C 3^{Qk} \left|p-p'\right|^\alpha.
\end{equation}
This variation of Lemma~\ref{l.contdep.coeffs} will be needed below.
\subsection{Setup of the proof of Theorem~\ref{t.linearizehigher}}
We are now ready to begin the proof of the implication
\begin{equation*}
\mbox{
~\eqref{e.assumption.section4}
\, $\implies$ \,
the statement of Theorem~\ref{t.linearizehigher}\, with~$n+1$ in place of~$n$.
}
\end{equation*}
We fix parameters $\mathsf{M} \in [1,\infty)$, $\delta>0$ and $\varepsilon\in (0,1)$,
a sequence of Lipschitz domains $U_1,U_2,\ldots,U_{n+2} \subseteq \cu_0$ satisfying
\begin{equation}
\label{e.domainsdescending}
\overline{U}_{m+1} \subseteq U_m, \quad \forall m\in\{1,\ldots,n+1\},
\end{equation}
a function $f\in W^{1,2+\delta}(U_1)$ satisfying
\begin{equation}
\label{e.fboundedass}
\left\| \nabla f \right\|_{L^{2+\delta}(U_1)} \leq \mathsf{M},
\end{equation}
and a sequence of boundary conditions
$g_1\in W^{1,2+\delta}(U_1), \ldots, g_{n+2} \in W^{1,2+\delta}(U_{n+2})$. We let $u^\varepsilon \in f+H^1_0(U_1)$ and $w^\varepsilon_1\in g_1+H^1_0(U_1),\ldots,w^\varepsilon_{n+2} \in g_{n+2}+H^1_0(U_{n+2})$ as well as $\overline{u} \in f+H^1_0(U_1)$ and $\overline{w}_1\in g_1+H^1_0(U_1),\ldots,\overline{w}_{n+2} \in g_{n+2}+H^1_0(U_{n+2})$ be as in the statement of Theorem~\ref{t.linearizehigher} for~$n+1$ in place of~$n$.
\smallskip
We denote
\begin{equation}
\label{e.aepbars}
\left\{
\begin{aligned}
& \mathbf{a}^\varepsilon(x):= D^2_p L \left( \nabla u^\varepsilon(x),\tfrac x\varepsilon \right), \\
& {\overbracket[1pt][-1pt]{\mathbf{a}}}(x):= D^2_p \overline{L} \left( \nabla \overline{u}(x) \right)
\end{aligned}
\right.
\end{equation}
and
\begin{equation}
\label{e.Fepbars}
\left\{
\begin{aligned}
& \mathbf{F}^\varepsilon_m(x):= \mathbf{F}_{m}\left(\nabla u^\varepsilon(x), \nabla w^\varepsilon_{1}(x), \ldots, \nabla w^\varepsilon_{m-1}(x),\tfrac x\varepsilon \right), \\
& \overline{\mathbf{F}}_m(x):= \overline{\mathbf{F}}_{m}\left(\nabla \overline{u}(x), \nabla \overline{w}_{1}(x), \ldots, \nabla \overline{w}_{m-1}(x)\right).
\end{aligned}
\right.
\end{equation}
We also choose $K\in\mathbb{N}$ to be the unique positive integer satisfying $3^{-K-1}< \varepsilon \leq 3^{-K}$. We write
\begin{equation}
\label{e.overlineTheta}
\overline\theta(x):= \left( \nabla \overline{u}(x),\nabla \overline{w}_{1}(x), \ldots, \nabla \overline{w}_{n+1}(x) \right).
\end{equation}
By the assumption~\eqref{e.assumption.section4}, we only need to homogenize the linearized equation for $m=n+2$. As we have already proved a homogenization result in~\cite[Theorem 1.1]{AFK} for the linearized equation with zero right-hand side, it suffices to prove~\eqref{e.homogenization.estimates} for $m=n+2$ under the assumption that the boundary condition vanishes:
\begin{equation}
\label{e.gvanishass}
g_{n+2}= 0 \quad \mbox{in} \ U_{n+2}.
\end{equation}
We can write the equations for $w_{n+2}^\varepsilon$ and $\overline{w}_{n+2}$ respectively as
\begin{equation}
\label{e.wn+2}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \mathbf{a}^\varepsilon\nabla w_{n+2}^\varepsilon \right) = \nabla \cdot \mathbf{F}^\varepsilon_{n+2} & \mbox{in} & \ U_{n+2}, \\
& w_{n+2}^\varepsilon = 0 & \mbox{on} & \ \partial U_{n+2},
\end{aligned}
\right.
\end{equation}
and
\begin{equation}
\label{e.wn+2.bar}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \overline{\mathbf{a}}(x) \nabla \overline{w}_{n+2} \right) = \nabla \cdot \overline{\mathbf{F}}_{n+2} & \mbox{in} & \ U_{n+2}, \\
& \overline{w}_{n+2}= 0 & \mbox{on} & \ \partial U_{n+2}.
\end{aligned}
\right.
\end{equation}
Our goal is to prove the following estimate: there exist a constant $C(\mathsf{M},\mathrm{data})<\infty$, exponents~$\sigma(\mathrm{data})>0$ and $\alpha(\mathrm{data})>0$ and a random variable~$\mathcal{X}$ satisfying
\begin{equation}
\label{e.Xint.wts}
\mathcal{X} = \mathcal{O}_\sigma(C),
\end{equation}
as in the statement of the theorem:
\begin{equation}
\label{e.homogenization.estimates.wts}
\left\| \nabla w_{n+2}^\varepsilon - \nabla \overline{w}_{n+2} \right\|_{H^{-1}(U_{n+2})}
\leq
\mathcal{X} \varepsilon^{\alpha}
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{equation}
To prove~\eqref{e.homogenization.estimates.wts}, we first compare the solution~$w_{n+2}^\varepsilon$ to the solution~$\widetilde{w}_{n+2}^\varepsilon$ of a second heterogeneous problem, namely
\begin{equation}
\label{e.wn+2.tilde}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \widetilde{\mathbf{a}}^\varepsilon\nabla \widetilde{w}_{n+2}^\varepsilon \right) = \nabla \cdot \widetilde{\mathbf{F}}^\varepsilon_{n+2} & \mbox{in} & \ U_{n+2}, \\
& \widetilde{w}_{n+2}^\varepsilon = 0 & \mbox{on} & \ \partial U_{n+2},
\end{aligned}
\right.
\end{equation}
where the coefficient fields~$\widetilde{\mathbf{a}}^\varepsilon$ and~$\widetilde{\mathbf{F}}^\varepsilon_{n+2}$ are defined in terms of the localized, stationary approximating coefficients (introduced above in Subsection~\ref{ss.stat}) by
\begin{equation}
\label{e.tildecoeffs}
\left\{
\begin{aligned}
& \widetilde{\mathbf{a}}^\varepsilon(x)
:=
\mathbf{a}^{(k)}_{\nabla \overline{u}(x)}\left(\tfrac x\varepsilon\right), \\
& \widetilde{\mathbf{F}}^\varepsilon_{n+2}(x)
:=
\mathbf{F}^{(k)}_{n+2,\overline\theta(x)}
\left(\tfrac x\varepsilon\right).
\end{aligned}
\right.
\end{equation}
The parameter~$k\in\mathbb{N}$ will be chosen below in such a way that~$1\ll 3^{k} \ll \varepsilon^{-1}$. We also need to declare a second mesoscopic scale by taking $l\in\mathbb{N}$ such that~$1\ll 3^k \ll 3^l \ll \varepsilon^{-1}$ and,
for every $m\in\{1,\ldots,n+1\}$,
\begin{equation}
\label{e.l.mesofitting}
U_{m+1} + \varepsilon\cu_l \subseteq U_m.
\end{equation}
Like~$k$, the parameter~$l$ will be declared later in this section. For convenience, we will also take a slightly smaller domain $U_{n+3}$ than $U_{n+2}$, which also depends on~$\varepsilon$ and~$l$ and is defined by
\begin{equation}
\label{e.defUn+3}
U_{n+3}:= \left\{ x\in U_{n+2} \,:\, x + \varepsilon\cu_l \subseteq U_{n+2} \right\}.
\end{equation}
Thus we have~\eqref{e.l.mesofitting} for every $m\in\{1,\ldots,n+2\}$. See Subsection~\ref{ss.meso} below for more on the choices of the parameters~$k$ and~$l$.
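For concreteness, one admissible way to realize the scale separation $1\ll 3^k\ll 3^l\ll \varepsilon^{-1}$ is the following (an illustration only; the exponents $\gamma$ and $\lambda$ here are placeholders for the actual choices made when the mesoscales are fixed):
% With K defined by 3^{-K-1} < eps <= 3^{-K}, picking exponents
% 0 < gamma < lambda < 1 produces mesoscales strictly between the unit
% scale and the macroscopic scale eps^{-1}.
\begin{equation*}
k := \lceil \gamma K \rceil, \qquad l := \lceil \lambda K \rceil,
\qquad \mbox{so that} \quad
3^k \simeq \varepsilon^{-\gamma} \ll 3^l \simeq \varepsilon^{-\lambda} \ll \varepsilon^{-1}.
\end{equation*}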
\smallskip
The estimate~\eqref{e.homogenization.estimates.wts} follows from the following two estimates, which are proved separately below (here~$\mathcal{X}$ denotes a random variable as in the statement of the theorem):
\begin{equation} \label{e.homogenization.estimates.wts.1}
\left\| \nabla w_{n+2}^\varepsilon - \nabla \widetilde{w}_{n+2}^\varepsilon \right\|_{L^2(U_{n+2})}
\leq
\mathcal{X} \varepsilon^{\alpha}
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}
\end{equation}
and
\begin{equation} \label{e.homogenization.estimates.wts.2}
\left\| \nabla \widetilde{w}_{n+2}^\varepsilon - \nabla \overline{w}_{n+2} \right\|_{H^{-1}(U_{n+2})}
\leq
\mathcal{X} \varepsilon^{\alpha}
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{equation}
\subsection{Estimates on the size of the $w_m^\varepsilon$ and $\overline{w}_m$}
To prepare for the proofs of~\eqref{e.homogenization.estimates.wts.1} and~\eqref{e.homogenization.estimates.wts.2}, we present some preliminary bounds on the size of the $w_m^\varepsilon$'s. The first is a set of deterministic bounds representing the ``worst-case scenario'' in which nothing has homogenized up to the current scale.
\begin{lemma}
\label{l.detbounds.het}
There exist exponents~$\beta(d,\Lambda)\in (0,1)$ and $Q(\mathrm{data})<\infty$ and a constant~$C(\{ U_m \}, \mathsf{M},\mathsf{K}_0,\mathrm{data})<\infty$ such that, for every $m\in \{1,\ldots,n+2\}$,
\begin{equation}
\label{e.detbounds.het}
\left\| \nabla w^\varepsilon_{m} \right\|_{C^{0,\beta}(U_{m+1})}
\leq
C \varepsilon^{-Q} \sum_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{equation}
\end{lemma}
\begin{proof}
By~\cite[Proposition A.3]{AFK}, there exist~$\beta(d,\Lambda)\in (0,1)$, $Q(\mathrm{data})<\infty$ and $C(\{ U_m\},\mathsf{M},\mathsf{K}_0,\mathrm{data})<\infty$ such that
\begin{align*}
\left[ \nabla u^\varepsilon \right]_{C^{0,\beta}(U_1)}
\leq
C \varepsilon^{-Q}
\end{align*}
and hence
\begin{equation}
\label{e.detbounds.aep}
\left[ \mathbf{a}^\varepsilon \right]_{C^{0,\beta}(U_1)} \leq C \varepsilon^{-Q}.
\end{equation}
We will argue by induction in~$m\in \{1,\ldots,n+2\}$ that there exist $Q(m,\mathrm{data})<\infty$ and $C(m,\{U_k\}, \mathsf{M},\mathsf{K}_0,\mathrm{data})<\infty$ such that
\begin{equation}
\label{e.detbounds.wm.het.indy}
\left\| \nabla w^\varepsilon_{m} \right\|_{C^{0,\beta}(U_{m+1})}
\leq
C \varepsilon^{-Q} \sum_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{equation}
For $m=1$ the claim follows from Proposition~\ref{p.schauder} and~\eqref{e.detbounds.aep}. Suppose now that there exists $M\in \{2,\ldots,n+2\}$ such that the bound~\eqref{e.detbounds.wm.het.indy} holds for each $m \in \{1,\ldots,M-1\}$. Then we obtain that, for some $Q(M,\mathrm{data})<\infty$,
\begin{equation*}
\left\| \mathbf{F}^{\varepsilon}_M \right\|_{L^2(U_M)}
\leq
C \varepsilon^{-Q} \sum_{j=1}^{M} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{M}{j}}.
\end{equation*}
Then by the basic energy estimate, we get
\begin{equation*}
\left\| \nabla w^\varepsilon_{M} \right\|_{{L}^2(U_M)}
\leq
C \varepsilon^{-Q} \sum_{j=1}^{M} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{M}{j}}.
\end{equation*}
In view of~\eqref{e.detbounds.aep} and the previous two displays,
another application of Proposition~\ref{p.schauder} yields, after enlarging~$Q$ and~$C$, the bound~\eqref{e.detbounds.wm.het.indy} for~$m=M$. This completes the induction argument and the proof of the lemma.
\end{proof}
The \emph{typical} size of $|\nabla w_m^\varepsilon|$ is much better than~\eqref{e.detbounds.het} gives.
\mathbf{b}etagin{lemma}
\label{l.goodbounds.wm}
For each $q\in [2,\infty)$, there exist $\mathbf{m}athbf{s}igma(q,data)>0$, $C(q,\mathbf{m}athsf{M},\mathbf{m}athsf{K}_0,\mathrm{data})<\infty$ and a random variable $\mathcal{X}$ satisfying
\mathbf{b}etagin{equation*}
\mathcal{X} = \mathcal{O}_\mathbf{m}athbf{s}igma(C)
\end{equation*}
such that, for each $m\in \{1,\ldots,n+1\}$,
\mathbf{b}etagin{equation}
\label{e.goodbounds.wm}
\left\| \nabla w^\varepsilon_{m} \right\|_{L^q(U_{m+1})}
\leq
\mathcal{X} \mathbf{m}athbf{s}um_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\deltalta}(U_{j})}^{\mathbf{m}athbf{f}rac{m}{j}}.
\end{equation}
\end{lemma}
\begin{proof}
We argue by induction in~$m$. Assume that~\eqref{e.goodbounds.wm} holds for $m\in \{1,\ldots,M-1\}$ for some $M\leq n+1$.
By~\eqref{e.assumption.section4}, in particular the assumed validity of Theorem~\ref{t.regularity.linerrors} for $m\leq n+1$,
it suffices to show that
\begin{equation}
\label{e.mishitswts}
\left\| \nabla w^\varepsilon_{M} \right\|_{{L}^2(U_M)}
\leq
C\mathcal{X} \sum_{j=1}^{M} \left\| \nabla g_{j} \right\|_{{L}^{2}(U_{j})}^{\frac{M}{j}}.
\end{equation}
The induction hypothesis yields that
\begin{equation}
\left\| \mathbf{F}_{M}^\varepsilon \right\|_{L^2(U_M)}
\leq
C\mathcal{X} \sum_{j=1}^{M-1} \left\| \nabla g_{j} \right\|_{{L}^{2}(U_{j})}^{\frac{M}{j}}
\end{equation}
and then the basic energy estimate yields~\eqref{e.mishitswts}.
\end{proof}
We also require bounds on the homogenized solutions~$\overline{u}$ and~$\overline{w}_1,\ldots,\overline{w}_{n+2}$. These are consequences of elliptic regularity estimates presented in Appendix~\ref{s.appendixconstant}, namely Lemma~\ref{l.appC.C1alphabarwn}, which is applicable here because of the assumption~\eqref{e.assumption.section4} which ensures that~$\overline{L} \in C^{3+\mathsf{N},\beta}$ for every $\beta\in (0,1)$. We obtain the existence of $C(\mathsf{M},\mathrm{data})<\infty$ such that, for every~$m\in\{1,\ldots,n+1\}$,
\begin{equation}
\label{e.detanchors}
\left\{
\begin{aligned}
& \left\| \nabla \overline{u} \right\|_{C^{0,1}(\overline{U}_2)}
\leq C,
\\ &
\left\| \nabla \overline{w}_m \right\|_{C^{0,1}(\overline{U}_{n+2})}
\leq
C \sum_{i=1}^{m} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac mi}.
\end{aligned}
\right.
\end{equation}
In particular, the function~$\overline{\theta}$ defined in~\eqref{e.overlineTheta} is Lipschitz continuous. By the global Meyers estimate, we also have, for some~$\delta(d,\Lambda)>0$ and~$C(\mathsf{M},\mathrm{data})<\infty$, the bound
\begin{equation}
\label{e.wn2meyers}
\left\| \nabla \overline{w}_{n+2} \right\|_{L^{2+\delta}(\overline{U}_{n+2})}
\leq
C \left\| \overline{\mathbf{F}}_{n+2} \right\|_{L^{2+\delta}(U_{n+2})}
\leq
C \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac{n+2}i}.
\end{equation}
We may also apply Lemma~\ref{l.appC.C1alphabarwn} to get a bound on $\overline{w}_{n+2}$. In view of the merely mesoscopic gap between $\partial U_{n+2}$ and $U_{n+3}$, which is much smaller than the macroscopic gaps between $\partial U_{m}$ and $U_{m+1}$ for $m\in\{1,\ldots,n+1\}$, cf.~\eqref{e.domainsdescending} and~\eqref{e.defUn+3}, the estimate we obtain is, for every $\beta \in (0,1)$,
\begin{equation}
\label{e.detanchors2}
\left\| \nabla \overline{w}_{n+2} \right\|_{C^{0,\beta}(\overline{U}_{n+3})}
\leq
C \left( 3^l\varepsilon \right)^{-Q} \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i},
\end{equation}
for an exponent~$Q(\beta,\mathrm{data})<\infty$ which can be explicitly computed rather easily (but for our purposes is not worth the bother) and a constant~$C(\beta,\mathsf{M},\mathrm{data})<\infty$.
\subsection{The mesoscopic parameters~$k$ and~$l$}
\label{ss.meso}
Here we discuss the choice of the parameters~$k$ and~$l$ which specify the mesoscopic scales. Recall that $1 \ll 3^k \ll 3^l \ll \varepsilon^{-1}$. As we will see later in this section, we estimate the left sides of~\eqref{e.homogenization.estimates.wts.1} and~\eqref{e.homogenization.estimates.wts.2} by expressions of the form
\begin{equation}
\label{e.expression}
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-Q} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \left( \varepsilon 3^l \right)^\alpha 3^{kQ} \right)
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}},
\end{equation}
where~$Q(\mathrm{data})<\infty$ is a large exponent and~$\alpha(\mathrm{data})>0$ is a small exponent. We then need to choose $k$ and $l$ so that the expression in parentheses is a positive power of~$\varepsilon$. We may do this as follows. First, we may assume throughout for convenience that $Q\geq 1 \geq 4\alpha$. Then we may take care of the first two terms by choosing $l$ so that $\varepsilon3^l$ is a ``very large'' mesoscale: we let $l$ be defined so that
\begin{equation*}
\varepsilon 3^l \leq \varepsilon^{\frac\alpha{4Q}} < \varepsilon 3^{l+1} .
Then we see that, for some $\beta>0$,
\begin{equation*}
\varepsilon^\alpha \left( \varepsilon 3^l \right)^{-Q} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1}
\leq C \varepsilon^{\beta}.
\end{equation*}
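For the record, here is one elementary computation behind the previous display; it is only a sketch, using the two-sided bound $\tfrac13\varepsilon^{\frac{\alpha}{4Q}} < \varepsilon 3^l \leq \varepsilon^{\frac{\alpha}{4Q}}$ from the definition of~$l$ together with the standing assumption $Q \geq 1 \geq 4\alpha$, and it shows that one may take $\beta = \tfrac{\alpha}{2}$:
\begin{align*}
\varepsilon^\alpha \left( \varepsilon 3^l \right)^{-Q}
& \leq \varepsilon^\alpha \cdot 3^Q \varepsilon^{-\frac{\alpha}{4}}
= 3^Q \varepsilon^{\frac{3\alpha}{4}},
\\
3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1}
& = \varepsilon^{\alpha} \left( \varepsilon 3^l \right)^{-1-\alpha}
\leq 3^{1+\alpha} \varepsilon^{\alpha\left(1 - \frac{1+\alpha}{4Q}\right)}
\leq C \varepsilon^{\frac{\alpha}{2}},
\end{align*}
since $\frac{1+\alpha}{4Q} \leq \frac{1}{2}$ under the assumption above.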
Next, we take care of the last term: since, for some $\beta>0$,
\begin{equation*}
\left( \varepsilon 3^l \right)^\alpha 3^{kQ}
\leq
C\varepsilon^{\beta} 3^{kQ},
\end{equation*}
we can make this smaller than $\varepsilon^{\frac\beta2}$ by taking $\varepsilon 3^k$ to be a ``very small'' mesoscale. We take $k$ so that, for~$\beta$ and~$Q$ as in the previous display,
\begin{equation*}
3^k \leq \varepsilon^{-\frac{\beta}{2Q}} < 3^{k+1}.
\end{equation*}
From this we deduce that $\left( \varepsilon 3^l \right)^\alpha 3^{kQ}
\leq
C\varepsilon^{\frac\beta2}$. We see that this choice of $k$ also makes the third term inside the parentheses on the right side of~\eqref{e.expression} smaller than a positive power of~$\varepsilon$. With these choices, we obtain that, for some $\beta>0$,
\begin{equation}
\label{e.goodchoices}
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-Q} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \left( \varepsilon 3^l \right)^\alpha 3^{kQ} \right)
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}
\leq C\varepsilon^{\beta}.
\end{equation}
Throughout the rest of this section, we will allow the exponents~$\alpha \in \left(0,\tfrac14\right]$ and~$Q\in [1,\infty)$ to vary in each occurrence; they will always depend only on~$\mathrm{data}$. Similarly, we let~$c$ and~$C$ denote positive constants which may vary in each occurrence and whose dependence will be clear from the context.
\subsection{The minimal scales}
\label{ss.mscales}
Many of the estimates we will use in the proof of Theorem~\ref{t.linearizehigher} are deterministic (i.e., the constants~$C$ in the estimates are not random) but valid only above a minimal scale~$\mathcal{X}$ which satisfies~$\mathcal{X} = \mathcal{O}_\delta(C)$ for some $\delta(\mathrm{data})>0$ and $C(\mathsf{M},\mathrm{data})<\infty$. This includes, for instance, the estimates of Theorems~\ref{t.regularity.Lbar},~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} assumed to be valid by our assumption~\eqref{e.assumption.section4}, as well as Lemmas~\ref{l.diff.linearizedsystem},~\ref{l.detbounds.wm},~\ref{l.contdep.coeffs} and~\ref{l.goodbounds.wm}, some estimates like~\eqref{e.contdep.Fm.Linfty} which appear in the text, and future estimates such as those of Lemmas~\ref{l.coeffclose},~\ref{l.fmsclose} and~\ref{l.barsclose}. We stress that this list is not exhaustive.
\smallskip
In our proofs of~\eqref{e.homogenization.estimates.wts.1} and~\eqref{e.homogenization.estimates.wts.2}, we may always suppose that $3^k \geq \mathcal{X}$, where $\mathcal{X}$ is the maximum of all of these minimal scales. In fact, we may suppose that $3^k$ is larger than the stationary translation~$T_z\mathcal{X}$ of $\mathcal{X}$ by any element $z\in \mathbb{Z}^d \cap \varepsilon^{-1} U_1$. To see why, first we remark that if $\mathcal{X}$ is any random variable satisfying $\mathcal{X}= \mathcal{O}_\delta(C)$, then a union bound gives that, for any $Q<\infty$, the random variable
\begin{equation}
\widetilde{\mathcal{X}} := \sup\left\{ 3^{j+1} \,:\, \sup_{z\in \mathbb{Z}^d \cap 3^{Qj} U_1} T_z\mathcal{X} \geq 3^j \right\}
\end{equation}
satisfies, for a possibly smaller $\delta$ and larger $C<\infty$, depending also on $Q$, the estimate $\widetilde{\mathcal{X}} = \mathcal{O}_{\delta/2}(C)$. Since, as explained in the previous subsection,~$3^k$ is a small but positive power of $\varepsilon^{-1}$, we see that by choosing $Q$ suitably we have that $3^k \geq \widetilde{\mathcal{X}}$ implies the validity of all of our estimates. Finally, in the event that $3^k < \widetilde{\mathcal{X}}$, we can use the deterministic bounds obtained in Lemma~\ref{l.detbounds.het} and~\eqref{e.wn2meyers} very crudely as follows:
\begin{align*}
\left\|
\nabla {w}_{n+2}^\varepsilon
-
\nabla \overlineerline{w}_{n+2}
\right\|_{H^{-1}(U_{n+2})}
&
\leq
C\left\|
\nabla {w}_{n+2}^\varepsilon
-
\nabla \overlineerline{w}_{n+2}
\right\|_{L^2(U_{n+2})}
\\ &
\leq
C\left\|
\nabla {w}_{n+2}^\varepsilon\right\|_{L^2(U_{n+2})}
+
C\left\|
\nabla \overlineerline{w}_{n+2}
\right\|_{L^2(U_{n+2})}
\\ &
\leq
C\left( 1+ \varepsilon^{-Q} \right) \sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{align*}
Then we use that
\begin{equation}
(1+ \varepsilon^{-Q})\mathds{1}_{\{3^k < \widetilde{\mathcal{X}}\}}
\leq
\varepsilon {\widetilde{\mathcal{X}}}^Q = \mathcal{O}_{\delta/Q}(C\varepsilon),
\end{equation}
where in the previous display the exponent $Q$ is larger in the second instance than in the first.
This yields~\eqref{e.homogenization.estimates.wts.1} and~\eqref{e.homogenization.estimates.wts.2} in the event $\{ 3^k < \widetilde{\mathcal{X}} \}$, so we do not have to worry about this event.
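To check the middle inequality in the previous display (a sketch): on the event $\{3^k < \widetilde{\mathcal{X}}\}$, the choice of $k$ in Subsection~\ref{ss.meso} gives $\varepsilon^{-\frac{\beta}{2Q}} < 3^{k+1} \leq 3\widetilde{\mathcal{X}}$, and hence
\begin{equation*}
1+\varepsilon^{-Q}
\leq 2\varepsilon^{-Q}
= 2\varepsilon \cdot \varepsilon^{-(Q+1)}
\leq 2\varepsilon \left( 3 \widetilde{\mathcal{X}} \right)^{\frac{2Q(Q+1)}{\beta}}
\leq C \varepsilon \, \widetilde{\mathcal{X}}^{Q'}
\end{equation*}
for the larger exponent $Q' := \frac{2Q(Q+1)}{\beta}$ (recall that exponents and constants are allowed to vary in each occurrence). The identification $\widetilde{\mathcal{X}}^{Q'} = \mathcal{O}_{\delta/Q'}(C^{Q'})$ then follows from the general fact that $\mathcal{X} = \mathcal{O}_\delta(C)$ implies $\mathcal{X}^q = \mathcal{O}_{\delta/q}(C^q)$ for every $q\geq 1$.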
\smallskip
Therefore, throughout the rest of this section, we let $\mathcal{X}$ denote a minimal scale satisfying $\mathcal{X}=\mathcal{O}_\deltalta(C)$ which, in addition to both~$\deltalta$ and $C$, may vary in each occurrence.
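For completeness, we sketch the union-bound computation behind the estimate $\widetilde{\mathcal{X}} = \mathcal{O}_{\delta/2}(C)$ above. Recalling that $\mathcal{X} = \mathcal{O}_\delta(C)$ encodes a stretched-exponential tail bound of the form $\mathbb{P}\left[ T_z\mathcal{X} \geq 3^j \right] \leq 2\exp\left( -\left( 3^j/C \right)^{\delta} \right)$, valid for every $z$ by stationarity, we obtain
\begin{equation*}
\mathbb{P}\left[ \sup_{z\in \mathbb{Z}^d \cap 3^{Qj}U_1} T_z \mathcal{X} \geq 3^j \right]
\leq
C 3^{Qjd} \exp\left( -\left( \frac{3^j}{C} \right)^{\delta} \right),
\end{equation*}
and the right side is summable over $j$ with a stretched-exponential tail in $3^j$; the loss from $\delta$ to $\delta/2$ absorbs the polynomial volume factor $3^{Qjd}$, after enlarging~$C$.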
\subsection{The proof of the local stationary approximation}
We now turn to the proof of~\eqref{e.homogenization.estimates.wts.1}, which amounts to showing that the difference between the coefficient fields $(\mathbf{a}^\varepsilon,\mathbf{F}^\varepsilon_{n+2})$ and $(\widetilde{\mathbf{a}}^\varepsilon,\widetilde{\mathbf{F}}^\varepsilon_{n+2})$ is small. This is accomplished by an application of Lemma~\ref{l.diff.linearizedsystem}.
\begin{lemma}
\label{l.coeffclose}
There exist~$\alpha(\mathrm{data})>0$, $Q(\mathrm{data})<\infty$ and~$C(\mathsf{M},\mathrm{data})<\infty$ and a minimal scale $\mathcal{X}=\mathcal{O}_\delta(C)$ such that, if $3^k\geq \mathcal{X}$, then
\mathbf{b}etagin{align} \label{e.blargh.1}
&
\left\| \nabla w_{n+2}^\varepsilon - \nabla \widetilde{w}_{n+2}^\varepsilon \right\|_{L^2(U_{n+2})}
\\ & \quad \notag
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \left( \varepsilon 3^l \right)^\alpha 3^{kQ} \right)
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{align}
\end{lemma}
\begin{proof}
Throughout, $\mathcal{X}$ denotes a random scale which may change from line to line and satisfies $\mathcal{X}=\mathcal{O}_\delta(C)$.
\smallskip
For each $z\in \varepsilon3^l \mathbb{Z}^d$ with $z+\varepsilon\cu_{l+1}\subseteq U_{n+1}$, we compare $\nabla u^\varepsilon$ and $\nabla w_m^\varepsilon$ to the functions $\nabla v^{(l)}_{\nabla \overline{u}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)$ and $\nabla w^{(l)}_{m,\overline{\theta}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)$, using Lemma~\ref{l.diff.linearizedsystem} and the assumed validity of Theorems~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} for~$n$. The latter yields that, if $\varepsilon^{-1}\geq \mathcal{X}$, then for every such~$z$, and every~$m\in\{1,\ldots,n+1\}$,
\begin{equation*}
\left\{
\begin{aligned}
& \left\| u^\varepsilon - \overline{u} \right\|_{\underline{L}^2(z+\varepsilon \cu_{l+1})}
\leq C\varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2},
\\ &
\left\| w^\varepsilon_m - \overline{w}_m \right\|_{\underline{L}^2(z+\varepsilon\cu_{l+1})}
\leq
C\varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2}
\sum_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{aligned}
\right.
\end{equation*}
Likewise, if $3^l \geq \mathcal{X}$, then for every~$z$ as above, and every~$m\in\{1,\ldots,n+1\}$,
\begin{equation*}
\left\{
\begin{aligned}
& \left\| v^{(l)}_{\nabla \overline{u}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right) - \ell_{\nabla \overline{u}(z)} \right\|_{\underline{L}^2(z+\varepsilon \cu_{l+1})}
\leq C3^{-l\alpha},
\\ &
\left\| w^{(l)}_{m,\overline{\theta}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right) - \ell_{\nabla \overline{w}_m(z)} \right\|_{\underline{L}^2(z+\varepsilon \left(1+2^{-m} \right) \cu_l)}
\leq
C3^{-l\alpha}
\sum_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{aligned}
\right.
\end{equation*}
Here we used~\eqref{e.detanchors} in order that the~$C$ not depend implicitly on $\left| \nabla \overline{u}(z) \right|$ and in order that the prefactor on the right side of the second line be as it is, rather than~$\sum_{j=1}^m \left| \nabla \overline{w}_j(z) \right|^{\frac mj}$. Using~\eqref{e.detanchors} again, we see that
\begin{equation*}
\left\{
\begin{aligned}
& \left\| \overline{u} -\left( \overline{u}(z) + \ell_{\nabla \overline{u}(z)} \right) \right\|_{\underline{L}^2(z+\varepsilon \cu_{l+1})}
\leq C\varepsilon^2 3^{2l},
\\ &
\left\| \overline{w}_m - \left( \overline{w}_m(z) + \ell_{\nabla \overline{w}_m(z)} \right) \right\|_{\underline{L}^2(z+\varepsilon \cu_{l+1})}
\leq
C\varepsilon^2 3^{2l}
\sum_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{aligned}
\right.
\end{equation*}
Combining the three previous displays with the triangle inequality, we get, for all such~$m$ and~$z$ and provided that~$3^l\geq \mathcal{X}$,
\begin{equation*}
\left\{
\begin{aligned}
&
\left( \varepsilon 3^{l} \right)^{-1}
\left\| u^\varepsilon - v^{(l)}_{\nabla \overline{u}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right) - \overline{u}(z)
\right\|_{\underline{L}^2(z+\varepsilon \cu_{l+1})}
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + \varepsilon 3^{l} \right),
\\ &
\left( \varepsilon 3^{l} \right)^{-1} \left\| w^\varepsilon_m - w^{(l)}_{m,\overline{\theta}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right) - \overline{w}_m(z)
\right\|_{\underline{L}^2(z+\varepsilon\left(1+2^{-m} \right) \cu_l)}
\\ & \qquad
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + \varepsilon 3^{l} \right)
\sum_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{aligned}
\right.
\end{equation*}
An application of Lemma~\ref{l.diff.linearizedsystem} then yields
\begin{equation*}
\left\{
\begin{aligned}
&
\left\| \nabla u^\varepsilon - \nabla v^{(l)}_{\nabla \overline{u}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)
\right\|_{\underline{L}^2(z+\varepsilon \cu_{l})}
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + \varepsilon 3^{l} \right),
\\ &
\left\| \nabla w^\varepsilon_m - \nabla w^{(l)}_{m,\overline{\theta}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)
\right\|_{\underline{L}^2(z+\varepsilon \cu_l)}
\\ & \qquad
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + \varepsilon 3^{l} \right)
\sum_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{aligned}
\right.
\end{equation*}
By~$L^p$ interpolation and the bounds~\eqref{e.wmTheta0bounds.X} and~\eqref{e.goodbounds.wm}, we obtain, for every $q\in[2,\infty)$, an~$\mathcal{X}$ such that $3^l\geq \mathcal{X}$ implies that, for every~$m$ and~$z$ as above,
\begin{equation*}
\left\{
\begin{aligned}
&
\left\| \nabla u^\varepsilon - \nabla v^{(l)}_{\nabla \overline{u}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)
\right\|_{\underline{L}^q(z+\varepsilon \cu_{l})}
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + \varepsilon 3^{l} \right),
\\ &
\left\| \nabla w^\varepsilon_m - \nabla w^{(l)}_{m,\overline{\theta}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)
\right\|_{\underline{L}^q(z+\varepsilon \cu_l)}
\\ & \qquad
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + \varepsilon 3^{l} \right)
\sum_{j=1}^{m} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{m}{j}}.
\end{aligned}
\right.
\end{equation*}
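The interpolation step just performed rests on the elementary inequality, valid for any $f \in L^2 \cap L^\infty$ and $q \in [2,\infty)$,
\begin{equation*}
\left\| f \right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\leq
\left\| f \right\|_{\underline{L}^2(z+\varepsilon\cu_l)}^{\frac2q}
\left\| f \right\|_{L^\infty(z+\varepsilon\cu_l)}^{1-\frac2q},
\end{equation*}
applied to the differences of gradients appearing in the previous display; the remaining factors are controlled by the bounds~\eqref{e.wmTheta0bounds.X} and~\eqref{e.goodbounds.wm} cited above, at the price of shrinking the exponent~$\alpha$.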
This estimate implies
\begin{equation*}
\left\{
\begin{aligned}
& \left\| \mathbf{a}^\varepsilon - \mathbf{a}^{(l)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon\right) \right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + \varepsilon 3^{l} \right),
\\ &
\left\| \mathbf{F}^\varepsilon_{n+2} - \mathbf{F}^{(l)}_{n+2,\overline{\theta}(z)}\left( \tfrac\cdot\varepsilon\right) \right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\\ & \qquad
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} \!+\! \varepsilon 3^{l} \right)
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{aligned}
\right.
\end{equation*}
By a very similar argument, comparing the functions $\nabla v^{(l)}_{\nabla \overline{u}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)$ and $\nabla w^{(l)}_{m,\overline{\theta}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)$ to $\nabla v^{(k)}_{\nabla \overline{u}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)$ and $\nabla w^{(k)}_{m,\overline{\theta}(z),\frac z\varepsilon}\left( \tfrac \cdot \varepsilon\right)$, also using Lemma~\ref{l.diff.linearizedsystem} and the assumed validity of Theorems~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} for~$n$, we obtain, for all $m$ and $z$ as above,
\begin{equation*}
\left\{
\begin{aligned}
& \left\| \mathbf{a}^{(l)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon\right) - \mathbf{a}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon\right) \right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\leq
C 3^{-k\alpha} ,
\\ &
\left\| \mathbf{F}^{(l)}_{n+2,\overline{\theta}(z)}\left( \tfrac\cdot\varepsilon\right)
- \mathbf{F}^{(k)}_{n+2,\overline{\theta}(z)}\left( \tfrac\cdot\varepsilon\right)
\right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\leq
C3^{-k\alpha}
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{aligned}
\right.
\end{equation*}
Using~\eqref{e.contdep.Fm.Linfty} and~\eqref{e.contdep.ap.Linfty}, in view of~\eqref{e.detanchors}, we find an exponent~$Q(\mathrm{data})<\infty$ such that, for every~$m$ and~$z$ as above,
\begin{equation*}
\left\{
\begin{aligned}
& \left\| \widetilde{\mathbf{a}}^\varepsilon - \mathbf{a}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon\right) \right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\leq
C \left(\varepsilon 3^l \right) 3^{Qk} ,
\\ &
\left\|
\widetilde{\mathbf{F}}^\varepsilon_{n+2}
- \mathbf{F}^{(k)}_{n+2,\overline{\theta}(z)}\left( \tfrac\cdot\varepsilon\right)
\right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\leq
C\left(\varepsilon 3^l \right) 3^{Qk}
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{aligned}
\right.
\end{equation*}
Combining the above estimates, we finally obtain that, for every~$z$ and $m$ as above,
\begin{equation*}
\left\{
\begin{aligned}
& \left\| \mathbf{a}^\varepsilon - \widetilde{\mathbf{a}}^\varepsilon \right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \varepsilon 3^{l+kQ} \right),
\\ &
\left\| \mathbf{F}^\varepsilon_{n+2} - \widetilde{\mathbf{F}}^\varepsilon_{n+2} \right\|_{\underline{L}^q(z+\varepsilon\cu_l)}
\\ & \qquad
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \varepsilon 3^{l+kQ} \right)
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{aligned}
\right.
\end{equation*}
Using that these coefficient fields are bounded and that the boundary layer (i.e., the set of points which lie in $U_{n+2}$ but not in any cube of the form $z+\varepsilon\cu_{l}$ with $z$ as above) has thickness~$O(\varepsilon3^l)$, we get from the previous estimate that
\begin{equation*}
\left\{
\begin{aligned}
& \left\| \mathbf{a}^\varepsilon - \widetilde{\mathbf{a}}^\varepsilon \right\|_{\underline{L}^q(U_{n+2})}
\leq
C\mathsf{A},
\\ &
\left\| \mathbf{F}^\varepsilon_{n+2} - \widetilde{\mathbf{F}}^\varepsilon_{n+2} \right\|_{\underline{L}^q(U_{n+2})}
\leq
C\mathsf{A}
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}},
\end{aligned}
\right.
\end{equation*}
where (in order to shorten the expressions) we denote
\begin{equation*}
\mathsf{A}:=\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-\frac d2-1} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \left( \varepsilon 3^l \right)^\alpha 3^{kQ} \right).
\end{equation*}
To complete the proof of the lemma, we observe that $\zeta:= \widetilde{w}_{n+2}^\varepsilon-w^\varepsilon_{n+2}\in H^1_0(U_{n+2})$ satisfies the equation
\begin{equation}
-\nabla \cdot \widetilde{\mathbf{a}}^\varepsilon \nabla \zeta = \nabla \cdot \left( \widetilde{\mathbf{F}}^\varepsilon_{n+2} - \mathbf{F}^\varepsilon_{n+2} \right) + \nabla \cdot \left( \left( \widetilde{\mathbf{a}}^\varepsilon - {\mathbf{a}}^\varepsilon \right) \nabla w^\varepsilon_{n+2} \right).
\end{equation}
By the basic energy estimate (test the equation for~$\zeta$ with~$\zeta$), we obtain
\begin{align*}
\left\| \nabla \zeta \right\|_{{L}^{2}(U_{n+2})}
\leq
C\left(
\left\| \widetilde{\mathbf{F}}^\varepsilon_{n+2} - \mathbf{F}^\varepsilon_{n+2} \right\|_{L^{2 }(U_{n+2})}
+
\left\| \left( \widetilde{\mathbf{a}}^\varepsilon - {\mathbf{a}}^\varepsilon \right) \nabla w_{n+2}^\varepsilon \right\|_{L^{2}(U_{n+2})}
\right).
\end{align*}
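In detail (a sketch, with the ellipticity constant depending on~$\Lambda$): testing the equation for~$\zeta$ with~$\zeta$ itself gives
\begin{equation*}
\frac1C \left\| \nabla \zeta \right\|_{L^2(U_{n+2})}^2
\leq
\int_{U_{n+2}} \nabla \zeta \cdot \widetilde{\mathbf{a}}^\varepsilon \nabla \zeta
=
-\int_{U_{n+2}} \nabla \zeta \cdot \left( \widetilde{\mathbf{F}}^\varepsilon_{n+2} - \mathbf{F}^\varepsilon_{n+2} \right)
-\int_{U_{n+2}} \nabla \zeta \cdot \left( \widetilde{\mathbf{a}}^\varepsilon - \mathbf{a}^\varepsilon \right) \nabla w^\varepsilon_{n+2},
\end{equation*}
and the energy estimate follows after applying the Cauchy--Schwarz inequality to each term on the right and dividing by $\left\| \nabla \zeta \right\|_{L^2(U_{n+2})}$.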
The first term on the right side is already estimated above. For the second term, we use the Meyers estimate and~\eqref{e.goodbounds.wm}, without forgetting~\eqref{e.gvanishass}, to find~$\delta(d,\Lambda)>0$ such that
\begin{align*}
\left\| \nabla {w}^\varepsilon_{n+2} \right\|_{\underline{L}^{2+\delta}(U_{n+2})}
\leq
C \left\| {\mathbf{F}}^\varepsilon_{n+2} \right\|_{L^{2+\delta}(U_{n+2})}
\leq
C
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{align*}
Therefore, by H\"older's inequality and the above estimate on~$\left\| \mathbf{a}^\varepsilon - \widetilde{\mathbf{a}}^\varepsilon \right\|_{\underlinederline{L}^q(z+\varepsilon\cu_l)} $ with exponent~$q:=\mathbf{m}athbf{f}rac{4+2\deltalta}{\deltalta} $, we obtain
\begin{align*}
\lefteqn{
\left\| \left( \widetilde{\mathbf{a}}^\varepsilon - \mathbf{a}^\varepsilon \right) \nabla w^\varepsilon_{n+2} \right\|_{L^{2}(U_{n+2})}
} \qquad &
\\ &
\leq
\left\| \widetilde{\mathbf{a}}^\varepsilon - \mathbf{a}^\varepsilon \right\|_{L^{\frac{4+2\delta}{\delta}}(U_{n+2})} \left\| \nabla w^\varepsilon_{n+2} \right\|_{L^{2+\delta}(U_{n+2})}
\leq
C\mathsf{A}
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{align*}
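The choice of the exponent $q = \frac{4+2\delta}{\delta}$ in the application of H\"older's inequality above is dictated by the requirement that it pair with the $L^{2+\delta}$ norm to produce an $L^2$ norm:
\begin{equation*}
\frac1q + \frac{1}{2+\delta}
=
\frac{\delta}{4+2\delta} + \frac{2}{4+2\delta}
=
\frac{\delta+2}{2(2+\delta)}
=
\frac12 .
\end{equation*}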
This completes the proof of~\eqref{e.blargh.1}.
\end{proof}
In view of the discussion in Subsections~\ref{ss.meso} and~\ref{ss.mscales},
Lemma~\ref{l.coeffclose} implies~\eqref{e.homogenization.estimates.wts.1}.
\subsection{Homogenization estimates for the approximating equations}
\label{ss.homogapp}
To prepare for the proof of~\eqref{e.homogenization.estimates.wts.2},
we apply the quantitative homogenization estimates proved in~\cite{AKMbook} for the linear equation~\eqref{e.statpde}. We denote by ${\overbracket[1pt][-1pt]{\mathbf{a}}}_p^{(k)}$ and $\overline{\mathbf{F}}^{(k)}_{n+2,\theta}$ the homogenized coefficients.
\smallskip
By an application of the results of~\cite[Chapter 11]{AKMbook}, the solutions $v(\cdot,U,e,\theta)$ of the family of Dirichlet problems
\begin{equation}
\label{e.linearizequantities}
\left\{
\begin{aligned}
& -\nabla\cdot\left( \mathbf{a}^{(k)}_p \nabla v(\cdot,U,e,\theta) \right) = \nabla \cdot \mathbf{F}^{(k)}_{n+2,\theta} & \mbox{in} & \ U, \\
& v(\cdot,U,e,\theta) = \ell_e & \mbox{on} & \ \partial U,
\end{aligned}
\right.
\end{equation}
indexed over bounded Lipschitz domains $U\subseteq\mathbb{R}^d$ and $e\in B_1$ and~$\theta\in\mathbb{R}^{d(n+2)}$,
satisfy the following quantitative homogenization estimate: there exist~$\alpha(s,d,\Lambda)>0$, $\beta(\mathrm{data})>0$, $\delta(\mathrm{data})>0$ and~$Q(\mathrm{data})<\infty$ and~$C(s,\mathsf{M},\mathrm{data})<\infty$ and a random variable~$\mathcal{X} = \mathcal{O}_\delta(C)$ such that, for every $\theta=(p,h_1,\ldots,h_{n+1})\in\mathbb{R}^{d(n+2)}$ with $|p|\leq \mathsf{M}$, $e\in\mathbb{R}^d$ and every $l\in\mathbb{N}$ with $3^{l-k} \geq \mathcal{X}$,
\begin{align}
\label{e.appcorrbounds}
&
3^{-l} \left\| v(\cdot,\cu_{l+1},e,\theta) - \ell_e \right\|_{\underline{L}^2(\cu_l)} +
3^{-l} \left\| \nabla v(\cdot,\cu_{l+1},e,\theta) - e \right\|_{\underline{H}^{-1}(\cu_l)}
\\ & \ \ \notag
+
3^{-l} \left\| \mathbf{a}_p^{(k)} \nabla v(\cdot,\cu_{l+1},e,\theta) + \mathbf{F}^{(k)}_{n+2,\theta} - \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p^{(k)} e+\overline{\mathbf{F}}^{(k)}_{n+2,\theta} \right)\right\|_{\underline{H}^{-1}(\cu_l)}
\\ & \notag
\qquad\qquad\qquad\qquad
\qquad\qquad \qquad
\leq
C3^{-\alpha(l-k)} 3^{Qk} \left( |e| + \sum_{j=1}^{n+1} \left| h_j \right|^{\frac {n+2}j} \right)
\end{align}
as well as
\begin{equation}
\label{e.detbound.wU}
\left\| \nabla v(\cdot,\cu_{l+1},e,\theta) \right\|_{C^{0,\beta}(\cu_l)}
\leq
Cl^Q3^{Qk} \left( |e| + \sum_{j=1}^{n+1} \left| h_j \right|^{\frac {n+2}j} \right).
\end{equation}
To obtain~\eqref{e.appcorrbounds}, we change the scale by performing a dilation $x\mapsto 3^k \left\lceil \left(15+d\right)^{\frac12} \right\rceil x$ so that the resulting coefficient fields have a unit range of dependence and are still~$\mathbb{Z}^d$--stationary, cf.~\eqref{e.klocalize}. We then apply~\cite[Lemma 11.11]{AKMbook} to obtain~\eqref{e.appcorrbounds}, using Lemma~\ref{l.detbounds.wm}, namely~\eqref{e.detbounds.Fm}, to give a bound on the parameter~$\mathsf{K}$ in the application of the former. The reason we quote results for general nonlinear equations, despite the fact that~\eqref{e.linearizequantities} is linear, is that~\eqref{e.linearizequantities} has a nonzero right-hand side and the results for linear equations have not been formalized in that generality.
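Written out, with $r := 3^k \left\lceil \left( 15+d \right)^{\frac12} \right\rceil$, the dilation replaces the fields by (the subscript $r$ is ad hoc notation for this remark only)
\begin{equation*}
\mathbf{a}^{(k)}_{p,r}(x) := \mathbf{a}^{(k)}_{p}(rx),
\qquad
\mathbf{F}^{(k)}_{n+2,\theta,r}(x) := \mathbf{F}^{(k)}_{n+2,\theta}(rx),
\end{equation*}
which, in view of~\eqref{e.klocalize}, have range of dependence at most one and remain stationary with respect to $\mathbb{Z}^d$-translations.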
\smallskip
The bound~\eqref{e.detbound.wU} is a consequence of the large-scale $C^{0,1}$--type estimate for the equation~\eqref{e.statpde} (see \cite[Theorem 11.13]{AKMbook}) which, together with a routine union bound, yields~\eqref{e.detbound.wU} with the left side replaced by
$\sup_{z \in \mathbb{Z}^d \cap \cu_l} \left\| \nabla v \right\|_{\underline{L}^2(z+(l-k)\cu_k)}$. Indeed, if we denote by $\mathcal{X}_z$ the minimal scale in the large-scale $C^{0,1}$ estimate, then a union bound yields (for any $s\in (0,d)$, so in particular we can take $s>1$),
\begin{equation*}
\max\left\{
k' \in \mathbb{N} \,:\, \max_{z\in \mathbb{Z}^d \cap \cu_{k'} } \mathcal{X}_z > \left( \log 3^{k'-k} \right)^{\frac 1s}
\right\}
\leq \mathcal{O}_\delta (C).
\end{equation*}
We obtain~\eqref{e.detbound.wU} by combining this bound for $\sup_{z \in \mathbb{Z}^d \cap \cu_l} \left\| \nabla v \right\|_{\underline{L}^2(z+(l-k)\cu_k)}$ with the deterministic bounds~\eqref{e.detbounds.Fm}, \eqref{e.detbounds.ak} and an application of the Schauder estimates (see Proposition~\ref{p.schauder}).
\smallskip
We also need the following estimate for the difference of~$v$'s on overlapping cubes, which can be inferred from~\eqref{e.appcorrbounds} and the Caccioppoli inequality:
there exist~$\alpha(s,d,\Lambda)>0$, $\beta(\mathrm{data})>0$, $\delta(\mathrm{data})>0$, $Q(\mathrm{data})<\infty$ and~$C(s,\mathsf{M},\mathrm{data})<\infty$ and a random variable~$\mathcal{X} = \mathcal{O}_\delta(C)$ such that, for every $\theta=(p,h_1,\ldots,h_{n+1})\in\mathbb{R}^{d(n+2)}$ with $|p|\leq \mathsf{M}$, $e\in\mathbb{R}^d$ and every $l\in\mathbb{N}$ with $3^{l-k} \geq \mathcal{X}$,
\begin{align}
\label{e.appcorr.overlap}
\lefteqn{
\sum_{z'\in 3^l\mathbb{Z}^d\cap \cu_{l+1}}
\left\|
\nabla v\left(\cdot,z+\cu_{l+1},e,\theta\right)
-
\nabla v\left(\cdot,z'+\cu_{l},e,\theta\right)
\right\|_{\underline{L}^2(z'+\cu_{l})}
}
\qquad\qquad\qquad\qquad
\qquad\qquad\qquad
&
\\ & \notag
\leq
C3^{-\alpha(l-k)} \left( |e| + 3^{Qk} \sum_{j=1}^{n+1} \left| h_j \right|^{\frac {n+2}j} \right) .
\end{align}
Finally, we mention the following deterministic estimate on the functions $v(\cdot,z+\cu_{l+1},e,\theta)$:
\begin{align}
\label{e.appcorr.bndd}
\left\| \nabla v(\cdot,z+\cu_{l+1},e,\theta) \right\|_{\underline{L}^2(z+\cu_{l+1})}
&
\leq
C \left( |e| + \left\| \mathbf{F}^{(k)}_{n+2,\theta} \right\|_{\underline{L}^2(z+\cu_{l+1})} \right)
\\ & \notag
\leq
C\left( |e| + \sum_{j=1}^{n+1} \left| h_j \right|^{\frac {n+2}j} \right).
\end{align}
This follows from testing the problem~\eqref{e.linearizequantities} with $U=z+\cu_{l+1}$ against the test function $v(\cdot,z+\cu_{l+1},e,\theta) - \ell_e\in H^1_0(z+\cu_{l+1})$ and then using~\eqref{e.FmTheta0bounds.X} with $m=n+2$.
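For the reader's convenience, we record the elementary energy estimate behind the first inequality in~\eqref{e.appcorr.bndd}. Writing $v := v(\cdot,z+\cu_{l+1},e,\theta)$ and testing the weak formulation of~\eqref{e.linearizequantities} with $v - \ell_e \in H^1_0(z+\cu_{l+1})$ gives
\begin{equation*}
\int_{z+\cu_{l+1}} \nabla \left( v - \ell_e \right) \cdot \mathbf{a}_p^{(k)} \nabla \left( v - \ell_e \right)
=
- \int_{z+\cu_{l+1}} \nabla \left( v - \ell_e \right) \cdot \left( \mathbf{a}_p^{(k)} e + \mathbf{F}^{(k)}_{n+2,\theta} \right),
\end{equation*}
and the uniform ellipticity of~$\mathbf{a}_p^{(k)}$ together with H\"older's and Young's inequalities then yields $\left\| \nabla v - e \right\|_{\underline{L}^2(z+\cu_{l+1})} \leq C \left( |e| + \left\| \mathbf{F}^{(k)}_{n+2,\theta} \right\|_{\underline{L}^2(z+\cu_{l+1})} \right)$; the first inequality in~\eqref{e.appcorr.bndd} follows by the triangle inequality.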
Our next goal is to compare the homogenized coefficients~$\overline{\mathbf{F}}^{(k)}_{m,\theta}$ for the approximating equations to the functions~$\overline{\mathbf{F}}_m (\theta)$ defined in~\eqref{e.defbarFm}. In view of the results of Section~\ref{s.regLbar}, it is natural to proceed by comparing the vector fields~$\mathbf{F}_{m,\theta}^{(k)}$ to~$\mathbf{f}^{(m)}_{p,h}$ defined in~\eqref{e.corrcoeff}. This is accomplished by invoking Lemma~\ref{l.diff.linearizedsystem} again.
\begin{lemma}
\label{l.fmsclose}
Fix $q\in [2,\infty)$ and $\mathsf{M}\in [1,\infty)$.
There exist~$\delta(q,\mathrm{data}),\alpha(q,\mathrm{data})\in (0,\frac{1}{2}]$ and~$C(q,\mathsf{M},\mathrm{data})<\infty$ such that, for every~$k\in\mathbb{N}$ with $3^k\geq \mathcal{X}$,~$m\in\{ 1,\ldots,n+2\}$ and~$p,h\in\mathbb{R}^d$ with $|p|\leq \mathsf{M}$, we have
\begin{equation}
\label{e.comcoeff.ap}
\left\| \mathbf{a}_p - \mathbf{a}_p^{(k)} \right\|_{\underline{L}^q(\cu_k)}
\leq
\mathcal{O}_\delta\left( C 3^{-k\alpha} \right)
\end{equation}
and, for~$\theta:=(p,h,0,\cdots,0)\in \mathbb{R}^{d(n+2)}$,
\begin{equation}
\label{e.comcoeff.Fm}
\left\| \mathbf{F}_{m,\theta}^{(k)} - \mathbf{f}^{(m)}_{p,h} \right\|_{\underline{L}^q(\cu_k)}
\leq
\mathcal{O}_\delta\left( C 3^{-k\alpha} |h|^{m} \right).
\end{equation}
\end{lemma}
\begin{proof}
We take~$\mathcal{X}$ to be the maximum of the random scales in Theorem~\ref{t.correctorestimates}, Lemmas~\ref{l.corr.sublinearity} and~\ref{l.diff.linearizedsystem} and Theorems~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors} for $n$, the latter two being valid by assumption~\eqref{e.assumption.section4}. Assume $3^k \geq \mathcal{X}$.
\smallskip
By Lemma~\ref{l.corr.sublinearity} and the assumed validity of Theorem~\ref{t.linearizehigher} for $n$, we have
\begin{equation*}
3^{-k} \left\| v^{(k)}_{p,0} - \left( \ell_p + \phi_p - \left(\phi_p \right)_{\cu_{k+1}} \right) \right\|_{\underline{L}^2(\cu_{k+1})}
\leq
C 3^{-k\alpha},
\end{equation*}
\begin{equation*}
3^{-k} \left\| w^{(k)}_{1,\theta,0} - \left( \ell_h+\psi^{(1)}_{p,h} - \left(\psi^{(1)}_{p,h} \right)_{\frac32\cu_{k}} \right) \right\|_{\underline{L}^2(\frac32\cu_{k})}
\leq
C 3^{-k\alpha}|h|
\end{equation*}
and, for $m\in\{2,\ldots,n+1\}$,
\begin{equation*}
3^{-k} \left\| w^{(k)}_{m,\theta,0} - \left( \psi^{(m)}_{p,h} - \left(\psi^{(m)}_{p,h} \right)_{\frac32\cu_{k}} \right) \right\|_{\underline{L}^2((1+2^{-m})\cu_{k})}
\leq
C 3^{-k\alpha} |h|^m.
\end{equation*}
By Theorem~\ref{t.correctorestimates} and the assumed validity of Theorem~\ref{t.regularity.linerrors} for $n$, we also have that, for every $m\in\{1,\ldots,n+1\}$,
\begin{equation}
\label{e.wmTheta0bounds}
\left\| \nabla w^{(k)}_{m,\theta,0}
\right\|_{\underline{L}^2((1+2^{-m})\cu_{k})}
+
\left\|
\nabla \psi^{(m)}_{p,h}
\right\|_{\underline{L}^2((1+2^{-m})\cu_{k})}
\leq
C|h|^m.
\end{equation}
Using these estimates and Lemma~\ref{l.diff.linearizedsystem}, we obtain, for $m\in\{2,\ldots,n+1\}$,
\begin{equation}
\label{e.closenessofcorr}
\left\| \nabla w^{(k)}_{m,\theta,0} - \nabla \psi^{(m)}_{p,h}
\right\|_{\underline{L}^2(\cu_{k})}
\leq
C 3^{-k\alpha} |h|^m.
\end{equation}
The previous display holds for $3^k\geq \mathcal{X}$. Combining this estimate with~\eqref{e.sec3corrbnd} and~\eqref{e.FmTheta0bounds.X2} and using $L^p$ interpolation, we obtain
\begin{equation}
\label{e.closenessofcorr.O}
\left\| \nabla w^{(k)}_{m,\theta,0} - \nabla \psi^{(m)}_{p,h}
\right\|_{\underline{L}^2(\cu_{k})}
\leq
\mathcal{O}_\delta\left( C 3^{-k\alpha} |h|^m \right).
\end{equation}
The conclusion of the lemma now follows.
\end{proof}
We next observe that the homogenized coefficients~${\overbracket[1pt][-1pt]{\mathbf{a}}}^{(k)}_p$ and $\overline{\mathbf{F}}^{(k)}_{n+2,\theta}$ agree, up to a small error, with ${\overbracket[1pt][-1pt]{\mathbf{a}}}_p:=D^2\overline{L}(p)$ and $\overline{\mathbf{F}}_{n+2}(\theta)$. This will eventually help us prove that the homogenized coefficients for the linearized equations are the linearized coefficients of the homogenized equation.
\begin{lemma}
\label{l.barsclose}
Fix $\mathsf{M}\in [1,\infty)$. There exist~$\alpha(\mathrm{data})>0$ and~$C(n,\mathsf{M},\mathrm{data})<\infty$ such that, for every $\theta =(p,h_1,\ldots,h_{n+1}) \in B_\mathsf{M} \times \mathbb{R}^{d(n+1)}$,
\begin{equation}
\label{e.abarsclose}
\left| {\overbracket[1pt][-1pt]{\mathbf{a}}}_p - {\overbracket[1pt][-1pt]{\mathbf{a}}}_p^{(k)}\right|
\leq
C 3^{-k\alpha}
\end{equation}
and
\begin{equation}
\label{e.barsclose}
\left| \overline{\mathbf{F}}^{(k)}_{n+2,\theta} - \overline{\mathbf{F}}_{n+2}(\theta)\right|
\leq
C 3^{-k\alpha}
\left( \sum_{j=1}^{n+1} \left| h_j \right|^{\frac {n+2}j} \right).
\end{equation}
\end{lemma}
\begin{proof}
We first give the argument in the case that $\theta=(p,h,0,\ldots,0)$.
From the definition~\eqref{e.defbarFm} and Proposition~\ref{e.Lbar.reg.qualitative}, we see that $\left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p,\overline{\mathbf{F}}_{n+2}(\theta) \right)$ are the homogenized coefficients for the stationary coefficient fields~$\left( \mathbf{a}_p, \mathbf{f}_{p,h}^{(n+2)} \right)$. By Lemma~\ref{l.fmsclose} and the bounds~\eqref{e.detbounds.Fm} and~\eqref{e.sec3corrbnd}, we find that, for every $q\in [2,\infty)$,
\begin{equation}
\left\| \mathbf{a}_p - \mathbf{a}_p^{(k)} \right\|_{\underline{L}^q(\cu_k)}
\leq \mathcal{O}_\delta\left( 3^{-k\alpha} \right)
\end{equation}
and
\begin{equation}
\left\| \mathbf{F}_{m,\theta}^{(k)} - \mathbf{f}^{(m)}_{p,h} \right\|_{\underline{L}^q(\cu_k)}
\leq
\mathcal{O}_\delta\left( C 3^{-k\alpha} |h|^{m} \right),
\end{equation}
with the constants $C$ depending additionally on~$q$. The result in the case~$\theta=(p,h,0,\ldots,0)$ now follows easily from the Meyers estimate. Indeed, by subtracting the equations and applying H\"older's inequality and the Meyers estimate, we can show that the first-order correctors for the two linear problems are close in the $\underline{L}^2(\cu_k)$ norm (with a second moment in expectation). Therefore their fluxes are also close, and taking the expectations of the fluxes gives the homogenized coefficients. This argument can also be performed in a finite-volume box~$\cu_M$, with the result obtained after sending $M\to \infty$. See~\cite[Lemma 3.2]{AFK} for a similar argument.
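Schematically (ignoring the finite-volume issues just mentioned), if $\phi$ and $\phi^{(k)}$ denote the first-order correctors in a direction~$e\in\mathbb{R}^d$ for the coefficient fields $\mathbf{a}_p$ and $\mathbf{a}_p^{(k)}$, respectively, then subtracting the two corrector equations yields
\begin{equation*}
-\nabla \cdot \left( \mathbf{a}_p^{(k)} \nabla \left( \phi - \phi^{(k)} \right) \right)
=
\nabla \cdot \left( \left( \mathbf{a}_p - \mathbf{a}_p^{(k)} \right) \left( e + \nabla \phi \right) \right),
\end{equation*}
so that, by the basic energy estimate, H\"older's inequality and the Meyers estimate,
\begin{equation*}
\left\| \nabla \phi - \nabla \phi^{(k)} \right\|_{\underline{L}^2(\cu_k)}
\leq
C \left\| \mathbf{a}_p - \mathbf{a}_p^{(k)} \right\|_{\underline{L}^{q}(\cu_k)}
\left\| e + \nabla \phi \right\|_{\underline{L}^{2+\delta'}(\cu_k)}
\leq
C \left\| \mathbf{a}_p - \mathbf{a}_p^{(k)} \right\|_{\underline{L}^{q}(\cu_k)} |e|,
\end{equation*}
where~$\delta'$ is the Meyers exponent and $\frac1q := \frac12 - \frac1{2+\delta'}$. Combined with~\eqref{e.comcoeff.ap}, this gives the closeness of the correctors, and hence of their fluxes, claimed above.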
\smallskip
For general~$\theta$ the result follows by polarization (see Lemma~\ref{l.polarization}) and the multilinear structure shared by the functions $\theta\mapsto \overline{\mathbf{F}}^{(k)}_{n+2,\theta}$ and $\theta \mapsto \overline{\mathbf{F}}_{n+2}(\theta)$.
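To recall the mechanism in its simplest form: a symmetric $m$-linear map $B$ is determined by its values on the diagonal through the classical polarization identity
\begin{equation*}
B(h_1,\ldots,h_m)
=
\frac{1}{2^m\, m!} \sum_{\sigma\in\{-1,1\}^m} \sigma_1\cdots\sigma_m\,
B\left( \sum_{i=1}^m \sigma_i h_i,\ldots,\sum_{i=1}^m \sigma_i h_i \right);
\end{equation*}
Lemma~\ref{l.polarization} provides the variant of this identity adapted to the multi-homogeneous dependence on the entries of~$\theta$ used here.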
\end{proof}
We also require the following continuity estimate for the functions $v(\cdot,\cu_l,e,\theta)$ in~$(e,\theta)$. Following the proof of Lemma~\ref{l.contdep.coeffs} for~$L^2$ dependence in $\theta$ (we need to do one more step of the iteration described there), using~\eqref{e.appcorrbounds} for~$L^2$ dependence in~$e$, and then interpolating this result with~\eqref{e.detbound.wU}, we obtain exponents $\alpha(\mathrm{data})>0$ and $Q(\mathrm{data})<\infty$ and a random variable~$\mathcal{X} = \mathcal{O}_\delta(C)$ such that, for every $e,e'\in\mathbb{R}^d$ and~$\theta=(p,h_1,\ldots,h_{n+1})\in \mathbb{R}^{d(n+2)}$ and $\theta'=(p',h_{1}',\ldots,h_{n+1}')\in \mathbb{R}^{d(n+2)}$
with $|p|,|p'|\leq \mathsf{M}$, if $3^k\geq \mathcal{X}$, then
\begin{align}
\label{e.contdep.corrw}
&
\left\| \nabla v(\cdot,\cu_{l+1},e,\theta) - \nabla v(\cdot,\cu_{l+1},e',\theta' )\right\|_{L^\infty(\cu_l)}
\\
& \qquad \notag
\leq
Cl^Q3^{kQ} |e-e'|
+ Cl^Q3^{kQ} |p-p'|^\alpha \sum_{i=1}^{n+1} \left( \left| h_i \right| \vee \left| h_i' \right| \right)^{\frac {n+2}i}
+Cl^Q 3^{kQ} \sum_{i=1}^{n+1} |h_i-h_i'|^\alpha \sum_{j=1}^{n+2-i}
\left( \left| h_j \right| \vee \left| h_j' \right| \right)^{\frac {n+2-i}j}.
\end{align}
\subsection{Homogenization of the locally stationary equation}
We next present the proof of~\eqref{e.homogenization.estimates.wts.2}, which is the final step in the proof of Theorem~\ref{t.linearizehigher}. The argument follows a fairly routine (albeit technical) two-scale expansion and requires the homogenization estimates given in Subsection~\ref{ss.homogapp}.
\begin{proof}[{Proof of~\eqref{e.homogenization.estimates.wts.2}}]
In view of the discussion in Subsection~\ref{ss.mscales}, we may assume throughout that $3^k\geq \mathcal{X}$, where~$\mathcal{X}$ is any minimal scale described above.
We begin by building an approximation of the solution~$\widetilde{w}_{n+2}^\varepsilon$ of~\eqref{e.wn+2.tilde} out of the solutions of~\eqref{e.linearizequantities}. We select a smooth function $\zeta \in C^\infty_c( \mathbb{R}^d )$ such that, for some $C(d)<\infty$ and each~$i\in\{1,2\}$,
\begin{equation}
\label{e.cutoffzeta}
\mathds{1}_{\varepsilon \cu_l} \leq \zeta \leq \mathds{1}_{\varepsilon \cu_{l+1}},
\quad
\left\| \nabla^i \zeta \right\|_{L^\infty}
\leq C\left(\varepsilon3^{l}\right)^{-i},
\quad \mbox{and} \quad
\sum_{z\in \varepsilon 3^l\mathbb{Z}^d} \zeta(\cdot-z) = 1 \quad \mbox{in} \ \mathbb{R}^d.
\end{equation}
We can construct such a~$\zeta$ by a suitable mollification of $\mathds{1}_{\varepsilon\cu_{l}}$.
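For instance, writing $\eta_r := r^{-d}\eta(\tfrac\cdot r)$ for a standard mollifier $\eta\in C^\infty_c(B_1)$ with $\int_{\mathbb{R}^d} \eta = 1$, the choice
\begin{equation*}
\zeta := \mathds{1}_{\varepsilon\cu_{l}} \ast \eta_{r}
\quad \mbox{with} \quad
r := \tfrac12 \varepsilon 3^{l}
\end{equation*}
is supported in $\varepsilon\cu_{l+1}$, satisfies $\left\| \nabla^i \zeta \right\|_{L^\infty} \leq C(\varepsilon 3^l)^{-i}$, and inherits the partition-of-unity property in~\eqref{e.cutoffzeta} from the fact that the translates of $\varepsilon\cu_l$ by $\varepsilon 3^l\mathbb{Z}^d$ tile $\mathbb{R}^d$ up to a null set (to obtain the lower bound in~\eqref{e.cutoffzeta} exactly, one mollifies the indicator of a slightly enlarged cube instead).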
We write $\zeta_z:= \zeta(\cdot-z)$ for short. We next define a function $\widetilde{v}^\varepsilon_{n+2} \in H^1_0(U_{n+2})$ by
\begin{align*}
&
\widetilde{v}_{n+2}^\varepsilon(x)
\\ & \quad
:=
\overline{w}_{n+2}(x)
+
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z(x) \left( \varepsilon v\left( \tfrac x\varepsilon , \tfrac{z}\varepsilon + \cu_{l+1},\nabla \overline{w}_{n+2}(z),\overline{\theta}(z) \right) - \ell_{\nabla \overline{w}_{n+2}(z)}(x) \right).
\end{align*}
Since $\widetilde{v}_{n+2}^\varepsilon - \widetilde{w}_{n+2}^\varepsilon \in H^1_0(U_{n+2})$,
we have that
\begin{align}
\label{e.captcha}
\left\| \nabla \widetilde{v}^\varepsilon_{n+2} - \nabla \widetilde{w}^\varepsilon_{n+2}
\right\|_{L^2(U_{n+2})}
&
\leq
C
\left\| \nabla \cdot \widetilde{\mathbf{a}}^\varepsilon \nabla \widetilde{v}^\varepsilon_{n+2}
- \nabla \cdot \widetilde{\mathbf{a}}^\varepsilon\nabla \widetilde{w}^\varepsilon_{n+2}
\right\|_{H^{-1}(U_{n+2})}
\\ & \notag
=
C
\left\|
\nabla \cdot \left( \widetilde{\mathbf{a}}^\varepsilon \nabla \widetilde{v}_{n+2}^\varepsilon
+ \widetilde{\mathbf{F}}^\varepsilon_{n+2} \right)
\right\|_{H^{-1}(U_{n+2})}.
\end{align}
A preliminary goal is therefore to prove the estimate
\begin{align}
\label{e.eqyes.wts}
&
\left\|
\nabla \cdot \left( \widetilde{\mathbf{a}}^\varepsilon \nabla \widetilde{v}_{n+2}^\varepsilon
+ \widetilde{\mathbf{F}}^\varepsilon_{n+2} \right)
\right\|_{H^{-1}(U_{n+2})}
\\ & \qquad \notag
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-Q} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \left( \varepsilon 3^l \right)^\alpha 3^{kQ} \right)
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{align}
The proof of~\eqref{e.eqyes.wts} is essentially completed in Steps~1--5 below: see~\eqref{e.yesstep6} and~\eqref{e.captcha.indeed}. After obtaining this estimate, we will prove that
\begin{equation}
\label{e.throwyes.wts}
\left\| \nabla \widetilde{v}^\varepsilon_{n+2} - \nabla \overline{w}_{n+2} \right\|_{H^{-1}(U_{n+2})}
\leq
C 3^{-\alpha(l-k)} 3^{Qk} \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}.
\end{equation}
Together these inequalities imply~\eqref{e.homogenization.estimates.wts.2}.
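For clarity, these estimates combine via the triangle inequality: since the $H^{-1}(U_{n+2})$ norm is bounded by the $L^2(U_{n+2})$ norm,
\begin{equation*}
\left\| \nabla \widetilde{w}^\varepsilon_{n+2} - \nabla \overline{w}_{n+2} \right\|_{H^{-1}(U_{n+2})}
\leq
C\left\| \nabla \widetilde{w}^\varepsilon_{n+2} - \nabla \widetilde{v}^\varepsilon_{n+2} \right\|_{L^2(U_{n+2})}
+
\left\| \nabla \widetilde{v}^\varepsilon_{n+2} - \nabla \overline{w}_{n+2} \right\|_{H^{-1}(U_{n+2})},
\end{equation*}
where the first term on the right is controlled by~\eqref{e.captcha} and~\eqref{e.eqyes.wts}, and the second by~\eqref{e.throwyes.wts}.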
\smallskip
Denote $v_z:= v\left( \cdot , \tfrac{z}\varepsilon + \cu_{l+1},\nabla \overline{w}_{n+2}(z),\overline{\theta}(z) \right)$ for short and compute
\begin{align}
\label{e.gradtildevep}
\nabla \widetilde{v}_{n+2}^\varepsilon(x)
&
=
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \left(
\nabla \zeta_z(x) \left( \varepsilon v_z(\tfrac x\varepsilon ) - \ell_{\nabla \overline{w}_{n+2}(z) }(x)\right)
+ \zeta_z(x) \nabla v_z\left( \tfrac x\varepsilon \right)
\right)
\\ & \qquad \notag
+
\left( \nabla \overline{w}_{n+2}(x) - \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \zeta_z(x) \nabla \overline{w}_{n+2}(z) \right) .
\end{align}
Thus
\begin{align}
\label{e.fluxtildevep}
\widetilde{\mathbf{a}}^\varepsilon \nabla \widetilde{v}^{\varepsilon}_{n+2}
&
=
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z
\widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac \cdot\varepsilon\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right)
\\ & \qquad \notag
+
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\widetilde{\mathbf{a}}^\varepsilon\nabla \zeta_z \left( \varepsilon v_z(\tfrac \cdot\varepsilon ) - \ell_{\nabla \overline{w}_{n+2}(z)} \right)
\\ & \qquad \notag
+
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z
\left( \widetilde{\mathbf{a}}^\varepsilon - \widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon \right)\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right)
\\ & \qquad \notag
+
\widetilde{\mathbf{a}}^\varepsilon\left( \nabla \overline{w}_{n+2} - \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \zeta_z \nabla \overline{w}_{n+2}(z) \right) .
\end{align}
Using the equation for $v_z$, we find that
\begin{align*}
\nabla \cdot \left( \widetilde{\mathbf{a}}^\varepsilon \nabla \widetilde{v}^{\varepsilon}_{n+2} + \widetilde{\mathbf{F}}^\varepsilon_{n+2} \right)
&
=
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z \cdot \left(
\widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac \cdot\varepsilon\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right) + \widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac \cdot\varepsilon\right) \right)
\\ & \qquad
+\nabla \cdot \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z
\left( \widetilde{\mathbf{a}}^\varepsilon - \widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon \right)\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right)
\\ & \qquad
+
\nabla \cdot \left( \widetilde{\mathbf{F}}^\varepsilon_{n+2} - \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z
\widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac \cdot\varepsilon\right) \right)
\\ & \qquad
+
\nabla \cdot\left( \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\widetilde{\mathbf{a}}^\varepsilon\nabla \zeta_z \left( \varepsilon v_z(\tfrac \cdot\varepsilon ) - \ell_{\nabla \overline{w}_{n+2}(z)} \right) \right)
\\ & \qquad
+
\nabla \cdot
\left(
\widetilde{\mathbf{a}}^\varepsilon\left( \nabla \overline{w}_{n+2} - \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \zeta_z \nabla \overline{w}_{n+2}(z) \right)
\right).
\end{align*}
Therefore, we have that
\mathbf{b}etagin{align}
\label{e.bigestwthreetterms}
\lefteqn{
\left\|
\nabla \cdot \left( \widetilde{\mathbf{a}}^\varepsilon \nabla \widetilde{v}_{n+2}^\varepsilon
+ \widetilde{\mathbf{m}athbf{F}}^\varepsilon_{n+2} \right)
\right\|_{H^{-1}(U_{n+2})}
} \quad &
\\ & \notag
\leq
C\left\|
\mathbf{m}athbf{s}um_{z\in \varepsilon3^l\mathbf{m}athbb{Z}d \cap U_{n+3}}
\nabla \zeta_z \cdot \left(
\widetilde{\mathbf{a}}^{(k)}_{\nabla \overlineerline{u}(z)}\left(\tfrac \cdot\varepsilon\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right) + \widetilde{\mathbf{m}athbf{F}}^{(k)}_{n+2,\overlineerline{\mathbf{m}athsf{\!t}heta}(z)}\left(\tfrac \cdot\varepsilon\right) \right)
\right\|_{H^{-1}(U_{n+2})}
\\ & \qquad \notag
+C\left\| \mathbf{m}athbf{s}um_{z\in \varepsilon3^l\mathbf{m}athbb{Z}d \cap U_{n+3}}
\zeta_z(x)
\left( \widetilde{\mathbf{a}}^\varepsilon - \widetilde{\mathbf{a}}^{(k)}_{\nabla \overlineerline{u}(z)}\left(\tfrac\cdot\varepsilon \right)\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right)
\right\|_{L^2(U_{n+2})}
\\ & \qquad \notag
+
C\left\|
\widetilde{\mathbf{m}athbf{F}}^\varepsilon_{n+2} - \mathbf{m}athbf{s}um_{z\in \varepsilon3^l\mathbf{m}athbb{Z}d \cap U_{n+3}}
\zeta_z
\widetilde{\mathbf{m}athbf{F}}^{(k)}_{n+2,\overlineerline{\mathbf{m}athsf{\!t}heta}(z)}\left(\tfrac \cdot\varepsilon\right)
\right\|_{L^2(U_{n+2})}
\\ & \qquad \notag
+
C\left\| \mathbf{m}athbf{s}um_{z\in \varepsilon3^l\mathbf{m}athbb{Z}d \cap U_{n+3}}
\nabla \zeta_z \left( \varepsilon v_z(\tfrac \cdot\varepsilon ) - \ell_{\nabla \overlineerline{w}_{n+2}(z)}(x) \right)
\right\|_{L^2(U_{n+2})}
\\ & \qquad \notag
+
C\left\| \nabla \overlineerline{w}_{n+2}(x) - \mathbf{m}athbf{s}um_{z\in \varepsilon3^l\mathbf{m}athbb{Z}d \cap U_{n+3}} \zeta_z(x) \nabla \overlineerline{w}_{n+2}(z)
\right\|_{L^2(U_{n+2})} .
\end{align}
We next estimate each of the five terms on the right side of the previous display.
\smallskip
\emph{Step 1.} The estimate of the first term on the right side of~\eqref{e.bigestwthreetterms}.
This is where we use the homogenized equation for $\overline{w}_{n+2}$.
Fix $h\in H^1_0(U_{n+2})$ with $\left\| h \right\|_{H^1(U_{n+2})}=1$.
By the equation for $\overline{w}_{n+2}$ and an integration by parts, we have
\begin{align*}
0
&
=
\int_{U_{n+2}}
\nabla h(x) \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}(x) \nabla \overline{w}_{n+2}(x) + \overline{\mathbf{F}}_{n+2}(x) \right)
\,dx
\\ &
=
- \int_{U_{n+2}}
h(x) \sum_{z\in\varepsilon3^l\mathbb{Z}^d\cap U_{n+3}} \nabla \zeta_z(x) \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}(x) \nabla \overline{w}_{n+2}(x) + \overline{\mathbf{F}}_{n+2}(x) \right)
\,dx
\\ & \qquad
+
\int_{U_{n+2}}
\nabla h(x) \cdot \sum_{z\in\varepsilon3^l\mathbb{Z}^d\setminus U_{n+3}} \zeta_z(x) \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}(x) \nabla \overline{w}_{n+2}(x) + \overline{\mathbf{F}}_{n+2}(x) \right)
\,dx .
\end{align*}
Therefore,
\begin{align*}
\lefteqn{
\left|
\int_{U_{n+2}}
h(x)
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z(x) \cdot \left(
\widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac x\varepsilon\right)
\nabla v_z\left( \tfrac x\varepsilon \right) + \widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac x\varepsilon\right) \right) \,dx
\right|
} &
\\ &
\leq
\left|
\int_{U_{n+2}}
h \!\!
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z \cdot \left(
\widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac \cdot \varepsilon\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right) + \widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac \cdot\varepsilon\right)
-
\left( {\overbracket[1pt][-1pt]{\mathbf{a}}} \nabla \overline{w}_{n+2} + \overline{\mathbf{F}}_{n+2} \right)
\right) \,dx
\right|
\\ & \qquad
+
\int_{U_{n+2}}
\left| \nabla h(x) \right|
\left| \, \sum_{z\in\varepsilon3^l\mathbb{Z}^d\setminus U_{n+3}} \zeta_z(x) \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}(x) \nabla \overline{w}_{n+2}(x) + \overline{\mathbf{F}}_{n+2}(x) \right)
\, \right|
\,dx .
\end{align*}
For the first term on the right, we use~\eqref{e.detanchors},~\eqref{e.appcorrbounds},~\eqref{e.abarsclose} and~\eqref{e.barsclose} to obtain
\begin{multline*}
\left|
\int_{U_{n+2}}
h \!\!
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z \cdot \left(
\widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac \cdot \varepsilon\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right) + \widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac \cdot\varepsilon\right)
-
\left( {\overbracket[1pt][-1pt]{\mathbf{a}}} \nabla \overline{w}_{n+2} + \overline{\mathbf{F}}_{n+2} \right)
\right) \,dx \,
\right|
\\
\leq
C \left\| h \right\|_{H^1(U_{n+2})}
\left( 3^{-\alpha(l-k)}3^{Qk} + 3^{-k\alpha} \right)
\sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}.
\end{multline*}
For the second term, we denote
\begin{equation}
\label{e.defUn4}
U_{n+4}:= \left\{ x\in U_{n+3} \,:\, x+\varepsilon\cu_{l+1} \subseteq U_{n+3} \right\},
\end{equation}
observe that $\left|U_{n+2}\setminus U_{n+4} \right| \leq C \varepsilon 3^{l}$, and compute, using the H\"older inequality together with~\eqref{e.wn2meyers},
\begin{align*}
\lefteqn{
\int_{U_{n+2}}
\left| \nabla h(x) \right| \left| \sum_{z\in\varepsilon3^l\mathbb{Z}^d\setminus U_{n+3}} \zeta_z(x) \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}(x) \nabla \overline{w}_{n+2}(x) + \overline{\mathbf{F}}_{n+2}(x) \right) \right|
\,dx
} \qquad &
\\ &
\leq
C \left\| \nabla h \right\|_{L^2(U_{n+2})}
\left\| {\overbracket[1pt][-1pt]{\mathbf{a}}} \nabla \overline{w}_{n+2} + \overline{\mathbf{F}}_{n+2} \right\|_{L^2(U_{n+2} \setminus U_{n+4})}
\\ &
\leq
C \left\| \nabla h \right\|_{L^2(U_{n+2})}
\left(
(\varepsilon3^l)^{\frac{\delta}{4+2\delta}}
\left\| \nabla \overline{w}_{n+2} \right\|_{L^{2+\delta}(U_{n+2})} + \varepsilon 3^l \left\| \overline{\mathbf{F}}_{n+2} \right\|_{L^\infty(U_{n+2})} \right)
\\ &
\leq
C \left\| \nabla h \right\|_{L^2(U_{n+2})} (\varepsilon3^l)^{\alpha} \sum_{i=1}^{n+1} \left\| g_i \right\|_{L^2(U_i)}^{\frac{n+2}i} .
\end{align*}
Combining the above estimates and taking the supremum over~$h\in H^1_0(U_{n+2})$ with~$\| h \|_{H^1(U_{n+2})}=1$ yields
\begin{align*}
&
\left\|
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z \cdot \left(
\widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac \cdot\varepsilon\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right) + \widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac \cdot\varepsilon\right) \right)
\right\|_{H^{-1}(U_{n+2})}
\\ & \qquad\qquad\qquad\qquad
\notag
\leq
C \left( 3^{-\alpha(l-k)}3^{Qk} + 3^{-k\alpha} + (\varepsilon3^l)^{\alpha} \right) \sum_{i=1}^{n+1} \left\| g_i \right\|_{L^2(U_i)}^{\frac{n+2}i} .
\end{align*}
\smallskip
\emph{Step 2.} The estimate of the second term on the right side of~\eqref{e.bigestwthreetterms}. We use~\eqref{e.contdep.ap.Linfty},~\eqref{e.detanchors} and~\eqref{e.appcorr.bndd} to find
\begin{align*}
\lefteqn{
\left\| \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z
\left( \widetilde{\mathbf{a}}^\varepsilon - \widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon \right)\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right)
\right\|_{L^2(U_{n+2})}
} \qquad &
\\ &
\leq
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\left\|
\left( \widetilde{\mathbf{a}}^\varepsilon - \widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon \right)\right)
\nabla v_z\left( \tfrac \cdot\varepsilon \right)
\right\|_{L^2(z+\varepsilon\cu_{l+1})}
\\ &
\leq
C
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\left\|
\widetilde{\mathbf{a}}^\varepsilon - \widetilde{\mathbf{a}}^{(k)}_{\nabla \overline{u}(z)}\left(\tfrac\cdot\varepsilon \right)
\right\|_{L^\infty(z+\varepsilon\cu_{l+1})}
\left\| \nabla v_z\left( \tfrac \cdot\varepsilon \right)
\right\|_{L^2(z+\varepsilon\cu_{l+1})}
\\ &
\leq
C 3^{Qk} \left( \varepsilon 3^{l} \right)^\alpha \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}.
\end{align*}
\smallskip
\emph{Step 3.} The estimate of the third term on the right side of~\eqref{e.bigestwthreetterms}. We have by~\eqref{e.contdep.Fm.Linfty} and~\eqref{e.detanchors} that
\begin{align*}
\lefteqn{
\left\| \widetilde{\mathbf{F}}^\varepsilon_{n+2} - \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z
\widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac \cdot\varepsilon\right) \right\|_{L^2(U_{n+2})}
} \qquad &
\\ &
\leq
C \left\| \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \zeta_z \left( \widetilde{\mathbf{F}}^\varepsilon_{n+2} -
\widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac \cdot\varepsilon\right) \right) \right\|_{L^2(U_{n+2})}
\\ &
\leq
C\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \left\| \widetilde{\mathbf{F}}^\varepsilon_{n+2} -
\widetilde{\mathbf{F}}^{(k)}_{n+2,\overline{\theta}(z)}\left(\tfrac \cdot\varepsilon\right) \right\|_{L^2(z+\varepsilon\cu_{l+1})}
\\ &
\leq
C 3^{Qk} \left( \varepsilon 3^l \right)^\alpha
\sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}.
\end{align*}
\mathbf{m}athbf{s}mallskip
\emph{Step 4.} The estimate of the fourth term on the right side of~\eqref{e.bigestwthreetterms}. For convenience, extend $v_z$ so that it is equal to the affine function $\ell_{\nabla \overlineerline{w}_{n+2}(z)}$ outside of the cube~$z+\varepsilon\cu_{l+1}$.
By the triangle inequality, we have
\begin{align}
\label{e.termfour}
\lefteqn{
\left\| \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z \left( \varepsilon v_z(\tfrac \cdot\varepsilon ) - \ell_{\nabla \overline{w}_{n+2}(z)} \right)
\right\|_{L^2(U_{n+2})}
} \ \qquad &
\\ & \notag
\leq
\left\|
\sum_{y,z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z \left( \varepsilon v_y(\tfrac \cdot\varepsilon )
-
\ell_{\nabla \overline{w}_{n+2}(y)} \right)
\right\|_{L^2(U_{n+2})}
\\ & \qquad \notag
+
\left\|\varepsilon
\sum_{y,z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}\nabla \zeta_z
\left( v_z(\tfrac \cdot\varepsilon ) - v_y(\tfrac \cdot\varepsilon ) \right)
\right\|_{L^2(U_{n+2})}
\\ & \qquad \notag
+
\left\|
\sum_{y,z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3},\,y\sim z}
\nabla \zeta_z \varepsilon \left| \nabla \overline{w}_{n+2}(z) - \nabla \overline{w}_{n+2}(y) \right|
\right\|_{L^2(U_{n+2})},
\end{align}
where here and below we use the notation $y\sim z$ to mean that $y+\cu_{l+1}$ and $z+\cu_{l+1}$ have nonempty intersection.
Using~\eqref{e.cutoffzeta} and the fact that $\sum_{z\in \varepsilon3^l\mathbb{Z}^d } \nabla \zeta_z = 0$, we have that
\begin{equation}
\label{e.zetatrim}
\left| \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \nabla \zeta_z \right|
\leq
C \left( \varepsilon 3^{l}\right)^{-1} \mathds{1}_{V},
\end{equation}
where
\begin{equation}
V:= \left\{ x\in \mathbb{R}^d\,:\, (x+\varepsilon\cu_{l+1}) \cap \partial U_{n+3} \neq \emptyset \right\}
\end{equation}
is a boundary layer of $U_{n+3}$ of thickness $\varepsilon 3^l$. Observe that $|V| \leq C\varepsilon 3^l$.
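Indeed, since $\mathrm{diam}(\varepsilon\cu_{l+1}) \leq C\varepsilon 3^{l}$, every point of~$V$ lies within distance~$C\varepsilon 3^{l}$ of~$\partial U_{n+3}$, and hence, under the standing assumption that $U_{n+3}$ is a bounded Lipschitz domain (so that $\mathcal{H}^{d-1}(\partial U_{n+3}) \leq C$),
\begin{equation*}
|V|
\leq
\left| \left\{ x \in \mathbb{R}^d \,:\, \operatorname{dist}(x,\partial U_{n+3}) \leq C\varepsilon 3^{l} \right\} \right|
\leq
C \mathcal{H}^{d-1}\!\left(\partial U_{n+3}\right) \varepsilon 3^{l}
\leq
C\varepsilon 3^{l}.
\end{equation*}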
Thus by~\eqref{e.appcorrbounds} we find that
\begin{align*}
\lefteqn{
\left\|
\sum_{y,z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z \left( \varepsilon v_y(\tfrac \cdot\varepsilon )
-
\ell_{\nabla \overline{w}_{n+2}(y)} \right)
\right\|_{L^2(U_{n+2})}
} \qquad &
\\ &
\leq
C \left( \varepsilon 3^l\right)^{-1}
\varepsilon\left\|
\mathds{1}_{V} \sum_{y\in\varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\left( \varepsilon v_y(\tfrac \cdot\varepsilon )
-
\ell_{\nabla \overline{w}_{n+2}(y)} \right) \right\|_{L^2(U_{n+2})}
\\ &
\leq
C3^{-\alpha(l-k)} 3^{Qk} \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}
.
\end{align*}
Combining~\eqref{e.cutoffzeta} with~\eqref{e.appcorr.overlap},~\eqref{e.detanchors} and the triangle inequality to compare the $v_z$'s on overlapping cubes,
we find that
\begin{align*}
\lefteqn{
\left\|\varepsilon
\sum_{y,z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}\nabla \zeta_z
\left( v_z(\tfrac \cdot\varepsilon ) - v_y(\tfrac \cdot\varepsilon ) \right)
\right\|_{L^2(U_{n+2})}
} \quad &
\\ &
\leq
\sum_{y,z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}, \, y\sim z}
\left\| \nabla \zeta_z \right\|_{L^\infty(\mathbb{R}^d)}
\cdot \varepsilon \left\| v_y - v_z \right\|_{L^2((y+\cu_{l+1})\cap (z+\cu_{l+1}))}
\\ &
\leq
C3^{-\alpha(l-k)}3^{Qk} \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}.
\end{align*}
For the last term, we compute, using~\eqref{e.detanchors2},
\begin{align*}
\lefteqn{
\left\|
\sum_{y,z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3},\,y\sim z}
\nabla \zeta_z \varepsilon \left| \nabla \overline{w}_{n+2}(z) - \nabla \overline{w}_{n+2}(y) \right|
\right\|_{L^2(U_{n+2})}
} \qquad &
\\ &
\leq
C\varepsilon^{1+\beta} 3^{l} \left\| \nabla \overline{w}_{n+2} \right\|_{C^{0,\beta}(U_{n+3})} \left\| \sum_{y,z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3},\,y\sim z}
\left| \nabla \zeta_z \right|\right\|_{L^2(U_{n+2})}
\\ &
\leq
C \varepsilon^\alpha \left( 3^l\varepsilon \right)^{-Q}
\sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}.
\end{align*}
Putting the above inequalities together, we find that
\begin{align*}
\lefteqn{
\left\| \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\nabla \zeta_z \left( \varepsilon v_z(\tfrac \cdot\varepsilon ) - \ell_{\nabla \overline{w}_{n+2}(z)} \right)
\right\|_{L^2(U_{n+2})}
}
\qquad\qquad\qquad\qquad\qquad
&
\\ &
\leq
C\left(
3^{-\alpha(l-k)}3^{Qk}
+
\varepsilon^\alpha \left( 3^l\varepsilon \right)^{-Q}
\right)
\sum_{i=1}^{n+1} \left\| g_i \right\|_{ {L}^2(U_i)}^{\frac {n+2}i}.
\end{align*}
\smallskip
\emph{Step 5.} The estimate of the fifth term on the right side of~\eqref{e.bigestwthreetterms}. Recall that~$U_{n+4}$ is defined in~\eqref{e.defUn4}.
We have that, using~\eqref{e.wn2meyers} and~\eqref{e.detanchors2},
\begin{align*}
\lefteqn{
\left\| \nabla \overline{w}_{n+2} - \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \zeta_z \nabla \overline{w}_{n+2}(z)
\right\|_{L^2(U_{n+2})}
} &
\\ &
\leq
\left\| \nabla \overline{w}_{n+2} \left( 1 - \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \zeta_z \right)
\right\|_{L^2(U_{n+2})} \!\!\!
+
\left\| \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}} \zeta_z \left( \nabla \overline{w}_{n+2} -\nabla \overline{w}_{n+2}(z) \right)
\right\|_{L^2(U_{n+2})}
\\ &
\leq
\left\| \nabla \overline{w}_{n+2}
\right\|_{L^2(U_{n+2}\setminus U_{n+4})}
+ C \varepsilon 3^{l} \left\| \nabla \overline{w}_{n+2} \right\|_{C^{0,\beta}(U_{n+3})}
\\ &
\leq
C\left| U_{n+2}\setminus U_{n+4} \right|^{\frac{\delta}{4+2\delta}}
\left\| \nabla \overline{w}_{n+2}
\right\|_{L^{2+\delta}(U_{n+2})}
+
C \varepsilon^\alpha \left( \varepsilon 3^{l} \right)^{-Q}\sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2} i}
\\ &
\leq
C \left( (\varepsilon 3^{l})^{\alpha} + \varepsilon^\alpha \left( \varepsilon 3^{l} \right)^{-Q} \right) \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}.
\end{align*}
\emph{Step 6.} Let us summarize what we have shown so far. Substituting the results of the five previous steps into~\eqref{e.bigestwthreetterms}, we obtain that
\begin{align}
\label{e.yesstep6}
\lefteqn{
\left\|
\nabla \cdot \left( \widetilde{\mathbf{a}}^\varepsilon \nabla \widetilde{v}_{n+2}^\varepsilon
+ \widetilde{\mathbf{F}}^\varepsilon_{n+2} \right)
\right\|_{H^{-1}(U_{n+2})}
} \qquad &
\\ & \notag
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-Q} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \left( \varepsilon 3^l \right)^\alpha 3^{kQ} \right)
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{align}
This implies by~\eqref{e.captcha} that
\begin{align}
\label{e.captcha.indeed}
\lefteqn{
\left\| \nabla \widetilde{v}^\varepsilon_{n+2} - \nabla \widetilde{w}^\varepsilon_{n+2}
\right\|_{L^2(U_{n+2})}
} \qquad &
\\ & \notag
\leq
C\left( \varepsilon^\alpha \left( \varepsilon 3^l \right)^{-Q} + 3^{-l\alpha} \left( \varepsilon 3^l \right)^{-1} + 3^{-k\alpha} + \left( \varepsilon 3^l \right)^\alpha 3^{kQ} \right)
\sum_{j=1}^{n+1} \left\| \nabla g_{j} \right\|_{{L}^{2+\delta}(U_{j})}^{\frac{n+2}{j}}.
\end{align}
Therefore to obtain the lemma it suffices to prove~\eqref{e.throwyes.wts}.
To prove this bound, we use the identity
\begin{align*}
\lefteqn{
\nabla \widetilde{v}_{n+2}^\varepsilon(x)
-
\nabla \overline{w}_{n+2}(x)
} \qquad &
\\ &
=
\nabla \left( \sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z(x) \left( \varepsilon v\left( \tfrac x\varepsilon , \tfrac{z}\varepsilon + \cu_{l+1},\nabla \overline{w}_{n+2}(z),\overline{\mathsf{\theta}}(z) \right) - \ell_{\nabla \overline{w}_{n+2}(z)}(x) \right) \right)
\end{align*}
which, combined with~\eqref{e.appcorrbounds} and~\eqref{e.detanchors}, yields
\begin{align*}
\lefteqn{
\left\| \nabla \widetilde{v}_{n+2}^\varepsilon
-
\nabla \overline{w}_{n+2}
\right\|_{H^{-1}(U_{n+2})}
} \ \ &
\\ &
\leq
\left\|
\sum_{z\in \varepsilon3^l\mathbb{Z}^d \cap U_{n+3}}
\zeta_z(x) \left( \varepsilon v\left( \tfrac x\varepsilon , \tfrac{z}\varepsilon + \cu_{l+1},\nabla \overline{w}_{n+2}(z),\overline{\mathsf{\theta}}(z) \right) - \ell_{\nabla \overline{w}_{n+2}(z)}(x) \right)
\right\|_{L^2(U_{n+2})}
\\ &
\leq
C\varepsilon 3^{l} 3^{-\alpha(l-k)} 3^{Qk} \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}
\leq
C 3^{-\alpha(l-k)} 3^{Qk} \sum_{i=1}^{n+1} \left\| g_i \right\|_{{L}^2(U_i)}^{\frac {n+2}i}.
\end{align*}
This is~\eqref{e.throwyes.wts}.
\smallskip
In view of the selection of the mesoscopic parameters~$k$ and~$l$ in Section~\ref{ss.meso}, which gives us~\eqref{e.goodchoices}, the proof of~\eqref{e.homogenization.estimates.wts.2} is now complete.
\end{proof}
\section{Large-scale
\texorpdfstring{$C^{0,1}$}{C01}-type estimates for linearized equations}
\label{s.reglineqs}
In the next two sections, we suppose that~$n\in \{0,\ldots,\mathsf{N}-1 \}$ is such that
\begin{equation}
\label{e.assumption.section5}
\left\{
\begin{aligned}
& \ \mbox{Theorems~\ref{t.regularity.Lbar} and~\ref{t.linearizehigher} are valid for $n+1$ in place of $\mathsf{N}$,} \\
& \ \mbox{and Theorem~\ref{t.regularity.linerrors} is valid for $n$.}
\end{aligned}\right.
\end{equation}
The goal is to prove that Theorem~\ref{t.regularity.linerrors} is also valid for~$n+1$. Combined with the results of the previous two sections and an induction argument, this completes the proof of Theorems~\ref{t.regularity.Lbar},~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors}.
\smallskip
The goal of this section is to prove one half of Theorem~\ref{t.regularity.linerrors}, namely estimate~\eqref{e.C01linsols} for~$w_{n+2}$. The estimate~\eqref{e.C01linerror} will be the focus of Section~\ref{s.reglinerrors}.
\smallskip
Both of the estimates in the conditional proof of Theorem~\ref{t.regularity.linerrors} are obtained by ``harmonic'' approximation: homogenization says that, on large scales, the heterogeneous equations behave like the homogenized equations, and therefore we may expect the former to inherit some of the better regularity estimates of the latter. The quantitative version of the homogenization statement provided by Theorem~\ref{t.linearizehigher} allows us to prove a~$C^{0,1}$-type estimate, following a well-known excess decay argument originally introduced in~\cite{AS}.
\smallskip
\subsection{Approximation of
\texorpdfstring{$w_{n+2}$}{w n+2}
by smooth functions}
The large-scale regularity estimates are obtained by approximating the solutions of the linearized equations for the heterogeneous Lagrangian~$L$, as well as the linearization errors, by the solutions of the linearized equations for the homogenized Lagrangian~$\overline{L}$. Since the latter possess better smoothness properties, this allows us to implement an excess decay iteration for the former.
\smallskip
We begin by giving a quantitative statement concerning the smoothness of solutions to the linearized equations for the homogenized operator~$\overline{L}$. This is essentially well-known, but we need a more precise statement than we could find in the literature.
\smallskip
We next present the statement concerning the approximation of the solutions~$w_m$ of the linearized equations for~$L$, as well as the linearization errors~$\xi_m$, by solutions of the linearized equations for~$\overline{L}$. For the~$w_m$'s, this is essentially a rephrasing of the assumed validity of Theorem~\ref{t.linearizehigher} for $n+1$ in place of~$\mathsf{N}$, see~\eqref{e.assumption.section5}. For the $\xi_m$'s, this can be thought of as a homogenization result.
\begin{lemma}[{Smooth approximation of $w_{n+2}$}]
\label{l.smoothapprox.w}
Assume that~\eqref{e.assumption.section5} holds. Fix $\mathsf{M} \in [1,\infty)$. There exist $\delta(n,d,\Lambda)\in(0,d)$, $\alpha(d,\Lambda) \in \left(0,\tfrac 12\right]$, $C(n,\mathsf{M},d,\Lambda)<\infty$ and a random variable~$\mathcal{X}$ satisfying
\begin{equation*}
\mathcal{X} = \mathcal{O}_{\delta}(C)
\end{equation*}
such that the following statement is valid.
For every $R\geq \mathcal{X}$ and $u,v,w_1,\ldots,w_{n+2} \in H^1(B_R)$ satisfying, for each $m\in \{1,\ldots,n+2\}$,
\begin{equation}
\label{e.linearized.approx}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_pL\left( \nabla u,x \right) \right) = 0
\quad \mbox{and} \quad -\nabla \cdot \left( D_pL(\nabla v,x) \right) = 0
& \mbox{in} & \ B_R,\\
& -\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(x, \nabla u,\nabla w_1,\ldots,\nabla w_{m-1})\right) & \mbox{in} & \ B_{R},\\
&
\left\| \nabla u \right\|_{\underline{L}^2(B_R)} \vee
\left\| \nabla v \right\|_{\underline{L}^2(B_R)} \leq \mathsf{M},
\end{aligned}
\right.
\end{equation}
if we let $\overline{u},\overline{v},\overline{w}_1,\ldots,\overline{w}_{n+2} \in H^1(B_{R/2})$ be the solutions of the Dirichlet problems
\begin{equation}
\label{e.linearized.approx.hom}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_p\overline{L}\left( \nabla \overline{u} \right) \right) = 0
\quad \mbox{and} \quad -\nabla \cdot \left( D_p\overline{L}(\nabla \overline{v}) \right) = 0
& \mbox{in} & \ B_{R},\\
& -\nabla \cdot \left( D^2_p\overline{L}\left( \nabla \overline{u} \right) \nabla \overline{w}_m \right) = \nabla \cdot \left( \overline{\mathbf{F}}_m(\nabla \overline{u},\nabla \overline{w}_1,\ldots,\nabla \overline{w}_{m-1})\right) & \mbox{in} & \ B_{\frac12(1+2^{-m}) R},\\
& \overline{u} = u, \ \overline{v} = v, \ \overline{w}_m = w_m & \mbox{on} & \ \partial B_{\frac12 (1+2^{-m})R},
\end{aligned}
\right.
\end{equation}
then we have, for each $m \in \{1,\ldots,n+2\}$, the estimate
\begin{equation}
\label{e.approx.w}
\left\| w_{m} - \bar{w}_{m} \right\|_{\underline{L}^2(B_{\frac12 (1+2^{-m}) R})}
\leq
CR^{-\alpha}
\sum_{i=1}^{m} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{m}{i}}.
\end{equation}
\end{lemma}
\begin{proof}
Denote, in short, $R_m := \frac12 (1+2^{-m}) R$ and $r_m := \frac14 (R_{m-1} - R_m)$. Since we assume~\eqref{e.assumption.section5},
we have that Theorem~\ref{t.linearizehigher} applied for $n+1$ instead of $\mathsf{N}$ and Theorem~\ref{t.regularity.linerrors}, assumed now for $n$ in place of $\mathsf{N}$, are both valid. In particular, there are $\sigma(n,\mathrm{data}) \in (0,1)$ and $C(n,\mathsf{M},\mathrm{data})<\infty$ and a random variable
$\widetilde{\mathcal{X}} \leq \mathcal{O}_\sigma(C)$ such that, for $R \geq \widetilde{\mathcal{X}}$ and $m \in \{1,\ldots,n+2\}$, Theorem~\ref{t.regularity.linerrors} gives
\begin{equation} \label{e.C01linsols.applied000}
\sum_{i=1}^{m} \left\| \nabla w_i \right\|_{L^{\frac{m}{i} (2+\delta)}(B_{R_i})}^{\frac mi} \leq C
\sum_{i=1}^{m} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{m}{i}}
\end{equation}
and Theorem~\ref{t.linearizehigher} yields
\begin{equation} \label{e.C01linsols.applied0000}
\left\| w_m - \bar{w}_m \right\|_{\underline{L}^2(B_{R_m} )}
\leq
\widetilde{\mathcal{X}} R^{-\alpha}
\sum_{i=1}^m \left\| \nabla w_{i} \right\|_{\underline{L}^{2+\delta}(B_{R_i} )} ^{\frac{m}{i}} .
\end{equation}
We set
\begin{equation*}
\mathcal{X} := \left( 1 \vee \widetilde{\mathcal{X}} \right)^{\frac{2}{\alpha}}.
\end{equation*}
Clearly $\mathcal{X} = \mathcal{O}_{\frac12 \alpha \sigma}(C)$ and $\mathcal{X} \geq \widetilde{\mathcal{X}}$, and if $R \geq \mathcal{X}$, then $ \widetilde{\mathcal{X}} R^{-\alpha} \leq R^{-\frac12 \alpha}$. In conclusion, by taking $\alpha$ smaller if necessary, we obtain by~\eqref{e.C01linsols.applied0000} that, for $m \in \{1,\ldots,n+2\}$ and~$R \geq \mathcal{X}$,
\begin{equation} \label{e.C01linsols.applied001}
\left\| w_m - \bar{w}_m \right\|_{\underline{L}^2(B_{R_m} )}
\leq
C R^{-\alpha}
\sum_{i=1}^m \left\| \nabla w_{i} \right\|_{\underline{L}^{2+\delta}(B_{R_i})}^{\frac{m}{i}}.
\end{equation}
Furthermore, we notice that~\eqref{e.Fmbasic} yields
\begin{equation*}
\left| \mathbf{F}_{m}(x, \nabla u,\nabla w_1,\ldots,\nabla w_{m-1}) \right| \leq C \sum_{i=1}^{m-1} \left|\nabla w_i \right|^{\frac{m}{i}},
\end{equation*}
and thus we get by~\eqref{e.C01linsols.applied000} and~\eqref{e.C01linsols.applied001} that
\begin{equation*}
\left\| \mathbf{F}_{m}(x, \nabla u,\nabla w_1,\ldots,\nabla w_{m-1}) \right\|_{\underline{L}^{2+\delta}(B_{R_{m-1}})} \leq C
\sum_{i=1}^{m-1} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac {m}{i}}.
\end{equation*}
It then follows by the Meyers estimate and the equation for $w_m$ that
\begin{equation*}
\left\| \nabla w_{m} \right\|_{\underline{L}^{2+\delta}(B_{R_m} )} \leq C \sum_{i=1}^{m} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac {m}{i}},
\end{equation*}
finishing the proof of~\eqref{e.approx.w} by~\eqref{e.C01linsols.applied001}.
\end{proof}
\subsection{Excess decay iteration for \texorpdfstring{$w_{n+2}$}{w n+2}}
In this subsection we conditionally prove the statement of Theorem~\ref{t.regularity.linerrors} for $n+1$ and for $q=2$. The restriction on $q$ will be removed in the next subsection.
The proof is by a decay of excess iteration, following along the same lines as the argument from~\cite{AS}, using ``harmonic'' approximation. The statement we prove is summarized in the following proposition.
\begin{proposition}
\label{p.wLip}
Assume that~\eqref{e.assumption.section5} holds. Fix $\mathsf{M} \in [1,\infty)$. Then there exist constants $\sigma(\mathrm{data}),\alpha(d,\Lambda) \in \left(0,\frac12 \right]$, $C(\mathsf{M},\mathrm{data})<\infty$ and a random variable~$\mathcal{X}$ satisfying
\begin{equation*}
\mathcal{X} \leq \mathcal{O}_{\sigma}(C)
\end{equation*}
such that the following statement is valid. Let~$R \in [\mathcal{X} , \infty)$ and
$u,w_1,\ldots,w_{n+2}$ satisfy, for each $m\in \{1,\ldots,n+2\}$,
\begin{equation}
\label{e.linearized.c1alpha}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_pL\left( \nabla u,x \right) \right) = 0 & \mbox{in} & \ B_R, \\
& -\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(x,\nabla u,\nabla w_1,\ldots,\nabla w_{m-1})\right) & \mbox{in} & \ B_{R},
\end{aligned}
\right.
\end{equation}
where $u$ satisfies the normalization
\begin{equation*}
\frac1R \left\| u - (u)_{B_R} \right\|_{\underline{L}^2\left( B_{R} \right)} \leq \mathsf{M} .
\end{equation*}
Then, for $m \in \{1,\ldots,n+2\}$ and for every $r \in [\mathcal{X},\tfrac12 R]$, we have
\begin{multline} \label{e.Lipw}
\left( \frac{r}{R} + \frac1{r} \right)^{-\alpha} \inf_{\ell \in \mathcal{P}_1} \frac{1}{r} \left\| w_{m} - \ell \right\|_{\underline{L}^2(B_{r})}
+ \left\| \nabla w_{m} \right\|_{\underline{L}^2(B_r)}
\\ \leq C \sum_{i=1}^{m} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac {m}{i}} .
\end{multline}
\end{proposition}
\begin{remark} \label{r.w1reg}
Proposition~\ref{p.wLip} holds for $n=0$ by~\cite[Proposition 4.3]{AFK} without assuming~\eqref{e.assumption.section5}.
\end{remark}
\begin{proof}
The proof is based on a combination of harmonic approximation and decay estimates for homogenized solutions presented in Appendix~\ref{s.appendixconstant}. The necessary estimate is~\eqref{e.appC.C1alphabarw}. We take the minimal scale $\mathcal{X}$ as the maximum of the minimal scales in Lemma~\ref{l.smoothapprox.w} and in Theorem~\ref{t.regularity.linerrors}, which is valid with $n$ in place of $\mathsf{N}$ and with the corresponding $q = 2$, and a constant $\mathsf{R}(\mathsf{M},\mathrm{data}) \in [1,\infty)$ to be fixed in the course of the proof. This choice, in particular, implies that there exist constants $C(\mathsf{M},\mathrm{data})<\infty$ and $\sigma(\mathrm{data}) \in \left(0 ,\frac12 \right]$ such that $\mathcal{X} \leq \mathcal{O}_{\sigma}(C)$ and $\mathcal{X} \geq \mathsf{R}$.
\smallskip
We will prove the statement by induction over shrinking radii. Indeed, we set, for $j \in \mathbb{N}$ and $\theta \in (0,\frac12]$, $r_j := \theta^{j} r_0 $, where $r_0 \in [\mathcal{X}, \delta R]$ and $\delta \in \left(0 , \tfrac12 \right]$. The parameters~$\theta$,~$\delta$ and~$\mathsf{R}$ will all be fixed in the course of the proof. Having fixed~$\theta$,~$\delta$ and~$\mathsf{R}$, we assume that there is $J \in \mathbb{N}$ such that $r_{J+1} < \mathcal{X} \leq r_J$. If there is no such $J$, or if $\mathcal{X} \geq \delta R$, the result follows simply by giving up a volume factor. Furthermore, we devise the notation of this proof in such a way that it will also allow us to prove the result of the next lemma, Lemma~\ref{l.C1alphanew}.
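For definiteness, such a~$J$ can be read off from the geometric decay of the radii: since $r_j = \theta^j r_0$ with $r_0 \geq \mathcal{X}$, one may take
\begin{equation*}
J := \left\lfloor \frac{\log (r_0/\mathcal{X})}{\log (1/\theta)} \right\rfloor,
\qquad \mbox{so that} \qquad
r_{J+1} < \mathcal{X} \leq r_J < \theta^{-1} \mathcal{X}.
\end{equation*}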
\mathbf{m}athbf{s}mallskip
We denote, in short, for $m \in \{1,\ldots,n+2\}$ and $\gamma \in [0,1)$ to be fixed,
\begin{align} \label{e.wLipDm}
\mathsf{D}_{m}
&
:= \frac{1}{r_0} \left\| w_{m} - \left( w_{m} \right)_{B_{r_0}} \right\|_{\underline{L}^2(B_{r_0})}
+ \sum_{i=1}^{m-1} \left( \frac1{R} \left\| w_{i} - \left( w_{i} \right)_{B_{R}} \right\|_{\underline{L}^2(B_{R})}\right)^{\frac{m}i}
\end{align}
and
\begin{align} \label{e.wLipEm}
\mathsf{E}_{m}
&
:=
\inf_{\ell \in \mathcal{P}_1} \frac{1}{r_0} \left\| w_{m} - \ell \right\|_{\underline{L}^2(B_{r_0})}
+\left( \frac{r_0}{R} + \frac1{r_0} \right)^{\gamma} \mathsf{D}_{m} .
\end{align}
Theorem~\ref{t.regularity.linerrors} implies that, for $r \in [\mathcal{X} , \tfrac12 R]$ and $m \in \{1,\ldots,n+1\}$, we have that
\begin{align}
\label{e.C01linsols.applied1}
\left\| \nabla w_{m} \right\|_{\underline{L}^2(B_r)}
&
\leq
C \sum_{i=1}^{m} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac mi} .
\end{align}
Notice then that, by~\eqref{e.C01linsols.applied1} and Poincar\'e's inequality, we have, for $r \in [\mathcal{X} , \tfrac12 R]$ and $m\in \{1,\ldots,n+2\}$, that
\begin{equation} \label{e.wLipEmest}
\sum_{i=1}^{m-1} \left( \frac1r \left\| w_{i} - \left( w_{i} \right)_{B_r} \right\|_{\underline{L}^2(B_r)}\right)^{\frac mi} \leq C\mathsf{D}_{m} .
\end{equation}
\smallskip
\emph{Step 1.}
Letting $\overline{u}_j$ solve
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_p\overline{L}\left( \nabla \overline{u}_j \right) \right) = 0 & \mbox{in} & \ B_{2r_j}, \\
& \overline{u}_j = u & \mbox{on} & \ \partial B_{2r_j},
\end{aligned}
\right.
\end{equation*}
we show that there exist, for every $\eta \in (0,1]$, constants $ \alpha_1(d,\Lambda) \in (0,\frac12)$, $\delta(\eta,\mathsf{M},\mathrm{data}) \in \left(0,\tfrac12\right]$ and~$\mathsf{R}(\eta,\mathsf{M},\mathrm{data})<\infty$ such that, for $j \in \{0,\ldots,J\}$,
\begin{equation} \label{e.witerbarureg}
\frac1{2r_j} \inf_{\ell \in \mathcal{P}_1}\left\| u -\ell \right\|_{\underline{L}^2 \left( B_{2r_j} \right)}
+
\frac1{2r_j} \inf_{\ell \in \mathcal{P}_1} \left\| \overline{u}_j -\ell \right\|_{\underline{L}^2 \left( B_{2r_j} \right)} \leq \eta \left( \frac{r_j}{R} + \frac{1}{r_j}\right)^{\alpha_1} .
\end{equation}
The parameter $\eta$ shall be fixed later in~\eqref{e.fixetainw}. On the one hand, we have from~\cite[Theorem 2.3]{AFK} that there exist constants~$\beta_1(d,\Lambda) \in (0,1)$ and $C(\mathsf{M},d,\Lambda)<\infty$ such that
\begin{equation*}
\frac1{2r_0} \inf_{\ell \in \mathcal{P}_1}\left\| u -\ell \right\|_{\underline{L}^2 \left( B_{2r_0} \right)} \leq C \left( \frac{r_0}{R} + \frac{1}{r_0}\right)^{\beta_1} .
\end{equation*}
On the other hand, by harmonic approximation (\cite[Corollary 2.2]{AFK}) and the Lipschitz estimate for $u$ (\cite[Theorem 2.3]{AFK}), we get that there exist constants~$\beta_2(d,\Lambda) \in (0,1)$ and $C(\mathsf{M},d,\Lambda)<\infty$ such that
\begin{equation*}
\left\| u - \overline{u}_j \right\|_{\underline{L}^2 \left( B_{r_j} \right)} \leq C r_j^{-\beta_2} .
\end{equation*}
Thus~\eqref{e.witerbarureg} follows by the triangle inequality after taking $\alpha_1 := \tfrac12 (\beta_1 \wedge \beta_2)$ and choosing $\delta$ small enough and $\mathsf{R}$ large enough that
\begin{equation} \label{e.wLipetaR1}
C \left( \delta + \frac{1}{\mathsf{R}}\right)^{\frac12 \beta_1} + C \mathsf{R}^{-\frac12 \beta_2} \leq \eta .
\end{equation}
We assume, from now on, that $\delta$ and $\mathsf{R}$ are such that~\eqref{e.wLipetaR1} is valid.
\smallskip
\emph{Step 2.}
Letting $j \in \{0,\ldots,J\}$ and $m\in \{1,\ldots,n+2\}$, and letting $\bar{w}_{1,j},\ldots,\bar{w}_{m,j}$ solve, with $\overline{u}_j$ as in Step 1, the equations
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D^2_p\overline{L}\left( \nabla \overline{u}_j\right) \nabla \bar{w}_{m,j} \right) = \nabla \cdot \left( \overline{\mathbf{F}}_{m}\left(\nabla \bar{u}_j , \nabla \bar{w}_{1,j} , \ldots, \nabla \bar{w}_{m-1,j} \right) \right) & \mbox{in} & \ B_{\frac12 (1+2^{-m})r_{j}}
, \\
& \bar{w}_{m,j} = w_m & \mbox{on} & \ \partial B_{\frac12 (1+2^{-m})r_{j}},
\end{aligned}
\right.
\end{equation*}
we show that there is $\alpha_2(\mathrm{data}) \in \left(0,\tfrac 12\right]$ such that
\begin{align} \label{e.wLipcomp}
\frac1{r_j} \left\| w_{m} - \bar{w}_{m,j} \right\|_{L^2 \left(B_{ r_j/2} \right)}
&
\leq
Cr_j^{-\alpha_2} \frac1{r_j} \left\| w_{m} - (w_{m})_{B_{r_j}} \right\|_{\underline{L}^{2}(B_{r_j})}
\\ \notag & \quad
+
Cr_j^{-\alpha_2} \sum_{i=1}^{m-1} \left( \frac1{R} \left\| w_{i} - \left( w_{i} \right)_{B_{R}} \right\|_{\underline{L}^2(B_{R})}\right)^{\frac{m}i} .
\end{align}
This, however, is a direct consequence of~\eqref{e.C01linsols.applied1} and Lemma~\ref{l.smoothapprox.w}.
\smallskip
\emph{Step 3}. Induction assumption. Set $\alpha := \frac \beta2 (\alpha_1 \wedge \alpha_2)$, where~$\alpha_1$ and~$\alpha_2$ are as in Steps 1 and 2, respectively, and $\beta$ comes from the $C^{n+2,\beta}$ regularity of $\overline{L}$. Let~$\delta_j$ be defined as
\begin{equation} \label{e.wLipdeltaj}
\delta_j := \left( \frac {r_j}{r_0} + \frac 1{r_j^{d-\sigma}} \right)^{\alpha} .
\end{equation}
We assume inductively that, for $j^* \in \{1,\ldots,J\}$, $j \in \{0,\ldots,j^*\}$, and $m \in \{1,\ldots,n+2\}$, we have, for a constant $\mathsf{C} \in [1,\infty)$ to be fixed in Step 5, that
\begin{equation} \label{e.wLipinductionass}
\inf_{\ell \in \mathcal{P}_1} \frac1{r_j}\left\| w_{m} - \ell \right\|_{\underline{L}^2(B_{r_j})} \leq \delta_j \mathsf{E}_m
\quad \mbox{and} \quad \frac1{r_j} \left\| w_{m} - (w_{m})_{B_{r_j}} \right\|_{\underline{L}^2(B_{r_j})} \leq \mathsf{C} \mathsf{D}_{m}.
\end{equation}
Here~$\mathsf{D}_m $ and~$\mathsf{E}_m $ are defined in~\eqref{e.wLipDm} and~\eqref{e.wLipEm}, respectively.
Obviously,~\eqref{e.wLipinductionass} is valid for $j=0$ by the definitions of~$\mathsf{D}_m $ and~$\mathsf{E}_m $. Fixing
\begin{equation} \label{e.fixetainw}
\eta := (1 \vee \mathsf{C})^{-1/\beta},
\end{equation}
we have that~\eqref{e.witerbarureg} implies
\begin{equation} \label{e.witerbarureg2}
\mathsf{C} \left( \frac1{r_j} \inf_{\ell \in \mathcal{P}_1} \left\| \overline{u}_j - \ell \right\|_{\underline{L}^{2} \left( B_{r_j} \right)} \right)^\beta \leq \delta_{j} \left( \frac{r_0}{R} + \frac1{r_0} \right)^{\alpha} .
\end{equation}
Using also~\eqref{e.wLipcomp} and~\eqref{e.wLipinductionass}, we obtain, for $m \in \{1,\ldots,n+1\}$,
\begin{equation} \label{e.wLipcompwstupid}
\frac1{r_j} \left\| w_{m} - \bar{w}_{m,j} \right\|_{L^2 \left(B_{ r_j/2} \right)} \leq C \mathsf{C} r_j^{-\alpha_2 } \mathsf{D}_m
\leq
\frac12 \theta^{\frac d2 +1}\delta_{j+1} \mathsf{E}_{m}
\end{equation}
provided that
\begin{equation} \label{e.wLipetaR2}
C \mathsf{C} \theta^{-\frac d2 - 2} \mathsf{R}^{- \frac12 \alpha_2} \leq \frac12 .
\end{equation}
We assume, from now on, that $\mathsf{R}$ is such that both~\eqref{e.wLipetaR1} and~\eqref{e.wLipetaR2} are valid.
\smallskip
\emph{Step 4}.
We show that the first inequality in~\eqref{e.wLipinductionass} continues to hold for $j = j^*+1$.
First, by the triangle inequality,~\eqref{e.wLipcompwstupid} and~\eqref{e.wLipinductionass}, we have that
\begin{align*}
\lefteqn{
\frac1{\delta_j r_j} \inf_{\ell \in \mathcal{P}_1} \left\| \bar{w}_{m,j} - \ell \right\|_{L^2 \left(B_{ r_j/2} \right)}
} \quad &
\\ \notag &
\leq
\frac1{ \delta_j r_j} \inf_{\ell \in \mathcal{P}_1} \left\| w_{m} - \ell \right\|_{L^2 \left(B_{ r_j/2} \right)} +
\frac1{ \delta_j r_j} \left\| w_{m} - \bar{w}_{m,j} \right\|_{L^2 \left(B_{ r_j/2} \right)}
\\ \notag &
\leq
2 \mathsf{E}_{m}
\end{align*}
and
\begin{equation*}
\frac1{ \delta_{j+1} r_{j+1}}\inf_{\ell \in \mathcal{P}_1 } \left\| w_{m} - \ell \right\|_{\underline{L}^2 \left( B_{r_{j+1}} \right)}
\leq
\frac1{ \delta_{j+1} r_{j+1}}\inf_{\ell \in \mathcal{P}_1 } \left\| \bar{w}_{m,j} - \ell \right\|_{\underline{L}^2 \left( B_{r_{j+1}} \right)}
+ \frac12 \mathsf{E}_{m} .
\end{equation*}
By a similar computation, using also the induction assumption~\eqref{e.wLipinductionass},
\begin{equation*}
\sum_{i=1}^{m} \left( \frac1{r_j} \left\| \bar{w}_{i,j} - \left( \bar{w}_{i,j} \right)_{B_{r_j/2}} \right\|_{\underline{L}^2(B_{r_j/2})}\right)^{\frac{m}{i}}
\leq C \mathsf{C} \mathsf{D}_{m} .
\end{equation*}
Now, applying~\eqref{e.appC.C1alphabarw} we obtain by the previous three displays,~\eqref{e.witerbarureg2} and~\eqref{e.wLipEmest}, for each $m \in \{1,\ldots,n+2\}$, that
\begin{align}
\label{e.wLipbarwiter}
\lefteqn{
\left( \frac{r_j}{r_{j+1}} \right)^{\beta} \frac1{ r_{j+1}}\inf_{\ell \in \mathcal{P}_1 } \left\| \overline{w}_{m,j} - \ell \right\|_{\underline{L}^2 \left( B_{r_{j+1}} \right)}
} \quad &
\\ \notag &
\leq
C \delta_j \sum_{i=1}^{m} \left( \frac1{\delta_j r_j} \inf_{\ell \in \mathcal{P}_1} \left\| \overline{w}_{i,j} - \ell \right\|_{\underline{L}^2(B_{r_j/2})} \right)^{\frac{m}{i}}
\\ \notag & \quad
+
C \delta_j \sum_{i=1}^{m-1} \left( \frac1{r_j} \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_{r_j/2}} \right\|_{\underline{L}^2(B_{r_j/2})} \right)^{\frac{m}{i}}
\\ \notag & \quad
+ C \left( \frac1{ r_j} \inf_{\ell \in \mathcal{P}_1} \left\| \bar{u}_j - \ell \right\|_{\underline{L}^2(B_{r_j/2})}\right)^\beta
\sum_{i=1}^{m} \left( \frac1{r_j} \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_{r_j/2}} \right\|_{\underline{L}^2(B_{r_j/2})} \right)^{\frac{m}{i}}
\\ \notag &
\leq C \delta_j \mathsf{E}_{m}
\end{align}
and, consequently,
\begin{equation*}
\frac1{\delta_{j+1} r_{j+1}}\inf_{\ell \in \mathcal{P}_1 } \left\| w_{m} - \ell \right\|_{\underline{L}^2 \left( B_{r_{j+1}} \right)}
\leq \frac12 \mathsf{E}_{m} +
C \left( \frac{\delta_j}{\delta_{j+1}} \right) \left( \frac{r_{j+1}}{r_j} \right)^{\beta} \mathsf{E}_{m} \leq \left(\frac12 + C \theta^{\frac \beta2} \right) \mathsf{E}_{m} .
\end{equation*}
Thus, choosing $\theta$ small enough so that $C \theta^{\frac \beta2} \leq \frac12$, we obtain that the first inequality in~\eqref{e.wLipinductionass} is valid for $j= j^*+1$.
\smallskip
\emph{Step 5}. The last step in the proof is to show the second inequality in~\eqref{e.wLipinductionass} for $j = j^*+1$. Let $\ell_j$ be the minimizing affine function in $ \inf_{\ell \in \mathcal{P}_1} \left\| w_{m} - \ell \right\|_{\underline{L}^2(B_{r_j})}$. Then, by the triangle inequality and the first inequality in~\eqref{e.wLipinductionass}, valid for $j \in \{0,\ldots,j^*\}$,
\begin{equation*}
\left| \nabla \ell_{j+1} - \nabla \ell_j \right|
\leq C (\delta_{j+1} + \delta_j) \mathsf{E}_m .
\end{equation*}
Thus, summation gives that
\begin{equation*}
|\nabla \ell_{j^*+1} - \nabla \ell_0 | \leq C \mathsf{E}_m \sum_{j=0}^{j^* +1} \delta_j .
\end{equation*}
Therefore,
\begin{align*}
\frac{1}{r_{j^*+1}} \left\| w_{m} - (w_{m})_{B_{r_{j^*+1}}} \right\|_{\underline{L}^2 \left( B_{r_{j^*+1}} \right)}
&
\leq
\frac{1}{r_{j^*+1}} \left\| w_{m} - \ell_{j^*+1} \right\|_{\underline{L}^2 \left( B_{r_{j^*+1}} \right)} + |\nabla \ell_{j^*+1} |
\\ &
\leq
C \mathsf{E}_m \sum_{j=0}^{j^* +1} \delta_j + |\nabla \ell_{0} | .
\end{align*}
By the triangle inequality we have that
\begin{equation*}
|\nabla \ell_{0} | \leq \frac{8}{r_{0}} \left\| w_{m} - \ell_{0} \right\|_{\underline{L}^2 \left( B_{r_{0}} \right)} +
\frac{8}{r_{0}} \left\| w_{m} - (w_m)_{B_{r_0}} \right\|_{\underline{L}^2 \left( B_{r_{0}} \right)} \leq 8 \mathsf{E}_m + 8 \mathsf{D}_m
\end{equation*}
and hence
\begin{equation*}
\frac{1}{r_{j^*+1}} \left\| w_{m} - (w_{m})_{B_{r_{j^*+1}}} \right\|_{\underline{L}^2 \left( B_{r_{j^*+1}} \right)} \leq C \mathsf{D}_m \sum_{j=0}^{j^* +1} \delta_j .
\end{equation*}
Choosing $\mathsf{C} = C$, where $C$ is as in the above inequality, verifies the second inequality in~\eqref{e.wLipinductionass} for $j = j^*+1$. This finishes the proof of the induction step, and thus completes the proof.
\end{proof}
To show Lipschitz estimates for the linearization errors in the next section, we need a small variant of the previous proposition.
\begin{lemma} \label{l.C1alphanew}
Assume that~\eqref{e.assumption.section5} holds. Fix $\mathsf{M} \in [1,\infty)$. Then there exist constants $\alpha(\mathrm{data}),\sigma(n,\mathsf{M},\mathrm{data}), \theta(n,\mathsf{M},\mathrm{data}) \in \left( 0, \frac12 \right]$ and $C(\sigma,\mathsf{M},\mathrm{data})<\infty$, and a random variable~$\mathcal{X}$ satisfying
\begin{equation*}
\mathcal{X} \leq \mathcal{O}_{\sigma}(C)
\end{equation*}
such that the following statement is valid. Let~$R \in [\mathcal{X} , \infty)$ and
$u,w_1,\ldots,w_{n+1}$ satisfy, for each $m\in \{1,\ldots,n+1\}$,
\begin{equation}
\label{e.linearized.c1alphanew}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_pL\left( \nabla u,x \right) \right) = 0 & \mbox{in} & \ B_R, \\
& -\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1})\right) & \mbox{in} & \ B_{R},
\end{aligned}
\right.
\end{equation}
and, for $r \in [\mathcal{X} , \tfrac12 R]$,
\begin{equation*}
-\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_{n+2} \right) = \nabla \cdot \left( \mathbf{F}_{n+2}(\nabla u,\nabla w_1,\ldots,\nabla w_{n+1})\right) \quad \mbox{in} \ B_{r},
\end{equation*}
where $u$ satisfies the normalization
\begin{equation*}
\frac1R \left\| u - (u)_{B_R} \right\|_{\underline{L}^2\left( B_{R} \right)} \leq \mathsf{M}.
\end{equation*}
Then
\begin{align}
\label{e.Lipw.pre}
\lefteqn{
\inf_{\ell \in \mathcal{P}_1} \frac 1{\theta r}\left\| w_{n+2} - \ell \right\|_{\underline{L}^2(B_{\theta r})}
} \qquad &
\\ & \notag
\leq
\frac12 \inf_{\ell \in \mathcal{P}_1} \frac 1r \left\| w_{n+2} - \ell \right\|_{\underline{L}^2(B_r)}
+ C \left( \frac{r}{R} + \frac1{r} \right)^{\alpha} \frac{1}{r} \left\| w_{n+2} - \left( w_{n+2} \right)_{B_r} \right\|_{\underline{L}^2(B_r)}
\\ \notag & \qquad
+ C\left( \frac{r}{R} + \frac1{r} \right)^{\alpha } \sum_{i=1}^{n+1} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac {n+2}{i}} .
\end{align}
\end{lemma}
\begin{proof}
The proof is a rearrangement of the argument in the proof of Proposition~\ref{p.wLip}. Indeed, we take $r_0=r$ and combine the first inequality of~\eqref{e.wLipbarwiter} with~\eqref{e.wLipcomp} and~\eqref{e.Lipw}, which is valid for $m\in \{1,\ldots,n+1\}$.
\end{proof}
\subsection{Improvement of spatial integrability} \label{s.spatintw}
We next complete the conditional proof of~\eqref{e.C01linsols} by improving the spatial integrability of~\eqref{e.Lipw.pre} from $L^2$ to $L^q$ for every $q\in [2,\infty)$. To do this, we use the estimate~\eqref{e.Lipw.pre} to pass from the large scale~$R$ to the microscopic, random scale~$\mathcal{X}$. We then use deterministic estimates from classical elliptic regularity theory to obtain local $L^q$ estimates in balls of radius one, picking up a volume factor (a power of~$\mathcal{X}$) as a price to pay. We first formalize the latter step in the following lemma.
\begin{lemma}
\label{l.toqandbeyond}
Assume~\eqref{e.assumption.section5} and the hypotheses of Theorem~\ref{t.regularity.linerrors}. Let $\beta\in (0,1)$ and $q\in (2,\infty)$. Then there exist~$\sigma(q,\mathrm{data}) \in (0,d)$, $C(q,\mathsf{M},\mathrm{data})<\infty$ and a random variable~$\mathcal{X}$ satisfying~$\mathcal{X} = \mathcal{O}_\sigma(C)$ such that, for every $r\geq \mathcal{X}$ and $m\in\{1,\ldots,n+2\}$,
\begin{equation}
\label{e.smallscalezz}
\left\| \nabla w_m \right\|_{\underline{L}^q(B_{r/2})}
\leq
C \left( 1 + r^{\frac{d^2(q-2)}{4q\beta} + \frac{d(q-2)}{2q}} \right)
\sum_{i=1}^{m}
\left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{m}i}.
\end{equation}
\end{lemma}
\begin{proof}
In view of assumption~\eqref{e.assumption.section5}, and thus the validity of Theorem~\ref{t.regularity.linerrors} for~$n$, we only need to prove~\eqref{e.smallscalezz} for $m=n+2$.
Fix~$q\in (2,\infty)$, $r\in [2,\infty)$ and $\beta\in (0,1)$ to be selected below. The $C^{1,\beta}$-estimate in Proposition~\ref{p.schauder}, together with a covering argument, yields
\begin{equation*}
\left\| \nabla u \right\|_{L^\infty(B_r)}
+
\left[ \nabla u \right]_{C^{0,\beta}(B_r)}
\leq
Cr^{\frac d2} \left\| \nabla u \right\|_{\underline{L}^2(B_{2r})}.
\end{equation*}
Setting $\mathbf{a}:=D^2_pL(\nabla u,x)$, we deduce by assumption~(L1) that
\begin{equation*}
\left[ \mathbf{a} \right]_{C^{0,\beta}(B_r)}
\leq
C \left( 1 + \left[ \nabla u \right]_{C^{0,\beta}(B_r)} \right)
\leq
C \left(1 + r^{\frac d2} \left\| \nabla u \right\|_{\underline{L}^2(B_{2r})} \right).
\end{equation*}
Applying Proposition~\ref{p.gradientLq} we find that, for each~$x\in B_{r/2}$,
\begin{align*}
\lefteqn{
\left\| \nabla w_{n+2} \right\|_{L^q(B_1(x))}
} \quad &
\\ &
\leq
C \left[ \mathbf{a} \right]_{C^{0,\beta}(B_r)}^{\frac{d(q-2)}{2q\beta}} \left\| \nabla w_{n+2} \right\|_{L^2(B_2(x))}
+
C\left\| \mathbf{f}_{n+2}\left( \nabla u,\nabla w_1,\ldots,\nabla w_{n+1}\right) \right\|_{L^q(B_2(x))}
\\ &
\leq
C \left( 1 + r^{\frac{d^2(q-2)}{4q\beta}} \left\| \nabla u \right\|_{\underline{L}^2(B_{2r})} \right)
\left\| \nabla w_{n+2} \right\|_{L^2(B_2(x))}
\\ & \qquad
+
C\left\| \mathbf{f}_{n+2}\left( \nabla u,\nabla w_1,\ldots,\nabla w_{n+1}\right) \right\|_{L^q(B_2(x))}.
\end{align*}
By a covering argument, we therefore obtain
\begin{align}
\label{e.readyforqbeezes}
\left\| \nabla w_{n+2} \right\|_{L^q(B_{r/2})}
& \leq
C \left( 1 + r^{\frac{d^2(q-2)}{4q\beta}} \left\| \nabla u \right\|_{\underline{L}^2(B_{2r})} \right)
\left\| \nabla w_{n+2} \right\|_{L^2(B_r)}
\\ & \qquad\notag
+
C\left\| \mathbf{f}_{n+2}\left( \nabla u,\nabla w_1,\ldots,\nabla w_{n+1}\right) \right\|_{L^q(B_r)}.
\end{align}
If we now take~$\mathcal{X}$ to be the maximum of the minimal scales in the statements of:
\begin{enumerate}
\item[(1)]
\cite[Theorem 11.13]{AKMbook};
\item[(2)] Theorem~\ref{t.regularity.linerrors} for~$n$ in place of $\mathsf{N}$ and with a sufficiently large exponent of spatial integrability~$q'$ in place of~$q$ (which can be computed explicitly in terms of our~$q$ using the H\"older inequality, although we omit this computation), the validity of which is given by assumption~\eqref{e.assumption.section5};
\item[(3)] Proposition~\ref{p.wLip};
\end{enumerate}
then we have that $\mathcal{X} = \mathcal{O}_\sigma(C)$ as stated in the lemma, and that~$r \geq \mathcal{X}$ implies the following estimates:
\begin{equation*}
\left\| \nabla u \right\|_{\underline{L}^2(B_{2r})} \leq C,
\end{equation*}
\begin{equation*}
\left\| \mathbf{f}_{n+2}\left( \nabla u,\nabla w_1,\ldots,\nabla w_{n+1}\right) \right\|_{\underline{L}^q(B_r)}
\leq
C\sum_{i=1}^{n+1}
\left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{n+2}i}
\end{equation*}
and
\begin{equation*}
\left\| \nabla w_{n+2} \right\|_{\underline{L}^2(B_r)}
\leq
C\sum_{i=1}^{n+2}
\left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{n+2}i}.
\end{equation*}
Combining these with~\eqref{e.readyforqbeezes}, we obtain
\begin{equation*}
\left\| \nabla w_{n+2} \right\|_{\underline{L}^q(B_{r/2})}
\leq
C \left( 1 + r^{\frac{d^2(q-2)}{4q\beta} + \frac{d(q-2)}{2q}} \right)
\sum_{i=1}^{n+2}
\left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{n+2}i}.
\end{equation*}
This completes the proof of the lemma.
\end{proof}
In the next lemma we finally achieve the goal of this section, which is to show that~\eqref{e.assumption.section5} implies~\eqref{e.C01linsols} for $m=n+1$.
\begin{lemma} \label{l.toqandbeyond2}
Assume that~\eqref{e.assumption.section5} holds. Fix $\mathsf{M} \in [1,\infty)$ and $q\in (2,\infty)$. Then there exist constants $\sigma(q,\mathrm{data}) \in (0,d)$, $C(q,\mathsf{M},\mathrm{data})<\infty$ and a random variable~$\mathcal{X}$ satisfying
\begin{equation*}
\mathcal{X} \leq \mathcal{O}_{\sigma}(C)
\end{equation*}
such that the following is valid. Suppose that~$R \in [\mathcal{X} , \infty)$ and that~$u,w_1,\ldots,w_{n+2} \in H^1(B_R)$ satisfy, for every $m\in \{1,\ldots,n+2\}$,
\begin{equation}
\label{e.linearized.2toq}
\left\{
\begin{aligned}
& \frac1R \left\| u - (u)_{B_R} \right\|_{\underline{L}^2\left( B_{R} \right)} \leq \mathsf{M},
\\
& -\nabla \cdot \left( D_pL\left( \nabla u,x \right) \right) = 0 & \mbox{in} & \ B_R, \\
& -\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1})\right) & \mbox{in} & \ B_{R}.
\end{aligned}
\right.
\end{equation}
Then, for every $r \in [\mathcal{X},\tfrac12 R]$, we have
\begin{equation}
\label{e.Lipw.q}
\left\| \nabla w_{n+2} \right\|_{\underline{L}^q(B_r)}
\leq C \sum_{i=1}^{{n+2}} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac {{n+2}}{i}} .
\end{equation}
\end{lemma}
\begin{proof}
Fix $q\in (2,\infty)$.
Select a parameter~$\theta\in (0,1)$, which will denote a mesoscopic scale.
For each $z\in\mathbb{R}^d$, we take $\mathcal{X}_z$ to be the random variable $\mathcal{X}$ in the statement of Proposition~\ref{p.wLip}, centered at the point~$z$. Define another random variable (a minimal scale) by
\begin{equation*}
\mathcal{Y}
:=
\sup \left\{ 3^k \,:\, k\in\mathbb{N}, \, \sup_{z\in \mathbb{Z}^d\cap B_{3^k}} \mathcal{X}_z \geq 3^{k \theta} \right\}.
\end{equation*}
It is clear from Proposition~\ref{p.wLip} and a routine union bound argument that
\begin{equation*}
\mathcal{Y} \leq \mathcal{O}_\sigma(C).
\end{equation*}
Next, for every $k\in\mathbb{N}$ and $z\in \mathbb{R}^d$ we let $\mathcal{Z}_{k,z}$ denote the random variable
\begin{equation*}
\mathcal{Z}_{k,z}:=
\sup_{(u,w_1,\ldots,w_{n+2})}
\sup_{m\in\{1,\ldots,n+2\}}
\frac{\left\| \nabla w_m \right\|_{\underline{L}^q(z+\cu_k)}}{\sum_{i=1}^{m}
\left( 3^{-k} \left\| w_{i} - \left( w_{i} \right)_{z+\cu_{k+1}} \right\|_{\underline{L}^2(z+\cu_{k+1})}
\right)^{\frac{m}i}},
\end{equation*}
where the supremum is taken over all $(u,w_1,\ldots,w_{n+2})\in \left( H^1(z+\cu_{k+1})\right)^{n+3}$ satisfying, for every $m\in \{1,\ldots,n+2\}$,
\begin{equation}
\label{e.linsreffss}
\left\{
\begin{aligned}
& \left\| \nabla u \right\|_{\underline{L}^2(z+\cu_{k+1})} \leq \mathsf{M},
\\ &
-\nabla \cdot D_pL\left( \nabla u,x \right) = 0 & \mbox{in} & \ z+\cu_{k+1},
\\ &
-\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1})\right) & \mbox{in} & \ z+\cu_{k+1}.
\end{aligned}
\right.
\end{equation}
Observe that $\mathcal{Z}_{k,z}$ is $\mathcal{F}(z+\cu_{k+1})$-measurable and, by Lemma~\ref{l.toqandbeyond} and an easy covering argument, it satisfies the estimate
\begin{equation}
\label{e.Zkzest}
\mathcal{Z}_{k,z} \leq \mathcal{O}_\sigma(C).
\end{equation}
Fix $\mathsf{A}\in [1,\infty)$ and define another random variable (a minimal scale) $\mathcal{Z}$ by
\begin{equation*}
\mathcal{Z} :=
\sup\left\{
3^k \,:\,
\left( \left| 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} \right|^{-1} \!\!\!\!
\sum_{z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} }
\mathcal{Z}_{\lceil \theta k \rceil,z}^q \right)^{\frac1q}
\geq \mathsf{A}
\right\}.
\end{equation*}
We will show below that, if~$\mathsf{A}$ is chosen sufficiently large (depending of course on the appropriate parameters), then
\begin{equation}
\label{e.claimedZest}
\mathcal{Z} \leq \mathcal{O}_\sigma(C).
\end{equation}
Assuming that~\eqref{e.claimedZest} holds for the moment, let us finish the proof of the lemma. Suppose that $k\in\mathbb{N}$ satisfies $\mathcal{Y} \vee \mathcal{Z} \leq 3^k \leq 3^{k+1}\leq R$. Let $(u,w_1,\ldots,w_{n+2})\in \left( H^1(B_R)\right)^{n+3}$ satisfy~\eqref{e.linearized.2toq}. Then we have that
\begin{align*}
\lefteqn{
\left\| \nabla w_{n+2} \right\|_{\underline{L}^q(\cu_k)}
} &
\\ &
=
\left(\left| 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_k \right|^{-1} \!\!\!\!
\sum_{z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_k}
\left\| \nabla w_{n+2} \right\|_{\underline{L}^q\left(z+\cu_{\lceil \theta k\rceil}\right)}^q
\right)^{\frac1q}
\\ &
\leq
\left(\left| 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_k \right|^{-1} \!\!\!\!
\sum_{z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_k}
\sum_{i=1}^{{n+2}}
\left( 3^{-\theta k} \left\| w_{i} - \left( w_{i} \right)_{z+\cu_{\lceil \theta k\rceil +1}} \right\|_{\underline{L}^2(z+\cu_{\lceil \theta k\rceil +1})}
\right)^{\frac{(n+2)q}i}
\mathcal{Z}_{\lceil \theta k \rceil,z}^q
\right)^{\frac1q}
\\ &
\leq
C\sum_{i=1}^{{n+2}}
\left(3^{-k} \left\| w_{i} - \left( w_{i} \right)_{\cu_{k +1}} \right\|_{\underline{L}^2(\cu_{k +1})}
\right)^{\frac{n+2}i}
\left(\left| 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_k \right|^{-1}\!\!\!\!
\sum_{z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_k}
\mathcal{Z}_{\lceil \theta k \rceil,z}^q
\right)^{\frac1q}
\\ &
\leq
C\mathsf{A}\sum_{i=1}^{{n+2}}
\left( 3^{-k} \left\| w_{i} - \left( w_{i} \right)_{\cu_{k +1}} \right\|_{\underline{L}^2(\cu_{k +1})}
\right)^{\frac{{n+2}}i}
\\ &
\leq
C\mathsf{A}\sum_{i=1}^{{n+2}}
\left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{{n+2}}i}.
\end{align*}
Note that in the third and final lines we used that $3^k\geq \mathcal{Y}$, that is, we used the result of Proposition~\ref{p.wLip}. This is the desired estimate for $\mathcal{X} = \mathcal{Y} \vee \mathcal{Z}$, and so the proof of the lemma is complete subject to the verification of~\eqref{e.claimedZest}.
\smallskip
Turning to the proof of~\eqref{e.claimedZest}, we notice first that it suffices to show, for~$\mathsf{A}$ sufficiently large, the existence of~$\sigma(q,\mathrm{data})>0$ and $C(q,\mathsf{M},\mathrm{data})<\infty$ such that, for every~$k\in\mathbb{N}$,
\begin{equation}
\label{e.claimedZest.red}
\mathbb{P}\left[ \left| 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} \right|^{-1} \!\!\!\!
\sum_{z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} }
\mathcal{Z}_{\lceil \theta k \rceil,z}^q
\geq \mathsf{A}^q \right]
\leq
C \exp\left( -c3^{k\sigma } \right).
\end{equation}
Indeed,~\eqref{e.claimedZest.red} implies~\eqref{e.claimedZest} by a simple union bound.
Fix then a parameter~$\lambda\in [1,\infty)$ and compute, using~\eqref{e.Zkzest} and the Chebyshev inequality,
\begin{equation}
\label{e.Zest1}
\mathbb{P} \left[ \sup_{z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} } \mathcal{Z}_{\lceil \theta k \rceil,z}
> \lambda
\right]
\leq
\exp\left( -c \lambda^{\sigma} \right).
\end{equation}
Using a simple large deviations bound for sums of bounded, independent random variables, we have
\begin{align}
\label{e.Zest2}
&
\mathbb{P} \left[
\left| 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} \right|^{-1} \!\!\!\!
\sum_{z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} }
\left( \mathcal{Z}_{\lceil \theta k \rceil,z}^q \wedge \lambda \right)
\geq \mathbb{E} \left[ \mathcal{Z}_{\lceil \theta k \rceil,z}^q \wedge \lambda \right] + 1
\right]
\\ & \qquad\qquad\qquad \notag
\leq 3^d \exp\left( -c\lambda^{-2} \left| 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} \right| \right)
\leq
3^d \exp\left( -c3^{d(1-\theta)k} \lambda^{-2} \right).
\end{align}
Here we are careful to notice that, while the collection~$\{ \mathcal{Z}_{\lceil \theta k \rceil,z}^q \wedge \lambda : z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} \}$ of random variables is not independent (since adjacent cubes are touching and thus not separated by a unit distance), we can partition it into $3^d$ subcollections which have an equal number of elements and each of which is independent. The large deviations estimate can then be applied to each subcollection, and a union bound yields~\eqref{e.Zest2}. Combining~\eqref{e.Zest1},~\eqref{e.Zest2} and the observation that $\mathbb{E} \left[ \mathcal{Z}_{\lceil \theta k \rceil,z}^q \wedge \lambda \right] \leq \mathbb{E} \left[ \mathcal{Z}_{\lceil \theta k \rceil,z}^q\right] \leq C$, by~\eqref{e.Zkzest}, we obtain
\begin{equation*}
\mathbb{P}\left[ \left| 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} \right|^{-1} \!\!\!\!
\sum_{z\in 3^{\lceil \theta k \rceil}\mathbb{Z}^d \cap \cu_{k+1} }
\mathcal{Z}_{\lceil \theta k \rceil,z}^q
\geq
C + 1
\right]
\leq
C \exp\left( -c \left( \lambda^\sigma \wedge 3^{d(1-\theta)k} \lambda^{-2} \right) \right).
\end{equation*}
Taking $\lambda := 3^{\frac d4(1-\theta)k}$ and $\mathsf{A}^q:=C+1$ yields~\eqref{e.claimedZest.red}.
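For the reader's convenience, the exponent balancing behind this choice of~$\lambda$ (a brief check, not spelled out above) is as follows:
\begin{equation*}
3^{d(1-\theta)k} \lambda^{-2} = 3^{\frac d2 (1-\theta)k}
\quad \mbox{and} \quad
\lambda^{\sigma} = 3^{\frac{d\sigma}{4}(1-\theta)k},
\end{equation*}
so that $\lambda^\sigma \wedge 3^{d(1-\theta)k} \lambda^{-2} \geq 3^{k\sigma'}$ with $\sigma' := \frac d4 (1-\theta) \left( \sigma \wedge 2 \right) > 0$, which is an estimate of the form~\eqref{e.claimedZest.red} after relabeling the exponent~$\sigma$.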
\end{proof}
The above proof, in view of Remark~\ref{r.w1reg}, gives the following result without assuming~\eqref{e.assumption.section5}. This, together with Proposition~\ref{p.Lipxi0} below, serves as the base case for the induction.
\begin{proposition} \label{p.w1higher}
Let $q\in [2,\infty)$ and $\mathsf{M}\in [1,\infty)$. Then there exist~$\sigma(q,\mathrm{data})>0$,~$C(q,\mathsf{M},\mathrm{data}) <\infty$ and a random variable~$\mathcal{X}$ satisfying~$\mathcal{X} \leq \mathcal{O}_{\sigma}\left( C \right)$ such that the following statement is valid.
For $R \in \left[ 2\mathcal{X},\infty\right)$ and $u,w_1 \in H^1(B_R)$ satisfying $\left\| \nabla u \right\|_{\underline{L}^2(B_R)} \leq \mathsf{M}$ and
\begin{equation*}
\left\{
\begin{aligned}
&
-\nabla \cdot \left( D_pL(\nabla u,x) \right) = 0 \quad \mbox{in} \ B_R,\\
&
-\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_1 \right) = 0 \quad \mbox{in} \ B_R,
\end{aligned}
\right.
\end{equation*}
we have, for all $r\in [\mathcal{X} ,\tfrac12 R]$,
\begin{equation} \label{e.w1higher}
\left\| \nabla w_1 \right\|_{\underline{L}^q \left( B_{r} \right)} \leq \frac{C}{R} \left\| w_1 -(w_1)_{B_R} \right\|_{\underline{L}^2 \left( B_{R} \right)}.
\end{equation}
\end{proposition}
\section{Large-scale
\texorpdfstring{$C^{0,1}$}{C01}-type estimate for linearization errors}
\label{s.reglinerrors}
In this section we continue to suppose that~$n\in \{0,\ldots,\mathsf{N}-1 \}$ is such that~\eqref{e.assumption.section5} holds.
The goal is to complete the proof that Theorem~\ref{t.regularity.linerrors} is also valid for~$n+1$. Combined with the results of the previous three sections and an induction argument, this completes the proof of Theorems~\ref{t.regularity.Lbar},~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors}.
\subsection{Excess decay iteration for $\xi_{n+1}$}
We start by proving higher integrability for a difference of two solutions. This, together with Proposition~\ref{p.w1higher}, yields the base case for the induction.
\begin{proposition}
\label{p.Lipxi0}
Fix $\mathsf{M} \in [1,\infty)$ and $q \in [2,\infty)$.
There exist $\alpha(\mathrm{data}),\sigma(q,\mathrm{data}) \in \left(0,\tfrac 12\right]$, $C(q,\mathsf{M},\mathrm{data})<\infty$ and a random variable~$\mathcal{X}$ satisfying
\begin{equation*}
\mathcal{X} = \mathcal{O}_{\sigma}(C)
\end{equation*}
such that the following statement is valid.
Let $R\geq \mathcal{X}$ and let $u,v \in H^1(B_R)$ satisfy
\begin{equation}
\label{e.linearized.approx.L.0}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_pL\left( \nabla u,x \right) \right) = 0
\quad \mbox{and} \quad -\nabla \cdot \left( D_pL(\nabla v,x) \right) = 0
& \mbox{in} & \ B_R,\\
&
\left\| \nabla u \right\|_{\underline{L}^2(B_R)} \vee
\left\| \nabla v \right\|_{\underline{L}^2(B_R)} \leq \mathsf{M}.
\end{aligned}
\right.
\end{equation}
Then, with $\xi_0 = u-v$, we have, for every $r \in [\mathcal{X} , \tfrac12 R]$,
\begin{equation} \label{e.Lipxi0}
\left\| \nabla \xi_{0} \right\|_{\underline{L}^{q}(B_r)}
+ \left( \frac{r}{R} + \frac1{r} \right)^{-\alpha} \inf_{\ell \in \mathcal{P}_1} \frac{1}{r} \left\| \xi_{0} - \ell \right\|_{\underline{L}^2(B_{r})}
\leq
C \frac{1}{R}\left\| \xi_{0} - \left( \xi_{0} \right)_{B_{R}} \right\|_{\underline{L}^2(B_R)} .
\end{equation}
\end{proposition}
\begin{proof}
On the one hand, the estimate
\begin{equation} \label{e.Lipxi0pre}
\left\| \nabla \xi_{0} \right\|_{\underline{L}^{2}(B_r)} + \left( \frac{r}{R} + \frac1{r} \right)^{-\alpha} \inf_{\ell \in \mathcal{P}_1} \frac{1}{r} \left\| \xi_{0} - \ell \right\|_{\underline{L}^2(B_{r})} \leq
C \frac{1}{R}\left\| \xi_{0} - \left( \xi_{0} \right)_{B_{R}} \right\|_{\underline{L}^2(B_R)}
\end{equation}
follows from~\cite[Proposition 4.2]{AFK}. On the other hand, the proof of
\begin{equation} \label{e.Lipxi0pre2}
\left\| \nabla \xi_{0} \right\|_{\underline{L}^{q}(B_r)}
\leq
C \frac{1}{R}\left\| \xi_{0} - \left( \xi_{0} \right)_{B_{R}} \right\|_{\underline{L}^2(B_R)}
\end{equation}
is similar to the proof of the $L^q$-integrability of $w_1$ presented in Section~\ref{s.spatintw}. Indeed, noticing that $\xi_0$ satisfies the equation
\begin{equation*}
-\nabla \cdot \widetilde{\mathbf{a}} \nabla \xi_0 = 0, \qquad \widetilde{\mathbf{a}}(x) := \int_0^1 D_p^2L(t \nabla u(x) + (1-t) \nabla v(x),x) \, dt,
\end{equation*}
we have, by the normalization in~\eqref{e.linearized.approx.L.0} and the $C^{1,\alpha}$ regularity of $u$ and $v$, that we may replace $w_1$ with $\xi_0$ in Lemma~\ref{l.toqandbeyond} applied with $m=1$. Using this together with the Lipschitz estimate~\eqref{e.Lipxi0pre} for $\xi_0$, as in the proof of Lemma~\ref{l.toqandbeyond2}, concludes the proof of~\eqref{e.Lipxi0pre2}. We omit further details.
\end{proof}
\begin{proposition} \label{p.Lipxi}
Assume that~\eqref{e.assumption.section5} holds. Fix $\mathsf{M} \in [1,\infty)$. There exist constants $\alpha(n,\mathrm{data}),\sigma(n,\mathrm{data}) \in \left(0,\tfrac 12\right]$, $C(n,\mathsf{M},\mathrm{data})<\infty$ and a random variable~$\mathcal{X}$ satisfying
\begin{equation*}
\mathcal{X} = \mathcal{O}_{\sigma}(C)
\end{equation*}
such that the following statement is valid.
For every $R\geq \mathcal{X}$ and $u,v,w_1,\ldots,w_{n+1} \in H^1(B_R)$ solving, for each $m\in \{1,\ldots,n+1\}$,
\begin{equation}
\label{e.linearized.approx.L}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_pL\left( \nabla u,x \right) \right) = 0
\quad \mbox{and} \quad -\nabla \cdot \left( D_pL(\nabla v,x) \right) = 0
& \mbox{in} & \ B_R,\\
& -\nabla \cdot \left( D^2_pL\left( \nabla u,x \right) \nabla w_m \right) = \nabla \cdot \left( \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1})\right) & \mbox{in} & \ B_{R},\\
&
\left\| \nabla u \right\|_{\underline{L}^2(B_R)} \vee
\left\| \nabla v \right\|_{\underline{L}^2(B_R)} \leq \mathsf{M},
\end{aligned}
\right.
\end{equation}
we have, for $m\in \{1,\ldots,n+1\}$ and $r \in [\mathcal{X} , \tfrac12 R]$, the estimate
\begin{align}
\label{e.Lipxi}
\lefteqn{ \left( \frac{r}{R} + \frac1{r} \right)^{-\alpha} \inf_{\ell \in \mathcal{P}_1} \frac{1}{r} \left\| \xi_{m} - \ell \right\|_{\underline{L}^2(B_{r})}
+ \left\| \nabla \xi_{m} \right\|_{\underline{L}^{2}(B_r)}
} \qquad &
\\ & \leq \notag
C \sum_{i=0}^{m}
\left(\frac{1}{R}\left\| \xi_{i} - \left( \xi_{i} \right)_{B_{R}} \right\|_{\underline{L}^2(B_R)}^{\frac{1}{i+1}} + \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}^{\frac 1i} \right)^{m+1} .
\end{align}
\end{proposition}
\begin{proof}
We start by fixing some notation. Let $q(n,d,\Lambda)$ be as in Lemma~\ref{l.lineq}, applied with $n+1$ instead of $n$. Corresponding to this $q$, choose $\mathcal{X}$ to be the maximum of the minimal scales appearing in Proposition~\ref{p.Lipxi0}, Theorem~\ref{t.regularity.linerrors} and Lemma~\ref{l.C1alphanew}, the last two of which are valid with $n+1$ in place of $\mathsf{N}$ by~\eqref{e.assumption.section5}. We also assume that $\mathcal{X} \geq \mathsf{R}$, by taking $\mathcal{X} \vee \mathsf{R}$ instead of $\mathcal{X}$, where $\mathsf{R}$ will be fixed in the course of the proof to depend only on the parameters $(n,\mathsf{M},\mathrm{data})$. Chosen this way,~$\mathcal{X}$ satisfies $\mathcal{X} \leq \mathcal{O}_\sigma(C)$ for some constants $\sigma(n,\mathrm{data})>0$ and $C(n,\mathsf{M},\mathrm{data})<\infty$.
\smallskip
Let $r_j = \theta^{j} \eta R$, where $\theta$ is as in Lemma~\ref{l.C1alphanew} and $\eta \in \left(0,\frac12 \right]$. The constant $\eta$, as well as $\mathsf{R}$, will be fixed in the course of the proof, so that $\eta$ is small and $\mathsf{R}$ is large. We track the dependencies on $\eta$ and $\mathsf{R}$ carefully; in particular, the constants denoted by $C$ below do not depend on them. We may assume, without loss of generality, that $\eta R \geq \mathcal{X}$.
Set, for $k \in \{0,1,\ldots,n\}$,
\begin{align} \label{e.xiregEk}
\mathsf{E}_{k} &
:= \frac { 1}{2 r_0}\left\| \xi_{k+1} - \left( \xi_{k+1} \right)_{B_{2 r_0}} \right\|_{\underline{L}^2(B_{2 r_0})}
\\ & \notag
\qquad
+
\sum_{i=0}^{k} \left( \frac{1}{R}\left\| \xi_{i} - \left( \xi_{i} \right)_{B_{R}} \right\|_{\underline{L}^2(B_{R})}
+
\frac1R \left\| w_{i+1} - \left( w_{i+1} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{k+2}{i+1}} .
\end{align}
We denote
\begin{equation*}
R_k := \frac12 \left(1 + 2^{-k} \right)R .
\end{equation*}
\smallskip
\emph{Step 1.} Induction on degree. We assume that, for fixed $k \in \{0,\ldots,n\}$, we have, for every $m \in \{0,\ldots,k\}$ and every
$r \in [\mathcal{X} , R_m]$,
\begin{multline} \label{e.Lipxiind}
\left( \frac{r}{R} + \frac1{r} \right)^{-\alpha} \inf_{\ell \in \mathcal{P}_1} \frac{1}{r} \left\| \nabla \xi_{m} - \ell \right\|_{\underline{L}^2(B_{r})}
+ \left\| \nabla \xi_{m} \right\|_{\underline{L}^{2}(B_r)}
\\ \leq
C \sum_{i=0}^{m}
\left(\frac{1}{R}\left\| \xi_{i} - \left( \xi_{i} \right)_{B_{R}} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{m+1}{i+1}} + \sum_{i=1}^{m} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{m+1}{i}} .
\end{multline}
Notice that if $k=0$, then~\eqref{e.Lipxiind} follows by Proposition~\ref{p.Lipxi0}.
\smallskip
\emph{Step 2.} Caccioppoli estimate for $\xi_{k+1}$.
We show that, for all $r \in [\mathcal{X}, R_{k}]$,
\begin{equation}
\label{e.ximplus1bnd}
\left\| \nabla \xi_{k+1} \right\|_{\underline{L}^{2} \left( B_{(1-2^{-k-2})r} \right)}
\leq
\frac{C}{r} \left\| \xi_{k+1} - ( \xi_{k+1})_{B_{r}} \right\|_{\underline{L}^{2} \left( B_{r} \right)}
+
C \mathsf{E}_{k}.
\end{equation}
We first have by~\eqref{e.EnL2} that
\begin{align*}
\lefteqn{
\left\| \nabla \xi_{k+1} \right\|_{\underline{L}^{2} \left( B_{(1-2^{-k-3})r} \right)}
} \qquad &
\\ &
\leq
C \sum_{i=1}^{k} \left( \frac1{r}\left\| \xi_{i} - (\xi_{i})_{B_{r}}\right\|_{\underline{L}^{2} \left( B_{r} \right)} \right)^{\frac{k+2}{i+1}}
+
C \left\| \nabla \xi_{0} \right\|_{\underline{L}^{q}(B_{r})}^{k+2}
+
C \sum_{i=1}^{k+1} \left\| \nabla \frac{w_i}{i!} \right\|_{\underline{L}^{q}(B_{r})}^{\frac{k+2}{i}} .
\end{align*}
By Theorem~\ref{t.regularity.linerrors} and the choice of $q$ in the beginning of the proof, we obtain, for every $r \in \left[ \mathcal{X} , \tfrac 12 R \right]$ and $m\in\{1,\ldots,k+1\}$, the estimates
\begin{align} \label{e.ximplus1bndw}
\left\| \nabla w_{m} \right\|_{\underline{L}^q(B_r)}^{\frac{1}{m}}
&
\leq
C\sum_{i=1}^{m}
\left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{1}i}
\end{align}
and, by Proposition~\ref{p.Lipxi0},
\begin{equation*}
\left\| \nabla \xi_0 \right\|_{\underline{L}^q(B_r)} \leq \frac CR \left\| \xi_{0} - \left( \xi_{0} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} .
\end{equation*}
Combining the above displays yields~\eqref{e.ximplus1bnd} in view of the induction assumption, i.e., that~\eqref{e.Lipxiind} holds for $m \in \{0,\ldots,k\}$.
\smallskip
\emph{Step 3.}
We prove that there is a constant $C<\infty$ independent of $\eta$ and $\mathsf{R}$ such that, for $j \in \mathbb{N}_0$ with $r_j \geq 2 (\mathcal{X} \vee \mathsf{R})$, we have
\begin{align} \label{e.Lnplusonedecay.case2res}
\lefteqn{
\inf_{\ell \in \mathcal{P}_1} \frac 1{r_{j+1}}\left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r_{j+1}})}
} \quad & \\ \notag
&
\leq
\frac12 \inf_{\ell \in \mathcal{P}_1} \frac 1{r_j} \left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r_j})}
+ C\frac{\varepsilon_j }{r_j} \left\| \xi_{k+1} - \left( \xi_{k+1} \right)_{B_{r_j}} \right\|_{\underline{L}^2(B_{r_j})}
+ C \varepsilon_j^{\frac{1}{k+1}} \mathsf{E}_{k} ,
\end{align}
where
\begin{equation} \label{e.Lnplusonedecay.epr}
\varepsilon_j := \frac12 \left( \frac{r_j}{R} + \frac1{r_j} \right)^{\frac{\alpha}{2}},
\end{equation}
and $\alpha(\mathrm{data})$ is the minimum of the exponents $\alpha$ appearing in the statements of Proposition~\ref{p.Lipxi0} and Lemma~\ref{l.C1alphanew}.
\smallskip
Let us fix $j \in \mathbb{N}_0$ such that $r_j \geq 2 (\mathcal{X} \vee \mathsf{R})$. We argue in two cases, namely, either~\eqref{e.Lnplusonedecay.case1}
or~\eqref{e.Lnplusonedecay.case2} below is valid. We prove that in both cases~\eqref{e.Lnplusonedecay.case2res} follows.
\smallskip
\emph{Step 3.1.} We first assume that
\begin{equation}
\label{e.Lnplusonedecay.case1}
\mathsf{E}_{k}
> \varepsilon_j
\left(
\frac1R \left\| \xi_0 - ( \xi_0 )_{B_R} \right\|_{\underline{L}^2(B_R)}
+ \sum_{i=1}^{k+1} \left( \frac1R \left\| w_i - (w_i)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac {k+1}i}
\right),
\end{equation}
where $\varepsilon_j $ has been fixed in~\eqref{e.Lnplusonedecay.epr}. We show that this implies
\begin{equation} \label{e.Lnplusonedecay.case1res}
\inf_{\ell \in \mathcal{P}_1 }\frac1{r_j} \left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2 \left( B_{r_j} \right)} \leq C \varepsilon_j\mathsf{E}_{k} .
\end{equation}
Notice that this gives~\eqref{e.Lnplusonedecay.case2res}. To show~\eqref{e.Lnplusonedecay.case1res},
we have by the triangle inequality that
\begin{align} \notag
\inf_{\ell \in \mathcal{P}_1 }\frac1{r_j} \left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2 \left( B_{r_j} \right)}
&
\leq
\inf_{\ell \in \mathcal{P}_1 }\frac1{r_j} \left\| \xi_{0} - \ell \right\|_{\underline{L}^2 \left( B_{r_j} \right)}
+
\sum_{i=1}^{k+1} \inf_{\ell \in \mathcal{P}_1 } \frac1{r_j} \left\| w_i - \ell \right\|_{\underline{L}^2(B_{r_j})} .
\end{align}
By the choice of $\alpha$, we get by Proposition~\ref{p.Lipxi0} that
\begin{equation*}
\inf_{\ell \in \mathcal{P}_1 }\frac1{r_j} \left\| \xi_{0} - \ell \right\|_{\underline{L}^2 \left( B_{r_j} \right)}
\leq
C \left( \frac{r_j}{R} + \frac1{r_j} \right)^{\alpha} \frac1R \left\| \xi_0 - ( \xi_0 )_{B_R} \right\|_{\underline{L}^2(B_R)}
\end{equation*}
and, by Proposition~\ref{p.wLip},
\begin{equation*}
\inf_{\ell \in \mathcal{P}_1 } \frac1{r_j} \left\| w_i - \ell \right\|_{\underline{L}^2(B_{r_j})}
\leq
C \left( \frac{r_j}{R} + \frac1{r_j} \right)^{\alpha} \sum_{h=1}^i \left( \frac1R \left\| w_h - (w_h)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac ih} .
\end{equation*}
Combining these estimates and using~\eqref{e.Lnplusonedecay.case1}, we obtain
\begin{equation*}
\inf_{\ell \in \mathcal{P}_1 }\frac1{r_j} \left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2 \left( B_{r_j} \right)} \leq C \left( \frac{r_j}{R} + \frac1{r_j} \right)^{\alpha} \varepsilon_j^{-1} \mathsf{E}_{k} .
\end{equation*}
We then obtain~\eqref{e.Lnplusonedecay.case1res} by the choice of $\varepsilon_j$ in~\eqref{e.Lnplusonedecay.epr}, provided that~\eqref{e.Lnplusonedecay.case1} is valid.
\smallskip
\emph{Step 3.2.} We then assume that~\eqref{e.Lnplusonedecay.case1} fails, that is,
\begin{equation}
\label{e.Lnplusonedecay.case2}
\mathsf{E}_{k}
\leq \varepsilon_j
\left(
\frac1R \left\| \xi_0 - ( \xi_0 )_{B_R} \right\|_{\underline{L}^2(B_R)}
+ \sum_{i=1}^{k+1} \left( \frac1R \left\| w_i - (w_i)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac {k+1}{i}}
\right)
\end{equation}
holds with~$\varepsilon_j$ defined in~\eqref{e.Lnplusonedecay.epr}. We verify~\eqref{e.Lnplusonedecay.case2res} also in this case. To this end, let us first observe an immediate consequence of~\eqref{e.Lnplusonedecay.case2}. By Young's inequality, we have
\begin{equation*}
\varepsilon_j \sum_{i=1}^{k+1} \left( \frac1R \left\| w_i - (w_i)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{k+1}{i}}
\leq
C \varepsilon_j^{\frac{k+2}{k+1}}
+ \frac14 \sum_{i=0}^{k} \left( \frac1R \left\| w_{i+1} - (w_{i+1})_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac {k+2}{i+1}}
\end{equation*}
and
\begin{equation*}
\varepsilon_j \frac1R \left\| \xi_0 - ( \xi_0 )_{B_R} \right\|_{\underline{L}^2(B_R)} \leq 4 \varepsilon_j^{\frac{k+2}{k+1}} +
\frac14 \left(\frac1R \left\| \xi_0 - ( \xi_0 )_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{k+2} .
\end{equation*}
Hence we get, by the definition of $\mathsf{E}_{k}$ in~\eqref{e.xiregEk} and reabsorption, that
\begin{equation} \label{e.Eksmallincasetwo}
\mathsf{E}_{k} \leq C \varepsilon_j^{\frac{k+2}{k+1}}.
\end{equation}
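For the reader's convenience, let us spell out the reabsorption (a routine computation with the constants above): inserting the two Young-type bounds into~\eqref{e.Lnplusonedecay.case2} and observing that, by the definition~\eqref{e.xiregEk}, each of the two absorbed sums is at most $\mathsf{E}_{k}$, we arrive at
\begin{equation*}
\mathsf{E}_{k} \leq C \varepsilon_j^{\frac{k+2}{k+1}} + \frac14 \mathsf{E}_{k} + \frac14 \mathsf{E}_{k} ,
\end{equation*}
and the two terms $\frac14 \mathsf{E}_{k}$ can be reabsorbed into the left-hand side.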
Let then ${w}_{k+2,j}$ solve
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D^2_pL\left( \nabla u ,x\right) \nabla {w}_{k+2,j} \right) = \nabla \cdot \left({\mathbf{F}}_{k+2}(\nabla {u},\nabla {w}_1,\ldots,\nabla {w}_{k+1}, \cdot )\right) & \mbox{in} & \ B_{r_j},\\
& {w}_{k+2,j} = \xi_{k+1} & \mbox{on} & \ \partial B_{r_j}.
\end{aligned}
\right.
\end{equation*}
It follows from Lemma~\ref{l.lineq}, together with~\eqref{e.Lipxi0},~\eqref{e.ximplus1bndw} and~\eqref{e.Lipxiind}, assumed inductively for $m \in \{1,\ldots,k\}$, that
\begin{align} \notag
\lefteqn{\left\| \nabla \cdot \left( D_p^2 L(\nabla u,\cdot) \nabla\left( \xi_{k+1} - w_{k+2,j} \right) \right) \right\|_{\underline{H}^{-1}(B_{r_j})} } \qquad &
\\ \notag &
\leq
C\sum_{i=1}^{k+1}
\left( \left\| \nabla \xi_{i-1} \right\|_{\underline{L}^{2+\delta}(B_{r_j})}^{\frac{1}{i}}
+
\left\| \nabla \xi_{0} \right\|_{\underline{L}^{\frac{8}{\delta}(k+3)}(B_{r_j})}
+
\left\| \nabla w_{i} \right\|_{\underline{L}^{\frac{8}{\delta}(k+3)}(B_{r_j})}^{\frac{1}{i}}
\right)^{k+3}
\\ \notag &
\leq
C \mathsf{E}_{k}^{\frac{k+3}{k+2}}.
\end{align}
Consequently, since $ \xi_{k+1} - w_{k+2,j }\in H_0^1(B_{r_j})$,~\eqref{e.Eksmallincasetwo} yields
\begin{equation} \label{e.Lnplusonecomp}
\frac1{r_j} \left\| \xi_{k+1} - w_{k+2,j}\right\|_{\underline{L}^2 ( B_{r_j} )} \leq C \varepsilon_j^{\frac{1}{k+1}} \mathsf{E}_{k} .
\end{equation}
By Lemma~\ref{l.C1alphanew} we have
\begin{align} \label{e.Lipw.pre.applied}
\lefteqn{
\inf_{\ell \in \mathcal{P}_1} \frac 1{r_{j+1}}\left\| w_{k+2,j} - \ell \right\|_{\underline{L}^2(B_{r_{j+1}})}
} \qquad & \\ \notag
&
\leq
\frac12 \inf_{\ell \in \mathcal{P}_1} \frac 1{r_j} \left\| w_{k+2,j} - \ell \right\|_{\underline{L}^2(B_{r_j})}
+ C \varepsilon_j \frac{1}{r_j} \left\| w_{k+2,j} - \left( w_{k+2,j} \right)_{B_{r_j}} \right\|_{\underline{L}^2(B_{r_j})}
\\ \notag & \qquad
+ C \varepsilon_j \sum_{i=1}^{k+1} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac {k+2}{i}} .
\end{align}
Combining this with~\eqref{e.Lnplusonecomp} and the triangle inequality yields~\eqref{e.Lnplusonedecay.case2res}.
\smallskip
\emph{Step 4.} We show that, for $r \in [ 2 (\mathcal{X} \vee \mathsf{R}),r_0]$, we have
\begin{equation} \label{e.Lipxipre}
\frac{1}{r} \left\| \xi_{k+1} - \left( \xi_{k+1} \right)_{B_{r}} \right\|_{\underline{L}^2(B_{r})} \leq C \mathsf{E}_{k} .
\end{equation}
We proceed inductively: assume $J \in \mathbb{N}_0$ is such that $r_J \geq 2(\mathcal{X} \vee \mathsf{R})$ and that there exists a constant $\mathsf{C}(d,\theta) \in [1,\infty)$, independent of $\eta$ and $\mathsf{R}$, such that, for all $j \in \{0,\ldots,J\}$,
\begin{equation*}
\frac{1}{r_j} \left\| \xi_{k+1} - \left( \xi_{k+1} \right)_{B_{r_j}} \right\|_{\underline{L}^2(B_{r_j})} \leq \mathsf{C} \mathsf{E}_{k} .
\end{equation*}
This is true for $J=0$ by the definition of $\mathsf{E}_{k}$. We claim that it continues to hold for $j=J+1$ as well. Combining~\eqref{e.Lnplusonedecay.case1res} and~\eqref{e.Lnplusonedecay.case2res} with the induction assumption we have, for $j \in \{1,\ldots,J\}$, that
\begin{align*}
\inf_{\ell \in \mathcal{P}_1} \frac 1{r_{j+1}}\left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r_{j+1}})}
\leq
\frac12 \inf_{\ell \in \mathcal{P}_1} \frac 1{r_j} \left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r_j})} + C \left( \mathsf{C} \varepsilon_j + \varepsilon_j^{\frac{1}{k+1}} \right) \mathsf{E}_{k} .
\end{align*}
Since $r_J \geq \mathsf{R}$, we may take $\mathsf{R}$ large and $\eta$ small enough so that
\begin{equation*}
C \sum_{j=0}^J \left( \mathsf{C} \varepsilon_j + \varepsilon_j^{\frac{1}{k+1}} \right)
\leq
C \mathsf{C} \left( 1 - \theta^{\frac{\alpha}{n+1}}\right)^{-1} \left(\eta + \frac1{\mathsf{R}} \right)^{\frac{1}{k+1}}
\leq
\frac12.
\end{equation*}
Thus, by summing and reabsorbing,
\begin{equation} \label{e.iterxikplusone}
\sum_{j=0}^{J+1} \inf_{\ell \in \mathcal{P}_1} \frac 1{r_{j}}\left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r_{j}})} \leq \inf_{\ell \in \mathcal{P}_1} \frac 1{r_0} \left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r_0})} + \mathsf{E}_{k} \leq 2 \mathsf{E}_{k}.
\end{equation}
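Let us sketch the summation in slightly more detail, with the shorthand, used only here, $a_j := \inf_{\ell \in \mathcal{P}_1} \frac 1{r_{j}} \| \xi_{k+1} - \ell \|_{\underline{L}^2(B_{r_{j}})}$: summing the decay inequality over $j \in \{0,\ldots,J\}$ and using the smallness of $C\sum_{j=0}^J ( \mathsf{C} \varepsilon_j + \varepsilon_j^{\frac{1}{k+1}} )$ gives
\begin{equation*}
\sum_{j=1}^{J+1} a_j \leq \frac12 \sum_{j=0}^{J} a_j + \frac12 \mathsf{E}_{k} \leq \frac12 \sum_{j=0}^{J+1} a_j + \frac12 \mathsf{E}_{k} ,
\end{equation*}
so that, adding $a_0$ to both sides and reabsorbing, $\sum_{j=0}^{J+1} a_j \leq 2 a_0 + \mathsf{E}_{k}$; up to a dimensional constant, $a_0$ is controlled by the first term in the definition~\eqref{e.xiregEk} of $\mathsf{E}_{k}$.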
Letting $\ell_j$ be the minimizing affine function in $ \inf_{\ell \in \mathcal{P}_1} \left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r_{j}})}$, we obtain by the above display and the triangle inequality that
\begin{equation*}
\left| \nabla \ell_{j+1} - \nabla \ell_0 \right| \leq C(d) \theta^{-\frac d2-1} \mathsf{E}_{k}.
\end{equation*}
By the triangle inequality again, we obtain that
\begin{equation*}
\left| \nabla \ell_0 \right| \leq C(d) \left( \frac{1}{r_0}\left\| \xi_{k+1} - \ell_0 \right\|_{\underline{L}^2(B_{r_{0}})} + \frac{1}{r_0}\left\| \xi_{k+1} - (\xi_{k+1})_{B_{r_0}} \right\|_{\underline{L}^2(B_{r_{0}})} \right) \leq 2 \mathsf{E}_{k}
\end{equation*}
and, consequently, for some $C(d,\theta) <\infty$,
\begin{equation*}
\left| \nabla \ell_{J+1} \right| \leq C \mathsf{E}_{k} .
\end{equation*}
We thus obtain by the triangle inequality, together with~\eqref{e.iterxikplusone} and the above display, that there exists $C(d,\theta) <\infty$ such that
\begin{equation*}
\frac{1}{r_{J+1}}\left\| \xi_{k+1} - (\xi_{k+1})_{B_{r_{J+1}}} \right\|_{\underline{L}^2(B_{r_{J+1}})} \leq C \mathsf{E}_{k} .
\end{equation*}
Hence we can take $\mathsf{C} = C$, proving the induction step. Letting then $J$ be such that $r \in (r_{J+1},r_{J}]$, we obtain~\eqref{e.Lipxipre} by giving up a volume factor.
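Here, giving up a volume factor amounts to the following elementary observation: if $r \in (r_{J+1},r_J]$, then $r \geq \theta r_J$, and hence
\begin{equation*}
\frac1r \left\| \xi_{k+1} - (\xi_{k+1})_{B_r} \right\|_{\underline{L}^2(B_r)}
\leq
\frac{1}{\theta r_J} \left( \frac{|B_{r_J}|}{|B_{r}|} \right)^{\frac12} \left\| \xi_{k+1} - (\xi_{k+1})_{B_{r_J}} \right\|_{\underline{L}^2(B_{r_J})}
\leq
\theta^{-\frac d2 - 1} \mathsf{C} \mathsf{E}_{k} ,
\end{equation*}
where we used that the mean $(\xi_{k+1})_{B_r}$ minimizes $c \mapsto \| \xi_{k+1} - c \|_{L^2(B_r)}$ and that $|B_{r_J}|/|B_r| \leq \theta^{-d}$.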
\smallskip
\emph{Step 5.} Conclusion.
To conclude, we obtain from~\eqref{e.Lnplusonedecay.case2res} and~\eqref{e.Lipxipre}, by iterating, that
\begin{equation*}
\inf_{\ell \in \mathcal{P}_1} \frac 1{r_{j}}\left\| \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r_{j}})} \leq C \left(2^{-j} + \sum_{i=0}^j 2^{i-j} \varepsilon_i \right) \mathsf{E}_{k} .
\end{equation*}
We hence find $\alpha$ such that, for all $r \in [2 (\mathcal{X} \vee \mathsf{R}) , r_0]$,
\begin{equation*}
\left( \frac{r}{R} + \frac1{r} \right)^{-\alpha} \inf_{\ell \in \mathcal{P}_1} \frac{1}{r} \left\| \nabla \xi_{k+1} - \ell \right\|_{\underline{L}^2(B_{r})}
\leq
C \mathsf{E}_{k}.
\end{equation*}
Moreover, by~\eqref{e.Lipxipre} and~\eqref{e.ximplus1bnd} we deduce that, for all $r \in [2 (\mathcal{X} \vee \mathsf{R}) , R_{k+1}]$,
\begin{equation*}
\left\| \nabla \xi_{k+1} \right\|_{\underline{L}^{2} \left( B_{r} \right)} \leq C \sum_{i=0}^{k+1}
\left(\frac{1}{R}\left\| \xi_{i} - \left( \xi_{i} \right)_{B_{R}} \right\|_{\underline{L}^2(B_R)} + \frac1R \left\| w_{i+1} - \left( w_{i+1} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{k+2}{i+1}} .
\end{equation*}
Therefore,~\eqref{e.Lipxiind} is valid for $m= k+1$, by giving up a volume factor. This proves the induction step and completes the proof.
\end{proof}
\subsection{Improvement of spatial integrability} \label{s.spatintxi}
Following the strategy of Section~\ref{s.spatintw}, we improve the spatial integrability in~\eqref{e.Lipxi} from $L^2$ to $L^q$ for every $q\in [2,\infty)$. Fix $q \in [2,\infty)$.
Now, $\xi_{n+1}$ solves
\begin{equation*}
-\nabla \cdot \left( D_p^2 L(\nabla u,\cdot) \nabla \xi_{n+1} \right) = \nabla \cdot \mathbf{E}_{n+1} ,
\end{equation*}
where $ \mathbf{E}_{n+1} $ satisfies the estimate~\eqref{e.EnLq} with $\delta = q$ and $n+1$ instead of $n$. Recall that both~\eqref{e.C01linsols} and~\eqref{e.C01linerror} are valid for $m \in \{1,\ldots,n\}$ with $2nq$ in place of $q$. These, together with Proposition~\ref{p.Lipxi0}, yield by~\eqref{e.EnLq} that, for $R \geq \mathcal{X}$,
\begin{align*}
\left\| \mathbf{E}_{n+1} \right\|_{\underline{L}^{q} \left( B_{\mathcal{X}/2} \right)}
&
\leq
C \sum_{i=0}^{n} \left( \frac{1}{R}\left\| \xi_{i} - \left( \xi_{i} \right)_{B_{R}} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{n+2}{i+1}}
\\ & \qquad
+ C \sum_{i=1}^{n+1} \left( \frac1R \left\| w_{i} - \left( w_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)}\right)^{\frac {n+2}{i}} .
\end{align*}
Having this at our disposal, we may repeat the proof of Section~\ref{s.spatintw} to conclude that~\eqref{e.Lipxi} holds in $L^q$ for every $q \in [2,\infty)$.
\section{Sharp estimates of linearization errors}
\label{s.linerrorsproof}
Here we show that Corollary~\ref{c.linerrors} is a consequence of Theorems~\ref{t.regularity.Lbar},~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors}.
\begin{proof}[{Proof of Corollary~\ref{c.linerrors}}]
For each $k\in \{0,\ldots,n-1\}$, let $\{ B^{(k)}_j \}$ be a finite family of balls such that
\begin{equation}
U_k \subseteq \bigcup_j B^{(k)}_j
\quad \mbox{and} \quad
\bigcup_j 4 B^{(k)}_j \subseteq U_{k+1}.
\end{equation}
By the Vitali covering lemma, we may further assume that $\frac13 B^{(k)}_j \cap \frac13 B^{(k)}_i=\emptyset$ whenever $i\neq j$. Let $\mathcal{Z}$ be the finite set consisting of the centers of these balls.
The size of $\mathcal{Z}$ depends only on the geometry of the sequence of domains~$U_0,U_1,\ldots,U_{n}$. Let~$\mathcal{X}$ be the maximum of the random variables~$\mathcal{X}$ given in Theorem~\ref{t.regularity.linerrors}, centered at elements of~$\mathcal{Z}$, divided by the radius of the smallest ball. We assume that $r\geq \mathcal{X}$. This ensures the validity of Theorem~\ref{t.regularity.linerrors} in each of the balls $B^{(k)}_j$: that is, for every $q\in [2,\infty)$ and $m\in\{1,\ldots,n\}$, we have the estimate
\begin{align}
\label{e.C01linsols2}
\left\| \nabla w_{m} \right\|_{\underline{L}^q(B^{(m)}_j)}
&
\leq
C\sum_{i=1}^{m}
\left\| \nabla w_{i} \right\|_{\underline{L}^2(2B^{(m)}_j)}^{\frac{m}i}
\end{align}
and hence, by the covering,
\begin{align}
\label{e.C01linsols3}
\left\| \nabla w_{m} \right\|_{\underline{L}^q(U_m)}
&
\leq
C\sum_{i=1}^{m}
\left\| \nabla w_{i} \right\|_{\underline{L}^2(U_{i})}^{\frac{m}i}.
\end{align}
Proceeding with the proof of the corollary, we define, as usual,
\begin{equation*}
\xi_m := v - u - \sum_{k=1}^m \frac1{k!} w_k.
\end{equation*}
Arguing by induction, let us suppose that~$m\in\mathbb{N}$ with $m\geq 1$ and $\theta>0$ are such that, for every $j \in \left\{ 0,\ldots,m-1\right\}$ and $q\in [2,\infty)$, there exists $C(q,\mathrm{data})<\infty$ such that
\begin{equation}
\label{e.inductgoatass}
\left\| \nabla \xi_{j} \right\|_{\underline{L}^{2+\theta}(U_j)}
+
\left\| \nabla w_{j+1} \right\|_{\underline{L}^q(U_{j+1})}
\leq
C \left( \left\| \nabla u - \nabla v \right\|_{\underline{L}^{2}(rU_0)} \right)^{j+1}.
\end{equation}
This clearly holds for~$m=1$ and some~$\theta (d,\Lambda)>0$ by the Meyers estimate and Theorem~\ref{t.regularity.linerrors}. We will show that it also holds for~$m+1$ and some other (possibly smaller) exponent~$\theta>0$.
\smallskip
\emph{Step 1.} We show that
\begin{equation}
\label{e.corowts1}
\left\| \nabla w_{m+1} \right\|_{\underline{L}^2(U_{m+1})}
\leq
C \left( \left\| \nabla u - \nabla v \right\|_{\underline{L}^{2}(rU_0)} \right)^{m+1}.
\end{equation}
By the basic energy estimate,
\begin{align*}
\left\| \nabla w_{m+1} \right\|_{\underline{L}^2(U_{m+1})}
\leq
C\cdot
\left\{
\begin{aligned}
& \left\| \nabla u - \nabla v \right\|_{\underline{L}^{2}(rU_0)} & \mbox{if} & \ m=0,\\
& \left\| \mathbf{F}_{m+1} (\cdot,\nabla u,\nabla w_1,\ldots,\nabla w_{m}) \right\|_{L^2(U_{m+1})}
& \mbox{if} & \ m\geq 1,
\end{aligned}
\right.
\end{align*}
where
\begin{equation*}
\left| \mathbf{F}_{m+1} (\cdot,\nabla u,\nabla w_1,\ldots,\nabla w_{m}) \right|
\leq
C \sum_{k=1}^{m} \left| \nabla w_k \right|^{\frac{m+1}{k}}.
\end{equation*}
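For clarity, we record how this pointwise bound enters (a direct application of the triangle inequality in $L^2$, together with the identity $\| |f|^p \|_{L^2} = \| f \|_{L^{2p}}^p$):
\begin{equation*}
\left\| \mathbf{F}_{m+1} (\cdot,\nabla u,\nabla w_1,\ldots,\nabla w_{m}) \right\|_{L^2(U_{m+1})}
\leq
C \sum_{k=1}^{m} \left\| \left| \nabla w_k \right|^{\frac{m+1}{k}} \right\|_{L^2(U_{m+1})}
=
C \sum_{k=1}^{m} \left\| \nabla w_k \right\|_{L^{\frac{2(m+1)}{k}}(U_{m+1})}^{\frac{m+1}{k}} ,
\end{equation*}
which reduces~\eqref{e.corowts1} to the higher-integrability bounds for the $\nabla w_k$ invoked next.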
By Theorem~\ref{t.regularity.linerrors} (using our definition of~$\mathcal{X}$ and the fact that $r\geq \mathcal{X}$) and the induction hypothesis, we have, for every $k\in\{ 1,\ldots,m\}$,
\begin{align*}
\left\| \nabla w_k \right\|_{\underline{L}^{\frac{2(m+1)}{k}}(U_{m})}^{\frac{m+1}{k}}
\leq
C\sum_{i=1}^{k-1}
\left( \frac1R
\left\| w_{i} \right\|_{\underline{L}^2(U_{m-1})}
\right)^{\frac{m+1}i}
\leq
C\left( \left\| \nabla u - \nabla v \right\|_{\underline{L}^{2}(rU_0)} \right)^{m+1}.
\end{align*}
This completes the proof of~\eqref{e.corowts1}.
\smallskip
\emph{Step 2.} We show that
\begin{equation}
\label{e.corowts2}
\left\| \nabla \xi_m \right\|_{\underline{L}^{2+\theta/2}(U_m)}
\leq
C \left( \left\| \nabla u - \nabla v \right\|_{\underline{L}^{2}(rU_0)} \right)^{m+1}.
\end{equation}
Observe that, since $m\geq 1$, $\xi_m \in H^1_0(U_m)$, that is, $\xi_m$ vanishes on $\partial U_m$. Therefore, by the Meyers estimate and Lemma~\ref{l.lineq}, in particular~\eqref{e.EnLq} with $q=2+\frac12\theta$ and $\delta=\frac12\theta$, we get
\begin{align*}
\left\| \nabla \xi_m \right\|_{\underline{L}^{2+\theta/2}(U_m)}
&
\leq
C\left( \sum_{i=1}^{m-1} \left\| \nabla \xi_{i} \right\|_{\underline{L}^{2+\theta} \left( B_{R} \right)}^{\frac{m+1}{i+1}}
+
\left\| \nabla \xi_{0} \right\|_{\underline{L}^{\frac{2m(4+\theta)}{\theta}}(B_R)}^{m+1}
+
\sum_{i=1}^{m-1} \left\| \nabla w_i \right\|_{\underline{L}^{\frac{2m(4+\theta)}{\theta}}(B_R)}^{\frac{m+1}{i}} \right)
\\ &
\leq
C \left( \left\| \nabla u - \nabla v \right\|_{\underline{L}^{2}(rU_0)} \right)^{m+1}.
\end{align*}
This completes the proof of~\eqref{e.corowts2}.
\smallskip
The corollary now follows by induction.
\end{proof}
\section{Liouville theorems and higher regularity}
\label{s.regularityhigher}
In this section we prove Theorem~\ref{t.regularityhigher} by an induction in the degree~$n$. The initial step $n=1$ has already been established in~\cite{AFK}. Indeed, $\mathrm{(i)}_1$ and $\mathrm{(ii)}_1$ are consequences of~\cite[Theorem 5.2]{AFK}, and $\mathrm{(iii)}_1$ follows from Theorem~\ref{t.C11estimate}, which is~\cite[Theorem 1.3]{AFK}. Moreover, these estimates hold with optimal stochastic integrability, namely we may take any $\sigma \in (0,d)$ for $n=1$ (with the constant~$C$ then depending additionally on~$\sigma$).
\smallskip
Throughout the section we will use the following notation. Given $p \in \mathbb{R}^d$ and $k \in \mathbb{N}$, we denote
\begin{equation*}
\mathcal{A}_k^p := \left\{ u \in H_{\textrm{loc}}^1(\mathbb{R}^d) \; : \; -\nabla \cdot \left( \mathbf{a}_p \nabla u \right) = 0 , \; \lim_{r \to \infty}
r^{-k-1} \left\| u \right\|_{\underline{L}^2 \left( B_{r} \right)} = 0 \right\}
\end{equation*}
and
\begin{equation*}
\overline{\mathcal{A}}_k^p := \left\{ \overline{u} \in H_{\textrm{loc}}^1(\mathbb{R}^d) \; : \; -\nabla \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \nabla \overline{u} \right) = 0 , \; \lim_{r \to \infty}
r^{-k-1} \left\| \overline{u} \right\|_{\underline{L}^2 \left( B_{r} \right)} = 0 \right\}.
\end{equation*}
\begin{remark} \label{r.regularityhigher1}
In proving Theorem~\ref{t.regularityhigher}$\mathrm{(ii)}_n$ by induction in~$n$, it will be necessary to prove a stronger statement, namely that, for every $p \in B_{\mathsf{M}}$, $(\overline{w}_1,\ldots,\overline{w}_n) \in \overline{\mathsf{W}}_n^{p}$ and $(w_1,\ldots,w_{n-1}) \in \mathsf{W}_{n-1}^p$ satisfying~\eqref{e.liouvillec} for $m\in \{1,\ldots,n-1\}$, there exists~$w_n$ satisfying~\eqref{e.liouvillec} for~$m=n$ such that $(w_1,\ldots,w_{n}) \in \mathsf{W}_{n}^p$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{t.regularityhigher} $\mathrm{(ii)}_n$] For fixed $n$, we take $\mathcal{X}$ to be the maximum of the random variables $\mathcal{X}$ appearing in Theorem~\ref{t.linearizehigher} and in Theorem~\ref{t.regularity.linerrors} corresponding to $q=2+\delta$, where $\delta$ is as in Theorem~\ref{t.linearizehigher}, and of a deterministic constant $\mathsf{R}(n,\mathsf{M}, d,\Lambda) < \infty$. Clearly~\eqref{e.higherX} then holds.
\smallskip
Given $p \in B_{\mathsf{M}}$ and $(\overline{w}_1,\ldots,\overline{w}_n) \in \overline{\mathsf{W}}_n^p $, our goal is to prove that there exists a tuple $(w_1,\ldots,w_n) \in \mathsf{W}_n^p$ such that~\eqref{e.liouvillec} holds for every $R\geq \mathcal{X}$ and $k \in \{1,\ldots,n\}$.
\emph{Step 1.} Induction assumption. We proceed inductively and assume that there is a tuplet $(w_1,\ldots,w_{n-1}) \in \mathbf{m}athsf{W}_{n-1}^p$ such that~\eqref{e.liouvillec} is true for every $R\mathbf{m}athbf{g}eq \mathcal{X}$ and $m \in \{1,\ldots,n-1\}$. The base case for the induction is valid by the results of~\cite{AFK}, as mentioned at the beginning of this section. Our goal is therefore to construct $w_n$ such that $(w_1,\ldots,w_n) \in \mathbf{m}athsf{W}_n^p$ and~\eqref{e.liouvillec} holds for every $R\mathbf{m}athbf{g}eq \mathcal{X}$ and $k \in \{1,\ldots,n\}$. Recall that since $(\overlineerline{w}_1,\ldots,\overlineerline{w}_n) \in \overlineerline{\mathbf{m}athsf{W}}_n^p$, we have, for every $R\mathbf{m}athbf{g}eq \mathcal{X}$, that
\begin{equation}
\label{e.overlinewgrowth}
\sum_{i=1}^{n}
\left(
\frac1{R} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{n}{i}}
\leq
C(d)
\left(\frac{R}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{equation}
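For the reader's convenience, we record the elementary computation behind~\eqref{e.overlinewgrowth}. Recalling that $\overline{w}_i \in \mathcal{P}_{i+1}$ is a polynomial of degree at most $i+1$, we have $\left\| \overline{w}_i \right\|_{\underline{L}^2(B_R)} \leq C (R/\mathcal{X})^{i+1} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}$ by equivalence of norms on the finite-dimensional space of such polynomials, and hence
\begin{equation*}
\left( \frac1{R} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_R)} \right)^{\frac{n}{i}}
\leq
C \left( \left(\frac{R}{\mathcal{X}}\right)^{i} \frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})} \right)^{\frac{n}{i}}
=
C \left(\frac{R}{\mathcal{X}}\right)^{n} \left( \frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})} \right)^{\frac{n}{i}} ,
\end{equation*}
and~\eqref{e.overlinewgrowth} follows upon summation over $i$.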
Moreover, by the induction assumption, if the lower bound $\mathsf{R}$ for $\mathcal{X}$ is large enough, we deduce by~\eqref{e.liouvillec} that, for $R \geq \mathcal{X}$ and $m \in \{1,\ldots,n-1\}$,
\begin{equation}
\label{e.overlinewpluswgrowth}
\sum_{i=1}^{m}
\left(
\frac1{R} \left\| w_i \right\|_{\underline{L}^2(B_R)}
\right)^{\frac{m}{i}}
\leq 2
\left(\frac{R}{\mathcal{X}}\right)^m \sum_{i=1}^{m}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{m}{i}}
.
\end{equation}
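Let us sketch where the factor $2$ in~\eqref{e.overlinewpluswgrowth} comes from. By the triangle inequality,
\begin{equation*}
\frac1{R} \left\| w_i \right\|_{\underline{L}^2(B_R)}
\leq
\frac1{R} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_R)}
+
\frac1{R} \left\| w_i - \overline{w}_i \right\|_{\underline{L}^2(B_R)} ,
\end{equation*}
and by~\eqref{e.liouvillec} the second term carries an additional factor $R^{-\delta}$ relative to the right-hand side of~\eqref{e.overlinewpluswgrowth}; together with~\eqref{e.overlinewgrowth}, it can therefore be absorbed once the lower bound $\mathsf{R} \leq \mathcal{X} \leq R$ is taken large enough.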
\smallskip
Let $r_j:= 2^j \mathcal{X}$ and let $w_{n,j}$, $j \in \mathbb{N}$, solve
\begin{equation}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \mathbf{a}_p \nabla w_{n,j} \right) = \nabla \cdot \mathbf{F}_n(p + \nabla \phi_p,\nabla w_{1},\ldots, \nabla w_{n-1},\cdot)
& \mbox{in} & \ B_{r_{j}},\\
& w_{n,j} = \overline{w}_n & \mbox{on} & \ \partial B_{r_{j}}.
\end{aligned}
\right.
\end{equation}
We show that
\begin{equation} \label{e.wmjhomog2}
\left\| w_{n,j} - \overline{w}_{n} \right\|_{\underline{L}^2 \left( B_{r_{j-1}} \right)}
\leq
C r_j^{1-\alpha}
\left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{equation}
Theorem~\ref{t.linearizehigher}
yields that, for $m \in \{1,\ldots,n\}$, solutions of
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_p\overline{L}(\nabla \overline{u}_j) \right) = 0 & & \mbox{in } \ B_{r_{j+n}},\\
& -\nabla \cdot \left( D_p^2\overline{L}(\nabla \overline{u}_j) \nabla \overline{w}_{m,j} \right) = \nabla \cdot \overline{\mathbf{F}}_n(\nabla \overline{u}_j,\nabla \overline{w}_{1,j},\ldots, \nabla \overline{w}_{m-1,j})
& & \mbox{in } \ B_{r_{j+n-m}},\\
& \overline{u}_{j}(x) = p\cdot x + \phi_p(x) & &\mbox{for } x \in \partial B_{r_{j+n}},\\
& \overline{w}_{m,j} = w_{m}, \quad m \in \{1,\ldots,n-1\}, & & \mbox{on } \partial B_{r_{j+n-m}},
\\
& \overline{w}_{n,j} = \overline{w}_{n} & & \mbox{on } \partial B_{r_{j}}
\end{aligned}
\right.
\end{equation*}
satisfy the estimate
\begin{equation*}
\left\| w_{n,j} - \overline{w}_{n,j} \right\|_{\underline{L}^2 \left( B_{r_{j}} \right)}
\leq
C r_j^{1-\alpha} \left( \left\| \nabla \overline{w}_{n} \right\|_{\underline{L}^{2+\delta}(B_{r_{j}})} +
\sum_{i=1}^{n-1}
\left(
\left\| \nabla w_i \right\|_{\underline{L}^{2+\delta}(B_{r_{j + n-i}})} \right)^{\frac{n}{i}}
\right) .
\end{equation*}
By Theorem~\ref{t.regularity.linerrors}, together with~\eqref{e.overlinewgrowth} and~\eqref{e.overlinewpluswgrowth}, the latter assumed for $m \in \{1,\ldots,n-1\}$, we obtain
\begin{equation} \label{e.wmjhomog00}
\left\| w_{n,j} - \overline{w}_{n,j} \right\|_{\underline{L}^2 \left( B_{r_{j}} \right)}
\leq
C r_j^{1-\alpha} \left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{equation}
Similarly, by Theorems~\ref{t.linearizehigher} and~\ref{t.regularity.linerrors}, together with~\eqref{e.overlinewpluswgrowth} valid for $m \in \{1,\ldots,n-1\}$, we get, for $m \in \{1,\ldots,n-1\}$, that
\begin{equation*}
\left\| w_{m} - \overline{w}_{m,j} \right\|_{\underline{L}^2 \left( B_{r_{j+n-m}} \right)} \leq
C r_j^{1-\alpha}
\left(\frac{r_j}{\mathcal{X}}\right)^m \sum_{i=1}^{m}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{m}{i}} .
\end{equation*}
Consequently, again by~\eqref{e.liouvillec}, assumed for $m \in \{1,\ldots,n-1\}$, we obtain
\begin{equation} \label{e.wmjhomog0}
\left\| \overline{w}_{m} - \overline{w}_{m,j} \right\|_{\underline{L}^2 \left( B_{r_{j}} \right)} \leq
C r_j^{1-\delta}
\left(\frac{r_j}{\mathcal{X}}\right)^m \sum_{i=1}^{m}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{m}{i}}
.
\end{equation}
We denote, in short,
\begin{equation*}
\overline{\mathbf{f}}_{m,j} := \overline{\mathbf{F}}_m(\nabla \overline{u}_j,\nabla \overline{w}_{1,j},\ldots, \nabla \overline{w}_{m-1,j}) ,
\end{equation*}
together with
\begin{equation*}
{\overbracket[1pt][-1pt]{\mathbf{a}}}_p := D_p^2 \overline{L}(p) \quad \mbox{and} \quad \overline{\mathbf{f}}_{m} := \overline{\mathbf{F}}_m(p, \nabla \overline{w}_{1},\ldots, \nabla \overline{w}_{m-1}) .
\end{equation*}
Now, it is easy to see (cf.\ the proof of~\cite[Theorem 5.2]{AFK}) that
\begin{equation*}
\left\| \nabla \overline{u}_{j} - p \right\|_{L^\infty(B_{r_{j+n-1}})} \leq C r_j^{-\alpha},
\end{equation*}
so that, by Theorem~\ref{t.regularity.Lbar},
\begin{equation*}
\left\| D_p^2\overline{L}(\nabla \overline{u}_{j}) - {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \right\|_{L^\infty(B_{r_{j+n-1}})} \leq
\left\| D_p^3 \overline{L} \right\|_{L^\infty(B_{2\mathsf{M}})} \left\| \nabla \overline{u}_{j} - p \right\|_{L^\infty(B_{r_{j+n-1}})}
\leq C r_j^{-\alpha}.
\end{equation*}
Therefore, by a computation analogous to the proof of Lemma~\ref{l.diff.linearizedsystem}, using~\eqref{e.wmjhomog0}, we get
\begin{equation*}
\left\| \overline{\mathbf{f}}_{n,j} - \overline{\mathbf{f}}_{n} \right\|_{\underline{L}^{2}(B_{r_{j}})}
\leq
C r_j^{-\alpha/2}
\left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
\end{equation*}
as well as
\begin{equation*}
\left\| \left( D_p^2 \overline{L}(\nabla \overline{u}_j) - {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \right) \nabla \overline{w}_{n,j} \right\|_{\underline{L}^{2}(B_{r_{j}})}
\leq
C r_j^{- \alpha/2-1}
\left\| \overline{w}_n - (\overline{w}_n)_{B_{r_j}} \right\|_{\underline{L}^{2}(B_{r_{j}})}.
\end{equation*}
By testing the equation
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \nabla ( \overline{w}_{n,j} - \overline{w}_{n} ) \right) = \nabla \cdot \left( \left(D_p^2 \overline{L}(\nabla \overline{u}_j) - {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \right) \nabla \overline{w}_{n,j} + \overline{\mathbf{f}}_{n,j} - \overline{\mathbf{f}}_{n} \right)
& & \mbox{in } \ B_{r_{j}},
\\
& \overline{w}_{n,j} - \overline{w}_{n} = 0 & & \mbox{on } \partial B_{r_{j}}
\end{aligned}
\right.
\end{equation*}
with $\overline{w}_{n,j} - \overline{w}_{n}$, we then obtain
\begin{equation*}
\left\| \overline{w}_{n,j} - \overline{w}_{n} \right\|_{\underline{L}^{2}(B_{r_{j}})} \leq
C (r_j^{1-\delta} + r_j^{1-\alpha/2})
\left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{equation*}
Therefore,~\eqref{e.wmjhomog2} follows from~\eqref{e.wmjhomog00} and the above display by taking $\delta = \alpha/2$.
\smallskip
\emph{Step 2.} We show that there is $w_n$ such that $(w_1,\ldots,w_n) \in \mathsf{W}_n^p$ and $w_n$ satisfies~\eqref{e.liouvillec}.
Setting $z_{n,j} := w_{n,j} - w_{n,j+1}$, we have by the triangle inequality that
\begin{equation} \label{e.znjest}
\left\| z_{n,j} \right\|_{\underline{L}^2 \left( B_{r_j} \right)} \leq
C r_j^{1 - \delta}
\left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}} .
\end{equation}
Notice that $z_{n,j}$ is $\mathbf{a}_p$-harmonic in $B_{r_j}$. Thus, by~\cite[Theorem 5.2]{AFK}, we find $\phi_{n,j} \in \mathcal{A}_{n}^p$ such that, for every $r \in [\mathcal{X},r_j]$,
\begin{equation} \label{e.znjest2}
\left\| z_{n,j} - \phi_{n,j} \right\|_{\underline{L}^2 \left( B_{r} \right)}
\leq
C \left(\frac{r}{r_j}\right)^{n+1} \left\| z_{n,j} \right\|_{\underline{L}^2 \left( B_{r_j} \right)} .
\end{equation}
Consequently, for every $r \in [\mathcal{X},r_j]$,
\begin{equation*}
\left\| z_{n,j} - \phi_{n,j} \right\|_{\underline{L}^2 \left( B_{r} \right)}
\leq
C \left( \frac{r}{r_j} \right)^{\delta}
r^{1 - \delta}
\left(\frac{r}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}} .
\end{equation*}
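The exponent bookkeeping in the last display, obtained by inserting~\eqref{e.znjest} into~\eqref{e.znjest2}, rests on the exact identity
\begin{equation*}
\left(\frac{r}{r_j}\right)^{n+1} r_j^{1-\delta} \left(\frac{r_j}{\mathcal{X}}\right)^{n}
=
\left(\frac{r}{r_j}\right)^{\delta} r^{1-\delta} \left(\frac{r}{\mathcal{X}}\right)^{n} ,
\end{equation*}
as one checks by matching the powers of $r$, $r_j$ and $\mathcal{X}$ on both sides.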
Setting then $\widetilde w_{n,j} := w_{n,j} + \sum_{h=1}^{j-1} \phi_{n,h}$, we have that
$
\widetilde w_{n,j} - \widetilde w_{n,k} = \sum_{i= j}^{k-1} (z_{n,i} - \phi_{n,i})
$ and it follows that, for all $j,k \in \mathbb{N}$, $j < k$,
\begin{equation} \label{e.wmjhomog3prepre}
\left\| \widetilde w_{n,k} - \widetilde w_{n,j} \right\|_{\underline{L}^2 \left( B_{r_j} \right)}
\leq
C r_j^{1-\delta}
\left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{equation}
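The bound~\eqref{e.wmjhomog3prepre} follows since each summand is controlled at scale $r_j$ and the dyadic factors sum to a constant: by the previous display with $r = r_j$ (note $r_j \leq r_h$ for $h \geq j$),
\begin{equation*}
\sum_{h= j}^{k-1} \left\| z_{n,h} - \phi_{n,h} \right\|_{\underline{L}^2 \left( B_{r_j} \right)}
\leq
C \sum_{h=j}^{\infty} \left(\frac{r_j}{r_h}\right)^{\delta}
r_j^{1-\delta}
\left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}} ,
\end{equation*}
and $\sum_{h=j}^{\infty} (r_j/r_h)^{\delta} = \sum_{h=0}^{\infty} 2^{-h\delta} \leq C(\delta)$.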
Therefore, $\{\widetilde w_{n,k} \}_{k=j}^\infty$ is a Cauchy sequence and, by the Caccioppoli estimate and a diagonal argument, we find~$w_n$ such that
\begin{equation} \label{e.wmjhomog3pre}
-\nabla \cdot \left( \mathbf{a}_p \nabla w_{n} \right)
=
\nabla \cdot \mathbf{F}_n(p + \nabla \phi_p,\nabla w_1,\ldots, \nabla w_{n-1},\cdot)
\quad \mbox{in} \ \mathbb{R}^d
\end{equation}
and, for all $j \in \mathbb{N}$,
\begin{equation} \label{e.wmjhomog3}
\left\| w_{n} - \widetilde w_{n,j} \right\|_{\underline{L}^2 \left( B_{r_{j}} \right)}
\leq
C r_j^{1-\delta}
\left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}} .
\end{equation}
We now use the facts that $(\overline{w}_1,\ldots,\overline{w}_n) \in \overline{\mathsf{W}}_n^{p} $ and $\phi_{n,h} \in \mathcal{A}_{n}^p$, together with~\eqref{e.znjest} and~\eqref{e.znjest2}, to deduce that
\begin{align} \notag
\sum_{h=1}^{j-1} \left\| \phi_{n,h} \right\|_{\underline{L}^2 \left( B_{r_j} \right)}
&
\leq
\sum_{h=1}^{j-1} \left( \frac{r_j}{r_h} \right)^n \left\| \phi_{n,h} \right\|_{\underline{L}^2 \left( B_{r_h} \right)}
\\ \notag &
\leq
\sum_{h=1}^{j-1}
\left( \frac{r_j}{r_h} \right)^n
\left( \left\| z_{n,h} \right\|_{\underline{L}^2 \left( B_{r_h} \right)} +
\left\| z_{n,h} - \phi_{n,h} \right\|_{\underline{L}^2 \left( B_{r_h} \right)}
\right)
\\ \notag &
\leq
C \sum_{h=1}^{j-1} \left( \frac{r_j}{r_h} \right)^n r_h^{1-\delta}
\left(\frac{r_h}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
\\ \notag &
\leq
C r_j^{1-\delta}
\left(\frac{r_j}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{align}
Combining this with~\eqref{e.wmjhomog3} yields that $w_n$ satisfies~\eqref{e.liouvillec}. Moreover, obviously $(w_1,\ldots,w_n) \in \mathsf{W}_n^p$. The proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t.regularityhigher} $\mathrm{(i)}_n$] Let $\mathcal{X}$ be as in the beginning of the proof of $\mathrm{(ii)}_n$. Fix $R \geq \mathcal{X}$. We proceed via induction. Assume that $(w_1,\ldots,w_{n-1})$ satisfies~\eqref{e.liouvillec}, that is, there exists $(\overline{w}_1,\ldots,\overline{w}_{n-1}) \in \overline{\mathsf{W}}_{n-1}^p$ such that, for $k \in \{1,\ldots,n-1\}$ and~$t \geq \mathcal{X}$,
\begin{equation} \label{e.liouvillec2}
\left\| w_k - \overline{w}_k \right\|_{\underline{L}^2(B_t)}
\leq
C t^{1-\delta}
\left(\frac{t}{\mathcal{X}}\right)^k \sum_{i=1}^{k}
\left(
\frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{k}{i}}
.
\end{equation}
Since $(\overline{w}_1,\ldots,\overline{w}_{n-1}) \in \mathcal{P}_2 \times \ldots \times \mathcal{P}_{n}$, we have by homogeneity that
\begin{equation*}
\overline{\mathbf{F}}(p,\nabla\overline{w}_1,\ldots,\nabla\overline{w}_{n-1}) \in \mathcal{P}_{n}.
\end{equation*}
By Remark~\ref{r.barwregtrivial} below, we find a solution $\overline{w} \in \mathcal{P}_{n+1}$ such that $(\overline{w}_1,\ldots,\overline{w}_{n-1},\overline{w}) \in \overline{\mathsf{W}}_{n}^p$ and such that there exists a constant $\mathsf{C}(n,d,\Lambda)< \infty$ for which, for every $t \geq \mathcal{X}$,
\begin{equation} \label{e.barwregtrivial}
\left\| \overline{w} \right\|_{\underline{L}^2 \left( B_{t} \right)}
\leq
\mathsf{C} t
\left(\frac{t}{\mathcal{X}}\right)^{n} \sum_{i=1}^{n-1}
\left(
\frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{equation}
Consequently, Remark~\ref{r.regularityhigher1} provides us with $\widetilde w$ such that $(w_1,\ldots,w_{n-1},\widetilde w) \in \mathsf{W}_{n}^p$ and, for~$t\geq \mathcal{X}$,
\begin{equation} \label{e.liouvillec3}
\left\| \widetilde w - \overline{w} \right\|_{\underline{L}^2(B_t)}
\leq
C \mathsf{C}
t^{1-\alpha}
\left(\frac{t}{\mathcal{X}}\right)^n \sum_{i=1}^{n-1}
\left(
\frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{equation}
Moreover, by the equations and the growth condition at infinity, $w_n - \widetilde w \in \mathcal{A}_{n+1}^p$. Therefore, by~\cite[Theorem 5.2]{AFK}, there is $q \in \overline{\mathcal{A}}_{n+1}^p$ such that, for all $t \geq \mathcal{X}$,
\begin{equation*}
\left\| w_n - \widetilde w - q \right\|_{\underline{L}^2(B_t)} \leq C t^{-\delta}\left\| q \right\|_{\underline{L}^2 \left( B_{t} \right)} .
\end{equation*}
We set $\overline{w}_n := \overline{w} + q$. Obviously, $(\overline{w}_1,\ldots,\overline{w}_{n-1},\overline{w}_n) \in \overline{\mathsf{W}}_{n}^p$ since $q$ is ${\overbracket[1pt][-1pt]{\mathbf{a}}}_p$-harmonic. By~\eqref{e.barwregtrivial} and the triangle inequality we have, for $t\geq \mathcal{X}$, that
\begin{multline*}
t
\left(\frac{t}{\mathcal{X}}\right)^n \sum_{i=1}^{n-1}
\left(
\frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
\leq
\frac1{2\mathsf{C}} \left\| q \right\|_{\underline{L}^2 \left( B_{t} \right)}
\\ \; \implies \;
\left\| q \right\|_{\underline{L}^2 \left( B_{t} \right)} + \left\| \overline{w} \right\|_{\underline{L}^2 \left( B_{t} \right)}
\leq
3 \left\| \overline{w}_n \right\|_{\underline{L}^2 \left( B_{t} \right)} \leq C \left( \frac{t}{\mathcal{X}} \right)^{n+1} \left\| \overline{w}_n \right\|_{\underline{L}^2 \left( B_{\mathcal{X}} \right)}
\end{multline*}
and
\begin{multline*}
\left\| q \right\|_{\underline{L}^2 \left( B_{t} \right)}
\leq
2\mathsf{C} t
\left(\frac{t}{\mathcal{X}}\right)^n \sum_{i=1}^{n-1}
\left(
\frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
\\
\; \implies \;
\left\| q \right\|_{\underline{L}^2 \left( B_{t} \right)} + \left\| \overline{w} \right\|_{\underline{L}^2 \left( B_{t} \right)}
\leq
3\mathsf{C}
t
\left(\frac{t}{\mathcal{X}}\right)^n \sum_{i=1}^{n-1}
\left(
\frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
.
\end{multline*}
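Let us record explicitly the consequence of this dichotomy that is used next: in either of the two cases above we have, for $t \geq \mathcal{X}$,
\begin{equation*}
\left\| q \right\|_{\underline{L}^2 \left( B_{t} \right)} + \left\| \overline{w} \right\|_{\underline{L}^2 \left( B_{t} \right)}
\leq
C t \left(\frac{t}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left( \frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})} \right)^{\frac{n}{i}} ,
\end{equation*}
since in the first case $\left( \frac{t}{\mathcal{X}} \right)^{n+1} \left\| \overline{w}_n \right\|_{\underline{L}^2(B_\mathcal{X})} = t \left( \frac{t}{\mathcal{X}} \right)^{n} \frac1{\mathcal{X}} \left\| \overline{w}_n \right\|_{\underline{L}^2(B_\mathcal{X})}$, which is the $i=n$ summand on the right.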
We thus have by the triangle inequality that
\begin{align} \notag
\left\| w_n - \overline{w}_n \right\|_{\underline{L}^2(B_t)}
&
\leq
\left\| w_n - \widetilde w - q \right\|_{\underline{L}^2(B_t)} + \left\| \widetilde w - \overline{w} \right\|_{\underline{L}^2(B_t)}
\\ \notag &
\leq
C t^{-\delta \wedge \alpha}
\left(
\left\| q \right\|_{\underline{L}^2 \left( B_{t} \right)} + \left\| \overline{w} \right\|_{\underline{L}^2(B_t)}
+ t\left(\frac{t}{\mathcal{X}}\right)^n \sum_{i=1}^{n-1} \left( \frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})} \right)^{\frac{n}{i}}
\right)
\\ \notag &
\leq
Ct^{1-\delta \wedge \alpha}
\left(\frac{t}{\mathcal{X}}\right)^n \sum_{i=1}^{n}
\left(
\frac{1}{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_\mathcal{X})}
\right)^{\frac{n}{i}}
,
\end{align}
proving the induction step and finishing the proof of~$\mathrm{(i)}_n$.
\end{proof}
\begin{remark} \label{r.barwregtrivial}
We show that there is $\overline{w}$ such that~\eqref{e.barwregtrivial} is valid. Indeed, letting~$\overline{v}$ solve
\begin{equation} \label{e.barwregtrivial2}
\left\{
\begin{aligned}
& -\nabla \cdot {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \nabla \overline{v} = \nabla \cdot \overline{\mathbf{F}}(p,\nabla\overline{w}_1,\ldots,\nabla\overline{w}_{n-1})
& \mbox{in} & \ B_{R},\\
& \overline{v} = 0 & \mbox{on} & \ \partial B_{R},
\end{aligned}
\right.
\end{equation}
and using the fact that $\overline{\mathbf{F}}(p,\nabla\overline{w}_1,\ldots,\nabla\overline{w}_{n-1})$ is a polynomial of degree $n$,
it is straightforward to show by homogeneity that
\begin{equation*}
\overline{w}(x) = \sum_{m=2}^{n+1} \frac{1}{m!} \nabla^m \overline{v}(0) x^{\otimes m}
\end{equation*}
solves
\begin{equation*}
-\nabla \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \nabla \overline{w}\right) = \nabla \cdot \overline{\mathbf{F}}(p,\nabla\overline{w}_1,\ldots,\nabla\overline{w}_{n-1}) \quad \mbox{in } \; \mathbb{R}^d .
\end{equation*}
We have the estimate
\begin{equation*}
\left| \nabla^{m+1} \overline{v}(0) \right| \leq C \sum_{k=0}^{m+1} R^{k - m} \left\| \nabla^k \overline{\mathbf{F}}(p,\nabla\overline{w}_1,\ldots,\nabla\overline{w}_{n-1}) \right\|_{\underline{L}^2 \left( B_{R} \right)}
\end{equation*}
for all $m \in \{1,\ldots,n\}$, and by the equations of $\overline{w}_1,\ldots,\overline{w}_{n-1}$ we see that
\begin{equation*}
\left\| \nabla^k \overline{\mathbf{F}}(p,\nabla\overline{w}_1,\ldots,\nabla\overline{w}_{n-1}) \right\|_{\underline{L}^2 \left( B_{R} \right)}
\leq C R^{-k} \sum_{i=1}^{n-1} \left( \frac1R \left\| \overline{w}_i \right\|_{\underline{L}^2 \left( B_{R} \right)} \right)^{\frac ni} .
\end{equation*}
Therefore, for all $R>0$ and $m \in \{1,\ldots,n\}$,
\begin{equation*}
\left| \nabla^{m+1} \overline{v}(0) \right|
\leq
C R^{-m} \sum_{i=1}^{n-1} \left( \frac1R \left\| \overline{w}_i \right\|_{\underline{L}^2 \left( B_{R} \right)} \right)^{\frac ni} .
\end{equation*}
Thus we have that, for $t \geq R$,
\begin{equation*}
\left\| \nabla \overline{w} \right\|_{\underline{L}^2 \left( B_{t} \right)} \leq C \left( \frac{t}{R} \right)^{n} \sum_{i=1}^{n-1} \left( \frac1R \left\| \overline{w}_i \right\|_{\underline{L}^2 \left( B_{R} \right)} \right)^{\frac ni} ,
\end{equation*}
which yields~\eqref{e.barwregtrivial}.
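To pass from the gradient bound to~\eqref{e.barwregtrivial} itself, note that $\overline{w}$ is a polynomial of degree at most $n+1$ vanishing at the origin, so that, by equivalence of norms on this finite-dimensional space and scaling,
\begin{equation*}
\left\| \overline{w} \right\|_{\underline{L}^2 \left( B_{t} \right)} \leq C(n,d)\, t \left\| \nabla \overline{w} \right\|_{\underline{L}^2 \left( B_{t} \right)} .
\end{equation*}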
\end{remark}
We now turn to the proof of the large-scale $C^{n,1}$ estimate.
\begin{proof}[Proof of Theorem~\ref{t.regularityhigher} $\mathrm{(iii)}_n$]
Fix $\mathsf{M} \in [1,\infty)$. By Theorem~\ref{t.regularity.linerrors} there exist constants $\sigma(n,\mathsf{M},\mathrm{data}) \in (0,1)$ and $C(n,\mathsf{M},\mathrm{data})<\infty$ and a random variable $\mathcal{X}$ satisfying $\mathcal{X} \leq \mathcal{O}_\sigma(C)$ so that the statement of Theorem~\ref{t.regularity.linerrors} is valid with $q = 2(n+1)$. We now divide the proof into three steps.
\smallskip
\emph{Step 1.}
Induction assumption. Assume $\mathrm{(iii)}_{n-1}$. Consequently, there are $p \in B_{C}$ and a tuple $(w_1,\ldots,w_{n-1})$ such that, setting, for $k \in \{0,\ldots,n-1\}$,
\begin{equation*}
\xi_k(x) := v(x) - p \cdot x - \phi_p(x) - \sum_{i=1}^{k} \frac{w_i(x)}{i!} , \qquad \xi_0(x) := v(x) - p \cdot x - \phi_p(x),
\end{equation*}
there exists $C(n,\mathsf{M},\mathrm{data})$ such that, for every $m \in \{0,\ldots,n-1\}$ and every $r \in [\mathcal{X}, \frac12 (1+ 2^{-m-2})R]$,
\begin{equation}
\label{e.intrinsicregnminusone1}
\left\| \nabla \xi_m \right\|_{\underline{L}^{2}(B_r)}
\leq
C \left( \frac r R \right)^{m+1}
\left( \mathsf{H}_m^{m+1} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)} \right) ,
\end{equation}
where we denote, in short,
\begin{equation*}
\mathsf{H}_m
:=
\sum_{i=0}^{m}
\left( \frac1R
\left\| \xi_i - (\xi_i)_{B_R} \right\|_{\underline{L}^2(B_{R})}
\right)^{\frac{1}{i+1}} .
\end{equation*}
Our goal is to show that~\eqref{e.intrinsicregnminusone1} continues to hold with $m=n$ and for every $r \in [\mathcal{X}, \frac12 (1+ 2^{-n-2})R]$. The base case $n=1$ is valid since, by~\cite[Theorem 1.3]{AFK}, there is $ p \in B_C$ such that, for all $r \in [\mathcal{X}, \frac 34 R]$,
\begin{equation} \label{e.intrinsicregnminusonebbase}
\left\| \nabla \xi_0 \right\|_{\underline{L}^2 \left( B_{r} \right)} \leq C\left( \frac rR \right) \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}.
\end{equation}
\smallskip
\emph{Step 2.} Construction of a special solution. We construct a solution $\widetilde w_n$ of
\begin{equation} \label{e.tildewneq}
- \nabla \cdot \left( \mathbf{a}_p \nabla \widetilde w_n \right)
=
\nabla \cdot \mathbf{F}_n \left( p+ \nabla \phi_p , \nabla w_1,\ldots, \nabla w_{n-1} ,\cdot \right) \quad \mbox{in } \, \mathbb{R}^d
\end{equation}
satisfying, for $r \geq \mathcal{X}$,
\begin{equation} \label{e.tildewnest}
\left\| \nabla \widetilde w_n \right\|_{\underline{L}^{2(n+1)}(B_r)}
\leq
C \left( \frac r R \right)^{n} \left( \mathsf{H}_{n-1}^{n} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right) .
\end{equation}
To show this, it first follows by~\eqref{e.intrinsicregnminusone1} and the triangle inequality that, for $m \in \{1,\ldots,n-1\}$,
\begin{equation*}
\left\| \nabla w_m \right\|_{\underline{L}^{2}(B_r)}
\leq
C \left( \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)^{\frac{1}{m+1}} \left\| \nabla \xi_m \right\|_{\underline{L}^{2}(B_r)}^{\frac{m}{m+1}} + \left\| \nabla \xi_{m-1} \right\|_{\underline{L}^{2}(B_r)}
.
\end{equation*}
Since we have Theorem~\ref{t.regularity.linerrors} at our disposal with $q = 2(n+1)$, we can increase the integrability and obtain by~\eqref{e.C01linsols} and~\eqref{e.intrinsicregnminusone1} that, for $r \in [\mathcal{X}, \frac12 (1+ \frac38 2^{-n})R]$ and $m \in \{1,\ldots,n-1\}$,
\begin{equation}
\label{e.intrinsicregnminusone3n}
\left\| \nabla w_m \right\|_{\underline{L}^{2(n+1)}(B_r)}
\leq
C \left( \frac r R \right)^{m}
\left( \mathsf{H}_{m}^{m} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{equation}
Consequently, by~\eqref{e.C01linerror} and~\eqref{e.intrinsicregnminusone3n}, we also get, for $m \in \{0,\ldots,n-1\}$ and $r \in [\mathcal{X}, \frac12 (1+\frac3{16} 2^{-m})R]$, that
\begin{equation}
\label{e.intrinsicregnminusone1n}
\left\| \nabla \xi_m \right\|_{\underline{L}^{2(n+1)}(B_r)}
\leq
C \left( \frac r R \right)^{m+1} \left( \mathsf{H}_{m}^{m+1} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{equation}
Next, Theorem~\ref{t.regularityhigher}$\mathrm{(i)}_{n-1}$ yields that we find $(\overline{w}_1,\ldots,\overline{w}_{n-1}) \in \overline{\mathsf{W}}_{n-1}^{p}$ such that, for $m \in \{1,\ldots,n-1\}$,
\begin{equation} \label{e.liouvillec.applied1}
\frac1{\mathcal{X}} \left\| w_m - \overline{w}_m \right\|_{\underline{L}^2(B_\mathcal{X})} \leq C \mathcal{X}^{-\delta} \sum_{i=1}^m \left( \frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_{\mathcal{X}})}\right)^{\frac{m}{i}} .
\end{equation}
In particular, applying this inductively, assuming that the lower bound $\mathsf{R}$ for $\mathcal{X}$ is such that $C \mathsf{R}^{-\delta} \leq \frac12$, we deduce by~\eqref{e.intrinsicregnminusone3n} that, for $k \in \{1,\ldots,n-1\}$,
\begin{equation} \label{e.liouvillec.applied2}
\left\| \nabla \overline{w}_k \right\|_{\underline{L}^2(B_\mathcal{X})}
\leq
C \left( \frac {\mathcal{X}}{R} \right)^{k}
\left( \mathsf{H}_{k}^{k} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{equation}
By Remark~\ref{r.barwregtrivial} we find a solution $\overline{w}_n$ of
\begin{equation*}
-\nabla \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}}_p \nabla \overline{w}_n \right)
=
\nabla \cdot \overline{\mathbf{F}}(p,\nabla\overline{w}_1,\ldots,\nabla\overline{w}_{n-1})
\end{equation*}
satisfying, for $r \geq \mathcal{X}$,
\begin{equation*}
\left\| \nabla \overline{w}_n \right\|_{\underline{L}^2 \left( B_{r} \right)}
\leq
C \left( \frac{r}{\mathcal{X}} \right)^{n} \sum_{i=1}^{n-1} \left( \left\| \nabla \overline{w}_i \right\|_{\underline{L}^2 \left( B_{\mathcal{X}} \right)} \right)^{\frac ni}
.
\end{equation*}
In view of~\eqref{e.liouvillec.applied2} this yields, for $r \geq \mathcal{X}$, that
\begin{equation} \label{e.liouvillec.applied3}
\left\| \nabla \overline{w}_n \right\|_{\underline{L}^2 \left( B_{r} \right)}
\leq
C \left( \frac{r}{R} \right)^{n}
\left( \mathsf{H}_{n-1}^{n} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{equation}
By Remark~\ref{r.regularityhigher1} we then find $\widetilde w_n$ solving~\eqref{e.tildewneq} such that, for $r\geq \mathcal{X}$,
\begin{equation} \label{e.liouvillec.applied4}
\left\| \widetilde w_n - \overline{w}_n \right\|_{\underline{L}^2(B_r)} \leq C r^{1-\delta} \left( \frac{r}{\mathcal{X}} \right)^{n} \sum_{i=1}^n \left( \frac1{\mathcal{X}} \left\| \overline{w}_i \right\|_{\underline{L}^2(B_{\mathcal{X}})}\right)^{\frac{n}{i}} .
\end{equation}
Now~\eqref{e.tildewnest} follows by~\eqref{e.liouvillec.applied2},~\eqref{e.liouvillec.applied3} and~\eqref{e.liouvillec.applied4} together with Theorem~\ref{t.regularity.linerrors}.
\smallskip
\emph{Step 3.} We show the induction step, that is, we validate~\eqref{e.intrinsicregnminusone1} for $m=n$ and $r \in [\mathcal{X}, \frac12 (1+ 2^{-n-2})R]$.
Denote~$\widetilde \xi_n := \xi_{n-1} - \frac1{n!} \widetilde w_n$. We begin by deducing an estimate for~$\widetilde \xi_n$. Appendix~\ref{app.linerrors} tells us that~$\widetilde \xi_n$ solves the equation
\begin{equation*}
- \nabla \cdot \left( \mathbf{a}_p \nabla \widetilde \xi_{n} \right)
= \nabla \cdot \mathbf{E}_{n}
\quad \mbox{in} \ B_{R}
\end{equation*}
and that there exists a constant $C(n,\mathsf{M},\mathrm{data})<\infty$ such that
\begin{equation} \label{e.Enapplied}
\left\| \mathbf{E}_{n} \right\|_{\underline{L}^{2} \left( B_{r} \right)}
\leq
C \sum_{i=0}^{n-1} \left\| \nabla \xi_{i} \right\|_{\underline{L}^{2(n+1)} \left( B_{r} \right)}^{\frac{n+1}{i+1}}
+
C \sum_{i=1}^{n-1} \left\| \nabla w_i \right\|_{\underline{L}^{2(n+1)} (B_r)}^{\frac{n+1}{i}}
.
\end{equation}
By~\eqref{e.intrinsicregnminusone1n} and~\eqref{e.intrinsicregnminusone3n} we then obtain, for $r \in [\mathcal{X}, \mathbf{m}athbf{f}rac12 (1+\mathbf{m}athbf{f}rac38 2^{-n})R]$, that
\mathbf{b}etagin{equation} \label{e.intrinsicregnminusone4}
\left\| \mathbf{m}athbf{E}_{n} \right\|_{\underlinederline{L}^{2} \left( B_{r} \right)}
\leq
C \left( \mathbf{m}athbf{f}rac r R \right)^{n+1}
\left( \mathbf{m}athsf{H}_{n-1}^{n+1} \wedge \inf_{\phi \in \mathbf{m}athcal{L}_1} \mathbf{m}athbf{f}rac1R \left\| v - \phi \right\|_{\underlinederline{L}^2 \left( B_{R} \right)}\right)
.
\end{equation}
\smallskip
Next, set, for $j\in \mathbb{N}_0$, $r_j := \frac12 \theta^j (1+2^{-n-2})R$, where $\theta(n,\mathsf{M},\mathrm{data}) \in \left(0,\frac12 \right]$ will be fixed shortly.
Let $\phi_0 \in \mathcal{A}_{n+2}^p$ and, for given $\phi_{j} \in \mathcal{A}_{n+2}^p$, let $h_{j}$ solve
\begin{equation}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \mathbf{a}_p \nabla h_j \right) = 0
& \mbox{in} & \ B_{r_{j}},\\
& h_j = \widetilde \xi_n - \phi_{j} & \mbox{on} & \ \partial B_{r_{j}}.
\end{aligned}
\right.
\end{equation}
Testing this equation and applying~\eqref{e.intrinsicregnminusone4}, we get
\begin{equation} \label{e.intrinsicregnminusone5}
\left\| \nabla \widetilde \xi_n - \nabla \phi_{j} - \nabla h_j \right\|_{\underline{L}^{2} \left( B_{r_{j}} \right)}
\leq
C \left( \frac{r_j}{R} \right)^{n+1}
\left( \mathsf{H}_{n-1}^{n+1} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{equation}
Furthermore, by~\cite[Theorem 5.2]{AFK}, there is $\widetilde \phi_{j+1} \in \mathcal{A}_{n+2}^p$ such that
\begin{equation*}
\left\| \nabla h_j - \nabla \widetilde \phi_{j+1} \right\|_{\underline{L}^{2}(B_{r_{j+1}})}
\leq
C \theta^{n+2} \left\| \nabla h_j - \nabla \phi_{j} \right\|_{\underline{L}^{2}(B_{r_{j}})}
.
\end{equation*}
Combining the two previous displays, we have by the triangle inequality, for $\phi_{j+1} := \widetilde \phi_{j+1} + \phi_{j} \in \mathcal{A}_{n+2}^p$, that
\begin{align} \notag
\left\| \nabla \widetilde \xi_n- \nabla \phi_{j+1} \right\|_{\underline{L}^{2} (B_{r_{j+1}})}
&
\leq
C \theta^{n+2} \left\| \nabla \widetilde \xi_n - \nabla \phi_{j} \right\|_{\underline{L}^{2}(B_{r_{j}})}
\\ \notag & \quad
+
C \theta^{-\frac d2} \left( \frac{r_j}{R} \right)^{n+1} \left( \mathsf{H}_{n-1}^{n+1} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{align}
Choosing $\theta$ so that $C\theta^{1/2} = 1$, we thus arrive at
\begin{align} \notag
\lefteqn{ \frac{1}{r_{j+1}^{n+1}}
\left\| \nabla \widetilde \xi_n - \nabla \phi_{j+1} \right\|_{\underline{L}^{2} (B_{r_{j+1}})}} \quad &
\\ \notag &
\leq
\frac{ \theta^{1/2} }{r_{j}^{n+1}} \left\| \nabla \widetilde \xi_n - \nabla \phi_{j} \right\|_{\underline{L}^{2}(B_{r_{j}})}
+
\frac{C}{R^{n+1}} \left( \mathsf{H}_{n-1}^{n+1} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{align}
It follows by iteration that, for $r \in [\mathcal{X}, \frac12 (1+ 2^{-n-2})R]$,
\begin{align} \notag
\inf_{\phi \in \mathcal{A}_{n+2}^p } \left\| \nabla \widetilde \xi_n - \nabla \phi \right\|_{\underline{L}^{2} (B_{r}) }
&
\leq
C\left( \frac rR \right)^{n+3/2} \inf_{\phi \in \mathcal{A}_{n+2}^p } \left\| \nabla \widetilde \xi_n - \nabla \phi \right\|_{\underline{L}^{2} (B_{r_0}) }
\\ \notag & \quad
+ C\left( \frac rR \right)^{n+3/2} \left( \mathsf{H}_{n-1}^{n+1} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{align}
We can now revisit~\cite[Proof of (3.49)]{AKMbook} and obtain that there exists $\phi \in \mathcal{A}_{n+1}^p$ such that, taking $w_n := \widetilde w_n + \phi$ and $\xi_n := \xi_{n-1} - \frac1{n!} w_n$,
we get, for all $r \in [\mathcal{X}, \frac12 (1+2^{-n-2})R]$,
\begin{equation}
\left\| \nabla \xi_n \right\|_{\underline{L}^{2} (B_{r}) }
\leq
C \left( \frac rR \right)^{n+1}
\left( \mathsf{H}_{n}^{n+1} \wedge \inf_{\phi \in \mathcal{L}_1} \frac1R \left\| v - \phi \right\|_{\underline{L}^2 \left( B_{R} \right)}\right)
.
\end{equation}
Since $\phi \in \mathcal{A}_{n+1}^p$, we see that $(w_1,\ldots,w_n) \in \mathsf{W}_n^p$. Now~\eqref{e.intrinsicregnminusone1} follows for $k=n$ by the previous inequality together with the Caccioppoli estimate and~\eqref{e.intrinsicregnminusone4}. The proof is complete.
\end{proof}
\appendix
\section{Deterministic regularity estimates}
\label{app.CZ}
In this first appendix, we record some deterministic regularity estimates of Schauder and Calder\'on-Zygmund type for linear equations with H\"older continuous coefficients. These estimates, while well-known, are not typically written with explicit dependence on the regularity of the coefficients, which is needed for our purposes in this paper.
\begin{proposition}[{Calder\'on-Zygmund gradient $L^q$ estimates}]
\label{p.gradientLq}
Let~$\beta \in (0,1]$, $q\in [2,\infty)$ and~$\mathbf{a}\colon B_2 \to \mathbb{R}^{d\times d}$ be symmetric with entries in~$C^{0,\beta}(B_2)$, satisfying
\begin{equation*}
I_d \leq \mathbf{a}(x) \leq \Lambda I_d, \quad \forall x\in B_2.
\end{equation*}
Suppose~$\mathbf{f} \in L^q(B_2;\mathbb{R}^d)$ and~$u\in H^1(B_2)$ is a solution of
\begin{equation*}
-\nabla \cdot \left( \mathbf{a}\nabla u \right) = \nabla\cdot \mathbf{f} \quad \mbox{in} \ B_2.
\end{equation*}
Then $u \in W^{1,q}_{\mathrm{loc}}(B_2)$ and there exists $C(q,d,\Lambda)<\infty$ such that
\begin{equation}
\label{e.CZ}
\left\| \nabla u \right\|_{L^q(B_1)}
\leq
C \exp\left( \tfrac C\beta\left(1 - \tfrac2q\right) \right)
\left( 1+ \left[ \mathbf{a} \right]_{C^{0,\beta}(B_2)}^{\frac{d}{2\beta}\left( 1 - \frac{2}{q}\right)} \right)
\left\| \nabla u \right\|_{L^2(B_2)}
+
C \left\| \mathbf{f} \right\|_{L^q(B_2)}
.
\end{equation}
\end{proposition}
\begin{proof}
We will explain how to extract the statement of the proposition from that of~\cite[Proposition 7.3]{AKMbook}. The latter asserts the existence of $\delta_0(q,d,\Lambda)>0$ such that, for every $x\in B_1$ and $r\in \left( 0,\tfrac 12\right]$ satisfying
\begin{equation*}
\osc_{B_{2r}(x)} \mathbf{a} \leq \delta_0,
\end{equation*}
we have, for a constant $C(q,d,\Lambda)<\infty$, the estimate
\begin{equation*}
\left\| \nabla u \right\|_{\underline{L}^q(B_r(x))}
\leq
C \left( \left\| \nabla u \right\|_{\underline{L}^2(B_{2r}(x))} + \left\| \mathbf{f} \right\|_{\underline{L}^q(B_{2r}(x))} \right) .
\end{equation*}
Since $\osc_{B_{2r}(x)} \mathbf{a} \leq (2r)^\beta \left[ \mathbf{a} \right]_{C^{0,\beta}(B_2)}$, we have the above estimate for every $x\in B_1$ and
\begin{equation*}
r:= \frac12\wedge \frac12 \left( \delta_0 \left[ \mathbf{a} \right]_{C^{0,\beta}(B_2)}^{-1} \right)^{\frac1\beta}.
\end{equation*}
From this, Fubini's theorem and Young's inequality for convolutions, we obtain
\begin{align*}
\left\| \nabla u \right\|_{L^q(B_1)}^q
&
\leq
C \int_{B_{1}} \left( \left|\nabla u \right|^q \ast \left( \frac1{|B_r|} \mathds{1}_{B_r} \right) \right) (x)\,dx
\\ &
=
C \int_{B_{1}} \fint_{B_r(x)} \left|\nabla u(y) \right|^q \,dy\,dx
\\ &
\leq
C \int_{B_{1}}
\left(
\fint_{B_{2r}(x)} \left| \nabla u(y) \right|^2 \, dy
\right)^{\frac q2}\,dx
+
C \int_{B_{1}} \fint_{B_{2r}(x)} \left| \mathbf{f}(y) \right|^q\,dy\,dx
\\ &
\leq
C \int_{B_{1}} \left| \left( \left| \nabla u \right|^2 \ast
\left( \frac1{|B_{2r}|} \mathds{1}_{B_{2r}} \right) \right) (x) \right|^{\frac q2} \,dx
+
C \left\| \mathbf{f} \right\|_{L^q(B_2)}^q
\\ &
\leq
C \left\| \nabla u \right\|_{L^2(B_2)}^q
\left\| \frac1{|B_{2r}|} \mathds{1}_{B_{2r}} \right\|_{L^{q/2}(B_2)}^{\frac q2}
+
C \left\| \mathbf{f} \right\|_{L^q(B_2)}^q
\\ &
= C r^{-d\left(\frac q2-1\right)} \left\| \nabla u \right\|_{L^2(B_2)}^q
+
C \left\| \mathbf{f} \right\|_{L^q(B_2)}^q.
\end{align*}
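To pass from the last display to the prefactor in~\eqref{e.CZ}, we record the routine computation, included here for the reader's convenience: by the choice of $r$ above, with $C = C(q,d,\Lambda)$,
\begin{equation*}
r^{-d\left(\frac12-\frac1q\right)}
\leq
\left( 2 \vee 2\delta_0^{-\frac1\beta} \right)^{d\left(\frac12-\frac1q\right)}
\left( 1 + \left[ \mathbf{a} \right]_{C^{0,\beta}(B_2)}^{\frac{d}{2\beta}\left(1-\frac2q\right)} \right)
\leq
C \exp\left( \tfrac C\beta\left(1-\tfrac2q\right) \right)
\left( 1 + \left[ \mathbf{a} \right]_{C^{0,\beta}(B_2)}^{\frac{d}{2\beta}\left(1-\frac2q\right)} \right),
\end{equation*}
so that taking $q$th roots in the previous display yields~\eqref{e.CZ}.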
This completes the proof.
\end{proof}
\begin{proposition}
\label{p.schauder}
Let~$\beta \in (0,1)$ and~$\mathbf{a}\colon B_2 \to \mathbb{R}^{d\times d}$ be symmetric with entries in~$C^{0,\beta}(B_2)$, satisfying
\begin{equation*}
I_d \leq \mathbf{a}(x) \leq \Lambda I_d, \quad \forall x\in B_2.
\end{equation*}
Suppose~$\mathbf{f} \in C^{0,\beta} (B_2;\mathbb{R}^d)$ and~$u\in H^1(B_2)$ is a solution of
\begin{equation*}
-\nabla \cdot \left( \mathbf{a}\nabla u \right) = \nabla\cdot \mathbf{f} \quad \mbox{in} \ B_2.
\end{equation*}
Then $u \in C^{1,\beta}_{\mathrm{loc}}(B_2)$ and there exists $C(\beta,d,\Lambda)<\infty$ such that
\begin{equation}
\label{e.schauder1}
\left\| \nabla u \right\|_{L^\infty(B_1)}
\leq
C \left( 1 + \left[ \mathbf{a} \right]_{C^{0,\beta}(B_2)}^{\frac d{2\beta}} \right)
\left\| \nabla u \right\|_{L^2(B_2)}
+
C \left[ \mathbf{f} \right]_{C^{0,\beta}(B_{2})}
\end{equation}
and
\begin{equation}
\label{e.schauder2}
\left[ \nabla u \right]_{C^{0,\beta}(B_1)}
\leq
C\left( 1 + \left[ \mathbf{a} \right]_{C^{0,\beta}(B_2)}^{1+\frac d{2\beta}} \right) \left\| \nabla u \right\|_{L^2(B_2)}
+
C\left[ \mathbf{f} \right]_{C^{0,\beta}(B_{2})}.
\end{equation}
\end{proposition}
\begin{proof}
We will explain how to extract the statement of the proposition from the gradient H\"older estimate found in~\cite[Theorem 3.13]{HL}. The latter states that, under the assumption that
\begin{equation*}
\left[ \mathbf{a} \right]_{C^{0,\beta}(B_2)} \leq 1,
\end{equation*}
there exists $C(\beta,d,\Lambda)<\infty$ such that
\begin{equation*}
\left\| \nabla u \right\|_{C^{0,\beta}(B_1)}
\leq
C\left( \left\| \nabla u \right\|_{L^2(B_2)} + \left[ \mathbf{f} \right]_{C^{0,\beta}(B_2)} \right).
\end{equation*}
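The change of scale used next is a routine verification, which we sketch for the reader's convenience: for $r \in \left(0,\tfrac12\right]$, set
\begin{equation*}
u_r(x) := \tfrac1r u(rx), \qquad \mathbf{a}_r(x) := \mathbf{a}(rx), \qquad \mathbf{f}_r(x) := \mathbf{f}(rx),
\end{equation*}
so that $\nabla u_r(x) = \nabla u(rx)$, the function $u_r$ solves $-\nabla\cdot\left( \mathbf{a}_r \nabla u_r \right) = \nabla\cdot \mathbf{f}_r$ in $B_2$, and
\begin{equation*}
\left[ \mathbf{a}_r \right]_{C^{0,\beta}(B_2)} = r^\beta \left[ \mathbf{a} \right]_{C^{0,\beta}(B_{2r})},
\qquad
\left\| \nabla u_r \right\|_{L^2(B_2)} = |B_2|^{\frac12} \left\| \nabla u \right\|_{\underline{L}^2(B_{2r})},
\qquad
\left[ \nabla u_r \right]_{C^{0,\beta}(B_1)} = r^\beta \left[ \nabla u \right]_{C^{0,\beta}(B_{r})},
\end{equation*}
and similarly $\left[ \mathbf{f}_r \right]_{C^{0,\beta}(B_2)} = r^\beta \left[ \mathbf{f} \right]_{C^{0,\beta}(B_{2r})}$ and $\left\| \nabla u_r \right\|_{L^\infty(B_1)} = \left\| \nabla u \right\|_{L^\infty(B_r)}$.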
After changing the scale, we obtain the corresponding statement in $B_r$, which asserts that, under the assumption that
\begin{equation*}
r^{\beta} \left[ \mathbf{a} \right]_{C^{0,\beta} (B_{2r})} \leq 1,
\end{equation*}
there exists $C(\beta,d,\Lambda)<\infty$ such that
\begin{equation*}
\left\| \nabla u \right\|_{L^\infty(B_r)}
+ r^{\beta} \left[ \nabla u \right]_{C^{0,\beta}(B_r)}
\leq
C\left( \left\| \nabla u \right\|_{\underline{L}^2(B_{2r})} + r^{\beta} \left[ \mathbf{f} \right]_{C^{0,\beta}(B_{2r})} \right).
\end{equation*}
Therefore we take~$r:= \frac12\wedge \left[ \mathbf{a} \right]_{C^{0,\beta} (B_{2})}^{-\frac1\beta}$ and apply the previous statement in every ball~$B_r(x)$ with $x\in B_1$ to obtain
\begin{align*}
\left\| \nabla u \right\|_{L^\infty(B_1)}
+ \sup_{x\in B_1} r^{\beta} \left[ \nabla u \right]_{C^{0,\beta}(B_r(x))}
&
\leq
C\sup_{x\in B_1}\left( \left\| \nabla u \right\|_{\underline{L}^2(B_{2r}(x))} + r^{\beta} \left[ \mathbf{f} \right]_{C^{0,\beta}(B_{2r}(x))} \right)
\\ &
\leq
C \left(r^{-\frac d2} \left\| \nabla u \right\|_{L^2(B_2)}
+
r^{\beta} \left[ \mathbf{f} \right]_{C^{0,\beta}(B_{2})} \right).
\end{align*}
After a covering argument, we obtain
\begin{equation*}
\left\| \nabla u \right\|_{L^\infty(B_1)} + r^{\beta} \left[ \nabla u \right]_{C^{0,\beta}(B_1)}
\leq
C \left(r^{-\frac d2} \left\| \nabla u \right\|_{L^2(B_2)}
+
r^{\beta} \left[ \mathbf{f} \right]_{C^{0,\beta}(B_{2})} \right),
\end{equation*}
which yields the proposition after inserting the choice of $r$.
\end{proof}
\section{Differentiation of \texorpdfstring{$\mathbf{F}_m$}{Fm}}
\label{s.AppendixFm}
In this appendix, we show that \eqref{e.Fmrelation} holds.
\begin{lemma} \label{l.Fmrelation} Fix $m \in \mathbb{N}$ and $h,p \in \mathbb{R}^d$. Suppose that $z\mapsto L(z,x)$ is $C^{m+2}$ and $t \mapsto \mathbf{g}( p + th)$ is~$m$ times differentiable at~$0$. Then
\begin{align} \label{e.FmrelationApp}
\lefteqn{
\mathbf{F}_{m+1} (\mathbf{g}(p) , D_p \mathbf{g}(p) h^{\otimes 1} ,\ldots,D_p^{m} \mathbf{g}(p) h^{\otimes m},x)
} \quad &
\\ \notag &
= D_p \left( \mathbf{F}_{m} (\mathbf{g}(p),D_p \mathbf{g}(p) h^{\otimes 1},\ldots,D_p^{m-1} \mathbf{g}(p) h^{\otimes (m-1)},x) \right) \cdot h
\\ \notag & \quad
+ D_p \left( D_p^{2} L(\mathbf{g}(p),x) \right) h^{\otimes 1} \left( D_p^{m} \mathbf{g}(p) h^{\otimes m} \right)^{\otimes 1}.
\end{align}
\end{lemma}
\begin{proof}
We first observe that the terms $D_p \mathbf{g}(p) h^{\otimes 1},\ldots,D_p^{m} \mathbf{g}(p) h^{\otimes m}$ in~\eqref{e.FmrelationApp} are precisely the directional derivatives of $\mathbf{g}$ in the direction $h$, up to order $m$. The terms in \eqref{e.FmrelationApp} involve derivatives of $z\mapsto L(z,x)$ up to order $m+2$. Hence, we may assume by approximation, without loss of generality, that $z \mapsto L(z,x)$ and $p \mapsto \mathbf{g}(p)$ are polynomials, of degrees at most $m+2$ and $m$, respectively. Fix $h \in \mathbb{R}^d$ and let $t \in \mathbb{R}$. We write
\begin{equation*}
\mathbf{g}(p+t h) = \mathbf{g}(p) + \sum_{j=1}^m \frac{t^j }{j!} D_p^j \mathbf{g}(p) h^{\otimes j} .
\end{equation*}
Denote
\begin{equation*}
\mathbf{z}_j(p) := D_p^j \mathbf{g}(p) h^{\otimes j} \quad \mbox{and} \quad \mathbf{Z}(p,t) := \sum_{j=1}^{m} \frac{t^j }{j!} \mathbf{z}_{j}(p).
\end{equation*}
Examining the relation between the $p$ and $t$ derivatives of $\mathbf{Z}(p,t)$, and using that $\mathbf{z}_{m+1} = 0$ since $\mathbf{g}$ is a polynomial of degree at most $m$, we find that
\begin{equation} \label{e.diffZ}
D_p \mathbf{Z}(p,t) \cdot h = \sum_{j=1}^{m-1} \frac{t^j }{j!} \mathbf{z}_{j+1}(p)
=
\partial_t \mathbf{Z}(p,t) - \mathbf{z}_{1}(p) .
\end{equation}
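For instance, in the case $m=2$ (a direct check, recorded here for the reader's convenience), we have $\mathbf{Z}(p,t) = t\, \mathbf{z}_1(p) + \frac{t^2}{2} \mathbf{z}_2(p)$ and, since $D_p \mathbf{z}_j(p) \cdot h = \mathbf{z}_{j+1}(p)$ and $\mathbf{z}_3 = 0$,
\begin{equation*}
D_p \mathbf{Z}(p,t) \cdot h = t\, \mathbf{z}_2(p)
\qquad \mbox{and} \qquad
\partial_t \mathbf{Z}(p,t) - \mathbf{z}_1(p) = \left( \mathbf{z}_1(p) + t\, \mathbf{z}_2(p) \right) - \mathbf{z}_1(p) = t\, \mathbf{z}_2(p),
\end{equation*}
in agreement with~\eqref{e.diffZ}.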
Set now, for fixed $h,x\in\mathbb{R}^d$,
\begin{equation*}
\mathbf{G}_{h,x}(t,p) := \sum_{k=2}^{m+1} \frac1{k!} D_p^{k+1} L(\mathbf{g}(p),x) \left( \mathbf{Z}(p,t) \right)^{\otimes k} ,
\end{equation*}
so that, by the definition of $\mathbf{F}_m$ in~\eqref{e.defFm},
\begin{equation*}
\partial_t^m \mathbf{G}_{h,x} (0,p) = \mathbf{F}_m (\mathbf{g}(p),\mathbf{z}_1(p) ,\ldots,\mathbf{z}_{m-1}(p),x) ,
\end{equation*}
and similarly for $\mathbf{F}_{m+1}$. Computing the directional derivative, recalling that we assume that $z \mapsto L(z,x)$ is a polynomial of degree at most $m+2$,
\begin{align} \notag
D_p \mathbf{G}_{h,x}(t,p) \cdot h
&
=
\sum_{k=2}^{m+1} \frac1{(k-1)!} D_p^{k+1} L(\mathbf{g}(p),x) \left( \mathbf{Z}(p,t) \right)^{\otimes (k-1)} \left( D_p \mathbf{Z}(p,t) \cdot h \right)^{\otimes 1}
\\ \notag & \quad
+
\sum_{k=3}^{m+1} \frac1{(k-1)!} D_p^{k+1} L(\mathbf{g}(p),x) \left( \mathbf{Z}(p,t) \right)^{\otimes (k-1)} \left( \mathbf{z}_{1}(p) \right)^{\otimes 1},
\end{align}
we get by~\eqref{e.diffZ} that
\begin{align} \notag
\partial_t \mathbf{G}_{h,x}(t,p)
& = D_p \mathbf{G}_{h,x}(t,p) \cdot h + D_p^{3} L(\mathbf{g}(p),x) \left( \mathbf{Z}(p,t) \right)^{\otimes 1} \left( \mathbf{z}_{1}(p) \right)^{\otimes 1}.
\end{align}
Consequently, we have
\begin{equation*}
\partial_t^{m+1} \mathbf{G}_{h,x}(0,p) =
D_p \partial_t^{m} \mathbf{G}_{h,x}(0,p) \cdot h + D_p^{3} L(\mathbf{g}(p),x) \left( \mathbf{z}_{1}(p) \right)^{\otimes 1} \left( \mathbf{z}_{m}(p) \right)^{\otimes 1},
\end{equation*}
which is~\eqref{e.FmrelationApp}, concluding the proof.
\end{proof}
\section{Linearization errors}
\label{app.linerrors}
In this appendix we compute the equation satisfied by a higher-order linearization error and thereby obtain gradient estimates.
\smallskip
\begin{lemma}[{Equation for the linearization error}]
\label{l.lineq}
Fix $\mathsf{M},R \in (0,\infty)$ and $n\in \mathbb{N}$ with $n \geq 2$. Assume that $p \mapsto L(p,x)$ is $C^{n+1,1}$ for every $x\in \mathbb{R}^d$ and
\begin{equation*}
\sum_{k=1}^{n} \frac{1}{k!} \left\| D_p^{k+1} L \right\|_{L^\infty(\mathbb{R}^d \times \mathbb{R}^d)} \leq \mathsf{M}.
\end{equation*}
Suppose that $u,v,w_1,\ldots,w_n \in H^1(B_R)$ satisfy
\begin{equation*}
\nabla \cdot \left( D_pL(\nabla u,x) - D_pL(\nabla v,x) \right) = 0
\quad \mbox{in} \ B_R
\end{equation*}
and, for each~$m \in \{1,\ldots,n\}$,
\begin{equation*}
-\nabla \cdot \left( D^2_pL\left( \nabla u, x \right) \nabla w_m \right) = \nabla \cdot \mathbf{F}_m(\nabla u,\nabla w_1,\ldots,\nabla w_{m-1},x)
\quad\mbox{in} \ B_R,
\end{equation*}
where $\mathbf{F}_m$ is defined in~\eqref{e.defFm}. Denote $\xi_0 := v -u$ and, for each~$m \in \{1,\ldots,n\}$,
\begin{equation*}
\xi_m := v - u - \sum_{k=1}^m \frac{w_k}{k!}.
\end{equation*}
Then there is a vector field $\mathbf{E}_{n}$ such that $\xi_n$ solves
\begin{equation*}
- \nabla \cdot \left( D^2_p L(\nabla u,\cdot) \nabla \xi_{n} \right) = \nabla \cdot \mathbf{E}_{n}
\quad \mbox{in} \ B_R
\end{equation*}
and there exists a constant $C(n,\mathsf{M},d)<\infty$ such that
\begin{align} \label{e.Enprel}
\left| \mathbf{E}_{n} \right|
&
\leq
C \sum_{h=0}^{n-1} \left| \nabla \xi_{h} \right| \left( \left| \nabla \xi_{0} \right| + \sum_{i=1}^{n-1} \left| \nabla \frac{w_i}{i!} \right|^{\frac{1}{i}} \right)^{n-h}.
\end{align}
Furthermore, there exist constants $C(n,\mathsf{M},d)<\infty$, $q(n,d) \in (2,\infty)$ and $\delta(d,\Lambda) \in \left( 0 ,\tfrac 12 \right]$ such that
\begin{align}
\label{e.EnL2}
\left\| \nabla \xi_n \right\|_{\underline{L}^{2+\delta} \left( B_{\frac12 (1 + 2^{-n})R} \right)}
& \leq
C \sum_{i=1}^{n} \left( \frac1R\left\| \xi_{i} - (\xi_{i})_{B_R}\right\|_{\underline{L}^{2} \left( B_{R} \right)} \right)^{\frac{n+1}{i+1}}
\\ \notag & \qquad +
C \left\| \nabla \xi_{0} \right\|_{\underline{L}^{q}(B_R)}^{n+1}
+
C \sum_{i=1}^{n-1} \left\| \nabla \frac{w_i}{i!} \right\|_{\underline{L}^{q}(B_R)}^{\frac{n+1}{i}} .
\end{align}
\end{lemma}
\begin{proof}
Throughout the proof we use the notation $s_k := \sum_{j=1}^k \frac{w_j}{j!}$ and $\xi_0 = v-u$, so that $\xi_k = \xi_0 - s_k$.
\smallskip
\emph{Step 1.} Recalling that $\mathbf{F}_1 = 0$, we may rewrite
\begin{align} \notag
\lefteqn{D_p L(\nabla v,x) - D_p L(\nabla u,x) -
D^2_p L(\nabla u,x) \nabla \xi_n } \quad &
\\ \notag &
=
\sum_{k=1}^{n} \frac{1}{k!} \left(D^2_p L(\nabla u,x) \nabla w_k + \mathbf{F}_{k}(\nabla u,\nabla w_1,\ldots,\nabla w_{k-1},x) \right) + \mathbf{E}_{n},
\end{align}
where we define
\begin{align} \notag
\mathbf{E}_{n} & := \sum_{k=2}^{n} \frac{1}{k!} \left( D_p^{k+1} L(\nabla u,x) (\nabla \xi_0)^{\otimes k} - \mathbf{F}_{k}(\nabla u,\nabla w_1,\ldots,\nabla w_{k-1},x) \right)
\\ \notag &
\quad
+ D_p L(\nabla v,x) - \sum_{k=0}^{n} \frac1{k!} D_p^{k+1} L(\nabla u,x) (\nabla \xi_0)^{\otimes k} .
\end{align}
By the equations of $u$, $v$ and $w_k$, we have that
\begin{equation*}
- \nabla \cdot \left( D^2_p L(\nabla u,x) \nabla \xi_{n} \right) = \nabla \cdot \mathbf{E}_{n}.
\end{equation*}
It thus remains to estimate~$\mathbf{E}_{n}$.
\smallskip
\emph{Step 2.} We show that, for $k \in \{2,\ldots,n\}$ and $m \in \{k,\ldots,n\}$,
\begin{equation} \label{e.recursiveformulaforL}
(\nabla \xi_0)^{\otimes k} = \mathbf{S}^{(k)}_{m} + \mathbf{E}^{(k)}_{m},
\end{equation}
where $\mathbf{S}^{(j)}_{m}$ and $\mathbf{E}^{(j)}_{m}$ are defined, for $j \in \{2,\ldots,k\}$, recursively as
\begin{equation} \label{e.Sjm}
\mathbf{S}^{(j)}_{m} := \sum_{i = 1}^{m+1-j} \mathbf{S}^{(j-1)}_{m - i} \otimes \nabla \frac{w_{i}}{i!}
\end{equation}
and
\begin{equation} \label{e.Ejm}
\mathbf{E}^{(j)}_{m} := \sum_{i = 1}^{m+1-j} \mathbf{E}^{(j-1)}_{m-i} \otimes \nabla \frac{w_{i}}{i!}
+ (\nabla \xi_0)^{\otimes (j-1)} \otimes \nabla \xi_{m-(j-1)} ,
\end{equation}
with
\begin{equation} \label{e.S1E1mdef}
\mathbf{S}^{(1)}_{i} := \nabla s_{i}
\qquad \mbox{and} \qquad
\mathbf{E}^{(1)}_{i} := \nabla \xi_{i} .
\end{equation}
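As a consistency check of~\eqref{e.recursiveformulaforL} in the simplest case $k=m=2$, recorded here for the reader's convenience, the definitions give $\mathbf{S}^{(2)}_{2} = \nabla s_1 \otimes \nabla w_1 = \nabla w_1 \otimes \nabla w_1$ and $\mathbf{E}^{(2)}_{2} = \nabla \xi_1 \otimes \nabla w_1 + \nabla \xi_0 \otimes \nabla \xi_1$, so that, using $\xi_1 = \xi_0 - w_1$,
\begin{equation*}
\mathbf{S}^{(2)}_{2} + \mathbf{E}^{(2)}_{2}
= \left( \nabla w_1 + \nabla \xi_1 \right) \otimes \nabla w_1 + \nabla \xi_0 \otimes \nabla \xi_1
= \nabla \xi_0 \otimes \left( \nabla w_1 + \nabla \xi_1 \right)
= (\nabla \xi_0)^{\otimes 2}.
\end{equation*}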
Indeed, suppose that we have, for $j \in \{1,\ldots,k-1\}$ and $ m \in \{j,\ldots,n\}$, that
\begin{equation*}
(\nabla \xi_0)^{\otimes j} = \mathbf{S}^{(j)}_{m} + \mathbf{E}^{(j)}_{m}.
\end{equation*}
This is obviously true for $j=1$. We compute, for $m \in \{k,\ldots,n\}$,
\begin{align}
\notag
(\nabla \xi_0)^{\otimes k}
&
= \sum_{i=1}^{m+1-k} (\nabla \xi_0)^{\otimes (k-1)} \otimes \nabla \frac{w_i}{i!} + (\nabla \xi_0)^{\otimes (k-1)} \otimes \nabla \xi_{m-(k-1)}
\\ \notag &
= \sum_{i = 1}^{m+1- k} \mathbf{S}^{(k-1)}_{m - i} \otimes \nabla \frac{w_{i}}{i!}
+ \sum_{i = 1}^{m+1-k} \mathbf{E}^{(k-1)}_{m-i} \otimes \nabla \frac{w_{i}}{i!} + (\nabla \xi_0)^{\otimes (k-1)} \otimes \nabla \xi_{m-(k-1)}
\\ \notag &
= \mathbf{S}^{(k)}_{m} + \mathbf{E}^{(k)}_{m} ,
\end{align}
which proves the recursive formula~\eqref{e.recursiveformulaforL}.
\smallskip
\emph{Step 3.}
We show that, for $k \in \{2,\ldots, n\}$, there exists a constant $C(n,k,d)<\infty$ such that
\begin{align} \label{e.Eknest}
\left| \mathbf{E}^{(k)}_{n} \right|
\leq
C \sum_{h=1}^{n+1-k} \left| \nabla \xi_{h} \right| \left(\left| \nabla \xi_0 \right| + \sum_{i=1}^{n+1-k} \left|\nabla \frac{w_{i}}{i!}\right|^{\frac1i} \right)^{n-h}.
\end{align}
The statement is easy to verify by induction. Indeed, for $m=j=2$ we have by~\eqref{e.Ejm} that
\begin{equation*}
\left|\mathbf{E}^{(2)}_{2} \right| \leq \left| \nabla \xi_{1} \otimes \nabla w_{1} + \nabla \xi_{0} \otimes \nabla \xi_{1} \right| \leq
C \sum_{h=1}^{1} \left| \nabla \xi_{h} \right| \left(\left| \nabla \xi_0 \right| + \left|\nabla w_{1}\right| \right)^{2-h}.
\end{equation*}
Assume then that, for $ m\in \{2,\ldots,n-1\}$ and $j \in \{2,\ldots,m\}$, we have
\begin{align} \label{e.Ekmjest}
\left| \mathbf{E}^{(j)}_{m} \right|
\leq
C \sum_{h=0}^{m+1-j} \left| \nabla \xi_{h} \right| \left(\left| \nabla \xi_0 \right| + \sum_{i=1}^{m+1-j} \left|\nabla \frac{w_{i}}{i!}\right|^{\frac1i} \right)^{m-h}.
\end{align}
By the definition of $\mathbf{E}^{(j)}_{n}$, we have, for $j \in \{2,\ldots,n\}$, that
\begin{align} \label{e.Ejm.again}
\left|\mathbf{E}^{(j)}_{n} \right| & \leq C \sum_{i = 1}^{n+1-j} \left| \mathbf{E}^{(j-1)}_{n-i} \right| \left|\nabla \frac{w_{i}}{i!}\right|
+ C \left| \nabla \xi_0 \right|^{j-1} \left| \nabla \xi_{n-(j-1)} \right| .
\end{align}
By~\eqref{e.Ekmjest}, using Fubini's theorem for sums, we obtain
\begin{align*}
\sum_{i = 1}^{n+1-j} \left| \mathbf{E}^{(j-1)}_{n-i} \right| \left|\nabla \frac{w_{i}}{i!}\right|
&
\leq
C \sum_{i = 1}^{n+1-j} \sum_{h=0}^{n+2-i-j} \left| \nabla \xi_{h} \right| \left(\left| \nabla \xi_0 \right| + \sum_{\ell=1}^{n+2-i-j} \left|\nabla \frac{w_{\ell}}{\ell !}\right|^{\frac1\ell} \right)^{n-i-h} \left|\nabla \frac{w_i}{i!}\right|
\\ \notag &
\leq
C \sum_{h = 0}^{n+1-j}
\left| \nabla \xi_{h} \right|
\sum_{i=1}^{n+2-j-h} \left(\left| \nabla \xi_0 \right| + \sum_{\ell=1}^{n+1-j} \left|\nabla w_{\ell}\right|^{\frac1\ell}\right)^{n-i-h} \left|\nabla \frac{w_i}{i!}\right|
\\ \notag &
\leq
C \sum_{h = 0}^{n+1-j} \left| \nabla \xi_{h} \right| \left(\left| \nabla \xi_0 \right| + \sum_{\ell=1}^{n+1-j} \left|\nabla \frac{w_{\ell}}{\ell!}\right|^{\frac1\ell}\right)^{n-h}.
\end{align*}
This, together with~\eqref{e.Ejm.again}, proves the induction step and also yields~\eqref{e.Eknest}.
\smallskip
\emph{Step 4.}
We show that
\begin{equation}
\label{e.L0vsSkns}
\left| \sum_{k=2}^{n} \frac1{k!} D_p^{k+1} L(\nabla u,x) \left((\nabla \xi_0)^{\otimes k} - \mathbf{S}^{(k)}_{n} \right) \right|
\leq C \sum_{h=0}^{n-1} \left| \nabla \xi_{h} \right| \left(\left| \nabla \xi_0 \right| + \sum_{i=1}^{n-1} \left|\nabla \frac{w_{i}}{i!}\right|^{\frac1i} \right)^{n-h}.
\end{equation}
By the recursive formula we have, for $k \in \{2,\ldots,n\}$, that
\begin{align}
\notag
(\nabla \xi_0)^{\otimes k}
&
= \mathbf{S}^{(k)}_{n} + \mathbf{E}^{(k)}_{n},
\end{align}
and thus~\eqref{e.L0vsSkns} follows by~\eqref{e.Eknest}.
\smallskip
\emph{Step 5.}
We show that
\begin{align}
\label{e.obvioushomo}
\sum_{k=2}^{n} \frac1{k!} \left( D_p^{k+1} L(\nabla u,x) \mathbf{S}^{(k)}_{n} - \mathbf{F}_k(\nabla u,\nabla w_1,\ldots,\nabla w_{k-1},x) \right) = 0 .
\end{align}
For this, we first abbreviate
\[
\mathbf{F}_k=\mathbf{F}_{k}(\nabla u,\nabla w_1,\ldots,\nabla w_{k-1},x)
\]
and observe that, by definition,
\[
\frac{1}{k!}\mathbf{F}_{k}=\sum_{j \geq 2}\frac{1}{j!}D_{p}^{j+1}L(\nabla u, x)\left(\sum_{i_{1}+\cdots+i_{j}= k \, : \, i_{1},\dots, i_{j}\geq 1}\nabla \frac{w_{i_1}}{i_{1}!}\otimes \cdots \otimes \nabla \frac{w_{i_j}}{i_{j}!}\right).
\]
Second, we observe that, by induction on $j\geq 2$, we have
\[
\mathbf{S}^{(j)}_{n}=\sum_{m\leq n}\left(\sum_{i_{1}+\cdots+i_{j}= m \, : \, i_{1}, \dots, i_{j}\geq 1}\nabla \frac{w_{i_1}}{i_{1}!}\otimes\cdots \otimes \nabla \frac{w_{i_j}}{i_{j}!}\right)
\]
for all $n\geq j$. Third, by commutativity of addition, we observe that
\[
\sum_{m\leq n}\left(\sum_{j\geq 2}\frac{1}{j!}D_{p}^{j+1}L(\nabla u, x)\left(\sum_{i_{1}+\cdots+i_{j}=m\ :\ i_{1}, \dots, i_{j}\geq 1}\nabla \frac{w_{i_1}}{i_{1}!}\otimes\cdots\otimes\nabla \frac{w_{i_j}}{i_{j}!}\right)\right)
\]
\[
=\sum_{j\geq 2}\frac{1}{j!}D_{p}^{j+1}L(\nabla u, x)\left(\sum_{m\leq n}\left(\sum_{i_{1}+\cdots+i_{j}=m\ :\ i_{1}, \dots, i_{j}\geq 1}\nabla \frac{w_{i_1}}{i_{1}!}\otimes\cdots\otimes\nabla \frac{w_{i_j}}{i_{j}!}\right)\right).
\]
Finally, letting $\mathbf{F}_{m}=0$ for $m<2$ and $\mathbf{S}^{(j)}_{n}=0$ for $j>n$ for notational convenience, we note that the above equation may be rewritten as
\[
\sum_{m\leq n}\frac{1}{m!}\mathbf{F}_{m}=\sum_{j\geq 2}\frac{1}{j!}D_{p}^{j+1}L(\nabla u, x)\mathbf{S}^{(j)}_{n},
\]
which is \eqref{e.obvioushomo}.
\mathbf{m}athbf{s}mallskip
\emph{Step 6.} Conclusion.
We have that
\mathbf{b}etagin{equation}
\label{e.ws.restaylor}
\left| D_p L(\nabla v,x) - \mathbf{m}athbf{s}um_{k=0}^{n} \mathbf{m}athbf{f}rac1{k!} D_p^{k+1} L(\nabla u,x) (\nabla v - \nabla u)^{\otimes k} \right|
\leq
C \left| \nabla v - \nabla u \right|^{n+1} .
\end{equation}
Indeed, by a Taylor expansion, we see that
\mathbf{b}etagin{equation*}
\left| D_p L(z_0 + z,x) - \mathbf{m}athbf{s}um_{k=0}^{n} \mathbf{m}athbf{f}rac1{k!} D_p^{k+1} L(z_0,x) z^{\otimes k} \right| \leq \left[ D_p^{n+1} L(\cdot,x)\right]_{C^{0,1}\left( B_{|z|}(z_0) \right)} \mathbf{m}athbf{f}rac{\left| z \right|^{n+1} }{(k+1)!} .
\end{equation*}
Applying this with $z_0 = \nabla u$ and $z = \nabla v - \nabla u$ gives~\eqref{e.ws.restaylor}. Combining this with the previous steps yields the desired estimate~\eqref{e.Enprel} for $\mathbf{m}athbf{E}_{n}$. Finally, by the H\"older and Young inequalities, we get, for all $q \in [2,\infty)$ and $r \in (0,R]$,
\begin{multline} \label{e.EnLq}
\left\| \mathbf{E}_{n} \right\|_{\underline{L}^{q} \left( B_{r} \right)}
\leq
\sum_{h=1}^{n-1} \left\| \nabla \xi_{h} \right\|_{\underline{L}^{q+\delta} \left( B_{r} \right)} \left\| \left| \nabla \xi_{0} \right| + \sum_{i=1}^{n-1} \left| \nabla \frac{w_i}{i!} \right|^{\frac{1}{i}} \right\|_{\underline{L}^{\frac{nq(q +\delta)}{\delta}} \left( B_{r} \right)}^{n-h}
\\
\leq C
\left(\sum_{h=1}^{n-1} \left\| \nabla \xi_{h} \right\|_{\underline{L}^{q+\delta} \left( B_{R} \right)}^{\frac{n+1}{h+1}} +
\left\| \nabla \xi_{0} \right\|_{\underline{L}^{\frac{nq(q +\delta)}{\delta}} \left( B_{R} \right)}^{n+1} +
\sum_{i=1}^{n-1} \left\| \nabla \frac{w_i}{i!} \right\|_{\underline{L}^{\frac{nq(q +\delta)}{\delta}} \left( B_{R} \right)}^{\frac{n+1}{i}}\right).
\end{multline}
Let $\delta_0$ be the Meyers exponent corresponding to $\Lambda$. Let
\begin{equation*}
q_h := q + \frac12 \delta_0 + \frac12 \frac{n+1-h}{n+1} \delta_0
\quad \mbox{and} \quad
q := \frac{16}{\delta_0} n(n+1) .
\end{equation*}
Set also $R_h := \frac12 (1+2^{-h}) R$. With this notation the previous display yields, by H\"older's inequality, that
\begin{equation*}
\left\| \mathbf{E}_{m} \right\|_{\underline{L}^{q_m} \left( B_{R_m} \right)}
\leq C
\left(\sum_{h=1}^{m-1} \left\| \nabla \xi_{h} \right\|_{\underline{L}^{q_h} \left( B_{R_h} \right)}^{\frac{n+1}{h+1}} +
\left\| \nabla \xi_{0} \right\|_{\underline{L}^{q} \left( B_{R} \right)}^{n+1} +
\sum_{i=1}^{n-1} \left\| \nabla \frac{w_i}{i!} \right\|_{\underline{L}^{q} \left( B_{R} \right)}^{\frac{n+1}{i}}\right).
\end{equation*}
Now~\eqref{e.EnL2} follows by the Caccioppoli estimate, concluding the proof.
\end{proof}
\section{Regularity for constant coefficient linearized equations}
\label{s.appendixconstant}
In this appendix we prove a lemma tracking down the regularity of a solution $(\overline{w}_1,\ldots,\overline{w}_n)$ of the linearized system in the case that~$\overline{L}$ is a smooth, constant-coefficient Lagrangian.
\smallskip
Throughout we fix $n \in \mathbb{N}_0$, $\Lambda \in [1,\infty)$, $\beta \in (0,1)$, and assume that $\overline{L}$ satisfies
\begin{equation} \label{e.appC.barL1}
I_d \leq D_p^2 \overline{L} \leq \Lambda I_d
\end{equation}
and for all $\mathsf{M}_0 \in [1,\infty)$ there is $C(\mathsf{M}_0,\beta,d)<\infty$ such that
\begin{equation} \label{e.appC.barL2}
\left\| D^2 \overline{L} \right\|_{C^{n,\beta}(B_{ \mathsf{M}_0 })} \leq C.
\end{equation}
\smallskip
\begin{lemma}[{Regularity of $\overline{w}_{m}$}]
\label{l.appC.C1alphabarwn}
Let $\eta \in [\tfrac12,1)$, $\mathsf{M} \in [1,\infty)$ and $R \in (0,\infty)$. Assume that $\overline{L}$ satisfies~\eqref{e.appC.barL1} and~\eqref{e.appC.barL2}. Let $\overline{u},\overline{w}_1,\ldots,\overline{w}_n$ solve the equations, for $m \in \{1,\ldots,n+1\}$,
\begin{equation}
\label{e.appC.eqs}
\left\{
\begin{aligned}
& -\nabla \cdot \left( D_p\overline{L}\left( \nabla \overline{u} \right) \right) = 0 & \mbox{in} & \ B_{R}, \\
& -\nabla \cdot \left( D^2_p\overline{L}\left( \nabla \overline{u}\right) \nabla \overline{w}_m \right) = \nabla \cdot \left( \overline{\mathbf{F}}_m(\nabla \overline{u},\nabla \overline{w}_1,\ldots,\nabla \overline{w}_{m-1})\right) & \mbox{in} & \ B_{R},
\end{aligned}
\right.
\end{equation}
where $\overline{\mathbf{F}}_m$ has been defined in~\eqref{e.defbarFm} and $\overline{u}$ satisfies
\begin{equation} \label{e.appC.norm1pre}
\frac1R \left\| \overline{u} - ( \overline{u})_{B_{R}} \right\|_{\underline{L}^2\left( B_{R} \right)}
\leq \mathsf{M}.
\end{equation}
Then, for $m \in \{1,\ldots,n+1\}$, there exists a constant $C(m,\eta,\mathsf{M},\beta,d,\Lambda)<\infty$ such that
\begin{equation} \label{e.appC.wres1}
\left\| \nabla \overline{w}_m \right\|_{L^\infty(B_{\eta R})}
\leq
C \sum_{i=1}^m \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} .
\end{equation}
Moreover, letting
\begin{equation} \label{e.deltacondonbaru}
\delta\in (0,\infty) \cap \left[ \left( \frac{1}{R} \inf_{\ell \in \mathcal{P}_1} \left\| \bar{u} - \ell \right\|_{L^2(B_{R})} \right)^\beta, \infty \right),
\end{equation}
we have, for $m \in \{1,\ldots,n+1\}$, that
\begin{align} \label{e.appC.C1alphabarw}
\lefteqn{ R^\beta \left[ \nabla \overline{w}_{m} \right]_{C^{0,\beta}(B_{\eta R})}} \quad &
\\ \notag
& \leq
C \delta \sum_{i=1}^m \left( \frac1{\delta R} \inf_{\ell \in \mathcal{P}_1} \left\| \overline{w}_{i} - \ell \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi}
+ C \delta \sum_{i=1}^{m-1} \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi}
\\ \notag & \quad
+ C \left( \frac{1}{R} \inf_{\ell \in \mathcal{P}_1} \left\| \bar{u} - \ell \right\|_{\underline{L}^2(B_R)} \right)^\beta \sum_{i=1}^{m} \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} ,
\end{align}
and, for $m \in \{1,\ldots,n\}$ and $k \in \{1,\ldots,n+1-m\}$,
\begin{align} \label{e.appC.C1alphabarw2}
\lefteqn{ R^{k} \left\| \nabla^{k+1} \overline{w}_{m} \right\|_{L^\infty(B_{\eta R})} + R^{k +\beta} \left[ \nabla^{k+1} \overline{w}_{m} \right]_{C^{0,\beta}(B_{\eta R})}} \quad &
\\ \notag
& \leq
C \delta \sum_{i=1}^m \left( \frac1{\delta R} \inf_{\ell \in \mathcal{P}_1} \left\| \overline{w}_{i} - \ell \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi}
+ C \delta \sum_{i=1}^{m-1} \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi}
\\ \notag & \quad
+ C \left( \frac{1}{R} \inf_{\ell \in \mathcal{P}_1} \left\| \bar{u} - \ell \right\|_{\underline{L}^2(B_R)} \right)^\beta \sum_{i=1}^{m} \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} .
\end{align}
\end{lemma}
Notice that by~\eqref{e.appC.norm1pre} we may always take $\delta = \mathsf{M}^{\beta}$ in~\eqref{e.deltacondonbaru}. When applying the result in practice, we typically take $\delta$ to be very small.
\begin{proof}
Fix $m \in \{1,\ldots,n+1\}$, $\eta \in [\tfrac12,1)$, $\mathsf{M} \in [1,\infty)$. Let $\overline{u},\overline{w}_1,\ldots,\overline{w}_n$ solve~\eqref{e.appC.eqs} and assume~\eqref{e.appC.norm1pre}.
Fix also $\delta$ as in~\eqref{e.deltacondonbaru}.
\smallskip
Throughout the proof we denote, for~$\theta \in (0,\infty)$,
\begin{align*}
\mathsf{E}_m ^{(\theta)} & :=
\theta \sum_{i=1}^m \left( \frac1{\theta R} \inf_{\ell \in \mathcal{P}_1} \left\| \overline{w}_{i} - \ell \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi}
+ \theta \sum_{i=1}^{m-1} \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi}
\\ \notag & \quad
+ \left( \frac{1}{R} \inf_{\ell \in \mathcal{P}_1} \left\| \bar{u} - \ell \right\|_{\underline{L}^2(B_R)} \right)^\beta \sum_{i=1}^{m} \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} .
\end{align*}
We also denote
\begin{equation*}
\overline{\mathbf{f}}_{m} := \overline{\mathbf{F}}_{m}( \nabla \bar{u},\nabla \overline{w}_1,\ldots,\nabla \overline{w}_{m-1})
\end{equation*}
and, for $j \in \mathbb{N}$,
\begin{equation*}
R_j := (\eta + 2^{-j}(1-\eta))R \quad \mbox{and} \quad r_j := 2^{-j-8}(1-\eta) R .
\end{equation*}
Below we denote by $C$ a constant depending only on the parameters $(m,\eta,\mathsf{M},\beta,d,\Lambda)$. It may change from line to line.
\smallskip
\emph{Step 1}.
Basic properties of $\overline{u}$. In view of~\cite[Proposition A.1]{AFK}, assumption~\eqref{e.appC.barL1} and normalization~\eqref{e.appC.norm1pre} imply that there exists a constant $\mathsf{M}_0(\eta,\mathsf{M},d,\Lambda)<\infty$ such that
\begin{equation} \label{e.appC.norm1}
\left\| \nabla \overline{u} \right\|_{L^\infty\left( B_{\frac13 (2+\eta)R} \right)}
\leq \mathsf{M}_0 .
\end{equation}
Therefore,~\eqref{e.appC.barL2} is applicable in $B_{\frac13 (2+\eta)R}$, and we obtain by~\cite[Proposition A.1]{AFK} that
\begin{equation} \label{e.appC.norm3}
R^2 \left\| \nabla^2 \overline{u} \right\|_{L^\infty\left( B_{\frac12(1+\eta)R} \right)}
\leq
C \inf_{\ell \in \mathcal{P}_1} \left\| \bar{u} - \ell \right\|_{L^2(B_{R})}.
\end{equation}
We also define
\begin{equation} \label{e.appC.b}
\mathbf{b}(x) := D_p^2 \overline{L} \left( \nabla \bar u \right).
\end{equation}
We have by~\eqref{e.appC.norm3} and~\eqref{e.appC.norm1pre} that
\begin{equation} \label{e.appC.bbound}
I_d \leq \mathbf{b}(x) \leq \Lambda I_d
\quad \mbox{and} \quad
R \left\| \nabla \mathbf{b} \right\|_{L^{\infty}(B_{\frac12 (1+\eta)R})} \leq \frac{C}{R} \inf_{\ell \in \mathcal{P}_1} \left\| \bar{u} - \ell \right\|_{L^2(B_{R})} \leq C \delta^{1/\beta}.
\end{equation}
Notice also that, by~\eqref{e.appC.norm1pre}, we have
\begin{equation} \label{e.appCHm1bnd}
\mathsf{E}_m ^{(1)} \leq C \sum_{i=1}^m \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} .
\end{equation}
\smallskip
\emph{Step 2}. Induction assumption on degree $m$. We assume inductively that, for $j \in \{1,\ldots,m-1\}$, there exists a constant $\mathsf{K}_{j}(\eta,\mathsf{M},\mathsf{K}_{0},d,\Lambda)<\infty$ such that
\begin{equation} \label{e.appC.inductionnabla2w}
R_j^\beta \left[ \nabla \overline{w}_{j} \right]_{C^{0,\beta}(B_{R_j})} \leq \mathsf{K}_{j} \mathsf{E}_{j}^{(\delta)}
\end{equation}
and
\begin{equation} \label{e.appC.inductionnablaw}
\left\| \nabla \overline{w}_{j} \right\|_{\underline{L}^\infty(B_{R_{j}})}
\leq
\mathsf{K}_{j} \sum_{i=1}^j \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac ji} .
\end{equation}
Notice that the case $m=2$ has been established in~\cite[Proposition A.1]{AFK}.
\smallskip
Throughout the next steps of the proof, we let constants $C$ depend on the parameters $(\{\mathsf{K}_{i}\}_{i=1}^{m-1}, m,\eta,\mathsf{M},\beta,\mathsf{K}_{0},d,\Lambda)$, and they may change from line to line.
\smallskip
\emph{Step 3}. Bounds on $\overline{\mathbf{f}}_{m}$. We show that, under the induction assumptions~\eqref{e.appC.inductionnabla2w} and~\eqref{e.appC.inductionnablaw}, we have that
\begin{align} \label{e.appC.fm}
\left\| \overline{\mathbf{f}}_{m} \right\|_{L^\infty(B_{R_{m-1}})}
&
\leq C \sum_{i=1}^{m-1} \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi}
\end{align}
and, for $r \in (0,r_m]$ and $y \in B_{R_m}$, defining
\begin{equation} \label{e.appC.fmyr}
\overline{\mathbf{f}}_{m,y,r} := \overline{\mathbf{F}}_m \left( (\nabla \overline{u} )_{B_r(y)} , (\nabla \overline{w}_1 )_{B_r(y)},\ldots, (\nabla \overline{w}_{m-1} )_{B_r(y)}\right) ,
\end{equation}
we have that
\begin{equation} \label{e.appC.fm2}
\left\| \overline{\mathbf{f}}_{m} - \overline{\mathbf{f}}_{m,y,r} \right\|_{L^\infty(B_r(y))} \leq C \left(\frac{r}{R}\right)^\beta \mathsf{E}_m ^{(\delta)} .
\end{equation}
\smallskip
To show~\eqref{e.appC.fm}, we have by~\eqref{e.appC.inductionnablaw} that
\begin{align} \label{e.appC.fmpre}
\sum_{i=1}^{m-1} \left\| \nabla \overline{w}_{i} \right\|_{L^\infty(B_{R_{m-1}})}^{\frac mi}
&
\leq
C \sum_{i=1}^{m-1} \mathsf{K}_{i}^m \sum_{j=1}^{i} \left( \frac1R \left\| \overline{w}_{j} - \left( \overline{w}_{j} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac{m}{j}}
\\ \notag &
\leq
C
\sum_{i=1}^{m-1} \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} ,
\end{align}
which yields~\eqref{e.appC.fm} by~\eqref{e.Fmbasic}.
\smallskip
To show~\eqref{e.appC.fm2}, using the H\"older regularity of $\overline{\mathbf{f}}_{m}$ with respect to the $\nabla \overline{u}$ variable, similarly to~\eqref{e.Fmbasic3}, gives us
\begin{align*}
\left| \overline{\mathbf{f}}_{m} - \overline{\mathbf{f}}_{m,y,r} \right|
&
\leq
C \left( r^\beta \left\| \nabla^2 \overline{u} \right\|_{L^\infty (B_{r}(y)) }^\beta +\delta \left(\frac rR\right)^\beta \right)\sum_{i=1}^{m-1} \left( \left| \nabla \overline{w}_{i} \right| + \left| (\nabla \overline{w}_{i})_{B_r(y)} \right| \right) ^{\frac{m}{i}}
\\ \notag & \quad
+ C \delta \left(\frac rR\right)^\beta \sum_{i=1}^{m-1} \left( \delta^{-1} \left(\frac rR\right)^{-\beta} \left| \nabla \overline{w}_{i} - (\nabla \overline{w}_{i})_{B_r(y)}\right| \right)^{\frac{m}{i}}.
\end{align*}
Applying~\eqref{e.appC.inductionnabla2w} and~\eqref{e.deltacondonbaru} yields that
\begin{equation*}
\delta \left( \delta^{-1} \left(\frac rR\right)^{-\beta} \left| \nabla \overline{w}_{i} - (\nabla \overline{w}_{i})_{B_r(y)}\right| \right)^{\frac mi} \leq
C \delta \left( \delta^{-1} \mathsf{E}_i ^{(\delta)} \right)^{\frac mi} \leq
C \mathsf{E}_m ^{(\delta)} .
\end{equation*}
Thus~\eqref{e.appC.fm2} follows by~\eqref{e.appC.bbound} and~\eqref{e.appC.fmpre}.
\smallskip
\emph{Step 4}. Caccioppoli estimate. We show that, under the induction assumptions~\eqref{e.appC.inductionnabla2w} and~\eqref{e.appC.inductionnablaw}, we have, for all $y \in B_{R_m + r_m}$ and $r \in (0,r_m]$,
\begin{multline} \label{e.appC.cacc}
\left\| \nabla \overline{w}_{m} - \nabla \ell \right\|_{\underline{L}^2 \left( B_{r/2}(y) \right)} \leq \frac{C}{r} \left\| \overline{w}_{m} - \ell \right\|_{\underline{L}^2 \left( B_{r}(y) \right)}
\\
+ C r \left\| \nabla^2 \overline{u} \right\|_{L^\infty (B_{r}(y)) } ( \left| \nabla \ell \right| \wedge \left\| \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_{r}(y) \right)} )
+
C \left(\frac{r}{R}\right)^\beta \mathsf{E}_m ^{(\delta)} .
\end{multline}
Since $\overline{w}_{m}$ solves, for $\overline{\mathbf{f}}_{m,y,r}$ defined in~\eqref{e.appC.fmyr} and any affine function $\ell$, the equations
\begin{equation*}
-\nabla \cdot \left( \mathbf{b} \nabla ( \overline{w}_{m} -\ell) \right) = \nabla \cdot \left(( \mathbf{b} - \mathbf{b}(y)) \nabla \ell + \overline{\mathbf{f}}_{m} - \overline{\mathbf{f}}_{m,y,r} \right)
\end{equation*}
and
\begin{equation*}
-\nabla \cdot \left( \mathbf{b}(y) \nabla ( \overline{w}_{m} - \ell) \right) = \nabla \cdot \left( (\mathbf{b} - \mathbf{b}(y)) \nabla \overline{w}_{m} + \overline{\mathbf{f}}_{m} - \overline{\mathbf{f}}_{m,y,r} \right),
\end{equation*}
we obtain~\eqref{e.appC.cacc} simply by testing and using~\eqref{e.appC.fm2}.
\smallskip
\emph{Step 5.}
Induction assumption on the scale. We now assume that there exist $\varepsilon \in (0,1]$, a constant $\mathsf{C}_\varepsilon$, and $r^* \in (0, \varepsilon r_m]$ such that
\begin{equation} \label{e.appC.indscale}
\sup_{y \in B_{R_m}}\sup_{t \in [r^*, \varepsilon r_m]}\left\| \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_{t} (y) \right)} \leq \mathsf{C}_\varepsilon \sum_{i=1}^m \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} .
\end{equation}
Notice that, by the Caccioppoli estimate~\eqref{e.appC.cacc} and~\eqref{e.appCHm1bnd}, we have, for any $\varepsilon \in (0,1]$, that
\begin{equation} \label{e.appC.indscaleinit}
\left\| \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_{\varepsilon r_m}(y) \right)} \leq C_\varepsilon \sum_{i=1}^m \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} ,
\end{equation}
implying that~\eqref{e.appC.indscale} is valid for $r^* = \varepsilon r_m$ provided that $\mathsf{C}_\varepsilon \geq C_\varepsilon$.
\smallskip
\emph{Step 6.} We verify that~\eqref{e.appC.inductionnablaw} is true for $j=m$. This gives us also~\eqref{e.appC.wres1}. Actually, we prove that if~\eqref{e.appC.indscale} is valid for some $r^* \in (0, \varepsilon r_m]$, then it remains valid with $\tfrac12 r^*$ in place of $r^*$. This proves, by induction, that we may take any $r^* \in (0, \varepsilon r_m]$ in~\eqref{e.appC.indscale}. In particular, we obtain~\eqref{e.appC.inductionnablaw} for $j=m$.
\smallskip
Fix $y \in B_{R_m}$. Rewriting the equation of $ \overline{w}_{m}$ as before, for $r \in \left(0 ,r_m \right]$,
\begin{equation*}
-\nabla \cdot \left( \mathbf{b}(y) \nabla \overline{w}_{m} \right) = \nabla \cdot \left( (\mathbf{b} - \mathbf{b}(y)) \nabla \overline{w}_{m} + \left(\overline{\mathbf{f}}_m- \overline{\mathbf{f}}_{m,y,r} \right) \right),
\end{equation*}
where $\overline{\mathbf{f}}_{m,y,r}$ is defined in~\eqref{e.appC.fmyr}, and consequently solving
\begin{equation*}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \mathbf{b}(y) \nabla \overline{w}_{m,y,r}\right) = 0 & \mbox{in} & \ B_{r}(y), \\
& \overline{w}_{m,y,r} = \overline{w}_{m} & \mbox{on} & \ \partial B_{r}(y ),
\end{aligned}
\right.
\end{equation*}
we obtain by testing and~\eqref{e.appC.fm2} that
\begin{equation*}
\left\| \nabla \overline{w}_{m,y,r} - \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_r(y)\right)}
\leq C r \left\| \nabla^2 \overline{u} \right\|_{L^\infty (B_{r}(y)) } \left\| \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_r(y)\right)}
+C \left(\frac{r}{R}\right)^\beta \mathsf{E}_m ^{(\delta)} .
\end{equation*}
In particular, we get by~\eqref{e.appC.indscale}, for $r \in [r^*, \varepsilon r_m]$, that
\begin{equation*}
\left\| \nabla \overline{w}_{m,y,r} - \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_r(y)\right)} \leq C \left(\frac rR\right)^{\beta} \left( \mathsf{C}_\varepsilon \varepsilon^{1-\beta} + 1 \right) \mathsf{E}_m ^{(\delta)}.
\end{equation*}
By the decay estimate for $\mathbf{b}(y)$-harmonic functions, we have, for small enough $\theta(\beta,d,\Lambda) \in \left(0,\frac12\right]$, that
\begin{equation*}
\left\| \nabla \overline{w}_{m,y,r} - (\nabla \overline{w}_{m,y,r})_{B_{\theta r} (y) } \right\|_{\underline{L}^2 \left( B_{\theta r} (y) \right)}
\leq
\frac12 \theta^\beta \left\| \nabla \overline{w}_{m,y,r} - (\nabla \overline{w}_{m,y,r})_{B_{r} (y) } \right\|_{\underline{L}^2 \left( B_{r} (y) \right)}.
\end{equation*}
Therefore, by the triangle inequality, we get
\begin{multline} \notag
\left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{\theta r}(y)} \right\|_{\underline{L}^2 \left( B_{\theta r}(y) \right)}
\\
\leq
\frac12 \theta^\beta \left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{r}(y)} \right\|_{\underline{L}^2 \left( B_{r}(y) \right)}
+ C \left(\frac rR\right)^{\beta} \left( \mathsf{C}_\varepsilon \varepsilon^{1-\beta} + 1 \right) \mathsf{E}_m ^{(\delta)}.
\end{multline}
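To spell out the iteration, write, for this remark only, $A(t) := \| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{t}(y)} \|_{\underline{L}^2 ( B_{t}(y) )}$ and $K := C \left( \mathsf{C}_\varepsilon \varepsilon^{1-\beta} + 1 \right) \mathsf{E}_m ^{(\delta)}$, so that the previous display reads $A(\theta r) \leq \tfrac12 \theta^\beta A(r) + (r/R)^\beta K$. Iterating along the scales $\theta^{k} \varepsilon r_m$ gives
\begin{equation*}
A(\theta^{k} \varepsilon r_m)
\leq
2^{-k} \theta^{k\beta} A(\varepsilon r_m)
+ K \sum_{j=0}^{k-1} 2^{-j} \theta^{j\beta} \left( \frac{\theta^{k-1-j} \varepsilon r_m}{R} \right)^{\beta}
\leq
\theta^{k\beta} \left( A(\varepsilon r_m) + \frac{2K}{\theta^{\beta}} \left( \frac{\varepsilon r_m}{R} \right)^{\beta} \right),
\end{equation*}
since each summand equals $2^{-j} \theta^{(k-1)\beta} (\varepsilon r_m / R)^{\beta} K$; this geometric summation is what is meant by the iteration argument below.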
By an iteration argument we thus obtain, for $r \in [\theta r^*, \varepsilon r_m]$, that
\begin{multline} \label{e.appC.almostthere}
\left(\frac {r}{\varepsilon r_m}\right)^{-\beta} \left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{r}(y)} \right\|_{\underline{L}^2 \left( B_{r}(y)\right)}
\\ \leq
C \left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{\varepsilon r_m} (y)} \right\|_{\underline{L}^2 \left( B_{\varepsilon r_m}(y) \right)}
+ C \left( \varepsilon \mathsf{C}_\varepsilon + 1 \right) \mathsf{E}_m ^{(\delta)}.
\end{multline}
Letting $r \in [\theta r^*, \varepsilon r_m]$ and $n\in \mathbb{N}_0$ be such that $r \in (\theta^{n+1} \varepsilon r_m, \theta^n \varepsilon r_m]$, we obtain by the triangle inequality that
\begin{align} \notag
\left\| \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_{r}(y) \right)} & \leq C
\left\| \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_{\theta^n \varepsilon r_m}(y) \right)}
\\ \notag &
\leq
C \left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{\theta^{n} \varepsilon r_m}(y)} \right\|_{\underline{L}^2 \left( B_{\theta^n \varepsilon r_m}(y) \right)}
\\ \notag & \quad
+C \left| (\nabla \overline{w}_{m})_{B_{\varepsilon r_m}(y)} \right|
+ C \sum_{i=1}^n \left| (\nabla \overline{w}_{m})_{B_{\theta^{i} \varepsilon r_m}(y) } - (\nabla \overline{w}_{m})_{B_{\theta^{i-1} \varepsilon r_m}(y) }\right|.
\end{align}
Thus, by the previous two displays and~\eqref{e.appC.indscaleinit}, we obtain, for $r \in [\theta r^*, \varepsilon r_m]$, that
\begin{equation*}
\left\| \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_{r}(y) \right)} \leq \left(C_\varepsilon + C \varepsilon \mathsf{C}_\varepsilon \right)
\sum_{i=1}^m \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} .
\end{equation*}
We first take $\varepsilon$ so small that $C\varepsilon \leq \frac12$ and then choose $\mathsf{C}_\varepsilon \geq 2C_\varepsilon$. All in all, we have proved that
\begin{equation*}
\sup_{t \in [\theta r^*, \varepsilon r_m]}\left\| \nabla \overline{w}_{m} \right\|_{\underline{L}^2 \left( B_{t} (y) \right)} \leq \mathsf{C}_\varepsilon \sum_{i=1}^m \left( \frac1R \left\| \overline{w}_{i} - \left( \overline{w}_{i} \right)_{B_R} \right\|_{\underline{L}^2(B_R)} \right)^{\frac mi} ,
\end{equation*}
which implies that~\eqref{e.appC.indscale} is valid with $\frac12 r^*$ in place of $r^*$, which was to be shown.
\smallskip
\emph{Step 7.}
We now prove that~\eqref{e.appC.inductionnabla2w} is valid for $j=m$, giving also~\eqref{e.appC.C1alphabarw}. Applying the Caccioppoli estimate~\eqref{e.appC.cacc} together with~\eqref{e.appC.wres1}, which was proved in Step 6 above, and giving up volume factors, we have that
\begin{equation*}
\left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{\varepsilon r_m} (y)} \right\|_{\underline{L}^2 \left( B_{\varepsilon r_m}(y) \right)}
\leq C \mathsf{E}_m ^{(\delta)} .
\end{equation*}
Therefore,~\eqref{e.appC.almostthere} yields, for all $y \in B_{\eta R}$, that
\begin{equation*}
\sup_{r \in (0,r_m)} \left(\frac rR\right)^{-\beta} \left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_r(y)} \right\|_{\underline{L}^2 \left( B_{r}(y) \right) } \leq C \mathsf{E}_m ^{(\delta)}.
\end{equation*}
This yields, via a telescoping summation as in Step 6, that
\begin{equation*}
\left| \nabla \overline{w}_{m}(y) - (\nabla \overline{w}_{m})_{B_r(y)} \right| \leq C \left(\frac rR\right)^{\beta} \mathsf{E}_m ^{(\delta)}.
\end{equation*}
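To spell out the telescoping summation: for almost every $y$, and then for every $y$ by continuity of $\nabla \overline{w}_{m}$, the Lebesgue differentiation theorem allows us to write
\begin{equation*}
\nabla \overline{w}_{m}(y) - (\nabla \overline{w}_{m})_{B_r(y)}
= \sum_{k=0}^{\infty} \left( (\nabla \overline{w}_{m})_{B_{\theta^{k+1} r}(y)} - (\nabla \overline{w}_{m})_{B_{\theta^{k} r}(y)} \right),
\end{equation*}
and each summand is bounded by $C \theta^{-\frac d2} \left( \theta^{k} r/R \right)^{\beta} \mathsf{E}_m ^{(\delta)}$ by the supremum bound above, so summing the geometric series gives the previous display.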
Thus, if, on the one hand, $r = |x-y| \in (0,r_m]$, we get by the above two displays that
\begin{align} \notag
\left| \nabla \overline{w}_{m}(y) - \nabla \overline{w}_{m}(x) \right|
& \leq
\left| \nabla \overline{w}_{m}(y) - (\nabla \overline{w}_{m})_{B_r(y)} \right|
+\left| \nabla \overline{w}_{m}(x) - (\nabla \overline{w}_{m})_{B_r(x)} \right|
\\ \notag &
\quad + C \left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{2r}(y)} \right\|_{\underline{L}^2 \left( B_{4r}(y) \right) }
\\ \notag &
\leq C \left(\frac rR\right)^{\beta} \mathsf{E}_m ^{(\delta)}.
\end{align}
If, on the other hand, $ |x-y|> r_m$, we get
\begin{equation*}
\left| \nabla \overline{w}_{m}(y) - \nabla \overline{w}_{m}(x) \right| \leq C \mathsf{E}_m ^{(\delta)}
\end{equation*}
by noticing that
\begin{equation*}
\left| (\nabla \overline{w}_{m})_{B_{r_m}(y)} - (\nabla \overline{w}_{m})_{B_{r_m}(x)} \right| \leq C
\left\| \nabla \overline{w}_{m} - (\nabla \overline{w}_{m})_{B_{R_m + r_m }(y)} \right\|_{\underline{L}^2 \left( B_{R_m + r_m}(y) \right) } ,
\end{equation*}
and applying once more~\eqref{e.appC.cacc}. Thus we have proved~\eqref{e.appC.C1alphabarw}.
\smallskip
\emph{Step 8.}
We finally sketch the proof of~\eqref{e.appC.C1alphabarw2}. Since it is very similar to the above reasoning, we will omit most of the details. We prove the statement by induction on $m$ and $k$. First, we observe that by differentiation $\partial_{x_j}^k \overline{u}$ satisfies the equation
\begin{equation*}
-\nabla \cdot (\mathbf{b} \nabla \partial_{x_j}^k \overline{u}) = \nabla \cdot \overline{\mathbf{F}}_k(\nabla \overline{u}, \nabla \partial_{x_j} \overline{u},\ldots, \nabla \partial_{x_j}^{k-1} \overline{u}) .
\end{equation*}
Thus we can apply~\eqref{e.appC.wres1} and~\eqref{e.appC.C1alphabarw2} with $\overline{w}_k = \partial_{x_j}^k \overline{u}$ recursively and obtain, by polarization as in Lemma~\ref{l.polarization}, that, for every $k \in \{1,\ldots,n+1\}$ and $\eta \in \left[\tfrac12,1\right)$, there is a constant $C(k,\eta,\mathsf{M},\beta,d,\Lambda)$ such that
\begin{equation} \label{e.baruhigherreg}
R^{k}\left\| \nabla^{k+1} \overline{u} \right\|_{L^\infty(B_{(2+\eta)R/3})}
+
R^{k+\beta} \left[ \nabla^{k+1} \overline{u} \right]_{C^{0,\beta}(B_{(2+\eta)R/3})} \leq C \frac{1}{R} \inf_{\ell \in \mathcal{P}_1} \left\| \bar{u} - \ell \right\|_{L^2(B_{R})} .
\end{equation}
\smallskip
Next, since $\overline{w}_1$ solves $-\nabla \cdot \left( \mathbf{b} \nabla \overline{w}_1 \right) = 0$ and $z \mapsto \mathbf{b}(z) = D^2_p\overline{L}(\nabla \overline{u}(z))$ belongs to $C^{n,\beta}$ by~\eqref{e.baruhigherreg}, we may differentiate the equation at most $n$ times and obtain that $\overline{w}_1 \in C^{n+1,\beta}$; it is also straightforward to show that $\overline{w}_1$ satisfies~\eqref{e.appC.C1alphabarw2} for $m=1$. This is just classical Schauder theory.
\smallskip
We then assume that~\eqref{e.appC.C1alphabarw2} is valid for every $m \in \{1,\ldots,M\}$ and $k \in \{1,\ldots,n+1-m\}$ with some $M \in \{1,\ldots,n-1\}$, and show that it continues to hold for $m = M+1$ and $k \in \{1,\ldots,n-M\}$ as well. Now
\begin{equation*}
-\nabla \cdot (\mathbf{b} \nabla \overline{w}_{M+1} ) = \nabla \cdot \overline{\mathbf{F}}_{M+1}(\nabla \overline{u}, \nabla \overline{w}_{1},\ldots, \nabla \overline{w}_{M}).
\end{equation*}
Recalling that $(h_1,\ldots,h_{M}) \mapsto \overline{\mathbf{F}}_{M+1}(\nabla \overline{u},h_1,\ldots,h_{M})$ is a polynomial, using~\eqref{e.baruhigherreg} and~\eqref{e.appC.C1alphabarw2} for $m \in \{1,\ldots,M\}$ and $k \in \{1,\ldots,n+1-m\}$, we can deduce that
\begin{equation*}
R^{k+\beta} \left[ \nabla^k \overline{\mathbf{F}}_{M+1}(\nabla \overline{u}, \nabla \overline{w}_{1},\ldots, \nabla \overline{w}_{M}) \right]_{C^{0,\beta}(B_{(1+\eta)R/2})} \leq C \mathsf{E}_{M+1} ^{(\delta)} .
\end{equation*}
Therefore, using~\eqref{e.baruhigherreg} once more, we can differentiate the equation of $ \overline{w}_{M+1}$ $k$ times and then show that $ \overline{w}_{M+1}$ satisfies~\eqref{e.appC.C1alphabarw2}. The proof is complete.
\end{proof}
\mathbf{m}athbf{s}ection{\texorpdfstring{$C^\infty$}{{C-infty}} regularity for smooth constant-coefficient Lagrangians}
\label{a.constantcoeff}
In this section we give an alternative proof of the statement that $C^{1,1}$ regularity implies $C^\infty$ regularity for smooth, constant-coefficient Lagrangians. Our argument is similar to the classical argument by Schauder theory, but we keep track of the linearized equations to obtain a Taylor series with an explicit representation of the Taylor polynomials in terms of the linearized equations. We note that it is relatively simple to obtain real analyticity for solutions using this argument.
\begin{proposition} \label{p.cc}
Fix $\varepsilon \in (0,\frac12 ]$, $\mathsf{M} \in [0,\infty)$, $\mathsf{N} \in \mathbb{N}$, $\sigma \in [2,\infty)$, and $R \in (0,\infty]$. Suppose that $\overline{L} \in C^{\mathsf{N}+2,1}(\mathbb{R}^d)$ is uniformly convex, that is, for all $\zeta,\xi \in \mathbb{R}^d$,
\begin{equation*}
|\zeta|^2 \leq D^2 \overline{L}(\xi)\zeta \cdot \zeta \leq \Lambda |\zeta|^2 .
\end{equation*}
Let $u \in H^1(B_R)$ solve
$\nabla \cdot D\overline{L}(\nabla u) = 0$ with $\left\| \nabla u \right\|_{\underline{L}^2(B_R)} \leq \mathsf{M}$. Then there exist a constant $C(\overline{L},\mathsf{M},\mathsf{N},\varepsilon,\mathrm{data})$ and polynomials $q_1, \ldots, q_{\mathsf{N}+2}$ such that $q_{m+1}$ is a homogeneous polynomial of degree $m+1$ solving
\begin{equation*}
-\nabla \cdot \left( D^2_p\overline{L}\left( \nabla q_1 \right) \nabla \frac{q_{m+1}}{m+1} \right) = \nabla \cdot \overline{\mathbf{F}}_{m} \left(\nabla q_1 , \nabla
\frac{q_{2}}{2} ,\ldots, \nabla \frac{q_{m}}{m} \right) \quad \mbox{in } \mathbb{R}^d ,
\end{equation*}
and, for all $r \in \left(0,\frac R2\right]$,
\begin{equation*}
\left\| \nabla u - \nabla \sum_{j=1}^{\mathsf{N}+2} \frac{q_j}{j!} \right\|_{\underline{L}^{\sigma}(B_{r})} \leq C \left( \frac{r}{R} \right)^{\mathsf{N} + 2-\varepsilon}.
\end{equation*}
\end{proposition}
\begin{proof}
Without loss of generality, we may take $R=1$.
By $C^{1,1}$-estimates, see for example~\cite[Proposition A.1]{AFK}, we have that
\begin{equation} \label{e.C11app}
\left| \nabla u(0) \right| \leq C \mathsf{M} \quad \mbox{and} \quad \sup_{r \in (0,3/4)}r^{-1} \left\| \nabla u - \nabla u(0) \right\|_{\underline{L}^{\infty} \left( B_{r} \right)} \leq C .
\end{equation}
We set
\begin{equation*}
q_0 = u(0) \quad \mbox{and} \quad q_1(x) = \nabla u(0) \cdot x.
\end{equation*}
Assume then inductively that, for $m \in \{1,\ldots,n\}$, there exist homogeneous polynomials $q_m$ of degree $m$ such that, for every $\sigma \in [2,\infty)$ and $\varepsilon \in (0,\tfrac12]$, there exists a constant $\mathsf{N}_{m,\sigma}(\varepsilon,d,\Lambda)$ such that
\begin{equation*}
\left| \nabla^{m} q_{m} \right| \leq \mathsf{N}_{m,2}, \qquad \sup_{r \in \left( 0, \frac {2m+1}{4 m} \right)} r^{-m+\varepsilon} \left\| \nabla u - \nabla \sum_{j=1}^m \frac{q_j}{j!} \right\|_{\underline{L}^{\sigma}(B_r)} \leq \mathsf{N}_{m,\sigma},
\end{equation*}
and that, for $m \in \{1,\ldots,n-1\}$, $q_{m+1}$ satisfies the equation
\begin{equation*}
-\nabla \cdot \left( D^2_p\overline{L}\left( \nabla q_1 \right) \nabla \frac{q_{m+1}}{m+1} \right) = \nabla \cdot \overline{\mathbf{F}}_{m} \left(\nabla q_1 , \nabla
\frac{q_{2}}{2} ,\ldots, \nabla \frac{q_{m}}{m} \right) \quad \mbox{in } \mathbb{R}^d .
\end{equation*}
Let us denote ${\overbracket[1pt][-1pt]{\mathbf{a}}}:= D^2_p\overline{L}\left( \nabla q_1 \right)$ and, for $m \in \{1,\ldots,n-1\}$,
\begin{equation*}
\overline{w}_{m} := \frac{q_{m+1}}{m+1} .
\end{equation*}
\smallskip
By homogeneity, we find a homogeneous polynomial $q_{n+1}$ of degree $n+1$ solving the equation
\begin{equation*}
-\nabla \cdot \left( D^2_p\overline{L}\left( \nabla q_1 \right) \nabla \frac{q_{n+1}}{n+1} \right)
=
\nabla \cdot \overline{\mathbf{F}}_{n} \left(\nabla q_1 , \nabla \frac{q_{2}}{2} ,\ldots, \nabla \frac{q_{n}}{n} \right) \quad \mbox{in } \mathbb{R}^d .
\end{equation*}
Notice that there is a degree of freedom in the choice of $q_{n+1}$: the solution is unique only up to an ${\overbracket[1pt][-1pt]{\mathbf{a}}}$-harmonic polynomial of degree $n+1$. We will fix this choice shortly. To draw parallels between this appendix and Appendix~\ref{app.linerrors}, we set
\begin{equation*}
\overline{w}_{n} := \frac{q_{n+1}}{n+1} \quad \mbox{and} \quad \xi_m := u - q_1 - \sum_{k=1}^{m} \frac{\overline{w}_k}{k!} = u - \sum_{k=1}^{m+1} \frac{q_k}{k!}.
\end{equation*}
We rewrite
\begin{align} \notag
\lefteqn{
D_p \overline{L}\left( \nabla u \right) - D_p \overline{L}\left( \nabla q_1 \right)
-
D^2 \overline{L}\left( \nabla q_1 \right) \nabla \xi_n
} \quad &
\\ \notag &
=
\sum_{m=1}^{n} \frac{1}{m!} \left(D^2 \overline{L}\left( \nabla q_1 \right) \nabla \overline{w}_{m}
+ \overline{\mathbf{F}}_m \left(\nabla q_1,\nabla \overline{w}_1,\ldots,\nabla \overline{w}_{m-1} \right) \right)
+ \overline{\mathbf{E}}_{n} ,
\end{align}
where
\begin{align} \notag
\overline{\mathbf{E}}_{n} & := \sum_{m=2}^{n} \frac{1}{m!} \left( D_p^{m+1} \overline{L}(\nabla q_1) (\nabla u - \nabla q_1)^{\otimes m} - \overline{\mathbf{F}}_{m} \left(\nabla q_1 , \nabla \overline{w}_{1},\ldots, \nabla \overline{w}_{m-1} \right) \right)
\\ \notag &
\quad
+ D_p \overline{L}(\nabla u) - \sum_{k=0}^{n+1} \frac1{k!} D_p^{k+1} \overline{L}(\nabla q_1) (\nabla u - \nabla q_1)^{\otimes k} .
\end{align}
By the estimates in Appendix~\ref{app.linerrors}, we have that
\begin{align*}
\left| \overline{\mathbf{E}}_{n} \right|
&
\leq
C \sum_{h=0}^{n-1} \left| \nabla \xi_h \right| \left( \left| \nabla \xi_0 \right| + \sum_{i=1}^{n-1} \left| \nabla \frac{\overline{w}_{i} }{i!} \right|^{\frac{1}{i}} \right)^{n-h}
.
\end{align*}
Taking the divergence and using the equations of $u$ and $\overline{w}_{1},\ldots, \overline{w}_{n}$, we obtain
\begin{equation*}
-\nabla \cdot \left( {\overbracket[1pt][-1pt]{\mathbf{a}}} \nabla \xi_n \right) = \nabla \cdot \overline{\mathbf{E}}_{n}.
\end{equation*}
Using the induction assumption, we get that
\begin{equation*}
\left\| \overline{\mathbf{E}}_{n} \right\|_{\underline{L}^{\sigma}(B_r)} \leq C r^{n+1 - \varepsilon}.
\end{equation*}
Now Lemma~\ref{l.Aharmdecay} below allows us to identify the homogeneous ${\overbracket[1pt][-1pt]{\mathbf{a}}}$-harmonic polynomial part of $q_{n+1}$ of degree $n+1$ such that
\begin{equation*}
\sup_{r \in \left( 0, \frac {2(n+1)+1}{4(n+1)} \right)} r^{-(n+1)+\varepsilon} \left\| \nabla u - \nabla \sum_{j=1}^{n+1} \frac{q_j}{j!} \right\|_{\underline{L}^{\sigma}(B_r)} \leq C
\end{equation*}
and
\begin{equation*}
\left| \nabla^{n+1} q_{n+1} \right| \leq C.
\end{equation*}
This proves the induction step and finishes the proof.
\end{proof}
\begin{lemma} \label{l.Aharmdecay}
Let $n \in \mathbb{N}$, $\alpha \in (n,n+1)$, $p \in (1,\infty)$, $\mathsf{M} \in [0,\infty)$, and $\varepsilon \in (0,1)$.
Suppose that $\mathbf{A}$ is a constant symmetric matrix with eigenvalues in the interval $[1,\Lambda]$. There is a constant $C(n,\alpha,\varepsilon,p,d,\Lambda)$ such that if $\mathbf{F} \in L^p(B_1)$ and $u \in H^1(B_1)$ solve
\begin{equation*}
\nabla \cdot \mathbf{A} \nabla u= \nabla \cdot \mathbf{F}
\end{equation*}
and $\mathbf{F}$ satisfies, for every $r \in (0,1)$,
\begin{equation} \label{e.Aharmdecay.cond1}
\left\| \mathbf{F} \right\|_{\underline{L}^p(B_r)} \leq \mathsf{M} r^{\alpha},
\end{equation}
then there is an $\mathbf{A}$-harmonic polynomial $q \in \mathcal{P}_{n+1}$ such that
\begin{equation*}
\sup_{r \in (0,1-\varepsilon)} r^{-\alpha} \left\| \nabla u - \nabla q \right\|_{\underline{L}^p(B_r)} \leq C \left( \left\| \nabla u \right\|_{\underline{L}^2(B_1)} + \mathsf{M}\right).
\end{equation*}
\end{lemma}
\begin{proof}
We proceed via harmonic approximation. For $r \in (0,1)$, let $v_r \in u + H_0^1(B_r)$ be $\mathbf{A}$-harmonic, and denote
\begin{equation*}
\mathcal{A}^{\mathrm{hom}}_{n+1} := \{ p \in \mathcal{P}_{n+1} \, : \, p \mbox{ is } \mathbf{A}\mbox{-harmonic}\}.
\end{equation*}
By Calder\'on--Zygmund estimates and~\eqref{e.Aharmdecay.cond1},
\begin{equation*}
\left\| \nabla v_r - \nabla u \right\|_{\underline{L}^p(B_r)} \leq C \mathsf{M} r^{\alpha}.
\end{equation*}
Using the oscillation decay estimate
\begin{equation*}
\inf_{ \widetilde q \in \mathcal{A}^{\mathrm{hom}}_{n+1}} \left\| \nabla v_r- \nabla \widetilde q \right\|_{L^\infty(B_{\theta r})} \leq C \theta^{n+1} \inf_{ \widetilde q \in \mathcal{A}^{\mathrm{hom}}_{n+1}} \left\| \nabla v_r- \nabla \widetilde q \right\|_{\underline{L}^2(B_r)}
\end{equation*}
and defining
\begin{equation*}
D(r) := r^{-\alpha} \inf_{ \widetilde q \in \mathcal{A}^{\mathrm{hom}}_{n+1}} \left\| \nabla u - \nabla \widetilde q \right\|_{\underline{L}^p(B_{r})},
\end{equation*}
we obtain by the triangle inequality, for $\theta \in (0,1)$ chosen so that $ C \theta^{n+1-\alpha} = \frac12$, that
\begin{equation*}
D(\theta r) \leq \frac12 D(r) + C \mathsf{M} .
\end{equation*}
It follows by reabsorption that, for $r \in \left( 0,1\right)$,
\begin{equation*}
\sup_{t \in (0,r)} D(t) \leq C \left(D(r) + \mathsf{M} \right) .
\end{equation*}
In particular, letting $\widetilde q_t $ be the minimizing element of $\mathcal{A}^{\mathrm{hom}}_{n+1}$ in the definition of $D(t)$, we get by the triangle inequality that
\begin{equation*}
t^{-n}\left\| \nabla \widetilde q_{t/2} - \nabla \widetilde q_t \right\|_{\underline{L}^p(B_{t})} \leq C t^{\alpha-n} \left(D(t/2) + D(t) \right) \leq C t^{\alpha-n} \left(D(r) + \mathsf{M} \right) .
\end{equation*}
This allows us to identify $q \in \mathcal{A}^{\mathrm{hom}}_{n+1}$ such that, for $t \in (0,r)$,
\begin{equation*}
\sum_{j=1}^{n+1} t^{n+1-j} \left| \nabla^{j} \widetilde q_{t}(0) - \nabla^{j} q(0) \right| \leq C t^{\alpha-n} \left(D(r) + \mathsf{M} \right) ,
\end{equation*}
and it follows that, for $r \in (0,1-\varepsilon)$,
\begin{equation*}
\left\| \nabla u - \nabla q \right\|_{\underline{L}^p(B_{r})} \leq C r^{\alpha} \left(D(1-\varepsilon) + \mathsf{M} \right) .
\end{equation*}
The proof is completed by the easy estimate $ D(1-\varepsilon) \leq C_\varepsilon \left(\left\| \nabla u \right\|_{\underline{L}^2 \left( B_{1} \right)} + \mathsf{M} \right)$.
\end{proof}
\subsection*{Acknowledgments}
SA was partially supported by the NSF grant DMS-1700329. SF was partially supported by NSF grants DMS-1700329 and DMS-1311833. TK was supported by the Academy of Finland and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.~818437).
\small
\bibliographystyle{abbrv}
\bibliography{linearization}
\end{document} |
\begin{document}
\setcounter{page}{1}
\title[Local and 2-local derivations of locally finite simple Lie algebras]{Local and 2-local derivations of locally finite simple Lie algebras}
\author[ Ayupov Sh.A., Kudaybergenov K.K., Yusupov B.B. ]{Shavkat Ayupov$^{1,3}$, Karimbergen Kudaybergenov$^2$, Bakhtiyor Yusupov$^3$}
\address{$^1$ V.I.Romanovskiy Institute of Mathematics\\
Uzbekistan Academy of Sciences, 81 \\ Mirzo Ulughbek street, 100170 \\
Tashkent, Uzbekistan}
\address{$^2$ Department of Mathematics, Karakalpak State University, 1, Academician Ch.~Abdirov street, 230113, Nukus, Uzbekistan}
\address{$^3$ National University of Uzbekistan, 4, University street, 100174, Tashkent, Uzbekistan}
\email{\textcolor[rgb]{0.00,0.00,0.84}{sh$_{-}$ayupov@mail.ru, shavkat.ayupov@mathinst.uz}}
\email{\textcolor[rgb]{0.00,0.00,0.84}{karim2006@mail.ru}}
\email{\textcolor[rgb]{0.00,0.00,0.84}{baxtiyor\_yusupov\_93@mail.ru}}
\maketitle
\begin{abstract} In the present paper we study local and 2-local derivations of locally finite split simple Lie algebras. Namely, we show that every local and 2-local derivation on such a Lie algebra is a derivation.
\end{abstract}
{\it Keywords:} Lie algebras, locally finite simple Lie algebras, derivation, local derivation, 2-local derivation.
\\
{\it AMS Subject Classification:} 17B65, 17B20, 16W25.
\section{Introduction}
The notion of a local derivation was first introduced in 1990 by R.\,V.~Kadison \cite{Kadison} and by D.\,R.~Larson and A.\,R.~Sourour \cite{Larson}.
A linear operator $\Delta$ on an algebra $\mathcal{A}$ is called a \textit{local derivation} if for every $x\in\mathcal{A}$ there exists a derivation $D_x$ (depending on $x$) such that $\Delta(x)=D_x(x).$ The main problems concerning this notion are to find conditions under which local derivations become derivations and to present examples of algebras with local derivations that are not derivations.
R.\,V.~Kadison proved that each continuous local derivation of a von Neumann algebra $M$ into a dual Banach $M$-bimodule is a derivation.
The investigation of local and 2-local derivations on finite-dimensional Lie algebras was initiated in the papers \cite{Ayupov7, AyuKudRak}. In \cite{Ayupov7} the first two authors proved that every local derivation on a semi-simple Lie algebra is a derivation and gave examples of nilpotent finite-dimensional Lie algebras with local derivations which are not derivations. In \cite{Ayupov6} local derivations of solvable Lie algebras are investigated and it is shown that any local derivation of a solvable Lie algebra with model nilradical is a derivation.
In 1997, P.\v{S}emrl \cite{Sem} introduced the notion of 2-local derivations and
2-local automorphisms on algebras. Namely, a map \(\nabla : \mathcal{A} \to
\mathcal{A}\) (not necessarily linear) on an algebra \(\mathcal{A}\) is called a \textit{2-local
derivation}, if for every pair of elements \(x,y \in \mathcal{A}\) there exists a
derivation \(D_{x,y} : \mathcal{A} \to \mathcal{A}\) such that
\(D_{x,y} (x) = \nabla(x)\) and \(D_{x,y}(y) = \nabla(y).\) The notion of 2-local automorphism is given in a similar way. For a
given algebra \(\mathcal{A}\), the main problem concerning
these notions is to prove that they automatically become a
derivation (respectively, an automorphism) or to give examples of
local and 2-local derivations or automorphisms of \(\mathcal{A},\)
which are not derivations or automorphisms, respectively.
Solutions of such problems for finite-dimensional Lie algebras over an
algebraically closed field of characteristic zero were obtained in
\cite{AyuKud, AyuKudRak, ChenWang}. Namely, in
\cite{AyuKudRak} it was proved that every 2-local derivation on a
semi-simple Lie algebra \(\mathcal{L}\) is a derivation and that
each finite-dimensional nilpotent Lie algebra with dimension
larger than two admits a 2-local derivation which is not a
derivation.
Concerning 2-local
automorphisms, Z.~Chen and D.~Wang in \cite{ChenWang} proved that if \(\mathcal{L}\) is
a simple Lie algebra of type $A_{l},D_{l}$ or $E_{k}$ $(k = 6, 7,
8)$ over an algebraically closed field of characteristic zero,
then every 2-local automorphism of \(\mathcal{L}\) is an automorphism. Finally,
in \cite{AyuKud} it was proved that every 2-local automorphism of a
finite-dimensional semi-simple Lie algebra over an algebraically
closed field of characteristic zero is an automorphism. Moreover,
it was also shown there that every finite-dimensional nilpotent Lie algebra of
dimension larger than two admits 2-local automorphisms which are
not automorphisms.
In \cite{Ayupov8, AyuYus} the authors studied 2-local derivations of infinite-dimensional Lie algebras over a field of characteristic zero and proved that all 2-local derivations of the Witt algebra as well as of the positive Witt algebra and the classical one-sided Witt algebra are (global) derivations and every 2-local derivation on Virasoro algebras is a derivation. In \cite{AyuKudYus} we have proved that every 2-local derivation on the generalized Witt algebra $W_n(\mathbb{F})$ over the vector space $\mathbb{F}^n$ is a derivation.
In \cite{YangKai} Y.Chen, K.Zhao and Y.Zhao studied local derivations on generalized Witt algebras. They proved that every local derivation on Witt algebras is a derivation and that every local derivation on a centerless generalized Virasoro algebra of higher rank is a derivation.
In the present paper we study local and 2-local derivations of locally finite split simple Lie algebras.
\section{Preliminaries}
In this section we give some necessary definitions and preliminary results (for details see \cite{Neeb2005, Neeb2001}).
A Lie algebra $\mathfrak{g}$ over a field $\mathbb{F}$ is a vector space over $\mathbb{F}$ with a bilinear
mapping $\mathfrak{g}\times\mathfrak{g}\rightarrow\mathfrak{g}$ denoted $(x,y)\mapsto[x,y]$ and called the bracket of $\mathfrak{g}$ and satisfying:
$$
[x,x] =0 ,\ \ \ \ \forall x\in\mathfrak{g},
$$
$$
[[x,y],z]+[[y,z],x]+[[z,x],y]=0, \ \ \ \ \forall x,y,z\in\mathfrak{g}.
$$
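As a standard illustration (included here only as an example), any associative algebra becomes a Lie algebra under the commutator bracket. For instance, the space $M_n(\mathbb{F})$ of $n\times n$ matrices equipped with
$$
[x,y]:=xy-yx, \ \ \ \ x,y\in M_n(\mathbb{F}),
$$
satisfies both axioms: $[x,x]=xx-xx=0$, and the Jacobi identity follows by expanding each summand, e.g. $[[x,y],z]=xyz-yxz-zxy+zyx$, and observing that the resulting twelve terms cancel in pairs.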
A Lie algebra $\mathfrak{g}$ is said
to be \textit{solvable} if $\mathfrak{g}^{(k)}=\{0\}$ for some
integer $k,$ where $\mathfrak{g}^{(0)}=\mathfrak{g},$
$\mathfrak{g}^{(k)}=\Big[\mathfrak{g}^{(k-1)}, \mathfrak{g}^{(k-1)}\Big],\,
k\geq1.$ Any Lie algebra $\mathfrak{g}$ contains a unique maximal
solvable ideal, called the radical of $\mathfrak{g}$ and denoted by
$\mbox{Rad}\,\mathfrak{g}.$ A non-trivial Lie algebra $\mathfrak{g}$
is called \textit{semisimple} if $\mbox{Rad}\,\mathfrak{g}=0.$ This
is equivalent to requiring that $\mathfrak{g}$ have no nonzero
abelian ideals. A Lie algebra $\mathfrak{g}$ is \textit{simple} if it has no non-trivial ideals and is not abelian.
We say that a Lie algebra $\mathfrak{g}$ has a \textit{root decomposition} with respect to an abelian subalgebra $\mathfrak{h},$ if
$$
\mathfrak{g}=\mathfrak{h}\oplus\bigoplus\limits_{\alpha\in \mathfrak{R}}\mathfrak{g}_{\alpha},
$$
where $\mathfrak{g}_{\alpha}=\Big\{x\in\mathfrak{g}: [h,x]=\alpha(h)x,\,\, \forall h\in\mathfrak{h}\Big\}$
and $\mathfrak{R}=\Big\{\alpha\in\mathfrak{h}^*\setminus\{0\}:\mathfrak{g}_{\alpha}\neq\{0\}\Big\}$
is the corresponding root system and $\mathfrak{h}^*$ is the space of all linear functionals on $\mathfrak{h}.$
In this case, $\mathfrak{h}$ is called \textit{splitting Cartan subalgebra} of $\mathfrak{g},$ and $\mathfrak{g}$
respectively the pair $(\mathfrak{g},\mathfrak{h})$ is called \textit{split} Lie algebra.
Suppose that $\mathfrak{g}$ is a Lie algebra over $\mathbb{F}$ which is a directed
union of finite-dimensional simple Lie algebras, that is, $\mathfrak{g}=\lim\limits_{\longrightarrow}\mathfrak{g}_\alpha$
is the direct limit of a family $(\mathfrak{g}_\alpha)_{\alpha\in A}$ of finite-dimensional simple Lie algebras $\mathfrak{g}_\alpha$
which are subalgebras of $\mathfrak{g}$ and the directed order $\leq$ on the index set $A$ is given
by $\alpha\leq \beta$ if $\mathfrak{g}_\alpha\leq \mathfrak{g}_\beta.$
A Lie algebra $\mathfrak{g}$ of this form is said to be a locally finite simple Lie algebra.
Now following \cite{Neeb2005} we give a description of locally
finite split simple Lie algebras.
For a set $\mathfrak{J}$ we denote by $M_\mathfrak{J}(\mathbb{F})=\mathbb{F}^{\mathfrak{J}\times\mathfrak{J}}$
the set of all
$\mathfrak{J}\times\mathfrak{J}$-matrices with entries in $\mathbb{F}.$
Let $M_\mathfrak{J}(\mathbb{F})_{rc-fin}\subseteq M_\mathfrak{J}(\mathbb{F})$ be
the set of all $\mathfrak{J}\times\mathfrak{J}$-matrices
with at most finitely many non-zero entries in each row and each column, and let $\mathfrak{gl}_{\mathfrak{J}}(\mathbb{F})$ denote the subspace
consisting of all matrices with at most finitely many non-zero entries.
The matrix product $xy$ is defined if at least one factor is in $\mathfrak{gl}_{\mathfrak{J}}(\mathbb{F})$ and the other
is in $M_\mathfrak{J}(\mathbb{F}).$
In particular, $\mathfrak{gl}_{\mathfrak{J}}(\mathbb{F})$ thus inherits the structure of a locally finite Lie algebra via $[x,y]:=xy-yx$ and
\begin{equation*}\begin{split}
\mathfrak{sl}_{\mathfrak{J}}(\mathbb{F})=\left\{x\in \mathfrak{gl}_{\mathfrak{J}}(\mathbb{F}): tr(x)=0\right\}
\end{split}\end{equation*}
is a simple Lie algebra.
For $i,j\in \mathfrak{J}$ denote by $e_{i,j}$ a matrix unit defined as
$$
e_{i,j}:\mathfrak{J}\times\mathfrak{J}\rightarrow\mathbb{F},\ \ (k,s)\longmapsto\delta_{ik}\delta_{sj},
$$
where $\delta_{i,j}$ is the Kronecker symbol.
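These matrix units multiply according to the rule $e_{i,j}e_{k,s}=\delta_{jk}e_{i,s}$ (whenever the product is defined), so that their brackets are
$$
[e_{i,j},e_{k,s}]=\delta_{jk}e_{i,s}-\delta_{si}e_{k,j}, \ \ \ \ i,j,k,s\in\mathfrak{J},
$$
a standard identity which underlies the computations with matrices of the form $x=\sum\limits_{i,j\in \mathfrak{J}} x_{i,j}e_{i,j}$ carried out below.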
Set $2\mathfrak{J}:=\mathfrak{J}\,\dot{\cup}-\mathfrak{J},$ where $-\mathfrak{J}$ denotes a copy of $\mathfrak{J}$ whose elements are denoted by $-i\,(i\in\mathfrak{J})$ and consider the $2\mathfrak{J}\times2\mathfrak{J}$-matrices
\begin{equation*}
q_1=\sum\limits_{i\in\mathfrak{J}}\left(e_{i,-i}+e_{-i,i}\right)\ \ \text{and} \ \ q_2=\sum\limits_{i\in\mathfrak{J}}\left(e_{i,-i}-e_{-i,i}\right).
\end{equation*}
Set
\begin{equation*}
\mathfrak{o}_{\mathfrak{J},\mathfrak{J}}(\mathbb{F})=\left\{x\in\mathfrak{gl}_{2\mathfrak{J}}(\mathbb{F}):\ x^{\top} q_1+q_1x=0\right\}
\end{equation*}
and
\begin{equation*}
\mathfrak{sp}_{\mathfrak{J}}(\mathbb{F})=\left\{x\in\mathfrak{gl}_{2\mathfrak{J}}(\mathbb{F}):\ x^{\top} q_2+q_2x=0\right\}.
\end{equation*}
By \cite[Theorem IV.6]{Neeb2001}
every infinite dimensional locally finite split simple Lie algebra is isomorphic
to one of the Lie algebras $\mathfrak{sl}_{\mathfrak{J}}(\mathbb{F}), \mathfrak{o}_{\mathfrak{J},\mathfrak{J}}(\mathbb{F}), \mathfrak{sp}_{\mathfrak{J}}(\mathbb{F}),$ where $\mathfrak{J}$ is an infinite set with
$\textrm{card}\mathfrak{J} = \dim\mathfrak{g}.$
In the next section we shall use the following description of algebras of derivations of locally finite simple Lie algebras \cite[Theorem I.3]{Neeb2005}:
\begin{equation*}
\begin{split}
der\left(\mathfrak{sl}_{\mathfrak{J}}(\mathbb{F})\right) & \cong M_\mathfrak{J}(\mathbb{F})_{rc-fin}/\mathbb{F}\mathbf{1},\\
der\left(\mathfrak{o}_{\mathfrak{J},\mathfrak{J}}(\mathbb{F})\right) & \cong \left\{x\in M_{\mathfrak{J}}(\mathbb{F})_{rc-fin}:x^\top q_1+q_1x=0\right\},\\
der\left(\mathfrak{sp}_\mathfrak{J}(\mathbb{F})\right) & \cong \left\{x\in M_{\mathfrak{J}}(\mathbb{F})_{rc-fin}:x^\top q_2+q_2x=0\right\},
\end{split}
\end{equation*}
where $\mathbf{1}=(\delta_{ij})$ is the identity matrix in
$M_\mathfrak{J}(\mathbb{F}).$ In particular, any derivation $D$ on $\mathfrak{sl}_{\mathfrak{J}}(\mathbb{F})$ is represented as
\begin{equation}\label{dersplit}
D(x)=[a,x],\,\, x\in \mathfrak{sl}_{\mathfrak{J}}(\mathbb{F}),
\end{equation}
where $a\in M_\mathfrak{J}(\mathbb{F})_{rc-fin}.$ Further, in the cases of the algebras $\mathfrak{o}_{\mathfrak{J},\mathfrak{J}}(\mathbb{F})$ and $\mathfrak{sp}_{\mathfrak{J}}(\mathbb{F})$, the element $a\in M_\mathfrak{J}(\mathbb{F})_{rc-fin}$ needs to satisfy the conditions
$a^\top q_1+q_1 a=0$ and $a^\top q_2+q_2 a=0,$ respectively.
\section{Main results}
\subsection{Local derivation on the locally finite split simple Lie algebras}
\
The main result of this subsection is given as follows.
\begin{theorem}\label{th21}
Let $\mathfrak{g}$ be a locally
finite split simple Lie algebra over a field of characteristic zero. Then any
local derivation on $\mathfrak{g}$ is a derivation.
\end{theorem}
For the proof we need several lemmata, and from now on
$\mathfrak{g}$ is one of the algebras
$\mathfrak{sl}_{\mathfrak{J}}(\mathbb{F}), \mathfrak{o}_{\mathfrak{J},\mathfrak{J}}(\mathbb{F}), \mathfrak{sp}_{\mathfrak{J}}(\mathbb{F})$ (see the end of the
previous section).
Any $x\in M_\mathfrak{J}(\mathbb{F})_{rc-fin}$ can be uniquely represented as
$$
x=\sum\limits_{i,j\in \mathfrak{J}} x_{i,j}e_{i,j},
$$
where $x_{i,j}\in \mathbb{F}$ for all $i,j\in \mathfrak{J}.$
For a subset $\mathfrak{I} \subset \mathfrak{J}$ we shall identify the algebra $M_\mathfrak{I}(\mathbb{F})_{rc-fin}$
with the subalgebra in $M_\mathfrak{J}(\mathbb{F})_{rc-fin},$ consisting of elements of the form
$
x=\sum\limits_{i,j\in \mathfrak{I}} x_{i,j}e_{i,j},
$
where $x_{i,j}\in \mathbb{F}$ for all $i,j\in \mathfrak{I}.$
Further, for a finite subset $\mathfrak{I} \subset \mathfrak{J}$
we define a projection mapping $\pi_\mathfrak{I}:M_\mathfrak{J}(\mathbb{F})_{rc-fin}\rightarrow M_\mathfrak{I}(\mathbb{F})_{rc-fin}$ as follows
\begin{equation*}
\pi_\mathfrak{I}(x)=\sum\limits_{i,j\in \mathfrak{I}} x_{i,j}e_{i,j},
\end{equation*}
where $x=\sum\limits_{i,j\in \mathfrak{J}} x_{i,j}e_{i,j}.$
\begin{lemma}\label{123}
$$
\pi_\mathfrak{I}\left([x,y]\right)=\left[\pi_\mathfrak{I}(x), \pi_\mathfrak{I}(y)\right]
$$
for all $x\in M_\mathfrak{J}(\mathbb{F})_{rc-fin}$ and $y\in M_\mathfrak{I}(\mathbb{F})_{rc-fin}.$
\end{lemma}
\begin{proof}
Note that each matrix $x\in M_\mathfrak{J}(\mathbb{F})_{rc-fin}$ is represented as $x=x_{\mathfrak{I},\mathfrak{I}}+x_{\mathfrak{I},\mathfrak{K}}+x_{\mathfrak{K},\mathfrak{I}}+x_{\mathfrak{K}, \mathfrak{K}},$
where $x_{\mathfrak{I},\mathfrak{I}}=\sum\limits_{i,j\in \mathfrak{I}} x_{i,j}e_{i,j},$ $x_{\mathfrak{I},\mathfrak{K}}=\sum\limits_{i\in \mathfrak{I}, j\in \mathfrak{K}} x_{i,j}e_{i,j},$
$x_{\mathfrak{K},\mathfrak{I}}=\sum\limits_{i\in \mathfrak{K}, j\in \mathfrak{I}} x_{i,j}e_{i,j},$
$x_{\mathfrak{K},\mathfrak{K}}=\sum\limits_{i,j\in \mathfrak{K}} x_{i,j}e_{i,j}$ and $\mathfrak{K}=\mathfrak{J}\setminus\mathfrak{I}.$
Take the matrices $x=x_{\mathfrak{I},\mathfrak{I}}+x_{\mathfrak{I},\mathfrak{K}}+x_{\mathfrak{K},\mathfrak{I}}+x_{\mathfrak{K}, \mathfrak{K}}\in M_\mathfrak{J}(\mathbb{F})_{rc-fin}$
and $y=y_{\mathfrak{I},\mathfrak{I}}\in M_\mathfrak{I}(\mathbb{F})_{rc-fin}.$
Then
\begin{equation*}\begin{split}
\pi_\mathfrak{I}\left([x,y]\right)&=\pi_\mathfrak{I}\left(\left[x_{\mathfrak{I},\mathfrak{I}}+x_{\mathfrak{I},\mathfrak{K}}+x_{\mathfrak{K},\mathfrak{I}}+x_{\mathfrak{K}, \mathfrak{K}},
y_{\mathfrak{I},\mathfrak{I}}\right]\right)\\
&=
\pi_\mathfrak{I}\left([x_{\mathfrak{I},\mathfrak{I}}, y_{\mathfrak{I},\mathfrak{I}}]\right)+\pi_\mathfrak{I}\left(\left[x_{\mathfrak{I},\mathfrak{K}}+x_{\mathfrak{K},\mathfrak{I}}+x_{\mathfrak{K}, \mathfrak{K}},
y_{\mathfrak{I},\mathfrak{I}}\right]\right)\\
&=[x_{\mathfrak{I},\mathfrak{I}}, y_{\mathfrak{I},\mathfrak{I}}]
=
\left[\pi_\mathfrak{I}(x), \pi_\mathfrak{I}(y)\right].
\end{split}\end{equation*}
\end{proof}
For a subset $\mathfrak{I} \subset \mathfrak{J}$ denote by $\mathfrak{g}_\mathfrak{I}$ the subalgebra in $\mathfrak{g}$ consisting of elements of the form
$
x=\sum\limits_{i,j\in \mathfrak{I}} x_{i,j}e_{i,j}\in \mathfrak{g},
$
where $x_{i,j}\in \mathbb{F}$ for all $i,j\in \mathfrak{I}.$
It is clear that the restriction $\pi_\mathfrak{I}|_{\mathfrak{g}}$ of $\pi_\mathfrak{I}$ on $\mathfrak{g}$ maps $\mathfrak{g}$ onto $\mathfrak{g}_\mathfrak{I}.$
\begin{lemma}\label{resder}
Let $\Delta$ be a local derivation on $\mathfrak{g}.$ Then the mapping $\Delta_\mathfrak{I}$ on $\mathfrak{g}_\mathfrak{I}$ defined as
$$
\Delta_\mathfrak{I}(x)=\pi_\mathfrak{I}(\Delta(x)),\ x\in\mathfrak{g}_\mathfrak{I}
$$
is a local derivation for every finite subset $\mathfrak{I}$ of $\mathfrak{J}.$
\end{lemma}
\begin{proof} Let $x\in \mathfrak{g}_\mathfrak{I}$ be an arbitrary element. By \eqref{dersplit} there is an element $a_x\in M_\mathfrak{J}(\mathbb{F})_{rc-fin}$ such that $\left[a_x, \mathfrak{g}\right]\subseteq \mathfrak{g}$ and
$
\Delta(x)=[a_x,x].
$
Then by Lemma \ref{123}
\begin{equation*}
\Delta_\mathfrak{I}(x)=\pi_\mathfrak{I}(\Delta(x))=\pi_\mathfrak{I}([a_x,x])=
[\pi_\mathfrak{I}(a_x),\pi_\mathfrak{I}(x)]=[\pi_\mathfrak{I}(a_x),x].
\end{equation*}
Thus $\Delta_\mathfrak{I}$ is a local derivation. \end{proof}
{\it Proof of Theorem \ref{th21}.} Let $\Delta$ be a local derivation on $\mathfrak{g}$ and let $x, y\in \mathfrak{g}$ be arbitrary elements. Take a finite subset $\mathfrak{I}$ of $\mathfrak{J}$ such that $x, y, \Delta(x), \Delta(y), \Delta([x,y])\in\mathfrak{g}_\mathfrak{I}.$
By Lemma~\ref{resder}, $\Delta_\mathfrak{I}$ is a local derivation of $\mathfrak{g}_\mathfrak{I}.$ Since $\mathfrak{g}_\mathfrak{I}$ is a finite dimensional simple Lie algebra, by \cite[Theorem 3.1]{Ayupov7} $\Delta_\mathfrak{I}$ is a derivation. Hence,
\begin{equation*}\begin{split}
\Delta([x,y]) &=\Delta_\mathfrak{I}([x,y])=[\Delta_\mathfrak{I}(x),y]+[x,\Delta_\mathfrak{I}(y)]=[\Delta(x),y]+[x,\Delta(y)].
\end{split}
\end{equation*}
This means that $\Delta$ is a derivation.
\subsection{2-local derivations on the locally finite split simple Lie algebras}
\
In this subsection we study 2-local derivations on the locally finite split simple Lie algebras.
Recall that a bilinear form $\kappa$ on $\mathfrak{g}$ is said to be non-degenerate if $\kappa(x, y)=0$ for
all $y\in \mathfrak{g}$ implies that $x=0.$
We shall use the following results from \cite{Neeb2005}.
\begin{proposition}
There exists a nondegenerate invariant symmetric bilinear
form $\kappa$ on $\mathfrak{g}.$
\end{proposition}
\begin{proposition}
Every invariant symmetric bilinear form $\kappa$ on $\mathfrak{g}$ is invariant under all derivations of $\mathfrak{g}.$
\end{proposition}
The main result of this subsection is the following.
\begin{theorem}\label{th32}
Let $\mathfrak{g}$ be a locally finite split simple Lie algebra over a field of characteristic zero. Then any
2-local derivation on $\mathfrak{g}$ is a derivation.
\end{theorem}
\begin{proof}
Let $\nabla$ be a 2-local derivation on $\mathfrak{g}.$ We first show that $\nabla$ is linear.
Let $x,y,z\in \mathfrak{g}$ be arbitrary elements. Taking into account invariance of $\kappa$ we obtain
\begin{equation*}\begin{split}
\kappa(\nabla(x+y),z)&=\kappa(D_{x+y,z}(x+y),z)=-\kappa(x+y,D_{x+y,z}(z))\\
&=-\kappa(x+y,\nabla(z))=-\kappa(x,\nabla(z))-\kappa(y,\nabla(z))\\
&=-\kappa(x,D_{x,z}(z))-\kappa(y,D_{y,z}(z))=\kappa(D_{x,z}(x),z)\\
&+\kappa(D_{y,z}(y),z)=\kappa(\nabla(x),z)+\kappa(\nabla(y),z)\\
&=\kappa(\nabla(x)+\nabla(y),z),
\end{split}
\end{equation*}
i.e.
\begin{equation*}
\kappa(\nabla(x+y),z)=\kappa(\nabla(x)+\nabla(y),z).
\end{equation*}
Since $\kappa$ is non-degenerate, the last equality implies that
\begin{equation*}
\nabla(x+y)=\nabla(x)+\nabla(y)\ \ \ \text{for}\ x,y\in\mathfrak{g}.
\end{equation*}
Further,
\begin{equation*}
\nabla(\lambda x)=D_{\lambda x,x}(\lambda x)= \lambda D_{\lambda x,x}(x)=\lambda\nabla(x).
\end{equation*}
Hence, $\nabla$ is linear and is therefore a local derivation.
Finally, by Theorem \ref{th21}, the local derivation $\nabla$ is a derivation.
\end{proof}
\end{document} |
\begin{document}
\title[On a characterization of finite-dimensional vector spaces]
{On a characterization\\ of finite-dimensional vector spaces}
\author[Marat V. Markin]{Marat V. Markin}
\address{
Department of Mathematics\newline
California State University, Fresno\newline
5245 N. Backer Avenue, M/S PB 108\newline
Fresno, CA 93740-8001, USA
}
\email{mmarkin@csufresno.edu}
\subjclass{Primary 15A03, 15A04; Secondary 15A09, 15A15}
\keywords{Linear operator, vector space, Hamel basis}
\begin{abstract}
We provide a characterization of the finite dimensionality of vector spaces in terms of the right-sided invertibility of linear operators on them.
\end{abstract}
\maketitle
\section{Introduction}
In the paper \cite{Markin2005}, a characterization of one-dimensional (real or complex) normed algebras is found in terms
of the bounded linear operators on them, echoing the celebrated \textit{Gelfand-Mazur theorem} characterizing complex one-dimensional Banach algebras (see, e.g., \cite{Bach-Nar,Gelfand39,Gelfand41,Naimark,Rickart1958}).
Here, continuing along this path, we provide a simple characterization of the finite dimensionality of vector spaces in terms of the right-sided invertibility of linear operators on them.
\section{Preliminaries}
As is well-known (see, e.g., \cite{Horn-Johnson,ONan}), a square matrix $A$ with complex entries is invertible \textit{iff} it is one-sided invertible, i.e., there exists
a square matrix $C$ of the same order as $A$ such that
\[
AC = I\ \text{(\textit{right inverse})}\quad \text{or}\quad CA=I\ \text{(\textit{left inverse})},
\]
where $I$ is the \textit{identity matrix} of an appropriate size, in which case $C$ is the (two-sided) inverse of $A$, i.e.,
\[
AC=CA=I.
\]
Generally, for a linear operator on a (real or complex) vector space, the existence of a \textit{left inverse} implies being \textit{invertible}, i.e., \textit{injective}. Indeed, let $A:X\to X$ be a linear operator on a (real or complex) vector space $X$ and a linear operator $C:X\to X$ be its \textit{left inverse}, i.e.,
\begin{equation}\label{cfdvs2}
CA=I,
\end{equation}
where $I$ is the \textit{identity operator} on $X$. Equality \eqref{cfdvs2}, obviously, implies that
\[
\ker A=\left\{0\right\},
\]
and hence, there exists an inverse $A^{-1}:R(A)\to X$ for the operator $A$, where $R(A)$ is its range (see, e.g., \cite{Markin2020EOT}). Equality \eqref{cfdvs2} also implies that the inverse operator $A^{-1}$ is the restriction of $C$ to $R(A)$.
Further, as is easily seen, for a linear operator on a (real or complex) vector space, the existence of a \textit{right inverse}, i.e., a linear operator $C:X\to X$ such that
\begin{equation*}
AC=I,
\end{equation*}
immediately implies being \textit{surjective}, which, provided the underlying vector space is \textit{finite-dimensional}, by the \textit{rank-nullity theorem} (see, e.g., \cite{Markin2018EFA,Markin2020EOT}), is equivalent to being \textit{injective}, i.e., being \textit{invertible}.
With the underlying space being \textit{infinite-dimensional}, the arithmetic of infinite cardinals does not allow one to infer directly from the \textit{rank-nullity theorem} that the \textit{surjectivity} of a linear operator on the space is equivalent to its \textit{injectivity}. In this case, right-sided invertibility for linear operators need not imply invertibility. For instance, on the (real or complex) \textit{infinite-dimensional} vector space $l_\infty$ of bounded sequences, the left shift linear operator
\[
l_\infty\ni x:=(x_1,x_2,x_3,\dots)
\mapsto Lx:=(x_2,x_3,x_4,\dots)\in l_\infty
\]
is \textit{non-invertible} since
\[
\ker L=\left\{(x_{1},0,0,\dots): x_{1}\ \mbox{a scalar}\right\}\neq \left\{0\right\}
\]
(see, e.g., \cite{Markin2018EFA,Markin2020EOT}), but the right shift linear operator
\[
l_\infty\ni x:=(x_1,x_2,x_3,\dots)
\mapsto Rx:=(0,x_1,x_2,\dots)\in l_\infty
\]
is its \textit{right inverse}, i.e.,
\[
LR=I,
\]
where $I$ is the \textit{identity operator} on $l_\infty$.
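As a toy numerical aside (our own illustration, not part of the paper's argument), the two shift operators can be modeled in Python as maps on sequences, viewed as functions on the index set $\{0,1,2,\dots\}$; the check below confirms that $LR=I$ while $RL\neq I$:

```python
# Toy model of the shift operators on bounded sequences: a sequence is a
# Python function from indices {0, 1, 2, ...} to floats.  LR = I holds,
# but RL != I, so L has a right inverse and yet fails to be injective.

def L(x):
    """Left shift: (x_1, x_2, x_3, ...) -> (x_2, x_3, x_4, ...)."""
    return lambda n: x(n + 1)

def R(x):
    """Right shift: (x_1, x_2, x_3, ...) -> (0, x_1, x_2, ...)."""
    return lambda n: 0.0 if n == 0 else x(n - 1)

x = lambda n: 1.0 / (n + 1)      # a bounded sequence in l_infty

LR_x = L(R(x))                   # LR acts as the identity
RL_x = R(L(x))                   # RL zeroes the first coordinate

assert all(LR_x(n) == x(n) for n in range(10))
assert RL_x(0) == 0.0 and x(0) == 1.0   # RL != I: L is not injective
```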
The above example not only raises the natural question of whether a (real or complex) vector space, on which right-sided invertibility for linear operators implies their invertibility, i.e., \textit{injectivity}, is necessarily \textit{finite-dimensional}, but also serves as an inspiration for proving the \textit{``if''} part of the subsequent characterization.
\section{Characterization}
\begin{thm}[Characterization of Finite-Dimensional Vector Spaces]\ \\
A (real or complex) vector space $X$ is finite-dimensional
iff, for linear operators on $X$, right-sided invertibility
implies invertibility.
\end{thm}
\begin{proof}\
\textit{``Only if''} part. Suppose that the vector space $X$ is \textit{finite-dimensional} with $\dim X=n$ ($n\in {\mathbb N}$) and let $B:=\left\{x_1,\dots,x_n\right\}$ be an ordered basis for $X$.
For an arbitrary linear operator $A:X\to X$ on $X$, which has
a \textit{right inverse}, i.e., a linear operator $C:X\to X$ such that
\begin{equation*}
AC=I,
\end{equation*}
where $I$ is the \textit{identity operator} on $X$, let
$[A]_B$ and $[C]_B$ be the \textit{matrix representations} of
the operators $A$ and $C$ relative to the basis $B$, respectively (see, e.g., \cite{Horn-Johnson,ONan}).
Then
\begin{equation}\label{cfdvs1}
[A]_B[C]_B=I_n,
\end{equation}
where $I_n$ is the \textit{identity matrix} of size $n$
(see, e.g., \cite{Horn-Johnson,ONan}).
By the \textit{multiplicativity of the determinant} (see, e.g., \cite{Horn-Johnson,ONan}), equality \eqref{cfdvs1}
implies that
\[
\det\left([A]_B\right)\det\left([C]_B\right)=\det\left([A]_B[C]_B\right)=\det(I_n)=1.
\]
Whence, we conclude that
\[
\det\left([A]_B\right)\neq 0,
\]
which, by the \textit{determinant characterization of invertibility}, implies that the matrix $[A]_B$ is invertible, and hence, so is the operator $A$ (see, e.g., \cite{Horn-Johnson,ONan}).
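For square matrices, the \textit{``only if''} argument can be illustrated numerically; the sketch below (an illustration of ours, not the proof itself) verifies on a concrete $2\times 2$ matrix that a right inverse is automatically two-sided:

```python
import numpy as np

# Numerical illustration of the "only if" argument: for square matrices,
# a right inverse is automatically a two-sided inverse.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Solve A C = I column-by-column to obtain a right inverse C.
C = np.linalg.solve(A, np.eye(2))

assert np.allclose(A @ C, np.eye(2))   # right inverse by construction
assert np.allclose(C @ A, np.eye(2))   # ... and automatically a left inverse
assert abs(np.linalg.det(A)) > 0       # det(A) != 0, as the proof shows
```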
\textit{``If''} part. Let us prove this part \textit{by contrapositive}, assuming that the vector space $X$ is \textit{infinite-dimensional}. Suppose that $B:=\left\{x_i\right\}_{i\in I}$ is a
(Hamel) basis for $X$ (see, e.g., \cite{Markin2018EFA,Markin2020EOT}), where $I$ is an infinite indexing set, and that
$J:=\left\{i(n)\right\}_{n\in {\mathbb N}}$ is a \textit{countably infinite} subset of $I$.
Let us define a linear operator $A:X\to X$ as follows:
\[
Ax_{i(1)}:=0,\ Ax_{i(n)}:=x_{i(n-1)},\ n\ge 2,\quad Ax_i:=x_i,\ i\in I\setminus J,
\]
and
\[
X\ni x=\sum_{i\in I}c_ix_i\mapsto Ax:=\sum_{i\in I}c_iAx_i,
\]
where
\[
\sum_{i\in I}c_ix_i
\]
is the \textit{basis representation} of a vector $x\in X$ relative to $B$, in which all but a finite number of the coefficients $c_i$, $i\in I$, called the \textit{coordinates} of $x$ relative to $B$, are zero (see, e.g., \cite{Markin2018EFA,Markin2020EOT}).
As is easily seen, $A$ is a linear operator on $X$, which is \textit{non-invertible}, i.e., \textit{non-injective}, since
\[
\ker A=\spa\left(\left\{x_{i(1)}\right\}\right)\neq \left\{0\right\}.
\]
The linear operator $C:X\to X$ on $X$ defined as follows:
\[
Cx_{i(n)}:=x_{i(n+1)},\ n\in{\mathbb N},\quad Cx_i:=x_i,\ i\in I\setminus J,
\]
and
\[
X\ni x=\sum_{i\in I}c_ix_i\mapsto Cx:=\sum_{i\in I}c_iCx_i,
\]
is a \textit{right inverse} for $A$ since
\[
ACx_{i(n)}=Ax_{i(n+1)}=x_{i(n)},\ n\in {\mathbb N},\quad ACx_i=Ax_i=x_i,\ i\in I\setminus J.
\]
Thus, on a (real or complex) infinite-dimensional vector space, there exists a non-invertible linear operator with a right inverse, which completes the proof of the \textit{``if''} part, and hence, of the entire statement.
\end{proof}
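The weighted-shift construction in the \textit{``if''} part can be sanity-checked in a hedged toy model (ours, not the paper's formalism): a vector is stored as a dictionary of its nonzero coordinates relative to the countable sub-basis $\{x_{i(n)}\}_{n\in\mathbb{N}}$, on which $A$ shifts indices down and $C$ shifts them up.

```python
# Toy model: vectors supported on the countable basis {x_1, x_2, ...}
# are dicts {index: coefficient} with finitely many nonzero entries,
# matching the Hamel-basis representation.

def A(v):
    """A x_1 = 0, A x_n = x_{n-1} for n >= 2."""
    return {n - 1: c for n, c in v.items() if n >= 2}

def C(v):
    """C x_n = x_{n+1}."""
    return {n + 1: c for n, c in v.items()}

v = {1: 3.0, 4: -2.0}        # v = 3 x_1 - 2 x_4

assert A(C(v)) == v          # AC = I: C is a right inverse of A
assert A({1: 1.0}) == {}     # x_1 lies in ker A, so A is not injective
```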
\end{document}
\begin{document}
\title{Nonuniform average sampling in multiply generated shift-invariant subspaces of mixed
Lebesgue spaces}
\author{Qingyue Zhang\\
\footnotesize\it College of Science, Tianjin University of Technology,
Tianjin~300384, China\\
\footnotesize\it {e-mails: {jczhangqingyue@163.com}}\\ }
\maketitle
\textbf{Abstract.}\,\,
In this paper, we study the nonuniform average sampling problem in multiply generated shift-invariant subspaces of mixed Lebesgue spaces. We discuss two types of average sampled values: average sampled values $\{\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle:j,k\in\mathbb{J}\}$ generated by a single averaging function and average sampled values $\left\{\left \langle f,\psi_{x_{j},y_{k}}\right \rangle:j,k\in\mathbb{J}\right\}$ generated by multiple averaging functions. Fast reconstruction algorithms for these two types of average sampled values are provided.
\textbf{Key words.}\,\,
mixed Lebesgue spaces; nonuniform average sampling; shift-invariant subspaces.
\textbf{2010 MR Subject Classification}\,\,
94A20, 94A12, 42C15, 41A58
\section{Introduction and motivation}
\ \ \ \
In 1961, Benedek first introduced mixed Lebesgue spaces \cite{Benedek,Benedek2}. In the 1980s, Fernandez, and Rubio de Francia, Ruiz and Torrea developed the theory of mixed Lebesgue spaces for integral operators and for Calder\'on--Zygmund operators, respectively \cite{Fernandez,Rubio}. Recently, Torres and Ward, and Li, Liu and Zhang studied sampling problems in shift-invariant subspaces of mixed Lebesgue spaces \cite{Torres,Ward,LiLiu,zhangqingyue}. Mixed Lebesgue spaces generalize Lebesgue spaces: they arise when one considers functions that depend on independent quantities with different properties, and the integrability of each variable can be treated separately, in contrast with traditional Lebesgue spaces. This flexibility gives mixed Lebesgue spaces a crucial role in the study of time-dependent partial differential equations. In this context, we study the nonuniform average sampling problem in shift-invariant subspaces of mixed Lebesgue spaces.
The sampling theorem is the theoretical basis of modern pulse-code modulation communication systems, and is also one of the most powerful basic tools in signal processing and image processing. In 1948, Shannon formally stated the sampling theorem \cite{S,S1}. The Shannon sampling
theorem shows that for any $f\in L^{2}(\mathbb{R})$ with $\mathrm{supp}\hat{f}\subseteq[-T,T],$
$$f(x)=\sum_{n\in \mathbb{Z}}f\left(\frac{n}{2T}\right)\frac{\sin\pi(2Tx-n)}{\pi(2Tx-n)},$$
where the series converges uniformly on compact sets and in $L^{2}(\mathbb{R})$, and
$$\hat{f}(\xi)=\int_{\mathbb{R}}f(x)e^{-2\pi i x\xi }dx,\ \ \ \ \ \ \xi\in\mathbb{R}.$$
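As a hedged numerical check of the Shannon series (the signal below is a hand-picked finite combination of shifted sinc atoms, so its samples have finite support and the truncated series is exact up to rounding):

```python
import numpy as np

# Numerical check of the Shannon series for a signal band-limited to
# [-T, T] with T = 1.  f is a finite combination of shifted sinc atoms,
# so its samples f(n/2) have finite support and the series is exact.
T = 1.0

def f(x):
    return np.sinc(2 * T * x) + 0.5 * np.sinc(2 * T * x - 2)

def shannon(x, N=50):
    """Truncated Shannon series sum_{|n|<=N} f(n/(2T)) sinc(2Tx - n)."""
    n = np.arange(-N, N + 1)
    return np.sum(f(n / (2 * T)) * np.sinc(2 * T * x - n))

for x in (0.1, 0.37, 1.9):
    assert abs(shannon(x) - f(x)) < 1e-12
```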
However, in many realistic situations only nonuniform sampling sets are available. For example, transmission from satellites through the internet can only be viewed as a nonuniform sampling problem, because data packets may be lost in transmission. In recent years, many results concerning the nonuniform sampling problem have appeared \cite{SQ,BF1,M,Venkataramani,Christopher,Sun,Zhou}.
Uniform and nonuniform sampling problems
also have been generalized to more general shift-invariant spaces \cite{Aldroubi1,Aldroubi2,Aldroubi3,Aldroubi4,Vaidyanathan,Zhang1} of the form
$$V(\phi)=\left\{\sum_{k\in\mathbb{Z}}c(k)\phi(x-k):\{c(k):k\in\mathbb{Z}\}\in \ell^{2}(\mathbb{Z})\right\}.$$
In the classical sampling theory, the sampled values are the values of the signal at the sampling points. In practice, however, owing to the limited precision of physical devices,
it is impossible to measure the exact value of a signal at a point; the actual measurement is a local average of the signal near that point. Therefore, average sampling has attracted increasing attention from researchers \cite{SunZhou,Portal,Ponnaian,Kang,Atreas,Aldroubi10,Aldroubisun,Xianli,sunqiyu}.
For the sampling problem in shift-invariant subspaces of mixed Lebesgue spaces, Torres and Ward studied the uniform sampling problem for band-limited functions in mixed Lebesgue spaces \cite{Torres,Ward}. Li, Liu and Zhang discussed the nonuniform sampling problem in principal shift-invariant spaces of mixed Lebesgue spaces \cite{LiLiu,zhangqingyue}. In this paper, we discuss the nonuniform average sampling problem in multiply generated shift-invariant subspaces of mixed Lebesgue spaces. We discuss two types of average sampled values: average sampled values $\{\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle:j,k\in\mathbb{J}\}$ generated by a single averaging function and average sampled values $\left\{\left \langle f,\psi_{x_{j},y_{k}}\right \rangle:j,k\in\mathbb{J}\right\}$ generated by multiple averaging functions. Fast reconstruction algorithms for these two types of average sampled values are provided.
The paper is organized as follows. In the next section, we give the definitions and preliminary results needed and define multiply generated shift-invariant subspaces in mixed Lebesgue spaces $L^{p,q}\left(\mathbb{R}^{d+1}\right)$. In Section 3, we give main results of this paper. Section 4 gives some useful propositions and lemmas. In Section 5, we give proofs of main results. Finally, concluding remarks are presented in Section 6.
\section{Definitions and preliminary results}
\ \ \ \
In this section, we give some definitions and preliminary results needed in this paper. First of all, we give the definition of mixed Lebesgue spaces $L^{p,q}(\Bbb R^{d+1})$.
\begin{definition}
For $1 \leq p,q <+\infty$, the mixed Lebesgue space $L^{p,q}=L^{p,q}(\Bbb R^{d+1})$ consists of all measurable functions $f=f(x,y)$ defined on $\Bbb R\times\Bbb R^{d}$ satisfying
$$\|f\|_{L^{p,q}}=\left[\int_{\Bbb R}\left(\int_{\Bbb R^d}|f(x,y)|^{q}dy\right)^{\frac{p}{q}}dx\right]^{\frac{1}{p}}<+\infty.$$
\end{definition}
The corresponding sequence spaces are defined by $$\ell^{p,q}=\ell^{p,q}(\mathbb{Z}^{d+1})=\left\{c: \|c\|^{p}_{\ell^{p,q}}=\sum_{k_{1} \in \Bbb Z}\left(\sum_{k_{2} \in \Bbb Z^d }
|c(k_{1},k_{2})|^{q}\right)^{\frac{p}{q}}
<+\infty\right\}.$$
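A discrete sketch may clarify the order of the norms: an inner $\ell^{q}$ norm in the second index followed by an outer $\ell^{p}$ norm in the first. The following Python fragment (our own illustration, not part of the paper) computes $\|c\|_{\ell^{p,q}}$ for a finitely supported sequence $c$:

```python
import numpy as np

# Discrete mixed norm ||c||_{l^{p,q}} for a finitely supported sequence
# c(k1, k2): an inner l^q norm over k2, then an outer l^p norm over k1.
def mixed_norm(c, p, q):
    inner = np.sum(np.abs(c) ** q, axis=1) ** (1.0 / q)   # l^q in k2
    return np.sum(inner ** p) ** (1.0 / p)                # l^p in k1

c = np.array([[3.0, 4.0],
              [0.0, 2.0]])

# Row l^2 norms are 5 and 2, so the (1,2)-mixed norm is 7.
assert np.isclose(mixed_norm(c, p=1, q=2), 7.0)
# For p = q the mixed norm reduces to the usual l^p norm.
assert np.isclose(mixed_norm(c, 2, 2), np.linalg.norm(c))
```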
In order to control the local behavior of functions, we introduce mixed Wiener amalgam spaces $W(L^{p,q})(\Bbb R^{d+1})$.
\begin{definition}
For $1\leq p,q<\infty$, if a measurable function $f$ satisfies
$$\|f\|^{p}_{W(L^{p,q})}:=\sum_{n\in \Bbb Z}\sup_{x\in[0,1]}\left[\sum_{l\in \Bbb Z^d}\sup_{y\in [0,1]^d}|f(x+n,y+l)|^{q}\right]^{p/q}<\infty,$$
then we say that $f$ belongs to the mixed Wiener amalgam space $W(L^{p,q})=W(L^{p,q})(\Bbb R^{d+1})$.
\end{definition}
For $1\leq p,q<\infty$, let $W_{0}\left ( L^{p,q} \right )$ denote the space of all continuous functions in $W(L^{p,q})$.
For $1\leq p<\infty,$ if a function $f$ satisfies
$$\|f\|^{p}_{W(L^{p})}:=\sum_{k\in \Bbb Z^{d+1}}\mathop{\mathrm{ess\,sup}}_{x\in[0,1]^{d+1}} |f(x+k)|^{p}<\infty,$$
then we say that $f$ belongs to the Wiener amalgam space $W(L^{p})=W(L^{p})(\Bbb R^{d+1}).$
Obviously, $W(L^{p})\subset W(L^{p,p}).$
For $1\leq p< \infty $, let $W_{0}\left ( L^{p} \right )$ denote the space of all continuous functions in $W(L^{p})$.
Let $B$ be a Banach space. $(B)^{(r)}$ denotes $r$ copies $B\times\cdots\times B$ of $B$.
For any $f,g\in L^{2}(\mathbb{R}^{d+1})$, define their convolution
$$(f*g)(x)=\int_{\mathbb{R}^{d+1}} f(y)g(x-y)dy.$$
The following is a preliminary result.
\begin{lemma}\label{convolution relation}
If $f\in L^{1}(\mathbb{R}^{d+1})$ and $g\in W(L^{1,1})$, then $f*g\in W(L^{1,1})$ and
\[
\|f*g\|_{W(L^{1,1})}\leq\|g\|_{W(L^{1,1})}\|f\|_{L^{1}}.
\]
\end{lemma}
\begin{proof}
Since
\begin{eqnarray*}
&&\|f*g\|_{W(L^{1,1})}=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\left|\int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}
f(y_{1},y_{2})g(x_{1}+k_{1}-y_{1},x_{2}+k_{2}-y_{2})dy_{2}dy_{1}\right|\\
&&\quad\leq \sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}
|f(y_{1},y_{2})||g(x_{1}+k_{1}-y_{1},x_{2}+k_{2}-y_{2})|dy_{2}dy_{1}\\
&&\quad\leq \int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}
|f(y_{1},y_{2})|\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}|g(x_{1}+k_{1}-y_{1},x_{2}+k_{2}-y_{2})|dy_{2}dy_{1}\\
&&\quad\leq \int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}
|f(y_{1},y_{2})|\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}|g(x_{1}+k_{1},x_{2}+k_{2})|dy_{2}dy_{1}\\
&&\quad= \int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}|f(y_{1},y_{2})|\|g\|_{W(L^{1,1})}dy_{2}dy_{1}\\
&&\quad= \|g\|_{W(L^{1,1})}\int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}|f(y_{1},y_{2})|dy_{2}dy_{1}\\
&&\quad=\|g\|_{W(L^{1,1})}\|f\|_{L^{1}},
\end{eqnarray*}
the desired result of Lemma \ref{convolution relation} follows.
\end{proof}
\subsection{Shift-invariant spaces}
\ \ \ \ For $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W(L^{1,1})^{(r)}$, the multiply generated shift-invariant space in the mixed Lebesgue spaces $L^{p,q}$ is defined by
\begin{eqnarray*}
&&V_{p,q}(\Phi)=\left\{\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2}):\right.\\
&& \quad\quad\quad\quad\quad\quad\quad\quad\left.c_{i}=\left\{c_{i}(k_{1},k_{2}):k_{1}\in \Bbb Z,k_{2}\in\Bbb Z^{d}\right\}\in \ell^{p,q},\,1\leq i\leq r\right\}.
\end{eqnarray*}
It is easy to see that the triple sum converges pointwise almost everywhere. In fact, for any $1\leq i\leq r$, $c_{i}=\left\{c_{i}(k_{1},k_{2}):k_{1}\in \Bbb Z,k_{2}\in\Bbb Z^{d}\right\}\in\ell^{p,q}$ implies $c_{i}\in \ell^{\infty}.$ Combining this with $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W(L^{1,1})^{(r)}$ gives
\begin{eqnarray*}
\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}\left|c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1},y-k_{2})\right|&\leq& \sum_{i=1}^{r}\|c_{i}\|_\infty\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}|\phi_{i}(x-k_{1},y-k_{2})|\\
&\leq&\sum_{i=1}^{r}\|c_{i}\|_\infty\| \phi_{i} \|_{W(L^{1,1})}<\infty\,(a.e.).
\end{eqnarray*}
The following proposition shows that multiply generated shift-invariant spaces are well-defined in $L^{p,q}$.
\begin{proposition}\cite[Theorem 2.8]{zhangqingyue}\label{thm:stableup}
Assume that $1\leq p,q<\infty $ and $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W(L^{1,1})^{(r)}$.
Then for any $C=(c_{1},c_{2},\cdots,c_{r})^{T}\in (\ell^{p,q})^{(r)}$, the function
\[
f=\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^d}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2})
\]
belongs to $L^{p,q}$ and there exist $D_{1}, D_{2}>0$ such that
$$
D_{1}\|f\|_{L^{p,q}}\leq\left(\sum_{i=1}^{r}\|c_{i}\|^{2}_{\ell^{p,q}}\right)^{1/2}\leq D_{2}\|f\|_{L^{p,q}}.
$$
\end{proposition}
\section{Main results}
\ \ \ \
In this section, we mainly discuss nonuniform average sampling in multiply generated shift-invariant spaces. The main results of this section are two fast reconstruction algorithms which allow exact reconstruction of signals $f$ in multiply generated shift-invariant subspaces from the average sampled values of $f$.
\subsection{The case of single averaging function}
\ \ \ \
\ \ \ \
In this subsection, we give a fast reconstruction algorithm which allows exact reconstruction of signals $f$ in multiply generated shift-invariant subspaces from the average sampled values $\{\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle:j,k\in\mathbb{J}\}$ of $f$. Before giving the main result of this subsection, we first give some definitions.
\begin{definition}
A bounded uniform partition of unity $\{\beta_{j,k}:j,k\in\mathbb{J}\}$ associated to $\{B_{\gamma}(x_{j},y_{k}):j,k\in\mathbb{J}\}$ is a set of functions satisfying
\begin{enumerate}
\item $0\leq\beta_{j,k}\leq1, \forall\,j,k\in\mathbb{J},$
\item $\mathrm{supp}\beta_{j,k}\subset B_{\gamma}(x_{j},y_{k}),$
\item $\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\beta_{j,k}=1$.
\end{enumerate}
Here $B_{\gamma}(x_{j},y_{k})$ is the open ball with center $(x_{j},y_{k})$ and radius $\gamma$.
\end{definition}
If $f\in W_{0}(L^{1,1})$, we define
\[
A_{X,a}f=\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle\beta_{j,k}=\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}(f*\psi^{*}_{a})(x_{j},y_{k})\beta_{j,k},
\]
and define
\[
Q_{X}f=\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}f(x_{j},y_{k})\beta_{j,k}
\]
for the quasi-interpolant of the sequence $c(j,k)=f(x_{j},y_{k})$.
Here $\psi_{a}(\cdot)=1/a^{d+1}\psi(\cdot/a)$ and $\psi^{*}_{a}(x)=\overline{\psi_{a}(-x)}$. Obviously, one has $A_{X,a}f=Q_{X}(f*\psi^{*}_{a})$.
In order to describe the structure of the sampling set $X$, we give the following definition.
\begin{definition}
If a set $X=\{(x_{j},y_{k}):j,k\in \mathbb{J},x_{j}\in\mathbb{R},y_{k}\in\mathbb{R}^{d}\}$ satisfies
\[
\mathbb{R}^{d+1}=\cup_{j,k}B_{\gamma}(x_{j},y_{k})\quad\mbox{for every}\,\gamma>\gamma_{0},
\]
then we say that the set $X$ is $\gamma_{0}$-dense in $\mathbb{R}^{d+1}$.
Here $B_{\gamma}(x_{j},y_{k})$ is the open ball with center $(x_{j},y_{k})$ and radius $\gamma$, and $\mathbb{J}$ is a countable index set.
\end{definition}
The following is main result of this subsection.
\begin{theorem}\label{th:suanfa}
Assume that $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W_{0}(L^{1,1})^{(r)}$ has compactly supported components and that $P$ is a bounded projection from $L^{p,q}$ onto $V_{p,q}(\Phi)$. Let $\psi\in W_{0}(L^{1,1})$ with $\int_{\mathbb{R}^{d+1}}\psi=1$. Then there exist a density $\gamma_{0}=\gamma_{0}(\Phi,\psi)>0$ and a scale $a_{0}=a_{0}(\Phi,\psi)>0$ such that any $f\in V_{p,q}(\Phi)$ can be reconstructed
from its average samples $\{\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle:j,k\in\mathbb{J}\}$ on any $\gamma\,(\gamma\leq\gamma_{0})$-dense set $X=\{(x_{j},y_{k}):j,k\in\mathbb{J}\}$ and for any $0<a\leq a_{0}$, by the following iterative algorithm:
\begin{eqnarray}\label{eq:iterative algorithm}
\left\{
\begin{array}{rl}f_{1}=&PA_{X,a}f \\
f_{n+1}=&PA_{X,a}(f-f_{n})+f_{n}.\\
\end{array}\right.
\end{eqnarray}
The iterates $f_{n}$ converge to $f$ in the $L^{p,q}$ norm.
Furthermore, the convergence is geometric, namely,
\[
\|f-f_{n}\|_{L^{p,q}}\leq M\alpha^{n}
\]
for some $\alpha=\alpha(\gamma,a,P,\Phi,\psi)<1$ and $M<\infty.$
\end{theorem}
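A finite-dimensional toy analogue (our own illustration, with an ad hoc orthogonal projector and perturbation playing the roles of $P$ and $A_{X,a}$, not the paper's operators) shows the geometric convergence mechanism of iteration \eqref{eq:iterative algorithm}: the error satisfies $e_{n+1}=(I-PA)e_{n}$ and contracts whenever $\|(I-PA)|_{V}\|<1$.

```python
import numpy as np

# Toy analogue of f_{n+1} = f_n + P A (f - f_n): P projects onto a
# subspace V, A is a small perturbation of the identity, and the error
# e_n = f - f_n contracts geometrically since ||P(I - A)|_V|| < 1.
rng = np.random.default_rng(0)

B = rng.standard_normal((4, 2))
P = B @ np.linalg.inv(B.T @ B) @ B.T                 # projector onto V = range(B)
A = np.eye(4) + 0.05 * rng.standard_normal((4, 4))   # "approximate sampling" operator

f = P @ rng.standard_normal(4)                       # target signal in V
fn = P @ (A @ f)                                     # f_1 = P A f
errs = []
for _ in range(30):
    errs.append(np.linalg.norm(f - fn))
    fn = fn + P @ (A @ (f - fn))                     # f_{n+1} = f_n + P A (f - f_n)

assert errs[-1] < 1e-9          # geometric convergence to f
assert errs[-1] <= errs[0]
```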
\subsection{The case of multiple averaging functions}
\ \ \ \
In the above subsection, we treated the case of a single averaging function. In practice, however, one often encounters
the case of multiple averaging functions, for which the average sampled values are described by $\left\{\left \langle f,\psi_{x_{j},y_{k}}\right \rangle:j,k\in\mathbb{J}\right\}$.
For this case, we recover the function $f$ exactly by using the following fast algorithm. Before giving it, we first define
\[
A_{X}f=\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\left \langle f,\psi_{x_{j},y_{k}} \right \rangle\beta_{j,k}.
\]
\begin{theorem}\label{th:suanfa-m}
Assume that $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W_{0}(L^{1,1})^{(r)}$ has compactly supported components and that $P$ is a bounded projection from $L^{p,q}$ onto $V_{p,q}(\Phi)$.
Let the averaging sampling functions $\psi_{x_{j},y_{k}}\in W(L^{1,1})$ satisfy $\int_{\mathbb{R}^{d+1}}\psi_{x_{j},y_{k}}=1$ and $\int_{\mathbb{R}^{d+1}}|\psi_{x_{j},y_{k}}|\leq M$, where $M>0$
is independent of $(x_{j},y_{k})$. Then there exist density $\gamma_{0}=\gamma_{0}(\Phi,M)>0$ and $a_{0}=a_{0}(\Phi,M)>0$ such that if $X=\{(x_{j},y_{k}): j,k\in \mathbb{J}\}$ is
$\gamma\,(\gamma\leq\gamma_{0})$-dense
in $\mathbb{R}^{d+1}$, and if the average sampling functions $\psi_{x_{j},y_{k}}$ satisfy $\textup{supp}\,\psi_{x_{j},y_{k}}\subseteq (x_{j},y_{k})+[-a,a]^{d+1}$ for some $0<a\leq a_{0}$, then any $f\in V_{p,q}(\Phi)$
can be recovered from its average samples $\left\{\left \langle f,\psi_{x_{j},y_{k}}\right \rangle:j,k\in\mathbb{J}\right\}$ by the following iterative algorithm:
\begin{eqnarray}\label{eq:iterative algorithm-m}
\left\{
\begin{array}{rl}f_{1}=&PA_{X}f \\
f_{n+1}=&PA_{X}(f-f_{n})+f_{n}.\\
\end{array}\right.
\end{eqnarray}
In this case, the iterates $f_{n}$ converge to $f$ in the $L^{p,q}$-norm. Moreover, the convergence is geometric, that is,
\[
\|f-f_{n}\|_{L^{p,q}}\leq C\alpha^{n}
\]
for some $\alpha=\alpha(\gamma,a,P,\Phi,M)<1$ and $C<\infty.$
\end{theorem}
\section{Useful propositions and lemmas}
\ \ \ \
In this section, we introduce some useful propositions and lemmas.
Let $f$ be a continuous function. We define the oscillation (or modulus of continuity) of $f$
by $\hbox{osc}_{\delta}(f)(x_{1},x_{2})=\sup_{|y_{1}|\leq\delta,|y_{2}|\leq\delta}|f(x_{1}+y_{1},x_{2}+y_{2})-f(x_{1},x_{2})|$.
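A one-dimensional discrete sketch of the oscillation (our own grid approximation, not the paper's definition on $\mathbb{R}^{d+1}$) illustrates that $\hbox{osc}_{\delta}(f)\rightarrow0$ as $\delta\rightarrow0$ for continuous $f$:

```python
import numpy as np

# Discrete sketch of osc_delta(f)(x): the largest deviation
# |f(x + y) - f(x)| over offsets |y| <= delta, approximated on a grid.
def osc(f, x, delta, m=200):
    ys = np.linspace(-delta, delta, m)
    return np.max(np.abs(f(x + ys) - f(x)))

f = np.sin
# For smooth f, osc_delta(f)(x) -> 0 as delta -> 0 (continuity of f).
assert osc(f, 1.0, 1e-1) > osc(f, 1.0, 1e-3) > osc(f, 1.0, 1e-5)
assert osc(f, 1.0, 1e-5) < 1e-4
```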
The following two propositions are needed in the proofs of the two lemmas in this section.
\begin{proposition}\cite[Lemma 3.4]{zhangqingyue}\label{pro:in Wiener space}
If $\phi\in W_{0}(L^{1,1})$ has compact support, then there exists $\delta_{0}>0$ such that
$\hbox{osc}_{\delta}(\phi)\in W_{0}(L^{1,1})$ for any $\delta\leq\delta_{0}$.
\end{proposition}
\begin{proposition}\cite[Lemma 3.5]{zhangqingyue}\label{pro:oscillation}
If $\phi\in W_{0}(L^{1,1})$ has compact support, then $$\lim_{\delta\rightarrow0}\|\hbox{osc}_{\delta}(\phi)\|_{W(L^{1,1})}=0.$$
\end{proposition}
To prove our main results, we need the following Lemma.
\begin{lemma}\label{lem:average function}
Let $\psi\in L^{1}(\mathbb{R}^{d+1})$ satisfy $\int_{\mathbb{R}^{d+1}}\psi(x)dx=1$ and let $\psi_{a}=(1/a^{d+1})\psi(\cdot/a)$. Then
for every $\phi\in W_{0}(L^{1,1})$ with compact support,
\[
\|\phi-\phi*\psi^{*}_{a}\|_{W(L^{1,1})}\rightarrow0\quad \mbox{as}\quad a\rightarrow0^{+}.
\]
Here $a$ is any positive real number.
\end{lemma}
\begin{proof}
Since $\int_{\mathbb{R}^{d+1}}\psi(x)dx=1$ and $\psi^{*}_{a}(x)=\overline{\psi_{a}(-x)}$, one has
\begin{eqnarray*}
(\phi-\phi*\psi^{*}_{a})(x)=\int_{\mathbb{R}^{d+1}}(\phi(x)-\phi(x+t))\overline{\psi_{a}(t)}dt.
\end{eqnarray*}
By Proposition \ref{pro:in Wiener space}, there exists $\delta_{0}>0$ such that
$\hbox{osc}_{\delta}(\phi)\in W_{0}(L^{1,1})$ for any $\delta\leq\delta_{0}$.
Thus
\begin{eqnarray*}
&&\|\phi-\phi*\psi^{*}_{a}\|_{W(L^{1,1})}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\\
&&\quad=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\left|\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}(\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2}))
\overline{\psi_{a}(t_{1},t_{2})}dt_{2}dt_{1}\right|\\
&&\quad\leq\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\leq\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\quad+\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\quad+\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{\mathbb{R}}\int\limits_{|t_{2}|>\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad=I_{1}+I_{2}+I_{3},
\end{eqnarray*}
where
\begin{eqnarray*}
&&I_{1}=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1},
\end{eqnarray*}
\begin{eqnarray*}
&&I_{2}=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}
\end{eqnarray*}
and
\begin{eqnarray*}
&&I_{3}=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{\mathbb{R}}\int\limits_{|t_{2}|>\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}.
\end{eqnarray*}
First of all, we treat $I_{1}$: let $|t|=\max\{|t_{1}|,|t_{2}|\}$. Then
\begin{eqnarray*}
I_{1}&\leq& \sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}
\hbox{osc}_{|t|}(\phi)(x_{1}+k_{1},x_{2}+k_{2})\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&\leq& \int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}
\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\hbox{osc}_{|t|}(\phi)(x_{1}+k_{1},x_{2}+k_{2})\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&=&\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}.
\end{eqnarray*}
By Proposition \ref{pro:oscillation}, for any $\epsilon>0$, there exists $\delta_{1}>0\,(\delta_{1}<\delta_{0})$ such that
\[
\|\hbox{osc}_{\delta}(\phi)\|_{W(L^{1,1})}<\epsilon, \quad \mbox{for any}\,\delta\leq\delta_{1}.
\]
Write
\begin{eqnarray*}
&&\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad=\int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{|t_{2}|\leq\delta_{1}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\quad+\int\limits_{\delta_{1}<|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad \quad\quad +\int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{\delta_{1}<|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad=I_{4}+I_{5}+I_{6}.
\end{eqnarray*}
Then
\begin{eqnarray*}
I_{4}&\leq& \int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{|t_{2}|\leq\delta_{1}}\|\hbox{osc}_{\delta_{1}}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&\leq& \epsilon\int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{|t_{2}|\leq\delta_{1}}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\leq\epsilon\|\psi\|_{L^{1}},
\end{eqnarray*}
\begin{eqnarray*}
I_{5}&\leq&\int\limits_{\delta_{1}<|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{\delta_{0}}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&\leq&\|\hbox{osc}_{\delta_{0}}(\phi)\|_{W(L^{1,1})}\int\limits_{\delta_{1}/a<|s_{1}|}\int\limits_{s_{2}\in\mathbb{R}^{d}}\left|\psi(s_{1},s_{2})\right|ds_{2}ds_{1}\\
&\rightarrow& 0\quad \mbox{as}\,a\rightarrow0^{+}
\end{eqnarray*}
and
\begin{eqnarray*}
I_{6}&\leq&\int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{\delta_{1}<|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&\leq&\|\hbox{osc}_{\delta_{0}}(\phi)\|_{W(L^{1,1})}\int\limits_{s_{1}\in\mathbb{R}}\int\limits_{\delta_{1}/a<|s_{2}|}\left|\psi(s_{1},s_{2})\right|ds_{2}ds_{1}\\
&\rightarrow& 0\quad \mbox{as}\,a\rightarrow0^{+}.
\end{eqnarray*}
Next, we treat $I_{2}$:
\begin{eqnarray*}
&&I_{2}\leq\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\quad+\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\\
&&\quad\quad\quad\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\left|\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\leq2\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\phi\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\leq2\|\phi\|_{W(L^{1,1})}\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\leq2\|\phi\|_{W(L^{1,1})}\int\limits_{|s_{1}|>\delta_{0}/a}\int\limits_{s_{2}\in\mathbb{R}^{d}}\left|\psi(s_{1},s_{2})\right|ds_{2}ds_{1}\\
&&\quad\rightarrow 0\quad \mbox{as}\,a\rightarrow0^{+}.
\end{eqnarray*}
Similarly, we can prove that $I_{3}\rightarrow 0$ as $a\rightarrow0^{+}$. This completes the proof of Lemma \ref{lem:average function}.
\end{proof}
The following lemma is a generalization of \cite[Lemma 4.1]{Aldroubisun}.
\begin{lemma}\label{Qoperator}
Let $X$ be any sampling set which is $\gamma$-dense in $\mathbb{R}^{d+1}$, let $\{\beta_{j,k}:j,k\in\mathbb{J}\}$ be a bounded uniform partition of unity associated with $X$, and let $\phi\in W_{0}(L^{1,1})$ have compact support.
Then there exist
constants $C$ and $\gamma_{0}$ such that for any $f=\sum_{k\in\mathbb{Z}^{d+1}}c_{k}\phi(\cdot-k)$ and $\gamma\leq\gamma_{0}$, one has
\[
\|Q_{X}f\|_{L^{p,q}}\leq C\|c\|_{\ell^{p,q}}\|\phi\|_{W(L^{1,1})} \quad \mbox{for any}\, c=\{c_{k}:k\in\mathbb{Z}^{d+1}\}\in\ell^{p,q}.
\]
\end{lemma}
To prove Lemma \ref{Qoperator}, we need the following proposition.
\begin{proposition}\cite[Theorem 3.1]{LiLiu}\label{pro:stableup2}
Assume that $1\leq p,q<\infty $ and $\phi\in W(L^{1,1})$.
Then for any $c\in \ell^{p,q}$, the function $f=\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^d}c(k_{1},k_{2})\phi(\cdot-k_{1},\cdot-k_{2})$
belongs to $L^{p,q}$ and
$$
\|f\|_{L^{p,q}}\leq\|c\|_{\ell^{p,q}}\left \| \phi \right \|_{W(L^{1,1})}.
$$
\end{proposition}
\begin{proof}[\textbf{Proof of Lemma \ref{Qoperator}}]
By Proposition \ref{pro:stableup2} and Proposition \ref{pro:in Wiener space}, there exists $\gamma_{0}>0$ such that for any $\gamma\leq\gamma_{0}$
\begin{eqnarray*}
\|f-Q_{X}f\|_{L^{p,q}}\leq\|\hbox{osc}_{\gamma}(f)\|_{L^{p,q}}\leq\|c\|_{\ell^{p,q}}\|\hbox{osc}_{\gamma}(\phi)\|_{W(L^{1,1})}.
\end{eqnarray*}
Using Proposition \ref{pro:stableup2} and the proof of \cite[Lemma 3.4]{zhangqingyue}, one obtains that there exists $C'$ such that
\begin{eqnarray*}
\|Q_{X}f\|_{L^{p,q}}&=&\|Q_{X}f-f+f\|_{L^{p,q}}\leq\|f-Q_{X}f\|_{L^{p,q}}+\|f\|_{L^{p,q}}\\
&\leq&\|c\|_{\ell^{p,q}}\|\hbox{osc}_{\gamma}(\phi)\|_{W(L^{1,1})}+\|c\|_{\ell^{p,q}}\left \| \phi \right \|_{W(L^{1,1})}\\
&\leq&\|c\|_{\ell^{p,q}}C'\|\phi\|_{W(L^{1,1})}+\|c\|_{\ell^{p,q}}\left \| \phi \right \|_{W(L^{1,1})}\\
&\leq&C\|c\|_{\ell^{p,q}}\left \| \phi \right \|_{W(L^{1,1})},
\end{eqnarray*}
where $C=1+C'$.
\end{proof}
The following lemma plays an important role in the proof of Theorem \ref{th:suanfa}.
\begin{lemma}\label{lem:co}
Let $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W_{0}(L^{1,1})^{(r)}$ have compact support and let $P$ be a bounded projection from $L^{p,q}(\mathbb{R}^{d+1})$ onto $V_{p,q}(\Phi)$. Then there exist $\gamma_{0}> 0$ and $a_{0}> 0$ such that for every $\gamma$-dense set $X$ with $\gamma\leq\gamma_{0}$ and every positive real number $a\leq a_{0}$, the operator $I-PA_{X,a}$ is a contraction operator on $V_{p,q}(\Phi)$.
\end{lemma}
\begin{proof}
Let $f=\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2})\in V_{p,q}(\Phi)$. Then one has
\begin{eqnarray*}
\|f-PA_{X,a}f\|_{L^{p,q}}&=&\|f-PQ_{X}f+PQ_{X}f-PA_{X,a}f\|_{L^{p,q}}\\
&\leq&\|f-PQ_{X}f\|_{L^{p,q}}+\|PQ_{X}f-PA_{X,a}f\|_{L^{p,q}}\\
&=&\|Pf-PQ_{X}f\|_{L^{p,q}}+\|PQ_{X}f-PA_{X,a}f\|_{L^{p,q}}\\
&\leq&\|P\|_{op}(\|f-Q_{X}f\|_{L^{p,q}}+\|Q_{X}f-A_{X,a}f\|_{L^{p,q}})\\
&=&\|P\|_{op}(\|f-Q_{X}f\|_{L^{p,q}}+\|Q_{X}f-Q_{X}(f*\psi^{*}_{a})\|_{L^{p,q}}).
\end{eqnarray*}
First, we estimate the first term on the right-hand side.
By Proposition \ref{pro:stableup2}, Proposition \ref{thm:stableup} and the Cauchy--Schwarz inequality, there exists $\gamma_{1}$ such that for any $\gamma\leq\gamma_{1}$
\begin{eqnarray}\label{eq:1}
\|f-Q_{X}f\|_{L^{p,q}}&\leq&\left\|\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}|c_{i}(k_{1},k_{2})|\hbox{osc}_{\gamma}(\phi_{i})(\cdot-k_{1},\cdot-k_{2})\right\|_{L^{p,q}}\nonumber\\
&\leq&\sum_{i=1}^{r}\left\|\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}|c_{i}(k_{1},k_{2})|\hbox{osc}_{\gamma}(\phi_{i})(\cdot-k_{1},\cdot-k_{2})\right\|_{L^{p,q}}\nonumber\\
&\leq&\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\nonumber\\
&\leq&\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\nonumber\\
&\leq&\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\sqrt{r}\left(\sum_{i=1}^{r}\|c_{i}\|^{2}_{\ell^{p,q}}\right)^{1/2}\nonumber\\
&\leq&D_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})} \|f\|_{L^{p,q}}.
\end{eqnarray}
Next, we estimate the second term of the last inequality. Let $\phi^{a}_{i}=\phi_{i}-\phi_{i}*\psi^{*}_{a}\,(i=1,\cdots,r)$. Using $\phi_{i}\in W_{0}(L^{1,1})\,(i=1,\cdots,r)$, $\psi\in L^{1}$ and
Lemma \ref{convolution relation}, it follows that $\phi^{a}_{i}\in W_{0}(L^{1,1})$. Note that $f-f*\psi^{*}_{a}=\sum_{i=1}^{r}\sum_{k\in \mathbb{Z}^{d+1}}c_{i}(k)\phi^{a}_{i}(\cdot-k)$. By Lemma \ref{Qoperator},
there exist constants $C,\gamma_{2}>0$ such that for any $\gamma\leq\gamma_{2}$
\begin{eqnarray*}
\|Q_{X}f-Q_{X}(f*\psi^{*}_{a})\|_{L^{p,q}}&\leq&\left\|Q_{X}\left(\sum_{i=1}^{r}\sum_{k\in \mathbb{Z}^{d+1}}c_{i}(k)\phi^{a}_{i}(\cdot-k)\right)\right\|_{L^{p,q}}\\
&=&\left\|\sum_{i=1}^{r}Q_{X}\left(\sum_{k\in \mathbb{Z}^{d+1}}c_{i}(k)\phi^{a}_{i}(\cdot-k)\right)\right\|_{L^{p,q}}\\
&=&\sum_{i=1}^{r}\left\|Q_{X}\left(\sum_{k\in \mathbb{Z}^{d+1}}c_{i}(k)\phi^{a}_{i}(\cdot-k)\right)\right\|_{L^{p,q}}\\
&\leq&C\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\|\phi^{a}_{i}\|_{W(L^{1,1})}.
\end{eqnarray*}
Using Proposition \ref{thm:stableup} and the Cauchy--Schwarz inequality, one obtains
\begin{eqnarray*}
\|Q_{X}f-Q_{X}(f*\psi^{*}_{a})\|_{L^{p,q}}&\leq& C\max_{1\leq i\leq r}\|\phi^{a}_{i}\|_{W(L^{1,1})}\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\\
&\leq&C\max_{1\leq i\leq r}\|\phi^{a}_{i}\|_{W(L^{1,1})}\sqrt{r}\left(\sum_{i=1}^{r}\|c_{i}\|^{2}_{\ell^{p,q}}\right)^{1/2}\\
&\leq&D_{2}\sqrt{r}C\max_{1\leq i\leq r}\|\phi^{a}_{i}\|_{W(L^{1,1})} \|f\|_{L^{p,q}}.
\end{eqnarray*}
Let $\epsilon>0$ be arbitrary. By Proposition \ref{pro:oscillation}, there exists $\gamma_{3}$ such that
$D_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\leq \epsilon/2$ for any $\gamma\leq\gamma_{3}$. Using Lemma \ref{lem:average function}, there exists $a_{0}>0$ such that
for any $a\leq a_{0}$
\[
D_{2}\sqrt{r}C\max_{1\leq i\leq r}\|\phi^{a}_{i}\|_{W(L^{1,1})} \leq \epsilon/2.
\]
Hence, choosing $\gamma_{0}=\min\{\gamma_{1},\gamma_{2},\gamma_{3}\}$, one has
\[
\|f-PA_{X,a}f\|_{L^{p,q}}\leq\epsilon\|P\|_{op}\|f\|_{L^{p,q}} \quad\mbox{for any} \, f\in V_{p,q}(\Phi),\,\gamma\leq\gamma_{0},\,a\leq a_{0}.
\]
Choosing $\epsilon$ so that $\epsilon\|P\|_{op}<1$ makes $I-PA_{X,a}$ a contraction.
\end{proof}
\section{Proofs of main results}
\ \ \ \
In this section, we give proofs of Theorem \ref{th:suanfa} and Theorem \ref{th:suanfa-m}.
\subsection{Proof of Theorem \ref{th:suanfa}}
\ \ \ \
For convenience, let $e_{n}=f-f_{n}$ be the error after $n$ iterations. Using (\ref{eq:iterative algorithm}),
\begin{eqnarray*}
e_{n+1}&=&f-f_{n+1}\\
&=&f-f_{n}-PA_{X,a}(f-f_{n})\\
&=&(I-PA_{X,a})e_{n}.
\end{eqnarray*}
Using Lemma \ref{lem:co}, we may choose $\gamma_{0}$ and $a_{0}$ such that for any $\gamma\leq\gamma_{0}$ and $a\leq a_{0}$
\[
\|I-PA_{X,a}\|_{op}=\alpha<1.
\]
Then we obtain
\[
\|e_{n+1}\|_{L^{p,q}}\leq \alpha\|e_{n}\|_{L^{p,q}}\leq \alpha^{n}\|e_{1}\|_{L^{p,q}}.
\]
Hence $\|e_{n}\|_{L^{p,q}}\rightarrow0$ as $n\rightarrow\infty$. This completes the proof.
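The geometric decay of the error $e_{n}$ can be illustrated numerically. The sketch below is not part of the paper: the $2\times 2$ matrix \texttt{M} is a hypothetical stand-in for the operator $PA_{X,a}$ acting on a coefficient vector, chosen close to the identity so that $\|I-M\|_{\infty}<1$.

```python
# Toy illustration of the iteration f_{n+1} = f_n + P A (f - f_n).
# The 2x2 matrix M is a hypothetical stand-in for P A_{X,a}: since
# ||I - M||_inf = 0.15 < 1, the error e_n = f - f_n decays
# geometrically, mirroring ||e_{n+1}|| <= alpha ||e_n|| in the proof.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def iterate(M, f, f0, steps):
    """Run the scheme and return the max-norm errors ||f - f_n||."""
    fn = list(f0)
    errors = []
    for _ in range(steps):
        diff = [a - b for a, b in zip(f, fn)]
        upd = mat_vec(M, diff)
        fn = [a + b for a, b in zip(fn, upd)]
        errors.append(max(abs(a - b) for a, b in zip(f, fn)))
    return errors

M = [[0.9, 0.05], [0.05, 0.9]]        # ||I - M||_inf = 0.15 < 1
errs = iterate(M, f=[1.0, -2.0], f0=[0.0, 0.0], steps=20)
```

Each step contracts the error by at least the factor $\alpha=\|I-M\|_{\infty}=0.15$, so twenty iterations already reduce it below machine-level tolerance.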
\subsection{Proof of Theorem \ref{th:suanfa-m}}
\ \ \ \
We only need to prove that $I-PA_{X}$ is a contraction operator on $V_{p,q}(\Phi)$.
Let $f=\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2})\in V_{p,q}(\Phi)$. One has
\begin{eqnarray}\label{eq:2}
\|f-PA_{X}f\|_{L^{p,q}}&=&\|f-PQ_{X}f+PQ_{X}f-PA_{X}f\|_{L^{p,q}}\nonumber\\
&\leq&\|f-PQ_{X}f\|_{L^{p,q}}+\|PQ_{X}f-PA_{X}f\|_{L^{p,q}}\nonumber\\
&=&\|Pf-PQ_{X}f\|_{L^{p,q}}+\|PQ_{X}f-PA_{X}f\|_{L^{p,q}}\nonumber\\
&\leq&\|P\|_{op}(\|f-Q_{X}f\|_{L^{p,q}}+\|Q_{X}f-A_{X}f\|_{L^{p,q}}).
\end{eqnarray}
Put $f_{i}=\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2})$ for $1\leq i\leq r$. Then $f_{i}\in V_{p,q}(\phi_{i})$ for $1\leq i\leq r$ and $f=\sum_{i=1}^{r}f_{i}$. For each $f_{i}$, one has
\begin{eqnarray*}
|Q_{X}f_{i}-A_{X}f_{i}|&=&\left|\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\left(f_{i}(x_{j},y_{k})-\left \langle f_{i},\psi_{x_{j},y_{k}} \right \rangle\right)\beta_{j,k}\right|\\
&=&\left|\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\left(\int_{\mathbb{R}^{d+1}}(f_{i}(x_{j},y_{k})-f_{i}(t))\overline{\psi_{x_{j},y_{k}}(t)}dt\right)\beta_{j,k}\right|\\
&\leq&\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\int_{\mathbb{R}^{d+1}}|f_{i}(x_{j},y_{k})-f_{i}(t)||\psi_{x_{j},y_{k}}(t)|dt\beta_{j,k}\\
&\leq&\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\hbox{osc}_{a}(f_{i})(x_{j},y_{k})\int_{\mathbb{R}^{d+1}}|\psi_{x_{j},y_{k}}(t)|dt\beta_{j,k}\\
&\leq&M\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\hbox{osc}_{a}(f_{i})(x_{j},y_{k})\beta_{j,k}\\
&\leq&M\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}|c_{i}(k_{1},k_{2})|\hbox{osc}_{a}(\phi_{i})(x_{j}-k_{1},y_{k}-k_{2})\beta_{j,k}.
\end{eqnarray*}
By Proposition \ref{pro:stableup2} and Proposition \ref{pro:in Wiener space}, there exists $a_{1}$ such that for any $a\leq a_{1}$
\begin{eqnarray*}
\|Q_{X}f_{i}-A_{X}f_{i}\|_{L^{p,q}}\leq MC\|c_{i}\|_{\ell^{p,q}}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})}.
\end{eqnarray*}
Therefore
\[
\|Q_{X}f-A_{X}f\|_{L^{p,q}}\leq MC\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})}.
\]
Thus, by the proof of Lemma \ref{lem:co},
\begin{eqnarray}\label{eq:3}
\|Q_{X}f-A_{X}f\|_{L^{p,q}}\leq MCD_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})} \|f\|_{L^{p,q}}.
\end{eqnarray}
Using (\ref{eq:1}), (\ref{eq:2}) and (\ref{eq:3}), there exists $\gamma_{1}$ such that for any $\gamma\leq\gamma_{1}$ and $a\leq a_{1}$
\begin{eqnarray*}
\|f-PA_{X}f\|_{L^{p,q}}&\leq&\|P\|_{op}(D_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\\
&&\quad+MCD_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})} )\|f\|_{L^{p,q}}.
\end{eqnarray*}
Let $\epsilon>0$ be arbitrary. By Proposition \ref{pro:oscillation}, there exists $\gamma_{2}$ such that for any $\gamma\leq\gamma_{2}$
$$D_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\leq \epsilon/2.$$
Using Lemma \ref{lem:average function}, there exists $a_{2}>0$ such that
for any $a\leq a_{2}$
\[
MCD_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})} \leq \epsilon/2.
\]
Hence, choosing $\gamma_{0}=\min\{\gamma_{1},\gamma_{2}\}$ and $a_{0}=\min\{a_{1},a_{2}\}$, one has
\[
\|f-PA_{X}f\|_{L^{p,q}}\leq\epsilon\|P\|_{op}\|f\|_{L^{p,q}} \quad\mbox{for any} \, f\in V_{p,q}(\Phi),\,\gamma\leq \gamma_{0},\,a\leq a_{0}.
\]
Choosing $\epsilon$ so that $\epsilon\|P\|_{op}<1$ makes $I-PA_{X}$ a contraction. This completes the proof.
\section{Conclusion}
\ \ \ \
In this paper, we study the nonuniform average sampling problem in multiply generated shift-invariant subspaces of mixed Lebesgue spaces. We discuss two types of average sampled values and provide a fast reconstruction algorithm for each type. Studying $L^{p,q}$-frames in multiply generated shift-invariant subspaces of mixed Lebesgue spaces is a goal of future work.
\\
\textbf{Acknowledgements}
This work was partially supported by the
National Natural Science Foundation of China (11326094 and 11401435).
\end{document}
\begin{document}
\title[Algorithmic Problems]{Algorithmic Problems in Amalgams of Finite Groups}
\author{L.~Markus-Epstein}\footnote{Supported in part at the Technion by a fellowship of the Israel Council for Higher Education}
\address{Department of Mathematics \\
Technion \\
Haifa 32000, Israel}
\email{epstin@math.biu.ac.il}
\maketitle
\begin{abstract}
Geometric methods proposed by Stallings \cite{stal} for treating
finitely generated subgroups of free groups were successfully
used to solve a wide collection of decision problems for free
groups and their subgroups \cite{b-m-m-w, kap-m, mar_meak, m-s-w,
mvw, rvw, ventura}.
It turns out that Stallings' methods can be effectively
generalized for the class of amalgams of finite groups
\cite{m-foldings}. In the present paper we employ subgroup graphs
constructed by the generalized Stallings' folding algorithm,
presented in \cite{m-foldings}, to solve various algorithmic
problems in amalgams of finite groups.
\end{abstract}
\pagestyle{headings}
\section{Introduction} \label{section:Introduction}
Decision (or \emph{algorithmic}) problems are one of the classical
subjects of combinatorial group theory, originating in the three
fundamental decision problems posed by Dehn \cite{dehn} in 1911:
the \emph{word problem} (which asks whether a word over
the group generators represents the identity), the \emph{conjugacy
problem} (which asks whether an arbitrary pair of words
over the group generators define conjugate elements) and the
\emph{isomorphism problem} (which asks whether an
arbitrary pair of finite presentations determine isomorphic
groups).
Though Dehn solved all three of these problems for the
canonical presentations of fundamental groups of closed
2-manifolds, they are undecidable (unsolvable) in
general \cite{miller71, miller92}. However, restricting to
particular classes of groups may yield surprisingly good results.
Remarkable examples include the solvability of the word problem in
one-relator groups (Magnus, see II.5.4 in \cite{l_s}) and in
hyperbolic groups (Gromov, see 2.3.B in \cite{gro}). The reader is
referred to the papers of Miller \cite{miller71, miller92} for a
survey of decision problems for groups.
The groups considered in the present paper are amalgams of
finite groups. As is well known \cite{b-f}, these groups are
hyperbolic. Therefore the word problem in this class of groups is
solvable.
A natural generalization of the word problem is the
\emph{(subgroup) membership problem} (or the \emph{generalized
word problem}), which asks to decide whether a word in the
generators of the group represents an element of a given subgroup. An
generators of the group is an element of the given subgroup. An
efficient solution of the membership problem in amalgams of finite
groups was presented by the author in \cite{m-foldings}, where
graph theoretic methods for treating amalgams of finite groups
were developed. Namely, a finitely generated subgroup $H$ of an
amalgam $G=G_1 \ast_A G_2$ of finite groups is canonically
represented by a finite labelled graph $\Gamma(H)$. This graph
carries all the essential information about the subgroup $H$
itself, which enables one to ``read off'' a solution of the
membership problem in $H$ directly from its subgroup graph
$\Gamma(H)$. This yields a quadratic (and sometimes even linear)
time solution of the membership problem in amalgams of finite
groups.
This strategy was originally developed by Stallings \cite{stal}
to treat finitely generated subgroups of free groups. Stallings'
approach was topological. He showed that every finitely generated
subgroup of a free group is canonically represented by a minimal
immersion of a bouquet of circles. Using the graph theoretic
language, the results of \cite{stal} can be restated as follows. A
finitely generated subgroup of a free group is canonically
represented by a finite labelled graph which can be constructed
algorithmically by a so-called process of \emph{Stallings'
foldings} (\emph{Stallings' folding algorithm}). Moreover, this
algorithm is quadratic in the size of the input \cite{kap-m,
m-s-w}. See \cite{tuikan} for a faster implementation of this
algorithm.
In \cite{m-foldings} Stallings' folding algorithm was generalized
to the class of amalgams of finite groups. Throughout the present
paper we refer to this algorithm as the \emph{generalized Stallings'
folding algorithm}. Its description is included in the Appendix.
Note that graphs constructed by the Stallings' folding algorithm
can be viewed as finite inverse automata as well. This
convergence of ideas from group theory, topology, graph
theory, and the theory of finite automata and finite semigroups yields
rich computational and algorithmic results concerning free groups
and their subgroups. In particular, this approach gives polynomial
time algorithms to solve the membership problem, the finite index
problem, to compute closures of subgroups in various profinite
topologies. See \cite{b-m-m-w, mar_meak, m-s-w, mvw, rvw, ventura}
for these and other examples of the applications of the Stallings'
approach in free groups, and \cite{kap-w-m, kmrs, m_w, schupp} for
the applications in some other classes of groups. Note that the
Stallings' ideas were recast in a combinatorial graph theoretic
way in the remarkable survey paper of Kapovich and Myasnikov
\cite{kap-m}, where these methods were applied systematically to
study the subgroup structure of free groups.
Our objective is to apply the generalized Stallings' methods
developed by the author in \cite{m-foldings} to solve various
decision problems concerning finitely generated subgroups of
amalgams of finite groups algorithmically (that is, to find a
precise procedure, an \emph{algorithm}), which extends the results
of \cite{kap-m}.
Our results include polynomial solutions for the following
algorithmic problems in amalgams of finite groups, which are known
to be unsolvable in general \cite{miller71, miller92}:
\begin{itemize}
\item computing subgroup presentations,
\item detecting triviality of a given subgroup,
\item the freeness problem,
\item the finite index problem,
\item the separability problem,
\item the conjugacy problem,
\item the normality problem,
\item the intersection problem,
\item the malnormality problem,
\item the power problem,
\item reading off Kurosh decomposition for finitely generated
subgroups of free products of finite groups.
\end{itemize}
These results are spread out between three papers: \cite{m-algII,
m-kurosh} and the current one. In \cite{m-kurosh} free products of
finite groups are considered, and an efficient procedure to read
off a Kurosh decomposition is presented.
The splitting between \cite{m-algII} and the current paper was
done with the following idea in mind. It turns out that some
subgroup properties, such as a subgroup presentation and the
subgroup index, as well as freeness and normality, can be
obtained directly by an analysis of the corresponding subgroup
graph.
Solutions of the others require additional
constructions. Thus, for example, intersection properties can be
examined via product graphs, while separability requires the
construction of a pushout of graphs.
In the current paper algorithmic problems of the first type are
presented: the computing of subgroup presentations, the freeness
problem and the finite index problem. The separability problem is
also included here, because it is closely related with the other
problems presented in the current paper. The rest of the
algorithmic problems are introduced in \cite{m-algII}.
The paper is organized as follows. The Preliminary Section
includes the description of the basic notions used along the
present paper. Readers familiar with amalgams, normal words in
amalgams and labelled graphs can skip it. The next section
presents a summary of the results from \cite{m-foldings} which are
essential for our algorithmic purposes. It describes the nature
and the properties of the subgroup graphs constructed by the
generalized Stallings' folding algorithm in \cite{m-foldings}.
The rest of the sections are titled by the names of various
algorithmic problems and present definitions (descriptions) and
solutions of the corresponding algorithmic problems. The relevant
references to other papers considering similar problems and a
rough analysis of the complexity of the presented solutions
(algorithms) are provided. In contrast with papers whose main goal
is the exploration of the complexity of decision problems (for
instance, \cite{generic-case, average-case, tuikan}), we treat
complexity only sketchily, viewing its analysis as a way
to emphasize the effectiveness of our methods.
\subsection*{Other Methods} \
There have been a number of papers, where methods, not based on
Stallings' foldings, have been presented. One can use these
methods to treat finitely generated subgroups of amalgams of
finite groups. A topological approach can be found in works of
Bogopolskii \cite{b1, b2}. For the automata theoretic approach,
see papers of Holt and Hurt \cite{holt-decision, holt-hurt},
papers of Cremanns, Kuhn, Madlener and Otto \cite{c-otto,
k-m-otto}, as well as the recent paper of Lohrey and Senizergues
\cite{l-s}.
However, the methods for treating finitely generated subgroups
presented in the above papers were each applied to some particular
subgroup property. None of these papers has as its goal the
solution of various algorithmic problems, which we consider our
primary aim. Moreover, similarly to the case of free groups (see
\cite{kap-m}), our combinatorial approach seems to be the most
natural one for this purpose.
\section{Acknowledgments}
I wish to deeply thank my PhD advisor Prof.\ Stuart W. Margolis
for introducing me to this subject, for his help and
encouragement throughout my work on the thesis. I owe gratitude
to Prof. Arye Juhasz for his suggestions and many useful comments
during the writing of this paper. I gratefully acknowledge a
partial support at the Technion by a fellowship of the Israel
Council for Higher Education.
\section{Preliminaries} \label{section:Preliminaries}
\subsection*{Amalgams}
Let $G=G_1 \ast_{A} G_2$ be a free product of $G_1$ and $G_2$ with
amalgamation, customarily called an \emph{amalgam} of $G_1$ and $G_2$.
We assume that the (free) factors are given by the finite group
presentations
\begin{align} G_1=gp\langle X_1|R_1\rangle, \ \ G_2=gp\langle
X_2|R_2\rangle \ \ {\rm such \ that} \ \ X_1^{\pm} \cap
X_2^{\pm}=\emptyset. \tag{\text{$1.a$}}
\end{align}
$A= \langle Y \rangle$ is a group such that there exist two
monomorphisms
\begin{align}
\phi_1:A \rightarrow G_1 \ {\rm and } \ \phi_2:A \rightarrow G_2.
\tag{\text{$1.b$}}
\end{align}
Thus $G$ has a finite group presentation
\begin{align}
G=gp\langle X_1,X_2 | R_1, R_2, \phi_1(a)=\phi_2(a), \; a \in Y
\rangle. \tag{\text{$1.c$}}
\end{align}
We put $X=X_1 \cup X_2$, $R=R_1 \cup R_2 \cup
\{\phi_1(a)=\phi_2(a) \; | \; a \in Y \} $. Thus $G=gp\langle
X|R\rangle$.
As is well known \cite{l_s, m-k-s, serre}, the free factors embed
in $G$. It enables us to identify $A$ with its monomorphic image
in each one of the free factors. Sometimes in order to make the
context clear we use \fbox{$G_i \cap A$}
\footnote{Boxes are used for emphasizing purposes only.}
to denote the monomorphic image of $A$ in $G_i$ ($i \in \{1,2\}$).
Elements of $G=gp \langle X |R \rangle$ are equivalence classes of
words. However it is customary to blur the distinction between a
word $u$ and the equivalence class containing $u$. We will
distinguish between them by using different equality signs:
\fbox{``$\equiv$''} for the equality of two words and
\fbox{``$=_G$''} to denote the equality of two elements of $G$,
that is the equality of two equivalence classes. Thus in
$G=gp\langle x \; | \; x^4 \rangle$, for example, $x \equiv x$ but
$x \not\equiv x^{-3}$, while $x=_G x^{-3}$.
\subsection*{Normal Forms}
Let $G=G_1 \ast_A G_2$. A word $g_1g_2 \cdots g_n \in G$ is
\emph{in normal form} (or, simply, it is a \emph{normal word}) if:
\begin{enumerate}
\item [(1)] $g_i \neq_G 1$ lies in one of the factors, $G_1$ or $G_2$,
\item [(2)] $g_i$ and $g_{i+1}$ are in different factors,
\item [(3)] if $n \neq 1$, then $g_i \not\in A$.
\end{enumerate}
We call the sequence $(g_1, g_2, \ldots, g_n)$ a \emph{normal decomposition} of the element $g \in G $, where $g=_G g_1g_2 \cdots g_n$.
Any $g \in G$ has a representative in a normal form (see, for
instance, p.~187 in \cite{l_s}). If $g \equiv g_1g_2 \cdots g_n$
is in normal form and $n>1$, then the Normal Form Theorem (IV.2.6
in \cite{l_s}) implies that $g \neq_G 1$. The number $n$ is unique
for a given element $g$ of $G$ and is called the \emph{syllable
length} of $g$. We denote it by $l(g)$. We use $|g|$ to denote the
length of $g$ as a word in $X^*$.
\subsection*{Labelled graphs}
Below we follow the notation of \cite{gi_sep, stal}.
A graph $\Gamma$ consists of two sets $E(\Gamma)$ and $V(\Gamma)$,
and two functions $E(\Gamma)\rightarrow E(\Gamma)$ and
$E(\Gamma)\rightarrow V(\Gamma)$: for each $e \in E$ there is an
element $\overline{e} \in E(\Gamma)$ and an element $\iota(e) \in
V(\Gamma)$, such that $\overline{\overline{e}}=e$ and
$\overline{e} \neq e$.
The elements of $E(\Gamma)$ are called \textit{edges}, and an $e
\in E(\Gamma)$ is a \emph{direct edge} of $\Gamma$, while $\overline{e}$
is the \emph{reverse (inverse) edge} of $e$.
The elements of $V(\Gamma)$ are called \textit{vertices},
$\iota(e)$ is the \emph{initial vertex} of $e$, and
$\tau(e)=\iota(\overline{e})$ is the \emph{terminal vertex} of
$e$. We call them the \emph{endpoints} of the edge $e$.
A \emph{path of length $n$} is a sequence of $n$ edges $p=e_1
\cdots e_n$ such that $v_i=\tau(e_i)=\iota(e_{i+1})$ ($1 \leq
i<n$). We call $p$ a \emph{path from $v_0=\iota(e_1)$ to
$v_n=\tau(e_n)$}. The \emph{inverse} of the path $p$ is
$\overline{p}=\overline{e_n} \cdots \overline{e_1}$. A path of
length 0 is the \emph{empty path}.
We say that the graph $\Gamma$ is \emph{connected} if $V(\Gamma)
\neq \emptyset$ and any two vertices are joined by a path. The
path $p$ is \emph{closed} if $\iota(p)=\tau(p)$, and it is
\emph{freely reduced} if $e_{i+1} \neq \overline{e_i}$ ($1 \leq i
<n$). $\Gamma$ is a \emph{tree} if it is a connected graph and
every closed freely reduced path in $\Gamma$ is empty.
A \emph{subgraph} of $\Gamma$ is a graph $C$ such that $V(C)
\subseteq V(\Gamma)$ and $E(C) \subseteq E(\Gamma)$. In this case,
by abuse of language, we write $C\subseteq \Gamma$.
Similarly, whenever we write $\Gamma_1 \cup \Gamma_2$ or $\Gamma_1
\cap \Gamma_2$, we always mean that the set operations are, in
fact, applied to the vertex sets and the edge sets of the
corresponding graphs.
A \emph{labelling} of $\Gamma$ by the set $X^{\pm}$ is a function
$$lab: \: E(\Gamma)\rightarrow X^{\pm}$$ such that for each $e \in
E(\Gamma)$, $lab(\overline{e}) \equiv (lab(e))^{-1}$.
The last equality enables one, when representing the labelled
graph $\Gamma$ as a directed diagram, to represent only
$X$-labelled edges, because $X^{-1}$-labelled edges can be deduced
immediately from them.
A graph with a labelling function is called a \emph{labelled (with
$X^{\pm}$) graph}. The only graphs considered in the present
paper are labelled graphs.
A labelled graph is called \emph{well-labelled} if
$$\iota(e_1)=\iota(e_2), \; lab(e_1) \equiv lab(e_2)\ \Rightarrow \
e_1=e_2,$$ for each pair of edges $e_1, e_2 \in E(\Gamma)$. See
Figure \ref{fig: labelled, well-labelled graphs}.
\begin{figure}[!h]
\psfrag{a }[][]{$a$} \psfrag{b }[][]{$b$} \psfrag{c }[][]{$c$}
\psfrag{e }[][]{$e_1$}
\psfrag{f }[][]{$e_2$}
\psfragscanon \psfrag{G }[][]{{\Large $\Gamma_1$}}
\psfragscanon \psfrag{H }[][]{{\Large $\Gamma_2$}}
\psfragscanon \psfrag{K }[][]{{\Large $\Gamma_3$}}
\includegraphics[width=\textwidth]{LabelledGraph.eps}
\caption[The construction of $\Gamma(H_1)$]{ \footnotesize {The
graph $\Gamma_1$ is labelled with $\{a,b,c\} ^{\pm}$, but it is
not well-labelled. The graphs $\Gamma_2$ and $\Gamma_3$ are
well-labelled with $\{a,b,c\} ^{\pm}$.}
\label{fig: labelled, well-labelled graphs}}
\end{figure}
If a finite graph $\Gamma$ is not well-labelled then a process of
iterative identifications of each pair $\{e_1,e_2\}$ of distinct
edges with the same initial vertex and the same label to a single
edge yields a well-labelled graph. Such identifications are called
\emph{foldings}, and the whole process is known as the process of
\emph{Stallings' foldings} \cite{b-m-m-w, kap-m, mar_meak, m-s-w}.
Thus the graph $\Gamma_2$ on Figure
\ref{fig: labelled, well-labelled graphs} is obtained from the
graph $\Gamma_1$ by folding the edges $e_1$ and $e_2$ to a single
edge labelled by $a$.
Notice that the graph $\Gamma_3$ is obtained from the graph
$\Gamma_2$ by removing the edge labelled by $a$ whose initial
vertex has degree 1. Such an edge is called a \emph{hair}, and the
above procedure is usually called \emph{``cutting hairs''}.
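The folding operation just described can be sketched in a few lines of code. The sketch below is an illustration only: it covers the classical free-group setting rather than the generalized algorithm of \cite{m-foldings}, tracks only direct edges (inverse edges are left implicit), and omits the hair-cutting step.

```python
# Minimal sketch of Stallings folding on a labelled graph.  Edges are
# (initial vertex, label, terminal vertex) triples.  Whenever two
# distinct edges share the same initial vertex and the same label,
# their terminal vertices are identified; this is repeated until the
# graph becomes well-labelled.

def fold(edges):
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        seen = {}
        for (u, lab, v) in sorted(edges):
            if (u, lab) in seen and seen[(u, lab)] != v:
                w = seen[(u, lab)]
                # identify vertex v with vertex w in every edge
                edges = {(w if a == v else a, l, w if b == v else b)
                         for (a, l, b) in edges}
                changed = True
                break
            seen[(u, lab)] = v
    return edges

# Two a-edges leave vertex 0, so folding merges vertices 1 and 2.
folded = fold({(0, "a", 1), (0, "a", 2), (1, "b", 3)})
```

Cutting hairs could be added analogously, by deleting any edge whose initial vertex is a non-basepoint vertex of degree one.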
The label of a path $p=e_1e_2 \cdots e_n$ in $\Gamma$, where $e_i
\in E(\Gamma)$, is the word $$lab(p) \equiv lab(e_1)\cdots
lab(e_n) \in (X^{\pm})^*.$$ Notice that the label of the empty
path is the empty word. As usual, we identify the word $lab(p)$
with the corresponding element in $G=gp\langle X | R \rangle$. We
say that $p$ is a \emph{normal path} (or $p$ is a path in
\emph{normal form}) if $lab(p)$ is a normal word.
If $\Gamma$ is a well-labelled graph then a path $p$ in $\Gamma$
is freely reduced if and only if $lab(p)$ is a freely reduced
word.
Otherwise $p$ can be converted into a freely reduced path $p'$ by
iterative removals of the subpaths $e\overline{e}$
(\emph{backtrackings}) (\cite{mar_meak, kap-m}). Thus
$$\iota(p')=\iota(p), \ \tau(p')=\tau(p) \ \; {\rm and } \ \; lab(p)=_{FG(X)} lab(p'),$$ where \fbox{$FG(X)$} is a free group
with a free basis $X$. We say that $p'$ is obtained from $p$ by
\emph{free reductions}.
If $v_1,v_2 \in V(\Gamma)$ and $p$ is a path in $\Gamma$ such that
$$\iota(p)=v_1, \ \tau(p)=v_2 \ {\rm and } \ lab(p) \equiv u,$$
then, following the automata theoretic notation, we simply write
\fbox{$v_1 \cdot u=v_2$} to summarize this situation, and say
that the word $u$ is \emph{readable} at $v_1$ in $\Gamma$.
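In a well-labelled graph the vertex $v_1 \cdot u$, when it exists, is unique, so reading a word is a deterministic walk. A possible sketch (illustrative only; it encodes a well-labelled graph as a dictionary from (vertex, letter) pairs to vertices, with inverse letters written in upper case, a convention of this sketch alone):

```python
# Reading a word at a vertex of a well-labelled graph: for each
# successive letter, follow the unique edge carrying that label.
# Returns the terminal vertex v . u, or None if u is not readable at v.

def read(graph, v, word):
    for letter in word:
        v = graph.get((v, letter))
        if v is None:
            return None
    return v

# A cycle 0 -a-> 1 -b-> 0, with the inverse edges made explicit
# (upper case marks an inverse letter in this sketch).
g = {(0, "a"): 1, (1, "A"): 0,
     (1, "b"): 0, (0, "B"): 1}

assert read(g, 0, "ab") == 0      # "ab" labels a closed path at 0
assert read(g, 0, "aa") is None   # "aa" is not readable at 0
```

A word thus labels a closed path at the basepoint exactly when reading it returns the basepoint, which is the basic observation behind membership testing in subgroup graphs.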
A pair \fbox{$(\Gamma, v_0)$} consisting of the graph $\Gamma$
and the \emph{basepoint} $v_0$ (a distinguished vertex of the
graph $\Gamma$) is called a \emph{pointed graph}.
Following the notation of Gitik (\cite{gi_sep}) we denote the set
of all closed paths in $\Gamma$ starting at $v_0$ by
\fbox{$Loop(\Gamma, v_0)$}, and the image of $lab(Loop(\Gamma,
v_0))$ in $G=gp\langle X | R \rangle$ by \fbox{$Lab(\Gamma,
v_0)$}. More precisely,
$$Loop(\Gamma, v_0)=\{ p \; | \; p {\rm \ is \ a \ path \ in \ \Gamma \
with} \ \iota(p)=\tau(p)=v_0\}, $$
$$Lab(\Gamma,v_0)=\{g \in G \; | \; \exists p \in Loop(\Gamma,
v_0) \; : \; lab(p)=_G g \}.$$
It is easy to see that $Lab(\Gamma, v_0)$ is a subgroup of $G$
(\cite{gi_sep}). Moreover, $Lab(\Gamma,v)=gLab(\Gamma,u)g^{-1}$,
where $g=_G lab(p)$, and $p$ is a path in $\Gamma$ from $v$ to $u$
(\cite{kap-m}).
If $V(\Gamma)=\{v_0\}$ and $E(\Gamma)=\emptyset$ then we assume
that $H=\{1\}$.
We say that $H=Lab(\Gamma, v_0)$ is \emph{the subgroup of $G$
determined by the graph $(\Gamma,v_0)$}. Thus any pointed graph
labelled by $X^{\pm}$, where $X$ is a generating set of a group
$G$, determines a subgroup of $G$. This justifies the use of the name
\emph{subgroup graphs} for such graphs.
\subsection*{Morphisms of Labelled Graphs} \label{sec:Morphisms
Of Well-Labelled Graphs}
Let $\Gamma$ and $\Delta$ be graphs labelled with $X^{\pm}$. The
map $\pi:\Gamma \rightarrow \Delta$ is called a \emph{morphism of
labelled graphs}, if $\pi$ takes vertices to vertices, edges to
edges, preserves labels of direct edges and has the property that
$$ \iota(\pi(e))=\pi(\iota(e)) \ {\rm and } \
\tau(\pi(e))=\pi(\tau(e)), \ \forall e\in E(\Gamma).$$
An injective morphism of labelled graphs is called an
\emph{embedding}. If $\pi$ is an embedding then we say that the
graph $\Gamma$ \emph{embeds} in the graph $\Delta$.
A \emph{morphism of pointed labelled graphs} $\pi:(\Gamma_1,v_1)
\rightarrow (\Gamma_2,v_2)$ is a morphism of underlying labelled
graphs $ \pi: \Gamma_1\rightarrow \Gamma_2$ which preserves the
basepoint $\pi(v_1)=v_2$. If $\Gamma_2$ is well-labelled then
there exists at most one such morphism (\cite{kap-m}).
\begin{remark}[\cite{kap-m}] \label{unique isomorphism}
{\rm If two pointed well-labelled (with $X^{\pm}$) graphs
$(\Gamma_1,v_1)$ and $(\Gamma_2,v_2)$ are isomorphic, then there
exists a unique isomorphism $\pi:(\Gamma_1,v_1) \rightarrow
(\Gamma_2,v_2)$. Therefore $(\Gamma_1,v_1)$ and $(\Gamma_2,v_2)$
can be identified via $\pi$. In this case we sometimes write
$(\Gamma_1,v_1)=(\Gamma_2,v_2)$.}
$\diamond$
\end{remark}
The notation $\Gamma_1=\Gamma_2$ means that there exists an
isomorphism between these two graphs. More precisely, one can find
$v_i \in V(\Gamma_i)$ ($i \in \{1,2\}$) such that
$(\Gamma_1,v_1)=(\Gamma_2,v_2)$ in the sense of Remark~\ref{unique
isomorphism}.
\section{Subgroup Graphs}
The current section is devoted to a discussion of the subgroup
graphs constructed by the generalized Stallings' folding
algorithm. The main results of \cite{m-foldings} concerning these
graphs (more precisely, Theorem 7.1, Lemma 8.6, Lemma 8.7, Theorem
8.9 and Corollary 8.11 in \cite{m-foldings}), which are essential
for the current paper, are summarized in Theorem~\ref{thm:
properties of subgroup graphs} below. All missing notation is
explained in the rest of the present section.
\begin{thm} \label{thm: properties of subgroup graphs}
Let $H=\langle h_1, \cdots, h_k \rangle$ be a finitely generated
subgroup of an amalgam of finite groups $ G=G_1 \ast_A G_2$.
Then there is an algorithm (\underline{the generalized Stallings'
folding algorithm}) which constructs a finite labelled graph
$(\Gamma(H),v_0)$ with the following properties:
\begin{itemize}
\item[(1)] $ {Lab(\Gamma(H),v_0)}= {H}. $
\item[(2)] Up to isomorphism, $(\Gamma(H),v_0)$ is a unique
\underline{reduced precover} of $G$ determining $H$.
\item[(3)] A {\underline{normal word}} $g \in G$ is in $H$ if and
only if it labels a closed path in $\Gamma(H)$ starting at $v_0$,
that is, $v_0 \cdot g=v_0$.
\item[(4)] Let $m$ be the sum of the lengths of the words $h_1,
\ldots, h_k$. Then the algorithm computes $(\Gamma(H),v_0)$ in time
$O(m^2)$.
Moreover, $|V(\Gamma(H))|$ and $|E(\Gamma(H))|$ are proportional
to $m$.
\end{itemize}
\end{thm}
\begin{cor}
Theorem~\ref{thm: properties of subgroup graphs} (3) provides a
solution to the \underline{membership problem} for finitely
generated subgroups of amalgams of finite groups.
\end{cor}
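For illustration, the membership test of Theorem~\ref{thm: properties of subgroup graphs} (3) can be sketched in code. The encoding below (a transition table keyed by vertex and letter, with uppercase letters standing for formal inverses) is an illustrative assumption of this sketch, not part of the source; the sketch traces a normal word through the graph and accepts exactly the words labelling loops at the basepoint.

```python
# Hypothetical sketch of the membership test: the subgroup graph is stored
# as a transition table, and a word belongs to H iff it labels a closed
# path at the basepoint.  The encoding (lowercase letters for generators,
# uppercase for their inverses) is illustrative only.

class SubgroupGraph:
    def __init__(self, basepoint):
        self.basepoint = basepoint
        self.edges = {}                      # (vertex, letter) -> vertex

    @staticmethod
    def _inv(x):
        # formal inverses are modelled by swapping case
        return x.upper() if x.islower() else x.lower()

    def add_edge(self, u, x, v):
        # store the edge together with its inverse edge
        self.edges[(u, x)] = v
        self.edges[(v, self._inv(x))] = u

    def contains(self, word):
        """True iff the (normal) word labels a loop at the basepoint."""
        v = self.basepoint
        for x in word:
            v = self.edges.get((v, x))
            if v is None:                    # word not readable in the graph
                return False
        return v == self.basepoint
```

For instance, the two-vertex graph determining $\langle x^2 \rangle$ in $\mathbb{Z}_4$ accepts the word $x^2$ but rejects $x$.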
Throughout the present paper the notation \fbox{$(\Gamma(H),v_0)$}
is used for the finite labelled graph constructed by the
generalized Stallings' folding algorithm for a finitely generated
subgroup $H$ of an amalgam of finite groups $G=G_1 \ast_A G_2$.
\subsection*{Definition of Precovers:} The notion of
\emph{precovers} was defined by Gitik in \cite{gi_sep} for
subgroup graphs of amalgams. Below we present its definition and
list some basic properties. In doing so, we rely on the notation
and results of \cite{gi_sep}.
In \cite{m-foldings} a special class of precovers, the
\emph{reduced precovers}, was considered. However, the properties
of reduced precovers are irrelevant for the results presented in
the current paper. Hence we skip the discussion of them, which can
be found in \cite{m-foldings}.
Let $\Gamma$ be a graph labelled with $X^{\pm}$, where $X=X_1 \cup
X_2$ is the generating set of $G=G_1 \ast_A G_2$ given by
(1.a)-(1.c).
We view $\Gamma$ as a two colored graph: one color for each one of
the generating sets $X_1$ and $X_2$ of the factors $G_1$ and
$G_2$, respectively.
The vertex $v \in V(\Gamma)$ is called \emph{$X_i$-monochromatic}
if all the edges of $\Gamma$ incident with $v$ are labelled with
$X_i^{\pm}$, for some $i \in \{1,2\}$. We denote the set of
$X_i$-monochromatic vertices of $\Gamma$ by $VM_i(\Gamma)$ and put
$VM(\Gamma)= VM_1(\Gamma) \cup VM_2(\Gamma)$.
We say that a vertex $v \in V(\Gamma)$ is \emph{bichromatic} if
there exist edges $e_1$ and $e_2$ in $\Gamma$ with
$$\iota(e_1)=\iota(e_2)=v \ {\rm and} \ lab(e_i) \in X_i^{\pm}, \ i \in \{1,2\}.$$
The set of bichromatic vertices of $\Gamma$ is denoted by
$VB(\Gamma)$.
A subgraph of $\Gamma$ is called \emph{monochromatic} if it is
labelled only with $X_1^{\pm}$ or only with $X_2^{\pm}$. An
\emph{$X_i$-monochromatic component} of $\Gamma$ ($i \in \{1,2\}$)
is a maximal connected subgraph of $\Gamma$ labelled with
$X_i^{\pm}$ which contains at least one edge.
Thus monochromatic components of $\Gamma$ are graphs determining
subgroups of the factors, $G_1$ or $G_2$.
We say that a graph $\Gamma$ is \emph{$G$-based} if any path $p
\subseteq \Gamma$ with $lab(p)=_G 1$ is closed. Thus if $\Gamma$
is $G$-based then, obviously, it is well-labelled with $X^{\pm}$.
\begin{defin}[Definition of Precover] A $G$-based graph $\Gamma$
is a \emph{precover} of $G$ if each $X_i$-monochromatic
component of $\Gamma$ is a \emph{cover} of $G_i$ ($i \in
\{1,2\}$).
\end{defin}
Following the terminology of Gitik (\cite{gi_sep}), we use the
term \emph{``covers of $G$''} for \emph{relative (coset) Cayley
graphs} of $G$ and denote by \fbox{$Cayley(G,S)$} the coset Cayley
graph of $G$ relative to the subgroup $S$ of
$G$.\footnote{Whenever the notation $Cayley(G,S)$ is used, it
always means that $S$ is a subgroup of the group $G$ and that the
presentation of $G$ is fixed and clear from the context.}
If $S=\{1\}$, then $Cayley(G,S)$ is the \emph{Cayley graph} of $G$
and the notation \fbox{$Cayley(G)$} is used.
Note that the use of the term ``covers'' is justified by the
well-known fact that a geometric realization of a coset Cayley
graph of $G$ relative to some $S \leq G$ is the 1-skeleton of the
topological cover corresponding to $S$ of the standard 2-complex
representing the group $G$ (see \cite{stil}, pp.~162-163).
\begin{conv}
By the above definition, a precover does not have to be a connected
graph. However, in this paper we restrict our attention to
connected precovers. Thus whenever this term is used, we always
mean that the corresponding graph is connected, unless stated
otherwise.
We follow the convention that a graph $\Gamma$ with
$V(\Gamma)=\{v\}$ and $E(\Gamma)=\emptyset$ determining the
trivial subgroup (that is, $Lab(\Gamma,v)=\{1\}$) is a (an empty)
precover of $G$.
$\diamond$
\end{conv}
\begin{ex}
{\rm
Let $G=gp\langle x,y | x^4, y^6, x^2=y^3 \rangle=\mathbb{Z}_4 \ast_{\mathbb{Z}_2} \mathbb{Z}_6$.
Recall that $G$ is isomorphic to $SL(2,\mathbb{Z})$ under the
homomorphism
$$x\mapsto \left(
\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}
\right), \ y \mapsto \left(
\begin{array} {cc}
0 & -1\\
1 & 1
\end{array}
\right).$$
The graphs $\Gamma_1$ and $\Gamma_3$ on Figure \ref{fig:Precovers}
are examples of precovers of $G$ with one monochromatic component
and two monochromatic components, respectively.
Though the $\{x\}$-monochromatic component of the graph $\Gamma_2$
is a cover of $\mathbb{Z}_4$ and the $\{y\}$-monochromatic
component is a cover of $\mathbb{Z}_6$, $\Gamma_2$ is not a
precover of $G$, because it is not a $G$-based graph. Indeed, $v
\cdot (x^2y^{-3})=u$, while $x^2y^{-3}=_G 1$.
The graph $\Gamma_4$ is not a precover of $G$ because its
$\{x\}$-monochromatic components are not covers of $\mathbb{Z}_4$. }
$\diamond$
\end{ex}
\begin{figure}[!h]
\psfrag{x }[][]{$x$} \psfrag{y }[][]{$y$} \psfrag{v }[][]{$v$}
\psfrag{u }[][]{$u$}
\psfrag{w }[][]{$w$}
\psfrag{x1 - monochromatic vertex }[][]{{\footnotesize
$\{x\}$-monochromatic vertex}}
\psfrag{y1 - monochromatic vertex }[][]{\footnotesize
{$\{y\}$-monochromatic vertex}}
\psfrag{ bichromatic vertex }[][]{\footnotesize {bichromatic
vertex}}
\psfragscanon \psfrag{G }[][]{{\Large $\Gamma_1$}}
\psfragscanon \psfrag{K }[][]{{\Large $\Gamma_2$}}
\psfragscanon \psfrag{H }[][]{{\Large $\Gamma_3$}}
\psfragscanon \psfrag{L }[][]{{\Large $\Gamma_4$}}
\includegraphics[width=\textwidth]{Precovers.eps}
\caption{ \label{fig:Precovers}}
\end{figure}
A graph $\Gamma$ is \emph{$x$-saturated} at $v \in V(\Gamma)$ if
there exists $e \in E(\Gamma)$ with $\iota(e)=v$ and $lab(e)=x$
($x \in X$). $\Gamma$ is \emph{$X^{\pm}$-saturated} if it is
$x$-saturated for each $x \in X^{\pm}$ at each $v \in V(\Gamma)$.
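The saturation condition can be checked mechanically. Below is a minimal sketch, assuming the same kind of illustrative transition-table encoding of a labelled graph (a list of vertices, a dictionary of transitions, and the list of letters of the alphabet); this encoding is an assumption of the sketch, not part of the source.

```python
# A minimal mechanical saturation check: a letter x is readable at a
# vertex v iff (v, x) is a key of the transition table.  The encoding
# of the graph is illustrative only.

def is_saturated(vertices, transitions, letters):
    """True iff every letter is readable at every vertex."""
    return all((v, x) in transitions for v in vertices for x in letters)
```

For the two-vertex graph over $\{x, x^{-1}\}$ with all four transitions present, the check succeeds; removing any transition makes it fail.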
\begin{lem}[Lemma 1.5 in \cite{gi_sep}] \label{lemma1.5}
Let $G=gp\langle X|R \rangle$ be a group and let $(\Gamma,v_0)$ be
a graph well-labelled with $X^{\pm}$. Denote $Lab(\Gamma,v_0)=S$.
Then
\begin{itemize}
\item $\Gamma$ is $G$-based if and only if it can be embedded in $(Cayley(G,S), S~\cdot~1)$,
\item $\Gamma$ is $G$-based and $X^{\pm}$-saturated if and only if it is isomorphic to \linebreak[4] $(Cayley(G,S), S
\cdot~1)$.~
\footnote{We write $S \cdot 1$ instead of the usual $S1=S$ to distinguish this vertex of $Cayley(G,S)$ as the basepoint of the
graph.}
\end{itemize}
\end{lem}
\begin{cor} \label{cor:PrecoversSubgrOfCayleyGr}
If $\Gamma$ is a precover of $G$ with $Lab(\Gamma,v_0)=H \leq G$
then $\Gamma$ is a subgraph of $Cayley(G,H)$.
\end{cor}
Thus a precover of $G$ can be viewed as a part of the
corresponding cover of $G$, which explains the use of the term
``precovers''.
\begin{remark}[\cite{m-foldings}] \label{remark: morphism of precovers}
{\rm Let $\phi: \Gamma \rightarrow \Delta$ be a morphism of
labelled graphs. If $\Gamma$ is a precover of $G$, then
$\phi(\Gamma)$ is a precover of $G$ as well. }
$\diamond$
\end{remark}
\subsection*{Precovers are Compatible:}
A graph $\Gamma$ is called \emph{compatible at a bichromatic
vertex} $v$ if for any monochromatic path $p$ in $\Gamma$ such
that $\iota(p)=v$ and $lab(p) \in A$ there exists a monochromatic
path $t$ of a different color in $\Gamma$ such that $\iota(t)=v$,
$\tau(t)=\tau(p)$ and $lab(t)=_G lab(p)$. We say that $\Gamma$ is
\emph{compatible} if it is compatible at all bichromatic vertices.
\begin{ex}
{\rm The graphs $\Gamma_1$ and $\Gamma_3$ on Figure
\ref{fig:Precovers} are compatible. The graph $\Gamma_2$ does not
possess this property because $w \cdot x^{2}=v$, while $w \cdot
y^3=u$. $\Gamma_4$ is not compatible either.}
$\diamond$
\end{ex}
\begin{lem} [Lemma 2.12 in \cite{gi_sep}] \label{lemma2.12}
If $\Gamma$ is a compatible graph, then for any path $p$ in
$\Gamma$ there exists a path $t$ in normal form such that
$\iota(t)=\iota(p), \ \tau(t)=\tau(p) \ {\rm and} \ lab(t)=_G
lab(p).$
\end{lem}
\begin{remark} [Remark 2.11 in \cite{gi_sep}] \label{remark:
precovers are compatible} {\rm Precovers are compatible.
$\diamond$}
\end{remark}
The following can be taken as another definition of precovers.
\begin{lem} [Corollary 2.13 in \cite{gi_sep}] \label{corol2.13}
Let $\Gamma$ be a compatible graph. If all $X_i$-components of
$\Gamma$ are $G_i$-based, $i \in \{1,2\}$, then $\Gamma$ is
$G$-based. In particular, if each $X_i$-component of $\Gamma$ is a
cover of $G_i$, $i \in \{1,2\}$, and $\Gamma$ is compatible, then
$\Gamma$ is a precover of $G$.
\end{lem}
\subsection*{Complexity Issues:}
As was noted in \cite{m-foldings}, the complexity of the
generalized Stallings' algorithm is quadratic in the size of the
input, when we assume that all the information concerning the
finite groups $G_1$, $G_2$, $A$ and the amalgam $G=G_1 \ast_{A}
G_2$ given via $(1.a)$, $(1.b)$ and $(1.c)$ (see
Section~\ref{section:Preliminaries}) is not a part of the input.
We also assume that the Cayley graphs and all the relative Cayley
graphs of the free factors are given for ``free'' as well.
Otherwise, if the group presentations of the free factors $G_1$
and $G_2$, as well as the monomorphisms between the amalgamated
subgroup $A$ and the free factors, are a part of the input (the
\emph{uniform version} of the algorithm), then we have to build the
groups $G_1$ and $G_2$, that is, to construct their Cayley graphs
and relative Cayley graphs.
Since we assume that the groups $G_1$ and $G_2$ are finite, the
Todd-Coxeter algorithm and the Knuth-Bendix algorithm are
suitable \cite{l_s, sims, stil} for these purposes. Then the
complexity of the construction depends on the group presentations
of $G_1$ and $G_2$ we have: it could be even exponential in the
size of the presentation \cite{cdhw73}. Therefore the generalized
Stallings' algorithm presented in \cite{m-foldings}, with these
additional constructions, could take time exponential in the size
of the input.
Thus each uniform algorithmic problem for $H$ whose solution
involves the construction of the subgroup graph $\Gamma(H)$ may
have an exponential complexity in the size of the input.
The primary goal of the complexity analysis carried out in the
current paper is to evaluate our graph-theoretic methods. To
this end, we assume that all the algorithms in the present
paper have the following ``given data''.
\begin{description}
\item[GIVEN] : Finite groups $G_1$, $G_2$, $A$ and the amalgam
$G=G_1 \ast_{A} G_2$ given via $(1.a)$, $(1.b)$ and $(1.c)$.\\
We assume that the Cayley graphs and all the relative Cayley
graphs of the free factors are given.
\end{description}
\section{Computing Subgroup Presentations}
\label{sec:SubgroupPresentation}
Given a presentation of a group $G$, and a suitable information
about a subgroup $H$ of $G$, the Reidemeister-Schreier method (see
2.2.3 in \cite{m-k-s}) enables one to compute a presentation for
$H$.
It is a well-known fact (see, for instance, \cite{m-k-s}, p.~90)
that if $[G:H]<\infty$ then $H$ is finitely generated when $G$ is
finitely generated, and $H$ is finitely presented when $G$ is
finitely presented. Such a finite presentation of $H$ can be
effectively calculated by an application of the
Reidemeister-Schreier method.
However, a subgroup can be finitely presented even if its index is
infinite. For instance, if the group under consideration is
\emph{coherent}, then all its finitely generated subgroups are
finitely presented. Recently, the coherence of some classes of groups
has been investigated in \cite{m_w, p_s}.
Below we introduce a restricted version of the
Reidemeister-Schreier method which allows one to compute a finite
presentation for a finitely generated subgroup $H$ of an amalgam
of finite groups $G=G_1 \ast_A G_2$ given by (1.a)-(1.c). This
immediately implies the \emph{coherence} of amalgams of finite
groups.
The suitable information about the subgroup which is needed for an
application of the method can be read off from its subgroup graph
$\Gamma(H)$ constructed by the generalized Stallings' algorithm.
Let $(\Gamma,v_0)$ be a finite precover of $G$. Let
$H=Lab(\Gamma,v_0)$.
Recall that $H=Lab(\Gamma,v_0)$ is the image of
$lab(Loop(\Gamma,v_0)) \subseteq X^*$ in $G$ under the natural
morphism $\varphi: X^* {\rightarrow} G$. Note that
$\varphi=\varphi_2 \circ \varphi_1$, where
$$\varphi_1: X^*
\rightarrow FG(X) \ {\rm and} \ \varphi_2: FG(X) \rightarrow G.$$
Let $\widetilde{H} = \varphi_1 \left(lab(Loop(\Gamma,v_0))
\right)$. Thus $H=\varphi_2(\widetilde{H})$.
Moreover, $$H=\widetilde{H} / N=\widetilde{H} /
\left(\widetilde{H} \cap N \right),$$ where $N$ is the normal
closure of $R$ in $FG(X)$ (see \cite{l_s, m-k-s}). We put
$F=FG(X)$.
Let $T$ be a fixed spanning tree of $\Gamma$. For all $v \in
V(\Gamma)$, we consider $t_v$ to be the unique freely reduced path
in $T$ from the basepoint $v_0$ to the vertex $v$.
For each $e \in E(\Gamma)$ we consider $t(e)=t_{\iota(e)}e
\overline{t_{\tau(e)}}$. Thus if $e \in E(T)$ then $t(e)$ can be
freely reduced to an empty path, that is $lab(t(e))=_{F} 1$.
Let $E^+$ be the set of positively oriented edges of $\Gamma$. Let
\begin{equation} \label{eq:Def_of X_H}
X_H=\{lab(t(e)) \; | \; e \in E^+ \setminus E(T) \},
\end{equation}
$$Q_v=\{ q \subseteq \Gamma \; | \; \iota(q)=\tau(q)=v, \;
lab(q) \equiv r \in R \},$$
\begin{equation} \label{eq:DefI of R H}
R_H=\left\{lab\left(\phi\left(t_vq\overline{t_v}\right)\right) \;
| \; v \in V(\Gamma), \; t_v \subseteq T, \; q \in Q_v\right\},
\end{equation}
where $\phi$ is a function from the set of freely reduced paths
in $\Gamma$ into $Loop(\Gamma,v_0)$ defined as follows:
$$\phi(p)=t(e_1)t(e_2) \cdots t(e_n), \ {\rm where} \
p=e_1e_2 \cdots e_n \subseteq \Gamma.$$ Thus the
path $\phi(p)$ is closed at $v_0$ in $\Gamma$ and
$$lab(\phi(p)) \equiv lab(t(e_1))lab(t(e_2)) \cdots lab(t(e_n)).$$
Moreover, if the path $p$ is closed at $v_0$ in $\Gamma$, then the
path $\phi(p)$ is \emph{freely equivalent} to $p$, that is,
$\phi(p)$ can be transformed to the path $p$ by a series of free
reductions. Thus $lab(\phi(p))=_{F} lab(p)$.
The function $\phi$ induces a partial function $\phi'$ from
$FG(X)$ into $FG(X_H)$ such that $\phi'(w) = lab(\phi(p))$, where
$p$ is a path in $\Gamma$ with $lab(p) \equiv w$. Thus another
definition of $R_H$ takes the following form:
\begin{equation} \label{eq:DefII of R H}
R_H=\left\{\phi'\left(lab(t_vq\overline{t_v})\right)
\; | \; v \in V(\Gamma), \; t_v \subseteq T, \; q \in
Q_v\right\}.
\end{equation}
\begin{remark}
{\rm Note that the system of coset representatives $ \{lab (t_v)
\; | \; v \in V(\Gamma) \}$ is a subset of the \emph{Schreier
transversal} of $\widetilde{H}$ in $FG(X)$ (\cite{stal}). }
$\diamond$
\end{remark}
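The construction of $X_H$ is algorithmic: fix a spanning tree $T$, read off the tree-path labels $lab(t_v)$, and emit one generator $lab(t(e))$ per positive edge outside $T$. The following is a hypothetical sketch; the encoding of $\Gamma$ as edge triples, the parent-pointer encoding of $T$, and the two-letter alphabet in `inv` are illustrative assumptions, not part of the source.

```python
# Hypothetical sketch of computing X_H from a spanning tree T of Gamma:
# every positive edge e outside T contributes the generator
# lab(t_{iota(e)}) lab(e) lab(t_{tau(e)})^{-1}.

def inv(word):
    # formal inverse: reverse the word and swap x <-> X, y <-> Y
    swap = {"x": "X", "X": "x", "y": "Y", "Y": "y"}
    return "".join(swap[c] for c in reversed(word))

def tree_path(parent, v0, v):
    """Label of the unique reduced path t_v in T from the basepoint v0 to v."""
    word = ""
    while v != v0:
        v, letter = parent[v]        # parent[v] = (previous vertex, edge label)
        word = letter + word
    return word

def generators(edges, parent, v0):
    """X_H: one generator lab(t(e)) per positive edge of Gamma outside T."""
    tree_edges = {(parent[v][0], parent[v][1], v) for v in parent}
    gens = []
    for (u, x, v) in edges:          # edges lists positive edges only
        if (u, x, v) not in tree_edges:
            gens.append(tree_path(parent, v0, u) + x
                        + inv(tree_path(parent, v0, v)))
    return gens
```

For the two-vertex graph determining $\langle x^2 \rangle$, with the single tree edge from $v_0$ labelled $x$, the unique non-tree positive edge yields the generator $x^2$.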
\begin{thm} \label{thm:NewReidmeisterSchreier}
With the above notation, $H=gp \langle X_H \; | \; R_H \rangle$.
\end{thm}
\begin{proof}
As is well known (\cite{kap-m, mar_meak, stal}),
$\widetilde{H}=FG(X_H)$. Therefore $H=\langle X_H \rangle$.
To complete the proof it remains to show that the normal closure
$N_H$ of $R_H$ in $FG(X_H)=\widetilde{H}$ is equal to
$\widetilde{H} \cap N$.
Let $v \in V(\Gamma)$ be such that $Q_v \neq \emptyset$. Let $q \in
Q_v$. Then $\phi(t_vq\overline{t_v})$ is freely equivalent to
the path $t_vq\overline{t_v}$. Thus
$$lab(\phi(t_vq\overline{t_v}))=_{F}lab(t_vq\overline{t_v}) \equiv lab(t_v)lab(q)lab(t_v)^{-1}
\in N.$$ On the other hand, the path $t_vq\overline{t_v}$ is
closed at $v_0$, hence $lab(t_vq\overline{t_v}) \in
\widetilde{H}$. Thus $lab(\phi(t_vq\overline{t_v})) \in
\widetilde{H} \cap N$. Therefore $R_H \subseteq \widetilde{H} \cap
N$.
For each $y \in \widetilde{H}$, there exists a closed path $s$ in
$\Gamma$ starting at $v_0$ with $lab(s)=_F y$. By the
definition of $R_H$, for each $r \in R_H$ there exists a path
$t_vq\overline{t_v} \subseteq \Gamma$ closed at $v_0$ such that
$lab(t_vq\overline{t_v})=_F r$. Hence the path
$s(t_vq\overline{t_v})\overline{s}$ is closed at $v_0$ in
$\Gamma$. Thus $lab(s(t_vq\overline{t_v})\overline{s})=_F yry^{-1}
\in \widetilde{H}$. Moreover,
$lab(s(t_vq\overline{t_v})\overline{s}) \equiv
lab(st_v)lab(q)(lab(st_v))^{-1} \in N$. Therefore $N_H \subseteq
\widetilde{H} \cap N$.
{ \ }
Assume now that $w \in \widetilde{H} \cap N$.
Since $w \in \widetilde{H}$, there exists a freely reduced path
$p$ in $\Gamma$ closed at $v_0$ with $lab(p)=_{F} w$ (\cite{kap-m,
mar_meak}). Let $p=p_1 \cdots p_k$ be its decomposition into
maximal monochromatic paths $p_i$ with $lab(p_i) \equiv w_i \in
G_{l_i}$ ($1 \leq i \leq k$ and $l_i \in \{1,2\}$).
Since $w \in N$, $w=_G 1$. Therefore, by the Normal Form Theorem
for free products with amalgamation (IV.2.6 in \cite{l_s}), there
exists $1 \leq i \leq k$ such that $w_i \in A \cap G_{l_i}$.
The proof is by induction on the number $k$ of the maximal
monochromatic subpaths of the path $p$. Without loss of
generality, simplifying the notation, we let $l_i=1$.
Assume first that $w_i=_{G_1} 1$. Since $\Gamma$ is $G$-based, the
subpath $p_i$ is closed at $\iota(p_i)=\tau(p_i)$.
Let $v_j =\tau(p_j)$ and let $t_j=t_{v_j} \subseteq T$ ($1 \leq j
\leq k$). Thus $v_{i-1}=v_i$ and $t_{i-1}=t_i$. Let $t=p_1 \cdots
p_{i-1}$. See Figure~\ref{fig:ProofReidmSchreier} (a).
\begin{figure}[!h]
\begin{center}
\psfrag{v0 }[][]{$v_0$}
\psfrag{v1 }[][]{$v_1$}
\psfrag{vk }[][]{$v_{k-1}$}
\psfrag{v }[][]{$v_{i-1}$}
\psfrag{p1 }[][]{$p_1$}
\psfrag{pk }[][]{$p_k$}
\psfrag{px }[][]{$p_{i-1}$}
\psfrag{py }[][]{$p_{i+1}$}
\psfrag{pi }[][]{$p_i$}
\psfrag{pj }[][]{$p_i'$}
\psfrag{ti }[][]{$t_{i-1}$}
\psfrag{t }[][]{$t$}
\includegraphics[width=\textwidth]{ProofReidmSchreier.eps}
\caption{ \label{fig:ProofReidmSchreier}}
\end{center}
\end{figure}
Hence the path $p$ can be obtained by free reductions from the
following path
$$ \left( \: (t\overline{ t}_{i-1}) (t_{i-1} p_i
\overline{t_{i-1}}) (t_{i-1}\overline{t}) \: \right)\left(tp_{i+1}
\cdots p_k \right).$$
Thus $lab(t\overline{ t}_{i-1}) \in \widetilde{H}$, and the number
of the maximal monochromatic subpaths of the path
$$tp_{i+1} \cdots p_k=p_1 \cdots p_{i-2}(p_{i-1}p_{i+1})p_{i+2}
\cdots p_k$$ is $k-2$. Therefore, by the inductive assumption,
$lab (tp_{i+1} \cdots p_k) \in N_H$. To get the desired conclusion
it remains to show that $lab(t_{i-1} p_i \overline{t_{i-1}}) \in
N_H$.
Since $w_i=_{G_1} 1$, we have $lab(p_i) \in N_{1}$, where $N_{1}$
is the normal closure of $R_{1}$ in $F_1=FG(X_{1})$. Therefore
$lab(p_i) =_{F_1} (z_{1}s_{1}z_{1}^{-1}) \cdots
(z_{m}s_{m}z_{m}^{-1})$, where $z_{j} \in F_1$ and $s_{j} \in
R_{1}$ ($1 \leq j \leq m$).
Let $C_i$ be the $X_{1}$-monochromatic component of $\Gamma$ such
that $p_i \subseteq C_i$. Since $\Gamma$ is a precover of $G$, $C_i$
is $X_{1}^{\pm}$-saturated. Hence $p_i$ is a free reduction of a
path $p_i' \subseteq C_i$ such that $lab(p_i') \equiv
(z_{1}s_{1}z_{1}^{-1}) \cdots (z_{m}s_{m}z_{m}^{-1})$. Since
$\Gamma$ is $G$-based, the subpaths of $p_i'$ labelled by $s_j$
($1 \leq j \leq m$) are closed. Therefore $p_i'$ has the following
decomposition:
$$p_i'=(c_{1}q_{1}\overline{c_{1}}) \cdots
(c_{m}q_{m}\overline{c_{m}}),$$ where $lab(c_{j}) \equiv z_{j}$
and $lab(q_{j}) \equiv s_{j}$ ($1 \leq j \leq m$). Thus the path
$t_{i-1} p_i \overline{t_{i-1}}$ can be obtained by free
reductions from the path
$$\left(t_{i-1}(c_{1}q_{1}\overline{c_{1}})\overline{t_{i-1}}\right) \cdots
\left(t_{i-1}(c_{m}q_{m}\overline{c_{m}})\overline{t_{i-1}}\right).$$
For each $1\leq j \leq m$, the path
$t_{i-1}(c_{j}q_{j}\overline{c_{j}})\overline{t_{i-1}}$ is a free
reduction of the path
$$(t_{i-1} c_{j}
\overline{t_{\tau(c_{j})}})(t_{\tau(c_{j})}q_{j}\overline{t_{\tau(c_{j})}})(\overline{t_{i-1}
c_{j} \overline{t_{\tau(c_{j})}}}).$$
Since $lab(t_{i-1} c_{j} \overline{t_{\tau(c_{j})}}) \in
\widetilde{H}$ and
$lab(\phi(t_{\tau(c_{j})}q_{j}\overline{t_{\tau(c_{j})}})) \in
R_H$ we conclude that
$lab(t_{i-1}(c_{j}q_{j}\overline{c_{j}})\overline{t_{i-1}}) \in
N_H$ ($1 \leq j \leq m$). Therefore $lab(t_{i-1} p_i
\overline{t_{i-1}}) \in N_H$. We are done.
{ \ } \\
Assume now that $1 \neq_G w_i \in A \cap G_{1}$.
Since for all $1 \leq j \leq k$ the vertices $v_j=\tau(p_j)$ are
bichromatic, and because the graph $\Gamma$ is compatible, there
exists an $X_2$-monochromatic path $p_i'$ in $\Gamma$ such that
$\iota(p_i')=\iota(p_i)$, $\tau(p_i')=\tau(p_i)$ and $lab(p_i')=_G
lab(p_i)$. See Figure~\ref{fig:ProofReidmSchreier} (b).
Hence the path $p$ can be obtained by free reductions from the
following path
$$ \left( \: (t\overline{ t}_{i-1}) (t_{i-1} p_i\overline{p_i'}
\overline{t_{i-1}}) (t_{i-1}\overline{t}) \: \right)
\left(tp'_ip_{i+1} \cdots p_k \right).$$
Thus $lab(t\overline{ t}_{i-1}) \in \widetilde{H}$, and the number
of the maximal monochromatic subpaths of the path
$$tp'_ip_{i+1} \cdots p_k=p_1 \cdots p_{i-2}(p_{i-1}p'_ip_{i+1})p_{i+2}
\cdots p_k$$ is $k-2$. Therefore, by the inductive assumption,
$lab (tp'_ip_{i+1} \cdots p_k) \in N_H$. To get the desired
conclusion it remains to show that $lab(t_{i-1}
(p_i\overline{p'_i}) \overline{t_{i-1}}) \in N_H$.
Let $lab(p_i) =_{G_1} a_1 \cdots a_m $, where $a_j$ are
generators of $A \cap G_1$. Let $b_j$ be corresponding generators
of $A \cap G_2$ such that $a_j=_G b_j$ and $a_j{b_j}^{-1} \in R$
($1 \leq j \leq m$). Note that
$$ (a_1 \cdots a_m)(b_1 \cdots b_m)^{-1} =_F
\left(a_1{b_1}^{-1}\right)\left(b_1 \left( a_2{b_2 }^{-1}
\right) b_1 ^{-1}\right) \cdots
\left(b_1 \cdots b_{m-1} \left(a_m b_m ^{-1} \right) b_{m-1}
^{-1} \cdots b_1^{-1}\right).$$
Since monochromatic components of $\Gamma$ are
$X_i^{\pm}$-saturated ($i \in \{1,2\}$), and because $\iota(p_i)
\in VB(\Gamma)$, there exist paths $\gamma_1$ and $\delta_1$ such
that $\iota(\gamma_1)=\iota(p_i)=\iota(\delta_1)$ and $
lab(\gamma_1) \equiv a_1$, $lab(\delta_1) \equiv b_1$. Since
$\Gamma$ is compatible, $\tau(\gamma_1)=\tau(\delta_1) \in
VB(\Gamma)$. Thus there exist paths $\gamma_2$ and $\delta_2$ such
that $\iota(\gamma_2)=\tau(\gamma_1)=\iota(\delta_2)$ and $
lab(\gamma_2) \equiv a_2$, $lab(\delta_2) \equiv b_2$. Since
$\Gamma$ is compatible, $\tau(\gamma_2)=\tau(\delta_2) \in
VB(\Gamma)$.
Continuing in this manner one can construct such paths
$\gamma_j$, $\delta_j$ for all $1 \leq j \leq m$. Thus $p_i$ and
$p'_i$ are free reductions of the paths $\gamma_1 \cdots
\gamma_m$ and $\delta_1 \cdots \delta_m$, respectively.
Hence the path $p_i\overline{p_i'}$ can be obtained by free
reductions from the path
$$ \left(\gamma_1{\overline{\delta_1}} \right)\left(\delta_1 \left( \gamma_2{\overline{\delta_2}
} \right) \overline{\delta_1} \right) \cdots
\left(\delta_1 \cdots \delta_{m-1} \left(\gamma_m
\overline{\delta_m } \right)\overline{ \delta_{m-1}} \cdots
\overline{\delta_1} \right).$$
Therefore the path $t_{i-1}(p_i\overline{p_i'})\overline{t}_{i-1}$
is a free reduction of
$$ \left(t_{i-1}(\gamma_1{\overline{\delta_1}}) \overline{t}_{i-1} \right)
\cdots
\left(t_{i-1}(\delta_1 \cdots \delta_{m-1} \left(\gamma_m
\overline{\delta_m } \right)\overline{ \delta_{m-1}} \cdots
\overline{\delta_1}) \overline{t}_{i-1}\right).$$
For each $1 \leq j \leq m$, the path $t_{i-1}(\delta_1 \cdots
\delta_{j-1} \left(\gamma_j \overline{\delta_j } \right)\overline{
\delta_{j-1}} \cdots \overline{\delta_1}) \overline{t}_{i-1}$ is
a free reduction of the path
$$\left(t_{i-1}\delta_1 \cdots \delta_{j-1}
\overline{t_{\iota(\gamma_j)}}\right)
\left(t_{\iota(\gamma_j)}(\gamma_j{\overline{\delta_j} })
\overline{t_{\iota(\gamma_j)}} \right) \left(t_{\iota(\gamma_j)}
\overline{\delta_{j-1}} \cdots \overline{\delta_1}
\overline{t_{i-1}}\right).$$
Since $ lab\left(\phi\left(t_{\iota(\gamma_j)}(\gamma_j
\overline{\delta_j}) \overline{t_{\iota(\gamma_j)}}\right)\right)
\in R_H$ and
$lab(t_{i-1}\delta_1 \cdots \delta_{j-1}
\overline{t_{\iota(\gamma_j)}}) \in \widetilde{H}$, we conclude
that for each $1 \leq j \leq m$,
$$lab(t_{i-1}(\delta_1 \cdots
\delta_{j-1} \left(\gamma_j \overline{\delta_j } \right)\overline{
\delta_{j-1}} \cdots \overline{\delta_1}) \overline{t}_{i-1}) \in
N_H.$$
Therefore $lab(t_{i-1} (p_i\overline{p_i'}) \overline{t_{i-1}})
\in N_H$. We are done.
\end{proof}
\begin{cor} \label{cor:ReidShcrPrecovers}
Let $(\Gamma,v_0)$ be a finite precover of $G$.
Then there exists an algorithm which computes the subgroup of $G$
determined by $(\Gamma,v_0)$, that is, computes a finite group
presentation of $H=Lab(\Gamma,v_0)$.
\end{cor}
\begin{proof}
We compute the sets $X_H$ and $R_H$ according to their
definitions. These sets are finite, because the graph $\Gamma$ is
finite.
By Theorem \ref{thm:NewReidmeisterSchreier}, $H= gp\langle X_H \;
| \; R_H \rangle$.
\end{proof}
\begin{cor} \label{cor:NewReidmShcreier}
Let $h_1, \ldots, h_n \in G$.
Then there exists an algorithm which computes a finite group
presentation of the subgroup $H=\langle h_1, \ldots, h_n \rangle$
of $G$ (not necessarily with respect to $\{h_1, \ldots, h_n\}$).
\end{cor}
\begin{proof} We first construct the graph $(\Gamma(H),v_0)$, using the
generalized Stallings' folding algorithm. By Theorem~\ref{thm:
properties of subgroup graphs} (2), this graph is a finite
precover of $G$.
Now we proceed according to Corollary~\ref{cor:ReidShcrPrecovers}.
\end{proof}
\begin{cor}
Amalgams of finite groups are coherent.
\end{cor}
\begin{remark}
{\rm As is well known, the Reidemeister-Schreier method yields a
presentation of a subgroup $H$ which is usually not in a useful
form. Namely, some of the generators are redundant and can be
eliminated, while some of the relators can be simplified. In order
to improve (simplify) this presentation, one can apply Tietze
transformations. An efficient version of such a simplification
procedure was developed in \cite{havas, hkrr}.
}
$\diamond$
\end{remark}
\begin{ex} \label{ex:ReidmShreier}
{\rm Let $G=gp\langle x,y | x^4, y^6, x^2(y^3)^{-1}
\rangle=\mathbb{Z}_4 \ast_{\mathbb{Z}_2} \mathbb{Z}_6$.
Recall that $G$ is isomorphic to $SL(2,\mathbb{Z})$ under the
homomorphism
$$x\mapsto \left(
\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}
\right), \ y \mapsto \left(
\begin{array} {cc}
0 & -1\\
1 & 1
\end{array}
\right).$$
Let $H =\langle xyx^{-1}, yxy^{-1} \rangle$ be a subgroup of
$G$. The subgroup graph $\Gamma(H)$ constructed by the generalized
Stallings' folding algorithm is presented on
Figure~\ref{fig:ExReidmSchreier}.
We apply to $\Gamma(H)$ the algorithm described in the
proof of Corollary~\ref{cor:NewReidmShcreier}.
We first compute $X_H$ according to (\ref{eq:Def_of X_H}):
$$h_1=xyx^{-1}, \ h_2=x^2, \ h_3=yxy^{-1}, \ h_4=y^3.$$
The computation of $R_H$ according to (\ref{eq:DefII of R H})
consists of the following steps:
$$\phi'(x^4)=(h_2)^2, \ \phi'(y^6)=(h_4)^2, \
\phi'(x^2(y^3)^{-1})=h_2(h_4)^{-1},$$
$$\phi'(x(x^4)x^{-1})=(h_2)^2, \ \phi'(x(y^6)x^{-1})=(h_1)^6, \
\phi'(x(x^2(y^3)^{-1})x^{-1})=h_2(h_1)^{-3},$$
$$\phi'(y(x^4)y^{-1})=(h_3)^4, \ \phi'(y(y^6)y^{-1})=(h_4)^2, \
\phi'(y(x^2(y^3)^{-1})y^{-1})=h_3^2(h_4)^{-1}.$$
Therefore $H =gp\langle h_1, h_3 \; | \; h_1^6, h_3^4,
h_1^3=h_3^2 \rangle$.
}
$\diamond$
\end{ex}
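For completeness, the passage from the four generators and the relators computed above to the final presentation can be made explicit by Tietze transformations; the intermediate elimination steps below are routine and are not spelled out in the source. Eliminating $h_2$ via the relator $h_2(h_1)^{-3}$ (so $h_2=h_1^3$) and then $h_4$ via $h_2(h_4)^{-1}$ (so $h_4=h_1^3$) turns the relators $(h_2)^2$ and $(h_4)^2$ into consequences of $(h_1)^6$, while $h_3^2(h_4)^{-1}$ becomes $h_1^3=h_3^2$:
$$gp\langle h_1, h_2, h_3, h_4 \; | \; (h_2)^2, (h_4)^2,
h_2(h_4)^{-1}, (h_1)^6, h_2(h_1)^{-3}, (h_3)^4, h_3^2(h_4)^{-1}
\rangle = gp\langle h_1, h_3 \; | \; h_1^6, h_3^4, h_1^3=h_3^2
\rangle.$$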
\begin{figure}[!htb]
\psfrag{v0 }[][]{$v_0$} \psfrag{x }[][]{$x$}
\psfrag{y }[][]{$y$}
\includegraphics[width=0.5\textwidth]{ExReidmSchreier.eps}
\caption {{\footnotesize The bold edges of the graph $\Gamma(H)$
correspond to a spanning tree $T$} \label{fig:ExReidmSchreier}}
\end{figure}
\subsection*{Complexity.}
Let $m$ be the sum of the lengths of the words $h_1, \ldots, h_n$.
By Theorem~\ref{thm: properties of subgroup graphs} $(4)$, the
generalized Stallings' algorithm computes $(\Gamma(H),v_0)$ in
time $O(m^2)$.
The construction of $X_H$, which is a free basis of
$\widetilde{H}= \varphi_1 \left(lab(Loop(\Gamma(H),v_0)) \right)$,
takes $O(|E(\Gamma(H))|^2)$, by \cite{b-m-m-w}. Since, by
Theorem~\ref{thm: properties of subgroup graphs} $(4)$,
$|E(\Gamma(H))|$ is proportional to $m$, the computation of $X_H$
takes $O(m^2)$.
To construct the sets $Q_v$ we try to read each one of the defining
relators of $G$ at each one of the vertices of the graph
$\Gamma(H)$. It takes at most $$|R| \cdot |V(\Gamma(H))| \cdot
\left(\sum_{v \in V(\Gamma(H))} deg(v) \right).$$ Since $\sum_{v
\in V(\Gamma(H))} deg(v)=2 |E(\Gamma(H))|$ and because, by our
assumption, the presentation of $G$ is given and is not a part
of the input, the computation of the sets $Q_v$ takes
$O(|V(\Gamma(H))| \cdot |E(\Gamma(H))|)$. Since, by
Theorem~\ref{thm: properties of subgroup graphs} (4),
$|V(\Gamma(H))|=O(m)$, it takes $O(m^2)$.
The rewriting process which yields the set of relators $R_H$ takes
at most $|V(\Gamma(H))| \cdot \left(\sum_{r \in R} |r| \right)$,
which is $O(|V(\Gamma(H))|)$.
Thus the complexity of the restricted Reidemeister-Schreier
process given by Corollary~\ref{cor:NewReidmShcreier} is $O(m^2)$.
\section{The Freeness Problem}
\label{sec:FreenessProblem}
The freeness of subgroups is one of the fundamental questions of
combinatorial and geometric group theory. The classical results on
this issue include the Nielsen-Schreier subgroup theorem for free
groups, a corollary of the Kurosh subgroup theorem, and the
Freiheitssatz of Magnus.
Namely, subgroups of free groups are free (I.3.8, \cite{l_s}). A
subgroup of a free product which has a trivial intersection with
all conjugates of the factors is free (\cite{l_s}, p.~120). A
subgroup $H$ of a one-relator group $G=gp\langle X \; | \; r=1
\rangle$, where $r$ is \emph{cyclically freely reduced}, is free
if $H$ is generated by a subset of $X$ which omits a generator
occurring in $r$ (II.5.1, \cite{l_s}).
Results concerning amalgamated free products follow from
H.~Neumann's subgroup theorem.
\begin{thm}[H.~Neumann, IV.6.6 \cite{l_s}]
Let $G=G_1 \ast_A G_2$ be a non-trivial free product with
amalgamation. Let $H$ be a finitely generated subgroup of $G$ such
that all conjugates of $H$ intersect $A$ trivially.
Then $H=F \ast ( \ast_j \; g_jH_jg_j^{-1})$, where $F$ is a free
group and each $H_j$ is the intersection of a subgroup of $H$ with
a conjugate of a factor of $G$.
\end{thm}
\begin{cor}[IV.6.7 \cite{l_s}] \label{cor: Newmann's thm}
Let $G=G_1 \ast_A G_2$ be a non-trivial free product with
amalgamation. If $H$ is a finitely generated subgroup of $G$ which
has trivial intersection with all conjugates of the factors, $G_1$
and $G_2$, of $G$, then $H$ is free.
\end{cor}
It turns out (Lemma \ref{free=each component is a Cayley(G_i)})
that the triviality of the intersections between $H$ and
conjugates of the factors, $G_1$ and $G_2$, of $G$ can be detected
from the subgroup graph $\Gamma(H)$ constructed by the generalized
Stallings' folding algorithm, when $G=G_1 \ast_A G_2$ is an
amalgam of finite groups. Therefore, by Corollary \ref{cor:
Newmann's thm}, the freeness of $H$ is decidable via its subgroup
graph.
We consider the \emph{freeness problem} to be the one which asks
to decide whether a subgroup of a given group $G$ is free.
Clearly, the freeness problem is solvable in amalgams of finite
groups.
Below we introduce a polynomial time algorithm (Corollary
\ref{freeness in amalgams of finite grp}) that employs subgroup
graphs constructed by the generalized Stallings' algorithm to
solve the freeness problem. A complexity analysis of the
algorithm is given at the end of the section.
\begin{lem} \label{free=each component is a Cayley(G_i)}
Let $H$ be a finitely generated subgroup of an amalgam of finite
groups $G=G_1 \ast_A G_2$.
Then $H$ has a trivial intersection with all conjugates of the
factors of $G$ if and only if each $X_i$-monochromatic component
$C$ of $\Gamma(H)$ is isomorphic to $Cayley(G_i)$, for all $i \in
\{1,2\}$.
Equivalently, by Lemma~\ref{lemma1.5}, if and only if
$Lab(C,v)=\{1\}$ for each $X_i$-monochromatic component $C$ of
$\Gamma(H)$ ($v \in V(C)$).
\end{lem}
\begin{proof} Assume first
that there exists an $X_i$-monochromatic component $C$ of
$\Gamma(H)$ ($i \in \{1,2\}$) which is not isomorphic to
$Cayley(G_i)$. Then, by Lemma~\ref{lemma1.5}, $(C,\vartheta)$ is
isomorphic to $Cayley(G_i, S, S \cdot 1)$, where $\vartheta
\in V(C)$ and $\{1\} \neq S \leq G_i$.
Let $1 \neq_G w \in S$. Then there exists a path $q$ in $C$ closed
at $\vartheta$ such that $lab(q) \equiv w$.
Let $p$ be an approach path in $\Gamma(H)$ from $\iota(p)=v_0$ to
$\tau(p)=\vartheta$. Let $u \equiv lab(p)$.
The path $pq\overline{p}$ is closed at $v_0$ in $\Gamma(H)$. Hence
$lab(pq\overline{p}) \in H$. Therefore $$lab(pq\overline{p})=_G
uwu^{-1} \in H \cap uLab(C,\vartheta)u^{-1}= H \cap uSu^{-1}.$$
Since $w \neq_G 1$, we have $uwu^{-1} \neq_G 1$ and hence $H \cap
uSu^{-1} \neq \{1\}$.
Assume now that there exists $\{1\} \neq S \leq G_i$ ($i \in
\{1,2\}$) such that $H \cap uSu^{-1} \neq \{1\}$, where $u \in
G$. Let $1 \neq_G h \in H \cap uSu^{-1}$. Thus $h=_G ugu^{-1}$,
where $1 \neq_G g \in S$. Without loss of generality we can
assume that the words $u$ and $g$ are normal.
If the word $ugu^{-1}$ is in normal form, then there exists a path
$p$ in $\Gamma(H)$ closed at $v_0$ such that $lab(p) \equiv
ugu^{-1}$. Thus there is a decomposition $p=p_1p_2\overline{p_1}$
(because $\Gamma(H)$ is $G$-based, so it is a well-labelled
graph), where $lab(p_1) \equiv u$ and $lab(p_2) \equiv g$. Let $C$
be an $X_i$-monochromatic component of $\Gamma(H)$ such that
$p_2\subseteq C$ and let $v=\tau(p_1)$. Hence $g \equiv lab(p_2)
\in Lab(C, v) \leq G_i$. Thus $Lab(C,v)\neq \{1\}$. Equivalently,
by Lemma~\ref{lemma1.5}, $C$ is not isomorphic to $Cayley(G_i)$.
Assume now that the word $ugu^{-1}$ is not in normal form. Let
$(u_1, \ldots, u_k)$ be a normal decomposition of $u$. Since $g
\in G_i$, its normal decomposition is $(g)$. Hence the normal
decomposition of $ugu^{-1}$ has the form
$$(u_1, \ldots, u_{j-1},w, u_{j-1}^{-1}, \ldots , u_1^{-1}),$$
where $w=_G u_j \ldots u_kgu_k^{-1} \ldots u_{j}^{-1} \in G_l
\setminus A$ and $u_{j-1} \in G_m \setminus A$ ($1 \leq l \neq m
\leq 2$).
Let $u' \equiv u_1 \ldots u_{j-1}$. Then $h=_G u'
w(u')^{-1}$, while the word $u' w(u')^{-1}$ is in normal form and
$w \in G_l$, $l\in\{1,2\}$. Hence, by arguments similar to those
used in the previous case, we are done.
\end{proof}
\begin{thm} \label{cor: h is free}
Let $H$ be a finitely generated subgroup of an amalgam of finite
groups $G=G_1 \ast_A G_2$.
Then $H$ is free if and only if each $X_i$-monochromatic
component of $\Gamma(H)$ is isomorphic to $Cayley(G_i)$, for all
$i \in \{1,2\}$.
\end{thm}
\begin{proof}
The statement follows immediately from Corollary \ref{cor:
Newmann's thm} and Lemma \ref{free=each component is a
Cayley(G_i)}.
\end{proof}
Combining Lemma \ref{free=each component is a Cayley(G_i)} with
the Torsion Theorem for amalgamated free products we get Corollary
\ref{cor:ConditionsTorsionFree}.
\begin{thm}[Torsion Theorem, IV.2.7, \cite{l_s}] \label{torsion theorem}
Every element of finite order in $G=G_1 \ast_A G_2$ is a conjugate
of an element of finite order in $G_1$ or $G_2$.
\end{thm}
\begin{cor} \label{cor:ConditionsTorsionFree}
Let $H$ be a finitely generated subgroup of an amalgam of finite
groups $G=G_1 \ast_A G_2$.
Then $H$ is torsion free if and only if each $X_i$-monochromatic
component of $\Gamma(H)$ is isomorphic to $Cayley(G_i)$, for all
$i \in \{1,2\}$.
\end{cor}
\begin{cor} \label{freeness in amalgams of finite grp}
Let $ h_1, \ldots, h_k \in G.$ Then there exists an algorithm
which decides whether or not the subgroup $H=\langle h_1,
\ldots, h_k \rangle$ is a free subgroup of $G$.
\end{cor}
\begin{proof} We first construct the graph $\Gamma(H)$, using the
generalized Stallings' folding algorithm.
Now, for each $X_i$-monochromatic
component $C$ of $\Gamma(H)$ we verify if $C$ is isomorphic to
$Cayley(G_i)$ ($i \in \{1,2\}$). This can easily be done by
checking the number of vertices of $C$: $|V(C)|=|G_i|$ if and only
if $C$ is isomorphic to $Cayley(G_i)$.
By Theorem \ref{cor: h is free}, $H$ is free if and only if each
monochromatic component of $\Gamma(H)$ is isomorphic to the
Cayley graph of an appropriate factor of $G$.
\end{proof}
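To make the component count concrete, the check above can be sketched in Python. This is a hypothetical illustration, not part of the paper's formal development: edges of $\Gamma(H)$ are triples $(u,x,v)$, \texttt{colour\_of} maps each generator to the index of its factor, and \texttt{factor\_order} records $|G_1|$ and $|G_2|$; all of these names are our own conventions.

```python
from collections import defaultdict

def monochromatic_components(edges, colour_of):
    """Split the vertices into X_i-monochromatic components: maximal
    connected subgraphs spanned by edges labelled within one factor."""
    adj = defaultdict(lambda: defaultdict(set))  # colour -> vertex -> neighbours
    for (u, x, v) in edges:
        c = colour_of[x]
        adj[c][u].add(v)
        adj[c][v].add(u)
    components = []
    for c, nbrs in adj.items():
        seen = set()
        for start in nbrs:
            if start in seen:
                continue
            stack, comp = [start], set()
            while stack:               # depth-first search within one colour
                w = stack.pop()
                if w not in comp:
                    comp.add(w)
                    stack.extend(nbrs[w] - comp)
            seen |= comp
            components.append((c, comp))
    return components

def is_free(edges, colour_of, factor_order):
    """H is free iff every X_i-monochromatic component of Gamma(H)
    has exactly |G_i| vertices (the criterion of the proof above)."""
    return all(len(comp) == factor_order[c]
               for c, comp in monochromatic_components(edges, colour_of))
```

For instance, a single $X_1$-edge realizing $Cayley(\mathbb{Z}_2)$ passes the test, while attaching an extra monochromatic $X_2$-loop at one vertex fails it when $|G_2|=2$.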
\begin{remark}
{\rm If $H$ is free then its free basis can be computed using the
restricted Reidemeister-Schreier procedure
(Corollary~\ref{cor:NewReidmShcreier}) followed by a
simplification process based on Tietze transformations. For an
effective version of a simplification procedure, in which
redundant generators are eliminated successively using a substring
search technique, see \cite{havas, hkrr}. }
$\diamond$
\end{remark}
\begin{ex} \label{ex:Freeness}
{\rm Let $G=gp\langle x,y | x^4, y^6, x^2=y^3 \rangle=\mathbb{Z}_4
\ast_{\mathbb{Z}_2} \mathbb{Z}_6$.
Let $H_1$ and $H_2$ be finitely generated subgroups of $G$ such
that
$$H_1=\langle xy \rangle \ {\rm and} \ H_2=\langle xy^2, yxyx \rangle.$$
The graphs $\Gamma(H_1)$ and $\Gamma(H_2)$ in Figure \ref{fig:
ExOfFI} are the subgroup graphs of $H_1$ and $H_2$, respectively,
constructed by the generalized Stallings' folding algorithm. See
Example~\ref{example: graphconstruction} from the Appendix for the
detailed construction of these graphs.
Applying the above algorithm to the graphs $\Gamma(H_1)$ and
$\Gamma(H_2)$, we conclude that $H_2$ is not free, while
$H_1=FG(\{xy\})$ is free. }
$\diamond$
\end{ex}
\begin{figure}[!htb]
\psfrag{A }[][]{$\Gamma(H_1)$} \psfrag{B }[][]{$\Gamma(H_2)$}
\includegraphics[width=\textwidth]{ExFiniteIndex.eps}
\caption {{\footnotesize The subgroup graphs $\Gamma(H_1)$ and
$\Gamma(H_2)$ from Example~\ref{ex:Freeness}} \label{fig: ExOfFI}}
\end{figure}
\subsection*{Complexity.}
Let $m$ be the sum of the lengths of the words $h_1, \ldots h_k$.
By Theorem~\ref{thm: properties of subgroup graphs} (4), the
complexity of the construction of $\Gamma(H)$ is $O(m^2)$.
Detecting the monochromatic components of this graph takes $ \:
O(|E(\Gamma(H))|) \: $. Since, by our assumption, all the
essential information about $A$, $G_1$ and $G_2$ is given and it
is not a part of the input, the verifications concerning a
particular monochromatic component of $\Gamma(H)$ take $O(1)$.
Therefore doing such verifications for all monochromatic
components of $\Gamma(H)$ takes $O(|E(\Gamma(H))|)$.
Since, by Theorem~\ref{thm: properties of subgroup graphs} (4), $|E(\Gamma(H))|$ is proportional to $m$, the
complexity of the ``freeness'' detection presented along with the
proof of Corollary \ref{freeness in amalgams of finite grp} is
$O(m^2)$, that is, quadratic in the size of the input.
If the subgroup $H$ is given by the graph $\Gamma(H)$, then
verifying that $H$ is a free subgroup of $G$ takes
$O(|E(\Gamma(H))|)$. That is, the ``freeness'' algorithm is then
even linear in the size of the input.
\section{The Finite Index Problem}
\label{sec:FiniteIndexProblem}
One of the first natural computational questions regarding
subgroups is to compute the index of the subgroup in the given
group, the \emph{finite index problem}.
As is well known (\cite{b-m-m-w, kap-m}), this problem is easily
solvable via subgroup graphs in the case of free groups. Recall
that a subgroup $H$ of a free group $FG(X)$ has finite index if
and only if its subgroup graph $\Gamma_H$ constructed by the
Stallings' folding algorithm is \emph{full}, i.e.\
\emph{complete}, i.e.\ \emph{$X^{\pm}$-saturated}. That is, for
each vertex $v \in V(\Gamma_H)$ and for each $x \in X^{\pm}$ there
exists an edge which starts at $v$ and which is labelled by $x$.
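In the free-group case this saturation test is a direct scan of the graph. The following is a minimal sketch, not taken from \cite{b-m-m-w, kap-m}; we assume the common convention that the inverse of a generator is written as its upper-case letter (e.g.\ $a^{-1}$ as \texttt{A}), and all function names are ours.

```python
def inv(x):
    """Inverse letter under the convention 'a' <-> 'A' (an assumption)."""
    return x.swapcase()

def is_saturated(vertices, edges, alphabet):
    """Gamma_H is X^{±}-saturated iff every vertex has an outgoing edge
    for every letter of X^{±}; an x-edge arriving at v counts as an
    outgoing x^{-1}-edge at v."""
    out_labels = {v: set() for v in vertices}
    for (u, x, v) in edges:
        out_labels[u].add(x)
        out_labels[v].add(inv(x))
    full = set(alphabet) | {inv(x) for x in alphabet}
    return all(out_labels[v] >= full for v in vertices)
```

For instance, the folded graph of $\langle a^2 \rangle \leq FG(\{a\})$ is a $2$-cycle on one generator, which is saturated (the subgroup has index $2$), while a single $a$-segment is not saturated.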
In \cite{schupp} similar results were obtained for finitely
generated subgroups of certain Coxeter groups and surface groups
of extra-large type.
In general, the index $[G:H]$ equals the \emph{sheet number} of
the covering space, corresponding to the subgroup $H$, of the
standard $2$-complex representing the group $G$ (\cite{stil}).
Thus if $G$ is finitely presented, the index of $H$ in $G$ is
finite if and only if the 1-skeleton of the corresponding covering
space is finite, that is, if and only if the relative Cayley
graph, $Cayley(G,H)$, is finite.
By Theorem~\ref{thm: properties of subgroup graphs} (2) and
Corollary~\ref{cor:PrecoversSubgrOfCayleyGr}, a subgroup graph
$(\Gamma(H),v_0)$ is a subgraph of $(Cayley(G,H), H \cdot 1)$. It
turns out that there exists a strong connection between the index
of $H$ in $G$ and how ``saturated'' the graph $\Gamma(H)$ is. We
describe this connection in Theorem~\ref{fi} and use it to solve
the \emph{finite index problem} in amalgams of finite groups
(Corollary \ref{algorithm finite index_finite_grp}).
The complexity analysis of the presented algorithm is given at the
end of the section.
\begin{thm} \label{fi}
Let $H$ be a finitely generated subgroup of an amalgam of finite
groups $G=G_1 \ast_A G_2$.
Then $[G:H] < \infty$ if and only if $\Gamma(H)$ is
$X^{\pm}$-saturated.
\end{thm}
\begin{proof} The ``if'' direction is clear. Indeed, if $\Gamma(H)$ is
$X^{\pm}$-saturated then, by Lemma~\ref{lemma1.5}, $\Gamma(H)$ is
isomorphic to $Cayley(G,H,H \cdot 1)$. Since, by Theorem~\ref{thm:
properties of subgroup graphs}, the graph $\Gamma(H)$ is finite,
$Cayley(G,H,H \cdot 1)$ is a finite graph. Hence
$[G:H]=|V(Cayley(G,H))| < \infty$.
To prove the opposite direction we assume that $\Gamma(H)$ is not
$X^{\pm}$-saturated. Note that since $\Gamma(H)$ is a precover of
$Cayley(G,H)$, each of its monochromatic components is either
$X^{\pm}_1$-saturated or $X^{\pm}_2$-saturated. Thus every
bichromatic vertex of $\Gamma(H)$ is $X^{\pm}$-saturated and each
monochromatic vertex is either $X^{\pm}_1$-saturated or
$X^{\pm}_2$-saturated.
Let $v$ be an $X_1$-monochromatic vertex of $\Gamma(H)$. Then, by
Lemma~\ref{lemma2.12}, there is a path $p$ with $\iota(p)=v_0, \
\tau(p)=v$, such that $w \equiv lab(p)$ is a word in normal form.
Let $(w_1, \ldots, w_n)$ be a normal decomposition of $w$. Then
there is $x \in X_2 \setminus A$ \footnote{We assume that $A$ is a
proper subgroup of $G_1$ and of $G_2$, otherwise the amalgam $G_1
\ast_A G_2$ is a finite group and all computations are trivial in
our context.}, such that $(w_1, \ldots, w_n,x)$ represents a word
$w' \in G$ in normal form. Now if $w_1 \in G_1$ (more precisely,
$w_1 \in G_1 \setminus A$, since $w$ is a normal word) or if $w_1
\in G_2$ but $xw_1 \in G_2 \setminus A$ then $(w')^n$ is in normal
form for all $n \geq 1$.
Otherwise $xw_1 \in A$. Then there exists $y \in X_1
\setminus A$, such that $(w_1, \ldots, w_n,x,y)$ represents a word
$w'' \in G$ in normal form and $(w'')^n$ is in normal form for
all $n \geq 1$. But neither $w'$ nor $w''$, and hence neither
$(w')^n$ nor $(w'')^n$ (for all $n \geq 1$), labels a path closed
at $v_0$ in $\Gamma(H)$.
Thus $(w')^n \not\in H$ and $(w'')^n \not\in H$, for all $n \geq
1$.
The existence of such elements shows that $H$ has infinite index
in $G$. Indeed, for all $n_1 > n_2 \geq 1$ and $g \in \{w',w''\}$
we have $Hg^{n_1} \neq Hg^{n_2}$: otherwise $g^{n_1-n_2} \in H$
with $n_1-n_2 \geq 1$, which contradicts the previous observation.
\end{proof}
\begin{cor} \label{algorithm finite index_finite_grp}
Let $h_1, \ldots h_n \in G$.
Then there exists an algorithm which computes the index of the
subgroup $H=\langle h_1, \ldots, h_n \rangle$ in $G$.
\end{cor}
\begin{proof} We first construct the graph $\Gamma(H)$, using the
generalized Stallings' folding algorithm.
Then we verify if this graph is $(X_1 \cup X_2)^{\pm}$-saturated.
If not, the subgroup $H$ has infinite index in $G$, by
Theorem~\ref{fi}. Otherwise, the index of $H$ in $G$ is finite
and $[G:H]=|V(\Gamma(H))|$.
\end{proof}
\subsection*{Complexity.}
Let $m$ be the sum of the lengths of the words $h_1, \ldots h_n$.
By Theorem~\ref{thm: properties of subgroup graphs} $(4)$, the
generalized Stallings' algorithm computes $(\Gamma(H),v_0)$ in
time $O(m^2)$.
By the proof of Corollary \ref{algorithm finite
index_finite_grp}, detecting the index takes time
proportional to $|E(\Gamma(H))|$. (Indeed, for each vertex of
$\Gamma(H)$ we have to check if it is bichromatic, which takes
$\sum_{v \in V(\Gamma(H))} deg(v)=2 |E(\Gamma(H))|$ in total.)
Since, by Theorem~\ref{thm: properties of subgroup graphs} (4),
$|E(\Gamma(H))|=O(m)$, the complexity of the algorithm given along
with the proof of Corollary \ref{algorithm finite
index_finite_grp} is $O(m^2)$.
If the subgroup $H$ is given by $(\Gamma(H),v_0)$ and not by a
finite set of subgroup generators, then the above algorithm is
even linear in the size of the graph.
\begin{ex} \label{ex:FiniteIndex}
{\rm Let $H_1$ and $H_2$ be the subgroups considered in Example
\ref{ex:Freeness}.
Analyzing the ``saturation'' of the graphs $\Gamma(H_1)$ and
$\Gamma(H_2)$ illustrated in Figure~\ref{fig: ExOfFI}, we see that
$[G:H_1]=\infty$, while $[G:H_2]=2$.}
$\diamond$
\end{ex}
\section{The Separability Problem}
\label{sec:Separability}
A group $G$ is \emph{subgroup separable}, or \emph{LERF}, if
given a finitely generated subgroup $H$ of $G$ and $g \not\in H$
there exists a finite index subgroup $K \leq G$ with $H \leq K$
and $g \not\in K$.
We call $K$ a \emph{separating subgroup}. If one places a topology
on $G$ (called the \emph{profinite topology} \cite{hall2}), by
taking the collection of finite index subgroups as a neighborhood
basis of 1, then $G$ is LERF if and only if all its finitely
generated subgroups are closed.
\emph{LERF} was introduced by M.~Hall \cite{hall1}, who
proved that free groups are LERF. This property is preserved by
free products \cite{burns, ro}, but it is not preserved by direct
products: $F_2 \times F_2$ is not LERF \cite{a_g}. Free products
of LERF groups with finite amalgamation are LERF \cite{a_g}. In
general, the property is not preserved under free products with
infinite cyclic amalgamation \cite{long-niblo, ribs}. However
amalgams of free groups over a cyclic subgroup are LERF
\cite{b-b-s}, and, by \cite{gi_sep}, free products of a free group
and a LERF group amalgamated over a cyclic subgroup maximal in the
free factor are LERF as well.
Subgroup separability of some classes of hyperbolic groups was
widely exploited in papers of Gitik \cite{gi_sep, gi_doub,
gi_rips}. Long and Reid \cite{long-reid1, long-reid2} studied this
property in 3-manifold topology and in hyperbolic Coxeter groups.
Results on subgroup separability for right-angle Coxeter groups
and for Coxeter groups of extra-large type can be found in
\cite{gi_coxeter} and in \cite{schupp}, respectively. These papers
include detailed algorithms which construct separating subgroups
using graph-theoretic methods.
The \emph{M.~Hall property} is closely connected with subgroup
separability. A group $G$ is \emph{M.~Hall} if and only if each of
its finitely generated subgroups is a free factor in a subgroup of
finite index in $G$. The M.~Hall property of virtually free groups
was studied in depth in the works of Bogopolskii \cite{b1, b2},
where a criterion to determine whether a virtually free group is
M.~Hall was given.
An algorithmic aspect of the LERF property can be formulated as
the \emph{separability problem}. It asks to find an algorithm
which constructs a separating subgroup $K$ for a given finitely
generated subgroup $H$ and $g \not\in H$.
Let us emphasize that knowing that separating subgroups exist does
not yet provide an effective procedure to construct them. Thus, on
the one hand, since amalgams of finite groups are LERF, by the
result of Allenby and Gregorac \cite{a_g}, the separability
problem in this class of groups might be solvable. On the other
hand, we are interested in finding an efficient solution.
Below we adopt some ideas of Gitik introduced in \cite{gi_sep} to
develop such an algorithm (given along with the proof of the Main
Theorem (Theorem~\ref{thm:MainTheoremSeparability})). Our main
result on this question is summarized in the following theorem.
\begin{thm}[The Main Theorem] \label{thm:MainTheoremSeparability}
Let $G=G_1 \ast_A G_2$ be an amalgam of finite groups.
The \underline{separability
problem} for $G$ is solvable if one of the following holds:
\begin{enumerate}
\item[(1)] $A$ is cyclic;
\item[(2)] $A$ is malnormal in at least one of the factors $G_1$ or
$G_2$;
\item[(3)] $A \leq Z(G_i)$, for some $i \in \{1,2\}$.
\end{enumerate}
In particular, the separability problem is solvable if
at least one of the factors ($G_1$ or $G_2$) is Abelian.
\end{thm}
Recall that given a finitely generated subgroup $H$ of an amalgam
of finite groups $G=G_1 \ast_A G_2$ the generalized Stallings'
algorithm constructs the canonical subgroup graph $\Gamma(H)$
which is a (reduced) precover of $G$ (Theorem~\ref{thm: properties
of subgroup graphs} (2)). Thus in order to prove our Main Theorem
we first show that each finite precover $(\Gamma,v_0)$ of $G$,
when $G$ satisfies one of the conditions $(1)-(3)$, can be
embedded in a finite $X_i^{\pm}$-saturated precover
$(\Gamma',v_0)$ of $G$ ($i \in \{1,2\}$). Then we prove that such a precover can be
embedded in a finite cover $(\Gamma'',v_0)$ of $G$. Finally, we
take $K=Lab(\Gamma'',v_0)$ to be the separating subgroup. This
completes the proof of the Main Theorem.
Example~\ref{example: separability} demonstrates the computation
of the separating subgroup $K$ for a given subgroup $H \leq G$.
\medskip
The \emph{amalgam} of labelled graphs $\Gamma_1$ and $\Gamma_2$
along $\Gamma_0$, denoted by $\Gamma_1 \ast_{\: \Gamma_0}
\Gamma_2$, is the pushout of the following diagram in the category
of labelled graphs:
$$\begin{array}{ccl}
\Gamma_0 & \rightarrow & \Gamma_1 \\
\downarrow & \searrow & \downarrow \\
\Gamma_2 & \rightarrow & \Gamma_1 \ast_{\; \Gamma_0} \Gamma_2, \\
\end{array}$$
where $i_1:\Gamma_0 \rightarrow \Gamma_1$ and $i_2:\Gamma_0
\rightarrow \Gamma_2$ are injective maps and none of the graphs
need be connected. The amalgam depends on the maps $i_1$ and
$i_2$, but we omit reference to them whenever it does not cause
confusion. It can easily be seen that amalgamation consists of
taking the disjoint union of the graphs, performing the
identifications prescribed by $i_1$ and $i_2$, and applying
subsequent \emph{foldings} (an identification of the terminal
vertices of a pair of edges with the same origin and the same
label) until a well-labelled graph is obtained \cite{gi_sep, stal}.
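The folding step can be sketched as follows. This is a hypothetical illustration only: edges are triples $(u,x,v)$ with inverse letters written in upper-case, and the sketch ignores base points and the colour bookkeeping that the actual algorithm must track.

```python
def fold(edges):
    """Repeatedly identify the terminal vertices of a pair of edges with
    the same origin and the same label, until the graph is well-labelled."""
    edges = set(edges)
    while True:
        seen, merge = {}, None
        for (u, x, v) in edges:
            # consider the edge in both directions: (u, x, v) and (v, x^{-1}, u)
            for (a, lbl, b) in ((u, x, v), (v, x.swapcase(), u)):
                if seen.get((a, lbl), b) != b:
                    merge = (seen[(a, lbl)], b)   # two edges conflict at (a, lbl)
                    break
                seen[(a, lbl)] = b
            if merge:
                break
        if merge is None:
            return edges                          # no conflicting pair: folded
        keep, drop = merge
        edges = {(keep if p == drop else p, x, keep if q == drop else q)
                 for (p, x, q) in edges}
```

For example, folding a wedge of two $a$-edges with a common origin identifies their two endpoints, leaving a single $a$-edge.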
\begin{lem} \label{construction of X1-saturated precover}
Let $\Gamma$ be a finite precover of an amalgamated free product
of finite groups $G=G_1 *_A G_2$. Then $\Gamma$ can be embedded in
an $X_1^{\pm}$-saturated precover of $G$ with finitely many
vertices.
\end{lem}
\begin{proof}
Any vertex of a graph well-labelled with $X_1^{\pm} \cup
X_2^{\pm}$ has one of the following types:
\begin{itemize}
\item It is bichromatic.
\item It is $X_1$-monochromatic.
\item It is $X_2$-monochromatic.
\end{itemize}
Since $\Gamma$ is a precover of $G$, the above types take the form
(respectively):
\begin{itemize}
\item It is $(X_1^{\pm} \cup X_2^{\pm})$-saturated.
\item It is $X_1$-monochromatic and $X_1^{\pm}$-saturated.
\item It is $X_2$-monochromatic and $X_2^{\pm}$-saturated.
\end{itemize}
The proof is by induction on the number of vertices of the third
type. If no such vertices exist, then $\Gamma$ is already
$X_1^{\pm}$-saturated. Assume that $\Gamma$ has $m$
$X_2$-monochromatic vertices, and let $v$ be one of them.
Let $C$ be an $X_2$-monochromatic component, such that $v \in
VM_2(C)$. Let $S= A_v$ be the \emph{stabilizer of} $v$ under the
action of $A$ on the vertices of $C$, that is $A_v=\{ x \in A \; |
\; v \cdot x=v \} \leq A$, and let $A(v)=\{ v \cdot x \; | \; x
\in A \} \subseteq V(C)$ be the \emph{$A$-orbit} of $v$.
Consider $Cayley(G_1,S,S \cdot 1)$. Thus $A_{S \cdot 1}=S=A_v$ and
the $A$-orbit $A(S \cdot 1)=\{ (S \cdot 1) \cdot x \; | \; x \in
A \}=\{ S x \; | \; x \in A \} \subseteq V(Cayley(G_1,S))$ is
isomorphic to $A(v)$. Hence, taking $\Gamma_v= \Gamma \ast_{\{v
\cdot x=S x \; | \; x \in A \}}Cayley(G_1,S)$, we get a finite
compatible graph whose monochromatic components are covers of the
factors $G_1$ or $G_2$. Therefore, by Corollary \ref{corol2.13},
$\Gamma_v$ is a precover of $G$.
Since $A_{S \cdot 1}=S=A_v$, the only identifications in
$\Gamma_v$ are between vertices of $A(v)$ and $A(S \cdot 1)$.
Since these are sets of monochromatic vertices of different
colors, no foldings are possible in $\Gamma_v$. Hence the graphs
$\Gamma$ and $Cayley(G_1, S)$ embed in $\Gamma_v$. Thus the images
in $\Gamma_v$ of the vertices of $A(v)$ (equivalently, of $A(S
\cdot 1)$) are bichromatic vertices, while the colour type of the
images of the other vertices of $\Gamma$ and $Cayley(G_1, S)$
remains unchanged. Hence
Therefore $\Gamma_v$ is a finite precover of
$G$ with $|VM_2(\Gamma_v)|<m$ such that $\Gamma$ embeds in
$\Gamma_v$. This completes the inductive step.
\end{proof}
\begin{remark} \label{remark: separability X2-sat precover}
{\rm By symmetric arguments, if the conditions of
Lemma~\ref{construction of X1-saturated precover} hold then
$\Gamma$ can be embedded in an $X_i^{\pm}$-saturated precover of
$G$ ($i \in \{1,2\}$) with finitely many vertices.}
$\diamond$
\end{remark}
The proof of Lemma~\ref{construction of X1-saturated precover}
yields the following technical result, which we employ later to
produce $X_i$-saturated precovers ($i \in \{1,2\}$).
\begin{cor} \label{cor:SeparabilityTechnical}
Let $G=G_1 \ast_A G_2$ be an amalgam of finite groups.
Let $\Gamma_i$ be a finite precover of $G$ (not necessarily
connected) and let $v_i \in VM_i(\Gamma_i)$ ($i \in \{1,2\}$).
If $A_{v_1}=A_{v_2}$ then $A(v_1)\simeq A(v_2)$, and
$\Gamma=\Gamma_1 \ast_{\{v_1 \cdot a=v_2 \cdot a \; | \; a \in
A\}}\Gamma_2$ is a finite precover of $G$ such that the graphs
$\Gamma_1$ and $\Gamma_2$ embed into the graph $\Gamma$.
\end{cor}
Now we consider $\Gamma$ to be a finite
$X_{\beta}^{\pm}$-saturated precover of $G$ ($\beta \in \{1,2\}$),
where $G=G_1 *_A G_2$ is an amalgamated free product of finite
groups. In the subsequent lemmas, it is shown that if one of
the conditions from Theorem~\ref{thm:MainTheoremSeparability} is
satisfied then $\Gamma$ can be embedded into a finite cover of
$G$.
Since the graph $\Gamma$ is $X_{\beta}^{\pm}$-saturated, any
vertex of $\Gamma$ is either bichromatic or
$X_{\beta}$-monochromatic.
Moreover, the graph $\Gamma$ is compatible, as a precover of $G$.
Hence any $A$-orbit consists of vertices of the same type.
Therefore the set of $X_{\beta}$-monochromatic vertices of
$\Gamma$ can be viewed as a disjoint union of distinct $A$-orbits.
This enables us to introduce the following notation.
For each $v \in VM_{\beta}(\Gamma)$ we set $n_v$ to be the number
of vertices in the $A$-orbit of $v$, that is $n_v=|A(v)|$. Recall
that $$A_v \leq A, \ A(v) \simeq {A} / {A_v}, \ {\rm thus} \
|A|=|A(v)||A_v|.$$
Let $n(\Gamma)=\{ n_v | v \in VM_{\beta}(\Gamma) \}$. For each $n
\in n(\Gamma)$, assume that $\Gamma$ has $m$ different $A$-orbits,
each containing $n$ $X_{\beta}$-monochromatic vertices. Let
$\{v_i| 1 \leq i \leq m \}$ be the set of representatives of these
orbits. Denote $S_i= A_{v_i}$. Then for all $1 \leq i \leq m$, $|
S_i |=\frac{|A|}{n}$.
Assume that $A$ has $r$ distinct subgroups $S_j$ ($1 \leq j \leq
r$) of order $\frac{|A|}{n}$ and assume that $\Gamma$ has $m_j$
representatives of distinct orbits $v_j \in VM_{\beta}(\Gamma)$ with
$A_{v_j}=S_j$. Hence $\sum^{r}_{j=1}{m_j}=m$.
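For instance (with hypothetical numbers, not taken from the paper), if $|A| = 6$ and an $A$-orbit has length $n = 3$, the orbit--stabilizer count above forces

```latex
% Hypothetical numerical instance of the count |A| = |A(v)||A_v|:
\[
  |A_v| \;=\; \frac{|A|}{|A(v)|} \;=\; \frac{6}{3} \;=\; 2,
  \qquad\text{so}\qquad
  |S_i| \;=\; \frac{|A|}{n} \;=\; 2 \quad (1 \leq i \leq m),
\]
% and the r candidate stabilizers S_j then range over the
% subgroups of A of order 2.
```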
With the above notation, we formulate Lemmas~\ref{separability:
A<=Z(G_i)}, \ref{separability: A is malnormal} and
\ref{separability: A is cyclic}.
\begin{lem} \label{separability: A<=Z(G_i)}
If $A$ is a center subgroup of $G_{\alpha}$ (that is $A \leq
Z(G_{\alpha})$ \footnote{Recall that the
$\diamond$mph{center of} $G$ is
the subgroup \ $ Z(G)=\{g \in G \; | \; gx=xg, \; \forall x \in
G\}.$ }),
then any finite $X_{\beta}^{\pm}$-saturated precover of $G$ can
be embedded in a cover of $G$ with finitely many vertices ($1 \leq
\beta \neq \alpha \leq 2$).
$\diamond$nd{lem}
\begin{proof}
The proof is by induction on $|n(\Gamma)|$.
Since $A \leq Z(G_{\alpha})$, $S_j$ is normal in $G_{\alpha}$ for
all $1 \leq j \leq r$. Therefore for each vertex $u \in
V(Cayley(G_{\alpha}, S_j))$, we have $A_u=S_j$. Indeed,
$$A_u=Lab(Cayley(G_{\alpha}, S_j),u) \cap A=g^{-1}S_jg \cap A=S_j \cap
A=S_j,$$ where $g \in G_{\alpha}$, such that $(S_j \cdot 1) \cdot
g = u$.
Thus distinct $A$-orbits of vertices in $Cayley(G_{\alpha},S_j)$
are isomorphic to each other and have length $n$. Their number is
equal to
$$\frac{|V(Cayley(G_{\alpha},S_j))|}{n}
=\frac{|G_{\alpha}|/|S_j|}{|A|/|S_j|}=\frac{|G_{\alpha}|}{|A|}=[G_{\alpha}:A].$$
Let $t=[G_{\alpha}:A]$. Let $\Gamma_{1}$ be the disjoint union of
$t$ isomorphic copies of $\Gamma$ and let $\Gamma_{2}$ be the
disjoint union of $m_j$ isomorphic copies of
$Cayley(G_{\alpha},S_j)$, for all $1 \leq j \leq r$. Then both
$\Gamma_{1}$ and $\Gamma_{2}$ have $tm_j$ distinct isomorphic
$A$-orbits of length $n$.
Let $\{ w_{ji} \; | \; 1 \leq i \leq tm_j, \; 1 \leq j \leq r \}$
and $\{ u_{ji} \; | \; 1 \leq i \leq tm_j, \; 1 \leq j \leq r \}$
be the sets of representatives of these orbits in $\Gamma_{1}$ and
in $\Gamma_2$, respectively. Thus $A_{w_{ji}}=S_j=A_{u_{ji}}$, for
all $1 \leq i \leq tm_j$ and $1 \leq j \leq r$.
Let $\Gamma'$ be the amalgam of $\Gamma_{1}$ and $\Gamma_{2}$ over
these sets of vertices,
$$ \Gamma'=\Gamma_{1} \ast_{\{w_{ji} \cdot a=u_{ji} \cdot a \; | \; a \in A \}} \Gamma_{2}.$$
By Corollary~\ref{cor:SeparabilityTechnical}, $\Gamma'$ is a
finite precover of $G$ such that the graphs $\Gamma_1$ and
$\Gamma_2$ embed in it. Therefore the graph $\Gamma$ embeds in
$\Gamma'$ as well. Moreover, by construction, the graph $\Gamma'$
is $X_{\beta}^{\pm}$-saturated, and $n(\Gamma')=n(\Gamma)
\setminus \{n\}$. Thus $\Gamma'$ satisfies the inductive
assumption. We are done.
\end{proof}
\begin{lem} \label{separability: A is malnormal}
If $A$ is a malnormal subgroup of $G_{\alpha}$ then any finite
$X_{\beta}^{\pm}$-saturated precover of $G$ can be embedded in a
cover of $G$ with finitely many vertices ($1 \leq \beta \neq
\alpha \leq 2$).
\end{lem}
\begin{proof}
The proof is by induction on $|n(\Gamma)|$.
Since $A$ is malnormal in $G_{\alpha}$, for each vertex $u \in
V(Cayley(G_{\alpha}, S_j))$ ($1 \leq j \leq r$) such that $u=(S_j
\cdot 1) \cdot g$, where $g \in G_{\alpha} \setminus A$, we have
$$A_u=Lab(Cayley(G_{\alpha}, S_j),u) \cap A=g^{-1}S_jg \cap A=\{1\}.$$
Therefore $A(u)\simeq A$ and $|A(u)|=|A|$. Thus the vertices of
$Cayley(G_{\alpha}, S_j)$ form one $A$-orbit isomorphic to
$A(v_j)$ of length $n$ with $A_{S_j \cdot 1}=S_j=A_{v_j}$, and
$c=(|V(Cayley(G_{\alpha}, S_j))|-n)/|A|$ \ $A$-orbits isomorphic
to $A(u)\simeq A$ of length $|A|$ with, roughly speaking, a
trivial $A$-stabilizer.
On the other hand, in $Cayley(G_{\beta})$ the number of distinct
$A$-orbits of length $|A|$ with the trivial $A$-stabilizer is $d=
|V(Cayley(G_{\beta}))|/|A|=|G_{\beta}|/|A|=[G_{\beta}:A]$.
Let $\Gamma_1$ be the disjoint union of $d$ isomorphic copies of
$\Gamma$ and $c r$ isomorphic copies of $Cayley(G_{\beta})$. Let
$\Gamma_2$ be the disjoint union of $m_j d$ isomorphic copies of
$Cayley(G_{\alpha},S_j)$, for all $1 \leq j \leq r$.
Then both $\Gamma_1$ and $\Gamma_2$ have $m_j d$ distinct
$A$-orbits of length $n$ isomorphic to $A(v_j)$, and $c d r$
different isomorphic $A$-orbits of length $|A|$.
Let $\{ w_{ji} \; | \; 1 \leq i \leq m_j d, \; 1 \leq j \leq r \}$
and $\{ u_{ji} \; | \; 1 \leq i \leq m_j d, \; 1 \leq j \leq r \}$
be the sets of representatives of the orbits of length $n$ in
$\Gamma_{1}$ and in $\Gamma_2$, respectively. Hence
$A_{w_{ji}}=S_j=A_{u_{ji}}$, for all $1 \leq i \leq m_j d$ and $1
\leq j \leq r$.
Let $\{ x_{l} \; | \; 1 \leq l \leq c d r \}$ and $\{ y_{l} \; |
\; 1 \leq l \leq c d r \}$ be the sets of representatives of the
orbits of length $|A|$ in $\Gamma_{1}$ and in $\Gamma_2$,
respectively. Then $A_{x_l}=\{1\}=A_{y_l}$, for all $1 \leq l \leq
c d r$.
Let $\Gamma'$ be the amalgam of $\Gamma_{1}$ and $\Gamma_{2}$ over
these sets of vertices,
$$ \Gamma'=\Gamma_{1} \ast_{\{w_{ji} \cdot a=u_{ji} \cdot a \; | \; a \in A \}
\cup \{x_l \cdot a=y_l \cdot a \; | \; a \in A \} } \Gamma_{2}.$$
By Corollary~\ref{cor:SeparabilityTechnical}, $\Gamma'$ is a
finite precover of $G$ such that the graphs $\Gamma_1$ and
$\Gamma_2$ embed in it. Therefore the graph $\Gamma$ embeds in
$\Gamma'$ as well. Moreover, by construction, the graph $\Gamma'$
is $X_{\beta}^{\pm}$-saturated, and $n(\Gamma')=n(\Gamma)
\setminus \{n\}$. Thus $\Gamma'$ satisfies the inductive
assumption. We are done.
\end{proof}
\begin{lem} \label{separability: A is cyclic}
If $A$ is cyclic then any finite $X_{\beta}^{\pm}$-saturated
precover of $G$ can be embedded in a cover of $G$ with finitely
many vertices ($\beta \in \{1,2\}$).
\end{lem}
\begin{proof}
Since $A$ is cyclic, $S_i=S_j$ for all $1 \leq i,j \leq m$, that
is $A_{v_i}=A_{v_j}$. Assume that $A_v=S$, for all $v \in \{v_i \:
| \: 1 \leq i \leq m \}$.
Consider $Cayley(G_{\alpha}, S)$ ($1 \leq \beta \neq \alpha \leq
2$). For each vertex $u \in V(Cayley(G_{\alpha}, S))$, we have
$$A_u=Lab(Cayley(G_{\alpha}, S),u) \cap A=g^{-1}Sg \cap A,$$ where $g \in G_{\alpha}$, such that $(S \cdot 1) \cdot
g = u$. Thus $|A_u| \leq |S|$ and therefore, since $A$ is cyclic,
$A_u \leq S \leq A$.
\begin{claim}
Let $ \alpha \in \{1,2\}$.
Then there is $0 < N \in {\bf Z}$ such that $Cayley(G_{\alpha},
S)$ can be embedded into a finite $X_{\alpha}^{\pm}$-saturated
precover $C$ of $G$, whose $X_{\alpha} $-monochromatic vertices
form $N$ distinct $A$-orbits of length $n$ isomorphic to each
other, with the $A$-stabilizer $S$. More precisely,
$$VM_{\alpha}(C)=\bigcup_{i=1}^N A(v_i), \ {\rm such \ that } \
A_{v_i}=S \ ( \forall \ 1 \leq i \leq N).$$
\end{claim}
\begin{proof}[Proof of the Claim]
The proof is by induction on the number of prime factors of $|S|$.
Assume first that $|S|=p$ is prime. By the above observation,
for all $u \in V(Cayley(G_{\alpha}, S))$, either $A_u=S$, $|A(u)|
=n$, or $A_u=\{1\}$, $|A(u)|=|A|$ (that is $A(u)\simeq A$).
Assume that $V(Cayley(G_{\alpha}, S))$ form $b$ distinct
isomorphic orbits of length $n$. Hence the number of distinct
$A$-orbits of $V(Cayley(G_{\alpha}, S))$ of length $|A|$
isomorphic to $A(u)$ with the trivial $A$-stabilizer is
$c=(|V(Cayley(G_{\alpha}, S))|-n \cdot b)/|A|$.
On the other hand, in $Cayley(G_{\beta})$ the number of distinct
$A$-orbits of length $|A|$ with the trivial $A$-stabilizer is
$$d=\frac{|V(Cayley(G_{\beta}))|}{|A|}=\frac{|G_{\beta}|}{|A|}=[G_{\beta}:A].$$
Let $C_1$ be the disjoint union of $d$ isomorphic copies of
$Cayley(G_{\alpha}, S)$. Let $C_2$ be the disjoint union of $c$
isomorphic copies of $Cayley(G_{\beta})$. Then both $C_1$ and
$C_2$ have $c d$ distinct $A$-orbits of length $|A|$.
Let $\{ x_{l} \; | \; 1 \leq l \leq c d \}$ and $\{ y_{l} \; | \;
1 \leq l \leq c d \}$ be the sets of representatives of these
orbits in $C_{1}$ and in $C_2$, respectively. Then
$A_{x_l}=\{1\}=A_{y_l}$, for all $1 \leq l \leq c d $.
Let $C$ be the amalgam of $C_{1}$ and $C_{2}$ over these sets of
vertices,
$$ C=C_{1} \ast_{ \{x_l \cdot a=y_l \cdot a \; | \; a \in A \} } C_{2}.$$
By Corollary~\ref{cor:SeparabilityTechnical}, $C$ is a finite
precover of $G$ such that the graphs $C_1$ and $C_2$ embed in it.
Therefore the graph $Cayley(G_{\alpha}, S)$ embeds in $C$ as well.
Moreover, by construction, the graph $C$ is
$X_{\alpha}^{\pm}$-saturated, and $VM_{\alpha}(C)$ form $N=b d$
distinct $A$-orbits of length $n$ isomorphic to each other, with
the $A$-stabilizer $S$.
Assume now that $|S|$ is not a prime number. Let
$V(Cayley(G_{\alpha}, S))$ form $t_i$ distinct $A$-orbits of
length $\frac{|A|}{i}$ isomorphic to $A(u_i)$ with the
$A$-stabilizer $A_{u_i} \leq S$. Thus $|A_{u_i}|=i$, where $i \in
I=\{ i \: | \: 1 \leq i < |S|, \ i \mid |S| \}$.
By the inductive assumption, $Cayley(G_{\beta},A_{u_i})$ can be
embedded into a finite $X_{\beta}^{\pm}$-saturated precover $C_i$
of $G$ whose $X_{\beta}$-monochromatic vertices form $k_i$
distinct $A$-orbits isomorphic to $A(u_i)$ of length
$\frac{|A|}{i}$ with the $A$-stabilizer $A_{u_i}$.
Let $l=lcm(\{k_i \; | \; i \in I \})$.
Let $C'_1$ be the disjoint union of $l$ isomorphic copies of
$Cayley(G_{\alpha}, S)$. Let $C'_2$ be the union of disjoint
unions of $\left( t_i \frac{l}{k_i} \right)$ isomorphic copies
of $C_i$, for all $i \in I$. Then both $C'_1$ and $C'_2$ have $t_i
l$ distinct $A$-orbits of length $\frac{|A|}{i}$ isomorphic to $A(u_i)$.
Let $\{ w_{ij} \; | \; i \in I, \; 1 \leq j \leq t_i l \}$ and $\{
u_{ij} \; | \; i \in I, \; 1 \leq j \leq t_i l \}$ be the sets of
representatives of these orbits in $C'_{1}$ and in $C'_2$,
respectively. Hence $A_{w_{ij}}=A_{u_i}=A_{u_{ij}}$, for all $i
\in I$ and $1 \leq j \leq t_i l$.
Let $C$ be the amalgam of $C'_{1}$ and $C'_{2}$ over these sets of
vertices,
$$ C=C'_{1} \ast_{\{w_{ij} \cdot a=u_{ij} \cdot a \; | \; a \in A \}} C'_{2}.$$
By Corollary~\ref{cor:SeparabilityTechnical}, $C$ is a finite
precover of $G$ such that the graphs $C'_1$ and $C'_2$ embed in
it. Therefore the graph $Cayley(G_{\alpha}, S)$ embeds in $C$ as
well. Moreover, by construction, the graph $C$ is
$X_{\alpha}^{\pm}$-saturated, and $VM_{\alpha}(C)$ form $N=t_n
l$ distinct $A$-orbits of length $n$ isomorphic to $A(v)$
with the $A$-stabilizer $S$. We are done.
\end{proof}
Let $\Gamma_1$ be the disjoint union of $N$ isomorphic copies of
$\Gamma$ and let $\Gamma_2$ be the disjoint union of $m$
isomorphic copies of $C$. Then both $\Gamma_1$ and $\Gamma_2$ have
$m N$ distinct $A$-orbits of length $n$ isomorphic to $A(v)$ with
the $A$-stabilizer $S$. The standard arguments used in the proofs
of Lemmas~\ref{separability: A<=Z(G_i)} and \ref{separability: A
is malnormal} complete the proof.
\end{proof}
\begin{proof}[Proof of the Main Theorem]
We first construct the graph $(\Gamma(H),v_0)$, using the
generalized Stallings' folding algorithm.
Without loss of generality we can assume that $g$ is a normal
word. Since $g \not\in H$, by Theorem~\ref{thm: properties
of subgroup graphs} (3), $v_0 \cdot g \neq v_0$. Thus either $g$
is readable in $\Gamma(H)$, that is $v_0 \cdot g =v \in
V(\Gamma(H))$, or it is not readable.
Assume first that $v_0 \cdot g =v \in V(\Gamma(H))$.
We apply the algorithm described along with the proof of
Lemma~\ref{construction of X1-saturated precover} to embed the
finite precover $Lab(\Gamma(H),v_0 )$ into a finite
$X_i^{\pm}$-saturated precover $(\Gamma,\vartheta)$, where
$\vartheta$ is the image of $v_0$, and we take $1 \leq i \neq j
\leq 2$, if $A$ is malnormal or central in $G_j$.
Now we embed $(\Gamma,\vartheta)$ into a finite cover of $G$,
using the appropriate algorithm given along with the proof of one
of Lemmas~\ref{separability: A<=Z(G_i)}, \ref{separability: A is
malnormal} or \ref{separability: A is cyclic}. Let $(\Phi, \nu)$
be the resulting graph, where $\nu$ is the image of $\vartheta$.
Let $K=Lab(\Phi,\nu)$. By Theorem~\ref{fi}, $[G:K] < \infty$ and
$(\Phi,\nu)=(\Gamma(K),u_0)$. Since $$\Gamma(H) \subseteq \Gamma
\subseteq \Phi,$$ we have
$$Lab(\Gamma(H),v_0) \leq Lab(\Gamma,\vartheta) \leq Lab(\Phi,\nu).$$
Thus $H \leq K$. However, $g \not\in K$, because the above graph
inclusions are embeddings. Therefore we are done.
Assume now that $g$ is not readable in $\Gamma(H)$. Let $g_1$ be
the longest prefix of $g$ that is readable in $\Gamma(H)$, that is
$v_0 \cdot g_1 =v \in V(\Gamma(H))$. Thus $v \in VM(\Gamma(H))$.
Without loss of generality, we can assume that $v \in
VM_1(\Gamma(H))$.
We glue to $\Gamma(H)$ a ``stem'' labelled by $g_2$ at $v$,
where $g \equiv g_1g_2$. Let $\Gamma$ be the resulting graph (see
Figure~\ref{fig:SeparabilityStem}).
\begin{figure}[!h]
\begin{center}
\psfrag{v0 }[][]{$v_0$}
\psfrag{v1 }[][]{$v_1$}
\psfrag{v }[][]{$v$}
\psfrag{g1 }[][]{$g_1$}
\psfrag{g2 }[][]{$g_2$}
\psfrag{g }[][]{$\gamma$}
\psfrag{u1 }[][]{$u_1$}
\psfrag{u2 }[][]{$u_2$}
\psfrag{um }[][]{$u_m$}
\psfrag{A }[][]{$\Gamma(H)$}
\psfrag{C }[][]{$C$}
\psfrag{D }[][]{$D$}
\includegraphics[width=0.8\textwidth]{ExSeparabilityConstruction.eps}
\caption{The graph $\Gamma$: the ``stem'' labelled by $g_2$ glued to $\Gamma(H)$ at $v$. \label{fig:SeparabilityStem}}
\end{center}
\end{figure}
\begin{claim} \label{claim:PrecoverStem->Precover}
The graph $(\Gamma,v_0)$ can be embedded into a finite precover
$(\Gamma',v_0')$ of $G$ such that $v_0' \neq v_0' \cdot g \in
VM(\Gamma')$, where $v_0'$ is the image of $v_0$ in $\Gamma'$.
\end{claim}
\begin{proof}[Proof of the Claim]
Let $(u_1, \cdots, u_m)$ be the normal (Serre) decomposition of
$g_2$. Hence $u_1 \in G_2 \setminus A$. The proof is by induction
on the syllable length of $g_2$.
Let $C$ be a $X_1$-monochromatic component of $\Gamma(H)$ such
that $v \in V(C)$. Let $S=A_v$.
Consider $Cayley(G_2,S,S \cdot 1)$. Thus $A_{S \cdot 1}=S$ and the
$A$-orbit $A(S \cdot 1)=\{ (S \cdot 1) \cdot x \; | \; x \in A
\}=\{ S x \; | \; x \in A \} \subseteq V(Cayley(G_2,S))$ is
isomorphic to the $A$-orbit of $v$ in $C$.
Therefore taking $\Gamma_v= \Gamma \ast_{\{v \cdot x=S x \; | \;
x \in A \}}Cayley(G_2,S)$, we get a graph such that $\Gamma(H)$
and $Cayley(G_2, S)$ embed in it, by
Corollary~\ref{cor:SeparabilityTechnical}.
Let $D$ be the $X_2$-monochromatic component of $\Gamma_v$ such
that $v \in V(D)$. Since $u_1 \in G_2$ and $D$ is
$X_2^{\pm}$-saturated, there exists a path $\gamma$ in $D$ such
that $\iota(\gamma)=v$ and $lab(\gamma) \equiv u_1$. Moreover, the
vertex $v_1=\tau(\gamma) \in VB(D) \setminus VB(C)$, because $u_1
\in G_2 \setminus A$. Thus $v_1 \neq v_0$.
Therefore the graph $\Gamma_v$ can be thought of as a precover of
$G$ with a stem labelled by $u_2 \cdots u_m$ which rises up from
the vertex $v_1$. Note that $u_2 \in G_1 \setminus A$. Thus the
graph $\Gamma_v$ and the word given by the normal (Serre)
decomposition $(u_2, \cdots, u_m)$ satisfy the inductive
assumption. We are done.
\end{proof}
Proceeding in the same manner as in the previous case, when $v_0
\cdot g \in V(\Gamma(H))$, we embed the finite precover
$Lab(\Gamma',v_0' )$ of $G$ into a finite cover $(\Phi, \nu)$ of
$G$. This completes the proof.
\end{proof}
\begin{figure}[!h]
\begin{center}
\psfrag{v0 }[][]{$v_0$}
\psfrag{v1 }[][]{$v_1$}
\psfrag{v2 }[][]{$v_2$}
\psfrag{v3 }[][]{$v_3$}
\psfrag{v4 }[][]{$v_4$}
\psfrag{v5 }[][]{$v_5$}
\includegraphics[width=0.8\textwidth]{am_fp_ex131.eps}
\caption[The construction of the separating
subgroup]{{\footnotesize The construction of the cover $\Gamma(K)$
of $G$.} \label{fig: separability}}
\end{center}
\end{figure}
\begin{ex} \label{example: separability}
{\rm Let $G$ and $H_1$ be as in Example \ref{ex:Freeness}. Recall
that
$$G=\langle x,y | x^4, y^6, x^2=y^3 \rangle \ {\rm and} \ H_1=\langle xy \rangle.$$
Let $g=xy^{-1}$ be an element of $G$. By Theorem~\ref{thm:
properties of subgroup graphs} (3), $g \not\in H_1$, because $v_0
\cdot g \neq v_0$.
Figure \ref{fig: separability} illustrates the construction of the
cover $\Gamma(K)$ of $G$, where $K \leq G$ is the separating
subgroup for $H_1 \leq G$ and the element $g \not\in H_1$.}
$\diamond$
\end{ex}
\appendix
\section{}
Below we follow the notation of Grunschlag \cite{grunschlag},
distinguishing between the ``\emph{input}'' and the ``\emph{given
data}'', the information that can be used by the algorithm
\emph{``for free''}, that is, it does not affect the complexity
issues.
\begin{center}
\large{\emph{\underline{\textbf{Algorithm}}}}
\end{center}
\begin{description}
\item[Given] Finite groups $G_1$, $G_2$, $A$ and the amalgam
$G=G_1 \ast_{A} G_2$ given via $(1.a)$, $(1.b)$ and $(1.c)$,
respectively.
We assume that the Cayley graphs and all the relative Cayley
graphs of the free factors are given.
\item[Input] A finite set $\{ g_1, \cdots, g_n \} \subseteq G$.
\item[Output] A finite graph $\Gamma(H)$ with a basepoint $v_0$
which is a reduced precover of $G$ and the following holds
\begin{itemize}
\item
$Lab(\Gamma(H),v_0)=_{G} H$;
\item $H=\langle g_1, \cdots, g_n \rangle$;
\item a normal word $w$ is in $H$ if and only if
there is a loop (at $v_0$) in $\Gamma(H)$
labelled by the word $w$.
\end{itemize}
\item[Notation] $\Gamma_i$ is the graph obtained after the
execution of the $i$-th step.
\item[\underline{Step1}] Construct a based set of $n$ loops around a common distinguished
vertex $v_0$, each labelled by a generator of $H$;
\item[\underline{Step2}] Iteratively fold edges and cut hairs
\footnote{A \emph{hair} is an edge one of whose endpoints has degree 1};
\item[\underline{Step3}] { \ }\\
\texttt{For} { \ } each $X_i$-monochromatic component $C$ of
$\Gamma_2$ ($i=1,2$) { \ } \texttt{Do} \\
\texttt{Begin}\\
pick an edge $e \in E(C)$; \\
glue a copy of $Cayley(G_i)$ on $e$ via identifying $ 1_{G_i} $ with $\iota(e)$ \\
and identifying the two copies of $e$ in $Cayley(G_i)$ and in $\Gamma_2$; \\
\texttt{If} { \ } necessary { \ } \texttt{Then} { \ } iteratively fold
edges; \\
\texttt{End;}
\item[\underline{Step4}] { \ } \\
\texttt{ For} { \ } each $v \in VB(\Gamma_3)$ { \ } \texttt{ Do} \\
\texttt{If} { \ } there are paths $p_1$ and $p_2$, with $\iota(p_1)=\iota(p_2)=v$
and $\tau(p_1)~\neq~\tau(p_2)$ such that
$$lab(p_i) \in G_i \cap A \ (i=1,2) \ {\rm and} \ lab(p_1)=_G
lab(p_2)$$
\texttt{ Then} { \ } identify $\tau(p_1)$ with $\tau(p_2)$; \\
\texttt{If} { \ } necessary { \ } \texttt{Then} { \ } iteratively fold
edges; \\
\item[\underline{Step5}]
Reduce $\Gamma_4$ by an iterative removal of all
(\emph{redundant})
$X_i$-monochromatic components $C$ such that
\begin{itemize}
\item $(C,\vartheta)$ is isomorphic to $Cayley(G_i, K, K \cdot 1)$, where $K \leq A$ and
$\vartheta \in VB(C)$;
\item $|VB(C)|=[A:K]$;
\item one of the following holds
\begin{itemize}
\item $K=\{1\}$ and $v_0 \not\in VM_i(C)$;
\item $K$ is a nontrivial subgroup of $A$ and $v_0 \not\in V(C)$.\\
\end{itemize}
\end{itemize}
Let $\Gamma$ be the resulting graph;\\
\texttt{If} { \ } $VB(\Gamma)=\emptyset$ and $(\Gamma,v_0)$ is isomorphic to $Cayley(G_i, 1_{G_i})$ \\
\texttt{Then} { \ } we set $V(\Gamma_5)=\{v_0\}$ and $E(\Gamma_5)=\emptyset$; \\
\texttt{Else} { \ } we set $\Gamma_5=\Gamma$.
\item[\underline{Step6}] { \ } \\
\texttt{If} { \ }
\begin{itemize}
\item $v_0 \in VM_i(\Gamma_5)$ ($i \in \{1,2\}$);
\item $(C,v_0)$ is isomorphic to $Cayley(G_i,K,K \cdot 1)$, where $L=K \cap A$ is a nontrivial
subgroup of
$A$ and $C$ is a $X_i$-monochromatic component of $\Gamma_5$ such that $v_0 \in V(C)$;
\end{itemize}
\texttt{Then} { \ } glue to $\Gamma_5$ a $X_j$-monochromatic
component ($1 \leq i \neq j \leq 2$) $D=Cayley(G_j,L,L \cdot 1)$
via identifying $L \cdot 1$ with $v_0$ and \\
identifying the vertices $L \cdot a$ of $Cayley(G_j,L,L \cdot 1)$
with the vertices $v_0 \cdot a$ of $C$, for all $a \in A \setminus
L$.
Denote $\Gamma(H)=\Gamma_6$.
\end{description}
\begin{remark} \label{stal-mar-meak-kap-m}
{\rm Note that the first two steps of the above algorithm
correspond precisely to the Stallings' folding algorithm for
finitely generated subgroups of free groups \cite{stal, mar_meak,
kap-m}.}
$\diamond$
\end{remark}
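As a side illustration (not part of the paper), here is a hypothetical minimal sketch of the classical free-group case of Steps 1 and 2: a wedge of loops labelled by the generators of $H$, followed by iterated folding. All names (\texttt{stallings\_graph}, \texttt{reads\_loop}) are invented for the example; uppercase letters denote formal inverses, and hair-cutting is omitted since hairs do not affect loops at the base vertex.

```python
def inv(c):
    """Formal inverse of a letter: lowercase <-> uppercase."""
    return c.lower() if c.isupper() else c.upper()

def stallings_graph(generators):
    """Steps 1-2 in the classical free-group case.

    Vertices are integers, 0 is the base vertex; edges are triples
    (u, letter, v), stored together with their formal inverses.
    """
    edges, fresh = [], 1
    # Step 1: one loop at the base vertex per generator word.
    for w in generators:
        u = 0
        for i, c in enumerate(w):
            v = 0 if i == len(w) - 1 else fresh
            if v != 0:
                fresh += 1
            edges.append((u, c, v))
            u = v
    edges += [(v, inv(c), u) for (u, c, v) in edges]
    # Step 2: fold -- identify the endpoints of two equally labelled
    # edges leaving a common vertex, until no such pair remains.
    changed = True
    while changed:
        changed = False
        seen = {}
        for (u, c, v) in edges:
            if (u, c) in seen and seen[(u, c)] != v:
                a, b = seen[(u, c)], v
                if a > b:           # keep the smaller vertex, so the
                    a, b = b, a     # base vertex 0 always survives
                edges = list({(a if x == b else x, l, a if y == b else y)
                              for (x, l, y) in edges})
                changed = True
                break
            seen[(u, c)] = v
    return edges

def reads_loop(edges, w):
    """A word lies in the subgroup iff it labels a loop at the base."""
    adj = {(u, c): v for (u, c, v) in edges}
    u = 0
    for c in w:
        if (u, c) not in adj:
            return False
        u = adj[(u, c)]
    return u == 0
```

For instance, folding the graph of $\langle a, aba^{-1}\rangle$ identifies every vertex with the base vertex, recovering the fact that this subgroup is all of $F(a,b)$.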
\begin{figure}[!h]
\psfrag{x }[][]{$x$} \psfrag{y }[][]{$y$} \psfrag{v }[][]{$v$}
\psfrag{x1 - monochromatic vertex }[][]{{\footnotesize
$\{x\}$-monochromatic vertex}}
\psfrag{y1 - monochromatic vertex }[][]{\footnotesize
{$\{y\}$-monochromatic vertex}}
\psfrag{ bichromatic vertex }[][]{\footnotesize {bichromatic
vertex}}
\includegraphics[width=0.8\textwidth]{am_fp_ex1New.eps}
\caption[The construction of $\Gamma(H_1)$]{ \footnotesize {The
construction of $\Gamma(H_1)$.}
\label{fig: example of H=xy}}
\end{figure}
\begin{figure}[!hb]
\psfrag{v }[][]{$v$}
\includegraphics[width=0.8\textwidth]{am_fp_ex2New.eps}
\caption[The construction of $\Gamma(H_2)$]{ \footnotesize {The
construction of $\Gamma(H_2)$.} \label{fig: example of H=xy^2x,
yxyx}}
\end{figure}
\begin{ex} \label{example: graphconstruction}
{\rm Let $G=gp\langle x,y | x^4, y^6, x^2=y^3 \rangle$.
Let $H_1$ and $H_2$ be finitely generated subgroups of $G$ such
that
$$H_1=\langle xy \rangle \ {\rm and} \ H_2=\langle xy^2, yxyx \rangle.$$
The construction of $\Gamma(H_1)$ and $\Gamma(H_2)$ by the
algorithm presented above is illustrated on Figures \ref{fig:
example of H=xy}
and \ref{fig: example of H=xy^2x, yxyx}.}
$\diamond$
\end{ex}
\begin{thebibliography}{99}
\bibitem{a_g}
R.B.J.T.Allenby, R.G.Gregorac, On locally extended
residually finite groups. {\it Conference on Group Theory
(Univ. Wisconsin-Parkside, Kenosha, Wis., 1972)}, 9-17.
Lecture Notes in Math., Vol. 319, {\it Springer, Berlin},
1973.
\bibitem{b-f}
M. Bestvina, M. Feighn, A combination theorem for negatively
curved groups, \textit{J. Differential Geom.} {\bf 35} (1992),
no.1, 85-101.
\bibitem{b-f-correction}
M. Bestvina, M. Feighn, Addendum and correction to: ``A combination theorem for negatively
curved groups'' [J. Differential Geom. \textbf{35} (1992), no. 1,
85-101], {\it J. Differential Geom.} \textbf{43} (1996), no. 4,
783-788.
\bibitem{b-m-m-w}
J.-C.Birget, S.Margolis, J.Meakin, P.Weil, PSPACE-complete
problems for subgroups of free groups and inverse automata,
\emph{Theoret. Comput. Sci.} \textbf{242} (2000), no. 1-2,
247-281.
\bibitem{b1}
O.V.Bogopolski, Finitely generated groups with the M. Hall
property,
\emph{Algebra and Logic} \textbf{31} (1992), no. 3,
141-169.
\bibitem{b2}
O.V.Bogopolski, Almost free groups and the M. Hall property,
\emph{Algebra and Logic} \textbf{33} (1994), no. 1, 1-13.
\bibitem{b-b-s}
A.M.Brunner, R.G.Burns and D.Solitar, The subgroup separability of free products of two free groups with cyclic amalgamation, \textit{ Contemporary Math.}, vol. 33, Amer. Math. Soc., Rhode Island, (1984), 90-144.
\bibitem{burns}
R.G.Burns, On finitely generated subgroups of free products, \textit{J. Austral. Math. Soc.} {\bf 12} (1971), 358-364.
\bibitem{cdhw73}
J.J.Cannon, L.A.Dimino, G.Havas, J.M.Watson, Implementation
and analysis of the Todd-Coxeter algorithm,
\emph{Math. Comp.}, {\bf
27} (1973), 463-490.
\bibitem{c-otto}
R.Cremanns, F.Otto, Constructing canonical presentations for
subgroups of context-free groups in polynomial time, Proc.
\emph{ISSAC'94}.
\bibitem{dehn}
M.Dehn, \"{U}ber unendliche diskontinuierliche Gruppen, {\it
Math. Ann.} {\bf 69} (1911), 116-144.
\bibitem{gi_coxeter}
R.Gitik, On the profinite topology on Coxeter groups,
\emph{Internat. J. Algebra Comput.} \textbf{13} (2003), no.4, 393-400.
\bibitem{gi_quas}
R.Gitik, On quasiconvex subgroups of negatively curved groups,
{\it J. Pure Appl. Algebra} {\bf 119} (1997), no.2, 155-169.
\bibitem{gi_sep}
R.Gitik, Graphs and separability properties of groups, {\it J. of Algebra} {\bf 188} (1997), no.1,
125-143.
\bibitem{gi_doub}
R.Gitik, Doubles of groups and hyperbolic LERF 3-manifolds, {\it Ann. of Math.(2)} {\bf 150} (1999), no.3, 775-806.
\bibitem{gi_rips}
R.Gitik, E.Rips, On separability properties of groups, {\it
Internat. J. Algebra and Comput.} \textbf{5} (1995), no.6, 703-717.
\bibitem{gro}
M.Gromov, Hyperbolic groups. \emph{Essays in group theory},
75-263, Math. Sci. Res. Inst. Publ., 8, \emph{Springer, New
York}, 1987.
\bibitem{grunschlag}
Z. Grunschlag, Algorithms in geometric group theory, PhD
thesis, University of California at Berkeley, 1999.
\bibitem{hall1}
M.Hall Jr. Coset representation in free groups, \textit{ Trans. AMS} {\bf 67} (1949), 421-432.
\bibitem{hall2}
M.Hall Jr. A topology for free groups and related groups, \textit{ Annals of Math.} {\bf 52} (1950), 127-139.
\bibitem{havas}
G.Havas, A Reidemeister-Schreier program, Proc. Second
Internat. Conf. Theory of Groups (Canberra 1973), Lecture
Notes in Mathematics, {\bf 372}, 347-356, {\it Springer-Verlag,
Berlin}.
\bibitem{hkrr}
G.Havas, P.E.Kenne, J.S.Richardson and E.F.Robertson, A Tietze
transformation program, in: Computational Group Theory, M.D.Atkinson (ed), {\it Academic
Press} (1984), 69-73.
\bibitem{holt-decision}
D.F.Holt, Decision problems in finitely presented groups.
\emph{Computational methods for representations of groups and
algebras (Essen, 1997)}, 259-265, Progr. Math., 173,
\emph{Birkh\"{a}user, Basel}, 1999.
\bibitem{holt-hurt}
D.F.Holt, D.Hurt, Computing automatic coset systems and subgroup
presentations,
{\it J. Symbolic Computation} {\bf 27} (1999), no.1, 1-19.
\bibitem{kap-m}
I.Kapovich, A.Myasnikov, Stallings foldings and subgroups of free groups, {\it J. Algebra}, {\bf 248} (2002), no.2, 608--668
\bibitem{generic-case}
I.Kapovich, A.Myasnikov, P.E. Schupp, V.Shpilrain,
Generic-case complexity, decision problems in group
theory, and random walks,
\emph{J. Algebra} \textbf{264} (2003),
no. 2, 665-694.
\bibitem{average-case}
I.Kapovich, A.Myasnikov, P.E. Schupp, V.Shpilrain,
Average-case complexity and decision problems in group
theory,
\emph{Adv. Math.} \textbf{190} (2005), no.2, 343-359.
\bibitem{kap-w-m}
I.Kapovich, R.Weidman, A.Miasnikov, Foldings,
graphs of groups and the membership problem.
\emph{Internat. J.
Algebra Comput.} \textbf{15} (2005), no. 1, 95-128.
\bibitem{kmrs}
O.Kharlamovich, A.Myasnikov, V.Remeslennikov, D.Serbin, Subgroups of fully
residually free groups: algorithmic problems, {\it Contemporary Math}.
\bibitem{k-m-otto}
N.Kuhn, K.Madlener, F.Otto,
Computing presentations for subgroups of polycyclic groups and of context-free groups.
\textit{Appl. Algebra in Engrg, Comm. and Comput.}, {\bf 5}
(1994), no.5, 287-316.
\bibitem{l-s}
M.Lohrey, G. Senizergues, Rational subsets in HNN-extensions
and amalgamated products, in preparation.
\bibitem{long-niblo}
D.D.Long, G.A.Niblo, Subgroup separability and 3-manifold
groups, \emph{Math. Z.} \textbf{207} (1991), 209-215.
\bibitem{long-reid1}
D.D.Long, A.W.Reid, Surface subgroups and separability in
3-manifold topology. [IMPA Mathematical Publications, 25th
Brasilian Mathematics Colloquium]
\emph{Instituto Nacional de
Matem\'{a}tica Pura e Aplicada (IMPA)}, Rio de Janeiro, 2005.
55pp.
\bibitem{long-reid2}
D.D.Long, A.W.Reid, On subgroup separability in hyperbolic
Coxeter groups, \emph{Geom. Dedicata} {\bf 87} (2001), no.1-3,
245-260.
\bibitem{l_s}
R.C.Lyndon and P.E.Schupp, Combinatorial group theory.
\emph{Springer-Verlag, Berlin-New York}, 1977.
\bibitem{m-k-s}
W.Magnus, A.Karrass, D.Solitar, Combinatorial group theory.
Presentations of groups in terms of generators and
relations. Second revised edition. \emph{Dover Publications,
Inc., New York}, 1976.
\bibitem{mar_meak}
S.W.Margolis and J.C.Meakin, Free inverse monoids and graph immersions, {\it Internat. J. Algebra Comput}.
{\bf 3} (1993), 79-99.
\bibitem{m-s-w}
S.W.Margolis, M.Sapir, P.Weil, Closed subgroups in
pro-\textbf{V} topologies and the extension problem for
inverse automata, \textit{Int. J. Algebra Comput.} {\bf 11} (2001), no.4, 405-445.
\bibitem{m-foldings}
L.Markus-Epstein, Stallings Foldings and Subgroups of Amalgams of Finite Groups, arXiv.org: math.GR/0705.0754, to appear in {\it Internat. J. Algebra Comput}
(2007).
\bibitem{m-algII}
L.Markus-Epstein, Algorithmic Problems in Amalgams of Finite Groups: Conjugacy and Intersection Properties, arXiv.org: math.GR/0707.0165.
\bibitem{m-kurosh}
L.Markus-Epstein, Reading Off Kurosh Decompositions, arXiv.org: math.GR/0706.0101 (2007).
\bibitem{m_w}
J.McCammond, D.Wise, Coherence, local quasiconvexity and the perimeter of 2-complexes, to appear in
\emph{Geom. Funct. Anal.}.
\bibitem{mvw}
A.Miasnikov, E.Ventura, P.Weil, Algebraic extensions in free
groups, arXiv.org: math.GR/0610880 (2006).
\bibitem{miller71}
C.F. Miller III, On group-theoretic decision problems and their
classification. Annals of Mathematics Studies, No. 68.
\emph{Princeton University Press, Princeton, N.J.; University of
Tokyo Press, Tokyo}, 1971.
\bibitem{miller92}
C. F. Miller III, Decision problems for groups---survey and
reflections.
\emph{Algorithms and classification in combinatorial
group theory (Berkeley, CA, 1989)}, 1-59, Math. Sci. Res. Inst.
Publ., 23, \emph{Springer, New York}, 1992.
\bibitem{p_s}
O.Payne, S.Rees, Computing subgroup presentations, using
arguments of McCammond and Wise,
\emph{J. of Algebra} \textbf{300} (2006) (Leedham-Green birthday volume), 109-133.
\bibitem{ribs}
E.Rips, An example of a non-LERF group, which is a free
product of LERF groups with an amalgamated cyclic subgroup,
\emph{Israel J. of Math.} {\bf 70} (1990), no.1, 104-110.
\bibitem{rvw}
A.Roig, E.Ventura, P.Weil, On the complexity of the Whitehead
minimization problem, arXiv.org: math.GR/0608779 (2006).
\bibitem{ro}
N.S.Romanovskii, On the finite residuality of free products, relative to an
embedding. (Russian)
\textit{Izv. Akad. Nauk SSSR Ser. Mat.} {\bf 33} (1969), 1324-1329.
\bibitem{schupp}
P.E.Schupp, Coxeter groups, 2-completion, perimeter reduction
and subgroup separability, \textit{Geom. Dedicata} {\bf 96}
(2003), 179-198.
\bibitem{serre}
J.-P.Serre, Trees. Translated from the French by John
Stillwell.
\emph{Springer-Verlag, Berlin-New York}, 1980.
\bibitem{sims}
C.C.Sims, Computation with finitely presented groups.
Encyclopedia of Mathematics and its Applications, 48.
\emph{Cambridge University Press, Cambridge}, 1994.
\bibitem{stal}
J.Stallings, Topology of graphs, {\it Invent. Math.} {\bf 71} (1983), no.3, 551-565.
\bibitem{stil}
J.Stillwell, Classical topology and combinatorial group
theory. \emph{Springer-Verlag, Berlin-New York}, 1980.
\bibitem{tuikan}
N.Touikan, A fast algorithm for Stallings' folding process,
\emph{Internat. J. Algebra Comput.}
\textbf{16} (2006), no. 6, 1031-1045.
\bibitem{ventura}
E.Ventura, On fixed subgroups of maximal rank, {\it Comm.
Algebra}, {\bf 25} (1997), 3361-3375.
\end{thebibliography}
\end{document}
\begin{document}
{\hskip 115mm December 7, 2010
\vskip 5mm}
\title{Topological entropy and irregular recurrence}
\author{Lenka Obadalov\'a}
\address{ Mathematical Institute, Silesian University,
CZ-746 01 Opava, Czech Republic}
\email{lenka.obadalova@math.slu.cz}
\thanks{The research was supported, in part, by grant SGS/15/2010 from the Silesian University in Opava.}
\begin{abstract}
This paper is devoted to problems stated by Z. Zhou and F. Li in 2009. They concern relations between almost periodic, weakly almost periodic, and quasi-weakly almost periodic points of a continuous map $f$ and its topological entropy. The negative answer follows from our recent paper. But for continuous maps of the interval and other more general one-dimensional spaces we give more results; in some cases, the answer is positive.
{\small {2000 {\it Mathematics Subject Classification.}}
Primary 37B20, 37B40, 37D45, 37E05.}
\end{abstract}
\maketitle
\section{Introduction}
Let $(X,d)$ be a compact metric space, $I=[0,1]$ the unit interval, and $\mathcal C(X)$ the set of continuous maps $f :X\rightarrow X$. By $\omega (f,x)$ we denote the {\it $\omega$-limit set} of $x$, which is the set of limit points of the {\it trajectory} $\{f ^{i}(x)\} _{i\ge 0}$ of $x$, where $f^{i}$ denotes the $i$th iterate of $f$. We consider sets $W(f)$ of {\it weakly almost periodic points} of $f$, and $QW(f)$ of {\it quasi-weakly almost periodic points} of $f$. They are defined as follows, see \cite{zhou}:
$$
W(f)= \left\{x \in X; \forall \varepsilon ~\exists N>0 \ \text{such that} \ \sum_{i=0}^{nN-1} \chi_{B(x,\varepsilon)}(f^i(x)) \geq n, \forall n >0\right\},
$$
$$
QW(f)= \left\{x \in X; \forall \varepsilon ~\exists N>0, \exists \{n_j\} \ \text{such that} \ \sum_{i=0}^{n_jN-1} \chi_{B(x,\varepsilon)}(f^i(x)) \geq n_j, \forall j >0\right\},
$$
where ${B(x,\varepsilon)}$ is the $\varepsilon$-neighbourhood of $x$, $\chi_{A}$ the characteristic
function of a set $A$, and $\{n_j\}$ an increasing
sequence of positive integers. For $x \in X$ and $t>0$, let
\begin{eqnarray}
\label{novew}
\Psi_x (f, t) &=& \liminf_{n \to \infty} \tfrac1n \# \{0\le j<n; d(x, f^j(x))<t\}, \\
\label{noveqw}
\Psi_x^* (f, t) &=& \limsup_{n \to \infty} \tfrac1n \# \{0\le j<n; d(x, f^j(x))<t\}.
\end{eqnarray}
Thus, $\Psi _x(f,t)$ and $\Psi ^*_x(f,t)$ are the {\it lower} and {\it upper Banach density} of the set $\{ n\in\mathbb N; f^n(x)\in B(x,t)\}$, respectively. In this paper we make use of more convenient definitions of $W(f)$ and $QW(f)$ based on the following lemma.
\begin{lm}
\label{l1}
Let $f \in \mathcal C(X)$. Then
{\rm (i)} \ $x \in W(f)$ if and only if $\Psi_x (f, t)>0$, for every $t>0$,
{\rm (ii)} \ $x \in QW(f)$ if and only if $\Psi_x^* (f, t)>0$, for every $t>0$.
\end{lm}
\begin{proof}
It is easy to see that, for every $\varepsilon>0$ and $N>0$,
\begin{equation}
\label{99}
\sum_{i=0}^{nN-1} \chi_{B(x,\varepsilon)}(f^i(x)) \geq n \
\ \ \text{if and only if} \ \ \ \#\{0 \leq j <nN; f^j(x) \in B(x, \varepsilon)\} \geq n.
\end{equation}
(i) If $x\in W(f)$ then, for every $\varepsilon >0$ there is an $N>0$ such that the condition on the left side in (\ref{99}) is satisfied for every $n$. Hence, by the condition on the right, $\Psi _x(f,\varepsilon )\ge 1/N>0$. If $x\notin W(f)$
then there is an $\varepsilon >0$ such that, for every $N>0$, there is an $n>0$ such that the condition on the left side of (\ref{99}) is not satisfied. Hence, by the condition on the right, $\Psi _x(f,\varepsilon )<1/N\to 0$ as $N\to\infty$. The proof of (ii) is similar.
\end{proof}
Obviously, $W(f)\subseteq QW(f)$. The properties of $W(f)$ and $QW(f)$ were studied in the nineties by Z. Zhou et al., see \cite{zhou} for references. The points in $IR(f):=QW(f)\setminus W(f)$ are {\it irregularly recurrent points}, i.e., points $x$ such that $\Psi_x^*(f, t) >0$ for any $t>0$, and $\Psi_x(f, t_0)=0$ for {\it some} $t_0>0$, see \cite{lenka}.
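For intuition only (a numerical sketch, not from the paper), the frequencies in (\ref{novew}) and (\ref{noveqw}) can be approximated at a single fixed $n$; the map, metric and parameters below are illustrative choices.

```python
import math

def visit_fraction(f, d, x, t, n):
    """Fraction of 0 <= j < n with d(x, f^j(x)) < t: a finite-n proxy
    for the liminf/limsup defining Psi_x(f, t) and Psi*_x(f, t)."""
    y, hits = x, 0
    for _ in range(n):
        if d(x, y) < t:
            hits += 1
        y = f(y)
    return hits / n

# Example: an irrational rotation of the circle.  Every point is
# almost periodic, and by equidistribution the fraction approaches
# 2*t > 0, consistent with Psi_x(f, t) > 0 for every t.
alpha = math.sqrt(2) - 1
rot = lambda x: (x + alpha) % 1.0
dist = lambda a, b: min(abs(a - b), 1.0 - abs(a - b))
frac = visit_fraction(rot, dist, 0.2, 0.05, 20000)   # close to 0.1
```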
Denote by $h(f)$ {\it topological entropy} of $f$ and by $R(f)$, $UR(f)$ and $AP(f)$ the set of {\it recurrent}, {\it uniformly recurrent} and {\it almost periodic} points of $f$, respectively. Thus, $x\in R(f)$ if, for every neighborhood $U$ of $x$, $f^j(x)\in U$ for infinitely many $j\in\mathbb N$, $x\in UR(f)$ if, for every neighborhood $U$ of $x$ there is a $K>0$ such that every interval $[n, n+K]$ contains a $j\in\mathbb N$ with $f^j(x)\in U$, and $x\in AP(f)$, if for every neighborhood $U$ of $x$, there is a $k>0$ such that $f^{kj}(x)\in U$, for every $j\in\mathbb N$. Recall that $x\in R(f)$ if and only if $x\in\omega (f,x)$, and $x\in UR(f)$ if and only if $\omega (f,x)$ is a {\it minimal set}, i.e., a closed set $\emptyset\ne M\subseteq X$ such that $f(M)=M$ and no proper subset of $M$ has this property. Denote by $\omega (f)$ the union of all $\omega$-limit sets of $f$.
The next relations follow by definition:
\begin{equation}
\label{eq10}
AP(f)\subseteq UR(f)\subseteq W(f)\subseteq QW(f)\subseteq R(f)\subseteq \omega (f).
\end{equation}
The next theorem will be used in Section 2. Its part (i) is proved in \cite{zhou2} but we are able to give a simpler argument, and extend it to part (ii).
\begin{theorem}
If $f\in\mathcal C(X)$ then
{\rm (i)} \ $W(f) = W(f^m)$,
{\rm (ii)} \ $QW(f) = QW(f^m)$,
{\rm (iii)} \ $IR(f) = IR(f^m)$.
\label{dalsi3vlastnosti}
\end{theorem}
\begin{proof} Since $\Psi _x(f,t)\ge \tfrac 1m \Psi
_x(f^m,t)$, $x\in W(f^m)$ implies $x\in W(f)$ and
similarly, $QW(f^m)\subseteq QW(f)$. Since (iii) follows by (i) and (ii), it suffices to prove that for every $\varepsilon >0$ there is a $\delta >0$ such that, for every prime integer $m$,
\begin{equation}
\label{eq11}
\Psi _x(f^m,\varepsilon) \ge\Psi _x(f,\delta ) \ \text{and} \ \Psi ^*_x(f^m,\varepsilon )\ge \Psi ^*_x(f,\delta ).
\end{equation}
For every $i\ge 0$, denote $\omega _i:=\omega (f^m,f^i(x))$
and $\omega _{ij}:=\omega _i\cap\omega _j$.
Obviously, $\omega (f,x)=\bigcup _{0\le i<m}\omega _i$, and $f(\omega _{i})=\omega _{i+1}$, where $i$ is taken mod $m$.
Moreover, $f^m(\omega _i)=\omega _i$ and $f^m(\omega _{ij})=\omega _{ij}$, for every $0\le i<j<m$. Hence
\begin{equation}
\label{equ12}
\omega _i\ne \omega _{ij} \ \text{implies} \ \omega _j\ne\omega _{ij}, \ \text{and} \ f^i(x), f^j(x)\notin \omega _{ij}.
\end{equation}
Let $k$ be the least period of $\omega _0$. Since $m$ is prime, there are two cases.
(a) If $k=m$ then the sets $\omega _i$ are pairwise distinct
and, by (\ref{equ12}), there is a $\delta>0$ such that $B(x,\delta )\cap \omega _i=\emptyset$, $0<i<m$. It follows
that if $f^r(x)\in B(x,\delta )$ then $r$ is a multiple of $m$, with finitely many exceptions. Consequently, (\ref{eq11}) is satisfied for $\varepsilon =\delta$, even with $\ge$ replaced by the equality.
(b) If $k=1$ then $\omega _i=\omega _0$, for every $i$.
Let $\varepsilon >0$. For every $i$, $0\le i<m$, there
is the minimal integer $k_i\ge 0$ such that $f^{mk_i+i}(x)\in B(x,\varepsilon)$. By the continuity, there is a $\delta >0$
such that $f^{mk_i+i}(B(x,\delta ))\subseteq B(x,\varepsilon )$, $0\le i<m$. If $f^r(x)\in B(x,\delta )$ and $r\equiv i ({\rm mod} \ m)$, $r=ml+i$, then $f^{m(l+1+k_{m-i})}(x)=f^{r+ mk_{m-i}+{m-i}}(x)\in
f^{mk_{m-i} +m-i}(B(x,\delta ))\subseteq B(x,\varepsilon )$. This proves (\ref{eq11}).
\end{proof}
In 2009, Z. Zhou and F. Li stated, among others, the following two problems; see \cite{zhou3}.
\noindent {\bf Problem 1.} Does $IR(f)\ne\emptyset$ imply $h(f)>0$?
\noindent {\bf Problem 2.} Does $W(f)\ne AP(f)$ imply $h(f)>0$?
\noindent In general, the answer to either problem is negative. In \cite{lenka} we constructed a skew-product map $F:Q\times I\to Q\times I$, $(x,y)\mapsto (\tau (x), g_x(y))$, where $Q=\{ 0,1\}^{\mathbb N}$ is a Cantor-type set, $\tau$ is the adding machine (or odometer) on $Q$ and, for every $x$, $g_x$ is a nondecreasing mapping $I\to I$ with $g_x(0)=0$. Consequently, $h(F)=0$ and $Q_0:=Q\times\{ 0\}$ is an invariant set. On the other hand, $IR(F)\ne\emptyset$ and $Q_0=AP(F)\ne W(F)$. This example answers both problems in the negative.
However, for maps $f\in\mathcal C(I)$, $h(f)>0$ is equivalent to $IR(f)\ne\emptyset$. On the other hand, the answer to Problem 2 remains negative even for maps in $\mathcal C(I)$. Instead, we are able to show that such maps with $W(f)\ne AP(f)$ are Li-Yorke chaotic. These results are given in the next section, as Theorems 2 and 3. Then, in Section 3 we show that these results can be extended to maps of more general one-dimensional compact metric spaces such as topological graphs and topological trees, but not dendrites; see Theorems \ref{gen1} and \ref{gen2}.
\section{Relations with topological entropy for maps in $\mathcal C(I)$}
\begin{theorem}
For $f\in\mathcal C (I)$, the conditions $h(f)>0$ and $IR(f)\ne\emptyset$ are equivalent.
\label{entropie}
\end{theorem}
\begin{proof} If $h(f)=0$ then $UR(f)=R(f)$ (see, e.g., \cite{block}, Corollary VI.8). Hence, by (\ref{eq10}), $W(f)=QW(f)$. If $h(f)>0$ then $W(f)\ne QW(f)$; this follows by Theorem \ref{dalsi3vlastnosti} and Lemmas \ref{shift} and \ref{kladna} stated below.
\end{proof}
Let $(\Sigma _2,\sigma )$ be the shift on the set $\Sigma _2$ of sequences of two symbols, $0, 1$, equipped with a metric $\rho$ of pointwise convergence, say, $\rho (\{x_i\}_{i\ge 1},\{y_i\}_{i\ge 1})=1/k$ where $k=\min \{ i\ge 1; x_i\ne y_i\}$.
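As a side illustration (not part of the original argument), the shift $\sigma$ and the metric $\rho$ can be checked numerically on finite prefixes of sequences; the helper names \texttt{sigma} and \texttt{rho} below are ours.

```python
# Quick numerical illustration of the shift sigma and the metric rho on
# Sigma_2, with sequences truncated to finite prefixes (helper names ours).

def sigma(x):
    """Shift map: drop the first symbol of the sequence (given as a list)."""
    return x[1:]

def rho(x, y):
    """rho(x, y) = 1/k, where k is the first index (1-based) at which x, y differ."""
    for i, (a, b) in enumerate(zip(x, y), start=1):
        if a != b:
            return 1.0 / i
    return 0.0  # the prefixes agree up to truncation

x = [0, 1, 1, 0, 1]
y = [0, 1, 0, 0, 1]
print(rho(x, y))                 # first difference at index 3, so 1/3
print(rho(sigma(x), sigma(y)))   # after the shift, first difference at index 2
```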
\begin{lm}
$IR(\sigma)$ is non-empty, and contains a transitive point.
\label{shift}
\end{lm}
\begin{proof}
Let
$$
k_{1,0},k_{1,1},k_{2,0},k_{2,1},k_{2,2},k_{3,0},\cdots
,k_{3,3},k_{4,0},\cdots ,k_{4,4},k_{5,0},\cdots
$$
be an increasing sequence of positive integers.
Let $\{ B_n\} _{n\ge 1}$ be a sequence of all finite blocks of digits 0 and 1.
Put $A_0 = 10$, $A_1=(A_0)^{k_{1,0}}0^{k_{1,1}}B_1$ and, in general,
\begin{equation}
\label{equ2}
A_{n}=A_{n-1}(A_0)^{k_{n,0}}(A_1)^{k_{n,1}}\cdots (A_{n-1})^{k_{n,n-1}}0^{k_{n,n}}B_{n}, \ n\ge 1.
\end{equation}
Denote by $|A|$ the length of a finite block of 0's and 1's, and let
\begin{equation}
\label{equ20}
a_n=|A_n|, \ b_n=|B_n|, \ c_n=a_n-b_{n}
-k_{n, n}, \ n\ge 1,
\end{equation}
and
\begin{equation}
\label{equ21}
\lambda _{n,m}=\left|A_{n-1}(A_0)^{k_{n,0}}(A_1)^{k_{n,1}}\cdots (A_m)^{k_{n,m}}\right|, \ 0\le m< n.
\end{equation}
By induction we can take the numbers $k_{i,j}$ such that
\begin{equation}
\label{equ22}
k_{n,m+1}=n\cdot \lambda _{n,m}, \ 0\le m< n.
\end{equation}
Let $N(A)$ be the cylinder of all $x \in \Sigma_2$ beginning with a finite block $A$. Then $\{ N(B_n)\}_{n\ge 1}$ is a base of the topology of $\Sigma _2$, and $\bigcap _{n=1}^\infty N(A_n) $ contains exactly one point; denote it by $u$.
Since $\sigma ^{a_n-b_n}(u)\in N(B_n)$, i.e., since the trajectory of $u$ visits every $N(B_n)$, $u$ is a transitive point of $\sigma$. Moreover, $\rho (u, \sigma ^{j}
(u))=1$, whenever $c_n\le j<a_n-b_n$. By (\ref{equ22}) it follows that $\Psi _u(\sigma ,t)=0$ for every $t\in (0,1)$.
Consequently, $u\notin W(\sigma )$.
It remains to show that $u\in QW(\sigma )$. Let $t\in (0,1)$. Fix an $n_0\in\mathbb N$ such that $1/a_{n_0}<t$.
Then, by (\ref{equ2}),
$$
\#\left\{ j<\lambda _{n,n_0}; \rho (u,\sigma ^j(u))<t\right\} \ge k_{n,n_0}, \ n>n_0,
$$
hence, by (\ref{equ21}) and (\ref{equ22}),
$$
\lim _{n\to\infty}\frac{\#\left\{ j<\lambda _{n,n_0}; \rho (u,\sigma ^j(u))<t\right\}} {\lambda _{n,n_0}}\ge \lim _{n\to\infty}\frac{k_{n,n_0}}{\lambda _{n,n_0}}
=\lim_{n\to\infty}\frac{k_{n,n_0}}{\lambda _{n,n_0-1}+a_{n_0}k_{n,n_0}} =\lim _{n\to\infty}\frac n{1+a_{n_0}n}=\frac 1{a_{n_0}}.
$$
Thus, $\Psi^*_u(\sigma ,t)\ge 1/{a_{n_0}}$ and, by Lemma \ref{l1},
$u\in QW(\sigma )$.
\end{proof}
\begin{lm}
Let $f\in \mathcal C (I)$ have positive topological entropy. Then $IR(f)\ne\emptyset$.
\label{kladna}
\end{lm}
\begin{proof}
When $h(f)>0$, then $f^m$ is strictly turbulent for some $m$. This means that there exist disjoint compact intervals $K_0$, $K_1$ such that $f^m(K_0)\cap f^m(K_1)\supset K_0\cup K_1$, see \cite{block}, Theorem IX.28. This condition is equivalent to the existence of a continuous map $g: X\subset I \rightarrow \Sigma_2$, where $X$ is of Cantor type, such that $g \circ f^m(x) = \sigma \circ g(x)$ for every $x \in X$, and such that each point in $\Sigma_2$ is the image of at most two points in $X$ (\cite{block}, Proposition II.15). By Lemma~\ref{shift}, there is a $u\in IR(\sigma )$. Hence, for every $t>0$, $\Psi_u^*(\sigma, t)>0$, and there is an $s>0$ such that $\Psi_u(\sigma, s)=0$. There are at most two preimages, $u_0$ and $u_1$, of $u$. Then, by the continuity, $\Psi_{u_i}(f^m, r)=0$, for some $r>0$ and $i= 0, 1$, and $\Psi_{u_i}^*(f^m, k)>0$ for at least one $i\in\{0, 1\}$ and every $k>0$. Thus, $u_0\in IR(f^m)$ or $u_1\in IR(f^m )$ and, by Theorem \ref{dalsi3vlastnosti}, $IR (f)\ne\emptyset$.
\end{proof}
Recall that $f\in\mathcal C (X)$ is {\it Li-Yorke chaotic}, or {\it LYC}, if there is an uncountable set $S\subseteq X$ such that, for every $x\ne y$ in $S$, $\liminf _{n\rightarrow\infty} \rho (f^n(x), f^n(y))=0$ and $\limsup _{n\rightarrow\infty} \rho (f^n(x), f^n(y))>0$.
\begin{theorem}
\label{LiYorke}
For $f\in \mathcal C(I)$, $W(f)\ne AP(f)$ implies that $f$ is Li-Yorke chaotic, but does not imply $h(f)>0$.
\end{theorem}
\begin{proof}
Every continuous map of a compact metric space with positive topological entropy is Li-Yorke chaotic \cite{BGKM}. Hence to prove the theorem it suffices to consider the class $\mathcal C_0\subset \mathcal C(I)$ of maps with zero topological entropy and show that
(i) for every $f\in\mathcal C_0$, $W(f)\ne AP(f)$ implies {\it LYC}, and
(ii) there is an $f\in\mathcal C_0$ with $W(f)\ne AP(f)$.
\noindent For $f\in\mathcal C_0$, $R(f)=UR(f)$, see, e.g., \cite{block}, Corollary VI.8. Hence, by (\ref{eq10}), $W(f)\ne AP(f)$ implies that $f$ has an infinite minimal $\omega$-limit set $\widetilde\omega$ possessing a point which is not in $AP(f)$.
Recall that for every such $\widetilde\omega$ there is an {\it associated system} $\{ J_n\}_{n\ge 1}$ of compact periodic intervals such that $J_n$ has period $2^n$, and $\widetilde\omega\subseteq \bigcap _{n\ge 1}\bigcup _{0\le j<2^n} f^{j}(J_n)$ \cite{smital}. For every $x\in\widetilde\omega$ there is a sequence $\iota (x)=\{j_n\}_{n\ge 1}$ of integers, $0\le j_n<2^n$, such that $x\in\bigcap _{n\ge 1}f^{j_n}(J_n)=:Q_x$. For every $x\in \widetilde\omega$, the set $\widetilde\omega \cap Q_x$ contains one (i.e., the point $x$) or two points. In the second case $Q_x=[a,b]$ is a compact wandering interval (i.e., $f^n(Q_x)\cap Q_x=\emptyset$ for every $n\ge 1$) such that $a,b\in\widetilde\omega$ and either $x=a$ or $x=b$. Moreover, if, for every $x\in\widetilde\omega$, $\widetilde\omega \cap Q_x$ is a singleton then $f$ restricted to $\widetilde\omega$ is the adding machine, and $\widetilde\omega \subseteq AP(f)$, see \cite{brucknersmital}. Consequently, $W(f)\ne AP(f)$ implies the existence of an infinite $\omega$-limit set $\widetilde\omega$ such that
\begin{equation}
\label{eq12}
\widetilde\omega\cap Q_x=\{a,b\}, \ a<b, \ \text{for some} \ x\in\widetilde\omega .
\end{equation}
This condition characterizes {\it LYC} maps in $\mathcal C_0$ (see \cite{smital} or subsequent books like \cite{block}) which proves (i).
To prove (ii) note that there are maps $f\in\mathcal C_0$ such that both $a$ and $b$ in (\ref{eq12}) are non-isolated points of $\widetilde\omega$, see \cite{brucknersmital} or \cite{MS}. Then $a,b\in UR(f)$ are minimal points. We show that in this case either $a\notin AP(f)$ or $b\notin AP(f)$ (actually, neither $a$ nor $b$ is in $AP(f)$ but we do not need this stronger property). So assume that $a,b\in AP(f)$ and $U_a$, $U_b$ are their disjoint open neighborhoods. Then there is an {\it even} $m$, $m=(2k+1)2^n$, with $n\ge 1$, such that $f^{jm}(a)\in U_a$ and $f^{jm}(b)\in U_b$, for every $j\ge 0$. Let $\{ J_n\} _{n\ge 1}$ be the system of compact periodic intervals associated with $\widetilde\omega$. Without loss of generality we may assume that, for some $n$, $[a,b]\subset J_n$. Since $J_n$ has period $2^n$, for arbitrary odd $j$, $f^{jm}(J_n)\cap J_n=\emptyset$. If $f^{jm}(J_n)$ is to the left of $J_n$, then $f^{jm}(J_n)\cap U_b=\emptyset$, otherwise $f^{jm}(J_n)\cap U_a=\emptyset$. In any case, $f^{jm}(a)\notin U_a$ or $f^{jm}(b)\notin U_b$, which is a contradiction.
\end{proof}
\section{Generalization for maps on more general one-dimensional spaces}
Here we show that the results given in Theorems \ref{entropie} and \ref{LiYorke} concerning maps in $\mathcal C(I)$ can be generalized to more general one-dimensional compact metric spaces such as topological graphs or trees, but not dendrites. Recall that $X$ is a {\it topological graph} if $X$ is a non-empty compact connected metric space which is the union of finitely many arcs (i.e., continuous images of the interval $I$) such that every two arcs can have only end-points in common. A {\it tree} is a topological graph which contains no subset homeomorphic to the circle. A {\it dendrite} is a locally connected continuum containing no subset homeomorphic to the circle. The proofs of the generalized results are based on the same ideas as the proofs of Theorems \ref{entropie} and \ref{LiYorke}. We only need some recent, nontrivial results concerning the structure of $\omega$-limit sets of such maps, see \cite{HrMa} and \cite{KKM}. Therefore we give here only outlines of the proofs, pointing out the main differences.
\begin{theorem}
\label{gen1}
Let $f\in\mathcal C(X)$.
{\rm (i)} \ If $X$ is a topological graph then $h(f)>0$ is equivalent to $QW(f)\ne W(f)$.
{\rm (ii)} \ There is a dendrite $X$ such that $h(f)>0$ and $QW(f)=W(f)=UR(f)$.
\end{theorem}
\begin{proof}
To prove (i) note that, for $f\in\mathcal C(X)$ where $X$ is a topological graph, $h(f)>0$ if and only if, for some $n\ge 1$, $f^n$ is turbulent \cite{HrMa}. Hence the proof of Lemma \ref{kladna} applies also to this case and $h(f)>0$ implies $IR(f)\ne\emptyset$. On the other hand, if $h(f)=0$ then every infinite $\omega$-limit set is a solenoid (i.e., it has an associated system of compact periodic intervals $\{ J_n\}_{n\ge 1}$, $J_n$ with period $2^n$) and consequently, $R(f)=UR(f)$ \cite{HrMa} which gives the other implication.
(ii) In \cite{KKM} there is an example of a dendrite $X$ with a continuous map $f$ possessing exactly two $\omega$-limit sets: a minimal Cantor-type set $Q$ such that $h(f|_Q)>0$,
and a fixed point $p$ such that $\omega (f,x)=\{ p\}$ for every $x\in X\setminus Q$.
\end{proof}
\begin{theorem}
\label{gen2}
Let $f\in\mathcal C(X)$.
{\rm (i)} \ If $X$ is a compact tree then $W(f)\ne AP(f)$ implies LYC, but does not imply $h(f)>0$.
{\rm (ii)} \ If $X$ is a dendrite or a topological graph containing a circle, then $W(f)\ne AP(f)$ implies neither LYC nor $h(f)>0$.
\end{theorem}
\begin{proof}
(i) Similarly as in the proof of Theorem \ref{LiYorke} we may assume that $h(f)=0$. Then every infinite $\omega$-limit set of $f$ is a solenoid and the argument, with obvious modifications, applies.
(ii) If $X$ is the circle, take $f$ to be an irrational rotation. Then obviously $X=UR(f)\setminus AP(f)=W(f)\setminus AP(f)$ but $f$ is not {\it LYC}. On the other hand,
let $\widetilde\omega$ be the $\omega$-limit set used in the proof of part (ii) of Theorem \ref{LiYorke}. Thus, $\widetilde\omega$ is a minimal set intersecting $UR(f)\setminus AP(f)$. A modification of the construction from \cite{KKM} yields a dendrite with exactly two $\omega$-limit sets, an infinite minimal set $Q=\widetilde\omega$ and a fixed point $q$ (see proof of part (ii) of preceding theorem). It is easy to see that $f$ is not {\it LYC}.
\end{proof}
\begin{remark}
By Theorems \ref{gen1} and \ref{gen2}, for a map $f\in\mathcal C(X)$ where $X$ is a compact metric space, the properties $h(f)>0$ and $W(f)\ne AP(f)$ are independent.
Similarly, $h(f)>0$ and $IR(f)\ne\emptyset$ are independent. An example of a map $f$ with $h(f)=0$ and $IR(f)\ne\emptyset$ is given in \cite{lenka} (see also the text at the end of Section 1), and any minimal map $f$ with $h(f)>0$ yields $IR(f)=\emptyset$.
\end{remark}
{\bf Acknowledgments}
The author thanks Professor Jaroslav Sm\'{\i}tal for his heedful guidance and helpful suggestions.
\thebibliography{99}
\bibitem{BGKM} Blanchard F., Glasner E., Kolyada S. and Maass A., On Li-Yorke pairs, J. Reine Angew. Math., 547 (2002), 51--68.
\bibitem{block} Block L.S. and Coppel W.A., Dynamics in One Dimension, Springer-Verlag, Berlin Heidelberg, 1992.
\bibitem{brucknersmital} Bruckner A. M. and Sm\'{i}tal J., A characterization of $\omega$-limit sets of maps of the interval with zero topological entropy, Ergod. Th. \& Dynam. Sys., 13 (1993), 7--19.
\bibitem{HrMa} Hric R. and M\'alek M., Omega-limit sets and distributional chaos on graphs, Topology Appl. 153 (2006), 2469--2475.
\bibitem{KKM} Ko\v can Z., Korneck\'a-Kurkov\'a V. and M\'alek M., Entropy, horseshoes and homoclinic trajectories on trees, graphs and dendrites, Ergodic Theory \& Dynam. Syst. 30 (2011), to appear.
\bibitem{MS} Misiurewicz M. and Sm\'{\i}tal J., Smooth chaotic mappings with zero topological entropy, Ergod. Th. \& Dynam. Sys., 8 (1988), 421--424.
\bibitem{lenka} Obadalov\'{a} L. and Sm\'{i}tal J., Distributional chaos and irregular recurrence, Nonlin. Anal. A - Theor. Meth. Appl., 72 (2010), 2190--2194.
\bibitem{smital} Sm\'{i}tal J., Chaotic functions with zero topological entropy, Trans. Amer. Math. Soc. 297 (1986), 269--282.
\bibitem{zhou2} Zhou Z., Weakly almost periodic point and measure centre, Science in China (Ser. A), 36 (1993), 142--153.
\bibitem{zhou3} Zhou Z. and Li F., Some problems on fractal geometry and topological dynamical systems, Anal. Theor. Appl. 25 (2009), 5--15.
\bibitem{zhou} Zhou Z. and Feng L., Twelve open problems on the exact value of the Hausdorff measure and on topological entropy, Nonlinearity 17 (2004), 493--502.
\end{document}
\begin{document}
\maketitle
\begin{abstract}
In this note we consider some generalizations of the Schwarz lemma for harmonic functions on the unit disk, where the values of such functions and the norms of their differentials at the point $z=0$ are prescribed.
\end{abstract}
\section{Introduction}
\subsection{A summary of some results}
In this paper we consider some generalizations of the Schwarz lemma for harmonic functions from the unit disk $\mathbb{U}=\{z\in\mathbb{C}:|z|<1\}$ to the interval $(-1,1)$ (or to itself).
First, we cite a theorem which is known as the Schwarz lemma for harmonic functions and is considered a classical result.
\begin{theorem}[\cite{heinz},{\cite[p.77]{duren}}]\label{th:schwhar}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a harmonic function such that $f(0)=0$. Then
\begin{equation*}
|f(z)|\leqslant\frac{4}{\pi}\arctan{|z|}, \quad \mbox{ for all } \quad z\in\mathbb{U},
\end{equation*}
and this inequality is sharp for each point $z\in\mathbb{U}$.
\end{theorem}
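As an aside, the bound of the theorem above can be sanity-checked numerically for one concrete harmonic function; the test function $f(z)=xy$ (harmonic, bounded by $1/2$ on the disk, $f(0)=0$) is our choice and is far from extremal.

```python
# Sanity check of the bound |f(z)| <= (4/pi) arctan|z| for one concrete
# harmonic f with f(0) = 0; the choice f(z) = xy is ours and far from extremal.
import cmath
import math

def f(z):
    return z.real * z.imag  # harmonic on the unit disk, |f| <= 1/2, f(0) = 0

for k in range(200):
    r = 0.95 * (k % 20) / 20
    z = cmath.rect(r, 2 * math.pi * k / 200)   # sample points in the disk
    assert abs(f(z)) <= (4 / math.pi) * math.atan(abs(z)) + 1e-12
print("bound verified on sample points")
```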
In 1977, H. W. Hethcote \cite{heth} improved this result by removing the assumption $f(0)=0$ and proved the following theorem.
\begin{theorem}[{\cite[Theorem 1]{heth}} and {\cite[Theorem 3.6.1]{pavlovic}}]\label{th:schwheth}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a harmonic function. Then
\begin{equation*}
\left|f(z) -\frac{1-|z|^2}{1 + |z|^2} f(0)\right|\leqslant \frac{4}{\pi}\arctan|z|, \quad \mbox{ for all } \quad z\in\mathbb{U}.
\end{equation*}
\end{theorem}
As noted in \cite{MMandMS}, it seems that researchers have had some difficulties in handling the case $f(0)\neq0$, where $f$ is a harmonic mapping from $\mathbb{U}$ to itself. Before explaining the essence of these difficulties, it is necessary to recall one mapping and some of its properties. Let us also emphasize that this mapping and its properties play an important role in our results.
Let $\alpha\in\mathbb{U}$ be arbitrary. Then for $z\in\mathbb{U}$ we define $\displaystyle\varphi_{\alpha}(z)=\frac{\alpha+z}{1+\overline{\alpha}z}$. It is well known that $\varphi_{\alpha}$ is a conformal automorphism of $\mathbb{U}$. Also, for $\alpha\in(-1,1)$ we have
\begin{itemize}
\item[$1^\circ$] $\varphi_{\alpha}$ is increasing on $(-1,1)$ and maps $(-1,1)$ onto itself;
\item[$2^\circ$] $\displaystyle\varphi_{\alpha}([-r,r])=[\varphi_{\alpha}(-r),\varphi_{\alpha}(r)]=\left[\frac{\alpha-r}{1-\alpha r},\frac{\alpha+r}{1+\alpha r}\right]$, where $r\in[0,1)$.
\end{itemize}
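Properties $1^\circ$ and $2^\circ$ can be verified numerically for real $\alpha$; the helper name \texttt{phi\_alpha} below is ours.

```python
# Numerical check of properties 1° and 2° of phi_alpha for real alpha in (-1, 1);
# the helper name phi_alpha is ours.

def phi_alpha(alpha, z):
    return (alpha + z) / (1 + alpha * z)  # for real alpha, conj(alpha) = alpha

alpha, r = 0.4, 0.7
# 1°: phi_alpha is increasing on (-1, 1) and its values stay in (-1, 1).
xs = [-0.99 + 0.02 * k for k in range(100)]
vals = [phi_alpha(alpha, x) for x in xs]
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))
assert all(-1 < v < 1 for v in vals)
# 2°: the image of [-r, r] is [(alpha - r)/(1 - alpha r), (alpha + r)/(1 + alpha r)].
print(phi_alpha(alpha, -r), phi_alpha(alpha, r))
```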
Now we can explain the mentioned difficulties. If $f$ is a holomorphic mapping from $\mathbb{U}$ to $\mathbb{U}$ such that $f(0)=b$, then using the mapping $g=\varphi_{-b}\circ f$ we can reduce the described problem to the case $f(0)=0$. But if $f$ is a harmonic mapping from $\mathbb{U}$ to $\mathbb{U}$ such that $f(0)=b$, then the mapping $g=\varphi_{-b}\circ f$ need not be harmonic.
In the joint work \cite{MMandMS} of the author with M. Mateljevi\'c, Theorem \ref{th:schwhar} was proved in a way different from those that can be found in the literature (for example, see \cite{heinz, duren}). Modifying that proof in an obvious way, the following theorem (which can be considered as an improvement of the H. W. Hethcote result) was also proved in \cite{MMandMS}.
\begin{theorem}[{\cite[Theorem 6]{MMandMS}}]\label{th:schwhar1}
Let $u:\mathbb{U}\rightarrow(-1,1)$ be a harmonic function such that $u(0)=b$. Then
\begin{equation*}
\frac{4}{\pi}\arctan\varphi_a(-|z|)\leqslant u(z)\leqslant \frac{4}{\pi}\arctan\varphi_a(|z|), \quad \mbox{for all} \quad z\in\mathbb{U}.
\end{equation*}
Here $\displaystyle a=\tan{\frac{b\pi}{4}}$. Also, these inequalities are both sharp at each point $z\in\mathbb{U}$.
\end{theorem}
As a corollary of Theorem \ref{th:schwhar1}, it is possible to prove the following theorem.
\begin{theorem}[{\cite[Theorem 1]{MMandAK}}]\label{th:schwar1complex}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a harmonic function such that $f(0)=b$. Then
\begin{equation*}
|f(z)|\leqslant\frac{4}{\pi}\arctan\varphi_{A}(|z|), \quad \mbox{for all} \quad z\in\mathbb{U}.
\end{equation*}
Here $\displaystyle A=\tan{\frac{|b|\pi}{4}}$.
\end{theorem}
This paper gives a relatively elementary contribution and continuation to the mentioned approach. We give further generalizations of Theorems \ref{th:schwhar1} and \ref{th:schwar1complex}. These generalizations (see Theorems \ref{th:main1} and \ref{th:main2}) consist of considering harmonic functions on the unit disk $\mathbb{U}$ with the following additional conditions:
\begin{itemize}
\item[1)] the value at the point $z=0$ is given;
\item[2)] the values of partial derivatives at the point $z=0$ are given.
\end{itemize}
In the literature one can find the following two generalizations of the Schwarz lemma for holomorphic functions.
\begin{theorem}[{\cite[Proposition 2.2.2 (p. 32)]{krantz}}]\label{th:lindelof}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a holomorphic function. Then
\begin{equation}\label{fla1:krantz}
|f(z)|\leqslant\frac{|f(0)|+|z|}{1+|f(0)||z|},\quad \mbox{for all} \quad z\in\mathbb{U}.
\end{equation}
\end{theorem}
\begin{theorem}[{\cite[Proposition 2.6.3 (p. 60)]{krantz}}, {\cite[Lemma 2]{osserman}}]\label{th:osserman}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a holomorphic function such that $f(0)=0$. Then
\begin{equation}\label{fla1:osserman}
|f(z)|\leqslant|z|\frac{|f'(0)|+|z|}{1+|f'(0)||z|},\quad \mbox{for all} \quad z\in\mathbb{U}.
\end{equation}
\end{theorem}
S. G. Krantz in his book \cite{krantz} attributes Theorem~\ref{th:lindelof} to Lindel\"of. Note that Theorem~\ref{th:schwar1complex} can be considered as a harmonic version of Theorem \ref{th:lindelof}. Similarly, one of the main results of this paper (Theorem \ref{th:main2}) can be considered as a harmonic version of Theorem \ref{th:osserman}.
\subsection{Hyperbolic metric and Schwarz-Pick type estimates}\label{subsec:hyperbolic}
By $\Omega$ we denote a simply connected plane domain different from $\mathbb{C}$ (we call such domains hyperbolic). By Riemann's Mapping Theorem, any such domain is conformally equivalent to the unit disk $\mathbb{U}$. Also, a domain $\Omega$ is equipped with the hyperbolic metric $\rho_{\Omega}(z)|dz|$. More precisely, by definition we have
$$\displaystyle\rho_{\mathbb{U}}(z)=\frac{2}{1-|z|^2}$$
and if $f:\Omega\rightarrow\mathbb{U}$ is a conformal isomorphism, then, also by definition, we have $$\rho_{\Omega}(w)=\rho_{\mathbb{U}}(f(w))|f'(w)|.$$
The hyperbolic metric induces a hyperbolic distance on $\Omega$ in the following way
\begin{equation*}
d_{\Omega}(z_1,z_2)=\inf\int_{\gamma}\rho_{\Omega}(z)|dz|,
\end{equation*}
where the infimum is taken over all $C^1$ curves $\gamma$ joining $z_1$ to $z_2$ in $\Omega$. For example, one can show that
\begin{equation*}
d_{\mathbb{U}}(z_1,z_2)=2\mathop{\mathrm{artanh}}{\left|\frac{z_1-z_2}{1-z_1\overline{z_2}}\right|},
\end{equation*}
where $z_1,z_2\in\mathbb{U}$.
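As a numerical aside, $d_{\mathbb{U}}$ is invariant under the disk automorphisms $\varphi_{\alpha}$ (a special case of the equality statement in the Schwarz-Pick lemma recalled below); the helper names in this sketch are ours.

```python
# Numerical check that the disk automorphisms preserve d_U (helper names ours).
import math

def d_U(z1, z2):
    """Hyperbolic distance on the unit disk."""
    w = (z1 - z2) / (1 - z1 * z2.conjugate())
    return 2 * math.atanh(abs(w))

def phi(alpha, z):
    """Conformal automorphism of the disk: (alpha + z)/(1 + conj(alpha) z)."""
    return (alpha + z) / (1 + alpha.conjugate() * z)

z1, z2, a = 0.3 + 0.2j, -0.5 + 0.1j, 0.4 - 0.3j
print(d_U(z1, z2))
print(d_U(phi(a, z1), phi(a, z2)))  # same value: phi_a is a hyperbolic isometry
```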
Hyperbolic metric and hyperbolic distance do not increase under a holomorphic function. More precisely, the following well-known theorem holds.
\begin{theorem}[The Schwarz-Pick lemma for simply connected domains, {\cite[Theorem~6.4.]{BeardonMinda}}]\label{th:schwpick}
Let $\Omega_1$ and $\Omega_2$ be hyperbolic domains and $f:\Omega_1\rightarrow\Omega_2$ be a holomorphic function. Then
\begin{equation}\label{schwpick:fla1}
\rho_{\Omega_2}(f(z))|f'(z)|\leqslant\rho_{\Omega_1}(z), \quad \mbox{ for all } \quad z\in\Omega_1,
\end{equation}
and
\begin{equation}\label{schwpick:fla2}
d_{\Omega_2}(f(z_1),f(z_2))\leqslant d_{\Omega_1}(z_1,z_2), \quad \mbox{ for all } \quad z_1,z_2\in\Omega_1.
\end{equation}
If $f$ is a conformal isomorphism from $\Omega_1$ onto $\Omega_2$, then equalities hold in (\ref{schwpick:fla1}) and (\ref{schwpick:fla2}). On the other hand, if equality holds in (\ref{schwpick:fla1}) at one point $z$, or in (\ref{schwpick:fla2}) for a pair of distinct points, then $f$ is a conformal isomorphism from $\Omega_1$ onto $\Omega_2$.
\end{theorem}
\
For a holomorphic function $f:\Omega_1\rightarrow\Omega_2$ (where $\Omega_1$ and $\Omega_2$ are hyperbolic domains), the \emph{hyperbolic distortion} of $f$ at $z\in\Omega_1$ is defined (for motivation and details see Section 5 in \cite{BeardonMinda}, cf. \cite{BeardonCarne}) in the following way: $$\displaystyle|f^h(z)|=\frac{\rho_{\Omega_2}(f(z))}{\rho_{\Omega_1}(z)}|f'(z)|.$$
Note that by Theorem \ref{th:schwpick} we also have $|f^h(z)|\leqslant 1$ for all $z\in\Omega_1$.
Using this notion, in 1992 A. F. Beardon and T. K. Carne proved the following theorem, which is stronger than Theorem \ref{th:schwpick}.
\begin{theorem}[\cite{BeardonCarne}]\label{th:BeardonCarne}
Let $\Omega_1$ and $\Omega_2$ be hyperbolic domains and $f:\Omega_1\rightarrow\Omega_2$ be a holomorphic function. Then for all $z,w\in\Omega_1$,
\begin{equation*}
d_{\Omega_2}(f(z),f(w))\leqslant\log(\cosh{d_{\Omega_1}(z,w)}+|f^h(w)|\sinh{d_{\Omega_1}(z,w)}).
\end{equation*}
\end{theorem}
Let us note that Theorem \ref{th:BeardonCarne} is of crucial importance for our research (see the proof of Theorem \ref{th:main1}).
There are many papers where authors have considered various versions of Schwarz-Pick type estimates for harmonic functions (see \cite{khavinson, burgeth, MKandMM, colonna, kavu, hhChen, MMar}). In this regard, we note that M. Mateljevi\'c \cite{MMSchw_Kob} recently explained a method (we refer to it as the strip method) which enables some of these results to be proven in an elegant way.
For completeness, we briefly reproduce the strip method. In order to do so, we first introduce the appropriate notation and state some simple facts.
By $\mathbb{S}$ we denote the strip $\{z\in\mathbb{C}:-1<\mathop{\mathrm{Re}}{z}<1\}$. The mapping $\varphi$ defined by $\displaystyle\varphi(z)=\tan{\left(\frac{\pi}{4}z\right)}$ is a conformal isomorphism from $\mathbb{S}$ onto $\mathbb{U}$ and by $\phi$ we denote the inverse mapping of $\varphi$ (see also Example 1 in \cite{MMandMS}). Throughout this paper by $\varphi$ and $\phi$ we always denote these mappings.
Using the mapping $\varphi$ one can derive the following equality
$$
\rho_{\mathbb{S}}(z)=\rho_{\mathbb{U}}(\varphi(z))|\varphi'(z)|=\frac{\pi}{2}\frac{1}{\cos{\left(\displaystyle\frac{\pi}{2}\mathop{\mathrm{Re}}{z}\right)}}, \quad \mbox{ for all } \quad z\in\mathbb{S}.
$$
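The displayed closed form for $\rho_{\mathbb{S}}$ can be checked numerically against the pullback definition; the function names below are ours.

```python
# Numerical check of the closed form rho_S(z) = (pi/2)/cos((pi/2) Re z)
# against rho_U(varphi(z)) |varphi'(z)| (function names ours).
import cmath
import math

def rho_U(w):
    return 2 / (1 - abs(w) ** 2)

def varphi(z):
    return cmath.tan(math.pi * z / 4)                        # isomorphism S -> U

def dvarphi(z):
    return (math.pi / 4) / cmath.cos(math.pi * z / 4) ** 2   # derivative of varphi

for z in (0.0, 0.5, -0.3 + 2j, 0.8 - 1.5j):
    lhs = rho_U(varphi(z)) * abs(dvarphi(z))
    rhs = (math.pi / 2) / math.cos(math.pi / 2 * z.real)
    assert abs(lhs - rhs) < 1e-9
print("closed form for rho_S verified")
```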
By $\nabla u$ we denote the gradient of a real-valued $C^1$ function $u$, i.e. $\nabla u=(u_x,u_y)=u_x+iu_y$. If $f=u+iv$ is a complex-valued $C^1$ function, where $u=\mathop{\mathrm{Re}}{f}$ and $v=\mathop{\mathrm{Im}}{f}$, then we use the notation
$$f_{x}=u_{x}+iv_{x} \qquad \mbox{ and } \qquad f_{y}=u_{y}+iv_{y},$$
as well as
$$f_{z}=\displaystyle\frac{1}{2}(f_{x}-if_{y}) \qquad \mbox{ and } \qquad f_{\overline{z}}=\displaystyle\frac{1}{2}(f_{x}+if_{y}).$$
Finally, by $df(z)$ we denote the differential of the function $f$ at the point $z$, i.e. the Jacobian matrix
\begin{equation*}
\left(
\begin{array}{cc}
u_x(z) & u_y(z) \\
v_x(z) & v_y(z) \\
\end{array}
\right).
\end{equation*}
The matrix $df(z)$ is an $\mathbb{R}$-linear operator from the tangent space $T_{z}\mathbb{R}^2$ to the tangent space $T_{f(z)}\mathbb{R}^2$. By $\|df(z)\|$ we denote the norm of this operator. It is not difficult to prove that $\|df(z)\|=|f_z(z)|+|f_{\overline{z}}(z)|$.
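The identity $\|df(z)\|=|f_z(z)|+|f_{\overline{z}}(z)|$ can be illustrated on the $\mathbb{R}$-linear map $f(z)=az+b\overline{z}$, for which $f_z=a$ and $f_{\overline{z}}=b$; the particular values of $a,b$ and the helper names below are ours.

```python
# Illustration of ||df|| = |f_z| + |f_zbar| for f(z) = a z + b conj(z),
# where f_z = a and f_zbar = b; the choice of a, b is ours.
import math

a, b = 0.7 + 0.2j, -0.3 + 0.5j

def jacobian(a, b):
    # Rows [u_x, u_y], [v_x, v_y] of f = u + iv, using f_x = a + b, f_y = i(a - b).
    fx, fy = a + b, 1j * (a - b)
    return [[fx.real, fy.real], [fx.imag, fy.imag]]

def op_norm(M):
    # Largest singular value of a 2x2 matrix, via the closed-form identity
    # sigma_max^2 = (||M||_F^2 + sqrt(||M||_F^4 - 4 det(M)^2)) / 2.
    (p, q), (r, s) = M
    frob2 = p * p + q * q + r * r + s * s
    det = p * s - q * r
    return math.sqrt((frob2 + math.sqrt(frob2 * frob2 - 4 * det * det)) / 2)

print(op_norm(jacobian(a, b)))   # agrees with |a| + |b|
print(abs(a) + abs(b))
```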
Briefly, the strip method consists of the following elementary considerations (see \cite{MMandMS}):
\begin{itemize}
\item[(I)] Suppose that $f:\mathbb{U}\rightarrow\mathbb{S}$ is a holomorphic function. Then, by Theorem \ref{th:schwpick}, we have $\rho_{\mathbb{S}}(f(z))|f'(z)|\leqslant\rho_{\mathbb{U}}(z)$, for all $z\in \mathbb{U}$.
\item[(II)] If $f=u+iv$ is a harmonic function and $F=U+iV$ is a holomorphic function on a domain $D$ such that $\mathop{\mathrm{Re}}{f}=\mathop{\mathrm{Re}}{F}$ on $D$ (in this setting we say that $F$ is associated to $f$ or to $u$), then $F'=U_x+iV_x = U_x-iU_y = u_x-iu_y$. Hence $F'=\overline{\nabla u}$ and $|F'|=|\overline{\nabla u}|=|\nabla u|$.
\item[(III)] Suppose that $D$ is a simply connected plane domain and $f:D \rightarrow \mathbb{S}$ is a harmonic function. Then it is known from the standard course of complex analysis that there is a holomorphic function $F$ on $D$ such that $\mathop{\mathrm{Re}}{f} = \mathop{\mathrm{Re}}{F}$ on $D$, and it is clear that $F:D\rightarrow\mathbb{S}$.
\item[(IV)] The hyperbolic density $\rho_{\mathbb{S}}$ at point $z$ depends only on $\mathop{\mathrm{Re}}{z}$.
\end{itemize}
From (I)--(IV) one readily obtains the following theorem.
\begin{theorem}[{\cite[Proposition 2.4]{MMSchw_Kob}}, \cite{kavu,hhChen}]\label{th:schwpickhar}
Let $u:\mathbb{U}\rightarrow(-1,1)$ be a harmonic function and let $F$ be a holomorphic function which is associated to $u$. Then
\begin{equation}\label{schwpickhar:fla1}
\rho_{\mathbb{S}}(u(z))|\nabla u(z)|=\rho_{\mathbb{S}}(F(z))|F'(z)|\leqslant \rho_{\mathbb{U}}(z),\quad \mbox{ for all }\quad z\in\mathbb{U}.
\end{equation}
In other words
\begin{equation}\label{schwpickhar:fla2}
|\nabla u(z)|\leqslant\frac{4}{\pi}\frac{\displaystyle\cos{\left(\frac{\pi}{2}u(z)\right)}}{1-|z|^2}, \quad \mbox{ for all }\quad z\in\mathbb{U}.
\end{equation}
If $u$ is the real part of a conformal isomorphism from $\mathbb{U}$ onto $\mathbb{S}$, then equality holds in (\ref{schwpickhar:fla2}) for all $z\in\mathbb{U}$, and vice versa.
\end{theorem}
In 1989, F. Colonna \cite{colonna} proved the following version of the Schwarz-Pick lemma
for harmonic functions.
\begin{theorem}[{\cite[Theorem 3]{colonna}} and {\cite[Proposition 2.8]{MMSchw_Kob}}, cf. {\cite[Theorem 6.26]{axler}}]\label{th:colonna}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a harmonic function. Then
\begin{equation}\label{colonna:fla1}
\|df(z)\|\leqslant\frac{4}{\pi}\frac{1}{1-|z|^2}, \quad \mbox{ for all } z\in\mathbb{U}.
\end{equation}
In particular,
\begin{equation}\label{colonna:fla2}
\|df(0)\|\leqslant\frac{4}{\pi}.
\end{equation}
\end{theorem}
\begin{remark} The inequality (\ref{colonna:fla1}) is sharp in the following sense: for every $z\in\mathbb{U}$ there exists a harmonic function $f^z:\mathbb{U}\rightarrow\mathbb{U}$ (which depends on $z$) such that
\begin{equation*}
\|df^z(z)\|=\frac{4}{\pi}\frac{1}{1-|z|^2}.
\end{equation*}
One such function is defined by $f^z(\zeta)=\mathop{\mathrm{Re}}{(\phi(\varphi_{-z}(\zeta)))}$. For more details see Theorem 4 in \cite{colonna}.
\end{remark}
\begin{remark}
The inequality (\ref{colonna:fla2}) cannot be improved even if we add the assumption that $f(0)=0$. More precisely, if $f(\zeta)=\mathop{\mathrm{Re}}\phi(\zeta)$ then $f$ satisfies all the assumptions of Theorem \ref{th:colonna}, $f(0)=0$ and $\displaystyle\|df(0)\|=\frac{4}{\pi}$
(see also {\cite[Proposition 2.8]{MMSchw_Kob}} and {\cite[Theorem 6.26]{axler}}).\end{remark}
\begin{remark}\label{rm:problem}
It seems that the question ``Is it possible to improve the inequality (\ref{colonna:fla1}) if we add the assumption $f(0)=b$, where $b\neq0$?'' is an open problem (see {\cite[Problem 2]{MMSchw_Kob}}).
\end{remark}
Note that the inequalities (\ref{schwpickhar:fla2}) and (\ref{colonna:fla2}) naturally impose the assumptions in Theorems \ref{th:main1} and \ref{th:main2} below.
\section{Main results}
\begin{theorem}\label{th:main1}
Let $u:\mathbb{U}\rightarrow(-1,1)$ be a harmonic function such that:
\begin{itemize}
\item[(R1)] $u(0)=b$ and
\item[(R2)] $|\nabla u(0)|=d$, where $\displaystyle d\leqslant\frac{4}{\pi}\cos{\left(\frac{\pi}{2}b\right)}$.
\end{itemize}
Then, for all $z\in\mathbb{U}$,
\begin{equation}\label{ineq:main}
\frac{4}{\pi}\arctan{\varphi_{a}\big(-|z|\varphi_{c}(|z|)\big)}\leqslant u(z)\leqslant\frac{4}{\pi}\arctan{\varphi_{a}\big(|z|\varphi_{c}(|z|)\big)}.
\end{equation}
Here $\displaystyle a=\tan\frac{b\pi}{4}$ and $c=\displaystyle\frac{\pi}{4}\frac{1}{\cos{\displaystyle\frac{\pi}{2}b}}d$.
These inequalities are sharp for each point $z\in\mathbb{U}$ in the following sense: for arbitrary $z\in\mathbb{U}$ there exist harmonic functions $\widehat{u}^z, \widetilde{u}^z:\mathbb{U}\rightarrow(-1,1)$, which depend on $z$, such that they satisfy {\rm(R1)} and {\rm(R2)} and also
\begin{equation*}
\widehat{u}^z(z)=\frac{4}{\pi}\arctan{\varphi_{a}\big(-|z|\varphi_{c}(|z|)\big)}\quad \mbox{and} \quad \widetilde{u}^z(z)=\frac{4}{\pi}\arctan{\varphi_{a}\big(|z|\varphi_{c}(|z|)\big)}.
\end{equation*}
\end{theorem}
\begin{remark}
Formally, if $c=1$ then the function $\varphi_c$ is not defined. In this case we mean that $\varphi_{c}(|z|)=1$ for all $z\in\mathbb{U}$.
\end{remark}
\begin{corollary}
Let $u:\mathbb{U}\rightarrow(-1,1)$ be a harmonic function such that $u(0)=0$ and $\nabla u(0)=(0,0)$. Then, for all $z\in\mathbb{U}$,
\begin{equation*}
|u(z)|\leqslant\frac{4}{\pi}\arctan{|z|^2}.
\end{equation*}
\end{corollary}
\begin{theorem}\label{th:main2}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a harmonic function such that:
\begin{itemize}
\item[(C1)] $f(0)=0$ and
\item[(C2)] $\|df(0)\|=d$, where $\displaystyle d\leqslant\frac{4}{\pi}$.
\end{itemize}
Then, for all $z\in\mathbb{U}$
\begin{equation}\label{main2:ineq}
|f(z)|\leqslant\frac{4}{\pi}\arctan\big(|z|\varphi_{C}(|z|)\big),
\end{equation}
where $\displaystyle C=\frac{\pi}{4}d$.
\end{theorem}
\begin{corollary}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a harmonic function such that $f(0)=0$ and $\|df(0)\|=0$. Then, for all $z\in\mathbb{U}$,
\begin{equation*}
|f(z)|\leqslant\frac{4}{\pi}\arctan{|z|^2}.
\end{equation*}
\end{corollary}
\begin{remark}
Formally, if $C=1$ then the function $\varphi_C$ is not defined. In this case we set $\varphi_{C}(|z|)=1$ for all $z\in\mathbb{U}$.
\end{remark}
\section{Proofs of main results}
\subsection{Proof of Theorem \ref{th:main1}} In order to prove Theorem \ref{th:main1}, we recall the following definitions and one lemma from \cite{MMandMS}.
Let $\lambda>0$ be arbitrary. By $\overline{D}_{\lambda}(\zeta)=\{z\in\mathbb{U}:d_{\mathbb{U}}(z,\zeta)\leqslant\lambda\}$ (respectively $\overline{S}_{\lambda}(\zeta)=\{z\in\mathbb{S}:d_{\mathbb{S}}(z,\zeta)\leqslant\lambda\}$) we denote the hyperbolic closed disc in $\mathbb{U}$ (respectively in $\mathbb{S}$) with hyperbolic center $\zeta\in\mathbb{U}$ (respectively $\zeta\in\mathbb{S}$) and hyperbolic radius $\lambda$. Specially, if $\zeta=0$ we omit $\zeta$ from the notations.
Let $r\in(0,1)$ be arbitrary. By $\overline{U}_r$ we denote the Euclidean closed disc
$$\{z\in\mathbb{C}: |z|\leqslant r\}.$$
Also, let
\begin{equation*}
\lambda(r)=d_{\mathbb{U}}(r,0)=\ln\frac{1+r}{1-r}=2\mathop{\mathrm{artanh}}{r}.
\end{equation*}
Since $\displaystyle d_{\mathbb{U}}(z,0)=\ln\frac{1+|z|}{1-|z|}=2\mathop{\mathrm{artanh}}{|z|}$, for all $z\in\mathbb{U}$, we have
\begin{equation*}
\overline{D}_{\lambda(r)}=\left\{z\in\mathbb{C}:2\mathop{\mathrm{artanh}}{|z|}\leqslant 2\mathop{\mathrm{artanh}}{r}\right\}=\{z\in\mathbb{C}:|z|\leqslant r\}=\overline{U}_r.
\end{equation*}
Let $b\in(-1,1)$ be arbitrary and $\displaystyle a=\tan{\frac{b\pi}{4}}$. By Theorem \ref{th:schwpick} we have
\begin{equation*}
\overline{S}_{\lambda(r)}(b)=\overline{S}_{\lambda(r)}(\phi(\varphi_{a}(0)))=\phi(\varphi_{a}( \overline{D}_{\lambda(r)}))=\phi(\varphi_{a}(\overline{U}_r)),
\end{equation*}
where $\phi$ is the conformal isomorphism from $\mathbb{U}$ onto $\mathbb{S}$ defined in Subsection \ref{subsec:hyperbolic}.
Further, one can show that (see Figure \ref{fig:1}):
\begin{itemize}
\item[i)] $\overline{S}_{\lambda(r)}(b)$ is symmetric with respect to the $x$-axis;
\item[ii)] $\overline{S}_{\lambda(r)}(b)$ is Euclidean convex (see \cite[Theorem 7.11]{BeardonMinda}).
\end{itemize}
\begin{figure}
\caption{Disks $\overline{U}_r$ and $\overline{S}_{\lambda(r)}(b)$.}
\label{fig:1}
\end{figure}
From i)--ii) it is readily seen that:
\begin{lemma}[{\cite[Lemma 3]{MMandMS}}]\label{lem:hypdisk0}
Let $r\in(0,1)$ and $b\in(-1,1)$ be arbitrary. Then
\begin{equation*}
R_{e}(\overline{S}_{\lambda(r)}(b))=\left[\frac{4}{\pi}\arctan\varphi_a(-r),\frac{4}{\pi}\arctan\varphi_a(r)\right].
\end{equation*}
Here $\displaystyle a=\tan{\frac{b\pi}{4}}$ and $R_{e}:\mathbb{C}\rightarrow\mathbb{R}$ is defined by $R_{e}{(z)}=\mathop{\mathrm{Re}}{z}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{th:main1}]
Applying the strip method, we obtain that there exists a holomorphic function $f:\mathbb{U}\rightarrow\mathbb{S}$ such that $\mathop{\mathrm{Re}}{f}=u$, $f(0)=b$ and $|f'(0)|=d$.
Also, we have
\begin{equation*}
|f^h(0)|=\frac{\rho_{\mathbb{S}}(f(0))}{\rho_{\mathbb{U}}(0)}|f'(0)|=\frac{\pi}{4}\frac{1}{\cos{\displaystyle\frac{\pi}{2}b}}d=c.
\end{equation*}
Let $z\in\mathbb{U}$ be arbitrary. By Theorem \ref{th:BeardonCarne}, taking $\Omega_1=\mathbb{U}$ and $\Omega_2=\mathbb{S}$, we have
\begin{eqnarray*}
d_{\mathbb{S}}(f(z),b) &\leqslant& \log(\cosh{d_{\mathbb{U}}(z,0)}+|f^h(0)|\sinh{d_{\mathbb{U}}(z,0)}) \\
&=&\log\left(\frac{1+|z|^2+2c|z|}{1-|z|^2}\right).
\end{eqnarray*}
Now, we choose a point $R(z)\in[0,1)$ such that
\begin{equation}\label{fla:defRz}
d_{\mathbb{U}}(R(z),0)=\log\left(\frac{1+|z|^2+2c|z|}{1-|z|^2}\right).
\end{equation}
Note that the equality (\ref{fla:defRz}) is equivalent to the equality
\begin{equation*}
\frac{1+R(z)}{1-R(z)}=\frac{1+|z|^2+2c|z|}{1-|z|^2}
\end{equation*}
and hence we obtain $\displaystyle R(z)=|z|\frac{c+|z|}{1+c|z|}=|z|\varphi_{c}(|z|)$.
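This step is routine algebra; as a sanity check (ours, with arbitrary sample ranges), one can verify numerically that $R=|z|\varphi_c(|z|)$ indeed satisfies $\frac{1+R}{1-R}=\frac{1+|z|^2+2c|z|}{1-|z|^2}$:

```python
import random

def R(r, c):
    # R = r * phi_c(r) with phi_c(r) = (c + r) / (1 + c r).
    return r * (c + r) / (1 + c * r)

random.seed(0)
for _ in range(1000):
    r = random.uniform(0.0, 0.999)   # r plays the role of |z|
    c = random.uniform(0.0, 1.0)     # c as in the theorem, with c <= 1
    lhs = (1 + R(r, c)) / (1 - R(r, c))
    rhs = (1 + r * r + 2 * c * r) / (1 - r * r)
    assert abs(lhs - rhs) <= 1e-9 * rhs
print("identity verified")
```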
Therefore
\begin{equation*}
d_{\mathbb{S}}(f(z),b)\leqslant d_{\mathbb{U}}(|z|\varphi_{c}(|z|),0),
\end{equation*}
i.e., $f(z)\in\overline{S}_{\lambda\big(|z|\varphi_{c}(|z|)\big)}(b)$. Finally, by Lemma \ref{lem:hypdisk0}
\begin{equation*}
u(z)=\mathop{\mathrm{Re}} f(z)\in\left[\frac{4}{\pi}\arctan{\varphi_{a}\big(-|z|\varphi_{c}(|z|)\big)}, \frac{4}{\pi}\arctan{\varphi_{a}\big(|z|\varphi_{c}(|z|)\big)}\right].
\end{equation*}
If $z=0$ then it is clear that the inequality (\ref{ineq:main}) is sharp.
In order to prove that the inequality (\ref{ineq:main}) is sharp in the case $z\in\mathbb{U}\backslash\{0\}$, we first define the functions $\widehat{\Phi},\widetilde{\Phi}:\mathbb{U}\rightarrow\mathbb{S}$ as follows
\begin{equation*}
\widehat{\Phi}(\zeta)=\phi\Big(\varphi_{a}\big(-\zeta\cdot\varphi_{c}(\zeta)\big)\Big)
\end{equation*}
and
\begin{equation*}
\widetilde{\Phi}(\zeta)=\phi\Big(\varphi_{a}\big(\zeta\cdot\varphi_{c}(\zeta)\big)\Big).
\end{equation*}
Let $z\in\mathbb{U}\backslash\{0\}$. Define the functions $\widehat{u}^z,\widetilde{u}^z:\mathbb{U}\rightarrow(-1,1)$ (which depend on $z$) in the following way:
\begin{equation*}
\widehat{u}^z(\zeta)=\mathop{\mathrm{Re}}{\widehat{\Phi}}(e^{-i\arg{z}}\zeta)
\end{equation*}
and
\begin{equation*}
\widetilde{u}^z(\zeta)=\mathop{\mathrm{Re}}{\widetilde{\Phi}}(e^{-i\arg{z}}\zeta).
\end{equation*}
It is easy to check that the functions $\widehat{u}^z$ and $\widetilde{u}^z$ are harmonic and that they satisfy the assumptions {\rm(R1)} and {\rm(R2)}. Also
\begin{equation*}
\widehat{u}^z(z)=\frac{4}{\pi}\arctan{\varphi_{a}\big(-|z|\varphi_{c}(|z|)\big)}
\end{equation*}
and
\begin{equation*}
\widetilde{u}^z(z)=\frac{4}{\pi}\arctan{\varphi_{a}\big(|z|\varphi_{c}(|z|)\big)}.
\end{equation*}
\end{proof}
\subsection{Proof of Theorem \ref{th:main2}}
In order to prove Theorem \ref{th:main2}, we need two lemmas.
\begin{lemma}[{\cite[Lemma 1]{colonna}}]\label{lem:colonna}
Let $z,w\in\mathbb{C}$. Then
\begin{equation*}
\max\limits_{\theta\in\mathbb{R}}|w\cos{\theta}+z\sin{\theta}|=\frac{1}{2}(|w+iz|+|w-iz|).
\end{equation*}
\end{lemma}
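The identity of Lemma \ref{lem:colonna} is also easy to confirm by brute force; the following sketch (ours, with an arbitrary grid size and tolerance) compares a fine grid search over $\theta$ with the closed form $\frac12(|w+iz|+|w-iz|)$:

```python
import math
import random

def max_over_theta(w, z, n=20000):
    # Grid approximation of max over theta of |w cos(theta) + z sin(theta)|;
    # the expression has period pi in theta, so a grid on [0, pi) suffices.
    return max(abs(w * math.cos(math.pi * k / n) + z * math.sin(math.pi * k / n))
               for k in range(n))

def closed_form(w, z):
    # Right-hand side of the lemma.
    return 0.5 * (abs(w + 1j * z) + abs(w - 1j * z))

random.seed(1)
for _ in range(20):
    w = complex(random.gauss(0, 1), random.gauss(0, 1))
    z = complex(random.gauss(0, 1), random.gauss(0, 1))
    assert abs(max_over_theta(w, z) - closed_form(w, z)) < 1e-5
print("lemma confirmed on random samples")
```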
\begin{lemma}\label{lem:phiAincreasing}
Fix $z\in\mathbb{U}$. The function $h:(-1,1)\rightarrow\mathbb{R}$ defined by $\displaystyle h(t)=\frac{t+|z|}{1+t|z|}$ is monotonically increasing.
\end{lemma}
\begin{proof}
The proof follows directly from the fact that $\displaystyle h'(t)=\frac{1-|z|^2}{(1+t|z|)^2}>0$ for all $t\in(-1,1)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:main2}] Denote by $u$ and $v$ the real and imaginary parts of $f$, respectively. Let $\theta\in\mathbb{R}$ be arbitrary. It is clear that the function $U$ defined by
\begin{equation*}
U(z)=\cos{\theta}u(z)+\sin{\theta}v(z)
\end{equation*}
is harmonic on the unit disk $\mathbb{U}$, $U(0)=0$ and $|U(z)|\leqslant|f(z)|<1$ for all $z\in\mathbb{U}$.
By Theorem \ref{th:main1} we have
\begin{equation}\label{proofmain2:fla1}
|U(z)|\leqslant\frac{4}{\pi}\arctan\big(|z|\varphi_{c}(|z|)\big), \quad \mbox{ for all }\quad z\in\mathbb{U},
\end{equation}
where $\displaystyle c=\frac{\pi}{4}|\nabla U(0)|$.
Since
\begin{eqnarray*}
\nabla U(z) &=& \cos{\theta}\nabla u(z)+\sin{\theta}\nabla v(z) \\
&=& \cos{\theta}\left(u_x(z)+iu_{y}(z)\right)+\sin{\theta}\left(v_x(z)+iv_{y}(z)\right),
\end{eqnarray*}
by Lemma \ref{lem:colonna} we get
\begin{eqnarray*}
\max\limits_{\theta\in\mathbb{R}}|\nabla U(z)| &=& \max\limits_{\theta\in\mathbb{R}}|\cos{\theta}\left(u_x(z)+iu_{y}(z)\right)+\sin{\theta}\left(v_x(z)+iv_{y}(z)\right)| \\
&=& \frac{1}{2}\big(|u_{x}(z)+iu_{y}(z)+i(v_{x}(z)+iv_{y}(z))| \\
& & {}+|u_{x}(z)+iu_{y}(z)-i(v_{x}(z)+iv_{y}(z))|\big) \\
&=& \frac{1}{2}\left(\sqrt{(u_x(z)-v_y(z))^2+(u_y(z)+v_x(z))^2}\right.\\
& & \left.{}+\sqrt{(u_x(z)+v_y(z))^2+(u_y(z)-v_x(z))^2}\right)\\
&=& |f_{z}(z)|+|f_{\overline{z}}(z)|\\
&=&\|df(z)\|.
\end{eqnarray*}
Hence
\begin{equation*}
|\nabla U(0)|\leqslant\|df(0)\|
\end{equation*}
and
\begin{equation*}
c=\frac{\pi}{4}|\nabla U(0)|\leqslant\frac{\pi}{4}\|df(0)\|=\frac{\pi}{4}d=C.
\end{equation*}
By Lemma \ref{lem:phiAincreasing}, from (\ref{proofmain2:fla1}) we obtain
\begin{equation}\label{proofmain2:fla3}
|U(z)|\leqslant\frac{4}{\pi}\arctan\big(|z|\varphi_{C}(|z|)\big), \quad \mbox{ for all }\quad z\in\mathbb{U}.
\end{equation}
Finally, let $z\in\mathbb{U}$ be such that $f(z)\neq0$ and let $\theta$ be such that
\begin{equation*}
\cos{\theta}=\frac{u(z)}{|f(z)|} \quad \mbox{ and } \quad \sin{\theta}=\frac{v(z)}{|f(z)|}.
\end{equation*}
Then $U(z)=|f(z)|$ and hence from (\ref{proofmain2:fla3}) we get the inequality (\ref{main2:ineq}).
If $z\in\mathbb{U}$ is such that $f(z)=0$, then the inequality (\ref{main2:ineq}) is trivial.
\end{proof}
\section{Appendix}
\subsection{Harmonic quasiregular mappings and the Schwarz-Pick type estimates}
Taking into account Remark \ref{rm:problem}, we mention some results related to harmonic quasiregular mappings.
Let $D$ and $G$ be domains in $\mathbb{C}$ and let $K\geqslant1$. A $C^1$ mapping $f:D\rightarrow G$ is called a $K$-quasiregular mapping if
\begin{equation*}
\|df(z)\|^2\leqslant K|J_{f}(z)|, \quad \mbox{ for all } z\in D.
\end{equation*}
Here $J_{f}$ is the Jacobian determinant of $f$. In particular, a $K$-quasiconformal mapping is a $K$-quasiregular mapping which is a homeomorphism.
In \cite{MKandMM}, M. Kne\v zevi\'c and M. Mateljevi\'c proved the following result (which can be considered as a generalization of Theorem \ref{th:colonna}):
\begin{theorem}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a harmonic $K-$quasiconformal mapping. Then
$$\displaystyle \|df(z)\|\leqslant K\frac{1-|f(z)|^2}{1-|z|^2},\quad \mbox{for all}\quad z\in\mathbb{U}.$$
\end{theorem}
Also, a result of this type was obtained by H. H. Chen \cite{hhChen1}:
\begin{theorem}
Let $f:\mathbb{U}\rightarrow\mathbb{U}$ be a harmonic $K-$quasiconformal mapping. Then
$$\displaystyle \|df(z)\|\leqslant\frac{4}{\pi}K\frac{\cos{\displaystyle\left(|f(z)|\pi/2\right)}}{1-|z|^2},\quad \mbox{for all}\quad z\in\mathbb{U}.$$
\end{theorem}
For further results related to harmonic quasiconformal and hyperbolic harmonic quasiconformal mappings, we refer the interested reader to \cite{MMTopics,wan,MMRevue, XChenAFang,MMrckmFil, MKFil, MKMor} and the literature cited there.
\textbf{Acknowledgement.}
The author is greatly indebted to Professor M. Mateljevi\'c for introducing him to this topic and for many stimulating conversations. Also, the author wishes to express his thanks to Mijan Kne\v zevi\'c for useful comments related to this paper.
\end{document}
\begin{document}
\begin{frontmatter}
\title{The impact of the diagonals of polynomial forms on limit
theorems with long memory}
\runtitle{Impact of diagonals of polynomial forms on limit theorems
with long memory}
\begin{aug}
\author{\inits{S.}\fnms{Shuyang}~\snm{Bai}\thanksref{e1}\ead[label=e1,mark]{bsy9142@bu.edu}}
\and
\author{\inits{M.S.}\fnms{Murad S.}~\snm{Taqqu}\corref{}\thanksref{e2}\ead[label=e2,mark]{bumastat@gmail.com}}
\address[]{Department of Mathematics and Statistics, Boston
University, 111 Cumminton Street,
Boston, MA 02215, USA. \printead{e1}; \printead*{e2}}
\end{aug}
\received{\smonth{3} \syear{2014}}
\revised{\smonth{11} \syear{2014}}
\begin{abstract}
We start with an i.i.d.
sequence and consider the product of two polynomial-forms moving
averages based on that sequence. The coefficients of the polynomial
forms are asymptotically slowly decaying homogeneous functions so that
these processes have long memory. The product of these two polynomial
forms is a stationary
nonlinear process. Our goal is to obtain limit theorems for the
normalized sums of this product process in three cases: exclusion of
the diagonal terms of the polynomial form,
inclusion, or the mixed case (one polynomial form excludes the
diagonals while the other one includes them). In any one of these
cases, if the product has long memory,
then the limits are given by Wiener chaos. But the limits in each of
the cases are quite different. If the diagonals are excluded, then the
limit is expressed as in the product
formula of two Wiener--It\^o integrals. When the diagonals are
included, the limit stochastic integrals are typically due to a single
factor of the product, namely the one with
the strongest memory. In the mixed case, the limit stochastic integral
is due to the polynomial form without the diagonals irrespective of the
strength of the memory.
\end{abstract}
\begin{keyword}
\kwd{diagonals}
\kwd{long memory}
\kwd{noncentral limit theorem}
\kwd{self-similar processes}
\kwd{Volterra}
\kwd{Wiener}
\end{keyword}
\end{frontmatter}
\section{Introduction}
Let $X(n)$ be a stationary process with mean $0$ and finite variance.
We are interested in the following weak convergence of normalized
partial sum to a process $Z(t)$:
\begin{equation}
\label{eqgenerallimit}
\frac{1}{A(N)}\sum_{n=1}^{[Nt]}X(n)
\Rightarrow Z(t)
\end{equation}
as $N\rightarrow\infty$ where $A(N)\rightarrow\infty$ is a suitable
normalization. The limit $Z(t)$, $t\ge0$, if it exists, has stationary
increments and is self-similar with some index $H>0$, that is, for any
$a>0$, $\{Z(at),t\ge0\}$ and $\{a^HZ(t),t\ge0\}$ have the same
finite-dimensional distributions. The parameter $H$ is called the \emph
{memory parameter}\footnote{A precise definition of memory parameter is
given in Definition~\ref{DefHurstexpXn}.} of the process $X(n)$
and the \emph{Hurst index} or \emph{self-similarity parameter} of the
limit process $Z(t)$. The higher the value of $H$, the stronger the
memory of the process $X(n)$.
When the dependence in $X(n)$ is weak, one typically ends up in (\ref
{eqgenerallimit}) with
\[
A(N)= \Biggl(\operatorname{Var}\Biggl[\sum_{n=1}^N
X(n)\Biggr] \Biggr)^{1/2}\sim cN^{1/2}
\]
as $N\rightarrow\infty$ for some $c>0$, and $Z(t)$ is the Brownian
motion. These types of limit theorems are often called \emph{central
limit theorems}.
When, however, the dependence in $X(n)$ is so strong that
$\operatorname{Var}[\sum_{n=1}^N X(n)]$ grows faster than the linear speed $N$, and typically
as $N^{2H}$ with $H\in(1/2,1)$, the limit process $Z(t)$ in (\ref
{eqgenerallimit}) is no longer Brownian motion. $Z(t)$ is in this
case a self-similar process with stationary increments which has a
\emph
{Hurst index} $H$ (see \cite{embrechtsmaejima2002selfsimilar}).
These types of limit theorems involving non-Brownian limits are often
called \emph{noncentral limit theorems}. When the process $X(n)$ is
nonlinear and has long memory, the limit $Z(t)$ can be non-Gaussian
(e.g., \cite{dobrushinmajor1979non,taqqu1979convergence,surgailis1982zones}).
In \cite{baitaqqu2013generalized}, a noncentral limit theorem is
established for an off-diagonal polynomial-form process called $k$th
order \emph{discrete chaos process}:
\begin{equation}
\label{eqXnintro1}
Y'(n)=\sum_{0<i_1,\ldots,i_k<\infty}'
a(i_1,\ldots,i_k)\varepsilon _{n-i_1}\cdots
\varepsilon_{n-i_k},
\end{equation}
where the prime $'$ indicates that we do not sum on the diagonals
$i_p=i_q$, $p\neq q$, the noise $\varepsilon_i$'s are i.i.d. random
variables with mean $0$ and variance $1$, and $a(\cdot)$ is
asymptotically some homogeneous function $g$ called \emph{generalized
Hermite kernel} (GHK). The limit $Z(t)$, called a \emph{generalized
Hermite process}, is expressed by a $k$-fold \emph{Wiener--It\^o integral}:
\begin{equation}
\label{eqgenhermproc} Z(t)=\int_{\mathbb{R}^k}'\!\int
_0^t g(s-x_1,\ldots,s-x_k)1_{\{
s>x_1,\ldots,s>x_k\}}
\,\mathrm{d}s B(\mathrm{d}x_1)\cdots B(\mathrm{d}x_k),
\end{equation}
where the prime $'$ indicates that we do not integrate on the diagonals
$x_p=x_q$, $p\neq q$, and $B(\cdot)$ is Brownian motion. These
processes $Z(t)$ include the \emph{Hermite process} considered in
\cite{dobrushinmajor1979non,taqqu1979convergence} and \cite{surgailis1982zones}.
In \cite{baitaqqu2014convergence}, a noncentral limit theorem is
established for a polynomial-form process called $k$th order \emph
{discrete Volterra process}:
\begin{equation}
\label{eqXnintro} Y(n)=\sum_{0<i_1,\ldots,i_k<\infty} a(i_1,
\ldots,i_k)\varepsilon _{n-i_1}\cdots\varepsilon_{n-i_k},
\end{equation}
which differs from $Y'(n)$ in (\ref{eqXnintro1}) by including the
diagonals, and where $a(\cdot)$ is asymptotically $g(\cdot)$, some
special type of generalized Hermite kernel called \emph{generalized
Hermite kernel of Class (B)} (GHK(B)). The limit $Z(t)$ can be
heuristically thought of as (\ref{eqgenhermproc}) with diagonals
included, and is precisely expressed as a $k$-fold \emph{centered
Wiener--Stratonovich integral}, which is a linear combination of
certain Wiener--It\^o integrals of orders lower than or equal to $k$
(see \cite{baitaqqu2014convergence}).
In this paper, we contrast the effect of two types of stationary
sequences in the limit theorem~(\ref{eqgenerallimit}). The first
stationary sequence is
\begin{equation}
\label{eqprodofchaos}
X(n)=Y'_1(n)Y'_2(n),
\end{equation}
that is, a product of two long memory chaos processes (\ref{eqXnintro1}) which \emph{exclude} the diagonals. The second stationary
sequence is
\begin{equation}
\label{eqprodofvolt} X(n)=Y_1(n)Y_2(n),
\end{equation}
that is, a product of two long memory processes in (\ref{eqXnintro}) which \emph{include} the diagonals. We also consider the mixed case
\begin{equation}
\label{eqprodofmixed} X(n)=Y_1'(n)Y_2(n).
\end{equation}
Limit theorems for such types of product are of interest, for example,
in statistical inference involving long memory processes with different
memory parameters (\cite{koulbaillie2004regression}, see also
Proposition~11.5.6 of \cite{giraitiskoulsurgailis2009large}), and
in the study of covariation of fractional Brownian motions with
different Hurst indexes \cite{maejimatudor2012selfsimilar}.
Typically, the factor processes $Y$ there are assumed to be either
linear (or Gaussian) or a transformation of linear process (or
Gaussian), which yields in the limit a generalized Rosenblatt processes
where $g(x_1,x_2)=x_1^{\gamma_1}x_2^{\gamma_2}$ in (\ref{eqgenhermproc}). By taking the factors $Y$ to be some nonlinear processes as in
(\ref{eqprodofchaos}), (\ref{eqprodofvolt}) and (\ref{eqprodofmixed}), one can obtain much richer limit structures, which are briefly
described below.
We show that in the case (\ref{eqprodofchaos}), the limit in (\ref
{eqgenerallimit}) is expressed as Wiener--It\^o integrals which can
be obtained by using a rule similar to that used for computing the
product of two Wiener--It\^o integrals. In fact, if the stationary
sequences $Y'_1(n)$ and $Y'_2(n)$ have, respectively, memory parameters
$H_1,H_2\in(1/2,1)$ with $H_1+H_2>3/2$, then the limit in (\ref
{eqgenerallimit}) has Hurst index
\[
H=H_1+H_2-1\in(1/2,1).
\]
In the case (\ref{eqprodofvolt}), in contrast, the limit stochastic
integrals are typically due to a single factor $Y_1(n)$ or $Y_2(n)$,
namely, the one with the strongest memory parameter. The Hurst index of
the limit is then
\[
\max(H_1,H_2)\in(1/2,1)
\]
which is always greater than $H_1+H_2-1$. In the case (\ref{eqprodofmixed}), only the off-diagonal factor $Y_1'(n)$ contributes to the
limit stochastic integral, irrespective of the strength of the memory.
The paper is organized as follows. Section~\ref{secbackground}
contains some background. We state the main results in Section~\ref{secstateresult}, namely, Theorem~\ref{Thmgeneralprodchaos} for
processes without diagonals, Theorem~\ref{Thmgeneralprodvolt} for
processes with diagonals and Theorem~\ref{Thmgeneralprodmixed} for
the mixed case. Section~\ref{secprelim} provides some preliminary
results used in the proofs. Section~\ref{secproof} contains the proofs
of the theorems.
\section{Background}\label{secbackground}
The following notation will be used throughout: $\mathbf{0}$ denotes
the zero vector $(0,0,\ldots,0)$ and $\mathbf{1}=(1,1,\ldots,1)$
denotes the vector with ones in every component. For two vectors
$\mathbf{x}$ and $\mathbf{y}$ with the same dimension, we write
$\mathbf
{x}\le\mathbf{y}$ (or $<$, $\ge$, $>$) if the inequality holds
componentwise. We let
\[
[x]=\sup\{n\in\mathbb{Z}\dvt n\le x\}
\]
for any real $x$ and for a real vector $\mathbf{x}=(x_1,\ldots,x_k)$,
we define
\[
[\mathbf{x}]=\bigl([x_1],\ldots,[x_k]\bigr).
\]
The notation $1_A$ denotes the indicator function of a set $A$. The
value of a constant $C>0$ or $c>0$ may change from line to line.
In \cite{baitaqqu2013generalized}, the following classes of
functions were introduced.
\begin{Def}\label{DefGHK}
A measurable function $g$ defined on $\mathbb{R}_+^k$ is called a
\emph
{generalized Hermite kernel} (GHK) with homogeneity exponent
\begin{equation}
\label{eqalpha} \alpha\in \biggl(-\frac{k+1}{2},-\frac{k}{2} \biggr),
\end{equation}
if it satisfies
\begin{enumerate}[2.]
\item[1.] $g(\lambda\mathbf{x})=\lambda^{\alpha}g(\mathbf{x})$,
$\forall
\lambda>0$;
\item[2.] $\int_{\mathbb{R}_+^k} \vert g(\mathbf{1}+\mathbf{x})g(\mathbf
{x})\vert \,\mathrm{d}\mathbf{x}<\infty$.
\end{enumerate}
A GHK $g$ is said to belong to Class (B) [abbreviated as GHK(B)], if
$g$ is a.e. continuous on $\mathbb{R}_+^k$ and
\[
\bigl\vert g(\mathbf{x})\bigr\vert \le c \Vert \mathbf{x}\Vert
^{\alpha}=c(x_1+\cdots +x_k)^{\alpha}
\]
($\|\cdot\|$ is the $L^1$-norm) for some constant $c>0$.
\end{Def}
\begin{Rem}\label{RemGHKhtL^2}
As it was shown in Theorem~3.5 of \cite{baitaqqu2013generalized},
if $g$ is a GHK, then
\[
\int_0^t \bigl\vert g(s\mathbf{1}-
\mathbf{x})\bigr\vert 1_{\{s\mathbf{1}>\mathbf{x}\}
}\,\mathrm{d}s<\infty
\]
for a.e. $\mathbf{x}\in\mathbb{R}^k$, and the function
\[
h_t(\mathbf{x}):=\int_0^t g(s
\mathbf{1}-\mathbf{x})1_{\{s\mathbf
{1}>\mathbf{x}\}}\,\mathrm{d}s\in L^2\bigl(
\mathbb{R}^k\bigr).
\]
\end{Rem}
Using a GHK, one can define a self-similar process with stationary
increments on a Wiener chaos as follows.
\begin{Def}\label{DefGHprocess}
Let $g$ be a GHK on $\mathbb{R}_+^k$ with homogeneity exponent $\alpha
\in(-\frac{k+1}{2},-\frac{k}{2})$, then (\ref{eqgenhermproc}) is
called a \emph{generalized Hermite process} $Z(t)$. It is self-similar
with Hurst index
\begin{equation}
\label{eqH}
H=\alpha+k/2+1.
\end{equation}
\end{Def}
\begin{Eg}\label{EgHermKernel}
If
\[
g(\mathbf{x})=\prod_{j=1}^k
x_j^{\gamma},
\]
where $-1/2-1/k<\gamma<-1/2$, then $Z(t)$ in (\ref{eqgenhermproc})
is the \textit{Hermite process} considered in \cite{dobrushinmajor1979non} and~\cite{taqqu1979convergence}.
\end{Eg}
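As a concrete illustration (ours, with assumed sample values $k=2$ and $\gamma=-0.7$), the following sketch checks the homogeneity of the Hermite kernel above and computes the resulting Hurst index $H=\alpha+k/2+1$ from (\ref{eqH}):

```python
k, gamma = 2, -0.7   # assumed sample values; need -1/2 - 1/k < gamma < -1/2

def g(x):
    # Hermite kernel g(x) = prod_j x_j^gamma on R_+^k.
    p = 1.0
    for xj in x:
        p *= xj ** gamma
    return p

alpha = k * gamma                      # homogeneity exponent of g
assert -(k + 1) / 2 < alpha < -k / 2   # alpha must lie in (-(k+1)/2, -k/2)

H = alpha + k / 2 + 1                  # Hurst index from H = alpha + k/2 + 1
assert 0.5 < H < 1

# Numerical check of homogeneity: g(lambda * x) = lambda^alpha * g(x).
lam, x = 2.5, (0.3, 1.7)
scaled = g(tuple(lam * xi for xi in x))
assert abs(scaled - lam ** alpha * g(x)) < 1e-12 * abs(g(x))
print(round(H, 2))  # expected: 0.6
```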
Note that GHK(B) does not include the kernel in Example~\ref{EgHermKernel}. We use a GHK(B) because of its boundedness property. The
subclass of GHK(B) is, in fact, a dense subset in the whole class of
GHK (see Remark~3.17 of \cite{baitaqqu2013generalized}).
We now state two limit theorems, the first for the discrete chaos
process $Y'(n)$ defined in (\ref{eqXnintro1}) where the diagonals
are excluded, and the second for the Volterra process $Y(n)$ defined in
(\ref{eqXnintro}) which includes the diagonals.
Suppose that $g$ is a GHK(B) on $\mathbb{R}_+^k$, $L(\cdot)$ is a
bounded function defined on $\mathbb{Z}_+^k$ such that
\[
\lim_{n\rightarrow\infty}L\bigl([n\mathbf{x}]+\mathbf{B}(n)\bigr)=1
\]
for any $\mathbf{x}\in\mathbb{R}_+^k$ and any $\mathbb{Z}_+^k$-valued
bounded function $\mathbf{B}(n)$, and suppose that the coefficient
$a(\cdot)$ in (\ref{eqXnintro1}) is given by
\begin{equation}
\label{eqa=gL}
a(\mathbf{i})=g(\mathbf{i})L(\mathbf{i}).
\end{equation}
\begin{Pro}[(Theorem~6.5 of \cite{baitaqqu2013generalized})]\label{Thmncltchaossingle}
The following weak convergence holds in $D[0,1]$:
\begin{equation}
\frac{1}{N^{H}} \sum_{n=1}^{[Nt]}
Y'(n) \Rightarrow Z(t):=I_k(h_t),
\end{equation}
where $H=\alpha+k/2+1\in(1/2,1)$,
\begin{equation}
\label{eqht}
h_t(\mathbf{x})=\int_0^t
g(s\mathbf{1}-\mathbf{x})1_{\{s\mathbf
{1}>\mathbf{x}\}}\,\mathrm{d}s
\end{equation}
with $g$ as in (\ref{eqa=gL}), and $I_k(\cdot)$ denotes the $k$-fold
Wiener--It\^o integral, so that $Z(t)$ is a generalized Hermite process
(\ref{eqgenhermproc}).
\end{Pro}
We now consider the limit when the diagonals are included. If $g$ is
GHK(B) on $\mathbb{R}_+^k$ and is in addition symmetric, we define the
following function $g_r$ by identifying $r$ pairs of variables of $g$
and integrating them out, as follows:
\begin{equation}
\label{eqgr} g_r(\mathbf{x})=\int_{\mathbb{R}_+^r}g(y_1,y_1,
\ldots ,y_r,y_r,x_1,\ldots ,x_{k-2r})\,\mathrm{d}
\mathbf{y}.
\end{equation}
In \cite{baitaqqu2014convergence}, a noncentral limit theorem was
established for the Volterra process $Y(n)$ in (\ref{eqXnintro}).
Let
\[
a(\cdot)=g(\cdot)L(\cdot)
\]
in (\ref{eqXnintro}) be given as in (\ref{eqa=gL}) assuming in
addition that $g$ is symmetric.
\begin{Pro}[(Theorem~6.2 of \cite{baitaqqu2014convergence})]\label
{Thmncltvoltsingle}
One has the following weak convergence\vspace*{-4pt} in $D[0,1]$:
\begin{equation}
\label{eqncltvolt} \frac{1}{N^{H}} \sum_{n=1}^{[Nt]}
Y(n) \Rightarrow Z(t):=\sum_{0\le
r<k/2} d_{k,r}
Z_{k-2r}(t),
\end{equation}
where\vspace*{-4pt} $H=\alpha+k/2+1\in(1/2,1)$,
\begin{equation}
\label{eqdkr} d_{k,r}=\frac{k!}{2^r(k-2r)!r!},
\end{equation}
and\vspace*{-4pt}
\begin{equation}
\label{eqZt=gr} Z_{k-2r}(t):=\int'_{\mathbb{R}^{k-2r}}
\!\int_0^t g_r(s\mathbf {1}-\mathbf
{x})1_{\{s\mathbf{1}>\mathbf{x}\}}\,\mathrm{d}s B(\mathrm{d}x_1)\cdots B(\mathrm{d}x_{k-2r})
\end{equation}
is a $(k-2r)$th order generalized Hermite process with GHK given by
$g_r$ in (\ref{eqgr}).
\end{Pro}
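The coefficient $d_{k,r}$ in (\ref{eqdkr}) counts the ways of selecting $r$ disjoint (unordered) pairs from $k$ indices. A brute-force check of this interpretation (our sketch):

```python
from itertools import combinations
from math import factorial

def d(k, r):
    # d_{k,r} = k! / (2^r (k - 2r)! r!)
    return factorial(k) // (2 ** r * factorial(k - 2 * r) * factorial(r))

def count_pairings(k, r):
    # Brute force: number of ways to choose r disjoint (unordered) pairs
    # from the index set {0, ..., k-1}.
    pairs = list(combinations(range(k), 2))
    count = 0
    for choice in combinations(pairs, r):
        used = [i for p in choice for i in p]
        if len(set(used)) == 2 * r:   # keep only disjoint selections
            count += 1
    return count

for k in range(1, 7):
    for r in range(0, k // 2 + 1):
        assert d(k, r) == count_pairings(k, r)
print("coefficient check passed")
```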
\begin{Rem}
The limit process $Z(t)$ in (\ref{eqncltvolt}) can be simply
expressed in terms of a centered Wiener--Stratonovich integral
$\accentset{\circ}{I}_k^c(\cdot)$ as
\begin{equation}
\label{eqwienerstratonintegral} Z(t)=\accentset{\circ}{I}_k^c(h_t),
\end{equation}
where $h_t$ is as in (\ref{eqht}), and\vspace*{-3pt} where
\[
\accentset{\circ}{I}_k^c(\cdot)=\sum_{0\le r<k/2} d_{k,r}I_{k-2r}\bigl(\tau^r \cdot\bigr).
\]
The integral $\accentset{\circ}{I}_k^c(\cdot)$ differs from the
Wiener--Stratonovich integral
\[
\accentset{\circ}{I}_k(\cdot):=\sum_{0\le r \le[k/2]}
d_{k,r}I_{k-2r}\bigl(\tau^r \cdot\bigr)
\]
introduced in \cite{humeyer1988integrales} by excluding the term
$r=k/2$ when $k$ is even. Here, the operator $\tau^r$ identifies~$r$
pairs of variables of $h$ and integrates them out (see \cite{baitaqqu2014convergence}). The operator $\tau^r$ is often called a
``trace operator.''\vspace*{-2pt}
\end{Rem}
\section{Statement of the main results}\label{secstateresult}
We state here the main results, and defer the proofs to Sections~\ref{secproofchaos} and \ref{secproofvolt}.
In the statement of the results, the following expressions are used.
\begin{Def}\label{Defcltnclt}
Let $X(n)$ be a stationary process with finite variance. We say that:
\begin{enumerate}[2.]
\item[1.] $X(n)$ satisfies a \emph{central limit theorem} (CLT),\vspace*{-2pt} if
\begin{equation}
\label{eqclt} N^{-1/2}\sum_{n=1}^{[Nt]}
\bigl[X(n)-\mathbb{E}X(n)\bigr]\Rightarrow\sigma B(t)
\end{equation}
in $D[0,1]$,
where\vadjust{\goodbreak} $\sigma^2=\sum_{n=-\infty}^\infty\operatorname{Cov}(X(n),X(0))$;
\item[2.] $X(n)$ satisfies a \emph{noncentral limit theorem} (NCLT) with a
Hurst index $H\in(1/2,1)$ and limit $Z(t)$, if
\begin{equation}
\label{eqnclt} N^{-H}\sum_{n=1}^{[Nt]}
\bigl[X(n)-\mathbb{E}X(n)\bigr]\Rightarrow Z(t)
\end{equation}
in $D[0,1]$.
\end{enumerate}
\end{Def}
\begin{Rem}
In case~1 above, the ``long-run variance'' $\sigma^2$ can be $0$. In
this case, we understand the limit theorem as degenerate (the
normalization $N^{-1/2}$ is too strong). We do not consider here limit
theorems involving a Hurst index $H<1/2$. In case~2, the limit in (\ref
{eqnclt}) may be fractional Brownian motion.
\end{Rem}
We now consider separately the cases where the diagonals of the
polynomial forms are excluded (chaos processes) and when they are
included (Volterra processes).
\subsection{Limit theorem for a product of long-memory chaos processes}
Suppose that we have the following two discrete chaos processes
(off-diagonal polynomial forms):
\begin{equation}
\label{eqoff-diagonalY} Y_1'(n)=\sum
_{\mathbf{i}\in\mathbb{Z}_+^{k_1}}'a^{(1)}(\mathbf {i})
\varepsilon_{n-i_1}\cdots\varepsilon_{n-i_{k_1}}, \qquad Y_2'(n)=
\sum_{\mathbf{i}\in\mathbb{Z}_+^{k_2}}'a^{(2)}(\mathbf{i})
\varepsilon _{n-i_1}\cdots\varepsilon_{n-i_{k_2}},
\end{equation}
where we assume that $a^{(j)}=g^{(j)}L^{(j)}$ as in (\ref{eqa=gL}) is
symmetric, where $g^{(j)}$ is a symmetric GHK(B) with homogeneity
exponent
\[
\alpha_j\in(-k_j/2-1/2,-k_j/2),\qquad j=1,2.
\]
Definition~\ref{DefGHprocess} suggests the following terminology.
\begin{Def}\label{DefHurstExpa^j}
The index
\begin{equation}
\label{eqHrange} H=\alpha+k/2+1\in(1/2,1)
\end{equation}
is called the
\emph{associated Hurst index} of the coefficient $a(\cdot)=g(\cdot
)L(\cdot)$ in (\ref{eqa=gL}).
\end{Def}
\begin{Rem}
The associated Hurst indices of the coefficients in $Y_1'(n)$ and
$Y_2'(n)$ will determine the Hurst index of the limit process $Z(t)$ in
(\ref{eqgenerallimit}).
\end{Rem}
We want to obtain a limit theorem for the normalized partial sum of the
product process:
\begin{equation}
\label{eqchaosproductproc} X(n):=Y'_1(n)Y_2'(n).
\end{equation}
\begin{Thm}\label{Thmgeneralprodchaos}
Let $X(n)$ be the product process in (\ref{eqchaosproductproc}).
Suppose that $H_j$ is the associated Hurst index of $a^{(j)}(\cdot)$,
$j=1,2$, and assume that $\mathbb{E}|\varepsilon_i|^{4+\delta}<\infty$
for some
$\delta>0$.
\begin{enumerate}[2.]
\item[1.] If $H_1+H_2<3/2$, then $X(n)$ satisfies the
CLT (\ref{eqclt});
\item[2.] If $H_1+H_2>3/2$, then $X(n)$ satisfies
the NCLT (\ref{eqnclt}) with Hurst index $H=H_1+H_2-1$ and limit
\begin{equation}
Z(t)=\sum_{r=0}^{k} r!\binom{k_1}{r}\binom{k_2}{r} I_{k_1+k_2-2r}(h_{t,r}),
\end{equation}
where $k=k_1\wedge k_2$ if $k_1\neq k_2$, and $k=k_1-1$ if $k_1=k_2$.
The integrand $h_{t,r}$ above is defined as
\begin{equation}
\label{eqhtr}
h_{t,r}(\mathbf{x})= \int_0^t
\bigl( g^{(1)}\otimes_r g^{(2)} \bigr) (s
\mathbf{1}-\mathbf{x})1_{\{s\mathbf{1}>\mathbf{x}\}} \,\mathrm{d}s,
\end{equation}
where
\begin{eqnarray}
&& g^{(1)}\otimes_r g^{(2)}(
\mathbf{x})
\nonumber
\\[-8pt]
\label{eqcontractofg}
\\[-8pt]
\nonumber
&&\quad :=\int_{\mathbb{R}_+^r}g^{(1)}(y_1,
\ldots,y_r,x_1,\ldots ,x_{k_1-r})g^{(2)}(y_1,
\ldots,y_r,x_{k_1-r+1},\ldots ,x_{k_1+k_2-2r})\,\mathrm{d}\mathbf{y}
\end{eqnarray}
is a GHK, and when $r=0$, (\ref{eqcontractofg}) is understood as the
tensor product $g^{(1)}\otimes g^{(2)}$. When $r>0$ in (\ref
{eqcontractofg}), we identify $r$ variables of $g^{(1)}$ and
$g^{(2)}$ and integrate over them.
\end{enumerate}
\end{Thm}
This theorem is proved in Section~\ref{secproofchaos}.
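\begin{Rem}
The Hurst index $H=H_1+H_2-1$ in case~2 can be checked directly (a short computation, consistent with the theorem): the contraction $g^{(1)}\otimes_r g^{(2)}$ in (\ref{eqcontractofg}) is a function of $k_1+k_2-2r$ variables with homogeneity exponent $\alpha_1+\alpha_2+r$, since each integrated variable $y_i$ contributes an extra factor of $\lambda$ under the scaling $\mathbf{x}\mapsto\lambda\mathbf{x}$, $\mathbf{y}\mapsto\lambda\mathbf{y}$. By (\ref{eqH}), the associated Hurst index of each term is therefore
\begin{eqnarray*}
H' &=& \alpha_1+\alpha_2+r+\frac{k_1+k_2-2r}{2}+1 \\
&=& \biggl(\alpha_1+\frac{k_1}{2}+1\biggr)+\biggl(\alpha_2+\frac{k_2}{2}+1\biggr)-1 = H_1+H_2-1,
\end{eqnarray*}
independently of $r$.
\end{Rem}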
\subsection{Limit theorem for a product of long-memory Volterra processes}
Let now
\begin{equation}
\label{eqX=product} X(n)=Y_1(n)Y_2(n),
\end{equation}
where
\begin{equation}
\label{eqwith-diagonalY} Y_1(n)=\sum_{\mathbf{i}\in\mathbb{Z}_+^{k_1}}a^{(1)}(
\mathbf {i})\varepsilon_{n-i_1}\cdots\varepsilon_{n-i_{k_1}},\qquad
Y_2(n)=\sum_{\mathbf{i}\in\mathbb{Z}_+^{k_2}}a^{(2)}(
\mathbf{i})\varepsilon _{n-i_1}\cdots\varepsilon_{n-i_{k_2}}.
\end{equation}
We assume that $a^{(j)}=g^{(j)}L^{(j)}$ in (\ref{eqa=gL}) is
symmetric, and $g^{(j)}$ is a symmetric GHK(B) with homogeneity
exponent $\alpha_j\in(-k_j/2-1/2,-k_j/2)$, $j=1,2$.
In this case, we can write
\[
X(n)=\sum_{\mathbf{i}\in\mathbb{Z}_+^k}a(\mathbf{i})\varepsilon
_{n-i_1}\cdots\varepsilon_{n-i_k},
\]
where $k=k_1+k_2$, and
\begin{equation}
\label{eqa=product} a=a^{(1)}\otimes a^{(2)}.
\end{equation}
Let $\mathcal{C}_{1}^2$ be the collection of partitions of the set
$\{1,\ldots,k_1\}$ such that each set in the partition contains at
least $2$ elements, and similarly let $\mathcal{C}_{2}^2$ be the
corresponding collection for $\{k_1+1,\ldots,k_1+k_2\}$. Any partition $\pi\in\mathcal{C}_j^2$ can be expressed as $\pi=(P_1,\ldots,P_m)$, where $P_i$,
{C}_j^2$ can be expressed as $\pi=(P_1,\ldots,P_m)$, where $P_i$,
$i=1,\ldots,m$, are subsets ordered according to their smallest
elements. For example, if $\pi=\{\{1,4\},\{2,3\}\}$, then $P_1=\{1,4\}$
and $P_2=\{2,3\}$.
Let
\begin{equation}
\label{eqcsa}
c_j=\sum_{\pi\in\mathcal{C}_j^2} \sum
'_{\mathbf{i}>\mathbf
{0}}a^{(j)}_{\pi}(
\mathbf{i}) \mu_{\pi},\qquad j=1,2,
\end{equation}
where
\[
\mu_\pi=\mu_{p_1}\cdots\mu_{p_m}\qquad\mbox{with }
\mu_p=\mathbb {E}\varepsilon_i^p
\]
and $p_i=|P_i|\ge2$ if $\pi=(P_1,\ldots,P_m)$, and where
$a^{(j)}_{\pi
}(\cdot)$ denotes $a^{(j)}$ with its variables identified according to
the partition $\pi$ (see (\ref{eqapi}) below).
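To make (\ref{eqcsa}) concrete, here is the smallest nonlinear case (an illustration, not part of the statement): for $k_1=2$, the only partition of $\{1,2\}$ whose sets contain at least $2$ elements is $\pi=\{\{1,2\}\}$, which identifies the two variables of $a^{(1)}$, so
\[
c_1=\mu_2\sum_{i=1}^\infty a^{(1)}(i,i)\qquad\mbox{with } \mu_2=\mathbb{E}\varepsilon_i^2.
\]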
The limit theorem for the normalized partial sum of the centered $X(n)$
in (\ref{eqX=product}) includes several cases. We shall use the
centered multiple Wiener--Stratonovich integral $ \aaa^c_k(\cdot
)$ introduced in (\ref{eqwienerstratonintegral}). The theorem states
that except for some low-dimensional cases (cases 1--4), the limit is up
to some constant the same as the limit for a single factor, namely the
one with the highest $H_j$ (cases 5--7).
\begin{Thm}\label{Thmgeneralprodvolt}
Let $X(n)$ be the product process in (\ref{eqX=product}), where
$a^{(j)}$ has associated Hurst index $H_j=\alpha_j+k_j/2+1\in(1/2,1)$
(Definition~\ref{DefHurstExpa^j}). Assume $\mathbb{E}|\varepsilon
_i|^{2k_1+2k_2+\delta}<\infty$ for some $\delta>0$. Then using the
language of Definition~\ref{Defcltnclt},
\begin{enumerate}[7.]
\item[1.] if $k_1=1$, $k_2=1$, and
$H_1+H_2<3/2$, then $X(n)$ satisfies a CLT (\ref{eqclt});
\item[2.] if $k_1=1$, $k_2=1$, and
$H_1+H_2>3/2$, then $X(n)$ satisfies a NCLT (\ref{eqnclt}) with Hurst
index $H_1+H_2-1$ and limit
\[
Z(t)=\int'_{\mathbb{R}^2}\!\int_0^t
g_1(s-x_1) g_2(s-x_2)
1_{\{
s\mathbf
{1}>\mathbf{x}\}} \,\mathrm{d}s B(\mathrm{d}x_1) B(\mathrm{d}x_2)
\]
(nonsymmetric Rosenblatt process);
\item[3.] if $k_1\ge2$, $k_2=1$, and if $c_1$
in (\ref{eqcsa}) is nonzero, then $X(n)$ satisfies a NCLT (\ref
{eqnclt}) with Hurst index $H_2$ and limit
\[
Z(t)=c_1\int_{\mathbb{R}}\!\int_0^t
g_2(s-x)1_{\{s>x\}} \,\mathrm{d}s B(\mathrm{d}x)
\]
(fractional Brownian motion);
\item[4.] if $k_1= 1$, $k_2\ge2$, and if $c_2$
in (\ref{eqcsa}) is nonzero, then $X(n)$ satisfies a NCLT (\ref
{eqnclt}) with Hurst index $H_1$ and limit
\[
Z(t)=c_2\int_{\mathbb{R}}\!\int_0^t
g_1(s-x)1_{\{s>x\}} \,\mathrm{d}s B(\mathrm{d}x)
\]
(fractional Brownian motion);
\item[5.] if $k_1\ge2$, $k_2\ge2$,
$H_1>H_2$, and if $c_2$ in (\ref{eqcsa}) is nonzero, then $X(n)$
satisfies a NCLT (\ref{eqnclt}) with Hurst index $H_1$, and the\vspace*{-2pt} limit
\[
Z(t)=c_2 \aaa^c_{k_1}(h_{t,1}),
\]
where $h_{t,1}(\mathbf{x})=\int_0^t g_1(s\mathbf{1}-\mathbf{x})
1_{\{
s\mathbf{1}>\mathbf{x}\}} \,\mathrm{d}s$;
\item[6.] if $k_1\ge2$, $k_2\ge2$,
$H_1<H_2$, and if $c_1$ in (\ref{eqcsa}) is nonzero, then $X(n)$
satisfies a NCLT (\ref{eqnclt}) with Hurst index $H_2$, and the limit
\[
Z(t)=c_1 \aaa^c_{k_2}(h_{t,2}),
\]
where $h_{t,2}(\mathbf{x})=\int_0^t g_2(s\mathbf{1}-\mathbf{x})
1_{\{
s\mathbf{1}>\mathbf{x}\}} \,\mathrm{d}s$;
\item[7.] if $k_1\ge2$, $k_2\ge2$,
$H_1=H_2$, and if at least one of the $c_j$'s in (\ref{eqcsa}) is
nonzero, then $X(n)$ satisfies a NCLT (\ref{eqnclt}) with Hurst index
$H_1=H_2$, and the limit
\[
Z(t)=c_1 \aaa^c_{k_2}(h_{t,2})+c_2
\aaa^c_{k_1}(h_{t,1}).
\]
\end{enumerate}
\end{Thm}
\begin{Rem}
The constants $c_j$ in the theorem are nonzero if, for example,
$a^{(j)}(\mathbf{i})>0$ for every $\mathbf{i}$, $j=1,2$.
\end{Rem}
The theorem, which is proved in Section~\ref{secproofvolt}, may seem
bewildering at first glance, but it has a clear structure. Cases~3 and 4 are symmetric,
and so are cases~5 and~6. Case~1 involves
short-range dependence, while all the other cases involve long-range
dependence. Case~2 involves the nonsymmetric
Rosenblatt process, originally introduced by
Maejima and Tudor
\cite{maejimatudor2012selfsimilar}. Cases~3 and
4 involve fractional Brownian motion since one
of the orders $k_j$ equals $1$. The typical cases are 5 and~6. In these cases,
quite surprisingly, it is not the orders $k_1$ or $k_2$ that matter,
but the process $Y_1(n)$ or $Y_2(n)$ in (\ref{eqX=product}) with the
highest value of $H$. In the boundary case~7, where $H_1=H_2$, they both contribute.\vspace*{-3pt}
\subsection{Limit theorem for the mixed case}
Now we consider the mixed case (\ref{eqprodofmixed}), where
$Y_1'(n)$ is as in (\ref{eqoff-diagonalY}) and $Y_2(n)$ is as in
(\ref
{eqwith-diagonalY}).
Let
\begin{equation}
\label{eqX=productmixed} X(n)=Y_1'(n)Y_2(n).
\end{equation}
We only state the case not covered by Theorems~\ref{Thmgeneralprodchaos} and \ref{Thmgeneralprodvolt}, that is, both
$Y_1'(n)$ and $Y_2(n)$ are nonlinear: $k_1\ge2$ and $k_2\ge2$. The
limit, up to some constant, turns out to be the same as the limit for
the single factor $Y_1'(n)$.
\begin{Thm}\label{Thmgeneralprodmixed}
Let $X(n)$ be the product process in (\ref{eqX=productmixed}), where
$a^{(j)}$ has associated Hurst index $H_j=\alpha_j+k_j/2+1\in(1/2,1)$
(Definition~\ref{DefHurstExpa^j}). Assume $k_1\ge2$, $k_2\ge2$
and $\mathbb{E}|\varepsilon_i|^{2+2k_2+\delta}<\infty$ for some
$\delta>0$. Then
using the language of Definition~\ref{Defcltnclt}, if $c_2$ in (\ref
{eqcsa}) is nonzero, then $X(n)$ satisfies a NCLT (\ref{eqnclt})
with Hurst index $H_1=\alpha_1+k_1/2+1$, and the limit is
\[
Z(t)=c_2 I_{k_1}(h_{t,1}),
\]
where\vadjust{\goodbreak} $h_{t,1}(\mathbf{x})=\int_0^t g_1(s\mathbf{1}-\mathbf{x})
1_{\{
s\mathbf{1}>\mathbf{x}\}} \,\mathrm{d}s$.
\end{Thm}
Theorem~\ref{Thmgeneralprodmixed} is proved in Section~\ref{secproofmixed}.
\begin{Rem}
If the noises $\varepsilon_i$'s are Gaussian, then the normalized partial
sum
\[
\frac{1}{A(N)}\sum_{n=1}^{[Nt]}X(n)
\]
considered in Theorems~\ref{Thmgeneralprodchaos}, \ref{Thmgeneralprodvolt} and \ref{Thmgeneralprodmixed} belongs to a Wiener chaos
of finite order. There is a rich literature on obtaining Berry--Esseen
type quantitative limit theorems for elements on Wiener chaos. For the
case where the limit is Gaussian, see the monograph \cite{nourdinpeccati2012normal} and the references therein; for the case
where the limit belongs to higher-order Wiener chaos, see
\cite{davydovmartynova1987limit,breton2006convergence} and
\cite{nourdinpoly2013convergence}. The case where $\varepsilon_i$'s
are non-Gaussian may also be treated using techniques from \cite{nourdinpeccati2010invariance}.
The quantitative results mentioned above, however, seem not directly
applicable to the limit theorems considered here. This is because, as
it will be clear in the proofs of these theorems, $\frac{1}{A(N)}\sum_{n=1}^{[Nt]}X(n)$ does not have a ``clean'' structure as that
considered in the works mentioned above. In particular, the
decomposition of $\frac{1}{A(N)}\sum_{n=1}^{[Nt]}X(n)$ yields many
terms. Some of the quantitative results mentioned above may be
applicable to the terms which contribute to the limit, but there are
other terms in the decomposition which converge in $L^2(\Omega)$ to
zero. How to deal with these degenerate terms is an open problem.
\end{Rem}
\section{Preliminary results}\label{secprelim}
A central idea in establishing the limit theorems is to involve the
\emph{nonsymmetric discrete chaos process} which generalizes the chaos
process in (\ref{eqXnintro1}) by allowing different sequences of
noises. We shall now define it.
Let $\bolds{\varepsilon}_i=(\varepsilon_i^{(1)},\ldots,\varepsilon
_i^{(k)})$, $i\in\mathbb{Z}$, be i.i.d. random vectors whose components
have mean $0$ and finite variance. The components $\varepsilon_i^{(1)},\ldots,\varepsilon_i^{(k)}$ of a given vector are
typically dependent. Introduce the following nonsymmetric discrete
chaos process:
\begin{equation}
\label{eqXnnonsym} Y'(n)=\sum_{0<i_1,\ldots,i_k<\infty}'
a(i_1,\ldots,i_k)\varepsilon ^{(1)}_{n-i_1}
\cdots\varepsilon^{(k)}_{n-i_k},
\end{equation}
where $\sum_{\mathbf{i}\in\mathbb{Z}_+^k}' a(\mathbf{i})^2<\infty
$ so
that $Y'(n)$ is well-defined in the $L^2(\Omega)$-sense. Let
\[
\Sigma(i,j)= \mathbb{E}\varepsilon_n^{(i)}
\varepsilon_n^{(j)}.
\]
The autocovariance of $Y'(n)$ is then given by
\begin{equation}
\label{eqXnnonsymautocov} \hspace*{-6pt}\gamma(n)=\sum_{\sigma}\sum
_{0<i_1,\ldots,i_k<\infty}' a(i_1,\ldots
,i_k) a(i_{\sigma(1)}+n,\ldots,i_{\sigma(k)}+n) \Sigma
\bigl(1,\sigma(1)\bigr)\cdots\Sigma\bigl(k,\sigma(k)\bigr),
\end{equation}
where in the summation $\sigma$ runs over all the $k!$ permutations of
$\{1,\ldots,k\}$.
The following lemma is useful for studying the asymptotic properties of
the covariance of $Y'(n)$.
\begin{Lem}\label{Lembound}
Suppose that in (\ref{eqXnnonsym}), there exist constants $c_0>0$
and $\gamma_j<-1/2$, $j=1,\ldots,k$, such that
\begin{equation}
\label{eqabound} \bigl\vert a(i_1,\ldots,i_k)\bigr\vert
\le c_0 i_1^{\gamma_1}\cdots i_k^{\gamma_k}.
\end{equation}
Let
\begin{equation}
\label{eqH^*} H^*=\alpha+k/2+1 \qquad\mbox{with }\alpha=\sum
_{j=1}^{k}\gamma_j.
\end{equation}
\begin{itemize}
\item If $H^*<1/2$, then $\sum_{n=-\infty}^\infty|\gamma(n)|<\infty$,
and $\operatorname{Var}[\sum_{n=1}^N Y'(n)]\le c_1 N$ for some $c_1>0$;
\item If $H^*>1/2$, then $|\gamma(n)|\le c_2 n^{2H^*-2}$ for some
$c_2>0$, and $\operatorname{Var}[\sum_{n=1}^N Y'(n)]\le c_3 N^{2H^*}$
for some $c_3>0$.
\end{itemize}
\end{Lem}
\begin{pf}
The case $H^*<1/2$ was proved in Proposition~5.4 in \cite{baitaqqu2014convergence}.
In the case $H^*>1/2$, let $\widetilde{|a|}$ be the symmetrization of
$|a|(\mathbf{i}):=|a(\mathbf{i})|$, then for $n\ge0$, by (\ref
{eqXnnonsymautocov}) and (\ref{eqabound}),
\begin{eqnarray*}
\bigl\vert \gamma(n)\bigr\vert &\le& C_0 \sum
_{\mathbf{i}\in\mathbb{Z}_+^k} \widetilde{|a|}(\mathbf{i}+n\mathbf{1})
\widetilde{|a|}(\mathbf{i})
\\
&\le& C_1 \sum_{\sigma} \sum
_{i_1=1}^\infty\cdots\sum_{i_k=1}^\infty
(i_1+n)^{\gamma_1}\cdots(i_k+n)^{\gamma_k}
i_1^{\gamma_{\sigma(1)}} \cdots i_k^{\gamma_{\sigma(k)}}
\\
&\le& C_2 \sum_{\sigma} n^{\gamma_1+\gamma_{\sigma(1)}+1}
\cdots n^{\gamma
_k+\gamma_{\sigma(k)}+1}=C_3 n^{2\alpha+k}=C_3n^{2H^*-2},
\end{eqnarray*}
where\vspace*{1pt} the $C_i$'s are positive constants, and $\sigma$ in the summation
runs over all the permutations of $\{1,\ldots,k\}$. $\operatorname
{Var}[\sum_{n=1}^N
Y'(n)]\le c_3 N^{2H^*}$ then follows as a standard result.
\end{pf}
\begin{Rem}
In the applications of Lemma~\ref{Lembound}, the inequality (\ref{eqabound}) often does not appear in this form. For example, the function
$a(\cdot)$ defined on $\mathbb{Z}_+^k$ may satisfy
\[
\bigl\vert a(\mathbf{i})\bigr\vert \le C (i_1+
\cdots+i_{k_1})^{\alpha
_1}(i_{k_1+1}+\cdots
+i_{k_1+k_2})^{\alpha_2},
\]
for some $C>0$,
where $k_1+k_2=k$ and $\frac{\alpha_j}{k_j}<-\frac{1}{2}$. It is then
easily verified by the arithmetic--geometric mean inequality
\[
k^{-1}\sum_{j=1}^k
y_j \ge \Biggl(\prod_{j=1}^k
y_j \Biggr)^{1/k}
\]
for $y_j>0$, that (\ref{eqabound}) is satisfied since $\alpha<0$. It
is also verified for a function $a_\pi(\cdot)$ which is $a(\cdot)$ with
some of its variables identified.
In general when applying Lemma~\ref{Lembound}, we will omit the
verification of (\ref{eqabound}) which usually can be easily done as
indicated above. We will merely count the \emph{total homogeneity
exponents} of the bound, which in the preceding example is $\alpha
=\alpha_1+\alpha_2$.
\end{Rem}
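In the preceding example, the one-line verification goes as follows: since $\alpha_1<0$, the arithmetic--geometric mean inequality gives
\[
(i_1+\cdots+i_{k_1})^{\alpha_1}\le k_1^{\alpha_1}\,
(i_1\cdots i_{k_1})^{\alpha_1/k_1},
\]
and similarly for the second factor, so that (\ref{eqabound}) holds with $\gamma_j=\alpha_1/k_1<-1/2$ for $j\le k_1$ and $\gamma_j=\alpha_2/k_2<-1/2$ for $j>k_1$.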
For convenience, we make the following definition.
\begin{Def}\label{DefHurstexpXn}
Let $X(n)$ be a stationary process with mean $0$ and finite variance.
We say
\begin{itemize}
\item
$X(n)$ has a memory parameter of \emph{at most} (denoted using $\le$)
$H$, if
\[
\operatorname{Var} \Biggl[\sum_{n=1}^N
X(n) \Biggr]\le c N^{2H}
\]
for some $c>0$;
\item
$X(n)$ has a memory parameter (denoted using $=$) $H$, if
\[
\operatorname{Var} \Biggl[\sum_{n=1}^N
X(n) \Biggr]\sim c N^{2H}
\]
as $N\rightarrow\infty$ for some $c>0$.
\end{itemize}
\end{Def}
\begin{Rem}
In view of the definition above, Lemma~\ref{Lembound} states that if
$Y'(n)$ in (\ref{eqXnnonsym}) satisfies (\ref{eqabound}), then
$Y'(n)$ has a memory parameter of at most $1/2$ if $H^*<1/2$ and of at
most $H^*$ if $H^*>1/2$.
\end{Rem}
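As a numerical illustration of the definition above (not part of the paper's argument), one can compute $\operatorname{Var}[\sum_{n=1}^N Y(n)]$ exactly for a truncated linear ($k=1$) process $Y(n)=\sum_{i=1}^{M}i^{\gamma}\varepsilon_{n-i}$ with unit-variance noise and read the memory parameter off the growth in $N$. Since $\operatorname{Var}[\sum_{n=1}^N Y(n)]=\sum_m b_N(m)^2$ with $b_N(m)=\sum_{n=1}^N a(n-m)$, no simulation is needed; the truncation level $M$ and the sample sizes below are arbitrary choices.

```python
import numpy as np

gamma = -0.7            # homogeneity exponent; H* = gamma + 3/2 = 0.8
M = 200_000             # truncation level of the coefficients (assumption)
a = np.arange(1, M + 1, dtype=float) ** gamma
C = np.concatenate([[0.0], np.cumsum(a)])      # C[j] = a(1) + ... + a(j)

def var_partial_sum(N):
    # Var[sum_{n=1}^N Y(n)] = sum_m b_N(m)^2, b_N(m) = sum_{n=1}^N a(n-m),
    # where the inner index i = n - m is restricted to [1, M].
    m = np.arange(1 - M, N)
    lo = np.maximum(1, 1 - m)
    hi = np.minimum(M, N - m)
    b = C[hi] - C[lo - 1]
    return float(np.sum(b ** 2))

V1, V2 = var_partial_sum(256), var_partial_sum(512)
H_est = float(np.log(V2 / V1) / (2 * np.log(2)))   # local slope of N -> N^{2H}
print(round(H_est, 3))                             # should be close to 0.8
```

With $\gamma=-0.7$, Lemma~\ref{Lembound} gives $H^*=\gamma+3/2=0.8$, and the printed estimate should lie close to this value (small deviations come from the truncation at $M$ and finite-size corrections).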
\begin{Pro}[(Proposition~5.4 of \cite{baitaqqu2014convergence})]\label
{ThmcltXn}
Let $Y'(n)$ be given as in (\ref{eqXnnonsym}) with coefficient
$a(\cdot)$ satisfying (\ref{eqabound}) and $H^*< 1/2$ in Lemma~\ref
{Lembound}. Then
\[
N^{-1/2}\sum_{n=1}^{[Nt]}
\bigl[Y'(n)-\mathbb{E}Y'(n)\bigr]\stackrel {f.d.d.} {
\longrightarrow}\sigma B(t),
\]
where
\[
\sigma^2=\sum_{n=-\infty}^\infty
\operatorname{Cov}\bigl[Y'(n),Y'(0)\bigr],
\]
$B(t)$ is a standard Brownian motion, and $\stackrel
{f.d.d.}{\longrightarrow}$ stands for
convergence of finite-dimensional distributions.
If each of $\varepsilon_i^{(1)},\ldots,\varepsilon_{i}^{(k)}$ has a finite
moment of order greater than $2$, then the tightness of
\[
N^{-1/2}\sum_{n=1}^{[Nt]}
\bigl[Y'(n)-\mathbb{E}Y'(n)\bigr]
\]
in $D[0,1]$ holds and thus $\stackrel{f.d.d.}{\longrightarrow}$ can
be replaced by weak
convergence $\Rightarrow$ in $D[0,1]$.
\end{Pro}
\begin{Rem}
The above $\stackrel{f.d.d.}{\longrightarrow}$ or $\Rightarrow$
convergence also holds for a
linear combination of different $Y'(n)$'s defined on a common i.i.d.
noise vector $\bolds{\varepsilon}_i$, while the $Y'(n)$'s can have
different orders and involve different subvectors of $\bolds{\varepsilon
}_i$, provided the coefficient of each $Y'(n)$ satisfies (\ref{eqabound}) with $H^*<1/2$.
\end{Rem}
We now state an important result concerning the weak convergence of a
discrete chaos to a Wiener chaos. Let $h$ be a function defined on
$\mathbb{Z}^k$ such that $\sum'_{\mathbf{i}\in\mathbb{Z}^k}
h(\mathbf
{i})^2<\infty$, where $'$ indicates the exclusion of the diagonals
$i_p=i_q$, $p\neq q$.
Let $Q_k(h)$ be defined as follows:
\begin{eqnarray}
\label{eqQkh} Q_k(h)=Q_k(h,\bolds{\varepsilon})=\sum
'_{(i_1,\ldots,i_k)\in\mathbb{Z}^k} h(i_1,
\ldots,i_k) \varepsilon_{i_1}\cdots\varepsilon_{i_k}=
\sum'_{\mathbf
{i}\in\mathbb{Z}^k} h(\mathbf{i})\prod
_{p=1}^k \varepsilon_{i_p},
\end{eqnarray}
where $\varepsilon_i$'s are i.i.d. noises. Observe that $Q_k(h)$ is
invariant under permutation of the arguments of $h(i_1,\ldots,i_k)$. So
if $\tilde{h}$ is the symmetrization of $h$, then $Q_k(h)=Q_k(\tilde{h})$.
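This invariance is a pathwise identity, so it can be checked numerically on a small off-diagonal sum (a sketch with arbitrary small dimensions, not part of the argument; here $k=3$ and the index range is truncated to $\{0,\ldots,4\}$):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n = 5                                   # truncated index range (assumption)
h = rng.standard_normal((n, n, n))      # a generic nonsymmetric kernel, k = 3
eps = rng.standard_normal(n)            # one realization of the noise

def Q3(h):
    """Off-diagonal chaos sum Q_3(h) = sum' h(i,j,l) eps_i eps_j eps_l."""
    s = 0.0
    for i in range(n):
        for j in range(n):
            for l in range(n):
                if i != j and j != l and i != l:   # exclude diagonals
                    s += h[i, j, l] * eps[i] * eps[j] * eps[l]
    return s

# symmetrization: average of h over all permutations of its arguments
h_sym = sum(np.transpose(h, p) for p in permutations(range(3))) / 6.0
assert abs(Q3(h) - Q3(h_sym)) < 1e-10   # Q_k(h) = Q_k(h tilde) pathwise
print("ok")
```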
Suppose now that we have a sequence of function vectors $\mathbf
{h}_n=(h_{1,n},\ldots,h_{J,n})$ where each $h_{j,n}\in L^2(\mathbb
{Z}^{k_j})$, $j=1,\ldots,J$.
\begin{Pro}[(Proposition~4.1 of \cite{baitaqqu2013generalized})]\label
{ProPoly->Wiener}
Let
\[
\tilde{h}_{j,n}(\mathbf{x})=n^{k_j/2}h_{j,n} \bigl([n
\mathbf {x}]+\mathbf {c}_j \bigr),\qquad j=1,\ldots,J,
\]
where $\mathbf{c}_j\in\mathbb{Z}^{k_j}$. Suppose that there exists
$h_j\in
L^2(\mathbb{R}^{k_j})$, such that
\begin{equation}
\label{eqhtildehL2conv} \Vert \tilde{h}_{j,n}-h_j\Vert
_{L^2(\mathbb{R}^{k_j})}\rightarrow0
\end{equation}
as $n\rightarrow\infty$. Then, as $n\rightarrow\infty$, we have the
following joint convergence in distribution:
\begin{eqnarray*}
&&\mathbf{Q}:= \bigl(Q_{k_1}(h_{1,n}),\ldots,Q_{k_J}(h_{J,n})
\bigr) \stackrel{d} {\rightarrow} \mathbf{I}:= \bigl(I_{k_1}(h_1),
\ldots,I_{k_J}(h_J) \bigr).
\end{eqnarray*}
\end{Pro}
\section{Proofs}\label{secproof}
\subsection{Proof of Theorem \texorpdfstring{\protect\ref{Thmgeneralprodchaos}}{3.5} where
diagonals are excluded}\label{secproofchaos}
We first show that $g^{(1)}\otimes_r g^{(2)}$ in (\ref{eqcontractofg}) is a GHK.
\begin{Lem}\label{LemgcontractgisGHK}
Let $g^{(j)}$ be a symmetric GHK(B) with homogeneity exponent $\alpha
_j$ defined on $\mathbb{R}_+^{k_j}$, $j=1,2$. Suppose in addition that
either $k_1\ge2$ or $k_2\ge2$, and that
\begin{equation}
\label{eqalpha1+alpha2>} \alpha_1+\alpha_2>-(k_1+k_2+1)/2,
\end{equation}
and set
\[
r=
\cases{ 0,\ldots,k_1\wedge k_2, &\quad $
\mbox{if } k_1\neq k_2$,\vspace*{3pt}
\cr
0,
\ldots,k_1-1, & \quad$\mbox{if } k_1= k_2$.}
\]
If the function $g^{(1)}\otimes_r g^{(2)} $
is nonzero, then it is a GHK on $\mathbb{R}_+^{k_1+k_2-2r}$ with
homogeneity exponent $\alpha_1+\alpha_2+r$.
\end{Lem}
\begin{pf}
When $r=0$, $g^{(1)}\otimes g^{(2)}$ is a tensor product of two
GHK(B)s. It is a GHK because condition~1 of Definition~\ref{DefGHK} is
satisfied with homogeneity\vspace*{-2pt} exponent
\begin{equation}
\label{eqalpha1+alpha2+k+1} -(k_1+k_2+1)/2<\alpha_1+
\alpha_2<-(k_1+k_2)/2
\end{equation}
[see (\ref{eqalpha})], and condition~2 of Definition~\ref{DefGHK} is
satisfied because
\begin{eqnarray*}
&&\int_{\mathbb{R}_+^{k_1+k_2}}\bigl\vert g^{(1)}(
\mathbf{x}_1)g^{(2)}(\mathbf {x}_2)g^{(1)}(
\mathbf{1}+\mathbf{x}_1) g^{(2)}(\mathbf{1}+\mathbf
{x}_2)\bigr\vert \,\mathrm{d}\mathbf{x}_1\,\mathrm{d}\mathbf{x}_2
\\[-2pt]
&&\quad=\int_{\mathbb{R}_+^{k_1}} \bigl\vert g^{(1)}(
\mathbf{x})g^{(1)}(\mathbf {1}+\mathbf{x})\bigr\vert \,\mathrm{d}\mathbf{x} \int
_{\mathbb{R}_+^{k_2}} \bigl\vert g^{(2)}(\mathbf
{x})g^{(2)}(\mathbf{1}+\mathbf{x})\bigr\vert \,\mathrm{d}\mathbf{x}<\infty.
\end{eqnarray*}
We shall now focus on the case $r>0$.
Consider\vspace*{1pt} first $k_1\ge2$ and $k_2=1$ (the case $k_1=1$ and $k_2\ge2$
is similar), so that $g^{(2)}(x)=C x^{\alpha_2}$ for some $C\neq0$,
where $\alpha_2\in(-1,-1/2)$. Fix $\mathbf{x}=(x_1,\ldots
,x_{k_1-1})\in\mathbb{R}_+^{k_1-1}$; then
\[
\int_0^\infty\bigl\vert g^{(1)}(y,
\mathbf{x})\bigr\vert y^{\alpha_2} \,\mathrm{d}y\le C \int_0^\infty
(y+x_1+\cdots+x_{k_1-1})^{\alpha_1}y^{\alpha_2} \,\mathrm{d}y <
\infty,
\]
because near $y=0$ (with $\mathbf{x}>\mathbf{0}$ fixed), the integrand
behaves like $y^{\alpha_2}$, where $\alpha_2>-1$, while near
$y=\infty
$, the integrand is like $y^{\alpha_1+\alpha_2}$, where $\alpha_1<-1$
and $\alpha_2<-1/2$. Hence, $g^{(1)}\otimes_1 g^{(2)}$ is well-defined
in this case. It is easy to check that
\[
g^{(1)}\otimes_1 g^{(2)}(\lambda\mathbf{x})=
\lambda^{\alpha
_1+\alpha
_2+1} g^{(1)}\otimes_1 g^{(2)}(
\mathbf{x})
\]
for any $\lambda>0$ by a change of variables and the
homogeneity of $g^{(j)}$. We are left to show that $g:=g^{(1)}\otimes_1
g^{(2)}$ satisfies condition~2 of Definition~\ref{DefGHK}.
This is true because the substitution $y=xu$ shows that the function
$f(x):=\int_0^\infty(x+y)^{\alpha_1} y^{\alpha_2}\,\mathrm{d}y$ equals
$C_0 x^{\alpha_1+\alpha_2+1}$ with $C_0=\int_0^\infty(1+u)^{\alpha
_1}u^{\alpha_2}\,\mathrm{d}u<\infty$. So
\begin{equation}
\label{eqboundoneleft} \bigl\vert g^{(1)}\otimes_1
g^{(2)}(\mathbf{x})\bigr\vert \le C (x_1+\cdots
+x_{k_1-1})^{\alpha_1+\alpha_2+1}=:g^*(\mathbf{x})
\end{equation}
for some $C>0$. Note that $g^*(\cdot)$ is a GHK(B) on $\mathbb
{R}_+^{k_1-1}$ with
\[
-(k_1-1)/2-1/2<\alpha_1+\alpha_2+1<-(k_1-1)/2
\]
because $\alpha_1<-1/2$, $\alpha_2<-k_2/2$ and $\alpha_1+\alpha
_2>-(1+k_2+1)/2$ by assumption (\ref{eqalpha1+alpha2>}). So
$g=g^{(1)}\otimes_1 g^{(2)}$ satisfies condition 2 of Definition~\ref
{DefGHK} because the dominating function $g^*$ does.
Suppose now that $k_1\ge2$ and $k_2\ge2$. Consider first the case $1
\le r\le(k_1\wedge k_2)-1 $. Using the bound $g^{(j)}(\mathbf{x})\le
C\|\mathbf{x}\|^{\alpha_j}$, one has by applying Cauchy--Schwarz and
integrating power functions iteratively that
\begin{eqnarray}
&&\bigl\vert g^{(1)}\otimes_r g^{(2)}(\mathbf{x})
\bigr\vert
\nonumber
\\
&&\quad\le C \int_{\mathbb{R}_+^r}(y_1+\cdots+y_r+x_1+
\cdots +x_{k_1-r})^{\alpha_1}
\nonumber
\\
\label{eqitercauchyschwartz}
&&\hspace*{24pt}\qquad{}\times(y_1+\cdots +y_r+x_{k_1-r+1}+\cdots
+x_{k_1+k_2-2r})^{\alpha_2}\,\mathrm{d}y_1\cdots \,\mathrm{d}y_r
\\
\nonumber
&&\quad= C \int_{\mathbb{R}^{r-1}_+}\,\mathrm{d}y_1\cdots
\,\mathrm{d}y_{r-1} \biggl(\int_0^\infty
(y_1+\cdots+y_r+x_1+\cdots+x_{k_1-r})^{2\alpha_1}\,\mathrm{d}y_r
\biggr)^{1/2}
\nonumber
\\
&&\hspace*{32pt}\qquad{}\times \biggl(\int_0^\infty(y_1+
\cdots+y_r+x_{k_1-r+1}+\cdots +x_{k_1+k_2-2r})^{2\alpha_2}\,\mathrm{d}y_r
\biggr)^{1/2}
\nonumber
\\
&&\quad\le C \int_{\mathbb{R}_+^{r-1}}(y_1+\cdots+y_{r-1}+x_1+
\cdots +x_{k_1-r})^{\alpha_1+1/2}
\nonumber
\\
&&\hspace*{30pt}\qquad{}\times(y_1+\cdots+y_{r-1}+x_{k_1-r+1}+\cdots
+x_{k_1+k_2-2r})^{\alpha
_2+1/2}\,\mathrm{d}\mathbf{y}
\nonumber
\\
&&\qquad \cdots
\nonumber
\\
\nonumber
&&\quad\le C (x_1+\cdots+x_{k_1-r})^{\alpha_1+r/2}(x_{k_1-r+1}+
\cdots +x_{k_1+k_2-2r})^{\alpha_2+r/2}=:g^*(\mathbf{x}).
\end{eqnarray}
The dominating function $g^*$ is a GHK because it is a tensor product
of two GHK(B)s on $\mathbb{R}_+^{k_j-r}$, $j=1,2$, and
\[
-\frac{(k_1-r)+(k_2-r)+1}{2}<(\alpha_1+r/2)+(\alpha_2+r/2)<-
\frac
{(k_1-r)+(k_2-r)}{2},
\]
as in the inequality (\ref{eqalpha1+alpha2+k+1}).
Therefore, the bound $g^*(\mathbf{x})$, and hence
the kernel $g^{(1)}\otimes_r g^{(2)}$ satisfy condition 2 of Definition~\ref{DefGHK}. Moreover, the homogeneity exponent of $g^{(1)}\otimes_r
g^{(2)}$ is $\alpha_1+\alpha_2+r$ in condition~1 of Definition~\ref
{DefGHK}. This can be easily verified as above by change of variables
and using the homogeneity of $g^{(j)}$.
The only case left is $k_1\neq k_2$ with $k_1,k_2\ge2$ and $r=k_1\wedge k_2$.
Suppose $k_1<k_2$. In this case, condition~2 of Definition~\ref
{DefGHK} can be checked by first applying the iterative
Cauchy--Schwarz argument leading to (\ref{eqitercauchyschwartz})
until only one variable of $g^{(1)}$ is unintegrated, and then bounding
the last fold of integration similarly as in (\ref{eqboundoneleft}).
Hence, in this case as well, $g^{(1)}\otimes_r g^{(2)}$ is a GHK.
\end{pf}
The following lemma shows a noncentral convergence involving
$g^{(1)}\otimes_r g^{(2)}$ appearing in (\ref{eqcontractofg}).
\begin{Lem}\label{Lemchaosproductnclt}
Suppose that all the assumptions in Lemma~\ref{LemgcontractgisGHK}
hold. Let $a^{(j)}(\cdot)=g^{(j)}L^{(j)}$, $j=1,2$, be as assumed before. Set
\begin{eqnarray*}
X'_r(n)&:= &\sum_{(\mathbf{u},\mathbf{i})>\mathbf{0}}'
a^{(1)}(u_1,\ldots ,u_r,i_1,
\ldots,i_{k_1-r})
\\
&&\hspace*{4pt}\qquad{}\times a^{(2)}(u_1,\ldots ,u_r,i_{k_1-r+1},
\ldots,i_{k_1+k_2-2r})\varepsilon_{n-i_1}\cdots \varepsilon _{n-i_{k_1+k_2-2r}},
\end{eqnarray*}
where $\varepsilon_i$'s are i.i.d. with mean $0$ and variance $1$. We
then have
\[
\frac{1}{N^{H}}\sum_{n=1}^{[Nt]}
X_r'(n) \stackrel {f.d.d.} {\longrightarrow}Z_r(t):=
I_{k_1+k_2-2r}(h_{t,r})
\]
jointly for all $r=0,1,\ldots,k$, where $k$ is as defined in Theorem~\ref{Thmgeneralprodchaos}, and
where
\[
H=\alpha_1+\alpha_2+(k_1+k_2)/2+1
\in(1/2,1).
\]
\end{Lem}
\begin{pf}
In view of Proposition~\ref{ProPoly->Wiener}, we need only to prove
the convergence for a single $r$ and a single $t>0$, and the joint
convergence for different $r$'s and $t$'s follows.
We assume for simplicity that $a^{(j)}(\cdot)=g^{(j)}(\cdot)$ (setting
$L=1$); including a general $L$ in (\ref{eqa=gL}) is easy.
We focus on the case $r\ge1$, since the case $r=0$ follows from
Theorem~6.5 of \cite{baitaqqu2013generalized}, although the proof
for case $r=0$ may be regarded as contained in the proof below with
$\mathbf{u}$ being an empty vector.
Let $\mathbf{u}=(u_1,\ldots,u_r)$, $\mathbf{i}_1=(i_1,\ldots
,i_{k_1-r})$, $\mathbf{i}_2=(i_{k_1-r+1},\ldots,i_{k_1+k_2-2r})$, and
$\mathbf{i}=(\mathbf{i}_1,\mathbf{i}_2)$.
We define the sum
\[
\sum_{n=1}^{Nt}x_n:=\sum
_{n=1}^{[Nt]}x_n + \bigl(Nt-[Nt]
\bigr)x_{[Nt]+1}=N\int_0^t
x_{1+[Ny]}\,\mathrm{d}y.
\]
Obviously,
\[
\mathbb{E} \Biggl[\frac{1}{N^{H}}\sum_{n=1}^{[Nt]}
X_r'(n)-\frac
{1}{N^{H}}\sum
_{n=1}^{Nt} X_r'(n)
\Biggr]^2\rightarrow0
\]
as $N\rightarrow\infty$. One can thus focus on $\frac{1}{N^{H}}\sum_{n=1}^{Nt} X_r'(n)$ instead.
\begin{eqnarray*}
\frac{1}{N^{H}}\sum_{n=1}^{Nt}
X_r'(n)&=&\sum_{\mathbf{i}\in\mathbb
{Z}^{k_1+k_2-2r} }'
\frac{1}{N^{H}}\sum_{n=1}^{Nt} \sum
_{\mathbf
{u}\in
D(\mathbf{i},n)} g^{(1)}(\mathbf{u},n\mathbf{1}-
\mathbf{i}_1)1_{\{
n\mathbf{1}>\mathbf{i}_1\}}
\\
&&\hspace*{86pt}\qquad{}\times g^{(2)}(\mathbf{u},n\mathbf {1}-\mathbf{i}_2)1_{\{n\mathbf{1}>\mathbf{i}_2\}}
\prod_{j=1}^{k_1+k_2-2r}\varepsilon_{i_j}
\\
&=:&Q_{k_1+k_2-2r}(h_{N,t,r}),
\end{eqnarray*}
using the notation (\ref{eqQkh}), where
\[
h_{N,t,r}(\mathbf{i}):=\frac{1}{N^{H}}\sum
_{n=1}^{Nt} \sum_{\mathbf
{u}\in D(\mathbf{i},n)}
g^{(1)}(\mathbf{u},n\mathbf{1}-\mathbf {i}_1)g^{(2)}(
\mathbf{u},n\mathbf{1}-\mathbf{i}_2)1_{\{n\mathbf
{1}>\mathbf{i}\}}
\]
and
\[
D(\mathbf{i},n)=\bigl\{\mathbf{u}\in\mathbb{Z}_+^r\dvt u_p
\neq u_q \mbox{ if }p\neq q; \mbox{ and } u_p\neq n-
i_q \mbox{ even if } p=q\bigr\}.
\]
Set $\mathbf{x}_1\in\mathbb{R}^{k_1-r}$, $\mathbf{x}_2\in\mathbb
{R}^{k_2-r}$ and $\mathbf{x}=(\mathbf{x}_1,\mathbf{x}_2)$. Define
\[
E(\mathbf{x},n)=\bigl\{\mathbf{u}\in\mathbb{Z}_+^r\dvt u_p
\neq u_q \mbox{ if }p\neq q; \mbox{ and } u_p\neq n-
[Nx_q]-1 \mbox{ even if } p=q\bigr\}.
\]
In view of Proposition~\ref{ProPoly->Wiener} and using the homogeneity
of $g^{(j)}$'s, one writes:
\begin{eqnarray*}
\tilde{h}_{N,t,r}(\mathbf{x})&=&N^{(k_1+k_2-2r)/2}h_{N,t,r}
\bigl([N\mathbf {x}]+\mathbf{1} \bigr)
\\
&=&\frac{1}{N^{\alpha_1+\alpha_2+ r+1}}\sum_{n=1}^{Nt} \sum
_{\mathbf
{u}\in E(\mathbf{x},n)} g^{(1)}\bigl(\mathbf{u},n
\mathbf{1}-[N\mathbf {x}_1]-\mathbf{1}\bigr)
g^{(2)}\bigl(\mathbf{u},n\mathbf{1}-[N\mathbf
{x}_2]-\mathbf{1}\bigr)1_{\{n\mathbf{1}>\mathbf{i}\}}
\\
&=&\sum_{n=1}^{Nt} \frac{1}{N} \sum
_{\mathbf{u}\in E(\mathbf{x},n)} \frac
{1}{N^r} g^{(1)} \biggl(
\frac{\mathbf{u}}{N},\frac{n\mathbf
{1}-[N\mathbf
{x}_1]-\mathbf{1}}{N} \biggr)
g^{(2)} \biggl(\frac{\mathbf
{u}}{N},\frac{n\mathbf{1}-[N\mathbf{x}_2]-\mathbf{1}}{N}
\biggr)1_{\{
n\mathbf{1}>\mathbf{i}\}}
\\
&=&\int_0^t \mathrm{d}s \int_{\mathbb{R}_+^r}\,\mathrm{d}
\mathbf{y} g^{(1)} \biggl(\frac
{[N\mathbf{y}]+\mathbf{1}}{N},\frac{[Ns]\mathbf{1}-[N\mathbf
{x}_1]}{N} \biggr)
\\
&&\hspace*{43pt}{}\times g^{(2)} \biggl(\frac{[N\mathbf
{y}]+\mathbf
{1}}{N},\frac{[Ns]\mathbf{1}-[N\mathbf{x}_2]}{N}
\biggr)1_{\{
[Ns]\mathbf
{1}>[N\mathbf{x}]\}\cap F(N)},
\end{eqnarray*}
where $\mathbf{u}$ corresponds to $[N\mathbf{y}]+\mathbf{1}$, $n$ to
$[Ns]+1$, and
\begin{eqnarray*}
F(N)&=&\bigl\{( \mathbf{x},\mathbf{y},s)\dvt [Ny_p]
\neq[Ny_q], [Nx_p]\neq [Nx_q],
\\
&&\hspace*{4pt} \mbox{ if }p\neq q; \mbox{ and } [Ny_p]\neq[Ns]-[Nx_q] \mbox{ even if }p=q
\bigr\}.
\end{eqnarray*}
In view of Proposition~\ref{ProPoly->Wiener}, the goal is to show that
\begin{equation}
\label{eqgoalcontractL^2} \lim_{N\rightarrow\infty}\Vert \tilde{h}_{N,t,r}-h_{t,r}
\Vert _{L^2(\mathbb
{R}^{k_1+k_2-2r})}=0,
\end{equation}
where $h_{t,r}$ is given in (\ref{eqhtr}).
By the a.e. continuity of $g^{(j)}$'s and the fact that
$1_{F(N)}\rightarrow1$ a.e. as $N\rightarrow\infty$, one has
\begin{eqnarray*}
&&g^{(1)} \biggl(\frac{[N\mathbf{y}]+\mathbf{1}}{N},\frac{[Ns]\mathbf
{1}-[N\mathbf{x}_1]}{N}
\biggr)g^{(2)} \biggl(\frac{[N\mathbf
{y}]+\mathbf
{1}}{N},\frac{[Ns]\mathbf{1}-[N\mathbf{x}_2]}{N}
\biggr)1_{\{
[Ns]\mathbf
{1}>[N\mathbf{x}]\}\cap F(N)}
\\
&&\quad \rightarrow g^{(1)} (\mathbf{y},s\mathbf{1}-\mathbf{x}_1
)g^{(2)} (\mathbf{y},s\mathbf{1}-\mathbf{x}_2
)1_{\{s\mathbf
{1}>\mathbf{x}\}
}\qquad \mbox{for a.e. } (\mathbf{x},\mathbf{y},s).
\end{eqnarray*}
We are left to establish a suitable bound in order to apply the dominated
convergence theorem.
To this end, since $g^{(j)}(\mathbf{x})\le C \|\mathbf{x}\|^{\alpha
_j}=:g^{(j)*}(\mathbf{x})$ on $\mathbb{R}_+^{k_j}$, we have the
following bound:
\begin{eqnarray}
&&\biggl\vert g^{(1)} \biggl(\frac{[N\mathbf{y}]+\mathbf{1}}{N},\frac
{[Ns]\mathbf
{1}-[N\mathbf{x}_1]}{N}
\biggr)g^{(2)} \biggl(\frac{[N\mathbf
{y}]+\mathbf
{1}}{N},\frac{[Ns]\mathbf{1}-[N\mathbf{x}_2]}{N} \biggr)\biggr
\vert 1_{\{
[Ns]\mathbf{1}>[N\mathbf{x}]\}\cap F(N)}
\nonumber
\\
&&\quad\le g^{(1)*} \biggl(\frac{[N\mathbf{y}]+\mathbf{1}}{N},\frac
{[Ns]\mathbf
{1}-[N\mathbf{x}_1]}{N}
\biggr)g^{(2)*} \biggl(\frac{[N\mathbf
{y}]+\mathbf
{1}}{N},\frac{[Ns]\mathbf{1}-[N\mathbf{x}_2]}{N}
\biggr)
\nonumber
\\[-8pt]
\label{eqchaosproofbound}
\\[-8pt]
\nonumber
&&\qquad{}\times 1_{\{
[Ns]\mathbf
{1}>[N\mathbf{x}]\}\cap F(N)}
\\
\nonumber
&&\quad\le C g^{(1)*} (\mathbf{y},s\mathbf{1}-\mathbf{x}_1
)g^{(2)*} (\mathbf{y},s\mathbf{1}-\mathbf{x}_2
)1_{\{
s\mathbf
{1}>\mathbf{x}\}},
\end{eqnarray}
where we have used the following facts:
on the set $\{\mathbf{y}>\mathbf{0},[Ns]\mathbf{1}>[N\mathbf{x}]\}
$, we
have $([N\mathbf{y}]+1)/N>\mathbf{y}$, $([Ns]-[Nx_j])/N \ge\frac
{1}{2}(s-x_j)$ (see relation (40) in the proof of Theorem~6.5 of \cite{baitaqqu2013generalized}) and $g^{(j)*}$ is decreasing in each of its variables,
as well as the fact that
$\{[Ns]\mathbf{1}>[N\mathbf{x}]\}\subset\{s\mathbf{1}>\mathbf{x}\}$.
Note that
\begin{eqnarray}
&&\int_0^t \mathrm{d}s \int
_{\mathbb{R}_+^r}\,\mathrm{d}\mathbf{y} g^{(1)*} (\mathbf {y},s\mathbf{1}-
\mathbf{x}_1 )g^{(2)*} (\mathbf {y},s\mathbf {1}-
\mathbf{x}_2 )1_{\{s\mathbf{1}>\mathbf{x}\}}
\nonumber
\\[-8pt]
\label{eqboundcontract}
\\[-8pt]
\nonumber
&&\quad=\int_0^t g^{(1)*}
\otimes_r g^{(2)*} (s\mathbf{1}-\mathbf{x} ) 1_{\{
s\mathbf{1}>\mathbf{x}\}}
\,\mathrm{d}s.
\end{eqnarray}
Since $g^{(1)*}$ and $g^{(2)*}$ are GHK(B)s, by Lemma~\ref{LemgcontractgisGHK}, $g^{(1)*}\otimes_r g^{(2)*}$ is a GHK. This has two
consequences. First, by Theorem~3.5 and Remark~3.6 of \cite{baitaqqu2013generalized}, the integral in $\mathrm{d}s\,\mathrm{d}\mathbf{y}$ on the
left-hand side of (\ref{eqboundcontract}) is finite for a.e.
$\mathbf{x}\in\mathbb{R}^{k_1+k_2-2r}$. One can then apply the
dominated convergence theorem to conclude that
\begin{equation}
\label{eqgoalchaosproof} \tilde{h}_{N,t,r}(\mathbf{x})\rightarrow
h_{t,r}(\mathbf{x})\qquad\mbox{for a.e. }\mathbf{x}\in\mathbb{R}^{k_1+k_2-2r}.
\end{equation}
But to obtain (\ref{eqgoalcontractL^2}), we need $L^2$ convergence
for the integral in $\mathrm{d}\mathbf{x}$. For this, we use the bound~(\ref
{eqchaosproofbound}):
\[
\bigl\vert \tilde{h}_{N,t,r}(\mathbf{x})\bigr\vert \le
h_{t,r}^*(\mathbf{x}):=C\int_0^t
g^{(1)*}\otimes_r g^{(2)*} (s\mathbf{1}-\mathbf{x} )
1_{\{
s\mathbf{1}>\mathbf{x}\}}\,\mathrm{d}s.
\]
The second consequence of the fact that $g^{(1)*}\otimes_r g^{(2)*}$ is
a GHK stems from Remark~\ref{RemGHKhtL^2}, which entails that
$h_{t,r}^*\in L^2(\mathbb{R}^{k_1+k_2-2r})$, and hence (\ref{eqgoalcontractL^2}) follows from (\ref{eqgoalchaosproof}) and the
dominated convergence theorem. This concludes the proof of Lemma~\ref
{Lemchaosproductnclt}.
\end{pf}
We now decompose
the product $X(n)$ in (\ref{eqchaosproductproc}) in off-diagonal
forms (\ref{eqXnnonsym}) as follows: let $\mathbf
{u}=(u_1,\ldots
,u_r)\in\mathbb{Z}_+^{r}$, $\mathbf{i}_1=(i_1,\ldots,i_{k_1-r})$ and
$\mathbf{i}_2=(i_{k_1-r+1},\ldots,i_{k_1+k_2-2r})$, and $\mathbf
{i}=(\mathbf{i}_1,\mathbf{i}_2)\in\mathbb{Z}_+^{k_1+k_2-2r}$, then
\begin{eqnarray*}
X(n)&=& Y'_1(n)Y'_2(n)\\
&=& \sum
_{r=0}^{k_1\wedge k_2}\! r! \pmatrix{k_1\cr r}
\pmatrix{k_2\cr r}\\
&&{}\times\sum_{(\mathbf{u},\mathbf{i})\in\mathbb
{Z}_+^{k_1+k_2-r}}'\! a^{(1)}(
\mathbf{u},\mathbf{i}_1)a^{(2)}(\mathbf {u},
\mathbf{i}_2) \varepsilon_{n-u_1}^2\cdots
\varepsilon_{n-u_r}^2 \varepsilon _{n-i_1}\cdots
\varepsilon_{n-i_{k_1+k_2-2r}}\!,
\end{eqnarray*}
where we have used the symmetry of $a^{(j)}$'s, while the combinatorial
coefficient
\[
c(r,k_1,k_2):=r! \pmatrix{k_1\cr r}
\pmatrix{k_2\cr r}
\]
is obtained as the number of ways to pair $r$ variables of $a^{(1)}$ to
$r$ variables of $a^{(2)}$.
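This counting claim can be checked by brute force (a quick sketch, not part of the proof; the small values of $k_1,k_2$ below are arbitrary):

```python
from itertools import combinations, permutations
from math import comb, factorial

def count_pairings(r, k1, k2):
    # number of ways to choose r variables of a^{(1)}, r variables of
    # a^{(2)}, and a bijection between the two chosen sets
    n = 0
    for s1 in combinations(range(k1), r):
        for s2 in combinations(range(k2), r):
            for _ in permutations(s2):
                n += 1
    return n

for k1, k2 in [(2, 2), (3, 2), (4, 3)]:
    for r in range(min(k1, k2) + 1):
        assert count_pairings(r, k1, k2) == factorial(r) * comb(k1, r) * comb(k2, r)
print("ok")
```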
We write
\[
\varepsilon_{n-i}^2=1+\bigl(\varepsilon_{n-i}^2-1
\bigr)=: A_0(\varepsilon _{n-i})+A_2(
\varepsilon_{n-i}),
\]
where $A_0(\varepsilon)=1$ and $A_2(\varepsilon)=\varepsilon^2-1$. These are
Appell polynomials which will be introduced in more details in Section~\ref{secproofvolt}. Set $J_r=\{0,2\}\times\cdots\times\{0,2\}$. Then
\begin{eqnarray*}
Y'_1(n)Y'_2(n)&= &\sum
_{r=0}^{k_1\wedge k_2} c(r,k_1,k_2)
\sum_{(\mathbf
{u},\mathbf{i})\in\mathbb{Z}_+^{k_1+k_2-r}}'\sum
_{\mathbf{j}\in J_r
}a^{(1)}(\mathbf{u},\mathbf{i}_1)a^{(2)}(
\mathbf{u},\mathbf{i}_2)
\\
&&{}\times A_{j_1}(\varepsilon_{n-u_1})\cdots A_{j_r}(
\varepsilon_{n-u_r}) \varepsilon _{n-i_1}\cdots\varepsilon_{n-i_{k_1+k_2-2r}}\!.
\end{eqnarray*}
The random variables in each summand are independent because the sum
does not include diagonals. Observe that it is only when $k_1=k_2$
that the mean
\[
\mathbb{E}Y'_1(n)Y'_2(n)=k_1!
\sum_{\mathbf{u}\in\mathbb{Z}_+^{k_1}}' a^{(1)}(
\mathbf{u})a^{(2)}(\mathbf{u})
\]
may possibly be nonzero (this is the case when $r=k_1=k_2$). Hence, one
can use the $k$ defined in Theorem~\ref{Thmgeneralprodchaos} to
write that
\begin{eqnarray}
X(n)-\mathbb{E}X(n) &=&\sum_{r=0}^{k}\sum
_{\mathbf{j}\in J_r } \sum_{(\mathbf
{u},\mathbf{i})\in\mathbb{Z}_+^{k_1+k_2-r}}'
c(r,k_1,k_2) a^{(1)}(\mathbf{u},
\mathbf{i}_1)a^{(2)}(\mathbf{u},\mathbf {i}_2)
\nonumber
\\[-8pt]
\label{eqoffdiagdecompchaos}
\\[-8pt]
\nonumber
&&\hspace*{80pt}{}\times A_{j_1}(\varepsilon_{n-u_1})\cdots A_{j_r}(
\varepsilon_{n-u_r}) \varepsilon_{n-i_1}\cdots\varepsilon_{n-i_{k_1+k_2-2r}}.\quad
\end{eqnarray}
A basic term of the preceding decomposition of $X(n)-\mathbb{E}X(n)$ is
\begin{eqnarray*}
X_{\mathbf{j}}^r(n)&:=&\sum_{(\mathbf{u},\mathbf{i})\in\mathbb
{Z}_+^{k_1+k_2-r}}'c(r,k_1,k_2)
a^{(1)}(\mathbf{u},\mathbf {i}_1)a^{(2)}(
\mathbf{u},\mathbf{i}_2)
\\
&&\hspace*{38pt}\quad{}\times A_{j_1}(\varepsilon _{n-u_1})\cdots A_{j_r}(
\varepsilon_{n-u_r}) \varepsilon_{n-i_1}\cdots \varepsilon
_{n-i_{k_1+k_2-2r}}.
\end{eqnarray*}
Note\vspace*{1pt} that $0\le r\le k_1\wedge k_2$ if $k_1\neq k_2$, and $0\le r\le
k_1-1$ if $k_1=k_2$, which implies $k_1+k_2-2r\ge1$ so that there is
at least one $i$ variable. Due to the symmetry of $a^{(j)}$'s, we can
suppose without loss of generality that $j_1=\cdots= j_s=0$ and
$j_{s+1}=\cdots=j_r=2$, $0\le s\le r$. One can hence rewrite the basic
term as
\begin{eqnarray*}
X_{\mathbf{j}}^r(n)&=&\sum_{(\mathbf{u},\mathbf{i})\in\mathbb
{Z}_+^{k_1+k_2-r}}'c(r,k_1,k_2)
a^{(1)}(\mathbf{u},\mathbf {i}_1,\mathbf
{i}_2)a^{(2)}(\mathbf{u},\mathbf{i}_1,
\mathbf{i}_3)
\\
&&\hspace*{49pt}{}\times A_{2}(\varepsilon_{n-i_1})\cdots A_{2}(
\varepsilon_{n-i_{r-s}}) \varepsilon_{n-i_{r-s+1}}\cdots\varepsilon_{n-i_{k_1+k_2-r-s}},
\end{eqnarray*}
where
\begin{eqnarray*}
\mathbf{u} &=& (u_1,\ldots,u_s),\qquad \mathbf{i}_1=(i_1,
\ldots,i_{r-s}),
\\
\mathbf {i}_2 &=& (i_{r-s+1},\ldots ,i_{k_1-s}),\qquad
\mathbf{i}_3=(i_{k_1-s+1},\ldots,i_{k_1+k_2-r-s})
\end{eqnarray*}
and $\mathbf{i}=(\mathbf{i}_1,\mathbf{i}_2,\mathbf{i}_3)$. Setting
\begin{equation}
\label{eqchaosa} a'(\mathbf{i})=\sum_{\mathbf{u}\in K(\mathbf{i})}c(r,k_1,k_2)
a^{(1)}(\mathbf{u},\mathbf{i}_1,\mathbf{i}_2)a^{(2)}(
\mathbf {u},\mathbf {i}_1,\mathbf{i}_3),
\end{equation}
with
\[
K(\mathbf{i})=\{\mathbf{u}>\mathbf{0} \dvt u_p\neq u_q
\mbox{ if } p\neq q; \mbox{ and } u_p\neq i_q \mbox{
even if } p=q\},
\]
we get
\begin{equation}
\label{eqbasictermchaosproduct} X_{\mathbf{j}}^r(n)=\sum
_{\mathbf{i}>\mathbf{0}}'a'(\mathbf{i})
A_{2}(\varepsilon_{n-i_1})\cdots A_{2}(
\varepsilon_{n-i_{r-s}}) \varepsilon _{n-i_{r-s+1}}\cdots\varepsilon_{n-i_{k_1+k_2-r-s}}.
\end{equation}
We list here some useful elementary inequalities which will be used
many times in the sequel:
\begin{Lem} Let $A>0$, $B> 0$. If $\gamma<-1$, then
\begin{equation}
\label{eqboundA+i} \sum_{i=1}^\infty(A+i)^{\gamma}
\le CA^{\gamma+1}.
\end{equation}
If $\gamma<0$, $\beta<-1$, then
\begin{equation}
\label{eqboundA+iismall} \sum_{i=1}^\infty(A+i)^{\gamma}i^{\beta}
\le CA^{\gamma}.
\end{equation}
If $\gamma<-1/2$, $-1<\beta<-1/2$, then
\begin{equation}
\label{eqboundA+iilarge} \sum_{i=1}^\infty(A+i)^{\gamma}i^\beta
\le CA^{\gamma+\beta+1}.
\end{equation}
If $\gamma<-1/2$, $\beta<-1/2$, then
\begin{equation}
\label{eqboundA+iB+ilarge} \sum_{i=1}^\infty(A+i)^{\gamma}(B+i)^{\beta}
\le C A^{\gamma
+1/2}B^{\beta+1/2}.
\end{equation}
\end{Lem}
\begin{pf}
To obtain inequality (\ref{eqboundA+i}), we have
\begin{eqnarray*}
\sum_{i=1}^\infty(A+i)^\gamma &=& \sum
_{i=1}^\infty\int_{i-1}^i
(A+i)^\gamma \,\mathrm{d}x \le\sum_{i=1}^\infty
\int_{i-1}^i (A+x)^\gamma \,\mathrm{d}x\\
&=& \int
_0^\infty(A+x)^\gamma \,\mathrm{d}x =-(
\gamma+1)^{-1}A^{\gamma+1}.
\end{eqnarray*}
For (\ref{eqboundA+iismall}), note that $(A+i)^\gamma\le
A^\gamma$
and $\sum_{i=1}^\infty i^{\beta}<\infty$.
For inequality (\ref{eqboundA+iilarge}),
we have
\[
\sum_{i=1}^\infty(A+i)^{\gamma}
i^{\beta} =A^{\gamma+\beta+1}\sum_{i=1}^\infty
\int_{i-1}^i (1+i/A)^{\gamma}
(i/A)^\beta \,\mathrm{d}(x/A) \le A^{\gamma+\beta+1} \int_0^\infty(1+y)^\gamma
y^{\beta}\,\mathrm{d}y,
\]
where the integral is finite since $\beta>-1$ and $\gamma+\beta<-1$.
The last one (\ref{eqboundA+iB+ilarge}) is obtained by applying
Cauchy--Schwarz and (\ref{eqboundA+i}) as follows:
\begin{eqnarray*}
\sum_{i=1}^\infty(A+i)^\gamma(B+i)^{\beta}
\le \Biggl[\sum_{i=1}^\infty
(A+i)^{2\gamma} \Biggr]^{1/2} \Biggl[ \sum
_{i=1}^\infty(B+i)^{2\beta
}
\Biggr]^{1/2} \le C A^{\gamma+1/2} B^{\beta+1/2}.
\end{eqnarray*}
\upqed\end{pf}
\begin{Rem}
The inequalities (\ref{eqboundA+i}), (\ref{eqboundA+iilarge})
and (\ref{eqboundA+iB+ilarge}) all raise the total power
exponent by $1$, while inequality (\ref{eqboundA+iismall}) kills
one of the exponents. These observations are useful in the proof below
and also in Section~\ref{secproofvolt}.
\end{Rem}
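These bounds lend themselves to a quick numerical sanity check (an illustrative sketch only; the constants $C=1/|\gamma+1|$ in (\ref{eqboundA+i}) and $C=1$ in (\ref{eqboundA+iB+ilarge}) for $\gamma=\beta=-1$ are read off from the proof above, and the truncation level is arbitrary):

```python
def truncated_sum(f, n_max=200_000):
    # Finite truncation of the infinite series; all terms are positive,
    # so the truncated sum is a lower bound for the full sum.
    return sum(f(i) for i in range(1, n_max + 1))

A, B = 100.0, 400.0

# Inequality (A + i)^gamma with gamma = -2:
# the proof gives the bound A^(gamma+1)/|gamma+1| = 1/A.
s1 = truncated_sum(lambda i: (A + i) ** -2.0)

# Inequality (A + i)^gamma (B + i)^beta with gamma = beta = -1:
# Cauchy--Schwarz gives the bound A^(-1/2) B^(-1/2).
s2 = truncated_sum(lambda i: (A + i) ** -1.0 * (B + i) ** -1.0)
```

Both truncated sums indeed fall below the corresponding bounds, `1 / A` and `(A * B) ** -0.5`.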
We now state the proof of Theorem~\ref{Thmgeneralprodchaos}.
\begin{pf*}{Proof of case~1 of Theorem~\ref
{Thmgeneralprodchaos}}
We want to apply Proposition~\ref{ThmcltXn}. The condition
$\mathbb{E}
|\varepsilon_i|^{4+\delta}<\infty$ guarantees that $\mathbb
{E}|A_2(\varepsilon
)|^{2+\delta'}<\infty$ in (\ref{eqbasictermchaosproduct}) holds for
some $\delta'>0$ and so the tightness in $D[0,1]$ holds.
We only need to show that $H^*<1/2$ in Lemma~\ref{Lembound} for each
of the basic terms $X_{\mathbf{j}}^r(n)$ in (\ref{eqbasictermchaosproduct}).
Suppose without loss of generality that $k_1\le k_2$.
Using the fact that $|a^{(j)}(\mathbf{i})|\le C\|\mathbf{i}\|^{\alpha_j}$
(recall that $\|\cdot\|$ is the $L^1$-norm), one can bound $a'(\mathbf
{i})$ in (\ref{eqchaosa}). One has to distinguish two cases. In the
first case, where $s<k_1$, one gets
\begin{eqnarray*}
\bigl\vert a'(\mathbf{i})\bigr\vert &\le & C \sum
_{\mathbf{u}\in\mathbb{Z}_+^s} \bigl\Vert (\mathbf{u},\mathbf {i}_1,
\mathbf{i}_2)\bigr\Vert ^{\alpha_1} \bigl\Vert (\mathbf{u},
\mathbf{i}_1,\mathbf {i}_3)\bigr\Vert ^{\alpha_2}
\\
&\le& C\sum_{\mathbf{u}\in\mathbb{Z}_+^s} \bigl(u_1+
\cdots+u_s+\Vert \mathbf {i}_1\Vert +\Vert
\mathbf{i}_2\Vert \bigr)^{\alpha_1}\bigl(u_1+
\cdots+u_s+\Vert \mathbf{i}_1\Vert +\Vert \mathbf
{i}_3\Vert \bigr)^{\alpha_2}
\\
&\le & C \bigl(\Vert \mathbf{i}_1\Vert +\Vert
\mathbf{i}_2\Vert \bigr)^{\alpha_1+s/2}\bigl(\Vert \mathbf
{i}_1\Vert +\Vert \mathbf{i}_3\Vert
\bigr)^{\alpha_2+s/2},
\end{eqnarray*}
after applying (\ref{eqboundA+iB+ilarge}) to each of the $s$
components of $\mathbf{u}$ iteratively (note: $\mathbf{i}_1$ may not be
present).
In the second case, where $s=r=k_1$, one gets
\begin{eqnarray*}
\bigl\vert a'(\mathbf{i})\bigr\vert &\le & C \sum
_{\mathbf{u}\in\mathbb{Z}_+^s} \Vert \mathbf{u}\Vert ^{\alpha
_1} \bigl\Vert
(\mathbf{u},\mathbf{i}_3)\bigr\Vert ^{\alpha_2}
\\
&\le & C\sum_{\mathbf{u}\in\mathbb{Z}_+^s} (u_1+
\cdots+u_s)^{\alpha
_1}\bigl(u_1+\cdots+u_s+
\Vert \mathbf{i}_3\Vert \bigr)^{\alpha_2}
\\
&\le & C \Vert \mathbf{i}\Vert ^{\alpha_1+\alpha_2+s},
\end{eqnarray*}
after applying (\ref{eqboundA+iB+ilarge}) $s-1$ times, and then
(\ref{eqboundA+iilarge}) to the last component of $\mathbf{u}$. In
either case, the total power exponent is raised by $s$.
According to (\ref{eqH^*}), this yields
\begin{eqnarray}
H^*&=& \alpha_1+\alpha_2+s+(r-s+k_1-r+k_2-r)/2+1
\nonumber
\\
\label{eqH1+H2+s-r2-1}
&=& H_1+H_2+(s-r)/2-1
\\
&\le & H_1+H_2-1<1/2,
\nonumber
\end{eqnarray}
where the last strict inequality is due to the assumption $H_1+H_2<3/2$
of case~1.
\end{pf*}
\begin{pf*}{Proof of case~2 of Theorem~\ref
{Thmgeneralprodchaos}}
We now suppose that $H_1+H_2>3/2$.
As was shown in case~1 above, the off-diagonal chaos
coefficient $a'(\cdot)$ in (\ref{eqchaosa}) leads to
\[
H^*=H_1+H_2+(s-r)/2-1.
\]
When $s=r$, only factors $A_0(\varepsilon)=1$ appear in (\ref{eqbasictermchaosproduct}), and the chaos process $X_{\mathbf{j}}^r(n)$ is, up to
a constant, the process $X_r'(n)$ in Lemma~\ref{Lemchaosproductnclt}. Note that Lemma~\ref{Lemchaosproductnclt} yields joint
convergence of the $X_r'(n)$ for different $r$'s. We therefore add up all the
terms corresponding to the case $r=s$ in (\ref{eqoffdiagdecompchaos}), which yields
\[
\sum_{r=0}^{k} \sum
_{(\mathbf{u},\mathbf{i})\in\mathbb
{Z}_+^{k_1+k_2-r}}' r! \pmatrix{k_1\cr r}
\pmatrix{k_2\cr r} a^{(1)}(\mathbf {u},\mathbf{i}_1)a^{(2)}(
\mathbf{u},\mathbf{i}_2) \varepsilon _{n-i_1}\cdots
\varepsilon_{n-i_{k_1+k_2-2r}},
\]
one obtains the noncentral limit claimed in the theorem with a Hurst
index $H=H_1+H_2-1>1/2$.
When $s<r$, the corresponding terms are negligible. Indeed,
\[
H^*=H_1+H_2+(s-r)/2-1\le H_1+H_2-1/2-1<
1/2.
\]
So by Lemma~\ref{Lembound}, the term $X_{\mathbf{j}}^r(n)$ has a
memory parameter $H\le1/2$ in the sense of Definition~\ref{DefHurstexpXn}. Hence,
\[
\lim_{N\rightarrow\infty}\mathbb{E} \Biggl[N^{-(H_1+H_2-1)}\sum
_{n=1}^{[Nt]}X_{\mathbf{j}}^r(n)
\Biggr]^2=0.
\]
We have now shown the convergence of finite-dimensional distributions.
Tightness in $D[0,1]$ is automatic since $H>1/2$ (see, e.g.,
Proposition~4.4.2 of \cite{giraitiskoulsurgailis2009large}).
\end{pf*}
\subsection{Proof of Theorem \texorpdfstring{\protect\ref{Thmgeneralprodvolt}}{3.6} where
diagonals are included}\label{secproofvolt}
We first recall from \cite{baitaqqu2014convergence} the
off-diagonal decomposition of a general $k$th order Volterra process
$X(n)$ in (\ref{eqXnintro}). The purpose is to decompose $X(n)$
into off-diagonal chaos terms as in (\ref{eqXnnonsym}).
To this end, it is convenient to use Appell polynomials. Suppose that
$\varepsilon$ is a random variable with finite $K$th moment. The Appell
polynomials with respect to the law of $\varepsilon$ are defined through the
following recursive relations:
\begin{eqnarray*}
\frac{\mathrm{d}}{\mathrm{d}x}A_p(x)= pA_{p-1}(x),\qquad \mathbb{E}A_p(
\varepsilon)=0,\qquad A_0(x)=1,\qquad p=1,\ldots,K.
\end{eqnarray*}
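For instance, under the assumptions $\mu_1=0$ and $\mu_2=1$ in force throughout, a direct computation from this recursion (included here only for orientation) gives the first few Appell polynomials:
\[
A_0(x)=1,\qquad A_1(x)=x,\qquad A_2(x)=x^2-1,\qquad A_3(x)=x^3-3x-\mu_3,
\]
consistent with the factor $A_2(\varepsilon)=\varepsilon^2-1$ used earlier.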
We will use the following identity:
\begin{equation}
\label{eqappelldecomp} x^p = \sum_{j=0}^p
\pmatrix{p\cr j} \mu_{p-j}A_j(x),\qquad p=0,1,2,3,\ldots.
\end{equation}
For more details about Appell polynomials, see for example Chapter~3.3
of \cite{beran2013long}.
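For a standard normal $\varepsilon$, the Appell polynomials are the probabilists' Hermite polynomials, so the identity (\ref{eqappelldecomp}) can be verified numerically (an illustrative sketch, using the Gaussian moments $\mu_0,\ldots,\mu_4=1,0,1,0,3$; the function name is ours):

```python
from numpy.polynomial.hermite_e import hermeval
from math import comb

# Moments mu_j = E eps^j of a standard normal eps.
mu = [1.0, 0.0, 1.0, 0.0, 3.0]

def rhs(p, x):
    # Right-hand side of x^p = sum_j C(p, j) mu_{p-j} A_j(x), where
    # A_j = He_j, the probabilists' Hermite polynomial (Appell for N(0,1));
    # the coefficient vector [0,...,0,1] selects He_j in hermeval.
    return sum(
        comb(p, j) * mu[p - j] * hermeval(x, [0.0] * j + [1.0])
        for j in range(p + 1)
    )

x = 1.7
vals = [rhs(p, x) for p in range(5)]  # should reproduce x**p for p = 0..4
```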
Let $\mathcal{P}_k$ be the collection of all the partitions of $\{
1,\ldots,k\}$. We further express each partition $\pi\in\mathcal{P}_k$
as $\pi=(P_1,\ldots,P_m)$ (so $m=|\pi|$), where the sets $P_t$'s are
\emph{ordered} according to their smallest element.
If we have a variable $\mathbf{i}\in\mathbb{Z}^k_+$, then $\mathbf
{i}_\pi$ denotes a new variable where its components are identified
according to $\pi$. For example, if $k=3$, $\pi=(\{1,2\},\{3\})$ and
$\mathbf{i}=(i_1,i_2,i_3)$, then $\mathbf{i}_\pi=(i_1,i_1,i_2)$. In
this case we write $\pi=(P_1,P_2)$ where $P_1=\{1,2\}$ and $P_2=\{3\}$.
If $a(\cdot)$ is a function on $\mathbb{Z}^k_+$, then
\begin{equation}
\label{eqapi} a_\pi(i_1,\ldots,i_m):=a(
\mathbf{i}_\pi),
\end{equation}
where $m=|\pi|$. In the preceding example, $a_{\pi}(\mathbf
{i})=a(i_1,i_1,i_2)$ with $m=2$. We define a summation operator $S'_T$
as follows:
for any $T\subset\{1,\ldots,|\pi|\}$, $S'_{T}(a_\pi)$ is obtained by
summing $a_\pi$ over its variables indicated by $T$ off-diagonally,
yielding a function with $|\pi|-|T|$ variables.
For instance, if $\pi=(\{1,5\},\{2\},\{3,4\})$, then $\mathbf{i}_\pi
=(i_1,i_2,i_3,i_3,i_1)$ and if $T=\{1,3\}$, then
\[
\bigl(S'_{T} a_\pi\bigr) (i)=\sum
_{0<i_1,i_3<\infty}' a(i_1,i,i_3,i_3,i_1),
\]
provided that it is well-defined. Note that in this off-diagonal sum,
we also require that neither $i_1$ nor $i_3$ equals $i$.
If $T=\varnothing$, $S'_T$ is understood to be the identity operator.
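The operator $S'_T$ is straightforward to implement on a finite truncation. The following sketch (illustrative only, with a hypothetical coefficient function and truncation level of our choosing) reproduces the example $\pi=(\{1,5\},\{2\},\{3,4\})$, $T=\{1,3\}$ above:

```python
def S_T_example(a, i, trunc):
    # (S'_T a_pi)(i) for pi = ({1,5}, {2}, {3,4}) and T = {1,3}:
    # i_pi = (i1, i2, i3, i3, i1), and we sum off-diagonally over the
    # blocks indexed by T, i.e. over i1 != i3 with neither equal to the
    # remaining free variable i.
    total = 0.0
    for i1 in range(1, trunc + 1):
        for i3 in range(1, trunc + 1):
            if i1 != i3 and i1 != i and i3 != i:
                total += a(i1, i, i3, i3, i1)
    return total

# Hypothetical coefficient: indicator that all five indices are <= 3.
a = lambda *idx: 1.0 if max(idx) <= 3 else 0.0
```

With `i = 1` and `trunc = 3`, the admissible pairs are $(i_1,i_3)\in\{(2,3),(3,2)\}$, so the sum equals $2$.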
Now, by collecting various diagonal cases and using (\ref{eqappelldecomp}), $X(n)$ in (\ref{eqXnintro}) can be decomposed as
\begin{eqnarray}
\label{eqoffdiagdecom} X(n)=\sum_{\pi\in\mathcal{P}_k} \sum
_{\mathbf{i}\in\mathbb
{Z}_+^m }' a_\pi(\mathbf{i})
\varepsilon_{n-i_1}^{p_1}\cdots\varepsilon _{n-i_m}^{p_m}=
\sum_{\pi\in\mathcal{P}_k}\sum_{\mathbf{j}\in
J(\pi)}
X_\pi^{\mathbf{j}}(n),
\end{eqnarray}
where
\begin{equation}
\label{eqbasicterm} X_\pi^{\mathbf{j}}(n)=\sum
_{\mathbf{i}\in\mathbb{Z}_+^m}' a_\pi (\mathbf {i})c(
\mathbf{p},\mathbf{j}) A_{j_1}(\varepsilon_{n-i_1})\cdots
A_{j_m}(\varepsilon_{n-i_m}),
\end{equation}
$A_j(\cdot)$ is the $j$th order Appell polynomial with respect to the
law of $\varepsilon_i$, $p_t=|P_t|$, $J(\pi)=\{0,\ldots,p_1\}\times
\cdots
\times\{0,\ldots,p_m\}$, and
\begin{equation}
\label{eqcpj} c(\mathbf{p},\mathbf{j})=\pmatrix{p_1\cr j_1}
\cdots \pmatrix{p_m\cr j_m}\mu _{p_1-j_1}\cdots
\mu_{p_m-j_m},\qquad\mu_j=\mathbb{E}\varepsilon_i^j.
\end{equation}
Note that since $\mu_1=0$ by assumption, a term with $j_t=0$ can be
nonzero only if $p_t\ge2$.
In addition,
the expression for the centered $X(n)-\mathbb{E}X(n)$ is the sum in
(\ref
{eqoffdiagdecom}) with $J(\pi)$ replaced by $J^+(\pi):=J(\pi
)\setminus(0,\ldots,0)$, and
\begin{equation}
\label{eqmeanexpress} \mathbb{E}X(n)=\sum_{\pi\in\mathcal{P}_k}\sum
_{\mathbf{i}\in
\mathbb
{Z}^m_+}'a_{\pi}(\mathbf{i})
\mu_{p_1}\cdots\mu_{p_m}=\sum_{\pi
\in
\mathcal{P}_k^2}
\sum_{\mathbf{i}\in\mathbb{Z}^m_+}'a_{\pi
}(\mathbf
{i})\mu_{p_1}\cdots\mu_{p_m},
\end{equation}
where $\mathcal{P}_k^2$ denotes the collection of partitions of $\{
1,\ldots,k\}$ such that each set in the partition contains at least $2$
elements, namely, $p_t\ge2$ for all $t=1,\ldots,m$.
So from (\ref{eqoffdiagdecom}), (\ref{eqbasicterm}) and the
discussion above (\ref{eqmeanexpress}), the summands in the
off-diagonal decomposition of $X(n)-\mathbb{E}X(n)$ can be written as
\begin{equation}
\label{eqXpi^jbasicterm} X_\pi^{\mathbf{j}}(n)=\sum
_{\mathbf{i}\in\mathbb{Z}_+^{k'}}' c(\mathbf {p},\mathbf{j})
S_T'a_\pi(\mathbf{i}) A_{j_{t_1}}(
\varepsilon _{n-i_{t_1}})\cdots A_{j_{t_{k'}}}(\varepsilon_{n-i_{t_{k'}}}),
\end{equation}
where $T=\{t=1,\ldots,m \dvt j_t=0\}$, and $\{t_1,\ldots,t_{k'}\}= \{
1,\ldots,m\} \setminus T$ (thus $j_{t_1}\ge1,\ldots,j_{t_{k'}}\ge1$).
Note that $T\neq\{1,\ldots,m\}$ since $\mathbf{j}\in J^+(\pi)$. In
fact, $X_\pi^{\mathbf{j}}(n)$ is of the form (\ref{eqXnnonsym})
with $k=k'$ and $a(\cdot)=c(\mathbf{p},\mathbf{j}) S_T'a_\pi(\cdot)$.
We now state the proof of Theorem~\ref{Thmgeneralprodvolt} case by case.
Recall that $C>0$ denotes a constant whose value can change from line
to line.
\begin{pf*}{Proof of case~1}
In this case, $g^{(1)}(i)=C_1i^{\alpha_1}$, and
$g^{(2)}(i)=C_2i^{\alpha
_2}$, where $C_1$ and $C_2$ are two nonzero constants. The off-diagonal
decomposition (\ref{eqoffdiagdecom}) for the centered $X(n)$ is simply
\begin{equation}
\label{eqcase2simpleoffdiagdecomp} X(n)-\mathbb{E}X(n)= \sum_{0<i_1,i_2<\infty
}'a^{(1)}(i_1)a^{(2)}(i_2)
\varepsilon _{n-i_1}\varepsilon_{n-i_2}+\sum
_{0<i<\infty
}a^{(1)}(i)a^{(2)}(i)A_2(
\varepsilon_{n-i}),
\end{equation}
where $A_2(\varepsilon_{n-i})=\varepsilon_{n-i}^2-1$. Note that
\[
\bigl\vert a^{(1)}(i_1)a^{(2)}(i_2)
\bigr\vert \le C i_1^{\alpha_1}i_2^{\alpha_2},
\]
so the coefficient of the first term in (\ref{eqcase2simpleoffdiagdecomp}) satisfies (\ref{eqabound}) with
\[
H^*=\alpha_1+\alpha_2+(1+1)/2+1=(H_1-3/2)+(H_2-3/2)+2<1/2
\]
by (\ref{eqHrange}), since $H_1+H_2<3/2$. For the second term in
(\ref{eqcase2simpleoffdiagdecomp}), one has
\[
\bigl\vert a^{(1)}(i)a^{(2)}(i)\bigr\vert \le C
i^{\alpha_1+\alpha_2},
\]
which yields
\begin{equation}
\label{eqH*simplecase} H^*=\alpha_1+\alpha_2+1/2+1=(H_1-3/2)+(H_2-3/2)+3/2=H_1+H_2-3/2<1/2,
\end{equation}
since $H_1<1$ and $H_2<1$.
Hence,\vadjust{\goodbreak} Proposition~\ref{ThmcltXn} applies.
\end{pf*}
\begin{pf*}{Proof of case~2}
Now the first term of (\ref{eqcase2simpleoffdiagdecomp}) is
subject to Proposition~\ref{Thmncltchaossingle} with a Hurst index
$H=\alpha_1+\alpha_2+2=H_1+H_2-1>1/2$. One can see that for the second
term of (\ref{eqcase2simpleoffdiagdecomp}), relation (\ref{eqH*simplecase}) still holds. So by Lemma~\ref{Lembound}, the second term
of (\ref{eqcase2simpleoffdiagdecomp}) has a memory parameter
$H\le1/2$ in the sense of Definition~\ref{DefHurstexpXn}, and
hence with the normalization $N^{-H}$, the normalized partial sum of
the second term of (\ref{eqcase2simpleoffdiagdecomp}) converges
to $0$ in $D[0,1]$.
\end{pf*}
\begin{pf*}{Proof of case~3}
Recall from (\ref{eqXpi^jbasicterm}) that the summands in the
off-diagonal decomposition of $X(n)-\mathbb{E}X(n)$ are
\[
X_\pi^{\mathbf{j}}(n)=\sum_{\mathbf{i}\in\mathbb{Z}_+^{k'}} c(
\mathbf {p},\mathbf{j}) S_T'a_\pi(
\mathbf{i}) A_{j_{t_1}}(\varepsilon _{n-i_{t_1}})\cdots A_{j_{t_{k'}}}(
\varepsilon_{n-i_{t_{k'}}}).
\]
Consider first the following partition $\pi=(P_1,\ldots,P_m)$ of $\{
1,\ldots,k_1,k_1+1\}$, which we express as
\[
\pi=\bigl(P_1,\ldots,P_{m_1},\{k_1+1\}\bigr),
\]
with $m_1=m-1$, $\bigcup_{j=1}^{m_1} P_j=\{1,\ldots,k_1\}$, and $P_m=\{
k_1+1\}$. Let $T= \{1,\ldots,m_1\}$. Recall that to have nonzero
$c(\mathbf{p},\mathbf{j})$, one must require $|P_t|\ge2$ if $t\in T$,
and hence $2m_1\le k_1$. Set $\pi_1=\{P_1,\ldots,P_{m_1}\}$ and let
$\mathbf{u}\in\mathbb{Z}_+^{k_1}$. Then applying the off-diagonal
summation $S_T'$, we get
\begin{eqnarray}
\label{eqSTa} \bigl(S_T'a_{\pi}\bigr) (i)=
\sum_{u_p\neq u_q, u_p\neq i} a^{(1)}_{\pi
_1}(\mathbf
{u})a^{(2)}(i)= \biggl(\sum_{u_p\neq u_q}
a^{(1)}_{\pi_1}(\mathbf {u}) \biggr)a^{(2)}(i)-R(i),
\end{eqnarray}
where the difference $R(i)$ includes the terms where some $u_p=i$.
Since $|a^{(1)}(\mathbf{i})|\le C (i_1+\cdots+i_{k_1})^{\alpha_1}$,
we have $|a^{(1)}_{\pi}(\mathbf{u})|\le C (u_1+\cdots
+u_{m_1})^{\alpha_1}$. Suppose without loss of generality that
$u_{m_1}=i$, then by applying (\ref{eqboundA+i}),
\[
\bigl\vert R(i)\bigr\vert \le C \sum_{0<u_1,\ldots,u_{m_1-1}<\infty}
(u_1+\cdots +u_{m_1-1}+i)^{\alpha_1} i^{\alpha_2} \le
C i^{\alpha_2+(\alpha_1+m_1-1)},
\]
where $\alpha_1+m_1-1<0$ because $\alpha_1<-k_1/2\le-m_1 \le-1$. It
follows that $|R(i)|\le Ci^{\alpha_2-\delta}$ for some $\delta>0$.
Since $k_2=1$,
the term $R(i)$ defines the linear process $\sum_{i>0} R(i) \varepsilon
_{n-i}$, but one with a smaller memory parameter, in the sense of
Definition~\ref{DefHurstexpXn}, than the linear process:
\[
\mu_{\pi_1} \Biggl(\sum_{\mathbf{u}>\mathbf{0}}'
a^{(1)}_{\pi
_1}(\mathbf {u}) \Biggr)\sum
_{i=1}^\infty a^{(2)}(i)
\varepsilon_{n-i},
\]
resulting from the first term in the right-hand side of (\ref{eqSTa}) (in this case $c(\mathbf{p},\mathbf{j})=\mu_{\pi_1}:=\mu
_{p_1}\cdots
\mu_{p_{m_1}}$).
Collecting all such $\pi_1\in\mathcal{C}_1^2$, one obtains $c_1\sum_{i=1}^\infty a^{(2)}(i) \varepsilon_{n-i}$ with $c_1$ as given in (\ref
{eqcsa}). Applying Proposition~\ref{Thmncltvoltsingle} with
$k=1$, we get the noncentral limit in case~3,
with a Hurst index
\[
H=\alpha_2+1/2+1=\alpha_2+3/2=H_2.
\]
We now show that in all the other cases, the memory parameter of
$X_{\pi
}^{\mathbf{j}}(n)$ is smaller than $H=\alpha_2+3/2$, which will
conclude the proof.
Observe first that
\begin{equation}
\label{eqaboundspecial} \bigl\vert a(\mathbf{i})\bigr\vert \le C (i_1+
\cdots+i_{k_1})^{\alpha_1} i_{k_1+1}^{\alpha_2}.
\end{equation}
Let $\pi=\{P_1,\ldots,P_m\}$ be a partition of $\{1,\ldots,k_{1}+1\}$,
and $T=\{t_1,\ldots,t_l\}$, $l\le m-1$. To bound $|(S_T'a)(\mathbf{i})|$,
one can assume without loss of generality that either
\begin{longlist}[(a)]
\item[(a)] $P_j\cap\{k_1+1 \}=\varnothing$ for $1\le j\le m-1$, $P_m=\{
k_1+1\}$, $T\subset\{1,\ldots,m-1\}$, $\bigcup_{j=1}^{l} P_{t_j}\neq\{
1,\ldots,k_1\}$, or
\item[(b)] $P_m \cap\{k_1+1\}\neq\varnothing$, and $P_m\cap\{1,\ldots
,k_1\}\neq\varnothing$.
\end{longlist}
Observe that in the previous case we had $\bigcup_{j=1}^{l} P_{t_j}=\{
1,\ldots,k_1\}$ ($l=m_1=m-1$) and $P_m=\{k_1+1\}$.
In case (a), one has by (\ref{eqaboundspecial}) that
\[
\bigl\vert a_\pi(\mathbf{i})\bigr\vert \le C(i_1+
\cdots+i_{m-1})^{\alpha
_1}i_{m}^{\alpha_2}.
\]
Since in case (a), $\bigcup_{j=1}^l P_{t_j}$ is a strict subset of $\{
1,\ldots,k_1\}$, we have $l< m-1$, and thus by applying (\ref{eqboundA+i}) iteratively, one has that
\begin{eqnarray*}
\bigl\vert \bigl(S_T' a_\pi\bigr) (
\mathbf{i})\bigr\vert & \le& \sum_{\mathbf{u}>\mathbf{0}}
C(u_1+\cdots +u_l+i_{1}+
\cdots+i_{m-l-1})^{\alpha_1} i_{m-l}^{\alpha_2}
\\
&\le& C (i_{1}+\cdots+i_{m-l-1})^{\alpha_1+l}i_{m-l}^{\alpha_2},
\end{eqnarray*}
which results in $H^*$ in (\ref{eqH^*}) equal to
\begin{eqnarray*}
H^*&= &(\alpha_1+l+\alpha_2)+ (m-l)/2+1=
\alpha_1+\alpha_2+m/2+l/2+1
\\
&< & -k_1/2+\alpha_2 +(k_1+1)/2+1=
\alpha_2+3/2=H_2
\end{eqnarray*}
since $\alpha_1<-k_1/2$, and $m+l=2l+(m-l)\le k_1+1$ (recall that each
$|P_t|\ge2$ if $t\in T$).
In case (b), one can write without loss of generality that
\[
\bigl\vert a_\pi(\mathbf{i})\bigr\vert \le C(i_1+
\cdots+i_m)^{\alpha_1}i_{1}^{\alpha_2}
\]
since $\pi$ contains $m$ sets.
If for the above $a_\pi$, the summation $S_T'$ includes a sum over the
index $1$, that is, $1\in T$, then using (\ref{eqboundA+i}) and then
(\ref{eqboundA+iilarge}), one has
\begin{eqnarray*}
\bigl\vert \bigl(S_T' a_\pi\bigr) (
\mathbf{i})\bigr\vert &\le & C \sum_{\mathbf{u}>\mathbf{0}}
(u_1+\cdots+u_l+i_1+\cdots+i_{m-l})^{\alpha_1}
u_{1}^{\alpha_2}
\\
&\le & C \sum_{u_1=1}^\infty(u_1+
i_{1}+\cdots+i_{m-l})^{\alpha
_1+l-1}u_{1}^{\alpha_2}
\le C (i_{1}+\cdots+i_{m-l})^{\alpha_1+\alpha_2+l}.
\end{eqnarray*}
Relation (\ref{eqboundA+iilarge}) does apply because on one hand
$\alpha_2>-1$, and on the other hand, we have $\alpha_1+l-1<-1/2$ since
$\alpha_1<-k_1/2$ and $2(l-1)+1<k_1$ because of $|P_t|\ge2$ if $t\in
T$. This leads to $H^*$ in (\ref{eqH^*}) equal to
\begin{eqnarray*}
H^*&= &(\alpha_1+\alpha_2+l)+ (m-l)/2+1=
\alpha_1+\alpha _2+m/2+l/2+1<\alpha_2+3/2=H_2.
\end{eqnarray*}
If the summation $S_T'$ does not include the index $1$, that is, if
$1\notin T$, one has
\begin{eqnarray*}
\bigl\vert \bigl(S_T' a_\pi\bigr) (
\mathbf{i})\bigr\vert &\le & C \sum_{\mathbf{u}> \mathbf{0}}
(i_1+\cdots+i_{m-l}+u_1+\cdots+u_{l})^{\alpha_1}
i_1^{\alpha_2}
\\
&\le & C (i_{1}+\cdots+i_{m-l})^{\alpha_1+l}
i_1^{\alpha_2},
\end{eqnarray*}
by (\ref{eqboundA+i}),
which also yields $H^*<\alpha_2+3/2=H_2$.
\end{pf*}
\begin{pf*}{Proof of case~4}
Same as case~3.
\end{pf*}
\begin{pf*}{Proof of case~5}
We consider first in Part~1 all cases of $S_T'a_\pi$ in (\ref{eqXpi^jbasicterm}) which contribute to the limit, and in Part~2 negligible cases.
\emph{Part 1 of case~5}:
Suppose that $\pi$ can be split into $\pi_1$ and $\pi_2$ which satisfy
the following: the subpartition $\pi_1=\{P_1,\ldots,P_{m_1}\}$ is a
partition of $\{1,\ldots,k_1\}$, such that each $P_j$ satisfies
$|P_j|\le2$, and at least one $|P_j|=1$, $j=1,\ldots,m_1$.
Thus, suppose without loss of generality that $|P_1|=2,\ldots,|P_r|=2$,
$0\le r< m_1$, and $|P_{r+1}|=\cdots=|P_{m_1}|=1$. Require that the
subpartition $\pi_2$ belongs to $\mathcal{C}_2^2$, where $\mathcal
{C}_2^2$ is the collection of partitions of $\{k_1+1,\ldots,k_1+\cdots
+k_2\}$ such that each set in $\pi_2$ contains at least 2 elements.
$\mathcal{C}_2^2$~is nonempty because $k_2\ge2$. Let
\[
T=\{1,\ldots,r,m_1+1,\ldots,m_1+m_2\}.
\]
Setting $\mathbf{i}=(i_1,\ldots,i_{m_1-r})$, $\mathbf{u}=(u_1,\ldots
,u_r)\in\mathbb{Z}_+^r$ and $\mathbf{v}=(v_1,\ldots,v_{m_2})\in
\mathbb
{Z}_+^{m_2}$, one can write
\begin{eqnarray}\label{eqoridiagonalST}
\hspace*{-5pt}\bigl(S'_T a_\pi \bigr) (\mathbf{i})&=&
\mathop{\sum_{u_p\neq u_q, u_p\neq i_q,u_p\neq v_q,}}_{v_p\neq v_q, v_p\neq i_q,\mathbf{u},\mathbf
{v}>\mathbf{0}}
a^{(1)}(u_1,u_1,\ldots,u_r,u_r,i_{1},
\ldots ,i_{m_1-r})a^{(2)}_{\pi_2}(\mathbf{v})
\\
\label{eqoridiagonalSTsecond}
&=&\sum_{u_p\neq u_q ,u_p\neq v_q,v_p\neq v_q,\mathbf
{u},\mathbf{v}>\mathbf{0}} a^{(1)}(u_1,u_1,
\ldots ,u_r,u_r,i_{1},\ldots
,i_{m_1-r})a^{(2)}_{\pi_2}(\mathbf{v})-R_1(
\mathbf{i})\qquad
\\
&=&\sum_{u_p\neq u_q,\mathbf{u}>\mathbf
{0}}a^{(1)}(u_1,u_1,
\ldots,u_r,u_r,i_{1},\ldots,i_{m_1-r})
\nonumber
\\[-8pt]
\label{eqtworesiduals}
\\[-8pt]
\nonumber
&&{}\times\sum_{v_p\neq
v_q,\mathbf{v}>\mathbf{0}}a^{(2)}_{\pi_2}(
\mathbf{v})-R_1(\mathbf {i})-R_2(\mathbf{i})
\end{eqnarray}
for $i_p\neq i_q$.
Relation (\ref{eqtworesiduals}) thus consists of three parts. We
shall now apply Proposition~\ref{Thmncltvoltsingle} to the first
part. Summing over all possible values of $r$, one gets an NCLT with
Hurst index $H=\alpha_1+k_1/2+1$, where the limit is
\[
Z:=c_2\sum_{0\le r<k_1/2} d_{k,r}
Z_{k_1-2r},
\]
where the process $Z_{k_1-2r}$ is defined in (\ref{eqZt=gr}) with
$g_r=g_r^{(1)}$. Taking into account that in this setting, $c(\mathbf
{p},\mathbf{j})$ in (\ref{eqcpj}) and (\ref{eqXpi^jbasicterm}) is
\begin{eqnarray*}
&& \pmatrix{p_1\cr 0}\cdots\pmatrix{p_r\cr 0} \pmatrix{p_{r+1} \cr 1} \cdots \pmatrix{p_{m_1} \cr 1} \pmatrix{p_{m_1+1} \cr 0}
\cdots \pmatrix{p_{m_1+m_2} \cr 0} (\mu _2)^r
\mu_{p_{m_1+1}}\cdots\mu_{p_{m_1+m_2}}
\\
&&\quad
=:\mu_{\pi_2},
\end{eqnarray*}
since $\mu_2=1$, $p_1=\cdots=p_r=2$ and $p_{r+1}=\cdots= p_{m_1}=1$,
one gets the nonzero constant $c_2$ in~(\ref{eqcsa}). As in (\ref
{eqwienerstratonintegral}), we can express the limit $Z(t)$ as a
centered Wiener--Stratonovich integral.
We shall now show that $R_1$ and $R_2$ in (\ref{eqtworesiduals}) lead
only to terms with Hurst indices strictly less than $H=\alpha
_1+k_1/2+1$ in the sense of Definition~\ref{DefHurstexpXn}, so
they are negligible compared to the first term, and hence they do not
contribute to the limit.
$R_1$ in (\ref{eqoridiagonalSTsecond}) is obtained by taking the
difference between the sum in (\ref{eqoridiagonalST}) and the sum
in (\ref{eqoridiagonalSTsecond}). Thus $R_1$ is obtained by
identifying some of the $u$ and $v$ variables in the sum in (\ref
{eqoridiagonalST}) with $i$ variables. Using the fact that
$|a^{(j)}(\mathbf{i})|\le C\|\mathbf{i}\|^{\alpha_j}$, one can see that
one of the terms (a coefficient on $\mathbb{Z}_+^{m_1-r}$) in $R_1$ is
bounded by
\begin{equation}
\label{eqlastproof1} \sum_{u_p\neq u_q, u_p\neq v_q, v_p\neq v_q,\mathbf{u},\mathbf
{v}>\mathbf{0}} C\bigl(\Vert \mathbf{u}
\Vert +\Vert \mathbf{i}\Vert \bigr)^{\alpha_1}\bigl(\Vert \mathbf {v}
\Vert +\bigl\Vert \mathbf{i}'\bigr\Vert \bigr)^{\alpha_2},
\end{equation}
where $\mathbf{u}=(u_1,\ldots,u_{r-s_1})$, $\mathbf{i}=(i_1,\ldots
,i_{m_1-r})$, $\mathbf{v}=(v_1,\ldots,v_{m_2-s_2})$, $\mathbf
{i}'=(i_1,\ldots,i_{t})$, where
\[
0\le s_1 \le r\wedge(m_1-r),\qquad 0\le t\le s_2
\le m_2 \wedge(m_1-r).
\]
If $t=0$, then $s_2=0$, and in addition, either $s_1>0$ or $s_2>0$.
Note that $\mathbf{i}'$ is a subvector of $\mathbf{i}$.
By (\ref{eqboundA+i}), the term (\ref{eqlastproof1}) is bounded by
\begin{eqnarray*}
\sum_{\mathbf{u},\mathbf{v}>\mathbf{0}}C\bigl(\Vert \mathbf{u}\Vert +\Vert
\mathbf {i}\Vert \bigr)^{\alpha_1}\bigl(\Vert \mathbf{v}\Vert +\bigl
\Vert \mathbf{i}'\bigr\Vert \bigr)^{\alpha_2}\le
\cases{ C\Vert \mathbf{i}\Vert ^ {\alpha_1+r-s_1} &\quad$\mbox{if }t=0$;\vspace*{3pt}
\cr
C\Vert \mathbf{i}\Vert ^ {\alpha_1+r-s_1} \bigl\Vert
\mathbf{i}'\bigr\Vert ^ {\alpha_2+m_2-s_2} &\quad$\mbox{if }t>0$.}
\end{eqnarray*}
When $t=s_2=0$, one must have $s_1>0$, and so the term yields
\[
H^*=\alpha_1+r-s_1+(m_1-r)/2+1=
\alpha_1+(r+m_1)/2+1-s_1<\alpha_1+k_1/2+1,
\]
because
\[
r+m_1=2r+(m_1-r)=k_1.
\]
When $s_2\ge t>0$, it yields an
\begin{eqnarray*}
H^*&=&\alpha_1+r-s_1+\alpha_2+m_2-s_2+(m_1-r)/2+1
\\
&=&\alpha_1+(m_1+r)/2+1+\alpha_2+m_2-s_1-s_2
\\
&\le &\alpha_1+k_1/2+1+\alpha_2+k_2/2
-s_1-s_2<\alpha_1+k_1/2+1,
\end{eqnarray*}
since $2m_2\le k_2$ due to $\pi_2 \in\mathcal{C}_2^2$, and where the
last inequality is due to the assumption $\alpha_2<-k_2/2$.
We now examine $R_2$ in (\ref{eqtworesiduals}), which is obtained by
identifying some of the $u$ variables to the $v$ variables in the first
sum in (\ref{eqoridiagonalSTsecond}). One term of $R_2$ can be
bounded by
\[
\sum_{u_p\neq u_q, v_p\neq v_q,\mathbf{u},\mathbf{v}>\mathbf{0}} C \bigl(\Vert \mathbf{u}\Vert +
\Vert \mathbf{v}_1\Vert +\Vert \mathbf{i}\Vert
\bigr)^{\alpha_1} \bigl(\Vert \mathbf {v}_1\Vert +\Vert
\mathbf{v}_2\Vert \bigr)^{\alpha_2},
\]
where $\mathbf{u}=(u_1,\ldots,u_{r-s}), \mathbf{v}_1=(v_1,\ldots,v_s),
\mathbf{v}_2=(v_{s+1},\ldots,v_{m_2})$ and $\mathbf{i}=(i_1,\ldots
,i_{m_1-r})$, where $1\le s\le(r\wedge m_2)$. By using (\ref{eqboundA+i}), and then (\ref{eqboundA+iB+ilarge}) and (\ref{eqboundA+iismall}), this term is bounded by
\begin{eqnarray*}
&&\sum_{\mathbf{u}>\mathbf{0},\mathbf{v}_1>\mathbf{0},\mathbf
{v}_2>\mathbf{0}} C \bigl(\Vert \mathbf{u}\Vert +
\Vert \mathbf{v}_1\Vert +\Vert \mathbf{i}\Vert
\bigr)^{\alpha_1} \bigl(\Vert \mathbf{v}_1\Vert +\Vert
\mathbf{v}_2\Vert \bigr)^{\alpha_2}
\\
&&\quad\le \sum_{\mathbf{v}_1>\mathbf{0}} C \bigl(\Vert
\mathbf{v}_1\Vert +\Vert \mathbf {i}\Vert \bigr)^{\alpha_1+r-s}
\Vert \mathbf{v}_1\Vert ^{\alpha_2+m_2-s} \le C \Vert \mathbf{i}
\Vert ^{\alpha_1+r-s+(s-1)/2},
\end{eqnarray*}
which yields an
\[
H^*=\alpha_1+r-s/2-1/2+(m_1-r)/2+1=\alpha
_1+(m_1+r)/2+1-s/2-1/2<\alpha _1+k_1/2+1.
\]
So neither $R_1$ nor $R_2$ contributes to the limit.
\noindent\emph{Part 2 of case~5}.
Suppose now that $\pi$ and $T$ are \emph{not} as in Part 1. To
determine these cases, note that one can always bound $|(S_T'a_\pi
)(\mathbf{i})|$ by
\begin{equation}
\label{eqproofcasegeneral} C \sum_{\mathbf{u}>\mathbf{0}}\bigl(\Vert
\mathbf{i}_1\Vert +\Vert \mathbf{i}_2\Vert +\Vert
\mathbf{u}_1\Vert +\Vert \mathbf{u}_2\Vert
\bigr)^{\alpha_1}\bigl(\Vert \mathbf{i}_1\Vert +\Vert
\mathbf {i}_3\Vert +\Vert \mathbf{u}_1\Vert +
\Vert \mathbf{u}_3\Vert \bigr)^{\alpha_2},
\end{equation}
where $\mathbf{i}_j\in\mathbb{Z}_+^{s_j}$, $\mathbf{u}_j\in\mathbb
{Z}_+^{t_j}$, $s_j,t_j\ge0$ and where $s_1+s_2+s_3>0$ (at least one
$i$ variable must remain), and
\[
s_1+s_2+t_1+2t_2\le
k_1, \qquad s_1+s_3+t_1+2t_3
\le k_2.
\]
Thus, the variables in $\mathbf{u}_2$ are at least paired within
$a^{(1)}$, and the variables in $\mathbf{u}_3$ are at least paired
within $a^{(2)}$.
We note that in Part 1, we had $s_1=s_3=t_1=0$ and $s_2+2t_2=k_1$.
Thus, to avoid the situation considered in Part 1, we require
\begin{equation}
\label{eqavoidrep}
\mbox{if }s_1=s_3=t_1=0,\qquad
\mbox{then }s_2+2t_2<k_1.
\end{equation}
As we have dealt with $R_1$ and $R_2$ before, by properly applying
(\ref
{eqboundA+i})--(\ref{eqboundA+iB+ilarge}), the bound in (\ref
{eqproofcasegeneral}) yields
\[
H^*<H_1=\alpha_1/2+k_1/2+1.
\]
To check this, we consider the following exhaustive cases:
\begin{longlist}[(a)]
\item[(a)] either $s_1> 0$, or $s_1=0$, $s_2>0$, $s_3>0$;
\item[(b)]$s_1=s_2=0$, $s_3>0$;
\item[(c)]$s_1=s_3=0$, $s_2>0$ but $s_2+2t_2<k_1$.
\end{longlist}
Note that in case (c), if $s_2+2t_2=k_1$ then $t_1=0$, which would
contradict (\ref{eqavoidrep}).
In case (a), for example, if $s_1>0$, by applying (\ref{eqboundA+i})
to the sum over $\mathbf{u}_2$ and $\mathbf{u}_3$, and then (\ref
{eqboundA+iB+ilarge}) on the sum over $\mathbf{u}_1$, we can
bound (\ref{eqproofcasegeneral}) by
\begin{eqnarray*}
C \bigl(\Vert \mathbf{i}_1\Vert +\Vert \mathbf{i}_2
\Vert \bigr)^{\alpha_1+t_1/2+t_2}\bigl(\Vert \mathbf {i}_1\Vert +
\Vert \mathbf{i}_3\Vert \bigr)^{\alpha_2+t_1/2+t_3}.
\end{eqnarray*}
This yields
\begin{eqnarray}
H^*&=& \alpha_1+\alpha_2+t_1+t_2+t_3+(s_1+s_2+s_3)/2+1\nonumber
\\
\label{eqH*typical}
&=& \alpha_1+(s_1+s_2+t_1+2t_2)/2+1
+\alpha_2+(s_3+t_1+2t_3)/2
\\
&\le & \alpha_1+k_1/2+1 +\alpha_2+k_2/2<
\alpha_1+k_1/2+1=H_1.
\nonumber
\end{eqnarray}
In case (b), (\ref{eqproofcasegeneral}) becomes $C \sum_{\mathbf
{u}>\mathbf{0}} (\|\mathbf{u}_1\|+\|\mathbf{u}_2\|)^{\alpha_1}(\|
\mathbf
{i}_3\|+\|\mathbf{u}_1\|+\|\mathbf{u}_3\|)^{\alpha_2}$ which we can
bound by
\begin{eqnarray*}
&&C \sum_{\mathbf{u}_1>\mathbf{0}}\Vert \mathbf{u}_1
\Vert ^{\alpha_1+t_2}\bigl(\Vert \mathbf{i}_3\Vert +\Vert
\mathbf{u}_1\Vert \bigr)^{\alpha_2+t_3}
\\
&&\quad\le
\cases{ \Vert \mathbf{i}_3\Vert
^{\alpha_2+(t_1-1)_+/2+t_3}& \quad$\mbox{ if } \alpha _1+t_1/2+t_2<-1/2$;
\vspace*{3pt}
\cr
\Vert \mathbf{i}_3\Vert ^{\alpha_1+\alpha_2+t_1+t_2+t_3} & \quad$
\mbox{ if } -1/2<\alpha_1+t_1/2+t_2<0$,}
\end{eqnarray*}
where we need to apply first (\ref{eqboundA+i}), then apply (\ref
{eqboundA+iB+ilarge}) if $t_1\ge2$, and finally apply either
(\ref{eqboundA+iismall}) for the first case or (\ref{eqboundA+iilarge}) for the second. Note that $\alpha_1+t_1/2+t_2>-1/2$ only
if $t_1/2+t_2=k_1/2$ since $-k_1/2-1/2<\alpha_1<-k_1/2$ and
$t_1+2t_2\le k_1$. So this yields either an
\begin{eqnarray}
H^*&=&\alpha_2+(t_1-1)_+/2+t_3+s_3/2+1
\nonumber
\\[-8pt]
\label{eqcontributewhenH1=H2}
\\[-8pt]
\nonumber
&=&\alpha_2+ (s_3+t_1+2t_3)/2
+1 +(t_1-1)_+/2-t_1/2\le\alpha_2+k_2/2+1=
H_2<H_1
\end{eqnarray}
or $H^*$ as in (\ref{eqH*typical}).
Similarly, in case (c), (\ref{eqproofcasegeneral}) is $\sum_{\mathbf
{u}>\mathbf{0}}C(\|\mathbf{i}_2\|+\|\mathbf{u}_1\|+\|\mathbf{u}_2\|
)^{\alpha_1}(\|\mathbf{u}_1\|+\|\mathbf{u}_3\|)^{\alpha_2}$, which can
be bounded by
\begin{eqnarray*}
&&C \sum_{\mathbf{u}_1>\mathbf{0}}\bigl(\Vert \mathbf{i}_2
\Vert +\Vert \mathbf{u}_1\Vert \bigr)^{\alpha_1+t_2}\Vert
\mathbf{u}_1\Vert ^{\alpha_2+t_3}
\\
&&\quad\le
\cases{ \Vert \mathbf{i}_2\Vert
^{\alpha_1+(t_1-1)_+/2+t_2}& \quad$\mbox{ if } \alpha _2+t_1/2+t_3<-1/2$;
\vspace*{3pt}
\cr
\Vert \mathbf{i}_2\Vert ^{\alpha_1+\alpha_2+t_1+t_2+t_3} &\quad $
\mbox{ if } {-}1/2<\alpha_1+t_1/2+t_2<0$.}
\end{eqnarray*}
So it yields either an
\begin{eqnarray}
H^*&=&\alpha_1+(t_1-1)_+/2+t_2+s_2/2+1
\nonumber
\\[-8pt]
\label{eqH^*last}
\\[-8pt]
\nonumber
&=&\alpha_1+ (s_2+t_1+2t_2)/2 +1
+(t_1-1)_+/2-t_1/2<\alpha _1+k_1/2+1=H_1,
\end{eqnarray}
or $H^*$ as in (\ref{eqH*typical}).
To get the strict inequality in (\ref{eqH^*last}), we use (\ref
{eqavoidrep}) when $t_1=0$, and use $(t_1-1)_+/2<t_1/2$ when $t_1>0$.
\end{pf*}
\begin{pf*}{Proof of case~6}
Same as case~5.
\end{pf*}
\begin{pf*}{Proof of case~7}
Since $H_1=H_2$, both factors $a^{(1)}$ and $a^{(2)}$ may contribute to
the limit. The proof is similar to case~5, while the other term in the limit arises by exchanging the
roles of $a^{(1)}$ and $a^{(2)}$ in the proof of case~5. Note that because $H_1=H_2$, the equality in ``$\le
$'' in (\ref{eqcontributewhenH1=H2}) is attained whenever $t_1=0$
and $s_3+2t_3=k_2$, a case which would then be included in the NCLT
part of the proof.
\end{pf*}
\subsection{Proof of Theorem \texorpdfstring{\protect\ref{Thmgeneralprodmixed}}{3.8}: the mixed
case}\label{secproofmixed}
The proof is similar to case 5 of Theorem~\ref{Thmgeneralprodvolt}.
We thus only give a sketch.
First, following the same notation as Part 1 of case 5 of Theorem~\ref
{Thmgeneralprodvolt}, we look at the contributing case: the
partition $\pi$ can be split into $\pi_1$ and $\pi_2$. Since
the factor $Y_1'$ in (\ref{eqX=productmixed}) now excludes the diagonals,
the first partition $\pi_1$ is just $\{\{1\},\ldots,\{m_1\}\}$. This
means that the
component $\mathbf{u}$ in (\ref{eqoridiagonalST}) does not appear,
namely, $r=0$. Hence, instead of (\ref{eqtworesiduals}) one gets
\begin{equation}
\label{eqmixedcontribute} a^{(1)}(i_{1},\ldots,i_{m_1})\sum
_{v_p\neq v_q,\mathbf{v}>\mathbf
{0}}a^{(2)}_{\pi_2}(\mathbf{v})-R(
\mathbf{i}),
\end{equation}
where $i_p\neq i_q$ for $p\neq q$ and the residual term $R(\mathbf{i})$
is as $R_1(\mathbf{i})$ in (\ref{eqtworesiduals}) (there is no $R_2$
due to the absence of $\mathbf{u}$).
The first term leads to the noncentral limit $c_2 I_{k_1}(h_{t,1})$
with Hurst index $H_1=\alpha_1+k_1/2+1$ claimed in Theorem~\ref
{Thmgeneralprodmixed} by Proposition~\ref{Thmncltchaossingle}.
Then treating $R(\mathbf{i})$ in the same way as $R_1(\mathbf{i})$ is
treated there, one can show that $R(\mathbf{i})$ leads to terms with
Hurst index strictly less than $H_1=\alpha_1+k_1/2+1$. Since $H_1$ is
used in the normalization, all these terms are negligible.
Next, one follows Part 2 of case 5 of the proof of Theorem~\ref
{Thmgeneralprodvolt} to show that all other cases of $\pi$ yield
terms with Hurst indices strictly less than $H_1=\alpha_1+k_1/2+1$. Due
to the off-diagonality of $Y_1'$, for the bound (\ref{eqproofcasegeneral}), we have the following additional restrictions involving the
dimensions of the vectors in (\ref{eqproofcasegeneral}): $t_2=0$
($\mathbf{u}_2$ does not appear), and thus
\begin{equation}
\label{eqrestradd} s_1+s_2+t_1=k_1.
\end{equation}
The argument in the proof of Theorem~\ref{Thmgeneralprodvolt} for
cases (a) and (c) continues to hold because the quantity $H^*$ remains
strictly less than $H_1$. The only case requiring
modification is case (b), where $s_1=s_2=0, s_3>0$, because the original
inequality (\ref{eqcontributewhenH1=H2}) allows $H^*=H_2$ which
can be greater than $H_1$. But now by (\ref{eqrestradd}) we have the
restriction $t_1=k_1\ge2$. So (\ref{eqcontributewhenH1=H2}) is
now changed to
\[
H^*=\alpha_2+ (s_3+t_1+2t_3)/2
+1 -1/2\le H_2-1/2 < 1/2<H_1.
\]
Since $H^*<H_1$, these terms are also negligible. Then the first term
of (\ref{eqmixedcontribute}) dominates and provides the limit
$c_2I_{k_1}(h_{t,1})$.
\printhistory
\end{document} |
\begin{document}
\title{$(k,k',k'')$-domination in graphs}
\begin{abstract}
\noindent We first introduce the concept of $(k,k',k'')$-domination numbers in graphs, which is
a generalization of many domination parameters. Then we find lower and upper bounds for this parameter,
which improve many well-known results in the literature.
\\
{\bf Keywords:} $(k,k',k'')$-domination number, $k$-domination number, restrained domination numbers. \\
{\bf MSC 2000}: 05C69
\end{abstract}
\section{Introduction and preliminaries}
Throughout this paper, let $G$ be a finite connected graph with vertex set $V=V(G)$, edge set $E=E(G)$, minimum degree $\delta=\delta(G)$ and maximum degree $\Delta=\Delta(G)$. We use \cite{w} as a reference for terminology and notation which are not defined here. For any vertex $v \in V$, $N(v)=\{u\in V\mid uv\in E(G)\}$ denotes the {\em open neighbourhood} of $v$ in $G$, and $N[v]=N(v)\cup \{v\}$ denotes its {\em closed neighbourhood}.
\noindent There are many domination parameters in graph theory. The diversity of domination parameters and the types of proofs involved are very extensive. We believe that some of the results in this field are similar and the main ideas of their proofs are the same. Therefore we introduce and investigate the concept of $(k,k',k'')$-domination number, as a generalization of many domination parameters, by a simple uniform approach.
\noindent Let $k,k'$ and $k''$ be nonnegative integers. A set $S\subseteq V$ is a {\em $(k,k',k'')$-dominating set} in $G$ if every vertex in $S$ has at least $k$ neighbors in $S$ and every vertex in $V\setminus S$ has at least $k'$ neighbors in $S$ and at least $k''$ neighbors in $V\setminus S$. The {\em $(k,k',k'')$-domination number} $\gamma_{(k,k',k'')}(G)$ is the minimum cardinality of a $(k,k',k'')$-dominating set. We note that every graph with the minimum degree at least $k$ has a $(k,k',k'')$-dominating set, since $S=V(G)$ is such a set.
Note that
\begin{itemize}
\item $\gamma_{(0,1,1)}(G)=\gamma_{r}(G)$: {\em Restrained domination number};
\item $\gamma_{(1,1,1)}(G)= \gamma_{t}^r(G)$: {\em Total restrained domination number};
\item $\gamma_{(1,2,1)}(G)=\gamma_{2r}(G)$: {\em Restrained double domination number};
\item $\gamma_{(k,k,k)}(G)=\gamma_{\times k,t}^r(G): k$-{\em Tuple total restrained domination number};
\item $\gamma_{(k,k,0)}(G)=\gamma_{\times k,t}(G): k$-{\em Tuple total domination number};
\item $\gamma_{(k-1,k,0)}(G)=\gamma_{\times k}(G): k$-{\em Tuple domination number};
\item $\gamma_{(0,k,0)}(G)=\gamma_{k}(G): k$-{\em Domination number}.
\end{itemize}
For the definitions of the parameters above and a comprehensive work on domination in graphs see \cite{cfhv,cr,dhhm,hhs,hhs2,kn,k}.
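The definitions above lend themselves to a direct brute-force check on small graphs. The following sketch (function names are ours, chosen for illustration) tests the $(k,k',k'')$-dominating property vertex by vertex and searches for a minimum such set; on the cycle $C_5$ it recovers the ordinary domination number $\gamma_{(0,1,0)}(C_5)=2$ and the restrained domination number $\gamma_{(0,1,1)}(C_5)=3$.

```python
from itertools import combinations

def is_kkk_dominating(adj, S, k, kp, kpp):
    """Check whether S is a (k, k', k'')-dominating set: every vertex of S
    has >= k neighbors in S, and every vertex outside S has >= k' neighbors
    in S and >= k'' neighbors outside S."""
    S = set(S)
    for v in adj:
        in_S = sum(1 for u in adj[v] if u in S)
        if v in S:
            if in_S < k:
                return False
        elif in_S < kp or len(adj[v]) - in_S < kpp:
            return False
    return True

def gamma_kkk(adj, k, kp, kpp):
    """Minimum cardinality of a (k, k', k'')-dominating set, by exhaustive search."""
    V = list(adj)
    for size in range(len(V) + 1):
        for S in combinations(V, size):
            if is_kkk_dominating(adj, S, k, kp, kpp):
                return size
    return None  # no such set exists

# Cycle C5.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(gamma_kkk(c5, 0, 1, 0), gamma_kkk(c5, 0, 1, 1))  # 2 3
```

Exhaustive search is of course only feasible for small graphs; it serves here to sanity-check the parameter identities listed above.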
\section {Lower bounds on $(k,k',k'')$-domination numbers}
In this section, we calculate a lower bound on $\gamma_{(k,k',k'')}(G)$, which improves
the existing lower bounds on these seven parameters.
\noindent The following result can be found in \cite{cr} and \cite{hjjp2}.
\begin{theorem}\label{LB:TR}
If $G$ is a graph without isolated vertices of order $n$ and size $m$, then
\begin{equation}\label{EQ3}
\gamma_{t}^r(G)\geq 3n/2-m,
\end{equation}
in addition this bound is sharp.
\end{theorem}
\noindent Also Hattingh et al.\ \cite{hjlpv} found that
\begin{equation}\label{EQ4}
\gamma_{r}(G)\geq n-2m/3.
\end{equation}
The following known
result is an immediate consequence of Theorem \ref{LB:TR}.
\begin{theorem}\cite{hjjp1}
If $T$ is a tree of order $n\geq2$, then
\begin{equation}\label{EQ5}
\gamma_{t}^r(T)\geq \lceil\frac{n+2}{2}\rceil.
\end{equation}
\end{theorem}
The inequality
\begin{equation}\label{EQ6}
\gamma_{r}(T)\geq \lceil\frac{n+2}{3}\rceil
\end{equation}
on restrained domination number of a tree of order $n\geq1$
was obtained by Domke et al. \cite{dhhm}.
\noindent The author in \cite{k} generalized Theorem \ref{LB:TR} and proved
that if $\delta(G)\geq k$, then
\begin{equation}\label{EQ7}
\gamma_{\times k,t}^r(G)\geq 3n/2-m/k.
\end{equation}
Moreover the authors in \cite{kn} proved that if $G$ is a graph without isolated vertices, then
\begin{equation}\label{EQ8}
\gamma_{2r}(G)\geq \frac{5n-2m}{4}.
\end{equation}
\noindent We now improve the lower bounds given in
$(\ref{EQ3}), (\ref{EQ4}), \ldots,(\ref{EQ8})$.
For this purpose we first introduce a notation.
Let $G$ be a graph with $\delta(G)\geq k$ and let $S$ be a $(k,k',k'')$-dominating set in $G$. We define
$$\delta^{*}=\min\{\deg(v)\mid v\in V(G) \mbox{ and } \deg(v) \geq k'+k'' \}.$$
\noindent It is easy to see that every vertex $v\in V\setminus S$ satisfies $\deg(v)\geq k'+k''$
and therefore $\deg(v)\geq \delta^{*}$.
\begin{theorem}\label{Th:Lower.Bound}
Let $G$ be a graph with $\delta(G)\geq k$. Then
$$\gamma_{(k,k',k'')}(G)\geq \frac{(k'+\delta^{*})n-2m}{\delta^{*}+k'-k}.$$
\end{theorem}
\begin{proof}
Let $S$ be a minimum $(k,k',k'')$-dominating set in $G$. Then,
every vertex $v\in S$ is adjacent to at least $k$ vertices in $S$.
Therefore $|E(G[S])|\geq k|S|/2$. Let $E(v)$ denote the set of edges incident to
vertex $v$. Now let $v\in V\setminus S$. Since $S$ is a
$(k,k',k'')$-dominating set, it follows that $v$ is incident to at least $k'$ edges
$e_{1}, \ldots ,e_{k'}$ in $[S,V\setminus S]$ and at least $k''$
edges $e_{k'+1}, \ldots ,e_{k'+k''}$ in $E(G[V\setminus S])$.
Since $\deg(v)\geq \delta^{*}\geq k'+k''$, $v$ is incident to at least
$\delta^{*}-k'-k''$ edges in $E(v)\setminus \{e_{i}\}_{i=1}^{k'+k''}$.
The value of $|[S,V\setminus S]|+|E(G[V\setminus S])|$ is minimized if
the edges in $E(v)\setminus \{e_{i}\}_{i=1}^{k'+k''}$
belong to $E(G[V\setminus S])$. Therefore
$$\begin{array}{lcl}
2m&=&2|E(G[S])|+2|[S,V\setminus S]|+2|E(G[V\setminus S])|\\
&\geq &k|S|+2k'(n-|S|)+k''(n-|S|)+(\delta^{*}-k'-k'')(n-|S|).
\end{array}$$
This leads to $\gamma_{(k,k',k'')}(G)=|S|\geq \frac{(k'+\delta^{*})n-2m}{\delta^{*}+k'-k}$.
\end{proof}
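The bound of Theorem 2.2 can be verified numerically on a concrete graph. The sketch below (our own illustration, not part of the paper) checks it for the restrained domination case $(k,k',k'')=(0,1,1)$ on the Petersen graph, which is cubic with $n=10$ and $m=15$, so $\delta^{*}=3$ and the bound evaluates to $((1+3)\cdot 10-30)/4=2.5$.

```python
from itertools import combinations

# Petersen graph: cubic, n = 10 vertices, m = 15 edges.
edges = [(0,1),(1,2),(2,3),(3,4),(4,0),   # outer cycle
         (0,5),(1,6),(2,7),(3,8),(4,9),   # spokes
         (5,7),(7,9),(9,6),(6,8),(8,5)]   # inner pentagram
adj = {v: set() for v in range(10)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def min_restrained_dom(adj):
    """Exhaustive search for a minimum (0,1,1)-dominating
    (i.e., restrained dominating) set."""
    V = list(adj)
    for size in range(len(V) + 1):
        for S in map(set, combinations(V, size)):
            if all(len(adj[v] & S) >= 1 and len(adj[v] - S) >= 1
                   for v in V if v not in S):
                return size

n, m = 10, len(edges)
k, kp, kpp = 0, 1, 1
delta_star = 3  # cubic graph and k' + k'' = 2 <= 3
bound = ((kp + delta_star) * n - 2 * m) / (delta_star + kp - k)  # = 2.5
assert min_restrained_dom(adj) >= bound
```

The brute-force optimum is at least $\lceil 2.5\rceil = 3$, consistent with the theorem.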
\noindent We note that when $(k,k',k'')=(1,1,1)$, Theorem \ref{Th:Lower.Bound} improves
inequalities (\ref{EQ3}) and (\ref{EQ5}), while for $(k,k',k'')=(0,1,1)$
it improves the corresponding bounds (\ref{EQ4}) and (\ref{EQ6}).
Also, if $(k,k',k'')=(k,k,k)$, Theorem \ref{Th:Lower.Bound} improves (\ref{EQ7}), and
if $(k,k',k'')=(1,2,1)$, it improves (\ref{EQ8}).
As an immediate result of Theorem \ref{Th:Lower.Bound}, we conclude the following result of Hattingh and Joubert.
\begin{corollary}\cite{hj}
If $G$ is a cubic graph of order $n$, then $\gamma_{r}(G)\geq \frac{n}{4}$.
\end{corollary}
Also, for total restrained and restrained double domination numbers of a cubic graph $G$, we obtain
$\gamma_{t}^r(G)\geq \frac{n}{3}$ and $\gamma_{2r}(G)\geq \frac{n}{2}$,
by Theorem \ref{Th:Lower.Bound}, respectively.
\noindent Since $\gamma_{\times k}(G)=\gamma_{(k-1,k,0)}(G)$, Theorem \ref{Th:Lower.Bound} improves the following results.
\begin{theorem}\cite{hh}
Let $G$ be a graph of order $n$ with $\delta(G)\geq k-1$. Then
$$\gamma_{\times k}(G)\geq \frac{2kn-2m}{k+1}$$
and this bound is sharp.
\end{theorem}
\begin{theorem}\cite{zwx}
Let $G$ be a graph of order $n$ and size $m$ with minimum degree $\delta\geq k$. Then $\gamma_{\times k,t}(G)\geq2(n-\frac{m}{k})$ and this bound is sharp.
\end{theorem}
Theorem \ref{Th:Lower.Bound} also improves the following theorem.
\begin{theorem}\cite{fj2}
If $G$ is a graph with $n$ vertices and $m$ edges, then $\gamma_{k}(G)\geq n-\frac{m}{k}$ for each $k\geq1$.
\end{theorem}
\noindent We note that every graph $G$ with $\delta(G)\geq k$ has
a $(k,k',0)$-dominating set such as $S=V(G)$
and therefore $\gamma_{(k,k',0)}(G)$ is well-defined when $\delta(G)\geq k$.
\begin{theorem}\label{Th:LB2}
If $G$ is a graph of order $n$ and $\delta(G)\geq k$, then $\gamma_{(k,k',0)}(G)\geq k'n/(\Delta+k'-k)$.
\end{theorem}
\begin{proof}
Let $S$ be a minimum $(k,k',0)$-dominating set in $G$. Then each vertex of $S$ is adjacent to at least $k$ vertices in $S$ and therefore to at most $\Delta-k$ vertices in $V\setminus S$, and so $|[S,V\setminus S]|\leq (\Delta-k)|S|$. On the other hand, every vertex of $V\setminus S$ has at least $k'$ neighbors in $S$, and so $k'(n-|S|)\leq |[S,V\setminus S]|$. Consequently, $\gamma_{(k,k',0)}(G)=|S|\geq k'n/(\Delta+k'-k)$.
\end{proof}
\noindent The following corollaries are immediate results of Theorem \ref{Th:LB2}.
\begin{corollary}(\cite{hk,zwx})
If $G$ is a graph of minimum degree at least $k$, then $\gamma_{\times k,t}(G)\geq kn/\Delta$ and this bound is sharp.
\end{corollary}
\begin{corollary}\cite{hh}
If $G$ is a graph of order $n$ with $\delta(G)\geq k-1$, then $\gamma_{\times k}(G)\geq kn/(\Delta+1)$ and this bound is sharp.
\end{corollary}
\begin{corollary}\cite{fj1}
If $G$ is a graph of order $n$, then $\gamma_{k}(G)\geq kn/(\Delta+k)$ for every integer $k$.
\end{corollary}
\section {Upper bounds on $(k,k',1)$-domination numbers}
\noindent In this section we present an upper bound on $(k,k',1)$-domination numbers and list some of the existing upper bounds which can be derived from this upper bound.
\begin{theorem}
Let $G$ be a graph of order $n$ and let $k$ and $k'$ be positive integers.
\begin{enumerate}
\item If $k'\geq k+1$ and $\delta\geq k'+1$, then
$\gamma_{(k,k',1)}(G)\leq n-\delta+ k'-1$.
\item If $k\geq k'$ and $\delta\geq k+2$, then
$\gamma_{(k,k',1)}(G)\leq n-\delta+ k$.
\end{enumerate}
The bounds given in Parts 1 and 2 are sharp.
\end{theorem}
\begin{proof}
Let $u$ be a vertex in $G$ with $\deg(u)=\delta$.
\noindent {\bf Proof of 1:} Suppose that $v_{1},\ldots,v_{k'}\in N(u)$. Since $\delta\geq k'+1$, we have $|N[u]\setminus \{v_{1},\ldots,v_{k'}\}|\geq2$; in particular, $N[u]\setminus \{v_{1},\ldots,v_{k'}\}$ is nonempty. Also, it is easy to see that the subgraph induced by $N[u]\setminus \{v_{1},\ldots,v_{k'}\}$ has no isolated vertices. Now let $S=V(G)\setminus (N[u]\setminus \{v_{1},\ldots,v_{k'}\})$ and let $v\in N[u]\setminus \{v_{1},\ldots,v_{k'}\}$. Then $v$ can be adjacent to at most $\delta-k'$ vertices in $N[u]\setminus \{v_{1},\ldots,v_{k'}\}$, so $v$ has at least $k'$ neighbors in $S$. On the other hand, for every vertex $v$ in $S$ we have
$$|N(v)\cap S|=\deg(v)-|N(v)\cap (N[u]\setminus \{v_{1},\ldots,v_{k'}\})|\geq \deg(v)-\delta +k'-1\geq k.$$
Therefore $S$ is a $(k,k',1)$-dominating set in $G$. Hence,
$$
\gamma_{(k,k',1)}(G)\leq |S|=|V(G)\setminus (N[u]\setminus \{v_{1},\ldots,v_{k'}\})|=n-\delta+k'-1.
$$
\noindent {\bf Proof of 2:} Suppose that $v_{1},\ldots,v_{k+1}\in N(u)$. By assumption,
$|N[u]\setminus \{v_{1},\ldots,v_{k+1}\}|\geq 2$ and the subgraph induced by
$N[u]\setminus \{v_{1},\ldots,v_{k+1}\}$ has no isolated vertices. An argument similar to that described
in Part 1 shows that $S=V(G)\setminus (N[u]\setminus \{v_{1},\ldots,v_{k+1}\})$
is a $(k,k',1)$-dominating set in $G$. Therefore
$$
\gamma_{(k,k',1)}(G)\leq |S|=|V(G)\setminus (N[u]\setminus \{v_{1},\ldots,v_{k+1}\})|=n-\delta+k.
$$
\noindent It is easy to see that the upper bounds are sharp for the complete graph $K_{n}$, when $n\geq \max\{k,k'\}+3$.
\end{proof}
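The sharpness claim for complete graphs can also be checked directly. The following illustration (ours, with ad-hoc function names) brute-forces $\gamma_{(1,2,1)}(K_6)$, where Part 1 applies with $k=1$, $k'=2$, $\delta=5$ and predicts the bound $n-\delta+k'-1=2$, attained with equality.

```python
from itertools import combinations

n = 6
adj = {v: set(range(n)) - {v} for v in range(n)}  # complete graph K6, delta = 5

def gamma_kk1(adj, k, kp):
    """Minimum size of a (k, k', 1)-dominating set, by brute force."""
    V = list(adj)
    for size in range(len(V) + 1):
        for S in map(set, combinations(V, size)):
            ok = (all(len(adj[v] & S) >= k for v in S)
                  and all(len(adj[v] & S) >= kp and len(adj[v] - S) >= 1
                          for v in V if v not in S))
            if ok:
                return size

# Part 1 with (k, k') = (1, 2): k' >= k+1 and delta = 5 >= k'+1,
# so the bound n - delta + k' - 1 = 6 - 5 + 2 - 1 = 2 should be attained.
print(gamma_kk1(adj, 1, 2))  # 2
```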
Considering Parts 1 and 2 of Theorem 3.1, we see that
$$\gamma_{(k,k',1)}(G)\leq n-\delta+ \max\{k,k'-1\},$$
whenever $\delta(G)\geq \max\{k,k'\}+2$.\\
As an immediate consequence we conclude the following corollary.
\begin{corollary}
If $G$ is a graph of order $n$ and minimum degree $\delta\geq3$, then $\gamma_{2r}(G)\leq n-\delta+1$ and the bound is sharp.
\end{corollary}
\noindent The authors in \cite{kn} showed that $\gamma_{2r}(G)\leq n-2$ for every graph of order $n$ and minimum degree $\delta(G)\geq 3$. In fact, Corollary 3.2 improves this bound; in addition, if $\delta\geq4$,
then the upper bound $n-2$ for $\gamma_{2r}(G)$ is not sharp.
\end{document} |
\begin{document}
\title{\LARGE \bf
Accelerated Primal-dual Scheme for a Class of Stochastic Nonconvex-concave Saddle Point Problems
}
\thispagestyle{empty}
\pagestyle{empty}
\begin{abstract}
Stochastic nonconvex-concave min-max saddle point problems appear in many machine learning and control problems including distributionally robust optimization, generative adversarial networks, and adversarial learning. In this paper, we consider a class of nonconvex saddle point problems where the objective function
satisfies the Polyak-Łojasiewicz condition with respect to the minimization variable and it is concave with respect to the maximization variable. The existing methods for solving nonconvex-concave saddle point problems often suffer from slow convergence and/or contain multiple loops. Our main contribution lies in proposing a novel single-loop accelerated primal-dual algorithm with new convergence rate results appearing for the first time in the literature, to the best of our knowledge.
\end{abstract}
\section{Introduction}
\label{sec:intro}
In this paper, we consider the following min-max saddle point (SP) game:
\begin{align}\label{main}
& \min_{x\in \mathcal X} \max_{y\in \mathcal Y} {\Phi(x,y)}\triangleq \mathcal L(x,y)-h(y),
\end{align}
where $\mathcal X=\mathbb R^n$, $\mathcal Y=\mathbb R^m$, $\mathcal L(x,y)=\mathbb E[\mathcal L(x,y;\xi)]$, $\xi$ is a random vector, $\mathcal L(\cdot,y)$ is potentially nonconvex for any $y\in \mathcal Y$ and satisfies Polyak-Łojasiewicz (PL) condition (see Definition \ref{def PL}), $\mathcal L(x,\cdot)$ is concave for any $x\in \mathcal X$ and $h(\cdot)$ is convex and possibly nonsmooth.
Our goal is to develop an algorithm to find a first order stationary point of this SP problem.
Recent emerging applications
in machine learning and control have further stimulated a surge of interest in these problems. Examples that can be formulated as \eqref{main} include generative adversarial
networks (GANs) \cite{goodfellow2016deep}, fair classification \cite{nouiehed2019solving}, communications \cite{akhtar2021conservative,bedi2019asynchronous}, and wireless system \cite{chen2011convergence,feijer2009krasovskii}.
Convex-concave saddle point problems have been extensively studied in the literature \cite{chambolle2016ergodic,hamedani2021primal}. However, recent applications in machine learning and control may involve nonconvexity. One class of nonconvex-concave min-max problems, which we study in this paper, arises when the objective function satisfies the PL condition. Next, we provide two examples that can be formulated as problem \eqref{main} and satisfy the PL condition.
\begin{example}[Generative adversarial imitation learning]\label{ex2}\emph{
One practical example of a PL game is generative adversarial imitation learning for linear quadratic regulators (LQR). Imitation learning techniques aim to mimic human behavior by observing
an expert demonstrating a given task \cite{hussein2017imitation}. Generative adversarial imitation learning (GAIL) was proposed by \cite{ho2016generative} which solves imitation learning via min-max optimization.
Let $K$ represent the choice of policy and $K_E$ the expert policy, and denote the cost parameter and the expected cumulative cost for a given policy $K$ by $\theta=(Q,R)$ and $C(K,\theta)$, respectively. The problem of GAIL for LQR can be formulated \cite{cai2019global} as $
\min_{K\in \mathcal K} \max_{\theta\in \Theta} m(K,\theta),$
where $m(K,\theta)=C(K,\theta)-C(K_E,\theta)$, $Q\in \mathbb R^{d\times d }$, $R\in \mathbb R^{k\times k}$, $\alpha_Q I \preceq Q \preceq \beta_Q I$, and $\alpha_R I \preceq R \preceq \beta_R I$. It is known that $m$ satisfies PL condition in $K$ \cite{nouiehed2019solving}.}
\end{example}
\begin{example}[Distributionally robust optimization]\label{ex1}\emph{
Define $\ell_i(x)=\ell(x,\xi_i)$, where $\ell:\mathcal X\times \Omega\to\mathbb R$ is a possibly nonconvex loss function and $\Omega=\{\xi_1,\hdots,\xi_n\}$. Distributionally robust optimization (DRO) studies worst-case performance under uncertainty to find solutions with some specific confidence level \cite{namkoong2016stochastic}. DRO can be formulated as $\min_{x\in \mathcal X}\max_{y\in \mathcal Y} \sum_{i=1}^n y_i \ell_i(x),$
where $\mathcal Y$ represents the uncertainty set, e.g., $\mathcal Y=\{y\in \mathbb R^m_{+}\mid y\geq \delta/n, \ V(y,\tfrac{1}{n}\mathbf 1_n)\leq \rho\}$ is an uncertainty set considered in \cite{namkoong2016stochastic} and $V (Q,P)$ denotes the divergence measure between two sets of probability measures $Q$ and $P$. As it has been shown in \cite{guo2020fast}, DRO for deep learning with ReLU activation function satisfies PL condition in an $\epsilon$-neighborhood around a random initialized point.}
\end{example}
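For intuition, the inner maximization of Example \ref{ex1} has a simple closed form if one keeps only the lower-bound constraint $y\geq \delta/n$ on the simplex and drops the divergence-ball constraint from the text. Under that simplification (ours, for illustration only), the adversary keeps $\delta/n$ mass on every sample and places the remaining $1-\delta$ on a largest loss:

```python
import numpy as np

def worst_case_weights(losses, delta):
    """Maximizer of sum_i y_i * losses[i] over the simplified uncertainty set
    {y in the simplex : y_i >= delta/n}.  Note: the divergence constraint
    V(y, 1/n) <= rho from the paper's set is intentionally dropped here."""
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    y = np.full(n, delta / n)       # mandatory floor on every sample
    y[np.argmax(losses)] += 1 - delta  # remaining mass on a worst sample
    return y

losses = [0.2, 1.5, 0.7, 1.5]
y = worst_case_weights(losses, delta=0.4)
```

This is just the generic fact that a linear objective over a polytope is maximized at a vertex; the divergence constraint in the actual set of \cite{namkoong2016stochastic} spreads the extra mass over several large losses instead.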
One natural way to solve problem \eqref{main} is to take simultaneous or sequential steps that decrease the objective function $\Phi(\cdot,y)$ for a given $y$ and increase the objective function $\Phi(x,\cdot)$ for a given $x$. One of the most famous algorithms of this type is gradient descent-ascent (GDA) \cite{nedic2009subgradient}.
It has been discovered that such a naive approach leads to poor performance and may even diverge for simple problems.
One way to resolve this issue is to add a momentum term involving the gradient of the objective function. Although this approach leads to an optimal convergence rate result \cite{hamedani2021primal,zhao2021accelerated}, it may not be directly applicable in the nonconvex-concave setting. Therefore, we aim to develop a novel primal-dual algorithm with acceleration in the primal update as well as a new momentum in the dual update.
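The divergence of naive GDA is easy to reproduce on the classical bilinear game $\min_x\max_y\, xy$, whose unique saddle point is $(0,0)$: each simultaneous step maps $(x,y)\mapsto(x-\eta y,\, y+\eta x)$ and thus multiplies the norm by $\sqrt{1+\eta^2}>1$, so the iterates spiral outward.

```python
# Simultaneous GDA on min_x max_y x*y with step size eta.
eta = 0.1
x, y = 1.0, 0.0
r0 = (x * x + y * y) ** 0.5
for _ in range(100):
    # Tuple assignment makes the update simultaneous (old x used for y).
    x, y = x - eta * y, y + eta * x
r = (x * x + y * y) ** 0.5
assert r > r0  # iterates move away from the saddle point (0, 0)
```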
\subsection{Related Works}
{\bf Nonconvex-concave SP problem.} Various algorithms have been proposed for solving nonconvex-concave SP problems. The existing methods can be categorized into two types: multi-loop and single-loop. Multi-loop algorithms \cite{kong2021accelerated,ostrovskii2021efficient} are often difficult to implement in practice because the termination of the inner loop has a high impact on the overall complexity of such algorithms.
There have been some recent efforts \cite{zhang2020single,xu2020unified} to design and analyze single-loop algorithms to solve nonconvex-concave problems. In particular, a convergence rate of $\mathcal O(\epsilon^{-4})$ has been obtained for the aforementioned single-loop algorithms.
There are also some limited studies \cite{rafique2018weakly,lin2020gradient} in the stochastic regime, where convergence rates of $\mathcal O(\epsilon^{-6})$ and $\mathcal O(\epsilon^{-8})$ have been achieved, respectively.
{\bf PL condition.} Rate results for nonconvex-concave problems can be improved for a class of problems where the objective function satisfies the PL condition. Recently, nonconvex-PL SP problems have been studied in \cite{nouiehed2019solving,fiez2021global}, assuming that the objective satisfies a one-sided PL condition, and a convergence rate of $\mathcal{\tilde O} (\epsilon^{-2})$ has been achieved. More recently, to guarantee global convergence, Yang et al. \cite{yang2020global} proposed an alternating gradient descent ascent algorithm with a linear convergence rate to solve SP problems where the objective satisfies a two-sided PL condition. Moreover, a convergence rate of $\mathcal O(\epsilon^{-1})$ has been shown for the stochastic regime under the two-sided PL condition. Subsequently, Guo et al. \cite{guo2020fast} improved the dependence of the convergence rate on the condition number (the ratio of the smoothness parameter to the PL constant) under different problem conditions.
\begin{table}[htb]
\caption{Comparison of complexity between some of the main existing methods for solving SP problem}
\label{result table}
\centering
\begin{tabular}{|c|c|cc|c|}
\hline
\multirow{2}{*}{References} & \multirow{2}{*}{Problem} & \multicolumn{2}{c|}{Complexity} & \multirow{2}{*}{\# of loops} \\ \cline{3-4}
& & \multicolumn{1}{c|}{det.} & stoch. & \\ \hline
\cite{hamedani2021primal,zhao2021accelerated}&SC-C& \multicolumn{1}{c|}{$\mathcal O(\epsilon^{-0.5})$} & $\mathcal O(\epsilon^{-1})$ & Single \\ \hline
\cite{chambolle2016ergodic,juditsky2011solving,zhao2021accelerated}&C-C& \multicolumn{1}{c|}{$\mathcal O(\epsilon^{-1})$} & $\mathcal O(\epsilon^{-2})$ & Single \\ \hline
\cite{rafique2018weakly} & NC-C & \multicolumn{1}{c|}{$\mathcal O(\epsilon^{-6})$} & $\mathcal O(\epsilon^{-6})$ & Double \\ \hline
\cite{lin2020gradient} & NC-C & \multicolumn{1}{c|}{$\mathcal O(\epsilon^{-6})$} & $\mathcal O(\epsilon^{-8})$ & Single \\ \hline
\cite{fiez2021global} & NC-PL & \multicolumn{1}{c|}
{$\mathcal{\tilde O} (\epsilon^{-2})$} & -- & Single \\ \hline
{This paper} & {PL-C} & \multicolumn{1}{c|}{{$\mathcal O(\epsilon^{-2})$}} & {$\mathcal O(\epsilon^{-4})$} & {Single } \\\hline
\end{tabular}
\end{table}
Some of the main results in deterministic and stochastic settings for (non)convex-concave SP problems are summarized in Table \ref{result table}. Next, we state our main contributions.
\subsection{Contributions}
The existing methods for solving nonconvex-concave SP problems often suffer from slow convergence and/or contain multiple loops. Our main contribution lies in proposing a novel single-loop accelerated primal-dual algorithm with convergence rate results for PL-game appearing for the first time in the literature to the best of our knowledge.
Our main contributions are summarized as follows:
(i) We propose an accelerated primal-dual scheme to solve problem \eqref{main}. Our main idea lies in designing a novel algorithm by combining an accelerated step in the primal variable with a dual step involving a momentum in terms of the gradient of the objective function.
(ii) Under a stochastic setting, using an acceleration scheme where mini-batch sample gradients are utilized, our method achieves an oracle complexity (number of sample gradient calls) of $\mathcal O(\epsilon^{-4})$. (iii) Under a deterministic regime,
we demonstrate a convergence guarantee of $\mathcal O(\epsilon^{-2})$ to find an $\epsilon$-stationary solution. This is the best-known rate for SP problems satisfying one-sided PL condition to the best of our knowledge.
\section{Preliminaries}
First we define some important notations.
{\bf Notations.} $\|x\|$ denotes the Euclidean vector norm, i.e., $\|x\|=\sqrt{x^Tx}$. $\mbox{prox}_g(x)$ denotes the proximal operator with respect to $g$ at $x$, i.e., $\mbox{prox}_g(y)\triangleq \mbox{argmin}_x\{\tfrac{1}{2}\|x-y\|^2+g(x)\}$. $\mathbb E[x]$ is used to denote the expectation of a random variable $x$. We define $x^*(y)\triangleq \mbox{argmin}_x \mathcal L({x,y)}$. Given the mini-batch samples $ \mathcal U=\{\xi^i\}_{i=1}^b$ and $\mathcal V=\{\bar \xi^i\}_{i=1}^b$, we let $\nabla_x \mathcal L_{\mathcal U}(x,y)={1\over b} \sum_{i=1}^b\nabla_x \mathcal L(x,y;\xi^i)$ and $\nabla_y \mathcal L_{\mathcal V}(x,y)={1\over b} \sum_{i=1}^b\nabla_y \mathcal L(x,y;\bar \xi^i)$. We define the $\sigma$-algebras $\mathcal H_k=\{\mathcal U_1,\mathcal V_1,\mathcal U_2,\mathcal V_2,\hdots,\mathcal U_{k-1}, \mathcal V_{k-1}\}$ and $\mathcal F_k=\mathcal H_k\cup\{\mathcal V_k\}$.
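As a concrete instance of the proximal operator defined above, consider $g(x)=\lambda|x|$ in one dimension, for which $\mbox{prox}_g$ has the well-known soft-thresholding closed form (this example is ours for illustration; the paper only uses the generic definition):

```python
# prox_g(y) = argmin_x { 0.5*(x - y)^2 + lam*|x| } = sign(y)*max(|y| - lam, 0).
def prox_abs(y, lam):
    s = 1.0 if y > 0 else -1.0
    return s * max(abs(y) - lam, 0.0)

print(prox_abs(2.0, 0.5))   # 1.5  (shrunk toward zero by lam)
print(prox_abs(-0.3, 0.5))  # 0.0  (small inputs are thresholded to zero)
```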
Now we briefly highlight a few aspects of the PL condition \cite{polyak1963gradient} that differentiate it from convexity and make it a more relevant and appealing setting for many machine learning applications.
For the unconstrained minimization problem $\min_{x\in \mathbb R^n} f(x)$, we say that a function satisfies the PL inequality if there exists $\mu>0$ such that ${1\over 2}\|\nabla f(x)\|^2\geq \mu(f(x)-f(x^*))$ holds for all $ x\in \mathbb R^n$.
Verification of the PL condition requires access to both the objective function value and the norm of gradient, both of which are typically tractable and can be estimated from a sub-sample of the data. However, for verifying convexity, one needs to estimate the minimum eigenvalue of the Hessian matrix. Moreover, the norm of the gradient is much more resilient to perturbation of the objective function than the smallest eigenvalue of the Hessian.
PL condition does not require strong convexity or even convexity of the objective function \cite{zhang2013gradient,allen2018natasha}.
In the absence of strong-convexity, it has been shown that linear convergence rate can be obtained for gradient-based methods by only assuming the objective function satisfies PL inequality \cite{karimi2016linear}. In this paper, we consider a min-max SP problem and we assume that the objective function satisfies one-sided PL inequality.
\begin{definition}\label{def PL}
A continuously differentiable function $\mathcal L(x,y)$ satisfies the one-sided PL condition if there exists a constant $\mu>0$ such that ${1\over2}\|\nabla_x \mathcal L{(x,y)}\|^2\geq \mu(\mathcal L{(x,y)}- \mathcal L({x^*(y),y)}),$ for all $x\in \mathcal X, y\in \mathcal Y,$
where $\mathcal L({x^*(y),y)})=\min_x \mathcal L({x,y)}$.
\end{definition}
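A standard example of a nonconvex function satisfying the PL inequality is $f(x)=x^2+3\sin^2 x$, whose global minimum is $f(0)=0$. The sketch below checks the inequality numerically on a grid with $\mu=1/32$ (the constant is an assumption verified only empirically here; $f$ is even, so the positive half-line suffices):

```python
import math

# f(x) = x^2 + 3*sin(x)^2 is nonconvex but has a unique stationary point
# at x = 0 with f(0) = 0.  We spot-check (1/2)*f'(x)^2 >= mu*(f(x) - 0).
f = lambda x: x * x + 3 * math.sin(x) ** 2
df = lambda x: 2 * x + 3 * math.sin(2 * x)
mu = 1 / 32

for i in range(1, 2000):
    x = i * 0.01  # grid over (0, 20]; by symmetry this covers [-20, 20]
    assert 0.5 * df(x) ** 2 >= mu * f(x)
```

A numerical check of this kind is of course not a proof, but it illustrates how the PL inequality can be probed from function values and gradient norms alone, as discussed above.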
Now we state our main assumptions.
\begin{assumption}\label{assump0} (i) The solution set of problem \eqref{main} is nonempty; (ii)
Function $h(y)$ is convex and possibly nonsmooth; (iii) $\mathcal L(x,y)$ is continuously differentiable satisfying one-sided PL condition and $\mathcal L(x,\cdot)$ is concave for any $x\in \mathcal X$.
\end{assumption}
\begin{assumption}\label{assump1} $\nabla_x\mathcal L$ is Lipschitz continuous, i.e., there exist $L_{xx}\geq0$ and $L_{xy}\geq0$ such that
$ \|\nabla_x \mathcal L(x,y)- \nabla_x \mathcal L(\bar x,\bar y)\| \leq{L_{xx}} \|x-\bar x\|+{L_{xy}} \|y-\bar y\|.$
Moreover, $\mathcal L(x,y)$ is linear in terms of $y$.
\end{assumption}
Note that Assumption \ref{assump1} implies that
\begin{align}\label{assump3}
\mathcal L(x,y)-\mathcal L(\bar x,y)-\langle \nabla_x \mathcal L(\bar x,y),x-\bar x\rangle \leq \tfrac{L_{xx}}{2} \|x-\bar x\|^2.
\end{align}
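For completeness, inequality \eqref{assump3} is the standard descent lemma and follows from the Lipschitz continuity of $\nabla_x\mathcal L$ in Assumption \ref{assump1} via the fundamental theorem of calculus:

```latex
\begin{align*}
\mathcal L(x,y)-\mathcal L(\bar x,y)
&=\int_0^1 \langle \nabla_x\mathcal L(\bar x+t(x-\bar x),y),\,x-\bar x\rangle\,dt\\
&=\langle \nabla_x\mathcal L(\bar x,y),x-\bar x\rangle
 +\int_0^1 \langle \nabla_x\mathcal L(\bar x+t(x-\bar x),y)-\nabla_x\mathcal L(\bar x,y),\,x-\bar x\rangle\,dt\\
&\leq \langle \nabla_x\mathcal L(\bar x,y),x-\bar x\rangle
 +\int_0^1 L_{xx}\,t\,\|x-\bar x\|^2\,dt
 =\langle \nabla_x\mathcal L(\bar x,y),x-\bar x\rangle+\tfrac{L_{xx}}{2}\|x-\bar x\|^2.
\end{align*}
```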
Under the stochastic setting, we assume that the sample gradients satisfy the following standard conditions.
\begin{assumption}\label{assump:stoch}
Each component function $\mathcal L(x,y;\xi)$ has unbiased stochastic gradients with bounded variance:
\begin{align*}\quad &\mathbb E[\nabla_x\mathcal L(x,y;\xi)-\nabla_x\mathcal L(x,y)\mid \mathcal F_k]=0,\\
&\mathbb E[\nabla_y\mathcal L(x,y;\xi)-\nabla_y\mathcal L(x,y)\mid \mathcal H_k]=0,\\
&\mathbb E[\|\nabla_x\mathcal L(x,y;\xi)-\nabla_x\mathcal L(x,y)\|^2]\leq \nu^2_x,\\
&\mathbb E[\|\nabla_y\mathcal L(x,y;\xi)-\nabla_y\mathcal L(x,y)\|^2]\leq \nu^2_y.
\end{align*}
\end{assumption}
\section{Primal-Dual Method with Momentum}\label{sec:alg}
In this section, we propose a stochastic primal-dual algorithm with momentum (SPDM) for stochastic PL-concave problems, together with its deterministic counterpart PDM; see Algorithm \ref{alg1}.
\begin{algorithm}[htb]
\caption{Stochastic Primal-Dual with Momentum (SPDM)}
\label{alg1}
\begin{algorithmic}[1]
\STATE Given $x_0, \tilde x_0,y_0$, $\alpha_k\in(0,1]$, and positive sequences $\{\sigma_k\}$, $\{\gamma_k\}$ and $\{\lambda_k\}$;
\FOR{$k=0\hdots T-1$}
\STATE\label{update z} $z_{k+1} =(1-\alpha_k)\tilde x_{k} +\alpha_kx_{k}$;
\STATE\label{gen sample} Generate randomly mini-batch samples \\$ \mathcal U_k=\{\xi^i_k\}_{i=1}^b$ and $\mathcal V_k=\{\bar \xi^i_k\}_{i=1}^b$;
\STATE \label{update p,q} $q_k=\tfrac{1}{(\gamma_k-L_{xx}\gamma_k^2) \mu}(\nabla_y\mathcal L_{\mathcal V_k}{(x_{k},y_{k})}-\nabla_y\mathcal L_{\mathcal V_k}{(x_{k-1},y_{k})})$ and $p_k=\nabla_y \mathcal L_{\mathcal V_k}{(z_{k+1},y_{k})}$;
\STATE\label{update y} $y_{k+1}= \mbox{prox}_{\sigma_{k},h}\left(y_{k}+\sigma_k (p_k+ q_k)\right)$;
\STATE\label{update r} $r_k=\nabla_x \mathcal L_{\mathcal U_k}(z_{k+1},y_{k+1})$;
\STATE\label{update x} $x_{k+1}= \left(x_k -\gamma_k r_k\right)$;
\STATE\label{update tilde x} $\tilde x_{k+1}=(z_{k+1}-\lambda_k r_k)$;
\ENDFOR
\end{algorithmic}
\end{algorithm}
Algorithm \ref{alg1} executes primal-dual steps within a single loop. After the parameters are initialized,
at each iteration $k\geq 0$, a proximal gradient ascent step for the variable $y$ is taken in the direction of $\nabla_y \mathcal L$ with an additive momentum term $q_k$. This momentum is an algorithmic device for gaining acceleration when solving PL-concave problems.
Note that we estimate the gradients of the function by drawing mini-batch samples $\mathcal U_k$ and $\mathcal V_k$ in Step \ref{gen sample}. Finally, after computing the gradient $\nabla_x\mathcal L$ at $(z_{k+1},y_{k+1})$, two gradient descent steps for the variable $x$ are taken to generate $x_{k+1}$ and $\tilde x_{k+1}$, which are then combined through a convex combination in the next iteration.
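For concreteness, the steps of Algorithm \ref{alg1} can be sketched in code. The following is a minimal Python reading of the updates, assuming exact gradients (the $b\to\infty$ limit), an illustrative quadratic regularizer $h(y)=\tfrac12\|y\|^2$ for the prox step, and user-supplied stepsize schedules; it is a sketch under these assumptions, not the implementation used in Section \ref{sec:numeric}:

```python
import numpy as np

def prox_quad(y, sigma):
    """Prox of h(y) = 0.5*||y||^2 (an assumed, illustrative choice of h):
    prox_{sigma, h}(y) = y / (1 + sigma)."""
    return y / (1.0 + sigma)

def spdm(grad_x, grad_y, x0, y0, T, alpha, sigma, gamma, lam, mom_scale):
    """Sketch of Algorithm 1 (SPDM) with exact gradients (b -> infinity).
    alpha, sigma, gamma, lam, mom_scale are callables k -> float;
    mom_scale(k) plays the role of 1 / ((gamma_k - L_xx * gamma_k^2) * mu).
    We initialize x_{-1} = x_0 for the first momentum term (an assumption)."""
    x, x_tilde, y = x0.copy(), x0.copy(), y0.copy()
    x_prev = x0.copy()
    for k in range(T):
        z = (1 - alpha(k)) * x_tilde + alpha(k) * x            # step 3
        q = mom_scale(k) * (grad_y(x, y) - grad_y(x_prev, y))  # momentum (step 5)
        p = grad_y(z, y)                                       # step 5
        y = prox_quad(y + sigma(k) * (p + q), sigma(k))        # step 6
        r = grad_x(z, y)                                       # step 7
        x_prev = x
        x = x - gamma(k) * r                                   # step 8
        x_tilde = z - lam(k) * r                               # step 9
    return x, y
```

On the toy objective $\mathcal L(x,y)=\tfrac12\|x\|^2+\langle x,y\rangle$, which is linear in $y$ as Assumption \ref{assump1} requires, the iterates converge to the saddle point at the origin.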
\section{Convergence Analysis}\label{sec:conv}
In this section, we study the convergence properties of Algorithm \ref{alg1} in both the stochastic and deterministic settings. All related proofs are provided in the appendix. Our goal is to find a first-order stationary point of problem \eqref{main}. For a given positive $\epsilon$, we say that a point $(x,y)$ is an $\epsilon$-stationary solution of problem \eqref{main} if $\|\nabla_x\Phi(x,y)\|\leq \epsilon$ and $\nabla_y\mathcal L(x,y)\in \partial h(y)+\mathcal B(0,r\epsilon)$ for some $r>0$.
For our analysis, for all $k\in\{0,\hdots,T-1\}$, define $C_k$ as:
\begin{align}\label{align1}
&C_k\triangleq 1-L_{xx}\gamma_k-\tfrac{L_{xx}(\gamma_k-\lambda_k)^2}{2\alpha_k\Gamma_k\gamma_k}\left(\sum_{\tau=k}^{T-1} \Gamma_\tau\right)\geq 0,
\end{align}
where $ \Gamma_k \triangleq
\begin{cases}
1 & k=0\\
(1-\alpha_k)\Gamma_{k-1} & k\geq1\\
\end{cases}.$
\begin{remark}
By choosing $\alpha_k=\tfrac{2}{k+1}$, $\lambda_k=\tfrac{1}{2L_{xx}}$, and $\gamma_k\in [\lambda_k,(1+\alpha_k/4)\lambda_k]$ for any $k\geq 0$, from the definition of $C_k$ one can show that $C_k\geq 11/32$ (see \cite{ghadimi2016accelerated}).
\end{remark}
Now we establish the convergence rate of SPDM for solving stochastic PL-concave SP problem \eqref{main}. In Algorithm \ref{alg1}, to estimate the gradient of the function, we draw mini-batch samples $\mathcal U_k$ and $\mathcal V_k$ at each iteration, where $|\mathcal U_k|=|\mathcal V_k|=b$.
\begin{theorem}\label{th1}
Let $\{x_k, y_{k},z_k\}_{{k} \geq0}$ be generated by Algorithm \ref{alg1} and suppose Assumptions \ref{assump0}, \ref{assump1} and \ref{assump:stoch} hold.
Moreover, let $\sigma_k= \tfrac{\mu}{36L^2_{xy}}$, $\alpha_k=\tfrac{2}{k+1}$, $\lambda_k=\tfrac{1}{2L_{xx}}$, $\gamma_k\in [\lambda_k,(1+\alpha_k/4)\lambda_k]$ for any $k\geq 0$, and $b=T$. Then there exists an iteration $k\in\{0,\hdots,T\}$ such that $(z_k,y_k)$ is an $\epsilon$-stationary point of problem \eqref{main}, which can be obtained within $\mathcal O(\epsilon^{-4})$ evaluations of sample gradients.
\end{theorem}
Now consider the function $\mathcal L$ in problem \eqref{main} to be deterministic, i.e., the exact gradients $\nabla_x \mathcal L$ and $\nabla_y \mathcal L$ are available. We show that the convergence rate can then be improved to $\mathcal O(\epsilon^{-2})$.
\begin{theorem}\label{th2}
Let $\{x_k, y_{k},z_k\}_{{k} \geq0}$ be generated by Algorithm \ref{alg1} with access to the exact gradients $\nabla_x \mathcal L$ and $\nabla_y \mathcal L$, and suppose Assumptions \ref{assump0} and \ref{assump1} hold.
Choosing the parameters as in Theorem \ref{th1}, there exists an iteration $k\in\{0,\hdots,T\}$ such that $(z_k,y_k)$ is an $\epsilon$-stationary point, which can be obtained within $\mathcal O(\epsilon^{-2})$ gradient evaluations.
\end{theorem}
The proof for the deterministic setting, i.e., Theorem \ref{th2}, is similar to that of Theorem \ref{th1} and is obtained by letting $\nu_x=\nu_y=0$.
\section{Numerical Results}\label{sec:numeric}
In this section, we apply our method to the GAIL problem described in Example \ref{ex2}. The code used in our experiments was adapted from the implementation of \cite{yang2020global}.
To validate the efficiency of the proposed scheme, we compare the PDM algorithm with alternating gradient descent ascent (AGDA) \cite{yang2020global}, Smoothed-GDA \cite{zhang2020single}, and AGP \cite{xu2020unified}.
The optimal control problem for LQR can be formulated as follows \cite{cai2019global}:
\begin{align}\label{num:problem}
&\underset{\pi_t}{\text{minimize} }\quad \mathbb E\left[\sum_{t=0}^{\infty}x_t^{\top}Qx_t + u_t^{\top}Ru_t\right]\\
\nonumber&\text{subject to}\quad x_{t+1}=Ax_t+Bu_t,\ u_t=\pi_t(x_t), \
x_0\sim \mathbb D_0,
\end{align}
where $Q\in \mathbb R^{d\times d }$ and $R\in \mathbb R^{k\times k}$ are positive definite matrices, $A\in \mathbb R^{d\times d }$, $B\in \mathbb R^{d\times k }$, $u_t\in \mathbb R^{k}$ is a control, $x_t\in \mathbb R^{d}$ is a state, $\pi_t$ is a policy, and $\mathbb D_0$ is a given initial distribution. In the infinite-horizon setting with a stochastic initial state $x_0\sim \mathbb D_0$, the optimal control input can be written as a linear function $u_t=-K^*x_t$, where $K^*\in \mathbb R ^{k\times d}$ is the policy and does not depend on $t$. We denote the expected cumulative cost in \eqref{num:problem} by $C(K,\theta)$, where $\theta=(Q,R)$. To estimate the expected cumulative cost, we sample $n$ initial points $x_0^{(1)},\hdots,x_0^{(n)}$ and estimate $C(K,\theta)$ by the sample average:
$C_{n}(K;\theta):={1\over n} \sum_{i=1}^{n} \left[\sum_{t=0}^{\infty}x_t^{\top}Qx_t + u_t^{\top}Ru_t\right]_{x_0=x_0^{(i)}}.$
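In practice, the infinite sum inside $C_n(K;\theta)$ is truncated at a finite horizon, which is accurate once the closed-loop matrix $A-BK$ is stable. The following sketch of the sample-average estimator is our own illustration; the horizon length and the truncation are assumptions, not taken from the paper:

```python
import numpy as np

def lqr_cost_estimate(K, Q, R, A, B, x0_samples, horizon=200):
    """Sample-average estimate C_n(K; theta) of the cumulative LQR cost under
    the linear policy u_t = -K x_t, truncating the infinite horizon at
    `horizon` steps (an approximation, valid when A - B K is stable)."""
    total = 0.0
    for x0 in x0_samples:
        x = np.asarray(x0, dtype=float)
        cost = 0.0
        for _ in range(horizon):
            u = -K @ x
            cost += x @ Q @ x + u @ R @ u   # per-step quadratic cost
            x = A @ x + B @ u               # linear dynamics
        total += cost
    return total / len(x0_samples)
```

For a one-dimensional system with $A-BK=0$, the trajectory dies out after one step and the estimate can be checked by hand.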
In GAIL for LQR, the goal is to learn the cost function parameters $Q$ and $R$ from the expert after the trajectories induced by an expert policy $K_E$ are observed. Hence, the min-max formulation of the imitation learning problem is $\min_K\max_{\theta\in \Theta}\ m_n(K,\theta),$
where $m_n(K,\theta)=C_n(K,\theta)-C_n(K_E,\theta)-\phi(\theta)$, and $\phi$ is a regularization term added so that the problem becomes strongly concave in $\theta$, allowing the AGDA scheme to be applied (see \cite{yang2020global}). Moreover, $\Theta$ is the feasible set of the cost parameters. We assume that $\Theta$ is convex and that there exist positive constants $\alpha_Q,\beta_Q,\alpha_R$ and $\beta_R$ such that for any $(Q,R)\in \Theta$ we have
$\alpha_QI\preceq Q\preceq \beta_QI, \ \alpha_RI\preceq R\preceq\beta_RI.$
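Keeping the iterates $\theta=(Q,R)$ inside $\Theta$ can be done by projecting each matrix onto the spectral band $\{X:\alpha I\preceq X\preceq \beta I\}$, which amounts to clipping eigenvalues. This is one natural choice; the paper does not spell out its projection:

```python
import numpy as np

def project_psd_band(M, lo, hi):
    """Frobenius-norm projection of a symmetric matrix onto
    {X : lo*I <= X <= hi*I} (Loewner order) by eigenvalue clipping."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)   # symmetrize for numerical safety
    return (V * np.clip(w, lo, hi)) @ V.T
```

The projection leaves eigenvalues inside the band untouched and snaps the outliers to the nearest endpoint.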
We generate three different data sets for different choices of $d$ and $k$, and we set $n=100$, $\alpha_Q=\alpha_R=0.1$ and $\beta_Q=\beta_R=100$. We choose $\alpha_k=\tfrac{2}{k+1}$, $\sigma_k=0.4$ and $\lambda_k=\gamma_k=2\times 10^{-4}$. The exact gradient of the problem in compact form was established in \cite{fazel2018global}.
In Figures \ref{figcomp}(a) and (b), we compare the performance of our proposed method (PDM) with AGDA \cite{yang2020global}, Smoothed-GDA \cite{zhang2020single}, and AGP \cite{xu2020unified}. We set the same stepsizes for all methods to ensure a fair comparison; the other parameters of the competing methods are selected as suggested in their respective papers.
\begin{figure}
\caption{(a),(b): PDM vs.\ competing schemes for different dimensions ($n=100$); (c),(d): PDM vs.\ SPDM.}
\label{figcomp}
\end{figure}
In Figures \ref{figcomp}(c) and (d), we compare PDM with its stochastic variant (SPDM). To see the benefit of the stochastic scheme, we run both algorithms for the same amount of time. As can be seen, SPDM outperforms PDM, and its superiority becomes more evident as $n$ grows.
\section{Concluding Remarks}\label{sec:conclude}
In this paper, we proposed an accelerated primal-dual scheme for solving a class of nonconvex-concave problems in which the objective function satisfies the PL condition, in both deterministic and stochastic settings. By combining an accelerated step in the minimization variable with a momentum update, in terms of the gradient of the objective function, for the maximization variable, we obtained convergence rates of $\mathcal O(\epsilon^{-2})$ and $\mathcal O(\epsilon^{-4})$ for the deterministic and stochastic problems, respectively. To the best of our knowledge, this is the first work to propose a primal-dual scheme with momentum for solving PL-concave minimax problems.
There are several interesting directions for future work: (i) utilizing variance-reduction techniques to improve the convergence rates in the stochastic setting; (ii) studying a distributed variant of the proposed scheme.
\section*{APPENDIX}
First we state the following technical lemma.
\begin{lemma}\label{lem:error}
Given arbitrary sequences $\{\bar\sigma_k\}_{k\geq 0}\subset \mathbb R^n$ and $\{\bar\alpha_k\}_{k\geq 0}\subset \mathbb R_{++}$, let $\{v_k\}_{k\geq0}$ be a sequence such that $v_0\in \mathbb R^n$ and $v_{k+1}=v_k+\tfrac{\bar\sigma_k}{\bar \alpha_k}$. Then, for all $k\geq 0$ and $x\in \mathbb R^n$,
$$\langle \bar\sigma_k,x-v_k\rangle\leq {\bar \alpha_k\over 2}\|x-v_k\|^2-{\bar \alpha_k\over 2}\|x-v_{k+1}\|^2+{1\over 2 \bar \alpha_k}\|\bar\sigma_k\|^2.$$
\end{lemma}
Moreover, to prove the convergence rate, we use the following lemma (proof is similar to Lemma 3 in \cite{ghadimi2016accelerated}).
\begin{lemma}\label{epsilon stationary}
For any given $z,\bar z\in \mathbb R^n$ and $c>0$ such that $\|z-\bar z\|\leq c \epsilon$, and $y\in \mathbb R^m$, let $\bar y\triangleq \mbox{prox}_{\sigma,h}(y+\sigma(\nabla_y\mathcal L(\bar z,y)+q+u))$ for some $q,u\in \mathbb R^m$ such that $\|q\|\leq \ell\|\nabla_x\mathcal L(x,y)\|$ and $\|u\|^2\leq \nu_y^2/b$ for some $\ell,\nu_y,b>0$. If $\|\nabla_x\mathcal L(z,y)\|^2+\|\bar y-y\|^2\leq \epsilon^2$ for some $\epsilon>0$, then $\|\nabla_x\mathcal L(z,y)\|\leq \epsilon$ and $\nabla_y\mathcal L(z,y)\in \partial h(\bar y)+\mathcal B(0,(1/\sigma+\ell+cL_{yx}) \epsilon+\nu_y/\sqrt b)$.
\end{lemma}
Now we are ready to prove Theorem \ref{th1}.\\
{\bf Proof of Theorem \ref{th1}.} Define $\Delta_k\triangleq\nabla_x \mathcal L{(x_k,y_{k+1})}-\nabla_x \mathcal L{(z_{k+1},y_{k+1})}$. From Assumption \ref{assump1} and step \ref{update z} of Algorithm \ref{alg1}, the following can be obtained,
\begin{align}\label{align5}
\nonumber\|\Delta_k\|&=\|\nabla_x \mathcal L{(x_k,y_{k+1})}-\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|\\
\nonumber&\leq L_{xx}\|x_k-z_{k+1}\|\\
\nonumber&=L_{xx}\|x_k-(1-\alpha_k)\tilde x_k-\alpha_k
x_k\|
\\&=L_{xx}(1-\alpha_k)\|\tilde x_k-x_k\|.
\end{align}
Define $w_k\triangleq\nabla_x\mathcal L_{\mathcal U_k}(z_{k+1},y_{k+1})-\nabla_x\mathcal L(z_{k+1},y_{k+1})$. Using \eqref{assump3} and step \ref{update x} of Algorithm \ref{alg1}, one can obtain
\begin{align}\label {align6}
\nonumber &\mathcal L{(x_{k+1},y_{k+1})}\\
\nonumber&\leq \mathcal L{(x_k,y_{k+1})}+\langle\nabla_x \mathcal L{(x_k,y_{k+1})},x_{k+1}-x_k\rangle\\\nonumber&\quad+\tfrac{L_{xx}}{ 2}\|x_{k+1}-x_k\|^2\\
\nonumber&=\mathcal L{(x_k,y_{k+1})}+\big\langle \Delta_k+\nabla_x \mathcal L{(z_{k+1},y_{k+1})}, \\
\nonumber&\quad-\gamma_k (\nabla_x \mathcal L{(z_{k+1},y_{k+1})}+w_k)\big\rangle\\
\nonumber&\quad +\tfrac{L_{xx}\gamma_k^2}{ 2}\|\nabla_x\mathcal L{(z_{k+1},y_{k+1})}+w_k\|^2\\
\nonumber&\leq \mathcal L{(x_k,y_{k+1})}-\gamma_k \left( 1-\tfrac{L_{xx}\gamma_k}{ 2}\right)\|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2\\\nonumber&\quad+\gamma_k\|\Delta_k\|\|\nabla_x\mathcal L{(z_{k+1},y_{k+1})}\|
\\ &\quad -\gamma_k\left\langle w_k,\nabla_x\mathcal L(x_k,y_{k+1})\right\rangle\\
&\nonumber \quad+L_{xx}\gamma_k^2\left\langle w_k,\nabla_x\mathcal L(z_{k+1},y_{k+1})\right\rangle+\tfrac{L_{xx}\gamma_k^2}{2}\|w_k\|^2.
\end{align}
Define
$E_k^x\triangleq -\gamma_k\left\langle w_k,\nabla_x\mathcal L(x_k,y_{k+1})\right\rangle+L_{xx}\gamma_k^2\left\langle w_k,\nabla_x\mathcal L(z_{k+1},y_{k+1})\right\rangle.$
Combining \eqref{align5} and \eqref{align6}:
\begin{align}\label{align3}
\nonumber &\mathcal L{(x_{k+1},y_{k+1})}\\
\nonumber&\leq \mathcal L{(x_k,y_{k+1})}-\gamma_k\big( 1-\tfrac{ L_{xx}\gamma_k}{ 2}\big)\|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2\\\nonumber&\quad+L_{xx}(1-\alpha_k)\gamma_k\| \nabla_x\mathcal L{(z_{k+1},y_{k+1})}\|\|\tilde x_k-x_k\| \\\nonumber&\quad+E_k^x+\tfrac{L_{xx}\gamma_k^2}{2}\|w_k\|^2\\
\nonumber&\leq \mathcal L{(x_k,y_{k+1})} -\gamma_k\big( 1-\tfrac{L_{xx}\gamma_k}{ 2}\big)\|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2\\
\nonumber&\quad+\tfrac{L_{xx}{\gamma_k}^2}{ 2}\|\nabla_x\mathcal L{(z_{k+1},y_{k+1})}\|^2\\\nonumber&\quad + \tfrac{L_{xx}(1-{\alpha_k})^2}{2}\|\tilde x_k-x_k\|^2+E_k^x+\tfrac{L_{xx}\gamma_k^2}{2}\|w_k\|^2\\
\nonumber&=\mathcal L{(x_k,y_{k+1})}-\gamma_k\big(1-{L_{xx}\gamma_k}\big)\|\nabla_x\mathcal L{(z_{k+1},y_{k+1})}\|^2
\\&\quad+\tfrac{L_{xx}(1-{\alpha_k})^2}{ 2}\|\tilde x_k-x_k\|^2 +E_k^x+\tfrac{L_{xx}\gamma_k^2}{2}\|w_k\|^2,
\end{align}
where we used Young's inequality $ab\leq\tfrac{a^2+b^2}{2}$. By steps \ref{update z}, \ref{update x} and \ref{update tilde x} of Algorithm \ref{alg1}, one can obtain
$\tilde x_{k+1}-x_{k+1}=(1-\alpha_k)\tilde x_k+\alpha_k x_k-\lambda_k(\nabla_x \mathcal L{(z_{k+1},y_{k+1})}+w_k)-[x_k-\gamma_k(\nabla_x \mathcal L{(z_{k+1},y_{k+1})}+w_k)]=(1-\alpha_k)(\tilde x_k-x_k)+(\gamma_k-\lambda_k)(\nabla_x \mathcal L{(z_{k+1},y_{k+1})}+w_k).$
Dividing both sides of the above equality by $\Gamma_k$, summing over $k$, and using the definition of $\Gamma_k$, we obtain
$\tilde x_{k+1}-x_{k+1}=\Gamma_k\sum_{\tau=0}^k\left({\gamma_\tau-\lambda_\tau \over \Gamma_\tau}\right)(\nabla_x \mathcal L{(z_{\tau+1},y_{\tau+1})}+w_\tau).$
Using the above equality, Jensen's inequality, and the fact that
$\sum_{\tau=0}^k {\alpha_\tau \over \Gamma_\tau}={\alpha_0 \over \Gamma_0}+\sum_{\tau=1}^k{1\over \Gamma_\tau }\left (1-{{\Gamma_\tau}\over{\Gamma_{\tau-1}}}\right)={1 \over \Gamma_0}+\sum_{\tau=1}^k\left({1\over \Gamma_\tau }-{1\over \Gamma_{\tau-1}}\right)={1\over \Gamma_k},$
we obtain
\begin{align}\label{align9}
\nonumber&\| \tilde x_{k+1}-x_{k+1}\|^2\\\nonumber&=\left \|\Gamma_k \sum_{\tau=0}^k\left(\tfrac{\gamma_\tau-\lambda_\tau}{\Gamma_\tau}\right)(\nabla_x \mathcal L{(z_{\tau+1},y_{\tau+1})}+w_\tau)\right\|^2\\
\nonumber &=\left\|\Gamma_k \sum_{\tau=0}^k\tfrac{\alpha_\tau}{\Gamma_\tau}\left[\left(\tfrac{\gamma_\tau-\lambda_\tau}{\alpha_\tau}\right)(\nabla_x \mathcal L{(z_{\tau+1},y_{\tau+1})}+w_\tau)\right]\right\|^2\\
\nonumber &\leq\Gamma_k \sum_{\tau=0}^k\tfrac{\alpha_\tau}{\Gamma_\tau}\left \|\left ( \tfrac{\gamma_\tau-\lambda_\tau}{\alpha_\tau}\right )(\nabla_x \mathcal L{(z_{\tau+1},y_{\tau+1})}+w_\tau) \right \| ^2\\
&=\Gamma_k \sum_{\tau=0}^k\tfrac{(\gamma_\tau-\lambda_\tau)^2}{\Gamma_\tau \alpha_\tau}\|\nabla_x \mathcal L{(z_{\tau+1},y_{\tau+1})}+w_\tau\|^2.
\end{align}
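The normalization identity $\sum_{\tau=0}^k \alpha_\tau/\Gamma_\tau = 1/\Gamma_k$, which makes the weights $\Gamma_k\alpha_\tau/\Gamma_\tau$ sum to one in the Jensen step above, is easy to verify numerically. The check below assumes $\alpha_0=1$, which is what makes the first term of the telescoping sum equal $1/\Gamma_0$:

```python
import random

def gamma_telescope_check(T=40, seed=1):
    """Verify sum_{tau=0}^k alpha_tau / Gamma_tau = 1 / Gamma_k, where
    Gamma_0 = 1 and Gamma_k = (1 - alpha_k) * Gamma_{k-1}, assuming
    alpha_0 = 1 and alpha_k in (0, 1) for k >= 1.  Returns the maximum
    relative error over k = 0, ..., T-1."""
    rng = random.Random(seed)
    alpha = [1.0] + [rng.uniform(0.1, 0.7) for _ in range(T - 1)]
    Gamma = [1.0]
    partial, max_rel_err = 0.0, 0.0
    for k in range(T):
        if k >= 1:
            Gamma.append((1.0 - alpha[k]) * Gamma[k - 1])
        partial += alpha[k] / Gamma[k]
        # Relative error of partial against 1 / Gamma[k].
        max_rel_err = max(max_rel_err, abs(partial * Gamma[k] - 1.0))
    return max_rel_err
```

The identity holds exactly in exact arithmetic, so only floating-point rounding appears in the error.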
Using \eqref{align9} in \eqref{align3}, one can obtain the following,
\begin{align}\label{sum1}
\nonumber &\mathcal L{(x_{k+1},y_{k+1})}\\
\nonumber&\leq \mathcal L{(x_{k},y_{k+1})}-\gamma_k(1-L_{xx}\gamma_k)\|\nabla_x\mathcal L{(z_{k+1},y_{k+1})}\|^2\\\nonumber&\quad+E_k^x+\tfrac{L_{xx}\gamma_k^2}{2}\|w_k\|^2+\tfrac{L_{xx}\Gamma_{k-1}(1-\alpha_k)^2}{2}
\\\nonumber&\quad\times \sum_{\tau=0}^{k-1}\tfrac{(\gamma_\tau-\lambda_\tau)^2}{\Gamma_{\tau}\alpha_{\tau}}\|\nabla_x\mathcal L{(z_{\tau+1},y_{\tau+1})}+w_\tau\|^2\\
\nonumber&\leq \mathcal L{(x_{k},y_{k+1})}-\gamma_k(1-L_{xx}\gamma_k)\|\nabla_x\mathcal L{(z_{k+1},y_{k+1})}\|^2\\\nonumber&\quad+E_k^x+\tfrac{L_{xx}\gamma_k^2}{2}\|w_k\|^2\\\nonumber&\quad
+\tfrac{L_{xx}\Gamma_{k}}{ 2}\sum_{\tau=0}^{k}\tfrac{(\gamma_\tau-\lambda_\tau)^2}{\Gamma_{\tau}
\alpha_{\tau}}\|\nabla_x\mathcal L{(z_{\tau+1},y_{\tau+1})}\|^2\\\nonumber&\quad+\tfrac{L_{xx}\Gamma_{k}}{2}\sum_{\tau=0}^{k}\tfrac{(\gamma_\tau-\lambda_\tau)^2}{\Gamma_{\tau}\alpha_{\tau}}\|w_\tau\|^2+\tfrac{L_{xx}\Gamma_{k-1}(1-\alpha_k)^2}{2}\\
&\quad \times\sum_{\tau=0}^{k}\tfrac{(\gamma_\tau-\lambda_\tau)^2}{\Gamma_{\tau}\alpha_{\tau}}w_\tau^T\nabla_x\mathcal L(z_{\tau+1},y_{\tau+1}).
\end{align}
Define $\bar E_k^x\triangleq \tfrac{L_{xx}\Gamma_{k-1}(1-\alpha_k)^2}{ 2}\sum_{\tau=0}^{k}\tfrac{(\gamma_\tau-\lambda_\tau)^2}{\Gamma_{\tau}\alpha_{\tau}}\break w_\tau^T\nabla_x\mathcal L(z_{\tau+1},y_{\tau+1})$, $\Xi_k\triangleq E_k^x+\bar E_k^x+\tfrac{L_{xx}\gamma_k^2}{2}\|w_k\|^2$, and $\zeta_k\triangleq{L_{xx}\Gamma_{k}\over 2}\sum_{\tau=0}^{k}{(\gamma_\tau-\lambda_\tau)^2\over\Gamma_{\tau}\alpha_{\tau}}\|w_\tau\|^2$. Summing both sides of \eqref{sum1} over $k$ and using the definition of $C_k$ in \eqref{align1}, we obtain
\begin{align}\label{align11}
\nonumber &\sum_{k=0}^{T-1}\left(\mathcal L{(x_{k+1},y_{k+1})}-\mathcal L{(x_{k},y_{k+1})}\right)\\ \nonumber&\leq-\sum_{k=0}^{T-1}\gamma_k(1-L_{xx}\gamma_k)\|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2\\
\nonumber&\quad+\sum_{k=0}^{T-1} \tfrac{L_{xx}\Gamma_{k}}{2}\sum_{\tau=0}^{k}\tfrac{(\gamma_\tau-\lambda_\tau)^2}{\Gamma_{\tau}\alpha_{\tau}}\|\nabla_x \mathcal L{(z_{\tau+1},y_{\tau+1})}\|^2
\\
\nonumber &\quad+\sum_{k=0}^{T-1}(\Xi_k+\zeta_k)\\
\nonumber &=\tfrac{L_{xx}}{ 2}\sum_{k=0}^{T-1}\tfrac{(\gamma_k-\lambda_k)^2}{\Gamma_{k}\alpha_{k}}\left(\sum_{\tau=k}^{T-1}\Gamma_\tau \right) \|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2\\
&\quad-\sum_{k=0}^{T-1}\gamma_k C_k\|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2+\sum_{k=0}^{T-1}(\Xi_k+\zeta_k).
\end{align}
From \eqref{align11} and the PL condition \eqref{def PL}, one can obtain
\begin{align*}
&\sum_{k=0}^{T-1}(\mathcal L(x_{k+1},y_{k+1})-\mathcal L(x_{k},y_{k+1}))\\ \nonumber&\leq-\sum_{k=0}^{T-1}\gamma_k C_k\mu(\mathcal L{(z_{k+1},y_{k+1})}-\mathcal L(x^\ast(y_{k+1}),y_{k+1}))
\nonumber \\&\quad-\sum_{k=0}^{T-1}{\gamma_k C_k\over 2}\|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2+\sum_{k=0}^{T-1}(\Xi_k+\zeta_k).
\end{align*}
Adding $\sum_{k=0}^{T-1}(\mathcal L(x_{k+1},y)-\mathcal L(x_{k},y))$ to both sides:
\begin{align*}
\nonumber &\sum_{k=0}^{T-1}(\mathcal L(x_{k+1},y)-\mathcal L(x_{k},y))\\
\nonumber&\leq
\sum_{k=0}^{T-1}{(\mathcal L(x_{k+1},y)-\mathcal L(x_{k+1},y_{k+1}))}
\\\nonumber&\quad+ \sum_{k=0}^{T-1}{(\mathcal L(x_{k},y_{k+1})-\mathcal L(x_{k},y))}\\
\nonumber &\quad- \sum_{k=0}^{T-1}\gamma_k C_k\mu{(\mathcal L(z_{k+1},y_{k+1})-\mathcal L(z_{k+1},y))}\\\nonumber&\quad-\sum_{k=0}^{T-1}\gamma_k C_k\mu (\mathcal L(z_{k+1},y)-\mathcal L(x^*(y_{k+1}),y_{k+1}))\\
&\quad-\sum_{k=0}^{T-1}{\gamma_k C_k\over 2}\|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2+\sum_{k=0}^{T-1}(\Xi_k+\zeta_k).
\end{align*}
Using the concavity of $\mathcal L$ in $y$, one can obtain
\begin{align}\label{align14}
\nonumber &\sum_{k=0}^{T-1}(\mathcal L(x_{k+1},y)-\mathcal L(x_{k},y))\\
\nonumber&\leq -\sum_{k=0}^{T-1}\gamma_k C_k\mu(\mathcal L{(z_{k+1},y)}-\mathcal L({x^\ast(y_{k+1})},y_{k+1}))
\\\nonumber&\quad-\sum_{k=0}^{T-1}{\gamma_k C_k\over 2}\|\nabla_x \mathcal L{(z_{k+1},y_{k+1})}\|^2\\
\nonumber&\quad+\sum_{k=0}^{T-1}[\langle\nabla_y\mathcal L{(x_{k+1},y_{k+1})}\\&\quad+\gamma_kC_k\mu\nabla_y\mathcal L{(z_{k+1},y_{k+1})},y-y_{k+1}\rangle
\\ \nonumber&\quad+\langle
\nabla_y\mathcal L{(x_{k},y_{k+1})},y_{k+1}-y\rangle
]+\sum_{k=0}^{T-1}(\Xi_k+\zeta_k).
\end{align}
Let us define $u_k^1=\nabla_y\mathcal L_{\mathcal V_k}{(x_{k},y_{k})}-\nabla_y\mathcal L{(x_{k},y_{k})}$, $u_k^2=\nabla_y\mathcal L_{\mathcal V_k}{(x_{k-1},y_{k})}-\nabla_y\mathcal L{(x_{k-1},y_{k})}$, $u_k^3=\nabla_y \mathcal L_{\mathcal V_k}{(z_{k+1},y_{k})}-\nabla_y \mathcal L{(z_{k+1},y_{k})}$, $\bar p_k=\nabla_y \mathcal L{(z_{k+1},y_{k})}$, and $\bar q_k={1\over \beta_k}(\nabla_y\mathcal L{(x_{k},y_{k})}-\nabla_y\mathcal L{(x_{k-1},y_{k})})$. From the optimality condition of step \ref{update y} in Algorithm \ref{alg1}, letting $s_{k}=\bar p_{k}+\bar q_{k}+u_k^1+u_k^2+u_k^3$, one can obtain
$h(y_{k+1})-\langle s_{k},y_{k+1}-y\rangle \leq
h(y)+\tfrac{1}{2\sigma_k}[\|y-y_k\|^2-\|y-y_{k+1}\|^2-\|y_{k+1}-y_{k}\|^2].$
Multiplying both sides by $\beta_k=\gamma_kC_k\mu$ and summing over $k$, we obtain,
\begin{align}\label{bound9}
\nonumber&\sum_{k=0}^{T-1}\beta_k(h(y_{k+1})-\langle s_{k},y_{k+1}-y\rangle)\\\nonumber& \leq
\sum_{k=0}^{T-1}\beta_k(h(y)+\tfrac{1}{2\sigma_k}[\|y-y_k\|^2-\|y-y_{k+1}\|^2\\
&\quad-\|y_{k+1}-y_{k}\|^2]).
\end{align}
Now, we simplify the inner products involved in \eqref{align14} and \eqref{bound9} using the definitions of $\bar p_k$ and $\bar q_k$:
\begin{align}\label{simply}
\nonumber& \sum_{k=0}^{T-1}[\langle\nabla_y\mathcal L{(x_{k+1},y_{k+1})}+\beta_k\nabla_y\mathcal L{(z_{k+1},y_{k+1})},y-y_{k+1}\rangle\\ \nonumber& \quad+\langle
\nabla_y\mathcal L{(x_{k},y_{k+1})},y_{k+1}-y\rangle+\langle s_{k},y_{k+1}-y\rangle]\\\nonumber
&=\sum_{k=0}^{T-1}[\beta_{k+1}\langle \bar q_{k+1},y-y_{k+1}\rangle\\&\quad-\beta_k\langle \bar q_k,y-y_k\rangle+\langle \bar q_k, y_{k+1}-y_k\rangle].
\end{align}
Moreover, using Young's inequality and step \ref{update x} in Algorithm \ref{alg1}, one can obtain
\begin{align}\label{align15}
\nonumber&\langle \bar q_k, y_{k+1}-y_k\rangle\\\nonumber&\leq \tfrac{L_{xy}}{ \beta_k}\|x_k-x_{k-1}\|\|y_{k+1}-y_k\|\\
&\leq \tfrac{L_{xy}^2\gamma_{k-1}^2}{2\beta_k\tau_k}\|\nabla_x\mathcal L{(z_k,y_k)}\|^2+\tfrac{\tau_k}{2}\|y_{k+1}-y_k\|^2.
\end{align}
Summing \eqref{bound9} and \eqref{align14}, using \eqref{align15} and \eqref{simply}, we get,
\begin{align}\label{align22}
&\nonumber \sum_{k=0}^{T-1}(\mathcal L(x_{k+1},y)-\mathcal L(x_{k},y))\\
\nonumber&\quad+ \sum_{k=0}^{T-1}\beta_k(\mathcal L(z_{k+1},y)-\mathcal L(x^\ast\zal{(y_{k+1})},y_{k+1}))\\
\nonumber&\quad+ \sum_{k=0}^{T-1}\beta_k(h(y_{k+1})-h(y))\\
&\leq \sum_{k=0}^{T-1}{(\tfrac{L_{xy}^2\gamma_{k-1}^2}{2{\beta_k}\tau_k}-\tfrac{\gamma_k C_k}{2})}\|\nabla_x\mathcal L{(z_k,y_k)}\|^2\\
\nonumber&\quad+\sum_{k=0}^{T-1}[\beta_{k+1}\langle \bar q_{k+1},y-y_{k+1}\rangle-\beta_k\langle \bar q_k,y-y_k\rangle]\\
\nonumber& \quad+ \sum_{k=0}^{T-1}\tfrac{\beta_k}{ 2\sigma_k}[\|y-y_k\|^2-\|y-y_{k+1}\|^2]\\
\nonumber&\quad+\sum_{k=0}^{T-1}{(\tfrac{\tau_k\beta_k}{2}-\tfrac{\beta_k}{{2\sigma_k}})}\|y_{k+1}-y_k\|^2\\
\nonumber&\quad+\tfrac{L_{xy}^2\gamma_0^2}{ 2\beta_0\tau_0}\|\nabla_x\mathcal L{(z_0,y_0)}\|^2-\tfrac{\gamma_{T-1}C_{T-1}}{ 2}\|\nabla_x \mathcal L (z_{T},y_T)\|^2\\
\nonumber&\quad+\sum_{k=0}^{T-1}\langle\beta_ku_k^3+u_k^1-u_k^2,y_{k+1}-y\rangle+\sum_{k=0}^{T-1}(\Xi_k+\zeta_k),
\end{align}
where $\gamma_{-1} = \gamma_0$. From the Cauchy--Schwarz inequality, using Lemma \ref{lem:error} with $v_0=y_0$, and defining $U_k\triangleq\langle \beta_ku_k^3+u_k^1-u_k^2,y_k-v_k\rangle$, the following holds
\begin{align}\label{2align22}
\nonumber &\langle\beta_ku_k^3+u_k^1-u_k^2,y_{k+1}-y\pm y_k\rangle\\
\nonumber&\leq \tfrac{1}{ 2\bar \alpha_k}\|\beta_ku_k^3+u_k^1-u_k^2\|^2+\tfrac{\bar \alpha_k}{ 2}\|y_{k+1}-y_k\|^2\\
\nonumber&\quad+\langle \beta_ku_k^3+u_k^1-u_k^2,y_{k}-y\pm v_k\rangle\\
\nonumber &\leq \tfrac{1}{ \bar \alpha_k}\|\beta_ku_k^3+u_k^1-u_k^2\|^2+\tfrac{\bar \alpha_k}{ 2}\|y_{k+1}-y_k\|^2\\
&\quad+\tfrac{\bar \alpha_k}{ 2}\|y-v_k\|^2-\tfrac{\bar \alpha_k}{ 2}\|y-v_{k+1}\|^2+U_k,
\end{align}
for some $\bar \alpha_k> 0$. Hence, using \eqref{2align22} in \eqref{align22} and rearranging terms, one can obtain the following:
\begin{align}\label{3align23}
&\quad -\sum_{k=0}^{T-1}\underbrace{(\tfrac{L_{xy}^2{\gamma_{k-1}}^2}{2{\beta_k}\tau_k}-\tfrac{\gamma_k C_k}{2})}_{\text{term (A)}}\|\nabla_x\mathcal L{(z_k,y_k)}\|^2\\
\nonumber&\quad-\sum_{k=0}^{T-1}\underbrace{(\tfrac{\tau_k\beta_k}{2}-\tfrac{\beta_k}{{2\sigma_k}}+\tfrac{\bar \alpha_k}{ 2})}_{\text{term (B)}}\|y_{k+1}-y_k\|^2\\
\nonumber&\leq -\sum_{k=0}^{T-1}(\mathcal L(x_{k+1},y)-\mathcal L(x_{k},y))\\
\nonumber&\quad- \sum_{k=0}^{T-1}\beta_k(\mathcal L(z_{k+1},y)-\mathcal L(x^\ast(y_{k+1}),y_{k+1}))\\
\nonumber&\quad- \sum_{k=0}^{T-1}\beta_k(h(y_{k+1})-h(y))\\
\nonumber&\quad +\sum_{k=0}^{T-1}[\beta_{k+1}\langle \bar q_{k+1},y-y_{k+1}\rangle-\beta_k\langle \bar q_k,y-y_k\rangle] \\
\nonumber&\quad+\sum_{k=0}^{T-1}\tfrac{\beta_k}{ 2\sigma_k}[\|y-y_k\|^2-\|y-y_{k+1}\|^2\\
\nonumber&\quad+\tfrac{1}{ 2}\|y-v_k\|^2-\frac{1}{2}\|y-v_{k+1}\|^2]\\
\nonumber&\quad+\tfrac{L_{xy}^2\gamma_0^2}{ 2\beta_0\tau_0}\|\nabla_x\mathcal L{(z_0,y_0)}\|^2-\tfrac{\gamma_{T-1}C_{T-1}}{ 2}\|\nabla_x \mathcal L (z_{T},y_T)\|^2\\
\nonumber&\quad+\sum_{k=0}^{T-1}\tfrac{1}{ \bar\alpha_k}\|\beta_ku_k^3+u_k^1-u_k^2\|^2+\sum_{k=0}^{T-1}(\Xi_k+\zeta_k+U_k).
\end{align}
Choosing the parameters such that $\sigma_k \leq\tfrac{\mu}{36L^2_{xy}}$, $\alpha_k=\tfrac{2}{k+1}$, $\lambda_k=\tfrac{1}{2L_{xx}}$, $\tau_k=\tfrac{9L^2_{xy}}{\mu}$, $\bar\alpha_k=\tfrac{\beta_k}{4\sigma_k}$, and $\gamma_k\in [\lambda_k,(1+\alpha_k/4)\lambda_k]$ for any $k\geq 0$, one can show that in \eqref{3align23} term (A) $\leq-\tfrac{\gamma_k C_k}{4}$ and term (B) $\leq -\tfrac{\beta_k}{4\sigma_k}$. Therefore, choosing $k^*=\mathop{\rm argmin}_{k}\{\|\nabla_x \mathcal L (z_{k},y_{k})\|^2+\|y_{k+1}-y_{k}\|^2\}$, the left-hand side (LHS) of \eqref{3align23} can be bounded from below by $\big(\sum_{k=0}^{T-1} \min \{{\tfrac{{\gamma_k C_k}}{4}, \tfrac{\beta_k}{{4\sigma_k}}}\}\big)\big(\|\nabla_x \mathcal L(z_{k^*},y_{k^*})\|^2+\|y_{k^*+1}-y_{k^*}\|^2\big)$.
Moreover, letting $(x^*,y^*)$ be an arbitrary saddle-point solution of \eqref{main}, choosing $y=y^*$, and using \eqref{align15} together with the fact that $\mathcal L(x^\ast{(y_{k+1})},y_{k+1}) \leq \mathcal L(x^*,y_{k+1})$, one can obtain:
\begin{align}
\nonumber&\|\nabla_x \mathcal L(z_{k^*},y_{k^*})\|^2+\|y_{k^*+1}-y_{k^*}\|^2\\
&\leq \nonumber \tfrac{1}{TD}\biggr[ \mathcal L(x_0,y^*)-\mathcal L(x_T,y^*) +\tfrac{3\beta_0}{4\sigma_0}\|y^*-y_0\|^2\\\nonumber&\quad + \tfrac{{L^2_{xy}}\gamma_0^2}{2\beta_0\tau_0}\|\nabla_x \mathcal L(z_0,y_0)\|^2\\\nonumber
&\quad+\sum_{k=0}^{T-1}\tfrac{1}{\bar \alpha_k} \underbrace{\|\beta_k u_k^3+u_k^1-u_k^2\|^2}_{\text{term (C)}}+\sum_{k=0}^{T-1}(\Xi_k+\zeta_k+U_k)\biggr],
\end{align}
where $D\triangleq \min_k \{{\tfrac{{\gamma_k C_k}}{4}, \tfrac{\beta_k}{{4\sigma_k}}}\}$ and we used $\sum_{k=0}^{T-1}D=TD$. Taking conditional expectations, one can show that $\mathbb E[\text{term (C)} \mid \mathcal H_k ]\leq \tfrac{9\nu^2_y}{T}$, $\mathbb E[\Xi_k \mid \mathcal F_k]\leq \tfrac{L_{xx} \gamma^2_k \nu^2_x}{2T}$, $\mathbb E[\zeta_k \mid \mathcal F_k]\leq \tfrac{L_{xx} \lambda^2_k \nu^2_x}{32T}$, and $\mathbb E[U_k \mid \mathcal H_k ]=0$. Hence, we obtain:
\begin{align}\label{last inequal}
\nonumber&\mathbb E \biggr[ \|\nabla_x \mathcal L(z_{k^*},y_{k^*})\|^2+\|y_{k^*+1}-y_{k^*}
\|^2\biggr]\\
\nonumber&\leq
\tfrac{1}{TD} \biggr[ \mathcal L(x_0,y^*)-\mathcal L(x_T,y^*) +\tfrac{3\beta_0}{4\sigma_0}\|y^*-y_0\|^2\\&\qquad + \tfrac{{L^2_{xy}}\gamma_0^2}{2\beta_0\tau_0}\|\nabla_x \mathcal L(z_0,y_0)\|^2 \\ \nonumber&\qquad +\sum_{k=0}^{T-1}\left( \tfrac{9\nu^2_y}{\bar \alpha_k T}+\tfrac{L_{xx} \gamma^2_k \nu^2_x}{2T}+\tfrac{L_{xx} \lambda^2_k \nu^2_x}{32T}\right)\biggr]\leq \mathcal O(1/T).
\end{align}
Moreover, from the steps of Algorithm \ref{alg1}, $\|z_{k+1}-z_k\|\leq \lambda_{k-1}\|r_{k-1}\|+\|x_k-\tilde x_k\|$. Using steps \ref{update x} and \ref{update tilde x}, one can show that $\|x_k-\tilde x_k\|\leq \mathcal O(\tfrac{1}{(T+1)\sqrt T})$. Hence, $\|z_{k+1}-z_k\|\leq \mathcal O(\sqrt\epsilon)$. Invoking Lemma \ref{epsilon stationary}, we conclude that $(z_{k^*},y_{k^*})$ is an $\epsilon$-stationary point of problem \eqref{main}.
To achieve an $\epsilon$-stationary point, we set the right-hand side of \eqref{last inequal} equal to $\epsilon^2$, which implies that $T=\mathcal O(\epsilon^{-2})$. Hence, since we chose $b=T$, the total number of sample-gradient evaluations is $\sum_{k=0}^{T-1}b=T^2=\mathcal O(\epsilon^{-4})$. \qed
\end{document} |
\begin{document}
\author[M. Nasernejad, A. A. Qureshi, K. Khashyarmanesh, and L. G. Roberts]{ Mehrdad ~Nasernejad$^{1}$, Ayesha Asloob Qureshi$^{2,*}$, Kazem Khashyarmanesh$^{1}$, and Leslie G. Roberts$^{3}$}
\title[Classes of normally and nearly normally torsion-free ideals]{Classes of normally and nearly normally torsion-free monomial ideals}
\subjclass[2010]{13B25, 13F20, 05E40.}
\keywords { Normally torsion-free ideals, Nearly normally torsion-free ideals, Associated prime ideals, $t$-spread monomial ideals, Hypergraphs.}
\thanks{$^*$Corresponding author}
\thanks{E-mail addresses: m\_nasernejad@yahoo.com, aqureshi@sabanciuniv.edu, khashyar@ipm.ir, and robertsl@queensu.ca}
\maketitle
\begin{center}
{\it
$^1$Department of Pure Mathematics,
Ferdowsi University of Mashhad,\\
P.O.Box 1159-91775, Mashhad, Iran\\
$^2$Sabanc\i\;University, Faculty of Engineering and Natural Sciences, \\
Orta Mahalle, Tuzla 34956, Istanbul, Turkey\\
$^3$Department of Mathematics and Statistics,
Queen's University, \\
Kingston, Ontario, Canada, K7L 3N6
}
\end{center}
\begin{abstract}
In this paper, our main focus is to explore different classes of nearly normally torsion-free ideals. We first characterize all finite simple connected graphs with nearly normally torsion-free cover ideals. Next, we characterize all normally torsion-free $t$-spread principal Borel ideals that can also be viewed as edge ideals of uniform multipartite hypergraphs.
\end{abstract}
\section{Introduction}
Let $\mathcal{H}=(V_{\mathcal{H}},E_{\mathcal{H}})$ be a simple hypergraph on vertex set $V_{\mathcal{H}}$ and edge set $E_{\mathcal{H}}$. The edge ideal of $\mathcal{H}$, denoted by $I(\mathcal{H})$, is the ideal generated by the monomials corresponding to the edges of $\mathcal{H}$. A hypergraph $\mathcal{H}$ is called {\em Mengerian} if it satisfies a certain min-max equation, which is known as the Mengerian property in hypergraph theory or as the max-flow min-cut property in integer programming. Algebraically, it is equivalent to $I(\mathcal{H})$ being normally torsion-free, see \cite[Corollary 10.3.15]{HH1}, \cite[Theorem 14.3.6]{V1}.
Let $R$ be a commutative Noetherian ring and $I$ an ideal of $R$. In addition, let $\mathrm{Ass}_R(R/I)$ be the set of all prime ideals associated to $I$. An ideal $I$ is called {\it normally torsion-free} if $\mathrm{Ass}(R/I^k)\subseteq \mathrm{Ass}(R/I)$ for all $k\geq 1$ \cite[Definition 1.4.5]{HH1}. In particular, if $I$ has no embedded primes, then $I$ is normally torsion-free if and only if the ordinary powers of $I$ coincide with the symbolic powers of $I$. Normally torsion-free ideals have been the topic of several papers; however, only a few classes of these ideals originate from graph theory. Simis, Vasconcelos and Villarreal showed in \cite{SVV} that a finite simple graph is bipartite if and only if its edge ideal is normally torsion-free. An analogue of bipartite graphs in higher dimensions is considered to be a hypergraph that avoids ``special odd cycles''. Such hypergraphs are called {\em balanced}, and a well-known result of Fulkerson, Hoffman and Oppenheim in \cite{FHO} states that balanced hypergraphs are Mengerian. It follows immediately that the edge ideals of balanced hypergraphs are normally torsion-free. However, unlike the case of bipartite graphs, it should be noted that the converse of this statement is not true. We refer the reader to \cite{B} for the related definitions in hypergraph theory.
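Since the Simis--Vasconcelos--Villarreal theorem reduces normal torsion-freeness of the edge ideal $I(G)$ to bipartiteness of $G$, the property can be tested algorithmically by the standard BFS $2$-coloring argument, which fails exactly when an odd cycle is found. The sketch below is illustrative only and not part of the paper:

```python
from collections import deque

def is_bipartite(vertices, edges):
    """BFS 2-coloring: a finite simple graph is bipartite iff it admits a
    proper 2-coloring, iff (by Simis-Vasconcelos-Villarreal) its edge ideal
    is normally torsion-free."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for s in vertices:           # handle each connected component
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return False  # odd cycle found
    return True
```

A four-cycle passes the test while a triangle fails it, matching the bipartite/non-bipartite dichotomy.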
A monomial ideal $I$ in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$ is called {\it nearly normally torsion-free} if there exist a positive integer $k$ and a monomial prime ideal
$\mathfrak{p}$ such that $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$, see \cite[Definition 2.1]{Claudia}. This concept generalizes normally torsion-freeness to some extent. Recently, the author, in \cite[Theorem 2.3]{Claudia}, characterized all connected graphs whose edge ideals are nearly normally torsion-free. Our main aim is to further explore different classes of nearly normally torsion-free ideals and normally torsion-free ideals that originate from hypergraph theory.
This paper is organized as follows: In Section~\ref{prem}, we fix the notation and definitions used throughout the paper. In Section~\ref{generalresults}, we develop several criteria for investigating whether an ideal is normally torsion-free. For this purpose, we employ the notion of the monomial localization of a monomial ideal with respect to a monomial prime ideal, see Lemmas \ref{Lem. 1} and \ref{Lem. 2}. We then provide two applications of Corollary \ref{Cor. 1}. In the first application, we give a class of nearly normally torsion-free monomial ideals arising from monomial ideals of intersection type (Proposition \ref{App. 1}). In the second application, we turn our attention to $t$-spread principal Borel ideals and reprove one of the main results of \cite{Claudia} (Proposition \ref{App. 2}). We close Section~\ref{generalresults} with a concrete example of a nearly normally torsion-free ideal that does not have the strong persistence property, see Definition~\ref{spersistence}. Note that the ideal in our example is not square-free; the question of whether nearly normally torsion-free square-free monomial ideals have the persistence or the strong persistence property remains open.
In Section~\ref{coverideals}, one of our main results is Theorem \ref{Main. 1}, which proves that if $G$ is a finite simple connected graph, then the cover ideal of $G$ is nearly normally torsion-free if and only if $G$ is either a bipartite graph or an almost bipartite graph. In proving this, we found reference \cite{JS} very helpful. The second main result of Section~\ref{coverideals} is Corollary~\ref{Cor. NFT1}, which provides a new class of normally torsion-free ideals built from existing ones. To this end, we start with a hypergraph $\mathcal{H}$ whose edge ideal is normally torsion-free; in other words, we take a Mengerian hypergraph $\mathcal{H}$. Using the notion of coloring of hypergraphs, we show in Theorem~\ref{NTF1} that the new hypergraph $\mathcal{H'}$ obtained by adding a ``whisker'' to $\mathcal{H}$ is also Mengerian. Here, adding a whisker to $\mathcal{H}$ means adding a new vertex together with an edge of size two consisting of this new vertex and an existing vertex of $\mathcal{H}$.
In Section~\ref{borel}, we study $t$-spread principal Borel ideals, see Definition~\ref{boreldef}. An interesting feature of these ideals is recorded in Theorem~\ref{complete}, which shows that a $t$-spread principal Borel ideal is normally torsion-free if and only if it can be viewed as the edge ideal of a certain $d$-uniform $d$-partite hypergraph. As noted in Example~\ref{oddcycle}, the hypergraph associated to a $t$-spread principal Borel ideal may contain special odd cycles, and hence these hypergraphs are not necessarily balanced. This prevents us from applying the result of Fulkerson, Hoffman and Oppenheim in \cite{FHO}. We prove Theorem~\ref{complete} by combining algebraic and combinatorial techniques. We make use of \cite[Theorem 3.7]{SNQ}, which gives a criterion for checking whether an ideal is normally torsion-free. In addition, we use the notion of the linear relation graph associated to monomial ideals, see Definition~\ref{linearrelationgraphdef}, to study the set of associated prime ideals of $t$-spread principal Borel ideals.
\section{Preliminaries}\label{prem}
Let $R$ be a commutative Noetherian ring and $I$ be an ideal of $R$. A prime ideal $\mathfrak{p}\subset R$ is an {\it associated prime} of $I$ if there exists an element $v$ in $R$ such that $\mathfrak{p}=(I:_R v)$, where $(I:_R v)=\{r\in R |~ rv\in I\}$. The {\it set of associated primes} of $I$, denoted by $\mathrm{Ass}_R(R/I)$, is the set of all prime ideals associated to $I$.
The minimal members of $\mathrm{Ass}_R(R/I)$ are called the {\it minimal} primes of $I$, and $\mathrm{Min}(I)$ denotes the set of minimal prime ideals of $I$. Moreover, the associated primes of $I$ which are not minimal are called the {\it embedded} primes of $I$. If $I$ is a square-free monomial ideal, then $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)$, for example see \cite[Corollary 1.3.6]{HH1}.
\begin{definition}\label{spersistence}
The ideal $I$ is said to have the {\it persistence property} if $\mathrm{Ass}(R/I^k)\subseteq \mathrm{Ass}(R/I^{k+1})$
for all positive integers $k$. Moreover, an ideal $I$ satisfies the {\it strong persistence property} if $(I^{k+1}: I)=I^k$ for all positive integers $k$; for more details, we refer to \cite{HQ, N2}. The strong persistence property implies the persistence property, but the converse is not true, as noted in \cite{HQ}.
\end{definition}
We now recall the definition of the symbolic powers of an ideal.
\begin{definition} (\cite[Definition 4.3.22]{V1})
Let $I$ be an ideal of a ring $R$ and $\mathfrak{p}_1, \ldots, \mathfrak{p}_r$ the minimal primes of $I$. Given an integer $n \geq 1$, the {\it $n$-th symbolic
power} of $I$ is defined to be the ideal
$$I^{(n)} = \mathfrak{q}_1 \cap \cdots \cap \mathfrak{q}_r,$$
where $\mathfrak{q}_i$ is the primary component of $I^n$ corresponding to $\mathfrak{p}_i$.
\end{definition}
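To illustrate this definition, recall the well-known fact that for a square-free monomial ideal $I$ one has $I^{(n)}=\bigcap_{\mathfrak{p}\in \mathrm{Min}(I)}\mathfrak{p}^{n}$. For example, let $I=(x_1x_2, x_1x_3, x_2x_3)\subset K[x_1, x_2, x_3]$ be the edge ideal of a triangle, so that $\mathrm{Min}(I)=\{(x_1,x_2), (x_1,x_3), (x_2,x_3)\}$ and
$$I^{(2)}=(x_1,x_2)^2 \cap (x_1,x_3)^2 \cap (x_2,x_3)^2.$$
The monomial $x_1x_2x_3$ lies in each of these three components, and hence in $I^{(2)}$, whereas $I^2$ is generated in degree $4$, so that $x_1x_2x_3\notin I^2$. In particular, ordinary and symbolic powers need not coincide.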
Furthermore, we say that $I$ has the {\it symbolic strong persistence property} if $(I^{(k+1)}: I^{(1)})=I^{(k)}$ for all $k\geq 1$, where $I^{(k)}$ denotes the $k$-th symbolic power of $I$, cf. \cite{KNT, RT}. \par
Let $R$ be a commutative ring with unity and $I$ an ideal in $R$. An element $f\in R$ is {\it integral} over $I$ if it satisfies an equation
$$f^k+c_1f^{k-1}+\cdots +c_{k-1}f+c_k=0 ~~\mathrm{with} ~~ c_i\in I^i.$$
The set $\overline{I}$ of all elements of $R$ that are integral over $I$ is the
{\it integral closure} of $I$. The ideal $I$ is {\it integrally closed} if $I=\overline{I}$, and $I$ is {\it normal} if all powers of $I$ are integrally closed; we refer to \cite{HH1} for more information. \par
In particular, if $I$ is a monomial ideal, then
the notion of integrality becomes simpler, namely, a monomial
$u \in R=K[x_1, \ldots, x_n]$ is integral
over $I\subset R$ if and only if there exists an integer $k$ such that $u^k \in I^k$, see \cite[Theorem 1.4.2]{HH1}.
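For instance, let $I=(x_1^2, x_2^2)\subset K[x_1, x_2]$ and $u=x_1x_2$. Then $u\notin I$, while $u^2=x_1^2x_2^2\in I^2$, so $u$ is integral over $I$; in fact, $\overline{I}=(x_1^2, x_1x_2, x_2^2)$, and hence $I$ is not integrally closed.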
An ideal $I$ is called {\it normally torsion-free} if $\mathrm{Ass}(R/I^k)\subseteq \mathrm{Ass}(R/I)$, for all $k\geq 1$. If $I$ is a square-free monomial ideal, then $I$ is normally torsion-free if and only if $I^k=I^{(k)}$, for all $k \geq 1$, see \cite[Theorem 1.4.6]{HH1}.
\begin{definition}
A monomial ideal $I$ in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$ is called {\it nearly normally torsion-free} if there exist a positive integer $k$ and a monomial prime ideal
$\mathfrak{p}$ such that $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$, see \cite[Definition 2.1]{Claudia}.
\end{definition}
Next, we recall some notions which are related to graph theory and hypergraphs.
\begin{definition}
Let $G=(V(G), E(G))$ be a finite simple graph on the vertex set $V(G)=\{1,\ldots,n\}$. Then the {\it edge ideal } associated to $G$ is the monomial ideal
$$I(G)=(x_ix_j \ : \ \{i,j\}\in E(G))
\subset R=K[x_1,\ldots, x_n]. $$
\end{definition}
A finite {\it hypergraph} $\mathcal{H}$ on a vertex set $[n]=\{1,2,\ldots,n\}$ is a collection of edges $\{E_1, \ldots, E_m\}$ with $E_i \subseteq [n]$ for all $i=1, \ldots,m$. The vertex set $[n]$ of $\mathcal{H}$ is denoted by $V_{\mathcal{H}}$, and the edge set of $\mathcal{H}$ by $E_{\mathcal{H}}$; thus a hypergraph is represented as a pair $(V_{\mathcal{H}}, E_{\mathcal{H}})$. A hypergraph $\mathcal{H}$ is called {\it simple} if $E_i \subseteq E_j$ implies $i = j$. Moreover, if $|E_i|=d$ for all $i=1, \ldots, m$, then $\mathcal{H}$ is called a {\em $d$-uniform} hypergraph; a $2$-uniform hypergraph is just a finite simple graph. If $\mathcal{W}$ is a subset of the vertices of $\mathcal{H}$, then the {\it induced subhypergraph} of $\mathcal{H}$ on $\mathcal{W}$ is $(\mathcal{W}, E_{\mathcal{W}})$, where $E_{\mathcal{W}}=\{E\cap \mathcal{W}: E \in E_{\mathcal{H}} \text{ and } E \cap \mathcal{W} \neq \emptyset\}$.
In \cite{HaV}, H\`a and Van Tuyl extended the concept of the edge ideal to hypergraphs.
\begin{definition} \cite{HaV}
Let $\mathcal{H}$ be a hypergraph on the vertex set $V_{\mathcal{H}}=[n]$, and $E_{\mathcal{H}} = \{E_1, \ldots, E_m\}$ be the edge set of $\mathcal{H}$. Then the {\it edge ideal} corresponding
to $\mathcal{H}$ is given by
$$I(\mathcal{H}) = (\{x^{E_i}~ | ~E_i\in E_{\mathcal{H}}\}),$$
where $x^{E_i}=\prod_{j\in E_i} x_j$.
A subset $W \subseteq V_{\mathcal{H}}$ is a {\it vertex cover} of $\mathcal{H}$ if $W \cap E_i\neq \emptyset$ for all $i=1, \ldots, m$. A vertex cover $W$ is {\it minimal} if no proper subset of $W$ is a vertex cover of $\mathcal{H}$. Let $W_1, \ldots, W_t$ be the minimal vertex covers of $\mathcal{H}$. Then the cover ideal of the hypergraph $\mathcal{H}$,
denoted by $J(\mathcal{H})$, is given by
$J(\mathcal{H})=(X_{W_1}, \ldots, X_{W_t}),$ where
$X_{W_j}=\prod_{r\in W_j}x_r$ for each $j=1, \ldots, t$.
\end{definition}
Moreover, recall that the {\it Alexander dual} of a square-free monomial ideal $I$, denoted by $I^\vee$, is given by
$$I^\vee= \bigcap_{u\in \mathcal{G}(I)} (x_i~:~ x_i|u).$$
In particular, by \cite[Proposition 2.7]{FHM}, one has $J(\mathcal{H})=I(\mathcal{H})^{\vee}$, where
$I(\mathcal{H})^{\vee}$ denotes the Alexander dual of $I(\mathcal{H})$.
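To illustrate these notions, let $\mathcal{H}$ be the path graph on the vertex set $\{1,2,3\}$ with edges $\{1,2\}$ and $\{2,3\}$, so that $I(\mathcal{H})=(x_1x_2, x_2x_3)$. The minimal vertex covers of $\mathcal{H}$ are $\{2\}$ and $\{1,3\}$, whence $J(\mathcal{H})=(x_2, x_1x_3)$. On the other hand,
$$I(\mathcal{H})^{\vee}=(x_1, x_2)\cap (x_2, x_3)=(x_2, x_1x_3),$$
in accordance with the equality $J(\mathcal{H})=I(\mathcal{H})^{\vee}$.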
Throughout this paper, we denote the unique minimal set of monomial generators of a monomial ideal $I$ by $\mathcal{G}(I)$. Also, $R=K[x_1,\ldots, x_n]$ is a polynomial ring over a field $K$, $\mathfrak{m}=(x_1, \ldots, x_n)$ is the graded maximal ideal of $R$, and $x_1, \ldots, x_n$ are indeterminates.
\section{Some classes of nearly normally torsion-free ideals}\label{generalresults}
In this section, our aim is to give additional classes of nearly normally torsion-free monomial ideals. To achieve this, we first recall the definition of the monomial localization of a monomial ideal with respect to a monomial prime ideal.
Let $I$ be a monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$.
We denote by $V^*(I)$ the set of monomial prime ideals containing $I$. Let $\mathfrak{p}=(x_{i_1}, \ldots, x_{i_r})$ be a monomial prime ideal. The {\it monomial localization} of $I$ with respect to $\mathfrak{p}$, denoted by $I(\mathfrak{p})$, is the ideal in the polynomial ring $R(\mathfrak{p})=K[x_{i_1}, \ldots, x_{i_r}]$ which is obtained from $I$ by applying the $K$-algebra homomorphism $R\rightarrow R(\mathfrak{p})$ with $x_j\mapsto 1$ for all $x_j\notin \{x_{i_1}, \ldots, x_{i_r}\}$.
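For example, if $I=(x_1x_2, x_2x_3, x_3x_4)\subset K[x_1, x_2, x_3, x_4]$ and $\mathfrak{p}=(x_1, x_2, x_3)$, then applying $x_4\mapsto 1$ gives
$$I(\mathfrak{p})=(x_1x_2, x_2x_3, x_3)=(x_1x_2, x_3)$$
in $R(\mathfrak{p})=K[x_1, x_2, x_3]$.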
\begin{lemma} \label{Lem. 1}
Let $I$ be a nearly normally torsion-free monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$. Then there exists a monomial prime ideal $\mathfrak{p}\in V^*(I)$ such that $I(\mathfrak{p}\setminus \{x_i\})$ is normally torsion-free for all $x_i\in \mathfrak{p}$.
\end{lemma}
\begin{proof}
Since $I$ is nearly normally torsion-free, there exist a positive integer $k$ and a monomial prime ideal $\mathfrak{p}$ such that $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$. We claim that $I(\mathfrak{p}\setminus \{x_i\})$ is normally torsion-free for all $x_i\in \mathfrak{p}$. To do this, fix $x_i\in \mathfrak{p}$, and set $\mathfrak{q}:=\mathfrak{p}\setminus \{x_i\}$.
We need to show that
$\mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^\ell) \subseteq \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I(\mathfrak{q}))$ for all $\ell$. Fix $\ell \geq 1$.
It follows from \cite[Lemma 4.6]{RNA} that
$$\mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^\ell)=
\mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I^\ell(\mathfrak{q}))=\{Q~:~ Q\in \mathrm{Ass}_{R}(R/I^\ell)~\mathrm{and}~ Q \subseteq \mathfrak{q}\}.$$
Pick an arbitrary element $Q\in \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^\ell)$. Thus, $Q\in \mathrm{Ass}_{R}(R/I^\ell)$ and $ Q \subseteq \mathfrak{q}$. Since $\mathfrak{q}=\mathfrak{p}\setminus \{x_i\}$, this yields that $Q\neq \mathfrak{p}$; thus, one must have $Q\in \mathrm{Min}(I)$. Hence, $Q\in \mathrm{Ass}_R(R/I)$, and so $Q\in \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I(\mathfrak{q}))$. Therefore,
$I(\mathfrak{p}\setminus \{x_i\})$ is normally torsion-free.
\end{proof}
\begin{lemma} \label{Lem. 2}
Let $I$ be a monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$ such that $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)$. Let $I(\mathfrak{m}\setminus \{x_i\})$ be normally torsion-free for all $i=1, \ldots, n$, where $\mathfrak{m}=(x_1, \ldots, x_n)$. Then $I$ is nearly normally torsion-free.
\end{lemma}
\begin{proof}
Since $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)$, it is enough to show that $\mathrm{Ass}_R(R/I^k) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{m}\}$ for all $k \geq 2.$
To achieve this, fix $k \geq 2$, and take an arbitrary element $Q \in \mathrm{Ass}_R(R/I^k)$. If $Q=\mathfrak{m}$, then the proof is over. Hence, let $Q\neq \mathfrak{m}$.
Since $Q$ is a monomial prime ideal with $Q\neq \mathfrak{m}$, we have $Q\subseteq \mathfrak{m}\setminus \{x_j\}$ for some $x_j \in \mathfrak{m}$.
Put $\mathfrak{q}:=\mathfrak{m}\setminus \{x_j\}$. Because $I(\mathfrak{q})$ is normally torsion-free, we thus have $\mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^k) \subseteq \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I(\mathfrak{q}))$. Also, one can deduce from \cite[Lemma 4.6]{RNA} that $Q\in \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^k)$, and so
$Q \in \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I(\mathfrak{q}))$. This yields that $Q \in\mathrm{Ass}_R(R/I)$, and hence $Q\in \mathrm{Min}(I)$.
This completes the proof.
\end{proof}
As an immediate consequence of Lemma \ref{Lem. 2}, we obtain the following corollary:
\begin{corollary} \label{Cor. 1}
Let $I$ be a square-free monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$. Let $I(\mathfrak{m}\setminus \{x_i\})$ be normally torsion-free for all $i=1, \ldots, n$, where $\mathfrak{m}=(x_1, \ldots, x_n)$. Then $I$ is nearly normally torsion-free.
\end{corollary}
As an application of Lemma \ref{Lem. 2}, we give a class of nearly normally torsion-free monomial ideals in the subsequent proposition. To this end, recall from \cite{HV} that a monomial ideal is said to be a {\it monomial ideal of intersection type} when it can be presented as an intersection of powers of monomial prime ideals.
\begin{proposition} \label{App. 1}
Let $R=K[x_1, \ldots, x_n]$ be a polynomial ring and
$I=\bigcap_{\mathfrak{p}\in \mathrm{Ass}_R(R/I)}\mathfrak{p}^{d_{\mathfrak{p}}}$ be a monomial ideal of intersection type such that for any $1\leq i \leq n$, there exists a unique
$\mathfrak{p}\in \mathrm{Ass}_R(R/I)$ with $I(\mathfrak{m}\setminus \{x_i\})=\mathfrak{p}^{d_{\mathfrak{p}}}$, where $\mathfrak{m}=(x_1, \ldots, x_n)$. Then $I$ is nearly normally torsion-free.
\end{proposition}
\begin{proof}
We first note that the assumption implies that the monomial ideal $I$ must have the following form
$$I=(\mathfrak{m}\setminus \{x_1\})^{d_1} \cap (\mathfrak{m}\setminus \{x_2\})^{d_2} \cap \cdots \cap (\mathfrak{m}\setminus \{x_n\})^{d_n},$$
for some positive integers $d_1, \ldots, d_n$.
It follows that $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)$. In addition, since $(\mathfrak{m}\setminus \{x_i\})^{d_i}$ is normally torsion-free for each $i=1, \ldots, n$, the claim follows promptly from Lemma \ref{Lem. 2}.
\end{proof}
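As a concrete instance of Proposition \ref{App. 1}, take $R=K[x_1, x_2, x_3]$ and
$$I=(x_2, x_3)\cap (x_1, x_3)\cap (x_1, x_2)=(x_1x_2, x_1x_3, x_2x_3),$$
so that $d_1=d_2=d_3=1$. For each $i$, substituting $x_i\mapsto 1$ shows that $I(\mathfrak{m}\setminus \{x_i\})=\mathfrak{m}\setminus \{x_i\}$; for example, $I(\mathfrak{m}\setminus \{x_1\})=(x_2, x_3, x_2x_3)=(x_2, x_3)$. Hence $I$ is nearly normally torsion-free. Note that $I$ is not normally torsion-free, since $(I^2:_R x_1x_2x_3)=\mathfrak{m}$, and so $\mathfrak{m}\in \mathrm{Ass}_R(R/I^2)$.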
To show Lemma \ref{Lem. Multipe}, we require the following auxiliary result.
\begin{theorem} (\cite[Theorem 5.2]{KHN2})\label{5.2KHN}
Let $I$ be a monomial ideal of $R$ with $\mathcal{G}(I) =\{u_1,\ldots,u_m\}$. Also assume that there exists a monomial $h=x_{j_1}^{b_1}\cdots x_{j_s}^{b_s}$ such that $h | u_i$ for all $i=1,\ldots,m$. By setting $J:=(u_1/h,\ldots,u_m/h)$, we have $$\mathrm{Ass}_R(R/I)=\mathrm{Ass}_R(R/J)\cup\{ (x_{j_1}),\ldots,(x_{j_s})\}.$$
\end{theorem}
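To illustrate Theorem \ref{5.2KHN}, let $I=(x_1x_2, x_1x_3)\subset K[x_1, x_2, x_3]$, $h=x_1$, and $J=(x_2, x_3)$. Since $I=(x_1)\cap (x_2, x_3)$ is square-free, we get
$$\mathrm{Ass}_R(R/I)=\{(x_1), (x_2, x_3)\}=\mathrm{Ass}_R(R/J)\cup \{(x_1)\},$$
as predicted.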
The next lemma says that, under suitable conditions, a monomial ideal is nearly normally torsion-free if and only if a monomial multiple of it is. It is an updated version of
\cite[Lemma 3.5]{HLR} and \cite[Lemma 3.12]{SN}.
\begin{lemma}\label{Lem. Multipe}
Let $I$ be a monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$, and $h$ be a monomial in $R$ such that $\mathrm{gcd}(h,u)=1$ for all $u\in \mathcal{G}(I)$. Then $I$ is nearly normally torsion-free if and only if $hI$ is nearly normally torsion-free.
\end{lemma}
\begin{proof}
$(\Rightarrow)$ Assume that $I$ is nearly normally torsion-free. Let $h=x_{j_1}^{b_1}\cdots x_{j_s}^{b_s}$ with $j_1, \ldots, j_s \in \{1, \ldots, n\}$. Applying Theorem \ref{5.2KHN} to $(hI)^{\ell}$ with the common monomial factor $h^{\ell}$, we obtain, for all $\ell$,
\begin{equation}
\mathrm{Ass}_R(R/(hI)^{\ell})=\mathrm{Ass}_R(R/I^{\ell})\cup\{ (x_{j_1}),\ldots,(x_{j_s})\}. \label{12}
\end{equation}
Due to $\mathrm{gcd}(h,u)=1$ for all $u\in \mathcal{G}(I)$, it is routine to check that
\begin{equation}
\mathrm{Min}(hI)=\mathrm{Min}(I)\cup\{ (x_{j_1}),\ldots,(x_{j_s})\}. \label{13}
\end{equation}
Since $I$ is nearly normally torsion-free, there exist a positive integer $k$ and a monomial prime ideal
$\mathfrak{p}$ such that
$\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$. Select an arbitrary element $\mathfrak{q}\in \mathrm{Ass}_R(R/(hI)^m)$.
Then \eqref{12} implies that $\mathfrak{q}\in \mathrm{Ass}_R(R/I^m)\cup\{ (x_{j_1}),\ldots,(x_{j_s})\}$.
If $1\leq m \leq k$, then \eqref{13} yields that
$\mathfrak{q}\in \mathrm{Min}(hI)$. Hence, let $m\geq k+1$. The claim follows readily from \eqref{12}, \eqref{13}, and $\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$.
$(\Leftarrow)$ Let $hI$ be nearly normally torsion-free.
This means that there exist a positive integer $k$ and a monomial prime ideal
$\mathfrak{p}$ such that
$\mathrm{Ass}_R(R/(hI)^m)=\mathrm{Min}(hI)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/(hI)^m) \subseteq \mathrm{Min}(hI) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$. Take an arbitrary element $\mathfrak{q}\in \mathrm{Ass}_R(R/I^m)$.
Because $\mathrm{gcd}(h,u)=1$ for all $u\in \mathcal{G}(I)$, no minimal generator of $I$ is divisible by any $x_{j_i}$; hence none of the primes $(x_{j_1}),\ldots,(x_{j_s})$ contains $I^m$, and so $\mathfrak{q} \notin
\{ (x_{j_1}),\ldots,(x_{j_s})\}$.
If $1\leq m \leq k$, then \eqref{12} gives that
$\mathfrak{q}\in \mathrm{Ass}_R(R/(hI)^m)$, and so $\mathfrak{q}\in \mathrm{Min}(hI)$. As $\mathfrak{q} \notin
\{ (x_{j_1}),\ldots,(x_{j_s})\}$, we get $\mathfrak{q}\in \mathrm{Min}(I)$. We thus assume that $m\geq k+1$.
One can derive the assertion according to the facts $\mathfrak{q} \notin \{ (x_{j_1}),\ldots,(x_{j_s})\}$,
\eqref{12}, \eqref{13}, and $\mathrm{Ass}_R(R/(hI)^m) \subseteq \mathrm{Min}(hI) \cup \{\mathfrak{p}\}$.
\end{proof}
We conclude this section by observing that nearly normal torsion-freeness does not imply the strong persistence property. It is not known whether nearly normal torsion-freeness implies the persistence property.
\begin{example}
Let $R=K[x_1, x_2, x_3]$ be the polynomial ring over a field $K$ and $I:=(x_2^4, x_1x_2^3, x_1^3x_2, x_1^4x_3)$ be a monomial ideal of $R$. On account of
$$
I=(x_1, x_2^4) \cap (x_1^3, x_2^3) \cap (x_1^4, x_2) \cap (x_2, x_3), $$
one can conclude that
$\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)=\{(x_1, x_2), (x_2, x_3)\}.$
We claim that $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I) \cup
\{(x_1, x_2, x_3)\}$ for all $m\geq 2$. To prove this claim, fix $m\geq 2$. We first verify that
$(x_1, x_2, x_3)\in \mathrm{Ass}_R(R/I^m)$ by showing that
$(I^m:_Rv)=(x_1, x_2, x_3)$, where $v:=x_1^{3m-1} x_2^{m+1}$. Consider the following statements:
\begin{itemize}
\item[(i)] Since $vx_1=x_1^{3m}x_2^{m+1}=
(x_1^3x_2)^mx_2$ and $x_1^3x_2\in I$, we get $vx_1\in I^m,$ and so $x_1 \in (I^m:_Rv)$;
\item[(ii)] Due to $vx_2=x_1^{3m-1}x_2^{m+2}=
(x_1^3x_2)^{m-1} (x_1^2x_2^3)$ and $x_1^2x_2^3\in I$, one has $vx_2 \in I^m,$ and hence $x_2 \in (I^m:_Rv)$;
\item[(iii)] As $vx_3=x_1^{3m-1}x_2^{m+1}x_3 =(x_1^3x_2)^{m-2}(x_1^5x_2^3x_3)$ and $x_1^5x_2^3x_3\in I^2$, we obtain $vx_3 \in I^m,$ and thus $x_3 \in (I^m:_Rv)$.
\end{itemize}
Consequently, $(x_1, x_2, x_3) \subseteq (I^m:_Rv).$ For the reverse inclusion, it suffices to show that $v \notin I^m$. Suppose, on the contrary, that $v \in I^m$. Then there exist monomials $h_1, \ldots, h_m \in \mathcal{G}(I)$ such that $h_1 \cdots h_m \mid x_1^{3m-1} x_2^{m+1}$; in particular, $x_3\nmid h_i$ for each $i=1, \ldots, m$. Hence, we have the equality
\begin{equation}
h_1 \cdots h_m=(x_2^4)^{{\alpha}_1} (x_1x_2^3)^{{\alpha}_2} (x_1^3x_2)^{{\alpha}_3},
\label{14}
\end{equation}
for some nonnegative integers $ {\alpha}_1, \alpha_2, {\alpha}_3$ with ${\alpha}_1 + \alpha_2 + {\alpha}_3=m.$
In particular, comparing the exponents of $x_2$ and $x_1$ in (\ref{14}) with those of $v=x_1^{3m-1}x_2^{m+1}$, one can derive that
\begin{equation}
4\alpha_1 + 3\alpha_2 + \alpha_3 \leq m+1, \label{15}
\end{equation}
and
\begin{equation}
\alpha_2 + 3\alpha_3 \leq 3m-1. \label{16}
\end{equation}
Since $\sum_{i=1}^3 \alpha_i =m$, we obtain from (\ref{15}) that
$m+3\alpha_1 + 2 \alpha_2 \leq m+1$, and so $3\alpha_1 + 2\alpha_2 \leq 1$. We thus have
$\alpha_1= \alpha_2=0$, and so $\alpha_3 =m$. Moreover, it follows from (\ref{16}) that $3m \leq 3m-1$, which is a contradiction. Therefore, $v \notin I^m$, and so $(I^m:_Rv) =
(x_1, x_2, x_3)$. This gives rise to $(x_1, x_2, x_3) \in
\mathrm{Ass}_R(R/I^m)$ for all $m\geq 2$. Hence,
$\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I) \cup
\{(x_1, x_2, x_3)\}$ for all $m\geq 2$. This
means that $I$ has the persistence property and is a nearly normally torsion-free ideal.
On the other hand, a computation with Macaulay2 \cite{GS} shows that
$(I^2:_RI) \neq I$. We therefore deduce that $I$ does not satisfy the strong persistence property.
\end{example}
\section{Nearly normally torsion-freeness and cover ideals}\label{coverideals}
In this section, our goal is to characterize all finite simple connected graphs whose cover ideals are nearly normally torsion-free. To do this, we recall some results that will be used in the proof of Theorem \ref{Main. 1}. We begin with the following lemma.
\begin{lemma} \cite[Lemma 2.11]{FHV2} \label{FHV2}
Let $\mathcal{H}$ be a finite simple hypergraph on
$V = \{x_1, \ldots , x_n\}$ with cover ideal
$J(\mathcal{H}) \subseteq R=k[x_1, \ldots, x_n]$.
Then
$$P = (x_{i_1} , \ldots , x_{i_r}) \in \mathrm{Ass}(R/J(\mathcal{H})^d) \Leftrightarrow P = (x_{i_1} , \ldots , x_{i_r}) \in \mathrm{Ass}(k[P]/J(\mathcal{H}_P)^d),$$
where $k[P] =k[x_{i_1} , \ldots , x_{i_r}]$, and $\mathcal{H}_P$ is the induced hypergraph of $\mathcal{H}$ on the vertex set $P = \{x_{i_1} , \ldots , x_{i_r}\} \subseteq V$.
\end{lemma}
In the following proposition, we investigate the associated primes of powers of the cover ideals of odd cycle graphs.
\begin{proposition}\label{Pro. 1} \cite[Proposition 3.6]{NKA}
Suppose that $C_{2n+1}$ is a cycle graph on the vertex set $[2n+1]$, $R=K[x_1, \ldots, x_{2n+1}]$ is a polynomial ring over a field $K$,
and $\mathfrak{m}$ is the unique homogeneous maximal ideal of $R$. Then $$\mathrm{Ass}_R(R/(J(C_{2n+1}))^s)= \mathrm{Ass}_R(R/J(C_{2n+1}))\cup \{\mathfrak{m}\},$$ for all $s\geq 2$. In particular,
$$\mathrm{Ass}^\infty(J(C_{2n+1}))=\{(x_i, x_{i+1})~: ~ i=1, \ldots, 2n\}\cup\{(x_{2n+1}, x_1)\}\cup \{\mathfrak{m}\}.$$
\end{proposition}
The next theorem relates the associated primes of powers of the cover ideal of the union of two finite simple graphs to the associated primes of powers of the cover ideals of the two graphs, under the condition that the graphs have exactly one vertex in common.
\begin{theorem} \cite[Theorem 11]{KNT} \label{IntersectionOne}
Let $G=(V(G), E(G))$ and $H=(V(H), E(H))$ be two finite simple connected graphs such that $|V(G)\cap V(H)|=1$. Let $L=(V(L), E(L))$ be the finite simple graph such that $V(L):=V(G) \cup V(H)$ and $E(L):=E(G) \cup E(H)$.
Then
$$\mathrm{Ass}_{R}(R/J(L)^s)=\mathrm{Ass}_{R_1}(R_1/J(G)^s)\cup \mathrm{Ass}_{R_2}(R_2/J(H)^s),$$ for all $s$, where $R_1=K[ x_\alpha : \alpha\in V(G)]$,
$R_2=K[ x_\alpha : \alpha\in V(H)]$,
and $R=K[ x_\alpha : \alpha\in V(L)]$.
\end{theorem}
The subsequent theorem examines the same relation for two finite simple connected graphs having exactly one edge, and hence exactly two vertices, in common.
\begin{theorem} \cite[Theorem 12]{KNT} \label{IntersectionTwo}
Let $G=(V(G), E(G))$ and $H=(V(H), E(H))$ be two finite simple connected graphs such that $|V(G)\cap V(H)|=2$ and
$|E(G)\cap E(H)|=1$. Let $L=(V(L), E(L))$ be the finite simple graph such that $V(L):=V(G) \cup V(H)$ and $E(L):=E(G) \cup E(H)$.
Then
$$\mathrm{Ass}_{R}(R/J(L)^s)=\mathrm{Ass}_{R_1}(R_1/J(G)^s)\cup \mathrm{Ass}_{R_2}(R_2/J(H)^s),$$ for all $s$, where $R_1=K[ x_\alpha : \alpha\in V(G)]$,
$R_2=K[ x_\alpha : \alpha\in V(H)]$,
and $R=K[ x_\alpha : \alpha\in V(L)]$.
\end{theorem}
To understand the proof of Theorem \ref{Main. 1}, we first review some notation from \cite{JS} as follows:
Let $G=(V(G), E(G))$ be a finite simple connected graph. For any $x,y \in V(G)$, an $(x,y)$-path is simply a path between the vertices $x$ and $y$ in $G$. Also, for a vertex subset $W$ of a graph $G$, $\langle W \rangle$ will denote the subgraph of $G$ induced by $W$.
Let $G$ be an {\it almost bipartite} graph, that is, a graph containing exactly one induced odd cycle subgraph, say $C_{2k+1}$. For each $i\in V(C_{2k+1})$, let
$$A_i=\{x\in V(G)|~x\neq i ~\text{and for all~} j\in V(C_{2k+1}), ~i \text{~appears on every~} (x,j)\text{-path}\}.$$
Based on \cite[Page 540]{JS}, it should be noted that the set $A_i$ may be empty for some $i$, and it follows from \cite[Lemma 2.3]{JS} that if $A_i\neq \emptyset$, then the induced subgraph $\langle A_i \rangle$ is bipartite in its own right. Also, for every edge $e=\{i,j\} \in E(C_{2k+1})$, let
\begin{align*}
B_e= \{& x\in V(G)\setminus V(C_{2k+1})|~\text{for every}~
m\in V(C_{2k+1})\setminus \{i,j\}, ~\text{there } \\
& \text{is an}~ (x,m)\text{-path in which} ~ i \text{~appears but}~ j~\text{does not, and an } \\
& (x,m)\text{-path in which} ~ j \text{~appears but}~ i~ \text{does not}\}.
\end{align*}
Once again, according to \cite[Page 541]{JS}, the set $B_e$ may be empty for some $e$, and it can be deduced from \cite[Lemma 2.3]{JS} that if $B_e\neq \emptyset$, then the induced subgraph $\langle B_e \rangle$ is bipartite in its own right. Furthermore, the $A_i$'s and
$B_e$'s are all mutually disjoint; in other words, each vertex of $G$ lies in exactly one of the following sets: (i) $V(C_{2k+1})$,
(ii) $A_i$ for some $i$, or (iii) $B_e$ for some $e$. Moreover, for each $i,j\in V(C_{2k+1})$ with $i\neq j$, and $e, e'\in E(C_{2k+1})$ with $e\neq e'$, one can easily derive from the definitions that there is no edge of $G$ joining $\langle A_i \rangle$ and $\langle B_e \rangle$, or $\langle A_i \rangle$ and $\langle A_j \rangle$, or $\langle B_e \rangle$ and $\langle B_{e'} \rangle$.
As an example, consider the following graph $G$ from \cite{JS}.
\begin{center}
\scalebox{1}
{
\begin{pspicture}(0,-2.5629687)(6.0628123,2.5629687)
\psdots[dotsize=0.12](2.7809374,0.42453125)
\psdots[dotsize=0.12](3.7809374,-0.37546876)
\psdots[dotsize=0.12](1.7809376,-0.37546876)
\psdots[dotsize=0.12](3.1809375,-1.3954687)
\psdots[dotsize=0.12](2.3809376,-1.4154687)
\psline[linewidth=0.04cm](2.7609375,0.42453125)(3.7209375,-0.35546875)
\psline[linewidth=0.04cm](2.7609375,0.42453125)(1.7809376,-0.35546875)
\psline[linewidth=0.04cm](3.7409375,-0.37546876)(3.1809375,-1.3354688)
\psline[linewidth=0.04cm](1.7809376,-0.39546874)(2.3609376,-1.3754687)
\psline[linewidth=0.04cm](2.3809376,-1.3954687)(3.1409376,-1.3954687)
\psdots[dotsize=0.12](3.9809375,-1.3954687)
\psline[linewidth=0.04cm](3.1809375,-1.4154687)(3.2009375,-1.3954687)
\psline[linewidth=0.04cm](3.1809375,-1.3954687)(3.9809375,-1.3954687)
\psline[linewidth=0.04cm](3.9609375,-1.4154687)(3.9609375,-1.3954687)
\psdots[dotsize=0.12](4.7809377,-1.3754687)
\psline[linewidth=0.04cm](3.9809375,-1.3954687)(4.7409377,-1.3954687)
\psdots[dotsize=0.12](4.5809374,0.00453125)
\psdots[dotsize=0.12](5.4009376,0.40453124)
\psdots[dotsize=0.12](3.5609374,0.8045313)
\psdots[dotsize=0.12](4.4009376,1.2045312)
\psdots[dotsize=0.12](3.2009375,1.2245313)
\psdots[dotsize=0.12](2.7409375,2.0445313)
\psdots[dotsize=0.12](2.3809376,1.2245313)
\psdots[dotsize=0.12](1.4009376,-0.97546875)
\psdots[dotsize=0.12](0.6009375,-0.15546875)
\psline[linewidth=0.04cm](0.5809375,-0.15546875)(1.7409375,-0.35546875)
\psline[linewidth=0.04cm](0.6009375,-0.17546874)(1.3609375,-0.9554688)
\psline[linewidth=0.04cm](1.3809375,-0.9554688)(2.3609376,-1.4154687)
\psline[linewidth=0.04cm](2.7409375,2.0645313)(2.3809376,1.2445313)
\psline[linewidth=0.04cm](2.7409375,2.0245314)(3.1609375,1.2645313)
\psline[linewidth=0.04cm](2.3809376,1.2245313)(2.7609375,0.44453126)
\psline[linewidth=0.04cm](3.2009375,1.2245313)(2.7809374,0.44453126)
\psline[linewidth=0.04cm](2.7609375,0.44453126)(3.5409374,0.8045313)
\psline[linewidth=0.04cm](3.5609374,0.82453126)(4.4009376,1.2045312)
\psline[linewidth=0.04cm](4.3809376,1.2245313)(5.3809376,0.40453124)
\psline[linewidth=0.04cm](3.7809374,-0.35546875)(4.5609374,0.00453125)
\psline[linewidth=0.04cm](4.5609374,0.04453125)(4.5809374,0.08453125)
\psline[linewidth=0.04cm](4.5809374,0.02453125)(5.4009376,0.40453124)
\psline[linewidth=0.04cm](3.5609374,0.8045313)(4.5609374,0.04453125)
\usefont{T1}{ptm}{m}{n}
\rput(2.4723437,0.47453126){$1$}
\usefont{T1}{ptm}{m}{n}
\rput(1.7723438,-0.08546875){$2$}
\usefont{T1}{ptm}{m}{n}
\rput(2.3523438,-1.7054688){$3$}
\usefont{T1}{ptm}{m}{n}
\rput(3.1923437,-1.7254688){$4$}
\usefont{T1}{ptm}{m}{n}
\rput(3.9123437,-0.62546873){$5$}
\usefont{T1}{ptm}{m}{n}
\rput(3.4723437,1.4145312){$6$}
\usefont{T1}{ptm}{m}{n}
\rput(2.7323437,2.3745313){$7$}
\usefont{T1}{ptm}{m}{n}
\rput(2.0523438,1.3945312){$8$}
\usefont{T1}{ptm}{m}{n}
\rput(3.9523437,-1.7254688){$9$}
\usefont{T1}{ptm}{m}{n}
\rput(4.802344,-1.7054688){$10$}
\usefont{T1}{ptm}{m}{n}
\rput(3.4823437,1.0345312){$11$}
\usefont{T1}{ptm}{m}{n}
\rput(4.662344,-0.24546875){$12$}
\usefont{T1}{ptm}{m}{n}
\rput(4.382344,1.4745313){$13$}
\usefont{T1}{ptm}{m}{n}
\rput(5.662344,0.43453124){$14$}
\usefont{T1}{ptm}{m}{n}
\rput(0.32234374,-0.08546875){$15$}
\usefont{T1}{ptm}{m}{n}
\rput(1.4623437,-0.72546875){$16$}
\usefont{T1}{ptm}{m}{n}
\rput(2.7523437,-2.3854687){$G$}
\psdots[dotsize=0.12](0.9809375,-1.3954687)
\psline[linewidth=0.04cm](0.5809375,-0.17546874)(0.9609375,-1.3554688)
\psline[linewidth=0.04cm](0.9809375,-1.3954687)(2.3609376,-1.4154687)
\usefont{T1}{ptm}{m}{n}
\rput(0.96234375,-1.7054688){$17$}
\end{pspicture}
}
\end{center}
Direct computations show that $A_1=\{6, 7, 8\}, ~A_4=\{9, 10\}, ~A_2=A_3=A_5=\emptyset,$
and
$B_{\{1,5\}}=\{11, 12, 13, 14\}, ~
B_{\{2,3\}}=\{15, 16, 17\},~ B_{\{1,2\}}=B_{\{3,4\}}=B_{\{4,5\}}=\emptyset.$
The following theorem is the first main result of this section.
\begin{theorem} \label{Main. 1}
Assume that $G=(V(G), E(G))$ is a finite simple connected graph, and $J(G)$ denotes the cover ideal of $G$. Then $J(G)$ is nearly normally torsion-free if and only if $G$ is either a bipartite graph or an almost bipartite graph.
\end{theorem}
\begin{proof}
To show the forward implication, let $J(G)$ be nearly normally torsion-free. Suppose, on the contrary, that $G$ is neither bipartite nor almost bipartite. This means that $G$ has at least two induced odd cycle subgraphs, say $C$ and $C'$. It follows from Proposition \ref{Pro. 1} that $\mathfrak{p}=(x_j~:~ j \in V(C))\in \mathrm{Ass}(J(C)^s)$ and $\mathfrak{p}'=(x_j~:~ j \in V(C'))\in \mathrm{Ass}(J(C')^s)$ for all $s\geq 2$. Since $G_{\mathfrak{p}}=C$ and $G_{\mathfrak{p}'}=C'$,
we can deduce from Lemma \ref{FHV2} that $\mathfrak{p}, \mathfrak{p}' \in \mathrm{Ass}_R(R/J(G)^s)$ for all $s\geq 2$. This contradicts the assumption that $J(G)$ is nearly normally torsion-free.
Conversely, if $G$ is bipartite, then on account of \cite[Corollary 2.6]{GRV}, one has that $J(G)$ is normally torsion-free, and so $J(G)$ is nearly normally torsion-free. Next, we assume that $G$ is an almost bipartite graph, and let $C$ be its unique induced odd cycle subgraph. Put $\mathfrak{p}=(x_j~:~ j\in V(C))$. We claim that $\mathrm{Ass}(J(G)^s)=\mathrm{Min}(J(G)) \cup \{\mathfrak{p}\}$ for all $s\geq 2$. Fix $s\geq 2$. For any $i\in V(C)$ and $e\in E(C)$, let $A_i$ and $B_e$ be the vertex subsets of $G$ as defined in the discussion above. Without loss of generality, suppose that $A_i \neq \emptyset$ for all $i=1, \ldots, r$ and $B_{e_j}\neq \emptyset$ for all $j=1, \ldots, t$. Set $H_i:=\langle A_i \cup \{i\}\rangle$ for all
$i=1, \ldots, r$, and $L_j:=\langle B_{e_j} \cup \{\alpha_j, \beta_j\}\rangle$ for all $j=1, \ldots, t$, where $e_j=\{\alpha_j, \beta_j\}$. Accordingly, we get $|V(C) \cap V(H_i)|=1$ for all $i=1, \ldots, r$, $|V(C) \cap V(L_j)|=2$
and $|E(C) \cap E(L_j)|=1$ for all $j=1, \ldots, t$. On the other hand, it should be noted that all $H_i$ and $L_j$ are bipartite,
and so \cite[Corollary 2.6]{GRV} yields that $\mathrm{Ass}(J(H_i)^s)=\mathrm{Min}(J(H_i))$ and
$\mathrm{Ass}(J(L_j)^s)=\mathrm{Min}(J(L_j))$ for all
$i=1, \ldots, r$ and $j=1, \ldots, t$. Now, repeated applications of Theorems \ref{IntersectionOne} and \ref{IntersectionTwo} give that $\mathrm{Ass}(J(G)^s)=\mathrm{Ass}(J(C)^s) \cup \bigcup_{i=1}^{r}\mathrm{Min}(J(H_i)) \cup \bigcup_{j=1}^{t}\mathrm{Min}(J(L_j))$. By virtue of Proposition \ref{Pro. 1}, one obtains $\mathrm{Ass}(J(C)^s)=\{\mathfrak{p}\} \cup \mathrm{Min}(J(C))$.
We thus have $\mathrm{Ass}(J(G)^s)= \mathrm{Min}(J(G)) \cup \{\mathfrak{p}\}$, as claimed. This shows that $J(G)$ is nearly normally torsion-free, and the proof is done.
\end{proof}
Now, we focus on cover ideals of hypergraphs. We first recall some definitions which will be necessary for understanding Theorem \ref{NTF1}.
\begin{definition} (see \cite[Definition 2.7]{FHV2})
Let $\mathcal{H}= (V_{\mathcal{H}} , E_{\mathcal{H}})$ be a hypergraph. A {\em $d$-coloring} of $\mathcal{H}$ is any partition of $V_{\mathcal{H}} = C_1\cup \cdots \cup C_d$
into $d$ disjoint sets such that for every $E \in E_{\mathcal{H}}$, we have $E\nsubseteq C_i$ for all $i = 1, \ldots ,d$. (In the case of a
graph $G$, this simply means that any two vertices connected by an edge receive different colors.) The
$C_i$'s are called the color classes of $\mathcal{H}$. Each color class $C_i$ is an {\em independent set}, meaning that $C_i$ does not
contain any edge of the hypergraph. The chromatic number of $\mathcal{H}$, denoted by $\chi(\mathcal{H})$, is the minimal $d$
such that $\mathcal{H}$ has a $d$-coloring.
\end{definition}
\begin{definition} (see \cite[Definition 2.8]{FHV2})
A hypergraph $\mathcal{H}$ is called {\em critically $d$-chromatic} if
$\chi(\mathcal{H})= d$, but for every vertex
$x\in V_{\mathcal{H}}$, $\chi(\mathcal{H}\setminus \{x\})< d$, where
$\mathcal{H}\setminus \{x\}$ denotes the hypergraph $\mathcal{H}$ with $x$ and all edges containing $x$ removed.
\end{definition}
\begin{definition} (see \cite[Definition 4.2]{FHV2})
Let $\mathcal{H}= (V_{\mathcal{H}} , E_{\mathcal{H}})$ be a hypergraph with $V_{\mathcal{H}}=\{x_1, \ldots, x_n\}$. For each $s$, the {\em $s$-th expansion} of $\mathcal{H}$ is defined to be the hypergraph obtained by replacing each vertex $x_i \in V_{\mathcal{H}}$ by a collection $\{x_{ij}~|~ j=1, \ldots, s\}$, and replacing $E_{\mathcal{H}}$ by the edge set that consists of edges
$\{x_{i_1l_1}, \ldots, x_{i_rl_r}\}$ whenever
$\{x_{i_1}, \ldots, x_{i_r}\}\in E_{\mathcal{H}}$ and edges
$\{x_{il}, x_{ik}\}$ for $l\neq k$. We denote this hypergraph by $\mathcal{H}^s$. The new variables $x_{ij}$ are called the shadows of $x_i$. The process of setting $x_{il}$ equal to $x_i$ for all $i$ and $l$ is called the {\em depolarization}.
\end{definition}
\begin{theorem} \label{NTF1}
Assume that $\mathcal{G}=(V(\mathcal{G}), E(\mathcal{G}))$ and $\mathcal{H}=(V(\mathcal{H}), E(\mathcal{H}))$ are
two finite simple hypergraphs such that $V(\mathcal{H})=V(\mathcal{G})\cup \{w\}$ with $w\notin V(\mathcal{G})$, and $E(\mathcal{H})=E(\mathcal{G}) \cup \{\{v,w\}\}$ for some vertex $v\in V(\mathcal{G})$. Then
$$\mathrm{Ass}_{R'}(R'/J(\mathcal{H})^s)=\mathrm{Ass}_{R}(R/J(\mathcal{G})^s)\cup
\{(x_v, x_w)\},$$
for all $s$, where $R=K[ x_\alpha : \alpha\in V(\mathcal{G})]$ and
$R'=K[ x_\alpha : \alpha\in V(\mathcal{H})]$.
\end{theorem}
\begin{proof}
For convenience of notation, set $I:=J(\mathcal{G})$ and $J:=J(\mathcal{H})$. We first prove that
$\mathrm{Ass}_{R}(R/I^s)\cup
\{(x_v, x_w)\}\subseteq \mathrm{Ass}_{R'}(R'/J^s)$ for all $s$. Fix $s\geq 1$, and assume that $\mathfrak{p}=(x_{i_1}, \ldots, x_{i_r})$ is an arbitrary element of $\mathrm{Ass}_R(R/I^s)$. According to
\cite[Lemma 2.11]{FHV2}, we get
$\mathfrak{p}\in \mathrm{Ass}(K[\mathfrak{p}]/J(\mathcal{G}_\mathfrak{p})^s)$, where $K[\mathfrak{p}]=K[x_{i_1}, \ldots, x_{i_r}]$ and $\mathcal{G}_\mathfrak{p}$ is the induced subhypergraph of $\mathcal{G}$ on the vertex set $\{i_1, \ldots, i_r\}\subseteq V(\mathcal{G})$. Since $\mathcal{G}_\mathfrak{p}= \mathcal{H}_\mathfrak{p}$, we have
$\mathfrak{p}\in \mathrm{Ass}(K[\mathfrak{p}]/J(\mathcal{H}_\mathfrak{p})^s)$. This yields that $\mathfrak{p}\in \mathrm{Ass}_{R'}(R'/J^s)$. Since $(x_v, x_w)\in \mathrm{Ass}_{R'}(R'/J^s)$, one derives $\mathrm{Ass}_{R}(R/I^s)\cup \{(x_v, x_w)\}\subseteq \mathrm{Ass}_{R'}(R'/J^s)$.
To complete the proof, it is enough for us to show the reverse inclusion. Assume that $\mathfrak{p}=(x_{i_1}, \ldots, x_{i_r})$ is
an arbitrary element of $\mathrm{Ass}_{R'}(R'/J^s)$ with
$\{i_1, \ldots, i_r\}\subseteq V(\mathcal{H})$. If $\{i_1, \ldots, i_r\}\subseteq V(\mathcal{G})$, then \cite[Lemma 2.11]{FHV2} implies that $\mathfrak{p}\in \mathrm{Ass}_R(R/I^s)$, and the proof is done. Thus, let $w\in \{i_1, \ldots, i_r\}$.
It follows from \cite[Corollary 4.5]{FHV2} that the associated primes of $J(\mathcal{H})^s$ correspond to critically $(s+1)$-chromatic subhypergraphs of the $s$-th expansion of $\mathcal{H}$. This means that one can take the induced subhypergraph on the vertex set $\{i_1, \ldots, i_r\}$, then form the $s$-th expansion of this induced subhypergraph, and within this new hypergraph find a critically $(s+1)$-chromatic subhypergraph. Notice that such a critical subhypergraph is connected and must involve shadows of every vertex of $\mathcal{H}_{\mathfrak{p}}$, which implies that $\mathcal{H}_{\mathfrak{p}}$ must be connected. Hence,
$v\in \{i_1, \ldots, i_r\}$. Without loss of generality, one may assume that $i_1=v$ and $i_2=w$.
Since $w$ is adjacent only to $v$ in the hypergraph $\mathcal{H}$, and because this induced subhypergraph is critical, removing the vertex $w$ yields a hypergraph that can be colored with $s$ colors. This implies that $w$ has to be adjacent to at least $s$ vertices. But the only vertices $w$ is adjacent to are the shadows of $w$ and the shadows of $v$, and so one has a clique among these vertices. Accordingly, $w$ and its neighbors will form a clique of size $s+1$. Since a clique is a critical graph, it follows that we do not need any element of $\{i_3, \ldots, i_r\}$ or their shadows when forming the critically $(s+1)$-chromatic subhypergraph. Consequently, we obtain $\mathfrak{p}=(x_v, x_w)$, as required.
\end{proof}
Before stating the next result, it should be noted that one can always view a square-free monomial ideal as the cover ideal of a
simple hypergraph. In fact, assume that $I$ is a square-free monomial ideal, and $I^\vee$ denotes its Alexander dual.
Also, let $\mathcal{H}$ denote the hypergraph corresponding to $I^\vee$. Then, we have $I=J(\mathcal{H})$, where
$J(\mathcal{H})$ denotes the cover ideal of the hypergraph $\mathcal{H}$. Consult \cite{FHM} for further details and information.
For instance, consider the following square-free monomial ideal in the polynomial ring $R=K[x_1,x_2,x_3,x_4,x_5]$ over a field $K$,
$$I=(x_1x_2x_3, x_2x_3x_4, x_3x_4x_5, x_4x_5x_1, x_5x_1x_2).$$
Then the Alexander dual of $I$ is given by
\begin{align*}
I^\vee = &(x_1, x_2, x_3) \cap (x_2, x_3, x_4) \cap (x_3, x_4, x_5) \cap (x_4, x_5, x_1) \cap (x_5, x_1, x_2)\\
= &(x_3x_5, x_2x_5, x_2x_4, x_1x_4, x_1x_3).
\end{align*}
Now, define the hypergraph $\mathcal{H}=(\mathcal{X}, \mathcal{E})$ with $\mathcal{X}=\{x_1,x_2,x_3,x_4,x_5\}$ and
$$\mathcal{E}=\{\{x_3,x_5\}, \{x_2,x_5\}, \{x_2, x_4\}, \{x_1, x_4\}, \{x_1, x_3\}\}.$$
Then the edge ideal and cover ideal of the hypergraph $\mathcal{H}$ are given by
$$I(\mathcal{H})=(x_3x_5, x_2x_5, x_2x_4, x_1x_4, x_1x_3),$$
and
$$J(\mathcal{H})= I(\mathcal{H})^\vee
=(x_3,x_5) \cap (x_2,x_5) \cap (x_2, x_4) \cap (x_1, x_4) \cap (x_1, x_3).$$
It is easy to see that $J(\mathcal{H})=I$, as claimed.
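As an illustrative aside (not part of the formal development), the equality $J(\mathcal{H})=I$ in this example can be checked mechanically: the supports of the minimal generators of the cover ideal $J(\mathcal{H})$ are exactly the minimal vertex covers of $\mathcal{H}$. The following Python sketch, with helper names of our own choosing, performs this check.

```python
from itertools import combinations

# Edges of the hypergraph H from the example (here an ordinary graph).
edges = [{3, 5}, {2, 5}, {2, 4}, {1, 4}, {1, 3}]
vertices = [1, 2, 3, 4, 5]

def is_cover(S):
    # S is a vertex cover iff it meets every edge
    return all(S & e for e in edges)

# Supports of the minimal generators of J(H) = minimal vertex covers of H.
covers = [set(S) for r in range(1, len(vertices) + 1)
          for S in combinations(vertices, r) if is_cover(set(S))]
minimal_covers = [C for C in covers if not any(D < C for D in covers)]

# Supports of the minimal generators of I = (x1x2x3, x2x3x4, x3x4x5, x4x5x1, x5x1x2).
gens_I = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {4, 5, 1}, {5, 1, 2}]

print(sorted(map(sorted, minimal_covers)) == sorted(map(sorted, gens_I)))  # True
```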
We are in a position to provide the second main result of this section in the following corollary.
\begin{corollary} \label{Cor. NFT1}
Let $I$ be a normally torsion-free square-free monomial ideal in $
R=K[x_{1},\ldots ,x_{n}]$ with $\mathcal{G}(I) \subset R$. Then the ideal $L:=IS\cap (x_{n},x_{n+1})\subset S=R[x_{n+1}]$ satisfies the following statements:
\begin{itemize}
\item[(i)] $L$ is normally torsion-free.
\item[(ii)] $L$ is nearly normally torsion-free.
\item[(iii)] $L$ is normal.
\item[(iv)] $L$ has the strong persistence property.
\item[(v)] $L$ has the persistence property.
\item[(vi)] $L$ has the symbolic strong persistence property.
\end{itemize}
\end{corollary}
\begin{proof}
(i) In light of the discussion above, we may assume that $I=J(\mathcal{H})$, where $\mathcal{H}$ is the hypergraph corresponding to the Alexander dual $I^\vee$ of $I$. Fix $t\geq 1$.
It follows now from Theorem \ref{NTF1} that
$$\mathrm{Ass}_{S}(S/L^t)=\mathrm{Ass}_{R}(R/J(\mathcal{H})^t)\cup \{(x_n, x_{n+1})\}.$$
Since $I$ is normally torsion-free, one can deduce that
$\mathrm{Ass}_{R}(R/J(\mathcal{H})^t)=\mathrm{Min}(J(\mathcal{H}))$, and so $\mathrm{Ass}_{S}(S/L^t)=\mathrm{Min}(J(\mathcal{H}))\cup
\{(x_n, x_{n+1})\}.$ Therefore, $\mathrm{Ass}_{S}(S/L^t)=\mathrm{Min}(L)$. This means that
$L$ is normally torsion-free, as desired. \par
(ii) This follows immediately from (i) and the definition of nearly normally torsion-freeness. \par
(iii) By virtue of \cite[Theorem 1.4.6]{HH1}, every normally torsion-free square-free monomial ideal is normal. Now, the assertion can be deduced from (i). \par
(iv) Due to \cite[Theorem 6.2]{RNA}, every normal monomial ideal has the strong persistence property, and hence the claim follows readily from (iii). \par
(v) It is shown in \cite{HQ} that the strong persistence property implies the persistence property. Therefore, one can derive the assertion from (iv). \par
(vi) Thanks to \cite[Theorem 11]{RT}, the strong persistence property implies the symbolic strong persistence property, and thus the claim is an immediate consequence of (iv).
\end{proof}
\section{The case of $t$-spread principal Borel ideals}\label{borel}
In this section, we focus on normally torsion-free and nearly normally torsion-free $t$-spread principal Borel ideals. Let $R=K[x_1, \ldots, x_n]$ be a polynomial ring over a field $K$. Let $t$ be a positive integer. A monomial $x_{i_1} x_{i_2} \cdots x_{i_d} \in R$ with $i_1 \leq i_2 \leq \cdots \leq i_d$ is called {\it $t$-spread} if $i_j -i_{j-1} \geq t$ for all $j=2, \ldots, d$. A monomial ideal in $R$ is called a {\it $t$-spread monomial ideal} if it is generated by $t$-spread monomials. A 0-spread monomial ideal is just an ordinary monomial ideal, while a 1-spread monomial ideal is just a square-free monomial ideal. In the following text, we will assume that $t \geq 1$.
Let $I\subset R$ be a $t$-spread monomial ideal. Then $I$ is called a {\it $t$-spread strongly stable ideal} if for all $t$-spread monomials $u\in \mathcal{G}(I)$, all $j\in \mathrm{supp}(u)$
and all $1\leq i <j$ such that $x_i(u/x_j)$ is a $t$-spread monomial, it follows that $x_i(u/x_j)\in I$.
\begin{definition}\label{boreldef}
A monomial ideal $I\subset R$ is called {\it $t$-spread principal Borel} if there exists a $t$-spread monomial $u\in \mathcal{G}(I)$ such that $I$ is the smallest $t$-spread strongly stable ideal which contains $u$. In this case, we write $I=B_t(u)$. It should be noted that for a $t$-spread monomial $u=x_{i_1} x_{i_2} \cdots x_{i_d} \in R$, we have $x_{j_1} x_{j_2} \cdots x_{j_d}\in \mathcal{G}(B_t(u))$ if and only if
$j_1\leq i_1, \ldots, j_d \leq i_d$ and $j_k - j_{k-1} \geq t$ for $k\in \{2, \ldots, d\}$. We refer the reader to \cite{EHQ} for more information.
\end{definition}
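The characterization of $\mathcal{G}(B_t(u))$ in Definition~\ref{boreldef} is directly algorithmic: enumerate all index tuples $(j_1,\ldots,j_d)$ with $j_k\leq i_k$ and $j_k-j_{k-1}\geq t$. The following Python sketch (our illustration, not part of the paper; the function name is hypothetical) implements this enumeration and recovers, for instance, the ten minimal generators of $B_3(x_4x_7)$.

```python
from itertools import product

def tspread_borel_gens(t, u):
    """Index tuples (j_1,...,j_d) of the minimal generators of B_t(u),
    where u = x_{i_1}...x_{i_d} is encoded by its index tuple."""
    d = len(u)
    return [j for j in product(*[range(1, i + 1) for i in u])
            if all(j[k] - j[k - 1] >= t for k in range(1, d))]

gens = tspread_borel_gens(3, (4, 7))
print(len(gens))  # B_3(x_4 x_7) has ten minimal generators
```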
To see an application of Corollary \ref{Cor. 1}, we re-prove Proposition 4.4 from \cite{Claudia}.
\begin{proposition} \label{App. 2}
Let $u=x_ix_n$ be a $t$-spread monomial in $R=K[x_1, \ldots, x_n]$ with $i\geq t$. Then $I=B_t(u)$ is nearly normally torsion-free.
\end{proposition}
\begin{proof}
We first assume that $i=t$. In this case, it follows from the definition that
$$I=(x_1x_{t+1}, x_1x_{t+2}, \ldots, x_1x_{n}, x_2x_{t+2}, \ldots, x_2x_n, \ldots, x_tx_{2t}, x_tx_{2t+1}, \ldots, x_tx_n).$$ It is routine to check that $I$ is the edge ideal of a bipartite graph with the vertex set $ \{1, 2, \ldots, t\} \cup \{t+1, t+2, \ldots, n\}$.
In addition, \cite[Corollary 14.3.15]{V1} implies that $I$ is normally torsion-free. Now, let $i>t$. One can conclude from the definition that the minimal generators of $I$ are as follows:
$$x_i x_n, x_i x_{n-1}, \ldots, x_i x_{i+t}, x_{i-1} x_n, x_{i-1} x_{n-1}, \ldots, x_{i-1}x_{i+t-1}, $$
$$ x_{i-2} x_n, x_{i-2} x_{n-1}, \ldots, x_{i-2} x_{i+t-2}, \ldots, x_1x_n, x_1 x_{n-1}, \ldots, x_1 x_{t+1}.$$
Our strategy is to use Corollary \ref{Cor. 1}. To do this, one has to show that $I(\mathfrak{m}\setminus \{x_z\})$ is normally torsion-free for all $z=1, \ldots, n$, where $\mathfrak{m}=(x_1, \ldots, x_n)$. First, let $1\leq z \leq i$.
Direct computation gives that
\begin{align*}
I(\mathfrak{m}\setminus \{x_z\})&= (x_{\alpha} x_{\beta}~:~ \alpha = 1, \ldots, z-1, \beta = t+1, \ldots, z+t-1, \beta - \alpha \geq t) \\
& + (x_1, \ldots, x_{z-t}) + (x_n, x_{n-1}, \ldots, x_{z+t}),
\end{align*}
where $(x_1, \ldots, x_{z-t})$ is understood to be the zero ideal when $z \leq t$. The generators $x_{\alpha}x_{\beta}$ with $\alpha \leq z-t$ are absorbed by $(x_1, \ldots, x_{z-t})$, and one can easily see that the ideal generated by the remaining quadratic generators is the edge ideal of a bipartite graph with the following vertex set
$$\{\max(1,\,z-t+1), \ldots, z-1\} \cup \{\max(t+1,\,z+1), \ldots, z+t-1\},$$
and so is normally torsion-free. Also, we know that every monomial prime ideal is normally torsion-free. Moreover, since the monomial prime ideal $(x_1, \ldots, x_{z-t}) + (x_n, x_{n-1}, \ldots, x_{z+t})$ and this edge ideal do not have any common variables, we conclude from \cite[Theorem 2.5]{SN} that
$I(\mathfrak{m}\setminus \{x_z\})$ is normally torsion-free as well. Now, let $i+1 \leq z \leq n$. It is routine to check that
\begin{align*}
I(\mathfrak{m}\setminus \{x_z\})&= (x_{\alpha} x_{\beta}~:~ \alpha = z-t+1, \ldots, i, \beta = z+1, \ldots, n, \beta - \alpha \geq t) \\
& + (x_1, x_2, \ldots, x_{\min(i,\, z-t)}).
\end{align*}
It is not hard to see that $(x_{\alpha} x_{\beta}~:~ \alpha = z-t+1, \ldots, i, \beta = z+1, \ldots, n, \beta - \alpha \geq t)$
is the edge ideal of a bipartite graph with the following vertex set $$\{z-t+1, \ldots, i\} \cup \{z+1, \ldots, n\},$$
and hence is normally torsion-free. In addition, the ideal $(x_1, x_2, \ldots, x_{\min(i,\, z-t)})$ is a monomial prime ideal, and hence normally torsion-free. Furthermore, since $(x_1, x_2, \ldots, x_{\min(i,\, z-t)})$ and
$(x_{\alpha} x_{\beta}~:~ \alpha = z-t+1, \ldots, i, \beta = z+1, \ldots, n, \beta - \alpha \geq t)$ have no common variables, we are able to derive from \cite[Theorem 2.5]{SN} that
$I(\mathfrak{m}\setminus \{x_z\})$ is normally torsion-free too. By Corollary \ref{Cor. 1}, we conclude that $I$ is nearly normally torsion-free, as claimed.
\end{proof}
We illustrate the statement of Proposition~\ref{App. 2} through the following example.
\begin{example}
Let $u= x_4x_7$ and $t=3$. If $v=x_ax_b \in \mathcal{G}(B_3(u))$, then $a \in [1,4]$ and $b \in [4,7]$. The complete list of minimal generators of $B_3(x_4x_{7})$ is given below:
\begin{center}
\begin{tabular}{ c c c c}
$x_1x_4$\\
$x_1x_5$\quad & \quad $x_2x_5$\\
$x_1x_6$\quad & \quad $x_2x_6$\quad & \quad $x_3x_6$\\
$x_1x_7$\quad & \quad $x_2x_7$\quad & \quad $x_3x_7$\quad & \quad $x_4x_7$
\end{tabular}
\end{center}
The monomial localizations of $I$ at $\mathfrak{m}\setminus \{x_k\}$, for each $k=1, \ldots, 7$, are listed below:
\begin{center}
\begin{tabular}{l}
$I(\mathfrak{m}\setminus \{x_1\})=(x_4,x_5,x_6,x_7)$;\\
$I(\mathfrak{m}\setminus \{x_2\})=(x_5,x_6,x_7)+(x_1x_4)$;\\
$I(\mathfrak{m}\setminus \{x_3\})=(x_6,x_7)+(x_1x_4,x_1x_5,x_2x_5) $;\\
$I(\mathfrak{m}\setminus \{x_4\})=(x_7)+(x_1,x_2x_5,x_2x_6,x_3x_6)$;\\
$I(\mathfrak{m}\setminus \{x_5\})=(x_1,x_2)+(x_3x_6,x_3x_7,x_4x_7)$;\\
$I(\mathfrak{m}\setminus \{x_6\})=(x_1,x_2,x_3)+(x_4x_7)$;\\
$I(\mathfrak{m}\setminus \{x_7\})=(x_1,x_2,x_3,x_4)$.
\end{tabular}
\end{center}
Therefore, for each $k=1, \ldots, 7$, the monomial localization $I(\mathfrak{m}\setminus \{x_k\})$ is either a monomial prime ideal, or sum of a monomial prime ideal and an edge ideal of a bipartite graph, which have no common variables. In each of the above cases, $I(\mathfrak{m}\setminus \{x_k\})$ is a normally torsion-free ideal. Hence, it follows from Corollary~\ref{Cor. 1} that $I$ is nearly normally torsion-free.
\end{example}
We further investigate the class of ideals of type $B_t(u)$ in the context of normally torsion-freeness and nearly normally torsion-freeness. Given $a,b \in \mathbb{N}$, we set $[a,b]:=\{c \in \mathbb{N}: a \leq c \leq b\}$. Let $u:=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ and
\begin{equation}\label{ak}
A_k:=[(k-1)t+1,i_k], \text{ for each } k=1, \ldots, d.
\end{equation}
We can describe the minimal generators of $I=B_t(u)$ in the following way: if $x_{j_1}\cdots x_{j_d} \in \mathcal{G}(I)$, then for each $k=1, \ldots ,d$, we have $j_k \in A_k$. For example, a complete list of minimal generators of $B_2(x_3x_5x_{7})$ is given below:
\begin{center}
\begin{tabular}{ c c c }
$x_1x_3x_5$\\
$x_1x_3x_6$\\
$x_1x_3x_7$\\
$x_1x_4x_6$\quad & \quad $x_2x_4x_6$\\
$x_1x_4x_7$\quad & \quad $x_2x_4x_7$\\
$x_1x_5x_7$\quad & \quad $x_2x_5x_7$\quad & \quad $x_3x_5x_7$
\end{tabular}
\end{center}
Here, $A_1=[1,3]$, $A_2=[3,5]$, and $A_3=[5,7]$.
If $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$, then we are forced to have $i_k \leq kt$ for each $k=1,\ldots, d-2$, because $u$ is a $t$-spread monomial. In this case, $A_i\cap A_j = \emptyset$ for all $i\neq j$. For example, a complete list of minimal generators of $B_3(x_3x_6x_9)$ is given below:
\begin{center}
\begin{tabular}{ c c c }
$x_1x_4x_7$\\
$x_1x_4x_8$\\
$x_1x_4x_9$\\
$x_1x_5x_8$\quad & \quad $x_2x_5x_8$\\
$x_1x_5x_9$\quad & \quad $x_2x_5x_9$\\
$x_1x_6x_9$\quad & \quad $x_2x_6x_9$\quad & \quad $x_3x_6x_9$
\end{tabular}
\end{center}
Here, $A_1=[1,3]$, $A_2=[4,6]$, and $A_3=[7,9]$.
\begin{remark}\label{dpartiteduniform}
If $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$, then $B_t(u)$ is an edge ideal of a {\em $d$-uniform $d$-partite} hypergraph whose vertex partition is given by $A_1, \ldots, A_d$. Recall that $\mathcal{H}$ is a $d$-partite hypergraph if its vertex set $V_{\mathcal{H}}$ is a disjoint union of sets $V_1, \ldots, V_d$ such that if $E$ is an edge of $\mathcal{H}$, then $|E \cap V_i | \leq 1$. Moreover, a hypergraph is $d$-uniform if each edge of $\mathcal{H}$ has size $d$. In particular, if $\mathcal{H}$ is a $d$-uniform $d$-partite hypergraph with vertex partition $V_1, \ldots, V_d$, then $|E|=d$ and $|E \cap V_i | =1$ for each $E \in E_{\mathcal{H}}$.
\end{remark}
Let $\mathcal{A}_t$ denote the family of all ideals of the form $B_t(u)$ such that $B_t(u)$ is an edge ideal of a $d$-partite $d$-uniform hypergraph, for some $d$. From the above discussion it follows that $B_t(u) \in \mathcal{A}_t$ if and only if there exists some $d$ such that $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$.
Let $\mathcal{H}$ be a hypergraph. A sequence $v_1, E_1, v_2, E_2, \ldots, v_s, E_s, v_{s+1}=v_1$ of distinct edges and vertices of $\mathcal{H}$ is called a cycle in $\mathcal{H}$ if $v_i,v_{i+1} \in E_i$, for all $i=1, \ldots, s$. Such a cycle is called {\em special} if no edge contains more than two vertices of the cycle. After translating the language of simplicial complexes to hypergraphs, it can be seen from \cite[Theorem 10.3.16]{HH1} that if a hypergraph does not have any special odd cycles, then its edge ideal is normally torsion-free. The following example shows that hypergraphs with edge ideals in $\mathcal{A}_t$ may contain special odd cycles.
\begin{example}\label{oddcycle}
Let $\mathcal{H}$ be the hypergraph whose edge ideal is $I=B_3(x_3x_6x_9x_{12})$. Then the following sequence of vertices and edges of $\mathcal{H}$ gives a special odd cycle:
\[
1, \{1,4,9,12\}, 9, \{2,5,9,12\}, 5, \{1,5,8,11\}, 1.
\]
\end{example}
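The cycle in Example~\ref{oddcycle} can be verified mechanically: each listed edge must be a minimal generator of $B_3(x_3x_6x_9x_{12})$, consecutive cycle vertices must lie in the connecting edge, and no edge of the cycle may contain more than two of the cycle vertices. A Python sketch of this check (our illustration; helper names are hypothetical):

```python
from itertools import product

def tspread_borel_gens(t, u):
    # index tuples of the minimal generators of B_t(u)
    d = len(u)
    return [j for j in product(*[range(1, i + 1) for i in u])
            if all(j[k] - j[k - 1] >= t for k in range(1, d))]

# Edges of the hypergraph H attached to B_3(x_3 x_6 x_9 x_12)
H_edges = set(map(frozenset, tspread_borel_gens(3, (3, 6, 9, 12))))

cyc_vertices = [1, 9, 5]
cyc_edges = [frozenset({1, 4, 9, 12}), frozenset({2, 5, 9, 12}), frozenset({1, 5, 8, 11})]

is_cycle = (all(e in H_edges for e in cyc_edges)
            and all({cyc_vertices[i], cyc_vertices[(i + 1) % 3]} <= cyc_edges[i]
                    for i in range(3)))
# "special": no edge of the cycle contains more than two of its vertices
is_special = all(len(e & set(cyc_vertices)) <= 2 for e in cyc_edges)
print(is_cycle and is_special)  # True
```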
For any monomial ideal $I \subset R=K[x_1, \ldots,x_n]$, the {\it deletion} of $I$ at $x_i$ with $1\leq i \leq n$, denoted by $I\setminus x_i$, is obtained by setting $x_i=0$ in every minimal generator of $I$, that is, we delete every minimal generator $u\in \mathcal{G}(I)$ with $x_i\mid u$. For a monomial $u \in R$, we denote the support of $u$ by $\mathrm{supp}(u)=\{x_i : x_i|u \}$. Moreover, for a monomial ideal $I\subset R$, we set $\mathrm{supp}(I)=\cup_{u \in \mathcal{G}(I)} \mathrm{supp}(u)$. Then we observe the following:
\begin{remark}\label{rem1}
Let $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$, and $I=B_t(u) \subset R$. Note that we have $I \in \mathcal{A}_t$.
\begin{enumerate}
\item If $i_k < kt$, for some $1 \leq k \leq d-1$, then the variables $x_{i_k+1}, \ldots, x_{kt}$ do not appear in $\mathrm{supp}(I)$. For example, a complete list of minimal generators of $B_3(x_2x_5x_9)$ is given below:
\begin{center}
\begin{tabular}{ c c c }
$x_1x_4x_7$\\
$x_1x_4x_8$\\
$x_1x_4x_9$\\
$x_1x_5x_8$\quad & \quad $x_2x_5x_8$\\
$x_1x_5x_9$\quad & \quad $x_2x_5x_9$\\
\end{tabular}
\end{center}
Here, $A_1=[1,2]$, $A_2=[4,5]$, and $A_3=[7,9]$, and $x_3, x_6 \notin \mathrm{supp}(B_3(x_2x_5x_9))$.
\item If $i_d<n$, then $x_{{i_d}+1}, \ldots, x_n \notin \mathrm{supp} (I)$. Hence we can always assume that $i_d=n$.
\item If $i_k > (k-1)t+1$, that is, $|A_k| >1$ for some $k = 1, \ldots, d$, then $I\setminus x_{i_k}=B_t(v)$ where $v$ is chosen with the following property: $v=x_{j_1}\ldots x_{j_d} \in \mathcal{G}(I\setminus x_{i_k})$ and for any other $w=x_{l_1}\ldots x_{l_d} \in \mathcal{G}(I\setminus x_{i_k})$, we have $l_k \leq j_k$. For example, $B_3(x_2x_5x_9)\setminus x_5=B_3(x_1x_4x_9)$. Therefore, we conclude that $I\setminus x_{i_k} \in \mathcal{A}_t$, for all $x_{i_k}$, with $k = 1, \ldots, d$.
\item The definition of $A_k$ immediately implies that $|A_k|=1$ if and only if $i_k=(k-1)t+1$. Moreover, if $i_k=(k-1)t+1$ for some $1 \leq k \leq d$, then $i_j=(j-1)t+1$, for all $j \leq k$, because $u$ is a $t$-spread monomial. In this case $x_1x_{t+1}\cdots x_{(k-1)t+1}$ divides every minimal generator of $B_t(u)$ and hence $B_t(u)=x_1x_{t+1}\cdots x_{(k-1)t+1} J$, where $J$ can be identified with a $t$-spread principal Borel ideal generated in degree $d-k$ in its ambient polynomial ring. Hence, $J=B_t(v) \in \mathcal{A}_t$, with $v=u/x_1x_{t+1}\cdots x_{(k-1)t+1}$.
For example, if $u=x_1x_4x_9x_{12}$, then $B_3(u)=x_1x_4J$ with $J=B_t(v) \subset K[x_7, \ldots, x_{12}]$ and $v=x_9x_{12}$. In fact, by substituting $y_{i-6}=x_i$ for $i=7, \ldots, 12$, $J$ can be identified with $B_3(y_3y_6)\subset K[y_1, \ldots, y_6]$.
\item If $|A_k| \geq 2$, then $i_k > (k-1)t+1$. This forces $|A_j| \geq 2$ for all $j=k, \ldots,d$ because $u$ is a $t$-spread monomial.
\end{enumerate}
\end{remark}
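The deletion operation of Remark~\ref{rem1}(3) is easy to test computationally: setting $x_i=0$ simply discards every minimal generator divisible by $x_i$. The following Python sketch (our illustration, with hypothetical helper names) confirms the instance $B_3(x_2x_5x_9)\setminus x_5=B_3(x_1x_4x_9)$ mentioned above.

```python
from itertools import product

def tspread_borel_gens(t, u):
    # index tuples of the minimal generators of B_t(u)
    d = len(u)
    return {j for j in product(*[range(1, i + 1) for i in u])
            if all(j[k] - j[k - 1] >= t for k in range(1, d))}

def deletion(gens, i):
    # setting x_i = 0 drops every minimal generator divisible by x_i
    return {g for g in gens if i not in g}

left = deletion(tspread_borel_gens(3, (2, 5, 9)), 5)
right = tspread_borel_gens(3, (1, 4, 9))
print(left == right)  # True
```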
In what follows, our aim is to show that for any fixed $t$, all ideals in $\mathcal{A}_t$ are normally torsion-free. It is known from \cite[Corollary]{AEL} that $B_t(u)$ satisfies the persistence property and the Rees algebra $\mathcal{R}(B_t(u))$ is a normal Cohen-Macaulay domain. It is a well-known fact that for any non-zero graded ideal $I \subset R=K[x_1, \ldots, x_n]$, if $\mathcal{R}(I)$ is Cohen-Macaulay, then $\lim_{k \rightarrow \infty} \mathrm{depth}(R/I^k)= n-\ell(I)$; see, for example, \cite[Proposition 10.3.2]{HH1}. Here $\ell(I)$ denotes the analytic spread of $I$, that is, the Krull dimension of the fiber ring $\mathcal{R}(I)/\mathfrak{m}\mathcal{R}(I)$. This leads us to the following corollary, which will be used in the subsequent results. Note that due to Remark~\ref{rem1}(1), one needs to pay attention to the ambient ring of $B_t(u)$. Here, by the ambient ring, we mean the polynomial ring $R$ containing $B_t(u)$ such that all variables of $R$ appear in $\mathrm{supp}(B_t(u))$.
\begin{corollary}\label{cor1}
Let $I=B_t(u)\subset R=K[x_i: x_i \in \mathrm{supp}(I)]$ and $n=|\mathrm{supp}(I)|$. If $\ell(I)<n$, then $\lim_{k \rightarrow \infty} \mathrm{depth}(R/I^k)\neq 0$. In particular, if $\ell(I)<n$, then $\mathfrak{m} \notin \mathrm{Ass} (R/I^k)$ for all $k\geq 1$, where $\mathfrak{m}$ is the unique graded maximal ideal of $R$.
\end{corollary}
Next we compute $\ell(I)$, for all $I \in \mathcal{A}_t$. For this, we first recall the definition of linear relation graph from \cite[Definition 3.1]{HQ}.
\begin{definition}\label{linearrelationgraphdef}
Let $I\subset R$ be a monomial ideal with $\mathcal{G}(I)=\{u_1, \ldots, u_m\}$. The {\em linear relation graph} $\Gamma$ of $I$ is the graph with the edge set
\[
E(\Gamma)=\{\{i,j\}: \text{there exist $u_k$,$u_l \in \mathcal{G}(I)$ such that $x_iu_k=x_ju_l$}\},
\]
and the vertex set $V(\Gamma)=\bigcup_{\{i,j\}\in E(\Gamma)}\{i,j\}$.
\end{definition}
It is known from \cite[Lemma 5.2]{DHQ} that if $I$ is a monomial ideal generated in degree $d$ and the first syzygy of $I$ is generated in degree $d+1$ then
\begin{equation}\label{eq1}
\ell(I)=r-s+1
\end{equation}
where $r$ is the number of vertices and $s$ is the number of connected components of the linear relation graph of $I$. If $u$ is a $t$-spread monomial of degree $d$, then it can be concluded from \cite[Theorem 2.3]{AEL} that the first syzygy of $B_t(u)$ is generated in degree $d+1$. Hence \cite[Lemma 5.2]{DHQ} gives a way to compute the analytic spread of $B_t(u)$. Before proving the other main result of this section, we first analyze the linear relation graph of $B_t(u)$.
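Under these hypotheses, the formula $\ell(I)=r-s+1$ can be explored computationally: build $\Gamma$ from the relation $x_iu_k=x_ju_l$ among the square-free generators and count vertices and connected components. The following Python sketch (our illustration; function names are hypothetical) recovers $r=9$ and $s=3$ for $B_3(x_3x_6x_9)$, giving $\ell=7$.

```python
from itertools import product

def tspread_borel_gens(t, u):
    # minimal generators of B_t(u), each encoded as a frozenset of indices
    d = len(u)
    return [frozenset(j) for j in product(*[range(1, i + 1) for i in u])
            if all(j[k] - j[k - 1] >= t for k in range(1, d))]

def linear_relation_graph(gens):
    # {i, j} is an edge iff some generators g, g' satisfy x_i g = x_j g',
    # i.e. g' = (g \ {j}) U {i} for square-free generators of equal degree
    gset, E = set(gens), set()
    support = set().union(*gens)
    for g in gens:
        for j in g:
            for i in support - g:
                if (g - {j}) | {i} in gset:
                    E.add(frozenset({i, j}))
    return E

def component_count(E):
    # returns (number of connected components s, number of vertices r)
    V = sorted(set().union(*E)) if E else []
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in E:
        a, b = tuple(e)
        parent[find(a)] = find(b)
    return len({find(v) for v in V}), len(V)

E = linear_relation_graph(tspread_borel_gens(3, (3, 6, 9)))
s, r = component_count(E)
print(r, s, r - s + 1)  # r vertices, s components, analytic spread r - s + 1
```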
\begin{lemma}\label{aklemma}
Let $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ and $A_k=[(k-1)t+1,i_k]$, for each $k=1, \ldots, d$. Moreover, let $\Gamma$ be the linear relation graph of $I=B_t(u)$. Then we have the following:
\begin{enumerate}
\item[(i)] For each $k=1, \ldots, d$, we have $|A_k| \geq 2$ if and only if $A_k \subseteq V(\Gamma)$. Moreover, if $|A_k| \geq 2$, then the induced subgraph of $\Gamma$ on $A_k$ is a complete graph.
\item[(ii)] Let $i_k < kt+1$, for some $1 \leq k \leq d$.
Then $\Gamma$ does not contain any edge with one endpoint in $A_r$ and the other endpoint in $A_s$ for any $1 \leq r<s\leq k$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) First we show that if $A_k \subseteq V(\Gamma)$, then $|A_k| \geq 2$. Assume that $|A_k|=1$, for some $k$. Then from Remark~\ref{rem1}(4), we see that $i_k=(k-1)t+1$. In this case, each generator of $B_t(u)$ is a multiple of $x_{i_k}$ and by following the definition of $\Gamma$, we see that $A_k=\{i_k\} \not\subseteq V(\Gamma) $.
Conversely, we show that if $|A_k| \geq 2$, for some $k$, then $A_k \subseteq V(\Gamma)$ and the induced subgraph on $A_k$ is a complete graph. Take any $f,h \in A_k$ with $f<h$. If $k=1$, then $f<h \leq i_1$ and the monomials $v=x_h(u/x_{i_1})$ and $v'=x_f(u/x_{i_1})$ belong to $\mathcal{G}(I)$ because $I$ is $t$-spread strongly stable. Hence $\{f,h\} \in E(\Gamma)$, as required. A similar argument shows that if $|A_d| \geq 2$, then $A_d \subseteq V(\Gamma)$ and the induced subgraph on $A_d$ is also a complete graph. Now assume that $1 < k < d$. Since $|A_k| \geq 2$, we have $i_k > (k-1)t+1$ and $(k-1)t+1 \leq f<h \leq i_k$. By using the fact that $I$ is $t$-spread strongly stable, we see that $v=x_1 x_{t+1} \cdots x_{(k-2)t+1}\,x_f\, x_{i_{k+1}}\cdots x_{i_d} \in \mathcal{G}(I)$ and $v'=x_1 x_{t+1} \cdots x_{(k-2)t+1}\,x_h\, x_{i_{k+1}}\cdots x_{i_d} \in \mathcal{G}(I)$. Moreover, $v=(v'/x_h)x_f$, and so $\{f,h\} \in E(\Gamma)$, as required.
(ii) If $|A_r|=1$ or $|A_s|=1$, then by using (i), we see that the statement holds trivially. Assume that $|A_r| \geq 2$ and $|A_s| \geq 2$. Since $i_k< kt+1$, we have $i_j<jt+1$, for all $j=1, \ldots, k$, because $u$ is a $t$-spread monomial. In this case, $A_r \cap A_s = \emptyset$ for all $1 \leq r < s \leq k$. Moreover, $|A_r| \leq t$ for all $r=1, \ldots, k$. Take $f \in A_r$ and $h \in A_s$ for some $1\leq r < s \leq k$. Then for any $w \in \mathcal{G}(I)$ with $x_h \mid w$, we have $(w/x_h)x_f \notin \mathcal{G}(I)$. Indeed, $w$ has exactly one variable with index in $A_r$, so $(w/x_h)x_f$ either fails to be square-free or contains two distinct variables with indices in $A_r$, whose difference is at most $|A_r|-1<t$; in either case, it is not a $t$-spread monomial. Hence, we do not have any edge in $\Gamma$ of the form $\{f,h\}$ where $f \in A_r$ and $h \in A_s$.
\end{proof}
\begin{proposition}\label{limdepth}
Let $I=B_t(x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}) \subset R=K[x_i: x_i \in \mathrm{supp}(I)]$ with $i_{d-1} \leq(d-1)t$. Then $\ell(I) < n=|\mathrm{supp}(I)|$. In particular, $\mathfrak{m} \notin \mathrm{Ass} (R/I^k)$ for all $k\geq 1$, where $\mathfrak{m}$ is the unique graded maximal ideal of $R$.
\end{proposition}
\begin{proof}
If $I$ is a principal ideal, then the assertion holds trivially. Therefore, we may assume that $I$ is not a principal ideal. To show that $\ell(I)<n$, by the equality in (\ref{eq1}), it is enough to prove that the linear relation graph $\Gamma$ of $I$ has more than one connected component. Let $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ and $A_k=[(k-1)t+1,i_k]$, for $k=1, \ldots, d$. Since $I$ is not principal, it follows from Remark~\ref{rem1}(4) that $|A_k| \geq 2$, for some $1 \leq k \leq d$.
Since $i_{d-1} \leq(d-1)t$, one can deduce from Lemma~\ref{aklemma} that $\Gamma$ has no edge of the form $\{f,h\}$ with $f$ and $h$ in different $A_k$'s, and that if $|A_k| \geq 2$, for some $k$, then $A_k \subset V(\Gamma)$ and the induced subgraph on $A_k$ is a complete graph. This shows that $V(\Gamma)$ is the union of all $A_i$'s for which $|A_i|\geq 2$. Moreover, each such $A_i$ determines a connected component of $\Gamma$. Hence $\Gamma$ has only one connected component if and only if $|A_d|\geq 2$ and $|A_i| =1$, for all $i=1, \ldots, d-1$. In this case, $\ell(I)= |A_d|<n$. Otherwise, $\Gamma$ has at least two connected components and again we obtain $\ell(I)<n$, as required. Then the assertion $\mathfrak{m} \notin \mathrm{Ass} (R/I^k)$, for all $k \geq 1$, follows from Corollary~\ref{cor1}.
\end{proof}
Before proving Theorem~\ref{main-NTF}, we first recall the following theorem which gives a criterion to check whether a square-free monomial ideal is normally torsion-free or not.
\begin{theorem}\label{use}\cite[Theorem 3.7]{SNQ}
Let $I$ be a square-free monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$ and $\mathfrak{m}=(x_1, \ldots, x_n)$. If there exists a square-free monomial $v \in I$ such that $v\in \mathfrak{p}\setminus \mathfrak{p}^2$ for any $\mathfrak{p}\in \mathrm{Min}(I)$, and $\mathfrak{m}\setminus x_i \notin \mathrm{Ass}(R/(I\setminus x_i)^s)$ for all $s$ and $x_i \in \mathrm{supp}(v)$, then the following statements hold:
\begin{itemize}
\item[(i)] $I$ is normally torsion-free.
\item[(ii)] $I$ is normal.
\item[(iii)] $I$ has the strong persistence property.
\item[(iv)] $I$ has the persistence property.
\item[(v)] $I$ has the symbolic strong persistence property.
\end{itemize}
\end{theorem}
Now, we state the second main result of this section.
\begin{theorem}\label{main-NTF}
Let $I=B_t(x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}) \subset R=K[x_i: x_i \in \mathrm{supp}(I)]$ with $i_{d-1} \leq(d-1)t$. Then $I$ is normally torsion-free.
\end{theorem}
\begin{proof}
We may assume that $i_k \neq (k-1)t+1$, for all $1 \leq k \leq d$. Otherwise, from Remark~\ref{rem1}(4), it follows that $I=wB_t(v)$, where $w$ is the product of all the variables $x_{i_k}$ for which $i_k =(k-1)t+1$, $v=u/w$, and $B_t(v)$ is a $t$-spread principal Borel ideal in its ambient ring. Then from \cite[Lemma 3.12]{SN}, it follows that $I$ is normally torsion-free if and only if $B_t(v)$ is normally torsion-free. Therefore, one may reduce the discussion to $B_t(v)$, whose generators are not all multiples of a fixed monomial.
Let $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_1>1$. To show that $I=B_t(u)$ is normally torsion-free, we will use Theorem~\ref{use}. It can be seen from \cite[Theorem 4.2]{Claudia} that $u \in \mathfrak{p}\setminus \mathfrak{p}^2$, for all $\mathfrak{p} \in \mathrm{Min}(I)$. Recall that the family of ideals $\mathcal{A}_t$ is defined by: $B_t(u) \in \mathcal{A}_t$ if and only if there exists some $d$ such that $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$. From Remark~\ref{rem1}(3), it follows that $I\setminus{x_{i_k}} \in \mathcal{A}_t$, for all $k=1, \ldots, d$. Furthermore, Proposition~\ref{limdepth} implies that the unique graded maximal ideal of the ambient ring of $I\setminus{x_{i_k}}$ does not belong to $\mathrm{Ass}(R/(I\setminus x_{i_k})^s)$, for all $s$. Hence all hypotheses of Theorem~\ref{use} are satisfied, and $I$ is normally torsion-free.
\end{proof}
Next, we prove the converse of Theorem~\ref{main-NTF} to obtain the complete characterization of normally torsion-free $t$-spread principal Borel ideals.
\begin{theorem}\label{complete}
Let $I=B_t(x_{i_1}x_{i_2}\ldots x_{i_{d-1}}x_{i_d}) \subset R=K[x_i: x_i \in \mathrm{supp}(I)]$. Then $I$ is normally torsion-free if and only if $i_{d-1} \leq(d-1)t$. In other words, $I$ is normally torsion-free if and only if $I$ can be viewed as an edge ideal of a $d$-uniform $d$-partite hypergraph.
\end{theorem}
\begin{proof}
Following Theorem~\ref{main-NTF}, it is enough to show that if $i_{d-1} >(d-1)t$, then $I$ is not normally torsion-free. Let $k$ be the smallest integer for which $i_k > kt$. As we explained in Remark~\ref{rem1}(5), if $i_k > kt$, then $i_j> jt$ and $|A_j| \geq 2$, for all $j= k, \ldots, d$. The sets $A_j$ are defined in (\ref{ak}). It follows from Lemma~\ref{aklemma} that $A=A_k \cup A_{k+1} \cup \cdots \cup A_d \subset V(\Gamma)$, where $\Gamma$ is the linear relation graph of $I$, and for each $j=k, \ldots, d$, the induced subgraph of $\Gamma$ on $A_j$ is a complete graph. Moreover, since $i_j> jt$ for each $j=k, \ldots, d$, we have $i_j \in A_j \cap A_{j+1}$, so $A_j \cap A_{j+1} \neq \emptyset$, for each $j=k, \ldots, d-1$. Therefore, we conclude that the induced subgraph on $A$ is connected.
Set $\mathfrak{p}=(x_{(k-1)t+1}, \ldots, x_{i_d})$. Here the crucial observation is that if we take the monomial localization of $I$ at $\mathfrak{p}$, in other words, if we map all variables $x_i$ to $1$ where $i \in A_1 \cup \cdots \cup A_{k-1}$, then we reduce the degree of each generator of $B_t(u)$ by $k-1$. This is because $A \cap B = \emptyset$, where $B=A_1 \cup \cdots \cup A_{k-1}$. Hence $I(\mathfrak{p})$ can be viewed as a $t$-spread principal Borel ideal after a shift of the indices of the variables. More precisely, each $j \in A$ is shifted to $j-(k-1)t$. Therefore, $I(\mathfrak{p})= B_t(x_{j_1} \cdots x_{j_{d-k+1}})$, where $t<j_1 < \cdots < j_{d-k+1}$. The linear relation graph of $I(\mathfrak{p})$ is isomorphic to the induced subgraph of $\Gamma$ on the vertex set $A$. Then $\ell( I(\mathfrak{p})) = |\mathrm{supp} (B_t(x_{j_1} \cdots x_{j_{d-k+1}}))|=\mathrm{dim}(R(\mathfrak{p}))$, and $\lim_{s \rightarrow \infty} \mathrm{depth}(R(\mathfrak{p})/I(\mathfrak{p})^s)= 0$. This shows that $\mathfrak{p} \in \mathrm{Ass}(R/I^s)$, for some $s>1$. Moreover, $\mathfrak{p} \notin \mathrm{Ass}(R/I)$ because of \cite[Theorem 4.2]{Claudia}. Hence we conclude that $I$ is not normally torsion-free.
\end{proof}
In the subsequent example, we illustrate the construction of $\mathfrak{p}$ as in the proof of Theorem~\ref{complete}.
\begin{example}
Let $u=x_2x_7x_{10}x_{13}$ be a $3$-spread monomial and $I=B_3(u)$. Here $i_1=2$, $i_2=7$, $i_3=10$, $i_4=13$. Moreover, $i_1=2 \leq t=3$, but $i_2=7 > 2t=6$, so $k=2$ in the notation of the proof. Set $\mathfrak{p}=(x_4, x_5, \ldots, x_{13})$ as in the proof of Theorem~\ref{complete}. Then the minimal generators of $I(\mathfrak{p})$ are listed below. In the following table, $u \rightarrow v$ indicates that $u \in \mathcal{G}(I(\mathfrak{p}))$ and $v$ is the monomial obtained by shifting each $j \in [4,13]$ to $j-3$.
\begin{center}
\begin{tabular}{ l l l l}
$x_4x_7x_{10} \rightarrow x_1x_4x_{7}$\\
$x_4x_7x_{11} \rightarrow x_1x_4x_{8}$\\
$x_4x_7x_{12} \rightarrow x_1x_4x_{9}$\\
$x_4x_7x_{13} \rightarrow x_1x_4x_{10}$\\
$x_4x_8x_{11} \rightarrow x_1x_5x_{8}$&$x_5x_8x_{11} \rightarrow x_2x_5x_{8}$\\
$x_4x_8x_{12} \rightarrow x_1x_5x_{9}$&$x_5x_8x_{12} \rightarrow x_2x_5x_{9}$\\
$x_4x_8x_{13} \rightarrow x_1x_5x_{10}$&$x_5x_8x_{13} \rightarrow x_2x_5x_{10}$\\
$x_4x_9x_{12} \rightarrow x_1x_6x_{9}$&$x_5x_9x_{12} \rightarrow x_2x_6x_{9}$&$x_6x_9x_{12} \rightarrow x_3x_6x_{9}$\\
$x_4x_9x_{13} \rightarrow x_1x_6x_{10}$&$x_5x_9x_{13} \rightarrow x_2x_6x_{10}$&$x_6x_9x_{13}\rightarrow x_3x_6x_{10}$\\
$x_4x_{10}x_{13} \rightarrow x_1x_7x_{10}$&$x_5x_{10}x_{13} \rightarrow x_2x_7x_{10}$&$x_6x_{10}x_{13}\rightarrow x_3x_7x_{10}$&$x_7x_{10}x_{13}\rightarrow x_4x_7x_{10}$
\end{tabular}
\end{center}
Therefore, $I(\mathfrak{p})$ can be viewed as a $3$-spread principal Borel ideal $B_3(x_4x_7x_{10})$. A direct computation in Macaulay2 \cite{GS} shows that $(x_1, \ldots, x_{10})$ is an associated prime of the third power of $B_3(x_4x_7x_{10})$. Consequently, $\mathfrak{p}$ is also an associated prime of the third power of $I$.
\end{example}
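The enumeration in the example above can be verified mechanically. The following Python sketch (our illustration; the helper name \texttt{borel\_generators} is not from the paper) lists the minimal generators of $B_3(x_2x_7x_{10}x_{13})$ as index tuples, performs the monomial localization at $\mathfrak{p}$ by dropping the first index (which lies in $\{1,2\}$) and shifting the remaining indices by $-3$, and confirms that the result is exactly $B_3(x_4x_7x_{10})$, with the $20$ generators shown in the table:

```python
# Sketch (ours, not from the paper): enumerate the minimal generators of a
# t-spread principal Borel ideal B_t(x_{i_1}...x_{i_d}) as index tuples.
from itertools import combinations

def borel_generators(bounds, t):
    """Index tuples (j_1,...,j_d) with j_l <= bounds[l-1] and j_{l+1}-j_l >= t."""
    d, n = len(bounds), bounds[-1]
    gens = []
    for js in combinations(range(1, n + 1), d):
        if all(js[l] <= bounds[l] for l in range(d)) and \
           all(js[l + 1] - js[l] >= t for l in range(d - 1)):
            gens.append(js)
    return gens

# I = B_3(x_2 x_7 x_10 x_13)
gens = borel_generators((2, 7, 10, 13), 3)

# Monomial localization at p = (x_4,...,x_13): set x_1, x_2 -> 1 (every
# generator starts with index 1 or 2), then shift each surviving index by -3.
localized = sorted({tuple(j - 3 for j in js[1:]) for js in gens})

# The result coincides with B_3(x_4 x_7 x_10), which has 20 generators.
target = sorted(borel_generators((4, 7, 10), 3))
```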
When $u$ is a monomial of degree $3$, we have the following:
\begin{lemma}
Let $u=x_ax_bx_n$ be a $t$-spread monomial and $I=B_t(u)\subset R=K[x_i: x_i \in \mathrm{supp}(I)]$. Then we have the following:
\begin{enumerate}
\item[\em{(i)}] if $b < 2t+1$, then $I$ is normally torsion-free.
\item[\em{(ii)}] if $a=1$ and $b\geq2t+1$, then $I$ is nearly normally torsion-free.
\item[\em{(iii)}] if $a>1$ and $b\geq2t+1$, then $I$ is not nearly normally torsion-free.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) It follows from Theorem~\ref{main-NTF}.
(ii) Each $v\in \mathcal{G}(I)$ has the form $v=x_1x_{i_2}x_{i_3}$, where $i_2\in \{t+1, \ldots, b\}$ and $i_3\in \{2t+1,\ldots,n\}$. It can easily be seen that $I=x_1J$, where $J=B_t(x_{b}x_n)$. It follows from Proposition~\ref{App. 2} that $B_t(x_{b}x_n)$ is nearly normally torsion-free. Then we get the required result by using Lemma~\ref{Lem. Multipe}.
(iii) We will prove the assertion by constructing two monomial prime ideals $\mathfrak{p}_1$ and $\mathfrak{p}_2$ that belong to $\mathrm{Ass}(R/I^k)$, for some $k>1$, but $\mathfrak{p}_1, \mathfrak{p}_2 \notin \mathrm{Ass}(R/I)$. Recall from (\ref{ak}) that if $v \in \mathcal{G}(I)$, then $v=x_{a'}x_{b'}x_{c'}$ with $ a' \in A_1=[1, a]$, $b' \in A_2=[t+1,b]$, and $c'\in A_3=[2t+1, n]$. Since $a>1$, we have $|A_1|> 1$ and since $b\geq 2t+1$, we have $|A_2|> 1$ and $|A_3|>1$.
Let $\mathfrak{p}_1=(x_{t+1}, \ldots, x_{n})$. As shown in the last part of the proof of Theorem~\ref{complete}, $\mathfrak{p}_1 \in \mathrm{Ass}(R/I^k)$, for some $k>1$, but $\mathfrak{p}_1 \notin \mathrm{Ass}(R/I)$. Let $\mathfrak{p}_2=(x_1, x_{t+2}, x_{t+3}, \ldots, x_n)$; then from \cite[Theorem 4.2]{Claudia}, we have $\mathfrak{p}_2 \notin \mathrm{Ass}(R/I)$. We claim that $I(\mathfrak{p}_2)$ can be viewed as the $t$-spread principal Borel ideal $B_t(x_rx_n)$, where $r>t$. Indeed, by substituting $x_{t+1}=1$, all $t$-spread monomials of the form $x_1x_{t+1}x_{c'}$, with $c' \in A_3$, are reduced to $x_1x_{c'}$. Therefore, the monomials $x_1x_{b'}x_{c'} \in \mathcal{G}(I)$ with $b' \in [t+2,b]$ and $c' \in [2t+2,n]$ do not appear in $\mathcal{G}(I(\mathfrak{p}_2))$, since they are divisible by some $x_1x_{c'}$.
Since $a>1$, we have $2 \in A_1$. Moreover, if $v=x_{a'}x_{b'}x_{c'} \in \mathcal{G}(I)$ with $a'\geq 2$, then $b' \geq t+2$ and $c' \geq 2t+2$ because $v$ is a $t$-spread monomial.
By substituting $x_{i}=1$, for all $i \in [2,a]$, all monomials of the form $x_i x_{b'}x_{c'}$, with $b' \in [t+2, b]$, $c' \in [2t+2,n]$ are reduced to $x_{b'}x_{c'}$. This shows that $I(\mathfrak{p}_2)$ is generated in degree 2 by $t$-spread monomials of the following form:
\begin{center}
\begin{tabular}{l}
$x_1x_{c'}$ with $c' \in A_3=[2t+1,n]$; \\
$x_{b'}x_{c'}$ with $b'\in [t+2,b]$ and $c' \in [2t+2,n]$ and $c' - b'\geq t$.
\end{tabular}
\end{center}
Note that $\mathrm{supp}(I(\mathfrak{p}_2))=\{x_1, x_{t+2}, x_{t+3}, \ldots, x_n\}$. Moreover, if we shift the indices of the variables in $\{x_{t+2}, x_{t+3}, \ldots, x_n\}$ by $k\rightarrow k-t$, then $I(\mathfrak{p}_2)$ can be viewed as a $t$-spread principal Borel ideal $B_t(x_{r}x_{s})$, where $s=n-t$ and $r=b-t>t$ because $b \geq 2t+1$. Moreover, the linear relation graph of $B_t(x_{r}x_{s})$ is connected with $|\mathrm{supp}(I(\mathfrak{p}_2))|$ vertices. Hence, $\lim_{k \rightarrow \infty} \mathrm{depth}(R(\mathfrak{p}_2)/I(\mathfrak{p}_2)^k)= 0$, and consequently $\mathfrak{p}_2 \in \mathrm{Ass}(R/I^k)$, for some $k >1$.
\end{proof}
\noindent{\bf Acknowledgments.}
We would like to thank Professor Adam Van Tuyl for his valuable comments during the preparation of Theorem \ref{NTF1}.
In addition, the authors are deeply grateful to the referee for careful reading of the manuscript and valuable suggestions which led to significant improvements in the quality of this paper.
\end{document} |
\begin{document}
\allowdisplaybreaks
\title{\LARGE \bf
Time Optimal Attitude Control for a Rigid Body} \thispagestyle{empty} \pagestyle{empty}
\begin{abstract}
A time optimal attitude control problem is studied for the dynamics of a rigid body. The objective is to minimize the time to rotate the rigid body to a desired attitude and angular velocity while subject to constraints on the control input. Necessary conditions for optimality are developed directly on the special orthogonal group using rotation matrices. They completely avoid singularities associated with local parameterizations such as Euler angles, and they are expressed as compact vector equations. In addition, a discrete control method based on a geometric numerical integrator, referred to as a Lie group variational integrator, is proposed to compute the optimal control input. The computational approach is geometrically exact and numerically efficient. The proposed method is demonstrated by a large-angle maneuver for an elliptic cylinder rigid body.
\end{abstract}
\section{Introduction}
The time optimal control of spacecraft has received consistent interest as rapid attitude maneuvers are critical to various space missions such as military observation and satellite communication. The objective is to reorient the attitude of the spacecraft in a minimal maneuver time with constrained control moments. To accomplish many space missions, large-angle attitude maneuvering capabilities are required.
Time optimal attitude maneuvers have been extensively studied in the literature~\cite{ScrTho.JGCD94}. The time optimal solution is found for a single degree of freedom system, where the attitude maneuver is constrained to an eigen-axis rotation, in~\cite{Ett.AIAA89}. It is known that the eigen-axis rotation is not generally time optimal~\cite{BilWie.JGCD93,SeyKum.JGCD93}. The attitude dynamics is often simplified in an optimality analysis, e.g., by assuming an inertially symmetric rigid body model~\cite{BilWie.JGCD93,SeyKum.JGCD93,ModBha.CDC06}, linearization~\cite{BeyVad.JGCD93} and constant magnitude angular velocity~\cite{ModBha.CDC06}.
The attitude is defined as the orientation of a body-fixed frame with respect to a reference frame, and it is represented by a rotation matrix that lies on the special orthogonal group, \ensuremath{\mathrm{SO(3)}}. However, most existing optimal control schemes for the dynamics of a rigid body use coordinate representations such as Euler angles and quaternions. Minimal attitude representations like Euler angles and Rodrigues parameters have singularities, so they are not desirable for large-angle maneuvers. Non-minimal attitude representations like quaternions have their own problems. Besides the unit norm constraint, the quaternion representation double covers \ensuremath{\mathrm{SO(3)}}, so it has an inevitable ambiguity in expressing the attitude.
The objective of this paper is to solve the time optimal attitude control problem directly on $\ensuremath{\mathrm{SO(3)}}$ using rotation matrices, without the need of any attitude parameterization. Using a specific property of the special orthogonal group, namely that the Lie algebra $\ensuremath{\mathfrak{so}(3)}$ is isomorphic to $\ensuremath{\mathbb{R}}^3$, necessary conditions for optimality are developed and represented as vector equations on $\ensuremath{\mathbb{R}}^3$. They completely avoid the singularities associated with Euler angles, and the resulting expressions for the necessary conditions for optimality are more compact than expressions obtained by using quaternions. Consequently, the attitude dynamics need not be simplified to make the optimal control problem tractable.
The remaining part of this paper is focused on developing a computational approach to solve this optimal control problem. The dynamics of a rigid body has certain geometric features; in addition to the configuration space being a Lie group, the dynamics are characterized by symplectic, momentum and energy preserving properties. The most common numerical integration methods, including the widely used (non-symplectic) explicit Runge--Kutta schemes, preserve neither the Lie group structure nor these geometric properties.
Lie group variational integrators are geometric numerical integrators that preserve these geometric features of the rigid body dynamics~\cite{CMA07}. Based on this structure-preserving numerical integrator, computational approaches have been proposed to solve various optimal control problems for the dynamics of rigid bodies~\cite{CDC06.opt,ACC07.opt,CDC07.opt}. In this paper, the time optimal attitude control problem is discretized at the level of the initial problem formulation, and discrete necessary conditions for optimality are developed using the Lie group variational integrator. This provides geometrically exact but computationally efficient tools.
In summary, the optimization scheme for time optimal attitude maneuvers that we present in this paper has the following important features: (i) necessary conditions for optimality are developed directly on $\ensuremath{\mathrm{SO(3)}}$, and (ii) a computational approach is adopted by using a Lie group variational integrator for overall numerical accuracy and efficiency.
This paper is organized as follows. The time optimal attitude control problem is formulated, and continuous-time necessary conditions for optimality are developed in Section II, and in a parallel fashion, a discrete-time optimal control method is presented in Section III, followed by numerical examples in Section IV.
\section{Time Optimal Attitude Control}
\subsection{Equations of Motion}
We consider the attitude dynamics of a rigid body. The configuration space is the special orthogonal group $\ensuremath{\mathrm{SO(3)}}$,
\begin{align*}
\ensuremath{\mathrm{SO(3)}}=\braces{R\in\ensuremath{\mathbb{R}}^{3\times 3}\,\big|\, R^T R=I_{3\times 3},\quad \det{R}=1},
\end{align*}
where the rotation matrix $R\in\ensuremath{\mathrm{SO(3)}}$ represents the linear transformation from the body-fixed frame to the inertial frame.
The continuous equations of motion for the attitude dynamics of a rigid body are given by
\begin{gather}
J\dot\Omega+\Omega\times J\Omega = u,\label{eqn:Omegadot}\\
\dot R= R\hat\Omega,\label{eqn:Rdot}
\end{gather}
where the matrix $J\in\ensuremath{\mathbb{R}}^{3\times 3}$ is the moment of inertia matrix, the vector $\Omega\in\ensuremath{\mathbb{R}}^3$ is the angular velocity expressed in the body-fixed frame, and the external control moment is denoted by $u\in\ensuremath{\mathbb{R}}^3$. The \textit{hat map} $\hat\cdot:\ensuremath{\mathbb{R}}^3\mapsto\ensuremath{\mathfrak{so}(3)}$ is an isomorphism from $\ensuremath{\mathbb{R}}^3$ to skew-symmetric matrices $\ensuremath{\mathfrak{so}(3)}$, and is defined by the condition $\hat x y = x\times y$ for all $x,y\in\ensuremath{\mathbb{R}}^3$. The inverse map is denoted by the \textit{vee map} $(\cdot)^\vee:\ensuremath{\mathfrak{so}(3)}\mapsto\ensuremath{\mathbb{R}}^3$.
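As a quick illustration of the hat and vee maps (a minimal sketch of ours, not code from the paper), the defining identity $\hat{x}y = x\times y$ can be checked numerically:

```python
# Sketch: the hat map and its inverse on R^3, with a numerical check of the
# defining identity hat(x) y = x cross y. Pure Python, 3x3 matrices as nested
# lists; all helper names are ours, not from the paper.
def hat(x):
    """Hat map R^3 -> so(3): hat(x) y = x cross y."""
    return [[0.0, -x[2], x[1]],
            [x[2], 0.0, -x[0]],
            [-x[1], x[0], 0.0]]

def vee(X):
    """Vee map so(3) -> R^3, the inverse of hat."""
    return [X[2][1], X[0][2], X[1][0]]

def cross(x, y):
    return [x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0]]

def matvec(A, v):
    return [sum(A[i][j]*v[j] for j in range(3)) for i in range(3)]

x, y = [1.0, 2.0, 3.0], [-0.5, 0.25, 4.0]
```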
\subsection{Time Optimal Attitude Control Problem}
The objective of the time optimal attitude control problem is to transfer the given initial attitude and the angular velocity $(R_\circ,\Omega_\circ)$ of the rigid body to the desired values $(R_f,\Omega_f)$ within a minimal maneuver time $t_f$ with constrained control moment $\norm{u}\leq \overline u$ for a given control limit $\overline u\in\ensuremath{\mathbb{R}}$.
\begin{gather*}
\text{For given: } (R_\circ,\Omega_\circ), (R_f,\Omega_f), \bar{u}\\
\min_u \braces{\mathcal{J}=\int_{0}^{t_f} 1\,dt},\\
\text{such that } R(t_f)=R_f,\, \Omega(t_f)=\Omega_f,\\
\text{subject to } \norm{u(t)} \leq \bar{u}\;\;\forall t\in[0,t_f] \text{ and }\refeqn{Omegadot}, \refeqn{Rdot}.
\end{gather*}
\subsection{Necessary Conditions for Optimality}
We solve this optimal control problem using variational principles applied on $\ensuremath{\mathrm{SO(3)}}$. Expressions for variations of a rotation matrix, and transversality conditions are presented, and necessary conditions for optimality are developed.
\paragraph*{Expressions for variations}
We represent a variation of a rotation matrix using the exponential map, $\exp:\ensuremath{\mathfrak{so}(3)}\mapsto\ensuremath{\mathrm{SO(3)}}$
\begin{align}
R^\epsilon = R \exp \epsilon\hat\eta,
\end{align}
where $\epsilon\in(-c,c)$ for $c>0$, and $\hat\eta\in\ensuremath{\mathfrak{so}(3)}$ for $\eta\in\ensuremath{\mathbb{R}}^3$. Since the exponential map is a local diffeomorphism, this expression is well-defined for some constant $c$ for given $\hat\eta$. The infinitesimal variation of the rotation matrix is given by
\begin{align}
\delta R = \frac{d}{d\epsilon}\bigg|_{\epsilon=0} R\exp
\epsilon\hat\eta = R\hat\eta.\label{eqn:delR}
\end{align}
The infinitesimal variation of $R^T\dot R$ is obtained from \refeqn{Rdot} and \refeqn{delR} as
\begin{align}
\delta (R^T\dot R) & = \delta R^T \dot R + R^T \delta \dot R,\nonumber\\
& = -\hat\eta R^T \dot R + R^T (\dot R \hat\eta + R\hat{\dot\eta}),\nonumber\\
& = \hat{\dot\eta}+\hat\Omega\hat\eta - \hat\eta\hat\Omega,\nonumber\\
& = (\dot\eta+ \Omega\times \eta) \widehat{\;}.\label{eqn:delOmega}
\end{align}
The variational expressions given by \refeqn{delR} and \refeqn{delOmega} are the key ingredients to developing necessary conditions for optimality for an arbitrary optimal attitude maneuver.
\paragraph*{Transversality conditions}
The differentials in the terminal attitude and the terminal angular velocity are composed of the variation for a fixed time and a term due to the terminal time variation. Since the terminal boundary conditions are fixed, we have the transversality conditions as
\begin{gather}
\delta R(t_f)+\dot R(t_f)d t_f = R(t_f)\hat\eta(t_f) + \dot R(t_f) d t_f=0,\label{eqn:transR}\\
\delta \Omega(t_f) +\dot\Omega(t_f) d t_f =0.\label{eqn:transOmega}
\end{gather}
\paragraph*{Necessary conditions for optimality}
Define the augmented performance index as
\begin{align*}
\mathcal{J}_a = \int_{0}^{t_f} & 1 +\lambda^\Omega \cdot(
u-\Omega\times J\Omega-J\dot\Omega)\\& + \lambda^R\cdot
(\hat\Omega-R^T\dot R)^{\vee}\,dt,
\end{align*}
where $\lambda^\Omega,\lambda^R\in\ensuremath{\mathbb{R}}^3$ are Lagrange multipliers.
Using \refeqn{delOmega}, the infinitesimal variation of the augmented performance index is given by
\begin{align*}
& \delta \mathcal J_a = \int_{0}^{t_f}
\lambda^\Omega\cdot(\delta u -\delta\Omega\times J\Omega - \Omega\times J\delta\Omega
-J\delta\dot\Omega)\\
& \qquad\qquad\quad +\lambda^R\cdot(\delta\Omega-\dot\eta-\Omega\times
\eta)\,dt\\
& +\big\{1 +\lambda^\Omega \cdot( u-\Omega\times J\Omega-J\dot\Omega) + \lambda^R\cdot
(\hat\Omega-R^T\dot R)^\vee \big\} \Big|_{t_f}d t_f.
\end{align*}
Using integration by parts, we obtain
\begin{align*}
& \delta \mathcal J_a = \int_{0}^{t_f}
\lambda^\Omega\cdot(\delta u-\delta\Omega\times J\Omega - \Omega\times
J\delta\Omega)+\dot\lambda^\Omega\cdot J\delta\Omega\\
& \qquad\qquad\quad +\lambda^R\cdot(\delta\Omega-\Omega\times
\eta)+\dot\lambda^R\cdot \eta \,dt\\
&\hspace{10mm}-\{\lambda^\Omega\cdot J\delta\Omega+\lambda^R\cdot\eta\}\Big|^{t_f}_{0}\\
&+\big\{1 +\lambda^\Omega \cdot( u-\Omega\times J\Omega-J\dot\Omega) + \lambda^R\cdot
(\hat\Omega-R^T\dot R)^\vee \big\} \Big|_{t_f}d t_f.
\end{align*}
Since the initial attitude and the initial angular velocity are fixed, we have $\eta(0)=0$, $\delta\Omega(0)=0$. Substituting and rearranging, the infinitesimal variation of the augmented performance index is given by
\begin{align*}
& \delta \mathcal J_a = \int_{0}^{t_f}
\delta\Omega\cdot\{-J\Omega\times\lambda^\Omega-J(\lambda^\Omega\times\Omega)+J\dot\lambda^\Omega+\lambda^R\}\\
& \qquad\qquad\quad
+\eta\cdot\braces{\Omega\times \lambda^R +\dot\lambda^R} +\delta u \cdot \lambda^\Omega \,dt\\
& \qquad\quad +\big\{1 +\lambda^\Omega \cdot(
u-\Omega\times J\Omega) + \lambda^R\cdot\Omega\big\}\Big|_{t_f}d t_f.
\end{align*}
We choose multiplier equations and boundary conditions such that the expressions in all braces in the above equations are identically zero. Then, we have
\begin{align*}
\delta \mathcal{J}_a & = \int_{0}^{t_f} \delta u \cdot \lambda^\Omega\,dt.
\end{align*}
The optimal control input $u$ must satisfy
\begin{align}
\lambda^\Omega\cdot \delta u \geq 0,\label{eqn:pon}
\end{align}
for all admissible $\delta u$ in $t\in[0,t_f]$. If $\lambda^\Omega=0$ for a finite time period, the control input is not determined by \refeqn{pon}. Such solutions are referred to as singular arcs. Later, it is shown that there is no singular arc in this optimal control problem.
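Away from singular arcs, \refeqn{pon} determines the control pointwise: $u$ must minimize $\lambda^\Omega\cdot u$ over the ball $\norm{u}\leq\bar{u}$. For completeness, the Cauchy--Schwarz argument behind the resulting optimality condition is:

```latex
% Cauchy--Schwarz: for any admissible u,
\begin{align*}
\lambda^\Omega\cdot u \;\geq\; -\norm{\lambda^\Omega}\norm{u}
\;\geq\; -\bar{u}\,\norm{\lambda^\Omega},
\end{align*}
% with equality if and only if u is antiparallel to \lambda^\Omega and
% saturates the constraint:
\begin{align*}
u \;=\; -\bar{u}\,\frac{\lambda^\Omega}{\norm{\lambda^\Omega}}.
\end{align*}
```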
In summary, the necessary conditions for optimality are given by
\begin{list}{$\quad\bullet$}{\setlength{\leftmargin}{0pt}}
\item Multiplier equations
\begin{gather}
J\dot\lambda^\Omega +J(\Omega\times\lambda^\Omega)-J\Omega\times\lambda^\Omega+\lambda^R=0,\label{eqn:lamOmega}\\
\dot\lambda^R +\Omega\times\lambda^R =0,\label{eqn:lamR}
\end{gather}
\item Optimality condition
\begin{align}
u = -\bar{u}\,(\lambda^\Omega/\norm{\lambda^\Omega}),\label{eqn:u}
\end{align}
\item Boundary and transversality conditions
\begin{gather}
(R(0),\Omega(0))=(R_\circ,\Omega_\circ),\\
(R(t_f),\Omega(t_f))= (R_f,\Omega_f),\\
\braces{1 +\lambda^\Omega \cdot(u-\Omega\times J\Omega) + \lambda^R\cdot\Omega}\Big|_{t_f}=0,\label{eqn:trans}
\end{gather}
\end{list}
Assuming that the rigid body is inertially symmetric, $J=I_{3\times 3}$, the multiplier equation \refeqn{lamOmega} is reduced to $\dot\lambda^\Omega +\lambda^R=0$.
These necessary conditions for optimality are valid for attitude maneuvers of arbitrary magnitude as they are developed by using the rotation matrix representation on $\ensuremath{\mathrm{SO(3)}}$. Since the variation of the rotation matrix is expressed in terms of the Lie algebra $\ensuremath{\mathfrak{so}(3)}$ isomorphic to $\ensuremath{\mathbb{R}}^3$, the multiplier equations are written as compact vector equations on $\ensuremath{\mathbb{R}}^3$. The presented necessary conditions for optimality have neither the singularities inherent to Euler angles nor the ambiguities and redundancy associated with quaternions.
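As a small illustration of the structure of these vector equations, the multiplier equation \refeqn{lamR} can be integrated in closed form (an observation we add here; it is a one-line computation using \refeqn{Rdot} and the identity $\hat\Omega\lambda^R=\Omega\times\lambda^R$):

```latex
\begin{align*}
\frac{d}{dt}\parenth{R\lambda^R}
= \dot{R}\lambda^R + R\dot{\lambda}^R
= R\hat{\Omega}\lambda^R - R\parenth{\Omega\times\lambda^R}
= 0,
\end{align*}
% so R(t)\lambda^R(t) is constant along any extremal trajectory, i.e.
\begin{align*}
\lambda^R(t) = R(t)^T R(0)\,\lambda^R(0).
\end{align*}
```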
\subsection{Singular arc}\label{subsec:sa}
In this subsection, we show that singular arcs do not exist along a solution of this time optimal control problem. Suppose that there exists a singular interval, i.e.\ $\lambda^\Omega(t)=0$ for a finite time period in $[0,t_f]$. Then, the minimum principle given by \refeqn{pon} does not lead to a well-defined condition for the optimal control input. Instead, the control input is determined by the requirement that the time derivative of $\lambda^\Omega$ is equal to zero.
Let the $2q$-th time derivative of $\lambda^\Omega$ be the lowest-order derivative in which the control input $u$ appears explicitly with a coefficient that is not identically zero on the singular interval. Then, the integer $q$ is called the order of the singular arc~\cite{Bel.BK75}. Here, due to the special linear structure of the multiplier equations, the singular arc has infinite order. If the condition $\lambda^\Omega=\dot\lambda^\Omega=0$ is satisfied at a single point along the trajectory, then $\lambda^R=\dot\lambda^R=0$ there, and these conditions are satisfied identically throughout the trajectory, independent of the control input. In this case, it is clear that the boundary condition \refeqn{trans} cannot be satisfied. Thus, there is no singular arc in an optimal solution.
\section{Discrete-time Time Optimal Attitude Control}
In this section, we present a computational approach, referred to as discrete optimal control of discrete Lagrangian systems~\cite{SEC07}, to solve the time optimal attitude control problem numerically. In this approach, the dynamics of the rigid body is discretized using the discrete Hamilton's principle, in order to obtain a Lie group variational integrator~\cite{CMA07}. The corresponding discrete equations of motion are imposed as dynamic constraints to be satisfied by using Lagrange multipliers, and necessary conditions for optimality, expressed as discrete equations on multipliers, are obtained.
This method yields substantial computational advantages in finding an optimal control solution. The discrete dynamics are more faithful to the continuous equations of motion, and consequently more accurate solutions to the optimal control problem are obtained. It has been shown that the discrete dynamics are more reliable even for controlled systems, as they compute the energy dissipation rate of controlled systems more accurately~\cite{MarWes.AN01}. In particular, the discrete flow of the Lie group variational integrator remains on $\ensuremath{\mathrm{SO(3)}}$.
Optimal solutions, computed using an indirect approach, are usually sensitive to small variations of the multipliers. This causes difficulties, such as numerical ill-conditioning, when solving the necessary conditions for optimality expressed as a two-point boundary value problem. Sensitivity derivatives, computed using the discrete necessary conditions, are not corrupted by numerical dissipation caused by conventional numerical integration schemes. Thus, the proposed computational approach is more numerically robust, and the necessary conditions can be solved in a computationally efficient manner.
\subsection{Lie Group Variational Integrator}
Since the dynamics of a rigid body has the structure of a Lagrangian or Hamiltonian system, they are symplectic, momentum and energy preserving. These geometric features determine the qualitative behavior of the rigid body dynamics, and they can serve as a basis for theoretical study of rigid body dynamics.
In contrast, the most common numerical integration methods, including the widely used (non-symplectic) explicit Runge--Kutta schemes, preserve neither the Lie group structure nor these geometric properties. Additionally, if we integrate \refeqn{Rdot} using a typical Runge--Kutta scheme, the quantity $R^T R$ inevitably drifts from the identity matrix as the simulation time increases.
In~\cite{CMA07}, Lie group variational integrators are constructed by explicitly adapting Lie group methods~\cite{IserMun.AN00} to the discrete variational principle~\cite{MarWes.AN01}. They have the desirable property that they are symplectic and momentum preserving, and they exhibit good energy behavior for an exponentially long time period. They also preserve the Lie group structure without the use of local charts, reprojection, or constraints. These geometrically exact numerical integration methods yield highly efficient and accurate computational algorithms for rigid body dynamics, and avoid singularities and ambiguities.
Using the results presented in~\cite{CMA07}, a Lie group variational integrator on $\ensuremath{\mathrm{SO(3)}}$ for equations \refeqn{Omegadot}, \refeqn{Rdot} is given by
\begin{gather}
h \widehat{J\Omega_k} = F_k J_d - J_dF_k^T,\label{eqn:findf}\\
R_{k+1} = R_k F_k,\label{eqn:Rkp}\\
J\Omega_{k+1} = F_k^T J\Omega_k + hu_{k+1},\label{eqn:Omegakp}
\end{gather}
where the subscript $k$ denotes the $k$-th step for a fixed integration step size $h\in\ensuremath{\mathbb{R}}$. The matrix $J_d\in\ensuremath{\mathbb{R}}^{3\times 3}$ is a nonstandard moment of inertia matrix defined by $J_d = \frac{1}{2}\mathrm{tr}[J]I_{3\times 3} - J\in\ensuremath{\mathbb{R}}^{3\times 3}$. The matrix $F_k\in\ensuremath{\mathrm{SO(3)}}$ denotes the relative attitude between adjacent integration steps.
For given $(R_k,\Omega_k)$ and control input, \refeqn{findf} is solved to find $F_k$. Then $(R_{k+1},\Omega_{k+1})$ are obtained by \refeqn{Rkp} and \refeqn{Omegakp}. This yields a map $(R_k,\Omega_k)\mapsto(R_{k+1},\Omega_{k+1})$, and this process is repeated. The only implicit part is \refeqn{findf}, where the actual computation of $F_k$ is done in the Lie algebra $\ensuremath{\mathfrak{so}(3)}$ of dimension 3.
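To make the procedure concrete, the following Python sketch (ours, not the authors' implementation) performs the step $(R_k,\Omega_k)\mapsto(R_{k+1},\Omega_{k+1})$ for a diagonal inertia matrix. It can be shown that, writing $F_k=\exp\hat f$ with $\theta=\norm{f}$, the implicit equation \refeqn{findf} is equivalent to the vector equation $hJ\Omega_k = \frac{\sin\theta}{\theta}Jf + \frac{1-\cos\theta}{\theta^2}\,f\times Jf$, which we solve by fixed-point iteration; $\exp$ is evaluated with Rodrigues' formula. For the free rigid body ($u=0$), the computed flow stays on $\ensuremath{\mathrm{SO(3)}}$ to machine precision and nearly preserves the energy:

```python
# Illustrative sketch (ours) of one step of the Lie group variational
# integrator; inertia matrix assumed diagonal for simplicity.
from math import sin, cos, sqrt

J = [3.0, 2.0, 1.0]    # principal moments of inertia (example values)

def cross(a, b):
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]

def coeffs(t):         # sin(t)/t and (1-cos t)/t^2, with series near t = 0
    if t < 1e-8:
        return 1.0 - t*t/6.0, 0.5 - t*t/24.0
    return sin(t)/t, (1.0 - cos(t))/t**2

def solve_f(Om, h, iters=20):
    """Solve h*J*Om = a(t) J f + b(t) f x (J f), t = ||f||, by fixed point."""
    rhs = [h*J[i]*Om[i] for i in range(3)]
    f = [h*Om[i] for i in range(3)]        # initial guess
    for _ in range(iters):
        t = sqrt(sum(x*x for x in f))
        a, b = coeffs(t)
        c = cross(f, [J[i]*f[i] for i in range(3)])
        f = [(rhs[i] - b*c[i]) / (a*J[i]) for i in range(3)]
    return f

def rodrigues(f):                          # F = exp(hat(f))
    t = sqrt(sum(x*x for x in f))
    a, b = coeffs(t)
    fx, fy, fz = f
    H = [[0.0, -fz, fy], [fz, 0.0, -fx], [-fy, fx, 0.0]]
    H2 = [[sum(H[i][k]*H[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    return [[(1.0 if i == j else 0.0) + a*H[i][j] + b*H2[i][j]
             for j in range(3)] for i in range(3)]

def step(R, Om, u, h):
    """One step (R_k, Omega_k) -> (R_{k+1}, Omega_{k+1})."""
    F = rodrigues(solve_f(Om, h))
    Rn = [[sum(R[i][k]*F[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    JOm = [J[i]*Om[i] for i in range(3)]
    # J Omega_{k+1} = F^T J Omega_k + h u_{k+1}
    Omn = [(sum(F[k][i]*JOm[k] for k in range(3)) + h*u[i]) / J[i] for i in range(3)]
    return Rn, Omn

# Free rigid body: propagate and monitor orthogonality and energy.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
Om = [0.5, -0.3, 0.4]
E0 = 0.5*sum(J[i]*Om[i]*Om[i] for i in range(3))
for _ in range(200):
    R, Om = step(R, Om, [0.0, 0.0, 0.0], 0.01)
E = 0.5*sum(J[i]*Om[i]*Om[i] for i in range(3))
ortho = max(abs(sum(R[k][i]*R[k][j] for k in range(3)) - (1.0 if i == j else 0.0))
            for i in range(3) for j in range(3))
```

With $u=0$ the spatial angular momentum $R\,J\Omega$ is conserved exactly by construction, since $R_{k+1}J\Omega_{k+1}=R_kF_kF_k^TJ\Omega_k=R_kJ\Omega_k$.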
One of the distinct features of the Lie group variational integrator is that it preserves both the symplectic property and the Lie group structure of the rigid body dynamics. As such, it exhibits substantially improved computational accuracy and efficiency compared with other geometric integrators that preserve only one of these properties such as non-symplectic Lie group methods~\cite{CMDA07}. The symplectic property is important even in the case of controlled dynamics, since the dissipation rate of the total energy is typically computed inaccurately by non-symplectic integrators~\cite{MarWes.AN01}.
\subsection{Discrete-time Time Optimal Attitude Control Problem}
The objective is to transfer the rigid body in a prescribed way within a minimal discrete maneuver time $N$ with constrained control input.
\begin{gather*}
\text{For given: } (R_\circ,\Omega_\circ), (R_f,\Omega_f), \bar{u}\\
\min_{u_{k+1}} \braces{\mathcal{J}=\sum_{k=0}^{N-1} 1},\\
\text{such that } R_N=R_f,\, \Omega_N=\Omega_f,\\
\text{subject to } \norm{u_{k+1}} \leq \bar{u}\;\; \forall k\in[0,N\!-1] \text{ and }\refeqn{findf}\!-\!\refeqn{Omegakp}.
\end{gather*}
\subsection{Discrete-Time Necessary Conditions for Optimality}
\paragraph*{Expressions for variations}
Similar to \refeqn{delR}, the variation of rotation matrices $R_k$ and $F_k$ are expressed as
\begin{align}
\delta R_k = R_k\hat\eta_k,\quad \delta F_k = F_k\hat\xi_k\label{eqn:delRk}
\end{align}
for $\eta_k,\xi_k\in\ensuremath{\mathbb{R}}^3$. Using this and \refeqn{Rkp}, the variation of $R_k^T R_{k+1}$ is given by
\begin{align}
\delta (R_k^T R_{k+1}) & = \delta R_k^T R_{k+1} + R_k^T \delta R_{k+1},\nonumber\\
& = -\hat\eta_k F_k + F_k \hat\eta_{k+1},\nonumber\\
& = F_k (-F_k^T \eta_k + \eta_{k+1})\widehat{\;}, \label{eqn:delFk}
\end{align}
where the property $\widehat{F^T x} = F^T \hat x F$ for any $x\in\ensuremath{\mathbb{R}}^3$ and $F\in\ensuremath{\mathrm{SO(3)}}$ is used in the last step.
Now we develop an expression for the constrained variation corresponding to \refeqn{findf}. Taking a variation of \refeqn{findf}, we obtain
\begin{align*}
h \widehat{J\delta\Omega_k} & = F_k \hat\xi_k J_d + J_d \hat\xi_kF_k^T.
\end{align*}
Using the property $\hat x A+A^T\hat x=(\braces{\tr{A}I_{3\times 3}-A}x)\widehat{\;}\;$ for all $x\in\ensuremath{\mathbb{R}}^3$ and $A\in\ensuremath{\mathbb{R}}^{3\times 3}$, the above equation can be written as
\begin{align*}
h J\delta\hat\Omega_k & = \widehat{F_k\xi_k} F_kJ_d + J_d F_k^T \widehat{F_k \xi_k},\\
& = (\braces{\tr{F_kJ_d}I_{3\times 3}-F_kJ_d}F_k\xi_k)\widehat{\;}.
\end{align*}
Thus, the vector $\xi_k$ is expressed in terms of $\delta\Omega_k$ as
\begin{align}
\xi_k = \mathcal{B}_k J\delta\Omega_k, \label{eqn:xik}
\end{align}
where $\mathcal{B}_k=hF_k^T\braces{\tr{F_kJ_d}I_{3\times 3}-F_kJ_d}^{-1}\in\ensuremath{\mathbb{R}}^{3\times 3}$. This shows the relationship between $\delta\Omega_k$ and $\delta F_k$.
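As a numerical sanity check, $\mathcal{B}_k$ can be transcribed directly. The values of $h$ and $J_d$ below are illustrative; $J_d$ is chosen to match the inertia matrix of the later numerical example under the assumed standard relation $J=\tr{(J_d)}I_{3\times 3}-J_d$ (function name ours):

```python
import numpy as np

def B_k(F, Jd, h):
    """B_k = h F^T (tr(F Jd) I - F Jd)^{-1}, relating xi_k to J delta Omega_k."""
    M = np.trace(F @ Jd) * np.eye(3) - F @ Jd
    return h * F.T @ np.linalg.inv(M)

h = 0.002                           # illustrative time step
Jd = np.diag([0.16, 0.01, 0.03])    # so that tr(Jd) I - Jd = diag(0.04, 0.19, 0.17)
F = np.eye(3)                       # identity relative rotation

# For F = I, the matrix to invert reduces to tr(Jd) I - Jd.
assert np.allclose(B_k(F, Jd, h), h * np.linalg.inv(np.diag([0.04, 0.19, 0.17])))
```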
\paragraph*{Transversality conditions}
Similar to \refeqn{transOmega}, we choose the transversality conditions for the angular velocity as
\begin{align}
\delta \Omega_N + (\Omega_N-\Omega_{N-1})\delta N =0.\label{eqn:dtrnsOmega}
\end{align}
The variation of the terminal attitude due to the terminal time change is expressed as
\begin{align*}
R_N & \braces{\frac{1}{2}R_{N-1}^T (R_N-R_{N-1})+\frac{1}{2} R_N^T (R_N-R_{N-1})}\delta N\\ & = \frac{1}{2} R_N \braces{F_{N-1}-F_{N-1}^T}\delta N.
\end{align*}
This expression is chosen such that it respects the skew-symmetry of a Lie algebra $\ensuremath{\mathfrak{so}(3)}$ element. Using this, the transversality conditions for the attitude are given by
\begin{align}
R_N \hat \eta_N + \frac{1}{2} R_N \braces{F_{N-1}-F_{N-1}^T}\delta N =0.\label{eqn:dtrnsR}
\end{align}
\paragraph*{Necessary conditions for optimality}
Define the augmented performance index as
\begin{align*}
\mathcal{J}_a & = \sum_{k=0}^{N-1} 1
+\lambda_k^{\Omega}\cdot\braces{-J\Omega_{k+1} + F_k^T J\Omega_k + hu_{k+1}}\nonumber\\
& +\lambda_k^{R}\cdot\frac{1}{2}\parenth{(F_k-F_k^T)^{\vee}-(R_{k}^TR_{k+1}-R_{k+1}^T R_k)^{\vee}}.
\end{align*}
Here we assume that the time step size $h$ is small so that the relative attitude rotation between adjacent integration steps is less than $\frac{\pi}{2}$, i.e. $\norm{(\mathrm{logm}F_k)^{\vee}}< \frac{\pi}{2}$. Then, $F_k$ is equal to $R_k^T R_{k+1}$ if and only if their skew parts are identical, which can be easily shown using Rodrigues' formula. Equation \refeqn{findf} is considered implicitly using a constrained variation.
Using \refeqn{delFk}, the infinitesimal variation of the augmented performance index is given by
\begin{align*}
& \delta\mathcal{J}_a = \sum_{k=0}^{N-1}
\lambda_k^{\Omega}\cdot \braces{h\delta u_{k+1}-J\delta \Omega_{k+1} + \delta
F_k^T J\Omega_k+F_k^T J\delta\Omega_k }\\
& + \lambda_k^{R}\cdot \frac{1}{2} \Big\{F_k(\xi_k+F_k^T\eta_k-\eta_{k+1})\widehat{\;}\\
& \hspace{16mm} +(\xi_k+F_k^T\eta_k-\eta_{k+1})\widehat{\;}\;F_k^T \Big\}^\vee\nonumber\\
& + \{ 1 +\lambda_{N-1}^{\Omega}\cdot\braces{-J\Omega_{N} + F_{N-1}^T J\Omega_{N-1} +
hu_{N}}\}\delta N\\
& + \lambda_{N-1}^{R}\cdot\frac{1}{2} (F_{N-1}-F_{N-1}^T)^{\vee}\delta N\\
& - \lambda_{N-1}^R\cdot\frac{1}{2} (R_{N-1}^TR_{N}-R_{N}^T R_{N-1})^{\vee}\delta N.
\end{align*}
Several algebraic manipulation steps are required here: (i) using the property
$\hat x A+A^T\hat x=(\braces{\tr{A}I_{3\times 3}-A}x)\widehat{\;}\,$ for all
$x\in\ensuremath{\mathbb{R}}^3$ and $A\in\ensuremath{\mathbb{R}}^{3\times 3}$, the expression in the second braces is written in vector form, (ii) equation \refeqn{xik} is substituted to express $\xi_k$ in terms of $\delta\Omega_k$, and (iii) using the fact that $\eta_0=0$ and $\delta\Omega_0=0$, the summation indices for the variables at the $(k+1)$-th step are shifted, which is the discrete analog of integration by parts. Then, we obtain
\begin{align}
\delta\mathcal{J}_a & = \sum_{k=0}^{N-1} \lambda^\Omega_k\cdot h \delta u_{k+1} \nonumber\\%
& + \sum_{k=1}^{N-1} \delta\Omega_k \cdot \Big\{-J\lambda^{\Omega}_{k-1} + J(F_k - \mathcal{B}_k^T \widehat{F_k^T J\Omega_k}) \lambda_k^{\Omega}\nonumber\\
& \hspace*{12mm}+\frac{1}{2}J\mathcal{B}_k^T (\tr{F_k}I-F_k)\lambda_k^R\Big\}\nonumber\\
&+ \sum_{k=1}^{N-1} \eta_k\cdot\Big\{\frac{1}{2}(\tr{F_{k-1}}I-F_{k-1})\lambda_{k-1}^R \nonumber\\
& \hspace*{12mm}- \frac{1}{2}F_k(\tr{F_{k}}I-F_{k})\lambda_k^R\Big\}\nonumber\\
& -\lambda_{N-1}^{\Omega}\cdot J\delta\Omega_N -\lambda_{N-1}^R\cdot\frac{1}{2}(\tr{F_{N-1}}I-F_{N-1}^T)\eta_N\nonumber\\
& +\{1+\lambda^\Omega_{N-1}\cdot\braces{-J\Omega_{N} + F_{N-1}^T J\Omega_{N-1} +
hu_{N}}\}\delta N\nonumber\\
& +\lambda_{N-1}^{R}\cdot\frac{1}{2}(F_{N-1}-F_{N-1}^T)^{\vee}\delta N\nonumber\\
& -\lambda_{N-1}^R\cdot\frac{1}{2} (R_{N-1}^TR_{N}-R_{N}^T R_{N-1})^{\vee}\delta N.\label{eqn:delJa1}
\end{align}
Substituting the transversality conditions \refeqn{dtrnsOmega} and \refeqn{dtrnsR}, all of the expressions in the last four lines of the above equation are reduced to
\begin{align}
\Big\{1& +\lambda^\Omega_{N-1} \cdot\braces{-J\Omega_{N-1} + F_{N-1}^T J\Omega_{N-1} +
hu_{N}}\nonumber\\
& +\lambda_{N-1}^{R}\cdot\frac{1}{4}\parenth{(F_{N-1})^2-(F_{N-1}^T)^2}^{\vee}\big\}\delta N.\label{eqn:delJa11}
\end{align}
We choose the discrete multiplier equations such that the expressions in the first two braces in \refeqn{delJa1} vanish identically, and we choose the boundary condition such that the expression given by \refeqn{delJa11} is equal to zero. Then, we have
\begin{align*}
\delta \mathcal{J}_a & = \sum_{k=0}^{N-1} \lambda^\Omega_k\cdot h \delta u_{k+1}.
\end{align*}
The optimal control input $u_{k+1}$ must satisfy
\begin{align*}
\lambda^\Omega_k \cdot \delta u_{k+1} \geq 0,
\end{align*}
for all admissible $\delta u_{k+1}$ and $k\in\braces{0,\cdots,N-1}$. Here, we do not show that there is no singular arc in the discrete-time optimal control problem. We assume that the result presented in Section \ref{subsec:sa} for the continuous-time case also applies to the discrete-time case.
In summary, the discrete necessary conditions for optimality are given by
\begin{list}{$\quad\bullet$}{\setlength{\leftmargin}{0pt}}
\item Multiplier equations
\begin{gather}
\begin{aligned}
-J\lambda^{\Omega}_{k-1} + J & (F_k - \mathcal{B}_k^T \widehat{F_k^T J\Omega_k}) \lambda_k^{\Omega} \\ &+\frac{1}{2}J\mathcal{B}_k^T (\tr{F_k}I-F_k)\lambda_k^R=0,
\end{aligned}\label{eqn:lamOmegak}\\
(\tr{F_{k-1}}I-F_{k-1})\lambda_{k-1}^R - F_k(\tr{F_{k}}I-F_{k})\lambda_k^R=0.\label{eqn:lamRk}
\end{gather}
\item Optimality condition
\begin{align}
u_{k+1} =
-\bar{u}\,(\lambda_k^\Omega/\norm{\lambda_k^\Omega})\label{eqn:ukp}
\end{align}
\item Boundary and transversality conditions
\begin{gather}
(R_0,\Omega_0)=(R_\circ,\Omega_\circ),\label{eqn:R0}\\
(R_N,\Omega_N)= (R_f,\Omega_f),\\
\begin{aligned}
1&+\lambda^\Omega_{N-1}\cdot\braces{-J\Omega_{N-1} + F_{N-1}^T J\Omega_{N-1} + hu_{N}}\\
&+\lambda_{N-1}^{R}\cdot\frac{1}{4}\parenth{(F_{N-1})^2-(F_{N-1}^T)^2}^{\vee}=0.\label{eqn:BCN}
\end{aligned}
\end{gather}
\end{list}
In the above equations, the only implicit part is \refeqn{findf}. For a given initial condition $\{(R_0,\Omega_0),(\lambda^R_0,\lambda^\Omega_0)\}$, we solve \refeqn{findf} to obtain $F_0$, and we find the control input $u_1$ by \refeqn{ukp}. Then, $(R_1,\Omega_1)$ are obtained by \refeqn{Rkp} and \refeqn{Omegakp}. Using $\Omega_1$, we solve \refeqn{findf} to obtain $F_1$. Finally, $(\lambda^R_1,\lambda^\Omega_1)$ are obtained by \refeqn{lamRk} and \refeqn{lamOmegak}. This yields a map $\{(R_0,\Omega_0),(\lambda^R_0,\lambda^\Omega_0)\}\mapsto\{(R_1,\Omega_1),(\lambda^R_1,\lambda^\Omega_1)\}$, and this process is repeated.
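This marching procedure can be sketched in code. The sketch below assumes, as in standard Lie group variational integrators, that \refeqn{findf} reads $h\widehat{J\Omega_k}=F_kJ_d-J_dF_k^T$, that \refeqn{Rkp} is $R_{k+1}=R_kF_k$, and that \refeqn{Omegakp} is $J\Omega_{k+1}=F_k^TJ\Omega_k+hu_{k+1}$ (consistent with the constraint inside $\mathcal{J}_a$); all function names and numerical values are ours:

```python
import numpy as np
from scipy.optimize import fsolve

def hat(x):
    return np.array([[0.0, -x[2], x[1]], [x[2], 0.0, -x[0]], [-x[1], x[0], 0.0]])

def vee(S):
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def expm_so3(v):
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3)
    K = hat(v / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def solve_F(J, Jd, Omega, h):
    # Solve the implicit relation h*hat(J Omega) = F Jd - Jd F^T for F,
    # parametrizing F = exp(hat(f)) and root-finding on f.
    def residual(f):
        F = expm_so3(f)
        return vee(F @ Jd - Jd @ F.T) - h * (J @ Omega)
    return expm_so3(fsolve(residual, h * Omega))

def step(R, Omega, lamR, lamOmega, J, Jd, h, ubar):
    """One step of the map {(R_k, Omega_k), (lamR_k, lamOmega_k)} -> (k+1)."""
    Fk = solve_F(J, Jd, Omega, h)
    u = -ubar * lamOmega / np.linalg.norm(lamOmega)        # optimality condition
    R1 = R @ Fk                                            # attitude update
    Omega1 = np.linalg.solve(J, Fk.T @ J @ Omega + h * u)  # angular velocity update
    F1 = solve_F(J, Jd, Omega1, h)
    I = np.eye(3)
    # Multiplier updates, solved from the two discrete multiplier equations.
    lamR1 = np.linalg.solve(F1 @ (np.trace(F1) * I - F1),
                            (np.trace(Fk) * I - Fk) @ lamR)
    B1 = h * F1.T @ np.linalg.inv(np.trace(F1 @ Jd) * I - F1 @ Jd)
    lamOmega1 = np.linalg.solve(
        J @ (F1 - B1.T @ hat(F1.T @ J @ Omega1)),
        J @ lamOmega - 0.5 * J @ B1.T @ (np.trace(F1) * I - F1) @ lamR1)
    return R1, Omega1, lamR1, lamOmega1

# March a few steps from rest with arbitrary initial multipliers.
J = np.diag([0.04, 0.19, 0.17])
Jd = 0.5 * np.trace(J) * np.eye(3) - J        # assumes J = tr(Jd) I - Jd
h, ubar = 0.002, 0.1
R, Omega = np.eye(3), np.zeros(3)
lamR, lamOmega = np.ones(3), np.array([1.0, -0.5, 0.3])
for _ in range(5):
    R, Omega, lamR, lamOmega = step(R, Omega, lamR, lamOmega, J, Jd, h, ubar)
assert np.allclose(R.T @ R, np.eye(3))        # the iterates stay on SO(3)
```

Because each attitude update multiplies by an exact rotation matrix, the iterates remain on $\ensuremath{\mathrm{SO(3)}}$ to machine precision, which is the structure-preservation property discussed above.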
The discrete necessary conditions for optimality are given by a two-point boundary value problem. This is to find the optimal discrete flow, multiplier, control input, and terminal maneuver time to satisfy the equations of motion \refeqn{findf}--\refeqn{Omegakp}, multiplier equations \refeqn{lamOmegak}, \refeqn{lamRk}, optimality condition \refeqn{ukp}, and boundary conditions \refeqn{R0}--\refeqn{BCN} simultaneously.
We use a neighboring extremal computational method~\cite{Bry.BK75}. A nominal solution satisfying all of the necessary conditions except the boundary conditions is chosen, and the unspecified initial multiplier is updated iteratively so as to satisfy the specified terminal boundary conditions in the limit. This is also referred to as a shooting method. The main advantage of the neighboring extremal method is that the number of iteration variables is small. In other approaches, the initial guess of the control input history or of the multiplier variables is iterated, so the number of optimization parameters is proportional to the number of discrete time steps.
A difficulty is that the extremal solutions are sensitive to small changes in the unspecified initial multiplier values. The nonlinearities also make it hard to construct an accurate estimate of the sensitivity, which may result in numerical ill-conditioning. Since we adopt a geometric numerical integrator, the sensitivity derivatives along the discrete necessary conditions are free of the numerical dissipation introduced by conventional integration schemes. Thus, they are numerically more robust, and the necessary conditions can be solved in a computationally efficient way.
\section{Numerical Example}
We choose an elliptic cylinder as the rigid body model, with semi-major axis $0.8\,\mathrm{m}$, semi-minor axis $0.2\,\mathrm{m}$, height $0.6\,\mathrm{m}$, and mass $1\,\mathrm{kg}$. The moment of inertia matrix is $J=\mathrm{diag}[0.04,\,0.19,\,0.17]\,\mathrm{kg\,m^2}$, and the maximum control input is chosen as $\overline u =0.1\,\mathrm{Nm}$.
The desired attitude maneuver is a rest-to-rest large angle rotation given by
\begin{align*}
(R_\circ,\Omega_\circ)&=(I_{3\times 3},0)\\ (R_f,\Omega_f)&=(\exp \theta v,0),
\end{align*}
where $v=\frac{1}{\sqrt{3}}[1,\,1,\,1]\in\ensuremath{\mathbb{R}}^3$, and two cases, $\theta=120^\circ$ and $\theta=180^\circ$, are considered.
\begin{figure}
\caption{Time optimal attitude maneuver, $\theta=120^\circ$}
\label{fig:d120}
\end{figure}
\begin{figure}
\caption{Time optimal attitude maneuver, $\theta=180^\circ$}
\label{fig:d180}
\end{figure}
When deriving the discrete necessary conditions for optimality, we allowed the number of discrete steps $N$ to vary. For computational purposes, however, it is not desirable to search for the optimal value of $N$, since the terminal attitude, angular velocity, and multiplier change in a discrete manner as the integer $N$ varies. Thus, it is not guaranteed that the boundary conditions can be satisfied to a desired numerical accuracy.
In the numerical computation, we fix the number of steps by an educated guess, $N=1000$ in this particular numerical example, and we vary the timestep $h$. In essence, we find the seven parameters, initial multiplier $(\lambda^R_0,\lambda^\Omega_0)$ and the time step $h$, satisfying the seven-dimensional terminal boundary conditions \refeqn{R0}--\refeqn{BCN} under the discrete equations of motion, the multiplier equation, and the optimality condition.
We solve this two-point boundary value problem, interpreted as a nonlinear equation to be solved by the shooting method, using a general nonlinear equation solver, namely the Matlab \texttt{fsolve} function. The multipliers are initialized randomly, and the timestep is initialized as $h=0.002$ seconds. The optimal solutions are found in $94$ and $211$ seconds, respectively, on an Intel Pentium M 1.73 GHz processor, and the boundary condition errors are less than $10^{-15}$.
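The structure of this shooting approach (march the system forward from guessed values of the unspecified initial parameters, then root-find on the terminal error) can be illustrated on a scalar toy problem, not the attitude problem itself, here using SciPy's \texttt{fsolve} in place of Matlab's:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy shooting problem: find the initial velocity v0 such that the flow of
# x'' = -x (integrated with a symplectic Euler step) reaches x(T) = 1.
def march(v0, T=1.0, N=1000):
    h = T / N
    x, v = 0.0, float(v0)
    for _ in range(N):
        v -= h * x          # symplectic Euler: update velocity first
        x += h * v
    return x

# Root-find on the terminal boundary condition error march(v0) - 1 = 0.
v0 = fsolve(lambda v: march(v[0]) - 1.0, [1.0])[0]

# The continuous problem has the exact answer v0 = 1/sin(1).
assert abs(v0 - 1.0 / np.sin(1.0)) < 1e-2
```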
The optimized attitude maneuver, angular velocity, multiplier, and control input history are presented in Figures \ref{fig:d120} and \ref{fig:d180}. (Simple animations which show these maneuvers of the rigid body are available at \url{http://www.umich.edu/~tylee}.) The optimized maneuver times are $3.3855$ and $3.8184$ seconds, respectively, and there is no singular arc along the optimized solutions.
\section{Conclusions}
A time optimal attitude control problem, namely rotating a rigid body in a minimal time with a constrained control input, is studied. Necessary conditions for optimality are developed on $\ensuremath{\mathrm{SO(3)}}$ using rotation matrices, without the need for attitude parameterizations such as Euler angles or quaternions. This provides a globally applicable and compact form of the necessary conditions for optimality. For overall computational accuracy and efficiency, a discrete optimal control method is proposed using a Lie group variational integrator.
In this paper, the two-norm of the control moment is constrained, and consequently, there is no singular arc in the optimal solution. The proposed necessary conditions for optimality can be directly applied, without modification, to the case where the absolute value of each component of the control moment is bounded. In this case, the expressions for optimal singular control can be developed, for example, by following the approach given in~\cite{SeyKum.JGCD93}, using the compact multiplier equations presented here.
\end{document} |
\begin{document}
\title{\Large\textbf{Geometrical structures of multipartite quantum systems}}
\begin{abstract}
In this paper I investigate geometrical structures of multipartite quantum systems based on complex projective varieties. These varieties are important in the characterization of quantum entangled states. In particular, I establish a relation between multi-projective Segre varieties and multi-qubit quantum states. I also discuss other geometrical approaches, such as toric varieties, to visualize complex multipartite quantum systems.
\end{abstract}
\section{Introduction}
The characterization of multipartite quantum systems is a very interesting research topic in the foundations of quantum theory and has many applications in the fields of quantum information and quantum computing.
In particular, the geometrical structures of multipartite entangled
pure states are of special importance.
In this paper we will review the construction of
Segre variety for multi-qubit states. We will also show a
construction of geometrical measure of entanglement based on
the Segre variety for multi-qubit systems. Finally we will
establish a relation between the Segre variety, toric variety,
and multi-qubit quantum systems. The relation could be used as a
tool to visualize entanglement properties of multi-qubit states.
Let $\mathcal{Q}_{j},~j=1,2,\ldots,m$, be quantum systems with underlying Hilbert spaces $\mathcal{H}_{\mathcal{Q}_{j}}$. Then the Hilbert space of the multi-qubit system $\mathcal{Q}$ is given by $\mathcal{H}_{\mathcal{Q}}=\mathcal{H}_{\mathcal{Q}_{m}}\otimes \mathcal{H}_{\mathcal{Q}_{m-1}}\otimes\cdots\otimes \mathcal{H}_{\mathcal{Q}_{1}}$, where $\mathcal{H}_{\mathcal{Q}_{j}}=\mathbf{C}^{2}$ and $\dim \mathcal{H}_{\mathcal{Q}}=2^{m}$. Now, let
\begin{equation}\ket{\Psi}=\sum^{1}_{x_{m}=0}\sum^{1}_{x_{m-1}=0}\cdots
\sum^{1}_{
x_{1}=0}\alpha_{x_{m}x_{m-1}\cdots x_{1}}\ket{x_{m}x_{m-1}\cdots
x_{1}},
\end{equation}
be a vector in $\mathcal{H}_{\mathcal{Q}}$,
where
$\ket{x_{m}x_{m-1}\cdots~x_{1}}=
\ket{x_{m}}\otimes\ket{x_{m-1}}\otimes\cdots\otimes\ket{x_{1}}$ form an orthonormal basis of
$\mathcal{H}_{\mathcal{Q}}$
and $\alpha_{x_{m}x_{m-1}\cdots x_{1}}\in \mathbf{C}$. Pure quantum states are then rays in $\mathcal{P}(\mathcal{H}_{\mathcal{Q}})= \mathcal{H}_{\mathcal{Q}}/\sim$, i.e., normalized vectors up to a global phase. Moreover, let
$\rho_{\mathcal{Q}}=\sum^{\mathrm{N}}_{i=1}p_{i}\ket{\Psi_{i}}\bra{\Psi_{i}}$,
for all $0\leq p_{i}\leq 1$ and $\sum^{\mathrm{N}}_{i=1}p_{i}=1$,
denotes a density operator acting on the Hilbert space $\mathcal{H}_{\mathcal{Q}}$.
The density operator
$\rho_{\mathcal{Q}}$ is said to be fully separable, which we will
denote by $\rho^{sep}_{\mathcal{Q}}$, with respect to the Hilbert
space decomposition, if it can be written as $
\rho^{sep}_{\mathcal{Q}}=\sum^\mathrm{N}_{i=1}p_i
\bigotimes^m_{j=1}\rho^i_{\mathcal{Q}_{j}},
$ where $\rho^i_{\mathcal{Q}_{j}}$ denotes a density operator on
Hilbert space $\mathcal{H}_{\mathcal{Q}_{j}}$. If
$\rho^{p}_{\mathcal{Q}}$ represents a pure state, then the quantum
system is fully separable if $\rho^{p}_{\mathcal{Q}}$ can be written
as
$\rho^{sep}_{\mathcal{Q}}=\bigotimes^m_{j=1}\rho_{\mathcal{Q}_{j}}$,
where $\rho_{\mathcal{Q}_{j}}$ is the density operator on
$\mathcal{H}_{\mathcal{Q}_{j}}$. If a state is not separable, then
it is said to be an entangled state.
\section{Projective geometry}
In this section we give a short introduction to projective varieties.
Let $\mathbf{C}[z]=\mathbf{C}[z_{1},z_{2}, \ldots,z_{n}]$ denote the polynomial
algebra in $n$ variables with complex coefficients. Then, given a
set of $r$ polynomials $\{g_{1},g_{2},\ldots,g_{r}\}$ with $g_{i}\in
\mathbf{C}[z]$, we define a complex affine variety as
\begin{eqnarray}
&&\mathcal{V}_{\mathbf{C}}(g_{1},g_{2},\ldots,g_{r})=\{P\in\mathbf{C}^{n}:
g_{i}(P)=0~\forall~1\leq i\leq r\},
\end{eqnarray}
where $P\in\mathbf{C}^{n}$ is called a point of $\mathbf{C}^{n}$ and, if
$P=(a_{1},a_{2},\ldots,a_{n})$ with $a_{j}\in\mathbf{C}$, the
$a_{j}$ are called the coordinates of $P$.
A complex projective space $\mathbf{CP}^{n}$ is defined to be the
set of lines through the origin in $\mathbf{C}^{n+1}$, that is,
\begin{equation}
\mathbf{CP}^{n}=\frac{\mathbf{C}^{n+1}-\{0\}}{\sim},\qquad
u\sim v \Leftrightarrow v_{i}=\lambda u_{i}~\forall~1\leq i\leq n+1~\mathrm{for~some}~\lambda\in
\mathbf{C}-\{0\},
\end{equation}
where $u=(u_{1},\ldots,u_{n+1})$ and $v=(v_{1},\ldots,v_{n+1})$. Given a set of homogeneous polynomials
$\{g_{1},g_{2},\ldots,g_{r}\}$ with $g_{i}\in \mathbf{C}[z]$, we define a
complex projective variety as
\begin{eqnarray}
&&\mathcal{V}(g_{1},\ldots,g_{r})=\{O\in\mathbf{CP}^{n}:
g_{i}(O)=0~\forall~1\leq i\leq r\},
\end{eqnarray}
where $O=[a_{1},a_{2},\ldots,a_{n+1}]$ denotes the equivalence class
of the point $(a_{1},a_{2},\ldots,a_{n+1})\in\mathbf{C}^{n+1}$. We can view the affine complex
variety
$\mathcal{V}_{\mathbf{C}}(g_{1},g_{2},\ldots,g_{r})\subset\mathbf{C}^{n+1}$
as a complex cone over the complex projective variety
$\mathcal{V}(g_{1},g_{2},\ldots,g_{r})$.
We can map the
product of spaces $\underbrace{\mathbf{CP}^{1}\times\mathbf{CP}^{1}
\times\cdots\times\mathbf{CP}^{1}}_{m~\mathrm{times }}$ into a projective space by
its Segre embedding as follows. The Segre map
\begin{equation}
\begin{array}{ccc}
\mathcal{S}_{2,\ldots,2}:\mathbf{CP}^{1}\times\mathbf{CP}^{1}
\times\cdots\times\mathbf{CP}^{1}&\longrightarrow&
\mathbf{CP}^{2^{m}-1},\\
\end{array}
\end{equation}
is defined by $ ((\alpha^{1}_{0},\alpha^{1}_{1}),\ldots,
(\alpha^{m}_{0},\alpha^{m}_{1})) \longmapsto
(\alpha^{m}_{i_{m}}\alpha^{m-1}_{i_{m-1}}\cdots\alpha^{1}_{i_{1}})$, where $(\alpha^{i}_{0},\alpha^{i}_{1})$ are
homogeneous coordinates on the $i$th complex projective space
$\mathbf{CP}^{1}$ and the $\alpha_{i_{m}i_{m-1}\cdots i_{1}}$, $0\leq i_{s}\leq 1$,
are homogeneous coordinates on
$\mathbf{CP}^{2^{m}-1}$. Moreover, let us consider
a multi-qubit quantum system
and let
$
\mathcal{A}=\left(\alpha_{i_{m}i_{m-1}\ldots i_{1}}\right)_{0\leq
i_{s}\leq 1},
$
denote the coefficient tensor of the state. $\mathcal{A}$ can be realized as the
set $\{(i_{m},i_{m-1},\ldots,i_{1}):0\leq i_{s}\leq
1,~\forall~s\}$, in which each point $(i_{m},i_{m-1},\ldots,i_{1})$
is assigned the value $\alpha_{i_{m}i_{m-1}\ldots i_{1}}$. For each $s=1,2,\ldots,m$, a two-by-two minor about the
$s$-th coordinate of $\mathcal{A}$ is given by
\begin{eqnarray}\label{segreply1}
&&\mathcal{I}^{m}_{\mathcal{A}}=
\alpha_{x_{m}x_{m-1}\ldots x_{1}}\alpha_{y_{m}y_{m-1}\ldots y_{1}}
-
\alpha_{x_{m}x_{m-1}\ldots x_{s+1}y_{s}x_{s-1}\ldots
x_{1}}\alpha_{y_{m}y_{m-1} \ldots y_{s+1} x_{s}y_{s-1}\ldots y_{1}}.
\end{eqnarray}
Then the ideal $\mathcal{I}^{m}_{\mathcal{A}}$ is generated by these two-by-two minors.
The image of the Segre embedding,
$\mathrm{Im}(\mathcal{S}_{2,2,\ldots,2})$, which
is the intersection of these families of quadric hypersurfaces in
$\mathbf{CP}^{2^{m}-1}$, is called the Segre variety,
and it is given by
\begin{eqnarray}\label{eq: submeasure}
\mathrm{Im}(\mathcal{S}_{2,2,\ldots,2})&=&\bigcap_{\forall
s}\mathcal{V}\left(\mathcal{I}^{m}_{\mathcal{A}}\right).
\end{eqnarray}
This is the space of separable multi-qubit states.
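Numerically, the defining quadrics are the $2\times2$ minors of the single-site flattenings of the coefficient tensor, so membership in the Segre variety can be tested directly; a sketch (function names ours):

```python
import numpy as np
from itertools import combinations

def max_segre_minor(alpha):
    """Largest |2x2 minor| over all single-site flattenings of the coefficient
    tensor; it vanishes exactly on the Segre variety (product states)."""
    worst = 0.0
    for s in range(alpha.ndim):
        M = np.moveaxis(alpha, s, 0).reshape(2, -1)   # mode-s flattening
        for c1, c2 in combinations(range(M.shape[1]), 2):
            worst = max(worst, abs(M[0, c1] * M[1, c2] - M[0, c2] * M[1, c1]))
    return worst

# |000> is a product state (on the variety); GHZ is entangled (off it).
prod = np.zeros((2, 2, 2)); prod[0, 0, 0] = 1.0
ghz = np.zeros((2, 2, 2)); ghz[0, 0, 0] = ghz[1, 1, 1] = 1.0 / np.sqrt(2.0)

assert max_segre_minor(prod) == 0.0
assert max_segre_minor(ghz) > 0.4     # the minor a_000 a_111 equals 1/2
```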
Moreover, we propose a measure of
entanglement for general pure multipartite states based on modified Segre variety as follows
\begin{eqnarray}\label{EntangSeg2}\nonumber
&&\mathcal{F}(\mathcal{Q}^{p}_{m}(2,2\ldots,2))
=(\mathcal{N}\sum_{\forall \sigma\in\mathrm{Perm}(u)}\sum_{
k_{j},l_{j}, j=1,2,\ldots,m}\\&&|\alpha_{k_{1}k_{2}\ldots
k_{m}}\alpha_{l_{1}l_{2}\ldots l_{m}} -
\alpha_{\sigma(k_{1})\sigma(k_{2})\ldots\sigma(k_{m})}\alpha_{\sigma(l_{1})\sigma(l_{2})
\ldots\sigma(l_{m})}|^{2})^{\frac{1}{2}},
\end{eqnarray}
where $\sigma\in\mathrm{Perm}(u)$ denotes all possible sets of
permutations of indices for which $k_{1}k_{2}\ldots k_{m}$ are
replaced by $l_{1}l_{2}\ldots l_{m}$, and $u$ is the number of
indices to permute.
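For two qubits the sum reduces, up to the normalization $\mathcal{N}$, to the single minor $|\alpha_{00}\alpha_{11}-\alpha_{01}\alpha_{10}|$; a quick check (function names ours), using the standard fact that twice this modulus is the concurrence of a normalized two-qubit pure state:

```python
import numpy as np

def minor_measure(alpha):
    """|a00*a11 - a01*a10| for a two-qubit coefficient matrix alpha.
    For a normalized pure state, 2*|det(alpha)| is the concurrence."""
    return abs(alpha[0, 0] * alpha[1, 1] - alpha[0, 1] * alpha[1, 0])

bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)   # maximally entangled
prod = np.outer([1.0, 0.0], [1.0, 1.0]) / np.sqrt(2.0)     # product state

assert np.isclose(2.0 * minor_measure(bell), 1.0)          # concurrence 1
assert np.isclose(minor_measure(prod), 0.0)                # separable
```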
As an example we discuss the four-qubit states,
for which we first encounter
these new varieties. For this quantum system we can partition the
Segre embedding as follows:
$$\xymatrix{ \mathbf{P}^{1}\times\mathbf{P}^{1}\times\mathbf{P}^{1}\times\mathbf{P}^{1}
\ar[d]_{\mathcal{S}_{2,\ldots,2}}\ar[r]_{\mathcal{S}_{2,2}\otimes
I\otimes I}&\mathbf{P}^{3}
\times\mathbf{P}^{1}\times\mathbf{P}^{1}\ar[d]_{I\otimes\mathcal{S}_{2,2}}\\
\mathbf{P}^{2^{4}-1}&\mathbf{P}^{3}
\times\mathbf{P}^{3}\ar[l]_{\mathcal{S}_{4,4}}}.$$
For the Segre variety,
which represents completely decomposable tensors, the diagram
commutes and
$\mathcal{S}_{2,\ldots,2}=(\mathcal{S}_{4,4})
\circ(I\otimes\mathcal{S}_{2,2})\circ(\mathcal{S}_{2,2}\otimes
I\otimes I)$.
\section{Toric variety and multi-qubit quantum systems}
Let $S\subset \mathbf{R}^{n}$ be a finite subset; then a convex polyhedral cone is defined by
$
\sigma=\mathrm{Cone}(S
)=\left\{\sum_{v\in S}\lambda_{v}v|\lambda_{v}\geq0\right\}.$
In this case, $\sigma$ is said to be generated by $S$. Similarly, a polytope is defined by
$
P=\mathrm{Conv}(S)=\left\{\sum_{v\in S}\lambda_{v}v|\lambda_{v}\geq0, \sum_{v\in S}\lambda_{v}=1\right\}.
$
In other words, $P$ is the convex hull of $S$. A convex polyhedral cone is called simplicial if it is generated by a linearly independent set. Now, let $\sigma\subset \mathbf{R}^{n}$ be a convex polyhedral cone and $\langle u,v\rangle$ denote the natural pairing between $u\in \mathbf{R}^{n*}$ and $v\in\mathbf{R}^{n}$. Then the dual cone of $\sigma$ is defined by
$$
\sigma^{\wedge}=\left\{u\in \mathbf{R}^{n*}|\langle u,v\rangle\geq0~\forall~v\in\sigma\right\},
$$
where $\mathbf{R}^{n*}$ is dual of $\mathbf{R}^{n}$.
We call a convex polyhedral cone strongly convex if $\sigma\cap(-\sigma)=\{0\}$.
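Since every element of $\sigma$ is a nonnegative combination of the generators, membership of $u$ in the dual cone $\sigma^{\wedge}$ only needs to be checked against the generating set; a minimal sketch (function name ours):

```python
import numpy as np

def in_dual_cone(u, generators, tol=1e-12):
    """u is in sigma^ iff <u, v> >= 0 for every generator v of sigma."""
    return all(np.dot(u, v) >= -tol for v in generators)

# sigma = Cone({e1, e2}) is the first quadrant of R^2 and is self-dual.
S = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
assert in_dual_cone(np.array([2.0, 3.0]), S)
assert not in_dual_cone(np.array([1.0, -1.0]), S)
```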
The algebra of Laurent polynomials is defined by
$
\mathbf{C}[z,z^{-1}]=\mathbf{C}[z_{1},z^{-1}_{1},\ldots,z_{n},z^{-1}_{n}],
$
where $z_{i}=\chi^{e^{*}_{i}}$. The terms of the form $\lambda \cdot z^{\beta}=\lambda z^{\beta_{1}}_{1}z^{\beta_{2}}_{2}\cdots z^{\beta_{n}}_{n}$ for $\beta=(\beta_{1},\beta_{2},\ldots,\beta_{n})\in \mathbf{Z}^{n}$ and $\lambda\in \mathbf{C}^{*}$ are called Laurent monomials. A ring $R$ of Laurent polynomials is called a monomial algebra if it is a $\mathbf{C}$-algebra generated by Laurent monomials. Moreover, for a lattice cone $\sigma$, the ring
$$R_{\sigma}=\{f\in \mathbf{C}[z,z^{-1}]:\mathrm{supp}(f)\subset \sigma\}
$$
is a finitely generated monomial algebra, where the support of a Laurent polynomial $f=\sum_{i}\lambda_{i}z^{i}$ is defined by
$$\mathrm{supp}(f)=\{i\in \mathbf
{Z}^{n}:\lambda_{i}\neq0\}.$$
Now, for a lattice cone $\sigma$ we can define an affine toric variety to be the maximal spectrum $$\mathbf{X}_{\sigma}=\mathrm{Spec}R_{\sigma}.$$ A toric variety
$\mathbf{X}_{\Sigma}$ associated to a fan $\Sigma$ is the result of gluing affine varieties
$\mathbf{X}_{\sigma}=\mathrm{Spec}R_{\sigma}$ for all $\sigma\in \Sigma$ by identifying $\mathbf{X}_{\sigma}$ with the corresponding Zariski open subset in $\mathbf{X}_{\sigma^{'}}$ if
$\sigma$ is a face of $\sigma^{'}$. That is,
first we take the disjoint union of all affine toric varieties $\mathbf{X}_{\sigma}$ corresponding to the cones of $\Sigma$.
Then by gluing all these affine toric varieties together we get $\mathbf{X}_{\Sigma}$.
A compact toric variety $\mathcal{X}_{\Sigma}$ is called projective if there exists an injective morphism
$$\Phi:\mathcal{X}_{\Sigma}\longrightarrow\mathbf{P}^{r}$$ of $\mathcal{X}_{\Sigma}$
into some projective space such that $\Phi(\mathcal{X}_{\Sigma})$ is Zariski
closed in $\mathbf{P}^{r}$.
A toric variety $\mathcal{X}_{\Sigma}$ is equivariantly projective if and only if $\Sigma$ is strongly polytopal. Now, let $\mathcal{X}_{\Sigma}$ be equivariantly projective and let the morphism
$\Phi$ be the embedding induced by the rational map $\phi:\mathcal{X}_{\Sigma} \longrightarrow \mathbf{P}^{r}$
defined by $p \mapsto[z^{m_{0}},z^{m_{1}},\ldots,z^{m_{r}}],$ where $z^{m_{l}}(p)=p^{m_{l}}$ for $p=(p_{1},p_{2},\ldots, p_{n})$. Then, the image $\Phi(\mathcal{X}_{\Sigma})$ is the set of common solutions of finitely many monomial equations
\begin{equation}
z^{\beta_{0}}_{i_{0}}z^{\beta_{1}}_{i_{1}}\cdots z^{\beta_{s}}_{i_{s}}=z^{\beta_{s+1}}_{i_{s+1}}z^{\beta_{s+2}}_{i_{s+2}}\cdots z^{\beta_{r}}_{i_{r}}
\end{equation}
which satisfy the following relationships
\begin{equation}
\beta_{0}m_{0}+\beta_{1}m_{1}+\cdots +\beta_{s}m_{s}=\beta_{s+1}m_{s+1}+\beta_{s+2}m_{s+2}+\cdots +\beta_{r}m_{r}
\end{equation}
and
\begin{equation}
\beta_{0}+\beta_{1}+\cdots +\beta_{s}=\beta_{s+1}+\beta_{s+2}+\cdots +\beta_{r}
,
\end{equation}
for all $\beta_{l}\in \mathbf{Z}_{\geq 0}$ and $l=0,1,\ldots, r$ \cite{Ewald}.
As we have seen, for multi-qubit systems the separable states are given by the Segre embedding of $\mathbf{CP}^{1}\times\mathbf{CP}^{1}\times\cdots\times\mathbf{CP}^{1}$.
Now, for example, let $z_{1}=\alpha^{1}_{1}/\alpha^{1}_{0}, z_{2}=\alpha^{2}_{1}/\alpha^{2}_{0},\ldots, z_{m}=\alpha^{m}_{1}/\alpha^{m}_{0}$.
Then we can cover $\mathbf{CP}^{1}\times\mathbf{CP}^{1}
\times\cdots\times\mathbf{CP}^{1}$ by $2^{m}$ charts
\begin{eqnarray}
\nonumber &&
\mathbf{X}_{\check{\Delta}_{1}}=\{(z_{1},z_{2},\ldots,z_{m})\},\\\nonumber&&
~\mathbf{X}_{\check{\Delta}_{2}}=\{(z^{-1}_{1},z_{2},\ldots,z_{m})\},\\\nonumber&&
~~~~~~~~~~\vdots
\\\nonumber&&
\mathbf{X}_{\check{\Delta}_{2^{m}-1}}=\{(z_{1},z^{-1}_{2},\ldots,z^{-1}_{m})\},
\\\nonumber&&
\mathbf{X}_{\check{\Delta}_{2^{m}}}=\{(z^{-1}_{1},z^{-1}_{2},\ldots,z^{-1}_{m})\}.
\end{eqnarray}
Let us consider the $m$-hypercube $\Sigma$ centered at the origin with vertices $(\pm1,\ldots,\pm1)$. This gives the toric variety $\mathcal{X}_{\Sigma}=
\mathbf{CP}^{1}\times\mathbf{CP}^{1}\times\cdots\times\mathbf{CP}^{1}$.
Now, the map $\Phi(\mathcal{X}_{\Sigma})$ is a set of the common
solutions of the following monomial equations
\begin{equation}
x^{\beta_{0}}_{i_{0}}x^{\beta_{1}}_{i_{1}}\cdots
x^{\beta_{2^{m-1}-1}}_{i_{2^{m-1}-1}}=x^{\beta_{2^{m-1}}}_{i_{2^{m-1}}}
\cdots x^{\beta_{2^{m}-1}}_{i_{2^{m}-1}}
\end{equation}
which gives the quadratic polynomials $\alpha_{k_{1}k_{2}\ldots
k_{m}}\alpha_{l_{1}l_{2}\ldots l_{m}} = \alpha_{k_{1}k_{2}\ldots
l_{j}\ldots k_{m}}\alpha_{l_{1}l_{2} \ldots k_{j}\ldots l_{m}}$ for
all $j=1,2,\ldots,m$, coinciding with the generators of the Segre ideal.
Moreover, we have
\begin{equation}
\Phi(\mathcal{X}_{\Sigma})=\mathrm{Specm}\, \mathbf{C}[\alpha_{00\ldots
0},\alpha_{00\ldots 1},\ldots,\alpha_{11\ldots
1}]/\mathcal{I}(\mathcal{A}),
\end{equation}
where $\mathcal{I}(\mathcal{A})=\langle \alpha_{k_{1}k_{2}\ldots
k_{m}}\alpha_{l_{1}l_{2}\ldots l_{m}} - \alpha_{k_{1}k_{2}\ldots
l_{j}\ldots k_{m}}\alpha_{l_{1}l_{2} \ldots
k_{j}\ldots l_{m}}\rangle_{\forall j;k_{j},l_{j}=0,1}$.
This toric variety describes the space of separable states of multi-qubit quantum systems.
In summary, we have investigated the geometrical structures of multi-qubit quantum states based on the Segre and toric varieties. We showed that multi-qubit states can be characterized and visualized by the embedding of a toric variety in a complex projective space. These results are a step in our voyage into the realm of quantum theory and towards a better understanding of the nature of multipartite quantum systems.
\begin{flushleft}
\textbf{Acknowledgments:} The work was supported by the Swedish Research Council (VR).
\end{flushleft}
\end{document} |
\begin{document}
\title{Convolution based smooth approximations to the absolute value function
with application to non-smooth regularization}
\begin{abstract}
We present new convolution based smooth approximations to the absolute value function
and apply them to construct gradient based algorithms such as the nonlinear
conjugate gradient scheme to obtain sparse, regularized solutions of linear
systems $Ax = b$, a problem often tackled via
iterative algorithms which attack the corresponding non-smooth minimization
problem directly. In contrast, the approximations we propose allow us to replace
the generalized non-smooth sparsity inducing functional by a smooth
approximation of which we can readily compute gradients and Hessians. The resulting
gradient based algorithms often yield a good estimate for the sought
solution in few iterations and can either be used directly or to quickly warm
start existing algorithms.
\end{abstract}
\section{Introduction}
Consider the linear system $Ax=b$, where
$A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$. Often, in linear
systems arising from physical inverse problems, we have
more unknowns than data: $m \ll n$ \cite{tarantola82} and the right hand side of the system
which we are given contains noise. In such a setting,
it is common to introduce a constraint on the solution, both to account for the
possible ill-conditioning of $A$ and noise in $b$ (regularization) and for the
lack of data with respect to the number of unknown variables in the linear system.
A commonly used constraint is sparsity: to require the
solution $x$ to have few nonzero elements compared to the dimension of $x$.
A common way of finding a sparse solution to
the under-determined linear system
$Ax = b$ is to solve the classical Lasso problem \cite{DonohoSparseSolutions}. That is,
to find $\bar{x} = \arg\min_x F_1(x)$ where
\[
F_1(x) = \|Ax-b\|_2^2+2\tau\|x\|_1,
\]
i.e., the least squares problem
with an $\ell_1$ regularizer, governed by the regularization parameter $\tau>0$
\cite{ingrid_thresholding1}.
For any $p\geq1$, the map
$\|x\|_p := \left( \sum_{k=1}^n |x_k|^p \right)^{\frac{1}{p}}$
(for any $x\in\mathbb{R}^n$) is called the $\ell_p$-norm on
$\mathbb{R}^n$. For $p=1$, $\|\cdot\|_1$ is called the $\ell_1$ norm and is convex.
Besides $F_1(x)$, we also consider the more general $\ell_p$ functional (for
$p > 0$) of which $\ell_1$ is a special case:
\begin{equation}
\label{eq:lp_funct}
F_p(x) = \|Ax - b\|_2^2
+ 2\tau \left( \sum_{k=1}^n |x_k|^p \right)^{\frac{1}{p}}.
\end{equation}
As $p \to 0$, the sum $\sum_{k=1}^n |x_k|^p$ appearing in the right term approximates
the count of nonzeros, the so called $\ell_0$ ``norm''
(which is not a proper norm):
\begin{equation*}
\|x\|_0
= \lim_{p \to 0} \|x\|_p^p
= \lim_{p \to 0}
\sum_{k=1}^n |x_k|^p,
\end{equation*}
which can be seen from Figure \ref{fig:fvals_to_diff_p}.
\begin{figure*}
\caption{$|x|^p$ plotted for different values of $p$, as $p \to 0$, the plot approaches an
indicator function.}
\label{fig:fvals_to_diff_p}
\end{figure*}
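A quick numerical check of this limiting behavior (function name ours): each term $|x_k|^p$ tends to the indicator of $x_k\neq0$, so the sum $\sum_k|x_k|^p$ tends to the number of nonzero entries.

```python
import numpy as np

def lp_p(x, p):
    """sum_k |x_k|^p, which tends to the number of nonzeros as p -> 0."""
    return np.sum(np.abs(x) ** p)

x = np.array([0.5, -2.0, 0.0, 3.0])     # three nonzero entries
assert lp_p(x, 1.0) == 5.5              # the ell_1 norm
assert abs(lp_p(x, 0.01) - 3.0) < 0.05  # approaches the nonzero count
```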
For $0<p<1$, \eqref{eq:lp_funct} is not convex.
Nevertheless, the minimization of non-smooth, non-convex functions has been shown to
produce good results in some compressive sensing applications
\cite{Chartrand09fastalgorithms}. The non-smoothness of the functional $F_p(x)$, however, complicates
its minimization from an algorithmic point of view.
The non-smooth part of \eqref{eq:lp_funct}
is due to the absolute value function $g(x_k) = |x_k|$.
Because the gradient of $F_p(x)$ does not exist everywhere,
different minimization techniques, such as sub-gradient methods, are
usually used \cite{ShorMinimization}.
For the convex $p=1$ case, various thresholding based
methods have become popular. A particularly successful example
is the soft thresholding
based method FISTA \cite{MR2486527}. This algorithm is an accelerated version of a soft thresholded
Landweber iteration \cite{MR0043348}:
\begin{equation}
\label{eq:ista}
x^{n+1} = \mathbb{S}_{\tau}(x^n + A^T b - A^T A x^n).
\end{equation}
The soft thresholding function $\mathbb{S}_{\tau}:\mathbb{R}^n\to \mathbb{R}^n$ \cite{ingrid_thresholding1} is defined by
\begin{equation*}
\left(\mathbb{S}_{\tau}(x)\right)_k = \sgn(x_k) \max{\{0, |x_k| - \tau\}}, \ \forall\, k=1,\ldots,n,\ \forall\, x\in\mathbb{R}^n.
\end{equation*}
The scheme \eqref{eq:ista} is known to converge, albeit slowly, to the
$\ell_1$ minimizer \cite{ingrid_thresholding1}.
The thresholding in \eqref{eq:ista} is performed on
$x^n - \nabla_x (\frac{1}{2} ||Ax^n - b||_2^2) = x^n - A^T (A x^n - b)$, which is a very
simple gradient based scheme with a constant line search \cite{EnglRegularization}.
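As a concrete illustration, the thresholded Landweber iteration \eqref{eq:ista} takes only a few lines of code. The sketch below (in Python with NumPy; the random matrix, the sparse test vector, and the choice $\tau = 10^{-3}$ are purely illustrative) rescales $A$ so that $\|A\|_2 \leq 1$, as the derivation of \eqref{eq:ista} assumes:

```python
import numpy as np

def soft_threshold(x, tau):
    # (S_tau(x))_k = sgn(x_k) * max(0, |x_k| - tau), applied componentwise
    return np.sign(x) * np.maximum(0.0, np.abs(x) - tau)

def ista(A, b, tau, n_iter=500, x0=None):
    # Thresholded Landweber iteration: x^{n+1} = S_tau(x^n + A^T b - A^T A x^n)
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (b - A @ x), tau)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, 2)              # rescale so that ||A||_2 = 1
x_true = np.zeros(40)
x_true[[3, 17]] = [1.0, -2.0]          # a sparse test signal
b = A @ x_true
x_hat = ista(A, b, tau=1e-3)
```

Each iteration costs two matrix--vector products plus one componentwise soft threshold, and the functional value is non-increasing along the iterates.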
Naturally, more advanced gradient schemes may provide better numerical
performance; however, they can be used only if we are able to
compute the gradient of the functional we want to minimize.
In this article, we present new smooth approximations to the non-smooth
absolute value function $g(t) = |t|$, computed via convolution with a Gaussian function.
This allows us to replace the non-smooth objective function
$F_p(x)$ by a smooth functional $H_{p,\sigma}(x)$, which is close to $F_p(x)$ in value
(as the parameter $\sigma \to 0$). Since the approximating functional
$H_{p,\sigma}(x)$ is smooth,
we can compute its gradient vector $\nabla_x H_{p,\sigma}(x)$ and Hessian matrix
$\nabla^2_x H_{p,\sigma}(x)$. We are then able to use gradient based
algorithms such as conjugate gradients to approximately minimize $F_p(x)$
by working with the approximate functional and gradient pair. The resulting
gradient-based methods are simple to implement and in many instances yield good
numerical performance within a few iterations.
We remark that this article is not the first to use
smooth approximations for sparsity constrained problems. A smoothed $\ell_0$ norm
approach was proposed in \cite{SmoothL0}. Here, we propose a more
general method which applies to $F_p(x)$, including the popular $\ell_1$ case.
The absolute value function which appears in non-smooth regularization is just one application
of the convolution based smoothing approach we introduce here, which can likely be extended to
different non-smooth functions.
\section{Smooth approximation of absolute value function}
\subsection{Some existing smoothing techniques}
One simple smooth approximation to the absolute value function is given by
$s_{\sigma}(t) = \sqrt{t^2 + \sigma^2}$ with real $\sigma > 0$.
\begin{lemma}
\label{lem:s_fcn_def}
The approximation $s_{\sigma}(t)$ to $|t|$ satisfies:
\begin{equation}
\label{eq:s_fcn_derivative}
\frac{\mathrm{d}}{\mathrm{d} t} s_{\sigma}(t) = t \left(t^2 + \sigma^2 \right)^{-\frac{1}{2}}
\end{equation}
and
\begin{equation}
\label{eq:s_fcn_approx_error}
\left||t| - s_{\sigma}(t)\right| \leq \sigma
\end{equation}
\end{lemma}
\begin{proof}
Equation \eqref{eq:s_fcn_derivative} follows by direct differentiation.
To establish \eqref{eq:s_fcn_approx_error}, consider the inequality:
\begin{equation*}
\frac{t^2}{\sqrt{t^2 + \sigma^2}} \leq |t| \leq \sqrt{t^2 + \sigma^2}
\end{equation*}
It follows that:
\begin{equation*}
\left||t| - s_{\sigma}(t)\right| \leq \sqrt{t^2 + \sigma^2} - \frac{t^2}{\sqrt{t^2 + \sigma^2}} = \frac{\sigma^2}{\sqrt{t^2 + \sigma^2}} \leq \sigma
\end{equation*}
\end{proof}
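A quick numerical check of the bound \eqref{eq:s_fcn_approx_error}, sketched in Python (the grid and the choice $\sigma = 0.1$ are arbitrary); the worst-case error $\sigma$ is attained at $t = 0$, where $s_\sigma(0) = \sigma$:

```python
import math

def s(t, sigma):
    # s_sigma(t) = sqrt(t^2 + sigma^2), a smooth approximation of |t|
    return math.sqrt(t * t + sigma * sigma)

sigma = 0.1
grid = [x / 100.0 for x in range(-500, 501)]        # t in [-5, 5]
max_err = max(abs(abs(t) - s(t, sigma)) for t in grid)
```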
Another well known smoothing technique for the absolute value is the so called Huber function
\cite{BeckSmoothing}.
\begin{lemma}
\label{lem:huber_fcn_def}
The Huber function defined as:
\begin{equation}
\label{eq:huber_fcn}
p_{\sigma}(t) = \twopartdef { \frac{t^2}{2 \sigma} } {|t| \leq \sigma} {|t| - \frac{\sigma}{2}} {|t| \geq \sigma}
\end{equation}
equals the optimal value of the one-dimensional problem
\begin{equation}
\label{eq:ell1_reg_1d}
\min_{x \in \mathbb{R}} \left\{ \frac{1}{2 \sigma}(x - t)^2 + |x| \right\}
\end{equation}
It follows that:
\begin{equation}
\label{eq:huber_fcn_derivative}
\frac{\mathrm{d}}{\mathrm{d} t} p_{\sigma}(t) = \twopartdef{ \frac{t}{\sigma} } {|t| \leq \sigma} {\sgn(t)} {|t| \geq \sigma}
\end{equation}
and
\begin{equation}
\label{eq:huber_fcn_approx_error}
\left| |t| - p_{\sigma}(t) \right| \leq \frac{\sigma}{2}
\end{equation}
\end{lemma}
\begin{proof}
The derivation follows by means of the soft thresholding operator $\mathbb{S}_{\sigma}$,
which is known to satisfy the relation
$\mathbb{S}_{\sigma}(t) = \arg\min_x \left\{ (x-t)^2 + 2\sigma |x| \right\}$
\cite{ingrid_thresholding1}. When $|t| \leq \sigma$, $\mathbb{S}_{\sigma}(t) = 0$. Plugging
this into \eqref{eq:ell1_reg_1d}, we obtain:
\begin{equation*}
\min \left\{ \frac{1}{2 \sigma}(x - t)^2 + |x| \right\} = \frac{1}{2 \sigma} (0 - t)^2 + |0| =
\frac{t^2}{2 \sigma}
\end{equation*}
When $|t| > \sigma$, $\mathbb{S}_{\sigma}(t) = t - \sigma$ (when $t > \sigma$) or $t + \sigma$ (when $t < -\sigma$).
Taking the case $t > \sigma$, we have $|t| = t$, $|t - \sigma| = t - \sigma$ and:
\begin{equation*}
\min \left\{ \frac{1}{2 \sigma}(x - t)^2 + |x| \right\} = \frac{1}{2 \sigma} (t - \sigma - t)^2 + |t - \sigma| = \frac{1}{2 \sigma} \sigma^2 + (t - \sigma) = |t| - \frac{\sigma}{2}
\end{equation*}
Similarly, when $t < -\sigma$, we have $|t| = -t$, $|t + \sigma| = - t - \sigma$ and:
\begin{equation*}
\min \left\{ \frac{1}{2 \sigma}(x - t)^2 + |x| \right\} = \frac{1}{2 \sigma} (t + \sigma - t)^2 + |t + \sigma| = \frac{1}{2 \sigma} \sigma^2 -t - \sigma = |t| - \frac{\sigma}{2}
\end{equation*}
and so we obtain both parts of \eqref{eq:huber_fcn}. To show \eqref{eq:huber_fcn_approx_error},
consider both cases of \eqref{eq:huber_fcn}. When $|t| \geq \sigma$,
$\left||t| - p_{\sigma}(t)\right| = \left||t| - \left(|t| - \frac{\sigma}{2}\right) \right| = \frac{\sigma}{2}$. When
$|t| \leq \sigma$, we evaluate $\left| |t| - \frac{t^2}{2 \sigma} \right|$. Let
$u = |t|$; then for $u \leq \sigma$, $u - \frac{u^2}{2\sigma} \geq 0$ and
$u - \frac{u^2}{2\sigma} = \frac{2 \sigma u - u^2}{2 \sigma}$. Let
$r(u) = 2 \sigma u - u^2$; then $r^{\prime}(u) = 2 \sigma - 2 u$ vanishes at $u = \sigma$
and $r^{\prime\prime}(u) = -2 < 0$.
Hence the maximum occurs at $u = \sigma$, which gives
$\max\left( \frac{2 \sigma u - u^2}{2 \sigma} \right) = \frac{\sigma^2}{2 \sigma} = \frac{\sigma}{2}$ when $u = |t| \leq \sigma$.
\end{proof}
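The characterization of $p_\sigma$ as the optimal value of \eqref{eq:ell1_reg_1d} can also be checked numerically: the minimizer of the one-dimensional problem is the soft threshold $\mathbb{S}_\sigma(t)$, so evaluating the objective there must reproduce the Huber function. A short Python sketch (the grid and $\sigma = 0.5$ are illustrative):

```python
import math

def huber(t, sigma):
    # p_sigma(t): quadratic for |t| <= sigma, linear with offset sigma/2 otherwise
    return t * t / (2 * sigma) if abs(t) <= sigma else abs(t) - sigma / 2

def prox_value(t, sigma):
    # value of min_x (x - t)^2 / (2 sigma) + |x|, attained at x = S_sigma(t)
    x = math.copysign(max(0.0, abs(t) - sigma), t)   # soft threshold
    return (x - t) ** 2 / (2 * sigma) + abs(x)

sigma = 0.5
grid = [x / 50.0 for x in range(-200, 201)]          # t in [-4, 4]
gap = max(abs(huber(t, sigma) - prox_value(t, sigma)) for t in grid)
err = max(abs(abs(t) - huber(t, sigma)) for t in grid)
```

The worst-case error $\sigma/2$ is attained for every $|t| \geq \sigma$, where the Huber function sits exactly $\sigma/2$ below $|t|$.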
Lemmas \ref{lem:s_fcn_def} and \ref{lem:huber_fcn_def} imply that we can approximate the one norm of vector
$x \in \mathbb{R}^n$ as $\|x\|_1 \approx \displaystyle\sum_{i=1}^n s_{\sigma}(x_i)$ or as
$\|x\|_1 \approx \displaystyle\sum_{i=1}^n p_{\sigma}(x_i)$.
From \eqref{eq:s_fcn_approx_error} and \eqref{eq:huber_fcn_approx_error}, the approximations
satisfy:
\begin{equation*}
\|x\|_1 \leq \displaystyle\sum_{k=1}^n s_{\sigma}(x_k) \leq \|x\|_1 + \sigma n \quad \mbox{and} \quad
\displaystyle\sum_{k=1}^n p_{\sigma}(x_k) \leq \|x\|_1 \leq \displaystyle\sum_{k=1}^n p_{\sigma}(x_k) + \frac{\sigma n}{2}
\end{equation*}
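A small numerical sanity check (in Python, with an arbitrary test vector) of how the two smooth sums bracket the one norm; note that $s_\sigma(t) \geq |t|$ pointwise, while $p_\sigma(t) \leq |t|$:

```python
import math

def s(t, sigma):
    # s_sigma(t) = sqrt(t^2 + sigma^2) >= |t|
    return math.sqrt(t * t + sigma * sigma)

def p(t, sigma):
    # Huber function p_sigma(t) <= |t|
    return t * t / (2 * sigma) if abs(t) <= sigma else abs(t) - sigma / 2

sigma = 0.05
x = [1.3, -0.7, 0.0, 0.02, -2.1]      # arbitrary test vector
n = len(x)
one_norm = sum(abs(t) for t in x)
s_sum = sum(s(t, sigma) for t in x)   # over-approximation of ||x||_1
p_sum = sum(p(t, sigma) for t in x)   # under-approximation of ||x||_1
```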
The smooth approximation for $\|x\|_1$ allows us to approximate the $\ell_1$ functional
$F_1(x)$ as:
\begin{equation*}
S_{1,\sigma}(x) = ||Ax - b||_2^2 + 2 \tau \displaystyle\sum_{k=1}^n s_{\sigma}(x_k) \quad \mbox{and} \quad P_{1,\sigma}(x) = ||Ax - b||_2^2 + 2 \tau \displaystyle\sum_{k=1}^n p_{\sigma}(x_k)
\end{equation*}
Note that from \eqref{eq:s_fcn_derivative} and \eqref{eq:huber_fcn_derivative}, the
corresponding gradient vectors $\nabla S_{1,\sigma}(x)$ and $\nabla P_{1,\sigma}(x)$ are given by:
\begin{equation*}
\nabla S_{1,\sigma}(x) = 2 A^T (Ax - b) + 2 \tau \left\{ s^{\prime}_{\sigma}(x_k) \right\}_{k=1}^n \quad \mbox{and} \quad \nabla P_{1,\sigma}(x) = 2 A^T (Ax - b) + 2 \tau \left\{ p^{\prime}_{\sigma}(x_k) \right\}_{k=1}^n
\end{equation*}
with $s^{\prime}_{\sigma}(x_k) = x_k \left( x_k^2 + \sigma^2 \right)^{-\frac{1}{2}}$ and
$p^{\prime}_{\sigma}(x_k) = \twopartdef{ \frac{x_k}{\sigma} } {|x_k| \leq \sigma} {\sgn(x_k)} {|x_k| \geq \sigma}$.
The advantage of working with the smooth functionals instead of
$F_1(x)$ is that given the gradients we can use gradient based methods as we later describe. However, we cannot compute
the Hessian matrix of $P_{1,\sigma}(x)$ because $p_{\sigma}(x_k)$ is not twice
differentiable, while $S_{1,\sigma}(x)$ is a less accurate approximation for $F_1(x)$.
In the next section we introduce a new approximation to the absolute value based on convolution with
a Gaussian kernel which addresses both of these concerns.
\subsection{Mollifiers and Approximation via Convolution}
In mathematical analysis, the concept of mollifiers is well known.
Below, we state the definition and convergence result regarding mollifiers,
as exhibited in \cite{DenkowskiNonlinearAnalysis}.
A smooth function $\psi:\mathbb R\to\mathbb R$
is said to be a (non-negative) \textit{mollifier}
if it has compact support,
is non-negative $(\psi \geq 0)$,
and integrates to one, $\int_{\mathbb{R}} \psi(t) \,\mathrm{d} t = 1$.
For any mollifier $\psi$ and any $\sigma>0$, define the rescaled function $\psi_\sigma:\mathbb{R}\to\mathbb{R}$ by
$\psi_{\sigma}(t) := \frac{1}{\sigma}\psi\left(\frac{t}{\sigma}\right)$, for all $t\in\mathbb R$.
Then $\{\psi_{\sigma}:\sigma>0\}$ is a family of mollifiers,
whose support shrinks
as $\sigma \to 0$, while the area under the graph always remains equal to one.
We then have the following important lemma for the approximation of
functions, whose proof is given in \cite{DenkowskiNonlinearAnalysis}.
\begin{lemma}
For any continuous function $g\in L^1(\Theta)$
with compact support and $\Theta\subseteq\mathbb{R}$,
and any mollifier $\psi:\mathbb{R}\to\mathbb{R}$,
the convolution $\psi_{\sigma} * g$, which is the function defined
by:
\begin{equation*}
(\psi_{\sigma} * g) (t)
:= \int_{\mathbb{R}} \psi_{\sigma} (t-s) g(s) \mathrm{d}s
= \int_{\mathbb{R}} \psi_{\sigma} (s) g(t-s) \mathrm{d}s,
\ \forall\, t\in\mathbb{R},
\end{equation*}
converges uniformly to $g$ on $\Theta$, as $\sigma\to 0$.
\end{lemma}
Inspired by the above results, we will now use convolution with
approximate mollifiers to approximate the absolute value function
$g(t) = |t|$ (which is not in $L^1(\mathbb{R})$) with a smooth function.
We start with the Gaussian function $K(t) = \frac{1}{\sqrt{2 \pi}} \exp\left(-\frac{t^2}{2}\right)$ (for all $t\in\mathbb R$),
and introduce the $\sigma$-dependent family:
\begin{equation}
\label{eq:gaussian_mollifier1}
K_{\sigma}(t) :=
\frac{1}{\sigma} K\left(\frac{t}{\sigma}\right) =
\frac{1}{\sqrt{2 \pi \sigma^2}}
\exp\left( -\frac{t^2}{2\sigma^2} \right),
\quad\forall\, t\in\mathbb{R}.
\end{equation}
This function is not a mollifier because it does not have compact
support. However, it decays rapidly at infinity:
for any $\sigma>0$, $K_{\sigma}(t) \to 0$ as $|t|\to\infty$.
In addition, we have that
$\int_{-\infty}^{\infty} K_{\sigma}(t) \, \mathrm{d}t=1$ for all $\sigma>0$:
\begin{eqnarray*}
\int_{-\infty}^{\infty} \! K_{\sigma}(t) \, \mathrm{d}t
&=&
\frac{1}{\sqrt{2 \pi \sigma^2}}
\int_{\mathbb{R}} \exp\left(-\frac{t^2}{2\sigma^2} \right) \, \mathrm{d}t
=
\frac{2}{\sqrt{2 \pi \sigma^2}}
\int_{0}^{\infty} \exp\left(-\frac{t^2}{2\sigma^2} \right) \, \mathrm{d}t
\\
&=& \frac{2}{\sqrt{2 \pi \sigma^2}}
\int_{0}^\infty \exp(-u^2) \sqrt{2} \sigma \, \mathrm{d}u
= \frac{2}{\sqrt{2 \pi \sigma^2}} \sqrt{2}
\sigma \frac{\sqrt{\pi}}{2}
= 1.
\end{eqnarray*}
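The unit-integral computation above is easy to confirm numerically; the Python sketch below applies the trapezoid rule on $[-10\sigma, 10\sigma]$ (the truncation interval and grid size are arbitrary choices; the Gaussian tails beyond $10\sigma$ are negligible):

```python
import math

def K(t, sigma):
    # Gaussian kernel K_sigma(t) = exp(-t^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)
    return math.exp(-t * t / (2 * sigma * sigma)) / math.sqrt(2 * math.pi * sigma * sigma)

sigma = 0.01
n, a = 20000, 10 * sigma                  # 20000 trapezoid panels on [-10 sigma, 10 sigma]
h = 2 * a / n
integral = h * (0.5 * (K(-a, sigma) + K(a, sigma))
                + sum(K(-a + i * h, sigma) for i in range(1, n)))
```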
Figure \ref{fig:Ksigma_plot} below presents a plot of the function
$K_{\sigma}$ for the particular choice $\sigma = 0.01$.
We see that $K_{\sigma}(t) \geq 0$ and $K_{\sigma}(t)$
is very close to zero for $|t| > 4 \sigma$.
In this sense, the function $K_\sigma$ is an approximate mollifier.
\begin{figure*}
\caption{$K_{\sigma}(t)$ for $\sigma = 0.01$.}
\label{fig:Ksigma_plot}
\end{figure*}
Let us now compute the limit $\lim_{\sigma \to 0} K_{\sigma}(t)$.
For $t = 0$, it is immediate that $\lim_{\sigma \to 0} K_{\sigma}(0) = \infty$. For $t\neq0$, we use
l'H\^{o}spital's rule:
\begin{equation*}
\lim_{\sigma \to 0} K_{\sigma}(t) =
\lim_{\sigma \to 0} \frac{1}{\sqrt{2 \pi \sigma^2}}
\exp\left( -\frac{t^2}{2\sigma^2} \right) =
\lim_{\gamma \to \infty} \frac{\gamma}{\sqrt{2 \pi}
\exp\left(\frac{\gamma^2t^2}{2}\right)} =
\frac{1}{\sqrt{2 \pi}} \lim_{\gamma \to \infty}
\frac{1}{\gamma\, t^2 \exp\left(\frac{\gamma^2t^2}{2}\right)} = 0,
\end{equation*}
with $\gamma = \frac{1}{\sigma}$.
We see that, in the limit $\sigma \to 0$, $K_{\sigma}$ behaves
like a Dirac delta function $\delta_0$: it has unit integral over
$\mathbb{R}$ and the same pointwise limit.
Thus, for small $\sigma>0$,
we expect that the absolute value function can be approximated
by its convolution with $K_\sigma$, i.e.,
\begin{equation}
\label{eq:x_k_approx1}
|t| \approx
\phi_\sigma(t),
\quad\forall\, t\in\mathbb R,
\end{equation}
where the function $\phi_\sigma:\mathbb R\to\mathbb R$ is defined as the
convolution of $K_\sigma$ with the absolute value function:
\begin{equation}
\label{eq:phisigma}
\phi_\sigma(t):=
(K_{\sigma} * |\cdot|)(t) =
\frac{1}{\sqrt{2 \pi \sigma^2}}
\int_{-\infty}^{\infty} |t - s|
\exp\left( -\frac{s^2}{2\sigma^2} \right)
\mathrm{d}s,
\quad\forall\, t\in\mathbb R.
\end{equation}
\noindent
We show in Proposition \ref{prop:conv} below,
that the approximation in \eqref{eq:x_k_approx1} converges
in the $L^1$ norm (as $\sigma \to 0$).
The advantage of using this approximation is that
$\phi_\sigma$, unlike the absolute value function,
is a smooth function.
Before we state the convergence result in
Proposition \ref{prop:conv},
we express the convolution integral and
its derivative in terms of the well-known error function \cite{HandbookOfMathFunctions}.
\begin{lemma}
\label{lem:convolution}
For any $\sigma>0$, define $\phi_\sigma:\mathbb R\to\mathbb R$
as in \eqref{eq:phisigma}.
Then we have that for all $t\in\mathbb R$:
\begin{align}
\label{eq:convolution}
\phi_\sigma(t)
\,=&\ t \erf\left( \frac{t}{\sqrt{2} \sigma} \right)
+ \sqrt{\frac{2}{\pi}} \sigma
\exp\left(-\frac{t^2}{2 \sigma^2}\right),
\\
\label{eq:phisigma_derivative}
\frac{\mathrm{d}}{\mathrm{d} t}\,\phi_\sigma(t)
\,=&\
\erf\left(\frac{t}{\sqrt{2} \sigma}\right),
\end{align}
where the error function is defined as:
\begin{equation*}
\erf(t) = \frac{2}{\sqrt{\pi}}
\int_{0}^{t} \exp(-u^2) \mathrm{d} u
\quad\forall\, t\in\mathbb R.
\end{equation*}
\end{lemma}
\begin{proof}
Fix $\sigma>0$.
Define $C_\sigma:\mathbb R_+\times\mathbb R\to\mathbb R$ by
\begin{eqnarray*}
C_{\sigma}(T,t)
&:=&
\int_{-T}^{T} |t-s| K_\sigma(s) \mathrm{d} s =
\frac{1}{\sqrt{2\pi\sigma^2}}
\int_{-T}^{T} |t-s| \exp\left(-\frac{s^2}{2\sigma^2}\right)
\mathrm{d} s,\quad
\forall\, t\in\mathbb{R}, \ T\geq0.
\end{eqnarray*}
We can remove the absolute value sign in the integration above by breaking up the integral into
intervals from $-T$ to $t$ and from $t$ to $T$ where $|t - s|$ can be replaced by $(t - s)$ and
$(s-t)$, respectively. Expanding the above we have that:
\begin{eqnarray*}
\sqrt{2 \pi \sigma^2} C_{\sigma}(T,t) &=&
\int_{-T}^{T} |t-s|\exp\left(-\frac{s^2}{2\sigma^2}\right)\mathrm{d} s
\\
&=&
\int_{-T}^{t} (t-s)\exp\left(-\frac{s^2}{2\sigma^2}\right) \mathrm{d} s
+ \int_{t}^{T} (s-t)\exp\left(-\frac{s^2}{2\sigma^2}\right) \mathrm{d} s
\\
&=&
t\left(\int_{-T}^{t} \exp\left(-\frac{s^2}{2\sigma^2}\right) \mathrm{d} s
-\int_{t}^{T} \exp\left(-\frac{s^2}{2\sigma^2}\right) \mathrm{d} s\right)
\\
&&\qquad
+\int_{-T}^{t} \exp\left(-\frac{s^2}{2\sigma^2}\right)(-s) \mathrm{d} s
+\int_{t}^{T} \exp\left(-\frac{s^2}{2\sigma^2}\right)s\, \mathrm{d} s
\\
&=&
\sqrt2\sigma t \left(\int_{-T/\sqrt2 \sigma}^{t/\sqrt2 \sigma} \exp\left(-u^2\right) \mathrm{d} u
-\int_{t/\sqrt2 \sigma}^{T/\sqrt 2 \sigma} \exp\left(-u^2\right) \mathrm{d} u\right)
\\
&&\qquad
+\sigma^2\left(\int_{-T}^{t} \exp\left(-\frac{s^2}{2\sigma^2}\right) \left(-\frac{s}{\sigma^2} \right) \mathrm{d} s
-\int_{t}^{T} \exp\left(-\frac{s^2}{2\sigma^2}\right) \left(-\frac{s}{\sigma^2} \right) \mathrm{d} s\right).
\end{eqnarray*}
Next, making use of the definition of the error function, the fact that it is an odd function (i.e.\
$\erf(-t) = -\erf(t)$), and the fundamental theorem of calculus, we have:
\begin{eqnarray*}
\sqrt{2 \pi \sigma^2} C_{\sigma}(T,t)
&=& \sqrt{\frac{\pi}{2}}\sigma t\left(
\erf\left(\frac{t}{\sqrt2\sigma}\right)
-\erf\left(\frac{-T}{\sqrt2\sigma}\right)
-\erf\left(\frac{T}{\sqrt2\sigma}\right)
+\erf\left(\frac{t}{\sqrt2\sigma}\right)\right)
\\
&&\qquad
+\sigma^2\left(\int_{-T}^{t} \frac{\mathrm{d}}{\mathrm{d} s} \left(
\exp\left(-\frac{s^2}{2\sigma^2}\right)
\right) \mathrm{d} s
-\int_{t}^{T} \frac{\mathrm{d}}{\mathrm{d} s}
\left( \exp\left(-\frac{s^2}{2\sigma^2}\right) \right) \mathrm{d} s \right)
\\
&=&\sqrt{2\pi}\sigma t
\erf\left(\frac{t}{\sqrt2\sigma}\right)
+2\sigma^2\left(\exp\left(-\frac{t^2}{2\sigma^2}\right)
-\exp\left(-\frac{T^2}{2\sigma^2}\right)\right).
\end{eqnarray*}
Since $\exp \left(-\frac{T^2}{2\sigma^2}\right) \to 0$ as $T \to \infty$, we have:
\begin{eqnarray*}
\phi_\sigma(t)
\,=\,
(K_\sigma*|\cdot|)(t)
&=&
\lim_{T\to\infty} C_{\sigma}(T,t)=
\frac{1}{\sqrt{2 \pi} \sigma}
\sqrt{2\pi}\sigma t \erf\left(\frac{t}{\sqrt2\sigma}\right) +
\frac{2}{\sqrt{2 \pi} \sigma} \sigma^2
\exp\left(-\frac{t^2}{2\sigma^2}\right)
\\
&=&
t\erf\left(\frac{t}{\sqrt2\sigma}\right)
+\sqrt{\frac{2}{\pi}}\sigma
\exp\left(-\frac{t^2}{2\sigma^2}\right).
\end{eqnarray*}
This proves \eqref{eq:convolution}.
To derive \eqref{eq:phisigma_derivative}, we use
\begin{equation}
\label{eq:erf_deriv}
\frac{\mathrm{d}}{\mathrm{d} t} \erf\left(\frac{t}{\sqrt{2} \sigma}\right) =
\frac{2}{\sqrt{\pi}}
\frac{\mathrm{d}}{\mathrm{d} t} \left[ \int_{0}^{\left(\frac{t}{\sqrt{2} \sigma}\right)} \exp(-u^2) \mathrm{d} u \right]
= \frac{\sqrt{2}}{\sigma \sqrt{\pi}}
\exp\left(-\frac{t^2}{2 \sigma^2}\right).
\end{equation}
Plugging this in, we get:
\begin{eqnarray*}
\frac{\mathrm{d}}{\mathrm{d} t}\,\phi_{\sigma} (t)
&=&
\frac{\mathrm{d}}{\mathrm{d} t} \left( t \erf
\left( \frac{t}{\sqrt{2} \sigma} \right)
+ \sqrt{\frac{2}{\pi}} \sigma \exp\left(-\frac{t^2}{2 \sigma^2}\right) \right)
\\
&=&
\erf\left(\frac{t}{\sqrt{2} \sigma}\right)
+ t \frac{\sqrt{2}}{\sigma \sqrt{\pi}}
\exp\left(-\frac{t^2}{2 \sigma^2}\right)
- \sqrt{\frac{2}{\pi}} \sigma \frac{2 t} {2\sigma^2}
\exp\left(-\frac{t^2}{2\sigma^2}\right)
\\
&=&
\erf\left(\frac{t}{\sqrt{2} \sigma}\right).
\end{eqnarray*}
This establishes \eqref{eq:phisigma_derivative}.
\end{proof}
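The closed form \eqref{eq:convolution} and the derivative \eqref{eq:phisigma_derivative} can be checked against a direct numerical evaluation of the convolution integral \eqref{eq:phisigma}. The Python sketch below (the values of $\sigma$ and the test points are illustrative; \texttt{math.erf} is the standard library error function) truncates the integral at $\pm 10\sigma$ and also compares the derivative formula with a central finite difference:

```python
import math

def phi_closed(t, sig):
    # closed form: t erf(t / (sqrt(2) sigma)) + sqrt(2/pi) sigma exp(-t^2 / (2 sigma^2))
    return (t * math.erf(t / (math.sqrt(2) * sig))
            + math.sqrt(2 / math.pi) * sig * math.exp(-t * t / (2 * sig * sig)))

def phi_numeric(t, sig, n=20000):
    # trapezoid rule for the convolution (K_sigma * |.|)(t), truncated at +-10 sigma
    a = 10 * sig
    h = 2 * a / n
    c = 1 / math.sqrt(2 * math.pi * sig * sig)
    total = 0.0
    for i in range(n + 1):
        s = -a + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * abs(t - s) * c * math.exp(-s * s / (2 * sig * sig))
    return h * total

sigma = 0.2
diffs = [abs(phi_closed(t, sigma) - phi_numeric(t, sigma)) for t in (-1.0, -0.3, 0.0, 0.5, 2.0)]
fd = (phi_closed(0.3 + 1e-6, sigma) - phi_closed(0.3 - 1e-6, sigma)) / 2e-6
```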
\noindent
Next, we review some basic properties of the error function
$\erf(t) = \frac{2}{\sqrt{\pi}} \int_{0}^t \exp(-s^2) \mathrm{d} s$ and the Gaussian integral
\cite{HandbookOfMathFunctions}.
It is well known that the Gaussian integral satisfies:
\begin{equation*}
\int_{-\infty}^{\infty} \exp(-s^2) \mathrm{d} s =
2 \int_{0}^{\infty} \exp(-s^2) \mathrm{d} s =
\sqrt{\pi} ,
\end{equation*}
and, in particular, $0 < \erf(t) < 1$ for all $t>0$.
Using results from \cite{ChuNormalIntegral} on the Gaussian integral,
we also have the following bounds for the error function:
\begin{lemma}
The error function $\erf(t) = \frac{2}{\sqrt{\pi}} \int_{0}^t \exp(-s^2) \mathrm{d} s$ satisfies the bounds:
\begin{equation}
\label{eq:erf_bounds}
\bigl(1 - \exp(-t^2)\bigr)^\frac{1}{2} \leq
\erf(t) \leq
\bigl(1 - \exp(-2 t^2)\bigr)^\frac{1}{2},
\quad\forall\, t\geq0.
\end{equation}
\end{lemma}
\begin{proof}
In \cite{ChuNormalIntegral}, it is shown that for
the function $v(t) := \frac{1}{\sqrt{2 \pi}} \int_{0}^t
\exp\left(-\frac{s^2}{2}\right) \mathrm{d} s$,
the following bounds hold:
\begin{equation}
\label{eq:vx_bounds}
\frac{1}{2} \left( 1 - \exp\left(-\frac{t^2}{2}\right) \right)^{\frac{1}{2}} \leq
v(t) \leq
\frac{1}{2} \left( 1 - \exp\left(-t^2\right) \right)^{\frac{1}{2}},
\quad\forall\, t\geq0.
\end{equation}
Now we relate $v(t)$ to the error function.
Using the substitution $u=\frac{s}{\sqrt{2}}$:
\begin{eqnarray*}
v(t) =
\frac{1}{\sqrt{2 \pi}}
\int_{0}^{\frac{t}{\sqrt{2}}} \exp(-u^2) \sqrt{2} \mathrm{d} u =
\frac{1}{2} \erf\left( \frac{t}{\sqrt{2}} \right) .
\end{eqnarray*}
From \eqref{eq:vx_bounds}, it follows that:
\begin{equation*}
\left( 1 - \exp\left(-\frac{t^2}{2}\right) \right)^{\frac{1}{2}} \leq
\erf\left( \frac{t}{\sqrt{2}} \right) \leq
\left( 1 - \exp(-t^2) \right)^{\frac{1}{2}}.
\end{equation*}
With the substitution $s = \frac{t}{\sqrt{2}}$, we obtain \eqref{eq:erf_bounds}.
\end{proof}
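A brute-force check of the bounds \eqref{eq:erf_bounds} on a grid (a Python sketch; the grid is arbitrary and \texttt{math.erf} is the standard library error function):

```python
import math

ok = True
for i in range(1, 501):
    t = i / 100.0                                 # t in (0, 5]
    lo = math.sqrt(1 - math.exp(-t * t))          # lower bound
    hi = math.sqrt(1 - math.exp(-2 * t * t))      # upper bound
    ok = ok and (lo <= math.erf(t) <= hi)
```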
Using the above properties of the error function and the Gaussian integral,
we can prove our convergence result.
\begin{proposition}
\label{prop:conv}
Let $g(t):=|t|$ for all $t\in\mathbb R$,
and let the function $\phi_\sigma:=K_\sigma*g$
be defined as in \eqref{eq:phisigma}, for all $\sigma>0$.
Then:
\begin{equation*}
\lim_{\sigma \to 0} \left\|\phi_{\sigma} - g\right\|_{L^1} = 0.
\end{equation*}
\end{proposition}
\begin{proof}
By definition of $\phi_\sigma$,
\begin{equation*}
\left\|\phi_{\sigma} - g\right\|_{L^1}
=
\int_{-\infty}^{\infty} \bigl| (K_{\sigma} * |\cdot|)(t) - |t|\, \bigr| \mathrm{d} t = 2 \int_{0}^{\infty} \bigl| (K_{\sigma} * |\cdot|)(t) - t \bigr| \mathrm{d} t ,
\end{equation*}
where the last equality follows from the fact that
$\phi_\sigma-g$ is an even function since both $\phi_{\sigma}$ and
$g$ are even functions.
Plugging in the explicit formula \eqref{eq:convolution}, we have:
\begin{eqnarray*}
||\phi_{\sigma} - g||_{L^1}
&=&
2 \int_{0}^{\infty} \left|
t \left( \erf\left(\frac{t}{\sqrt{2} \sigma}\right) -1 \right)
+ \sqrt{\frac{2}{\pi}} \sigma \exp\left(-\frac{t^2}{2 \sigma^2}\right)
\right| \mathrm{d} t
\\
&\leq&
2 \int_{0}^{\infty}
\left| t \left( \erf\left(\frac{t}{\sqrt{2} \sigma}\right) -1 \right) \right|
+ \sqrt{\frac{2}{\pi}} \sigma \exp\left(-\frac{t^2}{2 \sigma^2}\right)
\mathrm{d} t
\\
&=&
2 \int_{0}^{\infty}
t \left( 1 - \erf\left(\frac{t}{\sqrt{2} \sigma} \right) \right)
+ \sqrt{\frac{2}{\pi}} \sigma \exp\left(-\frac{t^2}{2 \sigma^2}\right) \mathrm{d} t ,
\end{eqnarray*}
where we have used the inequality $0 < \erf(t) < 1$ for $t > 0$.
Next, we analyze both terms of the
integral. First, using \eqref{eq:erf_bounds}, we have:
\begin{eqnarray*}
\left(1 - \exp(-t^2)\right)^\frac{1}{2}
\leq
\erf(t)
\implies
1 - \erf(t)
\leq
1 - \left(1 - \exp(-t^2)\right)^\frac{1}{2}
\leq
\exp\left(-\frac{t^2}{2}\right),
\end{eqnarray*}
where the last inequality follows from the fact that $1 - \sqrt{1 - \alpha} \leq \sqrt{\alpha}$
for $\alpha \in (0,1)$. It follows that:
\begin{equation*}
\int_{0}^{\infty} t
\left( 1 - \erf\left(\frac{t}{\sqrt{2} \sigma} \right) \right) \mathrm{d}t
\leq
\int_{0}^{\infty} t \exp\left( -\frac{t^2}{4 \sigma^2} \right) \mathrm{d}t
=
2 \sigma^2 \int_{0}^{\infty} \exp(-u) \mathrm{d}u
=
2 \sigma^2.
\end{equation*}
For the second term,
\begin{equation*}
\sqrt{\frac{2}{\pi}} \sigma \int_{0}^{\infty}
\exp\left(-\frac{t^2}{2 \sigma^2}\right) \mathrm{d} t
=
\sqrt{\frac{2}{\pi}} \sigma
\left(\sqrt{2} \sigma \int_{0}^{\infty} \exp(-s^2) \mathrm{d}s\right)
=
\frac{2}{\sqrt{\pi}} \sigma^2 \frac{\sqrt{\pi}}{2}
=
\sigma^2.
\end{equation*}
Thus:
\begin{equation*}
\|\phi_{\sigma} - g\|_{L^1}
\leq
2 \left( 2 \sigma^2 + \sigma^2 \right)
= 6 \sigma^2,
\end{equation*}
and hence $\lim_{\sigma \to 0} \|\phi_{\sigma} - g\|_{L^1} = 0$.
\end{proof}
Note that in the above proof, $g=|\cdot| \not \in L^1$ (since $g(t) \to \infty$ as
$t \to \infty$), but the approximation in the $L^1$ norm still holds. It is likely
that the convolution approximation converges to $g$ in the $L^1$ norm for a variety of non-smooth coercive functions $g$,
not just for $g(t) = |t|$.
Note from \eqref{eq:convolution} that
while the approximation $\phi_{\sigma} = K_\sigma * |\cdot|$ is indeed smooth,
it is strictly positive on $\mathbb R$, and in particular $\phi_\sigma(0)=\sqrt{\frac{2}{\pi}} \sigma>0$,
although this value does go to zero as $\sigma \to 0$.
To address this, we can use different approximations based on $\phi_{\sigma}(t)$ which are zero
at zero. Many different alternatives are possible. We describe here two particular approximations.
The first is formed by subtracting the value at $0$:
\begin{equation}
\label{eq:x_k_approx2}
\tilde{\phi}_{\sigma}(t) = \phi_\sigma(t)-\phi_\sigma(0)
\,=\,
t \erf \left( \frac{t}{\sqrt{2} \sigma} \right)
+ \sqrt{\frac{2}{\pi}} \sigma \exp\left(\frac{-t^2}{2 \sigma^2}\right)
- \sqrt{\frac{2}{\pi}} \sigma,
\end{equation}
An alternative is to use $\tilde{\phi}^{(2)}_{\sigma}(t) = \phi_\sigma(t) - \sqrt{\frac{2}{\pi}} \sigma \exp\left(-t^2\right)$,
where the subtracted term decreases in magnitude as $|t|$ grows and has an appreciable effect only near $t = 0$.
We could also simply drop the second term of $\phi_{\sigma}(t)$ to get:
\begin{equation}
\label{eq:x_k_approx3}
\hat{\phi}_{\sigma}(t) = \phi_\sigma(t) - \sqrt{\frac{2}{\pi}} \sigma \exp\left(\frac{-t^2}{2 \sigma^2}\right)
\,=\,
t \erf \left( \frac{t}{\sqrt{2} \sigma} \right)
\end{equation}
which is zero when $t = 0$.
\subsection{Comparison of different approximations}
We now illustrate the different convolution based approximations along with
the previously discussed $s_{\sigma}(t)$ and $p_{\sigma}(t)$. In
Figure \ref{fig:smooth_approximations_for_abs_value}, we plot the absolute value function
$g(t) = |t|$ and the different approximations
$\phi_{\sigma}(t), \tilde{\phi}_{\sigma}(t), \hat{\phi}_{\sigma}(t), s_{\sigma}(t)$,
and $p_{\sigma}(t)$ for $\sigma_1 = 3e^{-4}$ (the larger value, corresponding to a worse approximation)
and $\sigma_2 = e^{-4}$ (the smaller value, corresponding to a better approximation) over a small range of
values $t \in (-4.5 e^{-4}, 4.5 e^{-4})$ around $t = 0$.
We may observe that $\phi_{\sigma}(t)$ smooths out the sharp corner
of the absolute value, at the expense of being above zero at $t = 0$ for positive $\sigma$. The modified
approximations $\tilde{\phi}_{\sigma}(t)$ and $\hat{\phi}_{\sigma}(t)$ are zero at zero. However,
$\tilde{\phi}_{\sigma}(t)$ under-approximates $|t|$ for all $t \neq 0$, while
$\hat{\phi}_{\sigma}(t)$ does not preserve convexity. The three $\phi$ approximations
thus illustrate the range of behaviors obtainable with the
described convolution approach.
From the figure we may observe that $\phi_{\sigma}(t)$ and
$\hat{\phi}_{\sigma}(t)$ remain closer to the absolute value curve than
$s_{\sigma}(t)$ and $p_{\sigma}(t)$ as $|t|$ becomes larger. The best approximation
appears to be $\hat{\phi}_{\sigma}(t)$, which is close to $|t|$ even for the larger
value $\sigma_1$ and is twice differentiable.
\begin{figure}
\caption{
Absolute value function $|t|$ on a fixed
interval vs.\ different smooth approximations, with $\sigma_1 = 3e^{-4}$ and $\sigma_2 = e^{-4}$.}
\label{fig:smooth_approximations_for_abs_value}
\end{figure}
\section{Gradient Based Algorithms}
In this section, we discuss the use of gradient based optimization algorithms such as
steepest descent and conjugate gradients to approximately minimize the functional
\eqref{eq:lp_funct}:
\begin{equation*}
F_p(x)
=
||Ax - b||^2_2
+ 2\tau \left( \sum_{k=1}^n |x_k|^p \right)^{1/p},
\end{equation*}
where we make use of one of the approximations ($\phi_{\sigma}(x_k),\tilde{\phi}_{\sigma}(x_k),\hat{\phi}_{\sigma}(x_k)$) from
\eqref{eq:x_k_approx1}, \eqref{eq:x_k_approx2}, \eqref{eq:x_k_approx3} to replace the non-smooth $|x_k|$.
Let us first consider the important case of $p=1$ leading to the
convex $\ell_1$ norm minimization.
In this case, we approximate the non-smooth functional
\begin{equation*}
F_1(x) = ||Ax - b||_2^2 + 2\tau ||x||_1
\end{equation*}
by one of the smooth functionals:
\begin{equation}
\label{eq:approx_ell1}
\begin{array}{rcl}
H_{1,\sigma}(x)
&=&\displaystyle
\|Ax-b\|^2_2 + 2\tau\sum_{k=1}^n\phi_{\sigma}(x_k)
\\
&=&\displaystyle
\|Ax-b\|^2_2 + 2\tau \displaystyle \sum_{k=1}^n
\left( x_k \erf \left( \frac{x_k}{\sqrt{2} \sigma} \right)
+ \sqrt{\frac{2}{\pi}} \sigma
\exp\left(-\frac{x_k^2}{2 \sigma^2}\right) \right) \\
\tilde{H}_{1,\sigma}(x) &=& \|Ax-b\|^2_2 + 2\tau \displaystyle \sum_{k=1}^n
\left( x_k \erf \left( \frac{x_k}{\sqrt{2} \sigma} \right)
+ \sqrt{\frac{2}{\pi}} \sigma
\exp\left(-\frac{x_k^2}{2 \sigma^2}\right) - \sqrt{\frac{2}{\pi}} \sigma \right) \\
\hat{H}_{1,\sigma}(x) &=& \|Ax-b\|^2_2 + 2\tau \displaystyle \sum_{k=1}^n
\left( x_k \erf \left( \frac{x_k}{\sqrt{2} \sigma} \right) \right) \\
\end{array}
\end{equation}
As with the previous approximations, the advantage of working with the smooth
$H_{1,\sigma}$ functionals instead of $F_1(x)$ is that we can easily compute
their explicit gradients $\nabla H_{1,\sigma}(x)$ and, in this case, also the
Hessians $\nabla^2 H_{1,\sigma}(x)$:
\begin{lemma}
Let $H_{1,\sigma}(x)$, $\tilde{H}_{1,\sigma}(x)$, $\hat{H}_{1,\sigma}(x)$ be as defined in \eqref{eq:approx_ell1} where
$A\in\mathbb R^{m\times n}$, $b\in\mathbb R^m$, $\tau,\sigma>0$
are constants and
\[
\erf(t) := \frac{2}{\sqrt\pi}\int_0^t \exp(-u^2)\, du,\ \ \forall\,
t\in\mathbb R.
\]
Then the gradients are given by:
\begin{eqnarray}
\nabla H_{1,\sigma}(x) &=& \nabla \tilde{H}_{1,\sigma}(x) = 2A^T(Ax-b) + 2\tau\left\{
\erf\left(\frac{x_k}{\sqrt2 \sigma}\right) \right\}_{k=1}^n \label{eq:Gradient_H1} \\
\nabla \hat{H}_{1,\sigma}(x) &=& 2A^T(Ax-b) + 2\tau\left\{
\erf\left(\frac{x_k}{\sqrt2 \sigma}\right) + x_k \frac{1}{\sigma} \sqrt{\frac{2}{\pi}} \exp\left(-\frac{x_k^2}{2 \sigma^2}\right)
\right\}_{k=1}^n \label{eq:Gradient_hatH1}
\end{eqnarray}
and the Hessians by:
\begin{eqnarray}
\nabla^2 H_{1,\sigma}(x) &=& \nabla^2 \tilde{H}_{1,\sigma}(x) = 2 A^T A +
\frac{2\sqrt{2}\tau}{\sigma\sqrt\pi} \Diag\left(\exp\left(-\frac{x_k^2}{2\sigma^2}\right) \right)
\label{eq:Hessian_H1} \\
\nabla^2 \hat{H}_{1,\sigma}(x) &=& 2 A^T A +
\frac{4\sqrt{2}\tau}{\sigma\sqrt\pi} \Diag\left(\exp\left(-\frac{x_k^2}{2\sigma^2}\right) - \frac{x_k^2}{2 \sigma^2} \exp\left(-\frac{x_k^2}{2 \sigma^2}\right) \right) \label{eq:Hessian_hatH1}.
\end{eqnarray}
\end{lemma}
\noindent
Here $\Diag(\cdot)$ denotes the $n \times n$ diagonal matrix whose diagonal
entries are the elements of the input vector.
\begin{proof}
The results follow by direct verification using \eqref{eq:phisigma_derivative} and the following derivatives:
\begin{eqnarray}
\frac{\mathrm{d}}{\mathrm{d} t} \erf\left(\frac{t}{\sqrt{2} \sigma}\right) &=& \frac{\sqrt{2}}{\sigma \sqrt{\pi}} \exp\left(-\frac{t^2}{2 \sigma^2}\right) \label{eq:phiderivstack1} \\
\frac{\mathrm{d}}{\mathrm{d} t} \left[ t \erf\left(\frac{t}{\sqrt{2} \sigma}\right) \right] &=& \erf\left(\frac{t}{\sqrt{2} \sigma}\right) + t \frac{\sqrt{2}}{\sigma \sqrt{\pi}} \exp\left( -\frac{t^2}{2 \sigma^2} \right) \label{eq:phiderivstack2} \\
\frac{\mathrm{d}^2}{\mathrm{d} t^2} \left[ t \erf\left(\frac{t}{\sqrt{2} \sigma}\right) \right] &=& \frac{2 \sqrt{2}}{\sigma \sqrt{\pi}} \exp\left( -\frac{t^2}{2\sigma^2}\right) - t^2 \frac{\sqrt{2}}{\sigma^3 \sqrt{\pi}} \exp\left( -\frac{t^2}{2\sigma^2}\right) \label{eq:phiderivstack3}
\end{eqnarray}
For instance, using \eqref{eq:phisigma_derivative}, the gradient of $H_{1,\sigma}(x)$ is given by:
\begin{equation*}
\nabla H_{1,\sigma}(x) = \nabla_x \|Ax - b\|_2^2 +
2\tau \left\{ \frac{\mathrm{d}}{\mathrm{d} x_k} \phi_{\sigma}(x_k) \right\}_{k=1}^n = 2 A^T (Ax - b) +
2\tau \left\{ \erf\left(\frac{x_k}{\sqrt{2} \sigma} \right) \right\}_{k=1}^n
\end{equation*}
which establishes \eqref{eq:Gradient_H1}. For the Hessian matrix, we have:
\begin{eqnarray*}
\nabla^2 H_{1,\sigma}(x)
&=& 2A^T A + 2\tau
\Diag\left(
\frac{\mathrm{d}}{\mathrm{d} x_1}\erf\left(\frac{x_1}{\sqrt2 \sigma}\right),
\frac{\mathrm{d}}{\mathrm{d} x_2}\erf\left(\frac{x_2}{\sqrt2 \sigma}\right),
\ldots,
\frac{\mathrm{d}}{\mathrm{d} x_n}\erf\left(\frac{x_n}{\sqrt2 \sigma}\right)
\right),
\end{eqnarray*}
Using \eqref{eq:phiderivstack1}, we obtain \eqref{eq:Hessian_H1}. Similar computations
using \eqref{eq:phiderivstack2} and \eqref{eq:phiderivstack3} yield \eqref{eq:Gradient_hatH1}
and \eqref{eq:Hessian_hatH1}.
\end{proof}
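The gradient formula \eqref{eq:Gradient_H1} can be validated against central finite differences. A Python/NumPy sketch (the random problem data and the values of $\tau$, $\sigma$ are illustrative):

```python
import math
import numpy as np

erf_vec = np.vectorize(math.erf)   # elementwise erf via the standard library

def H1(x, A, b, tau, sigma):
    # H_{1,sigma}(x) = ||Ax - b||_2^2 + 2 tau sum_k phi_sigma(x_k), phi_sigma in closed form
    phi = (x * erf_vec(x / (math.sqrt(2) * sigma))
           + math.sqrt(2 / math.pi) * sigma * np.exp(-x**2 / (2 * sigma**2)))
    return np.sum((A @ x - b) ** 2) + 2 * tau * np.sum(phi)

def grad_H1(x, A, b, tau, sigma):
    # gradient from the lemma: 2 A^T (Ax - b) + 2 tau { erf(x_k / (sqrt(2) sigma)) }_k
    return 2 * A.T @ (A @ x - b) + 2 * tau * erf_vec(x / (math.sqrt(2) * sigma))

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 10))
b = rng.standard_normal(6)
x = rng.standard_normal(10)
tau, sigma, h = 0.5, 0.1, 1e-6
g = grad_H1(x, A, b, tau, sigma)
g_fd = np.array([(H1(x + h * e, A, b, tau, sigma) - H1(x - h * e, A, b, tau, sigma)) / (2 * h)
                 for e in np.eye(10)])      # central finite differences
```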
Next, we discuss the smooth approximation to the general functional
\eqref{eq:lp_funct}. In particular, we are interested in the case $p<1$.
In this case, the functional is not convex, but may still be useful in
compressive sensing applications \cite{MR2421974}. We present the results using
the approximation $\phi_\sigma$ from \eqref{eq:phisigma}. The calculations with
$\tilde{\phi}_\sigma$ and $\hat{\phi}_{\sigma}$ take similar form. We obtain the
approximation functional to $F_p(x)$:
\begin{equation}
\label{eq:approx_lp}
\begin{array}{rcl}
H_{p,\sigma}(x)
&:=&\displaystyle
\|Ax - b\|^2_2 + 2\tau \left( \sum_{k=1}^n
\phi_\sigma(x_k)^p \right)^{1/p}
\\
&=&\displaystyle
\|Ax - b\|^2_2 + 2\tau \left( \sum_{k=1}^n
\left( x_k \erf \left( \frac{x_k}{\sqrt{2} \sigma} \right)
+ \sqrt{\frac{2}{\pi}} \sigma
\exp\left(\frac{-x_k^2}{2 \sigma^2}\right)\right)^p \right)^{1/p} .
\end{array}
\end{equation}
\begin{lemma}
Let $H_{p,\sigma}(x)$ be as defined in \eqref{eq:approx_lp} where $p>0$ and $\sigma > 0$.
Then the gradient is given by:
\begin{equation}
\label{eq:lp_gradient}
\nabla {H_{p,\sigma}}(x)
=
2 A^T (Ax - b) +
2\tau \left( \sum_{k=1}^n \phi_{\sigma}(x_k)^p \right)^{(1-p)/p}
\left\{ \phi_{\sigma}(x_j)^{p-1}
\erf\left(\frac{x_j}{\sqrt{2} \sigma}\right) \right\}_{j=1}^n,
\end{equation}
and the Hessian is given by:
\begin{equation}
\label{eq:lp_hessian}
\nabla^2 H_{p,\sigma}(x) = 2 A^T A + 2\tau
\left(v(x) v(x)^T + \Diag\bigl(w(x)\bigr)\right),
\end{equation}
where the functions $v,w:\mathbb R^n\to\mathbb R^n$ are defined for all $x\in\mathbb R^n$ by:
\begin{small}
\begin{align*}
v(x):=&\
\left\{
\sqrt{1-p}\, \phi_\sigma(x_j)^{p-1}
\erf\left(\frac{x_j}{\sqrt{2}\sigma}\right)
\left(\sum_{k=1}^n \phi_{\sigma}(x_k)^p\right)^{(1-2p)/(2p)}
\right\}_{j=1}^n,
\\
w(x):=&\
\left(\sum_{k=1}^n \phi_{\sigma}(x_k)^p\right)^{(1-p)/p} \left\{
(p-1)\phi_\sigma(x_j)^{p-2}\left(\erf\left(\frac{x_j}{\sqrt{2}\sigma}\right)\right)^2
+\frac{\sqrt{2}}{\sigma \sqrt{\pi}}\phi_\sigma(x_j)^{p-1}
\exp\left(-\frac{x_j^2}{2 \sigma^2}\right)
\right\}_{j=1}^n.
\end{align*}
\end{small}
\end{lemma}
\begin{proof}
Define $G_{p,\sigma}:\mathbb R^n\to\mathbb R$ by
\begin{equation}
\label{eq:Gpsigma}
G_{p,\sigma}(x)
:=\left(\sum_{k=1}^n \phi_{\sigma}(x_k)^p\right)^{1/p}.
\end{equation}
Then $H_{p,\sigma}(x)=\|Ax-b\|_2^2+2\tau G_{p,\sigma}(x)$ for all $x$,
and for each $j=1,\ldots,n$,
\begin{equation}
\label{eq:Gpartial1}
\frac{\partial}{\partial x_j}G_{p,\sigma}(x)
=
\frac{1}{p} \left(\sum_{k=1}^n \phi_{\sigma}(x_k)^p\right)^{(1-p)/p}
\left(p\phi_\sigma(x_j)^{p-1}\phi_\sigma'(x_j)\right)
=
G_{p,\sigma}(x)^{1-p} \phi_\sigma(x_j)^{p-1}
\erf\left(\frac{x_j}{\sqrt{2}\sigma}\right)
\end{equation}
where we have used \eqref{eq:phisigma_derivative}. Hence,
\eqref{eq:lp_gradient} follows.
Next, we compute the Hessian of $G_{p,\sigma}$.
For each $i\neq j$,
\begin{eqnarray*}
\frac{\partial^2}{\partial x_i\partial x_j}G_{p,\sigma}(x)
&=& \frac{\partial}{\partial x_i} \left[ G_{p,\sigma}(x)^{1-p} \phi_\sigma(x_j)^{p-1} \erf\left(\frac{x_j}{\sqrt{2}\sigma}\right) \right] = \phi_\sigma(x_j)^{p-1} \erf\left(\frac{x_j}{\sqrt{2}\sigma}\right) \frac{\partial}{\partial x_i} \left[ G_{p,\sigma}(x)^{1-p} \right] \\
&=& (1-p) \phi_\sigma(x_j)^{p-1}
\erf\left(\frac{x_j}{\sqrt{2}\sigma}\right)
G_{p,\sigma}(x)^{-p}\frac{\partial}{\partial x_i}G_{p,\sigma}(x) \\
&=&
(1-p) \phi_\sigma(x_i)^{p-1} \phi_\sigma(x_j)^{p-1}
\erf\left(\frac{x_i}{\sqrt{2}\sigma}\right)
\erf\left(\frac{x_j}{\sqrt{2}\sigma}\right)
G_{p,\sigma}(x)^{1-2p} \\
&=& v(x)_i v(x)_j = \left(v(x) v(x)^T\right)_{ij},
\end{eqnarray*}
and when $i=j$, for each $j=1,\ldots,n$,
\begin{eqnarray*}
\frac{\partial^2}{\partial x_j^2}G_{p,\sigma}(x) &=&
\frac{\partial}{\partial x_j} \left[ G_{p,\sigma}(x)^{1-p} \phi_\sigma(x_j)^{p-1} \erf\left(\frac{x_j}{\sqrt{2}\sigma}\right) \right] \\
&=&
(1-p) \phi_\sigma(x_j)^{2(p-1)}
\left(\erf\left(\frac{x_j}{\sqrt{2}\sigma}\right)\right)^2
G_{p,\sigma}(x)^{1-2p} \\
&+&
G_{p,\sigma}(x)^{1-p} \left(
(p-1)\phi_\sigma(x_j)^{p-2}\left(\erf\left(\frac{x_j}{\sqrt{2}\sigma}\right)\right)^2
+\frac{\sqrt{2}}{\sigma \sqrt{\pi}}\phi_\sigma(x_j)^{p-1}
\exp\left(-\frac{x_j^2}{2 \sigma^2}\right)
\right)
\\
&=&
v(x)_j^2 + w(x)_j
= \bigl(v(x)v(x)^T + \Diag\bigl(w(x)\bigr)\bigr)_{jj}.
\end{eqnarray*}
Hence, \eqref{eq:lp_hessian} holds.
\end{proof}
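The general-case gradient can be checked the same way: the sketch below (again assuming NumPy/SciPy, with illustrative names) implements the partial derivatives \eqref{eq:Gpartial1} and compares the resulting gradient with central differences for $p=0.8$. Since $\phi_\sigma > 0$ everywhere, the fractional powers are well defined.

```python
import numpy as np
from scipy.special import erf

def phi(x, sigma):
    return (x * erf(x / (np.sqrt(2) * sigma))
            + np.sqrt(2 / np.pi) * sigma * np.exp(-x**2 / (2 * sigma**2)))

def Hp(x, A, b, tau, sigma, p):
    r = A @ x - b
    return r @ r + 2 * tau * (phi(x, sigma) ** p).sum() ** (1.0 / p)

def grad_Hp(x, A, b, tau, sigma, p):
    ph = phi(x, sigma)                 # phi_sigma > 0, so powers are safe
    S = (ph ** p).sum()                # G_{p,sigma}(x)^p
    return (2 * A.T @ (A @ x - b)
            + 2 * tau * S ** ((1 - p) / p) * ph ** (p - 1)
            * erf(x / (np.sqrt(2) * sigma)))

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 8))
b = rng.standard_normal(15)
x = rng.standard_normal(8)
tau, sigma, p, h = 0.3, 0.2, 0.8, 1e-6
g = grad_Hp(x, A, b, tau, sigma, p)
g_fd = np.array([(Hp(x + h * e, A, b, tau, sigma, p)
                  - Hp(x - h * e, A, b, tau, sigma, p)) / (2 * h)
                 for e in np.eye(8)])
print(np.max(np.abs(g - g_fd)))
```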
Given $H_{p,\sigma}(x) \approx F_p(x)$ and $\nabla H_{p,\sigma}(x)$, we can apply a number of gradient based methods for the minimization of $H_{p,\sigma}(x)$ (and hence for the approximate minimization
of $F_p(x)$), which take the following general form:
\begin{algorithm}[ht!]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\caption{Generic Gradient Method for finding $\arg\min H_{p,\sigma}(x)$.}
\label{algo:gradient_method}
\BlankLine
Pick an initial point $x^0$\;
\For{$n=0,1,\ldots,$\texttt{\textup{maxiter}}}{
Compute search direction $s^n$ based on the gradient
$\nabla H_{p,\sigma}(x^n)$. \;
Compute step size parameter $\mu$ via line search. \;
Update the iterate: $x^{n+1} = x^n + \mu s^n$. \;
Check if the termination conditions are met. \;
}
Record final solution: $\bar{x} = x^{n+1}$. \;
\end{algorithm}
Note that in the case of $p<1$, the functional $F_p(x)$ is not convex, so such an algorithm may
not converge to the global minimum in that case. The generic algorithm above depends on the choice
of search direction $s^n$, which is based on the gradient, and the line search,
which can be performed several different ways.
\noindent
\subsection{Line Search Techniques}
Gradient based algorithms differ based on
the choice of search direction vector $s^n$ and
line search techniques for parameter $\mu$.
In this section we describe some suitable line search
techniques.
Given the current iterate $x^n$ and search direction $s^n$,
we would like to choose $\mu$ so that:
\begin{equation*}
H_{p,\sigma}(x^{n+1})
= H_{p,\sigma}(x^n + \mu s^n)
\leq H_{p,\sigma}(x^n),
\end{equation*}
where $\mu>0$ is a scalar which measures how far along
the search direction we advance from the previous iterate.
Ideally, we would
like a strict inequality and the functional value to decrease.
Exact line search would solve the single variable
minimization problem:
\begin{equation*}
\bar{\mu} = \arg\min_{\mu} H_{p,\sigma}(x^n + \mu s^n) .
\end{equation*}
The first order necessary optimality condition
(i.e., $\nabla H_{p,\sigma}(x + \mu s)^T s=0$) can be used to find a
candidate value for $\mu$, but it is not easy to solve the gradient equation.
An alternative approach is to use a backtracking line search
to get a step size $\mu$ satisfying the sufficient decrease (first Wolfe)
condition \cite{nocedal_wright_opt}, as in Algorithm \ref{alg:wolfe_line_search}.
This update scheme can be slow since several evaluations
of $H_{p,\sigma}(x)$ may be necessary, which are relatively expensive when the
dimension $n$ is large. It also depends on the choice of parameters $\rho$ and $c$,
to which the generic gradient method may be sensitive.
\begin{algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\caption{Backtracking Line Search \label{alg:wolfe_line_search}}
\Input{Evaluators for $H_{p,\sigma}(x)$ and $\nabla H_{p,\sigma}(x)$,
current iterate $x^n$, search direction $s^n$,
and constants $\mu > 0$, $\rho \in (0,1)$, $c \in (0,1)$.}
\Output{$\mu>0$ satisfying a sufficient decrease condition.}
\BlankLine
\While{$H_{p,\sigma}(x^n + \mu s^n) > H_{p,\sigma}(x^n)
+ c \mu (\nabla H_{p,\sigma}(x^n))^T s^n$ }{
$\mu = \rho \mu$ \;
}
\end{algorithm}
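A minimal implementation of the backtracking loop above might look as follows (Python with NumPy assumed; the helper names are ours). Note that the loop terminates only when $s$ is a descent direction, i.e.\ $\nabla H^T s < 0$.

```python
import numpy as np

def backtracking(H, gradH, x, s, mu0=1.0, rho=0.5, c=1e-4):
    """Shrink mu until H(x + mu*s) <= H(x) + c*mu*gradH(x)^T s."""
    mu = mu0
    fx = H(x)
    slope = gradH(x) @ s          # negative for a descent direction
    while H(x + mu * s) > fx + c * mu * slope:
        mu *= rho
    return mu

# Toy check on a quadratic with the steepest-descent direction.
H = lambda x: x @ x
gradH = lambda x: 2 * x
x0 = np.array([1.0, 1.0])
s0 = -gradH(x0)
mu = backtracking(H, gradH, x0, s0)
print(mu, H(x0 + mu * s0))
```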
\noindent
Another way to perform approximate line search is to utilize a
Taylor series approximation for the solution of
$\frac{\mathrm{d}}{\mathrm{d} \mu} H_{p,\sigma}(x + \mu s) = 0$ \cite{shewchukCG}.
This involves the gradient and Hessian terms which we have
previously computed.
Consider $n(t):=H_{p,\sigma}(x+t s)$ for given $x,s\in\mathbb R^n$.
Expanding $n^{\prime}$ to first order about $t=0$ (equivalently, $n$ to second order), we have
\begin{equation}
\label{eq:taylor_expansion_of_H}
n^{\prime}(t)=n^{\prime}(0)+t n^{\prime \prime}(0)+o(t) \approx n^{\prime}(0)+t n^{\prime \prime}(0)
\end{equation}
By basic matrix calculus,
\begin{eqnarray*}
n^{\prime}(t)
&=&
\left( \nabla H_{p,\sigma}(x+ts) \right)^T s
\implies
n^{\prime}(0) = \nabla H_{p,\sigma}(x)^T s
\\
n^{\prime \prime}(t)
&=& \left[ \left( \nabla^2 H_{p,\sigma}(x + ts) \right)^T s \right]^T s =
s^T \nabla^2 H_{p,\sigma}(x + ts) s
\implies
n^{\prime \prime}(0) = s^T \nabla^2 H_{p,\sigma}(x) s,
\end{eqnarray*}
we get that $n^{\prime}(0)+\mu n^{\prime \prime}(0)=0$ if and only if
\begin{equation}
\label{eq:mu_approx_sol}
\mu = -\frac{\nabla H_{p,\sigma}(x)^T s}{s^T \nabla^2 H_{p,\sigma}(x) s},
\end{equation}
which can be used as the step size in Algorithm \ref{algo:gradient_method}.
For the case $p=1$ (approximating the $\ell_1$ functional),
the Hessian is $A^T A$ plus a diagonal matrix,
which is quick to form and the above approximation can be
efficiently used for line search.
For $p\neq1$, by \eqref{eq:lp_hessian} the Hessian is the sum of $2A^T A$ and
the matrix $2\tau\left(v(x)v(x)^T + \Diag\bigl(w(x)\bigr)\right)$, which is itself
the sum of a rank-one matrix and a diagonal matrix;
the matrix-vector multiplication involving this Hessian is
more expensive than in the case $p=1$.
In this case, one may approximate the Hessian in
\eqref{eq:mu_approx_sol} using finite differences, i.e.,
when $\xi > 0$ is sufficiently small,
\begin{equation}
\label{eq:second_deriv_secant}
n^{\prime \prime}(t) \approx \frac{n^{\prime}(t + \xi) - n^{\prime}(t - \xi)}{2\xi}
\implies n^{\prime \prime}(0) \approx \frac{n^{\prime}(\xi) - n^{\prime}(-\xi)}{2\xi} .
\end{equation}
Approximating $n^{\prime\prime}(0)$ in \eqref{eq:taylor_expansion_of_H} by
$\frac{n^{\prime}(\xi) - n^{\prime}(-\xi)}{2\xi}$, we get
\begin{equation}
\label{eq:secant_approx_n}
\frac{\mathrm{d}}{\mathrm{d}\mu} H_{p,\sigma}(x + \mu s)
\approx \nabla H_{p,\sigma}(x)^Ts
+ \mu \frac{\left(\nabla H_{p,\sigma}(x + \xi s)
- \nabla H_{p,\sigma}(x - \xi s)\right)^T s}{2\xi}.
\end{equation}
Setting the right hand side of \eqref{eq:secant_approx_n} to zero,
and solving for $\mu$, we get the approximation:
\begin{equation}
\label{eq:mu_approx_sol2}
\mu
=
\frac{-2 \xi\nabla H_{p,\sigma}(x)^T s}
{(\nabla H_{p,\sigma}(x + \xi s) - \nabla H_{p,\sigma}(x - \xi s))^T s}.
\end{equation}
In the finite difference scheme, the parameter $\xi$ should be taken to be
of the same order as the components of the current iterate $x^n$.
In practice, we find that for $p=1$, the Hessian based line search \eqref{eq:mu_approx_sol}
works well; for $p\neq1$, one can also use the finite difference scheme
\eqref{eq:mu_approx_sol2} if one wants to avoid evaluating the Hessian.
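Both step-size formulas can be sketched and cross-checked on a quadratic model, where each is exact since the gradient is affine (NumPy assumed; names are illustrative):

```python
import numpy as np

def step_hessian(g, H, s):
    # eq. (mu_approx_sol): mu = -g^T s / (s^T H s)
    return -(g @ s) / (s @ H @ s)

def step_secant(grad, x, s, xi):
    # eq. (mu_approx_sol2): Hessian-free secant approximation of the same step
    return -2 * xi * (grad(x) @ s) / ((grad(x + xi * s) - grad(x - xi * s)) @ s)

# On a quadratic x^T Q x - c^T x the two formulas coincide (up to rounding),
# since the secant difference of an affine gradient is exact.
rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
Q = M.T @ M + np.eye(6)                  # symmetric positive definite
cvec = rng.standard_normal(6)
grad = lambda x: 2 * Q @ x - cvec
x = rng.standard_normal(6)
s = -grad(x)
mu1 = step_hessian(grad(x), 2 * Q, s)
mu2 = step_secant(grad, x, s, xi=1e-3)
print(mu1, mu2)
```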
\subsection{Steepest Descent and Conjugate Gradient Algorithms}
We now present steepest descent and conjugate gradient schemes,
in Algorithms \ref{algo:steepestdescent} and \ref{algo:nonlincg} respectively,
which can be used for sparsity constrained regularization. We also discuss the use
of Newton's method in Algorithm \ref{algo:nonlin_newton}.
Steepest descent and conjugate gradient methods differ in the choice of the search direction.
In steepest descent methods, we simply take the negative of the gradient
as the search direction. For nonlinear conjugate gradient methods,
which one expects to perform better than steepest descent,
several different search direction updates are possible.
We find that the Polak-Ribi\`{e}re scheme often offers good performance
\cite{MR0255025,Polyak196994,shewchukCG}.
In this scheme, we set the initial search direction $s^0$
to the negative gradient, as in steepest descent, but then do a more
complicated update involving the gradient at the current and previous steps:
\begin{eqnarray*}
\beta^{n+1}
&=&
\max\left\{ \frac{\nabla H_{p,\sigma_n}(x^{n+1})^T
\left(\nabla H_{p,\sigma_n}(x^{n+1}) - \nabla H_{p,\sigma_n}(x^n)\right)}
{\nabla H_{p,\sigma_n}(x^n)^T \nabla H_{p,\sigma_n}(x^n)}, 0 \right\} ,
\\
s^{n+1}
&=& -\nabla H_{p,\sigma_n}(x^{n+1}) + \beta^{n+1} s^n.
\end{eqnarray*}
One extra step we introduce in Algorithms \ref{algo:steepestdescent} and
\ref{algo:nonlincg} below is a thresholding
which sets small components to zero. That is, at the end of each iteration, we
retain only a portion of the largest coefficients. This is necessary,
as otherwise the solution we recover will contain many small noisy components
and will not be sparse. In our numerical experiments, we found that soft
thresholding works well when $p=1$ and that hard thresholding works well
when $p<1$.
The componentwise soft and hard thresholding functions with parameter
$\tau>0$ are given by:
\begin{equation}
\label{eq:thresholding}
\left(\mathbb{S}_{\tau }(x)\right)_k = \left\{
\begin{array}{ll}
x_k - \tau, & \hbox{$x_k > \tau$;} \\
0, & \hbox{$-\tau \le x_k \le \tau$;} \\
x_k + \tau, & \hbox{$x_k < -\tau$} \\
\end{array}
\right. \quad \left(\mathbb{H}_{\tau }(x)\right)_k = \left\{
\begin{array}{ll}
x_k, & \hbox{$|x_k| > \tau$} \\
0, & \hbox{$-\tau \le x_k \le \tau$}
\end{array},
\right.
\quad\forall\, x\in\mathbb R^n.
\end{equation}
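The two thresholding maps \eqref{eq:thresholding} can be written in vectorized form as follows (NumPy assumed; function names are ours):

```python
import numpy as np

def soft_threshold(x, tau):
    # S_tau: shrink every entry toward zero by tau, zeroing the band [-tau, tau]
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def hard_threshold(x, tau):
    # H_tau: keep entries with |x_k| > tau, zero out the rest
    return np.where(np.abs(x) > tau, x, 0.0)

x = np.array([3.0, -0.5, 1.2, -2.0])
print(soft_threshold(x, 1.0))
print(hard_threshold(x, 1.0))
```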
For $p=1$, an alternative to thresholding at each iteration at $\tau$ is to
use the optimality condition of the $F_1(x)$ functional \cite{ingrid_thresholding1}.
After each iteration (or after a block of iterations), we can evaluate the vector
\begin{equation}
\label{eq:v_thresholding}
v^n = A^T (b - A x^n) .
\end{equation}
We then set the components (indexed by $k$) of the current solution vector
$x^n$ to zero for indices $k$ for which $|v^n_k| \leq \tau$.
Note that after each iteration, we also vary the parameter $\sigma$ in the
approximating function to the absolute value $\phi_\sigma$,
starting with $\sigma$ relatively far from zero at the first iteration
and decreasing towards $0$ as we approach the iteration limit.
The decrease can be controlled by a parameter $\alpha\in(0,1)$
so that $\sigma_{n+1} = \alpha \sigma_n$.
The choice $\alpha = 0.8$ worked well in our experiments.
We could also tie $\sigma_n$ to the progress of the iteration,
such as the quantity $||x^{n+1} - x^n||_2$.
One should experiment to find what works best with a given application.
Finally, we comment on the computational cost of Algorithms
\ref{algo:steepestdescent} and \ref{algo:nonlincg},
relative to standard iterative thresholding methods, notably the FISTA method.
The FISTA iteration for $F_1(x)$, for example, would be implemented as:
\begin{equation}
\label{eq:FISTA}
y^{0} = x^{0}
\mbox{ ,} \quad
x^{n+1} = \mathbb{S}_{\tau}\left(y^n + A^T b - A^T A y^n\right)
\mbox{ ,} \quad
y^{n+1} = x^{n+1} + \frac{t_n - 1}{t_{n+1}}\left(x^{n+1} - x^{n}\right),
\end{equation}
where $\{t_n\}$ is a special sequence of constants \cite{MR2486527}. For large linear systems,
the main cost is in the evaluation of $A^T A y^n$. The same is true for the gradient based schemes we
present below. The product of $A^T A$ and the vector iterate goes into the gradient computation
and the line search method and can be shared between the two.
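The FISTA iteration can be sketched as follows (NumPy assumed; names are ours). We add the standard $1/L$ step scaling with $L = \|A\|_2^2$, and use the usual momentum update $t_{n+1} = (1 + \sqrt{1 + 4t_n^2})/2$ from \cite{MR2486527}; the iteration as displayed in \eqref{eq:FISTA} corresponds to $L = 1$ (normalized $A$).

```python
import numpy as np

def soft(x, u):
    return np.sign(x) * np.maximum(np.abs(x) - u, 0.0)

def fista(A, b, tau, iters=500):
    # Step 1/L with L = ||A||_2^2 makes the gradient step of ||Ax-b||^2 stable.
    L = np.linalg.norm(A, 2) ** 2
    x_prev = np.zeros(A.shape[1])
    y = x_prev.copy()
    t = 1.0
    for _ in range(iters):
        x = soft(y + A.T @ (b - A @ y) / L, tau / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + (t - 1.0) / t_next * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev

# Noiseless sparse recovery on a small synthetic problem.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [2.0, -3.0, 1.5]
b = A @ x_true
x_hat = fista(A, b, tau=0.05)
rel = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel)
```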
Notice also that the gradient and line search computations involve the evaluation of the error function
$\erf(t) = \frac{2}{\sqrt{\pi}} \int_{0}^{t} e^{-u^2} \,\mathrm{d}u$,
an integral with no closed-form expression.
However, various ways of efficiently approximating the integral value exist:
apart from standard quadrature methods,
several approximations involving the exponential function
are described in \cite{HandbookOfMathFunctions}.
The gradient methods below do have extra overhead compared to the thresholding
schemes and may not be ideal for runs with large numbers of iterations.
However, for large matrices and with efficient implementation, the runtimes for our schemes
and existing iterative methods are expected to be competitive,
since the most time consuming step (multiplication with $A^T A$) is common to both.
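The error-function values needed above can be sanity-checked by comparing a library implementation against direct quadrature of the defining integral (SciPy assumed):

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

# erf(t) = (2/sqrt(pi)) * integral_0^t exp(-u^2) du
t = 1.3
val, _ = quad(lambda u: np.exp(-u * u), 0.0, t)
approx = 2.0 / np.sqrt(np.pi) * val
print(erf(t), approx)
```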
Algorithm \ref{algo:steepestdescent}, below, presents a simple steepest descent scheme
to approximately minimize $F_p$ defined in \eqref{eq:lp_funct}.
In Algorithm \ref{algo:nonlincg}, we present a nonlinear Polak-Ribi\`{e}re conjugate
gradient scheme to approximately minimize $F_p$ \cite{MR0255025,Polyak196994,shewchukCG}.
In practice, this slightly more complicated algorithm is expected to perform
significantly better than the simple steepest descent method.
Another possibility, given access to both gradient and Hessian, is to use a higher
order root finding method, such as Newton's method \cite{nocedal_wright_opt} presented
in Algorithm \ref{algo:nonlin_newton}. The idea here is to find a
root of $\nabla H_{p,\sigma}(x) = 0$, starting from some initial guess $x^{0}$;
such a root corresponds to a critical point of $H_{p,\sigma}$. By classical
application of Newton's method for vector valued functions, we obtain the simple
scheme: $x^{n+1} = x^n + \Delta x$ with $\Delta x$ the solution to the linear
system $\nabla^2 H_{p,\sigma_n}(x^n) \Delta x = - \nabla H_{p,\sigma_n}(x^n)$.
However, Newton's method usually requires an accurate initial guess $x^0$
\cite{nocedal_wright_opt}. For this reason, the presented scheme would
usually be used to top off a CG algorithm or sandwiched between
CG iterations.
The function $\mathtt{Threshold}(\cdot,\tau)$ in the algorithms which enforces sparsity refers to either one of the two thresholding functions defined in \eqref{eq:thresholding} or to the strategy using the $v^n$ vector in \eqref{eq:v_thresholding}.
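The Newton update can be sketched in a few lines; on a quadratic model a single step reaches the stationary point exactly, which makes a convenient correctness check (NumPy assumed; names are illustrative; in practice the linear solve would be done iteratively to tolerance TOL):

```python
import numpy as np

def newton_step(g, H):
    # Solve H * dx = -g for the Newton update direction
    return np.linalg.solve(H, -g)

# On H(x) = 0.5 x^T Q x - c^T x, one Newton step from any point
# lands on the stationary point Q^{-1} c.
rng = np.random.default_rng(4)
M = rng.standard_normal((5, 5))
Q = M.T @ M + np.eye(5)                 # symmetric positive definite Hessian
cvec = rng.standard_normal(5)
grad = lambda x: Q @ x - cvec
x0 = rng.standard_normal(5)
x1 = x0 + newton_step(grad(x0), Q)
print(np.linalg.norm(Q @ x1 - cvec))
```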
\begin{algorithm}[!ht]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\caption{Steepest Descent Scheme
\label{algo:steepestdescent}}
\Input{An $m\times n$ matrix $A$, an initial guess $n \times 1$ vector $x^0$,
a parameter $\tau < \|A^T b\|_{\infty}$,
a parameter $p\in(0,1]$,
a parameter $\sigma_0>0$,
a parameter $0< \alpha < 1$,
the maximum number of iterations $M$,
and a routine to evaluate the gradient $\nabla H_{p,\sigma}(x)$ (and possibly
the Hessian $\nabla^2 H_{p,\sigma}(x)$ depending on choice of line search method).}
\Output{A vector $\bar{x}$,
close to either the global or local minimum of $F_p(x)$,
depending on choice of $p$.}
\BlankLine
\For{$n=0,1,\ldots,M$}{
$s^n = -\nabla H_{p,\sigma_n}(x^n)$ \;
use line search to find $\mu>0$\;
$x^{n+1} = \mathtt{Threshold}(x^n + \mu s^n, \tau)$ \;
$\sigma_{n+1} = \alpha \sigma_n$ \;
}
$\bar{x} = x^{n+1}$\;
\end{algorithm}
\begin{algorithm}[!ht]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\caption{Nonlinear Conjugate Gradient Scheme
\label{algo:nonlincg}}
\Input{An $m\times n$ matrix $A$,
an initial guess $n \times 1$ vector $x^0$,
a parameter $\tau < \|A^T b\|_\infty$,
a parameter $p\in(0,1]$,
a parameter $\sigma_0>0$,
a parameter $0 < \alpha < 1$,
the maximum number of iterations $M$,
and a routine to evaluate the gradient $\nabla H_{p,\sigma}(x)$ (and possibly
the Hessian $\nabla^2 H_{p,\sigma}(x)$ depending on choice of line search method).}
\Output{A vector $\bar{x}$,
close to either the global or local minimum of $F_p(x)$,
depending on choice of $p$.}
\BlankLine
$s^0 = - \nabla H_{p,\sigma_0}(x^0)$ \;
\For{$n=0,1,\ldots,M$}{
use line search to find $\mu>0$\;
$x^{n+1} = \mathtt{Threshold}(x^n + \mu s^n, \tau)$ \;
$\beta^{n+1} =
\max\left\{ \frac{\nabla H_{p,\sigma_n}
(x^{n+1})^T (\nabla H_{p,\sigma_n}(x^{n+1}) - \nabla H_{p,\sigma_n}(x^n))}
{\nabla H_{p,\sigma_n}(x^n)^T \nabla H_{p,\sigma_n}(x^n)}, 0 \right\}$ \;
$s^{n+1} = -\nabla H_{p,\sigma_n}(x^{n+1}) + \beta^{n+1} s^n$ \;
$\sigma_{n+1} = \alpha \sigma_n$ \;
}
$\bar{x} = x^{n+1}$\;
\end{algorithm}
\begin{algorithm}[!ht]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\caption{Newton's Method
\label{algo:nonlin_newton}}
\Input{An $m\times n$ matrix $A$,
a parameter $\tau < \|A^T b\|_\infty$,
a parameter $p\in(0,1]$,
a parameter $\sigma_0>0$,
a parameter $0 < \alpha < 1$,
the maximum number of iterations $M$,
a tolerance parameter TOL,
and routines to evaluate the gradient $\nabla H_{p,\sigma}(x)$ and
the Hessian $\nabla^2 H_{p,\sigma}(x)$.}
\Output{A vector $\bar{x}$, close to either the global or local minimum of $F_p(x)$,
depending on choice of $p$.}
\BlankLine
Obtain a relatively accurate initial guess $x^0$. \;
\For{$n=0,1,\ldots,M$}{
Solve the linear system
$\nabla^2 H_{p,\sigma_n}(x^n) \Delta x = - \nabla H_{p,\sigma_n}(x^n)$ to tolerance (TOL). \;
$x^{n+1} = \mathtt{Threshold}(x^n + \Delta x, \tau)$ \;
$\sigma_{n+1} = \alpha \sigma_n$ \;
}
$\bar{x} = x^{n+1}$\;
\end{algorithm}
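Putting the pieces together, here is an illustrative end-to-end sketch of Algorithm \ref{algo:nonlincg} for $p=1$ on a small synthetic problem (NumPy/SciPy assumed; all names and parameter choices are ours, not from the text). The step size uses the Hessian-based formula \eqref{eq:mu_approx_sol} with the diagonal structure from \eqref{eq:Hessian_H1}.

```python
import numpy as np
from scipy.special import erf

def cg_l1(A, b, tau, sigma0=1.0, alpha=0.8, iters=50):
    """Polak-Ribiere CG on the smoothed l1 functional H_{1,sigma},
    with soft thresholding and sigma annealing (sigma_{n+1} = alpha*sigma_n)."""
    soft = lambda z, u: np.sign(z) * np.maximum(np.abs(z) - u, 0.0)
    grad = lambda z, sg: 2 * A.T @ (A @ z - b) + 2 * tau * erf(z / (np.sqrt(2) * sg))
    x = np.zeros(A.shape[1])
    sigma = sigma0
    g = grad(x, sigma)
    s = -g
    for _ in range(iters):
        # Line search via mu = -g^T s / (s^T (2 A^T A + D) s), D diagonal (Hessian_H1)
        d = 2 * tau * np.sqrt(2 / np.pi) / sigma * np.exp(-x**2 / (2 * sigma**2))
        As = A @ s
        mu = -(g @ s) / (2 * As @ As + d @ s**2)
        x = soft(x + mu * s, tau)
        sigma *= alpha
        g_new = grad(x, sigma)
        beta = max((g_new @ (g_new - g)) / (g @ g), 0.0)   # PR+ update
        s = -g_new + beta * s
        g = g_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[[5, 20, 60]] = [2.0, -1.5, 3.0]
b = A @ x_true
x_hat = cg_l1(A, b, tau=0.05)
obj = lambda z: np.sum((A @ z - b)**2) + 2 * 0.05 * np.abs(z).sum()
print(obj(x_hat), obj(np.zeros(80)))
```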
\section{Numerical Experiments}
\label{sect:numerics}
We now show some numerical experiments, comparing Algorithm \ref{algo:nonlincg}
with FISTA, a state of the art sparse regularization algorithm \cite{MR2486527}
outlined in \eqref{eq:FISTA}. We use our CG scheme with $p=1$ and later also
with Newton's method (Algorithm \ref{algo:nonlin_newton}) and with $p<1$.
We present plots of averaged quantities over many trials, wherever possible.
We observe that for these experiments, the CG scheme gives good results in few iterations,
although each iteration of CG is more expensive than a single iteration of FISTA.
To account for this, we run the experiments using twice as many iterations for FISTA ($100$)
as for CG ($50$). The Matlab codes for all the experiments and figures described and
plotted here are available for download from the author's website.
When performing a sparse reconstruction, we typically vary
the value of the regularization parameter $\tau$ and move along a regularization parameter
curve while doing warm starts, starting from a relatively high value of $\tau$ close
to $||A^T b||_{\infty}$ with a zero initial guess (since for $\tau > ||A^T b||_{\infty}$, the
$\ell_1$ minimizer is zero \cite{ingrid_thresholding1}) and moving to a lower value,
while reusing the solution at the previous $\tau$ as the initial
guess at the next, lower $\tau$ \cite{Loris2009247, Sindhwani2012}. If the $\tau$'s follow a
logarithmic decrease, the corresponding curves we observe are
like those plotted in Figure \ref{fig:sparse_reconstruction2}.
At some $\tau$, the reconstruction $x_{\tau}$ will be optimal along the curve and the
percent error between solution $x_{\tau}$ and true solution $x$ will be lowest. If we do not
know the true solution $x$, we have to use other criteria to pick the $\tau$ at which
we want to record the solution. One way is by using the norm of the noise vector in the
right hand side $||e||_2$. If an accurate estimate of this is known, we can use the
solution at the $\tau$ for which the residual norm $||A x_{\tau} - b||_2 \approx ||e||_2$.
For our examples in Figure \ref{fig:sparse_reconstruction2} and
\ref{fig:sparse_reconstruction3}, we use three types of
matrices, each of size $1000 \times 1000$. We use random Gaussian matrices constructed
to have fast decay of singular values
(matrix type I), matrices with a portion of columns which are linearly correlated
(matrix type II - formed by taking matrix type I and forcing a random sample
of 200 columns to be approximately linearly dependent with some of the others),
and matrices with entries from the random Cauchy distribution \cite{MR1326603}
(matrix type III). For our CG scheme, we
use the scheme as presented in Algorithm \ref{algo:nonlincg} with the approximation
$\hat{\phi}_{\sigma}(x)$ for $|x|$.
In Figure \ref{fig:sparse_reconstruction2}, we plot the residuals vs $\tau$ for
the two algorithms. We also plot curves for the percent errors
$100 \frac{||x_{\tau} - x||_2}{||x||_2}$ and the functional values
$F_1(x_{\tau})$ vs $\tau$ (note that CG is in fact minimizing an approximation
to the non-smooth $F_1(x)$, yet for these examples we find that the value of $F_1(x)$
evaluated for CG is often lower than for FISTA, even when FISTA is run for twice as many iterations).
The curves shown are median values recorded over $20$ runs.
We present more general contour plots in Figure \ref{fig:sparse_reconstruction3} which
compare the minimum percent errors along the regularization curve produced by the two algorithms at different combinations of number of nonzeros and noise levels.
The data at each point of the contour plots
is obtained by running FISTA for $100$ iterations and CG for $50$ iterations at each $\tau$
starting from $\tau = \frac{||A^T b||_{\infty}}{10}$ going down to $\tau = \frac{||A^T b||_{\infty}}{5 \times 10^{8}}$ and reusing the previous solution as the initial guess at the next $\tau$.
We do this for $10$ trials and record the median values.
From Figures \ref{fig:sparse_reconstruction2} and \ref{fig:sparse_reconstruction3}, we observe
similar performance of CG and FISTA for matrix type I and significantly better performance of
CG for matrix types II and III where FISTA does not do as well.
In Figure \ref{fig:sparse_reconstruction4}, we do a compressive sensing image recovery
test by trying to recover the original image from its samples. Here we also test
CG with Newton's method and CG with $p < 1$. A sparse image $x$ was used
to construct the right hand side with a sensing matrix $A$ via $b = A \tilde{x}$
where $\tilde{x}$ is the noisy image, with $\tilde{x} = x + e$ and $e$ being the noise
vector with $25$ percent noise level relative to the norm of $x$.
The matrix $A$ was constructed as in matrix type II described above.
The number of columns (image pixels) was $5025$, and the number of rows
(image samples) was $5000$. We run the algorithms to recover an approximation to $x$
given the sensing matrix $A$ and the noisy
measurements $b$. For each algorithm we used a fixed number of iterations at each $\tau$ along
the regularization curve as before, from $\tau = \frac{||A^T b||_{\infty}}{10}$ going down
to $\tau = \frac{||A^T b||_{\infty}}{5 \times 10^{8}}$ and reusing the previous solution as
the initial guess at the next $\tau$ starting from a zero vector guess at the beginning.
Each CG algorithm is run for a total of 50 iterations at each $\tau$ and FISTA for 100 iterations.
The first CG method uses 50 iterations with $p=1$ using Algorithm \ref{algo:nonlincg}.
The second CG method uses $30$
iterations with $p=1$, followed by Newton's method for $5$ iterations (using
Algorithm \ref{algo:nonlin_newton} with the system solve done via CG for 15 iterations at each
step) and a further $15$ iterations of CG with the initial guess from the result of the Newton scheme.
That is, we sandwich $5$ Newton iterations within the CG scheme.
The final CG scheme uses $50$ iterations of CG with $p=0.83$ (operating on the
non-convex $F_p(x)$ for $p<1$).
In these experiments, we used Hessian based line search approximation \eqref{eq:mu_approx_sol} and
soft thresholding \eqref{eq:thresholding} for $p=1$ and the finite difference line
search approximation \eqref{eq:mu_approx_sol2} and hard thresholding
\eqref{eq:thresholding} for $p<1$.
The results are shown in Figure \ref{fig:sparse_reconstruction4}.
We observe that from the same number of samples, better reconstructions
are obtained using the CG algorithm and that $p$ slightly less than $1$ can give
even better performance than $p=1$ in terms of recovery error. Including a few
iterations of Newton's scheme also seems to slightly improve the result. All of the
CG schemes demonstrate better recovery than FISTA in this test.
\begin{figure*}
\caption{Averaged quantities along the regularization curve for matrix types I, II, III:
residual norms $||Ax_{\tau} - b||_2$, percent errors $100\, ||x_{\tau} - x||_2 / ||x||_2$, and functional values $F_1(x_{\tau})$ vs $\tau$.}
\label{fig:sparse_reconstruction2}
\end{figure*}
\begin{figure*}
\caption{Contour plots for minimum percent errors over all $\tau$'s along the regularization curve
at different combinations of nonzeros and noise fractions for matrix types I,II,III.}
\label{fig:sparse_reconstruction3}
\end{figure*}
\begin{figure*}
\caption{Image reconstruction for a $5025$ pixel image sampled
via a random sensing matrix of size
$5000 \times 5025$ with rapidly decaying singular values, applied to the noisy version of the
image. Row 1: original and noisy image. Row 2: recovered images using
FISTA, CG (with $p=1$), CG with Newton's method (with $p=1$), and
CG (with $p=0.83$), run for $100$ iterations at each $\tau$ for FISTA and $50$
iterations for the CG variants. Row 3: bar plot of percent errors
(recovery error between recovered and original image) for the different algorithms.}
\label{fig:sparse_reconstruction4}
\end{figure*}
\section{Conclusions}
In this article, we proposed new convolution based smooth approximations for the
absolute value function $g(t) = |t|$ using the concept of
approximate mollifiers. We established convergence results for our approximation
in the $L^1$ norm. We applied the approximation to the minimization of the non-smooth
functional $F_p(x)$ which arises in sparsity promoting regularization
(of which the popular $\ell_1$ functional is a special case for $p=1$) to
construct a smooth approximation $H_{p,\sigma}(x)$ of $F_p(x)$ and derived the
gradient and Hessian of $H_{p,\sigma}(x)$.
We discussed the use of the nonlinear CG algorithm and higher order algorithms (like Newton's
method) which operate with the smooth
$H_{p,\sigma}(x)$, $\nabla H_{p,\sigma}(x)$, $\nabla^2 H_{p,\sigma}(x)$ functions
instead of the original non-smooth functional $F_p(x)$.
We observe from the numerics in Section \ref{sect:numerics} that in many cases,
in a small number of iterations, we are able to obtain better results
than FISTA can for the $\ell_1$ case (for example, in the presence of
high noise). We also observe that when $p<1$ but not too far away from one
(say $p \approx 0.9$) we can sometimes obtain even better reconstructions
in compressed sensing experiments.
The simple algorithms we show may be useful for larger problems, where
one can afford only a small number of iterations, or when one wants to quickly
obtain an approximate solution (for example, to warm start a thresholding based method).
The presented ideas and algorithms can be applied to design more complex
algorithms, possibly with better performance for ill-conditioned problems,
by exploiting the wealth of available literature on conjugate gradient and
other gradient based methods. Finally, the convolution smoothing technique which
we use is more flexible than the traditional mollifier approach and
may be useful in a variety of applications where the minimization of
non-smooth functions is needed.
\end{document}
\begin{document}
\title[Centrality of the congruence kernel]{Centrality of the congruence kernel for elementary subgroups of Chevalley groups of rank $> 1$ over noetherian rings}
\begin{abstract}
Let $G$ be a universal Chevalley-Demazure group scheme associated to a reduced irreducible root system of rank $>1.$ For a commutative ring $R$, we let $\Gamma = E(R)$ denote the elementary subgroup of the group of $R$-points $G(R).$ The congruence kernel $C(\Gamma)$ is then defined to be the kernel of the natural homomorphism $\widehat{\Gamma} \to \overline{\Gamma},$ where $\widehat{\Gamma}$ is the profinite completion of $\Gamma$ and $\overline{\Gamma}$ is the congruence completion defined by ideals of finite index. The purpose of this note is to show that for an arbitrary noetherian ring $R$ (with some minor restrictions if $G$ is of type $C_n$ or $G_2$), the congruence kernel $C(\Gamma)$ is central in $\widehat{\Gamma}.$
\end{abstract}
\author[A.S.~Rapinchuk]{Andrei S. Rapinchuk}
\address{Department of Mathematics, University of Virginia,
Charlottesville, VA 22904}
\email{asr3x@virginia.edu}
\author[I.A.~Rapinchuk]{Igor A. Rapinchuk}
\address{Department of Mathematics, Yale University, New Haven, CT 06502}
\email{igor.rapinchuk@yale.edu}
\maketitle
\section{Introduction}\label{S:I}
Let $G$ be a universal Chevalley-Demazure group scheme associated to a reduced irreducible root system $\Phi$ of rank $> 1$. Given a commutative ring $R$, we let $G(R)$ denote the group of $R$-points of $G$, and let $E(R) \subset G(R)$ be the corresponding elementary subgroup. (We recall that $E(R)$ is defined as the subgroup generated by the images $e_{\alpha} (R) =: U_{\alpha} (R)$ for all $\alpha \in \Phi$, where $e_{\alpha} \colon \mathbb{G}_a \to G$ is the canonical 1-parameter subgroup corresponding to a root $\alpha \in \Phi$ --- see \cite{Bo1} for details.) The goal of this note is to make a contribution to the analysis of the congruence subgroup problem for $E(R)$ over a general commutative noetherian ring $R$ (with some minor restrictions if $\Phi$ is of type $C_n$ $(n \geq 2)$ or $G_2$).
While the congruence subgroup problem for $S$-arithmetic groups is a well-established subject (see \cite{PR} for a recent survey), its analysis over general rings, at least from the point of view we adopt in this note, has been rather limited, despite a large number of results dealing with arbitrary normal subgroups of Chevalley groups over commutative rings. For this reason, we begin with a careful description of our set-up.
Let $R$ be a commutative ring and $n \geq 1.$ Then to every ideal $\mathfrak{a} \subset R$, one associates the congruence subgroup $GL_n (R, \mathfrak{a}) = \ker (GL_n (R) \to GL_n (R/ \mathfrak{a}))$, where the map is the one induced by the canonical homomorphism $R \to R/ \mathfrak{a}$. Clearly, if $\mathfrak{a}$ is of {\it finite index} (i.e. the quotient $R/ \mathfrak{a}$ is a finite ring), then $GL_n (R, \mathfrak{a})$ is a normal subgroup of $GL_n (R)$ of {\it finite index}. Given a subgroup $\Gamma \subset GL_n (R),$ we set $\Gamma (\mathfrak{a}) = \Gamma \cap GL_n (R, \mathfrak{a}).$ Then, by the congruence subgroup problem for $\Gamma$, we understand the following question:
\vskip3mm
(CSP) \hskip2mm \parbox{14.9cm}{Does every normal subgroup $\Delta \subset \Gamma$ of {\it finite index} contain the congruence subgroup $\Gamma (\mathfrak{a})$ for some ideal $\mathfrak{a} \subset R$ of {\it finite index}?}
\vskip3mm
\noindent The affirmative answer would give us information about the profinite completion $\widehat{\Gamma}$, which is precisely what is needed for the analysis of representations of $\Gamma$, as well as other issues (cf. \cite{BMS}, \cite{KN}, \cite{Sh}). However, even when $\Gamma$ is $S$-arithmetic, the answer to (CSP) is often negative. So one is instead interested in the computation of the congruence kernel, which measures the deviation from a~positive solution. For this, just as in the arithmetic case, we introduce two topologies on $\Gamma$: the profinite topology $\tau_p^{\Gamma}$ and the congruence topology $\tau_c^{\Gamma}.$ The fundamental system of neighborhoods of the identity for the former consists of all normal subgroups $N \subset \Gamma$ of finite index, and for the latter of the congruence subgroups $\Gamma (\mathfrak{a}),$ where $\mathfrak{a}$ runs through all ideals of $R$ of finite index. The corresponding completions are then given by
$$
\widehat{\Gamma} = \lim_{\longleftarrow} \Gamma / N, \ \ \ \text{where} \ N \lhd \Gamma \ \text{and} \ [\Gamma :N ] < \infty
$$
and
$$
\overline{\Gamma} = \lim_{\longleftarrow} \Gamma / \Gamma (\mathfrak{a}), \ \ \ \text{where} \ \vert R/ \mathfrak{a} \vert < \infty.
$$
As $\tau_p^{\Gamma}$ is stronger than $\tau_c^{\Gamma}$, there exists a continuous surjective homomorphism $\pi^{\Gamma} \colon \widehat{\Gamma} \to \overline{\Gamma},$ whose kernel is called the {\it congruence kernel} and denoted $C(\Gamma).$ Clearly, $C(\Gamma)$ is trivial if and only if the answer to (CSP) is affirmative; in general, its size measures the extent of deviation from the affirmative answer. Unfortunately, as remarked above, in many situations $C(\Gamma)$ is nontrivial, and the focus of this note is on a different property, viz. the {\it centrality} of $C(\Gamma)$ (which means that $C(\Gamma)$ is contained in the center of $\widehat{\Gamma}$). We note that in some cases, centrality is almost as good as triviality (cf. \cite{KN}, \cite{Sh}), and in arithmetic cases actually implies the finiteness of $C (\Gamma).$
Returning to Chevalley groups, we observe that congruence subgroups $G(R, \mathfrak{a}) \subset G(R)$ can be defined either as pullbacks of the congruence subgroups $GL_n (R, \mathfrak{a})$ under a faithful representation of group schemes $G \hookrightarrow GL_n$ over $\mathbb{Z}$, or, intrinsically, as the kernel of the natural homomorphism $G(R) \to G(R/\mathfrak{a}).$
Our main result concerns the congruence kernel of the elementary group $\Gamma = E(R).$ We note that the congruence topology on $\Gamma$ is induced by that on $G(R)$, i.e. is defined by the intersections $\Gamma \cap G(R, \mathfrak{a})$, where $\mathfrak{a}$ runs over all ideals $\mathfrak{a} \subset R$ of finite index. On the other hand, the profinite topology on $\Gamma$ may {\it a priori} be different from the topology induced by the profinite topology of $G(R)$ (cf. the remarks at the end of \S 4).
\vskip2mm
\noindent {\bf Main Theorem.} {\it Let $G$ be a universal Chevalley-Demazure group scheme corresponding to a reduced irreducible root system $\Phi$ of rank $>1.$ Furthermore, let $R$ be a noetherian commutative ring such that $2 \in R^{\times}$ if $\Phi$ is of type $C_n$ ($n \geq 2$) or $G_2$, and let $\Gamma = E(R)$ be the corresponding elementary subgroup. Then the congruence kernel $C(\Gamma)$ is central.}
\vskip2mm
The centrality of the congruence kernel for $SL_n$ ($n \geq 3$) and $Sp_{2n}$ ($n \geq 2$) over rings of algebraic integers was proved by Bass, Milnor, and Serre \cite{BMS}. Their result was generalized to arbitrary Chevalley groups of rank $> 1$ over rings of algebraic integers by Matsumoto \cite{M1}. The only known result for general rings is due to Kassabov and Nikolov \cite{KN}, where centrality was established for $SL_n (\mathbb{Z} [x_1, \dots, x_k])$, with $n \geq 3$, and hence for the elementary group $E_n (R)$ over any finitely generated ring $R$, using $K$-theoretic methods. Although our proof shares some elements with the argument in \cite{KN}, it is purely group-theoretic and is inspired by the proof of centrality for $SL_n$ ($n \geq 3$) over arithmetic rings given in \cite{AR}; in addition, we do not use any results of Matsumoto \cite{M1}.
\vskip2mm
\noindent {\bf Conventions and notations.} All of our rings will be assumed to be commutative and unital. Unless explicitly stated otherwise, $G$ will always denote a universal Chevalley-Demazure group scheme corresponding to a reduced irreducible root system $\Phi$ of rank $> 1$. Furthermore, if $R$ is a commutative ring, then for a subgroup $\Gamma \subset G(R)$, we let $\widehat{\Gamma}$ and $\overline{\Gamma}$ denote the profinite and congruence completions of $\Gamma$, respectively.
\section{Structure of $\overline{G(R)}$}
Let $\mathcal{I}$ be the set of all ideals $\mathfrak{a} \subset R$ of finite index, and let $\mathcal{M} \subset \mathcal{I}$ be the subset of maximal ideals. It is not difficult to see (cf. the proof of Proposition \ref{P-2}) that $\overline{G(R)}$ can be identified with the closure of the image of $G(R)$ in $G(\widehat{R})$, where
$$
\widehat{R} = {\lim_{\longleftarrow}}_{\mathfrak{a} \in \mathcal{I}} R/ \mathfrak{a}
$$
is the profinite completion of $R$. The proof of the Main Theorem relies on the fact that $G(\widehat{R})$ has the bounded generation property with respect to the set $\widehat{S} = \{ e_{\alpha} (t) \mid t \in \widehat{R}, \ \alpha \in \Phi \}$ of elementaries, which we will establish at the end of this section (cf. Corollary \ref{C-1}). We begin, however, by describing the structure of $\widehat{R}$ itself. For each $\mathfrak{m} \in \mathcal{M}$, we let $$R_{\mathfrak{m}} = \lim_{\longleftarrow} R/ \mathfrak{m}^n$$ denote the $\mathfrak{m}$-adic completion of $R$ (cf. \cite{At}, Chapter 10).
\begin{lemma}\label{L-1}
Let $R$ be a noetherian ring.
\vskip1mm
\noindent {\rm (1)} \parbox[t]{15cm}{There exists a natural isomorphism of topological rings
$$
\widehat{R} = \prod_{\mathfrak{m} \in \mathcal{M}} R_{\mathfrak{m}}.
$$}
\vskip1mm
\noindent {\rm (2)} \parbox[t]{15cm}{Each $R_{\mathfrak{m}}$ is a complete local ring.}
\end{lemma}
\begin{proof}
(1) Since $R$ is noetherian, for any $\mathfrak{a} \in \mathcal{I}$ and any $n \geq 2$, the quotient $\mathfrak{a}^{n-1}/ \mathfrak{a}^n$ is a finitely generated $R/ \mathfrak{a}$-module, hence finite. It follows that $R/ \mathfrak{a}^n$ is finite for any $n \geq 1.$ In particular, for any $\mathfrak{m} \in \mathcal{M}$ and $n \geq 1,$ there exists a natural continuous surjective projection
$$
\rho_{\mathfrak{m}, n} \colon \widehat{R} \to R/ \mathfrak{m}^n.
$$
For a fixed $\mathfrak{m}$, the inverse limit of the $\rho_{\mathfrak{m}, n}$ over all $n \geq 1$ yields a continuous ring homomorphism $\rho_{\mathfrak{m}} \colon \widehat{R} \to R_{\mathfrak{m}}.$ Taking the direct product of the $\rho_{\mathfrak{m}}$ over all $\mathfrak{m} \in \mathcal{M},$ we obtain a continuous ring homomorphism
$$
\rho \colon \widehat{R} \to \prod_{\mathfrak{m} \in \mathcal{M}} R_{\mathfrak{m}} =: \overline{R}.
$$
We claim that $\rho$ is the required isomorphism.
Note that ideals of the form
$$
\overline{\mathfrak{a}} = \mathfrak{m}_1^{\alpha_1} R_{\mathfrak{m}_1} \times \cdots \times \mathfrak{m}_n^{\alpha_n} R_{\mathfrak{m}_n} \times \prod_{\mathfrak{m} \neq \mathfrak{m}_i} R_{\mathfrak{m}},
$$
where $\{ \mathfrak{m}_1, \dots, \mathfrak{m}_n \} \subset \mathcal{M}$ is a finite subset and $\alpha_i \geq 1,$ form a base of neighborhoods of zero in $\overline{R}$, with
$$
\overline{R}/ \overline{{\mathfrak{a}}} = R / {\mathfrak{m}}_1^{\alpha_1} \times \cdots \times R/ {\mathfrak{m}}_{n}^{\alpha_n}
$$
(cf. \cite{At}, Proposition 10.15). Set ${\mathfrak{a}} = {\mathfrak{m}}_1^{\alpha_1} \cdots {\mathfrak{m}}_n^{\alpha_n}.$ By the Chinese Remainder Theorem,
$$
R / {\mathfrak{a}} \simeq R/ {\mathfrak{m}}_1^{\alpha_1} \times \cdots \times R/ {\mathfrak{m}}_n^{\alpha_n},
$$
which implies that the composite map
$$
\widehat{R} \to \overline{R} \to \overline{R}/ \overline{{\mathfrak{a}}}
$$
is surjective. Since this is true for all $\overline{{\mathfrak{a}}},$ we conclude that the image of $\rho$ is dense. On the other hand, $\widehat{R}$ is compact, so the image is closed, and we obtain that $\rho$ is in fact surjective.
To prove the injectivity of $\rho$, we observe that for any $\mathfrak{a} \in \mathcal{I}$, the quotient $R/ \mathfrak{a}$, being a finite, hence artinian, ring, is a product of finite local rings $R_1, \dots, R_r$ (\cite{At}, Theorem 8.7). Furthermore, for each maximal ideal $\mathfrak{n}_i \subset R_i$, there exists $\beta_i \geq 1$ such that $\mathfrak{n}_i^{\beta_i} = 0$ (cf. \cite{At}, Proposition 8.4). Letting $\mathfrak{m}_i$ denote the pullback of $\mathfrak{n}_i$ in $R$, we obtain that $\mathfrak{a}$ contains $\mathfrak{b} := \mathfrak{m}_1^{\beta_1} \cdots \mathfrak{m}_r^{\beta_r} \in \mathcal{I}.$ It follows that any nonzero $x \in \widehat{R}$ will have a nonzero projection to some $R/ \mathfrak{b} = R/ \mathfrak{m}_1^{\beta_1} \times \cdots \times R/ \mathfrak{m}_r^{\beta_r}$, and hence to some $R_{\mathfrak{m}_i}$, as required.
\vskip1mm
\noindent (2) It is well-known that $R_{\mathfrak{m}}$ is both complete and local (cf. \cite{At}, Propositions 10.5 and 10.16).
\end{proof}
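As a simple illustration of Lemma \ref{L-1}, for $R = \mathbb{Z}$ the maximal ideals are the ideals $(p)$ with $p$ prime, each $(p)$-adic completion is the ring $\mathbb{Z}_p$ of $p$-adic integers, and the lemma recovers the classical decomposition
$$
\widehat{\mathbb{Z}} \simeq \prod_{p \ \mathrm{prime}} \mathbb{Z}_p.
$$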
As a first step towards establishing bounded generation of $G(\widehat{R})$ with respect to the set of elementaries, we prove
\begin{prop}\label{P-1}
There exists an integer $N = N(\Phi)$, depending only on the root system $\Phi$, such that for any commutative local ring ${\mathcal{R}}$, any element of $G({\mathcal{R}})$ is a product of $\leq N$ elements of $S = \{ e_{\alpha} (r) \mid r \in {\mathcal{R}}, \ \alpha \in \Phi \}.$
\end{prop}
\begin{proof}
Fix a system of simple roots $\Pi \subset \Phi,$ and let $\Phi^+$ and $\Phi^-$ be the corresponding sets of positive and negative roots. Let $T \subset G$ be the canonical maximal torus, and $U^+$ and $U^-$ be the canonical unipotent $\mathbb{Z}$-subschemes corresponding to $\Phi^+$ and $\Phi^-.$ It is well-known (see, for example, \cite{Bo1}, Lemma 4.5) that the product map $\mu \colon U^- \times T \times U^+ \to G$ is an isomorphism onto a principal open subscheme $\Omega \subset G$ defined by some $d \in \mathbb{Z}[G].$ We have decompositions
$$
U^{\pm} = \prod_{\alpha \in \Phi^{\pm}} U_{\alpha} \ \ \ \text{and} \ \ \ T = \prod_{\alpha \in \Pi} T_{\alpha},
$$
where $T_{\alpha}$ is the maximal diagonal torus in $G_{\alpha} = \langle U_{\alpha}, U_{-\alpha} \rangle \simeq SL_2.$ So, the identity
$$
\left( \begin{array}{cl} a & 0 \\ 0 & a^{-1} \end{array} \right) = \left( \begin{array}{lr} 1 & -1 \\ 0 & 1 \end{array} \right) \left( \begin{array}{cc} 1 & 0 \\ 1-a & 1 \end{array} \right) \left( \begin{array}{ll} 1 & a^{-1} \\ 0 & 1 \end{array} \right) \left( \begin{array}{cc} 1 & 0 \\ a(a-1) & 1 \end{array} \right)
$$
shows that there exists $N_1 = N_1 (\Phi)$ such that any element of $\Omega ({\mathcal{R}})$ is a product of $\leq N_1$ elementaries, for {\it any} ring ${\mathcal{R}}.$
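For the reader's convenience, we note that this identity can be verified by direct multiplication: the product of the first two factors on the right-hand side equals
$$
\left( \begin{array}{cc} a & -1 \\ 1-a & 1 \end{array} \right),
$$
multiplying by the third factor yields
$$
\left( \begin{array}{cc} a & 0 \\ 1-a & a^{-1} \end{array} \right),
$$
and the last factor clears the remaining entry below the diagonal, since $(1-a) + a^{-1} \cdot a(a-1) = 0.$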
On the other hand, it follows from the existence of the Bruhat decomposition in Chevalley groups over fields that there exists $N_2 = N_2 (\Phi)$ such that any element of $G(k)$ is a product of $\leq N_2$ elementaries, for any field $k.$ We will now show that $N:= N_1 + N_2$ has the required property for any local ring ${\mathcal{R}}.$ Indeed, let $\mathfrak{m} \subset {\mathcal{R}}$ be the maximal ideal, and $k = {\mathcal{R}}/ \mathfrak{m}$ be the residue field. As $G(k)$ is generated by elementaries, the canonical homomorphism $\omega \colon G({\mathcal{R}}) \to G(k)$ is surjective. Given $g \in G({\mathcal{R}})$, there exists $h \in G({\mathcal{R}})$ that is a product of $\leq N_2$ elementaries and for which we have $\omega (g) = \omega (h).$ Then, for $t = gh^{-1}$, we have $\omega (t) = 1$ (in particular, $\omega (t) \in \Omega (k)$), and therefore $d(t) \not\equiv 0 \pmod{\mathfrak{m}}$. Since ${\mathcal{R}}$ is local, this means that $d(t) \in {\mathcal{R}}^{\times}$, and therefore $t \in \Omega ({\mathcal{R}})$. Thus, $t$ is a product of $\leq N_1$ elementaries, and the required fact follows.
\end{proof}
Next, we have the following
\begin{lemma}\label{L-2}
Let ${\mathcal{R}}_i$ ($i \in I$) be a family of commutative rings such that there exists an integer $N$ with the property that for any $i \in I,$ any $x_i \in G({\mathcal{R}}_i)$ is a product of $\leq N$ elementaries. Set ${\mathcal{R}} = \prod_{i \in I} {\mathcal{R}}_i.$ Then any $x \in G({\mathcal{R}})$ is a product of $\leq N \cdot \vert \Phi \vert$ elementaries.
\end{lemma}
\begin{proof}
It is enough to observe that any element of the form
$$
(e_{\alpha_i} (r_i)) \in G({\mathcal{R}}) = \prod_{i \in I} G({\mathcal{R}}_i),
$$
with $\alpha_i \in \Phi,$ $r_i \in {\mathcal{R}}_i$, can be written as
$$
\prod_{\alpha \in \Phi} e_{\alpha} (t_{\alpha})
$$
for some $t_{\alpha} \in {\mathcal{R}}.$
\end{proof}
Using this result, together with Lemma \ref{L-1} and Proposition \ref{P-1}, we obtain
\begin{cor}\label{C-1}
Let $R$ be a noetherian ring. Then there exists an integer $M > 0$ such that any element of $G(\widehat{R})$ is a product of $\leq M$ elementaries from the set $\widehat{S} = \{e_{\alpha} (t) \mid t \in \widehat{R}, \alpha \in \Phi \}.$
\end{cor}
As we noted earlier, one can identify the congruence completion $\overline{G(R)}$ with the closure of the image of $G(R)$ in $G(\widehat{R})$. The following proposition gives more precise information.
\begin{prop}\label{P-2}
Let $R$ be a noetherian ring. Then $\overline{E(R)} = \overline{G(R)}$ can be naturally identified with $G(\widehat{R})$. Furthermore, there exists an integer $M > 0$ such that any element of $\overline{E(R)} = \overline{G(R)}$ is a product of $\leq M$ elements of the set $\overline{S} := \overline{ \{ e_{\alpha} (r) \mid \alpha \in \Phi, r \in R \} }$ (closure in the congruence topology).
\end{prop}
\begin{proof}
For any $\mathfrak{a} \in \mathcal{I}$, there exists a natural injective homomorphism $\omega_{\mathfrak{a}} \colon G(R) / G(R, \mathfrak{a}) \to G(R/ \mathfrak{a}),$ where as before, $G(R, \mathfrak{a})$ is the principal congruence subgroup of level $\mathfrak{a}.$ Taking the inverse limit over all $\mathfrak{a} \in \mathcal{I},$ we obtain a continuous injective homomorphism
$$
\omega \colon \overline{G(R)} \to G(\widehat{R}).
$$
Clearly, the image of $\omega$ coincides with the closure of the image of the natural homomorphism $G(R) \to G(\widehat{R})$.
From the definitions, one easily sees that if $\overline{e_{\alpha} (r)}$ is the image of $e_{\alpha} (r)$ ($\alpha \in \Phi, r \in R$) in $\overline{G(R)}$, then
$$
\omega (\overline{e_{\alpha} (r)}) = e_{\alpha} (\hat{r}),
$$
where $\hat{r}$ is the image of $r$ in $\widehat{R}.$ It follows that $\omega$ maps $\overline{S}$ onto $\widehat{S} = \{ e_{\alpha} (t) \mid \alpha \in \Phi, \ t \in \widehat{R} \}.$ Since by Corollary \ref{C-1}, $\widehat{S}$ generates $G(\widehat{R}),$ we obtain that $\omega (\overline{E(R)}) = G(\widehat{R})$, and consequently $\omega$ identifies $\overline{E(R)} = \overline{G(R)}$ with $G(\widehat{R})$. Furthermore, if $M$ is the same integer as in Corollary \ref{C-1}, then since every element of $G(\widehat{R})$ is a product of $\leq M$ elements of $\widehat{S}$, our second claim follows.
\end{proof}
\vskip1mm
\noindent {\bf Remark.} Recall that a group $\mathcal{G}$ is said to have {\it bounded generation} with respect to a generating set $X \subset \mathcal{G}$ if there exists an integer $N > 0$ such that every $g \in \mathcal{G}$ can be written as $g = x_1^{\varepsilon_1} \cdots x_d^{\varepsilon_d}$ with $x_i \in X$, $d \leq N$, and $\varepsilon_i = \pm 1.$ It follows from the Baire category theorem (cf. \cite{Mun}, Theorem 48.2) that if a compact topological group $\mathcal{G}$ is (algebraically) generated by a compact subset $X$, then in fact, $\mathcal{G}$ is automatically {\it boundedly} generated by $X$. Indeed, replacing $X$ by $X \cup X^{-1} \cup \{1 \},$ we may assume that $X = X^{-1}$ and $1 \in X.$ Set $X^{(n)} = X \cdots X$ ($n$-fold product). Then the fact that $\mathcal{G} = \langle X \rangle$ means that
$$
\mathcal{G} = \bigcup_{n \geq 1} X^{(n)}.
$$
Since each $X^{(n)}$ is compact, hence closed, we conclude from Baire's theorem that for some $n \geq 1,$ $X^{(n)}$ contains an open set. Then $\mathcal{G}$ can be covered by finitely many translates of $X^{(n)}$, and therefore there exists $M > 0$ such that $X^{(M)} = \mathcal{G}$, as required. This remark shows, in particular, that (algebraic) generation of $\overline{G (R)}$ by $\overline{S}$, or that of $G(\widehat{R})$ by $\widehat{S},$ automatically yields bounded generation.
\vskip2mm
We would like to point out that the fact that $\overline{G(R)} = \overline{E(R)}$ is not used in the proof of the Main Theorem; all we need is that $\overline{E(R)}$ is boundedly generated by $\overline{S}.$ So, we will indicate another way to prove this based on some ideas of Tavgen (cf. \cite{Tav}, Lemma 1), which also gives an explicit bound on the constant $M$ in Proposition \ref{P-2}. First we observe that it is enough to establish the bounded generation of $E(\widehat{R})$ by $\widehat{S} = \{ e_{\alpha} (t) \mid \alpha \in \Phi, \ t \in \widehat{R} \}$ (indeed, this will show that $E(\widehat{R})$ is a continuous image of $\widehat{R}^N$ for some $N > 0$, hence compact, implying that the map $\omega$ from the proof of Proposition \ref{P-2} identifies $\overline{E(R)}$ with $E(\widehat{R})$, and also $\overline{S}$ with $\widehat{S}$). In turn, by the same argument as above, we see that to prove bounded generation of $E(\widehat{R})$, it suffices to show that there exists an integer $N >0$ depending only on $\Phi$ such that for any local ring $R$, any element of $E(R)$ is a product of $\leq N$ elementaries. We will show that in fact
\begin{equation}\label{E-BG}
E(R) = (U^+ (R) U^- (R))^4,
\end{equation}
so one can take $N = 4 \cdot \vert \Phi \vert.$ Let us now prove (\ref{E-BG}) by induction on the rank $\ell$ of $\Phi$. If $\ell = 1,$ then $G = SL_2$, and one easily checks that $$G(R) = E(R) = (U^+ (R) U^- (R))^4.$$ Now, we assume that (\ref{E-BG}) is valid for every reduced irreducible root system of rank $\leq \ell-1$, with $\ell \geq 2$, and prove it for a root system $\Phi$ of rank $\ell.$ Set $X = (U^+(R) U^-(R))^4$, and let $\Delta \subset \Phi$ be a system of simple roots. Since the group $E(R)$ is generated by $e_{\pm \beta} (t)$ for $\beta \in \Delta$ and $t \in R$ (cf. the proof of (\ref{E:St-1}) in \S 4), to prove (\ref{E-BG}), it suffices to show that
$$
e_{\pm \beta} (t) X \subset X.
$$
Pick $\alpha \in \Delta,$ $\alpha \neq \beta$, that corresponds to an extremal node in the Dynkin diagram of $\Phi.$ Let $\Phi_0$ (resp., $\Phi_1$) be the set of roots in $\Phi$ that do not contain (resp., contain) $\alpha$, and let $\Phi_i^{\pm} = \Phi_i \cap \Phi^{\pm}.$ Then $\Phi_0$ is an irreducible root system having $\Delta_0 = \Delta \setminus \{ \alpha \}$ as a system of simple roots; in particular, $\Phi_0$ has rank $\ell - 1.$ If we let $G_0$ denote the corresponding universal Chevalley-Demazure group scheme, then by the induction hypothesis
$$
E_0 (R) = (U_0^+ (R) U_0^- (R))^4,
$$
with the obvious notations. Let $U_1^{\pm} (R)$ be the subgroup generated by $e_{\alpha} (r)$ for $\alpha \in \Phi_1^+$ (resp., $\alpha \in \Phi_1^-$) and $r \in R.$ Then
$U^{\pm} (R) = U_0^{\pm} (R) U_1^{\pm} (R)$, and according to (\cite{Stb}, Lemma 17),
$$
U_0^{\pm} (R) U_1^{\mp} (R) = U_1^{\mp} (R) U_0^{\pm} (R).
$$
So,
$$
X = (U_0^+ (R) U_1^+ (R) U_0^- (R) U_1^- (R))^4 = (U_0^+ (R) U_0^- (R))^4 (U_1^+ (R) U_1^- (R))^4 = E_0 (R) (U_1^+ (R) U_1^- (R))^4.
$$
Since $e_{\pm \beta} (t) \in E_0 (R),$ we obtain that
$$
e_{\pm \beta} (t) X = e_{\pm \beta} (t) E_0 (R) (U_1^+ (R) U_1^- (R))^4 = X,
$$
as required.
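We also note that the base case computation for $SL_2$ over a local ring $R$ can be made explicit. If $g = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in SL_2(R)$ and $c \in R^{\times}$, then
$$
g = \left( \begin{array}{cc} 1 & (a-1)c^{-1} \\ 0 & 1 \end{array} \right) \left( \begin{array}{cc} 1 & 0 \\ c & 1 \end{array} \right) \left( \begin{array}{cc} 1 & (d-1)c^{-1} \\ 0 & 1 \end{array} \right),
$$
as the $(1,2)$-entry of the right-hand side equals $(ad-1)c^{-1} = b$ in view of $ad - bc = 1.$ If $c$ is not a unit, then, $R$ being local, $a$ must be a unit (again because $ad - bc = 1$), so the $(2,1)$-entry $a + c$ of $e_{21}(1) g$ is a unit, and $g = e_{21}(-1) \cdot (e_{21}(1) g)$ is a product of $\leq 4$ elementaries.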
\section{Profinite and congruence topologies coincide on 1-parameter root subgroups}
\begin{prop}\label{P-3}
Let $\Phi$ be a reduced irreducible root system of rank $\geq 2$, $G$ be the corresponding universal Chevalley-Demazure group scheme, and $E(R)$ be the elementary subgroup of the group $G(R)$ over a commutative ring $R$. Furthermore, suppose $N \subset E(R)$ is a normal subgroup of finite index. If $\Phi$ is not of type $C_n$ $(n \geq 2)$ or $G_2$, then there exists an ideal ${\mathfrak{a}} \subset R$ of finite index such that \begin{equation}\label{E-Ideal}
e_{\alpha} ({\mathfrak{a}}) \subset N \cap U_{\alpha} (R)
\end{equation}
for all $\alpha \in \Phi$, where $e_{\alpha} ({\mathfrak{a}}) = \{ e_{\alpha} (t) \mid t \in {\mathfrak{a}} \}.$ The same conclusion holds for $\Phi$ of type $C_n$ $(n \geq 2)$ and $G_2$ if $2 \in R^{\times}$. Thus, in these cases, the profinite and the congruence topologies of $E(R)$ induce the same topology on $U_{\alpha} (R)$, for all $\alpha \in \Phi.$
\end{prop}
\begin{proof}
We begin with two preliminary remarks. First, for any root $\alpha \in \Phi$,
$$
{\mathfrak{a}} (\alpha) := \{ t \in R \mid e_{\alpha} (t) \in N \}
$$
is obviously a finite index subgroup of the additive group of $R$. What one needs to show is that either ${\mathfrak{a}}(\alpha)$ itself is an ideal of $R$, or that it at least contains an ideal of finite index. Second, if $\alpha_1, \alpha_2 \in \Phi$ are roots of the same length, then by (\cite{H1}, 10.4, Lemma C), there exists an element $\tilde{w}$ of the Weyl group $W(\Phi)$ such that $\alpha_2 = \tilde{w} \cdot \alpha_1$. Consequently, it follows from (\cite{St1}, 3.8, relation (R4)) that we can find $w \in E(R)$ such that
$$
w e_{\alpha_1}(t) w^{-1} = e_{\alpha_2} (\varepsilon(w) t)
$$
for all $t \in R$, where $\varepsilon(w) \in \{ \pm 1 \}$ is independent of $t.$ Since $N$ is a normal subgroup of $E(R)$, we conclude that
\begin{equation}\label{E-Ideal1}
{\mathfrak{a}} (\alpha_1) = {\mathfrak{a}} (\alpha_2).
\end{equation}
Thus, it is enough to find a finite index ideal ${\mathfrak{a}} \subset R$ such that (\ref{E-Ideal}) holds for a {\it single} root of each length.
Let us now prove our claim for $\Phi$ of type $A_2$ using explicit computations with commutator relations. We will use the standard realization of $\Phi$, described in \cite{Bour}, where the roots are of the form $\varepsilon_i - \varepsilon_j$,
with $i,j \in \{1, 2, 3 \}$, $i \neq j.$ To simplify notation, we will write $e_{ij} (t)$ to denote $e_{\alpha} (t)$ for $\alpha = \varepsilon_i - \varepsilon_j.$ Set $\alpha_1 = \varepsilon_1 - \varepsilon_2.$ We will now show that ${\mathfrak{a}} (\alpha_1)$ is an ideal of $R$, and then it will follow from our previous remarks that ${\mathfrak{a}} := {\mathfrak{a}}(\alpha_1)$ is as required. Let $r \in {\mathfrak{a}} (\alpha_1)$ and $s \in R.$ Since $N \lhd E(R)$, the (well-known) relation
$$
[e_{12} (r), e_{23} (s)] = e_{13} (rs),
$$
where $[g,h] = gh g^{-1} h^{-1}$, shows that $rs \in {\mathfrak{a}} (\alpha_2)$ for $\alpha_2 = \varepsilon_1 - \varepsilon_3.$ But then (\ref{E-Ideal1}) yields $rs \in {\mathfrak{a}}(\alpha_1),$ completing the argument.
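The relation itself is immediate from the matrix-unit calculus in $SL_3$: writing $e_{ij}(t) = 1 + t E_{ij}$, where $E_{ij}$ denotes the corresponding matrix unit, we have
$$
e_{12}(r)\, e_{23}(s) = 1 + rE_{12} + sE_{23} + rsE_{13},
$$
and multiplying on the right by $e_{12}(-r)$ and then by $e_{23}(-s)$ removes the terms $rE_{12}$ and $sE_{23}$ while leaving $rsE_{13}$ unchanged, since $E_{13}E_{12} = E_{13}E_{23} = E_{23}E_{12} = 0$ and $E_{ij}^2 = 0.$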
Now let $\Phi$ be any root system of rank $\geq 2$ in which all roots have the same length. Then clearly $\Phi$ contains a subsystem $\Phi_0$ of type $A_2$, so our previous considerations show that there exists a~finite index ideal ${\mathfrak{a}} \subset R$ with the property that ${\mathfrak{a}} \subset {\mathfrak{a}}(\alpha)$ for all $\alpha \in \Phi_0.$ But then, by (\ref{E-Ideal1}), the same inclusion holds for all $\alpha \in \Phi.$
Next, we consider the case of $\Phi$ of type $B_n$ with $n \geq 3$. Note that since the system of type $F_4$ contains a subsystem of type $B_3$, this will automatically take care of the case when $\Phi$ is of type $F_4$ as well.
We will use the standard realization of $\Phi$ of type $B_n$, where the roots are of the form $\pm \varepsilon_i$, $\pm \varepsilon_i \pm \varepsilon_j$ with $i, j \in \{ 1, \dots, n \}$, $i \neq j.$ The system $\Phi$ contains a subsystem $\Phi_0$ of type $A_{n-1}$, all of whose roots are long roots in $\Phi.$ Arguing as above, we see that there exists an ideal ${\mathfrak{a}} \subset R$ of finite index such that (\ref{E-Ideal}) holds for all $\alpha \in \Phi_0,$ and hence for all long roots $\alpha \in \Phi.$ To show that the same ideal also works for short roots, we will use the following relation, which is verified by direct computation:
\begin{equation}\label{E-3}
[e_{\varepsilon_1 + \varepsilon_2} (r), e_{-\varepsilon_2}(s)] = e_{\varepsilon_1} (rs) e_{-\varepsilon_1 - \varepsilon_2} (-rs^2),
\end{equation}
for any $r, s \in R$. Now, if $r \in {\mathfrak{a}},$ then $e_{\varepsilon_1 + \varepsilon_2} (r), e_{- \varepsilon_1 - \varepsilon_2} (-r) \in N.$ So, setting $s = 1$ in (\ref{E-3}) and noting that $[e_{\varepsilon_1 + \varepsilon_2} (r), e_{- \varepsilon_2} (1)] \in N$ as $N \lhd E(R),$ we obtain that $e_{\varepsilon_1} (r) \in N.$ Thus, (\ref{E-Ideal}) holds for $\alpha = \varepsilon_1$, and therefore for all short roots.
Next, we proceed to the case of $\Phi$ of type $B_2 = C_2$, where we assume that $2 \in R^{\times}.$
We will use the same realization of $\Phi$ as in the previous paragraph (for $n = 2$). Set ${\mathfrak{a}} = {\mathfrak{a}} (\varepsilon_1).$ Then for $r \in {\mathfrak{a}}$, $s \in R$, one can check by direct computation that
\begin{equation}\label{E-2}
[e_{\varepsilon_1} (r), e_{\varepsilon_2} (s/4)] = e_{\varepsilon_1 + \varepsilon_2} (rs/2) \in N.
\end{equation}
Next, using (\ref{E-3}), in conjunction with the fact that $e_{\varepsilon_1} (u)$ and $e_{\varepsilon_1 - \varepsilon_2} (v)$ commute for all $u, v \in R$, we obtain
$$
[e_{\varepsilon_1 + \varepsilon_2} (rs/2), e_{- \varepsilon_2} (1)][e_{\varepsilon_1 + \varepsilon_2} (rs/2), e_{- \varepsilon_2} (-1)]^{-1} = e_{\varepsilon_1} (rs) \in N,
$$
i.e. $rs \in {\mathfrak{a}}$, which shows that ${\mathfrak{a}}$ is an ideal. Furthermore, from (\ref{E-2}), we see that for any $r \in {\mathfrak{a}},$ we have
$$
[e_{\varepsilon_1} (r), e_{\varepsilon_2} (1/2)] = e_{\varepsilon_1 + \varepsilon_2} (r) \in N.
$$
Thus, $e_{\varepsilon_1 + \varepsilon_2} ({\mathfrak{a}}) \subset N,$ and therefore (\ref{E-Ideal}) holds for all $\alpha \in \Phi.$
Finally, suppose that $\Phi$ is of type $G_2$ and assume again that $2 \in R^{\times}.$
We will use the realization of $\Phi$ described in \cite{CK}: one picks a system of simple roots $\{k, c \}$ in $\Phi$, where $k$ is long and $c$ is short, and then the long roots of $\Phi$ are
$$\pm k, \ \pm (3c + k), \ \pm (3c + 2k),$$ and the short roots are $$\pm c, \ \pm (c+k), \ \pm (2c + k).$$ Set ${\mathfrak{a}} = {\mathfrak{a}} (k).$
Since the long roots of $\Phi$ form a subsystem of type $A_2$, for which our claim has already been established, we conclude that
${\mathfrak{a}}$ is a finite index ideal in $R$ and that (\ref{E-Ideal}) holds for all long roots. To show that (\ref{E-Ideal}) is true for the short roots as well, we need to recall the following explicit forms of the Steinberg commutator relations that were established in (\cite{CK}, Theorem 1.1):
\begin{equation}\label{E-4}
[e_{k}(s), e_{c} (t)] = e_{c+k} (\varepsilon_1 st) e_{2c + k} (\varepsilon_2 st^2) e_{3c + k} (\varepsilon_3 st^3) e_{3c + 2k} (\varepsilon_4 s^2 t^3),
\end{equation}
\begin{equation}\label{E-5}
[e_{c+k} (s), e_{2c+k}(t)] = e_{3c+2k} (3 \varepsilon_5 st),
\end{equation}
where $\varepsilon_i = \pm 1.$
Using (\ref{E-4}), we obtain
$$
[e_k(s), e_c(1)] [e_k (s), e_c(-1)] =
$$
$$
=e_{c+k} (\varepsilon_1 s) e_{2c+k} (\varepsilon_2 s) e_{3c + k} (\varepsilon_3 s) e_{3c+2k} (\varepsilon_4 s^2) e_{c+k} (-\varepsilon_1 s) e_{2c+k} (\varepsilon_2 s) e_{3c + k} (-\varepsilon_3 s) e_{3c+2k} (-\varepsilon_4 s^2).
$$
Since the terms $e_{3c+k} (-\varepsilon_3 s)$ and $e_{3c + 2k} (-\varepsilon_4 s^2)$ commute with all other terms, the last expression reduces to
$$
e_{c+k} (\varepsilon_1 s) e_{2c +k} (\varepsilon_2 s) e_{c+k}(-\varepsilon_1 s) e_{2c + k} (\varepsilon_2 s),
$$
which, using (\ref{E-5}), can be written in the form
$$
e_{3c + 2k} (3 \varepsilon_5 \varepsilon_1 \varepsilon_2 s^2) e_{2c + k} (2 \varepsilon_2 s).
$$
Hence if $s \in {\mathfrak{a}}$, we obtain that
$$
[e_k(s/2), e_c(1)] [e_k (s/2), e_c(-1)] = e_{3c + 2k} (3 \varepsilon_5 \varepsilon_1 \varepsilon_2 s^2/4) e_{2c + k} (\varepsilon_2 s) \in N.
$$
But $e_{3c + 2k} (3 \varepsilon_5 \varepsilon_1 \varepsilon_2 s^2/4) \in N,$ from which it follows that $e_{2c + k} ({\mathfrak{a}}) \subset N.$ This completes the proof.
\end{proof}
\vskip1mm
\noindent {\bf Remark.} If $R$ is the ring of algebraic $S$-integers, then any subgroup of finite index of the additive group of $R$ contains an ideal of finite index, so the conclusion of Proposition \ref{P-3} holds for root systems of rank $>1$ of all types without any additional restrictions on $R$. On the other hand, if $R$ is the ring of $S$-integers in a global field of positive characteristic $>2$, then $2 \in R^{\times}$, and Proposition \ref{P-3} again applies to all root systems without any extra assumptions.
\section{Proof of the main theorem}
We return to the notations introduced in \S \ref{S:I}. In particular, we set $\Gamma = E(R)$, where $R$ is a commutative noetherian ring such that $2 \in R^{\times}$ if our root system $\Phi$ is of type $C_n$ ($n \geq 2$) or $G_2$, and let $\widehat{\Gamma}$ and $\overline{\Gamma}$ denote the profinite and congruence completions of $\Gamma$, respectively. Furthermore, we let $\pi \colon \widehat{\Gamma} \to \overline{\Gamma}$ denote the canonical continuous homomorphism, so that $C(\Gamma) := \ker \pi$ is the congruence kernel. For each root $\alpha \in \Phi$, we let $\widehat{U}_{\alpha}$ and $\overline{U}_{\alpha}$ denote the closures of the images of the natural homomorphisms $U_{\alpha} (R) \to \widehat{\Gamma}$ and $U_{\alpha} (R) \to \overline{\Gamma}.$ By Proposition \ref{P-3}, the profinite and congruence topologies of $\Gamma$ induce the same topology on each $U_{\alpha} (R)$, which implies that $\pi \vert_{\widehat{U}_{\alpha}} \colon \widehat{U}_{\alpha} \to \overline{U}_{\alpha}$ is a group isomorphism. From the definitions, it is clear that $\overline{U}_{\alpha}$ coincides with $\overline{e}_{\alpha} (\widehat{R})$, where $\overline{e}_{\alpha} \colon \widehat{R} \to G(\widehat{R}) = \overline{G(R)}$ is the 1-parameter subgroup associated with $\alpha$ over the ring $\widehat{R}.$ Set
$$
\widehat{e}_{\alpha} = (\pi \vert_{\widehat{U}_{\alpha}})^{-1} \circ \overline{e}_{\alpha}.
$$
Then $\widehat{e}_{\alpha} \colon \widehat{R} \to \widehat{U}_{\alpha}$ is an isomorphism of topological groups, and in particular, we have
$$
\widehat{e}_{\alpha} (r+s) = \widehat{e}_{\alpha} (r) \widehat{e}_{\alpha} (s)
$$
for all $r, s \in \widehat{R}$ and any $\alpha \in \Phi.$
Before establishing some further properties of the $\widehat{e}_{\alpha}$, let us recall that for any commutative ring $S$ and any $\alpha, \beta \in \Phi$, $\beta \neq -\alpha$, there is a relation in $G(S)$ of the form
\begin{equation}\label{E-Steinberg}
[e_{\alpha} (s), e_{\beta} (t)] = \prod e_{i \alpha + j \beta} (N_{\alpha, \beta}^{i,j} s^i t^j)
\end{equation}
for all $s,t \in S$,
where the product is taken over all roots of the form $i \alpha + j \beta$ with $i, j \in {\mathbb Z}^+$, listed in an arbitrary (but {\it fixed}) order, and the $N^{i,j}_{\alpha, \beta}$ are integers depending only on $\alpha, \beta \in \Phi$ and the order of the factors in (\ref{E-Steinberg}), but not on $s, t \in S$. Furthermore, recall that the abstract group $\tilde{G}(S)$ with generators $\tilde{x}_{\alpha} (s)$ for all $s \in S$ and $\alpha \in \Phi$ subject to the relations
\vskip1mm
(R1) $\tilde{x}_{\alpha}(s) \tilde{x}_{\alpha}(t) = \tilde{x}_{\alpha} (s+t)$,
\vskip1mm
(R2) \parbox[t]{15cm}{$[\tilde{x}_{\alpha} (s), \tilde{x}_{\beta} (t)] = \prod \tilde{x}_{i \alpha + j \beta} (N^{i,j}_{\alpha, \beta} s^i t^j)$, where $N_{\alpha, \beta}^{i,j}$ are the same integers, and the roots are listed in the same order, as in (\ref{E-Steinberg}),}
\vskip1mm
\noindent is called the {\it Steinberg group}. It follows from (\ref{E-Steinberg}) that there exists a canonical homomorphism $\tilde{G}(S) \to G(S)$, defined by $\tilde{x}_{\alpha} (s) \mapsto e_{\alpha} (s)$, whose kernel is denoted by $K_2 (\Phi, S).$
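For example, if $\Phi$ is of type $A_2$ and $\alpha, \beta$ are simple roots, the only root of the form $i\alpha + j\beta$ with $i, j \in {\mathbb Z}^+$ is $\alpha + \beta$, and (\ref{E-Steinberg}) reduces to $[e_{\alpha}(s), e_{\beta}(t)] = e_{\alpha+\beta}(N^{1,1}_{\alpha, \beta}\, st)$ with $N^{1,1}_{\alpha, \beta} = \pm 1$; in the standard realization $E(S) \subset SL_3(S)$ this is the matrix identity
$$
[\, I + sE_{12}, \ I + tE_{23} \,] = I + stE_{13},
$$
where $E_{ij}$ denotes the matrix unit, which one verifies directly using $E_{12}E_{23} = E_{13}$ and $E_{23}E_{12} = 0.$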
\begin{lemma}\label{L-3}
{\rm (1)} \parbox[t]{15cm}{For any $\alpha, \beta \in \Phi$, $\beta \neq -\alpha$, and $s, t \in \widehat{R}$, we have $[\widehat{e}_{\alpha} (s), \widehat{e}_{\beta} (t)] = \prod \widehat{e}_{i \alpha + j \beta} (N_{\alpha, \beta}^{i,j} s^i t^j).$}
\vskip2mm
\noindent Let $\widehat{R} = \prod_{{\mathfrak{m}} \in \mathcal{M}} R_{{\mathfrak{m}}}$ be the decomposition from Lemma \ref{L-1}(1), and for ${\mathfrak{m}} \in \mathcal{M}$, let $\widehat{\Gamma}_{{\mathfrak{m}}}$ (resp. $\widehat{\Gamma}_{{\mathfrak{m}}}'$) be the subgroup of $\widehat{\Gamma}$ (algebraically) generated by $\widehat{e}_{\alpha} (r)$ for all $r \in R_{{\mathfrak{m}}}$ (resp., $r \in R_{{\mathfrak{m}}}' := \prod_{{\mathfrak{n}} \neq {\mathfrak{m}}} R_{{\mathfrak{n}}}$) and all $\alpha \in \Phi$. Then
\noindent {\rm (2)} \parbox[t]{16cm}{There exists a surjective group homomorphism $\theta_{{\mathfrak{m}}} \colon \tilde{G}(R_{{\mathfrak{m}}}) \to \widehat{\Gamma}_{{\mathfrak{m}}}$ such that $\tilde{x}_{\alpha} (r) \mapsto \widehat{e}_{\alpha} (r)$ for all $r \in R_{{\mathfrak{m}}}$ and $\alpha \in \Phi.$}
\vskip1mm
\noindent {\rm (3)} \parbox[t]{16cm}{$\widehat{\Gamma}_{{\mathfrak{m}}}$ and $\widehat{\Gamma}_{{\mathfrak{m}}}'$ commute elementwise inside $\widehat{\Gamma}.$}
\end{lemma}
\begin{proof}
(1) Define two continuous maps
$$
\varphi \colon \widehat{R} \times \widehat{R} \to \widehat{\Gamma}, \ \ \ (s,t) \mapsto [\widehat{e}_{\alpha} (s), \widehat{e}_{\beta} (t)]
$$
and
$$
\psi \colon \widehat{R} \times \widehat{R} \to \widehat{\Gamma}, \ \ \ (s,t) \mapsto \prod \widehat{e}_{i \alpha + j \beta} (N_{\alpha, \beta}^{i,j} s^i t^j).
$$
It follows from (\ref{E-Steinberg}) that these maps coincide on $R \times R.$ Since $R \times R$ is dense in $\widehat{R} \times \widehat{R},$ we have $\varphi \equiv \psi,$ yielding our claim.
(2) Since we have shown that the $\widehat{e}_{\alpha}(r)$, $r \in R_{{\mathfrak{m}}}$, $\alpha \in \Phi,$ satisfy the relations (R1) and (R2), the existence of the homomorphism $\theta_{{\mathfrak{m}}}$ follows.
(3) It suffices to show that for any $\alpha, \beta \in \Phi$ and any $r \in R_{{\mathfrak{m}}}, \ s \in R_{{\mathfrak{m}}}',$ the elements $\widehat{e}_{\alpha} (r), \widehat{e}_{\beta} (s) \in \widehat{\Gamma}$ commute. Since $rs = 0$ in $\widehat{R},$ this fact immediately follows from (1) if $\beta \neq -\alpha.$ To handle the remaining case $\beta = - \alpha,$ we observe that for any ring $S$ and the corresponding Steinberg group $\tilde{G}(S)$, we have
\begin{equation}\label{E:St-1}
\tilde{G}(S) = <\tilde{x}_{\gamma} (r) \mid \gamma \in \Phi \setminus \{ \alpha \}, r \in S>.
\end{equation}
Indeed, it is well-known that $\tilde{G}(S)$ is generated by the elements $\tilde{x}_{\gamma} (r)$ for all $r \in S$ and all $\gamma$ in an arbitrarily chosen system $\Pi \subset \Phi$ of simple roots (this follows, for example, from the fact that the Weyl group of $\Phi$ is generated by the reflections corresponding to simple roots, and moreover, every root lies in the orbit of a simple root under the action of the Weyl group).
On the other hand, since $\Phi$ is of rank $\geq 2$, for any $\alpha \in \Phi,$ one can find a system of simple roots $\Pi \subset \Phi$ that does not contain $\alpha$, and (\ref{E:St-1}) follows. Using the homomorphism $\theta_{{\mathfrak{m}}}$ constructed in part (2), we conclude from (\ref{E:St-1}) that $\widehat{\Gamma}_{{\mathfrak{m}}} = \theta_{{\mathfrak{m}}} (\tilde{G}(R_{{\mathfrak{m}}}))$ is generated by $\widehat{e}_{\gamma}(r)$ for $r \in R_{{\mathfrak{m}}}$, $\gamma \in \Phi \setminus \{ \alpha \}$. So, since we already know that $\widehat{e}_{-\alpha} (s)$, with $s \in R_{{\mathfrak{m}}}'$, commutes with all of these elements, it also commutes with $\widehat{e}_{\alpha} (r),$ yielding our claim.
\end{proof}
The following lemma, which uses results of Stein \cite{St2} on the computation of $K_2$ over semi-local rings, is a key ingredient in the proof of the Main Theorem.
\begin{lemma}\label{L-4}
The kernel $\ker (\pi \vert_{\widehat{\Gamma}_{{\mathfrak{m}}}})$ of the restriction $\pi \vert_{\widehat{\Gamma}_{{\mathfrak{m}}}}$ lies in the center of $\widehat{\Gamma}_{{\mathfrak{m}}}$, for any ${\mathfrak{m}} \in \mathcal{M}.$
\end{lemma}
\begin{proof}
Stein has shown that if $\Phi$ has rank $\geq 2$ and $S$ is a semi-local ring which is generated by its units, then $K_2 (\Phi, S)$ lies in the center of $\tilde{G}(S)$ (cf. \cite{St2}, Theorem 2.13). Since $S = R_{{\mathfrak{m}}}$ is local, it is automatically generated by its units, hence $K_2 (\Phi, R_{{\mathfrak{m}}}) = \ker (\tilde{G}(R_{{\mathfrak{m}}}) \stackrel{\mu}{\longrightarrow} E(R_{{\mathfrak{m}}}))$ is central. On the other hand, $\mu$ admits the following factorization:
$$
\tilde{G}(R_{{\mathfrak{m}}}) \stackrel{\theta_{{\mathfrak{m}}}}{\longrightarrow} \widehat{\Gamma}_{{\mathfrak{m}}} \stackrel{\pi \vert_{\widehat{\Gamma}_{{\mathfrak{m}}}}}{\longrightarrow} E(R_{{\mathfrak{m}}}).
$$
Since $\theta_{{\mathfrak{m}}}$ is surjective, we conclude that
$$
\ker (\pi \vert_{\widehat{\Gamma}_{{\mathfrak{m}}}}) = \theta_{{\mathfrak{m}}} (K_2 (\Phi, R_{{\mathfrak{m}}}))
$$
is central in $\widehat{\Gamma}_{{\mathfrak{m}}}.$
\end{proof}
Now fix ${\mathfrak{m}} \in \mathcal{M}$ and let $\Delta_{{\mathfrak{m}}} = \widehat{\Gamma}_{{\mathfrak{m}}} \widehat{\Gamma}_{{\mathfrak{m}}}'$ be the subgroup of $\widehat{\Gamma}$ (algebraically) generated by $\widehat{\Gamma}_{{\mathfrak{m}}}$ and $\widehat{\Gamma}_{{\mathfrak{m}}}'.$ Let $c \in C(\Gamma) \cap \Delta_{{\mathfrak{m}}},$ and write $c = c_1 c_2,$ with $c_1 \in \widehat{\Gamma}_{{\mathfrak{m}}}, \ c_2 \in \widehat{\Gamma}_{{\mathfrak{m}}}'.$
We have $\overline{\Gamma} = \overline{\Gamma}_{{\mathfrak{m}}} \times \overline{\Gamma}_{{\mathfrak{m}}}'$, where $\overline{\Gamma}_{{\mathfrak{m}}} = E(R_{{\mathfrak{m}}})$ and $\overline{\Gamma}_{{\mathfrak{m}}}' = E(R_{{\mathfrak{m}}}').$
Since $\pi (c_1) \in \overline{\Gamma}_{{\mathfrak{m}}}$, $\pi(c_2) \in \overline{\Gamma}_{{\mathfrak{m}}}',$ we conclude from $$\pi(c) = e = \pi(c_1) \pi(c_2)$$ that $\pi(c_1) = e,$ i.e. $c_1 \in \ker (\pi \vert_{\widehat{\Gamma}_{{\mathfrak{m}}}}).$ Then by Lemma \ref{L-4}, $\widehat{\Gamma}_{{\mathfrak{m}}}$ centralizes $c_1.$ On the other hand, $\widehat{\Gamma}_{{\mathfrak{m}}}$ centralizes $c_2 \in \widehat{\Gamma}_{{\mathfrak{m}}}'$ by Lemma \ref{L-3}(3). So, $\widehat{\Gamma}_{{\mathfrak{m}}}$ centralizes $c.$ Thus, we have shown that $C \cap \Delta_{{\mathfrak{m}}}$ is centralized by $\widehat{\Gamma}_{{\mathfrak{m}}}.$ To prove that $\widehat{\Gamma}_{{\mathfrak{m}}}$ actually centralizes all of $C$, we need the following
\begin{lemma}\label{L-5}
Let $\varphi \colon \mathcal{G}_1 \to \mathcal{G}_2$ be a continuous homomorphism of topological groups, and let $\mathcal{F} = \ker \varphi.$ Suppose $\Theta \subset \mathcal{G}_1$ is a dense subgroup such that there exists a compact set $\Omega \subset \Theta$ whose image $\varphi(\Omega)$ is a neighborhood of the identity in $\mathcal{G}_2.$ Then $\mathcal{F} \cap \Theta$ is dense in $\mathcal{F}.$
\end{lemma}
\begin{proof}
Since $\varphi(\Omega)$ is a neighborhood of the identity in $\mathcal{G}_2$, we can find an open set $U \subset \mathcal{G}_1$ such that
$$
\mathcal{F} \subset U \subset \varphi^{-1}(\varphi(\Omega)) = \Omega \mathcal{F}.
$$
Now since $\Theta$ is dense in $\mathcal{G}_1$, we have $U \subset \overline{\Theta \cap U},$ where the bar denotes the closure in $\mathcal{G}_1.$ Thus,
$$
\mathcal{F} \subset \overline{\Theta \cap U} \subset \overline{\Theta \cap \Omega \mathcal{F}}.
$$
But $\Theta \cap \Omega \mathcal{F} = \Omega (\Theta \cap \mathcal{F}),$ and since $\Omega$ is compact, the product $\Omega \overline{(\Theta \cap \mathcal{F})}$ is closed. So
$$
\mathcal{F} \subset \overline{\Theta \cap \Omega \mathcal{F}} \subset \Omega \overline{(\Theta \cap \mathcal{F})}.
$$
Since $\mathcal{F}$ is closed, we have $\overline{\Theta \cap \mathcal{F}} \subset \mathcal{F},$ so
$$
\mathcal{F} \subset (\Omega \cap \mathcal{F}) \overline{(\Theta \cap \mathcal{F})} \subset (\Theta \cap \mathcal{F}) \overline{(\Theta \cap \mathcal{F})} = \overline{\Theta \cap \mathcal{F}},
$$
as required.
\end{proof}
In order to apply Lemma \ref{L-5} in our situation, we need the following simple fact.
\begin{lemma}\label{L-6}
The subgroup $\Delta \subset \widehat{\Gamma}$ (algebraically) generated by the $\widehat{\Gamma}_{{\mathfrak{m}}}$ for all ${\mathfrak{m}} \in \mathcal{M}$ is dense. Consequently, for any ${\mathfrak{m}} \in \mathcal{M},$ the subgroup $\Delta_{{\mathfrak{m}}} = \widehat{\Gamma}_{{\mathfrak{m}}} \widehat{\Gamma}_{{\mathfrak{m}}}' \subset \widehat{\Gamma}$ is dense.
\end{lemma}
\begin{proof}
Let
$$
R_0 := \sum_{{\mathfrak{m}} \in \mathcal{M}} R_{{\mathfrak{m}}} \subset \widehat{R} = \prod_{{\mathfrak{m}} \in \mathcal{M}} R_{{\mathfrak{m}}}.
$$
Clearly $R_0$ is a dense subring of $\widehat{R}.$ On the other hand, $\Delta$ obviously contains $\widehat{e}_{\alpha} (R_0)$ for any $\alpha \in \Phi.$ So, the closure $\overline{\Delta}$ contains $\widehat{e}_{\alpha} (\widehat{R})$ for all $\alpha \in \Phi$, and therefore coincides with $\widehat{\Gamma},$ yielding our first assertion. Furthermore, for any ${\mathfrak{m}} \in \mathcal{M},$ the subgroup $\Delta_{{\mathfrak{m}}}$ contains $\widehat{\Gamma}_{{\mathfrak{n}}}$ for all ${\mathfrak{n}} \in \mathcal{M},$ so our second assertion follows.
\end{proof}
\vskip1mm
\noindent {\it Conclusion of the proof of the Main Theorem}: Fix ${\mathfrak{m}} \in \mathcal{M}.$ We have already seen that $\widehat{\Gamma}_{{\mathfrak{m}}}$ centralizes $C \cap \Delta_{{\mathfrak{m}}}.$ We claim that $C \cap \Delta_{{\mathfrak{m}}}$ is dense in $C$, and hence $\widehat{\Gamma}_{{\mathfrak{m}}}$ centralizes $C$. Indeed, by Lemma~\ref{L-6}, $\Delta_{{\mathfrak{m}}}$ is dense in $\widehat{\Gamma}.$ On the other hand, it follows from Corollary \ref{C-1} that there exists a string of roots $(\alpha_1, \dots, \alpha_L)$ such that the map
$$
\widehat{R}^L \to \overline{\Gamma}, \ \ \ \ \ (r_1, \dots, r_L) \mapsto \prod_{i=1}^L \overline{e}_{\alpha_i} (r_i)
$$
is surjective. Then
$$
\Omega := \widehat{e}_{\alpha_1} (\widehat{R}) \cdots \widehat{e}_{\alpha_L} (\widehat{R}) = \left( \widehat{e}_{\alpha_1} (R_{{\mathfrak{m}}}) \cdots \widehat{e}_{\alpha_L} (R_{{\mathfrak{m}}}) \right) \left( \widehat{e}_{\alpha_1} (R_{{\mathfrak{m}}}') \cdots \widehat{e}_{\alpha_L} (R_{{\mathfrak{m}}}') \right)
$$
is a compact subset of $\widehat{\Gamma}$ that is contained in $\Delta_{{\mathfrak{m}}}$ and has the property that $\pi(\Omega) = \overline{\Gamma}.$ Invoking Lemma \ref{L-5}, we obtain that $C \cap \Delta_{{\mathfrak{m}}}$ is dense in $C$, as required.
We now see that $\widehat{\Gamma}_{{\mathfrak{m}}}$ centralizes $C$ for all ${\mathfrak{m}} \in \mathcal{M}.$ Since the subgroup $\Delta \subset \widehat{\Gamma}$ generated by the $\widehat{\Gamma}_{{\mathfrak{m}}}$ is dense in $\widehat{\Gamma}$ by Lemma \ref{L-6}, we obtain that $\widehat{\Gamma}$ centralizes $C$, completing the proof.
$\Box$
\vskip2mm
To put our proof of the Main Theorem into perspective, we recall the following criterion for the centrality of the congruence kernel in the context of the congruence subgroup problem for algebraic groups over global fields (see \cite{PR}, Theorem 4). Let $G$ be an absolutely almost simple simply connected algebraic group over a global field $K$, and $S$ be a set of places of $K$, which we assume to contain all archimedean places if $K$ is a number field, such that the corresponding $S$-arithmetic group $G(\mathcal{O}_S)$ is infinite (where $\mathcal{O}_S$ is the ring of $S$-integers in $K$). Then by the Strong Approximation Theorem, the $S$-congruence completion $\overline{G(K)}$ of the group $G(K)$ of $K$-rational points can be identified with the group of $S$-adeles $G(\mathbb{A}_S)$, and in particular the group $G(K_v)$, for $v \notin S$, can be viewed as a subgroup of $\overline{G(K)}.$ Assume furthermore that $S$ contains no nonarchimedean anisotropic places for $G$ and that $G/K$ satisfies the Margulis-Platonov conjecture. If for each $v \notin S$, there exists a subgroup $H_v$ of the $S$-arithmetic completion $\widehat{G(K)}$ such that
\vskip1mm
(1) \parbox[t]{16cm}{$\pi (H_v) = G(K_v)$ for all $v \notin S$, where $\pi \colon \widehat{G(K)} \to \overline{G(K)}$ is the canonical projection;}
\vskip1mm
(2) \parbox[t]{16cm}{$H_{v_1}$ and $H_{v_2}$ commute elementwise for $v_1 \neq v_2$;}
\vskip1mm
(3) \parbox[t]{16cm}{the $H_v$, for $v \notin S$, (algebraically) generate a dense subgroup of $\widehat{G(K)}$,}
\vskip1mm
\noindent then the congruence kernel $C^S(G) := \ker \pi$ is central. So, this criterion basically states that in the arithmetic situation, the mere existence of elementwise commuting lifts of ``local groups'' implies the centrality of the congruence kernel. In our situation, the existence of elementwise commuting lifts (which we denoted $\widehat{\Gamma}_{{\mathfrak{m}}}$ above) also plays a part in the proof of centrality (cf. Lemma \ref{L-3}(3)), but some additional considerations (such as the result of Stein and the bounded generation property for $E(\widehat{R}) = G(\widehat{R})$) are needed; the facilitating factor in the arithmetic situation is the action of the group $G(K)$ on the congruence kernel, which is not available over more general rings.
Finally, we will relate our result on the centrality of the congruence kernel $C(\Gamma)$ for $\Gamma = E(R)$ to the congruence subgroup problem for $G(R).$ We have the following commutative diagram induced by the natural embedding $\Gamma \hookrightarrow G(R)$:
$$
\xymatrix{1 \ar[r] & C(\Gamma) \ar[r] \ar[d]_{\alpha} & \widehat{\Gamma} \ar[d]_{\beta} \ar[r]^{\pi^{\Gamma}} & \overline{\Gamma} \ar[d]_{\gamma} \ar[r] & 1 \\ 1 \ar[r] & C(G(R)) \ar[r] & \widehat{G(R)} \ar[r]^{\pi^{G(R)}} & \overline{G(R)} \ar[r] & 1}
$$
We note that by Proposition \ref{P-2}, $\gamma$ is an isomorphism. So, $\alpha(C(\Gamma)) = C(G(R)) \cap \beta (\widehat{\Gamma})$, and $\beta (\widehat{\Gamma})$ coincides with the closure $\check{\Gamma}$ of $\Gamma$ in $\widehat{G(R)}$. Thus, our Main Theorem yields the following
\begin{cor}
$C(G(R)) \cap \check{\Gamma}$ is centralized by $\check{\Gamma}.$
\end{cor}
The exact relationship between $C(G(R))$ and $C(G(R)) \cap \check{\Gamma}$ (or $C(\Gamma)$) remains unclear except in a few cases. Matsumoto \cite{M1} showed that $G(R) = E(R)$ for any ring $R$ of algebraic $S$-integers, which, combined with our Main Theorem and the remark at the end of \S 3, yields the centrality of $C(E(R)) = C(G(R))$, established by Matsumoto himself. Furthermore,
for $G = SL_n$ $(n \geq 3)$ and $R = {\mathbb Z}[x_1, \dots, x_k]$, by a result of Suslin \cite{Su}, we again have $G(R) = E(R),$ so $C(G(R)) = C(E(R))$ is central in $\widehat{E(R)} = \widehat{G(R)},$
which was established in \cite{KN}. On the other hand, there exist principal ideal domains $R$ for which $SL_n (R) \neq E(R)$ (cf. \cite{G}, \cite{I}), and then the analysis of $C(G(R))$ requires more effort. We only note that if $\Gamma = E(R)$ has finite index in $G(R)$, then the profinite topology on $\Gamma$ is induced by the profinite topology of $G(R)$, which implies that $\beta$ is injective, and therefore $C(\Gamma)$ is identified with a finite index subgroup of $C(G(R)).$
\vskip2mm
\noindent {\bf Acknowledgements.} The first-named author was partially supported by NSF grant DMS-0965758 and the Humboldt Foundation. The paper was finalized when both authors were visiting SFB 701 (Bielefeld), whose hospitality is gratefully acknowledged.
\end{document} |
\begin{document}
\title{The Manneville map: topological, metric and algorithmic entropy}
\author{Claudio Bonanno \\
\small Dipartimento di Matematica \\
\small Universit\`a di Pisa \\
\small via Buonarroti 2, 56100 Pisa (Italy) \\
\small email: bonanno@mail.dm.unipi.it}
\date{}
\maketitle
\begin{abstract}
We study the Manneville map $f(x)=x+x^z \pmod 1$, with $z>1$, from a
computational point of view, analysing the behaviour of the {\it
Algorithmic Information Content}. In particular, we consider a family
of piecewise linear maps that gives examples of algorithmic behaviour
ranging from the {\it fully} to the {\it mildly} chaotic, and show
that the Manneville map is a member of this family.
\end{abstract}
\section{Introduction} \label{sI}
The {\it Manneville map} was introduced by Manneville in \cite{Mann},
as an example of a discrete dissipative dynamical system with {\it
intermittency}, an alternation between long regular phases, called
{\it laminar}, and short irregular phases, called {\it
turbulent}. This behaviour had been observed in fluid dynamics
experiments and in chemical reactions. Manneville introduced his map,
defined on the interval $I=[0,1]$ by
\begin{equation}
f(x)=x+x^z (\hbox{mod } 1) \hskip 0.5cm z > 1, \label{mannmap}
\end{equation}
to have a simple model displaying this complicated behaviour. Figure
\ref{figmann} shows the Manneville map for $z=2$. His work has
attracted much attention, and the dynamics of the Manneville
map has been found in many other systems. We can find applications of
the Manneville map in dynamical approaches to DNA sequences
(\cite{Grigo1},\cite{Grigo2}) and ion channels (\cite{Toth}), and in
non-extensive thermodynamical problems (\cite{Grigo3}).
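A short numerical experiment (a sketch added here for illustration; the initial condition, iteration count and laminar threshold are arbitrary choices, not quantities from the text) makes the intermittency visible: for $z=2$ an orbit spends long laminar stretches drifting slowly away from the marginal fixed point $x=0$, interrupted by short turbulent bursts.

```python
# Numerical sketch of intermittency in the Manneville map
# f(x) = x + x^z (mod 1), here for z = 2.  The threshold 0.1 that
# separates "laminar" iterates (slow drift near the marginal fixed
# point x = 0) from "turbulent" ones is an arbitrary illustrative
# choice.

def manneville(x, z=2.0):
    y = x + x ** z
    return y - int(y)          # mod 1 (y is always non-negative)

def laminar_lengths(x0=0.3, z=2.0, n=100_000, threshold=0.1):
    """Lengths of maximal runs of consecutive iterates below threshold."""
    lengths, run, x = [], 0, x0
    for _ in range(n):
        x = manneville(x, z)
        if x < threshold:
            run += 1
        elif run > 0:
            lengths.append(run)
            run = 0
    return lengths

runs = laminar_lengths()
print(len(runs), max(runs))    # many laminar phases, some very long
```

Typical output shows a large number of laminar phases, with the longest ones produced when the orbit is reinjected very close to $0$.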
\begin{figure}
\caption{\it The Manneville map $f$ for $z=2$}
\label{figmann}
\end{figure}
The Manneville map has also been studied by Gaspard and Wang
(\cite{GW}), using the notion of {\it Algorithmic Information Content}
of a string, briefly explained below. Given a dynamical system, any
orbit of this system can be translated into a string $\sigma$ of
symbols by a partition of the phase space of the system (symbolic
dynamics). For any finite string $\sigma^n$ of length $n$, Chaitin
(\cite{Chaitin}) and Kolmogorov (\cite{Kolm}) introduced the notion of
{\it Algorithmic Information Content (AIC)} (or {\it Kolmogorov
complexity}) of the string, which we denote by $I_{AIC}(\sigma^n)$ and
which is defined as the binary length of the shortest program $p$ that
outputs the string on a universal machine $C$,
\begin{equation}
I_{AIC}(\sigma^n)= \min \left\{ |p| \ : \ C(p)= \sigma^n \right\}.
\label{AIC}
\end{equation}
It is then possible, using the symbolic dynamics, to define the notion
of Algorithmic Information Content for a finite orbit of a dynamical
system. This extension requires some care in the choice of the
partition. The first results were obtained by Brudno
(\cite{Brudno}) using open covers of the phase space. Another possible
approach, using computable partitions, is introduced in \cite{Licata}.
To generalize the notion of AIC to infinite strings, it is natural to
consider the mean of the AIC per symbol. We call {\it complexity} of an
infinite string $\sigma$ the upper limit of the AIC of the first $n$ symbols
of the string divided by $n$. Then, if we denote the complexity of an
infinite string by $K(\sigma)$, we have
\begin{equation}
K(\sigma)=\limsup_{n\to +\infty} \frac{I_{AIC}(\sigma^n)}{n},
\label{complessita}
\end{equation}
where $\sigma^n$ is the string given by the first $n$ digits of the
infinite string $\sigma$. Symbolic dynamics is again the tool to
define the complexity of an infinite orbit of a dynamical system.
Moreover, we can ask whether it is possible to define a notion of
information content for the dynamical system itself, without referring to any
particular orbit. To do this, we have to introduce a probability
measure $\mu$ on the phase space $X$ of the system and we can define
the {\it algorithmic entropy} $h_\mu$ of a dynamical system by
\begin{equation}
h_\mu = \int_X \ K(x) \ d\mu, \label{entropia}
\end{equation}
where $K(x)$ denotes the complexity of the orbit of the system with
initial condition $x \in X$.
There exist some results connecting the information content of a
string generated by a dynamical system and the Kolmogorov-Sinai
entropy $h^{KS}$ of the system.
First of all, it has been proved that for a compact phase space $X$ and
an invariant measure $\mu$, we have $h_\mu=h^{KS}_\mu$. Then in
particular, in a dynamical system with an ergodic invariant measure
$\mu$ with positive K-S entropy $h^{KS}_\mu$, the AIC of a string $n$
symbols long behaves like $I_{AIC}(\sigma^n) \sim h^{KS}_\mu n$ for
almost any initial condition with respect to the measure $\mu$
(\cite{Brudno}).
Instead, in a periodic dynamical system, we expect to find
$I_{AIC}(\sigma^n) = O(\log(n))$. Indeed, the shortest program that
outputs the string $\sigma^n$ would contain only information on the
period of the string and on its length.
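The AIC itself is not computable, but the output length of any lossless compressor is a computable upper bound on it (up to an additive constant). The following sketch, which uses zlib purely as an illustrative stand-in for the universal machine $C$ (an assumption of this illustration, not a construction from the text), contrasts the two regimes just described: the compressed size of a periodic string stays small as $n$ grows, while that of a random-looking string grows roughly linearly.

```python
import random
import zlib

# Compression as a computable upper bound on information content.
# zlib is only an illustrative stand-in for the universal machine C.

def compressed_size(s: str) -> int:
    """Size in bytes of the zlib-compressed string."""
    return len(zlib.compress(s.encode(), 9))

random.seed(0)
for n in (1_000, 10_000, 100_000):
    periodic = "01" * (n // 2)                              # period-2 string
    noisy = "".join(random.choice("01") for _ in range(n))  # ~incompressible
    print(n, compressed_size(periodic), compressed_size(noisy))
```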
There are also intermediate cases, in which the K-S entropy is null for
all the physically relevant invariant measures and the system is not
periodic. These systems, whose behaviour has been called {\it weak
chaos}, are an important challenge for research on dynamical
systems. Indeed, no information is given by the classical quantities,
such as the K-S entropy or the Lyapunov exponents, and in recent years
some generalized definitions of the entropy of a system have been
introduced to characterize the behaviour of such systems (see for
example \cite{Tsallis}). We believe that an approach to weakly chaotic
systems based on the order of growth of their AIC could be a powerful
way to classify these systems (no information is obtained from the
complexity and the algorithmic entropy defined as above).
The Manneville map with parameter $z>2$ is a non-periodic map with
null K-S entropy for all the physically relevant invariant measures,
so the analysis of the AIC of the strings generated by the map is
interesting. Gaspard and Wang (\cite{GW}) showed that the Manneville
map exhibits a behaviour that they called {\it sporadicity}. Namely,
the Algorithmic Information Content, $I_{AIC}(\sigma^n)$, of a string
$n$ symbols long, behaves on average like $n^\alpha (\log(n))^\beta$,
with either $0<\alpha <1$ or $\alpha =1$ and $\beta <0$.
In this paper, we give a formal proof of the results obtained by
Gaspard and Wang (\cite{GW}) for the Manneville map, giving more
precise estimates for the AIC of a string generated by the map. But
the most important generalization is that we find our results for the
Manneville map as a particular case of a general theorem concerning a
large family of maps $L$, defined in equation (\ref{mannmaplin}). This
family of maps exhibits an extremely wide range of behaviours (for the
AIC of the generated strings), and sporadicity is only one possible
case. Then we find a family of maps that can be classified with
respect to the order of the AIC of a ``typical'' (in the sense of
Lebesgue measure) initial condition. Moreover we study some
topological and metric properties of the maps $L$, useful to obtain a
prediction of the behaviour of the AIC of the related symbolic
dynamics (see Section \ref{sTca}).
In Section \ref{sTlm}, we introduce the family of maps $L$, and show
how the Manneville map $f$ can be thought of as one member of the
family.
In Section \ref{sTta}, we study the maps $L$ from the topological
point of view. In Section \ref{sTma} we show that the maps in the
class $L$ are equivalent to a Markov chain in a suitable sense. This
equivalence is extensively used in Subsection \ref{ssGr}, where we
present our results relative to the behaviour of the AIC of the
strings obtained from the maps $L$.
Finally, in Subsection \ref{ssRttlm}, we study the computational
aspect of the maps $L$ from a practical point of view; to this end, we
restrict ourselves to the Lebesgue measure $l$ on the interval $I$.
\section{The family of piecewise linear maps $L$} \label{sTlm}
In this section we present what we shall use as our formulation of the
Manneville map (\ref{mannmap}). In the following, we study a family of
piecewise linear maps $L$ on the interval $I=[0,1]$, which are
topologically equivalent to the Manneville map $f$. Using the maps
$L$, all the theorems have an easier interpretation and computations
can be done exactly. Moreover, all our results carry over through
metric isomorphism: we shall define on the interval $I$ two different
measures that make the topological equivalence between $L$ and $f$ a
metric isomorphism, so that all the results we find for the maps $L$
extend to the Manneville map $f$.
Let's start by defining the piecewise linear maps $L$ that we consider. We
use here the same approach as in \cite{Stefano}. A natural way to get a
partition of the interval $I=[0,1]$ from the Manneville map $f$ is the
following: let's call $x_0$ the point of $I$ such that $f(x_0)=1$ with
$x_0 \not= 0,1$, and $x_1$ the preimage of $x_0$ in the interval
$[0,x_0]$; then we define recursively $x_n$ by $\{ x_n \} =
f^{-1}(\{x_{n-1}\}) \cap [0,x_{n-1}]$. Then the sub-intervals $B_k = (x_k, x_{k-1}]$, for
$k \geq 1$, and $B_0 = (x_0,1]$ are a partition of $I$.
Let $\{ \epsilon_k \}_{k\in \mathbb N}$ be a strictly decreasing
sequence of positive real numbers converging to zero, with the
property that
\begin{equation}
\frac{\epsilon_{k-1}-\epsilon_k}{\epsilon_{k-2}-\epsilon_{k-1}} < 1
\hskip 0.5cm \forall \ k \geq 1. \label{propdieps}
\end{equation}
The piecewise linear maps that we consider are defined by
\begin{equation}
L(x)=\left\{
\begin{array}{ll}
\frac{\epsilon_{k-2}-\epsilon_{k-1}}{\epsilon_{k-1}-\epsilon_k} (x -
\epsilon_k) + \epsilon_{k-1} & \quad \epsilon_k < x \leq
\epsilon_{k-1}, \quad k \geq 1 \\[0.3cm]
\frac{x-\epsilon_0}{1-\epsilon_0} & \quad \epsilon_0 < x \leq 1
\\[0.3cm]
0 & \quad x=0
\end{array}
\right. \label{mannmaplin}
\end{equation}
where we define $\epsilon_{-1} = 1$. These piecewise linear maps $L$
clearly depend on the definition of the sequence $\{ \epsilon_k
\}_{k\in \mathbb N}$, but a particular choice for this sequence is important
only for Section \ref{sTca}. For the moment we consider any possible
sequence, with the properties specified above. Let's define the
sub-intervals $A_i = (\epsilon_i, \epsilon_{i-1}]$ for $i \geq 1$, and
$A_0=(\epsilon_0,1]$. These intervals form a partition of the interval
$I$ (up to the point $0$). We prove the following
\begin{teorema}
Any piecewise linear map $L$ defined as in equation (\ref{mannmaplin})
is topologically equivalent to the Manneville map $f$. \label{teoeqlin}
\end{teorema}
\noindent {\bf Proof.} We have to find a homeomorphism $h :I \to I$
such that $h(f(x))=L(h(x))$ for each $x \in I$. To find such a
homeomorphism we use the partitions $(B_j)$ and $(A_j)$. We define
$h(x_n)=\epsilon_n$ for each $n \in \mathbb N$ and $h(0)=0$, $h(1)=1$, and
such that $h(B_j)=A_j$, for all $j \geq 0$. To define the
homeomorphism $h$ we use a dense set of $I$, define $h$ on this set
and then simply extend the definition of $h$ to the whole interval
$I$ by continuity.
Let's consider a sub-interval $B_k$. By definition of the Manneville
map $f$, we have that $f(B_k)=B_{k-1}$, for $k\geq 1$, and
$f(B_0)=I$. Then it follows that $f^k(B_k)=B_0$ and
$f^{k+1}(B_k)=I$. So, within each $B_k$ we can find sub-intervals
$B_{kj}$, with $k,j \in \mathbb N$, defined by $f^{k+1}(B_{kj})=B_j$. These
sub-intervals form a partition of each $B_k$. We can continue this
partition of the intervals $B_k$, defining, by the same rule,
sub-intervals $B_{kji}$ that form a partition of $B_{kj}$. We write
then any sub-interval of the form $B_{k_1 k_2 \dots k_n}$, with $k_i
\in \mathbb N$ for each $i=1,\dots ,n$, as $B_{k_1 k_2 \dots k_n}=(x_{k_1 k_2
\dots k_n}, x_{k_1 k_2 \dots (k_n-1)}]$. The set $\left\{ x_{k_1 k_2
\dots k_n}, \ n\in \mathbb N, \ k_i \in \mathbb N \right\}$ is a countable dense set
of the interval $I$. Analogously, we can define a set of points
$\left\{ \epsilon_{k_1 k_2 \dots k_n}, \ n\in \mathbb N, \ k_i \in \mathbb N
\right\}$, for the map $L$, with the same property as $x_{k_1
k_2 \dots k_n}$. We define then $h(x_{k_1 k_2 \dots
k_n})=\epsilon_{k_1 k_2 \dots k_n}$ for each $n\in \mathbb N$, and extend the
function $h$ continuously to the whole interval $I$. We have thus
obtained a continuous function $h$ such that $h(B_{k_1 k_2 \dots
k_n})=A_{k_1 k_2 \dots k_n}$, for each $n\in \mathbb N$, where the
sub-intervals $A_{k_1 k_2 \dots k_n}$ are defined as $A_{k_1 k_2 \dots
k_n}=(\epsilon_{k_1 k_2 \dots k_n}, \epsilon_{k_1 k_2 \dots
(k_n-1)}]$.
The injectivity of $h$ follows by contradiction. Suppose that there are
two points $x < y$ in $I$ such that $h(x)=h(y)=z$. By the density of
the set $\left\{ x_{k_1 k_2 \dots k_n}, \ n\in \mathbb N, \ k_i \in \mathbb N
\right\}$, we can find a point $\bar x_{k_1 k_2 \dots k_{\bar n}} \in
(x,y)$. This implies that there exists an $\bar n$ such that $x \in
B_{k_1 k_2 \dots (k_{\bar n}-1)}$ and $y \in B_{k_1 k_2 \dots k_{\bar
n}}$. Then we have $h(x)\not= h(y)$, a contradiction. The surjectivity
of $h$ follows immediately from the definition.
Then the inverse function $h^{-1}$ exists and $h$ is a homeomorphism
because it is a continuous invertible function from a compact to a
Hausdorff space. \qed
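The mechanism used in the proof, namely that $L$ maps each $A_k$ onto $A_{k-1}$ and $A_0$ onto $(0,1]$, is easy to check numerically. The following sketch is our illustration for the hypothetical choice $\epsilon_k = 2^{-(k+1)}$, which satisfies property (\ref{propdieps}) since the gaps $\epsilon_{k-1}-\epsilon_k = 2^{-(k+1)}$ are strictly decreasing:

```python
# Illustration (not part of the construction): the hypothetical choice
# eps_k = 2^{-(k+1)}, whose gaps eps_{k-1} - eps_k = 2^{-(k+1)} are strictly
# decreasing, so property (propdieps) holds with ratio 1/2 < 1.
def eps(k):
    return 1.0 if k == -1 else 2.0 ** (-(k + 1))

def L(x, kmax=60):
    """The piecewise linear map of eq. (mannmaplin) for this choice of (eps_k)."""
    if x == 0.0:
        return 0.0
    if x > eps(0):                      # branch on A_0 = (eps_0, 1]
        return (x - eps(0)) / (1.0 - eps(0))
    for k in range(1, kmax):            # branch on A_k = (eps_k, eps_{k-1}]
        if eps(k) < x <= eps(k - 1):
            slope = (eps(k - 2) - eps(k - 1)) / (eps(k - 1) - eps(k))
            return slope * (x - eps(k)) + eps(k - 1)
    raise ValueError("x is closer to 0 than the truncation kmax allows")

# Check that L maps each A_k into A_{k-1}, the fact used in the proof above.
for k in range(1, 20):
    x = 0.5 * (eps(k) + eps(k - 1))     # the midpoint of A_k
    assert eps(k - 1) < L(x) <= eps(k - 2)
```

With this particular choice every branch of $L$ has slope $2$.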
\vskip 0.5cm The topological equivalence between $L$ and $f$ can be
used in particular to obtain a metric isomorphism. If we have a
measure $\mu$ on the interval $(I,{\cal B},L)$, where ${\cal B}$ is
the Borel $\sigma$-algebra, then the homeomorphism $h$ carries $\mu$
into another measure $\nu=h^* \mu$ on $(I,{\cal B},f)$, and with
respect to these measures $h$ is a metric isomorphism.
\begin{teorema}[Radon-Nikodym]
Given two measures $\mu$ and $\nu$ on $(I,{\cal B})$, such that
$(I,{\cal B},\nu)$ is $\sigma$-finite, $\mu << \nu$ if and only if
there exists a real function $f$ on $I$, integrable with respect to
$\nu$ on all sets $B \in {\cal B}$ such that $\nu (B) < +\infty$,
satisfying the following condition for every $B \in {\cal B}$:
\[
\mu(B) = \int_B f \ d\nu
\]
\label{rdteo}
\end{teorema}
\begin{teorema}
If the measure $\mu$ on $(I,{\cal B},L)$ is absolutely continuous with
respect to the Lebesgue measure $l$ ($\mu << l$), then also the
measure $\nu = h^* \mu$ on $(I,{\cal B},f)$ is absolutely continuous
with respect to the Lebesgue measure $l$, and vice-versa. \label{miseq}
\end{teorema}
\noindent {\bf Proof.} We can apply Theorem \ref{rdteo} to the
measures $\mu$ and $l$. Then we obtain a real function $f_\mu$ on $I$,
integrable with respect to $l$ and such that for all $B \in {\cal B}$
\[
\mu(B)= \int_B f_\mu \ dl.
\]
By definition of the measure $\nu$, we have that $\nu(B) = \mu(h(B))$
for all $B \in {\cal B}$, then
\[
\nu(B)= \int_{h(B)} f_\mu \ dl = \int_B (f_\mu \circ h)\, h' \ dl,
\]
where the derivative $h'$ exists almost everywhere with respect to
$l$, $h$ being a monotone continuous function. We have thus found a
function $f_\nu = (f_\mu \circ h)\, h'$, which satisfies the
hypotheses of the Radon-Nikodym Theorem. Then $\nu << l$.
The converse is proved in the same way. \qed
\vskip 0.5cm At this point, thanks to Theorem \ref{miseq}, we can use
our piecewise linear maps $L$ in all our applications of topological
and metric methods.
\section{The topological approach} \label{sTta}
In this section we start a procedure of equivalences of the maps $L$
with well-known maps, that can be used to establish the results of
Section \ref{sTca}. The first step is to study the relationship
between the maps $L$ and a {\it sub-shift of finite type}.
\subsection{Symbolic dynamics} \label{ssSd}
We briefly recall the definition of a {\it sub-shift of finite
type}. Let's consider a finite set of symbols $S= \{ 0,1,2,\dots,N\}$,
with $N \geq 1$, and build a set $\Sigma^N = S^{\mathbb N}$ as the product of
countable factors $S$. On $\Sigma^N$ one defines a map $T$, called
the {\it shift map}, which acts on the elements of $\Sigma^N$ by
shifting the indices forward. Namely, if $\sigma = (\sigma_0 \sigma_1
\dots \sigma_n \dots) \in \Sigma^N$, with $\sigma_i \in \{ 0,\dots,N
\}$ for all $i \in \mathbb N$, then $T(\sigma)= (\sigma_1 \dots \sigma_n
\dots)$. The set $\Sigma^N$ is endowed with a metric $d$ defined by
\[
d(\sigma,\sigma')= \sum_{n=0}^{\infty} \ \frac{1-\delta_{\sigma_n
\sigma'_n}}{2^n},
\]
where $\delta_{ij}$ is the Kronecker symbol; this metric makes $\Sigma^N$ a compact
space. A {\it sub-shift of finite type} is obtained from the set
$(\Sigma^N,T)$, by means of a $(N+1)\times (N+1)$ matrix $M=(m_{ij})$, called
the {\it transition matrix}, such that $m_{ij} \in \{0,1 \}$ for all
$i,j=0,\dots, N$. We define a subset $\Sigma^N_M$ of $\Sigma^N$, by
\[
\Sigma^N_M = \left\{ \sigma \in \Sigma^N \ | \ m_{\sigma_i
\sigma_{i+1}} =1 \; \forall \; i \in \mathbb N \right\},
\]
then a {\it sub-shift of finite type} is simply the compact metric
space $\Sigma^N_M$ together with the restriction $T_M$ of the shift map (see \cite{KH}).
We have
\begin{teorema}
For any $N\geq 1$, there exists a particular transition matrix $M$
such that any map $L$ of the family of piecewise linear maps
(\ref{mannmaplin}) is a factor of the sub-shift of finite type
$(\Sigma^N_M,T_M)$. This means that there exists a surjective
continuous map $\pi: \Sigma^N_M \to I$ such that $\pi \circ T_M = L
\circ \pi$. \label{eqdinsimb}
\end{teorema}
\noindent {\bf Proof.} Let a $(N+1)\times (N+1)$ matrix $M$ be defined
by
\begin{equation}
M= \left(
\begin{array}{ccccccc}
1 & 1 & 1 & \cdots & 1 & 1 & 1\\
1 & 0 & 0 & \cdots & 0 & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0 & 0\\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\
0 & 0 & 0 & \cdots & 0 & 1 & 1
\end{array}
\right), \label{transmatrix}
\end{equation}
\noindent we show that this is the matrix we need.
Let's consider a partition of the interval $I$ defined by using the
partition $A_i =(\epsilon_i , \epsilon_{i-1}]$, introduced in Section
\ref{sTlm}. From this partition we obtain a partition of the interval
$I$, by $B_i = A_i$ for $i=0,\dots, N-1$, and $B_N = \{ 0 \} \cup
\left( \cup_{j=N}^{\infty} A_j \right)= [0,\epsilon_{N-1})$. From this
partition and by using the properties of any of the maps $L$, we obtain
an $(N+1)$-ary representation of the interval $I$.
The $(N+1)$-ary representation of a point $x\in I$ is given by a
string $\sigma$ such that $L^n(x) \in B_{\sigma_n}$ with $\sigma_n
=0,\dots,N$, for any $n\in \mathbb N$. This representation is nothing
but a map $\pi : \Sigma^N \to I$. Hence we just need to show that
this map $\pi$ is continuous and surjective, and satisfies the
commutation rule with $L$ and $T$.
First of all we notice that our map $\pi$ is not defined on the whole
space $\Sigma^N$, because of the restrictions given by the particular
form of the map $L$. If we want to reduce the space $\Sigma^N$, we
have to consider a transition matrix $M$, and we use the matrix $M$
defined in equation (\ref{transmatrix}). It is easy to verify that for
any $\sigma \in \Sigma^N_M$ there is a point $x\in I$ such that
$\pi(\sigma)=x$. We now show that $\pi : \Sigma^N_M \to I$ is a
surjective continuous map such that $\pi \circ T_M = L \circ \pi$.
We have then a {\it semi-conjugacy} between our piecewise linear maps
$L$ and symbolic dynamics. We do not have a conjugacy because of the
lack of injectivity of the map $\pi$. Indeed, as in any $n$-ary
representation of the real numbers in the interval $I=[0,1]$, there is
a countable set $X$ of points that are images of two
sequences. Moreover, in our case, these points can be characterized by
the property that for any $x$ in this set there exists an $m \in \mathbb N$
such that $L^m(x)=1$.
Commutation. The commutation rule $\pi \circ T_M = L \circ \pi$ is an
immediate consequence of the definition of $\pi$.
Surjectivity. It follows immediately from the definition of the map $\pi$.
Continuity. We have to prove that given any $\epsilon >0$ there exists
a $\delta >0$ such that if $d(\sigma^1,\sigma^2) <\delta$ then
$|\pi(\sigma^1)-\pi(\sigma^2)| <\epsilon$. But from the definition of
the metric $d$ on the space $\Sigma^N_M$, we have that
$d(\sigma^1,\sigma^2) <\delta$ is equivalent to: there exists a $K>0$
such that $\sigma^1_j=\sigma^2_j$ for all $j=0,\dots,K$. So given any
$\epsilon>0$ we have to find a $K>0$ such that $\sigma^1_j=\sigma^2_j$
for all $j=0,\dots,K$ implies $|\pi(\sigma^1)-\pi(\sigma^2)|
<\epsilon$. From the definition of the map $\pi$ it is clear that if
$\sigma^1_j=\sigma^2_j$ for all $j=0,\dots,K$ for any $K>0$, then
$L^j(\pi(\sigma^1))$ and $L^j(\pi(\sigma^2))$ belong to the same
subset $B_{\sigma^i_j}$ of the partition $(B_i)$ for all
$j=0,\dots,K$. Then, if we consider a partition of the subset
$B_{\sigma^i_0}$, given by $(B_{\sigma^i_0})_{j_1,j_2,\dots,j_n}$,
with $j_i=0,\dots,N$ for all $i$, where
$L^r((B_{\sigma^i_0})_{j_1,j_2,\dots,j_n}) = B_{j_r}$ for all
$r=1,\dots,n$, we have that
$\mathrm{diam}((B_{\sigma^i_0})_{j_1,j_2,\dots,j_n}) \to 0$ as $n \to +\infty$,
thanks to the expansivity of the map $L$ (its slopes are larger than
$1$ by property (\ref{propdieps})). This argument gives the
continuity of the map $\pi$.\qed
\vskip 0.5cm
We can extend Theorem \ref{eqdinsimb} to the case of $N=\infty$. The
space $\Sigma=\Sigma^{\infty}$ is defined in the same way, but we
cannot extend the metric $d$, defined as before, and we can only
define a topology on $\Sigma$, where the open balls are the same as
before. In this case $\Sigma$ is no longer a compact space. The
transition matrix $M$ is $\infty \times \infty$ dimensional and it is
defined by
\begin{equation}
M= \left(
\begin{array}{cccccccc}
1 & 1 & 1 & \cdots & 1 & 1 & 1 & \cdots \\
1 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & \cdots & 0 & 0 & 0 & \cdots \\
0 & 0 & 1 & \cdots & 0 & 0 & 0 & \cdots \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots
\end{array}
\right). \label{inftransmatrix}
\end{equation}
On the space $\Sigma_M$, we then define a map $T_M$, given by the
forward shift. We can then prove
\begin{teorema}
Any of our maps $L$ on $I$ is topologically conjugate to the
dynamical system given by $(\Sigma_M,T_M)$ built on countable symbols
and transition matrix $M$ given by equation (\ref{inftransmatrix}).
\label{eqinfdinsimb}
\end{teorema}
\noindent {\bf Proof.} The proof is the same as in Theorem
\ref{eqdinsimb}. The difference is that now we obtain a {\it
conjugacy} with the set $\Sigma_M$, thanks to the fact that the matrix
$M$, being infinite, exactly simulates the dynamics of the piecewise
linear maps $L$ from a topological point of view. \qed
\subsection{Topological and Kolmogorov-Sinai entropy} \label{ssTame}
The semi-conjugacy of the maps $L$ with the symbolic dynamical system
on finite symbols is useful to compute some topological and metric
quantities of our maps. Indeed the lack of injectivity of the map $\pi$
is confined to a set that does not affect the dynamical richness of the
system. In this subsection we compute the {\it topological entropy}
and the {\it Kolmogorov-Sinai entropy} for some measures (see
\cite{KH}).
From the theory of dynamical systems, we know that the following
theorems hold (see \cite{KH}):
\begin{teorema}
The topological entropy is invariant under topological
equivalence. \label{invarianzaet}
\end{teorema}
\begin{teorema}
The topological entropy $h_{top}$ of a sub-shift of finite type is
$\log \lambda_{\max}$, where $\lambda_{\max}$ is the largest
eigenvalue of the transition matrix. \label{tedinsimb}
\end{teorema}
We thus just have to compute the eigenvalues of the transition matrix
$M$ on finite symbols defined in equation (\ref{transmatrix});
thanks to the previous theorems, the topological entropy
$h_{top} (L)$ of our linear maps $L$ is then given by $\log
\lambda_{\max}$. For any $N$, we find that $\lambda_{\max} =2$, hence
$h_{top} (L)= \log 2$, for any possible sequence $(\epsilon_k)$
defined as before.
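The value $\lambda_{\max}=2$ can also be cross-checked numerically. The sketch below is our illustration: it builds the matrix of equation (\ref{transmatrix}) and estimates its largest eigenvalue by plain power iteration, with no external libraries.

```python
# Sketch (ours): the largest eigenvalue of the transition matrix M of
# eq. (transmatrix) equals 2 for every N, checked by power iteration.
def transition_matrix(N):
    M = [[0] * (N + 1) for _ in range(N + 1)]
    M[0] = [1] * (N + 1)                # row 0: all ones
    for i in range(1, N):               # rows 1..N-1: sub-diagonal entry
        M[i][i - 1] = 1
    M[N][N - 1] = M[N][N] = 1           # last row: 0 ... 0 1 1
    return M

def largest_eigenvalue(M, iters=500):
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

for N in range(1, 8):
    assert abs(largest_eigenvalue(transition_matrix(N)) - 2.0) < 1e-9
```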
At this point we start to consider measures on the space
$\Sigma^N_M$. We have to introduce first a $\sigma$-algebra ${\cal
C}$. We take as a basis of $\cal C$ the sets of the form
\begin{equation}
C^n_r = \left\{ \sigma \in \Sigma^N_M \ | \ \sigma_i = r_i \; \forall \; i =0,\dots, n \right\}, \label{cilindri}
\end{equation}
for any $n \in \mathbb N$ and $r \in S^{n+1}$. These sets are called {\it
cylinders}. At this point we use a classical result of dynamical
systems (see \cite{Mane}):
\begin{teorema}[Variational Principle]
Given a continuous map $f: X \to X$ of a compact metric space $X$, the
topological entropy $h_{top}(f)$ is the supremum of the
K-S entropies $h_{\nu} (f)$ over the set of all the
$f$-invariant probability measures $\nu$ on $X$.
\end{teorema}
Then we look for the probability measures $\nu$ on $\Sigma^N_M$ with
K-S entropy $h^{KS}_\nu$ equal to $\log 2$. For sub-shifts of finite
type, a particular class of $T_M$-invariant measures is defined by a
{\it stochastic matrix} associated to the sub-shift. These measures
are called {\it Markov measures}, and among them there is a measure
that maximizes the K-S entropy. This measure is called the {\it Parry
measure}, and is denoted by $\nu_\Pi$ (see \cite{KH}). It is
defined by a particular choice of the stochastic matrix $\Pi$. In
words, the Parry measure represents the asymptotic distribution of the
periodic orbits, that is, if $C$ is a cylinder in $\Sigma^N_M$, we have
that
\[
\nu_\Pi (C) = \lim_{n \to \infty} \frac{\#\{\hbox{periodic orbits of
period $n$ contained in $C$}\}}{\#\{\hbox{periodic orbits of period
$n$}\}}.
\]
We compute the Parry measures for some $N$, then we define on $I$ the
induced measure $\mu_\Pi=\pi^* \nu_\Pi$, to obtain an $L$-invariant
probability measure on $I$ of K-S entropy
$h^{KS}_{\mu_\Pi}(L)= \log 2$.
For $N=1$, we have that $\nu_\Pi (C^0_0) = \nu_\Pi (C^0_1) =
\frac{1}{2}$. So $\mu_\Pi (B_0) = \mu_\Pi (B_1) = \frac{1}{2}$.
For any $N$, we obtain
\[
\begin{array}{ll}
\mu_\Pi (B_i) & = \frac{1}{2^{i+1}} \qquad i=0,\dots,N-1 \\[0.3cm]
\mu_\Pi (B_N) & = \frac{1}{2^{N}}
\end{array}
\]
So in the limit $N \to \infty$ (where we directly have topological
equivalence between $(I,L)$ and $(\Sigma_M,T_M)$), we obtain $\mu_\Pi
(B_i) = \mu_\Pi (A_i) = \frac{1}{2^{i+1}}$. We have thus found a
countable family of $L$-invariant measures on $I$ with K-S entropy
$\log 2$.
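These values can be obtained in closed form: the Parry measure of the 1-cylinder $[i]$ is $u_i v_i/\sum_j u_j v_j$, where $u$ and $v$ are the left and right eigenvectors of $M$ for the eigenvalue $2$. The sketch below is our verification, in exact rational arithmetic; it checks the eigenvector formulas inside the code, and confirms that $\mu_\Pi(B_N)=2^{-N}$, the value forced by normalization.

```python
from fractions import Fraction

# Sketch (ours): Parry measures of the 1-cylinders via the Perron
# eigenvectors of M, nu_Pi([i]) = u_i v_i / sum_j u_j v_j, with right
# eigenvector v = (2^{N-1}, ..., 2, 1, 1) and left eigenvector u = (1, ..., 1).
def parry_cylinder_measures(N):
    M = [[0] * (N + 1) for _ in range(N + 1)]
    M[0] = [1] * (N + 1)
    for i in range(1, N):
        M[i][i - 1] = 1
    M[N][N - 1] = M[N][N] = 1
    v = [Fraction(2) ** (N - 1 - i) for i in range(N)] + [Fraction(1)]
    # Verify M v = 2 v, and u M = 2 u (every column of M sums to 2).
    for i in range(N + 1):
        assert sum(M[i][j] * v[j] for j in range(N + 1)) == 2 * v[i]
        assert sum(M[k][i] for k in range(N + 1)) == 2
    total = sum(v)                       # u_j v_j = v_j since u_j = 1
    return [vi / total for vi in v]

for N in range(1, 10):
    p = parry_cylinder_measures(N)
    assert sum(p) == 1
    assert p[:N] == [Fraction(1, 2 ** (i + 1)) for i in range(N)]
    assert p[N] == Fraction(1, 2 ** N)   # 2^{-N}, forced by normalization
```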
\section{The metric approach} \label{sTma}
We now consider what happens if we start from our interval
$I$, endowed with a probability measure $\mu$ on the Borel
$\sigma$-algebra $\cal B$, and with the dynamics induced by the maps
$L$. In particular we show that we obtain an equivalence with a Markov
chain that will be useful for the computational approach (Section
\ref{sTca}).
From our space $(I,{\cal B},L,\mu)$, where we remark that we haven't
supposed $\mu$ to be $L$-invariant, we have a metric isomorphism with
the space $(\Sigma_M,{\cal C},T_M)$ on countable symbols, with the
probability measure $P=\pi^* \mu$ induced by the homeomorphism $\pi$
found in Theorem \ref{eqinfdinsimb}.
At this point we use some notions and results introduced by Parry
\cite{Parry}.
\begin{definizione}
A {\it non-atomic stochastic process} is $(X,{\cal A},T,m)$ where
$X=\{ x= (x_0, x_1, \dots) \; | \; x_i \in \mathbb N \ \forall \, i \in \mathbb N \}$,
${\cal A}$ is the $\sigma$-algebra generated by the cylinders $C^n_r$,
$m$ is a non-atomic probability measure on ${\cal A}$, and $T$ is the
forward shift on $X$. In the theory of stochastic processes the
transition matrix $M$ on $\Sigma_M$ is called a {\it structure matrix}.
\end{definizione}
\begin{definizione}
A stochastic process is called {\it transitive of order $k$} if for
all $(x_1, \dots, x_k)$ and $(y_1,\dots,y_k)$ with $m(x_1, \dots, x_k)
>0$ and $m(y_1, \dots, y_k) >0$, there exists a finite
$(z_1,\dots,z_n)$ such that
\[
m(x_1, \dots, x_k;z_1, \dots, z_n;y_1, \dots, y_k) >0.
\]
When $k=1$, a stochastic process is simply called {\it transitive}.
\end{definizione}
\begin{definizione}
A stochastic process is said to be {\it intrinsically Markovian of
order $k$} if $m(x_1,\dots,x_n)>0$ and $m(x_{n-k+1},\dots,x_{n+1})>0$
imply
\[
m(x_1,\dots,x_n,x_{n+1})>0.
\]
When $k=1$ it is simply called {\it intrinsically Markovian}.
\end{definizione}
We can easily prove
\begin{prop}
Our space $(\Sigma_M,{\cal C},T_M,P)$ is a non-atomic stochastic
process, which is transitive and intrinsically Markovian. \label{prop1}
\end{prop}
\begin{definizione}
Given a stochastic process $(X,{\cal A},T,m)$, a measure $p$ makes the
process $(X,{\cal A},T,p)$ {\it compatible} with the original when
$p(C^n_r)>0$ if and only if $m(C^n_r)>0$, for any cylinder $C^n_r$.
\end{definizione}
At this point we use the notion of non-atomic stochastic processes to
obtain a compatibility between the maps $L$ and a Markov chain,
through the symbolic dynamical system on countable symbols. Before
giving the theorems in this direction, we briefly recall the theory of
Markov chains (see \cite{Chung}).
\begin{definizione}
Given a probability space $(\Lambda, {\cal F}, P)$ and a countable
space $Y$ with the discrete $\sigma$-algebra, a {\it Markov chain} is
a sequence $(Z_n)_{n\in \mathbb N}$ of random variables $Z_n : \Lambda \to Y$ such that:
i) If, given $y_0,\dots,y_{n+1} \in Y$, we have $P[Z_n=y_n,
Z_{n-1}=y_{n-1},\dots,Z_0=y_0]>0$, then
\[
P[Z_{n+1}=y_{n+1}\; | \; Z_n=y_n,\dots,Z_0=y_0] = P[Z_{n+1}=y_{n+1}\; | \; Z_n=y_n],
\]
ii) If $x,y \in Y$ and $m,n \in \mathbb N$ are such that $P[Z_m=x]>0$ and
$P[Z_n=x]>0$, then
\[
P[Z_{m+1}=y \; | \; Z_m=x]=P[Z_{n+1}=y \; | \; Z_n=x].
\]
In particular the numbers $p(x,y)=P[Z_{n+1}=y \; | \; Z_n=x]$ form a
matrix $\Pi = (p(x,y))_{x,y \in Y}$, called the {\it transition
matrix}. Moreover the probability measure on $Y$ defined by $\nu(y) =
P[Z_0=y]$ is called the {\it initial distribution}.
\end{definizione}
The transition matrix $\Pi$ is a {\it stochastic matrix}, that is
$p(x,y)\geq 0$ and $\sum_{y \in Y} p(x,y) =1$ for all $x \in Y$.
\begin{teorema}
Given any countable space $Y$, a transition matrix $\Pi$ and an
initial distribution $\nu$, it is possible to construct a probability
space $(\Lambda,{\cal F},P)$ and a sequence of random variables
$Z_n:\Lambda \to Y$, such that the constructed Markov chain has $\Pi$
as transition matrix and $\nu$ as initial distribution. \label{yonescotulcea}
\end{teorema}
\noindent {\bf Proof.} For the proof of the theorem see Chung
\cite{Chung}. We simply describe the constructed probability space
$(\Lambda,{\cal F},P)$. The space $\Lambda$ is $Y^{\mathbb N}$ and is called
the {\it realizations space}, the $\sigma$-algebra ${\cal F}$ is given
by the cylinders defined as in equation (\ref{cilindri}), and the
probability $P$ is defined on the cylinders by
\[
P(C^n_r)=\nu(r_0) p(r_0, r_1) p(r_1, r_2) \dots p(r_{n-1}, r_n).
\]
The random variables are defined as the projections of $\Lambda$ on $Y$.
\qed
\begin{teorema}
Given any intrinsically Markovian, transitive sto\-cha\-stic process
$(X,{\cal A},T,m)$ and a stochastic matrix $\Pi$ such that $p(i,j)>0$
if and only if $m_{ij}=1$, for the structure matrix $M$ of the
process, there is a probability $p$ on $X$ such that $(X,{\cal
A},T,p)$ is compatible with $(X,{\cal A},T,m)$, and it is a Markov
chain with $\Pi$ as transition matrix. \label{compatibili}
\end{teorema}
\noindent {\bf Proof.}
We just have to apply Theorem \ref{yonescotulcea} to the matrix $\Pi$
and to an initial distribution $\nu$, since $X$ is already in the form of
the realizations space. The probability $p$ is then the probability
$P$ defined as above.
\qed
\begin{cor}
Our space $(\Sigma_M,{\cal C},T_M,P)$ is com\-pa\-ti\-ble with a Mar\-kov
cha\-in. \label{cor1}
\end{cor}
\noindent {\bf Proof.} The corollary follows from Theorem
\ref{compatibili} and Proposition \ref{prop1}. We just have to choose
a stochastic matrix that satisfies the hypothesis, that is $p(i,j) >0
$ if and only if $m_{ij}=1$ for the structure matrix defined as in
equation (\ref{inftransmatrix}). The measure $P$, restricted to the
cylinders of length one, can be used as initial distribution.
\qed
\vskip 0.5cm We have thus completed our equivalence, in the sense of
Corollary \ref{cor1}, between the maps $L$ defined on the space
$(I,{\cal B},\mu)$, for any probability measure $\mu$, and a Markov
chain, which is specified simply by a stochastic matrix $\Pi$ and an
initial distribution $\nu$.
\section{The algorithmic entropy} \label{sTca}
The results of Section \ref{sTma} are useful for the computational
approach to the Manneville map $f$ defined by equation
(\ref{mannmap}). As remarked before, in this section too we
restrict ourselves to the AIC for the maps $L$, which are equivalent
to the Manneville map $f$ in the sense described above. Using this
restriction it is possible to perform explicit computations which, by
Theorems \ref{teoeqlin} and \ref{miseq}, can be extended to the
Manneville map $f$. The investigation on the maps $L$ that we present
in this section is meant to be a generalization of the work of Gaspard
and Wang on the Manneville map (see \cite{GW}).
Given our dynamical system $(I,{\cal B},L,\mu)$, where $\mu$ is any
probability measure on the Borel $\sigma$-algebra of the interval $I$,
we translate the orbit of a point $x \in I$ into a string
$\sigma=\pi^{-1}(x) \in \Sigma_M$, with the transition matrix $M$ given in
equation (\ref{inftransmatrix}), and study the AIC of the
string.
\subsection{General results} \label{ssGr}
In Section \ref{sTma}, we proved that for any map $L$ our dynamical
system $(I,{\cal B},L,\mu)$ is equivalent, in the sense specified above,
to a Markov chain with a stochastic matrix $\Pi$ defined by means of
the transition matrix $M$ of equation (\ref{inftransmatrix}), and a
given initial distribution that can be considered to be the measure
$\mu$ itself (see Corollary \ref{cor1}). We remark that it is not
necessary to choose the measure $\mu$ on $I$ to be $L$-invariant. Now,
we want to relate the dynamics of the Markov chain with our dynamical
system. A natural choice for the stochastic matrix $\Pi$ is
\begin{equation}
\Pi= \left(
\begin{array}{cccccccc}
p_0 & p_1 & p_2 & \cdots & p_n & p_{n+1} & p_{n+2} & \cdots \\
1 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & \cdots & 0 & 0 & 0 & \cdots \\
0 & 0 & 1 & \cdots & 0 & 0 & 0 & \cdots \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots
\end{array}
\right), \label{stochmatrix}
\end{equation}
where $p_i$ are the probabilities of transition from the sub-interval
$A_0$ to the sub-interval $A_i$, and in terms of the measure $\mu$ are
given by
\begin{equation}
p_i = \frac{\mu(A_0 \cap L^{-1} (A_i))}{\mu (A_0)}. \label{probtrans}
\end{equation}
The dependence of $\Pi$ on the particular sequence
$(\epsilon_k)$ used to define the map $L$, and on the probability
measure $\mu$, is evident.
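For the Lebesgue measure, equation (\ref{probtrans}) can be evaluated in closed form: since the branch of $L$ on $A_0$ is affine onto $(0,1]$, one finds $p_i = \epsilon_{i-1}-\epsilon_i$ (our observation, not stated in general above). The sketch below checks this for the hypothetical choice $\epsilon_k = 2^{-(k+1)}$, for which $p_i = 2^{-(i+1)}$.

```python
# Sketch (ours): for mu = Lebesgue measure, eq. (probtrans) reduces to
# p_i = eps_{i-1} - eps_i, because the branch of L on A_0 = (eps_0, 1] is
# affine onto (0, 1].  Checked here for the hypothetical eps_k = 2^{-(k+1)}.
def eps(k):
    return 1.0 if k == -1 else 2.0 ** (-(k + 1))

def p(i):
    # A_0 ∩ L^{-1}(A_i): pull A_i = (eps_i, eps_{i-1}] back through the
    # affine branch y = (x - eps_0)/(1 - eps_0), i.e. x = eps_0 + y (1 - eps_0).
    a = eps(0) + eps(i) * (1.0 - eps(0))
    b = eps(0) + eps(i - 1) * (1.0 - eps(0))
    return (b - a) / (1.0 - eps(0))      # divided by l(A_0) = 1 - eps_0

for i in range(30):
    assert abs(p(i) - (eps(i - 1) - eps(i))) < 1e-15
    assert abs(p(i) - 2.0 ** (-(i + 1))) < 1e-15
```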
Gaspard and Wang \cite{GW} have shown that the AIC of a string
obtained from our dynamical system can be estimated by means of the
theory of {\it recurrent events} applied to Markov chains
\cite{Feller2,Feller}. From the theory of Markov chains, we
have that our stochastic matrix $\Pi$ is {\it irreducible}, and that
the state $A_0$ is {\it persistent}, in the sense that
\[
p[Z_m = 0 \hbox{ for some } m >n | Z_n = 0] =1 \quad \forall \ n,
\]
where $p$ is the probability measure on $(\Sigma_M,{\cal C},T_M)$ that
makes it a Markov chain with $(Z_n)$ as random variables from
$\Sigma_M$ to $\mathbb N$ (see Theorems \ref{yonescotulcea},
\ref{compatibili}).
If we consider as recurrent event ${\cal E}$ the passage through the
sub-interval $A_0$, we have that ${\cal E}$ is {\it certain} and that
the {\it mean recurrence time} $m_0$ is given by
\begin{equation}
m_0 = \sum_{k=1}^{+\infty} \ k \ p_{k-1}. \label{mrt}
\end{equation}
If $m_0 = +\infty$ the state $A_0$ is called {\it null}, otherwise it
is {\it ergodic}. Thanks to the irreducibility of the stochastic
matrix $\Pi$, we have that all the states $A_i$ are of the same kind
as $A_0$, and then either $m_i = +\infty$ for all $i \in \mathbb N$ or $m_i
< +\infty$ for all $i \in \mathbb N$.
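As an illustration of this dichotomy (ours, with two hypothetical transition distributions, both summing to one), a geometric $(p_i)$ gives a finite mean recurrence time, hence an ergodic state $A_0$, while a heavy-tailed $(p_i)$ makes the series (\ref{mrt}) diverge, hence a null state:

```python
# Sketch (ours) of the ergodic/null dichotomy via partial sums of eq. (mrt),
# for two hypothetical distributions (p_i), both summing to one:
#   geometric  p_i = 2^{-(i+1)}       -> m_0 = sum_k k 2^{-k} = 2      (ergodic)
#   heavy tail p_i = 1/((i+1)(i+2))   -> m_0 = sum_k 1/(k+1) = +infty  (null)
def partial_m0(p, K):
    return sum(k * p(k - 1) for k in range(1, K + 1))

geom = lambda i: 2.0 ** (-(i + 1))
heavy = lambda i: 1.0 / ((i + 1) * (i + 2))

assert abs(partial_m0(geom, 200) - 2.0) < 1e-9   # converges to m_0 = 2
assert partial_m0(heavy, 10**5) > 10.0           # grows like log K: diverges
```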
For the recurrent event ${\cal E}$ two random variables can be
introduced: $X_k : \Sigma_M \to \mathbb N$ given by 1 plus the number of
trials between the $(k-1)$-th and $k$-th occurrence of ${\cal E}$;
$N_k : \Sigma_M \to \mathbb N$ given by the number of realizations of ${\cal
E}$ in $k$ trials. The random variables $X_k$ have all the same
probability distribution given by
\[
p[X_k = r] = p_{r-1},
\]
and their mean $E_p[X_k]=m_0$, where the subscript $p$ specifies the
measure we use to find the mean. For our problem, the form of the
distribution function $F(x) = \sum_{r=0}^{[x]} p_r$ of the $X_k$ will
also be very important. Finally, we consider also the
probabilities $u_n$ that ${\cal E}$ occurs at the $n$-th trial. It
holds that
\begin{equation}
\lim_{n\to +\infty} u_n = \ \frac{1}{m_0}. \label{limitediu}
\end{equation}
Let's now explain the link between the AIC of a string generated by
the map $L$ and the theory of recurrent events (\cite{GW}). Given an
initial point $x\in I$ we obtain a string $\sigma \in \Sigma_M$ such
that $L^k(x) \in A_{\sigma_k}$ for all $k \in \mathbb N$. The string $\sigma$
is, for example, of the form
\begin{equation}
\sigma = (7654321054321002103210\dots). \label{stringa}
\end{equation}
One possible way to give an estimate for the AIC of the string is to
consider a compression of the string, and study the binary length of
the compressed string. One possible compression of the string $\sigma$
is given by
\begin{equation}
S = (75023\dots), \label{stringacompressa}
\end{equation}
that is the sequence of recurrence times for ${\cal E}$. So to the
finite string $\sigma^n$, obtained from the first $n$ symbols of
$\sigma$, we associate the string
\[
S^{N_n}= (\sigma_{q_{_1}} \sigma_{q_{_2}} \dots \sigma_{q_{_{N_n}}})
\]
with $\sigma_{q_{_i} -1} =0$ for all $i$, where $N_n$ is the number of
realizations of ${\cal E}$. Then to have an idea of the behaviour of
the AIC of a string we have to estimate the behaviour of the random
variables $N_n$.
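The compression rule can be made explicit (our reading, which reproduces the example above): keep the first symbol of $\sigma$ and every symbol that immediately follows an occurrence of ${\cal E}$.

```python
# Sketch (ours) of one natural reading of the compression: keep the first
# symbol and every symbol immediately following a 0 (an occurrence of E).
def compress(sigma):
    return [sigma[0]] + [sigma[q] for q in range(1, len(sigma))
                         if sigma[q - 1] == 0]

sigma = [int(c) for c in "7654321054321002103210"]   # the string of eq. (stringa)
assert compress(sigma) == [7, 5, 0, 2, 3]            # eq. (stringacompressa)
```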
In \cite{Feller}, some possible behaviours for $E_p[N_n]$ have been
studied, for particular forms of the distribution function $F(x)$. In
particular
\begin{teorema}[Feller]
If the recurrence time of ${\cal E}$ has finite mean $m_0$ and
variance $V$, then
\[
E_p[N_n] \sim \ \frac{n}{m_0} + \frac{V-m_0+m_0^2}{2m_0^2}.
\]
If, instead, $V=+\infty$ and the distribution function $F(x)$ satisfies
\[
F(x) \sim 1- A x^{- \alpha}
\]
for some constant $A$ and $0 < \alpha <2$, then:
i) If $1<\alpha <2$,
\[
E_p[N_n] \sim \ \frac{n}{m_0} + \frac{A}{(\alpha -1)(2-\alpha) m_0^2}
\ n^{(2-\alpha)};
\]
ii) If $0<\alpha <1$,
\[
E_p[N_n] \sim \ \frac{\sin \alpha \pi}{A \alpha \pi} n^{\alpha}.
\] \label{teofeller}
\end{teorema}
If the variance $V$ of the recurrence time is infinite but the
distribution function has a form different from that studied in
Theorem \ref{teofeller}, then we can show the following.
\begin{teorema}
If the state $A_0$ is ergodic then $E_p[N_n] \sim kn$ for some
constant $k>0$; if instead $A_0$ is a null state then $E_p[N_n]$
diverges with order less than $n$, that is $E_p[N_n] \to +\infty$
but $E_p[N_n]=o(n)$.
\label{teoinf}
\end{teorema}
\noindent {\bf Proof.} The proof is based on a characterization of the
mean $E_p[N_n]$. In \cite{Feller}, it is shown that $E_p[N_n] = U_n -1$,
where
\[
U_n = \sum_{i=0}^n \ u_i.
\]
Then it follows that
\[
\lim_{n\to +\infty} \frac{E_p[N_n]}{n} = \lim_{n\to +\infty}
\frac{U_n}{n} = \lim_{n\to +\infty} u_n = \ \frac{1}{m_0}.
\]
Then, if $m_0<+\infty$, that is if $A_0$ is ergodic, $E_p[N_n]$ grows
linearly in $n$, with slope $k=1/m_0$. Whereas if $m_0 = +\infty$,
that is if $A_0$ is a null state, then $E_p[N_n]$ diverges with order
less than $n$. \qed
\vskip 0.5cm The AIC calculated with the compression we have chosen
can be linked to the random variables $N_n$ by the following theorem
\begin{teorema}
For any string $\sigma \in \Sigma_M$ it holds
\begin{equation}
(N_n -1) + \log_2 (n-N_n +2) \leq I_{AIC}(\sigma^n) \leq N_n \log_2 \left(
\frac{n+N_n}{N_n} \right)
\label{eqstimainf}
\end{equation}
\label{teostimainf}
\end{teorema}
\noindent {\bf Proof.} Given $n \in \mathbb N$, we have that $\Sigma_M =
\cup C^n_r$, where the union is made on all the possible cylinders
$C^n_r$, with $r \in \mathbb N^n$. Moreover we write
\[
I_{AIC}(\sigma^n) = \sum_{i=1}^{N_n} \log_2 (\sigma_{q_{_i}} +2)
\]
for the AIC, where $\sigma_{q_{_i}} +2$ is used instead of
$\sigma_{q_{_i}}$, to have $\log_2 (\sigma_{q_{_i}} +2) \geq 1$ for
all $i$.
First of all, we consider only the cylinders $C^n_r$ with $r_n
=0$. This is done because we want to study the strings whose
compression changes when we increase our given $n$. Indeed, if the
compression did not change, we would have no hint about the
behaviour of the AIC with respect to the length of the string.
We start with some special cases. Let us first consider the case $N_n
=n$. The only possible cylinder is then $C^n_r$ with
$r=(0,\dots,0)$. Our compression then leaves every string in this
cylinder unchanged, and $I_{AIC}(\sigma^n)= n$ for any such string.
In the case $N_n =(n-1)$, the possible cylinders are given by $r \in
\mathbb N^n$ with only one symbol $1$ and all the others $0$. For all the
strings in these cylinders, the compression is $(n-1)$ symbols long,
and $I_{AIC}(\sigma^n) = (n-2)+\log_2 3$.
In the case $N_n =1$, the only possible cylinder is given by $r_i =
(n-i)$, and for the strings in this cylinder $I_{AIC}(\sigma^n) = \log_2
(n+1)$.
Consider now the general case $N_n = n-h$ for some $h < n$. The
compression of such strings is then $N_n$ symbols long. Moreover,
the compression satisfies $\sum_{i=1}^{N_n}
\sigma_{q_{_i}} = h$. We now want to find the maximum and the minimum of
the function
\[
\sum_{i=1}^{N_n} \log_2 (\sigma_{q_{_i}} +2)
\]
under the condition $\sum_{i=1}^{N_n} \sigma_{q_{_i}} = h$. The maximum
is attained when the $\sigma_{q_{_i}}$ are all equal and non-zero, and
the minimum when all the $\sigma_{q_{_i}}$ vanish except one, which
equals $h$. The maximum is thus given by $\sigma_{q_{_i}} = \frac{h}{n-h}$
for all $i$, and the AIC for the strings in such a cylinder is
$\sum_{i=1}^{n-h} \log_2 \left( \frac{h}{n-h} +2 \right) = N_n \log_2
\left( \frac{n+N_n}{N_n} \right)$, while the minimum is given by
$(n-h-1) + \log_2 (h+2) = (N_n -1) + \log_2 (n-N_n +2)$.
Looking back at the special cases studied above, we see that for
$N_n=n$ and $N_n=1$ the maximum and the minimum coincide and
give exactly the value of the AIC found by building the
sequences. For the case $N_n = n-1$, the only possible value of the
AIC equals the minimum we found. This shows that
the maximum is not always attained, although there are cases in
which it is. For example, let $n=8$, $N_n=4$, and consider the
string $\sigma = (10101010)$. Its compression is then given by
$S=(1111)$, and $I_{AIC}(\sigma^n) = 4\log_2 3 = N_n \log_2 \left(
\frac{n+N_n}{N_n} \right)$.
Finally, we remark that we have tacitly assumed that our strings do not
begin with the symbol $0$. If they did, the estimates would not
change significantly.
We have thus proved that for a subset of $\Sigma_M$ of full measure
with respect to all the cylinders with $0$ as last symbol, the
AIC of a string can be estimated using the value of the random
variables $N_n$. \qed
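As a concrete illustration (not part of the original argument), the compression and the bounds (\ref{eqstimainf}) of Theorem \ref{teostimainf} can be checked numerically. The sketch below, with function names of our own choosing, records for each symbol $1$ the length of the run of $0$s following it, as in the compression described above.

```python
from math import log2

def compress(sigma):
    """Compress a 0/1 string: record, for each symbol 1, the length of
    the run of 0s that follows it (leading 0s are tacitly ignored,
    consistent with the assumption made in the proof above)."""
    gaps, current, started = [], 0, False
    for c in sigma:
        if c == "1":
            if started:
                gaps.append(current)
            current, started = 0, True
        else:
            current += 1
    if started:
        gaps.append(current)
    return gaps

def aic(sigma):
    """I_AIC(sigma^n) = sum_i log2(sigma_{q_i} + 2)."""
    return sum(log2(g + 2) for g in compress(sigma))
```

On the example $\sigma=(10101010)$ above, `compress` returns four gaps of length $1$ and `aic` returns $4\log_2 3$, which saturates the upper bound of the theorem.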
\vskip 0.5cm Our plan is then the following: given a probability
measure $\mu$ on $(I,{\cal B},L)$, we find the stochastic matrix
$\Pi$ of equation (\ref{stochmatrix}), the distribution function $F(x)$
and the mean $E_p[N_n]$. We can then link the mean
$E_{\mu}[I_{AIC}(\sigma^n)]$, where $\mu$ is the probability measure on
$(I,{\cal B})$, to the mean $E_p[N_n]$ via Theorem
\ref{teostimainf}.
Another important aspect is the existence of an invariant measure for
the Markov chain associated to our dynamical system. Given a
stochastic matrix $\Pi$ of the form of equation (\ref{stochmatrix}) we
have
\begin{teorema}
There is a measure $\bar p$ on the space $(\Sigma_M,{\cal
C},T_M)$, invariant for the stochastic matrix $\Pi$, defined by
\[
\bar p (k) = \sum_{n=0}^{+\infty} \ p_{k+n}.
\]
This measure is a probability measure if and only if the mean
recurrence time $m_0$ is finite.
\end{teorema}
This theorem allows us to induce on $(I,{\cal B},L)$ an $L$-invariant
measure $\bar \mu$, which is a probability measure if and only if
$A_0$ is ergodic.
\subsection{Restriction to the Lebesgue measure} \label{ssRttlm}
In the previous subsection we have shown that, given a probability
measure $\mu$ on the space $(I,{\cal B},L)$, the mean of
the AIC behaves differently according to the mean recurrence time of
the passage through the sub-interval $A_0$. These results clearly depend
on the choice of the measure $\mu$ and of the sequence $(\epsilon_k)$
used in the definition of the map $L$. In this subsection we
study these two issues from a practical point of view.
When we apply the notion of AIC to a string obtained from a dynamical
system, the choice of this string depends on the choice of the initial
point $x$ which we use to generate the orbit of the dynamical
system. This choice can be made randomly, and the most natural way to
introduce a probability distribution on the choice of the initial
point is by using the Lebesgue measure $l$ on the space. Hence we
apply the results of Subsection \ref{ssGr} to the system $(I,{\cal
B},L,l)$. Using the Lebesgue measure $l$, thanks to the piecewise
linearity of the map $L$, the probabilities of transition $p_i$ given
in equation (\ref{probtrans}) take a particularly simple form: indeed,
we find that $p_i = l(A_i)$ for all $i \in \mathbb N$.
With respect to the Lebesgue measure $l$, it is also possible to prove
that the compression we have introduced for strings given by any map
$L$ is the best possible. We have
\begin{teorema}
Given any piecewise linear map $L$ of the form (\ref{mannmaplin}), the
best compression for the strings generated by the dynamical system
$(I,{\cal B},L,l)$, where $l$ is the Lebesgue measure, is the
compression given in equations (\ref{stringa}) and
(\ref{stringacompressa}), for $l$-almost any initial condition.
\label{bestcompressiontheorem}
\end{teorema}
\noindent {\bf Proof.} The compression we introduced gives a bijective
relation between our space $(\Sigma_M,{\cal C},T_M,p)$ and the space
of all possible infinite sequences built on countable symbols, without
any restriction given by a transition matrix. We denote this space as
$\Sigma$. We introduce on $\Sigma$ the $\sigma$-algebra of the
cylinders $C^{'n}_r$, given as in equation (\ref{cilindri}), and a
probability measure $p'$ inherited in some way from $p$. We define
$p'$ by
\begin{equation}
p'(C^{'n}_r) = p(C^N_R), \label{confmis}
\end{equation}
where the cylinder $C^N_R$ is built in such a way that compression of
strings that belong to it gives strings belonging to the cylinder
$C^{'n}_r$. At this point we ask whether the strings belonging to the
space $(\Sigma,{\cal C'},p')$ can be compressed any further. However, if
on this space we consider the usual shift map, we find that it preserves
$p'$ and has positive K-S entropy $h^{KS}$; this follows from a direct
computation, using the piecewise linearity of the map $L$, which gives a
simple form for the measure $p'$. It is then well known
that the AIC for this dynamical system behaves like $h^{KS} n$ for
$p'$-almost any string $\sigma$, and hence for $l$-almost any initial
condition $x \in I$. This clearly implies that, for $l$-almost any
initial condition $x \in I$, ours is the best possible
compression. \qed
\vskip 0.5cm
\begin{guess}
According, for example, to Brudno's approach (\cite{Brudno}) to obtain
a definition of AIC for finite orbits of a dynamical system, we should
evaluate the supremum of $I_{AIC}(\sigma^n)$ varying the open covers
of the interval $I=[0,1]$. Theorem \ref{eqinfdinsimb} can be
established as well if we consider an open cover of the form
$A_i=(\epsilon_i, \epsilon_{i-1} +\eta_i)$ for $0<\eta_i \ll 1$. Theorem
\ref{bestcompressiontheorem} suggests that the AIC of any sequence
generated by the system $(I,{\cal B},L)$ with a non-trivial open cover
has to be related to the random variables $N_n$ as in the case we are
considering. Then we can prove that the AIC of the particular strings
we are considering is the AIC of the dynamical system $(I,L)$. This
can also be proved in a more general context (\cite{Stefano}).
\end{guess}
The second point is the choice of the sequence $(\epsilon_k)$. We know
that for any sequence, we can find an isomorphism of the map
$L$ with the Manneville map $f$ of equation (\ref{mannmap}). At this
point we present some cases for the choice of $(\epsilon_k)$, and then
study a particular choice.
\begin{esempio}
Let the sequence be given by $\epsilon_k = \frac{1}{k^{\alpha}}$ with
$\alpha >0$. This sequence has all the properties we need for the
definition of the map $L$. Then
\[
p_i = l(A_i) = \ \frac{1}{(i-1)^\alpha} - \frac{1}{i^\alpha} \sim \
\frac{1}{i^{\alpha +1}}.
\]
The mean recurrence time $m_0$ is given by
\[
m_0 = \sum_{r=1}^{+\infty} \ r \ p_{r-1}
\]
and it is finite if and only if $\alpha >1$. Then we can find an
invariant measure $\bar \mu$ for the system $(I,{\cal B},L)$, such
that $\bar \mu (A_i) \sim \frac{1}{i^\alpha}$. This measure $\bar \mu$
is a probability measure if and only if $\alpha >1$.
The variance $V$ of the recurrence time is given by
\[
V = \sum_{r=1}^{+\infty} \ r^2 \ p_{r-1}
\]
and then $V<+\infty$ if and only if $\alpha > 2$. If we compute the
distribution function $F(x)$, we have that
\[
F(x) \sim 1- A x^{-\alpha},
\]
then we can apply Theorems \ref{teofeller} and \ref{teostimainf}, and
obtain that:
\begin{itemize}
\item[i)] if $\alpha > 1$ then $E_l[I_{AIC}(\sigma^n)] \sim E_p[N_n]
\sim n$;
\item[ii)] if $\alpha <1$ then $E_p[N_n] \sim n^\alpha$ and $n^\alpha
\leq E_l[I_{AIC}(\sigma^n)] \leq n^\alpha \log_2 n$ (see Theorem
\ref{teostimainf}).
\end{itemize}
\label{esempio1}
\end{esempio}
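The characterization $E_p[N_n] = U_n - 1$ used in the proof of Theorem \ref{teoinf} can be checked numerically in the ergodic regime of the example above. The sketch below (ours) uses a generic return-time law with the same tail exponent as the case $\alpha = 2$, not the exact $p_i$ of the map, and verifies the renewal-theoretic limit $U_n/n \to 1/m_0$.

```python
# Numerical check of U_n / n -> 1/m_0 (ergodic case of Theorem above):
# the return-time law f_k is proportional to 1/k^3, a stand-in with the
# same tail exponent as p_{k-1} ~ 1/k^{alpha+1} for alpha = 2.
K = 4000                                     # truncation of the return-time law
raw = [1.0 / k**3 for k in range(1, K + 1)]
C = 1.0 / sum(raw)
f = [0.0] + [C * x for x in raw]             # f[k] = P(return time = k)
m0 = sum(k * f[k] for k in range(1, K + 1))  # mean recurrence time of A_0

n = 2000
u = [1.0] + [0.0] * n                        # renewal sequence, u_0 = 1
for i in range(1, n + 1):
    u[i] = sum(f[k] * u[i - k] for k in range(1, i + 1))
U = sum(u)                                   # U_n = sum_{i=0}^n u_i
```

Since $m_0<+\infty$ here, $u_n \to 1/m_0$ and $U_n$ is linear in $n$, in agreement with the ergodic alternative of the theorem.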
\begin{esempio}
Let now the sequence $(\epsilon_k)$ be given by $\epsilon_k =
\frac{1}{a^k}$, where $a \in \mathbb N$ and $a >1$. In this case it is easy
to verify that $p_i \sim \frac{1}{a^i}$. Then the mean recurrence time
$m_0$ and the variance $V$ of the recurrence time are always finite,
and we can deduce that $E_l[I_{AIC}(\sigma^n)] \sim E_p[N_n] \sim n$. The
invariant probability measure $\bar \mu$ exists and is given by $\bar
\mu (A_i) \sim \frac{1}{a^i}$.
\label{esempio2}
\end{esempio}
\begin{esempio}
Finally let's consider the case in which the sequence $(\epsilon_k)$
is given by $\epsilon_k = \frac{1}{k^\alpha (\log k)^\beta}$ with
either $\alpha =1$ and $\beta >1$ or $\alpha >1$. If we compute the
variance $V$ of the recurrence time, we obtain that it is finite if
and only if either $\alpha >3$ or $\alpha=3$ and $\beta>1$. In this
case we obtain from Theorem \ref{teofeller} that
$E_l[I_{AIC}(\sigma^n)] \sim n$. If instead $V=+\infty$, we cannot find an
explicit form for the distribution function $F(x)$ similar to that of
Theorem \ref{teofeller}, but we can use Theorem \ref{teoinf}. Indeed,
we have that the state $A_0$ is ergodic if and only if either $3
>\alpha >2$ or $\alpha =2$ and $\beta >1$, in which case
$E_l[I_{AIC}(\sigma^n)] \sim n$. For other values of $\alpha$ and
$\beta$, we know that $E_p[N_n]$ diverges with order less than
$n$, and for $E_l[I_{AIC}(\sigma^n)]$ we can apply Theorem
\ref{teostimainf} to obtain an estimate of its order of divergence.
\label{esempio3}
\end{esempio}
\begin{esempio}
In a particular case, we can say more about the order of
divergence of $E_p[N_n]$, using the theory of recurrent events
(\cite{Feller}) and of power series (\cite{Titchmarsh}). Indeed,
choosing the sequence $\epsilon_k \sim \frac{1}{\log k}$, we have that
the distribution function satisfies $F(x) \sim 1 - \frac{1}{\log x}$.
Then, from the characterization of $E_p[N_n]$ in terms of the
generating function of the random variables $X_k$, we obtain that
asymptotically $E_p[N_n] < n^\alpha$ for all $\alpha >0$. Hence, from
Theorem \ref{teostimainf}, $E_l[I_{AIC}(\sigma^n)]$ diverges with
order smaller than any power law.
\label{esempio4}
\end{esempio}
We have thus found that, by changing the sequence $(\epsilon_k)$, it is
possible to obtain a whole range of behaviours for the mean of the AIC
with respect to the Lebesgue measure. It is clear that the different
behaviours are induced by the rate at which the derivative of the
map $L$ increases, a rate that depends on the order with which
$(\epsilon_k)$ tends to $0$. It is then evident that this must also be
the criterion distinguishing the different behaviours of the
information function of the Manneville map $f$ for different values of
the parameter $z$ in equation (\ref{mannmap}). We therefore have to find
a way to associate a given sequence $(\epsilon_k)$ to a particular value
of $z$. Since we have to maintain a given rate of increase
of the derivative, given a value of $z$, we look for the sequence
$(\epsilon_k)$ such that $\epsilon_k \sim x_k$, where $x_k$ is the
sequence of preimages of the point $x_0$, as defined in Section
\ref{sTlm}.
We have that $x_k \sim \frac{1}{k^\alpha}$ with $\alpha =
\frac{1}{z-1}$. We are then in the first case studied above, with
$\alpha>0$ since $z>1$. We can apply all the results we found, in particular:
\begin{itemize}
\item if $z <2$ then $E_l[I_{AIC}(\sigma^n)] \sim n$ and there is an
$L$-invariant probability measure $\bar \mu$ such that $\bar \mu (A_i)
\sim \frac{1}{i^\alpha}$;
\item if $z>2$ then $n^\alpha \leq E_l[I_{AIC}(\sigma^n)] \leq n^\alpha
\log_2 n$ and the invariant measure $\bar \mu$ is not a probability
measure.
\end{itemize}
As a particular case, we have thus recovered the results of
\cite{GW}. However, we would like a description of the behaviour of the
AIC valid for almost any orbit with respect to the Lebesgue measure $l$. We have
\begin{teorema}
For almost any point $x \in I$ with respect to the Lebesgue measure
$l$ and for all $\delta>0$, we have that
$E_{\mu_\delta}[I_{AIC}(\sigma^n)]$ is asymptotically equivalent to
$E_l[I_{AIC}(\sigma^n)]$, where with $\mu_\delta$ we denote the
measure given by the Lebesgue measure $l$ concentrated on
$U_\delta=(x-\delta,x+\delta)$.
\label{teoqo}
\end{teorema}
\noindent {\bf Proof.} The proof is based on a simple application of
the method used above. Indeed, given $x \in I$, let us consider
$U_\delta=(x-\delta,x+\delta)$ for some $\delta$. If we now want to
estimate the value of $E_\delta[N_n]$, that is, the mean with
respect to the measure on $\Sigma_M$ induced by $\mu_\delta$, we
notice that, from the properties of the map $L$, there
exists an $R(\delta)\in \mathbb N$ such that $L^{R(\delta)} (U_\delta) =
[0,1]$. We deduce that $E_\delta[N_n]$ coincides with $E_p[N_n]$,
where $p$ is the measure on $\Sigma_M$ induced by $l$, for
$n>R(\delta)$. It then follows that $E_{\mu_\delta}[I_{AIC}(\sigma^n)]
\sim E_l[I_{AIC}(\sigma^n)]$ for $n>R(\delta)$. \qed
\begin{cor}
The AIC of the Manneville map $f$ is given by
\begin{itemize}
\item $n^\alpha \leq E_{\mu_\delta}[I_{AIC}(\sigma^n)] \leq n^{\alpha}
\log_2 n$ with $\alpha = \frac{1}{z-1}$, for $z>2$;
\item $E_{\mu_\delta}[I_{AIC}(\sigma^n)] \sim n$ for $z<2$
\end{itemize}
for almost any point $x\in I$ with respect to the Lebesgue measure
$l$. \label{cor2}
\end{cor}
We remark that there are points $x\in I$ for which $I_{AIC}(x^n) \sim
n$ also for the Manneville map $f$ with $z>2$. Indeed, in Subsection
\ref{ssTame}, we proved the existence of $f$-invariant measures $\mu$
on $(I,{\cal B})$, for which the K-S entropy $h^{KS}_\mu =\log
2$. This is not a contradiction since such measures have support on a
set of zero Lebesgue measure.
\section{Conclusions}
In this paper we have proved that the Manneville map (\ref{mannmap})
with $z>2$ exhibits, from the AIC point of view, a behaviour
which is intermediate between the so-called {\it full chaos} (positive
K-S entropy) and periodicity. We obtain that the complexity (see
Section \ref{sI}, equations (\ref{complessita})) of the Manneville map
with $z>2$ is null for almost any initial condition with respect to
the Lebesgue measure $l$, and hence that the algorithmic entropy $h_l =0$
(see Section \ref{sI}, equations (\ref{entropia})). In particular, we
have found a family of piecewise linear maps $L$ that, for fixed
sequences $\epsilon_k$ (see equation (\ref{mannmaplin})), can be used
as a model for the Manneville map. Moreover this family of maps
presents a rich set of possible algorithmic behaviours (depending on
the choice of the map $L$ within the family). It is evident that, as
the sequence $\epsilon_k$ changes, the algorithmic behaviour varies from
full chaos to {\it mild chaos}, which is characterized by an
$I_{AIC}(\sigma^n)$ of order smaller than any power law. This
behaviour can be reached from the Manneville map in the limit $z \to
+\infty$ (see Theorem \ref{teostimainf}), and Example \ref{esempio4}
suggests a way to find a particular sequence generating mild
chaos. We remark that it would be impossible to distinguish many of the
maps of the family $L$ using the notions of complexity and algorithmic
entropy, since we would obtain $K(x)=h_l =0$ for almost every
$x \in (I,{\cal B},l)$. For those maps, the AIC is thus a powerful
classification tool, and one that can actually be estimated.
Finally, we observe that the AIC of a string has been proved to be a
non-computable function, i.e.\ it cannot be computed by
any algorithm (\cite{Chaitin}). Hence, apart from particular dynamical
systems for which the AIC can be estimated, the classification of
dynamical systems using the AIC cannot be carried out
explicitly. Nevertheless, we believe it is fundamental to obtain
an explicit estimate of the AIC for as many dynamical systems as
possible. The AIC can, however, be approximated by different notions of
information content of a string. In particular, we can define
\begin{equation}
I_A(\sigma^n) = |A(\sigma^n)|,
\end{equation}
the {\it information function}, where $|A(\sigma^n)|$ is the binary
length of the output obtained from a string $\sigma^n$ by means of a
compression algorithm $A$ (see \cite{Licata}). A particular
compression algorithm, called {\it CASToRe}, has been built to analyze
dynamical systems which present rich dynamics but have zero K-S
entropy for all physically relevant invariant measures
(\cite{Argenti}). The algorithm {\it CASToRe} has been tested on the
Manneville map, yielding $I_{CASToRe}(\sigma^n)\sim n^\alpha$
for $\alpha <1$, which confirms our results (\cite{Argenti},
\cite{Licata}), and on the logistic map at the chaos threshold,
indicating the presence of mild chaos (\cite{Bonanno}). At the moment it
is not clear whether the algorithm {\it CASToRe} can approximate the AIC
for any dynamical system, but from this paper on the Manneville map and
from many experimental results on the logistic map, it seems that, at
least in these two cases, there is evidence of agreement between the
theoretical predictions and the experiments with {\it CASToRe}.
\end{document} |
\begin{document}
\title{Continuous time `true' self-avoiding random walk on $\Z$}
\begin{center}
\vspace*{-3ex}
Institute of Mathematics\\
Budapest University of Technology (BME)
\end{center}
\vspace*{3ex}
\begin{abstract}
We consider the continuous time version of the `true' or `myopic' self-avoiding
random walk with site repulsion in $1d$. The Ray\,--\,Knight-type method which
was applied in \citetp{toth_95} to the discrete time and edge repulsion case is
applicable to this model with some modifications. We present a limit theorem
for the local time of the walk and a local limit theorem for the displacement.
\end{abstract}
\section{Introduction}
\subsection{Historical background}
Let $X(t)$, $t\in\Z_+:=\{0,1,2,\dots\}$ be a nearest neighbour walk on the
integer lattice $\Z$ starting from $X(0)=0$ and denote by $\ell(t,x)$,
$(t,x)\in\Z_+\times\Z$, its local time (that is: its occupation time measure)
on sites:
\[\ell(t,x):=\#\{0\le s\le t: X(s)=x\}\]
where $\#\{\dots\}$ denotes the cardinality of the set. The true self-avoiding
random walk with site repulsion (STSAW) was introduced in
\citetp{amit_parisi_peliti_83} as an example for a non-trivial random walk with
long memory which behaves qualitatively differently from the usual diffusive
behaviour of random walks. It is governed by the evolution rules
\begin{align}
\condprob{X(t+1)=x\pm1} {{\mathcal{F}}_t, \ X(t)=x} &= \frac {e^{-\beta\ell(t,x\pm1)}}
{e^{-\beta\ell(t,x+1)}+e^{-\beta\ell(t,x-1)}}\notag\\
&=\frac{e^{-\beta(\ell(t,x\pm1)-\ell(t,x))}}
{e^{-\beta(\ell(t,x+1)-\ell(t,x))}+e^{-\beta(\ell(t,x-1)-\ell(t,x))}},
\label{transprobstsaw}\\
\ell(t+1,x) &= \ell(t,x) + \ind{X(t+1)=x}.\notag
\end{align}
The extension of this definition to arbitrary dimensions is straightforward. In
\citetp{amit_parisi_peliti_83}, actually, the multidimensional version of the
walk was defined. Non-rigorous -- nevertheless rather convincing -- scaling and
renormalization group arguments suggested that:
\begin{enumerate}
\item
In three and more dimensions, the walk behaves diffusively with a Gaussian
scaling limit of $t^{-1/2}X(t)$ as $t\to\infty$. See e.g.\
\citetp{amit_parisi_peliti_83}, \citetp{obukhov_peliti_83} and
\citetp{horvath_toth_veto_10}.
\item
In one dimension (that is: the case formally defined above), the walk is
superdiffusive with a non-degenerate scaling limit of $t^{-2/3}X(t)$ as
$t\to\infty$, but with no hint about the limiting distribution. See
\citetp{peliti_pietronero_87}, \citetp{toth_99} and \citetp{toth_veto_08}.
\item
The critical dimension is $d=2$ where the Gaussian scaling limit is obtained
with logarithmic multiplicative corrections added to the diffusive scaling. See
\citetp{amit_parisi_peliti_83} and \citetp{obukhov_peliti_83}.
\end{enumerate}
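In one dimension, the evolution rules above are straightforward to simulate directly. The following sketch (our illustration, with $\beta=1$ and names of our own choosing, not part of the paper) tracks the site local times along with the walk.

```python
import math, random

def stsaw_step(x, ell, beta=1.0, rng=random):
    """One step of the 1d STSAW: jump from x to x+1 or x-1 with probability
    proportional to exp(-beta * ell(t, x+1)) resp. exp(-beta * ell(t, x-1)),
    then update the local time of the site just reached."""
    wp = math.exp(-beta * ell.get(x + 1, 0))
    wm = math.exp(-beta * ell.get(x - 1, 0))
    x += 1 if rng.random() < wp / (wp + wm) else -1
    ell[x] = ell.get(x, 0) + 1
    return x

random.seed(0)
x, ell, T = 0, {0: 1}, 500      # ell(0, 0) = 1 since X(0) = 0 is counted
for _ in range(T):
    x = stsaw_step(x, ell)
```

Since $\ell(t,x)=\#\{0\le s\le t: X(s)=x\}$, the local times sum to $t+1$ at every step, which gives a simple sanity check on the bookkeeping.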
These questions are still open. However, the one-dimensional scaling
limit of a closely related model was clarified in \citetp{toth_95}: the
true self-avoiding walk with self-repulsion defined in terms of the
local times on edges rather than sites. It is defined as follows.
Let $\wX(t)$, $t\in\Z_+:=\{0,1,2,\dots\}$ be yet again a nearest neighbour walk
on the integer lattice $\Z$ starting from $\wX(0)=0$ and denote now by
$\well_{\pm}(t,x)$, $(t,x)\in\Z_+\times\Z$, its local time (that is: occupation
time measure) on unoriented edges:
\begin{align*}
\well_+(t,x)
&:=
\#\{0\le s < t: \{\wX(s),\wX(s+1)\}=\{x,x+1\} \},
\\
\well_-(t,x)
&:=
\#\{0\le s < t: \{\wX(s),\wX(s+1)\}=\{x,x-1\} \}.
\end{align*}
Note that $\well_+(t,x)=\well_-(t,x+1)$. The true self-avoiding random
walk with edge repulsion (ETSAW) is governed by the evolution rules
\begin{align*}
\condprob{\wX(t+1)=x\pm1} {{\mathcal{F}}_t,\ \wX(t)=x} &= \frac
{e^{-2\beta\well_\pm(t,x)}}
{e^{-2\beta\well_+(t,x)}+e^{-2\beta\well_-(t,x)}}\\
&=\frac{e^{-\beta(\well_\pm(t,x)-\well_\mp(t,x))}}
{e^{-\beta(\well_+(t,x)-\well_-(t,x))}+
e^{-\beta(\well_-(t,x)-\well_+(t,x))}}\\
\well_\pm(t+1,x) &= \well_\pm(t,x) + \ind{\{\wX(t),\wX(t+1)\}=\{x,x\pm1\}}.
\end{align*}
In {\citetp{toth_95}}, a limit theorem was proved for $t^{-2/3}\wX(t)$, as
$t\to\infty$. Later, in \citetp{toth_werner_98}, a space-time continuous
process $\R_+\ni t\mapsto \mathcal{X}(t)\in\R$ was constructed -- called the true
self-repelling motion (TSRM) -- which possessed all the analytic and stochastic
properties of an assumed scaling limit of $\R_+\ni t\mapsto \mathcal{X}^{(A)}(t):=
A^{-2/3}\wX([At])\in\R$. The invariance principle for this model has been
clarified in \citetp{newman_ravishankar_06}.
A key point in the proof of \citetp{toth_95}\ is a kind of Ray\,--\,Knight-type
argument which works for the ETSAW but not for the STSAW. (For the original
idea of Ray\,--\,Knight theory, see \citetp{knight_63} and \citetp{ray_63}.) Let
\[\wT_{\pm,x,h}:=\min\{t\ge0: \well_\pm(t,x)\ge h\},\qquad x\in\Z, \quad h\in\Z_+\]
be the so called inverse local times and
\[\wLambda_{\pm,x,h}(y):=\well_\pm(\wT_{\pm,x,h},y),\qquad x,y\in\Z, \quad h\in\Z_+\]
the local time sequence of the walk stopped at the inverse local times. It
turns out that, in the ETSAW case, for any fixed $(x,h)\in\Z\times\Z_+$, the
process $\Z\ni y\mapsto\wLambda_{\pm,x,h}(y)\in\Z_+$ is Markovian and it can
be thoroughly analyzed.
It is a fact that a similar reduction does not hold for the STSAW. Here, the
natural objects are actually slightly simpler to define:
\begin{align*}
T_{x,h}
&:=
\min\{t\ge0: \ell(t,x)\ge h\},&&
x\in\Z, & h\in\Z_+,
\\[1ex]
\Lambda_{x,h}(y)
&:=
\ell(T_{x,h},y),&&
x,y\in\Z, & h\in\Z_+.
\end{align*}
The process $\Z\ni y\mapsto\Lambda_{x,h}(y)\in\Z_+$ (with fixed
$(x,h)\in\Z\times\Z_+$) is not Markovian and thus the Ray\,--\,Knight-type of
approach fails. Nevertheless, this method works also for the model treated in
the present paper.
The main ideas of this paper are similar to those of \citetp{toth_95}, but there
are essential differences, too. Those parts of the proofs which are the same as
in \citetp{toth_95} will not be spelled out explicitly. E.g.\ the full proof of
Theorem \ref{thmtoth} is omitted altogether. We put the emphasis on those
arguments which differ genuinely from \citetp{toth_95}. In particular, we
present some new coupling arguments.
This paper is organised as follows. First, we describe the model which we will
study and present our theorems. In Section \ref{RK}, we give the proof of
Theorem \ref{thmlimLambda} in three steps: we introduce the main technical
tools, i.e.\ some auxiliary Markov processes. Then we state technical lemmas
which are all devoted to check the conditions of Theorem \ref{thmtoth} cited
from \citetp{toth_95}. Finally, we complete the proof using the lemmas. The
proofs of these lemmas are postponed until Section \ref{proofs}. The proof of
Theorem \ref{thmXconv} is in Section \ref{Xconv}.
\subsection{The random walk considered and the main results}
Now, we define a version of true self-avoiding random walk in continuous time,
for which the Ray\,--\,Knight-type method sketched in the previous section is
applicable. Let $X(t)$, $t\in\R_+$ be a \emph{continuous time} random walk on
$\Z$ starting from $X(0)=0$ and having right continuous paths. Denote by
$\ell(t,x)$, $(t,x)\in\R_+\times\Z$ its local time (occupation time measure) on
sites:
\[\ell(t,x):=\abs{\{s\in[0,t)\,:\, X(s)=x\}}\]
where $|\{\dots\}|$ now denotes the Lebesgue measure of the set indicated. Let
$w:\R\to(0,\infty)$ be an almost arbitrary rate function. We assume that it is
non-decreasing and not constant.
The law of the random walk is governed by the following jump rates and
differential equations (for the local time increase):
\begin{align}
\condprob{X(t+\mathrm d t)=x\pm1} {{\mathcal{F}}_t, \ X(t)=x} &=
w(\ell(t,x)-\ell(t,x\pm1))\,\mathrm d t + o(\mathrm d t),\label{Xtrans}\\
\dot{\ell}(t,x) &= \ind{X(t)=x}\label{ltrans}
\end{align}
with initial conditions
\[X(0)=0, \qquad \ell(0,x)=0.\]
The dot in \eqref{ltrans} denotes the time derivative. Note the choice
of the exponential weight function $w(u)=\exp\{\beta u\}$: it means exactly
that, conditionally on a jump occurring at the instant $t$, the random
walker jumps to the right or to the left of its actual position with
probabilities
$e^{-\beta\ell(t,x\pm1)}/(e^{-\beta\ell(t,x+1)}+e^{-\beta\ell(t,x-1)})$, just
like in \eqref{transprobstsaw}. It will turn out that in the long run the
holding times remain of order one.
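For $w(u)=e^{\beta u}$, the holding times can even be sampled exactly: while the walker sits at $x$, only $\ell(\cdot,x)$ grows, so after a holding time $s$ both jump rates are multiplied by $e^{\beta s}$; the integrated total rate $(r_++r_-)(e^{\beta s}-1)/\beta$, with $r_\pm$ the rates on arrival, can be inverted in closed form, and the jump direction law does not depend on $s$. The following sketch (ours, not from the paper) exploits this observation.

```python
import math, random

def ctsaw(T, beta=1.0, seed=1):
    """Continuous-time walk with jump rates w(u) = exp(beta*u) applied to the
    local-time differences; holding times are sampled by exact inversion."""
    rng = random.Random(seed)
    x, t, ell = 0, 0.0, {0: 0.0}
    while t < T:
        rp = math.exp(beta * (ell[x] - ell.get(x + 1, 0.0)))  # rate to x+1
        rm = math.exp(beta * (ell[x] - ell.get(x - 1, 0.0)))  # rate to x-1
        # solve (rp + rm)*(e^{beta*s} - 1)/beta = Exp(1) for the holding time s
        s = math.log(1.0 + beta * rng.expovariate(1.0) / (rp + rm)) / beta
        if t + s > T:
            ell[x] += T - t          # local time accrues up to the horizon
            t = T
            break
        ell[x] += s
        t += s
        x += 1 if rng.random() < rp / (rp + rm) else -1
        ell.setdefault(x, 0.0)
    return x, t, ell
```

Since the local time of the occupied site grows at unit speed, the local times must sum to the elapsed time, again a convenient sanity check.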
Fix $j\in\Z$ and $r\in\R_+$. We consider the random walk $X(t)$ running from
$t=0$ up to the stopping time
\begin{equation}
T_{j,r}=\inf\{t\ge0:\ell(t,j)\ge r\},\label{defT}
\end{equation}
which is the inverse local time for our model. Define
\begin{equation}
\Lambda_{j,r}(k):=\ell(T_{j,r},k)\qquad k\in\Z\label{Lambdadef}
\end{equation}
the local time process of $X$ stopped at the inverse local time.
Let
\begin{align*}
\lambda_{j,r}&:=\inf\{k\in\Z:\Lambda_{j,r}(k)>0\},\\
\rho_{j,r}&:=\sup\{k\in\Z:\Lambda_{j,r}(k)>0\}.
\end{align*}
Fix $x\in\R$ and $h\in\R_+$. Consider the two-sided reflected Brownian motion
$W_{x,h}(y)$, $y\in\R$ with starting point $W_{x,h}(x)=h$. Define the times of
the first hitting of $0$ outside the interval $[0,x]$ or $[x,0]$ with
\begin{align*}
\mathfrak l_{x,h}&:=\sup\{y<0\wedge x:W_{x,h}(y)=0\},\\
\mathfrak r_{x,h}&:=\inf\{y>0\vee x:W_{x,h}(y)=0\}
\end{align*}
where $a\wedge b=\min(a,b)$, $a\vee b=\max(a,b)$, and let
\begin{equation}
\mathcal{T}_{x,h}:=\int_{\mathfrak l_{x,h}}^{\mathfrak r_{x,h}} W_{x,h}(y)\,\mathrm d y.\label{defcT}
\end{equation}
The main result of this paper is
\begin{theorem}\label{thmlimLambda}
Let $x\in\R$ and $h\in\R_+$ be fixed. Then
\begin{align}
A^{-1}\lambda_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}&\Longrightarrow\mathfrak l_{0\wedge x,h},\\
A^{-1}\rho_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}&\Longrightarrow\mathfrak r_{0\vee x,h},
\end{align}
and
\begin{equation}
\begin{split}
\left(\frac{\Lambda_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}(\lfloor Ay\rfloor)}{\sigma\sqrt A},
\frac{\lambda_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}}A\le y\le\frac{\rho_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}}A\right)\hspace*{8em}\\
\Longrightarrow\left(W_{x,h}(y), \mathfrak l_{0\wedge x,h}\le y\le\mathfrak r_{0\vee
x,h}\right)
\end{split}
\end{equation}
as $A\to\infty$ where
$\sigma^2=\int_{-\infty}^\infty u^2\rho(\mathrm d u)\in(0,\infty)$ with $\rho$ defined
by \eqref{defrho} and \eqref{defW} later.
\end{theorem}
\begin{corollary}\label{corTlim}
For any $x\in\R$ and $h\ge0$,
\begin{equation}
\frac{T_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}} {\sigma A^{3/2}}\Longrightarrow\mathcal{T}_{x,h}.
\end{equation}
\end{corollary}
For stating Theorem \ref{thmXconv}, we need some more definitions. It follows
from \eqref{defcT} that $\mathcal{T}_{x,h}$ has an absolutely continuous distribution.
Let
\begin{equation}
\omega(t,x,h):=\frac\partial{\partial t}\,\prob{\mathcal{T}_{x,h}<t}\label{defomega}
\end{equation}
be the density of the distribution of $\mathcal{T}_{x,h}$. Define
\[\varphi(t,x):=\int_0^\infty \omega(t,x,h)\,\mathrm d h.\]
Theorem 2 of \citetp{toth_95} gives that, for fixed $t>0$, $\varphi(t,\cdot)$ is
a density function, i.e.
\begin{equation}
\int_{-\infty}^\infty \varphi(t,x)\,\mathrm d x=1.\label{intfi}
\end{equation}
One could expect that $\varphi(t,\cdot)$ is the density of the limit
distribution of $X(At)/A^{2/3}$ as $A\to\infty$, but we prove a similar
statement for its Laplace transform. We denote by $\hat\varphi$ the Laplace
transform of $\varphi$:
\begin{equation}
\hat\varphi(s,x):=s\int_0^\infty e^{-st}\varphi(t,x)\,\mathrm d t.\label{deffihat}
\end{equation}
\begin{theorem}\label{thmXconv}
Let $s\in\R_+$ be fixed and $\theta_{s/A}$ a random variable of exponential
distribution with mean $A/s$ which is independent of the random walk $X(t)$.
Then, for almost all $x\in\R$,
\begin{equation}
A^{2/3}\prob{X(\theta_{s/A})=\lfloor A^{2/3}x\rfloor}\to\hat\varphi(s,x)
\end{equation}
as $A\to\infty$.
\end{theorem}
From this local limit theorem, the integral limit theorem follows immediately:
\[\lim_{A\to\infty}\prob{A^{-2/3}X(\theta_{s/A})<x}=\int_{-\infty}^x
\hat\varphi(s,y)\,\mathrm d y.\]
\section{Ray\,--\,Knight construction}\label{RK}
The aim of this section is to give a random walk representation of the local
time sequence $\Lambda_{j,r}$. Therefore, we introduce auxiliary Markov
processes corresponding to each edge of $\Z$. The process corresponding to the
edge $e$ is defined in such a way that its value is the difference of local
times of $X(T_{j,r})$ on the two vertices adjacent to $e$ where $X(T_{j,r})$ is
the process $X(t)$ stopped at an inverse local time. It turns out that the
auxiliary Markov processes are independent. Hence, by induction, the sequence
of local times can be given as partial sums of independent auxiliary Markov
processes. The proof of Theorem \ref{thmlimLambda} relies exactly on this
observation.
\subsection{The basic construction}\label{basic}
Let
\begin{equation}
\tau(t,k):=\ell(t,k)+\ell(t,k+1)\label{deftau}
\end{equation}
be the local time spent on (the endpoints of) the edge $\langle k,k+1\rangle$,
$k\in\Z$, and
\begin{equation}
\theta(s,k):=\inf\{t\ge0\,:\,\tau(t,k)>s\}
\end{equation}
its inverse. Further on, define
\begin{align}
\xi_k(s)&:=\ell(\theta(s,k),k+1)-\ell(\theta(s,k),k),\\
\alpha_k(s)&:=\ind{X(\theta(s,k))=k+1}-\ind{X(\theta(s,k))=k}.
\end{align}
A crucial observation is that, for each $k\in\Z$,
$s\mapsto(\alpha_k(s),\xi_k(s))$ is a Markov process on the state space
$\{-1,+1\}\times\R$. The transition rules are
\begin{align}
\condprob{\alpha_k(t+\mathrm d t)=-\alpha_k(t)}{\mathcal{F}_t}
&=w(\alpha_k(t)\xi_k(t))\,\mathrm d t + o(\mathrm d t),\label{lawalpha}\\
\dot{\xi_k}(t)&=\alpha_k(t),\label{lawxi}
\end{align}
with some initial state $(\alpha_k(0),\xi_k(0))$. Furthermore, these processes
are independent. In plain words:
\begin{enumerate}
\item
$\xi_k(t)$ is the difference of time spent by $\alpha_k$ in the states $+1$ and
$-1$, alternatively, the difference of time spent by the walker on the sites
$k+1$ and $k$;
\item
$\alpha_k(t)$ changes sign with rate $w(\alpha_k(t)\xi_k(t))$ since the walker
jumps between $k$ and $k+1$ with these rates.
\end{enumerate}
The common infinitesimal generator of these processes is
\[(Gf)(\pm1, u)=\pm f'(\pm1,u) + w(\pm u)\big(f(\mp1,u)-f(\pm1,u)\big)\]
where $f'(\pm1,u)$ is the derivative with respect to the second variable. It is
an easy computation to check that these Markov processes are ergodic and their
common unique stationary measure is
\begin{equation}
\mu(\pm1,\mathrm d u)=\frac{1}{2Z}e^{-W(u)}\,\mathrm d u\label{defmu}
\end{equation}
where
\begin{equation}
W(u):=\int_0^u \left(w(v)-w(-v)\right)\mathrm d v\quad\text{and}\quad
Z:=\int_{-\infty}^\infty e^{-W(v)}\,\mathrm d v.\label{defW}
\end{equation}
Mind that, due to the condition imposed on $w$ (non-decreasing and
non-constant),
\begin{equation}
\lim_{\abs{u}\to\infty}\frac{W(u)}{\abs{u}}=\lim_{v\to\infty}(w(v)-w(-v))>0,
\label{Z<infty}
\end{equation}
and thus $Z<\infty$ and $\mu(\pm1,\mathrm d u)$ is indeed a probability measure on
$\{-1,+1\}\times\R$.
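As a sanity check on \eqref{defmu} (not used anywhere in the argument), one can simulate a single pair $(\alpha_k,\xi_k)$ by a small time-stepping scheme. The rate function $w(u)=2+\tanh(u)$ below is our own admissible choice; for it, $W(u)=2\log\cosh u$, so the $\xi$-marginal of $\mu$ is $\operatorname{sech}^2(u)/2$, which has mean $0$ and variance $\pi^2/12\approx0.822$.

```python
import numpy as np

rng = np.random.default_rng(1)

def w(u):
    # positive, non-decreasing, non-constant; then W(u) = 2*log(cosh(u)),
    # so the xi-marginal of mu is sech(u)^2/2 with variance pi^2/12
    return 2.0 + np.tanh(u)

alpha, xi = 1, 0.0
dt, n_steps, burn_in = 0.01, 400_000, 50_000
samples = []
for step in range(n_steps):
    if rng.random() < w(alpha * xi) * dt:   # alpha flips at rate w(alpha*xi)
        alpha = -alpha
    xi += alpha * dt                        # xi moves with velocity alpha
    if step >= burn_in:
        samples.append(xi)

samples = np.array(samples)
print(samples.mean(), samples.var())   # expected: near 0 and near pi^2/12
```

The empirical mean and variance of $\xi$ match the predicted stationary values well after a short burn-in.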
Let
\begin{equation}
\beta_\pm(t,k):=\inf\left\{s\ge0:\int_0^s \ind{\alpha_k(u)=\pm1}\,\mathrm d u\ge
t\right\}
\end{equation}
be the inverse local times of $(\alpha_k(t),\xi_k(t))$. With the use of them,
we can define the processes
\begin{equation}\label{defetak}
\eta_{k,-}(t):=\xi_k(\beta_-(t,k)),\qquad\eta_{k,+}(t):=-\xi_k(\beta_+(t,k)),
\end{equation}
which are also Markovian. By symmetry, the processes with different signs have
the same law. The infinitesimal generator of $\eta_{k,\pm}$ is
\[(Hf)(u)=-f'(u)+w(u)\int_u^\infty e^{-\int_u^v w(s)\,\mathrm d s}w(v)(f(v)-f(u))\,\mathrm d v.\]
It is easy to see that the Markov processes $\eta_{k,\pm}$ are ergodic and
their common unique stationary distribution is
\begin{equation}
\rho(\mathrm d u):=\frac1Z e^{-W(u)}\,\mathrm d u\label{defrho}
\end{equation}
with the notation of \eqref{defW}. The stationarity of $\rho$ is not
surprising in view of \eqref{defmu}, but it also follows from a
straightforward calculation.
The main point is the following
\begin{proposition}
\begin{enumerate}
\item The processes $s\mapsto(\alpha_k(s),\xi_k(s))$, $k\in\Z$, are
independent Markov processes with the same law given in
\eqref{lawalpha}--\eqref{lawxi}. They start from the initial states
$\xi_k(0)=0$ and
\[\alpha_k(0)=\left\{
\begin{array}{rl}
+1\quad & \mbox{if}\quad k<0,\\
-1\quad & \mbox{if}\quad k\ge0.
\end{array}\right.\]
\item The processes $s\mapsto\eta_{k,\pm}(s)$, $k\in\Z$ are independent
Markov processes if we consider exactly one of $\eta_{k,+}$ and $\eta_{k,-}$
for each $k$. The initial distributions are
\begin{align}
\prob{\eta_{k,+}(0)\in A}&=\left\{
\begin{array}{ll} Q(0,A)\quad & \mbox{if}\quad k\ge0,\\ \ind{0\in A}\quad &
\mbox{if}\quad k<0,
\end{array}\right.\label{initial+}\\[2ex]
\prob{\eta_{k,-}(0)\in A}&=\left\{
\begin{array}{ll} \ind{0\in A}\quad & \mbox{if}\quad k\ge0,\\ Q(0,A)\quad &
\mbox{if}\quad k<0.
\end{array}\right.\label{initial-}
\end{align}
\end{enumerate}
\end{proposition}
\subsection{Technical lemmas}
The lemmas of this subsection describe the behaviour of the auxiliary Markov
processes $\eta_{k,\pm}$. Since they all have the same law, we denote them by
$\eta$ to keep the notation simple; each statement then holds for all
$\eta_{k,\pm}$.
Fix $b\in\R$. Define the stopping times
\begin{align}
\theta_+&:=\inf\{t>0:\eta(t)\ge b\},\\
\theta_-&:=\inf\{t>0:\eta(t)\le b\}.
\end{align}
In our lemmas, $\gamma$ will always be a positive constant, which should be
thought of as a small exponent, and $C$ will be a finite constant thought of
as being large. To simplify the notation, we will use the same letters for
constants at different points of our proofs. Although the notation does not
emphasize it, their values depend on $b$.
First, we estimate the exponential moments of $\theta_-$ and $\theta_+$.
\begin{lemma}\label{lemma-momgenf}
There are $\gamma>0$ and $C<\infty$ such that, for all $y\ge b$,
\begin{equation}
\condexpect{\exp(\gamma\theta_-)}{\eta(0)=y}\le\exp(C(y-b)).\label{esttheta-}
\end{equation}
\end{lemma}
\begin{lemma}\label{lemma+momgenf}
There exists $\gamma>0$ such that
\begin{equation}
\condexpect{\exp(\gamma\theta_+)}{\eta(0)=b}<\infty.\label{esttheta+}
\end{equation}
\end{lemma}
Denote by $P^t=e^{tH}$ the transition kernel of $\eta$. For any $x\in\R$,
define the probability measure
\[Q(x,\mathrm d y):=\left\{\begin{array}{lcl}\exp(-\int_x^y w(u)\,\mathrm d u) w(y)\,\mathrm d y
& \mbox{if} & y\ge x,\\ 0 & \mbox{if} & y<x,\end{array}\right.\] which is the
conditional distribution of the endpoint of a jump of $\eta$ provided that
$\eta$ jumps from $x$. We show that the Markov process $\eta$ converges
exponentially fast to its stationary distribution $\rho$ defined by
\eqref{defrho} if the initial distribution is $0$ with probability $1$ or
$Q(0,\cdot)$.
\begin{lemma}\label{lemmaexpconv}
There are $C<\infty$ and $\gamma>0$ such that
\begin{equation}
\left\|P^t(0,\cdot)-\rho\right\|<C\exp(-\gamma t)\label{expconv}
\end{equation}
and
\begin{equation}
\left\|Q(0,\cdot)P^t-\rho\right\|<C\exp(-\gamma t).\label{nuexpconv}
\end{equation}
\end{lemma}
We give a bound on the decay of the tails of $P^t(0,\cdot)$ and $Q(0,\cdot)P^t$
uniformly in $t$.
\begin{lemma}\label{lemmaunifexpbound}
There are constants $C<\infty$ and $\gamma>0$ such that
\begin{equation}
P^t(0,(x,\infty))\le Ce^{-\gamma x}
\end{equation}
and
\begin{equation}
\left(Q(0,\cdot)P^t\right)((x,\infty))\le Ce^{-\gamma x}
\end{equation}
for all $x\ge0$ and all $t>0$ uniformly, i.e.\ the values of $C$ and
$\gamma$ do not depend on $x$ and $t$.
\end{lemma}
We introduce some notation from \citetp{toth_95} and cite a theorem, which will
be the main ingredient of our proof. Let $A>0$ be the scaling parameter, and
let
\[S_A(l)=S_A(0)+\sum_{j=1}^l \xi_A(j)\qquad l\in\N\]
be a discrete time random walk on $\R_+$ with the law
\[\condprob{\xi_A(l)\in\mathrm d x}{S_A(l-1)=y}=\pi_A(\mathrm d x,y,l)\]
for each $l\in\N$ with
\[\int_{-y}^\infty \pi_A(\mathrm d x,y,l)=1.\]
Define the following stopping time of the random walk $S_A(\cdot)$:
\[\omega_{[Ar]}=\inf\{l\ge[Ar]:S_A(l)=0\}.\]
We state the following theorem without proof, because it is the continuous
analogue of Theorem 4 in \citetp{toth_95} and its proof is essentially
identical to that of the corresponding statement there.
\begin{theorem}\label{thmtoth}
Suppose that the following conditions hold:
\begin{enumerate}
\item The step distributions $\pi_A(\cdot,y,l)$ converge exponentially fast as
$y\to\infty$ to a common asymptotic distribution $\pi$. That is, for each
$l\in\Z$,
\[\int_\R |\pi_A(\mathrm d x,y,l)-\pi(\mathrm d x)|<Ce^{-\gamma y}.\]
\item The asymptotic distribution is symmetric, $\pi(-\mathrm d x)=\pi(\mathrm d x)$, and its
moments are finite; in particular, denote
\begin{equation}
\sigma^2:=\int_\R x^2\pi(\mathrm d x).\label{defsigma}
\end{equation}
\item Uniform decay of the step distributions: for each $l\in\Z$,
\[\pi_A((x,\infty),y,l)\le Ce^{-\gamma x}.\]
\item Uniform non-trapping condition: the random walk is not trapped in a
bounded domain or in a domain away from the origin. That is, there is
$\delta>0$ such that
\begin{equation}
\int_\delta^\infty \pi_A(\mathrm d x,y,l)>\delta\quad\mbox{or}\quad
\int_{x=\delta}^\infty \int_{z=-\infty}^\infty \pi_A(\mathrm d x-z,y+z,l+1)\pi_A(\mathrm d
z,y,l)>\delta \label{nontrapping}
\end{equation}
and
\[\int_{-\infty}^{-(\delta\wedge y)} \pi_A(\mathrm d x,y,l)>\delta.\]
\end{enumerate}
Under these conditions, if
\[\frac{S_A(0)}{\sigma\sqrt A}\to h,\]
then
\begin{equation}
\left(\frac{\omega_{[Ar]}}A,\frac{S_A([Ay])}{\sigma\sqrt A}:0\le y\le
\frac{\omega_{[Ar]}}A\right)\Longrightarrow\left(\omega_r^W,|W_y|:0\le
y\le\omega_r^W\bigm||W_0|=h\right)
\end{equation}
in $\R_+\times D[0,\infty)$ as $A\to\infty$ where
\[\omega_r^W=\inf\{s>0:W_s=0\}\]
with a standard Brownian motion $W$ and $\sigma$ is given by \eqref{defsigma}.
\end{theorem}
\subsection{Proof of Theorem \ref{thmlimLambda}}
Using the auxiliary Markov processes introduced in Subsection \ref{basic}, we
can build up the local time sequence as a random walk. This
Ray\,--\,Knight-type construction is the main idea of the following proof.
\begin{proof}[Proof of Theorem \ref{thmlimLambda}]
Fix $j\in\Z$ and $r\in\R_+$. Using the definition \eqref{Lambdadef} and the
construction of $\eta_{k,\pm}$ \eqref{deftau}--\eqref{defetak}, we can
formulate the following recursion for $\Lambda_{j,r}$:
\begin{equation}\begin{aligned}
&\Lambda_{j,r}(j)=r,\\
&\Lambda_{j,r}(k+1)=\Lambda_{j,r}(k)+\eta_{k,-}(\Lambda_{j,r}(k)) \qquad &
\mbox{if}\quad k\ge j,\\
&\Lambda_{j,r}(k-1)=\Lambda_{j,r}(k)+\eta_{k-1,+}(\Lambda_{j,r}(k)) \qquad &
\mbox{if}\quad k\le j.
\end{aligned}\label{Lambdawalk}\end{equation}
This means that the processes $(\Lambda_{j,r}(j-k))_{k=0}^\infty$ and
$(\Lambda_{j,r}(j+k))_{k=0}^\infty$ are random walks on $\R_+$ started from
$\Lambda_{j,r}(j)=r$, where the distribution of each step depends on the
current position of the walker. In order to apply Theorem \ref{thmtoth},
we rewrite \eqref{Lambdawalk}:
\begin{align*}
\Lambda_{j,r}(j+k)&=h+\sum_{i=0}^{k-1} \eta_{j+i,-}(\Lambda_{j,r}(j+i)) &
k&=0,1,2,\dots,\\
\Lambda_{j,r}(j-k)&=h+\sum_{i=0}^{k-1} \eta_{j-i-1,+}(\Lambda_{j,r}(j-i)) &
k&=0,1,2,\dots.
\end{align*}
The step distributions of these random walks are
\[\pi_A(\mathrm d x,y,l)=\left\{\begin{array}{l} P^y(0,\mathrm d x)\\[1ex]
Q(0,\cdot)P^y(\mathrm d x)\end{array}\right.\]
according to \eqref{initial+}--\eqref{initial-}.
The exponential closeness of the step distribution to the stationary
distribution is shown by Lemma \ref{lemmaexpconv}. One can see from
\eqref{defrho} and \eqref{defW} that the distribution $\rho$ is symmetric and
it has a non-zero finite variance. Lemma \ref{lemmaunifexpbound} gives a
uniform exponential bound on the tail of the distributions $P^t(0,\cdot)$ and
$Q(0,\cdot)P^t$.
Since we only consider $[\lambda_{j,r},\rho_{j,r}]$, that is, the time
interval until $\Lambda_{j,r}$ hits $0$, we can force the walk to jump to
$1$ in the step after hitting $0$, which does not influence our
investigations. This means that $\pi_A(\{1\},0,l)=1$ for $l\in\Z$, and with
this choice, the non-trapping condition \eqref{nontrapping} is fulfilled.
Therefore, Theorem \ref{thmtoth} is applicable to the forward and the
backward walks, and Theorem \ref{thmlimLambda} is proved.
\end{proof}
\section{The position of the random walker}\label{Xconv}
We turn to the proof of Theorem \ref{thmXconv}. First, we introduce the
rescaled distribution
\[\varphi_A(t,x):=A^{2/3}\prob{X(t)=\lfloor A^{2/3}x\rfloor}\]
where $t,x\in\R_+$. We define the Laplace transform of $\varphi_A$ by
\begin{equation}
\hat\varphi_A(s,x)=s\int_0^\infty e^{-st}\varphi_A(At,x)\,\mathrm d t,\label{deffiAhat}
\end{equation}
which is the rescaled distribution of the position of the random walker at an
independent random time with exponential distribution of mean $A/s$.
We denote by $\hat\omega$ the Laplace transform of $\omega$ defined in
\eqref{defomega} and rewrite \eqref{deffihat}:
\begin{gather*}
\hat\omega(s,x,h):=s\int_0^\infty e^{-st}\omega(t,x,h)\,\mathrm d t
=s\,\expect{e^{-s\mathcal{T}_{x,h}}},\\
\hat\varphi(s,x)=s\int_0^\infty e^{-st}\varphi(t,x)\,\mathrm d t
=\int_0^\infty\hat\omega(s,x,h)\,\mathrm d h.
\end{gather*}
Note that the scaling relations
\begin{align}
\alpha\omega(\alpha t,\alpha^{2/3}x,\alpha^{1/3}h)&=\omega(t,x,h),\notag\\
\alpha^{2/3}\hat\varphi(\alpha^{-1}s,\alpha^{2/3}x)&=\hat\varphi(s,x)\label{fiscaling}
\end{align}
hold because of the scaling property of the Brownian motion.
\begin{proof}[Proof of Theorem \ref{thmXconv}]
The first observation for the proof is the identity
\begin{equation}
\prob{X(t)=k}=\int_{h=0}^\infty \prob{T_{k,h}\in(t,t+\mathrm d h)},\label{Ptransf}
\end{equation}
which follows from \eqref{defT}. Inserting it into the definition
\eqref{deffiAhat} of $\hat\varphi_A$, we get
\begin{equation}\label{fihatcomp}\begin{split}
\hat\varphi_A(s,x)&=sA^{-1/3}\int_0^\infty e^{-st/A}\prob{X(t)=\lfloor
A^{2/3}x\rfloor}\mathrm d t\\
&=sA^{-1/3}\int_0^\infty e^{-st/A}\int_{h=0}^\infty\prob{T_{\lfloor
A^{2/3}x\rfloor,h}\in(t,t+\mathrm d h)}\mathrm d t\\
&=sA^{-1/3}\int_0^\infty \expect{e^{-sT_{\lfloor A^{2/3}x\rfloor,h}/A}}\mathrm d h
\end{split}\end{equation}
using \eqref{Ptransf}. Defining
\[\hat\omega_A(s,x,h)=s\expect{\exp(-sT_{\lfloor A^{2/3}x\rfloor,\lfloor
A^{1/3}\sigma h\rfloor}/(\sigma A))}\]
gives us
\begin{equation}\label{fihatfinal}
\hat\varphi_A(s,x)=\int_0^\infty \hat\omega_A(\sigma s,x,h)\,\mathrm d h
\end{equation}
from \eqref{fihatcomp}. From Corollary \ref{corTlim}, it follows that, for any
$s>0$, $x\ge0$ and $h>0$,
\[\hat\omega_A(s,x,h)\to\hat\omega(s,x,h).\]
Applying Fatou's lemma in \eqref{fihatfinal}, one gets
\begin{equation}
\liminf_{A\to\infty}\hat\varphi_A(s,x) \ge\int_0^\infty \hat\omega(\sigma
s,x,h)\,\mathrm d h =\sigma^{2/3}\hat\varphi(s,\sigma^{2/3}x),\label{liminffihat}
\end{equation}
where we used \eqref{fiscaling} in the last equality. Combining
\eqref{intfi} with the integrated form of \eqref{liminffihat} and a second
application of Fatou's lemma yields
\[1=\int_{-\infty}^\infty \hat\varphi(s,x)\,\mathrm d x \le \int_{-\infty}^\infty
\liminf_{A\to\infty}\hat\varphi_A(s,x)\,\mathrm d x \le \liminf_{A\to\infty}
\int_{-\infty}^\infty \hat\varphi_A(s,x)\,\mathrm d x=1,\] which shows that, for
fixed $s\in\R_+$, $\hat\varphi_A(s,x)\to\hat\varphi(s,x)$ indeed holds for
almost all $x\in\R$.
\end{proof}
\section{Proof of lemmas}\label{proofs}
\subsection{Exponential moments of the return times}
\begin{proof}[Proof of Lemma \ref{lemma-momgenf}]
Consider the Markov process $\zeta(t)$ which decreases at constant speed $1$
and has upward jumps at the homogeneous rate $w(-b)$, where the distribution
of the size of a jump is the same as that of a jump of $\eta$ starting from
$b$. In other words, the infinitesimal generator of $\zeta$ is
\[(Zf)(u)=-f'(u)+w(-b)\int_0^\infty e^{-\int_0^v w(b+s)\,\mathrm d s}w(b+v)
(f(u+v)-f(u))\,\mathrm d v.\]
Note that, by the monotonicity of $w$, $\eta$ and $\zeta$ can be coupled in such
a way that they start from the same position and, as long as $\eta\ge b$ holds,
$\zeta\ge\eta$ is true almost surely. It means that it suffices to prove
\eqref{esttheta-} with
\begin{equation}
\theta'_-:=\inf\{t>0:\zeta(t)\le b\}\label{theta'-def}
\end{equation}
instead of $\theta_-$. But the transitions of $\zeta$ are homogeneous in space,
which yields that \eqref{esttheta-} follows from the finiteness of
\begin{equation}
\condexpect{\exp(\gamma\theta'_-)}{\zeta(0)=b+1}.\label{theta'-}
\end{equation}
In addition, $\zeta$ is a supermartingale with stationary increments,
which gives us
\[\expect{\zeta(t)}=b+1-ct\]
with some $c>0$ if the initial condition is $\zeta(0)=b+1$. For
$\alpha\in\left(-\infty,\lim_{u\to\infty}\frac{W(u)}u\right)$
(cf.\ \eqref{Z<infty}), the quantity
\[\log\expect{e^{\alpha(\zeta(t)-\zeta(0))}}\]
is finite, and it is negative for some $\alpha>0$. Hence, the martingale
\begin{equation}
M(t)=\exp\left(\alpha(\zeta(t)-\zeta(0))-t\log\expect{e^{\alpha(\zeta(1)-\zeta(0))}}\right)
\end{equation}
stopped at $\theta'_-$ shows that the expectation in \eqref{theta'-} is finite
with $\gamma=-\log\expect{e^{\alpha(\zeta(1)-\zeta(0))}}$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma+momgenf}]
First, we prove the statement for negative $b$; more precisely, for $b$ such
that $w(-b)>w(b)$. In this case, define the homogeneous process $\kappa$ with
$\kappa(0)=b$ and generator
\[Kf(u)=-f'(u)+w(-b)\int_0^\infty e^{-w(b)s}w(b)(f(u+s)-f(u))\,\mathrm d s.\]
It is easy to see that there is a coupling of $\eta$ and $\kappa$, for which
$\eta\ge\kappa$ as long as $\eta\le b$. Therefore, it is enough to show
\eqref{esttheta+} with
\[\theta'_+:=\inf\{t>0:\kappa(t)\ge b\}\]
instead of $\theta_+$.
But $\kappa$ is a submartingale with stationary increments, for which
\[\log\expect{e^{\alpha(\kappa(t)-\kappa(0))}}\]
is finite if $\alpha\in(-\infty,w(b))$, and negative for some $\alpha<0$.
The statement follows from the same idea as in the proof of Lemma
\ref{lemma-momgenf}.
Now, we prove the lemma for the remaining case. Fix $b$ for which we already
know \eqref{esttheta+}, and choose $b_1>b$ arbitrarily. We start $\eta$ from
$\eta(0)=b_1$, and we decompose its trajectory into independent excursions
above and below $b$, alternatingly. Let
\begin{equation}
Y_0:=\inf\{t\ge0:\eta(t)\le b\},
\end{equation}
and by induction, define
\begin{align}
X_k&:=\inf\left\{t>0:\eta\left(\sum_{j=1}^{k-1} X_j+\sum_{j=0}^{k-1}
Y_j+t\right)\ge b\right\},
\label{defX}\\
Y_k&:=\inf\left\{t\ge0:\eta\left(\sum_{j=1}^k X_j+\sum_{j=0}^{k-1} Y_j+t
\right)\le b\right\} \label{defY}
\end{align}
if $k=1,2,\dots$. Note that $(X_k,Y_k)_{k=1,2,\dots}$ is an i.i.d.\ sequence of
pairs of random variables. Finally, let
\begin{equation}
Z_k:=X_k+Y_k\qquad k=1,2,\dots.\label{defZ}
\end{equation}
With this definition, the $Z_k$'s are the lengths of the epochs of a renewal
process. Lemma \ref{lemma-momgenf} tells us that $Y_0$ has a finite exponential
moment. The same holds for $X_1,X_2,\dots$ by the first part of this
proof for the case of small $b$. Note that the distribution of the upper
endpoint of a jump of $\eta$, conditionally on $\eta$ jumping above $b$,
is exactly $Q(b,\cdot)$. Since $Q(b,\cdot)$ decays exponentially fast, we
can use Lemma \ref{lemma-momgenf} again to conclude that
$\expect{\exp(\gamma Y_k)}<\infty$ for $\gamma>0$ small enough. Define
\begin{equation}
\nu_t:=\max\left\{n\ge0:\sum_{k=1}^n Z_k\le t\right\}\label{defnut}
\end{equation}
in the usual way. The following decomposition is true:
\begin{equation}\begin{split}
&\prob{\frac{\sum_{k=1}^{\nu_t+1} Y_k}t<\varepsilon}\\
&\qquad\qquad\le\prob{\frac{\nu_t+1}t<\frac12\frac1{\expect{Z_1}}}
+\prob{\frac{\sum_{k=1}^{\nu_t+1} Y_k}t<\varepsilon,
\frac{\nu_t+1}t\ge\frac12\frac1{\expect{Z_1}}}.
\end{split}\label{X/tdecomp}\end{equation}
Lemma 4.1 of \citetp{vandenberg_toth_91} gives a large deviation principle for
the renewal process $\nu_t$, hence
\begin{equation}
\prob{\frac{\nu_t+1}t<\frac12\frac1{\expect{Z_1}}}
\le\prob{\frac{\nu_t}t<\frac12\frac1{\expect{Z_1}}}<e^{-\gamma t}
\end{equation}
with some $\gamma>0$. For the second term on the right-hand side in
\eqref{X/tdecomp},
\begin{equation}\begin{split}
&\prob{\frac{\sum_{k=1}^{\nu_t+1} Y_k}t<\varepsilon,
\frac{\nu_t+1}t\ge\frac12\frac1{\expect{Z_1}}}\\
&\hspace*{7em} =\prob{\frac{\sum_{k=1}^{\nu_t+1} Y_k}{\nu_t+1}
<\varepsilon\frac t{\nu_t+1},
\frac{\nu_t+1}t\ge\frac12\frac1{\expect{Z_1}}}\\
&\hspace*{7em}\le\prob{\frac{\sum_{k=1}^{\nu_t+1}Y_k}{\nu_t+1}
<2\varepsilon\expect{Z_1},
\frac{\nu_t+1}t\ge\frac12\frac1{\expect{Z_1}}}\\
&\hspace*{7em}\le\max_{n\ge\frac12\frac1{\expect{Z_1}}t}
\prob{\frac{\sum_{k=1}^n Y_k}n<2\varepsilon\expect{Z_1}},
\end{split}\label{X/test}\end{equation}
which is exponentially small for some $\varepsilon>0$ by standard large
deviation theory; the same holds for the probability estimated in
\eqref{X/tdecomp}, which means that $\eta$ spends at least $\varepsilon t$
time above $b$ with overwhelming probability.
The inequality
\[\condprob{\theta_+>t}{\eta(0)=b_1}
\le\prob{\sum_{k=1}^{\nu_t+1}Y_k<\varepsilon t}
+\Condprob{\theta_+>t}{\eta(0)=b_1,\sum_{k=1}^{\nu_t+1}Y_k>\varepsilon t}\]
is obvious. The first term on the right-hand side is exponentially small by
\eqref{X/tdecomp}--\eqref{X/test}. In order to bound the second term, denote
by $J(t)$ the number of jumps occurring while $\eta(s)\ge b$. The condition
$\sum_{k=1}^{\nu_t+1}Y_k>\varepsilon t$ means that this is the case on an at
least $\varepsilon$ portion of $[0,t]$. The rate of these jumps is at least
$w(-b)$ by the monotonicity of $w$. Note that $J(t)$ stochastically dominates
a Poisson random variable $L(t)$ with mean $w(-b)t$. Hence,
\begin{equation}
\prob{J(t)<\frac12w(-b)t}\le\prob{L(t)<\frac12w(-b)t}<e^{-\gamma t}\label{theta+tail}
\end{equation}
for $t$ large enough with some $\gamma>0$ by a standard large deviation
estimate.
Note that $Q$ is also monotone in the sense that
\[\int_{b_1}^\infty Q(x_1,\mathrm d y)<\int_{b_1}^\infty Q(x_2,\mathrm d y)\]
if $x_1<x_2$. Therefore, a jump of $\eta$ which starts above $b$ exits
$(-\infty,b_1]$ with probability at least
\[r=\int_{b_1}^\infty Q(b,\mathrm d y)>0.\]
Finally,
\[\begin{split}
&\Condprob{\theta_+>t}{\eta(0)=b_1,\sum_{k=1}^{\nu_t+1} Y_k>\varepsilon t}\\
&\quad\le\prob{J(t)<\frac12w(-b)\varepsilon t}
+\Condprob{\theta_+>t}
{J(t)\ge\frac12w(-b)\varepsilon t,\eta(0)=b_1,\sum_{k=1}^{\nu_t+1} Y_k>\varepsilon t}\\
&\quad\le e^{-\gamma t}+(1-r)^{\frac12w(-b)\varepsilon t}
\end{split}\]
by \eqref{theta+tail}, which is an exponential decay, as required.
\end{proof}
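The Poisson large-deviation estimate used in \eqref{theta+tail} is elementary and can be checked empirically: the probability that a Poisson variable of mean $\lambda t$ falls below $\lambda t/2$ decays exponentially in $t$. The sketch below is only a numerical illustration of this standard fact.

```python
import numpy as np

rng = np.random.default_rng(5)
lam = 1.0
fracs = []
for t in (50, 100, 200):
    # empirical P(Poisson(lam*t) < lam*t/2) from 200k samples
    n = rng.poisson(lam * t, 200_000)
    fracs.append(float((n < 0.5 * lam * t).mean()))
print(fracs)   # decays rapidly as t grows
```

Already at $t=50$ the probability is of order $10^{-4}$, and it is essentially unobservable at $t=200$.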
\subsection{Exponential convergence to stationarity}
\begin{proof}[Proof of Lemma \ref{lemmaexpconv}]
First, we prove \eqref{expconv}. We couple two copies of $\eta$, say $\eta_1$
and $\eta_2$. Suppose that
\[\eta_1(0)=0\qquad\mbox{and}\qquad\prob{\eta_2(0)\in A}=\rho(A).\]
Their distributions after time $t$ are obviously $P^t(0,\cdot)$ and $\rho$,
respectively. We use the standard coupling lemma to estimate their variation
distance:
\[\left\|P^t(0,\cdot)-\rho\right\|\le\prob{T>t}\]
where $T$ is the random time when the two processes merge.
Assume that $\eta_1=x_1$ and $\eta_2=x_2$ with fixed numbers $x_1,x_2\in\R$.
Then there is a coupling where the rate of merge is
\[c(x_1,x_2):=w(-x_1\vee x_2)\exp\left(-\int_{x_1\wedge x_2}^{x_1\vee x_2}
w(z)\,\mathrm d z\right).\]
Consider the interval $I_b=(-b,b)$ where $b$ will be chosen later
appropriately. If $\eta_1=x_1$ and $\eta_2=x_2$ where $x_1,x_2\in I_b$, then
for the rate of merge
\begin{equation}
c(x_1,x_2)\ge w(-b)\exp\left(-\int_{-b}^b w(z)\,\mathrm d z\right)=:\beta(b)>0
\label{rateofmerge}
\end{equation}
holds if $w(x)>0$ for all $x\in\R$.
Let $\vartheta$ be the time spent in $I_b$, more precisely,
\begin{align*}
\vartheta_i(t)&:=|\{0\le s\le t:\eta_i(s)\in I_b\}|\qquad i=1,2,\\
\vartheta_{12}(t)&:=|\{0\le s\le t:\eta_1(s)\in I_b,\eta_2(s)\in I_b\}|.
\end{align*}
The estimate
\[\prob{T>t}\le\prob{\vartheta_{12}(t)<\frac t2}
+\condprob{T>t}{\vartheta_{12}(t)\ge\frac t2}\]
is clearly true. Note that
\[\Condprob{T>t}{\vartheta_{12}(t)\ge\frac
t2}\le\exp\left(-\frac12\beta(b)t\right)\]
follows from \eqref{rateofmerge}.
By the inclusion relation
\begin{equation}
\left\{\vartheta_{12}(t)<\frac t2\right\}\subset \left\{\vartheta_1(t)<\frac34
t\right\}\cup\left\{\vartheta_2(t)<\frac34 t\right\},\label{inclusion}
\end{equation}
it suffices to prove that the tails of $\prob{\vartheta_i(t)<\frac34 t}$
decay exponentially for $i=1,2$, if $b$ is large enough.
We will show that
\begin{equation}
\prob{\frac{|\{0\le s\le t:\eta(s)<b\}|}t<\frac78}\le e^{-\gamma t}.\label{7/8}
\end{equation}
A similar statement can be proved for the time spent above $-b$; therefore,
another inclusion relation like \eqref{inclusion} gives the lemma.
First, we verify that the first hitting time of level $b$,
\[\inf\{s>0:\eta_i(s)=b\},\]
has a finite exponential moment; hence, it is negligible with overwhelming
probability, and we can suppose that $\eta_i(0)=b$. Indeed, for any fixed
$\varepsilon>0$, the measures $\rho$ and $Q(b,\cdot)$ assign exponentially
small weight to the complement of the interval $[-\varepsilon t,\varepsilon t]$
as $t\to\infty$. From now on, we suppress the subscript of $\eta_i$, forget
about the initial values, and assume only that
$\eta(0)\in[-\varepsilon t,\varepsilon t]$.
If $\eta(0)\in[b,\varepsilon t]$, then recall the proof of Lemma
\ref{lemma-momgenf}. There, we dominated $\eta$ by a homogeneous
process $\zeta$. If we define
\[a:=\condexpect{\theta'_-}{\zeta(0)=b+1}\]
with the notation \eqref{theta'-def}, which is finite by Lemma
\ref{lemma-momgenf}, then a large deviation principle yields
\begin{equation}
\condprob{\theta_-(t)>2a\varepsilon t}{\eta(0)\in[b,\varepsilon t]}
\le\condprob{\theta'_-(t)>2a\varepsilon t}{\eta(0)\in[b,\varepsilon t]}
<e^{-\gamma t}\label{theta-largedev}
\end{equation}
with some $\gamma>0$.
If $\eta(0)\in[-\varepsilon t,b]$, then we can neglect the piece of the
trajectory of $\eta$ which falls into the interval $[0,\theta_+]$, because
without it, $\vartheta(t)$ decreases and the bound in \eqref{7/8}
becomes stronger. Since $\eta$ jumps at $\theta_+$ a.s.\ and the distribution
of $\eta(\theta_+)$ is $Q(b,\cdot)$, we can use the previous observations
concerning the case $\eta(0)\in[b,\varepsilon t]$.
Using \eqref{theta-largedev}, it is enough to prove that
\[\prob{\frac{|\{0\le s\le t:\eta(s)<b\}|}t<\frac78+2a\varepsilon}\le e^{-\gamma t}\]
with the initial condition $\eta(0)=b$, where the value of $b$ has not been
specified yet. We introduce $X_k,Y_k,Z_k$ and $\nu_t$ as in
\eqref{defX}--\eqref{defnut} with $Y_0\equiv0$. The only difference is that
here we want to ensure a given portion of time spent below $b$ with high
probability with an appropriate choice of $b$. With the same idea as in the
proof of Lemma \ref{lemma+momgenf} in \eqref{X/tdecomp}--\eqref{X/test}, we
can show that
\[\prob{\frac{\sum_{k=1}^{\nu_t+1} X_k}t\le\frac78+2a\varepsilon}\]
is exponentially small by large deviation theory if we choose $b$ large enough
to make $\expect{X_1}/\expect{Z_1}$ (the expected portion of time spent below
$b$) sufficiently close to $1$. With this, the proof of \eqref{expconv} is
complete; that of \eqref{nuexpconv} is similar.
\end{proof}
\subsection{Decay of the transition kernel}
\begin{proof}[Proof of Lemma \ref{lemmaunifexpbound}]
We return to the idea that the partial sums of the $Z_k$'s form a renewal
process. Recall the definitions \eqref{defX}--\eqref{defnut}. This proof
relies on the estimate
\[|\eta(t)|\le Z_{\nu_t+1},\]
which holds because the process $\eta$ can decrease with speed at most $1$.
Therefore, it suffices to prove the exponential decay of the tail of
$Z_{\nu_t+1}$.
Define the \emph{renewal measure} by
\[U(A):=\sum_{n=0}^\infty \prob{\sum_{k=1}^n Z_k\in A}\]
for any $A\subset\R$. We consider the \emph{age} and the \emph{residual waiting
time}
\begin{align*}
A_t&:=t-\sum_{k=1}^{\nu_t} Z_k,\\
R_t&:=\sum_{k=1}^{\nu_t+1} Z_k-t
\end{align*}
separately. For the distribution of the former, $H(t,x):=\prob{A_t>x}$, the
renewal equation
\begin{equation}
H(t,x)=(1-F(t))\ind{t>x}+\int_0^t H(t-s,x)\,\mathrm d F(s)\label{reneq}
\end{equation}
holds, where $F(x)=\prob{Z_1<x}$. Equation \eqref{reneq} can be deduced by
conditioning on the time of the first renewal, $Z_1$. From Theorem (4.8) in
\citetp{durrett_95}, it follows that
\begin{equation}
H(t,x)=\int_0^t(1-F(t-s))\ind{t-s>x}U(\mathrm d s).\label{formH}
\end{equation}
As explained after \eqref{defZ}, Lemma \ref{lemma-momgenf} and Lemma
\ref{lemma+momgenf} with $b=0$ together imply that $1-F(x)\le C e^{-\gamma x}$
with some $C<\infty$ and $\gamma>0$. On the other hand,
\[U([k,k+1])\le U([0,1])\]
is true, because, in the worst case, there is a renewal at time $k$. Otherwise,
the distribution of renewals in $[k,k+1]$ can be obtained by shifting the
renewals in $[0,1]$ by $R_k$. We can see from \eqref{formH}, by splitting the
integral into segments of unit length, that
\[H(t,x)\le U([0,1])\sum_{k=\lfloor x\rfloor}^\infty C e^{-\gamma k},\]
which is uniform in $t>0$.
With the identity
\[\{R_t>x\}=\{A_{t+x}\ge x\}=\{\mbox{no renewal in } (t,t+x]\},\]
a similar uniform exponential bound can be deduced for the tail $\prob{R_t>x}$.
Since $Z_{\nu_t+1}=A_t+R_t$, the proof is complete.
\end{proof}
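The inspection-paradox effect handled in this proof can be seen in a toy example: for i.i.d.\ $\mathrm{Exp}(1)$ epoch lengths (a stand-in for the $Z_k$'s, of which we only use the exponential tails), the length $Z_{\nu_t+1}=A_t+R_t$ of the epoch covering a fixed time $t$ is asymptotically $\mathrm{Gamma}(2,1)$-distributed: longer than a typical epoch, but still with a uniformly exponential tail, as the lemma asserts in general.

```python
import numpy as np

rng = np.random.default_rng(4)

def covering_epoch_length(t, n=400):
    """Length Z_{nu_t+1} of the epoch covering time t, for i.i.d.
    Exp(1) epoch lengths (a toy stand-in for the Z_k)."""
    z = rng.exponential(1.0, n)
    s = np.cumsum(z)                          # renewal times
    return z[np.searchsorted(s, t, side="right")]

t = 100.0
lengths = np.array([covering_epoch_length(t) for _ in range(20_000)])
# for Exp(1) epochs the covering length is asymptotically Gamma(2,1):
# mean 2 and tail P(Z > x) = (1 + x) e^{-x}, uniformly exponential in t
print(lengths.mean(), (lengths > 5.0).mean())
```

The empirical mean is close to $2$ (twice a typical epoch) and the tail beyond $5$ is a few percent, matching $(1+5)e^{-5}\approx0.04$.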
\begin{thebibliography}{10}
\bibitem[Amit et al.(1983)]{amit_parisi_peliti_83}
D.~Amit, G.~Parisi, L.~Peliti.
\newblock Asymptotic behaviour of the `true' self-avoiding walk.
\newblock {\em Phys. Rev. B}, 27:1635--1645, 1983.
\bibitem[van den Berg and T\'oth(1991)]{vandenberg_toth_91}
M.~van~den Berg and B.~T\'oth.
\newblock Exponential estimates for the {W}iener sausage.
\newblock {\em Probab. Theory Relat. Fields}, 88:249--259, 1991.
\bibitem[Durrett(1995)]{durrett_95}
R.~Durrett.
\newblock {\em Probability: {T}heory and {E}xamples, {S}econd {E}dition}.
\newblock Duxbury Press, 1995.
\bibitem[Horv\'ath et al.(2010)]{horvath_toth_veto_10}
I.~A. Horv\'ath, B.~T\'oth, and B.~Vet\H o.
\newblock Diffusive limits for ``true'' (or myopic) self-avoiding random walks
and self-repellent {B}rownian polymers in $d\ge3$.
\newblock {\em Preprint, {\tt arXiv:1009.0401}}, 2010.
\bibitem[Knight(1963)]{knight_63}
F.~B. Knight.
\newblock Random walks and a sojourn density process of {B}rownian motion.
\newblock {\em Transactions of the AMS}, 109:56--86, 1963.
\bibitem[Newman and Ravishankar(2006)]{newman_ravishankar_06}
C.~M. Newman and K.~Ravishankar.
\newblock Convergence of the {T}{\'o}th lattice filling curve to the
{T}{\'o}th\,--\,{Werner} plane filling curve.
\newblock {\em ALEA -- Latin American Electr. J. Probab. Theory}, 1:333--346,
2006.
\bibitem[Obukhov and Peliti(1983)]{obukhov_peliti_83}
S.~P. Obukhov and L.~Peliti.
\newblock Renormalisation of the `true' self-avoiding walk.
\newblock {\em J. Phys. A}, 16:L147--L151, 1983.
\bibitem[Peliti and Pietronero(1987)]{peliti_pietronero_87}
L.~Peliti and L.~Pietronero.
\newblock Random walks with memory.
\newblock {\em Riv. Nuovo Cimento}, 10:1--33, 1987.
\bibitem[Ray(1963)]{ray_63}
D.~Ray.
\newblock Sojourn times of a diffusion process.
\newblock {\em Illinois J. Math.}, 7:615--630, 1963.
\bibitem[T\'oth(1995)]{toth_95}
B.~T{\'o}th.
\newblock The `true' self-avoiding walk with bond repulsion on {$\mathbb Z$}:
limit theorems.
\newblock {\em Ann. Probab.}, 23:1523--1556, 1995.
\bibitem[T\'oth(1999)]{toth_99}
B.~T\'oth.
\newblock Self-interacting random motions -- a survey.
\newblock In P.~R{\'e}v{\'e}sz and B.~T{\'o}th, editors, {\em Random {W}alks},
volume~9 of {\em Bolyai Society Mathematical Studies}, pages 349--384.
J{\'a}nos Bolyai Mathematical Society, Budapest, 1999.
\bibitem[T\'oth and Vet\H o(2008)]{toth_veto_08}
B.~T{\'o}th and B.~Vet\H o.
\newblock {Self-repelling random walk with directed edges on $\mathbb Z$}.
\newblock {\em Electr. J. Probab.}, 13:1909--1926, 2008.
\bibitem[T\'oth and Werner(1998)]{toth_werner_98}
B.~T{\'o}th and W.~Werner.
\newblock The true self-repelling motion.
\newblock {\em Probab. Theory Relat. Fields}, 111:375--452, 1998.
\end{thebibliography}
\hbox{ \phantom{M} \hskip7cm
\vbox{\hsize8cm {\noindent Address of authors:\\
{\sc
Institute of Mathematics\\
Budapest University of Technology \\
Egry J\'ozsef u.\ 1\\
H-1111 Budapest, Hungary}\\[10pt]
e-mail:\\
{\tt balint{@}math.bme.hu}\\
{\tt vetob{@}math.bme.hu}
}}}
\end{document} |
\begin{document}
\title{GENERALIZED FUSION FRAMES IN HILBERT SPACES}
\author[V. Sadri]{Vahid Sadri}
\address{Department of Mathematics, Faculty of Tabriz Branch,\\ Technical and Vocational University (TUV), East Azarbaijan, Iran}
\email{vahidsadri57@gmail.com}
\author[Gh. Rahimlou]{Gholamreza Rahimlou}
\address{Department of Mathematics, Faculty of Tabriz Branch,\\ Technical and Vocational University (TUV), East Azarbaijan, Iran}
\email{grahimlou@gmail.com}
\author[R. Ahmadi]{Reza Ahmadi}
\address{Institute of Fundamental Sciences\\University of Tabriz, Iran}
\email{rahmadi@tabrizu.ac.ir}
\author[R. Zarghami Farfar]{Ramazan Zarghami Farfar}
\address{Department of Geomatics and Mathematics\\Marand Faculty of Technical and Engineering\\University of Tabriz, Iran}
\email{zarghamir@gmail.com}
\begin{abstract}
After the introduction of g-frames by Sun and of fusion frames by Casazza, combining these two notions has become an interesting research topic. In this paper, we introduce generalized fusion frames, or g-fusion frames, for Hilbert spaces and give characterizations of these frames from the viewpoint of closed-range operators and g-fusion frame sequences. The canonical dual g-fusion frame is also presented, and we introduce Parseval g-fusion frames.
\end{abstract}
\subjclass[2010]{Primary 42C15; Secondary 46C99, 41A58}
\keywords{Fusion frame, g-fusion frame, Dual g-fusion frame, g-fusion frame sequence.}
\maketitle
\section{Introduction}
During the past few years, the theory of frames has been growing
rapidly, and new topics about frames are discovered almost every year.
For example, generalized frames (or g-frames), subspaces of frames
(or fusion frames), continuous frames (or c-frames), $k$-frames and
controlled frames, together with the combinations of any two of them,
lead to c-fusion frames, g-c-frames, c-g-frames, c$k$-frames,
c$k$-fusion frames, etc. The purpose of this paper is to introduce
generalized fusion frames (or g-fusion frames) and their operators.
We then obtain some useful propositions about these frames and,
finally, study g-fusion frame sequences.
Throughout this paper, $H$ and $K$ are separable Hilbert spaces and $\mathcal{B}(H,K)$ is the collection of all bounded linear operators from $H$ into $K$. If $K=H$, then $\mathcal{B}(H,H)$ is denoted by $\mathcal{B}(H)$. Also, $\pi_{V}$ denotes the orthogonal projection from $H$ onto a closed subspace $V\subset H$, and $\lbrace H_j\rbrace_{j\in\Bbb J}$ is a sequence of Hilbert spaces, where $\Bbb J$ is a subset of $\Bbb Z$. It is easy to check that if $u\in\mathcal{B}(H)$ is an invertible operator, then (\cite{ga})
$$\pi_{uV}u\pi_{V}=u\pi_{V}.$$
\begin{definition}\textbf{(frame)}.
Let $\{f_j\}_{j\in\Bbb J}$ be a sequence of members of $H$. We say that $\{f_j\}_{j\in\Bbb J}$ is a frame for $H$ if there exist $0<A\leq B<\infty$ such that for each $f\in H$
\begin{eqnarray*}
A\Vert f\Vert^2\leq\sum_{j\in\Bbb J}\vert\langle f,f_j\rangle\vert^2\leq B\Vert f\Vert^2.
\end{eqnarray*}
\end{definition}
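As a numerical illustration added here (not part of the original text; it assumes NumPy, and all variable names are ours): in finite dimensions the optimal frame bounds $A$ and $B$ are the extreme eigenvalues of the frame operator $Sf=\sum_{j}\langle f,f_j\rangle f_j$. The sketch below checks this for the three-vector Mercedes-Benz frame in $\mathbb{R}^2$, which is tight with $A=B=3/2$.

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree angles.
F = np.array([[np.cos(2*np.pi*k/3), np.sin(2*np.pi*k/3)] for k in range(3)])

# Frame operator S f = sum_j <f, f_j> f_j, i.e. S = F^T F as a matrix.
S = F.T @ F

# The optimal frame bounds are the extreme eigenvalues of S.
eig = np.linalg.eigvalsh(S)
A, B = eig[0], eig[-1]
print(A, B)  # both equal 3/2: this frame is tight
```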
\begin{definition}\textbf{(g-frame)}.
A family $\lbrace \Lambda_j\in\mathcal{B}(H,H_j)\rbrace_{j\in\Bbb J}$ is called a g-frame for $H$ with respect to $\lbrace H_j\rbrace_{j\in\Bbb J}$, if there exist $0<A\leq B<\infty$ such that
\begin{equation} \label{1}
A\Vert f\Vert^2\leq\sum_{j\in\Bbb J}\Vert \Lambda_{j}f\Vert^2\leq B\Vert f\Vert^2, \ \ f\in H.
\end{equation}
\end{definition}
\begin{definition}\textbf{(fusion frame)}.
Let $\{W_j\}_{j\in\Bbb J}$ be a family of closed subspaces of $H$ and $\{v_j\}_{j\in\Bbb J}$ be a family of weights (i.e.\ $v_j>0$ for any $j\in\Bbb J$). We say that $(W_j, v_j)$ is a fusion frame for $H$ if there exist $0<A\leq B<\infty$ such that for each $f\in H$
\begin{eqnarray*}
A\Vert f\Vert^2\leq\sum_{j\in\Bbb J}v^{2}_j\Vert \pi_{W_j}f\Vert^2\leq B\Vert f\Vert^2.
\end{eqnarray*}
\end{definition}
If an operator $u$ has closed range, then there exists a right-inverse operator $u^{\dagger}$ (the pseudo-inverse of $u$) in the following sense (see \cite{ch}).
\begin{lemma}\label{l1}
Let $u\in\mathcal{B}(K,H)$ be a bounded operator with closed range $\mathcal{R}_{u}$. Then there exists a bounded operator $u^\dagger \in\mathcal{B}(H,K)$ for which
$$uu^{\dagger} x=x, \ \ x\in \mathcal{R}_{u}.$$
\end{lemma}
\begin{lemma}\label{Ru}
Let $u\in\mathcal{B}(K,H)$. Then the following assertions hold:
\begin{enumerate} \item
$\mathcal{R}_u$ is closed in $H$ if and only if $\mathcal{R}_{u^{\ast}}$ is closed in $K$.
\item $(u^{\ast})^\dagger=(u^\dagger)^\ast$.
\item
The orthogonal projection of $H$ onto $\mathcal{R}_{u}$ is given by $uu^{\dagger}$.
\item
The orthogonal projection of $K$ onto $\mathcal{R}_{u^{\dagger}}$ is given by $u^{\dagger}u$.\item$\mathcal{N}_{{u}^{\dagger}}=\mathcal{R}^{\bot}_{u}$ and $\mathcal{R}_{u^{\dagger}}=\mathcal{N}^{\bot}_{u}$.
\end{enumerate}
\end{lemma}
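As a numerical illustration added here (an addition to the original text, assuming NumPy; the matrices are ours): in finite dimensions every operator has closed range, and the projection properties of the lemma can be checked directly with the Moore-Penrose pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
# A rank-2 operator u : R^4 -> R^3 (finite rank, hence closed range).
u = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 4))
ud = np.linalg.pinv(u)  # Moore-Penrose pseudo-inverse u^dagger

P_range = u @ ud     # orthogonal projection onto R_u          (item 3)
P_corange = ud @ u   # orthogonal projection onto R_{u^dagger} (item 4)

# Both are idempotent and self-adjoint, as orthogonal projections must be.
assert np.allclose(P_range @ P_range, P_range) and np.allclose(P_range, P_range.T)
assert np.allclose(P_corange @ P_corange, P_corange) and np.allclose(P_corange, P_corange.T)

# u u^dagger x = x for every x in the range of u (the right-inverse property).
x = u @ rng.standard_normal(4)
assert np.allclose(u @ (ud @ x), x)
```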
\section{Generalized Fusion Frames and Their Operators}
We define the space $\mathscr{H}_2:=(\sum_{j\in\Bbb J}\oplus H_j)_{\ell_2}$ by
\begin{eqnarray}
\mathscr{H}_2=\big\lbrace \lbrace f_j\rbrace_{j\in\Bbb J} \ : \ f_j\in H_j , \ \sum_{j\in\Bbb J}\Vert f_j\Vert^2<\infty\big\rbrace
\end{eqnarray}
with the inner product defined by
$$\langle \lbrace f_j\rbrace, \lbrace g_j\rbrace\rangle=\sum_{j\in\Bbb J}\langle f_j, g_j\rangle.$$
It is clear that $\mathscr{H}_2$ is a Hilbert space with pointwise operations.
\begin{definition}
Let $W=\lbrace W_j\rbrace_{j\in\Bbb J}$ be a family of closed subspaces of $H$, $\lbrace v_j\rbrace_{j\in\Bbb J}$ be a family of weights, i.e.\ $v_j>0$, and $\Lambda_j\in\mathcal{B}(H,H_j)$ for each $j\in\Bbb J$. We say $\Lambda:=(W_j, \Lambda_j, v_j)$ is a \textit{generalized fusion frame} (or \textit{g-fusion frame}) for $H$ if there exist $0<A\leq B<\infty$ such that for each $f\in H$
\begin{eqnarray}\label{g}
A\Vert f\Vert^2\leq\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\leq B\Vert f\Vert^2.
\end{eqnarray}
\end{definition}
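As a numerical illustration added here (an addition to the original text, assuming NumPy; all names and the random data are ours): since $\sum_{j}v_j^2\Vert\Lambda_j\pi_{W_j}f\Vert^2=\langle S_{\Lambda}f,f\rangle$ for the g-fusion frame operator $S_{\Lambda}$ introduced later, the optimal bounds in (\ref{g}) are again extreme eigenvalues in finite dimensions, and the inequality can be observed directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # dimension of H

def proj(basis):
    """Orthogonal projection onto the column span of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

# Random 2-dimensional subspaces W_j, operators Lambda_j : H -> H_j = R^2,
# and weights v_j.
Ws = [proj(rng.standard_normal((n, 2))) for _ in range(3)]
Ls = [rng.standard_normal((2, n)) for _ in range(3)]
vs = [1.0, 0.5, 2.0]

# g-fusion frame operator S = sum_j v_j^2 pi_j Lambda_j^* Lambda_j pi_j.
S = sum(v**2 * (L @ P).T @ (L @ P) for v, L, P in zip(vs, Ls, Ws))
A, B = np.linalg.eigvalsh(S)[[0, -1]]

# The sum in the definition equals <S f, f>, hence lies in [A||f||^2, B||f||^2].
f = rng.standard_normal(n)
total = sum(v**2 * np.linalg.norm(L @ P @ f)**2 for v, L, P in zip(vs, Ls, Ws))
assert A * (f @ f) - 1e-9 <= total <= B * (f @ f) + 1e-9
```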
We call $\Lambda$ a \textit{Parseval g-fusion frame} if $A=B=1$. When only the right-hand inequality of (\ref{g}) holds, $\Lambda$ is called a \textit{g-fusion Bessel sequence} for $H$ with bound $B$. If $H_j=H$ and $\Lambda_j=I_H$ for all $j\in\Bbb J$, then we recover the fusion frame $(W_j, v_j)$ for $H$. Throughout this paper, $\Lambda$ denotes the triple $(W_j, \Lambda_j, v_j)$ with $j\in\Bbb J$ unless otherwise stated.
\begin{proposition}\label{2.2}
Let $\Lambda$ be a g-fusion Bessel sequence for $H$ with bound $B$. Then for each sequence $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$, the series $\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j$ converges unconditionally.
\end{proposition}
\begin{proof}
Let $\Bbb I$ be a finite subset of $\Bbb J$, then
\begin{align*}
\Vert \sum_{j\in\Bbb I}v_j\pi_{W_j}\Lambda_j^{*} f_j\Vert&=\sup_{\Vert g\Vert=1}\big\vert\langle\sum_{j\in\Bbb I}v_j\pi_{W_j}\Lambda_j^{*} f_j , g\rangle\big\vert\\
&\leq\big(\sum_{j\in\Bbb I}\Vert f_j\Vert^2\big)^{\frac{1}{2}}\sup_{\Vert g\Vert=1}\big(\sum_{j\in\Bbb I}v_j^2\Vert \Lambda_j \pi_{W_j}g\Vert^2\big)^{\frac{1}{2}}\\
&\leq \sqrt{B}\big(\sum_{j\in\Bbb I}\Vert f_j\Vert^2\big)^{\frac{1}{2}}<\infty
\end{align*}
and it follows that $\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j$ is unconditionally convergent in $H$ (see \cite{diestel} page 58).
\end{proof}
Now, we can define the \textit{synthesis operator} by Proposition \ref{2.2}.
\begin{definition}
Let $\Lambda$ be a g-fusion frame for $H$. Then, the synthesis operator for $\Lambda$ is the operator
\begin{eqnarray*}
T_{\Lambda}:\mathscr{H}_2\longrightarrow H
\end{eqnarray*}
defined by
\begin{eqnarray*}
T_{\Lambda}(\lbrace f_j\rbrace_{j\in\Bbb J})=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j.
\end{eqnarray*}
\end{definition}
We call the adjoint $T_{\Lambda}^*$ of the synthesis operator the \textit{analysis operator}; it is computed in the following proposition.
\begin{proposition}
Let $\Lambda$ be a g-fusion frame for $H$. Then, the analysis operator
\begin{equation*}
T_{\Lambda}^*:H\longrightarrow\mathscr{H}_2
\end{equation*}
is given by
\begin{equation*}
T_{\Lambda}^*(f)=\lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J}.
\end{equation*}
\end{proposition}
\begin{proof}
If $f\in H$ and $\lbrace g_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$, we have
\begin{align*}
\langle T_{\Lambda}^*(f), \lbrace g_j\rbrace_{j\in\Bbb J}\rangle&=\langle f, T_{\Lambda}\lbrace g_j\rbrace_{j\in\Bbb J}\rangle\\
&=\langle f, \sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}g_j\rangle\\
&=\sum_{j\in\Bbb J}v_j \langle \Lambda_j \pi_{W_j}f, g_j\rangle\\
&=\langle\lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J}, \lbrace g_j\rbrace_{j\in\Bbb J}\rangle.
\end{align*}
\end{proof}
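As a numerical illustration added here (an addition to the original text, assuming NumPy; all names are ours): with $\mathscr{H}_2$ identified with the finite direct sum $\bigoplus_j H_j$, the synthesis operator is the block-row matrix with blocks $v_j\pi_{W_j}\Lambda_j^*$, and the proposition says its adjoint stacks the blocks $v_j\Lambda_j\pi_{W_j}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, J = 4, 2, 3  # dim H, dim H_j, number of indices

def proj(basis):
    """Orthogonal projection onto the column span of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

Ws = [proj(rng.standard_normal((n, 2))) for _ in range(J)]
Ls = [rng.standard_normal((m, n)) for _ in range(J)]
vs = [1.0, 0.5, 2.0]

# Synthesis operator T : H_2 -> H as the block row [ v_j pi_j Lambda_j^* ].
T = np.hstack([v * P @ L.T for v, L, P in zip(vs, Ls, Ws)])
# Analysis operator T^* : H -> H_2 stacks the blocks v_j Lambda_j pi_j.
Tstar = np.vstack([v * L @ P for v, L, P in zip(vs, Ls, Ws)])

# The stacked matrix really is the adjoint (transpose) of the block row.
assert np.allclose(T.T, Tstar)
```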
\begin{theorem}\label{t2}
The following assertions are equivalent:
\begin{enumerate}
\item $\Lambda$ is a g-fusion Bessel sequence for $H$ with bound $B$.
\item The operator
\begin{align*}
T_{\Lambda}&:\mathscr{H}_2\longrightarrow H\\
T_{\Lambda}(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j
\end{align*}
is a well-defined and bounded operator with $\Vert T_{\Lambda}\Vert\leq \sqrt{B}$.
\item The series
\begin{align*}
\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j
\end{align*}
converges for all $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$.
\end{enumerate}
\end{theorem}
\begin{proof}
$\textit{(1)}\Rightarrow\textit{(2)} $. It is clear by Proposition \ref{2.2}.\\
$\textit{(2)}\Rightarrow\textit{(1)}$. Suppose that $T_{\Lambda}$ is a well-defined and bounded operator with $\Vert T_{\Lambda}\Vert\leq \sqrt{B}$. Let $\Bbb I$ be a finite subset of $\Bbb J$ and $f\in H$.
Therefore
\begin{align*}
\sum_{j\in\Bbb I}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2&=\sum_{j\in\Bbb I}v_j^2\langle \pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}f, f \rangle\\
&=\langle T_{\Lambda}\lbrace v_j\Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb I}, f\rangle\\
&\leq \Vert T_{\Lambda}\Vert \big\Vert \lbrace v_j\Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb I}\big\Vert \Vert f\Vert\\
&=\Vert T_{\Lambda}\Vert\big(\sum_{j\in\Bbb I}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\big)^{\frac{1}{2}}\Vert f\Vert.
\end{align*}
Thus, we conclude that
\begin{align*}
\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\leq\Vert T_{\Lambda}\Vert^2 \Vert f\Vert^2\leq B\Vert f\Vert^2.
\end{align*}
$\textit{(1)}\Rightarrow\textit{(3)}$. It is clear.\\
$\textit{(3)}\Rightarrow\textit{(1)}$. Suppose that $\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j$ converges for all $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$. We define
\begin{align*}
T:&\mathscr{H}_2\longrightarrow H\\
T(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j.
\end{align*}
Then, $T$ is well-defined. Let for each $n\in\Bbb N$,
\begin{align*}
T_n:&\mathscr{H}_2\longrightarrow H\\
T_n(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j=1}^{n}v_j \pi_{W_j}\Lambda_{j}^{*}f_j.
\end{align*}
Let $B_n:=\big(\sum_{j=1}^{n}\Vert v_j \pi_{W_j}\Lambda_{j}^{*}\Vert^2\big)^{\frac{1}{2}}$. Since $\Vert T_n(\lbrace f_j\rbrace_{j\in\Bbb J})\Vert\leq B_n \Vert\lbrace f_j\rbrace_{j\in\Bbb J}\Vert$ by the Cauchy-Schwarz inequality, $\lbrace T_n\rbrace$ is a sequence of bounded linear operators which converges pointwise to $T$. Hence, by the Banach-Steinhaus Theorem, $T$ is a bounded operator with
$$\Vert T\Vert\leq\liminf_{n\rightarrow\infty}\Vert T_n\Vert.$$
So, by the implication $\textit{(2)}\Rightarrow\textit{(1)}$, $\Lambda$ is a g-fusion Bessel sequence for $H$.
\end{proof}
\begin{corollary}\label{cor}
$\Lambda$ is a g-fusion Bessel sequence for $H$ with bound $B$ if and only if for each finite subset $\Bbb I\subseteq\Bbb J$ and $f_j\in H_j$
$$\Vert\sum_{j\in\Bbb I}v_j \pi_{W_j}\Lambda^*_j f_j\Vert^2\leq B\sum_{j\in\Bbb I}\Vert f_j\Vert^2.$$
\end{corollary}
\begin{proof}
It is an immediate consequence of Theorem \ref{t2} and the proof of Proposition \ref{2.2}.
\end{proof}
Let $\Lambda$ be a g-fusion frame for $H$. The \textit{g-fusion frame operator} is defined by
\begin{align*}
S_{\Lambda}&:H\longrightarrow H\\
S_{\Lambda}f&=T_{\Lambda}T^*_{\Lambda}f.
\end{align*}
Now, for each $f\in H$ we have
$$S_{\Lambda}f=\sum_{j\in\Bbb J}v_j^2 \pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}f$$
and
$$\langle S_{\Lambda}f, f\rangle=\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2.$$
Therefore,
$$A I\leq S_{\Lambda}\leq B I.$$
This means that $S_{\Lambda}$ is a bounded, positive, self-adjoint and invertible operator. So, we have the reconstruction formula for any $f\in H$:
\begin{equation}\label{3}
f=\sum_{j\in\Bbb J}v_j^2 \pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}S^{-1}_{\Lambda}f
=\sum_{j\in\Bbb J}v_j^2 S^{-1}_{\Lambda}\pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}f.
\end{equation}
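As a numerical illustration added here (an addition to the original text, assuming NumPy; all names and the random data are ours): writing $S_{\Lambda}$ as a sum of blocks $v_j^2\pi_{W_j}\Lambda_j^*\Lambda_j\pi_{W_j}$, applying these blocks to $S_{\Lambda}^{-1}f$ must return $f$, exactly as in the reconstruction formula (\ref{3}).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

def proj(basis):
    """Orthogonal projection onto the column span of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

Ws = [proj(rng.standard_normal((n, 2))) for _ in range(3)]
Ls = [rng.standard_normal((2, n)) for _ in range(3)]
vs = [1.0, 0.5, 2.0]

# Blocks v_j^2 pi_j Lambda_j^* Lambda_j pi_j and frame operator S = sum of blocks.
blocks = [v**2 * P @ L.T @ L @ P for v, L, P in zip(vs, Ls, Ws)]
S = sum(blocks)
Sinv = np.linalg.inv(S)  # invertible: for generic data the family is a g-fusion frame

# Reconstruction: f = sum_j v_j^2 pi_j Lambda_j^* Lambda_j pi_j S^{-1} f.
f = rng.standard_normal(n)
recon = sum(b @ (Sinv @ f) for b in blocks)
assert np.allclose(recon, f)
```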
\begin{example}
We introduce a Parseval g-fusion frame for $H$ via the g-fusion frame operator. Assume that $\Lambda=(W_j, \Lambda_j, v_j)$ is a g-fusion frame for $H$. Since $S_{\Lambda}$ (or $S_{\Lambda}^{-1}$) is positive in $\mathcal{B}(H)$ and $\mathcal{B}(H)$ is a $C^*$-algebra, there exists a unique positive square root $S^{\frac{1}{2}}_{\Lambda}$ (or $S^{-\frac{1}{2}}_{\Lambda}$), and both commute with $S_{\Lambda}$ and $S_{\Lambda}^{-1}$. Therefore, each $f\in H$ can be written as
\begin{align*}
f&=S^{-\frac{1}{2}}_{\Lambda}S_{\Lambda}S^{-\frac{1}{2}}_{\Lambda}f\\
&=\sum_{j\in\Bbb J}v_j^2 S^{-\frac{1}{2}}_{\Lambda}\pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}f.
\end{align*}
This implies that
\begin{align*}
\Vert f\Vert^2&=\langle f, f\rangle\\
&=\langle\sum_{j\in\Bbb J}v_j^2 S^{-\frac{1}{2}}_{\Lambda}\pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}f, f\rangle\\
&=\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}f\Vert^2\\
&=\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}\pi_{S^{-\frac{1}{2}}_{\Lambda}W_j}f\Vert^2,
\end{align*}
this means that $(S^{-\frac{1}{2}}_{\Lambda}W_j, \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}, v_j)$ is a Parseval g-fusion frame.
\end{example}
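As a numerical illustration added here (an addition to the original text, assuming NumPy; all names and the random data are ours): the frame operator of the transformed family $\Lambda_j\pi_{W_j}S_{\Lambda}^{-\frac{1}{2}}$ is $S_{\Lambda}^{-\frac{1}{2}}S_{\Lambda}S_{\Lambda}^{-\frac{1}{2}}=I$, which is precisely the Parseval property of the example.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

def proj(basis):
    """Orthogonal projection onto the column span of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

Ws = [proj(rng.standard_normal((n, 2))) for _ in range(3)]
Ls = [rng.standard_normal((2, n)) for _ in range(3)]
vs = [1.0, 0.5, 2.0]

S = sum(v**2 * (L @ P).T @ (L @ P) for v, L, P in zip(vs, Ls, Ws))

# S^{-1/2} via the spectral decomposition of the positive operator S.
w, U = np.linalg.eigh(S)
S_inv_half = U @ np.diag(w**-0.5) @ U.T

# Frame operator of the transformed family Lambda_j pi_j S^{-1/2}.
S_new = sum(v**2 * (L @ P @ S_inv_half).T @ (L @ P @ S_inv_half)
            for v, L, P in zip(vs, Ls, Ws))
assert np.allclose(S_new, np.eye(n))  # Parseval: frame operator = identity
```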
\begin{theorem}\label{2.3}
$\Lambda$ is a g-fusion frame for $H$ if and only if
\begin{align*}
T_{\Lambda}&:\mathscr{H}_2\longrightarrow H\\
T_{\Lambda}(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j
\end{align*}
is well-defined, bounded and surjective.
\end{theorem}
\begin{proof}
If $\Lambda$ is a g-fusion frame for $H$, the operator $S_{\Lambda}$ is invertible. Thus, $T_{\Lambda}$ is surjective. Conversely, let $T_{\Lambda}$ be well-defined, bounded and surjective. Then, by Theorem \ref{t2}, $\Lambda$ is a g-fusion Bessel sequence for $H$ and $T^{*}_{\Lambda}f=\lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J}$ for all $f\in H$. Since $T_{\Lambda}$ is surjective, by Lemma \ref{l1}, there exists an operator $T^{\dagger}_{\Lambda}:H\rightarrow\mathscr{H}_2$ such that $(T^{\dagger}_{\Lambda})^*T^*_{\Lambda}=I_{H}$. Now, for each $f\in H$ we have
\begin{align*}
\Vert f\Vert^2&\leq\Vert (T^{\dagger}_{\Lambda})^*\Vert^2 \Vert T^*_{\Lambda}f\Vert^2\\
&=\Vert T^{\dagger}_{\Lambda}\Vert^2\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2.
\end{align*}
Therefore, $\Lambda$ is a g-fusion frame for $H$ with lower g-fusion frame bound $\Vert T^{\dagger}_{\Lambda}\Vert^{-2}$ and upper g-fusion frame bound $\Vert T_{\Lambda}\Vert^2$.
\end{proof}
\begin{theorem}
$\Lambda$ is a g-fusion frame for $H$ if and only if the operator
$$S_{\Lambda}:f\longrightarrow\sum_{j\in\Bbb J}v_j^2 \pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}f$$
is well-defined, bounded and surjective.
\end{theorem}
\begin{proof}
The necessity of the statement is clear. Let $S_{\Lambda}$ be a well-defined, bounded and surjective operator. Since $\langle S_{\Lambda}f, f\rangle\geq0$ for all $f\in H$, $S_{\Lambda}$ is positive. Then
$$\mathcal{N}_{S_{\Lambda}}=(\mathcal{R}_{S^*_{\Lambda}})^{\perp}=(\mathcal{R}_{S_{\Lambda}})^{\perp}=\lbrace 0\rbrace,$$
thus $S_{\Lambda}$ is injective and therefore invertible, so $0\notin\sigma(S_{\Lambda})$. Let $C:=\inf_{\Vert f\Vert=1}\langle S_{\Lambda}f, f\rangle$. By Proposition 70.8 in \cite{he}, we have $C\in\sigma(S_{\Lambda})$, so $C>0$. Now, we can write for each $f\in H$
$$\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2=\langle S_{\Lambda}f, f\rangle\geq C\Vert f\Vert^2$$
and
$$\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2=\langle S_{\Lambda}f, f\rangle\leq \Vert S_{\Lambda}\Vert \Vert f\Vert^2.$$
It follows that $\Lambda$ is a g-fusion frame for $H$.
\end{proof}
\begin{theorem}
Let $\Lambda:=(W_j, \Lambda_j, v_j)$ and $\Theta:=(W_j, \Theta_j, v_j)$ be two g-fusion Bessel sequences for $H$ with bounds $B_1$ and $B_2$, respectively. Suppose that $T_{\Lambda}$ and $T_{\Theta}$ are their synthesis operators and that $T_{\Theta}T^*_{\Lambda}=I_H$. Then, both $\Lambda$ and $\Theta$ are g-fusion frames.
\end{theorem}
\begin{proof}
For each $f\in H$ we have
\begin{align*}
\Vert f\Vert^4&=\vert\langle f, f\rangle\vert^2=\vert\langle T_{\Theta}T^*_{\Lambda}f, f\rangle\vert^2\\
&=\vert\langle T^*_{\Lambda}f, T^*_{\Theta}f\rangle\vert^2\\
&\leq\Vert T^*_{\Lambda}f\Vert^2 \Vert T^*_{\Theta}f\Vert^2\\
&=\big(\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\big)\big(\sum_{j\in\Bbb J}v_j^2\Vert \Theta_j \pi_{W_j}f\Vert^2\big)\\
&\leq\big(\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\big) B_2 \Vert f\Vert^2,
\end{align*}
thus, $B_2^{-1}\Vert f\Vert^2\leq\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2$. This means that $\Lambda$ is a g-fusion frame for $H$. Similarly, $\Theta$ is a g-fusion frame with lower bound $B_1^{-1}$.
\end{proof}
\section{Dual g-Fusion Frames}
To define the dual g-fusion frame, we need the following theorem.
\begin{theorem}\label{dual}
Let $\Lambda=(W_j, \Lambda_j, v_j)$ be a g-fusion frame for $H$. Then $(S^{-1}_{\Lambda}W_j, \Lambda_j \pi_{W_j}S_{\Lambda}^{-1}, v_j)$ is a g-fusion frame for $H$.
\end{theorem}
\begin{proof}
Let $A,B$ be the g-fusion frame bounds of $\Lambda$ and $f\in H$, then
\begin{align*}
\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_{j}\pi_{W_j}S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}f\Vert^2&=\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_{j}\pi_{W_j}S_{\Lambda}^{-1}f\Vert^2\\
&\leq B\Vert S_{\Lambda}^{-1}\Vert^2 \Vert f\Vert^2.
\end{align*}
Now, to get the lower bound, by using (\ref{3}) we can write
\begin{align*}
\Vert f\Vert^4&=\big\vert\langle\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}f, f\rangle\big\vert^2\\
&=\big\vert\sum_{j\in\Bbb J}v_j^2\langle\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}f, \Lambda_j\pi_{W_j}f\rangle\big\vert^2\\
&\leq\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j\pi_{W_j}S^{-1}_{\Lambda}f\Vert^2 \sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j\pi_{W_j}f\Vert^2\\
&\leq \sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j\pi_{W_j} S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}f\Vert^2\big(B\Vert f\Vert^2\big),
\end{align*}
therefore
\begin{align*}
B^{-1}\Vert f\Vert^2\leq \sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j\pi_{W_j} S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}f\Vert^2.
\end{align*}
\end{proof}
Now, by Theorem \ref{dual}, $\tilde{\Lambda}=(S^{-1}_{\Lambda}W_j, \Lambda_j\pi_{W_j} S_{\Lambda}^{-1}, v_j)$ is a g-fusion frame for $H$; it is called the \textit{(canonical) dual g-fusion frame} of $\Lambda$. Let $S_{\tilde{\Lambda}}=T_{\tilde{\Lambda}}T^*_{\tilde{\Lambda}}$ be the g-fusion frame operator of $\tilde{\Lambda}$. Then, for each $f\in H$ we get
$$T^*_{\tilde{\Lambda}}f=\lbrace v_j\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}\pi_{S^{-1}_{\Lambda}W_j}f\rbrace=\lbrace v_j\Lambda_j\pi_{W_j} S^{-1}_{\Lambda}f\rbrace=T^*_{\Lambda}(S^{-1}_{\Lambda}f),$$
so $T_{\Lambda}T^*_{\tilde{\Lambda}}=I_H$. Also, we have for each $f\in H$,
\begin{align*}
\langle S_{\tilde{\Lambda}}f, f\rangle&=\sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j\pi_{W_j} S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}f\Vert^2\\
&=\sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j \pi_{W_j}S_{\Lambda}^{-1}f\Vert^2\\
&=\langle S_{\Lambda}(S_{\Lambda}^{-1}f), S_{\Lambda}^{-1}f\rangle\\
&=\langle S_{\Lambda}^{-1}f, f\rangle
\end{align*}
thus, $S_{\tilde{\Lambda}}=S_{\Lambda}^{-1}$ and by (\ref{3}), we get for each $f\in H$
\begin{align}\label{frame}
f=\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j\tilde{\Lambda_j}\pi_{\tilde{W_j}}f=
\sum_{j\in\Bbb J}v_j^2\pi_{\tilde{W_j}}\tilde{\Lambda_j}^*\Lambda_j\pi_{W_j}f,
\end{align}
where $\tilde{W_j}:=S^{-1}_{\Lambda}W_j \ , \ \tilde{\Lambda_j}:=\Lambda_j \pi_{W_j}S_{\Lambda}^{-1}.$
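As a numerical illustration added here (an addition to the original text, assuming NumPy; all names and the random data are ours): the identity $S_{\tilde{\Lambda}}=S_{\Lambda}^{-1}$ can be confirmed directly, using $\Lambda_j\pi_{W_j}S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}=\Lambda_j\pi_{W_j}S_{\Lambda}^{-1}$ to compute the dual frame operator.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4

def proj(basis):
    """Orthogonal projection onto the column span of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

Ws = [proj(rng.standard_normal((n, 2))) for _ in range(3)]
Ls = [rng.standard_normal((2, n)) for _ in range(3)]
vs = [1.0, 0.5, 2.0]

S = sum(v**2 * (L @ P).T @ (L @ P) for v, L, P in zip(vs, Ls, Ws))
Sinv = np.linalg.inv(S)

# Frame operator of the canonical dual, whose operators act as Lambda_j pi_j S^{-1}.
S_dual = sum(v**2 * (L @ P @ Sinv).T @ (L @ P @ Sinv)
             for v, L, P in zip(vs, Ls, Ws))
assert np.allclose(S_dual, Sinv)  # S_dual = S^{-1}, as the computation above shows
```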
The following Theorem shows that the canonical dual g-fusion frame
gives rise to expansion coefficients with the minimal norm.
\begin{theorem}\label{min}
Let $\Lambda$ be a g-fusion frame with canonical dual $\tilde{\Lambda}$.
For each $g_j\in H_j$, put $f=\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j g_j$. Then
$$\sum_{j\in\Bbb J}\Vert g_j\Vert^2=\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2+\sum_{j\in\Bbb J}\Vert g_j-v_j^2\tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2.$$
\end{theorem}
\begin{proof}
We can write again
\begin{align*}
\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2&=\langle f, S^{-1}_{\Lambda}f\rangle\\
&=\sum_{j\in\Bbb J}v_j^2\langle\pi_{W_j}\Lambda^*_j g_j, S_{\Lambda}^{-1}f \rangle\\
&=\sum_{j\in\Bbb J}v_j^2\langle g_j, \Lambda_j\pi_{W_j}S_{\Lambda}^{-1}f \rangle\\
&=\sum_{j\in\Bbb J}v_j^2\langle g_j, \tilde{\Lambda_j}\pi_{\tilde{W_j}} f \rangle.
\end{align*}
Therefore, $\mbox{Im}\Big(\sum_{j\in\Bbb J}v_j^2\langle g_j, \tilde{\Lambda_j}\pi_{\tilde{W_j}} f \rangle\Big)=0$. So
\begin{align*}
\sum_{j\in\Bbb J}\Vert g_j-v_j^2\tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2=\sum_{j\in\Bbb J}\Vert g_j\Vert^2 -2\sum_{j\in\Bbb J}v_j^2\langle g_j, \tilde{\Lambda_j}\pi_{\tilde{W_j}} f \rangle+\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2
\end{align*}
and the proof is complete.
\end{proof}
\section{Gf-Complete and g-Fusion Frame Sequences}
\begin{definition}
We say that $(W_j, \Lambda_j)$ is \textit{gf-complete} if
$\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace=H.$
\end{definition}
Now, it is easy to check that $(W_j, \Lambda_j)$ is gf-complete if and only if
$$\lbrace f: \ \Lambda_j \pi_{W_j}f=0 , \ j\in\Bbb J\rbrace=\lbrace 0\rbrace.$$
\begin{proposition}\label{p3}
If $\Lambda=(W_j, \Lambda_j, v_j)$ is a g-fusion frame for $H$, then $(W_j, \Lambda_j)$ is gf-complete.
\end{proposition}
\begin{proof}
Let $f\in(\mbox{span}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace)^{\perp}\subseteq H$. For each $j\in\Bbb J$ and $g_j\in H_j$ we have
$$\langle \Lambda_j\pi_{W_j}f, g_j\rangle=\langle f, \pi_{W_j}\Lambda^*_j g_j\rangle=0,$$
so, $\Lambda_j\pi_{W_j}f=0$ for all $j\in\Bbb J$. Since $\Lambda$ is a g-fusion frame for $H$, then $\Vert f\Vert=0$. Thus $f=0$ and we get $(\mbox{span}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace)^{\perp}=\lbrace0\rbrace$.
\end{proof}
In the following, we examine whether the remaining family is still a g-fusion frame when a member is removed from a g-fusion frame.
\begin{theorem}\label{del}
Let $\Lambda=(W_j, \Lambda_j, v_j)$ be a g-fusion frame for $H$ with bounds $A, B$ and $\tilde{\Lambda}=(S^{-1}_{\Lambda}W_j, \Lambda_j \pi_{W_j}S^{-1}_{\Lambda}, v_j)$ be a canonical dual g-fusion frame. Suppose that $j_0\in\Bbb J$.
\begin{enumerate}
\item If there is a $g_0\in H_{j_0}\setminus\lbrace 0\rbrace$ such that $\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=g_0$ and $v_{j_0}=1$, then $(W_j, \Lambda_j)_{j\neq j_0}$ is not gf-complete in $H$.
\item If there is a $f_0\in H_{j_0}\setminus\lbrace0\rbrace$ such that $\pi_{W_{j_0}}\Lambda^*_{j_0}\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0=f_0$ and $v_{j_0}=1$, then $(W_j, \Lambda_j)_{j\neq j_0}$ is not gf-complete in $H$.
\item If $I-\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_{j_0}}}\tilde{\Lambda}^*_{j_0}$ is bounded invertible on $H_{j_0}$, then $(W_j, \Lambda_j, v_j)_{j\neq j_0}$ is a g-fusion frame for $H$.
\end{enumerate}
\end{theorem}
\begin{proof}
\textit{(1).} Since $\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\in H$, then by (\ref{frame}),
$$\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j\tilde{\Lambda_j}\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0.$$
So,
$$\sum_{j\neq j_0}v_j^2\pi_{W_j}\Lambda^*_j\tilde{\Lambda_j}\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=0.$$
Let $u_{j_0, j}:=\delta_{j_0, j}g_0$, thus
$$\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j u_{j_0, j}.$$
Then, by Theorem \ref{min}, we have
$$\sum_{j\in\Bbb J}\Vert u_{j_0, j}\Vert^2=\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\Vert^2+\sum_{j\in\Bbb J}\Vert v_j^2\tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0-u_{j_0, j}\Vert^2.$$
Consequently,
$$\Vert g_0\Vert^2=\Vert g_0\Vert^2+\sum_{j\neq j_0}(v_j^2+v_j^4)\Vert \tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\Vert^2$$
and we get $\tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=0$.
Therefore,
$$\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=\tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=0.$$
But, $g_0=\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=\Lambda_{j_0}\pi_{W_{j_0}}S^{-1}_{\Lambda}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\neq0$, which implies that $S^{-1}_{\Lambda}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\neq0$ and this means that $(W_j, \Lambda_j)_{j\neq j_0}$ is not gf-complete in $H$.
\textit{(2).} Since $\pi_{W_{j_0}}\Lambda^*_{j_0}\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0=f_0\neq0$, we obtain $\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0\neq0$ and
$$\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}\pi_{W_{j_0}}\Lambda^*_{j_0}\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0=\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0.$$
Now, the conclusion follows from \textit{(1)}.
\textit{(3)}. Using (\ref{frame}), we have for any $f\in H$
$$\Lambda_{j_0}\pi_{W_{j_0}}f=\sum_{j\in\Bbb J}v_j^2\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_j}}\tilde{\Lambda}^*_j\Lambda_j\pi_{W_j}f.$$
So,
\begin{equation}\label{com}(I-\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_{j_0}}}\tilde{\Lambda}^*_{j_0})\Lambda_{j_0}\pi_{W_{j_0}}f=\sum_{j\neq j_0}v_j^2\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_j}}\tilde{\Lambda}^*_j\Lambda_j\pi_{W_j}f.
\end{equation}
On the other hand, we can write
\begin{small}
\begin{align*}
\big\Vert\sum_{j\neq j_0}v_j^2\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_j}}\tilde{\Lambda}^*_j\Lambda_j\pi_{W_j}f\big\Vert^2&=\sup_{\Vert g\Vert=1}\big\vert\sum_{j\neq j_0}v_j^2\big\langle \Lambda_j\pi_{W_j}f, \tilde{\Lambda}_j\pi_{\tilde{W}_j}\pi_{W_{j_0}}\Lambda^*_{j_0}g \big\rangle\big\vert^2\\
&\leq\big(\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2\big)\sup_{\Vert g\Vert=1}\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda}_j\pi_{\tilde{W}_j}\pi_{W_{j_0}}\Lambda^*_{j_0}g\Vert^2\\
&\leq\tilde{B}\Vert\Lambda_{j_0}\Vert^2(\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2)
\end{align*}
\end{small}
where, $\tilde{B}$ is the upper bound of $\tilde\Lambda$. Now, by (\ref{com}), we have
\begin{equation*}
\Vert\Lambda_{j_0}\pi_{W_{j_0}}f\Vert^2\leq\Vert(I-\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_{j_0}}}\tilde{\Lambda}^*_{j_0})^{-1}\Vert^2 \tilde{B}\Vert\Lambda_{j_0}\Vert^2(\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2).
\end{equation*}
Therefore, there is a number $C>0$ such that
$$\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2\leq C\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2$$
and we conclude for each $f\in H$
$$\frac{A}{C}\Vert f\Vert^2\leq\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2\leq B\Vert f\Vert^2.$$
\end{proof}
\begin{theorem}
$\Lambda$ is a g-fusion frame for $H$ with bounds $A,B$ if and only if the following two conditions are satisfied:
\begin{enumerate}
\item[(I)] The pair $(W_j, \Lambda_j)$ is gf-complete.
\item[(II)] The operator
$$T_{\Lambda}: \lbrace f_j\rbrace_{j\in\Bbb J}\mapsto \sum_{j\in\Bbb J}v_j\pi_{W_j}\Lambda_j^* f_j$$
is well-defined from $\mathscr{H}_2$ into $H$, and for each $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathcal{N}^{\perp}_{T_{\Lambda}}$,
\begin{equation}\label{e7}
A\sum_{j\in\Bbb J}\Vert f_j\Vert^2\leq \Vert T_{\Lambda}\lbrace f_j\rbrace_{j\in\Bbb J}\Vert^2\leq B\sum_{j\in\Bbb J}\Vert f_j\Vert^2.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
First, suppose that $\Lambda$ is a g-fusion frame. By Proposition \ref{p3}, (I) is satisfied. By Theorem \ref{t2}, $T_{\Lambda}$ is well-defined from $\mathscr{H}_2$ into $H$ and $\Vert T_{\Lambda}\Vert^2\leq B$, which proves the right-hand inequality in (\ref{e7}).
By Theorem \ref{2.3}, $T_{\Lambda}$ is surjective, so $\mathcal{R}_{T^*_{\Lambda}}$ is closed. Thus
$$\mathcal{N}^{\perp}_{T_{\Lambda}}=\overline{\mathcal{R}_{T^*_{\Lambda}}}=\mathcal{R}_{T^*_{\Lambda}}.$$
Now, if $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathcal{N}^{\perp}_{T_{\Lambda}}$, then
$$\lbrace f_j\rbrace_{j\in\Bbb J}=T^*_{\Lambda}g=\lbrace v_j\Lambda_j \pi_{W_j}g\rbrace_{j\in\Bbb J}$$
for some $g\in H$. Therefore
\begin{align*}
(\sum_{j\in\Bbb J}\Vert f_j\Vert^2)^2&=\big(\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_j \pi_{W_j}g\Vert^2\big)^2=\vert\langle S_{\Lambda}(g), g\rangle\vert^2\\
&\leq\Vert S_{\Lambda}(g)\Vert^2 \Vert g\Vert^2\\
&\leq\Vert S_{\Lambda}(g)\Vert^2 \big(\frac{1}{A}\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_j \pi_{W_j}g\Vert^2\big).
\end{align*}
This implies that
$$A\sum_{j\in\Bbb J}\Vert f_j\Vert^2\leq\Vert S_{\Lambda}(g)\Vert^2=\Vert T_{\Lambda}\lbrace f_j\rbrace_{j\in\Bbb J}\Vert^2$$
and (II) is proved.
Conversely, let $(W_j, \Lambda_j)$ be gf-complete and suppose that inequality
(\ref{e7}) is satisfied. Let $\lbrace t_j\rbrace_{j\in\Bbb
J}=\lbrace f_j\rbrace_{j\in\Bbb J}+\lbrace g_j\rbrace_{j\in\Bbb J}$,
where $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathcal{N}_{T_{\Lambda}}$
and $\lbrace g_j\rbrace_{j\in\Bbb
J}\in\mathcal{N}_{T_{\Lambda}}^{\perp}$. We get
\begin{align*}
\Vert T_{\Lambda}\lbrace t_j\rbrace_{j\in\Bbb J}\Vert^2&=\Vert T_{\Lambda}\lbrace g_j\rbrace_{j\in\Bbb J}\Vert^2\\
&\leq B\sum_{j\in\Bbb J}\Vert g_j\Vert^2\\
&\leq B\Vert \lbrace f_j\rbrace+\lbrace g_j\rbrace\Vert^2\\
&=B\Vert\lbrace t_j\rbrace_{j\in\Bbb J}\Vert^2.
\end{align*}
Thus, $\Lambda$ is a g-fusion Bessel sequence.
Assume that $\lbrace y_n\rbrace$ is a sequence of members of $\mathcal{R}_{T_{\Lambda}}$ such that $y_n\rightarrow y$ for some $y\in H$. So, there is a sequence $\lbrace x_n\rbrace$ in $\mathcal{N}^{\perp}_{T_{\Lambda}}$ such that $T_{\Lambda}\{x_n\}=y_n$. By (\ref{e7}), we obtain
\begin{align*}
A\Vert\lbrace x_n-x_m\rbrace\Vert^2&\leq\Vert T_{\Lambda}\lbrace x_n-x_m\rbrace\Vert^2\\
&=\Vert T_{\Lambda}\lbrace x_n\rbrace -T_{\Lambda}\lbrace x_m\rbrace\Vert^2\\
&=\Vert y_n-y_m\Vert^2.
\end{align*}
Therefore, $\lbrace x_n\rbrace$ is a Cauchy sequence in $\mathscr{H}_2$ and hence converges to some $x\in \mathscr{H}_2$. By continuity of $T_{\Lambda}$, we have $y=T_{\Lambda}(x)\in\mathcal{R}_{T_{\Lambda}}$. Hence $\mathcal{R}_{T_{\Lambda}}$ is closed. Since $\mbox{span}\lbrace \pi_{W_j}\Lambda_{j}^{\ast}(H_j)\rbrace\subseteq\mathcal{R}_{T_{\Lambda}}$, by (I) we get $\mathcal{R}_{T_{\Lambda}}=H$.
Let $T_{\Lambda}^\dagger$ denote the pseudo-inverse of $T_{\Lambda}$. By Lemma \ref{Ru}(3), $T_{\Lambda}T_{\Lambda}^{\dagger}$ is the orthogonal projection onto $\mathcal{R}_{T_{\Lambda}}=H$. Thus for any $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2 $,
\begin{eqnarray*}
A\Vert T_{\Lambda}^{\dagger}T_{\Lambda}\lbrace f_j\rbrace\Vert^2\leq\Vert T_{\Lambda}T_{\Lambda}^{\dagger}T_{\Lambda}\lbrace f_j\rbrace \Vert^2=\Vert T_{\Lambda}\lbrace f_j\rbrace\Vert^2.
\end{eqnarray*}
By Lemma \ref{Ru}(5), $\mathcal{N}_{{T}_{\Lambda}^{\dagger}}=\mathcal{R}^{\bot}_{T_{\Lambda}}$, therefore
\begin{eqnarray*}
\Vert T_{\Lambda}^\dagger\Vert^2\leq\frac{1}{A}.
\end{eqnarray*}
Also by Lemma \ref{Ru}(2), we have
$$ \Vert(T_{\Lambda}^\ast)^{\dagger}\Vert^2\leq\frac{1}{A}.$$
But $(T_{\Lambda}^\ast)^{\dagger}T_{\Lambda}^\ast$ is the
orthogonal projection onto
\begin{eqnarray*}
\mathcal{R}_{(T_{\Lambda}^\ast)^\dagger}=\mathcal{R}_{(T_{\Lambda}^\dagger)^\ast}=\mathcal{N}_{T_{\Lambda}^\dagger}^{\bot}=\mathcal{R}_{T_{\Lambda}}=H.
\end{eqnarray*}
So, for all $f\in H$
\begin{align*}
\Vert f\Vert^2&=\Vert(T_{\Lambda}^\ast)^{\dagger}T_{\Lambda}^\ast f\Vert^2\\
&\leq \frac{1}{A}\Vert T_{\Lambda}^\ast f\Vert^2\\
&=\frac{1}{A}\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_j \pi_{W_j}f\Vert^2.
\end{align*}
This implies that $\Lambda$ satisfies the lower g-fusion frame condition.
\end{proof}
Now, we can define a g-fusion frame sequence in the Hilbert space.
\begin{definition}
We say that $\Lambda$ is a \textit{g-fusion frame sequence} if it is a g-fusion frame for $\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace$.
\end{definition}
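For a minimal illustration of the distinction (an example we add here for concreteness), take $H=\Bbb C^3$ with orthonormal basis $\lbrace e_1,e_2,e_3\rbrace$, $\Bbb J=\lbrace 1,2\rbrace$, $W_j=H_j=\mbox{span}\lbrace e_j\rbrace$, $\Lambda_j=\pi_{W_j}$ and $v_j=1$. Then $\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace=\mbox{span}\lbrace e_1,e_2\rbrace$ and, for every $f$ in this subspace,
$$\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2=\vert\langle f,e_1\rangle\vert^2+\vert\langle f,e_2\rangle\vert^2=\Vert f\Vert^2,$$
so this family is a g-fusion frame sequence (with bounds $A=B=1$), but it is not a g-fusion frame for $H$, since the sum vanishes for $f=e_3$.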
\begin{theorem}\label{2.6}
$\Lambda$ is a g-fusion frame sequence if and only if the operator
\begin{align*}
T_{\Lambda}&:\mathscr{H}_2\longrightarrow H\\
T_{\Lambda}(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j
\end{align*}
is well-defined and has closed range.
\end{theorem}
\begin{proof}
By Theorem \ref{2.3}, it is enough to prove that if $T_{\Lambda}$ has closed range, then $\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace=\mathcal{R}_{T_{\Lambda}}$.
Let $f\in\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace$, then
$$f=\lim_{n\rightarrow\infty}g_n , \ \ \ g_n\in\mbox{span}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace\subseteq \mathcal{R}_{T_{\Lambda}}=\overline{\mathcal{R}}_{T_{\Lambda}}.$$
Therefore, $\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace\subseteq\overline{\mathcal{R}}_{T_{\Lambda}}=\mathcal{R}_{T_{\Lambda}}$. On the other hand, if $f\in\mathcal{R}_{T_{\Lambda}}$, then $f=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda^*_j f_j$ is a limit of finite sums of elements of $\mbox{span}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace$, so that
$$f\in\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace$$
and the proof is completed.
\end{proof}
\begin{theorem}
$\Lambda$ is a g-fusion frame sequence if and only if
\begin{equation}\label{4}
f \longmapsto \lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J}
\end{equation}
defines a map from $H$ onto a closed subspace of $\mathscr{H}_2$.
\end{theorem}
\begin{proof}
Let $\Lambda$ be a g-fusion frame sequence. Then, by Theorem \ref{2.6}, $T_{\Lambda}$ is well-defined and $\mathcal{R}_{T_{\Lambda}}$ is closed. So, $T^*_{\Lambda}$, which is precisely the map (\ref{4}), is well-defined and has closed range. Conversely, by hypothesis, for all $f\in H$
$$\sum_{j\in\Bbb J}\Vert v_j \Lambda_j \pi_{W_j}f\Vert^2<\infty.$$
Let
$$B:=\sup\big\lbrace \sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2 : \ \ f\in H, \ \Vert f\Vert=1\big\rbrace$$
and suppose that $g_j\in H_j$ and that $\Bbb I\subseteq\Bbb J$ is finite. We can write
\begin{align*}
\Vert\sum_{j\in\Bbb I}v_j \pi_{W_j}\Lambda^*_j g_j\Vert^2&=\Big(\sup_{\Vert f\Vert=1}\big\vert\langle\sum_{j\in\Bbb I}v_j \pi_{W_j}\Lambda^*_j g_j, f\rangle\big\vert\Big)^2\\
&\leq\Big(\sup_{\Vert f\Vert=1}\sum_{j\in\Bbb I}v_j\big\vert\langle g_j, \Lambda_j \pi_{W_j}f\rangle\big\vert\Big)^2\\
&\leq\big(\sum_{j\in\Bbb I}\Vert g_j\Vert^2\big)\big(\sup_{\Vert f\Vert=1}\sum_{j\in\Bbb I}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\big)\\
&\leq B\big(\sum_{j\in\Bbb I}\Vert g_j\Vert^2\big).
\end{align*}
Thus, by Corollary \ref{cor}, $\Lambda$ is a g-fusion Bessel sequence for $H$. Therefore, $T_{\Lambda}$ is well-defined and bounded. Furthermore, if the range of the map (\ref{4}) is closed, the same is true for $T_{\Lambda}$. So, by Theorem \ref{2.6}, $\Lambda$ is a g-fusion frame sequence.
\end{proof}
\begin{theorem}
Let $\Lambda=(W_j, \Lambda_j, v_j)$ be a g-fusion frame sequence. Then, it is a g-fusion frame for $H$ if and only if the map
\begin{equation}\label{5}
f \longmapsto \lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J}
\end{equation}
from $H$ onto a closed subspace of $\mathscr{H}_2$ is injective.
\end{theorem}
\begin{proof}
Suppose that the map (\ref{5}) is injective and that $v_j \Lambda_j \pi_{W_j}f=0$ for all $j\in\Bbb J$. Then the image of $f$ under the map is zero, so $f=0$. This means that $(W_j, \Lambda_j)$ is gf-complete. Since $\Lambda$ is a g-fusion frame sequence, it follows that it is a g-fusion frame for $H$.
The converse is evident.
\end{proof}
\begin{theorem}
Let $\Lambda$ be a g-fusion frame for $H$ and $u\in\mathcal{B}(H)$. Then $\Gamma:=(uW_j, \Lambda_j u^*, v_j)$ is a g-fusion frame sequence if and only if $u$ has closed range.
\end{theorem}
\begin{proof}
Assume that $\Gamma$ is a g-fusion frame sequence. So, by Theorem \ref{2.6}, $T_{\Lambda u^*}$ is a well-defined operator from $\mathscr{H}_2$ into $H$ with closed range. If $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$, then
\begin{align*}
uT_{\Lambda}\lbrace f_j\rbrace_{j\in\Bbb J}&=\sum_{j\in\Bbb J}v_ju\pi_{W_j}\Lambda_j^* f_j\\
&=\sum_{j\in\Bbb J}v_j\pi_{uW_j}u\Lambda_j^* f_j\\
&=\sum_{j\in\Bbb J}v_j\pi_{uW_j}(\Lambda_j u^*)^* f_j\\
&=T_{\Lambda u^*}\lbrace f_j\rbrace_{j\in\Bbb J},
\end{align*}
therefore $uT_{\Lambda}=T_{\Lambda u^*}$. Thus $uT_{\Lambda}$ has closed range too. Let $y\in\mathcal{R}_u$; then there is $x\in H$ such that $u(x)=y$. By Theorem \ref{2.3}, $T_{\Lambda}$ is surjective, so there exists $\{f_j\}_{j\in\Bbb J}\in\mathscr{H}_2$ such that
$$y=u(T_{\Lambda}\{f_j\}_{j\in\Bbb J}).$$
Thus, $\mathcal{R}_{u}=\mathcal{R}_{uT_{\Lambda}}$ and $u$ has closed range.
For the opposite implication, let
\begin{align*}
T_{\Lambda u^*}:&\mathscr{H}_2\longrightarrow H\\
T_{\Lambda u^*}\lbrace f_j\rbrace_{j\in\Bbb J}&=\sum_{j\in\Bbb J}v_j\pi_{uW_j}(\Lambda_j u^*)^* f_j.
\end{align*}
Hence, $T_{\Lambda u^*}=uT_{\Lambda}$. Since $T_{\Lambda}$ is surjective, $\mathcal{R}_{T_{\Lambda u^*}}=\mathcal{R}_{u}$, which is closed by hypothesis, and by Theorem \ref{t2}, $T_{\Lambda u^*}$ is well-defined. Therefore, by Theorem \ref{2.6}, the proof is completed.
\end{proof}
\section{Conclusions}
In this paper, we defined g-fusion frames and their associated operators and transferred some standard properties of ordinary frames to this setting. We then introduced dual g-fusion frames and gf-completeness and proved a basic theorem about deleting a member (Theorem \ref{del}). In that theorem, the operator defined in part \textit{3} might be replaced by other operators analogous to those of parts \textit{1} and \textit{2}; this is an open problem at the moment. Finally, we defined g-fusion frame sequences and characterized their relationship with closed range operators.
\end{document}
\begin{document}
\noindent
\Large
{\bf RELATIVISTIC QUANTUM MECHANICS AND THE BOHMIAN INTERPRETATION}
\normalsize
\vspace*{1cm}
\noindent
{\bf Hrvoje Nikoli\'c}
\vspace*{0.5cm}
\noindent
{\it
Theoretical Physics Division \\
Rudjer Bo\v{s}kovi\'{c} Institute \\
P.O.B. 180, HR-10002 Zagreb, Croatia \\
E-mail: hrvoje@thphys.irb.hr}
\vspace*{2cm}
\noindent
Conventional relativistic quantum mechanics, based on the Klein-Gordon
equation, does not possess a natural probabilistic interpretation
in configuration space.
The Bohmian interpretation, in which probabilities play a secondary
role, provides a viable interpretation of relativistic quantum
mechanics. We formulate the Bohmian interpretation of many-particle
wave functions in a Lorentz-covariant way.
In contrast with the nonrelativistic case, the relativistic
Bohmian interpretation may lead to measurable predictions on particle
positions even when
the conventional interpretation does not lead to such predictions.
\vspace*{0.5cm}
\noindent
Key words: relativistic quantum mechanics, Klein-Gordon equation,
Bohmian interpretation.
\section{INTRODUCTION}
\label{secI}
It is well known that the probabilistic interpretation
of the nonrelativistic Schr\"odinger equation for
particles without spin does not work for its relativistic
generalization -- the Klein-Gordon equation
(see, e.g., Ref.~\cite{bjor1}).
This is related to the fact that the Klein-Gordon equation
\begin{equation}\label{KG}
(\partial^{\mu}\partial_{\mu}+m^2)\psi(x)=0
\end{equation}
(where $x=(t,\bf{x})$ and we take $\hbar=c=1$)
contains a second time derivative, instead of the first time
derivative that appears in the Schr\"odinger equation.
The quantity $|\psi(x)|^2$ cannot be interpreted as a
probability density for a particle to have the position $\bf{x}$
at time $t$ because then the total probability
$\int d^3x |\psi(x)|^2$ would not be conserved in time.
One can introduce the conserved current
\begin{equation}\label{cur}
j^{\mu}=i\psi^* \!\stackrel{\leftrightarrow\;}{\partial^{\mu}}\! \psi
\end{equation}
(where $a \!\stackrel{\leftrightarrow\;}{\partial^{\mu}}\! b \equiv
a\partial^{\mu}b - b\partial^{\mu}a$),
but the time component $j^0(x)$ cannot be interpreted as a probability
density because it is not positive definite.
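Conservation of (\ref{cur}) is verified directly from (\ref{KG}) and its complex conjugate:
$$\partial_{\mu}j^{\mu}=i\left(\psi^*\partial^{\mu}\partial_{\mu}\psi-\psi\,\partial^{\mu}\partial_{\mu}\psi^*\right)=i\left(-m^2\psi^*\psi+m^2\psi\psi^*\right)=0,$$
the terms quadratic in first derivatives canceling pairwise.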
Note that this is not a problem for the scattering formalism
where one assumes that wave functions are positive-frequency
plane waves asymptotically. However, quantum theory is more
than a theory of scattering and the problem of negative
$j^0$ arises outside the scattering regime.
The usual solution of this problem consists in adopting the
second quantization of $\psi$ (see, e.g., Ref.~\cite{bjor2}), where
$\psi$ is not a wave function determining probabilities, but
an observable (called field) that satisfies the quantum
uncertainty laws. However, if, at the fundamental level,
$\psi$ should not be interpreted as a wave function
that determines probabilities of particle positions, then
it is not clear at all why
such an interpretation of $\psi$ is in such good
agreement with experiments for nonrelativistic particles.
Using the Bohmian interpretation
\cite{bohm,bohmPR1,bohmPR2,holPR,holbook} and the theory of
particle currents \cite{nikcur1,nikcur2,nikcur3},
a theory that consistently combines
the postulates of second quantization
(relativistic quantum field theory) with the postulates of
first quantization
(quantum mechanics) has recently been proposed in
Refs.~\cite{nikoldbb1,nikoldbb2}. The comparative advantages of the Bohmian
interpretation over other interpretations applied to relativistic
quantum mechanics have been discussed in Ref.~\cite{nikoldbb3}.
In Refs.~\cite{nikoldbb1,nikoldbb2}, the equations that determine
the Bohmian trajectories of relativistic quantum particles
described by {\em many-particle} wave functions
were written in a form that required a preferred time
coordinate. Indeed, it is often argued that any relativistic
hidden variable theory compatible with quantum nonlocality
must introduce a preferred Lorentz frame. However, as
demonstrated in Ref.~\cite{bern}, this is not necessarily so.
In the present paper, we further elaborate some of the ideas introduced
in Refs.~\cite{nikoldbb1} and \cite{bern}
to formulate a fully Lorentz-covariant Bohmian interpretation of
relativistic quantum mechanics for particles without spin.
(The generalization to other spins is straightforward.)
As in Refs.~\cite{bern,nikoldbb1}, it appears that particles may
be superluminal, i.e., faster than light in the vacuum.
Before discussing how this happens in the framework of the relativistic
Bohmian interpretation, it is important to emphasize that,
contrary to frequent claims, the principle of Lorentz covariance
does {\em not} forbid superluminal velocities and superluminal
velocities do {\em not} lead to
causal paradoxes (see, e.g., Refs.~\cite{lib,nikolcaus}). Indeed, there is
ample evidence that various relativistic interactions
may cause ordinary particles or waves to propagate superluminally
\cite{gar,chu,drum,sch,chi,bol,chi2,nik}.
As noted in Ref.~\cite{bern}, the Lorentz-covariant Bohmian interpretation
of the many-particle Klein-Gordon equation is not
{\em statistically transparent}, i.e., the statistical distribution
of particle positions cannot be calculated in a simple way
from the wave function alone without the knowledge of
particle trajectories. This lack of statistical transparency
is one of the main objects of the present paper, the physical
meaning of which is qualitatively discussed in Sec.~\ref{secST}.
The quantitative formulation of the Lorentz-covariant Bohmian interpretation
is given in Sec.~\ref{secBM}, while some statistical predictions
are discussed in Sec.~\ref{secSP}.
The conclusions are drawn in Sec.~\ref{secCon}.
\section{STATISTICAL TRANSPARENCY}
\label{secST}
Consider a dynamical theory of configuration variables
that may or may not involve the existence
of trajectories of configuration
variables. We say that this theory
is statistically transparent
if one can calculate the probabilities for possible outcomes
of measurements of configuration variables in a natural
way that, in particular, does {\em not}
refer to any information on the
(possibly existing) trajectories.
A remarkable property of nonrelativistic
quantum mechanics (QM)
is that it {\em is} statistically transparent.
In other words, one can calculate
the probability density for particle positions directly from the
wave function, without knowing particle trajectories.
Therefore, from the practical calculational point of view,
the concept of a particle trajectory in nonrelativistic QM
is simply superfluous.
This is certainly the main reason
that the Bohmian interpretation of
nonrelativistic QM is ignored by most physicists,
and is probably the main reason for
the wide belief that, in nonrelativistic QM,
particle trajectories simply do not exist.
In other words, statistical transparency is the main reason
for the wide belief that QM is a fundamentally probabilistic
theory. Without statistical transparency, there would
no longer be a good reason for such a belief.
However, nonrelativistic QM is certainly not the most fundamental
theory that we know. Is statistical transparency a fundamental
principle, or just a property of some approximative
theories? For example, what about relativistic (i.e., Lorentz-covariant)
theories without a preferred time? The following intentionally
chosen facts suggest that statistical transparency may not be
a fundamental principle of nature:
\begin{enumerate}
\item Classical mechanics, either nonrelativistic or relativistic,
is {\em not} statistically transparent.
\item Relativistic quantum mechanics based on the Klein-Gordon
equation or some of its generalizations is {\em not}
statistically transparent (owing to the reasons discussed in Sec.~\ref{secI}).
\item The relativistic Bohmian interpretation of the
Dirac equation is statistically transparent,
but the corresponding
many-particle relativistic generalization is {\em not}
statistically transparent \cite{bern} (unless a preferred time
coordinate is determined in a yet unknown dynamical way \cite{durr99}).
\item Nonrelativistic QM is statistically transparent,
but it is {\em not completely} statistically transparent,
in the sense that, for a fixed time $t$, it gives the probability
density $\rho(x^1,x^2,x^3)$, but, for example,
for a fixed $x^3$, it does not give a probability
density $\rho(x^1,x^2,t)$.
\item The background-independent quantum gravity based on
the Wheeler-DeWitt equation lacks the notion of time
and is {\em not} statistically transparent \cite{kuc,ish}.
\end{enumerate}
Note that, given fact 4, one should not be surprised
by the fact that relativistic QM may not be statistically transparent.
Since time and space should play equal roles in a relativistic
theory, from fact 4 one might expect that
relativistic QM should be either
completely statistically transparent or not statistically
transparent at all. For the Klein-Gordon equation, it is
the latter possibility that is actually realized.
The lack of statistical transparency does not automatically imply
that the probabilities cannot be calculated at all. For example,
in classical mechanics, if we know the probability distribution
$\rho({\bf x})$ at some initial time and
the exact particle velocity for each possible initial position
${\bf x}$, we can calculate $\rho({\bf x})$ at any
other time by calculating {\em particle trajectories} for all
possible initial positions.
(For practical purposes, it is usually sufficient to calculate
the trajectories for a large but finite sample of initial positions.)
In a similar way, in principle, {\em one can obtain statistical
predictions from any quantum theory provided that deterministic
trajectories do exist}. On the other hand, if deterministic
trajectories do not exist, then it is not clear at all how to assign
{\em any} physical interpretation to a quantum theory that
lacks statistical transparency.
Since most of the interpretations
of QM do not incorporate deterministic trajectories, we conclude
that, in general, {\em the Bohmian interpretation of quantum theory
is more powerful than most other interpretations}
(see also Ref.~\cite{nikoldbb3}).
The lack of statistical transparency in relativistic QM
translates into the lack of statistical transparency in relativistic
Bohmian mechanics. This feature is often considered as a drawback
of Bohmian mechanics, which motivates investigations of
various modifications
of the formalism (e.g., based on the introduction of a preferred
time coordinate \cite{holpra} or on the representation of the
Klein-Gordon equation by a first-order differential equation
\cite{holNC}),
such that statistical transparency is recovered.
Such modifications typically imply statistical transparency
even without the Bohmian interpretation, making the Bohmian
interpretation unnecessary for practical predictions.
In our view, the lack of statistical transparency
is not a drawback of a Bohmian theory,
but rather its {\em virtue}. The reason is the following.
The statistical predictions of nonrelativistic Bohmian
mechanics are equivalent to those of the conventional
interpretation, which makes the scientific value
of the Bohmian interpretation questionable, because its basic
assumption - the existence of particle trajectories - cannot
be verified experimentally. On the other hand, for a quantum
theory without statistical transparency, the statistical
predictions of a Bohmian interpretation are {\em not} equivalent
to those of the conventional interpretation, simply because,
in general,
the conventional interpretation does not provide any
statistical predictions for configuration variables at all.
This opens the possibility
of verifying the validity of a Bohmian interpretation
experimentally,
i.e., of obtaining some statistical predictions by explicitly
calculating the trajectories and comparing the predictions with
experiments. If the predictions turn out to disagree with experiments,
then one may conclude that this particular Bohmian theory
is wrong. (One of the main criteria for
a meaningful scientific theory is that it can be falsified.)
If the predictions turn out to agree
with experiments, then one can conclude that the theory
is correct and that the particle trajectories
are very likely to really exist (instead of merely being a calculational
tool \cite{lop}),
because the same predictions cannot be obtained without
the trajectories.
In Bohmian mechanics without statistical transparency,
one can obtain statistical predictions at later times if
the probability distribution $\rho({\bf x})$ is known at the initial
time. However, how to know this initial $\rho$? Since the theory is not
statistically transparent, in general, there
is no way of knowing it in a purely theoretical way.
Fortunately, in some special cases,
it is possible to know the initial $\rho$ in a purely theoretical way,
even without statistical
transparency at all times. For example,
assume that particles
are slow initially, so that the nonrelativistic approximation
can be used. Then one can invoke either the nonrelativistic
typicality argument \cite{durr1,durr2} or the nonrelativistic
quantum H-theorem \cite{val} to conclude that
the quantum equilibrium takes place initially, i.e.,
that the initial
probability distribution is given by $\rho=\psi^*\psi$.
Then, assume that later interactions are such that particles
become relativistic, so that $\psi$ is a relativistic solution
which is not statistically transparent
(i.e., the quantity $j^0$ is not positive definite)
at later times. In such a case,
the statistical predictions for particle positions at later times
can be obtained with the Bohmian interpretation, but not with
the conventional interpretation.
Such predictions can be compared with experiments.
This motivates us to study the relativistic Bohmian interpretation
at the quantiative level, which we do in the subsequent sections.
\section{LORENTZ-COVARIANT BOHMIAN MECHANICS}
\label{secBM}
Let $\hat{\phi}(x)$ be a scalar field operator satisfying
the Klein-Gordon equation (\ref{KG}).
For simplicity, we take $\hat{\phi}$ to be a hermitian uncharged field,
so that
the negative values of the time component of the current (to be defined
below)
cannot be interpreted as negative-charge densities.
Let $|0\rangle$ be the vacuum and $|n\rangle$ an arbitrary
$n$-particle state. These states are Lorentz-invariant objects.
The corresponding $n$-particle wave function is \cite{schweber,nikoldbb1}
\begin{equation}\label{wf}
\psi(x_1,\ldots ,x_n)=(n!)^{-1/2}S_{\{ x_a\} }
\langle 0|\hat{\phi}(x_1)\cdots\hat{\phi}(x_n)|n\rangle .
\end{equation}
The symbol $S_{\{ x_a\} }$ ($a=1,\ldots ,n$)
denotes the symmetrization over all $x_a$,
which is needed because the field operators do not commute
for nonequal times.
The wave function $\psi$ satisfies $n$ Klein-Gordon equations
\begin{equation}\label{KGn}
(\partial_a^{\mu}\partial_{a\mu}+m^2)\psi(x_1,\ldots ,x_n)=0 ,
\end{equation}
one for each $x_a$.
Although the operator $\hat{\phi}$ is hermitian, the nondiagonal
matrix element $\psi$ defined by (\ref{wf}) is complex.
Therefore,
one can introduce $n$ real 4-currents
\begin{equation}\label{curn}
j^{\mu}_a=i\psi^* \!\stackrel{\leftrightarrow\;}{\partial^{\mu}_a}\! \psi ,
\end{equation}
each of which is separately conserved:
\begin{equation}\label{cons}
\partial^{\mu}_a j_{a\mu}=0.
\end{equation}
Equation (\ref{KGn}) also implies
\begin{equation}\label{KGs}
\left( \sum_a\partial_a^{\mu}\partial_{a\mu}+nm^2 \right)
\psi(x_1,\ldots ,x_n)=0 ,
\end{equation}
while (\ref{cons}) implies
\begin{equation}\label{conss}
\sum_a\partial^{\mu}_a j_{a\mu}=0.
\end{equation}
Next we write $\psi=Re^{iS}$, where $R$ and $S$ are real
functions. Equation (\ref{KGs})
is then equivalent to a set of two real equations
\begin{equation}\label{cont}
\sum_a\partial_a^{\mu}(R^2\partial_{a\mu}S)=0,
\end{equation}
\begin{equation}\label{HJ}
-\frac{\sum_a(\partial_a^{\mu}S)(\partial_{a\mu}S)}{2m} +\frac{nm}{2} +Q=0,
\end{equation}
where
\begin{equation}\label{Q}
Q=\frac{1}{2m}\frac{\sum_a\partial_a^{\mu}\partial_{a\mu}R}{R}
\end{equation}
is the quantum potential. Eq.~(\ref{cont}) is equivalent to
(\ref{conss}), while (\ref{HJ}) is the quantum analog of the
relativistic Hamilton-Jacobi equation for $n$ particles.
The corresponding classical Hamilton-Jacobi equation takes the same
form as (\ref{HJ}), except for the fact that there is no $Q$ term
in the classical case.
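This splitting can be checked by direct substitution. With $\psi=Re^{iS}$, one has
$$\partial_a^{\mu}\partial_{a\mu}\psi=\left[\partial_a^{\mu}\partial_{a\mu}R
-R(\partial_a^{\mu}S)(\partial_{a\mu}S)
+\frac{i}{R}\,\partial_a^{\mu}(R^2\partial_{a\mu}S)\right]e^{iS},$$
so the imaginary part of (\ref{KGs}) reproduces (\ref{cont}), while the real part, divided by $2mR$, reproduces (\ref{HJ}).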
The Bohmian interpretation consists in postulating the existence
of particle trajectories $x_a^{\mu}(s)$, satisfying
\begin{equation}\label{traj}
\frac{dx_a^{\mu}}{ds} = -\frac{1}{m}\partial_a^{\mu}S .
\end{equation}
Here $s$ is an affine parameter along the $n$ curves
in the 4-dimensional Minkowski spacetime. These $n$ curves can also
be viewed as one curve in a $4n$-dimensional configuration
space. Equation (\ref{traj}) has a form identical
to that of the corresponding
classical relativistic equation. Equation (\ref{traj})
can also be written in another form, as
\begin{equation}\label{traj2}
\frac{dx_a^{\mu}}{ds} = \frac{j_a^{\mu}}{2m\psi^*\psi} .
\end{equation}
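The equivalence of (\ref{traj}) and (\ref{traj2}) follows by expressing (\ref{curn}) in terms of $R$ and $S$:
$$j^{\mu}_a=i\psi^* \!\stackrel{\leftrightarrow\;}{\partial^{\mu}_a}\! \psi
=-2R^2\partial_a^{\mu}S=-2\psi^*\psi\,\partial_a^{\mu}S,$$
so that $j_a^{\mu}/2m\psi^*\psi=-\partial_a^{\mu}S/m$.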
From (\ref{traj}), (\ref{HJ}), and the identity
\begin{equation}
\frac{d}{ds}=\sum_a\frac{dx_a^{\mu}}{ds}\partial_{a\mu},
\end{equation}
one also finds the equation of motion
\begin{equation}\label{eqm}
m\frac{d^2x_a^{\mu}}{ds^2}=\partial_a^{\mu}Q.
\end{equation}
Note that the equations above for the particle trajectories are nonlocal,
but still Lorentz covariant.
The Lorentz covariance is a consequence of the fact that the trajectories
in spacetime do not depend on the choice of the affine parameter
$s$ \cite{bern}. Instead, by choosing $n$ ``initial'' spacetime positions
$x_a$, the $n$ trajectories are uniquely determined by the vector fields
$j_a^{\mu}$ or $-\partial_a^{\mu}S$. More precisely,
{\em the trajectories are integral curves of these vector fields}.
(These two vector fields are equal up to a local positive definite scalar
factor, which implies that they generate the same integral curves.)
The nonlocality is encoded in the fact that the right-hand
side of (\ref{eqm}) depends not only on $x_a$, but also
on all other $x_{a'}$. This is a consequence of the fact that
$Q(x_1,\ldots ,x_n)$ in (\ref{Q}) is not of the form
$\sum_a Q_a(x_a)$, which is also closely related to the fact that
$S(x_1,\ldots ,x_n)$ is not of the form
$\sum_a S_a(x_a)$. The quantities $Q$ and $S$ take these forms
only when the wave function is a product of the form
$\psi(x_1,\ldots ,x_n)=\psi(x_1)\cdots\psi(x_n)$ (recall that the
total wave function must be symmetric for bosons). In this case
there is no entanglement and the nonlocality disappears.
Note also that the fact
that we parametrize all trajectories with the same parameter $s$
is not directly related to the nonlocality, because such a parametrization
can be used even in local classical physics. Indeed, in a canonical approach
with a single classical Hamilton-Jacobi equation
that describes all $4n$ degrees
of freedom, such a parametrization is the most natural.
Conversely,
even in nonlocal quantum physics, once the
$n$ curves $x^{\mu}_a(s)$ are found, one can reparametrize each
of the curves with its own parameter $s_a$. However, when the interactions
are local, then one can even use another parameter for each of the
$n$ particle {\em equations of motion themselves}.
For example, one can replace (\ref{traj}) with
$dx_a^{\mu}/ds_a = -\partial_a^{\mu}S_a/m$. On the other hand,
when the interactions
are not local, then one must use a single parameter $s$ in the
equations of motion for the trajectories, whereas new separate parameters
$s_a$ can be used only after these equations of motion have been solved.
Now consider the nonrelativistic limit. In this limit, {\em all}
wave-function frequencies are (approximately) equal to $m$, so
from (\ref{curn}) one finds that {\em all}
time components of the currents are equal and given by
$j^0_a=2m\psi^*\psi\equiv\tilde{\rho}$, which does not depend on $a$.
Therefore,
in the nonrelativistic limit, the time components of the currents
become positive definite and unique for all particles.
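Explicitly, writing $\psi\approx e^{-im(t_1+\cdots+t_n)}\varphi$ with $\varphi$ slowly varying in all the time variables, (\ref{curn}) gives
$$j^0_a=i\psi^* \!\stackrel{\leftrightarrow\;}{\partial^{0}_a}\! \psi
\approx i\left[(-im)-(im)\right]\psi^*\psi=2m\psi^*\psi$$
for each $a$.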
By introducing the quantity
\begin{equation}
\rho({\bf x}_1,\ldots ,{\bf x}_n,t)=
\tilde{\rho}(x_1,\ldots ,x_n)|_{t_1=\cdots =t_n=t},
\end{equation}
one finds that the nonrelativistic limit of (\ref{conss}) can be written
as
\begin{equation}\label{conssnr}
\frac{\partial\rho}{\partial t}+ \sum_a\partial^{i}_a j_{ai}=0,
\end{equation}
where $i=1,2,3$ label space coordinates. Equation (\ref{conssnr})
implies that $\rho$ can be interpreted as the probability density,
which explains why the nonrelativistic limit leads to
statistical transparency, but not to complete statistical
transparency.
In general,
in the full relativistic case, there is no analog of the function
$\rho$ that would be both positive definite and independent of $a$.
Such a quantity exists only in some special cases.
For example, the independence of
$j^0_a$ on $a$ occurs when the wave function is a product
$\psi(x_1,\ldots ,x_n)=\psi(x_1)\cdots\psi(x_n)$, while the
positivity of $j^0_a$ occurs when $\psi(x)$ is a plane wave
(not a superposition of plane waves) $\psi(x)=e^{-ip_{\mu}x^{\mu}}$
with a positive frequency $p_0$.
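For instance, for a single particle with $\psi(x)=e^{-ip_{\mu}x^{\mu}}$ and $p_0>0$, Eq.~(\ref{cur}) gives
$$j^{\mu}=i\psi^* \!\stackrel{\leftrightarrow\;}{\partial^{\mu}}\! \psi=2p^{\mu}\psi^*\psi=2p^{\mu},$$
so $j^0=2p_0>0$ everywhere.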
In such special cases,
one can introduce the quantity $\rho$ in a way similar to that
of the nonrelativistic limit, leading to an equation
of the form (\ref{conssnr}) valid also in the relativistic
case.
Note also that the 4-currents $j^{\mu}_a$ do not need
to be timelike, but locally can be lightlike or even spacelike.
This implies that massive particles may move with the
velocity of light or even faster. As shown in Ref.~\cite{nikoldbb1},
such superluminal velocities cannot be observed, so there is no
contradiction with experiments.
(To understand why these superluminal velocities cannot be observed,
one has to take into account the theory
of quantum measurements. To emphasize the role of measurements,
we also note that, contrary to frequent claims,
the theory of quantum measurements
{\em is} crucial for understanding why nonrelativistic Bohmian
mechanics is consistent not only
with the conventional statistical predictions for particle positions,
but also with {\em all} statistical
predictions of the conventional interpretation
\cite{bohm,bohmPR1,holbook,nikoldbb1}.)
\section{TOWARDS STATISTICAL PREDICTIONS}
\label{secSP}
In order to understand more closely how interesting
statistical predictions may be obtained from relativistic
Bohmian mechanics, we study the one-particle case described by a
wave function $\psi(x)$. The corresponding
current is $j^{\mu}(x)$, for which we assume regularity
at all $x$. In this case,
one can make statistical predictions without calculating
all trajectories, provided that certain additional
assumptions are fulfilled.
Let $n_{\mu}$ be the unit
future-oriented timelike vector normal to
an initial 3-dimensional spacelike
hypersurface $\Sigma_0$. Assume that the quantity
\begin{equation}\label{j}
j=j^{\mu}n_{\mu}
\end{equation}
has the property $j\geq 0$
everywhere on $\Sigma_0$.
Assume also
that the statistical distribution of particle positions on
$\Sigma_0$ is given by $j$.
We take the normalization such that
\begin{equation}\label{globcons}
\int_{\Sigma_0} dS_{\mu}j^{\mu}=
\int_{\Sigma_0} d^3x \sqrt{|g^{(3)}|}\, j =1,
\end{equation}
where $dS_{\mu}=d^3x \sqrt{|g^{(3)}|}n_{\mu}$
is the covariant measure of the volume
on $\Sigma_0$ and $g^{(3)}$ is the determinant of the induced metric
on $\Sigma_0$.
The normalization above corresponds
to the assumption that we know with certainty that
there is one and only one pointlike particle on $\Sigma_0$.
Let the measurement of particle
positions be performed at a later spacelike hypersurface $\Sigma$.
Given that the initial statistical distribution is given by
$j$ on $\Sigma_0$,
what can we conclude about the statistical distribution of particle
positions on $\Sigma$? From the local conservation
$\partial_{\mu}j^{\mu}=0$, one concludes that (\ref{globcons})
is valid also on $\Sigma$:
\begin{equation}\label{globcons1}
\int_{\Sigma} dS_{\mu}j^{\mu}=
\int_{\Sigma} d^3x \sqrt{|g^{(3)}|}\, j =1,
\end{equation}
where $j$ is defined by (\ref{j}) with respect to an analogous
normal $n_{\mu}$ on $\Sigma$.
If $j\geq 0$ on $\Sigma$, then
the statistical distribution on $\Sigma$ will be given by $j$.
In this case, we have statistical transparency on $\Sigma$.
However, a more interesting question is what if
$j<0$ on some regions of $\Sigma$?
From (\ref{globcons1})
it is clear that the inequality $j<0$ cannot be valid
everywhere on $\Sigma$. Let $\Sigma^-$ be the set of all points
on $\Sigma$ at which $j<0$.
For every point on $\Sigma^-$ there exists a unique point on
$\Sigma-\Sigma^-$, such that these two points are connected with a
trajectory in ${\cal M}$ (where ${\cal M}$ is the region of
Minkowski spacetime bounded by $\Sigma_0$ and $\Sigma$).
Let $\Sigma^+$ be the set of all such points
on $\Sigma-\Sigma^-$ that are connected with a point on $\Sigma^-$.
Finally, let
\begin{equation}
\Sigma'=\Sigma -(\Sigma^+\cup\Sigma^-) .
\end{equation}
All this is illustrated in Fig.~\ref{fig1}.
The point $C^-$ is an element of $\Sigma^-$, the point
$C^+$ is an element of $\Sigma^+$, while the points
$A'$ and $B'$ are elements of $\Sigma'$.
The system is described by the wave function $\psi(x)$
on ${\cal M}$,
i.e., between $\Sigma_0$ and $\Sigma$.
The arrows on the
integral curves in ${\cal M}$ indicate the direction of $j^{\mu}$.
The dotted curves above $\Sigma$ indicate the particle trajectories
that might be realized if there were no measurement of particle
positions on $\Sigma$, i.e., if the system were described by $\psi(x)$
even above
$\Sigma$. If there were no measurement on $\Sigma$, and if the
initial position of the particle were the point $A$ on Fig.~\ref{fig1},
then
the particle would cross $\Sigma$ at three points, i.e., at $A'$, $C^-$, and
$C^+$. However, owing to the measurement of particle positions,
the actual wave function above $\Sigma$ is of the form $\psi(x,y)$,
where $y$ represents the degrees of freedom of the measuring apparatus.
The interactions with the measuring apparatus
are such that the particles enter localized channels
\cite{bohm,bohmPR1,holbook,nikoldbb1} which typically forbid
trajectories such as the trajectory connecting $A'$ with $C^-$. This is how
the theory of quantum measurements explains why the
Bohmian motions backwards
in time are not in contradiction with the fact that we do not observe
multiple copies of particles, such as $A'$, $C^-$, and $C^+$ (see also
\cite{nikoldbb1}). Therefore, in the
rest of the discussion we ignore the dotted trajectories.
Since we assume
that the initial distribution is given by $j$ on $\Sigma_0$
(recall (\ref{globcons}) and its interpretation),
the dashed trajectory connecting $C^-$ with $C^+$
cannot be realized as an actual particle trajectory either. Only
the solid trajectories can be realized as the actual trajectories.
\begin{figure}
\caption{A sample of typical relativistic Bohmian trajectories.
The solid ones represent physically realizable trajectories.
The dotted ones are unphysical because the measurement takes place
at the hypersurface $\Sigma$ and above. The dashed one is unphysical
because it is assumed that only one particle exists and that
this particle crosses the initial hypersurface $\Sigma_0$.
The arrows indicate the direction of $j^{\mu}$.}
\label{fig1}
\end{figure}
In order to find the probability distribution of particle positions
on $\Sigma$, first recall that $\Sigma$ can be decomposed into
three disjoint sets as
\begin{equation}
\Sigma=\Sigma' \cup \Sigma^+ \cup \Sigma^- .
\end{equation}
Thus the integral (\ref{globcons1}) has contributions from three regions.
These contributions have the properties
\begin{equation}\label{globcons2}
\displaystyle\int_{\Sigma'} dS_{\mu}j^{\mu}=1,
\end{equation}
\begin{equation}
\displaystyle\int_{\Sigma^+} dS_{\mu}j^{\mu} +
\displaystyle\int_{\Sigma^-} dS_{\mu}j^{\mu}=0.
\end{equation}
Second, note that a trajectory in ${\cal M}$ crosses $\Sigma'$ if and only
if it crosses $\Sigma_0$. (See Fig.~\ref{fig1}, recall the definition
of $\Sigma'$ and the fact that
$j\geq 0$ on $\Sigma_0$, and note that the integral
curves of $j^{\mu}$ define a nonsingular congruence on ${\cal M}$.)
Having this in mind and recalling (\ref{globcons2}) and the
local conservation law $\partial_{\mu}j^{\mu}(x)=0$ valid on ${\cal M}$,
it becomes evident
that the probability distribution $\rho({\bf x})$ on $\Sigma$ is given by
\begin{equation}\label{probdis}
\rho=\left\{
\begin{array}{ll}
j & \mbox{on $\Sigma'$}, \\
0 & \mbox{on $\Sigma^+\cup\Sigma^-$}.
\end{array}
\right.
\end{equation}
This is the {\em measurable} probability distribution.
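As a consistency check (our restatement of (\ref{globcons1}) and (\ref{globcons2}), not an additional assumption), the distribution (\ref{probdis}) is correctly normalized: since $\rho$ vanishes off $\Sigma'$,
\begin{equation}
\int_{\Sigma} d^3x \sqrt{|g^{(3)}|}\, \rho =
\int_{\Sigma'} dS_{\mu}j^{\mu} = 1 ,
\end{equation}
so discarding the regions $\Sigma^+$ and $\Sigma^-$ loses no probability.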
The remarkable result emerging from the existence of particle trajectories
is that the probability for a particle to be found on the region
$\Sigma^+\cup\Sigma^-$ vanishes, despite the fact that $j$ does not
vanish there. This is valid even if the initial distribution on
$\Sigma_0$ is not given by $j$, only provided that we know
with certainty that the particle has some
(unknown) position on $\Sigma_0$.
Then the particle cannot be found on $\Sigma^+\cup\Sigma^-$ simply because
there is no trajectory in ${\cal M}$ connecting $\Sigma_0$ with
$\Sigma^+\cup\Sigma^-$.
The prediction that the particle cannot be found on $\Sigma^+\cup\Sigma^-$
could be tested experimentally and the experimental confirmation
of such a prediction would be a strong support
for the claim that particles
really do have trajectories given by the integral curves of $j^{\mu}$.
It is also interesting to note that in order to find the distribution
(\ref{probdis}), the only thing that cannot be found without
the explicit calculation of the trajectories is the region $\Sigma^+$.
Therefore, one does not need to calculate the physically
realizable trajectories (the solid ones in Fig.~\ref{fig1}), but only
the unphysical ones (a dashed one in Fig.~\ref{fig1}) connecting
points in $\Sigma^-$ with points in $\Sigma^+$.
Of course, in order to give a more concrete proposal for an experiment
that could verify the validity of the relativistic Bohmian interpretation,
one has to make a more detailed and realistic
quantitative analysis of some specific case (probably with spin)
in which
the predictions of the Bohmian interpretation are sufficiently different
from those of some other interpretation. Such a specific
analysis is beyond the scope of this paper, but
we hope that the presented ideas will motivate a more extensive research
towards a possible experimental verification of the relativistic
Bohmian mechanics.
\section{CONCLUSION}
\label{secCon}
The Klein-Gordon equation is not statistically transparent,
i.e., it is not clear how to calculate the probabilities
of particle positions from the knowledge of the wave function.
The Bohmian interpretation, in which the probabilities only play
a secondary role, provides a viable interpretation
of the Klein-Gordon equation. It turns out that the
lack of statistical
transparency is not a drawback but rather a virtue of
the Bohmian interpretation, because it opens the possibility
of experimentally distinguishing its predictions
from the predictions of other possible interpretations.
Another remarkable result is that
the equations for Bohmian particle trajectories
are nonlocal, but they can still be naturally
written in a Lorentz-covariant
form without a preferred Lorentz frame.
\noindent
{\bf Acknowledgements.}
This work was supported by the Ministry of Science and Technology of the
Republic of Croatia under Contract No.~0098002.
\end{document}
\begin{document}
\title{Trisections induced by the Gluck surgery along certain spun knots}
\date{}
\begin{abstract}
Gay and Meier asked whether or not a trisection diagram obtained by the Gluck twist on a spun or a twist spun 2-knot via a certain method is standard. In this paper, we depict the trisection diagrams explicitly when the 2-knot is the spun $(2n + 1, -2)$-torus knot, where $n\geq1$, and show that the trisection diagram is standard when $n = 1$. Moreover, we introduce the notion of a homologically standard trisection diagram and show that the trisection diagram is homologically standard for all $n$.
\end{abstract}
\maketitle
\section{Introduction}
A trisection is a decomposition of a 4-manifold introduced by Gay and Kirby. This decomposition is used to study surface knots and 4-manifolds. One central topic in the study of trisections is their classification.
Similarly, a Heegaard splitting is a decomposition of a 3-manifold, and the classification of Heegaard splittings is a longstanding problem. A well-known result in this area is due to Waldhausen, which states the uniqueness of Heegaard splittings of the 3-sphere for each genus. Bonahon and Otal showed a similar theorem for lens spaces.
Analogously to these results on Heegaard splittings, there are classification problems for trisections. By spinning non-isotopic Heegaard splittings of a Seifert fibered space, one can obtain mutually non-isotopic trisections of the spin of the Seifert fibered space \cite{I}. Islambouli showed that each of these trisections becomes isotopic after one stabilization, based on the stable equivalence of trisections.
In \cite{MSA}, Meier, Schirmer, and Zupan conjectured that each trisection of the standard 4-sphere is unique, which is a generalization of Waldhausen's theorem about Heegaard splittings. One of the cases of this conjecture is a trisection of the 4-sphere obtained by the Gluck surgery.
The Gluck surgery is an operation performed along a 2-knot in a 4-manifold. For a 2-knot in $S^4$, a 4-manifold obtained by the Gluck surgery is a homotopy 4-sphere. It is known that the 4-manifold obtained by the Gluck surgery along a spun 2-knot is also the 4-sphere. Gay and Meier described a way to construct a trisection diagram of the 4-manifold and asked if the trisection diagram is standard \cite{GM1}. This question is a special case of the conjecture mentioned above, and the trisection diagram is considered as a potential counterexample to the conjecture, even in the case of the spun trefoil knot. However, in this paper, we show the following result:
\begin{theorem}\label{thm1}
The trisection diagrams $\mathcal{D}_{x} \cup \mathcal{D}_{S(t(3,-2))}$ and $\overline{\mathcal{D}_{x}} \cup \mathcal{D}_{S(t(3,-2))}$ ($x=a,b,c$) are standard.
\end{theorem}
This theorem answers affirmatively the question given by Gay and Meier in the simplest case. Note that $\mathcal{D}_{x}$ and $\overline{\mathcal{D}_{x}}$ are described in Figure \ref{fig:D_x,overline{D_x}}.
To prove this theorem, we perform many handle slides on the obtained trisection diagram. Determining whether a diagram is standard or not by handle sliding alone is challenging, so we consider homological information about trisection diagrams. We obtain the following result from this homological information:
\begin{theorem}\label{thm2}
The trisection diagrams $\mathcal{D}_{x} \cup \mathcal{D}_{S(t(2n+1,-2))}$ and $\overline{\mathcal{D}_{x}} \cup \mathcal{D}_{S(t(2n+1,-2))}$ ($x=a,b,c$) are homologically standard.
\end{theorem}
This means that the homological information does not obstruct the trisection diagrams obtained by the Gluck surgery in Theorem \ref{thm2} from being standard.
This paper is organized as follows:
In Section 2, we review some definitions of trisections and relative trisections. After that, we construct trisection diagrams obtained by the Gluck twist in Section 3. Then we show Theorem \ref{thm1} in Section 4. In Section 5, we introduce the notion of homologically standard and show Theorem \ref{thm2}.
\section{Preliminaries}
In this section, we recall the notions of a trisection and a bridge trisection.
In this paper, we suppose that all 4-manifolds are smooth, compact, connected, and orientable, and a surface link in a 4-manifold is smoothly embedded.
\subsection{A trisection and a relative trisection}
A trisection of a 4-manifold is a decomposition of the 4-manifold into three 4-dimensional 1-handlebodies. This is an analogue of a Heegaard splitting of a 3-manifold.
\begin{definition}
Let $X$ be a closed 4-manifold.
A decomposition $X=X_1\cup X_2\cup X_3$ is called a {\it trisection} if the following holds:
\begin{itemize}
\item $X_i\cong \natural ^{k_i} S^1\times B^3$ for $i=1, 2, 3$,
\item $X_i\cap X_j\cong H_g$ for $i\neq j$, and
\item $X_1\cap X_2\cap X_3\cong \Sigma_g$
\end{itemize}
where $H_g$ is a 3-dimensional genus $g$ handlebody and $\Sigma_g$ is a genus $g$ orientable, closed surface.
In this case, we call this trisection a $(g; k_1, k_2, k_3)$-trisection.
\end{definition}
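Although not needed later, a standard consequence of this definition (our addition, following Gay and Kirby) is a constraint on the parameters: computing the Euler characteristic from the decomposition gives
\[
\chi(X) = 2 + g - (k_1 + k_2 + k_3),
\]
so, in particular, any trisection of $S^4$ satisfies $g = k_1 + k_2 + k_3$.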
Any closed 4-manifold admits a trisection~\cite{GK}. For a 4-manifold with non-empty boundary, a relative trisection was defined in \cite{CGP}.
Before reviewing the definition of a relative trisection we give some notations.
We decompose the boundary of the disk $D=\{re^{i\theta} | r\in [0, 1] , -\pi/3\leq \theta \leq \pi / 3 \}$ in $\mathbb{C}$ as follows:
\[
\partial ^{-} D = \{re^{i\theta} \in \partial D | \theta = -\pi/3\},
\]
\[
\partial^{0} D = \{e^{i\theta}\in \partial D | -\pi/3\leq \theta \leq \pi/3\}, \text{ and}
\]
\[
\partial^{+} D = \{re^{i\theta} \in \partial D | \theta = \pi/3\}.
\]
For the genus $p$ surface $P$ with $b$ boundary components, $U=P\times D$ is diffeomorphic to $\natural ^{2p+b-1} S^1\times B^3$.
Also, its boundary decomposes into $\partial ^0 U = (P\times \partial ^{0} D) \cup (\partial P \times D)$ and $\partial ^{\pm} U = P\times \partial^{\pm} D$.
Let $V_n$ be $\natural ^n (S^1\times B^3)$ and $\partial V_n = H_s^{-}\cup H_{s}^{+}$ a genus $n+s$ Heegaard splitting.
We set $s=g-k+p+b-1$ and $n=k-2p-b+1$. Then $Z_k\cong U\natural V_n\cong \natural ^k S^1\times B^3$.
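As a quick consistency check (our arithmetic, not spelled out in the text), the number of $S^1\times B^3$ summands of $U\natural V_n$ is
\[
(2p+b-1) + n = (2p+b-1) + (k-2p-b+1) = k,
\]
in agreement with $Z_k\cong \natural ^k S^1\times B^3$.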
We note that
\[
\partial Z_k = Y^{+}_{g, k ; p, b}\cup Y^{0}_{g, k ; p, b}\cup Y^{-}_{g, k ; p, b},
\]
where $Y^{\pm}_{g, k ; p, b}=\partial ^{\pm} U\natural H_{s}^{\pm}$ and $Y^{0}_{g, k ; p, b}= \partial ^0 U$.
After the above preparation, we define a relative trisection as follows.
\begin{definition}
Let $W$ be a 4-manifold with connected boundary. We call a decomposition $W=W_1\cup W_2\cup W_3$ a $(g, k; p, b)$-{\it relative trisection} of $W$ if:
\begin{itemize}
\item $W_i\cong Z_k$ for $i=1, 2, 3$,
\item $W_i\cap W_{i+1}\cong Y^{+}_{g, k ; p, b}$ and $W_i\cap W_{i-1}\cong Y^{-}_{g, k ; p, b}$ for $i=1, 2, 3$ (indices mod $3$).
\end{itemize}
As a consequence of the above condition, $W_1\cap W_2\cap W_3$ is a genus $g$ surface with $b$ boundary components.
\end{definition}
\subsection{A doubly-pointed Heegaard diagram and a trisection diagram}
A classical knot in a 3-manifold can be described by a doubly pointed Heegaard diagram. This is induced by a bridge decomposition with respect to a Heegaard splitting of the 3-manifold.
Let $K$ be a knot in a 3-manifold $M$, $\Sigma$ a Heegaard surface, and $(\Sigma; \alpha, \beta)$ a genus $g$ Heegaard diagram of $M$.
Suppose that $K$ is in a bridge position with respect to $\Sigma$.
We denote $\bold{x}=K\cap \Sigma$.
Then we call $(\Sigma; \alpha, \beta, \bold{x})$ a {\it pointed Heegaard diagram}.
In particular, it is called a {\it doubly pointed Heegaard diagram} if $\bold{x}$ consists of two points.
For a surface knot in a 4-manifold, Meier and Zupan showed that every closed surface embedded in a 4-manifold can be put into a bridge trisected position \cite{MZ1}.
Moreover, Gay and Meier showed a 2-knot in a 4-manifold can be put into a 1-bridge position \cite{GM1}.
We review the notion of a bridge trisection.
\begin{definition}
Let $X_1\cup X_2\cup X_3$ be a trisection of a 4-manifold $X$ and $\mathcal{K}$ a 2-knot in $X$.
The decomposition $(X, \mathcal{K})=(X_1, D_1)\cup (X_2, D_2)\cup (X_3, D_3)$ is called a {\it 1-bridge trisection} of $\mathcal{K}$ with respect to $X_1\cup X_2\cup X_3$ if the following holds:
\begin{itemize}
\item $(X_i, D_i)$ is a trivial disk pair (i.e.\ each $D_i$ is isotopic into the boundary),
\item $H_{ij}\cap \mathcal{K}$ is an arc, where $H_{ij}=X_i\cap X_j$.
\end{itemize}
\end{definition}
Then we can define a doubly pointed trisection diagram, introduced in \cite{GM1}.
A trisection diagram is a 4-tuple $(\Sigma; \alpha, \beta, \gamma)$ such that each of $(\Sigma; \alpha, \beta)$, $(\Sigma; \beta, \gamma)$ and $(\Sigma; \alpha, \gamma)$ is a Heegaard diagram of $\#^{k_i} S^1\times S^2$ for some $k_i$ ($i=1, 2, 3$).
\begin{definition}
We call $(\Sigma; \alpha, \beta, \gamma, \bold{x})$ a {\it doubly pointed trisection diagram} if the following holds:
\begin{itemize}
\item $(\Sigma; \alpha, \beta, \gamma)$ is a trisection diagram.
\item $\bold{x}$ is a set of two disjoint points on $\Sigma$ which is disjoint from $\alpha$, $\beta$ and $\gamma$.
\end{itemize}
\end{definition}
It is known that a doubly pointed trisection diagram determines a 2-knot uniquely (Proposition 4.5 in \cite{GM1}).
Next, we review the definition of a relative trisection diagram. This is a diagram similar to a trisection diagram, but on a surface with boundary.
\begin{definition}
A $(g, k; p, b)$ {\it relative trisection diagram} $(\Sigma; \alpha, \beta, \gamma)$ is a 4-tuple of a genus $g$ surface with $b$ boundary components and three sets of $g-p$ simple closed curves such that each triple $(\Sigma; \alpha, \beta)$, $(\Sigma; \alpha, \gamma)$ and $(\Sigma; \beta, \gamma)$ is slide equivalent to the diagram shown in Figure \ref{reltridiag}.
\begin{figure}\label{reltridiag}
\end{figure}
\end{definition}
Also, we can obtain a diagram of a relative trisection that has information on the open book decomposition of its boundary.
That is called an arced relative trisection diagram.
\begin{definition}
Let $(\Sigma; \alpha, \beta, \gamma)$ be a $(g, k; p, b)$ relative trisection diagram and $a$, $b$ and $c$ sets of $2p$ properly embedded arcs in $\Sigma$ disjoint from $\alpha$, $\beta$ and $\gamma$ respectively.
We call $(\Sigma; \alpha, \beta, \gamma, a, b, c)$ an {\it arced relative trisection diagram} if the following holds:
\begin{enumerate}
\item Cutting $\Sigma$ along $a$ (resp.\ $b$ and $c$) and surgering $\Sigma$ along $\alpha$ (resp.\ $\beta$ and $\gamma$) turns $\Sigma$ into a disk.
\item After handlesliding $(\Sigma; \alpha, \beta, a, b)$, $(\Sigma; \alpha, \beta)$ is the diagram shown in Figure \ref{reltridiag} and $a=b$.
\item After handlesliding $(\Sigma; \beta, \gamma, b, c)$, $(\Sigma; \beta, \gamma)$ is the diagram shown in Figure \ref{reltridiag} and $c=b$.
\end{enumerate}
\end{definition}
More precisely, see \cite{GM1}.
\section{Trisections obtained by the Gluck surgery}
The Gluck surgery is a regluing operation along a 2-knot in a 4-manifold.
It is a 4-dimensional analogue of Dehn surgery.
Let $\mathcal{K}$ be a 2-knot in a 4-manifold $X$ with normal Euler number $0$ and $N(\mathcal{K})$ a regular neighborhood of $\mathcal{K}$.
The boundary of the exterior of $\mathcal{K}$ is diffeomorphic to $S^2\times S^1$.
Let $f$ be the non-trivial self diffeomorphism of $S^2\times S^1$.
Then we can reglue $S^2\times D^2$ to $X\setminus int N(\mathcal{K})$ by the diffeomorphism $f$.
This operation is called the {\it Gluck surgery}.
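Explicitly (a standard description, added here for concreteness), $f$ may be taken to be the map that rotates the sphere factor once as one traverses the circle factor:
\[
f(x, \theta) = (\mathrm{rot}_{\theta}(x),\ \theta), \qquad (x,\theta)\in S^2\times S^1,
\]
where $\mathrm{rot}_{\theta}$ denotes the rotation of $S^2$ through angle $\theta$ about a fixed axis.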
We denote the 4-manifold obtained by this Gluck twisting by $\Sigma_{\mathcal{K}}(X)$.
We first define a term used in the main theorem and review a question related to the main theorem.
\begin{definition}
Let $X$ be a closed 4-manifold which is diffeomorphic to $S^4$. A trisection diagram of $X$ is \textit{standard} if it is a stabilization of the genus 0 trisection diagram, up to surface diffeomorphism and handle slide.
\end{definition}
\begin{question}[Question 6.4 in \cite{GM1}]\label{que:Gay-Meier}
Is the trisection diagram for Gluck twisting on the spin or twist spin of a non-trivial classical knot in $S^3$ obtained by the methods of \cite{M} and \cite{GM1} \textbf{ever} standard?
\end{question}
In this section, we construct the trisection diagram in Question \ref{que:Gay-Meier} for the spun $(2n+1,-2)$-torus knot $S(t(2n+1,-2))$, where $n \geq 1$. A recipe for constructing the trisection diagram for $S(t(2n+1,-2))$ is as follows.
\begin{enumerate}
\item Take a doubly pointed Heegaard diagram of $(S^3, t(2n+1,-2))$.
\item Construct a doubly pointed trisection diagram of $(S^4, S(t(2n+1,-2)))$ from the doubly pointed Heegaard diagram using Meier's method.
\item Construct a relative trisection diagram of $S^4-S(t(2n+1,-2))$ from the doubly pointed trisection diagram by removing the two base points.
\item Construct an arced relative trisection diagram of $S^4-S(t(2n+1,-2))$ from the relative trisection diagram using an algorithm for arcs.
\item Construct the trisection diagram for Gluck twisting on $S(t(2n+1,-2))$ by gluing Figure \ref{fig:Gluck} and the arced relative trisection diagram (\cite{GM1}).
\end{enumerate}
From now on, we recall the three methods in the recipe. Then, we construct the trisection diagram according to the above recipe in order.
\subsection{Trisection diagrams for the spin of 3-manifolds}
In \cite{M}, Meier constructed a trisection diagram of a 4-manifold $S(M)$, the spin of a closed, connected, orientable 3-manifold $M$, using a Heegaard diagram of $M$.
\begin{theorem}[Theorem 1.4 in \cite{M}]\label{spun}
Let $(\Sigma,\delta,\epsilon)$ be a genus $g$ Heegaard diagram of $M$, where the collection of simple closed curves $\epsilon$ is depicted in the left of Figure \ref{meierspun}. Then, for three collections of simple closed curves $\alpha$, $\beta$ and $\gamma$ depicted in the right of Figure \ref{meierspun}, the 4-tuple $(\Sigma,\alpha,\beta,\gamma)$ is a genus $3g$ trisection diagram of $S(M)$.
\end{theorem}
\begin{figure}\label{meierspun}
\end{figure}
He also constructed a doubly pointed trisection diagram of $(S(M),S(K))$ from a doubly pointed Heegaard diagram of $(M,K)$ in the same way, where $K$ is a classical knot in $M$ and $S(K)$ is the spin of $K$ (Theorem 1.5 in \cite{M}).
\subsection{Describing trisection diagrams for Gluck twisting}
In \cite{GM1}, Gay and Meier described a trisection diagram of $\Sigma_K(S^4)$ using an arced relative trisection diagram of $S^4-K$.
\begin{theorem}[Lemma 5.5 in \cite{GM1}]\label{1-bridge}
Let $K$ be a 2-knot in $S^4$ which is in 1-bridge position. Then, a trisection diagram of $\Sigma_K(S^4)$ is obtained by gluing Figure \ref{fig:Gluck} and an arced relative trisection diagram of $S^4-K$.
\end{theorem}
Note that the trisection diagram of $\Sigma_K(S^4)$ is independent of the choice of arcs of a relative trisection diagram of $S^4-K$.
\subsection{An algorithm of drawing arcs for relative trisection diagrams}
An algorithm for drawing arcs of a relative trisection diagram is developed in \cite{CGP2}.
\begin{theorem}[Theorem 5 in \cite{CGP2}]\label{algorithm}
Let $(\Sigma,\alpha,\beta,\gamma)$ be a relative trisection diagram. Also let $\Sigma_\alpha$ be the surface obtained by surgering $\Sigma$ along $\alpha$ and $\phi \colon \Sigma-\alpha \to \Sigma_\alpha$ an associated embedding. Then, arcs $a$, $b$ and $c$ in $\Sigma$ are obtained by following the procedure below in order.
\begin{enumerate}
\item There exists a collection of properly embedded arcs $\delta$ in $\Sigma_\alpha$ such that cutting $\Sigma_\alpha$ along $\delta$ produces the disk. Then, let $a$ denote a collection of properly embedded arcs in $\Sigma-\alpha$ such that $\delta$ is isotopic to $\phi(a)$ in $\Sigma_\alpha$.
\item A collection of arcs in $\Sigma$ which does not intersect $\beta$ geometrically is obtained by sliding a copy of $a$ over $\alpha$. Then, let $b$ denote the collection of arcs. Note that in this operation, sliding a curve of $\alpha$ over other $\alpha$ curves can be performed if necessary.
\item A collection of arcs in $\Sigma$ which does not intersect $\gamma$ geometrically is obtained by sliding a copy of $b$ over $\beta$. Then, let $c$ denote the collection of arcs. Note that in this operation, sliding a curve of $\beta$ over other $\beta$ curves can be performed if necessary.
\end{enumerate}
\end{theorem}
\subsection{Constructing trisection diagrams for Gluck twisting}
In this subsection, we will finally construct the trisection diagram according to the recipe described above, utilizing the methods explained in the three previous subsections. Here, we will use the notation $t_c$ to represent the right-handed Dehn twist along a simple closed curve $c$.
\begin{lemma}\label{lem:(p,-2)_dpHd}
Let $(\Sigma, \alpha, \beta, z, w)$ be a doubly-pointed Heegaard diagram of $(S^3, t(3,-2))$ depicted in Figure \ref{fig:(p,-2)_dpHd}. Then, for $n \ge 1$, $(\Sigma, \alpha, t_{\delta}^{n-1}(\beta), z, w)$ is a doubly-pointed Heegaard diagram of $(S^3, t(2n+1,-2))$, where $\delta$ is the curve depicted in Figure \ref{fig:(p,-2)_dpHd}.
\end{lemma}
\begin{proof}
We can check, using a method in \cite{O}, that the tuple $(\Sigma, \alpha, t_{\delta}(\beta), z, w)$ is a doubly-pointed Heegaard diagram of $(S^3, t(5,-2))$. Performing this operation $n-1$ times completes the proof.
\end{proof}
\begin{figure}
\caption{A doubly-pointed Heegaard diagram of $(S^3, t(3,-2))$ and the reference curve $\delta$.}
\label{fig:(p,-2)_dpHd}
\end{figure}
The following lemma is obtained from Lemma \ref{lem:(p,-2)_dpHd} by the method in \cite{M}. More precisely, we construct a doubly pointed trisection diagram from the doubly pointed Heegaard diagram as prescribed in Lemma \ref{lem:(p,-2)_dpHd}, following the procedure shown in Figure \ref{meierspun}. However, in this case, we need to take into account the presence of the base points.
Note that we take the mirror image here.
\begin{lemma}\label{lem:(p,-2)_dptd}
Let $(\Sigma, \alpha, \beta, \gamma, z, w)$ be a doubly-pointed trisection diagram of $(S^4, S(t(3,-2)))$ depicted in Figure \ref{fig:S(p,-2)_dptd}. Then, for $n \ge 1$, $(\Sigma, \alpha', \beta', \gamma', z, w)$ is a doubly-pointed trisection diagram of $(S^4, S(t(2n+1,-2)))$, where
\begin{itemize}
\item $\alpha' = (t_{\delta_1}^{-(n-1)}(\alpha_1), \alpha_2, \alpha_3)$,
\item $\beta' = (t_{\delta_2}^{-(n-1)}(\beta_1), \beta_2, \beta_3)$,
\item $\gamma' = (t_{\delta_3}^{-(n-1)}(\gamma_1), \gamma_2, \gamma_3)$,
\end{itemize}
and the curves $\delta_1$, $\delta_2$ and $\delta_3$ are those depicted in Figure \ref{fig:S(p,-2)_dptd}.
\end{lemma}
\begin{figure}
\caption{A doubly-pointed trisection diagram of $(S^4, S(t(3,-2)))$ and the reference curves $\delta_1$, $\delta_2$ and $\delta_3$.}
\label{fig:S(p,-2)_dptd}
\end{figure}
If a doubly pointed trisection diagram is given, a relative trisection diagram of the complement of the 2-knot represented by the diagram can be obtained by excluding a neighborhood of the base points, following the procedure described in subsection 4.1 of \cite{GM1}, to which we refer for details.
Hence, the following lemma is obtained from Lemma \ref{lem:(p,-2)_dptd} by removing an open tubular neighborhood of the two base points $z$ and $w$ in Figure \ref{fig:S(p,-2)_dptd}.
\begin{lemma}\label{lem:(p,-2)_rtd}
Let $(\Sigma, \alpha, \beta, \gamma)$ be a relative trisection diagram of $S^4-S(t(3,-2))$ depicted in Figure \ref{fig:S(p,-2)_rtd}. Then, for $n \ge 1$, $(\Sigma, \alpha', \beta', \gamma')$ is a relative trisection diagram of $S^4-S(t(2n+1,-2))$, where
\begin{itemize}
\item $\alpha' = (t_{\delta_2}^{-(n-1)}(\alpha_1), \alpha_2, \alpha_3)$,
\item $\beta' = (t_{\delta_2}^{-(n-1)}(\beta_1), \beta_2, \beta_3)$,
\item $\gamma' = (t_{\delta_3}^{-(n-1)}(\gamma_1), \gamma_2, \gamma_3)$,
\end{itemize}
and the curves $\delta_2$ and $\delta_3$ are those depicted in Figure \ref{fig:S(p,-2)_rtd}.
\end{lemma}
\begin{figure}
\caption{A relative trisection diagram of $S^4-S(t(3,-2))$ and the reference curves $\delta_2$ and $\delta_3$.}
\label{fig:S(p,-2)_rtd}
\end{figure}
The following lemma is obtained from Lemma \ref{lem:(p,-2)_rtd} using Theorem \ref{algorithm}.
\begin{lemma}\label{lem:(p,-2)_artd}
Let $(\Sigma, \alpha, \beta, \gamma, a, b, c)$ be an arced relative trisection diagram of $S^4-S(t(3,-2))$ depicted in Figure \ref{fig:S(p,-2)_artd}. Then, for $n \ge 1$, $(\Sigma, \alpha', \beta', \gamma', a', b', c')$ is an arced relative trisection diagram of $S^4-S(t(2n+1,-2))$, where
\begin{itemize}
\item $\alpha' = (t_{\delta_2}^{-(n-1)}(\alpha_1), \alpha_2, \alpha_3)$,
\item $\beta' = (t_{\delta_2}^{-(n-1)}(\beta_1), \beta_2, \beta_3)$,
\item $\gamma' = (t_{\delta_3}^{-(n-1)}(\gamma_1), \gamma_2, \gamma_3)$,
\item $a' = t_{\delta_2}^{-(n-1)}(a)$,
\item $b' = t_{\delta_2}^{-(n-1)}(b)$,
\item $c' = t_{\delta_3}^{-(n-1)}(c)$,
\end{itemize}
and the curves $\delta_2$ and $\delta_3$ are those depicted in Figure \ref{fig:S(p,-2)_artd}.
\end{lemma}
Let $\mathcal{D}_{S(t(2n+1,-2))}$ denote this arced relative trisection diagram.
\begin{figure}
\caption{An arced relative trisection diagram of $S^4-S(t(3,-2))$ and the reference curves $\delta_2$ and $\delta_3$.}
\label{fig:S(p,-2)_artd}
\end{figure}
\begin{example}
Figure \ref{fig:S(5,-2)_artd} shows the arced relative trisection diagram $\mathcal{D}_{S(t(5,-2))}$ obtained from Lemma \ref{lem:(p,-2)_artd} (depicted separately for each family of curves).
\end{example}
\begin{figure}
\caption{The arced relative trisection diagram $\mathcal{D}_{S(t(5,-2))}$.}
\label{fig:S(5,-2)_artd}
\end{figure}
The following lemma is obtained by combining Lemma \ref{lem:(p,-2)_artd} and Lemma 5.5 in \cite{GM1}.
\begin{lemma}\label{lem:main}
Let $(\Sigma, \alpha, \beta, \gamma)$ be the trisection diagram obtained by Gluck twisting on the spun trefoil depicted in Figure \ref{fig:main}, using the methods of \cite{GM1} and \cite{M} (Theorems \ref{spun} and \ref{1-bridge}). Then, for $n \ge 1$, $(\Sigma, \alpha', \beta', \gamma')$ is a trisection diagram obtained by Gluck twisting on $S(t(2n+1,-2))$ using the same methods, where
\begin{itemize}
\item $\alpha' = (t_{\delta_2}^{-(n-1)}(\alpha_1), \alpha_2, \alpha_3, t_{\delta_2}^{-(n-1)}(\alpha_4), \alpha_5, \alpha_6)$,
\item $\beta' = (t_{\delta_2}^{-(n-1)}(\beta_1), \beta_2, \beta_3, t_{\delta_2}^{-(n-1)}(\beta_4), \beta_5, \beta_6)$,
\item $\gamma' = (t_{\delta_3}^{-(n-1)}(\gamma_1), \gamma_2, \gamma_3, t_{\delta_3}^{-(n-1)}(\gamma_4), \gamma_5, \gamma_6)$,
\end{itemize}
and the curves $\delta_2$ and $\delta_3$ are those depicted in Figure \ref{fig:main}.
\end{lemma}
Let $\mathcal{D} \cup \mathcal{D}_{S(t(2n+1,-2))}$ denote the trisection diagram of $\Sigma_{S(t(2n+1,-2))}(S^4)$ constructed in Lemma \ref{lem:main} (we write $\mathcal{D}$ for Figure \ref{fig:Gluck}).
\begin{figure}
\caption{A trisection diagram obtained by Gluck twisting on the spun trefoil and the reference curves $\delta_2$ and $\delta_3$.}
\label{fig:main}
\end{figure}
\begin{figure}
\caption{Figure (a) depicts an arced relative trisection diagram used in performing the Gluck twist, and figure (b) is obtained by destabilizing figure (a).}
\label{fig:Gluck}
\label{fig:D_x,overline{D_x}}
\end{figure}
\section{The case of the spun trefoil}
In this section, we prove Theorem \ref{thm1}. Since destabilizing $\mathcal{D}$ twice yields one of the diagrams in Figure \ref{fig:D_x,overline{D_x}}, we consider the diagrams $\mathcal{D}_{x} \cup \mathcal{D}_{S(t(3,-2))}$ and $\overline{\mathcal{D}_{x}} \cup \mathcal{D}_{S(t(3,-2))}$.
\begin{proof}[Proof of Theorem \ref{thm1}]
Throughout the proof, a simple closed curve obtained from a curve named $c$ by a Dehn twist or a handle slide is again denoted by $c$. A trisection diagram is depicted separately for each family of curves, as in Figure \ref{fig:start}.
Note that the arced relative trisection diagram $\mathcal{D}_{S(t(3,-2))}$ depicted in Figure \ref{fig:S(p,-2)_artd} has $\mathbb{Z}_3$-symmetry since it is 0-annular. Thus, it suffices to show that $\mathcal{D}_{b} \cup \mathcal{D}_{S(t(3,-2))}$ and $\overline{\mathcal{D}_{b}} \cup \mathcal{D}_{S(t(3,-2))}$ are standard.
To begin with, we show the theorem in the case of $\mathcal{D}_{b}$. Note that in both cases, handle slides over some curves may be performed several times. In Figure \ref{fig:start}, for $\alpha$ curves, slide $\alpha_4$ over $\alpha_3$. Then, slide $\alpha_1$ and $\alpha_4$ over both $\alpha_2$ and $\alpha_3$. Finally, slide $\alpha_4$ over $\alpha_1$. For $\beta$ curves, slide $\beta_1$ over $\beta_3$. Then, slide $\beta_4$ over $\beta_1$. For $\gamma$ curves, slide $\gamma_4$ and $\gamma_1$ over $\gamma_2$. Then, slide $\gamma_4$ over $\gamma_1$. After these slides, the diagram is depicted in Figure \ref{fig:sec4_1}. In Figure \ref{fig:sec4_1}, $\alpha_i$ and $\gamma_i$ ($i=1,4$) are parallel to each other, and $\beta_4$ transversally intersects both $\alpha_4$ and $\gamma_4$ only once. Thus, we can destabilize Figure \ref{fig:sec4_1}. From now, we deform Figure \ref{fig:sec4_1} to simplify $\beta_4$.
In Figure \ref{fig:sec4_1}, for $\alpha$ curves, slide both $\alpha_1$ and $\alpha_4$ over $\alpha_2$, after that, slide $\alpha_2$ over $\alpha_3$. For $\gamma$ curves, slide both $\gamma_1$ and $\gamma_4$ over $\gamma_2$. After these slides, Figure \ref{fig:sec4_2} is obtained. Note that in Figure \ref{fig:sec4_2}, $t_{\delta}(\beta_4)=m$ (the braid relation), $i(\alpha_{j}, \delta)=i(\gamma_{j}, \delta)=0$ ($j=1,2,4$) and $i(\beta_{k},\delta)=0$ ($k=1,2,3$), where $i(\cdot, \cdot)$ represents the geometric intersection number.
In Figure \ref{fig:sec4_2}, perform $t_{\delta}$. Then, slide $\alpha_3$ and $\gamma_3$ over $\alpha_4$ and $\gamma_4$, respectively so that $i(\alpha_3,\beta_4)=i(\gamma_3,\beta_4)=0$. Moreover, slide $\alpha_3$ and $\gamma_3$ over $\alpha_2$ and $\gamma_2$, respectively. After that, perform $t_{\beta_2}^{-1}$. Finally, for $\alpha$ curves, slide $\alpha_2$ over $\alpha_3$, and $\alpha_4$ over $\alpha_2$. For $\gamma$ curves, slide $\gamma_4$ over $\gamma_2$. After these slides and twists, Figure \ref{fig:sec4_3} is obtained.
In Figure \ref{fig:sec4_3}, since $\alpha_4$ and $\gamma_4$ are parallel to each other, and $\beta_4$ transversally intersects $\alpha_4$ and $\gamma_4$ once, by destabilizing them, Figure \ref{fig:sec4_4} is obtained.
In Figure \ref{fig:sec4_4}, by sliding each curve properly, we have Figure \ref{fig:sec4_5}, the stabilization of the genus 0 trisection diagram of $S^4$.
\begin{figure}
\caption{The trisection diagram $\mathcal{D}_{b} \cup \mathcal{D}_{S(t(3,-2))}$.}
\label{fig:start}
\end{figure}
\begin{figure}
\caption{A trisection diagram obtained from Figure \ref{fig:start}.}
\label{fig:sec4_1}
\end{figure}
\begin{figure}
\caption{A trisection diagram obtained from Figure \ref{fig:sec4_1}.}
\label{fig:sec4_2}
\end{figure}
\begin{figure}
\caption{Before the destabilization for $\alpha_4$, $\beta_4$ and $\gamma_4$.}
\label{fig:sec4_3}
\end{figure}
\begin{figure}
\caption{After the destabilization.}
\label{fig:sec4_4}
\end{figure}
\begin{figure}
\caption{The stabilization of the genus 0 trisection diagram of $S^4$.}
\label{fig:sec4_5}
\end{figure}
\begin{figure}
\caption{The trisection diagram $\overline{\mathcal{D}_{b}} \cup \mathcal{D}_{S(t(3,-2))}$.}
\label{fig:Gluck_overline{D_b}}
\end{figure}
\begin{figure}
\caption{A trisection diagram obtained from Figure \ref{fig:Gluck_overline{D_b}}.}
\label{fig:sec4_6}
\end{figure}
\begin{figure}
\caption{A trisection diagram obtained from Figure \ref{fig:sec4_6}.}
\label{fig:sec4_7}
\end{figure}
\begin{figure}
\caption{Before the destabilization for $\alpha_4$, $\beta_4$ and $\gamma_4$.}
\label{fig:sec4_8}
\end{figure}
\begin{figure}
\caption{After the destabilization.}
\label{fig:sec4_9}
\end{figure}
Next, we show the case of $\overline{\mathcal{D}_{b}}$. In Figure \ref{fig:Gluck_overline{D_b}}, for $\alpha$ curves, slide $\alpha_4$ over $\alpha_3$. Then, slide $\alpha_1$ and $\alpha_4$ over both $\alpha_2$ and $\alpha_3$. After that, slide $\alpha_1$ over $\alpha_2$.
Then, slide $\alpha_4$ over $\alpha_1$. After that, slide $\alpha_4$ over both $\alpha_2$ and $\alpha_3$.
For $\beta$ curves, slide $\beta_4$ over $\beta_1$. For $\gamma$ curves, slide $\gamma_4$ over $\gamma_2$. Then, slide $\gamma_1$ and $\gamma_4$ over $\gamma_2$. After that, slide $\gamma_4$ over $\gamma_1$. Then, slide $\gamma_4$ over both $\gamma_2$ and $\gamma_3$. After these slides, the diagram is depicted in Figure \ref{fig:sec4_6}. In Figure \ref{fig:sec4_6}, $\alpha_i$ and $\gamma_i$ ($i=1,4$) are parallel to each other, and $\beta_4$ transversally intersects $\alpha_4$ and $\gamma_4$ only once. Thus, we can destabilize Figure \ref{fig:sec4_6}. From now on, we deform Figure \ref{fig:sec4_6} to simplify $\beta_4$.
In Figure \ref{fig:sec4_6}, slide $\alpha_1$ and $\alpha_4$ over both $\alpha_2$ and $\alpha_3$, and $\gamma_1$ and $\gamma_4$ over $\gamma_2$. Then, Figure \ref{fig:sec4_7} is obtained. Note that in Figure \ref{fig:sec4_7}, $t_{\delta}^{-1}(\beta_4)=m$ (the braid relation), $i(\alpha_{j}, \delta)=i(\gamma_{j}, \delta)=0$ ($j=1,2,4$) and $i(\beta_{k},\delta)=0$ ($k=1,2,3$), where $m$ is the curve depicted in Figure \ref{fig:sec4_2}.
In Figure \ref{fig:sec4_7}, perform $t_{\delta}^{-1}$. Then, slide $\alpha_3$ and $\gamma_3$ over $\alpha_4$ and $\gamma_4$, respectively so that $i(\alpha_3,\beta_4)=i(\gamma_3,\beta_4)=0$. Moreover, slide $\alpha_3$ and $\gamma_3$ over $\alpha_2$ and $\gamma_2$, respectively. After that, perform $t_{\beta_2}$. Finally, for $\alpha$ curves, slide $\alpha_4$ over both $\alpha_2$ and $\alpha_3$. For $\gamma$ curves, slide $\gamma_4$ over $\gamma_2$. After these slides and twists, Figure \ref{fig:sec4_8} is obtained.
In Figure \ref{fig:sec4_8}, since $\alpha_4$ and $\gamma_4$ are parallel to each other, and $\beta_4$ transversally intersects $\alpha_4$ and $\gamma_4$ only once, by destabilizing them, Figure \ref{fig:sec4_9} is obtained.
In Figure \ref{fig:sec4_9}, by sliding each curve properly, we have Figure \ref{fig:sec4_5}, the stabilization of the genus 0 trisection diagram of $S^4$.
\end{proof}
\section{Homological calculations}
In this section, we compute homological information of trisection diagrams of $S^4$ obtained by Gluck twisting along the spun $(2n+1,-2)$-torus knots.
This section is organized as follows: First, we investigate how the homology class of curves in a given trisection diagram changes with handle slides and Dehn twists.
After that, we define a homologically standard trisection diagram of $S^4$ and show that all trisection diagrams in section 3 are homologically standard trisection diagrams of $S^4$.
The following lemma describes how the homology class of a curve on a surface changes after a handle slide.
\begin{lemma}
Let $C_1$ and $C_2$ be non-isotopic essential simple closed curves on an orientable closed surface $\Sigma_g$.
Suppose that $C_3$ is an essential simple closed curve that is obtained by handle sliding $C_1$ over $C_2$.
Then,
\[
[C_3]=[C_1]\pm[C_2]
\]
in $H_1(\Sigma_g)$.
\end{lemma}
\begin{proof}
Let $a$ be an arc in $\Sigma_g$ which connects $C_1$ and $C_2$ and $\{l_1, \ldots ,l_{2g}\}$ a symplectic basis of $H_1(\Sigma_g)$.
Then $C_3$ is the boundary component of $N(C_1\cup a\cup C_2)$ that is isotopic to neither $C_1$ nor $C_2$.
Each intersection point of $a$ and $l_i$ gives rise to two intersection points of $C_3$ and $l_i$.
One of these two points has positive sign and the other has negative sign, so their contributions cancel.
Hence the algebraic intersection number of $C_3$ and $l_i$ is $\langle C_1, l_i\rangle+\langle C_2, l_i \rangle$, where $\langle \cdot, \cdot \rangle$ denotes the algebraic intersection number.
Therefore the statement holds.
\end{proof}
\begin{definition}
Let $\Sigma_g$ be an orientable closed surface, let $[C_1]$ and $[C_2]$ be non-trivial elements in $H_1(\Sigma_g)$, and set $[C_3]=[C_1]\pm[C_2]$.
We call the operation that replaces $[C_1]$ by $[C_3]$ {\it a homological handle slide} $[C_1]$ over $[C_2]$.
\end{definition}
\begin{lemma}
Let $C_1$ and $C_2$ be essential simple closed curves on an orientable closed surface $\Sigma_g$.
Suppose that $C_3$ is an essential simple closed curve that is obtained from $C_1$ by Dehn twisting along $C_2$.
Then,
\[
[C_3]=[C_1]\pm\langle C_1, C_2 \rangle [C_2]
\]
in $H_1(\Sigma_g)$, where $\langle C_1, C_2 \rangle$ is the algebraic intersection number of $C_1$ and $C_2$ for a certain basis of $H_1(\Sigma_g)$.
\end{lemma}
\begin{proof}
If $C_1$ intersects $C_2$ in exactly one point, then $[C_3]=[C_1]\pm [C_2]$ after the Dehn twist (the sign depends on whether the twist is right- or left-handed and on whether $\langle C_1, C_2\rangle$ is $+1$ or $-1$).
In general, each intersection point of $C_1$ and $C_2$ contributes a copy of $\pm[C_2]$, and the signed contributions add up to $\pm\langle C_1, C_2 \rangle [C_2]$.
Hence the statement holds.
\end{proof}
\begin{definition}
Let $\Sigma_g$ be an orientable closed surface and $[C_1]$ and $[C_2]$ non-trivial elements in $H_1(\Sigma_g)$, and $[C_3]=[C_1]+\langle C_1, C_2 \rangle [C_2]$.
We call the operation that replaces $[C_1]$ by $[C_3]$ {\it a homological Dehn twist} along $[C_2]$.
\end{definition}
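For concreteness (this is our illustration, not part of the argument), both homological operations can be realised as elementary vector operations on coordinate vectors. The following Python sketch assumes the standard symplectic intersection form on $H_1(\Sigma_g)$; the names and the genus-2 example are ours.

```python
import numpy as np

def intersection_matrix(g):
    """Matrix of the intersection form in a symplectic basis
    {a_1, ..., a_g, b_1, ..., b_g} of H_1(Sigma_g), with <a_i, b_i> = 1."""
    J = np.zeros((2 * g, 2 * g), dtype=int)
    J[:g, g:] = np.eye(g, dtype=int)
    J[g:, :g] = -np.eye(g, dtype=int)
    return J

def pairing(u, v, J):
    """Algebraic intersection number <u, v>."""
    return int(u @ J @ v)

def homological_handle_slide(c1, c2, sign=1):
    """[C_1] -> [C_1] +/- [C_2]."""
    return c1 + sign * c2

def homological_dehn_twist(c1, c2, J):
    """[C_1] -> [C_1] + <C_1, C_2> [C_2]."""
    return c1 + pairing(c1, c2, J) * c2

J = intersection_matrix(2)               # genus 2
a1 = np.array([1, 0, 0, 0])              # class [a_1]
b1 = np.array([0, 0, 1, 0])              # class [b_1]
print(homological_handle_slide(a1, b1))  # [a_1] + [b_1] = (1, 0, 1, 0)
print(homological_dehn_twist(b1, a1, J)) # [b_1] + <b_1, a_1>[a_1]
```

Both operations act linearly on coordinate vectors, which is what makes the matrix manipulations in the proof below purely mechanical.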
\begin{definition}
Let $(\Sigma; \alpha, \beta, \gamma)$ be a trisection diagram of the 4-sphere and $\alpha=\{\alpha_1,\ldots, \alpha_g\}$, $\beta=\{\beta_1, \ldots , \beta_g\}$ and $\gamma=\{\gamma_1, \ldots , \gamma_g\}$.
We say $(\Sigma; \alpha, \beta, \gamma)$ is {\it homologically standard} if, after performing homological Dehn twists and handle slides finitely many times and replacing some indices, one of the following holds:
\begin{itemize}
\item $\langle [\alpha_i], [\beta_i] \rangle=\langle [\alpha_i], [\gamma_i] \rangle=\pm 1$, $\langle [\alpha_i], [\beta_j] \rangle=\langle [\gamma_i], [\beta_j] \rangle=0$ and $[\beta_i]=[\gamma_i]$ in $H_1(\Sigma)$ for $i\neq j$.
\item $\langle [\beta_i], [\gamma_i] \rangle=\langle [\beta_i], [\alpha_i] \rangle=\pm 1$, $\langle [\beta_i], [\gamma_j] \rangle=\langle [\beta_i], [\alpha_j] \rangle=0$ and $[\gamma_i]=[\alpha_i]$ in $H_1(\Sigma)$ for $i\neq j$.
\item $\langle [\gamma_i], [\alpha_i] \rangle=\langle [\gamma_i], [\beta_i] \rangle=\pm 1$, $\langle [\gamma_i], [\alpha_j] \rangle=\langle [\gamma_i], [\beta_j] \rangle=0$ and $[\alpha_i]=[\beta_i]$ in $H_1(\Sigma)$ for $i\neq j$.
\end{itemize}
\end{definition}
If a trisection diagram is standard, then it is also homologically standard; this property thus serves as an invariant for detecting non-standardness.
\begin{proof}[Proof of Theorem \ref{thm2}]
Let $\{C_1, \ldots, C_{2g}\}$ be a basis of $H_1(\Sigma_g)$.
We define the matrix $T$ whose row vectors are the coordinate vectors of $[\alpha_i]$, $[\beta_i]$ and $[\gamma_i]$ in $H_1(\Sigma_g)$ with respect to the basis $\{C_1, \ldots, C_{2g}\}$, for $i=1, 2, 3, 4$.
\[
T :=\begin{pmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4\\
\beta_1\\
\beta_2\\
\beta_3\\
\beta_4\\
\gamma_1\\
\gamma_2\\
\gamma_3\\
\gamma_4
\end{pmatrix}
\]
By Figure \ref{fig:start} and Lemma \ref{lem:main}, we obtain
\[
T=\begin{pmatrix}
0&0&-1&-1&0&0&0&0\\
1&0&-1&0&0&0&0&0\\
0&0&0&0&0&1&0&0\\
1&0&-2(1+n)&3+2n&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
0&1&-1&0&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&-(1+2n)&2(2+n)&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
1&-1&0&0&0&0&0&0\\
0&0&0&0&0&0&1&0\\
0&-(1+2n)&0&3+2n&1&1&1&-1
\end{pmatrix}.
\]
From now on, we perform homological handle slides and Dehn twists until $\alpha_i$, $\beta_i$, and $\gamma_i$ are homologically standard for $i=1, 2, 3, 4$.
After performing homological Dehn twists $1+2n$ times along $(0, 0, 1, 0, 0, 0, 0, 0)$, we have
\[
T=\begin{pmatrix}
0&0&-1&-1&0&0&0&0\\
1&0&-1&0&0&0&0&0\\
0&0&0&0&0&1&0&0\\
1&0&-1&3+2n&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
0&1&-1&0&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&0&2(2+n)&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
1&-1&0&0&0&0&0&0\\
0&0&1+2n&0&0&0&1&0\\
0&-(1+2n)&1+2n&3+2n&1&1&1&-1
\end{pmatrix}.
\]
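This twist step, together with the next one, can be checked numerically. The following Python sketch (our illustration, with NumPy and the illustrative value $n=1$) encodes the initial matrix $T$ and applies the two twists, assuming, consistently with the displayed matrices, that the intersection number of a class with $C_3$ (resp.\ $C_4$) is read off from its $C_7$ (resp.\ $C_8$) coordinate.

```python
import numpy as np

n = 1  # illustrative value; the computation goes through for any n

# Rows of T (alpha_1..4, beta_1..4, gamma_1..4) before the twists,
# as coordinate vectors in the basis {C_1, ..., C_8}.
T = np.array([
    [0,  0, -1,       -1,       0, 0, 0,  0],
    [1,  0, -1,        0,       0, 0, 0,  0],
    [0,  0,  0,        0,       0, 1, 0,  0],
    [1,  0, -2*(1+n),  3+2*n,   1, 1, 1, -1],
    [0, -1,  0,       -1,       0, 0, 0,  0],
    [0,  1, -1,        0,       0, 0, 0,  0],
    [0,  0,  0,        0,       1, 0, 0,  0],
    [0,  0, -(1+2*n),  2*(2+n), 1, 1, 1, -1],
    [0, -1,  0,       -1,       0, 0, 0,  0],
    [1, -1,  0,        0,       0, 0, 0,  0],
    [0,  0,  0,        0,       0, 0, 1,  0],
    [0, -(1+2*n), 0,   3+2*n,   1, 1, 1, -1],
])

def twist(T, delta, dual, k):
    """k homological Dehn twists along the basis curve with index `delta`:
    each row v becomes v + k * <v, delta> * e_delta, where <v, delta>
    is read off as the coordinate v[dual]."""
    T = T.copy()
    T[:, delta] += k * T[:, dual]
    return T

T1 = twist(T,  2, 6, 1 + 2*n)   # (1+2n) twists along C_3
T2 = twist(T1, 3, 7, 3 + 2*n)   # (3+2n) twists along C_4
print(T2)
```

For $n=1$, the rows of `T1` and `T2` reproduce the corresponding displayed matrices entry by entry.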
Next, we perform homological Dehn twists $3+2n$ times along $(0, 0, 0, 1, 0, 0, 0, 0)$. Then, we obtain
\[
T=\begin{pmatrix}
0&0&-1&-1&0&0&0&0\\
1&0&-1&0&0&0&0&0\\
0&0&0&0&0&1&0&0\\
1&0&-1&0&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
0&1&-1&0&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&0&1&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
1&-1&0&0&0&0&0&0\\
0&0&1+2n&0&0&0&1&0\\
0&-(1+2n)&1+2n&0&1&1&1&-1
\end{pmatrix}.
\]
After homological handle sliding $\alpha_4$ over $\alpha_2$, $\beta_2$ over $\beta_1$ and $\gamma_4$ over $\gamma_2$, we obtain
\[
T=\begin{pmatrix}
0&0&-1&-1&0&0&0&0\\
1&0&-1&0&0&0&0&0\\
0&0&0&0&0&1&0&0\\
-(1+2n)&0&1+2n&0&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
0&0&-1&-1&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&0&1&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
1&-1&0&0&0&0&0&0\\
0&0&1+2n&0&0&0&1&0\\
-(1+2n)&0&1+2n&0&1&1&1&-1
\end{pmatrix}.
\]
Next, we perform homological handle slides $\alpha_2$ over $\alpha_1$ and $\gamma_2$ over $\gamma_1$. Then, we obtain
\[
T=\begin{pmatrix}
0&0&-1&-1&0&0&0&0\\
1&0&0&1&0&0&0&0\\
0&0&0&0&0&1&0&0\\
-(1+2n)&0&1+2n&0&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
0&0&-1&-1&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&0&1&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
1&0&0&1&0&0&0&0\\
0&0&1+2n&0&0&0&1&0\\
-(1+2n)&0&1+2n&0&1&1&1&-1
\end{pmatrix}.
\]
After homological handle sliding $\alpha_4$ over $\alpha_2$ and $\gamma_4$ over $\gamma_2$, we obtain
\[
T=\begin{pmatrix}
0&0&-1&-1&0&0&0&0\\
1&0&0&1&0&0&0&0\\
0&0&0&0&0&1&0&0\\
0&0&1+2n&1+2n&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
0&0&-1&-1&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&0&1&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
1&0&0&1&0&0&0&0\\
0&0&1+2n&0&0&0&1&0\\
0&0&1+2n&1+2n&1&1&1&-1
\end{pmatrix}.
\]
Next, we perform homological Dehn twists $-(2n+1)$ times along $(0, 0, 1, 0, 0, 0, 0, 0)$. Then, we obtain
\[
T=\begin{pmatrix}
0&0&-1&-1&0&0&0&0\\
1&0&0&1&0&0&0&0\\
0&0&0&0&0&1&0&0\\
0&0&0&1+2n&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
0&0&-1&-1&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&-(2n+1)&1&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
1&0&0&1&0&0&0&0\\
0&0&0&0&0&0&1&0\\
0&0&0&1+2n&1&1&1&-1
\end{pmatrix}.
\]
Finally, we perform homological Dehn twists $-(1+2n)$ times along $(0, 0, 0, 1, 0, 0, 0, 0)$, and after homological handle sliding $\beta_4$ over $\beta_2$, we obtain
\[
T=\begin{pmatrix}
0&0&-1&-1&0&0&0&0\\
1&0&0&1&0&0&0&0\\
0&0&0&0&0&1&0&0\\
0&0&0&0&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
0&0&-1&-1&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&0&1&1&1&1&-1\\
\hline
0&-1&0&-1&0&0&0&0\\
1&0&0&1&0&0&0&0\\
0&0&0&0&0&0&1&0\\
0&0&0&0&1&1&1&-1
\end{pmatrix}.
\]
After reordering the indices, $\alpha_i$, $\beta_i$, and $\gamma_i$ are homologically standard for $i=1, 2, 3, 4$.
The case of $\overline{\mathcal{D}_b}$ can be proved similarly.
\end{proof}
\end{document}
\begin{document}
\title{Natural Philosophy and Quantum Theory}\author{Thomas Marlow \\ \emph{School of Mathematical Sciences, University of Nottingham,}\\
\emph{UK, NG7 2RD}}
\maketitle
\begin{abstract}
We attempt to show how relationalism might help in understanding Bell's theorem. We also present an analogy with Darwinian evolution in order to pedagogically hint at how one might go about using a theory in which one does not even desire to explain correlations by invoking common causes.
\end{abstract}
\textbf{Keywords}: Bell's Theorem; Quantum Theory; Relationalism.\\
\section*{Motivation}
\label{sec:convergency}
In the first section we shall introduce Bell's theorem and explain how a relational philosophy might be used in interpreting it. In the second section we will present an analogy with Darwinian evolution via two parables. Our main aim is pedagogy---in explaining Bell's theorem to the layman.
\section*{Bell's Theorem}
\label{sec:Bell}
In his seminal work \cite{BellBOOK} Bell defines formally what he means by `locality' and `completeness' and he convincingly proves that in order to maintain his definition of `locality' one must reject his definition of `completeness' (or vice versa). His definitions are very intuitive so they are very difficult to reject---and this is exactly the reason his theorem is so powerful. We will outline Bell's beautiful and, we would argue, irrefutable proof below and then we shall show that using relationalism we can, as Bell proves we should, reject his definition of `completeness' and/or `locality'. Bell's theorem is even more difficult to dissolve than Einstein, Podolsky and Rosen's famous theorem \cite{EPR} because we have, strictly speaking, to reject both his particular `completeness' assumption and his particular `locality' one---this is merely because his `completeness' assumption is an integral part of his `locality' assumption. But we are getting ahead of ourselves. Let us briefly discuss Bell's theorem and then we will discuss relationalism (note that we emphatically do not refute his proof because it is, in our opinion, wholly and profoundly correct---the aim of this note is to explain why this is the case).
If we have two experiments, labeled by parameters $a$ and $b$, which are spacelike separated, where $A$ and $B$ parameterise the particular results that we might receive in $a$ and $b$ respectively, then Bell assumes the following:
\begin{equation}
p(AB \vert ab I) = p(A \vert a I)p(B \vert b I)
\label{locality}
\end{equation}
\noindent where $I$ is any other prior information that goes into the assignment of probabilities (this might include `hidden variables' if one wishes to introduce such odd things). $A,B,a$ and $b$ can take a variety of different values. Bell called this factorisation assumption `local causality' with the explicit hint that any `complete' theory which disobeys this assumption must embody nonlocal causality of a kind.
Jarrett \cite{Jarrett84} showed that this factorisation assumption can be split up into two logically distinct assumptions which, when taken together, imply Eq.\,(\ref{locality}). These two assumptions are, as named by Shimony \cite{Shimon86}, parameter independence and outcome independence. Shimony \cite{Shimon86} argued that orthodox quantum theory itself obeys parameter independence, namely:
\begin{equation}
p(A \vert abI) = p(A \vert aI).
\label{parameter}
\end{equation}
\noindent
This means that, in orthodox quantum theory, the probability that one predicts for an event $A$ in one spacetime region does not change when the parameter $b$ chosen in measuring an entangled subsystem elsewhere is known. Knowledge of $b$ doesn't help in predicting anything about $A$. If parameter independence is disobeyed then, some suggest \cite{Shimon86}, we would be able to signal between spacelike separated regions using orthodox quantum theory and, as such, there seems to be no harm in presuming that parameter independence failing means that nonlocal causality is probably manifest. However, by far the more subtle assumption implicit in Bell's `locality' assumption is outcome independence (which orthodox quantum theory does not obey):
\begin{equation}
p(A \vert BabI) = p(A \vert abI).
\label{outcome}
\end{equation}
\noindent
Knowledge of $B$ can and does affect the predictions one makes about $A$ in the standard interpretation of quantum theory \cite{Shimon86}. Bell implicitly justified assumption (\ref{outcome}) by presuming `completeness' of a certain kind. If one has `complete' information about the experiments then finding out the outcome $B$ cannot teach you anything about the probability of outcome $A$, hence we should presume (\ref{outcome}) for such `complete' theories. So outcome independence does not necessarily have anything to do with locality. Even if one were a nonlocal observer who gathered his information nonlocally one would still assume (\ref{outcome}) as long as that information was `complete'. So two assumptions go into Bell's factorisation assumption, namely a `locality' assumption (\ref{parameter}) and a `completeness' assumption (\ref{outcome}). Bell then proves elegantly that (\ref{parameter}) and (\ref{outcome}) together imply an inequality that quantum theory disobeys. So, quantum theory does not obey Bell's locality condition (\ref{locality}), which is itself made up of two sub-assumptions (\ref{parameter}) and (\ref{outcome}). Note that \emph{the one sub-assumption that orthodox quantum theory emphatically does not obey is the `completeness' one} (\ref{outcome}). We hope to convince you that any justification of (\ref{outcome}) relies on a category error\footnote{One argument against this presentation \cite{MaudlinBOOK} is that the splitting of Bell's factorisation assumption into (\ref{parameter}) and (\ref{outcome}) is not unique, and one could equally well split the factorisation assumption into different sub-assumptions. 
There is no \emph{a priori} reason to choose one particular way to split (\ref{locality}) into prior assumptions. This we agree with, but note that there are \emph{a posteriori} reasons for choosing to use Jarrett's analysis while discussing orthodox quantum theory. Since we are not refuting Bell's theorem---we would rather merely adopt a different interpretation of his assumptions---it does not matter which way we split (\ref{locality}), we merely split it up into (\ref{parameter}) and (\ref{outcome}) for pedagogical reasons.}.
So we used completeness to justify demanding (\ref{outcome}) but could we not similarly have used completeness to demand (\ref{parameter})? Yes we could. So the problem arises that it is difficult even to justify (\ref{parameter}) as a locality assumption. The only reason we \emph{do} call (\ref{parameter}) a locality assumption is that some theories which disobey it could perhaps be used to signal between spacelike separated regions. The point, however, is moot because quantum theory (in the orthodox interpretation) obeys (\ref{parameter}) and hence we have no problem with signalling in standard quantum theory \cite{Shimon86}. So Bell's theorem seems to suggest that quantum theory is `incomplete' rather than `nonlocal'. The very assumption that we should demand that we can ever find the \emph{real, absolute, ontological} probability independently of the probabilities we assign to other propositions seems to be \emph{the} assumption that orthodox quantum theory disobeys in Bell's analysis. Hence one might wish to adopt a relational approach to probability theory \cite{Marlow06fold} where we cannot even define the probability of an event independently of the probabilities we assign to other events---\emph{cf.} Cox's axioms of probability \cite{CoxBOOK,Cox46,JaynesBOOK}. So we must, it seems, try to interpret quantum theory in such a way that demanding (\ref{outcome}) is simply a category error of some sort (rather than something we desire in the first place)---Bell's theorem is so elegant and irrefutable that we must try to accommodate his conclusions.
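The asymmetry between parameter independence and outcome independence in orthodox quantum theory can be made concrete. The following Python sketch (our illustration, using the textbook singlet-state probabilities, which the note itself does not write out) checks that those correlations obey parameter independence, violate outcome independence, and push the CHSH combination of correlators past the bound that the two conditions would jointly impose.

```python
import numpy as np

def p_joint(A, B, a, b):
    """Orthodox singlet-state prediction p(A,B|a,b) for outcomes
    A, B in {+1, -1} at analyser settings a and b."""
    return 0.25 * (1 - A * B * np.cos(a - b))

a, b = 0.3, 1.1

# Parameter independence holds: the marginal p(A|a,b) is 1/2,
# whatever the distant setting b.
pA = sum(p_joint(+1, B, a, b) for B in (+1, -1))   # -> 0.5

# Outcome independence fails: conditioning on the distant outcome B
# changes the probability assigned to A.
pB = sum(p_joint(A, +1, a, b) for A in (+1, -1))
pA_given_B = p_joint(+1, +1, a, b) / pB            # != 0.5 in general

# Together, the two conditions would bound the CHSH combination of
# correlators E(a,b) by 2; the singlet correlations reach 2*sqrt(2).
def E(a, b):
    return sum(A * B * p_joint(A, B, a, b) for A in (+1, -1) for B in (+1, -1))

S = E(0, np.pi/4) - E(0, 3*np.pi/4) + E(np.pi/2, np.pi/4) + E(np.pi/2, 3*np.pi/4)
print(pA, pA_given_B, abs(S))   # abs(S) -> 2.828... > 2
```

The specific angles chosen for the CHSH combination are the standard ones maximising the violation; any assignment satisfying both sub-assumptions would keep $|S| \leq 2$.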
Relational theories are ones in which we do not use statements about mere ontology, we only use facts or definitions about things that we can rationally justify. In fact, this is the only real principle of relational theories introduced by Leibniz, it is called the Principle of Sufficient Reason (PSR) \cite{Smolin05}. The PSR implies another principle called the Principle of Identifying the Indiscernible (PII) which says that if we cannot distinguish two theoretical entities by distinctions that we can rationally justify, we should identify them. In other words, if two things are to be distinct then they must be distinct for an identifiable reason. No two equals are the same. These are desiderata that ensure that we do not introduce facts, statements or distinctions in our theories which we are not rationally compelled to. However natural or obvious our assumptions sound, if we are not rationally compelled to use them we clearly ought not to use them in case we are mistaken (and, of course, there are lots and lots of ways we might be mistaken). So, for example, it seems very natural and obvious to define motion with respect to an absolute `flat' space and time (like in Newton's theory of gravity) because we instinctively na\"{\i}vely feel this is the case but it turns out that, since we are not rationally compelled to use such an absolute definition, we ought not to. This was Einstein's insight and it allowed him to derive a better theory than Newton's. In \cite{Marlow06fold} we argued that such a `parsimonious' relational philosophy can help in interpreting and generalising probability theory and quantum theory. For other relational approaches to quantum theory see \cite{Rovel96,SR06,Marlow06weak}.
\section*{Convergency}
It may seem like an odd jump to now begin discussing an analogy with Darwinian evolution. Nonetheless, this is exactly what we shall do and let us explain why. It is easy to see that Darwin also seemingly used such careful rational desiderata \cite{Smolin05}. Many people used to claim that species were eternal categories. One might perversely say that species possessed `elements of biology' that could not change. Rabbits are rabbits because they embody some eternal `Rabbitness'. A few hundred years ago this assumption didn't seem too implausible. Nonetheless, it is wrong. Utterly fallacious. Darwin argued that one was not rationally compelled to make such an assumption. And rejecting this assumption allowed him to design, with considerable effort and insight, his theories of natural and sexual selection. It might be that an analogous rationalist approach might help in understanding Bell's theorem.
Inspired by \cite{DawkinsBOOK}, let us introduce a little parable. Imagine that, in some horrible future, we find two species which look and behave in identical ways and these two organisms each call home an island that is geologically separate from the other. This amazes us so we search as hard as we possibly can but we find, beyond reasonable doubt, that there is \emph{no} way that a common ancestor of the two species could have recently (in geological time) `rafted', `tunneled', flown or swum between the two islands. This amazes us even more. Furthermore, we check the two organisms' genetics and we find that they each have \emph{exactly the same genome}. Exactly the same. We must conclude that two species have evolved separately and have exactly the same genome. This more than merely amazes us, it shocks us to our cores, and we reel about jabbering for years trying to understand it. We know of no (and cannot rationally justify a) physical mechanism that can ensure a common genome for separate species. Some biologists commit suicide, some find God, some continue to search for a way that the two species might have evolved and conspired together regardless of the fact they are separate, and others consider it a fluke and move on. However, I am confident that we do not need to worry about this parable; this future will never become reality... beyond reasonable doubt anyway.
Note that this parable suggests a reading that is \emph{exactly opposite} to many expositions of Bell's theorem (especially Mermin's `instruction set' approach, see \cite{Mermin02} and references therein). One might question whether the `instruction sets' (or, equivalently, the catalogue of causes) we assign to two separated correlated particles are common to both. If we assume the instruction sets for each particle should be the same for two correlated particles then we conclude that quantum theory is surprising (it disobeys this assumption or embodies causal nonlocality). Our parable suggests the opposite view. Turning the argument on its head, we should be \emph{utterly shocked} to find that the `instruction sets' or catalogues of causes are the same in each region. We know of no (and cannot rationally justify a) physical mechanism that can ensure common causes within the separate regions. We are shocked if two correlated phenotypes evolved separately and ended up embodying the same genetics, but why would we not be shocked if two correlated particles were shown to have arisen from common causes? The orthodoxy \emph{demands} that separate correlated properties should arise from common causes. Why the distinction between the two analogous cases? Why?
But, of course, we need another parable because perhaps the two organisms embody the same phenotypes merely because they live in sufficiently similar surroundings, so it might be that the `instruction sets' or common causes arise in the environment of the two separate convergent species. So we use two different definitions of `niche': an organism has its internal `genetic' niche and it has its external `environmental' niche. Convergent species might evolve in geological separation seemingly \emph{because} their environmental niches are sufficiently similar. But just as two distinct genetics can give rise to the same phenotype, might it not be that two distinct `ecological' niches also house a common correlated phenotype? Yes!---this happens all over the place; biology is abundantly and wonderfully filled with convergent phenotypes that exist within significantly distinct environmental niches. (Species like Phytophthora, Platypus and Thylacine---Tasmanian Tiger---are all good examples of species which seem to embody convergent phenotypes with respect to other separated species; similarly, Koalas have fingerprints...). So we begin to question whether, in biology, we should ever demand that correlated phenotypes arise from common causes. Certainly we might happily begin to question such a thing without resorting to proclamations about nature being `nonlocal'.
So, another parable. Imagine in the future we again have two geologically separate islands and two seemingly identical species. Let's say they are apes, and that all the phenotypes that we can identify are pretty much the same: they have the same gait, both have grey hair on their heads, squat when they defaecate, speak the same jargon, the same kidney function etc. We would be very surprised if they have the same genetics as long as we rule out the idea that a recent ancestor `rafted' between the islands. We catch these apes and try to make them mate, but they do not seem to want to. We talk to them nicely and even though it disgusts them they mate. They do not produce any offspring or, if they do, their offspring are sterile\footnote{This is a biological version of the relational Principle of Identifying the Indiscernible.}; they are different species with different genetics. We are happy about this because it confirms that they evolved separately as we would expect two geologically separate species would. But then we ask ourselves, why do they share common phenotypes regardless of the fact they evolved in geologically separate niches? Perhaps they share common phenotypes because their niches are sufficiently similarly simian. So now let us list common features of their ecological niches: the forests on each island have similarly shaped trees, a similar array of fruit, a similar array of predators, the temperatures and humidity are the same, the same sun shines down on the two islands, etc. So it seems that the two apes have evolved common phenotypes \emph{because} their niches embody common causes. This seems obvious, right? Bell convincingly, and profoundly, proves that we cannot use analogous sets of common causes to explain correlations in quantum theory while maintaining causal locality, for such a `complete' local theory would obey (\ref{outcome}).
However, note that, in biology at least, we have exactly the same problem we had with the `genetics' parable. We should look at it from \emph{exactly the opposite point of view}. If we were to find that the environmental niches were so amazingly similar between the islands so as to ensure that two simians existed with the same phenotypes we would be utterly shocked. If the local ecologies had `genomes'---we might call them `e-nomes'---we would be shocked if their `e-nomes' were the same (for \emph{exactly the same} reason we would be shocked if two species that have evolved separately for a long time have the same genome). It would be as if the environments of each island were conspiring with each other, regardless of geological separation, so as to design simian apes. \emph{That} would be, by far, the more amazingly shocking conspiracy in comparison to what, in fact, actually happens: convergent phenotypes evolve with distinct genetics and within distinct ecologies. Common causes are the conspiracy---an analogue of the illusion of design. So we must, in biology, be careful to distinguish common causes from `sufficiently similar' uncommon causes which might arise by natural means. We simply \emph{cannot justify} the presumption of common causes. Like a magician on stage, nature does not repeat its tricks in the same way twice\footnote{The space of sets of possible causes for a particular phenotype to be the case is so vast that nature is unlikely to use the same set of causes the next time. There are many ways to \emph{evolve} a cat.}. Bell's theorem proves that we cannot assume an analogous common cause design-conspiracy in quantum theory while maintaining causal locality, but clearly it is a logical possibility that, however unpalatable, we ought \emph{not to assume it in the first place}. 
So, just like Bohr \cite{Bohr35} famously rejected EPR's \cite{EPR} assumptions and dissolved their nonlocality proof, we might yet happily reject Bell's \cite{BellBOOK} assumptions. And Bell (like EPR) \emph{proved} that we should reject his (their) assumptions. Two interpretations remain: some suggest that we should search for a way to maintain causal locality \cite{SR06,Bohr35,Peres03,Jaynes89,BellBOOKsub} while others suggest that we should use causally nonlocal common cause theories because they are, in the least, pedagogical and easy to understand \cite{BellBOOK}.
Furthermore, we can ask ourselves another penetrating question: in our analogy we know that the islands are similar, that they have similar forests, fruits, predators, temperatures and so forth but what, by Darwin's beard, gives us the sheer audacity, the silly ape-centric irrationalism, to call such things `causes'? Such a thing stinks of teleology. The `real' causes are small unpredictable changes by seemingly `random' mutation. Niches don't `cause' particular phenotypes to be the case, neither as a whole nor in their particulars. Analogously, what sheer teleological audacity do we have to discuss hidden `causes' in quantum theory? Measurements don't cause particular properties to be the case. Phenotypes and properties merely evolve---and we can design theories of probabilistic inference in which we assign probabilities to propositions about whether such phenotypes (resp. properties) will be said to be the case in certain niches (resp. measurements)\footnote{This Darwinian analogy correlates quite well with the idea of complementarity \cite{BohrBOOK}. `Complementary' phenotypes rarely arise within the same niche merely because they tend to arise in different niches.}. Nonetheless we don't have to give up on causality, nor do we have to give up local causality (merely some of our mere definitions of `local causality'). Local causality ought to be inviolate unless we find something physical, that we can \emph{rationally justify}, that travels faster than light.
This analogy with biology, and Bell's theorem itself, suggests that common causes might be something that we \emph{desire} but they are not something that we can \emph{demand} of nature; common causes are an anthropomorphic ideal. So, now let us ask ourselves, where might this analogy fail? Correlations between separate phenotypes arise over long periods whereas quantum correlations arise over very short time scales. Perhaps convergency will not have enough `time' to occur in quantum theory. However, `long' is also defined relative to us: the statutory apes. Quantum correlations happen over `long' periods of `Planck' time.
There is one interesting way in which the analogy succeeds brilliantly. Evolution happens by ensuring that small changes of phenotypes occur unpredictably. Quantum mechanics is, at its very core, a theory of small\footnote{\emph{Cf.} Hardy's `damn classical jumps' \cite{Hardy01}.} unpredictable changes as well. Perhaps we can learn lessons from quantum theory that are similar to those we learnt from Darwin. Rationally we cannot justify any conspiracy or design (some demon or god which ensures correlations arise from common causes) so rather, we must invoke a theory that explains all the `magical' and `interesting' correlations we see in nature by manifestly rejecting such accounts. Instead we should search for a physical mechanism by which convergency occurs in a causally local manner (interestingly, this is an approach suggested by Bell himself\footnote{He discusses trying to define a more `human' definition of locality that is weaker than his formal definition---we would argue, in opposition, that his formal definition (\ref{locality}) is the `human' one because it panders to an anthropomorphic ideal and is \emph{clearly not obeyed by nature.}} in \cite{BellBOOKsub}). Let us learn from life.
So, this brings us back to relationalism. If we follow a relational philosophy we need to obey the Principle of Identifying the Indiscernible (PII). Remember that the PII ensures that if all the facts are telling us that two things are exactly the same then we must identify them, they are the same entity and not separate entities. Let us catalogue all the `causes' for one event to be the case, or even for it to probably be the case (one might call these `causes' hidden variables). Similarly for another correlated event at spacelike separation. The PII tells us that these two catalogues of `causes' \emph{must not be the same}. The catalogues of `causes' cannot be the same otherwise we would identify indiscernibles and we could not be discussing separate entities. This \emph{uncommon} `cause' principle should not worry us---it is the very definition we use to allow us to call two things separate in the first place. This absolves us of reasoning teleologically about nonlocality. Instead we can reason rationally about locality. So two species that evolve separately will, in general, be separate species that cannot breed however similar the phenotypes that they embody. If we isolate two species, we know of no natural mechanism which maintains common causes in geological time, there are no `elements of biology'. Analogously, two separate correlated particles are, in fact, separate \emph{because} they arise from uncommon `causes'.
\section*{Summary and Conclusion}
Bell's theorem takes the form: `something' plus causal locality logically implies a condition---let's call it $C$---and quantum theory disobeys $C$. Rather than fruitlessly argue over whether Bell's theorem is logically sound or whether quantum theory disobeys $C$, we have tried to discuss why people choose to reject causal locality rather than the `something else' that goes into Bell's theorem. Bell was quite explicit in noting that the reason he chose to reject causal locality was because the `something' constituted some very basic ideas about realism that he didn't know how to reject. That `something' might be called `completeness' and we obviously don't want quantum theory to be incomplete. Hence Bell's conclusions rely on that `something' being eminently desirable---a very \emph{anthropomorphic} criterion. Also, he noted that we are used to causally nonlocal theories (\emph{cf.} Newtonian gravity) and that they are pedagogically useful and easy to understand. These are all good, if not wholly compelling, reasons for choosing to reject causal locality.
The major problem for those who would rather not reject causal locality (until a particle or information is found to travel faster than light) is in identifying clearly what that `something' is, or what part of that `something' we should reject. We have used a relational standpoint to give two options. Clearly Bell made some very specific assumptions about probability theory and it might be here that we can nullify his theorem. Perhaps probability theory ought to be relational, and thus it is not clear whether we should demand outcome independence (\ref{outcome}) on logical grounds \cite{Jaynes89}. If we define probabilities relationally then what we \emph{mean} by a probability will be its unique catalogue of relationships with all other probabilities (regardless of any separation of the events to which those probabilities refer). This doesn't seem to convince people, perhaps because Bell's justification for (\ref{locality}) didn't seem to come from probability theory \emph{per se} but rather it stemmed from some notion of `completeness' or `realism' (an argument which falters when we note that Bell's ideas of `realism' or `completeness' might themselves stem from a particular understanding of probability theory \cite{Jaynes89}). Nonetheless we have provided a second way out. Instead of assuming that the pertinent part of that `something' is mainly to do with probability theory, let us go to the heart of Bell's theorem and assume that `something' that we might yet reject is ``correlations ought to have common causes''. This common cause principle is the cornerstone of Bell's theorem, but perhaps it is just plain wrong. Like `elements of biology' are just plain wrong.
We have given an example of a physical theory---if also a biological theory---which convincingly inspires doubt about the anthropomorphic desire for common causes. This is not to suggest that Darwinian evolution necessarily disobeys $C$, nor that we necessarily ought to use Darwinian principles in quantum theory. Rather, ``correlations ought to have common causes'' might be a rejectable assumption, and there might be \emph{good reason} to reject it. If one agrees with a relational philosophy then there is a simple argument that suggests that correlations ought to have \emph{un}common causes, \emph{cf.} Leibniz's PII. Even if you do not accept this na\"{\i}ve argument, we hope that the analogy with Darwinian evolution will, in the least, convince you that the assumption of common causes might yet \emph{possibly} be rejected by rational reasoning (there are theories out there, even if quantum theory is not yet one of them, where the presumption of common causes for correlations just doesn't hold true, and for good reason). Perhaps we have even convinced you that uncommon causes are \emph{plausible}, and that there might yet be found some natural mechanism by which quantum-convergency occurs---one that we might yet identify and investigate. It is often suggested that causal nonlocality is a route that we logically need not take, but it is also possible that we \emph{ought} not to take it.
\end{document} |
\begin{document}
\title{On $(h,k,\mu,\nu)$-trichotomy of evolution operators in Banach spaces}
\author{ Mihail Megan, Traian Ceau\c su, Violeta Crai}
\begin{abstract}
The paper considers some concepts of trichotomy with different growth rates for evolution operators in Banach spaces. Connections between these concepts and characterizations in terms of Lyapunov-type norms are given.
\end{abstract}
\section{Introduction}
In the qualitative theory of evolution equations, exponential dichotomy, essentially introduced by O. Perron in \cite{peron}, is one of the most important asymptotic properties and in recent years it has been treated from various perspectives.
For some of the most relevant early contributions in this area we refer to the books of J.L. Massera and J.J. Sch\"affer \cite{masera}, Ju. L. Daleckii and M.G. Krein \cite{daletchi} and W.A. Coppel \cite{copel}. We also refer to the book of C. Chicone and Yu. Latushkin \cite{chicone}.
In some situations, particularly in the nonautonomous setting, the concept of uniform exponential dichotomy is too restrictive and it is important to consider more general behaviors. Two different perspectives can be identified for generalizing the concept of uniform exponential dichotomy: on one hand, one can define dichotomies that depend on the initial time (and are therefore nonuniform) and, on the other hand, one can consider growth rates that are not necessarily exponential.
The first approach leads to concepts of nonuniform exponential dichotomies and can be found in the works of L. Barreira and C. Valls \cite{ba2} and in a different form in the works of P. Preda and M. Megan \cite{preda-megan} and M. Megan, L. Sasu and B. Sasu \cite{megan-sasu}.
The second approach is present in the works of L. Barreira and C. Valls \cite{ba3}, A.J.G. Bento and C.M. Silva \cite{bento} and M. Megan \cite{megan}.
A more general dichotomy concept, called $(h,k)$-dichotomy, where $h$ and $k$ are growth rates, was introduced by M. Pinto in \cite{pinto2}. The concept of $(h,k)$-dichotomy has a great generality and it permits the construction of similar notions for systems with dichotomic behaviour which are not described by the classical theory of J.L. Massera \cite{masera}.
As a natural generalization of exponential dichotomy (see \cite{ba3}, \cite{vio}, \cite{elaydi1}, \cite{saker}, \cite{sasu} and the references therein), exponential trichotomy is one of the most complex asymptotic properties of dynamical systems arising from the central manifold theory (see \cite{carr}). In the study of the trichotomy the main idea is to obtain a decomposition of the space at every moment into three closed subspaces: the stable subspace, the unstable subspace and the central manifold.
Two concepts of trichotomy have been introduced: the first by R.J. Sacker and G.L. Sell \cite{saker} (called (S,S)-trichotomy) and the second by S. Elaydi and O. Hayek \cite{elaydi1} (called (E,H)-trichotomy).
The existence of exponential trichotomies is a strong requirement and hence it is of considerable interest to look for more general types of trichotomic behaviors.
In previous studies of uniform and nonuniform trichotomies, the growth rates are always assumed to be functions of the same type. However, nonuniformly hyperbolic dynamical systems vary greatly in form and no single notion of nonuniform trichotomy can characterize all nonuniform dynamics well. Thus it is necessary and reasonable to look for more general types of nonuniform trichotomies.
The present paper considers the general concept of nonuniform $(h,k,\mu,\nu)$-trichotomy, which not only incorporates the existing notions of uniform or nonuniform trichotomy as special cases, but also allows different growth rates in the stable subspace, the unstable subspace and the central manifold.
We give characterizations of nonuniform $(h,k,\mu,\nu)$-trichotomy using families of norms equivalent with the initial norm of the state space. Thus we obtain a characterization of nonuniform $(h,k,\mu,\nu)$-trichotomy in terms of a certain type of uniform $(h,k,\mu,\nu)$-trichotomy.
As an original reference for considering families of norms in the nonuniform theory we mention Ya. B. Pesin's works \cite{pesin} and \cite{pesin1}. Our characterizations using families of norms are inspired by the work of L. Barreira and C. Valls \cite{ba3} where characterizations of nonuniform exponential trichotomy in terms of Lyapunov functions are given.
\section{Preliminaries}
Let $X$ be a Banach space and $\mathcal{B}(X)$ the Banach algebra of all linear and bounded operators on $X$. The norms on $X$ and on $\mathcal{B}(X)$ will be denoted by $\|\cdot\|$. The identity operator on $X$ is denoted by $I$. We also denote by $\Delta=\{(t,s)\in\mathbb{R}_+^2:t\geq s\geq 0\}$.
We recall that
a map $U:\Delta\to\mathcal{B}(X)$ is called an \textit{evolution operator} on $X$ if
\begin{itemize}
\item[$(e_1)$]$U(t,t)=I$, for every $t\geq 0$
\item[] and
\item[$(e_2)$]$U(t,t_0)=U(t,s)U(s,t_0)$, for all $(t,s),(s,t_0)\in\Delta$.
\end{itemize}
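For example, for any continuous function $a:\mathbb{R}_+\to\mathbb{R}$ the map
\begin{align*}
U(t,s)=e^{\int_s^t a(\tau)\,d\tau}\,I
\end{align*}
is an evolution operator on $X$: indeed, $U(t,t)=e^{0}I=I$ and
\begin{align*}
U(t,s)U(s,t_0)=e^{\int_s^t a(\tau)\,d\tau}\,e^{\int_{t_0}^s a(\tau)\,d\tau}\,I=e^{\int_{t_0}^t a(\tau)\,d\tau}\,I=U(t,t_0),
\end{align*}
for all $(t,s),(s,t_0)\in\Delta$.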
\begin{definition}
A map $P:\mathbb{R}_+\to\mathcal{B}(X)$ is called
\begin{itemize}
\item[(i)] \textit{a family of projectors} on $X$ if
$$P^2(t)=P(t),\text{ for every } t\geq 0;$$
\item [(ii)] \textit{invariant} for the evolution operator $U:\Delta\to\mathcal{B}(X)$ if
\begin{align*}
U(t,s)P(s)x=P(t)U(t,s)x,
\end{align*}
for all $(t,s,x)\in \Delta\times X$;
\item[(iii)] \textit{strongly invariant} for the evolution operator $U:\Delta\to\mathcal{B}(X)$ if it is invariant for $U$ and for all $(t,s)\in\Delta$ the restriction of $U(t,s)$ to Range $P(s)$ is an isomorphism from Range $P(s)$ to Range $P(t)$.
\end{itemize}
\end{definition}
\begin{remark}
It is obvious that if $P$ is strongly invariant for $U$ then it is also invariant for $U$. The converse is not valid (see \cite{mihit}).
\end{remark}
\begin{remark}\label{rem-proiectorstrong}
If the family of projectors $P:\mathbb{R}_+\to\mathcal{B}(X)$ is strongly invariant for the evolution operator $U:\Delta\to\mathcal{B}(X)$ then (\cite{lupa}) there exists a map $V:\Delta\to\mathcal{B}(X)$ with the properties:
\begin{itemize}
\item[$v_1)$ ] $V(t,s)$ is an isomorphism from Range $ P(t)$ to Range $ P(s)$,
\item [$v_2)$ ] $U(t,s)V(t,s)P(t)x=P(t)x$,
\item[$v_3)$ ] $V(t,s)U(t,s)P(s)x=P(s)x$,
\item[$v_4)$ ]$V(t,t_0)P(t)=V(s,t_0)V(t,s)P(t)$,
\item[$v_5)$ ]$V(t,s)P(t)=P(s)V(t,s)P(t)$,
\item[$v_6)$ ] $V(t,t)P(t)=P(t)V(t,t)P(t)=P(t)$,
\end{itemize}
for all $(t,s),(s,t_0)\in \Delta$ and $x\in X$.
\end{remark}
\begin{definition}
Let $P_1,P_2,P_3:\mathbb{R}_+\to\mathcal{B}(X)$ be three families of projectors on $X$. We say that the family $\mathcal{P}=\{P_1,P_2,P_3\}$ is
\begin{itemize}
\item [(i)] \textit{orthogonal} if
\begin{itemize}
\item [$o_1)$]$P_1(t)+P_2(t)+P_3(t)=I$ for every $t\geq 0$\\
and
\item[$o_2)$] $P_i(t)P_j(t)=0$ for all $t\geq 0$ and all $i,j\in\{1,2,3\}$ with $i\neq j$;
\end{itemize}
\item[(ii)] \textit{compatible} with the evolution operator $U:\Delta\to\mathcal{B}(X)$ if
\begin{itemize}
\item[$c_1)$] $P_1$ is invariant for $U$\\
and
\item[$c_2)$] $P_2,P_3$ are strongly invariant for $U$.
\end{itemize}
\end{itemize}
\end{definition}
In what follows we shall denote by $V_j(t,s)$ the isomorphism (given by Remark \ref{rem-proiectorstrong}) from Range $P_j(t)$ to Range $P_j(s)$, for $j\in\{2,3\}$, where $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with $U.$
\begin{definition}
We say that a nondecreasing map $h:\mathbb{R}_+\to[1,\infty)$ is a \textit{growth rate} if
\begin{align*}
\lim\limits_{t\to\infty}h(t)=\infty.
\end{align*}
\end{definition}
As particular cases of growth rates we mention:
\begin{itemize}
\item [$r_1)$ ] \textit{exponential rates}, i.e.
$h(t)=e^{\alpha t}$ with $\alpha>0;$
\item [$r_2)$ ]\textit{polynomial rates}, i.e.
$h(t)=(t+1)^\alpha$ with $\alpha>0.$
\end{itemize}
Let $\mathcal{P}=\{P_1, P_2, P_3\}$ be an orthogonal family of projectors which is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ and $h,k,\mu,\nu:\mathbb{R}_+\to[1,\infty)$ be four growth rates.
\begin{definition}\label{def-tricho}
We say that the pair $(U,\mathcal{P})$ is \textit{$(h,k,\mu,\nu)$-trichotomic} (and we denote $(h,k,\mu,\nu)-t$) if there exists a nondecreasing function $N:\mathbb{R}_+\to [1,\infty)$ such that
\begin{itemize}
\item [$(ht_1)$ ]$h(t)\|U(t,s)P_1(s)x\|\leq N(s)h(s) \|P_1(s)x\|$
\item [$(kt_1)$ ]$k(t)\|P_2(s)x\|\leq N(t) k(s) \|U(t,s)P_2(s)x\|$
\item [$(\mu t_1)$ ]$\mu(s)\|U(t,s)P_3(s)x\|\leq N(s) \mu(t) \|P_3(s)x\|$
\item [$(\nu t_1)$ ]$\nu(s)\|P_3(s)x\|\leq N(t)\nu(t) \|U(t,s)P_3(s)x\|,$
\end{itemize}
for all $(t,s,x)\in \Delta\times X.$
\end{definition}
In particular, if the function $N$ is constant then we obtain the \textit{uniform $(h,k,\mu,\nu)$-trichotomy} property, denoted by $u-(h,k,\mu,\nu)-t$.
\begin{remark}
As important particular cases of $(h,k,\mu,\nu)$-trichotomy we have:
\begin{itemize}
\item[(i)] \textit{(nonuniform) exponential trichotomy} ($et$) and respectively \textit{uniform exponential trichotomy} ($uet$) when the rates $h,k,\mu,\nu$ are exponential rates;
\item[(ii)]\textit{(nonuniform) polynomial trichotomy} ($pt$) and respectively \textit{uniform polynomial trichotomy} ($upt$) when the rates $h,k,\mu,\nu$ are polynomial rates;
\item[(iii)]\textit{(nonuniform) $(h,k)$-dichotomy} ($(h,k)-d$) and respectively \textit{uniform $(h,k)$-dichotomy} ($u-(h,k)-d$) when $P_3=0$;
\item[(iv)] \textit{(nonuniform) exponential dichotomy} ($ed$) and respectively \textit{uniform exponential dichotomy} ($ued$) when $P_3=0$ and the rates $h,k$ are exponential rates;
\item[(v)]\textit{(nonuniform) polynomial dichotomy} ($pd$) and respectively \textit{uniform polynomial dichotomy} ($upd$) when $P_3=0$ and the rates $h,k$ are polynomial rates.
\end{itemize}
\end{remark}
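For instance, in case (i), taking $h(t)=e^{\alpha t}$ and $k(t)=e^{\beta t}$ with $\alpha,\beta>0$, the inequalities $(ht_1)$ and $(kt_1)$ of Definition \ref{def-tricho} take the familiar exponential form
\begin{align*}
\|U(t,s)P_1(s)x\|&\leq N(s)\,e^{-\alpha(t-s)}\|P_1(s)x\|,\\
\|P_2(s)x\|&\leq N(t)\,e^{-\beta(t-s)}\|U(t,s)P_2(s)x\|,
\end{align*}
for all $(t,s,x)\in\Delta\times X$, i.e. (nonuniform) contraction on the stable subspace and expansion on the unstable subspace.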
It is obvious that if the pair $(U,\mathcal{P})$ is $u-(h,k,\mu,\nu)-t$ then it is also $(h,k,\mu,\nu)-t$. In general, the converse is not valid, a phenomenon illustrated by
\begin{example}
Let $U:\Delta\to\mathcal{B}(X)$ be the evolution operator defined by
\begin{align}
U(t,s)=\frac{u(s)}{u(t)}\left( \frac{h(s)}{h(t)}P_1(s)+ \frac{k(t)}{k(s)}P_2(s)+\frac{\mu(t)}{\mu(s)}\frac{\nu(s)}{\nu(t)}P_3(s)\right)
\end{align}
where $u,h,k,\mu,\nu:\mathbb{R}_+\to[1,\infty)$ are growth rates and $P_1, P_2, P_3:\mathbb{R}_+\to\mathcal{B}(X)$ are families of projectors on $X$ with the properties:
\begin{itemize}
\item[(i)]$P_1(t)+P_2(t)+P_3(t)=I$ for every $t\geq 0$;
\item[(ii)]\[ P_i(t)P_j(s)=\left\{ \begin{array}{ll} 0&\mbox{if $i\neq j$} \\
P_i(s),& \mbox{ if $i=j$},
\end{array} \right. \]for all $(t,s)\in\Delta.$
\item[(iii)] $U(t,s)P_i(s)=P_i(t)U(t,s)$ for all $(t,s)\in\Delta$ and all $i\in\{1,2,3\}$.
\end{itemize}
For example if $P_1,P_2,P_3$ are constant and orthogonal then the conditions (i),(ii) and (iii) are satisfied.
We observe that
\begin{align*}
h(t)\|U(t,s)P_1(s)x\|&=\frac{u(s)h(s)}{u(t)}\|P_1(s)x\|\leq u(s) h(s)\|P_1(s)x\|\\
u(t)k(s)\|U(t,s)P_2(s)x\|&=u(s)k(t)\|P_2(s)x\|\geq k(t)\|P_2(s)x\|\\
\mu(s)\|U(t,s)P_3(s)x\|&=\frac{u(s)\mu(t)\nu(s)}{u(t)\nu(t)}\|P_3(s)x\|\leq u(s)\mu(t)\|P_3(s)x\|\\
u(t)\nu(t)\|U(t,s)P_3(s)x\|&=\frac{u(s)\nu(s)\mu(t)}{\mu(s)}\|P_3(s)x\|\geq \nu(s)\|P_3(s)x\|
\end{align*} for all $(t,s,x)\in\Delta\times X.$
Thus the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$. \\
If we assume that the pair $(U,\mathcal{P})$ is $u-(h,k,\mu,\nu)-t$ then there exists a real constant $N\geq 1$ such that
\begin{align*}
N u(s)\geq u(t),\text{ for all } (t,s)\in\Delta.
\end{align*}
Taking $s=0$ we obtain $Nu(0)\geq u(t)$ for all $t\geq 0$, which is impossible since $u(t)\to\infty$ as $t\to\infty$.
\end{example}
\begin{remark}
The previous example shows that for all four growth rates $h,k,\mu,\nu$ there exists a pair $(U,\mathcal{P})$ which is $(h,k,\mu,\nu)-t$ and is not $u-(h,k,\mu,\nu)-t$.
\end{remark}
In the particular case when $\mathcal{P}$ is compatible with $U$ a characterization of $(h,k,\mu,\nu)-t$ is given by
\begin{proposition}\label{prop strong invariant trichotomy}
If $\mathcal{P}=
\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic if and only if there exists a nondecreasing function $ N_1:\mathbb{R}_+\to[1,\infty)$ such that
\begin{itemize}
\item [$(ht_2)$ ]$h(t)\|U(t,s)P_1(s)x\|\leq N_1(s)h(s)\|x\|$
\item [$(kt_2)$ ]$k(t)\|V_2(t,s)P_2(t)x\|\leq N_1(t) k(s) \|x\|$
\item [$(\mu t_2)$ ]$\mu(s)\|U(t,s)P_3(s)x\|\leq N_1(s) \mu(t)\|x\|$
\item[$(\nu t_2)$ ]$\nu(s)\|V_3(t,s)P_3(t)x\|\leq N_1(t) \nu(t) \|x\|$
\end{itemize}
for all $ (t,s,x)\in \Delta\times X$, where $V_j(t,s)$ for $j\in\{2,3\}$ is the isomorphism from Range $P_j(t)$ to Range $P_j(s)$.
\end{proposition}
\begin{proof}
\textit{Necessity.} By Remark \ref{rem-proiectorstrong} and Definition \ref{def-tricho} we obtain
\begin{align*}
(ht_2)\thickspace& \thickspace h(t)\|U(t,s)P_1(s)x\|\leq N(s)h(s)\|P_1(s)x\|\leq N(s)\|P_1(s)\|h(s)\|x\|\\
&\leq N_1(s)h(s)\|x\|\\
(kt_2)\thickspace &\thickspace k(t)\|V_2(t,s)P_2(t)x\|=k(t)\|P_2(s)V_2(t,s)P_2(t)x\|\\
&\leq N(t)k(s)\|U(t,s)P_2(s)V_2(t,s)P_2(t)x\|\\
&=N(t)k(s)\|P_2(t)x\|\leq N(t)\|P_2(t)\|k(s)\|x\|\leq N_1(t)k(s)\|x\|\\
(\mu t_2)\thickspace &\thickspace \mu(s)\|U(t,s)P_3(s)x\|\leq N(s) \mu(t)\|P_3(s)x\|\leq N(s)\|P_3(s)\|\mu(t)\|x\|\\
&\leq N_1(s)\mu(t)\|x\|\\
(\nu t_2 )\thickspace&\thickspace \nu(s)\|V_3(t,s)P_3(t)x\|=\nu(s)\|P_3(s)V_3(t,s)P_3(t)x\|\\
&\leq N(t)\nu(t)\|U(t,s)P_3(s)V_3(t,s)P_3(t)x\|\\
&=N(t)\nu(t)\|P_3(t)x\|\leq N(t)\|P_3(t)\|\nu(t)\|x\|\leq N_1(t)\nu(t)\|x\|,
\end{align*}for all $(t,s,x)\in\Delta\times X,$ where
$$N_1(t)=\sup_{s\in[0,t]}N(s)(\|P_1(s)\|+\|P_2(s)\|+\|P_3(s)\|).$$
\textit{Sufficiency.} The implications $(ht_2)\Rightarrow(ht_1)$ and $(\mu t_2)\Rightarrow(\mu t_1)$ result by replacing $x$ by $P_1(s)x$ and $P_3(s)x$, respectively.
For the implications $(kt_2)\Rightarrow(kt_1)$ and $(\nu t_2)\Rightarrow (\nu t_1)$ we have (by Remark \ref{rem-proiectorstrong})
\begin{align*}
k(t)\|P_2(s)x\|&=k(t)\|V_2(t,s)U(t,s)P_2(s)x\|\leq N(t)k(s)\|U(t,s)P_2(s)x\|\\
&\text{and}\\
\nu(s)\|P_3(s)x\|&=\nu(s)\|V_3(t,s)U(t,s)P_3(s)x\|\leq N(t)\nu(t)\|U(t,s)P_3(s)x\|,
\end{align*}for all $(t,s,x)\in\Delta\times X.$
\end{proof}
A similar characterization of the $u-(h,k,\mu,\nu)-t$ concept results under the hypothesis that the projectors $P_1,P_2,P_3$ are bounded. A characterization with a compatible family of projectors, without assuming the boundedness of the projectors, is given by
\begin{proposition}\label{prop strong invariant trichotomy uniform}
If $\mathcal{P}=
\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then the pair $(U,\mathcal{P})$ is uniformly $(h,k,\mu,\nu)$-trichotomic if and only if there exists a constant $N\geq 1$ such that
\begin{itemize}
\item [$(uht_1)$ ]$h(t)\|U(t,s)P_1(s)x\|\leq Nh(s)\|P_1(s)x\|$
\item [$(ukt_1)$ ]$k(t)\|V_2(t,s)P_2(t)x\|\leq N k(s) \|P_2(t)x\|$
\item [$(u\mu t_1)$ ]$\mu(s)\|U(t,s)P_3(s)x\|\leq N \mu(t)\|P_3(s)x\|$
\item[$(u\nu t_1)$ ]$\nu(s)\|V_3(t,s)P_3(t)x\|\leq N \nu(t) \|P_3(t)x\|$
\end{itemize}
for all $ (t,s,x)\in \Delta\times X$, where $V_j(t,s)$ for $j\in\{2,3\}$ is the isomorphism from Range $P_j(t)$ to Range $P_j(s)$.
\end{proposition}
\begin{proof}
It is similar to the proof of Proposition \ref{prop strong invariant trichotomy}.
\end{proof}
\section{The main result}
In this section we give a characterization of $(h,k,\mu,\nu)-$trichotomy in terms of a certain type of uniform $(h,k,\mu,\nu)-$trichotomy using families of norms equivalent with the norms of $X$. Firstly we introduce
\begin{definition}\label{def-norma-compatibila}
A family $\mathcal{N}=\{\|\cdot\|_t: t\geq0\}$ of norms on the Banach space $X$ (endowed with the norm $\|\cdot\|$) is called \textit{compatible} to the norm $\|\cdot\|$ if there exists a nondecreasing map $C:\mathbb{R}_+\to[1,\infty)$ such that
\begin{align}
\|x\|&\leq \|x\|_t\leq C(t)\|x\|,\label{normprop-fara-proiectori1}
\end{align}
for all $(t,x)\in\mathbb{R}_+\times X$.
\end{definition}
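A simple example of a family satisfying Definition \ref{def-norma-compatibila} is obtained by rescaling the initial norm:
\begin{align*}
\|x\|_t=(1+t)\,\|x\|,\qquad (t,x)\in\mathbb{R}_+\times X,
\end{align*}
for which (\ref{normprop-fara-proiectori1}) holds with $C(t)=1+t$. Note that the constants $C(t)$ of a compatible family need not be bounded.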
\begin{proposition}\label{ex-norma-trichotomie2}
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$ then the family of norms $\mathcal{N}_1=\{\|\cdot\|_t:t\geq0\}$ given by
\begin{align}
\|x\|_t&=\sup_{\tau\geq t} \frac{h(\tau)}{h(t)}\|U(\tau,t)P_1(t)x\|+\sup_{r\leq t} \frac{k(t)}{k(r)}\|V_2(t,r)P_2(t)x\|\nonumber\\
&+\sup_{\tau\geq t} \frac{\mu(t)}{\mu(\tau)}\|U(\tau,t)P_3(t)x\|\label{norma-tricho-sus}
\end{align}
is compatible with $\|\cdot\|$.
\end{proposition}
\begin{proof}
For $\tau=t=r$ in (\ref{norma-tricho-sus}) we obtain that
\begin{align*}
\|x\|_t&\geq \|P_1(t)x\|+\|P_2(t)x\|+\|P_3(t)x\|\geq \|x\|
\end{align*}
for all $t\geq 0$.
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$ then by Proposition \ref{prop strong invariant trichotomy} there exists a nondecreasing function $N_1:\mathbb{R}_+\to[1,\infty)$ such that
\begin{align*}
\|x\|_t\leq 3N_1(t)\|x\|, \text{ for all } (t,x)\in\mathbb{R}_+\times X.
\end{align*}
Finally we obtain that $\mathcal{N}_1$ is compatible with $\|\cdot\|.$
\end{proof}
\begin{proposition}\label{ex-norma-trichotomie1}
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$ then the family of norms $\mathcal{N}_2=\{\||\cdot\||_t,t\geq0\}$ defined by
\begin{align}
\||x\||_t&=\sup_{\tau\geq t} \frac{h(\tau)}{h(t)}\|U(\tau,t)P_1(t)x\|+ \sup_{r\leq t} \frac{k(t)}{k(r)}\|V_2(t,r)P_2(t)x\|\nonumber\\
&+
\sup_{r\leq t} \frac{\nu(r)}{\nu(t)}\|V_3(t,r)P_3(t)x\|\label{norma-tricho-jos}
\end{align}
is compatible with $\|\cdot\|.$
\end{proposition}
\begin{proof}
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$ then by Proposition \ref{prop strong invariant trichotomy} there exists a nondecreasing function $N_1:\mathbb{R}_+\to[1,\infty)$ such that
\begin{align*}
\||x\||_t\leq 3N_1(t)\|x\|, \text{ for all } (t,x)\in\mathbb{R}_+\times X.
\end{align*}
On the other hand, for $\tau=t=r$ in the definition of $\||\cdot\||_t$ we obtain
\begin{align*}
\||x\||_t&\geq \|P_1(t)x\|+\|P_2(t)x\|+\|P_3(t)x\|\geq\|x\|.
\end{align*}
In consequence, by Definition \ref{def-norma-compatibila} it results that the family of norms $\mathcal{N}_2$ is compatible to $\|\cdot\|.$
\end{proof}
The main result of this paper is
\begin{theorem}\label{unif=neunif-trichotomie}If $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then
the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the norm $\|\cdot\|$ such that the following take place
\begin{itemize}
\item[($ht_3$) ] $h(t)\|U(t,s)P_1(s)x\|_t\leq h(s) \|P_1(s)x\|_s$
\item[($kt_3$) ] $k(t)\||V_2(t,s)P_2(t)x\||_s\leq k(s) \||P_2(t)x\||_t$
\item [$(\mu t_3)$ ]$\mu(s)\|U(t,s)P_3(s)x\|_t\leq \mu(t) \|P_3(s)x\|_s$
\item[$(\nu t_3)$ ]$\nu(s)\||V_3(t,s)P_3(t)x\||_s\leq \nu(t) \||P_3(t)x|\|_t$
\end{itemize}
for all $(t,s,x)\in\Delta\times X.$
\end{theorem}
\begin{proof}
\textit{Necessity.}
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic then, by Propositions \ref{ex-norma-trichotomie2} and \ref{ex-norma-trichotomie1}, there exist families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with $\|\cdot\|$.
$\boldsymbol{(ht_1)\Rightarrow(ht_3)}.$ We have that
\begin{align*}
h(t)\|U(t,s)P_1(s)x\|_t&=h(t)\|P_1(t)U(t,s)P_1(s)x\|_t\\
&=h(t)\sup_{\tau\geq t} \frac{h(\tau)}{h(t)}\|U(\tau,t)P_1(t)U(t,s)P_1(s)x\|\\
&\leq h(s)\sup_{\tau\geq s} \frac{h(\tau)}{h(s)}\|U(\tau,s)P_1(s)x\|= h(s)\|P_1(s)x\|_s,
\end{align*}for all $(t,s,x)\in\Delta\times X$.
$\boldsymbol{(kt_2)\Rightarrow(kt_3)}.$ If $(kt_2)$ holds then
\begin{align*}
k(t)\||V_2(t,s)P_2(t)x\||_s&=k(t)\||P_2(s)V_2(t,s)P_2(t)x\||_s\\
&=k(t)\sup_{r\leq s} \frac{k(s)}{k(r)}\|V_2(s,r)P_2(s)V_2(t,s)P_2(t)x\|\\
&\leq k(s) \sup_{r\leq t} \frac{k(t)}{k(r)}\|V_2(t,r)P_2(t)x\|= k(s)\||P_2(t)x\||_t
\end{align*}for all $(t,s,x)\in\Delta\times X$.
$\boldsymbol{(\mu t_1)\Rightarrow(\mu t_3)}.$ If $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic then by $(\mu t_1)$ it results
\begin{align*}
\mu(s)\|U(t,s)P_3(s)x\|_t&=\mu(s)\|P_3(t)U(t,s)P_3(s)x\|_t\\
&=\mu(s)\sup_{\tau\geq t} \frac{\mu(t)}{\mu(\tau)}\|U(\tau,t)P_3(t)U(t,s)P_3(s)x\|\\
&= \mu(s)\sup_{\tau\geq t}\frac{\mu(t)}{\mu(\tau)}\|U(\tau,s)P_3(s)x\| \leq\mu(t)\sup_{\tau\geq s}\frac{\mu(s)}{\mu(\tau)}\|U(\tau,s)P_3(s)x\|\\
&=\mu(t)\|P_3(s)x\|_s,
\end{align*}for all $(t,s,x)\in\Delta\times X$.
$\boldsymbol{(\nu t_2)\Rightarrow(\nu t_3)}.$ Using Remark \ref{rem-proiectorstrong} we obtain
\begin{align*}
\nu(s)\||V_3(t,s)P_3(t)x\||_s&=\nu(s)\||P_3(s)V_3(t,s)P_3(t)x\||_s\\
&=\nu(s)\sup_{r\leq s} \frac{\nu(r)}{\nu(s)}\|V_3(s,r)P_3(s)V_3(t,s)P_3(t)x\|\\
&\leq \nu(t)\sup_{r\leq t}\frac{\nu(r)}{\nu(t)}\|V_3(t,r)P_3(t)x\| =\nu(t)\||P_3(t)x\||_t,
\end{align*}
for all $(t,s,x)\in\Delta\times X$.
\textit{Sufficiency.} We assume that there are two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the norm $\|\cdot\|$ such that the inequalities $(ht_3)$--$(\nu t_3)$ hold. Let $(t,s,x)\in\Delta\times X$.
$\boldsymbol{(ht_3)\Rightarrow(ht_2)}.$ The inequality $(ht_3)$ and Definition \ref{def-norma-compatibila} imply that
\begin{align*}
h(t)\|U(t,s)P_1(s)x\|&\leq h(t)\|U(t,s)P_1(s)x\|_t\leq h(s)\|P_1(s)x\|_s\\
&\leq h(s)C(s)\|P_1(s)x\| \leq C(s)\|P_1(s)\|h(s) \|x\|.
\end{align*}
$\boldsymbol{(kt_3)\Rightarrow(kt_2)}.$ Similarly,
\begin{align*}
k(t)\|V_2(t,s)P_2(t)x\|&\leq k(t)\||V_2(t,s)P_2(t)x\||_s\leq k(s)\||P_2(t)x\||_t\\
&\leq k(s)C(t)\|P_2(t)x\|\leq C(t)\|P_2(t)\|k(s)\|x\|.
\end{align*}
$\boldsymbol{(\mu t_3)\Rightarrow(\mu t_2)}.$ From Definition \ref{def-norma-compatibila} and inequality $(\mu t_3)$ we have
\begin{align*}
\mu(s)\|U(t,s)P_3(s)x\|&\leq \mu(s) \|U(t,s)P_3(s)x\|_t
\leq \mu(t)\|P_3(s)x\|_s\\
&\leq C(s) \mu(t)\|P_3(s)x\|\leq C(s)\|P_3(s)\| \mu(t)\|x\|.
\end{align*}
$\boldsymbol{(\nu t_3)\Rightarrow(\nu t_2)}.$ Similarly,
\begin{align*}
\nu(s)\|V_3(t,s)P_3(t)x\|&\leq \nu(s)\||V_3(t,s)P_3(t)x\||_s\leq \nu(t)\||P_3(t)x\||_t\\
&\leq C(t) \nu(t)\|P_3(t)x\|\leq C(t)\|P_3(t)\| \nu(t)\|x\| .
\end{align*}
If we denote by
$$N(t)=\sup_{s\in[0,t]}C(s)(\|P_1(s)\|+\|P_2(s)\|+\|P_3(s)\|)$$
then we obtain that the inequalities $(ht_2),(kt_2),(\mu t_2),(\nu t_2)$ are satisfied. By Proposition \ref{prop strong invariant trichotomy} it follows that $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$.
\end{proof}
As a particular case, we obtain a characterization of (nonuniform) exponential trichotomy given by
\begin{corollary}\label{cor1}
If $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then
the pair $(U,\mathcal{P})$ is exponentially trichotomic if and only if there exist four real constants $\alpha,\beta,\gamma,\delta>0$ and two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the norm $\|\cdot\|$ such that
\begin{itemize}
\item[($et_1$) ] $\|U(t,s)P_1(s)x\|_t\leq e^{-\alpha(t-s)}\|P_1(s)x\|_s$
\item[($et_2$) ] $\||V_2(t,s)P_2(t)x\||_s\leq e^{-\beta(t-s)}\||P_2(t)x\||_t$
\item [($e t_3$) ]$\|U(t,s)P_3(s)x\|_t\leq e^{\gamma(t-s)} \|P_3(s)x\|_s$
\item[($e t_4$) ]$\||V_3(t,s)P_3(t)x\||_s\leq e^{\delta(t-s)}\||P_3(t)x\||_t$,
\end{itemize}
for all $(t,s,x)\in\Delta\times X.$
\end{corollary}
\begin{proof}
It results from Theorem \ref{unif=neunif-trichotomie} for $$h(t)=e^{\alpha t},\ k(t)=e^{\beta t},\ \mu(t)=e^{\gamma t},\ \nu(t)=e^{\delta t},$$
with $\alpha,\beta,\gamma,\delta>0.$
\end{proof}
If the growth rates are of polynomial type then we obtain a characterization of (nonuniform) polynomial trichotomy given by
\begin{corollary}\label{cor2}
Let $\mathcal{P}=\{P_1,P_2,P_3\}$ be compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$.
Then $(U,\mathcal{P})$ is nonuniform polynomial trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the norm $\|\cdot\|$ and four real constants $\alpha,\beta,\gamma,\delta>0$ such that
\begin{itemize}
\item[($pt_1$) ] $(t+1)^\alpha\|U(t,s)P_1(s)x\|_t\leq (s+1)^\alpha\|P_1(s)x\|_s$
\item[($pt_2$) ] $(t+1)^\beta\||V_2(t,s)P_2(t)x\||_s\leq (s+1)^\beta\||P_2(t)x\||_t$
\item[($pt_3$) ]$(s+1)^\gamma\|U(t,s)P_3(s)x\|_t\leq (t+1)^\gamma \|P_3(s)x\|_s$
\item[($pt_4$) ]$(s+1)^\delta\||V_3(t,s)P_3(t)x\||_s\leq (t+1)^\delta\||P_3(t)x\||_t$,
\end{itemize}
for all $(t,s,x)\in\Delta\times X.$
\end{corollary}
\begin{proof}
It results from Theorem \ref{unif=neunif-trichotomie} for $$h(t)=(t+1)^\alpha,k(t)=(t+1)^\beta,\mu(t)=(t+1)^\gamma,\nu(t)=(t+1)^\delta,$$
with $\alpha,\beta,\gamma,\delta>0.$
\end{proof}
\begin{definition}
A family of norms $\mathcal{N}=\{\|\cdot\|_t: t\geq 0\}$ is \textit{uniformly compatible} with the norm $\|\cdot\|$ if there exists a constant $c>0$
such that
\begin{equation}\label{norma compatibila uniform}
\|x\|\leq \|x\|_t\leq c\|x\|, \text{ for all } (t,x)\in\mathbb{R}_+\times X.
\end{equation}
\end{definition}
\begin{remark}
From the proofs of Propositions \ref{ex-norma-trichotomie2} and \ref{ex-norma-trichotomie1} it follows that if the pair $(U,\mathcal{P})$ is uniformly $(h,k,\mu,\nu)-$trichotomic then the families of norms $\mathcal{N}_1=\{\|\cdot\|_t:t\geq 0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t:t\geq 0\}$ (given by (\ref{norma-tricho-sus}) and (\ref{norma-tricho-jos})) are uniformly compatible with the norm $\|\cdot\|.$
\end{remark}
A characterization of the uniform $(h,k,\mu,\nu)-$trichotomy is given by
\begin{theorem}
Let $\mathcal{P}=\{P_1,P_2,P_3\}$ be compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$. Then
the pair $(U,\mathcal{P})$ is uniformly $(h,k,\mu,\nu)-$trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ uniformly compatible with the norm $\|\cdot\|$ such that the inequalities $(ht_3),(kt_3),(\mu t_3)$ and $(\nu t_3)$ are satisfied.
\end{theorem}
\begin{proof}
It results from the proof of Theorem \ref{unif=neunif-trichotomie} (via Proposition \ref{prop strong invariant trichotomy uniform}).
\end{proof}
\begin{remark}
As in Corollaries \ref{cor1} and \ref{cor2}, one can obtain characterizations of uniform exponential trichotomy and uniform polynomial trichotomy, respectively.
\end{remark}
Another characterization of the $(h,k,\mu,\nu)-$trichotomy is given by
\begin{theorem}\label{unif=neunif-trichotomie-fara-proiectori}If $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then
the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t,t\geq 0\},\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the family of projectors $\mathcal{P}=\{P_1,P_2,P_3\}$ such that
\begin{itemize}
\item[($ht_4$) ] $h(t)\|U(t,s)P_1(s)x\|_t\leq h(s) \|x\|_s$
\item[($kt_4$) ] $k(t)\||V_2(t,s)P_2(t)x\||_s\leq k(s) \||x\||_t$
\item[($\mu t_4$) ]$\mu(s)\|U(t,s)P_3(s)x\|_t\leq \mu(t) \|x\|_s$
\item[($\nu t_4$) ]$\nu(s)\||V_3(t,s)P_3(t)x\||_s\leq \nu(t) \||x\||_t$
\end{itemize}
for all $(t,s,x)\in\Delta\times X.$
\end{theorem}
\begin{proof}
\textit{Necessity.} It results from Theorem \ref{unif=neunif-trichotomie} and inequalities
\begin{eqnarray*}
\|P_i(t)x\|_t\leq \|x\|_t&\text{and}&
\||P_i(t)x\||_t\leq \||x\||_t,
\end{eqnarray*}for all $(t,x)\in\mathbb{R}_+\times X$ and $i\in\{1,2,3\}$.
\textit{Sufficiency.} It follows by replacing $x$ by $P_1(s)x$ in $(ht_4)$, $x$ by $P_2(t)x$ in $(kt_4)$, $x$ by $P_3(s)x$ in $(\mu t_4)$ and $x$ by $P_3(t)x$ in $(\nu t_4)$.
\end{proof}
The variant of the previous theorem for uniform $(h,k,\mu,\nu)-$trichotomy is given by
\begin{theorem}
If $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then
the pair $(U,\mathcal{P})$ is uniformly $(h,k,\mu,\nu)-$trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t:t\geq 0\},\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ uniformly compatible with the family of projectors $\mathcal{P}=\{P_1,P_2,P_3\}$ such that the inequalities $(ht_4),(kt_4),(\mu t_4)$ and $(\nu t_4)$ are satisfied.
\end{theorem}
\begin{proof}
It is similar to the proof of Theorem \ref{unif=neunif-trichotomie}.
\end{proof}
\begin{remark}
If the growth rates are exponential or polynomial, respectively, then we obtain characterizations of exponential trichotomy, uniform exponential trichotomy and uniform polynomial trichotomy.
\end{remark}
\end{document}
\begin{document}
\setcounter{tocdepth}{1}
\title{The deformations of flat affine structures on the two-torus}
\author{Oliver Baues \thanks{\today;
}
}
\address{
Institut f\"ur Algebra und Geometrie\\
Karlsruher Institut f\"ur Technologie (KIT)\\
D-76128 Karlsruhe\\
email:\,\texttt{baues@math.uni-karlsruhe.de}
}
\maketitle
\begin{abstract} The group action which defines the
moduli problem for the deformation space of flat affine structures on
the two-torus is the action of the affine group ${\rm Aff}(2)$ on ${\bb R}^2$.
Since this action has non-compact stabiliser $\operatorname{GL}(2,{\bb R})$, the underlying locally
homogeneous geometry is highly non-Riemannian.
In this chapter,
we describe the deformation space of all flat affine structures on the two-torus.
In this context
interesting
phenomena arise in the topology
of the deformation space, which, for example,
is \emph{not} a Hausdorff space.
This contrasts with
the case of constant curvature metrics, or
conformal structures on surfaces, which are encountered in classical Teichm\"uller theory. As our main result on the space of deformations of flat affine structures on the two-torus we prove that the holonomy map from the deformation space to the variety of conjugacy classes of homomorphisms
from the fundamental group of the two-torus to the affine group is a local homeomorphism.
\end{abstract}
\begin{classification}
\end{classification}
\begin{keywords}
flat affine structure, locally homogeneous structure, surface, two-torus, development map, deformation space, moduli space, holonomy map, stratification
\end{keywords}
\tableofcontents
\section{Introduction} \label{sect:intro}
A flat affine structure on a smooth manifold is specified by an
atlas with coordinate changes in the group of affine transformations of ${\bb R}^n$.
A manifold together with such an atlas
is called a \emph{flat affine manifold}.
\index{flat affine manifold} \index{manifold!flat affine}
Equivalently,
a flat affine manifold is a smooth manifold
which has a flat and torsion-free connection on the tangent bundle.
A particular class of examples is furnished by Riemannian flat manifolds,
but the class of flat affine manifolds is much larger.
The study of flat affine manifolds has a long history which can be
traced back to the local theory of hypersurfaces and Cartan's projective
connections. Global questions were first studied in the context
of Bieberbach's theory of crystallographic groups, and they
have gained renewed interest in the more general setting by
Ehresmann's theory of locally homogeneous spaces, and more
recently in Thurston's geometrisation program which shows the importance
of locally homogeneous structures in the classification of manifolds.
Flat affine manifolds are \emph{affinely diffeomorphic}
if they are diffeomorphic by a diffeomorphism which looks like an
affine map in the coordinate charts.
The universal covering space of a flat affine
manifold admits a local affine diffeomorphism into affine space
${\bb A}^n = {\bb R}^n$ which is called the \emph{development map};
\index{development maps}
its image is an open subset of ${\bb R}^n$, called the \emph{development image}. \index{development image}
The development map and image provide rough
invariants for the classification of flat affine manifolds.
Benz\'ecri \cite{Benzecri} showed that a closed oriented surface which supports a flat affine structure must be diffeomorphic to a two-torus, thereby confirming in dimension two a conjecture of Chern that the Euler characteristic of a compact flat affine manifold must be zero.
The flat affine structures on the two-torus and their development images
were partially classified by Kuiper \cite{Kuiper} in 1953. The classification
was completed by independent work of Furness-Arrowsmith \cite{FurArr}
and Nagano-Yagi \cite{NaganoYagi} around 1972. Their works show that the flat affine structures on the two-torus fall into
four main classes which have
development image the plane ${\bb R}^2$, the halfspace, the sector,
or the once punctured plane, respectively.
The \emph{moduli space} \index{moduli space!of flat affine structures}
of flat affine structures is by definition the set of flat affine structures up to affine diffeomorphism.
More precisely, the group ${\rm Diff}(T^2)$ of all diffeomorphisms of the two-torus $T^2$ acts naturally on the set of flat affine structures on $T^2$.
The set of orbits classifies flat affine two-tori
up to affine diffeomorphism; it is called the {moduli space}.
The \emph{deformation space} \index{deformation space!of flat affine structures}
is the set
of all flat affine structures divided by the action of the group ${\rm Diff}_0(T^2)$
of diffeomorphisms which are isotopic to the identity. This action classifies
flat affine structures on $T^2$ up to isotopy, or equivalently affine
two-tori with a marking.
The deformation space has a natural topology, which it
inherits from the $C^\infty$-topology on the space of development maps.
In this chapter, our aim is to describe the topology of the deformation
space ${\mathfrak D}(T^2,{\bb A}^2)$ of all flat affine structures on the two-torus.
The development process gives, for each flat affine two-torus, a natural
homomorphism $h: {\bb Z}^2= \pi_{1}(T^2) \rightarrow {\rm Aff}(2)$ of the fundamental group of the torus to the plane affine group
${\rm Aff}(2) = {\rm Aff}({\bb R}^2)$. This homomorphism is called
the \emph{holonomy homomorphism} \index{holonomy homomorphism} and it is defined up to conjugacy
with an affine map. The holonomy thus gives rise to a continuous
open map $${hol}: {\mathfrak D}(T^2,{\bb A}^2) \rightarrow \operatorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2) \; $$
from the deformation space to the space of conjugacy classes of homomorphisms, which is called the \emph{holonomy map}.
\index{holonomy map} \index{deformation space!holonomy map}
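As a basic illustration (the standard complete structure, added here for orientation): the flat torus $T^2 = {\bb R}^2/{\bb Z}^2$ has the identity of ${\bb R}^2$ as a development map, and its holonomy homomorphism consists of the covering translations,

```latex
h : {\bb Z}^2 \longrightarrow {\rm Aff}(2), \qquad h(m,n) : x \longmapsto x + (m,n) ,
```

whose class in $\operatorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2)$ is the conjugacy class of the standard lattice of translations.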
By a general theorem of Thurston and Weil concerning deformations of \emph{locally
homogeneous structures} on manifolds,
this map has an \emph{open} image.
As such, the deformation space of flat affine structures is the natural
analogue of the Teichm\"uller space of conformal structures, or,
equivalently, constant curvature Riemannian metrics, on surfaces.
Its construction
is completely analogous to the definition of the
Teichm\"uller space for flat Riemannian metrics on the two-torus,
or hyperbolic constant curvature $-1$ metrics on surfaces $M_{g}$, $g \geq 2$.
In these classic situations, both the Teichm\"uller space ${\rm Teich}_{g}$
and its quotient, the moduli space, are Hausdorff spaces.
The Teichm\"uller space of flat metrics
${\rm T}eich_{1}$ is diffeomorphic to ${\bb R}^2$, and the Teichm\"uller space of
hyperbolic metrics ${\rm T}eich_{g}$, $g {< , >}e 2$, is diffeomorphic
to ${\bb R}^{6g-6}$. Moreover,
the corresponding holonomy map topologically identifies ${\rm T}eich_{g}$ with an open subset of the quotient space $\lie{o}peratorname{Hom}({\bb Z}^2, {\rm Isom}({\bb R}^2)) /
{\rm Isom}({\bb R}^2)$, for $g=1$, or, respectively,
a component of the space
$\lie{o}peratorname{Hom}(\Gamma_{g}, \lie{o}peratorname{PSL}(2,{\bb R})) / \lie{o}peratorname{PSL}(2,{\bb R})$, $g {< , >}eq 2$.
Here the analogy with the classical theory breaks down, and
neither of these facts are true for the deformation space of flat affine structures.
In fact, the group action, which defines the
moduli problem for the deformation space of flat affine structures on the two-torus, namely the action of the affine group
${\rm Aff}(2)$ on the homogeneous space $$ X = {\bb R}^2 ={\rm Aff}(2) / \operatorname{GL}(2,{\bb R})$$
has non-compact stabiliser $\operatorname{GL}(2,{\bb R})$, and therefore
the underlying geometry on $X$ is highly non-Riemannian.
This is illustrated by the fact that
various kinds of flat affine structures,
with sometimes strikingly distinct geometric properties, are supported on the two-torus, a fact which can be seen already from the various possible development images for flat affine structures, and which is also reflected in the structure and
topology of the deformation space.
Here phenomena arise which
are completely different from the case of constant curvature metrics or conformal structures on surfaces.
Another salient difference stems from the fact that the local model of the deformation space of flat affine structures, namely the
\emph{character variety} $\operatorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2)$ arises
as a quotient space of an algebraic variety by a \emph{non-reductive} group action. The properties of such actions
and their invariant theory are generally poorly understood.
The case of deformation of \emph{complete} flat affine structures bears the \index{flat affine manifold!complete}
closest resemblance to the classical situation. A flat affine structure is
called \emph{complete} if the development map is a diffeomorphism, a property
which in the Riemannian situation is always guaranteed.
A flat affine two-torus is complete if and only if its development
image is the affine plane ${\bb A}^2$. The
deformation space of complete affine structures
on the two-torus was studied recently in \cite{BauesG, BG}.
It is shown there,
for example, that the holonomy map identifies the space of complete structures ${\mathfrak D}_{c}(T^2,{\bb A}^2)$ with
a locally closed subspace of the space of homomorphisms $\operatorname{Hom}({\bb Z}^2, {\rm Aff}({\bb R}^2))$, and, moreover, the space ${\mathfrak D}_{c}(T^2,{\bb A}^2)$ is
homeomorphic to ${\bb R}^2$. However, the topology of the moduli space of complete flat affine structures,
which, with respect to appropriately chosen coordinates for ${\mathfrak D}_{c}(T^2,{\bb A}^2)$, is homeomorphic to the quotient space of ${\bb R}^2$ by the natural action of $\operatorname{GL}(2,{\bb Z})$, is highly singular.
This chapter is devoted to the study of the global and local structure of the space of deformations of \emph{all} flat affine structures on the two-torus. The deformation space of all flat affine
structures is much larger than the deformation space of complete flat affine structures. Indeed, the deformation space
${\mathfrak D}_{c}(T^2,{\bb A}^2)$ of complete flat affine structures on the two-torus forms a closed two-dimensional subspace in the deformation space of all structures ${\mathfrak D}(T^2,{\bb A}^2)$, which itself is a space of dimension four. In the general situation the holonomy map ${hol}: {\mathfrak D}(T^2,{\bb A}^2) \rightarrow \operatorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2)$ for the deformation space of flat affine structures is no longer a homeomorphism onto its image. That is, there exist flat affine structures on the two-torus, which have the same holonomy group and which have dramatically different geometry. (Compare, in particular, Example \ref{ex:hol_notinj} in this chapter.)
Moreover, the holonomy image in $\operatorname{Hom}({\bb Z}^2, {\rm Aff}(2))$ contains singular orbits for the affine group ${\rm Aff}(2)$, which in turn give rise to non-closed points in the deformation space ${\mathfrak D}(T^2,{\bb A}^2)$. This also shows
that the deformation
space is \emph{not} a Hausdorff space. It is a four-dimensional and connected space which has an intricate topology and it supports various substructures arising from the different types of affine flat geometries on the two-torus.
As our main result on the local structure of the space of deformations of flat affine structures
we prove in this chapter that \emph{the holonomy map $hol$
is a local homeomorphism onto its image}. That is, at least locally the topology of the deformation
space ${\mathfrak D}(T^2,{\bb A}^2)$ is fully controlled by the character variety. We remark that this is \emph{not} a general phenomenon for
deformation spaces of locally homogeneous structures, \emph{not even on surfaces}. Indeed, in Appendix B of this chapter, we specify a two-dimensional
homogeneous geometry whose deformation space of structures on the two-torus has a holonomy map $hol$ which locally near certain structures is a branched covering. Examples of deformation spaces of flat conformal structures on three-dimensional manifolds where the holonomy map $hol$ is not locally injective at exceptional points were found previously by Kapovich and are discussed in \cite{Kapovich}.\\
The chapter is organized as follows. In Section~2 we give a self-contained proof of Benz\'ecri's theorem which states that
a closed orientable flat affine surface is diffeomorphic
to the two-torus. In Section 3 we describe the deformation theory of compact locally homogeneous manifolds, including its
foundational results and give basic examples. Section 4 discusses
several methods to construct flat affine surfaces and introduces
the main classes of flat affine structures on the two-torus.
In Section~5 we prove the main classification theorem for flat affine structures on the two-torus in detail, including the crucial and nontrivial fact that the development map of such a structure is always a covering map. Finally, in Section~6 we put the pieces
together in order to prove that the holonomy map for the deformation space of flat affine structures on the two-torus is a local homeomorphism to the character variety. In addition, Appendix A gives an account of conjugacy classes in $\operatorname{GL}(2,{\bb R})$ and in its universal covering group. In Appendix B we describe a two-dimensional homogeneous geometry such that the holonomy map for its deformation space of structures on the two-torus is not everywhere a local homeomorphism.
\begin{figure}
\caption{Tiled sectors approaching the standard plane.}
\end{figure}
\begin{figure}
\caption{Tiled punctured planes approaching the standard plane.}
\end{figure}
\begin{figure}
\caption{Tiled sectors approaching a halfplane of type $\mathsf{C}$.}
\end{figure}
\begin{figure}
\caption{Tiled sectors approaching a halfplane of type $\mathsf{C}$.}
\end{figure}
\begin{figure}
\caption{Tiled halfplanes approaching the standard plane.}
\end{figure}
\section{The theorem of Benz\'ecri }
Let $M$ be a closed oriented surface of genus $g$.
Then the Gau\ss-Bonnet theorem \cite{Hopf2} expresses the Euler characteristic
$$ \chi(M)= 2 -2g$$ as an integral over the Gau\ss\ curvature
of any Riemannian metric on $M$. In particular, a flat Riemannian closed
surface $M$ has Euler characteristic zero, and therefore it is
diffeomorphic to a two-torus. If $M$ is a closed flat affine surface, the
Gau\ss-Bonnet theorem does not apply, since the corresponding flat
connection is possibly non-Riemannian. However, the same strong
topological restriction applies to flat affine surfaces as well:
\index{surface!flat affine} \index{Benz\'ecri's theorem} \index{Gau\ss-Bonnet theorem} \index{Euler characteristic}
\index{flat affine torus} \index{flat affine surface}
\begin{theorem}[Benz\'ecri, \cite{Benzecri}] \label{thm:Benzecri}
Let $M$ be a closed flat affine surface.
Then $M$ has Euler characteristic zero.
\end{theorem}
\begin{proof} First we remark that the sphere $S^2$ does not admit a flat affine
structure. In fact, since $S^2$ is simply connected,
the development image of a flat affine structure on $S^2$
would be compact and open in ${\bb R}^2$, which is absurd.
Now we assume that $M$ has genus $g$, $g \geq 1$. Let
$\mathsf{p}: \tilde M \rightarrow M$ be the universal covering.
Then $\tilde M$ is a flat affine manifold which is diffeomorphic to
${\bb R}^2$. Moreover, $M$ is obtained by gluing
a $4g$-gon $P \subset \tilde M$
along its consecutive sides $a_{1}, b_{1}, a_{1}^{-}, b_{1}^-, \ldots,
a_{g}, b_{g}, a_{g}^{-}, b_{g}^-$,
with side pairing transformations $g_{a_{i}}, g_{b_{i}}$
such that $g_{a_{i}} a_{i}^- = a_{i}$ and $g_{b_{i}} b_{i}^- = b_{i}$.
These transformations are subject to the single cycle relation
$$ \prod_{i= 1,\ldots ,g} \; g_{a_{i}} g_{b_{i}}
g_{a_{i}}^{-1} g_{b_{i}}^{-1} = {\rm id}_{\tilde M} $$
and generate the discontinuous group of deck transformations
of the covering $\mathsf{p}: \tilde M \rightarrow M$.
In particular, the side pairing transformations are affine maps of $\tilde M$.
Note however that the polygon $P \subset \tilde M$
is a closed oriented topological disc with piecewise
smooth boundary. (The construction
may be carried out, in Euclidean geometry if $g=1$,
respectively hyperbolic geometry, for $g \geq 2$, such that the edges of $P$
are geodesic segments. See, for example, \cite{Ratcliffe}.)
Let $\tilde x_{0}$ denote the vertex of $P$ belonging to the sides $a_{1}$ and
$b^-_{g}$. Let $v \neq 0$ be a tangent vector at $\tilde x_{0}$.
Now choose a \emph{non-vanishing} vector field
$V$ along the boundary of $P$, such that $V(\tilde x_{0}) = v$, and,
furthermore, such that $V$ restricted to $a_{i}$ (resp.\ $b_{i}$)
is related to $V$ restricted to $a^-_{i}$ (resp.\ $b^-_{i}$)
by the corresponding side pairing transformation. (We can obtain such a $V$
by constructing vector fields along the closed curves $\mathsf{p} \, a_{i}$, $\mathsf{p} \, b_{i}$ which coincide at $x_{0} = \mathsf{p}(\tilde x_{0})$.)
Next we extend $V$ to a vector field $X$ on $P$, which has an
isolated singularity in the interior of $P$.
The \emph{index} of the vector field $X$ at the singularity may be calculated as the
\emph{turning number} of the restriction of $X$ to the positively traversed
boundary of $P$, see \cite{Hopf2}. For this, recall that the turning
number $\tau(V) \in {\bb Z}$
of a closed non-vanishing vector field
$V: I \rightarrow \, {\bb A}o$ is defined by the equation
$$\tau(V) \, 2\pi = \theta(1) - \theta(0) , $$ where
$\theta: I \rightarrow {\bb R}$ is any lift of the map $I \rightarrow S^1$, $t \mapsto V(t) / |V(t)|$.
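For instance (a standard computation, not in the original text), the closed vector fields $V_k(t) = (\cos 2\pi k t, \sin 2\pi k t)$, $k \in {\bb Z}$, admit the lifts $\theta_k(t) = 2\pi k t$, so

```latex
\tau(V_k)\, 2\pi \,=\, \theta_k(1) - \theta_k(0) \,=\, 2\pi k ,
\qquad \text{hence} \qquad \tau(V_k) = k .
```

In particular, every integer is realised as a turning number.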
Now, since the flat affine manifold $\tilde M$ is simply connected, $\tilde M$ has
a global parallelism which identifies each tangent space $T_{x} \tilde M$
with $T_{\tilde x_{0}} \tilde M$. Therefore, we can choose any scalar product in $T_{\tilde x_{0}} \tilde M$ to compute the index of $X$ by the above formula.
Since the side pairing maps $g_{a_{i}}$ and $g_{b_{i}}$ are affine
transformations of $\tilde M$, they preserve antipodality of any two vectors
$V(s)$ and $V(t)$. This implies that the turn of $V$ restricted to
the positively traversed curve $ a_{1} b_{1} a_{1}^{-} b_{1}^-$
is less than $2 \pi$, and consequently $|\tau( V )| < g$.
Our construction implies that the vector field $X$ on $P$ projects to
a vector field on $M$. Therefore, by the Poincar\'e-Hopf theorem \cite{Hopf2, MilnorTop}, the index of $X$ equals the Euler
characteristic $\chi(M)$ of $M$. We thus obtain the
estimate $$ | \chi(M) | = | 2-2g | \, < g \; . $$
This implies $g = 1$, since for $g \geq 2$ we would have $|2-2g| = 2g-2 \geq g$.
\end{proof}
Benz\'ecri's theorem was generalised by Milnor \cite{Milnor} to
the more general
\begin{theorem} Let $E$ be a flat rank
two vector-bundle over a closed orientable surface $M_{g}$, $g \geq 1$,
then $ |\chi(E) | < g $.
\end{theorem}
Here, $\chi(E)$ denotes the evaluation of the Euler-class $e(E) \in H^2(M,{\bb Z})$
on the fundamental homology class of $M$. In case of the tangent bundle
$E =TM_{g}$, the equality $$ \chi(TM_{g}) = \chi(M_{g})$$ (see
\cite[Section~11]{MilnorStasheff}) implies Benz\'ecri's theorem. Wood \cite{Wood}
interpreted Milnor's result in the context of circle-bundles. See \cite{Goldman5}
for a recent survey on Benz\'ecri's theorem, Milnor's inequality and related topics.
\paragraph{Generalizations to higher dimensions}
Weak analogues of the Milnor-Wood estimate for the Euler-class of higher-dimensional manifolds were
subsequently given by Sullivan \cite{Sullivan} and Smillie in his doctoral thesis \cite{Smillie}.
See also \cite{BucherGelander} for a recent contribution in this realm.
The Chern conjecture
asserts that any compact flat affine \index{Chern conjecture}
manifold should have Euler characteristic zero.
Kostant and Sullivan \cite{KS} observed that every compact \emph{and} complete
flat affine manifold has Euler characteristic zero.
There are some additional affirmative results under
various assumptions on the holonomy group, see
for example \cite{Goldman}.
The original conjecture, however, remains a difficult
open problem.
Another fruitful generalization of the Milnor-Wood inequality
concerns the representation theory
of surface groups into higher-dimensional simple Lie groups,
see \cite{BIW} for a survey.
\section{Locally homogeneous structures and their deformation
spaces}
Let $M$ be a smooth manifold and fix a universal
covering space $\mathsf{p}:\tilde{M} \rightarrow M$.
Let $X$ be a homogeneous space for the
Lie group $G$, on which $G$ acts effectively.
The manifold $M$ is said to be
locally modeled on $(X,G)$ \index{manifold!$(X,G)$-}
if $M$ admits an atlas of charts with range in $X$ such that the
coordinate changes are locally restrictions of elements of $G$.
A maximal atlas with this property
is called an {\em $(X,G)$-structure} on $M$.
The manifold $M$ together with an $(X,G)$-structure is called an
{\em $(X,G)$-manifold}, or \emph{locally homogeneous space}
\index{locally homogeneous space}
modeled on $(X,G)$. A map between two $(X,G)$-manifolds is called an \emph{$(X,G)$-map} if it coincides with the action of an element of
$G$ in the local charts. If the $(X,G)$-map is a diffeomorphism it
is called an {\em $(X,G)$-equivalence} and accordingly the two
manifolds are called \emph{$(X,G)$-equivalent}.
\subsection{$(X,G)$-manifolds, development map and holonomy}
Every $(X,G)$-manifold comes equipped with some extra structure, called the development and the holonomy.
Via the covering projection $\mathsf{p}:\tilde{M} \rightarrow M$ the universal covering space of
the $(X,G)$-manifold $M$ inherits a unique $(X,G)$-structure
from $M$. We fix $x_0 \in M$, and
a local $(X,G)$-chart at $x_0$.
The corresponding
{\em development map\/} of the $(X,G)$-structure
is the $(X,G)$-map
$$ D: \tilde{M} \rightarrow X$$ which is obtained by
analytic continuation of the local chart.
\index{development maps!for $(X,G)$-structures}
For every $(X,G)$-equivalence $\Phi$ of $\tilde{M}$,
there exists a unique element $h(\Phi) \in G$ such that
\begin{equation} D \circ \Phi= h(\Phi) \circ D \; . \label{eq:devel1}
\end{equation}
The fundamental group $\pi_1(M) = \pi_1(M,x_0)$ acts
on $\tilde{M}$ via deck transformations.
This induces the {\em holonomy homomorphism\/} \index{holonomy homomorphism}
$$h: \pi_1(M,x_0) \longrightarrow G$$ which satisfies
\begin{equation} \label{eq:hol} D \circ \gamma = h(\gamma) \circ D \; , \;
\text{ for all $\gamma \in \pi_1(M,x_0)$.}
\end{equation}
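In particular, combining \eqref{eq:hol} with the uniqueness of the element $h(\Phi)$ in \eqref{eq:devel1} shows that $h$ is indeed a homomorphism:

```latex
D \circ (\gamma_1 \gamma_2)
 = (D \circ \gamma_1) \circ \gamma_2
 = h(\gamma_1) \circ (D \circ \gamma_2)
 = \bigl( h(\gamma_1)\, h(\gamma_2) \bigr) \circ D ,
\qquad \text{so} \qquad
h(\gamma_1 \gamma_2) = h(\gamma_1)\, h(\gamma_2) .
```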
Note that, after the choice of the development map (which corresponds to a choice of a germ of an $(X,G)$-chart in $x_0$ and also the choice
of a lift $\tilde{x}_{0} \in \tilde{M}$ of $x_{0}$), the holonomy homomorphism
$h$ is well defined.
Therefore, the $(X,G)$-structure on $M$ determines
the \emph{development pair} \index{development pair}
$(D,h)$ up to the action of $G$, where
$G$ acts by left-composition on $D$, and by conjugation on $h$.
Specifying a development pair is equivalent to constructing an
$(X,G)$-structure on $M$:
\begin{proposition} \label{prop:devpair}
Every local diffeomorphism $ D: \tilde{M} \rightarrow X$ which satisfies \eqref{eq:hol},
for some $h: \pi_1(M,x_0) \rightarrow G$,
defines a unique $(X,G)$-structure on $M$,
and every $(X,G)$-structure on $M$ arises in this way.
\end{proposition}
\subsubsection{Compactness and completeness of
$(X,G)$-manifolds}
An important special case arises if the development map
is a diffeomorphism. Recall the following definition:
\begin{definition}[\textbf{Proper actions}]
A discrete group $\Gamma$ is said to
act \emph{properly discontinuously} on $X$ if, for all compact
subsets $\kappa \subseteq X$, the set $$\Gamma_{\kappa} =
\{ \gamma \in \Gamma \mid \gamma
\kappa \cap \kappa \neq \emptyset \} $$ is finite.
More generally, if $\Gamma$ is a locally compact group, and
$\Gamma_{\kappa}$ is required to be compact,
then the action is called \emph{proper}.
\end{definition}
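A minimal example of a properly discontinuous action (added for illustration): $\Gamma = {\bb Z}$ acting on $X = {\bb R}$ by translations $\gamma \cdot x = x + \gamma$. For any compact $\kappa \subseteq [-R,R]$,

```latex
\Gamma_{\kappa}
 \;\subseteq\; \{\, \gamma \in {\bb Z} \mid (\gamma + [-R,R]) \cap [-R,R] \neq \emptyset \,\}
 \;=\; \{\, \gamma \in {\bb Z} \mid |\gamma| \leq 2R \,\} ,
```

which is a finite set.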
\begin{example}[\textbf{$(X,G)$-space forms}] \index{space form}
Let $\Gamma$ be a group of $(X,G)$-equivalences
acting properly discontinuously and freely on $X$. Then
$X/\, \Gamma$ is a manifold which inherits an $(X,G)$-structure
from $X$.
If $X$ is simply connected the identity map
of $X$ is a development map for $X/\, \Gamma$.
\index{$(X,G)$-space form}
\end{example}
In general, if the development map is a covering map
onto $X$, the $(X,G)$-manifold $M$ will be called \emph{complete}. \index{manifold!$(X,G)$-} \index{$(X,G)$-structures!complete}
\\
Simple examples (cf.\ the Hopf tori in Example \ref{ex:hopftori})
show that compactness of $M$
does not imply completeness.
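A standard instance of this failure is the Hopf torus: the cyclic group generated by the homothety $x \mapsto 2x$ acts properly discontinuously and freely on the once-punctured plane ${\bb A}o$, and the quotient

```latex
{\bb A}o \,/\, \langle\, x \mapsto 2x \,\rangle \;\cong\; T^2
```

is a compact flat affine two-torus whose development image ${\bb A}o$ is a proper subset of ${\bb A}^2$; hence it is compact but not complete.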
\begin{example}[\textbf{Compactness and completeness}] \label{ex:cac}
If $G$ acts properly on $X$ then
every \emph{compact} $(X,G)$-manifold is complete.
\end{example}
In general, the relation between the properties of the $G$-action on $X$, and the completeness properties of compact
$(X,G)$-manifolds is only vaguely understood.
See \cite{Carriere} for a striking contribution in this direction
in the context of flat affine manifolds.
Further discussion of $(X,G)$-geometries and the properties of the development process may be found in \cite{Epstein,Thurston}. \\
It may well happen that an $(X,G)$-geometry does not
admit (non-finite) proper actions (see \cite{Kob,Kulkarni,IozziWitte}) or no compact $(X,G)$-manifolds
at all \cite{Benoist_Labourie}.
\begin{example}[\textbf{Calabi-Markus phenomenon}]
\label{ex:bbAo1}
Let ${\bb A}o$ be the once-punctured affine plane.
It is easily observed that every discrete subgroup of $\operatorname{SL}(2,{\bb R})$
which acts properly on ${\bb A}o$ must be finite, see Figure \ref{fig:CalabiMarcus}. This is called the \emph{Calabi-Markus phenomenon}. \index{Calabi-Markus phenomenon}
\end{example}
\begin{figure}
\caption{Dynamics of a hyperbolic rotation and a shearing acting on ${\bb A}o$.}
\label{fig:CalabiMarcus}
\end{figure}
It follows that the homogeneous space $({\bb A}o, \operatorname{SL}(2,{\bb R}))$ has only quotients by
finite groups. Therefore, a complete space modeled on $({\bb A}o, \operatorname{SL}(2,{\bb R}))$ cannot
be compact. In fact, we will remark in Example \ref{ex:bbAo2} below that there do not exist compact $({\bb A}o, \operatorname{SL}(2,{\bb R}))$-manifolds at all.
\paragraph{Prominent $(X,G)$-structures on surfaces}
Let $M_{g}$ denote an orientable surface of genus $g$.
In the context of this paper, the following $(X,G)$-structures play a prominent role. \index{$(X,G)$-structures!on surfaces}
\begin{example}
\hspace{1cm}
\begin{enumerate}
\item $(\mathbb{S}^2, {\rm O}(3))$, spherical geometry, $g=0$.
\item $({\bb R}^2, {\rm E}(2))$, plane Euclidean geometry, $g=1$.
\item $({\mathbb H}^2, \operatorname{PSL}(2,{\bb R}))$, plane hyperbolic geometry, $g \geq 2$.
\item $({\bb R}^2, {\rm Aff}(2))$, plane affine geometry, $g=1$.
\item $({\mathbb P}^2({\bb R}), \operatorname{PSL}(3,{\bb R}))$, plane projective geometry, $g \geq 0$.
\end{enumerate}
Every compact orientable surface of genus $g \geq 2$ supports hyperbolic structures.
Also every compact surface supports a projective structure, see \cite{ChoiGoldman}.
By Benz\'ecri's theorem the only compact surfaces which support a flat affine structure are the two-torus and the Klein bottle. The classification of flat affine structures on the two-torus
was completed in the 1970's, see Section~\ref{sect:classification} of this chapter. Subsequently, Bill Goldman in his undergraduate thesis \cite{Goldman0} classified projective structures on the two-torus in 1977.
\end{example}
Note that
every Euclidean or hyperbolic compact surface is complete
(compare Example \ref{ex:cac}).
The majority of flat affine structures on the two-torus are \emph{not} complete but the development map of a flat affine structure on
the two-torus is always a covering onto its image (see Theorem
\ref{thm:classification}). The
development map of a projective structure
on a surface may not even be a covering \cite{ChoiGoldman}.
\subsubsection{$(X,G)$-subgeometries} \label{sect:sub_geom}
We may relate different locally homogeneous geometries by inclusion
as follows.
\begin{definition} \label{def:subgeometry}
Let $(X,G)$ and $(X',G')$ be homogeneous spaces and
$\rho: G' \rightarrow G$ a homomorphism
together with a $\rho$-equivariant local diffeomorphism
$o: X' \rightarrow X$.
Then we say that \emph{$(X',G')$ is subjacent to} or
\emph{a subgeometry of $(X,G)$}. The subgeometry is
called \emph{full} if the map $o$ is surjective onto $X$.
The subgeometry is called a \emph{covering of geometries}
if $o: X' \rightarrow X$
is a regular covering map with group of deck transformations
precisely the kernel of $\rho$.
\end{definition}
\noindent
If $(X',G')$ is a subgeometry of $(X,G)$ then $X'$ is an
$(X,G)$-manifold with development map $o: X' \rightarrow X$.
Note also that $o$ is a covering map onto its image,
since $o$ is an equivariant map of homogeneous spaces.
The group $G'$ then acts as a group of $(X,G)$-equivalences of $X'$,
so that $X'$ is, in fact, a homogeneous $(X,G)$-manifold.
\begin{example}
Let $\mathsf{p}: \widetilde{\bbA^2 \! - 0} \,\rightarrow \, {\bb A}o$ be the universal covering
of the once-punctured affine plane ${\bb A}o$, and $\widetilde{\operatorname{GL}}(2,{\bb R}) \rightarrow \operatorname{GL}(2,{\bb R})$
the universal covering group of $\operatorname{GL}(2,{\bb R})$. Then $$
\mathsf{p}: (\widetilde {\bb A}o,
\widetilde{\operatorname{GL}}(2,{\bb R})) \; {\, \longrightarrow \, } \; ({\bb A}o, \operatorname{GL}(2,{\bb R}))$$
is a full subgeometry and, indeed, a covering of geometries.
\end{example}
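As a consistency check (our addition, using only the standard homotopy equivalences of ${\bb A}o$ and $\operatorname{GL}^+(2,{\bb R})$ with the circle), the deck transformation group of $\mathsf{p}$ matches the kernel of the covering homomorphism $\rho: \widetilde{\operatorname{GL}}(2,{\bb R}) \rightarrow \operatorname{GL}(2,{\bb R})$, as Definition \ref{def:subgeometry} requires:

```latex
% Both coverings are infinite cyclic, generated by a full rotation loop:
%   \pi_1({\bb A}o) \cong \pi_1(S^1) \cong {\bb Z},
%   \pi_1(\operatorname{GL}^+(2,{\bb R})) \cong \pi_1(\operatorname{SO}(2)) \cong {\bb Z}.
\[
  \operatorname{Deck}(\mathsf{p}) \, \cong \, \pi_1({\bb A}o) \, \cong \, {\bb Z}
  \, \cong \, \pi_1\!\left(\operatorname{GL}^+(2,{\bb R})\right) \, \cong \, \ker \rho \, .
\]
% A generator of the central subgroup \ker\rho acts on the universal cover of
% {\bb A}o precisely as the deck transformation induced by that rotation loop.
```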
If $(X',G')$ is a subgeometry of $(X,G)$ then, in particular, \emph{every $(X',G')$-manifold with development
map $D'$ inherits naturally an $(X,G)$-manifold structure
with development map $o \circ D'$}. \\
This observation provides a useful tool to construct $(X,G)$-manifolds.
Assume, for instance, that $G'$ acts properly on $X'$ and $\Gamma' \leq G'$ is a discrete subgroup. Then $\Gamma' \backslash X'$ is an $(X',G')$-manifold which inherits an $(X,G)$-structure via $o$.
The following special case is of particular
importance:
\begin{definition}[\textbf{\'Etale $(X,G)$-representations}] \label{def:etale}
\index{etale@\'etale representation} \index{representation!\'etale}
If $G'$ acts on $X'$ with finite stabilizer then an inclusion of geometries $\rho: G' \rightarrow G$ as above is called an \emph{\'etale representation} of
$G'$ into $(X,G)$.
\end{definition}
If $\rho$ is \'etale with open orbit $\rho(G') x_{0}$ the group manifold $G'$ inherits via the
orbit map $$ o: G' \rightarrow X \, , \; g' \mapsto \rho(g') x_{0}$$ a natural
$(X,G)$-structure which is invariant by left-multiplication of $G'$. In particular, if $\Gamma' \leq G'$ is a discrete subgroup then the coset space $\Gamma' \backslash G'$ inherits an $(X,G)$-manifold structure.
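For illustration (a standard example; the parametrization and base point are our choices, not taken from the text), the abelian group ${\bb R}^2$ admits an \'etale affine representation by diagonal matrices:

```latex
\[
  \rho: {\bb R}^2 \rightarrow \operatorname{Aff}(2) \, , \qquad
  \rho(s,t) \, = \, \begin{pmatrix} e^{s} & 0 \\ 0 & e^{t} \end{pmatrix} .
\]
% The orbit of the base point x_0 = (1,1) is the open quadrant
%   \rho({\bb R}^2)\, x_0 = \{ (x,y) : x > 0, \; y > 0 \},
% the stabilizer of x_0 is trivial, and the orbit map o(s,t) = (e^s, e^t)
% is a diffeomorphism onto this orbit; hence \rho is etale and endows
% {\bb R}^2 with a left-invariant flat affine structure.
```

Any lattice ${\bb Z}^2 \leq {\bb R}^2$ then yields, as described above, a flat affine two-torus whose development image is the open quadrant.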
\begin{example}[\textbf{Geometries subjacent
to the punctured plane}]
The affine automorphism group of the once-punctured
affine plane ${\bb A}o$ is the linear group $\operatorname{GL}(2,{\bb R})$.
The homogeneous geometry $\left({\bb A}o, \operatorname{GL}(2,{\bb R})\right)$ has full subgeometries $\left({\bb A}o, \operatorname{GL}(1,{\bb C})\right)$ and $({\bb A}o, \operatorname{SL}(2,{\bb R}))$. Note that
the first one arises from an \'etale affine representation of the abelian Lie group ${\bb R}^2$.
Further subgeometries, which are not full,
are defined by the abelian
\'etale Lie subgroups $\mathsf{C}_{1}$ and $\mathsf{B}$, which are listed in (2) and (3) of Example \ref{ex:domains}.
Of course, all these homogeneous spaces define particular subgeometries of plane affine geometry, as well.
\end{example}
\subsubsection{Existence of compact forms}
A compact manifold $M$, which is locally modeled on $(X,G)$,
will be called a \emph{compact form} for $(X,G)$. Given a
homogeneous space $(X,G)$, it is in general a difficult problem to decide whether it admits a compact form.
\begin{example}[\textbf{$({\bb A}o, \, \operatorname{SL}(2,{\bb R}))$ has no compact form}]
\label{ex:bbAo2}
By the Calabi-Markus phenomenon (see Example \ref{ex:bbAo1}),
$({\bb A}o, \, \operatorname{SL}(2,{\bb R}))$ has only quotients by finite groups.
Since $({\bb A}o, \, \operatorname{SL}(2,{\bb R}))$ is a subgeometry of plane affine geometry, Benz\'ecri's theorem (Theorem \ref{thm:Benzecri}) and
the classification of flat affine structures with development image
${\bb A}o$ (see Theorem \ref{thm:classification})
imply the stronger result that \emph{there is no compact locally homogeneous surface modeled on the homogeneous space $({\bb A}o, \operatorname{SL}(2,{\bb R}))$}.
\end{example}
On the contrary, the
spaces $({\bb A}o,\operatorname{GL}(2,{\bb R}))$ and $\left({\bb A}o, \operatorname{GL}(1,{\bb C})\right)$ evidently
have complete compact forms. For example, every lattice
$\Gamma \leq \operatorname{GL}(1,{\bb C})$ acts properly discontinuously and
freely on ${\bb A}o$, which thus gives rise to a compact flat affine manifold ${\bb A}o\, \big/ \, \Gamma$.\\
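A concrete instance (our illustrative choice of lattice): the cyclic group generated by the doubling homothety gives a Hopf torus:

```latex
\[
  \Gamma \, = \, \left\{ \begin{pmatrix} 2^{n} & 0 \\ 0 & 2^{n} \end{pmatrix}
  \, : \, n \in {\bb Z} \right\} \, \leq \, \operatorname{GL}(1,{\bb C}) \, ,
  \qquad {\bb A}o \, \big/ \, \Gamma \, \approx \, T^{2} .
\]
% The closed annulus \{ x : 1 \leq \|x\| \leq 2 \} is a fundamental domain;
% gluing its two boundary circles by the doubling map yields a compact
% quotient, which is complete as an ({\bb A}o, \operatorname{GL}(1,{\bb C}))-form.
```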
Benz\'ecri's theorem implies that
every orientable compact form of the space $({\bb A}o,\lie{o}peratorname{GL}(2,{\bb R}))$
is diffeomorphic to the two-torus, and in particular it has abelian
fundamental group ${\bb Z}^2$.
\begin{example}[\textbf{Compact forms of $({\bb A}o,\operatorname{GL}(2,{\bb R}))$}]
The classification theorem asserts that the development
map of a flat affine structure on the two-torus is a covering map onto the development image (cf.\ Proposition \ref{prop:Deviscovering}).
In particular,
every compact locally homogeneous surface modeled on the homogeneous spaces $({\bb A}o, \operatorname{GL}(1,{\bb C}))$ or $({\bb A}o, \operatorname{GL}(2,{\bb R}))$ is
\emph{diffeomorphic to the two-torus} and
either it is complete (which is always true for $({\bb A}o, \operatorname{GL}(1,{\bb C}))$-structures)
or its development image is a sector or a halfplane in ${\bb A}^2$
(see Section~\ref{sect:etale_affine}).
\end{example}
In Section~\ref{sect:tori_bbao}, we describe the construction of
all compact $({\bb A}o, \operatorname{GL}(2,{\bb R}))$-manifolds which are
\emph{complete}. The classification theorem for all structures, including
the non-complete case, is stated in Section~\ref{sect:bbaoclass}.
\subsection{Convergence of development maps}
The space of $(X,G)$-\-de\-ve\-lop\-ment maps for the manifold $M$ \index{development maps!space of} \index{development maps!convergence of}
is the set $$ {\mathrm{Dev}}(M) = {\mathrm{Dev}}(M,X,G) $$ of all local
$C^\infty$-diffeomorphisms
$$ D: \tilde{M} \rightarrow X $$ which,
for some $h \in \operatorname{Hom}(\pi_1(M),G)$ and all $\gamma \in \pi_{1}(M)$, satisfy
$$ D \circ \gamma = h(\gamma) \circ D . $$
We endow the space of development maps with the compact $C^{\infty}$-topology. \index{compact $C^{\infty}$-topology}
\index{topology!compact $C^{\infty}$-}
In this topology, a sequence of smooth maps converges if and only if \emph{it and all its derivatives} (computed in local coordinate charts) converge uniformly on the compact subsets of $\tilde M$. In particular, ${\mathrm{Dev}}(M)$ thus becomes a Hausdorff second countable topological space.
\subsubsection{Convergence of holonomy} \label{sect:convhol}
Let $M$ be compact. Then $\pi_{1}(M)$ is finitely generated, and we equip $\operatorname{Hom}(\pi_1(M),G)$ with
the topology of pointwise convergence. \index{holonomy!convergence of}
Then the map $$\mathsf{hol}: {\mathrm{Dev}}(M) \rightarrow \operatorname{Hom}(\pi_1(M),G) \; , \; D \mapsto h$$
is continuous, since $G$ carries the
$C^\infty$-topology as a group of maps on $X$. The main theorem
on deformations of $(X,G)$-structures
(see Theorem \ref{thm:Deformations} below)
asserts that a small deformation of holonomy
induces a deformation of development
maps. That is, the map $\mathsf{hol}$ admits local sections.
By compactness of $M$, the convergence of
development maps is controlled
on a fundamental domain and by the holonomy.
\begin{fact}[Holonomy determines convergence] \label{fact:convergence}
Let $U \subset \tilde{M}$ be
an open subset with compact closure such that
$\mathsf{p}(U) = M$, where $\mathsf{p}: \tilde M \rightarrow M$ is the universal covering.
Then a sequence of development maps $D_{i} \in {\mathrm{Dev}}(M)$ with
holonomy $h_{i}$ converges to a development map $D$
if and only if the restrictions of $D_{i}$ to $U$ converge to
$D$ and the homomorphisms $h_{i}$ converge
to the holonomy $h$ of $D$.
\end{fact}
A particular property of the $C^{\infty}$-topology is that it does not control the behavior of maps outside compact sets. This allows
for possibly unexpected phenomena:
\begin{example}[{\textbf{Openness of embeddings fails}}] \label{ex:embeddings}
Let ${\mathrm{Dev}}_{e}(M)$ be the subset of development maps which are
injective. Let ${\mathcal K} \subset \tilde M$ be a compact fundamental
domain for the action of $\pi_{1}(M)$. By \cite[Chapter 2, Lemma 1.3]{Hirsch}, the set of development maps which are injective on ${\mathcal K}$ is open with respect to the $C^1$-topology. In particular, it is open with respect to the $C^\infty$-topology. However, the global behavior of development maps is controlled only by the holonomy. Therefore, even if $M$ is compact, ${\mathrm{Dev}}_{e}(M)$ may not be an open subset of ${\mathrm{Dev}}(M)$.
On the two-torus there are injective development maps in ${\mathrm{Dev}}(T^2, {\bb A}^2, {\rm Aff}(2))$ which contain a non-trivial covering map in every small neighborhood. This is even true
for the development of the
standard translation structure, cf.\ Section~\ref{sect:orbit_closures}
and Figure \ref{figure:qHopfTotrans}.
\end{example}
\begin{figure}
\caption{The quotient of a Hopf torus deforms to a translation torus.}
\end{figure}
\subsubsection{Deformation
of development maps}
If $M$ is compact then, as observed by Thurston \cite{Thurston1},
building on earlier work of Weil \cite{Weil}, a small deformation of holonomy in the space of homomorphisms
$\lie{o}peratorname{Hom}(\pi_1(M) , G)$ induces a deformation of $(X,G)$-development maps.
Before stating the theorem precisely, we discuss the \index{development maps!deformation of}
\paragraph{Action of
diffeomorphisms of $M$ on development pairs.} \index{development pair}
Let $x_{0} \in M$ and $\tilde x_{0} \in \tilde M$, $\mathsf{p}(\tilde x_{0}) = x_{0}$, be basepoints and $\Phi \in {\rm Diff}(M,x_{0})$ a basepoint preserving diffeomorphism
with lift $\tilde{\Phi} \in {\rm Diff}(\tilde{M}, \tilde x_{0})$.
The group ${\rm Diff}(M,x_{0})$ of basepoint preserving diffeomorphisms then acts on
development pairs, by mapping $D \in {\mathrm{Dev}}(M)$ to
$D \circ \tilde {\Phi}$.
We let ${\rm Diff}_{1}(M,x_{0})$ denote the subgroup of ${\rm Diff}(M,x_{0})$ consisting of diffeomorphisms which
are homotopic to the identity by a basepoint preserving
homotopy, and ${\rm Diff}_{0}(M,x_{0})$ the identity component of
${\rm Diff}(M,x_{0})$ (that is, the subgroup of elements
which are isotopic to the identity).
Then the action of ${\rm Diff}_{1}(M,x_{0})$ and its subgroup
${\rm Diff}_{0}(M,x_{0})$ on the set of development
maps ${\mathrm{Dev}}(M)$ leaves the holonomy invariant, since
${\rm Diff}_{1}(M,x_{0})$ acts trivially on $\pi_{1}(M,x_{0})$.
\\
See \cite{Epstein,Goldman,Lok,BeGe} for more detailed discussion
of the following: \index{deformation theorem}
\index{Thurston deformation theorem}
\begin{theorem}[Deformation theorem, Thurston et al.] \label{thm:Deformations}
Let $M$ be a compact manifold. Then the induced map
\begin{equation} \mathsf{hol}:
\; {\rm Diff}_{0}(M,x_{0}) \backslash \,{\mathrm{Dev}}(M) \longrightarrow \operatorname{Hom}(\pi_1(M) , G) \label{sfholmap} \end{equation}
which associates to a development map its holonomy homomorphism is a local homeomorphism.
\end{theorem}
The theorem states that the map $\mathsf{hol}: {\mathrm{Dev}}(M) \longrightarrow \operatorname{Hom}(\pi_1(M) , G)$
is continuous and open.
In addition,
$\mathsf{hol}$ locally admits \emph{continuous} sections.
Such a section is called a {\em development section}.
More specifically, it is proved (see below) that
\emph{every convergent sequence of holonomy maps lifts to a convergent sequence of
development maps}, and two nearby development maps with identical holonomy are isotopic by a basepoint preserving diffeomorphism.
Therefore, a sequence of points in the quotient space ${\rm Diff}_{0}(M,x_{0}) \backslash \,{\mathrm{Dev}}(M)$
is convergent if and only if there exists a corresponding lifted
sequence of development maps which converges. \\
The main idea in the proof of Theorem \ref{thm:Deformations} due to Weil \cite{Weil} is easy to grasp. Here we sketch the construction of the development section in the particular case of flat affine two-tori. In addition, we consider only tori which are obtained by gluing polygons in the
plane (cf.\ Section~\ref{sect:agluing}).
A similar approach is also
valid for non-homogeneous tori which
are obtained as quotients of the universal covering affine
manifold of ${\bb A}o$
(cf.\ Section~\ref{sect:tori_bbao}), and,
in fact, in the general case of arbitrary $(X,G)$-manifolds, compare \cite{Lok,Weil}. A somewhat different approach
to this result is explained in \cite{Goldman}
and the recent survey \cite{Goldman4} on locally homogeneous manifolds.
\begin{proof}
[Proof of Theorem \ref{thm:Deformations}]
Let $M$ be a flat affine two-torus and $D_{o}: {\bb R}^2 \rightarrow {\bb A}^2$,
$h_{o}: {\bb Z}^2 \rightarrow {\rm Aff}(2)$ a development pair for $M$. Let $h_{\epsilon}
\in \operatorname{Hom}({\bb Z}^2, {\rm Aff}(2))$, $\epsilon \geq 0$, be a small deformation of $h_{o}$. To
obtain the development section, we construct a curve of development maps
$D_{\epsilon}$ with holonomy $h_{\epsilon}$, which converges to $D_{o}$
in the compact $C^\infty$-topology.
For this, we assume that the development pair of $M$ is represented
as the identification space of a polygon $\mathcal P$ in affine space.
In fact, as explained in \cite[Section~2]{BauesG}, $\mathcal{P}$ can be chosen to be a quadrilateral contained in ${\bb A}^2$,
which is glued along its sides by the generators $\gamma_{1}$, $\gamma_{2}$ of $\pi_{1}(T^2) = {\bb Z}^2$ using the holonomy images $h_{o}(\gamma_{i}) \in {\rm Aff}(2)$.
The generators satisfy cycle relations
and certain gluing conditions.
Next we fix a diffeomorphism of the standard unit square in ${\bb R}^2$
with $\mathcal{P}$. Using $h_{o}$, this extends $\pi_{1}(T^2)$-equivariantly
to a smooth covering
$$ {\bb R}^2 \; \rightarrow \; \bar{X}= (\mathcal{P} \times \Gamma) \big/ \sim \; \; ,$$
where the identification space $\bar{X}$ is a flat affine manifold which is obtained as
the disjoint union of
the polygons $\gamma \mathcal{P}$, $\gamma \in \Gamma$,
glued along their edges as determined by the
side pairings $h_{o}(\gamma_{i})$. Here $\Gamma = h_{o}({\bb Z}^2)$
is the holonomy group of $M$.
Moreover, the inclusion $\mathcal P \rightarrow {\bb A}^2$
extends to a development map $$ \bar D: \bar X \rightarrow {\bb A}^2 \; .$$ The composition
of both maps yields the desired
development map $D: {\bb R}^2 \rightarrow {\bb A}^2$ with
holonomy $h_{o}$. The space $\bar X$ is the holonomy covering space of
$M$, see \cite[Proposition 2.1]{BauesG} for a detailed account.
The development section $D_{\epsilon}$ may now be obtained in a similar
manner. In fact, for small $\epsilon>0$, $\mathcal{P}$
can be deformed continuously to a quadrilateral $\mathcal{P}_{\epsilon}$, which
satisfies the gluing conditions with respect to $h_{\epsilon}$. (See Figure
\ref{figure:deform1} for an illustration.) This gives
rise to a family of identification spaces
$\bar X_{\epsilon} = (\mathcal{P}_{\epsilon} \times \Gamma_{\epsilon}) / \sim_{\epsilon}$, and corresponding development maps $D_{\epsilon}: {\bb R}^2 \rightarrow {\bb A}^2$ with
holonomy $h_{\epsilon}$. By the above Fact \ref{fact:convergence}, the development maps
$D_{\epsilon}$ converge to $D_{o}$ in the $C^\infty$-topology.
\end{proof}
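To make the gluing conditions appearing in the proof explicit (our paraphrase; the vertex labels $v_{0}, \dots, v_{3}$ are not fixed in the text), a quadrilateral $\mathcal{P}_{\epsilon}$ with vertices $v_{0}, v_{1}, v_{2}, v_{3}$ and side pairings $h_{\epsilon}(\gamma_{1})$, $h_{\epsilon}(\gamma_{2})$ must satisfy the cycle relation of ${\bb Z}^2$ together with vertex conditions:

```latex
\[
  h_{\epsilon}(\gamma_{1}) \, h_{\epsilon}(\gamma_{2})
  \, = \, h_{\epsilon}(\gamma_{2}) \, h_{\epsilon}(\gamma_{1}) \, , \qquad
  h_{\epsilon}(\gamma_{1}) v_{0} = v_{1} \, , \quad
  h_{\epsilon}(\gamma_{2}) v_{0} = v_{3} \, , \quad
  h_{\epsilon}(\gamma_{1}) v_{3} \, = \, h_{\epsilon}(\gamma_{2}) v_{1} \, = \, v_{2} \, .
\]
```

Since these are finitely many equations depending continuously on $h_{\epsilon}$, a quadrilateral $\mathcal{P}_{\epsilon}$ close to $\mathcal{P}$ solving them exists for all sufficiently small $\epsilon$.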
The above construction of the development section is illustrated in Figures \ref{figure:deform1} and \ref{figure:deform2}.
\begin{figure}
\caption{The fundamental polygon $\mathcal{P}$ and its deformation $\mathcal{P}_{\epsilon}$.}
\end{figure}
\begin{figure}
\caption{A family of development maps for the
once-punctured plane ${\bb A}o$.}
\end{figure}
\subsubsection{Topological rigidity of development maps}
\index{development maps!topological rigidity of}
\index{rigidity!of development maps}
Although local rigidity holds by the deformation theorem,
it may fail globally.
If the map $\mathsf{hol}$ in \eqref{sfholmap} is \emph{not} injective (as happens in the case of flat affine two-tori, see the basic Examples \ref{ex:finhopftori}, \ref{ex:gentorusnh} and also Section~\ref{sect:thedefspace} for further discussion), there do
exist non-isomorphic $(X,G)$-manifolds with the same holonomy
homomorphism $h$. On the contrary, if the domain of discontinuity $\Omega$
for the holonomy group $h(\Gamma)$, $\Gamma = \pi_{1}(M)$,
on $X$ is large then the development is uniquely determined. This is the case, for example, if $\Omega = X$ and $h(\Gamma)$ is the holonomy of a compact complete $(X,G)$-manifold.
\begin{example}[\textbf{Discontinuous holonomy}] \label{ex:toprig}
\index{holonomy!discontinuous} \index{domain of discontinuity}
\index{holonomy homomorphism}
Let $D: \tilde M \rightarrow X$ be the development map for an
$(X,G)$-structure on the compact manifold $M$ with holonomy homomorphism
$h$. If $h(\Gamma)$ acts
properly discontinuously and freely with compact quotient
on $X$ then
$D$ is a covering map onto $X$. (Indeed,
the local diffeomorphism
$M \rightarrow X/h(\Gamma)$ of compact manifolds
induced by $D$ is a covering map, and hence so is $D$.)
It follows that every
other development map $D': \tilde M \rightarrow X$ with holonomy homomorphism $h$
is of the form $D' = D \circ \Phi$, where $\Phi \in {\rm Diff}(\tilde M)$
is a diffeomorphism which centralizes the deck transformation
group $\Gamma$.
\end{example}
A more involved argument shows that
$D$ is determined by $h(\Gamma)$
if the Hausdorff dimension of $X -\Omega$ is small, see \cite{GoKa}.
\begin{example} The development maps of compact $(\widetilde{\bbA^2 \! - 0}, \widetilde{\operatorname{GL}}^+(2,{\bb R}))$-forms are rigid, see Section~\ref{sect:bbaoclass}, Theorem \ref{thm:bbao_rigid}.
We remark that
the domain of discontinuity for the holonomy group of such a manifold can be a proper open
subset of $\widetilde{\bbA^2 \! - 0}$.
\end{example}
\subsection{Deformation spaces of $(X,G)$-structures}
Let ${\mathfrak S}(M) = {\mathfrak S}(M,X,G)$ denote the set of
all $(X,G)$-structures on $M$. The group ${\rm Diff}(M)$ of all diffeomorphisms of $M$ \index{diffeomorphism group!of manifold}
\index{manifold!diffeomorphism group of}
acts naturally on this set such that two $(X,G)$-structures are in the same orbit if and only if they are $(X,G)$-equivalent. The set of all $(X,G)$-structures on $M$ up to $(X,G)$-equivalence is called the {\em moduli space} ${\mathfrak M}(M) = {\mathfrak M}(M,X, G)$ of $(X,G)$-structures.
\begin{definition} \index{deformation space!of $(X,G)$-structures}
The {\em deformation space\/} for
$(X,G)$-structures on $M$ is the quotient space
$$ {\mathfrak D}(M) = {\mathfrak D}(M,X,G) = {\mathfrak S}(M,X,G) / {\rm Diff}_{1}(M)$$
of equivalence classes of $(X,G)$-structures up to {\em homotopy\/}.
\end{definition}
\noindent
Thus, two $(X,G)$-structures define the same point
in ${\mathfrak D}(M)$ if they are equivalent by an $(X,G)$-equivalence which is homotopic to the identity of $M$. The moduli space
$ {\mathfrak M}(M)$ is the quotient space of the deformation space ${\mathfrak D}(M)$ by the group of homotopy \index{moduli space!of $(X,G)$-structures}
classes of diffeomorphisms of $M$.
\begin{remark} There is some inconsistency in the literature about the definition
of the deformation space. Many authors define ${\mathfrak D}(M)$
to be the space of structures up to {\em isotopy}.
If $M$ is a surface (two-dimensional manifold) two homotopic diffeomorphisms are isotopic, by classical results of Dehn, Nielsen, and Baer (see
for example \cite{Stillwell}).
Therefore, in this case, these two definitions coincide.
The corresponding fact fails in higher dimensions, even for tori,
see \cite{HS}.
\end{remark}
We observe that the Lie group $G$ acts by left-composition
on the space of development maps.
This action is continuous \emph{and free}, and the set of
$(X,G)$-structures naturally identifies with the quotient by the action of $G$, that is,
$$ {\mathfrak S}(M,X,G) = G \, \backslash {\mathrm{Dev}}(M,X,G) \; . $$
Indeed, if $g \in G$ and $D \in {\mathrm{Dev}}(M,X,G)$ is a development map then $g \circ D$ is another development map for the same
$(X,G)$-structure on $M$. This exhibits the deformation space
as a double quotient space
$$ {\mathfrak D}(M,X,G) = G \, \backslash {\mathrm{Dev}}(M,X,G) / \, {\rm Diff}_{1}(M) \; . $$
The $C^\infty$-topology on the set of $(X,G)$-structures is the quotient topology inherited from ${\mathrm{Dev}}(M,X,G)$.
(Thurston \cite[Chapter 5]{Thurston} also gives a direct
description of the topology on ${\mathfrak S}(M)$ in terms of
convergence of sets of local charts which define
the elements of ${\mathfrak S}(M)$, see \cite[1.5.1]{Epstein}.)
The deformation space and the moduli space carry
the quotient topology inherited from the
set of $(X,G)$-structures.
\subsubsection{Orientation components of the deformation space}
Let $X$ be a $G$-space which is orientable. We let $G^+$ denote the normal subgroup
of orientation preserving elements of $G$.
Now assume that $M$ is an $(X,G)$-manifold which is orientable. We fix an orientation for $M$.
Then there is a disjoint decomposition
\begin{equation} \label{eq:devdec}
{\mathrm{Dev}}(M,X,G) = {\mathrm{Dev}}^+(M,X,G) \cup {\mathrm{Dev}}^-(M,X,G) \, ,
\end{equation}
where ${\mathrm{Dev}}^+(M,X,G)$ and ${\mathrm{Dev}}^-(M,X,G)$ denote the
closed (and open) subspaces
which consist of orientation-preserving and orientation-reversing development maps, respectively.
Since $M$ is orientable, the components of the decomposition \eqref{eq:devdec} are preserved by the action of ${\rm Diff}_{1}(M,x_{0})$ on development maps. Furthermore, the action of
$G^+$ on development maps preserves the components.
Therefore, the deformation space ${\mathfrak D}(M,X,G^+)$ decomposes into two disjoint open and closed subsets, the \emph{orientation components}, \index{deformation space!orientation components of}
\begin{equation} \label{eq:defdec}
{\mathfrak D}(M,X,G^+) = {\mathfrak D}^+(M,X,G) \cup {\mathfrak D}^-(M,X,G) \; .
\end{equation}
Note that every orientation reversing element of $G$ exchanges the orientation components of ${\mathrm{Dev}}(M,X,G)$ and therefore also
of ${\mathfrak D}(M,X,G^+)$. Hence, if
$G$ contains orientation reversing elements
then the subgeometry $(X,G^+) \rightarrow (X,G)$ induces a homeomorphism $$ {\mathfrak D}^+(M,X,G) \approx {\mathfrak D}(M,X,G) \; . $$
\subsubsection{The topology of the deformation space}
The following classical and fundamental example gives a role model for the investigation of the properties of deformation
spaces for locally homogeneous structures.
\begin{example}[Teichm\"uller space ${\rm T}eich_{g}$] \index{Teichm\"uller space}
Let $G^+= \lie{o}peratorname{PSL}(2,{\bb R})$ be the group of orientation preserving isometries of the
hyperbolic plane ${\mathbb H}_{2}$ and $M = M_{g}$ a surface of
genus $g$, $g {< , >}eq 2$.
By the uniformization theorem, the Teichm\"uller space ${\rm T}eich_{g}$ of conformal structures on a surface $M_{g}$, $g {< , >}eq 2$,
may be considered as the deformation space of constant curvature $-1$ metrics, that is, $${\rm T}eich_{g} = {\mathfrak D}^+(M_{g}, {\mathbb H}_{2}, \lie{o}peratorname{PSL}(2,{\bb R})) \, . $$
The space ${\rm T}eich_{g}$ is homeomorphic to ${\bb R}^{6g-6}$.
Recall that the
\emph{mapping class group} \index{mapping class group}
$$ {\mathrm{Map}}_{g}= {\rm Diff}^+(M_{g})/ {\rm Diff}_{0}(M_{g}) \cong {\rm Out}^+(\Gamma_{g}) $$
is the group of isotopy classes of orientation
preserving diffeomorphisms of a surface.
This group
acts properly discontinuously on ${\rm Teich}_{g}$, and the
moduli space of conformal structures $$ {\mathfrak M}(M_{g}) =
{\rm Teich}_{g}/ \, {\mathrm{Map}}_{g} $$ is a Hausdorff space.
(See, for example, \cite{Abikoff,EE,Ratcliffe} and
other chapters of this handbook \cite{BIW, Goldman3}).
\end{example}
In general, however, the topology on the moduli space and the deformation space can be highly singular, as we can see, in particular, from Examples \ref{ex:completeas} and \ref{ex:defissing} below. The local properties of the
deformation space are reflected in the character variety
$\operatorname{Hom}(\pi_1(M) , G)/G$, which is the space of
conjugacy classes of representations of
$\pi_1(M)$ into $G$. \index{character variety}
\paragraph{The holonomy map on the deformation space}
Since ${\mathfrak S}(M,X,G)$ has the quotient topology from development maps, the holonomy \eqref{sfholmap}
induces a continuous map
\begin{equation*} \overline{\mathsf{hol}}: {\mathfrak S}(M,X,G) \longrightarrow \operatorname{Hom}(\pi_1(M) , G)/G \; , \label{sfholmap2}
\end{equation*} which
gives rise to the map
\begin{equation} \label{holmap}
hol: {\mathfrak D}(M) \longrightarrow \operatorname{Hom}(\pi_1(M) , G)/G \; .
\end{equation}
The continuous map $hol$ associates to a homotopy class of $(X,G)$-structures on $M$ the corresponding conjugacy class of its holonomy homomorphism $h$. By the deformation theorem (Theorem \ref{thm:Deformations}),
$hol$ is furthermore an open map. \\
The map $hol$ thus encodes
a good picture of the topology on ${\mathfrak D}(M)$:
\begin{example}[Teichm\"uller space ${\rm T}eich_{g}$ is a cell]
{\lie{a}}bel{ex:Teichm} \index{Teichm\"uller space} \index{surface!hyperbolic}
The holonomy image of hyperbolic structures on
$M_{g}$, $g {< , >}eq 2$,
is the subspace $\lie{o}peratorname{Hom}_{c}(\Gamma_{g}, \lie{o}peratorname{PSL}(2,{\bb R}))$
of the space $\lie{o}peratorname{Hom}(\Gamma_{g}, \lie{o}peratorname{PSL}(2,{\bb R}))$
which consists of injective homomorphisms with
discrete image. The space $\lie{o}peratorname{Hom}_{c}(\Gamma_{g}, \lie{o}peratorname{PSL}(2,{\bb R}))$ has two connected components \cite{Go88}. The
components $\lie{o}peratorname{Hom}_{c}^+(\Gamma_{g}, \lie{o}peratorname{PSL}(2,{\bb R}))$ and
$\lie{o}peratorname{Hom}_{c}^-(\Gamma_{g}, \lie{o}peratorname{PSL}(2,{\bb R}))$ arise
from the orientation of development maps.
The group $\lie{o}peratorname{PSL}(2,{\bb R})$ acts freely and properly
(by conjugation) on $$ \lie{o}peratorname{Hom}_{c}^+(\Gamma_{g}, \lie{o}peratorname{PSL}(2,{\bb R})) \; ,$$ the quotient
space being homeomorphic to ${\bb R}^{6g-6}$.
(See, for example, \cite[Theorem 9.7.4]{Ratcliffe}).
By completeness of hyperbolic structures on $M_{g}$, every
development map is a diffeomorphism. The topological
rigidity of development maps (cf.\ Example \ref{ex:toprig})
implies that the induced map
$${\rm T}eich_{g}= {\mathfrak D}^+(M_{g}, {\mathbb H}_{2}) \, \; \stackrel{hol}{\longrightarrow} \, \; \lie{o}peratorname{Hom}_{c}^+(\Gamma_{g}, \lie{o}peratorname{PSL}(2,{\bb R}))/ \lie{o}peratorname{PSL}(2,{\bb R})) $$
is a homeomorphism.
\end{example}
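A quick dimension count (a standard consistency check, not carried out in the text) recovers the dimension of Teichm\"uller space: the surface group $\Gamma_{g}$ has $2g$ generators subject to one relation, and $\dim \operatorname{PSL}(2,{\bb R}) = 3$, so

```latex
\[
  \dim \operatorname{Hom}_{c}^+(\Gamma_{g}, \operatorname{PSL}(2,{\bb R}))
  \, = \, 2g \cdot 3 - 3 \, = \, 6g - 3 \, , \qquad
  \dim \bigl( \operatorname{Hom}_{c}^+ \big/ \operatorname{PSL}(2,{\bb R}) \bigr)
  \, = \, (6g-3) - 3 \, = \, 6g - 6 \, ,
\]
```

in agreement with the homeomorphism of ${\mathfrak D}^+(M_{g}, {\mathbb H}_{2})$ with ${\bb R}^{6g-6}$ stated above.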
In general, it seems difficult to decide whether the map
\index{holonomy map!is a local homeomorphism}
$hol$ is a local homeomorphism as well. Indeed, Kapovich
\cite{Kapovich} constructed examples of deformation spaces such
that the map $hol$ is \emph{not} everywhere a local homeomorphism.
We construct such a counterexample for the deformation space of
a two-dimensional geometric structure on the two-torus in Appendix B. \index{holonomy map!is not a local homeomorphism}
In the case of flat affine two-tori though, we shall show that $hol$ is a local homeomorphism (see Section~\ref{sect:holislh}).
\index{holonomy map!for flat affine structures}
\paragraph{The induced map of a subgeometry}
Let $o: (X',G') \rightarrow (X,G)$ be a subgeometry with $\rho: G' \rightarrow G$ the associated homomorphism (see
Section~\ref{sect:sub_geom}). There is an associated map
\begin{equation} \label{eq:subgeometry0}
{\mathrm{Dev}}(M, X') \, {\, \longrightarrow \, } \, {\mathrm{Dev}}(M,X) \, , \; D' \mapsto D = o \circ D'
\end{equation}
and a map on homomorphisms
$$ \operatorname{Hom}(\pi_{1}(M), G') \, {\, \longrightarrow \, } \, \operatorname{Hom}(\pi_{1}(M), G) \, , \; h' \mapsto h = \rho \circ h' \; , $$
where $h = \mathsf{hol}(D)$.
These maps allow us to relate
the deformation spaces in a commutative diagram
of the form
\begin{align} \label{eq:subgeometry1}
\xymatrix{
\; \; {\mathfrak D}(M, X') \; \; \ar[d] \ar[r]^(0.37){hol} & \; \; \operatorname{Hom}(\pi_{1}(M), G')/ G' \; \; \ar[d] \\
\; \; {\mathfrak D}(M, X) \; \; \ar[r]^(0.37){hol} & \; \;
\operatorname{Hom}(\pi_{1}(M), G)/ G \; \; . }
\end{align}
Note that the properties of the induced map ${\mathfrak D}(M, X') \rightarrow {\mathfrak D}(M,X)$ can vary wildly with the type of subgeometry.
In general, the induced map need be neither injective nor surjective.\\
Recall the notion of covering of geometries from Definition
\ref{def:subgeometry}. We shall require the following lemma:
\begin{lemma} \label{lem:covering_geoms}
If $o: (X',G') \rightarrow (X,G)$ is a covering of geometries then
the induced map on deformation spaces
$$ {\mathfrak D}(M, X') {\, \longrightarrow \, } {\mathfrak D}(M,X) $$
is a homeomorphism.
\end{lemma}
\begin{proof} Indeed, since $o$ is a covering, the above map \eqref{eq:subgeometry0}, $D' \mapsto D$,
on development maps
descends to a ${\rm Diff}(M)$-equivariant map on the sets of structures
$$ {\mathfrak S}(M,X') {\, \longrightarrow \, } {\mathfrak S}(M,X) $$
which is a homeomorphism.
\end{proof}
\subsubsection{The topology on the space of $(X,G)$-structures}
\index{$(X,G)$-structures!space of}
The topology on the space ${\mathfrak S}(M,X,G)$ is rather well behaved.
In fact, ${\mathfrak S}(M,X,G)$ is a Hausdorff and metrizable
topological space. This can be seen by representing an
$(X,G)$-structure on $M$ as an integrable higher order structure
in the sense of Ehresmann (cf.\ \cite[Section~I.8]{SKobayashi}).
We discuss two important examples now:
\begin{example}[${\mathfrak S}(M, {\mathbb H}_{2}, \operatorname{PSL}(2,{\bb R}))$]
The space of hyperbolic structures on a surface $M$ is homeomorphic \index{$(X,G)$-structures!hyperbolic}
to the space of hyperbolic (constant curvature $-1$) Riemannian metrics
with the $C^\infty$-topology on the space of Riemannian metrics.
It can also be equipped with the structure of a contractible Fr\'echet manifold, see \cite{EE}. Similarly, the space ${\mathfrak S}(M, {\bb R}^2, {\rm E}(2))$ of flat Euclidean structures is homeomorphic to
the space of flat Riemannian metrics on $M$.
\end{example}
In the case of flat affine structures, the action of the affine
group on development maps admits a global slice:
\index{$(X,G)$-structures!flat affine}
\begin{example}[${\mathfrak S}(M,{\bb A}^n)$ is Hausdorff]
\label{ex:framepreserving}
Let ${\mathrm{Dev}}(M, {\bb A}^n)$ be the \index{development maps!space of}
set of development maps for flat affine structures on $M$.
We choose a base frame $E_{x_{0}}$ on ${\bb A}^n$ and a
frame $F_{\tilde m_{0}}$
on $\tilde M$, respectively,
and let $$ {\mathrm{Dev}}_{f}(M,{\bb A}^n) = {\mathrm{Dev}}_{f}(M,F_{{\tilde m}_{0}}, E_{x_{0}})$$ denote the set of frame preserving
development maps. Since ${\rm Aff}(n)$ acts simply transitively on
the frame bundle of ${\bb A}^n$, there is a well defined continuous
retraction ${\mathrm{Dev}}(M, {\bb A}^n) \rightarrow {\mathrm{Dev}}_{f}(M,{\bb A}^n)$, and, in fact, there is
a homeomorphism $$ {\mathrm{Dev}}(M, {\bb A}^n) \approx {\rm Aff}(n) \times {\mathrm{Dev}}_{f}(M,{\bb A}^n) \; .$$
This shows
that the quotient ${\mathrm{Dev}}(M, {\bb A}^n)/ {\rm Aff}(n)$ is homeomorphic to
the subspace ${\mathrm{Dev}}_{f}(M, {\bb A}^n)$ and the affine group ${\rm Aff}(n)$ acts properly on the set
of development maps. In particular, the space of flat affine structures
${\mathfrak S}(M, {\bb A}^n)$ is a Hausdorff space.
\end{example}
Another way to understand the topology on ${\mathfrak S}(M,{\bb A}^n)$
is to identify flat affine structures with
\emph{flat torsion free
connections} on the tangent bundle of $M$. These form
a space of sections of a quotient of the bundle of $2$-frames
over $M$, see
\cite[Proposition IV.7.1]{SKobayashi}.
In Section~ \ref{sect:affine_conns}
of this chapter we employ this approach to study flat affine structures on the two-torus.
\subsubsection{The subspace of complete $(X,G)$-structures}
\index{deformation space!of complete $(X,G)$-structures}
\index{$(X,G)$-structures!complete}
Let ${\mathfrak D}_{c}(M)$ denote the subset of the deformation space
${\mathfrak D}(M)$ which consists of complete $(X,G)$-space forms
(that is, the subspace corresponding to development maps which are diffeomorphisms).
We denote by $\operatorname{Hom}_{c}(\pi_1(M), G)$
the set of all injective homomorphisms $\pi_1(M) \rightarrow G$,
such that the image acts properly discontinuously on $X$.
We call $\operatorname{Hom}_{c}(\pi_1(M), G)$ the \emph{set of discontinuous homomorphisms}.
The holonomy homomorphisms belonging to
the elements of ${\mathfrak D}_{c}(M)$ form an open subset
of $\operatorname{Hom}_{c}(\pi_1(M), G)$.
In fact, by the rigidity of development maps belonging to discontinuous
holonomy homomorphisms (cf.\ Example \ref{ex:toprig}),
a small deformation of holonomy,
which remains in the domain of discontinuous homomorphisms, lifts to a
deformation of complete $(X,G)$-manifold structures on $M$. Therefore, Theorem
\ref{thm:Deformations} implies that
the restricted map
\begin{equation*} \mathsf{hol}:
\; {\rm Diff}_{0}(M,x_{0}) \backslash \,{\mathrm{Dev}}_{c}(M) \longrightarrow \operatorname{Hom}_{c}(\pi_1(M) , G)
\end{equation*}
is a local homeomorphism.
Then the following result is easily observed (see also \cite{BauesV}):
\begin{theorem} \label{thm:cdefspace}
Let $M$ be a smooth compact manifold such that
the natural homomorphism ${\rm Diff}(M)/{\rm Diff}_{1}(M) \rightarrow {\rm Out}(\pi_{1}(M))$ is injective. Then the induced map
$$ hol: {{\mathfrak D}}_{c}(M) \longrightarrow \operatorname{Hom}_{c}(\pi_1(M), G)/G $$
is a homeomorphism onto its image.
\end{theorem}
Note that the assumptions of the theorem are satisfied, for example, if $X$ is contractible.
\begin{example}[Complete flat affine structures on $T^2$] \label{ex:completeas} \index{flat affine torus!complete} \index{flat affine manifold!complete} \index{$(X,G)$-structures!flat affine}
The holonomy image of development maps
for complete flat affine structures on the two-torus
is $\operatorname{Hom}_{c}({\bb Z}^2, {\rm Aff}(2))$, that is, it
consists of all injective homomorphisms with
properly discontinuous image.
As is shown in \cite[Section~ 4.4]{BauesG},
this is a locally closed
subset of $\operatorname{Hom}({\bb Z}^2, {\rm Aff}(2))$, defined by algebraic
equalities and inequalities, and it has two connected
components. Moreover, the conjugation action of the
group ${\rm Aff}(2)$ on $\operatorname{Hom}_{c}({\bb Z}^2, {\rm Aff}(2))$ is orbit
equivalent to its restriction to the subgroup $\operatorname{GL}(2,{\bb R})$.
The latter group acts freely and properly
on $\operatorname{Hom}_{c}({\bb Z}^2, {\rm Aff}(2))$ and the quotient
space is homeomorphic to ${\bb R}^2$. Since
$$ {\mathfrak D}_{c}(T^2, {\bb A}^2) \, \; \stackrel{hol}{\longrightarrow} \, \; \operatorname{Hom}_{c}({\bb Z}^2, {\rm Aff}(2))/{\rm Aff}(2) $$
is a homeomorphism, the
deformation space of complete flat affine structures
${\mathfrak D}_{c}(T^2, {\bb A}^2)$ is homeomorphic to ${\bb R}^2$.
As is shown in \cite{BauesG,BG}, natural
coordinates can be chosen such that the action of ${\mathrm{Map}}^{+}(T^2) = \operatorname{SL}(2,{\bb Z})$
on ${\mathfrak D}_{c}(T^2, {\bb A}^2)$ corresponds to the standard representation of $\operatorname{SL}(2,{\bb Z})$
on ${\bb R}^2$. \index{deformation space!of complete flat affine structures}
\index{deformation space!of flat affine structures}
\end{example}
\subsubsection{Deformation of lattices (A. Weil, 1962)} \label{sect:lattices} \index{lattice!deformation of} \index{deformation of lattice} \index{Weil A.}
Let $G$ be a simply connected Lie group and $\Gamma_{o} \leq G$ a cocompact lattice. We put $$ M_{o} = G/ \Gamma_{o} \; , $$ where $\Gamma_{o}$ acts by left-multiplication on the universal cover $\tilde M_{o} =G$ of $M_{o}$. Let $(X,G_{L}) = (G,G_{L})$ be the homogeneous
geometry which is defined by the action of $G$ on
itself by left-\-mul\-tipli\-cation. Since the action of $G$ on itself is proper, every $(G,G_{L})$-manifold is complete. Hence
$$ {\mathfrak D}(M_{o}, G) = {\mathfrak D}_{c}(M_{o}, G) \; $$
and the holonomy image of $ {\mathfrak D}(M_{o}, G) $ is contained in
the space of lattice homomorphisms
$$ \operatorname{Hom}_{L}(\Gamma_{o},G) = \{ \rho: \Gamma_{o} \hookrightarrow G \mid \rho(\Gamma_{o}) \text{ is a lattice in } G \} \; . $$
We call the space of conjugacy classes of lattice homomorphisms
$$ {\mathfrak D}_{L}(\Gamma_{o}, G) = \operatorname{Hom}_{L}(\Gamma_{o},G)/G $$
the deformation space of the lattice $\Gamma_{o}$. \index{deformation space!of lattice}
The holonomy map
\begin{equation} \label{eq:hol_lat} hol: {\mathfrak D}(M_{o}, G) \, \longrightarrow \, {\mathfrak D}_{L}(\Gamma_{o}, G)
\end{equation}
therefore locally embeds ${\mathfrak D}(M_{o}, G)$ as an open (and closed) subspace of the deformation space of $\Gamma_{o}$.
This is the original setup which is studied in the seminal paper \cite{Weil} by Andr\'e Weil. Fundamental results on the nature of the involved spaces $\operatorname{Hom}_{L}(\Gamma_{o},G)$ and ${\mathfrak D}_{L}(\Gamma_{o}, G)$ are obtained in the foundational
papers \cite{Weil, Weil2, Wang}, see also \cite{BeGe}.
For a recent contribution in the context of solvable Lie groups $G$, see \cite{BK};
the examples which are constructed in \cite[Section 2.3]{BK} show that there exist deformation spaces of the form
${\mathfrak D}(M_{o}, G)$, which have infinitely many connected components.
\paragraph{Rigidity of lattices and
action of the automorphism group of $G$} \index{rigidity!of lattice}
\index{lattice!rigidity of}
Note that the group ${\rm Aut}(G)$ of automorphisms of
$G$ has natural actions on the space of development maps
${\mathrm{Dev}}(M_{o}, G)$ and on $\operatorname{Hom}_{L}(\Gamma_{o},G)$.
Indeed, let $\phi \in {\rm Aut}(G)$, and $D: G \rightarrow G$ a development map for a $(G,G_{L})$-structure on $M_{o}$ with
holonomy $\rho \in \operatorname{Hom}_{L}(\Gamma_{o},G)$.
Then the composition $$ \phi \circ D: G \rightarrow G $$ is a
development map with holonomy $\phi \circ \rho$.
These actions descend to actions on $ {\mathfrak D}(M_{o}, G)$,
${\mathfrak D}_{L}(\Gamma_{o}, G)$ respectively, such that
\eqref{eq:hol_lat} becomes an equivariant map.
\begin{example}[Rigid lattices] \index{lattice!rigid}
A lattice $\Gamma_{o}$ is called \emph{rigid} in $G$
if ${\rm Aut}(G)$ acts transitively on ${\mathfrak D}_{L}(\Gamma_{o}, G)$.
For example, lattices in nilpotent Lie groups $G$, or
lattices in simple Lie groups $G$ not locally isomorphic to
$\operatorname{SL}(2,{\bb R})$ are rigid, see \cite{OnVi}. In these two cases
we then have identities
$$ {\mathfrak D}(M_{o}, G) \stackrel{\approx}{{\, \longrightarrow \, }} {\mathfrak D}_{L}(\Gamma_{o}, G) \approx {\rm Aut}(G) / {\rm Inn}(G) \; ,
$$
for any lattice $\Gamma_{o} \leq G$. Here, $ {\rm Inn}(G)$ denotes the
group of inner automorphisms of $G$.
\end{example}
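As a minimal illustration of these notions (an elementary special case, stated here for orientation), take $G = {\bb R}^2$ and $\Gamma_{o} = {\bb Z}^2$. A lattice homomorphism is determined by the images $v_{1}, v_{2}$ of the standard generators, which must be linearly independent, so that
$$ \operatorname{Hom}_{L}({\bb Z}^2, {\bb R}^2) \, \approx \, \{ (v_{1}, v_{2}) \mid v_{1}, v_{2} \text{ linearly independent} \} \, \approx \, \operatorname{GL}(2,{\bb R}) \; . $$
Since ${\bb R}^2$ is abelian, the conjugation action is trivial and ${\mathfrak D}_{L}({\bb Z}^2, {\bb R}^2) \approx \operatorname{GL}(2,{\bb R})$. The group ${\rm Aut}({\bb R}^2) = \operatorname{GL}(2,{\bb R})$ acts simply transitively on this deformation space, in accordance with the rigidity of lattices in nilpotent Lie groups.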
More generally, we call a lattice $\Gamma_{o}$ \emph{smoothly rigid}, if the holonomy map \eqref{eq:hol_lat} is a homeomorphism, \index{lattice!smoothly rigid}
that is, if ${\mathfrak D}(M_{o}, G) = {\mathfrak D}_{L}(\Gamma_{o}, G)$. For example, lattices in solvable Lie groups are smoothly rigid, by a theorem of Mostow; but there do exist solvable Lie groups which admit non-rigid lattices. See \cite{OnVi}, or \cite{BK} and the references therein for specific examples. \\
The deformation spaces of the form ${\mathfrak D}(M_{o}, G)$ play an important role in the analysis of general deformation spaces,
since many geometric structures arise from \'etale representations.
An illustrative example is given by the stratification of
\index{deformation space!stratification of}
the space of deformations of flat affine structure
on the two-torus which is studied in detail in
Section~ \ref{sect:def_homogeneous}.
\paragraph{The induced map of an \'etale representation}
\index{etale@\'etale representation} \index{representation!\'etale}
Let $G'$ be a simply connected Lie group and $\Gamma_{o} \leq G'$ a cocompact lattice. We put $M_{o} = G'/\Gamma_{o}$.
Let us assume for simplicity that $\Gamma_{o}$ is smoothly
rigid as well.
Now let $(X,G)$ be a homogeneous space and $\rho:G' \rightarrow G$ be an \'etale representation (see Definition \ref{def:etale}). Then the orbit map
which is associated to an
open orbit of $G'$ defines a subgeometry
$$ o: (G',G'_{L}) \rightarrow (X,G)
\, $$
which in turn gives rise to a map \eqref{eq:subgeometry1} of deformation
spaces
$$ {\mathfrak D}(M_{o}, G') \, {\, \longrightarrow \, } \, {\mathfrak D}(M_{o},X,G) \; , $$
that is, we obtain a map
$$ {\mathfrak D}_{L}(\Gamma_{o}, G') = \operatorname{Hom}_{L}(\Gamma_{o},G')/G' \rightarrow {\mathfrak D}(M_{o},X,G) \; . $$
This map factors over the action
of the normalizer ${\rm N}_{G}(\rho)$ of $\rho(G')$ in $G$,
that is, we have an induced map
$$ \operatorname{Hom}_{L}(\Gamma_{o},G')/ \, {\rm N}_{G}(\rho) \rightarrow {\mathfrak D}(M_{o},X,G) \; . $$
We remark that, if $\Gamma_{o}$ is rigid in $G'$ then
$$ \operatorname{Hom}_{L}(\Gamma_{o},G')/ \, {\rm N}_{G}(\rho) = {\rm Aut}(G') /N \; ,$$
where $N \leq {\rm Aut}(G')$ denotes the image of
${\rm N}_{G}(\rho)$ in ${\rm Aut}(G')$.
\subsubsection{Dynamics of the $G$-action on $\operatorname{Hom}(\Gamma,G)$}
In Examples \ref{ex:Teichm} and \ref{ex:completeas} above, the map $hol$ is a homeomorphism, and the corresponding deformation spaces are Hausdorff. These properties hold in particular if the holonomy image in $\operatorname{Hom}(\Gamma,G)/G$ is obtained as a quotient by a \emph{proper} group action. In fact, if $G$ acts properly (and freely) on the image of $\mathsf{hol}$, then, \index{slice theorem}
by the slice theorem (cf.\ \cite{Palais}), the projection map
$\operatorname{Hom}(\Gamma,G) \rightarrow \operatorname{Hom}(\Gamma,G)/G$ admits a section near every
holonomy homomorphism. It then follows from Theorem \ref{thm:Deformations} that
$hol: {\mathfrak D}(M) \rightarrow \operatorname{Hom}(\Gamma,G)/G$ is a local homeomorphism.
\begin{example}[Subvariety of stable points]
If $G$ is a reductive \index{stable point of reductive group action}
linear algebraic group, then, by a general fact on representations
of such groups, there exists a Zariski-open subset of
stable points in $\operatorname{Hom}(\Gamma,G)$, where $G$ acts properly.
Recall that, for any representation of $G$ on a vector space, or
any action of $G$ on an affine variety $V$, a point $x \in V$ is called \emph{stable}
if the orbit $Gx$ is closed and $\dim G x = \dim G$.
The set of stable points may be empty though. For the
action of $G$ on $\operatorname{Hom}(\Gamma, G)$ it is non-empty if there are
points $\rho \in \operatorname{Hom}(\Gamma,G)$ such that $\rho(\Gamma)$
is sufficiently dense in $G$. In the specific context where $\Gamma$ is abelian
(or solvable), $\operatorname{Hom}(\Gamma,G)$ has no stable points
(as follows from \cite[Theorem 1.1]{Millson}).
See \cite{Millson, Goldman}
for further discussion of these facts and
for some applications.
\end{example}
One cannot expect ${\mathfrak D}(M)$ to be a Hausdorff space, in general.
In fact, the image of $hol$ in $\operatorname{Hom}(\pi_1(M) , G)/G$ may
contain non-closed points. In this situation
also ${\mathfrak D}(M)$ has non-closed points. The following example
is due to Bill Goldman:
\index{deformation space!non-closed point of}
\begin{example}[Non-closed points in ${\mathfrak D}(T^2, {\bb A}^2)$]
\label{ex:defissing}
Let
$$ A_{\epsilon} = \begin{pmatrix} \lambda & \epsilon \\
0 & \lambda \end{pmatrix} \; , \; \text{where $\lambda>1$.} $$
Then
$M_{\epsilon} = \langle A_{\epsilon} \rangle \backslash \, (\bbA^2 \! - 0)$
is a flat affine two-torus,
which has an
infinite cyclic holonomy group generated by
$A_{\epsilon}$ (see also Example \ref{ex:expholonomy}).
Let $\rho_{\epsilon}$ denote a corresponding
holonomy homomorphism for $M_{\epsilon}$.
Since the $A_{\epsilon}$, $\epsilon \neq 0$,
are all conjugate elements of $\operatorname{GL}(2,{\bb R})$,
the closure of the $\operatorname{GL}(2,{\bb R})$-orbit of $\rho_{1} \in
\operatorname{Hom}({\bb Z}^2,\operatorname{GL}(2,{\bb R}))$
contains the holonomy homomorphism $\rho_{0}$.
Therefore, the orbit $[\rho_{1}]$ is not closed in
$\operatorname{Hom}({\bb Z}^2,{\rm Aff}(2))/{\rm Aff}(2)$. By Corollary \ref{cor:bbaotop},
$M_{1}$ defines a non-closed point in the deformation space.
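Explicitly (an elementary computation, spelled out here for illustration), conjugation by the diagonal matrix $\operatorname{diag}(\epsilon, 1)$ gives
$$ \begin{pmatrix} \epsilon & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}
\begin{pmatrix} \epsilon^{-1} & 0 \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} \lambda & \epsilon \\ 0 & \lambda \end{pmatrix}
= A_{\epsilon} \; , $$
so $\rho_{\epsilon}$ lies in the $\operatorname{GL}(2,{\bb R})$-orbit of $\rho_{1}$ for every $\epsilon \neq 0$, while $A_{\epsilon} \to A_{0} = \lambda \, \mathrm{id}$ as $\epsilon \to 0$.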
\end{example}
Observe that $\rho_{0}$ is the holonomy of the Hopf torus ${\mathcal H}_{\lambda}$. \index{Hopf torus}
By Theorem \ref{thm:Deformations} there exists
a corresponding family of development maps
$D_{\epsilon}$
with holonomy $\rho_{\epsilon}$
which converges to the development
map of the Hopf torus $M_{0}= {\mathcal H}_{\lambda}$.
We observe that these development maps belong to
affine structures which are isotopically equivalent to the tori $M_{\epsilon}$. Hence, the closure of $M_{1}$ in
the deformation space contains the Hopf torus
${\mathcal H}_{\lambda}$.
(To see explicitly how the development maps for the
tori $M_{\epsilon}$ converge to
the Hopf torus in the deformation space, we may use
the constructions in Section~ \ref{sect:tori_bbao} in this chapter.
In fact, we construct
$M_{\epsilon}$ as
a quotient space $M_{\epsilon} = {\mathcal{T}}_{A_{\epsilon}, {\rm id}, 2}$ of $\widetilde{\bbA^2 \! - 0}$, as in Example
\ref{ex:gentorusnh}. Then we
deform the development $D= D_{0}$ of $M_{0}$
as in the proof of Theorem \ref{thm:Deformations} to
obtain a sequence of development maps
$D_{\epsilon}: \, \widetilde{\bbA^2 \! - 0} \, \rightarrow \, {\bb A}^2$
for ${\mathcal{T}}_{A_{\epsilon}, {\rm id}, 2}$ which converges to $D_{0}$.)
\subsubsection{Dynamics of the ${\rm Diff}_{0}(M)$-action on $(X,G)$-structures}
In favorable cases, the topology on ${\mathfrak D}(M)$ may be determined by constructing
slices for the action of ${\rm Diff}_{0}(M)$ on ${\mathfrak S}(M,X,G)$. The study of the action of
${\rm Diff}_{0}(M)$ on ${\mathfrak S}(M,X,G)$ may then be used to deduce information on the
topology (diffeomorphism groups carry the
$C^\infty$-topology) of ${\rm Diff}_{0}(M)$, or, vice versa, on the
topology of ${\mathfrak S}(M,X,G)$. The theory of slices for actions of diffeomorphism
groups on spaces of Riemannian metrics was developed by Palais and
Ebin \cite{Ebin}. \index{slices for group of diffeomorphisms}
\index{diffeomorphism group!of manifold} \index{manifold!diffeomorphism group of}
Recall that a continuous action of ${\rm Diff}(M)$ on a space $\mathcal S$ is called
proper if the map ${\rm Diff}(M) \times \mathcal{S} \rightarrow \mathcal{S} \times \mathcal{S}$,
$(g,s) \mapsto ( g \cdot s, s)$ is proper. If the action is proper, the quotient
space is Hausdorff (cf.\ \cite[III, Section~ 4.2]{BourbakiTop}).
\index{proper group action}
\index{diffeomorphism group!proper action of}
\begin{example}[${\rm Diff}(M_{g})$ acts properly]
The group of diffeomorphisms of a closed surface ${\rm Diff}(M_{g})$
acts properly on the space of conformal structures \index{conformal structure}
${\mathfrak S}(M_{g}, {\mathbb H}_{2}, \operatorname{PSL}(2,{\bb R}))$, if $g > 1$.
\index{diffeomorphism group!of surface} \index{surface!diffeomorphism group of}
In particular, the identity component
${\rm Diff}_{0}(M_{g})$ acts properly and freely.
Moreover, the projection map $${\mathfrak S}(M_{g}, {\mathbb H}_{2}, \operatorname{PSL}(2,{\bb R})) \rightarrow {\rm Teich}_{g}$$ is a trivial
${\rm Diff}_{0}(M_{g})$-principal bundle. Since
${\mathfrak S}(M_{g}, {\mathbb H}_{2}, \operatorname{PSL}(2,{\bb R}))$ and
${\rm Teich}_{g}$ are contractible, this implies at once
that \emph{the group ${\rm Diff}_{0}(M)$
is contractible}. These results were shown in \cite[Section~5 D]{EE}.
\index{diffeomorphism group!contractible}
\end{example}
Similar results hold also for the space ${\rm Teich}_{1}$ of conformal structures
(flat Riemannian metrics) on the two-torus. In fact, ${\rm Diff}(T^2)$ acts
properly on the space ${\mathfrak S}(T^2, {\bb R}^2, E(2))$,
and the moduli space of such structures is a Hausdorff space.
\index{diffeomorphism group!of two-torus} \index{torus!diffeomorphism group of}
However, the action of
${\rm Diff}_{0}(T^2)$ is not free,
since every flat Riemannian
structure on $T^2$ has
$S^1\times S^1$ acting as a group of isometries.
In this situation, we may replace
${\rm Diff}(T^2)$ with the subgroup ${\rm Diff}(T^2,x_{0})$.
Indeed, $S^1 \times S^1$ is a deformation
retract of ${\rm Diff}_{0}(T^2)$ and the group
${\rm Diff}_{0}(T^2,x_{0})$ is contractible (cf.\ \cite{EE}).\\
In general, the action of ${\rm Diff}_{1}(M)$ on a space of structures ${\mathfrak S}(M,X,G)$
need be neither free nor proper, as we show in the following examples.
\paragraph{Action of ${\rm Diff}(T^2)$ on the space of flat affine structures}
\index{diffeomorphism group!action on flat affine structures}
In the case of flat affine structures on the two-torus, the action
of ${\rm Diff}_{0}(T^2)$
on the set of all affine structures
${\mathfrak S}(T^2,{\bb A}^2)$ is not proper, for otherwise ${\mathfrak D}(T^2, {\bb A}^2)$ would be
a Hausdorff space. But, in fact, as we show in
Example \ref{ex:defissing}, ${\mathfrak D}(T^2, {\bb A}^2)$ contains non-closed points.
An interesting in-between
case arises when restricting to the subspace ${\mathfrak S}_{c}(T^2,{\bb A}^2)$ of \emph{complete} flat affine structures. This case bears some resemblance to the case of conformal structures, although here the action of ${\rm Diff}(T^2)$ on the set of structures ${\mathfrak S}_{c}(T^2,{\bb A}^2)$ is not proper. However, the action of the subgroup ${\rm Diff}_{0}(T^2)$ on ${\mathfrak S}_{c}(T^2,{\bb A}^2)$ \emph{is} proper.
\begin{example}[Action of ${\rm Diff}(T^2)$ on ${\mathfrak S}_{c}(T^2,{\bb A}^2)$]
Observe first that every complete flat affine structure on $T^2$
is homogeneous and the identity component of its automorphism
group acts simply transitively. This follows from the classification
given in Theorem \ref{thm:classification}.
Therefore, like in the case of Euclidean structures,
${\rm Diff}_{0}(T^2, x_{0})$ acts freely on ${\mathfrak S}_{c}(T^2, {\bb A}^2)$
and $$ {\mathfrak D}_{c}(T^2, {\bb A}^2) =
{\mathfrak S}_{c}(T^2, {\bb A}^2) / {\rm Diff}_{0}(T^2, x_{0}) \; . $$
Since $\mathsf{hol}: {\rm Diff}_{0}(M,x_{0}) \backslash \, {\mathrm{Dev}}_{c}(T^2, {\bb A}^2) \rightarrow \operatorname{Hom}_{c}({\bb Z}^2,{\rm Aff}(2))$ locally admits continuous equivariant sections (see the discussion before Theorem \ref{thm:cdefspace}), it follows that the map
$$ {\mathfrak S}_{c}(T^2, {\bb A}^2) \rightarrow {\mathfrak D}_{c}(T^2, {\bb A}^2)$$ is a locally trivial principal bundle
for ${\rm Diff}_{0}(T^2, x_{0})$. (It is also a universal bundle, since ${\mathfrak S}_{c}(T^2, {\bb A}^2)$ is contractible, as we see in Proposition \ref{prop:ss_iscon} below.) This already implies
that $ {\rm Diff}_{0}(T^2, x_{0})$ acts properly on
${\mathfrak S}_{c}(T^2, {\bb A}^2)$. On the other hand, ${\rm Diff}(T^2, x_{0})$ does \emph{not} act
properly, since the action of the (extended) mapping class group
of the two-torus
$$ {\rm Diff}(T^2, x_{0}) /{\rm Diff}_{0}(T^2, x_{0}) \cong \operatorname{GL}(2,{\bb Z})$$
on the deformation space ${\mathfrak D}_{c}(T^2, {\bb A}^2) = {\bb R}^2$
is not properly discontinuous. (See also Example \ref{ex:completeas}).
\end{example}
In the previous example a slightly stronger result holds.
Indeed, by the proof of \cite[Corollary 4.9]{BauesG} the projection map
$$ \operatorname{Hom}_{c}({\bb Z}^2, {\rm Aff}(2)) {\, \longrightarrow \, } {\mathfrak D}_{c}(T^2, {\bb A}^2)$$
admits a global (continuous) section.
Since the space ${\mathfrak D}_{c}(T^2, {\bb A}^2) = {\bb R}^2$ is contractible, we may use the covering homotopy theorem to conclude that there exists a continuous section
$s: {\mathfrak D}_{c}(T^2, {\bb A}^2) \rightarrow {\mathfrak S}(T^2, {\bb A}^2)$.
This shows that the
above principal bundle is trivial. (For an explicit construction of such a section, refer to Section~ \ref{sect:devsection} of this chapter.)
\index{deformation space!of complete flat affine structures}
A typical application is:
\begin{proposition} \label{prop:ss_iscon}
The space ${\mathfrak S}_{c}(T^2, {\bb A}^2)$ of complete flat affine structures on the two-torus is contractible.
\end{proposition}
\begin{proof} The group $ {\rm Diff}_{0}(T^2, x_{0})$ acts freely on
the set of complete flat affine structures.
By the above, invariant sections exist for this action of $ {\rm Diff}_{0}(T^2, x_{0})$. It follows that there is a homeomorphism
$$ {\mathfrak S}_{c}(T^2, {\bb A}^2) \approx {\rm Diff}_{0}(T^2, x_{0}) \times {\mathfrak D}_{c}(T^2, {\bb A}^2) \; .$$
In particular, since
${\rm Diff}_{0}(T^2,x_{0})$ and
$ {\mathfrak D}_{c}(T^2, {\bb A}^2)$ are contractible, the space ${\mathfrak S}_{c}(T^2, {\bb A}^2)$ is contractible.
\end{proof}
See Section~ \ref{sect:affine_conns} for a description of ${\mathfrak S}(T^2, {\bb A}^2)$ and
${\mathfrak S}_{c}(T^2, {\bb A}^2)$ as subsets of the affine space of torsion free
flat affine connections of $T^2$.
\subsection{Spaces of marked structures} \label{sect:markedstructures}
\index{$(X,G)$-structures!marked}
Let $M_{0}$ be a fixed smooth manifold.
A diffeomorphism $f:M_{0} \rightarrow M$, where $M$ is an $(X,G)$-manifold, is called a \emph{marking} of $M$. Two marked $(X,G)$-manifolds $(f,M)$ and $(f',M')$ are called equivalent if there exists
an $(X,G)$-equivalence $g: M' \rightarrow M$ such that
$g \circ f'$ is \emph{homotopic} to $f$. Let ${\mathfrak S}M(M_{0}, X,G)$
denote the set of classes of marked $(X,G)$-manifolds.
By composing with $f$, every
local $(X,G)$-chart for the marked manifold $(f,M)$ extends to
the development map of an $(X,G)$-structure on
$M_{0}$. This correspondence descends to a bijection
of ${\mathfrak S}M(M_{0}, X,G)$ with the deformation space
${\mathfrak D}(M_{0},X,G)$.
We can thus topologize the space of classes
of markings with the topology induced from the
$C^\infty$-topology on development maps.
\begin{example}[Teichm\"uller metric on ${\rm T}eich_{g}$]
\index{Teichm\"uller metric} \index{Teichm\"uller space}
Classically, Teichm\"uller space ${\rm T}eich_{g}$ is represented
as a space of marked conformal structures on Riemann surfaces.
Let $S_{0}$ be a closed Riemann surface of genus $g$.
A marking of a Riemann surface $R$ is an orientation preserving
quasi-conformal homeomorphism $f: S_{0} \rightarrow R$.
Two marked surfaces $(f,R)$ and $(f',R')$ are equivalent
if there exists a biholomorphic map $h: R' \rightarrow R$ such that
$h \circ f'$ is \emph{homotopic} to $f$. Teichm\"uller space
${\rm Teich}_{g}$ is the set of classes of marked surfaces.
The infimum of the dilatations $K(\ell)$, where $\ell: R \rightarrow R'$ is
a quasiconformal map homotopic to $f' \circ f^{-1}$, defines
the \emph{Teichm\"uller distance} of $(f,R)$ and
$(f',R')$ in ${\rm Teich}_{g}$:
$$ d_{{\rm Teich}}([f,R], [f',R']) = \inf \log K(\ell) \; . $$
With the metric topology induced by $ d_{{\rm Teich}}$,
${\rm Teich}_{g}$ is homeomorphic to
the Fricke space
$$ \mathfrak{F}(S_{0}) = \operatorname{Hom}_{c}(\Gamma_{g}, \operatorname{PSL}(2,{\bb R}))/ \operatorname{PSL}(2,{\bb R}) , $$
as defined in Example \ref{ex:Teichm}. See \cite{Abikoff} and
\cite{PT} in Vol.\ I of this handbook for
reference on this material. Therefore, the topology defined
by Teichm\"uller's metric coincides with the topology on ${\rm T}eich_{g}$, which is defined by
the convergence of development maps for hyperbolic structures.
\end{example}
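For reference, we recall the dilatation entering this formula (a standard definition, included here for the reader's convenience): for a quasiconformal map $\ell$ with complex derivatives $\ell_{z}, \ell_{\bar z}$,
$$ K(\ell) \, = \, \sup \frac{|\ell_{z}| + |\ell_{\bar z}|}{|\ell_{z}| - |\ell_{\bar z}|} \, \geq \, 1 \; , $$
with $K(\ell) = 1$ if and only if $\ell$ is conformal. In particular, the distance $d_{{\rm Teich}}$ vanishes precisely on equivalent marked surfaces.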
We may also consider various refined versions of
classes of marked $(X,G) $-manifolds and corresponding
deformation spaces.
\paragraph{$(X,G)$-manifolds with basepoint}
Fix a basepoint $x_{0} \in X$ and write $X= G/H$,
where $H = G_{x_{0}}$.
The space ${\mathfrak S}M_{\mathsf{p}}(M_{0}, X,G)$ of
\emph{basepointed} marked structures is defined
as follows. Let $\mathsf{p}: (\tilde M_{0}, \tilde m_{0}) \rightarrow (M_{0}, m_{0})$ be a fixed universal cover. A
marking of $(M,m)$ is a based diffeomorphism
$f:(M_{0},m_{0}) \rightarrow (M,m)$. Two marked basepointed
$(X,G)$-manifolds are equivalent if there exists an $(X,G)$-\-equivalence $g: (M',m') \rightarrow (M,m)$ such that
$g \circ f'$ is homotopic to $f$ by a basepoint preserving homotopy.
Let $ {\mathrm{Dev}}_{\mathsf{p}}(M_{0})$ be the
set of basepoint preserving development maps. For every
based local
$(X,G)$ chart $\varphi$, defined near $m_{0}$, there exists a unique
development map $D$ for $M_{0}$, which extends $\varphi \circ f \circ \mathsf{p}$ from a neighborhood of $\tilde m_{0}$. This correspondence induces a homeomorphism
$$ {\mathfrak S}M_{\mathsf{p}}(M_{0}, X,G) \, \stackrel{\approx}{{\, \longrightarrow \, }} \,
{\rm Diff}_{1}(M_{0}, m_{0}) \backslash {\mathrm{Dev}}_{\mathsf{p}}(M_{0})/H \; . $$
\begin{example}[Homogeneous $(X,G)$-structures]
\label{ex:hom_structures} \index{$(X,G)$-structures!homogeneous}
Note that the natural (forgetful) map
${\mathfrak S}M_{\mathsf{p}}(M, X,G) \rightarrow {\mathfrak S}M(M, X,G)$ is surjective, but it is usually not
injective. In fact, let $D$ be a development map.
Then for the classes $G \circ D \circ {\rm Diff}_{1}(M,m)$ and
$G \circ D \circ {\rm Diff}_{1}(M)$ to
coincide it is necessary that
the group of $(X,G)$-equivalences acts transitively on $M$.
That is, $D$ is the development map of a homogeneous
$(X,G)$-structure. We let ${\mathrm{Dev}}_{h}(M,X,G)$ denote the set of development
maps of homogeneous $(X,G)$-structures.
\end{example}
Note further that, in general,
${\rm Diff}_{1}(M_{0},m_{0})$ is a proper subgroup of
all basepoint preserving diffeomorphisms
which are \emph{freely} homotopic to the identity. The difference is obtained by the natural action of $\pi_{1}(M_{0}, m_{0})$
on based homotopy classes of maps.
However, if $\pi_{1}(M_{0}, m_{0})$ is abelian,
the inclusion is an isomorphism.
\begin{example} \label{ex:def_homaff}
\index{flat affine torus!homogeneous}
\index{$(X,G)$-structures!flat affine}
Let ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ be the deformation
space of homogeneous flat affine structures on the two-torus. Then
$${\mathfrak D}_{h}(T^2,{\bb A}^2) = {\rm Diff}_{1}(T^2, x_{0}) \backslash
{\mathrm{Dev}}_{h}(T^2)/{\rm Aff}(2) . $$ In particular, since every complete
affine two-torus
is homogeneous, ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ contains
the subspace of complete flat affine structures
$${\mathfrak D}_{c}(T^2,{\bb A}^2) = {\rm Diff}_{1}(T^2, x_{0}) \backslash
{\mathrm{Dev}}_{c}(T^2)/{\rm Aff}(2) \; .$$
\end{example}
\subsubsection{Framed $(X,G)$-manifolds} \label{sec:framedXG}
The holonomy of a marked
$(X,G)$-\-mani\-fold is a $G$-conjugacy class
of homomorphisms.
To get rid of the dependence on the conjugacy class,
one introduces framed structures. The holonomy theorem
implies that the deformation space of framed structures is
a locally compact Hausdorff space. We shall discuss only the particular simple case of $({\bb A}^n,{\rm Aff}(n))$-manifolds.
\begin{example} \label{ex:fat}
Let $m \in M$.
A frame ${\mathcal F}_{m}$ for a flat affine structure
on $M$ is a choice of basis of the tangent space $T_{m} M$.
The pair $(M,{\mathcal F}_{m})$ is called a \emph{framed flat affine manifold}.
Fix a frame ${\mathcal F}_{m_{0}}$ for $M_{0}$, as well, and call a frame preserving diffeomorphism $(M_{0}, {\mathcal F}_{m_{0}}) \rightarrow ( M, {\mathcal F}_{m})$ a marking of $( M, {\mathcal F}_{m})$.
Two marked framed flat affine manifolds $(f,M, {\mathcal F}_{m})$ and
$(f',M', {\mathcal F}'_{m'})$ are called equivalent if there exists
a frame preserving affine diffeomorphism
$g: (M',{\mathcal F}'_{m'}) \rightarrow (M,{\mathcal F}_{m})$ such that $g \circ f'$ is based homotopic to $f$.
The set of classes is denoted ${\mathfrak S}Mf(M_{0}, {\bb A}^n)$.
\end{example}
Let us fix a base frame
${\mathcal E}_{x_{0}}$ on affine space ${\bb A}^n$. Given a marked framed flat affine manifold $(f,M, {\mathcal F}_{m })$, there exists a unique frame preserving affine chart for $M$, which is defined near $m$.
This chart lifts to a unique development
map $D: (\tilde M_{0}, \tilde {\mathcal F}_{\tilde m_{0}}) \rightarrow ({\bb A}^n, {\mathcal E}_{x_{0}})$.
The correspondence descends to a bijection
$$ {\mathfrak S}Mf(M_{0}, {\bb A}^n) =
{\rm Diff}_{1,f}(M_{0}, {\mathcal F}_{m_{0}}) \backslash {\mathrm{Dev}}_{f}(M_{0}) \; ,$$ where ${\rm Diff}_{1,f}(M_{0}, {\mathcal F}_{m_{0}})$ denotes the group of frame preserving
diffeomorphisms which are based homotopic to the identity.
By the deformation theorem, there is a map
$$ \mathsf{hol}: {\mathfrak S}Mf(M_{0},{\bb A}^n) \rightarrow \operatorname{Hom}(\pi_{1}(M_{0},m_{0}), {\rm Aff}(n)) \; $$
which is continuous and which is a local
homeomorphism onto its image.
\emph{This shows that ${\mathfrak S}Mf(M_{0},{\bb A}^n)$ is a locally compact Hausdorff space}.\\
As is apparent from
Example \ref{ex:framepreserving}, the natural map
${\mathfrak S}Mf(M_{0},{\bb A}^n) \rightarrow {\mathfrak S}M(M_{0},{\bb A}^n)$ is
surjective, and it factors over ${\mathfrak S}M_{\mathsf{p}}(M_{0},{\bb A}^n)$.
The following tower of maps thus sheds some light on
the topology of the deformation space of flat affine structures:
\begin{align} \label{eq:deftower}
\xymatrix@1{
{\mathfrak S}Mf(M_{0},{\bb A}^n) \ar[d]
\\
{\mathfrak S}M_{\mathsf{p}}(M_{0},{\bb A}^n) \ar[d] \\
{\mathfrak D}(M_{0},{\bb A}^n) = {\mathfrak S}M(M_{0},{\bb A}^n) \; .}
\end{align}
Note that the group $\operatorname{GL}^+(n,{\bb R}) = {\rm Diff}_{1}(M_{0},m_{0})/
{\rm Diff}_{1,f}(M_{0}, {\mathcal F}_{m_{0}})$ acts on ${\mathfrak S}Mf(M_{0},{\bb A}^n)$,
such that the first projection map in the tower is the quotient map for this action. In particular, \emph{${\mathfrak S}M_{\mathsf{p}}(M_{0},{\bb A}^n)$
arises as the quotient space of a locally compact Hausdorff
space by a reductive group action}.
(Note also that the holonomy map is equivariant with respect to
the conjugation action of $\operatorname{GL}^+(n,{\bb R})$
on holonomy homomorphisms.)
\begin{example}[Homogeneous framed flat affine structures on the torus] \label{ex:def_homfaff}
As we have seen already in Example \ref{ex:def_homaff} above, the lower map
in the tower \eqref{eq:deftower} is a bijection on homogeneous flat affine structures, that is, ${\mathfrak D}_{h}(T^2,{\bb A}^2) = {\mathfrak S}M_{\mathsf{p},h}(T^2,{\bb A}^2)$.
The deformation space of homogeneous structures
is thus obtained as a quotient by an action of
$\operatorname{GL}^+(2,{\bb R})$:
$$
{\mathfrak D}_{h}(T^2,{\bb A}^2) = {{\mathfrak S}Mf}\ _{\! \! \!,h}(T^2,{\bb A}^2) /
\operatorname{GL}^+(2,{\bb R}) \; . $$
We shall further
study this quotient space in Section~\ref{sect:homog_structs}.
Observe that the action of
$\operatorname{GL}^+(2,{\bb R})$ is not free,
since, in fact, the Hopf tori have non-trivial stabilizers.
On the other hand, as follows from the discussion in
Example \ref{ex:completeas}, $\operatorname{GL}^+(2,{\bb R})$ acts freely on the subspace of complete
affine structures, and the map
${{\mathfrak S}Mf}_{,c}(T^2,{\bb A}^2) \rightarrow {\mathfrak D}_{c}(T^2,{\bb A}^2)$
is a trivial $\operatorname{GL}^+(2,{\bb R})$-principal bundle.
\end{example}
\section{Construction of flat affine
surfaces} \index{surface!flat affine}
A flat affine manifold is called \emph{homogeneous} if its group of
affine automorphisms acts transitively.
\index{flat affine manifold!homogeneous} \index{flat affine torus!homogeneous}
Homogeneous flat affine manifolds may be constructed from
\'etale affine representations
of two-dimensional Lie groups in
a straightforward way. Compact examples
can be derived from \'etale affine representations
of the two-dimensional group manifold ${\bb R}^2$
by taking quotients with a discrete uniform subgroup.
Every flat affine surface constructed in this way is then
a \emph{homogeneous} flat affine torus.
In fact, an easy argument (see Section~\ref{sect:classification})
shows that \emph{all} homogeneous flat affine surfaces are obtained in this way.
Therefore, all homogeneous flat affine tori are
affinely diffeomorphic to quotients of abelian
Lie groups with left-invariant flat affine structure.
This also relates homogeneous
affine structures on tori to two-dimensional associative algebras,
a point of view which will be discussed in Section~\ref{sect:thedefspace}.
In Section~\ref{sect:etale_affine}, we describe
the classification of abelian \'etale affine representations on ${\bb A}^2$.
By the above remarks, this amounts to a rough classification of
homogeneous flat affine tori.
A genuinely more geometric approach is to construct flat affine
surfaces by gluing patches of affine space along their boundaries.
The affine version of Poincar\'e's
fundamental polygon theorem allows
one to construct flat affine tori by gluing affine
quadrilaterals along their sides.
The flat affine two-tori thus obtained depend on
the shape of the quadrilateral and also on the
particular affine transformations which are used in the gluing process.
This, in turn, gives natural
coordinates for an open subset in the deformation space
of flat affine structures on the two-torus.
As it turns out, the flat affine tori which are obtained by
gluing an affine quadrilateral along its sides are all homogeneous,
and they form a dense subset in the deformation space of homogeneous
flat affine structures on the torus. This material is explained in Section~\ref{sect:agluing}. \index{torus!flat affine}
\index{flat affine torus}
To construct all flat
affine two-tori, one must glue more general objects.
In the following Sections~\ref{sect:tori_bbao} and \ref{sect:acylinders},
we discuss in detail a construction method for flat affine
tori with development image
${\bb A}o$, which builds on the idea of cutting flat affine surfaces into simple building blocks.
Here flat affine tori
are constructed by gluing several copies of
half annuli in ${\bb A}o$, or cutting the surface into
affine cylinders. Equivalently, these tori are obtained
by gluing certain strips which are situated in the universal covering space of ${\bb A}o$, and which project to
annuli in ${\bb A}o$. In this way also \emph{non-homogeneous} examples
of flat affine tori arise.
As follows from the main classification theorem, which will be proved in
Section~\ref{sect:classification},
the above construction methods exhaust all flat affine two-tori.
\subsection{Quotients of flat affine Lie groups}
\label{sect:etale_affine}
If a Lie group $G$ has an \'etale action
(cf.\ Definition \ref{def:etale}) on affine space
we call it an \'etale affine \index{etale@\'etale representation!affine}
Lie group. An \'etale affine Lie group carries a natural
left invariant flat affine structure, and, thus, for every discrete
subgroup $\Gamma \leq G$, the quotient space $\Gamma \backslash G$ inherits the structure of a flat affine manifold. If $G$ is
abelian the resulting flat affine structure is homogeneous. \\
The following result will be established in the course of the proof
of the classification theorem (see Section~\ref{sect:univcovs}):
\index{flat affine torus!homogeneous}
\begin{proposition} Every homogeneous flat affine two-torus is
affinely diffeomorphic to a quotient of an abelian \'etale affine Lie group.
\end{proposition}
\noindent Up to affine conjugacy there are six types $\mathsf{T},
\mathsf{D}, \mathsf{C_{1}}, \mathsf{C_{2}}, \mathsf{B}, \mathsf{A}$ of
\'etale \emph{abelian} subgroups in the affine group ${\rm Aff}(2)$. Both
the plane and the halfplane admit two distinct simply transitive
abelian affine actions $\mathsf{T},
\mathsf{D}$, and $\mathsf{C_{1}}, \mathsf{C_{2}}$ respectively.
\begin{example}[Affine automorphisms of development images]
\label{ex:domains}
\hspace{1cm}
\begin{enumerate}
\item {\bf (\emph{The plane ${\bb A}^2$})} The groups
$$ \mathsf{T} = \left\{
\begin{pmatrix} 1 & 0 & u \\ 0 & 1 & v \\
0 & 0 & 1
\end{pmatrix}
\right\} \;
\text{ and } \; \mathsf{D} =
\left\{
\begin{pmatrix} 1 & v & u+ \frac {1}{2} v^2 \\ 0 & 1 & v \\
0 & 0 & 1
\end{pmatrix}
\right\} \;
$$
are abelian groups of affine transformations which are simply transitive on the plane.
\item {\bf (\emph{The half space ${\cal H}$})}
Let ${\cal H}$ be the
half space $y >0$. Then
$$ {\rm Aff}({\cal H}) =
\left\{ \begin{pmatrix} \alpha & z & v \\ 0 & \beta & 0 \\
0 & 0 & 1 \end{pmatrix} {\, \Big\vert \;} \alpha \neq 0 ,\,\beta > 0\right\} \; $$
is its affine automorphism group.
The subgroups
$$ \mathsf{C}_{1} = \left\{ \begin{pmatrix} \exp(t) & z \\ 0 & \exp(t) \\
\end{pmatrix} \right\} \subset \operatorname{GL}(2,{\bb R}) $$
and
$$ \mathsf{C}_{2} = \left\{ \begin{pmatrix} 1 & 0 & v \\ 0 & \exp(t) & 0 \\
0 & 0 & 1 \end{pmatrix} \right\} \subset {\rm Aff}(2)$$
are simply transitive abelian groups of affine transformations on ${\cal H}$.
The half spaces $y>0$ and $y <0$ are open orbits for the groups
$\mathsf{C}_{i}$.
\item {\bf (\emph{The sector ${\cal Q}$})} Let ${\cal Q}$ denote the upper right
open quadrant. Then
$$ \mathsf{B} = {\rm Aff}(\mathcal Q)^0 = \left\{
\begin{pmatrix}
a & 0 \\ 0 & b
\end{pmatrix} {\, \Big\vert \;} a>0, b> 0 \right\} \; \subseteq \operatorname{GL}(2,{\bb R})
$$
$$
is an abelian, simply transitive \emph{linear} group of transformations of ${\cal Q}$.
\item {\bf (\emph{The punctured plane ${\bb A}o$}) }
$$ \mathsf{A} = \left\{ \exp(t) \begin{pmatrix} \cos \theta & \sin \theta \\
- \sin \theta & \cos \theta \\
\end{pmatrix} \right\} \; \subseteq \operatorname{GL}(2,{\bb R}) = {\rm Aff}({\bb A}o) $$
is an abelian \emph{linear} group, which is simply transitive on
${\bb A}o$. \end{enumerate}
\end{example}
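The group laws claimed in the example can be checked mechanically. The following sketch (our own verification, not part of the text) uses sympy to confirm that $\mathsf{T}$ and $\mathsf{D}$ are abelian and closed under composition, with parameters composing as $(u_1,v_1)(u_2,v_2) = (u_1+u_2,\, v_1+v_2)$:

```python
# Sketch (our own verification): check with sympy that the groups T and D
# from the example are abelian and closed under composition, so each is a
# two-parameter abelian group of affine transformations of the plane.
import sympy as sp

u1, v1, u2, v2 = sp.symbols('u1 v1 u2 v2')

def T(u, v):
    # element of the translation group T, as a 3x3 affine matrix
    return sp.Matrix([[1, 0, u], [0, 1, v], [0, 0, 1]])

def D(u, v):
    # element of the group D
    return sp.Matrix([[1, v, u + v**2/2], [0, 1, v], [0, 0, 1]])

for G in (T, D):
    g1, g2 = G(u1, v1), G(u2, v2)
    # abelian: the commutator difference vanishes identically
    assert (g1*g2 - g2*g1).expand() == sp.zeros(3, 3)
    # closed: the product is again G(u1 + u2, v1 + v2)
    assert (g1*g2 - G(u1 + u2, v1 + v2)).expand() == sp.zeros(3, 3)
print("T and D are abelian and closed under composition")
```

Simple transitivity then follows from the observation that the orbit map of the origin is a bijection in the parameters $(u,v)$.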
Let $\mathsf{G} \leq {\rm Aff}(2)$ be one of the above
groups and $\Gamma \leq \mathsf{G}$ a lattice. Then $\Gamma$ acts properly discontinuously and with compact quotient on every open orbit $U \subset {\bb A}^2$ of $\mathsf{G}$ and the quotient
space $$ M = \Gamma \backslash U $$ is a flat affine two-torus.
The group $\Gamma$ is a discrete abelian subgroup
of ${\rm Aff}(2)$ and, by choosing an appropriate fundamental domain, its action defines a tessellation of the open domain $U$. In fact, a convex affine quadrilateral may be chosen as
fundamental domain.
See Figure \ref{figure:basictess} and Figure \ref{figure:basictess2} for some examples.
Since $\mathsf{G}$ is abelian and centralizes $\Gamma$, the affine action of $\mathsf{G}$ on $U$ descends to $M$. Thus, $\mathsf{G}$ acts on $M$ by affine transformations, and $M$ is a homogeneous flat affine
manifold.
\begin{figure}
\caption{Tessellations of homogeneous affine domains of type $\mathsf{T}$.}\label{figure:basictess}
\end{figure}
For further reference we note the following:
\begin{lemma}[Normalizers of \'etale affine groups] \label{lem:normalizers}
\begin{enumerate}
\item
The \'etale affine groups $\mathsf{A}$, $\mathsf{B}$ have index two and eight, respectively, in their normalizers in ${\rm Aff}(2)$.
The quotients are generated by the reflections
$$
\begin{pmatrix}
0 & 1\\
1 & 0 \\
\end{pmatrix} \in \operatorname{GL}(2,{\bb R})
\text{, respectively, }
\begin{pmatrix}
-1& 0 \\
0 & 1 \\
\end{pmatrix},
\begin{pmatrix}
1& 0 \\
0 & -1 \\
\end{pmatrix},
\begin{pmatrix}
0 & 1\\
1 & 0 \\
\end{pmatrix} .
$$
\item The normalizers in ${\rm Aff}(2)$ of the \'etale affine groups $\mathsf{C}_{1}$, $\mathsf{C}_{2}$ are
$$ \left\{ \begin{pmatrix} \alpha & z \\ 0 & \beta \\
\end{pmatrix} \right\} \; \subset \operatorname{GL}(2,{\bb R}), \;
\left\{ \begin{pmatrix} \alpha & 0 & v \\ 0 & \beta & 0 \\
0 & 0 & 1 \end{pmatrix} \right\} \; \subset {\rm Aff}(2)$$
respectively.
\item The normalizer in ${\rm Aff}(2)$ of the \'etale affine group $\mathsf{D}$ is the semi-direct
product generated by $\mathsf{D}$ and the linear group
\[ {\cal N}_\mathsf{D} = \left\{ \begin{pmatrix}d^2 & b & 0 \\
0 & d & 0\\
0 & 0 & 1 \\
\end{pmatrix} \, {\, \Big\vert \;} \; d \in {\bb R}^{*}, \, b \in {\bb R} \, \right\} \; . \]
\end{enumerate}
\end{lemma}
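The normalizing claims of the lemma are easy to test numerically. The following sketch (our own sanity check, with arbitrary sample parameter values) verifies in numpy that the reflection $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ normalizes the diagonal group $\mathsf{B}$, and that conjugating an element of $\mathsf{D}$ by an element of ${\cal N}_\mathsf{D}$ again lands in $\mathsf{D}$:

```python
# Sketch (our own numerical check, sample parameters chosen arbitrarily):
# conjugation by the reflection s swaps the diagonal entries, so s normalizes
# the diagonal group B; conjugation by an element of N_D maps D into itself.
import numpy as np

s = np.array([[0., 1.], [1., 0.]])
g = np.diag([2., 5.])
assert np.allclose(s @ g @ np.linalg.inv(s), np.diag([5., 2.]))

def D(u, v):
    # element of the group D, written as a 3x3 affine matrix
    return np.array([[1., v, u + v*v/2], [0., 1., v], [0., 0., 1.]])

def N_D(d, b):
    # element of the normalizing linear group from the lemma
    return np.array([[d*d, b, 0.], [0., d, 0.], [0., 0., 1.]])

n, g = N_D(3., 7.), D(0.5, -2.)
h = n @ g @ np.linalg.inv(n)
vp = h[1, 2]                       # read off the parameter v' of the conjugate
assert np.allclose(h, D(h[0, 2] - vp*vp/2, vp))
print("N_D normalizes D")
```

Carrying the computation out symbolically shows the conjugate of $\mathsf{D}(u,v)$ by ${\cal N}_\mathsf{D}(d,b)$ is $\mathsf{D}(d^2 u + bv,\, dv)$.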
\begin{figure}
\caption{Tessellations of homogeneous affine domains of type $\mathsf{B}$.}\label{figure:basictess2}
\end{figure}
In the case of the group $\mathsf{A}$, which is not simply connected, we can consider, more generally, the universal covering group $\tilde {\mathsf{A}}$ of
$\mathsf{A}$. The covering homomorphism
$\tilde {\mathsf{A}} \rightarrow \mathsf{A}$ turns $\tilde {\mathsf{A}}$
into an \'etale affine Lie group.
Let $\Gamma$ be a lattice in $\tilde {\mathsf{A}}$. Then
$$ M = \Gamma \backslash \tilde {\mathsf{A}} $$
inherits a flat affine structure, for which the orbit map $\tilde {\mathsf{A}} \rightarrow {\bb A}o$ at any point $x \in {\bb A}o$
is a development map. The holonomy group $h(\Gamma) \leq \mathsf{A}$ is
the image of $\Gamma$ under the covering $\tilde {\mathsf{A}} \rightarrow \mathsf{A}$. Since $\Gamma$ is central in $\tilde {\mathsf{A}}$, the group
$\tilde {\mathsf{A}}$ acts on $M$ by affine transformations.
In particular, as before, $M$ is a \emph{homogeneous}
flat affine two-torus.
\begin{figure}
\caption{Development process with non-discrete holonomy group.}\label{figure:non-discrete}
\end{figure}
Note that, in this case, it may also happen that the holonomy $h(\Gamma)$ is \emph{not} discrete in ${\rm Aff}(2)$, see Figure \ref{figure:non-discrete}.
If the holonomy is cyclic, as is the case for Hopf tori
(Example \ref{ex:hopftori}), $M$ cannot be constructed by gluing an affine quadrilateral which is contained in the development image. However, $M$ may always be obtained by gluing a
strip which is situated in
$\tilde {\mathsf{A}}$, see Example \ref{ex:cylinders2}.
\subsection{Affine gluing of polygons} \label{sect:agluing}
Let ${\mathcal P} \subset {\bb A}^2$ be a polygon with ${\cal S}$ its set of sides.
Let $\{ g_S \in {\rm Aff}(2) \mid \, S \in {\cal S} \}$ be a set of affine transformations pairing the sides of ${\mathcal P}$ and let
$M$ be the corresponding identification space of ${\mathcal P}$.
We say that the \emph{affine gluing criterion}\footnote{See \cite{BauesG} for more details on this definition.} holds if, for each vertex $x \in {\mathcal P}$ with cycle of edges $S_1, \ldots, {S_m}$,
the \emph{cycle relation} $g_{S_1} \cdots g_{S_m} = 1$ holds,
and furthermore the corners at $x$ of the polygons
$g_{S_1} \cdots g_{S_i} {\mathcal P}$, $i = 1, \ldots ,m$, add up
subsequently to a disc, while intersecting only in their consecutive
boundaries. This disc then provides an affine coordinate neighborhood in the identification space of ${\mathcal P}$ defined
by the pairing of sides. If the gluing criterion is satisfied the identification space $M$ inherits the structure of a flat affine manifold from ${\mathcal P}$. \\
The following result is the analogue of Poincar\'{e}'s fundamental polygon theorem (cf.\ \cite{Maskit} for the classical
\index{Poincar\'{e} fundamental polygon theorem}
version) for gluing flat affine surfaces:
\begin{proposition}[\mbox{see \cite[Proposition 2.1]{BauesG}}] \label{prop:agluing}
If the affine gluing criterion holds then the group
$\Gamma \subset {\rm Aff}(2)$ generated by the side-pairing transformations
$\{ g_S | \, S \in {\cal S} \}$ acts properly discontinuously and with fundamental domain ${\mathcal P}$
on a flat affine surface $\bar{X}$ which develops
$\Gamma$-equivariantly onto an open set $U$ in ${\bb A}^{2}$.
The inclusion of ${\mathcal P}$ into $\bar{X}$ identifies $M$ and the orbit space
$\Gamma \backslash \bar{X}$.
\end{proposition}
It follows that $M$ inherits a natural flat affine structure from ${\mathcal P}$. In fact, the surface $\bar{X}$ is the holonomy covering space
of $M$ and the group $\Gamma$ is the holonomy group of $M$.
Note also that the construction of $\bar{X}$ is sketched in the proof of Theorem \ref{thm:Deformations}.
The situation is pictured in the following commutative diagram
of maps:
\begin{eqnarray*}
\bar{X} & \stackrel{\bar D}{\longrightarrow} & U \subset {\bb A}^{2} \\
\downarrow & & \downarrow \\
M= \Gamma \backslash \bar{X} & \longrightarrow & \Gamma \backslash U \; \; .
\end{eqnarray*}
\begin{example}
Figure \ref{figure:glue1}
shows how to glue a trapezium ${\mathcal{T}}$ with angle $\alpha < \pi$.
The sides $S_1$ and $S_3$ are glued with a homothety.
$S_2$ and $S_4$ are glued with a rotation of angle $\alpha$.
The developing image is $U = {\bb A}o$.
It is tessellated by the translates of ${\mathcal{T}}$ if and only if $m \alpha = 2 \pi$ for an
integer $m$. If the angle $\alpha$ is a rational multiple of $2\pi$, say $p \alpha = 2\pi q$, the development $\bar{X} \rightarrow {\bb A}o$ is
a finite cyclic covering of degree $q$. Otherwise the development is an infinite cyclic covering and $\bar{X}$ is simply connected.
\end{example}
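The side pairings of this example are easy to verify numerically. In the sketch below (our parameter choices $\alpha = \pi/3$, $\lambda = 2$; the vertex labels are ours), the homothety pairs the two parallel chords of the trapezium and the rotation pairs the two radial sides:

```python
# Sketch (our parameters alpha = pi/3, lambda = 2; vertex labels are ours):
# the homothety x -> lambda x maps the inner chord [v1, v4] onto the outer
# chord [v2, v3], and the rotation R_alpha maps the radial side [v1, v2]
# onto [v4, v3], as in the side pairing of the example.
import numpy as np

alpha, lam = np.pi/3, 2.0
R = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

v1 = np.array([1.0, 0.0])   # inner vertex on the ray theta = 0
v2 = lam * v1               # outer vertex on the ray theta = 0
v4 = R @ v1                 # inner vertex on the ray theta = alpha
v3 = lam * v4               # outer vertex on the ray theta = alpha

# the homothety pairs the two parallel chords
assert np.allclose(lam * v1, v2) and np.allclose(lam * v4, v3)
# the rotation pairs the two radial sides
assert np.allclose(R @ v1, v4) and np.allclose(R @ v2, v3)
# for alpha = 2*pi/6 the rotated copies close up around the origin (m = 6)
assert np.allclose(np.linalg.matrix_power(R, 6), np.eye(2))
```

For $\alpha$ an irrational multiple of $2\pi$ the last assertion fails for every power, matching the infinite cyclic covering in the example.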
\begin{figure}
\caption{Gluing a trapezium in ${\bb A}o$.}\label{figure:glue1}
\end{figure}
\subsubsection{Gluing affine quadrilaterals}
Theorem \ref{thm:Benzecri} implies that the gluing criterion imposes strong restrictions on the possible combinatorial types of polygons and pairings.
However, flat affine two-tori are easily obtained by gluing an affine
quadrilateral ${\mathcal P}$ in the way indicated in Figure \ref{gluingq}.
The equivalence
class of the flat affine manifold thus obtained depends
on the affine equivalence class of ${\mathcal P}$ and the particular
side pairing transformations chosen, see \cite[Section~3]{BauesG}.
\begin{figure}
\caption{Gluing a torus.}\label{gluingq}
\end{figure}
The gluing conditions for such a pairing are easily verified:
\begin{lemma}[{\cite[Lemma 3.2]{BauesG}}] \label{gluelemma}
Let
$ A,B \in {\rm Aff}(2)$, $A (0,0) =(1,0) $, $A(0,1) = p$, $B(0,0) = (0,1)$,
$B(1,0)= p$.
The side pairing transformations $\{ A, A^{-1}\! , B, B^{-1} \}$ for the polygon with vertices ${\mathcal P} = ( (0,0), (1,0), p, (0,1) )$ satisfy the gluing conditions if and only if\/ $\det l(A)>0$ and $\det l(B)>0$ (where
$l$ denotes the linear part of an affine transformation) and
$[A,B] = {\mathrm{Id}}$.
\end{lemma}
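A minimal concrete instance of the lemma (our own illustration): pure translations $A$, $B$ satisfy all three conditions for the unit square, with $p = (1,1)$, recovering the standard translation torus:

```python
# Minimal instance of the lemma (our own example): the translations A by
# (1,0) and B by (0,1) form a valid side pairing of the unit square with
# p = (1,1): the vertex conditions hold, the linear parts have positive
# determinant, and [A, B] = Id.
import numpy as np

def translation(t):
    M = np.eye(3)
    M[:2, 2] = t
    return M

def apply(M, x):
    # apply an affine 3x3 matrix to a point of the plane
    return (M @ np.array([x[0], x[1], 1.0]))[:2]

A = translation([1.0, 0.0])
B = translation([0.0, 1.0])
p = np.array([1.0, 1.0])

assert np.allclose(apply(A, (0, 0)), (1, 0)) and np.allclose(apply(A, (0, 1)), p)
assert np.allclose(apply(B, (0, 0)), (0, 1)) and np.allclose(apply(B, (1, 0)), p)
# linear parts have positive determinant, and A, B commute
assert np.linalg.det(A[:2, :2]) > 0 and np.linalg.det(B[:2, :2]) > 0
assert np.allclose(A @ B, B @ A)
```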
We remark further:
\begin{proposition} Every flat affine two-torus obtained by gluing a quadrilateral ${\mathcal P}$ on its sides is homogeneous. Conversely,
if $M$ is a homogeneous flat affine two-torus with non-cyclic affine holonomy group, then $M$ may be obtained by gluing a quadrilateral.
\end{proposition}
\begin{proof} For the proof that the gluing torus of ${\mathcal P}$ is
homogeneous, we have to appeal to some of the facts
which are explained in Section~\ref{sect:classification}. In particular,
we use Proposition \ref{prop:devimages} and
Proposition \ref{prop:liftofN}.
The gluing conditions imply that the minimal connected
abelian subgroup $N$ which contains the holonomy of $M$ is at least two-dimensional. Therefore, $N$ is one of the two-dimensional abelian subgroups listed in Example \ref{ex:domains}.
By Proposition \ref{prop:liftofN}, $N$ acts on the
development image of $M$. Let $U$ be an open orbit
for $N$, which is one of the domains of Example \ref{ex:domains}.
Then, by convexity of the homogeneous domain $U$, the polygon ${\mathcal P}$ is contained in $U$. The construction of $M$ and its development process show that the development image of $M$ is covered by the holonomy translates of ${\mathcal P}$. Since the holonomy is contained in $N$, it follows that the development image of $M$ is contained in the open orbit $U$. On the other hand, by Proposition \ref{prop:liftofN}, the development image contains $U$. This implies that the development image of $M$ equals the open orbit $U$.
Therefore, $N$ acts transitively on the development image, and thus also on $M$. In particular, it follows that $M$ is homogeneous.
We omit the proof of the converse statement. A special case
is treated in \cite[Section~4.6]{BauesG}.
\end{proof}
\subsubsection{The gluing variety}
By Lemma \ref{gluelemma}, the set of side pairings $$
{\cal V} = \{ (p, A, B ) \} \, \subset {{\bb R}}^2 \times {{\bb R}}^6 \times {{\bb R}}^6 \; , $$
which satisfy the gluing conditions for the (convex) quadrilateral $${\mathcal P} = ( \, (0,0), (1,0),p , (0,1) \, ) $$
form a semi-algebraic subset
of $ {{\bb R}}^2 \times {{\bb R}}^6 \times {{\bb R}}^6$.
It is easily computed that ${\cal V}$ is four-dimensional
and that the set of solutions with respect to a fixed ${\mathcal P}$ is
of dimension two. We let $\mathsf{p}: {\cal V} \rightarrow {\bb R}^2$ denote the projection to the first factor. Note that the projection of ${\cal V}$ to the matrix factors ${{\bb R}}^6 \times {{\bb R}}^6$
defines an
\emph{embedding}
$ {\cal V} \, \hookrightarrow \, \operatorname{Hom}({\bb Z}^2, {\rm Aff}(2))$
as a subset of the holonomy image.
We call the set
${\cal V}$ the {\em gluing variety of quadrilaterals}.\\
\paragraph{Embedding into the deformation space}
Observe that the gluing of a quadrilateral ${\mathcal P}$
naturally constructs a \emph{framed affine two-torus}.
If we choose a diffeomorphism of the unit-square
with ${\mathcal P}$, the development process of the gluing extends this
diffeomorphism to the development map of a \emph{marked} framed affine two-torus (see Example \ref{ex:fat}). This defines a continuous map
$$ {\cal V} \supset \mathsf{p}^{-1} ({\mathcal P}) \rightarrow {\mathfrak S}Mf(T^2,{\bb A}^2) \; ,
$$
where $\mathsf{p}: {\cal V} \rightarrow {\bb R}^2$ is the projection
to the first factor.
We may furthermore choose a natural identification
of the unit square with ${\mathcal P}$ (for example, by decomposing
any quadrilateral into two triangles and using affine identifications
of the triangles). Then, using \eqref{eq:deftower}, we obtain
a continuous (open) embedding
$$ \nu: {\cal V} \rightarrow {\mathfrak S}Mf(T^2,{\bb A}^2) \; $$
to the space of classes of framed flat affine tori.
Note that $ \nu$
is a section of the holonomy map $\mathsf{hol}:
{\mathfrak S}Mf(T^2,{\bb A}^2) \rightarrow \operatorname{Hom}({\bb Z}^2, {\rm Aff}(2))$.
Since $\cal V$ also defines
a slice for the $\operatorname{GL}(2,{\bb R})$-orbits on $
\operatorname{Hom}({\bb Z}^2, {\rm Aff}(2))$, the map $\nu$ descends
to an embedding $ {\cal V} \rightarrow {\mathfrak S}M_{\mathsf{p}}(T^2,{\bb A}^2)$,
whose image consists of homogeneous structures.
Therefore, by the discussion in Example \ref{ex:def_homfaff},
\emph{the gluing variety $ {\cal V}$
embeds as a locally closed subset of the
deformation space ${\mathfrak D}(T^2,{\bb A}^2)$}.
\subsection{Tori with development image ${\bb A}o$} \label{sect:tori_bbao}
Here we discuss how to glue flat affine tori from annuli which are contained in the once punctured plane or the universal covering
flat affine manifold of the once punctured plane.
\subsubsection{Hopf tori and quotients of ${\bb A}o$} \label{sect:Hopftori}
The simplest examples of flat affine tori with development image
${\bb A}o$ are obtained by gluing closed
annuli
along their boundary curves.
\begin{example}[Hopf tori] \label{ex:hopftori}
Let $A_{\lambda}$,
$\lambda > 0$, $\lambda \neq 1$, be a dilation with scaling factor $\lambda$,
and $\Gamma = \langle A_{\lambda} \rangle$
the subgroup of $\operatorname{GL}(2,{\bb R})$
generated by $A_{\lambda}$. Then $\Gamma$ acts properly discontinuously on
${\bb A}o$ and the quotient space
$$ {\mathcal H}_{\lambda} = \Gamma \backslash \left({\bb A}o\right)$$ is a compact flat affine two-torus
${\mathcal H}_{\lambda}$, which is called a \emph{Hopf torus}.
\end{example}
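The identification carried out by $\Gamma$ becomes transparent in logarithmic polar coordinates. The following sketch (a standard observation, not taken from the text) checks that $x$ and $A_\lambda x$ receive the same coordinates on ${\mathcal H}_\lambda$:

```python
# Sketch (a standard observation, not from the text): in logarithmic polar
# coordinates the quotient by the dilation A_lambda is encoded by
# (log r mod log lambda, theta), so x and A_lambda x represent the same
# point of the Hopf torus H_lambda.
import numpy as np

lam = 2.0  # our sample scaling factor

def hopf_coords(x):
    r = np.hypot(x[0], x[1])
    theta = np.arctan2(x[1], x[0])
    return np.array([np.log(r) % np.log(lam), theta])

x = np.array([0.3, -1.7])
assert np.allclose(hopf_coords(x), hopf_coords(lam * x))
assert np.allclose(hopf_coords(x), hopf_coords(lam**3 * x))
```

The first coordinate lives on a circle of circumference $\log \lambda$ and the second on the circle of directions, exhibiting the quotient as a topological two-torus.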
The Hopf torus ${\mathcal H}_{\lambda}$ is obtained by gluing a closed
annulus ${\mathcal{A}}_{\lambda} \subset {\bb A}o$ of width $\lambda$
along its boundary circles, see Figure \ref{figure:annuli}.
\begin{figure}
\caption{Gluing of annuli: Hopf torus, non-homogeneous flat affine tori.}\label{figure:annuli}
\end{figure}
The geometric construction of the Hopf tori ${\mathcal H}_{\lambda}$ may be refined
as follows:
\begin{example}[Finite coverings of Hopf tori] \label{ex:finhopftori}
Let $$ X_{k} \, {\, \longrightarrow \, } \; {\bb A}o$$ be a $k$-fold covering flat affine manifold.
Then we may lift the action of $A_{\lambda}$ on ${\bb A}o$ to
a properly discontinuous action of an affine transformation
$A_{\lambda,k}$ of $X_{k}$. The quotient spaces
$$ {\mathcal H}_{\lambda,k} = \; \langle A_{\lambda,k} \rangle \, \backslash X_{k} $$
are
flat affine manifolds, which are $k$-fold
covering spaces of ${\mathcal H}_{\lambda}$.
Geometrically, $X_{k}$ is a
topological annulus with
a flat affine structure which is obtained
by cutting ${\bb A}o$ at a radial line and then
gluing $k$ copies of ${\bb A}o$ along this
geodesic ray. A \emph{geodesic} in a
flat affine manifold is a curve which
corresponds to a straight line in all affine coordinate
charts. Thus, correspondingly, the manifolds ${\mathcal H}_{\lambda,k}$
are obtained by gluing $k$ copies of ${\mathcal H}_{\lambda}$
at a closed geodesic.
\end{example}
Note that the family of Hopf tori ${\mathcal H}_{\lambda,k}$
gives a simple example of a family of distinct
flat affine manifolds which have identical holonomy homomorphism.
\begin{example}[Finite quotients of Hopf tori] \label{ex:finquothopftori}
Let ${\mathrm{R}}_{\alpha}$ be a rotation with angle $\alpha= \frac{p}{q} \pi $
a rational multiple of $\pi$. Then the finite group of rotations
of order $2q$ generated by ${\mathrm{R}}_{\alpha}$ acts without fixed points
on $ {\bb A}o$ and on the Hopf tori ${\mathcal H}_{\lambda,k}$.
Therefore, the quotient spaces $$ {\mathcal H}_{\lambda, \alpha,k} = \; \langle {\mathrm{R}}_{\alpha} \rangle
\, \backslash {\mathcal H}_{\lambda,k}$$ are flat affine two-tori.
\end{example}
Since $A_{\lambda}$ is in the center of $\operatorname{GL}(2,{\bb R})$,
${\mathcal H}_{\lambda}$ is a homogeneous flat affine manifold
with affine automorphism group
$$ {\rm Aff}({\mathcal H}_{\lambda})= \operatorname{GL}(2,{\bb R})/\Gamma \; .
$$ Hence, its finite coverings
$ {\mathcal H}_{\lambda,k}$ are homogeneous, as well. Similarly,
$ {\mathcal H}_{\lambda, \alpha,k}$ are homogeneous flat affine two-tori,
with ${\rm Aff}({\mathcal H}_{\lambda, \alpha})^0$
isomorphic to $\operatorname{GL}(1,{\bb C})/\Gamma$, except for $\alpha = \pi$. In the latter case ${\rm Aff}({\mathcal H}_{\lambda, \pi}) = {\rm PGL}(2,{\bb R})/ \Gamma$.
\paragraph{Expanding holonomy}
Non-homogeneous quotients of ${\bb A}o$ may be constructed
by using expanding elements of $\operatorname{GL}(2,{\bb R})$.
A matrix $A \in \operatorname{GL}(2,{\bb R})$ is called an \emph{expansion} if
it has real eigenvalues $\lambda_{1}, \lambda_{2} > 1$.
($A^{-1}$ is then called a \emph{contraction}.)
Every expansion acts properly
discontinuously on ${\bb A}o$, see Figure \ref{figure:expanding}.
This motivates the following:
\begin{definition}[Expanding elements]
A matrix $A \in \operatorname{GL}(2,{\bb R})$ is called \emph{expanding} if it
acts properly on ${\bb A}o$ and
every compact subset of ${\bb A}o$ is moved to infinity by its iterates
$A^k$, $k \to \infty$.
\end{definition}
Note that, if $A$ is expanding, it is either an expansion, or a product of
an expansion with ${\mathrm{R}}_{\pi}$, or it is conjugate to a product of a
dilation and a rotation.
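A common feature of the three cases just listed is that every eigenvalue has modulus greater than one, which is what pushes compact subsets of the punctured plane to infinity under iteration. The following sketch (our reformulation as an eigenvalue check, not a proof) verifies this numerically:

```python
# Sketch (our reformulation, not a proof): in each of the three cases above
# every eigenvalue of the matrix has modulus > 1, so the iterates A^k move
# compact subsets of the punctured plane off to infinity.
import numpy as np

def min_eig_modulus(A):
    return np.min(np.abs(np.linalg.eigvals(A)))

expansion = np.diag([2.0, 3.0])             # real eigenvalues > 1
with_R_pi = -np.eye(2) @ expansion          # expansion composed with R_pi
c, s = np.cos(0.7), np.sin(0.7)
dil_rot = 1.5 * np.array([[c, -s], [s, c]]) # dilation composed with a rotation

for A in (expansion, with_R_pi, dil_rot):
    assert min_eig_modulus(A) > 1.0

# a shear fixes the x-axis pointwise, so it is not expanding
shear = np.array([[1.0, 1.0], [0.0, 1.0]])
assert np.isclose(min_eig_modulus(shear), 1.0)
```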
\begin{figure}
\caption{Dynamics of expanding elements in $\operatorname{GL}(2,{\bb R})$.}\label{figure:expanding}
\end{figure}
\begin{figure}
\caption{Dynamics of non-expanding elements in $\operatorname{GL}(2,{\bb R})$.}
\end{figure}
\begin{example}[Tori with expanding holonomy]
\label{ex:expholonomy}
If $A$ is an expanding element then the quotient space
$$ {\mathcal H}_{A} = \langle A \rangle \backslash \, {\bb A}o$$
is a flat affine two-torus with development image ${\bb A}o$.
If $A$ is an expansion, the torus ${\mathcal H}_{A}$ is
obtained by gluing an annulus ${\mathcal{A}}_{A} \subset {\bb A}o$, as indicated
in Figure \ref{figure:annuli}. Note that if $A$ is an expansion which is not a dilation, then ${\mathcal H}_{A}$ is a flat affine torus which
is \emph{not} homogeneous.
\end{example}
\subsubsection{Quotients of $\widetilde {\bb A}o$} \label{sect:gentori}
We consider the universal covering
flat affine manifold
$$\mathsf{q}: \widetilde {\bb A}o\, \longrightarrow \, {\bb A}o$$ of
the open domain ${\bb A}o$ in ${\bb A}^2$. Let ${\rm Aff}( \widetilde {\bb A}o)$ be its group of affine
diffeomorphisms.
\paragraph{Universal covering of $\operatorname{GL}(2,{\bb R})$}
The development $\mathsf{q}$ induces a surjective homomorphism
$$ \mathsf{p}: {\rm Aff}( \widetilde {\bb A}o) {\, \longrightarrow \, } {\rm Aff}({\bb A}o) = \operatorname{GL}(2,{\bb R}) \; , $$
which exhibits ${\rm Aff}( \widetilde {\bb A}o)$
as the universal covering group of $\operatorname{GL}(2,{\bb R})$.
Let ${\mathrm{R}}_{\pi} \in \operatorname{GL}(2,{\bb R})$ denote rotation by $\pi$.
The center of the group $$ {\rm Aff}( \widetilde {\bb A}o) =
\widetilde{\operatorname{GL}}(2,{\bb R})$$ is therefore generated by an element
$\tau$ which satisfies $\mathsf{p}(\tau) = {\mathrm{R}}_{\pi}$,
and the kernel of $\mathsf{p}$ is generated by $\tau^2$
(cf.\ Section~\ref{sect:GL2R}).
\paragraph{Polar coordinates.}
We let $(r, \theta)$ denote polar coordinates for $\widetilde {\bb A}o$. Then $$ \tau: (r, \theta) \mapsto (r, \theta + \pi) \, . $$
More generally, the universal covering $\widetilde {\rm SO}(2,{\bb R})$ of the rotation group is a subgroup of $\widetilde{\operatorname{GL}}(2,{\bb R})$ which acts by translations in the $\theta$-direction.
\paragraph{Elements with non-zero rotation angle.}
Let $B \in \operatorname{GL}^+\!(2,{\bb R})$ have positive eigenvalues.
After conjugation, we may assume that $B$ preserves the horizontal coordinate
axis in ${\bb A}^2$. We let $\tilde B= \tilde B_{0}$ denote
the lift of $B$ to $\widetilde {\bb A}o$ which preserves the line $\theta = 0$.
It follows that
$\tilde B$ preserves all horizontal strips
$$\bar{\Omega}_{\ell} = \; \{ \, (r, \theta) \mid \ell \, \pi \leq \theta \leq (\ell+1) \, \pi \, \}$$
and their boundary components.
We observe that (the group generated by) any other lift
$$ \tilde B_{k} = \tau^k \tilde B \, , \, k \neq 0 \; , $$
acts properly on
$\widetilde {\bb A}o$.
\begin{definition}[Non-zero angle of rotation] \label{def:posrot}
Let $\tilde B \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$.
We say that $\tilde B$ has a non-zero rotation angle if
$\tilde B$ acts properly on $ \widetilde {\bb A}o$, and, for every compact subset, the $\theta$-coordinates of its images are unbounded
under the iterates $\tilde B^k$, $k \to \infty$.
\end{definition}
The property of having non-zero rotation is an affine invariant,
that is, it is invariant under conjugation
in $\widetilde{\operatorname{GL}}^+\!(2,{\bb R})$.
In particular,
$\tilde B \neq 1$ has non-zero angle of rotation if and only if $\tilde B$ is conjugate
to an element of $\widetilde {\rm SO}(2)$, or $B$ has positive eigenvalues and
$\tilde B = \tilde B_{k} = \tau^k \tilde B_{0}$, $k \neq 0$,
as above.
\paragraph{Proper actions on $\widetilde {\bb A}o$.}
Let $\bar {\mathcal H}_{0}^k = \bigcup_{\ell = 0 \ldots k}
\bar{\Omega}_{\ell}$ be the successive
union of $k$ strips $\bar{\Omega}_{\ell}$.
Note that the development image of $\bar{\Omega}_{\ell}$ is
a closed halfspace with the origin $0$ removed, and
the development image of $\bar {\mathcal H}_{0}^k$
is ${\bb A}o$, $k \geq 1$ (compare also Figure
\ref{figure:deviscov}).
\begin{example}[Affine cylinders without boundary]
Consider the quotient flat affine manifolds
$$ X_{B,k} = \; {\langle \tilde B_{k} \rangle}
\, \backslash \, \widetilde {\bb A}o \, , \; \, k \geq 1 \; .$$
These are open affine cylinders which are obtained by gluing
$\bar {\mathcal H}_{0}^k$ along its two incomplete
boundary geodesics.
The development image of $X_{B,k}$ is ${\bb A}o$, and
its holonomy group is generated by ${\mathrm{R}}_{\pi}^k B$.
\end{example}
Let $A$ be an expansion which commutes with $B$ and
$\tilde A= \tilde A_{0}$ the lift of $A$ which preserves the
line $\theta=0$. Then $\tilde A$ acts properly on
$X_{\tilde B}$.
\begin{example}[Quotients of $\widetilde {\bb A}o$] \label{ex:gentorusnh}
We obtain the quotient affine torus
$$ {\mathcal{T}}_{\tilde A, \tilde B} = {\mathcal{T}}_{A,B,k} = \; \langle \tilde A \rangle
\, \backslash X_{B,k} \; , \quad k \neq 0 \, . $$
The holonomy homomorphism of
${\mathcal{T}}_{A,B,k}$ is determined by $A$, $B$ and
the parity of $k$.
\end{example}
\begin{example}[Affine cylinders without boundary, general case] \label{ex:acylinders1}
Let $\tilde B$ be an element of $\widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ which has non-zero rotation.
Then the quotient flat affine
manifolds
$$ X_{\tilde B} = \; {\langle \tilde B \rangle}
\, \backslash \, \widetilde {\bb A}o \, $$
are open cylinders which are obtained by gluing a
strip $$\bar {\mathcal H}_{\alpha} = \{ (r, \theta) \mid 0 \leq \theta \leq \alpha \}$$
in $\widetilde {\bb A}o$ along its two boundary geodesics.
\end{example}
Now let $B \in \operatorname{GL}(1,{\bb C})$ and $\tilde B$ a lift with
non-zero rotation, and $A \in \operatorname{GL}(1,{\bb C})$ an expanding
element. Then $\tilde A$ acts properly on $X_{\tilde B}$
if and only if $ \tilde A$ and $ \tilde B$ generate a lattice in
$\widetilde{\operatorname{GL}}(1,{\bb C})$.
\begin{example} \label{ex:gentorush}
The quotient flat affine torus
$$ {\mathcal{T}}_{\tilde A, \tilde B} = \; \langle \tilde A \rangle
\, \backslash X_{\tilde B} \; $$
is a homogeneous flat affine two-torus
with holonomy in $\operatorname{GL}(1,{\bb C})$.
\end{example}
\subsection{Affine cylinders with geodesic boundary}
\label{sect:acylinders}
We show that by gluing flat affine cylinders whose boundary
curves are incomplete geodesics
we may construct flat affine two-tori with development image ${\bb A}o$. This yields another construction of the manifolds ${\mathcal{T}}_{A,B,k}$
which have been introduced in Example \ref{ex:gentorusnh}.
As a matter of fact, as a key step in the course of the proof of Theorem \ref{thm:classification}, we shall show that
all \emph{non-homogeneous} flat affine tori
may be obtained in this way.\\
\begin{example}[Affine cylinders with geodesic boundary] \label{ex:acylinders2}
Let $\bar {\mathcal H}_{0}$ be the closed upper
halfplane with the origin $0$ removed. If $A$ is an expansion then
$$ {\mathcal{C}}_{A} = \; \langle A \rangle
\, \backslash \bar {\mathcal H}_{0} $$
is topologically an annulus with two boundary components,
and it is also a flat affine manifold with (incomplete) geodesic boundary, see Figure \ref{figure:affinecylinders} and Figure \ref{figure:affinecylinder3d}. We call ${\mathcal{C}}_{A}$ an
affine cylinder.\footnote{Benoist \cite{Benoist_Ta} calls ${\mathcal{C}}_{A}$ an annulus.}
\end{example}
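For instance (a concrete choice of expansion; the diagonal form is an assumption made for illustration), taking
$$ A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \; , \qquad \lambda_1, \lambda_2 > 1 \; , $$
the map $A$ preserves $\bar {\mathcal H}_{0}$, and the quotient ${\mathcal{C}}_{A}$ is a compact annulus.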
\begin{figure}
\caption{Gluing of flat affine cylinders ${\mathcal{C}}_{A}$.}
\label{figure:affinecylinders}
\end{figure}
More generally, let $\bar {\mathcal H}_{0}^k = \bigcup_{\ell = 0}^{k}
\bar \Omega_{\ell}$, and $\tilde A$ the lift of $A$ which preserves the line
$\theta =0$. Then the manifold with boundary
$ {\mathcal{C}}^k_{A} = \; \langle \tilde A \rangle
\, \backslash \bar {\mathcal H}_{0}^k$ is called an \emph{affine cylinder}.
\begin{figure}
\caption{Affine cylinder ${\mathcal{C}}^k_{A}$.}
\label{figure:affinecylinder3d}
\end{figure}
\subsubsection{Gluing of flat affine cylinders} \label{sect:TABk}
Let $A$ be an expansion and $B \in \operatorname{GL}^+\!(2,{\bb R})$,
commuting with $A$, such that $B$ has positive eigenvalues.
Then, as shown by Example \ref{ex:gentorusnh},
every lift $\tilde B_{k}$, $k \geq 1$,
acts properly on $ \langle \tilde A \rangle
\, \backslash \widetilde {\bb A}o$ and yields the quotient flat affine torus
$$ {\mathcal{T}}_{A,B,k} = \; \langle \tilde A \rangle
\, \backslash X_{B,k} \; . $$
Thus, geometrically, the flat affine torus ${\mathcal{T}}_{A,B,k}$ is constructed
by gluing the flat affine cylinder $ {\mathcal{C}}^k_{A}$ along its two boundary
geodesics using the transformation $\tilde B_{k}$, see Figure \ref{figure:Gluedtorus}.
\begin{figure}
\caption{Gluing a flat affine torus ${\mathcal{T}}_{A,B,k}$.}
\label{figure:Gluedtorus}
\end{figure}
\begin{remark}
Note that $M= {\mathcal{T}}_{A,B,k}$ is a homogeneous flat affine two-torus if
and only if its holonomy group
$h(\Gamma) = \langle A , B \rangle$ is
contained in the group of dilations, in which case ${\rm Aff}(M)$
is a finite covering group of $\operatorname{PSL}(2,{\bb R})$, and $M$ is
a Hopf torus.
\end{remark}
\begin{remark} Similarly, if $A$ is an expansion and $h(\Gamma) = \langle A , B \rangle$ is a discrete group of rank two then $h(\Gamma) $ acts properly
discontinuously and with compact quotient either on ${\mathcal H}$ or on
$\mathcal Q$. The corresponding
torus is obtained by gluing ${\mathcal{C}}_{A}$ or ${\mathcal{C}}_{A, \alpha}$, $\alpha < \pi$.
\end{remark}
Finally, we remark that a homogeneous flat affine torus with \index{flat affine torus!homogeneous}
development image ${\bb A}o$ may be constructed by gluing
cylinders if and only if it admits a closed (non-complete) geodesic:
\begin{example}[Homogeneous tori with a closed geodesic] \label{ex:cylinders2}
Let $A_{\lambda}$ be a dilation, and $B \in \operatorname{GL}(1,{\bb C})$. Then every
lift of $B$ to $\widetilde {\bb A}o$ is of the form $$\tilde B_{k}: \; (r, \theta) \mapsto
( \lambda r, \theta + \alpha) \; , \; \alpha = \alpha_{0} + 2k \pi$$
with $\alpha_{0} \in [0, 2 \pi)$. If $\alpha \neq 0$, we define $
X_{B,k} = \; {\langle \tilde B_{k} \rangle}
\, \backslash \, \widetilde {\bb A}o \, , \; \, $ and
$$ {\mathcal{T}}^h_{\lambda,B, k} = \; \langle \tilde A_{\lambda} \rangle
\, \backslash X_{B,k} \; . $$
Let $\bar {\mathcal H}_{\alpha} = \{ (r, \theta) \mid 0 \leq \theta \leq \alpha \}$
be a strip in $\widetilde {\bb A}o$, and
$$ {\mathcal{C}}_{\lambda, \alpha} = \; \langle \tilde A_{\lambda} \rangle
\, \backslash \bar {\mathcal H}_{\alpha} \; . $$
Then, ${\mathcal{T}}^h_{\lambda,B, k}$ is a homogeneous flat affine two-torus which
is obtained by gluing the cylinder ${\mathcal{C}}_{\lambda, \alpha}$ with $\tilde B_{k}$.
\end{example}
\begin{figure}
\caption{Affine cylinders ${\mathcal{C}}_{\lambda, \alpha}$.}
\end{figure}
\section{The classification of flat affine structures on the two-torus}
\label{sect:classification}
The classification of flat affine structures on the two-torus was carried
out by Kuiper \cite{Kuiper} and completed by Nagano-Yagi in
\cite{NaganoYagi}, and independently also by Furness and Arrowsmith
in \cite{FurArr}. Later on much of the work in \cite{NaganoYagi} was clarified
and beautifully generalised in Benoist's paper \cite{Benoist_Np}.
In this section we describe the classification result
in detail and explain its proof, following loosely along
the lines of \cite{NaganoYagi}, and
employing also the main ideas from \cite{Benoist_Np,Benoist_Ta}
to establish in Proposition \ref{prop:Deviscovering} the crucial fact that the development map of a flat affine two-torus is always a covering map onto its image. \index{development maps!for flat affine two-torus}
\begin{theorem} \label{thm:classification}
Let $M$ be a flat affine two-torus. Then $M$ is affinely
diffeomorphic to either
\begin{enumerate}
\item a quotient of a simply connected two-dimensional affine homogeneous domain
by a properly discontinuous group of affine transformations.
\item or a quotient space of the universal covering $\widetilde {\bb A}o$ of the once punctured plane.
\end{enumerate}
In particular, the universal covering flat affine manifold of $M$ is affinely
diffeomorphic to the affine plane ${\bb A}^2$, the half-plane ${\cal H}$, or
the sector (quarter plane) ${\cal Q}$, in the first case, and to $\widetilde {\bb A}o$, in
the second case.
\end{theorem}
The first step in the proof of Theorem \ref{thm:classification}
consists of the determination of the
open domains in ${\bb A}^2$ which appear as
the development images of flat affine structures on the two-torus.
This is done in Section~\ref{sect:devimages}. If $M$ is homogeneous
then the development map is a covering map.
The main step is then the determination of the structure of flat affine
two-tori with development image ${\bb A}o$, which are not homogeneous.
This is carried out in
Section~\ref{sect:univcovs}. We prove that such tori
may be obtained by gluing affine cylinders with geodesic boundary. We deduce that also in this case
the development map of $M$ is a covering map onto its image.\\
The following further consequences are implied by the theorem or its proof.
\paragraph{Classification of divisible affine domains}
An affine domain is called \emph{divisible} if it admits a discontinuous affine action with compact quotient. Since they admit a simply transitive abelian group, all development images of flat affine structures on the two-torus are divisible by abelian discrete groups (isomorphic to ${\bb Z}$ or ${\bb Z}^2$). Conversely, by Benz\'ecri's theorem, every divisible plane affine domain is the development image of a flat affine structure on the two-torus.
By Theorem \ref{thm:classification}, the universal covering of a flat affine two-torus is
a homogeneous flat affine manifold, which covers a convex divisible
homogeneous domain in ${\bb A}^2$.
\paragraph{The affine automorphism group of $M$}
\begin{enumerate}
\item If $M$ has development image ${\bb A}^2$, the half-plane
${\cal H}$, or the sector $ {\cal Q}$, then $M$ is a homogeneous
flat affine manifold. The connected component ${\rm Aff}(M)^0$ of the group
${\rm Aff}(M)$ of affine diffeomorphisms of $M$ is a two-dimensional compact abelian
Lie group, acting transitively and freely on $M$.
\item If the development image is the once-punctured plane ${\bb A}o$ and
$M$ is homogeneous,
then the group ${\rm Aff}(M)^0$ is either a quotient of $\widetilde{\operatorname{GL}}^+\!\!(2,{\bb R})$,
as is the case for Hopf tori (Examples \ref{ex:hopftori}, \ref{ex:finhopftori}), or
${\rm Aff}(M)^0$ is a quotient of $\operatorname{GL}(1,{\bb C})$, as in Example \ref{ex:finquothopftori}. In either case, the action of $\operatorname{GL}(1,{\bb C})$
on ${\bb A}o$ descends to a transitive and free action of a two-dimensional compact abelian Lie group on $M$.
\item
Otherwise, ${\rm Aff}(M)^0$ is a two-dimensional abelian connected Lie group
which has a one-dimensional compact factor. In this case,
$M$ is not homogeneous, as in Example \ref{ex:expholonomy}.
\item The affine automorphism group of $M$ acts prehomogeneously on $M$,
that is, it has only finitely many orbits on $M$.
\item The one-dimensional orbits of ${\rm Aff}(M)^0$ are non-complete
geodesics in $M$ along which
$M$ may be cut into flat affine cylinders.
\end{enumerate}
\paragraph{Homogeneous and complete flat affine tori}
\begin{enumerate}
\item Every \emph{homogeneous} flat affine two-torus $M$ is
affinely diffeomorphic to a quotient of an abelian \'etale affine Lie group of type $\mathsf{T}$, $\mathsf{D}$, $\mathsf{C}_1$,
$\mathsf{C}_2$, $\mathsf{B}$ or $\mathsf{A}$ as listed in Example \ref{ex:domains}.
\item Every \emph{complete} flat affine two-torus $M$ is affinely diffeomorphic to a quotient of an abelian simply transitive affine Lie group of type $\mathsf{T}$ or $ \mathsf{D}$. In particular, $M$
is also a homogeneous flat affine two-torus.
\end{enumerate}
\subsection{Development images} \label{sect:devimages}
The classification of development images is as follows:
\index{development image!of flat affine structure}
\begin{proposition} \label{prop:devimages}
Let $M$ be a flat affine two-torus.
Then the development image of $M$ is either
the affine plane ${\bb A}^2$, the half-plane ${\cal H}$,
the sector (quarter plane) ${\cal Q}$, or the once-punctured plane ${\bb A}o$.
\end{proposition}
\paragraph{Proof of Proposition \ref{prop:devimages}}
Let $h(\Gamma) \leq {\rm Aff}(2)$ be the holonomy
group of $M$. Let $N$ be the
identity component of a maximal
abelian subgroup of ${\rm Aff}(2)$ which contains $h(\Gamma)$.
Note that $N$ contains the
identity component of the Zariski-closure of $h(\Gamma)$.
Therefore, $h(\Gamma) \cap N$
is of finite index in $h(\Gamma)$.
We let $\tilde N$ denote the universal covering group of $N$. The first observation is
that \emph{$N$ acts on the development image}, and it has only finitely many orbits on ${\bb A}^2$.
\begin{proposition} \label{prop:liftofN}
The action of $N$ on ${\bb A}^2$ lifts via the development map
to an action of $\tilde N$ on the universal covering flat affine manifold $\tilde M$
of $M$. Moreover, it follows that
\begin{enumerate}
\item $N$ acts on the development image $\Omega$ of $M$.
\item $N$ has only finitely many orbits on $\Omega$.
\end{enumerate}
\end{proposition}
\begin{proof} Let $Y$ be an affine vector field on ${\bb R}^2$ which
is tangent to the action of $N$, and let $\bar Y$ denote its lift to $\tilde M$ via $D$.
Since $h(\Gamma)$ commutes with $N$, the vector field
$\bar Y$ is $\Gamma$-invariant and projects to a vector field on
$M$. Since $M$ is compact, the flow of $\bar Y$ is complete.
Therefore, the action of $N$ integrates to an action of the universal covering
group $\tilde N$ on $\tilde M$, such that $D(\tilde n x) = n D(x)$, where $\tilde n \in \tilde N$ and $n = h(\tilde n)$ is its image in $N$. This implies (1).
To prove (2), we note that, up to affine conjugacy, every maximal
abelian and connected subgroup $N$ of ${\rm Aff}(2)$ is either one of the abelian groups
$\mathsf{T}$, $\mathsf{D}$, $\mathsf{C_{1}}, \mathsf{C_{2}}$, $\mathsf{B}$,
$\mathsf{A}$ (as listed in Example
\ref{ex:domains}), or the group
$$ \mathsf{N} = \left\{
\begin{pmatrix} 1 & u & v \\ 0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix} {\, \Big\vert \;} u,v \in {\bb R}
\right\} \; . $$
All of the groups appearing in Example
\ref{ex:domains} are simply transitive on an affine domain,
and they have finitely many orbits on ${\bb A}^2$. This shows (2).
To complete the proof,
we contend that $h(\Gamma)$ is \emph{not} contained in the group $ \mathsf{N}$.
We remark that the orbits of $\mathsf{N}$ are the horizontal lines on ${\bb R}^2$.
If $h(\Gamma)$ is contained in $\mathsf{N}$, then by (1), $\Omega$ is a
union of orbits. Thus horizontal lines define a one-dimensional foliation of
$\Omega$, which is preserved by $h(\Gamma)$. This, in turn, defines a
foliation on the manifold $M$, which has an open subset of the real line
as its space of leaves. This is not possible, since $M$ is compact:
The space of leaves is a quotient of $M$, and therefore it is a compact and
closed subset of the line as well.
\end{proof}
It follows that the development image $\Omega$ is a finite union of orbits
of one of the connected abelian groups listed in Example
\ref{ex:domains}. Since $\Omega$ is a connected open subset
of ${\bb A}^{2}$ as well, it follows that $\Omega$ must be one of the
domains listed in Proposition \ref{prop:devimages}.
Conversely, as follows from Section~\ref{sect:etale_affine}, each of these
domains appears as the development image of a homogeneous
flat affine structure on the torus. This completes the proof of
Proposition \ref{prop:devimages}.
\subsection{The classification of manifolds modeled on $({\bb A}o, \operatorname{GL}(2, {\bb R}))$}
\label{sect:bbaoclass}
As follows from the proof of Theorem \ref{thm:classification},
every compact manifold $M$ modeled on $({\bb A}o, \operatorname{GL}(2, {\bb R}))$
is either complete and the development image of $M$ is ${\bb A}o$,
or $M$ is (isomorphic to) a quotient of the open quadrant $\mathcal Q$, or a quotient of an open
half space $\mathcal H$. In the first case
$$ M = \; \Gamma \, \backslash \, \widetilde{\bbA^2 \! - 0} \; , $$
where $\Gamma \leq \widetilde{\operatorname{GL}}^+ \! (2,{\bb R})$ is a
discontinuous subgroup, and in the second case
the affine holonomy group $\Gamma$ is a discrete subgroup of
$\mathsf{B} \leq \operatorname{GL}(2, {\bb R})$ or $\mathsf{C}_{1} \leq \operatorname{GL}(2, {\bb R})$,
respectively. \\
We arrive at the following classification theorem for
$({\bb A}o, \operatorname{GL}(2, {\bb R}))$-manifolds which are complete (see also Corollary \ref{cor:inhomog}):
\begin{theorem} Let $M$ be a compact complete $({\bb A}o,\operatorname{GL}(2, {\bb R}))$-manifold. If $M$ is not homogeneous then it is
isomorphic to a torus ${\mathcal{T}}_{\tilde A, \tilde B}$, as constructed in Section~\ref{sect:TABk} and Example \ref{ex:gentorusnh}.
Moreover,
$M$ is homogeneous
if and only if it can be modeled on $({\bb A}o, \operatorname{GL}(1, {\bb C}))$.
\end{theorem}
In particular, if $M$
is not homogeneous
then it is obtained by gluing flat affine cylinders $ {\mathcal{C}}^k_{A}$, where $A$ is an expansion and $\tilde B \in \widetilde{\operatorname{GL}}^+ \! (2,{\bb R})$ has non-zero angle of rotation and commutes with $A$.
Furthermore, if $A$ is a dilation, then $B$ cannot be conjugate to an element of $\operatorname{GL}(1,{\bb C})$.
\begin{example}[Holonomy in $\operatorname{GL}^+(2,{\bb R})$ is not injective] \label{ex:hol_notinj}
This phenomenon already occurs for homogeneous flat affine manifolds which are quotients of the universal covering
flat affine Lie group
$\tilde {\mathsf{A}}$ of the \'etale flat affine group $\mathsf{A}
= \operatorname{GL}(1,{\bb C}) \leq \operatorname{GL}(2,{\bb R})$. Here different lattices $\Gamma_{1}$ and
$\Gamma_{2}$ of $\mathsf{A}$ determine non-isomorphic
flat affine manifolds. However, different lattices may project to the same holonomy group in $\mathsf{A}$.
More striking examples arise from
the construction of the tori ${\mathcal{T}}_{\tilde A, \tilde B}$ in Section~\ref{sect:TABk}. In fact, for every non-complete homogeneous flat affine manifold modeled on
$\mathsf{C}_{1}$ or $\mathsf{B}$ one can construct
a \emph{non-homogeneous} $({\bb A}o,\operatorname{GL}^+(2,{\bb R}))$-manifold
which has the same holonomy group in $\operatorname{GL}^+(2,{\bb R})$.
These examples show in particular that the affine holonomy group does \emph{not} determine the development image.
\end{example}
We can consider also the corresponding
$(\widetilde{\bbA^2 \! - 0},\widetilde{\operatorname{GL}}^+\! (2,{\bb R}))$-manifolds.
Here we have:
\begin{theorem}
\label{thm:bbao_rigid}
All compact $(\widetilde{\bbA^2 \! - 0},\widetilde{\operatorname{GL}}^+\! (2,{\bb R}))$-manifolds
are determined up to isomorphism by their holonomy group
in $\widetilde{\operatorname{GL}}^+ \! (2,{\bb R})$.
\end{theorem}
\begin{proof} For the complete manifolds the rigidity is shown
in Example \ref{ex:toprig}. In particular, complete
and non-complete manifolds do not share the same holonomy
in $\widetilde{\operatorname{GL}}^+ \! (2,{\bb R})$ (although they often do in $ \operatorname{GL}^+(2,{\bb R})$). In fact, the holonomy group of every complete manifold has an element with non-zero angle of rotation (cf.\ Definition \ref{def:posrot}), which is not possible if the
development image is one of the domains $\mathcal H$,
$\mathcal Q$. Similarly, the non-complete examples are
lattice quotients of a simply connected abelian Lie group contained
in $\operatorname{GL}^+(2,{\bb R})$ which acts simply transitively on some open
domain in ${\bb A}^2$. Therefore, these manifolds are determined
by their affine holonomy group.
\end{proof}
Also the following is an immediate consequence of the above classification result:
\begin{corollary} Let $\Gamma \leq \widetilde{\operatorname{GL}}^+ \! (2,{\bb R})$
be an infinite discrete subgroup which acts properly on $\widetilde {\bb A}o$. Then one of the following holds:
\begin{enumerate}
\item $\Gamma$ is isomorphic to ${\bb Z}$, and it is generated by an expansion or by an element of non-zero rotation.
\item $\Gamma$ is isomorphic to ${\bb Z}^2$ and it is
conjugate to one of the subgroups constructed in Examples
\ref{ex:gentorusnh} or \ref{ex:gentorush}.
\end{enumerate}
\end{corollary}
\subsection{The global model spaces} \label{sect:univcovs}
The classification theorem, Theorem \ref{thm:classification},
implies that there exist precisely four simply connected flat
affine manifolds
which appear as the universal covering space of a flat affine two-torus.
These simply connected model spaces for
two-dimensional compact flat affine manifolds
are the plane ${\bb A}^2$, the half-plane ${\cal H}$,
the sector ${\cal Q}$, and $\widetilde{{\bb A}o}$,
the universal covering space of the once punctured plane. \\
This follows from the classification of development images
once the following fact is established. \index{development maps!for flat affine two-torus}
\begin{proposition} \label{prop:Deviscovering}
The development map of a flat affine two-torus $M$ is a covering map.
\end{proposition}
The main step in the proof of Proposition \ref{prop:Deviscovering} relies on the
decomposition of $M$ into fundamental pieces, which are
called bricks. This concept is due to Benoist \cite{Benoist_Np}.
The bricks in this case are flat affine cylinders with geodesic boundary.
This brick decomposition for flat affine two-tori resembles the pants
decomposition for closed hyperbolic surfaces (see \cite{Ratcliffe}).
\emph{It will also
serve us in the parametrisation of the deformation space in
Section~\ref{sect:cart}.}
\subsubsection{The brick decomposition for flat affine two-tori}
Let $\tilde M$ be the universal covering flat affine manifold of $M$,
$\Gamma \leq {\rm Aff}(\tilde{M})$ the
group of covering transformations, and
$D: \tilde M \rightarrow {\bb A}^2$ the development map. Let $N \leq {\rm Aff}(2)$ be
the identity component of a maximal abelian subgroup containing $h(\Gamma)$,
as in Section~\ref{sect:devimages}.
By Proposition \ref{prop:liftofN}, the action of $N$ on the development image $D(\tilde M)$ lifts to an action of its universal covering
$\tilde N$ on $\tilde M$, such that
$D$ is an equivariant map $\tilde M \rightarrow D(\tilde M)$.
\paragraph{Homogeneous flat affine tori}
In case that the development image $D(\tilde M)$ is ${\bb A}^2$, or the half-plane ${ \cal H}$, or the sector ${\cal Q}$, $N$ is simply connected and
acts simply transitively on $D(\tilde M)$.
It follows that $\tilde N$ acts simply transitively on $\tilde M$, and
$D$ is an equivariant local diffeomorphism. Hence,
$D: \tilde M \rightarrow D(\tilde M)$ is an affine diffeomorphism.
If the development image is ${\bb A}o$ and $N$ is conjugate to $\operatorname{GL}(1,{\bb C})$
then $\tilde N$ acts simply transitively on $\tilde M$. It follows
that $D$ is a covering map. We thus have an affine covering
$$ \widetilde {\bb A}o \; \, \stackrel{D}{\longrightarrow} \; \, {\bb A}o \; \; . $$
This proves that the development map is a covering map
for all homogeneous flat affine tori $M$.
\paragraph{Inhomogeneous flat affine tori}
We assume now
that $M$ is \emph{not} homogeneous. Therefore,
the development image of $M$ is ${\bb A}o$ and
$N$ is different from $\operatorname{GL}(1,{\bb C})$. Then $N$ equals either
the group $\mathsf{B}$ of diagonal matrices with positive entries
or the group $\mathsf{C}_{1}$ (compare Example \ref{ex:domains}).
The open orbits of $N$ on ${\bb A}o$ are the open quadrants in the case $N= \mathsf{B}$, and the open half spaces in the case $N = \mathsf{C}_{1}$.
In particular, in this case, $N$ does not act transitively on ${\bb A}o$, and therefore $M$ is not a homogeneous flat affine torus. However,
the orbits of $N$ on $M$ decompose $M$ into finitely many pieces,
the \emph{bricks}, from which $M$ is constructed.
\begin{proposition}[Brick Lemma for the flat affine two-torus] \label{prop:bricklemma}
Let $\Omega= \tilde N \tilde x_{0}$ be an open orbit of $\tilde N$ on $\tilde M$,
and\/ $\bar \Omega$ the closure of\/ $\Omega$. Let\/ $\Gamma_{0} =
\{ \gamma \in \Gamma \mid \gamma \bar \Omega = \bar \Omega\}$. Then
\begin{enumerate}
\item $D: \, \bar \Omega \, \rightarrow \,{\bb A}o$ is a diffeomorphism
onto its image.
\item $\bar \Omega/ \, \Gamma_{0}$ is
a flat affine cylinder with geodesic boundary.
\end{enumerate}
\end{proposition}
\begin{proof}
Observe that $N = \tilde N$ is simply connected.
Put $x_{0}= D \tilde x_{0}$. It follows that $D: \tilde N \tilde x_{0} \rightarrow N x_{0}$ is a diffeomorphism. The complement of $N \tilde x_{0}$ in its
closure $\overline{N \tilde x_{0}}$ consists of
one-dimensional orbits for $N$, which are diffeomorphic to a ray in ${\bb A}o$.
Since $D$ is a local diffeomorphism on $\tilde M$,
$\overline{N \tilde x_{0}}$ contains precisely two such orbits,
which map to their corresponding orbits in ${\bb A}o$, see Figure \ref{figure:strips}.
It follows that $D$ is injective on the closure $\overline{N \tilde x_{0}}$ and,
in fact, $ D: \overline{N \tilde x_{0}} \rightarrow {\bb A}o$ is a diffeomorphism onto its image.
This proves (1).
To prove (2), remark first that $\tilde N$ has at most finitely many orbits on
the compact manifold $M$. This implies that there are only finitely
many orbits of $\Gamma \tilde N$ on $\tilde M$. Since $\Gamma$ acts
properly on $\tilde M$, it follows that
every compact subset $\kappa$ of $\tilde M$ intersects only finitely many
orbits of $\tilde N$. In particular, $\kappa$ intersects
only finitely many components of $\Gamma \bar \Omega$. Therefore,
$\Gamma \bar \Omega$ is closed in $\tilde M$. Hence, $\bar \Omega$
projects to a compact subset in $M$.
We may assume (by replacing $M$ with a finite covering manifold if necessary)
that $h(\Gamma)$ is contained in $N$.
Note then that if $\gamma \in \Gamma$ is such that $\gamma \bar \Omega
\cap \bar \Omega \neq \emptyset$ then $\gamma \in \Gamma_{0}$. In fact, since
$h(\gamma) \in N$, there exists $\tilde n \in \tilde N$ such that
$h(\gamma \tilde n) = 1$. Since $D$ is a diffeomorphism on $\bar \Omega$
and $\gamma \tilde n \bar \Omega \cap \bar \Omega \neq \emptyset$, we conclude that $\gamma \tilde n $ preserves both boundary components
of $\bar \Omega$. Thus $\gamma \tilde n \Omega = \Omega$,
and therefore $\gamma = \tilde n^{-1} \in \tilde N$. In particular, $\Gamma_{0} = \Gamma \cap
\tilde N$. Moreover, it follows that $\bar \Omega/ \Gamma_{0}$ is the image of
$\bar \Omega$ in $M = \tilde M/\Gamma$.
We thus proved that $\bar \Omega/ \Gamma_{0}$ is compact.
Since $\bar \Omega$ has two boundary components which are
geodesic rays, $\bar \Omega/ \Gamma_{0}$ must be a flat affine cylinder
with geodesic circles as boundary components. This proves (2).
\end{proof}
\begin{figure}
\caption{The orbits of $\tilde N$ in $\tilde M$.}
\label{figure:strips}
\end{figure}
The development image of $\bar \Omega$ is a closed half space
or a sector. This implies:
\begin{proposition} If the development image is ${\bb A}o$ then
the holonomy $h(\Gamma_{0})$ is generated by an expansion.
\end{proposition}
\begin{proof}
The proof of the previous proposition
implies that $h(\Gamma_{0})$ is contained in $N$.
Since
$\Gamma_{0}$ acts properly on $\overline{N \tilde x_{0}}$, and since $D$ is a
diffeomorphism onto its image $\overline{N x_{0}}$,
it follows that $h(\Gamma_{0})$ acts properly and with compact quotient on the orbit closure $\overline{N x_{0}}$.
This implies that $h(\Gamma_{0})$ has
positive eigenvalues on one-dimensional orbits, and a fortiori, by properness
on $\overline{N x_{0}}$, it must be a group of expansions of $\operatorname{GL}(2,{\bb R})$ (see Figures \ref{figure:nonexpanding}
and \ref{figure:expanding}).
\end{proof}
\paragraph{Final step in the proof.}
By the brick lemma (Proposition \ref{prop:bricklemma}), $M$ decomposes as
a finite union of copies of a flat affine cylinder $\bar \Omega/ \Gamma_{0}$, which
are glued along their boundary geodesics. Therefore, there exists a finite union
$\bar {\mathcal{H}}$ of neighbouring copies of $\bar \Omega$ in $\tilde M$, such that the torus $M$ is obtained by identifying the two boundary geodesics of $\bar {\mathcal H}/\Gamma_{0}$ by an affine
transformation $\tilde B$ in ${\rm Aff}(\tilde M)$.
Since $h(\tilde B) \in \operatorname{GL}^+\!(2,{\bb R}) $ commutes with $N$, it follows that
$B= h(\tilde B)$ is contained in $N$ or $B \in {\mathrm{R}}_{\pi} N$. It follows that
the development image of $\bar {\mathcal H}$ must be ${\bb A}o$ or $\bar {\mathcal H}_{0}$, the closed half space with the origin removed,
respectively, see Figure \ref{figure:deviscov}.
Hence, $\bar {\mathcal H}$ is affinely
diffeomorphic to one of the strips $\bar {\mathcal{H}}_{0}^k$,
which are defined in Section~\ref{sect:acylinders}. Let $A \in \operatorname{GL}^+\!(2,{\bb R})$
be the expansion which generates $\Gamma_{0}$. Then $\bar {\mathcal H}/\Gamma_{0}$ is a flat affine cylinder $\mathcal C^k_{A}$, as defined in Example \ref{ex:acylinders2}. Therefore,
$M$ is affinely diffeomorphic to a flat affine torus
${\mathcal T}_{A,B,k}$ constructed in Example
\ref{ex:gentorusnh}. In particular, $M$ is affinely diffeomorphic to
a quotient of $\widetilde {\bb A}o$, by a properly discontinuous subgroup
$$ \Gamma = \langle \tilde A, \tilde B_{k} \rangle$$ of affine
transformations in ${\rm Aff}(\widetilde {\bb A}o)$.
\begin{figure}
\caption{The strip $\bar {\mathcal H}$.}
\label{figure:deviscov}
\end{figure}
\begin{corollary} \label{cor:inhomog} \index{flat affine torus!non-homogeneous}
Every non-homogeneous flat affine two-torus $M$ is affinely
diffeomorphic to a two-torus ${\mathcal{T}}_{A,B,k}$
(see Example \ref{ex:gentorusnh}), and $M$
is obtained by gluing
a flat affine cylinder $\mathcal{C}^k_{A}$, where $A \in \operatorname{GL}^+\!(2,{\bb R})$
is an expansion, and $B \in \operatorname{GL}^+\!(2,{\bb R})$ has
positive eigenvalues and commutes with $A$.
\end{corollary}
\section{The topology of the deformation space} \label{sect:thedefspace}
In this section we describe the global and local structure
of the deformation space ${\mathfrak D}(T^2,{\bb A}^2)$ of all flat affine
structures on the two-torus. The deformation
space decomposes into two \emph{overlapping}
subsets: the open subspace ${\mathfrak D}(T^2,\widetilde{\bbA^2 \! - 0})$ of structures modeled on the once punctured plane $\widetilde{\bbA^2 \! - 0}$ and
the closed subspace ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ of homogeneous flat affine structures. We describe the structure and topology of these
two subspaces separately in Sections 6.3 to 6.4.
In Section 6.5 we deduce our main result that the
holonomy map for the deformation space ${\mathfrak D}(T^2,{\bb A}^2)$
is a local homeomorphism.
\subsection{Flat affine connections} \label{sect:affine_conns}
In this subsection we introduce flat affine connections. These
provide another point of view on flat affine structures,
which turns out to be particularly useful in the study of homogeneous flat affine manifolds. \\
An {\em affine connection\/} on the tangent bundle of $M$ is determined by a covariant differentiation
operation on vector fields which is
an ${\bb R}$-bilinear map
\begin{align*}
{\nabla}: {\rm Vect}(M) \times {\rm Vect}(M) & \longrightarrow {\rm Vect}(M) \\
(X,Y) & \longmapsto \nabla_X(Y)
\end{align*}
and which, for $f\in C^\infty(M)$,
satisfies
\begin{equation*}
\nabla_{fX} Y = f \nabla_{X} Y, \qquad \text{and } \qquad
\nabla_{X}(f Y) = f \nabla_{X} Y + (Xf) Y
\end{equation*}
(where $Xf\in C^\infty(M)$ denotes the directional derivative of
$f$ with respect to $X$).
The connection is \emph{torsion free} if and only if, for all $X,Y \in {\rm Vect}(M)$,
\begin{equation} \label{eq:torsion}
\nabla_X Y - \nabla_Y X = [X,Y] \; ,
\end{equation}
and it is \emph{flat} if and only if the curvature tensor $R^\nabla$
vanishes. That is, if
\begin{equation} \label{eq:curv}
R^\nabla(X,Y) = \nabla_X \nabla_Y - \nabla_Y \nabla_X - \nabla_{[X,Y]} = 0 \; .
\end{equation}
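In local coordinates, writing $\nabla_{\partial_i} \partial_j = \Gamma_{ij}^k \, \partial_k$, the two conditions \eqref{eq:torsion} and \eqref{eq:curv} become (summation over repeated indices understood)
$$ \Gamma_{ij}^k = \Gamma_{ji}^k \; , \qquad
\partial_i \Gamma_{jk}^l - \partial_j \Gamma_{ik}^l
+ \Gamma_{jk}^m \Gamma_{im}^l - \Gamma_{ik}^m \Gamma_{jm}^l = 0 \; . $$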
\subsubsection{Correspondence with flat affine structures}
Specifying an affine structure on $M$ is equivalent
to giving a torsion free flat affine connection on the tangent bundle
of $M$. Indeed, let $M$ be a flat affine manifold. Then the affine structure
defines a unique torsion free flat affine connection on $M$ by pulling back the canonical affine connection on ${\bb A}^n$
(that is, the usual derivative on ${\bb R}^n$) via a development map. Conversely, given any torsion free flat affine connection $\nabla$ on $M$, for each $p \in M$, the exponential map for $\nabla$ at $p$
is a connection preserving diffeomorphism from an open subset of the tangent vector space $T_{p} M$ (with the canonical flat affine connection) to a neighborhood of $p$, compare \cite[VI. Theorem 7.2]{KN1}. This gives rise to an atlas of locally affine coordinates and therefore determines a unique flat affine structure on $M$.
Thus, there is a natural one-to-one correspondence
\begin{equation} \label{eq:correspondac}
{\mathfrak S}(M,{\bb A}^n) \, \longleftrightarrow \; \{ \text{torsion free flat affine connections on $M$} \}
\end{equation}
of the set of flat affine structures ${\mathfrak S}(M,{\bb A}^n)$ with a set of affine connections.
An affine connection is called \emph{complete} if all of its geodesics can be extended to infinity. Under the correspondence \eqref{eq:correspondac} complete affine structures are in bijection with complete affine connections.
\\
Observe that the difference of two affine connections is a tensor field on $M$ and therefore the set of all affine connections forms an affine space.
\begin{example}[Flat connections form a closed
subset of an affine space] \label{ex:conn_affine_space}
Let $E$ denote the tangent bundle of the flat affine manifold
$M$. Let $\nabla_{0}$ be the natural flat connection induced on $M$ by its flat affine structure.
We choose $\nabla_{0}$ as a basepoint in the space of all affine connections on $M$. Every torsion free affine connection on $M$ is of
the form $\nabla = \nabla_{0} + S$, where $S \in \Gamma(S^2 E^* \otimes E)$ is a vector valued symmetric form on $M$.
The set of all torsion free affine connections $\nabla$ on $M$ is
thus an affine space modeled on the vector space $\Gamma(S^2 E^* \otimes E)$. Every torsion free \emph{flat}
affine connection on $M$ is of
the form $\nabla = \nabla_{0} + S$, where $S$ is contained
in the closed subset ${\mathcal{C}}$ of $\Gamma(S^2 E^* \otimes E)$
defined by the equation \eqref{eq:curv}, which encodes the vanishing of curvature.
\end{example}
The space of sections
$\Gamma(S^2 E^* \tensor{} E)$ carries the $C^\infty$-topology of maps. This defines
a topology on the space of torsion free affine connections.
\begin{proposition} \label{prop:corres_is_homeo}
The natural correspondence \eqref{eq:correspondac}
of flat affine structures with flat torsion free affine connections is a homeomorphism. In particular, the space of flat affine structures ${\mathfrak S}(M,{\bb A}^n)$ is homeomorphic to the closed subset $\mathcal C$ in the tensor space $\Gamma(S^2 E^* \tensor{} E)$ as described above.
\end{proposition}
\begin{proof} Let us fix a flat affine structure on $M$ and let $\nabla_{0}$ be its compatible torsion free flat affine connection. Let $\nabla = \nabla_{0}+S$ be another torsion free flat affine connection on $M$.
In a local flat affine coordinate chart for $M$,
$S \in {\mathcal{C}}$ is represented by a set of functions $\Gamma_{ij}^k$ which are called Christoffel symbols for $\nabla$, see \cite[III. Proposition 7.10]{KN1}. We observe
that the functions $\Gamma_{ij}^k$ also coincide with the coordinate representation of the tensor $S$.
Therefore, a sequence $\nabla_{n}$ of affine connections is convergent if and only if the corresponding Christoffel symbols converge in all local flat affine coordinate systems.
Let $D_{n}$ be a sequence of development maps and
consider the corresponding sequence of flat affine connections
$\nabla_{n}$.
Since the Christoffel symbols for $\nabla_{n}$ are polynomials in the first and second derivatives of $D_{n}$ (see \cite{KN1}), convergence of $D_{n}$ implies convergence of $\nabla_{n}$.
Therefore, the correspondence \eqref{eq:correspondac} is continuous.
Conversely, for any torsion free flat affine connection $\nabla$ on $M$, normal coordinate systems on $M$ define compatible
coordinate charts for the flat affine structure defined by
$\nabla$, see \cite[VI. Theorem 7.2]{KN1}.
Normal coordinate systems for $\nabla$ are determined by an ordinary differential equation whose solutions depend smoothly on the Christoffel symbols for $\nabla$.
Hence, the flat affine coordinate charts for $\nabla$
depend smoothly on $\nabla$. This shows that the correspondence \eqref{eq:correspondac} is a homeomorphism.
\end{proof}
\subsubsection{Translation invariant flat affine connections} \label{sect:transinv_cons}
An affine connection $\nabla$ on the two-torus $$ T^2 = S^1 \times S^1 $$ is called \emph{translation invariant} if the group $S^1 \times S^1$ acts by affine transformations. Let $\nabla_{0}$ be the natural Riemannian flat affine connection on $T^2$.
It is characterized by the property that the translation vector fields of the $S^1 \times S^1$-action are parallel. A connection $\nabla=\nabla_{0} + S$ is then translation invariant if and only if, for all translation vector fields $X, Y$ of the $S^1 \times S^1$-action, the covariant derivative $\nabla_{X} Y$ is again a $\nabla_{0}$-parallel vector field.
This condition is satisfied if and only if the Christoffel symbols $S$ are constant functions in the flat coordinates for $\nabla_{0}$.
Therefore, the set of all translation invariant torsion free flat affine connections is in bijection with the subset $ {\mathcal{C}}(T^2,{\bb R})$ of ${\mathcal{C}}$ which consists of all \emph{constant} (that is, of all $\nabla_{0}$-parallel) tensors contained in ${\mathcal{C}}$.
\begin{remark} Equation \eqref{eq:curv} shows that ${\mathcal{C}}(T^2,{\bb R})$ is a \emph{quadratic cone} in the vector space of symmetric bilinear maps $S^2 \, {\bb R}^2 \tensor{} {\bb R}^2$.
Every element $S \in {\mathcal{C}}(T^2,{\bb R})$
represents a symmetric bilinear product
$$ \cdot_{\nabla}: \, {\bb R}^2 \times {\bb R}^2 \rightarrow {\bb R}^2 \, , \; \, u \cdot_{\nabla} v := \, S(u,v) \: , $$
which, for all $u,v,w$, satisfies the associativity relation
$$ (u \cdot_{\nabla} v) \cdot_{\nabla} w = u \cdot_{\nabla} (v\cdot_{\nabla} w) \; . $$
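This reflects the flatness of $\nabla$. Indeed, one may check that for $\nabla_{0}$-parallel vector fields $u,v,w$ the curvature of $\nabla = \nabla_{0} + S$ reduces to
$$ R^{\nabla}(u,v)\, w \, = \, S(u, S(v,w)) - S(v, S(u,w)) \, = \, u \cdot_{\nabla} (v \cdot_{\nabla} w) - (u \cdot_{\nabla} w) \cdot_{\nabla} v \; , $$
where the last equality uses the symmetry of $S$; for a commutative product the vanishing of $R^{\nabla}$ is thus equivalent to the associativity relation above.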
This product defines a left-invariant flat affine connection on the abelian Lie group ${\bb R}^2$ by extending the covariant derivative from left-invariant vector fields to all vector fields.
Indeed, there is a general correspondence of \emph{associative}, and more generally \emph{left-symmetric} algebra products with left-invariant torsion free flat affine connections on Lie groups, see for example \cite[Section~ 5.1]{BauesPSR}. Under this correspondence
complete connections are represented by products which have
the property that all maps $v \mapsto u \cdot_{\nabla} v$ have trace zero (compare \cite[Corollary 5.7]{BauesPSR}).
\end{remark}
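As a simple illustration, consider the commutative product on ${\bb R}^2$ determined by $e_{1} \cdot_{\nabla} e_{1} = e_{2}$, with all other products of basis vectors equal to zero. All double products vanish, so the product is associative, and every left multiplication $v \mapsto u \cdot_{\nabla} v$ is nilpotent, hence of trace zero. The corresponding translation invariant flat affine connection on $T^2$ is therefore complete; it defines the well-known complete, non-Riemannian flat affine structure on the two-torus.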
We summarize this discussion by the following:
\begin{corollary}
\begin{enumerate}
\item The set of all translation invariant flat affine connections on $T^2$ is homeomorphic to a four-dimensional
homogeneous quadratic cone ${\mathcal{C}}(T^2, {\bb R})$ in the six-dimensional vector space $S^2 {\bb R}^2 \tensor{} {\bb R}^2$.
\item
The subset of \emph{complete} translation invariant flat affine structures on $T^2$ is homeomorphic to a two-dimensional homogeneous quadratic cone in the vector space ${\bb R}^4$.
\end{enumerate}
\end{corollary}
In particular, one can deduce from (2) that the set of complete translation invariant flat affine connections on the two-torus is homeomorphic to ${\bb R}^2$. In view of Lemma \ref{lem:transhomotopic}, this
gives yet another proof of the fact (cf.\ Example \ref{ex:completeas}) that the deformation space of complete affine structures on the two-torus is homeomorphic to ${\bb R}^2$.
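This count can be illustrated explicitly. One may check that products of the form $u \cdot_{\nabla} v = l(u)\, l(v)\, e$, where $l$ is a linear form on ${\bb R}^2$ and $e \in \ker l$, are commutative and associative, and that every left multiplication $v \mapsto u \cdot_{\nabla} v$ has trace $l(u)\, l(e) = 0$, so the associated translation invariant connections are complete. Since the tensor $S = l \otimes l \otimes e$ is unchanged under the rescaling $(l, e) \mapsto (t\, l, t^{-2} e)$, this family realizes a two-dimensional cone of complete structures, in accordance with the corollary.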
\subsection{Translation invariant flat affine structures} \label{sect:homog_structs}
The usual representation of the two-torus $ T^2 = {\bb R}^2 / {\bb Z}^2$ as a quotient of the vector group ${\bb R}^2$ by its integral lattice tacitly induces various extra structures.
The translation action of the vector space ${\bb R}^2$
gives a simply transitive action of the abelian Lie group $S^1 \times S^1$ and the vector space structure on ${\bb R}^2$ descends to a compact abelian Lie group structure
on $T^2$. Similarly, the ordinary flat affine structure on ${\bb R}^2$ induces the natural Riemannian flat affine structure on $T^2$
which is invariant by the translation
group $S^1 \times S^1$.
\begin{definition} A flat affine structure on $T^2 = S^1 \times S^1$ is called {\em translation invariant} if the group $S^1 \times S^1$ acts by affine transformations.
\end{definition}
A translation invariant flat affine structure is thus compatible with the Lie group structure on $T^2$. In particular, every flat affine torus with a translation invariant flat affine structure is also a homogeneous flat affine torus.
Note that the set of all
translation invariant flat affine structures ${\mathcal{T}}(T^2,{\bb A}^2)$
corresponds to the set of translation invariant flat affine connections ${\mathcal{C}}(T^2, {\bb R})$ under the map \eqref{eq:correspondac}.
\subsubsection{Relation with homogeneous flat affine tori}
\label{sect:homandtransinv_structs}
Let $M$ be a homogeneous flat affine two-torus, and ${\rm Aff}(M)_{0}$ the identity component of its affine automorphism
group. By the classification Theorem \ref{thm:classification},
the following two cases occur: \begin{enumerate}
\item
Either the Lie group ${\rm Aff}(M)_{0}$ is isomorphic to $S^1 \times S^1$ and it develops to an
action of an affine Lie group
as listed in Example \ref{ex:domains}.
\item Or $M$ is affinely diffeomorphic to a Hopf torus. In this case, ${\rm Aff}(M)_{0}$ contains a simply transitive group isomorphic to $S^1 \times S^1$, and this subgroup
develops to the action of an affine Lie group of
type $\mathsf{A}$.
\end{enumerate}
In particular, for every homogeneous flat affine two-torus $M$,
the identity
component ${\rm Aff}(M)_{0}$ of the affine
automorphism group of $M$ contains a two-dimensional compact abelian Lie group, which acts transitively and freely on $M$.
This shows that every homogeneous flat affine two-torus is affinely diffeomorphic to a translation invariant flat affine torus.
\\
Recall that the diffeomorphism group ${\rm Diff}(T^2)$ acts
on the set of all flat affine structures, and two flat affine structures on $T^2$ are called homotopic if they are equivalent by a diffeomorphism of $T^2$ which is homotopic to the identity. \index{flat affine torus!homogeneous}
\index{flat affine torus!translation invariant}
\begin{lemma} \label{lem:transhomotopic}
Every homogeneous flat affine two-torus is
homotopic (isotopic) to a unique translation invariant flat
affine two-torus.
\end{lemma}
\begin{proof} Let $(f,M)$ be a marked homogeneous flat affine two-torus, where $f: T^2 \rightarrow M$ is a diffeomorphism. By the above remarks, we may choose a Lie subgroup ${\mathcal{A}}$ of ${\rm Aff}(M)_{0}$ which acts simply transitively on $M$. The subgroup ${\mathcal{A}}$ is unique up to conjugacy in ${\rm Aff}(M)_{0}$.
We also choose a basepoint $m_{0} \in M$.
This fixes the structure of a compact abelian Lie group on $M$
which is isomorphic to ${\mathcal{A}}$. Then there exists a \emph{unique} isomorphism of Lie groups $\phi: T^2 \rightarrow M$ such that $\phi^{-1} \circ f$ is homotopic to the identity of $T^2$. In other words,
$(\phi,M)$ and $(f,M)$ are equivalent markings (cf.\ Section~ \ref{sect:markedstructures}). By construction, the affine structure on $T^2$ induced by $\phi$ is translation invariant, and it is homotopic to the original homogeneous structure on $T^2$, which is induced by $(f,M)$. It is also independent of the choice of basepoint since ${\mathcal{A}}$ acts transitively on $M$ (compare also Example \ref{ex:hom_structures}). Neither does it depend on the choice of the subgroup ${\mathcal{A}}$ in ${\rm Aff}(M)_{0}$, since the conjugacy class of ${\mathcal{A}}$ in ${\rm Aff}(M)_{0}$ is uniquely determined. In particular, this argument implies that every two translation invariant structures which are homotopic do coincide. This shows
uniqueness.
\end{proof}
The Lemma asserts that every orbit of the identity component ${\rm Diff}_{0}(T^2)$ of the group of all diffeomorphisms ${\rm Diff}(T^2)$ acting on homogeneous flat affine
structures intersects the subset of translation invariant
structures in precisely a single point. The proof also shows that
on the subset of marked homogeneous tori which are in the complement of Hopf tori, we have a continuous projection
onto translation invariant tori. This proves that outside the
Hopf tori the subset of translation invariant flat affine structures on $T^2$ defines a slice for the action of ${\rm Diff}_{0}(T^2)$ on the set of all homogeneous flat affine structures.
\subsubsection{Translation invariant development maps} \label{sect:devsection}
We construct an explicit \emph{continuous section} from the set of translation invariant flat affine structures to development maps.
More specifically, we construct a continuous map
\begin{equation} \label{eq:LSAsection}
{\mathcal{T}}(T^2,{\bb A}^2) = {\mathcal{C}}(T^2,{{\bb R}})
\; \stackrel{\mathcal E}{\longrightarrow} \; {\mathrm{Dev}}(T^2,{\bb A}^2) \; ,\; \,
S \mapsto D_{S}
\end{equation}
such that the development map $D_{S}$ defines an affine structure on $T^2$ which has associated flat affine connection $\nabla = \nabla_{0}+ S$.
The construction is based on the relation of translation invariant flat affine structures
with the set
of commutative associative algebra products on ${\bb R}^2$ as follows:
\begin{example}[Associated \'etale affine representation] \label{ex:developing_section} \index{etale@\'etale representation!affine}
For $S \in {\mathcal{C}}(T^2, {\bb R})$, and $v \in {\bb R}^2$ we define
an element $$ \bar \rho(v) =
\begin{pmatrix} S(v, \cdot ) & v \\
0 & 0
\end{pmatrix} \in \mathfrak{aff}(2)
$$
of the Lie algebra $\mathfrak{aff}(2)$ of the
affine group ${\rm Aff}(2)$. In fact,
the map $v \mapsto \bar \rho(v)$ is a Lie algebra
homomorphism and the associated homomorphism of Lie groups
$$ \rho = \rho_{S}: {\bb R}^2 {\, \longrightarrow \, } {\rm Aff}(2) \, , \; \, v \mapsto \, \rho(v) = \exp \bar \rho(v)$$
defines an affine representation of the
Lie group ${\bb R}^2$ on ${\bb A}^2$ which is \'etale in $0 \in {\bb A}^2$
(cf.\ Definition \ref{def:etale} and also the discussion in \cite[Section~ 2.1]{BC_1}).
\end{example}
Let $\nabla$ be the translation invariant flat affine connection
on ${\bb R}^2$ which is represented by $S \in {\mathcal{C}}(T^2, {\bb R})$.
The orbit map of the \'etale representation $\rho_{S}$ is
$$ D_{S} = o_{S}: {\bb R}^2 {\, \longrightarrow \, } {\bb A}^2 \, , \; \, v \mapsto
\rho_{S}(v) \cdot 0 \; $$
and it is a development map for a translation invariant flat affine structure on $T^2 = {\bb R}^2 / {\bb Z}^2$ with associated affine connection $\nabla$. (In fact, $D_{S}$ is also a \emph{frame preserving} development map. Compare Example \ref{ex:framepreserving} and Section~ \ref{sec:framedXG}.)
Since $D_{S}$ depends smoothly on $S$, $$ {\mathcal E}(S) = D_{S}$$ defines the required continuous section.\\
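To illustrate the construction, one may take for $S$ the product with $S(e_{1},e_{1}) = e_{2}$ and all other basis products zero. Then $\bar\rho(v)$ is nilpotent with $\bar\rho(v)^3 = 0$, so $\exp \bar\rho(v)$ is a polynomial in $\bar\rho(v)$, and the orbit map evaluates to
$$ D_{S}(v) \, = \, v + \tfrac{1}{2}\, S(v,v) \, , \quad \text{that is,} \quad D_{S}(v_{1}, v_{2}) = \big(v_{1},\, v_{2} + \tfrac{1}{2}\, v_{1}^2\big) \; , $$
a polynomial diffeomorphism of the plane. In particular, $D_{S}$ is a global affine chart for the associated complete translation invariant structure.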
Note that the holonomy homomorphism $h = h^\nabla: {\bb Z}^2 \rightarrow {\rm Aff}(2)$ for $D_{S}$ satisfies $$ h^\nabla(\gamma) = \rho(\gamma) , \text{ for all $\gamma \in {\bb Z}^2$} \; .$$
We state without proof:
\begin{lemma} The continuous map
$ {\mathcal{C}}(T^2, {\bb R}) \rightarrow \operatorname{Hom}({\bb Z}^2,{\rm Aff}(2))$, $\nabla \mapsto h^\nabla$,
is locally injective.
\end{lemma}
\subsection{The space of structures modeled on $({\bb A}o,\operatorname{GL}(2, {\bb R}))$}
Here we discuss in detail the subspace of the deformation space
of flat affine structures on the two-torus which consists
of structures which have the once-punctured plane as
development image. Our main observation is that the holonomy map is a local homeomorphism on such structures.
\subsubsection{The holonomy map}
We consider first the subgeometry of structures
which are modeled on the universal covering of
the once-punctured plane.
The topology of the deformation space of
such structures
is completely controlled by the holonomy map into
the space of conjugacy classes of homomorphisms
(the ``character variety''):
\index{holonomy map!is a local homeomorphism}
\index{holonomy map}
\begin{theorem} \label{thm:tbbaotop}
The holonomy map
$$ {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) \, \stackrel{hol}{\longrightarrow} \,
\operatorname{Hom}({\bb Z}^2, \widetilde \operatorname{GL}^+\!(2,{\bb R}))/ \widetilde \operatorname{GL} (2,{\bb R})$$
embeds the deformation space homeomorphically as an open connected subset of the character variety.
\end{theorem}
\begin{proof} Note that, since $T^2$ is orientable, the holonomy takes values in $\widetilde \operatorname{GL}^+\!(2,{\bb R})$. The map $hol$ is injective, by Theorem \ref{thm:bbao_rigid}. Since $hol$ is also continuous and
open, it is a homeomorphism onto an open subset.
Connectedness of the deformation space
will follow from the considerations
in Section~ \ref{sect:cart} below.
\end{proof}
Now we look at the deformation space of structures which are
modeled on the once-punctured plane. For such structures the holonomy map is not injective, as we already remarked in Example \ref{ex:hol_notinj}. However, as we show now, at least \emph{locally} the topology of the deformation space of $( {\bb A}o, \operatorname{GL}(2,{\bb R}))$-structures is fully controlled by the character variety:
\begin{corollary} \label{cor:bbaotop}
The holonomy map
$$ {\mathfrak D}(T^2, {\bb A}o) \, \stackrel{hol}{{\, \longrightarrow \, }} \,
\operatorname{Hom}({\bb Z}^2, \operatorname{GL}^+(2,{\bb R}))/ \operatorname{GL}(2,{\bb R})$$
is a local homeomorphism onto its image, which is
a connected open subset in the
character variety.
\end{corollary}
\begin{proof} Since the subgeometry
$$ (\widetilde{\bbA^2 \! - 0}, \widetilde \operatorname{GL}(2,{\bb R})) {\, \longrightarrow \, } ({\bb A}o, \operatorname{GL}(2,{\bb R}))$$
is a covering, the induced map on deformation spaces (cf.\ Section~ \ref{sect:sub_geom})
$$ {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) {\, \longrightarrow \, } {\mathfrak D}(T^2, {\bb A}o) $$
is a homeomorphism by Lemma \ref{lem:covering_geoms}.
The commutative diagram \eqref{eq:subgeometry1} for the subgeometry takes the form
\begin{align} \label{eq:subgeometry2}
\xymatrix{
\; \; {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) \; \; \ar[d]_{\approx} \ar@{^{(}->}[r]^(0.35){hol}
& \; \;\operatorname{Hom}({\bb Z}^2, \widetilde \operatorname{GL}^+\!(2,{\bb R}))/ \widetilde \operatorname{GL}\! (2,{\bb R}) \; \; \ar[d] \\
\; \; {\mathfrak D}(T^2, {\bb A}o) \; \; \ar[r]^(0.35){hol} & \; \;
\operatorname{Hom}({\bb Z}^2, \operatorname{GL}^+(2,{\bb R}))/ \operatorname{GL}(2,{\bb R}) \, .}
\end{align}
Note that, by Theorem \ref{thm:tbbaotop}, the top horizontal map
is a topological embedding. Furthermore, by Corollary \ref{cor:ind_cov2}, the right vertical map is a local homeomorphism.
We deduce that the bottom
map $hol$ for $ {\mathfrak D}(T^2, {\bb A}o) $
is locally injective, and therefore it is a local homeomorphism onto an open subset.
\end{proof}
In the situation of Corollary \ref{cor:bbaotop},
all local topological properties of the deformation space
are reflected in the character variety and also vice versa.
For instance, singularities in the character variety give rise to singularities in the deformation space, as is the case in Example \ref{ex:defissing}. \emph{This shows that
$ {\mathfrak D}(T^2, {\bb A}o) $ is not Hausdorff, and it is not even a $\mathrm{T}_{1}$-topological space.} \index{deformation space!non-closed point of}
\subsubsection{Cartography of the deformation space} \label{sect:cart}
Important strata in the \index{deformation space!stratification of}
deformation space $$ {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) $$
arise from the orbit types of the action of
$\widetilde \operatorname{GL}^+ \! (2,{\bb R})$ on the image of
the holonomy map $$ \mathsf{hol}: {\mathrm{Dev}}(T^2, \widetilde{\bbA^2 \! - 0}) \, {\, \longrightarrow \, } \, \operatorname{Hom}({\bb Z}^2, \widetilde \operatorname{GL}^+ \! (2,{\bb R})) \, \; .$$
We introduce several such strata and describe their topological relations with each other. We use this information to establish the connectedness of the deformation space.
\paragraph{Overview}
According to the classification theorem, tori which are modeled on $\widetilde{\bbA^2 \! - 0}$ fall into three main classes distinguished by their development images. Namely the classes are formed by structures which have
development image equivalent to either
\begin{enumerate}
\item the universal covering $\widetilde{\bbA^2 \! - 0}$ of the once-punctured plane (``complete structures''),
\item or a sector ${\mathcal Q}$,
\item or the open half space ${\mathcal H}$.
\end{enumerate}
The structures with development image $\widetilde{\bbA^2 \! - 0}$ comprise
non-homogeneous flat affine tori and the homogeneous
structures which arise from (lifts of) \'etale representations
of type $\mathsf{A}$.
The latter two strata arise from (the lifts of) \'etale representations
of type $\mathsf{B}$ and $\mathsf{C}_{1}$ respectively
(see Example \ref{ex:domains} for notation). Therefore
all corresponding tori in these two strata are homogeneous. \\
Another decomposition of the
deformation space is obtained by considering the subset
${\mathfrak T}$ of non-homogeneous structures and its complementary subspace ${\mathfrak D}_{h}(T^2, \widetilde{\bbA^2 \! - 0})$ consisting of homogeneous structures. The
space of non-homogeneous structures can be decomposed
into connected components parametrized by the
level of a non-homogeneous structure.
The subset of homogeneous
structures is connected. The subspace ${\mathfrak D}_{h}(T^2, \widetilde{\bbA^2 \! - 0})$ contains a two-dimensional
stratum ${\mathfrak H}$ of Hopf tori as a distinguished subset. Non-homogeneous structures are connected to homogeneous
ones only along the space ${\mathfrak H}$ of Hopf tori.
\paragraph{Hopf tori} Recall that a torus which is modeled on
$\widetilde{\bbA^2 \! - 0}$ is called a \emph{Hopf torus} \index{Hopf torus}
if its holonomy is contained in the center of
$\widetilde \operatorname{GL}^+\! (2,{\bb R})$. The holonomy
homomorphisms of marked Hopf tori form the closed subset of fixed points for the $\widetilde \operatorname{GL}(2,{\bb R})$-conjugation action on the holonomy image of $\widetilde{\bbA^2 \! - 0}$-structures. Therefore, the Hopf tori form a closed subset $${\mathfrak H} \, \subset \; \, {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) \; . $$
All Hopf tori are derived
from the \'etale affine representation of type $\mathsf{A}$
as follows.
Let
$$ o: {\bb R}^2\, {\, \longrightarrow \, } \, {\bb A}o\, , \; \, \, (t,\theta) \, \mapsto \, \exp(t) (\cos \theta, \sin \theta) $$
be the orbit map associated to the
representation $\mathsf{A}$. For $k_{1}, k_{2} \in {\bb Z}$ and
$\lambda_{1}, \lambda_{2}>0$, let
$\phi: {\bb R}^2 \rightarrow {\bb R}^2$ be the linear map which satisfies
$$ \phi(e_{1}) = (\log \lambda_{1}, k_{1} \pi) \; , \; \phi(e_{2}) = (\log \lambda_{2}, k_{2} \pi) \; . $$
Then development maps of the form
$$ D = o \circ \phi : {\bb R}^2 \rightarrow {\bb A}o $$ define
a two-parameter family of marked Hopf tori
$$ {\mathcal H}_{\lambda_{1},\lambda_{2},k_{1}, k_{2}} \; , \quad
(\log \lambda_{1}) k_{2} - (\log \lambda_{2}) k_{1} \, \neq \; 0 \, . $$
(See Section~ \ref{sect:lattices} for the general construction.)
The corresponding holonomy homomorphisms $h: {\bb Z}^2 \rightarrow \widetilde \operatorname{GL}^+ \! (2,{\bb R})$ satisfy
$$ h(e_{i}) = \, \mathrm{diag}(\lambda_{i}) \, \tau^{k_{i}} \in \widetilde \operatorname{GL}^+ \! (2,{\bb R}) \, .$$
(Here $e_{1}, e_{2}$ denote generators of ${\bb Z}^2$, and $ \mathrm{diag}(\lambda) \in A$ denotes the diagonal
matrix which has both diagonal entries
equal to $\lambda$.)
Observe that every marked Hopf torus is equivalent in the deformation space to precisely one of these tori. Forgetting about the marking, we note that every Hopf torus is affinely equivalent to a torus of the form ${\mathcal H}_{\lambda_{1},\lambda_{2},k, 0}$, where we call
$k = \mathrm{gcd}(k_{1}, k_{2}) \neq 0$ the \emph{level} of ${\mathcal H}$.\\
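For instance, taking $\lambda_{1} = \lambda_{2} = \lambda > 0$ with $\lambda \neq 1$, and $(k_{1}, k_{2}) = (1,0)$, the condition above reads $-\log \lambda \neq 0$ and is satisfied; the resulting torus ${\mathcal H}_{\lambda, \lambda, 1, 0}$ has level $1$, and its holonomy is generated by $\mathrm{diag}(\lambda)\, \tau$ and $\mathrm{diag}(\lambda)$.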
For fixed $k_{1}, k_{2} \in {\bb Z}$, the set of all
${\mathcal H}_{\lambda_{1},\lambda_{2},k_{1}, k_{2}}$ parametrizes
a closed (and also connected) subset of Hopf tori
${\mathfrak H}_{k_{1}, k_{2}}$,
and the subset of all Hopf tori decomposes as
$$ {\mathfrak H} \; = \bigcup_{(k_{1}, k_{2}) \, \neq \, (0,0) } {\mathfrak H}_{k_{1}, k_{2}} \; \subset \; \, {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) \, .
$$
\paragraph{Non-homogeneous tori} \index{flat affine torus!non-homogeneous}
We let $$ \mathfrak{T} \, \subset \; \, {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) $$ denote the subset of non-\-ho\-mo\-ge\-neous structures. Every marked manifold
${\mathcal{T}}$ which represents an element
of ${\mathfrak T}$ is equivalent as an $(\widetilde{\bbA^2 \! - 0}, \widetilde \lie{o}peratorname{GL}^+\!(2,{\bb R}))$-\-manifold to a torus $$ {\mathcal{T}}_{A,B,k} $$
as constructed in Corollary \ref{cor:inhomog}.
Here $A \in \operatorname{GL}^+(2,{\bb R})$ is an expansion and $B\in \operatorname{GL}^+(2,{\bb R})$ is upper triangular and commutes with $A$.
We call the number $k \in {\bb Z} - \{ 0 \}$
the \emph{level} of ${\mathcal{T}}$, respectively the level
of its class in ${\mathfrak T}$. Since the level cannot be zero,
claim (2) of Lemma \ref{lem:tGL2con} implies that
\emph{the set
$\mathfrak{T}$ is indeed an open subset of the deformation space.} \\
Let ${\mathfrak T}_k$ denote
the set of all elements in ${\mathfrak T}$ of level $k$.
We have the disjoint decomposition
$$ {\mathfrak T} = \bigcup_{k \in {\bb Z} - \{ 0 \}} {\mathfrak T}_k \; \, .$$
Proposition \ref{prop:levclosed}
implies that all subsets ${\mathfrak T}_k$
and their complements are closed subsets of ${\mathfrak T}$.
\paragraph{The closure of non-homogeneous tori}
We show now that the boundary of the set of non-homogeneous structures ${\mathfrak T}$
in the deformation space is formed by Hopf tori.
\begin{proposition} \label{prop:nonhom_clos}
The closure of ${\mathfrak T}_{k}$ in
${\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0})$ is ${\mathfrak T}_{k} \cup {\mathfrak H}_{k}$.
\end{proposition}
\begin{proof} Note first that the Hopf tori ${\mathcal H}_{\lambda_{1},\lambda_{2},k,0} \in {\mathfrak H}_{k}$ are in the closure of the elements ${\mathcal{T}}_{A,B,k}$ of ${\mathfrak T}_{k}$. (Just deform $A$ and $B$ to dilations.) It remains
to show that every homogeneous torus $M_{o}$ in the closure of
${\mathfrak T}_{k}$ is a Hopf torus:
Let $M_{\epsilon}$, equivalent to ${\mathcal{T}}_{A,B,k}$, be a marked non-homogeneous torus of level $k \neq 0$ which is in the vicinity of $M_o$ in the deformation space. Let $h_{\epsilon}$ denote its holonomy homomorphism. We assume that $h_{\epsilon}$ converges to $h_{o}$ in the space of conjugacy classes of
homomorphisms. The holonomy group of $M_{\epsilon}$ is generated
by $h_{\epsilon}(e_{i}) \in \widetilde\operatorname{GL}^+ \! (2,{\bb R})$, $i = 1,2$.
The conjugacy class ${\mathcal{C}} h_{\epsilon}(e_{i})$ is thus
in the vicinity of the class ${\mathcal{C}} h_{o}(e_{i})$. Proposition
\ref{prop:levclosed} implies that the projections
$\mathsf{p} (h_o(e_{i})) \in \operatorname{GL}^{+}(2,{\bb R})$
are conjugate to an element of $AN$. Since $M_{o}$ is
homogeneous, $M_{o}$ is either a Hopf torus or ${\mathop{\mathrm{lev\, }}} h_o(e_{1}) = {\mathop{\mathrm{lev\, }}} h_o(e_{2}) = 0$ (in which case $M_{o}$ has development image $\mathcal{H}$ or $\mathcal{Q}$). In the latter case, if $M_{\epsilon}$ is close enough, we must
have ${\mathop{\mathrm{lev\, }}} h_\epsilon(e_{1}) = {\mathop{\mathrm{lev\, }}} h_\epsilon(e_{2}) =0$,
again by Proposition
\ref{prop:levclosed}. This contradicts the fact that the
level of the non-homogeneous torus $M_{\epsilon}$ is
different from zero. Therefore, $M_{o}$ is a Hopf torus.
\end{proof}
\paragraph{Homogeneous tori modeled on ${\bb A}o$}
The subset of homogeneous structures
$$ {\mathfrak D}_{h}(T^2, \widetilde{\bbA^2 \! - 0}) \subset {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) $$
decomposes into three strata ${\mathfrak A}$, ${\mathfrak B}$ and
${\mathfrak C}_{1}$, which are distinguished according to
the development images
$\widetilde{\bbA^2 \! - 0}$, ${\mathcal Q}$ and ${\mathcal H}$ respectively. These structures arise from the \'etale affine representations of the abelian Lie group ${\bb R}^2$
of type $\mathsf{A}$, $\mathsf{B}$ and $\mathsf{C}_{1}$
respectively. Moreover, it follows (using the construction in Section~ \ref{sect:lattices}) that
the three strata are continuous images of homogeneous spaces
via maps
$$ \operatorname{GL}(2,{\bb R}) / N {\, \longrightarrow \, } \, {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) \, , $$
where $N$ describes the group of those automorphisms of
${\bb R}^2$ which are induced by the conjugation action of the normalizers in $\operatorname{GL}(2,{\bb R})$ for the groups
$\mathsf{A}$, $\mathsf{B}$ and $\mathsf{C}_{1}$.
(The normalizers are listed in Lemma \ref{lem:normalizers}.) In particular, it follows that the strata ${\mathfrak A}$ and ${\mathfrak B}$ are images of connected four-dimensional manifolds, while ${\mathfrak C}_{1}$ is a connected manifold of dimension three.
Note that structures in ${\mathfrak A}$ may be continuously deformed to structures in ${\mathfrak C}_{1}$, as follows from (4) of Lemma \ref{lem:tGL2con}. Compare also Figure \ref{figure:deform2}.
Similarly structures in ${\mathfrak B}$ can be deformed to structures in
${\mathfrak C}_{1}$, see Figure \ref{figure:BC1}. This shows in particular that $ {\mathfrak D}_{h}(T^2, \widetilde{\bbA^2 \! - 0})$ is connected.
\paragraph{Connectedness}
The deformation space
$$ {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0}) \, = \; {\mathfrak T} \, \cup \, {\mathfrak D}_{h}(T^2, \widetilde{\bbA^2 \! - 0})$$
is connected. Indeed, by Proposition \ref{prop:nonhom_clos}
every marked non-homogeneous flat affine torus in $ {\mathfrak T}$ can be
deformed to a Hopf torus contained in some ${\mathfrak H}_{k_{1},k_{2}}$.
Since ${\mathfrak H}_{k_{1},k_{2}}$ is a subset of the connected
space $ {\mathfrak D}_{h}(T^2, \widetilde{\bbA^2 \! - 0})$, it follows that $ {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0})$
is connected.
\subsection{The subspace of homogeneous structures}
\label{sect:def_homogeneous} \index{flat affine torus!homogeneous}
We describe now the properties of the subset ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ of homogeneous flat affine structures as a subspace of the deformation space of all flat affine structures on $T^2$.
Since the non-homogeneous structures form an open subset,
the space ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ is closed. The complement
of Hopf tori $${\mathfrak D}_{h}(T^2,{\bb A}^2) - \mathfrak H$$ forms
a dense subset which is also open in the deformation space
of all structures ${\mathfrak D}(T^2,{\bb A}^2)$.
We established in the previous subsections:
\begin{proposition}
The set of all homogeneous structures ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ is the continuous and injective image of the quadratic cone ${\mathcal{C}}(T^2,{\bb R})$ under the map $\mathcal E$ in \eqref{eq:LSAsection}.
The map is a homeomorphism in the complement of Hopf tori.
\end{proposition}
In particular:
\begin{corollary}
The deformation space ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ of all homogeneous flat affine structures on the two-torus contains
the complement of Hopf tori
$$ {\mathfrak D}_{h}(T^2,{\bb A}^2) - \mathfrak H$$ as a dense open subset, which is a Hausdorff space and homeomorphic to a Zariski-open subset in a four-\-dimen\-sio\-nal quadratic cone in ${\bb R}^6$.
\end{corollary}
Note that the space of complete affine structures ${\mathfrak D}_{c}(T^2,{\bb A}^2)$ forms a two-dimensional closed subcone which is homeomorphic to ${\bb R}^2$, see \cite{BC_1}.
\subsubsection{The action of the linear group on translation invariant structures and conjugacy of \'etale affine groups}
\label{sect:orbit_closures}
The linear group $\operatorname{GL}(2,{\bb R})$ naturally acts on the variety ${\mathcal{C}}(T^2,{\bb R})$ of commutative and associative algebra products on
${\bb R}^2$. The orbits of this action correspond to the isomorphism
classes of algebra products. Since the section map
$$ {\mathcal E} : {\mathcal{C}}(T^2,{\bb R}) \, {\, \longrightarrow \, } \, {\mathfrak D}_{h}(T^2,{\bb A}^2)$$
is a continuous bijection, this constructs a \emph{natural} induced
action of $\operatorname{GL}(2,{\bb R})$ on the deformation space of homogeneous structures ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ which
is continuous on the complement of Hopf tori. This
group action may be used to reveal some of the
topology of ${\mathfrak D}_{h}(T^2,{\bb A}^2)$
and the possible deformations of structures. \\
Recall the classification of \'etale affine representations which is described in Section \ref{sect:etale_affine}.
Each orbit of $\operatorname{GL}(2,{\bb R})$ in ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ corresponds to exactly one of the
affine conjugacy classes of abelian almost simply transitive groups of affine
transformations on ${\bb A}^2$. We label the orbits accordingly with the symbols
${\mathfrak A}$, ${\mathfrak B}$, ${\mathfrak C}_{1}$, ${\mathfrak C}_{2}$, ${\mathfrak D}$ and $\mathsf{T}$. The decomposition
of ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ into the six orbit types of $\operatorname{GL}(2,{\bb R})$ defines
a natural stratification on ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ into manifolds which are
homogeneous spaces of $\operatorname{GL}(2,{\bb R})$, and each orbit is
a subcone of ${\mathfrak D}_{h}(T^2,{\bb A}^2)$. Each such stratum may also be computed as the induced image of the subgeometry which is defined by the corresponding \'etale affine representation, see
the examples in Section \ref{sect:cart}, as well as Section \ref{sect:lattices} and Lemma \ref{lem:normalizers}. \index{deformation space!stratification of}
The closure of each stratum
consists of strata of lower dimensions and contains the unique closed stratum $\mathsf{T}$, which is a point. There are two open strata of dimension four labeled ${\mathfrak A}$ and ${\mathfrak B}$, which correspond to homogeneous flat affine
structures whose development images are the punctured plane, and
the sector respectively. In their closure are the three-dimensional
orbits ${\mathfrak C}_{1}$ and ${\mathfrak C}_{2}$, whose corresponding flat affine structures
develop into the halfplane.
The complete structures correspond to a
two dimensional orbit ${\mathfrak D}$ and the translation structure $\mathsf{T}$.
We say that the orbit ${\mathcal{O}}_{1}$ degenerates to the orbit ${\mathcal{O}}_{2}$
if ${\mathcal{O}}_{2}$ is in the closure of ${\mathcal{O}}_{1}$. Degeneration induces
a partial ordering on the strata of ${\mathfrak D}_{h}(T^2,{\bb A}^2)$, with the translation structure $\mathsf{T}$
the unique minimal point. By the theorem of Hilbert-Mumford, if ${\mathcal{O}}_{1}$
degenerates to ${\mathcal{O}}_{2}$ then there exists a one-parameter group
$\lambda: {\bb R} \rightarrow \operatorname{GL}(2,{\bb R})$ and a point $o_{1} \in {\mathcal{O}}_{1}$ such that $\lim_{t \rightarrow 0} \lambda(t)\, o_{1} \in {\mathcal{O}}_{2}$. Therefore, every degeneration may be constructed explicitly as a limit of a curve of flat affine structures in the stratum. Moreover, every point of ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ directly degenerates to the translation structure,
compare, for example, Figure \ref{figure:qHopfTo trans}.
\\
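This last point can be made concrete (a remark added for illustration; we use the push-forward action $(g \cdot \mu)(x,y) = g\, \mu(g^{-1}x, g^{-1}y)$ of $\operatorname{GL}(2,{\bb R})$ on bilinear products as our convention): for the central one-parameter group $\lambda(t) = t^{-1} {\rm E}_{2}$ one computes
$$ (\lambda(t) \cdot \mu)(x,y) \, = \, t^{-1} \mu(t x, t y) \, = \, t\, \mu(x,y) \, \longrightarrow \, 0 \quad \text{for } t \rightarrow 0 \; , $$
so every algebra product $\mu$ degenerates to the zero product, whose associated translation invariant structure is the translation structure $\mathsf{T}$. \\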
The graph shown in Figure \ref{fig:degenerations} describes all possible degenerations in the orbit stratification of ${\mathfrak D}_{h}(T^2,{\bb A}^2)$ with respect to the natural action of $\operatorname{GL}(2,{\bb R})$.
\begin{figure}
\caption{Degenerations of $\operatorname{GL}(2,{\bb R})$-orbits in ${\mathfrak D}_{h}(T^2,{\bb A}^2)$.}
\label{fig:degenerations}
\end{figure}
These degenerations are illustrated in Figures 1-5, Figure \ref{figure:qHopfTo trans} and Figure \ref{figure:deform2}.
\subsection{The deformation space of all flat affine structures on the two-torus} \label{sect:holislh}
Our main result is the following. \index{character variety}
\index{deformation space!holonomy map} \index{holonomy map!for flat affine structures}
\index{holonomy map!is a local homeomorphism}
\index{deformation space!of flat affine structures}
\begin{theorem} \label{thm:holonomy_main}
The holonomy map for flat affine
structures on the two-torus
$$ hol: {\mathfrak D}(T^2,{\bb A}^2) \rightarrow \operatorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2) $$
is a local homeomorphism onto an open connected subset of the character variety.
\end{theorem}
\paragraph{The holonomy map for flat affine structures}
For the proof of Theorem \ref{thm:holonomy_main} we consider first the subgeometry of
$( {\bb A}o, \operatorname{GL}(2,{\bb R}))$-structures
and its induced map on deformation spaces
(cf.\ Section~\ref{sect:sub_geom}):
\begin{proposition} \label{pro:Uo}
The induced map on deformation spaces $$ {\mathfrak D}(T^2, {\bb A}o) \, {\, \longrightarrow \, } \, {\mathfrak D}(T^2,{\bb A}^2)$$
is an embedding onto an open subset ${\mathfrak U}_{o}$ of the space
${\mathfrak D}(T^2,{\bb A}^2)$. Moreover, the holonomy map for
${\mathfrak D}(T^2,{\bb A}^2)$ restricts to a local homeomorphism
on this subset.
\end{proposition}
\begin{proof} The commutative diagram \eqref{eq:subgeometry1} for the subgeometry takes the form
\begin{align*}
\xymatrix{
\; \; {\mathfrak D}(T^2, {\bb A}o) \; \; \ar[d] \ar [r]^(0.35){hol}
& \; \;\operatorname{Hom}({\bb Z}^2, \operatorname{GL}^+\!(2,{\bb R}))/ \operatorname{GL}\! (2,{\bb R}) \; \; \ar[d] \\
\; \; {\mathfrak D}(T^2,{\bb A}^2) \; \; \ar[r]^(0.35){hol} & \; \;
\lie{o}peratorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2) \, .}
\end{align*}
The image of $ {\mathfrak D}(T^2, {\bb A}o)$ in ${\mathfrak D}(T^2,{\bb A}^2)$
consists of precisely those structures in ${\mathfrak D}(T^2,{\bb A}^2)$
whose linear part of the holonomy contains an expansion. Therefore, the image ${\mathfrak U}_{o}$ of the induced map is open in
${\mathfrak D}(T^2,{\bb A}^2)$. The left vertical map is
clearly injective. Note further that the operation of taking the
linear part of a homomorphism defines a continuous section of the right vertical map which is defined on the holonomy image $hol({\mathfrak U}_{o})$. The latter set is open in $\operatorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2)$. By Corollary \ref{cor:bbaotop}, the upper map $hol$ is a local homeomorphism.
Therefore, the right vertical map is a topological embedding, and
the lower map $hol$ is a local homeomorphism on ${\mathfrak U}_{o}$. \end{proof}
Below we construct an open neighborhood ${\mathfrak U}_{1}$ of the
translation structure $\mathsf{T} \in {\mathfrak D}(T^2,{\bb A}^2)$, which
has the following properties:
\begin{enumerate}
\item ${\mathfrak U}_{1} \subset {\mathfrak D}_{h}(T^2,{\bb A}^2)$
is contained in the subset of homogeneous flat affine structures,
\item the restriction of the holonomy map $hol: {\mathfrak U}_{1} \rightarrow \operatorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2)$ is injective,
\item ${\mathfrak D}(T^2,{\bb A}^2) = {\mathfrak U}_{o} \cup {\mathfrak U}_{1}$.
\end{enumerate}
Together with Proposition \ref{pro:Uo} this shows that
$$ hol: {\mathfrak D}(T^2,{\bb A}^2) \rightarrow \operatorname{Hom}({\bb Z}^2, {\rm Aff}(2)) / {\rm Aff}(2)$$
is locally injective and therefore finishes the proof of
Theorem \ref{thm:holonomy_main}. \\
We observe the following refinement of Proposition
\ref{prop:nonhom_clos}:
\begin{proposition} \label{prop:nonhom_clos2}
The closure of the subset ${\mathfrak T}$ of non-homogeneous structures in the deformation space
${\mathfrak D}(T^2, {\bb A}^2)$ consists of Hopf tori.
\end{proposition}
\begin{proof} Suppose there is a sequence
of non-homogeneous marked tori $M_{i}$ which converge
in the deformation space ${\mathfrak D}(T^2,{\bb A}^2)$ to a
flat affine torus $M$. Let $h_{i}: {\bb Z}^2 \rightarrow {\rm Aff}(2)$
be their corresponding holonomy homomorphisms.
By Corollary \ref{cor:inhomog}, we may assume that the
linear parts of the $h_{i}$ are contained in the group of upper triangular
matrices $AN \cup -AN$, where $AN$ is the index two subgroup
with positive diagonal entries. Now if $M$ is homogeneous and
has development image different from the once-punctured
plane, the linear parts of all $h_{i}$ are contained in $AN$ for sufficiently large $i$.
Let $D_{i}$ be a corresponding sequence of development maps for the $M_{i}$ which converges to a development map $D$ which represents $M$.
Since $M$ is not modeled on the once-punctured plane the development map
$D$ is injective. By Example \ref{ex:embeddings}, there is a neighborhood of $D$
in the space of development maps such that $D$ is injective on the fundamental
domain for the standard action of ${\bb Z}^2$ on ${\bb R}^2$. However, the development maps $D_{i}$ are not injective on this fundamental domain by construction of the non-homogeneous tori $M_{i}$, see Example \ref{ex:gentorusnh}. This contradicts the
fact that the $D_{i}$ converge to $D$.
The claim now follows from Proposition \ref{prop:nonhom_clos}.
\end{proof}
Now the construction of ${\mathfrak U}_{1}$ goes as follows:
Following the notation in Appendix A, define $U_{\epsilon}
= \{ g \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R}) \mid |\theta(g)| < \epsilon \}$.
By the proof of
Proposition \ref{prop:ind_cov2}, we may choose
an open set $$ {\mathcal{U}}_{\epsilon} \subset \; \operatorname{Hom}({\bb Z}^2, \widetilde{\operatorname{GL}}^+\!(2,{\bb R}))/ \widetilde{\operatorname{GL}}\! (2,{\bb R}) \, , $$
where
${\mathcal{U}}_{\epsilon}$ is of the form ${\mathcal{C}}(U_{\epsilon} \times U_{\epsilon})$ such that the projection $$ {\mathcal{U}}_{\epsilon} \, {\, \longrightarrow \, } \, \operatorname{Hom}({\bb Z}^2, \operatorname{GL}^+\!(2,{\bb R}))/ \operatorname{GL}\! (2,{\bb R})$$ is injective.
The holonomy preimage $hol^{-1}({\mathcal{U}}_{\epsilon})$ is a non-empty open subset of
$ {\mathfrak D}(T^2, \widetilde{\bbA^2 \! - 0})$ which contains certain homogeneous structures of type ${\mathfrak A}$, and the strata ${\mathfrak B}$ and ${\mathfrak C}_{1}$.
It corresponds to a non-empty
open subset ${\mathcal V}_{\epsilon}$ of ${\mathfrak D}(T^2, {\bb A}o)$ such that the restriction
$$ hol: {\mathcal V}_{\epsilon} \, {\, \longrightarrow \, } \, \operatorname{Hom}({\bb Z}^2, \operatorname{GL}^+\!(2,{\bb R}))/ \operatorname{GL}\! (2,{\bb R})$$ is injective. Let ${\mathcal M} = {\mathfrak D}(T^2, {\bb A}o) - {\mathcal V}_{\epsilon}$ be the complement (containing the non-homogeneous flat affine tori and also homogeneous structures of type ${\mathfrak A}$). Then we observe that ${\mathcal M}$ is closed
not only in ${\mathfrak D}(T^2, {\bb A}o)$, but also in ${\mathfrak D}(T^2, {\bb A}^2)$.
(Indeed, this follows since the closure of the space ${\mathfrak T}$ of non-homogeneous tori is contained in the space ${\mathfrak H}$ of Hopf tori.)
Now we put ${\mathfrak U}_{1} = {\mathfrak D}(T^2, {\bb A}^2) - {\mathcal M}$.
\begin{appendix}
\section{Conjugacy classes in the universal covering group of $\operatorname{GL}(2,{\bb R})$}
\label{sect:GL2R}
Let $\operatorname{GL}^+(2,{\bb R})$ be the group of $2 \times 2$ matrices with positive determinant, and let $$ \mathsf{p}: \widetilde{\operatorname{GL}}^+\!(2,{\bb R}) \rightarrow
\operatorname{GL}^+(2,{\bb R})$$ be its universal covering group.
\paragraph{Iwasawa decomposition}
Recall the Iwasawa decomposition
$$\operatorname{GL}^+(2,{\bb R}) = {K}AN \; , $$
where $K= {\rm SO}(2,{\bb R})$ is the subgroup of rotations, $A$ is the group of diagonal matrices with positive
entries, and $N$ the group of unipotent upper triangular matrices.
Furthermore, we let $D$ be the central subgroup of $\operatorname{GL}^+(2,{\bb R})$ contained in $A$ which consists of all elements of $A$ with
identical diagonal entries.
Let $\tilde K \rightarrow K$ be the universal covering of the
rotation group.
There is an induced Iwasawa decomposition
$$ \widetilde{\operatorname{GL}}^+\!(2,{\bb R}) = \tilde{K} AN \; ,$$
where $A$ and $N$ are considered as subgroups of $\widetilde{\operatorname{GL}}^+\!(2,{\bb R})$.
Let $\mathcal Z$ be the subgroup of $\tilde{K}$, which is
mapped by the covering projection onto $\{+1, -1\} = \{ {\rm E}_{2} , {\mathrm{R}}_{\pi} \} \subset {\rm SO}(2,{\bb R})$.
Note that the center of $\widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ consists
of the subgroup $D$ extended by the group $\mathcal Z$.
We choose a generator $\tau \in \tilde{K}$ for the infinite cyclic group
$\mathcal Z$. Then $\mathsf{p}(\tau) = {\mathrm{R}}_{\pi} \, ( \, = - {\rm E}_{2}) $ and the element
$\tau^2 \in \mathcal Z$ generates the kernel of the covering
projection $\mathsf{p}$.
\paragraph{The rotation angle function}
We consider the diffeomorphism
$\theta: \tilde K \rightarrow {\bb R}$ which satisfies $\theta(1) = 0$ and the relation
$$ \mathsf{p}(k) = \begin{pmatrix} \cos \theta(k) & - \sin \theta(k) \\
\sin \theta(k) & \cos \theta(k) \end{pmatrix} \; , \quad k \in \tilde K \; . $$
Using the Iwasawa decomposition we construct an angular map
$$ \theta: \widetilde{\operatorname{GL}}^+\!(2,{\bb R}) \rightarrow {\bb R} \; $$
by extending $\theta: \tilde K \rightarrow {\bb R}$. This means that
$\theta(g) = \theta(k)$, where $g \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$
has decomposition $g= kan$.
Geometrically, $\theta(g)$ thus is the angle of rotation or polar angle for the image
$\mathsf{p}(g) (e_{1})$ of the first standard basis vector $e_{1}$.
More specifically, when considering the action
of $\widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ on $\widetilde{\bbA^2 \! - 0}$ (see Section~\ref{sect:gentori}), the function $\theta$ can also be read off as the
$\theta$-coordinate of $$ g \cdot (r,0) = \; (s, \theta(g)) \, \in \widetilde{\bbA^2 \! - 0} \; . $$
\hspace{1cm}\\
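For illustration (a sample computation added here; it is not part of the original text): writing ${\mathrm{R}}_{\varphi}$ for the counterclockwise rotation by $\varphi$, the Iwasawa decomposition of a concrete matrix reads
$$ \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \, = \, {\mathrm{R}}_{\pi/4} \begin{pmatrix} \sqrt{2} & 0 \\ 0 & 1/\sqrt{2} \end{pmatrix} \begin{pmatrix} 1 & 1/2 \\ 0 & 1 \end{pmatrix} \; , $$
so the lift $g \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ of this matrix with $\theta(g) \in (-\pi, \pi)$ has $\theta(g) = \pi/4$, which is indeed the polar angle of the image $\mathsf{p}(g)(e_{1}) = (1,1)^{t}$ of the first standard basis vector.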
The following properties of the function $\theta$ are easy to
verify:
\begin{lemma} \label{lem:theta}
Let $g,h \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$. Then
\begin{enumerate}
\item $\theta(g) = 0$ if and only if $g \in AN$.
\item $\theta(kg) = \theta(k) + \theta(g)$, for all $k \in \tilde K$;
$\theta(\tau^m) = m \pi$.
\item $ | \theta(gh) - \theta(g) - \theta(h)| < \pi$.
\item $ | \theta(g) - \theta(g^{-1}) | < \pi$.
\item $ | \theta(gkg^{-1}) -\theta(k) | < \pi$, for all $k \in \tilde K$.
\item $ | \theta (g h g^{-1})| < \pi$, for all $h \in AN$.
\end{enumerate}
\end{lemma}
\begin{proof} Recall (see Section~\ref{sect:gentori}) that the action of $AN$ on $\widetilde{\bbA^2 \! - 0}$ preserves all lines $(r, \ell \pi) \in \widetilde{\bbA^2 \! - 0}$, where $\ell \in {\bb Z}$,
and the interior of all strips
$$\bar{\Omega}_{\ell} = \{ (r, \theta) \mid \ell \pi \leq \theta \leq (\ell +1) \pi \} \, \subset \, \widetilde{\bbA^2 \! - 0} \, . $$
Therefore, for any $g \in
\widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ with $\theta(g) \in (\ell \pi, (\ell +1) \pi)$ and $h \in AN$, we have $$ \ell \pi < \theta(h gh^{-1}) < (\ell +1) \pi \; . $$ In particular, one deduces (5) and (6).
\end{proof}
\subsection{The induced covering on conjugacy classes}
Let $G$ be a Lie group, and
$${\mathcal{C}}{G} = \{ C(g) = {\rm Ad}(G) g \mid g \in G \}$$
its set of conjugacy classes. The set ${\mathcal{C}}{G}$
carries the quotient topology induced
from $G$. Observe that the center of $G$ acts on ${\mathcal{C}}{G}$.
Indeed, for any $z \in Z(G)$, we have $z C(g) = C(zg)$.
Given a covering
projection of Lie groups
$\mathsf{p}: G' \rightarrow G$, there is a natural induced surjective map on
conjugacy classes $$ {\mathcal{C}}{G'} {\, \longrightarrow \, } {\mathcal{C}}{G} \, , \; \, C(g) \mapsto C(\mathsf{p}(g)) \; . $$ The kernel $\kappa$ of the covering is a central subgroup
of $G'$ which acts on ${\mathcal{C}}{G'}$. As a matter of fact,
$ {\mathcal{C}}{G} = {\mathcal{C}}{G'}/ \kappa$ is the quotient space of this
action.
\begin{proposition} \label{prop:ind_cov1}
The natural projection map on conjugacy classes
$$ {\mathcal{C}}{ \widetilde{\operatorname{GL}}^+\!(2,{\bb R})} \, \longrightarrow \, {\mathcal{C}}{ \operatorname{GL}^+\!(2,{\bb R})} $$
is a covering map.
\end{proposition}
\begin{proof} We consider the action of the kernel $\kappa = \langle \tau ^2 \rangle$ of the covering map $\mathsf{p}$ on ${\mathcal{C}}{ \widetilde{\operatorname{GL}}^+\!(2,{\bb R})}$.
For this, let $g \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ and consider its neighborhood $$ U_{\epsilon} = U_{\epsilon}(g) = \{ h \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R}) \mid | \theta(g) - \theta (h)| < \epsilon \} \; . $$
We also put $${\mathcal{C}} U_{\epsilon} = \{ C(h) \mid h \in U_{\epsilon} \}$$
for the corresponding neighborhood of $C(g)$ in the
space of conjugacy classes.
Let us assume first that $g \in \tilde K D$, $g \notin \mathcal Z D$.
Let $V_{\epsilon} \subset U_{\epsilon}(g)$ be a neighborhood
of $g$, such that all its elements are conjugate
to an element of $ \tilde K \cdot D$. Let $h \in V_{\epsilon}$.
By using (5) of Lemma \ref{lem:theta}, we deduce that, for all
$\ell \in C(h)$, \begin{equation} \label{eq:Ueps}
| \theta(g) - \theta(\ell)| < \pi + \epsilon \; . \end{equation}
We observe that the open subsets ${\mathcal{C}} V_{\epsilon}$ and $\tau^{k}{\mathcal{C}} V_{\epsilon} = {\mathcal{C}} \tau^{k} V_{\epsilon}$ of ${\mathcal{C}}{ \widetilde{\operatorname{GL}}^+\!(2,{\bb R})}$ intersect if and only if there exist elements
$h,\ell \in V_{\epsilon}$ such that $$ \tau^k \ell \in C(h) \, . $$
If this is the case then, by the above estimate \eqref{eq:Ueps}, we have
\begin{equation} \label{eq:Ueps2} | \theta(g) - \theta(\tau^k \ell) | = | \theta(g) - k \pi - \theta(\ell)|< \pi + \epsilon \; . \end{equation}
Furthermore, $ |\theta(g) -\theta(\ell)| < \epsilon$, since $\ell \in U_{\epsilon}$. If $\epsilon$ is small, \eqref{eq:Ueps2} is possible if and only if $k \in \{0,1,-1\}$. For $\epsilon$ small enough, this implies that all neighborhoods of the form
$\tau^{2k}{\mathcal{C}} V_{\epsilon} = {\mathcal{C}} \tau^{2k} V_{\epsilon}$ are mutually disjoint. Therefore, ${\mathcal{C}} V_{\epsilon}$ is a fundamental neighborhood of ${\mathcal{C}}(g)$ for the action of $\kappa$ on
${\mathcal{C}}{\widetilde{\operatorname{GL}}^+\!(2,{\bb R})}$.
Assume next that $g \in AN$ is upper triangular. Since $g$ has real and positive eigenvalues, we may choose a small neighborhood $V_{\epsilon}$ as above such that all its elements
are conjugate to an element of $AN$ or of $\tilde K D$.
In particular, for all $h \in V_{\epsilon}$ which are conjugate to
an element of $AN$, we deduce from (6) of Lemma \ref{lem:theta} that the range of $\theta$ on the conjugacy class
${\mathcal{C}}(h)$ is contained in the open interval $(- \pi, \pi)$.
Consequently,
$\theta ({\mathcal{C}}( \tau^k h)) $ is contained in $( (k -1) \pi, (k +1) \pi)$.
It follows that all neighborhoods of the form
$\tau^{2 k } {\mathcal{C}} V_{\epsilon}$ are mutually disjoint, and
thus ${\mathcal{C}} V_{\epsilon}$
is a fundamental neighborhood of ${\mathcal{C}}(g)$ for the action of $\kappa$.
An analogous argument works for $g$ with negative eigenvalues, that is,
$ g \in \tau AN$.
Therefore $\kappa$ acts discontinuously and freely on ${\mathcal{C}}{ \widetilde{\operatorname{GL}}^+\!(2,{\bb R})}$. This implies the proposition.
\end{proof}
\begin{corollary} \label{cor:ind_cov1}
The natural map on conjugacy classes
$$ {\mathcal{C}}{ \widetilde{\operatorname{GL}}\!(2,{\bb R})} \, \longrightarrow \, {\mathcal{C}}{ \operatorname{GL}\!(2,{\bb R})} $$
is a local homeomorphism.
\end{corollary}
\begin{proof} Indeed, local injectivity is implied
by the commutative diagram
\begin{align*}
\xymatrix{
\; \; {\mathcal{C}}{ \widetilde{\operatorname{GL}}^+\!(2,{\bb R})} \; \; \ar[d] \ar[r]
& \; \; {\mathcal{C}}{ \operatorname{GL}^+\!(2,{\bb R})} \; \; \ar[d] \\
\; \; {\mathcal{C}}{ \widetilde{\operatorname{GL}}\!(2,{\bb R})} \; \; \ar[r] & \; \;
{\mathcal{C}}{ \operatorname{GL}\!(2,{\bb R})} \, .}
\end{align*}
\end{proof}
\paragraph{Closures of sets of conjugacy classes }
For any subset $M \subset G$ we define ${\mathcal{C}} M$ to be the set of conjugacy classes of elements in $M$, and $\overline{{\mathcal{C}} M}$ its closure in ${\mathcal{C}} G$.
We shall require the following lemma:
\begin{lemma} \label{lem:tGL2con}
With the above convention the following hold in the space of
conjugacy classes ${\mathcal{C}} \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$:
\begin{enumerate}
\item Let $g_{i} \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ be a sequence of elements such that each $g_{i}$ is conjugate to an element of $A N$ and such that
the sequence $|\theta(g_{i})|$ converges to $\pi$. Then the sequence
$g_{i}$ leaves every compact subset of\/ $\widetilde{\operatorname{GL}}^+(2,{\bb R})$.
\item ${\mathcal{C}}(AN) = \overline{{\mathcal{C}}(AN)}$ is a closed subset of ${\mathcal{C}} \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$.
\item ${\mathcal{C}} N = \overline{{\mathcal{C}} N}
= \left\{ {\mathcal{C}}\!\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
{\mathcal{C}}\!\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}, {\rm E}_{2} \right\}$ consists of three conjugacy classes.
\item $\overline{{\mathcal{C}} \tilde K} = {\mathcal{C}} \tilde K \, \cup \, \bigcup_{k}
\! {\mathcal{C}} \, \tau^k N
$, $\; \overline{{\mathcal{C}} D \tilde K} =\, D \,\overline{{\mathcal{C}} \tilde K}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $g \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ such that
$|\theta(g)|= \pi $, and $\bar g \in \operatorname{GL}^+\!(2,{\bb R})$ the projection
of $g$. Clearly, by definition of $\theta$, $\bar g$
has at least one negative eigenvalue.
The sequence $g_{i}$ cannot have a subsequence converging
to $g$, since the corresponding $\bar g_{i}$ have positive eigenvalues.
Thus (1) follows.
To prove (2), we consider the subset $C(AN) \subset \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ which is the preimage of ${\mathcal{C}}(AN)$. For $g \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$, let $\operatorname{dis}(g)$ denote the discriminant of the characteristic polynomial of $\mathsf{p}(g) \in \operatorname{GL}^+(2,{\bb R})$. Then $g \in C(AN)$ if and only
if the following hold:
\begin{enumerate}
\item[i)] $|\theta(g)| < \pi$,
\item[ii)] $\operatorname{dis}(g) \geq 0$,
\item[iii)] both eigenvalues of $\mathsf{p}(g)$ are positive.
\end{enumerate}
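For instance (an added sanity check, not in the original text): for $g \in AN$ itself with
$$ \mathsf{p}(g) = \begin{pmatrix} a & b \\ 0 & d \end{pmatrix} \; , \quad a, d > 0 \, , $$
one has $\operatorname{dis}(g) = (a+d)^2 - 4ad = (a-d)^2 \geq 0$ and both eigenvalues $a$, $d$ of $\mathsf{p}(g)$ are positive, so conditions i)--iii) hold, with $\theta(g) = 0$ by (1) of Lemma \ref{lem:theta}.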
In view of (1), the condition i) is closed. Therefore,
$C(AN)$ is a closed subset of $\widetilde{\operatorname{GL}}^+\!(2,{\bb R})$, proving (2).
Regarding (4), remark first that the closure of ${\mathcal{C}} K$ in
${\mathcal{C}} \operatorname{GL}^+(2,{\bb R})$ is contained in the union of ${\mathcal{C}} K$,
${\mathcal{C}} N$ and ${\mathcal{C}}( -{\rm E}_{2}N)$. Now here is an example of a sequence
$$k_{\varphi} = \begin{pmatrix} \cos \varphi + \sqrt{\sin \varphi} & - \sin \varphi - 1 \\
\sin \varphi & \cos \varphi - \sqrt{\sin \varphi}
\end{pmatrix}$$
of matrices, where $k_{\varphi}$ is conjugate to the rotation ${\mathrm{R}}_{\varphi} \in K$, and
which, for $\varphi \rightarrow 0$, converges to
$$
\begin{pmatrix} 1 & - 1 \\
0 & 1
\end{pmatrix} \, \in N \; .
$$ Therefore, ${\mathcal{C}} N$ is in the closure of ${\mathcal{C}} \tilde K$.
Since ${\mathcal{C}} \tilde K$ is invariant by left-multiplication with $\tau$,
the same is true for its closure.
This shows that
$\tau^k \, {\mathcal{C}} N \subset \lie{o}verline{{\mathcal{C}} \tilde K}$.
\end{proof}
Every element $g \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ with
$\mathsf{p}(g) \in AN \leq \operatorname{GL}^+\!(2,{\bb R})$ is of the form
$\tau^k g_{o}$,
where $g_{o} \in AN \leq \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$.
The integer $\operatorname{lev} g = k \in {\bb Z}$ is
called the \emph{level} of $g$. The notion of level is
well defined on the conjugacy class ${\mathcal{C}}(g)$ of $g$.
The following states that the level separates
these conjugacy classes.
In particular, the subset of conjugacy classes
$$ \tau^k {\mathcal{C}} AN \, \subset \, {\mathcal{C}} \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$$
is closed.
\begin{proposition} \label{prop:levclosed}
Let $g_{i} \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$
be a sequence such that each $\mathsf{p}(g_{i})$ is conjugate to an element of $A N$. If the sequence of conjugacy classes ${\mathcal{C}}(g_{i})$ converges to ${\mathcal{C}}(h) \in {\mathcal{C}} \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ then $\mathsf{p}(h)$ is conjugate to an element of $AN$, and there exists $i_{0}$, such that for all $i \geq i_{0}$, $\operatorname{lev} g_{i} = \operatorname{lev} h$.
\end{proposition}
\begin{proof} The discriminant of the characteristic polynomial $\operatorname{dis} \mathsf{p}(g_{i})$ and the
eigenvalues of $\mathsf{p}(g_{i})$ are continuous functions
on the conjugacy classes. Therefore, $\mathsf{p}(h)$ is conjugate in $\operatorname{GL}^+\!(2,{\bb R})$ to an element of $AN$.
By assumption, all $\mathsf{p}(g_{i})$ are contained
in ${\mathcal{C}} AN$.
Therefore, we have $g_{i} \in \tau^{2k_{i}} {\mathcal{C}}(h_{i})$ with
$h_{i} \in {\mathcal{C}} AN$. By Proposition \ref{prop:ind_cov1},
the group generated by $\tau^2$ acts properly discontinuously
on ${\mathcal{C}} \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$.
In particular, there exists a neighbourhood ${\mathcal{C}} U$ of ${\mathcal{C}}(h)$
such that ${\mathcal{C}}(g_{i}) \in {\mathcal{C}} U$ implies $\operatorname{lev} g_{i} = \operatorname{lev} h$.
\end{proof}
Incidentally, the assertion of Proposition \ref{prop:ind_cov1} fails
to be true when considering the situation for
the covering $$ \mathrm{P}\mathsf{p}: \widetilde{\operatorname{GL}}(2,{\bb R}) \rightarrow {\rm PGL}(2,{\bb R}) = \operatorname{GL}(2,{\bb R}) / \{ \pm \mathrm{E}_{2} \} \; . $$
\begin{example} \label{ex:CPGL}
We consider the induced map
\begin{equation} \label{eq:cPGL}
{\mathcal{C}} \widetilde{\operatorname{GL}}(2,{\bb R}) {\, \longrightarrow \, } {\mathcal{C}} {\rm PGL}(2,{\bb R})
\end{equation} on conjugacy
classes. It is the quotient map of ${\mathcal{C}} \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ with
respect to the action of the central subgroup ${\mathcal{Z}} = \langle \tau \rangle$, generated by the element $\tau$.
Then the $\widetilde{\operatorname{GL}}(2,{\bb R})$-conjugacy class ${\mathcal{C}}(g_{a})$, where $g_{a} \in \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ is a lift of
$$ \bar g_{a } = \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} \in \operatorname{GL}^+\!(2,{\bb R}) \; , \; a \neq 0 , $$
is fixed by translation with $\tau$. Indeed, $\tau {\mathcal{C}}(g_{a}) =
{\mathcal{C}}(g_{-a}) = {\mathcal{C}}(g_{a})$. Therefore, the map \eqref{eq:cPGL} cannot be
a covering. It is \emph{not even a locally injective map}:
Indeed, in every neighborhood of $g_{a}$ there exist elements $g_{a,\epsilon}$ projecting to matrices of the form
$$ \bar g_{a,\epsilon } = \begin{pmatrix} \epsilon & a \\ -a & \epsilon \end{pmatrix} \in \operatorname{GL}^+\!(2,{\bb R}) \; , \; \epsilon \neq 0 .$$
Then for the conjugacy classes in $\operatorname{GL}(2,{\bb R})$, we have ${\mathcal{C}} \, \mathsf{p}(g_{a,\epsilon}) = {\mathcal{C}} \, \mathsf{p}(g_{-a,\epsilon})$ and therefore
${\mathcal{C}}\, \mathrm{P}\mathsf{p}(g_{a,\epsilon}) = {\mathcal{C}}\, \mathrm{P}\mathsf{p} ( \tau g_{-a, \epsilon}) = {\mathcal{C}}\, \mathrm{P}\mathsf{p} ( g_{ a,- \epsilon})$. But clearly, $g_{a,\epsilon}$ and $g_{a,- \epsilon}$ are not
conjugate in $\widetilde{\operatorname{GL}}(2,{\bb R})$ unless $\epsilon = 0$.
Therefore, \emph{the map \eqref{eq:cPGL}
is a twofold branched covering near $g_{a}$}.
\end{example}
\subsection{Conjugacy classes of homomorphisms}
Let $G$ be a Lie group.
Recall that the evaluation map on the generators
$$ \operatorname{Hom}({\bb Z}^2, G) \rightarrow G \times G \, , \; \rho \mapsto \left(\rho(e_{1}), \rho(e_{2}) \right)$$
identifies the space $\operatorname{Hom}({\bb Z}^2, G)$ of all homomorphisms
${\bb Z}^2 \rightarrow G$ homeomorphically with an analytic subvariety of $G \times G$.
With respect to this map, the orbits of the conjugation action
of $G$ on $\operatorname{Hom}({\bb Z}^2, G)$ correspond to sets of the form
$$ C(g_{1}, g_{2}) = \{ (g g_{1} g^{-1}, g g_{2} g^{-1}) \mid g \in G\} \subset G \times G \; .$$
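Concretely (a standard observation, made explicit here), the analytic subvariety above is the commuting variety:
$$ \operatorname{Hom}({\bb Z}^2, G) \, \cong \, \{ (g_{1}, g_{2}) \in G \times G \mid g_{1} g_{2} = g_{2} g_{1} \} \; , $$
since a homomorphism ${\bb Z}^2 \rightarrow G$ is determined by the images of the two generators, subject only to the relation that these images commute.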
We put
$$ {\mathcal X}({\bb Z}^2, G) = \operatorname{Hom}({\bb Z}^2, G) / G $$
for the space of conjugacy classes of homomorphisms
${\bb Z}^2 \rightarrow G$ (also called the character variety).
Given a covering homomorphism $\mathsf{p}: G' \rightarrow G$ there is a natural induced surjective map
\begin{equation} \label{eq:indmap}
{\mathcal X}({\bb Z}^2, G') {\, \longrightarrow \, } {\mathcal X}({\bb Z}^2, G) \; .
\end{equation}
Returning to our specific context we introduce the following extension of Proposition \ref{prop:ind_cov1}:
\begin{proposition} \label{prop:ind_cov2}
The induced map on conjugacy classes of homomorphisms
$$ \operatorname{Hom}({\bb Z}^2, \widetilde{\operatorname{GL}}^+\!(2,{\bb R})) /
\widetilde{\operatorname{GL}}^+\!(2,{\bb R}) {\, \longrightarrow \, } \operatorname{Hom}({\bb Z}^2, \operatorname{GL}^+\!(2,{\bb R})) /
\operatorname{GL}^+\!(2,{\bb R})$$
is a covering map.
\end{proposition}
\begin{proof}
Let ${\rm Z}(G)$ denote the center of $G$.
This representation of $\operatorname{Hom}({\bb Z}^2, G)$ as a subset of
$G \times G$ gives rise
to an action
of ${\rm Z}(G) \times {\rm Z}(G)$ on $\operatorname{Hom}({\bb Z}^2, G)$ which is determined by
$$ \left( (z_{1}, z_{2})\cdot \rho\right) \, (e_{i}) \, = \; z_{i} \rho(e_{i})\; , $$ where $z_{i} \in {\rm Z}(G)$.
Moreover,
it factors to an action of ${\rm Z}(G) \times {\rm Z}(G)$
on the space of conjugacy classes
${\mathcal X}({\bb Z}^2, G)$.
Let $\kappa \leq {\rm Z}(G')$ denote the kernel of $\mathsf{p}: G' \rightarrow G$. By the above, $\kappa \times \kappa$ acts on $\operatorname{Hom}({\bb Z}^2, G')$, and the action factors to an action on the space of conjugacy classes ${\mathcal X}({\bb Z}^2, G')$, and, as is easily verified, the natural map
$$ {\mathcal X}({\bb Z}^2, G')/ (\kappa \times \kappa) {\, \longrightarrow \, } {\mathcal X}({\bb Z}^2, G) \; $$ which is induced on the quotient
is a homeomorphism. Therefore, \eqref{eq:indmap}
is a covering if and only if
$\kappa \times \kappa$ acts discontinuously and freely on
$ {\mathcal X}({\bb Z}^2, G')$.
Here we consider only the case
$G'= \widetilde{\operatorname{GL}}^+\!(2,{\bb R})$ and $G=
\operatorname{GL}^+(2,{\bb R})$.
For any $(g_{1}, g_{2}) \in G' \times G'$, choose
open neighborhoods $U_{\epsilon}(g_{i})$ as
in the proof of Proposition \ref{prop:ind_cov1}.
As follows from this previous proof, the open neighborhood
$$ U_{\epsilon}(g_{1},g_{2}) = U_{\epsilon}(g_{1}) \times U_{\epsilon}(g_{2}) \; $$
projects to a fundamental neighborhood $ \mathcal C U_{\epsilon}$ for the action of $\kappa \times \kappa$ on the set of all $G'$-orbits.
Hence, $\kappa \times \kappa$ acts discontinuously on $G'$-orbits.
\end{proof}
\begin{corollary} \label{cor:ind_cov2}
The natural map on conjugacy classes of homomorphisms
$$ \operatorname{Hom}({\bb Z}^2, \widetilde{\operatorname{GL}}^+\!(2,{\bb R})) /
\widetilde{\operatorname{GL}}\!(2,{\bb R}) {\, \longrightarrow \, } \operatorname{Hom}({\bb Z}^2, \operatorname{GL}^+\!(2,{\bb R})) /
\operatorname{GL}\!(2,{\bb R})$$
is a local homeomorphism.
\end{corollary}
\section{Example of a two-dimensional geometry where
$hol$ is not a local homeomorphism}
Let $(X,G)$ be the homogeneous geometry which is defined by
the natural action of ${\rm PGL}(2,{\bb R}) = \operatorname{GL}(2,{\bb R})/ \{ \pm 1 \}$
on the space
$$ X = {\mathrm P}({\bbA^2 \! - 0}) = {\bb A}o / \{ \pm 1 \} \; , $$ that is, $X$ is the quotient space
of ${\bb R}^2 - \{0 \}$ by the action of the center $\{ {\rm E}_{2}, -{\rm E}_{2}\}$ of
$\operatorname{SL}(2,{\bb R})$. The natural map $$ ({\bb A}o, \operatorname{GL}(2,{\bb R})) \; {\, \longrightarrow \, } \;
( {\mathrm P}({\bbA^2 \! - 0}), {\rm PGL}(2,{\bb R}))$$ is a covering of geometries in the
sense of Definition \ref{def:subgeometry}.
By Lemma \ref{lem:covering_geoms}, the induced map
on deformation spaces
\begin{equation} \label{eq:indmap_p}
{\mathfrak D}(T^2, {\bb A}o) \; {\, \longrightarrow \, } \; {\mathfrak D}(T^2, {\mathrm P}({\bbA^2 \! - 0}))
\end{equation}
is a homeomorphism. \\
We claim that the \emph{holonomy
for the deformation space $ {\mathfrak D}(T^2, {\mathrm P}({\bbA^2 \! - 0}))$ is \emph{not}
a local homeomorphism}.
For this we recall first the $ ({\bb A}o, \operatorname{GL}(2,{\bb R}))$-manifolds
${\mathcal H}_{\lambda, \pi/2, k}$ constructed in Example \ref{ex:finquothopftori} (finite quotients of Hopf tori).
Then we observe: \index{holonomy map!is not a local homeomorphism}
\begin{proposition} \label{prop:branchedhol}
The holonomy map
$$ {\mathfrak D}(T^2, {\mathrm P}({\bbA^2 \! - 0})) \, \stackrel{hol}{{\, \longrightarrow \, }} \,
\operatorname{Hom}({\bb Z}^2, {\rm PGL}(2,{\bb R}))/ {\rm PGL}(2,{\bb R})$$
is a twofold branched covering near the image
of a homogeneous flat affine torus ${\mathcal H}_{\lambda, \pi/2, k}$
under the map \eqref{eq:indmap_p}. \end{proposition}
\begin{proof} The commutative diagram \eqref{eq:subgeometry1} for the subgeometry takes the form
\begin{align*}
\xymatrix{
\; \; {\mathfrak D}(T^2, {\bb A}o) \; \; \ar[d]_{\approx} \ar[r]^(0.35){hol}
& \; \;\operatorname{Hom}({\bb Z}^2, \operatorname{GL}^+\!(2,{\bb R}))/ \operatorname{GL}\! (2,{\bb R}) \; \; \ar[d] \\
\; \; {\mathfrak D}(T^2, {\mathrm P}({\bbA^2 \! - 0})) \; \; \ar[r]^(0.35){hol} & \; \;
\operatorname{Hom}({\bb Z}^2, {\rm PGL}(2,{\bb R}))/ {\rm PGL}(2,{\bb R}) \, .}
\end{align*}
Note that, by Corollary \ref{cor:bbaotop}, the top horizontal map
is a local homeomorphism. Furthermore, by Example \ref{ex:CPGL},
the right vertical map is a twofold branched covering near the
holonomy homomorphism of every flat affine
torus ${\mathcal H}_{\lambda, {\pi \over 2},k}$.
We deduce that the bottom
map $hol$ for $ {\mathfrak D}(T^2, {\mathrm P}({\bbA^2 \! - 0})) $
is locally a twofold branched covering at the
images of ${\mathcal H}_{\lambda, {\pi \over 2},k}$.
\end{proof}
\end{appendix}
\printindex
\end{document}
\begin{document}
\author{Jacek
Jakubowski and
Maciej Wi\'sniewolski }
\title[Verhulst process]{Exact distribution of Verhulst process}
\maketitle
\begin{center}
{\small
Institute of Mathematics, University
of Warsaw \\
Banacha 2, 02-097 Warszawa, Poland \\
e-mail: {\tt jakub@mimuw.edu.pl } \\
and \\
{\tt wisniewolski@mimuw.edu.pl } }
\end{center}
\begin{abstract}
We investigate the Verhulst process, a special functional of geometric Brownian motion with many applications, among others in biology and in stochastic volatility models. We present
an exact form of the density of the one-dimensional distribution of the Verhulst process. A simple formula for this density is obtained in the special case when the drift of the geometric Brownian motion is equal to $-\frac12$. Some special properties of this process are discussed; e.g., it turns out that under Girsanov's change of measure a Verhulst process still remains a Verhulst process, but with different parameters.
\end{abstract}
\noindent
\begin{quote}
\noindent \textbf{Key words}: geometric Brownian motion,
Verhulst process, Girsanov's change of measure, Laplace transform, exponential functional of Brownian motion
\ \\
\textbf{2010 AMS Subject Classification}: 60J65, 60J70.
\end{quote}
\section{Introduction}
The paper is devoted to the study of the distribution of the Verhulst process. A Verhulst process is a special functional of geometric Brownian motion with drift and its integral. The process has been studied, among others, by Mackevi\v{c}ius \cite{Mac} and by Lungu and {\O}ksendal \cite{LO}, where it is called a process of population growth in a stochastic crowded environment. The results
in both papers were obtained by approximation arguments. {\O}ksendal \cite{O}
indicated some applications of the Verhulst process in finance and biology.
During his presentation at the Vilnius Conference 2014, Mackevi\v{c}ius \cite{Mac1} pointed out the need for closed-form expressions for the Laplace transform and the exact distribution of the Verhulst process.
In this paper we provide formulas for the one-dimensional distribution of the process.
Results obtained by Yor and collaborators in several papers and monographs (see, e.g., \cite{CMY}, \cite{DYM}, \cite{DMY}, \cite{Mans08}, \cite{Mat-I}, \cite{MatII}, \cite{MatIII}) on the distribution of
$(B_t^{(\mu)},\int_0^te^{B_u^{(\mu)}}du)$, where $B_t^{(\mu)}=B_t + \mu t$ is a Brownian motion with drift, provide the background for deriving closed formulas for the density of the Verhulst process, which in the case of drift $\mu = -\frac{1}{2}$ becomes especially simple.
We also present some interesting and important properties of the Verhulst process, among them the fact that a Verhulst process remains, under Girsanov's change of measure, a Verhulst process, but with different parameters.
The ideas presented below are original and are not easy consequences of the previous results.
\section{Distribution and properties of Verhulst process}
We work on a complete probability
space $(\Omega,\mathcal{F},\mathbb{P})$ with filtration $(\mathcal{F}_t)_{t \in
[0,\infty)}$ and Brownian motion $B$ defined on it.
We define a Verhulst process $\theta^{(\mu,\beta)}$ starting from $1$ as the functional of a Brownian motion with drift
\begin{equation}\label{VP}
\theta_t^{(\mu,\beta)} = \frac{e^{B_t +\mu t}}{1+\beta\int_0^te^{B_s+\mu s}ds}.
\end{equation}
It is easy to see that this functional is
the unique strong solution of the SDE
\begin{equation}\label{SDE}
d\theta_t^{(\mu,\beta)} = \theta_t^{(\mu,\beta)}dB_t + \Big((\mu+1/2)\theta_t^{(\mu,\beta)}-\beta(\theta_t^{(\mu,\beta)})^2\Big)dt, \ \theta_0^{(\mu,\beta)} = 1,
\end{equation}
for $\mu\in\mathbb{R},$ $\beta\geq 0$ and $ t\geq 0$.
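The correspondence between the closed form \eqref{VP} and the SDE \eqref{SDE} lends itself to a quick numerical sanity check. The following sketch (ours, with illustrative parameter values; not part of the paper's argument) simulates one Brownian path and compares the functional \eqref{VP}, with the time integral discretized by the trapezoidal rule, against an Euler--Maruyama scheme for \eqref{SDE} driven by the same increments:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, beta, t, n = 0.1, 0.5, 1.0, 200_000  # illustrative parameters
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])
s = np.linspace(0.0, t, n + 1)

# Closed form (VP): theta_s = e^{B_s + mu s} / (1 + beta * int_0^s e^{B_u + mu u} du)
ex = np.exp(B + mu * s)
integral = np.concatenate([[0.0], np.cumsum(0.5 * (ex[1:] + ex[:-1]) * dt)])
theta_exact = ex / (1.0 + beta * integral)

# Euler-Maruyama for the SDE, driven by the same Brownian increments dB
theta_em = np.empty(n + 1)
theta_em[0] = 1.0
for k in range(n):
    th = theta_em[k]
    theta_em[k + 1] = th + th * dB[k] + ((mu + 0.5) * th - beta * th * th) * dt

err = np.max(np.abs(theta_em - theta_exact))
print(err)
```

The two trajectories agree up to a discretization error that shrinks as the step size decreases.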
It is worth noting that taking $\theta_t^{(\mu,\beta)}$ starting from any $x> 0$ does not change any of its probabilistic properties (see also Section 3). As mentioned above, the process $\theta^{(\mu,\beta)}$ is also known in the literature as a process of population growth in a crowded stochastic environment (see \cite{O}).
An approximation of the process was presented in \cite{Mac}. Properties of the Verhulst process were also studied by Jakubowski and Wi\'sniewolski \cite{JWII}. In particular, it was shown there (see Theorem 2.19) that for $\lambda >0, a\in\mathbb{R}$, we have
\begin{equation}\label{Lap}
\mathbb{E}\Big[ \exp\Big(-\frac{\lambda e^{aB^{(\mu)}_t}}{1+\beta
A_t^{(\mu)}}\Big) \Big]
= \mathbb{E} \Big[F_{B_t^{(\mu)}}\Big(\beta^{-1}R^{(\lambda e^{aB^{(\mu)}_t})}(1/2)\Big)\Big],
\end{equation}
where $B^{(\mu)}_t = B_t +\mu t$, $ A_t^{(\mu)} = \int_0^te^{2B^{(\mu)}_s}ds$, and $F_x$ is given by
\begin{equation}\label{postac-F}
F_{x}(z) = \exp\Big(-\frac{\varphi_z(x)^2 - x^2}{2t}\Big)
\end{equation}
with
\begin{align}
\label{varphi}
& \varphi_x(y) = \mbox{arcosh}(xe^{-y} + \cosh(y))\\
& = \ln\Big(xe^{-y} + \cosh(y)+\sqrt{x^2e^{-2y}+\sinh^2(y) +2xe^{-y}\cosh(y)}\Big).\notag
\end{align}
Moreover, $R^x$ is a squared Bessel process with index $-1$ starting at $x$ and independent of
$(B^{(\mu)}_t,t\geq 0)$.
Observe that from \eqref{Lap}
we easily obtain the Laplace transform of $\theta_t^{(\mu,\beta)}$.
\begin{proposition} For $\lambda > 0$ we have
\begin{equation}
\mathbb{E}e^{-\lambda \theta_t^{(\mu,\beta)}} = \mathbb{E} \Big[F_{B_t^{(2\mu)}}\Big((4\beta)^{-1}R^{(\lambda e^{2B^{(2\mu)}_t})}(1/2)\Big)\Big].
\end{equation}
\end{proposition}
\begin{proof} Exactly in the same manner as in the proof of Theorem 2.20 in \cite{JWII}, we have
\begin{align*}
\theta_{4t}^{(\mu,\beta)} =
\frac{e^{2(B_{4t}/2 +2\mu t)}}{1+4\beta\int_0^{t}e^{2(B_{4u}/2 +2\mu u)}du}.
\end{align*}
Since $B_{4t}/2$ is a Brownian motion, the statement follows from
(\ref{Lap}) with
$ a = 2$, $\mu$ replaced by $2\mu$ and $\beta$ replaced by $4\beta$.
\end{proof}
The following lemma will be used later.
\begin{lemma}\label{LQ} Fix $\gamma >0$, $\mu\in\mathbb{R},$ $\beta\geq 0$. Then
\begin{equation}\label{defM}
M^{(\gamma,\mu,\beta)}_t = e^{-\gamma\int_0^t\theta_s^{(\mu,\beta)}dB_s -\frac{\gamma^2}{2}\int_0^t(\theta_s^{(\mu,\beta)})^2ds}, \qquad t \in[0,\infty)
\end{equation}
is a martingale.
\end{lemma}
\begin{proof} It is enough to prove that for fixed $T>0$ the local martingale $M^{(\gamma,\mu,\beta)}_t$, $t \in[0,T]$, is bounded. Indeed, SDE \eqref{SDE} implies that
\begin{align*}
&\exp \Big({-\gamma\int_0^t\theta_s^{(\mu,\beta)}dB_s -\frac{\gamma^2}{2}\int_0^t(\theta_s^{(\mu,\beta)})^2ds}\Big) \\ &= e^{-\gamma\Big(\theta_t^{(\mu,\beta)}-1 -(\mu+1/2)\int_0^t\theta_s^{(\mu,\beta)}ds+\beta\int_0^t(\theta_s^{(\mu,\beta)})^2ds\Big) -\frac{\gamma^2}{2}\int_0^t(\theta_s^{(\mu,\beta)})^2ds}\\
&= e^{-\gamma(\theta_t^{(\mu,\beta)}-1)-(\gamma\beta+\frac{\gamma^2}{2})\int_0^t\Big((\theta_s^{(\mu,\beta)})^2- 2\theta_s^{(\mu,\beta)}\frac{\gamma(\mu+1/2)}{\gamma^2+2\gamma\beta} +
\Big(\frac{\gamma(\mu+1/2)}{\gamma^2+2\gamma\beta}\Big)^2\Big)ds } \times \\
& \times e^{ \frac{(\gamma(\mu+1/2))^2} {4(\gamma\beta+\gamma^2/2)}t } <\infty.
\end{align*}
\end{proof}
\begin{remark}
One can wonder whether $ \overline{M}^{(\gamma,\mu,\beta)}_t = e^{\gamma\int_0^t\theta_s^{(\mu,\beta)}dB_s -\frac{\gamma^2}{2}\int_0^t(\theta_s^{(\mu,\beta)})^2ds}$, for a fixed $\gamma >0$, could be a martingale as well. In Remark 1.1 of \cite{DMY} it was noticed that $ \overline{M}^{(\gamma,\mu,\beta)}$
cannot be a martingale.
\end{remark}
Lemma \ref{LQ} enables us to define a new probability measure
\begin{equation}\label{MQ}
\frac{d\mathbb{Q}^{(\gamma,\mu,\beta,T)}}{d\mathbb{P}}\Big|_{\mathcal{F}_T} = M^{(\gamma,\mu,\beta)}_T .
\end{equation}
From Girsanov's theorem, $V_s = B_s + \gamma\int_0^s\theta_u^{(\mu,\beta)}du$, $s\leq T$, is a Brownian motion under $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$. This leads us to the following theorem.
\begin{theorem} Let $\theta_t^{(\mu,\beta)}$ be defined by (\ref{VP}) and $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$ by (\ref{MQ}). Then
\begin{equation}
\theta_t^{(\mu,\beta)} = \frac{e^{V_t +\mu t}}{1+ (\beta + \gamma)\int_0^te^{V_u +\mu u }du}, \qquad t\leq T,
\end{equation}
where $V$ is a standard Brownian motion under $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$.
\end{theorem}
\begin{proof} From It\^o's lemma and (\ref{SDE}) it follows that
\begin{align}\label{Int}
\ln \theta_t^{(\mu,\beta)} = B_t + \int_0^t(\mu - \beta \theta_s^{(\mu,\beta)})ds.
\end{align}
Taking $V_t = B_t + \gamma\int_0^t\theta_u^{(\mu,\beta)}du$, a Brownian motion under $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$, we obtain from the last equality that
\begin{align}\label{EV}
V_t + \mu t = \ln \theta_t^{(\mu,\beta)} + (\beta + \gamma)\int_0^t\theta_u^{(\mu,\beta)}du.
\end{align}
Thus direct computation yields
\begin{align*}
\int_0^te^{V_u + \mu u}du &= \int_0^t\theta_u^{(\mu,\beta)}e^{(\beta + \gamma)\int_0^u\theta_s^{(\mu,\beta)}ds}du\\
&= \frac{1}{\gamma+\beta}\Big(e^{(\beta + \gamma)\int_0^t\theta_u^{(\mu,\beta)}du} -1\Big).
\end{align*}
From the last equality and (\ref{EV}) we finally have
\begin{align*}
\theta_t^{(\mu,\beta)} = \frac{e^{V_t +\mu t}}{1+ (\beta + \gamma)\int_0^te^{V_u +\mu u }du}.
\end{align*}
\end{proof}
This theorem shows that a Verhulst process $\theta^{(\mu,\beta)}$ on $(\Omega,\mathcal{F},\mathbb{P})$ remains a Verhulst process under $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$, though with different parameters.
\begin{proposition} \label{Stb}
If $(\theta_t^{(\mu,\beta)})$, $t \leq T$, is a Verhulst process with parameters $(\mu,\beta)$ on $(\Omega,\mathcal{F},\mathbb{P})$, then $(\theta_t^{(\mu,\beta)})$ is on $(\Omega,\mathcal{F},\mathbb{Q}^{(\gamma,\mu,\beta,T)})$ a Verhulst process with parameters $(\mu,\gamma + \beta)$.
\end{proposition}
We are ready to derive the formula for the density of the one-dimensional distribution of the Verhulst process, i.e. the density of $\theta_t^{(\mu,\beta)}$ for a fixed $t> 0$.
Let us introduce the following notation:
\begin{align*}
\mathbb{P}(\theta_t^{(\mu,\beta)}\in dx) = g_t(\beta,x)dx.
\end{align*}
Observe that for $\beta = 0$, $g_t(0,x)$ is the density of the geometric Brownian motion $e^{B_t +\mu t}$.
Proposition \ref{Stb} implies that
\begin{align*}
g_t(\beta+\gamma,x) = g_t(\beta, x)\mathbb{E}(M^{(\gamma,\mu,\beta)}_t|\theta_t^{(\mu,\beta)} = x).
\end{align*}
In particular, for $\beta = 0$ we obtain
\begin{align}\label{Dens}
g_t(\gamma,x) = g_t(0, x)\mathbb{E}(M^{(\gamma,\mu,0)}_t|e^{B_t +\mu t} = x).
\end{align}
Theorem 8.1 in Matsumoto--Yor \cite{MatII} states that
for any $t>0$, $\lambda > 0$, $v > 0$, $x\in\mathbb{R}$,
\begin{align}\label{MYOR}
&\psi_t^{(\mu)}(v,x)\mathbb{E}\Big(e^{-\frac{\lambda^2}{2}\int_0^te^{2B_u +2\mu u}du} \Big|\int_0^te^{B_u+\mu u}du = v, B_t + \mu t = x\Big)\\
&= e^{\mu x - \mu^2t/2}\frac{\lambda}{4\sinh(\lambda v/2)}e^{-\lambda(1+e^x)\coth(\lambda v/2)}\Theta(\phi(v,x,\lambda), t/4),\notag
\end{align}
where
\begin{align*}
\psi_t^{(\mu)}(v,x) &= \frac{1}{16}e^{\mu x - \mu^2t/2}\frac{1}{v}e^{-\frac{2(1+e^x)}{v}}\Theta(4e^{x/2}/v, t/4),\\
\Theta(r,t) &= \frac{r}{\sqrt{2\pi^3t}}e^{\frac{\pi^2}{2t}}\int_0^{\infty}e^{-\frac{z^2}{2t}}e^{-r\cos(z)}\sinh(z)\sin(\pi z/t)dz, \\
\phi(v,x,\lambda) &= \frac{2\lambda e^{x/2}}{\sinh(\lambda v/2)}.
\end{align*}
Using this result we can write down the formula for the density of the Verhulst process.
\begin{theorem} \label{th:Vdens} The density of the Verhulst process is given by
\begin{align*}
g_t(\gamma,x) = g_t(0, x)e^{-\gamma(x-1)}\mathbb{E}H_t(a_t^{(\mu)},x),
\end{align*}
where $a_t^{(\mu)} = \int_0^te^{B_u+\mu u}du$, and
\begin{align*}
H_t(y,x) = e^{\gamma(\mu+1/2)y}\mathbb{E}\Big(e^{-\frac{\gamma^2}{2}\int_0^te^{2B_u +2\mu u}du}|a_t^{(\mu)} = y, B_t + \mu t = \ln x\Big).
\end{align*}
\end{theorem}
\begin{proof} We have from (\ref{defM}) and (\ref{SDE})
\begin{align*}
\mathbb{E}(M^{(\gamma,\mu,0)}_t|e^{B_t +\mu t} = x) &= e^{-\gamma(x-1)}\mathbb{E}\Big(e^{\gamma(\mu +1/2)a_t^{(\mu)} - \frac{\gamma^2}{2}\int_0^te^{2B_u +2\mu u}du}|e^{B_t +\mu t} = x\Big)\\
&= e^{-\gamma(x-1)}\int_0^{\infty}H_t(y,x)\mathbb{P}(a_t^{(\mu)}\in dy)\\
&= e^{-\gamma(x-1)}\mathbb{E}H_t(a_t^{(\mu)},x).
\end{align*}
Thus, the result follows from (\ref{Dens}).
\end{proof}
\begin{remark}
Combining Theorem \ref{th:Vdens} with (\ref{MYOR}) enables us to write down a closed formula for the density of the Verhulst process. The density of $a_t^{(\mu)}$ can be obtained from the formula
\begin{align*}
\mathbb{P}(a_t^{(\mu)}\in dv, B_t +\mu t\in dx) = \psi_t^{(\mu)}(v,x)dvdx
\end{align*}
(see \cite[Theorem 4.1]{MatII}).
Another formula for the density of $a_t^{(\mu)}$ can also be found on page 612 of \cite{SB}, formula (1.8.4).
\end{remark}
\begin{proposition}
For a Verhulst process $\theta_t^{(\mu,\beta)}$, a Brownian motion $B$, $\beta>0$ and $\mu \neq -1/2$ we have
\begin{align*}
\theta_t^{(\mu,\beta)}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} &= e^{B_t + \mu t},\\
\mathbb{E}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} &= 1 + \frac{\beta}{\mu + 1/2}\Big(e^{(\mu+1/2)t}-1\Big).
\end{align*}
\end{proposition}
\begin{proof} The first equality follows immediately from (\ref{Int}). For the second observe that
\begin{align} \label{1/6}
e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} = (1+\beta\int_0^te^{B_u +\mu u}du),
\end{align}
so
\begin{align*}
e^{(\mu+1/2)t} = \mathbb{E}e^{B_t+\mu t} = \mathbb{E}\theta_t^{(\mu,\beta)}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds}
= \frac{1}{\beta}\frac{\partial}{\partial t}\mathbb{E}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds}.
\end{align*}
Thus
\begin{align*}
\mathbb{E}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} = 1 + \frac{\beta}{\mu + 1/2}\Big(e^{(\mu+1/2)t}-1\Big).
\end{align*}
\end{proof}
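The expectation formula just proved can be checked by simulation: thanks to the pathwise identity \eqref{1/6}, a Monte Carlo estimate of $\mathbb{E}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds}$ only requires the time integral of geometric Brownian motion. A sketch (ours; parameter values, sample sizes and the seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, beta, t = 0.3, 0.7, 1.0        # illustrative parameters, mu != -1/2
n_steps, n_paths = 200, 10_000
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
s = np.linspace(0.0, t, n_steps + 1)
ex = np.exp(B + mu * s)

# pathwise identity (1/6): exp(beta * int_0^t theta_s ds) = 1 + beta * int_0^t e^{B_u + mu u} du
integral = (0.5 * (ex[:, 1:] + ex[:, :-1]) * dt).sum(axis=1)  # trapezoidal rule
mc = np.mean(1.0 + beta * integral)

closed = 1.0 + beta / (mu + 0.5) * (np.exp((mu + 0.5) * t) - 1.0)
print(mc, closed)
```

The Monte Carlo estimate and the closed formula agree up to sampling and discretization error.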
As an example of applications, we present a solution to the problem of finding a special representation of Brownian motion with drift.
For other distributional equations of this kind see, e.g., Section 13 in \cite{Mat}.
\begin{example} Let $B$ be a Brownian motion under $\mathbb{P}$, $\mu\in\mathbb{R}$ and $\alpha\in(0,1)$. Our aim is to find a measure $\mathbb{Q}$, a Brownian motion $V$ under $\mathbb{Q}$ and a random variable $G$ such that the distribution of $G$ under $\mathbb{P}$ and under $\mathbb{Q}$ belongs to the same class, and for fixed $t > 0$
\begin{align} \label{2/6}
B_t + \mu t = \alpha (V_t + \mu t) + (1-\alpha) G.
\end{align}
Fix $T$, $T>t$, and $\gamma > 0$. To find a representation \eqref{2/6} we take $\mathbb{Q}=\mathbb{Q}^{(\gamma, \mu,\beta,T)}$ given by (\ref{MQ}) with $\beta = \frac{\gamma\alpha}{1-\alpha}$. Then $V_t = B_t + \gamma\int_0^t\theta_s^{(\mu,\beta)}ds$ is a Brownian motion under $\mathbb{Q}$. By (\ref{Int}) and \eqref{1/6}
we have
\begin{align*}
e^{V_t +\mu t} = \theta_t^{(\mu,\beta)}\Big(1+\beta\int_0^te^{B_s +\mu s}ds\Big)^{\frac{\beta+\gamma}{\beta}}.
\end{align*}
From the definition of $\theta_t^{(\mu,\beta)}$ and the last equality we obtain
\begin{align} \label{2/7}
e^{B_t + \mu t} = e^{\frac{\beta}{\beta+\gamma}(V_t + \mu t)}(\theta_t^{(\mu,\beta)})^{\frac{\gamma}{\beta +\gamma}}.
\end{align}
Since $\alpha= \frac{\beta}{\beta+\gamma}$, by taking the logarithm of both sides of \eqref{2/7} we have
$$
B_t+ \mu t = \alpha (V_t + \mu t) + (1-\alpha ) \ln \theta_t^{(\mu,\beta)},
$$
which is \eqref{2/6} with $G = \ln \theta_t^{(\mu,\beta)}$.
Observe that $\ln\theta_t^{(\mu,\beta)}$ under both $\mathbb{P}$ and $\mathbb{Q}$ belongs to the same family, namely logarithms of Verhulst processes. This finishes the construction.
\end{example}
Now we present another formula for the Laplace transform of the Verhulst process.
\begin{proposition} \label{LapT2} Let $\theta_t^{(\mu,\beta)}$ be a Verhulst process with parameters $(\mu,\beta)$. Then for $\lambda \geq 0$
\begin{equation}
\mathbb{E} e^{-\lambda \theta_t^{(\mu,\beta)}} = e^{\beta}\mathbb{E}e^{-(\beta+\lambda)e^{B_t+\mu t}+\beta(\mu+1/2)\int_0^te^{B_s+\mu s}ds -\frac{\beta^2}{2}\int_0^te^{2B_s+2\mu s}ds},
\end{equation}
where $B$ is a standard Brownian motion under $\mathbb{P}$.
\end{proposition}
\begin{proof}
From Proposition \ref{Stb} we know that a geometric Brownian motion $e^{B_t+\mu t}$
under $\mathbb{P}$ becomes a Verhulst process $\tilde{\theta}_t^{(\mu,\beta)}$ with parameters $(\mu,\beta)$ under $\mathbb{Q} = \mathbb{Q}^{(\beta,\mu,0,t)}$ given by \eqref{MQ}. Thus
\begin{align*}
\mathbb{E} e^{-\lambda \theta_t^{(\mu,\beta)}} = \mathbb{E}_{\mathbb{Q}}e^{-\lambda \tilde{\theta}_t^{(\mu,\beta)}} = \mathbb{E} e^{-\lambda e^{B_t+\mu t}} M^{(\beta,\mu,0)}_t,
\end{align*}
where $M^{(\beta,\mu,0)}$ is defined by (\ref{defM}). The assertion follows from (\ref{SDE}) and some simple algebra.
\end{proof}
\section{Exponential random time and drift $\mu = -\frac{1}{2}$}
In this section we consider a Verhulst process $\theta$ starting from $x>0$ with $\mu = -\frac{1}{2}$ and $\beta=x$. Thus, $\theta$ has the special form
\begin{align} \label{1/7}
\theta_t = \frac{xe^{B_t - \frac{t}{2}}}{1+x\int_0^te^{B_u - \frac{u}{2}}du},
\end{align}
where $B$ is a Brownian motion. Let $T_{\lambda}$ be an exponential random variable with parameter $\lambda>0$, independent of $B$. We have
\begin{lemma}\label{expdens} Let $v = \sqrt{2\lambda+1/4}$. The density of $\theta_{T_{\lambda}}$ is given on $(0,\infty)$ by
\begin{equation}
\mathbb{P}(\theta_{T_{\lambda}}\in dz) = 2\lambda e^{x-z}\sqrt{x/z^3}F_{v}(x,z)dz,
\end{equation}
where $$F_{v}(x,y) = I_{v}(x\wedge y)K_{v}(x\vee y)$$ is a product of two modified Bessel functions.
\end{lemma}
\begin{proof} For $r\geq 0$ from Proposition \ref{LapT2} (where we put $\mu = -1/2$, $\lambda = xr$, $\beta = x$) we obtain
\begin{align*}
\mathbb{E}e^{-r\theta_t} = e^x\mathbb{E}e^{-x(r+1)e^{B_t-t/2} - \frac{x^2}{2}\int_0^te^{2B_u -u}du}.
\end{align*}
Thus, using \cite[Theorem 4.11]{MatII} describing the joint density
of the vector $(e^{B_{T_{\lambda}}-{T_{\lambda}}/2},\int_0^{T_{\lambda}}e^{2B_u -u}du)$, we have
\begin{align} \label{1/8}
\mathbb{E}e^{-r\theta_{T_{\lambda}}} &= e^x\mathbb{E}e^{-x(r+1)e^{B_{T_{\lambda}}-{T_{\lambda}}/2} - \frac{x^2}{2}\int_0^{T_{\lambda}}e^{2B_u -u}du}\\ \notag
&= e^x\int_0^{\infty}\int_0^{\infty}e^{-x(r+1)y-\frac{x^2}{2}w}\frac{\lambda}{y^{v+5/2}}p^{(v)}(w,1,y)dydw,
\end{align}
where $p^{(v)}$ is the transition probability density of the Bessel process with index $v$. Again from \cite{MatII} (see Remark 2.1) we have for $\alpha >0$
$$
\int_0^{\infty}e^{-\alpha t}p^{(v)}(t,x,y)dt = 2y\Big(\frac{y}{x}\Big)^vF_{v}(\sqrt{2\alpha}x, \sqrt{2\alpha}y).
$$
Thus by \eqref{1/8}, Fubini's theorem and the last identity we have
\begin{align*}
e^x&\int_0^{\infty}\int_0^{\infty}e^{-x(r+1)y-\frac{x^2}{2}w}\frac{\lambda}{y^{v+5/2}}p^{(v)}(w,1,y)dydw\\
&= \int_0^{\infty}e^{x-x(r+1)y}\frac{\lambda}{y^{v+5/2}}2y^{v+1}F_v(x,xy)dy\\
&= \int_0^{\infty}e^{x-rw -w}\frac{2\lambda}{w^{3/2}}\sqrt{x}F_v(x,w)dw
\end{align*}
and the assertion follows from a standard Laplace transform argument.
\end{proof}
Now we are ready to derive the exact formula for the density of $\theta_t$.
\begin{theorem} Fix $t> 0$. Then on $(0,\infty)$
\begin{equation}
\mathbb{P}(\theta_{t}\in dw) = e^{-\frac{t}{8}+x -w}\sqrt{\frac{x}{w^3}}\Big(\int_0^{\infty}\frac{1}{z}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}\Theta(xw/z,t)dz\Big)dw,
\end{equation}
where
\begin{equation}
\Theta(r,t) = \frac{r}{\sqrt{2\pi^3t}}e^{\frac{\pi^2}{2t}}\int_0^{\infty}e^{-\frac{z^2}{2t}}e^{-r\cos(z)}\sinh(z)\sin(\pi z/t)dz.
\end{equation}
\end{theorem}
\begin{proof}
We have on $(0,\infty)$
\begin{align}\label{RHS}
\mathbb{P}(\theta_{T_{\lambda}}\in dw) = \lambda\int_0^{\infty}e^{-\lambda t}\mathbb{P}(\theta_{t}\in dw)dt,
\end{align}
where $T_{\lambda}$ is an exponential random variable with parameter $\lambda$ independent of the process $\theta$. On the other hand, from Lemma \ref{expdens} for $\lambda >0$ and $v=\sqrt{2\lambda+1/4}$, we have on $(0,\infty)$
\begin{align*}
\mathbb{P}&(\theta_{T_{\lambda}}\in dw) = 2\lambda e^{x-w}\sqrt{\frac{x}{w^3}}F_{v}(x,w)dw\\
&= 2\lambda e^{x-w}\sqrt{\frac{x}{w^3}}I_{v}(x\wedge w)K_{v}(x\vee w)dw\\
&= 2\lambda e^{x-w}\sqrt{\frac{x}{w^3}}\frac12\int_0^{\infty}e^{-\frac{z}{2}-\frac{(x\vee w)^2+(x\wedge w)^2}{2z}}I_v((x\vee w)(x\wedge w)/z)\frac{dz}{z}dw\\
&= \lambda e^{x-w}\sqrt{\frac{x}{w^3}}\int_0^{\infty}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}I_v(xw/z)\frac{dz}{z}dw,
\end{align*}
where in the third equality we used the identity
$$
I_{v}(x)K_{v}(w) = \frac12\int_0^{\infty}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}I_v(xw/z)\frac{dz}{z}
$$
for $w\geq x>0$ (see (2.7) in \cite{MatII}). To continue we use another identity for modified Bessel functions (see (2.10) in \cite{MatII})
\begin{align*}
I_v(r) = \int_0^{\infty}e^{-\frac{v^2}{2}t}\Theta(r,t)dt, \ r>0
\end{align*}
and obtain
\begin{align*}
\lambda &e^{x-w}\sqrt{\frac{x}{w^3}}\int_0^{\infty}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}I_v(xw/z)\frac{dz}{z}\\
&= \lambda e^{x-w}\sqrt{\frac{x}{w^3}}\int_0^{\infty}\frac{1}{z}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}\int_0^{\infty}e^{-\frac{v^2}{2}t}\Theta(xw/z,t)dt\\
&= \lambda e^{x-w}\sqrt{\frac{x}{w^3}}\int_0^{\infty}\int_0^{\infty}e^{-\lambda t -\frac{t}{8}}\Theta(xw/z,t)\frac{1}{z}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}dzdt,
\end{align*}
where in the last equality we used Fubini's theorem and that $v^2 = 2\lambda + 1/4$. To finish the proof we compare the last expression with (\ref{RHS}).
\end{proof}
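The modified-Bessel product identity used in the third equality above can be verified numerically. The following sketch (ours; it uses SciPy's Bessel routines with the illustrative values $v=3/2$, $x=1$, $w=2$) combines the exponentials analytically, via the scaled function $\mathrm{ive}(v,u)=I_v(u)e^{-u}$, so that the integrand stays finite as $z\to 0$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, ive, kv

v, x, w = 1.5, 1.0, 2.0  # illustrative values with w >= x > 0

def integrand(z):
    # I_v(xw/z) e^{-z/2 - (x^2 + w^2)/(2z)} / z, rewritten with ive:
    # iv(v, u) e^{-(x^2+w^2)/(2z)} = ive(v, u) e^{-(x-w)^2/(2z)} for u = xw/z
    return ive(v, x * w / z) * np.exp(-(x - w) ** 2 / (2.0 * z) - z / 2.0) / z

rhs = 0.5 * quad(integrand, 0.0, np.inf)[0]
lhs = iv(v, x) * kv(v, w)  # I_v at the smaller argument, K_v at the larger
print(lhs, rhs)
```

Note that the right-hand side is symmetric in $x$ and $w$, which is why the identity forces $I_v$ to sit at the smaller argument and $K_v$ at the larger one.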
Now we consider a family of processes defined by \eqref{1/7} for all $x>0$, and to underline the dependence on $x$ we denote this process by $\theta(x)$, i.e.
\begin{align*}
\theta_t(x) = \frac{xe^{B_t - \frac{t}{2}}}{1+x\int_0^te^{B_u - \frac{u}{2}}du}.
\end{align*}
From Lemma \ref{expdens} we can deduce
\begin{proposition} Let $T_{\lambda}, \hat{T}_2$ be two independent exponential random variables, which are independent of the Brownian motion $B$. Then on $(0,\infty)$
\begin{equation}
z^2\mathbb{P}\Big(\theta_{T_{\lambda}}(\hat{T}_2)\in dz\Big) = \mathbb{P}(\hat{T}_2\in dz)\mathbb{E}(\theta_{T_{\lambda}}(z))^2.
\end{equation}
\end{proposition}
\begin{proof}
From Lemma \ref{expdens} for $x>0$, we have on $(0,\infty)$
\begin{align*}
e^{-2x}z^2\mathbb{P}(\theta_{T_{\lambda}}(x)\in dz) = 2\lambda e^{-x-z}\sqrt{xz}F_{v}(x,z)dz.
\end{align*}
Thus
\begin{align*}
e^{-2x}\mathbb{E}(\theta_{T_{\lambda}}(x))^2 = \int_0^{\infty}2\lambda e^{-x-z}\sqrt{xz}F_{v}(x,z)dz.
\end{align*}
From the symmetry of $F_v$, after integrating in $x$, we obtain
\begin{align*}
z^2\int_0^{\infty}e^{-2x}\mathbb{P}(\theta_{T_{\lambda}}(x)\in dz)dx &= \Big(\int_0^{\infty}2\lambda e^{-x-z}\sqrt{xz}F_{v}(x,z)dx\Big)dz\\
&= e^{-2z}\mathbb{E}(\theta_{T_{\lambda}}(z))^2dz.
\end{align*}
The assertion follows.
\end{proof}
\end{document}
\begin{document}
\begin{abstract} Given a bounded open set~$\Omega\subseteq\R^n$,
we consider the eigenvalue problem of a nonlinear mixed local/nonlocal operator
with vanishing conditions in the complement of~$\Omega$.
We prove that the second eigenvalue~$\lambda_2(\Omega)$ is always strictly larger than the first eigenvalue~$\lambda_1(B)$ of a ball~$B$ with volume half of that of~$\Omega$.
This bound is proven to be sharp, by comparing to the limit case in which~$\Omega$
consists of two equal balls far from each other. More precisely,
differently from the local case, an optimal shape for the second eigenvalue problem does not exist, but a minimizing sequence is given by the union of two disjoint balls of half volume whose
mutual distance tends to infinity.
\end{abstract}
\title[Hong-Krahn-Szeg\"{o} for
mixed operators]{A Hong-Krahn-Szeg\"{o} inequality \\ for mixed local and nonlocal operators}
\author[S.\,Biagi]{Stefano Biagi}
\author[S.\,Dipierro]{Serena Dipierro}
\author[E.\,Valdinoci]{Enrico Valdinoci}
\author[E.\,Vecchi]{Eugenio Vecchi}
\address[S.\,Biagi]{Dipartimento di Matematica
\newline\indent Politecnico di Milano \newline\indent
Via Bonardi 9, 20133 Milano, Italy}
\email{stefano.biagi@polimi.it}
\address[S.\,Dipierro]{Department of Mathematics and Statistics
\newline\indent University of Western Australia \newline\indent
35 Stirling Highway, WA 6009 Crawley, Australia}
\email{serena.dipierro@uwa.edu.au}
\address[E.\,Valdinoci]{Department of Mathematics and Statistics
\newline\indent University of Western Australia \newline\indent
35 Stirling Highway, WA 6009 Crawley, Australia}
\email{enrico.valdinoci@uwa.edu.au}
\address[E.\,Vecchi]{Dipartimento di Matematica
\newline\indent Università di Bologna \newline\indent
Piazza di Porta San Donato 5, 40126 Bologna, Italy}
\email{eugenio.vecchi2@unibo.it}
\subjclass[2020]
{49Q10, 35R11, 47A75, 49R05}
\keywords{Operators of mixed order, first eigenvalue, shape optimization, isoperimetric inequality,
Faber-Krahn inequality, quantitative results, stability.}
\thanks{The authors are members of INdAM. S. Biagi
is partially supported by the INdAM-GNAMPA project
\emph{Metodi topologici per problemi al contorno associati a certe
classi di equazioni alle derivate parziali}.
S. Dipierro and E. Valdinoci are members of AustMS.
S. Dipierro is supported by
the Australian Research Council DECRA DE180100957
``PDEs, free boundaries and applications''.
E. Valdinoci is supported by the Australian Laureate Fellowship
FL190100081
``Minimal surfaces, free boundaries and partial differential equations''.
E. Vecchi is partially supported
by the INdAM-GNAMPA project
\emph{Convergenze variazionali per funzionali e operatori dipendenti da campi vettoriali}.}
\date{\today}
\maketitle
\begin{center}
{\fmmfamily{ \Huge Dedication. To the Ingenious Gentleman Don Ireneo.}}
\end{center}
\section{Introduction}
In this paper we consider a nonlinear operator arising from the superposition of a classical $p$-Laplace operator
and a fractional $p$-Laplace operator, of the form
\begin{equation}\label{ATORE}\mathcal{L}_{p,s} = -\Delta_p+(-\Delta)^s_p\end{equation}
with~$s\in(0,1)$ and~$p\in [2,+\infty)$. Here the fractional $p$-Laplace operator is defined, up to a multiplicative
constant that we neglect, as
$$ (-\Delta)^s_p u(x):=2\int_{\R^n}\frac{|u(x)-u(y)|^{p-2} (u(x)-u(y))}{|x-y|^{n+ps}}\,dy
.$$
Given a bounded open set~$\Omega\subseteq\R^n$, we consider the eigenvalue problem for the operator~$\mathcal{L}_{p,s}$ with homogeneous
Dirichlet boundary conditions (i.e., the eigenfunctions are prescribed to vanish
in the complement of $\Omega$). In particular, we define~$\lambda_1(\Omega)$ to be the smallest of such eigenvalues
and~$\lambda_2(\Omega)$ to be the second smallest one (in the sense made precise in~\cite{BMV, GoelSreenadh}).
The main result that we present here is a version of the Hong--Krahn--Szeg\"{o} inequality
for the second Dirichlet eigenvalue~$\lambda_2(\Omega)$, according to the following statement:
\begin{theorem} \label{main:thm}
Let $\Omega\subseteq\R^n$ be a bounded open set.
Let $B$ be any Euclidean ball with volume $|\Omega|/2$. Then,
\begin{equation} \label{eq:HKS}
\lambda_2(\Omega) > \lambda_1(B).
\end{equation}
Furthermore, equality is never attained in
\eqref{eq:HKS}; however, the estimate is sharp in the following sense:
if $\{x_j\}_j,\,\{y_j\}_j\subseteq\R^n$ are two sequences such that
$$\lim_{j\to+\infty}|x_j-y_j| = +\infty,$$
and if we define $\Omega_j := B_r(x_j)\cup B_r(y_j)$, then
\begin{equation} \label{eq:optimal}
\lim_{j\to+\infty}\lambda_2(\Omega_j) = \lambda_1(B_r).
\end{equation}
\end{theorem}
To the best of our knowledge, Theorem~\ref{main:thm} is new even in the linear case~$p=2$.
Also, an interesting consequence of the fact that equality in~\eqref{eq:HKS} is never attained is that, for all~$c>0$,
the shape optimization problem
$$ \inf_{|\Omega|=c}\lambda_2(\Omega)$$
does not admit a solution.
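To get a feel for the objects $\lambda_1$ and $\lambda_2$ appearing in Theorem~\ref{main:thm}, one can discretize the linear case $p=2$ in one dimension. The sketch below is entirely ours (a crude finite-difference scheme on a truncated box, with the singular cell of the fractional kernel simply skipped and the tail of the exterior integral beyond the box neglected); it assembles $-\Delta+(-\Delta)^s$ on $\Omega=(-1,1)$ with the exterior condition $u=0$ and returns the two smallest eigenvalues:

```python
import numpy as np

s_frac, L, n = 0.5, 4.0, 400            # fractional order s, truncation box (-L, L)
x = np.linspace(-L, L, n)
h = x[1] - x[0]
idx = np.where(np.abs(x) < 1.0)[0]      # unknowns live in Omega = (-1, 1) only
m = len(idx)

# Local part: standard 3-point Dirichlet Laplacian on Omega.
A_loc = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2

# Nonlocal part: (-Delta)^s u(x_i) ~ 2 sum_j (u(x_i) - u(x_j)) h / |x_i - x_j|^{1+2s};
# u vanishes at grid points outside Omega, so those only feed the diagonal.
A_frac = np.zeros((m, m))
for a, i in enumerate(idx):
    d = np.abs(x[i] - x)
    kern = np.zeros(n)
    mask = d > h / 2                    # skip the singular cell at y = x_i
    kern[mask] = 2.0 * h / d[mask] ** (1.0 + 2.0 * s_frac)
    A_frac[a, a] = kern.sum()
    for b, j in enumerate(idx):
        if j != i:
            A_frac[a, b] = -kern[j]

evals = np.sort(np.linalg.eigvalsh(A_loc + A_frac))
l1, l2 = evals[0], evals[1]
print(l1, l2)
```

As expected, the matrix is symmetric and positive definite, so $\lambda_2>\lambda_1>0$; the local part alone would already give $\lambda_1\approx(\pi/2)^2$ on $(-1,1)$, and the nonnegative nonlocal part pushes both eigenvalues up.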
Before diving into the technicalities of the proof of Theorem~\ref{main:thm},
we recall in the forthcoming Section~\ref{SEHI}
some classical motivations to study first and second eigenvalue problems,
then we devote Section~\ref{SEHI2} to showcase
the available results on the shape optimization problems related to the first and the second eigenvalues
of several elliptic operators.
\subsection{The importance of the first and second eigenvalues}\label{SEHI}
The notion of eigenvalue seems to date back to the 18th century, due to the works of Euler
and Lagrange on rigid bodies. Possibly inspired by Helmholtz, in his study of integral operators~\cite{HILB}
Hilbert introduced the terminology of ``Eigenfunktion'' and ``Eigenwert'',
from which the modern terminology of ``eigenfunction'' and ``eigenvalue'' originated.
The analysis of eigenvalues also became topical in quantum mechanics,
being equivalent in this setting to the energy of a quantum state of a system,
and in general in the study of wave phenomena, to distinguish high and low frequency components.
In modern technologies, a deep understanding of eigenvalues has become a central theme of research,
especially due to several ranking algorithms, such as PageRank (used by search engines such as Google to rank results)
and EigenTrust (used by peer-to-peer networks to establish a trust value
based on the record of authentic and corrupted resources). In a nutshell, these algorithms typically have entries
(e.g. the page rank of a given website, or the trust value of a peer) that are measured as linear superpositions
of the other entries. For instance (see Section~2.1.1 in~\cite{BRIN}, neglecting for simplicity damping factors)
one can model the page rank~$p_i$ of website~$i$ in terms of
the ratio~$R_{ij}$ between the
number of links outbound from website~$j$ to page~$i$ and the total number of outbound links of website~$j$, namely
\begin{equation}\label{FIN9}
p_i=\sum_j R_{ij} p_j.\end{equation}
Whether this is a finite or infinite sum boils
down to a merely philosophical question, given the huge number of websites explored by Google, but let us stick for
the moment with the discrete case of finitely many websites.
Interestingly, $p_i$ basically gives the probability that a random surfer visits website~$i$ by following the available links in the web.
\begin{figure}\label{PIPEDItangeFI}
\end{figure}
Now, in operator form, one can write~\eqref{FIN9}
as~$p=Rp$, with the matrix~$R$ known in principle from the outbound links of the websites and the ranking array~$p$ to be determined. Thus, up to diagonalizing~$R$, the determination of~$p$
reduces to the determination of the eigenvectors of~$R$,
or equivalently to the determination of the eigenvectors of the inverse matrix~$A:=R^{-1}$,
and this task can be accomplished, for instance, by iterative algorithms.
The simplest of these algorithms used in PageRank is probably the power iteration method.
For instance, if one defines~$\eta_{k+1}:=\frac{A\eta_k}{|A\eta_k|}$, given
a random starting vector~$\eta_0$, it follows that~$\eta_k=\frac{A^k\eta_0}{|A^k\eta_0|}$
and consequently, if~$\eta_0=\sum_{j} c_j w_j$, where~$w_j$ are the eigenvectors of~$A$ with corresponding eigenvalues~$\mu_1>\mu_2\ge\mu_3\ge...$ (normalized to have unit length), we find that
$$ \eta_k=\frac{\sum_{j} c_j \mu_j^k w_j}{\left|\sum_{j} c_j \mu_j^k w_j\right|}=
\frac{\sum_{j} d_{jk} w_j}{\left|\sum_{j} d_{jk} w_j\right|}=\frac{w_1+\sum_{j\ne1} d_{jk} w_j}{\left|{\rm{sign}} (c_1)w_1+\sum_{j\ne1} d_{jk} w_j\right|},
$$
with~$d_{jk}:=\frac{c_j \mu_j^k}{c_1 \mu_1^k}$ (here we are assuming that the eigenvalues are positive
and that, in view of the randomness of~$\eta_0$, we have that~$c_1\ne0$).
Since, assuming for simplicity that the eigenvectors~$w_j$ form an orthonormal basis,
$$ \left|\sum_{j\ne1} d_{jk} w_j\right|^2=\sum_{j\ne1} d_{jk}^2=
\sum_{j\ne1}\frac{c_j^2 \mu_j^{2k}}{c_1^2 \mu_1^{2k}}=O\left(\frac{\mu_2^{2k}}{\mu_1^{2k}}\right),$$
it follows that
$$ \eta_k=w_1+O\left(\frac{\mu_2^{k}}{\mu_1^{k}}\right)
$$
and accordingly~$\eta_k$ approximates the eigenvector~$w_1$ with a
rate of convergence governed by the ratio~$\frac{\mu_2}{\mu_1}<1$.
That is, if~$\lambda_1<\lambda_2\le\lambda_3\le\dots$ are the eigenvalues of the matrix~$R$,
the above rate of convergence is dictated by the ratio~$\frac{\lambda_1}{\lambda_2}$
of the smallest and second smallest eigenvalues of~$R$. This is one simple but, in our opinion, quite convincing
example of the importance of the first two eigenvalues in problems with concrete applications.
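The power iteration just described can be sketched in a few lines of code. The snippet below is only an illustration of the convergence mechanism (a toy diagonal matrix is used so that the eigenpairs~$w_j$, $\mu_j$ are known exactly; it is not the production PageRank algorithm, which also involves the damping factor neglected above):

```python
import math

def power_iteration(A, eta0, steps):
    """Iterate eta_{k+1} = A eta_k / |A eta_k| (plain lists, no NumPy)."""
    eta = eta0[:]
    for _ in range(steps):
        y = [sum(A[i][j] * eta[j] for j in range(len(eta))) for i in range(len(A))]
        norm = math.sqrt(sum(t * t for t in y))
        eta = [t / norm for t in y]
    return eta

# Toy diagonal matrix: eigenvalues mu_1 = 3 > mu_2 = 2 >= mu_3 = 1,
# eigenvectors = the canonical basis, so w_1 = (1, 0, 0).
A = [[3.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
eta = power_iteration(A, [1.0, 1.0, 1.0], 40)
# The error decays like (mu_2/mu_1)^k = (2/3)^40, i.e. about 1e-7 here.
```

After 40 iterations the components along $w_2$ and $w_3$ are of size $(\mu_2/\mu_1)^{40}\approx10^{-7}$, confirming that the spectral ratio alone dictates the speed of the method.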
To confirm the importance of the notion of eigenvalues in the modern tech\-no\-lo\-gies,
see Figure~\ref{PIPEDItangeFI} for a Google Ngram Viewer chart of the frequencies of use
of the words eigenvalue, eigenfunction and eigenvector in the last~120 years.
Also, to highlight the importance of the difference between the first and second eigenvalues,
see Figure~\ref{PIPEDItangeFI1} for a Google Ngram Viewer chart of the frequencies of use
of the words eigengap and spectral gap in the last~120 years.
In the case of the Google PageRank,
an efficient estimate of the eigengap taking into account the damping factor has been proposed in~\cite{TECH}.
\begin{figure}
\caption{Google Ngram Viewer chart of the frequencies of use of the words eigengap and spectral gap in the last~120 years.}
\label{PIPEDItangeFI1}
\end{figure}
\subsection{Shape optimization problems for the first and second eigenvalues in the context of elliptic (linear and nonlinear, classical and fractional) equations}\label{SEHI2}
Now we leave the realm of high-tech applications and come back to the setting of partial differential equations and fractional equations: in this fra\-me\-work, we recall below some of the main results about
shape optimization problems related to the first and the second eigenvalues.
\subsubsection{The case of the Laplacian}\label{p2kd}
One of the classical shape optimization problems concerns the detection of
the domain that minimizes the first eigenvalue of the Laplacian with homogeneous boundary conditions.
This is the content of the Faber--Krahn inequality~\cite{FABER, KRA}, whose result can be stated
by saying that among all domains of fixed volume, the ball has the smallest first eigenvalue.
In particular, as a physical application, one has that among all drums of equal area, the circular drum produces the lowest fundamental tone, and this somewhat matches our intuition, since a very elongated rectangular drum produces a high pitch
related to the oscillations along the short edge.
Another physical consequence of the Faber--Krahn inequality is that
among all the regions of a given volume with the boundary maintained at
a constant temperature, the one which dissipates heat at the
slowest possible rate is the sphere, and this also corresponds to our everyday life experience
of spheres mi\-ni\-mi\-zing
contact with the external environment thus providing the optimal possible insulation.
{F}rom the mathematical point of view, the Faber--Krahn inequality also offers a classical stage for rearrangement
methods and variational characterizations of eigenvalues.
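The Faber--Krahn inequality can be checked numerically in a simple planar case, comparing a disk and a square of equal area via the classical closed-form eigenvalues (the Bessel zero~$j_{0,1}$ below is a standard tabulated constant; this is a sanity check of the statement, of course, not part of any proof):

```python
import math

# First positive zero of the Bessel function J_0 (standard constant).
J01 = 2.404825557695773

area = math.pi                      # common area for both domains
r = math.sqrt(area / math.pi)       # disk radius (= 1 here)
a = math.sqrt(area)                 # square side length

# First Dirichlet eigenvalue of the disk: (j_{0,1}/r)^2  ~ 5.783.
lam1_disk = (J01 / r) ** 2
# First Dirichlet eigenvalue of the square: 2*pi^2/a^2 = 2*pi ~ 6.283.
lam1_square = 2 * math.pi ** 2 / a ** 2

# Faber--Krahn: the disk has the smallest first eigenvalue.
assert lam1_disk < lam1_square
```

The gap between $j_{0,1}^2\approx5.78$ and $2\pi\approx6.28$ quantifies, in this special case, how far the square is from the optimal shape.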
In view of the discussion in Section~\ref{SEHI}, the subsequent natural question
concerns the optimal shape for the second eigenvalue. This problem
is addressed by the Hong--Krahn--Szeg\"{o} inequality~\cite{7CK, 7CH, C7P},
which asserts that
among all domains of fixed volume, the disjoint union of two equal balls
has the smallest second eigenvalue.
Therefore, for the case of the Laplacian with homogeneous Dirichlet data, the shape optimization problems related to both the first and the second eigenvalues are solvable and the solution has a simple geometry.
It is also interesting to point out a conceptual connection between
the Faber--Krahn and the Hong--Krahn--Szeg\"{o} inequalities, in the sense that the proof of the second
typically uses the first one as a basic ingredient. More specifically, the strategy to
prove the Hong--Krahn--Szeg\"{o} inequality is usually:
\begin{itemize}
\item Use that in a connected open set all eigenfunctions except the \label{BULL}
first one must change sign,
\item Deduce that~$\lambda_2(\Omega)=\max\{\lambda_1(\Omega_+),\lambda_1(\Omega_-)\}$,
for suitable subdomains~$\Omega_+$ and~$\Omega_-$, which are either nodal domains
of the second eigenfunction, if~$\Omega$ is connected, or otherwise connected components of~$\Omega$,
\item Utilize the Faber--Krahn inequality to show that~$\lambda_1(\Omega_\pm)$ is reduced
if we replace~$\Omega_\pm$ with a ball of volume~$|\Omega_\pm|$,
\item Employ the homogeneity of the problem to deduce that the volumes of these two balls are equal.
\end{itemize}
That is, roughly speaking, a cunning use of the Faber--Krahn inequality allows one to reduce to the case
of disjoint balls, which can thus be addressed specifically.
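The Hong--Krahn--Szeg\"{o} statement can likewise be sanity-checked numerically in the plane, comparing the second eigenvalue of a single disk with that of two disjoint equal disks of the same total area (again via tabulated Bessel zeros; a numerical illustration, not a proof):

```python
import math

J01 = 2.404825557695773  # first positive zero of the Bessel function J_0
J11 = 3.831705970207512  # first positive zero of the Bessel function J_1

# Single disk of area pi (radius 1): its second Dirichlet eigenvalue
# is j_{1,1}^2 ~ 14.68.
lam2_one_disk = J11 ** 2

# Two disjoint disks of area pi/2 each (radius 1/sqrt(2)): the spectrum
# of the union is the union of the spectra, so its second eigenvalue
# equals the first eigenvalue of one half-area disk, 2*j_{0,1}^2 ~ 11.57.
lam2_two_disks = 2 * J01 ** 2

# Hong--Krahn--Szego: the two equal disks give the smaller second eigenvalue.
assert lam2_two_disks < lam2_one_disk
```

Note how the computation for the two disks follows exactly the bullet-point strategy above: the second eigenvalue of the disconnected optimizer is the common first eigenvalue of its two components.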
\subsubsection{The case of the $p$-Laplacian}
A natural extension of the optimal shape results for the Laplacian recalled in Section~\ref{p2kd}
is the investigation of the non\-li\-near operator setting and in particular
the case of the $p$-Laplacian.
This line of research was carried out in~\cite{FRANZ}, in which a complete analogue of the results of
Section~\ref{p2kd} has been established for the $p$-Laplacian.
In particular, the first Dirichlet ei\-gen\-va\-lue of the $p$-Laplacian is minimized by the ball and
the second by any disjoint union of two equal balls.
We stress that, in spite of the similarity of the results obtained, the nonlinear case presents its
own specific peculiarities.
In particular, while in the case of the $p$-Laplacian one can still define the first eigenvalue by minimizing
a Rayleigh quotient, the notion of higher eigenvalues becomes in principle more delicate,
since the discreteness of the spectrum is not guaranteed and the eigenvalue theory for nonlinear operators
offers plenty of open problems at a fundamental level.
For the second eigenvalue, however, one can obtain a variational characterization in terms of a mountain-pass result,
which still allows one to define a spectral gap between the smallest and the second smallest eigenvalues.
\subsubsection{The cases of the fractional Laplacian and of the fractional $p$-Laplacian}
We now consider the question posed by the minimization of the first and second eigenvalues in a nonlocal setting.
The optimal shape problem
for the first eigenvalue of the fractional Laplacian with homogeneous external datum was addressed in~\cite{BANL, IFBBPL, SIRE, CINTI},
showing that the ball is the optimizer.
As for the nonlinear case, the spectral properties of the fractional $p$-Laplacian possess their own
special features, see~\cite{FRNPAL},
and they typically combine the dif\-fi\-cul\-ties coming from the nonlocal world with those arising from
the theory of nonlinear operators.
In~\cite{IFBBPL} the optimal shape problem for the first Dirichlet eigenvalue of the fractional $p$-Laplacian
was addressed, by detecting the optimality of the ball as a consequence of a general
P\'olya--Szeg\"o principle.
For the second eigenvalue, however, the situation in the
nonlocal case is quite different from the classical one,
since in general nonlocal energy functionals are deeply influenced by
the mutual position of the different connected
components of the domain, see~\cite{LIND}.
In particular, the counterpart of the Hong--Krahn--Szeg\"{o} inequality for the fractional Laplacian and the fractional $p$-Laplacian was established in~\cite{BP}
and it presents significant differences with the classical case: in particular, the shape optimizer for
the second eigenvalue of the fractional $p$-Laplacian with homogeneous external datum does not exist
and one can bound such an eigenvalue from below by the first eigenvalue of a ball with half of the volume
of the given domain (and this is the best lower bound possible, since the case of a domain consisting of two equal balls
drifting away from each other would attain such a bound in the limit).
\subsubsection{The case of mixed operators}
The study of mixed local/nonlocal operators has recently received an increasing level of attention,
both in view of their intriguing mathematical structure, which combines the classical setting
and the features typical of nonlocal operators in a framework that is
not scale-invariant
\cite{GaMe, JAKO1-05, JAKO2, BARLES08, Foondun, BISWAP10, CK, BAR12, CKSV, TESO17, DFR, SilvaSalort, GQR, BDVVAcc, ABA, BSM, dSOR, DPLV1, DPLV2, GarainKinnunen, GarainKinnunen2, GarainKinnunen3, GarainUkhlov, CABRE, BMV, BDVV, SalortVecchi},
and of their importance in practical applications such as the animal foraging hypothesis~\cite{DV, PAG}.
In regard to the shape optimization problem, a Faber--Krahn inequality for mixed local and nonlocal linear operators when~$p=2$ has been established in~\cite{BDVV2},
showing the optimality of the ball in the minimization of the first eigenvalue.
The corresponding inequality for the nonlinear setting of~\eqref{ATORE}
will be given in the forthcoming Theorem~\ref{thm.FK}.
The inequality of Hong--Krahn--Szeg\"{o} type for mixed local and nonlocal linear operators
established in Theorem~\ref{main:thm} thus completes the study of the optimal shape problems for the first
and second eigenvalues of the operator in~\eqref{ATORE}.
\subsection{Plan of the paper}
The rest of this paper is organized as follows. Section~\ref{sec.Prel}
sets up the notation and collects some auxiliary results from the existing literature.
In Section~\ref{INTRERSEC} we discuss a regularity theory which, in our setting,
plays an important role in the proof of Theorem~\ref{main:thm}, by allowing us to speak about nodal regions
for the corresponding eigenfunction (recall the bullet point strategy pre\-sen\-ted
on page~\pageref{BULL}).
In any case, this regularity theory holds in a more general setting and can well come in handy in other
situations as well.
Finally, Section~\ref{LULT} introduces the corresponding Faber--Krahn inequality for
the operator in~\eqref{ATORE}
and completes the proof of Theorem~\ref{main:thm}.
\section{Preliminaries}\label{sec.Prel}
To deal with the nonlinear and mixed local/nonlocal operator in~\eqref{ATORE}, given
an open and bounded set~$\Omega\subseteq\R^n$, it is convenient to introduce the space
$$\mathcal{X}_0^{1,p}(\Omega)\subseteq W^{1,p}(\R^n),$$
defined
as the closure of $C_0^{\infty}(\Omega)$ with respect
to the \emph{global norm}
$$u\mapsto \Big(\int_{\R^n}|\nabla u|^p\,d x\Big)^{1/p}.$$
We highlight that,
since $\Omega$ is \emph{bounded}, $\mathcal{X}^{1,p}_0(\Omega)$ can be
equivalently defined by taking the closure of $C_0^\infty(\Omega)$ with respect
to the full norm
$$u\mapsto \bigg(\int_{\R^n}|u|^p\,d x\bigg)^{1/p}
+\bigg(\int_{\R^n}|\nabla u|^p\,dx\bigg)^{1/p};$$
however, we stress that $\mathcal{X}_0^{1,p}(\Omega)$ \emph{is different}
from the usual space $W^{1,p}_0(\Omega)$, which is
de\-fi\-ned
as the closure of $C_0^{\infty}(\Omega)$ with respect to the norm
$$u\mapsto \Big(\int_{\Omega}|\nabla u|^p\,d x\Big)^{1/p}.$$
As a matter of fact, while
the belonging of a function $u$ to $W^{1,p}_0(\Omega)$ only
depends on its behavior \emph{inside of $\Omega$}
(actually, $u$ does not even need to be defined outside of $\Omega$),
the belonging of $u$ to $\mathcal{X}_0^{1,p}(\Omega)$ is a \emph{global}
condition, and it depends on the behavior of $u$ \emph{on the whole space $\R^n$}
(in particular, $u$ \emph{has to be defined} on $\R^n$).
Just to give an example of the difference between these spaces,
let $u\in C^\infty_0(\R^n)\setminus\{0\}$ be such that
$$\mathrm{supp}(u)\cap \overline{\Omega} = \varnothing.$$
Since $u\equiv 0$ inside of $\Omega$, we clearly have that
$u\in W^{1,p}_0(\Omega)$; on the other hand, since $u\not\equiv 0$
in $\R^n\setminus\Omega$,
one has $u\notin \mathcal{X}_0^{1,p}(\Omega)$ (even if $u\in W^{1,p}(\R^n)$).
Although they \emph{do not coincide}, the spaces
$\mathcal{X}_0^{1,p}(\Omega)$ and $W^{1,p}_0(\Omega)$
are re\-la\-ted: to be more precise, using \cite[Proposition\,9.18]{Brezis}
and taking into account the definition
of $\mathcal{X}_0^{1,p}(\Omega)$, one can see that
\begin{itemize}
\item[(i)] if $u\in W_0^{1,p}(\Omega)$, then
$u\cdot\mathbf{1}_\Omega\in \mathcal{X}_0^{1,p}(\Omega)$;
\vspace*{0.1cm}
\item[(ii)] if $u\in \mathcal{X}_0^{1,p}(\Omega)$, then
$u\big|_{\Omega}\in W_0^{1,p}(\Omega)$.
\end{itemize}
Moreover, we can actually \emph{characterize} $\mathcal{X}_0^{1,p}(\Omega)$ as follows:
$$\mathcal{X}_0^{1,p}(\Omega) = \{u\in W^{1,p}(\R^n):\,
\text{$u\big|_\Omega\in W^{1,p}_0(\Omega)$ and $u = 0$ a.e.\,in $\R^n\setminus\Omega$}\}.$$
The main issue in trying to use (i)-(ii) to identify
$W_0^{1,p}(\Omega)$ with $\mathcal{X}_0^{1,p}(\Omega)$ is that,
if $u$ is \emph{globally defined} and $u\in W^{1,p}(\R^n)$, then
$$u\big|_{\Omega}\in W^{1,p}_0(\Omega)\,\,\Rightarrow\,\,u\cdot \mathbf{1}_\Omega\in
\mathcal{X}_0^{1,p}(\Omega);$$
however, we cannot say (in general) that
$u = u\cdot\mathbf{1}_\Omega$.
Even if assertions (i)-(ii) do not allow one to identify
$\mathcal{X}_0^{1,p}(\Omega)$ with $W_0^{1,p}(\Omega)$,
they can be used to deduce several
properties of the space
$\mathcal{X}_0^{1,p}(\Omega)$ starting from their
analogues in $W_0^{1,p}(\Omega)$; for example, we have
the following fact, which shall be used in what follows:
$$u\in \mathcal{X}_0^{1,p}(\Omega)
\,\,\Rightarrow\,\,|u|,\,u^+,\,u^-\in \mathcal{X}_0^{1,p}(\Omega).$$
\begin{remark} \label{rem:Omegaregular}
In the particular case when the open set $\Omega$ is of class $C^1$,
it follows from \cite[Proposition\,9.18]{Brezis} that, if $u\in W^{1,p}(\R^n)$
and $u = 0$ a.e.\,in $\R^n\setminus\Omega$, then
$$u\big|_{\Omega}\in W_0^{1,p}(\Omega).$$
As a consequence, we have
$$\mathcal{X}_0^{1,p}(\Omega) =
\{u\in W^{1,p}(\Omega):\,\text{$u = 0$ a.e.\,in $\R^n\setminus\Omega$}\}.$$
This fact shows that, when $\Omega$ is sufficiently regular,
$\mathcal{X}_0^{1,p}(\Omega)$ \emph{coincides}
with the space $\mathbb{X}_p(\Omega)$ introduced in
\cite{BDVV} (for $p = 2$) and in \cite{BMV} (for a general $p > 1$).
\end{remark}
For future reference, we introduce the following set
\begin{equation} \label{eq:defM}
\mathcal{M}(\Omega) := \bigg\{u\in\mathcal{X}_0^{1,p}(\Omega):\,
\int_{\R^n}|u|^p\,d x = 1\bigg\}.
\end{equation}
After these preliminaries, we can turn our attention
to the \emph{Dirichlet pro\-blem} for the operator
$\LL_{p,s}$.
Throughout the rest of this paper, to simplify the notation we set
\begin{equation} \label{eq:defJp}
J_p(t) := |t|^{p-2}t\qquad {\mbox{ for all }}t\in\R.
\end{equation}
Moreover, we define
$$p^* := \begin{cases}
\dfrac{np}{n-p} & \text{if $p < n$}, \\
+\infty & \text{if $p\geq n$},
\end{cases}\quad\text{and}\quad
(p^*)' := \begin{cases}
\dfrac{p^*}{p^*-1} & \text{if $p < n$}, \\
1 & \text{if $p\geq n$}.
\end{cases}
$$
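As a quick sanity check of these definitions, the following snippet computes $p^*$ and $(p^*)'$ in a sample case and verifies that they are H\"older-conjugate exponents, which is precisely the property used below when pairing $f\in L^{(p^*)'}(\Omega)$ with $v\in L^{p^*}(\Omega)$ via H\"older's inequality:

```python
def critical_exponents(p, n):
    """Return (p*, (p*)') following the case-by-case definition above."""
    if p < n:
        ps = n * p / (n - p)
        return ps, ps / (ps - 1)
    return float("inf"), 1.0

# Sample case n = 3, p = 2: p* = 6 and (p*)' = 6/5.
ps, ps_conj = critical_exponents(2, 3)
# The two exponents are Holder-conjugate: 1/p* + 1/(p*)' = 1
# (with the convention 1/inf = 0, this also covers the case p >= n).
assert abs(1 / ps + 1 / ps_conj - 1) < 1e-12
```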
\begin{definition} \label{def:weaksol}
Let $q\geq (p^*)'$, and let
$f\in L^q(\Omega)$. We say that a function~$u\in W^{1,p}(\R^n)$ is a
\emph{weak solution}
to the equation
\begin{equation} \label{eq:mainPDE}
\LL_{p,s}u = f \qquad\text{in $\Omega$}
\end{equation}
if, for every
$\phi\in \mathcal{X}_0^{1,p}(\Omega)$, the following identity is satisfied:
\begin{equation} \label{eq:defweaksol}
\begin{split}
& \int_{\Omega}|\nabla u|^{p-2}\langle\nabla u, \nabla \phi\rangle\,d x
\\
& \qquad\qquad
+ \iint_{\R^{2n}}\frac{J_p(u(x)-u(y))(\phi(x)-\phi(y))}{|x-y|^{n+ps}}\,dx\,dy
= \int_{\Omega}f\phi\,d x.
\end{split}
\end{equation}
Moreover, given any $g\in W^{1,p}(\R^n)$, we say that a function
$u\in W^{1,p}(\R^n)$ is a weak solution to the
\emph{$(\LL_{p,s})$-Dirichlet
problem}
\begin{equation} \label{eq:DirPbfg}
\begin{cases}
\LL_{p,s}u = f & \text{in $\Omega$}, \\
u = g & \text{in $\R^n\setminus\Omega$},
\end{cases}
\end{equation}
if $u$ is a weak solution to \eqref{eq:mainPDE} and, in addition,
$$u- g\in\mathcal{X}_0^{1,p}(\Omega).$$
\end{definition}
\begin{remark} \label{rem:welldef}
(1)\,\,We point out that the above definition is well-posed: indeed, if~$u,v\in W^{1,p}(\R^n)$, by H\"older's inequality
and \cite[Proposition\,2.2]{DRV} we get
\begin{align*}
& \iint_{\R^{2n}}\frac{|u(x)-u(y)|^{p-1}|v(x)-v(y)|}{|x-y|^{n+ps}}\,dx\,d y
\ \\
& \qquad
\leq
\bigg(\iint_{\R^{2n}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+ps}}\,dx\,d y
\bigg)^{1/p}
\bigg(\iint_{\R^{2n}}\frac{|v(x)-v(y)|^{p}}{|x-y|^{n+ps}}\,dx\,d y
\bigg)^{1/p} \\[0.2cm]
& \qquad
\leq \mathbf{c}\,\|u\|_{W^{1,p}(\R^n)}\,\|v\|_{W^{1,p}(\R^n)} < +\infty.
\end{align*}
Moreover, since $f\in L^q(\Omega)$ and $q\geq (p^*)'$, again by H\"older's
inequality and by the Sobolev Embedding Theorem (applied
here to $v\in W^{1,p}(\R^n)$), we have
\begin{align*}
\int_{\Omega}|f||v|\, dx \leq \|f\|_{L^{(p^*)'}(\Omega)}\,\|v\|_{L^{p^*}(\Omega)}
<+\infty.
\end{align*}
(2)\,\,If $u\in W^{1,p}(\R^n)$
is a weak solution to the
{$(\LL_{p,s})$-Dirichlet
problem} \eqref{eq:DirPbfg}, it follows from the definition
of $\mathcal{X}_0^{1,p}(\Omega)$ that
$$(u-g)\big|_{\Omega}\in W^{1,p}_0(\Omega)\qquad\text{and}\qquad
\text{$u = g$ a.e.\,in $\R^n\setminus\Omega$}.$$
Thus, $\mathcal{X}_0^{1,p}(\Omega)$ is the `right space' for the weak formulation
of \eqref{eq:DirPbfg}.
\end{remark}
With Definition \ref{def:weaksol} at hand, we now introduce the notion
of Dirichlet ei\-gen\-va\-lue/ei\-gen\-fun\-ction
for the operator $\LL_{p,s}$.
\begin{definition} \label{def:eigenval}
We say that $\lambda\in\R$ is a \emph{Dirichlet eigenvalue}
for $\LL_{p,s}$ if there exists a solution
$u\in W^{1,p}(\R^n)\setminus\{0\}$ of the
$(\LL_{p,s})$-Dirichlet problem
\begin{equation} \label{eq:DirPbEigen}
\begin{cases}
\LL_{p,s}u = \lambda|u|^{p-2}u & \text{in $\Omega$}, \\
u = 0 & \text{in $\R^n\setminus\Omega$}.
\end{cases}
\end{equation}
In this case, we say that $u$ is an \emph{eigenfunction} associated with $\lambda$.
\end{definition}
\begin{remark} \label{rem:defwellposedEigenval}
We point out that Definition
\ref{def:eigenval} is {well-posed}. Indeed,
if $u$ is any function in $W^{1,p}(\R^n)$,
by the Sobolev Embedding Theorem we have
$$f := |u|^{p-2}u\in L^{\frac{p^*}{p-1}}(\Omega);$$
then, a direct computation shows that $q := p^*/(p-1)\geq (p^*)'$. As a consequence,
the notion of weak solution
for \eqref{eq:DirPbEigen} agrees with the one contained in Definition~\ref{def:weaksol}.
In particular, if $u$ is an
{eigenfunction} associated with some eigenvalue $\lambda$, then
$$u\in\mathcal{X}_0^{1,p}(\Omega),$$
and thus $u\big|_\Omega\in W_0^{1,p}(\Omega)$ and $u = 0$ a.e.\,in $\R^n\setminus\Omega$.
\end{remark}
After these definitions, we close
the section by reviewing some results
about eigenvalues/eigenfunctions for $\LL_{p,s}$
which shall be used here below.
To begin with, we recall the following result proved in \cite{BMV} which establishes the existence
of the smallest eigenvalue and detects its basic properties.
\begin{proposition}[\protect{\cite[Proposition\,5.1]{BMV}}] \label{prop.BMV}
The smallest ei\-gen\-va\-lue
$\lambda_1(\Omega) $ for the operator $\LL_{p,s}$ is strictly positive and satisfies the following properties:
\begin{enumerate}
\item $\lambda_1(\Omega)$ is simple;
\item the eigenfunctions associated with $\lambda_1(\Omega)$ do not change sign in $\R^n$;
\item every eigenfunction associated to an eigenvalue
$$\lambda > \lambda_1(\Omega)$$
is nodal, i.e., sign changing.
\end{enumerate}
Moreover, $\lambda_1(\Omega)$ admits the following variational characterization
\begin{equation} \label{eq:deflambdavar}
\lambda_1(\Omega) = \min_{u\in\mathcal{M}(\Omega)}
\bigg(\int_\Omega|\nabla u|^p\,dx + \iint_{\R^{2n}}
\frac{|u(x)-u(y)|^p}{|x-y|^{n+ps}}\,dx\,dy\bigg),
\end{equation}
where $\mathcal{M}(\Omega)$ is as in \eqref{eq:defM}. The minimum is always attained, and the
ei\-gen\-fun\-ctions for $\LL_{p,s}$ associated with $\lambda_1(\Omega)$
are precisely the minimizers in \eqref{eq:deflambdavar}.
\end{proposition}
We observe that, on account of Proposition \ref{prop.BMV},
there exists a \emph{unique non-negative}
eigenfunction $u_0\in\mathcal{M}(\Omega)\subseteq\mathcal{X}_0^{1,p}(\Omega)$
associated with $\lambda_1(\Omega)$;
in par\-ti\-cu\-lar, $u_0$ is a minimizer in \eqref{eq:deflambdavar}, so that
\begin{equation} \label{eq:uzerolambda1}
\lambda_1(\Omega) =
\int_\Omega|\nabla u_0|^p\,dx + \iint_{\R^{2n}}
\frac{|u_0(x)-u_0(y)|^p}{|x-y|^{n+ps}}\,dx\,dy.
\end{equation}
We shall refer to $u_0$ as the \emph{principal eigenfunction} of $\LL_{p,s}$.
The next result was proved in \cite{GoelSreenadh} and concerns the \emph{second}
eigenvalue for $\LL_{p,s}$.
\begin{theorem}[\protect{\cite[Section\,5]{GoelSreenadh}}]\label{thm:GS}
We define:
\begin{equation} \label{eq:deflambda2}
\lambda_2(\Omega) :=
\inf_{f\in \mathcal{K}}\max_{u\in \mathrm{Im}(f)}
\bigg\{\int_{\Omega}|\nabla u|^p\,d x
+ \iint_{\R^{2n}}\frac{|u(x)-u(y)|^p}{|x-y|^{n+ps}}\,dx\,dy\bigg\},
\end{equation}
where $\mathcal{K} := \{f:S^1\to\mathcal{M}(\Omega):\,\text{$f$ is continuous and odd}\}$, with~$\mathcal{M}(\Omega)$ as in~\eqref{eq:defM}.
Then:
\begin{enumerate}
\item $\lambda_2(\Omega)$ is an eigenvalue for $\LL_{p,s}$;
\item $\lambda_2 (\Omega) > \lambda_1(\Omega)$;
\item If $\lambda > \lambda_1(\Omega)$ is an
eigenvalue for $\LL_{p,s}$, then $\lambda \geq \lambda_2(\Omega)$.
\end{enumerate}
\end{theorem}
In the rest of this paper, we shall refer to $\lambda_1(\Omega)$ and $\lambda_2(\Omega)$
as, respectively, the \emph{first and second eigenvalue}
of $\LL_{p,s}$ (in $\Omega$). We notice that,
as a con\-se\-que\-nce of~\eqref{eq:deflambdavar}-\eqref{eq:deflambda2},
both $\lambda_1(\cdot)$ and $\lambda_2(\cdot)$ are \emph{translation-invariant},
that is,
$$\lambda_1(x_0+\Omega) = \lambda_1(\Omega)\qquad{\mbox{ and }} \qquad
\lambda_2(x_0+\Omega) = \lambda_2(\Omega).$$
To proceed further, we now recall the following \emph{global boundedness}
result for the ei\-gen\-fun\-ctions of
$\LL_{p,s}$ (associated with \emph{any}
eigenvalue $\lambda$) established in \cite{BMV}.
\begin{theorem}[\protect{\cite[Theorem\,4.4]{BMV}}]\label{thm:GlobalBd}
Let $u\in
\mathcal{X}_0^{1,p}(\Omega)\setminus\{0\}$ be an eigenfunction for~$\LL_{p,s}$, as\-so\-cia\-ted
with an eigenvalue $\lambda \geq \lambda_1(\Omega)$. Then,
$u\in L^\infty(\R^n)$.
\end{theorem}
\begin{remark} \label{rem:proofBd}
Actually, \cite[Theorem\,4.4]{BMV}
establishes the global
bound\-ed\-ness of any \emph{non-negative} weak solution to the more general Dirichlet problem
$$\begin{cases}
\LL_{p,s}u = f(x,u) & \text{in $\Omega$}, \\
u \equiv 0 & \text{a.e.\,in $\R^n\setminus\Omega$},
\end{cases}$$
where $f:\Omega\times\R\to\R$ is a Carath\'{e}odory function satisfying the properties
\begin{itemize}
\item[(a)] $f(\cdot,t)\in L^\infty(\Omega)$ for every $t\geq 0$;
\item[(b)] There exists a constant $c_p > 0$ such that
$$|f(x, t)| \leq c_p(1+t^{p-1})\qquad\text{for a.e.\,$x\in\Omega$ and every $t\geq 0$}.$$
\end{itemize}
However, by scrutinizing the proof of the theorem, it is easy
to check that the same argument
can be applied to our context, where we have
$$f(x,t) = \lambda|t|^{p-2}t\qquad
{\mbox{ for all }} x\in\Omega {\mbox{ and }} t\in\R,$$
but we do not make any assumption on the sign of $u$
(see also \cite[Proposition\,4]{SerVal2}).
\end{remark}
Finally, we state here an algebraic lemma which shall be useful in the
forth\-co\-ming computations.
\begin{lemma}\label{lem:algebrico}
Let $1<p<+\infty$ be fixed. Then, the following facts hold.
\begin{enumerate}
\item For every $a,b\in \mathbb{R}$
such that $ab\leq 0$, it holds that
\begin{equation*}
J_p(a-b)a \geq
\begin{cases}
|a|^p - (p-1)|a-b|^{p-2}ab, & \text{if $1<p\leq 2$}, \\[0.1cm]
|a|^p - (p-1)|a|^{p-2}ab, & \text{if $p> 2$}.
\end{cases}
\end{equation*}
\item There exists a constant $c_p > 0$ such that
$$|a-b|^p \leq |a|^p+|b|^p + c_p\big(|a|^2+|b|^2\big)^{\frac{p-2}{2}}|ab|,\qquad
\forall\,\,a,b\in\R.$$
\end{enumerate}
\end{lemma}
\section{Interior regularity of the eigenfunctions} \label{INTRERSEC}
In this section we prove the \emph{interior H\"older regularity}
of the eigenfunctions for~$\LL_{p,s}$, which is
a fundamental ingredient for the proof of
Theorem \ref{main:thm}.
As a matter of fact, on account of Theorem \ref{thm:GlobalBd}, we establish
the interior H\"older regularity for any \emph{bounded} weak solution
of the non-homogeneous equation
\eqref{eq:mainPDE}, when
$$f\in L^\infty(\Omega).$$
In what follows, we tacitly understand that
$$\text{$2\leq p \leq n$ and $s\in (0,1)$};$$
mo\-re\-o\-ver, $\Omega\subseteq\R^n$ is a bounded open set and
$f\in L^\infty(\Omega)$.
\begin{remark} \label{rem:choicepleqn}
The reason why we restrict ourselves to consider
$2\leq p\leq n$ follows from the definition
of weak solution to \eqref{eq:mainPDE}.
Indeed, if $u$ is a weak solution
to \eqref{eq:mainPDE}, then by definition we have $u\in W^{1,p}(\R^n)$;
as a consequence, if $p > n$, by the classical Sobolev Embedding Theorem
we can immediately conclude that $u\in C^{0,\gamma}(\R^n)$, where
$\gamma = 1-n/p$.
\end{remark}
In order to state (and prove) the main result
of this section, we need to
fix some notation:
for every $z\in\R^n,\,\rho > 0$ and $u\in L^p(\R^n)$, we define
$$\mathrm{Tail}(u,z,\rho) := \bigg(\rho^p\int_{\R^n\setminus
B_\rho(z)}\frac{|u|^{p}}{|x-z|^{n+ps}}\,dx\bigg)^{1/p}.$$
The quantity $\mathrm{Tail}(u,z,\rho)$
is referred to as the \emph{$(\LL_{p,s})$-tail} of $u$, see e.g.~\cite{KMS, AGNE}.
\begin{theorem} \label{thm:mainHolder}
Let $f\in L^\infty(\Omega)$, and let $u\in W^{1,p}(\R^n)\cap L^\infty(\R^n)$ be
a weak so\-lu\-tion to \eqref{eq:mainPDE}. Then, there exists
some $\beta = \beta(n,s,p)\in (0,1)$ such that
$u\in C^{0,\beta}_{\loc}(\Omega)$.
\vspace*{0.1cm}
More precisely, for every ball $B_{R_0}(z)\Subset\Omega$ we have the estimate
\begin{equation} \label{eq:HolderestimMain}
[u]_{C^{0,\beta}(B_{R_0}(z))}^p
\leq C\Big(\|f\|_{L^\infty(\Omega)}
+ \|u\|_{L^\infty(\Omega)}^p+
\mathrm{Tail}(u,z,R_1)^p+1\Big),
\end{equation}
where
$$R_1 := R_0 + \frac{\mathrm{dist}(B_{R_0}(z),\de\Omega)}{2}$$
and $C > 0$ is a constant independent of $u$ and~$R_1$.
\end{theorem}
In order to prove Theorem \ref{thm:mainHolder}, we follow the
approach in \cite{BLS}; broadly put, the main idea behind this approach
is to \emph{transfer} to the solution $u$ the oscillation estimates proved
in \cite{GarainKinnunen} for the \emph{$\LL_{p,s}$-harmonic functions}.
To begin with, we establish the following basic existence/uniqueness
result for the weak solutions to the $(\LL_{p,s})$-Dirichlet problem
\eqref{eq:DirPbfg}.
\begin{proposition} \label{prop:existuniqweaksol}
Let $f\in L^\infty(\Omega)$ and $g\in W^{1,p}(\R^n)$ be fixed. Then,
there exists a unique solution $u = u_{f,\,g}\in W^{1,p}(\R^n)$
to the Dirichlet problem \eqref{eq:DirPbfg}.
\end{proposition}
\begin{proof}
We consider the space
$$\mathbb{W}(g) := \{u\in W^{1,p}(\R^n):\,u-g\in\mathcal{X}_0^{1,p}(\Omega)\},$$
and the functional $J:\mathbb{W}(g)\to\R$ defined as follows:
\begin{align*}
J(u) & := \frac{1}{p}\int_{\Omega}
|\nabla u|^{p}\,d x
+ \frac{1}{p}\iint_{\Omega\times\Omega}\frac{|u(x)-u(y)|^p}{|x-y|^{n+ps}}\,dx\,dy \\[0.1cm]
& \qquad\quad
+ \frac{2}{p}\iint_{\Omega\times(\R^n\setminus\Omega)}\frac{|u(x)-g(y)|^p}{|x-y|^{n+ps}}\,dx\,dy
-\int_{\Omega}fu\,dx.
\end{align*}
On account of \cite[Remark~2.13]{BLS}, we have that
$J$ is \emph{strictly convex}; hence, by using the
Direct Methods in the Calculus of Variations, we derive that
$J$ has a unique minimizer $u = u_{f,\,g}$ on $\mathbb{W}(g)$, which
is the unique weak solution to \eqref{eq:DirPbfg}.
\end{proof}
Thanks to Proposition \ref{prop:existuniqweaksol}, we can
prove the following result:
\begin{lemma} \label{lem:trequattroBrasco}
Let $f\in L^\infty(\Omega)$ and let $u\in W^{1,p}(\R^n)$ be a weak solution
to \eqref{eq:mainPDE}. Moreover, let $B$ be a given
Euclidean ball such that $B\Subset\Omega$, and let
$v\in W^{1,p}(\R^n)$ be the unique weak solution
to the Dirichlet problem
\begin{equation} \label{eq:DirvLemma}
\begin{cases}
\LL_{p,s}v = 0 & \text{in $B$}, \\
v = u & \text{in $\R^n\setminus B$}.
\end{cases}
\end{equation}
Then, there exists a constant $C = C(n,s,p) > 0$ such that
\begin{equation} \label{eq:estimLemmatrequattroGagl}
[u-v]_{W^{s,p}(\R^n)}^p \leq C|B|^{p'-\frac{p'(n-sp)}{np}}\|f\|_{L^\infty(\Omega)}^{p'}.
\end{equation}
In particular, we have
\begin{equation} \label{eq:estimLemmatrequattroMean}
-\!\!\!\!\!\!\!\int_B|u-v|^p\,dx
\leq C|B|^{p'-\frac{p'(n-sp)}{np}+\frac{sp}{n}-1}\|f\|_{L^\infty(\Omega)}^{p'}.
\end{equation}
\end{lemma}
\begin{proof}
We observe that the existence of $v$
is ensured by
Proposition \ref{prop:existuniqweaksol}.
Then, ta\-king into account that $u$ is a weak solution
to \eqref{eq:mainPDE} and $v$ is the weak solution to \eqref{eq:DirvLemma},
for every $\phi\in \mathcal{X}^{1,p}_0(B)$ we get
\begin{align*}
& \int_{B}\big(|\nabla u|^{p-2}\langle \nabla u,\nabla\phi\rangle-
|\nabla v|^{p-2}\langle \nabla v,\nabla\phi\rangle\big)dx \\
& \quad
+ \iint_{\R^{2n}}\frac{\big(J_p(u(x)-u(y))-J_p(v(x)-v(y))\big)(\phi(x)-\phi(y))}{|x-y|^{n+ps}}
\,dx\,dy
= \int_B f\phi.
\end{align*}
Choosing, in particular, $\phi := u-v$ (notice that, since
$v$ is a weak solution of \eqref{eq:DirvLemma}, by definition we have
$u-v\in\mathcal{X}_0^{1,p}(B)$), we obtain
\begin{equation} \label{eq:dastimareLemmatrequattro}
\begin{split}
& \int_{\Omega}\mathcal{B}(\nabla u,\nabla v)\,dx
+ \iint_{\R^{2n}}
\frac{\big(J_p(t_1)-J_p(t_2)\big)(t_1-t_2)}{|x-y|^{n+ps}}
\,dx\,dy \\
& \qquad
= \int_B f(u-v)\,d x,
\end{split}
\end{equation}
where
$t_1:= u(x)-u(y),\,t_2 := v(x)-v(y)$ and
$$\mathcal{B}(a,b) := |a|^p+|b|^p-(|a|^{p-2}+|b|^{p-2})\langle a,b\rangle
\qquad {\mbox{ for all }} a,b\in\R^n.$$
Now, an elementary computation based on the Cauchy--Schwarz inequality gives
\begin{equation} \label{eq:estimpartelocale}
\mathcal{B}(a,b)\geq 0\qquad{\mbox{ for all }} a,b\in\R^n.
\end{equation}
Moreover, since $p\geq 2$, by exploiting \cite[Remark~A.4]{BLS} we have
\begin{equation} \label{eq:estimpartenonlocale}
\big(J_p(t_1)-J_p(t_2)\big)(t_1-t_2)\geq \frac{1}{C}|t_1-t_2|^p,
\end{equation}
where $C = C(p) > 0$ is a constant only depending on $p$.
Thus, by combining \eqref{eq:dastimareLemmatrequattro},
\eqref{eq:estimpartelocale} and~\eqref{eq:estimpartenonlocale}, we obtain
the following estimate:
\begin{align*}
& [u-v]_{W^{s,p}(\R^n)}^p
= \iint_{\R^{2n}}\frac{|t_1-t_2|^p}{|x-y|^{n+ps}}\,dx\,dy
\\
& \qquad \leq C\bigg(\int_{\Omega}\mathcal{B}(\nabla u,\nabla v)\,dx
+ \iint_{\R^{2n}}
\frac{\big(J_p(t_1)-J_p(t_2)\big)(t_1-t_2)}{|x-y|^{n+ps}}
\,dx\,dy\bigg) \\[0.1cm]
& \qquad \leq C\int_B f(u-v)\,dx \\&\qquad
\leq C\|f\|_{L^\infty(\Omega)}\int_B|u-v|\,dx
\\
& \qquad
\leq C\,|B|^{1-\frac{1}{p^*_s}}\|f\|_{L^\infty(\Omega)}
\|u-v\|_{L^{p^*_s}(B)},
\end{align*}
where
we have also used H\"older's inequality and~$p^*_s > 1$ is the so-called fractional critical exponent, that is,
$$p^*_s := \frac{np}{n-sp}.$$
Finally, by applying the fractional Sobolev
inequality to $\phi = u-v$ (notice that $\phi$ is compactly supported in
$B$), we get
$$[u-v]_{W^{s,p}(\R^n)}^p
\leq C\,|B|^{1-\frac{1}{p^*_s}}\|f\|_{L^\infty(\Omega)}
[u-v]_{W^{s,p}(\R^n)},$$
and this readily yields the desired \eqref{eq:estimLemmatrequattroGagl}.
To prove \eqref{eq:estimLemmatrequattroMean} we observe that,
by using the H\"older inequality and again the fractional Sobolev inequality, we have
\begin{align*}
-\!\!\!\!\!\!\!\int_B|u-v|^p\,dx
& \leq \bigg(-\!\!\!\!\!\!\!\int_B|u-v|^{p^*_s}\,dx\bigg)^{\frac{p}{p^*_s}}
\leq C\,|B|^{-\frac{p}{p^*_s}}\,[u-v]_{W^{s,p}(\R^n)}^{p};
\end{align*}
thus, estimate
\eqref{eq:estimLemmatrequattroMean} follows directly from \eqref{eq:estimLemmatrequattroGagl}.
\end{proof}
Using Lemma \ref{lem:trequattroBrasco}, we can prove the following
\emph{excess decay estimate}.
\begin{lemma} \label{prop:decayestimate}
Let $f\in L^\infty(\Omega)$ and let $u\in W^{1,p}(\R^n)$ be a weak solution
to \eqref{eq:mainPDE}.
Moreover, let $x_0\in\Omega$ and let
$R \in (0,1)$ be such that $B_{4R}(x_0)\Subset
\Omega$.
Then, for every $0<r\leq R$ we have the estimate
\begin{equation} \label{eq:excessdecay}
\begin{split}
& -\!\!\!\!\!\!\!\int_{B_r(x_0)}
|u-\overline{u}_{x_0,r}|^p\,dx
\leq C\bigg(\frac{R}{r}\bigg)^n\,R^{\gamma}\,\|f\|_{L^\infty(\Omega)}^{p'}
\\
& \qquad
+C\bigg(\frac{r}{R}\bigg)^{\alpha p}
\bigg(R^\gamma\,\|f\|_{L^\infty(\Omega)}^{p'}
+ -\!\!\!\!\!\!\!\int_{B_{4R}(x_0)}|u|^p\,dx
+ \mathrm{Tail}(u,x_0,4R)^p\bigg),
\end{split}
\end{equation}
where $C,\,\gamma$ and $\alpha$ are positive constants only depending
on $n$, $s$ and~$p$.
\end{lemma}
\begin{proof}
Let $v\in W^{1,p}(\R^n)$ be the unique weak solution to the problem
\begin{equation} \label{eq:pbSolvedudecay}
\begin{cases}
\LL_{p,s}v = 0 & \text{in $B_{3R}(x_0)$}, \\
v = u & \text{on $\R^n\setminus B_{3R}(x_0)$}.
\end{cases}
\end{equation}
We stress that the existence of $v$ is guaranteed by Proposition \ref{prop:existuniqweaksol}.
We also observe that, for every $r\in(0, R]$, we have
that
\begin{equation*}
|\overline{u}_{x_0,r}-\overline{v}_{x_0,r}|^p
= \bigg|-\!\!\!\!\!\!\!\int_{B_r(x_0)}(u-v)\,dx\bigg|^p\leq
-\!\!\!\!\!\!\!\int_{B_r(x_0)}|u-v|^p\,dx.
\end{equation*}
As a consequence, we obtain
\begin{equation} \label{eq:toestimI1I2}
\begin{split}
-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|u-\overline{u}_{x_0,r}|^p\,dx
& \leq \kappa-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|u-v|^p\,dx
+ \kappa-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|v-\overline{v}_{x_0,r}|^p\,dx
\\
& \qquad
+ \kappa-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|\overline{u}_{x_0,r}-\overline{v}_{x_0,r}|^p\,dx \\[0.2cm]
& \leq \kappa\bigg(-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|u-v|^p\,dx+-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|v-\overline{v}_{x_0,r}|^p\,dx\bigg),
\end{split}
\end{equation}
where $\kappa = \kappa_p > 0$ is a constant only depending on $p$.
Now, since $B_{3R}(x_0)\Subset
\Omega$ and $v$ is the weak solution
to \eqref{eq:pbSolvedudecay}, by Lemma \ref{lem:trequattroBrasco}
we have
\begin{equation} \label{eq:estimfI}
\begin{split}
-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|u-v|^p\,dx &
\leq C\,r^{np'-\frac{p'(n-sp)}{p}+sp-n}\|f\|_{L^\infty(\Omega)}^{p'}
\\
& \leq
C\,\bigg(\frac{R}{r}\bigg)^n\,R^{np'-\frac{p'(n-sp)}{p}+sp-n}
\|f\|_{L^\infty(\Omega)}^{p'}.
\end{split}
\end{equation}
On the other hand, since $v\in W^{1,p}(\R^n)$ and $v$ is
$\LL_{p,s}$-harmonic in $B_{3R}(x_0)$ (that is, $\LL_{p,s}v = 0$ in the weak sense),
we can apply \cite[Theorem\,5.1]{GarainKinnunen}, obtaining
\begin{equation} \label{eq:estimsI}
\begin{split}
-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|v-\overline{v}_{x_0,r}|^p\,dx
= \,&-\!\!\!\!\!\!\!\int_{B_r(x_0)}
\bigg|-\!\!\!\!\!\!\!\int_{B_r(x_0)}(v(x)-v(y))\,dy\bigg|^p\,dx \\
\leq \,&-\!\!\!\!\!\!\!\int_{B_r(x_0)}\bigg(-\!\!\!\!\!\!\!\int_{B_r(x_0)}|v(x)-v(y)|^p\,dy\bigg)dx
\\
\leq\,& \big(\mathrm{osc}_{B_r(x_0)}v\big)^p \\[0.2cm]
\leq\,& C\bigg(\frac{r}{R}\bigg)^{\alpha p}\bigg(
\mathrm{Tail}(v,x_0,R)^p+-\!\!\!\!\!\!\!\int_{B_{2R}(x_0)}|v|^p\,dx\bigg),
\end{split}
\end{equation}
where $C$ and $\alpha$ are positive constants only
depending on $n$, $s$ and~$p$. By combining
estimates
\eqref{eq:estimfI}-\eqref{eq:estimsI}
with \eqref{eq:toestimI1I2}, we then get
\begin{equation} \label{eq:toestimTailmean}
\begin{split}
-\!\!\!\!\!\!\!\int_{B_r(x_0)}
|u-\overline{u}_{x_0,r}|^p\,dx
& \leq
C\,\bigg(\frac{R}{r}\bigg)^n\,R^{\gamma}
\|f\|_{L^\infty(\Omega)}^{p'} \\
& \qquad
+ C\bigg(\frac{r}{R}\bigg)^{\alpha p}\bigg(
\mathrm{Tail}(v,x_0,R)^p+-\!\!\!\!\!\!\!\int_{B_{2R}(x_0)}|v|^p\,dx\bigg),
\end{split}
\end{equation}
where we have set
\begin{equation}\label{eq:gamma}
\gamma := np'-\frac{p'(n-sp)}{p}+sp-n >0.
\end{equation}
To complete the proof
of \eqref{eq:excessdecay} we observe that,
since $u \equiv v$ a.e.\,on $\R^n\setminus B_{3R}(x_0)$
(and $0 < R \leq 1$), by definition of $\mathrm{Tail}(v,x_0,R)$
we have
\begin{equation} \label{eq:estimTailexcess}
\begin{split}
& \mathrm{Tail}(v,x_0,R)^p
= R^p\int_{\R^n\setminus
B_R(x_0)}\frac{|v|^{p}}{|x-x_0|^{n+ps}}\,dx \\
& \qquad = R^p\int_{\R^n\setminus
B_{4R}(x_0)}\frac{|v|^{p}}{|x-x_0|^{n+ps}}\,dx
+ R^p\int_{
B_{4R}(x_0)\setminus B_R(x_0)}\frac{|v|^{p}}{|x-x_0|^{n+ps}}\,dx \\[0.2cm]
& \qquad \leq C\bigg(
\mathrm{Tail}(u,x_0,4R)^p
+ -\!\!\!\!\!\!\!\int_{B_{4R}(x_0)}
|v|^p\,dx\bigg).
\end{split}
\end{equation}
Moreover, by using again Lemma \ref{lem:trequattroBrasco}, we get
\begin{equation} \label{eq:estimmeanintexcess}
\begin{split}
-\!\!\!\!\!\!\!\int_{B_{4R}(x_0)}
|v|^p\,dx & \leq
C-\!\!\!\!\!\!\!\int_{B_{4R}(x_0)}|u-v|^p\,dx + C-\!\!\!\!\!\!\!\int_{B_{4R}(x_0)}|u|^p\,dx \\
& \leq
C\bigg(R^{\gamma}\|f\|_{L^\infty(\Omega)}^{p'}
+
-\!\!\!\!\!\!\!\int_{B_{4R}(x_0)}|u|^p\,dx\bigg).
\end{split}
\end{equation}
Thus, by inserting \eqref{eq:estimTailexcess}-\eqref{eq:estimmeanintexcess}
into \eqref{eq:toestimTailmean}, we obtain the desired
\eqref{eq:excessdecay}.
\end{proof}
By combining Lemmata \ref{lem:trequattroBrasco} and~\ref{prop:decayestimate},
we can provide the
\begin{proof}[Proof of Theorem\,\ref{thm:mainHolder}]
The proof follows the lines of \cite[Theorem 3.6]{BLS}.
First, we consider a ball $B_{R_0}(z) \Subset \Omega$ and we define the quantities
\begin{equation}
d:= \mathrm{dist}(B_{R_0}(z), \partial \Omega) >0
\quad \textrm{and }
\quad
R_1:= \dfrac{d}{2}+R_0.
\end{equation}
Thus, we can choose a point $x_0 \in B_{R_0}(z)$ and consider the ball $B_{4R}(x_0)$, where $R < \min\{1, \tfrac{d}{8}\}$. In particular, this implies that $B_{4R}(x_0)\subset B_{R_1}(z)$.
Since $R<1$, we can then apply Lemma \ref{prop:decayestimate}: this gives,
for every $0 < r \leq R$,
\begin{equation}\label{eq:StimaThm3.6}
\begin{aligned}
-\!\!\!\!\!\!\!\int_{B_{r}(x_0)}&|u-\overline{u}_{x_0,r}|^p\,dx \leq C \left(\dfrac{R}{r}\right)^{n}R^{\gamma} \|f\|^{p'}_{L^{\infty}(\Omega)}\\
&+ C\left(\dfrac{r}{R}\right)^{\alpha \,p}\left( R^{\gamma}\|f\|^{p'}_{L^{\infty}(\Omega)} + -\!\!\!\!\!\!\!\int_{B_{4R}(x_0)}|u|^p \, dx + \mathrm{Tail}(u,x_0,4R)^p\right)\\
&\leq C \left(\dfrac{R}{r}\right)^{n}R^{\gamma} \|f\|^{p'}_{L^{\infty}(\Omega)}\\
&+ C\left(\dfrac{r}{R}\right)^{\alpha \,p}\left( d^{\gamma}\|f\|^{p'}_{L^{\infty}(\Omega)} + \|u\|_{L^{\infty}(\Omega)}^p + \mathrm{Tail}(u,x_0,4R)^p\right),
\end{aligned}
\end{equation}
\noindent where $\gamma >0$ is as in \eqref{eq:gamma}.
Now, we notice that for every $x \notin B_{R_1}(z)$ it holds that
\begin{equation*}
|x-x_0| \geq |x-z|-|z-x_0| \geq \dfrac{R_1 - |z-x_0|}{R_1}|x-z|.
\end{equation*}
Therefore, we have
\begin{equation*}
\begin{split}
\mathrm{Tail}(u,x_0,4R)^p
& =
(4R)^p \int_{\mathbb{R}^n \setminus B_{R_1}(z)}
\dfrac{|u|^p}{|x-x_0|^{n+ps}}\, dx \\
& \qquad\qquad + (4R)^p \int_{B_{R_1}(z)\setminus B_{4R}(x_0)}
\dfrac{|u|^p}{|x-x_0|^{n+ps}}\, dx\\
& \leq \left(\dfrac{4R}{R_1}\right)^{p}\left(\dfrac{R_1}{R_1 - |z-x_0|}\right)^{n+ps}
\mathrm{Tail}(u,z,R_1)^p +
C \|u\|^p_{L^{\infty}(\Omega)} \\[0.2cm]
& \leq \mathrm{Tail}(u,z,R_1)^p + C \|u\|^p_{L^{\infty}(\Omega)}
\end{split}
\end{equation*}
\noindent for a constant $C$ depending on $n$, $s$ and~$p$. We recall that in the last estimate we exploited that
\begin{equation*}
\dfrac{4R}{R_1} < \dfrac{\tfrac{d}{2}}{R_0 + \tfrac{d}{2}}<1 \quad \textrm{and } \quad \dfrac{4R}{R_1 - |x_0-z|}\leq \dfrac{4R}{R_1-R_0}<1.
\end{equation*}
Consequently, continuing the estimate started with \eqref{eq:StimaThm3.6}, we find that
\begin{equation}\label{eq:StimaThm3.6Bis}
\begin{aligned}
-\!\!\!\!\!\!\!\int_{B_{r}(x_0)}&|u-\overline{u}_{x_0,r}|^p\,dx \leq C \left(\dfrac{R}{r}\right)^{n}R^{\gamma} \|f\|^{p'}_{L^{\infty}(\Omega)}\\
&+ C\left(\dfrac{r}{R}\right)^{\alpha \,p}\left( d^{\gamma}\|f\|^{p'}_{L^{\infty}(\Omega)} + \|u\|_{L^{\infty}(\Omega)}^p + \mathrm{Tail}(u,z,R_1)^p\right).
\end{aligned}
\end{equation}
We can now define the positive number
$$\theta := 1 + \dfrac{\gamma}{n+\alpha \, p},$$
\noindent and take $r:= R^{\theta}$ in \eqref{eq:StimaThm3.6Bis}, which yields
\begin{equation*}
\begin{split}
& r^{-\beta p}-\!\!\!\!\!\!\!\int_{B_{r}(x_0)\cap B_{R_0}(z)} |u-\overline{u}_{x_0,r}|^p\,dx \\
& \qquad \leq C \left( (d^{\gamma}+1)\|f\|^{p'}_{L^{\infty}(\Omega)} + \|u\|^{p}_{L^{\infty}(\Omega)} + \mathrm{Tail}(u,z,R_1)^p\right),
\end{split}
\end{equation*}
where we have set
$$\beta:= \dfrac{\gamma \alpha}{n+\alpha p + \gamma}>0.$$
This shows that $u \in \mathcal{L}^{p,n+\beta p}(B_{R_0}(z))$, the
Campanato space isomorphic to the H\"{o}lder space $C^{0,\beta}(\overline{B_{R_0}(z)})$.
This completes the proof of Theorem~\ref{thm:mainHolder}.
\end{proof}
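As a further aside (not part of the proof), the exponent $\beta$ above is precisely the critical choice: writing $r = R^{\theta}$ and multiplying both terms of \eqref{eq:StimaThm3.6Bis} by $r^{-\beta p}$, the two resulting powers of $R$ vanish identically,

```latex
% Exponent bookkeeping (aside): with r = R^theta both balances vanish
\begin{align*}
n(1-\theta)+\gamma-\theta\beta p
&= \frac{\gamma\alpha p}{n+\alpha p}
 - \frac{n+\alpha p+\gamma}{n+\alpha p}\cdot\frac{\gamma\alpha}{n+\alpha p+\gamma}\,p = 0,\\
(\theta-1)\alpha p-\theta\beta p
&= \frac{\gamma\alpha p}{n+\alpha p}
 - \frac{\gamma\alpha p}{n+\alpha p} = 0.
\end{align*}
```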
By gathering together Theorems \ref{thm:GlobalBd} and
\ref{thm:mainHolder}, we can easily prove
the needed \emph{interior H\"older regularity} of the eigenfunctions
of $\LL_{p,s}$.
\begin{theorem} \label{thm:contEigen}
Let $\lambda\geq \lambda_1(\Omega)$ be an eigenvalue of
$\LL_{p,s}$, and let $\phi_\lambda\in \mathcal{X}_0^{1,p}(\Omega)\setminus\{0\}$
be an eigenfunction
associated with $\lambda$. Then, $\phi_\lambda\in C(\Omega)$.
\end{theorem}
\begin{proof}
On account of Theorem \ref{thm:GlobalBd}, we know that
$\phi_\lambda\in L^\infty(\R^n)$. As a consequence, $\phi_\lambda$
is a \emph{globally bounded weak solution}
to \eqref{eq:mainPDE},
with
$$f := \lambda|\phi_\lambda|^{p-2}\phi_\lambda\in L^\infty(\Omega).$$
We are then entitled to apply Theorem \ref{thm:mainHolder}, which ensures
that
$\phi_\lambda\in C^{0,\beta}_{\loc}(\Omega)$
for some
$\beta = \beta(n,s,p)\in (0,1)$. This ends the proof of Theorem~\ref{thm:contEigen}.
\end{proof}
\section{The Hong-Krahn-Szeg\"o inequality for $\LL_{p,s}$}\label{LULT}
In this last section
of the paper we provide the proof of Theorem \ref{main:thm}.
Before doing this, we establish two preliminary results.
First of all, we prove
the following \emph{Faber-Krahn type inequality for $\LL_{p,s}$.}
\begin{theorem} \label{thm.FK}
Let~$\Omega\subseteq\R^n$ be a
bounded open set,
and let $m:= |\Omega|\in (0,\infty)$. Then,
if $B^{(m)}$ is any Euclidean ball with
volume
$m$, one has
\begin{equation} \label{eq.FK}
\lambda_1(\Omega)\geq \lambda_1 (B^{(m)}).
\end{equation}
Moreover, if the equality holds in \eqref{eq.FK}, then
$\Omega$ is a ball.
\end{theorem}
\begin{proof}
The proof is similar to that in the linear case, see \cite[Theorem\,1.1]{BDVV2};
however, we present it here in full detail for the sake of completeness.
\vspace*{0.1cm}
To begin with, let
$\widehat{B}^{(m)}$ be the Euclidean ball with centre $0$ and volume $m$.
Moreover, let $u_0\in\mathcal{M}(\Omega)$ be the
principal
eigenfunction for $\LL_{p,s}$. We recall that,
by definition, $u_0$ is the unique non-negative eigenfunction
associated with the first eigenvalue
$\lambda_1(\Omega)$; in particular, we have (see \eqref{eq:uzerolambda1})
\begin{equation} \label{eq:lambda1uzeroFK}
\lambda_1(\Omega) =
\int_\Omega|\nabla u_0|^p\,dx + \iint_{\R^{2n}}
\frac{|u_0(x)-u_0(y)|^p}{|x-y|^{n+ps}}\,dx\,dy.
\end{equation}
Then, we define
$u_0^\ast:\R^n\to\R$
as the (decreasing) Schwarz symmetrization of $u_0$.
Now, since $u_0\in\mathcal{M}(\Omega)$, from
the well-known inequality by P\'olya and Szeg\"o (see e.g.~\cite{PSZ})
we deduce that
\begin{equation} \label{eq.PSLoc}
u_0^\ast\in \mathcal{M}(\widehat{B}^{(m)})\qquad\text{and}\qquad
\int_{\widehat{B}^{(m)}}|\nabla u_0^\ast|^p\, dx
\leq \int_{\Omega}|\nabla u_0|^p\, dx.
\end{equation}
Furthermore, by \cite[Theorem\,9.2]{AlLieb}
(see also \cite[Theorem\,A.1]{FS}), we also have
\begin{equation} \label{eq.PSNonloc}
\iint_{\R^{2n}}\frac{|u_0^\ast(x)-u_0^\ast(y)|^p}{|x-y|^{n+ps}}\, d x\, dy
\leq \iint_{\R^{2n}}\frac{|u_0(x)-u_0(y)|^p}{|x-y|^{n+ps}}\, d x\, dy.
\end{equation}
Gathering all these facts and using \eqref{eq:lambda1uzeroFK}, we get
\begin{equation} \label{eq.estimuusharp}
\begin{split}
\lambda_{1}(\Omega)
&
= \int_{\Omega}|\nabla u_0|^p\, dx
+\iint_{\R^{2n}}\frac{|u_0(x)-u_0(y)|^p}{|x-y|^{n+ps}}\, d x\, dy \\
& \geq
\int_{\widehat{B}^{(m)}}|\nabla u_0^\ast|^p\, dx
+ \iint_{\R^{2n}}\frac{|u_0^\ast(x)-u_0^\ast(y)|^p}{|x-y|^{n+ps}}\, d x\, dy\\&
\geq \lambda_{1}(\widehat {B}^{(m)}).
\end{split}
\end{equation}
From this, since $\lambda_{1}(\cdot)$ is translation-invariant,
we derive the validity
of \eqref{eq.FK} for every Euclidean ball $B^{(m)}$ with volume $m$.
To complete the proof of Theorem~\ref{thm.FK}, let us suppose that
$$\lambda_{1}(\Omega) = \lambda_{1}(B^{(m)})$$
for some (and hence, for every) ball $B^{(m)}$ with $|B^{(m)}| = m$.
By \eqref{eq.estimuusharp} we have
\begin{align*}
& \int_{\Omega}|\nabla u_0|^p\, dx
+\iint_{\R^{2n}}\frac{|u_0(x)-u_0(y)|^p}{|x-y|^{n+ps}}\, d x\, dy
= \lambda_1(\Omega) \\
& \qquad = \lambda_1(\widehat{B}^{(m)}) =
\int_{\widehat{B}^{(m)}}|\nabla u_0^\ast|^p\, dx
+ \iint_{\R^{2n}}\frac{|u_0^\ast(x)-u_0^\ast(y)|^p}{|x-y|^{n+ps}}\, d x\, dy.
\end{align*}
In particular, by \eqref{eq.PSLoc} and~\eqref{eq.PSNonloc} we get
$$
\iint_{\R^{2n}}\frac{|u_0(x)-u_0(y)|^p}{|x-y|^{n+ps}}\, d x\, dy
= \iint_{\R^{2n}}\frac{|u_0^\ast(x)-u_0^\ast(y)|^p}{|x-y|^{n+ps}}\, d x\, dy.$$
We are then in the position to apply once again \cite[Theorem\,A.1]{FS},
which ensures that
$u_0$ must be proportional to a translation of a symmetric decreasing function.
As a consequence of this fact, we immediately deduce that
$$\Omega = \{x\in\R^n:\,u_0(x) > 0\}$$
must be a ball (up to a set of zero Lebesgue measure). This completes the proof of Theorem~\ref{thm.FK}.
\end{proof}
Then, we establish the following lemma on \emph{nodal domains}.
\begin{lemma} \label{lem:nodal}
Let $\lambda>\lambda_1(\Omega)$ be an eigenvalue of $\LL_{p,s}$,
and let $\phi_\lambda\in \mathcal{X}_0^{1,p}(\Omega)\setminus\{0\}$
be an eigenfunction associated with $\lambda$. We define the sets
\begin{equation*}
\Omega^+ := \left\{ x \in \Omega: \phi_{\lambda}(x) >0\right\} \quad \textrm{and} \quad \Omega^- := \left\{ x \in \Omega: \phi_{\lambda}(x) <0\right\}.
\end{equation*}
Then $\lambda > \max\left\{\lambda_1(\Omega^{+}), \lambda_1(\Omega^-)\right\}$.
\end{lemma}
The proof of Lemma \ref{lem:nodal}
takes inspiration from
\cite[Lemma\,6.1]{BP} (see also \cite[Lemma\,4.2]{GoelSreenadh}).
\begin{proof}[Proof of Lemma~\ref{lem:nodal}]
First of all, on account of Theorem \ref{thm:contEigen}
we have that the sets~$\Omega^+$ and~$\Omega^-$ are open, and
therefore the eigenvalues~$\lambda_{1}(\Omega^{\pm})$ are well-defined.
Moreover, thanks to Proposition \ref{prop.BMV},
we know that $\phi_\lambda$ changes sign in $\Omega$, and therefore it is convenient to write
$\phi_{\lambda}= \phi_{\lambda}^+ - \phi_{\lambda}^-,$
where $\phi_{\lambda}^+$ and $\phi_{\lambda}^-$ denote, respectively, the positive and negative parts of $\phi_{\lambda}$, with the convention that both the functions~$\phi_{\lambda}^+$ and $\phi_{\lambda}^-$ are non-negative.
Let us now start proving that $\lambda > \lambda_{1}(\Omega^+)$.
By using the fact that $\phi_{\lambda}$ is an eigenfunction of $\LL_{p,s}$ corresponding to $\lambda$, it follows that
\begin{equation*}
\begin{aligned}
& \int_{\Omega}|\nabla \phi_{\lambda}|^{p-2}\langle \nabla \phi_{\lambda},v\rangle \, dx \\
& \qquad\qquad
+ \iint_{\mathbb{R}^{2n}}\dfrac{|\phi_{\lambda}(x)-\phi_{\lambda}(y)|^{p-2}(\phi_{\lambda}(x)-\phi_{\lambda}(y))(v(x)-v(y))}{|x-y|^{n+ps}}\, dxdy\\
& \qquad = \lambda \int_{\Omega}|\phi_{\lambda}|^{p-2}\phi_{\lambda}v \, dx, \quad \textrm{ for all } v \in \mathcal{X}_{0}^{1,p}(\Omega).
\end{aligned}
\end{equation*}
In consideration of the fact that $\phi_{\lambda}^{+} \in \mathcal{X}_{0}^{1,p}(\Omega)$,
we can take $v = \phi_{\lambda}^+$ as a test function.
Now, since
$$\text{$\phi_{\lambda}^{+}(x)\phi_{\lambda}^{-}(x)=0$ for a.e. $x \in \Omega$},$$
we easily get that
$$ (\phi_{\lambda}^+ (x) - \phi_{\lambda}^{+}(y))(\phi_{\lambda}^- (x) - \phi_{\lambda}^{-}(y))\leq 0.$$
Moreover, since both $\Omega^+$ and $\Omega^-$ are non-void open sets
(recall that $\phi_\lambda$ is continuous on $\Omega$ and changes sign in $\Omega$), we have
\begin{eqnarray*}
&&\iint_{\R^{2n}}\frac{|\phi_\lambda(x)-\phi_\lambda(y)|^{p-2}
(\phi_{\lambda}^+ (x) - \phi_{\lambda}^{+}(y))(\phi_{\lambda}^- (x) - \phi_{\lambda}^{-}(y))}
{|x-y|^{n+ps}}\,d x\,d y \\
&& \qquad\qquad\qquad\leq -\int_{\Omega^+}\int_{\Omega^-}
\frac{|\phi_\lambda(x)-\phi_\lambda(y)|^{p-2}
\phi_{\lambda}^+ (x)\phi_{\lambda}^{-}(y)}
{|x-y|^{n+ps}}\,d x\,d y < 0\end{eqnarray*}
and
\begin{eqnarray*} &&\iint_{\R^{2n}}\frac{|\phi_\lambda^+(x)-\phi_\lambda^+(y)|^{p-2}
(\phi_{\lambda}^+ (x) - \phi_{\lambda}^{+}(y))(\phi_{\lambda}^- (x) - \phi_{\lambda}^{-}(y))}
{|x-y|^{n+ps}}\,d x\,d y \\
&& \qquad\qquad\qquad\leq -\int_{\Omega^+}\int_{\Omega^-}
\frac{|\phi^+_\lambda(x)|^{p-2}
\phi_{\lambda}^+ (x)\phi_{\lambda}^{-}(y)}
{|x-y|^{n+ps}}\,d x\,d y < 0.
\end{eqnarray*}
We can therefore exploit Lemma \ref{lem:algebrico}-(1) with
$$a := \phi_{\lambda}^+ (x) - \phi_{\lambda}^{+}(y)\qquad\text{and}\qquad
b := \phi_{\lambda}^- (x) - \phi_{\lambda}^{-}(y),$$
obtaining (recall that, by assumption, $p\geq 2$)
\begin{equation*}
\begin{aligned}
&\lambda \int_{\Omega^+}|\phi_{\lambda}^+|^{p}\, dx =
\lambda \int_{\Omega}|\phi_{\lambda}|^{p-2}\phi_{\lambda}\phi_{\lambda}^+ \, dx \\
&\qquad
=\int_{\Omega}|\nabla \phi_{\lambda}|^{p-2}\langle \nabla \phi_{\lambda}, \nabla \phi_{\lambda}^{+}\rangle \, dx \\
& \qquad\qquad
+ \iint_{\mathbb{R}^{2n}}\dfrac{|\phi_{\lambda}(x)-\phi_{\lambda}(y)|^{p-2}(\phi_{\lambda}(x)-\phi_{\lambda}(y))(\phi_{\lambda}^{+}(x)-\phi_{\lambda}^{+}(y))}{|x-y|^{n+ps}}\, dxdy\\
&\qquad = \int_{\Omega^+}|\nabla \phi_{\lambda}^{+}|^{p}\, dx \\
& \qquad\qquad + \iint_{\mathbb{R}^{2n}}\dfrac{|\phi_{\lambda}(x)-\phi_{\lambda}(y)|^{p-2}(\phi_{\lambda}(x)-\phi_{\lambda}(y))(\phi_{\lambda}^{+}(x)-\phi_{\lambda}^{+}(y))}{|x-y|^{n+ps}}\, dxdy\\
&\qquad > \int_{\Omega^+}|\nabla \phi_{\lambda}^{+}|^{p}\, dx + \iint_{\mathbb{R}^{2n}}\dfrac{|\phi_{\lambda}^{+}(x)-\phi_{\lambda}^{+}(y)|^p}{|x-y|^{n+ps}}\, dxdy \\
& \qquad \geq \lambda_{1}(\Omega^+) \int_{\Omega^+}|\phi_{\lambda}^+|^{p}\, dx,
\end{aligned}
\end{equation*}
\noindent where we used the variational characterization
of $\lambda_{1}(\Omega^+)$, see \eqref{eq:deflambdavar}.
In particular, this gives that $\lambda > \lambda_{1}(\Omega^+)$. With a similar argument (see
e.g. \cite[Lemma\,6.1]{BP}), one can show that $\lambda > \lambda_{1}(\Omega^-)$ as well, and this closes
the proof of Lemma~\ref{lem:nodal}.
\end{proof}
By virtue of Theorem \ref{thm.FK} and Lemma \ref{lem:nodal}, we can provide the
\begin{proof}
[Proof of Theorem\,\ref{main:thm}]
We split the proof into two steps.
\textsc{Step I:} In this step, we prove inequality
\eqref{eq:HKS}.
To this end, let $\phi\in\mathcal{M}(\Omega)$ be an $L^p$-normalized
eigenfunction
associated with $\lambda_2(\Omega)$ (recall the definition of the space~$\mathcal{M}(\Omega)$ in~\eqref{eq:defM}).
On account of Theorem \ref{thm:GS}, we know that
$\phi\in C(\Omega)$.
Moreover, since $\phi$ changes sign in $\Omega$
(see Proposition \ref{prop.BMV}), we can define the non-void open sets
$$\Omega_+ := \{\phi > 0\}\qquad\text{and}\qquad
\Omega_- := \{\phi < 0\}.$$
Then, by combining Lemma \ref{lem:nodal} with
Theorem \ref{thm.FK}, we get
\begin{equation} \label{eq:lambda2max}
\lambda_2(\Omega) > \max\big\{\lambda_1(B_+),\lambda_1(B_-)\big\},
\end{equation}
where $B_+$ is a Euclidean ball with volume equal to $|\Omega_+|$ and
$B_-$ is a Euclidean ball with volume $|\Omega_-|$.
Now, since $\Omega_+$ and $\Omega_-$ are disjoint open subsets of $\Omega$, we have
$$|B_+|+|B_-| = |\Omega_+|+|\Omega_-| \leq |\Omega| = m.$$
Taking into account this inequality, we claim that
\begin{equation} \label{eq:maxgeqBall}
\max\big\{\lambda_1(B_+),\lambda_1(B_-)\big\} \geq \lambda_1(B),
\end{equation}
where~$B$ is a ball of volume~$m/2$.
In order to prove \eqref{eq:maxgeqBall}, we distinguish three cases.
\begin{itemize}
\item[(i)] $|B_+|,\,|B_-|\leq m/2$. In this case, since
$\lambda_1(\cdot)$ is translation-invariant,
we can assume without loss of generality that
$B_+,\,B_-\subseteq B$;
as a consequence, since $\lambda_1(\cdot)$ is non-increasing with respect to set inclusion, we obtain
$$\lambda_1(B_+),\,\lambda_1(B_-)\geq \lambda_1(B),$$
and this proves the claimed \eqref{eq:maxgeqBall}.
\vspace*{0.2cm}
\item[(ii)] $|B_-| < m/2 < |B_+|$. In this case, we can assume that
$B_-\subseteq B\subseteq B_+$;
from this, since $\lambda_1(\cdot)$ is non-increasing, we obtain
$$\lambda_1(B_-)\geq \lambda_1(B)\geq \lambda_1(B_+),$$
and this immediately implies the claimed \eqref{eq:maxgeqBall}.
\vspace*{0.2cm}
\item[(iii)] $|B_+| < m/2 < |B_-|$. In this last case,
it suffices to interchange the r\^oles of the balls
$B_-$ and $B_+$, and to argue
exactly as in case~(ii).
\end{itemize}
Gathering \eqref{eq:lambda2max} and
\eqref{eq:maxgeqBall}, we obtain the claim in~\eqref{eq:HKS}.
\textsc{Step II:} Now we prove the sharpness
of \eqref{eq:HKS}. To this end, according to
the statement of the theorem, we
fix $r > 0$ and we define
$$\Omega_j := B_r(x_j)\cup B_r(y_j),$$
where $\{x_j\}_j,\,\{y_j\}_j\subseteq\R^n$ are two sequences satisfying
\begin{equation} \label{eq:xjyjdiv}
\lim_{j\to+\infty}|x_j-y_j| = +\infty.
\end{equation}
On account of \eqref{eq:xjyjdiv}, we can assume that
\begin{equation} \label{eq:disjoint}
B_r(x_j)\cap B_r(y_j) = \varnothing\qquad{\mbox{ for all }} j\geq 1.
\end{equation}
Let now $u_0\in \mathcal{M}(B_r)$ be an $L^p$-normalized eigenfunction associated
with $\lambda_1(B_r)$ (here, $B_r = B_r(0)$). For every natural number $j\geq 1$, we set
\begin{equation} \label{eq:defphiij}
\phi_{j}(x) := u_0(x-x_j)\qquad\text{and}\qquad
\psi_{j}(x) := u_0(x-y_j).
\end{equation}
Since $\lambda_1(\cdot)$ is translation-invariant, it is immediate to check that
$\phi_j$ and $\psi_j$ are normalized eigenfunctions
associated with $\lambda_1(B_r(x_j))$ and $\lambda_1(B_r(y_j))$,
respectively.
Moreover,
taking into account
\eqref{eq:disjoint}, it is easy to see that
\begin{equation}\label{propa}
{\mbox{$\phi_j\equiv 0 $ on $\R^n\setminus B_r(x_j)\supseteq B_r(y_j)$
and
$\psi_j\equiv 0$ on $\R^n\setminus B_r(y_j)\supseteq B_r(x_j)$}}\end{equation}
and~$\phi_j\psi_j\equiv 0$ on $\R^n$.
We then consider the function $f$ defined as follows:
$$
f(z_1,z_2) := |z_1|^{\frac{2-p}{p}}z_1\phi_{j}-
|z_2|^{\frac{2-p}{p}}z_2\psi_j\qquad
\text{with $z = (z_1,z_2)\in S^1$}.$$
Taking into account that $B_r(x_j),\,B_r(y_j)\subseteq\Omega_j$
and $u_0\in \mathcal{M}(B_r)$, it is readily seen that
$f(S^1)\subseteq \mathcal{X}_0^{1,p}(\Omega_j)$.
Furthermore, the function $f$
is clearly \emph{odd} and continuous.
Also, using \eqref{eq:disjoint} and the fact that
$u_0\equiv 0$ outside $B_r$, one has
\begin{align*}
\|f(z_1,z_2)\|^p_{L^p(\Omega_j)} & = \big\||z_1|^{\frac{2-p}{p}}z_1\phi_{j}-
|z_2|^{\frac{2-p}{p}}z_2\psi_j\big\|^p_{L^p(\Omega_j)} \\
& = |z_1|^2\|\phi_j\|^p_{L^p(B_r(x_j))}
+ |z_2|^2\|\psi_j\|^p_{L^p(B_r(y_j))} \\
& =
(|z_1|^2+|z_2|^2)\|u_0\|_{L^p(B_r)}^p \\&= 1.
\end{align*}
We are thereby entitled to use $f$ in the definition
of $\lambda_2(\Omega_j)$, see \eqref{eq:deflambda2}:
setting $a_j:= \phi_j(x)-\phi_j(y)$ and $b_j:= \psi_j(x)-\psi_j(y)$ to simplify the notation,
this gives, together with~\eqref{eq:disjoint} and~\eqref{propa}, that
\begin{align*}
\lambda_2(\Omega_j)
& \leq
\max_{v\in\mathrm{Im}(f)}
\bigg\{\int_{\Omega_j}
|\nabla v|^p\,dx
+
\iint_{\R^{2n}}\frac{|v(x)-v(y)|^p}{|x-y|^{n+ps}}
\,dx\,dy\bigg\} \\
&
= \max_{|\omega_1|^p+|\omega_2|^p = 1}
\bigg\{\int_{\Omega_j}
|\nabla(\omega_1\phi_{j}-
\omega_2\psi_j)|^{p}\,dx +
\iint_{\R^{2n}}\frac{|\omega_1a_j-\omega_2b_j
|^p}{|x-y|^{n+ps}}
\,dx\,dy\bigg\} \\
& =
\max_{|\omega_1|^p+|\omega_2|^p = 1}
\bigg\{|\omega_1|^p\int_{B_r(x_j)}|\nabla\phi_j|^p\,dx
+ |\omega_2|^p\int_{B_r(y_j)}|\nabla\psi_j|^p\,dx \\
& \qquad\qquad\qquad\qquad
+ \iint_{\R^{2n}}\frac{|\omega_1a_j-\omega_2b_j
|^p}{|x-y|^{n+ps}}
\,dx\,dy\bigg\} \\
& =
\max_{|\omega_1|^p+|\omega_2|^p = 1}
\bigg\{\int_{B_r}|\nabla u_0|^p\,d x
+ \iint_{\R^{2n}}\frac{|\omega_1a_j-\omega_2b_j
|^p}{|x-y|^{n+ps}}\,dx\,dy\bigg\} .
\end{align*}
On the other hand, by applying Lemma \ref{lem:algebrico}-(2), we get
\begin{align*}
& \max_{|\omega_1|^p+|\omega_2|^p = 1}
\bigg\{\int_{B_r}|\nabla u_0|^p\,d x
+ \iint_{\R^{2n}}\frac{|\omega_1a_j-\omega_2b_j
|^p}{|x-y|^{n+ps}}\,dx\,dy\bigg\}\\& \leq
\max_{|\omega_1|^p+|\omega_2|^p = 1}
\bigg\{\int_{B_r}|\nabla u_0|^p\,d x
+ |\omega_1|^p\iint_{\R^{2n}}\frac{|\phi_j(x)-\phi_j(y)|^p}{|x-y|^{n+ps}}\,dx\,d y
\\
& \qquad\qquad
+ |\omega_2|^p\iint_{\R^{2n}}\frac{|\psi_j(x)-\psi_j(y)|^p}{|x-y|^{n+ps}}\,dx\,d y
\\
& \qquad\qquad+
c_p\iint_{\R^{2n}}\frac{|(\omega_1a_j)^2+(\omega_2b_j)^2|^{\frac{p-2}{2}}|\omega_1\omega_2a_jb_j|}
{|x-y|^{n+ps}}\,dx\,d y\bigg\} \\
& =
\max_{|\omega_1|^p+|\omega_2|^p = 1}
\bigg\{\int_{B_r}|\nabla u_0|^p\,d x
+ \iint_{\R^{2n}}\frac{|u_0(x)-u_0(y)|^p}{|x-y|^{n+ps}}\,dx\,d y
\\
& \qquad\qquad+
c_p\iint_{\R^{2n}}\frac{|(\omega_1a_j)^2+(\omega_2b_j)^2|^{\frac{p-2}{2}}|\omega_1\omega_2a_jb_j|}
{|x-y|^{n+ps}}\,dx\,d y\bigg\}
\\
& = \lambda_1(B_r)+c_p\,
\max_{|\omega_1|^p+|\omega_2|^p=1}\iint_{\R^{2n}}\frac{|(\omega_1a_j)^2+(\omega_2b_j)^2|^{\frac{p-2}{2}}|\omega_1\omega_2a_jb_j|}
{|x-y|^{n+ps}}\,dx\,d y,
\end{align*}
where we have also used that $u_0$ is a normalized eigenfunction associated with
the first eigenvalue $\lambda_1(B_r)$.
Summarizing, we have proved that
\begin{equation} \label{eq:topasslimit}
\begin{split}
& \lambda_2(\Omega_j) \leq \lambda_1(B_r) \\
& \qquad\qquad
+
c_p\,
\max_{|\omega_1|^p+|\omega_2|^p=1}\iint_{\R^{2n}}\frac{|(\omega_1a_j)^2+(\omega_2b_j)^2|^{\frac{p-2}{2}}|\omega_1\omega_2a_jb_j|}
{|x-y|^{n+ps}}\,dx\,d y.
\end{split}
\end{equation}We now set
$$\mathcal{R}_j := \max_{|\omega_1|^p+|\omega_2|^p=1}\iint_{\R^{2n}}
\frac{|(\omega_1a_j)^2+(\omega_2b_j)^2|^{\frac{p-2}{2}}|\omega_1\omega_2a_jb_j|}
{|x-y|^{n+ps}}\,dx\,d y$$
and we claim that $\mathcal{R}_j\to 0$ as $j\to+\infty$.
Indeed, since
$\phi_j\psi_j\equiv 0$ on $\R^n$, we have that
$$
a_jb_j = -\phi_j(x)\psi_j(y) -\phi_j(y)\psi_j(x).$$
As a consequence, recalling~\eqref{propa}, we obtain
\begin{align*}
0\leq \mathcal{R}_j
&
\leq
2\max_{|\omega_1|^p+|\omega_2|^p=1}\int_{B_r(x_j)}\int_{B_r(y_j)}
\frac{|(\omega_1a_j)^2+(\omega_2b_j)^2|^{\frac{p-2}{2}}|\omega_1\omega_2a_jb_j|}
{|x-y|^{n+ps}}\,dx\,d y \\
&
\leq \frac{2}{(|x_j-y_j|-2r)^{n+ps}} \\
& \qquad\times\max_{|\omega_1|^p+|\omega_2|^p = 1}\int_{B_r(x_j)}\int_{B_r(y_j)}
|(\omega_1a_j)^2+(\omega_2b_j)^2|^{\frac{p-2}{2}}|\omega_1\omega_2a_jb_j|\,dx\,d y \\
& \leq \frac{2}{(|x_j-y_j|-2r)^{n+ps}}
\int_{B_r}\int_{B_r}
|u_0(x)^2+u_0(y)^2|^{\frac{p-2}{2}}
|u_0(x)u_0(y)|\,dx\,d y \\[0.1cm]
& =: \frac{2c_0}{(|x_j-y_j|-2r)^{n+ps}}.
\end{align*}
Taking into account \eqref{eq:xjyjdiv}, we thereby conclude that
\begin{equation} \label{eq:limRjzero}
\lim_{j\to+\infty}\mathcal{R}_j = 0.
\end{equation}
Gathering together \eqref{eq:topasslimit} and
\eqref{eq:limRjzero}, we obtain the desired result in~\eqref{eq:optimal}.
\end{proof}
\end{document} |
\begin{document}
\newcolumntype{g}{>{\columncolor{Gray}}c}
\title{Model-Based and Model-Free point prediction algorithms for locally stationary random fields}
\begin{abstract}
The Model-free Prediction Principle has been successfully applied to general regression problems, as well as problems involving stationary and locally stationary time series. In this paper we demonstrate how Model-Free Prediction can be applied to handle random fields that are only locally stationary, i.e., they can be assumed to be stationary only over a limited part of their entire region of definition. We construct one-step-ahead point predictors and compare the performance of Model-free to Model-based prediction using models that incorporate a trend and/or heteroscedasticity. Both aspects of the paper, Model-free and Model-based, are novel in the context of random fields that are locally (but not globally) stationary. We demonstrate the application of our Model-based and Model-free point prediction methods to synthetic data as well as images from the CIFAR-10 dataset and in the latter case show that our best Model-free point prediction results outperform those obtained using Model-based prediction.
\end{abstract}
{\bf Keywords:} Kernel smoothing, linear predictor, random fields, nonstationary series, point prediction.
\section{Introduction}
\textcolor{black}{Consider a real-valued random field dataset $\{Y_{\underline t}, \underline t \in Z^2\}$ defined over a 2-D index-set $D$, e.g., pixel values over an image or satellite data observed on an ocean surface.
It may be unrealistic to assume that
the stochastic structure of such a random field {$Y_{\underline t}$} has stayed invariant
over the entire region of definition $D$;
hence, we cannot assume that $\{Y_{\underline t}\}$ is stationary.}
Therefore it is more realistic to assume a slowly-changing
stochastic structure, i.e., a {\it locally stationary model}. Discussions of \textcolor{black}{such models for locally stationary time series } can be found in
\textcolor{black} {\cite{priestley1965evolutionary}, \cite{priestley1988non}, \cite{dahlhaus1997fitting} and \cite{dahlhaus2012locally}.} \textcolor{black}{In the context of random fields, locally stationary models have been proposed in \cite{kurisu2022nonparametric} and references therein where the data $Y_{\underline t}$ is defined over a continuous subset $S$ of $R^d$. In this paper we assume a locally stationary model for random fields $Y_{\underline t} \in R$ defined over $\underline t \in S$ where $S \subset Z^d$, $d = 2$. Given data $Y_{\underline t_1}, Y_{\underline t_2}, \ldots, Y_{\underline t_n}$,
our objective is to perform \textcolor{black}{point prediction} for a {\it future} unobserved data point $Y_{\underline t_{n+1}}$. Here $\underline t_1, \underline t_2, \ldots, \underline t_n, \underline t_{n+1} \in Z^2$ denote the coordinates of the random field over the 2-D index set $D$ and the notion of a {\it future} datapoint over a coordinate of a random field for purposes of predictive inference over {$\underline t \in Z^2\ $} is defined in Section \ref{RF.Causality}. Algorithms for point prediction and prediction intervals of locally stationary time series and their applications in both synthetic and real-life datasets have been discussed in \cite{das2021predictive}. Our work in this paper extends this framework to point prediction over locally stationary random fields with applications involving both synthetic and real-life image data.}
The usual approach for dealing with nonstationary series is to assume
that the data can be decomposed as the sum of three components:
$$
\mu(\underline t)+ S_{\underline t} + W_{\underline t}
$$
where $\mu(\underline t)$ is a deterministic trend function, $S_{\underline t}$ is a seasonal (periodic)
series, and $\{W_{\underline t}\}$ is (strictly) stationary with mean zero;
this is
the `classical' decomposition of a time series into trend, seasonal and stationary components; see e.g. \textcolor{black} {\cite{brockwell2013time}}, which can also be used for decomposition of nonstationary random field data. The seasonal (periodic) component, be it random or deterministic, can be easily estimated and
removed and
having done that, the `classical' decomposition
simplifies to the following model with additive trend, i.e.,
\begin{equation}
Y_{\underline t}=\mu(\underline t)+ W_{\underline t}
\label{RF.eq.model homo}
\end{equation}
which
can be generalized to accommodate
a
coordinate-changing
variance as well, i.e.,
\begin{equation}
Y_{\underline t}=\mu(\underline t)+ \sigma (\underline t) W_{\underline t} .
\label{RF.eq.model hetero}
\end{equation}
In both above models, the
series $\{W_{\underline t}\}$
is assumed to be (strictly) stationary,
weakly dependent, e.g. strong mixing, and satisfying $EW_{\underline t} =0$;
in model \eqref{RF.eq.model hetero}, it is also
assumed that $\Var(W_{\underline t})=1$.
As usual, the deterministic functions
$\mu( \cdot)$ and $\sigma (\cdot)$ are unknown but
assumed to belong to
a class of functions that is either finite-dimensional (parametric) or not \textcolor{black} {(nonparametric)};
we will focus on the latter, in which case
it is customary to assume that $\mu( \cdot)$ and $\sigma (\cdot)$ possess some degree of smoothness, i.e., that $\mu(\underline t)$ and $\sigma (\underline t)$ change
smoothly (and slowly) with $\underline t$.
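To fix ideas, the data-generating mechanism of model \eqref{RF.eq.model hetero} can be sketched in a few lines of Python; the particular trend $\mu(\cdot)$ and scale $\sigma(\cdot)$ below are hypothetical choices made only for illustration, and $\{W_{\underline t}\}$ is a simple stationary moving average of i.i.d.\ Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 50, 50
t1, t2 = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")

# Smooth deterministic trend and scale (hypothetical choices for illustration).
mu = 0.05 * t1 + 0.03 * t2                   # mu(t) varies slowly over the grid
sigma = 1.0 + 0.5 * np.sin(np.pi * t1 / n1)  # sigma(t) > 0, slowly varying

# Strictly stationary noise field W with mean 0 and unit variance:
# a short moving average of i.i.d. Gaussians, rescaled.
eps = rng.standard_normal((n1 + 2, n2 + 2))
W = (eps[:-2, :-2] + eps[1:-1, 1:-1] + eps[2:, 2:]) / np.sqrt(3.0)

Y = mu + sigma * W   # model (RF.eq.model hetero)
```

Centering and studentizing $Y$ by the true $\mu$ and $\sigma$ would recover the stationary field $W$ exactly, which is the premise exploited in Section \ref{RF.Model-based inference}.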
As far as capturing the first two moments of $Y_{\underline t}$,
models \eqref{RF.eq.model homo} and \eqref{RF.eq.model hetero}
are considered general and
flexible---especially when $\mu( \cdot)$ and $\sigma (\cdot)$
are not parametrically specified---and have been studied
extensively in the case of time series; see e.g.
\textcolor{black} {\cite{zhou2009local}, \cite{zhou2010simultaneous}.}
However, it may be that the skewness and/or kurtosis of $Y_{\underline t}$ changes with $\underline t$,
in which case centering and studentization alone cannot render the problem stationary. To see why, note that
under model \eqref{RF.eq.model hetero},
$ EY_{\underline t}=\mu(\underline t)$ and $ {\rm Var\,} Y_{\underline t}=\sigma ^2( \underline t)$; hence,
\begin{equation}
W_{\underline t} =\frac{Y_{\underline t}-\mu( \underline t)}{ \sigma (\underline t)}
\label{RF.eq.W}
\end{equation}
cannot be (strictly) stationary unless
the skewness and kurtosis of $Y_{\underline t}$ are constant.
Furthermore, it may be the case that the nonstationarity is due to
a feature of the $m$-dimensional marginal distribution not being constant
for some $m\geq 1$,
e.g., perhaps the correlation Corr$(Y_{\underline t_j}, Y_{\underline t_{j+1}})$ \textcolor{black}{where $\underline t_j, \underline t_{j+1} \in Z^2$}
changes smoothly (and slowly) with $\underline t_j$. Notably, models \eqref{RF.eq.model homo} and \eqref{RF.eq.model hetero} only concern themselves with features of the
1st marginal distribution.
For all the above reasons, it seems valuable to develop a methodology for
the statistical analysis of nonstationary random fields that does not
rely on simple additive models such as \eqref{RF.eq.model homo} and \eqref{RF.eq.model hetero}. Fortunately, the
Model-free Prediction Principle of \textcolor{black} {\cite{Politis2013}, \cite{politis2015model}}
\textcolor{black}{suggests a way} to accomplish Model-free inference
in the general setting of
random fields that are only locally stationary.
The key towards Model-free inference is to be able to construct an invertible transformation
$H_n: \underline{Y}_{\underline t_n} \mapsto \underline \epsilon_{n}$
where $ \underline{Y}_{\underline t_n} = (Y_{\underline t_1}, Y_{\underline t_2}, \ldots, Y_{\underline t_n})$ denotes the random field data under consideration and
$\underline \epsilon_{n}=(\epsilon_{1}, \ldots, \epsilon_{n} )'$
is a random vector with i.i.d.~components;
the details for point prediction are given in Section \ref{RF.Model-free inference}.
In Section \ref{RF.Model-based inference} we visit the problem of model-based
inference and develop a point prediction methodology for locally stationary random fields.
Both approaches, Model-based \textcolor{black} {of Section \ref{RF.Model-based inference}} and Model-free \textcolor{black}{of Section \ref{RF.Model-free inference}}, are novel, and they are empirically compared to each other in Section \ref{RF.Numerical} \textcolor{black} {using finite sample \textcolor{black}{experiments}}.
\graphicspath{{/Users/rumpagiri/Documents/NONPARAMETRIC/model_free/papers/RANDOM_FIELDS}}
\DeclareGraphicsExtensions{.png}
{\begin{figure}
\caption{Non Symmetric Half-Plane}
\label{NSHP}
\end{figure}}
\section{Causality of Random Fields}
\label{RF.Causality}
Given the random field observations {$Y_{\underline t_{1}}, \ldots, Y_{\underline t_{n}}$}, our goal is predictive inference for the ``next'' unknown data point {$Y_{\underline t_{n+1}}$}. In this context, a definition of causality is necessary to specify the random field coordinate $\underline t_{n+1}$ at which predictive inference will be performed. For this purpose, we adopt the framework proposed in \cite{choi2007modeling} and consider the random fields discussed in this paper to be defined over a subset of the non-symmetric half-plane (NSHP), denoted $H_{\infty}$. Figure \ref{NSHP} shows an NSHP centered at $(0,0)$. The NSHP can also be centered at any other point $\underline t$ as follows:
\begin{equation}
NSHP(\underline t) = \{\underline t + \underline s \ : \ \underline s \in NSHP(0,0)\}.
\label{eq.NSHP}
\end{equation}
Such non-symmetric half-planes have been used previously to specify causal 2-D AR models \cite{choi2007modeling}. In such cases, a causal 2-D AR model with $H_p \subset H_{\infty}$ can be defined as in equation (\ref{eq.2D_AR}) below, where the set $H_p$ is termed the region of support (ROS) of the 2-D AR model. Here $H_p = \{(j,k) \ | \ j = 1,2, \ldots, p, \ \ k = 0, \pm 1, \ldots, \pm p\} \cup \{(0,k) \ | \ k = 1,2,\ldots,p\}$ and $v_{t_1, t_2}$ is a 2-D white noise process with mean $0$ and variance $\sigma^2 > 0$.
\begin{equation}
Y_{t_1, t_2} = \sum \limits_{(j,k) \in H_p} \beta_{j,k} Y_{t_1-j, t_2-k} + v_{t_1, t_2}
\label{eq.2D_AR}
\end{equation}
Based on \cite{dudgeon1984multidimensional}, a 2-D AR process with ROS $S$ is causal if there exists a subset $C$ of $Z^2$ satisfying the following conditions:
\begin{itemize}
\item the set $C$ consists of two rays emanating from the origin and the points between the rays;
\item the angle between the two rays is strictly less than $180$ degrees;
\item $S \subset C$.
\end{itemize}
In this case, since $H_p \subset H_{\infty}$ satisfies these conditions, the 2-D AR process of (\ref{eq.2D_AR}) is causal. We can therefore use this framework to describe a causal random field defined over the NSHP and perform predictive inference on it. Our setup for point prediction of random fields is as follows.
Consider random field data $\{Y_{\underline t}, \ \underline t \in E\}$ where $E$ can be any finite subset of $Z^2$, e.g.
$E_{\underline n} = \{\underline t = (t_1, t_2) \in Z^2 \ : \ 0 < t_1 \leq n_1, \ 0 < t_2 \leq n_2\}$ with $\underline n=(n_1, n_2)$.
Our goal is predictive inference at $\underline t = (t_1, t_2)$ where $0 < t_1 < n_1$ and $0 < t_2 < n_2$. This ``future'' value $Y_{t_1, t_2}$ is determined using data defined over the region shown in Figure \ref{NSHP_pred}:
$$
E_{\underline t, \underline n} = NSHP(\underline t) \cap E_{\underline n}
$$
Both model-based and model-free causal inference for $Y_{t_1, t_2}$ are performed using the data specified over this region $E_{\underline t, \underline n}$. We consider predictive inference at $Y_{\underline t} = Y_{t_1, t_2}$ given the data $(Y_{\underline s} \mid \underline s \prec \underline t \ \& \ \underline s \in E_{\underline t, \underline n})$, where the symbol $\prec$ denotes lexicographic ordering on the region of support of the random field, i.e., $(a_k, b_k) \prec (a_{k+1}, b_{k+1})$ if and only if either $a_k < a_{k+1}$ or ($a_k = a_{k+1}$ and $b_k < b_{k+1}$) \cite{choi2007modeling}.
\textcolor{black}{In the subsequent discussion, the lexicographically ordered ``past'' data $Y_{\underline s}$ will be denoted $Y_{\underline t_1}, Y_{\underline t_2}, \ldots, Y_{\underline t_n}$ and point prediction will be performed at $Y_{\underline t} = Y_{\underline t_{n+1}}$.}
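The lexicographic order $\prec$ and the prediction region $E_{\underline t, \underline n}$ can be made concrete with the following short sketch; the helper names are ours, assumed for illustration only.

```python
def lex_less(s, t):
    """s precedes t in lexicographic order: s1 < t1, or s1 == t1 and s2 < t2."""
    return s[0] < t[0] or (s[0] == t[0] and s[1] < t[1])

def past_region(t, n):
    """E_{t,n} = NSHP(t) intersected with the n1 x n2 grid: all coordinates
    that lexicographically precede t, i.e. the data available to predict Y_t."""
    n1, n2 = n
    return [(s1, s2) for s1 in range(n1) for s2 in range(n2)
            if lex_less((s1, s2), t)]
```

For example, \texttt{past\_region((1, 1), (3, 3))} returns the four lexicographic predecessors of $(1,1)$ on a $3\times 3$ grid.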
{\begin{figure}
\caption{Prediction point for NSHP}
\label{NSHP_pred}
\end{figure}}
\section{Model-based inference}
\label{RF.Model-based inference}
Throughout Section \ref{RF.Model-based inference}, we will assume
model \eqref{RF.eq.model hetero}---that
includes model \eqref{RF.eq.model homo} as a special case---together with
a nonparametric assumption on smoothness of $\mu(\cdot)$ and $\sigma ( \cdot)$.
\subsection{Theoretical optimal point prediction}
\label{RF.sec.TOPP}
It is well-known that the $L_2$--optimal
predictor of $Y_{\underline t_{n+1}}$ given the data
$\underline{Y}_{\underline t_n}=(Y_{\underline t_1},\ldots, Y_{\underline t_n})'$
is the conditional expectation $E(Y_{\underline t_{n+1}}|\underline{Y}_{\underline t_n})$.
Furthermore, under model \eqref{RF.eq.model hetero}, we have
\begin{equation}
E(Y_{\underline t_{n+1}}|\underline{Y}_{\underline t_{n}})=\mu(\underline t_{n+1})+ \sigma (\underline t_{n+1}) E(W_{\underline t_{n+1}}|\underline{Y}_{\underline t_n}).
\label{RF.eq.point pred}
\end{equation}
For $\underline j \prec \underline J$ define ${\mathcal F}_{\underline j}^{\underline J}(Y)$ to be the
{\it information set} $\{ Y_{\underline j}, \ldots, Y_{\underline J}\}$,
also known as a $\sigma$--field,
and note that the information sets
${\mathcal F}_{-\infty}^{\underline t}(Y)$ and ${\mathcal F}_{-\infty}^{\underline t}(W)$ are identical for any $\underline t$,
i.e., knowledge of $\{ Y_{\underline s} : \underline s \prec \underline t\}$
is equivalent to
knowledge of $\{ W_{\underline s} : \underline s \prec \underline t \}$.
Here $\mu(\cdot)$ and $\sigma ( \cdot)$
are assumed known, and the symbol $\prec$ denotes the lexicographic ordering on the region of support of the random field as described in Section \ref{RF.Causality}.
Hence, for large $n$, and due to the assumption that $W_{\underline t}$
is weakly dependent (and therefore
the same must be true for $Y_{\underline t}$ as well), the following large-sample
approximation is useful, i.e.,
\begin{equation}
E(W_{\underline t_{n+1}}|\underline{Y}_{\underline t_n}) = E(W_{\underline t_{n+1}}|{Y}_{\underline s})
\simeq E(W_{\underline t_{n+1}}|Y_{\underline r}, \underline r \preceq \underline s)
= E(W_{\underline t_{n+1}}|W_{\underline r}, \underline r \preceq \underline s)
\simeq E(W_{\underline t_{n+1}}|{W}_{\underline s})
= E(W_{\underline t_{n+1}}|\underline W_{\underline t_n } )
\label{RF.eq.point pred approx}
\end{equation}
where $\underline{W}_{\underline t_n} = (W_{\underline t_1}, \ldots, W_{\underline t_n})'$.
\textcolor{black}{We therefore need to} construct an approximation for
$E(W_{\underline t_{n+1}}|\underline W_{\underline t_n } )$.
\textcolor{black}{For this purpose}, the $L_2$--optimal linear predictor of $W_{\underline t_{n+1}}$ can be obtained by fitting
a (causal) AR($p, q$) model to the data $W_{\underline t_1}, \ldots, W_{\underline t_n}$ with $p, q$ chosen by minimizing AIC, BIC or a related criterion as described in \cite{choi2007modeling}; this would entail fitting the model:
\begin{equation} \label{RF.eq.AR}
W_{t_{n_{1}}, t_{n_{2}}} = \sum \limits_{(j,k) \in H_p} \beta_{j,k} W_{t_{n_{1}}-j, t_{n_{2}}-k} + v_{t_{n_{1}}, t_{n_{2}}}
\end{equation}
where $v_{t_{n_{1}}, t_{n_{2}}}$ is a 2-D white noise process, i.e., an uncorrelated sequence, with mean $0$ and variance $\sigma^2 > 0$,
and $(t_{n_{1}}, t_{n_{2}})$ denote the components of $\underline t_{n+1}$.
The implication then is that
\begin{equation}
\bar E(W_{\underline t_{n+1}}|\underline{W}_{\underline t_n} ) = \sum \limits_{(j,k) \in H_p} \beta_{j,k} W_{t_{n_{1}}-j, t_{n_{2}}-k}
\label{RF.eq.ARpredictor}
\end{equation}
\subsection{Trend estimation and practical prediction}
\label{RF.sec.trend}
To construct the $L_2$--optimal predictor \eqref{RF.eq.point pred},
we need to estimate the smooth trend $\mu(\cdot)$ and
variance $\sigma ( \cdot)$
in a nonparametric fashion; this can be easily accomplished via kernel smoothing \textcolor{black}{by using 2D kernels} ---see
e.g.
\textcolor{black} {\cite{hardle1992kernel}, \cite{kim1996bandwidth}, \cite{li2007nonparametric}}.
Note, furthermore, that the prediction of $Y_{\underline t_{n+1}}$, which involves
estimating the functions \textcolor{black}{$\mu( \cdot)$ and $\sigma (\cdot)$} at $\underline t_{n+1}$,
is essentially a boundary problem. In such cases, it is well-known that
local linear fitting has better properties---in particular, smaller bias---than
kernel smoothing, which is tantamount to local constant fitting; see
\textcolor{black}{\cite{fan1996local}, \cite{fan2007nonlinear}, or \cite{li2007nonparametric}}.
Note that for time series problems $\{Y_t, \ t \in Z\}$, local linear nonparametric estimation approximates
the trend locally by a straight line, whereas for the random fields $\{Y_{\underline t}, \ \underline t \in Z^2\}$
discussed in this paper, local linear estimation approximates
the trend locally by a plane.
\begin{Remark} [One-sided estimation] \rm
Since the goal is predictive inference on $Y_{\underline t_{n+1}}$,
local constant and/or local linear fitting must be performed in
a {\it one-sided way}.
\textcolor{black}{Furthermore} to compute $\bar E(W_{\underline t_{n+1}}|\underline{W}_{\underline t_n} )$
in eq.~\eqref{RF.eq.ARpredictor} we need access to the stationary data $W_{\underline t_1},\ldots,W_{\underline t_n}$.
The $W_{\underline t}$'s are not directly observed, but---much like residuals in a
regression---they can be reconstructed by eq.~\eqref{RF.eq.W}
with estimates of $\mu(\underline t)$ and $\sigma (\underline t)$ plugged-in.
What is important is that {\bf the way $W_{\underline t}$ is reconstructed/estimated
by (say) $\hat W_{\underline t}$ must remain the same for all $\underline t$}, otherwise
the reconstructed data $\hat W_{\underline t_1},\ldots,\hat W_{\underline t_n}$ can not be considered stationary.
Since $W_{\underline t}$ can only be estimated in a one-sided way for $\underline t$ close to $\underline t_n$,
the same one-sided way must also be implemented for $\underline t$ in the middle of
the dataset even though in that case two-sided estimation is possible.
\label{RF.re.onesided}
\end{Remark}
\textcolor{black}{By} analogy to model-based regression
\textcolor{black} {as described in \cite{Politis2013}},
the one-sided Nadaraya-Watson (NW) kernel estimators of $\mu(\underline t)$ and $\sigma (\underline t )$
can be defined in two ways.
Note that the bandwidth parameter $b$
will be assumed to satisfy
\begin{equation}
b\to \infty \ \mbox{as} \ n\to \infty \ \mbox{but} \ b/n\to 0,
\label{RF.eq.new bandwidth}
\end{equation}
i.e., $b$ is analogous to the product $hn$ \textcolor{black}{where $h$ is the usual bandwidth in nonparametric regression.}
We will assume throughout that $K(\cdot)$ is a nonnegative, symmetric 2-D Gaussian kernel whose covariance matrix is diagonal with both diagonal entries equal to the bandwidth $b$ and off-diagonal entries equal to $0$. The random field data are denoted $Y_{\underline t_{1}}, \ldots, Y_{\underline t_{k}}, \ldots, Y_{\underline t_{n}}$.
\begin{enumerate}
\item {\bf NW--Regular fitting:}
Let $\underline t_{k} \in [\underline t_1, \underline t_n]$, and define
\begin{equation}
\hat \mu(\underline t_k) = \sum _{i=1}^{k} \ Y_{\underline t_{i}} \ \hat K\left(\frac{\underline t_k - \underline t_{i}}{b}\right)
\ \ \mbox{and} \ \
\hat M(\underline t_k) = \sum _{i=1}^{k } \ Y_{\underline t_{i}}^2 \ \hat K\left(\frac{\underline t_k - \underline t_{i}}{b}\right)
\label{RF.eq.nw-mu}
\end{equation}
where
\begin{equation}
\hat \sigma(\underline t_k) = \sqrt { \hat M(\underline t_k) - \hat \mu(\underline t_k)^2 }
\ \ \mbox{and} \ \
\hat K \left( \frac { \underline t_k - \underline t_{i} } {b}\right) = \frac { K\left(\frac{\underline t_k - \underline t_{i}} {b}\right) }{\sum _{j=1}^{k} K\left(\frac {\underline t_k - \underline t_{j}}{b}\right)} .
\label{RF.eq.nw-sigma}
\end{equation}
Using $\hat \mu(\underline t_k)$ and $\hat \sigma(\underline t_k)$ we can now define the
{\it fitted} residuals by
\begin{equation}
\hat W_{\underline t_k}= \frac{Y_{\underline t_k}- \hat \mu(\underline t_k)}{ \hat \sigma (\underline t_k )}
\ \ \mbox{for} \ \
\underline t_k=\underline t_{1},\ldots, \underline t_n.
\label{RF.eq.hatW}
\end{equation}
\item {\bf NW--Predictive fitting (delete-1):}
Let
\begin{equation}
\tilde \mu(\underline t_k) = \sum _{i=1}^{k-1} \ Y_{\underline t_{i}} \ \tilde K\left(\frac{\underline t_k - \underline t_{i}}{b}\right)
\ \ \mbox{and} \ \
\tilde M(\underline t_k) = \sum _{i=1}^{k-1} \ Y_{\underline t_{i}}^2 \ \tilde K\left(\frac{\underline t_k - \underline t_{i}}{b}\right)
\label{RF.eq.nw-muPRED}
\end{equation}
where
\begin{equation}
\tilde \sigma(\underline t_k) = \sqrt { \tilde M(\underline t_k) - \tilde \mu(\underline t_k)^2 }
\ \ \mbox{and} \ \
\tilde K \left( \frac { \underline t_k - \underline t_{i} } {b}\right) = \frac { K\left(\frac{\underline t_k - \underline t_{i}} {b}\right) }{\sum _{j=1}^{k-1} K\left(\frac {\underline t_k - \underline t_{j}}{b}\right)} .
\label{RF.eq.nw-sigmaPRED}
\end{equation}
Using $\tilde \mu(\underline t_k)$ and $\tilde \sigma(\underline t_k)$ we can now define the
{\it predictive} residuals by
\begin{equation}
\tilde W_{\underline t_k}= \frac{Y_{\underline t_k}- \tilde \mu(\underline t_k)}{ \tilde \sigma (\underline t_k )}
\ \ \mbox{for} \ \
\underline t_k=\underline t_{1},\ldots, \underline t_n.
\label{RF.eq.tildeW}
\end{equation}
\end{enumerate}
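The two NW variants above differ only in whether the point $\underline t_k$ itself enters the weighted sums ($T=k$, regular) or is deleted ($T=k-1$, predictive). A minimal sketch, with our own function name and the diagonal Gaussian kernel assumed in the text:

```python
import numpy as np

def nw_one_sided(ts, Y, k, b, predictive=False):
    """One-sided NW estimates (hat-mu, hat-sigma) at t_k = ts[k] from the
    lexicographically ordered data, using T = k+1 points (regular fitting)
    or T = k points (predictive, delete-1 fitting); 0-based indexing."""
    T = k if predictive else k + 1
    pts = np.asarray(ts[:T], dtype=float)
    d = pts - np.asarray(ts[k], dtype=float)
    # 2-D Gaussian kernel with diagonal covariance diag(b, b), as in the text.
    w = np.exp(-0.5 * (d[:, 0] ** 2 + d[:, 1] ** 2) / b)
    w /= w.sum()                      # normalized weights (hat-K resp. tilde-K)
    Yv = np.asarray(Y, dtype=float)[:T]
    mu = float(np.sum(w * Yv))
    M = float(np.sum(w * Yv ** 2))
    sigma = float(np.sqrt(max(M - mu ** 2, 0.0)))
    return mu, sigma
```

As a sanity check, constant data yield $\hat\mu$ equal to that constant and $\hat\sigma = 0$, while the predictive variant at position $k$ is unaffected by the value $Y_{\underline t_k}$ itself.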
\vskip .1in
\noindent
Similarly, the one-sided local linear (LL) fitting estimators of $\mu(\underline t_k)$ and $\sigma (\underline t_k )$
can be defined in two ways.
\begin{enumerate}
\item {\bf LL--Regular fitting:}
Let $\underline t_{k} \in [\underline t_{1}, \underline t_n]$, and define
\begin{equation}
\hat \mu(\underline t_k)=\frac{ \sum_{j=1}^{k} w_jY_{\underline t_j} }{\sum_{j=1}^{k } w_j + n^{-2}}
\ \ \mbox{and} \ \
\hat M(\underline t_k) = \frac{ \sum_{j=1}^{k} w_jY_{\underline t_j}^2 }{\sum_{j=1}^{k} w_j + n^{-2}}
\label{RF.eq.locallinearF}
\end{equation}
\textcolor{black}{where, denoting}
\begin{equation}
\underline a = (a_1, a_2) = (\underline t_j - \underline t_k)
\end{equation}
\begin{equation}
s_{t1,1} = \sum_{j=1}^{k} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_1
\end{equation}
\begin{equation}
s_{t2,1} = \sum_{j=1}^{k} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_2
\end{equation}
\begin{equation}
s_{t1,2} = \sum_{j=1}^{k} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_1^2
\end{equation}
\begin{equation}
s_{t2,2} = \sum_{j=1}^{k} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_2^2
\end{equation}
\begin{equation}
s_{t1,t2} = \sum_{j=1}^{k} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_1 a_2
\end{equation}
\begin{equation}
w_j= K(\frac{\underline t_j-\underline t_{k}} {b}) \left[s_{t1,2} s_{t2,2} - s_{t1,t2}^2 - a_1 (s_{t1,1} s_{t2,2} - s_{t2,1} s_{t1,t2}) + a_2 (s_{t1,1} s_{t1,t2} - s_{t1,2} s_{t2,1})
\right],
\label{RF.eq.locallinearweightsF}
\end{equation}
The term $ n^{-2}$ in eq.~\eqref{RF.eq.locallinearF} is just
to ensure that the denominator is not zero; see Fan (1993).
Eq.~\eqref{RF.eq.nw-sigma} then yields $\hat \sigma(\underline t_k)$,
and eq.~\eqref{RF.eq.hatW} yields~$\hat W_{\underline t_k}$.
\item {\bf LL--Predictive fitting (delete-1):}
Let
\begin{equation}
\tilde \mu(\underline t_k)=\frac{ \sum_{j=1}^{k-1} w_jY_{\underline t_j} }{\sum_{j=1}^{k-1 } w_j + n^{-2}}
\ \ \mbox{and} \ \
\tilde M(\underline t_k) =\frac{ \sum_{j=1}^{k-1 } w_jY_{\underline t_j}^2 }{\sum_{j=1}^{k-1 } w_j + n^{-2}}
\label{RF.eq.locallinearP}
\end{equation}
where
\begin{equation}
\underline a = (a_1, a_2) = (\underline t_j - \underline t_k)
\end{equation}
\begin{equation}
s_{t1,1} = \sum_{j=1}^{k-1} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_1
\end{equation}
\begin{equation}
s_{t2,1} = \sum_{j=1}^{k-1} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_2
\end{equation}
\begin{equation}
s_{t1,2} = \sum_{j=1}^{k-1} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_1^2
\end{equation}
\begin{equation}
s_{t2,2} = \sum_{j=1}^{k-1} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_2^2
\end{equation}
\begin{equation}
s_{t1,t2} = \sum_{j=1}^{k-1} K\left(\frac{\underline t_j - \underline t_{k}}{b}\right) a_1 a_2
\end{equation}
\begin{equation}
w_j= K(\frac{\underline t_j-\underline t_{k}} {b}) \left[s_{t1,2} s_{t2,2} - s_{t1,t2}^2 - a_1 (s_{t1,1} s_{t2,2} - s_{t2,1} s_{t1,t2}) + a_2 (s_{t1,1} s_{t1,t2} - s_{t1,2} s_{t2,1})
\right],
\label{RF.eq.locallinearweightsP}
\end{equation}
Eq.~\eqref{RF.eq.nw-sigmaPRED} then yields $\tilde \sigma(\underline t_k)$,
and eq.~\eqref{RF.eq.tildeW} yields $\tilde W_{\underline t_k}$.
\end{enumerate}
\vskip .1in
\noindent
Using one of the above four methods (NW vs.~LL, regular vs.~predictive)
gives estimates of the quantities needed to compute the $L_2$--optimal
predictor \eqref{RF.eq.point pred}. In order to
approximate $E(W_{\underline t_{n+1}}|\underline{Y}_{\underline t_n})$, one would treat the proxies
$\hat W_{\underline t_k}$ or $\tilde W_{\underline t_k}$ as if they were the true $W_{\underline t_k}$, and proceed
as outlined in Section \ref{RF.sec.TOPP}.
\textcolor{black}{The bandwidth $b$ in all 4 algorithms described above can be determined by cross-validation as described in Section \ref{RF.cross-validation}.}
\section{Model-free inference}
\label{RF.Model-free inference}
Model (\ref{RF.eq.model hetero}) is a flexible way to account for a
spatially-changing mean and variance of $Y_{\underline t}$.
However, nothing precludes that the random field $\{Y_{\underline t} $ for $\underline t\in {\bf Z^2} \}$ has a nonstationarity
in its third (or higher) moment, and/or in some other feature of its $m$th
marginal distribution.
A way to address this difficulty, and at the same time give a fresh perspective to the problem, is provided by the
Model-free Prediction Principle of \cite{Politis2013}, \cite{politis2015model}.
The key towards Model-free inference is to be able to construct an invertible transformation
$H_n: \underline{Y}_{\underline t_n} \mapsto \underline \epsilon_{n}$
where \textcolor{black}{$ \underline{Y}_{\underline t_n} = (Y_{\underline t_1}, Y_{\underline t_2}, \ldots, Y_{\underline t_n})$ denotes the random field data under consideration}
and $\underline \epsilon_{n}=(\epsilon_{1}, \ldots, \epsilon_{n} )'$
is a random vector with i.i.d.~components. In order to do this in our context,
fix some $m\geq1$, and denote by ${\mathcal L}(Y_{\underline t_{k} },Y_{\underline t_{k-1}},\ldots,Y_{\underline t_{k-m+1}}) $
the $m$th marginal of the random field { $Y_{\underline t_k}$ }, i.e. the joint probability law of the vector
$(Y_{\underline t_{k} },Y_{\underline t_{k-1}},\ldots,Y_{\underline t_{k-m+1}})'$. Although we abandon model~(\ref{RF.eq.model hetero})
in what follows, we still want to employ nonparametric smoothing for estimation; thus,
we must assume that \\${\mathcal L}(Y_{\underline t_{k} },Y_{\underline t_{k-1}},\ldots,Y_{\underline t_{k-m+1}})$ \textcolor{black}{changes} smoothly (and slowly) with $\underline t_k$. \textcolor{black}{In this case $\{Y_{\underline t_k}, \ \underline t_k \in Z^2\}$ can be defined over a 2-D index-set $D$ and the set $Y_{\underline t_{k} },Y_{\underline t_{k-1}},\ldots,Y_{\underline t_{k-m+1}}$ can be considered to be lexicographically ordered as discussed previously in Section \ref{RF.Causality}.}
A convenient way to ensure both the smoothness and data-based consistent estimation of
${\mathcal L}(Y_{\underline t_{k} },Y_{\underline t_{k-1}},\ldots,Y_{\underline t_{k-m+1}})$ is to assume that, for all $\underline t_k,$
\begin{equation}
Y_{\underline t_k } =
{\bf f}_{\underline t_k}(W_{\underline t_k },W_{\underline t_{k-1}},\ldots,W_{\underline t_{k-m+1}})
\label{RF.eq.book.9.24}
\end{equation}
\noindent
for some function ${\bf f}_{\underline t_k}(w)$ that is smooth in both arguments $\underline t_k$ and $w$, and some strictly stationary
and weakly dependent, univariate series $\{W_{\underline t_k}\}$; without loss of
generality, we may assume that $W_{\underline t_k}$ is a Gaussian series.
In fact,
Eq. (\ref{RF.eq.book.9.24}) with ${\bf f}_{\underline t_k}(\cdot)$ not depending on
$\underline t_k$ is a familiar assumption in studying non-Gaussian and/or long-range dependent
stationary processes---see e.g. \cite{samorodnitsky1994stable}.
By allowing ${\bf f}_{\underline t_k}(\cdot)$ to vary smoothly (and slowly) with $\underline t_k$,
Eq. (\ref{RF.eq.book.9.24}) can be used to describe a rather general
class of locally stationary processes.
Note that model~(\ref{RF.eq.model hetero}) is a special case
of Eq. (\ref{RF.eq.book.9.24}) with $m=1$ and
the function ${\bf f}_{\underline t_k}(w)$ being affine/linear in $w$.
Thus, for concreteness and easy comparison with the model-based case of
Eq. (\ref{RF.eq.model hetero}), we will focus in the sequel on the case $m=1$.
For reference, model-free estimators for point prediction and prediction intervals in the case of locally stationary time series with $m=1$ have been discussed in \cite{das2021predictive}.
\subsection{Constructing the theoretical transformation}
\label{RF.sec.CTT}
\textcolor{black} {Hereafter, adopt the setup of \textcolor{black}{ Eq. (\ref{RF.eq.book.9.24})} with $m=1$,}
and let
$$D_{\underline t}(y)=P\{ Y_{\underline t} \leq y \}$$ denote the 1st marginal distribution of random field $\{Y_{\underline t} \}$.
Throughout Section \ref{RF.Model-free inference},
the default assumption will be that $D_{\underline t }(y)$ is (absolutely) continuous in $y$ for all $\underline t$.
We now define new variables via the probability integral transform, i.e., let
\begin{equation}
U_{\underline t} = D_{\underline t}(Y_{\underline t}) \ \ \mbox{for} \ \underline t= \underline t_1,\ldots, \underline t_n;
\label{RF_unif.eq.modelT}
\end{equation}
the assumed continuity of $D_{\underline t }(y)$ in $y$ implies that
$U_{\underline t_1},\ldots, U_{\underline t_n}$ are random variables having distribution Uniform $ (0,1)$.
However, $U_{\underline t_1},\ldots, U_{\underline t_n}$ are dependent; to transform them to
independence, a preliminary transformation towards Gaussianity is helpful as
discussed in
\textcolor{black}{\cite{Politis2013}}.
Letting $\Phi$ denote the cumulative distribution function (cdf) of the standard normal distribution, we define
\begin{equation}
Z_{\underline t} = \Phi^{-1} (U_{\underline t}) \ \ \mbox{for} \ \underline t=\underline t_1,\ldots, \underline t_n;
\label{RF_norm.eq.modelT}
\end{equation}
it then follows that $Z_{\underline t_1},\ldots, Z_{\underline t_n}$ are standard normal---albeit
correlated---random variables.
Let $ \Gamma_n $ denote the $n\times n$ covariance matrix
of the random vector $\underline{Z}_{\underline t_n}=(Z_{\underline t_1},\ldots, Z_{\underline t_n})' $.
Under standard assumptions,
e.g. that the spectral density of the series $\{Z_{\underline t}\}$
is continuous and bounded away from zero,\footnote{If the spectral density
is equal to zero over an interval---however small---then the series $\{Z_{\underline t}\}$ is perfectly predictable
based on its infinite past, and the same would be true for
the series $\{Y_{\underline t}\}$; see Brockwell and Davis (1991, Theorem 5.8.1)
on Kolmogorov's formula.} the matrix $ \Gamma_n $ is invertible
when $n$ is large enough. Consider the
Cholesky decomposition $ \Gamma_n = C_n C_n'$
where $ C_n$ is (lower) triangular, and construct the
{\it whitening} transformation:
\begin{equation}
\label{RF.eq.whitenfilterT}
\underline \epsilon_{n}= C_n^{-1} \underline{Z}_{\underline t_n} .
\end{equation}
It then follows that the entries of $\underline \epsilon_{n}=(\epsilon_1, \ldots, \epsilon_n)'$ are uncorrelated~standard normal.
Assuming that the random variables $Z_{\underline t_1},\ldots, Z_{\underline t_n}$ are {\it jointly} normal,
this can be strengthened to the claim that $\epsilon_1, \ldots, \epsilon_n$ are i.i.d.~$N(0,1)$.
\textcolor{black}{Joint normality can be established by assuming a generative model for the random field as given by eq.~(\ref{RF.eq.book.9.24}); for a more detailed discussion, see \cite{das2021predictive}}.
Consequently, the transformation
of the dataset
\textcolor{black}{$ \underline{Y}_{\underline t_n} = (Y_{\underline t_1}, Y_{\underline t_2}, \ldots, Y_{\underline t_n})$}
to the vector $\underline \epsilon_{n}$ with i.i.d.~components has been
achieved as required in premise (a) of the
Model-free Prediction Principle.
Note that all the steps in the transformation, i.e.,
eqs.~(\ref{RF_unif.eq.modelT}), (\ref{RF_norm.eq.modelT}) and (\ref{RF.eq.whitenfilterT}), are invertible; hence, the composite
transformation
\textcolor{black}{$H_n: \underline{Y}_{\underline t_n} \mapsto \underline \epsilon_{n}$}
is invertible as well.
\subsection{Kernel estimation of the `uniformizing' transformation}
\label{RF.sec.KEUT}
We first focus on estimating the `uniformizing' part of the transformation,
i.e., eq.~(\ref{RF_unif.eq.modelT}).
Recall that the Model-free setup implies that the function $D_{\underline t}(\cdot)$ changes smoothly (and slowly) with~$\underline t$; hence, local constant and/or local linear fitting can be used to estimate it. Consider random field data denoted as $Y_{\underline t_{1}}, \ldots, Y_{\underline t_{k}}, \ldots Y_{\underline t_{n}}$.
Using local constant, i.e., kernel estimation, a consistent estimator of the marginal distribution
$D_{\underline t_k }(y)$ is given by:
\begin{equation}
\hat D_{\underline t_k }(y) = \sum_{i=1}^{T} {\bf 1}\{ Y_{\underline t_{i}}\leq y\}
\tilde K (\frac{\underline t_{k} - \underline t_{i}}{b})
\label{RF.eq.hatD}
\end{equation}
where $\tilde K (\frac{\underline t_k -\underline t_{i}}{b}) = K (\frac{\underline t_k -\underline t_{i}}{b})/
\sum_{j=1}^{T}K (\frac{\underline t_k-\underline t_{j}}{b}) $. As in the model-based case, we will assume throughout that $K(\cdot)$ is a nonnegative, symmetric 2-D Gaussian kernel whose covariance matrix is diagonal with both diagonal entries equal to the bandwidth $b$ and off-diagonal entries equal to $0$.
Note that the kernel estimator \eqref{RF.eq.hatD} is {\it one-sided}
for the same reasons discussed in Remark \ref{RF.re.onesided}.
Since $\hat D_{\underline t_k }(y)$ is a step function in $y$, a smooth estimator
can be defined as:
\begin{equation}
\bar D_{\underline t_k }(y) = \sum_{i=1}^{T} \Lambda(\frac {y-Y_{\underline t_{i}}} {{h}_0})\tilde K (\frac{\underline t_k-{\underline t_i}}{b})
\label{RF.eq.barD}
\end{equation}
where ${h}_0$ is a secondary bandwidth.
Furthermore, as in Section \ref{RF.sec.trend}, we can let $T=k$ or $T=k-1$
leading to a {\bf fitted vs.~predictive} way to estimate $D_{\underline t_k }(y)$
by either $\hat D_{\underline t_k }(y)$ or $\bar D_{\underline t_k }(y)$.
\textcolor{black} {Cross-validation is used to determine the bandwidths $h_0$ and $b$ \textcolor{black}{; details}
are described in Section \ref{RF.cross-validation}.}
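The smoothed estimator $\bar D_{\underline t_k}(y)$ of eq.~\eqref{RF.eq.barD} can be sketched as follows; the smoothing function $\Lambda$ is taken here to be the standard normal cdf, which is an assumed choice, and the function name is ours.

```python
import numpy as np
from math import erf, sqrt

def bar_D(ts, Y, k, y, b, h0):
    """One-sided smoothed cdf estimator bar-D_{t_k}(y) of eq. (RF.eq.barD),
    fitted version (T = k+1 points in 0-based indexing).  Lambda is the
    standard normal cdf (an assumed choice), computed via erf."""
    T = k + 1
    pts = np.asarray(ts[:T], dtype=float)
    d = pts - np.asarray(ts[k], dtype=float)
    w = np.exp(-0.5 * (d[:, 0] ** 2 + d[:, 1] ** 2) / b)
    w /= w.sum()                                  # tilde-K weights
    Lam = np.array([0.5 * (1.0 + erf((y - yi) / (h0 * sqrt(2.0))))
                    for yi in np.asarray(Y[:T], dtype=float)])
    return float(np.sum(w * Lam))
```

By construction the estimate is nondecreasing in $y$ and tends to $0$ and $1$ in the left and right tails, i.e., it is a proper cdf, unlike the unadjusted local linear versions discussed next.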
\subsection{Local linear estimation of the `uniformizing' transformation}
\label{RF.sec.LLEUT}
Note that the kernel estimator
$\hat D_{\underline t_k }(y)$ defined in eq.~\eqref{RF.eq.hatD}
is just the Nadaraya-Watson smoother, i.e., local average, of the variables
$u_1,\ldots, u_n$ where $u_i = {\bf 1}\{ Y_{\underline t_i}\leq y\}$.
Similarly, $\bar D_{\underline t _k}(y)$ defined in eq.~\eqref{RF.eq.barD}
is just the Nadaraya-Watson smoother of the variables
$v_1,\ldots, v_n$ where $v_i =\Lambda(\frac {y-Y_{\underline t_i}} {{h}_0})$.
In either case, it is only natural to try to consider a local
linear smoother as an alternative to Nadaraya-Watson especially
\textcolor{black}{since, once again, our interest lies in one-sided estimation on the boundary of the random field.}
Let
\textcolor{black} {$\hat D_{\underline t_k }^{LL}(y)$ and $\bar D_{\underline t_k}^{LL}(y)$}
denote the local linear estimators of $D_{\underline t_k }(y)$
based on either the indicator variables ${\bf 1}\{ Y_{\underline t_i}\leq y\}$
or the smoothed variables $\Lambda(\frac {y-Y_{\underline t_i}} {{h}_0})$
\textcolor{black} {respectively}.
Keeping $y$ fixed,
\textcolor{black} {$\hat D_{\underline t_k }^{LL}(y)$ and $\bar D_{\underline t_k}^{LL}(y)$}
\textcolor{black}{exhibit} good behavior \textcolor{black}{for estimation at the boundary,} e.g.
smaller bias than $\hat D_{\underline t_k }(y)$ \textcolor{black}{or} $\bar D_{\underline t_k }(y)$
\textcolor{black}{respectively}.
However, there is no guarantee that
\textcolor{black} {these will be}
proper distribution \textcolor{black}{functions} as a function of $y$,
i.e., being nondecreasing in $y$ with a left limit of 0 and
a right limit of 1; see
\textcolor{black}{\cite{li2007nonparametric}}
for a discussion.
One solution, put forward by
\cite{hansen2004nonparametric},
involves
a straightforward adjustment to
the local linear estimator of a conditional distribution function
that maintains its favorable asymptotic properties.
The local linear versions of $\hat D_{\underline t_k}(y) $ and $\bar D_{\underline t_k}(y) $
adjusted via
Hansen's proposal are
given as follows:
\begin{equation}
\label{RF.eq.sll_cdf}
\hat D_{\underline t_k}^{LLH}(y) = \frac{\sum_{i=1}^{T} w_{i}^\diamond {\bf 1}(Y_{{\underline t_i}} \le y)}{\sum_{i=1}^{T} w_{i}^\diamond}
\ \ \mbox{and} \ \
\bar D_{\underline t_k}^{LLH}(y) = \frac{\sum_{i=1}^{T} w_{i}^\diamond
\Lambda(\frac {y-Y_{\underline t_i}} {{h}_0})}{\sum_{i=1}^{T} w_{i}^\diamond} .
\end{equation}
The weights $w_{i}^\diamond$ are derived from the weights $w_i$ described in equations (\ref{RF.eq.locallinearweightsF}) and (\ref{RF.eq.locallinearweightsP}) for the fitted and predictive cases, respectively, where:
\begin{align}
\label{RF.eq.sll_diamond}
w_{i}^\diamond = \begin{cases}
\ 0 & \ \mbox{when} \ \ w_i < 0 \\
\ w_{i} & \ \mbox{when} \ \ w_i \ge 0
\end{cases}
\end{align}
As with eq.~\eqref{RF.eq.hatD} and~\eqref{RF.eq.barD}, we can let $T=k$ or $T=k-1$ in the above,
leading to {\bf fitted vs.~predictive}
local linear estimators of $D_{\underline t_k }(y)$,
namely $\hat D_{\underline t_k}^{LLH}(y)$ or $\bar D_{\underline t_k}^{LLH}(y)$.
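Hansen's adjustment amounts to clipping the negative local linear weights at zero and renormalizing; a minimal sketch of eq.~\eqref{RF.eq.sll_cdf}, where the weight vector is supplied by the caller (how the $w_i$ themselves are formed is not shown here):

```python
import numpy as np

def hansen_cdf(y, Y, w):
    """Hansen-adjusted local linear CDF estimate: negative local
    linear weights w_i are replaced by 0 (the w^diamond of the text),
    then the weighted average of 1{Y_i <= y} is renormalized."""
    w_diamond = np.where(w < 0, 0.0, w)   # eq. (sll_diamond)
    u = (Y <= y).astype(float)
    return np.sum(w_diamond * u) / np.sum(w_diamond)
```

By construction the result is a proper distribution function in $y$, at the cost of discarding the negative weights.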
\subsection{Uniformization using monotone local linear distribution estimation}
\label{RF.sec.MLLDE}
Hansen's proposal replaces negative weights by zeros, and then renormalizes the remaining
weights. The problem here is that if estimation is performed on the boundary (as is the case
with one-step ahead prediction of random fields), negative weights are crucially needed in order to ensure that the
extrapolation takes place with minimal bias.
A recent proposal by \cite{das2019nonparametric}
addresses this issue by modifying the original, possibly nonmonotonic, local linear distribution estimator $\bar D_{\underline t_k}^{LL }(y) $ to construct a monotonic version
denoted by $\bar {D}_{\underline t_k}^{LLM }(y)$.
The monotone local linear distribution estimator $\bar {D}_{\underline t_k}^{LLM }(y)$ can be constructed by Algorithm \ref{Monotone_Density_Algo} given below.
\begin{Algorithm}{
\bf{Monotone Local Linear Distribution Estimation}
\label{Monotone_Density_Algo}
}
\begin{enumerate}
\item Recall that the derivative of $\bar {D}_{\underline t_k}^{LL }(y) $
with respect to $y$ is given by
$$ \bar d_{\underline t_k}^{LL}(y)=\frac{\frac {1} {{h}_0} \sum_{j=1}^{T } w_j \lambda(\frac {y-Y_{ {\underline t_j}}} {{h}_0}) }{\sum_{j=1}^{T} w_j }
$$
where $\lambda(y)$ is the derivative of $\Lambda(y)$ and the weights $w_j$ can be derived based on equations (\ref{RF.eq.locallinearweightsF}) and (\ref{RF.eq.locallinearweightsP}) for the fitted and predictive cases.
\item Define a nonnegative version of $ \bar d_{\underline t_k}^{LL}(y)$ as
$ \bar d_{\underline t_k}^{LL+}(y)=\max (\bar d_{\underline t_k}^{LL}(y), 0)$.
\item To make the above a proper density function, renormalize
it to area one, i.e., let
\begin{equation}
\label{eq.densityMLL}
\bar d_{\underline t_k}^{LLM }(y) = \frac{\bar d_{\underline t_k}^{LL+}(y) } {\int_{-\infty}^\infty \bar d_{\underline t_k}^{LL+}(s)ds }.
\end{equation}
\item Finally, define $\bar {D}_{\underline t_k}^{LLM }(y) =\int_{-\infty}^y \bar d_{\underline t_k}^{LLM }(s) ds.$
\end{enumerate}
\end{Algorithm}
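Algorithm \ref{Monotone_Density_Algo} can be sketched on a grid of $y$ values as below. This is illustrative only: the standard normal density for $\lambda$ and the Riemann-sum integration are our choices (the text notes that FFT-based density routines are preferable in practice).

```python
import numpy as np

def monotone_ll_cdf(grid, Y, w, h0):
    """Monotone local linear distribution estimator, evaluated on an
    equally spaced grid of y values.  Y are the data, w the local
    linear weights, h0 the smoothing bandwidth."""
    lam = lambda x: np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    # Step 1: (possibly negative) local linear density estimate
    d = np.array([np.sum(w * lam((y - Y) / h0)) / (h0 * np.sum(w))
                  for y in grid])
    d_plus = np.maximum(d, 0.0)              # Step 2: clip negatives at zero
    dy = grid[1] - grid[0]
    d_m = d_plus / (np.sum(d_plus) * dy)     # Step 3: renormalize to area one
    return np.cumsum(d_m) * dy               # Step 4: integrate to get the CDF
```

The output is nondecreasing with limits 0 and 1 by construction, while negative weights still enter through Step 1.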
The above modification of the local linear estimator
allows one to maintain monotonicity while retaining the negative weights that
are helpful in problems which involve estimation at the boundary.
As with eq.~\eqref{RF.eq.hatD} and~\eqref{RF.eq.barD}, we can let $T=k$ or $T=k-1$ in the above,
leading to {\bf fitted vs.~predictive}
local linear estimators of $D_{\underline t_k }(y)$ that are monotone.
Different algorithms could also be employed to enforce monotonicity on the original estimator $\bar {D}_{\underline t_k}^{LL }(y) $; these are discussed in detail in \cite{das2019nonparametric}. In practice, Algorithm \ref{Monotone_Density_Algo} is
preferable because it is the fastest to implement; notably,
density estimates can be obtained quickly (using the Fast Fourier Transform)
via standard functions in statistical software such as R.
Computational speed matters
for point prediction but is critical for cross-validation,
where a large number of estimates of $\bar D_{\underline t_k }^{LLM}(y)$ must be computed to determine the optimal bandwidth.
\subsection{Estimation of the whitening transformation}
\label{RF.seq.whiteningtransformation}
To implement the whitening transformation \eqref{RF.eq.whitenfilterT},
it is necessary to estimate $ \Gamma_n $, i.e., the $n\times n$ covariance matrix
of the random vector $\underline{Z}_{\underline t_n}=(Z_{\underline t_1},\ldots, Z_{\underline t_n})' $
where the $Z_{\underline t}$ are the normal random variables
defined in eq.~\eqref{RF_norm.eq.modelT}.
The problem involves positive definite estimation of $ \Gamma_n $ based on the sample $Z_{\underline t_1},\ldots, Z_{\underline t_n}$.
Let $\hat \Gamma_n^{AR}$ be the $n\times n$ covariance matrix associated with the 2D AR$(p,q)$ model fitted to the data $Z_{\underline t_1},\ldots, Z_{\underline t_n}$, with $p$ and $q$ chosen
by minimizing AIC, BIC or a related criterion as described in \cite{choi2007modeling}. Let $\hat \gamma_{k, l}^{AR}$ denote the autocovariances implied by the fitted model, which determine the entries of the (block) Toeplitz matrix $\hat \Gamma_n^{AR}$. Using the 2D Yule-Walker equations to fit the AR model implies that $\hat \gamma_{k, l}^{AR}= \breve \gamma_{k,l} $ for $k=0,1,\ldots, p$ and $l=0,1, \ldots, q$. For the cases where $ k > p$ or $l > q$, $\hat \gamma_{k, l}^{AR}$ can be obtained by iterating the difference equation that characterizes the fitted 2D AR model. In the R software this procedure is automated for time series via the {\tt ARMAacf()} function; here we extend it to stationary data over random fields.
\noindent Estimating the `uniformizing' transformation $D_{\underline t}(\cdot)$ and the whitening transformation based on $ \Gamma_n$
allows us to estimate the transformation $H_n: \underline{Y}_{\underline t_n} \mapsto \underline \epsilon_{n}$. However, in order to put the Model-Free Prediction Principle to work, we also need to estimate the transformation $H_{n+1}$ (and its inverse). To do so, we need a positive definite estimator of
the matrix $ \Gamma_{n+1}$; this can be accomplished by extending the covariance matrix associated with the fitted 2D AR$(p,q)$ model to dimension $(n+1)\times (n+1)$, i.e., by calculating $\hat \Gamma_{n+1}^{AR}$.
\noindent
Consider the `augmented' vectors:
\begin{itemize}
\item $\underline{Y}_{\underline t_{n+1}}= (Y_{\underline t_1},\ldots,Y_{\underline t_n}, Y_{\underline t_{n+1}})'$,
\item $\underline{Z}_{\underline t_{n+1}}=(Z_{\underline t_1}, \ldots,Z_{\underline t_n}, Z_{\underline t_{n+1}})'$
and
\item $\underline \epsilon_{n+1}=(\epsilon_1,\ldots, \epsilon_n, \epsilon_{n+1})'$
\end{itemize}
where the values $Y_{\underline t_{n+1}}, Z_{\underline t_{n+1}}$ and $\epsilon_{n+1}$
are as yet unobserved.
We now show how to obtain
the inverse transformation $H_{n+1}^{-1}:
\underline \epsilon_{n+1} \mapsto
\underline{Y}_{\underline t_{n+1}} $. Recall that $\underline \epsilon_{n}$ and $
\underline{Y}_{\underline t_n } $ are related in a one-to-one way via the
transformation $H_{n}$, so the values $ Y_{\underline t_1},\ldots,Y_{\underline t_n}$
are obtainable as $\underline{Y}_{\underline t_n } =H_{n}^{-1}(\underline \epsilon_{n}).$
Hence, we just need to show how to create the unobserved
$Y_{\underline t_{n+1}}$ from $\underline \epsilon_{n+1} $; this is
done in the following three steps.
\begin{Algorithm} {
{\sc Generation of unobserved data point from future innovations}
\label{Pred_MF_Algo}
}
\begin{itemize}
\item [i.] Let
\begin{equation}
\label{RF.eq.CholPred}
\underline{Z}_{\underline t_{n+1}}= C_{n+1} \underline \epsilon_{n+1}
\end{equation}
where $ C_{n+1}$ is the (lower) triangular Cholesky factor of
(our positive definite estimate of) $ \Gamma_{n+1} $.
From the above, it follows that
\begin{equation}
\label{RF.eq.invtrans1}
{Z}_{\underline t_{n+1}}=\underline c_{n+1} \underline \epsilon_{n+1}
\end{equation}
where $\underline c_{n+1}=(c_1,\ldots,c_n,c_{n+1}) $ is
a row vector consisting of the last row of
matrix $ C_{n+1}$.
\item [ii.] Create the uniform random variable
\begin{equation}
\label{RF.eq.invtrans2}
{U}_{\underline t_{n+1}}= \Phi(Z_{\underline t_{n+1}}) .
\end{equation}
\item [iii.]
Finally, define
\begin{equation}
\label{RF.eq.invtrans3}
Y_{\underline t_{n+1}} = D_{\underline t_{n+1}}^{-1} ({U}_{\underline t_{n+1}});
\end{equation}
of course, in practice, the above will be based on an
estimate of $ D_{\underline t_{n+1}}^{-1}(\cdot)$.
\end{itemize}
\end{Algorithm}
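Steps i--iii of Algorithm \ref{Pred_MF_Algo} can be sketched as below, assuming an estimate of $\Gamma_{n+1}$ and of $D_{\underline t_{n+1}}^{-1}$ are supplied by the caller; the Gaussian CDF $\Phi$ is evaluated via the error function.

```python
import numpy as np
from math import erf, sqrt

def generate_next(eps, Gamma, D_inv):
    """Map future innovations eps (length n+1) to the unobserved
    Y_{t_{n+1}}.  Gamma is the (n+1)x(n+1) positive definite
    covariance estimate; D_inv is the (estimated) inverse of the
    'uniformizing' transformation."""
    C = np.linalg.cholesky(Gamma)          # lower triangular Cholesky factor
    z = C[-1, :] @ eps                     # step i: last row of C times eps
    u = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # step ii: U = Phi(Z)
    return D_inv(u)                        # step iii: Y = D^{-1}(U)
```

Only the last row of $C_{n+1}$ is needed, exactly as in eq.~\eqref{RF.eq.invtrans1}.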
\noindent
Since $\underline{Y}_{\underline t_n } $ has already been created using (the
first $n$ coordinates of) $\underline \epsilon_{n+1}$, the above completes
the construction of $\underline{Y}_{\underline t_{n+1}} $ based on $\underline \epsilon_{n+1}$,
i.e., the mapping $H_{n+1}^{-1}:
\underline \epsilon_{n+1} \mapsto \underline{Y}_{\underline t_{n+1}} $.
\subsection{Model-free point prediction}
\label{RF.sec.prediction intervals}
In the previous sections, it was shown how to construct the transformation
$H_n: \underline{Y}_{\underline t_n} \mapsto \underline \epsilon_{n}$ and its
inverse $H_{n+1}^{-1}:
\underline \epsilon_{n+1} \mapsto \underline{Y}_{\underline t_{n+1}} $,
where the random variables $\epsilon_{1}, \epsilon_2, \ldots$
are i.i.d. Note that by combining eq.~(\ref{RF.eq.invtrans1}), (\ref{RF.eq.invtrans2}) and (\ref{RF.eq.invtrans3}) we can write the formula:
$$
Y_{\underline t_{n+1}} = D_{\underline t_{n+1}}^{-1}\left( \Phi( \ \underline c_{n+1} \underline \epsilon_{n+1} )\right).
$$
Recall that $\underline c_{n+1} \underline \epsilon_{n+1}
= \sum_{i=1}^n c_i \epsilon_i+c_{n+1} \epsilon_{n+1} $;
hence, the above can be compactly denoted as
\begin{equation}
\label{RF.eq.pred.equation}
Y_{\underline t_{n+1}} = g_{n+1}(\epsilon_{n+1})
\ \ \mbox{where} \ \
g_{n+1}(x)=
D_{\underline t_{n+1}}^{-1}\left( \Phi \left( \ \sum_{i=1}^n c_i \epsilon_i+c_{n+1}x \right) \right).
\end{equation}
Eq.~\eqref{RF.eq.pred.equation} is the predictive equation required in the
Model-free Prediction Principle;
conditionally on $\underline{Y}_{\underline t_n } $, it can be used like a model
equation in computing the $L_2$-- and $L_1$--optimal point predictors of
$Y_{\underline t_{n+1}} $. We will give these in detail as part of the
\textcolor{black}{general algorithm for the construction of Model-free point predictors.}
\begin{Algorithm}
\label{RF.Algo.BasicMF}
{\sc Model-free (MF) point predictors for $Y_{\underline t_{n+1}} $}
\begin{enumerate}
\item Construct $U_{\underline t_1},\ldots, U_{\underline t_n}$ by eq.~(\ref{RF_unif.eq.modelT}) with
$D_{\underline t_n}(\cdot)$ estimated by either $\bar D_{\underline t_n}(\cdot)$, $\bar D_{\underline t_n}^{LLH}(\cdot)$
or $\bar D_{\underline t_n}^{LLM}(\cdot)$;
for all three types of estimators, use the respective formulas with $T=k$.
\item Construct $Z_{\underline t_1},\ldots, Z_{\underline t_n}$ by eq.~(\ref{RF_norm.eq.modelT}), and use the
methods of Section \ref{RF.seq.whiteningtransformation} to estimate
$\Gamma_n$ by
\textcolor{black}{$\hat \Gamma_n^{AR}$.}
\item
Construct $\epsilon_1,\ldots, \epsilon_n$ by eq.~(\ref{RF.eq.whitenfilterT}),
and let $\hat F_n$ denote their empirical distribution.
\item
The Model-free $L_2$--optimal point predictor of
$Y_{\underline t_{n+1}} $ is then
$$ \hat Y_{\underline t_{n+1}}= \int g_{n+1}(x ) \, d\hat F_n(x)
= \frac{1}{n} \sum_{i=1}^n g_{n+1}(\epsilon_i ), $$
where the function $g_{n+1}$ is defined in the
predictive equation \eqref{RF.eq.pred.equation},
with $D_{\underline t_{n+1}} (\cdot)$ again estimated by either $\bar D_{\underline t_{n+1}} (\cdot)$,
$ \bar D_{\underline t_{n+1}}^{LLH} (\cdot)$
or $ \bar D_{\underline t_{n+1}}^{LLM} (\cdot)$,
all
with $T=k$.
\item
The Model-free $L_1$--optimal point predictor of
$Y_{\underline t_{n+1}} $ is given by the median of the set
$\{ g_{n+1}(\epsilon_i ) : i=1,\ldots, n\}$.
\end{enumerate}
\end{Algorithm}
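Once the innovations and the predictive function $g_{n+1}$ are in hand, steps 4--5 reduce to simple averaging; a minimal sketch (the function names are ours):

```python
import numpy as np

def mf_point_predictors(eps, g):
    """Given the estimated i.i.d. innovations eps_1,...,eps_n and the
    predictive function g_{n+1}, the L2-optimal predictor is the
    average of g(eps_i) and the L1-optimal predictor is their median."""
    vals = np.array([g(e) for e in eps])
    return vals.mean(), np.median(vals)
```

The $L_2$ predictor approximates $\int g_{n+1}(x)\,d\hat F_n(x)$ since $\hat F_n$ puts mass $1/n$ on each $\epsilon_i$.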
\vskip .13in
\noindent
Algorithm \ref{RF.Algo.BasicMF} uses the construction of
$\bar D_{\underline t_k}(\cdot)$,
$\bar D_{\underline t_k}^{LLH}(\cdot)$
or $\bar D_{\underline t_k}^{LLM}(\cdot)$
with $T=k$; using $T=k-1$ instead
leads to the {\it predictive} version of the algorithm.
\vskip .173in
\begin{Algorithm}
\label{RF.Algo.PMF}
{\sc Predictive Model-free (PMF) predictors for $Y_{\underline t_{n+1}} $} \\
The algorithm is identical to Algorithm \ref{RF.Algo.BasicMF}
except for using $T=k-1$ instead of $T=k $ in the construction
of $\bar D_{\underline t_k}(\cdot)$, $\bar D_{\underline t_k}^{LLH}(\cdot)$ and $\bar D_{\underline t_k}^{LLM}(\cdot)$.
\end{Algorithm}
\vskip .13in
\section{Random Fields cross-validation}
\label{RF.cross-validation}
To choose the bandwidth $b$ for either
model-based or model-free point prediction,
predictive cross-validation may be used,
but it must be adapted to the random field prediction setting, i.e., always
one-step-ahead. To elaborate, let $k<n$, and suppose only the
subseries $Y_{\underline t_1},\ldots, Y_{\underline t_k}$ has been observed.
Denote by
$\hat Y_{\underline t_{k+1}}$ the best predictor of $Y_{\underline t_{k+1}}$ based on the
data $Y_{\underline t_1},\ldots, Y_{\underline t_k}$, constructed according to the
above methodology and some choice of $b$. However, since $Y_{\underline t_{k+1}}$ is known, the
quality of the predictor can be assessed. So, for each value of $b$
over a reasonable range, we can form either
$PRESS(b)=\sum_{k=k_o}^{n-1} ( \hat Y_{\underline t_{k+1}} - Y_{\underline t_{k+1}})^2 $
or $PRESAR(b)=\sum_{k=k_o}^{n-1} | \hat Y_{\underline t_{k+1}} - Y_{\underline t_{k+1}}| $; here
$k_o$ should be big enough so that estimation is accurate,
e.g., $k_o$ can be of the order of $\sqrt{n}$.
The cross-validated bandwidth choice would then be the $b$ that
minimizes $PRESS(b)$; alternatively, we can choose to minimize $PRESAR(b)$ if
an $L_1$ measure of loss is preferred.
Finally, note that a quick-and-easy (albeit suboptimal) version of the above is to
use the predictor $\hat Y_{\underline t_{k+1}}\simeq \hat \mu (\underline t_{k+1})$
and base $PRESS(b) $ or $PRESAR(b)$ on this approximation.
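The predictive cross-validation criterion can be sketched as below, using one-dimensional indexing for brevity; in the random field setting the ``first $k$ observations'' would be taken in the NSHP ordering, and the predictor routine is an assumption supplied by the caller.

```python
def press(b, Y, predictor, k0):
    """PRESS(b): sum of squared one-step-ahead prediction errors over
    k = k0,...,n-1, where predictor(Y[:k], b) returns the prediction
    of Y[k] from the first k observations with bandwidth b.  For an
    L1 criterion (PRESAR), replace the square by an absolute value."""
    n = len(Y)
    return sum((predictor(Y[:k], b) - Y[k]) ** 2 for k in range(k0, n))
```

The cross-validated bandwidth is the $b$ minimizing this criterion over a grid of candidate values.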
For the problem of selecting $h_0$ in the case of model-free point predictors, as in \cite{Politis2013}, our final choice is $h_0=h^2$ where $h=b/n$.
Note that an initial choice of $h_0$
(needed to perform the uniformization and the cross-validation that determines the optimal bandwidth $b$)
can be set by any plug-in rule; the effect of this initial value of $h_0$
is minimal.
\section{Model-Free vs.\ Model-Based Inference:
empirical comparisons}
\label{RF.Numerical}
The performance of the Model-Free and Model-Based predictors described above is empirically compared on
simulated and real-life data
in terms of point prediction.
The Model-Based local constant and local linear methods, described in Section \ref{RF.Model-based inference}, are denoted MB-LC and MB-LL, respectively. The Model-Free methods using the local constant, local linear (Hansen) and local linear (monotone) estimators, described in Section \ref{RF.Model-free inference},
are denoted MF-LC, MF-LLH and MF-LLM, respectively. Point prediction performance as measured by
Mean Squared Error (MSE) is used to compare the estimators.
\subsection{Simulation: Additive model with stationary 2-D AR errors}
Let a random field be generated using the 2-D AR process
\begin{equation}
y_{t_1, t_2} = 0.25\, y_{t_1-1,t_2-1} + 0.2\, y_{t_1-1,t_2+1} - 0.05\, y_{t_1-2,t_2} + v_{t_1, t_2}.
\end{equation}
The field is generated over the region defined by $0 \leq t_1 \leq n_1$ and $0 \leq t_2 \leq n_2$, where $n_1=n_2=101$. The NSHP limits are set from $(101,101)$ to $(50,50)$; this defines the region $E_{\underline t, \underline n}$ shown in Figure \ref{NSHP_pred}. The data $Y_{\underline t}$ are generated using the additive model in eq.~\eqref{RF.eq.model homo} with trend $\mu(\underline t) = \mu(t_1, t_2) = \sin (4\pi \frac{t_2-1}{n_2-1})$ for $0 \leq t_1 \leq n_1$ and $0 \leq t_2 \leq n_2$. Here the $v_{t_1, t_2}$ are i.i.d. $N(0,\tau^2)$ with $\tau=0.1$. Point prediction is performed at $t_1=50$, $t_2=50$. Bandwidths for estimating the trend are calculated using the cross-validation of Section \ref{RF.cross-validation}, for both the Model-Based and Model-Free cases.
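The simulation design above can be sketched as follows. This is an illustrative script, not the authors' code; in particular, the zero boundary condition for lags falling outside the grid is our assumption.

```python
import numpy as np

def simulate_field(n1, n2, tau=0.1, seed=0):
    """Simulate the 2-D AR error field plus sinusoidal trend of the
    simulation section.  Lags outside the grid are treated as zero
    (an assumption)."""
    rng = np.random.default_rng(seed)
    y = np.zeros((n1 + 1, n2 + 1))
    for t1 in range(n1 + 1):
        for t2 in range(n2 + 1):
            a = y[t1 - 1, t2 - 1] if t1 >= 1 and t2 >= 1 else 0.0
            b = y[t1 - 1, t2 + 1] if t1 >= 1 and t2 + 1 <= n2 else 0.0
            c = y[t1 - 2, t2] if t1 >= 2 else 0.0
            y[t1, t2] = 0.25 * a + 0.2 * b - 0.05 * c \
                        + tau * rng.standard_normal()
    # additive trend mu(t1, t2) = sin(4*pi*(t2 - 1)/(n2 - 1))
    t2_grid = np.arange(n2 + 1)
    return y + np.sin(4.0 * np.pi * (t2_grid - 1) / (n2 - 1))
```

Note that the lag $(t_1-1, t_2+1)$ looks ahead within the previous row, so the field must be generated row by row in increasing $t_1$.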
Results for point prediction in terms of mean squared error (MSE) for all MB and MF methods are shown
in Table \ref{pp_2D_AR}; a total of 100 realizations of the dataset were used to measure point prediction performance. From this table it can be seen that MB-LL is the best point predictor. This is expected, since the data were generated by a 2D AR model, the same model used in MB-LL prediction. In addition, estimation is performed at the boundary of the random field in the presence of a strong linear trend, as shown in Figure \ref{NSHP_linear_trend}, where LL regression is expected to perform best. It can also be observed that MF-LLM performs best among the MF point predictors and approaches the performance of MB-LL. This shows that the monotonicity correction in the LLM distribution estimator has minimal effect on the center of the distribution, which is what matters for point prediction.
{\begin{figure}
\caption{Linear trend for the NSHP at the point $(50, 50)$ where prediction is performed}
\label{NSHP_linear_trend}
\end{figure}}
\begin{table}[!htbp]
\centering
\caption{Point Prediction performance for 2-D AR dataset}
\label{pp_2D_AR}
\scalebox{1.20}{
\begin{tabular}{|c|c|c|}
\hline
Prediction Method & Residual Type & MSE\\
\hline
MB-LC & P & {1.488e-02}\\
\hline
& F & {1.520e-02}\\
\hline
MB-LL & P & {1.393e-02}\\
\hline
& F & {1.400e-02}\\
\hline
MF-LC & P & {1.530e-02}\\
\hline
& F & {1.549e-02}\\
\hline
MF-LLH & P & {1.471e-02}\\
\hline
& F & {1.515e-02}\\
\hline
MF-LLM & P & {1.414e-02}\\
\hline
& F & {1.456e-02}\\
\hline
\end{tabular}}
\end{table}
\subsection{Real-life example: CIFAR images}
The CIFAR-10 dataset \cite{krizhevsky2009cifar} is used as a real-life example to compare the model-based and model-free prediction algorithms discussed above. The original CIFAR-10 dataset consists of 60000 $32\times 32$ color images in 10 classes, with 6000 images per class. We pick 100 images from the class ``dog'', where the original images have 3 RGB (red, green, blue) channels with discrete pixel values. We pick the R (red) channel of each image and standardize these to generate a new real-valued dataset; the final transformed dataset thus consists of 100 random fields of size $32\times 32$. The NSHP limits are set from $(32,32)$ to $(16,16)$; this defines the region $E_{\underline t, \underline n}$ shown in Figure \ref{NSHP_pred}. The rest of each image is considered occluded, and its pixel values are not available for prediction. Sample images used for prediction are shown in Figure \ref{CIFAR10_dog}. Point prediction is performed at $t_1=16$, $t_2=16$. Bandwidths for estimating the trend are calculated using the cross-validation of Section \ref{RF.cross-validation}, for both the Model-Based and Model-Free cases.
Results for point prediction in terms of mean squared error (MSE) for all MB and MF methods are shown
in Table \ref{pp_CIFAR}.
From this table it can be seen that MF-LLH and MF-LLM are the best point predictors. We attribute this to the fact that the CIFAR-10 image data are not compatible with the additive model given by eq.~\eqref{RF.eq.model homo}. It can also be seen that, unlike for the synthetic 2D AR dataset, the two best predictors MF-LLH and MF-LLM are much closer in performance, owing to the lack of a linear trend at the point where prediction is performed. Lastly, for point prediction there is a difference in performance between fitted and predictive residuals for some estimators, which was not the case with the synthetic dataset discussed before. This is due to finite-sample effects, as the CIFAR image random field is smaller in size and we use only part of it for our one-sided prediction.
{\begin{figure}
\caption{Sample images from the CIFAR-10 dataset with label ``dog''. (Note: full images are shown although only part of each is used for prediction.)}
\label{CIFAR10_dog}
\end{figure}}
\begin{table}[!htbp]
\centering
\caption{Point Prediction performance for CIFAR-10 dataset}
\label{pp_CIFAR}
\scalebox{1.20}{
\begin{tabular}{|c|c|c|}
\hline
Prediction Method & Residual Type & MSE\\
\hline
MB-LC & P & {1.98e-01}\\
\hline
& F & {2.20e-01}\\
\hline
MB-LL & P & 1.79e-01\\
\hline
& F & 1.95e-01\\
\hline
MF-LC & P & {1.79e-01}\\
\hline
& F & {2.12e-01}\\
\hline
MF-LLH & P & {1.60e-01}\\
\hline
& F & {1.89e-01}\\
\hline
MF-LLM & P & {1.64e-01}\\
\hline
& F & {1.70e-01}\\
\hline
\end{tabular}}
\end{table}
\section{Conclusions and Future Work}
\label{RF.Conclusions}
In this paper we investigated the problem of one-sided prediction for random fields that are stationary only over a limited part of their region of definition. For such locally stationary random fields we developed frameworks for point prediction both using a model-based approach, which allows for a coordinate-dependent trend and/or variance, and using the model-free principle proposed by \cite{Politis2013} and \cite{politis2015model}. We applied our algorithms to synthetic data as well as to a real-life dataset consisting of images from the CIFAR-10 dataset. In the latter case we obtained the best performance using the model-free approach, demonstrating its advantage over the model-based case, where an additive model is assumed arbitrarily for purposes of prediction. In future work we plan to investigate both model-based and model-free prediction for random fields with non-uniform spacing of data, as well as to extend our algorithms to the construction of prediction intervals.
\vskip .1in
\noindent
{\bf Acknowledgements} \\
This research was partially supported by NSF grant DMS 19-14556. The authors would like to acknowledge the Pacific Research Platform, NSF Project ACI-1541349, and Larry Smarr (PI, Calit2 at UCSD)
for providing the computing infrastructure used in this project.
\end{document}
\begin{document}
\begin{abstract}
We study the homomorphism spaces between Specht modules for the Hecke algebras $\mathcal{H}$ of type $A$. We prove a cellular analogue of the kernel intersection theorem and a $q$-analogue of a theorem of Fayers and Martin, and apply these results to give an algorithm which computes the homomorphism spaces $\Hom_{\mathcal{H}}(S^\nu,S^\lambda)$ for certain pairs of partitions $\lambda$ and $\nu$. We give an explicit description of the homomorphism spaces $\Hom_{\mathcal{H}}(S^\nu,S^\lambda)$ where $\mathcal{H}$ is an algebra over the complex numbers, $\lambda=(\lambda_1,\lambda_2)$ and $\nu$ is an arbitrary partition with $\nu_1 \geq \lambda_2$.
\end{abstract}
\maketitle
\section{Introduction}
The Hecke algebras $\mathcal{H}=\mathcal{H}_{F,q}(\mathfrak{S}_n)$ of the symmetric groups are classical objects of study and the most important open problem in their representation theory is to determine the structure of the Specht modules $S^\lambda$, where $\lambda$ is a partition of $n$. In this area there are many obvious questions that remain unanswered. For example, we rarely know the composition factors of $S^\lambda$ or their multiplicities. However, information on the Specht modules may be obtained by computing $\Hom_{\mathcal{H}}(S^\nu,S^\lambda)$ for $\nu$ and $\lambda$ partitions of $n$.
An approach to this problem using the kernel intersection theorem was suggested by James, who gave an easy classification of $\Hom_{F\mathfrak{S}_n}(S^\nu, S^{(n)})$~\cite[Theorem 24.4]{James}. This approach has subsequently been developed. In particular, results of Fayers and Martin~\cite{FayersMartin:homs} have given us techniques to compute $\Hom_{F\mathfrak{S}_n}(S^\nu, S^{\lambda})$ for more general $\lambda$ (which they used in the same paper to give an elementary proof of the Carter-Payne theorem). In this paper, we extend the most useful of their results to the Hecke algebra $\mathcal{H}$. This enables us to give an algorithm, easily implemented on a computer, which will compute certain homomorphism spaces. Using this method we completely classify the homomorphism spaces $\Hom_{\mathcal{H}}(S^\nu,S^\lambda)$ where $\lambda$ has at most two parts, $\nu_1 \geq \lambda_2$ and $\mathcal{H}$ is defined over a field of characteristic zero.
The paper is organised as follows. We begin with the definition of the Hecke algebras and some background discussion. We then state our main results and give some examples and applications. The proofs of the main results, Theorem~\ref{hdtthm}, Theorem~\ref{Lemma7} and Propositions~\ref{first} to~\ref{last} are deferred to the next section; in fact, we give only an indication of the proof of the last propositions, for reasons we discuss in Section~\ref{Spaces}. We end with a brief discussion about homomorphisms between the Specht modules of Dipper and James.
\section{Main results}
\subsection{The Hecke algebras of type $A$}
The definitions in this section are standard and may all be found in the book of Mathas~\cite{M:ULect}.
For each integer $n\geq 0$, let $\mathfrak{S}_n$ be the symmetric group on $n$ letters. If $R$ is a ring and $q$ an invertible element of $R$ then the Hecke algebra
$\mathcal{H}=\mathcal{H}_{R,q}(\mathfrak{S}_n)$ is defined to be the unital associative
$R$-algebra with generators
$T_1,\dots,T_{n-1}$ subject to the relations
\begin{align*}
T_i T_j & = T_j T_i, && 1\leq i < j-1\leq n-2, \\
T_iT_{i+1}T_i &= T_{i+1}T_i T_{i+1}, && 1 \leq i \leq n-2, \\
(T_i+1)(T_i-q)&=0, &&1 \leq i \leq n-1,
\end{align*}
so that if $q=1$ then $\mathcal{H} \cong R\mathfrak{S}_n$. If $w \in \mathfrak{S}_n$ can be written as
$w=(i_1,i_1+1)\ldots(i_k,i_k+1)$ where $k$ is minimal, we define $T_w = T_{i_1} \ldots T_{i_k}$.
Then $\mathcal{H}$ is a free $R$-module with basis $\{T_w \mid w \in \mathfrak{S}_n\}$.
An expression $w=(i_1,i_1+1)\ldots(i_k,i_k+1)$ with $k$ minimal is known as a reduced expression for $w$ and we define the length of $w$ by $\ell(w)=k$.
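As a computational aside (not part of the development here), the length $\ell(w)$ coincides with the number of inversions of $w$, which makes it easy to evaluate; a minimal sketch, with permutations written in one-line notation:

```python
def coxeter_length(w):
    """Length of a permutation w, given in one-line notation as a
    sequence of the values 1..n: the minimal number of adjacent
    transpositions needed to write w, which equals the number of
    inversions, i.e. pairs i < j with w[i] > w[j]."""
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if w[i] > w[j])
```

For instance, the longest element of $\mathfrak{S}_3$, namely $(3,2,1)$ in one-line notation, has length $3$.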
Recall that a composition of $n\geq 0$ is a sequence $\nu=(\nu_1,\nu_2,\ldots,\nu_l)$ of non-negative integers that sum to $n$, and a partition is a composition with the additional property that $\nu_1 \geq \nu_2 \geq \ldots \geq \nu_l$. If $\nu$ is a partition of $n$, we write $\nu \vdash n$.
We define a partial order $\unrhd$ on the set of compositions of $n$ by saying that $\lambda \unrhd \nu$ if
\[ \sum_{i=1}^j \lambda_i \geq \sum_{i=1}^j \nu_i\]
for all $j$. If $\lambda \unrhd \nu$ and $\lambda \neq \nu$, we write $\lambda \rhd \nu$.
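The dominance order is easy to test computationally; a short sketch (padding with zeros is our convention for comparing compositions of different lengths):

```python
def dominates(lam, nu):
    """Dominance order on compositions of n: lam dominates nu iff
    every partial sum of lam is at least the corresponding partial
    sum of nu."""
    m = max(len(lam), len(nu))
    lam = list(lam) + [0] * (m - len(lam))
    nu = list(nu) + [0] * (m - len(nu))
    s_lam = s_nu = 0
    for a, b in zip(lam, nu):
        s_lam += a
        s_nu += b
        if s_lam < s_nu:
            return False
    return True
```

For example, $(3,1) \unrhd (2,2)$ since $3 \geq 2$ and $4 \geq 4$, while $(2,2) \not\unrhd (3,1)$.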
Let $\nu$ be a composition of $n$. Define the corresponding Young diagram $[\nu]$ by
\[[\nu] = \{(r,c) \mid 1 \leq c \leq \nu_r \}.\]
A $\nu$-tableau $\mathsf{T}$ is a map $\mathsf{T}: [\nu] \rightarrow \{1,2,\ldots\}$; we think of this as a way of replacing the nodes $(r,c)$ of $[\nu]$ with the integers $1,2,\ldots$ and so may talk about the rows and columns of $\mathsf{T}$. (Note that we use the English convention for writing our diagrams.) Let $\mathcal{T}(\nu)$ be the set of $\nu$-tableaux in which each integer $1,2,\ldots,n$ appears exactly once, and let $\mathfrak{t}^\nu \in \mathcal{T}(\nu)$ be the tableau with $\{1,2,\ldots,n\}$ entered in order along the rows of $[\nu]$ from left to right and top to bottom. If $\mathfrak{t} \in \mathcal{T}(\nu)$ and $I \subseteq \{1,2,\ldots,n\}$, we say that $I$ is in row-order in $\mathfrak{t}$ if for all $i,j \in I$ with $i<j$, either $j$ lies in a lower row than $i$ (that is, a row of higher index), or $i$ and $j$ lie in the same row with $j$ to the right of $i$. Then $\mathfrak{t}^\nu$ is the unique tableau in $\mathcal{T}(\nu)$ with $\{1,2,\ldots,n\}$ in row-order.
The symmetric group $\mathfrak{S}_n$ acts on the right on the elements of $\mathcal{T}(\nu)$ by permuting the entries in each tableau. If $\mathfrak{t} \in \mathcal{T}(\nu)$, let $d(\mathfrak{t})$ be the permutation such that $\mathfrak{t} = \mathfrak{t}^\nu d(\mathfrak{t})$. Let $\mathfrak{S}_{\nu}$ denote the row-stabilizer of $\mathfrak{t}^\nu$, that is, the set of all permutations $w$ such that each $i \in \{1,2,\ldots,n\}$ lies in the same row of $\mathfrak{t}^\nu$ as $\mathfrak{t}^\nu w$.
We say that $\mathfrak{t} \in \mathcal{T}(\nu)$ is row-standard if the entries increase along the rows, and standard if $\nu$ is a partition and the entries increase both along the rows and down the columns. Let $\rowstd(\nu) \subseteq \mathcal{T}(\nu)$ denote the set of row-standard $\nu$-tableaux and, if $\nu$ is a partition, let $\mathsf{S}td(\nu) \subseteq \rowstd(\nu)$ denote the set of standard $\nu$-tableaux.
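The standardness condition is directly checkable; an illustrative sketch (it tests only the increase conditions, not that the entries form the set $\{1,\ldots,n\}$ or that the shape is a partition):

```python
def is_standard(t):
    """Check that a tableau, given as a list of rows of integers,
    has entries strictly increasing along rows and down columns."""
    # rows must be strictly increasing left to right
    for row in t:
        if any(row[c] >= row[c + 1] for c in range(len(row) - 1)):
            return False
    # columns must be strictly increasing top to bottom
    for r in range(len(t) - 1):
        for c in range(len(t[r + 1])):
            if t[r][c] >= t[r + 1][c]:
                return False
    return True
```

For the shape $(2,2)$ with $n=4$ this singles out the two standard tableaux out of the $4!$ fillings.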
Define
\[m_\nu = \sum_{w \in \mathfrak{S}_\nu} T_w,\]
and let $M^\nu$ be the right $\mathcal{H}$-module
\[M^\nu = m_\nu \mathcal{H}.\]
Define $\ast:\mathcal{H} \rightarrow \mathcal{H}$ to be the anti-isomorphism determined by $T_{i}^\ast = T_{i}$,
and if $\mathfrak{s},\mathfrak{t} \in \rowstd(\nu)$ define
\[m_{\mathfrak{s}\mathfrak{t}}=T_{d(\mathfrak{s})}^\ast m_\nu T_{d(\mathfrak{t})}.\]
Then
\[\{m_{\mathfrak{s}\mathfrak{t}} \mid \mathfrak{s},\mathfrak{t} \in \mathsf{S}td(\lambda) \text{ for some } \lambda \vdash n\}\]
is a cellular basis of $\mathcal{H}$ with respect to the partial order $\unrhd$ and the anti-isomorphism $\ast$. In accordance with the theory of cellular algebras, if $\lambda \vdash n$ we define $\mathcal{H}^{\rhd\lambda}$ to be the free $R$-module with basis
\[\{m_{\mathfrak{s}\mathfrak{t}} \mid \mathfrak{s},\mathfrak{t} \in \mathsf{S}td(\nu) \text{ for some } \nu \vdash n \text{ such that } \nu \rhd \lambda\};\]
then $\mathcal{H}^{\rhd\lambda}$ is a two-sided ideal of $\mathcal{H}$. Following Graham and Lehrer~\cite{GL}, we define the cell module $S^\lambda$, also known as a Specht module, to be the right $\mathcal{H}$-module
\[S^\lambda = (\mathcal{H}^{\rhd\lambda} + m_\lambda)\mathcal{H}\]
and define $\pi_\lambda:M^\lambda \rightarrow S^\lambda$ to be the natural projection determined by $\pi_\lambda(m_\lambda)=\mathcal{H}^{\rhd\lambda} + m_\lambda$.
These Specht modules are the main objects of interest in the study of the representation theory of the Hecke algebras $\mathcal{H}$ and the symmetric groups $\mathfrak{S}_n$. One of the most important open problems in representation theory is to determine the decomposition matrices of the Hecke algebra $\mathcal{H}$, that is, to compute the composition factors of the Specht modules. In this paper, we study a closely related problem: we consider homomorphisms between Specht modules $S^\nu$ and $S^\lambda$, for $\nu$ and $\lambda$ partitions of $n$.
\subsection{Homomorphisms between Specht modules} \label{HomSection}
Suppose $\lambda$ is a partition of $n$ and let $\mathsf{T}$ be a $\lambda$-tableau. Say that $\mathsf{T}$ is of type $\mu$ if $\mu$ is the composition such that each integer $i \geq 1$ appears $\mu_i$ times in $\mathsf{T}$.
Let $\mathcal{T}(\lambda,\mu)$ denote the set of $\lambda$-tableaux of type $\mu$. We say that $\mathsf{S} \in \mathcal{T}(\lambda,\mu)$ is row-standard if the entries are non-decreasing along the rows, and semistandard if it is row-standard and the entries are strictly increasing down the columns. Let $\mathcal{T}_{\text{r}}(\lambda,\mu)\subseteq \mathcal{T}(\lambda,\mu)$ denote the set of row-standard $\lambda$-tableaux of type $\mu$ and let $\mathcal{T}_0(\lambda,\mu) \subseteq \mathcal{T}_{\text{r}}(\lambda,\mu)$ denote the set of semistandard $\lambda$-tableaux of type $\mu$.
If $\mathfrak{s} \in \mathcal{T}(\lambda)$, define $\mu(\mathfrak{s}) \in \mathcal{T}(\lambda,\mu)$ to be the tableau obtained by replacing each integer $i\geq 1$ with its row index in $\mathfrak{t}^\mu$.
Suppose that $\lambda$ is a partition of $n$ and that $\mu$ is a composition of $n$.
If $\mathsf{S}\in \mathcal{T}_{\text{r}}(\lambda,\mu)$, define $\Theta_\mathsf{S}:M^\mu \rightarrow S^\lambda$ to be the homomorphism determined by
\[\Theta_\mathsf{S}(m_\mu) = \mathcal{H}^{\lambda}+\sum_{{\mathfrak{s} \in \rowstd(\lambda) \atop \mu(\mathfrak{s})=\mathsf{S}}}m_\lambda T_{d(\mathfrak{s})}. \]
Let $\EHom_{\mathcal{H}}(M^\mu,S^\lambda)$ be the subspace of $\Hom_{\mathcal{H}}(M^\mu,S^\lambda)$ consisting of homomorphisms $\Theta$ such that $\Theta=\pi_\lambda \circ \tilde\Theta$ for some $\tilde\Theta:M^\mu \rightarrow M^\lambda$. By construction, if $\mathsf{S}\in \mathcal{T}_{\text{r}}(\lambda,\mu)$ then $\Theta_\mathsf{S} \in \EHom_{\mathcal{H}}(M^\mu,S^\lambda)$.
If $e\geq 2$, say that a partition $\lambda$ is $e$-restricted if $\lambda_i - \lambda_{i+1} <e$ for all $i$.
\begin{theorem}[{\cite[Corollary 8.7]{DJ:qWeyl}}] \label{SSBasis}
The maps
\[ \{\Theta_\mathsf{S} \mid \mathsf{S} \in \mathcal{T}_0(\lambda,\mu)\}\]
form a basis of $\EHom_{\mathcal{H}}(M^\mu,S^\lambda)$. Furthermore, unless $q=-1$ and $\lambda$ is not 2-restricted,
\[\EHom_{\mathcal{H}}(M^\mu,S^\lambda)= \Hom_{\mathcal{H}}(M^\mu, S^\lambda).\]
\end{theorem}
Note that this implies that if $\EHom_{\mathcal{H}}(M^\mu,S^\lambda) \neq \{0\}$ then $\lambda \unrhd \mu$.

Now suppose $\mu$ is a partition and let $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)$ be the set of maps $\bar\Theta \in \Hom_{\mathcal{H}}(S^\mu,S^\lambda)$ with the property that $\bar\Theta \circ \pi_\mu \in \EHom_{\mathcal{H}}(M^\mu,S^\lambda)$. Again, $\EHom_{\mathcal{H}}(S^\mu,S^\lambda) = \Hom_{\mathcal{H}}(S^\mu,S^\lambda)$ unless $q=-1$ and $\lambda$ is not 2-restricted. Apart from the fact that our techniques are well adapted to determining $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)$, we have another reason to want to study this space.
\begin{theorem}[{\cite[Corollary 8.6]{DJ:qWeyl}}] \label{Weyl}
Let $\mathcal{S}_q$ denote the $q$-Schur algebra and let $\Delta^\mu$ and $\Delta^\lambda$ be the Weyl modules corresponding to the partitions $\mu$ and $\lambda$. Then
\[\EHom_{\mathcal{H}}(S^\mu,S^\lambda) \cong_R \Hom_{\mathcal{S}_q}(\Delta^\mu,\Delta^\lambda).\]
\end{theorem}
Fix a pair of partitions $\lambda$ and $\mu$ of $n$. We want to compute $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)$. If $\bar\Theta \in \EHom_{\mathcal{H}}(S^\mu,S^\lambda)$ then $\bar\Theta$ can be pulled back to give a homomorphism $\Theta \in \EHom_{\mathcal{H}}(M^\mu,S^\lambda)$. Conversely, $\Theta \in \EHom_{\mathcal{H}}(M^\mu,S^\lambda)$ factors through $S^\mu$ if and only if $\Theta(m_\mu h)=0$ for all $h \in \mathcal{H}$ such that $m_\mu h \in \mathcal{H}^{\mu}$.
\begin{diagram}
M^\mu &\rTo^{\pi_\mu} & S^\mu \\
&\rdTo_{\Theta} & \dTo_{\bar\Theta} \\
&& S^\lambda
\end{diagram}
We would therefore like to make it easier to check this condition.
If $\eta=(\eta_1,\eta_2,\ldots,\eta_l)$ is any composition and $k \geq 0$, let $\bar\eta_k=\sum_{i=1}^k \eta_i$ and $\bar{\eta}=\sum_{i \geq 1} \eta_i$. For $m\geq 0$, let $\mathfrak{S}_{\{m+1,\ldots,m+\bar\eta\}}$ denote the symmetric group on the letters $m+1,\ldots,m+\bar\eta$ and let $\mathcal{D}_{m,\eta}$ be the set of minimal length right coset representatives of $\mathfrak{S}_{(m,\eta_1,\dots,\eta_l)}\cap \mathfrak{S}_{\{m+1,\ldots,m+\bar\eta\}}$ in $\mathfrak{S}_{\{m+1,\ldots,m+\bar\eta\}}$. (Hence if $\mathfrak{t}$ is the $\eta$-tableau with the numbers $m+1,\ldots,m+\bar{\eta}$ entered in order along its rows then $w \in \mathcal{D}_{m,\eta}$ if and only if $\mathfrak{t} w$ is row-standard.) Set
\[\mathcal{C}(m;\eta)=\mathcal{C}(m;\eta_1,\eta_2,\ldots,\eta_l)=\sum_{w \in \mathcal{D}_{m,\eta}} T_w.\]
If $\mu=(\mu_1,\mu_2,\ldots,\mu_b)$ then for $1 \leq d < b$ and $1 \leq t \leq \mu_{d+1}$, define
\[h_{d,t}=\mathcal{C}(\bar\mu_{d-1};\mu_d,t).\]
\begin{ex}
Let $\mu = (3,2,2)$. Then
\begin{align*}
h_{1,1} & = I + T_3 + T_3T_2 + T_3T_2T_1, \\
h_{1,2} & = I + T_3 + T_3T_2 + T_3T_2T_1 + T_3T_4 + T_3T_2T_4 + T_3 T_2 T_1 T_4 + T_3T_2T_4T_3+ T_3T_2T_1T_4T_3 + T_3T_2T_1T_4T_3T_2, \\
h_{2,1} &= I + T_5 + T_5T_4, \\
h_{2,2} & = I + T_5 + T_5T_4 +T_5 T_6 +T_5T_4T_6 +T_5T_4T_6T_5.
\end{align*}
\end{ex}
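The number of summands $T_w$ appearing in $h_{d,t}$ is the number of minimal length coset representatives, that is, the multinomial coefficient $(\mu_d+t)!/(\mu_d!\,t!)$. The following Python sketch is our own cross-check (the function names are ours, not notation from the paper): it counts the representatives both by this formula and by brute-force enumeration of rearrangements, and recovers the term counts $4$, $10$, $3$ and $6$ of the four sums displayed above.

```python
from math import factorial
from itertools import permutations

def num_coset_reps(eta):
    """|D_{m,eta}|: the number of minimal-length right coset representatives
    of the Young subgroup S_eta inside the symmetric group on bar(eta)
    letters, i.e. the multinomial coefficient bar(eta)! / (eta_1! ... eta_l!)."""
    n = factorial(sum(eta))
    for part in eta:
        n //= factorial(part)
    return n

def num_coset_reps_brute(eta):
    """Cross-check: the representatives correspond to the distinct
    rearrangements of a word containing eta_i copies of the letter i."""
    word = [i for i, part in enumerate(eta) for _ in range(part)]
    return len(set(permutations(word)))

def h_terms(mu, d, t):
    """Number of T_w summands in h_{d,t} = C(bar mu_{d-1}; mu_d, t)."""
    return num_coset_reps((mu[d - 1], t))
```

For $\mu=(3,2,2)$ this gives $4$, $10$, $3$ and $6$ summands for $h_{1,1}$, $h_{1,2}$, $h_{2,1}$ and $h_{2,2}$ respectively.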
\begin{theorem} \label{hdtthm}
Let $\mathcal{I}$ be the right ideal generated by $\{m_\mu h_{d,t} \mid 1 \leq d <b \text{ and } 1 \leq t \leq \mu_{d+1}\}$. Then
\[\mathcal{I}=M^\mu \cap \mathcal{H}^{\mu}.\]
\end{theorem}
We prove Theorem~\ref{hdtthm} in Section~\ref{hdtproof}.
\begin{corollary} \label{simple}
Suppose that $\Theta:M^\mu \rightarrow S^\lambda$. Then $\Theta(m_\mu h)=0$ for all $h \in \mathcal{H}$ such that $m_\mu h \in \mathcal{H}^{\mu}$ if and only if $\Theta(m_\mu h_{d,t})=0$ for all $1 \leq d<b$ and $1 \leq t \leq \mu_{d+1}$.
\end{corollary}
\begin{remark}
We have chosen to work with the Specht modules which arise as the cell modules for the Murphy basis, rather than with the Specht modules of Dipper and James. This is consistent, for example, with the work of Corlett on homomorphisms between Specht modules for the Ariki-Koike algebras~\cite{Corlett}. As such, the kernel intersection theorem~\cite[Theorem~3.6]{DJ:qWeyl} does not apply, and Theorem~\ref{hdtthm} takes its place. In Section~\ref{DJSpecht}, we show that working in either setting gives the same results.
\end{remark}
We have shown that determining $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)$ is equivalent to finding
\[\Psi(\mu,\lambda)=\{\Theta \in \EHom_{\mathcal{H}}(M^\mu,S^\lambda) \mid \Theta(m_\mu h_{d,t})=0 \text{ for all } 1 \leq d<b, \;1 \leq t \leq \mu_{d+1}\}, \]
bearing in mind that $\EHom_{\mathcal{H}}(M^\mu,S^\lambda)$ has a basis indexed by the semistandard $\lambda$-tableaux of type $\mu$.
If $\mathsf{S}$ is a tableau and $X$ and $Y$ are sets of positive integers, we define $\mathsf{S}^X_Y$ to be the number of entries of $\mathsf{S}$ which lie in row $r$ for some $r\in Y$ and are equal to some $x \in X$. We further abbreviate this notation by setting $\mathsf{S}^{\le x}_{>r}=\mathsf{S}_{(r,\infty)}^{[1,x]}$, $\mathsf{S}^x_r=\mathsf{S}^{\{x\}}_{\{r\}}$, and so on.
Now if $m \geq 0$ define
\[ [m] = 1+q+\ldots + q^{m-1} \in R.\]
Let $[0]!=1$ and for $m \geq 1$ set $[m]! = [m][m-1]!$. If $m \geq k \geq 0$, set
\[\gauss{m}{k} = \frac{[m]!}{[k]![m-k]!}.\]
If $m,k \in \mathbb{Z}$ and any of the conditions $m \geq k \geq 0$ fails, set $\gauss{m}{k}=0$. We record some results which we will need later. The first is well known.
\begin{lemma} \label{GaussSum}
Suppose $n, m \geq 0$. Then
\begin{align*}
\gauss{n+1}{m} & = \gauss{n}{m-1} + q^{m} \gauss{n}{m} \\
&=\gauss{n}{m} + q^{n-m+1}\gauss{n}{m-1}.
\end{align*}
\end{lemma}
\begin{lemma} \label{GaussLemma}
Suppose $m, k \geq n \geq 0$. Then
\[\sum_{\gamma \geq 0}(-1)^\gamma q^{\binom{\gamma}{2}}\gauss{n}{\gamma}\gauss{m-\gamma}{k} = q^{n(m-k)}\gauss{m-n}{k-n}.\]
\end{lemma}
\begin{proof}
The lemma is true for $n=0$ so suppose $n>0$ and that the lemma holds for $n-1$.
Then using Lemma~\ref{GaussSum} and the inductive hypothesis,
\begin{align*}
\sum_{\gamma \geq 0}(-1)^\gamma q^{\binom{\gamma}{2}}\gauss{n}{\gamma}\gauss{m-\gamma}{k}
& = \sum_{\gamma\geq 0}(-1)^\gamma q^{\binom{\gamma}{2}} \left(\gauss{n-1}{\gamma} +q^{n-\gamma}\gauss{n-1}{\gamma-1} \right) \gauss{m-\gamma}{k} \\
&= \sum_{\gamma \geq 0}(-1)^\gamma q^{\binom{\gamma}{2}} \gauss{n-1}{\gamma}\gauss{m-\gamma}{k} - q^{n-1} \sum_{\gamma\geq 0} (-1)^\gamma q^{\binom{\gamma}{2}} \gauss{n-1}{\gamma} \gauss{m-\gamma-1}{k} \\
&= q^{(n-1)(m-k)} \gauss{m-n+1}{k-n+1} -q^{n-1} q^{(n-1)(m-k-1)}\gauss{m-n}{k-n+1} \\
&=q^{n(m-k)} \gauss{m-n}{k-n}.
\end{align*}
\end{proof}
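Lemmas~\ref{GaussSum} and~\ref{GaussLemma} are identities of polynomials in $q$ and can be checked mechanically for small parameters. The following Python sketch is our own illustration (the function names are ours): it builds the Gaussian binomial coefficients over $\mathbb{Z}[q]$ from the first recurrence of Lemma~\ref{GaussSum} and then verifies the identity of Lemma~\ref{GaussLemma} coefficient by coefficient.

```python
from functools import lru_cache

def poly_add(f, g):
    """Add two polynomials in q, given as coefficient sequences."""
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
            for i in range(n)]

def poly_mul(f, g):
    """Multiply two polynomials in q."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

def shift(f, k):
    """Multiply a polynomial by q^k (k >= 0)."""
    return [0] * k + list(f)

def trim(f):
    """Drop trailing zero coefficients so polynomials compare equal."""
    f = list(f)
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

@lru_cache(maxsize=None)
def gauss(m, k):
    """Gaussian binomial [m choose k] as a coefficient tuple in q, via the
    recurrence [m+1 choose k] = [m choose k-1] + q^k [m choose k]."""
    if k < 0 or m < k:
        return (0,)
    if k == 0 or k == m:
        return (1,)
    return tuple(poly_add(gauss(m - 1, k - 1), shift(gauss(m - 1, k), k)))

def gauss_lemma_holds(m, k, n):
    """Check: sum_g (-1)^g q^binom(g,2) [n c g][m-g c k] = q^{n(m-k)} [m-n c k-n]."""
    lhs = [0]
    for g in range(n + 1):
        term = shift(poly_mul(list(gauss(n, g)), list(gauss(m - g, k))),
                     g * (g - 1) // 2)
        lhs = poly_add(lhs, [(-1) ** g * c for c in term])
    return trim(lhs) == trim(shift(gauss(m - n, k - n), n * (m - k)))
```

For example, $\gauss{4}{2} = 1+q+2q^2+q^3+q^4$, and both forms of Lemma~\ref{GaussSum} can be checked against $\gauss{5}{2}$.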
The following result may be seen by applying~\cite[Equation 4.6]{M:ULect} and the anti-isomorphism $\ast$ to~\cite[Proposition~2.14]{Lyle:CP}.
\begin{proposition}[\cite{Lyle:CP}, Proposition~2.14] \label{CombTheorem1b}
Suppose that $\mathsf{T} \in \mathcal{T}_{\text{r}}(\lambda,\mu)$.
Choose $d$ with $1 \leq d <b$ and $t$ with $1 \leq t \leq \mu_{d+1}$. Let $\mathcal{S}$ be the set of row-standard tableaux obtained by replacing $t$ of the entries in $\mathsf{T}$ which are equal to $d+1$ with $d$. Each tableau $\mathsf{S} \in \mathcal{S}$ will be of type $\nu(d,t)$ where
\[\nu(d,t)_j =
\begin{cases}
\mu_j+t, & j=d, \\
\mu_j-t, & j=d+1, \\
\mu_j, & \text{otherwise}.
\end{cases} \]
Recall that $\Theta_\mathsf{T}: M^\mu \rightarrow S^\lambda$ and $\Theta_{\mathsf{S}}:M^{\nu(d,t)} \rightarrow S^\lambda$. Then
\[\Theta_\mathsf{T}(m_\mu h_{d,t}) = \sum_{\mathsf{S} \in \mathcal{S}} \left( \prod_{j= 1}^a q^{\mathsf{T}^d_{>j}(\mathsf{S}^d_j - \mathsf{T}^d_j)} \gauss{\mathsf{S}^d_j}{\mathsf{T}^d_j}\right) \Theta_\mathsf{S} (m_{\nu(d,t)}).\]
\end{proposition}
So if $\Theta \in \EHom_{\mathcal{H}}(M^\mu,S^\lambda)$ then we may write $\Theta(m_\mu h_{d,t}) = \Phi(m_{\nu(d,t)})$, where $\Phi$ is a linear combination of homomorphisms indexed by $\lambda$-tableaux of type $\nu(d,t)$, with known coefficients. However, since these tableaux need not be semistandard, the corresponding homomorphisms need not be linearly independent, and so we cannot say immediately whether $\Theta(m_\mu h_{d,t})=0$. We would therefore like a method of writing a map $\Theta_\mathsf{S}$ as a linear combination of homomorphisms indexed by semistandard tableaux. Unfortunately, we do not have an algorithm for this process; we do, however, have a way of rewriting homomorphisms. The following result is due to Fayers and Martin and holds when $\mathcal{H} \cong R\mathfrak{S}_n$; it was arguably the strongest combinatorial result used in their elementary proof of the Carter-Payne theorem~\cite{FayersMartin:homs}. Recall that if $\eta=(\eta_1,\eta_2,\ldots,\eta_l)$ is any sequence of integers then $\bar\eta_k = \sum_{i=1}^k \eta_i$.
\begin{proposition}[\cite{FayersMartin:homs}, Lemma 7] \label{FM:Lemma}
Suppose $\mathcal{H} \cong R\mathfrak{S}_n$ and that $\lambda=(\lambda_1,\ldots,\lambda_a)$ is a partition of $n$ and $\nu=(\nu_1,\ldots,\nu_b)$ a composition of $n$. Suppose $\mathsf{S} \in \mathcal{T}_{\text{r}}(\lambda,\nu)$.
Choose $r_1\neq r_2$ with $1 \leq r_1,r_2 \leq a$ and $\lambda_{r_1} \geq \lambda_{r_2}$, and $d$ with $1 \leq d \leq b$. Let
\[\mathcal{G} =\left\{g=(g_1,g_2,\ldots,g_b) \mid g_d=0, \, \bar{g} = \mathsf{S}^d_{r_2},\, \text{ and } g_i \leq \mathsf{S}^{i}_{r_1} \text{ for } 1 \leq i \leq b\right\}.\]
For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the row-standard tableau formed by moving all entries equal to $d$ from row $r_2$ to row $r_1$ and, for $i \neq d$, moving $g_i$ entries equal to $i$ from row $r_1$ to row $r_2$ (where we assume we may reorder the rows if necessary). Then
\[\Theta_\mathsf{S} = (-1)^{\mathsf{S}^d_{r_2}} \sum_{g \in \mathcal{G}}\left( \prod_{i=1}^b \binom{\mathsf{S}^i_{r_2}+g_i}{g_i}\right) \Theta_{\mathsf{U}_g}.\]
\end{proposition}
Since Fayers and Martin work in the setting of James~\cite{James}, it is not immediate that their result carries over to our cellular algebra setting; see, however, Section~\ref{DJSpecht}.
Unfortunately, the obvious $q$-analogue of Proposition~\ref{FM:Lemma} (that is, in the notation above, that the $\mathcal{H}$-homomorphism $\Theta_{\mathsf{S}}$ can be written as a linear combination of the maps $\Theta_{\mathsf{U}_g}$ for $g \in \mathcal{G}$) is false.
We identify a tableau $\mathsf{U}$ of type $\nu$ with the image $\Theta_\mathsf{U}(m_\nu)$; the following identities can be checked by hand.
\begin{ex} Let $\lambda=(2^2,1)$ and $\nu = (1^5)$. Then
\[ \tab(12,35,4) = - \tab(14,35,2)- \tab(24,35,1)+(q-1) \tab(12,34,5).\]
Now
\[\tab(14,35,2) = - \tab(14,25,3) + \tab(12,34,5) + \tab(13,24,5), \qquad \qquad \tab(24,35,1) = (q-2) \, \tab(12,34,5) - \tab(12,35,4) - \tab(13,24,5) + \tab(14,25,3),\]
so that if $q \neq 1$, we cannot write
$\tab(12,35,4)$
as a linear combination of
$\tab(14,35,2)$
and
$\tab(24,35,1)$.
\end{ex}
We do, however, have the following weaker analogue of Proposition~\ref{FM:Lemma}.
\begin{theorem} \label{Lemma7}
Suppose $\lambda=(\lambda_1,\ldots,\lambda_a)$ is a partition of $n$ and $\nu=(\nu_1,\ldots,\nu_b)$ is a composition of $n$.
Let $\mathsf{S} \in \mathcal{T}_{\text{r}}(\lambda,\nu)$.
\begin{enumerate}
\item
Suppose $1 \leq r\leq a-1$ and that $1 \leq d \leq b$. Let
\[\mathcal{G} =\left\{g=(g_1,g_2,\ldots,g_b) \mid g_d=0, \, \bar{g}=\mathsf{S}^d_{r+1} \text{ and } g_i \leq \mathsf{S}^{i}_{r} \text{ for } 1 \leq i \leq b\right\}.\]
For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the row-standard tableau formed by moving all entries equal to $d$ from row $r+1$ to row $r$ and, for $i \neq d$, moving $g_i$ entries equal to $i$ from row $r$ to row $r+1$. Then
\[\Theta_\mathsf{S} = (-1)^{\mathsf{S}^d_{r+1}} q^{-\binom{\mathsf{S}^d_{r+1}+1}{2}} q^{-\mathsf{S}^d_{r+1}\mathsf{S}^{<d}_{r+1}} \sum_{g \in \mathcal{G}} q^{\bar{g}_{d-1}} \prod_{i=1}^b q^{g_i \mathsf{S}^{<i}_{r+1}} \gauss{\mathsf{S}^i_{r+1}+g_i}{g_i}\Theta_{\mathsf{U}_g}.\]
\item
Suppose $1 \leq r\leq a-1$ with $\lambda_r=\lambda_{r+1}$ and that $1 \leq d \leq b$. Let
\[\mathcal{G} =\left\{g=(g_1,g_2,\ldots,g_b) \mid g_d=0, \, \bar{g} = \mathsf{S}^d_r \text{ and } g_i \leq \mathsf{S}^{i}_{r+1} \text{ for } 1 \leq i \leq b \right\}.\]
For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the row-standard tableau formed by moving all entries equal to $d$ from row $r$ to row $r+1$ of $\mathsf{S}$ and, for $i \neq d$, moving $g_i$ entries equal to $i$ from row $r+1$ to row $r$. Then
\[\Theta_\mathsf{S} = (-1)^{\mathsf{S}^d_{r}} q^{-\binom{\mathsf{S}^d_{r}}{2}} q^{-\mathsf{S}^d_r \mathsf{S}^{>d}_r} \sum_{g \in \mathcal{G}} q^{-\bar{g}_{d-1}} \prod_{i=1}^b q^{g_i \mathsf{S}^{>i}_{r}} \gauss{\mathsf{S}^i_{r}+g_i}{g_i} \Theta_{\mathsf{U}_g}.\]
\end{enumerate}
\end{theorem}
The proof of Theorem~\ref{Lemma7} is both technical and long, so we postpone it until the next section and first give some examples. As above, we identify a tableau $\mathsf{T}$ of type $\sigma$ with $\Theta_{\mathsf{T}}(m_\sigma)$. Set $$e=\min\{f\ge2 \mid 1+q+\dots+q^{f-1}=0\},$$
with $e=\infty$ if $1+q+\dots+q^{f-1}\ne0$ for all $f\ge2$, and recall that if $R$ is a field then $\mathcal{H}$ is (split) semisimple if and only if $e>n$.
\begin{ex}
Suppose $R$ is a field. Let $\mu=(3,2,2)$ and $\lambda=(5,2)$. If $\Theta \in \EHom_{\mathcal{H}}(M^\mu, S^\lambda)$ then $\Theta$ is determined by
\[\Theta(m_\mu)= \alpha \tab(11122,33) + \beta \tab(11123,23)+ \gamma \tab(11133,22)\]
for some $\alpha,\beta,\gamma \in R$.
Then applying Proposition~\ref{CombTheorem1b} and Theorem~\ref{Lemma7},
\begin{align*}
\Theta(m_\mu h_{2,1}) & = \alpha \tab(11122,23) + [2]\beta \tab(11123,22)+ q[2]\beta \tab(11122,23) + q^2 \gamma \tab(11123,22), \\
\Theta(m_\mu h_{2,2}) & = \alpha \tab(11122,22)+ q[2]^2 \beta \tab(11122,22) + q^4 \gamma \tab(11122,22), \\
\Theta(m_\mu h_{1,1}) & = [4] \alpha \tab(11112,33) + [4] \beta \tab(11113,23) + \beta \tab(11123,13) + \gamma \tab(11133,12) \\
&= [4] \alpha \tab(11112,33) + [4] \beta \tab(11113,23) - \beta \tab(11113,23) - [2] \beta \tab(11112,33) - q \gamma \tab(11113,23), \\
\Theta(m_\mu h_{1,2}) & = \gauss{5}{2} \alpha \tab(11111,33) + [4] \beta \tab(11113,13) + \gamma \tab(11133,11) \\
&= \gauss{5}{2} \alpha \tab(11111,33) -[2][4]\beta \tab(11111,33) + q \gamma \tab(11111,33).
\end{align*}
Since $\Theta \in \Psi(\mu,\lambda)$ if and only if $\Theta(m_\mu h_{d,t})=0$ for all $d,t$ above, we see that $\Psi(\mu,\lambda)$, and hence $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)$, is 1-dimensional if $e=5$ and zero otherwise. Moreover, if $e=5$ then the space $\Psi(\mu,\lambda)$ is spanned by the map $\Theta$ determined by
\[\Theta(m_\mu)= q^3 [2] \tab(11122,33) -q^2 \tab(11123,23)+ [2] \tab(11133,22).\]
\end{ex}
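The assertion that the displayed map spans $\Psi(\mu,\lambda)$ when $e=5$ can also be checked by machine. The following Python sketch is our own illustration (the names are ours, not notation from the paper): it computes in $\mathbb{Z}[q]/(\Phi_5(q))$, where $\Phi_5(q)=1+q+q^2+q^3+q^4$, and verifies that $\alpha=q^3[2]$, $\beta=-q^2$, $\gamma=[2]$ satisfy the coefficient equations read off from the four displays above.

```python
E = 5  # work modulo Phi_5(q) = 1 + q + q^2 + q^3 + q^4, i.e. the case e = 5

def red(f):
    """Reduce a coefficient list modulo Phi_5(q), using
    q^i = -(q^{i-4} + q^{i-3} + q^{i-2} + q^{i-1}) for i >= 4."""
    f = list(f) + [0] * max(0, E - len(f))
    for i in range(len(f) - 1, E - 2, -1):
        c = f[i]
        if c:
            f[i] = 0
            for j in range(i - (E - 1), i):
                f[j] -= c
    return f[: E - 1]

def mul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return red(h)

def add(*fs):
    out = [0] * (E - 1)
    for f in fs:
        for i, c in enumerate(f):
            out[i] += c
    return out

def neg(f):
    return [-c for c in f]

def qpow(k):
    return red([0] * k + [1])

def qint(m):
    """[m] = 1 + q + ... + q^{m-1}."""
    return red([1] * m)

# The candidate spanning map: alpha = q^3 [2], beta = -q^2, gamma = [2].
alpha, beta, gamma = mul(qpow(3), qint(2)), neg(qpow(2)), qint(2)

# Gaussian binomial [5 choose 2] = 1 + q + 2q^2 + 2q^3 + 2q^4 + q^5 + q^6.
gauss52 = red([1, 1, 2, 2, 2, 1, 1])

# Coefficient equations transcribed from the displays of Theta(m_mu h_{d,t}).
relations = [
    add(alpha, mul(mul(qpow(1), qint(2)), beta)),                    # h_{2,1}
    add(mul(qint(2), beta), mul(qpow(2), gamma)),                    # h_{2,1}
    add(alpha, mul(mul(qpow(1), mul(qint(2), qint(2))), beta),
        mul(qpow(4), gamma)),                                        # h_{2,2}
    add(mul(qint(4), alpha), neg(mul(qint(2), beta))),               # h_{1,1}
    add(mul(qint(4), beta), neg(beta), neg(mul(qpow(1), gamma))),    # h_{1,1}
    add(mul(gauss52, alpha), neg(mul(mul(qint(2), qint(4)), beta)),
        mul(qpow(1), gamma)),                                        # h_{1,2}
]
```

Every entry of `relations` reduces to zero, confirming that the map lies in $\Psi(\mu,\lambda)$ when $e=5$.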
\begin{ex}
Suppose that $q=1$ and that $R$ is a field of characteristic 2. Let $\lambda=(10,5)$ and $\mu=(8,3,1,1,1,1)$. Then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=2$. If $\mathsf{T}=\tab(1111111122,23456)$ then the space $\Psi(\mu,\lambda)$ is spanned by the maps $\Theta_{\mathsf{T}}$ and $\sum_{\mathsf{S} \in \mathcal{T}_0(\lambda,\mu)} \Theta_\mathsf{S}$.
\end{ex}
It has recently been shown that for fixed parameters $e$ and $p$ there exist homomorphism spaces of arbitrarily high dimension~\cite{Dodge:Dim,Lyle:Dim}.
\begin{ex}
Take $\lambda=(5,4)$ and $\mu = (3,3,2,1)$. Then $\mathsf{T}=\tab(11133,2224) \in \mathcal{T}_0(\lambda,\mu)$ and
\[ \Theta_\mathsf{T}(m_\mu h_{3,1}) = \tab(11133,2223).\]
But we now have no obvious way of using Theorem~\ref{Lemma7} to write $\tab(11133,2223)$ in terms of homomorphisms indexed by semistandard tableaux.
\end{ex}
We are therefore most interested in pairs of partitions $\lambda$ and $\mu$ where Proposition~\ref{CombTheorem1b} and Theorem~\ref{Lemma7} can give an algorithm for computing $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)$. Suppose $\mathsf{T} \in \mathcal{T}_0(\lambda,\mu)$. If $1 \leq d <b$ and $1 \leq t \leq \mu_{d+1}$ then Theorem~\ref{SSBasis} and Proposition~\ref{CombTheorem1b} show that there exist unique $m_{\mathsf{U}\mathsf{T}} \in R$ such that
\begin{equation} \label{MUT}
\Theta_\mathsf{T}(m_\mu h_{d,t}) = \sum_{\mathsf{U} \in \mathcal{T}_0(\lambda,\nu(d,t))} m_{\mathsf{U}\mathsf{T}} \Theta_\mathsf{U}(m_{\nu(d,t)}),
\end{equation}
where $\nu(d,t)$ is defined in Proposition~\ref{CombTheorem1b}. Now suppose $R$ is a field.
If $M=(m_{\mathsf{U}\mathsf{T}})$ is the matrix whose columns are indexed by the tableaux $\mathsf{T} \in \mathcal{T}_0(\lambda,\mu)$ and whose rows are indexed by the tableaux $\mathsf{U} \in \mathcal{T}_0(\lambda,\nu(d,t))$ for some $1 \leq d <b$ and $1 \leq t \leq \mu_{d+1}$, with entries $m_{\mathsf{U}\mathsf{T}}$ as in Equation~(\ref{MUT}), then, by Corollary~\ref{simple},
\[\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda)) = \dim(\Psi(\mu,\lambda))=\corank(M).\]
So the outstanding problem is to determine an explicit formula for $m_{\mathsf{U}\mathsf{T}}$. For the remainder of Section~\ref{HomSection}, suppose that $R$ is a field and that $\lambda=(\lambda_1,\ldots,\lambda_a)$ and $\mu=(\mu_1,\ldots,\mu_b)$ satisfy
\[\bar\mu_j \geq \bar\lambda_{j-1}+\lambda_{j+1} \text{ for } 1 \leq j < a.\]
Since $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)=\{0\}$ if $a>b$, we may also assume that $a \leq b$.
Let $a \leq d <b$ and $1 \leq t \leq \mu_{d+1}$. If $\mathsf{T} \in \mathcal{T}_0(\lambda,\mu)$ then say that $\mathsf{T} \xrightarrow{d,t}\mathsf{U}$ if
\begin{itemize}
\renewcommand{\labelitemi}{$-$}
\item $\mathsf{U} \in \mathcal{T}_{\text{r}}(\lambda,\nu(d,t))$;
\item $\mathsf{U}^i_j = \mathsf{T}^i_j$ for all $i \neq d,d+1$ and all $j$;
\item $\mathsf{U}^d_j \geq \mathsf{T}^d_j$ for all $j$.
\end{itemize}
\begin{lemma} \label{abig}
Let $\mathsf{T} \in \mathcal{T}_0(\lambda,\mu)$. Suppose that $a\leq d <b$ and $1 \leq t \leq \mu_{d+1}$. Then
\[\Theta_\mathsf{T}(m_\mu h_{d,t})= \sum_{\mathsf{T}\xrightarrow{d,t}\mathsf{U}} \left( \prod_{j= 1}^a q^{\mathsf{T}^d_{>j}(\mathsf{U}^d_j - \mathsf{T}^d_j)} \gauss{\mathsf{U}^d_j}{\mathsf{T}^d_j}\right) \Theta_\mathsf{U}(m_{\nu(d,t)}) \]
and if $\mathsf{T}\xrightarrow{d,t}\mathsf{U}$ then $\mathsf{U}$ is semistandard.
\end{lemma}
\begin{proof}
Since $\mathsf{T} \xrightarrow{d,t}\mathsf{U}$ precisely when $\mathsf{U}$ is a row-standard tableau formed from $\mathsf{T}$ by changing $t$ entries equal to $d+1$ into $d$, the first part of the lemma is a restatement of Proposition~\ref{CombTheorem1b}. So suppose $\mathsf{T} \xrightarrow{d,t}\mathsf{U}$. If $1 \leq r < a$, then $\mathsf{T}^r_j=0$ for $j>r$, since $\mathsf{T}$ is semistandard, and so
\begin{align*}
\mathsf{T}^r_r & = \mu_r -\mathsf{T}^r_{<r} \\
& = \mu_r -(\bar\lambda_{r-1}-\mathsf{T}_{<r}^{<r} -\mathsf{T}_{<r}^{>r} ) \\
&\geq \mu_r - \bar\lambda_{r-1}+\mathsf{T}^{<r}_{<r}\\
&=\bar\mu_r-\bar\lambda_{r-1} \\
&\geq \lambda_{r+1},
\end{align*}
where the last inequality comes from our assumption on $\lambda$ and $\mu$.
Each row $1 \leq r <a$ therefore contains at least $\lambda_{r+1}$ entries equal to $r$, and so each entry equal to $d+1$ in $\mathsf{T}$ either lies in the top row or lies in row $r+1$ for some $1 \leq r<a$, directly below an entry equal to $r<d$. Hence a row-standard tableau obtained by changing entries equal to $d+1$ into $d$ is semistandard.
\end{proof}
Let $1 \leq d <a$ and $1 \leq t \leq \mu_{d+1}$. If $\mathsf{T} \in \mathcal{T}_0(\lambda,\mu)$ then say that $\mathsf{T} \xrightarrow{d,t}\mathsf{U}$ if $\mathsf{U} \in \mathcal{T}_{\text{r}}(\lambda,\nu(d,t))$ and
\[\begin{array}{c|ccc}
& i=d & i=d+1 & i \neq d,d+1 \\
\hline
j=d &\mathsf{U}^i_j \geq \mathsf{T}^i_j& \mathsf{U}^i_j \leq \mathsf{T}^i_j & \mathsf{U}^i_j \leq \mathsf{T}^i_j\\
j=d+1 &\mathsf{U}^i_j =0 &\mathsf{U}^i_j \leq \mathsf{T}^i_j& \mathsf{U}^i_j \geq \mathsf{T}^i_j \\
j \neq d,d+1 &\mathsf{U}^i_j \geq \mathsf{T}^i_j&\mathsf{U}^i_j \leq \mathsf{T}^i_j& \mathsf{U}^i_j = \mathsf{T}^i_j
\end{array}\]
Of course, some of the conditions in the table above are redundant, since they are implied by the others.
\begin{lemma}
Let $\mathsf{T} \in \mathcal{T}_0(\lambda,\mu)$. Suppose that $1\leq d <a$ and $1 \leq t \leq \mu_{d+1}$. Then
\begin{multline*}
\Theta_\mathsf{T}(m_\mu h_{d,t})= \sum_{\mathsf{T}\xrightarrow{d,t}\mathsf{U}}
(-1)^{\mathsf{T}^{d+1}_{d+1}-\mathsf{U}^{d+1}_{d+1}}
q^{-\binom{\mathsf{T}^{d+1}_{d+1}-\mathsf{U}^{d+1}_{d+1}+1}{2}}q^{\mathsf{U}^{d+1}_{d+1}(\mathsf{U}^d_d-\mathsf{T}^{d}_{d}+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1})} \\
\left( \prod_{j=1}^{d-1} q^{\mathsf{T}^d_{>j}(\mathsf{U}^d_j-\mathsf{T}^d_j)} \gauss{\mathsf{U}^d_j}{\mathsf{T}^d_j} \right)\gauss{\mathsf{U}^d_d-\mathsf{T}^{d+1}_{d+1}}{\mathsf{T}^d_d-\mathsf{U}^{d+1}_{d+1}} \left( \prod_{i=d+2}^{b} q^{\mathsf{T}^{<i}_{d+1}(\mathsf{U}^i_{d+1}-\mathsf{T}^i_{d+1})} \gauss{\mathsf{U}^i_{d+1}}{\mathsf{T}^i_{d+1}} \right) \Theta_\mathsf{U}(m_{\nu(d,t)})
\end{multline*}
and if $\mathsf{T}\xrightarrow{d,t}\mathsf{U}$ then $\mathsf{U}$ is semistandard.
\end{lemma}
\begin{proof}
By Proposition~\ref{CombTheorem1b}, $\Theta_\mathsf{T}(m_\mu h_{d,t}) = \sum_\mathsf{S} a_\mathsf{S} \Theta_\mathsf{S}(m_{\nu(d,t)})$ where the sum is over the row-standard tableaux formed by changing $t$ entries equal to $d+1$ in $\mathsf{T}$ into $d$. Suppose $\mathsf{S}$ is such a tableau. By Theorem~\ref{Lemma7}, if $\mathsf{S}^d_{d+1}>\lambda_d-\mathsf{S}^d_d$ then $\Theta_\mathsf{S}=0$; otherwise $\Theta_\mathsf{S}$ is a linear combination of maps $\Theta_\mathsf{U}$ indexed by row-standard tableaux $\mathsf{U}$, where $\mathsf{U}$ is formed from $\mathsf{S}$ by moving $\mathsf{S}^d_{d+1}$ entries equal to $d$ from row $d+1$ to row $d$ and replacing them with entries not equal to $d$ from row $d$. If $\mathsf{U}$ has this form, then clearly $\mathsf{T}\xrightarrow{d,t}\mathsf{U}$, so that
\[\Theta_\mathsf{T}(m_\mu h_{d,t})= \sum_{\mathsf{T}\xrightarrow{d,t}\mathsf{U}} b_\mathsf{U} \Theta_\mathsf{U}(m_{\nu(d,t)}) \]
for some $b_\mathsf{U} \in R$. So suppose $\mathsf{T}\xrightarrow{d,t}\mathsf{U}$. Each of the intermediate tableaux $\mathsf{S}$ was formed from $\mathsf{T}$ by changing entries of $\mathsf{T}$ from $d+1$ to $d$: here $\mathsf{U}^d_j-\mathsf{T}^d_j$ entries were changed in row $j$ for $1\leq j \leq d-1$, and, for some $\gamma \geq 0$, $\gamma$ entries were changed in row $d$ and $\mathsf{U}^d_d-\mathsf{T}^d_d-\gamma$ entries in row $d+1$. Therefore, summing over all $\gamma$ with $\max\{0,\mathsf{U}^d_d-\mathsf{T}^d_d-\mathsf{T}^{d+1}_{d+1}\} \leq \gamma \leq \mathsf{U}^d_d-\mathsf{T}^d_d+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1}$, we have
\begin{align*}
b_\mathsf{U} & = \sum_{\gamma}
\left( \prod_{j=1}^{d-1} q^{\mathsf{T}^d_{>j}(\mathsf{U}^d_j-\mathsf{T}^d_j)} \gauss{\mathsf{U}^d_j}{\mathsf{T}^d_j} \right)
\gauss{\mathsf{T}^d_d + \gamma}{\gamma}
(-1)^{\mathsf{U}^d_d-\mathsf{T}^d_d-\gamma} q^{-\binom{\mathsf{U}^d_d-\mathsf{T}^d_d-\gamma}{2}-\mathsf{U}^d_d+\mathsf{T}^d_d+\gamma} \\
& \hspace*{15mm} q^{(\mathsf{U}^{d}_{d}-\mathsf{T}^{d}_{d}+\mathsf{U}_{d+1}^{d+1}-\mathsf{T}^{d+1}_{d+1}-\gamma)(\mathsf{U}^d_d-\mathsf{T}^d_d-\gamma)}\gauss{\mathsf{U}^{d+1}_{d+1}}{\mathsf{T}^d_d-\mathsf{U}_d^d+\mathsf{T}^{d+1}_{d+1}+\gamma}
\left( \prod_{i=d+2}^{b} q^{\mathsf{T}^{<i}_{d+1}(\mathsf{U}^i_{d+1}-\mathsf{T}^i_{d+1})} \gauss{\mathsf{U}^i_{d+1}}{\mathsf{T}^i_{d+1}} \right) \\
&= (-1)^{\mathsf{U}^d_d-\mathsf{T}^d_d}\left( \prod_{j=1}^{d-1} q^{\mathsf{T}^d_{>j}(\mathsf{U}^d_j-\mathsf{T}^d_j)} \gauss{\mathsf{U}^d_j}{\mathsf{T}^d_j} \right) \left( \prod_{i=d+2}^{b} q^{\mathsf{T}^{<i}_{d+1}(\mathsf{U}^i_{d+1}-\mathsf{T}^i_{d+1})} \gauss{\mathsf{U}^i_{d+1}}{\mathsf{T}^i_{d+1}} \right) \\
& \hspace*{15mm} q^{(\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1})(\mathsf{U}^d_d-\mathsf{T}^d_d) + \binom{\mathsf{U}_d^d-\mathsf{T}^d_d}{2}} \sum_{\gamma} (-1)^\gamma q^{\binom{\gamma+1}{2}-\gamma(\mathsf{U}^d_d-\mathsf{T}^d_d+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1})}\gauss{\mathsf{T}^d_d + \gamma}{\gamma} \gauss{\mathsf{U}^{d+1}_{d+1}}{\mathsf{T}^d_d-\mathsf{U}_d^d+\mathsf{T}^{d+1}_{d+1}+\gamma}.
\end{align*}
Now we change the limits on the sum and apply Lemma~\ref{GaussLemma}:
\begin{align*}
\sum_{\gamma}(-1)^\gamma &
q^{\binom{\gamma+1}{2}-\gamma(\mathsf{U}^d_d-\mathsf{T}^d_d+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1})}
\gauss{\mathsf{T}^d_d + \gamma}{\gamma} \gauss{\mathsf{U}^{d+1}_{d+1}}{\mathsf{T}^d_d-\mathsf{U}_d^d+\mathsf{T}^{d+1}_{d+1}+\gamma} \\
&=(-1)^{\mathsf{U}_d^d-\mathsf{T}_d^d+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1}} q^{-\binom{\mathsf{U}_d^d-\mathsf{T}^d_d+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}_{d+1}^{d+1}}{2}} \sum_{\gamma}(-1)^\gamma q^{\binom{\gamma}{2}} \gauss{\mathsf{U}^{d+1}_{d+1}}{\gamma} \gauss{\mathsf{U}^{d}_{d}+\mathsf{U}^{d+1}_{d+1} -\mathsf{T}^{d+1}_{d+1}-\gamma}{\mathsf{T}^d_d} \\
&= (-1)^{\mathsf{U}_d^d-\mathsf{T}_d^d+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1}} q^{-\binom{\mathsf{U}_d^d-\mathsf{T}^d_d+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}_{d+1}^{d+1}}{2}} q^{\mathsf{U}^{d+1}_{d+1}(\mathsf{U}^d_d-\mathsf{T}^d_d+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1})} \gauss{\mathsf{U}^d_d-\mathsf{T}^{d+1}_{d+1}}{\mathsf{T}^d_d-\mathsf{U}^{d+1}_{d+1}}.
\end{align*}
Hence
\begin{multline*}
b_\mathsf{U} = (-1)^{\mathsf{T}^{d+1}_{d+1}-\mathsf{U}^{d+1}_{d+1}}
q^{-\binom{\mathsf{T}^{d+1}_{d+1}-\mathsf{U}^{d+1}_{d+1}+1}{2}}q^{\mathsf{U}^{d+1}_{d+1}(\mathsf{U}^d_d-\mathsf{T}^{d}_{d}+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1})} \\
\left( \prod_{j=1}^{d-1} q^{\mathsf{T}^d_{>j}(\mathsf{U}^d_j-\mathsf{T}^d_j)} \gauss{\mathsf{U}^d_j}{\mathsf{T}^d_j} \right)\gauss{\mathsf{U}^d_d-\mathsf{T}^{d+1}_{d+1}}{\mathsf{T}^d_d-\mathsf{U}^{d+1}_{d+1}} \left( \prod_{i=d+2}^{b} q^{\mathsf{T}^{<i}_{d+1}(\mathsf{U}^i_{d+1}-\mathsf{T}^i_{d+1})} \gauss{\mathsf{U}^i_{d+1}}{\mathsf{T}^i_{d+1}} \right) .
\end{multline*}
It remains to show that if $\mathsf{T}\xrightarrow{d,t}\mathsf{U}$ then $\mathsf{U}$ is semistandard. From the proof of Lemma~\ref{abig}, we have that $\mathsf{T}^r_r \geq \lambda_{r+1}$ for $1 \leq r <a$, so the only way that $\mathsf{U}$ can fail to be semistandard is if some entry in row $d+1$ is greater than or equal to the entry directly below it. However,
\begin{align*}
\mathsf{U}^{d+1}_{d+1} & = \mu_{d+1}-t -\mathsf{U}^{d+1}_{\leq d} \\
&= \mu_{d+1} -t-(\bar\lambda_{d}-\mathsf{U}_{\leq d}^{\leq d} - \mathsf{U}_{\leq d}^{>d+1})\\
&\geq \mu_{d+1} - t-\bar\lambda_{d}+\mathsf{U}^{\leq d}_{\leq d} \\
&=\bar\mu_{d+1}-\bar\lambda_{d} \\
&\geq \lambda_{d+2},
\end{align*}
so this is not possible.
\end{proof}
Let us summarize the results above.
\begin{proposition} \label{algorithm}
Suppose $\lambda=(\lambda_1,\ldots,\lambda_a)$ and $\mu=(\mu_1,\ldots,\mu_b)$ are partitions of $n$ with the property that $\bar\mu_j \geq \bar\lambda_{j-1}+\lambda_{j+1}$ for $1 \leq j < a$. Let $M=(m_{\mathsf{U}\mathsf{T}})$ be the matrix whose columns are indexed by the tableaux $\mathsf{T} \in \mathcal{T}_0(\lambda,\mu)$ and whose rows are indexed by the tableaux $\mathsf{U} \in \mathcal{T}_0(\lambda,\nu(d,t))$ for some $1 \leq d <b$ and $1 \leq t \leq \mu_{d+1}$, where
\[m_{\mathsf{U}\mathsf{T}}=
\begin{cases}
(-1)^{\mathsf{T}^{d+1}_{d+1}-\mathsf{U}^{d+1}_{d+1}}
q^{-\binom{\mathsf{T}^{d+1}_{d+1}-\mathsf{U}^{d+1}_{d+1}+1}{2}}q^{\mathsf{U}^{d+1}_{d+1}(\mathsf{U}^d_d-\mathsf{T}^{d}_{d}+\mathsf{U}^{d+1}_{d+1}-\mathsf{T}^{d+1}_{d+1})} \gauss{\mathsf{U}^d_d-\mathsf{T}^{d+1}_{d+1}}{\mathsf{T}^d_d-\mathsf{U}^{d+1}_{d+1}} \\
\qquad \times\left( \prod_{j=1}^{d-1} q^{\mathsf{T}^d_{>j}(\mathsf{U}^d_j-\mathsf{T}^d_j)} \gauss{\mathsf{U}^d_j}{\mathsf{T}^d_j} \right) \left( \prod_{i=d+2}^{b} q^{\mathsf{T}^{<i}_{d+1}(\mathsf{U}^i_{d+1}-\mathsf{T}^i_{d+1})} \gauss{\mathsf{U}^i_{d+1}}{\mathsf{T}^i_{d+1}} \right) , & 1 \leq d <a \text{ and } \mathsf{T} \xrightarrow{d,t} \mathsf{U}, \\
\prod_{j= 1}^a q^{\mathsf{T}^d_{>j}(\mathsf{U}^d_j - \mathsf{T}^d_j)} \gauss{\mathsf{U}^d_j}{\mathsf{T}^d_j}, & a \leq d <b \text{ and } \mathsf{T} \xrightarrow{d,t} \mathsf{U},\\
0, & \text{otherwise}.
\end{cases}
\]
Then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=\corank(M)$.
\end{proposition}
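Proposition~\ref{algorithm} reduces the computation to the corank of an explicit matrix. The following Python sketch is our own illustration of the final linear algebra step (the names are ours, and we work over $\mathbb{Q}$ for simplicity, whereas the entries $m_{\mathsf{U}\mathsf{T}}$ live in the field $R$ with $q$ specialised): it computes a corank by exact Gaussian elimination.

```python
from fractions import Fraction

def corank(M, ncols=None):
    """Corank (nullity) of a matrix over Q: number of columns minus rank,
    computed by Gaussian elimination with exact rational arithmetic."""
    rows = [[Fraction(x) for x in r] for r in M]
    if ncols is None:
        ncols = len(rows[0]) if rows else 0
    rank = col = 0
    while rank < len(rows) and col < ncols:
        # find a pivot in the current column
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        # eliminate below the pivot
        for r in range(rank + 1, len(rows)):
            if rows[r][col]:
                factor = rows[r][col] / rows[rank][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return ncols - rank
```

The corank of the matrix $M$ of Proposition~\ref{algorithm}, with entries specialised into the ground field, is then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))$.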
We have taken a hard problem in representation theory and reduced it to a combination of combinatorics and linear algebra. It should, however, be noted that in doing so we have lost some algebraic information. For example, $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)=\{0\}$ unless $S^\mu$ and $S^\lambda$ lie in the same block, and Proposition~\ref{algorithm} does not seem to make use of this fact.
We note that Proposition~\ref{CombTheorem1b} and Theorem~\ref{Lemma7} can be used to compute homomorphism spaces other than those we have considered above. For example, the proof of the one-node Carter-Payne theorem in~\cite{Lyle:CP} relied on Proposition~\ref{CombTheorem1b} and some special cases of Theorem~\ref{Lemma7}, but a one-node Carter-Payne pair $\mu$ and $\lambda$ need not satisfy $\bar\mu_{j} \geq \bar\lambda_{j-1}+\lambda_{j+1}$ for $1 \leq j <a$.
While a computer can use Proposition~\ref{algorithm} to solve individual problems, it is more satisfying to have explicit results. This is the purpose of the next section.
\subsection{Explicit homomorphism spaces} \label{Spaces}
In this section we show that a lower bound on $\dim(\Hom_{\mathcal{H}}(S^\mu,S^\lambda))$ can be obtained by looking at the algebra $\mathcal{H}_{\mathbb{C},q}(\mathfrak{S}_n)$, and we then give the dimension of $\Hom_{\mathcal{H}_{\mathbb{C},q}(\mathfrak{S}_n)}(S^\mu,S^\lambda)$ where $\lambda=(\lambda_1,\lambda_2)$ and $\mu_1 \geq \lambda_2$.
We would like to thank Meinolf Geck and Lacrimioara Iancu for pointing out the proof of Proposition~\ref{BiggerDim}.
Fix a field $k$ of characteristic $p>0$ and let $\eta \in k^\times$. Let $e=\min\{f\geq 2 \mid 1+\eta+\ldots+\eta^{f-1}=0\}$, where we assume that $e<\infty$. Let $\omega$ be a primitive $e^{\text{th}}$ root of unity in $\mathbb{C}$.
Let $\mathcal{Z}=\mathbb{Z}[\hat{q},\hat{q}^{-1}]$ denote the ring of Laurent polynomials in the indeterminate $\hat{q}$. If $F$ is a field and $q$ an invertible element of $F$, define $\theta_{F,q}:\mathcal{Z} \rightarrow F$ to be the ring homomorphism which sends $\hat{q}$ to $q$. If $S$ is a ring and $M \in M_{l \times m}(S)$, then $\rank(M)$ is the greatest order of any non-zero minor of $M$. If $\tilde{\theta}:S \rightarrow S'$ is a ring homomorphism, define $\tilde\theta(M)\in M_{l\times m}(S')$ to be the matrix with $(i,j)$-entry $\tilde\theta(M_{ij})$. For $M \in M_{l\times m}(\mathcal{Z})$ set $M_{F,q} = \theta_{F,q}(M)$.
Let $\Phi_e(X) \in \mathbb{Z}[X]$ denote the $e^{\text{th}}$ cyclotomic polynomial. Observe that $\theta_{k,\eta}(\Phi_e(\hat{q}))=0_k$ and $\theta_{\mathbb{C},\omega}(\Phi_e(\hat{q}))=0_{\mathbb{C}}$, so that the maps $\theta_{k,\eta}$ and $\theta_{\mathbb{C},\omega}$ both factor through $R=\mathbb{Z}[\hat{q},\hat{q}^{-1}] / (\Phi_e(\hat{q}))$; that is, there exist ring homomorphisms $\tilde{\theta}_{k,\eta}$ and $\tilde{\theta}_{\mathbb{C},\omega}$ such that the following diagram commutes.
\begin{diagram}
&& k \\
&\ruTo^{\theta_{k,\eta}} & \uTo_{\tilde\theta_{k,\eta}} \\
athcal{Z}=
athbb{Z}[\hat{q},\hat{q}^{-1}] &\rTo &
athbb{Z}[\hat{q},\hat{q}^{-1}]/(\Phi_e(\hat{q}))=R \\
&\rdTo_{\theta_{
athbb{C},\omega}} & \dTo_{\tilde\theta_{
athbb{C},\omega}} \\
&&
athbb{C}
\end{diagram}
\begin{lemma} \label{LemF}
Suppose $\tilde{M} \in M_{l \times m}(R)$. Then $\rank(\tilde{M})\geq \rank(\tilde{\theta}_{k,\eta}(\tilde{M}))$.
\end{lemma}
\begin{proof}
This follows since if $N$ is any $r \times r$ submatrix of $\tilde{M}$ then $\det(\tilde{\theta}_{k,\eta}(N)) =\tilde{\theta}_{k,\eta}(\det(N))$.
\end{proof}
\begin{lemma}
Suppose $\tilde{M} \in M_{l \times m}(R)$. Then $\rank(\tilde{M})=\rank(\tilde{\theta}_{\mathbb{C},\omega}(\tilde{M}))$.
\end{lemma}
\begin{proof}
This follows from the proof of Lemma~\ref{LemF} and the fact that $\tilde\theta_{\mathbb{C},\omega}$ is injective.
\end{proof}
\begin{corollary} \label{Rank}
Suppose $M \in M_{l\times m}(\mathcal{Z})$. Then
\[\rank(M_{k,\eta}) \leq \rank(M_{\mathbb{C},\omega}).\]
\end{corollary}
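The inequality can be strict; the following small example, which we include purely for illustration, shows this.
\begin{ex}
Let $p=e=2$, so that $\eta=1_k$ satisfies $1+\eta=0$ in $k$ and $\omega=-1$. Take $M=(\hat{q}+3) \in M_{1 \times 1}(\mathcal{Z})$. Then $M_{k,\eta}=(1+3)=(0_k)$ has rank $0$, while $M_{\mathbb{C},\omega}=(-1+3)=(2)$ has rank $1$.
\end{ex}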
Now let $\mathcal{H}^{\mathcal{Z}}=\mathcal{H}_{\mathcal{Z},\hat{q}}(\mathfrak{S}_n)$.
If $F$ is a field and $q$ an invertible element of $F$ then $\mathcal{H}_{F,q}(\mathfrak{S}_n) \cong \mathcal{H}^{\mathcal{Z}} \otimes_{\mathcal{Z}} F$, where $\hat{q}$ acts on $F$ as multiplication by $q$. If $A$ is an $\mathcal{H}^\mathcal{Z}$-module then we define the $\mathcal{H}_{F,q}(\mathfrak{S}_n)$-module $A_{F,q}=A \otimes_{\mathcal{Z}}F$.
\begin{proposition} \label{BiggerDim}
Suppose $A$ and $B$ are $\mathcal{H}^{\mathcal{Z}}$-modules which are free as $\mathcal{Z}$-modules of finite rank. Then
\[\dim(\Hom_{\mathcal{H}_{k,\eta}(\mathfrak{S}_n)}(A_{k,\eta},B_{k,\eta})) \geq \dim(\Hom_{\mathcal{H}_{\mathbb{C},\omega}(\mathfrak{S}_n)}(A_{\mathbb{C},\omega},B_{\mathbb{C},\omega})).\]
\end{proposition}
\begin{proof}
Choose bases $\{a_i \mid i \in I\}$ of $A$ and $\{b_j \mid j \in J\}$ of $B$ and let $\{\phi_{ij} \mid i \in I, j \in J\}$ be the corresponding basis of $\Hom_{\mathcal{Z}}(A,B)$. Then $\phi =\sum_{i,j} \alpha_{ij} \phi_{ij}$ lies in $\Hom_{\mathcal{H}^{\mathcal{Z}}}(A,B)$ if and only if the coefficients $\alpha_{ij}$ satisfy a system of equations of the form $\sum_{i,j} \beta^{k}_{ij} \alpha_{ij}=0$ for $1 \leq k \leq N$, for some $N\geq 0$. If we let $M$ be the matrix whose columns are indexed by $\{(i,j) \mid i \in I, j \in J\}$ and whose rows are indexed by $1 \leq k \leq N$, and which has entries $\beta^k_{ij}$, then $\dim(\Hom_{\mathcal{H}^{\mathcal{Z}}}(A,B))= \corank(M)$. Furthermore,
\[ \dim(\Hom_{\mathcal{H}_{k,\eta}(\mathfrak{S}_n)}(A_{k,\eta},B_{k,\eta}))= \corank(M_{k,\eta}) \geq \corank(M_{\mathbb{C},\omega}) = \dim(\Hom_{\mathcal{H}_{\mathbb{C},\omega}(\mathfrak{S}_n)}(A_{\mathbb{C},\omega},B_{\mathbb{C},\omega}))\]
by Corollary~\ref{Rank}.
\end{proof}
In particular, we may take $A$ and $B$ to be Specht modules.
\begin{corollary} \label{DimCorollary}
Suppose that $\lambda$ and $\mu$ are partitions of $n$. Then
\[\dim(\Hom_{\mathcal{H}_{k,\eta}(\mathfrak{S}_n)}(S^\mu_{k,\eta},S^\lambda_{k,\eta})) \geq \dim(\Hom_{\mathcal{H}_{\mathbb{C},\omega}(\mathfrak{S}_n)}(S^\mu_{\mathbb{C},\omega},S^\lambda_{\mathbb{C},\omega})).\]
\end{corollary}
The following result is not implied by Corollary~\ref{DimCorollary} if $e \neq 2$. It could also be proved by giving an analogue of Proposition~\ref{BiggerDim} for the $q$-Schur algebra and using Theorem~\ref{Weyl}.
\begin{corollary} \label{Charp}
Suppose that $\lambda$ and $\mu$ are partitions of $n$. Then
\[\dim(\EHom_{\mathcal{H}_{k,\eta}(\mathfrak{S}_n)}(S^\mu_{k,\eta},S^\lambda_{k,\eta})) \geq \dim(\EHom_{\mathcal{H}_{\mathbb{C},\omega}(\mathfrak{S}_n)}(S^\mu_{\mathbb{C},\omega},S^\lambda_{\mathbb{C},\omega})).\]
\end{corollary}
\begin{proof}
Let $M=(m_{\mathsf{U}\mathsf{T}})$ be the matrix with entries in $\mathcal{Z}$ given by Equation~\ref{MUT}. Then
\[\dim(\EHom_{\mathcal{H}_{k,\eta}(\mathfrak{S}_n)}(S^\mu_{k,\eta},S^\lambda_{k,\eta})) =\corank(M_{k,\eta}) \geq \corank(M_{\mathbb{C},\omega}) = \dim(\EHom_{\mathcal{H}_{\mathbb{C},\omega}(\mathfrak{S}_n)}(S^\mu_{\mathbb{C},\omega},S^\lambda_{\mathbb{C},\omega})).\]
\end{proof}
Now fix $2 \leq e < \infty$ and let $\mathcal{H}=\mathcal{H}_{\mathbb{C},\omega}(\mathfrak{S}_n)$ where $\omega$ is a primitive $e^{\text{th}}$ root of unity in $\mathbb{C}$. If $\nu$ is a partition of $n$, let $\ell(\nu)$ be the number of non-zero parts of $\nu$. We describe $\dim(\EHom_\mathcal{H}(S^\mu,S^\lambda))$ where $\ell(\lambda) \leq 2$ and $\mu_1 \geq \lambda_2$.
Where $\ell(\mu) \leq 3$, we note that the homomorphism space $\EHom_{\mathcal{H}_{F,q}}(S^\mu,S^\lambda)$ has been computed for arbitrary Hecke algebras of type $A$, even for partitions where $\mu_1<\lambda_2$~\cite{Cox,Parker}.
\begin{proposition}[\cite{Cox}] \label{first}
Suppose that $\lambda=(n)$ and that $\mu=(\mu_1,\ldots,\mu_b)$ is a partition of $n$. Then
\[\dim(\EHom_\mathcal{H}(S^\mu,S^\lambda)) = \begin{cases}
1, & \mu=(n), \\
1, & \ell(\mu) \geq 2, \, \mu_1 \equiv -1 \mod e \text{ and } \mu_i=e-1 \text{ for }2 \leq i<b,\\
0, & \text{otherwise}.
\end{cases}\]
\end{proposition}
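The following example, which we include for illustration, shows how the criterion above is applied.
\begin{ex}
Let $e=3$ and $n=10$. The partition $\mu=(5,2,2,1)$ satisfies $\mu_1 = 5 \equiv -1 \mod 3$ and $\mu_2=\mu_3=2=e-1$, so that $\dim(\EHom_\mathcal{H}(S^{(5,2,2,1)},S^{(10)}))=1$; note that no condition is imposed on the final part $\mu_b$. On the other hand, $\mu=(6,2,2)$ has $\mu_1 \equiv 0 \mod 3$, so that $\dim(\EHom_\mathcal{H}(S^{(6,2,2)},S^{(10)}))=0$.
\end{ex}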
For the remainder of Section~\ref{Spaces}, suppose $\lambda$ and $\mu$ are such that $\ell(\lambda)=2$, $\ell(\mu)=b$ and $\mu_1 \geq \lambda_2$. Since $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)=\{0\}$ if $\mu_1 > \lambda_1$ we also assume $\mu_1 \leq \lambda_1$ and $b\geq 2$. For $k \geq 1$, let $N^\mu_k=\#\{1 \leq i \leq b \mid \mu_i =k\}$. If $m\geq 0$, let $m'$ be the integer such that $0\leq m'<e$ and $m\equiv m' \mod e$.
\begin{proposition} \label{OneDim}
We have that
\[\dim(\Hom_{\mathcal{H}}(S^\mu,S^\lambda))\leq 1.\]
\end{proposition}
\begin{proposition}[\cite{LM:rowhoms,Donkin:tilting}]
Suppose that $\mu_1 = \lambda_1$. Let $\bar\lambda=(\lambda_2)$ and $\bar\mu=(\mu_2,\ldots,\mu_b)$. Then
\[\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda)) = \dim(\EHom_{\mathcal{H}}(S^{\bar\mu},S^{\bar\lambda})).\]
\end{proposition}
In fact, this is obvious in our setup since the matrices $M=(m_{\mathsf{U}\mathsf{T}})$ obtained in both cases are the same.
\begin{proposition}[\cite{Cox}]
Suppose that $\ell(\mu)=2$ and $\lambda_1>\mu_1$. Then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=1$ if and only if
\begin{itemize}
\item $\lambda_1 - \mu_1 <e$ and $\mu_1-\lambda_2+1 \equiv 0$.
\end{itemize}
\end{proposition}
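Again purely for illustration, we include an example of the two-row criterion above.
\begin{ex}
Let $e=3$, $\mu=(4,4)$ and $\lambda=(6,2)$. Then $\lambda_1-\mu_1=2<e$ and $\mu_1-\lambda_2+1=3 \equiv 0 \mod 3$, so that $\dim(\EHom_{\mathcal{H}}(S^{(4,4)},S^{(6,2)}))=1$. Taking instead $\lambda=(7,1)$, we have $\mu_1-\lambda_2+1=4 \not\equiv 0 \mod 3$ (and $\lambda_1-\mu_1=3 \not< e$), so the homomorphism space is zero.
\end{ex}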
\begin{proposition}[\cite{Parker}]
Suppose that $\ell(\mu)=3$ and $\lambda_1>\mu_1$. Then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=1$ if and only if
\begin{itemize}
\item $\mu_2=e-1$ and $\mu_1-\lambda_2+1 \equiv 0$ and $\lambda_2 \leq \mu_3$; or
\item $\mu_2+1 \equiv 0$ and $\mu_1-\lambda_2+1 \equiv 0$ and $\lambda_2 \geq \mu_3$ and $\mu_3 \leq e-1$ and $\lambda_1-\mu_1<e$; or
\item $\mu_1+2 \equiv 0$ and $\mu_2 = \lambda_2$ and $\mu_3 \leq e-1$; or
\item $\mu_1+2 \equiv 0$ and $\mu_2 = \lambda_2$ and $\mu_3 \leq 2e-2$ and $(\lambda_2+1)'>\mu_3'$; or
\item $\mu_1+2 \equiv 0$ and $\mu_2 \neq \lambda_2$ and $\lambda_2>\mu_3$ and $\lambda_1-\mu_2+1 \equiv 0$ and $\mu_3 \leq e-1$ and $\lambda_1 -\mu_1 <e$.
\end{itemize}
\end{proposition}
\begin{proposition}
Suppose that $\ell(\mu) \geq 4$, that $\lambda_1>\mu_1$ and that $\mu_3=e-1$. Then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=1$ if and only if $\mu_{b-1}=e-1$ and
\begin{itemize}
\item $\mu_2=e-1$ and $\mu_1-\lambda_2+1 \equiv 0$; or
\item $\mu_2+1 \equiv 0$ and $\mu_1-\lambda_2+1 \equiv 0$ and $\lambda_2 \geq \mu_2$ and $\lambda_1<\mu_1+\mu_2$ and $\lambda_1-\mu_1 <e$; or
\item $\mu_1+2 \equiv 0$ and $\lambda_1-\mu_2+1 \equiv 0$ and $\lambda_2 \geq \mu_2$ and $\lambda_1<\mu_1+\mu_2$ and $\lambda_1-\mu_1 <e$; or
\item $\mu_1+2 \equiv 0$ and $\mu_2 \equiv \lambda_2$ and $\lambda_2 \geq \mu_2$ and $(\mu_2+1)'\leq \lambda_1-\mu_1$.
\end{itemize}
\end{proposition}
Now say that the partition $\mu$ has a good shape if it satisfies the following properties.
\begin{itemize}
\renewcommand{\labelitemi}{$-$}
\item $\ell(\mu) \geq 4$; and
\item $\mu_1 +2 \equiv \mu_2+2 \equiv 0 \mod e$; and
\item $\mu_3 \leq 2e-2$; and
\item $\#\{3 \leq i \leq b \mid \mu_i \neq 2e-2,e-1\} \leq 2$; and
\item if $N^\mu_{e-1} >0$ then $\#\{3 \leq i \leq b \mid e-1>\mu_i\}\leq 1$ and $\#\{3 \leq i \leq b \mid 2e-2>\mu_i>e-1\} \leq 1$.
\end{itemize}
If $\mu$ has a good shape, define $\mu^\ast$ to be the partition given by
\[\mu^\ast = (\mu_1 + (N^\mu_{e-1}+1)(e-1),\mu_2+(N^\mu_{e-1}-1)(e-1)).\]
Let \[\alpha = \min\{3 \leq i \leq b \mid \mu_i \neq 2e-2,e-1\},\] with $\alpha=b+1$ if no such integer exists, and let \[\beta = \min\{\alpha < i \leq b \mid \mu_i \neq 2e-2,e-1\},\] with $\beta=b+1$ if no such integer exists. (We assume $\mu_{b+1}=0$.) If $\sigma$ and $\tau$ are two compositions, define the composition $\sigma+\tau$ by $(\sigma+\tau)_i=\sigma_i+\tau_i$ for all $i$.
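The following example, included for illustration, works through the definitions above.
\begin{ex}
Let $e=3$, so that $e-1=2$ and $2e-2=4$, and let $\mu=(7,4,4,2,1)$, so that $b=5$. Then $\ell(\mu) \geq 4$; $\mu_1+2=9 \equiv \mu_2+2=6 \equiv 0 \mod 3$; $\mu_3=4 \leq 2e-2$; the only $3 \leq i \leq b$ with $\mu_i \neq 4,2$ is $i=5$; and $N^\mu_{2}=1>0$ with $\#\{3 \leq i \leq b \mid \mu_i<2\}=1$ and $\#\{3 \leq i \leq b \mid 2<\mu_i<4\}=0$. Hence $\mu$ has a good shape, with $\mu^\ast=(7+2\cdot 2,4+0\cdot 2)=(11,4)$, $\alpha=5$ and $\beta=b+1=6$, so that $\mu_\beta=0$. By contrast, $\mu=(7,5,4,2)$ does not have a good shape, since $\mu_2+2=7 \not\equiv 0 \mod 3$.
\end{ex}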
\begin{proposition}
Suppose that $\lambda_1>\mu_1$, that $\mu$ has a good shape and that $N^\mu_{e-1}=0$.
Set
\begin{align*}
\mu^{(1)}&=\mu^\ast+(\mu_\beta,\mu_\alpha), \\
\mu^{(2)}&=\mu^\ast+(\mu_\alpha-e+1,\mu_\beta+(e-1)).
\end{align*}
Then we have the following homomorphisms.
\begin{itemize}
\item Suppose that $e-1<\mu_\beta$ or $\mu_\alpha<e-1$. Then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=1$ if and only if $\lambda=\mu^{(1)}$.
\item Suppose that $0<\mu_\beta<e-1<\mu_\alpha$. Then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=1$ if and only if $\lambda=\mu^{(1)}$ or $\lambda=\mu^{(2)}$.
\item Suppose that $\mu_\beta=0$ and $e-1<\mu_\alpha$. Then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=1$ if and only if $\lambda=\mu^{(2)}$.
\end{itemize}
\end{proposition}
\begin{proposition}
Suppose that $\lambda_1>\mu_1$, that $\mu$ has a good shape and that $N^\mu_{e-1}\geq 1$. Suppose $k,m \geq 0$ are such that
\begin{align*}
\mu_\beta+(N^\mu_{e-1}+1)(e-1)& \geq \mu_\alpha+ke, \\
\mu_\alpha+(N^\mu_{e-1}-1)(e-1) &\geq me.
\end{align*}
Set
\begin{align*}
\mu^{(1)}&=\mu^\ast+(\mu_\beta,\mu_\alpha+N^\mu_{e-1}(e-1)), \\
\mu^{(2)}&=\mu^\ast+(\mu_\alpha-e+1,\mu_\beta+(N^\mu_{e-1}+1)(e-1)), \\
\mu^{(3,m)}&=\mu^\ast+(\mu_\alpha+(N^\mu_{e-1}-1)(e-1)-me,me+e-1), \\
\mu^{(4,k)}&=\mu^\ast+(\mu_\beta+N^\mu_{e-1}(e-1)-ke,\mu_\alpha+ke).
\end{align*}
Then we have the following homomorphisms.
\begin{itemize}
\item If $0=\mu_\beta \leq \mu_\alpha<e-1$ then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=1$ if and only if $\lambda =\mu^{(1)}$ or $\lambda=\mu^{(3,m)}$ for some $m$ as above.
\item If $\mu_\beta<e-1<\mu_\alpha$ then $\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=1$ if and only if $\lambda=\mu^{(2)}$ or $\lambda=\mu^{(4,k)}$ for some $k$ as above.
\end{itemize}
\end{proposition}
\begin{proposition} \label{last}
Suppose that $\ell(\mu) \geq 4$, that $\lambda_1 > \mu_1$, that $\mu_3 \neq e-1$ and that $\mu$ does not have a good shape. Then
$\dim(\EHom_{\mathcal{H}}(S^\mu,S^\lambda))=0$.
\end{proposition}
Combining Propositions~\ref{first} to~\ref{last} above completely classifies the homomorphism space $\EHom_{\mathcal{H}}(S^\mu,S^\lambda)$ where $\mathcal{H}=\mathcal{H}_{\mathbb{C},\omega}(\mathfrak{S}_n)$, $\ell(\lambda)\leq 2$ and $\mu_1 \geq \lambda_2$.
Our proof of these results proceeds by case-by-case analysis; we are doing nothing more than solving systems of homogeneous linear equations. Unfortunately, there are many cases to check and the resulting computations are repetitive and formulaic. We do not, therefore, propose to prove all the propositions in this paper. In Section~\ref{SpaceProof}, we highlight the methods used and illustrate them with some examples.
The computations that helped lead us to these results were carried out using GAP~\cite{GAP}.
\section{Proofs} \label{Proofs}
In this section, we give the proofs of Theorem~\ref{hdtthm} and Theorem~\ref{Lemma7} and indicate the proof of Propositions~\ref{first} to~\ref{last}. For obvious reasons, this section is more technical than those preceding it.
\subsection{Proof of Theorem~\ref{hdtthm}} \label{hdtproof}
Let $\mathcal{H}=\mathcal{H}_{R,q}(\mathfrak{S}_n)$ and fix a partition $\mu=(\mu_1,\ldots,\mu_b) \vdash n$. For $1 \leq d \leq b-1$ and $1 \leq t \leq \mu_{d+1}$, recall that
\[h_{d,t} = \CC(\bar{\mu}_{d-1};\mu_d,t),\]
and that $\mathcal{I}$ is the right ideal of $\mathcal{H}$ generated by
\[\{m_\mu h_{d,t} \mid 1 \leq d \leq b-1, 1 \leq t \leq \mu_{d+1}\}.\]
We prove Theorem~\ref{hdtthm}, that is, that
\[\mathcal{I} = M^\mu \cap \mathcal{H}^{\rhd\mu}.\]
Let $\mathcal{D}_\mu$ be a set of minimal length right coset representatives for $\mathfrak{S}_\mu$ in $\mathfrak{S}_n$ and recall~\cite[Propn.~3.3]{M:ULect} that $\mathcal{D}_\mu = \{ d \in \mathfrak{S}_n \mid \mathfrak{t}^\mu d \in \rowstd(\mu)\}$.
\begin{lemma}[{\cite[Cor.~3.4]{M:ULect}}]
The module $M^\mu$ is a free $R$-module with basis
\[\{m_\mu T_{d(\mathfrak{t})} \mid \mathfrak{t} \in \rowstd(\mu)\},\]
with the action of $\mathcal{H}$ determined by
\[m_\mu T_{d(\mathfrak{t})} T_{i} = \begin{cases}
q m_{\mu} T_{d(\mathfrak{t})}, & i,i+1 \text{ lie in the same row of }\mathfrak{t}, \\
m_\mu T_{d(\mathfrak{s})}, & i\text{ lies above } i+1 \text{ in } \mathfrak{t}, \\
q m_\mu T_{d(\mathfrak{s})} + (q-1)m_\mu T_{d(\mathfrak{t})}, & i\text{ lies below } i+1 \text{ in } \mathfrak{t},
\end{cases}\]
where $\mathfrak{s}=\mathfrak{t}(i,i+1)$.
\end{lemma}
If $v \in \mathfrak{S}_\mu$ and $d \in \mathcal{D}_\mu$ then $\ell(vd)=\ell(v)+\ell(d)$ and $T_{vd}=T_vT_d$.
Therefore if $w \in \mathfrak{S}_n$ then $w=vd$ for some $v \in \mathfrak{S}_{\mu}$ and $d \in \mathcal{D}_\mu$, so that $m_\mu T_w = q^{\ell(v)} m_{\mu}T_{d}$. Equally, if $\mathfrak{t}^\mu w = \mathfrak{s}$, let $\dot{\mathfrak{s}}$ be the row-standard tableau obtained by rearranging the entries in each row of $\mathfrak{s}$. Then $d=d(\dot{\mathfrak{s}})$ and $m_\mu T_w = q^{\ell(v)} m_\mu T_{d(\dot{\mathfrak{s}})}$.
If $\nu$ is a composition of $n$, define an equivalence relation $\sim_r$ on $\mathcal{T}(\nu,\mu)$ by saying that $\mathsf{S}\sim_r \mathsf{T}$ if $\mathsf{S}_j^i = \mathsf{T}_j^i$ for all $i,j$.
Recall that if $\mathfrak{s} \in \mathcal{T}(\nu)$, then $\mu(\mathfrak{s}) \in \mathcal{T}(\nu,\mu)$ is the tableau obtained by replacing each integer $i\geq 1$ with its row index in $\mathfrak{t}^\mu$.
Now if $\mathsf{S} \in \mathcal{T}(\nu,\mu)$ and $\mathfrak{t} \in \rowstd(\nu)$, define
\[m_{\mathsf{S}\mathfrak{t}} = \sum_{{\mathfrak{s} \in \rowstd(\nu) \atop \mu(\mathfrak{s})=\mathsf{S}}} m_{\mathfrak{s}\mathfrak{t}}.\]
\begin{lemma}[{\cite[Thm.~4.9]{M:ULect}}] \label{mstbasis}
The right ideal $M^\mu \cap \mathcal{H}^{\rhd\mu}$ has a basis
\[\{m_{\mathsf{S}\mathfrak{t}} \mid \mathsf{S} \in \mathcal{T}_0(\nu,\mu), \mathfrak{t} \in \Std(\nu) \text{ for some } \nu \vdash n \text{ such that } \nu \rhd \mu\}.\]
\end{lemma}
\begin{lemma}[{\cite[Eqn.~4.6]{M:ULect}}] \label{leftright}
Suppose $\nu$ is a composition of $n$ and $\mathsf{S} \in \mathcal{T}(\nu,\mu)$. Then
\[m_{\mathsf{S}\mathfrak{t}^\nu}=\sum_{\mathsf{S}' \sim_r \mathsf{S}} m_\mu T_{\mathsf{S}'}.\]
\end{lemma}
\begin{ex}
Let $\nu=(3,2)$ and $\mu=(2,2,1)$. Let $\mathsf{S}=\tab(112,23)$. Then
\[m_{\mathsf{S}\mathfrak{t}^{(3,2)}} = (1+T_3)m_{(3,2)} = m_{(2,2,1)}(I+T_2+T_2T_1)(I+T_4).\]
\end{ex}
\begin{lemma}[{\cite[Lemma~3.10]{M:ULect}}] \label{minihigh}
Suppose $\nu$ is any composition of $n$. Let $\lambda$ be the partition obtained by rearranging the parts of $\nu$. If $\mathfrak{s},\mathfrak{t} \in \rowstd(\nu)$ then $m_{\mathfrak{s}\mathfrak{t}}$ is an $R$-linear combination of elements of the form $m_{\mathfrak{u}\mathfrak{v}}$ where $\mathfrak{u}$ and $\mathfrak{v}$ are row-standard $\lambda$-tableaux.
\end{lemma}
\begin{lemma}\label{higher}
Suppose $\nu$ is any composition of $n$ such that $\nu \rhd \mu$ and $\mathfrak{u},\mathfrak{v} \in \rowstd(\lambda)$, where $\lambda$ is the partition obtained by rearranging the parts of $\nu$. Then $m_{\mathfrak{u}\mathfrak{v}} \in \mathcal{H}^{\rhd\mu}$.
\end{lemma}
\begin{proof}
This follows from Lemma~\ref{minihigh}, noting that $\lambda \unrhd \nu \rhd \mu$.
\end{proof}
Now suppose $1 \leq d < b$ and $1 \leq t \leq \mu_{d+1}$ and let $\nu=\nu(d,t)$ be the composition given by
\[\nu_i=\begin{cases}
\mu_{i}+t, & i=d, \\
\mu_{i}-t, & i=d+1, \\
\mu_{i}, & \text{otherwise}.
\end{cases}\]
Let $\mathsf{S}=\mathsf{S}_{d,t}$ be the row-standard $\nu$-tableau such that $\mathsf{S}^{d}_d = \mu_d$, $\mathsf{S}^{d+1}_d = t$ and $\mathsf{S}^i_i = \nu_i$ for all $i \neq d$. By Lemma~\ref{leftright}, $m_\mu h_{d,t} = m_{\mathsf{S}\mathfrak{t}^\nu} \in \mathcal{H}^{\rhd\mu}$. The next result then follows by Lemma~\ref{higher}, noting that $M^\mu \cap \mathcal{H}^{\rhd\mu}$ is a right ideal of $\mathcal{H}$.
\begin{corollary} \label{halfsub}
We have
\[\mathcal{I} \subseteq M^\mu \cap \mathcal{H}^{\rhd\mu}.\]
\end{corollary}
Now we introduce some new notation which will help us describe the elements $m_{\mathsf{S}\mathfrak{t}}$. If $\mathsf{S}\in \RowT(\nu,\mu)$, let $\mathfrak{t}_\mathsf{S} \in \rowstd(\mu)$ be the row-standard $\mu$-tableau in which $i$ is in row $r$ if the place occupied by $i$ in $\mathfrak{t}^\nu$ is occupied by $r$ in $\mathsf{S}$. If $\mathfrak{t}_\mathsf{S} = \mathfrak{t}^\mu w$ then define $T_\mathsf{S}=T_w$.
\begin{corollary} \label{DescribeS}
Suppose $\nu \vdash n$ and $\mathsf{S} \in \RowT(\nu,\mu)$. Then
\[m_{\mathsf{S}\mathfrak{t}^\nu} = m_\mu T_\mathsf{S} \prod_{i\geq 1} \CC(\bar{\nu}_{i-1}; \mathsf{S}^1_i,\ldots,\mathsf{S}^b_i).\]
\end{corollary}
Recall that the length $\ell(w)$ of a permutation $w \in \mathfrak{S}_n$ may be determined by
\begin{align} \label{length} \ell(w)=\#\{ (i,j) \mid 1 \leq i <j \leq n \text{ and } i w > j w\},\end{align}
and that if $w=uv \in \mathfrak{S}_n$ is such that $\ell(w)=\ell(u)+\ell(v)$ then $T_w=T_uT_v$. If $s \geq r \geq 1$ and $x \geq 1$ define
\begin{align*}
\pi(s,r)&=(r,r+1,\ldots,s), \\
\D(s,r)&= T_{\pi(s,r)} = T_{s-1}T_{s-2}\ldots T_r, \\
\intertext{and}
\pi^\flat(s,r,x) &= \prod_{j=1}^x \pi(s+j,r+j), \\
\D^\flat(s,r,x) &= T_{\pi^\flat(s,r,x)}=\prod_{j=1}^x \D(s+j,r+j).
\end{align*}
Note that the last identity holds since $\ell({\pi^\flat(s,r,x)}) = x(s-r)$. Observe that if $s \geq t \geq r$ then
\[\D^\flat(s,r,x) = \D^\flat(s,t,x)\D^\flat(t,r,x).\]
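The following small example, which we include as a check of the definitions, may be helpful.
\begin{ex}
Take $s=3$, $r=1$, $x=2$ and $t=2$. Then $\D^\flat(3,1,2)=\D(4,2)\D(5,3)=T_3T_2T_4T_3$, while $\D^\flat(3,2,2)\D^\flat(2,1,2)=\D(4,3)\D(5,4)\D(3,2)\D(4,3)=T_3T_4T_2T_3$. Both words represent the permutation $\pi^\flat(3,1,2)$, which interchanges $2$ with $4$ and $3$ with $5$ and has length $4=x(s-r)$.
\end{ex}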
\begin{lemma} \label{GetOrder}
Suppose $\nu \vdash n$ and $\mathsf{S} \in \RowT(\nu,\mu)$.
Write $\mathfrak{s}=\mathfrak{t}_{\mathsf{S}}$. Let $\mathfrak{s}(0)=\mathfrak{t}^\mu$ and, for $1 \leq i \leq n$, let
\[\mathfrak{s}(i) = \mathfrak{s}(i-1)\pi(i^\ast,i)\]
where $i^\ast$ occupies the same position in $\mathfrak{s}(i-1)$ that $i$ occupies in $\mathfrak{s}$. Then $\mathfrak{s}(i)$ is the row-standard $\mu$-tableau with the entries $1,\ldots,i$ occupying the same positions that they occupy in $\mathfrak{s}$ and with all other entries in row order, and furthermore
\begin{equation} \label{Eqni}
T_{d(\mathfrak{s}(i))} = \prod_{j=1}^i \D(j^\ast,j).\end{equation}
In particular,
\[m_{\mu}T_\mathsf{S} = m_\mu \prod_{j=1}^n \D(j^\ast,j).\]
\end{lemma}
\begin{proof}
The description of $\mathfrak{s}(i)$ is easily seen by induction on $i$. Equation~\ref{Eqni} follows, using induction or otherwise, by observing that
\[\ell(d(\mathfrak{s}(i))) = \sum_{j=1}^i \ell(\pi(j^\ast,j)).\]
\end{proof}
For $m,a,b \geq 0$ define
\[\seq{m}{a}{b}=\{{\bf i}=(i_1,\ldots,i_b) \mid m+1 \leq i_1<\ldots<i_b\leq m+a+b\}.\]
\begin{lemma}
Let $m \geq 0$ and let $(a,b)$ be a composition. Then
\[\CC(m;a,b)= \sum_{{\bf i} \in \seq{m}{a}{b}} \prod_{k=1}^{b} \D(m+a+k,i_k).\]
\end{lemma}
\begin{proof}
We may assume $m=0$. Let ${\bf i} \in \seq{0}{a}{b}$. If $w = \prod_{k=1}^{b}\pi(a+k,i_k)$ then by Equation~\ref{length},
\[\ell(w)= \sum_{k=1}^{b}(a+k-i_k)=\sum_{k=1}^b \ell(\pi(a+k,i_k)),\]
so that $T_w = \prod_{k=1}^{b}\D(a+k,i_k)$. Now recall that $w \in \mathcal{D}_{0,(a,b)}$ if and only if $\mathfrak{t}^{(a,b)} w \in \rowstd((a,b))$ and observe that $\mathfrak{t}^{(a,b)} w$ is precisely the row-standard $(a,b)$-tableau with the entries $i_1,\ldots,i_{b}$ in the second row.
\end{proof}
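As a small illustration of the lemma (our example), compare with the elements $h_{d,t}$ recalled above.
\begin{ex}
Let $m=0$, $a=3$ and $b=1$, so that $\seq{0}{3}{1}=\{(i_1) \mid 1 \leq i_1 \leq 4\}$. Then
\[\CC(0;3,1)=\sum_{i_1=1}^{4}\D(4,i_1)=I+T_3+T_3T_2+T_3T_2T_1,\]
which is the element $h_{1,1}$ for a partition $\mu$ with $\mu_1=3$.
\end{ex}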
\begin{corollary} \label{RearrangeCor}
Suppose that $m,a,b \geq 0$ and $\bar{m} \geq m+a$. Then
\[\sum_{{\bf i} \in \seq{m}{a}{b}} \prod_{k=1}^b \D(\bar{m}+k,i_k) = \D^\flat(\bar{m},m+a,b)\CC(m;a,b).\]
\end{corollary}
\begin{proof}
\begin{align*}
\sum_{{\bf i} \in \seq{m}{a}{b}} \prod_{k=1}^b \D(\bar{m}+k,i_k) &= \sum_{{\bf i} \in \seq{m}{a}{b}} \prod_{k=1}^b \D(\bar{m}+k,m+a+k) \D(m+a+k,i_k) \\
&= \prod_{k=1}^b \D(\bar{m}+k,m+a+k) \times \sum_{{\bf i} \in \seq{m}{a}{b}} \prod_{k=1}^b \D(m+a+k,i_k) \\
& = \D^\flat(\bar{m},m+a,b)\CC(m;a,b).
\end{align*}
\end{proof}
\begin{lemma} \label{PullsThrough}
Suppose $m,a,b \geq 0$ and $\bar{m} \geq m$. Then
\[\CC(\bar{m};a,b)\D^\flat(\bar{m},m,a+b) = \D^\flat(\bar{m},m,a+b)\CC(m;a,b).\]
\end{lemma}
\begin{proof}
If ${\bf i} =(i_1,\ldots,i_b) \in \seq{m}{a}{b}$, define ${\bf \bar{i}} = (\bar{i}_1,\ldots,\bar{i}_b) \in \seq{\bar{m}}{a}{b}$ by setting $\bar{i}_k = i_k+\bar{m}-m$ for all $k$. We claim that
\[\prod_{k=1}^b \D(\bar{m}+a+k,\bar{i}_k) \times \D^\flat(\bar{m},m,a+b) = \D^\flat(\bar{m},m,a+b) \times \prod_{k=1}^b \D(m+a+k,i_k)\]
for all ${\bf i} \in \seq{m}{a}{b}$. Let $\eta=(\bar{m},a,b)$ and let $\mathfrak{t} \in \rowstd(\eta)$ be the tableau containing $1,\ldots,m,m+a+b+1,\ldots,\bar{m}+a+b$ in the first row and $i_1,\ldots,i_b$ in the third row. Let $w$ be the permutation such that $\mathfrak{t}^\eta w = \mathfrak{t}$. It is sufficient to check that
\[\prod_{k=1}^b \pi(\bar{m}+a+k,\bar{i}_k) \times \pi^\flat(\bar{m},m,a+b) = w=\pi^\flat(\bar{m},m,a+b) \times \prod_{k=1}^b \pi(m+a+k,i_k)\]
and that \[\ell(w) = (\bar{m}-m)(a+b) + \sum_{k=1}^b (m+a+k-i_k),\]
which is a routine exercise.
\end{proof}
\begin{lemma} \label{PullsThrough2}
Suppose $m,a,b,r \geq 0$. Then
\[\CC(m;a,b)\D^\flat(m+a+b,m,r) = \D^\flat(m+a+b,m,r)\CC(m+r;a,b).\]
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma~\ref{PullsThrough}; we consider tableaux of shape $\eta=(m,a,b,r)$.
\end{proof}
\begin{lemma} \label{IntoThree}
Let $m\geq 0$ and let $\eta=(\eta_1,\eta_2,\ldots,\eta_l)$ be a composition. Suppose $0 \leq x\leq l$. Then
\[\CC(m;\eta_1,\eta_2,\ldots,\eta_l) = \CC(m;\eta_1,\ldots,\eta_x)\CC(m+\bar{\eta}_x; \eta_{x+1},\ldots,\eta_l)\CC(m; \bar{\eta}_x, \bar{\eta}_l-\bar{\eta}_x).\]
\end{lemma}
\begin{proof}
Again, we may assume that $m =0$. Let $\mathfrak{t} \in \rowstd(\eta)$ and let $w$ be the permutation such that $\mathfrak{t} = \mathfrak{t}^\eta w$. Suppose that the entries in rows $1,\ldots,x$ of $\mathfrak{t}$ are $j_1<\ldots<j_{\bar\eta_x}$ and that the entries in rows $x+1,\ldots,l$ are $i_{\bar\eta_x+1}<\ldots<i_{\bar\eta_l}$. Let $w_1$ be the permutation of $\{1,\ldots,\bar\eta_x\}$ which sends $1 \leq \alpha \leq \bar\eta_x$ to $\alpha^\ast$, where $j_{\alpha^\ast}$ occupies the same position in $\mathfrak{t}$ that $\alpha$ occupies in $\mathfrak{t}^\eta$. Similarly, let $w_2$ be the permutation of $\{\bar{\eta}_x+1,\ldots,\bar\eta_l\}$ which sends $\bar{\eta}_x+1 \leq \alpha \leq \bar\eta_l$ to $\alpha^\ast$, where $i_{\alpha^\ast}$ occupies the same position in $\mathfrak{t}$ that $\alpha$ occupies in $\mathfrak{t}^\eta$.
Finally let $w_3 = \prod_{k=\bar\eta_x+1}^{\bar\eta_l} \pi(k,i_k)$. It is clear that $w_1 \in C(0;\eta_1,\ldots,\eta_x)$, $w_2 \in C(\bar\eta_{x};\eta_{x+1},\ldots,\eta_l)$ and $w_3 \in C(0; \bar\eta_x,\bar\eta_l-\bar\eta_x)$.
We leave it as an exercise to check that $w=w_1w_2w_3$ and that $\ell(w)=\ell(w_1)+\ell(w_2)+\ell(w_3)$.
The proof of Lemma~\ref{IntoThree} follows by counting the number of terms on both sides of the equation.
\end{proof}
\begin{lemma} \label{termstofront}
Suppose $\nu$ is a partition of $n$ with $\nu \rhd \mu$ and $\mathsf{S}\in \mathcal{T}_0(\nu,\mu)$. Choose $k$ minimal such that $\nu_k>\mu_k$ and $r>k$ minimal such that $\mathsf{S}^r_k>0$. Then
\[m_{\mathsf{S} \mathfrak{t}^\nu} = m_\mu \D^\flat(\bar\mu_{r-1},\bar\mu_k,\mathsf{S}^r_k)\CC(\bar\mu_{k-1};\mu_k,\mathsf{S}^r_k) h\]
for some $h \in \mathcal{H}$.
\end{lemma}
\begin{proof}
Using Lemma~\ref{DescribeS} and Lemma~\ref{IntoThree},
\begin{align*}
m_{\mathsf{S} \mathfrak{t}^\nu} & = m_\mu T_\mathsf{S} \prod_{l\geq k} \CC\left(\bar\nu_{l-1}; \mathsf{S}^l_l,\ldots, \mathsf{S}^b_l \right) \\
&=m_\mu T_\mathsf{S} \CC(\bar\mu_{k-1};\mu_k,\mathsf{S}^r_k,\ldots,\mathsf{S}^{b}_k)\prod_{l>k } \CC\left(\bar\nu_{l-1}; \mathsf{S}^l_l,\ldots, \mathsf{S}^b_l\right) \\
&=m_\mu T_\mathsf{S} \CC(\bar\mu_{k-1};\mu_k,\mathsf{S}^r_k)\CC(\bar\mu_{k}+\mathsf{S}^r_k;\mathsf{S}^{r+1}_k,\ldots,\mathsf{S}^b_k)\CC(\bar\mu_{k-1};\mu_k+\mathsf{S}^r_k,\nu_k-\mu_k-\mathsf{S}^r_k)\\
&\hspace{10mm}\times \prod_{l>k}\CC\left(\bar\nu_{l-1}; \mathsf{S}^l_l,\ldots,\mathsf{S}^{b}_l\right).
\end{align*}
As in Lemma~\ref{GetOrder}, and keeping the notation of that lemma, we may write
\[m_\mu T_\mathsf{S} = m_\mu \prod_{l = \bar{\mu}_{k}+1}^n \D(l^\ast,l) = m_\mu \D^\flat(\bar\mu_{r-1},\bar\mu_k,\mathsf{S}^r_k)\check h\]
where
\[\check h = \prod_{l=\bar\mu_k+\mathsf{S}^r_k+1}^n \D(l^\ast,l) \]
commutes with $\CC(\bar\mu_{k-1};\mu_k,\mathsf{S}^r_k)$. The result follows.
\end{proof}
\begin{lemma}\label{killsall}
Suppose that $1\leq k< r \leq b$ and $1 \leq x \leq \mu_{r}$. Then
\[m_\mu \D^\flat(\bar\mu_{r-1},\bar\mu_{k},x)\CC(\bar\mu_{k-1};\mu_{k},x) \in \mathcal{I}.\]
\end{lemma}
\begin{proof}
It is straightforward to see that the proof for arbitrary $k$ is identical to the proof for $k=1$, so we assume that $k=1$.
We now prove that the lemma holds for all $2 \leq r \leq b$.
If $r=2$ and $1 \leq x \leq \mu_2$ then
\[m_\mu \CC(0;\mu_1,x) = m_\mu h_{1,x} \in \mathcal{I}.\]
So now suppose that $3 \leq r \leq b$ and that Lemma~\ref{killsall} holds for all $r' < r$. Choose $x$ with $1 \leq x \leq \mu_r$. Recall that if $1 \leq t \leq \mu_{r}$ then
\[h_{r-1,t} = \sum_{{\bf i} \in \seq{\bar\mu_{r-2}}{\mu_{r-1}}{t}} \left(\prod_{l=1}^t \D(\bar\mu_{r-1}+l,i_l)\right).\]
We have
\[m_\mu \D^\flat(\bar\mu_{r-1},\mu_1,x)\CC(0;\mu_{1},x) = m_\mu \D^\flat(\bar\mu_{r-1},\bar\mu_{r-2},x) \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x) \]
where
\[\D^\flat(\bar\mu_{r-1},\bar\mu_{r-2},x) = h_{r-1,x}-\sum_{j=1}^{\mu_{r-1}} \sum_{{\bf i} \in \seq{\bar\mu_{r-2}}{j}{x-1}} \prod_{l=1}^{x-1} \Big( \D(\bar\mu_{r-1}+l,i_l) \Big) \D(\bar\mu_{r-1}+x,\bar\mu_{r-2}+x+j) \]
so that it is sufficient to show that
\begin{equation} \label{toshow} m_\mu \sum_{j=1}^{\mu_{r-1}} \sum_{{\bf i} \in \seq{\bar\mu_{r-2}}{j}{x-1}} \prod_{l=1}^{x-1} \Big( \D(\bar\mu_{r-1}+l,i_l)\Big) \D(\bar\mu_{r-1}+x,\bar\mu_{r-2}+x+j) \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x) \in \mathcal{I}.
\end{equation}
Now for $1 \leq j \leq \mu_{r-1}$ we may write
\begin{multline*} \sum_{{\bf i} \in \seq{\bar\mu_{r-2}}{j}{x-1}} \prod_{l=1}^{x-1} \D(\bar\mu_{r-1}+l,i_l)\\
= \sum_{y=\max\{0,x-j\}}^{x-1} \left(\Big(\sum_{{\bf i} \in \seq{\bar\mu_{r-2}}{x-y}{y}} \prod_{l=1}^y \D(\bar\mu_{r-1}+l,i_l)\Big)
\Big(\sum_{{\bf i} \in \seq{\bar\mu_{r-2}+x}{j-x+y}{x-y-1}} \prod_{l=1}^{x-y-1} \D(\bar\mu_{r-1}+y+l,i_l)\Big)
\right)
\end{multline*}
so that, substituting into Equation~\ref{toshow} and commuting terms to the left where possible, we must show that
\begin{multline*}
m_\mu \sum_{y=0}^{x-1} \sum_{j=x-y}^{\mu_{r-1}}\sum_{{\bf i} \in \seq{\bar\mu_{r-2}}{x-y}{y}}
\left(\prod_{l=1}^y \D(\bar\mu_{r-1}+l,i_l) \right) \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x) \\
\times \sum_{{\bf i} \in \seq{\bar\mu_{r-2}+x}{j-x+y}{x-y-1}} \left(\prod_{l=1}^{x-y-1} \D(\bar\mu_{r-1}+y+l,i_l) \right)\D(\bar\mu_{r-1}+x,\bar\mu_{r-2}+x+j) \in \mathcal{I}.
\end{multline*}
Consider $y=0$. By the inductive hypothesis,
\[m_\mu \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x) \in \mathcal{I}\]
so that it is sufficient to prove that for $1 \leq y <x$ we have
\[m_\mu \sum_{{\bf i} \in \seq{\bar\mu_{r-2}}{x-y}{y}} \left(\prod_{l=1}^y \D(\bar\mu_{r-1}+l,i_l)\right) \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x)\in \mathcal{I}.\]
Now, using Corollary~\ref{RearrangeCor}, Lemma~\ref{PullsThrough} and Lemma~\ref{IntoThree},
\begin{align*}
\sum_{{\bf i} \in \seq{\bar\mu_{r-2}}{x-y}{y}} \prod_{l=1}^y &\D(\bar\mu_{r-1}+l,i_l)\D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x) \\
& = \D^\flat(\bar\mu_{r-1},\bar\mu_{r-2}+x-y,y)\CC(\bar\mu_{r-2}; x-y,y) \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x) \\
& = \D^\flat(\bar\mu_{r-1},\bar\mu_{r-2}+x-y,y) \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(\mu_1;x-y,y)\CC(0;\mu_1,x) \\
&= \D^\flat(\bar\mu_{r-1},\bar\mu_{r-2}+x-y,y) \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x-y,y) \\
& = \D^\flat(\bar\mu_{r-1},\bar\mu_{r-2}+x-y,y) \D^\flat(\bar\mu_{r-2},\mu_1,x)\CC(0;\mu_1,x-y)\CC(0;\mu_1+x-y,y) \\
& = \D^\flat(\bar\mu_{r-1},\bar\mu_{r-2}+x-y,y) \D^\flat(\bar\mu_{r-2},\mu_1,x-y) \D^\flat(\bar\mu_{r-2}+x-y,\mu_1+x-y,y) \\
&\hspace{1cm}\CC(0;\mu_1,x-y)\CC(0;\mu_1+x-y,y) \\
&= \D^\flat(\bar\mu_{r-2},\mu_1,x-y)\CC(0;\mu_1,x-y) \D^\flat(\bar\mu_{r-1}, \bar\mu_{r-2}+x-y,y) \\
&\hspace{1cm} \D^\flat(\bar\mu_{r-2}+x-y,\mu_1+x-y,y)\CC(0;\mu_1+x-y,y)
\end{align*}
and by the inductive hypothesis again,
\[m_\mu \D^\flat(\bar\mu_{r-2},\mu_1,x-y)\CC(0;\mu_1,x-y)\in \mathcal{I}.\]
\end{proof}
\begin{proposition} \label{AllEqual}
We have
\[M^\mu \cap \mathcal{H}^{\rhd\mu} = \mathcal{I}.\]
\end{proposition}
\begin{proof}
By Corollary~\ref{halfsub}, $\mathcal{I} \subseteq M^\mu \cap \mathcal{H}^{\rhd\mu}$. Now by Lemma~\ref{mstbasis}, $M^\mu \cap \mathcal{H}^{\rhd\mu}$ has a basis
\[\{m_{\mathsf{S} \mathfrak{t}} \mid \mathsf{S} \in \mathcal{T}_0(\nu,\mu),\mathfrak{t} \in \Std(\nu) \text{ for some } \nu \rhd \mu\}.\]
If $\nu \rhd \mu$ and $\mathsf{S} \in \mathcal{T}_0(\nu,\mu), \mathfrak{t} \in \Std(\nu)$ then, since $\mathcal{I}$ is a right ideal, it follows from Lemmas~\ref{termstofront} and~\ref{killsall} that $m_{\mathsf{S} \mathfrak{t}^\nu} \in \mathcal{I}$ and so $m_{\mathsf{S}\mathfrak{t}} \in \mathcal{I}$. Hence $M^{\mu} \cap \mathcal{H}^{\rhd\mu} \subseteq \mathcal{I}$.
\end{proof}
\subsection{Proof of Theorem~\ref{Lemma7}} \label{Lemma7Proof}
In this section, we give the proof of Theorem~\ref{Lemma7}. Let $\hat{q}$ be an indeterminate over $\mathbb{Z}$ and let $\mathcal{Z}=\mathbb{Z}[\hat{q},\hat{q}^{-1}]$. Let $\mathcal{H}^{\mathcal{Z}}=\mathcal{H}_{\mathcal{Z},\hat{q}}(\mathfrak{S}_n)$. We prove Theorem~\ref{Lemma7} for $\mathcal{H}=\mathcal{H}^{\mathcal{Z}}$; the general result follows by specialization.
Let $\lambda=(\lambda_1,\ldots,\lambda_a)$ be a partition of $n$, $\nu=(\nu_1,\ldots,\nu_b)$ a composition of $n$ and $\mathsf{S} \in \RowT(\lambda,\nu)$, where we assume that $a \geq 2$. Our aim is to write $\Theta_\mathsf{S}:M^\nu \rightarrow S^\lambda$ as a linear combination of homomorphisms indexed by tableaux $\mathsf{U} \in \RowT(\lambda,\nu)$. As in the previous examples, we identify $\mathsf{U} \in \RowT(\lambda,\nu)$ with $\Theta_{\mathsf{U}}(m_\nu)$.
\begin{ex} \label{NiceEx}
Let $\lambda=(3,3)$ and $\nu=(2,1,1,1,1)$.
Recall that \[m_\lambda h_{1,1} = m_\lambda(I+T_3+T_3T_2+T_3T_2T_1) \in \mathcal{H}^{\rhd\lambda}.\]
Then
\begin{align*}
\tab(114,235) &= \mathcal{H}^{\rhd\lambda}+m_{\lambda}T_3T_4 \\
&= \mathcal{H}^{\rhd\lambda} + \hat{q}^{-1}m_{\lambda}T_4T_3T_4 \\
&= \mathcal{H}^{\rhd\lambda} + \hat{q}^{-1}m_\lambda T_3T_4T_3 \\
&= \mathcal{H}^{\rhd\lambda} - \hat{q}^{-1}m_\lambda(I+T_3T_2+T_3T_2T_1)T_4T_3 \\
&= \mathcal{H}^{\rhd\lambda} - m_\lambda (T_3 +\hat{q}^{-1}T_3T_2T_4T_3 + \hat{q}^{-1}T_3T_2T_1T_4T_3) \\
&= - \tab(113,245) -\hat{q}^{-1}\tab(134,125).
\end{align*}
\end{ex}
If $\mathsf{U} \in \RowT(\lambda,\nu)$, define $\first(\mathsf{U}) \in \rowstd(\lambda)$ to be the tableau with $\nu(\first(\mathsf{U}))=\mathsf{U}$ and $\{\bar\nu_i+1,\ldots,\bar\nu_{i+1}\}$ in row order, for all $0 \leq i < b$. The following result is given by the definition of the map $\Theta_\mathsf{U}$. (Observe also the `reverse' statement in Corollary~\ref{DescribeS}.)
\begin{lemma} \label{reverse}
Let $\mathsf{U} \in \RowT(\lambda,\nu)$. Then
\[\Theta_{\mathsf{U}}(m_\nu) = \mathcal{H}^{\rhd\lambda}+ m_\lambda T_{d(\first(\mathsf{U}))} \prod_{j=1}^b \CC(\bar\nu_{j-1}; \mathsf{U}^j_1,\ldots,\mathsf{U}^j_a).\]
\end{lemma}
We begin by considering the case where $\lambda=(\lambda_1,\lambda_2)$. If $\mathsf{S} \in \mathrm{RowT}(\lambda,\nu)$ is such that $\mathsf{S}^i_1=\alpha_i$ and $\mathsf{S}^i_2=\beta_i$ for $1 \leq i \leq b$ then we represent $\mathsf{S}$ by
\[\mathsf{S} = \rep{1^{\alpha_1}2^{\alpha_2}\ldots b^{\alpha_b}}{1^{\beta_1} 2^{\beta_2} \ldots b^{\beta_b}}.\]
If a number $i$ does not appear in the top (resp.\ bottom) row it should be understood that $\alpha_i=0$ (resp.\ $\beta_i=0$). Any such representation in which some $\alpha_i<0$ or $\beta_i<0$ should be taken to be zero; so, for example, in Corollary~\ref{basic5} below, if $\alpha_1=0$ then the first term on the right-hand side of the equation should be ignored. We continue to identify $\mathsf{S}$ with $\Theta_\mathsf{S}(m_\nu)$.
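For two-row $\lambda$, this representation identifies a tableau in $\mathrm{RowT}(\lambda,\nu)$ with the vector of pairs $(\alpha_i,\beta_i)$, so the set is finite and easy to enumerate. A small illustrative script, assuming only this combinatorial description (the function name is ours):

```python
from itertools import product

def two_row_tableaux(lam1, nu):
    """All vectors ((alpha_1, beta_1), ..., (alpha_b, beta_b)) with
    alpha_i + beta_i = nu_i and sum(alpha_i) = lam1; for lambda = (lam1, lam2)
    these encode the row-standard lambda-tableaux of type nu."""
    ranges = [range(v + 1) for v in nu]
    return [tuple(zip(alphas, (v - a for v, a in zip(nu, alphas))))
            for alphas in product(*ranges) if sum(alphas) == lam1]

# Example NiceEx: lambda = (3,3), nu = (2,1,1,1,1)
tabs = two_row_tableaux(3, (2, 1, 1, 1, 1))
print(len(tabs))  # 14 tableaux in RowT((3,3),(2,1,1,1,1))
```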
\begin{lemma} \label{Type1n}
Let $0 \leq \alpha \leq \lambda_1$ and $0 \leq \beta < \lambda_2$ and let $\mathsf{S} \in \rowstd(\lambda)$ be the tableau with $1,\ldots,\alpha,\alpha+\beta+2,\ldots,\lambda_1+\beta+1$ in the top row (and $\alpha+1,\ldots,\alpha+\beta+1,\lambda_1+\beta+2,\ldots,\lambda_1+\lambda_2$ in the second row). For $i \in \{1,\ldots,\alpha,\alpha+\beta+2,\ldots,\lambda_1+\beta+1\}$, let $\mathsf{U}_i$ be the row-standard tableau obtained from $\mathsf{S}$ by swapping $\alpha+\beta+1$ with $i$ and rearranging the rows if necessary. Then
\[\Theta_\mathsf{S} = - \hat{q}^{-\beta}\sum_{i=1}^\alpha \Theta_{\mathsf{U}_i} - \sum_{i=\alpha+\beta+2}^{\lambda_1+\beta+1}\Theta_{\mathsf{U}_i}.\]
\end{lemma}
\begin{proof} We use Lemma~\ref{PullsThrough2} and note that $m_\lambda h_{1,1} =m_\lambda C(0;\lambda_1,1) \in \mathcal{H}^{\lambda}$. If $\alpha =0$ then
\begin{align*}
\Theta_{\mathsf{S}}(m_\nu) + \sum_{i= \beta+2}^{\lambda_1+\beta+1} \Theta_{\mathsf{U}_i}(m_\nu)
&= \mathcal{H}^{\lambda} + m_\lambda \D^\flat(\lambda_1,0,\beta) C(\beta; \lambda_1,1) \\
&= \mathcal{H}^{\lambda} + \hat{q}^{-\beta} m_\lambda \D^\flat(\lambda_1+1,0,\beta) C(\beta; \lambda_1,1) \\
&= \mathcal{H}^{\lambda} + \hat{q}^{-\beta} m_\lambda C(0; \lambda_1,1) \D^\flat(\lambda_1+1,0,\beta) \\
&= \mathcal{H}^{\lambda} \\
&=0.
\end{align*}
Else if $\alpha >0$ then
\begin{align*}
\sum_{i=1}^\alpha \Theta_{\mathsf{U}_i}(m_\nu) &= \mathcal{H}^{\lambda} + m_\lambda \Big(\sum_{i=1}^\alpha \D(\lambda_1+1,i)\Big) \D^\flat(\lambda_1+1,\alpha,\beta) \\
& = \mathcal{H}^{\lambda}+ m_\lambda \Big(h_{1,1}-\sum_{i=\alpha+1}^{\lambda_1+1} \D(\lambda_1+1,i)\Big) \D^\flat(\lambda_1+1,\alpha,\beta) \\
&= \mathcal{H}^{\lambda}- m_\lambda C(\alpha;\lambda_1-\alpha,1) \D^\flat(\lambda_1+1,\alpha,\beta) \\
&= \mathcal{H}^{\lambda}- m_\lambda \D^\flat(\lambda_1+1,\alpha,\beta) C(\alpha+\beta; \lambda_1-\alpha,1) \\
&= \mathcal{H}^{\lambda}- \hat{q}^{\beta} m_\lambda \D^\flat(\lambda_1,\alpha,\beta) C(\alpha+\beta;\lambda_1-\alpha,1) \\
&= -\hat{q}^\beta \Theta_\mathsf{S}(m_\nu) - \hat{q}^\beta \sum_{i=\alpha+\beta+2}^{\lambda_1+\beta+1}\Theta_{\mathsf{U}_i}(m_\nu).
\end{align*}
\end{proof}
Using the definition of the maps $\Theta_\mathsf{U}$, we have the following corollary.
\begin{corollary} \label{basic5}
We have that
\[\rep{1^{\alpha_1} 4^{\alpha_4}}{2^{\beta_2} 3^1 5^{\beta_5}} =
-\hat{q}^{-\beta_{2}} \rep{1^{\alpha_1-1} 3^{1} 4^{\alpha_4}}{1^1 2^{\beta_{2}} 5^{\beta_5}}
- \rep{1^{\alpha_1} 3^{1} 4^{\alpha_4-1}}{2^{\beta_2} 4^1 5^{\beta_5}}.
\]
\end{corollary}
\begin{lemma} \lambdabel{cosetattack}
Suppose $1 \leq d \leq b$ is such that $\alpha_d=0$ and $\beta_d=1$. Then
\begin{multline*}\rep{1^{\alpha_1}2^{\alpha_2}\ldots d^0\ldots b^{\alpha_b}}{1^{\beta_1}2^{\beta_2}\ldots d^{1} \ldots b^{\beta_b}}
= - \sum_{i=1}^{d-1} \hat{q}^{-\bar\beta_{d-1}+\bar\beta_{i-1}}[\beta_i+1] \rep{1^{\alpha_1}2^{\alpha_2} \ldots i^{\alpha_i-1} \ldots d^{1} \ldots b^{\alpha_b}}{1^{\beta_1}2^{\beta_2}\ldots i^{\beta_i+1} \ldots d^0 \ldots b^{\beta_b}}\\
- \sum_{i=d+1}^{b} \hat{q}^{-\bar\beta_{d}+\bar\beta_{i-1}}[\beta_i+1] \rep{1^{\alpha_1}2^{\alpha_2}\ldots d^{1} \ldots i^{\alpha_i-1} \ldots b^{\alpha_b}}{1^{\beta_1}2^{\beta_2}\ldots d^0 \ldots i^{\beta_i+1} \ldots b^{\beta_b}}.
\end{multline*}
\end{lemma}
\begin{proof}
Note that
\[\rep{1^{\alpha_1} 2^{\alpha_2}\ldots d^{0} \ldots b^{\alpha_b}}{1^{\beta_1}2^{\beta_2}\ldots d^{1} \ldots b^{\beta_b}}
= \rep{1^{\bar\alpha_{d-1}}4^{\bar\alpha_b-\bar\alpha_d}}{2^{\bar\beta_{d-1}}3^{1}5^{\bar\beta_b-\bar\beta_d}} T_w \prod_{i=1}^b C(\bar\alpha_{i-1}+\bar\beta_{i-1};\alpha_i,\beta_i) \]
where
\[T_w=\prod_{i=1}^{d-1} \D^\flat(\bar\alpha_{d-1}+\bar\beta_{i-1},\bar\alpha_i+\bar\beta_{i-1},\beta_i)\prod_{i=d+1}^{b} \D^\flat(\bar\alpha_{b}+\bar\beta_{i-1},\bar\alpha_i+\bar\beta_{i-1},\beta_i) .\]
Let \[\mathsf{V}= \rep{1^{\bar\alpha_{d-1}}4^{\bar\alpha_b-\bar\alpha_d}}{2^{\bar\beta_{d-1}}3^{1}5^{\bar\beta_b-\bar\beta_d}} \]
and let $\mathfrak{t} \in \rowstd(\lambda)$ be the unique tableau such that $\nu(\mathfrak{t})=\mathsf{V}$. Choose $j$ in the top row of $\mathfrak{t}$ and let $\mathfrak{t}(j) \in \rowstd(\lambda)$ be the tableau obtained by swapping $j$ and $\bar\alpha_{d-1}+\bar\beta_{d-1}+1$ and rearranging the rows. By Lemma~\ref{Type1n},
\[\Theta_\mathfrak{t}=-\hat{q}^{-\bar\beta_{d-1}}\sum_{j=1}^{\bar\alpha_{d-1}} \Theta_{\mathfrak{t}(j)} - \sum_{j=\bar\alpha_{d}+\bar\beta_{d}+1}^{\bar\alpha_b+\bar\beta_d}\Theta_{\mathfrak{t}(j)},\]
that is
\begin{align*}
\rep{1^{\alpha_1} 2^{\alpha_2}\ldots d^{0} \ldots b^{\alpha_b}}{1^{\beta_1}2^{\beta_2}\ldots d^{1} \ldots b^{\beta_b}}
&= \mathcal{H}^{\lambda} -\Big(\hat{q}^{-\bar\beta_{d-1}}\sum_{j=1}^{\bar\alpha_{d-1}}m_\lambda T_{d(\mathfrak{t}(j))} + \sum_{j=\bar\alpha_{d}+\bar\beta_d+1}^{\bar\alpha_b+\bar\beta_d}m_\lambda T_{d(\mathfrak{t}(j))}\Big)T_w \prod_{k=1}^b C(\bar\nu_{k-1};\alpha_k,\beta_k)\\
&= \mathcal{H}^{\lambda} -\Big(\hat{q}^{-\bar\beta_{d-1}} \sum_{i=1}^{d-1} \sum_{j=\bar\alpha_{i-1}+1}^{\bar\alpha_{i}}m_\lambda T_{d(\mathfrak{t}(j))} + \sum_{i=d+1}^b \sum_{j=\bar\beta_d+\bar\alpha_{i-1}+1}^{\bar\alpha_i+\bar\beta_d}m_\lambda T_{d(\mathfrak{t}(j))}\Big) \\
&\hspace*{15mm} T_w \prod_{k=1}^b C(\bar\nu_{k-1};\alpha_k,\beta_k).
\end{align*}
Now choose $i$ with $1 \leq i \leq d-1$. We claim that
\[\sum_{j=\bar\alpha_{i-1}+1}^{\bar\alpha_i} m_\lambda T_{d(\mathfrak{t}(j))} T_w \prod_{k=1}^b C(\bar\nu_{k-1};\alpha_k,\beta_k) =
\hat{q}^{\bar\beta_{i-1}}[\beta_i+1]
\rep{1^{\alpha_1}2^{\alpha_2} \ldots i^{\alpha_i-1} \ldots d^{1} \ldots b^{\alpha_b}}{1^{\beta_1}2^{\beta_2}\ldots i^{\beta_i+1} \ldots d^0 \ldots b^{\beta_b}}.\]
To prove the claim, choose $j$ with $\bar\alpha_{i-1}+1 \leq j \leq \bar\alpha_i$. Let $\mathfrak{s}$ be the row-standard $\lambda$-tableau containing the entries from the set $\bigcup_{i=1}^d \{ \bar\alpha_i +\bar\beta_{i-1} +1, \ldots, \bar\alpha_i+\bar\beta_i \}$.
Then
\[\mathfrak{t}^\lambda d(\mathfrak{t}(j)) w = \mathfrak{s} w'\]
where $w'$ acts on $\mathfrak{s}$ as follows. Suppose that $x$ occupies the same position in $\mathfrak{s}$ that $j$ occupies in $\mathfrak{t}$. Then $w'$ moves $\bar\nu_{d-1}+1$ into the first row, such that the row is in increasing order, moves $x$ into the far left of the second row and pushes the entries in the second row, up to the entry immediately to the left of $\bar\nu_{d-1}+1$ in $\mathfrak{s}$, one box to the right. If $\dot{\mathfrak{s}}$ is the row-standard tableau obtained by rearranging the rows of $\mathfrak{s} w'$, then $d(\mathfrak{s})w'=u d(\dot{\mathfrak{s}})$ where $u \in \mathfrak{S}_\lambda$ has length $\bar\beta_{i-1}$.
Furthermore, using Equation~\ref{length}, we may check that $\ell(d(\mathfrak{t}(j)))+\ell(w) = \ell(d(\mathfrak{s})w')=\bar\beta_{i-1}+\ell(d(\dot{\mathfrak{s}}))$. It therefore follows that
\[\sum_{j=\bar\alpha_{i-1}+1}^{\bar\alpha_i} m_\lambda T_{d(\mathfrak{t}(j))} T_w = \hat{q}^{\bar\beta_{i-1}} \sum_{x=\bar\nu_{i-1}+1}^{\bar\alpha_i+\bar\beta_{i-1}} m_\lambda T_{d(\dot{\mathfrak{s}}(x))}\]
where $\dot{\mathfrak{s}}(x)$ is the tableau obtained from $\mathfrak{s}$ by swapping $x$ and $\bar\nu_{d-1}+1$ and rearranging the rows.
If we let $x_0 = \bar\nu_{i-1}+1$ then using Lemma~\ref{IntoThree},
\begin{align*}
\sum_{j=\bar\alpha_{i-1}+1}^{\bar\alpha_i} m_\lambda T_{d(\mathfrak{t}(j))} T_w C(\bar\nu_{i-1};\alpha_i,\beta_i) & = \hat{q}^{\bar\beta_{i-1}} m_\lambda T_{d(\dot{\mathfrak{s}}(x_0))} C(\bar\nu_{i-1};\alpha_i-1,1) C(\bar\nu_{i-1}; \alpha_i,\beta_i)\\
&= \hat{q}^{\bar\beta_{i-1}} m_\lambda T_{d(\dot{\mathfrak{s}}(x_0))} C(\bar\nu_{i-1}; \alpha_i-1,1,\beta_i) \\
&= \hat{q}^{\bar\beta_{i-1}} m_\lambda T_{d(\dot{\mathfrak{s}}(x_0))} C(\bar\alpha_{i}+\bar\beta_{i-1}-1; 1, \beta_i)C(\bar\nu_{i-1};\alpha_{i}-1,\beta_i+1) \\
&= \hat{q}^{\bar\beta_{i-1}}[\beta_i+1] m_\lambda T_{d(\dot{\mathfrak{s}}(x_0))} C(\bar\nu_{i-1};\alpha_{i}-1,\beta_i+1).
\end{align*}
The claim then follows since
\[m_\lambda T_{d(\dot{\mathfrak{s}}(x_0))} C(\bar\nu_{i-1};\alpha_{i}-1,\beta_i+1) \prod_{k\neq i} C(\bar\nu_{k-1};\alpha_k,\beta_k)=\rep{1^{\alpha_1}2^{\alpha_2} \ldots i^{\alpha_i-1} \ldots d^{1} \ldots b^{\alpha_b}}{1^{\beta_1}2^{\beta_2}\ldots i^{\beta_i+1} \ldots d^0 \ldots b^{\beta_b}}.\]
A similar proof shows that if $d+1 \leq i \leq b$ then
\[\sum_{j=\bar\alpha_{i-1}+\bar\beta_d+1}^{\bar\alpha_i+\bar\beta_d} m_\lambda T_{d(\mathfrak{t}(j))} T_w \prod_{k=1}^b C(\bar\nu_{k-1};\alpha_k,\beta_k) =
\hat{q}^{-\bar\beta_{d}+\bar\beta_{i-1}}[\beta_i+1]
\rep{1^{\alpha_1}2^{\alpha_2} \ldots d^{1} \ldots i^{\alpha_i-1} \ldots b^{\alpha_b}}{1^{\beta_1}2^{\beta_2}\ldots d^0 \ldots i^{\beta_i+1} \ldots b^{\beta_b}},\]
completing the proof of Lemma~\ref{cosetattack}.
\end{proof}
\begin{lemma} \label{Stage3}
Suppose
\[\mathsf{S}=\rep{1^{\alpha_1} \ldots d^0 \ldots b^{\alpha_b}}{1^{\beta_1} \ldots d^{\beta_d} \ldots b^{\beta_b}}\]
where $\beta_d\geq 1$.
Let \[\mathcal{G}=\{(g_1,\ldots,g_b) \mid g_d=0, \bar{g}=\beta_d \text{ and } g_i \leq \alpha_i \text{ for } 1 \leq i \leq b\}.\] For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the row-standard tableau obtained from $\mathsf{S}$ by moving all entries equal to $d$ from row 2 to row 1, and for $i\neq d$ moving down $g_i$ entries equal to $i$ from row 1 to row 2. Then
\[\Theta_\mathsf{S} = \sum_{g \in \mathcal{G}} (-1)^{\beta_d} \hat{q}^{-\binom{\beta_d+1}{2}+\bar{g}_{d-1}} \hat{q}^{-\bar\beta_{d-1}\beta_d}\prod_{i=1}^b \hat{q}^{g_i \bar\beta_{i-1}} \gauss{\beta_i+g_i}{g_i}\Theta_{\mathsf{U}_g}.\]
\end{lemma}
\begin{proof} The case that $\beta_d=1$ is Lemma~\ref{cosetattack}. So assume $\beta_d >1$ and that the lemma holds when $\mathsf{S}^d_2<\beta_d$. We first consider
\[\mathsf{S}=\rep{1^{\alpha_1} 4^{\alpha_4}}{2^{\beta_2} 3^{\beta_3} 5^{\beta_5}}.\]
Consider the map $\Theta_\mathsf{S}:M^{\nu}\rightarrow S^\lambda$. If $\dot{\mathsf{S}}$ is the tableau
\[\dot{\mathsf{S}} = \rep{1^{\alpha_1} 5^{\alpha_4}}{2^{\beta_2} 3^1 4^{\beta_3-1} 6^{\beta_5}}\]
of type $\dot\nu$ then $\Theta_\mathsf{S}(m_\nu) = \Theta_{\dot{\mathsf{S}}}(m_{\dot{\nu}})$.
Let
\begin{align*}
\mathcal{G} &= \{(g_1,g_4) \mid g_1+g_4=\beta_3, g_1 \leq \alpha_1, g_4 \leq \alpha_4 \}, \\
\mathcal{G}'&= \{(g'_1,g'_4) \mid g'_1+g'_4=\beta_3-1, g'_1 \leq \alpha_1, g'_4 \leq \alpha_4 \}.
\end{align*}
For $g \in \mathcal{G}$, let $\dot{\mathsf{U}}_g$ be the tableau obtained from $\dot{\mathsf{S}}$ by moving all entries equal to 3 or 4 from the second row to the first and moving $g_1$ entries equal to 1 and $g_4$ entries equal to 5 from the first to the second. For $g' \in \mathcal{G}'$, let $\dot{\mathsf{U}}_{g'}$ be the tableau obtained from $\dot{\mathsf{S}}$ by moving all entries equal to 4 from the second row to the first and moving $g'_1$ entries equal to 1 and $g'_4$ entries equal to 5 from the first to the second. Note that $\Theta_{\mathsf{U}_g}(m_{\nu})=\Theta_{\dot{\mathsf{U}}_g}(m_{\dot\nu})$, where $\mathsf{U}_g$ is as defined in the statement of the lemma.
Applying the inductive hypothesis repeatedly to $\dot{\mathsf{S}}$, we have
\begin{align*}
\Theta_\mathsf{S}(m_\nu) &= \rep{1^{\alpha_1} 5^{\alpha_4}}{2^{\beta_2} 3^1 4^{\beta_3-1} 6^{\beta_5}} \\
& = -\hat{q}^{-\beta_2} \rep{1^{\alpha_1-1} 3^1 5^{\alpha_4}}{1^1 2^{\beta_2} 4^{\beta_3-1} 6^{\beta_5}} -\hat{q}^{\beta_3-1} \rep{1^{\alpha_1} 3^1 5^{\alpha_4}}{2^{\beta_2} 4^{\beta_3-1} 5^1 6^{\beta_5}} \\
&= \sum_{g \in \mathcal{G}} (-1)^{\beta_3}\hat{q}^{-\beta_2}\hat{q}^{-\binom{\beta_3}{2}+g_1-1}\hat{q}^{-(\beta_3-1)(\beta_2+1)}[g_1]\hat{q}^{g_4(\beta_2+\beta_3)} \Theta_{\dot{\mathsf{U}}_g} \\
&\quad + \sum_{g' \in \mathcal{G}'} (-1)^{\beta_3} \hat{q}^{-\beta_2}\hat{q}^{-\binom{\beta_3}{2}+g'_1}\hat{q}^{-(\beta_3-1)(\beta_2+1)}[g_1']\hat{q}^{\beta_2+1}\hat{q}^{g'_4(\beta_2+\beta_3)} \Theta_{\dot{\mathsf{U}}_{g'}} \\
&\quad + \sum_{g \in \mathcal{G}} (-1)^{\beta_3} \hat{q}^{\beta_3-1}\hat{q}^{-\binom{\beta_3}{2}+g_1} \hat{q}^{-(\beta_3-1)\beta_2} \hat{q}^{(g_4-1)(\beta_2+\beta_3-1)}[g_4]\Theta_{\dot{\mathsf{U}}_g} \\
&\quad + \sum_{g' \in \mathcal{G}'} (-1)^{\beta_3} \hat{q}^{\beta_3-1}\hat{q}^{-\binom{\beta_3}{2}+g_1'+1}\hat{q}^{-(\beta_3-1)\beta_2}\hat{q}^{\beta_2}\hat{q}^{(g_4'-1)(\beta_2+\beta_3-1)}[g_4']\Theta_{\dot{\mathsf{U}}_{g'}} \\
&= \sum_{g \in \mathcal{G}} (-1)^{\beta_3} \hat{q}^{-\binom{\beta_3+1}{2}+g_1-\beta_2\beta_3} \hat{q}^{g_4(\beta_2+\beta_3)} [\beta_3] \Theta_{\dot{\mathsf{U}}_g} \\
&\quad + \sum_{g' \in \mathcal{G}'} (-1)^{\beta_3} \hat{q}^{-\binom{\beta_3}{2}+g'_1}\hat{q}^{g'_4(\beta_2+\beta_3)} \hat{q}^{-(\beta_2+1)(\beta_3-1)}\hat{q}[\beta_3-1] \Theta_{\dot{\mathsf{U}}_{g'}}.
\end{align*}
Applying the inductive hypothesis again, we also have
\begin{align*}
\Theta_{\dot{\mathsf{S}}}(m_{\dot\nu}) & = \rep{1^{\alpha_1} 5^{\alpha_4}}{2^{\beta_2} 3^1 4^{\beta_3-1} 6^{\beta_5}} \\
&= \sum_{g' \in \mathcal{G}'}(-1)^{\beta_3-1}\hat{q}^{-\binom{\beta_3}{2}+g'_1}\hat{q}^{-(\beta_2+1)(\beta_3-1)}\hat{q}^{g'_4(\beta_2+\beta_3)} \Theta_{\dot{\mathsf{U}}_{g'}}(m_{\dot\nu}).
\end{align*}
So, substituting the two values of $\Theta_{\dot{\mathsf{S}}}(m_{\dot\nu})$ into the equation below, we get that
\begin{align*}
[\beta_3]\Theta_\mathsf{S}(m_\nu) & = \Theta_{\dot{\mathsf{S}}}(m_{\dot\nu}) + \hat{q}[\beta_3-1]\Theta_{\dot{\mathsf{S}}}(m_{\dot\nu}) \\
&= [\beta_3] \sum_{g \in \mathcal{G}} (-1)^{\beta_3}\hat{q}^{-\binom{\beta_3+1}{2}+g_1-\beta_2\beta_3}\hat{q}^{g_4(\beta_2+\beta_3)} \Theta_{\dot{\mathsf{U}}_g}(m_{\dot\nu}) \\
&= [\beta_3] \sum_{g \in \mathcal{G}} (-1)^{\beta_3}\hat{q}^{-\binom{\beta_3+1}{2}+g_1-\beta_2\beta_3}\hat{q}^{g_4(\beta_2+\beta_3)} \Theta_{\mathsf{U}_g}(m_{\nu}).
\end{align*}
Since we are working in $R=\mathcal{Z}$, we may cancel the terms $[\beta_3]$ on both sides of the equation. This completes the proof of Lemma~\ref{Stage3} when $\mathsf{S}$ has the form $\mathsf{S}=\rep{1^{\alpha_1} 4^{\alpha_4}}{2^{\beta_2} 3^{\beta_3} 5^{\beta_5}}$. The proof of Lemma~\ref{Stage3} for general $\mathsf{S}$ follows in the same way as the end of the proof of Lemma~\ref{cosetattack}.
\end{proof}
\begin{lemma} \label{main2part}
Suppose
\[\mathsf{S}=\rep{1^{\alpha_1} \ldots d^{\alpha_d} \ldots b^{\alpha_b}}{1^{\beta_1} \ldots d^{\beta_d} \ldots b^{\beta_b}}\]
where $\beta_d\geq 1$.
Let \[\mathcal{G}=\{(g_1,\ldots,g_b) \mid g_d=0, \bar{g}=\beta_d \text{ and } g_i \leq \alpha_i \text{ for } 1 \leq i \leq b\}.\] For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the tableau obtained from $\mathsf{S}$ by moving all entries equal to $d$ from row 2 to row 1, and for $i\neq d$ moving down $g_i$ entries equal to $i$ from row 1 to row 2. Then
\[\Theta_\mathsf{S} = \sum_{g \in \mathcal{G}} (-1)^{\beta_d} \hat{q}^{-\binom{\beta_d+1}{2}+\bar{g}_{d-1}} \hat{q}^{-\bar\beta_{d-1}\beta_d}\prod_{i=1}^b \hat{q}^{g_i \bar\beta_{i-1}} \gauss{\beta_i+g_i}{g_i}\Theta_{\mathsf{U}_g}.\]
\end{lemma}
\begin{proof}
The case that $\alpha_d=0$ is precisely Lemma~\ref{Stage3}. So suppose $\alpha_d>0$ and that the lemma holds when $\mathsf{S}^d_{1}<\alpha_d$.
Let
\begin{align*}
\mathsf{S}(1)&= \rep{1^{\alpha_1}\ldots d^{\alpha_d-1}(d+1)^1 (d+2)^{\alpha_{d+1}}\ldots (b+1)^{\alpha_b}}{1^{\beta_1}\ldots d^{\beta_d} (d+2)^{\beta_{d+1}}\ldots (b+1)^{\beta_b}}, \\
\mathsf{S}(2) & = \rep{1^{\alpha_1}\ldots d^{\alpha_d} (d+2)^{\alpha_{d+1}}\ldots (b+1)^{\alpha_b}}{1^{\beta_1}\ldots d^{\beta_d-1}(d+1)^1 (d+2)^{\beta_{d+1}}\ldots (b+1)^{\beta_b}},
\end{align*}
and suppose they are of type $\dot\nu$ so that
\[\Theta_\mathsf{S}(m_\nu) = \Theta_{\mathsf{S}(1)}(m_{\dot{\nu}}) + \Theta_{\mathsf{S}(2)}(m_{\dot\nu}).\] Let
\[\mathcal{G}'=\{(g'_1,\ldots,g'_b) \mid g'_d=0, \bar{g}'=\beta_d -1\text{ and } g'_i \leq \alpha_i \text{ for } 1 \leq i \leq b\}.\]
For $g \in \mathcal{G}$, let $\dot{\mathsf{U}}_g$ be the tableau obtained from $\mathsf{S}(1)$ by moving all entries equal to $d$ from row 2 to row 1, for $i< d$ moving down $g_i$ entries equal to $i$ from row 1 to row 2 and for $i>d+1$ moving $g_{i-1}$ entries equal to $i$ from row 1 to row 2.
For $g' \in \mathcal{G}'$, let $\dot{\mathsf{U}}_{g'}$ be the tableau obtained from $\mathsf{S}(2)$ by moving all entries equal to $d$ from row 2 to row 1, for $i< d$ moving down $g'_i$ entries equal to $i$ from row 1 to row 2 and for $i>d+1$ moving $g'_{i-1}$ entries equal to $i$ from row 1 to row 2. Then, using the inductive hypothesis,
\begin{align*}
\Theta_\mathsf{S}(m_\nu) &= \sum_{g \in \mathcal{G}} (-1)^{\beta_d}\hat{q}^{-\binom{\beta_d+1}{2}+\bar{g}_{d-1}}\hat{q}^{-\beta_d \bar\beta_{d-1}} \prod_{i=1}^b \hat{q}^{g_i \bar\beta_{i-1}}\gauss{\beta_i+g_i}{g_i} \Theta_{\dot{\mathsf{U}}_g}(m_{\dot\nu})\\
&\quad + \sum_{g \in \mathcal{G}} (-1)^{\beta_d} \hat{q}^{-\binom{\beta_d+1}{2}+\bar{g}_{d-1}} \hat{q}^{-\beta_d\bar\beta_{d-1}} \hat{q}^{\bar{\beta}_d} \prod_{i=1}^b \hat{q}^{g_i \bar\beta_{i-1}}\gauss{\beta_i+g_i}{g_i}\Theta_{\dot{\mathsf{U}}_g}(m_{\dot\nu}) \\
&\quad + \sum_{g' \in \mathcal{G}'} (-1)^{\beta_d-1} \hat{q}^{-\binom{\beta_d}{2}+\bar{g}'_{d-1}} \hat{q}^{(\beta_d-1)\bar{\beta}_{d-1}} \prod_{i=1}^b \hat{q}^{g'_i \bar\beta_{i-1}}\gauss{\beta_i+g'_i}{g'_i} \Theta_{\dot{\mathsf{U}}_{g'}}(m_{\dot\nu}) \\
&= \sum_{g \in \mathcal{G}} (-1)^{\beta_d} \hat{q}^{-\binom{\beta_d+1}{2}+\bar{g}_{d-1}} \hat{q}^{-\bar\beta_{d-1}\beta_d}\prod_{i=1}^b \hat{q}^{g_i \bar\beta_{i-1}} \gauss{\beta_i+g_i}{g_i}\Theta_{\mathsf{U}_g}(m_\nu).
\end{align*}
\end{proof}
We now move on to the more general case where $\lambda$ may have more than 2 parts.
\begin{lemma} \label{somemoreparts}
Suppose $\mathsf{S} \in \mathrm{RowT}(\lambda,\nu)$ where $\lambda=(\lambda_1,\ldots,\lambda_a)$ and $a \geq 2$. Choose $r$ with $1 \leq r<a$ and suppose that $\mathsf{S}$ satisfies the following conditions: there exists $k$ with $r+1 \leq k$ such that
\begin{itemize}
\item all entries of $\mathsf{S}$ in rows $1 \leq j <r$ are equal to $j$;
\item all entries of $\mathsf{S}$ in row $r$ are equal to one of $r,r+1,\ldots,k$ and all entries in row $r+1$ are equal to one of $r+1,r+2,\ldots,k$;
\item all entries of $\mathsf{S}$ in rows $r+2\leq j \leq a$ are equal to $j+k-r-1$.
\end{itemize}
Choose $d$ with $\mathsf{S}^d_{r+1} \neq 0$.
Let \[\mathcal{G}=\{(g_1,\ldots,g_b) \mid g_d=0, \bar{g}=\mathsf{S}^d_{r+1} \text{ and } g_i \leq \mathsf{S}^i_r \text{ for } 1 \leq i \leq b\}.\] For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the row-standard tableau obtained from $\mathsf{S}$ by moving all entries equal to $d$ from row $r+1$ to row $r$, and for $i\neq d$ moving down $g_i$ entries equal to $i$ from row $r$ to row $r+1$. Then
\[\Theta_\mathsf{S} = \sum_{g \in \mathcal{G}} (-1)^{\mathsf{S}^d_{r+1}} \hat{q}^{-\binom{\mathsf{S}_{r+1}^d+1}{2}+\bar{g}_{d-1}} \hat{q}^{-\mathsf{S}^{<d}_{r+1}\mathsf{S}^d_{r+1}}\prod_{i=1}^b \hat{q}^{g_i \mathsf{S}_{r+1}^{<i}} \gauss{\mathsf{S}^i_{r+1}+g_i}{g_i}\Theta_{\mathsf{U}_g}.\]
\end{lemma}
\begin{proof}
The proof of Lemma~\ref{somemoreparts} is identical to the proof of Lemma~\ref{main2part}, apart from the change in notation. We chose to give the proof of Lemma~\ref{main2part} rather than proving Lemma~\ref{somemoreparts} itself because the notation is easier to control.
\end{proof}
Now suppose that $\mathsf{S} \in \mathrm{RowT}(\lambda,\nu)$.
Choose $r$ with $1 \leq r < a$ and define $\dot{\mathsf{S}}$ to be the $\lambda$-tableau such that
\begin{itemize}
\item each row $1 \leq j <r$ contains $\lambda_j$ entries equal to $j$;
\item each row $r+1 <j \leq a$ contains $\lambda_j$ entries equal to $j+b-2$;
\item each row $j=r,r+1$ contains $\mathsf{S}^i_j$ entries equal to $i+r-1$, for $1 \leq i \leq b$.
\end{itemize}
Note that $\dot{\mathsf{S}}$ satisfies the conditions of Lemma~\ref{somemoreparts}.
\begin{ex}
Suppose that
\[\mathsf{S}=\tab(11122233445,112235,1224,3345),\]
and let $r=2$. Then
\[\dot\mathsf{S}=\tab(11111111111,223346,2334,7777).\]
\end{ex}
\begin{lemma} \label{LittlePerms}
Suppose $m,x \geq 0$ and $v$ is a permutation of $m+1,\ldots,m+x$. Suppose $w$ is a permutation such that $w(m+i) < w(m+j)$ for all $1 \leq i<j \leq x$. Then $\ell(vw) = \ell(v)+\ell(w) = \ell(wv)$.
\end{lemma}
\begin{proof}
The proof follows by applying Equation~\ref{length}.
\end{proof}
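Lemma~\ref{LittlePerms} is easy to confirm numerically by computing lengths as inversion counts. The sketch below checks the additivity $\ell(vw)=\ell(v)+\ell(w)$ on random instances, under the convention that the product $vw$ means ``first $v$, then $w$'' (permutations in one-line notation; the helper names are ours):

```python
import random

def ell(w):
    """Coxeter length = number of inversions of a permutation in one-line notation."""
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def then(u, v):
    """Product uv with permutations composed left to right: (uv)(j) = v(u(j))."""
    return tuple(v[u[j] - 1] for j in range(len(u)))

random.seed(1)
n = 9
for _ in range(500):
    m = random.randrange(n)
    x = random.randrange(n - m + 1)
    # v permutes m+1,...,m+x and fixes all other points
    block = list(range(m + 1, m + x + 1))
    random.shuffle(block)
    v = tuple(block[i - m - 1] if m < i <= m + x else i for i in range(1, n + 1))
    # w satisfies w(m+1) < w(m+2) < ... < w(m+x)
    w = list(range(1, n + 1))
    random.shuffle(w)
    w[m:m + x] = sorted(w[m:m + x])
    w = tuple(w)
    assert ell(then(v, w)) == ell(v) + ell(w)
print("length additivity verified on 500 random instances")
```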
\begin{lemma} \label{NewPerms}
Let $\mathsf{S} \in \mathrm{RowT}(\lambda,\nu)$. Choose $r$ with $1 \leq r <a$ and define $\dot{\mathsf{S}}$ and $\dot\nu$ as above.
If $w$ is the permutation such that $\first(\mathsf{S}) = \first(\dot{\mathsf{S}}) w$ then $\ell(d(\first(\mathsf{S}))) = \ell(d(\first(\dot{\mathsf{S}})))+ \ell(w)$.
Furthermore, if $\mathsf{T} \in \mathrm{RowT}(\lambda,\nu)$ is such that $\mathsf{S}$ and $\mathsf{T}$ are identical on all rows except possibly rows $r$ and $r+1$,
then $\first(\mathsf{T}) = \first(\dot{\mathsf{T}}) w$ (so that $\ell(d(\first(\mathsf{T}))) = \ell(d(\first(\dot{\mathsf{T}}))) + \ell(w)$).
\end{lemma}
\begin{proof}
Note that $d(\first(\dot{\mathsf{S}}))$ and $w$ satisfy the conditions of Lemma~\ref{LittlePerms}, so that $\ell(d(\first(\dot{\mathsf{S}}))w) = \ell(d(\first(\dot{\mathsf{S}})))+ \ell(w)$. It is straightforward to see that the permutation $w$ works for both $\mathsf{S}$ and $\mathsf{T}$.
\end{proof}
\begin{lemma} \label{MoreThanThree}
Let $m \geq 0$ and let $\eta=(\eta_1,\ldots,\eta_a)$ be a composition with $a \geq 2$. Choose $r$ with $1 \leq r <a$. Then
\[C(m; \eta) = C(m+\bar\eta_{r-1}; \eta_r,\eta_{r+1})\, C(m; \eta_1,\ldots,\eta_{r-1}, \eta_r+\eta_{r+1})\, C(m; \bar\eta_{r+1},\eta_{r+2},\ldots,\eta_a).\]
\end{lemma}
\begin{proof}
As usual, we may assume $m=0$. Applying Lemma~\ref{IntoThree} repeatedly, we get
\begin{align*}
C(0;\eta)&= C(0;\eta_1,\ldots,\eta_{r+1}) C(\bar\eta_{r+1}; \eta_{r+2},\ldots,\eta_{a}) C(0; \bar\eta_{r+1},\bar\eta_a-\bar\eta_{r+1}) \\
&= C(\bar\eta_{r-1}; \eta_r,\eta_{r+1}) C(0; \eta_{1},\ldots,\eta_{r-1}) C(0; \bar\eta_{r-1}, \eta_{r}+\eta_{r+1}) \\
&\hspace*{15mm} C(\bar\eta_{r+1}; \eta_{r+2},\ldots,\eta_{a}) C(0; \bar\eta_{r+1}, \bar\eta_a - \bar\eta_{r+1}) \\
&= C(\bar\eta_{r-1}; \eta_r, \eta_{r+1}) C(0; \eta_1,\ldots,\eta_{r-1},\eta_r+\eta_{r+1}) C(0; \bar\eta_{r+1},\eta_{r+2},\ldots,\eta_{a}).
\end{align*}
\end{proof}
\begin{lemma} \label{Commutes}
Let $m, \eta_1, \eta_2 \geq 0$. Suppose $w$ is a permutation such that for all $m+1 \leq k \leq m+\eta_1 +\eta_2$ we have $w^{-1}(k) = k+x$ for some fixed $x \in \mathbb{Z}$. Then
\[ T_w C(m; \eta_1, \eta_2) = C(m+x; \eta_1, \eta_2) T_w.\]
\end{lemma}
\begin{proof}
If $v$ is any permutation of $m+1,\ldots,m+\eta_1+\eta_2$, let $\bar{v}$ be the permutation of $x+m+1,\ldots,x+m+\eta_1+\eta_2$ which sends $x+k$ to $v(k)+x$. Then clearly $wv = \bar{v}w$ and $\ell(wv) = \ell(w)+\ell(v)$ by Lemma~\ref{LittlePerms}. The result follows since $C(m;\eta_1,\eta_2)$ is a sum of basis elements indexed by permutations of $m+1,\ldots,m+\eta_1+\eta_2$.
\end{proof}
\begin{lemma} \label{BadPerm}
Let $\mathsf{S} \in \mathrm{RowT}(\lambda,\nu)$. Choose $r$ with $1 \leq r <a$ and define $\dot{\mathsf{S}}$, $\dot\nu$ and $w$ as in Lemma~\ref{NewPerms}. Then
\[\Theta_{\mathsf{S}}(m_\nu)=\Theta_{\dot{\mathsf{S}}}(m_{\dot\nu}) T_{w} \prod_{i=1}^b C(\bar\nu_{i-1};\mathsf{S}^i_1,\ldots,\mathsf{S}^i_{r-1},\mathsf{S}^i_r+\mathsf{S}^i_{r+1}) C(\bar\nu_{i-1};\mathsf{S}^i_{\leq r+1},\mathsf{S}^{i}_{r+2},\ldots,\mathsf{S}^i_a).\]
\end{lemma}
\begin{proof} Applying Lemmas~\ref{reverse},~\ref{NewPerms},~\ref{MoreThanThree} and~\ref{Commutes}, we have that
\begin{align*}
\Theta_\mathsf{S}(m_\nu) & = \mathcal{H}^{\lambda}+ m_\lambda T_{d(\first(\mathsf{S}))} \prod_{i=1}^b C(\bar\nu_{i-1};\mathsf{S}^i_1,\ldots,\mathsf{S}^i_a) \\
&= \mathcal{H}^{\lambda} + m_\lambda T_{d(\first(\dot{\mathsf{S}}))} T_{w} \prod_{i=1}^b C(\bar\nu_{i-1}+\mathsf{S}^i_{ \leq r-1};\mathsf{S}^i_r,\mathsf{S}^i_{r+1}) C(\bar\nu_{i-1};\mathsf{S}^i_1,\ldots,\mathsf{S}^i_{r-1},\mathsf{S}^i_r+\mathsf{S}^i_{r+1})\\
&\hspace*{15mm} C(\bar\nu_{i-1}; \mathsf{S}^i_{\leq r+1},\mathsf{S}^{i}_{r+2},\ldots,\mathsf{S}^i_a) \\
&= \mathcal{H}^{\lambda}+ m_\lambda T_{d(\first(\dot{\mathsf{S}}))} \prod_{i=1}^b C(\bar\lambda_{i-1}+\mathsf{S}^{<i}_r+\mathsf{S}^{<i}_{r+1};\mathsf{S}^i_r,\mathsf{S}^i_{r+1}) \\
&\hspace*{15mm} T_w \prod_{i=1}^b C(\bar\nu_{i-1};\mathsf{S}^i_1,\ldots,\mathsf{S}^i_{r-1},\mathsf{S}^i_r+\mathsf{S}^i_{r+1}) C(\bar\nu_{i-1}; \mathsf{S}^i_{\leq r+1},\mathsf{S}^{i}_{r+2},\ldots,\mathsf{S}^i_a)\\
& =\Theta_{\dot{\mathsf{S}}}(m_{\dot\nu}) T_{w} \prod_{i=1}^b C(\bar\nu_{i-1};\mathsf{S}^i_1,\ldots,\mathsf{S}^i_{r-1},\mathsf{S}^i_r+\mathsf{S}^i_{r+1}) C(\bar\nu_{i-1};\mathsf{S}^i_{\leq r+1},\mathsf{S}^{i}_{r+2},\ldots,\mathsf{S}^i_a).
\end{align*}
\end{proof}
We may now combine the previous results.
\begin{ex}
Let $\mathsf{S}=\tab(1223,1123,123,1)$ so that $\lambda=(4,4,3,1)$. Then
\begin{align*}
\tab(1223,1123,123,1) &= \mathcal{H}^{\lambda}+ m_\lambda T_{(2,6,3,7,8,11,12,5)(4,10,9)} C(0;1,2,1,1) C(5; 2,1,1) C(9;1,1,1) \\
&= \mathcal{H}^{\lambda}+ m_\lambda T_{(7,8,10,9)}T_{(2,6,3,7,4,10,11,12,5)} C(1;2,1) C(7;1,1) C(10;1,1) \\
&\hspace*{15mm} C(0;1,3) C(0;4,1) C(5;2,2) C(9;1,2) \\
&= \mathcal{H}^{\lambda}+ m_\lambda T_{(7,8,10,9)} C(4;2,1) C(7;1,1) C(9;1,1)\\
&\hspace*{15mm} T_{(2,6,3,7,4,10,11,12,5)} C(0;1,3) C(0;4,1) C(5;2,2) C(9;1,2) \\
&=\tab(1111,2234,234,5) T_{(2,6,3,7,4,10,11,12,5)} C(0;1,3) C(0;4,1) C(5;2,2) C(9;1,2) \\
&=\Big(-\hat{q}^{-1}[2] \tab(1111,2334,224,5)-[2]\tab(1111,2233,244,5) \Big) T_{(2,6,3,7,4,10,11,12,5)} C(0;1,3) C(0;4,1) C(5;2,2) C(9;1,2) \\
&=-\hat{q}^{-1}[2] \tab(1223,1223,113,1)- [2] \tab(1223,1122,133,1).
\end{align*}
\end{ex}
\begin{proposition} \label{Part1}
Suppose $\lambda=(\lambda_1,\ldots,\lambda_a)$ is a partition of $n$ and $\nu=(\nu_1,\ldots,\nu_b)$ is a composition of $n$.
Let $\mathsf{S} \in \mathrm{RowT}(\lambda,\nu)$.
Suppose $1 \leq r\leq a-1$ and $1 \leq d \leq b$. Let
\[\mathcal{G} =\left\{g=(g_1,g_2,\ldots,g_b) \mid g_d=0, \, \bar{g}=\mathsf{S}^d_{r+1}, \, \text{ and } g_i \leq \mathsf{S}^{i}_{r} \text{ for } 1 \leq i \leq b\right\}.\]
For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the row-standard tableau formed by moving all entries equal to $d$ from row $r+1$ to row $r$ and, for $i \neq d$, moving $g_i$ entries equal to $i$ from row $r$ to row $r+1$. Then
\[\Theta_\mathsf{S} = \sum_{g \in \mathcal{G}} (-1)^{\mathsf{S}^d_{r+1}} \hat{q}^{-\binom{\mathsf{S}^d_{r+1}+1}{2}+\bar{g}_{d-1}} \hat{q}^{-\mathsf{S}^{<d}_{r+1}\mathsf{S}^d_{r+1}} \prod_{i=1}^b \hat{q}^{g_i \mathsf{S}^{<i}_{r+1}} \gauss{\mathsf{S}^i_{r+1}+g_i}{g_i}\Theta_{\mathsf{U}_g}.\]
\end{proposition}
\begin{proof}
Using Lemmas~\ref{somemoreparts},~\ref{NewPerms} and~\ref{BadPerm}, and keeping the notation of Lemma~\ref{BadPerm},
\begin{align*}
\Theta_\mathsf{S}(m_\nu) &= \Theta_{\dot{\mathsf{S}}}(m_{\dot\nu}) T_{w} \prod_{i=1}^b C(\bar\nu_{i-1};\mathsf{S}^i_1,\ldots,\mathsf{S}^i_{r-1},\mathsf{S}^i_r+\mathsf{S}^i_{r+1}) C(\bar\nu_{i-1};\mathsf{S}^i_{\leq r+1},\mathsf{S}^{i}_{r+2},\ldots,\mathsf{S}^i_a) \\
&= \sum_{g \in \mathcal{G}} (-1)^{\mathsf{S}^d_{r+1}} \hat{q}^{-\binom{\mathsf{S}_{r+1}^d+1}{2}+\bar{g}_{d-1}} \hat{q}^{-\mathsf{S}^{<d}_{r+1}\mathsf{S}^d_{r+1}}\prod_{i=1}^b \hat{q}^{g_i \mathsf{S}_{r+1}^{<i}} \gauss{\mathsf{S}^i_{r+1}+g_i}{g_i}\Theta_{\dot{\mathsf{U}}_g}(m_{\dot\nu}) \\
&\hspace*{15mm} T_w \prod_{i=1}^b C(\bar\nu_{i-1};\mathsf{S}^i_1,\ldots,\mathsf{S}^i_{r-1},\mathsf{S}^i_r+\mathsf{S}^i_{r+1}) C(\bar\nu_{i-1};\mathsf{S}^i_{\leq r+1},\mathsf{S}^{i}_{r+2},\ldots,\mathsf{S}^i_a) \\
&= \sum_{g \in \mathcal{G}} (-1)^{\mathsf{S}^d_{r+1}} \hat{q}^{-\binom{\mathsf{S}_{r+1}^d+1}{2}+\bar{g}_{d-1}} \hat{q}^{-\mathsf{S}^{<d}_{r+1}\mathsf{S}^d_{r+1}}\prod_{i=1}^b \hat{q}^{g_i \mathsf{S}_{r+1}^{<i}} \gauss{\mathsf{S}^i_{r+1}+g_i}{g_i}\Theta_{\mathsf{U}_g}(m_\nu).
\end{align*}
\end{proof}
The proof of Proposition~\ref{Part1} gives the first half of the proof of Theorem~\ref{Lemma7}. Since the proof of the second half follows along identical lines, we omit most of it and give only the proof of the analogue of Lemma~\ref{Type1n}, where the difference is non-trivial.
\begin{ex} Let $\lambda=(3,3)$ and $\nu=(1,2,1,1,1)$.
Observe that \[m_\lambda(I+T_3+T_3T_4+T_3T_4T_5) =(I+T_2+T_1T_2)m_{(2,4)}\in \mathcal{H}^{\lambda}\]
by Lemma~\ref{minihigh}. Then
\begin{align*}
\tab(134,225) &= \mathcal{H}^{\lambda}+m_{\lambda}T_3T_4T_2T_3 \\
&= \mathcal{H}^{\lambda} + \hat{q}^{-1}m_{\lambda}T_2T_3T_4T_2T_3 \\
&= \mathcal{H}^{\lambda} + \hat{q}^{-1}m_\lambda T_3T_4T_2T_3T_4 \\
&= \mathcal{H}^{\lambda} - \hat{q}^{-1}m_\lambda(I+T_3+T_3T_4T_5)T_2T_3T_4 \\
&= \mathcal{H}^{\lambda} - m_\lambda (T_3T_4 + T_3T_2T_4 + \hat{q}^{-1}T_3T_4T_5T_2T_3T_4) \\
&= - \tab(124,235) -\hat{q}^{-1}\tab(145,223).
\end{align*}
\end{ex}
\begin{lemma} \label{Type1nb}
Let $\lambda=(m,m)$.
Let $0 \leq \alpha < m$ and $0 \leq \beta \leq m$ and let $\mathsf{S} \in \rowstd(\lambda)$ be the tableau with $1,\ldots,\alpha,\alpha+\beta+1,\ldots,m+\beta$ in the top row (and $\alpha+1,\ldots,\alpha+\beta,m+\beta+1,\ldots,2m$ in the second row). For $i \in \{\alpha+1,\ldots,\alpha+\beta,m+\beta+1,\ldots,2m\}$, let $\mathsf{U}_i$ be the row-standard tableau obtained by swapping $\alpha+\beta+1$ with $i$ and rearranging the rows if necessary. Then
\[\Theta_\mathsf{S} = -\sum_{i=\alpha+1}^{\alpha+\beta} \Theta_{\mathsf{U}_i} - \hat{q}^{-m+\alpha+1}\sum_{i=m+\beta+1}^{2m}\Theta_{\mathsf{U}_i}.\]
\end{lemma}
\begin{proof}
We use Lemma~\ref{PullsThrough} and note that
\[m_{\lambda} C(m-1;1,m) = C(0;m-1,1)m_{(m-1,m+1)} \in \mathcal{H}^{\lambda}\]
by Lemma~\ref{minihigh}. If $\beta=m$ then
\begin{align*}
\Theta_{\mathsf{S}}(m_\nu) + \sum_{i=\alpha+1}^{\alpha+\beta} \Theta_{\mathsf{U}_i}(m_\nu)
&= \mathcal{H}^{\lambda}+ m_{\lambda}\D^\flat(m,\alpha+1,m) C(\alpha;1,m) \\
&= \mathcal{H}^{\lambda} + \hat{q}^{m-\alpha-1} m_\lambda C(m-1; 1,m) \D^\flat(m-1,\alpha,m+1) \\
&= \mathcal{H}^{\lambda} \\
&=0.
\end{align*}
Else if $\beta<m$ then
\begin{align*}
\sum_{i=m+\beta+1}^{2m} \Theta_{\mathsf{U}_i}(m_\nu) &= \mathcal{H}^{\lambda}+m_\lambda T_{m}T_{m+1}\ldots T_{m+\beta} C(m+\beta;1,m-\beta-1) \D^\flat(m-1,\alpha,\beta+1) \\
& = \mathcal{H}^{\lambda}+ m_\lambda \big(C(m-1;1,m)- C(m-1;1,\beta)\big) \D^\flat(m-1,\alpha,\beta+1) \\
&= \mathcal{H}^{\lambda} - m_\lambda \D^\flat(m-1,\alpha,\beta+1) C(\alpha;1,\beta) \\
&= \mathcal{H}^{\lambda}- \hat{q}^{m-\alpha-1} m_\lambda\D^\flat(m,\alpha+1,\beta) C(\alpha;1,\beta) \\
&= -\hat{q}^{m-\alpha-1} \Theta_\mathsf{S}(m_\nu) - \hat{q}^{m-\alpha-1} \sum_{i=\alpha+1}^{\alpha+\beta}\Theta_{\mathsf{U}_i}(m_\nu).
\end{align*}
\end{proof}
\subsection{How to prove the results in Section~\ref{Spaces}}\label{SpaceProof}
Let $\mathcal{H}=\mathcal{H}_{\mathbb{C},q}(\mathfrak{S}_n)$ where $q$ is a primitive $e^\text{th}$ root of unity in $\mathbb{C}$ for some $2 \leq e < \infty$. In Propositions~\ref{first} to~\ref{last} we described the homomorphism space $\EHom_{\mathcal{H}}(S^\nu,S^\lambda)$ where $\lambda$ has at most two parts and $\nu_1 \geq \lambda_2$. As previously mentioned, the proof of these results is given by case-by-case analysis, where the calculations consist of solving systems of homogeneous linear equations. These very many calculations are both trivial and lengthy, and do not belong in a research paper. We therefore begin this section by giving some identities which enabled us to solve the equations, followed by the proof of the results in some specific cases. The reader who wishes to use one or more of our propositions and who prefers not to rely on results which are not explicitly proved would be best advised to construct their own proofs; we would say this is more tedious than difficult.
We note that our proof of Proposition~\ref{OneDim}, which states that the homomorphism space is at most 1-dimensional, relies on looking at cases individually; we do not know of a direct proof.
\begin{lemma} \label{DivZero}
Suppose that $a,b >0$. Then \[\frac{[ae]}{[be]} = \frac{a}{b}.\]
\end{lemma}
\begin{proof} We have that
\begin{align*}
\frac{[ae]}{[be]} & = \frac{1+q+\ldots +q^{ae-1}}{1+q+\ldots+q^{be-1}} \\
&=\frac{(1+q^e+\ldots+q^{(a-1)e})[e]}{(1+q^e+\ldots+q^{(b-1)e})[e]} \\
&=\frac{a}{b},
\end{align*}
where the last step uses $q^e=1$.
\end{proof}
\begin{lemma} \label{WhenZero}
Suppose that $m \geq k \geq 0$. Write $m=m^\ast e+m'$ and $k=k^\ast e+k'$ where $0 \leq m',k' <e$. Then $\gauss{m}{k} = 0$ if and only if $m' < k'$.
\end{lemma}
\begin{proof} We have that
\[\gauss{m}{k}=\frac{[m][m-1]\ldots[m-k+1]}{[k][k-1]\ldots[1]}\]
which, using Lemma~\ref{DivZero}, is zero if and only if there are more terms divisible by $e$ in the numerator than in the denominator, that is, if and only if $m' < k'$.
\end{proof}
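For example, take $e=3$, $m=4$ and $k=2$, so that $m'=1 < 2 = k'$ and the lemma predicts $\gauss{4}{2}=0$; indeed
\[\gauss{4}{2} = \frac{[4][3]}{[2][1]} = 0,\]
since $[3]=1+q+q^2=0$, while $[4]=[3]+q^3=1$, $[2]=1+q$ and $[1]=1$ are all non-zero.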
\begin{corollary} \label{e}
Let $m\geq e$. Then
\[\gauss{m}{e} \neq 0.\]
\end{corollary}
\begin{corollary}
Let $m,s \geq 1$. Then
\[\gauss{m+j}{j} = 0\]
for all $1 \leq j \leq s$
if and only if $e \mid m+1$ and $s<e$.
\begin{lemma}
We have \[(-1)^e q^{\binom{e}{2}}=-1.\]
\end{lemma}
\begin{proof}
If $e$ is even then \begin{align*}
(-1)^e q^{\binom{e}{2}} & = q^{\frac{e}{2}(e-1)}=(-1)^{e-1}=-1.\\ \intertext{If it is odd then} (-1)^e q^{\binom{e}{2}} &= - q^{e \frac{e-1}{2}} =-1.
\end{align*}
\end{proof}
\begin{lemma}[Lemma~\ref{GaussLemma}] \label{Repeat}
Suppose $m, k \geq n \geq 0$. Then
\[\sum_{\gamma \geq 0}(-1)^\gamma q^{\binom{\gamma}{2}}\gauss{n}{\gamma}\gauss{m-\gamma}{k} = q^{n(m-k)}\gauss{m-n}{k-n}.\]
\end{lemma}
\begin{lemma} \label{NotRepeat}
Suppose $m \geq j \geq 0$ and $n \geq 0$. Then
\[\sum_{\gamma \geq 0} q^{\gamma(m-j+\gamma)} \gauss{n}{\gamma}\gauss{m}{j-\gamma} = \gauss{m+n}{j}.\]
\end{lemma}
\begin{proof}
We use induction on $n$. The lemma is true for $n=0$ so suppose that $n>0$ and that the lemma holds for $n-1$. Then using Lemma~\ref{GaussSum},
\begin{align*}
\sum_{\gamma \geq 0} q^{\gamma(m-j+\gamma)} \gauss{n}{\gamma}\gauss{m}{j-\gamma} &= \sum_{\gamma \geq 0} q^{\gamma(m-j+\gamma)} \left( \gauss{n-1}{\gamma-1} + q^\gamma \gauss{n-1}{\gamma} \right) \gauss{m}{j-\gamma} \\
&= \sum_{\gamma \geq 0} q^{(\gamma+1)(m-j+\gamma+1)} \gauss{n-1}{\gamma}\gauss{m}{j-\gamma-1} + \sum_{\gamma \geq 0} q^{\gamma(m-j+\gamma)} q^\gamma \gauss{n-1}{\gamma}\gauss{m}{j-\gamma} \\
&= \sum_{\gamma \geq 0} q^{\gamma(m-j+\gamma)}\gauss{n-1}{\gamma}\left(q^{m-j+\gamma+1} \gauss{m}{j-\gamma-1} + \gauss{m}{j-\gamma} \right) \\
&= \sum_{\gamma \geq 0} q^{\gamma(m-j+\gamma+1)} \gauss{n-1}{\gamma} \gauss{m+1}{j-\gamma} \\
&= \gauss{m+n}{j}
\end{align*}
by the inductive hypothesis.
\end{proof}
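Both Gaussian-binomial identities can also be checked numerically. The following sketch (our own, with illustrative names) evaluates the Gaussian binomials at $q=2$, where $[k]=2^k-1$, and compares both sides of Lemmas~\ref{Repeat} and~\ref{NotRepeat} over a small range of parameters; since each identity is a polynomial identity in $q$, this is only a sanity check, not a proof.

```python
from fractions import Fraction

Q = Fraction(2)  # evaluate at q = 2; the identities hold for generic q

def qint(k):
    """The quantum integer [k] = 1 + q + ... + q^(k-1)."""
    return sum(Q ** i for i in range(k))

def qfact(k):
    """Quantum factorial [k]! = [1][2]...[k]."""
    out = Fraction(1)
    for i in range(1, k + 1):
        out *= qint(i)
    return out

def gauss(m, k):
    """Gaussian binomial coefficient, zero outside 0 <= k <= m."""
    if k < 0 or k > m:
        return Fraction(0)
    return qfact(m) / (qfact(k) * qfact(m - k))

def lhs_repeat(m, n, k):
    """Left-hand side of the first identity (m, k >= n >= 0)."""
    return sum((-1) ** g * Q ** (g * (g - 1) // 2) * gauss(n, g) * gauss(m - g, k)
               for g in range(n + 1))

def lhs_notrepeat(m, n, j):
    """Left-hand side of the second identity (m >= j >= 0, n >= 0)."""
    return sum(Q ** (g * (m - j + g)) * gauss(n, g) * gauss(m, j - g)
               for g in range(n + 1))
```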
Armed with these results, we are ready to start solving some equations. Let us begin with the simplest non-trivial case, which is when $\nu$ has three parts.
Take $\lambda=(\lambda_1,\lambda_2)$ and $\nu=(\nu_1,\nu_2,\nu_3)$ to be partitions of $n$ where $\lambda_1 \geq \nu_1 \geq \lambda_2$. Our aim is to determine $\Hom_{\mathcal{H}}(S^\nu,S^\lambda)$ by finding a basis for $\Psi(\nu,\lambda)$.
For $\max\{0,\lambda_2-\nu_3\} \leq i \leq \min\{\nu_2,\lambda_2\}$ let $\Theta_i:M^\nu\rightarrow S^\lambda$ be given by
\[\Theta_i(m_\nu) = \rep{1^{\nu_1}2^{\nu_2-i}3^{\nu_3-\lambda_2+i}}{2^i3^{\lambda_2-i}},\]
where we use the notation of Section~\ref{Lemma7Proof}.
Then $\Psi(\nu,\lambda)$ is the vector space of homomorphisms
\[\Theta=\sum_i a_i \Theta_i,\]
for $a_i \in \mathbb{C}$, which satisfy $\Theta(m_\nu h_{d,t})=0$ for $d=1,2$ and $1 \leq t \leq \nu_{d+1}$. Note that for such $d,t$, we have
\begin{align*}
\Theta_i(m_\nu h_{2,t}) & = \sum_j q^{i(t-j+i)} \gauss{\nu_2+t-j}{\nu_2-i} \gauss{j}{i} \rep{1^{\nu_1}2^{\nu_2+t-j}3^{\nu_3-t-\lambda_2+j}}{2^j3^{\lambda_2-j}}, \\
\Theta_i(m_\nu h_{1,t}) & = \sum_k (-1)^{k-i}q^{\binom{i-k}{2} +kt} \gauss{\nu_1+t-i}{\nu_1-k} \gauss{\lambda_2-k}{\lambda_2-i}\rep{1^{\nu_1+t}2^{\nu_2-t-k}3^{\nu_3-\lambda_2+k}}{2^k3^{\lambda_2-k}},
\end{align*}
where the sums are over all $j$ such that $\max\{0,\lambda_2-\nu_3+t\} \leq j \leq \min\{\lambda_2,\nu_2+t\}$ and all $k$ such that $\max\{0,\lambda_2-\nu_3\} \leq k \leq \min\{\lambda_2,\nu_2-t\}$.
We must therefore solve the systems of equations
\begin{align*}
\sum_i q^{i(t-j+i)} \gauss{\nu_2+t-j}{\nu_2-i} \gauss{j}{i} a_i &=0, &&(2,t,j) \\
\sum_i (-1)^{k-i}q^{\binom{i-k}{2} +kt} \gauss{\nu_1+t-i}{\nu_1-k} \gauss{\lambda_2-k}{\lambda_2-i}a_i & = 0, && (1,t,k)
\end{align*}
for $i,t,j,k$ within the limits described above. Let us redefine $\Psi=\Psi(\nu,\lambda)$ to be the solution space of the equations $(1,t,k)$ and $(2,t,j)$ and $\Phi=\Phi(\nu,\lambda)$ to be the solution space of the equations $(2,t,j)$.
Finally, for any $m \geq 0$, we define $m^\ast$ and $m'$ by writing $m=m^\ast e + m'$ where $0 \leq m' <e$. The equivalence relation $\equiv$ is assumed to be equivalence modulo $e$.
\begin{lemma} \label{GoodLemma}
Suppose $0 \leq M \leq N$ and that $(a_i)_{M \leq i \leq N}$ is a non-zero solution to the system of equations
\begin{align*}[j+1]a_j +q^{j+1}[K-j]a_{j+1} &=0, && M \leq j \leq N-1,
\end{align*}
where $N \leq K+1$.
If there exists neither $c$ with $M+1 \leq c \leq N$ and $c \equiv 0$ nor $d$ with $M+1 \leq d \leq N$ and $K-d+1 \equiv 0$ then $a_i \neq 0$ for all $M\leq i \leq N$. Otherwise $a_i \neq 0$ only if $i' \geq (K+1)'$, and furthermore if $M \leq i,k \leq N$ are such that $i', k' \geq (K+1)'$ and $i^\ast = k^\ast$ then $a_i \neq 0$ if and only if $a_k \neq 0$.
\end{lemma}
\begin{proof}
If neither $c$ nor $d$ as above exist then all terms $[M+1],\ldots,[N],[K-M],\ldots,[K-N+1]$ are non-zero and the result is clear. So suppose at least one of $c,d$ exists. Let $b=(K+1)'$ and suppose $M \leq i \leq N$ is such that $i'<b$. If $i^\ast e-1 \geq M$ then the equations
\begin{align*}[j+1]a_j +q^{j+1}[K-j]a_{j+1} &=0, && i^\ast e-1 \leq j \leq i-1,
\end{align*}
ensure that $a_i=0$. Similarly if $i^\ast e + b \leq N$ then the equations
\begin{align*}[j+1]a_j +q^{j+1}[K-j]a_{j+1} &=0, && i \leq j \leq i^\ast e+b-1,
\end{align*}
ensure that $a_i=0$. But if $M \geq i^\ast e$ and $N<i^\ast e+b$ then we cannot find $c$ or $d$ as above. Hence if $i'<b$ then $a_i=0$.
Now suppose $M \leq i,k \leq N$ are such that $i', k' \geq b$ and $i^\ast = k^\ast$. Assume $i \leq k$. The equations
\begin{align*}[j+1]a_j +q^{j+1}[K-j]a_{j+1} &=0, && i \leq j \leq k-1,
\end{align*}
ensure that $a_i=0$ if and only if $a_k=0$.
\end{proof}
\begin{lemma} \label{lemmalow}
Suppose $\lambda_2 < \nu_3$. Then
\[\dim(\Phi(\nu,\lambda)) \leq \begin{cases} 0, & e\leq \lambda_2, \\
1, & (\nu_2+1)' \leq \lambda_2 < e, \\
0, & \lambda_2 < (\nu_2+1)'.
\end{cases}\]
\end{lemma}
\begin{proof}
Suppose $(a_i)_{0 \leq i \leq \lambda_2} \in \Phi$.
Setting $t=1$, we have the equations
\begin{align*}
[\nu_2+1] a_0 & = 0, &&\\
[j+1]a_j +q^{j+1}[\nu_2-j]a_{j+1} &=0, && 0 \leq j \leq \lambda_2-1.
\end{align*}
Let $b=(\nu_2+1)'$. If $\lambda_2 <b$ then there can be no non-zero solution to these equations.
Otherwise, applying Lemma~\ref{GoodLemma}, $a_i \neq 0$ only if $b \leq i'$ and in this case $a_i \neq 0$ if and only if $a_{i^\ast e +b} \neq 0$, so that if $\lambda_2 < e+b$ then $\dim(\Phi) \leq 1$.
Suppose $\lambda_2 \geq e+b$ and consider the equation $(2,e,(r+1)e+b)$ where $0 \leq re+b \leq \lambda_2-e$. From this, and using Lemma~\ref{WhenZero},
\[0=\sum_{i=re+b}^{(r+1)e+b} q^{i(i-re-b)}\gauss{\nu_2-re-b}{\nu_2-i}\gauss{(r+1)e+b}{i} a_i = \gauss{(r+1)e+b}{e} a_{re+b} + \gauss{\nu_2-re-b}{e}a_{(r+1)e+b},\]
so that, applying Lemma~\ref{e}, $a_{re+b} \neq 0$ if and only if $a_{(r+1)e+b} \neq 0$. Hence if $\lambda_2 \geq e$ then $\dim(\Phi) \leq 1$ and if $(a_i) \in \Phi \setminus \{0\}$ then $a_{e-1} \neq 0$. But in this case we obtain a contradiction by considering Equation $(2,e,e-1)$ and using Lemma~\ref{e}:
\[0= \sum_{i=0}^{e-1} q^{i(e-1-i)} \gauss{e-1}{i} \gauss{\nu_2+1}{\nu_2-i}a_i = q^{\lambda_2 e} \gauss{\nu_2+1}{e} a_{e-1}\]
since $a_i = 0$ for $0 \leq i <b$ and the second Gaussian polynomial is zero for $b \leq i < e-1$. So $\dim(\Phi)=0$ if $\lambda_2 \geq e$.
\end{proof}
\begin{proposition}
Suppose that $\lambda_2 < \nu_3$. Then
\[\dim(\Psi(\nu,\lambda)) = \begin{cases}
1, & \lambda_2<e, \, \nu_2=e-1 \text{ and } \nu_1-\lambda_2+1 \equiv 0, \\
0, & \text{otherwise.}
\end{cases}\]
\end{proposition}
\begin{proof}
Let $b=(\nu_2+1)'$.
By Lemma~\ref{lemmalow}, we have $\dim(\Psi) \leq 1$ and if $\dim(\Psi)>0$ then $b \leq \lambda_2 <e$; furthermore if $(a_i)_{0 \leq i \leq \lambda_2} \in \Psi \setminus \{0\}$ then $a_i \neq 0$ if and only if $b \leq i \leq \lambda_2$. If $(a_i) \in \Psi\setminus\{0\}$ then it also satisfies
\begin{align*}
[\nu_1+1-k]a_k & = [\lambda_2-k]a_{k+1}, && 0 \leq k < \lambda_2, \\
[\nu_1+1-\lambda_2]a_{\lambda_2} &=0. &&
\end{align*}
Therefore if $\dim(\Psi)>0$ then $\nu_1-\lambda_2+1 \equiv 0$, and so $a_{\lambda_2-1}=a_{\lambda_2}$, so that by Equation $(2,1,\lambda_2-1)$ we have $\nu_2+1 \equiv 0$ and $b=0$. So if $\dim(\Psi) >0$, then $\Psi$ is spanned by $(a_i)$ where $a_i=1$ for all $i$.
Suppose $b \leq \lambda_2<e$ and that $\nu_1-\lambda_2+1 \equiv 0$.
Set $a_i =1$ for all $i$. Now if $\nu_2 \geq e$ then, by considering Equation $(1,e,0)$ and using Lemmas~\ref{e} and~\ref{Repeat}, we obtain the contradiction
\[0=\sum_{i=0}^{\lambda_2} (-1)^i q^{\binom{i}{2}}\gauss{\lambda_2}{i} \gauss{\nu_1+e-i}{\nu_1}a_i = q^{\lambda_2 e} \gauss{\nu_1-\lambda_2+e}{\nu_1-\lambda_2},\]
so if $\dim(\Psi)>0$ we must have $\nu_2 = e-1$.
We have shown that if $\dim(\Psi) =1$ then $\lambda_2<e$, $\nu_2=e-1$ and $\nu_1-\lambda_2+1 \equiv 0$, and the space $\Psi$ is spanned by the solution $(a_i)$ where $a_i=1$ for $0 \leq i \leq \lambda_2$. We now show these conditions are sufficient.
First we must show that the equations $(2,t,j)$ where $1 \leq t \leq \nu_3$ and $\max\{0,\lambda_2-\nu_3+t\} \leq j \leq \lambda_2$ are satisfied by the solution $a_i=1$ for all $i$. Using Lemma~\ref{NotRepeat} we have
\[\sum_i q^{i(t-j+i)} \gauss{\nu_2+t-j}{\nu_2-i} \gauss{j}{i} = \gauss{\nu_2+t}{t} =0\]
by Lemma~\ref{WhenZero}, since $\nu_2=e-1$, so these equations do hold. Now consider the equations $(1,t,k)$ where $1 \leq t \leq \nu_2=e-1$ and $0 \leq k \leq \min\{\lambda_2,\nu_2-t\}$. By Lemma~\ref{Repeat} we have
\begin{align*}
\sum_{i} (-1)^{k-i} q^{\binom{i-k}{2}} \gauss{\nu_1+t-i}{\nu_1-k} \gauss{\lambda_2-k}{\lambda_2-i}
&= \sum_{i} (-1)^i q^{\binom{i}{2}} \gauss{\lambda_2-k}{i} \gauss{\nu_1+t-k-i}{\nu_1-k} \\
& = q^{(\lambda_2-k)t} \gauss{\nu_1-\lambda_2+t}{t} \\
& =0
\end{align*}
again by Lemma~\ref{WhenZero}, since $\nu_1-\lambda_2 \equiv -1$, so these equations are also satisfied.
\end{proof}
We believe this proof should convince the reader that solving all the equations required for Propositions~\ref{first} to~\ref{last} is a lengthy business; but we hope that we have also convinced them that it is not particularly difficult. A strategy that works is as follows: write down the system of equations $(d,t,j)$ which needs to be solved, then use the equations $(d,1,j)$ and $(d,e,j)$ to come up with necessary conditions for the space $\Psi(\nu,\lambda)$ to be non-zero. These conditions turn out to be sufficient.
\section{Dipper-James Specht modules} \lambdabel{DJSpecht}
We conclude by making a connection with the Specht modules of Dipper and James.
Recall that for each partition $\lambda$ of $n$, Dipper and James~\cite[Section~4]{DJ:Hecke} defined an $\mathcal{H}$-module which they called a Specht module and which we shall denote $S(\lambda)$. The connection with the modules $S^\lambda$ is that
\begin{align*}
S(\lambda) \cong_{\mathcal{H}} (S^{\lambda})^\diamond&&&(\ddag)\end{align*}
where $M^\diamond$ denotes the dual of an $\mathcal{H}$-module $M$~\cite[Theorem~5.3]{Murphy}.
\begin{proposition}
We have the isomorphisms
\begin{align*}
\Hom_{\mathcal{H}}(S(\lambda),S(\nu)) &\cong_F \Hom_{\mathcal{H}}(S^{\nu},S^{\lambda}) \\
&\cong_F\Hom_{\mathcal{H}}(S^{\lambda'},S^{\nu'}) \\
&\cong_F\Hom_{\mathcal{H}}(S(\nu'),S(\lambda')).
\end{align*}
\end{proposition}
\begin{proof}
The first and last isomorphisms follow from Equation $(\ddag)$ above. Now, following~\cite[Lemma~2.3]{Murphy}, we define an $\mathcal{H}$-automorphism $\#$. For an $\mathcal{H}$-module $M$, define $M^\#$ to be the right $\mathcal{H}$-module with action defined by
$m \cdot h = mh^\#$ for all $m \in M$ and $h\in \mathcal{H}$. Then $(S^{\nu'})^\# = (S^\nu)^\diamond$ for any partition $\nu$ of $n$~\cite[Theorem~5.2]{Murphy}, and the middle isomorphism follows.
\end{proof}
If $q=-1$ then we might also wish to replace $\Hom_{\mathcal{H}}(\, , \,)$ with $\EHom_{\mathcal{H}}(\, , \,)$ above. We now use Theorem~\ref{Weyl}. The middle isomorphism follows since
\[\Hom_{\mathcal{S}}(\Delta^\nu,\Delta^\lambda) \cong\Hom_{\mathcal{S}}(\Delta^{\lambda'},\Delta^{\nu'});\]
see for example \cite[Lemma~3.4]{LM:rowhoms}. Recall that for each partition $\lambda$ of $n$, Dipper and James~\cite{DJ:qWeyl} defined an $\mathcal{S}$-module which they called a Weyl module and which we shall denote $\Delta(\lambda)$. The relationship with the Weyl modules of Theorem~\ref{Weyl} is that $\Delta(\lambda) \cong_{\mathcal{S}} \Delta^{\lambda'}$. Furthermore
\[\Hom_{\mathcal{S}}(\Delta(\lambda),\Delta(\nu))\cong \EHom_{\mathcal{H}}(S(\lambda),S(\nu))\]
and so
\[\EHom_{\mathcal{H}}(S^{\lambda'},S^{\nu'})\cong \EHom_{\mathcal{H}}(S(\lambda),S(\nu))\]
as required.
Since many authors prefer to work in the Dipper-James world, it is worth remarking, once and for all, that all the relevant combinatorics can be translated backwards and forwards without change.
In particular, there exists an analogue of Theorem~\ref{Lemma7}. Suppose $\lambda$ is a partition of $n$ and $\nu$ a composition of $n$. Recall that for each $\mathsf{S} \in \RowT(\lambda,\nu)$ Dipper and James~\cite{DJ:Hecke} defined a homomorphism from $S^\lambda$ to $M^\nu$, which we shall now denote $\varphi_{\mathsf{S}}$.
\begin{theorem} \label{DJ:Lemma7}
Suppose $\lambda=(\lambda_1,\ldots,\lambda_a)$ is a partition of $n$ and $\nu=(\nu_1,\ldots,\nu_b)$ is a composition of $n$.
Let $\mathsf{S} \in \RowT(\lambda,\nu)$.
\begin{enumerate}
\item
Suppose $1 \leq r\leq a-1$ and that $1 \leq d \leq b$. Let
\[\mathcal{G} =\left\{g=(g_1,g_2,\ldots,g_b) \mid g_d=0, \, \bar{g}=\mathsf{S}^d_{r+1} \text{ and } g_i \leq \mathsf{S}^{i}_{r} \text{ for } 1 \leq i \leq b\right\}.\]
For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the row-standard tableau formed by moving all entries equal to $d$ from row $r+1$ to row $r$ and for $i \neq d$ moving $g_i$ entries equal to $i$ from row $r$ to row $r+1$. Then
\[\varphi_\mathsf{S} = (-1)^{\mathsf{S}^d_{r+1}} q^{-\binom{\mathsf{S}^d_{r+1}+1}{2}} q^{-\mathsf{S}^d_{r+1}\mathsf{S}^{<d}_{r+1}} \sum_{g \in \mathcal{G}} q^{\bar{g}_{d-1}} \prod_{i=1}^b q^{g_i \mathsf{S}^{<i}_{r+1}} \gauss{\mathsf{S}^i_{r+1}+g_i}{g_i}\varphi_{\mathsf{U}_g}.\]
\item
Suppose $1 \leq r\leq a-1$ and $\lambda_r=\lambda_{r+1}$ and that $1 \leq d \leq b$. Let
\[\mathcal{G} =\left\{g=(g_1,g_2,\ldots,g_b) \mid g_d=0, \, \bar{g} = \mathsf{S}^d_r \text{ and } g_i \leq \mathsf{S}^{i}_{r+1} \text{ for } 1 \leq i \leq b \right\}.\]
For $g \in \mathcal{G}$, let $\mathsf{U}_g$ be the row-standard tableau formed by moving all entries equal to $d$ from row $r$ to row $r+1$ of $\mathsf{S}$ and for $i \neq d$ moving $g_i$ entries equal to $i$ from row $r+1$ to row $r$. Then
\[\varphi_\mathsf{S} = (-1)^{\mathsf{S}^d_{r}} q^{-\binom{\mathsf{S}^d_{r}}{2}} q^{-\mathsf{S}^d_r \mathsf{S}^{>d}_r} \sum_{g \in \mathcal{G}} q^{-\bar{g}_{d-1}} \prod_{i=1}^b q^{g_i \mathsf{S}^{>i}_{r}} \gauss{\mathsf{S}^i_{r}+g_i}{g_i} \varphi_{\mathsf{U}_g}.\]
\end{enumerate}
\end{theorem}
The proof of Theorem~\ref{DJ:Lemma7} involves working through the same steps as the proof of Theorem~\ref{Lemma7}. Rather than working modulo $\mathcal{H}^{\lambda}$, we now use the fact that $S(\lambda)$ is in the kernel of any homomorphism $M(\lambda) \rightarrow S(\sigma)$ where $\sigma \rhd \lambda$. We leave the details to the reader.
\begin{ex}
Let $\lambda=(3,3)$ and $\nu=(2,1,1,1,1)$ (as in Example~\ref{NiceEx}).
Recall that $S(\lambda)$ is generated by $m_\lambda X$ for a certain $X \in \mathcal{H}$ (see~\cite{DJ:Hecke} for details) and that, by the kernel intersection theorem,
\[0=m_{(4,2)}(I+T_4+T_4T_5)X=(1+T_3+T_2T_3+T_1T_2T_3)m_{(3,3)}X.\]
Let $\mathsf{S}=\tab(114,235)$. Then
\begin{align*}
\varphi_{\mathsf{S}}(m_\lambda X) &= m_{\nu} T_4 T_3 (I+T_2+T_2T_1)(I+T_4)(I+T_5+T_5T_4)X \\
&=T_4 T_3 m_{\lambda} X \\
&=\hat{q}^{-1} T_4T_3T_4 m_{\lambda}X \\
&= \hat{q}^{-1} T_3T_4T_3 m_\lambda X \\
&= - \hat{q}^{-1} T_3T_4(I+T_2T_3+T_1T_2T_3)m_\lambda X \\
&=- (T_3 +\hat{q}^{-1}T_3T_2T_4T_3 + \hat{q}^{-1}T_3T_4T_1T_2T_3)m_\lambda X \\
&= - m_\nu T_3 (I+T_2+T_2T_1)(I+T_4)(I+T_5+T_5T_4)X \\
&\hspace*{15mm} -\hat{q}^{-1} m_\nu T_3T_2 T_4T_3 (I+T_4)(I+T_5+T_5T_4)(I+T_1)(I+T_2+T_2T_1)X \\
&= - \varphi_{\mathsf{U}_1}(m_\lambda X) -\hat{q}^{-1} \varphi_{\mathsf{U}_2}(m_\lambda X)
\end{align*}
where
\[\mathsf{U}_1=\tab(113,245), \qquad \qquad \mathsf{U}_2=\tab(134,125).\]
\end{ex}
\end{document}
\begin{document}
\title{The $3$-rainbow index and connected dominating sets\footnote{Supported by NSFC No.11371205 and
PCSIRT.}}
\begin{abstract}
A tree in an edge-colored graph is said to be rainbow if no two
edges on the tree share the same color. An edge-coloring of $G$ is
called 3-rainbow if for any three vertices in $G$, there exists a
rainbow tree connecting them. The 3-rainbow index $rx_3(G)$ of $G$
is defined as the minimum number of colors that are needed in a
3-rainbow coloring of $G$. This concept, introduced by Chartrand et
al., can be viewed as a generalization of the rainbow connection. In
this paper, we study the 3-rainbow index by using connected
three-way dominating sets and 3-dominating sets. We show that for
every connected graph $G$ on $n$ vertices with minimum degree at
least $\delta$ ($3\leq\delta\leq5$), $rx_{3}(G)\leq
\frac{3n}{\delta+1}+4$, and the bound is tight up to an additive
constant; whereas for every connected graph $G$ on $n$ vertices with
minimum degree at least $\delta$ ($\delta\geq3$), we get that
$rx_{3}(G)\leq n\frac{\ln(\delta+1)}{\delta+1}(1+o_{\delta}(1))+5$.
In addition, we obtain some tight upper bounds of the 3-rainbow
index for some special graph classes, including threshold graphs,
chain graphs and interval graphs.
\end{abstract}
\noindent{\bf Keywords:} $3$-rainbow index, connected dominating
sets, rainbow paths
\noindent{\bf AMS subject classification 2010:} 05C15, 05C38, 05C69.
\section {\large Introduction}
All graphs in this paper are undirected, finite and simple. We
follow \cite{Bondy} for graph theoretical notation and terminology
not described here. Let $G$ be a nontrivial connected graph with an
\emph{edge-coloring} $c: E(G)\rightarrow\{1, 2,\cdots, t\}, t \in
\mathbb{N}$, where adjacent edges may be colored the same. A path is
said to be a \emph{rainbow path} if no two edges on the path have
the same color. An edge-colored graph $G$ is called \emph{rainbow
connected} if for every pair of distinct vertices of $G$ there
exists a rainbow path connecting them. The \emph{rainbow connection
number} of $G$, denoted by $rc(G)$, is defined as the minimum number
of colors that are needed in order to make $G$ rainbow connected.
The \emph{rainbow $k$-connectivity} of $G$, denoted by $rc_{k}(G)$,
is defined as the minimum number of colors in an edge-coloring of
$G$ such that every two distinct vertices of $G$ are connected by
$k$ internally disjoint rainbow paths. These concepts were
introduced by Chartrand et al. in \cite{ChartrandGP, Chartrand1}.
Recently, a lot of results on rainbow connection have been
published. The interested reader can see \cite{LiSun1, LiSun2}
for a survey on this topic.
The $(k,\ell)$-rainbow index was also introduced by Chartrand et al.
in \cite{Chartrand2}, which can be viewed as a generalization of the
rainbow connection and rainbow connectivity. We call a tree $T$ of
an edge-colored graph $G$ a \emph{rainbow tree} if no two edges of
$T$ have the same color. For $S\subseteq V(G)$, a \emph{rainbow
$S$-tree} is a rainbow tree connecting the vertices of $S$. Suppose
that $\{T_{1},T_{2},\cdots, T_{\ell}\}$ is a set of rainbow
$S$-trees. They are called \emph{internally disjoint} if
$E(T_{i})\cap E(T_{j})=\emptyset$ and $V(T_{i})\cap V(T_{j})=S$
for every pair of distinct integers $i,j$ with $1\leq i,j\leq \ell$
(Note that these trees are vertex-disjoint in $G\setminus S$). Given
two positive integers $k$, $\ell$ with $k\geq 2$, the
\emph{$(k,\ell)$-rainbow index} $rx_{k,\ell}(G)$ of $G$ is the
minimum number of colors needed in an edge-coloring of $G$ such that
for any set $S$ of $k$ vertices of $G$, there exist $\ell$
internally disjoint rainbow $S$-trees. In particular, for $\ell=1$,
we often write $rx_{k}(G)$ rather than $rx_{k,1}(G)$ and call it the
\emph{$k$-rainbow index}. An edge-coloring of $G$ is called a
\emph{$k$-rainbow coloring} if for any set $S$ of $k$ vertices of
$G$, there exists a rainbow $S$-tree. A simple result for the
$k$-rainbow index \cite{Chartrand2} is that $k-1\leq rx_k(G)\leq
n-1$. It is easy to see that $rx_{2,\ell}(G)=rc_{\ell}(G)$. In the
sequel, we always assume $k\geq 3$. We refer to
\cite{Cai,Cai2,Cai3,Chen,Li3,Liu} for more details about the
$(k,\ell)$-rainbow index.
Computing the rainbow connection number of a graph is NP-hard
\cite{Chakraborty}, so is computing the $(k,\ell)$-rainbow index.
For this reason, one of the most important goals for studying
rainbow connection number and rainbow index is to obtain good upper
and lower bounds. In the search toward good upper bounds, an idea
that turned out to be successful more than once is considering the
``strengthened'' connected dominating set: find a suitable
edge-coloring of the induced graph on such a set, and then extend it
to the whole graph using a constant number of additional colors.
Given a graph $G$, a set $D\subseteq V(G)$ is called a {\it
dominating set} if every vertex of $V\backslash D$ is adjacent to at
least one vertex of $D$. Further, if the subgraph $G[D]$ of $G$
induced by $D$ is connected, we call $D$ a {\it connected dominating
set} of $G$. The {\it domination number} $\gamma(G)$ is the number
of vertices in a minimum dominating set for $G$. Similarly, the {\it
connected domination number} $\gamma_{c}(G)$ is the number of
vertices in a minimum connected dominating set for $G$.
Let $k$ be a positive integer. A dominating set $D$ of $G$ is called
a {\it $k$-way dominating set} if $d(v)\geq k$ for every vertex
$v\in V\setminus D$. In addition, if $G[D]$ is connected, we call
$D$ a {\it connected $k$-way dominating set}. A set $D\subseteq
V(G)$ is called a {\it $k$-dominating set} of $G$ if every vertex of
$V\backslash D$ is adjacent to at least $k$ distinct vertices of
$D$. Furthermore, if $G[D]$ is connected, we call $D$ a {\it
connected $k$-dominating set}. Obviously, a (connected)
$k$-dominating set is also a (connected) $k$-way dominating set, but
the converse is not true.
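The failure of the converse can be seen on a small example. The sketch below (our own, with illustrative names) checks that in $K_4$ the set $D=\{0\}$ is a (connected) 2-way dominating set but not a 2-dominating set, since each vertex outside $D$ has degree $3$ but only one neighbor in $D$.

```python
def is_dominating(adj, D):
    """Every vertex outside D has a neighbor in D."""
    return all(v in D or adj[v] & D for v in adj)

def is_k_way_dominating(adj, D, k):
    """Dominating, and every vertex outside D has degree at least k."""
    return is_dominating(adj, D) and all(len(adj[v]) >= k for v in adj if v not in D)

def is_k_dominating(adj, D, k):
    """Every vertex outside D has at least k neighbors in D."""
    return all(v in D or len(adj[v] & D) >= k for v in adj)

# K_4: the set {0} dominates and every other vertex has degree 3,
# but each vertex outside {0} has only one neighbor inside it.
K4 = {v: {u for u in range(4) if u != v} for v in range(4)}
```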
There have been several results revealing the close relation between
the dominating sets and the rainbow connection number and rainbow
index.
\begin{theorem}\cite{Chandran}\label{thm1}
If $D$ is a connected two-way dominating set of a connected graph $G$,
then $rc(G)\leq rc(G[D])+3$.
\end{theorem}
In \cite{Chandran}, the authors employed Theorem \ref{thm1} to get
some tight upper bounds for the rainbow connection number
of many special graph classes, which were otherwise difficult to obtain.
\begin{theorem}\cite{Liu}\label{thm2}
Let $G$ be a connected graph with minimum degree $\delta(G)\geq 3$.
If $D$ is a connected 2-dominating set of $G$, then $rx_{3}(G)\leq
rx_{3}(G[D])+4$ and the bound is tight.
\end{theorem}
From Theorem \ref{thm2}, the authors determined a tight upper bound
for the 3-rainbow index of the complete bipartite graphs $K_{s,t} \
(3\leq s \leq t)$.
The proofs of the above two theorems are similar. First color the
edges in $G[D]$ with $k$ different colors, where $k=rc(G[D])$ or
$k=rx_3(G[D])$, respectively. Then select a spanning tree in every
connected component of $H=G-D$. This yields a spanning forest $F$ of
$H$; let $X$ and $Y$ be the two classes of a bipartition of $V(H)$
defined by the forest $F$. Coloring the edges between $X$ and $D$,
the edges between $Y$ and $D$ and the edges between $X$ and $Y$ with
suitable colors then gives the desired edge-coloring. Note that in
this process all the edges in $E(H)-E(F)$ are ignored.
In this paper, we will take the edges in $E(H)-E(F)$ into
consideration to get a more subtle coloring strategy. We show that
for a connected graph $G$, $rx_{3}(G)\leq rx_{3}(G[D])+6$, where $D$
is a connected three-way dominating set of $G$. Moreover, this bound
is tight. By using the results on spanning trees with many leaves,
we obtain that $rx_3(G)\leq\frac{3n}{\delta+1}+4$ for every
connected graph $G$ on $n$ vertices with minimum degree at least
$\delta$ ($3\leq\delta\leq5$), and the bound is tight up to an
additive constant; whereas for every connected graph $G$ on $n$
vertices with minimum degree at least $\delta$ ($\delta\geq3$), we
get that $rx_{3}(G)\leq
n\frac{\ln(\delta+1)}{\delta+1}(1+o_{\delta}(1))+5$. In addition,
when considering a connected 3-dominating set $D$ of $G$, we prove
that $rx_{3}(G)\leq rx_{3}(G[D])+3$, and the bound is tight. Applying
this idea, we also obtain tight upper bounds on the 3-rainbow index
for some special graph classes, including threshold graphs, chain
graphs and interval graphs.
\section{Preliminaries}
For a graph $G$, we use $V(G)$, $E(G)$, $|G|$, $\delta(G)$, and
$diam(G)$ to denote its vertex set, edge set, order (number of
vertices), minimum degree and the diameter (maximum distance between
every pair of vertices) of $G$, respectively. For $D\subseteq V(G)$,
let $\overline{D}=V(G)\setminus D$, and $G[D]$ be the subgraph of
$G$ induced on $D$. For $v\in V(G)$, let $N(v)$ denote the set of
neighbors of $v$. For two disjoint subsets $X$ and $Y$ of $V(G)$,
$E[X,Y]$ denotes the set of edges of $G$ between $X$ and $Y$.
\begin{definition}
BFS (breadth-first search) is a strategy for searching in a graph.
It begins at a root and inspects all its neighbors. Then for each of
those neighbors in turn, it inspects their neighbors which were
unvisited, and so on until all the vertices in the graph are
visited.
\end{definition}
\begin{definition}
A BFS-tree (breadth-first search tree) is a spanning rooted tree
returned by BFS. Let $T$ be a BFS-tree with $r$ as its root. For a
vertex $v$, the height of $v$ is the distance between $v$ and $r$.
All the vertices of height $k$ form the $k$th level of $T$. The
ancestors of $v$ are the vertices on the unique $\{v,r\}$-path in
$T$. The parent of $v$ is its neighbor on the unique $\{v,r\}$-path
in $T$; its other neighbors in $T$ are called the children of $v$. The
siblings of $v$ are the vertices in the same level as $v$.
\end{definition}
\noindent\textbf{Remark:} BFS-trees have a nice property: every
edge of the graph joins vertices on the same or consecutive levels.
It is not possible for an edge to skip a level. Thus a neighbor of
a vertex $v$ has three possibilities: (1) a sibling of $v$; (2) the
parent of $v$ or a right sibling of the parent of $v$; (3) a child
of $v$ or a left sibling of the children of $v$; see Figure 1.
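The level structure and the no-skipping property from the remark can be sketched as follows (our own illustration; the test graph is a 5-cycle, not from the paper):

```python
from collections import deque

def bfs_levels(adj, root):
    """Level (= distance from the root) of every vertex, computed by BFS."""
    level = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in level:
                level[w] = level[u] + 1
                queue.append(w)
    return level

def no_level_skipping(adj, level):
    """Every edge joins vertices on the same or consecutive levels."""
    return all(abs(level[u] - level[w]) <= 1 for u in adj for w in adj[u])
```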
\begin{definition}
The Steiner distance $d(S)$ of a set $S$ of vertices in a graph $G$
is the minimum size of a tree in $G$ containing $S$. The $k$-Steiner
diameter $sdiam_k(G)$ of $G$ is the maximum Steiner distance of $S$
among all the sets $S$ with $k$ vertices in $G$. Obviously,
$sdiam_2(G)=diam(G)$ and $sdiam_k(G)\leq sdiam_{k+1}(G)$.
\end{definition}
\begin{definition}
Let $G$ be a graph, $D\subseteq V(G)$ and $v\in V(G)\setminus D$. We
call a path $P=v_0 v_1\cdots v_k$ a $v$-$D$ path if $v_0=v$ and
$V(P)\cap D=\{v_k\}$. Two or more paths are called internally
disjoint if none of them contains an inner vertex of another.
\end{definition}
\begin{definition}
An edge-colored graph is rainbow if no two edges in the graph share the same color.
\end{definition}
\begin{definition}
Let $D$ be a dominating set of a graph $G$. For $v\in \overline{D}$,
its neighbors in $D$ are called feet of $v$, and the corresponding
edges are called legs of $v$.
\end{definition}
\begin{definition}
A graph $G$ is called a threshold graph, if there exists a weight
function $w: V(G)\rightarrow \mathbb{R}$ and a real constant $t$ such that
two vertices $u, v\in V(G)$ are adjacent if and only if
$w(u)+w(v)\geq t$. We call $t$ the threshold for $G$.
\end{definition}
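A minimal sketch of this definition (our own, with illustrative names): the edge set of a threshold graph is determined entirely by the weights and the threshold.

```python
def threshold_graph(weights, t):
    """Edge set of the threshold graph on vertices 0..n-1:
    u ~ v if and only if weights[u] + weights[v] >= t."""
    n = len(weights)
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if weights[u] + weights[v] >= t}
```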
\begin{definition}
A bipartite graph $G(A,B)$ is called a chain graph, if the vertices
of $A$ can be ordered as $A=(a_1,a_2, \ldots ,a_k)$ such that
$N(a_1)\subseteq N(a_2)\subseteq \ldots\subseteq N(a_k)$.
\end{definition}
\begin{definition}
An intersection graph of a family $\mathcal{F}$ of sets is a graph
whose vertices can be mapped to the sets in $\mathcal{F}$ such that
there is an edge between two vertices in the graph if and only if
the corresponding two sets in $\mathcal{F}$ have a non-empty
intersection. An interval graph is an intersection graph of
intervals on the real line.
\end{definition}
\section{Main results}
\begin{theorem}\label{thm3}
If $D$ is a connected three-way dominating set of a connected graph $G$,
then $rx_{3}(G)\leq rx_{3}(G[D])+6$. Moreover, the bound is tight.
\end{theorem}
The proof of Theorem \ref{thm3} is given in Section 4. Let us first
show how this implies the following results.
\begin{corollary}\label{cor0}
For every connected graph $G$ with $\delta(G)\geq3$,
$rx_{3}(G)\leq \gamma_c(G)+5$.
\end{corollary}
\begin{proof}
In this case, every connected dominating set of $G$ is a connected
three-way dominating set. Now take a minimum connected dominating
set $D$ in $G$. Then $rx_3(G[D])\leq |D|-1=\gamma_c(G)-1$. It
follows from Theorem \ref{thm3} that $rx_3(G)\leq rx_3(G[D])+6\leq
\gamma_c(G)+5$.
\end{proof}
From the following lemma, we can get the next corollary.
\begin{lemma}\label{lem1}
(1)\cite{Kleitman} Every connected graph on $n$ vertices with
minimum degree $\delta\geq3$ has a spanning tree with at least
$\frac{1}{4}n+2$ leaves;
(2)\cite{Griggs} Every connected graph on $n$ vertices with minimum degree
$\delta\geq4$ has a spanning tree with at least $\frac{2}{5}n+\frac{8}{5}$ leaves;
(3)\cite{Griggs} Every connected graph on $n$ vertices with minimum degree
$\delta\geq5$ has a spanning tree with at least $\frac{1}{2}n+2$ leaves.
\end{lemma}
\begin{corollary}\label{cor1}
(1) For every connected graph $G$ on $n$ vertices with $\delta(G)\geq3$,
$rx_{3}(G)\leq \frac{3}{4}n+3$.
(2) For every connected graph $G$ on $n$ vertices with $\delta(G)\geq4$,
$rx_{3}(G)\leq \frac{3}{5}n+\frac{17}{5}$.
(3) For every connected graph $G$ on $n$ vertices with $\delta(G)\geq5$,
$rx_{3}(G)\leq \frac{1}{2}n+3$.
Moreover, these bounds are tight up to an additive constant.
\end{corollary}
\begin{proof}
We only prove (1); (2) and (3) can be derived by the same arguments.
Clearly, we can take a connected dominating set consisting of all
the non-leaves in the spanning tree. Thus by Lemma \ref{lem1}, for
every connected graph $G$ on $n$ vertices with minimum degree
$\delta(G)\geq3$, $\gamma_c(G)\leq
n-(\frac{1}{4}n+2)=\frac{3}{4}n-2$. Then it follows from Corollary
\ref{cor0} that $rx_{3}(G)\leq\frac{3}{4}n+3$.
On the other hand, the factors in these bounds cannot be improved,
since there exist infinitely many graphs $G^*$ such that
$rx_{3}(G^*)\geq \frac{3}{\delta+1}n-\frac{\delta+7}{\delta+1}$. We
construct the graphs as follows (the construction was also mentioned
in \cite{Caro}): first take $m$ copies of $K_{\delta+1}$, denoted by
$X_1, X_2, \ldots, X_m$ and label the vertices of $X_i$ with
$x_{i,1},\ldots, x_{i,\delta+1}$. Then take two copies of
$K_{\delta+2}$, denoted by $X_0$ and $X_{m+1}$ and similarly label
their vertices. Now join $x_{i,2}$ and $x_{i+1,1}$ for
$i=0,1,\ldots, m$ with an edge and delete the edges $x_{i,1}x_{i,2}$
for $i=0,1,\ldots, m+1$. See Figure 2 for $\delta=3$. It is easy to
see that $diam(G^*)=\frac{3}{\delta+1}n-\frac{\delta+7}{\delta+1}$.
The $k$-Steiner diameter of a graph is a trivial lower bound for its
$k$-rainbow index \cite{Chartrand2}, and so $rx_3(G^*)\geq
sdiam_3(G^*)\geq
diam(G^*)=\frac{3}{\delta+1}n-\frac{\delta+7}{\delta+1}$. For
$\delta=3$, $rx_3(G^*)\geq\frac{3}{4}n-\frac{5}{2}$; for $\delta=4$,
$rx_3(G^*)\geq\frac{3}{5}n-\frac{11}{5}$; for $\delta=5$,
$rx_3(G^*)\geq\frac{1}{2}n-2$. Therefore, all these upper bounds are
tight up to an additive constant.
\end{proof}
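As a sanity check on the construction above, the following Python sketch (the encoding of $G^*$ by pairs (block index, label) is our own, not from the paper) builds $G^*$ for small $m$ and $\delta$ and verifies that the minimum degree is $\delta$ and that $diam(G^*)=\frac{3}{\delta+1}n-\frac{\delta+7}{\delta+1}$:

```python
from collections import deque
from itertools import combinations

def build_gstar(m, delta):
    """Build G*: m copies of K_{delta+1} between two copies of K_{delta+2},
    with bridge edges x_{i,2}-x_{i+1,1} added and the edges x_{i,1}x_{i,2}
    deleted.  Vertices are pairs (block index, label)."""
    adj = {}
    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    sizes = [delta + 2] + [delta + 1] * m + [delta + 2]
    for i, s in enumerate(sizes):
        for a, b in combinations(range(1, s + 1), 2):
            add_edge((i, a), (i, b))
    for i in range(m + 1):                    # bridges x_{i,2} -- x_{i+1,1}
        add_edge((i, 2), (i + 1, 1))
    for i in range(m + 2):                    # delete x_{i,1}x_{i,2}
        adj[(i, 1)].discard((i, 2))
        adj[(i, 2)].discard((i, 1))
    return adj

def diameter(adj):
    def ecc(s):                               # eccentricity via BFS
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(ecc(v) for v in adj)

for m in range(1, 6):
    for delta in (3, 4, 5):
        adj = build_gstar(m, delta)
        n = len(adj)
        assert min(len(nb) for nb in adj.values()) == delta
        assert diameter(adj) * (delta + 1) == 3 * n - (delta + 7)
```

In each block the deleted edge $x_{i,1}x_{i,2}$ forces a two-step detour, so a shortest path between the extreme blocks spends $2$ steps inside each of the $m+2$ blocks plus $m+1$ bridge edges, giving $diam(G^*)=3m+5$.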
As to general $\delta$, Caro et al.\ \cite{Caro1} proved that every
connected graph $G$ on $n$ vertices with minimum degree $\delta$
satisfies
$\gamma_c(G)\leq n\frac{\ln(\delta+1)}{\delta+1}(1+o_{\delta}(1))$.
Combining with Corollary \ref{cor0}, we get the following result.
\begin{corollary}\label{cor2}
For every connected graph $G$ on $n$ vertices with minimum degree
$\delta$ ($\delta\geq3$), $rx_{3}(G)\leq
n\frac{\ln(\delta+1)}{\delta+1}(1+o_{\delta}(1))+5$.
\end{corollary}
We do not believe the above bound to be optimal for $rx_3(G)$ in terms
of $\delta$. We pose the following conjecture, which has already
been verified for $\delta=3,4,5$ by Corollary \ref{cor1}. Note that if
the conjecture is true, it gives an upper bound tight up to an
additive constant by the construction of the graph $G^*$.
\begin{conjecture}
For every connected graph $G$ on $n$ vertices with minimum degree
$\delta$ ($\delta\geq3$), $rx_{3}(G)\leq\frac{3n}{\delta+1}+C$,
where $C$ is a positive constant.
\end{conjecture}
With regard to the graphs possessing vertices of degree 1 or 2, we
obtain the following result.
\begin{corollary}
For every connected graph $G$, $rx_3(G)\leq \gamma_c(G)+n_1+n_2+5$,
where $n_1$ and $n_2$ denote the number of vertices of degrees 1 and
2 in $G$, respectively.
\end{corollary}
\begin{proof}
Obviously, adding all the vertices of degrees 1 and 2 into a minimum
connected dominating set forms a connected three-way dominating set
in $G$ of size no more than $\gamma_c(G)+n_1+n_2$. Consequently, by
Theorem \ref{thm3}, $rx_3(G)\leq \gamma_c(G)+n_1+n_2+5$.
\end{proof}
We proceed with another upper bound for the 3-rainbow index of
graphs concerning the connected 3-dominating set.
\begin{theorem}\label{thm4}
If $D$ is a connected 3-dominating set of a connected graph $G$ with $\delta(G)\geq3$,
then $rx_{3}(G)\leq rx_{3}(G[D])+3$. Moreover, the bound is tight.
\end{theorem}
\begin{proof}
Since $D$ is a connected 3-dominating set, every vertex in
$\overline{D}$ has at least three legs. Color one of them with 1,
one of them with 2, and all the others with 3. Let $k=rx_3(G[D])$.
Then we can color the edges in $G[D]$ with $k$ different colors from
$\{4,5,\ldots, k+3\}$ such that for every triple of vertices in $D$,
there exists a rainbow tree in $G[D]$ connecting them. If there
remain uncolored edges in $G$, we color them with 1.
Next we will show that this edge-coloring is a 3-rainbow coloring of
$G$. For any triple $\{u,v,w\}$ of vertices in $G$, if $(u,v,w)\in
D\times D\times D$, then there is already a rainbow tree connecting
them in $G[D]$. If one of them is in $\overline{D}$, say $(u,v,w)
\in \overline{D}\times D\times D$, join any leg of $u$ (colored by
1, 2, or 3) with the rainbow tree connecting $v,w$ and the
corresponding foot of $u$ in $G[D]$. If two of them are in
$\overline{D}$, say $(u,v,w) \in \overline{D}\times
\overline{D}\times D$, join one leg of $u$ colored by $1$ and one leg
of $v$ colored by $2$ with the rainbow tree connecting $w$ and the
corresponding feet of $u,v$ in $G[D]$. If $(u,v,w)\in
\overline{D}\times \overline{D}\times \overline{D}$, join one leg of
$u$ colored by $1$, one leg of $v$ colored by $2$, and one leg of $w$
colored by $3$ with the rainbow tree connecting the corresponding
feet of $u,v,w$ in $G[D]$.
The tightness of the bound can be seen from the next corollary.
\end{proof}
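As a small illustration of this coloring recipe (the toy graph and the brute-force search below are ours, not from the paper), the following sketch applies it to a graph in which $G[D]$ is a triangle, every vertex of $\overline{D}$ has exactly three legs, and the triangle is 3-rainbow colored with the fresh colors $4,5$; it then checks by exhaustion that every triple of vertices is connected by a rainbow tree:

```python
from itertools import combinations

# D = {0,1,2} induces a triangle; vertices 3..6 each have three legs,
# colored 1, 2 and 3 as in the proof; triangle edges use colors 4, 5.
edges = {(0, 1): 4, (1, 2): 5, (0, 2): 4}
outside = [3, 4, 5, 6]
for v in outside:
    edges[(v, 0)], edges[(v, 1)], edges[(v, 2)] = 1, 2, 3

def connects(edge_set, triple):
    """True if edge_set forms a connected subgraph containing triple
    (naive union-find over the touched vertices)."""
    comp = {}
    def find(x):
        while comp.get(x, x) != x:
            x = comp[x]
        return x
    for u, v in edge_set:
        comp.setdefault(u, u); comp.setdefault(v, v)
        comp[find(u)] = find(v)
    return all(t in comp for t in triple) and \
        len({find(t) for t in triple}) == 1

all_edges = list(edges)
vertices = [0, 1, 2] + outside
for triple in combinations(vertices, 3):
    found = False
    for r in range(2, 6):          # a rainbow tree here needs at most 5 edges
        for es in combinations(all_edges, r):
            colors = [edges[e] for e in es]
            if len(set(colors)) == r and connects(es, triple):
                found = True
                break
        if found:
            break
    assert found, triple
```

For a triple of three outside vertices the witnessing tree is exactly the one described in the proof: three legs colored $1,2,3$ plus a rainbow tree in $G[D]$ on the feet.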
As immediate consequences of Theorem \ref{thm3} and Theorem
\ref{thm4}, we get the following:
\begin{corollary}
Let $G$ be a connected graph with $\delta(G)\geq3$.
(1) if $G$ is a threshold graph, then $rx_{3}(G)\leq 5$;
(2) if $G$ is a chain graph, then $rx_{3}(G)\leq 6$;
(3) if $G$ is an interval graph, then $rx_{3}(G)\leq diam(G)+4$, and
thus $diam(G) \leq rx_{3}(G)\leq diam(G)+4$.
Moreover, all these upper bounds are tight.
\end{corollary}
\begin{proof}
(1) Suppose that $V(G)=\{v_1,v_2,\ldots, v_n\}$ where $w(v_1)\geq
w(v_2)\geq\ldots\geq w(v_n)$. Since the minimum degree of $G$ is at
least three, $v_i$ ($1\leq i\leq 3$) is adjacent to all the other
vertices in $G$. Thus $D=\{v_1,v_2,v_3\}$ is a connected
3-dominating set of $G$. Note that $D$ induces a $K_3$, so
$rx_3(G[D])=2$. It follows from Theorem \ref{thm4} that $rx_3(G)\leq
rx_3(G[D])+3=5$.
(2) Suppose that $G=G(A,B)$ and the vertices of $A$ can be ordered
as $A=(a_1,a_2, \ldots ,a_k)$ such that $N(a_1)\subseteq
N(a_2)\subseteq \ldots\subseteq N(a_k)$. Since the minimum degree of
$G$ is at least three, $a_i$ ($k-2\leq i\leq k$) is adjacent to all
the vertices in $B$, and $N(a_1)$ has at least three vertices, say
$\{b_1,b_2,b_3\}$. Clearly $b_i$ ($1\leq i\leq 3$)
is adjacent to all the vertices in $A$. Thus
$D=\{a_{k-2},a_{k-1},a_{k}, b_1,b_2,b_3\}$ is a connected
3-dominating set of $G$. Note that $D$ induces a $K_{3,3}$, so
$rx_3(G[D])=3$ (see \cite{Chen}). It follows from Theorem \ref{thm4} that $rx_3(G)\leq
rx_3(G[D])+3=6$.
(3) If $G$ is isomorphic to a complete graph, then $rx_3(G)=2$ or
$3$ (see \cite{Chartrand2}), and the assertion holds trivially. Otherwise, it was shown in
\cite{Chandran} that every interval graph $G$ which is not
isomorphic to a complete graph has a dominating path $P$ of length
at most $diam(G)-2$. Since $\delta(G)\geq3$, $V(P)$ is a
connected three-way dominating set of $G$. It follows from Theorem
\ref{thm3} that $rx_3(G)\leq rx_3(P)+6\leq diam(G)+4$. On the other
hand, $rx_3(G)\geq sdiam_3(G) \geq diam(G)$. We conclude that for a
connected interval graph $G$ with $\delta(G)\geq3$, $diam(G)\leq
rx_3(G)\leq diam(G)+4$.
Here we give examples to show the tightness of these upper bounds.
\noindent\textbf{Example 1:} A threshold graph $G$ with
$\delta(G)\geq3$ and $rx_3(G)=5$.
Consider the graph in Figure~3, where $t\geq2\times4^3+1$. It is
easy to see that it is a threshold graph ($y_1, y_2, y_3$ can be
given weight 1, all other vertices weight 0, and the threshold set to 1). By
contradiction, we assume that $G$ has a 3-rainbow coloring with 4 colors. Let
$S_i$ denote the star with $x_i$ as its center and $E(S_i) =
\{x_iy_1, x_iy_2, x_iy_3\}$. Every $S_i$ can be colored in $4^3$
different ways. Since $t\geq 2\times4^3+1$, there exist three
completely identical edge-colored stars, say $S_1$, $S_2$ and $S_3$.
If two of the three edges in $S_i$ $(1\leq i\leq 3)$ receive the
same color, then there are no rainbow trees connecting
$x_1,x_2,x_3$, a contradiction. If the three edges in $S_i$ $(1\leq
i\leq 3)$ receive distinct colors, then the rainbow tree connecting
$x_1,x_2,x_3$ must contain the vertices $y_1,y_2,y_3$. Thus the tree
has at least five edges, but only four different colors, a
contradiction.
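The counting behind Example 1 is a pigeonhole argument; the following sketch (our own arithmetic check, not part of the paper) confirms that $t\geq 2\times4^3+1$ stars, each colored in one of $4^3$ ways, must contain three identically colored ones:

```python
from collections import Counter
from math import ceil

patterns = 4 ** 3      # colorings of one 3-edge star S_i with 4 colors
t = 2 * patterns + 1   # number of stars x_1, ..., x_t in Example 1

# Even an adversarial assignment, spreading the colorings as evenly as
# possible over the patterns, cannot avoid three identical stars:
worst = Counter(i % patterns for i in range(t))
assert max(worst.values()) == 3
assert ceil(t / patterns) == 3

# The analogous count for Example 2: t - 3 >= 2 * 5^3 + 1 stars, 5^3 patterns.
assert ceil((2 * 5 ** 3 + 1) / 5 ** 3) == 3
```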
\noindent\textbf{Example 2:} A chain graph $G$ with $\delta(G)\geq3$ and $rx_3(G)=6$.
Consider the bipartite graph in Figure~4, where $N(a_1)=N(a_2)
=\cdots=N(a_{k-3})=\{b_1, b_2, b_3\}$,
$N(a_{k-2})=N(a_{k-1})=N(a_{k})=\{b_1, b_2, \cdots, b_t\}$, and
$t\geq 2\times5^3+4$. By contradiction, we assume that $G$ has a
3-rainbow coloring with 5 colors. Let $S_i$ ($4\leq i\leq t$) denote the star
with $b_i$ as its center and $E(S_i) = \{b_ia_{k-2}, b_ia_{k-1},
b_ia_{k}\}$. Every $S_i$ can be colored in $5^3$ different ways.
Since $t-3\geq 2\times5^3+1$, among the $t-3$ stars $S_i$ there exist
three completely identical edge-colored stars, say $S_4$, $S_5$ and
$S_6$. If two of the three edges in $S_i$ $(4\leq i\leq 6)$ receive
the same color, then there are no rainbow trees connecting
$b_4,b_5,b_6$, a contradiction. If the three edges in $S_i$ $(4\leq
i\leq 6)$ receive distinct colors, then the rainbow tree connecting
$b_4,b_5,b_6$ must contain $a_{k-2},a_{k-1},a_{k}$ and at least one
vertex in $B\setminus \{b_4,b_5,b_6\}$ to connect
$a_{k-2},a_{k-1},a_{k}$. Thus the tree has at least six edges, but
only five different colors, a contradiction.
\noindent\textbf{Example 3:} An interval graph $G$ with
$\delta(G)\geq3$ and $rx_3(G)=diam(G)+4$.
Consider the graph in Figure~5 (known as a French windmill
graph), where $t\geq 2\times 5^6+1$. It is easy to see that it is
an interval graph with diameter 2. It follows from (3) that
$rx_3(G)\leq diam(G)+4=6$. We will show that $rx_3(G)=6$. By
contradiction, we assume that $G$ has a 3-rainbow coloring with 5 colors. Let
$B_i$ denote the $K_4$ induced by $v_0,u_i,v_i,w_i$. Obviously, each
$B_i$ can be colored in at most $5^6$ different ways. Since $t\geq
2\times 5^6+1$, there exist three completely identical edge-colored
subgraphs, say $B_1, B_2, B_3$. If two of the three edges incident
with $v_0$ in $B_i$ ($1 \leq i\leq 3$) receive the same color, say
$c(v_0u_i)=c(v_0v_i)=1$, then there are no rainbow trees connecting
$u_1,u_2,u_3$, a contradiction. If the three edges incident with
$v_0$ in $B_i$ ($1 \leq i\leq 3$) receive distinct colors, say
$c(v_0u_i)=1, c(v_0v_i)=2, c(v_0w_i)=3$, then $c(u_iv_i)\neq
c(u_iw_i)$ and $c(u_iv_i), c(u_iw_i)\in \{4,5\}$ because there
exists a rainbow tree connecting $\{u_1,u_2,u_3\}$. Without loss of
generality, suppose $c(u_iv_i)=4$ and $c(u_iw_i)=5$. Since there
exists a rainbow tree connecting $\{v_1,v_2,v_3\}$, then
$c(v_iw_i)=5$. But then there exist no rainbow trees connecting
$\{w_1,w_2,w_3\}$ in $G$, a contradiction.
\end{proof}
\section{Proof of Theorem \ref{thm3}}
Let $D$ be a connected three-way dominating set of a connected graph
$G$. We want to show that $rx_3(G)\leq rx_3(G[D])+6$.
To start with, we introduce some definitions and notation that are
used in the sequel. A set of rainbow paths
$\{P_1,P_2,\ldots,P_\ell\}$ is called \emph{super-rainbow} if their
union $\bigcup_{i=1}^{\ell}P_{i}$ is also rainbow. For a vertex $v$
in $\overline{D}$, we call it \emph{safe} if there are three
internally disjoint super-rainbow $v-D$ paths. Otherwise, we call
$v$ \emph{dangerous}. Let $c(e)$ be the color of an edge $e$ and $c(H)$
the set of colors appearing on the edges of a graph $H$. For a
vertex $v$ in a BFS-tree, we denote the height of $v$ by $h(v)$,
the parent of $v$ by $p(v)$, a child of $v$ by $ch(v)$, and the
ancestor of $v$ in the first level by $\pi(v)$.
Let us outline our idea. First, we aim to color the edges in
$E[D,\overline{D}]$ and $E(G[\overline{D}])$ with six different
colors. Our coloring strategy has two steps: in the first step, we
give a periodic coloring to some edges in $E[D,\overline{D}]$ and
$E(G[\overline{D}])$, after which most vertices in $\overline{D}$
are safe; in the second step, we color carefully chosen
uncolored edges and recolor some colored edges judiciously to
ensure that all the vertices in $\overline{D}$ are safe. Then we
extend the coloring to the whole graph by coloring the edges in
$G[D]$ with $rx_3(G[D])$ fresh colors. Finally, we show that
this edge-coloring of $G$ is a 3-rainbow coloring, which implies
$rx_3(G)\leq rx_3(G[D])+6$.
\subsection{Color the edges in $E[D,\overline{D}]$ and $E(G[\overline{D}])$}
\noindent\textbf{4.1.1~ First step: a periodic coloring}
Assume that $C_1,C_2,\ldots,C_q$ are the connected components of the
subgraph $G-D$.
If $C_i$ $(1\leq i\leq q)$ consists of an isolated vertex $v$, then
$v$ has at least three legs. We color one of them with 1, one of
them with 2, and all the others with 3. Note that now $v$ is safe.
If $C_i$ $(1\leq i\leq q)$ consists of an isolated edge $uv$, then
$u$ has at least two legs. We color one of them with 1, and all the
others with 2. Similarly, $v$ has at least two legs. We color one of
them with 2, and all the others with 3. Finally, we color $uv$ with 4. Note
that now both $u$ and $v$ are safe.
If $C_i$ $(1\leq i\leq q)$ consists of at least three vertices, then
there exists a vertex $v_0$ in $C_i$ possessing at least two
neighbors in $C_i$. Starting from $v_0$, we construct a BFS-tree
$T$ of $C_i$. Suppose the neighbors of $v_0$ in $C_i$ are
$\{v_1,v_2,\ldots,v_k\}$ ($k\geq2$), which forms the first level of
$T$. For each vertex $v$ in $C_i$, let $e_v$ be one leg of $v$ (if
there are many legs, we pick one arbitrarily), $t(v)$ the
corresponding foot of $v$, $f_v$ the unique edge joining $v$ and its
parent in $T$.
Now we color the edges $e_v$ and $f_v$ as follows: $c(e_{v_0})=2$;
$c(f_{v_i})=4$ and $c(e_{v_i})=1$ for $1\leq i\leq k-1$;
$c(f_{v_k})=5$ and $c(e_{v_k})=3$; for each vertex $v$ in
$V(C_i)\setminus \{v_0,v_1,\ldots,v_k\}$, if $\pi(v)=v_k$, then set
$c(f_v)=4$ and $c(e_v)=2$ when $h(v)\equiv 0 \ (mod\ 3)$,
$c(f_v)=5$ and $c(e_v)=3$ when $h(v)\equiv 1 \ (mod\ 3)$, $c(f_v)=6$
and $c(e_v)=1$ when $h(v)\equiv 2 \ (mod\ 3)$; otherwise, if
$\pi(v)=v_i(1\leq i\leq k-1)$, then set $c(f_v)=6$ and $c(e_v)=2$
when $h(v)\equiv 0 \ (mod\ 3)$, $c(f_v)=4$ and $c(e_v)=1$ when
$h(v)\equiv 1 \ (mod\ 3)$, $c(f_v)=5$ and $c(e_v)=3$ when
$h(v)\equiv 2 \ (mod\ 3)$. In fact, this gives a periodic coloring,
as depicted in Figure 6.
We call the subtree of $T$ rooted at $v_i \ (1\leq i\leq k-1)$ of
type $I$ and the subtree of $T$ rooted at $v_k$ of type
$II$. There may be many subtrees of type $I$, but only one subtree
of type $II$. The subtrees of the same
type are colored in the same way. More precisely, if two vertices
$u,v$ lie in the same level and belong to subtrees of the same type,
then $c(e_u)=c(e_v)$ and $c(f_u)=c(f_v)$ after the first step.
Now each non-leaf vertex in $T$ has three internally disjoint
super-rainbow paths connecting it to $D$: for the root $v_0$,
$P_1^{v_0}=v_0,t(v_0)$; $P_2^{v_0}=v_0,v_1,t(v_1)$;
$P_3^{v_0}=v_0,v_k,t(v_k)$; for any other non-leaf vertex $v$ in $T$,
$P_1^v=v,t(v)$; $P_2^v=v,p(v),t(p(v))$; $P_3^v=v,ch(v),t(ch(v))$.
(Note that $v$ may have many children $u_1, u_2,\ldots, u_{\ell}$,
but all the $e_{u_i}$'s and $f_{u_i}$'s receive the same color, so
they contribute only one path to the three internally disjoint
super-rainbow $v-D$ paths.) In other words, after the first step all the
non-leaf vertices in $T$ are safe.
As to each leaf $v'$ in $T$, since $v'$ has no children, it has
exactly two internally disjoint super-rainbow $v'-D$ paths:
$P_1^{v'}=v',t(v')$;
$P_2^{v'}=v',p(v'),t(p(v'))$.
In other words, after the first step all the leaves in $T$ are dangerous.
\noindent\textbf{Example 4:} The root $v_0$ is safe:
$c(P_1^{v_0})=\{2\}$, $c(P_2^{v_0})=\{1,4\}$,
$c(P_3^{v_0})=\{3,5\}$.
If $v_i$ $(1\leq i\leq k-1)$ is not a leaf of $T$, then $v_i$ is
safe: $c(P_1^{v_i})=\{1\}$, $c(P_2^{v_i})=\{2,4\}$,
$c(P_3^{v_i})=\{3,5\}$.
If $v_k$ is not a leaf of $T$, then $v_k$ is safe:
$c(P_1^{v_k})=\{3\}$, $c(P_2^{v_k})=\{2,5\}$,
$c(P_3^{v_k})=\{1,6\}$.
\noindent\textbf{Example 5:} If $v$ is a leaf of $T$ in the second
level with parent $v_k$, then $v$ is dangerous: $c(P_1^{v})=\{1\}$,
$c(P_2^{v})=\{3,6\}$.
All the possible color sets of the three internally disjoint
super-rainbow paths connecting a non-leaf vertex to $D$ are listed
below (in each brace, the first part is the color set of $P_1$, the
second that of $P_2$, and the third that of $P_3$):
$\{1,24,35\}$,~~~$\{2,36,14\}$,~~~$\{3,15,26\}$,
~~~$\{1,36,24\}$,~~~$\{2,14,35\}$,~~~$\{3,25,16\}$.
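To double-check the periodic rule, the following sketch (the function names and the encoding of subtree types are our own, hypothetical choices) tabulates the first-step colors $c(f_v),c(e_v)$ and reproduces exactly the six color-set triples listed above, verifying along the way that the three paths are pairwise color-disjoint:

```python
def step1_colors(h, subtree):
    """(c(f_v), c(e_v)) for a vertex at height h in a subtree of type
    'I' or 'II'.  For h = 0 only the e-color (2, the root v_0) is
    meaningful; h = 1 reproduces the special choices c(f_{v_i}) = 4,
    c(e_{v_i}) = 1 and c(f_{v_k}) = 5, c(e_{v_k}) = 3."""
    table = {'I':  {0: (6, 2), 1: (4, 1), 2: (5, 3)},
             'II': {0: (4, 2), 1: (5, 3), 2: (6, 1)}}
    return table[subtree][h % 3]

def path_colors(h, subtree):
    """Color sets of P_1 = v,t(v); P_2 = v,p(v),t(p(v));
    P_3 = v,ch(v),t(ch(v)) for a non-leaf vertex v at height h >= 1."""
    f_v, e_v = step1_colors(h, subtree)
    _, e_parent = step1_colors(h - 1, subtree)
    f_child, e_child = step1_colors(h + 1, subtree)
    return ({e_v}, {f_v, e_parent}, {f_child, e_child})

triples = set()
for subtree in ('I', 'II'):
    for h in range(1, 7):
        p1, p2, p3 = path_colors(h, subtree)
        # the three paths are pairwise color-disjoint (super-rainbow):
        assert len(p1 | p2 | p3) == len(p1) + len(p2) + len(p3)
        triples.add((tuple(sorted(p1)), tuple(sorted(p2)), tuple(sorted(p3))))
assert len(triples) == 6   # exactly the six listed color-set triples
```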
Bearing in mind that $D$ is a connected three-way dominating set,
each leaf in $T$ is incident with at least one uncolored edge. In
the second step, we will color such edges and recolor some colored edges
suitably to ensure that all the vertices in $C_i$ are safe.
\noindent\textbf{4.1.2~ Second step: more edges, colored more
carefully}
Let $v$ be a leaf in $T$ and $g_v=vv'$ be one uncolored edge incident
with $v$.
If $g_v$ connects $v$ to $D$, then we give $g_v$ a smallest color
from $\{1,2,3,4,5,6\}\setminus (c(P_1^v)\cup c(P_2^v))$. For
instance, $c(g_v)=2$ for the vertex $v$ in Example 5. Obviously, now
$v$ has three internally disjoint super-rainbow $v-D$ paths
$P_1^v,P_2^v,P_3^v$, where
$P_1^v=v,t(v)$; $P_2^v=v,v'$; $P_3^v=v,p(v),t(p(v))$.
In other words, $v$ is safe after the second step. All the possible color sets of
the three internally disjoint super-rainbow paths connecting $v$ to
$D$ are listed below (in each brace, the first part is the color set of $P_1$, the
second that of $P_2$, and the third that of $P_3$):
$\{1,2,36\}$, ~~$\{1,3,24\}$, ~~$\{1,3,25\}$, ~~$\{2,3,15\}$,
~~$\{2,3,14\}$.
Now it remains to deal with the leaves in $T$ whose incident uncolored edges
all lie in $C_i$. Let $A$ denote the set of such vertices. First, we
flag all the vertices in $V(C_i)\setminus A$, which are already
safe. Note that we only flag the safe vertices. Once one vertex gets
flagged, it is always flagged. Next we arrange the vertices in $A$
in a linear order by the following three rules:
(R1) for $u,v\in A$, let $\pi(u)=v_i$ and $\pi(v)=v_j$, if $i>j$,
then $u$ is before $v$ in the ordering;
(R2) if $\pi(u)=\pi(v)$ and $h(v)>h(u)$, then $u$ is before $v$ in
the ordering;
(R3) if $\pi(u)=\pi(v)$, $h(u)=h(v)$ and $u$ is reached earlier than
$v$ by the BFS algorithm, then $u$ is before $v$ in the ordering.
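The rules (R1)-(R3) amount to sorting $A$ by a lexicographic key. A minimal sketch, where the per-vertex record `(pi_index, height, bfs_rank)` is our own hypothetical encoding of $\pi(v)$, $h(v)$ and the BFS discovery time:

```python
def ordering_key(record):
    """record = (pi_index, height, bfs_rank), where pi(v) = v_{pi_index},
    height = h(v), and bfs_rank is the BFS discovery time.
    R1: larger pi_index first; R2: smaller height first;
    R3: earlier BFS discovery first."""
    pi_index, height, bfs_rank = record
    return (-pi_index, height, bfs_rank)

# Vertices w with pi(w) = v_3 come first, ordered by height, then BFS rank.
A = [(1, 3, 7), (3, 2, 5), (3, 4, 2), (3, 2, 1), (2, 5, 9)]
assert sorted(A, key=ordering_key) == \
    [(3, 2, 1), (3, 2, 5), (3, 4, 2), (2, 5, 9), (1, 3, 7)]
```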
Assume the vertices in $A$ are ordered as $A=(w_1,w_2,\ldots,w_s)$.
We will deal with them one by one. Suppose that now we go to the
vertex $w_i$ ($w_1,w_2,\ldots,w_{i-1}$ have been processed). If
$w_i$ is flagged, we go to the next vertex $w_{i+1}$; otherwise,
we distinguish the following four cases:
\emph{\textbf{Case 1}}: $\pi(w_i)=v_k$ and there exists at least one
uncolored edge connecting $w_i$ to some subtree of type $I$. Then we
choose one such edge $w_iv$ such that the height of $v$ is as small
as possible. Since $T$ is a $BFS$-tree and the subtree of $w_i$ is
to the right of the subtree of $v$, then $h(v)=h(w_i)$ or
$h(w_i)+1$.
\noindent\textbf{Fact 1.} $e_v$ is not recolored.
If $v\notin A$, then $e_v$ never gets recolored; if $v\in A$, since
$\pi(w_i)=v_k$ and $\pi(v)=v_j$ $(1\leq j\leq k-1)$, we have not
dealt with $v$ yet according to R1, thus $e_v$ is not recolored.
We distinguish three subcases based on the height of $w_i$.
$\ast$ \emph{Subcase 1.1}: $h(w_i)\equiv 0(mod\ 3)$
If $h(v)=h(w_i)$, then color $w_iv$ with $5$. We have
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{2\}\cup
\{1,4\}\cup \{3,5,6\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{2\}\cup \{3,6\}\cup \{1,4,5\}$.
If $h(v)=h(w_i)+1$, then color $w_iv$ with $5$. We have
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{2\}\cup
\{3,4,6\}\cup\{1,5\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{1\}\cup \{3,4,6\}\cup\{2,5\}$.
Now both $w_i$ and $v$ become safe. We flag $w_i$ and $v$ (if $v$ is
not flagged).
$\ast$ \emph{Subcase 1.2}: $h(w_i)\equiv 1(mod\ 3)$
If $h(v)=h(w_i)$, then color $w_iv$ with $6$. We have
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{3\}\cup
\{2,5\}\cup \{1,6\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{1\}\cup \{2,4\}\cup \{3,6\}$.
If $h(v)=h(w_i)+1$, then color $w_iv$ with $4$ and recolor $e_{w_i}$
with $6$. In this way, we ensure that the parent of $w_i$ is still
safe. Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{2\}\cup \{1,4\}\cup \{5,6\}$. Moreover, we have
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{6\}\cup
\{2,5\}\cup \{3,4\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{3\}\cup \{1,5\}\cup \{4,6\}$.
Now both $w_i$ and $v$ become safe. We flag $w_i$ and $v$ (if $v$ is
not flagged).
$\ast$ \emph{Subcase 1.3}: $h(w_i)\equiv 2(mod\ 3)$
If $h(v)=h(w_i)$, then color $w_iv$ with $2$ and recolor $e_{w_i}$
with $4$. In this way, we ensure that the parent of $w_i$ is still
safe. Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{3\}\cup \{2,5\}\cup \{4,6\}$. Moreover, we have
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{4\}\cup
\{3,6\}\cup \{1,2,5\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{3\}\cup \{1,5\}\cup \{2,4\}$.
If $h(v)=h(w_i)+1$, then color $w_iv$ with $5$. We have
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{1\}\cup
\{3,6\}\cup \{2,5\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{2\}\cup \{3,6\}\cup \{1,5\}$.
Now both $w_i$ and $v$ become safe. We flag $w_i$ and $v$ (if $v$ is
not flagged).
\noindent\textbf{Remarks:} 1. When dealing with $w_i$, we just do
two operations: (i) coloring $w_iv$; (ii) recoloring $e_{w_i}$ if
necessary. Note that $e_{w_i}$ is the only edge which may be
recolored in this process. Furthermore, we recolor it in such a way
that the parent of $w_i$ is still safe. In fact, for that sake, we
have no choice but to recolor $e_{w_i}$ ($w_i\notin
\{v_1,v_2,\ldots,v_{k-1}\}$) with the unique color which is from
$\{1,2,3,4,5,6\}$ but does not appear on the three super-rainbow
paths of $p(w_i)$ after the first step. For example, in Subcase~1.2,
the color set of the three super-rainbow paths of $p(w_i)$
after the first step is $\{2,1,4,5,3\}$, so we recolor $e_{w_i}$ with
$6$. The exception that $w_i\in \{v_1,v_2,\ldots,v_{k-1}\}$ will be
discussed in Subcase~3.2.
2. One may wonder about the effect of these operations. First of
all, after the process, $w_i$ becomes safe and gets flagged, and so
does $v$ if $v$ is not flagged. In addition, the process guarantees
that all the safe vertices remain safe. As mentioned above, $p(w_i)$
is still safe after this process. For every other safe vertex in
$V(C_i)\setminus A$, obviously its three internally disjoint
super-rainbow paths do not contain $e_{w_i}$, so it is still safe
after this process. For each safe vertex $v$ in $A$: if the three
internally disjoint super-rainbow paths of $v$ contained $e_{w_i}$,
then $w_i$ would already have been made safe and flagged when $v$ was
processed, and we would have gone to $w_{i+1}$ directly without
performing this process. So the
three internally disjoint super-rainbow paths of $v$ do not contain
$e_{w_i}$, and thus $v$ is still safe after this process.
3. The three internally disjoint super-rainbow paths of $w_i$ form one
of the following three configurations; see Figure 7.
~~~(i) $P_1^{w_i}=w_i,t(w_i)$, $P_2^{w_i}=w_i,p(w_i),t(p(w_i))$,
$P_3^{w_i}=w_i,v,t(v)$;
~~~(ii) $P_1^{w_i}=w_i,t(w_i)$, $P_2^{w_i}=w_i,p(w_i),t(p(w_i))$,
$P_3^{w_i}=w_i,v,p(v),t(p(v))$;
~~~(iii) $P_1^{w_i}=w_i,t(w_i)$, $P_2^{w_i}=w_i,p(w_i), p(p(w_i)),
t(p(p(w_i)))$, $P_3^{w_i}=w_i,v,t(v)$.
\emph{\textbf{Case 2}}: $\pi(w_i)=v_k$ and all the uncolored edges
connect $w_i$ to the subtree of type
$II$. Then we choose one such
edge $w_iv$ such that the height of $v$ is as small as possible.
Since $T$ is a BFS-tree, we get $h(v)=h(w_i)-1$, $h(w_i)$, or
$h(w_i)+1$. The following two facts are easy to see:
\noindent\textbf{Fact 2.} If $h(v)=h(w_i)-1$, then $v$ is already
flagged.
If $v\notin A$, then $v$ gets flagged at the very beginning; if $v\in A$,
since $\pi(v)=\pi(w_i)=v_k$ and $h(v)<h(w_i)$, we have already dealt
with $v$ according to R2, thus $v$ is flagged (note that $e_v$ may
be recolored).
\noindent\textbf{Fact 3.} If $h(v)=h(w_i)+1$, then $e_v$ is not
recolored.
If $v\notin A$, then $e_v$ never gets recolored; if $v\in A$, since
$\pi(v)=\pi(w_i)=v_k$ and $h(v)>h(w_i)$, we have not dealt with $v$
yet according to R2, thus $e_v$ is not recolored.
We distinguish three subcases based on the height of $w_i$.
$\ast$ \emph{Subcase 2.1}: $h(w_i)\equiv 0 \ (mod\ 3)$
If $h(v)=h(w_i)-1$, by Fact $2$ we know that $v$ is already flagged.
No matter whether $e_v$ is recolored or not, we color $w_iv$ with $5$.
Then $c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{2\}\cup
\{1,4\}\cup \{3,5,6\}$. Now $w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)$, then $v$ may be flagged and $e_v$ may be recolored.
If $e_v$ is not recolored ($c(e_v)=2$), then color $w_iv$ with $6$
and recolor $e_{w_i}$ with $5$. The parent of $w_i$ is still safe.
Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{1\}\cup \{3,6\}\cup \{4,5\}$. Moreover,
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{5\}\cup
\{1,4\}\cup \{2,6\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{2\}\cup \{1,4\}\cup \{5,6\}$, i.e. both $w_i$ and $v$
are safe. We flag $w_i$ and $v$ (if $v$ is not flagged). If $e_v$ is
recolored ($c(e_v)=5$), it implies $v\in A$ has been dealt with and
got flagged. Then color $w_iv$ with $6$. Now $c(P_1^{w_i})\cup
c(P_2^{w_i})\cup c(P_3^{w_i})=\{2\}\cup \{1,4\}\cup \{5,6\}$, i.e.
$w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)+1$, by Fact $3$ we know that $e_v$ is not recolored
($c(e_v)=3$). Then color $w_iv$ with $6$. We have $c(P_1^{w_i})\cup
c(P_2^{w_i})\cup c(P_3^{w_i})=\{2\}\cup \{1,4\}\cup \{3,6\}$ and
$c(P_1^{v})\cup c(P_2^{v})\cup c(P_3^{v})=\{3\}\cup \{2,5\}\cup
\{1,4,6\}$, i.e. both $w_i$ and $v$ are safe. We flag $w_i$ and $v$
(if $v$ is not flagged).
$\ast$ \emph{Subcase 2.2}: $h(w_i)\equiv 1 \ (mod\ 3)$
If $h(v)=h(w_i)-1$, by Fact $2$ we know that $v$ is already flagged.
No matter whether $e_v$ is recolored or not, we color $w_iv$ with $6$.
Then $c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{3\}\cup
\{2,5\}\cup \{1,4,6\}$. Now $w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)$, then $v$ may be flagged and $e_v$ may be
recolored. If $e_v$ is not recolored ($c(e_v)=3$), then color $w_iv$
with $4$ and recolor $e_{w_i}$ with $6$. The parent of $w_i$ is
still safe. Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{2\}\cup \{1,4\}\cup \{5,6\}$. Moreover,
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{6\}\cup
\{2,5\}\cup \{3,4\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{3\}\cup \{2,5\}\cup \{4,6\}$, i.e. both $w_i$ and $v$
are safe. We flag $w_i$ and $v$ (if $v$ is not flagged). If $e_v$ is
recolored ($c(e_v)=6$), it implies $v$ is flagged. Then color $w_iv$
with $4$. Now $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{3\}\cup \{2,5\}\cup \{4,6\}$, i.e. $w_i$ becomes
safe. We flag $w_i$.
If $h(v)=h(w_i)+1$, by Fact $3$ we know that $e_v$ is not recolored
($c(e_v)=1$). Then color $w_iv$ with $4$. We have $c(P_1^{w_i})\cup
c(P_2^{w_i})\cup c(P_3^{w_i})=\{3\}\cup \{2,5\}\cup \{1,4\}$ and
$c(P_1^{v})\cup c(P_2^{v})\cup c(P_3^{v})=\{1\}\cup \{3,6\}\cup
\{2,4,5\}$, i.e. both $w_i$ and $v$ are safe. We flag $w_i$ and $v$
(if $v$ is not flagged).
$\ast$ \emph{Subcase 2.3}: $h(w_i)\equiv 2 \ (mod\ 3)$
If $h(v)=h(w_i)-1$, by Fact $2$ we know that $v$ is already flagged.
No matter whether $e_v$ is recolored or not, we color $w_iv$ with $4$.
Then $c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{1\}\cup
\{3,6\}\cup \{2,4,5\}$. Now $w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)$, then $v$ may be flagged and $e_v$ may be
recolored. If $e_v$ is not recolored ($c(e_v)=1$), then color $w_iv$
with $5$ and recolor $e_{w_i}$ with $4$. The parent of $w_i$ is
still safe. Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{3\}\cup \{2,5\}\cup \{1,4,6\}$. Moreover,
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{4\}\cup
\{3,6\}\cup \{1,5\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{1\}\cup \{3,6\}\cup \{4,5\}$, i.e. both $w_i$ and $v$
are safe. We flag $w_i$ and $v$ (if $v$ is not flagged). If $e_v$ is
recolored ($c(e_v)=4$), it implies $v$ is flagged. Then color $w_iv$
with $5$. Now $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{1\}\cup \{3,6\}\cup \{4,5\}$, i.e. $w_i$ becomes
safe. We flag $w_i$.
If $h(v)=h(w_i)+1$, by Fact $3$ we know that $e_v$ is not recolored
($c(e_v)=2$). Then color $w_iv$ with $5$. Now $c(P_1^{w_i})\cup
c(P_2^{w_i})\cup c(P_3^{w_i})=\{1\}\cup \{3,6\}\cup \{2,5\}$ and
$c(P_1^{v})\cup c(P_2^{v})\cup c(P_3^{v})=\{2\}\cup \{1,4\}\cup
\{3,5,6\}$, i.e. both $w_i$ and $v$ are safe. We flag $w_i$ and $v$
(if $v$ is not flagged).
\emph{\textbf{Case 3}}: $\pi(w_i)=v_j(1\leq j\leq k-1)$ and there
exists at least one uncolored edge connecting $w_i$ to some subtree
of type $I$. Then we choose one such edge $w_iv$ such that the
height of $v$ is as small as possible. Since $T$ is a BFS-tree,
we get $h(v)=h(w_i)-1$, $h(w_i)$, or $h(w_i)+1$. We have the following
two facts, which are similar to Facts 2 and 3:
\noindent\textbf{Fact $2'$.} If $h(v)=h(w_i)-1$, then $v$ is already
flagged.
If $v\notin A$, then $v$ gets flagged at the very beginning; if $v\in A$,
let $\pi(v)=v_{j'}$, since $1\leq j\leq j'\leq k-1$ and
$h(v)<h(w_i)$, we have already dealt with $v$ according to R1 and
R2, thus $v$ is flagged (note that $e_v$ may be recolored).
\noindent\textbf{Fact $3'$.} If $h(v)=h(w_i)+1$, then $e_v$ is not recolored.
If $v\notin A$, then $e_v$ never gets recolored; if $v\in A$, let
$\pi(v)=v_{j'}$, since $1\leq j'\leq j\leq k-1$ and $h(v)>h(w_i)$,
we have not dealt with $v$ yet according to R1 and R2, thus $e_v$ is
not recolored.
We distinguish three subcases based on the height of $w_i$.
$\ast$ \emph{Subcase 3.1}: $h(w_i)\equiv 0 \ (mod\ 3)$
If $h(v)=h(w_i)-1$, by Fact $2'$ we know that $v$ is already
flagged. No matter whether $e_v$ is recolored or not, we color $w_iv$
with $4$. Then $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{2\}\cup \{3,6\}\cup \{1,4,5\}$. Now
$w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)$, then $v$ may be flagged and $e_v$ may be
recolored. If $e_v$ is not recolored ($c(e_v)=2$), then color $w_iv$
with $5$ and recolor $e_{w_i}$ with $4$. The parent of $w_i$ is
still safe. Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{3\}\cup \{1,5\}\cup \{4,6\}$. Moreover,
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{4\}\cup
\{3,6\}\cup \{2,5\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{2\}\cup \{3,6\}\cup \{4,5\}$, i.e. both $w_i$ and $v$
are safe. We flag $w_i$ and $v$ (if $v$ is not flagged). If $e_v$ is
recolored ($c(e_v)=4$), it implies $v$ is flagged. Then color $w_iv$
with $5$. Now $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{2\}\cup \{3,6\}\cup \{4,5\}$, i.e. $w_i$ becomes
safe. We flag $w_i$.
If $h(v)=h(w_i)+1$, by Fact $3'$ we know that $e_v$ is not recolored
($c(e_v)=1$). Then color $w_iv$ with $5$. Now $c(P_1^{w_i})\cup
c(P_2^{w_i})\cup c(P_3^{w_i})=\{2\}\cup \{3,6\}\cup \{1,5\}$ and
$c(P_1^{v})\cup c(P_2^{v})\cup c(P_3^{v})=\{1\}\cup \{2,4\}\cup
\{3,5,6\}$, i.e. both $w_i$ and $v$ are safe. We flag $w_i$ and $v$
(if $v$ is not flagged).
$\ast$ \emph{Subcase 3.2}: $h(w_i)\equiv 1 \ (mod\ 3)$
If $h(v)=h(w_i)-1$, by Fact $2'$ we know that $v$ is already
flagged. No matter whether $e_v$ is recolored or not, we color $w_iv$
with $5$. Then $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{1\}\cup \{2,4\}\cup \{3,5,6\}$. Now
$w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)$, then $v$ may be flagged and $e_v$ may be
recolored. If $e_v$ is not recolored ($c(e_v)=1$), then $e_v$ never
gets recolored in the second step. For $w_i$ not in the first level,
color $w_iv$ with $6$ and recolor $e_{w_i}$ with $5$. The parent of
$w_i$ is still safe. Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{2\}\cup \{3,6\}\cup \{4,5\}$. Moreover,
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{5\}\cup
\{2,4\}\cup \{1,6\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{1\}\cup \{2,4\}\cup \{5,6\}$, i.e. both $w_i$ and $v$
are safe. We flag $w_i$ and $v$ (if $v$ is not flagged). For $w_i$
in the first level, the parent of $w_i$, namely $v_0$, is already
safe. Its three internally disjoint super-rainbow paths to $D$ are
$P_1^{v_0}=v_0,t(v_0)$, $P_2^{v_0}=v_0,v,t(v)$,
$P_3^{v_0}=v_0,v_k,t(v_k)$, and $c(P_1^{v_0})\cup c(P_2^{v_0})\cup
c(P_3^{v_0})=\{2\}\cup \{1,4\}\cup \{3,5\}$ or $\{2\}\cup
\{1,4\}\cup \{5,6\}$. Since the paths
$P_1^{v_0},P_2^{v_0},P_3^{v_0}$ do not use $e_{w_i}$, we can recolor
$e_{w_i}$ with an arbitrary color from $\{1,2,3,4,5,6\}$. In line
with the previous case, we also color $w_iv$ with $6$ and recolor
$e_{w_i}$ with $5$. Then again both $w_i$ and $v$ are safe. We flag
$w_i$ and $v$ (if $v$ is not flagged). If $e_v$ is recolored
($c(e_v)=5$), it implies $v$ is flagged. Then color $w_iv$ with $6$.
Now $c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{1\}\cup
\{2,4\}\cup \{5,6\}$, i.e. $w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)+1$, by Fact $3'$ we know that $e_v$ is not recolored
($c(e_v)=3$). Then color $w_iv$ with $6$. Now $c(P_1^{w_i})\cup
c(P_2^{w_i})\cup c(P_3^{w_i})=\{1\}\cup \{2,4\}\cup \{3,6\}$ and
$c(P_1^{v})\cup c(P_2^{v})\cup c(P_3^{v})=\{3\}\cup \{1,5\}\cup
\{2,4,6\}$, i.e. both $w_i$ and $v$ are safe. We flag $w_i$ and $v$
(if $v$ is not flagged).
$\ast$ \emph{Subcase 3.3}: $h(w_i)\equiv 2 \pmod 3$
If $h(v)=h(w_i)-1$, by Fact $2'$ we know that $v$ is already
flagged. No matter whether $e_v$ is recolored or not, we color $w_iv$
with $6$. Then $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{3\}\cup \{1,5\}\cup \{2,4,6\}$. Now
$w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)$, then $v$ may be flagged and $e_v$ may be
recolored. If $e_v$ is not recolored ($c(e_v)=3$), then color $w_iv$
with $4$ and recolor $e_{w_i}$ with $6$. The parent of $w_i$ is
still safe. Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{1\}\cup \{2,4\}\cup \{5,6\}$. Moreover,
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{6\}\cup
\{1,5\}\cup \{3,4\}$ and $c(P_1^{v})\cup c(P_2^{v})\cup
c(P_3^{v})=\{3\}\cup \{1,5\}\cup \{4,6\}$, i.e. both $w_i$ and $v$
are safe. We flag $w_i$ and $v$ (if $v$ is not flagged). If $e_v$ is
recolored ($c(e_v)=6$), it implies $v$ is flagged. Then color $w_iv$
with $4$. Now $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{3\}\cup \{1,5\}\cup \{4,6\}$, i.e. $w_i$ becomes
safe. We flag $w_i$.
If $h(v)=h(w_i)+1$, by Fact $3'$ we know that $e_v$ is not recolored
($c(e_v)=2$). Then color $w_iv$ with $4$. Now $c(P_1^{w_i})\cup
c(P_2^{w_i})\cup c(P_3^{w_i})=\{3\}\cup \{1,5\}\cup \{2,4\}$ and
$c(P_1^{v})\cup c(P_2^{v})\cup c(P_3^{v})=\{2\}\cup \{3,6\}\cup
\{1,4,5\}$, i.e. both $w_i$ and $v$ are safe. We flag $w_i$ and $v$
(if $v$ is not flagged).
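The bookkeeping in Case 3 amounts to checking that each displayed union $c(P_1)\cup c(P_2)\cup c(P_3)$ consists of pairwise disjoint color sets, so that the union of the three internally disjoint super-rainbow paths is rainbow. This routine check can be mechanized; the following Python sketch (an independent sanity check only, not part of the proof; the names are ours) verifies the triples transcribed from Subcases 3.1--3.3:

```python
# Sanity check: every color-set triple claimed for a safe vertex in
# Subcases 3.1-3.3 must have pairwise disjoint parts (a rainbow union).
# The triples below are transcribed from the text; duplicates are omitted.
from itertools import combinations

case3_triples = [
    # Subcase 3.1
    ({2}, {3, 6}, {1, 4, 5}),
    ({3}, {1, 5}, {4, 6}),
    ({4}, {3, 6}, {2, 5}),
    ({2}, {3, 6}, {4, 5}),
    ({2}, {3, 6}, {1, 5}),
    ({1}, {2, 4}, {3, 5, 6}),
    # Subcase 3.2
    ({5}, {2, 4}, {1, 6}),
    ({1}, {2, 4}, {5, 6}),
    ({2}, {1, 4}, {3, 5}),
    ({2}, {1, 4}, {5, 6}),
    ({1}, {2, 4}, {3, 6}),
    ({3}, {1, 5}, {2, 4, 6}),
    # Subcase 3.3
    ({6}, {1, 5}, {3, 4}),
    ({3}, {1, 5}, {4, 6}),
    ({3}, {1, 5}, {2, 4}),
]

def pairwise_disjoint(triple):
    """True iff no color is repeated, i.e. the union of the paths is rainbow."""
    return all(a.isdisjoint(b) for a, b in combinations(triple, 2))

assert all(pairwise_disjoint(t) for t in case3_triples)
print("all Case 3 triples are rainbow")
```

The same check applies verbatim to the triples displayed in Case 4 below.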
\emph{\textbf{Case 4}}: $\pi(w_i)=v_j\ (1\leq j\leq k-1)$ and all the
uncolored edges connect $w_i$ to the subtree of type
$\uppercase\expandafter{\romannumeral2}$. We choose such an edge
$w_iv$ for which the height of $v$ is as small as possible.
Since $T$ is a BFS-tree and the subtree of $w_i$ lies to the left of
the subtree of $v$, we have $h(v)=h(w_i)-1$ or $h(v)=h(w_i)$. We have the
following fact:
\noindent\textbf{Fact 4.} $v$ is already flagged.
If $v\notin A$, then $v$ gets flagged at the very beginning; if $v\in A$,
since $\pi(v)=v_k$ and $\pi(w_i)=v_j\ (1\leq j\leq k-1)$, we have
already dealt with $v$ according to R1, thus $v$ is flagged (note
that $e_v$ may be recolored).
We distinguish three subcases based on the height of $w_i$.
$\ast$ \emph{Subcase 4.1}: $h(w_i)\equiv 0 \pmod 3$
If $h(v)=h(w_i)-1$, by Fact $4$ we know that $v$ is already flagged.
If $e_v$ is not recolored ($c(e_v)=1$), then color $w_iv$ with $5$.
We have $c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{2\}\cup
\{3,6\}\cup \{1,5\}$. If $e_v$ is recolored ($c(e_v)=4$), then color
$w_iv$ with $5$. We have $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{2\}\cup \{3,6\}\cup \{4,5\}$. Now $w_i$ becomes safe.
We flag $w_i$.
If $h(v)=h(w_i)$, by Fact $4$ we know that $v$ is already flagged.
If $e_v$ is not recolored ($c(e_v)=2$), then color $w_iv$ with $5$.
We have $c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{2\}\cup
\{3,6\}\cup \{1,4,5\}$. If $e_v$ is recolored ($c(e_v)=5$), then
color $w_iv$ with $4$. We have $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{2\}\cup \{3,6\}\cup \{4,5\}$. Now $w_i$ becomes safe.
We flag $w_i$.
$\ast$ \emph{Subcase 4.2}: $h(w_i)\equiv 1 \pmod 3$
If $h(v)=h(w_i)-1$, by Fact $4$ we know that $v$ is already flagged.
If $e_v$ is not recolored ($c(e_v)=2$), then color $w_iv$ with $5$.
We have $c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{1\}\cup
\{3,4,6\}\cup \{2,5\}$. If $e_v$ is recolored ($c(e_v)=5$), then
color $w_iv$ with $6$. We have $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{1\}\cup \{2,4\}\cup \{5,6\}$. Now $w_i$ becomes safe.
We flag $w_i$.
If $h(v)=h(w_i)$, by Fact $4$ we know that $v$ is already flagged.
If $e_v$ is not recolored ($c(e_v)=3$), then color $w_iv$ with $6$.
We have $c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{1\}\cup
\{2,4\}\cup \{3,6\}$. If $e_v$ is recolored ($c(e_v)=6$), then color
$w_iv$ with $3$. We have $c(P_1^{w_i})\cup c(P_2^{w_i})\cup
c(P_3^{w_i})=\{1\}\cup \{2,4\}\cup \{3,6\}$. Now $w_i$ becomes safe.
We flag $w_i$.
$\ast$ \emph{Subcase 4.3}: $h(w_i)\equiv 2 \pmod 3$
If $h(v)=h(w_i)-1$, by Fact $4$ we know that $v$ is already flagged.
If $e_v$ is not recolored ($c(e_v)=3$), then color $w_iv$ with $4$
and recolor $e_{w_i}$ with $6$. The parent of $w_i$ is still safe.
Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{1\}\cup \{2,4\}\cup \{5,6\}$. Moreover,
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{6\}\cup
\{1,5\}\cup \{3,4\}$, i.e. $w_i$ is safe. We flag $w_i$. If $e_v$
is recolored ($c(e_v)=6$), then color $w_iv$ with $4$. Now
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{3\}\cup
\{1,5\}\cup \{4,6\}$, i.e. $w_i$ becomes safe. We flag $w_i$.
If $h(v)=h(w_i)$, by Fact $4$ we know that $v$ is already flagged.
If $e_v$ is not recolored ($c(e_v)=1$), then color $w_iv$ with $3$
and recolor $e_{w_i}$ with $6$. The parent of $w_i$ is still safe.
Now $c(P_1^{p(w_i)})\cup c(P_2^{p(w_i)})\cup
c(P_3^{p(w_i)})=\{1\}\cup \{2,4\}\cup \{5,6\}$. Moreover,
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{6\}\cup
\{2,4,5\}\cup \{1,3\}$, i.e. $w_i$ is safe. We flag $w_i$. If $e_v$
is recolored ($c(e_v)=4$), then color $w_iv$ with $6$. Now
$c(P_1^{w_i})\cup c(P_2^{w_i})\cup c(P_3^{w_i})=\{3\}\cup
\{1,5\}\cup \{4,6\}$, i.e. $w_i$ becomes safe. We flag $w_i$.
Then we go to $w_{i+1}$ and repeat the process until all the
vertices in $A$ have been visited. We apply the same procedure to
all the $C_i$'s. If there still exist uncolored edges in
$E[D,\overline{D}]\cup E(G[\overline{D}])$, color them with $1$. We
now have a coloring of all the edges in $E[D,\overline{D}]\cup
E(G[\overline{D}])$ with the six colors $\{1,2,3,4,5,6\}$ such that
all the vertices in $\overline{D}$ are safe.
\subsection{Color the edges in $E(G[D])$}
Let $d:=rx_3(G[D])$. Then we can color the edges in $G[D]$ with $d$
fresh colors from $\{7,8,\ldots, d+6\}$ such that for each triple of
vertices in $D$, there exists a rainbow tree in $G[D]$ connecting
them. Altogether we obtain an edge-coloring $c: E(G)\rightarrow
\{1,2,\ldots,d+6\}$.
\subsection{Prove $c$ is a 3-rainbow coloring}
Next we will prove that this edge-coloring of $G$ is a 3-rainbow
coloring, which yields that $rx_3(G)\leq rx_3(G[D])+6$.
\begin{claim}
Under this coloring, for any three vertices $u,v,w$ in
$\overline{D}$, there exists a rainbow $u-D$ path $P^u$, a rainbow
$v-D$ path $P^v$ and a rainbow $w-D$ path $P^w$ such that $P^u \cup
P^v \cup P^w$ is also rainbow.
\end{claim}
Before giving the proof of Claim 1, let us show how it implies our
result. Let $S=\{u,v,w\}\subseteq V(G)$. If $|S\cap D|=3$, i.e.
$(u,v,w)\in D\times D \times D$, then there is already a rainbow
$S$-tree in $G[D]$. If $|S\cap D|=2$, say $(u,v,w)\in D\times
D\times \overline{D}$, then let $w'$ be the foot of $w$. The rainbow
tree in $G[D]$ connecting $u,v,w'$ together with the edge $ww'$
forms a rainbow $S$-tree. If $|S\cap D|=1$, say $(u,v,w)\in D \times
\overline{D} \times \overline{D}$, then by Claim 1 there exist
a rainbow $v-D$ path $P^v$ and a rainbow $w-D$ path $P^w$ such that
$P^v \cup P^w$ is also rainbow. Let $v'$ and $w'$ denote the
endvertices of $P^v$ and $P^w$ in $D$, respectively. Then the rainbow tree in $G[D]$
connecting $u,v',w'$ together with the paths $P^v$ and $P^w$ forms a
connected rainbow subgraph of $G$, denoted by $H$. Obviously, a
spanning tree of $H$ is a rainbow $S$-tree. If $|S\cap D|=0$, i.e.
$(u,v,w)\in \overline{D} \times \overline{D}\times \overline{D}$,
then by Claim 1 there exist a rainbow $u-D$ path $P^u$, a
rainbow $v-D$ path $P^v$ and a rainbow $w-D$ path $P^w$ such that
$P^u\cup P^v \cup P^w$ is also rainbow. Let $u'$, $v'$ and $w'$
denote the endvertices of $P^u$, $P^v$ and $P^w$ in $D$, respectively. Then
the rainbow tree in $G[D]$ connecting $u',v',w'$ together with the
paths $P^u$, $P^v$ and $P^w$ forms a connected rainbow subgraph of
$G$, denoted by $H'$. Obviously, a spanning tree of $H'$ is a
rainbow $S$-tree. So we come to the conclusion that the
edge-coloring $c$ is a 3-rainbow coloring.
\noindent\emph{Proof of Claim 1}: Any three vertices $u,v,w$ in
$\overline{D}$ are safe under this coloring. That is, there
exist three internally-disjoint super-rainbow $u-D$ paths $P^u_1$,
$P^u_2$, $P^u_3$, three internally-disjoint super-rainbow $v-D$
paths $P^v_1$, $P^v_2$, $P^v_3$ and three internally-disjoint
super-rainbow $w-D$ paths $P^w_1$, $P^w_2$, $P^w_3$. If we can pick
out $P_i^u$, $P_j^v$ and $P_k^w$ $(1\leq i,j,k\leq 3)$ from these
paths such that $P_i^u\cup P_j^v \cup P_k^w$ is also rainbow, we
are done. Unfortunately, in some cases we cannot. For
example, if $c(P_1^u)\cup c(P_2^u) \cup c(P_3^u)= \{1\} \cup \{2,4\}
\cup \{5,6\}$, $c(P_1^v)\cup c(P_2^v) \cup c(P_3^v)= \{1\} \cup
\{2,5\} \cup \{4,6\}$, $c(P_1^w)\cup c(P_2^w) \cup c(P_3^w)= \{1\}
\cup \{2,6\} \cup \{4,5\}$, then one can check that $P_i^u\cup P_j^v
\cup P_k^w$ is not rainbow for all $1\leq i,j,k\leq 3$. We now give
a necessary and sufficient condition under which suitable
$P_i^u$, $P_j^v$ and $P_k^w$ can be picked out. (Note
that $P_1^u$, $P_1^v$ and $P_1^w$ each contain exactly one edge.)
There exist $i,j,k\in\{1,2,3\}$ satisfying $P_i^u\cup P_j^v \cup
P_k^w$ is rainbow if and only if
(C1) $c(P_1^u)$, $c(P_1^v)$, $c(P_1^w)$ are not the same or
(C2) there exist two distinct vertices $x,y\in \{u,v,w\}$ and two
integers $s,t\in\{2,3\}$ ($s$ may equal $t$) such that
$c(P_s^x)\cap c(P_t^y)=\emptyset$.
If (C1) is true, without loss of generality we assume $c(P_1^u)=1$
and $c(P_1^v)=2$. If $c(P_1^w)\in\{3,4,5,6\}$, then $P_1^u\cup
P_1^v \cup P_1^w$ is rainbow; if $c(P_1^w)\in\{1,2\}$, without loss
of generality let $c(P_1^w)=1$. Since $P_1^w\cup P_2^w \cup P_3^w$
is rainbow, both $P_2^w$ and $P_3^w$ contain no edges colored by 1,
and at least one of $P_2^w$ and $P_3^w$ contains no edges colored by
2, say $P_2^w$. Then $P_1^u\cup P_1^v \cup P_2^w$ is rainbow. If
(C2) is true, without loss of generality, we assume that
$c(P_2^u)\cap c(P_2^v)=\emptyset$. If $c(P_1^u)$, $c(P_1^v)$,
$c(P_1^w)$ are not the same, then the assertion holds by (C1);
otherwise, without loss of generality let
$c(P_1^u)=c(P_1^v)=c(P_1^w)=1$. Then $P_2^u$ and $P_2^v$ contain no
edges colored by 1. Bearing in mind that $c(P_2^u)\cap
c(P_2^v)=\emptyset$, we get that $P_1^w\cup P_2^u \cup P_2^v$ is
rainbow. For the other direction, assume that (C1) does not hold; we
show by contradiction that (C2) must hold. Suppose that
$c(P_1^u)=c(P_1^v)=c(P_1^w)$, and for any two distinct vertices
$x,y\in \{u,v,w\}$ and any two integers $s,t\in\{2,3\}$,
$c(P_s^x)\cap c(P_t^y)\neq \emptyset$. Since $P_i^u\cup P_j^v \cup
P_k^w$ is rainbow and the three length-1 paths carry the same color,
at most one of $i,j,k$ can equal 1, say $j,k\in\{2,3\}$. Then by
hypothesis, $c(P_j^v)\cap
c(P_k^w)\neq \emptyset$, a contradiction to the fact that $P_i^u\cup
P_j^v \cup P_k^w$ is rainbow.
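The equivalence just proved is easy to confirm by brute force over the $27$ choices of $(i,j,k)$. The following Python sketch (an illustration only, not part of the proof; \texttt{has\_rainbow} and \texttt{c1\_or\_c2} are names we introduce here) checks it against the counterexample given above:

```python
# Brute-force illustration of the necessary-and-sufficient condition
# (C1)/(C2).  Each vertex is given as a triple (c(P_1), c(P_2), c(P_3))
# of color sets; the three sets of one vertex are pairwise disjoint.
from itertools import product, combinations

def has_rainbow(u, v, w):
    """True iff some choice P_i^u, P_j^v, P_k^w has a rainbow union."""
    return any(len(a | b | c) == len(a) + len(b) + len(c)
               for a, b, c in product(u, v, w))

def c1_or_c2(u, v, w):
    # (C1): the colors of the three length-1 paths are not all the same.
    c1 = len({frozenset(x[0]) for x in (u, v, w)}) > 1
    # (C2): two distinct vertices have disjoint color sets among their
    # second and third paths (0-based indices 1 and 2 here).
    c2 = any(x[s].isdisjoint(y[t])
             for x, y in combinations((u, v, w), 2)
             for s in (1, 2) for t in (1, 2))
    return c1 or c2

# The counterexample from the text: no rainbow combination exists,
# and indeed neither (C1) nor (C2) holds.
u = ({1}, {2, 4}, {5, 6})
v = ({1}, {2, 5}, {4, 6})
w = ({1}, {2, 6}, {4, 5})
print(has_rainbow(u, v, w), c1_or_c2(u, v, w))
```

On the counterexample both functions return \texttt{False}, while on any triple whose length-1 paths carry distinct colors both return \texttt{True}, as (C1) predicts.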
From the above assertion, we can see that the colors of the three
internally disjoint super-rainbow paths connecting a vertex in
$\overline{D}$ to $D$ play a crucial role. We now list all the
possible color sets of these paths under this coloring. For the sake
of brevity, we write $\{1,24,35\}$ for both $c(P_1^u)\cup
c(P_2^u)\cup c(P_3^u)=\{1\}\cup \{2,4\} \cup \{3,5\}$ and
$c(P_1^u)\cup c(P_2^u)\cup c(P_3^u)=\{1\}\cup \{3,5\} \cup \{2,4\}$.
\\
\emph{Class 0}: $\{1,2,3\}$, ~~$\{1,2,34\}$, ~~$\{1,2,36\}$, ~~$\{2,3,14\}$, ~~$\{2,3,15\}$,
~~~~~~~~~~~~$\{1,3,24\}$, ~~$\{1,3,25\}$
\emph{Class 1}: $\{1,24,35\}$, ~~$\{1,36,24\}$, ~~$\{1,36,25\}$, ~~$\{1,24,56\}$, ~~$\{1,36,45\}$,
~~~~~~~~~~~~$\{1,36,245\}$, ~~$\{1,24,356\}$, ~~$\{1,346,25\}$.
\emph{Class 2}: $\{2,36,14\}$, ~~$\{2,14,35\}$, ~~$\{2,14,56\}$, ~~$\{2,36,15\}$, ~~$\{2,36,45\}$,
~~~~~~~~~~~~$\{2,46,35\}$, ~~$\{2,36,145\}$, ~~$\{2,14,356\}$, ~~$\{2,346,15\}$.
\emph{Class 3}: $\{3,15,26\}$, ~~$\{3,25,16\}$, ~~$\{3,15,46\}$, ~~$\{3,25,46\}$, ~~$\{3,15,24\}$,
~~~~~~~~~~~~$\{3,25,14\}$, ~~$\{3,25,146\}$, ~~$\{3,15,246\}$.
\emph{Class 4}: $\{4,36,15\}$, ~~$\{4,36,25\}$, ~~$\{4,36,125\}$.
\emph{Class 5}: $\{5,14,26\}$, ~~$\{5,24,16\}$.
\emph{Class 6}: $\{6,25,34\}$, ~~$\{6,15,34\}$, ~~$\{6,15,24\}$, ~~$\{6,245,13\}$.
For every triple $\{u,v,w\}$ of vertices in $\overline{D}$, if
$c(P_1^u)$, $c(P_1^v)$ and $c(P_1^w)$ are not the same, we are done.
Now suppose $c(P_1^u)=c(P_1^v)=c(P_1^w)$. If there exists a vertex
such that at least two of its three paths have length 1, without
loss of generality we may assume that $c(P_1^u)=c(P_1^v)=c(P_1^w)=1$,
$P_2^u$ has length 1, and $c(P_2^u)=2$. Since $P_1^v\cup P_2^v\cup
P_3^v$ is rainbow, we can find one path, say $P_2^v$, which
contains no edge colored 1 or 2. Then $P_2^u \cup P_2^v \cup
P_1^w$ is rainbow, and again we are done. Thus, to prove Claim 1, it
suffices to check that (C2) holds for every three color sets in
Class $i$ ($1\leq i\leq 6$). Since each class contains at most 9
color sets, this check is a short finite verification, and in every
case the answer is affirmative. This completes the proof of
Claim 1.
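The finite check that closes the proof can also be automated. The sketch below (again an independent re-check, not part of the proof; \texttt{parse} and \texttt{c2\_holds} are our own helper names) transcribes the class lists above and verifies (C2) for every triple of color sets, with repetition, inside each class:

```python
# Exhaustive verification of (C2) for every triple (with repetition) of
# color sets inside each of Classes 1-6, as in the proof of Claim 1.
from itertools import combinations, combinations_with_replacement

def parse(spec):
    """'1,36,245' -> ({1}, {3, 6}, {2, 4, 5})."""
    return tuple(frozenset(int(ch) for ch in part)
                 for part in spec.split(','))

classes = {
    1: ['1,24,35', '1,36,24', '1,36,25', '1,24,56', '1,36,45',
        '1,36,245', '1,24,356', '1,346,25'],
    2: ['2,36,14', '2,14,35', '2,14,56', '2,36,15', '2,36,45',
        '2,46,35', '2,36,145', '2,14,356', '2,346,15'],
    3: ['3,15,26', '3,25,16', '3,15,46', '3,25,46', '3,15,24',
        '3,25,14', '3,25,146', '3,15,246'],
    4: ['4,36,15', '4,36,25', '4,36,125'],
    5: ['5,14,26', '5,24,16'],
    6: ['6,25,34', '6,15,34', '6,15,24', '6,245,13'],
}

def c2_holds(triple):
    """(C2): some two distinct vertices x, y and s, t in {2, 3} with
    c(P_s^x) and c(P_t^y) disjoint (0-based indices 1, 2 here)."""
    return any(x[s].isdisjoint(y[t])
               for x, y in combinations(triple, 2)
               for s in (1, 2) for t in (1, 2))

ok = all(c2_holds((parse(a), parse(b), parse(c)))
         for members in classes.values()
         for a, b, c in combinations_with_replacement(members, 3))
print(ok)
```

Note that individual \emph{pairs} within a class may fail (C2) — e.g. $\{1,24,35\}$ against $\{1,346,25\}$ in Class 1 — but every \emph{triple} contains some pair that satisfies it, which is exactly what the proof needs.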
To conclude this section, we illustrate the tightness of the bound
$rx_3(G)\leq rx_3(G[D])+6$ with the graph in Figure 5. It is
easy to see that $D=\{v_0\}$ is a connected three-way dominating
set. By Theorem \ref{thm3}, $rx_3(G)\leq rx_3(G[D])+6=6$. On the
other hand, we have already proved that $rx_3(G)=6$. So the bound is
tight.
\section{Concluding remarks}
To sum up, for the 3-rainbow index of a graph we can consider the
following three types of strengthened connected dominating sets:
Let $G$ be a connected graph and $D$ be a connected dominating set
of $G$.
(a) if every vertex in $\overline{D}$ is adjacent to at least three
distinct vertices of $D$, then $rx_3(G)\leq rx_3(G[D])+3$ (Theorem
\ref{thm4});
(b) if every vertex in $\overline{D}$ is of degree at least three
and adjacent to at least two distinct vertices of $D$, then
$rx_3(G)\leq rx_3(G[D])+4$ (Theorem \ref{thm2});
(c) if every vertex in $\overline{D}$ is of degree at least three,
then $rx_3(G)\leq rx_3(G[D])+6$ (Theorem \ref{thm3}).
From (a) to (c), the restrictions on the connected dominating set
are loosened, while the additive constant increases. None of the
three bounds is best in general. For example, for the French
windmill graph in Figure 5, (c) is better than (a) and (b), whereas
for a threshold graph with $\delta\geq3$, (a) and (b), which imply
$rx_3(G)\leq5$, are better than (c), which implies $rx_3(G)\leq6$.
Given a connected graph $G$, we can therefore compute the three
upper bounds on the 3-rainbow index of $G$ given by (a), (b) and
(c), respectively (some of them may coincide), and take the smallest.
\end{document} |